\section{Introduction}
Multi-player competitions are a recurrent theme in physics, biology, sociology, and economics, since they model a wide variety of phenomena. We refer to the introduction of \cite{ben2007efficiency} for various examples of models of multi-player competitions related to evolution, social stratification, the business world, and other related fields.
Among all these fields, sports are undoubtedly one of the most prominent examples where modelling competitions is important. This is the case for two reasons: first, accurate and easily accessible data are available; second, sports competitions are typically head-to-head, and so they can be viewed as a natural model of multi-player competitions with binary interactions.
Sports multi-player leagues where the outcome of each game is completely deterministic have been studied for instance by Ben-Naim, Kahng, and Kim~\cite{ben2006dynamics} (see also the references therein and the explanations on how this model is related to urn models). Later, Ben-Naim and Hengartner~\cite{ben2007efficiency} investigated how randomness affects the outcome of multi-player competitions. In their model, the randomness considered is quite simple: they start with $N$ teams ranked from best to worst and assume that in each game the weaker team defeats the stronger team with a fixed probability. They investigate the total number of games needed for the best team to win the championship with high probability.
A more sophisticated model was studied by Bena\"{\i}m, Benjamini, Chen, and Lima~\cite{MR3346459}: they consider a finite connected graph, and place a team at each vertex of the graph. Two
teams are called a \emph{pair} if they share an edge. At discrete times, a match is
played between each pair of teams. In each match, one of the teams defeats the other (and gets a point) with
probability proportional to its current number of points raised to some fixed
power $\alpha>0$. They characterize the limiting behavior of the proportion of points
of the teams.
In this paper we propose a more realistic model for sports multi-player leagues that can be briefly described as follows (for a precise and formal description see \cref{sect:the_model}): we start with $2n$ teams having i.i.d.\ initial strengths. Then we consider a calendar of the league composed of $2n-1$ different days, and we assume that on each day every team plays exactly one match against another team, in such a way that at the end of the calendar every team has played exactly one match against every other team (this coincides with the standard way calendars are built in real football leagues for the first half of the season). Moreover, we assume that on each day of the league, the initial strengths of the teams are modified by independent ergodic processes. Finally, we assume that a team wins against another team with probability given by a chosen function of the strengths of the two teams on the day the match is played.
We prove (see \cref{sect:main_results} for precise statements) a quenched law of large numbers and a quenched central limit theorem for the number of victories of a team according to its initial strength. Here quenched means that the results hold a.s.\ conditioning on the initial strengths of the teams in the league.
\subsection{The model}\label{sect:the_model}
We start by fixing some simple and standard notation.
\bigskip
\textbf{Notation.}
Given $n\in\mathbb{N}=\{1,2,3,\dots\}$, we set $[n]=\{1,2,\dots,n\}$ and $[n]_0=\{0,1,2,\dots,n\}$. We also set $\mathbb{R}_+=\mathbb{R}\cap[0,\infty)$.
We refer to random quantities using \textbf{bold} characters. Convergence in distribution is denoted by $\xrightarrow{d}$, almost sure convergence by $\xrightarrow{a.s.}$, and convergence in probability by $\xrightarrow{P}$.
Given a collection of sets $\mathcal A$, we denote by $\sigma\left(\mathcal A\right)$ and $\lambda\left(\mathcal A\right)$ the $\sigma$-algebra and the monotone class generated by $\mathcal A$, respectively. Given a random variable $\bm X$, we denote by $\sigma\left(\bm X\right)$ the $\sigma$-algebra generated by $\bm X$.
For any real-valued function $f$ we denote by $f^2$ the function such that $f^2(\cdot)=(f(\cdot))^2$.
\bigskip
\noindent We can now proceed with the precise description of our model for multi-player leagues.
\bigskip
\noindent\textbf{The model.} We consider a league of $2n\in2\mathbb{N}$ teams denoted by $\{T_{i}\}_{i\in [2n-1]_0}$ whose \emph{initial random strengths} are denoted by $\{\bm s_{i}\}_{i\in [2n-1]_0}\in \mathbb{R}_+^{2n}$.
In the league every team $T_i$ plays $2n-1$ matches, one against each of the remaining teams $\{T_{j}\}_{j\in [2n-1]_0\setminus\{i\}}$. Note that there are in total $\binom{2n}{2}$ matches in the league. These matches are played on $2n-1$ different days, in such a way that each team plays exactly one match every day.
For all $i\in[2n-1]_0$, the initial strength $\bm s_i$ of the team $T_i$ is modified every day according to a discrete time $\mathbb{R}_+$-valued stochastic process $\bm \xi^i=(\bm \xi^i_j)_{j\in \mathbb{N}}$. More precisely, the strength of the team $T_i$ on the $p$-th day is equal to $\bm s_i\cdot \bm \xi^i_p\in \mathbb{R}_+$.
We now describe the \emph{rules} for determining the winner of a match in the league. We fix a function $f:\mathbb R_+^2\to[0,1]$ that controls the winning probability of a match between two teams given their strengths. When a team with strength $x$ plays a match against another team with strength $y$, its probability of winning the match is equal to $f(x,y)$ and its probability of losing is equal to $1-f(x,y)$ (we exclude the possibility of a draw). Therefore, if the match between the teams $T_i$ and $T_j$ is played on the $p$-th day, then, conditionally on the random variables $\bm s_i, \bm s_j,\bm \xi^i_p,\bm \xi^j_p$, the probability that $T_i$ wins is $f(\bm s_i\cdot \bm \xi^i_p,\bm s_j\cdot \bm \xi^j_p)$.
Moreover, conditionally on the strengths of the teams, the results of different matches are independent.
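The dynamics just described can be sketched in a few lines of code. The following is a minimal illustrative simulation, not part of the formal model: the round-robin calendar is built with the standard circle method, while the function \texttt{f}, the strengths and the tilting values are placeholder inputs supplied by the caller.

```python
import random

def circle_calendar(n2):
    """Round-robin calendar for n2 (even) teams, built with the circle
    method: a list of n2-1 days, each day a list of disjoint pairs, and
    every pair of teams meets exactly once over the whole calendar."""
    teams = list(range(n2))
    days = []
    for _ in range(n2 - 1):
        days.append([(teams[i], teams[n2 - 1 - i]) for i in range(n2 // 2)])
        # keep team 0 fixed and rotate the remaining teams
        teams = [teams[0], teams[-1]] + teams[1:-1]
    return days

def play_league(strengths, xi, f, rng):
    """Number of wins of each team; xi[i][p] is the tilt of team i on day p,
    and f(x, y) is the probability that a team of strength x beats one of
    strength y."""
    wins = [0] * len(strengths)
    for p, day in enumerate(circle_calendar(len(strengths))):
        for i, j in day:
            x = strengths[i] * xi[i][p]
            y = strengths[j] * xi[j][p]
            wins[i if rng.random() < f(x, y) else j] += 1
    return wins
```

For instance, with $f(x,y)=x/(x+y)$, i.i.d.\ uniform strengths and constant tilts, the total number of wins over all teams is $\binom{2n}{2}$, one point per match.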
\subsection{Goal of the paper}
The goal of this work is to study the model defined above when the number $2n$ of teams in the league is large. We want to look at the limiting behavior of the number of wins of a team with initial strength $s\in \mathbb{R}_+$ at the end of the league. More precisely, given $s\in \mathbb{R}_+$, we assume w.l.o.g.\ that the team $T_{0}$ has deterministic initial strength $s$, i.e.\ $\bm s_{0}=s$ a.s., and we set
\begin{equation}
\bm W_n(s)\coloneqq\text{Number of wins of the team }T_{0}\text{ at the end of a league with $2n$ players}.
\end{equation}
We investigate a quenched law of large numbers and a quenched central limit theorem for $\bm W_n(s)$.
\subsection{Our assumptions}
In the following two subsections we state some assumptions on the model.
\subsubsection{Assumptions for the law of large numbers}\label{ass:LLN}
We make the following natural assumptions\footnote{The second hypothesis is not needed to prove our results but it is very natural for the model.} on the function $f:\mathbb R_+^2\to[0,1]\:$:
\begin{itemize}
\item $f(x,y)$ is measurable;
\item $f(x,y)$ is weakly-increasing in the variable $x$ and weakly-decreasing in the variable $y$.
\end{itemize}
Recall also that it is not possible to have a draw, i.e.\ $f(x,y)+f(y,x)=1$, for all $x,y\in \mathbb{R}_+$.
Before describing our additional assumptions on the model, we introduce some further quantities.
Fix a Borel probability measure $\nu$ on $\mathbb{R}_+$; let $\bm \xi=(\bm \xi_\ell)_{\ell\in \mathbb{N}}$ be a discrete time $\mathbb{R}_+$-valued stochastic process such that
\begin{equation}\label{eq:stationarity0}
\bm \xi_\ell\stackrel{d}{=}\nu,\quad\text{for all}\quad \ell\in \mathbb{N},
\end{equation}
and\footnote{This is a weak form of the \emph{stationarity property} for stochastic processes.}
\begin{equation}\label{eq:stationarity}
\left(\bm \xi_\ell,\bm \xi_k\right)\stackrel{d}{=} \left(\bm \xi_{\ell+\tau},\bm \xi_{k+\tau}\right), \quad\text{for all}\quad \ell,k,\tau\in\mathbb{N}.
\end{equation}
We further assume that the process $\bm \xi$ is \emph{weakly-mixing}, that is, for every $A \in \sigma(\bm \xi_1)$ and every collection of sets $B_\ell \in \sigma(\bm \xi_\ell)$, it holds that
\begin{equation}\label{eq:unif_weak_mix1}
\frac{1}{n} \sum_{\ell=1}^n \left|\mathbb{P}(A \cap B_\ell)-\mathbb{P}(A)\mathbb{P}(B_\ell)\right|\xrightarrow{n\to\infty} 0.
\end{equation}
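As a concrete illustration of \cref{eq:unif_weak_mix1} (not needed for the proofs), for a stationary two-state Markov chain the probabilities appearing in the Ces\`aro average can be computed exactly; the transition parameters below are illustrative.

```python
import numpy as np

# Stationary two-state chain with illustrative parameters, started from its
# stationary law nu, so every xi_l has marginal law nu.
pa, pb = 0.7, 0.6
P = np.array([[pa, 1 - pa], [1 - pb, pb]])
nu = np.array([1 - pb, 1 - pa]) / (2 - pa - pb)

def cesaro_average(n):
    """(1/n) * sum_{l=1}^n |P(A and B_l) - P(A) P(B_l)| computed exactly for
    A = {xi_1 = a} and B_l = {xi_l = a}, using
    P(xi_1 = a, xi_l = a) = nu_a * (P^(l-1))_{aa}."""
    total, Pk = 0.0, np.eye(2)   # Pk holds P^(l-1)
    for _ in range(n):
        total += abs(nu[0] * Pk[0, 0] - nu[0] * nu[0])
        Pk = Pk @ P
    return total / n
```

Here the summands decay geometrically, so the Ces\`aro average decays like $1/n$.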
The additional assumptions on our model are the following:
\begin{itemize}
\item For all $i\in[2n-1]_0$, the stochastic processes $\bm \xi^i$ are independent copies of $\bm \xi$.
\item The initial random strengths $\{\bm s_i\}_{i\in[2n-1]}$ of the teams different than $T_0$ are i.i.d.\ random variables on $\mathbb{R}_+$ with distribution $\mu$, for some Borel probability measure $\mu$ on $\mathbb{R}_+$.
\item The initial random strengths $\{\bm s_i\}_{i\in[2n-1]}$ are independent of the processes $\{\bm \xi^i\}_{i\in[2n-1]_0}$ and of the process $\bm \xi$.
\end{itemize}
\subsubsection{Further assumptions for the central limit theorem}\label{ass:CLT}
In order to prove a central limit theorem, we need to make some stronger assumptions. The first assumption concerns the mixing properties of the process $\bm \xi$. For $k \in \mathbb{N}$, we introduce the two $\sigma$-algebras $\mathcal{A}_1^{k} = \sigma \left(\bm \xi_1, \dots, \bm \xi_k \right)$ and $\mathcal{A}_k^{\infty} = \sigma \left(\bm \xi_k, \dots \right)$ and we define for all $n\in\mathbb{N}$,
\begin{equation}\label{eq:def_alpha_n}
\alpha_n = \sup_{\substack{k \in \mathbb{N}\\ A \in \mathcal{A}_1^{k},B \in \mathcal{A}_{k+n}^{\infty}}}
\left| \mathbb{P} \left( A \cap B \right) - \mathbb{P}(A)\mathbb{P}(B) \right|.
\end{equation}
We assume that
\begin{equation}\label{eq:strongly_mix_plus}
\sum_{n=1}^{\infty} \alpha_n < \infty.
\end{equation}
Note that this condition, in particular, implies that the process $\bm \xi$ is \emph{strongly mixing}, that is, $\alpha_n \to 0$ as $n \to \infty$.
Finally, we assume that there exist two sequences $p=p(n)$ and $q=q(n)$ such that:
\begin{itemize}
\item $p\xrightarrow{n\to\infty} +\infty$ and $q\xrightarrow{n\to\infty} +\infty$,
\item $q=o(p)$ and $p=o(n)$ as $n \to \infty$,
\item $ n p^{-1 } \alpha_q =o(1)$,
\item $ \frac{p}{n} \cdot \sum_{j=1}^p j \alpha_j = o(1)$.
\end{itemize}
\begin{rem}
The latter assumption concerning the existence of the sequences $p$ and $q$ is not very restrictive. For example, simply assuming that $\alpha_n = O(\frac{1}{n \log(n)})$ ensures that the four conditions are satisfied for $p=\frac{\sqrt n}{\log \log n}$ and $q=\frac{\sqrt n}{(\log \log n)^2}$. Indeed, in this case, the first two conditions are trivial, the fourth one follows by noting that $\sum_{j=1}^p j \alpha_j=O(p)$ thanks to the assumption in \cref{eq:strongly_mix_plus}, and finally the third condition follows by standard computations.
\end{rem}
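The arithmetic in the remark can be checked numerically. The sketch below takes the illustrative rate $\alpha_n = 1/(n\log n)$ and evaluates the third condition $n p^{-1}\alpha_q$ along a growing sequence of values of $n$; the quantity decreases, though very slowly, in accordance with the asymptotics of order $(\log\log n)^3/\log n$.

```python
import math

# Illustrative mixing rate alpha_n = 1/(n log n) and the sequences from the
# remark: p = sqrt(n)/loglog(n), q = sqrt(n)/(loglog(n))^2.
def third_condition(n):
    """The quantity n * p^{-1} * alpha_q, which must be o(1)."""
    lln = math.log(math.log(n))
    p = math.sqrt(n) / lln
    q = math.sqrt(n) / lln ** 2
    alpha_q = 1.0 / (q * math.log(q))
    return (n / p) * alpha_q

# Evaluated at n = 10^6, 10^9, 10^12, 10^15 the sequence is monotonically
# decreasing (of order (loglog n)^3 / log n).
vals = [third_condition(10.0 ** k) for k in (6, 9, 12, 15)]
```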
\begin{rem}\label{rem:fbkwufobw}
Note also that as soon as $p= O(\sqrt n)$, the fourth condition is immediately verified.
Indeed, $\sum_{j=1}^p j \alpha_j\leq \sqrt p \sum_{j=1}^{\sqrt p} \alpha_j+p \sum_{j=\sqrt p}^{ p} \alpha_j =o(p)$.
\end{rem}
\subsection{Main results}\label{sect:main_results}
\subsubsection{Results for our model of multi-player leagues}
Let $\bm{V},\bm{V'},\bm U, \bm U'$ be four independent random variables such that $\bm{V}\stackrel{d}{=}\bm{V'}\stackrel{d}{=}\nu$ and $\bm U\stackrel{d}{=}\bm U'\stackrel{d}{=}\mu$. Given a deterministic sequence $\vec{s}=(s_i)_{{i\in\mathbb{N}}}\in\mathbb{R}^{\mathbb{N}}_{+}$, denote by $\mathbb{P}_{\vec{s}}$ the law of the random variable $\frac{\bm W_n(s)}{2n}$ when the initial strengths of the teams $(T_i)_{i\in[2n-1]}$ are equal to $\vec{s}=(s_i)_{{i\in[2n-1]}}$, i.e.\ we study $\frac{\bm W_n(s)}{2n}$ on the event
$$\bm s_0=s\quad \text{ and }\quad(\bm s_i)_{{i\in[2n-1]}}=(s_i)_{{i\in[2n-1]}}.$$
\begin{thm}[Quenched law of large numbers]\label{thm:LLN}
Suppose that the assumptions in \cref{ass:LLN} hold. Fix any $s\in\mathbb{R}_+$. For $\mu^{\mathbb{N}}$-almost every sequence $\vec{s}=(s_i)_{{i\in\mathbb{N}}}\in\mathbb{R}^{\mathbb{N}}_{+}$, under $\mathbb{P}_{\vec{s}}$ the following convergence holds
\begin{equation}
\frac{\bm W_n(s)}{2n}\xrightarrow[n\to\infty]{P}\ell(s),
\end{equation}
where
\begin{equation}
\ell(s)=\mathbb{E}\left[f\left(s\cdot\bm{V},\bm U\cdot\bm{V'}\right)\right]=\int_{\mathbb{R}^3_+} f\left(s\cdot v,u\cdot v'\right)d\nu(v)d\nu(v')d\mu(u).
\end{equation}
\end{thm}
We now state our second result.
\begin{thm}[Quenched central limit theorem]\label{thm:CLT}
Suppose that the assumptions in \cref{ass:LLN} and \cref{ass:CLT} hold. Fix any $s\in\mathbb{R}_+$. For $\mu^{\mathbb{N}}$-almost every sequence $\vec{s}=(s_i)_{i\in\mathbb{N}}\in\mathbb{R}^{\mathbb{N}}_{+}$, under $\mathbb{P}_{\vec{s}}$ the following convergence holds
\begin{equation}\label{eq:CLT}
\frac{\bm W_n(s)- \mathbb{E}_{\vec{s}}[\bm W_n(s)] }{\sqrt{2n}}\xrightarrow{d} \bm{\mathcal{N}}\left(0,\sigma(s)^2 + \rho(s)^2\right),
\end{equation}
where, for $F_s(x,y)\coloneqq\mathbb{E}\left[f\left(s\cdot x,y\cdot \bm V'\right)\right]$ and $\tilde F_s \left(x,y\right) \coloneqq F_s\left(x,y \right) - \mathbb{E}[F_s\left(\bm V, y\right)]$,
\begin{equation}\label{eq:variance1_CLT}
\sigma(s)^2=\mathbb{E}\left[F_s(\bm V,\bm U)-\left(F_s(\bm V,\bm U)\right)^2\right]=\ell(s)-\mathbb{E}\left[\left(F_s(\bm V,\bm U)\right)^2\right]
\end{equation}
and
\begin{equation}\label{eq:def_rho_s}
\rho(s)^2= \mathbb{E} \left[ \tilde F_s (\bm V, \bm U)^2 \right] + 2 \cdot \sum_{k=1}^{\infty} \mathbb{E}\left[ \tilde F_s(\bm{\xi}_1, \bm U) \tilde F_s (\bm{\xi}_{1+k}, \bm U') \right],
\end{equation}
the last series being convergent.
\end{thm}
\begin{rem}
The assumption that the initial strengths of the teams $(\bm s_i)_{i\in\mathbb{N}}$ are independent could be relaxed from a theoretical point of view, but we believe that this is a very natural assumption for our model.
\end{rem}
\subsubsection{Originality of our results and analysis of the limiting variance: a quenched CLT for functionals of ergodic processes and i.i.d.\ random variables}\label{sect:litterature_sum_var}
We now comment on the originality of our results and contextualize them in relation to the established literature on sums of dependent and non-identically distributed random variables. Additionally, we give some informal explanations of the two components $\sigma(s)^2$ and $\rho(s)^2$ of the variance of the limiting Gaussian random variable in \cref{eq:CLT}.
We start by noticing that, without loss of generality, we can assume that for every $j\in[2n-1]$, the team $T_{0}$ plays against the team $T_j$ on the $j$-th day of the league. Denote by $W_{j}=W_j(s)$ the event that the team $T_{0}$ wins against the team $T_j$; then $\bm W_n(s)$ can be rewritten as
\begin{equation}
\bm W_n(s)=\sum_{j=1}^{2n-1}\mathds{1}_{W_j}.
\end{equation}
Note that the Bernoulli random variables $(\mathds{1}_{W_j})_{j\in[2n-1]}$ are only independent conditionally on the process $(\bm \xi^{0}_j)_{j\in[2n-1]}$.
In addition, under our assumptions, we have that the conditional parameters of the Bernoulli random variables are given by $\mathbb{P}_{\vec{s}} \left(W_j\middle | \bm \xi^{0}_j\right)= F_s (\bm \xi^{0}_j, s_j )$.
Therefore, under $\mathbb{P}_{\vec{s}}$, the random variable $\bm W_n(s)$ is a sum of Bernoulli random variables that are \emph{neither independent nor identically distributed}. As a consequence, the proofs of the quenched law of large numbers (see \cref{sect:LLN}) and of the quenched central limit theorem (see \cref{sect:CLT}) do not follow from a simple application of classical results in the literature.
We recall that it is quite simple to relax the identically distributed assumption in order to obtain a central limit theorem: the Lindeberg criterion, see for instance \cite[Theorem 27.2]{MR1324786}, gives a sufficient (and almost necessary) criterion for a sum of independent random
variables to converge towards a Gaussian random variable (after suitable renormalization). Relaxing independence is more delicate and there is no universal theory to do it. In the present paper, we combine two well-known theories to obtain our results: the theory for stationary ergodic processes (see for instance \cite{MR74711, MR0148125, MR0322926, MR1176496, MR2325294}) and the theory for $m$-dependent random variables (see for instance \cite{MR26771,MR350815,MR1747098}) and dependency graphs (see for instance \cite{MR681466, MR920273, MR1048950, MR2068873, MR4105789}).
To explain the presence of the two terms in the variance, note that by the law of total variance the variance of $\bm W_n(s)$ splits into the expectation of the conditional variance plus the variance of the conditional expectation, conditioning on the process $\bm \xi^{0}$. The conditional variance of $\mathds{1}_{W_j}$ is given by $\Var_{\vec{s}} \left(\mathds{1}_{W_j}\middle | \bm \xi^{0}_j\right)= F_s (\bm \xi^{0}_j, s_j ) - (F_s (\bm \xi^{0}_j, s_j ))^2$. The term $\sigma(s)^2$ arises as the limit of
$$ \frac{1}{2n} \sum_{j=1}^{2n-1} \Var_{\vec{s}} \left(\mathds{1}_{W_j}\middle | \bm \xi^{0}_j\right),$$
and this is in analogy with the case of sums of independent but not identically distributed Bernoulli random variables. The additional term $\rho(s)^2$, on the contrary, arises from the fluctuations of the conditional parameters of the Bernoulli variables, i.e.\ $\rho(s)^2$ comes from the limit of
\begin{equation}\label{eq:fbewbfw}
\frac{1}{2n} \sum_{j=1}^{2n-1} \Var \left(\mathbb{P}_{\vec{s}} \left(W_j\middle | \bm \xi^{0}_j\right) \right)=\frac{1}{2n} \sum_{j=1}^{2n-1} \Var \left( F_s (\bm \xi^{0}_j, s_j ) \right).
\end{equation}
Note that an additional difficulty arises from the fact that the sums in the last two equations are not independent (but we will show that they are asymptotically independent).
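The decomposition above is an instance of the law of total variance, $\Var(\mathds{1}_{W})=\mathbb{E}[\Var(\mathds{1}_{W}\mid\bm\xi)]+\Var(\mathbb{P}(W\mid\bm\xi))$. Below is a minimal exact check of this identity, with an illustrative two-point law for the tilt and illustrative conditional parameters.

```python
# Law of total variance for a Bernoulli indicator whose parameter F(xi)
# depends on a two-point random tilt xi (all numbers are illustrative).
law_xi = {0.5: 0.4, 2.0: 0.6}        # value of xi -> probability
F = {0.5: 0.3, 2.0: 0.7}             # conditional win probability F(xi)

p_win = sum(F[x] * w for x, w in law_xi.items())                        # E[F(xi)]
var_total = p_win * (1 - p_win)                                         # Var(1_W)
mean_cond_var = sum((F[x] - F[x] ** 2) * w for x, w in law_xi.items())  # sigma^2-type term
var_cond_mean = sum((F[x] - p_win) ** 2 * w for x, w in law_xi.items()) # rho^2-type term
```

Here \texttt{var\_total} equals \texttt{mean\_cond\_var + var\_cond\_mean} up to rounding, mirroring the split into $\sigma(s)^2$ and $\rho(s)^2$; in the theorem, $\rho(s)^2$ additionally contains covariance terms coming from the mixing of $\bm\xi$.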
To study the limit in \cref{eq:fbewbfw} we prove in \cref{sect:CLT} the following general result that we believe to be of independent interest.
\begin{thm}\label{prop:clt_sum_of_the_g}
Suppose that the assumptions in \cref{eq:stationarity0,eq:stationarity}, and in \cref{ass:CLT} hold. Let $g:\mathbb{R}^2_+\to\mathbb{R}_+$ be a bounded, measurable function, and define $\tilde g :\mathbb{R}^2_+\to\mathbb{R}$ by $\tilde g (x,y) \coloneqq g(x,y) - \mathbb{E} \left[ g\left( \bm V, y\right) \right]$. Then, the quantity
\begin{equation}\label{eq:def_of_rho}
\rho_g^2 \coloneqq \mathbb{E} \left[ \tilde g (\bm V, \bm U)^2 \right] + 2 \cdot \sum_{k=1}^{\infty} \mathbb{E}\left[ \tilde g(\bm{\xi}_1, \bm U) \tilde g (\bm{\xi}_{1+k}, \bm U') \right]
\end{equation}
is finite. Moreover, for $\mu^{\mathbb{N}}$-almost every sequence $(s_i)_{{i\in\mathbb{N}}}\in\mathbb{R}^{\mathbb{N}}_{+}$, the following convergence holds
\begin{equation}\label{eq:CLT2}
\frac{\sum_{j=1}^{2n-1}\tilde g\left(\bm \xi_j,s_j\right)}{\sqrt{2n}}\xrightarrow{d} \bm{\mathcal{N}}(0,\rho_g^2).
\end{equation}
\end{thm}
The main difficulty in studying the sum $\sum_{j=1}^{2n-1}g\left(\bm \xi_j,s_j\right)$ is that fixing a deterministic realization $(s_i)_{{i\in\mathbb{N}}}\in\mathbb{R}^{\mathbb{N}}_{+}$ makes the random variables $g\left(\bm \xi_j,s_j\right)$ no longer stationary.
In many of the classical results available in the literature (see the references given above) -- in addition to moment and mixing conditions -- stationarity is assumed, and so we cannot directly apply these results. Exceptions are \cite{MR1492353,MR3257385}, where stationarity is not assumed, but a stronger version\footnote{This is assumption (B3) in \cite{MR3257385}, which is also assumed in \cite{MR1492353}. Since in our case all moments of $g\left(\bm \xi_j,s_j\right)$ are finite (because $g$ is a bounded function), we can take the parameter $\delta$ in \cite{MR3257385} equal to $+\infty$, and thus condition (B3) in \cite{MR3257385} requires that $\sum_{n=1}^{\infty} n^2 \alpha_n <+\infty$.} of our condition in \cref{eq:strongly_mix_plus} is assumed. Therefore, to the best of our knowledge, \cref{prop:clt_sum_of_the_g} does not follow from known results in the literature.
\subsection{Calibration of the parameters of the model and some examples}\label{sect:param_and_examples}
An interesting feature of our model is that the main parameters characterizing the evolution of the league can be statistically calibrated so that the model describes real-life tournaments. These parameters are:
\begin{itemize}
\item The function $f:\mathbb R_+^2\to[0,1]$ that controls the winning probability of the matches.
\item The distribution $\mu$ of the initial strengths $(\bm s_i)_i$ of the teams.
\item The marginal distribution $\nu$ of the tilting process $\bm \xi$.
\end{itemize}
We end this section with two examples. The first one is more theoretical and the second one more related to the statistical calibration.
\begin{exmp}\label{exmp:league}
We assume that:
\begin{itemize}
\item $f(x,y)=\frac{x}{x+y}$;
\item for all $i\in\mathbb{N}$, the initial strengths $\bm s_i$ are uniformly distributed in $[0,1]$, i.e.\ $\mu$ is the Lebesgue measure on $[0,1]$;
\item the tilting process $\bm \xi$ is a Markov chain with state space $\{a,b\}$ for some $a\in(0,1),b\in(1,\infty)$, with transition matrix $\begin{pmatrix}
p_a & 1-p_a \\
1-p_b & p_b
\end{pmatrix}$ for some $p_a,p_b\in(0,1)$. Note that the invariant measure $\nu=(\nu_a,\nu_b)$ is equal to $ \left(\frac{1-p_b}{2-p_a-p_b},\frac{1-p_a}{2-p_a-p_b}\right)$.
\end{itemize}
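The stated formula for the invariant measure can be verified directly; a quick numerical check with illustrative values of $p_a,p_b$:

```python
import numpy as np

# Check that nu = ((1-pb)/(2-pa-pb), (1-pa)/(2-pa-pb)) is invariant for the
# two-state transition matrix of the example (parameters are illustrative).
pa, pb = 0.3, 0.8
P = np.array([[pa, 1 - pa], [1 - pb, pb]])
nu = np.array([1 - pb, 1 - pa]) / (2 - pa - pb)
# nu @ P reproduces nu, and nu is a probability vector.
```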
Under these assumptions, \cref{thm:LLN} guarantees that for any fixed $s\in\mathbb{R}_+$ and for almost every realization $\vec{s}=( s_i)_{{i\in\mathbb{N}}}$ of the random sequence $(\bm s_i)_{{i\in\mathbb{N}}}\in[0,1]^{\mathbb{N}}$, the following convergence holds under $\mathbb{P}_{\vec{s}}$:
\begin{equation}
\frac{\bm W_n(s)}{2n}\xrightarrow[n\to\infty]{P}\ell(s),
\end{equation}
where
\begin{equation}\label{eq:exemp_l_S}
\ell(s)=\int_{\mathbb{R}^3_+} \frac{s v}{s v+u v'}d\nu(v)d\nu(v')d\mu(u)=\sum_{i,j\in\{a,b\}} \frac{s\cdot i}{j}\log\left(1+\frac{j}{s\cdot i}\right)\nu_i\cdot \nu_j.
\end{equation}
In particular if $a=\frac{1}{2}, b=2, p_a=\frac 1 2, p_b=\frac 1 2$ then
\begin{equation}\label{eq:expression_exemp_ls}
\ell(s)=\frac{s}{16}\log\left(\frac{(1+s)^8(4+s)(1+4s)^{16}}{2^{32}\cdot s^{25}}\right),
\end{equation}
whose graph is plotted in \cref{fig:simple_exemp}.
\begin{figure}[htbp]
\centering
\includegraphics[scale=.5]{simple_exemp}
\caption{The graph of the function $\ell(s)$ in \cref{eq:expression_exemp_ls} for $s\in[0,1]$. \label{fig:simple_exemp}}
\end{figure}
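The closed form in \cref{eq:expression_exemp_ls} can be double-checked against the double sum in \cref{eq:exemp_l_S}; a short numerical comparison for the specific parameters $a=\tfrac12$, $b=2$, $p_a=p_b=\tfrac12$ (so $\nu_a=\nu_b=\tfrac12$):

```python
import math

# l(s) as the double sum of the example (states a = 0.5, b = 2 with weights
# nu_a = nu_b = 1/2) versus the closed form derived from it.
def ell_sum(s):
    states, weights = (0.5, 2.0), (0.5, 0.5)
    return sum((s * i / j) * math.log(1 + j / (s * i)) * wi * wj
               for i, wi in zip(states, weights)
               for j, wj in zip(states, weights))

def ell_closed(s):
    return (s / 16) * math.log((1 + s) ** 8 * (4 + s) * (1 + 4 * s) ** 16
                               / (2 ** 32 * s ** 25))
```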
In addition, \cref{thm:CLT} implies that for any $s\in\mathbb{R}_+$ and for almost every realization $\vec{s}=( s_i)_{{i\in\mathbb{N}}}$ of the random sequence $(\bm s_i)_{{i\in\mathbb{N}}}\in[0,1]^{\mathbb{N}}$, the following convergence also holds under $\mathbb{P}_{\vec{s}}$:
\begin{equation}
\frac{\bm W_n(s)- \mathbb{E}_{\vec{s}}[\bm W_n(s)] }{\sqrt{2n}}\xrightarrow{d} \bm{\mathcal{N}}(0,\sigma(s)^2 + \rho(s)^2),
\end{equation}
where $\sigma(s)^2$ and $\rho(s)^2$ can be computed as follows. From \cref{eq:variance1_CLT,eq:def_rho_s} we know that
\begin{equation}
\sigma(s)^2=\ell(s)-\mathbb{E}\left[\left(F_s(\bm V,\bm U)\right)^2\right]
\end{equation}
and that
\begin{equation}
\rho(s)^2= \mathbb{E} \left[ \tilde F_s (\bm V, \bm U)^2 \right] + 2 \cdot \sum_{k=1}^{\infty} \mathbb{E}\left[ \tilde F_s(\bm{\xi}_1, \bm U) \tilde F_s (\bm{\xi}_{1+k}, \bm U') \right].
\end{equation}
Recall that $\ell(s)$ was computed in \cref{eq:exemp_l_S}. Note that $$F_s(x,y)=\mathbb{E}\left[f\left(s\cdot x,y\cdot \bm V'\right)\right]=\sum_{v'\in \{a,b\}}f\left(s\cdot x,y\cdot v'\right)\nu_{v'}$$
and that
$$\tilde F_s(x,y)=F_s(x,y)-\mathbb{E}\left[F_s(\bm V,y)\right]=\sum_{v'\in\{a,b\}}f\left(s\cdot x,y\cdot v'\right)\nu_{v'}-\sum_{(v,v')\in \{a,b\}^2}f\left(s\cdot v,y\cdot v'\right)\nu_{v'}\nu_{v}.$$
Therefore
\begin{equation}
\mathbb{E}\left[\left(F_s(\bm V,\bm U)\right)^2\right]=\sum_{v\in \{a,b\}}\left(\int_0^1\left(\sum_{v'\in\{a,b\}}f\left(s\cdot v,u\cdot v'\right)\nu_{v'}\right)^2du\right)\nu_{v}
\end{equation}
and
\begin{equation}
\mathbb{E}\left[\left(\tilde F_s(\bm V,\bm U)\right)^2\right]=\sum_{w\in \{a,b\}}\left(\int_0^1\left(\sum_{v'\in\{a,b\}}f\left(s\cdot w,u\cdot v'\right)\nu_{v'}-\sum_{(v,v')\in \{a,b\}^2}f\left(s\cdot v,u\cdot v'\right)\nu_{v'}\nu_{v}\right)^2du\right)\nu_{v}.
\end{equation}
Moreover,
\begin{equation}
\sum_{k=1}^{\infty} \mathbb{E}\left[ \tilde F_s(\bm{\xi}_1, \bm U) \tilde F_s (\bm{\xi}_{1+k}, \bm U') \right]= \sum_{k=1}^{\infty} \sum_{(i,j)\in \{a,b\}^2} \mathbb{P}(\bm{\xi}_1=i,\bm{\xi}_{1+k}=j) \int_0^1 \tilde F_s(i, u)\; du \int_0^1 \tilde F_s (j, u') \; du'.
\end{equation}
It remains to compute $\mathbb{P}(\bm{\xi}_1=i,\bm{\xi}_{1+k}=j)$ for $i,j\in \{a,b\}$. Note that
\begin{equation}
\begin{pmatrix}
p_a & 1-p_a \\
1-p_b & p_b
\end{pmatrix}=
\begin{pmatrix}
1 & \frac{1-p_a}{p_b-1} \\
1 & 1
\end{pmatrix}
\begin{pmatrix}
1 & 0 \\
0 & (p_a+p_b-1)
\end{pmatrix}
\begin{pmatrix}
\frac{p_b-1}{p_a+p_b-2} & \frac{p_a-1}{p_a+p_b-2} \\
\frac{1-p_b}{p_a+p_b-2} & \frac{p_b-1}{p_a+p_b-2}
\end{pmatrix}
\eqqcolon SJS^{-1}.
\end{equation}
Therefore,
\begin{align}
&\mathbb{P}(\bm{\xi}_1=a,\bm{\xi}_{1+k}=a)=\frac{p_b-1 + (p_a-1) (p_a + p_b-1)^k}{p_a + p_b-2}\cdot \nu_a,\\
&\mathbb{P}(\bm{\xi}_1=a,\bm{\xi}_{1+k}=b)=\frac{(p_a-1) (1-(p_a + p_b-1)^k)}{p_a + p_b-2}\cdot \nu_a,\\
&\mathbb{P}(\bm{\xi}_1=b,\bm{\xi}_{1+k}=a)=\frac{(p_b-1)(1-(p_a + p_b-1)^k)}{p_a + p_b-2}\cdot \nu_b,\\
&\mathbb{P}(\bm{\xi}_1=b,\bm{\xi}_{1+k}=b)=\frac{p_a-1 + (p_b-1) (p_a + p_b-1)^k}{p_a + p_b-2}\cdot \nu_b.
\end{align}
With some tedious but straightforward computations\footnote{We developed \emph{Mathematica} software to quickly perform such computations for various choices of the function $f(x,y)$ and of the parameters $a,b,p_a,p_b$. The software is available at the following \href{https://drive.google.com/drive/folders/1CXZVpe-HJvtJNNGlThu2J-PO9OHme0KP?usp=sharing}{link}.}, we can explicitly compute $\sigma(s)^2$ and $\rho(s)^2$.
The graphs of the two functions $\sigma(s)^2$ and $\rho(s)^2$ for $s\in[0,1]$ are plotted in \cref{fig:diagram_variance} for three different choices of the parameters $a,b,p_a,p_b$. It is interesting to note that $\sigma(s)^2$ is much larger than $\rho(s)^2$ when $p_a$ and $p_b$ are small, $\sigma(s)^2$ is comparable with $\rho(s)^2$ when $p_a$ and $p_b$ are around 0.9, and $\rho(s)^2$ is much larger than $\sigma(s)^2$ when $p_a$ and $p_b$ are very close to 1.
\begin{figure}[htbp]
\centering
\includegraphics[scale=.4]{diagram_variance2}
\hspace{0.1cm}
\includegraphics[scale=.4]{diagram_variance1}
\hspace{0.1cm}
\includegraphics[scale=.4]{diagram_variance}
\caption{In green the graph of $\sigma(s)^2$. In red the graph of $\rho(s)^2$. In blue the graph of $\sigma(s)^2+\rho(s)^2$.
\textbf{Left:} The parameters of the model are $a=1/2,b=2,p_a=2/5,p_b=2/5$.
\textbf{Middle:} The parameters of the model are $a=1/2,b=2,p_a=92/100,p_b=92/100$.
\textbf{Right:} The parameters of the model are $a=1/2,b=2,p_a=99/100,p_b=99/100$. \label{fig:diagram_variance}}
\end{figure}
\end{exmp}
\begin{exmp}
We collect here some data related to the Italian national basketball league in order to compare real data with our theoretical results. We believe that it would be interesting to develop a more accurate and precise analysis of real data in some future projects.
In \cref{fig:table_basket}, we show the rankings of the last 22 national leagues played with exactly 16 teams (some leagues, like the 2011-12 one, are not included since in those years the league did not consist of 16 teams).
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.95]{table_basket}
\caption{The rankings of the last 22 Italian national basketball leagues played with exactly 16 teams. In the Italian basketball league every team plays two matches against every other team, and every victory gives two points.\label{fig:table_basket}}
\end{figure}
The mean numbers of points over the 22 collected leagues are (from the team ranked first to the team ranked 16th):
\begin{multicols}{4}
\begin{enumerate}
\item 47.45
\item 42.27
\item 40.18
\item 37.59
\item 35.91
\item 34.09
\item 31.73
\item 30.55
\item 29.18
\item 27.77
\item 26.09
\item 24.36
\item 22.00
\item 19.59
\item 17.82
\item 12.41
\end{enumerate}
\end{multicols}
The diagram of this ranking is given in the left-hand side of \cref{fig:diagram_basket}.
\medskip
As mentioned above, an interesting question consists in assessing whether it is possible to describe the behaviour of these leagues using our model. More precisely, we looked for a function $f(x,y)$ and two distributions $\mu$ and $\nu$ such that the graph of $\ell(s)$ for $s\in[0,1]$ approximates well the graph in the left-hand side of \cref{fig:diagram_basket}. We found that, choosing $\mu$ to be the uniform measure on the interval $[0.1,0.999]$, $\nu=0.6\cdot\delta_{0.25}+0.9\cdot\delta_{1.3},$ and
\begin{equation}\label{eq:guess_f}
f(x,y)\coloneqq\frac{g(x)}{g(x)+g(y)},\quad\text{with}\quad g(x)\coloneqq\log \left(1-\min\left\{\frac{x}{1.3},0.999\right\}\right),
\end{equation}
the graph of $\ell(s)$ for $s\in[0.1,0.999]$ is then the one given in the right-hand side of \cref{fig:diagram_basket}. Note that the two graphs have a similar convexity.
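One can check that the guessed function in \cref{eq:guess_f} is a valid winning-probability rule on the relevant range of strengths, i.e.\ that $f(x,y)+f(y,x)=1$ and $f(x,y)\in(0,1)$; a short sanity check on an illustrative grid:

```python
import math

# The guessed rule of the example: g is negative on the relevant range,
# so f(x, y) = g(x) / (g(x) + g(y)) lies in (0, 1).
def g(x):
    return math.log(1 - min(x / 1.3, 0.999))

def f(x, y):
    return g(x) / (g(x) + g(y))

grid = [0.1 + 0.1 * k for k in range(10)]   # strengths from 0.1 to 1.0
```

Note that $g$ is negative and $|g|$ is increasing, so $f$ is weakly increasing in $x$ and weakly decreasing in $y$, as required by the assumptions of \cref{ass:LLN}.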
\begin{figure}[htbp]
\centering
\includegraphics[scale=.55]{diagram_basket}
\hspace{1cm}
\includegraphics[scale=.55]{diagram_guessed}
\caption{\textbf{Left:} The diagram of the mean number of points of the 22 leagues collected in \cref{fig:table_basket}. On the $x$-axis the teams are ranked from the weakest to the strongest; on the $y$-axis the mean number of points are plotted.
\textbf{Right:} The graph of $\ell(s)$ for $s\in[0.1,0.999]$ for the specific $\mu,\nu$ and $f(x,y)$ given in (and before) \cref{eq:guess_f}. \label{fig:diagram_basket}}
\end{figure}
\end{exmp}
\subsection{Open problems}
We collect some open questions and conjectures that we believe might be interesting to investigate in future projects:
\begin{itemize}
\item Conditioning on the initial strengths of the teams, how many times do we need to run the league in order to guarantee that the strongest team a.s.\ wins the league? We point out that a similar question was investigated in \cite{ben2007efficiency} for the model considered there.
\item In the spirit of large deviations results, we believe it would be interesting to study the probability that the weakest team wins the league, again conditioning on the initial strengths of the teams.
\item Another natural question is to investigate the whole final ranking of the league. We conjecture the following.
For a sequence of initial strengths $(s_i)_{i\in [2n-1]_0}$, we denote by $(\tilde T_i)_{i\in [2n-1]_0}$ the sequence of teams reordered according to their initial strengths $(s_i)_{i\in [2n-1]_0}$ (from the weakest to the strongest). Set
\begin{equation}
\tilde{\bm W}_n(i)\coloneqq\text{Number of wins of the team }\tilde T_{i}\text{ at the end of a league with $2n$ players}.
\end{equation}
Let $\tilde{\bm {\mathcal W}}_n:[0,1]\to \mathbb{R}$ denote the piecewise-linear continuous function obtained by interpolating the values $\tilde{\bm W}_n(i)$ at the points $x=i/(2n-1)$, for all $i\in [2n-1]_0$.
Denote by $H_{\mu}(y)$ the cumulative distribution function of $\mu$ and by $H_{\mu}^{-1}(x)$ the generalized inverse distribution function, i.e.\ $H_{\mu}^{-1}(x)=\inf\{y\in \mathbb{R}_{+}: H_{\mu}(y)\geq x \}$.
\begin{conj}
Suppose that the assumptions in \cref{ass:LLN} hold with the additional requirement that the function $f$ is continuous\footnote{The assumption that $f$ is continuous might be relaxed.}. For $\mu^{\mathbb{N}\cup\{0\}}$-almost every sequence $\vec{s}=(s_i)_{{i\in\mathbb{N}\cup\{0\}}}\in\mathbb{R}^{\mathbb{N}\cup\{0\}}_{+}$ and for every choice of the calendar of the league, under $\mathbb{P}_{\vec{s}}$ the following convergence of càdlàg processes holds
\begin{equation}
\frac{\tilde{\bm {\mathcal W}}_n(x)}{2n}\xrightarrow[n\to\infty]{P}\ell(H_{\mu}^{-1}(x)),
\end{equation}
where $\ell(s)$ is defined as in \cref{thm:LLN} by
\begin{equation}
\ell(s)=\mathbb{E}\left[f\left(s\cdot\bm{V},\bm U\cdot\bm{V'}\right)\right]=\int_{\mathbb{R}^3_+} f\left(s\cdot v,u\cdot v'\right)d\nu(v)d\nu(v')d\mu(u).
\end{equation}
\end{conj}
Note that the correlations between the various teams in the league strongly depend on the choice of the calendar, but we believe that this choice does not affect the limiting result in the conjecture above. We refer the reader to \cref{fig:sim_whole_league} for some simulations that support our conjecture.
We also believe that the analysis of the local limit (as defined in \cite{MR4055194}) of the whole ranking is an interesting but challenging question; in this case we believe that the choice of the calendar will affect the limiting object. Here, by local limit we mean the limit of the ranking induced by the $k$ teams -- for every fixed $k\in\mathbb{N}$ -- in a neighborhood (w.r.t.\ the initial strengths) of a distinguished team, selected either uniformly at random or in a deterministic way (say, for instance, the strongest team).
\item As mentioned above, we believe that it would be interesting to carry out a more accurate and precise analysis of real data in order to correctly calibrate the parameters of our model, collected at the beginning of \cref{sect:param_and_examples}.
\end{itemize}
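To complement the simulations of \cref{fig:sim_whole_league}, the following minimal Python sketch simulates one round-robin league and compares the sorted, normalized win counts with $\ell(H_\mu^{-1}(\cdot))$. All model choices here -- the win function $f(x,y)=x/(x+y)$, strengths $\mu=\mathrm{Uniform}[1,2]$ (for which $H_\mu^{-1}(x)=1+x$), and i.i.d.\ daily forms $\nu=\mathrm{Uniform}[1/2,3/2]$ -- are our own illustrative assumptions, not the parameters of \cref{exmp:league}.

```python
import random

def f(x, y):
    # Assumed illustrative win function (Bradley-Terry style).
    return x / (x + y)

def draw_strength(rng):  # U ~ mu = Uniform[1, 2], so H_mu^{-1}(x) = 1 + x
    return rng.uniform(1.0, 2.0)

def draw_form(rng):      # V ~ nu = Uniform[1/2, 3/2], i.i.d. form per match
    return rng.uniform(0.5, 1.5)

def simulate_league(num_teams, rng):
    """One round robin: every pair meets once, with fresh forms per match."""
    strengths = sorted(draw_strength(rng) for _ in range(num_teams))  # weakest first
    wins = [0] * num_teams
    for i in range(num_teams):
        for j in range(i + 1, num_teams):
            p = f(strengths[i] * draw_form(rng), strengths[j] * draw_form(rng))
            if rng.random() < p:
                wins[i] += 1
            else:
                wins[j] += 1
    return strengths, wins

def ell(s, rng, samples=20000):
    """Monte-Carlo estimate of ell(s) = E[f(s V, U V')]."""
    acc = 0.0
    for _ in range(samples):
        acc += f(s * draw_form(rng), draw_strength(rng) * draw_form(rng))
    return acc / samples

rng = random.Random(0)
strengths, wins = simulate_league(300, rng)
# Plotting wins[i] / (num_teams - 1) against i / num_teams and overlaying
# ell(1 + x) reproduces the qualitative shape conjectured above.
```
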
\begin{figure}[htbp]
\centering
\includegraphics[scale=.4]{sim_whole_league2}
\hspace{0.1cm}
\includegraphics[scale=.4]{sim_whole_league1}
\hspace{0.1cm}
\includegraphics[scale=.4]{sim_whole_league3}
\caption{Simulations of the diagrams of $(\tilde{\bm W}_n(i))_{i\in[999]_0}$ in the setting of \cref{exmp:league} for two different choices of $a, b, p_a$ and $p_b$. The values $(\tilde{\bm W}_n(i))_{i\in[999]_0}$ are plotted in blue. The limiting functions $1000\cdot\ell(H_{\mu}^{-1}(x/1000))$, for $x\in[0,1000]$, are plotted in green and red, respectively. \textbf{Left:} In this simulation the parameters are $a=\frac{1}{2}, b=2, p_a=\frac 1 2, p_b=\frac 1 2$.
\textbf{Middle:} In this simulation the parameters are $a=\frac{1}{10}, b=10, p_a=\frac{1}{10}, p_b=\frac{1}{10}$.
\textbf{Right:} The two limiting functions are overlapped in the same diagram to highlight the different slopes.
\label{fig:sim_whole_league}}
\end{figure}
\section{Proof of the law of large numbers}\label{sect:LLN}
The proof of \cref{thm:LLN} follows from the following result via a standard second moment argument.
\begin{prop}\label{prop:first_mom}
We have that
\begin{equation}
\mathbb{E}\left[\frac{\bm W_n(s)}{2n}\middle| (\bm s_i)_i\right]\xrightarrow[n\to\infty]{a.s.} \ell(s) ,
\qquad\text{and}\qquad
\mathbb{E}\left[\left(\frac{\bm W_n(s)}{2n}\right)^2\middle| (\bm s_i)_i\right]\xrightarrow[n\to\infty]{a.s.} \ell(s)^2.
\end{equation}
\end{prop}
The rest of this section is devoted to the proof of \cref{prop:first_mom}.
\medskip
We recall (see \cref{sect:litterature_sum_var}) that we assumed that, for every $j\in[2n-1]$, the team $T_{0}$ plays against the team $T_j$ on the $j$-th day, and we denoted by $W_{j}=W_j(s)$ the event that the team $T_{0}$ wins the match. In particular, $\bm W_n(s)=\sum_{j=1}^{2n-1}\mathds{1}_{W_j}$ and
\begin{equation}\label{eq:prob_win_match}
\mathbb{P}\left(W_j\middle | \bm s_j, \bm \xi^{0}_j,\bm \xi^{j}_j\right)=f(s\cdot \bm \xi^{0}_j,\bm s_j \cdot \bm \xi^{j}_j).
\end{equation}
\begin{proof}[Proof of \cref{prop:first_mom}]
We start with the computations for the first moment. From \cref{eq:prob_win_match} we have that
\begin{equation}\label{eq:evfwryibfweonfpiwe}
\mathbb{E}\left[\bm W_n(s)\middle| (\bm s_i)_i\right]=\sum_{j=1}^{2n-1}\mathbb{P}(W_j| (\bm s_i)_i)=\sum_{j=1}^{2n-1}\mathbb{E}\left[f\left(s\cdot \bm \xi^{0}_j,\bm s_j\cdot \bm \xi^j_j\right)\middle| \bm s_j\right].
\end{equation}
Since for all $j\in [2n-1]$, $\bm \xi^j_j$, $\bm \xi^{0}_j$ and $\bm s_j$ are independent, $\bm \xi^{0}_j\stackrel{d}{=}\bm V$, and $\bm \xi^j_j\stackrel{d}{=}\bm V'$, we have
\begin{equation}
\mathbb{E}\left[f\left(s\cdot \bm \xi^{0}_j,\bm s_j\cdot \bm \xi^j_j\right)\middle| \bm s_j\right]=G_s\left(\bm s_j\right),
\end{equation}
where $G_s(x)=\mathbb{E}\left[f\left(s\cdot \bm V,x\cdot \bm V'\right)\right]$.
By the Law of large numbers, we can conclude that
\begin{equation}\label{eq:evfwuitgwrefbwofnwryibfweonfpiwe}
\mathbb{E}\left[\frac{\bm W_n(s)}{2n}\middle| (\bm s_i)_i\right]=\frac{1}{2n}\sum_{j=1}^{2n-1}G_s\left(\bm s_j\right)
\xrightarrow[n\to\infty]{a.s.} \mathbb{E}\left[G_s\left(\bm U\right)\right]=\ell(s).
\end{equation}
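The identity $\ell(s)=\mathbb{E}\left[G_s(\bm U)\right]$ used in the last display is an instance of the tower property. As a quick sanity check, the following sketch estimates both sides by Monte Carlo, under assumed illustrative choices of $f$, $\mu$ and $\nu$ (ours, not prescribed by the paper).

```python
import random

rng = random.Random(1)

def f(x, y):
    # Assumed illustrative win function.
    return x / (x + y)

def draw_U():  # U ~ mu, here taken to be Uniform[1, 2] (an assumption)
    return rng.uniform(1.0, 2.0)

def draw_V():  # V, V' ~ nu, here taken to be Uniform[1/2, 3/2] (an assumption)
    return rng.uniform(0.5, 1.5)

def G(s, x, m=2000):
    # G_s(x) = E[f(s V, x V')]
    return sum(f(s * draw_V(), x * draw_V()) for _ in range(m)) / m

def ell_direct(s, m=200000):
    # ell(s) = E[f(s V, U V')], estimated in one shot
    return sum(f(s * draw_V(), draw_U() * draw_V()) for _ in range(m)) / m

def ell_tower(s, m_outer=400):
    # ell(s) = E[G_s(U)], estimated by iterated conditioning
    return sum(G(s, draw_U()) for _ in range(m_outer)) / m_outer
```

The two estimators agree up to Monte-Carlo error, as the tower property predicts.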
We now turn to the second moment.
Since for all $i,j\in [2n-1]$ with $i\neq j$, conditionally on $(\bm s_i,\bm s_j, \bm \xi^{0}_i,\bm \xi^{i}_i,\bm \xi^{0}_j,\bm \xi^{j}_j)$ the events $W_i$ and $W_j$ are independent, we have that, using \cref{eq:prob_win_match},
\begin{multline}
\mathbb{E}\left[\bm W_n(s)^2\middle| (\bm s_i)_i\right]=\mathbb{E}\left[\bm W_n(s)\middle| (\bm s_i)_i\right]+\sum_{\substack{\ell,j=1\\ \ell\neq j}}^{2n-1}\mathbb{P}(W_\ell\cap W_j | \bm s_\ell,\bm s_j)\\
=\mathbb{E}\left[\bm W_n(s)\middle| (\bm s_i)_i\right]+\sum_{\substack{\ell,j=1\\ \ell\neq j}}^{2n-1}\mathbb{E}\left[f\left(s\cdot \bm \xi^{0}_\ell,\bm s_\ell\cdot \bm \xi^\ell_\ell\right)f\left(s\cdot \bm \xi^{0}_j,\bm s_j\cdot \bm \xi^j_j\right)\middle| \bm s_\ell,\bm s_j\right].
\end{multline}
For all $\ell,j\in [2n-1]$ with $\ell\neq j$, $\bm s_\ell$, $\bm s_j$, $\bm \xi^\ell_\ell$ and $\bm \xi^j_j$ are mutually independent and independent of $(\bm \xi^{0}_\ell,\bm \xi^{0}_j)$. In addition, $(\bm \xi^{0}_\ell,\bm \xi^{0}_j)\stackrel{d}{=}(\bm \xi_\ell,\bm \xi_j)$ and $\bm \xi^\ell_\ell\stackrel{d}{=}\bm \xi^j_j\stackrel{d}{=}\bm V'$. Thus, we have that
\begin{equation}
\mathbb{E}\left[f\left(s\cdot \bm \xi^{0}_\ell,\bm s_\ell\cdot \bm \xi^\ell_\ell\right)f\left(s\cdot \bm \xi^{0}_j,\bm s_j\cdot \bm \xi^j_j\right)\middle| \bm s_\ell,\bm s_j\right]
=\mathbb{E}\left[F_s\left(\bm \xi_\ell,\bm s_\ell\right)F_s\left(\bm \xi_j,\bm s_j\right)\middle| \bm s_\ell,\bm s_j\right],
\end{equation}
where we recall that $F_s(x,y)=\mathbb{E}\left[f\left(s\cdot x,y\cdot \bm V'\right)\right]$.
Two simple consequences of the computations done for the first moment are that
$$\frac{\mathbb{E}\left[\bm W_n(s)\middle| (\bm s_i)_i\right]}{n^2}\xrightarrow[n\to\infty]{a.s.} 0\quad\text{and}\quad\frac{\mathbb{E}\left[\sum_{j=1}^{2n-1} F_s^2 (\bm \xi_j, \bm s_j)\middle| (\bm s_i)_i\right]}{n^2}\xrightarrow[n\to\infty]{a.s.} 0.$$ Therefore, we can write
\begin{multline}\label{eq:fbnwiruefbeownfw}
\mathbb{E}\left[\left(\frac{\bm W_n(s)}{2n}\right)^2\middle| (\bm s_i)_i\right]=
\mathbb{E}\left[\frac{1}{4n^2}\sum_{\ell,j=1}^{2n-1}F_s\left(\bm \xi_\ell,\bm s_\ell\right) F_s\left(\bm \xi_j,\bm s_j\right)\middle| (\bm s_i)_i\right]+\bm o(1)\\
=
\mathbb{E}\left[\left(\frac{1}{2n}\sum_{j=1}^{2n-1} F_s\left(\bm \xi_j,\bm s_j\right)\right)^2\middle| (\bm s_i)_i\right]+\bm o(1),
\end{multline}
where $\bm o(1)$ denotes a sequence of random variables that a.s.\ converges to zero.
We now need the following result, whose proof is postponed to the end of this section.
\begin{prop} \label{prop:conv_for_bnd_cont_funct}
For all bounded, measurable functions $g:\mathbb{R}^2_+\to\mathbb{R}_+$, we have that
\begin{equation}\label{eq:second_mom_funct}
\mathbb{E}\left[\left(\frac{1}{2n}\sum_{j=1}^{2n-1}g\left(\bm \xi_j,\bm s_j\right)\right)^2\middle| (\bm s_i)_i\right]\xrightarrow[n\to\infty]{a.s.}\mathbb{E}\left[g\left(\bm V,\bm U\right)\right]^2.
\end{equation}
\end{prop}
From \cref{eq:fbnwiruefbeownfw} and the proposition above, we conclude that
\begin{equation}
\mathbb{E}\left[\left(\frac{\bm W_n(s)}{2n}\right)^2\middle| (\bm s_i)_i\right]\xrightarrow[n\to\infty]{a.s.}\mathbb{E}\left[F_s\left(\bm V,\bm U\right)\right]^2=\ell(s)^2.
\end{equation}
This concludes the proof of \cref{prop:first_mom}.
\end{proof}
It remains to prove \cref{prop:conv_for_bnd_cont_funct}. We start with the following preliminary result.
\begin{lem} \label{lem:first_sec_mom_for_ret}
For every quadruple $(A,A',B,B')$ of Borel subsets of $\mathbb{R}_+$, we have that
\begin{equation}
\mathbb{E}\left[\frac{1}{4n^2}\sum_{\ell,j=1}^{2n-1} \mathds{1}_{A \times B}\left(\bm \xi_\ell,\bm s_\ell\right) \mathds{1}_{A' \times B'}\left(\bm \xi_j,\bm s_j\right)\middle| (\bm s_i)_i\right]\xrightarrow[n\to\infty]{a.s.} \mathbb{E}\left[\mathds{1}_{A \times B}(\bm V,\bm U)\right]\mathbb{E}\left[\mathds{1}_{A' \times B'}(\bm V,\bm U)\right]\label{eq:second_mom_ind_ret}.
\end{equation}
\end{lem}
\begin{proof}
Note that since the process $\bm \xi$ is independent of $(\bm s_i)_i$,
\begin{multline}\label{eq:rewriting_the_expression}
\mathbb{E}\left[\frac{1}{4n^2}\sum_{\ell,j=1}^{2n-1} \mathds{1}_{A \times B}\left(\bm \xi_\ell,\bm s_\ell\right) \mathds{1}_{A' \times B'}\left(\bm \xi_j,\bm s_j\right)\middle| (\bm s_i)_i\right]\\
=\frac{1}{2n}\sum_{\ell=1}^{2n-1}\mathds{1}_{B}(\bm s_\ell) \cdot \frac{1}{2n}\sum_{j=1}^{2n-1} \mathbb{P}\left(\bm \xi_\ell\in A,\bm \xi_j\in A'\right)\mathds{1}_{B'}(\bm s_j).
\end{multline}
For all $\ell\in[2n-1]$, we can write
\begin{align}\label{eq:split_sum_prob}
\frac{1}{2n}\sum_{j=1}^{2n-1} \mathbb{P}\left(\bm \xi_\ell\in A,\bm \xi_j\in A'\right)&\mathds{1}_{B'}(\bm s_j)\\
=\mathbb{P}\left(\bm V \in A\right)&\mathbb{P}\left(\bm V \in A'\right)\frac{1}{2n}\sum_{j=1}^{2n-1} \mathds{1}_{B'}(\bm s_j)\\
&+\frac{1}{2n}\sum_{j=1}^{2n-1} \left(\mathbb{P}\left(\bm \xi_\ell\in A,\bm \xi_j\in A'\right)-\mathbb{P}\left(\bm V \in A\right)\mathbb{P}\left(\bm V \in A'\right)\right)\mathds{1}_{B'}(\bm s_j).
\end{align}
First, from the Law of large numbers we have that
\begin{equation}\label{eq:first_lim_real}
\frac{1}{2n}\sum_{j=1}^{2n-1}\mathds{1}_{B'}(\bm s_j)\xrightarrow[n\to\infty]{a.s.}\mu(B').
\end{equation}
Secondly, we show that the second sum in the right-hand side of \cref{eq:split_sum_prob} is negligible. We estimate
\begin{multline}
\left|\frac{1}{2n}\sum_{j=1}^{2n-1} \left(\mathbb{P}\left(\bm \xi_\ell\in A,\bm \xi_j\in A'\right)-\mathbb{P}\left(\bm V \in A\right)\mathbb{P}\left(\bm V \in A'\right)\right)\mathds{1}_{B'}(\bm s_j)\right|\\
\leq
\frac{1}{2n}\sum_{j=1}^{2n-1} \left|\mathbb{P}\left(\bm \xi_\ell\in A,\bm \xi_j\in A'\right)-\mathbb{P}\left(\bm V\in A\right)\mathbb{P}\left(\bm V \in A'\right)\right|\\
=
\frac{1}{2n}\sum_{j=1}^{\ell-1} \left|\mathbb{P}\left(\bm \xi_\ell\in A,\bm \xi_j\in A'\right)-\mathbb{P}\left(\bm V\in A\right)\mathbb{P}\left(\bm V \in A'\right)\right|\\
+\frac{1}{2n}\sum_{j=\ell}^{2n-1} \left|\mathbb{P}\left(\bm \xi_\ell\in A,\bm \xi_j\in A'\right)-\mathbb{P}\left(\bm V\in A\right)\mathbb{P}\left(\bm V \in A'\right)\right|.
\end{multline}
Using the stationarity assumption in \cref{eq:stationarity}, the right-hand side of the equation above can be rewritten as follows
\begin{multline}
\frac{1}{2n}\sum_{j=2}^{\ell} \left|\mathbb{P}\left(\bm \xi_j\in A,\bm \xi_1\in A'\right)-\mathbb{P}\left(\bm V\in A\right)\mathbb{P}\left(\bm V \in A'\right)\right|\\
+\frac{1}{2n}\sum_{j=1}^{2n-\ell} \left|\mathbb{P}\left(\bm \xi_1\in A,\bm \xi_j\in A'\right)-\mathbb{P}\left(\bm V\in A\right)\mathbb{P}\left(\bm V \in A'\right)\right|.
\end{multline}
Therefore, we obtain that
\begin{multline}
\left|\frac{1}{2n}\sum_{j=1}^{2n-1} \left(\mathbb{P}\left(\bm \xi_\ell\in A,\bm \xi_j\in A'\right)-\mathbb{P}\left(\bm V \in A\right)\mathbb{P}\left(\bm V \in A'\right)\right)\mathds{1}_{B'}(\bm s_j)\right|\\
\leq
\frac{1}{2n}\sum_{j=1}^{2n} \left|\mathbb{P}\left(\bm \xi_1\in A',\bm \xi_j\in A\right)-\mathbb{P}\left(\bm V\in A\right)\mathbb{P}\left(\bm V \in A'\right)\right|\\
+\frac{1}{2n}\sum_{j=1}^{2n} \left|\mathbb{P}\left(\bm \xi_1\in A,\bm \xi_j\in A'\right)-\mathbb{P}\left(\bm V\in A\right)\mathbb{P}\left(\bm V \in A'\right)\right|.
\end{multline}
The upper bound above is independent of $\ell$ and tends to zero because the process $\bm \xi$ is weakly-mixing (see \cref{eq:unif_weak_mix1}); we can thus deduce from \cref{eq:split_sum_prob,eq:first_lim_real} that, uniformly for all $\ell\in[2n-1]$,
\begin{equation}\label{eq:ifbuewbfoewnfoiewnfew}
\frac{1}{2n}\sum_{j=1}^{2n-1} \mathbb{P}\left(\bm \xi_\ell\in A,\bm \xi_j\in A'\right)\mathds{1}_{B'}(\bm s_j)
\xrightarrow[n\to\infty]{a.s.}
\mathbb{P}\left(\bm V\in A\right)\mathbb{P}\left(\bm V \in A'\right)\mu(B').
\end{equation}
Hence, from \cref{eq:rewriting_the_expression,eq:ifbuewbfoewnfoiewnfew} and the Law of large numbers, we can conclude that
\begin{multline}
\mathbb{E}\left[\frac{1}{4n^2}\sum_{\ell,j=1}^{2n-1} \mathds{1}_{A \times B}\left(\bm \xi_\ell,\bm s_\ell\right) \mathds{1}_{A' \times B'}\left(\bm \xi_j,\bm s_j\right)\middle| (\bm s_i)_i\right]\\
\xrightarrow[n\to\infty]{a.s.} \mathbb{P}\left(\bm V\in A\right)\mu(B)\mathbb{P}\left(\bm V \in A'\right)\mu(B')
=\mathbb{E}\left[\mathds{1}_{A \times B}(\bm V,\bm U)\right]\mathbb{E}\left[\mathds{1}_{A' \times B'}(\bm V,\bm U)\right].\qedhere
\end{multline}
\end{proof}
We now show that we can extend the result of \cref{lem:first_sec_mom_for_ret} to all Borel subsets of $\mathbb{R}_+^2$, denoted by $\mathcal{B}(\mathbb{R}_+^2)$.
We also denote by $\mathcal{R}(\mathbb{R}_+^2)$ the set of rectangles $A\times B\subseteq\mathbb{R}_+^2$ with $A,B\in\mathcal{B}(\mathbb{R}_+)$.
We recall that we denote by $\sigma\left(\mathcal A\right)$ and $\lambda\left(\mathcal A\right)$ the sigma algebra and the monotone class generated by a collection of sets $\mathcal A$, respectively.
\begin{lem}\label{lem:ind_fnct}
For all $C,C'\in\mathcal{B}(\mathbb{R}_+^2)$, we have that
\begin{equation}
\mathbb{E}\left[\frac{1}{4n^2}\sum_{\ell,j=1}^{2n-1}\mathds{1}_{C}\left(\bm \xi_\ell,\bm s_\ell\right)\mathds{1}_{C'}\left(\bm \xi_j,\bm s_j\right)\middle| ( \bm s_i)_i\right]\xrightarrow[n\to\infty]{a.s.} \mathbb{E}\left[\mathds{1}_{C}(\bm V,\bm U)\right]\mathbb{E}\left[\mathds{1}_{C'}(\bm V,\bm U)\right]\label{eq:second_mom_ind_bor}.
\end{equation}
\end{lem}
\begin{proof}
We first fix a rectangle $A\times B\in\mathcal{R}(\mathbb{R}_+^2)$ and we consider the set
\begin{equation}
\mathcal{A}_{A\times B}\coloneqq \left\{C\in\mathcal{B}(\mathbb{R}_+^2)\middle |\text{Eq. }\eqref{eq:second_mom_ind_bor} \text{ holds with } C'=A\times B \right\}.
\end{equation}
By \cref{lem:first_sec_mom_for_ret}, we have $\mathcal{R}(\mathbb{R}_+^2)\subseteq\mathcal{A}_{A\times B}\subseteq \mathcal{B}(\mathbb{R}_+^2)$. If we show that $\mathcal{A}_{A\times B}$ is a monotone class, then we can conclude that $\mathcal{A}_{A\times B}=\mathcal{B}(\mathbb{R}_+^2)$. Indeed, by the monotone class theorem (note that $\mathcal{R}(\mathbb{R}_+^2)$ is closed under finite intersections), we have that
\begin{equation}\label{eq:class_inclusions}
\mathcal{B}(\mathbb{R}_+^2)= \sigma\left(\mathcal{R}(\mathbb{R}_+^2)\right)=\lambda\left(\mathcal{R}(\mathbb{R}_+^2)\right) \subseteq\mathcal{A}_{A\times B}.
\end{equation}
The equality $\mathcal{A}_{A\times B}=\mathcal{B}(\mathbb{R}_+^2)$ implies that \cref{eq:second_mom_ind_bor} holds for every pair of sets in $\mathcal{B}(\mathbb{R}_+^2)\times \mathcal{R}(\mathbb{R}_+^2)$. Finally, if we also show that for any fixed Borel set $C^*\in\mathcal{B}(\mathbb{R}_+^2)$, the set
\begin{equation}
\mathcal{A}_{C^*}\coloneqq \left\{C'\in\mathcal{B}(\mathbb{R}_+^2)\middle |\text{Eq. }\eqref{eq:second_mom_ind_bor} \text{ holds with } C=C^* \right\}
\end{equation}
is a monotone class, then using again the same arguments that we used in \cref{eq:class_inclusions} (note that $\mathcal{R}(\mathbb{R}_+^2)\subseteq\mathcal{A}_{C^*}$ thanks to the previous step) we can conclude that \cref{eq:second_mom_ind_bor} holds for every pair of sets in $\mathcal{B}(\mathbb{R}_+^2)\times \mathcal{B}(\mathbb{R}_+^2)$, proving the lemma. Therefore, in order to conclude the proof, it is sufficient to show that $\mathcal{A}_{A \times B}$ and $\mathcal{A}_{C^*}$ are monotone classes.
\medskip
We start by proving that $\mathcal{A}_{A\times B}$ is a monotone class:
\begin{itemize}
\item Obviously $\mathbb{R}_+^2\in \mathcal{A}_{A\times B}$.
\item If $C,D\in\mathcal{A}_{A\times B}$ and $C\subseteq D$, then $D\setminus C\in \mathcal{A}_{A\times B}$ because $\mathds{1}_{D\setminus C}=\mathds{1}_{D}-\mathds{1}_{C}$.
\item Let now $(C_m)_{m\in\mathbb{N}}$ be a sequence of sets in $\mathcal{A}_{A\times B}$ such that $C_m \subseteq C_{m+1}$ for all $m\in\mathbb{N}$. We want to show that $C\coloneqq\bigcup_m C_m\in \mathcal{A}_{A\times B}$, i.e.\ that
\begin{equation}\label{eq:ifbueiwbfowebnfoewinf}
\mathbb{E}\left[\frac{1}{4n^2}\sum_{\ell,j=1}^{2n-1}\mathds{1}_{C}\left(\bm \xi_\ell,\bm s_\ell\right)\mathds{1}_{A\times B}\left(\bm \xi_j,\bm s_j\right)\middle| ( \bm s_i)_i\right]
\xrightarrow[n\to\infty]{a.s.}
\mathbb{E}\left[\mathds{1}_{C}(\bm V,\bm U)\right]\mathbb{E}\left[\mathds{1}_{A \times B}(\bm V,\bm U)\right].
\end{equation}
Since $\mathds{1}_{C}=\lim_{m\to \infty}\mathds{1}_{C_m}$, by monotone convergence we have, for all $n\in \mathbb{N}$,
\begin{multline}\label{eq:lim_of_indicator}
\mathbb{E}\left[\frac{1}{4n^2}\sum_{\ell,j=1}^{2n-1}\mathds{1}_{C_m}\left(\bm \xi_\ell,\bm s_\ell\right)\mathds{1}_{A\times B}\left(\bm \xi_j,\bm s_j\right)\middle| ( \bm s_i)_i\right]
\xrightarrow[m\to\infty]{a.s.}\\
\frac{1}{4n^2}\sum_{\ell,j=1}^{2n-1}\mathbb{E}\left[\mathds{1}_{C}\left(\bm \xi_\ell,\bm s_\ell\right)\mathds{1}_{A\times B}\left(\bm \xi_j,\bm s_j\right)\middle| \bm s_\ell,\bm s_j\right].
\end{multline}
We also claim that:
\begin{itemize}
\item[(a)] For all $m\in\mathbb{N}$,
\begin{equation}
\frac{1}{4n^2}\sum_{\ell,j=1}^{2n-1}\mathbb{E}\left[\mathds{1}_{C_m}\left(\bm \xi_\ell,\bm s_\ell\right)\mathds{1}_{A\times B}\left(\bm \xi_j,\bm s_j\right)\middle| \bm s_\ell,\bm s_j\right]\xrightarrow[n\to\infty]{a.s.} \mathbb{E}\left[\mathds{1}_{C_m}(\bm V,\bm U)\right]\mathbb{E}\left[\mathds{1}_{A \times B}(\bm V,\bm U)\right].
\end{equation}
\item[(b)] The convergence in \cref{eq:lim_of_indicator} holds uniformly for all $n\in\mathbb{N}$.
\end{itemize}
Item (a) holds since $C_m \in \mathcal{A}_{A\times B}$.
Item (b) will be proved at the end. Items (a) and (b) allow us to exchange the a.s.\ limits as follows
\begin{align}
\lim_{n\to \infty}\mathbb{E}&\left[\frac{1}{4n^2}\sum_{\ell,j=1}^{2n-1}\mathds{1}_{C}\left(\bm \xi_\ell,\bm s_\ell\right)\mathds{1}_{A\times B}\left(\bm \xi_j,\bm s_j\right)\middle| ( \bm s_i)_i\right]\\
\stackrel{\eqref{eq:lim_of_indicator}}{=} &\lim_{n\to \infty}\lim_{m\to \infty}
\frac{1}{4n^2}\sum_{\ell,j=1}^{2n-1}\mathbb{E}\left[\mathds{1}_{C_m}\left(\bm \xi_\ell,\bm s_\ell\right)\mathds{1}_{A\times B}\left(\bm \xi_j,\bm s_j\right)\middle| \bm s_\ell,\bm s_j\right]\\
= &\lim_{m\to \infty}\lim_{n\to \infty}
\frac{1}{4n^2}\sum_{\ell,j=1}^{2n-1}\mathbb{E}\left[\mathds{1}_{C_m}\left(\bm \xi_\ell,\bm s_\ell\right)\mathds{1}_{A\times B}\left(\bm \xi_j,\bm s_j\right)\middle| \bm s_\ell,\bm s_j\right]\\
=
&\lim_{m\to \infty}\mathbb{E}\left[\mathds{1}_{C_m}(\bm V,\bm U)\right]\mathbb{E}\left[\mathds{1}_{A \times B}(\bm V,\bm U)\right]=\mathbb{E}\left[\mathds{1}_{C}(\bm V,\bm U)\right]\mathbb{E}\left[\mathds{1}_{A \times B}(\bm V,\bm U)\right],
\end{align}
where the last equality follows by monotone convergence. This proves \cref{eq:ifbueiwbfowebnfoewinf} and concludes the proof (up to proving item (b)) that $\mathcal{A}_{A\times B}$ is a monotone class.
\textbf{Proof of item (b). }Since $\mathds{1}_{C\setminus C_m}=\mathds{1}_{C}-\mathds{1}_{C_m}$, it is enough to show that
\begin{equation}
\sup_{n}\mathbb{E}\left[\frac{1}{4n^2}\sum_{\ell,j=1}^{2n-1}\mathds{1}_{C\setminus C_m}\left(\bm \xi_\ell,\bm s_\ell\right)\mathds{1}_{A\times B}\left(\bm \xi_j,\bm s_j\right)\middle| ( \bm s_i)_i\right]\xrightarrow[m\to\infty]{a.s.}0.
\end{equation}
We set $D_m\coloneqq C\setminus C_m$, and define
$$(D_m)^{s}\coloneqq\{x\in \mathbb{R}_+|(x,s)\in D_m\},\quad \text{for all} \quad s\in\mathbb{R},$$
$$\pi_Y(D_m)\coloneqq\{y\in \mathbb{R}_+| \exists x\in \mathbb{R}_+ \text{ s.t. }(x,y)\in D_m\}.$$
Since $\bm \xi_\ell\stackrel{d}{=}\bm V$ for all $\ell\in[2n-1]$, we have that, for all $\ell,j\in[2n-1]$, a.s.
\begin{align}
\mathbb{E}\left[\mathds{1}_{D_m}\left(\bm \xi_\ell,\bm s_\ell\right)\mathds{1}_{A\times B}\left(\bm \xi_j,\bm s_j\right)\middle| \bm s_\ell,\bm s_j\right]
&=\mathbb{P}\left(\bm \xi_\ell\in (D_m)^{\bm s_\ell},\bm \xi_j\in A\middle| \bm s_\ell\right)\mathds{1}_{\pi_Y(D_m)}(\bm s_\ell)\mathds{1}_{B}(\bm s_j)\\
&\leq \mathbb{P}\left(\bm \xi_\ell\in (D_m)^{\bm s_\ell}\middle| \bm s_\ell\right)=\mathbb{P}\left(\bm V\in (D_m)^{\bm s_\ell}\middle| \bm s_\ell\right).\label{eq:bound_for_expect1}
\end{align}
Therefore a.s.
\begin{equation}\label{eq:bnd_for_exp}
\sup_{n}\mathbb{E}\left[\frac{1}{4n^2}\sum_{\ell,j=1}^{2n-1}\mathds{1}_{C\setminus C_m}\left(\bm \xi_\ell,\bm s_\ell\right)\mathds{1}_{A\times B}\left(\bm \xi_j,\bm s_j\right)\middle| ( \bm s_i)_i\right]
\leq\mathbb{P}\left(\bm V\in (C\setminus C_m)^{\bm s_\ell}\middle| \bm s_\ell\right).
\end{equation}
Now the sequence $\mathbb{P}\left(\bm V\in (C\setminus C_m)^{\bm s_\ell}\middle| \bm s_\ell\right)$ is a.s.\ non-increasing in $m$ (because $C_m \subseteq C_{m+1}$) and hence admits an a.s.\ limit. This limit is non-negative and, by monotone convergence, its expectation is the limit of the expectations, which is $0$ since $C=\bigcup_m C_m$. This completes the proof of item (b).
\end{itemize}
It remains to prove that $\mathcal{A}_{C^*}$ is also a monotone class.
The proof is similar to the proof above, replacing the bound in \cref{eq:bound_for_expect1} by
\begin{multline}\label{eq:bound_for_expect2}
\mathbb{E}\left[\mathds{1}_{C^*}\left(\bm \xi_\ell,\bm s_\ell\right)\mathds{1}_{D_m}\left(\bm \xi_j,\bm s_j\right)\middle| \bm s_\ell,\bm s_j\right]\\
=\mathbb{P}\left(\bm \xi_\ell\in (C^*)^{\bm s_\ell},\bm \xi_j\in (D_m)^{\bm s_j}\middle| \bm s_\ell,\bm s_j\right)\mathds{1}_{\pi_Y(C^*)}(\bm s_\ell)\mathds{1}_{\pi_Y(D_m)}(\bm s_j)
\leq \mathbb{P}\left(\bm \xi_j\in (D_m)^{\bm s_j}\middle|\bm s_j\right).
\end{multline}
This completes the proof of the lemma.
\end{proof}
We now generalize the result in \cref{lem:ind_fnct} to all bounded and measurable functions, thereby proving \cref{prop:conv_for_bnd_cont_funct}.
\begin{proof}[Proof of \cref{prop:conv_for_bnd_cont_funct}]
We first assume that $g$ is non-negative; the general case follows by standard arguments.
Fubini's theorem, together with the fact that $g(x,y)=\int_0^{\|g\|_\infty}\mathds{1}_{\left\{z\leq g(x,y)\right\}}dz$, yields
\begin{multline}
\mathbb{E}\left[\left(\frac{1}{2n}\sum_{j=1}^{2n-1}g\left(\bm \xi_j,s_j\right)\right)^2\middle| (\bm s_i)_i\right]
=\mathbb{E}\left[\frac{1}{4n^2}\sum_{\ell,j=1}^{2n-1}\int_0^{\|g\|_\infty}\mathds{1}_{\left\{z\leq g\left(\bm \xi_\ell,s_\ell\right)\right\}}dz \int_0^{\|g\|_\infty}\mathds{1}_{\left\{t\leq g\left(\bm \xi_j,s_j\right)\right\}}dt\middle| (\bm s_i)_i\right]\\
=\int_0^{\|g\|_\infty}\int_0^{\|g\|_\infty}\frac{1}{4n^2}\sum_{\ell,j=1}^{2n-1}\mathbb{E}\left[\mathds{1}_{A(z)}\left(\bm \xi_\ell,\bm s_\ell\right)\mathds{1}_{A(t)}\left(\bm \xi_j,\bm s_j\right)\middle| \bm s_\ell,\bm s_j\right]dz \; dt,
\end{multline}
where $A(s)=\left\{(x,y)\in\mathbb{R}_+^2 \middle | g(x,y)\geq s \right\}$.
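The layer-cake representation $g(x,y)=\int_0^{\|g\|_\infty}\mathds{1}_{\left\{z\leq g(x,y)\right\}}dz$ used above can be checked numerically. In the sketch below, the bounded test function is an arbitrary choice of ours, and the $z$-integral is discretized by a midpoint Riemann sum.

```python
def g(x, y):
    # An arbitrary bounded non-negative test function, with sup norm 1.
    return (x * y) / (1.0 + x * y)

def layer_cake(x, y, steps=200000):
    # Midpoint Riemann sum of z -> 1{z <= g(x, y)} over [0, ||g||_inf].
    sup_g = 1.0
    dz = sup_g / steps
    return sum(dz for k in range(steps) if (k + 0.5) * dz <= g(x, y))
```

The discretization error is at most one slab width, i.e.\ of order $\|g\|_\infty/\texttt{steps}$.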
By \cref{lem:ind_fnct}, for almost every $(z,t)\in\mathbb{R}^2_+$
\begin{equation}
\frac{1}{4n^2}\sum_{\ell,j=1}^{2n-1}\mathbb{E}\left[\mathds{1}_{A(z)}\left(\bm \xi_\ell,\bm s_\ell\right)\mathds{1}_{A(t)}\left(\bm \xi_j,\bm s_j\right)\middle| (\bm s_i)_i\right]\xrightarrow[n\to \infty]{a.s.} \mathbb{E}\left[\mathds{1}_{A(z)}\left(\bm V,\bm U\right)\right]\mathbb{E}\left[\mathds{1}_{A(t)}\left(\bm V,\bm U\right)\right].
\end{equation}
Since the left-hand side is bounded by $1$, we can conclude by dominated convergence that
\begin{equation}
\mathbb{E}\left[\left(\frac{1}{2n}\sum_{j=1}^{2n-1}g\left(\bm \xi_j,s_j\right)\right)^2\middle| (\bm s_i)_i\right]\xrightarrow[n\to \infty]{a.s.}\mathbb{E}\left[g\left(\bm V,\bm U\right)\right]^2,
\end{equation}
completing the proof of the proposition.
\end{proof}
\section{Proof of the central limit theorems}\label{sect:CLT}
In this section, we start by proving \cref{thm:CLT} using \cref{prop:clt_sum_of_the_g} and then we prove the latter result.
\begin{proof}[Proof of \cref{thm:CLT}]
We set $\bm X_n\coloneqq\frac{\bm W_n(s)- 2n \cdot \mathbb{E}_{\vec{s}}[\bm W_n(s)]}{\sqrt{2n}}$. In order to prove the convergence in \cref{eq:CLT}, it is enough to show that, for every $t \in \mathbb{R}$,
\begin{equation}\label{eq:goal_proof_MGF}
\mathbb{E}_{\vec{s}} \left[e^{it\bm X_n}\right]\xrightarrow{n\to \infty}e^{-\frac{t^2}{2} \left(\sigma(s)^2+\rho(s)^2 \right)},
\end{equation}
where we recall that $\sigma(s)^2=\mathbb{E}\left[F_s(\bm V,\bm U)-F^2_s(\bm V,\bm U)\right]$ and $\rho(s)^2 = \rho_{F_s}^2$. Note that $\sigma(s)^2+\rho(s)^2$ is finite thanks to \cref{prop:clt_sum_of_the_g} and the fact that $F_s-F_s^2$ is a bounded and measurable function.
Recalling that $\bm W_n(s)=\sum_{j=1}^{2n-1}\mathds{1}_{W_j}$, $W_{j}$ being the event that the team $T_{0}$ wins against the team $T_j$,
and that, conditionally on $\left( \bm \xi^{0}_r \right)_{_{r \in [2n-1]}}$, the results of different matches are independent, we have that
\begin{multline}
\mathbb{E}_{\vec{s}}\left[e^{it\bm X_n}\right]
=e^{-\sqrt{2n} \cdot \mathbb{E}_{\vec{s}}[\bm W_n(s)] \cdot it} \cdot \mathbb{E}_{\vec{s}} \left[e^{\frac{it}{\sqrt{2n}} \sum_{j=1}^{2n-1}\mathds{1}_{W_j}}\right]\\
=e^{-\sqrt{2n} \cdot \mathbb{E}_{\vec{s}}[\bm W_n(s)] \cdot it}\cdot \mathbb{E}_{\vec{s}}\left[\mathbb{E}_{\vec{s}}\left[e^{\frac{it}{\sqrt{2n}}\sum_{j=1}^{2n-1}\mathds{1}_{W_j}}\middle|\left( \bm \xi^{0}_r \right)_{_{r \in [2n-1]}} \right]\right] \\
=e^{-\sqrt{2n} \cdot \mathbb{E}_{\vec{s}}[\bm W_n(s)] \cdot it}\cdot \mathbb{E}_{\vec{s}}\left[ \prod_{j=1}^{2n-1} \mathbb{E}_{\vec{s}}\left[e^{\frac{it}{\sqrt{2n}}\mathds{1}_{W_j}}\middle| \bm \xi^{0}_j \right]\right].
\end{multline}
Since, by assumption,
$
\mathbb{P}_{\vec{s}}\left(W_j\middle | \bm \xi^{0}_j,\bm \xi^{j}_j\right)=f(s\cdot \bm \xi^{0}_j,s_j \cdot \bm \xi^{j}_j)
$
and, for all $j\in [2n-1]$, $\bm \xi^j_j$ is independent of $\bm \xi^{0}_j$, $\bm \xi^{0}_j\stackrel{d}{=}\bm \xi_j$ and $\bm \xi^j_j\stackrel{d}{=}\bm V'$, we have that
\begin{multline}
\mathbb{E}_{\vec{s}}\left[e^{it\bm X_n}\right]
= e^{-\sqrt{2n} \cdot \mathbb{E}_{\vec{s}}[\bm W_n(s)] \cdot it}\cdot \mathbb{E}\left[ \prod_{j=1}^{2n-1} \mathbb{E}\left[ 1 + \left( e^{\frac{it}{\sqrt{2n}}} -1 \right) f(s\cdot \bm \xi^{0}_j,s_j \cdot \bm \xi^{j}_j) \middle| \bm \xi^{0}_j \right]\right] \\
= e^{-\sqrt{2n} \cdot \mathbb{E}_{\vec{s}}[\bm W_n(s)] \cdot it}\cdot \mathbb{E}\left[ \prod_{j=1}^{2n-1} \left(1 + \left( e^{\frac{it}{\sqrt{2n}}} -1 \right) \cdot F_s(\bm \xi_j,s_j) \right)\right],
\end{multline}
where we recall that $F_s(x,y)=\mathbb{E}\left[f\left(s\cdot x,y\cdot \bm V'\right)\right]$.
Rewriting the last term as
\begin{equation}
e^{-\sqrt{2n} \cdot \mathbb{E}_{\vec{s}}[\bm W_n(s)] \cdot it}\cdot \mathbb{E}\left[ e^{ \sum_{j=1}^{2n-1} \log \left(1 + (e^{it /\sqrt{2n}} -1 ) \cdot F_s(\bm \xi_j,s_j ) \right) } \right],
\end{equation}
and observing that
\begin{multline}
\sum_{j=1}^{2n-1} \log \left( 1 + \left( e^{it /\sqrt{2n}} -1 \right) \cdot F_s\left(\bm \xi_j,s_j \right) \right) \\
= \sum_{j=1}^{2n-1} \left(\frac{it}{\sqrt{2n}} \cdot F_s\left(\bm \xi_j,s_j \right) - \frac{t^2}{4n} \left( F_s\left(\bm \xi_j,s_j \right) - F_s^2\left(\bm \xi_j,s_j \right) \right) + O \left(\frac{1}{n\sqrt{n}} \right)\right) \\
= \frac{i t}{\sqrt{2n}} \sum_{j=1}^{2n-1} F_s\left(\bm \xi_j,s_j \right) - \frac{t^2}{2} \cdot \frac{1}{2n} \sum_{j=1}^{2n-1} \left( F_s\left(\bm \xi_j,s_j \right) - F_s^2\left(\bm \xi_j,s_j \right) \right) + O \left(\frac{1}{\sqrt{n}} \right),
\end{multline}
we obtain that the characteristic function $\mathbb{E}_{\vec{s}}\left[e^{it\bm X_n}\right]$ is equal to
\begin{equation}
e^{O\left( \frac{1}{\sqrt n} \right)} \cdot e^{-\frac{t^2}{2}\sigma(s)^2}
\cdot \mathbb{E} \left[ e^{ \frac{it}{\sqrt{2n}} \left( \sum_{j=1}^{2n-1} F_s (\bm \xi_j,s_j ) - 2n \cdot \mathbb{E}_{\vec{s}}[\bm W_n(s)] \right) }
\cdot e^{- \frac{t^2}{2} \cdot \left(\frac{1}{2n} \sum_{j=1}^{2n-1} \left( F_s\left(\bm \xi_j,s_j \right) - F_s^2\left(\bm \xi_j,s_j \right) \right) - \sigma(s)^2 \right) } \right].
\end{equation}
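The two-term expansion of $\log\left(1 + (e^{it/\sqrt{2n}} -1)F\right)$ used above can be verified numerically: for fixed $t$ and $F\in(0,1)$ (the values below are arbitrary, chosen only for illustration), the remainder decays like $n^{-3/2}$.

```python
import cmath

def log_term(t, F, n):
    # Exact value of log(1 + (e^{it/sqrt(2n)} - 1) F).
    u = t / (2 * n) ** 0.5
    return cmath.log(1 + (cmath.exp(1j * u) - 1) * F)

def two_terms(t, F, n):
    # First two terms: (it/sqrt(2n)) F - (t^2/(4n)) (F - F^2).
    u = t / (2 * n) ** 0.5
    return 1j * u * F - (t ** 2 / (4 * n)) * (F - F ** 2)

t, F = 1.3, 0.6  # arbitrary fixed values
errors = {n: abs(log_term(t, F, n) - two_terms(t, F, n))
          for n in (10 ** 3, 10 ** 6)}
```

The error at $n=10^6$ is roughly $(10^3)^{3/2}$ times smaller than at $n=10^3$, consistent with the $O(n^{-3/2})$ remainder.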
Now we set
\begin{align}
&\bm A_n\coloneqq e^{ \frac{it}{\sqrt{2n}} \left( \sum_{j=1}^{2n-1} F_s (\bm \xi_j,s_j ) - 2n \cdot \mathbb{E}_{\vec{s}}[\bm W_n(s)] \right) },\\
&\bm B_n\coloneqq e^{- \frac{t^2}{2} \cdot \left(\frac{1}{2n} \sum_{j=1}^{2n-1} \left( F_s\left(\bm \xi_j,s_j \right) - F_s^2\left(\bm \xi_j,s_j \right) \right) - \sigma(s)^2 \right) },
\end{align}
obtaining that $\mathbb{E}_{\vec{s}}\left[e^{it\bm X_n}\right]= e^{O\left( \frac{1}{\sqrt n} \right)} \cdot e^{-\frac{t^2}{2}\sigma(s)^2}\left(\mathbb{E}\left[\bm A_n\right]-\mathbb{E}\left[\bm A_n(1-\bm B_n)\right]\right)$.
Hence, \cref{eq:goal_proof_MGF} holds if we show that
\begin{enumerate}
\item $\mathbb{E}\left[\bm A_n\right] \to e^{-\frac{t^2}{2} \rho(s)^2} $, \label{eq:clt_delta_small2}
\item $\mathbb{E}\left[\bm A_n(1-\bm B_n)\right] \to 0 $ .\label{eq:clt_delta_big2}
\end{enumerate}
Item 1 follows from \cref{prop:clt_sum_of_the_g}. For Item 2, since $|\bm A_n|=1$, we have that
\begin{equation}
\left|\mathbb{E}\left[\bm A_n(1-\bm B_n)\right] \right|\leq \mathbb{E}\left[|1-\bm B_n|\right].
\end{equation}
Recalling that $\sigma(s)^2=\mathbb{E}\left[F_s(\bm V,\bm U)-F^2_s(\bm V,\bm U)\right]$, and that $\bm \xi_j\stackrel{d}{=}\bm V$ for all $j\in[2n-1]$, we have that
\begin{multline}
\frac{1}{{2n}} \sum_{j=1}^{2n-1} \left( F_s\left(\bm \xi_j,s_j \right) - F_s^2\left(\bm \xi_j,s_j \right) \right) - \sigma(s)^2
=\\
\frac{1}{{2n}} \sum_{j=1}^{2n-1} \left( F_s\left(\bm \xi_j,s_j \right) - F_s^2\left(\bm \xi_j,s_j \right) \right)
-\frac{1}{{2n}} \sum_{j=1}^{2n-1} \mathbb{E}\left[ F_s\left(\bm V,s_j \right) - F_s^2\left( \bm V,s_j \right) \right]\\
+\frac{1}{{2n}} \sum_{j=1}^{2n-1} \mathbb{E}\left[ F_s\left(\bm V,s_j \right) - F_s^2\left( \bm V,s_j \right) \right]
-\mathbb{E}\left[F_s(\bm V,\bm U)-F^2_s(\bm V,\bm U)\right] \xrightarrow{P} 0,
\end{multline}
where for the limit we used once again \cref{prop:clt_sum_of_the_g} and arguments similar to those already used in the proof of \cref{prop:first_mom}.
Since the function $e^{-t^2x/2}$ is continuous and the random variable $\frac{1}{{2n}} \sum_{j=1}^{2n-1} \left( F_s\left(\bm \xi_j,s_j \right) - F_s^2\left(\bm \xi_j,s_j \right) \right) - \sigma(s)^2$ is bounded, we can conclude that $\mathbb{E}\left[|1-\bm B_n|\right]\to 0$. This ends the proof of \cref{thm:CLT}.
\end{proof}
The rest of this section is devoted to the proof of \cref{prop:clt_sum_of_the_g}. We start by stating a lemma that shows how the coefficients $\alpha_n$ defined in \cref{eq:def_alpha_n} control the correlations of the process $\bm \xi$.
\begin{lem}[Theorem 17.2.1 in \cite{MR0322926}]\label{lem:decay_correlations}
Fix $\tau \in \mathbb{N}$ and let $\bm X$ be a random variable measurable w.r.t.\ $\mathcal{A}_1^{k}$ and $\bm Y$ a random variable measurable w.r.t.\ $\mathcal{A}^{\infty}_{k + \tau}$. Assume, in addition, that $| \bm X | < C_1$ almost surely and $| \bm Y | < C_2$ almost surely. Then
\begin{equation}\label{eq:decay_correlations}
\left| \mathbb{E} \left[\bm X \bm Y\right] - \mathbb{E} [\bm X] \mathbb{E}[\bm Y] \right| \leq 4 \cdot C_1 \cdot C_2 \cdot \alpha_{\tau}.
\end{equation}
\end{lem}
We now focus on the behaviour of the random variables $\sum_{j=1}^{2n-1}g\left(\bm \xi_j,s_j\right)$ appearing in the statement of \cref{prop:clt_sum_of_the_g}. It follows directly from \cref{prop:conv_for_bnd_cont_funct} and
Chebyshev's inequality that, for $\mu^{\mathbb{N}}$-almost every sequence $(s_i)_{{i\in\mathbb{N}}}\in\mathbb{R}^{\mathbb{N}}_{+}$,
\begin{equation}
\frac{1}{2n} \sum_{j=1}^{2n-1}g\left(\bm \xi_j,s_j\right) \xrightarrow{P} \mathbb{E} \left[g \left( \bm V, \bm U \right)\right].
\end{equation}
We aim to establish a central limit theorem.
Recalling the definition of the function $\tilde g$ in the statement of \cref{prop:clt_sum_of_the_g}, we note that, for all $j \in \mathbb{N}$,
\begin{equation}
\tilde{g} \left(\bm \xi_j,s_j \right) = g\left(\bm \xi_j,s_j \right) - \mathbb{E}[g\left(\bm V, s_j\right)],
\end{equation}
and so $\mathbb{E} \left[\tilde{g} \left(\bm \xi_j,s_j \right) \right] = 0$.
Define
\begin{equation}
\rho_{g,n}^2 \coloneqq \Var \left( \sum_{j=1}^{2n-1}\tilde{g} \left(\bm \xi_j,s_j\right) \right) = \Var \left( \sum_{j=1}^{2n-1}g\left(\bm \xi_j,s_j\right) \right).
\end{equation}
The following lemma shows that the variance $\rho_{g,n}^2$ is asymptotically linear in $n$ and proves the first part of \cref{prop:clt_sum_of_the_g}.
\begin{lem} \label{lem:variance_is_linear}
The quantity $\rho_g^2$ defined in \cref{eq:def_of_rho} is finite. Moreover, for $\mu^{\mathbb{N}}$-almost every sequence $(s_i)_{{i\in\mathbb{N}}}\in\mathbb{R}^{\mathbb{N}}_{+}$, we have that
\begin{equation} \label{eq:asympt_for_var}
\rho_{g,n}^2 = 2n\cdot \rho_g^2 \cdot (1+o(1)).
\end{equation}
\end{lem}
\begin{proof}
We have that
\begin{multline}
\rho_{g,n}^2 = \Var \left( \sum_{j=1}^{2n-1}\tilde{g} \left(\bm \xi_j,s_j\right) \right)
= \mathbb{E} \left[ \left( \sum_{j=1}^{2n-1}\tilde{g} \left(\bm \xi_j,s_j\right) \right)^2 \right] \\
= \mathbb{E} \left[ \sum_{j=1}^{2n-1}\tilde{g}^2 \left(\bm \xi_j,s_j\right) \right]
+2\cdot \mathbb{E} \left[ \sum_{i=1}^{2n-2} \sum_{j=i+1}^{2n-1} \tilde{g} \left(\bm \xi_i,s_i\right) \tilde{g} \left(\bm \xi_j,s_j\right) \right] \\
= \sum_{j=1}^{2n-1} \mathbb{E} \left[ \tilde{g}^2 \left(\bm \xi_j,s_j\right) \right]
+ 2\cdot \sum_{i=1}^{2n-2} \sum_{k=1}^{2n-1 -i} \mathbb{E} \left[ \tilde{g} \left(\bm \xi_i,s_i\right) \tilde{g} \left(\bm \xi_{i+k},s_{i+k}\right) \right].
\end{multline}
Using arguments similar to those already used in the proof of \cref{prop:first_mom}, we have that
\begin{equation}\label{eq:fibiwfwofnbew}
\sum_{j=1}^{2n-1} \mathbb{E} \left[\tilde{g}^2 \left(\bm \xi_j,s_j\right) \right] = 2n \cdot \mathbb{E} \left[ \tilde{g}^2 (\bm V, \bm U) \right] + o(n) .
\end{equation}
We now show that
\begin{equation}\label{eq:wfejbbfweqibfdwequobfd}
\lim_{n\to \infty} \frac{1}{2n} \sum_{i=1}^{2n-2} \sum_{k=1}^{2n-1 -i} \mathbb{E} \left[ \tilde{g} \left(\bm \xi_i,s_i\right) \tilde{g} \left(\bm \xi_{i+k},s_{i+k}\right) \right] = \sum_{k=1}^{\infty} \mathbb{E}\left[ \tilde{g} (\bm{\xi}_1, \bm U) \tilde{g} (\bm{\xi}_{1+k}, \bm U') \right].
\end{equation}
First we show that the series on the right-hand side of \cref{eq:wfejbbfweqibfdwequobfd} converges absolutely. We start by noting that, by Fubini's theorem and \cref{lem:decay_correlations},
$$ \left| \mathbb{E} \left[ \tilde{g} (\bm{\xi}_1, \bm U) \tilde{g} (\bm{\xi}_{1+k}, \bm U') \right] \right| = \left| \int_{\mathbb{R}^2} \mathbb{E} \left[\tilde{g} (\bm{\xi}_1, x) \tilde{g} (\bm{\xi}_{1+k}, y)\right] d\mu(x) d\mu(y) \right| \leq 4\cdot \alpha_k.$$
Therefore, thanks to the assumption in \cref{eq:strongly_mix_plus}, we have that
\begin{equation}\label{eq:bnd_with_alpha}
\sum_{k=1}^{\infty} \left| \mathbb{E}\left[ \tilde{g} (\bm{\xi}_1, \bm U) \tilde{g} (\bm{\xi}_{1+k}, \bm U') \right] \right|
\leq \sum_{k=1}^{\infty} 4 \cdot \alpha_k < \infty.
\end{equation}
Now we turn to the proof of the limit in \cref{eq:wfejbbfweqibfdwequobfd}. Using the stationarity assumption for the process $\bm\xi$ in \cref{eq:stationarity}, we can write
\begin{equation}
\frac{1}{2n} \sum_{i=1}^{2n-2} \sum_{k=1}^{2n-1 -i} \mathbb{E} \left[ \tilde{g} \left(\bm \xi_i,s_i\right) \tilde{g} \left(\bm \xi_{i+k},s_{i+k}\right) \right]
= \sum_{k=1}^{2n-2} \frac{1}{2n} \sum_{i=1}^{2n-1-k} \mathbb{E} \left[ \tilde{g} \left(\bm \xi_1,s_i\right) \tilde{g} \left(\bm \xi_{1+k},s_{i+k}\right) \right].
\end{equation}
Using a monotone class argument similar to the one used for the law of large numbers, we will show that the right-hand side of the equation above converges to
\begin{equation}
\sum_{k=1}^{\infty} \mathbb{E}\left[ \tilde{g} (\bm{\xi}_1, \bm U) \tilde{g} (\bm{\xi}_{1+k}, \bm U') \right].
\end{equation}
We start, as usual, from indicator functions. We have to prove that for all quadruplets $(A,A',B,B')$ of Borel subsets of $\mathbb{R}_+$, it holds that
\begin{multline}\label{eq:dim_for_ind_fct_centered}
\sum_{k=1}^{2n-2} \frac{1}{2n} \sum_{i=1}^{2n-1-k} \mathbb{E} \left[ \tilde{\mathds{1}}_{A \times B}\left(\bm \xi_1,s_i\right)
\tilde{\mathds{1}}_{A' \times B'}\left(\bm \xi_{1+k},s_{i+k}\right) \right] \\
\to
\sum_{k=1}^{\infty} \mathbb{E}\left[ \tilde {\mathds{1}}_{A \times B}(\bm{\xi}_1, \bm U) \tilde {\mathds{1}}_{A' \times B'} (\bm{\xi}_{1+k}, \bm U') \right] <\infty,
\end{multline}
where for every rectangle $R$ of $\mathbb{R}_+^2$,
\begin{align}
\tilde{\mathds{1}}_{R}\left(x,y\right)\coloneqq\mathds{1}_{R}\left(x,y\right)-\mathbb{E}\left[ \mathds{1}_{R}\left(\bm \xi_1,y\right) \right].
\end{align}
Setting $S_n\coloneqq\sum_{k=1}^{n}\mathbb{E}\left[ \tilde {\mathds{1}}_{A \times B}(\bm{\xi}_1, \bm U) \tilde {\mathds{1}}_{A' \times B'} (\bm{\xi}_{1+k}, \bm U') \right]$, we estimate
\begin{multline}\label{eq:rehbgre0uq-9grg}
\left| S_{\infty} -
\sum_{k=1}^{2n-2} \frac{1}{2n} \sum_{i=1}^{2n-1-k} \mathbb{E} \left[ \tilde{\mathds{1}}_{A \times B}\left(\bm \xi_1,s_i\right) \tilde{\mathds{1}}_{A' \times B'}\left(\bm \xi_{1+k},s_{i+k}\right) \right] \right| \\
\leq \left| S_{\infty}-S_{2n-2}\right|
+ \left| S_{2n-2} -
\sum_{k=1}^{2n-2} \frac{1}{2n} \sum_{i=1}^{2n-2} \mathbb{E} \left[ \tilde{\mathds{1}}_{A \times B}\left(\bm \xi_1,s_i\right) \tilde{\mathds{1}}_{A' \times B'}\left(\bm \xi_{1+k},s_{i+k}\right) \right] \right| \\
+ \left| \sum_{k=1}^{2n-2} \frac{1}{2n} \sum_{i=2n-1-k}^{2n-2} \mathbb{E} \left[ \tilde{\mathds{1}}_{A \times B}\left(\bm \xi_1,s_i\right) \tilde{\mathds{1}}_{A' \times B'}\left(\bm \xi_{1+k},s_{i+k}\right) \right] \right|.
\end{multline}
Clearly, the first term on the right-hand side of the inequality above tends to zero, being the tail of a convergent series (the fact that $S_{\infty}<\infty$ follows via arguments already used for \cref{eq:bnd_with_alpha}). For the last term, we notice that, using \cref{lem:decay_correlations},
\begin{equation}
\left| \mathbb{E} \left[ \tilde{\mathds{1}}_{A \times B}\left(\bm \xi_1,s_i\right) \tilde{\mathds{1}}_{A' \times B'}\left(\bm \xi_{1+k},s_{i+k}\right) \right] \right| \leq 4 \cdot \alpha_k,
\end{equation}
and thus
\begin{equation}
\left| \sum_{k=1}^{2n-2} \frac{1}{2n} \sum_{i=2n-1-k}^{2n-2} \mathbb{E} \left[ \tilde{\mathds{1}}_{A \times B}\left(\bm \xi_1,s_i\right) \tilde{\mathds{1}}_{A' \times B'}\left(\bm \xi_{1+k},s_{i+k}\right) \right] \right| \leq \frac{1}{2n} \sum_{k=1}^{2n-2} 4 k \cdot \alpha_k,
\end{equation}
which converges to $0$ as $n$ goes to infinity by the assumption in \cref{eq:strongly_mix_plus} and the same arguments used in \cref{rem:fbkwufobw}.
It remains to bound the second term. Expanding the products and recalling that $\bm V, \bm V', \bm U, \bm U'$ are independent random variables such that $\bm{\xi}_{1}\stackrel{d}{=}\bm{\xi}_{1+k}\stackrel{d}{=}\bm {V}\stackrel{d}{=}\bm {V}'$ and $\bm {U}\stackrel{d}{=}\bm {U}'\stackrel{d}{=}\mu$, we have that
\begin{multline}
S_{2n-2}=\sum_{k=1}^{2n-2} \mathbb{E}\left[ \tilde {\mathds{1}}_{A \times B}(\bm{\xi}_1, \bm U) \tilde {\mathds{1}}_{A' \times B'} (\bm{\xi}_{1+k}, \bm U') \right]\\
=
\sum_{k=1}^{2n-2} \mathbb{E}\left[ \mathds{1}_{A \times B}(\bm{\xi}_1, \bm U) \mathds{1}_{A' \times B'} (\bm{\xi}_{1+k}, \bm U') \right]-\mathbb{E}\left[ \mathds{1}_{A \times B}(\bm V, \bm U) \mathds{1}_{A' \times B'} (\bm{V}', \bm U') \right]\\
=
\sum_{k=1}^{2n-2} \mu \left(B\right) \cdot \mu \left(B'\right) \cdot
\left(\mathbb{E}\left[ \mathds{1}_{A \times A'}(\bm{\xi}_1,\bm{\xi}_{1+k} )\right]-\mathbb{E}\left[ \mathds{1}_{A \times A'}(\bm V,\bm{V}') \right]\right).
\end{multline}
Similarly, we obtain
\begin{multline}
\mathbb{E} \left[ \tilde{\mathds{1}}_{A \times B}\left(\bm \xi_1,s_i\right)
\tilde{\mathds{1}}_{A' \times B'}\left(\bm \xi_{1+k},s_{i+k}\right) \right]\\
=
\mathbb{E} \left[ \mathds{1}_{A \times B}\left(\bm \xi_1,s_i\right) \mathds{1}_{A' \times B'}\left(\bm \xi_{1+k},s_{i+k}\right)\right]-\mathbb{E} \left[ \mathds{1}_{A \times B}\left( \bm V,s_i\right)\right]\mathbb{E}\left[ \mathds{1}_{A' \times B'}\left(\bm V',s_{i+k}\right)\right]\\
= \mathds{1}_{B \times B'}(s_i,s_{i+k})\cdot \left(\mathbb{E} \left[ \mathds{1}_{A \times A'}\left(\bm \xi_1,\bm \xi_{1+k}\right)\right]-\mathbb{E} \left[ \mathds{1}_{A \times A'}\left( \bm V,\bm V'\right)\right]\right).
\end{multline}
Therefore the second term in the right-hand side of \cref{eq:rehbgre0uq-9grg} is bounded by
\begin{equation}\label{eq:erbgorobgegoe}
\sum_{k=1}^{2n-2} \left| \mathbb{E} \left[ \mathds{1}_{A \times A'}\left(\bm \xi_1,\bm \xi_{1+k}\right)\right]-\mathbb{E} \left[ \mathds{1}_{A \times A'}\left( \bm V,\bm V'\right)\right]\right| \cdot \left| \mu (B)\mu(B') -\frac{1}{2n} \sum_{i=1}^{2n-2} \mathds{1}_{B \times B'} (s_i, s_{i+k}) \right|.
\end{equation}
Using \cref{prop:uniform_bound} we have that
$$\sup_{k \in [2n-2]} \left| \mu (B)\mu(B') -\frac{1}{2n} \sum_{i=1}^{2n-2} \mathds{1}_{B \times B'} (\bm s_i, \bm s_{i+k}) \right| \xrightarrow[n\to\infty]{a.s.} 0.$$
In addition, using once again \cref{lem:decay_correlations} and the assumption in \cref{eq:strongly_mix_plus} we have that $$\sum_{k=1}^{2n-2} \left| \mathbb{E} \left[ \mathds{1}_{A \times A'}\left(\bm \xi_1,\bm \xi_{1+k}\right)\right]-\mathbb{E} \left[ \mathds{1}_{A \times A'}\left( \bm V,\bm V'\right)\right]\right|<\infty.$$
The last two equations imply that the bound in \cref{eq:erbgorobgegoe} tends to zero as $n$ tends to infinity for $\mu^{\mathbb{N}}$-almost every sequence $(s_i)_{{i\in\mathbb{N}}}\in\mathbb{R}^{\mathbb{N}}_{+}$, completing the proof of \cref{eq:dim_for_ind_fct_centered}.
In order to conclude the proof of the lemma, it remains to generalize the result in \cref{eq:dim_for_ind_fct_centered} to all bounded and measurable functions. This can be done using the same techniques adopted to prove the law of large numbers, therefore we skip the details.
\end{proof}
We now complete the proof of \cref{prop:clt_sum_of_the_g}.
\begin{proof}[Proof of \cref{prop:clt_sum_of_the_g}]
Recalling that $\tilde g (x,y) \coloneqq g(x,y) - \mathbb{E} \left[ g\left( \bm V, y\right) \right]$ and thanks to \cref{lem:variance_is_linear}, it is enough to show that
\begin{equation}\label{eq:clt_h}
\frac{1}{\rho_{g,n}} \sum_{j=1}^{2n-1}\tilde{g}\left(\bm \xi_j,s_j\right)\xrightarrow{d} \bm{\mathcal{N}}(0, 1).
\end{equation}
The difficulty in establishing this convergence lies in the fact that we are dealing with a sum of random variables that are neither independent nor identically distributed.
We proceed in two steps. First, we apply Bernstein's method, reducing the problem to the study of a sum of ``almost'' independent random variables. More precisely, we use the decay of the correlations for the process $\bm \xi$ to decompose $\sum_{j=1}^{2n-1}\tilde{g}\left(\bm \xi_j,s_j\right)$ into two distinct sums, in such a way that one of them is a sum of ``almost'' independent random variables and the other one is negligible (in a sense that will be specified in due time). After having dealt with the lack of independence, we settle the issue that the random variables are not identically distributed by verifying Lyapounov's condition.
We start with the first step. Recall that we assume the existence of two sequences $p=p(n)$ and $q=q(n)$ such that:
\begin{itemize}
\item $p\xrightarrow{n\to\infty} +\infty$ and $q\xrightarrow{n\to\infty} +\infty$,
\item $q=o(p)$ and $p=o(n)$ as $n \to \infty$,
\item $n p^{-1 } \alpha_q=o(1)$,
\item $ \frac{p}{n} \cdot \sum_{j=1}^p j \alpha_j = o(1)$.
\end{itemize}
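For a concrete instance of these assumptions (not needed for the argument, but useful to keep in mind), suppose the mixing coefficients decay polynomially, say $\alpha_j = O(j^{-a})$ for some $a > 2$. Then the choice
\begin{equation*}
p = \lfloor n^{2/3} \rfloor, \qquad q = \lfloor n^{1/3} \rfloor
\end{equation*}
works: clearly $q = o(p)$ and $p = o(n)$; moreover $n p^{-1} \alpha_q = O\big(n^{(1-a)/3}\big) = o(1)$ since $a > 1$, and $\frac{p}{n} \cdot \sum_{j=1}^p j \alpha_j = O\big(n^{-1/3}\big) = o(1)$, since $\sum_{j=1}^{\infty} j \alpha_j < \infty$ when $a > 2$.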
As said above, we represent the sum $\sum_{j=1}^{2n-1}\tilde{g}\left(\bm \xi_j,s_j\right)$ as a sum of nearly independent random variables (the \emph{big blocks} of size $p$, denoted $\bm \beta_i$ below) alternating with other terms (the \emph{small blocks} of size $q$, denoted $\bm \gamma_i$ below) whose sum is negligible.
We define $k = \lfloor (2n-1)/(p+q) \rfloor$.
We can thus write
\begin{equation}
\sum_{j=1}^{2n-1}\tilde{g}\left(\bm \xi_j,s_j\right) = \sum_{i=0}^{k-1} \bm \beta_i + \sum_{i=0}^k \bm \gamma_i,
\end{equation}
where, for $0\leq i \leq k-1$,
\begin{equation}
\bm \beta_i= \bm \beta_i (\tilde g, n) \coloneqq \sum_{j=ip+iq+1}^{(i+1)p+iq} \tilde g \left(\bm \xi_j,s_j\right), \quad\quad
\bm \gamma_i= \bm \gamma_i (\tilde g, n) \coloneqq \sum_{j=(i+1)p+iq+1}^{(i+1)p+(i+1)q} \tilde g\left(\bm \xi_j,s_j\right),
\end{equation}
and
\begin{equation}
\quad \quad \bm \gamma_k=\bm \gamma_k(\tilde g, n) \coloneqq \sum_{j=kp+kq+1}^{2n-1} \tilde g\left(\bm \xi_j,s_j\right).
\end{equation}
Henceforth, whenever it is clear from the context, we omit the dependence on $\tilde g$ and on $n$ and simply write $\bm \beta_i$ and $\bm \gamma_i$, in order to simplify the notation. Setting $\bm H_n'\coloneqq \frac{1}{\rho_{g,n}} \sum_{i=0}^{k-1} \bm \beta_i$ and $\bm H_n''\coloneqq \frac{1}{\rho_{g,n}} \sum_{i=0}^k \bm \gamma_i $, we can write
\begin{equation}
\frac{1}{\rho_{g,n}}\sum_{j=1}^{2n-1}\tilde{g}\left(\bm \xi_j,s_j\right) = \bm H_n' + \bm H_n''.
\end{equation}
The proof of \cref{eq:clt_h} now consists of two steps: first, we show that $\bm H_n''\xrightarrow{P} 0$; second, we show that the characteristic function of $\bm H_n'$ converges to the characteristic function of a standard Gaussian random variable. We can then conclude using standard arguments.
We start by proving that $\bm H_n'' \xrightarrow{P} 0$. By
Chebyshev's inequality and the fact that $\mathbb{E}[\bm H_n'']=0$, it is enough to show that $\mathbb{E} \left[\left( \bm H_n''\right)^2\right] \to 0$ as $n \to \infty$.
We can rewrite $\mathbb{E} \left[\left( \bm H_n''\right)^2\right]$ as
\begin{multline}
\frac{1 }{\rho_{g,n}^2} \cdot \mathbb{E} \left[\left( \sum_{i=0}^{k} \bm \gamma_i \right)^2\right]
= \frac{1 }{\rho_{g,n}^2}
\left(
\mathbb{E} \left[ \sum_{i=0}^{k-1} \bm \gamma_i^2 \right]
+ \mathbb{E} \left[\bm \gamma_k^2 \right]
+\mathbb{E} \left[ \sum_{ \substack{ i,j=0\\i \neq j} }^{k-1} \bm \gamma_i \bm \gamma_j \right]
+ 2 \mathbb{E} \left[ \sum_{i=0}^{k-1} \bm \gamma_i \bm \gamma_k \right]
\right).
\end{multline}
Note that, by definition of $\bm \gamma_i$ and using \cref{lem:decay_correlations} once again, for $i\neq j$ with $0\leq i,j\leq k-1$, we have the bounds
\begin{equation}\label{eq:ebfgreubfoeqrf}
\left| \mathbb{E} \left[ \bm \gamma_i \bm \gamma_j \right] \right| \leq q^2 \cdot \alpha_{p|i-j|},
\end{equation}
and, for $0\leq i\leq k-1$,
\begin{equation}
\left| \mathbb{E} \left[ \bm \gamma_i \bm \gamma_k \right] \right| \leq q \cdot (p+q) \cdot \alpha_{p(k-i)}.
\end{equation}
Moreover, by the same argument used in \cref{lem:variance_is_linear}, we have that
\begin{equation}
\mathbb{E} \left[ \bm \gamma_i^2 \right] = \rho_g^2 \cdot q \cdot (1+o(1))=O(q)
\end{equation}
and
\begin{equation}
\mathbb{E} \left[ \bm \gamma_k^2 \right] = O(p+q) = O(p).
\end{equation}
Hence, using \cref{lem:variance_is_linear},
\begin{equation}
\frac{1}{\rho_{g,n}^2} \mathbb{E} \left[ \sum_{i=0}^{k-1} \bm \gamma_i^2 \right] = O\left(\frac{kq}{n}\right) = O\left(\frac{q}{p}\right) = o(1)
\end{equation}
and
\begin{equation}
\frac{1 }{\rho_{g,n}^2} \mathbb{E} \left[\bm \gamma_k^2 \right] = o(1).
\end{equation}
Using \cref{eq:ebfgreubfoeqrf}, we also have that
\begin{multline}
\frac{1}{\rho_{g,n}^2} \mathbb{E} \left[ \sum_{ \substack{ i,j=0\\i \neq j} }^{k-1} \bm \gamma_i \bm \gamma_j \right]
\leq
\frac{2}{\rho_{g,n}^2} \sum_{j=0}^{k-1} \sum_{i=j+1}^{k-1} q^2 \cdot \alpha_{p(i-j)}
= \frac{2}{\rho_{g,n}^2} \sum_{j=0}^{k-1} \sum_{m=1}^{k-j} q^2 \cdot \alpha_{pm} \\
\leq \frac{2}{\rho_{g,n}^2}\cdot kq^2 \sum_{m=1}^{k} \alpha_{pm} \leq \frac{2kq^2}{\rho_{g,n}^2\cdot p} \sum_{m=1}^{\infty} \alpha_{m}= o(1),
\end{multline}
where in the last inequality we used that, since $\alpha_n$ is decreasing,
\begin{equation}
\sum_{m=1}^{k} \alpha_{pm} \leq \sum_{m=1}^{k} \frac{1}{p} \cdot \sum_{s=(m-1)p+1}^{mp} \alpha_s \leq \frac{1}{p} \cdot \sum_{s=1}^{\infty} \alpha_s.
\end{equation}
Analogously, we can prove that
\begin{equation}
\frac{2}{\rho_{g,n}^2} \mathbb{E} \left[ \sum_{i=0}^{k-1} \bm \gamma_i \bm \gamma_k \right] = o(1),
\end{equation}
concluding the proof that $\bm H_n'' \xrightarrow{P} 0$.
Now we turn to the study of the limiting distribution of $\bm H_n'$.
We have that, for $t \in \mathbb{R}$,
\begin{equation}
\exp \left\{ it \bm H_n' \right\} = \exp \left\{ \frac{it}{\rho_{g,n}} \sum_{i=0}^{k-1} \bm \beta_i \right\}.
\end{equation}
We now look at
$\exp \left\{ \frac{it}{\rho_{g,n}} \sum_{i=0}^{k-2} \bm \beta_i \right\}$
and
$ \exp \left\{ \frac{it}{\rho_{g,n}} \bm \beta_{k-1} \right\}$.
We have that the first random variable is measurable with respect to $\mathcal A_1^{(k-1)p + (k-2)q}$ and the second one is measurable with respect to $\mathcal A_{(k-1)p + (k-1)q+1}^{\infty}$. So, by \cref{lem:decay_correlations},
\begin{equation}
\left| \mathbb{E} \left[\exp \left\{ \frac{it}{\rho_{g,n}}\sum_{i=0}^{k-1} \bm \beta_i \right\} \right] - \mathbb{E} \left[ \exp \left\{ \frac{it}{\rho_{g,n}}\sum_{i=0}^{k-2} \bm \beta_i \right\}\right] \mathbb{E} \left[ \exp \left\{ \frac{it}{\rho_{g,n}} \bm \beta_{k-1} \right\}\right]\right| \leq 4 \cdot \alpha_q.
\end{equation}
Iterating, we get
\begin{equation}\label{eq:char_function_ofh_converges}
\left| \mathbb{E} \left[\exp \left\{ \frac{it}{\rho_{g,n}}\sum_{i=0}^{k-1} \bm \beta_i \right\} \right] -
\prod_{i=0}^{k-1} \mathbb{E} \left[ \exp \left\{ \frac{it}{\rho_{g,n}} \bm \beta_{i} \right\}\right]\right| \leq 4 \cdot (k-1) \cdot \alpha_q,
\end{equation}
the latter quantity tending to $0$ as $k \to \infty$ thanks to the assumptions on the sequences $p$ and $q$.
The last step of the proof consists in showing that, as $n \to \infty$,
\begin{equation}\label{eq:ifbueiwufbweonfew}
\prod_{i=0}^{k-1} \mathbb{E} \left[ \exp \left\{ \frac{it}{\rho_{g,n}} \bm \beta_{i} \right\}\right] \to e^{ -\frac{t^2}{2} }.
\end{equation}
Consider a collection of independent random variables $\bm X_{n,i}$, $n \in \mathbb{N}, i \in [k-1]_0$, such that $\bm X_{n,i}$ has the same distribution as $\frac{1}{\rho_{g,n}}\bm \beta_i (n)$.
By \cite[Theorem 27.3]{MR1324786}, a sufficient condition ensuring that $\sum_{i=0}^{k-1}\bm X_{n,i}\xrightarrow{d} \bm{\mathcal N}(0,1)$, and hence verifying \cref{eq:ifbueiwufbweonfew},
is the well-known Lyapounov condition:
\begin{equation}\label{eq:fbbfoqehfoiewqhf}
\lim_{n \to \infty} \frac{1 }{\rho_{g,n}^{2+\delta}} \sum_{i=0}^{k-1} \mathbb{E} \left[ |\bm Y_{n,i}|^{2+\delta} \right] = 0, \quad \text{for some} \quad \delta>0,
\end{equation}
where $\bm Y_{n,i} \coloneqq \bm X_{n,i} \cdot \rho_{g,n} \stackrel{d}{=} \sum_{j=ip+iq+1}^{(i+1)p+iq} \tilde{g} \left( \bm \xi_j, s_j \right)$.
The condition is satisfied with $\delta=2$ thanks to the following lemma.
\begin{lem}\label{lem:CLT_bound_fourth_moment}
Under the assumptions of \cref{prop:clt_sum_of_the_g}, we have that for $\mu^{\mathbb{N}}$-almost every sequence $(s_i)_{{i\in\mathbb{N}}}\in\mathbb{R}^{\mathbb{N}}_{+}$, uniformly for all $i \in [k-1]_0$,
\begin{equation}
\mathbb{E} \left[ \left( \bm Y_{n,i} \right)^4 \right] = O \left( p^2 \cdot \sum_{j=1}^{p} j\alpha_j \right).
\end{equation}
\end{lem}
Before proving the lemma above, we explain how it implies the condition in \cref{eq:fbbfoqehfoiewqhf} with $\delta=2$. By \cref{lem:variance_is_linear}, we have that $\rho_{g,n}^2 = 2 n \cdot \rho^2_g \cdot (1+o(1))$, thus
\begin{equation}
\frac{1}{\rho_{g,n}^4 } \sum_{i=0}^{k-1} \mathbb{E} \left[ \bm Y_{n,i}^4 \right]
\leq
C \cdot \frac{k\cdot p^2 \cdot \sum_{j=1}^{p} j\alpha_j}{4 n^2 \cdot \rho^4_g} \to 0,
\end{equation}
where for the limit we used the fact that $ \frac{k\cdot p}{n} \to 1$ and that, by assumption, $ \frac{ p \cdot \sum_{j=1}^{p} j\alpha_j}{n} \to 0$.
\medskip
We conclude the proof of \cref{prop:clt_sum_of_the_g} by proving \cref{lem:CLT_bound_fourth_moment}. Let $A_0 \coloneqq [p]$ and $A_i \coloneqq [(i+1)p+iq] \setminus [ip+iq]$ for $i \geq 1$. Note that $|A_i|=p$ for all $i\geq 0$. We have that
\begin{align}
&\mathbb{E} \left[ \left( \bm Y_{n,i} \right)^4 \right] = \mathbb{E} \left[ \left( \sum_{j=ip+iq+1}^{(i+1)p+iq} \tilde g \left( \bm \xi_j, s_j \right) \right)^4 \right] \\
=&O\Bigg(\sum_{j \in A_i} \mathbb{E} \left[ \tilde g^4 \left( \bm \xi_j, s_j \right) \right]
+ \sum_{\substack {j,k \in A_i \\ j\neq k}} \mathbb{E} \left[ \tilde g ^2 \left( \bm \xi_j, s_j \right) \tilde g ^2 \left( \bm \xi_k, s_k\right) \right]
+ \sum_{\substack{j,k \in A_i\\ j\neq k}} \mathbb{E} \left[ \tilde g^3 \left( \bm \xi_j, s_j \right) \tilde g \left( \bm \xi_k, s_k\right) \right] \\
+ &\sum_{\substack{j,k, l \in A_i \\ j\neq k\neq l}} \mathbb{E} \left[ \tilde g^2 \left( \bm \xi_j, s_j \right) \tilde g \left( \bm \xi_k, s_k\right) \tilde g \left( \bm \xi_l, s_l\right) \right]
+ \sum_{\substack{j,k, l,m \in A_i \\ j\neq k\neq l \neq m}} \mathbb{E} \left[ \tilde g\left( \bm \xi_j, s_j \right) \tilde g \left( \bm \xi_k, s_k\right) \tilde g \left( \bm \xi_l, s_l\right) \tilde g \left( \bm \xi_m, s_m\right) \right]\Bigg).
\end{align}
The fact that $\tilde g$ is bounded and the decay of the correlations will give us some bounds for each of these terms. First of all, since $\tilde g$ is bounded, we have that
$
\sum_{j \in A_i} \mathbb{E} \left[ \tilde g^4 \left( \bm \xi_j, s_j \right) \right] = O(p),
$
$
\sum_{j,k \in A_i, j\neq k} \mathbb{E} \left[ \tilde g^2 \left( \bm \xi_j, s_j \right) \tilde g^2 \left( \bm \xi_k, s_k\right) \right] = O \left(p^2 \right),
$
and
$
\sum_{j,k \in A_i, j\neq k} \mathbb{E} \left[ \tilde g^3 \left( \bm \xi_j, s_j \right) \tilde g \left( \bm \xi_k, s_k\right) \right] = O \left(p^2 \right).
$
We now look at the fourth addend.
We have that
\begin{multline}
\sum_{\substack{j,k, l \in A_i\\ j\neq k\neq l}} \mathbb{E} \left[ \tilde g^2 \left( \bm \xi_j, s_j \right) \tilde g\left( \bm \xi_k, s_k\right) \tilde g \left( \bm \xi_l, s_l\right) \right] \\
=
O \left(
\sum_{\substack{j,k, l \in A_i\\ j< k< l}} \mathbb{E} \left[ \tilde g^2 \left( \bm \xi_j, s_j \right) \tilde g\left( \bm \xi_k, s_k\right) \tilde g \left( \bm \xi_l, s_l\right) \right]
\right)
=
O \left(
\sum_{l \in A_i} \sum_{k = ip+iq+1 }^{l-1} \sum_{j=ip+iq+1}^{k} \alpha_{l-k}
\right) = O(p^2),
\end{multline}
since $\sum_{i=1}^{\infty} \alpha_{i} < + \infty$ by assumption and $|A_i|=p$.
Finally, we estimate the last addend.
We have that
\begin{multline}
\sum_{\substack{j,k, l,m \in A_i\\ j\neq k\neq l \neq m}} \mathbb{E} \left[ \tilde g \left( \bm \xi_j, s_j \right) \tilde g\left( \bm \xi_k, s_k\right) \tilde g \left( \bm \xi_l, s_l\right) \tilde g \left( \bm \xi_m, s_m\right) \right] \\
=
O \left(
\sum_{\substack{j,k, l,m \in A_i\\ j < k < l <m}} \mathbb{E} \left[ \tilde g\left( \bm \xi_j, s_j \right) \tilde g \left( \bm \xi_k, s_k\right) \tilde g \left( \bm \xi_l, s_l\right) \tilde g\left( \bm \xi_m, s_m\right) \right]
\right)
=
O \left(
\sum_{\substack{j,k, l,m \in A_i\\j < k < l <m}} \min \{\alpha_{k-j}, \alpha_{m-l}\}
\right).
\end{multline}
We analyse the last expression; here and below, with a slight abuse of notation, we identify the index set $A_i$ with $[p]$ via the obvious shift. Since the sequence $(\alpha_n)_{n\in\mathbb{N}}$ is decreasing, we see that
\begin{multline}
\sum_{\substack{j,k, l,m \in A_i\\j < k < l <m}} \min \{\alpha_{k-j}, \alpha_{m-l}\}
=
\sum_{j \in A_i} \sum_{x =1}^{p-j} \sum_{\substack{l \in A_i \\ l>j+x}} \sum_{y =1}^{p-l} \min \{\alpha_{x}, \alpha_{y}\} \\
=
\sum_{j \in A_i} \sum_{x =1}^{p-j} \sum_{\substack{l \in A_i \\ l>j+x}} \sum_{y =1}^{x} \alpha_{x}
+
\sum_{j \in A_i} \sum_{x =1}^{p-j} \sum_{\substack{l \in A_i \\ l>j+x}} \sum_{y =x+1}^{p-l} \alpha_{y} .
\end{multline}
Since $|A_i|=p$ we have that
\begin{equation}
\sum_{j \in A_i} \sum_{x =1}^{p-j} \sum_{\substack{l \in A_i \\ l>j+x}} \sum_{y =1}^{x} \alpha_{x}
\leq p^2 \cdot \sum_{x=1}^p x\alpha_{x},
\end{equation}
and that
\begin{equation}
\sum_{j \in A_i} \sum_{x =1}^{p-j} \sum_{\substack{l \in A_i \\ l>j+x}} \sum_{y =x+1}^{p-l} \alpha_{y}
\leq
p^2 \cdot \sum_{y=1}^p y \alpha_{y},
\end{equation}
from which we conclude that
\begin{equation}
\sum_{\substack{j,k, l,m \in A_i\\ j\neq k\neq l \neq m}} \mathbb{E} \left[ \tilde g \left( \bm \xi_j, s_j \right) \tilde g\left( \bm \xi_k, s_k\right) \tilde g \left( \bm \xi_l, s_l\right) \tilde g \left( \bm \xi_m, s_m\right) \right]
=
O \left( p^2 \cdot \sum_{j=1}^p j \alpha_j \right).
\end{equation}
This concludes the proof of \cref{lem:CLT_bound_fourth_moment}, and hence of \cref{prop:clt_sum_of_the_g} as well.
\end{proof}
\section*{Acknowledgments.} This material is based upon work supported by the National
\subjclass[2020]{Primary:
35B40;
Secondary:
35K65, 35Q85.
}
\keywords{Kompaneets equation, Sunyaev--Zeldovich effect, Bose--Einstein condensate, quantum entropy,
LaSalle invariance principle}
\begin{abstract}
The Kompaneets equation governs the dynamics of the photon energy spectrum in certain high-temperature (or low-density) plasmas.
We prove several results concerning the long-time convergence of solutions to Bose--Einstein equilibria
and the failure of photon conservation.
In particular, we show the total photon number can decrease with time
via an outflux of photons at the zero-energy boundary.
The ensuing accumulation of photons at zero energy is analogous to Bose--Einstein condensation.
We provide two conditions that guarantee that photon loss occurs, and show that once loss is initiated then it persists forever.
We prove that as $t\to \infty$, solutions necessarily converge to equilibrium and we characterize the limit in terms of the total photon loss.
Additionally, we provide a few results concerning the behavior of the solution near the zero-energy boundary, establish an Oleinik inequality and a comparison principle, and show that the solution operator is a contraction in $L^1$.
None of these results impose a boundary condition at the zero-energy boundary.
\end{abstract}
\maketitle
\section{Introduction.}
The Kompaneets equation governs the evolution of the photon energy spectrum in high-temperature (or low-density) plasmas that are spatially uniform, isotropic, isothermal, and non-relativistic, and in which the dominant energy-exchange mechanism is Compton scattering.
This equation was first derived by Kompaneets~\cite{Kompaneets57}
and
is now fundamental to modern cosmology and high-energy astrophysics.
It has applications in the study of the interaction between matter and radiation in the early universe, the radiation spectra of accretion disks around black holes, and the Sunyaev--Zeldovich effect, i.e., the reduction of the cosmic microwave background (CMB) brightness near clusters of galaxies containing hot gases~\cite{SunyaevZeldovich70,SunyaevZeldovich72,ShakuraSunyaev73,Birkinshaw99}.
Mathematically, the Kompaneets equation takes the non-dimensional form
\begin{equation}\label{e:kompaneetsF}
\partial_t f
= \frac{1}{x^2} \,
\partial_x \brak[\big]{ x^4 \paren[\big]{ \partial_x f + f + f^2 } } \,,
\qquad x \in (0, \infty)\,,
~t > 0\,.
\end{equation}
Here $x$ is proportional to photon energy and $t$ is proportional to time.
The variable $f$ expresses the photon number density relative to the
measure $x^2dx$ that appears due to the assumption of isotropy in a
3-dimensional space of wave vectors.
The physically relevant boundary condition at infinity requires that the incoming photon flux vanishes.
The boundary at $x = 0$ requires more care to understand, as the diffusion coefficient vanishes.
Escobedo et al.~\cite{EscobedoHerreroEA98} showed that solutions to~\eqref{e:kompaneetsF} are (globally) unique without imposing any boundary condition at $x = 0$.
One interesting feature of the Kompaneets equation is that the absence of the boundary condition at $x = 0$ allows for photon loss,
despite the fact that photon numbers are nominally conserved in Compton scattering.
In this paper we provide rigorous results describing the manner by which photons can be lost through an outflux at $x = 0$.
Physically, an outflux of photons at $x = 0$ means that a macroscopic number of photons accumulate at negligible values of energy.
This phenomenon has been regarded by several authors~\cite{ZelDovichLevich69,Syunyaev71,CaflischLevermore86,EscobedoHerreroEA98,EscobedoMischler01,JosserandPomeauEA06,KhatriSunyaevEA12,LevermoreLiuEA16} as analogous to the formation of a Bose--Einstein condensate---a collection of bosons occupying the same minimum-energy quantum state.
While the existence of such condensates was predicted in 1924 by Bose and Einstein~\cites{Bose24,Einstein25,Einstein25a}, they were first exhibited in 1995~\cite{AndersonEnsherEA95} for Rubidium-87 vapor.
For photons, Bose--Einstein condensates were experimentally exhibited in 2010~\cite{KlaersSchmittEA10}, but in circumstances dominated by physics different from Compton scattering.
In the cosmological setting, true Bose--Einstein condensation is thought to be suppressed by other physical mechanisms
that become important at very low energy \cite{KhatriSunyaevEA12}.
To briefly explain the outflux phenomenon for \eqref{e:kompaneetsF}, we note that the total photon number is
\begin{equation*}
\mathcal N(f_t) \defeq \int_0^\infty f_t(x)\,x^2 dx\,.
\end{equation*}
(We clarify that $f_t(x)$ here is the value of $f$ at $(t, x)$, and not the time derivative.)
By multiplying~\eqref{e:kompaneetsF} by $x^2$ and integrating, one immediately sees that the total photon number is a conserved quantity, provided the photon flux vanishes at both $0$ and $\infty$.
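Concretely (a formal computation, assuming sufficient decay of $f$ and $\partial_x f$ at both ends),
\begin{equation*}
\frac{d}{dt} \mathcal N(f_t)
= \int_0^\infty \partial_x \brak[\big]{ x^4 \paren[\big]{ \partial_x f + f + f^2 } } \, dx
= \lim_{x \to \infty} x^4 \paren[\big]{ \partial_x f + f + f^2 }
- \lim_{x \to 0} x^4 \paren[\big]{ \partial_x f + f + f^2 } \,,
\end{equation*}
so the photon number is conserved exactly when the flux $x^4 ( \partial_x f + f + f^2 )$ vanishes at both boundaries.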
Now it is also known (see for instance~\cites{CaflischLevermore86,LevermoreLiuEA16}, or Section~\ref{s:largetime}, below) that solutions to~\eqref{e:kompaneetsF} formally dissipate a quantum entropy.
This suggests that $f_t$ converges to an equilibrium solution as $t \to \infty$.
The nonnegative equilibrium solutions to~\eqref{e:kompaneetsF} can readily be computed by solving an ODE, and are given by Bose--Einstein statistics, taking the form
\begin{equation}\label{e:fhat}
\hat f_\mu(x) \defeq \frac{1}{e^{x + \mu} - 1}\,,
\qquad\text{for } \mu \geq 0\,.
\end{equation}
(For mathematical convenience, as in \cite{EscobedoHerreroEA98}, we take the parameter $\mu$ to be proportional to the negative of the chemical potential.)
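As a quick verification (a computation not spelled out here): the zero-flux condition $\partial_x f + f + f^2 = 0$ is indeed satisfied by $\hat f_\mu$, since, writing $E \defeq e^{x+\mu}$,
\begin{equation*}
\partial_x \hat f_\mu = -\frac{E}{(E-1)^2}\,,
\qquad
\hat f_\mu + \hat f_\mu^2 = \frac{1}{E-1} + \frac{1}{(E-1)^2} = \frac{E}{(E-1)^2}\,.
\end{equation*}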
Of course, one can now compute the maximum photon number in equilibrium to be
\begin{equation*}
\sup_{\mu \geq 0} \mathcal N( \hat f_\mu)
= \mathcal N(\hat f_0)
= \int_0^\infty \frac{x^2}{e^x - 1} \, dx
= 2\zeta(3)
\approx 2.404\dots
< \infty \,.
\end{equation*}
Here $\zeta(s) = \sum_1^\infty 1/k^s$ is the Riemann zeta function.
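The value $2\zeta(3)$ follows from the standard computation, expanding the Bose--Einstein factor as a geometric series:
\begin{equation*}
\int_0^\infty \frac{x^2}{e^x - 1} \, dx
= \int_0^\infty x^2 \sum_{k=1}^\infty e^{-kx} \, dx
= \sum_{k=1}^\infty \frac{2}{k^3}
= 2\zeta(3)\,.
\end{equation*}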
Thus if one starts~\eqref{e:kompaneetsF} with initial data such that $\mathcal N(f_0) > \mathcal N( \hat f_0 )$, then the total photon number cannot be a conserved quantity---at least not in the infinite-time limit---and there must be a dissipation mechanism through which photons are lost.
In this paper we prove that photons are indeed lost in finite time through an outflux at $0$
for all solutions of the Kompaneets equation \eqref{e:kompaneetsF} with $\mathcal N(f_0) > \mathcal N( \hat f_0 )$.
However, $\mathcal N(f_0) \leq \mathcal N(\hat f_0)$ does not guarantee that the photon number is conserved, as shown by a family of examples constructed by Escobedo et\ al.~\cite{EscobedoHerreroEA98}.
Here we provide an explicit condition on the initial data guaranteeing a finite-time photon outflux, one that neither requires nor precludes $\mathcal N(f_0) > \mathcal N(\hat f_0)$.
We also prove several other results concerning existence, uniqueness, and
convergence to equilibrium in the long time limit.
To state our results it is convenient to reformulate the problem in terms of the photon number density with respect to the measure $dx$, defined by
\begin{equation}\label{e:nDef}
n_t(x) \defeq x^2 f_t(x)\,.
\end{equation}
In terms of this photon number density, the stationary solutions~\eqref{e:fhat} are now
\begin{equation}\label{e:nhat}
\hat n_\mu(x) \defeq \frac{x^2}{e^{x + \mu} - 1} \,,
\qquad
\text{for } \mu \geq 0\,,
\end{equation}
and the total photon number is now
\begin{equation*}
N(n_t) \defeq \int_0^\infty n_t(x) \, dx
= \int_0^\infty x^2 f_t(x) \, dx
= \mathcal N(f_t)\,.
\end{equation*}
Two of our main results can be stated non-technically as follows:
\begin{enumerate}\tagsleft@false\let\veqno\@@eqno
\item
We show that the total photon number is non-increasing in time, and can only decrease through an outflux of photons at $x = 0$.
The boundary conditions ensure that photons can never be lost to (or gained from) infinity, and so the fact that the total photon number is decreasing means that there can never be an influx of photons at $x = 0$.
Moreover, for $0 \leq s \leq t$, we prove the following loss formula for the total photon number:
\begin{equation}\label{e:lossIntro}
N(n_t) = N(n_s) - \int_s^t n_\tau(0)^2 \, d\tau \,.
\end{equation}
\item
If the initial data is not identically~$0$, then we prove that as $t \to \infty$, the solution converges strongly in~$L^1$ to one of the equilibrium solutions~$\hat n_\mu$.
The parameter~$\mu \in [0, \infty)$ is characterized by the property
\begin{equation*}
N(\hat n_\mu) = N(n_0) - \int_0^\infty n_t(0)^2 \, dt\,.
\end{equation*}
In particular, $N(n_t)\to N(\hat n_\mu)$ as $t\to\infty$. Consequently, if $N(n_0)>N(\hat n_0)$, photon loss must occur in finite time.
\end{enumerate}
Even though we show there exists $\mu \in [0, \infty)$ such that $n_t \to \hat n_\mu$ as~$t \to \infty$, there appears to be no general way to determine~$\mu$ from the initial data.
There are however two cases where this can be done:
\begin{enumerate}
\item
If $n_0 \geq \hat n_0$, then $\mu = 0$ and $n_t \to \hat n_0$ in $L^1$ as $t \to \infty$.
\item
If $n_0 \leq \hat n_0$ on the other hand, then there is no photon loss, and $\mu \geq 0$ will be the unique number such that $N(n_0) = N(\hat n_\mu)$.
\end{enumerate}
\medskip
In light of~\eqref{e:lossIntro}, we see that the change in the total photon number between time $0$ and $t$ is exactly $\int_0^t n_s(0)^2 \, ds$.
Following precedent, one can interpret this accumulated outflux at zero energy as the ``mass'' of a Bose--Einstein condensate at time $t$.
Clearly this is non-decreasing as a function of time.
We will in fact show a stronger result: once photon loss is initiated, it persists for all time without stopping.
That is,
there exists $t_* \in [0, \infty]$ such that $n_t(0) = 0$ for all $t < t_*$ and $n_t(0) > 0$ for all $t > t_*$.
This time $t_*$ can be viewed as the time the Bose--Einstein condensate starts forming.
There are situations in which photon loss never occurs
(i.e., $t_* = +\infty$).
Indeed, we show that if $n_0 \leq \hat n_0$, then total photon number is conserved
and $t_*$ is infinite.
On the other hand, there are certain scenarios under which one can prove $t_*$ is finite.
One such scenario was previously identified by Escobedo et\ al.~\cite{EscobedoHerreroEA98} where the authors construct a family of solutions that develop a Burgers-like shock at $x = 0$ in finite time.
In this paper we prove that $t_*$ is finite in two different scenarios:
\begin{enumerate}
\item
If the total photon number initially is larger than $N(\hat n_0) = 2\zeta(3)$, the maximum photon number in equilibrium, then $t_*$ must be finite.
\item
If $\partial_x n_0(0) > 1$, then we show $t_*$ is finite, and furthermore we provide an explicit upper bound for $t_*$.
\end{enumerate}
To the best of our knowledge the second scenario above was not identified earlier.
The first scenario mathematically rules out the possibility that the Kompaneets equation
allows photons to remain conserved while concentrating at small but positive energy.
Such behavior was suggested in \cite{CaflischLevermore86} by number-conserving numerical simulations
and was shown to be compatible with entropy-minimization arguments.
Our results on photon loss indicate, instead, that numerical schemes for the Kompaneets equation
should \emph{not} be designed to conserve photon number at the zero-energy boundary,
since solutions of \eqref{e:kompaneetsF} do not have this property in general.
Analogs of many of our current results were previously obtained by the authors
in~\cite{LevermoreLiuEA16} and~\cite{BallewIyerEA16} for different simplified
models of~\eqref{e:kompaneetsF} obtained by neglecting terms that seem inessential to the photon loss phenomenon
but fail to preserve true Bose--Einstein equilibria.
As in those previous works, we make essential use of mathematical tools traditionally associated with first-order nonlinear
conservation laws without diffusion, such as an $L^1$ contraction property for solutions, negative slope bounds (an Oleinik inequality),
and comparisons to compression and rarefaction waves.
The proofs for the full Kompaneets equation~\eqref{e:kompaneetsF}, however, are significantly different and more involved.
All of our main results (including precise statements of those non-technically described earlier) are stated in Section~\ref{s:results} below.
In a sense, our results justify the notion, examined nonrigorously by many previous authors,
that at the zero-energy boundary the diffusion term in \eqref{e:kompaneetsF} can be neglected
and the flux is dominated by the nonlinear advection term $n^2 = x^4 f^2$ which arises
from a quantum enhancement of scattering into states occupied by bosons.
There are various mathematical or physical mechanisms that may prevent (or permit)
loss of photons at zero energy or formation of a true Bose--Einstein condensate.
Kompaneets derived \eqref{e:kompaneetsF} in a Fokker--Planck approximation
to a quantum Boltzmann equation for photon scattering from a fixed
Maxwellian electron distribution.
This equation can be written in the nondimensional form
\begin{align*}
\partial_t f(x,t) &=
\int_0^\infty \Bigl(
f(x_*,t)\bigl(1+f(x,t)\bigr)\,e^{-x}\\
& \qquad\qquad -f(x,t)\bigl(1+f(x_*,t)\bigr)\,e^{-x_*}\Bigr) \sigma(x,x_*)x_*^2\,dx_*\,,
\end{align*}
cf.~\cite[Eq.~(12.47)]{Castor04}, \cite[Eq.~(5.67)]{EscobedoMischlerEA03} and \cite{CortesEscobedo19}.
In this Boltzmann--Compton equation the form of $\sigma(x,x_*)=\sigma(x_*,x)$
is determined by approximating the Klein--Nishina cross section for Compton scattering,
and is strongly peaked where $x\approx x_*$.
Under a simplifying boundedness assumption on the scattering kernel,
Escobedo \& Mischler \cite{EscobedoMischler01} showed that photon loss in
finite time is impossible, but a concentration of photons approaching zero
energy appears in the limit $t\to\infty$.
The behavior in such a case was further studied in \cite{EscobedoMischlerEA04}.
For the physical kernel, Ferrari \& Nouri \cite{FerrariNouri06}
showed that if the initial data everywhere exceeds the Planckian
($f_0\ge \hat f_0$), then a number-conserving weak solution fails to exist
for any positive time, while if $f_0\le \hat f_0$ then a global
solution exists.
Recent work by Cort\'es \& Escobedo \cite{CortesEscobedo19} reviews related results
and revisits the question with a different kind of kernel truncation, obtaining
an existence result that does not preclude formation of a Dirac delta mass at zero energy.
Physical effects that become important at low energy and destroy photon
conservation include Bremsstrahlung and double Compton scattering \cite{KhatriSunyaevEA12}.
The derivation of the Kompaneets equation from a quantum Boltzmann equation
has also been revisited recently by several authors in the physical literature \cite{OliveiraMaesEA21,MendoncaTercas17,Milonni21}.
E.g., Mendon{\c{c}}a \& Ter{\c{c}}as \cite{MendoncaTercas17} suggest that photons in some plasmas
can have an effective mass at zero frequency, and modify the Kompaneets equation accordingly.
Finally, we mention that a considerable body of work exists concerning the
related but distinct phenomenon
of Bose--Einstein condensation in quantum models of boson-boson scattering,
described by Boltzmann--Nordheim (or Uehling--Uhlenbeck) equations.
For analytical studies of condensation phenomena and convergence to equilibrium
in these models we refer to
\cite{Spohn10,EscobedoVelazquez15,Lu13,LuMouhot12,LuMouhot15,Lu18,CaiLu19},
the book \cite{PomeauTran19}, and references therein.
For the Boltzmann--Nordheim equation, Fokker--Planck-type approximations of higher order
have been developed formally by Josserand et al.~\cite{JosserandPomeauEA06}
and analyzed in work of J\"ungel \& Winkler \cite{JungelWinkler15,JungelWinkler15a}.
There are also studies of other nonlinear Fokker--Planck models
that admit Bose--Einstein equilibria and the possibility of condensation,
and we refer the reader to \cite{Toscani12,CarrilloDiFrancescoEA16,CarrilloHopfEA20}
for work on this and further references.
Even though we do not consider alternative models or mechanisms here,
our present results for solutions of the Kompaneets equation itself
provide some understanding of the mechanism by which Compton scattering creates a photon flux towards low energy.
\section{Main Results.}\label{s:results}
This section is devoted to stating precise versions of our main results.
In terms of the photon number density~$n$ (defined in~\eqref{e:nDef}), equation~\eqref{e:kompaneetsF} becomes
\begin{equation}\label{e:komp}
\partial_t n = \partial_x J\,,
\qquad
\text{where}\quad
J = J(x, n) \defeq x^2 \partial_x n + (x^2 - 2x) n + n^2\,,
\end{equation}
is the photon flux to the left.
We will study equation~\eqref{e:komp} with bounded nonnegative initial data $n_0$, and impose the no-flux boundary condition
\begin{equation}\label{e:noflux}
\lim_{x \to \infty} J(x, n) = 0
\end{equation}
at infinity.
As mentioned earlier, we will not impose any boundary condition at $x = 0$.
\subsection{Construction and Properties of Solutions.}
Previous work of Escobedo et\ al.~\cite{EscobedoHerreroEA98} shows there is a \emph{unique}, globally regular solution to~\eqref{e:komp}--\eqref{e:noflux}.
\begin{theorem}[Existence~\cite{EscobedoHerreroEA98}]\label{t:exist}
For any bounded measurable $n_0 \geq 0$ satisfying
\begin{equation}\label{e:x2n}
\lim_{x \to \infty} x^2 n_0(x) = 0\,,
\end{equation}
there exists a unique nonnegative function
\begin{equation}\label{e:nSpace}
n \in C( [0, \infty); L^1) \cap L^\infty_{\textup{loc}}\bigl( [0, \infty); L^\infty( [0, \infty) ) \bigr) \cap C^{2,1}((0, \infty)^2)
\end{equation}
that is a solution to \eqref{e:komp}--\eqref{e:noflux} with initial data~$n_0$.
Moreover, for any~$T < \infty$ we have
\begin{gather}\label{e:x2nPtwise}
\lim_{x\to \infty}x^2 n_t(x) =0 \,,
\qquad\text{uniformly for } 0 \leq t < T\,,
\\
\label{e:x2dxnVanish}
\lim_{x\to \infty} x^2 \partial_x n(x, t) =0\,,
\qquad\text{uniformly on the set } \frac{1}{x^2} \leq t < T\,.
\end{gather}
\end{theorem}
Existence and uniqueness of the solution is the content of Theorem 2 in~\cite{EscobedoHerreroEA98}.
The vanishing conditions~\eqref{e:x2nPtwise}--\eqref{e:x2dxnVanish} are contained in the proof of this theorem on page~3850, and we refer the reader to~\cite{EscobedoHerreroEA98} for details.
\begin{remark}\label{r:bounded}
In Section~\ref{s:pointwise} (Proposition~\ref{p:nBdd}) we will show that $n$ is bounded globally in both space and time, not just locally bounded as stated in~\eqref{e:nSpace}, and moreover infer from \cite{EscobedoHerreroEA98} that \eqref{e:x2nPtwise} and \eqref{e:x2dxnVanish} hold with $T=\infty$.
\end{remark}
\begin{remark}
In~\cite{EscobedoHerreroEA98}, the authors assumed the initial data~$n_0$ is continuous.
This assumption is not necessary and can be relaxed using a density argument and the $L^1$ contraction property
(cf.~\cite{LevermoreLiuEA16}).
\end{remark}
\begin{remark}
We reiterate that while Theorem~\ref{t:exist} requires a no-flux boundary condition at $x = \infty$ (equation~\eqref{e:noflux}), it does not require a boundary condition at $x = 0$.
That is, Theorem~\ref{t:exist} guarantees that solutions to~\eqref{e:komp}--\eqref{e:noflux} are globally unique, without requiring any boundary condition at $x = 0$.
\end{remark}
From the vanishing conditions~\eqref{e:x2nPtwise}--\eqref{e:x2dxnVanish} we immediately see that
\begin{equation}\label{jr}
\lim_{R\to \infty}\int_s^t \abs{J(R, n_\tau)} \, d\tau = 0 \,,
\qquad\text{for any } 0 < s \leq t \,.
\end{equation}
The behavior of the solution at $0$, however, is a little more delicate.
The constructed examples in~\cite{EscobedoHerreroEA98} (discussed below) show that the function $n$ can not always be extended continuously to a function defined on the domain $(x, t) \in [0, \infty) \times (0, \infty)$.
However, for every $t > 0$ the function $n_t(x)$ can be extended continuously at $x = 0$.
\begin{lemma}[Continuity at $x = 0$]\label{l:n0exist}
Let $T > 0$, let $Q_T = (0, \infty) \times (0, T)$ and $n \in L^\infty(Q_T)$ be a nonnegative solution of~\eqref{e:komp}--\eqref{e:noflux} whose initial data satisfies~\eqref{e:x2n}.
Then for any $t \in (0, T]$, the limit of $n_t(x)$ exists as $x \to 0^+$.
\end{lemma}
We will subsequently use the notation $n_t(0)$ to denote $\lim_{x \to 0^+} n_t(x)$ for every $t > 0$.
The proof of Lemma~\ref{l:n0exist} is presented in Section~\ref{s:construct}.
The key ingredient in the proof is an Oleinik inequality (Lemma~\ref{l:oleinik}, below) establishing explicit negative slope bounds on solutions.
Such inequalities typically arise in the study of hyperbolic problems.
Even though~\eqref{e:komp} is parabolic, the degeneracy near $x = 0$ makes the system exhibit an ``almost hyperbolic'' behavior, making it amenable to analysis using such tools.
Next, we study convergence of the flux $J(x, n)$ as $x \to 0$.
As we will shortly see, the slope $\partial_x n$ may develop a singularity at $x = 0$.
We don't presently know the exact singularity profile, and hence do not know whether or not for every $t > 0$ we have $x^2 \partial_x n_t(x) \to 0$ as $x \to 0$.
As a result, we don't know whether or not the flux satisfies $\lim_{x \to 0} J(x, n_t) = n_t(0)^2$ for every $t > 0$.
We claim, however, that the time integrated version does hold: $\int_s^t J(x, n_\tau) \, d\tau \to \int_s^t n_\tau(0)^2 \, d\tau$ as $x \to 0$, and this is our next lemma.
\begin{lemma}[Flux behavior at $x = 0$]\label{l:intJ0}
For any $0 \leq s \leq t$ we have
\begin{equation}\label{e:intJ0}
\lim_{x\to 0^+} \int_s^t J(x, n_\tau)\,d\tau = \int_s^t n_\tau(0)^2\,d\tau\,.
\end{equation}
If, further, $0 < s \leq t$, then we have the stronger convergence
\begin{align}\label{e:a0}
\lim_{x \to 0} \int_s^t x^2 \abs{ \partial_x n_\tau(x)} \, d\tau = 0 \,,
\quad\text{and}\quad
\lim_{x \to 0} \int_s^t \abs{ J(x, n_\tau) - n_\tau(0)^2 } \, d\tau = 0 \,.
\end{align}
\end{lemma}
We do not presently know whether~\eqref{e:a0} holds if $s = 0$.
The proof of Lemma~\ref{l:intJ0} is somewhat indirect.
We first establish a loss formula (Proposition~\ref{p:lossFormula}, below) relating the decrease in total photon number to the outflux of zero-energy photons.
(For clarity of presentation we state Proposition~\ref{p:lossFormula} in the next subsection, as it fits better with our results on condensate formation.)
It turns out that the loss formula can be used to prove convergence of the time integrated flux as stated in~\eqref{e:intJ0}.
To obtain the stronger convergence stated in~\eqref{e:a0}, we require the Oleinik inequality (Lemma~\ref{l:oleinik}), and hence require $s > 0$.
The next two results are the two main tools that we will use to study the long time behavior of solutions and photon loss.
The first asserts that the solution operator to~\eqref{e:komp}--\eqref{e:noflux} is a contraction in $L^1$.
The second provides a comparison principle, without requiring a boundary condition at $x = 0$.
Both results are proved in Section~\ref{s:construct}, below.
\begin{lemma}[$L^1$ contraction]\label{l:l1contract}
Let $n, m$ be two bounded, nonnegative solutions of~\eqref{e:komp}.
Then for any $0 \leq s \leq t$, we have
\begin{equation}\label{eqn:contract}
\norm{n_t - m_t}_{L^1}
+ \int^t_s \abs[\big]{ n^2_\tau (0)-m^2_\tau(0)} \, d\tau
\leq \norm{n_s - m_s}_{L^1}\,.
\end{equation}
\end{lemma}
\begin{remark*}
Here $n_\tau(0) = \lim_{x \to 0^+} n_\tau(x)$, which exists by Lemma~\ref{l:n0exist}.
\end{remark*}
\begin{lemma}[Weak comparison principle]\label{l:comparison}
Let $T > 0$, $m \in L^\infty(Q_T)$ be a nonnegative sub-solution to~\eqref{e:komp}--\eqref{e:noflux}, and $n \in L^\infty(Q_T)$ be a nonnegative super-solution to~\eqref{e:komp}--\eqref{e:noflux}.
If $m_0 \leq n_0$ then we must have $m_t \leq n_t$ for all $t \in [0, T]$.
\end{lemma}
We reiterate that our comparison principle \emph{does not} require the assumption $m \leq n$ at the boundary~$x = 0$.
Instead, it provides as a conclusion that $m_t(x) \leq n_t(x)$ for every $t > 0$ and $x \geq 0$, including at $x = 0$.
The notions of sub- and super-solutions used in Lemma~\ref{l:comparison} are defined precisely in Definition~\ref{d:subsol}, below.
\subsection{Photon Loss, and Condensate Formation.}
We now turn to results concerning loss of photons,
which corresponds to Bose--Einstein condensation at the level of approximation that the Kompaneets equation represents.
Throughout this section we will assume $n_0$ is a nonnegative bounded function satisfying~\eqref{e:x2n}, and $n$ is the global solution to~\eqref{e:komp}--\eqref{e:noflux} with initial data $n_0$.
Our first result is an explicit formula for the total photon number that was mentioned earlier.
\begin{proposition}[Loss formula]\label{p:lossFormula}
Whenever $0 \leq s \leq t$ we have
\begin{equation}\label{e:loss}
N(n_t) + \int_s^t n_\tau(0)^2 \, d\tau = N(n_s)\,.
\end{equation}
\end{proposition}
As a result the total photon number can only decrease with time, and can only decrease through an outflux of zero-energy photons.
In fact, equation~\eqref{e:loss} shows that the change in the total photon number between time $0$ and $t$ is exactly $\int_0^t n_s(0)^2 \, ds$, and thus we may interpret $\int_0^t n_s(0)^2 \, ds$ as the mass of the Bose--Einstein condensate at time $t$.
Notice that since $n_s(0)^2$ is manifestly nonnegative, the total photon number can never increase through an influx at $x=0$.
That is, according to Kompaneets dynamics, photons may enter the Bose--Einstein condensate, but can not leave it.
In addition to the above physical interpretation, Proposition~\ref{p:lossFormula} is essential to obtaining the behavior of the flux at $x = 0$ (Lemma~\ref{l:intJ0}, above).
Thus we prove Proposition~\ref{p:lossFormula} in Section~\ref{s:construct}, before Lemma~\ref{l:intJ0}.
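The loss formula also has a clean discrete analogue. In the following minimal explicit finite-volume sketch of~\eqref{e:komp} (ours, and far cruder than the scheme of Appendix~\ref{s:nmethod}; all parameters are illustrative), the no-flux condition at the right end together with the left boundary flux $n(0)^2$ makes the discrete total photon number non-increasing by telescoping, mirroring~\eqref{e:loss}:

```python
# Explicit finite-volume sketch of  d_t n = d_x J,
#   J = x^2 d_x n + (x^2 - 2x) n + n^2 = x^2 d_x n + v n,  v = x^2 - 2x + n.
# Faces at x_j = j*dx; J[0] = n(0)^2 (outflux), J[M] = 0 (no-flux).
# Then  d/dt (dx * sum(n)) = J[M] - J[0] = -n(0)^2 <= 0  by telescoping.

def step(n, dx, dt):
    M = len(n)
    J = [0.0] * (M + 1)          # J[M] stays 0: no-flux at the right end
    J[0] = n[0] ** 2             # left boundary: outflux n(0)^2
    for j in range(1, M):
        x = j * dx
        nav = 0.5 * (n[j - 1] + n[j])
        v = x * x - 2.0 * x + nav          # advection speed in J = x^2 n' + v n
        nup = n[j] if v > 0 else n[j - 1]  # upwind: transport is leftward for v > 0
        J[j] = x * x * (n[j] - n[j - 1]) / dx + v * nup
    return [n[i] + dt * (J[i + 1] - J[i]) / dx for i in range(M)]

dx, dt, M = 0.05, 2.0e-5, 100    # domain (0, 5]; dt below dx^2 / (2 x_max^2)
n = [2.0 if (i + 0.5) * dx < 1.0 else 0.0 for i in range(M)]  # block of photons
Ns = []
for _ in range(5000):            # evolve to t = 0.1
    Ns.append(dx * sum(n))
    n = step(n, dx, dt)
Ns.append(dx * sum(n))
print(Ns[0], Ns[-1])             # total photon number only decreases
```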
\medskip
Next we show that once a photon outflux at $x = 0$ starts, it will never stop.
Thus once the Bose--Einstein condensate forms, its mass will always strictly increase with time.
\begin{proposition}[Persistence]\label{p:persistence}
There exists $t_*\in [0,\infty]$ such that $n_t(0)>0$ whenever $t> t_*$, and $n_t(0)=0$ whenever $0 < t<t_*$.
\end{proposition}
Due to Proposition~\ref{p:lossFormula}, $t_*$ is the time of onset of photon loss.
Of course there are situations (such as Corollary~\ref{c:mulim}, in the case where $n_0 \leq \hat n_0$) when $t_* = \infty$, and
photon loss never occurs.
There are, however, a few scenarios under which one can prove $t_* < \infty$, so photon loss begins in finite time.
One scenario was constructed by Escobedo et\ al.\ in~\cite{EscobedoHerreroEA98}, where the solution develops a ``viscous shock'' (see Figure~\ref{f:formationEHV}).
Namely, for any $t_*, c_* > 0$, they produce solutions~$n$ such that
\begin{equation*}
\lim_{t \to t_*^-} n_t(c (t_* - t)) = c_*
\quad\text{if } c > c_*\,,
\qquad\text{and}\qquad
\lim_{t \to t_*^-} n_t(c (t_* - t)) = 0
\quad\text{if } c < c_*\,.
\end{equation*}
For this solution $n_t(0)$ has a jump discontinuity at $t = t_*$, and $N(n_t)$ has a corner at $t = t_*$ (see Figure~\ref{f:formationEHV}).
Escobedo et al.\ produce such solutions with $N(n_t)$ arbitrarily small, showing this scenario is not related to an excess of
photon number above the maximum equilibrium value $N(\hat n_0)$.
\begin{figure}[htb]
\setkeys{Gin}{width=.3\linewidth}
\includegraphics{figures/ehv-nvsx.pdf}
\quad
\includegraphics{figures/ehv-Nvst.pdf}
\quad
\includegraphics{figures/ehv-u0vst.pdf}
\caption{%
\footnotesize
Numerical simulations showing the onset of photon loss
through a ``viscous shock'' \`a la Escobedo et\ al.~\cite{EscobedoHerreroEA98}.
Left: Profile of the solution $n_t(x)$ vs $x$ at various times close to $t_*$.
Center: Total photon number $N(n_t)$ vs $t$ showing a corner at $t = t_*$.
Right: Photon outflux at $x = 0$ vs $t$, showing a jump at $t = t_*$.
(The numerical method used to generate these plots is described in Appendix~\ref{s:nmethod}, and the parameters used are listed in Remark~\ref{r:params}.)
}
\label{f:formationEHV}
\end{figure}
There are two other scenarios under which one can prove $t_* < \infty$.
The first is a mass condition that guarantees $t_* < \infty$ if the initial photon number is larger than the maximum photon number that can be sustained in equilibrium.
While this is the natural physically expected behavior, it was not proved rigorously before.
We prove it here as a consequence of our long time convergence result (Theorem~\ref{t:lim}).
\begin{figure}[htb]
\setkeys{Gin}{width=.3\linewidth}
\includegraphics{figures/slope-nvsx.pdf}
\quad
\includegraphics{figures/slope-Nvst.pdf}
\quad
\includegraphics{figures/slope-u0vst.pdf}
\caption{%
\footnotesize
Numerical simulations showing the onset of photon loss through the slope condition in Proposition~\ref{p:formation}.
Left: Profile of the solution $n_t(x)$ vs $x$ at various times close to $t_*$.
Center: Total photon number $N(n_t)$ vs $t$ showing a~$C^1$ transition at $t = t_*$.
Right: Photon outflux at $x = 0$ vs $t$, showing a corner at $t = t_*$.
(The numerical method used to generate these plots is described in Appendix~\ref{s:nmethod}, and the parameters used are listed in Remark~\ref{r:params}.)
}
\label{f:formationSlope}
\end{figure}
The second scenario we provide here is a slope condition that guarantees $t_* < \infty$, provided $\partial_x n_0(0) > 1$ (see Figure~\ref{f:formationSlope}).
To the best of our knowledge, this scenario hasn't been identified before.
In this case, in contrast to the viscous shock of Escobedo et\ al.~\cite{EscobedoHerreroEA98}, the photon outflux at $0$ can be continuous but not differentiable in time at $t = t_*$.
Moreover, $\partial_x n_t(0) \to \infty$ as $t \to t_*^-$.
\begin{proposition}[Onset of loss]\label{p:formation}
Let $t_*$ be the time given by Proposition~\ref{p:persistence}.
\begin{enumerate}\tagsleft@false\let\veqno\@@eqno
\item
\emph{(Mass condition)} If $N(n_0) > N(\hat n_0)$ then $t_* < \infty$.
\item
\emph{(Slope condition)}
If $\partial_x n_0(0)>1$, then $t_* \leq \bar t_*$, where
\begin{equation}\label{e:tStarRiccati}
\bar t_* \defeq \frac{1}{2} \ln\paren[\Big]{ \frac{\partial_x n_0(0)}{\partial_x n_0(0) -1} } \,.
\end{equation}
Moreover, there exists $n_0$ for which $t_* = \bar t_*$ and the photon outflux $n_t(0)$ is continuous at $t = t_*$.
\end{enumerate}
\end{proposition}
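The form of the bound~\eqref{e:tStarRiccati} can be anticipated by a formal Riccati argument: evaluating the evolution of the slope $w = \partial_x n$ at $x = 0$ and heuristically discarding the remaining nonnegative terms suggests $\partial_t w \geq 2w(w - 1)$, and the comparison ODE $w' = 2w(w-1)$ with $w(0) = w_0 > 1$ blows up exactly at time $\frac12 \ln(w_0/(w_0 - 1))$. The following stdlib-only Python sketch (ours, purely illustrative) confirms this blow-up time numerically:

```python
import math

# Compare the explicit blow-up time of  w' = 2 w (w - 1),  w(0) = w0 > 1,
# with a brute-force numerical integration.  (Heuristic sketch only.)

def tbar(w0):
    # exact blow-up time: separate variables to get
    # (w - 1)/w = ((w0 - 1)/w0) e^{2t}, which reaches 1 at t = tbar
    return 0.5 * math.log(w0 / (w0 - 1.0))

def blowup_time_numeric(w0, dt=1e-6, cap=1e6):
    # forward-Euler integration until w exceeds a large cap
    w, t = w0, 0.0
    while w < cap:
        w += dt * 2.0 * w * (w - 1.0)
        t += dt
    return t

w0 = 3.0
print(tbar(w0), blowup_time_numeric(w0))  # both approximately 0.2027
```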
In the second scenario above, the profile of the initial photon number density away from the origin may initiate photon loss before time $\bar t_*$,
and may also cause the photon outflux to have a jump discontinuity before, at, or after time $\bar t_*$.
Thus for arbitrary initial data with $\partial_x n_0(0) > 1$, we cannot expect photon loss to begin exactly at time
$t_* = \bar t_*$, or the photon outflux to be continuous at time $t_*$.
We can, however, produce a family of initial data for which it is true that $t_* = \bar t_*$ precisely, and $n_t(0)$ is continuous at (but not necessarily after) time $t_*$, and this is the second assertion of Proposition~\ref{p:formation}.
The initial data we consider produces solutions where $n_{t_*}(\cdot)$ develops a square-root singularity at $x=0$, and $n_{\cdot}(0)$ develops a corner at $t = t_*$.
Generic initial data for which the photon outflux is continuous at time $t_*$ may exhibit different singular behavior at the point $(x, t) = (0, t_*)$, and we do not presently have a complete characterization.
Finally, we state one result that guarantees photon loss never occurs.
Namely, if the initial data lies entirely below the maximal stationary solution~$\hat n_0$, then
total photon number is globally conserved and no condensate can form.
\begin{lemma}[Absence of loss]\label{l:absence}
If $n_0 \leq \hat n_0$, then~$t_* = \infty$.
\end{lemma}
Lemma~\ref{l:absence} is already contained in work of Escobedo \& Mischler~\cite{EscobedoMischler01}.
We can, however, provide a short and direct proof of it here using the comparison principle.
\begin{proof}[Proof of Lemma~\ref{l:absence}]
Treating $\hat n_0$ as a super-solution and $n$ as a sub-solution, Lemma~\ref{l:comparison} implies that $n_t(x) \leq \hat n_0(x)$ for all $t > 0$, $x \geq 0$.
Since $\hat n_0(0) = 0$, we must have $n_t(0) = 0$ for all $t > 0$, hence the total photon number $N(n_t)$ is constant.
\end{proof}
\subsection{Long Time Convergence.}\label{s:longtime}
We now study the behavior of solutions as~$t \to \infty$.
Our main result shows that as~$t \to \infty$, the solution must converge (in~$L^1$) to a stationary solution~$\hat n_\mu$.
The parameter~$\mu$ and the total loss in the photon number can be determined uniquely (but not explicitly) from the initial data.
\begin{theorem}[Long time convergence]\label{t:lim}
Let $n_0$ be a nonnegative bounded function which is not identically $0$ and satisfies~\eqref{e:x2n}.
If $n$ is the unique global solution of~\eqref{e:komp}--\eqref{e:noflux} with initial data $n_0$, then
\begin{equation}\label{e:l1conv}
\lim_{t \to \infty} \norm{ n_t - \hat n_\mu }_{L^1} = 0\,,
\end{equation}
where $\mu \in [0, \infty)$ is the unique number for which
\begin{equation}\label{e:muDef}
N(\hat n_\mu)=N(n_0)- \int_0^\infty n_t(0)^2 \, dt \,.
\end{equation}
\end{theorem}
As mentioned above, while the parameter~$\mu$ can be determined uniquely from the identity~\eqref{e:muDef}, it can not in general be explicitly computed from the initial data $n_0$.
The best we can do presently is to obtain a non-trivial lower bound for $N(\hat n_\mu)$ for general initial data, and compute $\mu$ explicitly in two special cases.
We present this below.
\begin{corollary}\label{c:mulim}
Let $n_0$ and $n$ be as in Theorem~\ref{t:lim}.
\begin{enumerate}\tagsleft@false\let\veqno\@@eqno
\item
If $n_0 \geq \hat n_0$, then~\eqref{e:l1conv} holds with~$\mu = 0$, and the total loss in the photon number is precisely $N(n_0) - N(\hat n_0)$.
\item
If, on the other hand, $n_0 \leq \hat n_0$, then there is never any photon outflux at $x = 0$, and a Bose--Einstein condensate never forms.
Consequently~\eqref{e:l1conv} holds for the unique $\mu \in [0, \infty)$ such that $N(n_0) = N( \hat n_\mu )$.
\item
In general, the total photon number in equilibrium is bounded below by
\begin{equation}\label{e:NmuLower}
N(\hat n_\mu) \geq \int_0^\infty (n_0 \varmin \hat n_0) \, dx > 0\,,
\end{equation}
where the notation $a \varmin b$ above denotes the minimum of $a$ and $b$.
\end{enumerate}
\end{corollary}
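In case (2), the parameter $\mu$ can be computed numerically by inverting the strictly decreasing map $\mu \mapsto N(\hat n_\mu)$. Expanding $1/(e^{x+\mu} - 1)$ in a geometric series gives $N(\hat n_\mu) = \int_0^\infty x^2/(e^{x+\mu}-1)\,dx = 2 \sum_{k \geq 1} e^{-k\mu}/k^3$, which the following stdlib-only Python sketch (ours; the function names are illustrative) inverts by bisection:

```python
import math

def N_hat(mu, K=2000):
    # N(\hat n_mu) = 2 * sum_{k>=1} e^{-k mu} / k^3  (truncated series)
    z = math.exp(-mu)
    return 2.0 * sum(z**k / k**3 for k in range(1, K + 1))

def mu_from_N(target, lo=0.0, hi=50.0, iters=200):
    # bisection for the unique mu >= 0 with N(\hat n_mu) = target,
    # valid for 0 < target <= N(\hat n_0) = 2 zeta(3) ~ 2.404
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if N_hat(mid) > target:   # N_hat is strictly decreasing in mu
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# sanity check: N_hat(0) = 2 zeta(3), and the map inverts correctly
mu = mu_from_N(1.0)
print(N_hat(0.0), N_hat(mu))  # approximately 2.404 and 1.0
```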
The main tool in the proof of Theorem~\ref{t:lim} is the fact that solutions dissipate a quantum entropy
\begin{equation}\label{e:HdefIntro}
H(n) \defeq \int_0^\infty \paren[\Big]{
xn + n\ln n -(n+x^2)\ln(n+x^2)+x^2 \ln (x^2)
} \, dx\,.
\end{equation}
While this can be checked formally (see~\cite{CaflischLevermore86} or Section~\ref{s:largetime}), we are only able to prove the dissipation identity~\eqref{e:ent} rigorously under a decay assumption on the initial data.
\begin{lemma}[Entropy dissipation]\label{ent}
Suppose there exists $C_0 > 0$ such that
\begin{equation}\label{e:expDecay}
n_0(x) \leq C_0(1 + x^2) e^{-x}
\quad\text{for all}\quad x > 0 \,.
\end{equation}
Then for any~$t > 0$ we have
\begin{equation}\label{e:ent}
\partial_t H(n_t)
+ \int_0^\infty \frac{J^2}{n(n+ x^2)} \, dx
= 0.
\end{equation}
\end{lemma}
\begin{remark}
To prove existence of solutions to~\eqref{e:komp}--\eqref{e:noflux}, one only needs to assume the algebraic decay condition~\eqref{e:x2n}.
To prove Lemma~\ref{ent}, however, our proof requires the stronger exponential decay assumption~\eqref{e:expDecay}.
We do not presently know whether~\eqref{e:ent} holds for initial data that only satisfies~\eqref{e:x2n}.
\end{remark}
Once entropy decay is established, we will prove Theorem~\ref{t:lim} using the $L^1$-contraction and LaSalle's invariance principle (see Section~\ref{s:largetime}, below).
We remark, however, that our proof gives no information on the rate at which~$n_t \to \hat n_\mu$.
For exponentially decaying initial data, the entropy can be used to provide some information about the convergence rate, and we conclude by stating this result.
\begin{proposition}[Convergence rate]\label{p:rate}
Suppose~$n_0$ satisfies~\eqref{e:expDecay}, and let~$\mu$ be as in Theorem~\ref{t:lim}.
Then there exists a constant $C$ (depending only on~$C_0$) such that
\begin{equation}\label{e:rate}
\norm{ x^2 (n_t - \hat n_\mu) }^2_{L^1}
\leq C \paren[\Big]{ H(n_t)-H(\hat n_\mu)+\mu \int_t^\infty n_s(0)^2 \, ds } \,
\end{equation}
for all~$t \geq 0$.
\end{proposition}
\subsection*{Plan of this paper.}
In Section~\ref{s:construct} we prove our results concerning properties of solutions
(Lemmas~\ref{l:n0exist}--\ref{l:comparison})
and the loss formula (Proposition~\ref{p:lossFormula}).
The proofs require an Oleinik type bound on the negative slope, and we also prove this in Section~\ref{s:construct}.
In Section~\ref{s:bec} we prove our results concerning onset, persistence and absence of photon loss (Proposition~\ref{p:persistence}, the second assertion in Proposition~\ref{p:formation} and Lemma~\ref{l:absence}).
In Section~\ref{s:largetime} we prove our results concerning the long time behavior (Theorem~\ref{t:lim}, Corollary~\ref{c:mulim}, Lemma~\ref{ent}, Proposition~\ref{p:rate}, and the first assertion in Proposition~\ref{p:formation}).
Finally we conclude this paper with two appendices.
Appendix~\ref{s:nmethod} describes the numerical method used to produce Figures~\ref{f:formationEHV}--\ref{f:formationSlope}.
\section{Construction and Properties of Solutions.}\label{s:construct}
\subsection{Negative Slope Bounds (Lemma~\ref{l:oleinik}).}
We begin by proving an Oleinik inequality that guarantees a bound on the negative slope of solutions.
This will be used in the proofs of Lemmas~\ref{l:n0exist} and~\ref{l:intJ0}.
\begin{lemma}[Oleinik inequality]\label{l:oleinik}
Let $T > 0$, let $Q_T = (0, \infty) \times (0, T)$, and let $n \in L^\infty(Q_T)$ be a nonnegative solution of~\eqref{e:komp}--\eqref{e:noflux} with initial data satisfying~\eqref{e:x2n}.
For every $(x, t) \in (0, \infty) \times (0, T]$, we have
\begin{equation}\label{e:dxnLower}
\partial_x n \geq -\frac{1}{2t}-\frac{5x}{2} - \frac{\alpha}2 \,,
\quad\text{for any }
\alpha \geq {\sqrt{6 \norm{n}_{L^\infty(Q_T)} + 1} - 1 }\,.
\end{equation}
\end{lemma}
\begin{remark*}
While Lemma~\ref{l:oleinik} provides a lower bound for $\partial_x n$, there is certainly no upper bound.
In fact, we expect $\partial_x n$ to become singular at $x = 0$ at the time when loss of photons commences.
\end{remark*}
\begin{proof}[Proof of Lemma~\ref{l:oleinik}]
Differentiating~\eqref{e:komp} with respect to $x$ and setting~$w = \partial_x n$ shows
\begin{equation}\label{e:w}
\partial_t w =x ^2 \partial_{xx} w +2\paren[\Big]{ n +x+\frac{x^2}{2} }\partial_x w +2w(w+2x-1)+2n\,.
\end{equation}
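This differentiation is routine but error prone; as a sanity check (not part of the proof), the identity~\eqref{e:w} can be verified symbolically. The sketch below writes both sides in terms of $n$ and its $x$-derivatives:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
n = sp.Function('n')(x)

# Flux from (e:komp): J(x, n) = x^2 n_x + (x^2 - 2x) n + n^2.
J = x**2 * sp.diff(n, x) + (x**2 - 2*x) * n + n**2

# Along solutions, w = n_x satisfies w_t = (J_x)_x; so the right hand
# side of (e:w), rewritten in terms of n, must equal the second
# x-derivative of J.
w = sp.diff(n, x)
rhs = (x**2 * sp.diff(w, x, 2)
       + 2*(n + x + x**2/2) * sp.diff(w, x)
       + 2*w*(w + 2*x - 1)
       + 2*n)
residual = sp.expand(sp.diff(J, x, 2) - rhs)
assert residual == 0
```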
The main idea behind the proof is to construct a suitable sub-solution to~\eqref{e:w}.
For this, let $\delta > 0$ and let $Q'_\delta = (2\delta, 1/\delta) \times (2\delta, T)$.
Define $\ubar{w} \defeq -(\varphi + \psi)$, where
\begin{equation}\label{e:z0psiDef}
\varphi \defeq \frac{\alpha}2 + \frac{5 x}{2} +\frac{1}{2(t - \delta) }\,,
\qquad\text{and}\qquad
\psi \defeq \frac{\delta e^{30t}}{(x - \delta)^{4}}\,.
\end{equation}
We claim~$\ubar{w}$ is the desired sub-solution to~\eqref{e:w} in~$Q'_\delta$.
To see this, define the linear differential operator $\mathcal L_0$ by
$$
\mathcal L_0 v \defeq \partial_t v -x^2 \partial_x^2 v - 2 \paren[\Big]{ n+x + \frac{x^2}{2} } \partial_x v
+(1-2x)2v \,,
$$
and observe that~\eqref{e:w} implies
$
\mathcal L_0 w = 2w^2 + 2n\,.
$
Thus for $u=w-\ubar{w}$, we have
\begin{equation}\label{e:ole1}
\mathcal L_0 u
= 2(u+\ubar w)^2 + 2n -\mathcal L_0 \ubar{w}
= 2u^2 + 4u\ubar{w} + 2\ubar{w}^2 + 2n +\mathcal L_0 (\varphi + \psi) \,.
\end{equation}
We now bound the right hand side of~\eqref{e:ole1} from below.
First, we note that $\varphi\psi > 0$ and $2\partial_x\varphi=5$ in $Q'_\delta$, hence
\begin{align}
\nonumber
& 2\ubar{w}^2+2n+ \mathcal L_0 \varphi
\\ \nonumber
& >
2(\psi^2+\varphi^2)-3n + 2\varphi(1-2x)
-5\paren[\Big]{x+\frac{x^2}{2}}
-\frac1{2(t-\delta)^2}
\\ \nonumber
& = 2\psi^2-3n + \frac{ 1+ \alpha +3x }{t-\delta}
+ 3\alpha x + \frac12\alpha(\alpha+2)
\\ \label{e:ole2}
& \geq 2\psi^2 - 3 \norm{n}_{L^\infty(Q_T)} + \frac12\alpha(\alpha+2)
\geq 2\psi^2\,.
\end{align}
The last inequality above follows from our choice of~$\alpha$.
Next, we note that since $e^{30t}>1$ and $n\geq 0$ we have
\begin{align}
\nonumber
2\psi^2+\mathcal L_0\psi
&= \psi\brak[\Big]
{2\psi + 30 - \frac{20x^2}{(x-\delta)^2}+(2n+2x+x^2)\frac4{x-\delta} + 2-4x}
\\\nonumber
&> \psi\brak[\Big]
{\frac{2\delta}{(x-\delta)^4} + 30 - \frac{20x^2}{(x-\delta)^2}+(2x+x^2)\frac4{x} + 2-4x}
\\
\label{e:ole3}
&= \psi\brak[\Big]
{\,\frac{2}{\delta^3}
\paren[\Big]{\frac{\delta}{x-\delta}}^4
+ 40 - 20
\paren[\Big]{1+\frac{\delta}{x-\delta}}^2\, } >0\,,
\end{align}
provided $x > \delta$ and $\delta$ is sufficiently small.
Using~\eqref{e:ole2} and~\eqref{e:ole3} in~\eqref{e:ole1} implies
\begin{equation}\label{e:L0U}
\mathcal L_0 u \geq 2u^2 + 4u\ubar{w} \geq 4 \ubar{w} u\,.
\end{equation}
Finally, let
\begin{equation*}
\partial Q'_\delta = \paren[\big]{ \set{2\delta, \tfrac{1}{\delta} } \times [2\delta, T] } \cup \paren[\big]{ \paren{2\delta, \tfrac{1}{\delta}} \times \set{2\delta} }
\end{equation*}
denote the parabolic boundary of~$Q'_\delta$.
Using~\eqref{e:x2dxnVanish} and~\eqref{e:z0psiDef} we see that $u > 0$ on $\partial Q'_\delta$ when~$\delta$ is sufficiently small.
Moreover, since $\ubar{w}$ and $2x - 1$ are bounded in $Q'_\delta$, the minimum principle (see for instance~\cite[\S7.1.4]{Evans98}) applies to the inequality~\eqref{e:L0U} and guarantees $u \geq 0$ in $Q'_\delta$.
Sending~$\delta \to 0$ concludes the proof.
\end{proof}
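As a sanity check (not part of the proof), positivity of the bracket in~\eqref{e:ole3} can be confirmed numerically: writing $s = \delta/(x - \delta)$, the constraint $x > 2\delta$ gives $s \in (0, 1)$, and the bracket becomes $2 s^4/\delta^3 + 40 - 20(1 + s)^2$. The value $\delta = 0.05$ and the grid below are illustrative choices:

```python
import numpy as np

# Bracket in (e:ole3), written in s = delta / (x - delta):
#   h(s) = (2 / delta^3) * s^4 + 40 - 20 * (1 + s)^2.
# Since x > 2*delta on Q'_delta, the relevant range is 0 < s < 1.
delta = 0.05
s = np.linspace(1e-6, 1.0 - 1e-6, 100001)
h = (2.0 / delta**3) * s**4 + 40.0 - 20.0 * (1.0 + s)**2
assert h.min() > 0.0
```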
One immediate consequence of Lemma~\ref{l:oleinik} is that for any~$t > 0$ the limit $\lim_{x \to 0^+} n_t(x)$ must exist.
This is the content of Lemma~\ref{l:n0exist}, and we prove it here.
\begin{proof}[Proof of Lemma~\ref{l:n0exist}]
Notice that for any $t>0$, \eqref{e:dxnLower} implies
\begin{equation*}
\partial_x\paren[\Big]{n_t+ x\paren[\Big]{\frac\alpha2 + \frac1{2t}}+\frac{5x^2}4} \ge 0.
\end{equation*}
Thus the function $x \mapsto n_t(x)+ x\paren[\big]{\tfrac\alpha2 + \tfrac1{2t}}+\tfrac{5x^2}4$ is nondecreasing and bounded below, and hence has a limit as $x \to 0^+$.
Since the polynomial terms vanish at $x = 0$, the limit $\lim_{x \to 0} n_t(x)$ exists as claimed.
\end{proof}
\subsection{Loss Formula and Flux Behavior at \texorpdfstring{$x = 0$}{x=0}.}
Next we prove Proposition~\ref{p:lossFormula} that provides an explicit expression relating the decrease in the total photon number to the outflux of zero-energy photons.
\begin{proof}[Proof of Proposition~\ref{p:lossFormula}]
Integrating~\eqref{e:komp} on~$(x, R) \times (s, t)$ we see
\begin{equation*}
\int_x^R (n_t(y) - n_s(y)) \, dy
= \int_s^t J(R, n_\tau) \, d\tau - \int_s^t (x^2 \partial_x n_\tau + (x^2 - 2x) n_\tau + n_\tau^2) \, d\tau \,.
\end{equation*}
The first term on the right vanishes as $R \to \infty$ (equation~\eqref{jr}).
The second term requires care: we can't directly send $x \to 0$ as we don't know the behavior of~$x^2 \partial_x n_\tau(x)$ at this stage.
We instead average this term for~$x \in (\epsilon, 2\epsilon)$, and then send $\epsilon \to 0$.
Using the notation~$\Xint-_\epsilon^{2\epsilon} f \, dx$ to denote the averaged integral~$\frac{1}{\epsilon} \int_\epsilon^{2\epsilon} f \, dx$, we observe
\begin{equation}\label{e:avN}
\Xint-_\epsilon^{2\epsilon}
\int_x^\infty (n_t(y) - n_s(y)) \, dy \, dx
= - \int_s^t \Xint-_{\epsilon}^{2\epsilon} (x^2 \partial_x n_\tau + (x^2 - 2x) n_\tau + n_\tau^2) \, dx \, d\tau \,.
\end{equation}
Notice that for $\tau>0$ fixed,
\begin{equation*}
\Xint-_\epsilon^{2\epsilon} x^2 \partial_x n_\tau \, dx
= \frac{1}{\epsilon} \brak[\Big]{ x^2 n_\tau }_\epsilon^{2\epsilon}
- 2 \Xint-_\epsilon^{2\epsilon} x n_\tau \, dx\,,
\end{equation*}
which is uniformly bounded and vanishes as $\epsilon \to 0$.
The dominated convergence theorem now implies
\begin{equation*}
\lim_{\epsilon \to 0} \int_s^t \Xint-_\epsilon^{2\epsilon} x^2 \partial_x n_\tau \, dx \, d\tau = 0\,.
\end{equation*}
Of course, Lemma~\ref{l:n0exist} and the dominated convergence theorem also imply
\begin{equation*}
\lim_{\epsilon \to 0} \int_s^t \Xint-_{\epsilon}^{2\epsilon} ((x^2 - 2x) n_\tau + n_\tau^2) \, dx \, d\tau
= \int_s^t n_\tau(0)^2 \, d\tau\,.
\end{equation*}
By the fundamental theorem of calculus the left hand side of~\eqref{e:avN} converges to $N(n_t) - N(n_s)$ as $\epsilon \to 0$.
Thus sending~$\epsilon \to 0$ in~\eqref{e:avN} yields~\eqref{e:loss} as claimed.
\end{proof}
Using the loss formula (Proposition~\ref{p:lossFormula}) in conjunction with the Oleinik inequality (Lemma~\ref{l:oleinik}) we can now prove Lemma~\ref{l:intJ0} concerning convergence of the flux $J(x, n_\tau)$ as $x \to 0$.
\begin{proof}[Proof of Lemma~\ref{l:intJ0}]
Observe first
\begin{equation*}
\lim_{x \to 0} \int_s^t J(x, n_\tau) \, d\tau
= \lim_{x \to 0} \int_x^\infty \paren[\big]{ n_s(y) - n_t(y) } \, dy
= N(n_s) - N(n_t)
= \int_s^t n_\tau(0)^2 \, d\tau \,,
\end{equation*}
proving~\eqref{e:intJ0}.
Here the first equality follows from~\eqref{e:komp} and~\eqref{jr}, and the last equality follows from Proposition~\ref{p:lossFormula}.
In order to prove the stronger convergence stated in~\eqref{e:a0}, we require $s > 0$.
Let
\begin{equation}\label{e:varphiDef}
\varphi_\tau(y) \defeq \frac{\alpha}2 + \frac{5 y}{2} +\frac{1}{2\tau }\,,
\end{equation}
where~$\alpha$ is as in~\eqref{e:dxnLower} and note that Lemma~\ref{l:oleinik} implies $\partial_x n_t \geq -\varphi_t $.
Thus
\begin{equation*}
\abs{\partial_x n_t} \leq \partial_x n_t + 2 {\varphi_t}\,,
\end{equation*}
and hence
\begin{align*}
\int_s^t x^2 \abs{\partial_x n_\tau } \, d\tau
&\leq \int_s^t x^2 \partial_x n_\tau \, d\tau
+ 2 \int_s^t x^2 {\varphi_\tau } \, d\tau
\\
&= \int_s^t (J(x, n_\tau) - n_\tau^2(x) ) \, d\tau
+ \int_s^t \paren[\big]{
2 x^2 {\varphi_\tau}
- (x^2 - 2x) n_\tau
}
\, d\tau\,.
\end{align*}
Using \eqref{e:intJ0} and the dominated convergence theorem, the right hand side vanishes as $x \to 0$.
This proves the first identity in~\eqref{e:a0}.
The second identity follows immediately from the first and the dominated convergence theorem.
\end{proof}
\subsection{Contraction (Lemma~\ref{l:l1contract}).}
Our aim in this section is to prove that the solution operator to~\eqref{e:komp}--\eqref{e:noflux} is an~$L^1$ contraction (Lemma~\ref{l:l1contract}).
\begin{lemma}\label{l:contraction}
Let $T > 0$, and let $m, n \in L^\infty(Q_T)$ be two nonnegative solutions to \eqref{e:komp}--\eqref{e:noflux} whose initial data satisfy~\eqref{e:x2n}.
For any $0 < r < R < \infty$, and any $0<s<t \leq T$, we have
\begin{align}\label{eqn:contract1}
\nonumber
\MoveEqLeft
\int^R_r \abs{ n_t(x)-m_t(x)} \, dx
+ \int^t_s\operatorname{sign}\paren[\big]{
n_\tau(r)-m_\tau(r)}
\paren[\big]{
J(r,n_\tau)-J(r,m_\tau)
}
\, d\tau
\\
\nonumber
&\leq \int^R_r \abs{ n_s(x)-m_s(x) } \, dx
\\
&\phantom{\leq}
+\int^t_s \operatorname{sign}\paren[\big]{ n_\tau(R)-m_\tau(R) }
\paren[\big]{ J(R,n_\tau)-J(R,m_\tau) }
\, d\tau \, .
\end{align}
\end{lemma}
Momentarily postponing the proof of Lemma~\ref{l:contraction}, we prove Lemma~\ref{l:l1contract}.
\begin{proof}[Proof of Lemma~\ref{l:l1contract}]
We only need to send $r \to 0$ and $R \to \infty$ in~\eqref{eqn:contract1}.
By~\eqref{e:x2nPtwise} and~\eqref{e:x2dxnVanish}, the last term on the right of~\eqref{eqn:contract1} vanishes as $R \to \infty$.
The convergence as $r \to 0$ requires care: While $J(r, n_\tau) \to n_\tau(0)^2$ in $L^1((s, t))$, the sign function is discontinuous and the pre-factor $\sign(n_\tau(r) - m_\tau(r))$ may not converge.
Expanding out the flux explicitly, however, allows us to still compute the limit as $r \to 0$.
Indeed, let~$J_{\mathit{lin}}$ denote the linear part of the flux~$J$.
That is, define
\begin{equation}\label{e:Jlin}
J_\mathit{lin}(x, n) \defeq x^2 \partial_x n + (x^2 - 2x) n\,,
\end{equation}
and note $J(x, n) = J_\mathit{lin}(x, n) + n^2$.
Now
\begin{align*}
\MoveEqLeft
\int_s^t
\sign( n_\tau(r) - m_\tau(r) ) (J(r, n_\tau) - J(r, m_\tau))
\, d\tau
\\
&= \int_s^t
\abs{n_\tau(r)^2 - m_\tau(r)^2}
\, d\tau
\\
&\qquad+ \int_s^t
\sign( n_\tau(r) - m_\tau(r) ) (J_{\mathit{lin}}(r, n_\tau) - J_{\mathit{lin}}(r, m_\tau))
\, d\tau.
\end{align*}
The first term on the right converges to $\int_s^t \abs{ n_\tau(0)^2 - m_\tau(0)^2 } \, d\tau$ as $r \to 0$.
For any $0 < s \leq t$, \eqref{e:a0} and the dominated convergence theorem imply that the second term vanishes as $r \to 0$.
This proves~\eqref{eqn:contract} for any $0 < s \leq t$.
Using continuity in $L^1$ and sending $s \to 0$ concludes the proof.
\end{proof}
It remains to prove Lemma~\ref{l:contraction}.
Before going through the proof, we first perform a formal calculation showing why~\eqref{eqn:contract1} is expected.
Let $w=n-m$ and note
\begin{equation}\label{e:w1}
\partial_t w -\partial_x (J(x, n)-J(x, m))=0 \,.
\end{equation}
Multiplying by~$\sign(w)$ and integrating in space gives
\begin{align}\label{e:wFormal}
\nonumber
\MoveEqLeft
\int_r^R \partial_t \abs{w} \, dx
- \brak[\Big]{ \sign(w) ( J(x, n) - J(x, m) ) }_r^R
\\
&\qquad= -\int_r^R \partial_x \sign(w) ( J(x, n) - J(x, m) ) \, dx \,.
\end{align}
We will show (using the structure of~$J$) that the right hand side is negative.
Once this is established, integrating in time will yield~\eqref{eqn:contract1} as desired.
To make this argument rigorous, we will regularize $\sign(w)$, and explicitly check that the right hand side of~\eqref{e:wFormal} is indeed negative.
As we will shortly see we only obtain a one sided bound for this term after regularization: while it is certainly negative, it need not vanish.
\begin{proof}[Proof of Lemma~\ref{l:contraction}]
Let $\operatorname{sign}_\epsilon$ be an odd smooth increasing function on $\mathbb{R}$ such that
\begin{equation}\label{eqn:sgnepsdef}
\operatorname{sign}_\epsilon(x)\defeq
\begin{cases}
1 & x>\epsilon \,,\\
-1 & x < -\epsilon\,,
\end{cases}
\end{equation}
given by $\operatorname{sign}_\epsilon(x)=2\int_0^x\eta_\epsilon(z)\,dz$ where $\eta_\epsilon$ is a standard mollifier~\cite{Evans98},
and define
\begin{equation*}
\zeta_{\epsilon}(x) = \int_0^x \operatorname{sign}_\epsilon(y) \, dy \,.
\end{equation*}
Note that~$\zeta_\epsilon$ is a smooth convex even function with~$\zeta_\epsilon(0) = 0$.
Multiplying equation~\eqref{e:w1} by~$\sign_\epsilon(w)$ and integrating by parts yields
\begin{align*}
\MoveEqLeft
\int_s^t \int_r^R \partial_t \zeta_\epsilon(w) \, dx \, d\tau - \int_s^t \brak[\Big]{ \operatorname{sign}_\epsilon(w) \paren[\big]{ J(x, n)-J(x, m) } }_r^R \, d\tau
\\
& =- \int_s^t \int_r^R \partial_x \operatorname{sign}_\epsilon(w)
\paren[\big]{ J(x, n)-J(x, m) } \, dx \, d\tau \,.
\end{align*}
We note that $\zeta_\epsilon(w)$ increases to $|w|$, and $\operatorname{sign}_\epsilon(w) \to \operatorname{sign}(w)$.
Thus to complete the proof, it suffices to find an upper bound for the right hand side that vanishes as $\epsilon \to 0$.
Using Young's inequality, we have
\begin{align*}
\MoveEqLeft
-\int_s^t \int_r^R \operatorname{sign}_\epsilon'(w)\partial_x w (x^2 \partial_x w +(x^2-2x)w +w(m+n)) \, dx \, d\tau \\
& \leq - \int_s^t \int_r^R \operatorname{sign}_\epsilon'(w) \left(\frac{x^2}{2} (\partial_x w)^2 -\frac{w^2}{2x^2} (x^2-2x +m+n)^2 \right) \, dx \, d\tau\\
& \leq \frac{1}{2r^2} \int_s^t \int_r^R \operatorname{sign}_\epsilon'(w) w^2 (R^2+ \norm{m}_{L^\infty(Q_T)} + \norm{n}_{L^\infty(Q_T)} )^2 \, dx \, d\tau.
\end{align*}
Note that for any $z \in \mathbb{R}$, we must have $z^2 \sign_\epsilon'(z) \leq C\epsilon$ for some constant~$C$ depending only on the mollifier.
Indeed, if $\abs{z} \geq \epsilon$, then $\sign_\epsilon'(z) = 0$.
On the other hand, if $\abs{z} < \epsilon$, then $z^2 \sign_\epsilon'(z) = 2 z^2 \eta_\epsilon(z) \leq 2\epsilon^2 \norm{\eta_\epsilon}_{L^\infty} \leq C\epsilon$.
Using this in the above yields
\begin{multline*}
-\int_s^t \int_r^R \operatorname{sign}_\epsilon'(w)\partial_x w (x^2 \partial_x w +(x^2-2x)w +w(m+n)) \, dx \, d\tau \\
\leq C\epsilon\frac{(R^2+ \norm{m}_{L^\infty(Q_T)} + \norm{n}_{L^\infty(Q_T)})^2}{2r^2}(t-s)(R-r) \,,
\end{multline*}
which vanishes as~$\epsilon \to 0$.
This completes the proof.
\end{proof}
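The elementary bound $z^2 \operatorname{sign}_\epsilon'(z) \leq C\epsilon$ used above can also be checked numerically; the sketch below uses a concrete bump-function mollifier (an illustrative choice; any standard mollifier gives the same conclusion):

```python
import numpy as np

def eta(z):
    # Smooth bump supported in (-1, 1); an illustrative mollifier profile.
    out = np.zeros_like(z)
    mask = np.abs(z) < 1.0
    out[mask] = np.exp(-1.0 / (1.0 - z[mask]**2))
    return out

# Normalize so that eta_eps(z) = (c/eps) * eta(z/eps) integrates to one.
zz = np.linspace(-1.0, 1.0, 200001)
c = 1.0 / (eta(zz).sum() * (zz[1] - zz[0]))
C = 2.0 * c * np.exp(-1.0)  # 2 * eps * sup(eta_eps); the sup is at z = 0

for eps in (0.1, 0.01, 0.001):
    z = np.linspace(-eps, eps, 100001)
    sign_eps_prime = 2.0 * (c / eps) * eta(z / eps)  # sign_eps' = 2 eta_eps
    assert (z**2 * sign_eps_prime).max() <= C * eps
```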
\subsection{Comparison (Lemma~\ref{l:comparison}).}
This section is devoted to the proof of the comparison principle.
We begin by stating the definitions of the sub and super solutions we use.
\begin{definition}\label{d:subsol}
Let $Q = (0, \infty) \times (0, \infty)$, and $n \in C^{2,1}(Q)$ be a function such that $n_t \to n_0$ in $L^1([0, \infty))$, and whenever $0 < s \leq t$ we have
\begin{equation}\label{e:vanish0}
\lim_{x \to 0^+} n_t(x) = n_t(0)\,,
\qquad
\lim_{x \to 0^+} \int_s^t \abs{x^2 \partial_x n_\tau} \, d\tau = 0\,.
\end{equation}
We say $n$ is a \emph{sub-solution} to~\eqref{e:komp}--\eqref{e:noflux} if whenever $0 < s \leq t$ we have
\begin{gather*}
\partial_t n \leq \partial_x J(x, n)\,,
\qquad\text{and}\qquad
\limsup_{x \to \infty} \int_s^t J(x, n_\tau)\, d\tau \leq 0\,.
\end{gather*}
We say $n$ is a~\emph{super-solution} to~\eqref{e:komp}--\eqref{e:noflux} if whenever $0 < s \leq t$ we have
\begin{equation*}
\partial_t n \geq \partial_x J(x, n)\,,
\qquad\text{and}\qquad
\liminf_{x \to \infty} \int_s^t J(x, n_\tau)\, d\tau \geq 0\,.
\end{equation*}
\end{definition}
\begin{remark}
We note that the (globally unique) solutions provided by Theorem~\ref{t:exist} are both sub and super-solutions in the sense of Definition~\ref{d:subsol}.
Certainly by~\eqref{e:komp} we have $\partial_t n = \partial_x J$.
For the flux at infinity note that the vanishing conditions~\eqref{e:x2nPtwise} and \eqref{e:x2dxnVanish} imply
\begin{equation*}
\lim_{x \to \infty} \int_s^t J(x, n_\tau) \, d\tau = 0\,,
\end{equation*}
as needed.
\end{remark}
\begin{remark}
As with the classical theory, we can relax the requirement that $n \in C^{2,1}(Q)$.
For our purposes it will suffice to consider functions that are $C^{2,1}$, except on finitely many, non-degenerate, disjoint $C^2$ curves of the form $x = \gamma(t)$.
At each point $(x, t)$ on one of these curves we require continuity.
Moreover, we require sub-solutions to satisfy the ``V corner'' condition
\begin{subequations}
\begin{equation}\label{e:cornerSubSol}
-\infty < \partial_{x}^- n_{t}(x) \leq \partial_{x}^+ n_{t}(x) < \infty\,,
\end{equation}
and super-solutions to satisfy the ``inverted V corner'' condition
\begin{equation}\label{e:cornerSupSol}
\infty > \partial_{x}^- n_{t}(x) \geq \partial_{x}^+ n_{t}(x) > -\infty\,.
\end{equation}
\end{subequations}
While a more general notion based on viscosity solutions is possible, it is unnecessary for our purposes since all the sub and super-solutions we construct are in the above form.
\end{remark}
\medskip
We now provide some intuition as to why the comparison principle (Lemma~\ref{l:comparison}) holds.
Suppose momentarily
\begin{equation}\label{e:nmStrict}
\partial_t m \leq \partial_x J(x, m)\,,
\qquad
\partial_t n > \partial_x J(x, n)\,,
\end{equation}
and that $x \partial_x m$, $x^2 \partial_x^2 m$, $x \partial_x n$ and $x^2 \partial_x^2 n$ all vanish as $x \to 0$.
Let $t_0$ be the first time at which $n_{t_0}(0) = m_{t_0}(0)$.
A standard comparison principle argument (see for instance~\cite[Th. 2.6.16]{Friedman64}) shows $n \geq m$ on $[0, \infty) \times [0, t_0]$, and hence $\partial_t m_{t_0}(0) \geq \partial_t n_{t_0}(0)$ and $\partial_x (m_{t_0}(0)^2) \leq \partial_x (n_{t_0}(0)^2)$.
Using~\eqref{e:nmStrict} and our vanishing assumptions this would imply $\partial_t m_{t_0}(0) < \partial_t n_{t_0}(0)$, which is a contradiction.
In order to make the above argument rigorous, we would have to show that the terms $x \partial_x n$ and $x^2 \partial_x^2 n$ vanish as $x \to 0$.
Presently we don't know whether or not either of these conditions holds, and the most we can prove (equation~\eqref{e:a0}) is not strong enough to make the above argument work.
To circumvent this issue, we instead prove Lemma~\ref{l:comparison} using the technique used to prove the weak maximum principle.
\iffalse
\RP{Do we really need all this intuition and formal argument, here and before the
proof of Lemma 3.2?}
Set $w = n - m$ and note
\begin{equation}\label{e:wSup}
\partial_t w \geq x^2 \partial_x^2 w + (x^2 + 2m) \partial_x w + (2x - 2 + 2\partial_x n) w\,.
\end{equation}
\RP{$2\partial_x n$, yes?}
To prove Lemma~\ref{l:comparison} we need to show $w \geq 0$.
Let us first study the auxiliary equation
Through relatively standard techniques, one can reduce this to showing that if~$\tilde w$ is some function such that
\begin{equation}\label{e:wSimp}
\partial_t \tilde w > x^2 \partial_x^2 \tilde w + (x^2 + 2m) \partial_x \tilde w + \tilde c \tilde w\,,
\qquad
\liminf_{x \to \infty} J(x, \tilde w) \geq 0\,,
\qquad
\tilde w_0(x) \geq 0\,,
\end{equation}
then we must have~$\tilde w \geq 0$.
Here $\tilde c$ is some nonnegative function.
Using the boundary conditions at infinity, we can ensure $\tilde w \geq 0$ for all sufficiently large~$x$.
By truncating near infinity and using a standard minimum principle argument one can show that a minimum of~$\tilde w$ can not be attained at any point $(x, t)$ with $x, t > 0$.
Certainly when $t = 0$, we already know $\tilde w_0 \geq 0$.
How about when $x = 0$?
The standard minimum principle typically \emph{assumes} $\tilde w \geq 0$ at the boundary.
This assumption is \emph{not} available to us!
Making this assumption boils down to imposing a boundary condition for~\eqref{e:komp} at $x = 0$, which is precisely what we're trying to avoid.
Thus, to prove the comparison principle, we now need to prove that~$\tilde w$ can not attain a minimum at~$x = 0$.
To do this, suppose for contradiction that $(0, t_0)$ is the first time at which $\tilde w$ attains a minimum.
Since this is the first time at which $\tilde w$ attains a minimum, we must have $\partial_x \tilde w_{t_0}(0) \geq 0$.
Inserting this in~\eqref{e:wSimp} now forces~$\partial_t \tilde w_{t_0}(0) > 0$, which is a contradiction unless $t_0 = 0$.
While this intuitively explains why we must have~$\tilde w \geq 0$, without assuming the boundary condition $\tilde w_t(0) \geq 0$, we do not presently know how to make this argument rigorous.
Indeed, to make this argument work we need to show~$x^2 \partial_x \tilde w$ and $x^2 \partial_x^2 \tilde w$ both vanish as $x \to 0$.
Presently we don't know whether or not either of these conditions holds, and the strongest result we can presently prove (equation~\eqref{e:a0}) is much weaker than necessary.
To circumvent this issue, we prove Lemma~\ref{l:comparison} using the technique used to prove the weak maximum principle.
\fi
\begin{proof}[Proof of Lemma~\ref{l:comparison}]
We begin by providing a formal argument.
Let $w = n - m$ and observe
\begin{equation*}
\partial_t w - \partial_x (J(x, n) - J(x, m)) \geq 0\,.
\end{equation*}
Multiplying this by~$-\mathbbm{1}_{\set{w\leq 0}}$ and integrating in space, using $w^-= -\mathbbm{1}_{\set{w\leq 0}} w$, yields
\begin{align*}
\MoveEqLeft
\int_0^\infty \partial_t w^- \, dx
+ \brak[\Big]{ \mathbbm{1}_{\set{w \leq 0}} ( J(x, n) - J(x, m) ) }_0^\infty
\\
&\qquad\leq \int_0^\infty \partial_x \mathbbm{1}_{\set{w \leq 0}} ( J(x, n) - J(x, m) ) \, dx \,.
\end{align*}
Since $n$ is a super-solution and~$m$ is a sub-solution, we know
\begin{equation*}
\liminf_{x \to \infty} J(x, n) - J(x, m) \geq 0 \,.
\end{equation*}
Consequently,
\begin{align*}
\MoveEqLeft
\int_0^\infty \partial_t w^- \, dx
- \mathbbm{1}_{\set{w \leq 0}}
(n(0)^2 - m(0)^2)
\leq \int_0^\infty \partial_x \mathbbm{1}_{\set{w \leq 0}} ( J(x, n) - J(x, m) ) \, dx \,.
\end{align*}
As before, we claim the right hand side is negative.
Once this is established, integrating in time immediately yields
\begin{equation}\label{e:wMinus}
\int_0^\infty w_t(x)^- \, dx
+ \int^t_s w_\tau(0)^- (n_\tau(0) + m_\tau(0) ) \, d\tau
\leq \int_0^\infty w_s(x)^- \, dx\,.
\end{equation}
Sending $s \to 0$ we see that $w_t^- = 0$ for all $t > 0$, forcing $m_t(x) \leq n_t(x)$ for all $x, t \geq 0$.
To make the above formal argument rigorous, we use the same regularization procedure as Lemma~\ref{l:contraction}.
Let $0 < r < R < \infty$, and $0 < s < t \leq T$, and let $\mathcal H_\epsilon$ be a smooth increasing function such that
\begin{equation*}
\mathcal H_\epsilon(x) \defeq
\begin{cases}
-1 & x < -\epsilon \,,
\\
0 & x > 0 \,,
\end{cases}
\end{equation*}
given by $\mathcal H_\epsilon(x) = \int_{-\epsilon}^{\epsilon+2x}\eta_\epsilon(z)\,dz - 1$, where $\eta_\epsilon$ is the standard mollifier,
and let
\begin{equation*}
\zeta_\epsilon(x) = \int_0^x \mathcal H_\epsilon(y) \, dy\,.
\end{equation*}
By following the proof of Lemma~\ref{l:contraction}, we deduce the following analog of~\eqref{eqn:contract1}:
\begin{align}\label{eqn:compare1}
\nonumber
\MoveEqLeft
\int^R_r w_t(x)^- \, dx
- \int^t_s
\mathbbm{1}_{\set{w \leq 0}}
\paren[\big]{
J(r,n_\tau)-J(r,m_\tau)
}
\, d\tau
\\
&\leq \int^R_r w_s(x)^- \, dx
-\int^t_s \mathbbm{1}_{\set{w \leq 0}} \paren[\big]{ J(R,n_\tau)-J(R,m_\tau) }
\, d\tau \, .
\end{align}
Now sending~$r \to 0$ (using~\eqref{e:vanish0}) and~$R \to \infty$ as in the proof of Lemma~\ref{l:l1contract}, we obtain~\eqref{e:wMinus} as desired.
This finishes the proof.
\end{proof}
\subsection{Pointwise Bounds.}\label{s:pointwise}
The comparison principle (Lemma~\ref{l:comparison}) allows us to easily obtain pointwise upper bounds, provided we can find suitable super-solutions.
The key to many of our estimates is a family of stationary super-solutions that allows us time independent bounds on solutions with exponentially decaying initial data.
We present this next.
\begin{figure}[htb]
\includegraphics[width=.75\linewidth]{figures/Sgamma.pdf}
\caption{Plots of the stationary super-solution~$S_\gamma$ for various values of~$\gamma$.}
\end{figure}
\begin{lemma}\label{l:uss}
For any~$\gamma \geq 0$ the function $S_\gamma$ defined by
\begin{equation}\label{d:S}
S_\gamma(x) =\hat n_0(x) + \gamma m(x)\,,
\quad\text{where}\quad
m(x)
=\frac{x^2e^x}{(e^x-1)^2}\,,
\end{equation}
is a stationary super-solution to~\eqref{e:komp}--\eqref{e:noflux}.
\end{lemma}
An immediate consequence of Lemmas~\ref{l:comparison} and~\ref{l:uss} is a uniform in time upper bound for any solution of~\eqref{e:komp}--\eqref{e:noflux} with sufficiently rapidly decaying initial data.
\begin{corollary}\label{c:Sbound}
Let $n$ be the solution to~\eqref{e:komp}--\eqref{e:noflux} with initial data~$n_0 \geq 0$.
If for some~$\gamma \geq 0$ we have~$n_0 \leq S_\gamma$, then we must have $n_t(x) \leq S_\gamma(x)$ for all $t \geq 0$, $x \geq 0$.
\end{corollary}
\begin{remark}
Corollary~\ref{c:Sbound} applies to any initial data that can be bounded by $C(1+ x^2) e^{-x}$ for some $C \geq 0$, as in~\eqref{e:expDecay}.
\end{remark}
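As a numerical illustration of this remark (not part of the proof), the sketch below checks the bound $(1+x^2)e^{-x} \leq S_\gamma(x)$ on a grid for the illustrative choice $C = 1$, $\gamma = 2$; here we take $\hat n_0(x) = x^2/(e^x - 1)$, the equilibrium for which $J(x, \hat n_0) = 0$:

```python
import numpy as np

x = np.linspace(1e-8, 50.0, 200001)

# Equilibrium n_hat (so that J(x, n_hat) = 0) and m from (d:S).
n_hat = x**2 / np.expm1(x)
m = x**2 * np.exp(x) / np.expm1(x)**2
S2 = n_hat + 2.0 * m                      # S_gamma with gamma = 2

# Initial data satisfying (e:expDecay) with C = 1.
initial = (1.0 + x**2) * np.exp(-x)
assert np.all(initial <= S2)
```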
\begin{proof}[Proof of Lemma~\ref{l:uss}]
Given the formula~\eqref{d:S}, one can directly differentiate and check that~$S_\gamma$ is a super-solution.
A more illuminating proof is as follows:
Observe
\begin{equation*}
J(x, S_\gamma) = J(x, \hat n_0) + J_\mathit{lin}(x, \gamma m) + \gamma^2 m^2 + 2 \gamma m \hat n_0\,,
\end{equation*}
where $J_{\mathit{lin}}$ (defined in equation~\eqref{e:Jlin}) collects the linear terms of~$J$.
Since $J(x, \hat n_0) = 0$, we will now look for functions~$m$ for which
\begin{equation*}
J(x, S_\gamma) = \gamma^2 m^2 \,.
\end{equation*}
For such functions we obtain the linear ODE
\begin{equation*}
J_\mathit{lin}(x, m) + 2 m \hat n_0 = 0\,,
\end{equation*}
which simplifies to
\begin{equation}\label{e:mEq}
x^2 \partial_x m + (x^2 - 2x + 2\hat n_0)m = 0\,.
\end{equation}
Solving this ODE with the normalization~$m(0) = 1$ yields the formula for~$m$ in~\eqref{d:S}.
Now to check~$S_\gamma$ is a stationary super-solution, we need to verify
\begin{equation*}
\partial_x J(x, S_\gamma) \leq 0
\qquad\text{and}\qquad
\lim_{x \to \infty} J(x, S_\gamma) \geq 0\,.
\end{equation*}
The second condition holds because $J(x, S_\gamma) = \gamma^2 m^2 \geq 0$.
To check the first condition we note from~\eqref{d:S}
that~$m>0$, and from~\eqref{e:mEq} it follows
\begin{equation*}
\frac{\partial_x m}{m}
= \frac{2x - x^2 - 2\hat n_0}{x^2}
= \frac{g(x)}{x(e^x - 1)}\,,
\quad\text{where}\quad
g(x) \defeq (2 -x)(e^x - 1) - 2x \,.
\end{equation*}
Notice $g(0) = 0$, $g'(0) = 0$, and $g''(x) = - x e^x \leq 0$, which forces $g(x) \leq 0$ for all $x \geq 0$.
Consequently, $\partial_x m \leq 0$ and hence $\partial_x J(x, S_\gamma) = 2 \gamma^2 m \partial_x m \leq 0$, concluding the proof.
\end{proof}
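As a quick symbolic sanity check (outside the formal argument), one can verify with SymPy both that the formula for~$m$ in~\eqref{d:S} solves the ODE~\eqref{e:mEq}, and the concavity claim for~$g$. Here we take $\hat n_0(x) = x^2/(e^x - 1)$, the Planck spectrum in these variables; this explicit form is an assumption on our part, consistent with $J(x, \hat n_0) = 0$ and with the resulting formula for~$m$.

```python
import sympy as sp

x = sp.symbols('x', positive=True)
n0 = x**2/(sp.exp(x) - 1)                # assumed Planck form of \hat n_0
m = x**2*sp.exp(x)/(sp.exp(x) - 1)**2    # the function m from (d:S)

# m solves the linear ODE (e:mEq): x^2 m' + (x^2 - 2x + 2 n0) m = 0
ode_residual = sp.simplify(x**2*sp.diff(m, x) + (x**2 - 2*x + 2*n0)*m)
assert ode_residual == 0

# g(x) = (2 - x)(e^x - 1) - 2x satisfies g(0) = g'(0) = 0 and g'' = -x e^x
g = (2 - x)*(sp.exp(x) - 1) - 2*x
assert g.subs(x, 0) == 0 and sp.diff(g, x).subs(x, 0) == 0
assert sp.simplify(sp.diff(g, x, 2) + x*sp.exp(x)) == 0
```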
In case the initial data does not decay exponentially, we can still obtain explicit time-independent
pointwise bounds. The super-solutions, however, are not as natural as the $S_\gamma$ defined in~\eqref{d:S}.
\begin{proposition}\label{p:nBdd}
For the solution~$n$ given by Theorem~\ref{t:exist}, $(1+x^2)n(x,t)$ is uniformly bounded on
$Q = (0, \infty)\times(0, \infty)$.
Moreover, the uniform decay properties \eqref{e:x2nPtwise} and \eqref{e:x2dxnVanish} stated in Theorem~\ref{t:exist} hold with $T=\infty$.
\end{proposition}
\begin{remark}
Theorem~2 in~\cite{EscobedoHerreroEA98} already asserts that the solution is uniformly bounded in time.
However, we were unable to verify the proof given. It asserts that a function of the
form $\beta(1\varmin x^{-2})$ is a super-solution to equation (3.2) of \cite{EscobedoHerreroEA98},
which is a truncated form of equation \eqref{e:komp} above.
This does not work if $\beta$ is a fixed constant independent of both time
and truncation.
Instead, the function $\beta e^{4t}(1\varmin x^{-2})$ (corresponding to (2.3) in \cite{EscobedoHerreroEA98})
works as a uniform super-solution and provides local in time bounds as we stated in Theorem~\ref{t:exist}.
\end{remark}
\begin{proof}[Proof of Proposition~\ref{p:nBdd}]
Define
\begin{equation*}
Z(x) =
\begin{dcases}
e^{20 - x} + c_0 & x < 15\,,\\
\frac{c_1}{x (x - 5)} & x \geq 15\,,
\end{dcases}
\end{equation*}
for constants $c_0$, $c_1$ that will be chosen as follows:
By~\eqref{e:x2n} we can always choose $c_1 > 0$ so that $n_0 \leq Z$ for all $x > 15$.
A direct calculation shows that if
\begin{equation*}
c_1 > 900 e^5
\end{equation*}
then the corner condition $\partial_x^- Z(15) > \partial_x^+ Z(15)$ is satisfied.
Making $c_1$ larger if necessary, we can ensure that there exists $c_0 \geq 0$ such that $Z$ is continuous at $x = 15$ and $n_0 \leq Z$ for all $x > 0$.
We claim $Z$ is a super-solution to~\eqref{e:komp}--\eqref{e:noflux}.
Clearly
\begin{equation*}
J(x, Z) = \frac{ c_1 \paren{x^{4} - 9 x^{3} + 15 x^{2}+c_1}} {x^{2} \paren{x - 5}^{2}} \xrightarrow{x \to \infty} c_1\,.
\end{equation*}
Thus we only need to verify $\partial_x J(x, Z) \leq 0$.
For $x > 15$ we note
\begin{equation*}
\partial_x J(x, Z)
= 2 Z \partial_x Z + \partial_x J_{\mathit{lin}}(x, Z)
\leq \frac{c_1 (15 - x)}{(x - 5)^3} < 0\,,
\end{equation*}
where $J_{\mathit{lin}}$ is defined in~\eqref{e:Jlin}.
For $x < 15$ we compute
\begin{equation*}
\partial_x J(x, Z) = -2 (c_0 + e^{20-x})(e^{20-x} + 1 - x) < 0\,,
\end{equation*}
provided $e^{20 - x} + 1 - x > 0$.
But this condition holds, since $x < 15 < e^5 < e^{20-x}$.
Thus $Z$ is a stationary super-solution of~\eqref{e:komp}--\eqref{e:noflux}.
By Lemma~\ref{l:comparison} this implies $n \leq Z$ for all $t \geq 0$, concluding the proof
that $(1+x^2)n(x,t)$ is uniformly bounded on $Q$.
Now the uniform decay properties \eqref{e:x2nPtwise} and \eqref{e:x2dxnVanish}
for $T=\infty$ follow exactly as in \cite[pp.~3849--50]{EscobedoHerreroEA98},
based on a finer comparison argument for $x>R$ large and classical regularity estimates.
\end{proof}
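The algebra behind the displayed formula for~$J(x, Z)$ on $x > 15$ can be confirmed with a short symbolic computation, writing the flux in the expanded form $J(x, n) = x^2 \partial_x n + (x^2 - 2x)n + n^2$, consistent with the expansion of~$\partial_x J$ in the operator~$\mathcal L$ used later:

```python
import sympy as sp

x, c1 = sp.symbols('x c1', positive=True)
Z = c1/(x*(x - 5))                       # the tail of Z for x >= 15

# Expanded form of the flux: J(x, n) = x^2 n' + (x^2 - 2x) n + n^2
J = x**2*sp.diff(Z, x) + (x**2 - 2*x)*Z + Z**2

target = c1*(x**4 - 9*x**3 + 15*x**2 + c1)/(x**2*(x - 5)**2)
assert sp.simplify(J - target) == 0
assert sp.limit(J, x, sp.oo) == c1       # the flux tends to c1 at infinity
```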
\subsection{Energy Estimates.}\label{s:energy}
We conclude this section by establishing $L^2$ energy estimates on solutions.
While such energy estimates usually play a central role in the study of parabolic problems, they are not as helpful in the present context.
Indeed, the proofs of our main results do not use~$L^2$ energy estimates, and they are only presented here for completeness.
\begin{proposition}\label{p:energy}
Let $n$ be a solution to~\eqref{e:komp}--\eqref{e:noflux} with nonnegative initial data~$n_0$ that satisfies~\eqref{e:x2n}.
Then, for any $t>s>0$, we have
\begin{align}\label{en}
\nonumber
\MoveEqLeft
\int_0^\infty n^2_t(x)\,dx +\int_s^t\int_0^\infty [n_\tau^2 +x ^2(\partial_x n_\tau)^2]\,dx\, d\tau
\\
&\leq \int_0^\infty n^2_s(x)\,dx
+ 2 \int_s^t\int_0^\infty x n_\tau^2 \, dx \, d\tau\,.
\end{align}
\end{proposition}
\begin{remark}
By Proposition~\ref{p:nBdd}, the term $\int_s^t \int_0^\infty x n_\tau^2 \, dx \, d\tau$ appearing on the right can be bounded by $C(t - s)$ for some constant $C = C(n_0)$.
\end{remark}
\begin{proof}[Proof of Proposition~\ref{p:energy}]
Let $0 < \epsilon < R < \infty$.
Multiplying~\eqref{e:komp} by $2n$ and integrating from~$\epsilon$ to $R$ yields
\begin{align}
\nonumber
\partial_t \int_\epsilon^R n^2 \,dx
& =
- 2\int_\epsilon^R (\partial_x n) J\, dx +\brak[\Big]{ 2Jn}_\epsilon^R
\\
\label{e:ee1}
& = -2\int_\epsilon^R [n^2+x ^2(\partial_x n)^2]dx +2\int_\epsilon^R x n^2dx +\Gamma(t)\,,
\end{align}
where
\begin{align*}
\Gamma(t) \defeq -\brak[\Big]{ (x^2-2x)n^2}^R_{\epsilon}
-\frac{2}{3}\brak[\Big]{ n^3}^R_{\epsilon} + 2\brak[\Big]{ Jn }^R_{\epsilon} \,.
\end{align*}
When~$\epsilon < 1$ and~$R > 2$ we observe
\begin{align*}
\Gamma(t)
& =- n^2(R, t)R(R-2)+\epsilon(\epsilon -2) n^2(\epsilon, t) -\frac{2}{3}n^3(R, t)+\frac{2}{3}n^3(\epsilon, t)+2\brak[\Big]{ Jn }^R_{\epsilon}
\\
& \leq 2J(R, t)n(R, t) +\frac{2}{3}n^3(\epsilon, t) -2J(\epsilon, t)n(\epsilon, t)\\
& = 2J(R, t)n(R, t)-\frac{4}{3}n^3(\epsilon, t) -2\epsilon^2 n(\epsilon, t)\partial_x n(\epsilon, t)+2\epsilon (2-\epsilon)n^2(\epsilon, t)\,.
\end{align*}
Since~$n$ is bounded and the flux vanishes at infinity (equation~\eqref{e:noflux}) the first term on the right vanishes as $R \to \infty$.
The second term is bounded above by~$0$.
By Lemma~\ref{l:oleinik}, the third term vanishes as~$\epsilon \to 0$.
The last term vanishes as~$\epsilon \to 0$, and so
\begin{equation*}
\lim_{\epsilon \to 0,~ R\to \infty} \Gamma(t) = 0\,.
\end{equation*}
Thus sending~$\epsilon \to 0$ and $R \to \infty$ in~\eqref{e:ee1} yields~\eqref{en} as claimed.
\end{proof}
\section{Finite Time Condensation.}\label{s:bec}
In this section we prove persistence (Proposition~\ref{p:persistence}) and establish the onset of photon loss
through a singularity in the slope (the second assertion in Proposition~\ref{p:formation}).
Throughout this section we assume $n_0$ is a nonnegative bounded function satisfying~\eqref{e:x2n}, and~$n$ is the unique global solution to~\eqref{e:komp}--\eqref{e:noflux} with initial data~$n_0$.
\subsection{Persistence.}
We now prove Proposition~\ref{p:persistence}, showing that once photon loss begins, it never stops.
\begin{proof}[Proof of Proposition~\ref{p:persistence}]
Suppose for some $T > 0 $ we have $n_T(0) > 0$.
We claim that $n_t(0) > 0$ for all $t > T$.
Once this claim is established, Proposition~\ref{p:persistence} follows immediately by setting~$t_* = \inf\set{t > 0 \st n_t(0) > 0}$.
To prove the claim recall by Lemma~\ref{l:oleinik} we know $\partial_x n_t \geq -\varphi_t$, where~$\varphi_t$ is defined by~\eqref{e:varphiDef}.
Integrating in~$x$ this implies
\begin{align*}
n_T(x) \geq \paren[\Big]{ n_T(0) - \int_0^x \varphi_T(y) \, dy }_+\,.
\end{align*}
Here $z_+$ denotes $\max\set{z, 0}$, the positive part of $z$.
Since the function $\int_0^x \varphi_T(y) \, dy$ is convex in~$x$, we must have
\begin{equation*}
n_T(x) \geq (a_T -b_T x)_+ \,,
\end{equation*}
where
\begin{equation}\label{e:abID}
a_T = n_T(0) > 0\,,
\qquad
b_T = \frac{a_T}{R}\,,
\end{equation}
and $R > 0$ is uniquely determined from
\begin{equation*}
a_T - \int_0^R \varphi_T(y) \, dy = 0\,.
\end{equation*}
Now for $t > T$, we define $a_t$ and $b_t$ to solve the ODE
\begin{equation}\label{e:ab}
\partial_t a=-2a(1+b)\,, \qquad \partial_t b=a-2b(1+b)\,,
\end{equation}
with initial data~\eqref{e:abID}.
Let
\begin{equation*}
q_t(x) = (a_t - b_t x)_+\,,
\end{equation*}
for $t \geq T$.
We claim that $q$ is a sub-solution to~\eqref{e:komp}--\eqref{e:noflux} (as in Definition~\ref{d:subsol}) for all $t \geq T$.
To see this, note first that $\partial_t (b / a) = 1$ and hence
\begin{equation}\label{e:b}
b_t = \frac{a_t b_T}{a_T} + a_t(t - T)\,.
\end{equation}
Using the first equation in~\eqref{e:ab} it now follows that both
\begin{equation*}
a_t > 0\qquad\text{and}\qquad
b_t > 0\,,
\end{equation*}
for all $t \geq T$.
Moreover, for~$\hat x_t \defeq a_t / b_t$, equation~\eqref{e:b} implies
\begin{equation*}
\hat x_t = \frac{R}{1 + R(t - T)} \in (0, R)\,,
\end{equation*}
for all $t > T$.
Now for $t > T$ and $x \in (0, \hat x_t)$ we compute
\begin{align*}
\partial_t q - \partial_x J(x, q)
=& \partial_t a- \partial_t b x -\partial_x [-x^2 b +(x^2 -2x)(a-bx)+(a-bx)^2]\\
=& \partial_t a +2a(1+b) -x( \partial_t b +2b+2b^2-a) +3x(bx-a)\\
=& 3x(bx- a) \leq 0.
\end{align*}
For $ x > \hat x_t$, $q_t = 0$ and so $\partial_t q_t = \partial_x J(x, q_t) = 0$.
Moreover, since $b_t > 0$ we note that the appropriate corner condition holds:
\begin{equation*}
\partial_x^- q_t(\hat x_t) = -b_t < 0 = \partial_x^+ q_t(\hat x_t)\,.
\end{equation*}
Thus~$q_t$ is a sub-solution to~\eqref{e:komp}--\eqref{e:noflux} for all $t \geq T$.
By the comparison principle (Lemma~\ref{l:comparison}) this implies $n_t(x) \geq q_t(x)$ for all $t \geq T$ and $x \geq 0$.
This implies $n_t(0) \geq q_t(0) = a_t > 0$ for all $t \geq T$, finishing the proof.
\end{proof}
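Two computations in the proof above are easy to confirm symbolically (a sketch with time frozen, substituting $\partial_t a$ and $\partial_t b$ from the ODE~\eqref{e:ab}): the identity $\partial_t(b/a) = 1$, and the cancellation $\partial_t q - \partial_x J(x, q) = 3x(bx - a)$ on the region where $q > 0$.

```python
import sympy as sp

x, a, b = sp.symbols('x a b', positive=True)
da = -2*a*(1 + b)          # \partial_t a from (e:ab)
db = a - 2*b*(1 + b)       # \partial_t b from (e:ab)

# d/dt (b/a) = (a db - b da)/a^2 = 1 by the quotient rule
assert sp.simplify((a*db - b*da)/a**2) == 1

# On {q > 0}, q = a - b x and
# L q = dq/dt - x^2 q'' - q'(2q + x^2) + 2q(1 - x) = 3x(bx - a)
q = a - b*x
Lq = (da - db*x) - x**2*sp.diff(q, x, 2) \
     - sp.diff(q, x)*(2*q + x**2) + 2*q*(1 - x)
assert sp.simplify(Lq - 3*x*(b*x - a)) == 0
```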
\subsection{Onset of Photon Loss (Slope Condition).}
We now prove the second assertion in Proposition~\ref{p:formation}, which states that if $\partial_x n_0(0) > 1$,
then photon loss must commence at or before the time $\bar t_*$ given by~\eqref{e:tStarRiccati}.
Before delving into the details of the rigorous proof, we present a quick heuristic derivation.
Let $w_t = \partial_x n_t(0)$ and differentiate equation~\eqref{e:komp} in~$x$.
Using the fact that $n_t(0) = 0$ for $t < t_*$ we formally obtain
\begin{equation}\label{e:riccati}
\partial_t w = 2w - 4w + 2 w^2 = 2w(w-1)\,.
\end{equation}
This is a Riccati equation which can readily be integrated.
If $w_0 = \partial_x n_0(0) > 1$, then~\eqref{e:riccati} develops a singularity at time~$t_*$ given by~\eqref{e:tStarRiccati}.
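For reference, the explicit solution of the Riccati equation~\eqref{e:riccati} and its blow-up time can be confirmed with a short symbolic computation (the value $w_0 = 3/2$ is an arbitrary illustrative slope):

```python
import sympy as sp

t = sp.symbols('t')
w0 = sp.Rational(3, 2)                       # any initial slope w0 > 1
w = 1/(1 - (1 - 1/w0)*sp.exp(2*t))           # candidate solution of (e:riccati)

# w solves dw/dt = 2 w (w - 1) with w(0) = w0
assert sp.simplify(sp.diff(w, t) - 2*w*(w - 1)) == 0
assert w.subs(t, 0) == w0

# The denominator vanishes at t = (1/2) ln(w0/(w0 - 1)), the blow-up time
tstar = sp.log(w0/(w0 - 1))/2
assert sp.simplify(1 - (1 - 1/w0)*sp.exp(2*tstar)) == 0
```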
To convert the above heuristic into a rigorous proof, we need to construct suitable sub and super-solutions.
\begin{proof}[Proof of the second assertion in Proposition~\ref{p:formation}]
We will first prove~\eqref{e:tStarRiccati} holds by constructing sub-solutions using modified sideways parabolas.
More precisely, the sub-solutions we construct will be of the form
\begin{equation}\label{d:nsubdef}
z_t(x) = (u_t(x)-c_t x^2)_+ \,,
\end{equation}
where
\begin{equation}\label{e:u}
u_t(x) = \frac{\sqrt{a_t^2 + 2 b_t x} - a_t }{b_t}\,,
\end{equation}
and the functions $a$, $b$, $c$ will be chosen shortly, with $b > 0$, $c > 0$.
Note that $u$ is determined implicitly from the upper branch of parabolic arcs
\begin{equation}\label{d:para1}
x = a_t u_t(x) + \frac12{b_t }u_t(x)^2 \,.
\end{equation}
From this we compute
\begin{subequations}
\begin{gather}
\label{e:dxU}
1 = (a+bu)\partial_x u ,
\\
\label{e:dx2U}
0 = (a+bu)\partial_{xx} u + b (\partial_x u)^2,
\\
\label{e:dtU}
0 = u \partial_t a + \frac12u^2{\partial_t b} + (a+bu)\partial_t u \,.
\end{gather}
\end{subequations}
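The relations~\eqref{e:dxU} and~\eqref{e:dx2U}, as well as the parametrization~\eqref{d:para1}, can be checked directly from the explicit formula~\eqref{e:u}; in the following sketch time is frozen, so $a$ and $b$ enter as positive parameters:

```python
import sympy as sp

x, a, b = sp.symbols('x a b', positive=True)
u = (sp.sqrt(a**2 + 2*b*x) - a)/b            # upper branch (e:u)
ux, uxx = sp.diff(u, x), sp.diff(u, x, 2)

assert sp.simplify((a + b*u)*ux - 1) == 0         # (e:dxU)
assert sp.simplify((a + b*u)*uxx + b*ux**2) == 0  # (e:dx2U)
# u parametrizes the parabolic arc (d:para1): x = a u + (1/2) b u^2
assert sp.simplify(a*u + b*u**2/2 - x) == 0
```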
When $a_t > 0$, we have $u_t(0)=0$, and hence $\partial_x z_t(0) = \partial_x u_t(0)=1/a_t$.
Our aim is to choose~$a$ so that $\partial_x z_t(0) = 1/a_t$ satisfies the Riccati equation~\eqref{e:riccati}
until blowup.
This boils down to letting~$a$ solve
\begin{equation}\label{e:invslope1}
\partial_t a_t
= 2a_t - 2\,.
\end{equation}
Notice $u\geq0$, $a+bu\ge0$, and $(a+bu)u\geq x$, hence $x\partial_x u\leq u$.
Define the (non-linear) differential operator $\mathcal L$ by
\begin{equation*}
\mathcal L n
\defeq
\partial_t n - \partial_x J(x, n)
= \partial_t n- x^2\partial_{xx} n-\partial_x n(2n+x^2)+2n(1-x) \,,
\end{equation*}
and compute, when $z>0$,
\[
\mathcal L z = \partial_t u - x^2\partial_t c- x^2(\partial_x^2 u - 2c) +(x^2+2z)(-\partial_x u+2cx)+2z(1-x),
\]
whence, since $cx^2=u-z$,
\begin{align*}
(a+bu)\mathcal L z &=
(a+bu)\left( -x^2\partial_t c + 2u + (x^2+2z)2cx - 2zx \right)
\\ & \qquad
-u\partial_t a -\frac12u^2\partial_t b + x^2 b(\partial_x u)^2 - x^2 - 2z
\\ &\leq (a+bu)\left(-x^2\partial_t c + 2cx^3 +2zx(2c-1) \right)
\\ &\qquad
+ u(-\partial_t a+2a-2)+ u^2\left(-\frac12 \partial_t b+ 3b\right) + x^2(-1+2c )\,.
\end{align*}
If
\begin{equation}\label{e:bevol1}
\partial_t c = c \,,
\qquad
0 < x < \frac{1}{2}\,,
\qquad
0<c < \frac{1}{2}\,,
\qquad
\partial_t b = 6b \,,
\end{equation}
then we have $\mathcal L z<0$.
Thus, from the above we choose
$$
a_t = 1-(1 -a_0)e^{2t} \,,
\qquad
b_t = b_0 e^{6t} \,,
\qquad\text{and}\qquad
c_t = c_0 e^{t}\,,
$$
with $a_0 \in (0, 1)$, $b_0, c_0 > 0$ to be determined shortly.
In order to ensure $z_0 \leq n_0$, pick $\epsilon > 0$ and let
\begin{equation*}
a_0 = \frac{1}{\partial_x n_0(0) - \epsilon} \in (0, 1)\,,
\qquad
t^*_\epsilon = \frac{1}{2}\abs{\ln (1-a_0)} \,.
\end{equation*}
Due to this choice of $t_\epsilon^*$ we have $a_{t} > 0$ for $t < t_\epsilon^*$, $a_{t_\epsilon^*} = 0$, and $a_t < 0$ for $t > t_\epsilon^*$.
(Notice $u_t(0)>0$ for $t> t_\epsilon^*$.)
Next choose $c_0<\frac{1}{2}(1-a_0)$
so that
$$
c_t<1/2 \quad\text{for } t\in (0, 2t_\epsilon^*) \,.
$$
This choice is made to ensure $z_t(x)$ remains a sub-solution up to time $t= 2t_\epsilon^*$.
Next, we choose $b_0$ large enough to ensure that for some $\bar x \ll 1/2$ we have $z_0(x) = 0$ for all $x \in (\bar x, \infty)$.
Since $\partial_x z_0(0) = \partial_x n_0(0) - \epsilon < \partial_x n_0(0)$, by making $b_0$ larger if necessary we can also arrange $z_0 \leq n_0$.
Now all the requirements in~\eqref{e:bevol1} are satisfied, and hence $z$ is a sub-solution up to time $2t_\epsilon^*$.
Since $z_0 \leq n_0$, the comparison principle (Lemma~\ref{l:comparison}) implies $z_t \leq n_t$ for all $t \leq 2t_\epsilon^*$.
Using~\eqref{e:u} this implies
\begin{equation*}
n_t(0) \geq z_t(0) = u_t(0) \geq \frac{-2a_t}{b_t} > 0
\qquad\text{for all } t \in (t_\epsilon^*, 2t_\epsilon^*]\,.
\end{equation*}
Sending~$\epsilon \to 0$ we see that
\begin{equation*}
t_* \leq \lim_{\epsilon \to 0} t_\epsilon^*
= \frac{1}{2} \ln\paren[\Big]{ \frac{\partial_x n_0(0)}{\partial_x n_0(0) -1} } = \bar t_*\,,
\end{equation*}
which proves $t_* \leq \bar t_*$ as claimed.
\medskip
It remains to produce initial data for which $n_t(0)$ is continuous at $t = t_*$, and $t_* = \bar t_*$, where $\bar t_*$ is defined in~\eqref{e:tStarRiccati}.
We will do this by constructing a super-solution~$Z$ such that $Z_t(0) = 0$ for~$0\le t \leq \bar t_*$ and $Z_t(0)>0$ for $\bar t_*<t<\hat t$,
with $Z_t(0)$ a continuous function of time. Moreover, $\partial_x Z_0(0)$ can be made an arbitrary value larger than~$1$.
Once we construct $Z$, we choose $n_0$ to be any function for which $0\leq n_0 \leq Z_0$ and $\partial_x n_0(0) = \partial_x Z_0(0)$.
For the corresponding solution, $n$, we must have
\begin{equation*}
0 \leq n_t(0) \leq Z_t(0) = 0 \qquad \text{for all } t \in [0, \bar t_*]\,.
\end{equation*}
This forces $t_* \geq \bar t_*$.
Since we have already proved $t_* \leq \bar t_*$, this implies $t_* = \bar t_*$ as desired.
Continuity of $n_t(0)$ at $t = t_*$ follows because $0 \leq n_t(0) \leq Z_t(0)$ and $Z_t(0)$ is continuous
with $Z_{\bar t_*}(0) = 0$.
It remains to construct the super-solution~$Z$.
We will do this by choosing
\begin{equation*}
Z_t(x) = \begin{cases} u_t(x)\varmin S_\gamma(x) \,, & 0<x\leq \bar x_0,
\\ S_\gamma(x)\,, & x>\bar x_0,
\end{cases}
\end{equation*}
where $S_\gamma$ is the stationary super-solution in~\eqref{d:S}, $\gamma > 0$ and $\bar x_0>0$ will be chosen shortly,
and~$u$ is given explicitly by~\eqref{e:u} (or implicitly from the upper branch of the parabolic arc~\eqref{d:para1}).
As before we let~$a$ satisfy~\eqref{e:invslope1}, with $a_0\in(0,1)$ specified arbitrarily.
One easily checks that
\[
|a_t|\le a_0 \quad\mbox{ for } \quad 0\le t\le \hat t:=\frac12\ln \frac{1+a_0}{1-a_0} = \bar t_*+\frac12\ln(1+a_0)\,.
\]
In this case it suffices to choose $b > 0$ to be constant in time.
Using~\eqref{e:u}, \eqref{e:dxU}--\eqref{e:dtU}, and~\eqref{e:invslope1} we compute
\begin{align*}
(a + bu) \mathcal L u
&= 2u(1-a) + \frac{x^2 b}{(a + bu)^2} - (2u + x^2) + 2(1-x) u (a + bu)
\\
&= x^2 \paren[\Big]{\frac{b}{a^2 + 2bx} - 1} -2ax u +2 (1-x) b u^2\\
& = x^2 \paren[\Big]{\frac{b}{a^2 + 2bx} - 3 } + (2-x) b u^2,
\end{align*}
where we used~\eqref{d:para1} to obtain the last equality.
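The chain of equalities above can be verified symbolically. In the sketch below we eliminate the square root by working with $s \defeq \sqrt{a_t^2 + 2bx}$ (so $a + bu = s$), treat $a$ as a frozen parameter, and substitute $\partial_t a = 2a - 2$ from~\eqref{e:invslope1} via the chain rule; the derivative formulas for~$u$ come from~\eqref{e:dxU}--\eqref{e:dx2U}:

```python
import sympy as sp

s, a, b = sp.symbols('s a b', positive=True)   # s = sqrt(a^2 + 2 b x)
u = (s - a)/b
x = (s**2 - a**2)/(2*b)                        # from s^2 = a^2 + 2 b x

# du/dx = 1/s and d2u/dx2 = -b/s^3, from (e:dxU)-(e:dx2U)
ux, uxx = 1/s, -b/s**3
# du/dt through a alone: du/da = (a - s)/(b s), with da/dt = 2a - 2
ut = (2*a - 2)*(a - s)/(b*s)

# L u = du/dt - x^2 u'' - u'(2u + x^2) + 2u(1 - x), multiplied by a + b u = s
Lu = ut - x**2*uxx - ux*(2*u + x**2) + 2*u*(1 - x)
target = x**2*(b/s**2 - 3) + (2 - x)*b*u**2
assert sp.simplify(s*Lu - target) == 0
```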
Hence
\begin{equation*}
(a + bu) \mathcal L u
\geq x^2 \paren[\Big]{\frac{b}{a_0^2+2bx} - 3} + (2-x) b u^2.
\end{equation*}
Choosing~$\bar x_0 < 1/6$ and $b$ large we see that $\mathcal L u \geq 0$ for all $x \in (0, \bar x_0)$.
Since $\partial_x u_0(0) > 1 = \partial_x \hat n_0(0)$, we can make $\bar x_0$ smaller if necessary to ensure $u_0(\bar x_0) > \hat n_0(\bar x_0)$.
Then we can find a sufficiently small~$\gamma > 0$ for which $u_0(\bar x_0) > S_\gamma(\bar x_0)$.
From~\eqref{e:dxU} and~\eqref{e:dtU} we see that the function~$u$ is increasing in both~$x$ and~$t$.
Hence the functions $u_t(\cdot)$ and $S_\gamma(\cdot)$ must meet at some~$\bar x_t < \bar x_0$ where the function~$Z$ will satisfy the corner condition~\eqref{e:cornerSupSol}.
This shows~$Z$ is a super-solution, concluding the proof.
\end{proof}
\section{Long Time Behavior.}\label{s:largetime}
This section is devoted to studying the long time convergence of solutions (Theorem~\ref{t:lim}, and the other results stated in Section~\ref{s:longtime}).
Following the convention from the previous section, we assume~$n$ is the unique global solution to~\eqref{e:komp}--\eqref{e:noflux} with initial data~$n_0$.
Recall Theorem~\ref{t:exist} guarantees that $n \in C^\infty( (0, \infty)^2 )$.
\subsection{Entropy Decay and Steady States (Lemma~\ref{ent}).}
The main goal of this section is to prove the entropy decay stated in Lemma~\ref{ent}.
We begin with a formal argument showing that the quantum entropy~$H$ defined in~\eqref{e:HdefIntro} is dissipated (see also~\cites{CaflischLevermore86,LevermoreLiuEA16}).
Note that the flux~$J$ can be rewritten as
\begin{align}\label{e:Jh}
J&=n(n+x^2) \partial_x h(x, n)\,,
\end{align}
where
\begin{equation*}
h(x, n)
\defeq x+ \ln n -\ln (n+x^2)
= x - \ln\paren[\Big]{ 1 + \frac{x^2}{n} } \,.
\end{equation*}
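The factorization~\eqref{e:Jh} is elementary but easy to misstate; writing the flux in the expanded form $J = x^2\partial_x n + (x^2 - 2x)n + n^2$ (the form whose $x$-derivative appears in the operator~$\mathcal L$ above), it can be confirmed symbolically:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
n = sp.Function('n', positive=True)(x)

J = x**2*sp.diff(n, x) + (x**2 - 2*x)*n + n**2   # expanded flux
h = x + sp.log(n) - sp.log(n + x**2)             # h(x, n) as defined above

# J = n (n + x^2) \partial_x h, i.e. the identity (e:Jh)
assert sp.simplify(J - n*(n + x**2)*sp.diff(h, x)) == 0
```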
Multiplying~\eqref{e:komp} by $h(x, n)$ and integrating by parts formally gives
$$
\int_0^\infty \partial_t n \, h(x, n) \, dx=-\int_0^\infty n(n+x^2)|\partial_x h(x, n)|^2 \, dx \,.
$$
Here we assumed that the boundary term~$Jh$ vanishes both at zero and infinity.
Since the left hand side can be recast as a time derivative, this yields the dissipation relation~\eqref{e:ent}.
For convenience we rewrite~\eqref{e:ent} as
\begin{equation}\label{e:dtH}
\partial_t H + D=0,
\end{equation}
where the quantum entropy functional, $H = H(n)$, and the dissipation term, $D = D(n)$, are given by
\begin{align}
\label{e:HPhi}H(n) & \defeq \int_0^\infty [xn +\Phi(x, n)]dx,
\\
\nonumber
\Phi(x, n) & \defeq n\ln n -(n+x^2)\ln(n+x^2)+x^2 \ln (x^2)
\\
\nonumber
&= -n\ln \paren[\Big]{1 + \frac{x^2}{n}}
-x^2 \ln \paren[\Big]{ 1 + \frac{n}{x^2} }\,,
\\
\nonumber
\llap{\text{and}\qquad} D(n) &\defeq \int_0^\infty n(n+x^2)|\partial_x h(x, n)|^2 \, dx \, .
\end{align}
In order to justify~\eqref{e:dtH} we need to ensure~$n > 0$ (so that~$D$ is defined), and show that $J h$ vanishes at both zero and infinity.
We do each of these below.
\begin{lemma}\label{l:nPositive}
If $n_0$ is not identically~$0$, then $n_t(x) >0$ for every $x>0$, $t>0$.
\end{lemma}
\begin{proof}
For any $\delta >0$, the equation
$$
\partial_t n =x^2 \partial_x^2 n +(x^2+2n)\partial_x n +2(x-1)n, \quad x\in (\delta, R), \quad t>0,
$$
is strictly parabolic, and the zeroth order coefficient is bounded from below.
Thus by the strong minimum principle (see for instance~\cite[\S7.1.4]{Evans98}), $n_t(x) > 0$ for all $t > 0$, $x > \delta$.
Sending $\delta \to 0$ finishes the proof.
\end{proof}
For the remainder of this section we will assume that the initial data is not identically~$0$, and hence the solution is strictly positive on $(0, \infty)^2$.
Next, to show that~\eqref{e:dtH} holds we need to show $Jh$ vanishes both at~$0$ and at infinity.
We use an averaging argument near $x=0$ (similar to what was used in the proof of Proposition~\ref{p:lossFormula}).
Unfortunately, as $x \to \infty$, our existence results \emph{do not} provide enough decay to guarantee that~$J h$ vanishes.
However, if $n_0(x)$ decays fast enough, then the comparison principle and our super-solutions~\eqref{d:S} provide enough decay to show that $J h$ vanishes as $x \to \infty$.
This is what we use to rigorously prove~\eqref{e:dtH}.
\begin{proof}[Proof of Lemma~\ref{ent}]
First note that since $n_0(x) \leq C_0(1 + x^2) e^{-x}$, there must exist~$\gamma > 0$ such that $n_0 \leq S_\gamma$.
(Recall $S_\gamma$ is the stationary super-solution defined in~\eqref{d:S}.)
Thus using the comparison principle (Corollary~\ref{c:Sbound}) we must have $n_t(x) \leq S_\gamma(x)$ for all $t > 0$ and $x \geq 0$.
Now fix $0 < \epsilon < R$, and define
\begin{equation*}
\zeta(x)=
\begin{dcases}
0 & 0\leq x\leq \epsilon\,, \\
\tfrac{x}{\epsilon} -1 & \epsilon \leq x\leq 2\epsilon\,,\\
1 & 2\epsilon \leq x\leq R\,, \\
R+1-x, & R \leq x \leq R+1,\\
0 & x\geq R+1\,.
\end{dcases}
\end{equation*}
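The cutoff $\zeta$ is simply a clamped minimum of two linear ramps. The short sketch below (illustrative only, not part of the proof) checks its piecewise values.

```python
def zeta(x, eps, R):
    # piecewise-linear cutoff: 0 on [0, eps], ramps up to 1 on [eps, 2*eps],
    # equals 1 on [2*eps, R], ramps down on [R, R+1], and 0 beyond R + 1
    return max(0.0, min(1.0, x/eps - 1.0, R + 1.0 - x))

eps, R = 0.5, 10.0
assert zeta(0.25, eps, R) == 0.0   # below eps
assert zeta(0.75, eps, R) == 0.5   # midway up the ramp
assert zeta(5.0, eps, R) == 1.0    # plateau
assert zeta(10.5, eps, R) == 0.5   # midway down the ramp
assert zeta(12.0, eps, R) == 0.0   # beyond R + 1
```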
Multiplying~\eqref{e:komp} by $h(x, n)\zeta(x)$ and integrating by parts gives
\begin{align}
\nonumber
\int_s^t \int_{\epsilon}^{R+1} \partial_t n \, h \, \zeta(x) \, dx \, d\tau
& =-\int_s^t \int_{\epsilon}^{R+1} n(n+x^2)|\partial_x h(x, n)|^2 \zeta(x) \, dx
\\
\label{en1}
& \qquad - \int_s^t \Xint-_{\epsilon}^{2\epsilon} J \, h \, dx \, d\tau
+ \int_s^t \int_{R}^{R+1} J \, h \, dx \, d\tau \,.
\end{align}
Note that $h \partial_t n =\partial_t (xn+\Phi)$, hence the left hand side of the above reduces to
\begin{equation*}
\int_{\epsilon}^{R+1}(x n+ \Phi) \zeta(x)dx \Big|_s^{t} \,.
\end{equation*}
For the right hand side we note
\begin{equation}\label{e:dPhi}
\partial_n \Phi=\ln \paren[\Big]{\frac{n}{n+x^2}}
\qquad\text{and}\qquad
\partial_x \Phi= 2x \ln \paren[\Big]{\frac{x^2}{n+x^2}} \,.
\end{equation}
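As a quick sanity check, the closed forms in~\eqref{e:dPhi} can be compared against centered finite differences of $\Phi$; the test point $(x, n) = (1.7, 0.4)$ and the step size are arbitrary illustrative choices.

```python
import numpy as np

def Phi(x, n):
    return n*np.log(n) - (n + x**2)*np.log(n + x**2) + x**2*np.log(x**2)

x, n, d = 1.7, 0.4, 1e-6                # arbitrary test point and step
dPhi_dn = np.log(n/(n + x**2))          # closed form for d(Phi)/dn
dPhi_dx = 2*x*np.log(x**2/(n + x**2))   # closed form for d(Phi)/dx

# centered finite differences agree with the closed forms
assert abs((Phi(x, n + d) - Phi(x, n - d))/(2*d) - dPhi_dn) < 1e-6
assert abs((Phi(x + d, n) - Phi(x - d, n))/(2*d) - dPhi_dx) < 1e-6
```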
Thus we can regroup terms in $Jh$ to obtain
\begin{align*}
Jh & =(x^2\partial_x n +n^2 +(x^2-2x)n)(x +\partial_n \Phi) \\
&=\partial_x (x^2(xn +\Phi)) -B \,,
\end{align*}
where
\begin{equation*}
B= xn(5x-x^2-n)+2x\Phi +x^2 \partial_x \Phi +n(2x-x^2-n)\partial_n \Phi\,.
\end{equation*}
For $\epsilon$ small and $x \in [\epsilon , 2\epsilon]$, we have
\begin{equation*}
0< n \leq S_\gamma \leq \gamma + 2\epsilon \leq \gamma + 2
\qquad\text{and}\qquad
|\Phi|\leq C \,,
\end{equation*}
for some finite constant $C$.
We will subsequently allow $C$ to increase from line to line, provided it does not depend on $\epsilon$ or $R$.
Note also
\begin{equation}\label{e:xdxphi}
x|\partial_x \Phi| =2x^2\ln \left( 1+\frac{n}{x^2}\right) \leq 2n
\end{equation}
and
$$
n|\partial_n \Phi|= n\ln \left( 1+\frac{x^2}{n}\right)\leq x^2.
$$
These combined ensure $|x^2(xn+\Phi)|\leq C\epsilon^2$ and
$$
|B| \leq xn|5x-x^2-n| +2x|\Phi| +2xn +x^2|2x-x^2-n| \leq C\epsilon.
$$
Hence
\begin{align}
\nonumber
\abs[\Big]{-\int_s^t \Xint-_{\epsilon}^{2\epsilon} J\, h \, dx \, d\tau }
&= \abs[\Big]{ -\frac{1}{\epsilon}\int_s^t \brak[\big]{x^2(xn+\Phi)}_\epsilon^{2\epsilon} \, d\tau
+\int_s^t \Xint-_{\epsilon}^{2\epsilon}B \, dx \, d\tau }
\\
\label{e:Jh0}
&\leq C(t-s)\epsilon.
\end{align}
We now bound the last term in~\eqref{en1}.
Note
$$
\int_s^t \int_{R}^{R+1} J \, h \, dx \, d\tau
= \int_s^t \brak[\Big]{x^2(xn+\Phi)}_R^{R+1} \, d\tau +\int_s^t \int_{R}^{R+1}B \, dx \, d\tau \, .
$$
For $x \in [R, R+1]$, we have
\begin{equation*}
n_t(x) \leq S_\gamma(x)
= \frac{x^2 e^{-x}}{ (1-e^{-x})^{2} } (\gamma+1-e^{-x}) \,,
\end{equation*}
and hence
$$
n_t(x) \leq C R^2 e^{-R}\,, \qquad
$$
for all sufficiently large~$R$ and $x \in [R, R+1]$.
Therefore, since $n\mapsto|n\ln n|$ is increasing for $0<n<e^{-1}$, we find
\begin{align*}
|\Phi| &= n \ln \paren[\Big]{ \frac{n + x^2}{n} } + x^2 \ln \paren[\Big]{ \frac{n + x^2}{x^2} }
\leq C R^3 e^{-R}\,.
\end{align*}
As before we still use $x|\partial_x \Phi|\leq 2n$ (inequality~\eqref{e:xdxphi}), but we bound $n\partial_n \Phi$ differently.
Namely,
$$
n|\partial_n \Phi|= n \ln\paren[\Big]{ 1 + \frac{x^2}{n} }
\leq C R^3 e^{-R}\,.
$$
This ensures
$$
|x^2(xn+\Phi)|\leq C R^5 e^{-R}\,,
$$
and
$$
|B| \leq xn(x^2+5x+n) +2x|\Phi| +2xn +(x^2+2x+n)\, n|\partial_n \Phi| \leq C R^5 e^{-R}\,.
$$
Hence,
\begin{equation}\label{e:Jhinf}
\abs[\Big]{ \int_s^t \int_{R}^{R+1} J\, h \, dx \, d\tau }
\leq C(t-s)R^5 e^{-R}
\xrightarrow{R \to \infty} 0\,.
\end{equation}
\medskip
Finally, sending~$\epsilon \to 0$ and $R \to \infty$ in~\eqref{en1} implies
\begin{equation}\label{eq:ent}
H(n(\cdot, t))=H(n(\cdot, s)) - \int_s^t D(n(\cdot, \tau))\,d\tau\,.
\end{equation}
Since $n$ is smooth for $x > 0$, $t > 0$ this implies~\eqref{e:dtH} as desired.
\end{proof}
\begin{remark}
In the previous proof we used the decay assumption on $n_0$ to ensure $n \leq S_\gamma$.
We used $n \leq S_\gamma$ to obtain the vanishing of $Jh$ both near zero (equation~\eqref{e:Jh0}) and at infinity (equation~\eqref{e:Jhinf}).
The use of $n \leq S_\gamma$ to show that $J h$ vanishes at $0$ can be avoided by using
Proposition~\ref{p:nBdd}
instead.
We have so far not managed to avoid the use of $n \leq S_\gamma$ to show that $J h$ vanishes at infinity.
\end{remark}
The entropy dissipation lemma suggests that the long time limit of solutions to~\eqref{e:komp}--\eqref{e:noflux} is an equilibrium solution for which
$$
J(x, n)=n(n+x^2)\partial_x h(x, n)=0 \,.
$$
This equation can be directly solved, and the nonnegative solutions are precisely the Bose--Einstein equilibria~\eqref{e:fhat} (equivalently~\eqref{e:nhat}).
\begin{lemma}\label{l:stationary}
Let $b > 0$ be a~$C^1$ function on $[0, \infty)$.
Then $D(b) = 0$ if and only if $J(b) = 0$, and this holds if and only if there exists $\mu \in [0, \infty)$ such that $b = \hat n_\mu$.
\end{lemma}
We remark that the stationary solutions $\hat n_\mu$ can also be characterized as minimizers of the quantum entropy functional,
but we do not need this characterization in our proofs.
See \cite{EscobedoMischlerEA05} for a precise analysis of such extrema.
\begin{proof}[Proof of Lemma~\ref{l:stationary}]
Clearly $D(b) = 0$ if and only if~$J(b) = 0$.
Also~$J(b) = 0$ if and only if $h(x, b) = -\mu$ for some constant~$\mu \in \mathbb{R}$.
That is,
\begin{equation*}
x + \ln \paren[\Big]{\frac{b}{b + x^2}} = -\mu\,.
\end{equation*}
Solving for~$b$ and using the fact that~$b > 0$ implies $\mu \in [0, \infty)$ and $b = \hat n_\mu$.
\end{proof}
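Assuming, as in~\eqref{e:nhat}, that the Bose--Einstein equilibria take the form $\hat n_\mu(x) = x^2/(e^{x+\mu}-1)$, the identity $h(x, \hat n_\mu) = -\mu$ from the proof can be checked numerically. The sketch below is illustrative only; the value of $\mu$ is arbitrary.

```python
import numpy as np

mu = 0.7                                   # arbitrary chemical potential
x = np.linspace(0.05, 30.0, 400)
n_hat = x**2 / np.expm1(x + mu)            # Bose-Einstein equilibrium (assumed form)
h = x + np.log(n_hat) - np.log(n_hat + x**2)

assert np.all(n_hat > 0)
assert np.allclose(h, -mu)                 # h(x, n_hat) is constant, equal to -mu
```

Indeed, $n_hat/(n_hat + x^2) = e^{-(x+\mu)}$, so the logarithm exactly cancels the linear term up to the constant $-\mu$.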
\subsection{The \texorpdfstring{$\omega$}{omega}-limit Set.}
Our aim in this section is to show that the $\omega$-limit set of any trajectory is non-empty, and invariant under the dynamics.
It will be convenient to let~$U_t$ be the solution operator of~\eqref{e:komp}--\eqref{e:noflux}.
That is, given any nonnegative initial data $a$ satisfying~\eqref{e:expDecay}, let
\[
U_t a \defeq n_t\,,
\]
where~$n$ is the solution of \eqref{e:komp}--\eqref{e:noflux} with initial data $n_0 = a$.
Note, we assumed the faster decay~\eqref{e:expDecay} on the initial data (as opposed to the slower decay~\eqref{e:x2n} that is required to prove existence).
Under the assumption~\eqref{e:expDecay} there exists $\gamma > 0$ such that $a \leq S_\gamma$, and hence by Corollary~\ref{c:Sbound} we must have $U_t a \leq S_\gamma$ for all $t \geq 0$.
Thus~$\norm{U_t a}_{L^\infty} \leq \norm{S_\gamma}_{L^\infty} \leq 1 + 2\gamma$.
Define the set~$A_\gamma$ by
\begin{gather*}
A_\gamma \defeq \set{ a\in L^\infty(0,\infty) \st 0 \leq a(x) \leq S_\gamma
\quad\text{for } x\in(0,\infty) }\,,
\end{gather*}
and note that~$A_\gamma$ is positively invariant under the semi-flow induced by the solution operator.
That is,
\[
U_t A_\gamma \subseteq A_\gamma \,, \qquad \text{for all } t\geq 0\,.
\]
Given any~$a \in A_\gamma$, recall the usual $\omega$-limit is defined by
$$
\omega(a) = \bigcap_{s>0} \overline{
\set{U_t a \st t\geq s} }\,,
$$
where the over-line notation above denotes the closure in $L^1$.
That is, $b\in\omega(a)$ if and only if there is a sequence of times $t_k\to \infty$ such that $\norm{U_{t_k}a- b}_{L^1} \to 0$.
The main purpose of this section is to show that the~$\omega$-limit set is non-empty.
\begin{proposition}[The $\omega$-limit set]\label{p:omegalimit}
For every $a\in A_\gamma$, the set $\omega(a)$ is non-empty and invariant under~$U_t$, with
\begin{equation}\label{uo}
U_t \paren[\big]{ \omega(a) } = \omega(a), \qquad\text{for all } t>0\,.
\end{equation}
\end{proposition}
The first step in proving Proposition~\ref{p:omegalimit} is to establish boundedness in $\mathit{BV}$, the space of bounded variation functions.
\begin{lemma}\label{l:BV}
For every $a \in A_\gamma$, and any increasing sequence of times $0 < t_1 \leq t_2 \leq \cdots$ that diverges to infinity, the family $\set{ U_{t_k} a \st k \in \N }$ is bounded in $\mathit{BV}$.
\end{lemma}
\begin{proof}
By Lemma~\ref{l:oleinik} we have~\eqref{e:dxnLower} for every $x > 0$ and $t \in (0, T]$.
By Corollary~\ref{c:Sbound} we know~$\norm{n_t}_{L^\infty} \leq \norm{S_\gamma}_{L^\infty}$ which is independent of~$t$.
Thus choosing
\begin{equation*}
\alpha \geq \sqrt{6 \norm{S_\gamma}_{L^\infty} + 1} - 1\,,
\end{equation*}
implies the first inequality in~\eqref{e:dxnLower} holds for all $x > 0$ and $t > 0$.
This gives
$$
\abs{\partial_x n_t(x)} \leq
\partial_x n_t(x) +\frac{1}{t}+5x+\alpha \,,
\quad\text{for all}\quad
x>0,~ t > 0\,.
$$
Now by~\eqref{e:x2dxnVanish} and Proposition~\ref{p:nBdd} there exists~$R \geq \frac{1}{\sqrt{t_1}}$ such that
\begin{equation*}
\abs{\partial_x n_t(x)} \leq \frac{1}{x^2}\,,
\qquad\text{for all } x \geq R,~ t \geq t_1\,.
\end{equation*}
Thus for every~$k \in \N$ we have
\begin{align*}
\int_0^\infty \abs{\partial_x n_{t_k}(x)} \, dx
&\leq
\int_0^R \paren[\Big]{ \partial_x n_{t_k} + \frac{1}{t_k}+5x+\alpha } \, dx
+\int_R^\infty \frac{dx}{x^2}
\\
& \leq S_\gamma(R) +\frac{R}{t_1} +\frac{5 R^2}{2} + \alpha R + \frac1R\,,
\end{align*}
concluding the proof.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{p:omegalimit}]
By Lemma~\ref{l:BV} and Helly's selection principle there exists an increasing sequence of times $(t_k) \to \infty$ such that $(U_{t_k} a)$ converges in~$L^1$.
By definition the limit must belong to~$\omega(a)$, and hence~$\omega(a)$ is non-empty.
To prove \eqref{uo}, choose any $b\in \omega(a)$.
There must exist a sequence of times $(t_k) \to \infty$ such that
$$
\|U_{t_k} a-b\|_{L^1} \xrightarrow{k \to \infty} 0\,.
$$
By Lemma~\ref{l:l1contract} this implies
\begin{equation*}
\norm{U_{t_k + t} a-U_t b}_{L^1}
\leq \norm{U_{t_k} a- b}_{L^1}
\xrightarrow{k \to \infty} 0\,,
\end{equation*}
and hence $U_t b \in \omega(a)$.
This shows $U_t( \omega(a) ) \subseteq \omega(a)$.
For the reverse inclusion, choose any~$\hat b \in U_t( \omega(a) )$.
By definition, there exists~$b^* \in \omega(a)$ such that $U_t b^* = \hat b$.
For this~$b^*$ there exists a sequence~$t_k \to \infty$ such that $(U_{t_k} a) \to b^*$ in~$L^1$.
Hence, by Lemma~\ref{l:l1contract} we have
\begin{equation*}
\norm{U_{t + t_k} a - \hat b}_{L^1}
= \norm{U_{t + t_k} a - U_t b^*}_{L^1}
\leq \norm{U_{t_k} a - b^*}_{L^1} \xrightarrow{k \to \infty} 0\,,
\end{equation*}
which shows $\hat b \in \omega(a)$.
Thus $\omega(a) \subseteq U_t( \omega(a) )$, finishing the proof of~\eqref{uo}.
\end{proof}
\subsection{LaSalle\texorpdfstring{'}{}s Invariance Principle (Theorem~\ref{t:lim}).}
The long time convergence of solutions (Theorem~\ref{t:lim}) can now be proved by adapting LaSalle's invariance principle to our situation.
We first prove Theorem~\ref{t:lim} assuming an exponential tail bound on the initial data, and then prove the general case using the $L^1$ contraction.
\begin{proposition}[LaSalle's invariance principle]\label{p:lasalle}
Suppose $a \in A_\gamma$ is not identically~$0$, let $n_t =U_t a$, and set
\begin{equation*}
H_\infty = \lim_{t \to \infty} H(n_t)\,.
\end{equation*}
Then there exists a unique $\mu \in [0, \infty)$ such that
\begin{equation*}
\omega(a) = \set{ \hat n_\mu}\,,
\qquad\text{and}\qquad
\lim_{t \to \infty} \norm{U_t a - \hat n_\mu}_{L^1} = 0\,.
\end{equation*}
This~$\mu$ can be uniquely determined from the relation
\begin{equation}\label{e:Hmu}
H(\hat n_\mu) = H_\infty\,.
\end{equation}
\end{proposition}
\begin{proof}[Proof of Proposition~\ref{p:lasalle}]
Choose any $b \in \omega(a)$ and a sequence~$(t_k) \to \infty$ such that $U_{t_k}(a) \to b$ in~$L^1$.
We will show that~$b = \hat n_\mu$ for~$\mu$ given by~\eqref{e:Hmu}.
By passing to a subsequence if necessary, we may also assume $U_{t_k}(a) \to b$ almost everywhere.
We claim that $H(U_{t_k} a)$ also converges to $H(b)$, and hence~$H(b) = H_\infty$.
To see this, observe that the integrand of~$H$ satisfies
\begin{equation*}
\abs{x n + \Phi(x, n)} \leq x n + n + n \ln\paren[\Big]{ 1 + \frac{x^2}{n} }\,,
\end{equation*}
and the right hand side is an increasing function of~$n$.
Using this and the fact that $U_t a \leq S_\gamma$ we must have
\begin{equation*}
\abs{ H(U_{t_k}a) } \leq x S_\gamma + S_\gamma + S_\gamma \ln\paren[\Big]{1 + \frac{x^2}{S_\gamma} } \,,
\end{equation*}
which is integrable.
Thus, by the dominated convergence theorem
\begin{equation*}
H(U_{t_k}(a)) \to H(b)\,,
\end{equation*}
and hence $H(b) = H_\infty$.
We now claim that $b$ cannot be identically~$0$.
To see this let $\ubar{a} = a \varmin \hat n_0 \leq \hat n_0$.
By Lemma~\ref{l:comparison} this implies~$U_t \ubar{a} \leq \hat n_0$ for all~$t \geq 0$ and hence~$U_t \ubar{a}(0) = 0$ for all $t \geq 0$.
By~\eqref{e:loss} this means $N(U_t \ubar a) = N(\ubar a)$ for all $t \geq 0$.
Since $\ubar a \leq a$ this implies
\begin{equation*}
0 < \int_0^\infty (\hat n_0 \varmin a) \, dx
= N(\ubar a)
= N(U_t \ubar a)
\leq N( U_t a )
\xrightarrow{t \to \infty}
N(b)\,.
\end{equation*}
The first inequality is strict since $a$ is not identically~$0$ by assumption, and $\hat n_0 > 0$.
Thus~$0 < N(\ubar a) \leq N(b)$ showing that~$b$ is not identically~$0$.
We now claim there exists a~$\mu \in [0, \infty)$ such that $b = \hat n_\mu$.
To see this recall that by Proposition~\ref{p:omegalimit} we know that $U_t(b) \in \omega(a)$ for every~$t \geq 0$.
Hence, for every $t \geq 0$ we must also have $H(U_t(b)) = H_\infty$ and hence $\partial_t (H(U_t(b))) = 0$.
Using~\eqref{e:dtH} this implies $D( U_t(b) ) = 0$ for every $t \geq 0$, and thus, in particular $D(b) = 0$.
Since all solutions of~$D(b) = 0$ are of the form~\eqref{e:nhat} (Lemma~\ref{l:stationary}), there exists~$\mu \in [0, \infty)$ such that~$b =\hat n_\mu$.
Of course, since~$H(b) = H_\infty$ the identity~\eqref{e:Hmu} holds.
We will now show that the~$\mu$ satisfying~\eqref{e:Hmu} must be unique.
To see this, note that since $h(x, \hat n_\nu) = -\nu$, we have
\begin{equation*}
\partial_\nu H(\hat n_\nu)
=\int_0^\infty h(x, \hat n_\nu)\, \partial_\nu \hat n_\nu \, dx
=\nu \int_0^\infty \frac{x^2 e^{x+\nu}}{(e^{x+\nu}-1)^2} \, dx >0
\qquad\text{for } \nu > 0\,,
\end{equation*}
and so the function~$\nu \mapsto H(\hat n_\nu)$ is strictly increasing.
Thus there can be at most one solution to~\eqref{e:Hmu}, proving uniqueness.
The above shows that for any, arbitrarily chosen, $b \in \omega(a)$, we must have~$b = \hat n_\mu$, with~$\mu$ uniquely determined by~\eqref{e:Hmu}.
Hence~$\omega(a) = \set{\hat n_\mu}$ where~$\mu \in [0, \infty)$ is the unique number satisfying~\eqref{e:Hmu}.
Finally, to show~$L^1$ convergence, we note that the above shows $U_{t_k} a \to \hat n_\mu$ in $L^1$ for some sequence of times with $(t_k) \to \infty$.
For any $t > t_k$, the semi-group property, the fact that~$\hat n_\mu$ is invariant under~$U$, and the~$L^1$ contraction (Lemma~\ref{l:l1contract}) imply
\begin{equation*}
\norm{U_t a - \hat n_\mu}_{L^1}
= \norm{U_{t-t_k} U_{t_k} a - U_{t - t_k} \hat n_\mu}_{L^1}
\leq \norm{U_{t_k} a - \hat n_\mu}_{L^1} \,.
\end{equation*}
This immediately implies~$U_t a \to \hat n_\mu$ in $L^1$ as~$t \to \infty$.
\end{proof}
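The uniqueness step above rests on the strict monotonicity of $\nu \mapsto H(\hat n_\nu)$. A quick numerical check (illustrative only, assuming $\hat n_\nu(x) = x^2/(e^{x+\nu}-1)$ as in~\eqref{e:nhat}) confirms that the values $H(\hat n_\nu)$ are strictly ordered, which is all that is needed for~\eqref{e:Hmu} to have at most one solution.

```python
import numpy as np

def H_of(nu, x):
    # Bose-Einstein equilibrium \hat n_nu (assumed form, as in (e:nhat))
    n = x**2 / np.expm1(x + nu)
    phi = -n*np.log1p(x**2/n) - x**2*np.log1p(n/x**2)
    f = x*n + phi
    return float(np.sum(0.5*(f[1:] + f[:-1])*np.diff(x)))  # trapezoid rule

x = np.linspace(1e-3, 40.0, 4000)
Hs = np.array([H_of(nu, x) for nu in (0.0, 0.5, 1.0, 2.0)])

# strict monotonicity of nu -> H(\hat n_nu) forces uniqueness in (e:Hmu)
assert np.all(np.diff(Hs) > 0) or np.all(np.diff(Hs) < 0)
```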
We are now in a position to prove Theorem~\ref{t:lim}.
The main idea in the proof is to truncate the initial data to $[0, R]$, and use Proposition~\ref{p:lasalle} to obtain the long time limit solutions with the truncated initial data.
Next we use the $L^1$ contraction (Lemma~\ref{l:l1contract}) to send $R \to \infty$.
\begin{proof}[Proof of Theorem~\ref{t:lim}]
For any~$R > 0$ we write~$n_0 = a_R + b_R$ where~$a_R = \mathbbm{1}_{[0, R]} n_0$.
Since~$a_R(x) = 0$ for all $x > R$, there must exist~$\gamma = \gamma(R) > 0$ such that~$a_R \in A_\gamma$.
Since $a_R \in A_\gamma$, Proposition~\ref{p:lasalle} applies and there exists a unique~$\nu(R)$ such that $U_t a_R \to \hat n_{\nu(R)}$ in~$L^1$ as $t \to \infty$.
Notice that we have $a_R\leq a_{R'}$ whenever $0<R<R'$.
Due to the comparison principle it follows $U_ta_R \leq U_t a_{R'}$ for all $t>0$,
whence $\hat n_{\nu(R)}\le \hat n_{\nu(R')}$.
This implies ${\nu(R)}\geq {\nu(R')}$,
since for every~$x>0$ the function~$\mu \mapsto \hat n_\mu(x)$ is decreasing in~$\mu$.
Thus the limit $\mu \defeq \lim_{R\to\infty}\nu(R)$ exists in $[0,\infty)$.
Moreover $\|\hat n_{\nu(R)}-\hat n_\mu\|_{L^1}\to0$ as $R\to\infty$
by dominated convergence.
We now infer that $n_t\to \hat n_\mu$ in $L^1$ as $t\to\infty$,
by a standard triangle argument:
Given $\epsilon>0$, we may choose $R$ sufficiently large so
both $\|\hat n_{\nu(R)}-\hat n_\mu\|_{L^1}$
and $\|n_0-a_R\|_{L^1}$ are less than $\frac\epsilon 3$.
Then by the $L^1$ contraction property,
\begin{align*}
\norm{n_t - \hat n_\mu}_{L^1}
&\leq \norm{n_t - U_t a_R}_{L^1}
+ \norm{U_t a_R - \hat n_{\nu(R)}}_{L^1}
+ \norm{\hat n_{\nu(R)} - \hat n_{\mu}}_{L^1}
\\
&\leq \norm{n_0 - a_R}_{L^1}
+ \norm{U_t a_R - \hat n_{\nu(R)}}_{L^1}
+ \norm{\hat n_{\nu(R)} - \hat n_{\mu}}_{L^1}
<\epsilon
\end{align*}
for sufficiently large $t$.
This implies~\eqref{e:l1conv} as claimed.
Finally, the identity~\eqref{e:muDef} follows immediately from~\eqref{e:l1conv} and Proposition~\ref{p:lossFormula}.
Indeed, by~\eqref{e:loss} we have
\begin{equation*}
N(n_t) = N(n_0) - \int_0^t n_s(0)^2 \, ds\,.
\end{equation*}
By~\eqref{e:l1conv} the left hand side converges to~$N(\hat n_\mu)$ as~$t \to \infty$, yielding~\eqref{e:muDef} as claimed.
\end{proof}
\subsection{Convergence Rate.}
We will now use the entropy to bound the rate at which solutions converge to equilibrium.
\begin{proof}[Proof of Proposition~\ref{p:rate}]
Let $b=\hat n_\mu=\lim_{t\to\infty}n_t$.
Taylor expanding $\Phi$ in~$n$ we see
\begin{align}
\nonumber
H(n)-H(b)
& =\int_0^\infty
\paren[\Big]{
(x+\partial_n \Phi(x, b))(n -b)
+\frac{1}{2}\partial_n^2\Phi(x, \tilde n) (n-b)^2
} \, dx
\\
\nonumber
& =\int_0^\infty h(x, b) (n-b) \, dx
+\frac{1}{2}\int_0^\infty \partial_n^2\Phi(x, \tilde n) (n-b)^2 \, dx
\\
\label{e:HnHb1}
& = -\mu (N(n) -N(b))
+\frac{1}{2}\int_0^\infty \partial_n^2\Phi(x, \tilde n) (n-b)^2 \, dx
\,,
\end{align}
where $\tilde n$ is some intermediate value between $n$ and $b$.
Since~$n_0$ satisfies~\eqref{e:expDecay} we can choose~$\gamma$ large so that~$n_0 \leq S_\gamma$.
Thus by Corollary~\ref{c:Sbound} we must have~$n_t \leq S_\gamma$ for all $t \geq 0$.
This implies
$$
\partial_n^2\Phi(x, \tilde n)=\frac{x^2}{\tilde n (\tilde n+x^2)}
\geq \frac{x^2}{S_\gamma (S_\gamma +x^2)},
$$
and hence~\eqref{e:HnHb1} and~\eqref{e:loss} imply
\begin{equation}\label{e:SgammaWeightedRate}
\int_0^\infty \frac{x^2}{S_\gamma ( S_\gamma + x^2) }(n_t-b)^2 \, dx
\leq H(n_t)-H(b)+\mu \int_t^\infty n_s(0)^2 \, ds \,.
\end{equation}
Using the Cauchy--Schwarz inequality and~\eqref{e:SgammaWeightedRate} we have
\begin{align*}
\norm{ x^2 (n_t - b) }^2_{L^1}
&\leq \paren[\Big]{
\int_0^\infty \frac{x^2}{S_\gamma(S_\gamma + x^2)} (n_t - b)^2 \, dx
}
\paren[\Big]{
\int_0^\infty S_\gamma(S_\gamma + x^2) \, dx
}
\\
& \leq C \paren[\Big]{ H(n_t)-H(b)+\mu \int_t^\infty n_s(0)^2 \, ds } \,.
\end{align*}
Here $C = C(\gamma) = \int_0^\infty S_\gamma (S_\gamma + x^2) \, dx$.
Since this is a constant that can be bounded in terms of~$C_0$ alone, the proof is complete.
\end{proof}
\begin{remark}
Another estimate on the rate of convergence is~\eqref{e:SgammaWeightedRate}.
This bounds the norm of~$n_t - b$ in the weighted~$L^2$ space with the exponentially growing weight~$x^2 / (S_\gamma (S_\gamma + x^2))$.
\end{remark}
\subsection{Mass Condition for Photon Loss, and Determining \texorpdfstring{$\mu$}{mu}.}
Finally we conclude the paper with the proofs of the first assertion in Proposition~\ref{p:formation}, and Corollary~\ref{c:mulim}.
\begin{proof}[Proof of the first assertion in Proposition~\ref{p:formation}]
Let~$\mu$ be as in Theorem~\ref{t:lim}.
Since $N(n_0) > N(\hat n_0) \geq N(\hat n_\mu)$, by~\eqref{e:muDef} we have
\begin{equation*}
\int_0^\infty n_t(0)^2 \, dt = N(n_0) - N(\hat n_\mu)
\geq N(n_0) - N(\hat n_0)
> 0 \,.
\end{equation*}
Thus there must exist some $t < \infty$ for which $n_t(0) > 0$.
This implies~$t_* < \infty$ concluding the proof.
\end{proof}
\begin{proof}[Proof of Corollary~\ref{c:mulim}]
For the first assertion, we assume~$n_0 \geq \hat n_0$.
By the comparison principle (Lemma~\ref{l:comparison}) this implies $n_t \geq \hat n_0$ for all $t \geq 0$.
By Theorem~\ref{t:lim}, we also know $(n_t) \to \hat n_\mu$ in~$L^1$ as $t \to \infty$.
However, for any~$\mu > 0$, $\hat n_\mu < \hat n_0$.
Thus the only way we can have $(n_t) \to \hat n_\mu$ as $t \to \infty$ is if~$\mu = 0$.
This proves the first assertion.
For the second assertion, we again let~$\mu$ be as in Theorem~\ref{t:lim}.
The comparison principle implies~$n_t \leq \hat n_0$ for all $t \geq 0$, and hence~$n_t(0) = 0$ for all $t \geq 0$.
Using~\eqref{e:muDef} this implies
\begin{equation*}
0 = \int_0^\infty n_t(0)^2 \, dt = N(n_0) - N(\hat n_\mu)\,,
\end{equation*}
proving~$N(n_0) = N(\hat n_\mu)$ as claimed.
Finally, for the third assertion let $\ubar n_0 = n_0 \varmin \hat n_0$, and let $\ubar n$ be the solution to~\eqref{e:komp}--\eqref{e:noflux} with initial data $\ubar n_0$.
By the comparison principle (Lemma~\ref{l:comparison}), $n_t \geq \ubar n_t$ for all $t \geq 0$.
By the previous assertion, $N(\ubar n_t) = N(\ubar n_0)$ for all $t \geq 0$, from which~\eqref{e:NmuLower} follows immediately.
\end{proof}
\section{Introduction}
Weak itinerant ferromagnets (WIFMs) are metallic magnetic systems characterized by a very low saturation moment and a low Curie temperature ($T_C$).~\cite{shimizu, kaul} WIFMs have attracted considerable attention due to their exotic ground state properties, such as unusual magnetic excitations, superconductivity, quantum critical behavior, and non-Fermi liquid (NFL) states.~\cite{saxena, cp, uhlarz} Itinerant magnetism is based on the band theory of electrons, and the magnetic moment arises from the exchange splitting of the band.~\cite{blanco} In the case of WIFMs, the band splitting is extremely small, and as a result the high field moment per magnetic element is only a fraction of a Bohr magneton. WIFMs lie very close to the magnetic--non-magnetic phase boundary, and often a small perturbation can give rise to a large change in the electronic and magnetic properties.
\par
The conventional Stoner-Wohlfarth theory of band ferromagnetism~\cite{wohlf}, based on Hartree-Fock mean field theory, is found to be inadequate to explain the various thermodynamic properties of WIFMs. The theoretical model based on the self consistent renormalization (SCR) approach in the presence of spin fluctuations~\cite{moriya} is more appropriate for describing the electronic and magnetic properties of such materials. Several transition metal based intermetallic compounds, such as ZrZn$_2$~\cite{zrzn2a, zrzn2b}, Sc$_3$In~\cite{sc3ina, sc3inb}, Ni$_3$Al~\cite{ni3ala, ni3alb}, MnSi~\cite{cp}, and NbFe$_2$~\cite{nbfe2}, show weak itinerant ferromagnetism, which is primarily connected to the delocalized nature of the $d$ electrons.
\par
The title compound of the present work, Y$_2$Ni$_7$, is also a transition metal based WIFM. Y-Ni solid solutions show interesting magnetic properties depending upon the ratio of Y and Ni.\cite{dg1, dg2} The stoichiometric compounds YNi$_{17}$ and YNi$_{15}$ show a ferromagnetic ground state. However, the saturation moment and $T_C$ decrease as the Ni concentration is lowered, and eventually YNi$_5$ becomes a paramagnetic material. With a further decrease in Ni concentration, ferromagnetism reappears, and compounds such as Y$_2$Ni$_7$ and YNi$_3$ show weak itinerant ferromagnetism.~\cite{nishi, naka, tazuke} Finally, the ferromagnetic state disappears in the case of YNi$_2$. Evidently, Y$_2$Ni$_7$ lies close to the magnetic--non-magnetic phase boundary.
\par
The weak itinerant character of the ferromagnetism in Y$_2$Ni$_7$ is apparent from the low saturation moment and low $T_C$ ($\sim$ 50 K) reported in previous studies.~\cite{nishi,rb} However, a comprehensive investigation of the ground state magnetic and electronic properties of the material is lacking. Such an investigation is necessary to understand the low-lying excitations in the system, where magnetic fluctuations play a crucial role in determining the magnetic state. Another important aspect associated with a low moment ferromagnet like Y$_2$Ni$_7$ is the nature of the paramagnetic (PM) to ferromagnetic (FM) phase transition. For that, one needs to investigate the critical region and the corresponding critical exponents. These can provide the order, the universality class, and the effective dimensionality of the phase transition.
\par
The present paper is organized in the following manner. First, a thorough investigation based on magnetization, transport, and heat capacity measurements is presented. We then provide the criticality and scaling behavior of Y$_2$Ni$_7$ around $T_C$ using the modified Arrott plot, the Kouvel-Fisher plot, and the critical isotherm method.
\begin{figure}[t]
\vskip 0.4 cm
\centering
\includegraphics[width = 8.5 cm]{fig1.eps}
\caption {(Color online)(a) Temperature dependence of dc magnetic susceptibility ($\chi = M/H$) measured in zero-field-cooled condition in presence of an applied magnetic field of 2 kOe. Inset shows the inverse dc magnetic susceptibility measured above 200 K. (b) shows the isothermal field dependence of magnetization at 4 K. The inset in (b) represents the XRD pattern of the sample along with the Rietveld refinement data. }
\end{figure}
\begin{figure}[t]
\vskip 0.4 cm
\centering
\includegraphics[width = 9 cm]{fig2.eps}
\caption {(Color online)(a) Resistivity ($\rho$) as a function of temperature for Y$_2$Ni$_7$. The insets show the $T^2$ and $T^{5/3}$ variations of $\rho$ at low temperature (5-18 K) and just below the magnetic transition $T_C$ (39-54 K), respectively. (b) shows the low temperature resistivity as a function of the square of the temperature for different applied fields. The dashed lines are linear fits to the data. }
\end{figure}
\begin{figure}[t]
\vskip 0.4 cm
\centering
\includegraphics[width = 8.5 cm]{fig3.eps}
\caption {(Color online)(a) shows the temperature dependence of the magnetoresistance (MR = [$\rho(H)-\rho(0)$]/$\rho(0)$) for different applied fields. (b) The main panel shows the variation of MR with field at two selected temperatures below $T_C$. The solid line shows the fit to the data with the equation MR = $aH^{0.5}$. The inset shows the MR as a function of field at a few temperatures above $T_C$ along with the fitted curves (solid lines).}
\end{figure}
\begin{figure}[t]
\vskip 0.4 cm
\centering
\includegraphics[width = 8.5 cm]{fig4.eps}
\caption {(Color online)(a) shows the heat capacity as a function of temperature. (b) shows the low temperature heat capacity, with the solid line being a fit to the data with a formula comprising the electronic, lattice, and spin wave contributions to the heat capacity.}
\end{figure}
\section{Experimental Details}
The bulk polycrystalline sample of Y$_2$Ni$_7$ for the present investigation was prepared by argon arc melting followed by annealing at 1000$^{\circ}$C for 500 h. The x-ray powder diffraction patterns were collected using a SEIFERT XRD3000P diffractometer (Cu K$_{\alpha}$ radiation, 2$\theta$ range from 30$^{\circ}$ to 80$^{\circ}$ with step size 0.02$^{\circ}$). The collected powder patterns have been used for Rietveld refinement (see inset of fig. 1 (b)) using the GSAS software package.~\cite{gsas} The analysis shows that the sample formed in a single phase with the rhombohedral Gd$_{2}$Co$_7$ type crystal structure (space group: $R\overline{3}m$) and lattice parameters $a$ = 4.952 \AA, $c$ = 36.230 \AA, and $V$ = 888.44 \AA$^3$, which match well with previous results.~\cite{bu}
\par
Magnetization ($M$) measurements on the Y$_2$Ni$_7$ sample were carried out on a Quantum Design SQUID magnetometer (MPMS 6, Evercool model). The resistivity ($\rho$) and magnetoresistance (MR) were measured on a cryogen-free high magnetic field system from Cryogenic Ltd., UK. The heat capacity ($C$) was recorded on a Quantum Design Physical Property Measurement System down to 2 K.
\section{Magnetization, transport and thermodynamical studies}
\subsection{Magnetization}
The temperature ($T$) variation of the dc magnetic susceptibility ($\chi = M/H$, where $H$ is the applied magnetic field), measured in the zero field cooled condition in the presence of $H$ = 2 kOe, is shown in fig. 1 (a). $\chi$ shows a sharp rise below 70 K with decreasing $T$. This corresponds to the PM to FM transition in the sample, and the associated $T_C$ is found to be around 53 K. This value of $T_C$ has been calculated from the first $T$ derivative of $\chi$ (not shown here). Above 200 K, $\chi^{-1}(T)$ varies linearly with $T$ (see inset of fig. 1 (a)), which signifies the validity of the Curie-Weiss law at high $T$. A linear fit to the high-$T$ part of the $\chi^{-1}(T)$ {\it vs.} $T$ data gives the paramagnetic effective moment $p_{eff}$ = 0.93 $\mu_B$/Ni and paramagnetic Curie temperature $\theta_p$ = 40 K. These values are close to previously reported results.~\cite{bu1}
\par
Fig. 1(b) shows the $M$ versus $H$ isotherm recorded at 4 K. We observe FM like behaviour with a steep rise at low $H$ and a sluggish increase at higher fields. Up to $H$ = 50 kOe, $M$ does not show complete saturation. This lack of saturation is presumably connected to the itinerant character of the ferromagnetism.~\cite{kaul} The magnetic moment at $H$ = 50 kOe is found to be 0.06 $\mu_B$/Ni, which is one order of magnitude smaller than the {\it per atom} moment in metallic Ni (0.64 $\mu_B$/Ni).~\cite{ni} Such a small value of the moment in Y$_2$Ni$_7$ is an indication of the very weak FM nature of the sample. The observed moment is comparable with the results reported in previous studies on Y$_2$Ni$_7$.~\cite{bu1, nishi} It is to be noted that there is no hysteresis between the field increasing and decreasing legs of the $M-H$ isotherm, indicating a very soft FM character of the sample.
\par
Itinerant ferromagnetism is often characterized by the Rhodes-Wohlfarth ratio, defined as RWR = $p_c/p_{sat}$. Here $p_c$ is related to $p_{eff}$ by the relation $p_{eff}^2$ = $p_c$($p_c$ + 2), while $p_{sat}$ is the saturation moment per magnetic atom at the temperature of interest.~\cite{wol} For a localized system, the value of RWR should be close to 1, while it diverges for itinerant ferromagnets. For Y$_2$Ni$_7$, $p_{sat}$ has been calculated from the $M-H$ isotherm at 4 K and the resulting RWR is found to be 6.17. This value is much larger than unity and supports the itinerant character of the ferromagnetism in Y$_2$Ni$_7$. It is to be noted that for a typical WIFM such as ZrZn$_2$, the RWR is close to 5.4.~\cite{ak}
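As an illustrative check (not part of the original analysis), the RWR can be reproduced numerically from the relation $p_{eff}^2 = p_c(p_c+2)$ using the rounded values quoted above, $p_{eff}$ = 0.93 $\mu_B$/Ni and $p_{sat}$ = 0.06 $\mu_B$/Ni:

```python
import math

def rhodes_wohlfarth(p_eff, p_sat):
    """Solve p_eff^2 = p_c * (p_c + 2) for p_c, then return (p_c, RWR)."""
    p_c = -1.0 + math.sqrt(1.0 + p_eff ** 2)
    return p_c, p_c / p_sat

# values quoted in the text for Y2Ni7
p_c, rwr = rhodes_wohlfarth(p_eff=0.93, p_sat=0.06)
print(p_c, rwr)  # p_c ~ 0.37 (as in table I), RWR ~ 6.1
```

With these rounded inputs one obtains RWR $\approx$ 6.1; the value 6.17 quoted in the text presumably comes from the unrounded measured moments.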
\begin{figure}[t]
\vskip 0.4 cm
\centering
\includegraphics[width = 8.5 cm]{fig5.eps}
\caption {(Color online) (a) shows several $M^2$ vs $H/M$ isotherms (Arrott plot) of Y$_2$Ni$_7$ around $T_C$ ($\approx$ 53 K) with temperature interval $\Delta T$ = 1 K. (b) represents the modified Arrott plot ($M^{1/ \beta}$ vs $(H/M)^{1/\gamma}$) with $\beta$ = 0.31 and $\gamma$ = 1.40. At 53 K, the modified Arrott plot passes through the origin, indicating the proximity of the Curie temperature.}
\end{figure}
\subsection{Resistivity and magnetoresistance}
$T$ dependence of $\rho$ for Y$_2$Ni$_7$ is shown in fig. 2 (a). A clear change in slope is observable around $T_C \approx$ 53 K. At low $T$ (below $\sim$ 18 K, in the FM state), $\rho(T)$ shows a well defined $T^2$ dependence ($\rho(T) = \rho_0 + BT^2$), as evident from the upper inset of fig. 2(a). Such $T^2$ dependence is the typical Fermi liquid behavior arising from electron-electron scattering, which generally does not involve spin-flip processes. Such Fermi liquid type behavior is observed in many simple metals at low $T$.~\cite{fli1,nfl} The SCR theory, which incorporates spin fluctuations in WIFM, also predicts a $T^2$ dependence of $\rho$ well below $T_C$. However, this $T^2$ dependence has a different origin than simple electron-electron scattering: it is connected to the {\it spin-flip} scattering of $s$ electrons by the spin density fluctuations of the $d$ electrons via the $s-d$ exchange interaction.~\cite{ueda1, hertel} Notably, the coefficient $B$ of the $T^2$ term is found to be rather large in Y$_2$Ni$_7$, with a value of 5.04 $\times$10$^{-9} \Omega$ cm K$^{-2}$. This is about two orders of magnitude higher than in typical FM metals such as Ni and Fe ($\sim$ 10$^{-11} \Omega$ cm K$^{-2}$). Similarly enhanced values of $B$ were observed in prototypical WIFMs such as ZrZn$_2$ or Ni$_3$Al.~\cite{ogawa1, ogawa2}
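A Fermi-liquid $T^2$ law is linear in $T^2$, so $\rho_0$ and $B$ follow from an ordinary linear fit. A minimal sketch on synthetic data (the coefficient $B$ is the value quoted above; $\rho_0$ is an illustrative assumption, not the measured residual resistivity):

```python
import numpy as np

# Synthetic low-T resistivity obeying rho(T) = rho0 + B*T^2
rho0_true = 1.0e-5                 # Ohm cm (illustrative assumption)
B_true = 5.04e-9                   # Ohm cm K^-2 (value quoted in the text)
T = np.linspace(5.0, 18.0, 30)     # K, the range where the T^2 law holds
rho = rho0_true + B_true * T ** 2

# A linear fit of rho against T^2 recovers B (slope) and rho0 (intercept)
B_fit, rho0_fit = np.polyfit(T ** 2, rho, 1)
print(B_fit, rho0_fit)
```

The same linear fit against $T^2$, restricted to the appropriate temperature window, is what the dashed lines in fig. 2(b) represent.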
\par
The spin fluctuation theory predicts $T^{5/3}$ dependence of $\rho$ for WIFM just below $T_C$.~\cite{ueda1} The lower inset in fig. 2 (a) shows $\rho$ as a function of $T^{5/3}$ just below $T_C$ and the linear nature of the curve indicates that $\rho$ varies as $T^{5/3}$, which is consistent with the prediction of SCR model.
\par
We also investigated the $\rho (T)$ behavior of the sample under $H$. The $\rho$ {\it vs.} $T^2$ data at low temperature are shown for different values of $H$ in fig. 2(b). It is evident that the temperature range over which the $T^2$ dependence of $\rho$ is observed becomes narrower with increasing $H$. The Fermi liquid type $T^2$ dependence is observed below 18 K in zero field; however, in fields of 10 kOe and 50 kOe, it is only visible below 15 K and 12 K respectively. Such suppression of the Fermi liquid like state with $H$ indicates some field-induced change in the $s-d$ scattering process.~\cite{fli1,fli2} We have fitted the $T^2$ dependent part of $\rho(T)$ to the formula $\rho(T) = \rho_0 + BT^2$, and it is observed that $B$ decreases with increasing $H$. This is due to the fact that the applied field tends to quench the spin fluctuations.~\cite{ueda2}
\par
We have calculated the magnetoresistance (MR = [$\rho(H)-\rho(0)]/\rho(0)$) of the sample from the $\rho$ versus $T$ data recorded under different values of $H$. The $T$ variation of -MR is plotted in fig. 3(a) between 5 and 100 K. MR is found to be negative over a wide temperature region both above and below $T_C$, with the -MR versus $T$ data showing a peak close to $T_C$. At 50 kOe, the maximum value of MR is found to be about -8\% around 58 K. The observed negative MR is likely to be associated with the suppression of spin fluctuations under $H$.~\cite{ikeda} If the Zeeman splitting energy between the spin up and spin down states corresponding to the applied magnetic field is comparable to or larger than the spin fluctuation energy, the inelastic spin flip scattering probability decreases, leading to a decrease in $\rho$. The observation of negative MR well above $T_C$ indicates that spin fluctuations exist even at higher temperatures in Y$_2$Ni$_7$. Ueda~\cite{ueda2} studied the effect of a magnetic field on the spin fluctuations in WIFM based on the SCR theory. The results indicate that the spin fluctuation related negative MR is present both above and below $T_C$. The MR calculated from SCR theory is found to be negative, with its maximum value at $T_C$. It also predicts that the range above $T_C$ over which negative MR is observed increases with increasing $H$. A very similar effect is observed in Y$_2$Ni$_7$, with the region of negative MR above $T_C$ widening with increasing $H$.
\par
Fig. 3(b) shows the isothermal MR versus $H$ data above (inset) and below (main panel) $T_C$. In both regions, MR is found to follow a power law of the type MR $\sim H^m$. The exponent $m$ is found to be 0.5 below $T_C$, as evident from the 40 K and 50 K data. On the other hand, just above $T_C$ (60 K), $m$ shows a slightly enhanced value of 0.7. On further increase of $T$, MR varies almost linearly with $H$ (except for the low field part), as apparent from the 70 K and 80 K isotherms shown in the inset.
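The exponent $m$ in MR $\sim H^m$ is simply the slope of $\log|{\rm MR}|$ versus $\log H$. A sketch on synthetic data with the below-$T_C$ value $m$ = 0.5 (the amplitude is an illustrative assumption):

```python
import numpy as np

a_true, m_true = 1.0e-3, 0.5      # illustrative amplitude; m = 0.5 as below T_C
H = np.linspace(1.0, 50.0, 40)    # field in kOe
MR = -a_true * H ** m_true        # negative MR, as observed in Y2Ni7

# slope of log|MR| vs log H gives the exponent m
m_fit, _ = np.polyfit(np.log(H), np.log(-MR), 1)
print(m_fit)  # -> 0.5
```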
\begin{table}
\begin{center}
\caption{Basic characteristics of Y$_2$Ni$_7$ together with those of some itinerant weak ferromagnets.~\cite{ak}}
\begin{tabular}{lccccccccccccccc}
\hline
&& Y$_2$Ni$_7$ && ZrZn$_2$ && InSc$_3$ && Y$_4$Co$_3$ \\
\hline
\hline
$\theta_p$(K) && 40 && 33 && 8 && 14 \\
$T_C$(K) && 53 && 21 && 6 && 5 \\
$p_c$($\mu_B$/at.) && 0.37 && 0.65 && 0.26 && 0.14 \\
$p_{sat}$($\mu_B$/at.) && 0.06 && 0.12 && 0.045 && 0.012 \\
RWR = $p_c$/$p_{sat}$ && 6.17 && 5.4 && 5.78 && 11.5 \\
$\Gamma$ (mJ/mol K$^2$) && 52.3 &&45&&12 && 3.45 \\
\hline
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{Heat Capacity}
The $C$ versus $T$ data of Y$_2$Ni$_7$ from 2 K to room temperature are shown in fig. 4 (a). Apparently no anomaly is observed near $T_C$ in the $C(T)$ plot. This is due to the fact that the low moment ordering associated with the WIFM makes a small contribution as compared to the lattice and electronic components of $C$. We have carefully examined the low-$T$ behavior of $C$, as shown in fig. 4 (b). At $T \ll \Theta$ ($\Theta$ = Debye temperature), the lattice part of the heat capacity $C_{debye}$ has a $T^3$ dependence. The other contributions to $C$ are the spin wave term of the ordered FM state and a linear term corresponding to the electronic heat capacity. The solid line in fig. 4(b) represents a fit to the data with $C = \Gamma T + a_{sw}T^{3/2} + \lambda T^3$, where the terms respectively denote the electronic heat capacity, the spin wave contribution and the low-$T$ lattice contribution. The best fit is obtained for $\Gamma$ = 52.3 mJ mol$^{-1}$ K$^{-2}$, $a_{sw}$ = 2.61 mJ mol$^{-1}$ K$^{-5/2}$ and $\lambda$ = 0.506 mJ mol$^{-1}$ K$^{-4}$. $\lambda$ is related to the Debye temperature $\Theta$ as $\lambda = \frac{12}{5}\pi^4\frac{pR}{\Theta^3}$, where $p$ is the number of atoms per formula unit and $R$ is the universal gas constant. Using this relation, we get the value of $\Theta$ to be 306 K. The interesting point to be noted here is the enhanced value of $\Gamma$, which is close to 1 mJ mol$^{-1}$ K$^{-2}$ for a simple metal like Cu. Such enhancement is clearly related to the strong spin fluctuations in WIFM as described by Moriya.~\cite{moriya} Enhanced $\Gamma$ has also been observed in other WIFMs; for example, it is close to 45 mJ mol$^{-1}$ K$^{-2}$ in ZrZn$_2$.~\cite{zrzn2b} The spin wave contribution for Y$_2$Ni$_7$ is rather weak, which is also a characteristic feature of a WIFM.~\cite{heusler}
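The inversion of $\lambda = \frac{12}{5}\pi^4 pR/\Theta^3$ for $\Theta$ can be sketched numerically. With the fitted $\lambda$ and $p$ = 9 atoms per formula unit (2 Y + 7 Ni) this returns $\Theta$ in the low-300 K range; the exact figure depends on the precision of the fitted $\lambda$, so the printed value should only be compared with the quoted 306 K at that level:

```python
import numpy as np

def debye_temperature(lam_mJ_per_molK4, p):
    """Invert lambda = (12/5) * pi^4 * p * R / Theta^3 for Theta (in K)."""
    R = 8.314                          # J mol^-1 K^-1, universal gas constant
    lam = lam_mJ_per_molK4 * 1e-3      # mJ mol^-1 K^-4 -> J mol^-1 K^-4
    return ((12.0 / 5.0) * np.pi ** 4 * p * R / lam) ** (1.0 / 3.0)

# Y2Ni7: p = 9 atoms per formula unit, lambda = 0.506 mJ mol^-1 K^-4 (fitted)
theta = debye_temperature(0.506, 9)
print(theta)
```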
\par
A comparison of the electronic and magnetic properties of Y$_2$Ni$_7$ with a few other well-known WIFMs is given in table I. The observed parameters characterizing the WIFM behavior in Y$_2$Ni$_7$ are close to the values observed in the other materials.
\begin{figure}[t]
\vskip 0.4 cm
\centering
\includegraphics[width = 8 cm]{fig6.eps}
\caption {(Color online) (a) Temperature dependence of spontaneous magnetization $M_S $ (left axis) and inverse initial susceptibility $\chi^{-1}_{0}$ (right axis) which are obtained from the high-field extrapolation of modified Arrott plot (fig. 5 (b)). The solid lines represent the fit to the data by eqns. 1 and 2. (b) Kouvel-Fisher plot of spontaneous magnetization $M_S$ (left axis) and inverse initial susceptibility $\chi^{-1}_{0}$ (right axis) for Y$_2$Ni$_7$. Straight lines are the linear fit to the data.}
\end{figure}
\section{Scaling and Critical exponents}
\subsection{Modified Arrott plot}
For a second order phase transition from a PM to a FM phase, the critical behavior near $T_C$ is characterized by a set of critical exponents, namely $\beta$, $\gamma$ and $\delta$, which are respectively associated with the spontaneous magnetization ($M_S$), initial susceptibility ($\chi_0 = \lim_{H \to 0} M/H$) and magnetization isotherm ($M-H$). The scaling hypothesis suggests the following power law relations near the critical region:~\cite{stanley}
\begin{eqnarray}
M_S(T)&=&M_0 {|\epsilon|}^{\beta}, \epsilon < 0 , T < T_C \\
\chi^{-1}_{0} (T)& =& G(\epsilon)^{\gamma} , \epsilon > 0 , T > T_C \\
M &=& XH^{1/\delta} , \epsilon = 0 , T = T_C \\
\nonumber
\end{eqnarray}
where $\epsilon$ = $(T-T_C)/T_C$ is the reduced temperature, and $M_0$, $G$ and $X$ are the critical amplitudes.
\par
In order to determine the critical exponents of Y$_2$Ni$_7$, we recorded several $M$ versus $H$ isotherms. In fig. 5(a) we plot $M^2$ as a function of $H/M$ at different temperatures around $T_C$, which is commonly known as the Arrott plot.~\cite{arrott} In the mean field theory, one would expect the $M^2$ {\it vs.} $H/M$ isotherms to be a series of parallel straight lines around $T_C$, with the line at $T = T_C$ passing through the origin. The main observations from the Arrott plot are: (i) the curves are non-linear even at high field, which rules out the possibility of a mean field model for the phase transition at $T_C$; and (ii) the plots show downward curvature, {\it i.e.,} the slope of the curves is always positive. According to the criterion suggested by Banerjee,~\cite{bn} the positive slope indicates that the phase transition at $T_C$ is second order in nature. The occurrence of a second order phase transition allows us to investigate the critical behavior of Y$_2$Ni$_7$ on the basis of the exponents described in eqns. 1-3.
\begin{figure}[t]
\vskip 0.4 cm
\centering
\includegraphics[width = 8.5 cm]{fig7.eps}
\caption {(Color online) (a) Magnetization isotherm collected at 53 K for Y$_2$Ni$_7$. Inset shows the same plot in log-log scale and the straight line is the linear fit following eqn. 3. (b) shows the reduced magnetization ($M/\epsilon^{\beta}$) plotted against the reduced field ($H/\epsilon^{\beta + \gamma}$). The plot shows all the data collapse into two separate branches: one below $T_C$ and another above $T_C$. Inset shows the same plot in the log-log scale.}
\end{figure}
\par
Considering the non-mean field like behavior of the second order phase transition in Y$_2$Ni$_7$, we have used the more generalized modified Arrott plot technique. This is based on the Arrott-Noakes equation of state in the critical region:~\cite{an}
\begin{eqnarray}
(H/M)^{1/\gamma} &=& a(T-T_C)/T + bM^{1/\beta}
\end{eqnarray}
where $a$ and $b$ are constants.
\par
In the modified Arrott plot, $M^{1/\beta}$ is plotted against $(H/M)^{1/\gamma}$ for a suitable choice of the exponents $\beta$ and $\gamma$. The proper choice of $\beta$ and $\gamma$ will produce a modified Arrott plot where the curves are parallel to each other at least in the high field region. We checked this for the $\beta$ and $\gamma$ values of the 3D Heisenberg model, 3D Ising model etc., but none of them provide parallel straight lines in the modified Arrott plot. Hence, for the proper choice of the exponents, we varied $\beta$ and $\gamma$ over a wide range starting from the above models. After comparing a large number of \{$\beta, \gamma$\} pairs, it was found that the best parallel set of straight lines in the modified Arrott plot occurs for $\beta$ = 0.31 and $\gamma$ = 1.4. Fig. 5(b) shows such a plot of Y$_2$Ni$_7$ at selected temperatures around $T_C$. We can also calculate the value of $\delta$ using the Widom scaling relation, $\delta = 1+\gamma/\beta$, and the corresponding value of $\delta$ is found to be 5.52.
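As a quick arithmetic check, the Widom relation $\delta = 1 + \gamma/\beta$ applied to the two exponent sets listed in table II:

```python
def widom_delta(beta, gamma):
    """Widom scaling relation delta = 1 + gamma/beta."""
    return 1.0 + gamma / beta

print(widom_delta(0.31, 1.40))    # modified-Arrott-plot exponents -> ~5.52
print(widom_delta(0.306, 1.401))  # Kouvel-Fisher exponents -> ~5.58
```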
\par
The modified Arrott plot provides a good opportunity to calculate the values of $M_S$ and $\chi_0$. Linear extrapolation of the high field straight line portion of each isotherm provides the values of ($M_S)^{1/\beta}$ and ($\chi^{-1}_{0})^{1/\gamma}$ as intercepts on the ($M)^{1/\beta}$ and ($H/M)^{1/\gamma}$ axes respectively. Using the values of $\beta$ and $\gamma$ of the modified Arrott plot, one can then calculate $M_S$ and $\chi^{-1}_{0}$.
The $T$ dependence of $M_S$ and $\chi^{-1}_0$, obtained from the intercepts of the modified Arrott plot, is plotted in fig. 6 (a). The variation of these quantities in the critical region satisfies the behavior expected from the scaling laws (eqns. 1 and 2). We have fitted the $M_S(T)$ and $\chi^{-1}_0(T)$ curves with eqns. 1 and 2 respectively. This provides a new set of values of $\beta$ and $\gamma$. The value of $\beta$ obtained from the $M_S(T)$ data is 0.292 ($\pm$ 0.006), while the $\gamma$ value from the $\chi^{-1}_0(T)$ data is 1.44 ($\pm$ 0.02).
\subsection{Kouvel-Fisher method}
A more accurate way to determine the exponents $\beta$ and $\gamma$ is the Kouvel-Fisher method.~\cite{ni} It is based on the equations:
\begin{eqnarray}
\frac{M_S}{dM_S/dT} = \frac{T-T_C}{\beta} \\
\frac{\chi_0^{-1}}{d\chi_0^{-1}/dT} =\frac{T-T_C}{\gamma}
\end{eqnarray}
Therefore, the $T$ variation of $M_S (dM_S/dT)^{-1}$ and $\chi^{-1}_{0}(d\chi^{-1}_{0}/dT)^{-1}$ yields straight lines with slopes $1/\beta$ and $1/\gamma$ respectively, with the intercepts on the $T$ axis giving the values of $T_C$. The $T$ variation of these quantities is shown in fig. 6 (b), and the resulting linear curves confirm the applicability of the Kouvel-Fisher method for the present sample. From a linear fit of the left curve ($M_S (dM_S/dT)^{-1}$ versus $T$), we have obtained $\beta$ = 0.306 ($\pm$ 0.002) and $T_C$ = 54.42 K. On the other hand, a linear fit of the right curve ($\chi^{-1}_{0}(d\chi^{-1}_{0}/dT)^{-1}$ versus $T$) provides $\gamma$ = 1.401 ($\pm$ 0.02) and $T_C$ = 53.58 K. These values are quite close to those obtained from the modified Arrott plot method.
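The Kouvel-Fisher linearization can be sketched on synthetic data generated from eqn. 1, using $\beta$ and $T_C$ close to the fitted values (a numerical derivative stands in for the measured one; only interior points are fitted to avoid edge errors of the finite difference):

```python
import numpy as np

beta_true, Tc_true, M0 = 0.306, 54.4, 1.0
T = np.linspace(44.0, 53.4, 50)                    # K, below T_C
Ms = M0 * ((Tc_true - T) / Tc_true) ** beta_true   # eqn. 1

# Kouvel-Fisher: M_S / (dM_S/dT) = (T - T_C) / beta is linear in T
kf = Ms / np.gradient(Ms, T)
slope, intercept = np.polyfit(T[2:-2], kf[2:-2], 1)
beta_fit = 1.0 / slope
Tc_fit = -intercept / slope
print(beta_fit, Tc_fit)   # recovers ~0.306 and ~54.4 K
```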
\par
The value of $\delta$ can also be independently calculated from the $M-H$ isotherms by using eqn. 3. Fig. 7 (a) represents the $M-H$ isotherm at 53 K (the closest measured temperature to $T_C$) along with the $\log M$ versus $\log H$ plot in the inset. According to eqn. 3, the $\log M$ versus $\log H$ plot should be a straight line with slope 1/$\delta$ at $T_C$. We have calculated the value of $\delta$ from the log-log plot and it is found to be 5.35. This value is also quite close to those obtained from the modified Arrott plot and the Widom scaling relation.
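Extracting $\delta$ from the slope of $\log M$ versus $\log H$ can be sketched the same way, with an illustrative amplitude $X$ (not the measured one):

```python
import numpy as np

delta_true, X = 5.35, 0.1        # delta as quoted for the 53 K isotherm; X illustrative
H = np.logspace(0.0, 1.7, 30)    # field grid (arbitrary units)
M = X * H ** (1.0 / delta_true)  # eqn. 3 at T = T_C

# the slope of log M vs log H is 1/delta
slope, _ = np.polyfit(np.log10(H), np.log10(M), 1)
print(1.0 / slope)   # -> 5.35
```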
\subsection{Scaling theory}
The critical exponents obtained from different methods are found to be close to each other. However, it is important to check the reliability of the values through the scaling hypothesis. According to the hypothesis~\cite{stanley, kaul2} $M(H, \epsilon)$ is a universal function of $T$ and $H$ and there exists a reduced equation of state of the form:
\begin{eqnarray}
M(H,\epsilon) &=& \epsilon^{\beta}f_{\pm} (H/\epsilon^{\beta+\gamma})
\end{eqnarray}
\par
where the functions $f_{+}$ and $f_{-}$ are for $T>T_C$ and $T<T_C$ respectively. This equation implies that the experimentally observed $M/\epsilon^{\beta}$ versus $H/\epsilon^{\beta + \gamma}$ data should collapse onto two different curves, one for temperatures below $T_C$ and the other for temperatures above $T_C$. Such scaling is realized if one chooses the right values of $\beta$, $\gamma$ and $T_C$.
\par
In order to check this criterion for Y$_2$Ni$_7$, we have plotted $M/\epsilon^{\beta}$ as a function of $H/\epsilon^{\beta + \gamma}$ (see fig. 7(b)) in the critical region, using the values of $\beta$ and $\gamma$ obtained from the Kouvel-Fisher method. It is to be noted that all the curves converge into two branches depending on whether $T > T_C$ or $T<T_C$. The inset of fig. 7(b) shows the same plot on a log-log scale for better clarity. This shows that the scaling hypothesis is obeyed over a wide range of $T$ and $H$, and therefore the calculated critical exponents are meaningful.
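The collapse itself can be demonstrated numerically: magnetization curves generated from the Arrott-Noakes form of eqn. 4 (with $(T-T_C)/T$ replaced by $\epsilon$ and illustrative constants $a$ = $b$ = 1) collapse exactly onto a single $f_+$ branch when rescaled according to eqn. 7. This is a self-consistency sketch, not the experimental data:

```python
import numpy as np
from scipy.optimize import brentq

beta, gamma = 0.306, 1.401        # Kouvel-Fisher exponents for Y2Ni7
a, b = 1.0, 1.0                   # illustrative Arrott-Noakes constants

def M_of_H(H, eps):
    """Solve (H/M)^(1/gamma) = a*eps + b*M^(1/beta) for M (eps > 0)."""
    f = lambda M: (H / M) ** (1.0 / gamma) - a * eps - b * M ** (1.0 / beta)
    return brentq(f, 1e-12, 1e6)

# evaluate M at fields chosen so the scaled field x = H/eps^(beta+gamma) is common
x_grid = np.logspace(0.0, 3.0, 10)
master = np.array([
    [M_of_H(x * eps ** (beta + gamma), eps) / eps ** beta for x in x_grid]
    for eps in (0.02, 0.05, 0.10)  # three reduced temperatures above T_C
])

# the three rows coincide: the scaled data lie on one branch f_+
print(np.max(np.abs(master[1:] - master[0]) / master[0]))
```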
\begin{table}
\begin{center}
\caption{Critical exponents $\beta$, $\gamma$ and $\delta$ obtained from the modified Arrott plot (MAP), Kouvel-Fisher (KF) plot and critical isotherm (CI) for Y$_2$Ni$_7$, along with theoretical values for various models.~\cite{kaul2}}
\begin{tabular}{lcccccccccccc}
\hline
&& $\beta$ && $\gamma$ && $\delta$ \\
\hline
\hline
Y$_2$Ni$_7$ (MAP) && 0.31 && 1.40 && 5.52 \\
Y$_2$Ni$_7$ (KF) && 0.306&& 1.401 && 5.578 \\
Y$_2$Ni$_7$ (CI) && -&& - && 5.35 \\
Mean Field Model && 0.5 && 1.00 && 3 \\
3D Heisenberg Model && 0.365 && 1.386 && 4.8 \\
3D Ising Model && 0.325 && 1.241 && 4.82 \\
\hline
\hline
\end{tabular}
\end{center}
\end{table}
\section{Discussion}
The present work aims to study the magnetic and electronic properties of Y$_2$Ni$_7$ based on magnetization, transport and calorimetric measurements. The signature of the PM to FM transition in Y$_2$Ni$_7$ is found to be relatively weak in our $M(T)$, $\rho(T)$ and $C(T)$ data, indicating weak ferromagnetism in the sample. This is supported by the low value of the saturation moment, as evident from the $M(H)$ isotherms. The ground state physical properties indicate the WIFM character of the sample, which is particularly evident from the large value of the coefficient of the $T^2$ term in $\rho$ and the large coefficient of the linear term in $C$. Such enhancement is connected to the spin fluctuations present in the itinerant ferromagnet.
\par
In Y$_2$Ni$_7$, spin fluctuations exist well above $T_C$. Notably, such behavior is supported by theoretical calculations based on the SCR model.~\cite{ueda2} In our experimental data, we observe negative MR at temperatures as high as 100 K, which is about twice the $T_C$ of the sample. It has been shown that the MR arising from spin fluctuations in a weakly FM material should follow a linear field dependence, {\it i.e.,} (MR)$_{SF} \sim -H$. Below $T_C$, we observe MR to vary as $-H^{0.5}$ rather than showing a linear field dependence. Although MR does not vary linearly with $H$ below $T_C$, we do observe a linear $H$ dependence of MR at high field for $T >$ 60 K.
\par
The $T$ dependence of $\rho$ and $C$ shows typical Fermi liquid like behavior with enhanced values of the coefficients of the $T^2$ and $T$ terms respectively. It is to be noted that their ratio $B/\Gamma^2$ is an important parameter to ascertain the Fermi liquid state in a metal.~\cite{kadowaki, rice} The coefficient $B$ in $\rho$ arises from electron-electron scattering and is found to be proportional to the square of the effective mass of the conduction electrons, while the electronic specific heat coefficient is linearly proportional to the effective mass. Therefore, within a class of materials obeying the renormalized band picture, the ratio should have a universal value. For heavy fermion metals, the ratio is found to be close to 1.0$\times$10$^{-5} \Omega$ cm mol$^2$ K$^2$ J$^{-2}$.~\cite{kadowaki} However, for transition metals, the ratio has an average value of 0.9 $\times$10$^{-6}\Omega$ cm mol$^2$ K$^2$ J$^{-2}$, one order of magnitude smaller than that of the heavy fermions. We have calculated the ratio for Y$_2$Ni$_7$ and it turns out to be 1.8 $\times$10$^{-6} \Omega$ cm mol$^2$ K$^2$ J$^{-2}$, quite close to the value found in transition metals. Although the coefficients $B$ and $\Gamma$ are much enhanced in Y$_2$Ni$_7$ due to spin fluctuations, the ratio remains the same. This indicates that the spin fluctuations in Y$_2$Ni$_7$ can be well accounted for by the {\it renormalized electronic band parameters}.
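The quoted ratio follows directly from the two fitted coefficients:

```python
# Kadowaki-Woods-type ratio B / Gamma^2 from the coefficients quoted in the text
B = 5.04e-9        # Ohm cm K^-2, coefficient of the T^2 term in rho
Gamma = 52.3e-3    # J mol^-1 K^-2, electronic heat-capacity coefficient
ratio = B / Gamma ** 2
print(ratio)       # -> ~1.8e-6 Ohm cm mol^2 K^2 J^-2
```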
\par
One interesting observation is the effect of $H$ on the $T^2$ dependent part of the $\rho(T)$ data. The temperature window over which $\rho \sim T^2$ diminishes with increasing $H$. The SCR theory, contrary to our experimental result, actually predicts a broadening of the $T^2$ dependent region with $H$,~\cite{ueda2} and such behavior has been observed experimentally.~\cite{heusler} This contradictory result in Y$_2$Ni$_7$ might be an indication of a field induced instability in the Fermi liquid ground state of Y$_2$Ni$_7$. It is to be noted that the sample remains FM in high fields and $T_C$ does not decrease with increasing $H$. Therefore, the emergence of the NFL like state with $H$ cannot be attributed to the system nearing a quantum critical point.~\cite{nfl} It is to be noted that ZrZn$_2$ shows NFL like resistivity behavior at ambient conditions in the FM state.~\cite{zrzn2b} Coexistence of FM and NFL states has been observed in the alloys URu$_{2-x}$Re$_x$Si$_2$, and it is found to be feasible if we consider the material to be disordered and anisotropic, leading to the formation of FM clusters.~\cite{bauer} Such a scenario cannot be ruled out in Y$_2$Ni$_7$. On the other hand, electronic structure calculations indicate a weak peak in the density of states (DOS) near the Fermi level of Y$_2$Ni$_7$, which is found to be responsible for the WIFM character.~\cite{band} Any subtle change in the DOS with $H$ may also result in reducing the $T^2$ dependent range of $\rho$.
\par
The critical exponents calculated from the modified Arrott plot, Kouvel-Fisher plot and critical isotherm method are listed in table II. It is evident that the values calculated from the different techniques are quite close. Among these methods, the Kouvel-Fisher plot provides the most accurate values of the critical exponents. The exponents from the Kouvel-Fisher method show scaling behavior, where the scaled equation of state (eqn. 7) produces two curves for the states below and above $T_C$ (see fig. 7 (b)). This confirms the reliability of the calculated exponents.
\par
In table II, we show the theoretical values of the critical exponents for the mean field, 3D Ising and 3D Heisenberg models. Notably, the calculated exponents do not match well with any of these 3D models. A similar discrepancy was also observed in various other itinerant magnetic systems and has been attributed to the length scale of the interaction. The 3D Ising and Heisenberg models described in table II are of {\it short range} type, {\it i.e.,} the spin-spin interaction falls off rapidly with distance. However, for an itinerant system, the interaction can be of long range due to the mobile electrons. For long range interaction, the spin-spin interaction term varies as $J(r) \sim r^{-(d + \sigma)}$, where $d$ is the effective dimensionality of the system, $r$ is the spin-spin distance and $\sigma$ is an exponent.~\cite{fisher} It has been found that such a long range interaction model holds for $\sigma < 2$.
\par
In the case of the Cr-Fe itinerant system~\cite{fecr}, the observed critical exponents were explained on the basis of long range interaction.~\cite{fisher} It was found that for $\sigma$ = 1.34, the observed exponents ($\beta$ = 0.298, $\gamma$ = 1.392, $\delta$ = 5.67) match well with the 2D Ising model coupled with long range interaction. Interestingly, like the Cr-Fe system, the critical exponents obtained for Y$_2$Ni$_7$ from the present study are also very close to the 2D Ising model with long range interaction. Therefore, it appears that Y$_2$Ni$_7$ belongs to the same universality class as the Cr-Fe alloys as far as the critical exponents are concerned. Prompted by this similarity, we have used the same method to calculate the value of $\sigma$ that produces the best suitable set of critical exponents. For long range interaction, the exponent $\gamma = \mathcal{F}\{\sigma, d, n\}$, where $\mathcal{F}$ is a known function (see equation 9 of reference~\cite{fisher}) and $n$ is the dimension of the order parameter. The other exponents are related to $\sigma$ as $\nu = \gamma/\sigma$, $\alpha$ = 2 $-\nu d$, $\beta$ = (2$ -\alpha - \gamma$)/2, and $\delta$ = 1 + $\gamma/\beta$. We have used an iterative method to calculate $\sigma$ from these relations, starting from the experimental value of $\gamma$ (= 1.40), and the best match to our experimental data is obtained for $d$ = 2, $n$ = 1 and $\sigma$ = 1.38. This long range 2D Ising model with $\sigma$ = 1.38 produces the critical exponents $\beta$ = 0.314, $\gamma$ = 1.40, and $\delta$ = 5.46, which match very well with our experimentally obtained values. It is worth mentioning that recently such a long range model for the magnetic interaction was also found to be suitable for explaining the critical behavior of the itinerant manganite Pr$_{0.5}$Sr$_{0.5}$MnO$_3$.~\cite{pramanik}
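The exponent relations quoted above for the long-range model can be checked directly with $\gamma$ = 1.40, $\sigma$ = 1.38 and $d$ = 2:

```python
def long_range_exponents(gamma, sigma, d):
    """Relations quoted in the text for J(r) ~ r^-(d+sigma):
    nu = gamma/sigma, alpha = 2 - nu*d,
    beta = (2 - alpha - gamma)/2, delta = 1 + gamma/beta."""
    nu = gamma / sigma
    alpha = 2.0 - nu * d
    beta = (2.0 - alpha - gamma) / 2.0
    delta = 1.0 + gamma / beta
    return beta, delta

beta, delta = long_range_exponents(gamma=1.40, sigma=1.38, d=2)
print(beta, delta)   # -> ~0.314 and ~5.45, consistent with the quoted values
```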
\par
The possible long range spin-spin interaction in Y$_2$Ni$_7$ can be understood from the itinerant character of the electrons responsible for the magnetism. Since the electrons are delocalized, the spin-spin interaction can be of long range. The 2D Ising character of the magnetic interaction can have its origin in the anisotropic crystal structure. Y$_2$Ni$_7$ has the Gd$_2$Co$_7$ type rhombohedral structure, which can be considered to be derived by stacking YNi$_5$ and YNi$_2$ slabs in a 2:1 ratio along the $c$ axis. Such a layered arrangement of atoms can give rise to an effectively 2D character of the magnetic interaction.
\section{Acknowledgment}
AB wishes to thank the Council for Scientific and Industrial Research (CSIR), India for his research fellowship. The authors would like to acknowledge the Low Temperature \& High Magnetic Field (LTHM) facilities at CSR, Indore (sponsored by DST) for the heat capacity measurements.
\section{\label{Sec1}Introduction}
In the central hotspot-ignition scheme of inertial confinement fusion (ICF)\cite{Nuckolls72,Atzeni04}, a layered capsule with a frozen deuterium-tritium (DT) main fuel shell is imploded, and the central hotspot is compressed and heated to reach the ignition condition. A thermonuclear burn wave from the ignited hotspot propagates outward towards the highly compressed main fuel, resulting in self-sustaining burn and fusion energy gain. Two main implosion schemes, direct drive (DD)\cite{Bodner98} and indirect drive (ID)\cite{Lindl95}, have been proposed. In the DD scheme, lasers are directly focused on the capsule ablator surface. In the ID scheme, the capsule is placed inside a hohlraum with a high-Z shell, and lasers irradiate the inner wall of the hohlraum to generate radiation (soft x-rays) that ablates and heats the capsule ablator surface, where a high temperature and pressure plasma is formed; the resulting implosion dynamics highly compress the main fuel and heat the hotspot to achieve ignition. So far, the hotspot has been designed for isobaric ignition, in which the implosion shock running inside the hotspot experiences multiple reflections back and forth between the hotspot center and the interface between the hotspot and the highly compressed main fuel (below, called the hotspot interface) until stagnation, when the inward velocity at the hotspot interface vanishes. At the hotspot interface, each reflection exerts a force from a light fluid (the hotspot) on an inward-accelerating heavy fluid (the highly compressed main fuel), which is thereby decelerated, so that $\nabla p\cdot\nabla \rho<0$, with $p$ the pressure and $\rho$ the density; consequently, hydrodynamic instabilities\cite{Taylor50} and mix occur, since the actual implosion shock is not ideally spherically symmetric.
The shock reflections also amplify the asymmetry of the hotspot interface, since there always exists some spatial roughness at the interface, resulting in hotspot distortion. In the recent full ID experiments on the National Ignition Facility (NIF), the layered capsule is placed inside a cylindrical hohlraum, which may cause a radiation flux asymmetry at the CH ablator surface of the capsule and exacerbate the hotspot distortion; in addition, the isobaric-ignition hotspot, driven by a low ablation pressure of at most $\sim 100\,Mbar$ and a low implosion velocity of at most $\sim300\,km/s$\cite{Haan11,Lindl14,Hurricane14}, may undergo serious hydrodynamic instabilities and mix. In the low adiabat ($\alpha= P/P_F\sim1.45$) experiments\cite{Lindl14}, where $P$ and $P_F$ are the main-fuel pressure and Fermi pressure at stagnation respectively, the implosion dynamics caused serious distortion of the hotspot and main-fuel layer, and ablator materials mixed into the hotspot, resulting in the failure of the hotspot isobaric ignition. In the high adiabat ($\alpha\sim2.8$) experiments\cite{Hurricane14,Dittrich14}, although the hydrodynamic instabilities at the radiation ablation front (RAF) of the CH ablator were improved and the hotspot performed isobaric ignition, no burn appeared, and the hotspot distortion and mix were clearly observed. These campaigns have exhausted the existing NIF energy of $\sim2\,MJ$. In addition, in the high adiabat experiments the main fuel only reached a low areal density ($<1.0\,g/cm^2$), which makes burn with gain difficult.
In this report, we propose an ID-DD hybrid-drive work-dominated ignition scheme to address the above issues. The capsule, placed in a spherical hohlraum (SH) with octahedral symmetry to ensure radiation symmetry, is first compressed by successively intensified ID shocks that adiabatically compress the main fuel to high density and combine into a merged shock (MS) in the vicinity of the inner surface of the main fuel layer. The ID MS runs into the hotspot, which is thereby first compressed and heated, and rebounds from the hotspot center toward the hotspot interface. Meanwhile, an enhanced shock (ES) with pressure greater than that of the MS is launched for ignition by DD lasers\cite{He13,*Fan13}; it rapidly reaches the hotspot interface and collides with the MS, which is stopped before its first reflection there. Therefore, the hydrodynamic instabilities and asymmetry that the MS would cause at the hotspot interface are suppressed. The ES and a follow-up compression wave passing through the interface provide large $pdV$ work to the hotspot, which is further compressed and heated, resulting in work-dominated hotspot ignition soon after the ES, running in the hotspot, first reflects at the hotspot interface. During implosion compression, the rate of change of the total internal energy $E$ in the hotspot varies from $dE/dt<0$ to $dE/dt>0$, so the hotspot ignition condition can be obtained from $dE/dt =0$. Assuming that the temperature $T$ and density $\rho$ in the hotspot are homogeneous, we obtain the equation for the ignition condition (onset of $\alpha$-particle self-heating)\cite{Atzeni04,Lindl95} of a DT system in spherically symmetric geometry in the form
\begin{eqnarray}
g(T,\xi ){(\rho {r_D})^2} + {A_W}\Gamma Tu(\rho {r_D}) = {A_S}\frac{{{T^{7/2}}}}{{\ln \Lambda }},\label{Eq1}
\end{eqnarray}
which relates the areal density $\rho r_D(g/cm^2 )$ and the temperature $T (keV)$, where $r_D$ is the hotspot radius. In Eq.(\ref{Eq1}),
\begin{eqnarray}
g(T,\xi ) = {A_Q}n_T^2\xi (1 - \varphi ){\left\langle {\sigma \,v} \right\rangle _{DT}}{\varepsilon _0}{f_\alpha } - {A_B}n_T^2{(1 + \xi )^2}\sqrt T, \label{Eq2}
\end{eqnarray}
is the net rate ($g>0$) of $\alpha$-particle energy deposition (the first term on the right-hand side of Eq.(\ref{Eq2})) minus the electron bremsstrahlung emission loss (the second term). In Eq.(\ref{Eq2}), ${f_\alpha } = 1 - \frac{1}{{4{\theta _\alpha }}} + \frac{1}{{160\theta _\alpha ^3}}$ with ${\theta _\alpha } = \rho {r_D}/\rho {\ell _\alpha}$ is the fraction of the $\alpha$-particle energy deposited into the hotspot\cite{Krokhin73}, and $\rho {\ell _\alpha } = \frac{{0.025{T^{5/4}}}}{{1 + 0.0082{T^{5/4}}}}$ is the $\alpha$-particle range at densities $\rho=(10-100)g/cc$\cite{Atzeni04,Hurricane14}; the fraction of bremsstrahlung x-ray energy absorbed in the hotspot is neglected. $g(T,\xi)=0$ is the boundary between the $\alpha$-particle energy deposition rate from the thermonuclear reaction and the electron energy emission loss rate, where $\xi=n_D/n_T$ is the ratio of the deuterium particle number $n_D$ to the tritium particle number $n_T$. A deuterium-rich system is very important for ICF because the ion temperature during the burn process can far exceed the deuterium-deuterium ignition threshold, in which case tritium is generated and the initially assembled tritium is saved. In the ignition discussion, we take the burn-up fraction $\varphi=0$ and $\xi=1$, and from $g=0$ we obtain the minimal ignition temperature $T\sim4.3keV$. In Eq.(\ref{Eq1}), $A_W\Gamma Tu$ comes from the power $pu$ provided by the ES and compression wave through the hotspot interface, where the pressure $p=\Gamma \rho T$, and the velocity $u$ at the hotspot interface and the isothermal sound velocity $C_T = \sqrt{\Gamma T}$ are both in units of 10 km/s; $A_S T^{7/2}/\ln\Lambda$ comes from the electron thermal conduction rate, $\langle\sigma v\rangle_{DT}$ is the DT thermonuclear reaction rate in $10^{-24}cm^3/s$, $\ln\Lambda$ is the Coulomb logarithm, and $\varepsilon_0$ is the $\alpha$-particle energy in $MeV$. The coefficients are $A_Q =1.6\times10^6$, $A_B=5.0\times10^5$, $A_W=34.8$, and $A_S =2.7\times10^8$.
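For reference, with $\xi=1$ and $\varphi=0$, the boundary $g(T,\xi)=0$ reduces, after cancelling $n_T^2$, to a condition on $T$ alone (with $f_\alpha$ evaluated at the hotspot conditions through $\theta_\alpha$):
\begin{eqnarray*}
{A_Q}{\left\langle {\sigma \,v} \right\rangle _{DT}}(T)\,{\varepsilon _0}{f_\alpha } = 4{A_B}\sqrt T,
\end{eqnarray*}
whose numerical root gives the minimal ignition temperature $T\sim4.3keV$ quoted above.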
Eq.(\ref{Eq1}) is quadratic in $\rho r_D$, since ${\theta _\alpha } = \rho {r_D}/\rho {\ell _\alpha }\sim 1$ in $f_{\alpha}(\theta_{\alpha})$. Its positive root gives the general solution in the form
\begin{eqnarray}
\rho {r_D} = \frac{{{A_w}\Gamma Tu}}{{2g}}(\sqrt {1 + h} - 1) \equiv {f_1}(T,u), \label{Eq3}
\end{eqnarray}
where $h = \frac{{4g{A_S}{T^{7/2}}/\ln \Lambda }}{{A_W^2{\Gamma ^2}{T^2}{u^2}}}$.
Setting $u=0$, we obtain the isobaric ignition threshold
\begin{eqnarray}
\rho {r_D} = \sqrt {\frac{{{A_S}{T^{7/2}}/\ln \Lambda }}{g}} \equiv {f_2}(T). \label{Eq4}
\end{eqnarray}
For $u$ large enough that $h\ll1$, we obtain the work-dominated ignition threshold
\begin{eqnarray}
\rho {r_D} = \frac{{{A_S}{T^{7/2}}/\ln \Lambda }}{{{A_W}\Gamma Tu}} \equiv {f_3}(T,u). \label{Eq5}
\end{eqnarray}
Eq.(\ref{Eq5}) carries the additional relation $A_W \Gamma Tu = {2g}(T,\xi = 1)$; substituting it into Eq.(\ref{Eq1}) recovers Eq.(\ref{Eq5}). Therefore, from this relation or from $h\ll1$ we can estimate the interface velocity $u=u(T)$ demanded for work-dominated ignition, and then obtain the ignition threshold by combining it with Fig.\ref{Fig1}. We refer to Eq.(\ref{Eq3}) as nonisobaric ignition, except in the isobaric case; work-dominated ignition is nonisobaric ignition without MS reflections at the hotspot interface.
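As a consistency check on the algebra above, the sketch below evaluates $f_1$, $f_2$, and $f_3$ numerically and verifies that $f_1\to f_2$ as $u\to0$ and $f_1\to f_3$ when $h\ll1$; the values of $g$, $T$, $\Gamma$, and $\ln\Lambda$ used here are placeholders, not the paper's hotspot parameters.

```python
import math

# Hedged numerical check of Eqs. (3)-(5); g, T, Gamma, lnLam are placeholders.
A_W, A_S = 34.8, 2.7e8   # coefficients quoted in the text

def f1(T, u, g, Gamma=1.0, lnLam=5.0):
    """General nonisobaric ignition threshold rho*r_D of Eq. (3)."""
    S = A_S * T**3.5 / lnLam                      # conduction term A_S T^{7/2}/ln(Lambda)
    h = 4.0 * g * S / (A_W * Gamma * T * u)**2
    return A_W * Gamma * T * u / (2.0 * g) * (math.sqrt(1.0 + h) - 1.0)

def f2(T, g, lnLam=5.0):
    """Isobaric limit (u -> 0), Eq. (4)."""
    return math.sqrt(A_S * T**3.5 / lnLam / g)

def f3(T, u, g, Gamma=1.0, lnLam=5.0):
    """Work-dominated limit (h << 1), Eq. (5)."""
    return A_S * T**3.5 / lnLam / (A_W * Gamma * T * u)

T, g = 5.0, 1.0e7
print(f1(T, 1e-3, g) / f2(T, g))       # -> approx 1: small u recovers Eq. (4)
print(f1(T, 1e9, g) / f3(T, 1e9, g))   # -> approx 1: large u recovers Eq. (5)
```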
\begin{figure}[h]
\includegraphics[width=0.5\textwidth]{Figure1}
\caption{\label{Fig1} (Color online) Hotspot ignition conditions: the areal density $\rho r_D$ versus temperature $T$ for $\xi=1$ and $\alpha$-particle energy $\varepsilon_0= 3.5 MeV$. The curves show the isobaric ignition ($u=0$, black solid line) and the nonisobaric ignition with velocities $u$ (km/s) $=100$ (red), $200$ (green), and $400$ (blue).}
\end{figure}
Fig.\ref{Fig1} shows that, at the same temperature, isobaric ignition demands a higher $\rho r_D$ than work-dominated ignition. For example, at $T\sim4.9 keV$, work-dominated ignition ($u=400 km/s$) demands only $\rho r_D \sim0.15 g/cm^2$, while the isobaric case ($u=0$) demands $\rho r_D\sim0.45 g/cm^2$, as shown by the vertical dashed line in Fig.\ref{Fig1}. It is also seen that at lower velocity, such as $u=100 km/s$, the ignition is close to isobaric. In fact, the inward velocity $u$ at the hotspot interface, where the main fuel is highly compressed, is close to the implosion velocity $V_{im}$; therefore, $V_{im}\sim400 km/s$ is necessary to raise the hotspot temperature and achieve work-dominated hotspot ignition and main fuel burn.
Such a high implosion velocity $V_{im}$ can hardly be provided by the ID ablation pressure alone. In our hybrid scheme, the implosion velocity for ignition is driven by the DD lasers, which are steadily imposed on a critical surface in the corona plasma formed by the ID-ablated ablator inside the SH. A supersonic electron thermal wave with an electron ablation front (EAF) propagates from the critical surface inward toward the RAF and decays across the corona plasma, where the wave energy is deposited. Once the supersonic velocity slows down to the electron isothermal sound speed $C_s=(\Gamma_eT_e)^{1/2}$ at time $t=t_s$, an electron ablation shock and a follow-up steady electron compression wave with high pressure at the EAF are launched, where $T_e$ is the electron temperature and $\Gamma_e$ is the electron pressure coefficient. The electron compression wave continually drives the corona plasma inward toward the RAF like a snowplow. Meanwhile, the ablator is continually ablated by the radiation temperature at the RAF as well. Therefore, a high-density plateau between the EAF and RAF is formed by the pile-up of the corona plasma and the ablated ablator plasma. We now discuss how the corona plasma is driven by the electron compression wave using the snowplow model. Assume that the areal plasma density (mass per unit area) accumulated in front of the EAF at time $t$ (here and below, the EAF means the EAF of the electron compression wave, unless otherwise specified) is $\mu (\tau ) = {\mu _0}(0) + {\mu _1}(\tau )$ with $\tau=t-t_s$, where ${\mu _0}(0) = \int\limits_{{R_{RA}}(0)}^{{R_{EA}}(0)} \rho dR$ is the areal density between the radius $R_{EA}(\tau=0)$ of the EAF and the radius $R_{RA}(\tau=0)$ of the RAF, and ${\mu _1}(\tau ) = \int\limits_0^\tau {\dot m d\tau }$ is the areal density of the ID-ablated ablator mass at the RAF, with mass ablation rate $\dot{m}\sim T^3$\cite{Olson11}.
The motion of the corona plasma pushed by the EAF can be described by the snowplow equation:
\begin{eqnarray}
\frac{d}{{d\tau }}(\mu {R^2}\frac{{dR}}{{d\tau }}) = - {R^2}[{P_{EA}}(\tau ) - P(\tau )],\label{Eq6}
\end{eqnarray}
where $P_{EA}$ is the high pressure ahead of the electron compression wave and $P=\Gamma \rho T$ is the corona plasma pressure. With the initial conditions $R(\tau=0)=R_{EA}(0)$ and $dR/d\tau(\tau =0)\approx0$, the radius $R_{EA}$ of the EAF is obtained from Eq.(\ref{Eq6}), i.e., ${R_{EA}}(\tau ) = {R_{EA}}(0) - \int\limits_0^\tau {\frac{{d{\tau _1}}}{{\mu ({\tau _1}){R^2}({\tau _1})}}\{ } \int\limits_{{0}}^{{\tau_1}} {d{\tau _2}} {R^2}({\tau _2})[{P_{EA}}({\tau _2}) - P({\tau _2})]\}$. Considering that the velocity at the RAF is approximately the imploding velocity, we can write the plateau width between the radius $R_{EA}$ of the EAF and the radius $R_{RA}$ of the RAF in the form
\begin{eqnarray}
\Delta {R_{ER}}(\tau ) = \Delta {R_{ER}}(0)
&& + \int\limits_0^\tau {d{\tau _1}\{ {V_{im}} - } \frac{1}{{{\mu _1}({\tau _1}){R^2}({\tau _1})}}\int\limits_0^{\tau_1} {d{\tau _2}} {R^2}({\tau _2})[{P_{EA}}({\tau _2}) - P({\tau _2})]\},\label{Eq7}
\end{eqnarray}
where $\Delta R_{ER}(\tau )=R_{EA}(\tau )-R_{RA}(\tau)$ and $\Delta R_{ER}(0)=R_{EA}(0)-R_{RA}(0)$. In the early stage, $P_{EA}$, driven by the steady DD lasers, is close to a constant; at the EAF, $P_{EA}/P= \Gamma_e T_e/\Gamma T\gg1$, since the electron temperature $T_e$ is far greater than the radiation temperature $T$, and $V_{im} \sim T^{1/2}$\cite{Olson11} at the RAF is less than the velocity driven by $P_{EA}$; therefore, $V_{im}<\frac{1}{{{\mu _1}({\tau _1})}}\int\limits_0^\tau {d{\tau _2}} {P_{EA}}({\tau _2})\sim C_s=(\Gamma_e T_e)^{1/2}$. As a result, the snowplow contracts the density plateau, i.e., $\Delta R_{ER}(\tau) <\Delta R_{ER}(0)$, and thus the averaged plasma density $\left\langle\rho\right\rangle\sim \mu_0/\Delta R_{ER}(\tau)$ rises within the density plateau. Later, $P=\Gamma \rho T\gg P_{EA}$ because $T$ and $\rho$ rise while $T_e$ decreases within $\Delta R_{ER}(\tau)$, and thus $\Delta R_{ER}(\tau)> \Delta R_{ER}(0)$. However, noticing that ${\mu_1}(\tau )\sim\int\limits_0^\tau {{T^3}d\tau }$, $V_{im}\sim T^{1/2}$, and $P/\mu \sim T^{-2}$, the averaged plasma density $\left\langle\rho\right\rangle\sim \mu/\Delta R_{ER}(\tau)$ still increases rapidly within $\Delta R_{ER}(\tau)$, where an extremely high pressure $P=\Gamma \rho T$, far greater than the ID ablation pressure $P_a (\sim 100 Mbar)$, is produced. Thus, this analytical discussion shows that the DD lasers may drive an extremely high-pressure plateau between the RAF and EAF, which in turn may drive the ES into the imploding capsule with an implosion velocity greater than $400 km/s$ to achieve work-dominated hotspot ignition.
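As an illustration of the snowplow dynamics of Eq.(\ref{Eq6}), the toy integration below advances $M=\mu R^2\,dR/d\tau$ and $R$ with a simple Euler step; all parameter values are dimensionless placeholders chosen only to show that $P_{EA}>P$ drives the front inward.

```python
# Toy Euler integration of the snowplow relation, Eq. (6):
#   d/dtau ( mu * R^2 * dR/dtau ) = -R^2 * (P_EA - P).
# All parameter values here are illustrative placeholders, not the paper's.
def snowplow_radius(R0, mu, P_EA, P, tau_end, n=20000):
    dtau = tau_end / n
    R = R0
    M = 0.0  # M = mu * R^2 * dR/dtau, starting from rest (dR/dtau(0) ~ 0)
    for _ in range(n):
        M -= R**2 * (P_EA - P) * dtau   # momentum-like update
        R += M / (mu * R**2) * dtau     # advance the front radius
    return R

R_final = snowplow_radius(R0=1.0, mu=1.0, P_EA=2.0, P=1.0, tau_end=0.5)
print(R_final)  # P_EA > P drives the front inward: R_final < R0
```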
To conduct the numerical simulations, we specify the target structure and the drive sources for the implosion dynamics. The SH, filled with a plasma of electron density $0.05 n_c$, has six laser entrance holes (LEHs) with octahedral symmetry (Fig.\ref{Fig2}a)\cite{Lan14,*Lan14-2}; the radii are $4.0 mm$ for the SH and $0.9 mm$ for each LEH. The area ratio of the LEHs to the hohlraum wall, an important factor characterizing the soft x-ray loss through the LEHs, is $\sim 7\% $ for the SH, comparable to the cylindrical hohlraum ($5.75 mm$ in diameter, $9.4 mm$ in length, $3.7 mm$ in LEH diameter at the two ends, and a volume close to that of the SH) in the high-foot NIF experiments\cite{Kline13}. In addition, in the SH the laser beams incident on the inner wall form a single ring for each LEH, i.e., the laser-beam spots on the inner wall are distributed on a ring, so no laser-beam overlap or cross transfer occurs in the SH with 6 LEHs, and calculation shows that the x-ray flux asymmetry of $\Delta F/F\sim 0.5\%$ comes mainly from the spherical harmonics $Y_{44}$ and $Y_{40}$\cite{Lan14,*Lan14-2}. This provides better radiation symmetry and higher laser-target coupling than the cylindrical hohlraum, in which lasers are incident in inner and outer rings to tune the complicated time-dependent radiation flux asymmetry to ensure hotspot symmetry\cite{Town14}. The capsule in the SH is made up of a CH ablator with an outer radius of $850 \mu m$, a cryogenic DT main fuel layer with a mass of $0.19 mg$, and a cavity filled with DT gas of density about $0.3 mg/cc$, as shown in Fig.\ref{Fig2}b. The capsule is imploded and compressed first by the ID radiation temperature, which has a four-step temporal profile from $100 eV$ to a peak of $270 eV$ (Fig.\ref{Fig2}c) and drives four successively intensified shocks, in a total pulse duration of $12.9 ns$ (Fig.\ref{Fig2}c), corresponding to an ID laser energy of $0.55 MJ$\cite{Lan14,*Lan14-2}.
The capsule is then further compressed by the steady DD lasers with an intensity of $2.8\times10^{15} W/cm^2$ during the last $2.2 ns$, with a flat-top power of $P_L=350 TW$ and a laser energy of $0.77 MJ$ (Fig.\ref{Fig2}c).
\begin{figure}[h]
\includegraphics[width=0.2\textwidth]{Figure2_1}
\includegraphics[width=0.25\textwidth]{Figure2_2}
\includegraphics[width=0.3\textwidth]{Figure2_3}
\caption{\label{Fig2}(Color online) Configurations of the SH and layered capsule, and the evolution of the ID radiation temperature and DD laser power. (a) Schematic plot of the SH with six laser entrance holes (LEHs) and octahedral symmetry; (b) the capsule structure: CH ablator (outer radius of $0.85 mm$), DT ice layer, and DT gas cavity; (c) the radiation temperature $T$ (eV, black) with four steps and the steady DD laser power $P_L$ of $350 TW$ (red) during the last $2.2 ns$ of the pulse.}
\end{figure}
The simulations are first performed in one-dimensional spherical geometry using the radiation-hydrodynamics code LARED-S, which is multi-dimensional, massively parallel, and Eulerian-mesh based, with 2000 mesh cells and a minimum grid size of $0.05\mu m$. The LARED-S code includes laser ray-tracing, multi-group radiation diffusion (20 groups), radiation hydrodynamics, electron and ion heat conduction, atomic physics, nuclear reactions, $\alpha$-particle transport, and the quotidian equation of state (QEOS).
Numerical results for the implosion dynamics are shown in Fig.\ref{Fig3}, which shows the time evolution of the radii of the interfaces (fronts) of the imploding capsule. The RAF (white solid line in Fig.\ref{Fig3}a), formed by the ID radiation ablating the CH, converges rapidly during the implosion. The four successively intensified shocks driven by the four-step ID temperature run into the main fuel, which is adiabatically compressed; at $t\sim 11.4 ns$ the four shocks and another shock driven by the DD lasers merge into a single MS in the innermost part of the main DT layer, which provides the main mass of the hotspot. The convergent MS, followed by a compression wave, runs directly into the hotspot, which is thereby first compressed and heated. The MS arrives at the hotspot center at $t\sim 12.55 ns$, rebounds there, and then runs outward toward the hotspot interface, reaching a radius of $\sim 148 \mu m$ at $t\sim 12.95 ns$. Meanwhile, at $t\sim 10.7ns$ the steady DD lasers are imposed on the critical surface at a radius of $\sim960 \mu m$ (white dashed line in Fig.\ref{Fig3}a) in the corona region, $\sim 300 \mu m$ away from the RAF. The DD laser energy is strongly absorbed, and the electron temperature $T_e$ rises rapidly near the critical surface. A supersonic electron thermal wave is observed, and its electron ablation front (EAF) propagates through the corona plasma with a maximal velocity of $\sim 850 km/s$ inward toward the RAF.
The electron ablation shock and the follow-up electron compression wave are launched when the supersonic electron thermal wave slows to the sound speed of $\sim200 km/s$ at the EAF (electron temperature $T_e\sim 1.2 keV$ and ion temperature $T_i\sim 1.0 keV$) at radius $R_{EA}\sim660 \mu m$ ($15\mu m$ away from the radius $R_{RA}$ of the RAF) at $t\sim 11.1ns$, where the plasma density is $\sim 0.34 g/cc$ and the electron ablation pressure is $P_{EA}\sim240 Mbar$, far greater than the ID ablation pressure of $50 Mbar$ (Fig.\ref{Fig3}b). The high-pressure electron compression wave, acting like the snowplow described by Eq.(\ref{Eq6}), drives the corona plasma and piles it up into a high-density, high-pressure plateau between the EAF and RAF, with averaged densities of $\sim 2.0 g/cc$ at $t\sim 12.4 ns$ and $\sim 3.3 g/cc$ at $t\sim 12.9 ns$ (gray region in Fig.\ref{Fig3}b) and peak pressures of $\sim 400 Mbar$ and $\sim 680 Mbar$, respectively (red curves in Fig.\ref{Fig3}b); the ID radiation temperature is $260\sim270 eV$ there during this period (Fig.\ref{Fig2}c). Such a plateau, predicted in the analytical discussion above, plays the role of a high-pressure piston that drives the ES and a steady compression wave. The inward ES runs through the imploding capsule and quickly arrives near the hotspot interface at a radius of $148\sim150 \mu m$ at $t\sim12.95 ns$, where the ES, with a pressure of $\sim 400 Mbar$, collides with the MS, which is stopped just as it arrives there from the hotspot center with a pressure of $\sim70 Mbar$ (Fig.\ref{Fig3}c). Then the ES enters the hotspot directly and, together with the follow-up compression wave, further compresses and heats the hotspot, causing the ion temperature and pressure to rise rapidly.
After the ES, rebounded from the hotspot center, reaches the interface at $t\sim13.18 ns$, hotspot ignition (the $\alpha$-particle heating rate equals the cooling rate by electron bremsstrahlung and heat conduction) soon occurs: the hotspot reaches a mass-averaged ion temperature $T_i\sim5.1 keV$, a pressure of $\sim 200 Gbar$, and an averaged areal density $\rho r_D=0.15 g/cm^2$; the $pdV$ work is about $4 kJ$ ($\sim 2/3$ from the ES and compression wave), and a total fusion energy of $25 kJ$ is released ($\sim$1/5 deposited in the hotspot) (Fig.\ref{Fig3}d), resulting in the work-dominated ignition in which the hydrodynamic instabilities and hotspot distortion are suppressed. At that time, the convergence ratio (the initial outer radius of the capsule to the final radius of the hotspot) is $\sim 25$, far less than the $\sim 35$ in NIF-NIC experiments\cite{Haan11}. In our scheme, the maximal drive pressure at the RAF is $\sim 700 Mbar$ (far larger than the ID maximal ablation pressure of $\sim 100 Mbar$) and the maximal implosion velocity reaches $\sim400 km/s$, as demanded for the work-dominated ignition discussed above. In fact, after ignition, the inward velocity $u$ at the hotspot interface is still large enough to supply further $pdV$ work to the ignited hotspot until stagnation ($u=0$), which indeed provides a margin for the hotspot ignition. By the beginning of stagnation at $t\sim 13.32 ns$, an isobaric distribution with a pressure of $\sim 930 Gbar$ in the ignited hotspot is observed, the system has released a fusion yield of $\sim 200 kJ$ ($\sim 35 kJ$ deposited in the hotspot), the areal density of the main fuel is $1.5 g/cm^2$, and $\alpha =1.6$. Finally, a fusion yield of $15 MJ$ is achieved with a total laser energy of $1.32 MJ$.
\begin{figure}[h]
\includegraphics[width=0.8\textwidth]{Figure3_1}
\includegraphics[width=0.3\textwidth]{Figure3_2}
\includegraphics[width=0.3\textwidth]{Figure3_3}
\includegraphics[width=0.3\textwidth]{Figure3_4}
\caption{\label{Fig3} (Color online) Implosion physics of the ID-DD hybrid scheme. (a) The radii of interfaces (fronts) in the imploding capsule versus time: the DD laser critical surface (white dashed line), the electron ablation front EAF (dark-gray dashed line), the radiation ablation front RAF (white solid line), the enhanced shock ES (green line), the merged shock MS (blue line), and the hotspot boundary (black line); (b) the ID ablation pressure of $50 Mbar$ (red) at $t\sim 11.1 ns$ before the EAF arrives, and the high density and pressure plateaus (gray region) between the EAF and RAF at $t\sim 12.4 ns$ and $12.9 ns$, respectively; (c) the collision of the ES and the MS at the hotspot boundary, density (black) and pressure (red); (d) ion temperature (green), density (black), and pressure (red) at $t\sim 13.26ns$ (ignition time).}
\end{figure}
We now discuss the two-dimensional LARED-S simulation results. First, we investigate the hydrodynamic instabilities at the interfaces of the imploding capsule. Defining the growth factor (GF) as the ratio of the final amplitude to the initial perturbation, Figs.\ref{Fig4}a-c show the GFs at different interfaces as a function of the mode number $l$ up to the ignition time. We compare the GFs of the work-dominated hybrid-drive ignition (red curves) and the high-foot hotspot ignition in the NIF experiments (black curves), both simulated with our LARED-S code. At the RAF, the maximal GFs, at mode $l=40$, are $\sim150$ for hybrid drive and $\sim200$ for high foot\cite{Dittrich14} (Fig.\ref{Fig4}c); at the main fuel-ablator interface, the maximal GFs, at mode $l=40$, are $\sim 220$ (hybrid) and $\sim 370$ (high foot) in Fig.\ref{Fig4}b, i.e., the hybrid-drive GFs at these two interfaces are only $75\%$ and $60\%$ of the high-foot values, respectively. Most essential is the GF at the hotspot interface: the maximal GF is $\sim15$ for hybrid drive (Fig.\ref{Fig4}a), appearing at $l=16$, far less than the $\sim 70$ of the high-foot ignition in the NIF experiments\cite{Dittrich14,Hurricane14}, in which only hotspot ignition occurred, without burn, due to mix. Our simulations also show that if the ES is absent, soon after the MS first reflects at the hotspot interface the deceleration increases rapidly until stagnation, and the GF grows by a factor of $5\sim10$ for the dominant modes within $\sim 300 ps$; see also \cite{Temporal06}. Thus the work-dominated hybrid-drive ignition scheme can significantly suppress the hydrodynamic instabilities and mix at the hotspot interface by using ES implosion ignition, and the results also show that the feedthrough effect from the outer interfaces is negligible. The hotspot is therefore well suited to achieving ignition and burn.
\begin{figure}[h]
\includegraphics[width=0.3\textwidth]{Figure4_1}
\includegraphics[width=0.3\textwidth]{Figure4_2}
\includegraphics[width=0.3\textwidth]{Figure4_3}
\caption{\label{Fig4} (Color online) Comparisons of the GFs of hydrodynamic instabilities (up to the ignition time) between the hybrid-drive capsule and the ID capsule for perturbations initially seeded at the ablator outer surface, and the two-dimensional capsule profile at the ignition time (density contours for a perturbation with mode $l=16$ and an initial roughness of $350 {\AA}$). (a)-(c): red curves for the present hybrid drive, black curves for the high-foot indirect drive in NIF experiments: (a) at the fuel-hotspot interface; (b) at the fuel-ablator interface; (c) at the radiation ablation front (RAF).}
\end{figure}
Hydrodynamic instabilities also occur at the DD laser critical surface in the corona region, where non-uniformities arise from laser imprinting (smaller scales) and from the edge overlapping of laser beam bundles (larger scales). Meanwhile, the supersonic EAF of the electron thermal wave leaves the critical surface and propagates inward toward the RAF at an average velocity of $\sim850 km/s$, while the corona plasma remains almost static during the propagation. Two-dimensional LARED-S simulations show that the initial temperature perturbation of $\delta T_e \sim 30 \mu m$ in space (Fig.\ref{Fig5}a), induced by a laser intensity nonuniformity of $\sim 10\%$ imposed at the critical surface, is smoothed to $\delta T_e \sim 1.5 \mu m$ between $t\sim 10.7$ and $11.1 ns$, when the electron ablation wave slows down to sonic speed at a radius of $\sim 660 \mu m$. As a result, the effect of the nonuniformity imposed by the DD lasers on the capsule implosion is negligible. Recently, thermal smoothing in nonuniform laser-target interaction was also verified using two-dimensional fully Fokker-Planck simulations\cite{Keskinen09}, which were also observed to predict more smoothing than the hydrodynamic Spitzer model. In our recent experiments on the $SG-II$ laser facility, a significant smoothing effect was observed as well\cite{SGII14}.
\begin{figure}[h]
\includegraphics[width=0.4\textwidth]{Figure5_1}
\includegraphics[width=0.4\textwidth]{Figure5_2}
\caption{\label{Fig5} (Color online) The electron temperature $T_e$ nonuniformity in spherical coordinates $R-\theta$, caused by the DD lasers imposed at the critical surface, is suppressed by the thermal smoothing of the supersonic electron ablation wave in the corona plasma. (a) Initial temperature perturbation $\delta T_e\sim30 \mu m$ at the critical surface ($R\sim 960\mu m$) at $t\sim10.7 ns$; (b) perturbation amplitude $\delta T_e\sim1.5\mu m$ at the EAF ($R\sim660 \mu m$) at $t\sim 11.1 ns$, when the electron ablation wave slows down to sonic speed.}
\end{figure}
Laser plasma interaction (LPI) in the ignition target is a concern for two reasons: laser energy degradation by instabilities, mainly stimulated Raman scattering (SRS), stimulated Brillouin scattering (SBS), and two plasmon decay (TPD); and energetic electrons generated by these instabilities, which might preheat the imploding fuel core and degrade its compression. In NIF experiments, where all ID laser beams enter the cylindrical hohlraum in double rings, the inner-ring beams cross and overlap along their optical paths, which increases the laser intensity and causes beam energy transfer, enhancing the SRS. The laser-beam coupling to the target is $\sim (84\pm3)\%$ over a wide range of laser energy and power\cite{Kline13}, and the energetic electrons with kinetic energy $>100 keV$ are absorbed in about 0.2 mg of DT ice (preheat) with an upper bound of $(5 \pm3)J$\cite{Doppner12}, i.e., $\sim(3.8\pm2.3)\times10^{-6}$ of the total laser energy of $1.3MJ$, resulting in an adiabat increase of $\delta \alpha\sim3.5\%$, which is acceptable at $\alpha \sim1.5$. By comparison, in our hybrid scheme the layered capsule is inside the SH with a radius of $4 mm$; the ID laser energy coupling to the target is greater than $90\%$ because the lasers are incident in only a single ring, and the short LPI length and the absence of beam overlap and energy cross transfer result in a low LPI loss of $\sim6\%$, mainly from SBS, as measured in OMEGA experiments\cite{Wallace99}. The DT preheating energy from energetic electrons $>100 keV$ is estimated at most $(2.1\pm1.3)J$ for the ID laser energy of $0.55 MJ$, and thus the effect of fuel preheating on the adiabat is negligible. As for the DD lasers, backscattering experiments on OMEGA with regard to shock ignition\cite{Betti07}, for spike beams, show that the energy loss of the total DD lasers with an intensity of $2.8\times10^{15}W/cm^2$ is $\sim 10\%$ without phase plates\cite{Theobald12}.
In fact, the loss can be significantly reduced by using beam-smoothing technologies. Further experiments are necessary for ignition lasers of higher intensity.
In summary, we have proposed the indirect-direct hybrid-drive work-dominated ignition scheme for inertial confinement fusion. The layered fuel capsule inside the octahedral spherical hohlraum is compressed first by the ID radiation and then by the DD lasers during the last part of the pulse. The LARED-S simulations show that a piston-like high-pressure plateau, with pressure far greater than the ID ablation pressure, is formed between the EAF and RAF, and the ID implosion dynamics is significantly improved. The enhanced shock for ignition is driven by this high-pressure plateau, and the maximal imploding velocity reaches $\sim400 km/s$, as demanded for the work-dominated ignition. The ES stops the ID shock reflections at the hotspot interface and suppresses the hydrodynamic instabilities and asymmetry there, so the two-dimensional behavior is significantly improved. The hotspot is further compressed and heated by the ES, resulting in work-dominated ignition at a lower convergence ratio of 25. Finally, a fusion energy of $15 MJ$ is achieved with a total laser energy of $1.32 MJ$.
We would like to thank C. Y. Zheng, H. B. Cai, Z. S. Dai, L. Hao, and M. Chen at IAPCM, C. K. Li at MIT and J. Fernandez at LANL for discussions. This work was performed under the auspices of the National ICF Program of China, the National Basic Research Program of China (Grant No. 2013CB34100), the National High Technology research and development Program of China(Grant No. 2012AA01A303), and the Nature Science Foundation of China (Grant Nos: 11475032, 11175027, 11391130002, 11274026, and 11205010).
\section{\bf Introduction}
The effective approximation of stochastic partial differential equations has attracted a lot of attention recently \cite{2,Duan,Otto}, due to its importance in valid mathematical modeling and efficient simulation.
The Schr\"{o}dinger equation is the fundamental equation in quantum physics for describing quantum mechanical behaviors. It quantifies the wave function of a quantum system evolving over time. For the homogenization of deterministic Schr\"{o}dinger equations, there are two different scalings. One is the semi-classical scaling \cite{Buslaev,Dimassi,Gerard}, and the other one is the typical scaling of homogenization \cite{Allaire,Allairetwo}.
In the path integral approach \cite{Feynman} to quantum physics, the integral over the Brownian trajectories leads to the usual (local) Schr\"{o}dinger equation \cite{Albeverio}. Recent works on the path integrals over
the L\'{e}vy paths (e.g., \cite{Piotr}) lead to nonlocal Schr\"{o}dinger equations. More physical investigations on fractional or nonlocal generalization of the Schr\"{o}dinger equations may be
found in, for example, \cite{Dinh, Jiarui, Obrezkov, Yang, Zaba}.
As random disturbances may drastically affect the qualitative behavior and give rise to new properties of this quantum model, stochastic Schr\"{o}dinger equations have attracted attention recently (e.g., \cite{De,Gautier,Keller,Pellegrini, Zastawniak}).
In this paper, we will establish an effective approximation for a nonlocal stochastic Schr\"{o}dinger equation with a typical scaling and an oscillating potential. For stochastic homogenization problems, a two-scale convergence technique \cite{Bourgeat,Casado,Heida,Zhikov}
is available.
Specifically, we consider the effective approximation for the following \emph{nonlocal} stochastic Schr\"{o}dinger equation (\textbf{heterogeneous system}) with a small positive scale parameter $\epsilon$:
\begin{small}
\begin{equation}\label{1}
\begin{cases}
idu_\e=\l(\frac{1}{2}\mathcal{A}^\e u_\e+\epsilon^{(1-\a)/2}\mathcal{V}^\epsilon u_\e\r)dt+g(u_\e)dW_t+fdt, \; 0<t<T, \; x \in D=(-1,1), \\
u_\epsilon(0,x)=h(x),\quad\quad x \in D,\\
u_\e(t,x)=0, \quad\quad x \in D^c=\mathbb{R}\backslash D,
\end{cases}
\end{equation}
\end{small}
where $\a\in(1,2)$ and $f\in L^2\l(0,T;L^2(D)\r)$.
The function $\mathcal{V}^\e(t,x)=\mathcal{V}\l(\frac{t}{\e},\frac{x}{\e}\r)$ is a real potential and
$W(t)$ is a one-dimensional Brownian motion defined on a complete probability space $(\Omega, \mathcal{F},\mathbb{P})$. The function $g$ is the noise intensity and satisfies a Lipschitz condition.
We consider the nonlocal operator
$$\mathcal{A}^\e u=\mathcal{D}\l(\Theta^\e\mathcal{D}^*u\r),$$
where $\Theta^\e(x,z)=\Theta\l(\frac{x}{\e},\frac{z}{\e}\r)$ is a positive, bounded function, and the linear operator $\mathcal{D}$ and its adjoint operator $\mathcal{D}^*$ are defined as follows.
Given functions $\beta(x,z)$ and $\gamma(x,z)$,
the nonlocal divergence $\mathcal{D}$ on $\beta$ is defined as
$$\mathcal{D}(\beta)(x):=\int_{\mathbb{R}}(\beta(x,z)+\beta(z,x))\cdot\gamma(x,z)dz \qquad \text{for}\; x\in \mathbb{R}.$$
For a function $\phi(x)$, the adjoint operator $\mathcal{D^{*}}$ corresponding to $\mathcal{D}$ is the operator whose action on $\phi$ is given by
$$\mathcal{D^{*}}(\phi)(x,z)=-(\phi(z)-\phi(x))\gamma(x,z) \qquad \text{for} \; x,z\in \mathbb{R}.$$
The aim here is to investigate the limiting behaviour of $u_{\e}$ as $\e$ goes to zero, under periodicity hypotheses on the coefficient
$\Theta$ and the potential $\mathcal{V}$, and under the assumption that the mean value of $\mathcal{V}$ in the spatial variable is zero.
Throughout this paper, we always take $\gamma(x,z)=\frac{z-x}{|z-x|^{(3+\a)/2}}.$
As a special case, when $\Theta\equiv1$, we have
$$\frac{1}{2}\mathcal{D}\mathcal{D^{*}}=-(-\Delta)^{\a/2}.$$ The nonlocal Laplace operator $(-\Delta)^{\a/2}$ is defined as
$$(-\Delta)^{\a/2}u(x)=\int_{\mathbb{R}\backslash \{x\}}\frac{u(z)-u(x)}{|z-x|^{1+\a}}dz,$$
where the integral is in the sense of Cauchy principal value.
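To complement the definition above, here is a small numerical sketch (ours; the function name \texttt{frac\_lap}, the grid sizes and the cut-offs are illustrative assumptions, not part of the analysis) that evaluates this principal-value integral by pairing $z=x+r$ with $z=x-r$, which cancels the odd part of the singularity:

```python
import math

def frac_lap(u, x, alpha, r_max=50.0, n=4000):
    # Principal-value quadrature for the nonlocal Laplacian used in the text,
    #   (-Delta)^{alpha/2} u(x) = p.v. \int (u(z) - u(x)) / |z - x|^{1+alpha} dz,
    # rewritten over r = |z - x| via the symmetric pairing
    #   u(x + r) + u(x - r) - 2 u(x),
    # which tames the principal-value singularity at r = 0: the remaining
    # integrand is O(r^{1-alpha}), integrable for alpha in (1, 2).
    total, r = 0.0, 1e-8
    ratio = (r_max / r) ** (1.0 / n)   # geometric grid refines near r = 0
    for _ in range(n):
        r_next = r * ratio
        rm, dr = 0.5 * (r + r_next), r_next - r   # midpoint rule on each cell
        total += (u(x + rm) + u(x - rm) - 2.0 * u(x)) / rm ** (1.0 + alpha) * dr
        r = r_next
    return total
```

For a constant function the symmetric pairing vanishes identically, and at a strict maximum of $u$ the value is negative, consistent with the sign convention adopted above.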
\begin{remark}
Without loss of generality, we set $\Theta^{\e}(x,z)$ to be a symmetric function. In fact, we can define the symmetric and anti-symmetric parts of $\Theta$:
$$\Theta^{\e}_s(x,z)=\frac{1}{2}(\Theta^{\e}(x,z)+\Theta^{\e}(z,x)) \quad \text{and} \quad \Theta^{\e}_a(x,z)=\frac{1}{2}(\Theta^{\e}(x,z)-\Theta^{\e}(z,x)).$$
From the fact that $$\mathcal{D}\l(\Theta^{\e}_a(x,z)\mathcal{D}^*u\r)=0,$$ we have $$\mathcal{D}\l(\Theta^{\e}(x,z)\mathcal{D}^*u\r)=\mathcal{D}\l(\Theta^{\e}_s(x,z)\mathcal{D}^*u\r).$$
\end{remark}
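The cancellation used in this remark can be checked directly; a short computation of our own, using only the definitions above:

```latex
% Since \gamma(x,z)=(z-x)/|z-x|^{(3+\a)/2} is odd, \gamma(z,x)=-\gamma(x,z), so
% \mathcal{D}^*u(z,x)=-(u(x)-u(z))\gamma(z,x)=-(u(z)-u(x))\gamma(x,z)
%                   =\mathcal{D}^*u(x,z),
% i.e. \mathcal{D}^*u is symmetric in (x,z). Hence
\begin{align*}
\mathcal{D}\l(\Theta^{\e}_a\mathcal{D}^*u\r)(x)
&=\int_{\mathbb{R}}\l(\Theta^{\e}_a(x,z)\mathcal{D}^*u(x,z)
   +\Theta^{\e}_a(z,x)\mathcal{D}^*u(z,x)\r)\gamma(x,z)dz\\
&=\int_{\mathbb{R}}\l(\Theta^{\e}_a(x,z)+\Theta^{\e}_a(z,x)\r)
   \mathcal{D}^*u(x,z)\gamma(x,z)dz=0,
\end{align*}
% because \Theta^{\e}_a(z,x)=-\Theta^{\e}_a(x,z) by antisymmetry.
```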
\begin{remark}
For a function $\upsilon(x,y)$, we define
$$(\mathcal{D}^*_x\upsilon)(x,z,y)=-(\upsilon(z,y)-\upsilon(x,y))\gamma(x,z)$$
and
\[
\begin{split}
(\mathcal{D}_x\mathcal{D}^*_x\upsilon)(x,y)&=2\int_{\mathbb{R}}
-(\upsilon(z,y)-\upsilon(x,y))\gamma^2(x,z)dz\\
&=-2(-\Delta)^{\a/2}_x\upsilon(x,y).
\end{split}
\]
\end{remark}
Our purpose is to examine the convergence of the solution $u_\epsilon$ of (\ref{1}) in some probabilistic sense, as $\epsilon\rightarrow0$,
and to specify the limit $\tilde{u}$. We will see that the limit process $\tilde{u}$ satisfies the following nonlocal stochastic partial differential equation (\textbf{effective system}):
\begin{align}\label{n7}
\begin{cases}
id\tilde{u}=\l(-\Xi_1(-\Delta)^{\a/2}\tilde{u}-\frac{\Xi_2}{2}\mathcal{D}|_{D}(\zeta)(x)-\Xi_3\zeta(x)\r)dt+g(\tilde{u})dW_t+fdt,\\
\tilde{u}(t,x)=0,\quad\quad (t,x) \in (0,T)\times D^c,\\
\tilde{u}(0,x)=h(x), \quad\quad x \in D,
\end{cases}
\end{align}
where
\begin{align*}
&\Xi_1=\int_{Y\times N}\Theta(y,\eta)dyd\eta,\\
&\Xi_2=\int_{Y\times N \times Z}\Theta(y,\eta)\mathcal{D}_y^*\chi\, dyd\eta d\tau,\\
&\Xi_3=2\int_{Y\times Z}\mathcal{V}(y,\tau)\chi(y,\tau)dyd\tau,\\
&\zeta(x)=\frac{1}{|D|}\int_{D}(\mathcal{D}^*\tilde{u})(x,z)dz,\\
&\mathcal{D}|_{D}(\zeta)(x)=\int_{D}(\zeta(x)+\zeta(z))\gamma(x,z)dz,
\end{align*}
where the function $\chi$ is given by (\ref{654}). We set $Y=N=Z=(0,1)$ and consider $Y, N, Z$ as subsets of $\mathbb{R}_y, \mathbb{R}_\eta, \mathbb{R}_\tau $ respectively (the spaces of variables $y$, $\eta$ and $\tau$ respectively).
Throughout this paper, we always identify functions on $\mathbb{T}=(0,1)$ with their periodic extensions to $\mathbb{R}$.
\textbf{Structure of this paper.}
In Section $2$, we recall some function spaces and establish the existence and uniqueness of solutions for the Schr\"{o}dinger equation. Then, in Section $3$, we prove the main theorem and derive the effective system.
\section{Preliminaries}
We now briefly discuss the well-posedness of the heterogeneous equation (\ref{1}), and derive a few uniform estimates for the solution $u_\e$.
\subsection{\bf Function spaces}
Let us first recall that a function $u$ is said to be $Y\times N\times Z$-periodic if for each $k,l,m\in \mathbb{Z}$, we have $u(y+k,\eta+l,\tau+m)=u(y,\eta,\tau)$ almost everywhere with $y,\eta,\tau\in \mathbb{R}.$ The space of all $Y\times N\times Z$-periodic continuous complex functions on $\mathbb{R}_y\times\mathbb{R}_\eta\times\mathbb{R}_\tau$
is denoted by $\mathcal{C}_{per}(Y\times N\times Z)$, and that of all $Y\times N\times Z$-periodic functions in $L^p\l(\mathbb{R}_y\times\mathbb{R}_\eta\times\mathbb{R}_\tau\r)$ $(1\leq p\leq\infty)$ is
denoted by $L^p_{per}(Y\times N\times Z).$ The space $\mathcal{C}_{per}(Y\times N\times Z)$ is a Banach space under the supremum norm on $\mathbb{R}\times\mathbb{R}\times\mathbb{R},$ whereas for $1\leq p<\infty$, $L^p_{per}(Y\times N\times Z)$ is a Banach space under the norm
$$\|u\|_{L^p_{per}(Y\times N\times Z)}=\l(\int_{Y\times N\times Z}\l|u(y,\eta,\tau)\r|^pdyd\eta d\tau\r)^{\frac{1}{p}}.$$
For $\a\in(1,2)$, the classical fractional Sobolev space is
$$H^{\a/2}(D)=\l\{u\in L^2(D): \int_{D}\int_{D}\frac{|u(x)-u(z)|^2}{|x-z|^{1+\a}}dxdz<\infty\r\},$$
with the norm
$$ \|u\|_{H^{\a/2}(D)}^2=\|u\|_{L^2(D)}^2+\int_{D}\int_{D}\frac{|u(x)-u(z)|^2}{|x-z|^{1+\a}}dxdz.$$
We set $u|_{\mathbb{R}\backslash D}\equiv 0$. For the nonlocal operator, we have
\[
\begin{split}
\l(\frac{1}{2}\mathcal{A}^\epsilon u,u\r)_{L^2(D)}&=\frac{1}{2}\l(\Theta^\epsilon(x,z)\mathcal{D}^*u(x,z),\mathcal{D}^*u(x,z)\r)_{L^2(\mathbb{R}\times\mathbb{R})}\\
&=\int_D\int_{D^c}\Theta^\epsilon(x,z)\frac{|u(x)|^2}{|z-x|^{1+\a}}dzdx+\frac{1}{2}\int_{D}\int_{D}\Theta^\epsilon(x,z)\frac{|u(x)-u(z)|^2}{|x-z|^{1+\a}}dxdz.
\end{split}
\]
Set $\rho(x):=\int_{D^c}\frac{1}{|z-x|^{1+\a}}dz.$ Since $\Theta^\epsilon(x,z)$ is a positive, bounded function, we can define a weighted fractional Sobolev space without keeping track of $\Theta^\epsilon(x,z)$:
$$H_\rho^{\a/2}(D):=\l\{u\in L^2(\mathbb{R}): u|_{\mathbb{R}\backslash D}\equiv 0, \|u\|_{H_\rho^{\a/2}(D)}< \infty \r\},$$
equipped with the norm
$$\|u\|_{H_\rho^{\a/2}(D)}:= \l(\int_D\rho(x)|u(x)|^2dx+\frac{1}{2}\int_{D}\int_{D}\frac{|u(x)-u(z)|^2}{|x-z|^{1+\a}}dxdz\r)^{\frac{1}{2}},$$
which immediately implies that $\l(-(-\Delta)^{\a/2} u,u\r)_{L^2(D)}=\|u\|^2_{H_\rho^{\a/2}(D)}.$
The space $H^{\a/2}_\#(Y)$ of $Y$-periodic functions $u\in H^{\a/2}$ such that $\int_{Y}u(y)dy=0$ will also be of interest in this study. It is equipped with the norm
$$\|u\|_{H^{\a/2}_\#(Y)}=\l(\int_{Y}\int_{N}\frac{|u(y)-u(\eta)|^2}{|y-\eta|^{1+\a}}dyd\eta\r)^{\frac{1}{2}}.$$
For $s<0$, we define $H^s(D)$ as the dual space of $H^{-s}(D)$.
\subsection{\bf Well-posedness}
Let $B^\e$ be the linear operator in $L^2(D)$
defined by
$$B^\e u=-\frac{i}{2}\mathcal{A}^\e u \quad \text{for all}\; u\in D(B^\e),$$
with domain
$$D(B^\e)=\{\upsilon\in H^{\a/2}_\rho(D): \mathcal{A}^\e\upsilon\in L^2(D)\}.$$
Then $B^\e$ is densely defined and skew-adjoint, since $\mathcal{A^\e}$ is self-adjoint \cite{Caze}. Consequently, $B^\e$ is an $m$-dissipative operator in $L^2(D)$ (Corollary 2.4.11 of \cite{Caze}). It follows from the Hille-Yosida-Phillips theorem that $B^\e$ generates a contraction semigroup $(G_t^\e)_{t\geq0}$.
Now, let us check the existence and uniqueness for equation (\ref{1}). The abstract problem for equation (\ref{1}) is given by
\begin{equation}\label{2}
\begin{cases}
du_\e=\l(B^\e u_\e+F_\e(u_\e)\r)dt-ig(u_\e)dW_t, \\
u_\epsilon(0,x)=h(x),
\end{cases}
\end{equation}
where $F_\e$ is defined in $L^2\l(0,T;L^2(D)\r)$ by
$$F_\e(\upsilon)(t)=-i\epsilon^{(1-\a)/2}\mathcal{V}^\epsilon \upsilon-if(t).$$
We can obtain the following lemma (Theorem $3.3$ of \cite{Gaw}).
\begin{lemma}\label{l1}
Suppose $h\in L^2(D),$ $f\in C\l([0,T];L^2(D)\r)$ and for all $\e>0$,
\begin{equation}\label{321}
\l\|\e^{(1-\a)/2}\mathcal{V}^\e\r\|_{\mathcal{L}\l(H_\rho^{\a/2}(D), H_\rho^{-\a/2}(D)\r)}\leq \beta,
\end{equation}
where $\beta$ is a positive constant independent of $\epsilon$ and $\mathcal{L}\l(H_\rho^{\a/2}(D), H_\rho^{-\a/2}(D)\r)$ is the space of linear continuous mapping of
$H_\rho^{\a/2}(D)$ into $ H_\rho^{-\a/2}(D).$
Then equation (\ref{1}) admits a unique mild solution $u_\e\in C
\l([0,T];L^2(\Omega\times D)\r).$
\end{lemma}
\begin{remark}
As an illustrative example, suppose the potential $\mathcal{V}$ belongs to $C_{per}(Y)$ and satisfies
\begin{equation}\label{421}
\int_{Y}\mathcal{V}(y)dy=0.
\end{equation}
Then the linear operator $\e^{(1-\a)/2}\mathcal{V}^\e$ verifies (\ref{321}). Indeed, since $\mathcal{V}\in C_{per}(Y)$ and satisfies (\ref{421}), the equation
\begin{equation*}
\mathcal{D}_y(\Theta \mathcal{D}_y^*\varsigma)=\mathcal{V}
\end{equation*}
admits a unique solution $\varsigma$ in $H^{\a/2}_\#(Y).$ Let $\varsigma^\e(x)=\varsigma(\frac{x}{\epsilon})$. For all $\e>0,$ we have
\begin{equation}
\e^{(1+\a)/2}\mathcal{D}\l(\Theta \mathcal{D}^*\varsigma^\e\r)=\e^{(1-\a)/2}\mathcal{V}^\e.
\end{equation}
Thus, for any $u\in{H_\rho^{\a/2}}(D),$ we have
$$\l(\e^{(1-\a)/2}\mathcal{V}^\e u, v\r)=\e^{(1+\a)/2}\int_{D\times D}\Theta^\e \mathcal{D}^*\varsigma^\e\,
\mathcal{D}^*(uv)dxdz=\int_{D\times D}\Theta^\e (\mathcal{D}_y^*\varsigma)^\e\,
\mathcal{D}^*(uv)dxdz$$
for all $v\in \mathcal{M}(D)$, where $\mathcal{M}(D)$ is the space of functions in $C^\infty(D)$ with compact support. From the nonlocal Poincar\'{e} inequality \cite{Fel}, we have
$$\l|\l(\e^{(1-\a)/2}\mathcal{V}^\e u, v\r)\r|\leq C\l\|u\r\|_{H_\rho^{\a/2}(D)}\l\|v\r\|_{H_\rho^{\a/2}(D)},$$
where $C$ is a positive constant.
Thus, by the density of $\mathcal{M}(D)$ in $H_\rho^{\a/2}(D)$, the preceding inequality holds for all $v\in H_\rho^{\a/2}(D)$. Hence, inequality $(\ref{321})$ follows.
\end{remark}
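The exponents $\e^{(1\pm\a)/2}$ appearing in this remark come from the self-similar scaling of the kernel $\gamma$, namely $\gamma(\e y,\e\eta)=\e^{-(1+\a)/2}\gamma(y,\eta)$. A quick numerical sanity check of that scaling (a sketch of our own; the names and sampled points are illustrative, not part of the analysis):

```python
import random

def gamma(x, z, alpha):
    # gamma(x, z) = (z - x) / |z - x|^{(3+alpha)/2}, as defined in the text
    return (z - x) / abs(z - x) ** ((3.0 + alpha) / 2.0)

# check gamma(eps*y, eps*eta) = eps^{-(1+alpha)/2} * gamma(y, eta)
alpha, eps = 1.5, 0.01
random.seed(0)
for _ in range(100):
    y, eta = random.uniform(-5.0, 5.0), random.uniform(-5.0, 5.0)
    if abs(y - eta) < 1e-3:
        continue  # skip near-diagonal points where gamma blows up
    lhs = gamma(eps * y, eps * eta, alpha)
    rhs = eps ** (-(1.0 + alpha) / 2.0) * gamma(y, eta, alpha)
    assert abs(lhs - rhs) <= 1e-9 * abs(rhs)
```

Applying the same exponent count to $\mathcal{D}$ and $\mathcal{D}^*$ (one factor of $\gamma$ each, plus one rescaled integration) recovers the factor $\e^{(1+\a)/2}$ that converts $\mathcal{D}\l(\Theta\mathcal{D}^*\varsigma^\e\r)$ into $\e^{(1-\a)/2}\mathcal{V}^\e$.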
Now, let us prove some uniform estimates. Before stating the lemma, we introduce the bilinear form
$$a^\e(u,v)=\int_{D}\int_{D}\Theta^\e\mathcal{D}^*u(x,z)\overline{\mathcal{D}^*v}(x,z)dxdz.$$
\begin{lemma}\label{l2}
Let $u_\e$ be a solution of equation (\ref{1}) with initial value $h\in L^2(D).$ Suppose further that $f, f'\in L^2\l(0,T;L^2(D)\r)$ and
$$\l\|\e^{(-1-\a)/2}(\frac{\partial\mathcal{V}}{\partial\tau})^\e\r\|_{\mathcal{L}(H_\rho^{\a/2}(D), H_\rho^{-\a/2}(D))}\leq c_0,$$
where $f'$ stands for $\frac{df}{dt}$ and $c_0$ is a constant independent of $\epsilon.$ Then there exists a constant $c>0$ independent of $\epsilon$ such that the solution $u_\e$
of equation (\ref{1}) verifies:
$$\sup_\epsilon\mathbb{E}\sup_{0\leq t\leq T}\l\|u_\epsilon\r\|^2_{L^2(D)}+\sup_\epsilon\mathbb{E}\l\|u_\e\r\|^2_{L^2\l(0,T;H_\rho^{\a/2}(D)\r)}
+\sup_\epsilon\mathbb{E}\l\|u'_\e\r\|^2_{L^2\l(0,T;H_\rho^{-\a/2}(D)\r)}\leq c. $$
\begin{proof}
Applying It\^{o}'s formula to $\l\|u_\e(t)\r\|^2_{L^2}$, we have
\begin{small}
\begin{equation*}
\begin{split}
&\l\|u_\e(t)\r\|^2_{L^2}=\l\|u_\e(0)\r\|^2_{L^2}-\mbox{Re}\int_0^t2i\l(\mathcal{A}^\e u_\e(s)+\e^{(1-\a)/2}\mathcal{V^\e}u_\e(s),u_\e(s)\r)ds\\
&\quad-\mbox{Re}\int_0^t2i\l(g(u_\e(s)),u_\e(s)\r)dW_s-\mbox{Re}\int_0^t2i\l(f,u_\e(s)\r)ds+\int_0^t\l\|g(u_\e(s))\r\|^2_{L^2}ds\\
&=\l\|u_\e(0)\r\|^2_{L^2}+\mbox{Im}\int_0^t2\l(g(u_\e(s)),u_\e(s)\r)dW_s+\mbox{Im}\int_0^t2\l(f,u_\e(s)\r)ds+\int_0^t\l\|g(u_\e(s))\r\|^2_{L^2}ds.
\end{split}
\end{equation*}
\end{small}
By the Burkholder-Davis-Gundy inequality, H\"{o}lder's inequality and Young's inequality, it follows that
\begin{equation*}
\begin{split}
\mathbb{E}&\sup_{0\leq t\leq T}\l|2\mbox{Im}\int_0^t\l(g(u_\e(s)),u_\e(s)\r)dW_s\r|\\
&=\mathbb{E}\sup_{0\leq t\leq T}\l|2\mbox{Im}\int_0^t\int_{D}g\l(u_\e(s)\r)\bar{u}_\e(s)dxdW_s\r|\\
&\leq c_1\mathbb{E}\l(\int_0^T\l\|u_\e(s)\r\|^2_{L^2}\l\|g(u_\e(s))\r\|^2_{L^2}ds\r)^{\frac{1}{2}}\\
&\leq c_1\mathbb{E}\l(\delta\sup_{0\leq t\leq T}\l\|u_\e(t)\r\|^2_{L^2}+\frac{1}{\delta}\int_0^T\l\|g(u_\e(s))\r\|^2_{L^2}ds\r)\\
&\leq\frac{1}{3}\mathbb{E}\sup_{0\leq t\leq T}\l\|u_\e(t)\r\|^2_{L^2}+c_2\mathbb{E}\int_0^T\l\|u_\e(s)\r\|^2_{L^2}ds+c_2.
\end{split}
\end{equation*}
Then, we obtain
$$\frac{2}{3}\mathbb{E}\sup_{0\leq t\leq T}||u_\e(t)||^2_{L^2}\leq ||u_\e(0)||^2_{L^2}+c_3\int_0^T\mathbb{E}\sup_{0\leq r\leq s}||u_\e(r)||^2_{L^2}ds+c_4,$$
which implies from the Gronwall inequality that
$$\mathbb{E}\sup_{0\leq t\leq T}||u_\e(t)||^2_{L^2}\leq c_5,$$
where the positive constant $c_5$ is independent of $\e.$\\
By a similar argument, we also have $$\mathbb{E}\sup_{0\leq t\leq T}||u_\e(t)||^4_{L^2}\leq c_5.$$
Next, we formally denote $\dot{W}_t=\frac{dW_t}{dt}$ and take the inner product in $L^2(D)$ of equation (\ref{1}) with $u'_\e$:
\begin{small}
$$i\l\|u'_\e(t)\r\|^2_{L^2}=\l(\mathcal{A}^\e u_\e(t)+\e^{(1-\a)/2}\mathcal{V^\e}u_\e(t),u'_\e(t)\r)+\l(g(u_\e(t)),u'_\e(t)\r)\dot{W}_t+\l(f(t),u'_\e(t)\r).$$
\end{small}
Taking the real part of the preceding equality, we have
\begin{small}
$$\mbox{Re}\l(\mathcal{A}^\e u_\e(t)+\e^{(1-\a)/2}\mathcal{V^\e}u_\e(t),u'_\e(t)\r)+\mbox{Re}\l(g(u_\e(t)),u'_\e(t)\r)\dot{W}_t+\mbox{Re}\l(f(t),u'_\e(t)\r)=0.$$
\end{small}
Using the facts that
$$\frac{d}{dt}a^\e\l(u_\e(t),u_\e(t)\r)=2\mbox{Re}\l(\mathcal{A}^\e u_\e(t),u'_\e(t)\r)$$
and
\begin{small}
$$\e^{(1-\a)/2}\frac{d}{dt}\l(\mathcal{V}^\e u_\e(t),u_\e(t)\r)=\e^{(-1-\a)/2}\l(\l(\frac{\partial\mathcal{V}}{\partial{\tau}}\r)^\e u_\e(t),u_\e(t)\r)
+2{\e}^{(1-\a)/2}\mbox{Re}\l(\mathcal{V}^\e u_\e(t),u'_\e(t)\r),$$
\end{small}
we have
\begin{equation*}
\begin{split}
&\frac{1}{2}\frac{d}{dt}a^\e(u_\e(t),u_\e(t))+\frac{1}{2}\e^{(1-\a)/2}\frac{d}{dt}(\mathcal{V}^\e u_\e(t),u_\e(t))-
\frac{1}{2}\e^{(-1-\a)/2}\l(\l(\frac{\partial\mathcal{V}}{\partial{\tau}}\r)^\e u_\e(t),u_\e(t)\r)\\
&+\mbox{Re}\l(g(u_\e(t)),u'_\e(t)\r)\dot{W}_t+\mbox{Re}\frac{d}{dt}(f(t),u_\e(t))
-\mbox{Re}\l(f'(t),u_\e(t)\r)=0.
\end{split}
\end{equation*}
An integration on $[0,t]$ of the equality above yields,
\begin{small}
\begin{equation*}
\begin{split}
&\frac{1}{2}a^\e(u_\e(t),u_\e(t))+\frac{1}{2}\e^{(1-\a)/2}\l(\mathcal{V}^\e u_\e(t),u_\e(t)\r)-\frac{1}{2}a^\e(h,h)
-\frac{1}{2}\e^{(1-\a)/2}\l(\mathcal{V}^\e(0) h,h\r)\\
&=\frac{1}{2}\e^{(-1-\a)/2}\int_0^t\l(\l(\frac{\partial\mathcal{V}}{\partial{\tau}}\r)^\e u_\e(s),u_\e(s)\r)ds
-\int_0^t\mbox{Re}\l(g(u_\e(s)),u'_\e(s)\r)dW_s\\
&\quad-\mbox{Re}(f(t),u_\e(t))+\mbox{Re}(f(0),h)+\mbox{Re}\int_0^t\l(f'(s),u_\e(s)\r)ds.
\end{split}
\end{equation*}
\end{small}
It follows that
\begin{equation*}
\begin{split}
c_6||u_\e(t)||^2_{H_\rho^{\a/2}}&+2\int_0^t\mbox{Re}\l(g(u_\e(s)),u'_\e(s)\r)dW_s\\
&\leq \beta||u_\e(t)||^2_{L^2}+c_7||h||^2_{H_\rho^{\a/2}}
+\beta||h||^2_{L^2}\\&+c_0||u_\e||^2_{L^2(0,T;L^2(D))}
+2||f(t)||_{L^2}||u_\e(t)||_{L^2}\\
&+2||f(0)||_{L^2}||h||_{L^2}+2||f'||_{L^2(0,T;L^2(D))}||u_\e||_{L^2(0,T;L^2(D))}.
\end{split}
\end{equation*}
Taking the expectation after integrating the preceding inequality over $[0,T]$, and using the Burkholder-Davis-Gundy inequality, we have
$$\mathbb{E}||u_\e||^2_{L^2(0,T;H_\rho^{\a/2}(D))}\leq c_8,$$
where the positive constant $c_8$ is independent of $\e.$
By equation (\ref{1}), we have
\begin{equation*}
\begin{split}
i\int_0^T(u'_\e(t),\bar{v}(t))dt&=\int_0^Ta^\e(u_\e(t),v(t))dt+\int_0^T\e^{(1-\a)/2}\l(\mathcal{V}^\e u_\e(t),v(t)\r)dt\\
&\quad+\int_0^T(g(u_\e),v(t))dW_t+\int_0^T(f(t),v(t))dt
\end{split}
\end{equation*}
for all $v\in L^2\l(0,T; H_\rho^{\a/2}(D)\r).$
Hence, we have
$$\mathbb{E}||u'_\e||^2_{L^2\l(0,T;H_\rho^{-\a/2}(D)\r)}\leq c_9.$$
In summary, we deduce that
\begin{small}
$$\sup_\epsilon\mathbb{E}\sup_{0\leq t\leq T}||u_\epsilon||^2_{L^2(D)}+\sup_\epsilon\mathbb{E}||u_\e||^2_{L^2\l(0,T;H_\rho^{\a/2}(D)\r)}
+\sup_\epsilon\mathbb{E}||u'_\e||^2_{L^2\l(0,T;H_\rho^{-\a/2}(D)\r)}\leq c. $$
\end{small}
\end{proof}
\end{lemma}
\section{Effective Approximation and Effective System}
In this section, we will prove several convergence results. Then, we establish effective approximation and derive the effective system.
\subsection{Some convergence results}
Let us first introduce some function spaces. Let $Q=D\times(0,T)$ with $T\in \mathbb{R}_+^*.$
We consider the space
$$\mathcal{Y}(0,T)=\l\{\upsilon\in L^2\l(\Omega\times (0,T);H^{\a/2}_\rho(D)\r): \upsilon'\in L^2\l(\Omega\times (0,T);H_\rho^{-\a/ 2}(D)\r)\r\}$$
provided with the norm
$$\|\upsilon\|^2_{\mathcal{Y}(0,T)}=\mathbb{E}\|\upsilon\|^2_{L^2\l(0,T;H^{\a/2}_\rho(D)\r)}+\mathbb{E}||\upsilon'||^2_{L^2\l(0,T ;H_\rho^{-\a/ 2}(D)\r)}$$
which makes it a Hilbert space.
\begin{definition}
Let $E$ be a fundamental sequence. A sequence $(u_\e)_{\e\in E}\subset L^2(Q\times\Omega)$ is said to two-scale converge in $L^2(Q\times\Omega)$ to some $\tilde{u}\in L^2(Q\times\Omega; L^2_{per}(Y\times Z))$ if as
$E\ni\e\rightarrow0,$
\begin{small}
$$\int_{Q\times\Omega}u_\e(x,t,\omega)\psi^\e(x,t,\omega)dxdtd{\mathbb{P}}\!\rightarrow\!\int_{Q\times\Omega\times Y\times Z}\tilde{u}(x,t,y,\tau,\omega)\psi(x,t,y,\tau,\omega)dxdtdyd\tau d{\mathbb{P}}$$
\end{small}
for all $\psi\in L^2(Q\times\Omega;\mathcal {C}_{per}(Y\times Z)),$ where $\psi^\e(x,t,\omega)=\psi(x,t,\frac{x}{\e},\frac{t}{\e},\omega).$
\end{definition}
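As an elementary deterministic illustration of this definition (a toy example of our own, with the oscillating sequence and test function chosen so the limit is computable by hand): for $u_\e(x)=x\sin(2\pi x/\e)$ on $(0,1)$ and the test function $\psi(x,y)=\sin(2\pi y)$, one expects $\int_0^1 u_\e(x)\psi(x,\frac{x}{\e})dx\rightarrow\int_0^1\!\!\int_0^1 x\sin^2(2\pi y)\,dydx=\frac{1}{4}$.

```python
import math

def oscillatory_integral(eps, n=200_000):
    # \int_0^1 x * sin(2 pi x / eps)^2 dx  via a midpoint Riemann sum;
    # this is \int_0^1 u_eps(x) * psi(x, x/eps) dx for
    # u_eps(x) = x * sin(2 pi x / eps) and psi(x, y) = sin(2 pi y).
    h = 1.0 / n
    return sum((i + 0.5) * h * math.sin(2.0 * math.pi * (i + 0.5) * h / eps) ** 2
               for i in range(n)) * h

# two-scale limit: \int_0^1 x dx * \int_0^1 sin(2 pi y)^2 dy = 1/2 * 1/2 = 1/4
print(abs(oscillatory_integral(0.01) - 0.25))  # small for small eps
```

Note that the weak $L^2$ limit of $u_\e$ here is $0$, while the two-scale limit $x\sin(2\pi y)$ retains the oscillation profile; this is exactly the extra information the method exploits below.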
\begin{lemma}\label{l3}
Let $E$ be a fundamental sequence and let $u_\e$ be a solution of equation (\ref{1}). Then, a subsequence $E'$ can be extracted from $E$ such that, as $E'\ni\e\rightarrow 0,$
$$u_\e\rightarrow \tilde{u} \quad \text{in} \quad \mathcal{Y}(0,T)\text{-weakly}, $$
$$u_\e\rightarrow \tilde{u} \quad \text{in} \quad L^2(Q\times\Omega)\text{-two-scale}.$$
Moreover,
for a further subsequence $\e'\in E''$, we have
$$\mathcal{D}^*u_{\e'}\rightarrow D^*_x\tilde{u}+D^*_y u_1 \quad \text{in} \quad L^2(Q\times D\times\Omega) \text{-two-scale},$$
where $\tilde{u}\in \mathcal{Y}(0,T)$ and $u_1\in L^2(Q\times\Omega;L^2_{per}(Z;H_{\#}^{\a/2}(Y))).$
\begin{proof}
Let $$\psi_\e=\psi_0+\epsilon^{(1+\a)/2}\psi_1^\epsilon,\; \text{i.e.},\;
\psi_\e(x,t,\omega)=\psi_0(x,t,\omega)+\epsilon^{(1+\a)/2}\psi_1(x,t,\frac{x}{\epsilon},\frac{t}{\epsilon},\omega),$$
where $$\psi_0\in \mathcal{M}(Q)\otimes L^2(\Omega) \;\text{and}\; \psi_1\in \mathcal{M}(Q)\otimes[(\mathcal{C}_{per}(Y)/\mathbb{C})\otimes\mathcal{C}_{per}(Z)]\otimes L^2(\Omega).$$
We set $$\phi_0(x,z,t,\omega)=(\overline{D^*_x\psi_0})(x,z,t,\omega),$$
$$\phi_1(x,\frac{x}{\epsilon'},\frac{z}{\epsilon'},t,\frac{t}{\epsilon'},\omega)=(\overline{D^*_y \psi_1})(x,\frac{x}{\epsilon'},\frac{z}{\epsilon'},t
,\frac{t}{\epsilon'},\omega),$$ $$\phi(x,z,t,y,\eta,\tau,\omega)=\phi_0(x,z,t,\omega)+\phi_1(x,y,t,\eta,\tau,\omega).$$
For convenience, we omit the variables $\omega$ and $t.$ The functions $\phi_0(x,z,t,\omega)$ and $\phi_1(x,\frac{x}{\epsilon'},\frac{z}{\epsilon'},t,\frac{t}{\epsilon'},\omega)$ are abbreviated as $\phi_0(x,z)$ and $\phi_1(x,\frac{x}{\epsilon'},\frac{z}{\epsilon'})$ respectively.
Due to Lemma \ref{l2}, one can extract a subsequence $E'$ from $E$ such that
$$u_\e\rightarrow \tilde{u} \quad \text{in} \quad \mathcal{Y}(0,T)\text{-weakly}.$$
Then for a further subsequence $E''\ni \e',$ we have
\begin{equation*}
\begin{cases}
u_{\e'} \quad \text{two-scale \;converges\; to} \;u\in L^2(Q\times\Omega\times Y\times Z),\\
\mathcal{D}^*u_{\e'}\text{\quad two-scale\; converges\; to }\;U\in L^2(Q\times D\times\Omega\times Y\times Z\times N).
\end{cases}
\end{equation*}
Hence, for any $\psi_{\epsilon'}$, one has
\begin{small}
\begin{equation}\label{e2}
\begin{split}
&\int_0^T\int_{\Omega}a^{\e'}(u_{\epsilon'},\psi_{\epsilon'})dtd{\mathbb{P}}=\int_0^T\int_{\Omega}\int_{D}\int_{D}\Theta^{\e'}\mathcal{D}^*u_{\epsilon'}(x,z)
\overline{\mathcal{D}^*\psi_{\epsilon'}}(x,z)dxdzdtd{\mathbb{P}}\\
&\rightarrow \int_{Q\times D\times\Omega\times Y\times N \times Z}\Theta(y,\eta)U(x,z,t,y,\eta,\tau,\omega)\phi(x,z,t,y,\eta,\tau,\omega)dxdzdtdyd\eta d\tau d{\mathbb{P}}.
\end{split}
\end{equation}
\end{small}
By the definition of $\mathcal{D}^*$ and $\mathcal{D},$ it follows that
\begin{small}
\begin{equation*}
\begin{split}
&\int_0^T\int_{\Omega}a^{\epsilon'}(u_{\epsilon'},\psi_{\epsilon'})dtd{\mathbb{P}}\\
&=\int_{Q\times D}\int_{\Omega}\Theta^{\epsilon'}(x,z)(\mathcal{D}^*u_{\e'})(x,z)
\l[(\overline{D^*_x\psi_0})(x,z)+(\overline{D^*_y \psi_1})(x,\frac{x}{\epsilon'},\frac{z}{\epsilon'})\r]dxdzdtd{\mathbb{P}}+o(\epsilon')\\
&=\int_{Q\times D}\int_{\Omega}(\mathcal{D}^*u_{\e'})(x,z)\l[\phi_0(x,z)\Theta^{\epsilon'}(x,z)
+\phi_1((x,\frac{x}{\epsilon'},\frac{z}{\epsilon'}))\Theta^{\epsilon'}(x,z)\r]dxdzdtd{\mathbb{P}}+o(\epsilon')\\
&=\Lambda_1^{\epsilon'}+\Lambda_2^{\epsilon'}+o(\epsilon').
\end{split}
\end{equation*}
\end{small}
For the first term on the right-hand side, on one hand,
\[
\begin{split}
\Lambda_1^{\epsilon'}&=\int_{Q\times D}\int_{\Omega}(\mathcal{D}^*u_{\e'})(x,z)
\phi_0(x,z)\Theta^{\epsilon'}(x,z)dxdzdtd{\mathbb{P}}\\
&=\int_{Q\times D}\int_{\Omega}u_{\e'}(x)\l[\phi_0(x,z)\Theta^{\epsilon'}(x,z)+\phi_0(z,x)\Theta^{\epsilon'}(z,x)\r]\gamma(x,z)dzdxdtd{\mathbb{P}}.
\end{split}
\]
Letting $\epsilon'$ go to $0$, we have
\begin{small}
\[
\begin{split}
\Lambda_1^{\epsilon'}&\rightarrow\int_{Q\times D\times\Omega\times Y\times N\times Z}u(x,y)\Theta(y,\eta)[\phi_0(x,z)+\phi_0(z,x)]\gamma(x,z)dzdxdtd\eta dyd\tau d{\mathbb{P}}\\
&=\int_{Q\times\Omega\times Y\times N\times Z}u(x,y)\Theta(y,\eta)(D_x\phi_0)(x)dxdtdyd\eta d\tau d{\mathbb{P}}.
\end{split}
\]
\end{small}
On the other hand, from the fact that $\mathcal{D}^*u_{\e'}$ two-scale converges to $U\in L^2(Q\times D\times\Omega\times Y\times Z\times N)$, we have
\begin{equation*}
\begin{split}
\Lambda_1^{\epsilon'}&=\int_{Q\times D}\int_{\Omega}\Theta^{\epsilon'}(x,z)\mathcal{D}^*u_{\e'}(x,z)
(\overline{D^*_x\psi_0})(x,z)dxdzdtd{\mathbb{P}}\\
&\rightarrow \int_{Q\times D\times\Omega\times Y\times N \times Z}\Theta(y,\eta)U(x,z,t,y,\eta,\tau)\phi_0(x,z)dxdzdtdyd\eta d\tau d{\mathbb{P}}.
\end{split}
\end{equation*}
Then we have
\begin{small}
\begin{equation}\label{100}
\int_{Q\times D\times\Omega\times Y\times N \times Z}\Theta(y,\eta)(U(x,z,t,y,\eta,\tau)-\mathcal{D}_x^*u(x,y))\phi_0(x,z)dxdzdtdyd\eta d\tau d{\mathbb{P}}=0.
\end{equation}
\end{small}
For the second part,
\begin{small}
\[
\begin{split}
\Lambda_2^{\epsilon'}&=\int_{Q\times\Omega}\int_{D}(\mathcal{D}^*u_{\e'})(x,z)\Theta^{\epsilon'}(x,z)
\phi_1(x,\frac{x}{{\epsilon'}},\frac{z}{{\epsilon'}})dxdzdtd{\mathbb{P}}\\
&=\int_{Q\times\Omega}u_{\e'}(x)\int_{D}\l[\Theta^{\epsilon'}(x,z)
\phi_1(x,\frac{x}{{\epsilon'}},\frac{z}{{\epsilon'}})+\Theta^{\epsilon'}(z,x)
\phi_1(z,\frac{z}{{\epsilon'}},\frac{x}{{\epsilon'}})\r]\gamma(x,z)dxdzdtd{\mathbb{P}}\\
&=\int_{Q\times\Omega}u_{\e'}(x)\int_{D}\l[\Theta^{\epsilon'}(x,z)
\phi_1(x,\frac{x}{{\epsilon'}},\frac{z}{{\epsilon'}})+\Theta^{\epsilon'}(z,x)
\phi_1(x,\frac{z}{{\epsilon'}},\frac{x}{{\epsilon'}})\r]\gamma(x,z)dxdzdtd{\mathbb{P}}\\
\quad &+\int_{Q\times\Omega}u_{\e'}(x)\int_{D}\l[\Theta^{\epsilon'}(z,x)
\phi_1(z,\frac{z}{{\epsilon'}},\frac{x}{{\epsilon'}})-\Theta^{\epsilon'}(z,x)
\phi_1(x,\frac{z}{{\epsilon'}},\frac{x}{{\epsilon'}})\r]\gamma(x,z)dxdzdtd{\mathbb{P}}\\
&=\Lambda_3^{\epsilon'}+\Lambda_4^{\epsilon'},
\end{split}
\]
\end{small}
where
\[
\begin{split}
\Lambda_3^{\epsilon'}
&={\epsilon'}^{(1-\a)/2}\int_{Q\times\Omega}u_{\e'}(x)[D_y(\Theta\phi_1)(x)]^{\e'} dxdtd{\mathbb{P}},
\end{split}
\]
and
\[
\begin{split}
\Lambda_4^{\epsilon'}&={\epsilon'}^{(1+\a)/2}\int_{Q\times\Omega}u_{\e'}(x)\int_{D}\Theta^{\epsilon'}(z,x)
[\psi_1(z,\frac{x}{{\epsilon'}})-\psi_1(x,\frac{x}{{\epsilon'}})\\
&\quad+\psi_1(x,\frac{z}{{\epsilon'}})
-\psi_1(z,\frac{z}{{\epsilon'}})]\gamma^2(x,z)dzdxdtd{\mathbb{P}}\\
&\rightarrow 0.
\end{split}
\]
We have
\begin{small}
\[
\begin{split}
\lim\limits_{\epsilon'\rightarrow 0}\int_0^T\int_{\Omega}a^{{\e'}}(u_{\e'},\psi_{\e'})dtd\mathbb{P}
&=\int_{Q\times\Omega\times Y\times N\times Z}u(x,y)\Theta(y,\eta)(D_x\phi_0)(x)dxdtdyd\eta d\tau d\mathbb{P}\\
&\quad+\lim\limits_{\epsilon'\rightarrow 0}\epsilon'^{(1-\a)/2}\int_{Q\times\Omega}u_{\e'}(x)[D_y(\Theta\phi_1)(x)]^{\e'} dxdtd\mathbb{P}.
\end{split}
\]
\end{small}
By the two-scale convergence of $u_{\e'}$, and since $\epsilon'^{(1-\a)/2}\rightarrow\infty$ as $\epsilon'\rightarrow0$, the limit above can be finite only if
$$\int_{Q\times\Omega} \int_{Y \times Z}u(x,y)D_y(
\Theta\phi_1)(x,t,y,\tau)dxdtdyd\tau d\mathbb{P}=0.$$
This yields, in particular, for any $\phi_1$,
$$\int_{Q\times\Omega\times Y\times N \times Z}(\mathcal{D}_y^*u)(x,y)(\Theta\phi_1)(x,t,y,\eta,\tau)dxdtdyd\eta d\tau d\mathbb{P}=0,$$
hence,
$$(\mathcal{D}_y^*u)(x,y)=0,$$
which means that $u$ does not depend on $y.$ Then $u=\tilde{u}.$
Now, setting $D_y(\Theta\phi_1)=0,$ we get
\begin{small}
\begin{equation*}
\begin{split}
\lim\limits_{\epsilon'\rightarrow 0}&\int_0^T\int_{\Omega}a^{{\e'}}(u_{\e'},\psi_{\e'})dtd\mathbb{P}
=\int_{Q\times\Omega \times Y\times N\times Z}u(x,y)\Theta(y,\eta)(D_x\phi)(x)dxdtdyd\eta d\tau d\mathbb{P}\\
&=\int_{Q\times D\times\Omega\times Y\times N \times Z}\Theta(y,\eta)D^*_x\tilde{u}(x,t,z)\phi(x,t,z,y,\eta,\tau)dzdxdtdy d\eta d\tau d\mathbb{P}.
\end{split}
\end{equation*}
\end{small}
We get that
\begin{footnotesize}
$$\int_{Q\times D\times\Omega \times Y\times N \times Z}\l(U(x,t,z,y,\tau,\eta)-D^*_x\tilde{u}(x,t,z)\r)\Theta(y,\eta)\phi(x,t,z,y,\eta,\tau)dzdxdtdy d\eta d\tau d\mathbb{P}=0.$$
\end{footnotesize}
From the formula (\ref{100}), we deduce that
\begin{footnotesize}
$$\int_{Q\times D\times\Omega\times Y\times N \times Z}(U(x,t,z,y,\tau,\eta)-D^*_x\tilde{u}(x,t,z))\Theta(y,\eta)\phi_1(x,t,y,\eta,\tau)dzdxdtdy d\eta d\tau d\mathbb{P}=0.$$
\end{footnotesize}
Since $D_y(\Theta\phi_1)=0,$ we deduce that there exists a unique function $u_1\in L^2(Q\times\Omega;L_{per}^2(Z;H_{\#}^{\a/2}(Y)))$ such that
$$U(x,t,z,y,\tau,\eta,\omega)-(D^*_x\tilde{u})(x,t,z,\omega)=(D^*_yu_1)(x,t,y,\tau,\eta,\omega).$$
This ends the proof of Lemma \ref{l3}.
\end{proof}
\end{lemma}
\begin{remark}
A notable feature of this setting for the two-scale convergence method is that the limit of the sequence depends on additional variables which do not appear in the weak limit.
\end{remark}
\begin{remark}
If the limit in Lemma \ref{l3} is unique, then the whole sequence converges.
\end{remark}
\begin{lemma}\label{l4}
For all $\psi_0\in \mathcal{M}(Q)\otimes L^2(\Omega)$ and for the subsequence $E''$ in Lemma \ref{l3}, we have
\begin{footnotesize}
\begin{align*}
\int_{Q\times\Omega} \e^{(1-\a)/2}u_\e\mathcal{V}^\epsilon(x,t)\psi_0(x,t,\omega) dxdtd\mathbb{P}&\rightarrow2\int_{Q\times\Omega\times Y\times Z}u_1(x,t,y,\tau)\psi_0\mathcal{V}(y,\tau)dxdtdyd\tau d\mathbb{P}.
\end{align*}
\end{footnotesize}
\begin{proof}
Since the mean value of $\mathcal{V}$ in the spatial variable is zero, the equation
\begin{equation*}
\mathcal{D}_y(\mathcal{D}_y^*\xi)=\mathcal{V}
\end{equation*}
admits a unique solution $\xi$ in $L^2_{per}(Z;H_{\#}^{\a/2}(Y))$. Let $\xi^\e(x,t)=\xi(\frac{x}{\epsilon},\frac{t}{\epsilon})$. For all $\e>0,$
we can conclude
\begin{align*}
\int_{Q\times\Omega} \epsilon^{(1-\a)/2}u_\e\mathcal{V}^\epsilon(x,t)\psi_0 dxdtd\mathbb{P}=\e^{(1-\a)/2}\int_{Q\times\Omega}u_\e\psi_0 [\mathcal{D}_y(\mathcal{D}_y^*\xi)]^\epsilon dxdtd\mathbb{P}.
\end{align*}
Note that, for every function $\Phi$, we have $\mathcal{D}\Phi^\epsilon=\epsilon^{\frac{1-\a}{2}}(D_y\Phi)^\e.$
Taking $\Phi=\mathcal{D}_y^*\xi$, we have
\begin{align*}
&\quad\int_{Q\times\Omega} \epsilon^{(1-\a)/2}u_\e\mathcal{V}^\epsilon(x,t)\psi_0 dxdtd\mathbb{P}
=\int_0^T\int_{\Omega}\int_{\mathbb{R}}u_\e\psi_0 \mathcal{D}(\mathcal{D}_y^*\xi)^\epsilon dxdtd\mathbb{P}\\
&=\int_0^T\int_{\Omega}\int_{\mathbb{R}}\int_{\mathbb{R}}\mathcal{D}^*(u_\e\psi_0)(\mathcal{D}_y^*\xi)^\epsilon dxdzdtd\mathbb{P}\\
&=\int_0^T\int_{\Omega}\int_{\mathbb{R}}\int_{\mathbb{R}}[\mathcal{D}^*(u_\e)\psi_0(x)+\mathcal{D}^*(\psi_0)u_\epsilon(z)](\mathcal{D}_y^*\xi)^\epsilon dxdzdtd\mathbb{P}\\
&\rightarrow\int_0^T\int_{\Omega}\int_{D}\int_{D}\int_{Y\times N \times Z}(\mathcal{D}^*\tilde{u}+\mathcal{D}_y^*u_1)\psi_0\mathcal{D}_y^*\xi dtdxdzd\tau dyd\eta d\mathbb{P}\\
&+\int_0^T\int_{\Omega}\int_{D}\int_{D}\int_{Y\times N \times Z}\mathcal{D}^*\psi_0\tilde{u}(z)\mathcal{D}_y^*\xi dtdxdzd\tau dyd\eta d\mathbb{P}\\
&+2\int_0^T\int_{\Omega}\int_{D}\int_{D^c}\int_{Y\times N \times Z}\tilde{u}(x)\psi_0(x)\gamma(x,z)\mathcal{D}_y^*\xi dtdxdzd\tau dyd\eta d\mathbb{P}\\
&=|D|\int_0^T\int_{\Omega}\int_{D}\int_{Y \times Z}u_1\psi_0\mathcal{V}(y,\tau)dtdxd\tau dyd\mathbb{P}\\
&+2\int_{Y\times N \times Z}\mathcal{D}_y^*\xi d\tau dyd\eta\int_0^T\int_{\Omega}\int_{\mathbb{R}}\int_{\mathbb{R}}\tilde{u}(x)\psi_0(x)\gamma(x,z)dxdzdtd\mathbb{P}\\
&=2\int_0^T\int_{\Omega}\l(\int_{Y\times Z}u_1\mathcal{V}(y,\tau)dyd\tau,\psi_0(x)\r)dtd\mathbb{P}.
\end{align*}
Hence the conclusion in this lemma follows.
\end{proof}
\end{lemma}
\subsection{Effective approximation theorem}
In this section, we will verify the main result that gives the effective approximation of equation (\ref{1}) and effective system.
Let us introduce some function spaces. We consider the space
$$\mathbb{F}_0^1=\mathcal{Y}(0,T)\times L^2(Q\times\Omega;L^2_{per}(Z;H_{\#}^{\a/2}(Y))),$$
provided with the norm
$$||u||^2_{\mathbb{F}_0^1}=||\tilde{u}||^2_{\mathcal{Y}(0,T)}+||u_1||^2_{L^2(Q\times\Omega;L^2_{per}(Z;H_{\#}^{\a/2}(Y)))},$$
which makes it a Hilbert space. We also consider the space
$$\mathcal{F}_0^\infty=\mathcal{M}(Q)\otimes L^2(\Omega)\times[\mathcal{M}(Q)\otimes L^2(\Omega)\otimes (\mathcal{C}_{per}(Y)/\mathbb{C})\otimes\mathcal{C}_{per}(Z)],$$
which is a dense subspace of $\mathbb{F}_0^1.$ For $\textbf{u}=(\tilde{u},u_1)$ and $\textbf{v}=(v_0,v_1)\in L^2(\Omega;H_\rho^{\a/2}(D))\times L^2(D\times\Omega;L^2_{per}(Z;H_{\#}^{\a/2}(Y))),$ we set
$$a(\textbf{u},\textbf{v})=\int_{D\times D\times Y\times N \times Z}\Theta(y,\eta)(D^*_x\tilde{u}+D^*_yu_1)(\overline{D^*_xv_0}+\overline{D^*_yv_1})dxdzdyd\eta d\tau.$$
The following lemma \cite{Gaw} gives uniqueness.
\begin{lemma}\label{l5}
Suppose $f\in L^2(0,T;L^2(D))$, the variational problem
\begin{equation}\label{e5}
\begin{cases}
\textbf{u}=(\tilde{u},u_1)\in\mathbb{F}_0^1\; \text{with}\; \tilde{u}(0)=h\\
\begin{split}
i\int_0^T\int_{\Omega}\l<\tilde{u}'(t),\overline{v_0}(t)\r>dtd\mathbb{P}&=\frac{1}{2}\int_0^T\int_{\Omega}a(\textbf{u}(t),\textbf{v}(t))dtd\mathbb{P}\\
&+\int_0^T\int_{\Omega}(g(\tilde{u}),v_0)dW_td\mathbb{P}+\int_0^T\int_{\Omega}(f(t),v_0(t))dtd\mathbb{P}\\
&+2\int_0^T\int_{\Omega}(\int_{Y\times Z}u_1\mathcal{V}(y,\tau)dyd\tau,v_0(x))_{L^2(D)}dtd\mathbb{P}\\
&+\int_{Q}\int_{\Omega}\int_{Y\times N}\rho(x)\Theta(y,\eta)\tilde{u}(x)\overline{v_0}(x)dxdtdyd\eta d\mathbb{P}\\
\text{for all} \;\textbf{v}=(v_0,v_1)\in\mathbb{F}_0^1\\
\end{split}
\end{cases}
\end{equation}
admits at most one solution ($\l<\cdot,\cdot\r>$ denotes the dual pairing between $H_\rho^{\a/2}$ and $H_\rho^{-\a/2}$).
\end{lemma}
Next, we will show that $\textbf{u}=(\tilde{u},u_1)$, where $\tilde{u}$ and $u_1$ are defined in Lemma \ref{l3}, solves the variational problem (\ref{e5}).
\begin{theorem}(Effective approximation)
Suppose the hypotheses of Lemma \ref{l1} and Lemma \ref{l3} are satisfied. For $\e>0,$ let $u_\e$ be the solution of equation (\ref{1}). Then, as $\e\rightarrow0,$ we have
\begin{equation}\label{110}
u_\e\rightarrow \tilde{u} \quad \text{in} \quad \mathcal{Y}(0,T)\text{-weakly},
\end{equation}
\begin{equation}\label{111}
u_\e\rightarrow \tilde{u} \quad \text{in} \quad L^2(\l(0,T\r)\times D\times\Omega)\text{-strongly},
\end{equation}
where, $(\tilde{u},u_1)=\textbf{u} \in \mathbb{F}_0^1$ is the unique solution of equation (\ref{e5}).
\begin{proof}
Thanks to Lemma \ref{l3}, there exist a subsequence $E'$ extracted from $E$ and a vector function $\textbf{u}=(\tilde{u},u_1)\in \mathbb{F}_0^1$ such that the convergence holds as $E'\ni \e\rightarrow0.$
Thus, according to Lemma \ref{l5}, the theorem is proved if we can show that $\textbf{u}$ verifies equation (\ref{e5}).
Indeed, we begin by verifying that $\tilde{u}(0)=h.$
Let $\upsilon\in H_\rho^{\a/2}$ and $\varphi\in \mathcal{C}^1([0,T])$ with $\varphi(T)=0.$ By integration by parts, we have
$$\int_0^T\l<u'_\e(t),\upsilon\r>\varphi(t)dt+\int_0^T\l<u_\e(t),\upsilon\r>\varphi'(t)dt=-\l<h,\upsilon\r>\varphi(0),$$
we pass to the limit in the preceding equality as $\e\rightarrow0$ and obtain
$$\int_0^T\l<\tilde{u}'(t),\upsilon\r>\varphi(t)dt+\int_0^T\l<\tilde{u}(t),\upsilon\r>\varphi'(t)dt=-\l<h,\upsilon\r>\varphi(0).$$
Since $\varphi$ and $\upsilon$ are arbitrary, we see that $\tilde{u}(0)=h.$
Finally, let us prove the variational equality of (\ref{e5}).
Let $\psi_\e\in L^2(Q\times\Omega;\mathcal {C}_{per}(Y\times Z)).$ Then there are two functions
$$\psi_0\in \mathcal{M}(Q)\otimes L^2(\Omega) \;\text{and}\; \psi_1\in \mathcal{M}(Q)\otimes L^2(\Omega)\otimes[(\mathcal{C}_{per}(Y)/\mathbb{C})\otimes\mathcal{C}_{per}(Z)],$$
such that
$$\psi_\e=\psi_0+\e^{(1+\a)/2}\psi_1^\e,\; \text{i.e.},\;
\psi_\e(x,t,\omega)=\psi_0(x,t,\omega)+\e^{(1+\a)/2}\psi_1(x,t,\frac{x}{\e},\frac{t}{\e},\omega).$$
By equation (\ref{1}), one has
\begin{small}
\begin{equation}\label{e8}
\begin{split}
i\int_0^T\int_{\Omega}&\l<u'_\e(t),\bar{\psi}_\e(t)\r>dtd\mathbb{P}=\frac{1}{2}\int_0^T\int_{\Omega}a^\e(u_\e(t),\psi_\e(t))dtd\mathbb{P}+\int_0^T
\int_{\Omega}(f(t),\psi_\e(t))dtd\mathbb{P}\\
&+\int_0^T\int_{\Omega}(g(u_\e),\psi_\e(t))dW_td\mathbb{P}
+\int_0^T\int_{\Omega}(\e^{(1-\a)/2}\mathcal{V}^\e u_\e(t),\psi_\e(t))dtd\mathbb{P}\\
&+\int_Q\int_{\Omega}\Theta^\e(x,z)\rho(x)u_\e(t)\psi_\e(t)dxdzdtd\mathbb{P}.
\end{split}
\end{equation}
\end{small}
The aim is to pass to the limit in the above equation as $\e$ goes to $0.$ First, we have
$$\int_0^T\int_{\Omega}\l<u'_\e(t),\bar{\psi}_\e(t)\r>dtd\mathbb{P}=-\int_Q\int_{\Omega}u_\e\frac{\partial \bar{\psi}_\e}{\partial t}dxdtd\mathbb{P}.$$
Thus, we have
$$\int_0^T\int_{\Omega}\l<u'_\e(t),\bar{\psi}_\e(t)\r>dtd\mathbb{P}\rightarrow -\int_Q\int_{\Omega}\tilde{u}\frac{\partial \bar{\psi}_0}{\partial t}dxdtd\mathbb{P}=\int_0^T\int_{\Omega}\l<\tilde{u}'(t),\bar{\psi}_0(t)\r>dtd\mathbb{P},$$
as $\e\rightarrow0.$
Then, from Lemma \ref{l3}, as $\e$ goes to $0,$ we have
$$\int_0^T\int_{\Omega}a^\e(u_\e(t),\psi_\e(t))dtd\mathbb{P}\rightarrow\int_0^T\int_{\Omega} a(\textbf{u}(t),\boldsymbol{\phi}(t))dtd\mathbb{P},$$
where $\boldsymbol{\phi}=(\psi_0,\psi_1).$
Finally,
\begin{footnotesize}
\begin{equation}\label{e6}
\int_0^T\int_{\Omega}\e^{(1-\a)/2}(\mathcal{V}^\e u_\e(t),\psi_\e(t))dtd\mathbb{P}=\e^{(1-\a)/2}\int_Q\int_{\Omega}\mathcal{V}^\e u_\e\overline{\psi}_0dxdtd\mathbb{P}+\e\int_Q\int_{\Omega}
\mathcal{V}^\e u_\e\overline{\psi}^\e_1dxdtd\mathbb{P}.
\end{equation}
\end{footnotesize}
In view of Lemma \ref{l4}, we pass to the limit in (\ref{e6}). This yields,
\begin{equation*}
\begin{split}
\int_0^T\int_{\Omega}\e^{(1-\a)/2}(\mathcal{V}^\e u_\e(t),\psi_\e(t))dtd\mathbb{P}&\rightarrow\int_{\Omega\times Q\times Y \times Z}u_1\overline{\psi}_0\mathcal{V}dxdtdyd\tau d\mathbb{P},\\
\end{split}
\end{equation*}
as $\e\rightarrow0.$
Hence, passing to the limit in (\ref{e8}) leads to
\begin{equation}\label{e9}
\begin{split}
i\int_0^T\int_{\Omega}&\l<\tilde{u}'(t),\bar{\psi}_0(t)\r>dtd\mathbb{P}=\frac{1}{2}\int_0^T\int_{\Omega}a(\textbf{u}(t),\boldsymbol{\phi}(t))dtd\mathbb{P}
+\int_0^T\int_{\Omega}(f(t),\psi_0(t))dtd\mathbb{P}\\
&\quad+\int_0^T\int_{\Omega}(g(\tilde{u}),\psi_0(t))dW_td\mathbb{P} +\int_{\Omega\times Q\times Y \times Z}u_1\overline{\psi}_0\mathcal{V}dxdtdyd\tau d\mathbb{P}\\
&\quad+\int_Q\int_{\Omega}\int_{Y\times N}\rho(x)\Theta(y,\eta)\tilde{u}(x)\overline{\psi_0}(x)dxdtdyd\eta d\mathbb{P}\\
\end{split}
\end{equation}
for all $\boldsymbol{\phi}=(\psi_0,\psi_1)\in \mathcal{F}_0^\infty.$ Moreover, since $\mathcal{F}_0^\infty$ is a dense subspace of $\mathbb{F}_0^1,$
by (\ref{e9}) we see that $\textbf{u}=(\tilde{u},u_1)$ verifies (\ref{e5}). Thanks to the uniqueness of the solution of (\ref{e5}) and the fact that the sequence $E$ is arbitrary, the theorem is proved.
\end{proof}
\end{theorem}
For further needs, we wish to give a simple representation of the function $u_1.$
For this purpose, let us introduce the bilinear form $${\hat{a}}(w,v)=\int_{Y\times N \times Z}\Theta(y,\eta)(D^*_yw\cdot\overline{D^*_yv})dy d\eta d\tau $$
for all $w,v\in L^2_{per}(Z;H_{\#}^{\a/2}(Y)).$
Due to the nonlocal Poincar\'{e} inequality, ${\hat{a}}$ is coercive on the space $L^2_{per}(Z;H_{\#}^{\a/2}(Y))$. Moreover, it is bounded:
$|{\hat{a}}(w,v)|\leq C ||w||_{H_{\#}^{\a/2}(Y)}||v||_{H_{\#}^{\a/2}(Y)}, $ where $C$ is a positive constant.
\begin{remark}\label{r5}(Representation of $u_1$)
We consider the variational problem
\begin{equation}\label{654}
\begin{cases}
{\hat{a}}(\chi,v)=\int_{Y\times N \times Z}\Theta(y,\eta)\overline{D^*_yv}dyd\tau d\eta,\\
\chi\in L^2_{per}(Z;H_{\#}^{\a/2}(Y)),\\
\end{cases}
\end{equation}
for all $v\in L^2_{per}(Z;H_{\#}^{\a/2}(Y)).$ It determines $\chi$ in a unique manner.
Under the assumption of Lemma \ref{l3}, we have
\begin{equation}\label{eyong17}
u_1(x,t,y,\tau,\omega)=-\frac{1}{|D|}\int_{D}(D^*_x\tilde{u})(x,t,z,\omega)dz\cdot\chi(y,\tau)
\end{equation}
for almost all $(x,t,y,\tau,\omega)\in Q\times Y \times Z\times\Omega.$ \newline
In fact, in view of (\ref{e5}), we choose the particular test function $\textbf{v}=(v_0,v_1)\in \mathbb{F}_0^1$ with $v_0=0$ and $v_1=\varphi\times v,$ where
$\varphi\in \mathcal{M}(Q)$ and $v\in L^2_{per}(Z;H_{\#}^{\a/2}(Y)).$ This yields
\begin{equation}\label{103}
\begin{split}
0&=|D|\int_{\Omega}{\hat{a}}(u_1,v)d\mathbb{P}
+\int_{\Omega}\int_{D}(\mathcal{D}_x^*\tilde{u})(x,t,z)dzd\mathbb{P}\\
&\quad\times\int_{Y\times N \times Z}(\mathcal{D}_y^*v)(x,t,y,\tau,\eta)\Theta(y,\eta)dyd\tau d\eta dx,
\end{split}
\end{equation}
almost everywhere in $(x,t)\in Q$ and for all $v\in L^2_{per}(Z;H_{\#}^{\a/2}(Y)).$ For fixed $(x,t,\omega)\in Q\times\Omega,$ $u_1(x,t,\omega)$ is the sole function in $L^2_{per}(Z;H_{\#}^{\a/2}(Y))$ solving equation (\ref{103}). At the same time, it is an easy matter to check that the right hand side
of (\ref{eyong17}) solves the same equation (\ref{103}), which establishes the representation.
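The last verification is a direct substitution. By linearity of $\hat{a}$ in its first argument and the cell problem (\ref{654}) (with the conjugation convention used there),
\begin{equation*}
|D|\,{\hat{a}}\l(-\frac{1}{|D|}\int_{D}(D^*_x\tilde{u})\,dz\cdot\chi,\, v\r)
=-\int_{D}(D^*_x\tilde{u})\,dz\cdot{\hat{a}}(\chi,v)
=-\int_{D}(D^*_x\tilde{u})\,dz\cdot\int_{Y\times N \times Z}\Theta(y,\eta)\overline{D^*_yv}\,dyd\tau d\eta,
\end{equation*}
so the two terms on the right hand side of (\ref{103}) cancel.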
\end{remark}
\subsection{Effective system}\label{sub3.3}
In this section, we will show that the limit process $\tilde{u}$ satisfies the following nonlocal stochastic Schr\"{o}dinger equation (\textbf{effective system}):
\begin{align}\label{e7}
\begin{cases}
id\tilde{u}=\l(-\Xi_1(-\Delta)^{\a/2}\tilde{u}-\frac{\Xi_2}{2}(\mathcal{D}\zeta)(x)-\Xi_3\zeta(x)\r)dt+g(\tilde{u})dW_t+fdt,\\
\tilde{u}(x,t)=0,\quad\quad (x,t) \in D^c\times(0,T),\\
\tilde{u}(0)=h(x), \quad\quad x \in D,
\end{cases}
\end{align}
where
\begin{align*}
&\Xi_1=\int_{Y\times N}\Theta(y,\eta)dyd\eta,\\
&\Xi_2=\int_{Y\times N \times Z}\Theta(y,\eta)\mathcal{D}_y^*\chi dyd\eta d\tau,\\
&\Xi_3=2\int_{Y\times Z}\mathcal{V}(y,\tau)\chi(y,\tau)dyd\tau,\\
&\zeta(x)=\frac{1}{|D|}\int_{D}(\mathcal{D}^*\tilde{u})(x,z)dz,\\
&\mathcal{D}|_{D}(\zeta)(x)=\int_{D}(\zeta(x)+\zeta(z))\gamma(x,z)dz.\\
\end{align*}
We can see that if $\tilde{u}$ verifies equation (\ref{e7}), then $\textbf{u}=(\tilde{u},u_1)$ satisfies equation (\ref{e5}). From Lemma \ref{l5}, we obtain that equation (\ref{e7}) has at most one weak solution $\tilde{u}.$
\begin{theorem}(Effective equation)
Suppose the hypotheses of Lemmas \ref{l1} and \ref{l2} are satisfied. Let $u_\e$ be the solution of the heterogeneous equation (\ref{1}). Then, as the scale parameter $\e$ goes to $0$, we have $u_\e\rightarrow \tilde{u}$ in $\mathcal{Y}(0,T)$-weakly, where $\tilde{u}$ is the unique weak solution of the effective equation (\ref{e7}) in $\mathcal{Y}(0,T)$.
\end{theorem}
\begin{proof}
From any fundamental sequence $E$ one can extract a subsequence $E'$ such that, as $\e$ goes to $0$, we have (\ref{110})-(\ref{111}), and (\ref{e9}) holds for all $\boldsymbol{\phi}=(\psi_0,\psi_1)\in \mathcal{F}_0^\infty$, where $\textbf{u}=(\tilde{u},u_1)\in \mathbb{F}_0^1.$ Now, substituting the representation of $u_1$ from Remark \ref{r5} into (\ref{e9}), a simple computation yields equation (\ref{e7}).
\end{proof}
\subsection{An example}
In this subsection, we set $\Theta^{\e}(x,y)=1$ and simplify the heterogeneous system (\ref{1}):
\begin{equation}
\begin{cases}
idu_\e=\l(-(-\Delta)^{\a/2} u_\e+\e^{(1-\a)/2}\mathcal{V}^\e u_\e\r)dt+g(u_\e)dW_t+fdt, \; t>0, \; x \in D=(-1,1), \\
u_\e(0,x)=h(x),\quad\quad x \in D,\\
u_\e(t,x)=0, \quad\quad x \in D^c=\mathbb{R}\backslash D,
\end{cases}
\end{equation}
From subsection \ref{sub3.3}, equation (\ref{654}), and the fact that the functions $\chi, \xi\in L^2_{per}(Z;H_{\#}^{\a/2}(Y))$, we have
\begin{align*}
\Xi_1&=\int_{Y\times N}\Theta(y,\eta)dyd\eta=1,\\
\Xi_2&=\int_{Y\times N \times Z}\mathcal{D}_y^*\chi dyd\eta d\tau=\int_Z\int_{Y\times N}\l(\chi(y,\tau)-\chi(\eta,\tau)\r)(y-\eta)\frac{1}{|y-\eta|^{\frac{3+\a}{2}}}dyd\eta d\tau\\
&=2\int_Z\int_{Y\times N}\chi(y,\tau)(y-\eta)\frac{1}{|y-\eta|^{\frac{3+\a}{2}}}dyd\eta d\tau=0,\\
\Xi_3&=\int_{Y\times Z}\mathcal{V}(y,\tau)\chi(y,\tau)dyd\tau=0.\\
\end{align*}
Then, we can obtain the effective system
\begin{align}
\begin{cases}
id\tilde{u}=\l(-(-\Delta)^{\a/2}\tilde{u}\r)dt+g(\tilde{u})dW_t+fdt,\\
\tilde{u}(x,t)=0,\quad\quad (x,t) \in D^c\times(0,T),\\
\tilde{u}(0)=h(x), \quad\quad x \in D.
\end{cases}
\end{align}
We can see that, in this situation, the oscillating potential term has no influence on the effective system.
% arXiv:1406.6939
\section{Background}
The transient, free decay of coupled, damped oscillators is not discussed in elementary physics courses and rarely, if ever, in advanced ones. The discussion in advanced physics textbooks is cursory, typically suggesting that one would proceed ``just as one might imagine'' but that the details are cumbersome. The new features possessed by the generic or typical version of such systems relative to the well-studied ones are just not part of the basic physics education offered to all students of physical sciences and engineering. ``Proportional'' damping (where each normal mode has its own individual damping constant) is often presented and is easy to analyze, but it is a very special subset of possible damping in coupled systems, and it misses some of the most dramatic features that are actually quite common. With no basic understanding provided by the elementary physics canon, when analogous aspects arise in particular situations, the people involved sense an aspect of discovery. Sometimes the ``newly'' discovered perspective has had major impact. Although the mechanics of such linear systems has been understood, in principle, for hundreds of years, rediscovery of their special features has occurred even into the $21^{\text{st}}$ Century.
Here are some examples of such rediscoveries. Understanding the $K^o$--$\bar K^o$ meson system\cite{MGM} in the 1950's laid the groundwork for the experiments that identified CP and T (time reversal symmetry) violations in the fundamental interactions.\cite{CP}$^,$\cite{kabir} Mechanical engineers in the 1960's were interested in earthquake shaking of buildings,\cite{caughey}$^,$\cite{caughey2} which can exhibit something now known as transient growth after the initial quake ground shaking has ceased. The origin of the characteristic sound of the piano\cite{weinreich} (in contrast to earlier stringed, keyboard instruments) was elucidated in the 1970's. Stability analyses in fluid mechanics and analogous problems in applied linear algebra witnessed a major revolution starting in the late 1990's.\cite{schmidt}$^,$\cite{tref} ``Transient growth'' is the generic term applied wherever all eigenvalues and eigenvectors decay exponentially in time and yet some combinations initially grow before ultimately dying off. And additional examples of systems and situations of this type continue to be identified.
Of course, numerical integration of differential equations has gotten easier and better over the years. And it has often been observed that what are now identified as ``non-normal" linear systems sometimes exhibit surprising transient behavior, quite sensitive to parameters and initial conditions. But that is not a substitute for thorough understanding of at least one simple, mechanical system.
What is needed is an explicit example that exhibits the phenomena particular to the {\it simultaneous} presence of {\it both} coupling and damping and yet is visualizable and analytically tractable. With respect to coupling, as simple as possible means only two oscillators. With respect to damping, as simple as possible means weak damping. But solution of the general problem of this form involves solving a quartic polynomial. Quartic is the highest order polynomial for which a closed-form solution exists. That solution has been known since the $16^{\text{th}}$ Century, but it is far longer than most people can remember or comprehend at a glance. Reducing the algebra to quadratic equations requires weak coupling as well. But even that is not enough. We will need a system whose frequencies, before coupling and damping, are degenerate or nearly degenerate, i.e., the same. A bonus of having the coupling and damping be weak is that the combined behavior shows vestiges of the familiar separate problems of the single damped oscillator and of undamped coupled oscillators.
A mechanical system satisfying those requirements is illustrated in Fig.~\ref{springs}.
\begin{figure}[h!]
\centering
\includegraphics[width=2.8in]{two-oscillators.pdf}
\caption{Two-oscillator mechanical analog}
\label{springs}
\end{figure}
\noindent Two identical oscillators are coupled with a weak spring, $\kappa$, but only one of the oscillators is weakly damped. With sufficient care, such an apparatus could be constructed. In fact, Bruce Winstein\cite{winstein}, when he gave colloquia and talks about CP violation, brought along just such a mechanical model that he had made, which included finely adjustable damping and coupling. Ref.~\onlinecite{quant} makes explicit the connection of the neutral kaons to a two coupled degenerate mode damped system.
On the other hand, plucking a typical stringed instrument readily provides the appropriate degrees of freedom and parameter ranges.
As background, it is useful to understand the generic case of two weakly coupled, degenerate oscillators. In typical demonstrations, damping is minimized. These could be springs,\cite{springs} pendulums,\cite{pendulums} or, most charmingly, the Wilberforce pendulum.\cite{wilbur} Essential features are the small splitting of the normal frequencies, identification of the normal modes, and dependence of the qualitative motion on initial conditions --- in particular, the possibility of beats.
The other essential bit of background is the decomposition of string motion into normal modes. The ideal string can be analyzed as a set of normal modes, which are uncoupled oscillators whose frequencies are integer multiples of the lowest, ``fundamental'' frequency. What is often not mentioned is that for a physical string, each of these modes is actually a degenerate pair, corresponding to the possible vertical and horizontal motion at the given frequency. In the simplest approximation, these are totally independent motions, which are then superposed into the motion of the single physical string.
And for the string actually to be ``musical,'' the damping and the coupling must be {\it both} small {\it yet} non-negligible.
There are several related meanings of the term ``non-normal.'' Here, it refers simply to matrices with complete sets of eigenvectors which are not all orthogonal to one another. They are common elements of descriptions of a variety of systems including ones that exhibit transient growth where all eigenvectors, individually, decay monotonically.\cite{schmidt}$^,$\cite{tref}$^,$\cite{rahul} As described in the following, the total energy of two coupled, damped oscillators decays in time, no matter what the initial conditions or parameter values. However, there are ranges of parameters and initial conditions for which the amplitude and energy of {\it one} of the oscillators can grow before eventually decaying. In such cases, the decay of the total system's energy can come in spurts, rather than a steady one- or two-rate exponential.
There are, of course, quantum mechanical analogs, where more sophisticated methods and different approximations yield a complementary set of solutions.\cite{quantum}
\section{The String as Two Coupled, Damped Oscillators}
The ideal string stretched taut between two fixed points has normal modes with frequencies that are integer multiples of the fundamental. In a musical instrument, there are actually two degenerate modes for each frequency, reflecting the possibility of string displacements in the plane transverse to the string direction.
There are always some further interactions that couple those degenerate modes. The result is that their frequencies are split by an amount proportional to the strength of that coupling. The coupling has the further consequence of establishing the form of the actual normal modes, i.e., the motions that have precisely just one of those two frequencies. And the modes thus determined are not coupled. (It is said that the perturbation diagonalizes the interaction.) In the specific context of a stringed instrument, it is important that the splitting be small so that any combination of the two is perceived as essentially a single pitch.
By itself, the vibration of a string produces almost no sound --- because the tiny cross section of the string moves almost no air. There must be some further transduction of the string motion to air motion. In acoustic instruments, that is accomplished by linking one end of the string to a sound board. String oscillations force sound board motion, which in turn produces sound. Hence, at least one ``fixed'' end of the string is not actually fixed. Furthermore, in a good musical instrument, that end motion is the string's primary loss of energy. And again, that damping must be weak so that the consequent width or spread in frequency due to the damping leaves a single discernible pitch rather than noise.
So, for the present, we focus on a single original frequency. The system is approximated as two initially degenerate oscillators which are weakly coupled and weakly damped. The catch is that the coupling and the damping, represented as matrices in the configuration space of the two initial oscillators, are generally not simultaneously diagonalizable. If the system is a particular mode of a string, we may choose as a basis the up-and-down motion (relative to the sound board) and the side-to-side.
The coupling of vibration to the sound board is typically far more effective for vertical motion than for horizontal. To make matters as simple as possible, we will consider explicitly small vertical damping (denoted by $\gamma$) and no horizontal damping. If the system is started out in one of the modes of the undamped, coupled problem, then the non-zero damping will decrease the amount of vertical motion relative to the horizontal. Unless the undamped modes were precisely pure vertical motion for one mode and pure horizontal motion for the other, this damping would disturb the balance that defined the normal mode. Thus, the damping mixes in some amount of the other mode.
What is it that actually happens?
We choose the original restoring forces, the coupling, and the damping so that the system is linear (e.g., with the damping proportional to the velocities). With positive damping, the total energy must decrease monotonically with time. It is handy (and virtually essential) to use a complex number representation of the frequencies and displacements; their superposition into real motions at the end is totally parallel to standard treatments of the free decay of the single damped harmonic oscillator.
We will find that there are, indeed, two eigenfrequencies. They describe exponential decay multiplied by sinusoidal oscillation. They have corresponding eigenvectors, which are possible motions that follow their single eigenfrequency. However, these vectors have complex components, which, translated into the real motion of strings, means that their motion in the transverse plane is elliptical. And, finally, these eigenvectors are not orthogonal. One consequence is that the total energy and the rate of energy dissipation (which is the volume of produced sound in this simple model of a string) are not necessarily the sum of two independent, decreasing exponentials.
Newton's second law yields two coupled differential equations in time that are linear in the two displacements. Write the displacements, $x_1(t)$ and $x_2(t)$ in vector form with
\begin{equation}
\nonumber
\mathbf{x}(t)
=
\begin{bmatrix}
x_1(t)\\
x_2(t)\\
\end{bmatrix}.
\end{equation}
\noindent For the plucked string, $x_1(t)$ is a given mode's vertical motion, and $x_2(t)$ is the horizontal motion.
The equations of motion take the form
\begin{equation}
\label{eq-mot}
\mathbf{\ddot x} = - \mathbf{K} \cdot \mathbf{x} - \mathbf{\Gamma} \cdot \mathbf{\dot x} ~ ,
\end{equation}
where {\bf K} and $\mathbf{\Gamma}$ are $2\times2$ matrices representing the zeroth order restoring forces, the coupling, and the damping. (The general mathematical problem would include a mass matrix multiplying $\mathbf{\ddot x}$.)
The unit of time can be chosen to make the zeroth order {\bf K$_\text{o}$ = I}, the unit matrix. The most general possible form of {\bf K} is Hermitian, which means it has three real parameters for this mode pair. (The fourth parameter is absorbed into the average frequency, which is normalized to 1.) For the present, however, choose just the simplest possible mathematical form that has the feature that the eigenvectors of {\bf K} are mixed by the damping. That simplest, one parameter form is
\begin{equation}
\nonumber
\mathbf{K}
=
\begin{pmatrix}
1 & ~ ~ \epsilon \\
\epsilon & ~ ~ 1
\end{pmatrix} ~.
\end{equation}
This particular {\bf K} describes the mass/spring system of Fig.~\ref{springs}. From the standpoint of actual stringed instruments, it is theoretically possible but may be an oversimplification. However, with only a single parameter in {\bf K}, the ensuing math is far easier to follow. At the end, it will also be easy to see how the same calculations with the most general possible {\bf K} would go through and see what aspects of the results are generic.
The damping, as described above, takes the form
\begin{equation}
\nonumber
\mathbf{\Gamma}
=
\begin{pmatrix}
\gamma & ~ ~ 0 \\
0 & ~ ~ 0
\end{pmatrix} ~ .
\end{equation}
For weak coupling and damping, units are chosen such that all of the angular frequencies are close to 1.
There is no basis in which {\bf K} and $\mathbf{\Gamma}$ are both diagonal. A commonly recognized reflection of this is that the commutator
\begin{equation}
\nonumber
[\mathbf{K}, \mathbf{\Gamma}] = \mathbf{K} \cdot \mathbf{\Gamma} - \mathbf{\Gamma} \cdot \mathbf{K} =
\begin{pmatrix}
0 & ~ ~ - \epsilon \gamma \\
\epsilon \gamma & ~ ~ 0
\end{pmatrix}
\ne 0 ~ .
\end{equation}
(Of course, there are other forms of damping for which the damping and coupling matrices do commute and are simultaneously diagonalizable. This is known as proportional damping, and the separation into normal modes is straightforward.)
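For readers who want to check this algebra by machine, here is a minimal numerical sketch; the values of $\epsilon$ and $\gamma$ are arbitrary illustrative choices, not taken from the text.

```python
import numpy as np

eps, gamma = 0.01, 0.01  # illustrative weak coupling and damping

K = np.array([[1.0, eps], [eps, 1.0]])    # coupling matrix
G = np.array([[gamma, 0.0], [0.0, 0.0]])  # damping matrix (vertical motion only)

# The commutator [K, Gamma] = K.Gamma - Gamma.K from the text.
comm = K @ G - G @ K
print(comm)  # [[0, -eps*gamma], [eps*gamma, 0]]: non-zero, so K and Gamma
             # cannot be simultaneously diagonalized
```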
\section{Solution: Eigenvalues}
We seek eigenvalues $\alpha$ and time-independent eigenvectors $\mathbf{x}_o$ such that e$^{\alpha t} \mathbf{x}_o$ is a solution to Eq.~(\ref{eq-mot}). Plugging that in yields
\begin{equation}
\label{plug-in}
(\alpha^2 \mathbf I + \mathbf K + \alpha \mathbf \Gamma) \cdot \text{e}^{\alpha t} \mathbf{x}_o = 0 ~,
\end{equation}
where {\bf I} is the identity matrix.
$\mathbf{x}_o = 0$ is a solution to Eq.~(\ref{plug-in}) but not to the problem at hand. For all other solutions, the matrix factor in Eq.~(\ref{plug-in}) cannot have an inverse, and that requires that its determinant vanish, which is the following ``characteristic equation'':
\begin{equation}
\label{det}
(\alpha^2 + 1)^2 + \gamma \alpha (\alpha^2 + 1) - \epsilon^2 = 0 ~.
\end{equation}
This would be easy to solve were either $\gamma$ or $\epsilon$ zero, but as it stands it is a quartic equation. Its analytic solution is hundreds of characters long and contains many nested square and cube roots. Most people find it impossible to discern by inspection the qualitative properties for particular values of the parameters. Just as the single damped oscillator has cases with radically different qualitative behavior, i.e., over damped, under damped, and critically damped, there are cases here, too --- only many more.
For weak coupling and weak damping, we can anticipate the structure of the solutions from physical considerations. With $0 < \gamma \ll 1$, the four solutions for $\alpha$ will be two complex conjugate pairs. Re[$\alpha$] will be negative, reflecting the monotonic loss of energy. Im[$\alpha$] comes in conjugate pairs. These conjugate pair solutions can ultimately be superposed to get real solutions with sines and cosines of $t$ with the same frequency. And this multiplicity of solutions allows fitting of any initial conditions of the two oscillators.
If {\it both} $\gamma \ll 1$ and $\epsilon \ll 1$, then {\it all} the frequencies will be near to 1, i.e., $i\,\text{Im}[\alpha] \approx \pm i$. This offers a way to approximate Eq.~(\ref{det}) and reduce the algebra problem to a quadratic.\cite{weinreich2} In the term $\gamma \alpha (\alpha^2 + 1)$, approximating the first $\alpha$ by $\pm i$ leaves the whole term still as small as $\epsilon^2$ and $(\alpha^2 + 1)^2$, at least in the vicinity of the desired solutions. So, using this approximation, Eq.~(\ref{det}) becomes
\begin{equation}
\label{quad-det}
(\alpha^2 + 1)^2 \pm ~ i \gamma (\alpha^2 + 1) - \epsilon^2 \simeq 0 ~,
\end{equation}
whose solutions are
\begin{equation}
\label{approx}
\alpha \simeq \pm ~ i \left(1 \pm ~ i \frac{\gamma}{4} \pm ~ \frac{1}{2} \sqrt{\epsilon^2 - (\frac{\gamma}{2})^2} ~ \right) ~.
\end{equation}
If the three $\pm$'s were chosen independently, there would appear to be eight solutions. However, the coefficient $\pm i$ of ${\gamma \over 4}$ is an approximation to $\alpha$. Hence, that $\pm i $ is $+ i $ when $\alpha \approx + i$. That selects the four actual solutions out of the apparent eight, which can also be identified by their having Re$[\alpha] < 0$ for $\gamma > 0$. So there are really only two $\pm$'s coming from the sequential solution of two quadratic equations, and there is no ambiguity in the term $({\gamma \over 2})^2$.
The separate limits $\epsilon \to 0$ and $\gamma \to 0$ recover the previously understood behaviors of single damped oscillators and coupled, undamped oscillators.
There are evidently three qualitatively different regions, even with both $\gamma \ll 1$ and $\epsilon \ll 1$. With $\epsilon > \gamma / 2$, the square root term contributes to the oscillation frequency; there are two oscillation frequencies but only one decay rate, which is independent of $\epsilon$. With $\epsilon < \gamma / 2$, the square root term affects the two decay rates, and there is no splitting of the oscillation frequency degeneracy. And for $\epsilon \approx \gamma / 2$, the frequencies and decay rates of the two eigenmodes are nearly equal.
In the Appendix, evaluations of Eq.~(\ref{approx}) for three numerical pairs of $\epsilon$ and $\gamma$, representative of the three regions, are compared to the exact values that come from solving Eq.~(\ref{det}).
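Both the exact and the approximate eigenvalues are easy to evaluate numerically. The sketch below (with illustrative weak parameter pairs, not necessarily those used in the Appendix) expands Eq.~(\ref{det}) into a polynomial in $\alpha$, solves it, and checks that every root of Eq.~(\ref{approx}) lands near an exact root and has a negative real part.

```python
import numpy as np

def exact_alphas(eps, gamma):
    # Eq. (det) expanded in powers of alpha:
    # alpha^4 + gamma*alpha^3 + 2*alpha^2 + gamma*alpha + (1 - eps^2) = 0
    return np.roots([1.0, gamma, 2.0, gamma, 1.0 - eps**2])

def approx_alphas(eps, gamma):
    # Eq. (approx) with the sign correlations described in the text:
    # the alpha ~ +i branch carries +i*gamma/4; the other two roots are conjugates.
    s = np.lib.scimath.sqrt(eps**2 - (gamma / 2.0) ** 2)  # complex if eps < gamma/2
    upper = [1j * (1.0 + 1j * gamma / 4.0 + sign * s / 2.0) for sign in (+1, -1)]
    return np.array(upper + [np.conj(a) for a in upper])

# One pair with eps > gamma/2 (split frequencies, one decay rate) and
# one with eps < gamma/2 (split decay rates, one frequency):
for eps, gamma in [(0.01, 0.01), (0.002, 0.01)]:
    exact = exact_alphas(eps, gamma)
    for a in approx_alphas(eps, gamma):
        assert np.min(np.abs(exact - a)) < 2e-4  # approx root is near an exact root
        assert a.real < 0                        # every mode decays for gamma > 0
```

Near the critical region $\epsilon \approx \gamma/2$ the two quadratics become degenerate and the error of Eq.~(\ref{approx}) is somewhat larger, so a looser tolerance would be appropriate there.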
\section{Solution: Eigenvectors}
Let the components of the four eigenvectors be $a$ and $b$:
\begin{equation}
\nonumber
\mathbf{x_o}
=
\begin{bmatrix}
a\\
b\\
\end{bmatrix}.
\end{equation}
(The eigenvalues $\alpha$ and eigenvectors $\mathbf x_o$, with their components $a$ and $b$, have a four-valued index $i$ to tell which goes with which. That index $i$ is suppressed when that improves clarity.)
The lower component of Eq.~(\ref{eq-mot}) tells us that
\begin{equation}
\label{eigenvec}
\frac{b}{a} = \frac{- \epsilon}{\alpha^2 +1} \simeq \frac{-\epsilon}{\pm i \gamma / 2 \pm \sqrt{ \epsilon^2 - (\gamma / 2)^2}}~.
\end{equation}
This specifies the four eigenvectors, one for each $\alpha$. (Recall that $\alpha^2 + 1 = 0$ only for $\epsilon = 0$ and $\alpha^2 + 1 = \pm \epsilon$ for $\gamma = 0$.) The first expression for $b/a$ with the = sign is exact relative to the initial statement of the problem, i.e., Eq.~(\ref{eq-mot}); the approximate solutions to Eq.~(\ref{quad-det}) are used for the $\simeq$ expression. The ratio $b/a$ is of order 1 (because $|\alpha^2 + 1| \ll 1$).
Also, $b/a$ is complex. The phase of each $b/a$ means that in the oscillatory part of the motion corresponding to a single eigenvalue, there is a fixed, non-zero phase difference between the $x_1(t)$ and the $x_2(t)$. In the language of the plucked string: in the transverse plane, the eigen-motions are elliptical rather than linear (which they would be in the absence of damping).
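Both claims about $b/a$ can be confirmed directly from the exact eigenvalues; the sketch below again uses the illustrative values $\epsilon = \gamma = 0.01$.

```python
import numpy as np

eps, gamma = 0.01, 0.01  # illustrative weak coupling and damping

# Exact eigenvalues alpha from the characteristic quartic, expanded in alpha.
alphas = np.roots([1.0, gamma, 2.0, gamma, 1.0 - eps**2])

for alpha in alphas:
    ratio = -eps / (alpha**2 + 1)  # exact b/a from the lower component of Eq. (eq-mot)
    assert 0.5 < abs(ratio) < 2.0  # order 1, since |alpha^2 + 1| is comparable to eps
    assert abs(ratio.imag) > 0.1   # genuinely complex: the eigenmotion is elliptical
```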
\section{Real solutions and non-orthogonality}
It is worth returning to the original physical problem and constructing the basis of real eigenfunctions.
The four eigenvalues $\alpha$ are two pairs of complex conjugates. Label them as $\alpha_{\pm1}$ and $\alpha_{\pm2}$, where there are two, in general, different, negative real parts, each with a pair of conjugate imaginary parts. The $\alpha_{\pm1}$'s can be assembled into two real eigenfunctions:
\begin{equation}
\nonumber
\mathbf y_{+1}(t) =
\text{e}^{\alpha_{+1} t}
\begin{bmatrix}
a_{+1} \\
b_{+1} \\
\end{bmatrix} +
\text{e}^{\alpha^*_{+1}t}
\begin{bmatrix}
a^*_{+1} \\
b^*_{+1} \\
\end{bmatrix}
\end{equation}
\begin{eqnarray}
\nonumber
\mathbf y_{-1}(t) & = & i ~ \left(
\text{e}^{\alpha_{+1}t}
\begin{bmatrix}
a_{+1} \\
b_{+1} \\
\end{bmatrix} -
\text{e}^{\alpha^*_{+1}t}
\begin{bmatrix}
a^*_{+1} \\
b^*_{+1} \\
\end{bmatrix}
\right) \nonumber \\
\nonumber
& = &
\text{e}^{\alpha_{+1}t + i \pi/2}
\begin{bmatrix}
a_{+1} \\
b_{+1} \\
\end{bmatrix} +
\text{e}^{\alpha^*_{+1}t - i \pi/2}
\begin{bmatrix}
a^*_{+1} \\
b^*_{+1} \\
\end{bmatrix}~.
\end{eqnarray}
These forms use the facts that $\alpha_{-1} = \alpha_{+1}^*$ and $b_{-1}/a_{-1} = b_{+1}^*/a_{+1}^*$ (true in the original, exact formulation). Likewise, there are two real functions $\mathbf y_{\pm2}(t)$ similarly constructed out of the conjugate pair $\alpha_{\pm2}$. Appropriate superposition of the four $\mathbf y(t)$'s can match the two initial positions and velocities.
The normal modes of linearly coupled, undamped oscillators behave themselves, essentially, like a set of uncoupled oscillators. Whatever superposition is determined by the initial conditions remains in force for all time. Many quantities of central importance, such as the kinetic energy, the potential energy, and the total energy of the system are, at any time, just the sum of the contributions from the normal modes. Since these particular quantities are quadratic in the dynamical variables, the reduction to a sum over modes requires that cross terms between the contributions of different modes vanish. And that is the sense in which the normal modes are normal to each other. However, for the generic case of coupled, damped oscillators, such cross terms are non-zero. Hence, there are no normal modes --- in spite of there being a complete set of solutions corresponding to the time-dependence eigenvalues.
As long as $\epsilon \ne 0$, for convenience we can choose all four $a_{\pm1,2} = 1$. Then, for example, at $t=0$:
\begin{eqnarray}
\nonumber
\mathbf y_{+1}(0) \cdot \mathbf y_{+2}(0) & = & 4 (1 + \text{Re}[b_{+1}]\text{Re}[b_{+2}]) \nonumber \\
\nonumber
& \ne & 0 \quad \textrm{for}\ \gamma / \epsilon \ne 0 ~.
\end{eqnarray}
Once again, this result is not particularly surprising, except possibly to those imbued with an overwhelming respect for normal modes. If the coupled, damped system were exactly describable by normal modes, then the total energy would decay steadily as the sum of one or two exponentials. However, if the corresponding undamped system could exhibit beats, then with the addition of very weak dissipation to only one of the (pre-coupled) degrees of freedom, dissipation should likewise come and go at the beat frequency. And that is, indeed, one of the possible generic behaviors.
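A short computation makes the contrast explicit: with damping the $t=0$ overlap above is of order 1, while in the undamped limit $\gamma \to 0$ the same construction returns orthogonal modes. The parameter values are illustrative.

```python
import numpy as np

def mode_overlap(eps, gamma):
    # The two eigenvalues with Im[alpha] > 0, i.e. alpha_{+1} and alpha_{+2}.
    roots = np.roots([1.0, gamma, 2.0, gamma, 1.0 - eps**2])
    a1, a2 = [r for r in roots if r.imag > 0]
    b1, b2 = -eps / (a1**2 + 1), -eps / (a2**2 + 1)  # b/a with a = 1
    # y_{+1}(0) = [2, 2 Re b1] and y_{+2}(0) = [2, 2 Re b2], so:
    return 4.0 * (1.0 + b1.real * b2.real)

assert abs(mode_overlap(0.01, 0.01)) > 0.5   # damped case: modes are not orthogonal
assert abs(mode_overlap(0.01, 0.0)) < 1e-6   # undamped limit: orthogonal normal modes
```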
If the simple model analyzed thus far is interpreted as describing a mode doublet of a plucked string, then the produced sound volume would be proportional to the rate of energy lost to damping of the vertical motion of the particular string harmonic. The instantaneous value of this lost power is $\mathcal P_\text{inst} = \gamma \dot x_1(t)^2$. In Fig.~\ref{pluck}, the log of the several-cycle-averaged $ \dot x_1(t)^2$ is plotted {\it versus} time for $\epsilon = 0.01$ and $\gamma = 0.01$ with the $t=0$ condition that the pluck is purely in the horizontal (undamped) direction. The horizontal motion $x_2(t)$ is the lower component of $\mathbf x(t) \equiv \mathbf y_{+1}(t) - \mathbf y_{+2}(t)$.
\begin{figure}[h!]
\centering
\includegraphics[width=4.5in]{pluck.pdf}
\caption{The calculated ``sound'' of a particular horizontal pluck}
\label{pluck}
\end{figure}
The numerical parameter values in the Appendix were chosen for convenience of computer entry, with the consideration that they be small but realistic for stringed instruments. Of these three pairs, the values used for Fig.~\ref{pluck} are the ones that exhibit beats, i.e., there are two distinct oscillation frequencies with the beat period being distinctly shorter than the damping time. And the horizontal pluck was chosen for display because time must elapse after the pluck before the coupling to the dissipative vertical motion is substantial. Hence, the dissipated power {\it grows} immediately after the pluck before it subsequently decays.
With a vertical pluck, the dissipation would be evident from the start but would also exhibit the beats between the two eigenfrequencies. The dissipated power for the other domains of $\epsilon$-$\gamma$ space would look like single or double exponential decay without beats. Strictly speaking, the single exponential behavior arises at a point in the parameter space. The two exponential rates become closer and closer as one approaches that point. On the other side of the point, with only a single exponential, the frequency splitting begins at zero and then grows.
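The behavior of Fig.~\ref{pluck} can be reproduced qualitatively with a few lines of numerics. The sketch below assumes the same hypothetical equations of motion as noted earlier (coupling $\epsilon$ and damping $\gamma$ on the second coordinate, both $0.01$, which are assumptions since only the parameter values are quoted here), plucks the undamped coordinate at $t=0$, and averages the dissipated power $\gamma \dot x_2^2$ over windows of several cycles.

```python
import numpy as np

eps, gam = 0.01, 0.01              # the parameter pair that exhibits beats
# Assumed equations: x1'' = -x1 - eps*x2 (undamped, plucked coordinate),
#                    x2'' = -eps*x1 - x2 - gam*x2' (damped coordinate)
M = np.array([[0, 0, 1, 0],
              [0, 0, 0, 1],
              [-1, -eps, 0, 0],
              [-eps, -1, 0, -gam]], float)
vals, V = np.linalg.eig(M)
c = np.linalg.solve(V, [1.0, 0.0, 0.0, 0.0])       # pluck: x1 = 1 at t = 0
t = np.arange(0.0, 2500.0, 0.5)
u = (V @ (np.exp(np.outer(vals, t)) * c[:, None])).real
P = gam * u[3]**2                                  # instantaneous lost power

window = lambda a, b: P[(t >= a) & (t < b)].mean() # several-cycle average
early, peak, late = window(0, 50), window(300, 400), window(2000, 2100)
```

The averaged power grows after the pluck, as energy leaks into the damped coordinate over roughly a quarter of the beat period $2\pi/\sqrt{\epsilon^2-(\gamma/2)^2}$, and only then decays, as described in the text.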
\section{Generic coupling matrix {\bf K}}
The most general possible form of {\bf K} that is close to the identity is
\begin{equation}
\nonumber
\mathbf{K}
=
\begin{pmatrix}
1+\delta & ~ ~ \eta \\
\eta ^* & ~ ~ 1-\delta
\end{pmatrix} ~,
\end{equation}
where $\delta$ is a small detuning of the vertical and horizontal modes and $\eta$ is a small (in general complex) coupling. A non-zero phase or imaginary part of $\eta$ would give the undamped normal modes a phase shift between the vertical and horizontal components. If the system under discussion were, indeed, a string, then that phase shift would give an elliptical motion of the undamped normal modes in the plane transverse to the string.
When this general {\bf K} is used in Eq.~(\ref{plug-in}), Eq.~(\ref{quad-det}) is replaced by
\begin{equation}
\nonumber
(\alpha^2 + 1)^2 \pm ~ i \gamma (\alpha^2 + 1) - \delta^2 - \eta \eta ^* \pm ~ i \delta \gamma \simeq 0 ~,
\end{equation}
whose solutions are
\begin{equation}
\nonumber
\alpha \simeq \pm ~ i \left(1 \pm ~ i \frac{\gamma}{4} \pm ~ \frac{1}{2} \sqrt{\delta^2 + \eta \eta ^* \pm ~ i \delta \gamma - (\frac{\gamma}{2})^2} ~ \right) ~,
\end{equation}
where the signs of the $\gamma$ terms are chosen to match the sign of the leading $\pm i$ (because both are approximations to the same $\alpha$). This choice coincides with the physical condition Re[$\alpha$] $<$ 0 for $\gamma > 0$.
If $\delta$ is negligible, the phase of $\eta$ would only contribute to the phase difference between the components of the eigenvectors, i.e., the analog of Eq.~(\ref{eigenvec}). This is in addition to the phase difference implicit in Eq.~(\ref{eigenvec}) that arises from the simultaneous real coupling and damping. The eigenvalues depend only on $\eta \eta^*$ and would be qualitatively as before (with real $\epsilon$).
If $\eta \eta^*$ were negligible, it would be the simple case of proportional damping. The oscillation frequencies would be $1 \pm \delta /2$, and only the $1 + \delta /2$ mode would decay, with decay constant $\gamma /2$.
If both $\eta \eta^*$ and $i \delta \gamma$ are relevant, then the square root in the above expression for $\alpha$ is generically complex with a $\pm$ coefficient. This means that the most general behavior is that the two non-orthogonal modes have different frequencies and different decay rates. Besides the trivial comment that the magnitudes of the frequency and decay splitting depend on the parameters, it is worth remembering that in practice the amount of time one has to observe the beats and the second decay rate is limited by the decay.
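These approximate eigenvalues can be checked against the exact quartic. The sketch below (assuming the damping $\gamma$ acts on the $1-\delta$ coordinate, and taking $\eta$ real for simplicity) finds the roots of the exact characteristic polynomial numerically, solves the approximate quadratic for $\beta = \alpha^2+1$ on the $\alpha \simeq +i$ branch, and confirms that the generic case has two distinct frequencies and two distinct decay rates.

```python
import numpy as np

delta, eta, gam = 0.02, 0.01, 0.01   # small detuning, coupling, damping

# Exact: det(a^2 I + a diag(0, gam) + K) = 0 expands to
# a^4 + gam a^3 + 2 a^2 + (1 + delta) gam a + 1 - delta^2 - eta^2
exact = np.roots([1, gam, 2, (1 + delta)*gam, 1 - delta**2 - eta**2])

# Approximate quadratic for b = a^2 + 1 on the a ~ +i branch,
# then map back with a = i sqrt(1 - b)
bs = np.roots([1, 1j*gam, -delta**2 - eta**2 + 1j*delta*gam])
approx = 1j*np.sqrt(1 - bs)

err = [np.min(np.abs(a - exact)) for a in approx]
upper = [r for r in exact if r.imag > 0]     # one root per conjugate pair
decay_split = abs(upper[0].real - upper[1].real)
freq_split = abs(upper[0].imag - upper[1].imag)
```

With these parameters both splittings are resolved: the two modes differ in frequency by about two percent and in decay rate by more than an order of magnitude, which is the generic behavior described above.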
\section{Real plucked string sounds}
The subject of what happens in real stringed instruments is a rich one. The simplest features were recognized in antiquity. Details have been the subject of serious research over the past 150 years, and that research continues to this day. Professional journals in which this research is reported include (but are not restricted to) the Journal of the Acoustical Society of America, Acta Acustica united with Acustica, and the Journal of Sound and Vibration. The present discussion has focused on effects that are small compared to the basic oscillatory motion (here normalized to a frequency equal to 1). Furthermore, attention was restricted to effects that are linear in terms of the differential equations. This makes the mathematical exercise of relevance to a great number of other physical problems. However, in terms of the behavior of an actual plucked string, there may be non-linear effects that are small enough to ignore relative to that 1 but are significant in the present context. Many effects have been discussed in the scientific literature. But there is no consensus as to which is ``the correct one.'' In fact, the relative importance of different mechanisms may depend on the kind of instrument and even the individual instrument. The most basic lesson to take away is that, when dealing with the interactions of frequency-degenerate systems, very tiny forces can lead to clearly visible effects if they can act over many cycles of the primary oscillation.
The following are just a few of the possible contributions to the parameters in {\bf K}. One contribution to the ``detuning" $\delta$ might be an approximate description of consequences of the up-down motion of the bridge end of the string being much larger than the horizontal motion. For a particular vertical mode, the bridge end is no longer an actual node, and the effective length of the string is longer than were it a node. A small skewness of the elastic constants relative to vertical/horizontal (i.e., $\epsilon$ or Re[$\eta$]) can arise from imperfections and asymmetries in the materials of the string and its supports, which can rotate the principal axes of the restoring forces relative to exactly parallel and perpendicular to the head. The longitudinal stretching of the string is usually successfully ignored at the level of the 1, but its small effects can be relevant when one looks closer. Sometimes an effort is made to approximate those effects and represent them as contributions to the parameters of the generic linear system. Players of stringed instruments easily notice that the motion in the transverse plane is typically elliptical, and, furthermore, those ellipses typically precess. This would happen even if the only deviation from degeneracy were a detuning of the two components and the initial pluck was not exactly in one of the normal mode directions. A separate issue is whether the undamped normal modes themselves describe elliptical motion, i.e., whether $\eta$ has a phase. This would not arise from a combination of linear springs, but it cannot be ruled out as an effective, approximate description of the combined effects of linear and non-linear dynamics.
Even given the uncertainties described above, it is interesting to look at a case of measured, real string motion where the investigators projected out the time evolution of particular, individual modes.
Measurements of the sounds of plucked banjo strings were published by Moore and Stephey.\cite{moore} For one aspect of their experimental survey, they damped all strings but the first, plucked it, first in the vertical direction (perpendicular to the banjo head) and then in the horizontal. They did the same for the second string. They recorded the sounds and analyzed them into Fourier components. In Fig.~\ref{plucked-strings} (copied from Fig.~5 of Ref. \onlinecite{moore}) the (logarithmic) sound intensities for the first three harmonics of each plucked string are displayed as functions of time.
\begin{figure}[h!]
\centering
\includegraphics[width=2.7in]{moore-stephey.pdf}
\caption{Measured loudness in dB (log scale) {\it vs.} time for string harmonics, copied from Ref.~\onlinecite{moore}}
\label{plucked-strings}
\end{figure}
Each harmonic acts as a separate nearly degenerate, coupled, damped pair. As those authors noted in their paper, evident are single exponential decays, double exponential decays, and decays with prominent beats modulating a single overall decay rate.
The qualitative agreement with the forms of the time dependence predicted by the initial model of {\bf K} with a single, real parameter $\epsilon$ does not rule out the more general form. Rather, it suggests that the region of four dimensional parameter space of the generic linear model populated by the experimentally observed modes is roughly two dimensional and might be well-approximated by an effective $\epsilon$. More measurements would be needed to do better.
With just the naked eye, some of this behavior is typically visible on a stringed instrument. In particular, there is usually at least one string that after a pluck exhibits beats. Instead of decreasing steadily, its amplitude gets smaller and larger again a couple or even several times before it dies completely. (Generally, each maximum is smaller than the previous one.) Identifying double exponential decay is harder, just judging by eye. However, there are typically some pluck responses with a dramatic initial fall off and a surprisingly long tail.
This phenomenon is actually a very important aspect of banjo sound. The banjo is an instrument where the degeneracy is often four- and even six-fold, not just the two of a single string. That is because, as normally tuned and played, the undamped strings are often in unison or share harmonics (e.g., the second harmonic of one string is degenerate with the third harmonic of another). And the design of the bridge facilitates coupling between all of the strings.
1506.07855
\section{Introduction}
In everyday life we often use mechanical devices that combine an oscillatory force with a valve mechanism to generate unidirectional flow, e.g., a simple hand pump. Here the valve breaks space-inversion symmetry and, as a result, time-reversal symmetry, leading to unidirectional flow.
The same basic principle of oscillatory forcing, and time-reversal symmetry breaking leading to directed motion, is utilized in the operation of various molecular motors~\cite{Julicher1997, Reimann2002, Astumian2002} and ion pumps~\cite{Gadsby2009, Astumian2003} in biological cells, and to generate an overall unidirectional flow of electrons in quantum pumps~\cite{Brouwer1998, Sela2006,Cavaliere2009, Strass2005}. However, in all these cases noise, stochastic or quantum, plays an important role in the resultant dynamics. In molecular motors, e.g., repeated hydrolysis of ATP leads to a stochastic oscillatory energy input, and the intrinsic head-tail directionality of the polymeric track on which the motors move breaks the inversion symmetry to act like a valve~\cite{Julicher1997}.
Most theoretical studies of stochastic pumps have discussed properties of noninteracting systems of particles,
apart from a few exceptions~\cite{Derenyi1995,Derenyi1996,Aghababaie1999,savel2004}.
In this paper, we present a stochastic particle pump model in which an external time-varying potential pumps energy, and a spatially varying phase factor of the oscillation breaks time-reversal symmetry to generate a directed current. We particularly focus on the effect of inter-particle interaction on the dynamics.
This interaction may be incorporated via an exclusion process in a spatially discretized version of the dynamics. Two variants of this model have already been proposed and analyzed in some detail~\cite{Jain2007, Marathe2008,Chaudhuri2011}.
Here we present a unified description of the model on a discrete lattice. The model allows for several choices of hopping rates dependent on the instantaneous local potential, where each choice obeys microscopic reversibility. We show that, depending on this choice, one obtains different forms of the density dependence of the average directed current.
\section{Model}
We consider particles hopping on a ring, discretizing space into $s=1,\dots, N$ lattice sites, such that the system size is $L=N$ in units of lattice spacing $b$.
We assume that particles evolve under a position dependent weak oscillatory potential
$\beta V_s = \lambda_s \sin(\Omega t + \phi_s)$, where $\beta=1/k_B T$ with Boltzmann constant $k_B$ and temperature $T$, and $\phi_s$ denotes the local phase factor.
This potential drives the system out of equilibrium.
If the driving frequency $\Omega$ is slow with respect to the diffusion time scale $1/f_0 = b^2/D$, where $D$ denotes the diffusivity,
the system of particles would come to local thermal equilibrium with the instantaneous local potential. We assume microscopic reversibility,
i.e., the hopping rates are such that, given the value of the local potential at any instant of time, the detailed balance condition,
$ n_s w_{s, s+1}= n_{s+1} w_{s+1,s}$, is obeyed. In this relation $n_s$ stands for the occupation number of the $s$-th site and $w_{s, s\pm 1}$ is the time-dependent hopping rate from the $s$-th to the $(s\pm 1)$-th site. At each moment the system tries to reach the equilibrium distribution corresponding to the instantaneous potential, but lags behind as the potential itself changes with time. This keeps the system out of equilibrium. The following three choices of particle hopping rates $w_{s,s\pm 1}$ obey local detailed balance:
(A) $w_{s, s \pm 1} = f_0 \exp[\beta V_s]$, a symmetric hopping rate that depends only on the on-site potential energy~\cite{Jain2007,Marathe2008};
(B) $w_{s, s \pm 1} = f_0 \exp[-\beta(V_{s \pm 1}-V_s)/2]$, depends on relative strength of the potential energies~\cite{Chaudhuri2011};
(C) $w_{s, s \pm 1} = f_0 \exp[-\beta V_{s \pm 1}]$ depends only on the potential energy at the site where the particle hops to.
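That all three choices obey the stated detailed balance condition can be verified in a few lines; the potential snapshot below is an arbitrary illustrative choice.

```python
import math

f0, beta = 0.34, 1.0
V = [0.3, -0.2, 0.5, 0.1]          # arbitrary instantaneous potential values

def rate(model, s, t):
    """Hopping rate from site s to a neighboring site t."""
    if model == "A":
        return f0 * math.exp(beta * V[s])
    if model == "B":
        return f0 * math.exp(-beta * (V[t] - V[s]) / 2)
    if model == "C":
        return f0 * math.exp(-beta * V[t])

# Local detailed balance: w(s -> t) / w(t -> s) = exp(-beta (V_t - V_s))
ok = all(
    abs(rate(m, s, t) / rate(m, t, s) - math.exp(-beta * (V[t] - V[s]))) < 1e-12
    for m in "ABC" for s, t in [(0, 1), (1, 2), (2, 3), (3, 0)]
)
```

The rate ratio, and hence the instantaneous equilibrium state, is the same for all three models; they differ only in the absolute time scales of the hops.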
In this paper we present a {\em unified derivation} of the time averaged DC current obtained for these three cases. While models B and C are able to pump DC current even
in a non-interacting system of particles, in model A interaction is crucial in order to achieve pumping.
We present analytic results using a perturbation theory proposed in Ref.~\cite{Chaudhuri2011}, and compare them with numerical simulations. We present a detailed comparative analysis of the three models of $w_{s, s \pm 1}$ proposed above, two of which (models A and B) were already discussed earlier~\cite{Marathe2008,Chaudhuri2011}, and the third one (model C) being the main new result of this paper.
A hard core repulsion between particles is modeled by an exclusion process, in which two particles cannot occupy the same lattice site. With this restriction, the local density $\rho_s=\langle n_s \rangle$ and two-point correlation functions $C_{s,p} = \langle n_s n_p \rangle$ obey the following dynamics,
\begin{eqnarray}
\frac{d \langle n_s\rangle}{dt} &=& w_{s-1,s} \langle n_{s-1}(1-n_s)\rangle +w_{s+1,s}\langle n_{s+1}(1-n_s)\rangle \crcr
&-& w_{s,s-1}\langle n_s(1-n_{s-1})\rangle - w_{s,s+1}\langle n_s(1-n_{s+1})\rangle.\\
\frac{d \langle n_s n_p\rangle}{dt} &=&
w_{s-1,s} \langle n_{s-1}(1-n_s) n_p\rangle +w_{s+1,s}\langle n_{s+1}(1-n_s) n_p\rangle \crcr
&+& w_{p-1,p} \langle n_s n_{p-1}(1-n_p)\rangle +w_{p+1,p}\langle n_s n_{p+1}(1-n_p)\rangle \crcr
&-& w_{s,s-1}\langle n_s(1-n_{s-1}) n_p\rangle - w_{s,s+1}\langle n_s(1-n_{s+1}) n_p\rangle \crcr
&-& w_{p,p-1}\langle n_s n_p(1-n_{p-1})\rangle - w_{p,p+1}\langle n_s n_p(1-n_{p+1})\rangle \\
\frac{d \langle n_s n_{s+1}\rangle}{dt} &=&
w_{s-1,s} \langle n_{s-1}(1-n_s) n_{s+1}\rangle +w_{s+2,s+1}\langle n_s n_{s+2}(1-n_{s+1})\rangle \crcr
&-& w_{s,s-1}\langle n_s(1-n_{s-1}) n_{s+1}\rangle - w_{s+1,s+2}\langle n_s n_{s+1}(1-n_{s+2})\rangle
\label{eq:motion}
\end{eqnarray}
The last equation is for the special case of nearest neighbor correlations.
Writing $\rho_s = \langle n_s \rangle$ and the multi-point correlations as $C_{s,p,\dots} = \langle n_s n_p \dots \rangle$
we can re-express the above relations as
\begin{eqnarray}
\frac{d \rho_s}{dt} &=& w_{s-1,s} (\rho_{s-1}-C_{s-1,s}) + w_{s+1,s} (\rho_{s+1}-C_{s,s+1}) \crcr
&-& w_{s,s-1}(\rho_s- C_{s-1,s}) - w_{s,s+1} (\rho_s- C_{s,s+1}).\\
\frac{d C_{s,p}}{dt} &=&
w_{s-1,s} (C_{s-1,p} - C_{s-1,s,p}) +w_{s+1,s} (C_{s+1,p}-C_{s, s+1, p}) \crcr
&+& w_{p-1,p} (C_{s,p-1}-C_{s,p-1,p}) +w_{p+1,p} (C_{s,p+1}-C_{s,p,p+1}) \crcr
&-& w_{s,s-1} (C_{s,p} - C_{s-1,s,p}) - w_{s,s+1} (C_{s,p}-C_{s,s+1,p}) \crcr
&-& w_{p,p-1} (C_{s,p} - C_{s,p-1,p}) - w_{p,p+1} (C_{s,p}-C_{s,p,p+1}) \\
\frac{d C_{s,s+1}}{dt} &=&
w_{s-1,s} (C_{s-1,s+1}-C_{s-1,s,s+1}) +w_{s+2,s+1} (C_{s,s+2} - C_{s,s+1,s+2}) \crcr
&-& w_{s,s-1} (C_{s,s+1} - C_{s,s-1,s+1}) - w_{s+1,s+2} (C_{s,s+1} - C_{s,s+1,s+2}).
\end{eqnarray}
Thus the dynamics at each order of correlation depends on correlations of higher order, following a Bogoliubov-Born-Green-Kirkwood-Yvon (BBGKY) hierarchy.
Note that the evolution of local density may be represented as $d\rho_s / dt = J_{s-1,s} - J_{s,s+1}$ where the local current
\begin{equation}
J_{s-1,s} = (w_{s-1,s} \rho_{s-1} - w_{s,s-1} \rho_s) - (w_{s-1,s} - w_{s,s-1} ) C_{s-1,s}
\end{equation}
In the time-periodic steady state, the local current averaged over the period $\tau = 2\pi / \Omega$ is independent of position. Therefore the net time- and space-averaged
directed current is given by
\begin{equation}
\bar J = \frac{1}{N \tau} \sum_{s=1}^N \int_0^\tau dt J_{s-1,s}.
\end{equation}
For potential strength $\lambda_s=0$ at all lattice sites, the above model reduces to the homogeneous symmetric
exclusion process, characterized by the following local density and correlation
functions
\begin{eqnarray}
\bar\rho &=& \frac{n}{L} =\rho, \crcr
C^{(2)} &=& \rho \frac{n-1}{L-1} \crcr
C^{(3)} &=& C^{(2)} \frac{n-2}{L-2}
\end{eqnarray}
etc.~\cite{Schutz2000}, where $n$ is the total number of particles. As we show in the following, the BBGKY hierarchy separates by order if one expands local quantities like $\rho_s$, $C_{s,p}$, etc., in a perturbative expansion around $\lambda_s=0$.
This allows one to obtain exact expressions within the perturbative expansion.
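The combinatorial factors quoted above follow from uniform counting over configurations, and can be confirmed by brute-force enumeration for a small system (the sketch uses $L=6$, $n=3$ as an arbitrary example):

```python
from itertools import combinations

L, n = 6, 3
configs = list(combinations(range(L), n))   # all equally likely configurations
M = len(configs)
rho = n / L
# two- and three-point occupation correlations for fixed distinct sites;
# by symmetry any choice of sites gives the same values
C2 = sum(1 for c in configs if 0 in c and 1 in c) / M
C3 = sum(1 for c in configs if {0, 1, 2} <= set(c)) / M
```

For $L=6$, $n=3$ this gives $C^{(2)} = 0.2$ and $C^{(3)} = 0.05$, matching $\rho(n-1)/(L-1)$ and $C^{(2)}(n-2)/(L-2)$.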
\section{Perturbative calculation}
We consider driving at all the sites with constant potential strength and frequency
\begin{eqnarray}
\beta V_s = \lambda \sin(\Omega t +\phi_s) = \lambda\times u_s
\end{eqnarray}
where
\begin{eqnarray}
u_s &=& 2\mbox{Re}~ [\eta_s e^{i\Omega t}], \crcr
\eta_s &=& -\frac{i}{2} e^{i \phi_s}.
\end{eqnarray}
For small values of $\lambda$ we can linearize the transition rates to obtain:
(A) $w_{s, s\pm 1} = f_0 (1+ \lambda u_s)$,
(B) $w_{s, s\pm 1} = f_0 \{1-\frac{1}{2} \lambda(u_{s\pm 1}-u_s) \}$
(C) $w_{s, s\pm 1}
= f_0 (1- \lambda u_{s\pm 1})$
and the corresponding bond currents are expressed as
(A) $J_{s-1,s} = -f_0 (\rho_s - \rho_{s-1} ) - \lambda f_0 (u_s \rho_s - u_{s-1} \rho_{s-1} ) + \lambda f_0 (u_s - u_{s-1}) C_{s-1,s}$,
(B) $J_{s-1,s} = -f_0 (\rho_s - \rho_{s-1} ) - (\lambda f_0/2) (u_s - u_{s-1} )(\rho_{s-1} + \rho_s - 2 C_{s-1,s})$,
(C) $J_{s-1,s} = -f_0 (\rho_s - \rho_{s-1} ) - \lambda f_0 (u_s \rho_{s-1} - u_{s-1} \rho_{s} ) + \lambda f_0 (u_s - u_{s-1}) C_{s-1,s}$.
Therefore the space-time averaged directed current is
\begin{eqnarray}
\bar J_A &=& \frac{\lambda f_0}{N\tau} \sum_{s=1}^N \int_0^\tau dt \,(u_s-u_{s-1})C_{s-1,s} \nonumber\\
\bar J_B &=& -\frac{\lambda f_0}{2 N \tau} \sum_{s=1}^N \int_0^\tau dt \, (u_s-u_{s-1})(\rho_{s-1}+\rho_s - 2 C_{s-1,s}) \nonumber \\
\bar J_C &=& -\frac{\lambda f_0}{ N\tau} \sum_{s=1}^N \int_0^\tau dt \, [ (u_s \rho_{s-1} - u_{s-1} \rho_{s} ) - (u_s - u_{s-1}) C_{s-1,s}]
\label{eq:JC}
\end{eqnarray}
Note that for non-interacting particles the correlation function $C_{s-1,s} = 0$, which leads to $\bar J_A = 0$. That means that, within model A, the pump
drives particles in an averaged unidirectional fashion only in the presence of interaction -- free particles cannot be pumped within this model~\cite{Marathe2008}. For the
other two models this requirement is absent: even free particles may be pumped in a unidirectional manner.
Let us write down the density and correlation functions as perturbative expansions in the potential strength $\lambda \,(\ll 1)$
\begin{eqnarray}
\rho_s &=& \rho + \sum_{k=1,2,\dots} \lambda^k \rho^{ (k) }_s \crcr
C_{s,p} &=& C^{(2)} + \sum_{k=1,2,\dots} \lambda^k C_{s,p}^{(k)}.
\label{perturb}
\end{eqnarray}
We consider the case where the phase factor of the driving potential is $\phi_s = \phi s$ with $\phi=2\pi/L$, remembering that $L$ is expressed in units of the lattice parameter.
Within the perturbative expansion, the above relations for the directed current may be calculated exactly~\cite{Marathe2008, Chaudhuri2011}. The results for the
first two cases were derived earlier, and the third one is presented in this paper. In what follows we derive all three results. Before deriving
them, however, let us list the expressions for the current corresponding to the three variants of the model,
\begin{eqnarray}
\bar J_A &=& -2 \lambda^2 k_0 f_0^2 \frac{ \Omega \sin \phi (1-\cos \phi)}{\Omega^2 + 4 f_0^2 (1-\cos \phi)^2} \nonumber\\
\bar J_B &=& \lambda^2 (q_0 - 2 k_0) f_0^2 \frac{\Omega \sin \phi (1-\cos \phi)}{\Omega^2 + 4 f_0^2 (1-\cos \phi)^2} \nonumber\\
\bar J_C &=& 2\lambda^2 (q_0 - k_0) f_0^2 \frac{\Omega \sin\phi (1-\cos\phi)}{\Omega^2 + 4f_0^2 (1-\cos\phi)^2 },
\label{eq:JC1}
\end{eqnarray}
with $q_0 = \rho - C^{(2)}$, $k_0 = C^{(2)} - C^{(3)}$.
Note that in the limit of large $n$ and $L$ keeping $\rho = n/L$ constant, $q_0 \approx \rho (1-\rho) $ and $k_0 \approx \rho^2(1-\rho)$.
Thus $\bar J_A \sim \rho^2 (1-\rho)$, and $\bar J_B \sim \rho(1-\rho)(1-2\rho)$, and $\bar J_C \sim \rho(1-\rho)^2$ (See Fig.\ref{fig:Jro}).
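These three closed forms can be tabulated directly. The sketch below evaluates them with the large-system substitutions $q_0 \to \rho(1-\rho)$, $k_0 \to \rho^2(1-\rho)$ and the parameter values used later for the figure:

```python
import numpy as np

lam, f0 = 0.5, 0.34
Omega, phi = 0.2*np.pi, np.pi/2
pref = f0**2 * Omega*np.sin(phi)*(1 - np.cos(phi)) \
       / (Omega**2 + 4*f0**2*(1 - np.cos(phi))**2)

def currents(rho):
    q0, k0 = rho*(1 - rho), rho**2*(1 - rho)   # large n, L limits
    return (-2*lam**2*k0*pref,                 # model A
            lam**2*(q0 - 2*k0)*pref,           # model B
            2*lam**2*(q0 - k0)*pref)           # model C

rhos = np.linspace(0.01, 0.99, 99)
JA, JB, JC = np.array([currents(r) for r in rhos]).T
```

Model B's current is antisymmetric about half filling and vanishes there, while models A and C keep a fixed (and mutually opposite) sign across all densities.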
\begin{figure}[t]
\begin{center}
\includegraphics[width=7.9 cm] {graph_11}
\includegraphics[width=7.9 cm] {graph_22}
\caption{(Color online)
Directed current $\bar J$ as a function of mean density
$\bar\rho$. The points denote Monte-Carlo results and the lines are plots of the functions in Eq.~(\ref{eq:JC1}).
In plotting the functions we replaced $q_0$ by $\rho(1-\rho)$ and $k_0$ by $\rho^2(1-\rho)$.
Left panel: Shows plots of analytic functions for models A and C, and simulation data for model C.
Right panel: Comparison of simulation results of model B with analytic prediction.
The parameters used are system size $L=16$, bare hopping rate due to free particle diffusion $f_0 = 0.34$, potential strength $\lambda=0.5$, frequency of
oscillation $\Omega = 0.2 \pi$ such that the time period $\tau = 10$. We choose phase difference between consecutive lattice points $\phi = \pi/2$. The data (points) were collected over $100\tau$ after
equilibration over $100\tau$. All the data for model C were averaged over $10^5$ initial conditions, and for model B over $10^6$ initial conditions.
}
\label{fig:Jro}
\end{center}
\end{figure}
\subsection{Time evolution and solution}
Let us first consider model C, which constitutes the main new contribution of this paper. Using the perturbative expansion of Eq.(\ref{perturb}), time evolution of the first order perturbations can be written as,
\begin{eqnarray}
\frac{d\rho^{ (1) }_s}{dt} &=& f_0 \Delta_s \rho^{ (1) }_s + f_0 q_0 \Delta_s u_s,
\label{eq:rek} \\
\frac{d C^{ (1) }_{s,p}}{dt} &=& f_0 (\Delta_s + \Delta_p)C^{ (1) }_{s,p} + f_0 k_0 (\Delta_s u_s+ \Delta_p u_p) ~~ {\rm for~} p\neq s \pm 1, \crcr
\frac{d C^{ (1) }_{s,s+1}}{dt} &=& f_0 (C^{ (1) }_{s-1,s+1} + C^{ (1) }_{s,s+2} -2 C^{ (1) }_{s,s+1} )+ f_0 k_0 (u_{s-1} + u_{s+2} - u_{s} -u_{s+1})
\label{eq:cek}
\end{eqnarray}
where $\Delta_s g_{s,p} = g_{s+1,p} + g_{s-1,p} - 2 g_{s,p}$.
Note that the above time evolution of the
first order terms in the perturbative expansion remains the same for all three variants of the model considered above. This is easy to see by comparing with
Refs.~\cite{Marathe2008,Chaudhuri2011}.
These linear differential
equations can be solved exactly to find the long-time limit, a time-periodic steady state~\cite{Chaudhuri2011},
\begin{eqnarray}
\rho^{ (1) }_s(t) = 2 {\rm Re} [A_s^{(1)} \exp(i \Omega t)].
\end{eqnarray}
Using this in Eq.(\ref{eq:rek}) we find, for all $s$,
\begin{equation}
-A_{s-1}^{(1)} - A_{s+1}^{(1)} + (2 + i \Omega/f_0) A_s^{(1)} = - q_0 (-\eta_{s+1} - \eta_{s-1}+2\eta_s).
\end{equation}
Clearly this equation can be written in the operator form,
\begin{equation}
\hat Z \mid A\rangle = -q_0 \hat \Delta \mid \eta\rangle
\end{equation}
with matrix elements
\begin{eqnarray}
\Delta_{s,p} &=& \delta_{s,p+1} +\delta_{s,p-1} - 2 \delta_{s,p}. \crcr
Z_{s,p} &=& \Delta_{s,p} - \frac{i \Omega}{f_0}\delta_{s,p}
\end{eqnarray}
where $\hat\Delta$ satisfies the eigenvalue equation $\hat \Delta | q \rangle = \epsilon_q | q \rangle$ with $\epsilon_q = -2(1-\cos q)$
and the eigenfunctions $\psi_s(q) = \langle s | q\rangle = (1/\sqrt L) \exp(-i q s)$ with $q=2\pi k/L = \phi\, k$, $k=1,2,\dots,N$,
such that $\psi_{s+N}=\psi_s$. Similarly, $\hat Z | q \rangle = (\epsilon_q - i\Omega/f_0) | q \rangle$. Thus the solution may be expressed as
$| A\rangle = -q_0 \hat Z^{-1} \hat \Delta | \eta\rangle$.
In the real-space representation
$\langle s | A\rangle = -q_0 \sum_{q,m}\langle s| \hat Z^{-1} | q\rangle \langle q | \hat \Delta | m\rangle \langle m | \eta\rangle
= -q_0 \sum_{q,m} (\epsilon_q - i\Omega/f_0)^{-1} \langle s\mid q\rangle \epsilon_q \langle q\mid m\rangle \eta_m
= -q_0 \sum_{q,m} (\epsilon_q - i\Omega/f_0)^{-1} \epsilon_q \psi_s(q) \psi^\ast_m(q) \eta_m$. Therefore,
\begin{eqnarray}
A_s^{(1)} &=& -q_0 \sum_{m=1}^N \sum_{k=1}^N \frac{\epsilon_{\phi k}}{\epsilon_{\phi k} - i\Omega/f_0} \psi_s(\phi k) \psi^\ast_m(\phi k) \eta_m \crcr
&=& \frac{i q_0}{2} e^{i\phi s} \frac{\epsilon_{\phi}}{\epsilon_{\phi} - i\Omega/f_0},
\end{eqnarray}
where $\epsilon_\phi = -2 (1-\cos\phi)$.
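This closed form can be cross-checked by solving the linear system directly. The sketch below builds the periodic lattice Laplacian, solves $\hat Z | A\rangle = -q_0 \hat\Delta |\eta\rangle$ numerically, and compares with the expression above ($q_0$ is set to an arbitrary value, since it enters only as a prefactor):

```python
import numpy as np

N, f0, Omega, q0 = 16, 0.34, 0.2*np.pi, 0.2
phi = 2*np.pi/N
s = np.arange(N)
eta = -0.5j*np.exp(1j*phi*s)

# periodic lattice Laplacian Delta, and Z = Delta - i (Omega/f0) identity
D = -2*np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
D[0, -1] = D[-1, 0] = 1
Z = D - 1j*(Omega/f0)*np.eye(N)

A_num = np.linalg.solve(Z, -q0 * (D @ eta))

eps_phi = -2*(1 - np.cos(phi))
A_closed = 0.5j*q0*np.exp(1j*phi*s)*eps_phi/(eps_phi - 1j*Omega/f0)
```

The agreement is exact (to solver precision) because $\eta_s$ is itself an eigenvector of $\hat\Delta$ with eigenvalue $\epsilon_\phi$.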
The equation for two point correlation function can also be solved~\cite{Chaudhuri2011}
\begin{equation}
C^{ (1) }_{s,p} (t) = \frac{k_0}{q_0} [ \rho^{ (1) }_s(t) + \rho^{ (1) }_p(t)] = 2 {\rm Re} [ A^{(1)}_{s,p} e^{i \Omega t}]
\end{equation}
where $A^{(1)}_{s,p} = (k_0/q_0) (A_s^{(1)} + A_p^{(1)} )$.
\subsection{Averaged directed current}
Using the above relations, one can calculate the space-time averaged directed current in the time-periodic steady state for all three variants of the
model, through the relations shown in Eq.(\ref{eq:JC}). For the particular case of model C, using the last relation in Eq.(\ref{eq:JC}), we have
\begin{eqnarray}
\bar J_C &=& - \frac{\lambda^2 f_0}{N \tau} \sum_{s=1}^N \int_0^\tau dt \left[ \left(1-\frac{k_0}{q_0} \right) (u_s \rho^{ (1) }_{s-1} - u_{s-1}\rho^{ (1) }_s) - \frac{k_0}{q_0} (u_s \rho^{ (1) }_s - u_{s-1}\rho^{ (1) }_{s-1})\right] \crcr
&=& - \frac{\lambda^2 f_0}{N } \sum_{s=1}^N 2\, {\rm Re}\left[ \left(1-\frac{k_0}{q_0} \right) (\eta_s^\ast A^{(1)}_{s-1} - \eta^\ast_{s-1} A^{(1)}_s) - \frac{k_0}{q_0} (\eta_s^\ast A^{(1)}_s - \eta^\ast_{s-1} A^{(1)}_{s-1})\right]
\label{JC2}
\end{eqnarray}
where in the last step we have used the fact that, after integration over a period $\tau=2\pi/\Omega$, only the time-independent combinations of the $u_s \rho^{ (1) }_s$ terms, which can be expressed as
$ 2\, {\rm Re} [\eta_s^\ast A_s^{(1)}]$ etc., remain non-zero.
Given that $\eta_s = (-i/2) e^{i\phi s}$, and as we may write $A_s^{(1)} = (i q_0/2) e^{i\phi s} a$ where $a=\epsilon_\phi/[\epsilon_\phi - i\Omega/f_0]$, the terms in Eq.(\ref{JC2}) may be evaluated. We find that
$\eta_s^\ast A^{(1)}_s = - q_0 a/4$, and $\eta^\ast_{s-1} A^{(1)}_{s-1} = - q_0 a/4$. Thus the second term in the parentheses $(\eta_s^\ast A^{(1)}_s - \eta^\ast_{s-1} A^{(1)}_{s-1})=0$. Similarly one can
show that $(\eta_s^\ast A^{(1)}_{s-1} - \eta^\ast_{s-1} A^{(1)}_s) = (q_0/2) \sin\phi\, (i a)$. Thus the terms $2 {\rm Re}(\eta_s^\ast A^{(1)}_{s-1} - \eta^\ast_{s-1} A^{(1)}_s) = q_0 \sin\phi\, {\rm Im}(a)$.
Therefore,
\begin{eqnarray}
\bar J_C &=& - \lambda^2 f_0 \left(1-\frac{k_0}{q_0}\right) \,\,q_0 \sin\phi\, {\rm Im}(a) \crcr
&=& -\lambda^2 (q_0 - k_0) f_0^2 \frac{\Omega \sin\phi \epsilon_\phi}{\Omega^2 + f_0^2 \epsilon_\phi^2 } \crcr
&=& 2\lambda^2 (q_0 - k_0) f_0^2 \frac{\Omega \sin\phi (1-\cos\phi)}{\Omega^2 + 4f_0^2 (1-\cos\phi)^2 },
\end{eqnarray}
where in the last step we used the expression for $\epsilon_\phi$.
In the limit of large system size $q_0-k_0 \approx \rho(1-\rho)^2$, and thus $\bar J_C \sim \rho(1-\rho)^2$.
Similar arguments may be used to derive the results corresponding to models A and B. For example using relations in Eq.(\ref{eq:JC}),
\begin{eqnarray}
\bar J_A &=& \frac{\lambda^2 f_0}{N\tau } \sum_{s=1}^N \int_0^\tau dt \frac{k_0}{q_0} (u_s - u_{s-1}) (\rho^{ (1) }_s + \rho^{ (1) }_{s-1}) \crcr
&=& \frac{\lambda^2 f_0}{N } \frac{k_0}{q_0} \sum_{s=1}^N 2\, {\rm Re} \left[ (\eta_s^\ast - \eta^\ast_{s-1}) (A^{(1)}_s + A^{(1)}_{s-1} )\right].
\end{eqnarray}
One can show that $(\eta_s^\ast - \eta^\ast_{s-1}) (A^{(1)}_s + A^{(1)}_{s-1} ) = (q_0/2) \sin\phi\, (i a)$, and thus $2\, {\rm Re} \left[ (\eta_s^\ast - \eta^\ast_{s-1}) (A^{(1)}_s + A^{(1)}_{s-1} )\right]
= q_0 \sin\phi\, {\rm Im}(a)$. Thus
\begin{eqnarray}
\bar J_A &=& \lambda^2 f_0 \frac{k_0}{q_0} q_0 \sin\phi\, {\rm Im}(a) \crcr
&=& - 2\lambda^2 k_0 f_0^2 \frac{\Omega \sin\phi (1-\cos\phi)}{\Omega^2 + 4f_0^2 (1-\cos\phi)^2 }.
\end{eqnarray}
Again, using Eq.(\ref{eq:JC})
\begin{eqnarray}
\bar J_B &=& - \frac{\lambda^2 f_0}{2 N \tau} \left(1-\frac{2 k_0}{q_0} \right) \sum_{s=1}^N \int_0^\tau dt\, (u_s - u_{s-1}) (\rho^{ (1) }_{s-1}+\rho^{ (1) }_s) \crcr
&=& - \frac{\lambda^2 f_0}{2N} \left(1-\frac{2 k_0}{q_0} \right) \sum_{s=1}^N 2\, {\rm Re} \left[ (\eta_s^\ast - \eta_{s-1}^\ast) (A_{s-1}^{(1)} + A_s^{(1)}) \right].
\end{eqnarray}
The relation $(\eta_s^\ast - \eta_{s-1}^\ast) (A_{s-1}^{(1)} + A_s^{(1)}) = (q_0/2) \sin\phi\, (i a)$ holds here as well, leading to $ 2\, {\rm Re} \left[ (\eta_s^\ast - \eta_{s-1}^\ast) (A_{s-1}^{(1)} + A_s^{(1)}) \right]
= q_0 \sin\phi\, {\rm Im}(a)$. Thus one gets
\begin{eqnarray}
\bar J_B &=& - \frac{\lambda^2 f_0}{2} \left(1-\frac{2 k_0}{q_0} \right) \, q_0 \sin\phi\, {\rm Im}(a) \crcr
&=& \lambda^2 (q_0 - 2 k_0) f_0^2 \frac{\Omega \sin\phi (1-\cos\phi)}{\Omega^2 + 4f_0^2 (1-\cos\phi)^2 }.
\end{eqnarray}
\section{Simulation}
A detailed numerical simulation for model A was presented earlier in Ref.~\cite{Jain2007}.
Here we perform Monte-Carlo simulations of models B and C and present the density dependence of the directed current $\bar J$ in Fig.~\ref{fig:Jro}. In the stochastic simulation,
we randomly choose a lattice site $s$ with uniform probability and perform a trial move with rate $w_{s,s\pm 1}$. The trial move is accepted if the new site $s \pm 1$ is
empty; otherwise it is rejected. A sweep of $n$ trial moves for a system having $n$ particles is considered as one Monte-Carlo step. We use periodic boundary conditions.
Note that in the simulations we do not use the linearized hopping rates of the small-$\lambda$ theory; instead we use the full non-linear forms. In all our simulations we keep $\lambda=0.5$, unlike the
perturbation theory where we assumed $\lambda \ll 1$.
At the time-periodic steady state, the current is measured on each bond and then averaged over all bonds in the system, and several time-periods. We also average over
many initial conditions to obtain better statistics. For details of the parameter values used in simulations, see figure caption of Fig.~\ref{fig:Jro}.
All the simulation data show good agreement with predictions presented in Eq.(\ref{eq:JC1}).
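A minimal version of such a simulation fits in a few dozen lines. The sketch below implements model C with random sequential updates at $\rho = 1/2$ and the figure's parameters; it is far cruder than the averaging described above (a handful of runs rather than $10^5$ initial conditions), so it only reproduces the magnitude of $\bar J_C$ roughly.

```python
import math, random

random.seed(1)
N, f0, beta, lam = 16, 0.34, 1.0, 0.5
Omega, phi = 0.2*math.pi, math.pi/2          # tau = 10, phase step pi/2

def run(burn=200, sweeps=2000):
    occ = [1, 0]*(N//2)                      # n = 8 particles, rho = 1/2
    random.shuffle(occ)
    t, net = 0.0, 0
    for sweep in range(burn + sweeps):
        for _ in range(2*N):                 # 2N trials advance time by 1
            t += 1.0/(2*N)
            s = random.randrange(N)
            d = random.choice((1, -1))
            tgt = (s + d) % N
            if occ[s] == 0 or occ[tgt] == 1:
                continue                     # empty source, or exclusion
            V = lam*math.sin(Omega*t + phi*tgt)
            if random.random() < f0*math.exp(-beta*V):   # model C rate
                occ[s], occ[tgt] = 0, 1
                if sweep >= burn:
                    net += d                 # signed hop count
    return net/(N*sweeps)                    # current per bond per unit time

Jbar = sum(run() for _ in range(10))/10
```

With these parameters the perturbative prediction is $\bar J_C \approx 6\times 10^{-3}$, and the crude average above lands in the same range.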
For all three variants of the model the current vanishes in the $\rho \to 0$ and $\rho \to 1$ (close-packed) limits. The first vanishing is due to the absence of particles to carry current, and the second is due to complete jamming:
if all the lattice sites are occupied, within the discrete-lattice random sequential dynamics, particles cannot move. However, the detailed density dependence of the current $\bar J$ takes three very different
forms for the three variants of the model considered. While for model A, $\bar J_A \sim \rho^2 (1-\rho)$, model B shows a dramatic effect of current reversal with changing density.
For model B, the density dependence is $\bar J_B \sim \rho(1-\rho)(1-2\rho)$, with a new zero of the current appearing at half filling, $\rho = 1/2$. This particular model has a symmetry under the exchange of
particles with holes together with swapping direction from right to left. Thus a phase factor $\phi$ that leads to free particle motion towards the right, which is the dominant mode at low densities, will
lead to {\em free} hole motion to the right at high densities. Therefore, the particle current changes direction between near $\rho=0$ and near $\rho=1$. At $\rho=1/2$ the particle and hole currents cancel each other, leading to $\bar J_B =0$.
We performed simulations for model C as well, and present the numerically obtained $\bar J_C$ in Fig.~\ref{fig:Jro}.
Our simulation results for models B and C agree well with the theoretical predictions (see Fig.~\ref{fig:Jro}). Note that, for model C, the theory
predicts a density dependence $\bar J_C \sim \rho (1-\rho)^2$.
In Fig.~\ref{fig:Jro} we have also plotted the theoretical prediction for model A, for comparison.
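The three mean-field density profiles quoted above can be tabulated directly. This short sketch (overall prefactors dropped) simply exhibits the common zeros at $\rho=0$ and $\rho=1$ and, for model B, the additional zero and sign change at half filling:

```python
import numpy as np

rho = np.linspace(0.0, 1.0, 101)

# mean-field density dependence of the directed current, prefactors dropped
J_A = rho**2 * (1 - rho)               # model A
J_B = rho * (1 - rho) * (1 - 2 * rho)  # model B: extra zero at rho = 1/2
J_C = rho * (1 - rho)**2               # model C

# all three vanish in the empty (rho -> 0) and close-packed (rho -> 1) limits;
# model B reverses sign across rho = 1/2 due to particle-hole symmetry
```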
\section{Outlook}
We presented a discrete pump model in which an external traveling-wave potential leads to average directed motion of particles interacting via an exclusion process.
We discussed three possible choices of potential-dependent hopping rates, all of which obey microscopic time-reversal symmetry.
We studied how the resultant directed current depends on the average density of particles. Using a perturbative expansion
in the strength of the external potential relative to thermal noise, we obtained analytic expressions for the directed current, and compared our results with direct Monte-Carlo simulations, finding good
agreement. While the time evolution of the first-order perturbation in the local density and correlation functions is independent of the specific choice among the three variants of the lattice model discussed here, the expressions for the directed current depend on the choice of local hopping rates.
The dependence of the average directed current on frequency and phase is the same across all three choices of hopping rates [see Eq.(\ref{eq:JC})]. However, the detailed density dependence differs markedly among the three models: model A predicts $\bar J_A \sim \rho^2(1-\rho)$, while models B and C predict $\bar J_B \sim \rho(1-\rho)(1-2\rho)$ and $\bar J_C \sim \rho (1-\rho)^2 $, respectively [mean-field limits of Eq.(\ref{eq:JC})]. The hopping rates chosen for model A fail to generate any directed current in the absence of particle exclusion -- density correlations turn out to be necessary. Models B and C, on the other hand, allow driving of a directed current even for non-interacting particles.
In Ref.~\cite{Chaudhuri2011} we argued that model B is the natural choice if one starts from the corresponding Langevin equation and discretizes its dynamics. This model shows a curious current reversal with increasing particle density.
However, our later studies of the continuum model showed a very different density dependence~\cite{Chaudhuri2014}; in particular, the current reversal predicted by the discrete exclusion process of model B is absent. To discuss the continuum limit of our calculation, let us now use the lattice parameter $b$ explicitly. In the continuum limit, one takes $b/L \to 0$, with packing fraction $\rho b \ll 1$.
Since $b$ also defines the length scale of the inter-particle repulsion, in this limit one obtains results valid for non-interacting continuum dynamics. In the real system, however, space is continuous but the hard-core particles have finite size and repel each other. Thus the continuum limit of the discrete exclusion process used here fails to capture the behavior of hard-core particles moving in continuous space under stochastic thermal forces and a time-oscillatory external potential. A correct discrete model would require the length scale of the exclusion process to be defined as a new variable $\sigma=\nu b$, such that in the continuum limit $b/L \to 0$ with $\nu \to \infty$, keeping $\sigma$ constant.
An experimental realization of the model presented here appears possible, using colloidal particles confined in narrow channels and driven by a traveling-wave potential. This would provide better insight into driven many-body dynamics, and could have potential applications.
\ack
The author thanks Abhishek Dhar for numerous discussions, and for a collaboration on related topics which led to Refs.~\cite{Chaudhuri2011,Chaudhuri2014}.
\vskip 2cm
\bibliographystyle{prsty}
\section{Introduction}
Random Matrix models were originally suggested by Wigner \cite{MR0083848,MR0077805} to model the nuclei of heavy atoms.
The models he originally studied, the Gaussian orthogonal/unitary ensembles were successful in describing the spacing distribution between energy levels. Wigner conjectured that general random matrices will have this same spacing distribution as long as they are in the same symmetry class.
Later, in 1962 \cite{MR0148397}, Dyson interpreted the Gaussian orthogonal/unitary ensembles as the dynamical limit of the matrix-valued Brownian motion, which is given by
\begin{align}\label{e:MatrixDBM}
{\rm d} H(t)={\rm d} B(t)-\frac{1}{2}H(t){\rm d} t,
\end{align}
where $B(t)$ is a Brownian motion on real symmetric/complex Hermitian matrices. It turns out that the eigenvalues of the above matrix-valued Brownian motion satisfy a system of stochastic differential equations. These equations were later generalized to the stochastic differential equations
called the $\beta$-Dyson Brownian motion with potential $V$,
\begin{equation} \label{DBM}
d \lambda_i(t) = \sqrt{\frac{2}{\beta N}} dB_i(t) + \frac{1}{N}\sum_{j:j\neq i} \frac{dt}{\lambda_i(t) -\lambda_j(t)} - \frac{1}{2} V'(\lambda_i(t)) dt,\quad 1\leq i\leq N,
\end{equation}
where the initial data $\{\lambda_1(0),\lambda_2(0),\cdots,\lambda_N(0)\}$ lies in the closure of the Weyl chamber
\begin{equation}
\Delta_N :=\{(x_1,x_2,\cdots,x_N): x_1<x_2<\cdots<x_N\}.
\end{equation}
The real symmetric and complex Hermitian matrix-valued Brownian motions correspond to \eqref{DBM} with $\beta=1$ and $\beta=2$ respectively, and quadratic potential $V(x)=x^2/2$.
Dyson suggested that by evolving a random system stochastically toward one of the standard Gaussian matrix models (depending on the symmetry class), one would reach equilibrium in the microscopic statistics on times of order $O(1/N)$. In fact, one has a hierarchy of three time scales:
\begin{enumerate}
\item For time $t \gg 1$ one should get the global equilibrium, e.g., the global spectral density should approach that for the corresponding Gaussian ensembles. For Dyson Brownian motion with general $\beta$ and potential $V$, this was studied in \cite{GDBM1}.
\item On scales of order $N^{-1}\ll\eta^* \ll 1 $, one should reach the equilibrium after running Dyson Brownian motion for time $t \gg \eta^*$. Namely, mesoscopic quantities of the form $ \sum_{i=1}^{N} f((\lambda_i -E)/
\eta^*)$ for appropriate test functions should be universal.
\item For the microscopic scale, i.e. the scale of order $O(1/N)$, and $\beta=2$, the microscopic eigenvalue distribution should be the same as that of the determinantal point process with the Sine kernel, $K(x,y) = \sin(x-y)/(x-y)$, provided one runs Dyson Brownian motion for $t\gg 1/N$.
\end{enumerate}
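As an illustration of the dynamics behind these regimes, the following is a minimal Euler-Maruyama sketch of \eqref{DBM} with quadratic potential $V(x)=x^2/2$, i.e. $V'(x)=x$. The parameter values are arbitrary, and the time step must be small enough to resolve the typical $O(1/N)$ inter-particle gaps; this is a numerical sketch, not part of the proofs in this paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# Euler-Maruyama discretization of the beta-Dyson Brownian motion
N, beta, dt, steps = 8, 2.0, 1e-4, 2000
lam = np.linspace(-2.0, 2.0, N)      # ordered initial data in the Weyl chamber

for _ in range(steps):
    diff = lam[:, None] - lam[None, :]
    np.fill_diagonal(diff, np.inf)   # exclude the i = j term from the sum
    # drift: (1/N) sum_{j != i} 1/(lam_i - lam_j) - V'(lam_i)/2
    drift = np.sum(1.0 / diff, axis=1) / N - 0.5 * lam
    lam = lam + drift * dt + np.sqrt(2.0 * dt / (beta * N)) * rng.standard_normal(N)
    lam.sort()                       # guard against rare level crossings of the discretization
```

The logarithmic repulsion between particles is visible in the drift term: neighboring particles push each other apart, which is the mechanism behind the fast local relaxation described in item (3).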
The understanding of the local ergodicity of Dyson Brownian motion, i.e. the fact that the local statistics of Dyson Brownian motion reach equilibrium in short time, plays an important role in the proof of Wigner's original universality conjecture by Erd{\H o}s, Schlein and Yau \cite{MR2919197}. Their method for proving universality for matrix models first involves proving rigidity estimates for the eigenvalues, i.e. that the eigenvalues are close to their classical locations, up to an optimal scale. This provides the initial data for a Dyson Brownian motion which interpolates between the initial model and the Gaussian orthogonal/unitary ensembles. The second step is to show that the local statistics of Dyson Brownian motion reach equilibrium in a short time. Since Dyson Brownian motion needs to be run only for a short time, the initial and final models can then be compared. The last step compares the original random matrices with ones having a small Gaussian component. For a good review of the general framework of this type of analysis, see the book by Erd{\H o}s and Yau \cite{YauErdosRMT}.
Of the three steps described above, the least robust is the proof of the rigidity estimates. This part is very model specific and, depending on the model in question, requires significant effort to prove optimal estimates. Even in the most basic case of Wigner matrices, the concentration of the trace of the resolvent requires very precise cancellations in the form of what is known as the \emph{fluctuation averaging lemma} \cite{MR2871147}. The proof of this type of cancellation uses very delicate combinatorial expansions involving iterated applications of the resolvent identity. For models more complicated than Wigner matrices, proving such lemmas is an intricate effort.
A more general method that does not involve delving into the particulars of a model would be desirable; then we would be able to treat a general class of models uniformly.
A dynamical approach to proving rigidity using Dyson Brownian motion allows us to avoid technical issues related to the particulars of a matrix model. It avoids complicated combinatorial analysis and, in addition, allows us to treat models that do not come with an associated matrix structure, such as the $\beta$-ensembles. In an earlier paper by B. Landon and the second author \cite{HL}, rigidity estimates were proved for the bulk eigenvalues of Dyson Brownian motion. As a result, the optimal rigidity estimates are purely a consequence of the dynamics. The proof of rigidity is based on a comparison, via the method of characteristics, between the empirical eigenvalue process of Dyson Brownian motion and the deterministic measure-valued process obtained as the solution of the associated McKean-Vlasov equation. The difference of the corresponding Stieltjes transforms can be analyzed by estimates of Gronwall type.
There are substantial difficulties involved in comparing the solutions of Dyson Brownian motion and the associated McKean-Vlasov equation near the edge. In the bulk, one can derive sufficiently strong estimates from the distance of the characteristics to the real line, thanks to strong bounds on the imaginary part of the Stieltjes transform there. Near the spectral edge, these bounds degrade and become too weak to prove optimal rigidity. In our case, we have to establish an equation governing the motion of the characteristics relative to the edge. The estimates of the Stieltjes transform of the empirical particle density near the edge depend heavily on this relative motion. The equation for the edge allows us to understand explicitly how the eigenvalues move from their initial positions to the optimal region.
\begin{comment}
We also observe that since the second step of the two part strategy mentioned earlier also involves a running of the DBM once rigidity is established, we see that we can essentially combine the two steps into one and establish the fact that universality is a process that is , in principle, merely a consequence of running the DBM. Near the edge, this establishes Dyson's original vision of the universality of microscopic eigenvalue spacings through Dyson Brownian Motion.
\end{comment}
In addition to the rigidity estimates, another main innovation of this paper is the determination of the correlation kernel for the Stieltjes transform of the empirical particle density of Dyson Brownian motion at mesoscopic scales near the edge. This allows us to prove a mesoscopic central limit theorem near the edge.
The mesoscopic central limit theorem in the bulk was proven for Wigner matrices in \cite{MR1678012,MR1689027, mesoCLT1,mesoCLT3}, for $\beta$-ensembles in \cite{mesoCLT1}, and for Dyson Brownian motion in \cite{HL,mesoCLTDBM,fix}.
As far as we know, the mesoscopic central limit theorem near the edge is new even for Wigner matrices and $\beta$-ensembles.
The dynamical method provides a unified approach to see how it emerges naturally, and allows us to see the universality of this correlation kernel.
Combined with \cite{LandonEdge}, our rigidity estimates give a proof of the local ergodicity of Dyson Brownian motion at the edge for general $\beta$ and potential, i.e. the distribution of the extreme particles converges to the Tracy-Widom $\beta$ distribution in short time. Our proof uses only the dynamics and is independent of the matrix models. This is in alignment with Dyson's original vision of the universality of local eigenvalue statistics. A consequence of our edge universality result is a purely dynamical proof of edge universality for $\beta$-ensembles with general potential.
\begin{comment}
Previous work studying the behavior of the $\beta$-Dyson Brownian Motion considered only the cases $\beta=2$ or in very special particular cases of the potential $V$.
When the potential $V$ is 0 or otherwise $quadratic$ the solution of the McKean-Vlasov is known in closed form as the free convolution of with the semicircle distribution. Properties such as the square root behavior for the Green's function for solution of the McKean-Vlasov equation can be determined by carefully analyzing the properties of this free convolution. The issue when studying the Dyson Brownian motion with general V is that there is no longer an explicit formula to the solution of the equation. Determining properties of the solution to the McKean-Vlasov equation can then only be done by carefully analyzing the characteristics of the equation; however, square root behavior is a property that depends very strongly on fine cancelations. These cancelations are implicitly contained in explicit formulas, such as in the free convolution formula in the case $V=0$ but are not clearly seen in more general cases.
In order to circumvent this problem we do not try to compare the $\beta$-Dyson Brownian motion with initial data $\mu_0$ with the solution of the McKean-Vlasov equation with initial data $\mu_0$. Rather, we try to compare the solution of the $\beta$-Dyson Brownian motion with an 'analytic approximation' to the initial data $\hat{\mu}_)$. Studying the comparison with the analytic approximation is the first major novelty of this paper. We decompose the initial data into the form $\hat{m}_0= A_0 + \sqrt{B_0}$, where $A_0$ and $B_0$ are analytic functions; many common limiting eigenvalue distributions of Random Matrices have this property, which is called square root behavior. We can decompose the McKean-Vlasov equation to a coupled system of differential equation for the analytic functions $A_t$ and $B_t$ where $\hat{m}_t= A_t + \sqrt{B_t}$. We can further use analyticity to turn this system into an infinite system of differential equations, one for each coefficient in the analytic expansion of $A_t(z)$ and $B_t(z)$ in a neighborhood about the initial edge $E_0$. Proving the existence of this coupled system of differential equations allows us to prove square root behavior around the edge is maintained in short time.
The second main contribution of this paper is a fine analysis of the behavior of the characteristics near the edge. In earlier works, the only quantity that was important in proving bounds was the growth of the imaginary part along characteristics. However, the growth of the imaginary part along characteristics is no longer large enough to be able to prove edge rigidity results. In order to improve these bounds, one must actually look at the growth of the distance between the real part of our point and the edge along characteristics. This paper provides bounds for this growth that we can then use in our analysis to prove edge rigidity bounds. Finding an equation and getting bounds for the growth of the real part along characteristics and then using these growth bounds to prove rigidity estimates is the second main contribution of this paper.
\begin{equation} \label{DBM}
{\rm d} \lambda_i(t) = \sqrt{\frac{2}{\beta N}} {\rm d} B_i(t) +\frac{1}{N}\sum_{j:j\neq i}\frac{{\rm d} t}{\lambda_i(t)-\lambda_j(t)}-\frac{1}{2}V'(\lambda_i(t)){\rm d} t,\quad i=1,2,\cdots, N,
\end{equation}
where $(B_1, \cdots, B_N)$ is an $N$-dimensional Brownian motion defined on a probability space with a filtration $\mathscr F=\{\mathscr F_t, t\geq 0\}$. The initial data ${\bm \lambda}(0)=(\lambda_1(0),\lambda_2(0)\cdots, \lambda_N(0))\in \overline{\Delta_N}$ is given by the eigenvalues of $H(0)$. Here, $\Delta_N$ denotes the Weyl chamber
\begin{equation}
\Delta_N=\{\{x_i\}_{1\leq i\leq N}\in {\mathbb R}^N: x_1<x_2<\cdots <x_{N}\}.
\end{equation}
The process $\bm\lambda(t)=(\lambda_1(t),\lambda_2(t),\cdots, \lambda_N(t))$ defined by the stochastic differential equation system \eqref{DBM} is called the $\beta$-Dyson Brownian motion ($\beta$-DBM) with potential $V$, which is an interacting particle system with Hamiltonian of the form
\begin{equation}
H(x_1,\cdots, x_N)\deq -\frac{1}{2N}\sum_{1\leq i\neq j\leq N}\log |x_i-x_j|+\frac{1}{2}\sum_{i=1}^{N}V(x_i).
\end{equation}
\end{comment}
\subsection{Related Results in the Literature}
Results for the McKean-Vlasov equation were first established by Chan \cite{MR1176727} and Rogers and Shi \cite{MR1217451}, who showed the existence of a solution for quadratic potentials $V$.
The McKean-Vlasov equation for general potentials $V$ was studied in detail in the works of Li, Li and Xie. In \cite{GDBM1} and \cite{GDBM2}, it was shown that under very weak conditions on $V$, the solution of the McKean-Vlasov equation converges, at times $t \gg 1$, to an equilibrium distribution depending on the parameters $\beta$ and $V$. The authors were able to interpret the time evolution under the McKean-Vlasov equation as a form of gradient descent on the space of measures. This gives a complete description of Dyson Brownian motion at the macroscopic scale.
At the microscopic scale, Dyson Brownian motion was studied in detail by Erd{\H o}s, Yau and various coauthors across a multitude of papers \cite{MR3098073,MR2662426,MR2661171,MR2639734,MR2481753,MR2537522,MR2810797,MR3372074,MR2905803,MR2871147}. Specifically, from these works it is known that for the classical ensembles $\beta=1,2, 4$ with quadratic potential, and initial data given by the eigenvalues of a Wigner matrix, after $t \gg N^{-1}$ the local statistics of the particles are the same as those of the corresponding classical Gaussian ensembles. Subsequently, the two works \cite{kevin3,Landon2016} established gap universality for the classical $\beta=1,2,4$ Dyson Brownian motion with general initial data, by using estimates from a discrete De Giorgi-Nash-Moser theorem in \cite{MR3372074}. Fixed energy universality required a sophisticated homogenization argument allowing a comparison between the discrete equation and a continuous version; these results were established in the recent papers \cite{fixedBourgade,fix}. An extension of this interpolation at the edge was shown in \cite{LandonEdge}. These results were a key step in the proofs of edge and bulk universality in various models. An alternative approach to universality was developed, independently, in the works of Tao and Vu \cite{MR2784665}.
In the three-step strategy for proving universality developed by Erd{\H o}s, Yau and their collaborators, the first step is to derive a local law for the eigenvalue density. This is a very technical and highly model-dependent procedure. In the case of Wigner matrices, such proofs were established in \cite{MR2481753,MR2537522,MR2981427,MR2871147,MR3109424}.
Local laws can be established for other matrix models in the bulk, such as sparse random matrices \cite{MR3098073} and deformed Wigner matrices \cite{MR3502606}. Establishing local laws near the edge is generally more involved; the case of correlated matrices was treated in \cite{ArkaZiliang,AltEdge1,ErdosEdge1}. Local laws for $\beta$-ensembles near the edge were considered in \cite{MR3253704}, with a discrete analogue in \cite{HG}; Wigner matrices were considered in \cite{MR3161313}.
\begin{comment}
Central Limit Theorems for mesoscopic statistics in the bulk were shown in a series of papers \cite{MR1678012,MR1689027,mesoCLT3,mesoCLT1}. These formulas have also been shown for various models in which one has either an explicit matrix model or a formula. With the Brezin-Hikami formula at $\beta=2$ at quadratic potential, mesoscopic linear statistics were shown in \cite{mesoCLTDBM}, where they showed convergence at scale $\eta$ for times $t\gg \eta$. Mesoscopic statistics were also shown for the classical $\beta=1,2,4$ ensembles in \cite{fix}. These proofs all require very particular strcuture of the family. A dyanamical approach to mesoscopic statistics was used in \cite{HL}; the estimates near the edge are more involved and, as mentioned before, have been performed for the first time here.
\end{comment}
\section{Background}
In this section, we provide the basic definitions and assumptions of our study of the $\beta$-Dyson Brownian motion and the associated McKean-Vlasov equation. The section culminates in the analysis of solutions of the McKean-Vlasov equation via the method of characteristics, and in the proof of several important inequalities on the growth of the solution in time $t$ and on the behavior of its characteristics $z_t(u)$. These bounds provide the basis for our later rigidity estimates for the $\beta$-Dyson Brownian motion near the edge. To keep the argument clean, we make the following assumption on the potential $V$. We believe the main results of this paper hold for $V$ in $C^4$, as in \cite{HL}.
\begin{assumption}\label{a:asumpV}
We assume that the potential $V$ is an analytic function.
\end{assumption}
We denote $M_1({\mathbb R})$ as the space of probability measures on ${\mathbb R}$ and equip this space with the weak topology. We fix a sufficiently small time $T>0$ and denote by $C([0,T], M_1({\mathbb R}))$ the space of continuous processes on $[0,T]$ taking values in $M_1({\mathbb R})$. It follows from \cite{GDBM1} that for all $\beta\geq 1$ and initial data $\bm\lambda(0)\in \overline{\Delta_N}$, there exists a strong solution $(\bm \lambda(t))_{0\leq t\leq T}\in C([0,T],\overline{\Delta_N})$ to the stochastic differential equation \eqref{DBM}.
We recall the following estimates on the locations of extreme particles of $\beta$-Dyson Brownian motion from \cite[Proposition 2.5]{HL}.
\begin{proposition}\label{normbound}
Suppose $V$ satisfies Assumption \ref{a:asumpV}. Let $\beta\geq 1$, and $\bm \lambda(0)\in \overline{\Delta_N}$. Let ${\frak a}$ be a constant such that the initial data $\|\bm \lambda(0)\|_\infty\leq {\frak a}$. Then for a sufficiently small time $T>0$, there exists a finite constant ${\frak b}={\frak b}({\frak a}, T )$, such that for any $0\leq t\leq T$, the unique strong solution of \eqref{DBM} satisfies:
\begin{equation}\label{e:normbound}
\mathbb{P}(\max\{|\lambda_1(t)|,|\lambda_N(t)|\}\geq {\frak b})\leq e^{-N}.
\end{equation}
\end{proposition}
\begin{comment}
{\color{red}
We denote,
\begin{equation}
\partial_z=\frac{1}{2}(\partial_x-\mathrm{i} \partial_y),\quad \partial_{\bar z}=\frac{1}{2}(\partial_x+\mathrm{i} \partial_y).
\end{equation}
}
\end{comment}
Given a probability measure $\hat \rho_0$, we define the measure-valued process $\{\hat \rho_t\}_{t\geq 0}$ as the solution of the following equation
\begin{align}\label{eq:dm0}\begin{split}
\partial_t \hat{m}_t(z)=\partial_z \hat{m}_t(z)\left(\hat{m}_t(z)+\frac{V'(z)}{2}\right)+\frac{\hat{m}_t(z) V''(z)}{2}+\int_{{\mathbb R}}g(z,x) {\rm d} \hat{\rho}_t(x),
\end{split}
\end{align}
where
\begin{equation} \label{def:gzx}
g(z,x)\deq\frac{V'(x)-V'(z)-(x-z)V''(z)}{2(x-z)^2},\quad g(x,x)\deq\frac{V'''(x)}{4}.
\end{equation}
It is easy to see that for any fixed $z$, $g(z,x)$ is analytic in ${\mathbb C}$ as a function of $x$; for any fixed $x$, $g(z,x)$ is analytic in ${\mathbb C}$ as a function of $z$.
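The removable singularity of $g$ at $x=z$, with diagonal value $V'''(x)/4$, can be checked symbolically; the quartic potential below is just a sample choice for illustration.

```python
import sympy as sp

x, z = sp.symbols('x z')
V = x**4 / 4                      # sample analytic potential (illustration only)
Vp = sp.diff(V, x)                # V'(x)
Vpz = Vp.subs(x, z)               # V'(z)
Vppz = sp.diff(Vpz, z)            # V''(z)

# g(z,x) from the definition above; the apparent double pole at x = z is removable
g = (Vp - Vpz - (x - z) * Vppz) / (2 * (x - z)**2)
limit_at_diagonal = sp.limit(g, x, z)   # should equal V'''(z)/4
```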
We analyze \eqref{eq:dm0} by the method of characteristics. Let
\begin{equation} \label{def:zt}
\partial_t z_t(u)=-\hat m_t(z_t(u))-\frac{V'(z_t(u))}{2},\qquad z_0=u\in {\mathbb C}_+.
\end{equation}
If the context is clear, we omit the parameter $u$, i.e., we simply write $z_t$ instead of $z_t(u)$. Plugging \eqref{def:zt} into \eqref{eq:dm0}, and applying the chain rule we obtain
\begin{equation} \label{e:mtzt}
\partial_t \hat m_t(z_t(u))=\frac{\hat m_t(z_t(u)) V''(z_t(u))}{2}+\int_{{\mathbb R}} g(z_t(u),x) {\rm d} \hat\rho_t(x).
\end{equation}
The behaviors of $z_s$ and $\hat{m}_s(z_s)$ are governed by the system of equations \eqref{def:zt} and \eqref{e:mtzt}.
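As a sanity check of this system, note that for $V=0$ the right-hand side of \eqref{e:mtzt} vanishes, so $\hat m$ is constant along characteristics and \eqref{def:zt} integrates exactly to $z_t(u)=u-t\,\hat m_0(u)$. Using the known fact that the $V=0$ flow of the semicircle law remains semicircular with variance $1+t$, this can be verified numerically:

```python
import numpy as np

def m_semicircle(z, var):
    # Stieltjes transform m(z) = int rho(x) dx / (x - z) of the semicircle
    # law of variance `var`; branch chosen so that Im m > 0 in the upper half plane
    s = np.sqrt(z * z - 4.0 * var)
    if ((-z + s) / (2.0 * var)).imag < 0:
        s = -s
    return (-z + s) / (2.0 * var)

u, t = 1.0 + 0.5j, 0.3
m0 = m_semicircle(u, 1.0)        # initial Stieltjes transform at u
z_t = u - t * m0                 # characteristic of (def:zt) with V = 0
# m is constant along the characteristic, and the time-t density is the
# semicircle law of variance 1 + t, so the two evaluations must agree
err = abs(m_semicircle(z_t, 1.0 + t) - m0)
```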
{ As a consequence of Proposition \ref{normbound}, if the probability measure $\hat\rho_0$ is supported on $[-{\frak a}, {\frak a}]$, then there exists a finite constant ${\frak b}={\frak b}({\frak a}, T)$, such that $\hat\rho_t$ are supported on $[-{\frak b}, {\frak b}]$ for $0\leq t\leq T$. We fix a large constant ${\frak r}$. If $z_t(u)\in {\mathbb B}_{2{\frak r}}(0)$ and $\dist(z_t(u),[-{\frak b},{\frak b}])\geq 1$, then
\begin{align}\label{def:fr}
|\partial_t z_t|\leq 1+\frac{1}{2}\sup_{z\in {\mathbb B}_{2{\frak r}}(0)}|V'(z)|.
\end{align}
Therefore, for any $u\in {\mathbb B}_{\frak r}(0)$, we have $z_t(u)\in {\mathbb B}_{2{\frak r}}(0)$ for any $0\leq t\leq T$, provided $T$ is small enough.
}
We also frequently use the following estimates studying the imaginary part of characteristics. They were proven in \cite[Proposition 2.7]{HL}.
\begin{proposition} \label{prop:ImEst}
Suppose $V$ satisfies Assumption \ref{a:asumpV}. Let $\beta\geq 1$ and $\bm \lambda(0)\in \overline{\Delta_N}$. Fix a large constant ${\frak r}>0$. Then for a sufficiently small time $T>0$, there exists a constant $C$ depending on the potential $V$ and ${\frak a}$, such that the following holds: for any $0\leq s\leq t\leq T$ and $u\in {\mathbb B}_{{\frak r}}(0)$ with $\Im[z_t(u)]>0$,
\begin{align}\begin{split}
& e^{-C(t-s)} \Im[z_t] \leq \Im[z_s],\\
& e^{-C(t-s)}\Im[\hat m_t(z_t)] \leq \Im[\hat m_s(z_s)] \leq e^{C(t-s)} \Im[\hat m_t(z_t)], \\
& e^{-C(t-s)}\left(\Im[z_t]+(t-s)\Im[\hat m_t(z_t)]\right) \leq \Im[z_s] \leq e^{C(t-s)}\left(\Im[z_t]+(t-s)\Im[\hat m_t(z_t)]\right),\\
& e^{-C(t-s)}\left(\Im[z_t]\Im[\hat m_t(z_t)]+(t-s)\Im[\hat m_t(z_t)]^2\right) \leq \Im[z_s]\Im[\hat m_s(z_s)].
\end{split}\end{align}
\end{proposition}
\section{Square Root Behavior Measures}
In the earlier work \cite{HL}, the bulk rigidity of $\beta$-Dyson Brownian motion was proved via a comparison of the empirical density $\rho_t$ with $\tilde \rho_t$, the solution of the associated McKean-Vlasov equation with initial data $\rho_0$. This is not a good choice for studying the spectral edge. In most applications, we take $\rho_0$ to be the empirical eigenvalue density of a random matrix, which is itself random. As a consequence, the solution $\tilde\rho_t$ of the associated McKean-Vlasov equation with initial data $\rho_0$ is again a random measure. Even with good control on the difference between $\rho_t$ and $\tilde \rho_t$, we cannot locate the extreme eigenvalues unless we have very precise control of $\tilde \rho_t$. Unfortunately, edge universality concerns exactly the locations of the extreme eigenvalues.
In order to circumvent this problem, we compare the empirical density $\rho_t$ with $\hat \rho_t$, the solution of the associated McKean-Vlasov equation with initial data $\hat\rho_0$ close to $\rho_0$. In most applications, we take $\hat\rho_0$ to be either the semi-circle distribution,
\begin{align}\label{e:semicircle}
\rho_{\rm sc}(x)=\frac{\sqrt{[4-x^2]_+}}{2\pi},
\end{align}
or the Kesten-McKay distribution,
\begin{align}\label{e:km}
\rho_{d}(x)=\left(1+\frac{1}{d-1}-\frac{x^2}{d}\right)^{-1}\frac{\sqrt{[4-x^2]_+}}{2\pi}.
\end{align}
As one can see from the expressions for the semi-circle distribution \eqref{e:semicircle} and the Kesten-McKay distribution \eqref{e:km}, both have square root behavior at the spectral edge. It is believed that square root behavior is necessary for edge universality. For the remainder of the paper, we assume that the initial measure $\hat\rho_0$ has square root behavior in the following sense.
\begin{definition}\label{d:stable}
We say a probability measure $\hat\rho_0$ has square root behavior at $E_0$ if the measure is supported in $(-\infty, E_0]$ and, in addition, there is some neighborhood $\mathcal{N}$ around $E_0$ such that its Stieltjes transform satisfies
\begin{align}\label{e:holom0}
\hat m_0(z)=A_0(z)+\sqrt{B_0(z)},
\end{align}
with $A_0(z)$ and $B_0(z)$ analytic in $\mathcal{N}$ and with $z=E_0$ a simple root of $B_0(z)$.
\end{definition}
\begin{remark}\label{r:Imm0}
If $\hat \rho_0$ has square root behavior at the right edge $E_0$, then for any $z=E_0+\kappa+\mathrm{i}\eta$ with $\eta>0$, it is easy to check that
\begin{align}
\Im[\hat m_0(z)]\asymp \Im\left[\sqrt{z-E_0}\right]\asymp \left\{
\begin{array}{cc}
\sqrt{|\kappa|+\eta}, & \kappa\leq 0\\
\eta/\sqrt{|\kappa|+\eta}, &\kappa\geq 0.
\end{array}
\right.
\end{align}
\end{remark}
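The two regimes of this asymptotic can be checked numerically; the sample points below are arbitrary, and the assertion only tests that the ratio to the claimed scale stays bounded above and below, as $\asymp$ requires.

```python
import numpy as np

def im_sqrt(kappa, eta):
    # Im sqrt(z - E_0) with z = E_0 + kappa + i*eta, eta > 0
    return np.sqrt(complex(kappa, eta)).imag

def claimed_scale(kappa, eta):
    # the two regimes of the remark above
    if kappa <= 0:
        return np.sqrt(abs(kappa) + eta)
    return eta / np.sqrt(kappa + eta)

# ratios should stay within constant factors over a wide range of (kappa, eta)
ratios = [im_sqrt(k, e) / claimed_scale(k, e)
          for k in (-1e-2, -1e-4, 1e-4, 1e-2)
          for e in (1e-6, 1e-4, 1e-2)]
```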
The Stieltjes transforms of semi-circle distribution and Kesten-McKay distribution are given by
\begin{align}\begin{split}
&m_{\rm sc}(z)=\int_{{\mathbb R}}\frac{\rho_{\rm sc}(x){\rm d} x}{x-z}=-\frac{z}{2}+\frac{\sqrt{z^2-4}}{2}, \\
&m_{d}(z)=\int_{{\mathbb R}}\frac{\rho_{d}(x){\rm d} x}{x-z}=\left(1+\frac{1}{d-1}-\frac{z^2}{d}\right)^{-1}\left(-\frac{(d-2)z}{2d}+\frac{\sqrt{z^2-4}}{2}\right).
\end{split}\end{align}
They both have square root behavior in the sense of Definition \ref{d:stable}. More generally, we have the following proposition.
\begin{proposition}
If the measure $\hat \rho_0$ has an analytic density, also denoted $\hat\rho_0$, in a small neighborhood of $E_0$, given by
\begin{align}
\hat\rho_0(x)=S(x)\sqrt{[E_0-x]_+}, \quad E_0-\varepsilon \leq x\leq E_0+\varepsilon,
\end{align}
where $S(x)>0$ is analytic on $[E_0-\varepsilon,E_0+\varepsilon]$, then $\hat\rho_0$ has square root behavior in the sense of Definition \ref{d:stable}.
\end{proposition}
One important consequence of our definition of square root behavior is the following proposition, which shows that square root behavior propagates in time under the McKean-Vlasov equation. We postpone its proof to Appendix \ref{a:TimStab}.
\begin{proposition}\label{prop:TimStab}
Let $\hat\rho_0$ be a probability measure which has square root behavior at the right edge $E_0$ in the sense of Definition \ref{d:stable}. Fix a sufficiently small time $T>0$, and let $(\hat \rho_t)_{t\in [0,T]}$ be the solution of the McKean-Vlasov equation \eqref{eq:dm0} with initial data $\hat\rho_0$. Then the measures $\hat{\rho}_t$ have square root behavior at the right edge $E_t$ for any $0\leq t\leq T$. The edge $E_t$ satisfies
\begin{align}\label{e:edgeEqn}
\partial_t E_t = -\hat m_t(E_t)-\frac{V'(E_t)}{2},
\end{align}
and it is Lipschitz in time, $|E_t - E_s| = O(|t-s|)$ for $0\leq s\leq t\leq T$.
As a consequence, $\hat \rho_t$ has a density in the neighborhood of $E_t$, given by
\begin{align}\label{e:ConstDef}
\hat\rho_t(x)=(1+\oo(1))C_t\sqrt{[E_t-x]_+},\quad E_t-\varepsilon\leq x\leq E_t+\varepsilon.
\end{align}
The constants $C_t$ are Lipschitz in time, $|C_t - C_s| = O(|t-s|)$, for $0\leq s\leq t\leq T$.
\end{proposition}
The following proposition studies the growth of the distance between the real part of the characteristics $z_t(u)$ and the edge $E_t$. It is the main tool we use to obtain strong bounds on $|m_t - \hat{m}_t|$ close to the edge, and it serves as one of our fundamental inequalities in the next section. The square root behavior of the measures $\hat\rho_t$ is used crucially to derive the evolution equation for $E_t$ and to estimate the Stieltjes transform.
\begin{proposition}\label{p:gap}
Let $\hat \rho_0$ be a probability measure having square root behavior in the sense of Definition \ref{d:stable}. Fix small $\varepsilon>0$ and a sufficiently small time $T>0$, and let $(\hat \rho_t)_{t\in [0,T]}$ be the solution of the McKean-Vlasov equation \eqref{eq:dm0} with initial data $\hat\rho_0$.
Suppose that at some time $t\ll 1$ the characteristic satisfies $z_t(u)=E_t+\kappa_t+\mathrm{i}\eta_t$ with $0<\eta_t,\kappa_t\leq \varepsilon$. Then there exists a universal constant $C$ such that for any $0\leq s\leq t$,
\begin{align}\label{e:gap}
\sqrt{\kappa_s}\geq \sqrt{\kappa_t}+C(t-s).
\end{align}
\end{proposition}
\begin{proof}
We denote $z_s(u)=E_s+\kappa_s+\mathrm{i}\eta_s$ for $0\leq s\leq t$. Thanks to \eqref{def:zt} and Proposition \ref{prop:TimStab}, if $z_s(u)\in {\mathbb B}_{2\varepsilon}(E_s)$, then there exists some universal constant $C$ such that $|\partial_s z_s(u)|\leq C$. If we take $T$ sufficiently small, we have $z_s(u)\in {\mathbb B}_{2\varepsilon}(E_s)$ for any $0\leq s \leq t$. In the following we prove that if $\kappa_s\geq 0$, then $\partial_s\kappa_s\leq -C\sqrt{\kappa_s}$ for some universal constant $C$. The claim \eqref{e:gap} then follows by integrating from $s$ to $t$, and we have $\kappa_s\geq 0$ for all $0\leq s\leq t$.
We recall the differential equation \eqref{e:edgeEqn} for the edge $E_s$
\begin{align}\label{e:edgeEqncopy}
\partial_s E_s=-\hat m_s(E_s)-\frac{V'(E_s)}{2}.
\end{align}
Taking the real part of \eqref{def:zt} and subtracting \eqref{e:edgeEqncopy}, we obtain
\begin{align}\label{e:diff}
\partial_s \kappa_s
=-\left(\Re[\hat m_s(z_s(u))]-\hat m_s(E_s)\right)
-\frac{\Re[V'(z_s(u))]-V'(E_s)}{2}.
\end{align}
For the first term in \eqref{e:diff}, we have
\begin{align}\begin{split}\label{e:firstterm}
&\phantom{{}={}}\Re[\hat m_s(z_s(u))]-\hat m_s(E_s)
=(\Re[\hat m_s(z_s(u))]-\hat m_s(\Re[z_s(u)]))+(\hat m_s(\Re[z_s(u)])-\hat m_s(E_s))\\
&=\eta_s^2\int\frac{{\rm d} \hat \rho_s(x)}{(\Re[z_s(u)]-x)((\Re[z_s(u)]-x)^2+\eta_s^2)}
+\kappa_s\int\frac{{\rm d} \hat \rho_s(x)}{(\Re[z_s(u)]-x)(E_s-x)}.
\end{split}\end{align}
The purpose of the above decomposition is to write the Stieltjes transforms in a form where the corresponding integral expressions are easy to compare. From the integral expressions, we can compute the leading order behavior in terms of $\kappa_s$ and $\eta_s$. Thanks to Proposition \ref{prop:TimStab}, $\hat \rho_s$ has square root behavior. From Remark \ref{r:Imm0}, we have ${\rm d} \hat\rho_s(x)/{\rm d} x\asymp \sqrt{E_s-x}$ in a neighborhood of $E_s$, and we can estimate \eqref{e:firstterm} by
\begin{align}
\Re[\hat m_s(z_s(u))]-\hat m_s(E_s)
\geq C\left(\frac{\eta_s^2}{(\kappa_s+\eta_s)^{3/2}}+{\sqrt{\kappa_s}}\right),
\end{align}
where $C>0$ is some universal constant. For the second term in \eqref{e:diff}, we have
\begin{align}\begin{split}\label{e:secondterm}
\Re[V'(z_s(u))]-V'(E_s)
&=(\Re[V'(z_s(u))]-V'(\Re[z_s(u)]))+(V'(\Re[z_s(u)])-V'(E_s))\\
&\geq -C(\eta_s^2+\kappa_s).
\end{split}\end{align}
Uniformly for $0\leq s\leq t$, we have $\kappa_s,\eta_s\leq 2\varepsilon$. Taking $\varepsilon$ sufficiently small and combining \eqref{e:firstterm} and \eqref{e:secondterm}, it follows that there exists some constant $C>0$ such that
\begin{align}
\partial_s \kappa_s\leq -C\sqrt{\kappa_s},
\end{align}
and the claim \eqref{e:gap} follows.
\end{proof}
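The integration step at the end of the proof can be illustrated numerically: for the extremal solution of $\partial_s\kappa_s=-c\sqrt{\kappa_s}$ one has $\sqrt{\kappa_s}=\sqrt{\kappa_t}+(c/2)(t-s)$, which is the form of \eqref{e:gap} with the factor $1/2$ absorbed into the universal constant. The constants in the sketch below are illustrative only.

```python
import numpy as np

# Extremal case of the comparison argument: kappa' = -c*sqrt(kappa) has
# the exact solution kappa(s) = (sqrt(kappa0) - c*s/2)^2 while the root
# stays nonnegative, so sqrt(kappa) decreases linearly at rate c/2.
# All constants are illustrative, not the ones in the proposition.
c, kappa0, t = 1.0, 0.04, 0.3   # c*t/2 = 0.15 < sqrt(kappa0) = 0.2

def kappa(s):
    # exact solution of kappa' = -c*sqrt(kappa), kappa(0) = kappa0
    return (np.sqrt(kappa0) - c * s / 2) ** 2

# the gap inequality sqrt(kappa_s) >= sqrt(kappa_t) + (c/2)(t - s)
for s in np.linspace(0.0, t, 50):
    assert np.sqrt(kappa(s)) >= np.sqrt(kappa(t)) + (c / 2) * (t - s) - 1e-12
```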
\section{Rigidity Estimates}
We prove our edge rigidity estimates in this section. Roughly speaking, if the initial data is regular on the scale $\eta^*$, then the optimal rigidity holds for times $t\geq C\sqrt{\eta^*}$, provided $C$ is large enough. We fix a small number $r>0$ and the control parameter $M=(\log N)^{12}$.
\begin{assumption}\label{a:initial}
Let $\bm{\lambda}(0)=(\lambda_1(0),\lambda_2(0),\cdots,\lambda_N(0))\in \overline{\Delta_N}$, and $\|\bm \lambda(0)\|_\infty\leq {\frak a}$ for some constant ${\frak a}$.
We assume that the initial empirical density
\begin{align}
\rho_0=\frac{1}{N}\sum_{i=1}^{N}\delta_{\lambda_i(0)}
\end{align}
satisfies
\begin{enumerate}
\item $\lambda_1(0)\leq E_0+\eta^*$.
\item There exists a measure $\hat\rho_0$, with square root behavior as defined in Definition \ref{d:stable} such that we have the estimate
\begin{align}
|m_0(z) - \hat{m}_0(z)|\leq \frac{M}{N \eta}, \quad z\in \cal D_0^{\rm in},
\end{align}
and
\begin{align}
|m_0(z) - \hat{m}_0(z)|\leq \frac{1}{M}\frac{1}{N \eta}, \quad z\in \cal D_0^{\rm out},
\end{align}
and
\begin{align}
|m_0(z) - \hat{m}_0(z)|\leq \frac{M}{N }, \quad z\in \cal D_0^{\rm far},
\end{align}
where $m_0(z)$ and $\hat m_0(z)$ are the Stieltjes transforms of $\rho_0$ and $\hat\rho_0$ respectively, and the domains $\cal D_0^{\rm in}$, $\cal D_0^{\rm out}$ and $\cal D_0^{\rm far}$ are given by
\begin{align}
\begin{split}
&\cal D_0^{\rm in}\deq\left\{z\in {\mathbb C}^+\cap {\mathbb B}_{E_0}(r): \Im[z]\Im[\hat m_0(z)]\geq (\eta^*)^{3/2} \right\},\\
&\cal D_0^{\rm out}\deq\{z\in {\mathbb C}^+\cap {\mathbb B}_{E_0}(r): \Re[z]\geq E_0+\eta^* \},\\
& \cal D_0^{\rm far}\deq\{z\in {\mathbb C}^+: {\frak r}-1\leq \dist(z,\supp\hat \rho_0)\leq {\frak r}+1\},
\end{split}\end{align}
where ${\frak r}$ is a large constant as defined in \eqref{def:fr}, and ${\mathbb B}_{E_0}(r)$ is the radius $r$ disk centered at $E_0$.
\end{enumerate}
\end{assumption}
\begin{remark}
We remark here that it is essential to control the difference of $m_0$ and $\hat{m}_0$ far away from the support of $\hat\rho_0$, i.e. on $\cal D_0^{\rm far}$. The effect of the potential $V$ is a long range interaction that would cause two solutions to diverge if we had no control in this region. To see this effect, one should notice that when comparing the linear statistics of two measures, the difference changes by no more than a constant factor.
\end{remark}
We define the following function
\begin{align}\label{e:deff}
f(t)=\left(\max\left\{\sqrt{\eta^*}-{\frak c} t, M N^{-1/3}\right\} \right)^2,
\end{align}
where the small constant ${\frak c}>0$ will be chosen later.
It holds that $f(0)=\eta^*$, and $f$ has behavior similar to that of the real part of the characteristics in \eqref{e:gap}, i.e. it satisfies $\sqrt{f(s)}\leq \sqrt{f(t)}+{\frak c}(t-s)$ for any $0\leq s\leq t$. We use this function to interpolate between the weak eigenvalue rigidity at the edge at time $0$ and the stronger eigenvalue rigidity at time $t$.
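The two stated properties of $f$ (the value at time $0$ and the Lipschitz bound on $\sqrt{f}$) follow from the fact that $x\mapsto\max\{x,b\}$ is $1$-Lipschitz; a small numerical check, with illustrative values for $N$, $M$, $\eta^*$ and ${\frak c}$:

```python
import numpy as np

# Check of the two stated properties of f(t) = (max(sqrt(eta*) - c*t,
# M*N^{-1/3}))^2: f(0) = eta*, and sqrt(f(s)) <= sqrt(f(t)) + c*(t-s)
# for s <= t.  All parameter values below are illustrative.
N, M = 10**9, 1.0
eta_star, c = 1e-2, 0.5
floor = M * N ** (-1.0 / 3.0)          # = 1e-3 < sqrt(eta*) = 0.1

def sqrt_f(t):
    return max(np.sqrt(eta_star) - c * t, floor)

assert abs(sqrt_f(0.0) ** 2 - eta_star) < 1e-15   # f(0) = eta*

rng = np.random.default_rng(0)
for _ in range(1000):
    s, t = np.sort(rng.uniform(0.0, 1.0, size=2))
    # max(., floor) is 1-Lipschitz, so the decrement is at most c*(t-s)
    assert sqrt_f(s) <= sqrt_f(t) + c * (t - s) + 1e-12
```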
\begin{theorem} \label{Thm:EdgeRidgity}
Suppose $V$ satisfies Assumption \ref{a:asumpV} and the initial data $\bm{\lambda}(0)$ satisfies Assumption \ref{a:initial}. For time $T=(\log N)^{-3}$, with high probability under the Dyson Brownian motion \eqref{DBM}, we have $\lambda_1(t) \leq E_t + f(t)$ for $t \in [0,T]$.
\end{theorem}
We define the spectral domains $\cal D_t$. Roughly speaking, the behavior of the Stieltjes transform $m_t(z)$ on $\cal D_t$ reflects the regularity of the empirical particle density $\rho_t$ on the scale $f(t)$.
\begin{definition}
For any $t\geq 0$, we define the region $\mathcal{D}_t= \cal D_t^{\rm in}\cup \cal D_t^{\rm out}\cup \cal D_t^{\rm far}$, where
\begin{align}\begin{split}\label{e:defcDt}
&\mathcal{D}_t^{\rm in} \deq
\{z\in {\mathbb C}^+\cap {\mathbb B}_{E_t}(r-t/{\frak c}): \Im[z]\Im[\hat m_t(z)]\geq f(t)^{3/2} \},\\
&\mathcal{D}_t^{\rm out} \deq\left\{z\in {\mathbb C}^+\cap {\mathbb B}_{E_t}(r-t/{\frak c}): \Re[z]\geq E_t+f(t)\right\},
\\
&\mathcal{D}_t^{\rm far}\deq \{z\in {\mathbb C}^+: {\frak r}-1+t/{\frak c}\leq \dist(z,\supp\hat \rho_t)\leq {\frak r}+1-t/{\frak c}\},\\
\end{split}\end{align}
\end{definition}
For any $0\leq s\leq t$, the spectral domain $\cal D_t$ is mapped into the domain $\cal D_s$ under the characteristic flow.
\begin{proposition}\label{p:domainic}
Suppose $V$ satisfies Assumption \ref{a:asumpV} and the initial data $\bm{\lambda}(0)$ satisfies Assumption \ref{a:initial}. For any $0\leq s\leq t\ll r$, we have
\begin{align}
z_s\circ z_t^{-1}(\cal D_t)\subset \cal D_s,
\end{align}
provided $N$ is large enough.
\end{proposition}
\begin{proof}
By integrating \eqref{def:zt}, we get that $|z_t-z_0|=\OO(t)$ for any $z_t\in \cal D_t$. It follows that $z_t^{-1}(\cal D_t^{\rm far})\subset \cal D_0^{\rm far}$, and $z_t^{-1}({\mathbb C}^+\cap{\mathbb B}_{E_t}(r-t/{\frak c}))\subset {\mathbb C}^+\cap{\mathbb B}_{E_0}(r)$, provided that ${\frak c}$ is small enough.
For any $z_t\in \{z\in {\mathbb C}^+\cap {\mathbb B}_{E_t}(r-t/{\frak c}): \Re[z]\geq E_t+f(t) \}$, let $z_s=E_s+\kappa_s+\mathrm{i}\eta_s$ for $0\leq s\leq t$. By the definition of $f(t)$ in \eqref{e:deff}, we have
\begin{align}\label{e:ksbound0}
\sqrt{f(s)}\leq {\frak c}(t-s)+\sqrt{f(t)}= {\frak c}(t-s)+\sqrt{\kappa_t}\leq \sqrt{\kappa_s}-{\frak c}(t-s),
\end{align}
provided that ${\frak c}\leq C/2$, where $C$ is the constant in \eqref{e:gap}. We can rearrange \eqref{e:ksbound0} to get
\begin{align}\label{e:ksbound}
\kappa_s\geq f(s)+{\frak c}\sqrt{\kappa_s}(t-s).
\end{align}
As a consequence, we have $\Re[ z_0]\geq E_0+f(0)=E_0+\eta^*$ and $z_t^{-1}(\cal D_t^{\rm out})\subset \cal D_0^{\rm out}$.
Thanks to Proposition \ref{prop:ImEst}, if $z_t\in \cal D_t^{\rm in}$ and $\Re[z_t]\leq E_t+f(t)$, we have
\begin{align}\begin{split}
\Im[z_0]\Im[\hat m_0(z_0)]
&\geq e^{-tC}(\Im[z_t]\Im[\hat m_t(z_t)]+t\Im[\hat m_t(z_t)]^2)\\
&\geq e^{-tC} (f(t)^{3/2}+t\Im[\hat m_t(z_t)]^2)\geq (\eta^*)^{3/2},
\end{split}\end{align}
provided $t\leq \sqrt{\eta^*}/(3{\frak c})$ or $\Re[z_t]\leq E_t-\eta^*$. For $t\geq \sqrt{\eta^*}/(3{\frak c})$, we in fact have $z_t^{-1}({\mathbb C}^+\cap{\mathbb B}_{E_t}(\eta^*))\subset \cal D_0^{\rm out}$. We prove this by contradiction. Suppose there exists some $z_t\in {\mathbb C}^+\cap{\mathbb B}_{E_t}(\eta^*)$ such that $\Re[z_0]\leq E_0+\eta^*$.
By our assumption that $\hat\rho_t$ has square root behavior, we have $\Im[\hat m_t(z_t)]\leq C\sqrt{\eta^*}$.
Thanks to Proposition \ref{prop:ImEst}, we have $\Im[z_0]\geq e^{-tC}(\Im[z_t]+t\Im[\hat m_t(z_t)])\geq e^{-tC}t\Im[\hat m_t(z_t)]$, and thus
\begin{align}\begin{split}
\Im[\hat m_t(z_t)]\geq e^{-Ct}\Im[\hat m_0(z_0)]\geq
\frac{e^{-Ct}}{C}\frac{\Im[z_0]}{\sqrt{\eta^*+\Im[z_0]}}\geq\frac{e^{-2Ct}}{C} \frac{t\Im[\hat m_t(z_t)]}{\sqrt{\eta^*+e^{-tC}t\Im[\hat m_t(z_t)]}},
\end{split}
\end{align}
which is impossible if $t\geq \sqrt{\eta^*}/(3{\frak c})$ and ${\frak c}$ is sufficiently small. This finishes the proof of Proposition \ref{p:domainic}.
\end{proof}
The following proposition gives the optimal bulk estimate of $m_t$, i.e. on the spectral domain $\cal D_t^{\rm in}\cup \cal D_t^{\rm far}$.
\begin{proposition}
\label{p:rigidity}
Suppose $V$ satisfies Assumption \ref{a:asumpV}. Fix time $T=(\log N)^{-3}$, and let the initial data $\bm{\lambda}(0)$ satisfy Assumption \ref{a:initial}. Then there exists a set $\Omega$ that occurs with overwhelming probability, on which the following estimates hold uniformly for any $0\leq t\leq T$ and $w\in \dom_t^{\rm in}\cup \dom_t^{\rm far}$: if $w\in \cal D_t^{\rm in}$,
\begin{equation} \label{e:diffmm}
|m_t(w)- \hat m_t(w)|\leq \frac{M}{N\Im[w]},
\end{equation}
if $w\in \cal D_t^{\rm far}$,
\begin{equation} \label{e:diffmm2}
|m_t(w)- \hat m_t(w)|\leq \frac{M}{N}.
\end{equation}
\end{proposition}
The proof of Proposition \ref{p:rigidity} follows the same argument as \cite[Theorem 3.1]{HL}, with two modifications. Firstly, when we use Gronwall's inequality, we need to take care of the error from the initial data, i.e. $m_0(z)-\hat m_0(z)\neq 0$. This is where our Assumption \ref{a:initial} comes into play. Secondly, we estimate the error term involving the potential $V$ using a contour integral.
\begin{proof}[Proof of Proposition \ref{p:rigidity}]
By It\^o's formula, $m_s(z)$ satisfies the stochastic differential equation
\begin{align}\begin{split}\label{eq:dm}
{\rm d} m_s(z)= -&\sqrt{\frac{2}{\beta N^3}}\sum_{i=1}^N \frac{{\rm d} B_i(s)}{(\lambda_i(s)-z)^2}+ m_s(z)\partial_z m_s(z){\rm d} s \\
+&\frac{1}{2N}\sum_{i=1}^{N}\frac{V'(\lambda_i(s))}{(\lambda_i(s)-z)^2}{\rm d} s+\frac{2-\beta}{\beta N^2}\sum_{i=1}^{N}\frac{{\rm d} s}{(\lambda_i(s)-z)^3}.
\end{split}\end{align}
We can rewrite \eqref{eq:dm} as
\begin{align}\label{eq:dm3}\begin{split}
{\rm d} m_s(z)=-&\sqrt{\frac{2}{\beta N^3}}\sum_{i=1}^N \frac{{\rm d} B_i(s)}{(\lambda_i(s)-z)^2}+\partial_z m_s(z)\left( m_s(z)+\frac{V'(z)}{2}\right){\rm d} s+\frac{ m_s(z)\partial_z V'(z)}{2}{\rm d} s\\+& \int_{{\mathbb R}} g(z,x) {\rm d} \rho_s(x){\rm d} s
+\frac{2-\beta}{\beta N^2}\sum_{i=1}^{N}\frac{{\rm d} s}{(\lambda_i(s)-z)^3},
\end{split}
\end{align}
where $g(z,x)$ is defined in \eqref{def:gzx}.
Plugging \eqref{def:zt} into \eqref{eq:dm3}, and by the chain rule, we have
\begin{align}\label{e:tdmzt}\begin{split}
{\rm d} m_s(z_s)=-&\sqrt{\frac{2}{\beta N^3}}\sum_{i=1}^N \frac{{\rm d} B_i(s)}{(\lambda_i(s)-z_s)^2}+\partial_z m_s(z_s)\left(m_s(z_s)-\hat{m}_s(z_s)\right){\rm d} s+\frac{ m_s(z_s)V''(z_s)}{2}{\rm d} s\\
+ &
\int_{{\mathbb R}} g(z_s,x) {\rm d} \rho_s(x){\rm d} s
+\frac{2-\beta}{\beta N^2}\sum_{i=1}^{N}\frac{{\rm d} s}{(\lambda_i(s)-z_s)^3}.
\end{split}
\end{align}
It follows by taking the difference of \eqref{e:mtzt} and \eqref{e:tdmzt} that,
\begin{align}\label{e:diffm}\begin{split}
{\rm d} (m_s(z_s)-\hat{m}_s(z_s))=-&\sqrt{\frac{2}{\beta N^3}}\sum_{i=1}^N \frac{{\rm d} B_i(s)}{(\lambda_i(s)-z_s)^2}+\left( m_s(z_s)-\hat{m}_s(z_s)\right)\partial_z \left( m_s(z_s)+\frac{V'(z_s)}{2}\right){\rm d} s\\
+&
\int_{\mathbb R} g(z_s,x) ({\rm d} \rho_s(x)-{\rm d} \hat\rho_s(x)){\rm d} s
+\frac{2-\beta}{\beta N^2}\sum_{i=1}^{N}\frac{{\rm d} s}{(\lambda_i(s)-z_s)^3}.
\end{split}
\end{align}
We can integrate both sides of \eqref{e:diffm} from $0$ to $t$ and obtain
\begin{equation} \label{eq:mzt}
m_t(z_t)- \hat{m}_t(z_t)=\int_0^t \left( {\cal E}_1(s){\rm d} s+{\rm d} {\cal E}_2(s) \right) +(m_0(z_0) - \hat{m}_0(z_0)),
\end{equation}
where the error terms are
\begin{align}
\label{defcE1}{\cal E}_1(s)=&\left(m_s(z_s)-\hat{m}_s(z_s)\right)\partial_z \left( m_s(z_s)+\frac{V'(z_s)}{2}\right)+
\int_{\mathbb R} g(z_s,x) ({\rm d} \rho_s(x)-{\rm d} \hat\rho_s(x)),
\\
\label{defcE2}{\rm d} {\cal E}_2(s)=&\frac{2-\beta}{\beta N^2}\sum_{i=1}^{N}\frac{{\rm d} s}{(\lambda_i(s)-z_s)^3}-\sqrt{\frac{2}{\beta N^3}}\sum_{i=1}^N \frac{{{\rm d}} B_i(s)}{(\lambda_i(s)-z_s)^2}.
\end{align}
We remark that $\cal E_1$ and $\cal E_2$ implicitly depend on $u$, the initial value of the flow $z_s(u)$. The local law will eventually follow from an application of Gronwall's inequality to \eqref{eq:mzt}.
We define the following lattice on the upper half plane ${\mathbb C}^+$,
\begin{equation}\label{def:L}
\cal L=\left\{E+\mathrm{i} \eta\in \dom_0^{\rm in}\cup \dom_0^{\rm out}\cup \dom_0^{\rm far}: E\in \mathbb{Z}/ N^{3}, \eta\in \mathbb{Z}/N^{3}\right\}.
\end{equation}
It follows from Proposition \ref{p:domainic} that $z_t^{-1}(\cal D_t)\subset\dom_0^{\rm in}\cup \dom_0^{\rm out}\cup \dom_0^{\rm far}$, and for any $w\in \dom_t$, there exists some lattice point $u\in \cal L\cap z_t^{-1}(\dom_t)$ such that
$
|z_t(u)-w|=\OO(N^{-3}).
$
We define the stopping time
\begin{align}\begin{split}\label{stoptime}
\sigma\deq
T
&\bigwedge
\inf_{s\geq0}\left\{\|\bm{\lambda}(s)\|_{\infty}\geq {\frak b}\right\}\\
&\bigwedge
\inf_{s\geq0}\left\{\exists w\in \cal D_s^{\rm in }: \left|m_s(w)-\hat m_s(w)\right|\geq \frac{M}{N\Im[w]}\right\}\\
&\bigwedge\inf_{s\geq0}\left\{\exists w\in \cal D_s^{\rm far}: \left|m_s(w)-\hat m_s(w)\right|\geq \frac{M}{N}\right\}.
\end{split}\end{align}
By the same argument as in \cite[Proposition 3.8]{HL}, using Burkholder-Davis-Gundy inequality, there exists a set $\Omega$ of Brownian paths $\{B_1(s), B_2(s), \cdots, B_N(s)\}_{0\leq s\leq t}$, such that for any $0\leq s\leq t$, and $u\in {\cal L}\cap z_t^{-1}(\cal D_t^{\rm in})$,
\begin{align}\begin{split}\label{e:continuityarg}
\left|\int_0^{s\wedge \sigma} {\rm d} {\cal E}_2(s)\right|
\leq \frac{(\log N)^2}{N\Im[z_{s\wedge\sigma}(u)]},
\end{split}
\end{align}
and for $u\in {\cal L}\cap z_t^{-1}(\cal D_t^{\rm far})$,
\begin{align}\begin{split}\label{e:continuityarg2}
\left|\int_0^{s\wedge \sigma} {\rm d} {\cal E}_2(s)\right|
\leq \frac{(\log N)^2}{N}.
\end{split}
\end{align}
For the last term in \eqref{defcE1}, we rewrite it as a contour integral and bound it simply by its absolute value.
\begin{proposition} \label{prop:HFbound}
Under the assumptions of Theorem \ref{Thm:EdgeRidgity}, for any $u \in z_t^{-1}(\cal D_t)$ and $s\in [0,t]$, we have
\begin{equation}\label{e:HFbound}
\left|\int_{{\mathbb R}} g(z_{s \wedge \sigma}(u) ,x) ({\rm d} \rho_{s\wedge \sigma}(x)-{\rm d} \hat\rho_{s\wedge \sigma}(x)) \right| =\OO\left( \frac{M}{N}\right).
\end{equation}
\end{proposition}
\begin{proof}
From our choice of the stopping time \eqref{stoptime}, both $\rho_{s\wedge \sigma}$ and $\hat \rho_{s\wedge \sigma}$ are supported on $[-{\frak b}, {\frak b}]$. Moreover, since $g(z_{s\wedge \sigma}(u), x)$ is analytic in $x$, we can rewrite the integral in \eqref{e:HFbound} as a contour integral
\begin{align}
\int_{{\mathbb R}} g(z_{s \wedge \sigma}(u) ,x) ({\rm d} \rho_{s\wedge \sigma}(x)-{\rm d} \hat\rho_{s\wedge \sigma}(x)) =
-\frac{1}{2\pi \mathrm{i}}\oint_{\mathcal{C}_s} g(z_{s \wedge \sigma}(u) ,w) \left({m}_{s \wedge \sigma}(w) - \hat m_{s\wedge \sigma}(w)\right) {\rm d} w,
\end{align}
where $\mathcal{C}_s$ is a contour of distance ${\frak r}$ away from the support of $\hat{\rho}_s$. Thanks to our definition of $\cal D_s$ in \eqref{e:defcDt}, we have ${\cal C}_s\subset \cal D_s$.
The above contour integral can be bounded as
\begin{align}\begin{split}
&\phantom{{}={}}\left|\oint_{\mathcal{C}_s} g(z_{s \wedge \sigma} ,w) \left({m}_{s \wedge \sigma}(w) -\hat m_{s\wedge \sigma}(w)\right) {\rm d} w\right| \\
& \leq {\rm length}({\cal C}_s)\sup_{w\in \mathcal{C}_s}|g(z_{s \wedge \sigma} ,w)| |{m}_{s \wedge \sigma}(w) - \hat m_{s\wedge \sigma}(w)| =\OO\left( \frac{M}{N}\right),
\end{split}\end{align}
where we use the fact that $g$ is bounded on the contour $\mathcal{C}_s$, the length of $\mathcal{C}_s$ is bounded, and we have rigidity along the contour $\mathcal{C}_s$.
\end{proof}
We plug \eqref{e:HFbound} and \eqref{e:continuityarg} into \eqref{eq:mzt}. On the event $\Omega$, for $u\in {\cal L} \cap z_t^{-1}(\cal D_t^{\rm in})$, we have
\begin{align}\begin{split}
\left|m_{t\wedge\sigma}(z_{t\wedge\sigma})-\hat m_{t\wedge \sigma}(z_{t\wedge \sigma})\right|
&\leq \left|m_0(z_0)-\hat m_0(z_0)\right|+\OO\left(\frac{({t\wedge\sigma})M}{N}
+\frac{(\log N)^2}{N\eta_{t\wedge \sigma}}\right)\\
&+\int_0^{t\wedge\sigma}\left|\hat{m}_{s}(z_s)-m_s(z_s)\right|\left|\partial_z \left( m_s(z_s)+\frac{V'(z_s)}{2}\right)\right|{\rm d} s.
\end{split}\end{align}
By Gronwall's inequality and the same argument as in \cite{HL}, we get
\begin{align}
|m_{t\wedge\sigma}(z_{t\wedge\sigma})-\hat m_{t\wedge \sigma}(z_{t\wedge \sigma})|\leq \frac{\oo(M)}{N\eta_{t\wedge\sigma}},
\end{align}
provided that $t\leq T=(\log N)^{-3}$. Similarly, for $u\in {\cal L}\cap z_t^{-1}(\cal D_t^{\rm far})$, we have
\begin{align}
|m_{t\wedge\sigma}(z_{t\wedge\sigma})-\hat m_{t\wedge \sigma}(z_{t\wedge \sigma})|\leq \frac{\oo(M)}{N},
\end{align}
provided that $t\leq T=(\log N)^{-3}$.
Thus with high probability we have $\sigma=T$, and Proposition \ref{p:rigidity} follows.
\end{proof}
\begin{proof}[Proof of Theorem \ref{Thm:EdgeRidgity}]
Theorem \ref{Thm:EdgeRidgity} follows from a very precise estimate of the Stieltjes transform. More precisely, it follows from the estimate
\begin{align}\label{e:mtbound}
|m_t(E_t+\kappa+\mathrm{i} \eta)-\hat m_t(E_t+\kappa+\mathrm{i} \eta)|\ll \frac{1}{N\eta},
\end{align}
where $\kappa\geq M^2 N^{-2/3}$ and $\eta=M^{-1/3}\kappa^{1/4}N^{-1/2}\geq M^{1/6}N^{-2/3}$, that there is no particle in the interval $[E_t+\kappa-\eta, E_t+\kappa+\eta]$. Indeed, thanks to Assumption \ref{a:initial} and Proposition \ref{prop:TimStab}, $\hat\rho_0$ and $\hat \rho_t$ have square root behavior, so $\Im[\hat m_t(E_t+\kappa+\mathrm{i} \eta)]\asymp \eta/\sqrt{\kappa+\eta}\ll 1/(N\eta)$.
It then follows that
\begin{align}\label{e:sumerror}
\Im[m_t(E_t+\kappa+\mathrm{i} \eta)]=\frac{1}{N}\sum_{i=1}^N \frac{\eta}{(\lambda_i(t)-E_t-\kappa)^2+\eta^2}\ll \frac{1}{N\eta}.
\end{align}
If there exists some $\lambda_i(t)$ such that $|\lambda_i(t)-E_t-\kappa|\leq \eta$, then the right-hand side of \eqref{e:sumerror} is at least $1/(2N\eta)$. This leads to a contradiction.
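The detection mechanism in this step can be seen in a toy computation: a single particle within distance $\eta$ of $E_t+\kappa$ forces $\Im[m_t]\geq 1/(2N\eta)$, so the bound $\Im[m_t]\ll 1/(N\eta)$ excludes such a particle. The configuration below is synthetic.

```python
import numpy as np

# Toy version of the particle-detection step: one eigenvalue planted
# within eta of the point E + kappa contributes at least
# (1/N) * eta/(2*eta^2) = 1/(2*N*eta) to Im[m].  All numbers are synthetic.
N, eta = 1000, 1e-3
point = 2.0                                # stands for E_t + kappa
rng = np.random.default_rng(1)
lambdas = rng.uniform(-2.0, 1.5, size=N)   # background spectrum
lambdas[0] = point + 0.5 * eta             # planted particle within eta

im_m = np.mean(eta / ((lambdas - point) ** 2 + eta**2))
assert im_m >= 1.0 / (2 * N * eta)
```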
In the following, we use a stopping time argument to prove estimates like \eqref{e:mtbound}. We let $t_i=i/N$ for $i\leq \lceil TN\rceil$, and let $\{z_s(u_i)\}_{0\leq s\leq t_i}$ denote the characteristic flow starting at $u_i$ such that at time $t_i$, $z_{t_i}(u_i)=E_{t_i} + f(t_i)+ \mathrm{i} M^{-1/3}f(t_i)^{1/4}N^{-1/2}$.
Thanks to \eqref{e:gap}, for $0\leq t\leq t_i$, we have
\begin{align}\label{e:real}
\Re[z_t(u_i)]-E_t\geq \left(\sqrt{f(t_i)}+{\frak c}(t_i-t)\right)^2.
\end{align}
Moreover, using Proposition \ref{prop:ImEst}
\begin{align}\label{e:imag}
\Im[z_t(u_i)]\asymp \Im[z_{t_i}(u_i)]+(t_i-t)\Im[\hat m_{t_i}(z_{t_i}(u_i))]\asymp M^{-1/3}f(t_i)^{1/4}N^{-1/2}(1+(t_i-t)/\sqrt{f(t_i)}),
\end{align}
where we used that $\hat \rho_{t_i}$ has square root behavior. Comparing \eqref{e:real} and \eqref{e:imag}, we get that
\begin{align}\label{e:realim}
\Re[z_t(u_i)]-E_t\geq M^{3/2}\Im[z_t(u_i)],
\end{align}
for any $0\leq t\leq t_i$.
We now define the stopping time $\sigma$
\begin{align}\begin{split}\label{e:defsigma}
\sigma := T&\bigwedge\inf\{s:\lambda_1(s) - E_s\geq f(s)\} \\
& \bigwedge\inf\left\{s:\exists i\leq \lceil TN\rceil, {\bf 1}_{s\leq t_i}|m_s(z_s(u_i)) - \hat{m}_s(z_s(u_i))|\geq \frac{1}{M^{1/4}} \frac{1}{N\Im[z_s(u_i)]}\right\}\\
& \bigwedge \inf \left\{s:\exists w\in \cal D_s^{\rm far}, |m_s(w)- \hat{m}_s(w)| \geq \frac{M}{N }\right \}.
\end{split}\end{align}
To get a more precise estimate of the Stieltjes transform $m_s(z_s(u_i))$, we need to upgrade the estimate \eqref{e:continuityarg}.
The following proposition analyzes the short range deterministic term in \eqref{defcE2} for the edge terms.
\begin{proposition}\label{l:thirdbound}
For any $0\leq t\leq t_i$,
\begin{equation}
\frac{2- \beta}{\beta} \int_{0}^{t \wedge \sigma} \frac{1}{N^2} \sum_{k=1}^{N}\frac{1}{|\lambda_{k}(s) - z_{s}(u_i)|^3}{\rm d} s\leq \frac{C \log N}{N \kappa_{t \wedge \sigma}},
\end{equation}
where $\kappa_t=\Re[z_t(u_i)]-E_t$ for $0\leq t\leq t_i$.
\end{proposition}
\begin{proof}
For notational simplicity, we write $z_t(u_i)$ as $z_t$ and denote $\eta_t=\Im[z_t(u_i)]$ for $0\leq t\leq t_i$.
By the definition of $\sigma$, for $s\leq \sigma$ it holds that $\lambda_1(s)\leq E_s+f(s)$. We then estimate
\begin{align}\begin{split}
&\phantom{{}={}}\int_{0}^{t \wedge \sigma} \frac{1}{N^2} \sum_{k=1}^{N}\frac{1}{|\lambda_{k}(s) - z_{s}|^3} {\rm d} s
\leq \frac{1}{N}\int_{0}^{t \wedge \sigma} \frac{ \text{Im}[m_s(z_s)]}{|\Re[z_s]-\lambda_1(s)+\mathrm{i}\eta_s| \text{Im}[z_s]} {\rm d} s \\
&\leq \frac{2}{N}\int_{0}^{t \wedge \sigma} \frac{\text{Im} [\hat{m}_s(z_s)]}{|\Re[z_s]-\lambda_1(s)+\mathrm{i}\eta_s|\text{Im}[z_{s}]} {\rm d} s
\leq \frac{C}{N} \int_{0}^{t \wedge \sigma} \frac{\eta_{s}/\sqrt{\kappa_{s}}}{(\kappa_s -f(s)+\eta_s) \eta_{s}}{\rm d} s\\
&\leq \frac{C}{N}\int_{0}^{t \wedge \sigma} \frac{{\rm d} s}{(\kappa_{s})^{1/2}((\kappa_{s})^{1/2}(t_i-s)+\eta_s)}\leq \frac{C}{N\kappa_{t\wedge \sigma}}\int_{0}^{t \wedge \sigma} \frac{{\rm d} s}{(t_i-s)+\eta_s/\sqrt{\kappa_s}}\leq \frac{C\log N}{N\kappa_{t\wedge \sigma}},
\end{split}\end{align}
where in the last line we use \eqref{e:ksbound} and the increasing gap inequality \eqref{e:gap}: for any $0\leq s\leq t\wedge \sigma$,
\begin{align}
\kappa_s^{1/2}\geq \kappa_{t\wedge \sigma}^{1/2}+C(t\wedge \sigma-s)\geq \kappa_{t\wedge \sigma}^{1/2}.
\end{align}
\end{proof}
A similar analysis applies to the short range stochastic term in \eqref{defcE2} for the edge terms.
\begin{proposition}\label{l:StochasticBnd}
There exists a set $\Omega$, which holds with high probability, such that on $\Omega$ the following inequality holds for any $i\leq \lceil TN\rceil$
\begin{equation}\label{StochasticBnd}
\left|\int_{0}^{t \wedge \sigma}\sqrt{\frac{2}{\beta N^3}} \sum_{k=1}^{N} \frac{{\rm d} B_k(s)}{|z_s(u_i) - \lambda_k(s)|^2}\right| \leq \frac{C(\log N)^2}{N \sqrt{\kappa_{t \wedge \sigma} \eta_{t \wedge \sigma}}},
\end{equation}
where $\kappa_t=\Re[z_t(u_i)]-E_t$ and $\eta_t=\Im[z_t(u_i)]$ for $0\leq t\leq t_i$.
\end{proposition}
\begin{proof}
For simplicity of notation, we write $z_t(u_i)$ as $z_t$.
For a given $t_i$, we define a sequence of partial stopping times $0=t_i^0< t_i^1<t_i^2<\cdots$ as follows:
\begin{equation}\label{e:choosetik}
t^k_i = t_i \wedge \inf\{t> t^{k-1}_i : \kappa_{t} \eta_{t} < \kappa_{t^{k-1}_i} \eta_{t^{k-1}_i}/2\},\quad k=1,2,3,\cdots.
\end{equation}
Notice that since $\kappa_t$ and $\eta_t$ are finite and cannot be smaller than $N^{-2/3}$, for a given $t_i$, we have $t_i^k=t_i$ for $k\gtrsim \log N$.
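The claim that $t_i^k=t_i$ for $k\gtrsim \log N$ is a pure counting statement: the product $\kappa_t\eta_t$ is at least halved between consecutive $t_i^k$ and is confined to $[N^{-4/3},\OO(1)]$, so at most $\log_2$ of the ratio many halvings can occur. A sketch with an illustrative $N$:

```python
import numpy as np

# Counting check: a quantity confined to [N^(-4/3), 1] that is at least
# halved at each level admits at most log2(N^(4/3)) = (4/3)*log2(N)
# levels.  N below is illustrative.
N = 10**8
upper, lower = 1.0, N ** (-4.0 / 3.0)
levels, prod = 0, upper
while prod > lower:
    prod /= 2.0
    levels += 1

assert np.log2(N) <= levels <= 2 * np.log2(N)
```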
We now apply the Burkholder-Davis-Gundy inequality to our stochastic integral. The quadratic variation can be bounded as follows:
\begin{align}\begin{split}
&\phantom{{}={}}\int_{0}^{t^k_{i} \wedge \sigma} \frac{2}{\beta N^3} \sum_{j=1}^{N} \frac{{\rm d} s}{|z_s - \lambda_j(s)|^4}
\leq \frac{C}{N^2} \int_{0}^{t^k_i \wedge \sigma} \frac{\text{Im}[m_s(z_s)]}{\text{Im}[z_s] ((\kappa_s - f(s))^2 + (\eta_s)^2)} {\rm d} s\\
& \leq \frac{C}{N^2} \int_{0}^{t^k_i \wedge \sigma} \frac{\text{Im}[\hat m_s(z_s)]}{\text{Im}[z_s] ((\kappa_s - f(s))^2 + (\eta_s)^2)} {\rm d} s
\leq \frac{C}{N^{2}} \int_{0}^{t^k_i \wedge \sigma} \frac{ \eta_{s}/{\sqrt{\kappa_{s}}}}{\eta_{s}((\kappa_s)^{1/2}(t_i-s)+ \eta_{s})^2}{\rm d} s \\
&\leq \frac{C}{N^{2}} \int_{0}^{t^k_i \wedge \sigma} \frac{{\rm d} s}{(\kappa_{s})^{3/2}(t_i-s+\eta_s/(\kappa_s)^{1/2})^2}
\leq \frac{C}{N^{2}} \int_{0}^{t^k_i \wedge \sigma} \frac{{\rm d} s}{(\kappa_{t^k_i \wedge \sigma})^{3/2}(t_i-s+\eta_{t^k_i \wedge \sigma}/(\kappa_{t^k_i \wedge \sigma})^{1/2})^2}\\
&\leq \frac{C} {N^{2}} \frac{1}{\kappa_{t^k_i \wedge \sigma} \eta_{t^k_i \wedge \sigma}},
\end{split}\end{align}
where in the third inequality we use \eqref{e:ksbound}, and
\begin{align}
\frac{\eta_s}{\sqrt{\kappa_s}}\asymp
\Im[\hat m_s(z_s)]\asymp \Im[\hat m_{t_i^k\wedge \sigma}(z_{t_i^k\wedge \sigma})]\asymp \frac{\eta_{t_i^k\wedge \sigma}}{\sqrt{\kappa_{t_i^k\wedge \sigma}}}.
\end{align}
The Burkholder-Davis Gundy inequality implies that with high probability we must have
\begin{equation}\label{localineq}
\sup_{0\leq t \leq t^k_i } \left|\int_{0}^{t\wedge \sigma}\sqrt{\frac{2}{\beta N^3}} \sum_{j=1}^{N} \frac{{\rm d} B_j(s)}{|z_s - \lambda_j(s)|^2}\right|\leq \frac{C(\log N)^2}{N \sqrt{\kappa_{t^k_i \wedge \sigma} \eta_{t^k_i \wedge \sigma}}}.
\end{equation}
We define $\Omega$ to be the set of Brownian paths $\{B_1(s), \cdots, B_N(s)\}_{0\leq s\leq T}$ on which \eqref{localineq} holds for all $k$. It follows from the discussion above that $\Omega$ holds with high probability.
Therefore, for any $t\in[t_{i}^{k-1},t_i^k]$, the bounds \eqref{localineq} and our choice of $t_i^k$ \eqref{e:choosetik} yield that on $\Omega$,
\begin{align}\begin{split}
\left|\int_{0}^{t\wedge \sigma}\sqrt{\frac{2}{\beta N^3}} \sum_{j=1}^{N} \frac{{\rm d} B_j(s)}{|z_s - \lambda_j(s)|^2}\right|\leq \frac{C(\log N)^2}{N \sqrt{\kappa_{t \wedge \sigma} \eta_{t \wedge \sigma}}}.
\end{split}
\end{align}
This finishes the proof of Proposition \ref{l:StochasticBnd}.
\end{proof}
The last term in \eqref{defcE1} can be estimated by Proposition \ref{prop:HFbound}. Now let us return to the equation \eqref{eq:mzt} for the difference between $\hat{m}_t(z)$ and $m_t(z)$. Fix some $i\leq \lceil TN\rceil$ and denote $z_t=E_t+\kappa_t+\mathrm{i}\eta_t =z_t(u_i)$ for $0\leq t\leq t_i$. Thanks to Propositions \ref{l:thirdbound}, \ref{l:StochasticBnd} and \ref{prop:HFbound}, we have
\begin{align}\begin{split}\label{e:globalest}
\left|\hat{m}_{ t \wedge \sigma}(z_{ t \wedge \sigma})-m_{ t \wedge \sigma}(z_{ t \wedge \sigma})\right|
\leq& \int_0^{ t\wedge\sigma}\left|\hat{m}_{s}(z_s)-m_s(z_s)\right|\left|\partial_z \left( m_s(z_s)+\frac{V'(z_s)}{2}\right)\right|{\rm d} s\\+&
\frac{C({ t\wedge\sigma})M}{N}
+\frac{C(\log N)^2}{N\sqrt{\eta_{ t \wedge \sigma} \kappa_{ t \wedge \sigma}}} +\left|\hat{m}_0(z_0) - m_0(z_0) \right|.
\end{split}\end{align}
Notice that for $s\leq t\wedge \sigma$,
\begin{align}\label{e:aterm}
\left|\partial_z \left(m_s(z_s)+\frac{V'(z_s)}{2}\right)\right|
\leq \frac{\text{Im}[ m_s(z_s)]}{\text{Im}[z_s]}+C.
\end{align}
From the definition \eqref{e:defsigma} of $\sigma$, we have that $|m_s(z_s) - \hat{m}_s(z_s)|\leq \text{Im}[\hat{m}_s(z_s)]/\log N$, and it follows
\begin{align}\label{e:aterm2}
\left|\partial_z \left(m_s(z_s)+\frac{V'(z_s)}{2}\right)\right|
\leq \frac{\text{Im}[ m_s(z_s)]}{\text{Im}[z_s]}+C\leq \left(1+\frac{1}{\log N}\right)\frac{\text{Im}[ \hat{m}_s(z_s)]}{\text{Im}[z_s]}+C.
\end{align}
We denote the quantity,
\begin{align} \label{eqn:betabd}
\beta(s):= \left(1+\frac{1}{\log N}\right)\frac{\text{Im}[ \hat{m}_s(z_s)]}{\text{Im}[z_s]}+C= \text{O}\left(\frac{\text{Im}[ \hat{m}_s(z_s)]}{\text{Im}[z_s]}\right),
\end{align}
and rewrite \eqref{e:globalest} as
\begin{align}\begin{split}
\left|\hat{m}_{ t\wedge\sigma}(z_{ t\wedge\sigma})-m_{ t\wedge\sigma}(z_{ t\wedge\sigma})\right|
\leq& \int_0^{ t\wedge\sigma}\beta(s)\left|\hat{m}_s(z_s)-m_s(z_s)\right|{\rm d} s\\+&
\frac{C({ t\wedge\sigma})M}{N}
+\frac{C(\log N)^2}{N\sqrt{\eta_{ t \wedge \sigma}\kappa_{ t \wedge \sigma}}}+\left|\hat{m}_0(z_0) - m_0(z_0) \right|.
\end{split}
\end{align}
By Gronwall's inequality, this implies the following estimate for any $ 0\leq t\leq t_i$
\begin{align}\begin{split}\label{e:midgronwall}
&\left|\hat{m}_{t\wedge\sigma}(z_{t\wedge\sigma})-m_{t\wedge\sigma}(z_{t\wedge\sigma})\right|
\leq \frac{C({t\wedge\sigma})M}{N}
+\frac{C(\log N)^2}{N\sqrt{\eta_{t\wedge \sigma} \kappa_{t \wedge \sigma}}}+\left|\hat{m}_0(z_0) - m_0(z_0) \right|\\
+&\int_0^{t\wedge\sigma}\beta(s)\left(\frac{sM(\log N)^{2}}{N}
+\frac{C(\log N)^2}{N\sqrt{\eta_{s}\kappa_{s}}}+\left|\hat{m}_0(z_0) - m_0(z_0) \right|\right)e^{\int_s^{t\wedge\sigma} \beta(\tau)d\tau} {\rm d} s.
\end{split}
\end{align}
For the function $\beta$ we have the following estimates
\begin{align}\begin{split}
\int_s^{t\wedge\sigma} \beta(\tau){\rm d} \tau
&\leq C(t-s)+\left(1+\frac{1}{\log N}\right)\int_s^{t\wedge\sigma} \frac{\Im[\hat m_\tau(z_\tau)]}{\Im[z_\tau]}{\rm d} \tau\\
&\leq C(t-s)+ \left(1+\frac{1}{\log N}\right)\log \left(\frac{\text{Im}[z_{s}]}{\text{Im}[z_{t\wedge\sigma}]}\right),
\end{split}\end{align}
and thus
\begin{align}\label{e:beta}
e^{\int_s^{t\wedge\sigma} \beta(\tau){\rm d} \tau}
\leq e^{C(t-s)} e^{\left(1+\frac{1}{\log N}\right)\log \left(\frac{\text{Im}[z_{s}]}{\text{Im}[z_{t\wedge\sigma}]}\right)}
\leq C \frac{\text{Im}[z_s]}{\text{Im}[z_{t\wedge\sigma}]},
\end{align}
where in the last inequality, we used the estimate $\text{Im}[z_{s}]/\text{Im}[z_{t\wedge\sigma}] \leq C N$.
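Explicitly, writing $r=\text{Im}[z_s]/\text{Im}[z_{t\wedge\sigma}]$ and using $t-s=\OO(1)$ together with $r\leq CN$,
\begin{align*}
e^{C(t-s)}e^{\left(1+\frac{1}{\log N}\right)\log r}= e^{C(t-s)}\, r\cdot r^{1/\log N}\leq C r\,(CN)^{1/\log N}= Cr\, e^{\frac{\log (CN)}{\log N}}\leq C' r.
\end{align*}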
Combining the above inequality \eqref{e:beta} with \eqref{eqn:betabd}, we can bound the last term in \eqref{e:midgronwall} by
\begin{align}\begin{split}\label{e:term2}
&\phantom{{}={}}C\int_0^{t\wedge\sigma}\frac{\text{Im}[ \hat m_{s}(z_s)]}{\text{Im}[z_{t\wedge \sigma}]}\left(\frac{sM}{N}
+\frac{C(\log N)^2}{N\sqrt{\eta_{s}\kappa_{s}}}+\left|\hat{m}_0(z_0) - m_0(z_0) \right|\right) {\rm d} s\\
&\leq \frac{CM}{N\text{Im}[z_{t\wedge \sigma}]}\int_0^{t\wedge\sigma}s\,\text{Im}[\hat m_s(z_s)]\, {\rm d} s+ \int_0^{t\wedge \sigma} \frac{\text{Im}[ \hat m_{s}(z_s)]}{\text{Im}[z_{t\wedge \sigma}]} \frac{C(\log N)^2}{N\sqrt{\eta_{s}\kappa_{s}}} {\rm d} s\\
&+ C(t\wedge \sigma) \left|\hat{m}_0(z_0) - m_0(z_0) \right| \frac{\text{Im}[\hat m_{t\wedge \sigma}(z_{t \wedge \sigma})]}{\text{Im}[z_{t\wedge \sigma}]}.
\end{split}\end{align}
Since $|V' (z)| \leq C$, it follows that $\text{Im}[\hat m_s(z_s)]=-\partial_s \text{Im}[z_s] +\OO(1)$. Therefore we can bound the first term on the right-hand side of \eqref{e:term2} as
\begin{align}\begin{split}\label{e:term3}
\int_0^{t\wedge\sigma}s\,\text{Im}[\hat m_s(z_s)]\,{\rm d} s = \int_0^{t\wedge\sigma}(-\partial_s \text{Im}[z_s] )s\, {\rm d} s + \OO ( (t \wedge \sigma )^2 ) = \OO ( t \wedge \sigma ) .
\end{split}\end{align}
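In more detail, integrating by parts and using that $\text{Im}[z_s]=\OO(1)$,
\begin{align*}
\int_0^{t\wedge\sigma}(-\partial_s\text{Im}[z_s])\,s\,{\rm d} s=-(t\wedge\sigma)\,\text{Im}[z_{t\wedge\sigma}]+\int_0^{t\wedge\sigma}\text{Im}[z_s]\,{\rm d} s=\OO(t\wedge\sigma).
\end{align*}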
We notice that
\begin{align}
\Im[\hat m_s(z_s)]\asymp \eta_s/(\kappa_s)^{1/2}\asymp\eta_{t\wedge \sigma}/(\kappa_{t\wedge \sigma})^{1/2}.
\end{align}
For the second term on the right-hand side of \eqref{e:term2}, we have
\begin{align}\begin{split}
\int_{0}^{t \wedge \sigma} \frac{ \eta_{t\wedge \sigma}/(\kappa_{t\wedge\sigma})^{1/2}}{ \eta_{t\wedge \sigma} N \sqrt{\kappa_s \eta_s}} {\rm d} s
&
= \OO\left(\int_{0}^{t\wedge \sigma} \frac{{\rm d} s}{N(\eta_{t\wedge \sigma})^{1/2} (\kappa_{t\wedge \sigma})^{1/2}(\kappa_{s})^{1/2} }\right) \\
&= \OO\left(\int_{0}^{t\wedge \sigma} \frac{{\rm d} s}{N(\eta_{t\wedge \sigma})^{1/2} (\kappa_{t\wedge \sigma})^{1/2} (\sqrt{\kappa_{t\wedge\sigma}}+(t\wedge\sigma-s)) }\right) \\
&= \OO\left(\frac{\log N}{N \sqrt{\eta_{t\wedge \sigma} \kappa_{t\wedge \sigma}}}\right).
\end{split}\end{align}
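The last step uses the elementary bound
\begin{align*}
\int_0^{t\wedge\sigma}\frac{{\rm d} s}{\sqrt{\kappa_{t\wedge\sigma}}+(t\wedge\sigma-s)}=\log\left(\frac{\sqrt{\kappa_{t\wedge\sigma}}+t\wedge\sigma}{\sqrt{\kappa_{t\wedge\sigma}}}\right)=\OO(\log N),
\end{align*}
since $\kappa_{t\wedge\sigma}$ is at least polynomially small in $N$.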
For the last term on the right-hand side of \eqref{e:term2}, we have
\begin{align}\begin{split}
(t\wedge \sigma) \left|\hat{m}_0(z_0) - m_0(z_0) \right| \frac{\text{Im}[\hat m_{t\wedge \sigma}(z_{t \wedge \sigma})]}{\text{Im}[z_{t\wedge \sigma}]}
&= O\left( \frac{(t \wedge \sigma)}{M N \eta_0\sqrt{\kappa_{t\wedge \sigma}}}\right) \\
&= O\left( \frac{(t \wedge \sigma)}{M N (\eta_{t\wedge \sigma}+(t\wedge \sigma)\eta_{t\wedge \sigma}/\sqrt{\kappa_{t\wedge\sigma}})\sqrt{\kappa_{t\wedge \sigma}}}\right) \\
&= O\left(\frac{1}{M N \eta_{t\wedge \sigma}}\right).
\end{split}\end{align}
Combining all the above estimates, we have that on the event $\Omega$, for any $i\leq TN$ and $0\leq t\leq t_i$,
\begin{equation}\label{e:outside}
\left|\hat{m}_{t\wedge \sigma}(z_{t\wedge \sigma}(u_i)) - m_{t\wedge \sigma}(z_{t\wedge \sigma}(u_i)) \right| = O\left(\frac{(\log N)^3}{N \sqrt{\kappa_{t\wedge \sigma} \eta_{t\wedge \sigma}}}+\frac{1}{M N \eta_{t\wedge \sigma}}\right) \ll \frac{1}{M^{1/4}N \eta_{t\wedge \sigma}},
\end{equation}
where $z_{t\wedge \sigma}(u_i)=E_{t\wedge \sigma}+\kappa_{t\wedge\sigma}+\mathrm{i} \eta_{t\wedge \sigma}$, and we used \eqref{e:realim}.
In the following we show that $\sigma=T$ on the event $\Omega$. Suppose, for contradiction, that there exists a sample in $\Omega$ with $\sigma<T$. Thanks to \eqref{e:outside}, we must then have $\lambda_\sigma=E_\sigma+f(\sigma)$, and there exists some $i\leq \lceil TN \rceil $ with $t_{i-1}< \sigma\leq t_i$.
We recall that by \eqref{def:fr} and Proposition \ref{prop:TimStab}, $z_\sigma=z_{t_i}+\OO(1/N)$, { $E_\sigma=E_{t_i}+\OO(1/N)$} and $f(\sigma)=f(t_i)+\OO(1/N)$. Therefore, we have $z_\sigma(u_i)=z_{t_i}(u_i)+\OO(1/N)=E_{t_i}+f(t_i)+ \mathrm{i} M^{-1/3}f(t_i)^{1/4}N^{-1/2}=E_\sigma+f(\sigma)+\mathrm{i} M^{-1/3}f(t_i)^{1/4}N^{-1/2}+\OO(1/N)$. It follows from the argument given at the beginning of the proof, i.e.\ by taking $E_{\sigma}+\kappa+\mathrm{i}\eta=z_\sigma$, that there is no eigenvalue in a neighborhood of $E_{\sigma}+f(\sigma)$ at time $\sigma$. This is a contradiction, and finishes the proof of Theorem \ref{Thm:EdgeRidgity}.
\end{proof}
As a consequence of Theorem \ref{Thm:EdgeRidgity} and Proposition \ref{p:rigidity}, by the same argument as in \cite[Corollary 3.2]{HL}, we have the following corollary on the locations of the extreme eigenvalues.
\begin{corollary} \label{Col:GrenEdg}
Under the assumptions of Theorem \ref{Thm:EdgeRidgity}, we have the following:
there exists a constant ${\frak e}>0$ such that with high probability under the Dyson Brownian motion \eqref{DBM}, for $ \sqrt{\eta^*}/{\frak c}\leq t\leq T$, and uniformly for indices $1\leq i\leq {\frak e} N$, we have
\begin{align}
|\lambda_i(t)-\gamma_i(t)|\leq \frac{M^2}{N^{2/3}i^{1/3}},
\end{align}
where $\gamma_i(t)$ are the classical particle locations of the density $\hat \rho_t$, i.e.
\begin{align}
\frac{i-1}{N}=\int_{\gamma_i(t)}^{E_t}{\rm d} \hat \rho_t(x).
\end{align}
\end{corollary}
\section{Mesoscopic Central Limit Theorem}
In this section we prove a version of the central limit theorem for mesoscopic linear statistics of the empirical particle density $\mu_t$, with $\mu_0$ satisfying Assumption \ref{a:initial}; unless explicitly stated otherwise, $m_0$ will be assumed to satisfy this hypothesis.
\begin{comment}
\begin{remark} \label{Asmp:Reg}
One consequence of $\hat{m_0}_t$ a stable measure as in \ref{d:stable} is that we can write the Stieltjes transform as
\begin{equation}
\hat{m}_t(z) = {\mathrm{int}}_{-\infty}^{\infty} \frac{\hat{\rho}_t(x)}{x -z} dx
\end{equation}
where the measure $\hat{\rho}_t$ is of the following form:
It has a right spectral edge $E_t$ and is of the functional form \begin{equation}
\hat{\rho}_t(x) = f_t(x- E_t) \sqrt{E_t-x}
\end{equation}
where $f_t$ is analytic in a small neighborhood around 0.
We recall the following special constant associated to $f_t(x)$
\begin{equation}
\lim_{x \rightarrow 0}f_t(x) := C_t
\end{equation}
\end{remark}
\end{comment}
\begin{comment}
\begin{remark}
The results we have proven in the Appendix show that $\rho_0$ is a stable measure and that $\hat{m}_t(z)$ is of the form
\begin{equation}
\hat{m}_t(z) = {\mathrm{int}}_{-\infty}^{\infty} \frac{\hat{\rho}_t(x)}{x -z} dx
\end{equation}
where $\hat{\rho}_t(x)= f_t(x-E_t) \sqrt{E_t -x}$ and $f_t$ is analytic in a small neighborhood around 0 which is uniform in $t \in [0,1]$.
Our later estimates will use the following special constant
In addition, $\hat{m}_t(z_t)$ satisfies the following differential equation for the edge
\begin{equation}
\partial_s E_s = - \hat{m}_s(E_s) - \frac{V'(E_s)}{2}
\end{equation}
\end{remark}
One should recall from previous sections the following estimates on the characteristics of solutions to the $\beta$-Dyson Brownian motion. They will be used liberally in the proofs that follow.
\begin{align*}
& \eta_s\approx\eta_t+(t-s)\frac{\eta_t}{\sqrt{\kappa_t+\eta_t}} \\
& \sqrt{\kappa_s}\geq\sqrt{\kappa_t}+{\frak c}(t-s)
\end{align*}for $0\leq s\leq t$.
\end{comment}
\begin{comment}
\begin{definition}
We define $\cH_t$ to be the region $\{z:|\Re[z]-E_t| < C \Im[z], \Im[z]\gg N^{-2/3}, |z -E_t|\le (\log N)^{-3} \}$. From this point onwards, we will use the notation that if $z_t(u)$ is a characteristic then $\kappa_t(u)= \Re[z_t(u)] - E_t$ and $\eta_t(u)= \Im[z_t(u)]$. We will not always give reference to the parameter u when the context is obvious.
\end{definition}
\end{comment}
{
\begin{definition}
We fix a control parameter $\epsilon$.
We define $\cH_t$ to be the region $\{z: - (\Im[z])^{4/5 + \epsilon} \leq \Re[z]-E_t \leq N^{-\epsilon}, N^{-2/3+\epsilon} \leq \Im[z]\leq N^{-\epsilon}, |z -E_t|\le N^{-\epsilon} \}$. From this point onwards, we will use the notation that if $z_t(u)$ is a characteristic then $\kappa_t(u)= \Re[z_t(u)] - E_t$ and $\eta_t(u)= \Im[z_t(u)]$. We will not always give reference to the parameter $u$ when the context is obvious.
\end{definition}
\begin{remark}
From the equation determining the movement of characteristics, we know that each characteristic moves at speed $\OO(1)$ in time. Thus, if we choose a time $s<t$ such that $|s-t| \ll (\log N)^{-2}$, then we are assured that if $z_t(u) \in \cH_t$, necessarily $|z_s(u) - E_s| \ll (\log N)^{-2}$. In addition, one can check that if $z_t(u)$ is in $\cH_t$ for $t \le N^{-\epsilon}$, then necessarily $z_s(u)$ is in $\cH_s$ for $s<t$, as the edge moves to the right faster than any characteristic.
\end{remark}
}
Our goal is to prove the following theorem.
\begin{theorem} \label{t:mesoCLT}
Let $m(z)$ satisfy Assumption \ref{a:initial}. Fix a scale $\eta$ satisfying $N^{-2/3 + \epsilon} \ll \eta^* \ll \eta \ll N^{-\epsilon} $. Consider complex numbers $w_1, w_2, \cdots,w_n$ and a time $t$ satisfying $\sqrt{ \eta} N^{\epsilon} \leq t \leq (\log N)^{-4} $. Then the rescaled quantities $\Gamma_{t}[E_t +w_i \eta] = N \eta \left[m_{t}(E_{t} + w_i \eta) - \hat{m}_{t}(E_t + w_i \eta)\right] - \frac{2- \beta}{4 \beta w_i}$ asymptotically form a Gaussian field with limiting covariance kernel
\begin{align*}
K_{\rm edge}(w_i,w_j):=\lim_{N\rightarrow \infty}N^2 {\rm{cov}}\langle \Gamma_t(E_t+w_i\eta), \Gamma_t(E_t+w_j\eta) \rangle=\frac{1}{2\beta \sqrt{w_i}\sqrt{w_j}(\sqrt{w_i}+\sqrt{w_j})^2},
\end{align*}
\begin{comment}
the following characteristic function up to a factor of $(1+ \OO(N^{-\epsilon}))$
\begin{align}\begin{split} \label{e:variance}
& -\sum_{1\leq j,\ell\leq k}\Re\left[\frac{(a_j-\mathrm{i} b_j)(a_\ell+\mathrm{i} b_\ell)\Im[z_t(u_j)]\Im[z_t(u_\ell)]}{4\beta\sqrt{\kappa_t(u_j) - i \eta_t(u_j)}\sqrt{\kappa_t(u_\ell)+i \eta_t(u_\ell)} (\sqrt{\kappa_t(u_j) - i \eta_t(u_j)}+ \sqrt{\kappa_t(u_\ell) + i \eta_t(u_\ell)})^2}\right]\\
&-\sum_{1\leq j,\ell\leq k}\Re\left[\frac{(a_j+\mathrm{i} b_j)(a_\ell+\mathrm{i} b_\ell)\Im[z_t(u_j)]\Im[z_t(u_\ell)]}{4\beta \sqrt{\kappa_t(u_j) + i \eta_t(u_j)}\sqrt{\kappa_t(u_\ell)+i \eta_t(u_\ell)}(\sqrt{\kappa_t(u_j) + i \eta_t(u_j)}+ \sqrt{\kappa_t(u_\ell) + i \eta_t(u_\ell)})^2}\right]\\
&-\sum_{1\leq j,\ell\leq k}\Re\left[\frac{(a_j-\mathrm{i} b_j)(a_\ell-\mathrm{i} b_\ell)\Im[z_t(u_j)]\Im[z_t(u_\ell)]}{4\beta\sqrt{\kappa_t(u_j) - i \eta_t(u_j)}\sqrt{\kappa_t(u_\ell)-i \eta_t(u_\ell)}(\sqrt{\kappa_t(u_j) - i \eta_t(u_j)}+ \sqrt{\kappa_t(u_\ell) - i \eta_t(u_\ell)})^2}\right]
\end{split}\end{align}
\end{comment}
\end{theorem}
\begin{remark}
We remark that this result identifies the leading order only when $\kappa_t = \OO(\eta_t)$. Regardless, the variance bound in the larger region will be important later.
\end{remark}
\begin{proof}[Proof of Theorem \ref{t:mesoCLT}]
Let the event $\Omega$ be as in Proposition \ref{p:rigidity}. Thanks to the estimates \eqref{e:diffmm} and Lemma \ref{dpbound}, which hold on $\Omega$, we can bound the second term on the right-hand side of \eqref{e:diffm} by splitting it into an error part and a main part. The error term is estimated as follows:
\begin{align}\begin{split}
\left|\left(m_t(z_t)- \hat{m}_t(z_t)\right)\partial_z \left( (m_t(z_t) - \hat{m}_t(z_t))+\frac{V'(z_t)}{2}\right)\right|
\leq& \frac{M}{N\Im[z_t]}\left(\frac{M}{N\Im[z_t]^{2}}+ 1\right)\\
=& \OO\left(\frac{M^2}{N^2\Im[z_t]^{3}}+\frac{M}{N \Im[z_t]}\right).
\end{split}\end{align}
The main term coming from this contribution is
\begin{equation}
\left(m_t(z_t)- \hat{m}_t(z_t)\right)\partial_z \hat{m}_t(z_t).
\end{equation}
The contour integral on the right-hand side of \eqref{e:diffm} is an error term. Using Proposition \ref{prop:HFbound}, we have on the event $\Omega$,
\begin{equation}
\left|\oint_{\mathcal{C}_t} g(z_t,w) (m_t(w)- \hat{m}_t(w)) {\rm d} w \right|\leq \frac{CM}{N}.
\end{equation}
We can rewrite the final term on the right-hand side of \eqref{e:diffm} as
\begin{equation}
\frac{2-\beta}{\beta N^2}\sum_{i=1}^{N}\frac{1}{(\lambda_i(t)-z_t)^3}
=\frac{2-\beta}{2\beta N} \partial_z^2 (m_t(z_t) - \hat{m}_t(z_t)) +\frac{2-\beta}{2\beta N} \partial_z^2 \hat{m}_t(z_t).
\end{equation}
Thanks to Lemma \ref{dpbound}, we have
\begin{align}\begin{split}
\left|\partial_z^2 (m_t(z_t) - \hat{m}_t(z_t))\right|
=\OO\left(\frac{1}{N (\Im[z_t])^{3}}\right).
\end{split}
\end{align}
\begin{comment}
It follows that
\begin{equation}
\left|\frac{2-\beta}{\beta N^2}{\mathrm{int}}_0^t\sum_{i=1}^{N}\frac{{\rm d} s}{(\lambda_i(s)-z_s)^3}\right|
=\OO\left(\frac{1}{(N^2\Im[z_t]^{3/2})}+\frac{\log N}{Nt}\right).
\end{equation}
\end{comment}
We have the following differential equation, which we use to study the fluctuations of $m_t(z_t) - \hat{m}_t(z_t)$:
\begin{align}\begin{split}
\partial_t(m_t(z_t) - \hat{m}_t(z_t))
&= (m_t(z_t) - \hat{m}_t(z_t)) \partial_z \hat{m}_t(z_t) + \frac{2-\beta}{2 \beta N} \partial_z^2 \hat{m}_t(z_t) \\
&- \sqrt{\frac{2}{\beta N^3}} \sum_{i=1}^N \frac{ {\rm d} B_i(t)}{(\lambda_i(t) - z_t)^2} + \OO\left(\frac{1}{N^2 \Im[z_t]^3}\right).
\end{split}\end{align}
We can solve the above equation explicitly by using $\mathcal{I}_t:=\exp\left\{\int_{0}^t \partial_z \hat{m}_s(z_s)\, {\rm d} s\right\}$ as an integrating factor.
The solution can be written explicitly as
\begin{align} \label{e:StochasDiff}
\begin{split}
m_t(z_t) - \hat{m}_t(z_t) &=\mathcal{I}_t (\mathcal{I}_s)^{-1} (m_s(z_s) - \hat{m}_s(z_s)) + \mathcal{I}_t \int_s^{t}\mathcal{I}_q^{-1} \left( \frac{2-\beta}{2\beta N} \partial_z^2 \hat{m}_q(z_q) + \OO\left( \frac{1}{N^2 \Im[z_q]^3}\right) \right) {\rm d} q \\&
- \sqrt{\frac{2}{\beta N^3}}\,\mathcal{I}_t\int_s^{t}\mathcal{I}_q^{-1} \sum_{i=1}^{N} \frac{{\rm d}B_i(q)}{(\lambda_i(q)-z_q)^2}.
\end{split}
\end{align}
The deterministic integral in the above line is an offset term for the mean value; the stochastic integral is the source of the Gaussian fluctuations.
From Lemma \ref{lem:IntFac}, which will be proven later, we can proceed further and evaluate the quantities that appear in the integrals of \eqref{e:StochasDiff}.
Choosing $s$ so that $(\sqrt{|\kappa_t + i \eta_t|})^{1- \epsilon} \ll t-s \ll (|\kappa_t + i \eta_t|)^{1/4 + \epsilon}$, we can evaluate
\begin{align}
\begin{split}
\frac{2-\beta}{2\beta N} \frac{1}{\sqrt{\kappa_t + i \eta_t}} \int_s^{t} \partial_z^2 \hat m_q(z_q) \sqrt{\kappa_q + i \eta_q}\, {\rm d} q & = \frac{2-\beta}{2 \beta N} \frac{1}{\sqrt{\kappa_t + i \eta_t}} \left[\int_s^{t} \frac{C_q \pi}{4(\kappa_q + i \eta_q)} {\rm d} q+ \OO((s-t)(\log N)^5 )\right]\\
&= \frac{2-\beta}{4 \beta N} \frac{1}{(\kappa_t + i \eta_t)}[1+ \OO(N^{-\epsilon})].
\end{split}
\end{align}
The bound we have on $[m_s(z_s) - \hat{m}_s(z_s)] \mathcal{I}_t(\mathcal{I}_s)^{-1}$ is $\OO \left(\frac{1}{N\sqrt{\kappa_s+ i \eta_s} \sqrt{\kappa_t + i \eta_t}} \right)$, where we applied the rigidity estimate. This is of much smaller order than $\frac{1}{N (\kappa_t + i \eta_t)}$.
By combining the above estimates we see that on the event $\Omega$, we have
\begin{equation}\label{e:tmmtdiff}
m_t(z_t)- \hat{m}_t(z_t) = \frac{2 - \beta}{4 \beta N} \frac{1}{\kappa_t + i \eta_t} [1+ \OO(N^{-\epsilon})] +\sqrt{\frac{2}{\beta N^3}}\int_s^t \mathcal{I}_t (\mathcal{I}_q)^{-1} \sum_{i=1}^N \frac{{\rm d} B_i(q)}{(\lambda_i(q)-z_q)^2} + \OO\left(\frac{1}{N^2 \Im[z_t]^{5/2}}\right).
\end{equation}
We remark at this point that since $\Im[z_t] \gg N^{-2/3}$, the final term in \eqref{e:tmmtdiff} is of smaller order than the previous two terms and can essentially be absorbed into the $\OO(N^{-\epsilon})$ factor appearing above.
In the following we show that the Brownian integrals are asymptotically jointly Gaussian. We fix $z_1,z_2,\cdots, z_k \in \cH_t$ such that there exists a constant $B>1$ with $B^{-1}\leq \Im[z_j](\Im[z_i])^{-1}\leq B$ for all $i,j$, and let $u_1,u_2,\cdots, u_k$ be points such that $z_t(u_i) = z_i$ for $i=1,2,\cdots,k$.
For $1\leq j\leq k$, let
\begin{equation}
X_j(t)= \Im[z_t(u_j)]\sqrt{\frac{2}{\beta N}}\int_s^t \mathcal{I}_t (\mathcal{I}_q)^{-1}\sum_{i=1}^N \frac{{\rm d} B_i(q)}{(\lambda_i(q)-z_q(u_j))^2},\quad j=1,2,\cdots, k,
\end{equation}
where $s$ is a time such that $(\log N)^{-2} \max_{i}\left(\sqrt{|\kappa_t(u_i) + i \eta_t(u_i)|}\right)^{1/2+ \epsilon}\gg t-s \gg \max_{i}\left(\sqrt{\eta_t(u_i)}\right)^{1-\epsilon}$. Such a time exists based on how we chose the points $z_1, z_2,\cdots, z_k$.
We compute their joint characteristic function
\begin{equation}\label{e:cfunc}
\mathbb{E}\left[\exp\left\{\mathrm{i}\sum_{j=1}^k \left(a_j\Re[X_j(t)]+b_j\Im[X_j(t)]\right)\right\}\right].
\end{equation}
Since $\sum_{j=1}^k \left(a_j\Re[X_j(t)]+b_j\Im[X_j(t)]\right)$ is a martingale, the following is also a martingale:
\begin{equation}
\exp\left\{\mathrm{i} \sum_{j=1}^k \left(a_j\Re[X_j(t)]+b_j\Im[X_j(t)]\right)+\frac{1}{2}\left\langle \sum_{j=1}^k a_j\Re[X_j(t)]+b_j\Im[X_j(t)]\right\rangle\right\}.
\end{equation}
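This is the standard complex exponential martingale: for a continuous martingale $M_t$ with $M_0=0$ and bounded quadratic variation, $\exp\{\mathrm{i} M_t+\frac{1}{2}\langle M\rangle_t\}$ is a martingale, so that
\begin{align*}
{\mathbb E}\left[e^{\mathrm{i} M_t+\frac12\langle M\rangle_t}\right]=1, \qquad\text{and hence}\qquad {\mathbb E}\left[e^{\mathrm{i} M_t}\right]=e^{-\frac12\langle M\rangle_t}
\end{align*}
whenever $\langle M\rangle_t$ is deterministic; here the quadratic variation is deterministic only up to lower order errors on $\Omega$, which are absorbed into the error factors.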
In particular, its expectation is one. By computations performed later in Proposition \ref{p:var} and Lemma \ref{Lem:VarEval}, on the event $\Omega$, the quadratic variation is given by
\begin{align}\begin{split}
&\frac{1}{2}\left\langle \sum_{j=1}^k a_j\Re[X_j(t)]+b_j\Im[X_j(t)]\right\rangle [1+\OO(N^{-\epsilon})]\\
=& -\sum_{1\leq j,\ell\leq k}\Re\left[\frac{(a_j-\mathrm{i} b_j)(a_\ell+\mathrm{i} b_\ell)\Im[z_t(u_j)]\Im[z_t(u_\ell)]}{16\beta\sqrt{\kappa_t(u_j) - i \eta_t(u_j)}\sqrt{\kappa_t(u_\ell)+i \eta_t(u_\ell)} (\sqrt{\kappa_t(u_j) - i \eta_t(u_j)}+ \sqrt{\kappa_t(u_\ell) + i \eta_t(u_\ell)})^2}\right]\\
&-\sum_{1\leq j,\ell\leq k}\Re\left[\frac{(a_j+\mathrm{i} b_j)(a_\ell+\mathrm{i} b_\ell)\Im[z_t(u_j)]\Im[z_t(u_\ell)]}{16\beta \sqrt{\kappa_t(u_j) + i \eta_t(u_j)}\sqrt{\kappa_t(u_\ell)+i \eta_t(u_\ell)}(\sqrt{\kappa_t(u_j) + i \eta_t(u_j)}+ \sqrt{\kappa_t(u_\ell) + i \eta_t(u_\ell)})^2}\right]\\
&-\sum_{1\leq j,\ell\leq k}\Re\left[\frac{(a_j-\mathrm{i} b_j)(a_\ell-\mathrm{i} b_\ell)\Im[z_t(u_j)]\Im[z_t(u_\ell)]}{16\beta\sqrt{\kappa_t(u_j) - i \eta_t(u_j)}\sqrt{\kappa_t(u_\ell)-i \eta_t(u_\ell)}(\sqrt{\kappa_t(u_j) - i \eta_t(u_j)}+ \sqrt{\kappa_t(u_\ell) - i \eta_t(u_\ell)})^2}\right]
\end{split}\end{align}
It follows that the value of \eqref{e:cfunc} is the exponential of the above quantity, up to negligible errors.
\begin{comment}
Therefore,
\begin{align}\begin{split}
\eqref{e:cfunc}
=&\exp\left\{\sum_{1\leq j,\ell\leq k}\Re\left[\frac{(a_j-\mathrm{i} b_j)(a_\ell+\mathrm{i} b_\ell)\Im[z_t(u_j)]\Im[z_t(u_\ell)]}{2\beta(z_t(u_j)-\overline{z_t(u_\ell)})^2}\right]\right\} \\+&\OO\left( \frac{M}{N\min_j\{\Im[z_t(u_j)]\}}+\frac{\max_j\{\Im[z_t(u_j)]\}}{t}\right).
\end{split}\end{align}
\end{comment}
By \eqref{e:tmmtdiff},
\begin{align*}
\Gamma_t(z_t(u_j))=X_j(t)+\OO(N^{-\epsilon}),
\end{align*}
so the $\Gamma_t(z_t(u_j))$ form an asymptotically Gaussian field. Choosing $z_t(u_j) = E_t + w_j \eta$, we obtain Theorem \ref{t:mesoCLT}.
\end{proof}
From the above computation, we obtain the kernel
\begin{align*} \label{e:CharFunc}
K_{\rm edge}(w,w')=\frac{1}{2\beta \sqrt{w}\sqrt{w'}(\sqrt{w}+\sqrt{w'})^2}
\end{align*}
where $w,w'\in {\mathbb C}\setminus {\mathbb R}_{-}$. We recall the kernel in the bulk is given by
\begin{align}
K_{\rm bulk}(w,w')=\frac{2}{\beta (w-w')^2},
\end{align}
provided $w\in {\mathbb C}_+, w'\in {\mathbb C}_-$, or $w\in {\mathbb C}_-,w'\in {\mathbb C}_+$, otherwise $K_{\rm bulk}(w,w')=0$. We can in fact recover the kernel in the bulk from the kernel at the edge. We take $w=\kappa+\mathrm{i} \eta$ and $w'=\kappa'+\mathrm{i} \eta'$, and let $\kappa,\kappa'$ tend to $-\infty$,
\begin{align}
K_{\rm edge}(w,w')
\rightarrow
\left\{
\begin{array}{cc}
0 & \text{if $\eta\eta'>0$,}\\
K_{\rm bulk}(w,w') & \text{if $\eta\eta'<0$.}\\
\end{array}
\right.
\end{align}
\begin{corollary}\label{c:mesoCLT}
Under the assumptions of Theorem \ref{t:mesoCLT}, the following holds for any compactly supported test function $\psi$ in the Sobolev space $H^{s}$ with $s>1$. Let { $N^{-2/3+ \epsilon}\ll \eta^{*} \ll \eta\ll t \ll N^{-1/2 - \epsilon}$} and define
\begin{align}
\psi_{\eta}(x)=\psi\left(\frac{x-E_t}{\eta}\right).
\end{align}
The normalized linear statistics converges to a Gaussian
\begin{equation}\label{e:toGaussian}
\hat{\cal L}(\psi_{\eta})\deq \sum_{i=1}^N \psi_{\eta}(\lambda_i(t))-N\int_{{\mathbb R}} \psi_{\eta}(x) {\rm d} \rho_t(x)\rightarrow N(0, \sigma_\psi^2) - \frac{2-\beta}{ 4 \beta} \psi(0),
\end{equation}
in distribution as $N\rightarrow \infty$, where
\begin{equation}
\sigma_\psi^2\deq \frac{1}{4\pi^2 \beta}\int_{{\mathbb R}^2} \left(\frac{\psi(x^2)-\psi(y^2)}{x-y}\right)^2{\rm d} x{\rm d} y.
\end{equation}
\end{corollary}
\subsection{Preliminary Estimates}
We start by proving some simple estimates that appeared in the derivation of the covariance kernel. These quantities will also recur frequently in later computations.
\begin{lemma} \label{IntegralBounds}
We have, for points $u$ such that $z_t(u) \in \cH_t$,
\begin{equation}
\int_{0}^{t} \frac{{\rm d} s}{(\Im[z_s(u)])^{p}} = \OO\left( \frac{M}{\eta_t^{p-1/2}}\right),
\end{equation}
where the constant appearing above is independent of $N$ and of $u$.
\end{lemma}
\begin{proof}
We can perform an explicit calculation, using the square root behavior assumption:
\begin{equation}
\int_{0}^{t} \frac{{\rm d} s}{(\Im[z_s(u)])^p} =\OO\left( \int_{0}^{t} \frac{M\,{\rm d} s}{(\eta_t + (t-s) \sqrt{\eta_t})^p} \right)
= \OO \left(\frac{M}{ \eta_t^{p-1/2}}\right).
\end{equation}
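For completeness, the substitution $u=\eta_t+(t-s)\sqrt{\eta_t}$, ${\rm d} u=-\sqrt{\eta_t}\,{\rm d} s$, gives for $p>1$
\begin{align*}
\int_0^t\frac{{\rm d} s}{(\eta_t+(t-s)\sqrt{\eta_t})^p}=\frac{1}{\sqrt{\eta_t}}\int_{\eta_t}^{\eta_t+t\sqrt{\eta_t}}\frac{{\rm d} u}{u^p}\leq \frac{1}{(p-1)\,\eta_t^{p-1/2}}.
\end{align*}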
\end{proof}
The following lemma estimates various quantities that recur when comparing the Stieltjes transform $m_t$ with $\hat{m}_t$.
\begin{lemma}\label{dpbound}
Suppose that the assumptions of Proposition \ref{p:rigidity} hold, and write $u=z_t(x)$ for some $x$ in $z_t^{-1}(\cH_t)$. Then on the event $\Omega$ given by Proposition \ref{p:rigidity}, we have the following estimate uniformly for $0\leq s\leq t$, with the constants appearing below independent of $x$ and $N$.
\begin{equation} \label{dptdms}
\partial_z^p (m_s(z_s(x)) - \hat{m}_s(z_s(x)))=\OO\left(\frac{M}{N (\Im[z_s(x)])^{p+1}}\right)
\end{equation}
\end{lemma}
\begin{proof}
Since both $m_s$ and $\hat m_s$ are analytic on the upper half plane, by Cauchy's integral formula
\begin{equation}
\partial_z^p \left(m_s(z_s(x))-\hat m_s(z_s(x))\right)
=\frac{p!}{2\pi \mathrm{i}}\oint_{{\cal C}} \frac{m_s(w)-\hat m_s(w)}{(w-z_s(x))^{p+1}}{\rm d} w ,
\end{equation}
where ${\cal C}$ is a small contour in the upper half plane centering at $z_s(x)$ with radius $\Im[z_s(x)]/2$. On the event $\Omega$, we use \eqref{e:diffmm} in Proposition \ref{p:rigidity} to bound the integral by
\begin{align}\begin{split}
\left|\frac{p!}{2\pi \text{i}}\oint_{{\cal C}} \frac{m_s(w)-\hat m_s(w)}{(w-z_s(x))^{p+1}}{\rm d} w\right|
\leq & \frac{p!}{2\pi }\oint_{{\cal C}} \frac{|m_s(w)-\hat m_s(w)|}{|w-z_s(x)|^{p+1}}\,|{\rm d} w|
\\
= & \OO\left( \frac{M}{N (\Im[z_s(x)])^{p+1}} \right).
\end{split}\end{align}
\end{proof}
In the following proposition we calculate the quadratic variation of the stochastic integral that appears in the proof of Theorem \ref{t:mesoCLT}. We need the following lemma on the behavior of the integrating factor $\mathcal{I}_t(\mathcal{I}_s)^{-1}$, whose proof can be found near the end of the section, after more technical details have been established.
\begin{lemma} \label{lem:IntFac}
Let $\hat{m}_t$ be the Green's function associated to a stable measure. Consider a characteristic $z_t = E_t + \kappa_t + i \eta_t$ and, along this characteristic, consider times $s$ and $t$ such that $(s-t)^2= \OO(\sqrt{\kappa_t + i \eta_t}^{1+ \tilde \epsilon})$. Then we have
\begin{equation}
(\mathcal{I}_t)(\mathcal{I}_s)^{-1}= \exp\left\{\int_s^t \partial_z \hat{m}_q(z_q)\,{\rm d} q\right\} = \frac{\sqrt{\kappa_s+ i \eta_s}}{\sqrt{\kappa_t + i \eta_t} }(1+ \OO(N^{- \tilde \epsilon})).
\end{equation}
\end{lemma}
Replacing the integrating factor with the expression above, we obtain the following quadratic variation integrals.
\begin{comment}
{\color{blue}
We integrate both sides of \eqref{e:diffm}, and get the following integral expression for $ m_t(z_t)$,
\begin{align}\label{e:intdiffm}\begin{split}
&m_t(z_t)- \hat{m}_t(z_t)=(m_0(z_0) - \hat{m}_0(z_0))+{\mathrm{int}}_0^t\left(m_s(z_s)-\hat{m}_s(z_s)\right)\partial_z \left( m_s(z_s)+\frac{V'(z_s)}{2}\right){\rm d} s\\
+& \frac{1}{\pi}{\mathrm{int}}_0^t{\mathrm{int}}_{{\mathbb C}} \partial_{\bar w} \tilde g(z_s,w) ( m_s(w)- \hat{m}_s(w)){\rm d}^2 w{\rm d} s
+\frac{2-\beta}{\beta N^2}{\mathrm{int}}_0^t\sum_{i=1}^{N}\frac{{\rm d} s}{(\lambda_i(s)-z_s)^3}\\
-&\sqrt{\frac{2}{\beta N^3}}{\mathrm{int}}_0^t\sum_{i=1}^N \frac{{\rm d} B_i(s)}{(\lambda_i(s)-z_s)^2}.
\end{split}
\end{align}
For the proof of the mesoscopic central limit theorem, we will show that the first four terms on the righthand side of \eqref{e:intdiffm} are negligible, and the Gaussian fluctuation is from the last term, i.e. the integral with respect to Brownian motion. In the following Proposition, we calculate the quadratic variance of the Brownian integrals.
}
\end{comment}
\begin{proposition}\label{p:var}
Suppose that the assumptions of Corollary \ref{Col:GrenEdg} hold. Consider two points $u,u'$ in $z_t^{-1}(\cH_t)$. We use the notation $z_t(u) = E_t + \kappa_t + i \eta_t$ and $z_t(u') = E_t + \kappa_t' + i \eta_t'$, and, without loss of generality, assume $\eta_t< \eta_t'$. Along these characteristics, consider times $(\log N)^{-3} \gg t,s \gg \sqrt{\eta^*}$ with $(t-s)^2 = \OO(\sqrt{\eta_t}^{1+\tilde \epsilon})$. Then we have the following expressions for the quadratic variation.
\begin{align}
\label{e:var1}&\frac{1}{N^3}\int_s^t\sum_{i=1}^N [(\mathcal{I}_t)(\mathcal{I}_q)^{-1}]^2\frac{{\rm d}q}{(\lambda_i(q)-z_q)^4} = \frac{1}{6N^2}\int_s^t \frac{\kappa_q + i \eta_q}{\kappa_t + i \eta_t} \partial^3_z\hat{m}_q(z_q){\rm d} q \,[1+ \OO(N^{- \tilde \epsilon})] +\OO\left( \frac{M}{N^3\eta_t^{7/2}} \right) , \\
\label{e:var2}&\frac{1}{N^3}\int_s^t\sum_{i=1}^N [(\mathcal{I}_t)(\mathcal{I}_q)^{-1}][(\mathcal{I}_t')(\mathcal{I}_q')^{-1}]\frac{{\rm d}q}{(\lambda_i(q)-z_q)^2(\lambda_i(q)-z_q')^2}=\\
&\frac{1}{2\pi\mathrm{i} N^2}\int_{s}^{t} \frac{\sqrt{\kappa_q + i \eta_q}\sqrt{\kappa_q' + i \eta_q'}}{\sqrt{\kappa_t + i \eta_t}\sqrt{\kappa_t' + i \eta_t'}} \oint_{\cal C} \frac{\hat{m}_q(w) }{(w-z_q)^2(w-z_q')^2}{\rm d} w {\rm d} q\,[1+ \OO(N^{-\tilde \epsilon})] + \OO\left(\frac{M}{N^3 \eta_t^{7/2}}+\frac{M}{N^3 \eta_t'^{7/2}}\right) , \\
\label{e:var3}&\frac{1}{ N^3}\int_s^t[\overline{(\mathcal{I}_t)(\mathcal{I}_q)^{-1}}][(\mathcal{I}_t')(\mathcal{I}_q')^{-1}] \sum_{i=1}^N \frac{{\rm d} q}{(\lambda_i(q)-\bar{z}_q)^2(\lambda_i(q)-z_q')^2}=\\
& \frac{1}{N^2}\int_{s}^{t}\frac{\sqrt{\kappa_q - i \eta_q}\sqrt{\kappa_q' + i \eta_q'}}{\sqrt{\kappa_t - i \eta_t}\sqrt{\kappa_t' + i \eta_t'}}\left[
\frac{2(\overline{- \hat{m}_q(z_q)}+ \hat{m}_q(z_q'))}{(\bar z_q-z_q')^3}+\frac{\overline{\partial_{z} \hat{m}_q( z_q)}+\partial_z \hat{m}_q(z_q')}{(\bar z_q-z_q')^2} \right] {\rm d} q \,[1+ \OO(N^{-\tilde \epsilon})]\\
&+ \OO\left(\frac{M}{N^3 \eta_t^{7/2}}+\frac{M}{N^3 \eta_t'^{7/2}}\right).
\end{proposition}
\begin{proof}
When simplifying the above expressions, we first evaluate the Green's function at fixed time; we deal with the time integral later.
We define $g_q(w)\deq m_q(w) - \hat{m}_q(w)$.
For \eqref{e:var1}, the integrand on the left-hand side can be written in terms of the third derivative of the Stieltjes transform $m_q$ at $z_q$, and so
\begin{align}\begin{split}
\frac{\partial^3_z m_q(z_q)}{6N^2} = \frac{\partial^3_z g_q(z_q)}{6N^2}+ \frac{1}{6N^2} \partial^3_z \hat{m}_q(z_q)
= \OO\left(\frac{M}{N^3 \eta_q^4} \right)+\frac{\partial^3_z \hat{m}_q(z_q)}{6N^2}
\end{split}\end{align}
where we used Lemma \ref{dpbound} and Lemma \ref{IntegralBounds}.
We write the LHS of \eqref{e:var2} as a contour integral of $\hat m_q$:
\begin{align}\begin{split}\label{e:contourintm}
\frac{1}{N^3}\sum_{i=1}^N \frac{1}{(\lambda_i(q)-z_q)^2(\lambda_i(q)-z_q')^2}
=&\frac{1}{2\pi\mathrm{i} N^2}\oint_{\cal C}\frac{\hat{m}_q(w) + g_q(w)}{(w-z_q)^2(w-z_q')^2}{\rm d} w.
\end{split}\end{align}
In the case that $\max\{\Im[z_q]/3, \Im[z_q']/3\} \geq |z_q-z'_q|$, (without loss of generality, we assume that the maximum is $\Im[z_q]/3$) we set $\cal C$ to be a contour centered at $z_q$ with radius $\Im[z_q]/2$. In this case we have $\dist(\cal C, \{z_q, z_q'\})\geq \Im[z_q]/6$.
In the case that $|z_q-z'_q|\geq \max\{\Im[z_q]/3, \Im[z_q']/3\}$,
we let $\cal C=\cal C_1\cup \cal C_2$ consist of two contours, where $\cal C_1$ is centered at $z_q$ with radius $\Im[z_q]/6$, and $\cal C_2$ is centered at $z_q'$ with radius $\Im[z_q']/6$. Then in this case we have
$\dist(\cal C_1, z_q')\geq \Im[z_q]/6$ and $\dist(\cal C_2, z_q)\geq \Im[z_q']/6$.
We analyze the contour integral of $g_q(w)$ via Taylor expansion; the contour integral associated with $\hat m_q$ can be computed more explicitly using the square root behavior assumption.
In the first case, Lemma \ref{dpbound} applies: since $\Im[z_q] \gg N^{-2/3}$, we also have $\Im[w] \gg N^{-2/3}$ on ${\cal C}$, and the rigidity estimates \eqref{e:diffmm} hold. As a result, we have the following Taylor expansion:
\begin{equation}\label{e:tdmsexpand}
g_q(w)=g_q(z_q)+(w-z_q)\partial_z g_q(z_q)+(w-z_q)^2\OO\left(\frac{M}{N \eta_q^{3}}\right).
\end{equation}
Plugging \eqref{e:tdmsexpand} into \eqref{e:contourintm}, we see that the first two terms vanish and
\begin{align}
|\eqref{e:contourintm}|\leq \frac{C}{N^2}\oint_{\cal C}\frac{M}{N \eta_q^{5}}\,|{\rm d} w|
=\OO\left(\frac{M}{N^3 \eta_q^{4}}\right),\label{e:contourintm1}
\end{align}
where we used that $|{\cal C}|\asymp \Im[z_q]$. If instead $\Im[z_q'] >\Im[z_q]$, the same bound holds with $\eta_q$ replaced by $\eta_q'$; the sum of the two bounds then controls \eqref{e:contourintm} in the first case.
In the second case, \eqref{e:tdmsexpand} holds on $\cal C_1$. Similarly, for $w\in \cal C_2$ we have
\begin{equation}\label{e:tdmsexpand2}
g_q(w)= g_q(z_q')+(w-z_q')\partial_z g_q(z_q')+(w-z_q')^2\OO\left(\frac{M}{N\eta_q'^{3}}\right).
\end{equation}
It follows, by plugging \eqref{e:tdmsexpand} and \eqref{e:tdmsexpand2} into \eqref{e:contourintm}, that we can bound \eqref{e:contourintm} by
\begin{align}\begin{split}\label{e:contourintm2}
& \frac{C}{N^2}\left(\oint_{\cal C_1}\frac{M}{N (\eta_q)^{5} }\,|{\rm d} w|+\oint_{\cal C_2}\frac{M}{N (\eta_q')^{5}}\,|{\rm d} w|\right)\\
&= \OO\left(\frac{M}{N^3 \eta_q^{4}}\right) +\OO\left(\frac{M}{N^3 \eta_q'^{4}}\right),
\end{split}\end{align}
where we used $|{\cal C}_1| = \OO(\Im[z_q])$ and $ |{\cal C}_2|=\OO(\Im[z_q'])$.
Finally, for \eqref{e:var3}, we have the identity
\begin{equation}\label{e:mainterm}
\frac{1}{N}\sum_{i=1}^N \frac{1}{(\lambda_i(q)-\bar z_q)^2(\lambda_i(q)-z_q')^2}
=\frac{2(\overline{- m_q(z_q)}+ m_q(z_q'))}{(\bar z_q-z_q')^3}+\frac{\overline{\partial_{z} m_q( z_q)}+\partial_z m_q(z_q')}{(\bar z_q-z_q')^2}.
\end{equation}
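The identity \eqref{e:mainterm} follows from the partial fraction decomposition, valid for $a\neq b$,
\begin{align*}
\frac{1}{(x-a)^2(x-b)^2}=\frac{1}{(a-b)^2}\left(\frac{1}{(x-a)^2}+\frac{1}{(x-b)^2}\right)-\frac{2}{(a-b)^3}\left(\frac{1}{x-a}-\frac{1}{x-b}\right),
\end{align*}
with $a=\bar z_q$ and $b=z_q'$, averaged over the eigenvalues $\lambda_i(q)$, together with the symmetries $m_q(\bar z_q)=\overline{m_q(z_q)}$ and $\partial_z m_q(\bar z_q)=\overline{\partial_z m_q(z_q)}$.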
Note that $|\bar z_q-z_q'|\geq \Im[z_q]+\Im[z_q']$. As before, we separate $m_q$ into $g_q$ and $\hat{m}_q$ and analyze the corresponding terms with $g_q$. For the second term in \eqref{e:mainterm}, we have by \eqref{dptdms}
\begin{align}\begin{split}
\left|\frac{1}{N^2}\frac{\overline{\partial_{z} g_q( z_q)}+\partial_z g_q(z_q')}{(\bar z_q-z_q')^2}\right|
\leq \frac{C}{N^2} \left(\frac{M}{N (\eta_q)^{4}} + \frac{M}{N (\eta_q')^{4}}\right).
\end{split}\end{align}
A similar analysis can be performed for the $g_q$ contribution to the first term in \eqref{e:mainterm}. One can then integrate the error terms in time via Lemma \ref{IntegralBounds}.
\end{proof}
\subsection{Computation of quadratic variation quantities associated with $\hat{m}$}
In this section, we compute the leading-order expansions of various quantities associated with the quadratic variation terms appearing in the previous lemma.
One can easily check that
\begin{equation}
\oint_{\cal C}\frac{\hat{m}_q(w) }{(w-z_q)^2(w-z_q')^2}{\rm d} w = \int_{-\infty}^{\infty} \frac{\hat{\rho}_q(x)}{(x -z_q)^2(x-z_q')^2} {\rm d} x.
\end{equation}
In fact, if one lets $z_q = z_q'$ on the right-hand side of the above equation, one obtains $\frac{1}{6}\partial_z^3 \hat{m}_q(z_q)$, while if one instead replaces $z_q'$ by its complex conjugate $\bar{z}_q'$, one obtains the first term on the right-hand side of \eqref{e:var3}.
\begin{lemma}
Recalling \eqref{e:ConstDef}, we have the following integral evaluation:
\begin{equation}\label{e:term}
\int_{-\infty}^{\infty} \frac{\hat{\rho}_q(x)}{(x -z_q)^2(x-z_q')^2} {\rm d} x = \frac{\pi}{2} \frac{C_q}{\sqrt{\kappa_q + i \eta_q}\sqrt{\kappa_q' + i \eta_q'}(\sqrt{\kappa_q + i \eta_q}+ \sqrt{\kappa_q' + i \eta_q'})^3} (1+ \OO(N^{-\tilde \epsilon})),
\end{equation}
where $z_q-E_q=\kappa_q + i \eta_q$, $z_q'-E_q= \kappa_q' + i \eta_q'$, and both $z_q,z_q' \in H_q$.
\end{lemma}
\begin{proof}
From this point on, when computing integrals at a fixed time, we will automatically translate $x$ so that the edge of the measure $\rho_q$ is located at $0$ and the support of the measure after translation is contained in $(-\infty,0]$.
We now identify the main term in \eqref{e:term} explicitly. We write
\begin{equation}
\int_{-(\log N)^{-1}}^{0} \frac{\hat{\rho}_q(x)}{(x- z_q)^2(x - z_q')^2} {\rm d} x + \int_{-\infty}^{-(\log N)^{-1}} \frac{\hat{\rho}_q(x)}{(x- z_q)^2(x - z_q')^2} {\rm d} x.
\end{equation}
The second term above can be bounded by $\OO((\log N)^4)$, using the bounded support of $\hat{\rho}_q(x)$ and the fact that $z_q$ and $z_q'$ are at distance at least of order $(\log N)^{-1}$ from the domain of integration. This is an $\OO(N^{-\tilde \epsilon})$ factor of the main term in \eqref{e:term}.
{
To estimate the remaining quantity, we use the Taylor expansion of $\hat{\rho}_q(x)$ around the point $0$, in the form $|\hat{\rho}_q(x) - C_q\sqrt{|x|}| = \OO(|x|^{\frac{3}{2}})$. Without loss of generality we assume that $|\kappa_q| \ge |\kappa_q'|$. To illustrate the computation, we only consider the case that $-\kappa_q \ge \eta_q$ and $-\kappa_q' \ge \eta_q'$; similar techniques can be used in all other cases and are generally simpler. The computation divides into two cases. In the first case, $|\kappa_q - \kappa_q'| \ge |\kappa_q|/2$, and we decompose the integral as follows:
\begin{align}
\int_{-\infty}^0 \frac{|x|^{3/2}}{|x-z_q|^{2}|x-z_q'|^2} {\rm d} x &= \int_{-\infty}^{2 \kappa_q} \frac{|x|^{3/2}}{|x-z_q|^{2}|x-z_q'|^2} {\rm d} x + \int_{2 \kappa_q}^{\max(2 \kappa_q', (\kappa_q + \kappa_q')/2)} \frac{|x|^{3/2}}{|x-z_q|^{2}|x-z_q'|^2} {\rm d} x \\
& + \int_{\max(2 \kappa_q', (\kappa_q + \kappa_q')/2)}^{0} \frac{|x|^{3/2}}{|x-z_q|^{2}|x-z_q'|^2} {\rm d} x \\
& \le \frac{1}{4} \int_{-\infty}^{2 \kappa_q} \frac{|x|^{3/2}}{|x|^4} {\rm d} x + \frac{4|2 \kappa_q|^{5/2}}{\eta_q^2 |\kappa_q - \kappa_q'|^2} + \frac{4|2\kappa_q'|^{5/2}}{\eta_q'^2 |\kappa_q - \kappa_q'|^2}\\
& \le \OO(|\kappa_q|^{-3/2}) + \OO\left(\frac{|\kappa_q|^{1/2}}{\eta_q^2}\right) + \OO\left(\frac{|\kappa_q'|^{5/2}}{\eta_q'^2 |\kappa_q|^2}\right),
\end{align}
where the implicit constants in the $\OO$ do not depend on $N$.
We need to show that the above quantity is less than $N^{-\tilde\epsilon} |\kappa_q|^{-2} |\kappa_q'|^{-1/2}$. This reduces to the set of inequalities $|\kappa_q| \le N^{-\tilde \epsilon}$, $|\kappa_q|^3 \le \eta_q^2 N^{-\tilde \epsilon}$, and $|\kappa_q'|^{3} \le \eta_q'^2 N^{-\tilde \epsilon}$. Since on $\mathcal{H}_t$ we have $|\kappa_q| \le \eta_q^{4/5 + \epsilon}$, and similarly for $|\kappa_q'|$, along with the fact that $\eta_q, \eta_q' \gg N^{-2/3}$, we have more than enough room to obtain the desired inequalities.
Now consider the case in which $|\kappa_q - \kappa_q'| \le |\kappa_q|/2$. We can divide the integral as
\begin{align}
\int_{-\infty}^0 \frac{|x|^{3/2}}{|x-z_q|^{2}|x-z_q'|^2} {\rm d} x &= \int_{-\infty}^{2 \kappa_q} \frac{|x|^{3/2}}{|x-z_q|^{2}|x-z_q'|^2} {\rm d} x + \int_{2 \kappa_q}^{0} \frac{|x|^{3/2}}{|x-z_q|^{2}|x-z_q'|^2} {\rm d} x\\
&\le \OO(|\kappa_q|^{-3/2}) + \OO\left(\frac{|\kappa_q|^{3/2}}{\eta_q^2 \eta_q'^2}\right).
\end{align}
To deal with the last factor, recall that on $\cH_q$ we have $(\eta_q')^{4/5 + \epsilon} \ge (-\kappa_q') \ge (- \kappa_q)/2$. Up to a constant factor that does not depend on $N$, this implies that the second term in the previous display is at most
\begin{equation*}
\frac{|\kappa_q|^{3/2 -\frac{2}{4/5 + \epsilon}}}{\eta_q^2}.
\end{equation*}
That this is less than $\frac{N^{-\tilde\epsilon}}{|\kappa_q|^{5/2}}$ is now merely a consequence of the fact that $|\kappa_q|^{4/5 + \epsilon} \le \eta_q$.
}
\begin{comment}
We have,
\begin{align}
{\mathrm{int}}_{-\infty}^0 \frac{|x|^{3/2}}{|x-z_q|^{2}|x-z_q'|^2} {\rm d} x &\le {\mathrm{int}}_{0}^{-\infty} \frac{|x|^{3/2}}{|x-z_q|^4} {\rm d} x + {\mathrm{int}}_{0}^{-\infty}\frac{|x|^{3/2}}{|x-z_q|^4} {\rm d} x\\
&\le {\mathrm{int}}_{-\infty}^{-2 \max{\kappa_q,\eta_q}} \frac{|x|^{3/2}}{|x-z_q|^4} {\rm d} x + {\mathrm{int}}_{-2 \max(\kappa_q,\eta_q)}\frac{|x|^{3/2}}{|x-z_q|^4} {\rm d} x + \cdots \\
&\le \OO\left(\frac{1}{|\max(\kappa_q,\eta_q)|^{3/2}}\right) +\OO\left(\frac{|\max{\kappa_q, \eta_q}|^{5/2}}{|\eta_q|^4}\right)+\cdots
\end{align}
{\color{blue} where the $\cdots$ represent the corresponding terms involving $\kappa_q'$ and $\eta_q'$. Notice that if we have $\eta_q \ge \kappa_q$, this this error term is clearly $\OO(N^{-\tilde \epsilon})$ times the main term in \eqref{e:term}. In the case that $\kappa_q \ge \eta_q$, we have to check the inequality $\frac{\kappa_q^{5/2}}{\eta_q^4} \le \frac{N^{-\tilde \epsilon}}{\kappa_q^{5/2}}$, which is equivalent to $\kappa_q \le \eta^{4/5} N^{-4/5 \epsilon}$. Noting that in the definition of $\cH_t$, we required there be some $\epsilon$ such that both $\kappa_q \le \eta^{4/5 + \epsilon} $ and $\eta \le N^{-\epsilon}$. Then, if we choose $\tilde \epsilon$ to be $\epsilon^2$, then we can see this error term can be incorportated as a small factor of the main term.}
\end{comment}
We now only need to compute the following integral:
\begin{equation}\label{e:ttm}
\int_{-(\log N)^{-1}}^{0} \frac{C_q \sqrt{|x|}}{(x- z_q)^2 (x - z_q')^2} {\rm d} x = \int_{-\infty}^{0} \frac{C_q \sqrt{|x|}}{(x - z_q)^2 (x- z_q')^2} {\rm d} x + \OO((\log N)^{5/2}),
\end{equation}
where the term $\OO((\log N)^{5/2})$ can be bounded by an $N^{-\tilde \epsilon}$ factor of the main term. We can evaluate the integral in \eqref{e:ttm}:
\begin{align}\begin{split}
\int_{-\infty}^{0} \frac{C_q \sqrt{|x|}}{(x - z_q)^2 (x- z_q')^2}{\rm d} x
= \frac{\pi}{2} \frac{C_q}{\sqrt{z_q} \sqrt{z_q'} (\sqrt{z_q} + \sqrt{z_q'})^3},
\end{split}\end{align}
where both $z_q$ and $z_q'$ lie off the negative real axis, and the square root is the branch with positive real part.
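This evaluation follows from the standard integral $\int_0^\infty \frac{\sqrt{x}}{(x+a)(x+b)}\,{\rm d} x = \frac{\pi}{\sqrt{a}+\sqrt{b}}$, valid for $a,b$ off the negative real axis: substituting $x\to -x$ in the integral above and differentiating in both parameters gives
\begin{equation*}
\int_{0}^{\infty} \frac{\sqrt{x}}{(x+a)^2(x+b)^2}\,{\rm d} x
= \partial_a\partial_b\, \frac{\pi}{\sqrt{a}+\sqrt{b}}
= \frac{\pi}{2}\,\frac{1}{\sqrt{a}\sqrt{b}\,(\sqrt{a}+\sqrt{b})^3},
\end{equation*}
which yields the claim upon setting $a=z_q$ and $b=z_q'$.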
\end{proof}
With this information in hand, we are able to compute the time integrals of these quantities. However, we cannot do this yet, because we do not know exactly how the functions $\sqrt{\kappa_t + \mathrm{i} \eta_t}$ behave in time. In the following lemma, we determine, to leading order, the behavior of the functions $\sqrt{\kappa_t + \mathrm{i} \eta_t}$.
\begin{lemma}\label{l:sqrtgrowth}
For a point $z_t$ in $H_t$ whose characteristic $z_q$ can be written as $(\kappa_q + E_q) + i \eta_q$, we have
\begin{equation}
\left|2 \sqrt{\kappa_t + i \eta_t} - 2 \sqrt{\kappa_s + i \eta_s} + \pi \int_{s}^{t} C_q {\rm d} q\right| = \OO\left( (\log N)^2\left((t-s)^2 + (t-s) \sqrt{|\kappa_t + i \eta_t|}\right)\right).
\end{equation}
\end{lemma}
\begin{proof}
We start by studying the differential equation determining $\kappa_t + i \eta_t$:
\begin{align}\begin{split}
\partial_t(\kappa_t + i \eta_t) &= -(m_t(z_t) - m_t(E_t)) - \frac{V'(z_t)}{2} + \frac{V'(E_t)}{2}\\
&= -\int_{-\infty}^{0} \frac{\hat{\rho}_t(x)}{x-(z_t-E_t)} {\rm d} x + \int_{-\infty}^{0} \frac{\hat{\rho}_t(x)}{x} {\rm d} x + \OO(|z_t -E_t|)\\
&= -(z_t-E_t) \int_{-\infty}^{0} \frac{\hat{\rho}_t(x)}{x(x-(z_t-E_t))} {\rm d} x + \OO(|z_t -E_t|).
\end{split}
\end{align}
At this stage, we use our assumption on $\hat{\rho}_t$ to replace the integral appearing above with an expression that can be computed explicitly. As before, the main contribution comes from the part close to the edge, and we can treat the integral from $-\infty$ to $-(\log N)^{-1}$ as an error term. A Taylor expansion indicates that the leading-order contribution comes from the $C_t \sqrt{|x|}$ term in the expansion of $\hat{\rho}_t(x)$ near the edge.
\begin{align} \label{e:sqrdiffeq}
\begin{split}
\partial_t(\kappa_t + i \eta_t) &= -(z_t-E_t) \left[ \int_{-(\log N)^{-1}}^{0} \frac{\hat{\rho}_t(x)}{x(x-(z_t-E_t))} {\rm d} x + \int_{-\infty}^{-(\log N)^{-1}} \frac{\hat{\rho}_t(x)}{x(x-(z_t-E_t))} {\rm d} x \right] + \OO(|z_t -E_t|) \\
&= -(z_t-E_t)\left[\int_{-(\log N)^{-1}}^{0} \frac{C_t\sqrt{|x|} + \OO(|x|^{3/2})}{x(x-(z_t-E_t))} {\rm d} x+ \OO((\log N)^2) \right] + \OO(|z_t-E_t|)\\
&= -(z_t-E_t)\left[\int_{0}^{\infty} \frac{C_t \sqrt{x}}{x(x+(z_t -E_t))} {\rm d} x +\OO((\log N)^2) \right] + \OO(|z_t -E_t|) \\
&=- \pi C_t \sqrt{\kappa_t + i \eta_t} + \OO((\log N)^2|z_t -E_t|).
\end{split}\end{align}
{
In the second line, we bounded the integral
$\int_{-(\log N)^{-1}}^{0} \frac{|x|^{3/2}}{|x||x- \kappa_t - i \eta_t|} {\rm d} x$ by $(\log N)^2$. We will prove this by considering two cases: the first case is when $\kappa_t \le 0$, the second is when $\kappa_t \ge 0$. In the first case, we can split the integral as
\begin{equation}
\int_{-(\log N)^{-1}}^{-2\max(-\kappa_t, \eta_t)} \frac{|x|^{3/2}}{|x||x- \kappa_t - i \eta_t|} {\rm d} x + \int_{-2\max(-\kappa_t, \eta_t)}^{0} \frac{|x|^{3/2}}{|x||x- \kappa_t - i\eta_t|} {\rm d} x,
\end{equation}
which can be bounded by $\OO(\sqrt{\max(-\kappa_t,\eta_t)}) + \OO\left(\frac{\max(-\kappa_t, \eta_t)^{3/2}}{\eta_t}\right)$.
We can bound this quantity by $(\log N)^2$, since it suffices that $\sqrt{\eta_t} \le (\log N)^2$ and $-\kappa_t \le \eta_t^{2/3} (\log N)^2$. These hold because on $\cH_t$ we have $-\kappa_t \le \eta_t^{4/5+ \epsilon}$, and because of the preservation property: $z_t(u) \in \cH_t$ implies $z_s(u) \in \cH_s$ for $s \le t$.
In the case that $\kappa_t \ge 0$, we have the integral bound
\begin{equation*}
\int_{-(\log N)^{-1}}^{0} \frac{|x|^{1/2}}{|x -\kappa_t|} {\rm d} x = \sqrt{\kappa_t} \int_{-(\kappa_t\log N)^{-1}}^{0} \frac{|x|^{1/2}}{|x-1|}{\rm d} x.
\end{equation*}
The integral on the right-hand side is $\OO(1+(\kappa_t \log N)^{-1/2})$, which gives the total bound $\OO(\sqrt{\kappa_t}+(\log N)^{-1/2})$; this is certainly less than $\OO((\log N)^2)$.
}
The last line of \eqref{e:sqrdiffeq} comes from evaluating the integral for real values of $z_t-E_t$ and extending by analytic continuation.
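Explicitly, for $w>0$ the substitution $x=u^2$ gives
\begin{equation*}
\int_{0}^{\infty} \frac{\sqrt{x}}{x(x+w)}\,{\rm d} x = 2\int_{0}^{\infty} \frac{{\rm d} u}{u^2+w} = \frac{\pi}{\sqrt{w}},
\end{equation*}
and both sides are analytic for $w$ off the negative real axis, so taking $w=z_t-E_t$ yields the last line of \eqref{e:sqrdiffeq}.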
We have that
\begin{equation}\label{e:term1}
2\partial_q\sqrt{\kappa_q + i \eta_q}=- \pi C_q + \OO\left((\log N)^2 \sqrt{|\kappa_q + \mathrm{i} \eta_q|}\right).
\end{equation}
We notice that the right-hand side is of order $\OO(1)$. By directly integrating both sides of the above equation from $q$ to $t$, we get
\begin{align}\label{e:term2ab}
\sqrt{|\kappa_q + \mathrm{i} \eta_q|}=\OO\left(|t-q|+\sqrt{|\kappa_t+\mathrm{i}\eta_t|}\right).
\end{align}
We can plug \eqref{e:term2ab} into \eqref{e:term1} and integrate both sides from $q=s$ to $q=t$ to get
\begin{align}\begin{split}
&\phantom{{}={}}\sqrt{\kappa_t + \mathrm{i} \eta_t} - \sqrt{\kappa_s+ \mathrm{i} \eta_s} + \frac{\pi}{2} \int_s^t C_q {\rm d} q \\
&=\OO\left((\log N)^2\int_{s}^t \left(|t-q|+\sqrt{|\kappa_t+\mathrm{i} \eta_t|}\right){\rm d} q\right)\\
&=\OO\left((\log N)^2\left((t-s)\sqrt{|\kappa_t + \mathrm{i}\eta_t|}+ (t-s)^2\right)\right).
\end{split}\end{align}
This finishes the proof of Lemma \ref{l:sqrtgrowth}.
\begin{comment}
One can apply Gronwall to study this lemmma. We will show here the upper bound.
One has here that
\begin{equation}
\partial_t(\exp(-(t-s) \log(N)^2) \sqrt{\kappa_t + i \eta_t} + \pi/2 {\mathrm{int}}_s^t C_q \exp(-(q-t)\log(N)^2) \le 0
\end{equation}
which implies that
\begin{align}
\sqrt{\kappa_t + i \eta_t} - \sqrt{\kappa_s+ i \eta_s} + \pi/2 {\mathrm{int}}_s^t C_q {\rm d} q & \le [1 - \exp(-(t-s)\log(N)^2)] \sqrt{\kappa_t + i\eta_t} \\
&+ {\mathrm{int}}_s^t C[1 - \exp(-(t-q)\log(N)^2)] {\rm d} q\\
& \le C(t-s) \log(N)^2 \sqrt{\kappa_t + i\eta_t} + C\log(N)^2 {\mathrm{int}}_s^{t} (t-q) {\rm d} q\\
&\le C(t-s) \log(N)^2 \sqrt{\kappa_t + i\eta_t}+ C\log(N)^2 (t-s)^2.
\end{align}
The lower bound can be proved in a similar way, which gives us the desired final estimate.
\end{comment}
\end{proof}
We can use the above expression when computing the quadratic variation integrals.
\begin{lemma} \label{Lem:VarEval}
Let $z_t$ and $z_t'$ be two points in the region $H_t$. Let $z_s = (\kappa_s + E_s) + i \eta_s$ and $z_s'= (\kappa_s'+ E_s) + i \eta_s'$ denote the two characteristics that terminate at $z_t$ and $z_t'$ respectively at time $t$. We assume here that $\eta_t < \eta_t'$. Consider two times $s$ and $t$ with $\sqrt{\eta^*}\ll s,t \ll (\log N)^{-3}$ and $(s-t)^2 \ll \min\{\sqrt{|\kappa_t + i \eta_t|},\sqrt{|\kappa_t' + i \eta_t'|} \}$.
We have that
\begin{align}\begin{split}\label{e:integralestimate}
&\phantom{{}={}}\int_{s}^{t}\frac{\sqrt{\kappa_q + i \eta_q}\sqrt{\kappa_q' + i \eta_q'}}{\sqrt{\kappa_t + i \eta_t}\sqrt{\kappa_t' + i \eta_t'}}\int_{-\infty}^{\infty} \frac{\hat{\rho}_q(x)}{(x -z_q)^2(x-z_q')^2} {\rm d} x \,{\rm d} q \\
&=
\frac{1}{4 \sqrt{\kappa_t + i \eta_t}\sqrt{\kappa_t' + i \eta_t'} (\sqrt{\kappa_t + i \eta_t} + \sqrt{\kappa_t' + i \eta_t'})^2}[1+ \OO(N^{-\tilde \epsilon})] \\
&- \frac{1}{4 \sqrt{\kappa_t + i \eta_t}\sqrt{\kappa_t' + i \eta_t'} (\sqrt{\kappa_s + i \eta_s} + \sqrt{\kappa_s' + i \eta_s'})^2}[1+ \OO(N^{-\tilde \epsilon})]
\end{split}\end{align}
\end{lemma}
\begin{proof}
By \eqref{e:term}, after cancelling the factor $\sqrt{\kappa_q + i \eta_q}\sqrt{\kappa_q' + i \eta_q'}$ against the ratio in the integrand of \eqref{e:integralestimate}, it suffices to compute, up to the constant prefactor $(\sqrt{\kappa_t + i \eta_t}\sqrt{\kappa_t' + i \eta_t'})^{-1}$,
\begin{align}
\int_{s}^{t} \frac{\pi}{2} \frac{C_q}{(\sqrt{\kappa_q + i \eta_q} + \sqrt{\kappa_q' + i \eta_q'})^3} {\rm d} q.
\end{align}
By Lemma \ref{l:sqrtgrowth}, up to a factor of $(1+ \OO(N^{-\tilde \epsilon}))$, this equals the following expression, where we set $A= \sqrt{\kappa_s + \mathrm{i} \eta_s}$, $A' = \sqrt{\kappa_s' + \mathrm{i} \eta_s'}$, and $B_q = \frac{\pi}{2} \int_{s}^{q} C_l {\rm d} l$:
\begin{align}
\int_{s}^{t} \frac{\pi}{2} \frac{C_q}{(A+A'- 2 B_q)^3} {\rm d} q.
\end{align}
We change variables from $q$ to $B_q$; the Jacobian of this transformation is ${\rm d} B_q = \frac{\pi}{2} C_q \,{\rm d} q$, so the integral becomes
\begin{equation}
\int_{B_s=0}^{B_t} \frac{1}{(A+A' -2 x)^3} {\rm d} x.
\end{equation}
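This last integral can be evaluated explicitly:
\begin{equation*}
\int_{0}^{B_t} \frac{1}{(A+A'-2x)^3} {\rm d} x = \frac{1}{4}\left(\frac{1}{(A+A'-2B_t)^2} - \frac{1}{(A+A')^2}\right),
\end{equation*}
and by Lemma \ref{l:sqrtgrowth} we have $A+A'-2B_t = (\sqrt{\kappa_t + i \eta_t} + \sqrt{\kappa_t' + i \eta_t'})(1+\OO(N^{-\tilde\epsilon}))$.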
Up to a factor of $(1+ \OO(N^{-\tilde \epsilon}))$, the above equals
\begin{equation}
\frac{1}{4(\sqrt{\kappa_t + i \eta_t} + \sqrt{\kappa_t' + i \eta_t'})^2}-\frac{1}{4(\sqrt{\kappa_s + i \eta_s} + \sqrt{\kappa_s' + i \eta_s'})^2}.
\end{equation}
Multiplying by the prefactor $(\sqrt{\kappa_t + i \eta_t}\sqrt{\kappa_t' + i \eta_t'})^{-1}$ yields \eqref{e:integralestimate}.
\end{proof}
We remark here that if we choose $t-s\gg \max \{\sqrt{|\kappa_t + i \eta_t|}, \sqrt{|\kappa_t' + i \eta_t'|}\}$, then the last line of \eqref{e:integralestimate} can be subsumed as an $\OO(N^{- \tilde \epsilon})$ error of the first term.
\begin{proof}[Proof of Lemma \ref{lem:IntFac}]
Through calculations similar to those performed earlier, we can compute the value of $\mathcal{I}_t (\mathcal{I}_s)^{-1}$ when $(s-t)^2(\log N)^2 \ll \sqrt{\kappa_t + i \eta_t}^{\,1+\tilde \epsilon}$.
\begin{align}\begin{split}
\int_{s}^t \partial_z \hat{m}_q(z_q) {\rm d} q &= \int_s^t\left[\int_{0}^{\infty} \frac{C_q \sqrt{x}}{(x+ \kappa_q + i \eta_q)^2} {\rm d} x+\OO((\log N)^2)\right] {\rm d} q \\
&= \int_{s}^{t} \frac{C_q \pi}{2 \sqrt{\kappa_q + i \eta_q}} {\rm d} q +\OO((t-s) (\log N)^2)\\
&= \ln\left(\frac{\sqrt{\kappa_s + i \eta_s}}{\sqrt{\kappa_t + i \eta_t}} \right) +\OO(N^{-\tilde \epsilon}).
\end{split}\end{align}
{ To get from the first line to the second line, we bounded the integral $\int_{-(\log N)^{-1}}^{0} \frac{|x|^{3/2}}{|x - \kappa_q - i \eta_q|^2} {\rm d} x$ by $\OO((\log N)^2)$. As before, the only interesting case is when $-\kappa_q \ge \eta_q$. The integral can be divided and bounded, up to a constant independent of $N$, as
\begin{equation}
\int_{-(\log N)^{-1}}^{2 \kappa_q} |x|^{-1/2} {\rm d} x + \int_{2 \kappa_q}^{0} \frac{|x|^{3/2}}{|x-\kappa_q -i \eta_q|^2} {\rm d} x \le |\kappa_q|^{1/2} + \frac{|\kappa_q|^{5/2}}{\eta_q^2}.
\end{equation}
The bound $\frac{|\kappa_q|^{5/2}}{\eta_q^2} \le (\log N)^2$ holds because points in $\cH_t$ satisfy $|\kappa_q| \le \eta_q^{4/5 + \epsilon}$; the other inequality holds since $|\kappa_q| \ll (\log N)^2$.
The final line requires the following comparison estimate:
\begin{align}
\int_{s}^{t} \frac{C_q \pi/2}{\sqrt{\kappa_q + i \eta_q}} {\rm d} q - \int_{s}^{t} \frac{C_q \pi/2}{ \sqrt{\kappa_s + i \eta_s} -\frac{\pi}{2} \int_s^{q} C_l {\rm d} l} {\rm d} q \le \int_{s}^{t} \frac{C_q \pi/2}{ (\sqrt{\kappa_s + i \eta_s} -\frac{\pi}{2} \int_s^{q} C_l {\rm d} l)^{1- \tilde \epsilon}} {\rm d} q,
\end{align}
where we used that for times $s<t$ as in the condition of Lemma \ref{lem:IntFac}, the difference between the denominator of the first integral and that of the second integral on the left-hand side of the above equation is $(\sqrt{\kappa_q + i \eta_q})^{1+ \tilde \epsilon}$; this gives the exponent $1- \tilde \epsilon$ in the denominator on the right-hand side. We can integrate the right-hand side to see that it is bounded by $\OO(|\sqrt{\kappa_s + i\eta_s}|^{\tilde \epsilon})$, which can in turn be bounded by $\OO(N^{-\tilde \epsilon})$. This shows that we have an additive error term of order $\OO(N^{-\tilde \epsilon})$. Taking exponentials gives a multiplicative error term as in Lemma \ref{lem:IntFac}.}
\end{proof}
\begin{comment}
\begin{lemma}
Under the assumption of the previous lemmas the joint characteristic function of
For $1\leq j\leq k$. Let
\begin{equation}
X_j(t)= \Im[z_t(u_j)]\sqrt{\frac{2}{\beta N}}{\mathrm{int}}_0^t\sum_{i=1}^N \frac{{\rm d} B_i(s)}{(\lambda_i(s)-z_s(u_j))^2},\quad j=1,2,\cdots, k.
\end{equation}
is the exponential of
\begin{align}\begin{split}
&\frac{1}{2}\left\langle \sum_{j=1}^k a_j\Re[X_j(t)]+b_j\Im[X_j(t)]\right\rangle\\
=&-\sum_{1\leq j,\ell\leq k}\Re\left[\frac{(a_j-\mathrm{i} b_j)(a_\ell+\mathrm{i} b_\ell)\Im[z_t(u_j)]\Im[z_t(u_\ell)]}{2\beta(\sqrt{\kappa_t(u_j) + i \eta_t(u_j)}+ \sqrt{\kappa_t(u_j) - i \eta_t(u_j)})^4}\right]\\
&-\sum_{1\leq j,\ell\leq k}\Re\left[\frac{(a_j+\mathrm{i} b_j)(a_\ell+\mathrm{i} b_\ell)\Im[z_t(u_j)]\Im[z_t(u_\ell)]}{2\beta(\sqrt{\kappa_t(u_j) + i \eta_t(u_j)}+ \sqrt{\kappa_t(u_j) + i \eta_t(u_j)})^4}\right]\\
&-\sum_{1\leq j,\ell\leq k}\Re\left[\frac{(a_j-\mathrm{i} b_j)(a_\ell-\mathrm{i} b_\ell)\Im[z_t(u_j)]\Im[z_t(u_\ell)]}{2\beta(\sqrt{\kappa_t(u_j) - i \eta_t(u_j)}+ \sqrt{\kappa_t(u_j) - i \eta_t(u_j)})^4}\right]
\end{split}\end{align}
\end{lemma}
\begin{proof}
For $1\leq j\leq k$. Let
\begin{equation}
X_j(t)= \Im[z_t(u_j)]\sqrt{\frac{2}{\beta N}}{\mathrm{int}}_0^t\sum_{i=1}^N \frac{{\rm d} B_i(t)}{(\lambda_i(s)-z_s(u_j))^2},\quad j=1,2,\cdots, k.
\end{equation}
We compute their joint characteristic function,
\begin{equation}\label{e:cfunc}
\mathbb{E}\left[\exp\left\{\mathrm{i}\sum_{j=1}^k a_j\Re[X_j(t)]+b_j\Im[X_j(t)]\right\}\right]
\end{equation}
Since $\sum_{j=1}^k a_j\Re[X_j(t)]+b_j\Im[X_j(t)]$ is a martingale, the following is also a martingale
\begin{equation}
\exp\left\{\mathrm{i} \sum_{j=1}^k a_j\Re[X_j(t)]+b_j\Im[X_j(t)]\}+\frac{1}{2}\left\langle \sum_{j=1}^k a_j\Re[X_j(t)]+b_j\Im[X_j(t)]\right\rangle\right\}
\end{equation}
In particular, its expectation is one. By Proposition \ref{p:var}, on the event $\Omega$ (as defined in the proof of Proposition \ref{Col:GrenEdg}), the quadratic variation is given by
\begin{align}\begin{split}
&\frac{1}{2}\left\langle \sum_{j=1}^k a_j\Re[X_j(t)]+b_j\Im[X_j(t)]\right\rangle\\
=&-\sum_{1\leq j,\ell\leq k}\Re\left[\frac{(a_j-\mathrm{i} b_j)(a_\ell+\mathrm{i} b_\ell)\Im[z_t(u_j)]\Im[z_t(u_\ell)]}{2\beta(\sqrt{\kappa_t(u_j) + i \eta_t(u_j)}+ \sqrt{\kappa_t(u_{\ell}) - i \eta_t(u_l)})^4}\right]\\
&-\sum_{1\leq j,\ell\leq k}\Re\left[\frac{(a_j+\mathrm{i} b_j)(a_\ell+\mathrm{i} b_\ell)\Im[z_t(u_j)]\Im[z_t(u_\ell)]}{2\beta(\sqrt{\kappa_t(u_j) + i \eta_t(u_j)}+ \sqrt{\kappa_t(u_l) + i \eta_t(u_l)})^4}\right]\\
&-\sum_{1\leq j,\ell\leq k}\Re\left[\frac{(a_j-\mathrm{i} b_j)(a_\ell-\mathrm{i} b_\ell)\Im[z_t(u_j)]\Im[z_t(u_\ell)]}{2\beta(\sqrt{\kappa_t(u_j) - i \eta_t(u_j)}+ \sqrt{\kappa_t(u_l) - i \eta_t(u_l)})^4}\right] \\
& + \OO\left(\frac{1}{N^3 \min_j[\Im[z_j]^{7/2}]}\right)
\end{split}\end{align}
The characteristic function is therefore the exponential of the previous function.
\end{proof}
\end{comment}
\section{Edge Universality}
We recall the $\beta$-Dyson Brownian motion from \eqref{DBM}
\begin{equation}
{\rm d} \lambda_i(t) = \sqrt{\frac{2}{\beta N}} {\rm d} B_i(t) +\frac{1}{N}\sum_{j:j\neq i}\frac{{\rm d} t}{\lambda_i(t)-\lambda_j(t)}-\frac{1}{2}V'(\lambda_i(t)){\rm d} t,\quad i=1,2,\cdots, N,
\end{equation}
where the initial data $\bm{\lambda}(0)=(\lambda_1,\lambda_2,\cdots,\lambda_N)$ satisfies assumption \ref{a:initial}, and the potential $V$ satisfies \ref{a:asumpV}.
\begin{comment}
We assume that the standard of comparison $\hat{m}$ satisfies the assumption \ref{Asmp:Reg} and, thus, we can write the measure associated to $\hat{m}_t$ as $\hat{\rho_t}(x) = C_t\sqrt{E_t-x} + O((E_t-x)^{3/2})$, where $\hat{m}_t$ is the solution to the McKean-Vlasov equation of potential $V$ with $\hat{m}_0$ as initial data.
\end{comment}
We have shown in the previous sections that applying the $\beta$-DBM with potential $V$ to $\bm{\lambda}(0)$ produces a distribution that satisfies edge rigidity at the optimal scale after a certain amount of time. Upon further applying the $\beta$-DBM with potential $V$, we can compare the local eigenvalue fluctuations to those of the $\beta$-ensemble with a quadratic potential. The main strategy is to couple the $\beta$-DBM process with a $\beta$-DBM process with quadratic potential started from its equilibrium measure. We will, as before, estimate the differences in the coupling via a continuous interpolation. A similar analysis was performed in \cite{Landon2016}.
Without loss of generality, in the rest of this section we assume that the initial data $\bm{\lambda}(0)$ satisfies optimal rigidity; otherwise, we can first apply the $\beta$-DBM until we have edge rigidity at the optimal scale.
We now define $\mu_i$ as the unique strong solution to the SDE,
\begin{equation}
{\rm d} \mu_i(t) = \sqrt{\frac{2}{\beta N}} {\rm d} B_i(t) +\frac{1}{N}\sum_{j:j\neq i}\frac{{\rm d} t}{\mu_i(t)-\mu_j(t)}-\frac{1}{2}W'(\mu_i(t)){\rm d} t,\quad i=1,2,\cdots, N,
\end{equation}
with initial data $\mu_i (0)$ being distributed like a $\beta$-ensemble
\begin{align}\label{e:beta2}
\frac{1}{Z_N}\prod_{i<j}|\mu_i-\mu_j|^\beta e^{-N\sum_{i}W(\mu_i)}\prod_{i}{\rm d} \mu_i,
\end{align}
where the potential $W$ is quadratic, and the equilibrium measure behaves like
\begin{align}
C_0 \sqrt{E_0-x},
\end{align}
as $x\rightarrow E_0$.
The main result of this section is the following.
\begin{theorem} \label{thm:maindbm}
Let $t_1= \OO(\frac{N^{\omega_1}}{N^{1/3}})$. With overwhelming probability, we have
\begin{equation}
| (\lambda_{ i} (t_1) - E_{t_1} ) - ( \mu_i - E_0 ) | \leq \frac{1}{N^{2/3+\varepsilon} }
\end{equation}
for a small $\varepsilon>0$ and for any fixed $1 \leq i \leq K$.
\end{theorem}
\subsection{Interpolation}
For clarity of presentation, we write up the proof of Theorem \ref{thm:maindbm} in the case that we have a local law and rigidity along the entire spectrum. In the case that there is less control of the eigenvalues above some scale $i^* \asymp N$, one can perform minor modifications to the construction of the interpolating process above the scale $i^*$, as in Section 3.1 of \cite{LandonEdge}.
We define the following interpolating processes for $0 \leq \alpha \leq 1$.
\begin{equation} \label{e:defz}
{\rm d} z_i (t, \alpha ) = \sqrt{\frac{2}{\beta N}} {\rm d} B_i(t) +\frac{1}{N}\sum_{j:j\neq i}\frac{{\rm d} t}{z_i (t, \alpha )-z_j (t, \alpha )}-\frac{1}{2}V_\alpha'(z_i (t, \alpha )){\rm d} t,\quad i=1,2,\cdots, N,
\end{equation}
with the potential
\begin{align}
V_\alpha=\alpha V +(1-\alpha)W,
\end{align}
and the initial data
\begin{equation}
z_i (0, \alpha ):= \alpha \lambda_i (0) + (1 - \alpha ) \mu_i (0),
\end{equation}
for $i=1,2,\cdots,N$.
We define the Stieltjes transform of the empirical measure of the particle process $z_i(t,\alpha)$ defined in \eqref{e:defz}:
\begin{equation}
m_t (z, \alpha ) = \frac{1}{N} \sum_i \frac{1}{ z_i (t, \alpha ) - z }.
\end{equation}
We recall that $\hat{\rho}_{ t}$ is the solution of the McKean-Vlasov equation with initial data given by the Stieltjes transform $\hat m_0(z)$. The edge of $\hat \rho_{t}$ will be designated by $E_t$.
We recall that by our Assumption \ref{a:initial}, we have
\begin{align}
\hat\rho_0(x)=f_0(x,1)\sqrt{E_0-x}=\mathrel{\mathop:}\hat \rho_0(x,1),
\end{align}
where $f_0(x,1)$ is analytic in a neighborhood of $x=E_0$. The equilibrium measure of the $\beta$-ensemble \eqref{e:beta2} has the form
\begin{align}
\hat\rho_0(x,0)=f_0(x,0)\sqrt{E_0-x}.
\end{align}
It turns out that the empirical distribution of the interpolated initial data $z_i(0,\alpha)$ is close to the profile
\begin{align}
\hat\rho_0(x,\alpha)=f_0(x,\alpha)\sqrt{E_0-x}.
\end{align}
Let
\begin{align}
\hat F(y,\alpha)=\int_y^\infty \hat\rho_0(x,\alpha){\rm d} x.
\end{align}
The profiles $\hat \rho_0(x,\alpha)$ for $0<\alpha<1$ are then determined by
\begin{align}
\hat F^{-1}(y,\alpha)=\alpha \hat F^{-1}(y,1)+(1-\alpha)\hat F^{-1}(y,0).
\end{align}
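Equivalently, differentiating this relation in $y$ and using $\partial_y \hat F^{-1}(y,\alpha)=-1/\hat\rho_0(\hat F^{-1}(y,\alpha),\alpha)$ gives the density relation
\begin{equation*}
\frac{1}{\hat\rho_0(\hat F^{-1}(y,\alpha),\alpha)} = \frac{\alpha}{\hat\rho_0(\hat F^{-1}(y,1),1)} + \frac{1-\alpha}{\hat\rho_0(\hat F^{-1}(y,0),0)}.
\end{equation*}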
\begin{proposition}
{Under Assumption \ref{a:initial} on the initial data to the optimal scale $\eta^* = N^{-1/3}$, we have
\begin{align}
\frac{1}{N}\sum_{i=1}^N \delta_{z_i(0,\alpha)}\sim \hat \rho_0(x,\alpha)\sim C_0\sqrt{E_0-x},
\end{align}
where the $\sim$ relation holds in the following sense:
\begin{equation}
|z_i(0,\alpha) - \gamma_i(0,\alpha)| < \frac{N^{\epsilon}}{N^{2/3} i^{1/3}},
\end{equation}
where $\gamma_i(0,\alpha)$ is the value of $\hat F^{-1}(i/N, \alpha)$.
This rigidity of the eigenvalues implies that the point process $\frac{1}{N}\sum_{i=1}^N \delta_{z_i(0,\alpha)} $ is close to the stable measure $\hat{\rho}_0(x,\alpha)$ in the sense of Assumption \ref{a:initial}, to the optimal scale.
}
\end{proposition}
One should note that the above proposition is quite simply a consequence of the definition of the transform $\hat F^{-1}$. The value of $\gamma_i(0,\alpha)$ is the linear interpolation between its values at the endpoints $\alpha=0$ and $\alpha=1$. Since optimal rigidity holds at these endpoints, we have it at $\alpha$.
{ \begin{proposition} \label{Prop:MeasureInterp}
Consider two measures $\rho_0(x)= f_0(x) \sqrt{E_0-x}$ and $\rho_1(x) =f_1(x)\sqrt{E_1-x}$, where $f_0(x)$ and $f_1(x)$ are analytic functions around their respective edges $E_0$ and $E_1$. Let the measure $\rho_{\alpha}$ be defined through its eigenvalue counting function $F(y,\alpha)$ via the relation
\begin{equation*}
F^{-1}(y,\alpha) = \alpha F^{-1}(y,1) + (1-\alpha)F^{-1}(y,0),
\end{equation*}
where $F(y,1)$ is the eigenvalue counting function of $\rho_1(x)$ and $F(y,0)$ is the eigenvalue counting function of $\rho_0(x)$. Then $\rho_\alpha$ is of the following form:
\begin{equation*}
\rho_{\alpha}(x) = f_{\alpha}(x) \sqrt{E_{\alpha} -x },
\end{equation*}
where $f_{\alpha}(x)$ is analytic around the new edge $E_{\alpha}= \alpha E_1 + (1-\alpha) E_0$.
\end{proposition}}
\begin{proof}
This statement can be proved by expanding in power series.
First expand $f_0(z)= a_0 + a_1(E_0-z) + a_2(E_0-z)^2+\cdots$ and $f_1(z)= b_0 + b_1(E_1-z) + b_2(E_1-z)^2+\cdots$.
Then we can write $\hat{F}(z_0,0)= \int_{z_0}^{E_0} [a_0 (E_0 -z)^{1/2} + a_1(E_0 -z)^{3/2} + a_2 (E_0 - z)^{5/2} +\cdots] {\rm d} z$, so, by integrating, we get
\begin{equation}
\hat{F}(z_0,0)= \tfrac{2}{3} a_0 (E_0 -z_0)^{3/2} +\tfrac{2}{5} a_1 (E_0-z_0)^{5/2} +\tfrac{2}{7} a_2 (E_0 - z_0)^{7/2} +\cdots.
\end{equation}
Taking the inverse of this expression gives fractional powers of $y$ with lowest power $y^{2/3}$. Since we are only considering $y$ positive, there is no issue in defining our expansions:
\begin{equation}
\hat{F}^{-1}(y,0) = E_0 + \hat{a}_0 y^{2/3} + \hat{a}_1 y^{4/3} + \hat{a}_2 y^{6/3}+\cdots.
\end{equation}
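Concretely, the substitution $u=(E_0-z_0)^{1/2}$ turns $\hat F(z_0,0)$ into a power series in $u$ containing only odd powers,
\begin{equation*}
y = \hat F(z_0,0) = \tfrac{2}{3}a_0 u^{3} + \tfrac{2}{5}a_1 u^{5} + \tfrac{2}{7}a_2 u^{7}+\cdots,
\end{equation*}
so that $y^{1/3}$ is an odd analytic function of $u$. Inverting, $u$ is an odd analytic series in $y^{1/3}$, and hence $z_0=E_0-u^2$ contains only even powers of $y^{1/3}$, which explains the form of $\hat F^{-1}(y,0)$.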
Upon making the substitution $(E_0 - z_0) \rightarrow (E_0 - z_0)^2$, we see that we are actually inverting a formal power series, and we can bound the coefficients exponentially by using Lagrange's inversion formula. After this procedure, we see that if we replace $y \rightarrow y^3$ in the above formula, we obtain a power series that is analytic in a small neighborhood of $0$.
One gets a similar power series expansion $\hat{F}^{-1}(y,1) = E_1 + \hat{b}_0 y^{2/3} + \hat{b}_1 y^{4/3}+\cdots$; we then obtain
\begin{equation}
\hat{F}^{-1}(y, \alpha) = \alpha E_1 + (1- \alpha) E_0 +(\alpha \hat{b}_0 + (1- \alpha) \hat{a}_0) y^{2/3} + (\alpha \hat{b}_1 + (1- \alpha) \hat{a}_1) y^{4/3} + (\alpha \hat{b}_2 + ( 1-\alpha) \hat{a}_2) y^{6/3}+\cdots.
\end{equation}
We now invert the function $\hat{F}^{-1}(y,\alpha)$; again, the inverse can be written in the form
\begin{equation}
\hat{F}(z, \alpha) = c_0(E_\alpha-z)^{3/2} + c_1(E_\alpha-z)^{5/2}+\cdots, \qquad E_\alpha = \alpha E_1 +(1- \alpha) E_0.
\end{equation}
Again, the coefficients appearing in the above expression are bounded exponentially on a small interval by Lagrange's inversion formula, and the above expression, once we factor out $(E_\alpha-z)^{3/2}$, represents an analytic power series in a small neighborhood of $E_\alpha$.
We can take the derivative of this expression to find the functional form of the measure: $\rho_{\alpha}(x) = f_{\alpha}(x) \sqrt{E_{\alpha}-x}$, where $f_{\alpha}$ is analytic in a neighborhood of $E_{\alpha}$.
\end{proof}
We let now $\hat\rho_t (x, \alpha )$ be the solution of the McKean-Vlasov equation with potential $V_\alpha$ and initial data $\hat\rho_0(x,\alpha)$, and denote its Stieltjes transform by $\hat m_t (z, \alpha)$. It follows from Proposition \ref{Prop:MeasureInterp} that these measures have a square root density with a right edge, which we denote by $E_t(\alpha)$. Let $\gamma_i (t, \alpha )$ be the classical eigenvalue locations with respect to $\hat\rho_t (x, \alpha)$. To be more precise, they are defined by
\begin{equation}
\frac{i}{N} = \int_{ \gamma_i (t, \alpha ) }^\infty \hat\rho_t (x, \alpha ) {\rm d} x.
\end{equation}
As a consequence of Proposition \ref{prop:TimStab}, we have the following proposition for the solutions of McKean-Vlasov equation in time for the interpolated measures.
\begin{proposition}
{Under Assumption \ref{a:initial} on the initial data to the optimal scale $\eta^* = N^{-1/3}$, we have
\begin{align}
\frac{1}{N}\sum_{i=1}^N \delta_{z_i(t,\alpha)}\sim \hat \rho_t(x,\alpha)\sim C_t(\alpha)\sqrt{E_t(\alpha)-x},
\end{align}
in the following sense:
there exists a small constant ${\frak e}>0$ so that the following estimates hold. We have
\begin{equation} \label{eqn:rigzi}
\sup_{ 0 \leq \alpha \leq 1 } \sup_{ 0 \leq t \leq T } |z_i (t, \alpha ) - \gamma_i (t, \alpha ) | \leq \frac{ M }{N^{2/3} i^{1/3}}
\end{equation}
for $1 \leq i \leq {\frak e} N$ with overwhelming probability, along with a corresponding local law.
In addition, we have the following estimates regarding the change of the measure and the edge in $\alpha$ and time $t$
\begin{align}
|C_t(\alpha)-C_t(0)|=\OO(t),\\
|E_t(\alpha)-E_t(0)|=\OO(t).
\end{align}
As a consequence of the scaling estimates, we have the rigidity results:
for $cN^{-2\epsilon} N^{-2/3} \le E \le 0$
\begin{equation} \label{e:interrig}
|\Re[\hat{m}_t(E+ E_t(\alpha),\alpha)- \hat{m}_t(E_t(\alpha), \alpha)] - \Re[\hat{m}_t(E+ E_t(0),0)- \hat{m}_t(E_t(0), 0)]| \le C N^{1/3+\epsilon} |E|
\end{equation}
and for $0 \le E \le cN^{-2 \epsilon} N^{-2/3}$,
\begin{equation}
|\Re[\hat{m}_t(E+ E_t(\alpha),\alpha)- \hat{m}_t(E_t(\alpha), \alpha)] - \Re[\hat{m}_t(E+ E_t(0),0)- \hat{m}_t(E_t(0), 0)]| \le C |E|^{1/2}N^{\epsilon}.
\end{equation}
For eigenvalue indices $i \le N^{{\omega}_A} \ll N$, we have the following estimate on the classical locations of the eigenvalues:
\begin{equation} \label{e:classloc}
|(\gamma_i(t,\alpha) - E_t(\alpha)) - (\gamma_i(t,0) - E_t(0))| \le N^{\epsilon}\frac{i^{2/3}}{N^{2/3}}
\end{equation}
}
\end{proposition}
The proof of the rigidity results is the same as that of Lemmas 7.11 and 7.12 of \cite{LandonEdge}. It only requires that the two measures are close near the edge up to a small multiplicative factor and that both exhibit square root behavior at the edge. As a remark, the corresponding result in \cite{LandonEdge} bounds the right-hand side of \eqref{e:classloc} as a function of $t$ and for a larger range of eigenvalues; however, in the proof of the coming Proposition \ref{prop:shortrangapprox}, where these estimates are used, only the weaker version presented here is needed.
\subsection{Short-range approximation}
We recall that the edge $E_t(\alpha)$ satisfies the differential equation
\begin{align}
{\rm d} E_t(\alpha)=- m_t (E_t (\alpha ), \alpha ) {\rm d} t- \frac{1}{2} V_\alpha'(E_t(\alpha)) {\rm d} t.
\end{align}
Combining this with the SDE for $z_i (t, \alpha)$, we get
\begin{align}\begin{split}
&{\rm d} \left(z_i (t, \alpha ) -E_t(\alpha)\right) = \sqrt{\frac{ 2}{ \beta N}} {\rm d} B_i(t) + \frac{1}{N} \sum_{j:j\neq i} \frac{1}{ z_i (t, \alpha ) - z_j (t, \alpha ) } {\rm d} t\\
&- \frac{1}{2} V_\alpha'(z_i(t,\alpha)) {\rm d} t
+ m_t (E_t (\alpha ), \alpha ) {\rm d} t + \frac{1}{2} V_\alpha'(E_t(\alpha)) {\rm d} t.
\end{split}\end{align}
{
The important effects of the edge behavior are due to short-range interactions. To quantify this, we introduce a set of index pairs $\cal A \subseteq \qq{N} \times \qq{N}$. We choose $\cal A$ to be symmetric, i.e., $(i, j) \in \cal A$ if and only if $(j, i) \in \cal A$. The definition of $\cal A$ requires the choice of
\begin{equation}
\ell := N^{{\omega}_\ell}.
\end{equation}
We let
\begin{equation}
\cal A := \left\{ (i, j) : |i-j| \leq \ell ( 10 \ell^2 + i^{2/3} + j^{2/3} ) \right\}.
\end{equation}
}
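The definition of $\cal A$ can be tested directly; the following sketch (with a hypothetical value of $\ell$) verifies the symmetry in $(i,j)$ and illustrates that near the edge the band has width of order $\ell^3$.

```python
# Membership test for the short-range index set
#   A = {(i, j) : |i - j| <= ell * (10*ell^2 + i^(2/3) + j^(2/3))},
# with a hypothetical value of ell = N^{omega_ell}.

def in_A(i, j, ell):
    return abs(i - j) <= ell * (10 * ell**2 + i ** (2 / 3) + j ** (2 / 3))

ell = 10
# The defining condition is symmetric under swapping i and j.
sym_ok = all(in_A(i, j, ell) == in_A(j, i, ell)
             for i in range(1, 40) for j in range(1, 40))
# Near the edge (small i), the band has width of order ell^3.
near = in_A(1, 10 * ell**3, ell)        # within the band
far = in_A(1, 20 * ell**3 + 1000, ell)  # outside the band
```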
For each $i$, we define the interval
\begin{align*}
I_i(t, \alpha)=[\gamma_{j_-}(t,\alpha)-E_t(\alpha), \gamma_{j_+}(t,\alpha)-E_t(\alpha)],
\end{align*}
where $j_-=\min\{j: (i,j)\in \cA\}$ and $j_+=\max\{j: (i,j)\in \cA\}$.
We introduce the short-range approximation $\tilde z_i(t,\alpha)$ of $z_i(t,\alpha)$, such that the difference $z_i(t,\alpha)-\tilde z_i(t,\alpha)$ is negligible for small indices $i$. The advantage of the new dynamics $\tilde z_i(t,\alpha)$ is that, for small indices $i$, the derivative $\partial_\alpha(\tilde z_i(t,\alpha)-E_t(\alpha))$ does not depend on particles far away. The second benefit is that there are fewer $\alpha$ dependencies near the edge, which will simplify the later analysis.
For $1 \leq i \leq N^{{\omega}_A}$,
\begin{align}\begin{split}\label{e:short}
&{\rm d} (\tilde z_i (t, \alpha )-E_t(\alpha)) = \sqrt{\frac{2}{\beta N}} {\rm d} B_i(t) + \frac{1}{N}
\sum_{j:(i,j)\in \cal A} \frac{{\rm d} t}{ \tilde z_i (t, \alpha ) - \tilde z_j (t, \alpha ) }
\\
&+ \int_{ I^c_i (t, 0) } \frac{\hat \rho_t (E+E_t(0),0)}{ \tilde z_i (t, \alpha ) -E_t(\alpha)- E } {\rm d} E \, {\rm d} t
+ \Re[ m_t (E_t (0), 0) ]{\rm d} t,
\end{split}\end{align}
and for $N^{{\omega}_A} \leq i$,
\begin{align}\begin{split}\label{e:long}
&{\rm d} (\tilde z_i (t, \alpha )-E_t(\alpha)) = \sqrt{\frac{2}{\beta N}} {\rm d} B_i(t) + \frac{1}{N} \sum_{j:(i,j)\in \cal A} \frac{{\rm d} t}{ \tilde z_i (t, \alpha ) - \tilde z_j (t, \alpha ) }\\
&+ \int_{ I^c_i (t, \alpha) } \frac{\hat \rho_t (E+E_t(\alpha) , \alpha )}{ \tilde z_i (t, \alpha )-E_t(\alpha) - E } {\rm d} E \, {\rm d} t
-\frac{1}{2} V_\alpha'(\tilde z_i(t,\alpha)){\rm d} t \\
&+ \Re[ m_t (E_t (\alpha), \alpha) ] {\rm d} t + \frac{1}{2}V_\alpha'(E_{t}(\alpha)){\rm d} t.
\end{split}\end{align}
{ One should notice that for particles near the edge, we have largely removed the dependence on $\alpha$. The point is that the effects of the interpolation are very small near the edge, so one can replace terms like $\Re[m_t(E_t(\alpha),\alpha)]$ by their counterparts at $\alpha =0$. Similarly, the effect of $V_{\alpha}'(E_t(\alpha)) - V_{\alpha}'(\tilde{z}_i(t,\alpha))$ is negligible near the edge. Since these error terms are small, and the differential kernel is a contraction in $\ell^p$ spaces, we are able to show that the difference made by the short-range approximation is small. The details of the proof are similar to those of Lemma 3.7 of \cite{LandonEdge}; we have the same rigidity estimates \eqref{eqn:rigzi} as well as the measure comparison estimates \eqref{e:interrig}--\eqref{e:classloc}, which allow us to show that the error made upon replacing the interpolating terms at $\alpha$ with terms at $0$ is negligible at scales $i \le N^{\omega_A}$.
\begin{proposition} \label{prop:shortrangapprox}
With the construction \eqref{e:short} and \eqref{e:long}, we have
\begin{align}
\sup_{0\leq \alpha\leq 1}\sup_{0\leq t\leq T}\max_{1\leq i\leq N}|z_i(t,\alpha)-\tilde z_i(t,\alpha)|\leq \frac{1}{N^{2/3-{\frak c}}},
\end{align}
and in particular,
\begin{align}
\sup_{0\leq \alpha\leq 1}\sup_{0\leq t\leq T}\sup_{1\leq i\leq K}
|z_i(t,\alpha)-\gamma_i(t,\alpha)|\leq \frac{M}{N^{2/3}i^{1/3}}.
\end{align}
\end{proposition}
In the following, we show that for times $t\gg N^{-1/3}$, $\max_i |\partial_\alpha\tilde z_i(t,\alpha)|$ is negligible.
Let $u_i (t, \alpha ) \deq \partial_\alpha (\tilde z_i (t, \alpha )-E_t(\alpha))$. We see that $u(t,\alpha)=(u_1(t,\alpha), u_2(t,\alpha),\cdots, u_N(t,\alpha))$ satisfies the equation,
\begin{equation}
\partial_t u(t,\alpha) = \cal L u(t,\alpha) + \cal E,
\end{equation}
where the operator $\cal L$ and the force term $\cal E$ are given as follows. The operator $\cal L$ is
\begin{equation}
\cal L = \cal B + \cal V,
\end{equation}
where
\begin{equation}
( \cal B u )_i = \frac{1}{N} \sum_{j: (i,j)\in \cal A} \frac{ u_j - u_i}{ ( \tilde z_i ( t, \alpha) - \tilde z_j ( t, \alpha) )^2},
\end{equation}
and, for $1 \leq i \leq N^{{\omega}_A}$,
\begin{equation}
\cal V_i = - \int_{ I_i (t, 0 ) } \frac{\hat \rho_t (E, 0) }{ ( \tilde z_i ( t, \alpha) - E )^2 } {\rm d} E,\quad \cal E=0,
\end{equation}
while for $ N^{{\omega}_A} \leq i$,
\begin{equation}
\cal V_i = - \int_{ I_i (t, \alpha ) } \frac{\hat \rho_t (E, \alpha ) }{ ( \tilde z_i (t, \alpha) - E )^2 } {\rm d} E,\quad |\cal E|\leq N^C.
\end{equation}
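As a structural remark (with a hypothetical particle configuration, not the actual dynamics), $\cal B$ acts as a weighted graph Laplacian: its rows sum to zero, so it annihilates constants, which underlies the contraction and finite speed of propagation properties used below.

```python
# Sketch with a hypothetical particle configuration: the operator B,
#   (B u)_i = (1/N) * sum_{j: (i,j) in A} (u_j - u_i) / (z_i - z_j)^2,
# is a weighted graph Laplacian, so it annihilates constant vectors.

N = 50
# square-root-edge-like spacing: z_i ~ -((i+1)/N)^{2/3} below the edge at 0
z = [-((i + 1) / N) ** (2 / 3) for i in range(N)]

def B_apply(u, band=5):
    """Apply B restricted to a short-range band |i - j| <= band."""
    out = []
    for i in range(N):
        acc = 0.0
        for j in range(max(0, i - band), min(N, i + band + 1)):
            if j != i:
                acc += (u[j] - u[i]) / (z[i] - z[j]) ** 2
        out.append(acc / N)
    return out

flat = B_apply([1.0] * N)
max_flat = max(abs(v) for v in flat)  # exactly zero: constants are annihilated
# Applying B to an indicator vector moves mass to the neighbors:
e = [1.0] + [0.0] * (N - 1)
Be = B_apply(e)  # Be[0] < 0 and Be[1] > 0
```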
As in \cite{LandonEdge}, the propagator of the operator $\cal L$ satisfies a finite speed of propagation estimate. From an energy estimate, again as in \cite{LandonEdge}, we get
\begin{align}\begin{split}
&\phantom{{}={}}N^{2/3}|(z_i(t,1)-E_t(1))-(z_i(t,0)-E_t(0))|\\
&=N^{2/3}|(\tilde z_i(t,1)-E_t(1))-(\tilde z_i(t,0)-E_t(0))|+\OO\left(N^{-{\frak c}}\right)\\
&=N^{2/3}\left|\int_0^1\partial_\alpha(\tilde z_i(t,\alpha)-E_t(\alpha))\, {\rm d} \alpha\right|+\OO\left(N^{-{\frak c}}\right)\\
&=\OO\left(N^{\varepsilon}/(N^{1/3}t)+N^{-{\frak c}}\right)=\oo(1),
\end{split}\end{align}
provided $t\gg N^{-1/3}$, and Theorem \ref{thm:maindbm} follows.
\section{Introduction}\label{sec:intro}
The \texttt{AdaBoost} algorithm \citep{Freund:Schapire1996a,Freund:Schapire1997}, which aims to construct a ``strong'' classifier by combining some ``weak'' learners (slightly better than random guess), has been one of the most influential classification algorithms \citep{Caruana:Niculescu-Mizil2006,Wu:Kumar2009}, and it has exhibited excellent performance both on benchmark datasets and real applications \citep{Bauer:Kohavi1999,Dietterich2000}.
Many studies are devoted to understanding the mysteries behind the success of \texttt{AdaBoost}, among which the margin theory proposed by Schapire et al. \cite{Schapire:Freund:Bartlett:Lee1998} has been very influential. For example, \texttt{AdaBoost} often tends to be empirically resistant (though not completely) to overfitting \citep{Breiman1998,Drucker:Cortes1996,Quinlan1996}, i.e., the generalization error of the combined learner keeps decreasing as its size becomes very large, even after the training error has reached zero; this seems to violate Occam's razor \citep{Blumer:Ehrenfeucht:Haussler:Warmuth1987}, i.e., the principle that less complex classifiers should perform better. This remains one of the most famous mysteries of \texttt{AdaBoost}. The margin theory provides the most intuitive and popular explanation of this mystery: \texttt{AdaBoost} tends to improve the margin even after the error on the training sample reaches zero.
However, Breiman \cite{Breiman1999} raised serious doubt about the margin theory by designing \texttt{arc-gv}, a boosting-style algorithm. This algorithm is able to maximize the \textit{minimum margin} over the training data, but its generalization error is high on empirical datasets. Thus, Breiman \cite{Breiman1999} concluded that the margin theory for \texttt{AdaBoost} failed. Breiman's argument was backed up with a minimum margin bound, which is tighter than the generalization bound given by Schapire et al. \cite{Schapire:Freund:Bartlett:Lee1998}, and a lot of experiments. Later, Reyzin and Schapire \cite{Reyzin:Schapire2006} found flaws in the design of these experiments: Breiman used CART trees \citep{Breiman:Friedman:Olshen:Stone1984} as base learners and fixed the number of leaves to control the complexity of the base learners. However, Reyzin and Schapire \cite{Reyzin:Schapire2006} found that the trees produced by \texttt{arc-gv} were usually much deeper than those produced by \texttt{AdaBoost}. Generally, of two trees with the same number of leaves, the deeper one has a larger complexity because more judgements are needed to make a prediction. Therefore, Reyzin and Schapire \cite{Reyzin:Schapire2006} concluded that Breiman's observation was biased by poor control of the model complexity. They repeated the experiments using decision stumps as base learners, since a decision stump makes a single split and thus has a fixed complexity, and observed that although \texttt{arc-gv} produced a larger minimum margin, its margin distribution was quite poor. Nowadays, it is well accepted that the margin distribution is crucial for relating the margin to the generalization performance of \texttt{AdaBoost}. To support the margin theory, Wang et al. \cite{Wang:Sugiyama:Yang:Zhou:Feng2011} presented a tighter bound in terms of the \textit{Emargin}, which was believed to be related to the margin distribution.
In this paper, we show that the minimum margin and the Emargin are special cases of the \textit{$k$th margin}, and that all previous margin bounds are single-margin bounds not truly based on the whole margin distribution. We then present a new empirical Bernstein bound, which slightly improves the bound in \citep{Maurer:Pontil2009} with a different proof technique. Based on this result, we prove a new generalization error bound for voting classifiers, which considers exactly the same factors as Schapire et al. \cite{Schapire:Freund:Bartlett:Lee1998} but is uniformly tighter than the bounds of Schapire et al. \cite{Schapire:Freund:Bartlett:Lee1998} and Breiman \cite{Breiman1999}. We therefore defend the margin-based explanation against Breiman's doubt. Furthermore, we present a lower generalization error bound for voting classifiers, and, by incorporating other factors such as the average margin and the margin variance, we prove a generalization error bound that depends heavily on the whole margin distribution. Finally, we make a comprehensive empirical comparison between \texttt{AdaBoost} and \texttt{arc-gv}, and find that \texttt{AdaBoost} usually, but not always, outperforms \texttt{arc-gv}, which is consistent with our theory.
The rest of this paper is organized as follows. We begin with some notations and background in Sections~\ref{sec:Notations} and \ref{sec:back}, respectively. Then, we prove the $k$th margin bound and discuss its relation to previous bounds in Section~\ref{sec:kmargin}. Our main results are presented in Section~\ref{sec:mainresult}, and detailed proofs are provided in Section~\ref{sec:pf}. We give empirical evidence in Section~\ref{sec:exper} and conclude this paper in Section \ref{sec:con}.
\section{Notations}\label{sec:Notations}
Let $\mathcal {X}$ and $\mathcal {Y}$ denote an input space and output space, respectively. For simplicity, we focus on binary classification problems, i.e., $\mathcal{Y}=\{+1,-1\}$. Denote by $D$ an (unknown) underlying probability distribution over the product space $\mathcal {X}\times\mathcal {Y}$. A training sample of size $m$
\[
S=\{(x_1,y_1), (x_2,y_2), \cdots, (x_m,y_m)\}
\]
is drawn independently and identically (i.i.d) according to distribution $D$. We use $\Pr_{D}[\cdot]$ to refer as the probability with respect to $D$, and $\Pr_{S}[\cdot]$ to denote the probability with respect to uniform distribution over the sample $S$. Similarly, we use $E_{D}[\cdot]$ and $E_{S} [\cdot]$ to denote the expected values, respectively. For an integer $m>0$, we set $[m]=\{1,2,\cdots,m\}$.
The Bernoulli Kullback-Leibler (or KL) divergence is defined as
\[
KL(q||p)= q\log\frac{q}{p}+(1-q)\log\frac{1-q}{1-p} \text{ for }0\leq p,q\leq 1.
\]
For a fixed $q$, $KL(q||p)$ is monotonically increasing in $p$ for $q\leq p<1$, and thus the inverse of $KL(q||p)$ for fixed $q$ is given by
\[
KL^{-1}(q;u)=\inf_{w}\left\{w\colon w\geq q\text{ and } KL(q||w)\geq u\right\}.
\]
Let $\mathcal{H}$ be a hypothesis space. Throughout this paper, we restrict $\mathcal{H}$ to be finite; similar considerations apply when $\mathcal{H}$ has finite VC-dimension. We define
\[
\mathcal{A}=\Big\{\frac{i}{|\mathcal{H}|}\colon i\in\big[|\mathcal{H}| \big] \Big\}.
\]
A base learning algorithm maps a distribution over $\mathcal {X}\times\mathcal {Y}$ to a base learner $h\in \mathcal{H}$, i.e., a function $h\colon\mathcal{X} \rightarrow \mathcal{Y}$. Let $\mathcal{C}(\mathcal{H})$ denote the convex hull of $\mathcal{H}$, i.e., a voting classifier $f\in \mathcal{C}(\mathcal{H})$ is of the following form
\[
f=\sum \alpha_ih_i \text{ with }\sum \alpha_i=1 \text{ and }\alpha_i\geq0.
\]
For $N\geq1$, denote by $\mathcal{C}_N(\mathcal{H})$ the set of unweighted averages over $N$ elements from $\mathcal{H}$, that is
\begin{equation}\label{eq:CN(H)}
\mathcal{C}_N(\mathcal{H})=\Big\{g\colon g=\sum_{j=1}^N \frac{h_j}{N},h_j\in \mathcal{H}\Big\}.
\end{equation}
For a voting classifier $f\in \mathcal{C}(\mathcal{H})$, we can associate a distribution $\mathcal{Q}(f)$ over $\mathcal{H}$ via the coefficients $\{\alpha_i\}$. For convenience, $g\in \mathcal{C}_N(\mathcal{H}) \sim\mathcal{Q}(f)$ means $g=\sum_{j=1}^N {h_j}/{N}$ where the $h_j$ are drawn i.i.d.\ from $\mathcal{Q}(f)$.
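The correspondence between $\mathcal{C}(\mathcal{H})$ and $\mathcal{C}_N(\mathcal{H})$ can be illustrated by simulation (the base learners and weights below are hypothetical): drawing $h_j \sim \mathcal{Q}(f)$ i.i.d.\ and averaging them without weights yields an element of $\mathcal{C}_N(\mathcal{H})$ whose expectation is $f$.

```python
import random

# Sketch of the sampling argument behind C_N(H). Base learners are
# hypothetical +/-1-valued functions represented by their predictions
# on a fixed sample of 4 instances.
random.seed(1)
H = [[1, 1, -1, 1], [1, -1, 1, 1], [-1, 1, 1, -1]]
alpha = [0.5, 0.3, 0.2]  # voting weights, summing to 1

def f(x_idx):
    """Weighted vote f(x) = sum_i alpha_i h_i(x)."""
    return sum(a * h[x_idx] for a, h in zip(alpha, H))

def sample_g(N):
    """Unweighted average of N i.i.d. draws from Q(f): an element of C_N(H)."""
    draws = random.choices(H, weights=alpha, k=N)
    return lambda x_idx: sum(h[x_idx] for h in draws) / N

# E[g(x)] = f(x), so for large N the sampled g approximates f pointwise.
g = sample_g(20000)
approx_err = max(abs(g(i) - f(i)) for i in range(4))
```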
For an instance $(x,y)$, the \emph{margin} with respect to the voting classifier $f=\sum \alpha_i h_i(x)$ is defined as $yf(x)$; in other words,
\[
yf(x)=\sum_{i\colon y=h_i(x)}\alpha_i - \sum_{i\colon y\neq h_i(x)}\alpha_i,
\]
which shows the difference between the weights of base learners that classify $(x,y)$ correctly and the weights of base learners that misclassify $(x,y)$. Therefore, the margin can be viewed as a measure of the confidence of the classification. Given a sample $S=\{(x_1,y_1), (x_2,y_2), \cdots, (x_m,y_m)\}$, we denote by $\hat{y}_1f(\hat{x}_1)$ the \textit{minimum margin} and by $E_S[yf(x)]$ the \textit{average margin}, defined respectively as follows:
\[
\hat{y}_1f(\hat{x}_1)=\min_{i\in[m]}\{y_if(x_i)\} \ \ \text{ and }\ \ E_S[yf(x)]=\sum_{i=1}^m \frac{y_if(x_i)}{m}.
\]
\section{Background}\label{sec:back}
In the statistics community, great efforts have been devoted to understanding how and why \texttt{AdaBoost} works. Friedman et al. \cite{Friedman:Hastie:Tibshirani2000} made an important stride by viewing \texttt{AdaBoost} as a stagewise optimization and relating it to fitting an additive logistic regression model. Various new boosting-style algorithms were developed by performing a gradient descent optimization of some potential loss functions \citep{Buhlmann:Yu2003,Mason:Baxter:Bartlett:Frean1999,Ratsch:Onoda:Muller2001}. Based on this optimization view, some boosting-style algorithms and their variants have been shown to be Bayes consistent under different settings \citep{Bartlett:Jordan:McAuliffe2006,Bartlett:Traskin2007,Bickel:Ritov:Zakai2006,Breiman2000tech,Jiang2004,Lugosi:Vayatis2004,Mukherjee:Rudin:Schapire2011,Zhang2004ann}. However, these theories cannot be used to explain the resistance of \texttt{AdaBoost} to overfitting, and some statistical views have been seriously questioned by Mease and Wyner \cite{Mease:Wyner2008} with empirical evidence. In this paper, we focus on the margin theory.
\begin{algorithm}[!t]\label{alg}
\caption{A unified description of \texttt{AdaBoost} and \texttt{arc-gv}} \textbf{Input}: Sample $S=\{(x_1,y_1),(x_2,y_2),\cdots,(x_m,y_m)\}$ and the number of iterations $T$.\vspace{+2mm}\\
\textbf{Initialization}: $D_1(i)=1/m$.
\begin{algorithmic}
\FOR{$t=1$ to $T$}
\STATE 1. Construct base learner $h_t\colon \mathcal{X}\to\mathcal{Y}$ using the distribution $D_t$.
\STATE 2. Choose $\alpha_t$.
\STATE 3. Update
\[
D_{t+1}(i)={D_t(i)\exp(-\alpha_ty_ih_t(x_i))}/{Z_t},
\]
where $Z_t$ is a normalization factor (such that $D_{t+1}$ is a distribution). \ENDFOR
\end{algorithmic}\vspace{+2mm}
\textbf{Output}: The final classifier $\text{sgn}[f(x)]$, where
\[
f(x)=\sum_{t=1}^T \frac{\alpha_t}{\sum_{t=1}^T\alpha_t}h_t(x).
\]
\end{algorithm}
Algorithm 1 provides a unified description of \texttt{AdaBoost} and \texttt{arc-gv}. The only difference between them lies in the choice of $\alpha_t$. In \texttt{AdaBoost}, $\alpha_t$ is chosen by
\[
\alpha_t=\frac{1}{2}\ln\frac{1+\gamma_t}{1-\gamma_t},
\]
where $\gamma_t= \sum_{i=1}^m D_t(i)y_ih_t(x_i)$ is called the \textit{edge} of $h_t$, which is an affine transformation of the error rate of $h_t(x)$. However, \texttt{arc-gv} sets $\alpha_t$ in a different way. Denote by $\rho_t$ the minimum margin of the voting classifier after round $t-1$, that is,
\[
\rho_t=\hat{y}_1 f_t(\hat{x}_1)\text{ with }\rho_1=0
\]
where
\[
f_t=\sum_{s=1}^{t-1}\frac{\alpha_s}{\sum_{s=1}^{t-1} \alpha_s}h_s(x).
\]
Then, \texttt{arc-gv} sets $\alpha_t$ to be
\[
\alpha_t=\frac{1}{2}\ln\frac{1+\gamma_t}{1-\gamma_t}- \frac{1}{2}\ln\frac{1+\rho_t}{1-\rho_t}.
\]
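The two step-size rules of Algorithm 1 can be compared directly; in the sketch below, the sample values of the edge $\gamma_t$ and the minimum margin $\rho_t$ are hypothetical.

```python
import math

# The AdaBoost and arc-gv choices of alpha_t from Algorithm 1.

def alpha_adaboost(gamma_t):
    return 0.5 * math.log((1 + gamma_t) / (1 - gamma_t))

def alpha_arcgv(gamma_t, rho_t):
    # arc-gv subtracts a correction based on the current minimum margin.
    return alpha_adaboost(gamma_t) - 0.5 * math.log((1 + rho_t) / (1 - rho_t))

gamma_t, rho_t = 0.3, 0.1  # hypothetical edge and minimum margin
a_ada = alpha_adaboost(gamma_t)
a_arc = alpha_arcgv(gamma_t, rho_t)
# With rho_t > 0, arc-gv takes a smaller step than AdaBoost;
# with rho_t = 0 (as in the first round) the two rules coincide.
```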
Schapire et al. \cite{Schapire:Freund:Bartlett:Lee1998} first proposed the margin theory for \texttt{AdaBoost} and upper bounded the generalization error as follows:
\begin{theorem}\citep{Schapire:Freund:Bartlett:Lee1998}\label{thm:Scha}
For any $\delta>0$ and $\theta>0$, with probability at least $1-\delta$ over the random choice of sample $S$ with size $m$, every voting classifier $f$ satisfies the following bound:
\[
\Pr_D[yf(x)<0]\leq \Pr_S[yf(x)\leq\theta]+O\left( \frac{1}{\sqrt{m}} \left(\frac{\ln m\ln|\mathcal{H}|}{\theta^2}+\ln\frac{1}{\delta}\right)^{1/2}\right).
\]
\end{theorem}
Breiman \cite{Breiman1999} provided a minimum margin bound for \texttt{arc-gv}, stated in our notation as Theorem~\ref{thm:brei}.
\begin{theorem}\citep{Breiman1999}\label{thm:brei}
If
\[
\theta=\hat{y}_1f(\hat{x}_1)>4\sqrt{\frac{2}{|\mathcal{H}|}}\text{ and } R=\frac{32\ln(2|\mathcal{H}|)} {m\theta^2}\leq2m,
\]
then, for any $\delta>0$, with probability at least $1-\delta$ over the random choice of sample $S$ with size $m$, every voting classifier $f$ satisfies the following bound:
\[
\Pr_D[yf(x)<0]\leq R\Big(\ln(2m)+ \ln\frac{1}{R}+1\Big)+ \frac{1}{m}\ln\frac{|\mathcal{H}|}{\delta}.
\]
\end{theorem}
Empirical results show that \texttt{arc-gv} usually generates a larger minimum margin but higher generalization error, and Breiman's bound is $O(\frac{\ln m}{m})$, tighter than the $O(\sqrt{\frac{\ln m}{m}})$ bound in Theorem~\ref{thm:Scha}. Thus, Breiman cast serious doubt on the margin theory. To support the margin theory, Wang et al. \cite{Wang:Sugiyama:Yang:Zhou:Feng2011} presented a tighter bound in terms of the \textit{Emargin} (Theorem~\ref{thm:wang}), which was believed to be related to the margin distribution. Notice that the factors considered by Wang et al. \cite{Wang:Sugiyama:Yang:Zhou:Feng2011} are different from those considered by Schapire et al. \cite{Schapire:Freund:Bartlett:Lee1998} and Breiman \cite{Breiman1999}.
\begin{theorem}\citep{Wang:Sugiyama:Yang:Zhou:Feng2011}\label{thm:wang}
For any $\delta>0$, with probability at least $1-\delta$ over the random choice of the sample $S$ with size $m$, every voting classifier $f$ satisfies the following bound:
\[
\Pr_D[yf(x)<0]\leq \frac{\ln|\mathcal{H}|}{m}+\inf_{q\in\{0,\frac{1}{m},\cdots,1\}} KL^{-1}(q;u[\hat{\theta}(q)]),
\]
where
\[
u[\hat{\theta}(q)]= \frac{1}{m}\Big( \frac{8\ln|\mathcal{H}|}{\hat{\theta}^2(q)} \ln\frac{2m^2}{\ln|\mathcal{H}|} +\ln|\mathcal{H}|+\ln\frac{m}{\delta}\Big)
\]
and $\hat{\theta}(q)=\sup\big\{\theta \in \big(\sqrt{{8}/{|\mathcal{H}|}},1\big]\colon \Pr_S[yf(x)\leq\theta]\leq q\big\}$.
\end{theorem}
Rather than considering the whole function space, much work has developed margin-based data-dependent bounds for the generalization error, e.g., via empirical cover numbers \citep{Shawe-Taylor:Williamson1999}, the empirical fat-shattering dimension \citep{Antos:Kegl:Linder:Lugosi2002}, and Rademacher and Gaussian complexities \citep{Koltchinskii:Panchanko2002,Koltchinskii:Panchanko2005}. Some of these bounds are provably sharper than Theorem~\ref{thm:Scha}, but it is difficult, or even impossible, to show directly that they are sharper than the minimum margin bound of Theorem~\ref{thm:brei}, and they fail to explain the resistance of \texttt{AdaBoost} to overfitting.
\section{The $k$th Margin Bound}\label{sec:kmargin}
Given a sample $S$ of size $m$, we define the \textit{$k$th margin} $\hat{y}_kf(\hat{x}_k)$ as the $k$th smallest margin over the sample $S$, i.e., the $k$th smallest value in $\{y_if(x_i)\colon i\in[m]\}$. The following theorem shows that the $k$th margin can be used to measure the performance of a voting classifier; its proof is deferred to Section~\ref{sec:pf1}.
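Concretely (with a hypothetical sample of margins), the $k$th margin is just an order statistic of the values $y_i f(x_i)$, with $k=1$ recovering the minimum margin:

```python
# The k-th margin is the k-th smallest value of y_i * f(x_i) over the
# sample; the margin values below are hypothetical.

def kth_margin(margins, k):
    """k-th smallest margin, 1-indexed as in the text."""
    return sorted(margins)[k - 1]

margins = [0.4, -0.1, 0.7, 0.05, 0.3]
min_margin = kth_margin(margins, 1)   # minimum margin
second = kth_margin(margins, 2)       # 2nd margin
avg_margin = sum(margins) / len(margins)
```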
\begin{theorem}\label{thm:kmar}
For any $\delta>0$ and $k\in [m]$, if $\theta=\hat{y}_{k} f(\hat{x}_{k})> \sqrt{{8}/{|\mathcal{H}|}}$, then with probability at least $1-\delta$ over the random choice of sample with size $m$, every voting classifier $f$ satisfies the following bound:
\begin{equation}\label{eq:kmar:re1}
\Pr_D[yf(x)<0]\leq \frac{\ln|\mathcal{H}|}{m}+KL^{-1}\Big(\frac{k-1}{m};\frac{q}{m}\Big)
\end{equation}
where
\[
q=\frac{8\ln(2|\mathcal{H}|)} {\theta^2}\ln\frac{2m^2} {\ln|\mathcal{H}|}+\ln|\mathcal{H}|+\ln\frac{m}{\delta}.
\]
In particular, when $k$ is a constant and $m>4k$, we have
\begin{equation}\label{eq:kmar:re2}
\Pr_D[yf(x)<0]\leq \frac{\ln|\mathcal{H}|}{m}+\frac{2}{m}\Big(\frac{8\ln(2|\mathcal{H}|)} {\theta^2} \ln\frac{2m^2}{\ln|\mathcal{H}|}+\ln|\mathcal{H}|+\ln\frac{km^{k-1}}{\delta}\Big).
\end{equation}
\end{theorem}
It is interesting to study the relation between Theorem~\ref{thm:kmar} and previous results, especially for Theorems~\ref{thm:brei} and \ref{thm:wang}. It is straightforward to get a result similar to Breiman's minimum margin bound in Theorem~\ref{thm:brei}, by setting $k=1$ in Eqn.~\eqref{eq:kmar:re2}:
\begin{corollary}
For any $\delta>0$, if $\theta=\hat{y}_{1} f(\hat{x}_{1})>\sqrt{{8}/{|\mathcal{H}|}}$, then with probability at least $1-\delta$ over the random choice of sample $S$ with size $m$, every voting classifier $f$ satisfies the following bound:
\[
\Pr_D[yf(x)<0]\leq \frac{\ln|\mathcal{H}|}{m} +\frac{2}{m}\Big( \frac{8\ln(2|\mathcal{H}|)}{\theta^2}\ln\frac{2m^2}{\ln|\mathcal{H}|} +\ln\frac{|\mathcal{H}|}{\delta}\Big).
\]
\end{corollary}
Notice that when $k$ is a constant, the bound in Eqn.~\eqref{eq:kmar:re2} is $O({\ln m}/{m})$ and the only difference lies in the coefficient. Thus, for large samples there is no essential difference in selecting a constant $k$th margin (such as the $2$nd margin, the $3$rd margin, etc.) to measure the confidence of classification.
Based on Theorem~\ref{thm:kmar}, it is also not difficult to get a result similar to the Emargin bound in Theorem~\ref{thm:wang} as follows:
\begin{corollary}\label{corol-wang}
For any $\delta>0$, if $\theta_k=\hat{y}_kf(\hat{x}_k)>\sqrt{8/|\mathcal{H}|}$, then with probability at least $1-\delta$ over the random choice of the sample $S$ with size $m$, every voting classifier $f$ satisfies the following bound:
\[
\Pr_D[yf(x)<0]\leq \frac{\ln|\mathcal{H}|}{m}+ \inf_{k\in[m]}KL^{-1} \Big(\frac{k-1}{m};\frac{q}{m}\Big)
\]
where
\[
q=\frac{8\ln(2|\mathcal{H}|)} {\theta_k^2}\ln\frac{2m^2} {\ln|\mathcal{H}|}+\ln|\mathcal{H}|+\ln\frac{m}{\delta}.
\]
\end{corollary}
From Corollary~\ref{corol-wang}, we can easily understand that the Emargin bound ought to be tighter than the minimum margin bound because the former takes the infimum range over $k\in[m]$ while the latter focuses only on the minimum margin.
In summary, the preceding analysis reveals that both the minimum margin and the Emargin are special cases of the $k$th margin; neither of them succeeds in relating the margin distribution to the generalization performance of \texttt{AdaBoost}.
\section{Main Results}\label{sec:mainresult}
We begin with the following empirical Bernstein bound, which is crucial for our main theorems:
\begin{theorem}\label{thm:tool}
For any $\delta>0$ and i.i.d.\ random variables $Z, Z_1, Z_2, \ldots, Z_m$ with $Z\in[0,1]$ and $m\geq4$, the following hold with probability at least $1-\delta$:
\begin{eqnarray}
E[Z]-\frac{1}{m}\sum_{i=1}^m Z_i &\leq& \sqrt{\frac{2\hat{V}_m\ln (2/\delta)}{m}} + \frac{7\ln(2/\delta)}{3m}, \label{eq:tool2} \\
E[Z]-\frac{1}{m}\sum_{i=1}^m Z_i &\geq& -\sqrt{\frac{2\hat{V}_m\ln (2/\delta)}{m}} - \frac{7\ln(2/\delta)}{3m}, \label{eq:tool1}
\end{eqnarray}
where $\hat{V}_m=\frac{1}{2m(m-1)}\sum_{i<j}(Z_i-Z_j)^2$.
\end{theorem}
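A Monte Carlo sanity check (a sketch, not a proof, with $\hat{V}_m$ taken literally as stated in the theorem) illustrates Eqn.~\eqref{eq:tool2}: for i.i.d.\ uniform variables on $[0,1]$, the deviation $E[Z]-\frac1m\sum Z_i$ should exceed the empirical Bernstein radius on far fewer than a $\delta$-fraction of repetitions.

```python
import math
import random

# Monte Carlo check of the upper deviation bound for uniform Z (E[Z] = 1/2).

def eb_radius(zs, delta):
    m = len(zs)
    v_hat = sum((zi - zj) ** 2
                for k, zi in enumerate(zs)
                for zj in zs[k + 1:]) / (2 * m * (m - 1))
    return (math.sqrt(2 * v_hat * math.log(2 / delta) / m)
            + 7 * math.log(2 / delta) / (3 * m))

random.seed(0)
delta, m, reps = 0.05, 200, 100
violations = 0
for _ in range(reps):
    zs = [random.random() for _ in range(m)]
    if 0.5 - sum(zs) / m > eb_radius(zs, delta):
        violations += 1
# violations should be well below delta * reps = 5.
```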
It is noteworthy that the bound in Eqn.~\eqref{eq:tool2} is similar to, but slightly improves, the bound of Maurer and Pontil \cite[Theorem 4]{Maurer:Pontil2009}, and we also present a lower bound in Eqn.~\eqref{eq:tool1}. The proof, which is simple, straightforward, and different from that in \citep{Maurer:Pontil2009}, is deferred to Section~\ref{sec:pf:tool}.\vspace{0.15in}
We now present our first main theorem:
\begin{theorem}\label{thm:our1}
For any $\delta>0$, with probability at least $1-\delta$ over the random choice of sample $S$ with size $m\geq4$, every voting classifier $f$ satisfies the following bound:
\[
\Pr_D[yf(x)<0] \leq \frac{2}{m}+\inf_{\theta\in (0,1]} \left[\Pr_S[yf(x)<\theta]+ \frac{7\mu+3\sqrt{2\mu}}{3m}+\sqrt{\frac{2\mu}{m}\Pr_S[yf(x)<\theta]}\right],
\]
where
\[
\mu=\frac{8}{\theta^2}\ln m\ln(2|\mathcal{H}|)+ \ln\frac{2|\mathcal{H}|}{\delta}.
\]
\end{theorem}
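Numerically, the infimum over $\theta$ in the theorem can be approximated on a grid; in the sketch below, the sample size, hypothesis-space size, confidence level and empirical margins are all hypothetical.

```python
import bisect
import math

# Evaluate the bracketed expression of the bound on a grid of theta and
# take the minimum, mimicking the infimum over theta in (0, 1].
m, H_size, delta = 100_000, 100, 0.05
margins = sorted(0.5 + 0.3 * i / m for i in range(m))  # toy margins in [0.5, 0.8)

def bound_term(theta):
    mu = (8.0 / theta**2) * math.log(m) * math.log(2 * H_size) \
        + math.log(2 * H_size / delta)
    p = bisect.bisect_left(margins, theta) / m  # Pr_S[yf(x) < theta]
    return p + (7 * mu + 3 * math.sqrt(2 * mu)) / (3 * m) \
        + math.sqrt(2 * mu * p / m)

best = 2.0 / m + min(bound_term(k / 100) for k in range(1, 101))
# The optimal theta balances Pr_S[yf(x) < theta] against the complexity
# term mu ~ 1/theta^2; here the minimum is attained near theta = 0.5,
# the smallest margin in the toy sample.
```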
The proof is based on the techniques developed by Schapire et al. \cite{Schapire:Freund:Bartlett:Lee1998}; the main difference is that we utilize the empirical Bernstein bound of Eqn.~\eqref{eq:tool2} in Theorem~\ref{thm:tool} in the derivation of the generalization error. The detailed proof is deferred to Section~\ref{sec:pf2}.
It is noteworthy that Theorem~\ref{thm:our1} bounds the generalization error in terms of the empirical margin distribution $\Pr_S[yf(x)\leq\theta]$, the training sample size and the hypothesis complexity; in other words, this bound considers exactly the same factors as Schapire et al. \cite{Schapire:Freund:Bartlett:Lee1998} in Theorem~\ref{thm:Scha}. However, the following corollary shows that the bound in Theorem~\ref{thm:our1} is tighter than the bound of Schapire et al. \cite{Schapire:Freund:Bartlett:Lee1998} in Theorem~\ref{thm:Scha}, as well as the minimum margin bound of Breiman \cite{Breiman1999} in Theorem~\ref{thm:brei}.
\begin{corollary}\label{coro:tighter}
For any $\delta>0$, if the minimum margin $\theta_1=\hat{y}_1f(\hat{x}_1)>0$ and $m\geq4$, then we have
\begin{equation}\label{eq:tight1}
\inf_{\theta\in (0,1]} \left[\Pr_S[yf(x)<\theta]+ \frac{7\mu+3\sqrt{2\mu}}{3m}+ \sqrt{\frac{2\mu}{m}\Pr_S[yf(x)<\theta]}\right]
\leq \frac{7\mu_1+3\sqrt{2\mu_1}}{3m},
\end{equation}
where $\mu={8\ln m}\ln(2|\mathcal{H}|)/{\theta^2}+ \ln({2|\mathcal{H}|}/ {\delta})$ and $\mu_1={8\ln m}\ln(2|\mathcal{H}|)/{\theta_1^2}+ \ln({2|\mathcal{H}|}/ {\delta})$; moreover, if the followings hold
\begin{eqnarray}
&\theta_1=\hat{y}_1f(\hat{x}_1)>4 \sqrt{\frac{2}{|\mathcal{H}|}}\label{eq:t1}&\\
&R=\frac{32\ln(2|\mathcal{H}|)} {m\theta_1^2}\leq2m\label{eq:t2}&
&m\geq \max\Big\{4, \exp\Big(\frac{\theta_1^2}{4\ln(2|\mathcal{H}|)} \ln\frac{|\mathcal{H}|}{\delta}\Big)\Big\}\label{eq:con},&
\end{eqnarray}
then we have
\begin{multline}\label{eq:tight2}
\frac{2}{m}+\inf_{\theta\in (0,1]} \left[\Pr_S[yf(x)<\theta]+ \frac{7\mu+ 3 \sqrt{2\mu}}{3m}+ \sqrt{\frac{2\mu}{m}\Pr_S[yf(x)<\theta]}\right]\\
\leq R\Big(\ln(2m)+ \ln\frac{1}{R}+1\Big)+ \frac{1}{m} \ln\frac{|\mathcal{H}|}{\delta}.
\end{multline}
\end{corollary}
This proof is deferred to Section~\ref{sec:pf4}. From Eqn.~\eqref{eq:tight1}, we can see clearly that the bound of Theorem~\ref{thm:our1} is $O(\ln m/m)$, uniformly tighter than the bound of Schapire et al. \cite{Schapire:Freund:Bartlett:Lee1998} in Theorem~\ref{thm:Scha}. In fact, we can guarantee that the bound of Theorem~\ref{thm:our1} is $O(\ln m/m)$ even under the weaker condition that $\hat{y}_kf(\hat{x}_k)>0$ for some $k\leq O(\ln m)$. It is also noteworthy that Eqns.~\eqref{eq:t1} and \eqref{eq:t2} are used here to guarantee the conditions of Theorem~\ref{thm:brei}, and Eqn.~\eqref{eq:tight2} shows that the bound of Theorem~\ref{thm:our1} is tighter than Breiman's minimum margin bound of Theorem~\ref{thm:brei} for large samples.
Breiman \cite{Breiman1999} doubted the margin theory based on two observations: i) the minimum margin bound of Breiman \cite{Breiman1999} is tighter than the margin distribution bound of Schapire et al. \cite{Schapire:Freund:Bartlett:Lee1998}, suggesting that the minimum margin is more essential than the margin distribution for characterizing the generalization performance; ii) \texttt{arc-gv} maximizes the minimum margin but demonstrates worse empirical performance than \texttt{AdaBoost}. However, our result shows that the margin distribution bound in Theorem~\ref{thm:Scha} can be greatly improved so that it is tighter than the minimum margin bound, and it is therefore natural that \texttt{AdaBoost} outperforms \texttt{arc-gv} empirically on some datasets; in a word, our results provide a complete answer to Breiman's doubt on the margin theory. \vspace{0.15in}
We can also give a lower bound for generalization error as follows:
\begin{theorem}\label{thm:lower1}
For any $\delta>0$, with probability at least $1-\delta$ over the random choice of sample $S$ with size $m\geq4$, every voting classifier $f$ satisfies the following bound:
\[
\Pr_D[yf(x)<0]\geq \sup_{\theta\in (0,1]}\left[ \Pr_{S}[yf(x)<-\theta] -\sqrt{\frac{2\mu}{m}\Pr_S[yf(x)<0]}-\frac{7\mu+3\sqrt{2\mu}}{3m} \right] -\frac{2}{m}
\]
where $\mu={8\ln m}\ln(2|\mathcal{H}|)/{\theta^2}+ \ln({2|\mathcal{H}|}/{\delta})$.
\end{theorem}
The proof is based on Eqn.~\eqref{eq:tool1} in Theorem~\ref{thm:tool} and we defer it to Section~\ref{sec:pf:lower}. We now introduce the second main result as follows:
\begin{theorem}\label{thm:our2}
For any $\delta>0$, with probability at least $1-\delta$ over the random choice of sample $S$ with size $m\geq4$, every voting classifier $f$ satisfies the following bound:
\begin{multline*}
\Pr_D[yf(x)<0] \leq \frac{1}{m^{50}}+\inf_{\theta\in (0,1]}\left[\Pr_S[yf(x)<\theta]+ \frac{\sqrt{6\mu}}{m^{3/2}}+ \frac{7\mu}{3m}\right.\\
\left.+ \sqrt{\frac{2\mu}{m}\hat{\mathcal{I}}(\theta)}+\exp\Big(\frac{-2\ln m}{(1-E^2_{S}[yf(x)]+\theta/9)}\Big)\right]
\end{multline*}
where $\mu={144}\ln m\ln(2|\mathcal{H}|)/{\theta^2}+\ln({2|\mathcal{H}|}/\delta)$ and $\hat{\mathcal{I}}(\theta)=\Pr_S[yf(x)<\theta] \Pr_S[yf(x)\geq2\theta/3]$.
\end{theorem}
In almost all boosting experiments, the average margin $E_S[yf(x)]$ is positive. Thus, the bound of Theorem~\ref{thm:our2} becomes tighter as the average margin is enlarged. The statistic $\hat{\mathcal{I}}(\cdot)$ reflects the margin variance in some sense, and the term involving $\hat{\mathcal{I}}(\cdot)$ could be small or even vanish outside a small interval when the variance is small. Similarly to the proof of Eqn.~\eqref{eq:tight1}, we can show that the bound of Theorem~\ref{thm:our2} is still $O(\ln m/m)$.
Theorem~\ref{thm:our2} provides theoretical support for the suggestion of Reyzin and Schapire \cite{Reyzin:Schapire2006} that the average margin can be used to measure performance. However, it is noteworthy that merely considering the average margin is insufficient to bound the generalization error tightly, as shown by the simple example in Figure~\ref{fig1}. Indeed, ``average'' and ``variance'' are two important statistics for capturing a distribution, and thus it is reasonable that both the average margin and the margin variance are considered in Theorem~\ref{thm:our2}.
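The empirical quantities entering the bound of Theorem~\ref{thm:our2} are directly computable from a sample of margins $\{y_if(x_i)\}$; the following is a minimal sketch (function and variable names are ours):

```python
import numpy as np

def margin_statistics(margins, theta):
    """Return the average margin E_S[yf(x)], the margin-distribution term
    Pr_S[yf(x) < theta], and the variance-like statistic
    I_hat(theta) = Pr_S[yf(x) < theta] * Pr_S[yf(x) >= 2 theta / 3]."""
    margins = np.asarray(margins, dtype=float)
    p_below = np.mean(margins < theta)
    i_hat = p_below * np.mean(margins >= 2.0 * theta / 3.0)
    return np.mean(margins), p_below, i_hat
```

When the margins concentrate around a positive average, $\hat{\mathcal{I}}(\theta)$ vanishes for most choices of $\theta$, which is exactly the regime in which the bound of Theorem~\ref{thm:our2} is small.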
\begin{figure}[!t]
\centering
\begin{minipage}{1\linewidth}
\includegraphics[width=6.5in]{fig.eps}
\caption{Each curve represents a voting classifier. The $X$-axis and $Y$-axis denote instance and margin, respectively, and a uniform distribution is assumed on the instance space. The voting classifiers $h_1$, $h_2$ and $h_3$ have the same average margin but different generalization error rates: ${1}/{2}$, ${1}/{3}$ and $0$.}\label{fig1}
\end{minipage}
\end{figure}
\section{Proofs}\label{sec:pf}
In this section, we provide the detailed proofs for the main theorems and corollaries, and we begin with a series of useful lemmas as follows:
\begin{lemma}[Chernoff bound \citep{Chernoff1952}]\label{lem:Chern} Let $X, X_1, X_2, \ldots, X_m$ be i.i.d. random variables with $X\in[0,1]$. Then, the following inequalities hold for any $\epsilon>0$,
\begin{eqnarray*}
&&\Pr\left[\frac{1}{m}\sum_{i=1}^m X_i\geq E[X]+\epsilon\right] \leq\exp\left(-\frac{m\epsilon^2}{2}\right),\\
&&\Pr\left[\frac{1}{m}\sum_{i=1}^mX_i\leq E[X]-\epsilon\right]\leq \exp\left(-\frac{m\epsilon^2}{2}\right).
\end{eqnarray*}
\end{lemma}
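As a quick numerical sanity check of the upper-tail inequality in Lemma~\ref{lem:Chern}, one can compare the empirical tail frequency of Bernoulli sample means against the stated bound (the sample size, deviation, and trial count below are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(0)
m, eps, trials = 100, 0.2, 20000

# X_i ~ Bernoulli(1/2) takes values in [0, 1], so E[X] = 1/2.
sample_means = rng.integers(0, 2, size=(trials, m)).mean(axis=1)
empirical_tail = np.mean(sample_means >= 0.5 + eps)
chernoff_bound = np.exp(-m * eps ** 2 / 2)  # the bound in the lemma above
```

The empirical tail is orders of magnitude below the bound here; the $\exp(-m\epsilon^2/2)$ form stated above is looser than the sharpest Hoeffding constant $\exp(-2m\epsilon^2)$.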
\begin{lemma}[Relative entropy Chernoff bound \citep{Hoeffding1963}]\label{lem:entropy}
The following holds for any $0<\epsilon<1$ with $(k-1)/m\leq\epsilon$,
\[
\sum_{i=0}^{k-1}{m\choose i} \epsilon^i(1-\epsilon)^{m-i} \leq \exp\left(-m KL\left(\frac{k-1}{m}\Big|\Big|\epsilon\right)\right).
\]
\end{lemma}
\begin{lemma}[Bernstein inequalities \citep{Bernshtein1946}]
Let $X, X_1, X_2, \ldots, X_m$ be i.i.d. random variables with $X_i\in[0,1]$. Then, for any $\delta>0$, the following hold with probability at least $1-\delta$,
\begin{eqnarray}
E[X]-\frac{1}{m}\sum_{i=1}^mX_i&\leq& \sqrt{\frac{2V(X)\ln1/\delta}{m}}+\frac{\ln1/\delta}{3m},\label{eq:bernineq1}\\
E[X]-\frac{1}{m}\sum_{i=1}^mX_i&\geq& -\sqrt{\frac{2V(X)\ln1/\delta}{m}}-\frac{\ln1/\delta}{3m}, \label{eq:bernineq2}
\end{eqnarray}
where $V(X)$ denotes the variance $E[(X-E[X])^2]$.
\end{lemma}
\subsection{Proof of Theorem~\ref{thm:kmar}}\label{sec:pf1}
We begin with a lemma as follows:
\begin{lemma}\label{lem-temp0}
Let $f\in\mathcal{C}(\mathcal{H})$ and let $g\in \mathcal{C}_N(\mathcal{H})$ be chosen i.i.d. according to distribution $\mathcal{Q}(f)$. If $\hat{y}_{k} f(\hat{x}_{k})\geq \theta$ and $\hat{y}_{k} g(\hat{x}_{k})\leq \alpha$ with $\theta>\alpha$, then there is an instance $(x_i,y_i)$ in $S$ such that $y_{i} f(x_{i})\geq \theta$ and $y_{i} g(x_{i})\leq \alpha$.
\end{lemma}
\proof There exists a bijection between $\{y_jf(x_j)\colon j\in[m]\}$ and $\{y_jg(x_j)\colon j\in [m]\}$ according to the original position in $S$. Suppose $\hat{y}_{k} f(\hat{x}_{k})$ corresponds to $\hat{y}_lg(\hat{x}_l)$ for some $l$. If $l\leq k$ then the example $(\hat{x}_k,\hat{y}_k)$ of $\hat{y}_{k} f(\hat{x}_{k})$ is desired; otherwise, except for $(\hat{x}_k,\hat{y}_k)$ of $\hat{y}_{k} f(\hat{x}_{k})$ in $S$, there are at least $m-k$ elements larger than or equal to $\theta$ in $\{y_jf(x_j)\colon j\in[m]\setminus\{k\} \}$ but at most $m-k-1$ elements larger than $\alpha$ in $\{y_jg(x_j)\colon j\in[m]\setminus\{l\}\}$. This completes the proof from the bijection.\qed\vspace{0.15in}
\textit{Proof of Theorem~\ref{thm:kmar}:} For every $f\in\mathcal{C}(\mathcal{H})$, we can construct a $g\in \mathcal{C}_N(\mathcal{H})$ by choosing $N$ elements i.i.d. according to distribution $\mathcal{Q}(f)$, and thus $E_{g\sim\mathcal{Q}(f)}[g]=f$. For $\alpha>0$, Chernoff's bound in Lemma~\ref{lem:Chern} gives
\begin{eqnarray}
\Pr_D[yf(x)<0]&=&\Pr_{D,\mathcal{Q}(f)}[yf(x)<0,yg(x)\geq\alpha]+ \Pr_{D,\mathcal{Q}(f)}[yf(x)<0,yg(x)<\alpha]\nonumber\\
&\leq&\exp(-{N\alpha^2}/{2})+\Pr_{D,\mathcal{Q}(f)}[yg(x)<\alpha].\label{eq:thm1:temp0}
\end{eqnarray}
For any $\epsilon_N>0$, we consider the following probability:
\begin{eqnarray}
&&\Pr_{S\sim D^m}\left[\Pr_D[yg(x)<\alpha]> I[\hat{y}_{k} g(\hat{x}_{k}) \leq \alpha]+\epsilon_N \right]\nonumber \\
&&\leq \Pr_{S\sim D^m} \left[\hat{y}_{k} g(\hat{x}_{k})>\alpha \left|\Pr_D[yg(x)<\alpha] >\epsilon_N\right.\right]\nonumber \\
&&\leq \sum_{i=0}^{k-1}{m\choose i} \epsilon_N^i(1-\epsilon_N)^{m-i} \label{eq:thm1:temp1}
\end{eqnarray}
where $\hat{y}_{k} g(\hat{x}_{k})$ denotes the $k$th margin with respect to $g$. For any $k$, Eqn.~\eqref{eq:thm1:temp1} can be bounded by $\exp\big(-m KL\big(\frac{k-1}{m} \big|\big|\epsilon_N\big)\big)$ from Lemma~\ref{lem:entropy}; for constant $k$ with $m > 4k$, we have
\[
\sum_{i=0}^{k-1}{m\choose i} \epsilon_N^i(1-\epsilon_N)^{m-i}\leq k(1-\epsilon_N)^{m/2} {m\choose k-1}\leq km^{k-1}(1-\epsilon_N)^{m/2}.
\]
By using the union bound and $|\mathcal{C}_N(\mathcal{H})|\leq |\mathcal{H}|^{N}$, we have, for any $k\in[m]$,
\begin{eqnarray*}
&&\Pr_{S\sim {D}^m,g\sim \mathcal{Q}(f)}\left[\exists g\in \mathcal{C}_N(\mathcal{H}), \exists \alpha \in \mathcal{A},\Pr_D[yg(x)<\alpha]> I[\hat{y}_{k} g(\hat{x}_{k})\leq \alpha]+\epsilon_N\right]\\
&&\quad\quad \leq |\mathcal{H}|^{N+1} \exp\left(-m KL\Big(\frac{k-1}{m}\big|\big|\epsilon_N\Big)\right).
\end{eqnarray*}
Setting $\delta_N=|\mathcal{H}|^{N+1} \exp\big(-m KL\big(\frac{k-1}{m} \big|\big| \epsilon_N\big) \big)$ gives $\epsilon_N=KL^{-1}\big(\frac{k-1}{m};\frac{1}{m} \ln\frac{|\mathcal{H}|^{N+1}}{\delta_N}\big)$. Thus, with probability at least $1-\delta_N$ over sample $S$, for all $f\in \mathcal{C}(\mathcal{H})$ and all $\alpha\in \mathcal{A}$, we have
\begin{equation}\label{eq:thm1:temp2}
\Pr_D[yg(x)<\alpha]\leq I[\hat{y}_{k} g(\hat{x}_{k}) \leq \alpha]
+KL^{-1}\left(\frac{k-1}{m};\frac{1}{m} \ln\frac{|\mathcal{H}|^{N+1}}{\delta_N}\right).
\end{equation}
Similarly, for constant $k$, with probability at least $1-\delta_N$ over sample $S$, it holds that
\begin{equation}\label{eq:thm1:temp3}
\Pr_D[yg(x)<\alpha]\leq I[\hat{y}_{k} g(\hat{x}_{k})\leq\alpha]
+\frac{2}{m}\ln\frac{km^{k-1}|\mathcal{H}|^{N+1}}{\delta_N}.
\end{equation}
From $E_{g\sim \mathcal{Q}(f)}[I[\hat{y}_{k} g(\hat{x}_{k})\leq \alpha]] =\Pr_{ g \sim \mathcal{Q}(f)}[\hat{y}_{k} g(\hat{x}_{k})\leq \alpha]$, we have, for any $\theta>\alpha$,
\begin{equation}\label{eq:thm1:temp5}
\Pr_{g\sim \mathcal{Q}(f)}[\hat{y}_{k} g(\hat{x}_{k})\leq \alpha]\leq I[\hat{y}_{k} f(\hat{x}_{k})<\theta]+\Pr_{g\sim \mathcal{Q}(f)}[\hat{y}_{k} f(\hat{x}_{k})\geq \theta, \hat{y}_{k}g(\hat{x}_{k})\leq\alpha].
\end{equation}
Notice that the instance achieving the $k$th smallest margin of $f$ may differ from the instance achieving the $k$th smallest margin of $g$; nevertheless, by Lemma \ref{lem-temp0}, the last term on the right-hand side of Eqn.~\eqref{eq:thm1:temp5} can be further bounded by
\begin{equation}\label{eq:thm1:temp6}
\Pr_{g\sim \mathcal{Q}(f)}[\exists (x_i,y_i)\in S\colon y_{i} f(x_{i})\geq \theta, y_{i} g(x_{i})\leq \alpha]\leq m\exp(-N(\theta-\alpha)^2/2).
\end{equation}
Combining Eqns.~\eqref{eq:thm1:temp0}, \eqref{eq:thm1:temp2}, \eqref{eq:thm1:temp5} and \eqref{eq:thm1:temp6}, we have that with probability at least $1-\delta_N$ over the sample $S$, for all $f\in \mathcal{C}(\mathcal{H})$, all $\theta>\alpha$, all $k\in[m]$ but fixed $N$:
\begin{multline}\label{eq:thm1:temp7}
\Pr_D[yf(x)<0]\leq I[\hat{y}_{k} f(\hat{x}_{k})\leq\theta] +m\exp(-{N(\theta-\alpha)^2}/{2}) +\exp(-N\alpha^2/2)\\
+KL^{-1}\left(\frac{k-1}{m};\frac{1}{m} \ln\frac{|\mathcal{H}|^{N+1}m}{\delta_N}\right).
\end{multline}
To ensure that the total probability of failure over all $N$ is at most $\delta$, we select $\delta_N=\delta/2^N$. Setting $\alpha=\frac{\theta}{2}-\frac{\eta}{|\mathcal{H}|}\in \mathcal{A}$ and $N=\frac{8}{\theta^2}\ln\frac{2m^2}{\ln|\mathcal{H}|}$ with $0\leq\eta<1$, we have
\[
\exp(-N\alpha^2/2)+m\exp(-N(\theta-\alpha)^2/2)\leq 2m\exp(-N\theta^2/8)\leq \ln|\mathcal{H}|/m
\]
from the fact $2m>\exp(N/(2|\mathcal{H}|))$ for $\theta>\sqrt{8/|\mathcal{H}|}$. Finally we obtain
\[
\Pr_D[yf(x)<0]\leq I[\hat{y}_{k} f(\hat{x}_{k}) < \theta]+ \frac{\ln|\mathcal{H}|}{m} +KL^{-1}\left(\frac{k-1}{m};\frac{q}{m}\right)
\]
where $q=\frac{8\ln(2|\mathcal{H}|)} {\theta^2}\ln\frac{2m^2} {\ln|\mathcal{H}|}+\ln|\mathcal{H}|+\ln\frac{m}{\delta}$. This completes the proof of Eqn.~\eqref{eq:kmar:re1}. In a similar manner, we have
\[
\Pr_D[yf(x)<0]\leq I[\hat{y}_{k} f(\hat{x}_{k}) < \theta]+\frac{\ln|\mathcal{H}|}{m}
+\frac{2}{m}\left(\frac{8\ln(2|\mathcal{H}|)} {\theta^2}\ln\frac{2m^2}{\ln|\mathcal{H}|}+\ln|\mathcal{H}|+\ln\frac{km^{k-1}}{\delta}\right),
\]
for constant $k$ with $m>4k$. This completes the proof of Eqn.~\eqref{eq:kmar:re2} as desired.\qed
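The bounds of Theorem~\ref{thm:kmar} involve the inverse $KL^{-1}(p;c)$. Under the standard reading of this quantity as the largest $q\in[p,1]$ with $KL(p\|q)\leq c$ (an assumption on the notation, not restated above), it can be computed by bisection, since $KL(p\|\cdot)$ is increasing on $[p,1]$:

```python
import math

def kl(p, q):
    """Binary relative entropy KL(p || q), with the usual 0 log 0 = 0 convention."""
    eps = 1e-12
    q = min(max(q, eps), 1 - eps)
    out = 0.0
    if p > 0:
        out += p * math.log(p / q)
    if p < 1:
        out += (1 - p) * math.log((1 - p) / (1 - q))
    return out

def kl_inverse(p, c, tol=1e-10):
    """Upper inverse: the largest q in [p, 1] with KL(p || q) <= c (bisection)."""
    lo, hi = p, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if kl(p, mid) <= c:
            lo = mid
        else:
            hi = mid
    return lo
```

For example, $KL^{-1}(0;\ln 2)=1/2$, since $KL(0\|q)=-\ln(1-q)$.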
\subsection{Proof of Theorem~\ref{thm:tool}}\label{sec:pf:tool}
For notational simplicity, we denote by $\bar{X}=(X_1,X_2,\ldots,X_m)$ a vector of $m$ i.i.d. random variables, and further set $\bar{X}^{k,Y}=(X_1,\ldots, X_{k-1}, Y, X_{k+1},\ldots,X_m)$, i.e., the vector with the $k$th variable $X_k$ in $\bar{X}$ replaced by variable $Y$. We first introduce some lemmas as follows:
\begin{lemma}[McDiarmid Formula \citep{McDiarmid89}]\label{McDiarmid}
Let $\bar{X}=(X_1,X_2,\ldots,X_m)$ be a vector of $m$ i.i.d. random variables taking values in a set $\mathcal{A}$. For any $k\in [m]$ and $Y\in\mathcal{A}$, if $|F(\bar{X})- F(\bar{X}^{k,Y})|\leq c_k$ for $F\colon \mathcal{A}^m\to \mathbb{R}$, then the following holds for any $t>0$
\[
\Pr\left[F(\bar{X})-E[F(\bar{X})]\geq t\right]\leq \exp\left(\frac{-2t^2}{\sum_{k=1}^mc_k^2}\right).
\]
\end{lemma}
\begin{lemma}[Theorem 13 \citep{Maurer2006}]\label{lem:mau}
Let $\bar{X}=(X_1,X_2,\ldots,X_m)$ be a vector of $m$ i.i.d. random variables taking values in a set $\mathcal{A}$. If $F\colon\mathcal{A}^m\to \mathbb{R}$ satisfies that
\[
F(\bar{X})-\inf_{Y\in\mathcal{A}}F(\bar{X}^{k,Y}) \leq 1 \text{ and } \sum_{k=1}^m \left(F(\bar{X})- \inf_{Y\in\mathcal{A}}F(\bar{X}^{k,Y})\right)^2 \leq F(\bar{X}),
\]
then the following holds for any $t>0$,
\[
\Pr[E[F(\bar{X})]-F(\bar{X})>t]\leq \exp({-t^2}/{2 E[F(\bar{X})]}).
\]
\end{lemma}
\begin{lemma}\label{lem:tmp1}
For two i.i.d. random variables $X$ and $Y$, we have
\[
E[(X-Y)^2]=2E[(X-E[X])^2]=2V(X).
\]
\end{lemma}
\proof This lemma follows from the fact that $E[(X-Y)^2]= E[X^2+Y^2-2XY]= 2E[X^2]-2E^2[X]= 2E[(X-E[X])^2]$.\qed
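The identity of Lemma~\ref{lem:tmp1} can also be verified exactly on a small discrete distribution by enumerating all pairs (the support and probabilities below are arbitrary):

```python
import itertools

import numpy as np

# An arbitrary three-point distribution on [0, 1].
vals = np.array([0.0, 0.3, 1.0])
probs = np.array([0.2, 0.5, 0.3])

mean = float(np.dot(probs, vals))
var = float(np.dot(probs, (vals - mean) ** 2))      # V(X)
lhs = sum(pi * pj * (xi - xj) ** 2                  # E[(X - Y)^2]
          for (xi, pi), (xj, pj) in itertools.product(zip(vals, probs), repeat=2))
```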
\begin{theorem}\label{lem:varbound}
Let $\bar{X}=(X_1,X_2,\ldots,X_m)$ be a vector of $m\geq4$ i.i.d. random variables with values in $[0,1]$, and we denote by
\[
\hat{V}_m(\bar{X})=\frac{1}{2m(m-1)}\sum_{i\neq j}(X_i-X_j)^2.
\]
Then for any $\delta>0$, we have
\begin{eqnarray}
\Pr\left[\sqrt{E[\hat{V}_m(\bar{X})]}<\sqrt{\hat{V}_m(\bar{X})}- \sqrt{\frac{\ln1/\delta}{16m}}\right]&\leq&\delta,\label{eq:var_tmp2}\\
\Pr\left[\sqrt{E[\hat{V}_m(\bar{X})]}>\sqrt{\hat{V}_m(\bar{X})} +\sqrt{\frac{2\ln1/\delta}{m}}\right]&\leq&\delta.\label{eq:var_tmp1}
\end{eqnarray}
\end{theorem}
The bounds in this theorem are tighter than the bounds of \citep[Theorem 10]{Maurer:Pontil2009}, in particular for Eqn.~\eqref{eq:var_tmp2}. Moreover, our proof is simple, direct and different from that of Maurer and Pontil.\vspace{0.1in}
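For intuition, expanding the sum shows that $\hat{V}_m(\bar{X})$ is exactly the usual unbiased sample variance, $\hat{V}_m(\bar{X})=\frac{1}{m-1}\sum_{i=1}^m(X_i-\bar{X})^2$ with $\bar{X}=\frac{1}{m}\sum_i X_i$; this is one way to see that $E[\hat{V}_m(\bar{X})]=V(X_1)$, as used in the proof of Theorem~\ref{thm:tool}. A direct computation confirms the identity:

```python
import numpy as np

def v_hat(x):
    """The pairwise estimator defined in the theorem above:
    V_hat_m = (1 / (2 m (m - 1))) * sum over i != j of (x_i - x_j)^2."""
    x = np.asarray(x, dtype=float)
    m = len(x)
    diffs = x[:, None] - x[None, :]          # all pairwise differences
    return float(np.sum(diffs ** 2)) / (2 * m * (m - 1))
```

On any sample, `v_hat` agrees with the unbiased sample variance (NumPy's `var` with `ddof=1`).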
\noindent\textbf{Proof of Theorem~\ref{lem:varbound}} We will utilize Lemmas~\ref{McDiarmid} and \ref{lem:mau} to prove Eqns.~\eqref{eq:var_tmp2} and \eqref{eq:var_tmp1}, respectively. For Eqn.~\eqref{eq:var_tmp2}, we first observe that, for any $k\in[m]$,
\[
\left|\sqrt{\hat{V}_m(\bar{X})}-\sqrt{\hat{V}_m(\bar{X}^{k,Y})}\right| =\left|\frac{{\hat{V}_m(\bar{X})}-{\hat{V}_m(\bar{X}^{k,Y})}} {\sqrt{\hat{V}_m(\bar{X})}+\sqrt{\hat{V}_m(\bar{X}^{k,Y})}}\right|\leq \frac{1}{2\sqrt{2}m},
\]
where we use $\hat{V}_m(\bar{X}),\hat{V}_m(\bar{X}^{k,Y})\leq 1/2$ from $X_i\in[0,1]$. By Jensen's inequality, we have $E[\sqrt{\hat{V}_m(\bar{X})}]\leq \sqrt{E[\hat{V}_m(\bar{X})]}$ and thus,
\[
\Pr\left[\sqrt{E[\hat{V}_m(\bar{X})]}<\sqrt{\hat{V}_m(\bar{X})}-\epsilon\right]\leq \Pr\left[ E\left[\sqrt{\hat{V}_m(\bar{X})}\right]<\sqrt{\hat{V}_m(\bar{X})}- \epsilon\right]\leq \exp(-16m\epsilon^2),
\]
where the last inequality holds by applying the McDiarmid formula in Lemma~\ref{McDiarmid} to $\sqrt{\hat{V}_m}$. Therefore, we complete the proof of Eqn.~\eqref{eq:var_tmp2} by setting $\delta=\exp(-16m\epsilon^2)$. \vspace{0.08in}
For Eqn.~\eqref{eq:var_tmp1}, we set $\xi_m(\bar{X})=m\hat{V}_m(\bar{X})$. For $X_i\in [0,1]$, a simple calculation shows that the infimum of $\xi_m(\bar{X}^{k,Y})$ over $Y$ is attained at
\[
Y^*={\arg\inf}_{Y\in[0,1]}[\xi_m(\bar{X}^{k,Y})]=\sum\nolimits_{i\neq k}\frac{X_i}{m-1},
\]
which yields that
\[
\xi_m(\bar{X})-\inf_{Y\in[0,1]}[\xi_m(\bar{X}^{k,Y})] =\frac{1}{m-1}\sum_{i\neq k}\left[(X_i-X_k)^2-(Y^*-X_i)^2\right]=\Big(X_k-\sum_{i\neq k}\frac{X_i}{m-1}\Big)^2.
\]
For $X_i\in [0,1]$, it is obvious that
\[
\xi_m(\bar{X})- \inf_{Y\in[0,1]} [\xi_m(\bar{X}^{k,Y})]\leq 1,
\]
and we further have
\begin{multline}\label{eq:t4}
\sum_{k=1}^m(\xi_m(\bar{X})-\inf_{Y\in[0,1]}[\xi_m(\bar{X}^{k,Y})])^2=\sum_{k=1}^m \Big(X_k-\sum_{i\neq k}\frac{X_i}{m-1}\Big)^4 \\
=\frac{m^5}{(m-1)^4}\frac{1}{m} \sum_{k=1}^m \Big(X_k-\sum_{i=1}^m \frac{X_i}{m}\Big)^4 \leq \frac{m^5}{(m-1)^4}\left(\frac{1}{m} \sum_{k=1}^m \Big(X_k-\sum_{i=1}^m\frac{X_i}{m}\Big)^2\right)^2
\end{multline}
where we use Jensen's inequality $E[a^4]\leq E^2[a^2]$. From Lemma~\ref{lem:tmp1}, we have
\[
\frac{1}{m} \sum_{k=1}^m \Big(X_k-\sum_{i=1}^m\frac{X_i}{m}\Big)^2 \leq \frac{1}{2m^2}\sum_{i,k}(X_i-X_k)^2 = \frac{1}{2m^2}\sum_{i\neq k}(X_i-X_k)^2.
\]
Substituting the above inequality into Eqn.~\eqref{eq:t4}, we have
\begin{eqnarray*}
\sum_{k=1}^m(\xi_m(\bar{X})-\inf_{Y\in[0,1]}[\xi_m(\bar{X}^{k,Y})])^2 &\leq& \frac{m^3}{4(m-1)^2}\left(\frac{1}{m(m-1)}\sum_{i\neq k}(X_i-X_k)^2\right)^2 \\
&\leq& \frac{m^3}{4(m-1)^2}\frac{1}{m(m-1)}\sum_{i\neq k}(X_i-X_k)^2\\
&=&\frac{m^2}{2(m-1)^2}\xi_m(\bar{X})\leq \xi_m(\bar{X})
\end{eqnarray*}
where the second inequality holds from $\sum_{i\neq k}(X_i-X_k)^2/m(m-1)\leq 1$ for $X_i\in[0,1]$ and the last inequality holds from $m\geq4$. Therefore, for any $t>0$, the following holds by applying Lemma~\ref{lem:mau} to $\xi_m(\bar{X})$,
\[
\Pr[E[\hat{V}_m(\bar{X})]-\hat{V}_m(\bar{X})>t]= \Pr[E[\xi_m(\bar{X})]-\xi_m(\bar{X})>mt]\leq \exp\left(\frac{-mt^2} {2E[\hat{V}_m(\bar{X})]}\right).
\]
Setting $\delta=\exp({-mt^2}/{2E[\hat{V}_m(\bar{X})]})$ gives
\[
\Pr\left[E[\hat{V}_m(\bar{X})] - \hat{V}_m(\bar{X}) > \sqrt{{2E[\hat{V}_m(\bar{X})]\ln(1/\delta)}/m}\right]\leq \delta
\]
which completes the proof of Eqn.~\eqref{eq:var_tmp1} by using the inequality $\sqrt{a+b}\leq \sqrt{a}+\sqrt{b}$ for $a,b\geq0$. \qed \vspace{0.15in}
\noindent\textbf{Proof of Theorem~\ref{thm:tool} } For i.i.d. random variables $\bar{X}=(X_1,X_2,\ldots, X_m)$, we set $\hat{V}_m(\bar{X})=\sum_{i\neq j} (X_i-X_j)^2/2m(m-1)$, and observe that
\[
E[\hat{V}_m(\bar{X})]=\frac{1}{2m(m-1)}\sum_{i\neq j}E[(X_i-X_j)^2]=\frac{1}{2m(m-1)}\sum_{i\neq j}\left(2E[X_i^2]-2E^2[X_i]\right)=V(X_1),
\]
where $V(X_1)$ denotes the variance $V(X_1)=E[(X_1-E[X_1])^2]$. For any $\delta>0$, the following holds with probability at least $1-\delta$ from Eqn.~\eqref{eq:bernineq1},
\[
E[X]-\frac{1}{m}\sum_{i=1}^mX_i \leq \sqrt{\frac{2V(X)\ln1/\delta}{m}}+ \frac{\ln1/\delta}{3m}=\sqrt{\frac{2E[\hat{V}_m(\bar{X})]\ln1/\delta}{m}}+ \frac{\ln1/\delta}{3m},
\]
which completes the proof of Eqn.~\eqref{eq:tool2} by combining with Eqn.~\eqref{eq:var_tmp1} in a union bound and simple calculations. The proof of Eqn.~\eqref{eq:tool1} is similar. \qed
\subsection{Proof of Theorem~\ref{thm:our1}}\label{sec:pf2}
Similarly to the proof of Theorem~\ref{thm:kmar}, we have
\begin{equation}\label{eq:tt1}
\Pr_D[yf(x)<0]\leq \exp(-{N\alpha^2}/{2})+\Pr_{D,\mathcal{Q}(f)}[yg(x)<\alpha],
\end{equation}
for any given $\alpha>0$, $f\in\mathcal{C}(\mathcal{H})$ and $g\in \mathcal{C}_N (\mathcal{H})$ chosen i.i.d. according to $\mathcal{Q}(f)$. Recall that $|\mathcal{C}_N (\mathcal{H})|\leq |\mathcal{H}|^{N}$. Therefore, for any $\delta_N>0$, combining the union bound with Eqn.~\eqref{eq:tool2} in Theorem~\ref{thm:tool} guarantees that the following holds with probability at least $1-\delta_N$ over sample $S$, for any $g\in \mathcal{C}_N(\mathcal{H})$ and $\alpha\in\mathcal{A}$,
\begin{equation}\label{eq:tt0}
\Pr_D[yg(x)<\alpha]\leq \Pr_S[yg(x)<\alpha]
+\sqrt{\frac{2}{m}\hat{V}_m\ln\big(\frac{2}{\delta_N}|\mathcal{H}|^{N+1}\big)}+ \frac{7}{3m}\ln(\frac{2}{\delta_N}|\mathcal{H}|^{N+1}),
\end{equation}
where
\[
\hat{V}_m=\sum_{i<j}\frac{(I[y_ig(x_i)<\alpha]- I[y_jg(x_j)<\alpha])^2}{2m(m-1)}.
\]
Furthermore, we have
\[
\sum_{i<j}\left(I[y_ig(x_i)<\alpha]- I[y_jg(x_j)<\alpha]\right)^2=m^2 \Pr_S[yg(x)<\alpha]\Pr_S[yg(x)\geq\alpha],
\]
which yields that
\begin{equation}\label{eq:tt2}
\hat{V}_m=\frac{m}{2m-2}\Pr_S[yg(x)<\alpha]\Pr_S[yg(x)\geq\alpha] \leq\Pr_S[yg(x)<\alpha],
\end{equation}
for $m\geq 4$. By using Lemma~\ref{lem:Chern} again, the following holds for any $\theta_1>0$,
\begin{equation}\label{eq:tt4}
\Pr_S[yg(x)<\alpha]\leq \exp(-{N\theta_1^2}/{2})+\Pr_S[yf(x)<\alpha+\theta_1].
\end{equation}
Setting $\theta_1=\alpha=\theta/2$ and combining Eqns.~\eqref{eq:tt1}, \eqref{eq:tt0}, \eqref{eq:tt2} and \eqref{eq:tt4}, we have
\begin{multline*}
\Pr_D[yf(x)<0] \leq \Pr_S[yf(x)<\theta]+2\exp(-N\theta^2/8)\\ +\frac{7\mu}{3m}
+\sqrt{\frac{2\mu}{m}\left(\Pr_S[yf(x)<\theta]+\exp\left(-\frac{N\theta^2}{8}\right)\right)},
\end{multline*}
where $\mu=\ln(2|\mathcal{H}|^{N+1}/\delta_N)$. By utilizing the fact $\sqrt{a+b} \leq \sqrt{a}+ \sqrt{b}$ for $a\geq 0$ and $b\geq0$, we further have
\[
\sqrt{\frac{2\mu}{m}\left(\Pr_S[yf(x)<\theta]+\exp\left(-\frac{N\theta^2}{8} \right)\right)} \leq \sqrt{\frac{2\mu}{m}\Pr_S[yf(x)<\theta]} +\sqrt{\frac{2\mu}{m}\exp\left(-\frac{N\theta^2}{8}\right)}.
\]
Finally, we set $\delta_N=\delta/2^{N}$ so that the total probability of failure over all $N$ is at most $\delta$. The theorem follows by setting $N=8\ln m/\theta^2$. \qed
\subsection{Proof of Corollary~\ref{coro:tighter}}\label{sec:pf4}
If the minimum margin $\theta_1=\hat{y}_1f(\hat{x}_1)>0$, then we have $\Pr_S[yf(x)<\theta_1]=0$ and further get
\begin{eqnarray}
&&\inf_{\theta\in (0,1]} \left[\Pr_S[yf(x)<\theta]+ \frac{7\mu+3\sqrt{2\mu}}{3m}+ \sqrt{\frac{2\mu}{m}\Pr_S[yf(x)<\theta]}\right] \nonumber\\
&&\leq\Pr_S[yf(x)<\theta_1]+ \frac{7\mu_1+3\sqrt{2\mu_1}}{3m}+ \sqrt{\frac{2\mu_1}{m}\Pr_S[yf(x)<\theta_1]}\nonumber\\
&&=\frac{7\mu_1+3\sqrt{2\mu_1}}{3m},\label{eq:t3}
\end{eqnarray}
where $\mu_1={8\ln m}\ln(2|\mathcal{H}|)/{\theta_1^2}+ \ln({2|\mathcal{H}|}/ {\delta})$. This gives the proof of Eqn.~\eqref{eq:tight1}. If $m\geq4$, then we have
\[
\mu_1\geq\frac{8}{\theta_1^2}\ln{m} \ln(2|\mathcal{H}|)\geq5, \text{ which implies } {\sqrt{2\mu_1}} \leq {2\mu_1}/3.
\]
Therefore, the following holds by combining Eqn.~\eqref{eq:t3} and the above facts,
\begin{eqnarray*}
&&\frac{2}{m}+\inf_{\theta\in (0,1]} \left[\Pr_S[yf(x)<\theta]+ \frac{7\mu+3\sqrt{2\mu}}{3m}+ \sqrt{\frac{2\mu}{m}\Pr_S[yf(x)<\theta]}\right]\\
&&\leq \frac{2}{m}+\frac{7\mu_1+3\sqrt{2\mu_1}}{3m} \leq\frac{2}{m}+\frac{3\mu_1}{m}=\frac{2}{m}+\frac{24\ln m}{m\theta_1^2} \ln(2|\mathcal{H}|)+\frac{3}{m}\ln\frac{2|\mathcal{H}|}{\delta}\\
&&\leq\frac{8}{m}+\frac{24\ln m}{m\theta_1^2}\ln(2|\mathcal{H}|)+ \frac{3}{m}\ln\frac{|\mathcal{H}|}{\delta}
\leq R\Big(\ln(2m)+ \ln\frac{1}{R}+1\Big)+ \frac{1}{m}\ln\frac{|\mathcal{H}|}{\delta}
\end{eqnarray*}
where the last inequality holds from the conditions of Eqn.~\eqref{eq:con} and ${8}/{m}<R$. This completes the proof of Eqn.~\eqref{eq:tight2}.\qed
\subsection{Proof of Theorem~\ref{thm:lower1}}\label{sec:pf:lower}
\proof For any given $\alpha>0$, $f\in\mathcal{C}(\mathcal{H})$ and $g\in \mathcal{C}_N (\mathcal{H})$ chosen i.i.d. according to $\mathcal{Q}(f)$, the following holds from Lemma~\ref{lem:Chern}:
\[
\Pr_D[yf(x)\geq0] \leq \Pr_{D,\mathcal{Q}(f)}[yg(x)\geq-\alpha]+ \exp(-{N\alpha^2}/{2}),
\]
which yields
\begin{equation}\label{eq:tmp1}
\Pr_D[yf(x)<0]\geq \Pr_{D,\mathcal{Q}(f)}[yg(x)<-\alpha]-\exp(-{N\alpha^2}/{2}).
\end{equation}
Recall that $|\mathcal{C}_N (\mathcal{H})|\leq |\mathcal{H}|^{N}$. Therefore, for any $\delta_N>0$, combining the union bound with Eqn.~\eqref{eq:tool1} in Theorem~\ref{thm:tool} guarantees that the following holds with probability at least $1-\delta_N$ over sample $S$, for any $g\in \mathcal{C}_N(\mathcal{H})$ and $\alpha\in\mathcal{A}$,
\begin{equation}\label{eq:tmp2}
\Pr_D[yg(x)<-\alpha]\geq \Pr_S[yg(x)<-\alpha]
-\sqrt{\frac{2}{m}\hat{V}_m\ln\big(\frac{2}{\delta_N}|\mathcal{H}|^{N+1}\big)}- \frac{7}{3m}\ln(\frac{2}{\delta_N}|\mathcal{H}|^{N+1}),
\end{equation}
where
\[
\hat{V}_m=\sum_{i<j}\frac{(I[y_ig(x_i)<-\alpha]- I[y_jg(x_j)<-\alpha])^2} {2m^2-2m} \leq \Pr_S[yg(x)<-\alpha] \text{ for }m\geq4.
\]
By using Lemma~\ref{lem:Chern} again, it holds that,
\begin{eqnarray*}
\Pr_S[yg(x)<-\alpha]&\leq&\Pr_S[yg(x)<0]+\exp(-N\alpha^2/2),\\
\Pr_S[yg(x)<-\alpha]&\geq&\Pr_S[yg(x)<-2\alpha]-\exp(-N\alpha^2/2).
\end{eqnarray*}
Therefore, combining the above inequalities with Eqns.~\eqref{eq:tmp1} and \eqref{eq:tmp2}, we have
\begin{multline*}
\Pr_D[yf(x)<0]\geq \Pr_{S}[yf(x)<-2\alpha]-2\exp(-{N\alpha^2}/{2})\\ -\sqrt{\frac{2\Pr_S[yg(x)<0]+2\exp(-N\alpha^2/2)}{m}\ln\big(\frac{2}{\delta_N}|\mathcal{H}|^{N+1}\big)}- \frac{7}{3m}\ln(\frac{2}{\delta_N}|\mathcal{H}|^{N+1}).
\end{multline*}
Set $\theta=2\alpha$ and $\delta_N=\delta/2^{N}$ so that the total probability of failure over all $N$ is at most $\delta$. The theorem follows by using $\sqrt{a+b} \leq \sqrt{a}+ \sqrt{b}$ and setting $N=8\ln m/\theta^2$. \qed
\subsection{Proof of Theorem~\ref{thm:our2}}\label{sec:pf3}
Our proof is based on a new Bernstein-type bound as follows:
\begin{lemma}\label{lem:t2}
For $f\in\mathcal{C}(\mathcal{H})$ and $g\in \mathcal{C}_N(\mathcal{H})$ chosen i.i.d. according to distribution $\mathcal{Q}(f)$, we have
\[
\Pr_{S,g\sim \mathcal{Q}(f)}\left[yg(x)-yf(x)\geq t\right]\leq\exp\left(\frac{-Nt^2}{2-2E^2_{S}[yf(x)]+4t/3}\right).
\]
\end{lemma}
\proof For $\lambda>0$, we utilize Markov's inequality to have
\begin{eqnarray*}
\Pr_{S,g\sim \mathcal{Q}(f)}[yg(x)-yf(x)\geq t]&=&\Pr_{S,g\sim \mathcal{Q}(f)}[(yg(x)-yf(x))N\lambda/2\geq N\lambda t/2]\\
&\leq&\exp\left(-\frac{\lambda Nt}{2}\right) E_{S,g\sim \mathcal{Q}(f)} \left[\exp\left(\frac{\lambda}{2}\sum_{j=1}^N \big(yh_j(x)-yf(x)\big)\right)\right] \\
&=&\exp(-{\lambda Nt}/{2})\prod_{j=1}^N E_{S,h_j\sim \mathcal{Q}(f)}[\exp( \lambda(yh_j(x)-yf(x))/2)],
\end{eqnarray*}
where the last equality holds from the independence of the $h_j$'s. Notice that
$|yh_j(x)-yf(x)|\leq 2$ from $\mathcal{H}\subseteq\{h\colon \mathcal{X}\to\{-1,+1\}\}$. By using Taylor's expansion, we further get
\begin{multline*}
E_{S,h_j\sim \mathcal{Q}(f)}[\exp(\lambda(yh_j(x)-yf(x))/2)] \leq 1+E_{S,h_j\sim \mathcal{Q}(f)}[(yh_j(x)-yf(x))^2](e^\lambda-1-\lambda)/4\\
=1+E_{S}[1-(yf(x))^2](e^\lambda-1-\lambda)/4\leq \exp\left((1-E^2_{S}[yf(x)])(e^\lambda-1-\lambda)/4\right),
\end{multline*}
where the last inequality holds from Jensen's inequality and $1+x\leq e^{x}$. Therefore, it holds that
\[
\Pr_{S,g\sim \mathcal{Q}(f)}\left[yg(x)-yf(x)\geq t\right]\leq \exp\left(N(e^\lambda-1-\lambda)(1-E^2_{S}[yf(x)])/4-\lambda Nt/2\right).
\]
If $0<\lambda<3$, then we can use Taylor's expansion again to obtain
\[
e^{\lambda}-\lambda-1=\sum_{i=2}^\infty\frac{\lambda^i}{i!}\leq \frac{\lambda^2}{2}\sum_{i=0}^\infty\frac{\lambda^i}{3^i} = \frac{\lambda^2}{2(1-\lambda/3)}.
\]
Now by picking $\lambda={t}/(1/2-E^2_{S}[yf(x)]/2+ t/3)$, we have
\[
-\frac{\lambda t}{2}+\frac{\lambda^2(1-E^2_{S}[yf(x)])}{8(1-\lambda/3)} \leq \frac{-t^2}{2-2E^2_{S}[yf(x)]+4t/3},
\]
which completes the proof as desired.\qed\vspace{0.15in}
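The elementary inequality $e^{\lambda}-\lambda-1\leq \lambda^2/(2(1-\lambda/3))$ invoked in the proof above can be checked numerically over a grid of $\lambda\in(0,3)$:

```python
import numpy as np

# Grid check of e^lam - lam - 1 <= lam^2 / (2 (1 - lam / 3)) on (0, 3).
lam = np.linspace(0.001, 2.999, 5000)
lhs = np.exp(lam) - lam - 1
rhs = lam ** 2 / (2 * (1 - lam / 3))
max_gap = float(np.max(lhs - rhs))  # should be <= 0 up to rounding
```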
\noindent\textbf{Proof of Theorem~\ref{thm:our2}} This proof is rather similar to the proof of Theorem~\ref{thm:our1}, and we only sketch the main steps. For any $\alpha>0$ and $\delta_N>0$, the following holds with probability at least $1-\delta_N$ over sample $S$ with size $m\geq4$,
\[
\Pr_D[yf(x)<0]\leq \Pr_S[yg(x)<\alpha]+ \exp\left(-\frac{N\alpha^2}{2}\right)+
\sqrt{\frac{2\hat{V}^*_m\ln(\frac{2}{\delta_N}|\mathcal{H}|^{N+1})}{m}}+ \frac{7}{3m}\ln(\frac{2}{\delta_N}|\mathcal{H}|^{N+1}),
\]
where $\hat{V}^*_m=\Pr_S[yg(x)<\alpha]\Pr_S[yg(x)\geq\alpha]$. For any $\theta_1>0$, we use Lemma~\ref{lem:Chern} to obtain
\[
\hat{V}^*_m=\Pr_S[yg(x)<\alpha]\Pr_S[yg(x)\geq\alpha]\leq 3\exp(-N\theta_1^2/2)+\Pr_S[yf(x)<\alpha+\theta_1]\Pr_S[yf(x)>\alpha-\theta_1].
\]
From Lemma~\ref{lem:t2}, it holds that
\[
\Pr_S[yg(x)<\alpha]\leq \Pr_S[yf(x)<\alpha+\theta_1]+\exp\Big({\frac{-N\theta_1^2} {2-2E^2_{S}[yf(x)]+4\theta_1/3}}\Big).
\]
Let $\theta_1=\theta/6$, $\alpha=5\theta/6$, and set $\delta_N=\delta/2^{N}$ so that the total probability of failure over all $N$ is at most $\delta$. We complete the proof by setting $N=144\ln m/\theta^2$ and a simple calculation. \qed
\begin{table*}[t!]
\caption{Accuracy (mean$\pm$std.) comparisons of \texttt{AdaBoost} and \texttt{arc-gv} on 51 benchmark datasets. The better performance (paired $t$-test at 95\% significance level) is bold. The last line shows the win/tie/loss counts of \texttt{AdaBoost} versus \texttt{arc-gv}.}\label{tab:acc}
\begin{center}
\begin{tabular}{|l|c|c|l|c|c|}
\hline
\multirow{2}{*}{}& \multicolumn{2}{|c|}{\small Test error}&\multirow{2}{*}{}& \multicolumn{2}{|c|}{\small Test error}\\
\cline{2-3}\cline{5-6}
\small Dataset&\small \texttt{AdaBoost}&\small \texttt{Arc-gv} & Dataset&\small \texttt{AdaBoost}&\small \texttt{Arc-gv}\\
\hline
\small anneal &\small 0.0047$\pm$0.0066 &\small 0.0043$\pm$0.0067&
\small abalone &\small 0.2203$\pm$0.0208 &\small 0.2186$\pm$0.0224\\
\hline
\small artificial &\small 0.3351$\pm$0.0197 &\small\bf0.2666$\pm$0.0200&
\small auto-m &\small 0.1143$\pm$0.0471 &\small\bf0.1085$\pm$0.0436\\
\hline
\small auto &\small 0.0991$\pm$0.0670 &\small 0.0996$\pm$0.0667&
\small balance &\small 0.0088$\pm$0.0119 &\small 0.0093$\pm$0.0120\\
\hline
\small breast-w &\small 0.0411$\pm$0.0221 &\small 0.0413$\pm$0.0242&
\small car &\small 0.0502$\pm$0.0154 &\small 0.0509$\pm$0.0168\\
\hline
\small cmc &\small\bf0.2787$\pm$0.0288 &\small 0.2872$\pm$0.0311&
\small colic &\small 0.1905$\pm$0.0661 &\small 0.1935$\pm$0.0683\\
\hline
\small credit-a &\small\bf0.1368$\pm$0.0410 &\small 0.1622$\pm$0.0405&
\small cylinder &\small 0.2076$\pm$0.0509 &\small 0.2070$\pm$0.0570\\
\hline
\small diabetes &\small\bf0.2409$\pm$0.0423 &\small 0.2551$\pm$0.0440&
\small german &\small\bf0.2486$\pm$0.0372 &\small 0.2717$\pm$0.0403\\
\hline
\small glass &\small 0.2045$\pm$0.0794 &\small 0.2113$\pm$0.0848&
\small heart-c &\small\bf0.1960$\pm$0.0701 &\small 0.2161$\pm$0.0754\\
\hline
\small heart-h &\small\bf0.1892$\pm$0.0623 &\small 0.2006$\pm$0.0673&
\small hepatitis &\small 0.1715$\pm$0.0821 &\small 0.1798$\pm$0.0848\\
\hline
\small house-v &\small 0.0471$\pm$0.0333 &\small 0.0471$\pm$0.0326&
\small hypo &\small 0.0053$\pm$0.0035 &\small 0.0054$\pm$0.0034\\
\hline
\small ion &\small 0.0721$\pm$0.0432 &\small 0.0767$\pm$0.0421&
\small iris &\small 0.0000$\pm$0.0000 &\small 0.0000$\pm$0.0000\\
\hline
\small isolet &\small 0.1270$\pm$0.0113 &\small\bf0.1214$\pm$0.0116&
\small kr-vs-kp &\small 0.0354$\pm$0.0106 &\small\bf0.0326$\pm$0.0097\\
\hline
\small letter &\small 0.1851$\pm$0.0076 &\small\bf0.1778$\pm$0.0077&
\small lymph &\small 0.1670$\pm$0.0971 &\small 0.1690$\pm$0.0972\\
\hline
\small magic04 &\small\bf0.1555$\pm$0.0078 &\small 0.1578$\pm$0.0077&
\small mfeat-f &\small\bf0.0445$\pm$0.0136 &\small 0.0471$\pm$0.0143\\
\hline
\small mfeat-m &\small\bf0.0990$\pm$0.0190 &\small 0.1048$\pm$0.0200&
\small mush &\small 0.0000$\pm$0.0000 &\small 0.0000$\pm$0.0000\\
\hline
\small musk &\small 0.0916$\pm$0.0413 &\small 0.0926$\pm$0.0437&
\small nursery &\small 0.0002$\pm$0.0004 &\small 0.0002$\pm$0.0004\\
\hline
\small optdigits &\small 0.1060$\pm$0.0144 &\small 0.1048$\pm$0.0129&
\small page-b &\small 0.0331$\pm$0.0068 &\small 0.0325$\pm$0.0062\\
\hline
\small pendigits &\small 0.0796$\pm$0.0083 &\small 0.0788$\pm0.0081$&
\small satimage &\small 0.0565$\pm$0.0083 &\small\bf0.0531$\pm$0.0080\\
\hline
\small segment &\small 0.0171$\pm$0.0083 &\small\bf0.0159$\pm$0.0083&
\small shuttle &\small 0.0010$\pm$0.0001 &\small\bf0.0009$\pm$0.0001\\
\hline
\small sick &\small 0.0250$\pm$0.0082 &\small 0.0246$\pm$0.0079&
\small solar-f &\small\bf0.0440$\pm$0.0171 &\small 0.0490$\pm$0.0182\\
\hline
\small sonar &\small\bf0.1441$\pm$0.0697 &\small 0.1863$\pm$0.0881&
\small soybean &\small 0.0245$\pm$0.0188 &\small 0.0242$\pm$0.0174\\
\hline
\small spamb &\small 0.0570$\pm$0.0107 &\small\bf0.0553$\pm$0.0105&
\small spect &\small 0.1256$\pm$0.0386 &\small 0.1250$\pm$0.0414\\
\hline
\small splice &\small\bf0.0561$\pm$0.0128 &\small 0.0605$\pm$0.0131&
\small tic-tac-t &\small 0.0172$\pm$0.0115 &\small 0.0177$\pm$0.0116\\
\hline
\small vehicle &\small 0.0435$\pm$0.0215 &\small 0.0447$\pm$0.0231&
\small vote &\small 0.0471$\pm$0.0333 &\small 0.0471$\pm$0.0326\\
\hline
\small vowel &\small 0.1114$\pm$0.0276 &\small\bf0.1026$\pm$0.0278&
\small wavef &\small\bf0.1145$\pm$0.0136 &\small 0.1181$\pm$0.0141\\
\hline
\small yeast &\small\bf0.2677$\pm$0.0344 &\small 0.2841$\pm$0.0332& \multicolumn{3}{|c|}{14/27/10}\\
\hline
\end{tabular}
\end{center}
\end{table*}
\begin{table*}[t!]
\caption{Description of datasets: the number of instances, the number of classes, and the numbers of continuous and discrete features}\label{tab:data}
\begin{center}
\begin{tabular}{|l|c|c|c|c|l|c|c|c|c|}
\hline
\small dataset&\small \#inst&\small \#class&\small \#CF & \small \#DF
&\small dataset&\small \#inst&\small \# class&\small \#CF & \small \#DF\\
\hline
\small abalone &\small 4177 & 29 & 7 & 1 &
\small anneal &\small 898 & 6 & 6 & 32 \\
\hline
\small artificial &\small 5109 & 10 & 7 & -- &
\small auto-m &\small 398 & 5 & 2 & 4 \\
\hline
\small auto &\small 205 & 6 & 15 & 10&
\small balance &\small 540 & 18 & 21 & 2 \\
\hline
\small breast-w &\small 699 & 2 & 9 & --&
\small car &\small 1728 & 4 & -- & 6 \\
\hline
\small cmc &\small 1473 & 3 & 2 & 7
&\small colic &\small 368 & 2 & 10 & 12 \\
\hline
\small credit-a &\small 690 & 2 & 6 & 9
&\small cylinder &\small 540 & 2 & 18 & 21 \\
\hline
\small diabetes &\small 768 & 2 & 8 & --
&\small german &\small 1000 & 2 & 7 & 13 \\
\hline
\small glass &\small 214 & 6 & 9 & --
&\small heart-c &\small 303 & 2 & 6 & 7 \\
\hline
\small heart-h &\small 294 & 2 & 6 & 7
&\small hepatitis &\small 155 & 2 & 6 & 13 \\
\hline
\small house-v &\small 435 & 2 & -- & 16
&\small hypo &\small 3772 & 4 & 7 & 22 \\
\hline
\small ion &\small 351 & 2 & 34 & --
&\small iris &\small 150 & 3 & 4 & -- \\
\hline
\small isolet &\small 7797 & 26 & 617 & --
&\small kr-vs-kp &\small 3169 & 2 & -- & 36 \\
\hline
\small letter &\small 20000 & 26 & 16 & --
&\small lymph &\small 148 & 4 & -- & 18 \\
\hline
\small magic04 &\small 19020 & 2 & 10 & --
&\small mfeat-f &\small 2000 & 10 & 216 & -- \\
\hline
\small mfeat-m &\small 2000 & 10 & 6 & --
&\small mush &\small 8124 & 2 & -- & 22 \\
\hline
\small musk &\small 476 & 2 & 166 & --
&\small nursery &\small 12960 & 2 & 9 & -- \\
\hline
\small optdigits &\small 5620 & 10 & 64 & --
&\small page-b &\small 5473 & 5 & 10 & -- \\
\hline
\small pendigits &\small 10992 & 2 & 16 & --
&\small satimage &\small 6453 & 7 & 36 & -- \\
\hline
\small segment &\small 2310 & 7 & 19 & --
&\small shuttle &\small 58000 & 7 & 9 & -- \\
\hline
\small sick &\small 3372 & 2 & 7 & 22
&\small solar-f &\small 1066 & 6 & -- & 12 \\
\hline
\small sonar &\small 208 & 2 & 60 & --
&\small soybean &\small 683 & 19 & -- & 35 \\
\hline
\small spamb &\small 4601 & 2 & 57 & --
&\small spect &\small 531 & 48 & 100 & 2 \\
\hline
\small splice &\small 3190 & 3 & -- & 60
&\small tic-tac-t &\small 958 & 2 & -- & 9 \\
\hline
\small vehicle &\small 846 & 4 & 18 & --
&\small vote &\small 435 & 2 & -- & 16 \\
\hline
\small vowel &\small 990 & 11 & -- & 11
&\small wavef &\small 5000 & 3 & 40 & -- \\
\hline
\small yeast &\small 1484 & 10 & 8 & --
&\multicolumn{5}{|c|}{}\\
\hline
\end{tabular}
\end{center}
\end{table*}
\section{Empirical Verifications}\label{sec:exper}
Though this paper mainly focuses on a theoretical explanation of \texttt{AdaBoost}, we also present empirical studies comparing the performance of \texttt{AdaBoost} and \texttt{arc-gv} so as to verify our theory.
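The two boosting algorithms compared here differ only in the coefficient assigned to each weak learner: with $\gamma_t$ the edge of the $t$th weak learner and $\rho_t$ the minimum margin of the combined classifier built so far, \texttt{AdaBoost} takes $\alpha_t = \frac{1}{2}\ln\frac{1+\gamma_t}{1-\gamma_t}$, while \texttt{arc-gv} subtracts a correction term depending on $\rho_t$. A minimal sketch of the two updates (the function names are ours, for illustration only):

```python
import math

def alpha_adaboost(edge):
    """AdaBoost's step size: 1/2 * ln((1+gamma)/(1-gamma)), where
    gamma = 1 - 2*err is the edge of the current weak learner."""
    return 0.5 * math.log((1 + edge) / (1 - edge))

def alpha_arcgv(edge, min_margin):
    """arc-gv's step size: AdaBoost's step minus a correction built
    from rho, the minimum margin of the combined classifier so far."""
    return (alpha_adaboost(edge)
            - 0.5 * math.log((1 + min_margin) / (1 - min_margin)))

# with zero minimum margin the two updates coincide; a positive
# minimum margin makes arc-gv more conservative
assert alpha_arcgv(0.4, 0.0) == alpha_adaboost(0.4)
assert alpha_arcgv(0.4, 0.2) < alpha_adaboost(0.4)
```

Everything else (the example reweighting and the final weighted vote) is shared between the two algorithms.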
We conduct our experiments on 51 benchmark datasets from the UCI repository \citep{Asuncion:Newman2007}, which show considerable diversity in size, number of classes, and number and types of attributes. Their detailed characteristics are summarized in Table~\ref{tab:data}, and most of them have been investigated by previous researchers. We transform each multi-class dataset into a two-class dataset by regarding the union of half of the classes as one meta-class and the union of the remaining classes as the other, where the partition is chosen so that the two meta-classes have similar sizes. To control the complexity of the base learners, we use decision stumps as the base learners for both \texttt{AdaBoost} and \texttt{arc-gv}. On each dataset we run 10 trials of 10-fold cross validation, and the detailed results are summarized in Table~\ref{tab:acc}.
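The meta-class construction can be made concrete as follows. This is one reading of the procedure (the text does not specify how the size-balancing partition is searched for), and the helper below is ours:

```python
from itertools import combinations

def split_into_metaclasses(class_sizes):
    """Split the class labels into two meta-classes, each containing
    half of the classes, minimizing the difference in total size.
    A brute-force reading of the procedure described in the text."""
    labels = sorted(class_sizes)
    total = sum(class_sizes.values())
    k = len(labels) // 2
    best = min(combinations(labels, k),
               key=lambda c: abs(2 * sum(class_sizes[l] for l in c) - total))
    first = set(best)
    return sorted(first), [l for l in labels if l not in first]

# a toy example: two meta-classes of 20 instances each
sizes = {'a': 10, 'b': 10, 'c': 5, 'd': 15}
half1, half2 = split_into_metaclasses(sizes)
```

For a dataset with many classes the exhaustive search becomes expensive; a greedy assignment would serve equally well for the purpose of balancing sizes.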
Consistent with previous empirical work \citep{Breiman1999,Reyzin:Schapire2006}, Table~\ref{tab:acc} clearly shows that \texttt{AdaBoost} has better performance than \texttt{arc-gv}, which also verifies our Corollary~\ref{coro:tighter}. On the other hand, it is noteworthy that \texttt{AdaBoost} does not absolutely outperform \texttt{arc-gv}, since the performances of the two algorithms are comparable on many datasets. This is because the bound of Theorem~\ref{thm:our1} and the minimum margin bound of Theorem~\ref{thm:brei} are both $O(\ln m/m)$, though the former has smaller coefficients.
\section{Conclusion}\label{sec:con}
The margin theory provides one of the most intuitive and popular theoretical explanations of \texttt{AdaBoost}. It is well accepted that the margin distribution is crucial for characterizing the performance of \texttt{AdaBoost}, and it is desirable to theoretically establish generalization bounds based on the margin distribution.
In this paper, we show that previous margin bounds, such as the minimum margin bound and the Emargin bound, are all single-margin bounds that do not really depend on the whole margin distribution. We then slightly improve the empirical Bernstein bound using different techniques. As our main result, we prove a new generalization bound that considers exactly the same factors as Schapire et al. \cite{Schapire:Freund:Bartlett:Lee1998} but is uniformly tighter than the bounds of Schapire et al. \cite{Schapire:Freund:Bartlett:Lee1998} and Breiman \cite{Breiman1999}, and thus provides a complete answer to Breiman's doubt on the margin theory. By incorporating other factors such as the average margin and the variance, we prove another upper bound that depends heavily on the whole margin distribution. Our empirical evidence shows that \texttt{AdaBoost} performs better than, but does not absolutely outperform, \texttt{arc-gv}, which further confirms our theory.
\section{Introduction}
Consider a finite map $\pi : X \rightarrow Y$ of degree $\mu$.
Let $B = \cup B_i$ be the branch locus and its irreducible decomposition.
Let $R= \pi^{-1}( B )= \cup R_j$ be the ramification locus and the irreducible decomposition of its reduction. Note that we make here the potentially non-standard choice of including in $R$ even those components of $\pi^{-1}(B)$ which are not ramified; this convention will be maintained throughout.
The Riemann-Hurwitz formula for the topological Euler characteristic of curves can roughly be interpreted as saying:
\[ \chi(X) - \mu \cdot \chi(Y) = \sum_i r_i\chi(R_i) \]
for some integers $r_i$ determined by local data.
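For a degree-$\mu$ cover of curves the local data is just the ramification index $e_P$ at each point, and the formula reads $\chi(X)=\mu\,\chi(Y)-\sum_P(e_P-1)$. A quick numerical sanity check on two standard examples (the helper name is ours):

```python
def chi_cover(mu, ram_indices, chi_Y=2):
    """chi(X) for a degree-mu cover of a curve Y, by Riemann-Hurwitz:
    chi(X) = mu * chi(Y) - sum_P (e_P - 1); here chi(point) = 1, so the
    local integer r_P is -(e_P - 1)."""
    return mu * chi_Y - sum(e - 1 for e in ram_indices)

# z -> z^n on P^1: ramified over 0 and infinity, each with index n
assert chi_cover(5, [5, 5]) == 2
# hyperelliptic double cover of P^1 branched at 2g + 2 points
g = 3
assert chi_cover(2, [2] * (2 * g + 2)) == 2 - 2 * g
```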
This formula can be generalized both to higher-dimensional manifolds and to the algebraic Euler characteristic.
In the higher-dimensional algebraic setting, however, such a formula typically requires additional hypotheses on the ramification and/or branch locus, such as:
\begin{itemize}
\item The ramification locus is non-singular.
\item The irreducible components of the branch locus do not intersect.
\item The irreducible components of the ramification locus are non-singular.
\item The irreducible components of the ramification locus have trivial self-intersection; the formula is cleanest in this case.
Note that the work of Izawa \cite{IzawaRH} handles the case where this last condition fails, but requires the previous conditions.
\end{itemize}
We would like to reduce these conditions to the requirement that the branch and ramification loci consist of divisors with simple normal crossings.
The result which we obtain is a formula of the form:
\[ \chi(X) - \mu\cdot\chi(Y) = \sum_\alpha r_\alpha\chi(R^\alpha), \]
where the $R^\alpha$ are the irreducible components of the (possibly repeated) intersections, that is the strata, of the ramification locus, and the $r_\alpha$ are constants defined in terms of the ramification structure along $R^\alpha$ by universal equations determined by $\alpha$.
Precise statements are given in Theorem \ref{thm:main} and Corollary \ref{cor:main} noting that terminology introduced elsewhere will be necessary to understand them. Propositions \ref{prop:simpleMF} and \ref{prop:simpleNMF} give alternative expressions for some of the coefficients $r_\alpha$.
Other theorems which may be of interest are Theorems \ref{thm:pullbacklog} and \ref{thm:pullbacklogchern} which describe the functoriality of the pullback of logarithmic Chern classes and the logarithmic Euler characteristic through finite maps. It is likely both of these results admit generalizations outside the context in which the author is able to prove them.
Moreover, our argument works virtually identically in each of the following cases:
\begin{itemize}
\item $\chi(X)$ is the topological Euler characteristic; in this case the results above are classical and follow from excision.
\item $\chi(X) = \chi(X,\mathcal{O}_X)$ is the algebraic Euler characteristic.
The result is well known when the map is \'etale (see \cite[Ex. 18.3.9]{FultonIntersection1}).
The case of no intersection between components of branch/ramification locus is handled for example in the work of Izawa \cite{IzawaRH}.
\item $\chi(X) = \chi(X,\mathcal{F})$ is the Euler characteristic of a coherent sheaf $\mathcal{F}$.
\item The same argument should apply formally to any `characteristic' defined by a multiplicative sequence on the Chern classes.
The formulas for the coefficients $r_\alpha$ do naturally depend on this choice of characteristic.
\end{itemize}
We shall only present the argument for the case of $\chi(X,\mathcal{F})$; the main results are Theorem \ref{thm:main} and Corollary \ref{cor:main}.
The strategy of proof uses primarily formal properties of logarithmic Chern classes and formal properties of multiplicative sequences.
The paper is organized as follows:
\begin{itemize}
\item In Section \ref{sec:bgnot} we introduce our notation and the key results we shall make use of. This includes in particular Lemmas \ref{lem:cute1}-\ref{lem:cute3} and Theorem \ref{thm:pullbacklog}.
\item In Section \ref{sec:logeuler} we introduce our definition of logarithmic Euler characteristic.
\item Section \ref{sec:logveuler} contains the key calculations, which compares the classical Euler characteristic to the logarithmic Euler characteristic.
\item Section \ref{sec:rh} applies the results of Section \ref{sec:logveuler} to the problem of giving the Riemann-Hurwitz theorem discussed above.
\item In Section \ref{sec:logeulerself} we discuss computing the contribution to the logarithmic Euler characteristics of the `self-intersection' terms.
\end{itemize}
We should mention that our original motivation for considering the objects being introduced is to compute dimension formulas for spaces of modular forms. For this application it is actually the results of Section \ref{sec:logveuler} and Section \ref{sec:logeulerself} that by way of the work of Mumford in \cite{Mumford_Proportionality} play a significant role. Though actual dimension formulas require additional arithmetic and/or combinatorial input the results of the aforementioned sections can be seen as a generalization of a key ingredient for the approach used in \cite{Tsushima}.
\section{Background and Notation}\label{sec:bgnot}
\begin{nota}\label{not:1}
We shall make use of the following notation.
\begin{enumerate}
\item $X$ and $Y$ shall always be varieties, typically assumed to be smooth and projective.
\item
Given a variety $X$ we shall denote by
\[ \Omega^1_X \]
the cotangent bundle of $X$.
\item $\Delta = \cup_i \{ D_i \}$ shall always be a collection of (reduced irreducible) divisors on a variety.
These shall typically be assumed to have simple normal crossings.
\item
Given $\Delta = \cup_i \{D_i\}$ a collection of divisors on $X$ we shall denote by:
\[ \Omega^1_X(\log \Delta) \]
the logarithmic cotangent bundle of $X$ relative to $\Delta$.
\item
For any $Y\subset X$ we shall denote by:
\[ \Omega^1_Y(\log \Delta') \]
the logarithmic cotangent bundle of $Y$, relative to $\Delta' = \cup_i \{D_i\cap Y\}$, where we consider only those $i$ such that $Y\not\subset D_i$.
When we write this we shall always assume that $Y$ meets the relevant $D_i$ transversely.
Whenever we write $\Delta'$, the relevant $Y$ shall be understood.
\item
Given any coherent sheaf $\mathcal{F}$ on $X$ we shall denote by:
\[ {\mathrm{c}}(\mathcal{F}) = \sum_i {\mathrm{c}}_i(\mathcal{F}) \]
the total Chern class and the $i$th Chern class (see \cite[Ch. 3]{FultonIntersection1}).
We shall denote by ${\mathrm{ch}}(\mathcal{F})$ and ${\rm Todd}(\mathcal{F})$ the Chern character and Todd class respectively.
The Todd class ${\rm Todd}(\mathcal{F})$ has a universal expression in terms of the ${\mathrm{c}}_i(\mathcal{F})$, whereas ${\mathrm{ch}}(\mathcal{F})$ additionally requires the rank ${\rm rk}(\mathcal{F})$, which is the constant part of the Chern character.
These classes can be interpreted as being in the cohomology ring or the Chow ring as appropriate from context.
We can interpret ${\mathrm{ch}}(\mathcal{F})$ as a vector determining all of ${\rm rk}(\mathcal{F}), {\mathrm{c}}_1(\mathcal{F}),\ldots, {\mathrm{c}}_n(\mathcal{F})$.
Conversely, given a vector $\underline{x} = (x_0,\ldots,x_n)$ we shall write $ {\mathrm{ch}}(\underline{x})$ to indicate the formal expression in the $x_i$ where we replace ${\mathrm{c}}_i$ by $x_i$ and ${\rm rk}(\mathcal{F})$ by $x_0$ in the formal expression for ${\mathrm{ch}}(\mathcal{F})$. For brevity, and to make clear the connection to the role of the Chern character, we shall often write ${\mathrm{ch}}(\underline{x})$ or ${\mathrm{ch}}(\mathcal{F})$ when evaluating a function on the vectors $(x_0,\ldots,x_n)$ or $({\rm rk}(\mathcal{F}),{\mathrm{c}}_1(\mathcal{F}),\ldots,{\mathrm{c}}_n(\mathcal{F}))$ when it is defined through ${\mathrm{ch}}(\mathcal{F})$ (see for example Theorem \ref{thm:RR}).
\item
Given $\Delta = \cup_i \{D_i\}$ a collection of divisors on $X$ we shall denote by:
\[ \Delta_k \]
the $k$th elementary symmetric polynomial in the $D_i$,
so that:
\[ \prod_i (1-D_i) = \sum_k (-1)^k \Delta_k. \]
The products above take place in either the cohomology ring or the Chow ring as appropriate from context.
\item
When we say that $\alpha$ is a partition of $m$ we mean that $m = \sum_i \alpha_i i $. Given a partition $\alpha$, we shall denote by $\abs{\alpha}$ the value $m$ it is partitioning. That is $\abs{\alpha} = \sum_i \alpha_i i$. Moreover, given such a partition we shall denote by:
\[ {\mathrm{c}}^\alpha(\mathcal{F}) = \prod_i {\mathrm{c}}_i(\mathcal{F})^{\alpha_i} \]
and by:
\[ \Delta^\alpha = \prod_i \Delta_i^{\alpha_i}. \]
\item
Given a monomial exponent $\underline{b} = (b_1,\ldots,b_\ell) \in \mathbb{N}^\ell$ of total degree $\abs{\underline{b}} = \sum b_i$ we shall denote by:
\[ D^{\underline{b}} = \prod D_i^{b_i}. \]
The products above take place in either the cohomology ring or the Chow ring as appropriate from context.
Whenever we write this, the choice of base $D$ will make clear the relevant $\Delta$ to which $D_i$ belong.
Do not confuse $D^\ell$ with $D^{\underline{b}}$; the former will always be the self-intersection of a particular divisor $D \in \Delta$.
\end{enumerate}
\end{nota}
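The identity $\prod_i(1-D_i) = \sum_k(-1)^k\Delta_k$ of the notation above holds in any commutative ring, so it can be spot-checked by evaluating both sides at rational values of the $D_i$; a small sketch (the helper name is ours):

```python
from fractions import Fraction
from itertools import combinations
from math import prod

def elem_sym(ds, k):
    """Delta_k: the k-th elementary symmetric polynomial, evaluated at ds."""
    return sum(prod(c) for c in combinations(ds, k))

# evaluate both sides of prod_i (1 - D_i) = sum_k (-1)^k Delta_k
ds = [Fraction(1, 3), Fraction(-2, 5), Fraction(7, 4)]
lhs = prod(1 - d for d in ds)
rhs = sum((-1) ** k * elem_sym(ds, k) for k in range(len(ds) + 1))
assert lhs == rhs
```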
\begin{thm}[Riemann-Roch Theorem]\label{thm:RR}
For each $n\in \mathbb{N}$ there is a universal polynomial
\[ Q_n(x_0,\ldots,x_n; y_1,\ldots,y_n) = Q_n({\mathrm{ch}}(\underline{x});y_1,\ldots,y_n) \]
such that for all smooth projective varieties $X$ of dimension $n$ and coherent sheaves $\mathcal{F}$ on $X$ the Euler characteristic of $\mathcal{F}$ is:
\[ \chi(X,\mathcal{F}) = Q_n({\rm rk}(\mathcal{F}), {\mathrm{c}}_1(\mathcal{F}),\ldots, {\mathrm{c}}_n(\mathcal{F}); {\mathrm{c}}_1(\Omega^1_{X}),\ldots, {\mathrm{c}}_n(\Omega^1_{X})) = Q_n({\mathrm{ch}}(\mathcal{F}); {\mathrm{c}}_1(\Omega^1_{X}),\ldots, {\mathrm{c}}_n(\Omega^1_{X})). \]
The polynomial is given explicitly by:
\[ Q_n( {\rm rk}(\mathcal{F}),{\mathrm{c}}_1(\mathcal{F}),\ldots, {\mathrm{c}}_n(\mathcal{F}); {\mathrm{c}}_1(\Omega^1_{X}),\ldots, {\mathrm{c}}_n(\Omega^1_{X})) = \deg_n({\mathrm{ch}}(\mathcal{F}){\rm Todd}(\Omega^1_X)). \]
Recall that we interpret ${\mathrm{ch}}(\mathcal{F})$ as a vector determining all of ${\rm rk}(\mathcal{F}), {\mathrm{c}}_1(\mathcal{F}),\ldots, {\mathrm{c}}_n(\mathcal{F})$ and ${\mathrm{ch}}(\underline{x})$ as the corresponding vector where the $x_i$ are substituted in the universal expression for ${\mathrm{ch}}(\mathcal{F})$.
\end{thm}
We have the following explicit formulas for $Q_n$ for small $n$:
\begin{align*}
Q_0(x_0) &= x_0 \\
Q_1(x_0,x_1; y_1) &= \frac{1}{2}x_0y_1 + x_1\\
Q_2(x_0,x_1,x_2; y_1,y_2) &= \frac{1}{12}x_0(y_1^2+y_2) + \frac{1}{2}x_1y_1 + \frac{1}{2}(x_1^2-2x_2) \\
Q_3(x_0,x_1,x_2,x_3; y_1,y_2,y_3) &= \frac{1}{24}x_0y_1y_2 + \frac{1}{12}x_1(y_1^2+y_2)+ \frac{1}{4}(x_1^2-2x_2)y_1 + \frac{1}{6}(x_1^3-3x_1x_2+3x_3) \\
Q_4(x_0,\ldots,x_4; y_1,\ldots,y_4) &= \frac{1}{720}x_0(-y_1^4 + 4y_1^2y_2 + y_1y_3 + 3y_2^2 - y_4) + \frac{1}{24}x_1y_1y_2 + \cdots
\end{align*}
\begin{rmk}
The most important feature of the explicit description we shall make use of is that
\[ {\rm Todd}(\mathcal{E}_1 \oplus \mathcal{E}_2) = {\rm Todd}(\mathcal{E}_1) {\rm Todd}(\mathcal{E}_2) \]
so that $Q_n$ is effectively multiplicative in the ${\mathrm{c}}_i(\Omega^1_{X})$ set of parameters.
\end{rmk}
The following proposition makes precise what we mean by multiplicative.
\begin{prop}\label{prop:multQ}
For notational convenience in the following we use the constants $u_0=v_0=1$ and $u_i=v_i=0$ for $i<0$.
Consider formal variables $u_1,\ldots, u_n$ and $v_1,\ldots,v_n$ and set $y_i = \sum_{j+k = i} u_jv_k$ then
\[ Q_n({\mathrm{ch}}(\underline{x}); y_1,\ldots,y_n) = \sum_{\ell + m = n} Q_\ell({\mathrm{ch}}(\underline{x}); u_1,\ldots,u_\ell) Q_m(1; v_1,\ldots,v_m). \]
\end{prop}
\begin{proof}
Denote by ${\rm Todd}(\underline{y}),\,{\rm Todd}(\underline{u}),\,{\rm Todd}(\underline{v})$ the universal expression for the Chern character or Todd class where we substitute the appropriate set of variables for the Chern classes.
We then have:
\begin{align*}
Q_n({\mathrm{ch}}(\underline{x}); y_1,\ldots,y_n) &= \deg_n( {\mathrm{ch}}(\underline{x}) {\rm Todd}(\underline{y}))\\
&= \deg_n( {\mathrm{ch}}(\underline{x}) {\rm Todd}(\underline{u}){\rm Todd}(\underline{v}))\\
&= \sum_{\ell+m=n} \deg_\ell( {\mathrm{ch}}(\underline{x}) {\rm Todd}(\underline{u}))\deg_m({\rm Todd}(\underline{v}))\\
&= \sum_{\ell + m = n} Q_\ell({\mathrm{ch}}(\underline{x}); u_1,\ldots,u_\ell) Q_m(1; v_1,\ldots,v_m).\qedhere
\end{align*}
\end{proof}
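Proposition \ref{prop:multQ} can likewise be verified numerically for small $n$; the following sketch checks the case $n=2$ using the explicit formulas for $Q_0$, $Q_1$, $Q_2$ above (the variable names are ours):

```python
from fractions import Fraction as F

def Q1(x, y1):
    return x[0] * y1 / 2 + x[1]

def Q2(x, y1, y2):
    return x[0] * (y1 ** 2 + y2) / 12 + x[1] * y1 / 2 \
        + (x[1] ** 2 - 2 * x[2]) / 2

# rational stand-ins for the formal variables
x = [F(3), F(-2), F(5, 7)]
u1, u2 = F(1, 2), F(4, 3)
v1, v2 = F(-3), F(2, 5)

# y is the convolution of u and v (with u_0 = v_0 = 1)
y1, y2 = u1 + v1, u2 + u1 * v1 + v2

one = [F(1), F(0), F(0)]  # ch of the trivial bundle: rank 1, no higher classes
lhs = Q2(x, y1, y2)
rhs = Q2(x, u1, u2) + Q1(x, u1) * Q1(one, v1) + x[0] * Q2(one, v1, v2)
assert lhs == rhs
```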
\begin{rmk}
The same formula holds if we use instead the system of polynomials
\[ Q_n(x_0,x_1,\ldots,x_n; y_1,\ldots,y_n) = y_n \]
which give the topological Euler characteristic. The algebraic Euler characteristic of $X$ is just the special case of $\mathcal{F} = \mathcal{O}_X$.
\end{rmk}
\begin{nota}\label{not:2}
We shall also need the following terminology and combinatorial quantities. Note that these are all universal and depend only on the choice of multiplicative sequence $Q$. These constants can all be effectively computed.
\begin{enumerate}
\item
Given any monomial exponent $\underline{b}$ we shall denote by:
\[ {\delta}_{\underline{b}} \]
the coefficient of $D^{\underline{b}}$ in $Q_{\abs{\underline{b}}}(1;\Delta_1,\ldots,\Delta_{\abs{\underline{b}}})$. Note that these coefficients depend only on the monomial type of $\underline{b}$, that is, the multiset of non-zero entries of $\underline{b}$. In particular ${\delta}_{(2,0,1)} = {\delta}_{(1,2,0)} = {\delta}_{(2,1)}$.
In the context we are working, where $Q$ describes the algebraic Euler characteristic, this is also precisely the coefficient of $D^{\underline{b}}$ in:
\[ \prod_{D\in \Delta} \frac{D}{1-e^{-D}}. \]
For example, given that:
\[ Q_2(1,\Delta_1,\Delta_2) = \frac{1}{12}(\Delta_1^2+\Delta_2) = \frac{1}{12}\sum_i D_i^2 + \frac{1}{4}\sum_{i< j} D_iD_j \]
we have that:
\[ {\delta}_{(2)} = \frac{1}{12} \qquad {\delta}_{(1,1)} = \frac{1}{4}. \]
Likewise given that:
\[ Q_3(1,\Delta_1,\Delta_2,\Delta_3) = \frac{1}{24}\Delta_{1}\Delta_2 = 0\cdot\sum_i D_i^{3} + \frac{1}{24}\sum_{i\neq j} D_i^2D_j + \frac{1}{8}\sum_{i<j<k} D_iD_jD_k \]
we have that:
\[ {\delta}_{(3)} = 0 \qquad {\delta}_{(2,1)} = \frac{1}{24} \qquad {\delta}_{(1,1,1)} = \frac{1}{8}. \]
We can likewise compute that:
\[ {\delta}_{(0)} = 1 \qquad {\delta}_{(1)} = \frac{1}{2}. \]
\item
We may think of the monomial exponents $\underline{b}$ as vectors indexed by the elements $D$ of $\Delta$.
As such, given two monomial exponents $\underline{b}$ and $\underline{b}'$ we shall write $\underline{b} \leq \underline{b}'$ if the inequality holds componentwise, so that we may write:
\[ D^{\underline{b}}D^{\underline{b}''} = D^{\underline{b}'} \]
for some $\underline{b}''$ with all components $b_i''\ge0$.
By the support of a monomial exponent $\underline{b}$ we mean the collection of $D_i$ for which $b_i\neq 0$. We say $\underline{a}$ and $\underline{b}$ have disjoint support if the corresponding collections have no common elements.
Given a monomial exponent $\underline{b}$ we shall say it is multiplicity free, abbreviated MF, if $b_i \leq 1$ for all $i$, otherwise, we shall say it is not multiplicity free, abbreviated NMF.
Note that a monomial exponent is MF precisely when computing $D^{\underline{b}}$ involves no self intersections.
Finally, given a collection of monomial exponents $\underline{b}_j$ we shall write:
\[ \sum_j \underline{b}_j = \underline{b} \]
if this is true as a vector sum.
\begin{prop}\label{prop:constC}
If $\underline{b}_j$ have disjoint support then:
\[ {\delta}_{\sum_j \underline{b}_j} = \prod_j {\delta}_{\underline{b}_j}. \]
\end{prop}
\begin{proof}
This follows immediately from the multiplicativity of $Q$ as in Proposition \ref{prop:multQ} and the observation that:
\[ {\mathrm{c}}_i \left( \underset{D \in \Delta}\oplus \mathcal{O}(D)\right) = \Delta_i. \qedhere\]
\end{proof}
\item Given a monomial exponent $\underline{b}$ denote by $\tilde{\underline{b}}$ the monomial exponent such that \[ \tilde{b}_i = \min(1,b_i),\] so that $\tilde{\underline{b}}$ captures the support of $\underline{b}$ but $\tilde{\underline{b}}$ is MF.
For example $\widetilde{(1,2,3)} = (1,1,1)$.
Moreover, we shall denote by $\hat{\underline{b}}$ the monomial exponent such that
\[ \hat{b}_i = \begin{cases} 1 & b_i = 1 \\ 0 & \text{otherwise}, \end{cases} \]
so that $\hat{\underline{b}}$ captures the part of the support of $\underline{b}$ where $\underline{b}$ has no self intersection.
For example $\widehat{(2,1,3)} = (0,1,0)$.
\item Given a monomial exponent $\underline{b}$ we shall denote by:
\[ {\lambda}_{\underline{b}} = \sum_{k\ge 0} (-1)^{k+1} \underset{\sum {\underline{b}_j} = \underline{b}}{\sum_{(\underline{b}_1,\ldots,\underline{b}_k)}} \left(\prod_{j=1}^k {\delta}_{\underline{b}_j}\right) = {\delta}_{\underline{b}} \sum_{k\ge 0}(-1)^{k+1} \underset{\sum {\underline{b}_j} = \underline{b}}{\sum_{(\underline{b}_1,\ldots,\underline{b}_k)}} 1. \]
In the summation we consider only terms with all $\abs{\underline{b}_j} \ge 1$ and where in the tuple $(\underline{b}_1,\ldots,\underline{b}_k)$ all of $\underline{b}_j$ have disjoint support and each of $\underline{b}_1,\ldots,\underline{b}_{k-1}$ are MF, so that only $\underline{b}_k$ is potentially NMF.
Note that when $\underline{b}$ is MF, these last three conditions are automatic. For $k$ sufficiently large the inner sum is empty.
Under these conditions the equality between the two definitions is immediate from Proposition \ref{prop:constC}.
\begin{prop}\label{prop:constD}
When $\underline{b}$ is MF we have:
\[ \sum_{k\ge 0} (-1)^{k+1} \underset{\sum {\underline{b}_j} = \underline{b}}{\sum_{(\underline{b}_1,\ldots,\underline{b}_k)}} 1 = (-1)^{\abs{\underline{b}}+1} \]
where the sum is taken as above.
\end{prop}
\begin{proof}
Each tuple $(\underline{b}_1,\ldots,\underline{b}_k)$ contributing to the above summation
describes an ordered factorization of $D^{\underline{b}}=D_1\cdots D_\ell$ into $k$ non-trivial coprime parts.
Denote by $ N_{k,\ell} $
the number of such length $k$ factorizations.
Using that $D_\ell$ is a factor of $D^{\underline{b}_j}$ for a unique $j$ we may uniquely associate to each length $k$ ordered factorization of $D_1\cdots D_\ell$ an ordered factorization of $D_1\cdots D_{\ell-1}$ of either length $k$ or length $k-1$ as follows:
\begin{itemize}
\item If $D^{\underline{b}_j} \neq D_\ell$ then replace $\underline{b}_j$ by $\underline{b}_j'$ where $D^{\underline{b}_j} = D^{\underline{b}_j'}D_\ell$. This gives a length $k$ factorization.
\item If $D^{\underline{b}_j} = D_\ell$ we omit $\underline{b}_j$ from the factorization entirely, and shift down the indices on $\underline{b}_i$ for $i>j$. This gives a length $k-1$ factorization.
\end{itemize}
As we run over all the ordered factorizations of $D_1\cdots D_{\ell}$ each length $k$ and each length $k-1$ ordered factorization of $D_1\cdots D_{\ell-1}$ occurs exactly $k$ times. We thus obtain a recurrence relation $ N_{k,\ell} = kN_{k,\ell-1} + k N_{k-1,\ell-1}$ and a straightforward computation yields that:
\[ \sum_{k\ge 0} (-1)^{k+1}N_{k,\ell} = - \sum_{k\ge 0} (-1)^{k+1}N_{k,\ell-1}.\]
The claim now follows by an induction on $\ell = \abs{\underline{b}}$.
\end{proof}
\begin{prop}\label{prop:constDNMF}
When $\underline{b}$ is NMF and $\abs{\hat{\underline{b}}} \ge 1$ then:
\[ \sum_{k\ge 0} (-1)^{k+1} \underset{\sum {\underline{b}_j} = \underline{b}}{\sum_{(\underline{b}_1,\ldots,\underline{b}_k)}} 1 = 0 \]
where the sum is taken as above.
\end{prop}
\begin{proof}
Every ordered factorization of $D^{\underline{b}}$ into $k$ non-trivial coprime parts where only the last one is NMF induces an ordered factorization of $D^{\hat{\underline{b}}}$ into either $k-1$ non-trivial coprime parts or $k$ non-trivial coprime parts.
Each factorization of $D^{\hat{\underline{b}}}$ arises in exactly two ways. It follows that:
\[ \sum_{k\ge 0} (-1)^{k+1} \underset{\sum {\underline{b}_j} = \underline{b}} {\sum_{(\underline{b}_1,\ldots,\underline{b}_k)}} 1 =
\sum_{k\ge 0} (-1)^{k+1} \underset{\sum {\underline{b}_j} = \hat{\underline{b}}}{\sum_{(\underline{b}_1,\ldots,\underline{b}_k)}} 1 -
\sum_{k\ge 0} (-1)^{k+1} \underset{\sum {\underline{b}_j} = \hat{\underline{b}}}{\sum_{(\underline{b}_1,\ldots,\underline{b}_k)}} 1 = 0\]
which gives the desired result.
\end{proof}
The constants ${\lambda}_{\underline{b}}$ shall be used in Corollaries \ref{cor:secondaryinduction}, \ref{cor:leprim/imprim} and \ref{cor:main}.
As an example, by considering the different ordered decompositions of $(1,1,1)$, for instance:
\[ (1,1,1) \quad (1,1,0) + (0,0,1) \quad (0,0,1) +(1,1,0) \quad (1,0,1) +(0,1,0) \quad \ldots \]
including also the $6$ permutations of $ (1,0,0) + (0,1,0) + (0,0,1)$,
we see that:
\[ {\lambda}_{(1,1,1) } = {\delta}_{(1,1,1)} - 6{\delta}_{(1,1)}{\delta}_{(1)} + 6{\delta}_{(1)}^3 = \frac{1}{8}. \]
We can also compute that:
\[ {\lambda}_{(0)} = -1 \qquad {\lambda}_{(1)} = {\delta}_{(1)} = \frac{1}{2} \qquad {\lambda}_{(1,1)} = {\delta}_{(1,1)} -2 {\delta}_{(1)}^2 = -\frac{1}{4} \]
\[ {\lambda}_{(2)} = {\delta}_{(2)} = \frac{1}{12} \qquad {\lambda}_{(2,1)} = {\delta}_{(2,1)} - {\delta}_{(2)}{\delta}_{(1)} = 0 \qquad {\lambda}_{(3)} = {\delta}_{(3)} = 0. \]
\end{enumerate}
\end{nota}
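All of the constants above can be computed mechanically. The sketch below works with the algebraic-Euler-characteristic choice of $Q$, so that $\delta_{\underline{b}}$ is read off from $\prod_D D/(1-e^{-D})$; it recovers the values of $\delta$ and $\lambda$ listed above, and checks the alternating sum over ordered factorizations of an MF exponent of degree $\ell$, using $N_{k,\ell}=k!\,S(\ell,k)$ with $S$ the Stirling number of the second kind:

```python
from fractions import Fraction as F
from math import factorial, prod

# Todd coefficients t_k of x/(1 - e^{-x}), by inverting (1 - e^{-x})/x
s = [F((-1) ** k, factorial(k + 1)) for k in range(5)]
t = [F(1)]
for n in range(1, 5):
    t.append(-sum(s[j] * t[n - j] for j in range(1, n + 1)))
assert t == [F(1), F(1, 2), F(1, 12), F(0), F(-1, 720)]

def delta(b):
    """delta_b = prod_i t_{b_i}: coefficient of D^b in prod_i td(D_i)."""
    return prod((t[e] for e in b), start=F(1))

assert delta((2,)) == F(1, 12) and delta((1, 1)) == F(1, 4)
assert delta((2, 1)) == F(1, 24) and delta((1, 1, 1)) == F(1, 8)

# N_{k,l} = k! * S(l, k): ordered factorizations of an MF monomial of
# degree l into k nonempty disjoint parts (ordered set partitions)
def stirling2(n, k):
    if n == 0:
        return int(k == 0)
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1) if k else 0

def N(k, l):
    return factorial(k) * stirling2(l, k)

# for MF exponents the alternating sum collapses to a sign
for l in range(1, 7):
    assert sum((-1) ** (k + 1) * N(k, l) for k in range(l + 1)) == (-1) ** (l + 1)

# the worked values of lambda from the text
assert -delta((1, 1)) == F(-1, 4)                      # lambda_(1,1)
assert delta((1, 1, 1)) == F(1, 8)                     # lambda_(1,1,1)
assert delta((2, 1)) - delta((2,)) * delta((1,)) == 0  # lambda_(2,1)
```

The sign of the alternating sum agrees with the examples $\lambda_{(0)}=-1$, $\lambda_{(1)}=\frac12$ and $\lambda_{(1,1)}=-\frac14$ computed above.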
\begin{prop}\label{prop:LogChernvsChern}
Let $X$ be a smooth projective variety and $\Delta = \cup \{D_i\}$ be a collection of smooth divisors with simple normal crossings on $X$.
We have a relation:
\[ {\mathrm{c}}_i(\Omega^1_{X}) = \sum_j (-1)^{i-j}{\mathrm{c}}_j(\Omega^1_{X}(\log \Delta))\Delta_{i-j}. \]
Recall $\Delta_k$ is the $k$-th elementary symmetric polynomial in the irreducible components of the boundary of $X$.
This can also be expressed as:
\[ {\mathrm{c}}(\Omega^1_{X}) = {\mathrm{c}}(\Omega^1_{X}(\log \Delta))\prod_{D_i}(1-{D_i}). \]
\end{prop}
\begin{proof}
We follow essentially an argument for an analogous result from \cite[Prop. 1.2]{Tsushima}.
We have the following two exact sequences:
\[ \xymatrix{
0 \ar[r]& \Omega^1_{X} \ar[r]&\Omega^1_{X}(\log \Delta) \ar[r]& \oplus \mathcal{O}_{D_i} \ar[r]& 0\\
0 \ar[r]& \mathcal{O}_{X}(-D_i) \ar[r]& \mathcal{O}_{X} \ar[r]& \mathcal{O}_{D_i} \ar[r]& 0.
}\]
The first of these essentially defines $\Omega^1_{X}(\log \Delta)$.
By the multiplicativity of the total Chern class we obtain:
\[ {\mathrm{c}}(\Omega_X^1) = {\mathrm{c}}(\Omega_X^1(\log \Delta)) \prod_{D_i} (1-D_i). \qedhere\]
\end{proof}
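Since Proposition \ref{prop:LogChernvsChern} is an identity between graded classes, it can be spot-checked by truncated polynomial arithmetic with rational stand-ins for the logarithmic Chern classes and the divisor classes; a sketch (the dimension and the sample values are ours):

```python
from fractions import Fraction as F
from itertools import combinations
from math import prod

N = 3  # truncate in degrees > N (think dim X = 3)

def mul(a, b):
    """Multiply total classes, kept as coefficient lists by degree."""
    out = [F(0)] * (N + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j <= N:
                out[i + j] += ai * bj
    return out

# rational stand-ins for c_j(Omega^1_X(log Delta)) and the classes D_i
clog = [F(1), F(2), F(-1, 3), F(5)]
ds = [F(1, 2), F(-3), F(7, 5)]

# c(Omega^1_X) = c(Omega^1_X(log Delta)) * prod_i (1 - D_i)
total = clog
for d in ds:
    total = mul(total, [F(1), -d, F(0), F(0)])

# Delta_k = k-th elementary symmetric polynomial in the D_i
Delta = [sum(prod(c) for c in combinations(ds, k)) for k in range(N + 1)]
for i in range(N + 1):
    assert total[i] == sum((-1) ** (i - j) * clog[j] * Delta[i - j]
                           for j in range(i + 1))
```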
\begin{prop}\label{prop:CHonBDY}
Logarithmic Chern classes restrict to the boundary.
That is, let $X$ be a smooth projective variety and $\Delta = \cup \{D_i\}$ be a collection of smooth divisors with simple normal crossings on $X$.
Suppose $D \in \Delta$ is a fixed irreducible divisor then:
\[ {\mathrm{c}}^\alpha(\Omega_{X}^1(\log \Delta)) \cdot D = {\mathrm{c}}^\alpha(\Omega_{D}^1(\log \Delta')). \]
This equality should be interpreted as an equality on $D$.
\end{prop}
\begin{proof}
The result is analogous to \cite[Lem. 5.1]{Tsushima}; this proof was suggested by the referee.
By Proposition \ref{prop:LogChernvsChern} we have:
\[ {\mathrm{c}}(\Omega_{X}^1(\log \Delta)) = {\mathrm{c}}(\Omega_{X}^1) \frac1{(1-D)}\prod_{D_i\neq D} \frac1{(1-D_i)}. \]
As $ {\mathrm{c}}(\Omega_{X}^1) \frac1{(1-D)}$ restricts to $ {\mathrm{c}}(\Omega_{D}^1)$ on $D$ the right hand side of the above expression restricts to:
\[ {\mathrm{c}}(\Omega_{D}^1)\prod_{D_i\neq D} \frac1{(1-D_i')} \]
which in turn equals $ {\mathrm{c}}(\Omega_{D}^1(\log \Delta')) $ by Proposition \ref{prop:LogChernvsChern}.
As the Chern classes agree, so too do their products.
\end{proof}
\begin{nota}
Consider $\pi: X \rightarrow Y$ a ramified covering.
For $Z \subset X$ irreducible we shall denote by $e_Z$ the ramification degree of $\pi$ at $Z$ as it is defined in \cite[Ex. 4.3.4]{FultonIntersection1}.
We note that in the context of smooth varieties by \cite[Prop. 7.1]{FultonIntersection1} we can compute the ramification degree as:
\[ e_Z = {\rm length}(\mathcal{O}_{X,Z} \otimes_{\mathcal{O}_Y} \mathcal{O}_{Y,\pi(Z)}/J_{\pi(Z)}). \]
In the expression above, $J_{\pi(Z)}$ is the ideal associated to $\pi(Z)$ and the length is that of the ring as a module over itself.
\end{nota}
The following proposition is well known (see for example \cite[Ex. 4.3.7]{FultonIntersection1}); though we will not make direct use of it, the statement motivates our understanding of ramification.
\begin{prop}\label{prop:sumramdeg}
Let $X$ and $Y$ be smooth projective varieties.
Consider $\pi: X \rightarrow Y$ a potentially ramified finite covering of degree $\mu$.
For any $Z' \subset Y$ irreducible, if we decompose $\pi^{-1}(Z') = \cup_i Z_i$ into irreducible components then:
\[ \sum_i \mu_{Z_i}e_{Z_i} = \mu, \]
where $\mu_{Z_i}$ is the degree of $\pi|_{Z_i}$.
\end{prop}
\begin{nota}\label{not:ERA}
Fix a ramified covering $\pi : X \rightarrow Y$ of smooth projective varieties of dimension $n$.
The collection of reduced irreducible components of the branch locus shall be denoted $\Delta(B)$, and we shall denote monomial exponents for the branch locus by $\underline{b}$ and write $B^{\underline{b}}$ for the associated equivalence class of cycle.
The collection of reduced irreducible components of the ramification locus shall be denoted $\Delta(R)$, and we shall denote monomial exponents for the ramification locus by $\underline{a}$ and write $R^{\underline{a}}$ for the associated equivalence class of cycle.
Recall that $\Delta(R) = \pi^{-1}(\Delta(B))$ includes all components $R_j$ of $\pi^{-1}(B_i)$, even those which may not themselves be ramified.
For an irreducible component $R_i$, it is clear that $\pi(R_i) = B_j$ for a unique $j$.
Given a pair of monomial exponents $\underline{a}$ and $\underline{b}$ we shall say $\pi(\underline{a}) = \underline{b}$ if for each $j$ we have:
\[ b_j = \sum_{\pi(R_i) = B_j} a_i . \]
We shall denote by:
\[ E_{R^{\underline{a}}} = \prod_i (e_{R_i})^{a_i} \]
the product of the ramification degrees. This notation is justified by Lemma \ref{lem:cute2} which says that when $\underline{a}$ is MF then $E_{R^{\underline{a}}}$ is the ramification degree of each irreducible component of $R^{\underline{a}}$.
\end{nota}
\begin{lemma}\label{lem:cute1}
Consider $\pi: X \rightarrow Y$ a potentially ramified finite map between smooth projective varieties.
Suppose $D_1$ and $D_2$ are two (reduced irreducible) divisors on $X$ which meet with simple normal crossings and that $\pi(D_1) = \pi(D_2) = D$ is smooth.
Let $Z$ be a (reduced) irreducible component of $D_1\cap D_2$.
Then there is a component $R$ of the ramification locus of $\pi$ such that $Z \subset R$ and $R \not\in \{ D_1,D_2 \}$.
In particular the collection $ \{ D_1,D_2, R \}$ does not have simple normal crossings.
\end{lemma}
\begin{proof}
We will consider the completed local rings at $Z$ and $\pi(Z)$. By the Cohen structure theorem (for regular complete local rings, see \cite[Tag 0323 Lemma 10.154.10]{stacks-project}) these are power series rings over the residue field, so we can write them in the form:
\[ K(Z)[[s_1,s_2]] \quad \text{and}\quad K( \pi(Z))[[t_1,t_2]] \]
where $s_i$ is the local coordinate defining $D_i$ on $X$ near $Z$, the coordinate $t_1$ defines $D$ on $Y$ near $\pi(Z)$ and $t_2$ is any other local coordinate defining a divisor which meets $D$ transversely at $\pi(Z)$.
Since $\pi(D_1)=\pi(D_2)=D$, we may choose the coordinates $s_1$ and $s_2$ so that $\pi^\ast(t_1) = us_1^{a_1}s_2^{a_2}$ with $a_1,a_2\ge1$ and $u\in K(Z)^\times$.
Since the map is finite, neither $s_1$ nor $s_2$ divides $\pi^\ast(t_2)$; moreover, $\pi^\ast(t_2)$ vanishes at $Z$ and thus has trivial constant term.
It follows that
\[ \pi^\ast(t_2) = v_1s_1^{b_1} + v_2s_2^{b_2} + (\text{other terms (not including those monomials)})\]
with $b_1,b_2 \ge 1$ and $v_1,v_2 \in K(Z)^\times$.
We can understand the ramification locus near $Z$ by way of the Jacobian condition.
The Jacobian determinant is precisely:
\[ a_1us_1^{a_1-1}s_2^{a_2} \left( b_2v_2s_2^{b_2-1}+ \frac{\partial (\text{other terms})}{\partial s_2}\right) - a_2us_1^{a_1}s_2^{a_2-1} \left( b_1v_1s_1^{b_1-1} + \frac{\partial (\text{other terms})}{\partial s_1} \right). \]
As the expressions $s_2 \frac{\partial (s_1^{\ell_1}s_2^{\ell_2})}{\partial s_2}$ and $s_1 \frac{\partial (s_1^{\ell_1}s_2^{\ell_2})}{\partial s_1}$ both have the same monomial type, namely $s_1^{\ell_1}s_2^{\ell_2}$, as the starting monomial, we may rewrite the Jacobian above as:
\[ us_1^{a_1-1}s_2^{a_2-1}( a_1b_2v_2s_2^{b_2} - a_2b_1v_1s_1^{b_1} + (\text{other terms (not including those monomials)})). \]
The factor $(a_1b_2v_2s_2^{b_2} - a_2b_1v_1s_1^{b_1} + (\text{other terms}))$ vanishes at $Z$ and is divisible by neither $s_1$ nor $s_2$; it thus defines at least one component of the ramification locus that passes through $Z$ and is not equal to $D_1$ or $D_2$.
\end{proof}
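For sample exponents one can check this factorization of the Jacobian determinant symbolically (a sketch using sympy; the exponent values are arbitrary choices, the ``other terms'' are omitted, and the determinant is only meaningful up to a unit):

```python
import sympy as sp

s1, s2 = sp.symbols('s1 s2')
u, v1, v2 = sp.symbols('u v1 v2', nonzero=True)
a1, a2, b1, b2 = 2, 3, 2, 1      # arbitrary sample exponents, all >= 1

f = u * s1**a1 * s2**a2          # pi^*(t_1)
g = v1 * s1**b1 + v2 * s2**b2    # pi^*(t_2), omitting the "other terms"

# Jacobian determinant of (f, g) with respect to (s1, s2)
jac = sp.diff(f, s1) * sp.diff(g, s2) - sp.diff(f, s2) * sp.diff(g, s1)

# It factors as u * s1^(a1-1) * s2^(a2-1) * h for a factor h which
# vanishes along Z = {s1 = s2 = 0} but is divisible by neither s1 nor s2:
h = sp.expand(sp.cancel(jac / (u * s1**(a1 - 1) * s2**(a2 - 1))))
assert h.subs({s1: 0, s2: 0}) == 0   # h vanishes at Z
assert h.subs(s1, 0) != 0            # s1 does not divide h
assert h.subs(s2, 0) != 0            # s2 does not divide h
```

The factor $h$ plays the role of the extra ramification component through $Z$ produced in the proof.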
\begin{lemma}\label{lem:cute2}
Consider $\pi: X \rightarrow Y$ a potentially ramified finite map between smooth projective varieties.
Let $\Delta(B)$ be the collection of irreducible components of the branch locus (on $Y$) and $\Delta(R) = \pi^{-1}(\Delta(B))$ be the collection of (reduced) irreducible components of the ramification locus (on $X$). Suppose $\Delta(B)$ and $\Delta(R)$ have simple normal crossings.
If $R_1,\ldots R_\ell \in \Delta(R)$ are distinct and if $Z$ is a (reduced) irreducible component of $\cap_i R_i$ then:
\[ e_Z = \prod_i e_{R_i}. \]
\end{lemma}
\begin{proof}
We will consider the completed local rings at the generic points of $Z$ and $\pi(Z)$; these are of the form:
\[ K(Z)[[s_1,\ldots,s_{\ell}]] \quad \text{and}\quad K( \pi(Z))[[t_1,\ldots,t_{\ell}]] \]
where $s_i$ is a local parameter defining $R_i$, and $t_i$ a local parameter defining $B_i = \pi(R_i)$. That $\pi(R_i)$ are all distinct follows from Lemma \ref{lem:cute1}.
It follows from this setup that we may choose the local coordinate $s_i$ so that $\pi^\ast(t_i) = u_is_i^{a_i}$ with $a_i \ge 1$ and $u_i\in K(Z)^\times$.
The claim now follows from a direct computation of lengths;
in particular $e_{R_i} = a_i$ and $e_Z = \prod_i a_i$.
\end{proof}
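In a hypothetical local model $t_i \mapsto s_i^{a_i}$ the length computation reduces to counting monomials (a toy check with arbitrarily chosen exponents):

```python
# length of k[[s1, s2]]/(s1^a1, s2^a2): a monomial basis is given by
# s1^i s2^j with 0 <= i < a1 and 0 <= j < a2, so the length is a1 * a2.
a1, a2 = 2, 3   # arbitrary sample ramification degrees e_{R_1}, e_{R_2}
basis = [(i, j) for i in range(a1) for j in range(a2)]
assert len(basis) == a1 * a2 == 6   # e_Z = e_{R_1} * e_{R_2}
```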
\begin{lemma}\label{lem:cute3}
Consider $\pi: X \rightarrow Y$ a potentially ramified finite map between smooth projective varieties.
Let $\Delta(B)$ be the collection of irreducible components of the branch locus (on $Y$) and $\Delta(R) = \pi^{-1}(\Delta(B))$ be the collection of (reduced) irreducible components of the ramification locus (on $X$). Suppose $\Delta(B)$ and $\Delta(R)$ have simple normal crossings.
\begin{enumerate}
\item If $\pi(\underline{a}) = \underline{b}$, and the monomial type of $\underline{a}$ and $\underline{b}$ are not the same then:
\[ R^{\underline{a}} = 0. \]
\item If $\pi(\underline{a}) = \underline{b}$, and the monomial type of $\underline{a}$ and $\underline{b}$ are the same then in the formal expansion:
\[ \pi^\ast(B^{\underline{b}}) = \prod_i \pi^{\ast}(B_i)^{b_i} = \prod_i \left( \sum_{\pi(R_j) = B_i} e_{R_j}R_j \right)^{b_i} = \sum_{\pi(\underline{a})=\underline{b}} x_{\underline{a}}R^{\underline{a}} \]
the coefficient $x_{\underline{a}}$ of $R^{\underline{a}}$ is $E_{R^{\underline{a}}}$.
\item We have the following identity in the Chow ring:
\[ \pi^\ast(B^{\underline{b}}) = \sum_{\pi(\underline{a}) = \underline{b}} E_{R^{\underline{a}}} R^{\underline{a}}. \]
\end{enumerate}
\end{lemma}
\begin{proof}
The first statement follows immediately from Lemma \ref{lem:cute1}: if the monomial types are not the same, then the expression $R^{\underline{a}}$ involves intersecting two distinct components which map to the same $B_i$. If these two components had nontrivial intersection, then by Lemma \ref{lem:cute1} the ramification locus would not have simple normal crossings; hence their intersection is trivial and $R^{\underline{a}} = 0$.
The second statement is a straightforward check and indeed is a basic property of multinomial coefficients.
The third statement then combines the previous two by observing that $R^{\underline{a}} = 0$ whenever the coefficient of $R^{\underline{a}}$ is not $E_{R^{\underline{a}}}$.
\end{proof}
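The multinomial-coefficient check in the second statement can be illustrated symbolically (a sketch with hypothetical data: components $R_1,R_2$ over $B_1$ with $b_1=3$, and $R_3$ over $B_2$ with $b_2=2$):

```python
import sympy as sp

e1, e2, e3, R1, R2, R3 = sp.symbols('e1 e2 e3 R1 R2 R3')

# Formal expansion of pi^*(B^b) = (e1*R1 + e2*R2)^3 * (e3*R3)^2
expansion = sp.expand((e1*R1 + e2*R2)**3 * (e3*R3)**2)

# The coefficient of R^a = R1^3 * R3^2 (same monomial type as b) is
# E_{R^a} = e1^3 * e3^2, the product of the ramification degrees.
coeff = expansion.coeff(R1, 3).coeff(R3, 2)
assert coeff == e1**3 * e3**2
```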
\begin{thm}\label{thm:pullbacklog}
Logarithmic Chern classes respect pullbacks through ramified covers.
That is, let $X$ and $Y$ be smooth projective varieties of dimension $n$.
Consider $\pi: X \rightarrow Y$ a potentially ramified finite covering.
Let $\Delta(B)$ be the collection of irreducible components of the branch locus (on $Y$) and $\Delta(R) = \pi^{-1}(\Delta(B))$ be the collection of (reduced) irreducible components of the ramification locus (on $X$).
Suppose that $\Delta(R)$ and $\Delta(B)$ consist of simple normal crossing divisors.
Then:
\[ \pi^\ast( \Omega_Y(\log \Delta(B))) = \Omega_X(\log \Delta(R)). \]
Recall that $\Delta(R)$ includes even those irreducible components of $\pi^{-1}(B)$ which are not themselves ramified.
\end{thm}
\begin{proof}
The claim can be checked locally on $Y$.
Suppose $x_1,\ldots,x_n$ are a local system of coordinates at some point $\underline{x}$ of $X$, and $y_1,\ldots,y_n$ are a local system of coordinates near $\underline{y} = \pi(\underline{x})$.
We may suppose that $y_1,\ldots,y_\ell$ define the branch locus of $\pi$ near $\underline{y}$ and further that $\pi^\ast(y_i) = x_i^{a_i}$ so that $x_1,\ldots,x_\ell$ define the ramification locus of $\pi$ near $\underline{x}$ (see proof of Lemma \ref{lem:cute2}).
Set $\epsilon_i = 1$ if $i \leq \ell$ and $0$ otherwise.
Then the bundle $\Omega_Y(\log \Delta(B))$ has a basis of sections near $\underline{y}$ given by:
\[ \frac{dy_1}{y_1^{\epsilon_1}} ,\ldots, \frac{dy_n}{y_n^{\epsilon_n}}. \]
By the choice of $\epsilon_i$ we find that for all $i$:
\[ \frac{d (\pi^\ast y_i)}{ (\pi^\ast y_i)^{\epsilon_i}} = \frac{d (x_i^{a_i})}{ x_i^{a_i \epsilon_i}} = a_i\frac{dx_i}{x_i^{\epsilon_i}}. \]
It follows that $\pi^\ast(\Omega_Y(\log \Delta(B)))$ has a basis of sections near $\underline{x}$ given by:
\[ \frac{dx_1}{x_1^{\epsilon_1}} ,\ldots, \frac{dx_n}{x_n^{\epsilon_n}}. \]
This precisely agrees with the bundle $\Omega_X(\log \Delta(R))$ near $\underline{x}$.
\end{proof}
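The one-variable computation at the heart of the proof, that $dy/y$ pulls back to $a\,dx/x$ under $y = x^a$, can be verified symbolically (the exponent below is an arbitrary sample value):

```python
import sympy as sp

x = sp.symbols('x')
a = 5                       # arbitrary sample ramification index
y = x**a                    # local form of the cover along the divisor x = 0

# pullback of dy/y, written as its coefficient against dx
pullback = sp.diff(y, x) / y
assert sp.simplify(pullback - a / x) == 0   # equals a * (dx/x)
```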
\section{Logarithmic Euler Characteristic}\label{sec:logeuler}
Aside from its present application to a Riemann-Hurwitz formula, the following definition is motivated in part by its appearance in Mumford's work in \cite[Cor. 3.5]{Mumford_Proportionality}.
\begin{df}
Let $X$ be a smooth projective variety and $\Delta$ be a collection of smooth divisors with simple normal crossings on $X$.
We define the logarithmic Euler characteristic of a sheaf $\mathcal{F}$ on $X$ with respect to the boundary $\Delta$ to be:
\[ \chi(X,\Delta,\mathcal{F}) = Q_n( {\mathrm{ch}}(\mathcal{F}); {\mathrm{c}}_1(\Omega^1_{X}(\log \Delta)),\ldots, {\mathrm{c}}_n(\Omega^1_{X}(\log \Delta))). \]
\end{df}
Though it is not a priori clear what use this definition can have, the following theorem shows that it in some sense behaves better than the standard Euler characteristic.
\begin{thm}\label{thm:pullbacklogchern}
Let $X$ and $Y$ be smooth projective varieties.
Consider $\pi: X \rightarrow Y$ a potentially ramified finite covering of degree $\mu$.
Let $\Delta(B)$ be the collection of irreducible components of the branch locus (on $Y$) and $\Delta(R) = \pi^{-1}(\Delta(B))$ be the collection of (reduced) irreducible components of the ramification locus (on $X$). Suppose $\Delta(B)$ and $\Delta(R)$ have simple normal crossings.
Let $\mathcal{F}$ be any coherent sheaf on $Y$ then:
\[ \chi(X,\Delta(R),\pi^\ast(\mathcal{F})) = \mu \cdot \chi(Y,\Delta(B),\mathcal{F}). \]
\end{thm}
\begin{proof}
By Theorem \ref{thm:pullbacklog} (and functoriality) we have that:
\[ \pi^\ast({\mathrm{ch}}(\mathcal{F}){\rm Todd}(\Omega^1_Y(\log \Delta(B)))) = {\mathrm{ch}}(\pi^{\ast}(\mathcal{F})){\rm Todd}(\Omega^1_X(\log \Delta(R))). \]
The result then follows by recalling that the effect of pullback on the degree of a class is to multiply by $\mu$.
\end{proof}
\section{Logarithmic Euler Characteristic vs The Euler Characteristic}\label{sec:logveuler}
The key to obtaining our results is the following comparison between the usual Euler characteristic and the logarithmic Euler characteristic we have just defined.
\begin{thm}\label{thm:eulvslogeul}
Let $X$ be a smooth projective variety and let $\mathcal{F}$ be any coherent sheaf on $X$.
Suppose $\Delta$ is a collection of smooth divisors with simple normal crossings on $X$.
Then
\[
\chi(X,\mathcal{F}) - \chi(X,\Delta,\mathcal{F}) = \sum_{\abs{\underline{b}}\ge 1} (-1)^{\abs{\underline{b}}} {\delta}_{\underline{b}} D^{\underline{b}} Q_{n-\abs{\underline{b}}}({\mathrm{ch}}(\mathcal{F}); {\mathrm{c}}_1(\Omega_X^1(\log \Delta)),\ldots, {\mathrm{c}}_{n-\abs{\underline{b}}}(\Omega_X^1(\log\Delta))) .
\]
The notation $D^{\underline{b}}$ is introduced in \ref{not:1}.(9), the polynomial $Q$ is defined in Theorem \ref{thm:RR}, the constants ${\delta}_{\underline{b}}$ are introduced in \ref{not:2}.(1).
\end{thm}
\begin{proof}
Recall that by Proposition \ref{prop:multQ} we have:
\[ Q_n({\mathrm{ch}}(\underline{x}); y_1,\ldots,y_n) = \sum_{\ell + m = n} Q_\ell({\mathrm{ch}}(\underline{x}); u_1,\ldots,u_\ell) Q_m(1; v_1,\ldots,v_m). \]
In this context, set $x_i = {\mathrm{c}}_i(\mathcal{F})$, $u_i = {\mathrm{c}}_i(\Omega_X^1(\log \Delta))$, and $v_i = (-1)^i\Delta_i$. By Proposition \ref{prop:LogChernvsChern} we then have, in the setting of Proposition \ref{prop:multQ}, that $y_i = {\mathrm{c}}_i(\Omega_X^1)$, and it follows that we can rewrite $ Q_n({\mathrm{ch}}(\mathcal{F}); {\mathrm{c}}_1(\Omega_X^1),\ldots, {\mathrm{c}}_n(\Omega_X^1)) $ as:
\[ \sum_{\ell+m=n} Q_\ell({\mathrm{ch}}(\mathcal{F}); {\mathrm{c}}_1(\Omega_X^1 (\log\Delta))\ldots, {\mathrm{c}}_{\ell}(\Omega_X^1(\log\Delta))) Q_m(1; (-1)^1\Delta_1, (-1)^2\Delta_2,\ldots, (-1)^m\Delta_m). \]
The result then follows from the observation that:
\[ Q_m(1; (-1)^1\Delta_1, (-1)^2\Delta_2,\ldots, (-1)^m\Delta_m) = \sum_{\abs{\underline{b}}=m} (-1)^{\abs{\underline{b}}} {\delta}_{\underline{b}} D^{\underline{b}}. \qedhere\]
\end{proof}
\begin{nota}\label{not:logeuler}
If $\underline{a}$ is multiplicity free, so that $D^{\underline{a}}$ has no self intersections, then we may write
\[ D^{\underline{a}} = \sum_j m_j [C_j] \quad\text{ where }\quad \left(\underset{a_i\neq0}\cap\; D_i\right)_{red} = \underset{j}\cup \;C_j \]
in this setting we interpret $\chi(D^{\underline{a}},\Delta',\mathcal{F}|_{D^{\underline{a}}}) $ to mean:
\[ \chi(D^{\underline{a}},\Delta',\mathcal{F}|_{D^{\underline{a}}}) = \sum_j m_j\chi(C_j,\Delta',\mathcal{F}|_{C_j}) \]
the weighted sum of the logarithmic Euler characteristics of the irreducible components of $D^{\underline{a}}$, the weights being precisely the intersection multiplicities. We interpret $\chi(D^{\underline{a}},\mathcal{F}|_{D^{\underline{a}}}) $ similarly.
Both of these expressions most naturally live on the disjoint union of the irreducible components of $D^{\underline{a}}$.
Note that in the context of simple normal crossings the intersection will already be reduced and the multiplicities, $m_j$, will all be $1$.
\medskip
When $\underline{a}$ is MF we have by Proposition \ref{prop:CHonBDY} that
\[ \chi(D^{\underline{a}},\Delta',\mathcal{F}|_{D^{\underline{a}}}) = D^{\underline{a}} Q_{n-\abs{\underline{a}}}({\mathrm{ch}}(\mathcal{F}); {\mathrm{c}}_1(\Omega_X^1(\log\Delta)),\ldots,{\mathrm{c}}_{n-\abs{\underline{a}}}(\Omega_X^1(\log\Delta))) \]
when this expression is viewed as an equality on the disjoint union of the irreducible components of $D^{\underline{a}}$.
\medskip
By an abuse of notation we shall extend this to the case where there may be self intersections and denote by:
\[ \chi(D^{\underline{a}},\Delta',\mathcal{F}|_{D^{\underline{a}}}) = D^{\underline{a}} Q_{n-\abs{\underline{a}}}({\mathrm{ch}}(\mathcal{F}); {\mathrm{c}}_1(\Omega_X^1(\log\Delta)),\ldots,{\mathrm{c}}_{n-\abs{\underline{a}}}(\Omega_X^1(\log\Delta))) \]
even when $a_i$ are potentially greater than $1$ so that we may interpret $ \chi(D^{\underline{a}},\Delta',\mathcal{F}|_{D^{\underline{a}}}) $ as an object on $X$. This interpretation is compatible with the interpretation as a push-forward whenever $\underline{a}$ is MF.
\end{nota}
\begin{cor}\label{cor:secondaryinduction}
With the same notation as in the Theorem,
if the irreducible components of $\Delta$ have trivial self intersection, then:
\[ \chi(X,\mathcal{F})- \chi(X,\Delta,\mathcal{F}) = \sum_{\abs{\underline{b}}\ge 1} (-1)^{\abs{\underline{b}}}{\lambda}_{\underline{b}} \chi(D^{\underline{b}},\mathcal{F}|_{D^{\underline{b}}}). \]
The notation $D^{\underline{b}}$ is introduced in \ref{not:1}.(9), the constants ${\lambda}_{\underline{b}}$ are introduced in \ref{not:2}.(4).
\end{cor}
\begin{proof}
In the above notation Theorem \ref{thm:eulvslogeul} gives us that:
\[ \chi(X,\mathcal{F})- \chi(X,\Delta,\mathcal{F}) = \sum_{\abs{\underline{b}}\ge 1} (-1)^{\abs{\underline{b}}}{\delta}_{\underline{b}} \chi(D^{\underline{b}},\Delta',\mathcal{F}|_{D^{\underline{b}}}). \]
As the same result allows us to compute $\chi(D^{\underline{b}},\Delta',\mathcal{F}|_{D^{\underline{b}}}) - \chi(D^{\underline{b}},\mathcal{F}|_{D^{\underline{b}}})$ whenever $\underline{b}$ is MF, a recursive process allows us to write:
\[ \chi(X,\mathcal{F}) - \chi(X,\Delta,\mathcal{F}) = \sum_{\abs{\underline{b}}\ge 1} e_{\underline{b}} \chi(D^{\underline{b}},\mathcal{F}|_{D^{\underline{b}}}). \]
We must only show that $e_{\underline{b}} = (-1)^{\abs{\underline{b}}}{\lambda}_{\underline{b}}$.
The coefficient of $\chi(D^{\underline{b}},\mathcal{F}|_{D^{\underline{b}}})$ can be computed by explicitly writing out the result of the recursive process.
The process will yield a sequence of formulas, indexed by $\ell$, of the form:
\begin{align*}
\chi(X,\mathcal{F}) - \chi(X,\Delta,\mathcal{F}) = \sum_{k=1}^{\ell-1} (-1)^{k+1} \sum_{(\underline{b}_1,\ldots,\underline{b}_k)} \left(\prod_{j=1}^k (-1)^{\abs{\underline{b}_j}}{\delta}_{\underline{b}_j}\right) \chi(D^{\sum_{j=1}^k \underline{b}_j}, \mathcal{F}|_{D^{\sum_{j=1}^k \underline{b}_j}}) \\
+ (-1)^{\ell+1} \sum_{(\underline{b}_1,\ldots,\underline{b}_{\ell})} \left(\prod_{j=1}^{\ell} (-1)^{\abs{\underline{b}_j}}{\delta}_{\underline{b}_j}\right) \chi(D^{\sum_{j=1}^{\ell} \underline{b}_j},\Delta' , \mathcal{F}|_{D^{\sum_{j=1}^{\ell} \underline{b}_j}}).
\end{align*}
In the summations the elements of the tuples $(\underline{b}_1,\ldots,\underline{b}_k)$ always have disjoint support and $\abs{\underline{b}_j} \ge 1$.
We note that in the context of this corollary we need never consider any terms where $\underline{b} = \sum_{j=1}^{\ell} \underline{b}_j$ is NMF, since for each such term $D^{\underline{b}}$ vanishes.
The base case of the induction, the case $\ell=1$, is precisely the statement of Theorem \ref{thm:eulvslogeul}.
The formula for $\ell+1$ is obtained from that for $\ell$ by simply expanding every term:
\[ \chi(D^{\sum_{j=1}^{\ell} \underline{b}_j},\Delta' , \mathcal{F}|_{D^{\sum_{j=1}^{\ell} \underline{b}_j}}) = \chi(D^{\sum_{j=1}^{\ell} \underline{b}_j}, \mathcal{F}|_{D^{\sum_{j=1}^{\ell} \underline{b}_j}}) - \sum_{\underline{c}} (-1)^{\abs{\underline{c}}}{\delta}_{\underline{c}}\chi(D^{\underline{c}+\sum_{j=1}^{\ell} \underline{b}_j},\Delta', \mathcal{F}|_{D^{\underline{c}+\sum_{j=1}^{\ell} \underline{b}_j}}) \]
with each term $\underline{c}$ in the summation avoiding the support of $\sum_{j=1}^{\ell} \underline{b}_j$.
This recursion terminates as soon as $\ell > n$ because then $D^{\underline{b}}$ is an intersection of more than $n$ divisors, hence empty.
Regrouping the terms involving $\chi(D^{\underline{b}},\mathcal{F}|_{D^{\underline{b}}})$ we find that the coefficient of this term is precisely \[ (-1)^{\abs{\underline{b}}}{\lambda}_{\underline{b}} = (-1)^{\abs{\underline{b}}} \sum_{k\ge 0} \underset{\sum {\underline{b}_j} = \underline{b}}{\sum_{(\underline{b}_1,\ldots,\underline{b}_k)}} \left(\prod_{j=1}^k {\delta}_{\underline{b}_j}\right). \]
In the summation we consider only terms with all $\abs{\underline{b}_j} \ge 1$ and where in the tuple $(\underline{b}_1,\ldots,\underline{b}_k)$ all of $\underline{b}_j$ have disjoint support. For $k$ sufficiently large the inner sum is an empty sum.
\end{proof}
\begin{rmk}
The proofs of the above theorem and corollary work formally when we replace $Q(x_1,\ldots,x_n; y_1,\ldots,y_n)$ by any other polynomial which is a multiplicative sequence in the $y_i$ with respect to products of varieties and such that the $x_j$ are `functorial' with respect to restriction.
We should also note that in light of Propositions \ref{prop:constC} and \ref{prop:constD} the coefficient $(-1)^{\abs{\underline{b}}}{\lambda}_{\underline{b}}$ can be rewritten as ${\delta}_{(1)}^{\abs{\underline{b}}}$ whenever $\underline{b}$ is MF (as in the Corollary above or below). Also, by Proposition \ref{prop:constDNMF} the constants ${\lambda}_{\underline{b}}$ in the Corollary below are typically $0$ when $\underline{b}$ is NMF.
\end{rmk}
\begin{cor}\label{cor:leprim/imprim}
With the same notation as in the Theorem, we have:
\begin{align*}
\chi(X,\mathcal{F}) - \chi(X,\Delta,\mathcal{F}) = \underset{\abs{\underline{b}} \ge 1}{\sum_{\underline{b}\;{\rm MF}}} &(-1)^{\abs{\underline{b}}}{\lambda}_{\underline{b}} \chi(D^{\underline{b}},\mathcal{F}|_{D^{\underline{b}}})
\\&+ \sum_{\underline{b}\;{\rm NMF}} (-1)^{\abs{\underline{b}}}{{\lambda}}_{\underline{b}} \chi(D^{\underline{b}},\Delta',\mathcal{F}|_{D^{\underline{b}}}).
\end{align*}
The notation $D^{\underline{b}}$ is introduced in \ref{not:1}.(9), the terminology MF and NMF is from \ref{not:2}.(2), the constants ${\lambda}_{\underline{b}}$ are introduced in \ref{not:2}.(4).
\end{cor}
\begin{proof}
The argument is the same as above, except that rather than being able to completely ignore any NMF term which may appear, we include their contributions separately. The constant ${{\lambda}}_{\underline{b}}$ is defined precisely to give the appropriate weighted count of the possible factorizations of $D^{\underline{b}}$ in which the terms have disjoint support and only the final term is potentially NMF.
\end{proof}
\section{Riemann-Hurwitz}\label{sec:rh}
In this section we establish our main result.
\begin{thm}\label{thm:main}
Consider $\pi: X \rightarrow Y$ a potentially ramified finite covering of degree $\mu$ between smooth projective varieties of dimension $n$.
Let $\Delta(B)$ be the collection of irreducible components of the branch locus (on $Y$) and $\Delta(R) = \pi^{-1}(\Delta(B))$ be the collection of (reduced) irreducible components of the ramification locus (on $X$).
Let $\mathcal{F}$ be any coherent sheaf on $Y$.
Suppose that $\Delta(R)$ and $\Delta(B)$ consist of simple normal crossing divisors.
We have that:
\[ \chi(X, \pi^\ast( \mathcal{F})) - \mu\cdot \chi(Y,\mathcal{F}) = \sum_{\underline{a}} (-1)^{\abs{\underline{a}}}{\delta}_{\underline{a}}(E_{R^{\underline{a}}} - 1)\chi(R^{\underline{a}},\Delta(R)', \pi^\ast( \mathcal{F})).
\]
The notation $D^{\underline{b}}$ is introduced in \ref{not:1}.(9), the constants ${\delta}_{\underline{b}}$ are introduced in \ref{not:2}.(1), the notation $E_{R^{\underline{a}}}$ is from \ref{not:ERA}, and the notation $\chi(R^{\underline{a}},\Delta(R)', \pi^\ast( \mathcal{F}))$ is from \ref{not:logeuler}.
\end{thm}
\begin{proof}
Firstly, by Theorem \ref{thm:eulvslogeul} we have:
\begin{align*}
\chi(X, \pi^\ast( \mathcal{F})) - \mu\cdot \chi(Y,\mathcal{F}) = \sum_{\abs{\underline{a}}\ge 0} &(-1)^{\abs{\underline{a}}}{\delta}_{\underline{a}} \chi(R^{\underline{a}},\Delta(R)',\pi^\ast( \mathcal{F})|_{R^{\underline{a}}})\\ &- \mu\left( \sum_{\abs{\underline{b}}\ge 0} (-1)^{\abs{\underline{b}}}{\delta}_{\underline{b}} \chi(B^{\underline{b}},\Delta',\mathcal{F}|_{B^{\underline{b}}}) \right).
\end{align*}
Next, by Theorem \ref{thm:pullbacklogchern} we have that:
\[ \chi(X, \Delta(R), \pi^\ast (\mathcal{F})) = \mu\cdot \chi(Y, \Delta(B), \mathcal{F}) \]
so that these terms cancel out in the above expression.
With the remaining terms we can naturally group each term involving $\underline{b}$ together with the terms involving those $\underline{a}$ satisfying $\pi(\underline{a}) = \underline{b}$ in the summation above.
The error term arising from a fixed $\underline{b}$ in the expansion is:
\[ (-1)^{\abs{\underline{b}}}\left(\mu {\delta}_{\underline{b}} \chi(B^{\underline{b}}, \Delta(B)', \mathcal{F}) - \sum_{\pi(\underline{a}) = \underline{b}} {\delta}_{\underline{a}}\chi(R^{\underline{a}}, \Delta(R)', \pi^\ast (\mathcal{F}))\right). \]
(Here we use that $\abs{\underline{a}} = \abs{\underline{b}}$ whenever the term $\chi(R^{\underline{a}}, \Delta(R)', \pi^\ast (\mathcal{F}))$ is nonzero, since the monomial types then agree.)
Next, we observe that:
\begin{align*} \mu\cdot \chi(B^{\underline{b}}, \Delta(B)', \mathcal{F}) &= \mu\, B^{\underline{b}}\, Q_{n-\abs{\underline{b}}}({\mathrm{ch}}(\mathcal{F});{\mathrm{c}}_1(\Omega_Y^1(\log\Delta(B))),\ldots,{\mathrm{c}}_{n-\abs{\underline{b}}}(\Omega_Y^1(\log\Delta(B)))) \\
&= \pi^\ast\left( B^{\underline{b}}\, Q_{n-\abs{\underline{b}}}({\mathrm{ch}}(\mathcal{F});{\mathrm{c}}_1(\Omega_Y^1(\log\Delta(B))),\ldots,{\mathrm{c}}_{n-\abs{\underline{b}}}(\Omega_Y^1(\log\Delta(B))))\right) \\
&= \pi^\ast(B^{\underline{b}})\, Q_{n-\abs{\underline{b}}}({\mathrm{ch}}(\pi^\ast \mathcal{F});{\mathrm{c}}_1(\Omega_X^1(\log\Delta(R))),\ldots,{\mathrm{c}}_{n-\abs{\underline{b}}}(\Omega_X^1(\log\Delta(R)))).
\end{align*}
By Lemma \ref{lem:cute3} we have that
\[ \pi^\ast(B^{\underline{b}}) = \sum_{\pi(\underline{a}) = \underline{b}} E_{R^{\underline{a}}} R^{\underline{a}} \]
and so we obtain:
\[ \mu \cdot \chi(B^{\underline{b}}, \Delta(B)', \mathcal{F}) = \sum_{\pi(\underline{a}) = \underline{b}} E_{R^{\underline{a}}}\chi(R^{\underline{a}}, \Delta(R)', \pi^\ast(\mathcal{F})). \]
Grouping the terms on $\underline{a}$ we now immediately see that the contribution from the $\underline{a}$ terms is:
\[ (-1)^{\abs{\underline{a}}}{\delta}_{\underline{a}}(E_{R^{\underline{a}}} - 1)\chi(R^{\underline{a}},\Delta(R)', \pi^\ast (\mathcal{F})). \]
Collecting these over all $\underline{a}$ we obtain the theorem.
\end{proof}
The coefficients in the following corollary can be rewritten using Propositions \ref{prop:simpleMF} and \ref{prop:simpleNMF}.
\begin{cor}\label{cor:main}
Consider $\pi: X \rightarrow Y$ a potentially ramified finite covering of degree $\mu$ between smooth projective varieties of dimension $n$.
Let $\Delta(B)$ be the collection of irreducible components of the branch locus (on $Y$) and $\Delta(R) = \pi^{-1}(\Delta(B))$ be the collection of (reduced) irreducible components of the ramification locus (on $X$).
Let $\mathcal{F}$ be any coherent sheaf on $Y$.
Suppose that $\Delta(R)$ and $\Delta(B)$ consist of simple normal crossing divisors.
Then the difference $\chi(X, \pi^\ast (\mathcal{F})) - \mu \cdot \chi(Y,\mathcal{F})$ is equal to:
\begin{align*} \sum_{\underline{a}\; {\rm MF}} &(-1)^{\abs{\underline{a}}} \left( \underset{\abs{\underline{a}'} \ge 1}{\sum_{\underline{a}'\leq\underline{a}}}(- {\lambda}_{\underline{a}-\underline{a}'}{\delta}_{\underline{a}'})(E_{R^{\underline{a}'}} - 1)\right)\chi(R^{\underline{a}}, \pi^\ast (\mathcal{F})) \\
&+ \sum_{\underline{a}\; {\rm NMF}} (-1)^{\abs{\underline{a}}}\left( {\delta}_{\underline{a}}(E_{R^{\underline{a}}} - 1) + \underset{\abs{\underline{a}'} \ge 1}{\sum_{\underline{a}'\leq\hat{\underline{a}}}}(- {\lambda}_{\underline{a}-\underline{a}'}{\delta}_{\underline{a}'})(E_{R^{\underline{a}'}} - 1)\right)\chi(R^{\underline{a}},\Delta(R)', \pi^\ast (\mathcal{F})).
\end{align*}
The notation $D^{\underline{b}}$ is introduced in \ref{not:1}.(9), the constants ${\delta}_{\underline{b}}$ are introduced in \ref{not:2}.(1), the terminology MF and NMF is from \ref{not:2}.(2), the constants ${\lambda}_{\underline{b}}$ are introduced in \ref{not:2}.(4), the notation $E_{R^{\underline{a}}}$ is from \ref{not:ERA}, and the notation $\chi(R^{\underline{a}},\Delta(R)', \pi^\ast( \mathcal{F}))$ is from \ref{not:logeuler}.
\end{cor}
\begin{proof}
The proof is the same as that for Corollary \ref{cor:leprim/imprim}.
The terms $-{\lambda}_{\underline{a}-\underline{a}'}{\delta}_{\underline{a}'}(E_{R^{\underline{a}'}} - 1)$ account for the contribution to the coefficient of $\chi(R^{\underline{a}}, \pi^\ast (\mathcal{F}))$ from the expansion of the terms $\chi(R^{\underline{a}'},\Delta' ,\pi^\ast (\mathcal{F}))$ where $\underline{a}'$ is MF.
The term ${\delta}_{\underline{a}}(E_{R^{\underline{a}}} - 1)$ in the NMF case accounts for the contribution of the term which already appears in Theorem \ref{thm:main}.
\end{proof}
\begin{prop}\label{prop:simpleMF}
If $\underline{a}$ is MF and $R^{\underline{a}} = R_1\cdots R_k$ then:
\[ (-1)^{\abs{\underline{a}}} \underset{\abs{\underline{a}'} \ge 1}{\sum_{\underline{a}'\leq\underline{a}}}(- {\lambda}_{\underline{a}-\underline{a}'}{\delta}_{\underline{a}'})(E_{R^{\underline{a}'}} - 1) = {\delta}_{\underline{a}}\prod_{i=1}^k (1-e_{R_i}). \]
\end{prop}
\begin{proof}
When $\underline{a}$ is MF we have by Proposition \ref{prop:constD} that:
\[ (-1)^{\abs{\underline{a}}}{\lambda}_{\underline{a}-\underline{a}'}{\delta}_{\underline{a}'}= (-1)^{\abs{\underline{a}'}}{\delta}_{\underline{a}}. \]
It follows that:
\[ (-1)^{\abs{\underline{a}}} \underset{\abs{\underline{a}'} \ge 1}{\sum_{\underline{a}'\leq\underline{a}}}(- {\lambda}_{\underline{a}-\underline{a}'}{\delta}_{\underline{a}'})(E_{R^{\underline{a}'}} - 1) = {\delta}_{\underline{a}}\underset{\abs{\underline{a}'} \ge 1}{\sum_{\underline{a}'\leq\underline{a}}} (-1)^{\abs{\underline{a}'}}(1-E_{R^{\underline{a}'}}). \]
A direct computation yields that:
\[ \underset{\abs{\underline{a}'} \ge 1}{\sum_{\underline{a}'\leq\underline{a}}}(-1)^{\abs{\underline{a}'}}(1-E_{R^{\underline{a}'}}) = \prod_{i=1}^k (1-e_{R_i}) \]
from which the result follows.
\end{proof}
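The direct computation invoked above rests on the standard inclusion-exclusion identity $\sum_{S\subseteq\{1,\ldots,k\}}(-1)^{\abs{S}}\prod_{i\in S}e_{R_i} = \prod_{i=1}^k(1-e_{R_i})$, taken over all subsets including the empty one. This can be checked numerically (a toy verification with arbitrarily chosen degrees):

```python
from itertools import combinations

e = (2, 3, 5)   # arbitrary sample ramification degrees e_{R_i}
k = len(e)

# sum over all subsets S of {0, ..., k-1} of (-1)^|S| * prod_{i in S} e_i
subset_sum = 0
for r in range(k + 1):
    for S in combinations(range(k), r):
        term = (-1) ** r
        for i in S:
            term *= e[i]
        subset_sum += term

product = 1
for ei in e:
    product *= 1 - ei

assert subset_sum == product  # both equal (1-2)*(1-3)*(1-5) = -8
```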
\begin{prop}\label{prop:simpleNMF}
If $\underline{a}$ is NMF and $\abs{\hat{\underline{a}}} \ge 1$ then:
\[ {\delta}_{\underline{a}}(E_{R^{\underline{a}}} - 1) + \underset{\abs{\underline{a}'} \ge 1}{\sum_{\underline{a}'\leq\hat{\underline{a}}}}(- {\lambda}_{\underline{a}-\underline{a}'}{\delta}_{\underline{a}'})(E_{R^{\underline{a}'}} - 1) =
{\delta}_{\underline{a}}(E_{R^{\underline{a}}} - E_{R^{\hat{\underline{a}}}}) .\]
\end{prop}
\begin{proof}
When $\underline{a}$ is NMF and $\abs{\hat{\underline{a}}} \ge 1$, the same is true of $\underline{a}-\underline{a}'$ for every choice of $\underline{a}'$ except $\underline{a}'=\hat{\underline{a}}$.
We thus have by Proposition \ref{prop:constDNMF} that:
\[ {\delta}_{\underline{a}}(E_{R^{\underline{a}}} - 1) + \underset{\abs{\underline{a}'} \ge 1}{\sum_{\underline{a}'\leq\hat{\underline{a}}}}(- {\lambda}_{\underline{a}-\underline{a}'}{\delta}_{\underline{a}'})(E_{R^{\underline{a}'}} - 1) =
{\delta}_{\underline{a}}(E_{R^{\underline{a}}} - 1) - {\lambda}_{\underline{a}-\hat{\underline{a}}}{\delta}_{\hat{\underline{a}}}(E_{R^{\hat{\underline{a}}}} - 1). \]
By noting that $ {\lambda}_{\underline{a}-\hat{\underline{a}}}{\delta}_{\hat{\underline{a}}} = {\delta}_{\underline{a}}$ the result now follows immediately.
\end{proof}
\section{Handling Self Intersections}\label{sec:logeulerself}
The purpose of this section is to describe a method for interpreting the logarithmic Euler characteristic when there are self intersection terms. In particular we will show that these can be viewed as a weighted sum of the Euler characteristics of the components of the self intersection.
The expressions one obtains are `non-canonical' but may be amenable to computation depending on the context.
In order to carry out the procedure outlined here, one needs to have a good understanding of the Chow ring of the variety $X$. In particular the process may require a large number of relations consisting entirely of elements with simple normal crossings.
The reason we need an alternate approach is that though ideally we would be able to write:
\[ D^\ell {\mathrm{c}}_i(\Omega_X(\log D)) = {\mathrm{c}}_i(\Omega_{D^\ell}), \]
this is simply not true if $\ell > 1$.
In order to handle this, we must have at least enough information to compute $D^\ell$.
In particular we will need to make use of relations:
\[ D\sim \sum_i u_{i}E_{i} \]
with the $E_i$ not being equal to any other divisor already in use, and with the total collection $E_{i}$, $D$, and every other divisor in use, having simple normal crossings.
\begin{lemma}
Let $X$ be a smooth projective variety and let $\mathcal{F}$ be any coherent sheaf on $X$.
Suppose $\Delta$ is a collection of smooth divisors with simple normal crossings on $X$.
Fix $D \in \Delta$ and a relation
\[ D\sim \sum_{i\in I} u_{i}E_{i} \]
with simple normal crossings as above.
We may rewrite:
\[ D^{\underline{a}} D^\ell Q_m({\mathrm{ch}}(\mathcal{F}); {\mathrm{c}}_1(\Omega_X(\log\Delta)),\ldots,{\mathrm{c}}_m(\Omega_X(\log\Delta))) \]
as:
\[ D^{\underline{a}} D^{\ell-1} \sum_{k=1}^m (-1)^{k-1}{\delta}_{(k-1)} \sum_i u_{i} E_i^k Q_{m-k+1}({\mathrm{ch}}(\mathcal{F}); {\mathrm{c}}_1(\Omega_X(\log \Delta\cdot E_i)),\ldots,{\mathrm{c}}_{m-k+1}(\Omega_X(\log \Delta\cdot E_i))).
\]
The constant ${\delta}_{(k-1)}$ is defined in \ref{not:2}.(1).
\end{lemma}
\begin{proof}
This follows immediately by a comparison between
\[ Q_m({\mathrm{ch}}(\mathcal{F}); {\mathrm{c}}_1(\Omega_X(\log \Delta)),\ldots,{\mathrm{c}}_m(\Omega_X(\log \Delta)) ) \]
and
\[ \qquad Q_m({\mathrm{ch}}(\mathcal{F}); {\mathrm{c}}_1(\Omega_X(\log \Delta\cdot E_i)),\ldots,{\mathrm{c}}_m(\Omega_X(\log \Delta\cdot E_i))) \]
as in Theorem \ref{thm:eulvslogeul}.
\end{proof}
\begin{lemma}
Let $X$ be a smooth projective variety and let $\mathcal{F}$ be any coherent sheaf on $X$.
Let $\Delta$ be a collection of smooth divisors with simple normal crossings on $X$.
Suppose we are given sufficiently many rules in the Chow ring of $X$ of the form:
\begin{align*}
(a)\quad D_j \sim \sum_{i\in I_{ja}} u_iE_i \qquad \qquad\text{and} \qquad \qquad
(b)\quad E_j \sim \sum_{i\in I_{jb}} u_iE_i
\end{align*}
expressed with respect to a collection of divisors $E_i$ indexed by the union $I$ of the index sets $I_{ja}$ and $I_{jb}$, and such that the total collection of divisors $D_i, E_j$ has simple normal crossings; then
we may rewrite:
\[ D^{\underline{a}} D^\ell Q_m({\mathrm{ch}}(\mathcal{F}); {\mathrm{c}}_1(\Omega_X(\log \Delta)),\ldots,{\mathrm{c}}_m(\Omega_X(\log \Delta))) \]
as a weighted sum of terms:
\[ D^{\underline{\tilde{a}}}E^{\underline{b}}Q_{n-\abs{\underline{\tilde{a}}}-\abs{\underline{b}}}({\mathrm{ch}}(\mathcal{F}); {\mathrm{c}}_1(\Omega_X(\log \Delta\cdot E^{\underline{b}})),\ldots,{\mathrm{c}}_{n-\abs{\underline{\tilde{a}}}-\abs{\underline{b}}}(\Omega_X(\log \Delta\cdot E^{\underline{b}}))) \]
with $b_i \leq 1$.
\end{lemma}
\begin{proof}
The key is to inductively apply the previous lemma.
We observe that at each application of the lemma we produce new terms of the form:
\[ D^{\underline{a}'}E^{\underline{b}'}Q_{n-\abs{\underline{a}'}-\abs{\underline{b}'}}.
\]
However, each new term introduced either satisfies:
\begin{enumerate}
\item The number of self intersections has been decreased, or
\item The subscript on $Q_m$ has decreased.
\end{enumerate}
It follows that the inductive process terminates provided we have enough rules to carry it out.
\end{proof}
\begin{prop}
In the setting of the Lemma,
the coefficient of
\[ D^{\underline{\tilde{a}}}E^{\underline{b}}Q_{m}({\mathrm{ch}}(\mathcal{F}); {\mathrm{c}}_1(\Omega_X(\log \Delta\cdot E^{\underline{b}})),\ldots, {\mathrm{c}}_m(\Omega_X(\log \Delta\cdot E^{\underline{b}}))) \]
in the formal expansion of
\[ D^{\underline{a}} Q_m({\mathrm{ch}}(\mathcal{F}); {\mathrm{c}}_1(\Omega_X(\log \Delta)),\ldots,{\mathrm{c}}_m(\Omega_X(\log \Delta))) \]
is:
\[ \prod_i (u_i^{b_i} {\delta}_{(y_i)}) \]
where
$ y_j = \abs{\bigl(\bigcup_{z} I_{jz}\bigr) \cap \underline{b}},$
that is, $y_j$ is the number of rules that must be used in the expansion of $E_j$.
\end{prop}
\begin{proof}
The appearance of the $\prod_i u_i^{b_i} {\delta}_{(y_i)}$ is apparent from the lemma, as these are precisely the terms that appear when we apply it. The only remaining question is the computation of $y_i$ based on the shape of $\underline{b}$. One readily checks the given formula.
\end{proof}
The only information we still lack about our expansion is which $E^{\underline{b}}$ actually appear. This depends on choices made during the inductive process; however, if one orders the rules, one obtains a systematic result. The following proposition is an immediate consequence of the inductive process.
\begin{prop}
Carrying out the inductive procedure as above, if the rules:
\begin{align*}
(a)\quad D_j \sim \sum_{i\in I_{ja}} u_iE_i \qquad \qquad\text{and} \qquad \qquad
(b)\quad E_j \sim \sum_{i\in I_{jb}} u_iE_i
\end{align*}
are ordered as in $(a)$ and $(b)$, and we always select the first rule that does not conflict with choices already made, then
the collection $E^{\underline{b}}$ which appear in the expansion are precisely those which satisfy:
\begin{enumerate}
\item $\abs{\underline{b}\cap I_{jc}} \in \{0,1\}$.
\item For each $D_j$, the number of $a$ for which $\abs{\underline{b}\cap I_{ja}} = 1$ is $a_j-1$.
\item $\abs{\underline{b}\cap I_{jc}} = 1$ and $c>0$ implies $\abs{\underline{b}\cap I_{j(c-1)}} = 1$.
\item $\abs{\underline{b}} \leq n-\abs{\tilde{\underline{a}}}$.
\end{enumerate}
\end{prop}
\begin{rmk}
Because $ D^{\underline{\tilde{a}}}E^{\underline{b}}Q_{m}({\mathrm{ch}}(\mathcal{F}); {\mathrm{c}}_1(\Omega_X(\log \Delta\cdot E^{\underline{b}})),\ldots,{\mathrm{c}}_m(\Omega_X(\log \Delta\cdot E^{\underline{b}}))) $
is computing a logarithmic Euler characteristic on $D^{\underline{a}}E^{\underline{b}}$ the above expansion gives a weighted sum of the logarithmic Euler characteristics for some representative cycles for various $D^{\underline{x}}$.
We note that $\prod_i u_i^{b_i}$ is related to the coefficient that would have appeared had we been computing the self intersection, whereas the coefficient $\prod_i {\delta}_{(y_i)}$ is universal. Nonetheless, we note that this process involves a number of non-canonical choices.
It is worth noting that by performing a further induction, as in Corollary \ref{cor:secondaryinduction}, we could replace the logarithmic Euler characteristics with the actual Euler characteristics of the same components of the self intersections, simply with different weights.
\end{rmk}
\begin{ex}
Suppose we have relations:
\[ D \sim E_1+E_2 \qquad E_1 \sim E_3 \sim E_4 \qquad E_2 \sim E_5 \sim E_6 \qquad E_3 \sim E_7\qquad E_5\sim E_8. \]
(Note that the implied relation $E_3\sim E_4$ (respectively $E_5\sim E_6$) is not being viewed as a rule for $E_3$ (respectively $E_5$).)
Then we may carry out the procedure above as follows:
\begin{align*}
D^2Q_2(&{\mathrm{ch}}(\mathcal{F}); {\mathrm{c}}_1(\Omega_X(\log \Delta)),{\mathrm{c}}_2(\Omega_X(\log \Delta))) \\
&= DE_1Q_2({\mathrm{ch}}(\mathcal{F}); {\mathrm{c}}_1(\Omega_X(\log \Delta\cdot E_1)),{\mathrm{c}}_2(\Omega_X(\log \Delta\cdot E_1)))\\&\qquad
+DE_1^2{\delta}_{(1)}Q_1({\mathrm{ch}}(\mathcal{F});{\mathrm{c}}_1(\Omega_X(\log \Delta\cdot E_1)) )
+DE_1^3{\delta}_{(2)}Q_0({\mathrm{ch}}(\mathcal{F}); )\\&\qquad
+DE_2Q_2({\mathrm{ch}}(\mathcal{F}); {\mathrm{c}}_1(\Omega_X(\log \Delta\cdot E_2)),{\mathrm{c}}_2(\Omega_X(\log \Delta\cdot E_2)))\\&\qquad
+DE_2^2{\delta}_{(1)}Q_1({\mathrm{ch}}(\mathcal{F});{\mathrm{c}}_1(\Omega_X(\log \Delta\cdot E_2)) )
+DE_2^3{\delta}_{(2)}Q_0({\mathrm{ch}}(\mathcal{F}); )\\
&= DE_1Q_2({\mathrm{ch}}(\mathcal{F}); {\mathrm{c}}_1(\Omega_X(\log \Delta\cdot E_1)),{\mathrm{c}}_2(\Omega_X(\log \Delta\cdot E_1)))\\&\qquad
+DE_1E_3{\delta}_{(1)}Q_1({\mathrm{ch}}(\mathcal{F});{\mathrm{c}}_1(\Omega_X(\log \Delta\cdot E_1E_3)) )
+DE_1E_3^2{\delta}_{(1)}{\delta}_{(1)}Q_0({\mathrm{ch}}(\mathcal{F}); ) \\&\qquad
+DE_1E_3E_4{\delta}_{(2)}Q_0({\mathrm{ch}}(\mathcal{F}); ) \\&\qquad
+DE_2Q_2({\mathrm{ch}}(\mathcal{F}); {\mathrm{c}}_1(\Omega_X(\log \Delta\cdot E_2)),{\mathrm{c}}_2(\Omega_X(\log \Delta\cdot E_2)))\\&\qquad
+DE_2E_5{\delta}_{(1)}Q_1({\mathrm{ch}}(\mathcal{F});{\mathrm{c}}_1(\Omega_X(\log \Delta\cdot E_2E_5)) )
+DE_2E_5^2{\delta}_{(1)}{\delta}_{(1)}Q_0({\mathrm{ch}}(\mathcal{F}); ) \\&\qquad
+DE_2E_5E_6{\delta}_{(2)}Q_0({\mathrm{ch}}(\mathcal{F}); ).
\end{align*}
We can ultimately express this as:
\begin{align*}
&= \chi(DE_1, \Delta', \mathcal{F}|_{DE_1})
+{\delta}_{(1)}\chi(DE_1E_3, \Delta', \mathcal{F}|_{DE_1E_3}) \\&\qquad
+{\delta}_{(1)}{\delta}_{(1)}\chi(DE_1E_3E_7, \Delta', \mathcal{F}|_{DE_1E_3E_7})
+{\delta}_{(2)}\chi(DE_1E_3E_4, \Delta', \mathcal{F}|_{DE_1E_3E_4}) \\&\qquad
+ \chi(DE_2, \Delta', \mathcal{F}|_{DE_2})
+{\delta}_{(1)}\chi(DE_2E_5, \Delta', \mathcal{F}|_{DE_2E_5}) \\&\qquad
+{\delta}_{(1)}{\delta}_{(1)}\chi(DE_2E_5E_8, \Delta', \mathcal{F}|_{DE_2E_5E_8})
+ {\delta}_{(2)}\chi(DE_2E_5E_6, \Delta', \mathcal{F}|_{DE_2E_5E_6}).
\end{align*}
In particular we can express the result purely as a sum of logarithmic Euler characteristics.
\end{ex}
\section{Conclusions and Further Questions}
We have obtained a natural generalization of the Riemann-Hurwitz results to the algebraic Euler characteristic.
The formulas given are certainly more complicated than for the standard Euler characteristic.
It is natural to ask to what extent any of the results here can be generalized outside the context in which we are able to prove them.
\section*{Acknowledgments}
This work was primarily conducted while I was a Fields Postdoctoral researcher at Queen's University.
I would like to thank Prof. Mike Roth at Queen's University for teaching me many of the tools needed in this work as well as for his help improving this manuscript.
I would also like to thank the referee for many very useful comments.
\providecommand{\MR}[1]{}
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{ \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2}
}
\providecommand{\href}[2]{#2}
|
1910.07134
|
\section{Introduction}
The Transformer network \citep{vaswani2017attention} is a neural sequence-to-sequence model that has achieved state-of-the-art results in machine translation. However, Transformer models tend to be very large, typically consisting of hundreds of millions of parameters. As the number of parameters directly corresponds to secondary storage requirements and memory consumption during inference, using Transformer networks may be prohibitively expensive in scenarios with constrained resources. For the 2019 Workshop on Neural Generation of Text (WNGT) Efficiency shared task \citep{wngt2019}, the Notre Dame Natural Language Processing (NDNLP) group looked at a method of inducing sparsity in parameters called auto-sizing in order to reduce the number of parameters in the Transformer at the cost of a relatively minimal drop in performance.
Auto-sizing, first introduced by \citet{murrayauto}, uses group regularizers to encourage parameter sparsity. When applied over neurons, it can delete neurons in a network and shrink the total number of parameters. A nice advantage of auto-sizing is that it is independent of model architecture; although we apply it to the Transformer network in this task, it can easily be applied to any other neural architecture.
NDNLP's submission to the 2019 WNGT Efficiency shared task uses a standard, recommended baseline Transformer network. Following \citet{murray19autosizing}, we investigate the application of auto-sizing to various portions of the network. Differing from their work, the shared task used a significantly larger training dataset from WMT 2014 \citep{bojar2014findings}, as well as the goal of reducing model size even if it impacted translation performance. Our best system was able to prune over 25\% of the parameters, yet had a BLEU drop of only 1.1 points. This translates to over 25 million parameters pruned and saves almost 100 megabytes of disk space to store the model.
\section{Auto-sizing}
Auto-sizing is a method that encourages sparsity through use of a group regularizer. Whereas the most common applications of regularization will act over parameters individually, a group regularizer works over groupings of parameters. For instance, applying a sparsity inducing regularizer to a two-dimensional parameter tensor will encourage individual values to be driven to 0.0. A sparsity-inducing group regularizer will act over defined sub-structures, such as entire rows or columns, driving the entire groups to zero. Depending on model specifications, one row or column of a tensor in a neural network can correspond to one neuron in the model.
Following the discussion of \citet{murrayauto} and \citet{murray19autosizing}, auto-sizing works by training a neural network while using a regularizer to prune units from the network, minimizing:
\begin{equation*}
\mathcal{L} = -\sum_{\text{$f, e$ in data}} \log P(e \mid f; W) + \lambda R(\|W\|).
\end{equation*}
Here $W$ denotes the parameters of the model and $R$ is a regularizer.
Here, as with the previous work, we experiment with two regularizers:
\begin{align*}
R(W) &= \sum_i \left(\sum_j W_{ij}^2\right)^{\frac12} && (\ell_{2,1}) \\
R(W) &= \sum_i \max_j |W_{ij}| && (\ell_{\infty,1})
\end{align*}
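To make the group structure concrete, the two regularizers can be sketched as follows (our illustration, not the shared-task code), treating each row of a weight matrix as one group:

```python
import numpy as np

def l21_norm(W):
    # l_{2,1}: sum over rows of the l2 norm of each row
    return np.sqrt((W ** 2).sum(axis=1)).sum()

def linf1_norm(W):
    # l_{inf,1}: sum over rows of the largest absolute entry of each row
    return np.abs(W).max(axis=1).sum()

W = np.array([[3.0, 4.0],
              [0.0, 0.0]])   # second row (neuron) already pruned
print(l21_norm(W))    # -> 5.0
print(linf1_norm(W))  # -> 4.0
```

Because each row enters the penalty through a single norm, the regularization pressure acts on the row jointly, which is what drives entire neurons, rather than scattered individual weights, to zero.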
The optimization is done using proximal gradient descent \citep{parikh+boyd:2014}, which alternates between stochastic gradient descent steps and proximal steps:
\begin{align*}
W &\leftarrow W - \eta \nabla \log P(e \mid f; W) \\
W &\leftarrow \argmin_{W'} \left(\frac1{2\eta} \|W-W'\|^2 + R(W') \right)
\end{align*}
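For the $\ell_{2,1}$ regularizer the proximal step has a closed form, group soft-thresholding: any row whose $\ell_2$ norm is at most the threshold is set exactly to zero, and the remaining rows are shrunk radially. A minimal sketch (ours, not the toolkit's implementation; \texttt{step} stands for $\eta\lambda$, and the $\ell_{\infty,1}$ case, which requires a projection onto an $\ell_1$ ball, is omitted):

```python
import numpy as np

def prox_l21(W, step):
    """Proximal operator of step * ||W||_{2,1} over rows: rows with
    l2 norm <= step are zeroed, the rest shrunk by (1 - step/||row||)."""
    norms = np.sqrt((W ** 2).sum(axis=1, keepdims=True))
    scale = np.maximum(1.0 - step / np.maximum(norms, 1e-12), 0.0)
    return W * scale

W = np.array([[3.0, 4.0],    # norm 5: survives, shrunk by factor 0.8
              [0.1, 0.1]])   # norm ~0.14: driven exactly to zero
print(prox_l21(W, step=1.0))  # first row becomes [2.4, 3.2], second [0, 0]
```

It is this hard zeroing, rather than gradual decay toward small values, that makes proximal methods suitable for pruning.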
\section{Auto-sizing the Transformer}
\begin{figure}
\centering
\includegraphics[width=7cm]{transformer.png}
\caption{Architecture of the Transformer \citep{vaswani2017attention}. We apply the auto-sizing method to the feed-forward (blue rectangles) and multi-head attention (orange rectangles) in all $N$ layers of the encoder and decoder. Note that there are residual connections that can allow information and gradients to bypass any layer we are auto-sizing. Following the robustness recommendations, we instead apply layer normalization before each sub-component.}
\label{fig:transformer}
\end{figure}
The Transformer network \citep{vaswani2017attention} is a sequence-to-sequence model in which both the encoder and the decoder consist of stacked self-attention layers. The multi-head attention uses two affine transformations, followed by a softmax layer.
Each layer has a position-wise feed-forward neural network (FFN) with a hidden layer of rectified linear units.
Both the multi-head attention and the feed-forward neural network have residual connections that allow information to bypass those layers. In addition, there are also word and position embeddings. Figure \ref{fig:transformer}, taken from the original paper, shows the architecture. NDNLP's submission focuses on the $N$ stacked encoder and decoder layers.
The Transformer has demonstrated remarkable success on a variety of datasets, but it is highly over-parameterized. For example, the baseline Transformer model has more than 98 million parameters, but the English portion of the training data in this shared task has only 116 million tokens and 816 thousand types. Early NMT models such as \citet{sutskever2014sequence} have most of their parameters in the embedding layers, but the transformer has a larger percentage of the model in the actual encoder and decoder layers. Though the group regularizers of auto-sizing can be applied to any parameter matrix, here we focus on the parameter matrices within the encoder and decoder layers.
We note that there has been some recent work on shrinking networks through pruning. However, these approaches differ from auto-sizing in that they frequently require an arbitrary threshold and are not part of the training process. For instance, \citet{see2016compression} prunes networks based on a variety of thresholds and then retrains the model.
\citet{voita-etal-2019-analyzing} also look at pruning, but of attention heads specifically. They do this through a relaxation of an $\ell_0$ regularizer in order to make it differentiable, which allows them to avoid a proximal step. This method, too, starts with a pre-trained model and then continues training.
\citet{michel2019sixteen} also look at pruning attention heads in the transformer. However, they likewise use thresholding, applied only at test time. Auto-sizing requires neither a threshold value nor a pre-trained model.
Of particular interest are the large, position-wise feed-forward networks in each encoder and decoder layer:
\vspace{1mm}
\begin{equation*}
\text{FFN}(x) = W_2(\max(0,W_1x + b_1)) + b_2.
\label{eq:ffn}
\end{equation*}
$W_1$ and $W_2$ are two large affine transformations that take inputs from $D$ dimensions to $4D$, then project them back to $D$ again. These layers make use of rectified linear unit activations, which were the focus of auto-sizing in the work of \citet{murrayauto}. No theory or intuition is given as to why this value of $4D$ should be used.
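A minimal NumPy sketch of this position-wise FFN (our illustration; the random initialization is arbitrary and only the shapes, $D = 512$ and hidden size $4D = 2048$, follow the paper):

```python
import numpy as np

D = 512
rng = np.random.default_rng(0)
W1, b1 = 0.01 * rng.standard_normal((4 * D, D)), np.zeros(4 * D)
W2, b2 = 0.01 * rng.standard_normal((D, 4 * D)), np.zeros(D)

def ffn(x):
    # expand D -> 4D, apply ReLU, project back 4D -> D
    return W2 @ np.maximum(0.0, W1 @ x + b1) + b2

x = rng.standard_normal(D)
print(ffn(x).shape)  # -> (512,)
```

Each hidden unit corresponds to one row of $W_1$ and one column of $W_2$, which is exactly the pairing that auto-sizing exploits when deleting neurons.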
Following \cite{murray19autosizing}, we apply the auto-sizing method to the Transformer network, focusing on the two largest components, the feed-forward layers and the multi-head attentions (blue and orange rectangles in Figure \ref{fig:transformer}). Remember that since there are residual connections allowing information to bypass the layers we are auto-sizing, information can still flow through the network even if the regularizer drives all the neurons in a layer to zero -- effectively pruning out an entire layer.
\begin{figure}
\tikzset{inner sep=1pt}
\begin{center}
\begin{tikzpicture}[scale=0.75]
\draw[fill=none] (-.5,-8.5) rectangle (7.5,4.5);
\draw[fill=gray!30,draw=none] (0,0) rectangle (7, 4);
\draw[fill=gray!30,draw=none] (1.5,-1) rectangle (5.5, -8);
\node[draw,draw=none] at (3.5,-4.5) {$W_1$};
\node[draw,draw=none] at (3.5,2) {$W_2$};
\draw[->,thick] (3.5,-1) to node[fill=white] {ReLU} (3.5,0);
\draw[->,thick] (3.5,-9) -- (3.5,-8);
\draw[->,thick] (3.5,4) -- (3.5,5);
\draw[fill=white!30,draw=none] (1.55,-2) rectangle (5.45, -2.2);
\draw[fill=white!30,draw=none] (1.55,-3.5) rectangle (5.45, -3.7);
\draw[fill=white!30,draw=none] (1.55,-5.5) rectangle (5.45, -5.7);
\draw[fill=white!30,draw=none] (1.55,-7.4) rectangle (5.45, -7.6);
\draw[fill=white!30,draw=none] (1.55,-7.1) rectangle (5.45, -7.3);
\draw[fill=blue!30,draw=none] (1,0.05) rectangle (1.2, 3.95);
\draw[fill=blue!30,draw=none] (2.5,0.05) rectangle (2.7, 3.95);
\draw[fill=blue!30,draw=none] (4.5,0.05) rectangle (4.7, 3.95);
\draw[fill=blue!30,draw=none] (6.4,0.05) rectangle (6.6, 3.95);
\draw[fill=blue!30,draw=none] (6.1,0.05) rectangle (6.3, 3.95);
\end{tikzpicture}
\end{center}
\caption{Auto-sizing FFN network. For a row in the parameter matrix $W_1$ that has been driven completely to 0.0 (shown in white), the corresponding column in $W_2$ (shown in blue) no longer has any impact on the model. Both the column and the row can be deleted, thereby shrinking the model.}
\label{fig:ffn}
\end{figure}
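Once training has driven some rows of $W_1$ to zero, Figure \ref{fig:ffn} shows that the matching columns of $W_2$ are dead as well, so both can be deleted. A hypothetical sketch of that post-hoc shrinking step:

```python
import numpy as np

def prune_ffn(W1, W2, tol=0.0):
    # keep only hidden units whose W1 row is not identically zero;
    # drop the corresponding (now unused) columns of W2
    keep = np.abs(W1).sum(axis=1) > tol
    return W1[keep], W2[:, keep]

W1 = np.array([[1.0, 2.0],
               [0.0, 0.0],   # neuron driven to zero by the regularizer
               [3.0, 4.0]])
W2 = np.ones((2, 3))
W1p, W2p = prune_ffn(W1, W2)
print(W1p.shape, W2p.shape)  # -> (2, 2) (2, 2)
```

The pruned model computes exactly the same function as the regularized one, just with smaller matrices, which is where the disk and memory savings come from.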
\section{Experiments}
All of our models are trained using the fairseq implementation of the Transformer \cite{gehring2017convs2s}.\footnote{https://github.com/pytorch/fairseq}
For the regularizers used in auto-sizing, we make use of an open-source, proximal gradient toolkit implemented in PyTorch\footnote{https://github.com/KentonMurray/ProxGradPytorch} \citep{murray19autosizing}. For each mini-batch update, the stochastic gradient descent step is handled with a standard PyTorch forward-backward call. Then the proximal step is applied to parameter matrices.
\begin{table*}
\centering
\begin{tabular}{r|c|c|c|c}
\toprule
System & Disk Size & Number of Parameters & newstest2014 & newstest2015 \\
\hline
Baseline & 375M & 98.2M & 25.3 & 27.9 \\
\hline
All $\ell_{2,1}=0.1$ & 345M & 90.2M & 21.6 & 24.1 \\
\hline
Encoder $\ell_{2,1}=0.1$ & 341M & 89.4M & 23.2 & 25.5 \\
Encoder $\ell_{2,1}=1.0$ & 327M & 85.7M & 22.1 & 24.5 \\
\hline
FFN $\ell_{2,1}=0.1$ & 326M & 85.2M & 24.1 & 26.4 \\
FFN $\ell_{2,1}=1.0$ & 279M & 73.1M & 24.0 & 26.8 \\
FFN $\ell_{2,1}=10.0$ & 279M & 73.1M & 23.9 & 26.5 \\
FFN $\ell_{\infty,1}=100.0$ & 327M & 73.1M & 23.8 & 26.0 \\
\bottomrule
\end{tabular}
\caption{Comparison of BLEU scores and model sizes on newstest2014 and newstest2015. Applying auto-sizing to the feed-forward neural network sub-components of the transformer resulted in the most amount of pruning while still maintaining good BLEU scores.}
\label{tab:scores}
\end{table*}
\subsection{Settings}
We used the originally proposed transformer architecture -- with six encoder and six decoder layers. Our model dimension was 512 and we used 8 attention heads. The feed-forward network sub-components were of size 2048. All of our systems were run using subword units (BPE) with 32,000 merge operations on concatenated source and target training data \cite{sennrich2016linguistic}. We clip norms at 0.1, use label-smoothed cross-entropy with value 0.1, and apply an early stopping criterion when the learning rate drops below $10^{-5}$. We used the Adam optimizer \cite{kingma2014adam}, a learning rate of $10^{-4}$, and dropout of 0.1. Following recommendations in the fairseq and tensor2tensor \cite{tensor2tensor} code bases, we apply layer normalization before a sub-component as opposed to after. At test time, we decoded using a beam of 5 with length normalization \cite{boulanger2013audio} and evaluate using case-sensitive, tokenized BLEU \cite{papineni2002bleu}.
For the auto-sizing experiments, we looked at both $\ell_{2,1}$ and $\ell_{\infty,1}$ regularizers. We experimented over a range of regularizer coefficient strengths, $\lambda$, that control how large the proximal gradient step will be. Similar to \citet{murrayauto}, but differing from \citet{alvarez2016learning}, we use one value of $\lambda$ for all parameter matrices in the network. We note that different regularization coefficient values are suited to different types of regularizers. Additionally, all of our experiments use the same batch size, which is also related to $\lambda$.
\subsection{Auto-sizing sub-components}
We applied auto-sizing to the sub-components of the encoder and decoder layers, without touching the word or positional embeddings. Recall from Figure \ref{fig:transformer}, that each layer has multi-head attention and feed-forward network sub-components. In turn, each multi-head attention sub-component is comprised of two parameter matrices. Similarly, each feed-forward network has two parameter matrices, $W_1$ and $W_2$. We looked at three main experimental configurations:
\begin{itemize}
\item All: Auto-sizing is applied to every multi-head attention and feed-forward network sub-component in every layer of the encoder and decoder.
\item Encoder: As with All, auto-sizing is applied to both multi-head attention and feed-forward network sub-components, but only in the encoder layers. The decoder remains the same.
\item FFN: Auto-sizing applied only to the feed-forward network sub-components $W_1$ and $W_2$, but not to the multi-head portions. This too is applied to both the encoder and decoder.
\end{itemize}
\subsection{Results}
Our results are presented in Table \ref{tab:scores}. The baseline system has 98.2 million parameters and a BLEU score of 27.9 on newstest2015. It takes up 375 megabytes on disk. Our systems that applied auto-sizing only to the feed-forward network sub-components of the transformer network maintained the best BLEU scores while also pruning out the most parameters of the model. Overall, our best system used $\ell_{2,1}=1.0$ regularization for auto-sizing and left 73.1 million parameters remaining. On disk, the model takes 279 megabytes to store -- roughly 100 megabytes less than the baseline. The performance drop compared to the baseline is 1.1 BLEU points, but the model is over 25\% smaller.
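As a quick sanity check of the headline numbers (our arithmetic, using the parameter counts from Table \ref{tab:scores}):

```python
baseline_params = 98.2e6   # baseline Transformer
pruned_params = 73.1e6     # best system: FFN with l_{2,1} = 1.0
reduction = 1.0 - pruned_params / baseline_params
print(f"{reduction:.1%} of parameters pruned")  # -> 25.6% of parameters pruned
```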
Applying auto-sizing to the multi-head attention and feed-forward network sub-components of \emph{only} the encoder also pruned a substantial amount of parameters. Though this too resulted in a smaller model on disk, the BLEU scores were worse than auto-sizing just the feed-forward sub-components. Auto-sizing the multi-head attention and feed-forward network sub-components of both the encoder \emph{and} decoder actually resulted in a larger model than the encoder only, but with a lower BLEU score. Overall, our results suggest that the attention portion of the transformer network is more important for model performance than the feed-forward networks in each layer.
\section{Conclusion}
In this paper, we have investigated the impact of using auto-sizing on the transformer network of the 2019 WNGT efficiency task. We were able to delete more than 25\% of the parameters in the model while only suffering a modest BLEU drop. In particular, focusing on the parameter matrices of the feed-forward networks in every layer of the encoder and decoder yielded the smallest models that still performed well.
A nice aspect of our proposed method is that the proximal gradient step of auto-sizing can be applied to a wide variety of parameter matrices. Whereas for the transformer, the largest impact was on feed-forward networks within a layer, should a new architecture emerge in the future, auto-sizing can be easily adapted to the trainable parameters.
Overall, NDNLP's submission has shown that auto-sizing is a flexible framework for pruning parameters in a large NMT system. With an aggressive regularization scheme, large portions of the model can be deleted with only a modest impact on BLEU scores. This in turn yields a much smaller model on disk and at run-time.
\section*{Acknowledgements}
This research was supported in part by University of Southern California, subcontract 67108176 under DARPA contract HR0011-15-C-0115.
|
1510.07604
|
\section*{\abstractname}
\else
\small
\begin{center}
{\bfseries \ackname\vspace{-.5em}\vspace{\z@}}
\end{center}
\quotation
\fi}
{\if@twocolumn\else\endquotation\fi}
\fi
\makeatother
\hypersetup{
pdfmenubar=false,
pdfauthor={Eris Runa},
pdfsubject={Subject},
pdfcreator={Eris Runa},
pdfproducer={Eris Runa},
pdfnewwindow=true,
colorlinks=false,
linkcolor=blue,
citecolor=blue,
filecolor=magenta,
urlcolor=cyan
}
\renewcommand{\theenumi}{\emph{(\roman{enumi})}}
\renewcommand{\labelenumi}{\theenumi}
\renewcommand{\theenumii}{\emph{(\alph{enumii})}}
\renewcommand{\labelenumii}{\theenumii}
\usepackage[left=1in,right=1in,top=1.3in,bottom=1.4in]{geometry}
\newcommand{\babs}[1]{\ensuremath{\big\vert#1\big\vert}}
\newcommand{\Babs}[1]{\ensuremath{\Big\vert#1\Big\vert}}
\newcommand{\knorm}[1]{\ensuremath{\Vert#1\Vert}}
\newcommand{W^{qc}}{W^{qc}}
\newcommand{{ \textrm{hom} }}{{ \textrm{hom} }}
\newcommand{\kabs}[1]{\ensuremath{\vert#1\vert}}
\usepackage{xspace}
\usepackage{enumitem}
\newcommand{\mathbb{Z}}{\mathbb{Z}}
\newcommand{\textrm{SBV}}{\textrm{SBV}}
\newcommand{\textrm{SBV}_p}{\textrm{SBV}_p}
\def\mathbb M{\mathbb M}
\newcommand{\textrm{BV}}{\textrm{BV}}
\newcommand{\textrm{Lip}}{\textrm{Lip}}
\newcommand{\textrm{GSBV}}{\textrm{GSBV}}
\newcommand{\mathcal{L}}{\mathcal{L}}
\newcommand{\mathcal{L}^{n}}{\mathcal{L}^{n}}
\def\mbox{\,a.e.\xspace}{\mbox{\,a.e.\xspace}}
\newcommand{\mathbb{C}}{\mathbb{C}}
\newcommand{\ensuremath{\S}}{\ensuremath{\S}}
\newcommand{\mathrm{Id}}{\mathrm{Id}}
\newcommand{\mathrm{Id_m}}{\mathrm{Id_m}}
\newcommand{{\,\mathrm{d}x}}{{\,\mathrm{d}x}}
\newcommand{{\,\mathrm{d}q}}{{\,\mathrm{d}q}}
\newcommand{{\,\mathrm{d}t}}{{\,\mathrm{d}t}}
\newcommand{{\,\mathrm{d}z}}{{\,\mathrm{d}z}}
\newcommand{{\,\mathrm{d}y}}{{\,\mathrm{d}y}}
\newcommand{\mathrm{Av}}{\mathrm{Av}}
\newcommand{\mathbb{N}}{\mathbb{N}}
\newcommand{\mathbb{R}}{\mathbb{R}}
\newcommand{\mathbb{Q}}{\mathbb{Q}}
\newcommand{\mathcal{B}}{\mathcal{B}}
\newcommand{\mathbb{T}}{\mathbb{T}}
\newcommand{\mathrm{L}}{\mathrm{L}}
\newcommand{L\log^{1/2}\!L}{L\log^{1/2}\!L}
\newcommand{L^p\log^{1/2}\!L}{L^p\log^{1/2}\!L}
\newcommand{\mathrm{BMO}}{\mathrm{BMO}}
\newcommand\scal[2]{{\left\langle #1 ,#2\right\rangle}}
\newcommand\scalare[1]{{\left\langle #1 \right\rangle}}
\newcommand\norm[1]{\| #1\|}
\newcommand{\formueller}[1]{{#1}}
\newcommand{\achtung}[1]{{\bf \color{red} #1 }}
\newcommand{\achtungcheck}[1]{{\bf \color{red}#1 }}
\newcommand{\fachtung}[1]{\footnote{\emph{\textcolor{red}{#1}}}}
\newcommand{\ottoboh}[1]{ }
\newcommand{\akmboh}[1]{{\color{green}}}
\newcommand{\BrascampLiebBoh}[1]{}
\newcommand{\mathop{\hbox{\vrule height 7pt width .5pt depth 0pt \vrule height .5pt width 6pt depth 0pt}}\nolimits}{\mathop{\hbox{\vrule height 7pt width .5pt depth 0pt \vrule height .5pt width 6pt depth 0pt}}\nolimits}
\newcommand{{{\mathbb R}^n}}{{{\mathbb R}^n}}
\newcommand{{{\mathbb R}^m}}{{{\mathbb R}^m}}
\newcommand{\hausd}{\mathcal H}
\newcommand{\Haus}[1]{{\mathscr S}^{#1}}
\newcommand{\Leb}[1]{{\mathscr L}^{#1}}
\newcommand{\Probabilities}[1]{\mathscr P(#1)}
\newcommand{\ProbabilitiesTwo}[1]{\mathscr P_2(#1)}
\newcommand{\Measures}[1]{\mathscr M(#1)}
\newcommand{\Measuresp}[1]{\mathscr M_+(#1)}
\newcommand{\RelativeEntropy}[2]{\mathcal H(#1|#2)}
\newcommand{\BorelSets}[1]{{\mathscr B}(#1)}
\newcommand{\varepsilon}{\varepsilon}
\newcommand{\varrho}{\varrho}
\newcommand{(T(t))_{t\geq 0}}{(T(t))_{t\geq 0}}
\def\,\mathrm{d}{\,\mathrm{d}}
\def\,\ell{\,\ell}
\newcommand{\insieme}[1]{\left \{#1\right \}}
\newcommand{\mathcal A}{\mathcal A}
\newcommand{\mathcal F}{\mathcal F}
\newcommand{\mathrm{Per}}{\mathrm{Per}}
\newcommand{\nabla_{\!\mathrm {H}}}{\nabla_{\!\mathrm {H}}}
\newcommand{\mathrm{C_1}}{\mathrm{C_1}}
\newcommand{{\mathrm{Cap}}}{{\mathrm{Cap}}}
\newcommand{\bigtriangleup}{\bigtriangleup}
\newcommand{\mathcal{T}}{\mathcal{T}}
\newcommand{\mathcal{R}}{\mathcal{R}}
\newcommand{e.g.,\xspace}{e.g.,\xspace}
\newcommand{E.g.,\xspace}{E.g.,\xspace}
\newcommand{\emph{et~al.}\xspace}{\emph{et~al.}\xspace}
\newcommand{etc.\@\xspace}{etc.\@\xspace}
\newcommand{i.e.,\xspace}{i.e.,\xspace}
\newcommand{I.e.,\xspace}{I.e.,\xspace}
\DeclareMathOperator{\spt}{spt}
\DeclareMathOperator{\diver}{\nabla^{*}}
\DeclareMathOperator{\dist}{dist}
\DeclareMathOperator{\sign}{sign}
\DeclareMathOperator*{\esssup}{ess\,sup}
\DeclareMathOperator{\Reg}{Reg}
\DeclareMathOperator{\Sing}{Sing}
\DeclareMathOperator{\osc}{osc}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{thm}[theorem]{Theorem}
\newtheorem{thrm}[theorem]{Theorem}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{lmm}[theorem]{Lemma}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{dfntn}[theorem]{Definition}
\newtheorem{remark}[theorem]{Remark}
\newtheorem{rmk}[theorem]{Remark}
\newtheorem{rmrk}[theorem]{Remark}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{prop}[theorem]{Proposition}
\newtheorem{prpstn}[theorem]{Proposition}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{crllr}[theorem]{Corollary}
\newtheorem{ex}[theorem]{Example}
\newtheorem{xmpl}[theorem]{Example}
\title{Finite range decomposition for a general class of elliptic operators}
\usepackage{authblk}
\author{Eris Runa\thanks{eris.runa@mis.mpg.de}}
\affil{Max Planck Institut for Mathematics in the Sciences,\\ Inselstrasse 22, Leipzig\\ Germany}
\usepackage{esint}
\newcommand{\abs}[1]{{\lvert #1\rvert}}
\newcommand{\tnorm}[1]{{|\hspace{-0.35mm}\lVert #1\rVert\hspace{-0.35mm}|}}
\newcommand{\Acal} {{\mathcal A }}
\newcommand{\Bcal} {{\mathcal B }}
\newcommand{\Ccal} {{\mathcal C }}
\newcommand{\Dcal} {{\mathcal D }}
\newcommand{\Ecal} {{\mathcal E }}
\newcommand{\Fcal} {{\mathcal F }}
\newcommand{\Gcal} {{\mathcal G }}
\newcommand{\Hcal} {{\mathcal H }}
\newcommand{\Ical} {{\mathcal I }}
\newcommand{\Jcal} {{\mathcal J }}
\newcommand{\Kcal} {{\mathcal K }}
\newcommand{\Lcal} {{\mathcal L }}
\newcommand{\Mcal} {{\mathcal M }}
\newcommand{\Ncal} {{\mathcal N }}
\newcommand{\Ocal} {{\mathcal O }}
\newcommand{\Pcal} {{\mathcal P }}
\newcommand{\Qcal} {{\mathcal Q }}
\newcommand{\Rcal} {{\mathcal R }}
\newcommand{\Scal} {{\mathcal S }}
\newcommand{\Tcal} {{\mathcal T }}
\newcommand{\Ucal} {{\mathcal U }}
\newcommand{\Vcal} {{\mathcal V }}
\newcommand{\Wcal} {{\mathcal W }}
\newcommand{\Xcal} {{\mathcal X }}
\newcommand{\Ycal} {{\mathcal Y }}
\newcommand{\Zcal} {{\mathcal Z }}
\newcommand{\cH }{\boldsymbol{\mathcal H}}
\newcommand{\As }{\mathscr{A}}
\newcommand{\Bs }{\mathscr{B}}
\newcommand{\Ts }{\mathscr{T}}
\newcommand{\Rs }{\mathscr{R}}
\newcommand{\Cs }{\mathscr{C}}
\usepackage{parskip}
\parskip=0.32\baselineskip
\setlength{\parindent}{0em}
\usepackage{titlesec}
\usepackage{titling}
\date{}
\usepackage{microtype}
\begin{document}
\maketitle
\begin{abstract}
We consider a family of gradient Gaussian vector fields on $\mathbb{Z}^d$, where the covariance operator is not translation invariant.
A uniform finite range decomposition of the corresponding covariance operators is proven, i.e.,\xspace the covariance operator can be written as a sum of covariance operators whose kernels are supported within cubes of increasing diameter.
An optimal regularity bound for the subcovariance operators is proven. We also obtain regularity bounds as we vary the coefficients defining the gradient Gaussian measures.
This extends a result of S. Adams, R. Koteck\'y and S. M\"uller.
\end{abstract}
\begin{acknowledgements}
The present results were obtained during my PhD studies. I would like to express my gratitude to my advisor Prof. Stefan M\"uller for the support and helpful discussions on the topic. I am also grateful to the Bonn International Graduate school in Mathematics, Hausdorff Center for Mathematics, SFB 1060 and the Institute for Applied Mathematics in Bonn for the support and the nice environment.
\end{acknowledgements}
\section{Introduction}
\label{sec:intro-frd}
Recently, there has been some interest in the finite range decompositions of gradient Gaussian fields on $\mathbb{Z}^{d}$.
In particular, in \cite{MR2995704}, S.~Adams, R.~Koteck\'y and S.~M\"uller construct a finite range decomposition for a family of translation invariant gradient Gaussian fields on $\mathbb{Z}^d $ ($d \geq 2$) which depends real-analytically on the quadratic form that defines the Gaussian field: they consider a large torus $\mathbb{T}^{d}_{N}:=(\mathbb{Z}/L^N \mathbb{Z})^d$ and obtain a finite range decomposition with estimates that do not depend on $N$.
More precisely, they consider a constant coefficient discrete elliptic system $\Acal = \nabla^{*} A \nabla$ and show that its Green's function $G_{A}(\cdot,\cdot)$ can be decomposed as
\begin{equation*}
\begin{split}
G_{A}(x,y) = \sum_{k} G_{A,k}(x,y)
\end{split}
\end{equation*}
where the $G_{A,k}(\cdot,\cdot)$ have finite range, i.e.,\xspace
\begin{equation*}
\begin{split}
G_{A,k}(x,y) = 0 \qquad \text{whenever } |x-y | > L^{k}
\end{split}
\end{equation*}
and they are positive definite, i.e.,\xspace $\sum_{x,y} \varphi(x)G_{A,k}(x,y)\varphi(y) \geq 0$ for every $\varphi:\mathbb{T}^{d}_{N}\to \mathbb{R}^{m}$.
Moreover they prove optimal estimates for $D^{\beta}\nabla^{\alpha }G_{A,k}$.
We improve their result by extending it to the space-dependent case. Namely, we consider an elliptic operator of the form $\Acal = \nabla^{*} A \nabla$, where $A=A(x)$ depends on the space variable. We then show that its Green's function can be written as a sum of positive definite, finite range functions $G_{A,k}(x,y)$.
This extension is highly non-trivial: their proof relies on careful Fourier analysis and combinatorial techniques, neither of which seems to apply once the coefficients depend on the space variable. Our approach takes a different route: we use $L^{p}$-theory arguments. Because some of these well-known $L^p$-estimates are not available in the discrete setting, we also prove the required $L^p$ estimates there.
As a byproduct, we are also able to prove the analogue of the finite range decomposition in the continuous setting, which to our knowledge was also not previously known.
The manuscript is organized as follows: in \ensuremath{\S}~\ref{sec:Preliminary Results} we give a brief introduction to the results contained in \cite{MR2995704} and introduce some notation; in \ensuremath{\S}~\ref{sec:hypothesis} we state our main result; in \ensuremath{\S}~\ref{sec:outlie} we give an outline of the proof in the continuous setting, hoping that the lighter notation will make the proof easier to follow; in \ensuremath{\S}~\ref{sec:construction-frd} we briefly discuss the construction of the finite range decomposition; in \ensuremath{\S}~\ref{sec:discrete-estimates} we extend the $L^p$-theory to the discrete setting and show how to obtain the bounds; finally, in \ensuremath{\S}~\ref{sec:analytic-dependence} we briefly discuss how to prove the bounds on the derivatives with respect to $A$. Because the construction and the analyticity (\ensuremath{\S}~\ref{sec:construction-frd}, \ensuremath{\S}~\ref{sec:analytic-dependence}) are essentially the same as in \cite{MR2995704}, we only sketch their proofs.
\section{Preliminary Results}
\label{sec:Preliminary Results}
In this section we \emph{briefly} describe the results contained in \cite{MR2995704}.
Before stating them precisely, we would like to introduce some notation.
We fix a positive integer $N$ and an odd integer $L>3$.
The torus of size $N$ is defined as $\mathbb{T}^d_{N}:= (\mathbb{Z}/L^{N}\mathbb{Z})^d $. The space of all functions on $\mathbb{T}^{d}_N$ with values in $\mathbb{R}^m $ will be denoted by
\begin{equation*}
\begin{split}
\mathbf{X}_{N}:= (\mathbb{R}^{m})^{\mathbb{T}^{d}_{N}}= \insieme{\varphi:\mathbb{Z}^{d}\to \mathbb{R}^{m}:\ \varphi(x+z) = \varphi(x),\ \forall x\in\mathbb{Z}^{d},\ \forall z\in (L^{N}\mathbb{Z})^d}.
\end{split}
\end{equation*}
This space will be endowed with the $\ell_2 $-scalar product, i.e.,\xspace
\begin{equation*}
\begin{split}
\scalare{\varphi,\psi} =\sum_{x\in \mathbb{T}^{d}_{N}} \scalare{\varphi(x),\psi(x)}_{\mathbb{R}^{m}}.
\end{split}
\end{equation*}
In the last section, the space $\mathbf{X}_{N} $ will be complexified and the scalar product will be replaced by the appropriate Hermitian inner product.
We also define
\begin{equation*}
\begin{split}
\dist(x,y)&:=\inf\insieme{\abs{x-y+z}\colon z\in (L^N\mathbb Z)^d},\\
\dist_\infty(x,y)&:= \inf\insieme{\abs{x-y+z}_\infty\colon z\in (L^N\mathbb Z)^d},
\end{split}
\end{equation*}
and with a slight abuse of notation
\begin{equation*}
\begin{split}
\dist_\infty(x,M) := \min \{ \dist_\infty(x,y)\colon y \in M \}.
\end{split}
\end{equation*}
Gradient Gaussian fields are naturally defined on
\begin{equation}
\mathcal{X}_N:=\{\varphi\in \mathbf{X}_N: \sum_{x\in \mathbb T_N }\varphi(x)=0\}.
\end{equation}
For any set $M \subset \Lambda_N$, we define its closure by
\begin{equation}
\overline M=\{x\in \Lambda_N\colon \dist_\infty(x,M)\le 1\}.
\end{equation}
The forward and backward derivative are defined as
\begin{equation}
(\nabla_j \varphi)(x):=\varphi(x+ e_j)-\varphi(x)\quad \text{and}\quad (\nabla^*_j\varphi)(x):=\varphi(x-e_j)-\varphi(x).
\end{equation}
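The two derivatives are adjoint to each other with respect to the $\ell_2$-scalar product (summation by parts on the torus, with no boundary terms). This duality can also be checked numerically; the following minimal sketch is an illustration only, with $m=1$ and a hypothetical grid size $n$ standing in for $L^{N}$.

```python
import numpy as np

# Numerical check of summation by parts on the discrete torus:
# <grad_j phi, psi> = <phi, grad*_j psi>, with m = 1 and a grid size n
# standing in for L^N (the value n = 9 is an arbitrary choice).
rng = np.random.default_rng(0)
n = 9
phi = rng.standard_normal((n, n))
psi = rng.standard_normal((n, n))

def grad(f, j):
    # forward derivative: (grad_j f)(x) = f(x + e_j) - f(x), periodic
    return np.roll(f, -1, axis=j) - f

def grad_star(f, j):
    # backward derivative: (grad*_j f)(x) = f(x - e_j) - f(x), periodic
    return np.roll(f, 1, axis=j) - f

for j in range(2):
    lhs = np.sum(grad(phi, j) * psi)
    rhs = np.sum(phi * grad_star(psi, j))
    assert abs(lhs - rhs) < 1e-10
```

This identity is what makes the operator $\nabla^*(A\nabla\cdot)$ below symmetric and nonnegative.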
Until the end of this section, we will denote by $A:\mathbb{R}^{m\times d}\to\mathbb{R}^{m\times d}$ a linear, symmetric and positive definite map.
The Dirichlet form on $\mathcal{X}_N$ is defined by,
\begin{equation*}
\scalare{\varphi,\psi}_+:=\sum_{x\in \mathbb{T}^{d}_N} \scalare{ A(\nabla\varphi(x)),\nabla\psi(x)}_{\mathbb{R}^{m\times d}},
\end{equation*}
where $\varphi,\psi\in \mathcal{X}_{N}$.
It is not difficult to see that $(\cdot,\cdot)_+$ defines a scalar product, and hence a norm, on $\mathcal{X}_N$.
Moreover, we will use $\|\cdot \|_{2}$ and $\|\cdot \|_{-} $ to denote the standard $\ell_{2}$-norm and the dual norm of $\|\cdot \|_{+}$; we will use $\cH_+ ,\ \cH ,\ \cH_-$ to denote $\mathcal{X}_N$ endowed with the norms $\|\cdot \|_{+} $, $\|\cdot \|_{2} $ and $\|\cdot \|_{-} $ respectively.
Consider now the Green's operator $\Cs_A:={\As}^{-1}$ of the operator $\As$ and the corresponding bilinear form on $\mathcal{X}_N $ defined by
\begin{equation*}
\mathcal{G}_A(\varphi,\psi)=\langle\Cs_A\varphi,\psi\rangle=(\varphi,\psi)_-,\quad \varphi,\psi\in\mathcal{X}_N.
\end{equation*}
Given that the operator $\As$ and its inverse commute with translations on $\mathbb{T}_N$, there exists a unique kernel $\Ccal_A$ such that
\begin{equation}
(\Cs_A\varphi)(x)=\sum_{y\in\mathbb{T}_N}\Ccal_A(x-y)\varphi(y).
\end{equation}
It is easy to see that the function $G_{A,y}(\cdot)=\Ccal_A(\cdot-y)$ is the unique solution (with zero mean) of the equation
\begin{equation}
\As G_{A,y}=\bigl(\delta_y -\frac1{L^{Nd}}\bigr) \mathrm{Id_m},
\end{equation}
where $\mathrm{Id_m}$ is the unit $m\times m$ matrix.
Notice that for any $a\in\mathbb{R}^m$ one has
\begin{equation*}
\begin{split}
\As (G_{A,y}\,a) =\bigl(\delta_y -\frac1{L^{Nd}}\bigr) a \in \mathcal{X}_N.
\end{split}
\end{equation*}
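The defining equation above can also be verified numerically. The sketch below is an illustration only (with $m=1$, $d=2$ and a hypothetical random coefficient field; it is not part of the argument): it assembles $\nabla^*(a\nabla\cdot)$ on a small torus and checks that the zero-mean solution satisfies the equation with right hand side $\delta_y - L^{-Nd}$.

```python
import numpy as np

# Illustration only: m = 1, d = 2, hypothetical coefficient field a(x) > 0.
# We assemble the operator A = grad*(a grad .) on a small torus and check
# that the zero-mean Green's function solves A G_y = delta_y - 1/n^2.
rng = np.random.default_rng(1)
n = 5
a = 1.0 + 0.5 * rng.random((2, n, n))   # one positive coefficient per direction

def apply_A(phi):
    out = np.zeros_like(phi)
    for j in range(2):
        g = np.roll(phi, -1, axis=j) - phi        # forward derivative
        ag = a[j] * g
        out += np.roll(ag, 1, axis=j) - ag        # backward derivative
    return out

# matrix of the operator in the canonical basis of R^(n*n)
basis = np.eye(n * n)
M = np.column_stack([apply_A(basis[:, i].reshape(n, n)).ravel()
                     for i in range(n * n)])

y = (2, 3)
rhs = np.full(n * n, -1.0 / (n * n))
rhs[np.ravel_multi_index(y, (n, n))] += 1.0   # delta_y - 1/n^2, which has zero sum

G = np.linalg.pinv(M) @ rhs        # the pseudoinverse selects the zero-mean solution
assert abs(G.sum()) < 1e-8         # zero mean
assert np.allclose(M @ G, rhs)     # the equation holds
```

Here the pseudoinverse picks the solution orthogonal to the kernel of the (symmetric) operator, i.e., orthogonal to constants, which is exactly the zero-mean normalization used in the text.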
In \cite{MR2995704}, among other things, the following result is proved:
\begin{theorem}[{\cite{MR2995704}}]
\label{thm:AKM_FRDfamily}
For any $d \geq2 $ and any multiindex $\alpha $, there exist constants $C_{\alpha}(d)$ and $\eta_{d}(\alpha)$ such that the following properties hold:
For any integer $N\geq1 $, every $k=1,\ldots,N+1$ and every odd integer $L\geq 16 $, the map $A\mapsto \mathscr{C}_{A,k}$ is real-analytic and
\begin{enumerate}
\item There exist positive definite operators $\mathscr C_{A,k}$ such that
\begin{equation*}
\begin{split}
\mathscr C_{A} = \sum_{k=1}^{N+1} \mathscr C_{A,k}.
\end{split}
\end{equation*}
\item There exist constant matrices $C_{A,k} $ such that
\begin{equation*}
\begin{split}
\Ccal_{A,k}(x) = C_{A,k} \quad\text{whenever }\dist_{\infty}(x,0) \geq \frac{1}{2}L^{k}.
\end{split}
\end{split}
\end{equation*}
\item Let $A_{0} $ be such that $\scalare{A_{0}F,F}_{\mathbb{R}^{m\times d}} \geq c_{0}\|F\|^{2}_{\mathbb{R}^{m\times d}}$ for every $F\in\mathbb{R}^{m\times d}$. Then
\begin{equation*}
\begin{split}
\sup_{\|\dot{A}\|\leq 1} \big\| (\nabla^{\alpha} D^{j}_{A} \mathcal C _{A_{0},k}(x)(\dot{A},\ldots,\dot{A})) \big\|\leq C_{\alpha}(d) \big ( \frac{2}{c_{0}}\big)^{j} j! L^{-(k-1)(d-2+|\alpha|)}L^{\eta_d(\alpha)},
\end{split}
\end{equation*}
where $D^{j}_A $ denotes the $j $-th derivative with respect to $A $ and $\|A\|$, denotes the operator norm of a linear mapping $A:\mathbb{R}^{m\times d}\to \mathbb{R}^{m\times d}$.
\end{enumerate}
\end{theorem}
\section{Notation and Hypothesis}
\label{sec:hypothesis}
Let $\bar A:\mathbb{T}^d\to \Lcal_{\rm sym}(\mathbb{R}^{m\times d})$ be a $C^{3}$ function, where
$\Lcal_{\rm sym}(\mathbb{R}^{m\times d})$ is the space of linear maps $A$ on $\mathbb{R}^{m\times
d}$ such that $A=A^{*}$, and assume that the associated operator is elliptic, namely that there
exist constants $c_1,c_0 >0$ such that
\begin{equation}
\label{eq:ellipticity}
\begin{split}
c_1 |P|^2 \geq \bar A_{i,j}^{\alpha,\beta} P_{\alpha}^{i} P_\beta^j \geq
c_0 |P|^2\qquad \forall P\in \mathbb{R}^{m\times d}
\end{split}
\end{equation}
and there exists an $\varepsilon_0>0$ (small enough)
such that
\begin{equation}
\label{eq:cond-bar-A}
\begin{split}
\sum_{|\gamma |\leq 3} \sup_{\mathbb{T}^d} |D^{\gamma } \bar{A}_{i,j}^{\alpha,\beta}| \leq \varepsilon_{0},
\end{split}
\end{equation}
where $\gamma $ is a multi-index.
For every $N>1$, we define the function $A_N:\mathbb{T}^d_N\to \Lcal_{\rm sym}(\mathbb{R}^{m\times d})$ in the following natural way:
\begin{equation}
\label{eq:def-A_N}
\begin{split}
A_{N}(x)=\bar A(x/L^{N}).
\end{split}
\end{equation}
The condition~\eqref{eq:cond-bar-A} can be expressed in terms of $A_{N}$ as
\begin{equation}
\label{eq:cond-A_N}
\begin{split}
\sup_{|\gamma |\leq 3}\sup_{\mathbb{T}^d_{N}} L^{N|\gamma |}|\nabla
^{\gamma } ( {A}_{N} )_{i,j}^{\alpha,\beta}| \leq \varepsilon_{0}.
\end{split}
\end{equation}
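For the reader's convenience, here is the elementary one-derivative computation behind this equivalence (higher discrete derivatives are handled in the same way, one coordinate at a time):

```latex
\begin{equation*}
\begin{split}
(\nabla_j A_N)(x)
= \bar A\Big(\frac{x+e_j}{L^{N}}\Big)-\bar A\Big(\frac{x}{L^{N}}\Big)
= \frac{1}{L^{N}}\int_{0}^{1} (D_{j}\bar A)\Big(\frac{x+t e_j}{L^{N}}\Big)\,\mathrm{d}t,
\end{split}
\end{equation*}
```

so that $L^{N}|\nabla_j A_N(x)|\leq \sup_{\mathbb{T}^d}|D_j\bar A|$.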
Conversely, if there exists an $A_{N}$ such that \eqref{eq:cond-A_N} holds, then by some elementary interpolation one can construct an $\bar{A}$ such that \eqref{eq:def-A_N} holds.
Since we will mostly work with $N$ fixed, we will drop the subscript $N$ whenever it is clear from the context.
We denote by $\Ecal$ the set of maps $q:\ \mathbb{T}^{d}_{N}\to \Lcal_{\rm sym}(\mathbb{R}^{m\times d})$ for which
there exist constants $c_{0},c_{1} > 0 $ such that for every
$x\in \mathbb{T}^{d}_{N} $ and $F\in \mathbb{R}^{m\times d}$, it holds
\begin{equation*}
\begin{split}
c_{0} \scal{F}{F} \leq\scal{q(x) F}{F}\leq c_{1} \scal{F}{F}.
\end{split}
\end{equation*}
The set $\Ecal$ is not a vector space.
It will be endowed with the distance induced by the norm
\begin{equation*}
\begin{split}
\|q \|_{\Ecal} =\sup_{x\in\mathbb{T}^{d},|\beta|\leq 3} \|
L ^{|\beta|N}\nabla^{\beta} q(x)\|_{M_{\rm sym}( \mathbb{R}^{m\times d} )},
\end{split}
\end{equation*}
where $\beta $ is a multiindex.
As before, we introduce the following notation:
\begin{equation}
\mathcal{X}_N:=\{\varphi\in \mathbf{X}_N: \sum_{x\in \mathbb T_N }\varphi(x)=0\},
\end{equation}
and
\begin{equation*}
\begin{split}
\As\colon \cH_+\to\cH_-,\quad \varphi\mapsto \As\varphi:=\nabla^*(A\nabla\varphi).
\end{split}
\end{equation*}
As in \ensuremath{\S}~\ref{sec:intro-frd}, let
$\Ccal_{A}:\mathbb{T}^{d}_{N}\times \mathbb{T}^{d}_{N} \to \mathbb{R}^{m\times m}$ be such that
\begin{equation*}
\As \Ccal_{A}(\cdot,y)=\bigl(\delta_y -\frac1{L^{Nd}}\bigr)\mathrm{Id_m}.
\end{equation*}
We will extend Theorem~\ref{thm:AKM_FRDfamily} in the following way:
\begin{theorem}\label{thm:FRD-mio}
Let $d\geq 3$, $A_{N}$ be defined as above.
Then there exist $\varepsilon _{0}>0$, $C_{d}(\alpha)$ and $\eta_{d}(\alpha)$ such that for every $\varepsilon <\varepsilon _0$
the operator $\Cs_A \colon \cH_-\to\cH_+$, where $\|A\|_{\Ecal}\leq \varepsilon$, admits a finite range
decomposition, i.e., there exist positive-definite
operators
\begin{equation}
\Cs_{A,k} \colon \cH_-\to\cH_+,\ (\Cs_{A,k}\varphi)(x)=\sum_{y\in\mathbb{T}^{d}_N}\Ccal_{A,k}(x,y)\varphi(y),\ k=1,\dots, N+1,
\end{equation}
such that
\begin{equation*}
\Cs_A=\sum_{k=1}^{N+1} \Cs_{A,k},
\end{equation*}
and for associated kernel $\Ccal_{A,k}$, there exists a constant matrix $C_{A,k}$ such that
\begin{equation*}
\Ccal_{A,k}(x,y)= C_{A,k}\ \text{ whenever } \ \dist_\infty(x,y)\geq \frac{1}{2} L^k\quad \mbox{ for } k=1,\dots,N .
\end{equation*}
Moreover, if
$(A_0 F, F)_{\mathbb{R}^{m\times d}} \geq c_0 \norm{F}_{\mathbb{R}^{m\times d}}^2$ for all
$F \in \mathbb{R}^{m \times d}$ and some $c_0 > 0$, and if
$\|A\|_{\Ecal} \leq 1/2$, then
\begin{equation*}
\sup_{\norm{\dot{A}}\le 1}\Big\|\nabla^{\alpha}_{y}D_A^j\Ccal_{A_0,k}(x,y)(\dot{A},\ldots,\dot{A})\Big\|
\le C_{\alpha}(d) \left(\frac{2}{c_0}\right)^j j! \, L^{-(k-1)(d-2+|\alpha|)}L^{\eta_{d}(\alpha)}.
\end{equation*}
\end{theorem}
\section{Outline of the proof in the continuous case}
\label{sec:outlie}
Before going to the discrete setting, we would like to briefly expose the basic idea
in the continuous case.
In what follows, we will use the symbol $\lesssim $ to indicate that an inequality is
valid up to universal constants, possibly depending on the dimensions $d$ and $m$.
For the sake of simplicity, we take $A=A(x)$ to be elliptic and smooth.
Let $B$ be a ball, and let $\Pi_B: W^{1,2}(\mathbb{R}^d)\to W^{1,2}_{0} (B)$ be the
projection operator. Moreover, we define $P_B := \mathrm{Id} -\Pi_{B}$.
The construction technique is due to Brydges \emph{et~al.}\xspace (see
\cite{MR2070102,MR2240180}) and consists in considering the operators
\begin{equation*}
\begin{split}
\Ts_{B} f := \frac{1}{|B|}\int _{\mathbb{T}^d} \Pi_{x+B} f{\,\mathrm{d}x} \qquad \text{and} \qquad
{\Rs_B} := \mathrm{Id} - \Ts_B.
\end{split}
\end{equation*}
Let $r_1,\ldots,r_k>0$ and let $B_{r_1},\ldots,B_{r_k}$ be the balls of radii $r_1,\ldots,r_k$ centered at $0$.
Whenever it is clear from the context, we will write $\Rs_{k}:=\Rs_{B_{r_k}}$.
The operators $\Cs_k$ that appear in Theorem~\ref{thm:AKM_FRDfamily} and Theorem~\ref{thm:FRD-mio} will be of the form
\begin{equation*}
\Cs_k : =(\Rs_{1}\dots \Rs_{k-1} )\Cs(\Rs_{k-1}^\prime\dots \Rs_{1}^\prime)
- (\Rs_{1}\dots \Rs_{k-1} \Rs_k)\Cs(\Rs_k^\prime \Rs_{k-1}^\prime
\dots \Rs_{1}^\prime), \ k=1,\dots, N,
\end{equation*}
for a particular choice of $\{ r_{k}\}$.
Then the proof of the finite range property will follow by abstract reasoning (see $\S$~\ref{sec:construction-frd}).
In \cite{MR1354111}, among other things, the authors show:
\begin{thm}[{\cite[Theorem~1]{MR1354111}}]
\label{thm:dolzman-mueller-orig-thm1}
Let $\Omega $ be a regular domain and
$A^{\alpha ,\beta }_{i,j}\in C^{k,\alpha }(\bar{\Omega })$ for some $\alpha \in (0,1)$ such that
\begin{equation*}
\begin{split}
A_{i,j}^{\alpha ,\beta } P^{i}_{\alpha } P^{j}_{\beta } > c |P
|^{2}, \qquad \text{for some } c>0 \text { and every } P\in \mathbb{R}^{m\times d}.
\end{split}
\end{equation*}
Then there exists a matrix $G_{y}$ such that
\begin{equation*}
\begin{split}
-D_{\alpha }(A_{i,j}^{\alpha ,\beta }D_{\beta }( G_{y} )^{j}_{k}) =
\delta _{i,k}\,\delta _{y}\qquad\text{in } \Omega
\end{split}
\end{equation*}
in the sense of distributions and
\begin{equation*}
\begin{split}
G_{y}=0\qquad\text{ on }\partial \Omega.
\end{split}
\end{equation*}
Moreover, it holds
\begin{equation*}
\begin{split}
|D^{\nu } G_{y}(x) | \leq C |x-y |^{2-d - |\nu |},
\end{split}
\end{equation*}
where $\nu $ is a multi-index such that $|\nu |\leq k$.
\end{thm}
The above theorem is proven by using the following well-known $L^{p}$-estimates.
\begin{lemma}
\label{lemma:dolz-mue-orig-2}
Suppose the same hypothesis as in Theorem~\ref{thm:dolzman-mueller-orig-thm1}
and let $p\in (1,\infty )$, $q\in (1,n)$.
\begin{enumerate}
\item If
$f\in L^{p}(\Omega,\mathbb{R}^{m\times d} ), F\in L^{q}(\Omega ,\mathbb{R}^{m})$, then the system
\begin{equation*}
\begin{split}
- D_{\alpha }(A_{i,j}^{\alpha ,\beta } D_{\beta }u^{j}) =
D_{\alpha } f^{\alpha }_{j} + F^{i} \qquad \text{in }\Omega,
\end{split}
\end{equation*}
with boundary condition
\begin{equation*}
\begin{split}
u=0\qquad \text{on } \partial \Omega,
\end{split}
\end{equation*}
has a weak solution in $W^{1,s}(\Omega ; \mathbb{R}^{m})$, where
\begin{equation*}
\begin{split}
s=\min(p,q^{*}), \qquad q^{*}=\frac{nq}{n-q},
\end{split}
\end{equation*}
and
\begin{equation*}
\begin{split}
\|u \|_{W^{1,s}} \leq C(\|f \|_{L^{p}} + \|F \|_{L^{q}}).
\end{split}
\end{equation*}
\item If $f\in L^{p,\infty },\ F\in L^{q,\infty }$ then there exists a
weak solution that satisfies
\begin{equation}
\begin{split}
\|u \|_{L^{s^{*},\infty }} + \|Du \|_{L^{s,\infty }}\leq C(\|f \|_{L^{p, \infty }} + \|F \|_{L^{q,\infty }}).
\end{split}
\end{equation}
\end{enumerate}
\end{lemma}
To simplify the notation, we will write $\diver(A \nabla u)$ instead of $D_{\alpha }(A_{i,j}^{\alpha,\beta}D_\beta u^j)$.
\begin{lemma}
\label{lemma:dolz-mue-orig-3}
Suppose the same hypothesis as in Theorem~\ref{thm:dolzman-mueller-orig-thm1}.
Let $B_{2r}$ be a ball of radius $2r$ centered in $0$, let $p>d$, $q\in(1,n)$, and let $u$ be a
solution to
\begin{equation*}
\begin{split}
\diver (A \nabla u)=\diver f \qquad \text{in }B_{2r}.
\end{split}
\end{equation*}
Then
\begin{equation*}
\begin{split}
\sup_{B_{r}} |u |\lesssim r^{- n/q} M + r^{1- n/p} \|f \|_{L^{p}(B_{2r})},
\end{split}
\end{equation*}
where
\begin{equation*}
\begin{split}
M=\|Du \|_{L^{q,\infty }(B_{2r})} + \|u \|_{L^{q^{*},\infty }(B_{2r})}.
\end{split}
\end{equation*}
\end{lemma}
\begin{proposition}
\label{proposition:pre_kryesorja}
Let $B_1,\dots,B_k$ be balls with radii $r_1,\ldots,r_k$ respectively. Then there exists a dimensional constant $C_d$ such that
\begin{equation*}
\begin{split}
\sup|\nabla ^{j }u|\leq C_{d}^{k} \max\left( |x-y|,\dist(y, {B}_{1}^{C}),\ldots,\dist(y, B_k^{C}) \right)^{2-d-j},
\end{split}
\end{equation*}
where $u= P_{B_1}\cdots P_{B_k} C(x,\cdot)$, $C(\cdot,\cdot)$ is the Green's function and $j<d-2$.
\end{proposition}
\begin{proof}
Let us sketch the proof of the above fact;
in the discrete case it will be carried out in more detail.
The proof follows by induction.
Let $B_{1}$ be a ball of radius $r_1$ in generic position.
Given that $\diver(A\nabla C_{x})=0$ away from $x$, if $x\not\in B_1$ then $\Pi_{B_1}C(x,\cdot)=0$, thus $P_{B_1}C(x,\cdot)=C(x,\cdot)$, and hence the inequality follows from Theorem~\ref{thm:dolzman-mueller-orig-thm1}.
Let $\varepsilon:=\dist(y, {B}^{C}_1) <r_1$.
If $|x-y|>\varepsilon/2$, then by estimating the different terms
$\Pi_{B_1}C(x,y)$ and $C(x,y)$ separately one has the desired result.
Indeed, $C(x,y)\lesssim |x-y |^{2-d}$. Then by using an appropriate
version of Lemma~\ref{lemma:dolz-mue-orig-3} one has that
\begin{equation*}
\begin{split}
|\Pi _{B_{1}} C(x,y)|\lesssim |x-y |^{2-d} M,
\end{split}
\end{equation*}
where
\begin{equation*}
\begin{split}
M=\|D \Pi _{B_1} C_{x} \|_{L^{d/(d-2),\infty }(B_{1})} + \|\Pi _{B_1} C_{x} \|_{L^{d/(d-1),\infty }(B_{1})}.
\end{split}
\end{equation*}
Then by using Lemma~\ref{lemma:dolz-mue-orig-2} one has that
\begin{equation*}
\begin{split}
\|D\Pi _{B_{1}} C_x \|_{L^{d/(d-2),\infty}} +
\|\Pi _{B_{1}} C_x \|_{L^{d/(d-1),\infty}} \lesssim
\|D C_x\|_{L^{d/(d-2),\infty}} +
\|C_x\|_{L^{d/(d-1),\infty}}< \tilde{C}_{d},
\end{split}
\end{equation*}
where $\tilde{C}_{d}$ is a constant depending only on the dimension $d$.
The inductive step is done in a very similar way and the higher derivative estimates follow similarly.
\end{proof}
Let $B_1,\dots,B_k$ be $k$ balls centered at $0$, with radii $r_1,\dots,r_k$ respectively, and let $C(\cdot,\cdot)$ be the Green's function. We will denote $C_k(x,\cdot ):= \Rs_k \cdots \Rs_1 C(x,\cdot)$.
Let us now give a simple calculation that will be useful in Theorem~\ref{thm:kryesorja}.
\begin{lemma}
\label{lemma:2}
Let $j> 1$ be an integer. Then
\begin{equation*}
\begin{split}
\frac{1}{r^{d}}\int_{0}^{r} \max(\alpha, |r-\rho|)^{-j}\rho^{d-1}d\rho \lesssim \frac{\alpha^{1-j}}{r}.
\end{split}
\end{equation*}
Indeed, let us denote by $I$ the left-hand side of the previous inequality. With the change of variables $\rho=rt$ one has
\begin{equation*}
\begin{split}
I&= \frac{1}{r^{d}}\int_{0}^{r-\alpha} |r-\rho|^{-j}\rho^{d-1}d\rho +
\frac{1}{r^{d}}\int_{r-\alpha}^{r} \alpha^{-j} \rho^{d-1} \,\mathrm{d}\rho
\\& =\frac{1}{r^{j}}\int^{1-\frac{\alpha}{r}}_{0} |1-t|^{-j} t^{d-1}{\,\mathrm{d}t} +
\int ^{1}_{1-\frac{\alpha}{r}} \alpha^{-j} t^{d-1}{\,\mathrm{d}t}
\\& \leq\frac{1}{r^{j}}\int^{1-\frac{\alpha}{r}}_{0} |1-t|^{-j} {\,\mathrm{d}t} +
\int ^{1}_{1-\frac{\alpha}{r}} \alpha^{-j} {\,\mathrm{d}t} \leq
r^{-j} \left( \frac{\alpha^{1-j}}{r^{1-j}} - 1 \right ) +
\frac{\alpha^{1-j}}{r} \\ &\leq
\frac{2\alpha^{1-j}}{r}.
\end{split}
\end{equation*}
If $j=1$, the same change of variables gives
\begin{equation*}
\begin{split}
I&= \frac{1}{r^{d}}\int_{0}^{r-\alpha} |r-\rho|^{-1}\rho^{d-1}d\rho +
\frac{1}{r^{d}}\int_{r-\alpha}^{r} \alpha^{-1} \rho^{d-1} \,\mathrm{d}\rho
\\& =\frac{1}{r}\int^{1-\frac{\alpha}{r}}_{0} |1-t|^{-1} t^{d-1}{\,\mathrm{d}t} +
\int ^{1}_{1-\frac{\alpha}{r}} \alpha^{-1} t^{d-1}{\,\mathrm{d}t}
\\& \leq\frac{1}{r}\int^{1-\frac{\alpha}{r}}_{0} |1-t|^{-1} {\,\mathrm{d}t} +
\int ^{1}_{1-\frac{\alpha}{r}} \alpha^{-1} {\,\mathrm{d}t} \leq
\frac{1}{r} \Big( \big|\log\big(\frac{\alpha}{r}\big) \big| +1\Big).
\end{split}
\end{equation*}
\end{lemma}
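The bound of the lemma can also be sanity-checked numerically. The following sketch is an illustration only, with arbitrary hypothetical values $d=3$, $j=2$, $r=10$: it approximates the integral by a midpoint rule and compares it with $2\alpha^{1-j}/r$.

```python
import numpy as np

# Midpoint-rule approximation of
#   (1/r^d) * int_0^r max(alpha, |r - rho|)^(-j) * rho^(d-1) drho,
# compared against the bound 2 * alpha^(1-j) / r obtained above.
d, j, r = 3, 2, 10.0   # arbitrary illustrative values with j > 1

def I(alpha, n=200000):
    rho = (np.arange(n) + 0.5) * (r / n)
    integrand = np.maximum(alpha, np.abs(r - rho)) ** (-j) * rho ** (d - 1)
    return float(integrand.sum() * (r / n) / r ** d)

for alpha in [0.1, 0.5, 1.0, 2.0]:
    assert I(alpha) <= 2.0 * alpha ** (1 - j) / r
```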
\begin{theorem}
\label{thm:kryesorja}
Let $C_k,B_i,r_i$ be as above and such that $r_1<\dots<r_h<|x-y|<r_{h+1}<\dots<r_k$. Then,
\begin{enumerate}
\item if $k -h< d-2$, then it holds
\begin{equation*}
\begin{split}
|C_{k}(x,y)|&\lesssim \frac{1}{r_{h+1}\cdots r_k} |x-y|^{2-d+k -
h}\prod_{i=h+1}^{k}\left( \Big|\log\left(\frac{|x-y|}{r_i}\right)\Big|+1\right)\\
|\nabla ^{j}_{y}C_{k}(x,y)| &\lesssim \frac{1}{r_{h+1}\cdots
r_k} |x-y|^{2-d+k -j -h},
\end{split}
\end{equation*}
\item if $k-h\geq d-2$, it holds
\begin{equation*}
\begin{split}
|C_{k}(x,y)|&\lesssim \frac{1}{ r_{k-d+3}\cdots r_k} \left|\log( |x-y| )\right|\\
|\nabla ^{j}_{y}C_{k}(x,y)| &\lesssim \frac{1}{ r_{k-d +2
-j}\cdots r_k} \prod_{i=h+1+j}^{k}\left( \Big|\log\left(\frac{|x-y|}{r_i}\right)\Big|+1\right).\\
\end{split}
\end{equation*}
\end{enumerate}
\end{theorem}
\begin{proof}
We will prove only the first statement; the proof of the second is very similar.
Let us initially consider the case $k=1$.
For simplicity we denote $\Pi_z:= \Pi_{B_1 +z}$. With simple
computations, one has
\begin{equation}
\label{eq:01071357524110}
\begin{split}
\left|C_1(x,y)\right| \leq \frac{1}{|B_1|}\int_{B_1+y}\sup |(\mathrm{Id} - \Pi_{z})C(x,\cdot)|{\,\mathrm{d}z} +
\left|\frac{1}{|B_1|} \int_{(y+B_1)^{C}} (\Pi_{z} C(x,\cdot))(y){\,\mathrm{d}z}\right|.
\end{split}
\end{equation}
Since for every $z\in (y+B_1)^{C}$ the function $\Pi _{z}C_{x}$
belongs to $W^{1,2}_{0}(z+B_1)$, and hence vanishes at $y$, the second term
on the right hand side of \eqref{eq:01071357524110} is null. Hence it is enough to
bound the first
term. Given that for every $z\in y+B_1$ it holds $\dist(y,(z+B_1)^{C})=r_1 -
|z-y|$, by using Proposition~\ref{proposition:pre_kryesorja}, one has that
\begin{equation*}
\begin{split}
\sup |(\mathrm{Id} - \Pi_{z})C(x,\cdot)| \leq
\begin{cases}
(r_1 - |z-y|)^{2-d}& \text{ if\ \ $
r_1 - |y-z|\geq |x-y|$} \\
|x-y|^{2-d}& \text{otherwise}.
\end{cases}
\end{split}
\end{equation*}
Thus,
\begin{equation*}
\begin{split}
|C_1 (x,y)| &\lesssim \frac{1}{r_1^{d}}\int_{0}^{r_1- |y-x|}
|r_1-\rho|^{2-d}\rho^{d-1}\,\mathrm{d}\rho + \frac{1}{r_1^{d}}\int_{r_1 -
|x-y|}^{r_1}|x-y|^{2-d}\rho^{d-1} \,\mathrm{d}\rho \\ &
\lesssim \frac{|x-y|^{3-d}}{r_1} +
\frac{|x-y|^{3-d}}{r_1}\lesssim \frac{|x-y|^{3-d}}{r_1}.
\end{split}
\end{equation*}
Let us now turn to the general case $k<d-2$, and let $B_1,\dots,B_k$ be balls of radii $r_1,\dots,r_k$ centered at the origin.
From Proposition~\ref{proposition:pre_kryesorja}, we have that
\begin{equation*}
\begin{split}
&\sup | P_{z_1+B_1}\cdots P_{z_k+B_k} C(x,\cdot)|\leq
\max\insieme{|x-y|,r_1 - |z_1-y|,\dots,r_k- |z_k-y|}^{2-d}\\
&\leq |x-y|^{2-d+k}\cdot\max\insieme{|x-y|,r_1-
|z_1-y|}^{-1}\cdots\max\insieme{|x-y|,r_k- |z_k-y|}^{-1}\\ &\hspace{10cm} =:g(z_1,\dots,z_k).
\end{split}
\end{equation*}
Thus,
\begin{equation*}
\begin{split}
|C_k (x,y )|\leq
\frac{1}{|B_1|\cdots|B_k|}\int_{B_1\times\cdots\times B_k} g(z_1,\ldots,z_k) {\,\mathrm{d}z}_1\cdots{\,\mathrm{d}z}_k.
\end{split}
\end{equation*}
From Lemma~\ref{lemma:2} we have that
\begin{equation*}
\begin{split}
\frac{1}{|B_1|\cdots|B_k|}\int_{B_1\times\cdots\times B_k} g(z_1,\ldots,z_k) {\,\mathrm{d}z}_1\cdots{\,\mathrm{d}z}_k
\leq \frac{1}{r_1\cdots r_k}|x-y|^{2-d+k}\prod_{i} (|\log (|x-y|)|
+\log(r_i) +1),
\end{split}
\end{equation*}
which proves the desired result.
\end{proof}
\begin{corollary}
Suppose that $|x-y|>1$ and let $B_1,\ldots,B_k$ be balls with radii $r_i=L^{i}$,
where $L>1$. Then there exists $\eta (j,d)$ such that
\begin{equation*}
\begin{split}
\nabla ^{j}C_k(x,y) \lesssim \frac{L^{\eta(j,d)}}{L^{k(d-2 -j)}}.
\end{split}
\end{equation*}
Indeed, given that $\Rs'_{k} = \As \Rs_{k} \Cs$, one has that
\begin{equation*}
\begin{split}
\Rs_{1}\cdots\Rs_{k} \Cs \Rs'_{k}\cdots\Rs'_{1} = \Rs_{1}\cdots\Rs_{k} \cdot \Rs_{k}\cdots\Rs_{1} \Cs
\end{split}
\end{equation*}
hence by using Theorem~\ref{thm:kryesorja}, one has the desired result.
\end{corollary}
\section{Construction of the finite range decomposition}
\label{sec:construction-frd}
In this section, we will briefly describe the construction of the finite range decomposition.
Let us \emph{stress} that the main idea in the construction of the finite range decomposition goes back to Brydges \emph{et~al.}\xspace (e.g.,\xspace \cite{MR2070102,MR2240180}).
Because the construction is rather well-known and general, in this section we will only briefly sketch how it can be carried out.
There are different versions of the above-mentioned construction; we have in mind in particular the closely related one that can be found in \cite{MR2995704}.
Let $Q$ be a cube of side length $l$; for simplicity of notation we will write $\Pi_x:=\Pi_{Q+x}$.
For every $\varphi \in \cH_+$, define
\begin{equation*}
\Ts(\varphi ) :=\frac{1}{l^d}\sum_{x\in \mathbb{T}^{d}_N}\Pi_x \varphi.
\end{equation*}
One also introduces the dual ${\Ts}^{\prime}: \cH_- \to\cH_-$ of $\Ts$, i.e.,\xspace
\begin{equation}
\langle {\Ts}^{\prime}\varphi,\psi\rangle=\langle \varphi,\Ts\psi\rangle,\quad \varphi\in\cH_-, \psi\in\cH_+.
\end{equation}
It is not difficult to notice that
\begin{equation}
\label{eq:propertiesOfTT'}
{\Ts}^{\prime}=\As{\Ts}{\As}^{-1},
\quad (\Ts' \varphi, \psi)_- = (\varphi, \Ts' \psi)_-,
\quad \mbox{and } (\Ts' \varphi, \varphi)_- = (\Ts \As^{-1} \varphi, \As^{-1} \varphi)_+.
\end{equation}
In order to construct the finite range decomposition we will also need ${\Rs}:=\mathrm{Id}-{\Ts}$ and its dual ${\Rs}^{\prime}=\mathrm{Id}-{\Ts}^{\prime}$.
Using \eqref{eq:propertiesOfTT'} one has that
\begin{equation*}
\begin{split}
\Rs' = \As \Rs \As^{-1}.
\end{split}
\end{equation*}
Given that $0 \leq \scalare{\Ts\varphi,\varphi} \leq \scalare{\varphi,\varphi}$, and \eqref{eq:propertiesOfTT'}, for every $\varphi \neq 0$ one has that $(\Ts' \varphi, \varphi)_- > 0, \quad (\Rs' \varphi, \varphi)_- > 0$ and $(\Ts' \varphi, \Ts' \varphi)_- \leq (\Ts' \varphi, \varphi)_-$.
Moreover, given a bilinear form $B$ on $\mathcal{X}_{N}$, there exists a (unique) linear map $\Bs$ such that
\begin{equation*}
\begin{split}
B(\varphi,\psi) = \scalare{\Bs \varphi,\psi}.
\end{split}
\end{equation*}
The map $\Bs $ can be represented by a kernel, namely there exists a map $\Bcal$ such that
\begin{equation*}
\begin{split}
(\Bs\psi)(x) = \sum_{y\in \mathbb{T}^{d}_{N}} \Bcal(x,y)\psi(y).
\end{split}
\end{split}
\end{equation*}
Indeed, since in our case all the functions live in a finite dimensional vector space, this is a simple linear algebra exercise.
For every $M_1, M_2 \subset \mathbb{T}_N$, we will define the distance
\begin{equation}
\dist_\infty(M_1,M_2) := \min \{ \dist_\infty(x,y)\colon x \in M_1, y \in M_2 \}.
\end{equation}
Let us define $\Cs_1:=\Cs- \Rs \Cs{\Rs}^{\prime}$. As we saw, $\Cs $ is positive. The crucial step in proving the finite range decomposition is proving that $\Cs_1 $ is finite range and also positive definite. The proof is a minor modification of the original one.
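For the reader's convenience, let us record the short computation behind the positivity claim. Using $\langle \Rs\psi,\varphi\rangle=\langle\psi,\Rs'\varphi\rangle$ and the properties of $\Ts'$ listed above, one has

```latex
\begin{equation*}
\begin{split}
\langle \Cs_1\varphi,\varphi\rangle
&= \langle \Cs\varphi,\varphi\rangle - \langle \Cs\Rs'\varphi,\Rs'\varphi\rangle
= \|\varphi\|_{-}^{2} - \|\Rs'\varphi\|_{-}^{2},\\
\|\Rs'\varphi\|_{-}^{2}
&= \|\varphi\|_{-}^{2} - 2(\Ts'\varphi,\varphi)_{-} + (\Ts'\varphi,\Ts'\varphi)_{-}
\leq \|\varphi\|_{-}^{2} - (\Ts'\varphi,\varphi)_{-},
\end{split}
\end{equation*}
```

so that $\langle \Cs_1\varphi,\varphi\rangle\geq (\Ts'\varphi,\varphi)_{-}>0$ for every $\varphi\neq 0$.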
Finally, the finite range decomposition can be constructed by an iterated application of the above.
Namely, let $(l_{j})$ be an increasing sequence of side lengths, let $Q_{j}$ be the corresponding cubes, and apply the above procedure with $ Q_j $ in place of $Q$: set
\begin{equation}
\Cs_k : =(\Rs_{1}\dots \Rs_{k-1} )\Cs(\Rs_{k-1}^\prime\dots \Rs_{1}^\prime)
- (\Rs_{1}\dots \Rs_{k-1} \Rs_k)\Cs(\Rs_k^\prime \Rs_{k-1}^\prime \dots \Rs_{1}^\prime), \ k=1,\dots, N,
\end{equation}
and
\begin{equation}
\Cs_{N+1} :=(\Rs_{1} \dots \Rs_{N-1} \Rs_N)\Cs(\Rs_N^\prime \Rs_{N-1}^\prime \dots \Rs_{1}^\prime).
\end{equation}
By doing this we have the desired finite range decomposition.
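Let us also note that the fact that these operators sum to $\Cs$ is a pure telescoping identity: setting $\Ds_k:=(\Rs_1\dots\Rs_k)\Cs(\Rs_k'\dots\Rs_1')$ and $\Ds_0:=\Cs$, one has $\Cs_k=\Ds_{k-1}-\Ds_k$ for $k=1,\dots,N$ and $\Cs_{N+1}=\Ds_N$, whence

```latex
\begin{equation*}
\begin{split}
\sum_{k=1}^{N+1}\Cs_{k}
=\sum_{k=1}^{N}\bigl(\Ds_{k-1}-\Ds_{k}\bigr)+\Ds_{N}
=\Ds_{0}=\Cs.
\end{split}
\end{equation*}
```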
\section{Discrete gradient estimates and $L^p$-regularity for elliptic systems}
\label{appSobolev}
\label{sec:discrete-estimates}
Let us now introduce some of the norms that will be used in the sequel. Let $ Q=[0,n]^d\cap\mathbb{Z}^d$ be a generic cube. For $ p>0 $ denote
\begin{equation}
\norm{f}_{p,Q}=\Big(\frac{1}{|Q|}\sum_{x\in Q}|f(x)|^p\Big)^{1/p},
\end{equation}
where $|Q|:=\#Q$.
To simplify notation, we will write $\sum_Q f := \sum_{i\in Q} f(i)$ and
$f_Q:=|Q|^{-1}\sum_{Q}f$.
Additionally, let us define
\begin{equation*}
\begin{split}
f^\#(x):=\sup_{Q\ni x}\ \frac{1}{|Q|}\sum_{Q }\big| f -
f_Q\big| \qquad\text{and}\qquad \|f\|_{\mathrm{BMO}}:=\sup_{x\in \mathbb{T}^{d}_{N}} |f^{\#}(x)|.
\end{split}
\end{equation*}
The maximal operator is defined by
\begin{equation*}
\begin{split}
\Mcal f(x):=\sup_{Q\ni x}\ \frac{1}{|Q|}\sum_{Q }| f |.
\end{split}
\end{equation*}
Moreover, let
\begin{equation*}
\|f\|_{p,\infty}= \inf \insieme{\alpha:\ \lambda
\,|\insieme{|f|>\lambda}|^{1/p} \leq \alpha,\ \text{for all $\lambda>0$}}
\end{equation*}
and
\begin{equation*}
\|f\|_{p,\infty,Q}= |Q|^{-1/p} \inf \insieme{\alpha:\ \lambda
\,|\insieme{|f|>\lambda}\cap Q|^{1/p} \leq \alpha,\ \text{for all $\lambda>0$}}.
\end{equation*}
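As a quick sanity check of these definitions (a numerical illustration only, using the counting-measure analogue on a hypothetical random sample), Chebyshev's inequality gives $\|f\|_{p,\infty}\leq\|f\|_{p}$:

```python
import numpy as np

# Counting-measure analogue of the weak norm above:
#   ||f||_{p,infty} = sup_l  l * #{|f| > l}^(1/p),
# which Chebyshev's inequality bounds by the strong norm ||f||_p.
rng = np.random.default_rng(2)
f = rng.standard_normal(1000)
p = 3.0

def weak_norm(f, p):
    a = np.sort(np.abs(f))                 # |f| sorted increasingly
    counts = np.arange(len(f), 0, -1)      # counts[k] = #{|f| >= a[k]}
    return float(np.max(a * counts ** (1.0 / p)))

strong = float(np.sum(np.abs(f) ** p) ** (1.0 / p))
assert weak_norm(f, p) <= strong
```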
We now state a version of the Sobolev inequality (see \cite{MR961019,AKM10}).
\begin{proposition}\label{Sobolevpropi-iv-in-chapter}
\label{Sobolevpropi-iv}
For every $p\ge 1$ and $m,M\in \mathbb{N}$ there exists a constant $C=C(p,M,m)$ such that:
\begin{enumerate}
\item[(i)] If $1\le p\le d$, $\frac{1}{p^*}=\frac{1}{p}-\frac{1}{d} $, and $ q\le p^*$, $q<\infty $, then
\begin{equation}
n^{-\frac{d}{q}}\norm{f}_q\le Cn^{-\frac{d}{2}}\norm{f}_2+Cn^{1-\frac{d}{p}}\norm{\nabla f}_p.
\end{equation}
\item[(ii)] If $ p> d $, then
\begin{equation}
\big|f(x)-f(y)\big| \le Cn^{1-\frac{d}{p}}\norm{\nabla f}_p \qquad\mbox{ for all } x,y\in Q_n.
\end{equation}
\item[(iii)] If $ m\in\mathbb{N}$, $1\le p\le \frac{d}{m}$, $\frac{1}{p_m}=\frac{1}{p}-\frac{m}{d}$, and $q\le p_m$, $q<\infty $, then
\begin{equation}
n^{-\frac{d}{q}}\norm{f}_q\le Cn^{-\frac{d}{2}} \sum_{k=0}^{m-1}\norm{(n\nabla)^kf}_2 +Cn^{-\frac{d}{p}}\norm{(n\nabla)^mf}_p.
\end{equation}
\item[(iv)] If $ M=\lfloor\frac{d+2}2\rfloor$ is the integer part of $\frac{d+2}2$, then
\begin{equation}
\max_{x\in Q_n}|f(x)|\le Cn^{-\frac{d}{2}}\sum_{k=0}^M\norm{(n\nabla)^kf}_2.
\end{equation}
\end{enumerate}
\end{proposition}
\begin{lemma}[Caccioppoli inequality]
Let $u$ be such that $\diver (A \nabla u)=0$ in $Q_{M}$ and let $m<M$. Then, for every $\lambda\in\mathbb{R}$,
\begin{equation*}
\begin{split}
\sum_{Q_m} |\nabla u(x)|^{2}\leq \frac{c_0^{4}}{( M-m )^{2}}\sum_{Q_M}|u-\lambda|^{2},
\end{split}
\end{equation*}
where $c_0$ is the constant defined in \eqref{eq:ellipticity}.
\end{lemma}
\begin{proof}
Let $0\leq \eta\leq1$ be a cut-off function such that $|\nabla \eta|\leq \frac{1}{M-m}$,
$\eta\equiv 1$ on $Q_m$ and $\eta = 0 $ on $\mathbb{T}^d_{N}\setminus \bar Q_M$.
Then
\begin{equation*}
\begin{split}
\sum_{Q_M} (A \nabla u \cdot \nabla u )\eta^2 = \sum_{Q_M} A \nabla u \cdot \nabla (\eta^2 (u- \lambda)) -\sum_{Q_M} A \nabla u \cdot 2 \eta ((u-\lambda) \otimes D\eta).
\end{split}
\end{equation*}
By hypothesis, the first term in the right hand side vanishes.
Using the previous formula and the ellipticity, one has that
\begin{align}
\sum_{Q_M} \kabs{\nabla u}^2 \eta^2 & \leq c_0\sum_{Q_M} A \nabla u \cdot 2 \eta ((u-\lambda) \otimes D\eta) \leq \frac{1}{2} \sum_{Q_M} \kabs{\nabla u}^2 \eta^2 + \frac{c_0^4}{2} \sum_{Q_M} \kabs{D\eta}^2 \kabs{u - \lambda}^2 ,
\end{align}
from which one has that
\begin{equation*}
\begin{split}
\sum_{Q_{m}} \kabs{\nabla u}^{2}\leq \sum_{Q_{M}} \kabs{\nabla u}^{2}\eta^2\leq
\frac{c_0^{4}}{( M-m )^2} \sum_{Q_M} |u-\lambda|^2.
\end{split}
\end{equation*}
\end{proof}
\begin{lemma}[Decay estimates]
\label{lemma:consequences_caccippoli}
\label{lemma:decay_estimates}
Let $u$ be such that $\diver (A \nabla u)=0$ on $Q_M$, with $M, M/2\in \mathbb{N}$ and
$2m\leq M$. Then,
\begin{equation*}
\begin{split}
\sum_{Q_m} |u(x)|^{2}& \lesssim (m/M)^{d} \sum_{Q_M} |u(x)|^2,\\
\sum_{Q_{m}} |u - ( u )_{m}|^{2}&\lesssim (m/M)^{d+2}\sum_{Q_M}|u - ( u )_{M}|^2.
\end{split}
\end{equation*}
\end{lemma}
\begin{proof}
From Caccioppoli's inequality, one has that
\begin{equation*}
\begin{split}
\sum_{Q_{M/2}} |M \nabla u (x)|^2 \lesssim \sum_{Q_{M}} |u(x)|^{2}.
\end{split}
\end{equation*}
Noticing that if $u$ is a solution then also $\nabla u$ is a solution, we have that
\begin{equation*}
\begin{split}
\sum_{Q_{M/2}} \|(M\nabla )^ju\|^{2}\lesssim \sum_{Q_{M}}|u(x)|^2,
\end{split}
\end{equation*}
hence
\begin{equation*}
\begin{split}
M^{-d}\sum_{j=0}^{k}\sum_{Q_{M/2}} \|(M/2 \nabla )^{j}u\|^{2}\lesssim
M^{-d}\sum_{Q_M} \|u\|^2.
\end{split}
\end{equation*}
Finally, applying the Sobolev inequality, we have that
\begin{equation}
\label{eq:04131397396068}
\begin{split}
\sum_{Q_m}\|u\|^2 \leq m^{d} \max_{Q_{M/2}} \|u\|^2 \lesssim \Big(\frac{m}{M}\Big)^{d} \sum_{Q_M}\|u\|^2.
\end{split}
\end{equation}
Let us now prove the second inequality. Using the Poincar\'e inequality and then \eqref{eq:04131397396068}, we have that
\begin{equation*}
\begin{split}
\sum_{Q_m}|u-(u)_m|^{2}&\leq m^2 \sum_{Q_m} |\nabla u|^{2}\lesssim m^2
\left(\frac{2m}{M}\right)^{d}\sum_{Q_{M/2}}|\nabla u|^{2}\\ &\lesssim \left(
\frac{m}{M} \right)^{d+2} \sum_{Q_M} |u - (u)_{M}|^{2},
\end{split}
\end{equation*}
where in the last step we have used the Caccioppoli inequality.
\end{proof}
\begin{lemma}\label{dolzman-mueller-lemma-1}
Let $p_1,p_2, q_1,q_2\in [1,\infty]$, $p_1\neq p_2$, $q_1\neq q_2$. Let
$\theta \in (0,1)$ and define $p,q$ by
\begin{equation}
\label{eq:09131347499984}
\frac{1}{p}=\frac{\theta}{p_1} + \frac{1-\theta}{p_2},\qquad \frac{1}{q}=\frac{\theta}{q_1} + \frac{1-\theta}{q_2}.
\end{equation}
Suppose that $T$ is a linear operator such that, for $i=1,2$,
\begin{equation*}
\begin{split}
\left( \frac{1}{|Q|} \sum_Q |T f|^{q_i}\right)^{\frac{1}{q_i}} \leq C_i\left(
\frac{1}{|Q|} \sum_Q | f|^{p_i}\right) ^{\frac{1}{p_i}}.
\end{split}
\end{equation*}
Then
\begin{equation*}
\begin{split}
\|Tf\|_{q,\infty,Q} \leq C_3 \|f\|_{p,\infty,Q},
\end{split}
\end{equation*}
where $C_3$ depends on $\theta$, $C_1$, $C_2$.
\end{lemma}
\begin{proof}
The proof of this result is well-known (see e.g.,\xspace \cite[Theorem~3.3.1]{butzer_berens}).
For completeness, we report an elementary proof adapted from \cite[Lemma~1]{MR1354111}. Let $p_1 < p_2$, $q_1 < q_2$ and let
$p$ be as in \eqref{eq:09131347499984}. Assume
that $\|Tf\|_{q_i}\leq C_{i} \| f\|_{p_i}$ for $i=1,2$. For $\gamma >0 $, define
\begin{equation}
f_1= \begin{cases}
f & \qquad \text{if $|f|>\gamma$}\\
0& \qquad \text{if $|f|\leq \gamma$}
\end{cases}
\end{equation}
and
\begin{equation}
f_2= \begin{cases}
0 & \qquad \text{if $|f|>\gamma$}\\
f& \qquad \text{if $|f|\leq \gamma$}.
\end{cases}
\end{equation}
Given that
\begin{equation*}
\begin{split}
\frac{1}{|Q|} \sum_{Q} |f_1|^{p_1} \leq \frac{p_1}{p-p_1} \gamma^{p_1-p} \|f\|^{p}_{p,\infty,Q}
\end{split}
\end{equation*}
we have that
\begin{equation*}
\begin{split}
\Big|\insieme{|Tf_{1}|>\frac{\alpha}{2}}\Big|&\leq C^{q_1}_{1}
\big(\frac{2}{\alpha} \big)^{q_1} \|f_1\|_{p_1}^{q_1}\\ & \leq
C^{q_1}_1 \big( \frac{2}{\alpha} \big )^{q_1}
\Big(\frac{p_1}{p-p_1}\Big)^{q_1/p_1}\gamma ^{q_1-
pq_1/p_1}\|f\|_{p,\infty,Q}^{pq_1/p_1}\\ &= B_1 \alpha^{-q_1}\gamma^{q_1-pq_1/p_1}
\end{split}
\end{equation*}
and similarly
\begin{equation}
\Big|\insieme{|Tf_2|\geq \frac{\alpha}{2}}\Big|\leq B_2\alpha^{- q_2} \gamma^{q_2-pq_2/p_2}.
\end{equation}
Now
\begin{equation*}
\|Tf\|_{q,\infty}^{q}= \sup_{\alpha}\alpha^q |\insieme{|Tf|>\alpha}|
\end{equation*}
and now using the triangle inequality, we have
\begin{equation*}
\begin{split}
\alpha^q|\insieme{|Tf|>\alpha}|&\leq \alpha^q|\insieme{|Tf_1|>\alpha/2}| +\alpha^q|\insieme{|Tf_2|>\alpha/2}|
\\ &\leq B_1 \alpha^{q-q_1}\gamma^{q_1-pq_1/p_1}+ B_2\alpha^{q- q_2} \gamma^{q_2-pq_2/p_2}.
\end{split}
\end{equation*}
One can achieve the desired result by choosing $\gamma=\alpha^\beta$ where
$\beta= \big( \frac{q}{q_1} -\frac{q}{q_2}\big) \big(\frac{p}{p_1} -\frac{p}{p_2}\big)^{-1}$.
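As a quick sanity check on this choice of $\beta$ (an elementary verification, using only \eqref{eq:09131347499984}): with $\gamma=\alpha^{\beta}$, the first term, multiplied by $\alpha^{q}$, has exponent $q-q_1+\beta q_1\big(1-\frac{p}{p_1}\big)$, which vanishes exactly when
\begin{equation*}
\beta=\frac{q/q_1-1}{p/p_1-1}.
\end{equation*}
By \eqref{eq:09131347499984} one has $q/q_1-1=q(1-\theta)\big(\frac{1}{q_1}-\frac{1}{q_2}\big)$ and $p/p_1-1=p(1-\theta)\big(\frac{1}{p_1}-\frac{1}{p_2}\big)$, so that
\begin{equation*}
\frac{q/q_1-1}{p/p_1-1}=\Big( \frac{q}{q_1} -\frac{q}{q_2}\Big) \Big(\frac{p}{p_1} -\frac{p}{p_2}\Big)^{-1}=\beta.
\end{equation*}
The same computation, with $\theta$ in place of $1-\theta$, shows that the exponent coming from the second term vanishes for the same $\beta$, so both terms are bounded uniformly in $\alpha$.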
\end{proof}
\begin{theorem}[Marcinkiewicz interpolation theorem]
Let $0 < p_0, p_1, q_0, q_1 \leq \infty$ and $0 < \theta < 1$ be such that
$q_0 \neq q_1$, and $p_i \leq q_i$ for $i=0,1$; define $p_\theta,q_\theta$ by
$\frac{1}{p_\theta}=\frac{1-\theta}{p_0}+\frac{\theta}{p_1}$ and
$\frac{1}{q_\theta}=\frac{1-\theta}{q_0}+\frac{\theta}{q_1}$. Let $T$ be a sublinear
operator which is of weak type $(p_0,q_0)$ and of weak type $(p_1,q_1)$.
Then $T$ is of strong type $(p_\theta,q_\theta)$.
\end{theorem}
\begin{proof}
The proof is well-known.
\end{proof}
\begin{remark}
\label{rmk:norma-debole-K}
Let $K:\mathbb{T}^{d}_{N}\times \mathbb{T}^{d}_{N}\to \mathbb{R}^{d\times m}$ be such that
$|K(x,y)|\leq |x-y|^{2-d}$. Then one has that
\begin{equation*}
\begin{split}
\|K(x,\cdot)\|_{L^{\frac{d}{d-2}},\infty}\leq 1
,\qquad\text{and}\qquad \|K(x,\cdot)\|_{L^{\frac{d}{d-2}},Q,\infty}\leq 1.
\end{split}
\end{equation*}
Indeed, fix $t>0$ then
\begin{equation*}
\begin{split}
|\insieme{y:\ |K(x,y)| > t}|\leq |\insieme{y: \ |x-y |^{2-d} >t}| =
|\insieme{y:\ |x-y|< t^{-\frac{1}{d-2}}} |\lesssim t^{-\frac{d}{d-2}}.
\end{split}
\end{equation*}
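From the displayed measure bound the weak-norm estimates follow immediately: recalling that $\|g\|_{L^{r},\infty}=\sup_{t>0} t\,|\insieme{|g|>t}|^{1/r}$, one has, with $r=\frac{d}{d-2}$,
\begin{equation*}
t\,|\insieme{y:\ |K(x,y)|>t}|^{\frac{d-2}{d}}\lesssim t\cdot \big(t^{-\frac{d}{d-2}}\big)^{\frac{d-2}{d}}=1.
\end{equation*}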
\end{remark}
Let us recall the celebrated Hardy-Littlewood maximal theorem:
\begin{thm}
\label{thm:hardy-littlewood-maximal}
Let $1< p\leq \infty$ and let $f:\mathbb{T}^{d}_{N}\to \mathbb{R}^m$. Then
\begin{equation*}
\begin{split}
|\Mcal f|_{p}\lesssim |f|_{p}.
\end{split}
\end{equation*}
\end{thm}
\begin{theorem}[Fefferman-Stein]
\label{thm:feffermanstein}
Let $Q$ be a cube and let $f:Q\to \mathbb{R}^m$ be such that $\sum_{Q} f = 0$. Then there
exist constants $C_1,C_2$ such that
\begin{equation*}
\begin{split}
\|\Mcal f\|_{p,Q}\leq C_{1} \|f^{\#}\|_{p,Q} \qquad \text{and} \qquad
\|f^{\#}\|_{p,Q}\leq C_2\|\Mcal f\|_{p,Q}.
\end{split}
\end{equation*}
\end{theorem}
\begin{proof}
The proof follows from the classical Fefferman--Stein result after one does a piecewise linear interpolation of the function $f:Q\to \mathbb{R}^{m}$.
\end{proof}
\begin{corollary}
Let $T$ be a linear operator such that the map $f\mapsto (Tf)^{\#}$ is of weak type $(p,p)$ and of weak type $(\infty,\infty)$.
Then for every $q > p$, there exists a constant $C:=C(q)$ such that for every $f:Q\to \mathbb{R}^{m}$ it holds
\begin{equation*}
\begin{split}
\sum_{x \in Q} |Tf(x) |^{q} \leq C \sum_{x\in Q} |f(x)|^{q}.
\end{split}
\end{equation*}
\end{corollary}
\begin{proof}
The map $f\mapsto (Tf)^\#$ is sublinear, of weak type $(p,p)$, and bounded from
$\mathrm{L}^{\infty}(\mathcal{X})$ to $\mathrm{L}^{\infty}(\mathcal{X})$, i.e., of weak type $(\infty,\infty)$.
By the Marcinkiewicz interpolation theorem, for every $q> p$ the map $f\mapsto (Tf)^\#$ is bounded on $\mathrm{L}^{q}$.
This implies that $f\mapsto \Mcal(Tf)$ is bounded on $\mathrm{L}^{q}$ by Theorem~\ref{thm:feffermanstein},
and hence $f\mapsto Tf$ is bounded on $\mathrm{L}^{q}$.
\end{proof}
In the next lemma $A=A_0$ is a constant positive definite operator.
Let us now recall a classical result. We also provide a proof for completeness.
\begin{lemma}[{\cite[Lemma V.3.1]{MR717034} }]
\label{lemma_iteration1}
Assume that $\phi(\rho)$ is a non-negative, real-valued, bounded function defined on an interval $[r,R] \subset \mathbb{R}^+$. Assume further that for all $r \leq \rho < \sigma \leq R$ we have
\begin{equation*}
\phi(\rho) \leq \big[ A_1 (\sigma - \rho)^{-\alpha_1} + A_2 (\sigma - \rho)^{- \alpha_2} + A_3 \big] + \vartheta \phi(\sigma)
\end{equation*}
for some non-negative constants $A_1, A_2, A_3$, non-negative exponents $\alpha_1\geq \alpha_2$, and a parameter $\vartheta \in [0,1)$. Then we have
\begin{equation*}
\phi(r) \leq c(\alpha_1,\vartheta) \big[ A_1 (R - r)^{-\alpha_1} + A_2 (R - r)^{- \alpha_2} + A_3 \big] \,.
\end{equation*}
\begin{proof}
We proceed by iteration and start by defining a sequence $(\rho_i)_{i \in \mathbb{N}_0}$ via
\begin{equation*}
\rho_i := r + (1-\lambda^i) (R - r)
\end{equation*}
for some $\lambda \in (0,1)$. This sequence is increasing, converging to $R$, and the difference of two subsequent members is given by
\begin{equation*}
\rho_{i} - \rho_{i-1} = (1 - \lambda) \lambda^{i-1} (R - r) \,.
\end{equation*}
Applying the assumption inductively with $\rho=\rho_{i-1}$, $\sigma = \rho_{i}$ and taking into account $\alpha_1 \geq \alpha_2$, we obtain
\begin{align*}
\phi(r) & \leq A_1 (1-\lambda)^{-\alpha_1} (R - r)^{-\alpha_1} + A_2 (1-\lambda)^{-\alpha_2} (R - r)^{-\alpha_2} + A_3 + \vartheta \phi(\rho_1) \\
& \leq \vartheta^k \phi(\rho_k) + (1-\lambda)^{- \alpha_1} \sum_{i=0}^{k-1} \vartheta^{i} \lambda^{-i \alpha_1} \big[ A_1 (R - r)^{-\alpha_1} + A_2 (R - r)^{- \alpha_2} + A_3 \big]
\end{align*}
for every $k \in \mathbb{N}$. If we now choose $\lambda$ in dependency of $\vartheta$ and $\alpha_1$ such that $\vartheta \lambda^{-\alpha_1} < 1$, then the series on the right-hand side converges. Therefore, passing to the limit $k \to \infty$, we arrive at the conclusion with constant $c(\alpha_1,\vartheta)= (1-\lambda)^{- \alpha_1} (1- \vartheta \lambda^{-\alpha_1})^{-1}$.
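A concrete admissible choice (assuming $\alpha_1>0$ and $\vartheta\in(0,1)$) is $\lambda=\vartheta^{1/(2\alpha_1)}\in(0,1)$: then $\vartheta\lambda^{-\alpha_1}=\vartheta^{1/2}<1$, and the constant above becomes
\begin{equation*}
c(\alpha_1,\vartheta)= \big(1-\vartheta^{1/(2\alpha_1)}\big)^{-\alpha_1}\big(1-\vartheta^{1/2}\big)^{-1}.
\end{equation*}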
\end{proof}
\end{lemma}
\begin{lemma}
Let $u$ be a solution to
\begin{equation}
\label{eq:main}
\begin{split}
\begin{cases}
\Acal_0 u = \diver f, &\text{in }Q_{M},\\
u =0\, &\text{in } \mathbb{T}^{d}_{N}\setminus \bar Q_{M}.
\end{cases}
\end{split}
\end{equation}
The map $f\mapsto \nabla u$ is a continuous map from $\mathrm{L}^{\infty}$ to $\mathrm{BMO}$.
\end{lemma}
\begin{proof}
Let $m\leq[M/2]$ and let $u_1$ be such that
\begin{equation*}
\begin{split}
\begin{cases}
\diver (A \nabla u_1) = \diver f &\text{in } Q_{M}\\
u_1 =0 &\text{in }\mathbb{T}^{d}_N\setminus \bar Q_M
\end{cases}
\end{split}
\end{equation*}
and $u_0=u-u_1$. Notice that $\diver (A \nabla u_0)=0$ in $Q_M$.
We have
\begin{equation*}
\begin{split}
\sum_{Q_M} |\nabla u_1|^2\lesssim\sum_{Q_M} A \nabla u_1\cdot \nabla u_1 \leq\Big|\sum_{Q_M}
f\cdot \nabla u_1\Big| \leq |f|_{\infty}M^{d/2} \left( \sum_{Q_{M}} |\nabla u_1|^2\right)^{1/2}
\end{split}
\end{equation*}
from which we have that
\begin{equation*}
\begin{split}
\sum_{Q_M} |\nabla u_1|^{2} \lesssim M^{d} |f|^{2}_{\infty}.
\end{split}
\end{equation*}
From Lemma~\ref{lemma:consequences_caccippoli} we have that
\begin{equation*}
\begin{split}
\sum_{Q_m}| \nabla u_0 -(\nabla u_0)_{m}|^{2} \lesssim \left( \frac{m}{M}
\right)^{d+2} \sum_{Q_M} |\nabla u_0 - (\nabla u_0)_{M}|^2
\end{split}
\end{equation*}
it follows that
\begin{equation*}
\begin{split}
\sum_{Q_m} |\nabla u - (\nabla u)_m|^{2} \lesssim \left( \frac{m}{M}\right)^{d+2}
\sum_{Q_M}|\nabla u -(\nabla u)_{M}|^2 + \sum_{Q_m} |\nabla u_1|^{2}\lesssim \left(
\frac{m}{M}\right) ^{d+2}\sum_{Q_M}|\nabla u -(\nabla u)_{M}|^2 + M^{d} |f|_{\infty}^2
\end{split}
\end{equation*}
Finally using Lemma~\ref{lemma_iteration1} we have the desired result.
\end{proof}
From now on $A=A(x)$, namely it depends on the space variable.
The next lemma is an adaptation of \cite[Lemma~2]{MR1354111} to
the discrete case. The original proof is based on an argument
in \cite{morrey}. We will rather use an argument based on Theorem~\ref{thm:feffermanstein}.
In the continuous case, the analog version of the next lemma can be found in~\cite[Lemma~2]{MR1354111}.
\begin{lemma} [Global estimate]
\label{lemma:dolzman-mueller-lemma-2} Let $p \in (1, \infty)$ and $q\in (1,d)$.
\begin{enumerate}
\item If $f: \mathbb{T}^d_N\to \mathbb{R}^{md}$,
$g:\mathbb{T}^d_N\to \mathbb{R}^m$ and let $u$ be the solution of
\begin{equation*}
\begin{cases}
-\diver (A\nabla u) = \diver f + g& \text {in } Q_M\\
u = 0 &\text{in } \mathbb{T}^d_N \setminus\bar Q_M
\end{cases}
\end{equation*}
Then if
\begin{equation*}
\begin{split}
s=\min(p,q^*), \qquad q^{*}= \frac{dq}{d-q}
\end{split}
\end{equation*}
we have
\begin{equation*}
\begin{split}
\left(\sum_{Q_{M}} | \nabla u|^{s}\right)^{1/s} \lesssim \left(
\sum_{Q_M} | f|^{p}\right)^{1/p} + \left( \sum_{Q_M} |M g|^{q} \right)^{1/q}
\end{split}
\end{equation*}
\item and
\begin{equation*}
\|u\|_{s^*,\infty} + \|\nabla u\|_{s,\infty}\leq C \left(
\|f\|_{p,\infty,Q_M} + \|g\|_{q,\infty,Q_M}\right)
\end{equation*}
\end{enumerate}
\end{lemma}
\begin{proof}
Let $x_0$ be the center of the cube $Q_M$. For simplicity of notation we
will denote by $A_0:=A(x_0)$.
With simple algebraic manipulations we have
\begin{equation*}
\begin{split}
\diver(A_0\nabla u)= \diver (f + (A_0-A)\nabla u )
\end{split}
\end{equation*}
Let $\eta$ be a cut-off function such that $\eta\equiv 0$ in $\mathbb{T}^d_N\setminus \bar Q_M$. Then we have
\begin{equation*}
\begin{split}
\diver ( A_{0} \nabla (u\eta)) = \diver \left( (A_0-A) \nabla (u\eta) \right) + G +\diver F
\end{split}
\end{equation*}
where $G= g\eta + fD\eta + A(x)\nabla u D\eta$ and $F= f\eta + A(x)u D\eta$.
Let $w$ be defined as
\begin{equation*}
\begin{split}
\begin{cases}
\diver ( \nabla w ) = - G & \text{in } Q_M\\
w=0 & \text{in }\mathbb{T}^d_{N}\setminus \bar Q_M
\end{cases}
\end{split}
\end{equation*}
Hence, from the constant coefficient case one has that
\begin{equation*}
\begin{split}
\left(\sum_{Q_M}\|M \nabla w\|^{r^*} \right) ^{1/r^{*}} \lesssim \left(\sum_{Q_M} \|G\|^{r}\right)^{\frac{1}{r}}
\end{split}
\end{equation*}
Denoting with $\tilde{F}=F +\nabla w$ we have that
\begin{equation*}
\begin{split}
\diver (A_{0}\nabla (u\eta)) = \diver
\left((A-A_0)\nabla (u\eta)\right) + \diver \tilde{F} \qquad
\text{in } Q_M.
\end{split}
\end{equation*}
We will now make a fixed point argument. Fix $V$ and consider the
linear operator
$T:V\mapsto v$ where $v$ is the solution of
\begin{equation*}
\begin{split}
\diver (A_0 \nabla v) = \diver \left((A-A_0)\nabla V\right) +
\diver \tilde{F}
\end{split}
\end{equation*}
The operator $T$ satisfies
\begin{equation*}
\begin{split}
\sum_{x\in Q_{M}} | \nabla (TV_1 -TV_2) | ^{s}\leq c \sup_{x\in
Q_{M}} |A(x) -A(x_{0}) |^{s} \sum_{x\in Q_{M}} |\nabla
V_{1}(x) - \nabla V_{2}(x) |^{s}
\end{split}
\end{equation*}
If
\begin{equation}
\label{eq:cond-10171382046429}
\begin{split}
\sup_{x\in Q_{M}} |A(x)-A_{0} |\leq \frac{1}{2}A(x_0)
\end{split}
\end{equation}
one can apply
the fixed point theorem and deduce that the solution coincides with
$u\eta $, and that
\begin{equation*}
\begin{split}
\left(\sum_{Q_M}|(M \nabla)u|^{s}\right) ^{1/s} \leq C \left( \sum_{Q_M}|\tilde{F}|^{s} \right)^{1/s}.
\end{split}
\end{equation*}
Finally the condition \eqref{eq:cond-10171382046429} is ensured by \eqref{eq:cond-A_N}.
\end{proof}
For the continuous version of the following lemma see \cite[Lemma~4]{MR1354111}
\begin{lemma}
\label{lemma:dolzmann_mueller_lemma_3}
Let $q\in (1,d)$ and $p>d$. Let
\begin{equation*}
T= \|\nabla u\|_{\mathrm{L}^{q,\infty}(Q_{2M})} + \|u\|_{\mathrm{L}^{q^*,\infty}(Q_{2M})}.
\end{equation*}
Suppose that $u$ satisfies
\begin{equation*}
-\diver (A \nabla u )= \diver f \qquad \text{in } Q_{2M}
\end{equation*}
Then there exists $m_{0}:=m_{0}(p,q)$ such that if $M> m_{0}$ then
\begin{equation*}
\sup_{Q_m} |u| \lesssim M^{- \frac{d}{q}}T +M^{1-\frac{d}{p}} \|f\|_{\mathrm{L}^p},
\end{equation*}
where $m = \big[M/d \big]$.
\end{lemma}
\begin{proof}
Let $\delta\in \mathbb{N}$ be such that $\delta\leq M$. Set
$\kappa = \lfloor\frac{M}{\delta}\rfloor$ and let $\varphi$ be such that
$\varphi \equiv 1$ in $Q_M$,
$\varphi \equiv 0$ in $\mathbb{T}^d_N\setminus \bar Q _{M+\delta}$, and such that
$|\nabla \varphi|\leq \frac{1}{\delta}$.
Then for every $p_1>0$ one has that
\begin{equation*}
\begin{split}
\left(\frac{1}{|Q_M|}\sum_{Q_M}|\nabla u|^{p_1}\right)^{\frac{1}{p_1}}
\leq \left(\frac{|Q_{M+\delta}|}{|Q_{M}|}\right)^{1/p_1}\left(\frac{1}{|Q_{M+\delta}|}
\sum_{Q_{M+\delta}} |\nabla (\varphi u)|^{p_1} \right)^{\frac{1}{p_1}}
\end{split}
\end{equation*}
With simple calculations one has that
\begin{equation}
\label{eq:01141358159624}
\begin{split}
\diver (A \nabla (\varphi u))= \sum_{i,j} \nabla^{*}_{j}\left( \varphi(x) A_{i,j}(x)
\nabla_{i} u + A_{i,j}(x)\nabla_i\varphi \otimes u(x+e_j)\right)\\
=\sum_{j}\nabla_{j}^{*} (\varphi f_j) + \sum_{i,j}
A_{i,j}(x)\left(\nabla_j u(x)-f_j(x)\right) \nabla_{i} \varphi(x) + \sum_{i,j}
\nabla _j^{*}\left( A_{i,j} \nabla_i \varphi \otimes u(x+e_i)\right)
\end{split}
\end{equation}
Denote by
\begin{equation*}
\begin{split}
\tilde f_j&:= \varphi f_j +\sum_{i} A_{i,j} \nabla_i\varphi(x) \otimes u(x+e_i)\\
g&:= \sum_{i,j}A_{i,j}(\nabla_j u - f_j) \nabla_{i} \varphi(x)
\end{split}
\end{equation*}
Equation \eqref{eq:01141358159624} can be rewritten as
\begin{equation*}
\begin{split}
\diver (A\nabla(\varphi u)) = \diver \tilde{f} + g
\end{split}
\end{equation*}
Let $t\in(1,d)$ and $s=\min(p,t^{*})$. One has that
\begin{equation*}
\begin{split}
\bigg(
\frac{1}{(M+\delta)^{d}}\sum_{Q_{M+\delta}}\|\tilde f\|^{s}\bigg)^{1/s}&\leq \bigg(
\frac{1}{(M+\delta)^{d}}\sum_{Q_{M+\delta}}|\varphi f| ^{p}
\bigg)^{1/p} \!\!+ \sum_{i,j} \bigg(
\frac{1}{(M+\delta)^{d}}\sum_{Q_{M+\delta}}A_{i,j} |\nabla_i
\varphi|^{t^*} |u|^{t^{*}} \bigg)^{1/t^{*}} \\ &\lesssim \bigg(
\frac{1}{(M+\delta)^{d}}\sum_{Q_{M+\delta}}|\varphi f| ^{p}
\bigg)^{1/p} +
\bigg(
\frac{1}{(M+\delta)^{d}}\sum_{Q_{M+\delta}}|u|^{t^{*}} \bigg)^{1/t^{*}}
\end{split}
\end{equation*}
Using the Sobolev inequality, the last term in the previous equation can be bounded by
\begin{equation*}
\begin{split}
\left(\frac{1}{(M+\delta)^{d}}\sum_{Q_{M+\delta}}
|u|^{t^{*}}\right)^{\frac{1}{t^{*}}} \leq
\left[\left(\frac{1}{(M+\delta)^d}\sum_{Q_{M + \delta }}
|u|^{t}\right)^{1/t}\!\! + \left(\frac{1}{(M+\delta)^{d}}\sum_{Q_{M+\delta}}|(M+\delta)\nabla u|^{t}\right)^{1/t}\right]
\end{split}
\end{equation*}
In a similar way one has
\begin{equation*}
\begin{split}
\left(\frac{1}{( M +\delta )^{d}}\sum _{Q_{M+\delta}}
|g|^{t}\right)^{1/t} \lesssim \big( \sup_{i,j}|A_{i,j}| \big) \frac{1}{\delta}
\left( \frac{1}{(M+\delta)^{d}}\sum_{Q_{M+\delta}} |\nabla u|^{t}
\right)^{1/t} \\ + \sup |A_{i,j}|\frac{1}{\delta} \left(
\frac{1}{(M+\delta)^{d}}\sum _{Q_{M+\delta}}| f_j|^{p}\right)^{1/p}
\end{split}
\end{equation*}
Putting together all the previous inequalities and using
Lemma~\ref{lemma:dolzman-mueller-lemma-2}, one has that
\begin{equation*}
\begin{split}
\left(\frac{1}{M^d}\sum_{Q_M} \|\nabla u\|^{s}\right)^{\frac{1}{s}}
\lesssim \left(\frac{1}{(M+\delta)^d}\sum_{Q_{M + \delta }}
|u|^{t}\right)^{1/t} &+ \left(\frac{1}{(M+\delta)^{d}}\sum_{Q_{M+\delta}}|(M+\delta)\nabla u|^{t}\right)^{1/t}
\\&+ \frac{M+\delta}{\delta}\left(\frac{1}{(M+\delta)^d}\sum_{Q_{M+\delta}} |f|^{p}\right)^{\frac{1}{p}}.
\end{split}
\end{equation*}
Applying the previous reasoning $\kappa$ times, we have that
\begin{equation*}
\begin{split}
\left(\frac{1}{M^d}\sum_{Q_M} \|\nabla u\|^{t_\kappa}\right)^{\frac{1}{t_\kappa}} \leq C_{\kappa}
\left(\frac{1}{(M+\kappa\delta)^d}\sum_{Q_{M + \kappa\delta }}
|u|^{t}\right)^{1/t} &+ C_{\kappa}\left(\frac{1}{(M+\kappa\delta)^d}\sum_{Q_{M+\kappa\delta}}|(M+\kappa\delta)\nabla u|^{t}\right)^{1/t}
\\&+ C_{\kappa}\left(\frac{1}{(M+\kappa\delta)^d}\sum_{Q_{M+\kappa\delta}} |f|^{p}\right)^{\frac{1}{p}},
\end{split}
\end{equation*}
where $t_\kappa$ is given by the recursive equation
$t_{j}= \min(p,t_{j-1}^{*})$ and $t_1=t$. It can be easily seen that for
every $t>1$, it holds that $t_j\geq d$ for some $j$ which depends only on
$p$ and $q$.
\end{proof}
\begin{proposition}
\label{prop:stima-green}
Let $C(x,y)$ be the Green function, i.e.,\xspace for every $x\in \mathbb{T}^{d}_{N}$ one has
\begin{equation*}
\begin{split}
\diver(A\nabla C(x,\cdot ))=\delta _{x}
\end{split}
\end{equation*}
where $A$ satisfies the usual conditions.
Then
\begin{equation*}
\begin{split}
|\nabla^{\alpha}C(x,y)|\lesssim |x-y|^{2-d-|\alpha|}.
\end{split}
\end{equation*}
\end{proposition}
\begin{proof}
Let $K$ be the solution of
\begin{equation*}
\begin{split}
\diver ( \nabla K) = \delta_x.
\end{split}
\end{equation*}
It is well-known that the following estimates hold
\begin{equation*}
\begin{split}
|(\nabla^{\alpha} K)(x-y)|\lesssim |x-y|^{2-d-|\alpha|}.
\end{split}
\end{equation*}
From Remark~\ref{rmk:norma-debole-K} we have that $\|(\nabla^{\alpha}K)(x-\cdot)\|_{\frac{d}{d+|\alpha| -2},\infty}\leq C_{ d,\alpha}$ where
$C_{ d,\alpha }$ is a constant depending only on the dimension $d$ and the multi-index $\alpha$.
Let us denote with $u(y)=C(x,y)$. Then from the definitions of $K$ and $C$
one has that
\begin{equation*}
\begin{split}
\diver(A\nabla u) = \diver (\nabla K(x-\cdot ))
\end{split}
\end{equation*}
Let $|x-y|=R$. Without loss of generality we may assume that $M>2m_{0}$,
where $m_{0}$ is the constant in Lemma~\ref{lemma:dolzmann_mueller_lemma_3}. Let $M= [\frac{R}{2}]$ and let $Q_M$ be a cube such that
$y\in Q_M$ and $x\not\in Q_{ 2M }$. Given that $\Acal C(x,\cdot)=0 $ in
$Q_{2M}$, using Lemma~\ref{lemma:dolzmann_mueller_lemma_3} we have that
\begin{equation*}
\begin{split}
C(x,y)\lesssim M^{2-d} C_{d}\leq |x-y|^{2-d} C_d.
\end{split}
\end{equation*}
Higher derivatives follow in a similar way.
For example to estimate $\nabla _{i}u$ it is enough to consider the equation
\begin{equation*}
\begin{split}
\diver (A \nabla\nabla_{i} u) = \diver (\nabla \nabla_{i}K(x-\cdot)
)- \diver ((\nabla_{i} A) \nabla u),
\end{split}
\end{equation*}
and apply the above reasoning; hence, using the global estimate, one obtains the corresponding bound for $|\nabla \nabla_{i} u |$.
\end{proof}
\begin{proposition}
\label{proposition:pre_kryekryesorja}
Let $Q_1,\dots,Q_k$ be cubes of length $l_1,\dots,l_k$ respectively such
that $y\in Q_i$. Then there exists a dimensional constant $C_{d,j}$ such that
\begin{equation}
\label{eq:01071357515050x}
\begin{split}
\sup |\nabla ^{j }u|\leq 2^{k} C_{d,j} \max\left( |x-y|,\dist(x,\mathbb{T}^{d}_{N}\setminus Q_1),\ldots,\dist(x,\mathbb{T}^{d}_{N}\setminus Q_k) \right)^{2-d-j},
\end{split}
\end{equation}
where $u= (P_{Q_1}\cdots P_{Q_k} C(x,\cdot))$ and $C(x,y)$ is the Green's
function.
\end{proposition}
\begin{proof}
Let $Q_{1}$ be a cube of size $l_1$ in generic position.
Given that $\diver(A\nabla C_{x}(y))=0$, if $x\not\in \bar{Q}_1$ then
$\Pi_{Q_1}C(x,y)=0$, thus $P_{Q_1}C(x,y)=C(x,y)$,
hence the inequality follows from Proposition~\ref{prop:stima-green}.
Let $\varepsilon:=\dist(y, \bar{ Q }^{C}_1) <l_1 $.
If $|x-y|>\varepsilon/2$, then by estimating the different terms $\Pi_{Q_1}C(x,y)$ and $C(x,y)$ separately one has the desired result.
Indeed, it is immediate that $C(x,y)\lesssim |x-y |^{2-d}$.
On the other side, it is not difficult to see that there exists a cube of size $\varepsilon$ touching the boundary which does not contain $x$ and whose double does not contain $x$.
Then by using Lemma~\ref{lemma:dolz-mue-orig-3}, one has that
\begin{equation*}
\begin{split}
|\Pi _{Q_{1}} C(x,y)|\lesssim |x-y |^{2-d} M,
\end{split}
\end{equation*}
{where}
\begin{equation*}
\begin{split}
M=\|D \Pi _{Q_1} C_{x} \|_{L^{d/(d-2),\infty }(Q_{1})} + \|\Pi _{Q_1} C_{x} \|_{L^{d/(d-1),\infty }(Q_{1})}.
\end{split}
\end{equation*}
Then by using Lemma~\ref{lemma:dolzman-mueller-lemma-2} one has that
\begin{equation*}
\begin{split}
\|D\Pi _{Q_{1}} C_x \|_{L^{d/(d-2),\infty}} +
\|\Pi _{Q_{1}} C_x \|_{L^{d/(d-1),\infty}} \lesssim
\|D C_x\|_{L^{d/(d-2),\infty}} +
\|C_x\|_{L^{d/(d-1),\infty}}.
\end{split}
\end{equation*}
Suppose that $|x-y |\leq \varepsilon /2$. Then one can find a cube of
size $\lfloor \varepsilon /2\rfloor$ such that double the cube is contained
in $Q_{1}$. Finally by using Lemma~\ref{lemma:dolzmann_mueller_lemma_3} we
have the desired result.
Let us now prove the inductive step. Let $Q_{1},\ldots,Q_{k}$ be $k$ cubes
centered at $0$.
If the maximum in the right hand side of \eqref{eq:01071357515050x} is
$|x-y|$ or $\dist(x,\mathbb{T}^{d}_{N}\setminus Q_1)$, then the same reasoning as above applies.
For simplicity let us suppose that
\begin{equation*}
\max\left(|x-y|,\dist(x,\mathbb{T}^{d}_{N}\setminus \bar Q_1),\dots,\dist(x,\mathbb{T}^d_N
\setminus \bar Q_k)\right)=\dist(x,\mathbb{T}^{d}_{N}\setminus \bar Q_1)=:\delta.
\end{equation*}
From the inductive hypothesis we know that
\begin{equation*}
\begin{split}
\sup |v|\lesssim \delta^{2-d}\qquad
\sup |\nabla ^{\alpha}v|\lesssim \delta^{2-d -|\alpha|},
\end{split}
\end{equation*}
where $v:=P_2\dots P_k C(x,\cdot)$. From the definition we have that
$u= v- \Pi_{Q_1}v$, hence $\sup |u| \leq \sup |v|+ \sup |\Pi_{Q_1} v|$.
Thus by using Lemma~\ref{lemma:dolzmann_mueller_lemma_3} and a very similar reasoning as above we have the desired result.
\end{proof}
Let $Q_1,\dots,Q_k$ be $k$ cubes with radii $l_1,\dots,l_k$ respectively and
let $\Ccal$ be the Green's function. From now on we fix $x$ and denote with
$u(y):= (\Rcal_1 \cdots \Rcal_k \Ccal(x,\cdot))(y)$, where for simplicity we will use
$\Rcal_{i}=\Rcal_{Q_i}$.
The following simple calculation will be repeatedly used in the next theorem.
\begin{remark}
\label{rmk:2x}
Let $j> 1$ be an integer and $Q$ be a cube of size $l$. Then
\begin{equation*}
\begin{split}
\frac{1}{|Q|}\sum_{z\in Q} \max(\alpha, \dist(z,\mathbb{T}^d_N\setminus \bar Q))^{-j} \lesssim
\frac{\alpha^{1-j}}{l}
\end{split}
\end{equation*}
and if $j=1$ then
\begin{equation*}
\begin{split}
\frac{1}{|Q|}\sum_{z\in Q} \max(\alpha, \dist(z,\mathbb{T}^d_N\setminus \bar
Q))^{-j} \lesssim \frac{1+\log(l/\alpha)}{l} .
\end{split}
\end{equation*}
To prove the above estimates, it is enough to view them as a discretization of Lemma~\ref{lemma:2} and argue in a similar way.
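For the reader's convenience, here is the one-dimensional continuous computation behind the case $j>1$ (the discrete sum is compared with this integral): for $0<\alpha\leq l$,
\begin{equation*}
\frac{1}{l}\int_0^{l}\max(\alpha,t)^{-j}\,\mathrm{d}t = \frac{1}{l}\Big(\alpha\cdot\alpha^{-j} + \int_{\alpha}^{l} t^{-j}\,\mathrm{d}t\Big) = \frac{1}{l}\Big(\alpha^{1-j}+\frac{\alpha^{1-j}-l^{1-j}}{j-1}\Big)\lesssim \frac{\alpha^{1-j}}{l},
\end{equation*}
while for $j=1$ the middle integral produces the logarithmic factor $\log(l/\alpha)$.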
\end{remark}
\begin{theorem}
\label{thm:diskrete-kryekryesorja}
Let $C_k,Q_i,r_i$ be as above and such that $r_1<\dots<r_h<|x-y|<r_{h+1}<\dots<r_k$. Then
\begin{enumerate}
\item if $k -h< d-2$
\begin{equation*}
\begin{split}
|C_{k}(x,y)|&\lesssim \frac{1}{r_{h+1}\cdots r_k} |x-y|^{2-d+k -
h}\prod_{i=h+1}^{k}\left( \log\left({|x-y|}\right)+1\right)\\
|\nabla ^{j}_{y}C_{k}(x,y)| &\lesssim \frac{1}{r_{h+1}\cdots
r_k} |x-y|^{2-d+k -j -h}
\end{split}
\end{equation*}
\item if $k-h\geq d-2$
\begin{equation*}
\begin{split}
|C_{k}(x,y)|&\lesssim \frac{1}{ r_{k-d+3}\cdots r_k} \left|\log( |x-y| )\right|\\
|\nabla ^{j}_{y}C_{k}(x,y)| &\lesssim \frac{1}{ r_{k-d +2
-j}\cdots r_k} \prod_{i=h+1}^{k}\left( \log\left({|x-y|}\right)+1\right)\\
\end{split}
\end{equation*}
\end{enumerate}
\end{theorem}
\begin{proof}
We will only prove the first part of (i). The proof of the other parts is
similar.
Let us initially consider the case $k=1$.
For simplicity we denote $\Pi_z:= \Pi_{Q_1 +z}$. With simple
computations one has
\begin{equation*}
\begin{split}
\sup_{y} \left|u(y)\right| \leq \frac{1}{|Q_1|}\sum_{z\in Q_1+y}\sup_{y} |(\mathrm{Id} - \Pi_{z})u(y)|
\end{split}
\end{equation*}
Given that for every $z\in y+Q_1$ one has $\dist(y,z+Q_1)=r_1 - |z-y|$, it holds
\begin{equation*}
\begin{split}
\sup |(\mathrm{Id} - \Pi_{z})u| \leq
\begin{cases}
(r_1 - |z-y|)^{2-d}& \text{ if\ \ $
r_1 - |y-z|\geq |x-y|$} \\
|x-y|^{2-d}& \text{otherwise}
\end{cases},
\end{split}
\end{equation*}
The above can be reformulated as
$\sup |(\mathrm{Id} - \Pi_{z})u|\leq \max(|x-y|,\dist(z,\mathbb{T}^{d}_{N}\setminus \bar Q_1))^{2-d}$.
Hence using Remark~\ref{rmk:2x} one immediately has
\begin{equation*}
\begin{split}
\sup_{y} |u(y)| &\lesssim \frac{|x-y|^{3-d}}{r_1}.
\end{split}
\end{equation*}
Let us now turn to the general case $k<d -2$, and let $Q_1,\dots,Q_k$ be
cubes of sizes $r_1,\dots,r_k$ centered at $0$. From
Proposition~\ref{proposition:pre_kryekryesorja} we have that
\begin{equation*}
\begin{split}
&\sup | P_{z_1+Q_1}\cdots P_{z_k+Q_k} C(x,\cdot)|\leq
\max\insieme{|x-y|,r_1 - |z_1-y|,\dots,r_k- |z_k-y|}^{2-d}\\
&\leq |x-y|^{2-d+k}\cdot\max\insieme{|x-y|,r_1-
|z_1-y|}^{-1}\cdots\max\insieme{|x-y|,r_k- |z_k-y|}^{-1}\\ &\hspace{10cm} =:g(z_1,\dots,z_k).
\end{split}
\end{equation*}
\begin{equation*}
\begin{split}
\sup \Rcal_1\cdots \Rcal_k C (x,\cdot )\leq
\frac{1}{|Q_1|}\sum_{Q_1}\cdots \frac{1}{|Q_k|}\sum_{Q_k} g(z_1,\ldots,z_k).
\end{split}
\end{equation*}
From Remark~\ref{rmk:2x} we have that
\begin{equation*}
\begin{split}
\frac{1}{|Q_1|}\sum_{Q_1}\cdots \frac{1}{|Q_k|}\sum_{Q_k} g(z_1,\ldots,z_k)
\lesssim \frac{1}{r_1\cdots r_k}|x-y|^{2-d+k}\prod_{i} (|\log (|x-y|)|
+1)
\end{split}
\end{equation*}
\end{proof}
A direct consequence is the following corollary:
\begin{corollary}
Suppose that $|x-y|>1$ and let $Q_1,\ldots,Q_k$ be such that $r_i=L^{i}$
with $L>1$. Then there exists $\eta (j,d)$ such that
\begin{equation*}
\begin{split}
|\nabla ^{j}C_k(x,y)| \lesssim \frac{L^{\eta(j,d)}}{L^{k(d-2 -j)}}.
\end{split}
\end{equation*}
\end{corollary}
\begin{thm}[Fixed $A$]
Let
\begin{equation}
\label{eq:01091357722313}
\begin{split}
\Ccal_{k}:=\Rcal_{1}\cdots \Rcal_k \Ccal\Rcal^{*}_{k}\cdots
\Rcal_1^{*} - \Rcal_{1}\cdots \Rcal_{ k +1} \Ccal\Rcal^{*}_{k+1}\cdots
\Rcal_1^{*}.
\end{split}
\end{equation}
Then
\begin{equation*}
\begin{split}
\sup_{y\in \mathbb{T}^{d}_{N}} |\nabla ^{\alpha}\Ccal_k(x,y) |\leq L^{\eta(d,|\alpha|)} L^{-(k-1)(d-2 + |\alpha|)}
\end{split}
\end{equation*}
\end{thm}
\begin{proof}
We will estimate the two terms on the right-hand side of \eqref{eq:01091357722313} separately.
Given that $\Rcal^{*}=\Acal \Rcal \Acal^{-1}$, and denoting
by $\Dcal_k=\Rcal_{1}\cdots \Rcal_k \Ccal\Rcal^{*}_{k}\cdots\Rcal_1^{*}$,
one has that
\begin{equation}
\begin{split}
\Dcal_k = \Rcal_1\cdots\Rcal_{k}\Rcal_{k}\cdots \Rcal_1 \Ccal.
\end{split}
\end{equation}
Applying Theorem~\ref{thm:diskrete-kryekryesorja}, we obtain that the supremum of
$\Dcal_k$ is bounded by
\begin{equation*}
\begin{split}
\prod_{j=1}^{d-2} L^{-(k-j)} \prod_{j=1}^{d-2}\log( L^{k-j} ) \lesssim
L^{-k(d-2)} L^{\eta(d)}.
\end{split}
\end{equation*}
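The exponent arithmetic here is elementary (a sketch; $\eta(d)$ absorbs all $k$-independent factors):
\begin{equation*}
\prod_{j=1}^{d-2} L^{-(k-j)} = L^{-k(d-2)+\sum_{j=1}^{d-2} j} = L^{-k(d-2)}\, L^{\frac{(d-2)(d-1)}{2}},
\end{equation*}
while the logarithmic factors contribute at most polynomially in $k$ and are therefore absorbed by the gap between $L^{-k(d-2)}$ and the factor $L^{-(k-1)(d-2)}$ appearing in the statement.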
\end{proof}
\section{Analytic dependence on $A$}
\label{sec:analytic-dependence}
The proof of the analyticity is based on a very elegant argument using complex analysis, which originally appeared in \cite{MR2995704}.
Because most of the arguments follow by trivial modifications, we will only sketch the main steps.
The main tool in proving the analytic dependence is the following pair of facts.
Given a holomorphic $f:D\to \mathbb{C}^{m\times m}$, where $D$ is the unit disk, and $M$ such that $\sup_{z\in D} \|f(z) \| \leq M $, one has that $ \|f^{(j)}(0) \| \leq j! M$, where $f^{(j)}$ is the $j$-th derivative. Moreover, if $g:D\to \mathbb{C}^{m\times m}$ is an additional holomorphic function and $\bar{M}$ is such that $ \sup_{z\in D} \|g(z) \| \leq \bar{M} $, then $\|h^{(j)} (0)\| \leq M \bar{M}j! $, where $h=fg^{*}$.
Fix $c_{0}>0$ and let $A = A_0 + z A_{1}$, where $A_{0} $ is symmetric and such that
\begin{equation*}
\begin{split}
\scalare{A_0(x) F, F}_{\mathbb{C}^{m\times d}} \geq c_0 \abs{F}^2, \qquad \text{and} \qquad \sup_{x\in \mathbb{T}^{d}_{N}}\| A_1(x) \| \leq \frac{c_0}{2}.
\end{split}
\end{equation*}
As in the previous sections we define
\begin{equation*}
\As := \nabla^* A \nabla.
\end{equation*}
This induces the sesquilinear form $\scalare{\varphi,\psi}_{A}= \scalare{\As \varphi , \psi} $. Notice that if $A$ is real and symmetric, then $\scalare {\cdot, \cdot}_A$ is a scalar product.
One then goes on and shows that $\Ts $, defined as usual, satisfies $\|\Ts_{A}\varphi \|_{A_{0}} \lesssim \|\varphi \|_{A_{0}} $. This fact, together with the complex version of the Lax--Milgram theorem, shows the existence of the bounded inverse $\Cs_{A}= \As^{-1}$. Finally, to conclude, one shows that $C_{A(z),k}$ is bounded for every $z$. Thus, by using the complex analysis facts recalled at the beginning of this section, one has the desired result.
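For completeness, the first of the two complex-analytic facts above is the classical Cauchy estimate: for $0<r<1$, the Cauchy integral formula gives
\begin{equation*}
f^{(j)}(0)=\frac{j!}{2\pi i}\oint_{|z|=r} \frac{f(z)}{z^{j+1}}\,\mathrm{d}z, \qquad\text{hence}\qquad \|f^{(j)}(0)\|\leq \frac{j!\,M}{r^{j}},
\end{equation*}
and letting $r\to 1$ yields $\|f^{(j)}(0)\|\leq j!\,M$. The second fact follows by applying the same estimate to the product $h=fg^{*}$, which is bounded by $M\bar{M}$ on $D$.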
\section{Introduction and motivation}
Atom interferometers are rapidly evolving, being used as new quantum sensors
for fundamental physics experiments and in several other applications
\cite{Varenna2013}. In gravitational physics, for example, they
enable precise measurements of gravity \cite{Peters1999,Gillot2014}, gravity
gradients \cite{McGuirk2002,Sorrentino2014}, gravity curvature
\cite{Rosi2015}, and of the Newtonian gravitational constant
\cite{Rosi2014}.
Important goals are the
increase of their sensitivity and the demonstration of interferometry with
atomic species other than alkali atoms, which are most commonly used. For some experiments, indeed, the possibility of choosing the atomic species with the right characteristics is crucial. In particular, for precision measurements there is a considerable interest in using
alkaline-earth or alkaline-earth-like atoms, such as Ca, Sr or Yb \cite{Riehle1991,Tarallo2011,Graham2013,Tarallo2014,Jamison2014,Hartwig2015}, that are already used for the
most advanced optical atomic clocks \cite{Hinkley2013,Ushijima2015,Bloom2014,Poli2013}.
Alkaline-earth atoms have several characteristics that make them particularly interesting in this context.
Firstly, their zero electronic angular momentum
in the $^1$S$_0$ ground state makes these atoms less sensitive to perturbation due to magnetic fields than alkali atoms.
Furthermore, they offer more flexibility thanks to the presence of both
dipole allowed transitions and narrow intercombination
transitions that can be used for efficient multiphoton Bragg diffraction
\cite{Giltner1995,Muller2008b,Altin2013} and for single-photon atom
interferometry schemes \cite{Yu2011,Graham2013}.
Finally, resonance transitions from the ground state are in the blue/near-UV (e.g., 461 nm for
Sr, 399 nm for Yb) resulting in a larger momentum transferred to the
atoms for the same diffraction
order compared to alkali atoms and hence in a correspondingly higher potential sensitivity of the interferometers.
Here, we demonstrate the first atom interferometer based on large
momentum transfer (LMT) Bragg diffraction in a fountain of alkaline-earth
atoms, namely strontium, and its use for the measurement of gravity acceleration.
\begin{figure}[b]\begin{center}
\includegraphics[width=0.47 \textwidth]{fig1.pdf}
\caption{(a) Simplified picture of the experimental apparatus. The \textsuperscript{88}Sr atoms
are cooled in a double-stage magneto-optical trap. The Bragg laser beams with frequencies $\omega_1$
and $\omega_2$ and orthogonal polarizations are sent vertically from the bottom
of the chamber, rotated by a $\lambda/4$ wave-plate and
retro-reflected by a mirror (M) installed on a
vibration isolation platform (VIP). (b) Scheme of the atom interferometer with separated arms corresponding to different momentum states under the effect of gravity.
Before the interferometric sequence the atoms are
velocity selected and launched by a sequence of $\pi$ pulses.
(c) Time of flight image ($T_{\textrm{tof}}=30$ ms) of the two interferometer arms split
by a 1\textsuperscript{st} order $\pi/2$ pulse. The spatial separation
after 30 ms is 600~$\mu$m.} \label{fig.apparatus}
\end{center}
\end{figure}
In addition to the general features of alkaline-earth atoms listed above, the \textsuperscript{88}Sr isotope that we use in this work has specific favorable characteristics: it has no nuclear spin so that in the ground state it is a scalar particle which is virtually insensitive to stray magnetic
fields,
and its small scattering length $a=-2a_0$ \cite{Ferrari2006,Martinez2008,Stein2010} results in reduced decoherence due to cold collisions.
This allows, for example, observation of extremely long-lived Bloch oscillations of \textsuperscript{88}Sr atoms in a vertical optical lattice \cite{Ferrari2006,Poli2011}.
On the other hand, since strontium has no hyperfine structure in the ground state,
the usual schemes based on Raman transitions cannot be employed to realize the
beam splitters and the mirrors for an interferometer.
In this work, we use Bragg diffraction which, acting only on the atom's external degrees of freedom, can split the atomic wavepacket into two momentum states separated by
$2n\hbar k$ (where $n$ is the Bragg diffraction order, and $k=
2\pi/\lambda$ is the wavevector of the Bragg laser light with a wavelength $\lambda$), while maintaining the same electronic state.
\section{Method and experimental setup}
In Fig.~\ref{fig.apparatus}(a), a schematic view of the
experimental apparatus is shown.
The beams at 461 nm for the Bragg transitions are produced by a home-made laser that is frequency-locked to the main cooling laser with a red detuning $\Delta$ which is set, for
different Bragg orders, in the 3--8~GHz range with respect to the
$^{1}S_{0}$--$^{1}P_{1}$ transition frequency. The output power is
about 200 mW and the emission linewidth is about 1 MHz. The
laser intensity is actively stabilized using an external single-pass
acousto-optical-modulator (AOM)
(see appendix for details on the noise spectrum).
The two Bragg beams, with frequencies $\omega_1$ and $\omega_2$,
are obtained using two separate AOMs and they are coupled with mutually orthogonal polarizations into a single-mode polarization-maintaining
fiber. They are
collimated at a $1/\mathrm{e}^2$ intensity radius of $r=2.5$ mm
and sent vertically upwards onto the atomic sample. The light is then retro-reflected by a $2"$ mirror
suspended on a vibration isolation platform (MinusK 25BM-4). A
quarter-wave plate is placed before the retro-reflection
mirror to rotate the polarization of the returning light by
$90^{\circ}$. This allows the beams to interfere with each other
to generate two travelling waves moving in opposite directions,
while the formation of standing waves by pairs of beams with the same
frequency is avoided. The difference between the beams' frequencies
$\delta_n=\omega_1-\omega_2$ is adjusted in order to have the upward moving lattice drive the Bragg
transitions,
which occur for $\delta_n= 4n \omega_r$ in the falling frame, where $\omega_r=\hbar k^2/2m = 2\pi \times
10.7$ kHz is the recoil frequency for strontium atoms. The lattice moving downward is Doppler shifted
out of resonance during most of the atoms' free-fall. Bragg pulses at the apogee
of the ballistic trajectories are avoided to prevent
double diffraction. The verticality of the beam is verified at
1~mrad by retro-reflecting it on a water surface.
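As a numerical cross-check of the resonance condition above, the recoil frequency and the first Bragg resonances can be recomputed from first principles (a minimal sketch; the physical constants are our inputs, not values from the text):

```python
import math

hbar = 1.054571817e-34       # reduced Planck constant [J s]
u = 1.66053906660e-27        # atomic mass unit [kg]

m_sr = 87.905612 * u         # mass of 88Sr [kg]
lam = 461e-9                 # Bragg laser wavelength [m]
k = 2 * math.pi / lam        # wavevector [1/m]

# recoil frequency omega_r = hbar k^2 / (2 m)
omega_r = hbar * k**2 / (2 * m_sr)
f_r_kHz = omega_r / (2 * math.pi) / 1e3
print(f"recoil frequency: 2 pi x {f_r_kHz:.1f} kHz")   # 2 pi x 10.7 kHz

# Bragg resonance condition delta_n = 4 n omega_r (in the falling frame)
for n in (1, 2, 3, 4):
    print(f"n = {n}: delta_n = 2 pi x {4 * n * f_r_kHz:.1f} kHz")
```

The first line reproduces the quoted $\omega_r = 2\pi\times10.7$ kHz for strontium.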
The residual vibrations and tilt coupled to the retro-reflecting mirror
are monitored by a triaxial accelerometer (Episensor
ES-T) and a precision tiltmeter (Applied Geomechanics Tuff Tilt
420) placed on top of the vibration-isolation platform
(see appendix for details on noise spectra).
The whole platform is enclosed in an acoustic
isolation box.
The two Bragg AOMs are driven by two radio-frequency (RF) generators
phase-locked to a 10 MHz reference signal provided by a Rb clock and the
pulses are shaped to have a Gaussian profile \cite{Muller2008}
using an additional signal generator that drives two
variable attenuators acting on both the RF Bragg signals. The
phase noise of the Bragg beams in this configuration was
characterized with a digital phase detector by comparing the beat
note of the two frequency components detected on a photodiode
(placed after the optical fiber) with a reference RF synthesizer
(see appendix).
With the available optical power on the atoms of $P=20$ mW per
beam, the typical optical intensity is $I=250$~mW/cm$^2$ and the
maximum two-photon Rabi frequency estimated for Gaussian pulses at
a detuning $\Delta=8$~GHz is $\Omega = 2\pi\times150$~kHz.
For different Bragg orders the detuning is adjusted to maintain a
high effective Rabi frequency
$\Omega_{\textrm{eff}}=\Omega^n/[(8\omega_r)^{n-1}(n-1)!^2]$ \cite{Giltner1995b}.
The pulse duration is kept larger than $n^{1/6}/[\omega_r(n-1)]$
to maintain the losses into other orders negligible \cite{Muller2008}
and thus guarantee high $\pi$-pulse efficiencies.
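The steep scaling of the effective Rabi frequency with the Bragg order can be tabulated numerically (a sketch; we assume the quoted maximum two-photon Rabi frequency $\Omega = 2\pi\times150$~kHz throughout and read the duration bound as $n^{1/6}/[\omega_r(n-1)]$):

```python
import math

omega_r = 2 * math.pi * 10.7e3      # 88Sr recoil frequency [rad/s]
Omega = 2 * math.pi * 150e3         # maximum two-photon Rabi frequency [rad/s]

def omega_eff(Omega, n, omega_r):
    """n-th order effective Bragg Rabi frequency (Giltner et al. formula)."""
    return Omega**n / ((8 * omega_r)**(n - 1) * math.factorial(n - 1)**2)

for n in (1, 2, 3, 4):
    f_eff_kHz = omega_eff(Omega, n, omega_r) / (2 * math.pi) / 1e3
    line = f"n = {n}: Omega_eff = 2 pi x {f_eff_kHz:.0f} kHz"
    if n >= 2:
        # minimum duration keeping losses into other diffraction orders small
        tau_min = n**(1 / 6) / (omega_r * (n - 1))
        line += f", tau_min ~ {tau_min * 1e6:.0f} us"
    print(line)
```

The rapid drop of $\Omega_{\textrm{eff}}$ at fixed $\Omega$ illustrates why the detuning must be reduced for higher orders.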
We set a typical effective Rabi frequency
$\Omega_{\textrm{eff}} = 2\pi\times80$~kHz, with a
$\pi$ pulse duration of $15$~$\mu$s full width at half-maximum (FWHM), corresponding
to a Fourier width larger than the atoms' momentum spread.
At a detuning $\Delta=2.8$ GHz and full
power, we obtain a diffraction efficiency of 50\% for the
4\textsuperscript{th} order.
The experimental sequence is the following: \textsuperscript{88}Sr atoms
from an atomic beam produced using a high-efficiency
oven \cite{Schioppo2012} are decelerated in a Zeeman slower and then
trapped and cooled in a two-stage magneto-optical trap (MOT). The
first ``blue'' MOT is realized using the strong
$^{1}S_{0}$--$^{1}P_{1}$ transition at 461~nm to reach a
temperature of 1~mK.
The atoms are then further cooled in a ``red'' MOT
operating on the narrow intercombination $^{1}S_{0}$--$^{3}P_{1}$
transition at 689 nm, reaching a final temperature of $1.2~\mu$K,
with a spatial radial (vertical) size of $300$ $\mu$m (50 $\mu$m) FWHM.
The sequence produces about $2\times10^6$ trapped atoms in $1.5$~s.
A small fraction of the
atoms ($\sim 10^5$) is selected from the MOT and launched upwards with
a sequence of Bragg $\pi$ pulses with a typical duration of 47~$\mu$s FWHM,
up to a total momentum transfer of 40~$\hbar k$.
Even though a single $\pi$ pulse would be
sufficient to isolate the selected atoms from the freely falling cloud after the release from the
red MOT, a larger number of pulses is applied to increase the
total time of flight up to 150~ms.
By means of Bragg spectroscopy \cite{Stenger1999} we estimate a
vertical momentum spread of $1.5~\hbar k$ FWHM for the red MOT,
and $0.2~\hbar k$ for the selected atomic sample, which allows high-fidelity
$\pi$ and $\pi/2$ pulses in the interferometer \cite{Szigeti2012}.
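The Fourier width of the $\pi$ pulses and the Doppler width of the selected sample can be compared directly (a sketch; the Gaussian time-bandwidth product of $\approx 0.44$ is our assumption):

```python
import math

f_r = 10.7e3                  # recoil frequency [Hz]
tau_fwhm = 15e-6              # pi-pulse duration FWHM [s]

# Fourier width of a Gaussian pulse (time-bandwidth product ~0.441 FWHM*FWHM)
df_pulse = 0.441 / tau_fwhm

# a momentum spread of dp (in units of hbar*k) maps to a two-photon
# Doppler frequency spread of dp * 4 * f_r
df_sample = 0.2 * 4 * f_r

print(f"pulse Fourier width:  {df_pulse / 1e3:.1f} kHz")
print(f"sample Doppler width: {df_sample / 1e3:.1f} kHz")
```

The pulse bandwidth (about 29 kHz) indeed exceeds the Doppler width of the $0.2~\hbar k$ sample (about 8.6 kHz), consistent with the statement above.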
Incidentally, in this work we also performed
preliminary tests of velocity selection and launch of Sr atoms in a fountain using
Bloch oscillations in an accelerated vertical optical lattice at 532 nm.
After the launch of the atoms in the fountain, a Mach-Zehnder
interferometer is realized by applying three Bragg pulses in a
$\pi/2$--$\pi$--$\pi/2$ configuration.
As shown in Fig.~\ref{fig.apparatus}(b), the first $\pi/2$ pulse coherently splits the atomic wavepacket over two paths separated by $2n\hbar k$.
Fig.~\ref{fig.apparatus}(c) shows an image of the atoms in the two arms of the interferometer
after 30 ms for a 1\textsuperscript{st} order
pulse. The spatial separation
between the two interferometer arms is 600~$\mu$m, which is
about twice as large as the separation induced by near-infrared light in alkali-atom interferometers.
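The quoted separation follows directly from the two-photon recoil velocity $2n\hbar k/m$; the sketch below reproduces it and, for comparison, evaluates the same quantity for \textsuperscript{87}Rb at 780 nm (the Rb numbers are our own illustrative inputs, not from the text):

```python
import math

h = 6.62607015e-34            # Planck constant [J s]
u = 1.66053906660e-27         # atomic mass unit [kg]
T_sep = 30e-3                 # time of flight after the pi/2 pulse [s]

def separation(mass_u, lam, n=1):
    """Arm separation after T_sep for a 2n*hbar*k momentum splitting."""
    v_rec = 2 * n * h / (mass_u * u * lam)   # recoil velocity 2n*hbar*k/m
    return v_rec * T_sep

d_Sr = separation(87.906, 461e-9)            # 88Sr, blue Bragg light
d_Rb = separation(86.909, 780e-9)            # 87Rb, near-infrared (assumed)
print(f"Sr: {d_Sr * 1e6:.0f} um, Rb: {d_Rb * 1e6:.0f} um, "
      f"ratio {d_Sr / d_Rb:.2f}")
```

The result, about 590 $\mu$m for Sr and a ratio of about 1.7 over Rb, matches the quoted figures.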
The two paths in the interferometer are recombined after a time $2T$.
The population in the two output ports is
detected by either absorption imaging or fluorescence collection about
40~ms after the last pulse is applied, when the two
momentum states are sufficiently separated in space.
The interferometer time $T$ is currently limited by the vertical
size of the vacuum chamber (10 cm) which limits the total
time of flight for the atoms in the fountain.
The number of atoms in the two outputs $N_{\left|p_{0}\right\rangle }$
and $N_{\left|p_{0}+2n\hbar k\right\rangle }$ is determined by fitting the detected signal with two Gaussian profiles.
They oscillate periodically as a function of the
relative phase $\Phi$ acquired by the atoms in the two arms. The output signal of the interferometer
$P(\Phi)$ is given by the relative
population:
\begin{equation}\label{eq.contrast}
P(\Phi)=\frac{N_{\left|p_{0}\right\rangle }}{N_{\left|p_{0}\right\rangle }+N_{\left|p_{0}+2n\hbar k\right\rangle }}=P_{0}+\frac{C}{2}\cos(\Phi)
\end{equation}
where $P_{0}\sim0.5$ is an offset and $C$ is the contrast.
For a vertical Mach-Zehnder Bragg interferometer, the relative phase depends on the
gravity acceleration $g$, the effective laser wave number $2nk$, the interferometer
time $T$ and the optical phase of the Bragg pulses:
\begin{equation}\label{eq.InterfPhase}
\Phi=n(2kg-\alpha)T^{2}+n(\phi_{1}-2\phi_{2}+\phi_{3})
\end{equation}
where $\alpha = 2\pi\times42.5509$ kHz/ms is the frequency chirp rate applied to the Bragg beams
in order to compensate for the time-varying Doppler shift of the falling atoms,
and $\phi_{i}$ is the relative phase between the two beams for the $i^{th}$ pulse.
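The chirp rate can be checked against the condition that the gravitational phase term vanish, $\alpha = 2kg$ (a sketch; the local value of $g$ is an assumed nominal input):

```python
import math

lam = 461e-9                  # Bragg wavelength [m]
g = 9.81                      # nominal local gravity [m/s^2] (assumed)

k = 2 * math.pi / lam
alpha = 2 * k * g             # chirp rate nulling the Doppler term
alpha_kHz_per_ms = alpha / (2 * math.pi) / 1e6   # rad/s^2 -> 2 pi x kHz/ms
print(f"alpha = 2 pi x {alpha_kHz_per_ms:.2f} kHz/ms")
```

This evaluates close to the quoted $2\pi\times42.5509$ kHz/ms; the small residual reflects the exact wavelength and the local value of $g$.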
\section{Experimental results and discussion}
\subsection{Contrast}
\begin{figure}[t]\begin{center}
\includegraphics[width=0.47 \textwidth]{fig2.pdf}
\caption{Contrast of the interference fringes as a function of
time $T$ for 1\textsuperscript{st}, 2\textsuperscript{nd} and
3\textsuperscript{rd} order Bragg diffraction with detuning $\Delta$. The inset shows a
typical fringe observed at $T= 0.2$~ms for a 3\textsuperscript{rd}
order Bragg interferometer.} \label{fig.fringecontrast}
\end{center}
\end{figure}
The contrast of the interference fringes, obtained by scanning
the phase $\phi_{3}$ of the last $\pi/2$ pulse, was determined from the values of $P(\Phi)$ between
the 2\textsuperscript{nd} and the 98\textsuperscript{th} percentile
\cite{McDonald2013}.
Fig.~\ref{fig.fringecontrast} shows the values of the observed contrast
for 1\textsuperscript{st}, 2\textsuperscript{nd} and
3\textsuperscript{rd} Bragg order as a function of the interferometer time $T$.
For different orders, the Bragg laser detuning $\Delta$ was chosen
in order to maintain a high Rabi frequency and a low rate of light scattering,
according to the available laser power.
For short interferometer times, the contrast is mainly limited by
the velocity spread along the vertical direction and by the
residual light scattering, which limits the $\pi$ pulse
efficiency.
For long interferometer times, the contrast is mainly limited by the Rabi
frequency inhomogeneity which is due to both the radial expansion of the atomic cloud and the intensity profile imperfections of the Bragg beams
\cite{Muller2008b,McDonald2013b}.
The sensitivity to this inhomogeneity becomes more critical as the Bragg order $n$ increases because the effective Rabi frequency scales as the $n$\textsuperscript{th} power of the two-photon Rabi frequency.
This shows that the small sample size and the ultralow temperatures achievable with strontium atoms
can lead to a high contrast for long interferometer times even with
relatively narrow Bragg beams. Further improvement in the contrast can be obtained by reducing the probe beam size in order to only interact with the central atoms, for which the Rabi frequency inhomogeneities due to the transverse expansion are smaller. However, in doing this the effect on the sensitivity has to be taken into account. Reducing the interrogation area will reduce the number of interrogated atoms, leading to an increase of the shot noise limit and of detection noise. Therefore, there is a trade-off between contrast gain and noise suppression which has to be optimised in order to really improve the sensitivity of the gravimeter.
Alternatively, it is possible to explore geometries where the atoms are guided by a dipole trap along the falling axis. In this scenario the atoms would be forced to remain in the region of maximum intensity of the Bragg beams, ensuring that they all contribute to the interferometer signal.
A technically feasible improvement by an order of magnitude in the Bragg laser power would allow
us to move further from resonance ($\Delta \sim 600~\Gamma$)
while maintaining a sufficiently high Rabi frequency, and therefore to realize a higher-order interferometer
as demonstrated for Cs \cite{Muller2008b}.
\begin{figure}[t]\begin{center}
\includegraphics[width=0.47 \textwidth]{fig3.pdf}
\caption{Allan deviation of the gravity acceleration measurements
for a 1\textsuperscript{st} order interferometer with a time
$T=30$ ms (black squares). The inset shows the corresponding fringe and the point at which the phase fluctuations are measured. Also shown in the figure are the estimated effects due to
the residual acceleration noise of the retro-reflection mirror (dashed red line),
the optical phase noise of the Bragg beams (dash-dotted blue line),
the intensity noise of the Bragg beams (short-dashed orange line) and
the shot noise ($1\times10^{5}$ atoms, dash-dot-dotted green line).
} \label{fig.allan}
\end{center}
\end{figure}
\subsection{Sensitivity}
The sensitivity $\delta g/g$ of the interferometer as a gravimeter is determined by measuring
the phase fluctuations $\delta \Phi$ at the slope of the
central fringe:
\begin{equation}\label{eq.sens}
\frac{\delta g}{g} = \frac{\delta \Phi}{2nkgT^2}.
\end{equation}
The short- and long-term sensitivities are
characterized with the Allan deviation. The results for a
1\textsuperscript{st} order interferometer with a time $T=30$~ms
and the estimated effect of the main noise sources are shown in
Fig.~\ref{fig.allan}. The Allan deviation scales as the
inverse-root of the integration time with
$\delta g/g = 1.5\times10^{-6}$ at 1~s, reaching $4\times10^{-8}$ at 2000~s. The sensitivity of our interferometer
is presently limited by the residual acceleration of the suspended
retro-reflection mirror.
The estimated phase noise due to the mirror vibrations
is 380 mrad$/\sqrt{\tau}$ where $\tau$ is the averaging time.
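Propagating the dominant vibration-noise term through the sensitivity relation above reproduces the measured short-term stability (a sketch; $g$ is an assumed nominal value):

```python
import math

n = 1                         # Bragg order
lam = 461e-9                  # Bragg wavelength [m]
k = 2 * math.pi / lam
g = 9.81                      # nominal local gravity [m/s^2] (assumed)
T = 30e-3                     # interferometer time [s]

dPhi = 0.380                  # mirror-vibration phase noise at 1 s [rad]
dg_over_g = dPhi / (2 * n * k * g * T**2)
print(f"dg/g at 1 s: {dg_over_g:.2g}")   # ~1.6e-6, close to the quoted 1.5e-6

tau = 2000.0                  # integrating down as 1/sqrt(tau)
print(f"dg/g at {tau:.0f} s: {dg_over_g / math.sqrt(tau):.1g}")
```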
The second major noise contribution comes from the optical phase noise
of the Bragg beams which is estimated to be 20 mrad$/\sqrt{\tau}$,
more than one order of magnitude smaller than the vibration noise.
The calculated phase noise arising from intensity fluctuations of the
Bragg laser is 1 mrad$/\sqrt{\tau}$, while other noise sources
such as AC Stark shift effects and Bragg frequency noise are
estimated to give contributions below the $\mu$rad$/\sqrt{\tau}$ level (see
appendix). Finally, the shot noise limit for $10^5$ atoms is $10$ mrad$/\sqrt{\tau}$.
\section{Conclusions and outlook}
In conclusion, we demonstrated LMT Bragg interferometry in a fountain of alkaline-earth atoms for the first time.
The results are mainly limited by technical aspects such as the available laser power, the size of the vacuum cell and residual vibrations; therefore we anticipate a dramatic increase in performance with higher-power lasers, a larger chamber to increase the interferometer time, and improved isolation from vibrational noise. A variation on our scheme is the possibility of inducing the Bragg transitions on the narrow intercombination line at 689 nm, where stable lasers with a higher output power are already available. Moreover, schemes based on the combination of Bragg diffraction and Bloch oscillations
\cite{Kovachy2010,Muller2009,Charriere2012} might allow superior performance in terms of precision and accuracy thanks to the specific properties of strontium. Other relevant prospects are the use of ultracold Sr sources \cite{Stellmer2013} and high-sensitivity detection schemes beyond the classical limit \cite{Norcia2015}.
In order to surpass present limits and take full advantage of the methods and ideas discussed in this paper, we are developing a new apparatus for a large-scale ($\sim$~10~m high) Sr fountain.
Possible fundamental
physics experiments include stringent tests of the Einstein equivalence principle and possible spin-gravity coupling \cite{Poli2011,Tarallo2014}, tests of models of quantum gravity \cite{Amelino2009} and dark matter \cite{Hamilton2015}, new schemes to determine the value of the gravitational constant \cite{Tino2013} and the detection of gravitational waves \cite{Tino2011}. Potential applications in geophysics and geodesy can also be envisaged \cite{deAngelis2009}.
In the long term, a space mission based on strontium atoms combining atom interferometers and transportable optical clocks \cite{Poli2014} together with a suitable configuration for gravitational wave detection \cite{Yu2011,Graham2013} would enable extremely high precision tests of different fundamental aspects of gravitational physics \cite{Tino2007,Tino2013}.
\section*{Acknowledgments}
We acknowledge support by INFN and the Italian
Ministry of Education, University and Research (MIUR) under the
Progetto Premiale ``Interferometro Atomico'' and by LENS. We also acknowledge
support from the European Union's Seventh Framework Programme
(FP7/2007-2013 grant agreement 250072 - ``iSense'' project and
grant agreement 607493 - ITN ``FACT'' project). We thank G. Rosi
for useful discussions.
\section{Introduction}
As more and more types of smart devices are produced and used in people's daily lives, wireless traffic demand has experienced unprecedented growth. Cisco's most recent report \cite{cisco} forecasts that mobile multimedia data will grow at a compound annual growth rate of more than $60\%$. On the other hand, users' demand for multimedia content is highly redundant, i.e., a few popular contents account for a majority of all requests \cite{Zipf}. Therefore, caching popular contents at various nodes in the network is a promising approach to alleviating the network bottleneck \cite{IWCT}.
For wireless caching systems where helpers (WiFi, femtocells) have high storage capacity, the performance depends heavily on the adopted caching placement. The caching placement for helpers was first investigated in \cite{4femetocaching} to minimize the downloading time, where both uncoded and coded cases are considered. It is shown
that the optimization problem for the uncoded case is NP-hard. In addition, \cite{cachingplacement} considers the channel fading factor and develops a caching placement to minimize the average bit error rate, where the optimal caching placement balances the channel diversity gain against the caching diversity gain. Moreover, the problem of optimal
MDS-encoded caching placement at the wireless edge is investigated in \cite{MDS} to minimize
the backhaul rate in heterogeneous networks. However, all the above analyses \cite{4femetocaching,cachingplacement,MDS} are based on a fixed topology between users and helpers. In \cite{geographic}, more realistic network models are adopted to characterize the stochastic nature of geographic locations, and the corresponding optimal caching placements are derived according to the total hit probability.
On the other hand, the potential cache capacity at the user side can also be exploited, e.g., for local cache offloading or D2D sharing. Various works have studied caching placement at the user side. In \cite{outage}, the D2D outage-throughput tradeoff problem is investigated and the optimal scaling laws are characterized. \cite{USC} analyzes the scaling
behavior of the throughput with the number of devices per cell under a Zipf-distributed content request probability with exponent $\gamma_r$, and concludes that the optimal cache distribution is also a Zipf distribution with a different exponent $\gamma_c$. By modeling the mobile devices as a homogeneous Poisson Point Process (PPP), \cite{malakGlobecom} derives the optimal cache distribution that maximizes the total probability of content delivery. However, the local offloading probability is not considered in their analysis. In addition, coded caching is also an effective approach to exploit the content diversity \cite{codedcache1}. By caching contents partially at the user side according to the developed caching distribution during the first phase, a coded multicasting opportunity can be created even for different content requests in the second phase. Moreover, \cite{Hierarchical} further proposes hierarchical coded caching
to address the joint caching placement problem at both users and helpers. However, these analyses \cite{codedcache1,codedcache2,Hierarchical} are based on a fixed topology, which is not suitable for user mobility scenarios.
Despite the aforementioned studies, to the best of our knowledge, the optimal caching placements for both helpers and users under realistic network models remain unsolved to date. Thus in this paper, we consider a two-tier caching system, where the helpers and users are spatially distributed according to two mutually independent homogeneous Poisson Point Processes (PPPs) with different densities \cite{YangCC}. In order to alleviate the traffic load in the cellular network, we aim to develop an optimal caching placement scheme to maximize the offloading probability, where the offloading includes self-offloading, D2D-offloading and helper-offloading. More details along with the main contributions are as follows:
\begin{itemize}
\item
We consider a D2D assisted two-tier wireless caching network consisting of users and helpers, where the offloading comes from self-offloading, D2D-offloading and helper-offloading. Different from \cite{malakGlobecom}, we take self-offloading events into consideration. Moreover, we make the practical assumption that only a part of the users has caching ability.
\item
We formulate the total offloading probability under caching placement in the two-tier wireless network and adopt DC programming to solve the non-convex maximization problem. In addition, we observe that users and helpers ought to cache the most popular contents when the density is low and ought to cache different contents when the density is high; our proposed caching placement achieves a balance between the two.
\item
We also consider the two extreme cases of one-tier caching systems. In the absence of user caching ability, the caching placement for the helper tier is formulated as a convex problem that can be solved efficiently by the classical waterfilling method; in the absence of helper caching ability, the caching placement for users is likewise formulated as a convex problem. Furthermore, we combine the solutions of the two cases into a non-joint caching placement and compare it with the proposed placement.
\end{itemize}
\section{System model and content access protocol}
In this section, we first introduce the two-tiered caching system as illustrated in Fig. \ref{fig:system}, where the helpers and users are spatially distributed according to two mutually independent homogeneous Poisson Point Processes (PPPs) with density $\lambda_{\text{H}}$ and $\lambda_{\text{UE}}$, respectively. Then the content access protocol is provided.
\subsection{System Model}
\subsubsection{Content module}
The content library consists of $N$ contents. The popularity distribution vector of the contents is denoted by $\mathbf{q}=\{q_1,\ldots,q_N\}$, where $q_i$ is the access probability for the $i$-th content. In this paper, we characterize the popularity distribution as a Zipf distribution with parameter $\gamma$\cite{Zipf}. If we arrange contents in descending order of popularity, the popularity of the $i$-th ranked content is \cite{8Push-Based}
\begin{equation}
q_i={\frac{1/i^{\gamma}}{\sum_{j=1}^{N}1/j^{\gamma}}},
\end{equation}
where $\gamma$ governs the skewness of the popularity. The popularity is uniform over contents for $\gamma=0$, and becomes more skewed as $\gamma$ grows. For simplicity, we assume all the $N$ contents are of equal size $L$.
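The Zipf model above is easy to evaluate numerically; the following sketch (library size and exponents are illustrative choices, not values from the paper) shows how $\gamma$ controls the skewness:

```python
def zipf_popularity(N, gamma):
    """Popularity q_i ~ 1/i^gamma of the i-th ranked content."""
    weights = [1.0 / i**gamma for i in range(1, N + 1)]
    total = sum(weights)
    return [w / total for w in weights]

N = 1000
for gamma in (0.0, 0.8, 1.2):
    q = zipf_popularity(N, gamma)
    # share of requests captured by the 10 most popular contents
    print(f"gamma = {gamma}: q_1 = {q[0]:.4f}, top-10 share = {sum(q[:10]):.2f}")
```

For $\gamma=0$ every content gets $1/N$ of the requests, while larger $\gamma$ concentrates requests on the top-ranked contents.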
\subsubsection{Network module}
In addition to the macro base stations (BSs), the network module also consists of helpers with caching ability, where a helper can send the contents in its local cache to requesting users within radius $R_{\text{H}}$ at relatively low cost. For simplicity, we assume the caching capacity of all helpers is the same, denoted by $M_{\text{H}}L$, where $M_{\text{H}}<N$. Therefore, a helper can cache up to $M_{\text{H}}$ different contents entirely. We also assume a content can only be cached entirely rather than partially. Denote the caching placement at the helpers for each content as $\mathbf{P_{\text{H}}}=[p_{1}^{\text{H}},p_{2}^{\text{H}},\ldots,p_{N}^{\text{H}}]$, where $p_i^{\text{H}}$ is the proportion of helpers caching the $i$-th content and $0 \leq p_i^{\text{H}}\leq 1$ for $i=1,2,\ldots,N$. The cache storage constraint at the helpers can then be written as $\sum\limits_{i=1}^{N}p_i^{\text{H}}\leq M_{\text{H}}$. The helpers caching the $i$-th content also follow a PPP with density $\lambda_{\text{H}}p_{i}^{\text{H}}$.
\subsubsection{User module}
We assume that part of the users have caching ability. Let $\alpha$ denote the proportion of cache-enabled users, where $0\leq \alpha \leq 1$. The cache-enabled users also follow a thinned homogeneous PPP with density $\alpha\lambda_{\text{UE}}$. For simplicity, we assume the caching capacity of the cache-enabled users is the same, denoted by $M_{\text{UE}}L$. Therefore, a cache-enabled user can cache up to $M_{\text{UE}}$ different contents entirely in its local cache. Moreover, a device-to-device (D2D) communication can be established if the distance between the requesting user and a user caching the desired content is less than $R_{\text{UE}}$, where $R_{\text{UE}}<R_{\text{H}}$ due to the lower transmit power. Let $\mathbf{P_{\text{UE}}}=[p_{1}^{\text{UE}},p_{2}^{\text{UE}},\ldots,p_{N}^{\text{UE}}]$ denote the caching placement at the cache-enabled users for each content, where $p_i^{\text{UE}}$ is the proportion of users caching the $i$-th content, and $0 \leq p_i^{\text{UE}}\leq 1$ for $i=1,2,\ldots,N$. The cache storage constraint at the cache-enabled users can then be written as $\sum\limits_{i=1}^{N}p_i^{\text{UE}}\leq M_{\text{UE}}$. Therefore, the users caching the $i$-th content also follow a PPP with density $\alpha\lambda_{\text{UE}}p_{i}^{\text{UE}}$.
\begin{figure}[t]
\centering
\includegraphics[height=2.2in]{./system1.pdf}
\vspace{-0.3cm}
\caption{System model of the D2D assisted wireless caching system, where (a), (b), (c) and (d) stand for Self-offload, D2D-offload, Helper-offload and Cellular-response, respectively.}
\vspace{-0.2cm}
\label{fig:system}
\end{figure}
\subsection{Content Access Protocol}
As indicated in Fig. \ref{fig:system}, the content access protocol is as follows:
\begin{enumerate}[(a)]
\item
\textbf{Self-offloading}:
When a content request occurs, the user first checks whether the desired content is stored in its local cache. The request is satisfied and offloaded immediately if the user has cached the desired content in its local storage space. We term this ``Self-offloading''.
\item
\textbf{D2D-offloading}:
If the desired content has not been cached locally or the requesting user has no caching ability, the user searches nearby devices for it. If at least one user within the radius $R_{\text{UE}}$ has stored the desired content, the request is satisfied and offloaded by establishing a D2D communication, termed ``D2D-offloading''.
\item
\textbf{Helper-offloading}:
If the request cannot be offloaded via D2D but at least one helper within $R_{\text{H}}$ has stored the desired content, the request is satisfied and offloaded by helper transmission, termed ``Helper-offloading''.
\item
\textbf{Cellular-response}:
If the request cannot be offloaded via the local cache, D2D or the helpers, it is forwarded to the cellular base station and the cellular network transmits the requested content in response.
\end{enumerate}
\section{Offloading probability and problem formulation}
In this paper, in order to alleviate the traffic load from the cellular network, our goal is to find the optimal caching placement to maximize the offloading probability. Therefore, we first analyze the offloading probability for the D2D assisted wireless caching network. Then, the optimal caching placement problem is formulated.
\subsection{Offloading probability analysis}
For a PPP distribution with density $\lambda$, the probability that there are $n$ devices in the area within a radius $r$ is:
\begin{equation}
\mathbb{F}(n,r,\lambda)= \frac{(\pi r^2\lambda)^n}{n!} e^{-\pi r^2 \lambda}
\end{equation}
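The void probability $\mathbb{F}(0,r,\lambda)=e^{-\pi r^{2}\lambda}$, on which all the offloading probabilities below rest, can be verified by direct simulation of a homogeneous PPP (a sketch with arbitrary parameter values):

```python
import math, random

random.seed(1)
lam, r = 0.01, 5.0            # density [1/m^2] and disc radius [m] (arbitrary)
half = 6.0                    # half-side of the simulation square (> r)
area = (2 * half)**2

def sample_poisson(mu):
    """Knuth's algorithm for a Poisson random variate with mean mu."""
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

trials, empty = 20000, 0
for _ in range(trials):
    n_pts = sample_poisson(lam * area)
    # given their number, the PPP points are i.i.d. uniform on the square
    if all(random.uniform(-half, half)**2 + random.uniform(-half, half)**2
           > r**2 for _ in range(n_pts)):
        empty += 1

estimate = empty / trials
theory = math.exp(-math.pi * r**2 * lam)
print(f"empirical {estimate:.3f} vs theory {theory:.3f}")
```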
Therefore, for a reference user located at the origin, the probability that at least one other user caching the $i$-th content lies within the transmission range is
\begin{align}
P_{i,\text{off}}^{\text{D2D}}=1-\mathbb{F}(0,R_{\text{UE}},\alpha \lambda_{\text{UE}} p_{i}^{\text{UE}})=1-e^{-\pi\alpha \lambda_{\text{UE}} p_{i}^{\text{UE}} {R_{\text{UE}}}^2}.
\end{align}
Similarly, the probability of at least one helper caching the $i$-th content within the radius $R_{\text{H}}$ is
\begin{equation}
P_{i,\text{off}}^{\text{H}}=1-\mathbb{F}(0,R_{\text{H}},\lambda_{\text{H}} p_{i}^{\text{H}})=1-e^{-\pi\lambda_{\text{H}} p_{i}^{\text{H}} {R_{\text{H}}}^2}.
\end{equation}
The offloading probability for the $i$-th content for users without caching ability, i.e., the probability that at least one helper or at least one user caches the $i$-th content, is
\begin{equation}
\begin{split}
P_{i,\text{NC}}=&1-(1-P_{i,\text{off}}^{\text{D2D}})(1-P_{i,\text{off}}^{\text{H}})\\
=&1-e^{-(\pi\alpha \lambda_{\text{UE}} p_{i}^{\text{UE}} {R_{\text{UE}}}^2+\pi\lambda_{\text{H}} p_{i}^{\text{H}} {R_{\text{H}}}^2)}.
\end{split}
\end{equation}
The corresponding offloading probability of the cache-enabled users for the $i$-th content is
\begin{equation}
P_{i,\text{C}}=p_{i}^{\text{UE}}+(1-p_{i}^{\text{UE}})P_{i,\text{NC}}.
\end{equation}
Therefore, the offloading probability for the $i$-th content becomes
\begin{equation}
\begin{split}
P_{i,\text{off}}=&\alpha{P_{i,\text{C}}}+(1-\alpha )P_{i,\text{NC}}\\
=&1-(1-\alpha{p_{i}^{\text{UE}}})e^{-(\pi\alpha \lambda_{\text{UE}} p_{i}^{\text{UE}} {R_{\text{UE}}}^2+\pi\lambda_{\text{H}} p_{i}^{\text{H}} {R_{\text{H}}}^2)}.
\end{split}
\end{equation}
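The closed form above can be cross-checked against an event-by-event composition of the protocol probabilities (a minimal sketch; the densities, radii and $\alpha$ are illustrative assumptions):

```python
import math, random

# Illustrative parameters (not from the paper): densities [1/m^2], radii [m]
alpha, lam_UE, lam_H = 0.5, 1e-3, 1e-4
R_UE, R_H = 50.0, 200.0

def p_off_content(p_ue, p_h):
    """Offloading probability for one content, composed step by step."""
    a = math.pi * alpha * lam_UE * p_ue * R_UE**2
    b = math.pi * lam_H * p_h * R_H**2
    P_d2d = 1 - math.exp(-a)                 # >= 1 caching user within R_UE
    P_help = 1 - math.exp(-b)                # >= 1 caching helper within R_H
    P_nc = 1 - (1 - P_d2d) * (1 - P_help)    # user without local cache
    P_c = p_ue + (1 - p_ue) * P_nc           # cache-enabled user
    return alpha * P_c + (1 - alpha) * P_nc

def p_off_closed(p_ue, p_h):
    """Closed form: 1 - (1 - alpha p_ue) exp(-(a + b))."""
    a = math.pi * alpha * lam_UE * p_ue * R_UE**2
    b = math.pi * lam_H * p_h * R_H**2
    return 1 - (1 - alpha * p_ue) * math.exp(-(a + b))

random.seed(0)
for _ in range(5):
    p_ue, p_h = random.random(), random.random()
    assert abs(p_off_content(p_ue, p_h) - p_off_closed(p_ue, p_h)) < 1e-12
print("closed form matches the event-by-event composition")
```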
The total offloading probability for the D2D assisted wireless caching system becomes
\begin{equation}
P_{\text{off}}=\sum_{i=1}^{N} q_{i}P_{i,\text{off}},
\end{equation}
The more data is offloaded by the wireless caching network, the less needs to be sent via the cellular network, which alleviates the cellular traffic load.
\subsection{Problem Formulation}
Let $\mathbf{P}=[\mathbf{P_{\text{H}}}\quad \mathbf{P_{\text{UE}}}]$ denote the caching placement at the helper and user sides.
The optimal caching placement that maximizes the offloading probability for the wireless caching network can be formulated as
\begin{align}
& \max_{\mathbf{P}} ~~\sum_{i=1}^{N}q_{i}P_{i,\text{off}}\\
& \textup{s.t.}~~\begin{cases}\sum\limits_{i=1}^{N} {p_{i}^{\text{UE}}}\leq M_{\text{UE}}\\
\sum\limits_{i=1}^{N} {p_{i}^{\text{H}}}\leq M_{\text{H}}\\
0 \leq p_i^{\text{UE}}\leq 1, i\in\{1,\ldots,N\}\\
0 \leq p_i^{\text{H}}\leq 1, i\in\{1,\ldots,N\}\\
\end{cases}.
\end{align}
\section{DC Programming for Caching Placement Optimization}\label{sec:dc}
In this section, we adopt the difference of convex (DC) program to solve the above problem. The maximization problem is equivalent to the following minimization problem:
\begin{align}\label{pro:equvalent_min}
& \min_{\mathbf{P}} ~~-\sum_{i=1}^{N}q_{i}P_{i,\text{off}}\\
& \textup{s.t.}~~\begin{cases}\label{pro:st}\sum\limits_{i=1}^{N} {p_{i}^{\text{UE}}}\leq M_{\text{UE}}\\
\sum\limits_{i=1}^{N} {p_{i}^{\text{H}}}\leq M_{\text{H}}\\
0 \leq p_i^{\text{UE}}\leq 1, i\in\{1,\ldots,N\}\\
0 \leq p_i^{\text{H}}\leq 1, i\in\{1,\ldots,N\}\\
\end{cases}.
\end{align}
Let $F(\mathbf{P})=-\sum_{i=1}^{N}q_{i}P_{i,\text{off}}$ denote the objective function in problem (\ref{pro:equvalent_min}). It can be easily verified that the Hessian matrix of $F(\mathbf{P})$ is not positive semidefinite, and hence $F(\mathbf{P})$ is non-convex.
Let $H(\mathbf{P})=\sum_{i=1}^{N}q_ih_i$, where $h_i=\alpha\pi\lambda_{\text{H}}{R_{\text{H}}}^2({p_i^{\text{UE}}}^2+{p_i^{\text{H}}}^2)$, and denote $G(\mathbf{P})=F(\mathbf{P})+H(\mathbf{P})$. We then have the following proposition.
\begin{proposition}
$H(\mathbf{P})$ and $G(\mathbf{P})$ are both convex in $\mathbf{P}$.
\end{proposition}
\begin{proof}
Let $A_i$ denote the Hessian matrix of $h_{i}$:
$A_i=\begin{bmatrix}
\frac{\partial^{2}{h_{i}}}{\partial {(p_i^{\text{UE}})}^{2}} & \frac{\partial^{2}{h_{i}}}{\partial {p_i^{\text{UE}}}\partial {p_i^{\text{H}}}}\\
\frac{\partial^{2}{h_{i}}}{\partial {p_i^{\text{H}}}\partial {p_i^{\text{UE}}}}&\frac{\partial^{2}{h_{i}}}{\partial {(p_i^{\text{H}})}^{2}}
\end{bmatrix}=\begin{bmatrix}
2\alpha\pi\lambda_{\text{H}}{R_{\text{H}}}^2& 0 \\
0 & 2\alpha\pi\lambda_{\text{H}}{R_{\text{H}}}^2
\end{bmatrix}$.
Hence the matrix $A_i$ is positive definite and $h_i$ is convex. Since a nonnegative linear combination of convex functions is also convex, $H(\mathbf{P})$ is convex. Similarly, the Hessian matrix of $G(\mathbf{P})$ is positive semidefinite and $G(\mathbf{P})$ is convex in $\mathbf{P}$.
\end{proof}
Hence, $ F(\mathbf{P})$ can be written as a difference of the following two convex functions:
\begin{equation}
F(\mathbf{P})=G(\mathbf{P})-H(\mathbf{P}).
\end{equation}
Therefore, we adopt DC programming to solve this problem. DC programming converges quickly and yields a locally optimal solution, and sometimes the globally optimal solution, of a non-convex problem\cite{dc}. Since $\frac{\partial{H(\mathbf{P})}}{\partial {\mathbf{P}}}$ is continuous and the constraint set of problem (\ref{pro:equvalent_min}) is convex, the DC programming procedure can be described as in Algorithm $1$. The results will be illustrated in Section \ref{sec:result}.
\begin{algorithm}\label{dc}
\caption{DC programming for caching placement}
\begin{algorithmic}[1]
\STATE Initialization: $\mathbf{P}_{0}^{\text{UE}}=\frac{M_\text{UE}}{N},
\mathbf{P}_{0}^{\text{H}}=\frac{M_\text{H}}{N}$;
\STATE Solve the convex optimization problem:
$\min G(\mathbf{P})-H(\mathbf{P}_{k})-(\mathbf{P}
-\mathbf{P}_{k}) \frac{\partial{H(\mathbf{P}_{k})}}{\partial {\mathbf{P}}}~~\textup{s.t.}~(\ref{pro:st})$;
\STATE Denote the solution of Step 2 by $\mathbf P_{k+1}$;
\STATE If $\|F(\mathbf{P}_{k})-F(\mathbf{P}_{k+1})\| \leq \varepsilon$ or $\|\mathbf P_{k}-\mathbf P_{k+1}\|\leq \varepsilon$, take $\mathbf P_{k}$ as the solution; otherwise, set $k \leftarrow k+1$ and return to Step 2;
\STATE \textbf{Return:} the objective value $F(\mathbf{P}_{k})$ and the solution $\mathbf{P}_{k}$;
\end{algorithmic}
\end{algorithm}
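The convexification step in Algorithm 1 can be illustrated numerically. The sketch below is a toy example, not the paper's objective: it applies the same iteration to the one-dimensional DC decomposition $F(x)=x^4-x^2$ with $G(x)=x^4$ and $H(x)=x^2$, for which the linearized subproblem $\min_x G(x)-2x_k x$ has the closed-form minimizer $x_{k+1}=(x_k/2)^{1/3}$.

```python
# Toy illustration of the DC (convex-concave) iteration of Algorithm 1.
# Minimize F(x) = G(x) - H(x) with G(x) = x**4 and H(x) = x**2 (both convex).
# Each step solves min_x G(x) - H(x_k) - (x - x_k) * H'(x_k), which here
# reduces to the closed form x_{k+1} = (x_k / 2) ** (1/3).

def dc_iterate(x0, eps=1e-10, max_iter=200):
    """Run the DC iteration until successive iterates are eps-close."""
    x = x0
    for _ in range(max_iter):
        x_next = (x / 2.0) ** (1.0 / 3.0)   # argmin of x**4 - 2*x_k*x
        if abs(x_next - x) <= eps:
            return x_next
        x = x_next
    return x

x_star = dc_iterate(0.8)
# F(x) = x^4 - x^2 has its positive minimizer at x = 1/sqrt(2) ~ 0.7071,
# and the iteration converges to it from x0 = 0.8.
```

Here the subproblem is solved in closed form only because the toy $G$ is so simple; for problem (\ref{pro:equvalent_min}) each step requires a generic convex solver.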
\section{Extreme Case Analyses}
In this section, we consider the caching problem under extreme cases where only one tier of the caching system is considered and the optimal caching placement can be calculated. We analyze the caching placement of the two extreme cases and combine the solutions as a baseline.
\subsection{${\alpha}=0$: \textbf{Helper-Tier Caching Network}}
\subsubsection{Problem Formulation}
In this case, no users have caching ability and we only need to optimize the caching placement $\mathbf{P}_{\text{H}}$ at the helper side. The offloading probability for the $i$-th content reduces to
\begin{equation}
P_{i,\text{off}}=P_{i,\text{off}}^{\text{H}}=1-e^{-\pi\lambda_{\text{H}} p_{i}^{\text{H}} {R_{\text{H}}}^2}.
\end{equation}
Problem (\ref{pro:equvalent_min}) can be written as
\begin{align}\label{case1}
& \min_{\mathbf{P_{\text{H}}}} ~~-\sum_{i=1}^{N}q_{i}(1-e^{-\pi\lambda_{\text{H}} p_{i}^{\text{H}} {R_{\text{H}}}^2})\\
& \textup{s.t.}~~\begin{cases} \sum\limits_{i=1}^{N} {p_{i}^{\text{H}}}\leq M_{\text{H}}\\
0 \leq p_i^{\text{H}}\leq 1, i\in\{1,\ldots,N\}\\
\end{cases},
\end{align}
\begin{lemma} (\textbf{Water-filling method}) The optimal caching placement of the helpers is
\begin{equation}
p_{i}^{\text{H}}=\min\left((\beta+\frac{\ln {q_{i}}}{\pi \lambda_{\text{H}} {R_{\text{H}}}^2})^+,1\right)
\end{equation}
for $i=1,2,\ldots,N$, where $x^+=\max{(x,0)}$ and $\beta$ is efficiently found by bisection search such that $\sum\limits_{i=1}^{N} {p_{i}^{\text{H}}}=M_{\text{H}}$.
\end{lemma}
\begin{proof}
The second derivative of $P_{i,\text{off}}$ is
\begin{align}
\frac{\partial^2{P_{i,\text{off}}}}{\partial{{p_{i}^{\text{H}} }^2}}=-\pi^2\lambda_{\text{H}}^2{R_{\text{H}}}^{4}e^{-\pi\lambda_{\text{H}} p_{i}^{\text{H}} {R_{\text{H}}}^2}< 0,
\end{align}
thus $-P_{i,\text{off}}$ is convex in $p_{i}^{\text{H}}$ and the objective function $-\sum\limits_{i=1}^{N}q_{i}P_{i,\text{off}}$ is also convex. Therefore, the caching placement optimization problem is a convex problem. Consider the following Lagrangian
\begin{align}
{\cal{L}}= -\sum_{i=1}^{N}q_{i}(1-e^{-\pi\lambda_{\text{H}} p_{i}^{\text{H}} {R_{\text{H}}}^2})+\mu(\sum\limits_{i=1}^{N} {p_{i}^{\text{H}}}- M_{\text{H}})
\end{align}
where $\mu$ is the Lagrange multiplier. The KKT condition for the optimality of a caching placement is
\begin{equation}
\frac{\partial{\cal{L}}}{\partial{p_{i}^{\text{H}}}}=-\pi\lambda_{\text{H}}{R_{\text{H}}}^2q_{i}e^{-\pi\lambda_{\text{H}} p_{i}^{\text{H}} {R_{\text{H}}}^2}+\mu\begin{cases}=0 \quad \text{if } 0<p_i^{\text{H}}<1\\
\geq 0 \quad \text{if } p_i^{\text{H}}=0\\
\leq 0 \quad \text{if } p_i^{\text{H}}=1
\end{cases}.
\end{equation}
Letting $\beta=\frac{\ln {(\pi \lambda_{\text{H}}{R_{\text{H}}}^2)}-\ln \mu }{\pi \lambda_{\text{H}} {R_{\text{H}}}^2}$ and $x^+=\max{(x,0)}$, we then have
\begin{equation}
p_{i}^{\text{H}}=\min\left((\beta+\frac{\ln {q_{i}}}{\pi \lambda_{\text{H}} {R_{\text{H}}}^2})^+,1\right),
\end{equation}
where $\beta$ can be solved by the bisection search method under the cache storage constraint.
\end{proof}
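The placement of Lemma 1 is straightforward to implement. The sketch below is a minimal stdlib implementation of the bisection search on $\beta$; the Zipf popularity model and the value $c=\pi\lambda_{\text{H}}{R_{\text{H}}}^2=0.8$ (obtained by combining the settings of Fig.~\ref{fig:waterfilling} with $R_{\text{H}}=100$~m from Table~\ref{table:parameter}) are assumptions made here for illustration.

```python
import math

def waterfill_helper(q, M_H, c):
    """Water-filling placement of Lemma 1:
    p_i = min((beta + ln(q_i)/c)^+, 1), with beta found by bisection so
    that sum_i p_i = M_H.  Here c = pi * lambda_H * R_H^2 and q is the
    content popularity distribution (q_1 >= q_2 >= ... >= q_N)."""
    def placement(beta):
        return [min(max(beta + math.log(qi) / c, 0.0), 1.0) for qi in q]
    # At beta = lo every p_i is 0; at beta = hi every p_i is 1, so the
    # target sum M_H (with M_H < N) is bracketed.
    lo, hi = 0.0, 1.0 - math.log(min(q)) / c
    for _ in range(100):                     # bisection on beta
        beta = 0.5 * (lo + hi)
        if sum(placement(beta)) < M_H:
            lo = beta
        else:
            hi = beta
    return placement(0.5 * (lo + hi))

# Example in the spirit of Fig. 2: Zipf popularity (an assumed model --
# the text only calls gamma the popularity skewness), N = 20, M_H = 4.
N, gamma, M_H = 20, 1.0, 4
q = [i ** -gamma for i in range(1, N + 1)]
q = [x / sum(q) for x in q]
p = waterfill_helper(q, M_H, c=0.8)
```

As in Fig.~\ref{fig:waterfilling}, the resulting placement is non-increasing in the content index: the most popular contents are cached everywhere and the tail is not cached at all.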
\begin{figure}[t]
\centering
\includegraphics[width=3.2in]{./plot_waterfilling.pdf}
\caption{Optimal caching placement at helper side under the settings $N=20,
\gamma=1,M_{\text{H}}=4,\lambda_{\text{H}}=\frac{20}{\pi500^2}$.}
\label{fig:waterfilling}
\end{figure}
As illustrated in Fig. \ref{fig:waterfilling}, the water-filling method allocates a higher caching probability to contents with larger popularity. For instance, the most popular contents under the water level, i.e., the first and second contents, are cached in all helpers, while the less popular contents above the water level, i.e., the $7$-th to the last content, are not cached in any helper.
\begin{remark}
According to \textbf{Lemma 1}, it is straightforward that the most popular contents are cached in the helper storage under relatively low helper density, i.e., $p_{1}^{\text{H}}=\ldots=p_{M_\text{H}}^{\text{H}}=1$ and $p_{M_\text{H}+1}^{\text{H}}=\ldots=p_{N}^{\text{H}}=0$, while under relatively high density, contents are evenly cached at the helper side, i.e., $p_{1}^{\text{H}}=\ldots=p_{N}^{\text{H}}=\frac{M_{\text{H}}}{N}$.
\end{remark}
\subsection{$\lambda_{\text{H}}M_{\text{H}}=0$: \textbf{User-Tier Caching Network}}
In this case, no helpers with caching ability participate in offloading the user requests, and the optimization reduces to the caching placement of $p_{i}^{\text{UE}}$. We thus rewrite the offloading probability as:
\begin{equation}
P_{i,\text{off}}=1-(1-\alpha{p_{i}^{\text{UE}}})e^{-\pi\alpha \lambda_{\text{UE}} p_{i}^{\text{UE}} {R_{\text{UE}}}^2}.
\end{equation}
Then problem (\ref{pro:equvalent_min}) becomes
\begin{align}\label{case2}
& \min_{\mathbf{P_{\text{UE}}}} ~~-\sum_{i=1}^{N}q_{i}(1-(1-\alpha{p_{i}^{\text{UE}}})e^{-\pi\alpha \lambda_{\text{UE}} p_{i}^{\text{UE}} {R_{\text{UE}}}^2})\\
& \textup{s.t.}~~\begin{cases} \sum\limits_{i=1}^{N} {p_{i}^{\text{UE}}}\leq M_{\text{UE}}\\
0 \leq p_i^{\text{UE}}\leq 1, i\in\{1,\ldots,N\}\\
\end{cases},
\end{align}
\begin{proposition}
The above problem is also a convex problem.
\end{proposition}
\begin{proof}
The second derivative of $P_{i,\text{off}}$ becomes
\begin{align}
\frac{\partial^2{P_{i,\text{off}}}}{\partial{{p_{i}^{\text{UE}} }^2}}=-[2\alpha{b}+b^2(1-\alpha{p_{i}^{\text{UE}}})]e^{-bp_{i}^{\text{UE}}}< 0,
\end{align}
where $b=\pi\alpha \lambda_{\text{UE}}{R_{\text{UE}}}^2$. Hence $-P_{i,\text{off}}$ is convex in $p_{i}^{\text{UE}}$ and the objective function $-\sum\limits_{i=1}^{N}q_{i}P_{i,\text{off}}$ is also convex. Therefore, the caching placement optimization problem is convex.
\end{proof}
As a result, we can adopt an interior-point method to obtain the optimal solution\cite{convex}.
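Beyond a generic interior-point solver, the KKT structure of this convex problem also admits a simple dual-bisection solution analogous to the water-filling case: stationarity equates the marginal offloading gain $g_i(p)=q_i(\alpha+b(1-\alpha p))e^{-bp}$, which is strictly decreasing in $p$, to the Lagrange multiplier $\mu$ for every interior $p_i^{\text{UE}}$. The sketch below implements this idea; it is a stdlib alternative under an assumed Zipf popularity model, not the solver used in the paper.

```python
import math

def usertier_placement(q, M_UE, alpha, b):
    """Solve the user-tier convex problem by dual bisection.  For a given
    multiplier mu, each p_i is clipped to [0, 1] or found from
    g_i(p) = q_i*(alpha + b*(1 - alpha*p))*exp(-b*p) = mu by an inner
    bisection; mu itself is found by an outer bisection enforcing
    sum_i p_i = M_UE (assuming M_UE < N so the constraint is active)."""
    def g(qi, p):
        return qi * (alpha + b * (1.0 - alpha * p)) * math.exp(-b * p)
    def p_of_mu(mu):
        out = []
        for qi in q:
            if g(qi, 0.0) <= mu:
                out.append(0.0)
            elif g(qi, 1.0) >= mu:
                out.append(1.0)
            else:
                lo, hi = 0.0, 1.0
                for _ in range(60):          # inner bisection: g(qi, p) = mu
                    mid = 0.5 * (lo + hi)
                    if g(qi, mid) > mu:
                        lo = mid
                    else:
                        hi = mid
                out.append(0.5 * (lo + hi))
        return out
    mu_lo, mu_hi = 1e-12, max(g(qi, 0.0) for qi in q)
    for _ in range(100):                     # outer bisection on mu
        mu = 0.5 * (mu_lo + mu_hi)
        if sum(p_of_mu(mu)) > M_UE:
            mu_lo = mu
        else:
            mu_hi = mu
    return p_of_mu(0.5 * (mu_lo + mu_hi))

# Example with the default parameters of Table I:
# alpha = 0.5, b = pi * alpha * lambda_UE * R_UE^2 = 2.25.
N, M_UE, alpha = 30, 2, 0.5
b = math.pi * alpha * (5000 / (math.pi * 500 ** 2)) * 15 ** 2
q = [i ** -1.0 for i in range(1, N + 1)]     # assumed Zipf, gamma = 1
q = [x / sum(q) for x in q]
p_ue = usertier_placement(q, M_UE, alpha, b)
```

Because $g_i$ scales with $q_i$, the resulting placement is again non-increasing in the content index, mirroring the helper-tier water-filling solution.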
\section{Simulations}\label{sec:result}
\begin{table}
\centering
\caption{Default parameter setting}
\label{table:parameter}
\begin{tabular}{|c|c|}
\hline
Parameter & Value \\
\hline
D2D communication range $R_{\text{UE}}$ & 15 m \\
Helper transmission range $R_{\text{H}}$ & 100 m \\
Proportion of cache-enabled users $\alpha$ & 0.5 \\
Density of users $\lambda_{\text{UE}}$ & $5000/{(\pi 500^2)}$\\
Density of helpers $\lambda_{\text{H}}$ & $50/{(\pi 500^2)}$ \\
Cache capacities of users and helpers & $M_{\text{UE}}=2$; $M_{\text{H}}=8$ \\
Size of content library $N$ & 30\\
Skewness of the popularity $\gamma$ & 1\\
\hline
\end{tabular}
\end{table}
\begin{table}
\centering
\begin{center}
\caption{Different caching schemes}
\label{table:baseline}
\begin{tabular}{|c|c|c|}
\hline
Schemes & caching schemes of users & caching schemes of helpers \\
\hline
{popular cache}& $p_i^{\text{UE}}=1,i\in[1,M_{\text{UE}}] $ & $p_i^{\text{H}}=1,i\in[1,M_{\text{H}}]$ \\
& $p_j^{\text{UE}}=0, j\in[M_{\text{UE}}+1,N]$ & $p_j^{\text{H}}=0, j\in[M_{\text{H}}+1,N]$ \\
\hline
{even cache}& $\mathbf{P}^{\text{UE}}=M_{\text{UE}}/N$ & $\mathbf{P}^{\text{H}}=M_{\text{H}}/N$ \\
\hline
{Non-joint}&the solution of Problem (\ref{case2})&the solution of problem (\ref{case1})\\
\hline
\end{tabular}
\end{center}
\end{table}
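The two heuristic baselines of Table~\ref{table:baseline} are easy to reproduce; the following is a minimal sketch, with parameter values taken from Table~\ref{table:parameter}.

```python
def popular_cache(N, M):
    """Deterministically cache the M most popular contents (popular cache)."""
    return [1.0 if i < M else 0.0 for i in range(N)]

def even_cache(N, M):
    """Cache every content with the same probability M/N (even cache)."""
    return [M / N] * N

# Default setting: N = 30 contents, M_UE = 2, M_H = 8.
ue_popular, h_popular = popular_cache(30, 2), popular_cache(30, 8)
ue_even, h_even = even_cache(30, 2), even_cache(30, 8)
```

Both baselines satisfy the cache capacity constraints with equality, which makes them directly comparable to the proposed placement.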
\begin{figure}[t]
\centering
\includegraphics[width=2.8in]{./lambda2.pdf}
\caption{The impact of $\lambda_{\text{H}}$ on the offloading probability}
\label{fig:lambda2}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=2.8in]{./lambda1.pdf}
\caption{The impact of $ \lambda_{\text{UE}}$ on the offloading probability}
\label{fig:lambda1}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=3.1in]{./distribution.pdf}
\caption{The caching placement of the proposed scheme}
\label{fig:distribution}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=2.8in]{./beta.pdf}
\caption{The impact of $\alpha$ on the offloading probability}
\label{fig:beta}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=2.8in]{./alpha.pdf}
\caption{The impact of $\gamma$ on the offloading probability}
\label{fig:alpha}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=2.8in]{./M.pdf}
\caption{The impact of $N$ on the offloading probability}
\label{fig:M}
\end{figure}
In this section, we provide numerical results to verify our analysis and compare the performance of the proposed caching placement with the baselines. The parameter setting and the three baselines are described in Table \ref{table:parameter} and Table \ref{table:baseline}. In particular, we combine the optimal solutions of the two one-tier caching cases into one baseline and name it the non-joint caching placement.
Fig. \ref{fig:lambda2} shows that the offloading probability increases with the helper density $\lambda_{\text{H}}$. The proposed caching placement outperforms the other three baselines regardless of $\lambda_{\text{H}}$. When $\lambda_{\text{H}}=0$, the performance of the proposed caching placement equals that of the non-joint one, because no helpers participate in offloading traffic in this situation. As $\lambda_{\text{H}}$ increases, the proposed caching placement becomes better than the non-joint one. When $\lambda_{\text{H}}$ is considerably large, the non-joint caching placement approaches the proposed scheme again: the non-joint scheme is also optimal when only the helper tier is present, and offloading is dominated by the helper tier in this regime. From Fig. \ref{fig:lambda1} we can draw a similar conclusion about $\lambda_{\text{UE}}$.
Furthermore, from Fig. \ref{fig:lambda2} and Fig. \ref{fig:lambda1}, we can see that when both $\lambda_{\text{H}}$ and $\lambda_{\text{UE}}$ are small, the even cache scheme performs worst. As $\lambda_{\text{H}}$ or $\lambda_{\text{UE}}$ increases, the even cache scheme improves; when $\lambda_{\text{H}}=0.8\times10^{-4}$ and $\lambda_{\text{UE}}=1.2\times10^{-2}$, it surpasses the popular cache scheme. The reason is that when few devices participate in the caching network, users and helpers need to cache popular contents to cope with the correspondingly high request probability, so the popular cache scheme performs well. When the resources of the caching network are rich, i.e., the node density is relatively high, the offloading probabilities of the most popular contents are easily satisfied, and the surplus storage can be used to cache other, relatively less popular contents; hence the even cache scheme performs better and the offloading probability of the popular cache scheme no longer increases. When $\lambda_{\text{UE}}$ is considerably large, the even cache scheme approaches the optimal caching placement. In Fig. \ref{fig:distribution}, we demonstrate the proposed caching placement calculated by DC programming with $N=5,M_{\text{UE}}=1,M_{\text{H}}=3$. As $\lambda_{\text{UE}}$ increases, the optimal caching placement changes from a popular cache scheme to an even cache scheme, which is consistent with our analysis.
Fig. \ref{fig:beta} illustrates the impact of $\alpha$, the proportion of cache-enabled users, on the offloading probability. When $\alpha=0$, the system reduces to a helper-tier caching system, and hence the offloading probability of the proposed caching placement equals that of the non-joint one. As $\alpha$ grows, more cache-enabled users join the caching system and the offloading probability therefore increases. Moreover, whenever $\alpha \neq 0$, the performance of the proposed placement is clearly better than that of the non-joint one.
In this paper, $\gamma$ denotes the skewness of the content popularity. When $\gamma$ is large, user requests concentrate on the popular contents and the caching system has a higher probability of caching the ``right'' contents. Therefore, the offloading probability usually increases with $\gamma$, as shown in Fig. \ref{fig:alpha}. The performance of the popular cache scheme grows rapidly with increasing $\gamma$, while the even cache scheme is not affected by $\gamma$ because it caches every content with the same probability.
Fig. \ref{fig:M} illustrates that the offloading probability decreases with $N$. Expanding the size of the content library $N$ is, in a sense, similar to reducing the cache capacity $M$, so the offloading probability declines accordingly. Nevertheless, the performance of our proposed caching placement remains good, which demonstrates that when the system serves many contents, the proposed scheme can finely adjust the caching proportion of each content through joint optimization and maintain good performance.
\section{Conclusion}
In this paper, an optimal caching placement is proposed to maximize the total offloading probability of a D2D-assisted wireless caching network. Specifically, the caching placement problem for the two-tier caching network is formulated as a DC problem and solved by DC programming. In addition, extreme-case analyses are provided for the helper-tier (or user-tier) caching network in the absence of the other tier; the caching placement problems of both cases are proved to be convex, and the classical water-filling method is adopted to solve the helper-tier case. Simulation results indicate that the most popular contents ought to be cached under relatively low node density, while contents ought to be cached evenly under relatively high node density, and that our proposed caching placement always strikes a balance between the two.
\bibliographystyle{IEEEtran}
\section{Introduction}
The most massive galaxies in the Universe are interesting for several reasons, so they have been the object of much study. Recent work has shown that the most luminous galaxies at $z\sim 0.1$ are more abundant than expected from the most commonly used parametrizations of the luminosity function (Bernardi et al. 2010, 2013). When converted to a stellar mass function (a conversion which is rather sensitive to the assumed stellar mass-to-light ratio, about which, as we show below, one has to be very careful), this mis-match is important for models which use the observed abundance and its evolution to constrain the issue of whether these objects were assembled via major or minor mergers. E.g., Bernardi et al. (2011a,b; 2014) and Cappellari et al. (2013) show that two mass scales are special for both early- and late-type galaxies: $M_* \sim 3\times 10^{10}M_\odot$ and $2\times 10^{11}M_\odot$. These scales are thought to be related to a change in the assembly histories (e.g. to ones in which wet versus dry or minor versus major mergers become important). These mass scales are particularly interesting because it has long been thought that the most massive galaxies are also the ones whose stellar populations are most likely to have evolved passively (e.g. Cimatti et al. 2006). If they evolve passively, or they do not merge with other members of their sample as their stellar populations age, then the fact that their comoving number density does not evolve allows one to use the evolution of their clustering strength to constrain the growth factor (e.g. Wake et al. 2008). However, the most massive galaxies are rare, so measuring their clustering reliably requires a large volume.
The BOSS survey (Anderson et al. 2012) defines a sample of massive galaxies -- the CMASS sample -- chosen to be a population of nearly constant comoving number density over $0.5\le z\le 0.7$. Maraston et al. (2013; hereafter M13) showed that, across $0.4\le z\le 0.6$, the stellar masses in CMASS have evolved little if at all, especially if one restricts attention to the reddest objects (those with observed $g-i>2.35$). Montero-Dorta et al. (2014) show that the CMASS luminosity function also appears to evolve passively over this range. One of our main goals is to explore the possibility that the CMASS galaxies evolve passively all the way to $z=0$. This is complicated because a galaxy's luminosity can evolve significantly as its stellar population ages, and this evolution can be a function of waveband. Both of these motivate observational estimates of the stellar mass instead, since, for a population which is neither forming new stars nor merging, this quantity (if appropriately defined) should be constant and independent of waveband. Since stellar masses $M_*$ are determined from the product of $M_*/L$ and $L$, they tend to be less reliably determined than luminosities themselves (Bernardi et al. 2013). To address the question of passive evolution, one must ensure that systematics associated with estimating $L$ or $M_*/L$ at different $z$ are neither hiding real, nor masquerading as, evolution.
This is nontrivial because, even at fixed redshift, estimates of the bright end of the luminosity function depend strongly on how one fits the light profile. Estimates based on fitting single component Sersic profile (S{\'e}rsic 1963) are much less biased than SDSS-pipeline analyses based on the de~Vaucouleur's profile or Petrosian's procedure, both of which underestimate the total light (Meert et al. 2013, 2015).
The use of this more accurate photometry substantially increases the inferred stellar mass density at $z\sim 0.1$ (Bernardi et al. 2013). Although this impacts studies which seek to relate the stellar mass of a galaxy to the mass of its parent halo (Kravtsov et al. 2014; Shankar et al. 2014), it makes little sense to compare their local measurement with stellar masses at $z\sim 0.55$, say, unless the high redshift sample is also based on similar photometric reductions. In Section~\ref{sec:lf}, we use the same photometric pipeline which was used to analyze the $z\sim 0.1$ sample, {\tt PyMorph} (Meert et al. 2015), to study the CMASS sample.
Comparison of the CMASS and SDSS luminosity and stellar mass functions requires an understanding of $k+e$ corrections. Section~\ref{sec:passive} shows that these depend strongly on assumptions about the age of the population. However, if we use the same models -- those of Maraston et al. (2009; hereafter M09) -- to analyze both SDSS and CMASS, {\em and} we require that the models return old ages, then passive evolution appears consistent with the data.
There is a well-known prediction for how the clustering signal of a passively evolving population should evolve. In view of the sensitivity to the $k+e$-corrections, in Section~\ref{sec:xi} we use the clustering of the CMASS and SDSS galaxies to check if the most massive SDSS galaxies really are simply passively evolved from CMASS. We find that the clustering of the most massive galaxies in the SDSS is weaker than expected if the most massive SDSS galaxies were also the most massive galaxies in CMASS. Appendix~\ref{sec:toy} discusses implications of this for merger models, and a final section summarizes our findings and places them in the context of previous work.
Where necessary, we assume a flat $\Lambda$CDM cosmology with $\Omega_{m} = 0.3$ and Hubble constant $H_0 = 70~$km~s$^{-1}$~Mpc$^{-1}$ at the present time. Changing $\Omega_m$ by 10\% does not affect our conclusions. All volumes and number densities we quote are comoving. Our SDSS analysis is based on the DR7 Main Galaxy sample (Abazajian et al. 2009), which provides redshifts, {\tt cmodel} magnitudes for the apparent brightness and {\tt model} magnitudes for the colors of each object (see the survey documentation at {\tt www.sdss.org} for details of these quantities). In addition, all stellar masses we quote assume 97\% solar metallicity plus 3\% of 0.05 solar (following M13) and a Chabrier IMF.
\section{Effect of {\tt PyMorph} photometry on the luminosity function}\label{sec:lf}
\begin{figure}
\centering
\includegraphics[scale = .43]{FIGURES/LF_CMASS_atcmass.ps}
\caption{Comparison of $i$-band luminosity functions based on {\tt cmodel} and {\tt PyMorph} {\tt Sersic} photometry. Symbols show results for the CMASS galaxies as well as a red subsample (having observed {\tt model} $g-i>2.35$). Solid and dashed black curves show the corresponding results for all the SDSS main sample galaxies, while gray solid and dashed curves show only the E+S0s. The reasonable agreement between CMASS and SDSS is fortuitous since no $k+e$ corrections have been applied, but this helps illustrate the fact that the difference between {\tt cmodel} and {\tt PyMorph} photometry is the same in both SDSS and CMASS.}
\label{pymorph}
\end{figure}
In the SDSS, the bright end of the luminosity function is much brighter if {\tt PyMorph} derived Sersic rather than {\tt cmodel} photometry is used. Our first goal is to determine if this is also true in CMASS. The cyan and magenta symbols in Figure~\ref{pymorph} show $1/V_{\rm max}$-based estimates of the CMASS luminosity functions derived from {\tt cmodel} and Sersic photometric reductions, respectively. Although we have used $i$-band photometry, these are not $i$-band luminosity functions in the strict sense, because no $k+e$ corrections have been applied. This is not a concern for the present purpose, since, for each object, these would likely be the same for both photometric reductions. The figure clearly shows that the Sersic reductions produce substantially more high luminosity objects.\footnote{Strictly speaking, Meert et al. (2013) and Bernardi et al. (2014) showed that, although single Sersic fits to galaxies at $z\sim 0.1$ are much less biased than {\tt cmodel} magnitudes, they do slightly overestimate the total light -- estimates based on fitting two component Sersic + Exponential profiles are less biased. However, the CMASS galaxies are too distant to allow accurate determinations of the two component fits.}
For most of the remainder of this paper we will be interested in the question of passive evolution. The reddest CMASS objects -- those with observed $g-i>2.35$ -- are much more likely to evolve passively (see, e.g., M13 for details). Therefore, we only test for passive evolution using this redder subset. The green and red symbols in Figure~\ref{pymorph} show the luminosity functions for this subset. The red galaxies account for about 70 percent of all the objects at the bright end (i.e. comoving number densities less than about $0.5\times 10^{-4}$/Mpc$^3$), but otherwise the distribution of luminosities is very similar to that of the full sample.
The main point of this figure is to check if the difference between {\tt cmodel} and {\tt PyMorph} magnitudes in CMASS is similar to that in the SDSS. This raises the question of what the analog of the $g-i>2.35$ color cut is at $z\sim 0.1$. A simple color cut will not suffice, because it is well known that a substantial fraction of red SDSS galaxies are actually edge-on disks (i.e. are not passively evolving). Recently, Huertas-Company et al. (2011) have used a variety of different observables (in addition to color) to assign to each galaxy a probability that it is a certain morphological type. These Bayesian Automated Classification (hereafter BAC) probabilities are available for all the DR7 objects. So, in what follows, we weight each galaxy in our SDSS sample by the BAC probability of Huertas-Company et al. (2011) that it is an elliptical (E) or an S0. In support of this choice, we note that, for the luminosity thresholds we consider later, the red galaxies are the same fraction, $\sim 0.7$, of the full CMASS sample that E+S0s are of SDSS galaxies.
The solid curves show Sersic determinations of the luminosity function of all galaxies in the SDSS and of E+S0s (obtained by weighting each SDSS galaxy by $p$(E) + $p$(S0), the probability that it is an E or S0 as determined by Huertas-Company et al. 2011). The dashed curves show a similar analysis using {\tt cmodel} magnitudes. Since no $k+e$ corrections are applied, the reasonable agreement at the bright end of the CMASS measurements is fortuitous. However, this makes it easy to see that Sersic photometry results in more high luminosity objects in CMASS, quantitatively just like it does in the SDSS.
Therefore, just as the SDSS analysis results in a significant revision of the $M_*-M_{\rm halo}$ relation at $z\sim 0$, our {\tt PyMorph} reductions of the CMASS sample imply a significant revision at the high mass end of the $M_*-M_{\rm halo}$ relation at $z\sim 0.55$. The revision is not quite as large as the figure suggests because, in the SDSS, the Sersic luminosities are biased slightly high compared to those derived from Sersic + Exponential fits (Bernardi et al. 2013; also see D'Souza et al. 2015 for a different analysis); we expect this to be true for the CMASS sample as well. (The slight brightward bias of the Sersic values is much less than the amount by which {\tt cmodel} magnitudes are biased faintwards.) In this context, it is worth noting that Sersic based estimates of BCGs in Sparcs and Cosmos imply a modified $M_*-M_{\rm halo}$ relation (Shankar et al. 2014) which is qualitatively consistent with what our Sersic reductions in CMASS imply.
\section{Evidence for passive evolution from $\phi(L)$ and $\phi(M_*)$ in the SDSS and CMASS}\label{sec:passive}
The previous section showed that, when no $k+e$ corrections are applied, the SDSS and CMASS $i$-band luminosity functions are in remarkable agreement. Meaningful conclusions about evolution rest on the accuracy of $k+e$ corrections, which we now discuss.
\subsection{Age dependence of $k+e$ corrections}
\label{sec:age}
In what follows, we will mainly use the $(k+e)$-corrections in the $r$- and $i$-bands obtained by fitting the observed CMASS colors to the single burst stellar population (hereafter SSP) models of M09, assuming almost solar metallicity (97\% solar plus 3\% of 0.05 solar, as in M13) and a Chabrier IMF (we converted from Kroupa to Chabrier IMF as described in Tab. 2 of Bernardi et al. 2010). These are the same models used by M13 for the red galaxies (i.e. those with $g-i>2.35$). (M13 used a suite of templates with different star formation histories for bluer galaxies, i.e. those with $g-i<2.35$. Since we are only interested in the brightest galaxies which are very likely to be well-described by the passive template anyway, we do not use this suite of other templates.)
\begin{figure}
\centering
\includegraphics[scale = .43]{FIGURES/Age_cmass.ps}
\includegraphics[scale = .43]{FIGURES/Age_cmass2.ps}
\caption{The top panel shows the distribution of $z=0$ ages of SDSS ellipticals and CMASS red galaxies, as determined from fitting M13 models to the observed colors. The sharp cuts at 3~Gyr and 8~Gyr, for SDSS and CMASS, are by construction. In both cases, the less luminous galaxies tend to be younger. SDSS ellipticals older than $\sim 5$~Gyrs may have been present at CMASS redshifts. The bottom panel shows the age distribution of such galaxies, except that all ages between 0 and 3~Gyrs at a given $z$ have been replaced with an age of 3~Gyrs, before being shifted back to $z=0$ ages. In effect, the distribution above 10~Gyrs is unchanged, but that between 8 and 10~Gyrs has been increased. The age distributions, and their dependence on luminosity, are in good agreement.}
\label{ageDist}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale = .43]{FIGURES/kcorrEvol2_CMASS.ps}
\caption{Comparison of $(k+e)_{rr}$ and $(k+e)_{ii}$ corrections from the M09 models described in the text for a range of ages (dashed and dotted, as labelled) with those from W06 (thin solid). Symbols with error bars show the median $k+e$ at each $z$ (plus and minus 3 times the rms error on the mean in the bin) which results from fitting the CMASS red galaxies to these M09 models; the lowest set (blue) for each band is for the full population, and the other set (cyan) is for the luminous subsample which has $M_i<-23$. The difference indicates differential evolution of the population resulting from the fact that more luminous galaxies are older on average. The convergence at high $z$ is spurious; it results from requiring the galaxies to be at least 3~Gyrs old at each $z$.}
\label{keage}
\end{figure}
In practice, when M13 fit passive models, they only fit to templates with ages exceeding 3 Gyrs, but less than the age of the Universe at each $z$. As a result, there is a lower limit on the $z=0$ age (the $z=0$ age of a redshift $z$ galaxy is the age it would have if it still exists at $z=0$), and this limit is lower for the lower redshift population. While imposing such a lower limit is reasonable -- the CMASS colors have relatively large errors which can bias the inferred ages -- this is not the only physically reasonable choice. For example, most previous work assumes the same $z=0$ age for all galaxies (e.g. Wake et al. 2006), so placing the same lower limit on the $z=0$ age for all galaxies would have been the most natural generalization.
The dashed (red) and solid (black) histograms in the top panel of Figure~\ref{ageDist} show the distribution of $z=0$ ages obtained from fitting the M09 models to the red CMASS and SDSS elliptical galaxies (i.e., $p$(E)$\ge 0.75$), respectively. The sharp cuts at 8~Gyr and 3~Gyr, for CMASS and SDSS, are by construction, as described earlier. Most of the difference between the two distributions is due to objects which did not exist at CMASS redshifts. To illustrate, the bottom panel shows the result of assigning each SDSS elliptical a new redshift drawn from a distribution of constant comoving number density which covers the same redshift depth as CMASS, removing from the sample all objects which were too young to have existed at the newly assigned redshift, and replacing all ages between 0 and 3~Gyrs with an age of 3~Gyrs, before shifting back to $z=0$ ages. This brings the two histograms into reasonably good agreement, lending qualitative support to the hypothesis that passive evolution of CMASS galaxies can account for the oldest galaxies in the SDSS.
The dot-dot-dot-dashed (red) and dotted (black) histograms in the two panels show a similar analysis of the objects more luminous than $M_i<-23$: they are clearly older on average. This correlation between age and luminosity matters because the $k+e$ corrections depend strongly on age.
Figure~\ref{keage} shows this explicitly: the dashed and dotted lines show the $k+e$ corrections in these models for different $z=0$ ages (as labelled). Note that these corrections are rather similar for ages between 11 and 13 Gyrs, but the age matters increasingly for younger ages. E.g., the $k+e$ correction differs by more than 0.2~mags at $z=0.7$ for galaxies having $z=0$ ages of 9 and~10 Gyrs respectively; and $(k+e)_{rr}$ becomes negative for ages less than 8~Gyrs. Therefore, how one imposes a lower limit on the ages when fitting to CMASS is important.
The uppermost thin solid curve for each band shows the $(k+e)$-corrections used by Wake et al. (2006). These were based on SSP models of Bruzual \& Charlot (2003), under the assumption that all the stars in a galaxy were formed in a single instantaneous burst at $z = 9.84$ (solar metallicity) after which the population evolves passively with no further star formation. In our cosmology, this means the galaxies are assumed to be 13~Gyrs old today, and indeed, the Wake et al. (2006) $k+e$ corrections track those of M09 for this same (old) age closely.
The lines with error bars show the median $k+e$ corrections derived for the red CMASS galaxies (i.e., $g-i>2.35$; blue lines), and for the more luminous subsample which has $M_i<-23$ (cyan lines); each error bar shows three times the rms error on the mean in the redshift bin. Some of the tendency for the two populations to have the same $k+e$ correction at high redshift is a consequence of requiring galaxies to be at least 3~Gyrs old at the redshift of observation. I.e., the lookback time to $z=(0.6,0.7)$ is (5.8,6.3)~Gyrs. Hence, if a galaxy's $z=0$ age is really 9~Gyrs, at $z=0.7$ it will be (incorrectly) assigned an age of 9.3~Gyrs. Since the $k+e$ correction is a strong function of age, particularly at younger ages, this results in a spurious upturn in $k+e$ at high-$z$.
Notice that even the older, more luminous CMASS red galaxies lie well below the relations used by Wake et al. (2006). Comparison with the dashed and dotted curves suggests that the full CMASS red sample is about 9 Gyrs old on average whereas the more luminous subset is about 1~Gyr older: both are substantially younger than the Wake et al. template. We return to this later. The fact that the $k+e$ correction depends on luminosity is also interesting, as it is evidence of differential evolution. This means that use of a global $k+e$ correction -- as is often assumed for massive galaxies -- may lead to biases. Therefore, in what follows, we use the M09 corrections on an object-by-object basis.
We have also considered $k+e$ corrections based on the Charlot \& Bruzual (2007; CB07) algorithm. In this case, we use SSPs which have ages between 8 and 12.5~Gyrs today. As a result, the CB07 models allow the CMASS galaxies to be slightly younger than do M13. Provided that we shift the CB07 model fainter by 0.08~mags in $r$ before fitting, to account for known problems with the $r$-band in these models (see, e.g., M09), the $k+e$ corrections are in rather good agreement at $z\le 0.2$. But, by $z\sim 0.6$, there are systematic differences, with the CB07 based corrections being smaller (or more negative) by about 0.1~mags. Although we are mostly interested in the $k+e$ correction, it is worth noting that the models have very similar $k$ corrections, so the differences are due to the $e$ part of the correction.
Although we do not use the CB07-based corrections further, it is worth making the point that, without a priori knowledge of which $k+e$ correction is correct, conclusions about pure passive evolution will be limited by this uncertainty. This is particularly worrying, since the $k+e$ corrections are very sensitive to the lower limit on the ages which has been imposed by hand. In an attempt to determine if our decision to use corrections based on M09 is correct, we have performed two tests of the hypothesis that the CMASS galaxies have evolved passively to the present. The first is a more careful study of the luminosity and stellar mass functions where, because of the upturn in $k+e$ at high redshifts in Figure~\ref{keage}, we confine our study to $z<0.6$. The second uses their spatial distribution.
\subsection{SDSS$_{\rm CMASS}$: A passively evolved mock catalog}
We begin with all the objects in the SDSS-DR7 Main Galaxy sample. Since the SDSS is apparent magnitude limited in the $r$-band, each of these could have been observed out to a maximum comoving volume $V_{smax}$, which depends on the object's Petrosian $r$-band luminosity and the SDSS apparent magnitude limit. We assign each object a new redshift, $z_c$, where $z_c$ is drawn from a distribution which has constant comoving density between $z=0.5$ and $z=0.6$. We then use the same models we use to analyze the high redshift sample -- in this case, the M09 models -- to $k+e$ correct the $gri$ magnitudes of each SDSS object from their true $z$ values to their new $z_c$ values.
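For concreteness, the $V_{smax}$ calculation can be sketched as follows. The snippet below is a simplified illustration: the $r<17.77$ Petrosian limit, the cosmological parameters, and the neglect of $k+e$ corrections inside the bisection are all assumptions made for brevity.

```python
import math

# Sketch of the maximum-redshift (hence V_smax) calculation by bisection on
# the distance modulus.  Flat LCDM with Om = 0.27 and Hubble distance
# c/H0 = 4283 Mpc (h = 0.7) is assumed; k+e corrections are ignored here.
OM, DH = 0.27, 4283.0

def E(z):
    return math.sqrt(OM * (1 + z)**3 + (1 - OM))

def d_lum_mpc(z, n=500):
    """Luminosity distance in Mpc (midpoint rule for the comoving integral)."""
    dz = z / n
    chi = sum(1.0 / E((i + 0.5) * dz) for i in range(n)) * dz
    return DH * chi * (1 + z)

def z_max(M_r, m_lim=17.77):
    """Redshift at which m = M_r + DM(z) reaches the assumed flux limit."""
    lo, hi = 1e-4, 2.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        dist_mod = 5.0 * math.log10(d_lum_mpc(mid) * 1e5)  # d_L in 10 pc units
        if M_r + dist_mod < m_lim:
            lo = mid
        else:
            hi = mid
    return lo
# V_smax then follows from chi(z_max)^3 times the survey solid angle.
```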
\begin{figure}
\centering
\includegraphics[scale = .43]{FIGURES/rmi_CMASS_red_M09.ps}
\caption{Absolute {\tt model}-color -- {\tt cmodel}-magnitude relation in the red CMASS $z\sim 0.55$ sample, the corresponding SDSS$_{\rm CMASS}$ sample, and the E+S0s in the SDSS, all $k+e$ corrected to $z=0$ using the M13 prescriptions. Solid curves show the median relations, dashed and dotted curves show the region which contains 68\% and 95\% of the objects.}
\label{Mr-Mi}
\end{figure}
Each object now has a fainter apparent magnitude, so we add (correlated) random numbers to each of the bands to mimic the noisier photometry associated with fainter apparent magnitudes. To do this, we first measured how the error depends on band and on the type of photometry. For {\tt cmodel} and {\tt PyMorph} photometry a single Gaussian is sufficient. Hence, to construct our mock catalog, we added independent Gaussian noise with rms 0.1 to the Sersic $i$-band and {\tt cmodel} $r$- and $i$-band magnitudes. For the {\tt model} magnitudes (which we use to compute colors), two Gaussian components are required, and the errors in the other bands are highly correlated with those in $r$. In this case, the rms in $(g,r,i)$ equals $(0.2,0.1,0.1)$ and the correlation coefficient between the error in $r$ and that in another band is 0.98. To add the second component, we select 25\% of the objects and add Gaussian noise with rms $0.1$, again taking care to account for the correlation with the error in $r$, for which the correlation coefficient is 0.8.
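A minimal sketch of the dominant Gaussian component is given below, using the rms values and correlation coefficient quoted above. The conditional-Gaussian construction is our assumed way of realizing the stated correlations (the second, 25\% component would be added analogously with $\rho=0.8$).

```python
import math, random

def model_mag_errors(rng, sig=(0.2, 0.1, 0.1), rho=0.98):
    """Correlated (dg, dr, di) model-magnitude errors, first Gaussian component.

    sig gives the rms in (g, r, i); rho is the correlation of the g- and
    i-band errors with the r-band error, as quoted in the text.
    """
    sg, sr, si = sig
    dr = rng.gauss(0.0, sr)
    # Conditional draws so that corr(dg, dr) = corr(di, dr) = rho while the
    # marginal rms values stay at sg and si.
    dg = rho * (sg / sr) * dr + sg * math.sqrt(1 - rho**2) * rng.gauss(0.0, 1.0)
    di = rho * (si / sr) * dr + si * math.sqrt(1 - rho**2) * rng.gauss(0.0, 1.0)
    return dg, dr, di
```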
Finally, we apply the CMASS selection cuts (Anderson et al. 2012) to obtain what we call the SDSS$_{\rm CMASS}$ sample.
(This involves generating a {\tt fiber2} magnitude for each object which we estimate, following its definition, from our seeing-corrected Sersic fits.) If our $k+e$ corrections are correct, and we have accurately accounted for the photometric errors, then weighting each SDSS$_{\rm CMASS}$ object by its $V_{smax}^{-1}$ yields a purely passively evolving population which we can compare with CMASS. Potential tests include $\phi(L|z)$, $\phi(M_*|z)$, the distribution of colors or ages, or, as we describe later, the clustering signal.
Since we are most interested in comparing CMASS and SDSS$_{\rm CMASS}$ to test for passive evolution, we are only interested in CMASS objects having observed $g-i>2.35$, which are much more likely to evolve passively (see M13 for details). The corresponding cut in our SDSS$_{\rm CMASS}$ sample is to weight each object by the BAC probability $p$(E) $+$ $p$(S0) when computing quantities like $\phi(L|z)$ and $\phi(M_*|z)$. (Recall from Paragraph 3 of Section 2 that these $p$(type) weights are always necessary because a simple color cut on the $z\sim 0.1$ population, from which we built our SDSS$_{\rm CMASS}$ sample, will not remove red edge-on disks. In addition, the $k+e$ corrections we used when building our SDSS$_{\rm CMASS}$ sample are not appropriate for galaxies which are not early-types; weighting by the BAC probability is a simple way of removing them from the sample, since we know they cannot possibly be the descendants of CMASS galaxies anyway.)
\begin{figure}
\centering
\includegraphics[scale = .43]{FIGURES/LF_CMASS_M09.ps}
\includegraphics[scale = .43]{FIGURES/LF_CMASS_red_M09.ps}
\caption{Top panel shows the $i$-band {\tt cmodel} and {\tt PyMorph}-Sersic luminosity functions for all galaxies in CMASS (cyan and magenta symbols with error bars), SDSS$_{\rm CMASS}$ (cyan and magenta dotted lines) and SDSS (black solid or dashed lines) all corrected to $z=0$ using the M09 $(k+e)$-corrections discussed previously. Bottom panel shows the luminosity functions for red galaxies in CMASS (red and green symbols with error bars), E+S0s in the SDSS$_{\rm CMASS}$ sample (red and green dotted curves), and E+S0s in the SDSS (gray solid and dashed) all corrected to $z=0$ using the M09 $(k+e)$-corrections. Extra red dashed curves for SDSS$_{\rm CMASS}$ E+S0s in bottom panel show the effect of ignoring photometric errors when constructing the SDSS$_{\rm CMASS}$ mock sample.}
\label{LFred}
\end{figure}
\subsection{Absolute magnitudes}\label{lf}
Figure~\ref{Mr-Mi} shows the color magnitude relation for the red CMASS galaxies, and the E+S0s in the SDSS$_{\rm CMASS}$ and SDSS samples, $k+e$ corrected to $z=0$.
Dashed and dotted curves show the range in color which encloses 50 and 90 percent of the objects at each $M_i$. The relation in the SDSS is considerably narrower than in CMASS: this is a consequence of the larger photometric errors associated with the fainter higher redshift objects. However, the SDSS$_{\rm CMASS}$ sample does exhibit this larger scatter, suggesting that our treatment of photometric errors is reasonably accurate.
We now make a more careful comparison of the CMASS and SDSS$_{\rm CMASS}$ luminosity functions. In all cases, our estimate weights each galaxy by the inverse of the comoving volume over which it could have been observed. In principle, this comoving volume is determined by a complicated combination of the $i$-band apparent brightness and colors in the other bands. In practice, assuming that $V_{max}$ is determined by the cut on $m_i$ only is a reasonably good approximation, so we do not include an additional term for the color cuts.
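The estimator itself is simple. The following is a minimal sketch of the $1/V_{max}$ binning used for all the $\phi(L)$ measurements in this section; the magnitude range and bin width are illustrative, not the values used for the figures.

```python
# Minimal 1/Vmax luminosity-function estimator: each galaxy contributes
# 1/(Vmax * bin width) to the bin containing its absolute magnitude.
def lf_vmax(abs_mags, vmaxes, m_lo=-24.0, m_hi=-20.0, dm=0.25):
    """Return phi(M) on bins of width dm between m_lo and m_hi.

    abs_mags : absolute magnitudes (more negative = more luminous)
    vmaxes   : comoving volume over which each galaxy could be observed
    """
    nbin = round((m_hi - m_lo) / dm)
    phi = [0.0] * nbin
    for M, V in zip(abs_mags, vmaxes):
        k = int((M - m_lo) / dm)
        if 0 <= k < nbin:
            phi[k] += 1.0 / (V * dm)
    return phi
```

In practice each contribution would also carry the $p$(E)~$+$~$p$(S0) type weight described above; that multiplies the $1/(V\,\Delta M)$ term and changes nothing structurally.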
Figure~\ref{LFred} shows the $i$-band luminosity function $(k+e)$-corrected to $z=0$ using the M09 $k+e$ corrections; the dotted curves show the {\tt cmodel} and {\tt PyMorph}-Sersic luminosity functions measured in our SDSS$_{\rm CMASS}$ samples. These should be compared with the symbols which show the corresponding measurement in CMASS using galaxies with $0.5\le z\le 0.6$. The top panel shows results for all galaxies, and the bottom for the subset of red CMASS galaxies (i.e., they have observed $g-i>2.35$), and E+S0s in SDSS$_{\rm CMASS}$. The counts are in rather good agreement in the top panel, and in even better agreement in the bottom. The fact that our SDSS$_{\rm CMASS}$ counts lie slightly but consistently above the CMASS counts may indicate incompleteness in CMASS. The required offset is a factor of 1.25, which is in good agreement with the recent estimate of 20 percent incompleteness by Leauthaud et al. (2015). However, this conclusion rests heavily on the assumption that our $k+e$ corrections are indeed realistic, and that our treatment of the photometric errors is as well.
Before we discuss these, notice that the counts are also in good agreement with the bright end of the dashed and solid gray curves. These show the {\tt cmodel} and {\tt PyMorph}-Sersic luminosity functions in the full SDSS sample (weighted by $p$(E) $+$ $p$(S0), of course), respectively. The dotted (SDSS$_{\rm CMASS}$-based) curves overlap the SDSS measurements at the bright end, but fall off rapidly at the faint end. This fall-off is the expected consequence of the CMASS-selection cuts. However, the exquisite match at the bright end indicates that the bright end of the SDSS$_{\rm CMASS}$ sample is made up of the brightest galaxies in SDSS. Therefore, the remarkable agreement between CMASS, SDSS$_{\rm CMASS}$ and SDSS in Figure~\ref{LFred} -- for both {\tt cmodel} and {\tt PyMorph} photometry -- suggests that the red CMASS sample is related to the (bright end of the) SDSS E+S0 sample by purely passive evolution. Of course, this agreement depends on the choice of $k+e$ correction, and our treatment of photometric errors.
To show that these matter, the red and green dashed curves in the bottom panel show the result of ignoring these errors when constructing the SDSS$_{\rm CMASS}$ sample. Clearly, this matters much more for the faint end, and more for Sersic photometry. The dependence on photometry is not because {\tt PyMorph} photometry is noisier than {\tt cmodel}. Rather, it is more closely related to the fact that sample selection is done using {\tt cmodel} photometry. See Appendix~A for a more detailed discussion.
\subsection{Stellar masses}
We expect the differences in $\phi(L)$ to carry over to the stellar mass functions. However, as Bernardi et al. (2013) have highlighted recently, this is not entirely straightforward because $M_*$ is estimated from the product of $M_*/L$ and $L$. Therefore, $M_*/L$ can be systematically different even when $L$ is the same, or systematic differences in $L$ may not be balanced by differences in $M_*/L$.
\begin{figure}
\centering
\includegraphics[scale = .43]{FIGURES/ML_models.ps}
\caption{Stellar mass to light ratios as a function of age output by the CB07 (green diamonds) and M09 (blue asterisks) models for solar metallicity and a Chabrier IMF. Bottom curves account only for the mass in stars and stellar remnants; top curves include the mass in processed gas as well. These differences directly impact the transformation from $\phi(L)$ to $\phi(M_*)$.}
\label{compareML}
\end{figure}
To illustrate the level at which systematics matter, Figure~\ref{compareML} shows how $M_*/L$ depends on the age of the stellar population in the CB07 and M09 SSP models we used to estimate the $(k+e)$-corrections. The bottom blue asterisks show $M_*/L$ in the M09 models when $M_*$ is the mass in stars $M_{\rm liv}$ and stellar remnants $M_{\rm rem}$; the top blue asterisks add the gas which has been returned to the ISM by stellar evolution to give $M_{\rm tot}$.
This raises the question of which $M_*/L$ estimate one should use. There are two reasons why one might use the values based on $M_{\rm tot}$. One is that $M_{\rm tot}$ is constant whereas $M_{\rm liv}+M_{\rm rem}$ evolves (as M13 note, the stellar mass lost to stellar evolution can be large: $\sim 40$~percent at $z\sim 0.5$). Hence, not accounting for the mass in gas compromises one of the advantages of stellar mass relative to luminosity when testing for passive evolution. The second reason to prefer the total $M_*/L$ values is that they are in better agreement across the models: the M09 and CB07 models differ more in the amount of gas returned to the ISM than in the total mass-to-light ratio.
Since passive evolution conserves $M_{\rm tot}$ but not $M_{\rm liv}+M_{\rm rem}$ anyway, Figure~\ref{compareML} argues that $M_*/L$ measurements in the future should be based on $M_{\rm tot}$.
In practice, we are primarily interested in older galaxies, for which stellar evolution has slowed substantially, so the assumption that the mass in stars is constant between $z\sim 0.6$ and $z\sim 0$ is plausible. Thus, the potential advantage of using $M_{\rm tot}$ is not so great. In addition, the difference between the model predictions for $M_{\rm liv}+M_{\rm rem}$ is typically less than 0.1~dex. Therefore, in what follows we will follow M13 in working with the mass currently in stars, rather than the total mass ever in stars. But the difference between the two should be borne in mind.
\begin{figure}
\centering
\includegraphics[scale = .43]{FIGURES/MsF_CMASS_1.ps}
\caption{The CMASS stellar mass function based on luminosities derived from {\tt cmodel} $i$-band photometry. Large cyan symbols connected by a solid line count the mass in stars and stellar remnants; small cyan symbols include the mass in gas as well, and blue symbols are taken from Figure~19 in M13. }
\label{M*maraston}
\end{figure}
The large cyan symbols in Figure~\ref{M*maraston} show the result of starting from the same SDSS {\tt cmodel} luminosities used by M13 (those which led to the cyan symbols in the top panel of Figure~\ref{LFred}) and using the SSP models of M09 to compute the $M_*/L$ values from which to determine $M_*\equiv M_{\rm liv}+M_{\rm rem}$. As a consistency check, the blue symbols show the number counts reported in Figure~19 of M13; at large stellar masses they are very similar to our larger cyan symbols, as they should, since most of these massive galaxies have $g-i>2.35$. The difference at smaller masses is due to the fact that M13 used a suite of templates with different star formation histories for bluer galaxies (i.e. those with $g-i<2.35$) whereas we did not (as we noted earlier). The smaller cyan symbols in Figure~\ref{M*maraston} show the result of including the mass that is now in the form of gas for the M09 models; this results in an increase by a factor of 1.5.
\begin{figure}
\centering
\includegraphics[scale = .43]{FIGURES/MsF_CMASS_SDSS_M09.ps}
\includegraphics[scale = .43]{FIGURES/MsF_CMASS_SDSS_red_M09.ps}
\caption{Comparison of stellar mass functions of all (top) and red (bottom) galaxies in CMASS with all and E+S0 galaxies in SDSS$_{\rm CMASS}$ and SDSS. In all cases, $M_*$ comes from combining the M09 models with {\tt cmodel} or {\tt PyMorph} photometry.}
\label{MFred}
\end{figure}
Figure~\ref{MFred} shows the $\phi(M_*)$ estimates which result from combining the luminosities which led to Figure~\ref{LFred} with $M_*/L$ estimates from the same M09 models (ignoring the mass in gas). The top panel shows $\phi(M_*)$ for all galaxies in CMASS, SDSS, and SDSS$_{\rm CMASS}$, and the bottom panel for the reds in CMASS and the E+S0s in SDSS and SDSS$_{\rm CMASS}$. The agreement between CMASS and SDSS$_{\rm CMASS}$, which is already good in the top panel, is even better in the bottom. Passive evolution between CMASS and the SDSS appears to be an excellent approximation.
Note that using the same expression for converting from $L$ to $M_*$ was crucial. Had we used the M09 $M_*/L$ to estimate CMASS $M_*$ values, but one of the relations from Bernardi et al. (2010) to estimate SDSS$_{\rm CMASS}$ values, then we would have found that CMASS galaxies were more massive than the most massive SDSS$_{\rm CMASS}$ galaxies. Indeed, this was the puzzle raised by M13: How can the high redshift sample be more massive? They noted that it was possible that systematically different $M_*$ estimates might be the reason, and our analysis appears to confirm this. (Figure~\ref{LFri} and associated discussion argues that measurement errors do not affect this conclusion about the bright-end.)
Although we do not show it here, analysis of the higher redshift range $0.6\le z\le 0.7$ yields similar results. We have highlighted the $0.5\le z\le 0.6$ range because biases resulting from the M13 age requirement are less of an issue at low redshifts (c.f. Figure~\ref{keage} and associated discussion).
The agreement between CMASS and SDSS$_{\rm CMASS}$ is consistent with passive evolution. However, this conclusion depends crucially on the accuracy of the $k+e$ corrections (and our treatment of the photometric errors). For this reason, we now turn to a very different test of the passive evolution hypothesis.
\section{Comparison of clustering in SDSS and CMASS}\label{sec:xi}
In this section, we use the clustering of the CMASS and SDSS$_{\rm CMASS}$ samples to determine if the two are simply related by passive evolution. Similar tests are described in Wake et al. (2008), Tojeiro et al. (2012) and Guo et al. (2013).
Note that it is conventional in the literature on large scale structure to work in units in which $H_0 = 100h$~km~s$^{-1}$Mpc$^{-1}$, so all distances are quoted in units of $h^{-1}$Mpc. We will do so here, but remind the reader that the number densities in the previous section all use $h=0.7$. In addition, whereas the previous section restricted attention to $z<0.6$ (because of potential systematics in the $k+e$ corrections), here we also include galaxies at higher redshifts. This is because -- as we describe below -- for this test we care more about the rank ordering of the luminosities (or stellar masses) than about their absolute values, and we do not expect the potential systematic in $k+e$ corrections to change the rank order.
\subsection{Clustering of conserved tracers}
If CMASS galaxies evolve passively, then their comoving number density will remain unchanged. In this case, their large scale clustering signal should evolve as
\begin{equation}
\xi(r|z) = b_z^2\,\xi_m(r|z) = [D_z + b_0-1]^2\,\xi_m(r|z=0)
\label{xirz}
\end{equation}
where $D_z$ is the linear theory growth factor at redshift $z$ in units of its value at $z=0$, and $b_z$ is the bias of the population at redshift $z$ (Nusser \& Davis 1994; Mo \& White 1996). Therefore,
\begin{equation}
\frac{\xi(r|z)}{\xi(r|0)} = \frac{[D_z + b_0-1]^2}{b_0^2}
= \frac{b_z^2}{[D_z^{-1} + b_z-1]^2}.
\label{xizratio}
\end{equation}
(The growth on smaller scales can be slightly different; see Wake et al. 2008 for some explicit examples of the expected magnitude of this difference.) If we measure the redshift space distorted signal $\xi(s)$, then on the scales where equation~(\ref{xizratio}) applies we expect
\begin{equation}
\frac{\xi(s|0)}{\xi(s|z)} \approx
\frac{\left(b_0^2 + 2b_0 f_0/3 + f_0^2/5\right)}
{\left(b_z^2 + 2b_z f_z/3 + f_z^2/5\right)\,D_z^2}
\label{xisratio}
\end{equation}
where $f_z\approx \Omega_m(z)^{5/9}$ (Kaiser 1987).
Although we introduced equations~(\ref{xirz}--\ref{xisratio}) in the context of passively evolving galaxies, they are more generally applicable to any population of tracers having conserved comoving density. E.g., suppose that CMASS galaxies only merge with non-CMASS galaxies. Then their luminosities and stellar masses will almost certainly be inconsistent with those of passive evolution models. However, their comoving number density will remain unchanged. If these differences from true passive evolution preserve the rank order -- the most luminous/massive CMASS galaxy remains the most luminous/massive one at lower redshift -- then the clustering at fixed comoving number density (not fixed stellar mass or luminosity!) will obey equations~(\ref{xirz}--\ref{xisratio}). Another example, which is potentially relevant to the discussion of the previous section, is to suppose that SSP models differ from one another only in the strength of the evolution in luminosity or stellar mass, such that although they predict different luminosities, they all have the same rank ordering. In this case also, the clustering at fixed comoving number density should obey equations~(\ref{xirz}--\ref{xisratio}).
Previous work (Anderson et al. 2012; Nuza et al. 2013; Guo et al. 2013) has shown that $b_z\approx 2$ for the full CMASS sample. Hence, equation~(\ref{xirz}) indicates that the low redshift sample should be more strongly clustered if the CMASS galaxies have survived to become the SDSS$_{\rm CMASS}$ galaxies because of passive evolution (as Figures~\ref{LFred} and~\ref{MFred} suggest).
In our background cosmology $D_{0.55} = 0.75$, $D_{0.2} = 0.9$, $f_{0.55} = 0.76$, $f_{0.2}=0.62$ and $f_0=0.51$, so if $b_z\sim 2$ at $z\sim 0.55$, then $\xi(r|0.55)/\xi(r|0) = 0.73$ and $\xi(s|0.55)/\xi(s|0) = 0.78$, whereas $\xi(r|0.55)/\xi(r|0.2) = 0.83$ and $\xi(s|0.55)/\xi(s|0.2) = 0.85$.
Therefore, if the objects in our SDSS$_{\rm CMASS}$ sample are descendants of those in the CMASS sample, then, at constant comoving number density, we expect the clustering amplitude to have increased by about 20 percent. (Current uncertainty about the background cosmology means this number is uncertain by only a few percent. This number is about three times larger than the fractional change in the growth factor over the redshift range spanned by CMASS.)
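The numbers above follow directly from equations~(2) and~(3). A minimal check, plugging the growth factors and growth rates quoted in the text into those equations, reproduces the quoted ratios:

```python
# Check of the conserved-tracer clustering ratios quoted in the text, using
# the values given for our background cosmology: D(0.55) = 0.75,
# f(0.55) = 0.76, f(0) = 0.51, and b_z = 2 at z = 0.55.

def xi_real_ratio(b_z, D_z):
    """xi(r|z)/xi(r|0) for a conserved tracer, equation (2)."""
    return b_z**2 / (1.0 / D_z + b_z - 1.0)**2

def xi_redshift_space_ratio(b_z, D_z, f_z, f_0):
    """xi(s|z)/xi(s|0), i.e. the inverse of equation (3)."""
    b_0 = 1.0 + (b_z - 1.0) * D_z          # passive bias evolution (Fry 1996)
    kaiser = lambda b, f: b**2 + 2 * b * f / 3 + f**2 / 5
    return kaiser(b_z, f_z) * D_z**2 / kaiser(b_0, f_0)

print(round(xi_real_ratio(2.0, 0.75), 2))                        # 0.73
print(round(xi_redshift_space_ratio(2.0, 0.75, 0.76, 0.51), 2))  # 0.78
```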
On the other hand, if there were some CMASS-CMASS mergers, then the clustering signal will evolve differently. The CMASS and SDSS$_{\rm CMASS}$ samples are large enough that this signal of passive-like evolution should be detectable.
Guo et al. (2013) have performed a test of the passive-like/conserved tracer evolution hypothesis over the redshift range covered by CMASS (approximately 0.45-0.65). Their Fig.~13 shows their results, which they argue are consistent with passive evolution. However, because the expected fractional change in the clustering strength over the CMASS redshift range is only of order ten percent, their measurements are also consistent with no evolution whatsoever. Indeed, Reid et al. (2014) use this lack of evolution to argue that one can simply ignore evolution across the entire CMASS sample.
Some of the mildness of the measured evolution across the CMASS sample can be attributed to the joint effects of passive-like evolution and luminosity- or type-dependent clustering in a survey in which the observed mix of galaxy types depends on redshift. This is relevant because Guo et al. (2013) have shown that clustering in CMASS depends on luminosity, and the fainter CMASS galaxies are only observed at the low redshift end of the sample. Let $f$ denote the fraction of faint galaxies, $b$ the bias of this faint subset, and $B$ that of the brighter objects. At the high redshift end, we only observe the bright objects: suppose their clustering signal is $B_{hi-z}^2\xi_{hi-z}$. If these bright objects evolve passively, then at lower redshifts their clustering signal is $B_{lo-z}^2\xi_{lo-z} = (D_{lo-z} + B_0-1)^2\,\xi_0$. Although these objects accounted for the full observed population at high-$z$, they only account for $1-f$ of the observed objects at lower $z$. The clustering of the full low redshift sample is $(fb_{lo-z} + (1-f)B_{lo-z})^2\xi_{lo-z}$. The high and low redshift samples will have the same observed clustering signal if
$fD_{lo-z}b_{lo-z} + (1-f)D_{lo-z}B_{lo-z} = D_{hi-z}B_{hi-z}$. A little algebra shows this occurs if
$f (B_{lo-z} - b_{lo-z}) = 1 - D_{hi-z}/D_{lo-z}$. Since the right hand side is positive, and $f<1$, this will be satisfied only if $(B_{lo-z} - b_{lo-z})$ is large enough.
If it is, then the luminosity dependence of clustering can cancel the effect of passive evolution in a survey (such as CMASS) in which the fainter galaxies are not seen at the highest redshifts. To ensure that we are not affected by this, we will work exclusively with volume limited samples.
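The cancellation condition just derived is easy to verify numerically. In the sketch below the growth factors and bias values are purely illustrative (not fitted to CMASS); the faint-subset bias is chosen to satisfy the condition, and the two observed clustering amplitudes then agree exactly.

```python
# Numerical check that f*(B_lo - b_lo) = 1 - D_hi/D_lo makes the observed
# clustering of the high- and low-redshift samples equal.  All numbers are
# illustrative.

D_hi, D_lo = 0.75, 0.85            # growth factors at the two epochs (D(0)=1)
B_0 = 1.9                          # z=0 bias of the bright, passive subset
B_hi = 1.0 + (B_0 - 1.0) / D_hi    # passive bias evolution at high z
B_lo = 1.0 + (B_0 - 1.0) / D_lo    # ... and at low z

f = 0.3                                        # faint fraction at low z
b_lo = B_lo - (1.0 - D_hi / D_lo) / f          # choose b_lo to satisfy condition

lhs = f * D_lo * b_lo + (1 - f) * D_lo * B_lo  # low-z effective b * D
rhs = D_hi * B_hi                              # high-z b * D
print(abs(lhs - rhs) < 1e-9)                   # True: amplitudes cancel
```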
\subsection{Technical note}
In practice, the clustering test is complicated by the fact that equation~(\ref{xisratio}) is only expected to apply on scales of order $10h^{-1}$Mpc or larger. Since most of the signal comes from smaller scales where this expression may not be extremely accurate, and we care about ten percent effects, it is desirable to test equation~(\ref{xizratio}) directly, rather than via~(\ref{xisratio}). Following Davis \& Peebles (1983) it is commonly assumed that this can be done by measuring the projected quantity $w_p(r_p)$, which, in principle, is not affected by redshift space distortions. However, although the definition of $w_p(r_p)$ involves an integral over all pair separations along the line of sight, the measurement is usually restricted to pair separations smaller than some scale that is typically of order $60h^{-1}$~Mpc. As a result, the measured quantity is not completely independent of redshift space distortions, and, if one is interested in ten percent level effects, this matters.
For example, Reid et al. (2014) present measurements of both $\xi(s)$ and $w_p(r_p)$ for the full CMASS sample (but not for the red subset of most interest to us!). Although they do not say so, the usual naive interpretation of the two measurements returns estimates of the large scale bias factor which differ by more than ten percent: the bias inferred from fitting $w_p(r_p)$, when inserted into Kaiser's formula, results in an overestimate of $\xi(s)$ of more than twenty percent. Unfortunately, this systematic is precisely the expected magnitude of the passive evolution signal.
Although this drawback of $w_p(r_p)$ has been known since its inception, it is only recently that datasets have reached the precision where this matters. van den Bosch et al. (2013) describe how to modify the estimator of $w_p(r_p)$ to mitigate this effect. They estimate a multiplicative correction factor which their Figure~5 suggests is approximately (i.e., to within a few percent) independent of galaxy type. Therefore, although measurements of $w_p(r_p)$ in two samples may each be biased, their ratio is not. We make use of this below.
\begin{figure}
\centering
\includegraphics[scale = .4]{FIGURES/xiDR7DR9.eps}
\caption{Projected two point correlation function of all CMASS galaxies (open circles, from Table~2 in Reid et al. 2014) and of the most luminous SDSS galaxies, selected to have comoving number densities as labelled (triangles, taken from Table C7 of Zehavi et al. 2011). Bottom panel shows the SDSS measurements divided by $w_p$ of the full CMASS sample. }
\label{xi79}
\end{figure}
\subsection{Clustering of the most luminous objects in CMASS and SDSS$_{\rm CMASS}$}
To set the stage, we first compare measurements of $w_p(r_p)$ taken from the literature. The open circles in Figure~\ref{xi79} show the values for the full CMASS sample taken from Table~2 of Reid et al. (2014). While they do not quote a comoving number density for their sample, their measurements are almost indistinguishable from those of Nuza et al. (2013), who quote $n = 3.6\times 10^{-4}h^3$Mpc$^{-3}$. The SDSS DR7 sample with the most similar clustering signal has $M_r<-21.5$ and $n = 2.8\times 10^{-4}h^3$Mpc$^{-3}$; the triangles show $w_p$ for this sample taken from Table C7 of Zehavi et al. (2011).
Before we discuss the relative amplitudes, it is worth noting how remarkably similar the shapes of the curves are: the bottom panel shows that they differ by a single scale independent multiplicative bias factor. While there certainly are pairs common to the two SDSS samples, there are none in common with CMASS, so the agreement in shape is truly remarkable.
We now discuss the amplitudes, bearing in mind that we are most interested in comparisons at fixed comoving number density. We use the results of Nuza et al. (2013) to account for the fact that the CMASS abundances are larger. On the basis of mock catalogs matched to CMASS they report that the clustering strength increases as the abundance decreases; a CMASS sample with $n = 2.8\times 10^{-4}h^3$Mpc$^{-3}$ would be 6\% more strongly clustered than we have shown. This would make the SDSS sample slightly less strongly clustered than a CMASS of the same number density. This conflicts with the conserved tracers prediction that SDSS should be of order 20\% more strongly clustered.
The discrepancy may not be unexpected, since one expects passive evolution to be a better approximation for the rarer more massive objects. The filled triangles show an SDSS DR7 sample having $n = 0.5\times 10^{-4}h^3$Mpc$^{-3}$ (again from Table C7 of Zehavi et al. 2011). The Nuza et al. scaling of $b$ with $n$ indicates that the clustering should be 60\% higher. This is similar to or slightly larger than the corresponding SDSS measurements, so also conflicts with the conserved tracers prediction.
Since it is possible that this discrepancy is due to contamination by bluer galaxies in both CMASS and SDSS, we made our own measurements of red galaxies in CMASS and E+S0s in the SDSS. To check consistency of our measurement software with previous work, we first measured $\xi(s)$ and $w_p(r_p)$ in the full CMASS sample, using the weighting scheme detailed in Anderson et al. (2012), finding good agreement with $\xi_{\rm N+S}$ of Table~2 in Nuza et al. (2013) and with Tables~2 and~3 in Reid et al. (2014).\footnote{However, in what follows, we work with volume limited catalogs, so the $w_{\rm FKP}$ weights of all the objects in a given catalog are the same. Also, we are only interested in scales on which fiber collisions matter at less than the few percent level.}
Having established that our software reproduces published results, we next made three volume limited catalogs, defined by choosing the most luminous CMASS galaxies with redshifts between $z_{\rm min}=0.5$ and $z_{\rm max} = (0.62,0.67,0.67)$; the associated luminosity thresholds $M_i<(-22.62,-22.87,-23.08)$ are chosen to yield comoving number densities of $n = (2,1,1/2)$ $\times 10^{-4}h^3$Mpc$^{-3}$.
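Constructing a volume-limited sample at fixed comoving number density amounts to sorting by luminosity and cutting at the appropriate rank. The sketch below illustrates that step with a toy catalog; the magnitudes, volume and density are invented, and this is not the actual reduction pipeline.

```python
import numpy as np

def luminosity_threshold(abs_mag, volume, n_target):
    """Absolute-magnitude threshold yielding a target comoving number density
    n_target in a volume-limited catalog of comoving volume `volume`."""
    m = np.sort(np.asarray(abs_mag))   # brightest (most negative) first
    n_keep = int(round(n_target * volume))
    return m[n_keep - 1]               # faintest magnitude retained

# toy catalog: 10^4 galaxies in a 10^7 (Mpc/h)^3 volume, target n = 2e-4 h^3 Mpc^-3
rng = np.random.default_rng(1)
mags = rng.uniform(-24.0, -21.0, size=10_000)
thresh = luminosity_threshold(mags, 1.0e7, 2.0e-4)
sample = mags[mags <= thresh]          # the volume-limited subsample
```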
We then measured $\xi(s)$ and $w_p(r_p)$ in each of these samples and found that the more luminous samples were more strongly clustered. This extends the findings of Guo et al. (2013) to higher luminosities. Since this is not the main point of our paper, we have not dedicated a figure to this here. (We simply note that the offset between the two sets of symbols in Figure~\ref{compareWp} is a direct consequence of this luminosity dependence.)
\begin{figure}
\centering
\includegraphics[scale = .4]{FIGURES/compareWp.eps}
\caption{Ratio of projected two point correlation function of CMASS red galaxies (squares) and SDSS$_{\rm CMASS}$ E+S0 galaxies (triangles, connected by lines) to that of the full CMASS sample (open circles in previous figure). Upper symbols show results for comoving number densities of $0.4\times 10^{-4}h^3{\rm Mpc}^{-3}$; lower symbols are for $1.6\times 10^{-4}h^3{\rm Mpc}^{-3}$. If the conserved tracer/passive-like hypothesis were correct, then the SDSS$_{\rm CMASS}$ galaxies would be about 20\% more strongly clustered than their CMASS counterparts.}
\label{compareWp}
\end{figure}
Next, from each catalog we chose the red subset which had observed $g-i>2.35$, and measured $\xi(s)$ and $w_p(r_p)$ in it. These red galaxies have comoving number densities which are about 20\% smaller than those of their parent volume limited catalogs, and they are also more strongly clustered.
E.g., the red objects have a bias factor which is about $5/3$ times that of the blue objects (those which have $g-i<2.35$). The reddest galaxies in a luminosity threshold sample are known to be more strongly clustered than the rest (e.g. Skibba \& Sheth 2009; Guo et al. 2013), so it is reassuring that, despite the relatively large errors in the photometry, this extremely simple color cut appears to have removed a physically different sample with a substantially smaller clustering signal. These are the measurements which we use for our `conserved tracers' test of passive evolution.
The squares in Figure~\ref{compareWp} show our measurements of $w_p$ for the red CMASS galaxies in our brightest and faintest volume limited catalogs, divided by a fiducial value $w_p^{\rm fid}$, for which we use the measurement for the full CMASS sample reported in Table~2 of Reid et al. (2014). The rarer, more luminous galaxies are clearly more strongly clustered, being offset from $w_p^{\rm fid}$ by a scale independent multiplicative factor over all scales larger than a few Mpc.
The triangles connected by a solid line show similar clustering measurements made in volume limited catalogs of the most luminous SDSS DR7 E+S0 galaxies, with the limits chosen to yield the same comoving number density as the CMASS red galaxies: $z\le (0.2,0.24)$ and {\tt cmodel} $M_i \le (-22.77, -23.17)$. These catalogs have comoving volumes that are about $7\times$ smaller than their CMASS counterparts, so the error bars on the SDSS measurements are correspondingly larger. This shows that the more luminous E+S0s at $z\sim 0.2$ are slightly less clustered than their red CMASS counterparts at $z\sim 0.55$ whereas the less luminous galaxies are substantially less clustered. We find, but do not show here, similar results using $\xi(s)$. This is reassuring, in view of our comments earlier about systematics associated with the $w_p$ measurement (and which our use of ratios to present results mitigates). Since the conserved tracer assumption predicts that the low redshift sample should be more strongly clustered, we conclude that our clustering measurements are inconsistent with pure passive evolution. In fact, for the reasons given at the start of this section, the clustering measurements are inconsistent with any merger model which preserves the rank ordering in luminosity.
\subsection{Systematics and an additional test}
In view of how very passive both $\phi(L)$ and $\phi(M_*)$ seem to be, it is prudent to consider how the clustering test may have gone wrong.
The completeness of the CMASS sample is still under investigation (e.g. Leauthaud et al. 2015 and our Figure~\ref{LFred} suggest it is of order 80 percent). However, because we have been careful to match comoving number densities in SDSS and CMASS, any incompleteness in CMASS would almost certainly mean that our current CMASS samples contain more lower luminosity galaxies than they should have (for a given number density cut). Since lower luminosity galaxies are less strongly clustered, incompleteness in CMASS would mean that we have underestimated the clustering strength of the $z\sim 0.55$ population. Thus, incompleteness in the CMASS sample will only exacerbate the mismatch with the passive evolution prediction.
Another possibility is that we are somehow underestimating the clustering signal of the low-$z$ sample. For example, perhaps the $p$(type) weights we use are not sufficiently reliable, and yield a systematic underestimate of the clustering strength. This may be: the brightest SDSS$_{\rm CMASS}$ E+S0s in Figure~\ref{compareWp} have $w_p/w_p^{\rm fid}\approx 1.7$, and this is not very different from the value of $\sim 1.6$ in the bottom panel of Figure~\ref{xi79}, for which no $p$(type) weights were applied. On the other hand, although we expect E+S0s to be more strongly clustered than the total, this is most dramatic at faint luminosities (e.g., because faint satellite galaxies in clusters tend to be early-type). At the highest luminosities of interest here, most galaxies are E+S0s, so the expected difference is small. E.g., Zehavi et al. (2011) suggest that this difference is less than 10 percent for the most luminous (lowest comoving number density) sample.
Moreover, the similarity we see in the clustering strength of the rarest objects is consistent with the analysis of luminous red galaxies (LRGs) presented in Wake et al. (2008). They find that the clustering of LRGs (approximately equivalent to our rarer more luminous sample) has evolved little between $z=0.55$ and $z=0.2$: it has not increased. They attribute this to mergers involving a small fraction of objects (our Appendix~B discusses a simple toy model which illustrates why the clustering signal decreases because of mergers).
While this agreement may be reassuring, we note that their measurements of $\phi(L)$ were not as precise as ours -- the precision of the $\phi(M_*)$ measurements shown in the previous section leaves little room for mergers.
\begin{figure}
\centering
\includegraphics[scale = .4]{FIGURES/xiLPassive.eps}
\caption{Redshift space correlation function of the full CMASS sample (open circles), and two subsamples of CMASS red galaxies chosen to have the luminosity densities indicated (squares), and SDSS$_{\rm CMASS}$ E+S0 galaxies with same luminosity density (triangles). Bottom panel shows the ratio of these measurements to that in the full sample. If the conserved tracer/passive-like hypothesis were correct, then the SDSS$_{\rm CMASS}$ galaxies would be about 15\% more strongly clustered than their CMASS counterparts.}
\label{xiLpassive}
\end{figure}
A final possibility is that we are simply not making the appropriate comparison between the low and high redshift samples. Motivated by the fact that the stellar mass of a galaxy which results from the merger of two passive galaxies will almost certainly be close to that of the sum of the masses of its progenitors, Tojeiro et al. (2012) advocate testing for passive evolution by using samples matched in (comoving) luminosity- or stellar-mass density, rather than in number density. (A little thought shows that this is not as clean a test as advertised: If CMASS galaxies merged with non-CMASS galaxies, then although the number density of CMASS galaxies is conserved, the luminosity- or stellar-mass density is not, so matching them -- rather than number density -- at different times is no longer appropriate.) Since we have found that $\phi(L)$ and $\phi(M_*)$ are so close to passive, we do not expect $L$ or $M_*$- weighting each galaxy to make much of a difference. Nevertheless, we have performed such a test using the {\tt cmodel} luminosities.
The brighter and fainter samples studied previously have luminosity densities of 0.7 and $1.7\times 10^{-4}L_{23}/(h^{-1}{\rm Mpc})^3$, where $L_{23}$ is the luminosity associated with {\tt cmodel} $M_i=-23$. (The mean luminosity in each sample is 1.45 and 1.05$L_{23}$.) We then measured the luminosity-weighted $\xi(s)$ and $w_p(r_p)$ in these matched CMASS and SDSS$_{\rm CMASS}$ samples (i.e., for the SDSS$_{\rm CMASS}$ sample, each galaxy was weighted by $p$(E)+$p$(S0) times $L/L_{23}$). To make the point that both give similar results, we now show $\xi(s)$, since we previously showed $w_p$. Note that the conserved tracer/passive evolution prediction for the redshift space ratio -- a growth of about 15\% between $z=0.55$ and $z=0.2$ -- is slightly smaller than for the real-space ratio (equation~\ref{xisratio}).
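A minimal sketch of a weighted clustering measurement of this kind is given below, using the simple natural estimator $\xi = DD/RR - 1$ (a production analysis would typically use the Landy-Szalay estimator and much larger random catalogs); the per-galaxy weights play the role of $[p({\rm E})+p({\rm S0})]\,L/L_{23}$.

```python
import numpy as np

def pair_counts(pos, w, bins):
    """Weighted pair counts in separation bins (brute force; small samples only)."""
    d = np.sqrt(((pos[:, None, :] - pos[None, :, :]) ** 2).sum(-1))
    iu = np.triu_indices(len(pos), k=1)          # each pair counted once
    ww = (w[:, None] * w[None, :])[iu]
    return np.histogram(d[iu], bins=bins, weights=ww)[0]

def weighted_xi(data, w, randoms, bins):
    """Natural estimator xi = DD/RR - 1 with per-galaxy weights."""
    dd = pair_counts(data, w, bins)
    rr = pair_counts(randoms, np.ones(len(randoms)), bins)
    dd_norm = dd / (0.5 * (w.sum() ** 2 - (w ** 2).sum()))  # total weighted pairs
    nr = len(randoms)
    rr_norm = rr / (0.5 * nr * (nr - 1))
    return dd_norm / rr_norm - 1.0
```

By construction the normalization makes the estimate invariant under an overall rescaling of the weights, as a luminosity weighting in arbitrary units requires.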
Squares and triangles in Figure~\ref{xiLpassive} show results for our CMASS and SDSS$_{\rm CMASS}$ subsamples. The open circles show $\xi(s)$ for the full CMASS sample taken from Table~2 in Reid et al. (2014; since these were not luminosity-weighted, they correspond to the open circles in Figure~\ref{xi79}). The bottom panel shows the ratio of our luminosity-weighted measurements to the open circles. As for $w_p$, this ratio is rather scale independent. And, as before, the SDSS$_{\rm CMASS}$ signal does not exceed that for CMASS on scales larger than a few Mpc. (The drop on large scales for the most luminous sample is most likely due to cosmic variance.) Since the prediction was an increase of about 15\%, we conclude that the conserved tracer/passive evolution hypothesis is inconsistent with the results of this test as well.
Having argued that our clustering results seem to be robust, we now consider the possibility that systematic errors in our determination of $L$ or $M_*$ are to blame. We noted at the start of this section that because we match comoving densities we are immune to systematic problems in the models we use for converting from apparent magnitudes to luminosities ($k+e$ corrections) and from $L$ to $M_*$, provided these leave the rank ordering the same. This same argument applies to systematics in the photometric reductions (e.g., Meert et al. 2014 showed that our Sersic reductions are slightly biased; also see D'Souza et al. 2015) if they leave the rank ordering of luminosities the same. However, this is not quite the full story. Suppose the systematic is different for CMASS galaxies than it is for SDSS$_{\rm CMASS}$ (e.g., because of surface-brightness effects, etc.), but it preserves the rank ordering in each. Then, although our clustering measurements will not change (because the rank ordering is unchanged), our interpretation will, because $\phi(L)$ and $\phi(M_*)$ will no longer be consistent with passive evolution. This might provide room for the mergers which our clustering results suggest have occurred. Although the tests in Meert et al. (2014) suggest that such a systematic is not present in {\tt PyMorph}, we believe our results motivate further testing, ideally by other groups with different analysis pipelines.
\section{Conclusions}
We showed that our Sersic-based photometric reductions imply more luminous, massive galaxies in the CMASS sample (at $z\sim 0.55$) than when {\tt cmodel} photometry is used (Figure~\ref{pymorph}). This difference is consistent with the effects of Sersic- rather than Petrosian or de~Vaucouleurs-based photometry on the SDSS main galaxy sample at $z\sim 0.1$. This implies a significant revision of the high mass end of the correlation between stellar and halo mass, and impacts the need for feedback processes operating at $z\sim 1$ (e.g. Kravtsov et al. 2014; Shankar et al. 2014).
Inferences about the evolution of the luminosity and stellar mass functions, and hence of the $M_*-M_{\rm halo}$ relation, depend strongly on the assumed and uncertain $k+e$ corrections. In most stellar population synthesis codes, these depend strongly on the age of the stellar population. The models of M09 suggest that CMASS galaxies are about 2~Gyrs younger than Wake et al. (2006) assumed for LRGs at these same redshifts (Figure~\ref{keage}). They also indicate that more luminous CMASS galaxies are older (although not as old as the LRGs were assumed to be). This implies differential evolution across the population, and suggests that use of a single $k+e$ correction for all galaxies at the same $z$ will lead to biases, even if they are all early-type.
To test the hypothesis that CMASS galaxies evolve passively to populate the high mass/luminosity end of the SDSS sample, we described how to use the M09 models to construct a mock CMASS sample from the SDSS sample. We called the result the SDSS$_{\rm CMASS}$ sample; if passive evolution is accurate then the luminosity and stellar mass functions in it should be the same as in CMASS. They are: passive evolution is an even better approximation if we compare red galaxies in CMASS (observed $g-i>2.35$) with E+S0s in SDSS$_{\rm CMASS}$ (Figures~\ref{LFred} and~\ref{MFred}).
If the CMASS and SDSS$_{\rm CMASS}$ galaxies are indeed related by simple passive evolution, then the conservation of comoving number density implies that the SDSS$_{\rm CMASS}$ population should be about 20 percent more strongly clustered than their counterparts in CMASS. To test this prediction, we matched samples in comoving number density ($\sim 2\times 10^{-4} h^3$~Mpc$^{-3}$) and type ($g-i>2.35$ in CMASS and morphological type E+S0 in SDSS$_{\rm CMASS}$), and measured the clustering signals in each, finding that the SDSS$_{\rm CMASS}$ sample is less clustered than its CMASS counterpart (Figure~\ref{compareWp}). I.e., the low redshift clustering signal lies well below the passive evolution/conserved tracer prediction.
We repeated the test using more luminous, less abundant galaxies (comoving densities $\sim 0.5\times 10^{-4} h^3$~Mpc$^{-3}$). In both CMASS and SDSS$_{\rm CMASS}$, the more luminous objects are more strongly clustered. In addition, at fixed luminosity, the redder CMASS objects are more strongly clustered: simply requiring $g-i>2.35$ removes a physically different, substantially less clustered population of objects from the CMASS sample.
(Both these findings extend recent analyses of trends with luminosity and color in the CMASS sample by Guo et al. (2013) to higher luminosities and masses.) However, although the differences between the CMASS and SDSS$_{\rm CMASS}$ clustering strengths decrease for the rarer objects, in no case was the SDSS$_{\rm CMASS}$ sample more strongly clustered than the corresponding CMASS sample (Figure~\ref{compareWp}): pure passive evolution is always a bad approximation.
The nature of the clustering test means that our measurements are immune to systematic biases in the models we use for converting from apparent magnitudes to luminosities ($k+e$ corrections) and from $L$ to $M_*$, provided these leave the rank ordering of $L$ or $M_*$ the same. They are also inconsistent with any merger model which preserves the rank ordering in luminosity.
Incompleteness in the CMASS sample will likely amplify rather than mitigate this discrepancy. Matching the CMASS and SDSS$_{\rm CMASS}$ samples in comoving luminosity (rather than number) density, and weighting each galaxy by its luminosity when measuring the clustering signal leads to a similar conclusion: since the low redshift sample is never more strongly clustered than the high redshift sample (Figure~\ref{xiLpassive}), pure passive evolution is inconsistent with our measurements.
While this is in conflict with our findings based on the luminosity and stellar mass functions, it is consistent with analyses of the abundance and clustering of LRGs in Wake et al. (2008). Indeed, our finding that clustering rules out passive evolution means that the merger models in Wake et al. (2008) for the evolution of LRGs are also likely to be relevant for CMASS. There also exists a large body of work which suggests -- primarily on the basis of number counts alone -- that there must have been of order 0.1 to 0.2~dex mass growth via mergers over the redshifts and masses of interest here (see e.g. Marchesini et al. 2014; Ownsworth et al. 2014 and references therein). Although we outlined a simple toy model for how abundance {\em and} clustering measurements constrain merger models (Appendix~B), the exquisitely passive nature of $\phi(M_*)$ which we have found is a puzzle which we hope will spur further work. In particular, the larger CMASS sample size has allowed a more precise determination of the luminosity and stellar mass functions than was possible in Wake et al. (2006); this means that, if the M09 models -- on which our $k+e$ corrections, and hence our evidence for passive evolution, are based -- are correct, then there is substantially less room for mergers to play a role.
The discrepancy between the abundances which are consistent with passive evolution and the clustering measurements which are not, might be alleviated if our photometric reductions suffer from a systematic bias which affects $z\sim 0.2$ galaxies differently from those at $z\sim 0.6$. While the tests in Meert et al. (2014) have failed to uncover such a systematic, we believe this tension motivates further work -- perhaps by other groups with different photometric analysis pipelines -- along these lines.
Before closing, we note that the passive evolution/conserved tracers prediction is robust to currently acceptable changes in the $\Lambda$CDM model parameters. Hence, if both the stellar population synthesis models and our clustering analyses are correct -- passive evolution of the stellar population and weaker than expected clustering at late times -- then the overprediction of the clustering signal may be pointing to new physics. E.g. faster than expected expansion between $z=0.55$ and the present would lead to a freezing-out of structure formation, thus reducing the tension between the SSP models and the lack of evolution in the clustering signal. However, this is a rather radical solution -- one to which we are reluctant to turn -- given current uncertainties on the models and the modest comoving volume currently available to perform the clustering test. Nevertheless, as larger samples over larger comoving volumes become available, we believe that the combination of SSP and clustering analyses we have used here will yield interesting results.
\subsection*{Acknowledgements}
We are grateful to A. Montero-Dorta for helpful discussions about his work, and the staff of the LYTE center for its hospitality when this work was completed.
\section{Introduction}
Sentiment analysis (SA) has received much attention in both academia and industry for its great value \cite{zhou2020sentix,zhou2020modeling}. Instead of assuming that the entire text has a single overall sentiment polarity, more and more researchers have turned to fine-grained SA, such as event-level SA \cite{pontiki-etal-2014-semeval,patil2018event,petrescu2019sentiment}.
Event-level SA aims to identify the feelings or opinions expressed by users on a social platform about real-time events from financial news, sports, weather, entertainment, etc. It is vital for many applications, such as stock prediction \cite{makrehchi2013stock} and public opinion analysis \cite{petrescu2019sentiment}.
\begin{figure}[t!]
\centering
\includegraphics[scale=0.14]{Example.pdf}
\caption{An example of structured event-level SA.}
\label{fig:example}
\end{figure}
Previous studies on event-level SA mainly utilize the related snippet or context of the event \cite{patil2018event,makrehchi2013stock,fukuhara2007understanding,jagdale2016sentiment,zhou2013sentiment}. \citet{petrescu2019sentiment,patil2018event} detected bursty topics as events via LDA or clustering algorithms and inferred their sentiments.
Moreover, \citet{deng2015joint} recognized sentiments toward entities and events that appear as a few terms in the text. However, these studies focus on modeling the event-related text, neglecting the influence of the event's inherent structure.
According to our observations, an event's structured arguments, such as subject, object, time and location, play an important role in event-level SA. As shown in Figure \ref{fig:example}, the events with the same trigger ``increase" indicate opposite sentiments for different subjects, namely ``operating costs" and ``revenue". Additionally, for the two events (the second and fourth rows in the table) with the same negative sentiment, the objects ``10\%-20\%" and ``64.65\%" help reveal the strength of the sentiment polarity.
Therefore, this work aims to enhance event-level SA with the structured arguments.
Noting that there are few studies on event SA with fine-grained structured arguments, we reformalize a structured event-level SA task, which extracts events with their arguments and predicts their sentiments. This task involves four challenges:
C1) The multiple subtasks (e.g., trigger extraction, argument extraction, sentiment classification) are related to each other, and performing them independently leads to error propagation;
C2) One document may contain multiple events with different sentiments. Taking Figure \ref{fig:example} as an example, there are four events whose sentiment polarities differ from one another;
C3) Unlike general aspect-level SA, an event consists of a trigger and arguments with associated roles, which is harder to model than a topic; thus, existing aspect-based SA models cannot be applied to this task directly;
C4) There is a lack of labeled datasets for this task. Existing datasets mainly focus on event extraction or aspect-level SA, while the sentiment of structured events is not well studied.
To deal with the above challenges, we present an end-to-end approach for structured event-level SA, which reduces the error propagation among the subtasks (C1).
Particularly, we first design a feature-enhanced trigger extractor to extract multiple events' triggers simultaneously (C2).
Second, to better model the event information, we design a trigger-enhanced argument extractor and an event-level sentiment classifier, which take trigger and argument information into account (C3).
Finally, we collect and label a real-world dataset in the finance domain for this task (C4). This dataset provides a new perspective for SA and a new benchmark for event-level SA.
The main contributions of this paper are summarized as follows.
\begin{itemize}[leftmargin=*, align=left]
\item We reformalize a structured event-level SA task, which focuses on enhancing event-level SA with structured arguments.
\item To mitigate the effect of error propagation, we propose $\textit{E}^3 \textit{SA}$ to model the relationships among the multiple tasks and multiple events by taking structured event elements into account.
\item Given the lack of off-the-shelf datasets, we release a large-scale dataset for this task. Extensive experiments show that our model outperforms all the strong baselines significantly.
\end{itemize}
\section{Related Work}
In this section, we mainly review the most related papers about event extraction, sentiment analysis and sentiment analysis on events.
\paragraph{\textbf{Event Extraction}}
Event extraction (EE) is a critical task in public opinion monitoring and financial field \cite{li2020unified,yang2019using,DBLP:conf/sigir/LiaoZLZT21,DBLP:conf/sigir/FengLLNC21}.
It mainly has two key methods, pipeline and joint model.
The pipeline approach performs trigger extraction and argument role assignment independently \cite{chen2015event,nguyen2015event}, ignoring the relationships among the elements in events.
The joint model generally extracts the related elements at the same time \cite{nguyen2016joint,zhang2019joint,wadden2019entity,li2021document}.
Recently, researchers have paid attention to document-level EE, which is more complex.
\citet{yang2018dcfee} proposed a DCFEE model to extract the event expressed by multiple sentences based on distant supervision.
\citet{liu2018jointly} and \citet{zheng2019doc2edag} proposed to extract multiple events jointly.
\citet{du-cardie-2020-event} cast EE as a question answering task, extracting the event arguments without propagating entity-extraction errors. Unlike these works, we focus on modeling the events' sentiment information.
\paragraph{\textbf{Sentiment Analysis}}
Sentiment analysis can be divided into three types: sentence-level, document-level and aspect-level SA \cite{liu2012sentiment,bakshi2016opinion}.
We mainly review the most related works about aspect-based SA (ABSA) \cite{pontiki-etal-2014-semeval,zhou2019deep}.
This task aims to predict the sentiments of the aspects in the document, where the aspects are categories, topics or target terms.
To take the aspect into account, attention-based models were designed to capture the relationships between the aspect and its context \cite{fan2018multi,zeng2019lcf}.
Moreover, position information and syntax information were integrated to better model the aspect, such as a category or target \cite{li2018transformation,zhou2020sk}. Applying these models directly to our task would reduce performance, since we focus on modeling structured events with complex argument information.
\paragraph{\textbf{Sentiment Analysis on Events}}
There is a line of work on event-based sentiment analysis \cite{patil2018event,makrehchi2013stock,petrescu2019sentiment,fukuhara2007understanding,jagdale2016sentiment,zhou2013sentiment,ebrahimi2017challenges}, which differs from our task.
Most of these studies focus on detecting an event via topic models and judging its sentiment, where the event is a category, topic, or term, and the detailed information (e.g., arguments) is ignored.
Additionally, they only consider one event per sentence or document.
In fact, a text may contain multiple events, and an event is more than just a topic.
To address these problems, we propose an end-to-end approach for event-level sentiment analysis, which aims to identify the events and their sentiments.
\section{Dataset}
\paragraph{\textbf{Data Collection and Annotation}}
Due to lack of annotated resources, we collect and annotate a financial corpus with events and their sentiment polarities, and obtain an event-level SA corpus.
Specifically, we collect 3500 financial news articles from a portal website\footnote{Note that all the texts are openly accessible news on https://www.eastmoney.com/ without personal information.}.
We filter the documents that contain fewer than 50 or more than 500 words and finally obtain 3500 short documents.
Then, we give annotation guidelines and ten examples to eight human annotators, who manually annotate triggers, arguments and the sentiment labels via Baidu's EasyData platform\footnote{http://ai.baidu.com/easydata/}.
Note that we only consider the important events that may influence companies' stock or users' decisions, because there are many events in a text and some of them are not useful for the downstream tasks.
To ensure the labeling quality, each document is annotated by three annotators in order.
Moreover, we randomly select 100 examples and ask another three annotators to label these documents.
We measure pairwise inter-annotator agreement on tuples between the two versions using Krippendorff's alpha coefficient \cite{krippendorff2011computing}.
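For reference, the nominal-data form of Krippendorff's alpha can be computed from a coincidence matrix as sketched below; this is a generic implementation for illustration, not the authors' evaluation code.

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal data.
    `units` is a list of rating tuples, one per annotated item
    (items with fewer than two ratings are unpairable and skipped)."""
    o = Counter()                        # coincidence matrix o[(c, k)]
    for ratings in units:
        m = len(ratings)
        if m < 2:
            continue
        for c, k in permutations(ratings, 2):
            o[(c, k)] += 1.0 / (m - 1)
    n_c = Counter()                      # per-category totals (row sums of o)
    for (c, _), v in o.items():
        n_c[c] += v
    n = sum(n_c.values())
    d_obs = sum(v for (c, k), v in o.items() if c != k)
    d_exp = sum(n_c[c] * n_c[k] for c in n_c for k in n_c if c != k) / (n - 1)
    return 1.0 - d_obs / d_exp
```

Perfect agreement yields $\alpha = 1$, while systematic disagreement drives $\alpha$ negative.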
\begin{table}[t!]
\centering
\caption{The statistic information of our dataset.
}
\label{table:dataset}
\scriptsize
\setlength{\tabcolsep}{0.6mm}{\begin{tabular}{lcccccccccc}
\hlineB{4}
& \#Doc & \#AvgLen & \#E & \#MultiE & \#PosE & \#NegE & \#NeuE & \#AvgS & \#MultiP & \#E-Across \\ \hline
Train & 2142 & 148.78 & 4210 & 1281 & 2659 & 635 & 916 & 3.41 & 1134 & 474 \\
Dev & 500 & 151.75 & 962 & 293 & 591 & 154 & 216 & 3.41 & 265 & 104 \\
Test & 500 & 148.14 & 1005 & 317 & 662 & 138 & 205 & 3.51 & 280 & 122 \\
\hline
Total & 3142 & 149.15 & 6177 & 1891 & 3912 & 927 & 1337 & 3.42 & 1679 & 593
\\
\hlineB{4}
\end{tabular}}
\end{table}
\begin{table}[t!]
\centering
\scriptsize
\caption{The comparison with the existing datasets.
}
\label{table:existing datasets}
\setlength{\tabcolsep}{1.0mm}{\begin{tabular}{ll|ccccc}
\hlineB{4}
Task & Dataset & Event & Doc & E-Across & MultiE & Sentiment \\
\hline
\multirow{3}{*}{EE} & ACE05 & $\surd$ & $\surd$ & - & - & - \\
& MUC-4 Event & $\surd$ & $\surd$ & $\surd$ & $\surd$ & - \\
& DocEDAG & $\surd$ & $\surd$ & $\surd$ & $\surd$ & - \\ \hline
\multirow{3}{*}{ABSA} & Twitter & - & - & - & - & $\surd$ \\
& Rest 14 & - & $\surd$ & - & $\surd$ & $\surd$ \\
& Lap 14 & - & $\surd$ & - & $\surd$ & $\surd$ \\ \hline
Our task & Our dataset & $\surd$ & $\surd$ & $\surd$ & $\surd$ & $\surd$ \\
\hlineB{4}
\end{tabular}}
\end{table}
\paragraph{\textbf{Data Analysis}}
We obtain 3142 samples after filtering the examples without events (Table \ref{table:dataset}).
This dataset has several characteristics: 1) One document often contains multiple events; 2) The events in a document may have different sentiment polarities (\#MultiP); 3) The arguments of the same event may be spread across different sentences (\#E-Across).
We compare our dataset with the existing ones to clarify the differences.
First, EE focuses on extracting the events while ignoring their sentiments, and ABSA focuses on the aspects' sentiments while ignoring the structure information.
Our task aims to judge the events' sentiments, which is more challenging than these two tasks.
Second, though most datasets are at the document level, one event or aspect is always confined to a single sentence.
In our dataset, one event may span multiple sentences.
Third, our dataset is larger than or comparable to most existing datasets. ACE2005 and MUC-4 contain fewer than 1,500 documents.
The size of DOC2EDAG is 32,040, but it is labeled via distant supervision.
\section{Our Approach}
In this paper, we propose an $\textit{E}^3 \textit{SA}$ framework for structured event-level SA (Figure \ref{fig:framework}).
To reduce the propagated errors in the pipeline,
we propose a joint approach.
$\textit{E}^3 \textit{SA}$ consists of four parts: (i) contextualized word embedding module that models the document with contextual representation; (ii) feature-enhanced trigger extractor that extracts all triggers in the documents with additional features such as POS and NER labels;
(iii) trigger-enhanced argument extractor that extracts the arguments concerning the given trigger by taking trigger information into account; (iv) event-level sentiment classifier that judges the events’ sentiment polarities with argument information.
Formally, given a document $d = \{s_1, ..., s_{|d|}\}$, where $|d|$ is the number of sentences, $s_i$ is the $i$-th sentence in the document $d$, which contains $|s_i|$ words, $\{w^{(i)}_{1}, ..., w^{(i)}_{|s_i|}\}$. The goal of this task is to extract all the events $E=\{\mathrm{event}_1, ..., \mathrm{event}_{|E|}\}$ in the document, where the $k$-th event $\mathrm{event}_k=(t_k, \mathrm{a}_k, y_k)$ is a tuple consisting of a trigger $t_k$, arguments $\mathrm{a}_k$ (subject $\mathrm{sub}_k$, object $\mathrm{obj}_k$, time $\mathrm{time}_k$, location $\mathrm{loc}_k$) and a sentiment polarity $y_k$. The event sentiment polarity $y_k \in \{P, N, O\}$ represents positive, negative and neutral, respectively.
We aim to maximize the data likelihood of the training set as follows.
\begin{equation}
\nonumber
\scriptsize
\begin{aligned}
\prod_{k=1}^{|E|}{p(\mathrm{event}_k|d)} = & \prod_{k=1}^{|E|}{p((t_k, \mathrm{a}_k, y_k)|d)}
= \prod_{k=1}^{|E|}{p(t_k|d)} p((\mathrm{a}_k, y_k)|t_k, d) \\
= & \prod_{k=1}^{|E|}{\underbrace{p(t_k|d)}_{\text{Trigger Extraction}}} \underbrace{p(\mathrm{a}_k|t_k, d)}_{\text{Argument Extraction}} \underbrace{p(y_k|t_k, \mathrm{a}_k, d)}_{\text{Sentiment Classification}}
\end{aligned}
\end{equation}
\subsection{Contextualized Word Embedding}
In the word embedding module, we map each word $x_i$ of the input document $d$ into a continuous vector space.
Contextualized embeddings produced by pre-trained language models \cite{devlin2019bert} have proved capable of improving performance on a variety of tasks.
Here, we employ the contextualized representations produced by \textit{BERT-base} to obtain the word embeddings.
Specifically, we input the document
$\{\mathrm{[CLS]}, w_{1}, w_{2}, ..., w_{m}, \mathrm{[SEP]}\}$ into \textit{BERT-base} and obtain the word embeddings $\{x^w_{\mathrm{[CLS]}}, x^w_{1}, x^w_{2}, ..., x^w_{m}, x^w_{\mathrm{[SEP]}}\}$, where $m$ is the number of words in $d$, $\mathrm{[CLS]}$ is BERT's special classification token, and $\mathrm{[SEP]}$ is the special separator token.
\subsection{Feature-Enhanced Trigger Extractor}
The trigger extractor identifies whether each word triggers an event. We formulate trigger extraction as a token-level classification task and extract all triggers simultaneously.
We integrate semantic features (e.g., POS and NER) into the text modeling because they are useful for this task:
most triggers are verbs, and most arguments are entities or nouns.
We use Stanza \cite{qi2020stanza} to obtain the POS and NER tags of the words.
We feed the concatenation of three types of embeddings, the word embedding $x^w$, POS embedding $x^{pos}$, and NER embedding $x^{ner}$, to a feed-forward network (FFN): $x^f_i = \mathrm{FFN}(\mathrm{concat}(x^w_i, x^{pos}_i, x^{ner}_i))$.
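The fusion step above can be sketched in plain Python. The dimensions, the single ReLU layer, and the random initialization are illustrative assumptions, not the paper's actual configuration:

```python
import random

random.seed(0)

def randvec(n):
    return [random.gauss(0.0, 1.0) for _ in range(n)]

def ffn(x, W, b):
    # One linear layer followed by ReLU; W has shape (d_out, d_in).
    return [max(0.0, sum(w * v for w, v in zip(row, x)) + bi)
            for row, bi in zip(W, b)]

# Toy dimensions (hypothetical; the paper does not report the FFN size).
d_w, d_pos, d_ner, d_f = 8, 4, 4, 10
W = [randvec(d_w + d_pos + d_ner) for _ in range(d_f)]
b = [0.0] * d_f

# Fused representation of one word: x_f_i = FFN(concat(x_w_i, x_pos_i, x_ner_i)).
x_w, x_pos, x_ner = randvec(d_w), randvec(d_pos), randvec(d_ner)
x_f = ffn(x_w + x_pos + x_ner, W, b)  # list '+' acts as concatenation
print(len(x_f))  # 10
```

In the model this layer is learned end-to-end; the sketch only shows the shapes involved.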
Inspired by \cite{wei-etal-2020-novel,yang-etal-2019-exploring}, we train start and end classifiers to make the model focus on the triggers' boundaries. The probabilities that the $i$-th word is a trigger's start or end are computed as
\begin{equation}
\small
\nonumber
\begin{aligned}
p^{t^s}_{i} = \mathrm{Sigmoid}(W^{t^s}x^f_i+b^{t^s}) ;
p^{t^e}_{i} = \mathrm{Sigmoid}(W^{t^e}x^f_i+b^{t^e})
\end{aligned}
\end{equation}
where the superscripts $^s$ and $^e$ denote start and end indices, and $W^{t^s}$, $W^{t^e}$, $b^{t^s}$, and $b^{t^e}$ are learnable weights.
As is standard, we adopt the cross entropy (CE) between the predicted probabilities and the ground-truth labels as the loss function for fine-tuning.
\begin{equation}
\nonumber
\mathcal{L}_t = \frac{1}{m}\sum_{i=1}^{m}{\mathrm{CE}(y^{t^s}_i, p^{t^s}_{i})+\mathrm{CE}(y^{t^e}_i, p^{t^e}_{i})}
\vspace{-1mm}
\end{equation}
where $y^{t^s}_i$/$y^{t^e}_i$ is $1$ if the $i$-th word is a trigger's start/end.
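A minimal numeric sketch of the start/end classifiers and their loss. The logits and gold labels below are hypothetical (a 4-word document whose trigger spans words 0 and 1):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def bce(y, p):
    # Binary cross-entropy for a single token.
    eps = 1e-12
    return -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))

# Hypothetical logits W x^f + b for each word (all numbers are made up).
start_logits = [2.0, -1.0, -3.0, 0.5]
end_logits = [-2.0, 1.5, -1.0, -0.5]
y_start = [1, 0, 0, 0]  # word 0 is the trigger's start
y_end = [0, 1, 0, 0]    # word 1 is the trigger's end

p_start = [sigmoid(z) for z in start_logits]
p_end = [sigmoid(z) for z in end_logits]

# L_t averages the start and end CE terms over the m words.
m = len(start_logits)
loss_t = sum(bce(ys, ps) + bce(ye, pe)
             for ys, ps, ye, pe in zip(y_start, p_start, y_end, p_end)) / m
print(round(loss_t, 4))  # 0.6446
```

A full implementation would use a numerically stable fused sigmoid-plus-CE; the explicit form above just mirrors the equations.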
\begin{figure}[t!]
\centering
\includegraphics[scale=0.115]{Framework.pdf}
\caption{Our $\textit{E}^3 \textit{SA}$ framework.}
\label{fig:framework}
\end{figure}
\begin{table*}[t!]
\centering
\caption{The results of event-level SA with extracted arguments. The best scores are marked with bold.}
\label{table:main results}
\scriptsize
\setlength{\tabcolsep}{1.2mm}{\begin{tabular}{l|ccc|ccc|ccc|ccc|ccc|ccc}
\hlineB{4}
& \multicolumn{3}{c|}{\multirow{2}{*}{Trigger}} & \multicolumn{12}{c|}{Arguments} & \multicolumn{3}{c}{\multirow{2}{*}{Sentiment}} \\
& \multicolumn{3}{c|}{} & \multicolumn{3}{c}{Sub} & \multicolumn{3}{c}{Obj} & \multicolumn{3}{c}{Time} & \multicolumn{3}{c|}{Loc} & \multicolumn{3}{c}{} \\
& P & R & F1 & P & R & F1 & P & R & F1 & P & R & F1 & P & R & F1 & P & R & F1 \\ \hline
DCFEE-O & 41.69 & 27.59 & 33.21 & 43.40 & 14.73 & 21.99
&50.79 & 19.30 & 27.97
&71.90 & 48.46 & 57.89
&0.00 & 0.00 & 0.00 &19.75 & 13.07 & 15.73 \\
DCFEE-M & 33.87& 44.64 & 38.52 & 34.66 & 19.00 & 24.55
&40.81 & 25.17 & 31.14
&58.62 & 59.91 & 59.26
&16.67 & 9.09 & 11.76 & 14.60 & 19.24 & 16.60 \\
GreedyDec & \textbf{67.23} & 24.62 & 36.04 & 67.78 & 16.12 & 26.05
&63.74 & 16.62 & 26.36
&79.08 & 53.30 & 63.68
&0.00 & 0.00 & 0.00 & 15.93 & 5.83 & 8.54 \\
Doc2EDAG & 38.94 & 16.49 & 23.17 & 62.11 & 14.03 & 22.89
&58.75 & 14.03 & 22.65
&56.25 & 11.89 & 19.64
&0.00 & 0.00 & 0.00 & 30.73 & 13.01 & 18.28 \\
BERT-QA & 51.40 & 60.85 & 55.73 & 69.16 & 55.22 & 61.80
&\textbf{69.96} & 52.84 & 59.20
&75.58 & 57.27 & 65.16
&0.00 & 0.00 & 0.00 & 44.53 & 52.72 & 48.28 \\ \hline
$\textit{E}^3 \textit{SA}$ (Ours) & 54.79 & \textbf{62.82} & \textbf{58.53} & \textbf{69.83} & \textbf{60.80} & \textbf{65.00} & 64.68 & \textbf{55.02} & \textbf{59.46} &\textbf{89.54} & \textbf{60.35} & \textbf{72.11} &\textbf{66.67} & \textbf{18.18} & \textbf{28.57} & \textbf{48.24} & \textbf{55.30} & \textbf{51.53} \\
\hlineB{4}
\end{tabular}}
\end{table*}
\subsection{Trigger-Enhanced Argument Extractor}
The argument extractor identifies the arguments related to a given trigger.
To better capture the trigger information, we design a trigger-enhanced argument extractor that integrates the trigger's representation and position information into the word representations.
For the trigger representation, we use its head and tail word representations.
Also, we define each word's position index according to its relative distance to the trigger.
The word position embedding $x^{\mathrm{position}_k}_{i}$ specific to a trigger $t_k$ is looked up in a position embedding matrix, which is randomly initialized and updated during training.
Then, we concatenate the trigger's representation and the position embedding with the feature-enhanced word representation $x^f_i$ and feed them into an FFN module, $x^{t_k}_i = \mathrm{FFN}(\mathrm{concat}(x^f_i, x^{f}_{t_k^s}, x^{f}_{t_k^e}, x^{\mathrm{position}_k}_{i}))$,
where $x^{f}_{t_k^s}$ and $x^{f}_{t_k^e}$ are the head and tail representations of the trigger $t_k$ obtained from $x^f$.
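For concreteness, one plausible indexing scheme for the relative positions is sketched below; the paper only says the index follows the relative distance to the trigger, so the exact convention (negative left of the span, zero inside, positive right) is an assumption:

```python
def relative_positions(m, t_s, t_e):
    # Relative position index of each of m words w.r.t. a trigger
    # span [t_s, t_e] (0-indexed, inclusive).
    pos = []
    for i in range(m):
        if i < t_s:
            pos.append(i - t_s)   # negative: left of the trigger
        elif i > t_e:
            pos.append(i - t_e)   # positive: right of the trigger
        else:
            pos.append(0)         # inside the trigger span
    return pos

print(relative_positions(7, 2, 3))  # [-2, -1, 0, 0, 1, 2, 3]
```

Each index is then mapped through the learned position embedding matrix and concatenated with $x^f_i$ and the trigger's head/tail representations.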
Similarly, a word $x_i$ is predicted as the start or end of an argument playing role $r$ w.r.t. $t_k$ with probabilities
\begin{equation}
\nonumber
\begin{aligned}
p^{r_k^s}_{i} = \mathrm{Sigmoid}(W^{r^s}x^{t_k}_i+b^{r^s});
p^{r_k^e}_{i} = \mathrm{Sigmoid}(W^{r^e}x^{t_k}_i+b^{r^e})
\end{aligned}
\end{equation}
The loss function for argument extraction is,
\begin{equation}
\nonumber
\scriptsize
\begin{aligned}
\mathcal{L}_a = \frac{1}{|E| \times |R| \times m}\sum_{k=1}^{|E|}\sum_{r \in R}\sum_{i=1}^{m}{
{\mathrm{CE}(y^{r_k^s}_i, p^{r_k^s}_{i})+ \mathrm{CE}(y^{r_k^e}_i, p^{r_k^e}_{i})}
}
\end{aligned}
\end{equation}
where $R$ is the set of roles, including subject, object, time and location.
\subsection{Event-Level Sentiment Classifier}
Besides the trigger, argument information can also help model the event and its sentiment.
Thus, we encode the argument role into an embedding $x^{r_k}_{i}$, which captures not only the position but also the type of the arguments.
We integrate it with the trigger-enhanced word representation $x^{t_k}_i$. Then we adopt a max-pooling layer (MaxPooling) to obtain the event representation, $v^{\mathrm{event}_k} = \mathrm{MaxPooling}(\mathrm{concat}(x^{t_k}_i, x^{r_k}_{i}))$.
We feed the event representation $v^{\mathrm{event}_k}$ into a softmax layer for sentiment classification, $p_k = \mathrm{Softmax}(W^{c}{v^{\mathrm{event}_k}}+b^{c})$,
where $W^{c}$ and $b^c$ are the learnable parameters.
Then, we calculate the loss of SA, $\mathcal{L}_c = \frac{1}{|E|}\sum_{k=1}^{|E|}\mathrm{CE}{(y_k, p_k)}$.
Finally, to capture the relationships among the multiple subtasks, we train them jointly by summing the losses, $\mathcal{L} = \mathcal{L}_t + \mathcal{L}_a + \mathcal{L}_c$.
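The event-representation and joint-loss computation can be sketched as follows; the vectors, the identity "linear layer", and the placeholder subtask losses are illustrative assumptions:

```python
import math

def softmax(z):
    mx = max(z)
    e = [math.exp(v - mx) for v in z]
    s = sum(e)
    return [v / s for v in e]

def max_pool(vectors):
    # Element-wise max over the word positions -> event representation.
    return [max(col) for col in zip(*vectors)]

# Toy trigger-enhanced word reps concatenated with role embeddings
# (numbers are hypothetical).
words = [
    [0.2, -1.0, 0.5],
    [1.1, 0.3, -0.2],
    [-0.4, 0.9, 0.0],
]
v_event = max_pool(words)
print(v_event)  # [1.1, 0.9, 0.5]

# Sentiment distribution over {P, N, O}; the linear layer W^c is taken
# as the identity here purely for brevity.
p = softmax(v_event)
L_t, L_a = 0.31, 0.52       # placeholder subtask losses
L_c = -math.log(p[0])       # CE with gold label "positive"
L = L_t + L_a + L_c         # joint objective: simple sum of the three losses
```

The sum of losses is the simplest form of joint training; weighted sums are a common variant, though the paper uses the unweighted one.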
\section{Experiments}
\subsection{Experimental Setup}
\paragraph{\textbf{Evaluation}}
As for evaluation, we adopt three metrics: precision (P), recall (R), and F1 scores (F1), the same as \cite{li2013joint,zhang2019joint}.
Additionally, we evaluate the performance of sentiment classification with gold arguments in terms of P, R, F1 and accuracy.
\paragraph{\textbf{Baselines}}
To verify the effectiveness of our model, we conduct experiments from two perspectives: end-to-end event extraction and aspect-based SA.
First, we select four widely-used end-to-end event extraction baselines
to verify their performance on structured event-level SA, including
DCFEE \cite{yang2018dcfee} (consists of two versions: DCFEE-O and DCFEE-M), GreedyDec \cite{zheng2019doc2edag}, Doc2EDAG \cite{zheng2019doc2edag}, BERT-QA \cite{du-cardie-2020-event}.
For a fair comparison, we replace word embeddings with BERT embeddings for DCFEE.
Second, we compare our model with four non-BERT-based and three BERT-based representative ABSA models by providing the documents and gold events for sentiment prediction: MemNet \cite{tang2016aspect}, ATAE\_LSTM \cite{wang2016attention}, MGAN \cite{fan2018multi}, TNet \cite{li2018transformation}, BERT-SPC \cite{devlin2019bert}, AEN\_BERT \cite{song2019attentional}, and LCF-BERT \cite{zeng2019lcf}.
Due to space limitations, we refer readers to the original papers for details of the baselines.
\vspace{-2mm}
\paragraph{\textbf{Implementation Details}}
BERT \cite{devlin2019bert} is used for the word embeddings.
We use the Adam optimizer with a learning rate of 1e-5. The dimensions of the position, POS, and NER embeddings are 128. The maximum sequence length is 512. The dropout rate is 0.1.
The reported test results are based on the parameters that obtain the best performance on the development set with five random seeds.
\subsection{Main Results}
\paragraph{\textbf{Event-level SA with Extracted Arguments}}
We apply the typical end-to-end event extraction baselines to structured event-level SA task and report the results of these models and $\textit{E}^3 \textit{SA}$ (Table \ref{table:main results}).
From this table, we obtain the following findings.
\textbf{First}, our model outperforms the strong baselines in most cases. In particular, $\textit{E}^3 \textit{SA}$ significantly outperforms all the baselines in terms of F1 on every subtask.
\textbf{Second}, $\textit{E}^3 \textit{SA}$ captures the sentiment information of the events effectively. The baselines focus on predicting the event type, while ignoring the argument information and the relationships among the events.
\textbf{Third}, most of the models cannot extract the location of the event, since there are only a few location labels (about 20 instances) in the training data. In contrast,
our model extracts it more effectively via the feature- and trigger-enhanced sentence representations.
\begin{table}[]
\centering
\caption{\small The results of event-level SA with gold arguments.}
\scriptsize
\label{table: event-classification}
\setlength{\tabcolsep}{1.4mm}{\begin{tabular}{ll|cccc}
\hlineB{4}
& & P & R & F1 & Acc \\ \hline
\multirow{4}{*}{Non-BERT-based} & MemNet & 71.25 & 69.65 & 70.41 & 78.41 \\
& ATAE\_LSTM & 74.84 & 67.92 & 70.72 & 80.00 \\
& MGAN & 76.37 & 69.95 & 72.45 & 81.59 \\
& TNet & 79.53 & 66.74 & 71.16 & 81.19 \\ \hline
\multirow{3}{*}{BERT-based} & BERT-SPC & 82.27 & 79.92 & 80.71 & 85.17 \\
&AEN\_BERT & 79.94 & 73.11 & 75.93 & 83.18 \\
&LCF-BERT & 81.42 & 80.16 & 80.91 & 85.87 \\ \hline
Ours & $\textit{E}^3 \textit{SA}$ & \textbf{82.57} & \textbf{80.24} & \textbf{81.32} & \textbf{86.17} \\
\hlineB{4}
\end{tabular}}
\end{table}
\paragraph{\textbf{Event-level SA with Gold Arguments}}
To further verify the effectiveness of $\textit{E}^3 \textit{SA}$ in inferring the events' sentiment polarities, we adopt strong existing ABSA baselines and evaluate sentiment classification on structured event-level SA (Table \ref{table: event-classification}).
We observe that $\textit{E}^3 \textit{SA}$ outperforms all the baselines in terms of F1 and accuracy.
All the baselines focus on the interaction between the event and the text to capture the event-specific sentiments.
$\textit{E}^3 \textit{SA}$ not only considers the relationships among multiple subtasks but also the relationships among multiple events.
Moreover, we integrate the trigger and argument information into sentiment classification to capture the sentiment information towards the given events effectively.
\subsection{Ablation Studies}
To further prove the effectiveness of the components of $\textit{E}^3 \textit{SA}$, we conduct ablation studies (Table \ref{table: ablation study}).
First, compared with the pipeline model (row 2), the end-to-end framework improves the performance of each subtask by modeling the relationships among the subtasks.
Second, features such as POS and NER improve the performance effectively, because arguments are typically entities and triggers are typically verbs.
Third, removing the trigger information (e.g., trigger head and tail representations, position embedding) will reduce the performance of argument extraction since we aim to extract the argument information w.r.t. the given trigger.
However, the influence of removing the trigger information for sentiment classification is limited because argument information can also help the model learn the event representation.
Fourth, integrating the argument information can capture the sentiment information of the events more effectively.
To further investigate the effectiveness of event information for sentiment classification, we remove both the trigger and argument information from our model, which will reduce the performance significantly.
\begin{table}[]
\centering
\scriptsize
\caption{The results of ablation studies in terms of F1.}
\label{table: ablation study}
\setlength{\tabcolsep}{1.2mm}{\begin{tabular}{l|cccccc}
\hlineB{4}
& \multirow{2}{*}{Trigger} & \multicolumn{4}{c}{Argument} & \multirow{2}{*}{Sentiment} \\
& & Sub & Obj & Time & Loc & \\ \hline
$\textit{E}^3 \textit{SA}$ (Ours) & \textbf{58.53} & 65.00 & \textbf{59.46} & \textbf{72.11} & \textbf{28.57} & \textbf{51.53} \\
Pipeline & 56.05 & 64.89 & 58.16 & 71.22 & 16.67 & 50.25 \\ \hline
- Feature & 58.35 & 62.13 & 58.68 & 69.36 & 24.06 & 50.08 \\
- Trigger Info & 57.93 & 54.41 & 55.43 & 67.36 & 18.24 & 51.04 \\
- Argument Info & 58.52 & \textbf{65.14} & 58.54 & 71.85 & 27.50 & 50.97 \\
- Trigger+Argument & 57.20 & 53.07 & 50.06 & 36.43 & 00.00 & 49.58 \\
\hlineB{4}
\end{tabular}}
\end{table}
\section{Conclusions and Future Work}
In this paper, we propose an effective $\textit{E}^3 \textit{SA}$ approach for structured event-level sentiment analysis.
This joint approach models the relationships among the multiple subtasks and multiple events with structured arguments.
We conduct extensive experiments to evaluate our model on both event extraction and sentiment classification.
The results demonstrate the advantages of our model over state-of-the-art baselines.
Additionally, we label a real-world corpus for this task, given the lack of off-the-shelf datasets.
It would be interesting to investigate how to integrate users' reviews to better capture the sentiment information of the events.
\begin{acks}
The authors wish to thank the reviewers for
their helpful comments and suggestions.
This research is funded by the Science and Technology Commission of Shanghai Municipality (No. 19511120200\&21511100100 and 21511100402) and by Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, No. 2020KEY001 and the Fundamental Research Funds for the Central Universities.
This research is also funded by the National Key Research and Development Program of China (No. 2021ZD0114002), the National Nature Science Foundation of China (No. 61906045), and Shanghai Science and Technology Innovation Action Plan International Cooperation project ``Research on international multi language online learning platform and key technologies (No.20510780100)". The computation is performed in ECNU Multi-functional Platform for Innovation (001).
\end{acks}
\bibliographystyle{ACM-Reference-Format}
% arXiv:2104.10243
\section{Introduction}
\subsection{Zeros of derivatives of zeta functions}
Let $s=\sigma+it$ be a complex variable and $\zeta(s)$ be the Riemann zeta function, which satisfies the functional equation \[h(s)\zeta(s)=h(1-s)\zeta(1-s),\] where $h(s)=\pi^{-s/2}\Gamma(s/2)$.
In 1859, Riemann predicted that all the complex zeros of the zeta function lie on the critical line $\operatorname{Re}(s)=\frac{1}{2}$. From the work of Speiser~\cite{Speiser}, we know that the Riemann hypothesis is equivalent to the statement that the derivative of the zeta function has no zeros in the strip $0< \operatorname{Re}(s)<\frac{1}{2}$. The distribution of zeros of $\zeta^{(k)}$ has been studied widely in the literature (see~\cite{Berndt},~\cite{Spira1},~\cite{Spira2},~\cite{Spira3},~\cite{Spira4}).
In 1974, Levinson and Montgomery~\cite{LM} studied further the distribution of zeros of the higher derivatives of the Riemann zeta function. They showed that $\zeta$ and $\zeta^{'}$ have the same number of zeros to the left of the critical line. In particular, they proved that
if $\rho_k = \beta_k + i \gamma_k$ denotes a non-trivial zero of $\zeta^{(k)}$, $k\geq 1$, and $\operatorname{Li}(x)=\int_{2}^x(\operatorname{log}{t})^{-1}dt$, then under the Riemann hypothesis,
\begin{align}\label{urh}
\sum_{\substack{0 \leq \gamma_k \leq T\\ \beta_k> \frac{1}{2} }} \Big(\beta_k - \frac{1}{2} \Big) = k\frac{T}{2\pi}\operatorname{log}\log{\frac{T}{2\pi}} - k\operatorname{Li}\Big(\frac{T}{2\pi}\Big) + \frac{T}{2\pi}\Big(\frac{1}{2}\operatorname{log}{2}- k\operatorname{log}\log{2}\Big)+ \operatorname{O}(\operatorname{log}{T}),
\end{align}
and that $\zeta^{(k)}(s)$, $k\geq 1$, has only finitely many complex zeros in the region $\operatorname{Re}(s)<\frac{1}{2}$. Furthermore, they showed unconditionally that for $T^a\leq U \leq T$, $a >\frac{1}{2}$,
\begin{align}\label{lm}
2\pi\sum_{\substack{T \leq \gamma_1\leq T+U\\ \beta_1> \frac{1}{2} }} \Big(\beta_1 - \frac{1}{2} \Big) = U\operatorname{log}\log{\frac{T}{2\pi}} +\operatorname{O}(U) \mbox{ and } \sum_{\substack{T \leq \gamma_1\leq T+U\\ \beta_1< \frac{1}{2} }} \Big(\frac{1}{2}-\beta_1 \Big) = \operatorname{O}(U).
\end{align}
Recently, Ki and Lee~\cite{KL} generalized the result of Levinson and Montgomery~\cite{LM} in~\eqref{lm} to higher derivatives. Precisely, they showed that for
$T^a\leq U \leq T$, $a >\frac{1}{2}$,
\begin{align}\label{kl}
2\pi\sum_{\substack{T \leq \gamma_k\leq T+U\\ \beta_k> \frac{1}{2} }} \Big(\beta_k - \frac{1}{2} \Big) = kU\operatorname{log}\log{\frac{T}{2\pi}} +\operatorname{O}(U) \mbox{ and } \sum_{\substack{T \leq \gamma_k\leq T+U\\ \beta_k< \frac{1}{2} }} \Big(\frac{1}{2}-\beta_k \Big) = \operatorname{O}(U).
\end{align}
In~\cite[Theorem 1]{CG5}, Conrey and Ghosh showed that almost all zeros of $\zeta^{(k)}(s)$ lie in the region $\sigma> \frac{1}{2}-\frac{\phi(t)}{\operatorname{log} \, t}$ for any $\phi(t) \to \infty$ as $t \to \infty$. They also showed that for any $c>0$, $\zeta^{(k)}(s)$ has a positive proportion of zeros in the region $\sigma \geq \frac{1}{2}+ \frac{c}{\operatorname{log} \, t}$. Moreover, under the Riemann hypothesis they proved that for any $\epsilon>0$, $\zeta^{(k)}$ has $\gg_{\epsilon } T$ zeros in the region $\frac{1}{2} \leq \sigma < \frac{1}{2}+ \frac{(1+\epsilon) \operatorname{log} \, \operatorname{log} \, T}{\operatorname{log} \, T}$, $0<t<T$.
The reader may refer to~\cite{CG5} for further background on the distribution of zeros of derivatives of the zeta function.
In this section, we obtain explicit constants in the error terms of the results of Levinson and Montgomery~\cite[Theorem 5, Corollary of Theorem 4]{LM}, as well as in the error terms of the results of Ki and Lee~\cite[Theorems 2 and 3]{KL}.
More explicitly, we improve the error terms appearing in \eqref{lm} and \eqref{kl} as follows:
\begin{theorem}\label{thm6}
Let $ 0 < \theta < \frac{2k+1}{4(k+1)}$ and let $\rho_k = \beta_k + i \gamma_k$ be a zero of $\zeta^{(k)}$, $k\geq 1$, with $T \leq \gamma_k \leq T+H$. Then, for any $H=T^a$, $\frac{1}{2} +\theta < a \leq 1$, we have
\begin{enumerate}
\item
\begin{align*}
2\pi \sum_{\substack{T \leq \gamma_k \leq T+H \\ \beta_k>\frac{1}{2}}} \left(\beta_k - \frac{1}{2}\right) \leq kH\operatorname{log}\log{\frac{T}{2\pi }} &
+ \frac{H}{2}\operatorname{log}{\Big(1+ \frac{2}{(2k+1)\theta} + \frac{2k^2\theta}{3(2k-1)}\Big)}\\
\nonumber & -Hk\operatorname{log}\log{2} + \operatorname{O}_k\Big(\frac{H(\operatorname{log}\log{T})^3}{\operatorname{log}{T}}\Big).
\end{align*}
\item
\begin{align*}
\sum_{\substack{T \leq \gamma_k \leq T+H \\ \beta_k<\frac{1}{2}}} \left(\frac{1}{2} - \beta_k \right) \leq \frac{H}{4\pi}\operatorname{log}{\Big(\frac{1}{2}+ \frac{1}{(2k+1)\theta} + \frac{k^2\theta}{3(2k-1)}\Big)} + \operatorname{O}_k\Big(\frac{H(\operatorname{log}\log{T})^3}{\operatorname{log}{T}}\Big).
\end{align*}
\end{enumerate}
\end{theorem}
The following corollary of Theorem~\ref{thm6} concerns the minimum value of the coefficient of $H$. The expression $\frac{1}{(2k+1)\theta} + \frac{k^2\theta}{3(2k-1)}$ attains its minimum at $\theta = \sqrt{\frac{3(2k-1)}{k^2(2k+1)}}$, and this extremal point lies below $\frac{2k+1}{4(k+1)}$ precisely when $k \geq 4$. Thus, as a corollary of Theorem~\ref{thm6} we obtain:
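This minimization claim is easy to check numerically; the short stdlib-only sketch below verifies that the extremal point lies in the admissible range exactly for $k \geq 4$:

```python
import math

def theta_star(k):
    # Minimizer of 1/((2k+1) theta) + k^2 theta / (3(2k-1)).
    return math.sqrt(3 * (2 * k - 1) / (k ** 2 * (2 * k + 1)))

def upper_bound(k):
    # Admissible upper limit for theta in Theorem thm6.
    return (2 * k + 1) / (4 * (k + 1))

# theta* falls inside (0, (2k+1)/(4(k+1))) exactly when k >= 4.
print([k for k in range(1, 10) if theta_star(k) < upper_bound(k)])  # [4, 5, 6, 7, 8, 9]
```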
\begin{corollary}\label{coro6}
Let $k\geq 4$ and $H=T^a$ where $\frac{1}{2} +\sqrt{\frac{3(2k-1)}{k^2(2k+1)}} < a \leq 1$. Then, we have
\begin{align*}
2\pi\sum_{\substack{T \leq \gamma_k \leq T+H \\ \beta_k> \frac{1}{2} }} \Big(\beta_k - \frac{1}{2} \Big) \leq kH\operatorname{log}\log{\frac{T}{2\pi}} &+\frac{H}{2}\operatorname{log}{\Big(1+\frac{4k}{\sqrt{12k^2-3}}\Big)}\\
&-kH\operatorname{log}\log{2}+\operatorname{O}_k\Big(\frac{H(\operatorname{log}\log{T})^3}{\operatorname{log}{T}}\Big).
\end{align*}
\begin{align*}
&\sum_{\substack{T \leq \gamma_k \leq T+H \\ \beta_k< \frac{1}{2} }} \Big(\frac{1}{2} - \beta_k \Big) \leq \frac{H}{4\pi}\operatorname{log}{\Big(\frac{1}{2}+\frac{2k}{\sqrt{12k^2-3}}\Big)}+ \operatorname{O}_k\Big(\frac{H(\operatorname{log}\log{T})^3}{\operatorname{log}{T}}\Big).
\end{align*}
\end{corollary}
\begin{remark}
If we compare the second-order term of \eqref{urh} with part (a) of Theorem~\ref{thm6}, then we would naturally expect $1$ in place of $\frac{2}{(2k+1)\theta} + \frac{2k^2\theta}{3(2k-1)}$ for some choice of $\theta$ in $(0,\frac{2k+1}{4(k+1)})$.
The range of $\theta$ comes from Corollary~\ref{coro5}.
For $k\geq 1$ and $\theta>0$, the minimum value of $\frac{2}{(2k+1)\theta} + \frac{2k^2\theta}{3(2k-1)}$ is $\frac{4k}{\sqrt{12k^2-3}}$.
This minimum is attained at a point of $(0,\frac{2k+1}{4(k+1)})$ for $k \geq 4$, and at a point a little to the right of $\frac{2k+1}{4(k+1)}$ for $k=1,2,3$.
Note that $\frac{4k}{\sqrt{12k^2-3}}$ is decreasing in $k$: for $k\geq 4$ its values lie in $(1.1547, \, 1.1639)$, while for $k=1,2,3$ they equal $\frac{4}{3}, 1.1925\ldots, 1.1710\ldots$ respectively. From this we conclude that improving the range of $\theta$ in Corollary~\ref{coro5} with this mollifier would not improve the second-order term in Corollary~\ref{coro6}. We hope that a cleverer choice of mollifier in Corollary~\ref{coro5} may improve the constant in Corollary~\ref{coro6}.
\end{remark}
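The numerical claims in the remark can be verified directly with a few lines of Python:

```python
import math

def c(k):
    # The constant 4k / sqrt(12 k^2 - 3) from the remark.
    return 4 * k / math.sqrt(12 * k ** 2 - 3)

# c is decreasing in k and tends to 4/sqrt(12) = 1.1547... as k -> infinity,
# so for k >= 4 the values lie in (1.1547, c(4)) with c(4) = 1.1638...
print(round(c(1), 4), round(c(2), 4), round(c(3), 4))  # 1.3333 1.1926 1.1711
```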
Note that Levinson and Montgomery~\cite{LM} applied the well-known Littlewood lemma to the function $ z_k(s):= (-1)^k2^s(\operatorname{log}{2})^{-k}\zeta^{(k)}(s)$ to deduce their results. Ki and Lee~\cite{KL} applied the Littlewood lemma to a linear combination of derivatives of $\zeta$ and, using an upper bound for a fractional moment of the logarithmic derivative of $\zeta$, proved their unconditional result. We instead consider the function $z_k(s)\Phi(s)$ in place of $z_k(s)$, where $\Phi(s)$ is a mollifier, and apply the Littlewood lemma; this yields the explicit constants in the error terms.
\subsection{Zeros of Matsumoto-Tanigawa $\eta_k$ function}
Hardy's $Z$-function $Z(t)$ is a real-valued function of $ t \in \mathbb{R}$ defined by
\begin{align}\label{hardyz}
Z(t)= e^{i\theta(t)}\zeta(1/2 + it),
\end{align}
where $\theta(t)=\arg{h(1/2+it)}$. In other words,
\begin{equation*}
Z(t) := \zeta\left(\frac{1}{2}+it\right)\chi\left(\frac{1}{2}+it\right)^{-\frac{1}{2}},
\end{equation*}
where $\chi(s) = \frac{h(1-s)}{h(s)}$. Clearly, $e^{i\theta(t)}=\chi\left(\frac{1}{2}+it\right)^{-\frac{1}{2}}.$ It is known that $Z(t)$ is a smooth, real-valued function with $|Z(t)|=|\zeta(\frac{1}{2}+it)|$. The zeros of $Z(t)$ are the complex zeros of the zeta function of the form $\frac{1}{2}+it$ with $t$ real, so the function $Z(t)$ plays an important role in the theory of the zeta function, and it is important to understand the behaviour of $Z(t)$ and its derivatives. Although the zeta function and the $Z$-function have the same magnitude along the critical line, the same is not true of their derivatives. In fact, $Z^{(k)}$ can be expressed as a linear combination of the derivatives $\zeta^{(j)}{(\frac{1}{2}+it)}$, $0\leq j \leq k$, with meromorphic coefficient functions on the critical line. Anderson~\cite{Ande} defined a meromorphic function $\eta(s)$ as follows:
\begin{align*}
\eta(s)= \zeta(s)-\frac{\zeta'(s)}{\omega(s)},
\end{align*}
where $\omega(s)=\left(\frac{\chi'}{\chi}\right)(s)$. It was shown in the same paper that the function $\eta(s)$ satisfies the relation
\begin{align*}
Z'(t)=\left(e^{i\theta(t)}\right)'\eta\left(\frac{1}{2}+it\right)
\end{align*}
and that all the zeros of $Z'(t)$ are zeros of $\eta$ of the form $\frac{1}{2}+it$. He also counted the number of zeros of $\eta$ in certain strips as well as on the critical line, and under the Riemann hypothesis he proved that all non-trivial zeros of the $\eta$-function lie on the critical line. For higher derivatives, Matsumoto and Tanigawa~\cite{MT} generalized the function $\eta(s)$ and constructed meromorphic functions $\eta_k(s)$ recursively. They denote $\eta(s)$ by $\eta_1(s)$ and define $\eta_k$ recursively
as follows:
\begin{equation*}
\eta_{k+1}(s)=\lambda(s)\eta_k(s)+ \eta'_k(s), \quad k\geq 1,
\end{equation*}
where $\lambda(s)=\dfrac{\omega'}{\omega}(s)- \frac{1}{2}\omega(s)$. They showed that the function $\eta_k$ satisfies the functional equation
\begin{align}\label{eta-equation}
\eta_k(s) = (-1)^k\chi(s)\eta_k(1-s).
\end{align}
They also proved that the $k$-th derivative of Hardy's $Z$-function satisfies the following relation:
\begin{align}\label{int1}
Z^{(k)}(t)=i^{k-1}\left(e^{i\theta(t)}\right)'\eta_k\left(\frac{1}{2}+it\right).
\end{align}
Using the identity $\omega\left(\frac{1}{2}+it\right)= -2\theta'(t)$, we extend the recurrence for $\eta_k$ to all non-negative integers by defining $\eta_0(s)=\frac{2\zeta(s)}{\omega(s)}$; this extends the relation \eqref{int1} to all non-negative integers as well. They determined an explicit formula for $\eta_k(s)$, for any positive integer $k$:
\begin{align}\label{int2}
\eta_k(s)=\lambda_k(s)\zeta(s)+\sum_{j=1}^{k-1}\binom{k}{j}\lambda_{k-j}(s)\zeta^{(j)}(s)-\frac{2}{\omega(s)}\zeta^{(k)}(s),
\end{align}
where $\lambda_1(s)=1, \lambda_2(s)=\lambda(s)$ and for $k\geq 1$
\begin{align}\label{lambda-recu}
\lambda_{k+1}(s)=\lambda(s)\lambda_k(s)+ \lambda_k^{'}(s).
\end{align}
The relation \eqref{int1} implies that all zeros of $\eta_k$ of the form $\frac{1}{2} + it$ are zeros of $Z^{(k)}$. In fact, they proved that for any $k$ there exists a sufficiently large integer $m= m(k)$ such that the number of zeros of $\eta_k$ in the rectangular region
\[ R_k := \lbrace \sigma+it: 1-2m < \sigma < 2m,\ 0 <t <T \rbrace \]
equals
\begin{align}\label{eta_k_zero}
\frac{T}{2\pi}\operatorname{log}{\frac{T}{2\pi}} - \frac{T}{2\pi} + \operatorname{O}_k(\operatorname{log}{T}).
\end{align}
They also showed that if $N_{k,0}(T)$ denotes the number of zeros of $Z^{(k)}$ in $(0, T)$, then
\begin{align*}
T\operatorname{log}{T} \ll N_{k,0}(T) \leq \frac{T}{2\pi}\operatorname{log}{\frac{T}{2\pi}} - \frac{T}{2\pi} + \operatorname{O}_k(\operatorname{log}{T}).
\end{align*}
Conrey and Ghosh~\cite[p. 195]{CG} remarked that for each positive integer $k$ it is possible to find a meromorphic function $Z_k(s)$ such that $|Z^{(k)}(t)|=\left|Z_k\left(\frac{1}{2}+it\right)\right|$ and satisfying the functional equation $Z_k(s)=(-1)^k\chi(s)Z_k(1-s)$. Although they did not give an explicit construction of $Z_k(s)$ for $k\geq 2$, the $\eta_k$-function satisfies the functional equation~\eqref{eta-equation}, like $Z_k$, but does not satisfy the other condition of their remark.
\begin{remark}
Define $Z_k(s):= \frac{\omega \eta_k}{2i^k}(s),$ for $k\geq 0$. Then $Z_k$ is a meromorphic function, satisfying the conditions remarked by Conrey and Ghosh~\cite[p. 195]{CG}.
\end{remark}
In the next theorem we study the distribution of zeros of $\eta_k.$
\begin{theorem}\label{thm2}
Let $ \rho_k^{'}:= \beta_k^{'} +i\gamma_k^{'}$ denote the zeros of $\eta_k(s)$ in the region $\frac{1}{2} <\sigma < m$, $T<t \leq T+H$, with $2<m=\operatorname{o}(H)$, where $ H=T^a$, $\frac{1}{2} +\theta <a \leq 1$ and $0<\theta < \frac{2k+1}{4(k+1)}$. Then we have
\begin{align*}
\sum_{\substack{\eta_k(\rho_k^{'})=0\\\frac{1}{2}<\beta_k^{'}\leq m}} \left(\beta_k^{'} - \frac{1}{2}\right) \leq \left(\frac{\operatorname{log}{(4^kP_k(\theta))}}{2} +(k-1)\operatorname{log}{2} + \operatorname{o}(1) \right)\frac{H}{2\pi}.
\end{align*}
\end{theorem}
Furthermore, let $N_{\eta_k}(\sigma, m, H):= |\{ \rho_k^{'}:= \beta_k^{'} +i\gamma_k^{'} : m>\beta_k^{'}>\sigma >\frac{1}{2}, T <\gamma_k^{'} < T+H \}|$. Then,
\begin{align*}
\sum_{\substack{\eta_k(\rho_k^{'})=0\\\frac{1}{2}<\beta_k^{'}\leq m}} \left(\beta_k^{'} - \frac{1}{2}\right) \geq \Big(\sigma-\frac{1}{2}\Big)N_{\eta_k}(\sigma, m, H).
\end{align*}
From Theorem \ref{thm2} we derive the following corollary:
\begin{corollary}\label{eta_k_0}
For $ H=T^a, \frac{1}{2} +\theta <a \leq 1$ and $0<\theta < \frac{2k+1}{4(k+1)}$, we have
\begin{align*}
N_{\eta_k}(\sigma, m, H) \leq \frac{H}{2\pi (\sigma-\frac{1}{2})}\left(\frac{\operatorname{log}{(4^kP_k(\theta))}}{2} +(k-1)\operatorname{log}{2} + \operatorname{o}(1) \right).
\end{align*}
\end{corollary}
From \eqref{eta_k_zero} and Corollary~\ref{eta_k_0} we conclude that almost all zeros of the $\eta_k$-function lie near the critical line.
\subsection{Mean square of Hardy $Z$ function and its derivatives}
In 1918, Hardy and Littlewood~\cite{HL} proved the well-known asymptotic for the mean square of $\zeta$ (equivalently, of $Z$):
\[
\int_{0}^{T}\left|\zeta\Big(\dfrac{1}{2}+it\Big)\right|^2dt = \int_{0}^{T}Z(t)^2dt \sim T \operatorname{log}{T} \quad \mbox{ as } T\rightarrow \infty.
\]
In 1928, Ingham~\cite{Ing} studied the mean values of products of derivatives of the zeta function. More explicitly, he showed that
\[
(-1)^{k_1+k_2}\int_{0}^{T}\zeta^{(k_1)}\Big(\dfrac{1}{2}+it\Big)\zeta^{(k_2)}\Big(\dfrac{1}{2}-it\Big)dt \sim \frac{T}{k_1+k_2+1}(\operatorname{log}{T})^{k_1 + k_2+1} \quad \mbox{ as } T\rightarrow \infty.
\]
In 1999, Hall~\cite[Theorem 3]{Hall} obtained the mean square of the derivatives of Hardy's $Z$-function. He showed that
\begin{align}\label{int3}
\int_{0}^{T}\left(Z^{(k)}(t)\right)^2dt = \frac{1}{4^k(2k+1)}TP_{2k+1}{\left(\operatorname{log}{\frac{T}{2\pi}}\right)} +\operatorname{O}\left(T^{\frac{3}{4}}(\operatorname{log}{T})^{k+\frac{1}{2}}\right),
\end{align}
where $P_{2k+1}$ is a monic polynomial of degree $2k+1$. The error term in \eqref{int3} suggests that one can derive an asymptotic for the mean square of $Z^{(k)}$ in a short interval $[T, T+H]$ whenever $H\gg T^{\frac{3}{4}}$.
In this paper we are interested in the mean-square of the product of $Z^{(k)}$ or $\zeta^{(k)}$ with some Dirichlet polynomial $\Phi(s)$ in the interval $[T, T+H]$. More precisely, we would like to study
\begin{align}\label{mean-square}
J{(\Phi, H, k)} := \int_{T}^{T+H}\left(Z^{(k)}(t)\right)^2\left|\Phi\left(\frac{1}{2}+ it\right)\right|^2 dt
\end{align}
where the Dirichlet polynomial $\Phi$ is given by
\begin{equation}\label{Dirpol}
\Phi(s):= \sum_{n\leq T^\theta}\frac{b_n}{n^s},\qquad b_n \ll_\epsilon n^{\epsilon} \ \mbox{ for every } \epsilon >0, \quad \theta<\frac{1}{2}.
\end{equation}
More generally, we study the mean square of the product $Z^{(k_1)}Z^{(k_2)}$ with $\Phi$ (see Theorem \ref{thm4}) and, as an application, we obtain an asymptotic formula (see Corollary \ref{coro5}) for
\begin{align}\label{mean2}
I{(\Phi, H, k)} := \int_{T}^{T+H}\left|\zeta^{(k)}\Big(\frac{1}{2}+it\Big)\Phi\left(\frac{1}{2}+ it\right)\right|^2dt.
\end{align}
This type of mean value has many important applications in the theory of the zeta function and $L$-functions. For example, when $k=0$, such mean values give information about the distribution of zeros and of sign changes of Hardy's $Z$-function (see~\cite{BCH, BG, Levi, Selb}).
In~\cite[Theorem 1]{BCH} Balasubramanian, Conrey and Heath-Brown obtained an asymptotic formula for $J{(\Phi, T, 0)}$ when $\theta < \frac{1}{2}$.
Namely,
\begin{align}\label{int55}
\int_{T}^{2T}Z^{2}(t)\left|\Phi\left(\frac{1}{2}+ it\right)\right|^2 dt \sim T \sum_{q,l\leq T^{\theta}}\frac{b_l\overline{b_q}}{lq}(l,q)\left(\operatorname{log}{\frac{T(l,q)^2}{2\pi lq}} +2\gamma + \operatorname{log}{4} -1\right) \ \text{as $T \rightarrow \infty$},
\end{align}
where $\Phi$ is a Dirichlet polynomial defined as in \eqref{Dirpol}.
They also proved (see \cite[Corollary 1]{BCH}) that, for $\theta < \frac{9}{17}$,
\begin{align}\label{int4}
\int_{T}^{2T}Z^{2}(t)\Big|\sum_{n\leq T^{\theta}}\frac{\mu(n)}{n^{\frac{1}{2}+ it}}\Big(1- \frac{\operatorname{log}{n}}{\operatorname{log}{X}}\Big)\Big|^2 dt \sim \left(1+\frac{1}{\theta}\right)T \mbox{ as } T\rightarrow \infty,
\end{align}
where the $n$-th coefficient of the Dirichlet polynomial in \eqref{Dirpol} is
$$\mu(n)\left(1- \frac{\operatorname{log}{n}}{\operatorname{log}{X}}\right),$$ with $\mu(\cdot)$ the M\"obius function. Later Conrey~\cite{Con} improved the range of $\theta$ in \eqref{int4} up to $\frac{4}{7}$. Farmer~\cite{Far} conjectured that $\theta$ can be taken arbitrarily large in \eqref{int4}. Recently, Bettin and Gonek~\cite{BG} proved that Farmer's conjecture implies the Riemann hypothesis. In 2017, Bettin, Chandee and Radziwill~\cite{BCR} broke the $\frac{1}{2}$-barrier by proving that \eqref{int55} holds for $\theta < \frac{1}{2}+ \frac{1}{66}$.
Note that the statements \eqref{int55} and \eqref{int4} have been rephrased from the original results using the fact that $|Z(t)|=|\zeta(\frac{1}{2}+it)|$ and $X=T^\theta.$ Recall that the mean square of $Z(t)$ is asymptotic to $ T\operatorname{log}{T}$; comparing with \eqref{int4}, we see that mollifying $Z(t)$ replaces the factor $\operatorname{log}{T}$ in this mean square by the constant $1+\theta^{-1}$.
In~\cite[p. 108]{Selb} Selberg obtained upper bounds for mean squares of the type \eqref{int55} and \eqref{int4} over short intervals. More explicitly, his result can be stated in terms of Hardy's $Z$-function as follows: \\
Let $\frac{1}{2} < a < \frac{3}{5}$ and let $X$ be the length of the Dirichlet polynomial $\Psi_1(s)$, where $X = T^\theta$ with $0<\theta = (2a-1)/20 < 1/100$. The $n$-th coefficient of $\Psi_1(s)$ is given by $\alpha(n)(1 - \frac{\operatorname{log}{n}}{\operatorname{log}{X}})$, where $\alpha(n)$ is defined by
$$\frac{1}{\sqrt{\zeta(s)}} = \sum_{n=1}^{\infty}\frac{\alpha(n)}{n^s}, \quad \sigma>1.$$ Then, if $U\geq T^{\frac{1}{2}+7\theta}$, we have
\begin{align*}
\int_{T}^{T+U}Z^{2}(t)\left|\Psi_1\left(\frac{1}{2}+ it\right)\right|^2 dt \ll U.
\end{align*}
Recently, the authors of~\cite[Proposition 1]{DP} generalized Selberg's result above to the $Z$-function associated with a Dirichlet $L$-function $L(s, \chi)$. Here $\chi$ is a Dirichlet character modulo $q$ with $1\leq q < T^{\frac{1}{5}-8 \theta}$, where $X=T^\theta$, $0< \theta < \frac{1}{40}$, is the length of the mollifier of Selberg type and $U \geq T^{\frac{3}{5}}$ (see \cite[p. 426--431]{DP}).
Here we obtain asymptotic formulas for the mean values $J{(\Phi, H, k)}$ and $I(\Phi, H, k)$ defined in \eqref{mean-square} and \eqref{mean2} respectively, where $ H=T^a$ with $a > \frac{1}{2}$. More specifically, we prove the following results:
\begin{theorem}\label{thm0}
Let $k$ be any non-negative integer and $\epsilon$ be as defined in \eqref{Dirpol}. Then, uniformly for $\frac{1}{2} +\theta+\epsilon < a \leq 1$, where $H= T^a$ and $X= T^\theta, 0<\theta < \frac{2k+1}{4(k+1)}$, we have
\begin{align*}
J{(\Phi, H, k)} = \frac{H}{4^k} \sum_{q,l\leq T^{\theta}}\frac{b_l\overline{b_q}}{lq}(l,q) \operatorname{log}{\frac{T(l,q)^2}{2\pi lq}}\int_{0}^{1}\left(x^2\operatorname{log}^2{\frac{T(l,q)^2}{2\pi lq}} - \operatorname{log}^2{ \frac{q}{l}} \right)^k dx + \mathcal{E} ,
\end{align*}
where $\mathcal{E}$ is the error term and it is given by
$$\mathcal{E} \ll_{k,\epsilon} \begin{cases}
H\Big(\frac{H}{T}\Big)^k T^{\frac{1}{4}}X^\epsilon(\operatorname{log}{T})^{k+3} +X^{1+2\epsilon}\sqrt{T} (\operatorname{log}{T})^{2k+3} & \mbox{ if } \frac{1}{2}+\theta+\epsilon < a < \frac{4k+3}{4(k+1)},\\
HT^{- \frac{2k+1}{4(k+1)}+\epsilon}X(\operatorname{log}{T})^{2k+3}, & \mbox{ if } \frac{4k+3}{4(k+1)} \leq a \leq 1.
\end{cases}$$
\end{theorem}
In the above theorem we have taken the Dirichlet polynomial $\Phi$ of length $T^\theta$ with arbitrary coefficients of size $b_n \ll_\epsilon n^\epsilon$, for any $\epsilon >0$. In the next theorem we choose the $n$-th coefficient $b_n$ of $\Phi$ to be $\mu(n)\left(1- \frac{\operatorname{log}{n}}{\operatorname{log}{X}}\right)$, where $X = T^\theta$, and derive an asymptotic formula for the mean square over the short interval $[T, T+H]$, where $H= T^a$ and $\frac{1}{2}+\theta< a \leq 1$.
\begin{theorem}\label{thm1}
Let $k$ be a non-negative integer. For $H= T^a$ with $\frac{1}{2} + \theta < a \leq 1$ and $0<\theta < \frac{2k+1}{4(k+1)}$, we have
\begin{align*}
\int_{T}^{T+H}\left(Z^{(k)}(t)\right)^2\Big|\sum_{n\leq T^\theta}\frac{\mu(n)}{n^{\frac{1}{2}+it}}\Big(1- \frac{\operatorname{log}{n}}{\theta\operatorname{log}{T}}\Big)\Big|^2 dt = \left(P_k(\theta)+ \operatorname{o}(1)\right)H(\operatorname{log}{T})^{2k} \mbox{ as } T\rightarrow \infty,
\end{align*}
where
\begin{align*}
P_k(\theta):=\frac{k^2}{3(2k-1)4^{k-1}}\theta +\frac{1}{4^k(2k+1)\theta}+ \frac{1}{4^k}.
\end{align*}
\end{theorem}
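As a quick numerical sanity check (independent of the proofs), the constant $P_k(\theta)$ specializes at $k=0$ to $1+\theta^{-1}$, the constant appearing in \eqref{int4}:

```python
# Sanity check (not part of any proof): P_k(theta) from Theorem thm1
# reduces at k = 0 to 1 + 1/theta, the constant in (int4).

def P(k, theta):
    """P_k(theta) = k^2 theta/(3(2k-1)4^(k-1)) + 1/(4^k (2k+1) theta) + 1/4^k."""
    return (k**2) * theta / (3 * (2*k - 1) * 4**(k - 1)) \
        + 1 / (4**k * (2*k + 1) * theta) + 1 / 4**k

for theta in (0.1, 0.2, 0.24):
    assert abs(P(0, theta) - (1 + 1/theta)) < 1e-12
```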
The first corollary of Theorem~\ref{thm1} improves the result of Balasubramanian, Conrey and Heath-Brown~\cite[Corollary]{BCH} by allowing a shorter interval and a shorter mollifier.
\begin{corollary}\label{coro1}
For $0< \theta < \frac{1}{4}$ and $ H=T^{a}$ with $\frac{1}{2} +\theta < a \leq 1$, we have
\begin{align*}
\int_{T}^{T+H}\Big|\zeta\Big(\frac{1}{2}+ it\Big)\sum_{n\leq T^\theta}\frac{\mu(n)}{n^{\frac{1}{2}+it}}\Big(1- \frac{\operatorname{log}{n}}{\operatorname{log}{X}}\Big)\Big|^2 dt \sim \Big( 1 + \frac{1}{\theta} \Big)H \mbox{ as } T\rightarrow \infty.
\end{align*}
\end{corollary}
Note that the methods used in the proofs of Theorem~\ref{thm0} and Theorem~\ref{thm1} (see Section \ref{proofs}) differ from that of Balasubramanian, Conrey and Heath-Brown~\cite[Theorem 1, Corollary]{BCH}. The main argument of~\cite{BCH} is the evaluation of the line integral
\begin{align*}
\int_{(\frac{1}{2})}e^{(s-s_0)^2\Delta^{-2}}\zeta(s)^2\chi(1-s)\Phi(s)\overline{\Phi}(1-s)ds,
\end{align*}
where $s_0=\frac{1}{2}+iu$ with $u \in [T, 2T]$, $\operatorname{exp}{(5(\operatorname{log}{T})^{\frac{1}{2}})} \leq \Delta \leq T(\operatorname{log}{T})^{-1}$, and the notation $(\frac{1}{2})$ denotes the line $\frac{1}{2}+it$, $t \in \mathbb{R}$. They computed this integral by moving the line of integration to the line $1+\eta$, where $\eta>0$ depends on $\epsilon$.
Our method is built upon the works of Selberg~\cite{Selb} and Levinson~\cite{Levi}, who showed, respectively, that a positive proportion and more than one third of the zeros of the $\zeta$-function lie on the critical line.
\begin{remark}\label{rem1}
If we write the asymptotic formulas in Theorem~\ref{thm1} and Corollary~\ref{coro1} as a sum of a main term and an error term, then the error term is $$\operatorname{O}_k(H(\operatorname{log}\log{T})^3(\operatorname{log}{T})^{-1}).$$
\end{remark}
Another immediate corollary concerns the maximal admissible length of the mollifier:
\begin{corollary}\label{coro2}
Let $k$ be a non-negative integer and let $\delta >0$ be a sufficiently small real number. For $H=T^{a}$ with $\frac{4k+3}{4(k+1)} \leq a \leq 1$ and $\theta = \frac{2k+1}{4(k+1)}-\delta$, we have
\begin{align*}
&\int_{T}^{T+H}\left(Z^{(k)}(t)\right)^2\Big|\sum_{n\leq T^{\theta}}\frac{\mu(n)}{n^{\frac{1}{2}+it}}\Big(1- \frac{\operatorname{log}{n}}{\theta\operatorname{log}{T}}\Big)\Big|^2 dt\\
& \sim \frac{1}{4^k}\left(\frac{4k^2}{3(2k-1)}\Big(\frac{2k+1}{4(k+1)}-\delta\Big) +\frac{1}{2k+1}\Big(\frac{2k+1}{4(k+1)}-\delta\Big)^{-1} + 1\right) H(\operatorname{log}{T})^{2k} \mbox{ as } T\rightarrow \infty.
\end{align*}
\end{corollary}
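The constant appearing in Corollary~\ref{coro2} is, by design, $P_k(\theta)$ of Theorem~\ref{thm1} evaluated at $\theta=\frac{2k+1}{4(k+1)}-\delta$; the following script verifies this numerically:

```python
# Check that the constant in Corollary coro2 equals P_k(theta) of Theorem thm1
# evaluated at theta = (2k+1)/(4(k+1)) - delta.

def P(k, theta):
    return (k**2) * theta / (3 * (2*k - 1) * 4**(k - 1)) \
        + 1 / (4**k * (2*k + 1) * theta) + 1 / 4**k

def coro2_const(k, delta):
    theta = (2*k + 1) / (4 * (k + 1)) - delta
    return (1 / 4**k) * (4 * k**2 / (3 * (2*k - 1)) * theta
                         + 1 / ((2*k + 1) * theta) + 1)

for k in (1, 2, 3, 5):
    for delta in (0.01, 0.05):
        theta = (2*k + 1) / (4 * (k + 1)) - delta
        assert abs(P(k, theta) - coro2_const(k, delta)) < 1e-12
```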
The next corollary of Theorem \ref{thm1} concerns the mean value of $\eta_k$:
\begin{corollary}\label{coro3}
For any positive integer $k$, $0<\theta < \frac{2k+1}{4(k+1)}$ and $\frac{1}{2} +\theta <a \leq 1$,
we have
\begin{align*}
\int_{T}^{T+H}\Big|\eta_k\Big(\frac{1}{2}+ it\Big)\sum_{n\leq T^{\theta}}\frac{\mu(n)}{n^{\frac{1}{2}+it}}\Big(1- \frac{\operatorname{log}{n}}{\theta\operatorname{log}{T}}\Big)\Big|^2 dt \sim \frac{P_k(\theta)}{4}H(\operatorname{log}{T})^{2(k-1)} \mbox{ as } T\rightarrow \infty.
\end{align*}
\end{corollary}
Furthermore, we generalize Theorems \ref{thm0} and \ref{thm1} by replacing $(Z^{(k)})^2$ with $Z^{(k_1)}Z^{(k_2)}$ for arbitrary non-negative integers $k_1, k_2$. In particular, we prove the following theorem:
\begin{theorem}\label{thm4}
Let $k_1, k_2$ be non-negative integers and $k=\min\{k_1,k_2\}$. Then, for $H=T^a$ with $\frac{1}{2} +\theta +\epsilon < a \leq 1$, where $X= T^\theta$ and $0 < \theta < \frac{2k+1}{4(k+1)}$, we have
\begin{align*}
\int_{T}^{T+H}Z^{(k_1)}(t)Z^{(k_2)}(t)\left|\Phi\left(\frac{1}{2}+ it\right)\right|^2dt \sim H\frac{\vartheta(k_1,k_2) }{2^{k_1+k_2}} \sum_{q,l\leq T^{\theta}}\frac{b_l\overline{b_q}}{lq}(l,q)&(\mathcal{F}{(l,q)} + \mathcal{G}(l,q)) \\
& \mbox{ as } T \rightarrow \infty,
\end{align*}
where $\vartheta(k_1,k_2)=(-1)^{(k_2-k_1)/2}$ if $k_1+k_2$ is even and $\vartheta(k_1,k_2)= 0$ if $k_1+k_2$ is odd. Here $\mathcal{F}(l,q)$ and $\mathcal{G}{(l,q)}$ are given by
\begin{align*}
&\mathcal{F}(l,q) = \operatorname{log}{\frac{T(l,q)^2}{2\pi lq}}\int_{0}^{1}\Big(x\operatorname{log}{\frac{T(l,q)^2}{2\pi lq}} - \operatorname{log}{ \frac{q}{l}} \Big)^{k_1} \Big(x\operatorname{log}{\frac{T(l,q)^2}{2\pi lq}} +\operatorname{log}{ \frac{q}{l}} \Big)^{k_2} dx,\\
& \mathcal{G}(l,q) =\Big(\frac{1}{2}\operatorname{log}{{\frac{q}{l}}}\Big)^{k_1}\Big(\frac{1}{2}\operatorname{log}{{\frac{l}{q}}}\Big)^{k_2+1} \int_{0}^{1}\left[(y+1)^{k_1}(1-y)^{k_2} - (1-y)^{k_1}(y+1)^{k_2}\right] dy.
\end{align*}
In particular, we get
\begin{align}\label{moment-prod-hardy}
&\int_{T}^{T+H} Z^{(k_1)}(t)Z^{(k_2)}(t)\Big|\sum_{n\leq T^\theta}\frac{\mu(n)}{n^{\frac{1}{2}+ it}}\Big(1 - \frac{\operatorname{log}{n}}{\theta \operatorname{log}{T}}\Big)\Big|^2dt\\
\nonumber & = \frac{\vartheta(k_1,k_2) }{2^{k_1+k_2}} \Big( 1 + \frac{1}{(k_1+k_2+1)\theta} + \frac{4 k_1 k_2 \theta}{3(k_1+k_2 -1)} + \operatorname{o}(1)\Big)H\Big(\operatorname{log}{\frac{T}{2\pi}}\Big)^{k_1+k_2}.
\end{align}
\end{theorem}
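For $k_1=k_2=k$ (so that $\vartheta=1$), the constant in \eqref{moment-prod-hardy} coincides with $P_k(\theta)$ of Theorem~\ref{thm1}; a numerical check:

```python
# Consistency check: the constant in (moment-prod-hardy) at k1 = k2 = k
# (where vartheta(k, k) = 1) coincides with P_k(theta) from Theorem thm1.

def P(k, theta):
    return (k**2) * theta / (3 * (2*k - 1) * 4**(k - 1)) \
        + 1 / (4**k * (2*k + 1) * theta) + 1 / 4**k

def diag_const(k, theta):
    # vartheta(k, k) = 1 and 2^(k1+k2) = 4^k when k1 = k2 = k
    return (1 / 2**(2*k)) * (1 + 1 / ((2*k + 1) * theta)
                             + 4 * k * k * theta / (3 * (2*k - 1)))

for k in (1, 2, 4):
    for theta in (0.1, 0.2):
        assert abs(diag_const(k, theta) - P(k, theta)) < 1e-12
```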
Applying these general results, we obtain (see Theorem \ref{thm5}) mean values of products of higher derivatives of the $\zeta$-function with a Dirichlet polynomial in short intervals.
\subsection{Mean values of products of higher derivatives of the Riemann zeta function with a Dirichlet polynomial}
In 1989, Conrey~\cite{Con} proved that more than two-fifths of the zeros of the Riemann zeta function are on the critical line. To obtain this result he showed the following:
Let $B(s):= \sum_{n\leq T^{\theta}}\frac{b(n)}{n^{s+R/(\operatorname{log} \, T)}}$, where $b(n)= \mu(n)P\Big(1-\frac{\operatorname{log}{n}}{\operatorname{log}{T^{\theta}}}\Big)$ with $0<\theta< 4/7$ and $P$ is a polynomial such that $P(0)=0, P(1)=1$. For any polynomial $Q$, consider $V(s):=Q\Big(-\frac{1}{\operatorname{log} \, T}\frac{d}{ds}\Big)\zeta{(s)}$. Then,
\begin{align}\label{C1}
\int_{T}^{2T}\Big|VB\Big(\frac{1}{2}-\frac{R}{\operatorname{log} \, T}+it\Big)\Big|^2 dt \sim c(P,Q,R)T \mbox{ as } T\rightarrow \infty,
\end{align}
where
$0<R \ll 1$, and
\begin{align*}
c(P,Q,R)= |P(1)Q(0)|^2+ \frac{1}{\theta}\int_{0}^{1}\int_{0}^{1}e^{2Ry}|Q(y)P'(x)+\theta Q'(y)P(x)+ \theta RQ(y)P(x)|^2\,dx\,dy.
\end{align*}
This mean square estimate was the key ingredient of his proof.
In~\cite[(7)]{CG5}, Conrey and Ghosh stated that for $B(s)$ defined as above with $R=0$, there exists a choice of $\theta $ and $b(n)$ such that
\begin{equation}\label{ECG1}
\int_2^{T} \left| \frac{\zeta^{(k)}(\frac{1}{2}+it)}{\operatorname{log}^k t} B\left(\frac{1}{2}+it \right)\right|^2dt \sim c_kT,
\end{equation}
where $c_k=\frac{1}{2} + \frac{\coth\left(\frac{k}{2}\sqrt{\frac{1+\frac{1}{2k}}{1-\frac{1}{2k}}}\right)}{2 \sqrt{1-\frac{1}{4k^2}}}=1+\operatorname{O}(\frac{1}{k^2}).$
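The stated behaviour $c_k=1+\operatorname{O}(k^{-2})$ is easily checked numerically:

```python
import math

# Numerical check that c_k = 1/2 + coth(.)/(2 sqrt(1 - 1/(4k^2))) = 1 + O(1/k^2).

def c(k):
    arg = (k / 2) * math.sqrt((1 + 1/(2*k)) / (1 - 1/(2*k)))
    coth = math.cosh(arg) / math.sinh(arg)
    return 0.5 + coth / (2 * math.sqrt(1 - 1/(4*k**2)))

for k in (2, 5, 10, 50):
    assert abs(c(k) - 1) < 2 / k**2
```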
The next theorem gives asymptotic mean values of products of higher derivatives of the Riemann zeta function with a Dirichlet polynomial in short intervals.
\begin{theorem}\label{thm5}
Let $m, n$ be non-negative integers and $k=\min\{m,n\}$. Then, for $ H=T^a$ with $\frac{1}{2} +\theta < a \leq 1$, where $X= T^\theta$ and $0<\theta < \frac{2k+1}{4(k+1)}$, we have
\begin{align*}
&\int_{T}^{T+H}\zeta^{(m)}\Big(\frac{1}{2}+it \Big)\zeta^{(n)}\Big(\frac{1}{2}-it \Big) \Big|\sum_{\ell\leq T^{\theta}}\frac{\mu{(\ell)}}{\ell^{\frac{1}{2}+it}}\Big(1- \frac{\operatorname{log}{\ell}}{\theta\operatorname{log}{T}}\Big)\Big|^2 dt\\
&= (-1)^{m+n}\Big[\frac{1}{2} + \frac{1}{\theta (m+n+1)} + \frac{mn\theta}{3(m+n-1)} + \operatorname{O}_{m,n}\Big(\frac{(\operatorname{log}\log{T})^{3}}{\operatorname{log}{T}}\Big)\Big]H \Big(\operatorname{log}{\frac{T}{2\pi}} \Big)^{m+n}.
\end{align*}
\end{theorem}
\begin{remark}
Note that in \eqref{ECG1} the range of $\theta$ is not specified. In Theorem \ref{thm5} we give the explicit upper bound $\theta < \frac{2k+1}{4(k+1)}$.
\end{remark}
Taking $m=n=k$, we obtain the mean value of the product of higher derivatives of the Riemann zeta function with the Dirichlet polynomial in the following corollary:
\begin{corollary}\label{coro5}
Let $ 0 < \theta < \frac{2k+1}{4(k+1)}$. Then, for any $H=T^a, \frac{1}{2} + \theta < a \leq 1$ we have
\begin{align*}
& \int_{T}^{T+H}\Big|\zeta^{(k)}\Big(\frac{1}{2}+it\Big)\sum_{n\leq T^{\theta}}\frac{\mu{(n)}}{n^{\frac{1}{2}+it}}\Big(1- \frac{\operatorname{log}{n}}{\theta\operatorname{log}{T}}\Big)\Big|^2dt \\
& = \Big(\frac{1}{2}+ \frac{1}{(2k+1)\theta}+ \frac{\theta k^2}{3(2k-1)} + \operatorname{O}_k\Big(\frac{(\operatorname{log}\log{T})^{3}}{\operatorname{log}{T}}\Big)\Big)H\Big(\operatorname{log}{\frac{T}{2\pi}}\Big)^{2k}.
\end{align*}
\end{corollary}
Corollary~\ref{coro5} shows that, for $P(x)=x$, $Q(x)=x^k$ with $k \geq 0$, and $\theta< \frac{2k+1}{4(k+1)}$, Conrey's mean square result \eqref{C1} extends to $R=0$ as well as to the short interval $[T, T+H]$ with $H=T^a$, $\frac{1}{2}< a \leq 1$.
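This comparison can be verified numerically: for $P(x)=x$ and $Q(x)=x^k$ with $k\geq 1$ (so that $Q(0)=0$), a direct numerical evaluation of $c(P,Q,0)$ reproduces the constant of Corollary~\ref{coro5}.

```python
# Numerical sketch (assumes k >= 1, so Q(0) = 0): Conrey's c(P, Q, R) at R = 0
# with P(x) = x, Q(x) = x^k, computed by a midpoint rule, versus the constant
# of Corollary coro5.

def conrey_c(k, theta, N=400):
    # c = |P(1)Q(0)|^2 + (1/theta) int int |Q(y)P'(x) + theta Q'(y)P(x)|^2 dx dy
    h = 1.0 / N
    total = 0.0
    for i in range(N):
        x = (i + 0.5) * h
        for j in range(N):
            y = (j + 0.5) * h
            total += (y**k + theta * k * y**(k - 1) * x) ** 2
    return total * h * h / theta

def coro5_const(k, theta):
    return 0.5 + 1 / ((2*k + 1) * theta) + theta * k**2 / (3 * (2*k - 1))

for k in (1, 2):
    assert abs(conrey_c(k, 0.2) - coro5_const(k, 0.2)) < 1e-3
```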
The article is organized as follows. In Section 2, we recall and prove some preliminary results needed for the proofs of our main results, which are then given in the final section.
\section{Preliminaries}
In this section, we establish several lemmas which are the main ingredients in the proofs of our theorems. Many of these lemmas are used to establish our main proposition (see Proposition \ref{L12}) at the end of this section.
We start by recalling the following lemma of Conrey, Ghosh and Gonek.
\begin{lemma}[Lemma 6, \cite{CGG}]\label{L1}
For a polynomial $Q$, we have
$$
\sum_{n\leq x}{\frac{\mu^2(n)}{\varphi(n)}}Q\left(\frac{\operatorname{log}{\frac{x}{n}}}{\operatorname{log}{x}}\right)= \operatorname{log}{x}\int_0^1Q(u)du,
$$
where $\varphi$ is the Euler function.
\end{lemma}
The next lemma is a variation of~\cite[Lemma 3.5]{Levi} and~\cite[Lemma 1]{PT}.
\begin{lemma}\label{EXL}
Let $\xi >0$, $\alpha$ be a non-negative integer and $\sqrt{T} \leq H \leq T/\operatorname{log}{T}$. We define $I(\xi,\alpha)$ by
\begin{align*}
I(\xi,\alpha) := \int_{T}^{T+H}\left(\frac{t}{e\xi}\right)^{it}\frac{dt}{(\operatorname{log}{\tau})^{\alpha}},
\end{align*}
where $\tau=\sqrt{\frac{t}{2 \pi}}.$
Then we get
\begin{align*}
I(\xi,\alpha)= \begin{cases} 2^\alpha e^{\frac{\pi i}{4}}\sqrt{2\pi \xi}e^{-i\xi}(\operatorname{log}{\frac{\xi}{2\pi}})^{-\alpha} + \operatorname{O}\left(R(\xi)(\operatorname{log}{T})^{-\alpha}\right) & \mbox{if } T\leq \xi \leq T+H, \\
\operatorname{O}\left((\operatorname{log}{T})^{-\alpha}R(\xi)\right) & \mbox{if } \xi\leq T \mbox{ or }\xi \geq T+H, \end{cases}
\end{align*}
where $$R(\xi)= \operatorname{O}(1)+ \operatorname{O}\left(\frac{T}{|T-\xi|+\sqrt{T}}\right)+ \operatorname{O}\left(\frac{T+H}{|T+H-\xi|+\sqrt{T+H}}\right).$$
\end{lemma}
\begin{proof}
Let $T\leq \xi \leq T+H$. We have the following identity:
\begin{align*}
\sum_{j=0}^{\alpha-1}\binom{\alpha}{j}\left(\operatorname{log}{\frac{\xi}{2\pi}}\right)^{j} \left(\operatorname{log}{\frac{t}{\xi}}\right)^{\alpha-j}& =\left(\operatorname{log}{\frac{\xi}{2\pi}} + \operatorname{log}{\frac{t}{\xi}}\right)^{\alpha}-\left(\operatorname{log}{\frac{\xi}{2\pi}}\right)^{\alpha}\\
&= \left(\operatorname{log}{\frac{t}{2\pi}}\right)^{\alpha}-\left(\operatorname{log}{\frac{\xi}{2\pi}}\right)^{\alpha}.
\end{align*}
Dividing both sides by $\left(\operatorname{log}{\frac{t}{2\pi}}\right)^{\alpha}$ and rearranging, we get
\begin{align*}
\left(\operatorname{log}{\frac{\xi}{2\pi}}\right)^{\alpha}\left(\operatorname{log}{\frac{t}{2\pi}}\right)^{-\alpha}= 1 - \left(\operatorname{log}{\frac{t}{2\pi}}\right)^{-\alpha}\sum_{j=0}^{\alpha-1}\binom{\alpha}{j}\left(\operatorname{log}{\frac{\xi}{2\pi}}\right)^{j} \left(\operatorname{log}{\frac{t}{\xi}}\right)^{\alpha-j}.
\end{align*}
Since $\operatorname{log}{\tau}=\frac{1}{2}\operatorname{log}{\frac{t}{2\pi}}$, from the above relation we get the following identity:
\begin{align}\label{identity}
\frac{1}{(\operatorname{log}{\tau})^{\alpha}}=\frac{2^\alpha}{(\operatorname{log}{\frac{\xi}{2\pi}})^{\alpha}}\left[1 - \left(\operatorname{log}{\frac{t}{2\pi}}\right)^{-\alpha}\sum_{j=0}^{\alpha-1}\binom{\alpha}{j}\left(\operatorname{log}{\frac{\xi}{2\pi}}\right)^{j} \left(\operatorname{log}{\frac{t}{\xi}}\right)^{\alpha-j}\right].
\end{align}
Setting $G(\xi,t)=\left({t}/{e\xi}\right)^{it} $ and using the identity~\eqref{identity} in the definition of $I(\xi, \alpha),$ we have
\begin{align*}
I(\xi,\alpha)=\frac{2^\alpha}{(\operatorname{log}{\frac{\xi}{2\pi}})^{\alpha}}\int_{T}^{T+H}G(\xi,t)dt- \sum_{j=0}^{\alpha-1}\binom{\alpha}{j}\left(\operatorname{log}{\frac{\xi}{2\pi}}\right)^{j-\alpha} \int_{T}^{T+H}\frac{G(\xi,t)\left(\operatorname{log}{\frac{t}{\xi}}\right)^{\alpha-j} }{(\operatorname{log}{\tau})^{\alpha}}dt.
\end{align*}
Since $G'(\xi,t)=iG(\xi,t)\operatorname{log}{({t}/{\xi})},$
applying integration by parts, we have
\begin{align*}
I(\xi,\alpha)=\frac{2^\alpha}{(\operatorname{log}{\frac{\xi}{2\pi}})^{\alpha}}\int_{T}^{T+H}G(\xi,t)dt +\operatorname{O}\left(\left|\sum_{j=0}^{\alpha-1}\binom{\alpha}{j}\frac{\left(\operatorname{log}{\xi}\right)^{j-\alpha}}{(\operatorname{log}{(T+H)})^{\alpha}}\left(\operatorname{log}{\frac{T+H}{\xi}}\right)^{\alpha-j-1}\right|\right).
\end{align*}
Estimating the binomial sum on the right-hand side of the above expression, the error term is
$$\ll (\operatorname{log}{(T+H)})^{-1}(\operatorname{log}{\xi})^{-\alpha} \ll (\operatorname{log}{T})^{-\alpha-1}.$$
Now, substituting the value of $\int_{T}^{T+H}G(\xi,t)dt$, which was evaluated by Levinson~\cite[Lemma 3.4, Eq.~3.7]{Levi}, we get the required result.
Let us consider the case $\xi \leq T$.
In this case, first using the fact that $\frac{d}{dt}G(\xi, t) = iG{(\xi, t)}\operatorname{log}{(t/\xi)}$ and then integrating by parts, we get
\begin{align*}
I(\xi,\alpha)& =\Big[ \frac{G(\xi,t)}{i\operatorname{log}{(t/\xi)}(\operatorname{log}{\tau})^{\alpha}}\Big]_{T}^{T+H} + \frac{\alpha}{i}\int_{T}^{T+H}\frac{G(\xi,t) }{t\operatorname{log}{(t/\xi)}(\operatorname{log}{\tau})^{\alpha-1}}dt\\
& \ll \frac{1}{\operatorname{log}{(T/\xi)}(\operatorname{log}{\tau_0})^{\alpha}} \ll \frac{R(\xi)}{(\operatorname{log}{\tau_0})^{\alpha}}.
\end{align*}
Similarly, we can obtain the estimate for the remaining case $\xi \geq T+ H$. This completes the proof of the lemma.
\end{proof}
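The identity \eqref{identity} underlying the proof is a purely algebraic consequence of $\operatorname{log}{\frac{\xi}{2\pi}}+\operatorname{log}{\frac{t}{\xi}}=\operatorname{log}{\frac{t}{2\pi}}$, and can be checked numerically:

```python
import math

# Numerical check of identity (identity): with A = log(xi/(2 pi)),
# B = log(t/(2 pi)) and log tau = B/2, the bracketed right-hand side
# equals (log tau)^(-alpha).

def lhs(t, alpha):
    return (0.5 * math.log(t / (2 * math.pi))) ** (-alpha)

def rhs(t, xi, alpha):
    A = math.log(xi / (2 * math.pi))
    B = math.log(t / (2 * math.pi))
    s = sum(math.comb(alpha, j) * A**j * math.log(t / xi) ** (alpha - j)
            for j in range(alpha))
    return (2**alpha / A**alpha) * (1 - s / B**alpha)

for alpha in (1, 2, 3):
    for t, xi in ((1e6, 9e5), (5e7, 6e7)):
        assert abs(lhs(t, alpha) - rhs(t, xi, alpha)) < 1e-12
```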
Throughout this article, $p$ denotes a prime number unless otherwise mentioned.
\begin{lemma}\label{L2}
For any integer $k\geq 1$ and any large square-free positive integer $n$ we have
\begin{equation*}
\sum_{p|n}{\frac{\operatorname{log}^k{p}}{p}}= \operatorname{O}((\operatorname{log}{\operatorname{log}{n}})^k).
\end{equation*}
\end{lemma}
\begin{proof}
For $k=1,2$ see~\cite[Lemmas 3.9 and 3.10]{Levi}. Observing that $\frac{(\operatorname{log}{x})^k}{x}$ is decreasing for $x\geq e^k$ and following the same argument as in~\cite[Lemmas 3.9 and 3.10]{Levi}, we can prove the lemma for $k\geq 3.$
\end{proof}
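Lemma \ref{L2} can be illustrated numerically (this is an illustration, not a proof) on primorials $n=\prod_{p\leq x}p$, for which $\operatorname{log}\operatorname{log}{n}\approx \operatorname{log}{x}$:

```python
import math

# Illustration of Lemma L2 on primorials n = prod_{p <= x} p:
# sum_{p | n} (log p)^k / p is of size (log log n)^k, where
# log log n = log theta(x) with theta(x) = sum_{p <= x} log p.

def primes_upto(x):
    sieve = bytearray([1]) * (x + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(x**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = b"\x00" * len(range(p*p, x + 1, p))
    return [p for p in range(2, x + 1) if sieve[p]]

for k in (1, 2, 3):
    for x in (100, 1000, 10000):
        ps = primes_upto(x)
        s = sum(math.log(p)**k / p for p in ps)
        loglog_n = math.log(sum(math.log(p) for p in ps))  # log log(primorial)
        assert s <= 2 * loglog_n**k
```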
\begin{lemma}\label{L4}
For any non-negative integer $q$,
\begin{equation*}
F_q(n):= \sum_{d|n}\frac{\mu(d)}{d}\operatorname{log}^q{d}= \operatorname{O}(f(n,1)(\operatorname{log}{\operatorname{log}{n}})^q),
\end{equation*}where $f(d,s):=\prod_{p|d}\left(1-\frac{1}{p^s}\right).$
\end{lemma}
\begin{proof}
The sum defining $F_q(n)$ runs over the square-free divisors of $n$; since $d$ is square-free, $\operatorname{log}{d}=\sum_{p|d}\operatorname{log}{p}$, so we write
$$F_q(n)= \sum_{d|n}\frac{\mu(d)}{d}\Big(\sum_{p|d}\operatorname{log}{p}\Big)^q.$$
Now, the $q$-fold product of $\sum_{p|d}\operatorname{log}{p}$ can be written in the following form:
$$\sum_{\substack{1\leq m\leq q\\ \alpha_1+ \cdots +\alpha_{m}=q}}\sum_{p_1\ldots p_m|d}\prod_{t=1}^{m}(\operatorname{log}{p_t})^{\alpha_t},$$
where $p_i\neq p_j$ for $i\neq j$, $i,j=1,\ldots, m$, and the $\alpha_i$, $i= 1,2,\ldots, m$, are positive integers. Note that the multinomial theorem gives a cleaner expression than this identity, but for our purpose we prefer the latter form. For a square-free number $d$, let $r$ be the square-free number with $p_{1}\cdots p_{m}r=d$; then, interchanging the summations, we write
\begin{align*}
F_q(n) = \sum_{\substack{1\leq m \leq q\\ \alpha_1+\cdots +\alpha_m=q}}\sum_{d|n}\frac{\mu(d)}{d}\sum_{\substack{p_{1}...p_{m}r=d\\ (r,p_{1} \cdots p_{m})=1}}\prod_{t=1}^{m}(\operatorname{log}{p_{t}})^{\alpha_{t}}.
\end{align*}
Since $(p_{1}...p_{m} , r) = 1$, we get the following:
\begin{align*}
F_q(n) &= \sum_{\substack{1\leq m \leq q\\ \alpha_1+\cdots +\alpha_m=q}}\sum_{\substack{p_{1} \cdots p_{m}r|n}}\frac{\mu(r)\mu(p_{1} \cdots p_{m})}{r p_{1}...p_{m}}\prod_{t=1}^{m}(\operatorname{log}{p_{t}})^{\alpha_{t}}\\
&=\sum_{\substack{1\leq m \leq q\\ \alpha_1+\cdots +\alpha_m=q}}\sum_{p_{1}...p_{m}|n}\frac{(-1)^m}{p_{1}...p_{m}}\prod_{t=1}^{m}(\operatorname{log}{p_{t}})^{\alpha_{t}}\sum_{r|\frac{n}{p_{1}...p_{m}}}\frac{\mu(r)}{r}.
\end{align*}
Note that $\sum_{d|n}\frac{\mu(d)}{d} = \prod_{p|n}\Big(1 - \frac{1}{p}\Big)$, so we obtain
\begin{align*}
F_q(n) &= f(n,1)\sum_{\substack{1\leq m \leq q\\ \alpha_1+\cdots +\alpha_m=q}}\sum_{p_{1}...p_{m}|n}(-1)^m\prod_{t=1}^{m}\frac{(\operatorname{log}{p_{t}})^{\alpha_{t}}}{p_{t}\left(1-\frac{1}{p_{t}}\right)} \\
&= f(n,1)\sum_{\substack{1\leq m \leq q\\ \alpha_1+\cdots +\alpha_m=q}}(-1)^m\sum_{p_{1}...p_{m}|n}\prod_{t=1}^{m}\frac{(\operatorname{log}{p_{t}})^{\alpha_{t}}}{p_t-1}.
\end{align*}
For $m=1$ the innermost sum on the right-hand side above is $\sum_{p|n}\frac{(\operatorname{log}{p})^{q}}{p-1}$ and we have
\begin{align*}
\sum_{p|n}\frac{(\operatorname{log}{p})^{q}}{p-1}= \sum_{p|n}\frac{(\operatorname{log}{p})^{q}}{p}\left(1-\frac{1}{p}\right)^{-1}= \sum_{p|n}\frac{(\operatorname{log}{p})^{q}}{p} + \operatorname{O}{(1)}.
\end{align*}
Thus, by Lemma \ref{L2} we obtain $$\sum_{p|n}\frac{(\operatorname{log}{p})^{q}}{p-1}= \operatorname{O}((\operatorname{log}{\operatorname{log}{n}})^q).$$
For $m\geq 2$ and $\sum_{t=1}^m\alpha_t=q$, we write
\begin{align*}
\sum_{p_{1}...p_{m}|n}\prod_{t=1}^{m}\frac{(\operatorname{log}{p_{t}})^{\alpha_{t}}}{p_t-1}= \operatorname{O}\left(\prod_{t=1}^m\sum_{p|n}\frac{(\operatorname{log}{p})^{\alpha_t}}{p-1}\right)= \operatorname{O}\left(\prod_{t=1}^m(\operatorname{log}\log{n})^{\alpha_t}\right) = \operatorname{O}((\operatorname{log}{\operatorname{log}{n}})^q).
\end{align*}
Hence, we have $F_q(n)= \operatorname{O}(f(n,1)(\operatorname{log}{\operatorname{log}{n}})^q).$
\end{proof}
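The identity $\sum_{d|n}\frac{\mu(d)}{d}=\prod_{p|n}\big(1-\frac{1}{p}\big)$ used in the proof can be checked directly for small square-free $n$:

```python
import math
from itertools import combinations

# Direct check of sum_{d|n} mu(d)/d = prod_{p|n} (1 - 1/p) for square-free n
# built from a given set of primes.

def check(primes):
    total = 0.0
    for r in range(len(primes) + 1):
        for combo in combinations(primes, r):
            total += (-1)**r / math.prod(combo)  # mu(d) = (-1)^r, d has r prime factors
    assert abs(total - math.prod(1 - 1/p for p in primes)) < 1e-12

check([2, 3, 5])
check([2, 3, 5, 7, 11, 13])
```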
Using Lemma~\ref{L4}, we obtain an upper bound for the higher derivatives of a certain function.
\begin{lemma}\label{EL5}
Let $g:\mathbb{C}\rightarrow \mathbb{C}$ be a function analytic at $1$. Then for any positive integers $n, k$ we have
$$\left.\left(\frac{g(s)}{f(n,s)}\right)^{(k)}\right|_{s=1}=\operatorname{O}\left(\frac{(\operatorname{log}\log{n})^k}{f(n,1)}\right).$$
\end{lemma}
\begin{proof} Let $\operatorname{Re}{(s)}> \frac{1}{2}$. As a first step, we evaluate the $m$-th derivative of $f(n,s)$. To do so, we take the logarithmic derivative of $f(n,s)$:
\begin{align*}
L(s) := \frac{f'(n,s)}{f(n,s)}= \sum_{p|n}\frac{\operatorname{log}{p}}{p^s-1}=\sum_{p|n}\frac{\operatorname{log}{p}}{p^s}\left(1-\frac{1}{p^s}\right)^{-1} =\sum_{p|n}\frac{\operatorname{log}{p}}{p^s} + \operatorname{O}_{|s|}(1).
\end{align*}
Now, differentiating $L(s)$ $m$ times, setting $s=1$, and then using Lemma \ref{L2}, we get
$$L^{(m)}(1)=\sum_{p|n}\frac{(-1)^{m}(\operatorname{log}{p})^{m+1}}{p} + \operatorname{O}(1)=\operatorname{O}\left((\operatorname{log}\log{n})^{m+1}\right).$$
Using Leibniz formula we have
\begin{align*}
f^{(m)}(n,s)= (f(n,s)L(s))^{(m-1)}= \sum_{k=0}^{m-1}\binom{m-1}{k}(f(n,s))^{(m-1-k)}L^{(k)}(s).
\end{align*}
Hence, we get
\begin{align}\label{le2}
f^{(m)}(n,1)=\operatorname{O}\left(f(n,1)(\operatorname{log}\log{n})^m\right).
\end{align}
By Fa\`a di Bruno's formula \cite[Eq.~1.1]{John}, we write
\begin{align*}
\left(\frac{1}{f(n,s)}\right)^{(m)}= \sum_{b_1,b_2,\ldots,b_m}\frac{m!\,c!\,(-1)^c}{b_1!b_2!\cdots b_m!}\frac{1}{(f(n,s))^{c+1}}\left(\frac{f^{(1)}(n,s)}{1!}\right)^{b_1}\cdots \left(\frac{f^{(m)}(n,s)}{m!}\right)^{b_m},
\end{align*}
where the sum is over all non-negative integers $b_1,\ldots,b_m$ such that $b_1+2b_2+\cdots +mb_m=m$ and $c=b_1+b_2+\cdots+b_m.$ For any positive integer $a$, applying the estimate \eqref{le2} for $f^{(a)}(n,1)$ in the above relation, we get
\begin{align}\label{le1}
\nonumber \left(\frac{1}{f(n,1)}\right)^{(m)}&=\sum_{b_1,b_2,\ldots,b_m}\frac{m!\,c!\,(-1)^c}{b_1!b_2!\cdots b_m!}\frac{1}{(f(n,1))^{c+1}}\operatorname{O}\left(f^c(n,1)(\operatorname{log}\log{n})^m\right)\\
&=\operatorname{O}\left(\frac{(\operatorname{log}\log{n})^m}{f(n,1)}\right).
\end{align}
Now, by the Leibniz formula and \eqref{le1}, we have
\begin{align*}
\left.\left(\frac{g(s)}{f(n,s)}\right)^{(k)}\right|_{s=1}=\sum_{m=0}^k\binom{k}{m}g^{(k-m)}(1)\left.\left(\frac{1}{f(n,s)}\right)^{(m)}\right|_{s=1}= \operatorname{O}\left(\frac{(\operatorname{log}\log{n})^k}{f(n,1)}\right).
\end{align*}
\end{proof}
Next, we prove one of the main lemmas needed for Theorem \ref{thm1}, which evaluates a partial sum of an arithmetic function.
\begin{lemma}\label{L7}
For any positive real number $x$ and natural numbers $k,d,$ we have
\begin{align*}
s_x(k):= \sum_{\substack{n\leq x\\ (n,d)=1}}\frac{\mu(n)}{n}\left(\operatorname{log}{\frac{x}{n}}\right)^k
=\begin{cases}
\frac{1}{f(d,1)} + E_1(x; 1)+ E_2(x;1) &\mbox{if }k=1,\\
\frac{k}{f(d,1)}\left(\operatorname{log}{x}\right)^{k-1} +E_1(x; k)+ E_2(x; k) & \mbox{if } k\geq 2,
\end{cases}
\end{align*}
where $f$ is as in Lemma \ref{L4},
$E_1(x; k)= \operatorname{O}\left(\frac{(\operatorname{log}{L})^{k+1}}{x^bf(d,1-b)}\right)$ for $k\geq 1,$ $E_2(x;1)=\operatorname{O}\left(\frac{\operatorname{log}{L}}{L^{10}f(d,1)}\right)$ and, for $k\geq 2$, $E_2(x; k)= \operatorname{O}\left(\frac{\operatorname{log}\log{d}}{f(d,1)}\left(\operatorname{log}{x}\right)^{k-2}\right)$, where $L:=\operatorname{log}{\tau_0}$ and $b:= \frac{1}{M\operatorname{log}{L}}$ for some suitably large constant $M$.
\end{lemma}
\begin{proof}
The idea of the proof is similar to the derivation of \cite[Eq: 11.3]{Levi}. Let us define the arithmetic function $a_n$ by
\begin{equation*}
a_n = \begin{cases} \mu(n) &\mbox{if } (n,d)=1, \\
0 & \mbox{if } (n,d)\geq 2. \end{cases}
\end{equation*}
Then the Dirichlet series associated with $a_n$ is $\frac{1}{\zeta(s)f(d,s)}$. By Perron's summation formula~\cite[Exercise $169,$ p. 228]{Tene}, we write
\begin{equation*}
s_x(k)= \frac{k!}{2\pi i}\int_{2-i\infty}^{2+i\infty}\frac{x^{s-1}ds}{(s-1)^{k+1}\zeta(s)f(d,s)}.
\end{equation*}
Now, we set $L=\operatorname{log}{\tau_0}$ and $b= \frac{1}{M\operatorname{log}{L}}$ for some large enough constant $M.$ We move the path of integration to the new path obtained by joining the following line segments:
\begin{enumerate}
\item $L_1:=\{s=1+it: -\infty < t \leq -L^{10}\}$ ,
\item $L_2:=\{s=\sigma-iL^{10}: 1\geq \sigma \geq 1-b\}$,
\item $L_3:=\{s=1-b+it: -L^{10} < t \leq L^{10}\}$,
\item $L_4:=\{s=\sigma+iL^{10}: 1-b\leq \sigma\leq 1\}$,
\item $L_5:=\{s=1+it: L^{10}\leq t <\infty\}.$
\end{enumerate}
Let $Q_i$ denote the integral of the integrand along the line segment $L_i$ for $i=1,2,\ldots,5$. Then $s_x(k)$ can be written as
\begin{equation*}
\frac{s_x(k)}{k!}= Q_0+Q_1+Q_2+Q_3+Q_4+Q_5,
\end{equation*}
where $Q_0$ is the residue of the function $\frac{x^{s-1}z(s)}{(s-1)^kf(d,s)}$ at $s=1$, where $z(s)= \frac{1}{(s-1)\zeta(s)}$. Clearly, the functions $z(s)$, $f(d,s)$ are analytic on $\mathbb{C}$ with $z(1)=1$ and $f(d,1) \neq 0$. So, we have
\begin{align*}
Q_0= \frac{1}{(k-1)!}\frac{d^{k-1}}{ds^{k-1}}\left(\frac{x^{s-1}z(s)}{f(d,s)}\right)_{s=1}.
\end{align*}
Applying Leibniz rule we obtain
$$Q_0 = \frac{1}{(k-1)!}\sum_{j=0}^{k-1}\binom{k-1}{j}(\operatorname{log}{x})^{k-1-j}\left(\frac{z(1)}{f(d,1)}\right)^{(j)}.$$
By using Lemma \ref{EL5} we get
\begin{align*}
Q_0 = \frac{(\operatorname{log}{x})^{k-1}}{(k-1)!f(d,1)} + \operatorname{O}\left(\sum_{j=1}^{k-1}(\operatorname{log}{x})^{k-1-j}\frac{(\operatorname{log}\log{d})^j}{f(d,1)}\right).
\end{align*}
Now, we obtain upper bounds for the other $Q_i$'s by using the estimate of ${\zeta(s)}^{-1}$ in the zero-free region of the zeta function. For $\sigma>1-\frac{A}{\operatorname{log}{t}}$ we have ${\zeta(s)}^{-1} = \operatorname{O}(\operatorname{log}{t})$ (see~\cite[p. 60]{Titc}). So, for $s\in L_1,L_5$ we have ${\zeta(s)}^{-1} = {\zeta(1+it)}^{-1} = \operatorname{O}(\operatorname{log}{t})$, and for $s\in L_2,L_3,L_4$ we have ${\zeta(s)}^{-1} = \operatorname{O}(\operatorname{log}{L}).$ Using these bounds we deduce the following:
\begin{align*}
Q_1,Q_5 \ll \frac{1}{f(d,1)}\int_{L^{10}}^{\infty}\frac{\operatorname{log}{t}}{t^{k+1}}dt \ll \frac{\operatorname{log}{L}}{f(d,1)L^{10k}} \quad\text{[Using integration by parts]}.
\end{align*}
Also, for $Q_2, Q_4$ we get
\begin{align*}
Q_2,Q_4 &\ll \frac{\operatorname{log}{L}}{f(d,1-b)}\int_{1-b}^1\frac{x^{\sigma-1}d\sigma}{\left((\sigma-1)^2+ L^{20}\right)^{\frac{k+1}{2}}} \ll \frac{\operatorname{log}{L}}{f(d,1-b)}\int_{0}^b\frac{d\sigma}{\left(\sigma^2+ L^{20}\right)^{\frac{k+1}{2}}}.
\end{align*}
By the change of variable $\sigma= L^{10}\tan{\theta}$, we get
\begin{align*}
Q_2,Q_4 \ll \frac{\operatorname{log}{L}\tan^{-1}{\frac{b}{L^{10}}}}{f(d,1-b)L^{10k}} \ll \frac{\operatorname{log}{L}}{f(d,1-b)L^{10k}}.
\end{align*}
The last inequality holds because $\tan^{-1}$ is a bounded function.
Now, the remaining case is $Q_3$ and it can be estimated as follows:
\begin{align*}
Q_3 & \ll \frac{\operatorname{log}{L}}{x^bf(d,1-b)} \int_{-L^{10}}^{L^{10}}\frac{dt}{(t^2+ b^2)^{\frac{k+1}{2}}} \ll \frac{\operatorname{log}{L}}{x^bf(d,1-b)} \int_{0}^{L^{10}}\frac{dt}{(t^2+ b^2)^{\frac{k+1}{2}}}.
\end{align*}
Thus, by the change of variable $t= b\tan \,{\theta}$,
\begin{align*}
Q_3 \ll \frac{\operatorname{log}{L}}{x^bf(d,1-b)b^{k}}\int_{0}^{\frac{\pi}{2}}\cos^{k-1}\theta d\theta \ll \frac{(\operatorname{log}{L})^{k+1}}{x^bf(d,1-b)}.
\end{align*}
Clearly, the order of magnitude of $Q_3$ is larger than those of $Q_i$, $i=1,2,4,5$, whenever $x\ll T^c$, where $c$ is a fixed positive constant. Hence, the result follows.
\end{proof}
\begin{lemma}\label{L9}
For any integer $k \geq 0$, we have
\begin{equation*}
\sum_{n\leq x}\frac{\operatorname{log}^k{n}}{n}=\frac{\operatorname{log}^{k+1}{x}}{k+1}+ (-1)^kk!\gamma_k^{'} + \operatorname{O}\left(\frac{\operatorname{log}^{k}{x}}{x}\right),
\end{equation*}
where $\gamma_k^{'}$ is a Stieltjes constant.
\end{lemma}
For details of the proof, the reader may see \cite[pp. 19--20]{Iwa}.
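Lemma \ref{L9} is easy to sanity-check numerically. The following illustrative snippet (an aside, not part of any proof) verifies the $k=0$ and $k=1$ cases, using the standard numerical values of the Euler--Mascheroni constant and the first Stieltjes constant.

```python
import math

# Illustrative numerical check (not part of the proof) of Lemma L9:
#   sum_{n<=x} (log n)^k / n = (log x)^{k+1}/(k+1) + (-1)^k k! gamma'_k
#                              + O((log x)^k / x).
# For k = 0 the constant is the Euler--Mascheroni constant gamma;
# for k = 1 the constant (-1)^1 1! gamma'_1 is the first Stieltjes
# constant gamma_1 ~ -0.0728158.

x = 10**5
s0 = sum(1.0 / n for n in range(1, x + 1))
s1 = sum(math.log(n) / n for n in range(1, x + 1))

gamma0 = 0.5772156649015329   # Euler--Mascheroni constant
gamma1 = -0.0728158454836767  # first Stieltjes constant

err0 = abs(s0 - (math.log(x) + gamma0))
err1 = abs(s1 - (math.log(x) ** 2 / 2 + gamma1))
print(err0, err1)  # both are of size O((log x)^k / x)
```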
\begin{lemma}\label{L10}
Suppose $k$ is any non-negative integer and $\tau_0=\sqrt{\frac{T}{2\pi}}$. Let $t$ be a real number in $[T, T+H]$ such that $H=T^a$, where $\frac{1}{2} \leq a < \frac{4k+3}{4(k+1)}$. Then we have
\begin{equation*}
e^{-i\theta(t)}(-i)^kZ^{(k)}(t)=\sum_{n\leqslant \tau_0}{\dfrac{(\operatorname{log}{\tau/n})^k}{n^{\frac{1}{2}+it}}} + e^{-2i\theta(t)}\sum_{n\leqslant \tau_0}{\dfrac{(\operatorname{log}{n/\tau})^k}{n^{\frac{1}{2}-it}}}+ \operatorname{O}(Y_k(T)),
\end{equation*}
where $\tau = \sqrt{\frac{t}{2\pi}}$, $\theta(t)$ is defined in \eqref{hardyz} and
\begin{align*}
Y_k(T)\leq \begin{cases} T^{-\frac{1}{4}}\operatorname{log}^k{T} & \mbox{ if } \frac{1}{2} \leq a \leq \frac{2k+1}{2(k+1)},\\
\Big(\frac{H}{T}\Big)^{k+1}T^{\frac{1}{4}} & \mbox{ if } \frac{2k+1}{2(k+1)} < a < \frac{4k+3}{4(k+1)}.
\end{cases}
\end{align*}
\end{lemma}
\begin{proof} For $k=0,$ the result follows from \cite[Eq: 4.2, p.98]{Ivic}. For $k\geq 1,$ we obtain the lemma by using~\cite[Eq:22]{Hall} of R. R. Hall, which gives an elegant approximate functional equation for $Z^{(k)}(t)$: for $k\in \mathbb{N}$, $2\pi xy=t$ and $x,y\geq 1$ we have
\begin{equation*}
e^{-i\theta(t)}(-i)^kZ^{(k)}(t)=\sum_{n\leqslant x}{\dfrac{(\operatorname{log}{\tau/n})^k}{n^{\frac{1}{2}+it}}} + \chi\left(\frac{1}{2}+it\right)\sum_{n\leqslant y}{\dfrac{(\operatorname{log}{n/\tau})^k}{n^{\frac{1}{2}-it}}}+\operatorname{O}\left(\left(x^{-\frac{1}{2}}+{ y^{-\frac{1}{2}}}\right)\operatorname{log}^k{t}\right).
\end{equation*}
Putting $x=y=\sqrt{\frac{t}{2\pi}}$ for $T\leq t\leq T+H$ in the above equation, we get
\begin{align}\label{zkap}
e^{-i\theta(t)}(-i)^kZ^{(k)}(t)&=\sum_{n\leqslant \tau}{\dfrac{(\operatorname{log}{\tau/n})^k}{n^{\frac{1}{2}+it}}} + \chi\left(\frac{1}{2}+it\right)\sum_{n\leqslant \tau}{\dfrac{(\operatorname{log}{n/\tau})^k}{n^{\frac{1}{2}-it}}}+ \operatorname{O}(T^{-\frac{1}{4}}\operatorname{log}^k{T}).
\end{align}
Now, $$\left| \sum_{\tau_0<n\leq \tau}{\dfrac{(\operatorname{log}{\tau/n})^k}{n^{\frac{1}{2}+it}}}\right|\leq \sum_{\tau_0<n\leq \tau}{\dfrac{(\operatorname{log}{\tau/n})^k}{n^{\frac{1}{2}}}}
<\sum_{\tau_0<n\leq \tau}{\dfrac{(\frac{1}{2}\operatorname{log}{\frac{T+H}{T}})^k}{\tau_0^{\frac{1}{2}}}} \ll \Big(\frac{H}{T}\Big)^k\frac{1}{\tau_0^{\frac{1}{2}}}\sum_{\tau_0<n\leq \sqrt{\frac{T+H}{2\pi}}}1.$$
But, $\sqrt{\frac{T+H}{2\pi}}=\tau_0\sqrt{1+\frac{H}{T}} \leq \tau_0\left(1 + \frac{H}{T}\right) $ gives
\begin{align}\label{zkzp1}
\left| \sum_{\tau_0<n\leq \tau}{\dfrac{(\operatorname{log}{\tau/n})^k}{n^{\frac{1}{2}+it}}}\right| \ll \Big(\frac{H}{T}\Big)^k \frac{1}{\sqrt{\tau_0}}\frac{\tau_0H}{T} \ll \Big(\frac{H}{T}\Big)^{k+1}T^{\frac{1}{4}}.
\end{align}
Since $\left|\chi\left(\frac{1}{2}+it\right)\right| =1,$ the result follows by comparing the error term of \eqref{zkap} with the estimate \eqref{zkzp1}.
\end{proof}
\begin{lemma}\label{L14}
For any integer $k\geq 0$, real $\sigma\geq \frac{1}{2}$, and $T\leq t\leq 2T$, we have
\begin{align*}
\zeta^{(k)}(s) \ll \left(t^{\frac{1-\sigma}{3}}+1\right)(\operatorname{log}{t})^{k+1}.
\end{align*}
\end{lemma}
\begin{proof}
The well-known convexity bound for $\zeta(s)$ in $\sigma \geq \frac{1}{2},~ T\leq t\leq 2T$ is $\zeta(s)\ll \left(t^{\frac{1-\sigma}{3}}+1\right)\operatorname{log}{t}$ (see \cite[eq: 2.4]{GI}). By Cauchy's integral formula we write
\begin{align*}
\zeta^{(k)}(s)= \frac{k!}{2\pi i}\int_{|z-s|=\rho}\frac{\zeta(z)}{(z-s)^{k+1}}dz,
\end{align*}
where $\rho$ is a suitably chosen small radius of a circle centred at $s$. Then we get the upper bound
\begin{align*}
\zeta^{(k)}(s) \ll \frac{\left(t^{\frac{1-\sigma}{3}}+1\right)\operatorname{log}{t}}{\rho^k}.
\end{align*}
Taking $\rho = \frac{1}{\operatorname{log}{t}}$ and $T\leq t\leq 2T$, we get the required result.
\end{proof}
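The contour-integral step in the proof above is an instance of the Cauchy integral formula for derivatives. As an illustrative aside (with $f=\exp$ as an arbitrary analytic test function, an assumption for demonstration only; we do not evaluate $\zeta$ itself), the formula can be checked numerically:

```python
import cmath
import math

# Illustrative check of the Cauchy integral formula for derivatives,
#   f^{(k)}(s) = k!/(2*pi*i) * \oint_{|z-s|=rho} f(z) (z-s)^{-(k+1)} dz,
# which underlies the bound f^{(k)}(s) << sup|f| / rho^k used above.
# Here f = exp is an arbitrary test function (assumption for illustration).

def cauchy_derivative(f, s, k, rho=0.5, n=64):
    # Parametrize z = s + rho*e^{i theta}; then dz = i*rho*e^{i theta} dtheta
    # and the integrand reduces to f(z) * rho^{-k} * e^{-i k theta} / (2*pi).
    total = 0j
    for j in range(n):
        theta = 2 * math.pi * j / n
        total += f(s + rho * cmath.exp(1j * theta)) * cmath.exp(-1j * k * theta)
    return math.factorial(k) * total / (n * rho**k)

s = 0.3 + 0.2j
approx = cauchy_derivative(cmath.exp, s, k=3)
error = abs(approx - cmath.exp(s))  # exp is its own k-th derivative
print(error)
```

The trapezoidal rule on a circle converges spectrally fast for analytic integrands, so even $n=64$ sample points give essentially machine precision here.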
\begin{lemma}\label{L15}
Let $\mathscr{D}:= \left\lbrace s: \frac{1}{2}\leq\sigma\leq m,~ T\leq t\leq 2T\right\rbrace$ with $m=\operatorname{o}(T)$. For $k\geq 1$ we get $$\lambda_k(s)=\left(\operatorname{log}{\tau}\right)^{k-1} + \operatorname{O}\left(T^{-1}(\operatorname{log}{T})^{k-1}\right),$$
where $\tau = \sqrt{\frac{t}{2\pi}}$.
\end{lemma}
\begin{proof}
First, we expand the recurrence relation~\eqref{lambda-recu} of $\lambda_k$ for $k\geq 3$. By induction we write
\begin{align*}
\lambda_k(s)=\lambda(s)^{k-1} + \Lambda_k{(s)} \mbox{ with } \Lambda_k(s)=\sum_{m=0}^{k-3}(\lambda(s))^{k-3-m}P_{m+1}(\lambda)(s)
\end{align*}
and $P_n(\lambda)$ is given by the expression
\begin{align*}
P_n(\lambda):=\sum_{(m_1,m_2,...,m_n)}a_{(m_1,m_2,...,m_n)}\prod_{j=1}^n \left(\lambda^{(j)}\right)^{m_j},
\end{align*}
with non-negative integers $a_{(m_1,m_2,...,m_n)}$ for $0\leq m_j\leq n-1$ and $\sum_{j=1}^njm_j\leq n$.
Now, by taking the logarithmic derivative of $\chi(s)$ and using~\cite[Eq: 3.1, 3.2]{MT}, we write
\begin{align*}
\omega(s)= \frac{\pi}{2}\tan{\frac{\pi s}{2}} - \operatorname{log}{\frac{s}{2\pi}} +\frac{1}{2s}+\operatorname{O}\left(\frac{1}{|s|^2}\right),
\end{align*}
where $\sigma>0.$ For $s\in \mathscr{D}$ it holds that $\tan{\frac{\pi s}{2}} = i + \operatorname{O}(e^{-\pi t})$. Thus, in $\mathscr{D}$ we obtain
\begin{align*}
\omega(s)= - \operatorname{log}{\frac{|s|}{2\pi}} + \operatorname{O}\left(\frac{1}{|s|}\right) \mbox{ and } \omega^{'}{(s)} = \frac{\pi^2}{4}\sec^2{\frac{\pi s}{2}} + \operatorname{O}\left(\frac{1}{|s|}\right)= \operatorname{O}\left(\frac{1}{|s|}\right).
\end{align*}
Since $\lambda(s)=\dfrac{\omega'}{\omega}(s)- \frac{1}{2}\omega(s)$, we get
$\lambda(s)= \frac{1}{2}\operatorname{log}{\frac{|s|}{2\pi}} + \operatorname{O}\left(\frac{1}{|s|}\right)= \operatorname{log}{\tau} + \operatorname{O}\left(\frac{1}{T}\right)$. Also, for higher derivative of $\lambda(s)$ we write
\begin{align*}
\lambda^{(k)}(s)= \left(\frac{\omega'}{\omega}\right)^{(k)}(s) - \frac{1}{2}\omega^{(k)}(s).
\end{align*}
For $s \in \mathscr{D}$ we obtain
\begin{align*}
\lambda^{(k)}(s) = \operatorname{O}\left(T^{-k}\right).
\end{align*}
Hence, for $k\geq 1$ and $s\in \mathscr{D}$ we get $$\lambda_k(s)=\left(\operatorname{log}{\tau}\right)^{k-1} + \operatorname{O}\left(T^{-1}(\operatorname{log}{T})^{k-2}\right),$$ which in particular implies the bound stated in the lemma.
\end{proof}
Now, we state a proposition which will be useful in proving our results. To serve our purpose, we define $G_k(t):=Z^{(k)}{(t)}(\operatorname{log}{\tau})^{-k}$ and
$$J(T):= \int_{T}^{T+H}{G_{k_1}(t)G_{k_2}(t)\left|\Phi\left(\frac{1}{2}+it\right)\right|^2}dt,$$
where $\Phi$ is the Dirichlet polynomial as in \eqref{Dirpol}.
\begin{proposition}\label{L12}
Let $k_1, k_2$ be non-negative integers such that $k:=\min(k_1, k_2)$. For $H =T^{a}, \frac{1}{2} < a <\frac{4k+3}{4(k+1)}$ we have
\begin{align*}
J(T) &=2H\vartheta(k_1,k_2)\sum_{r_1=0}^{k_1}\sum_{r_2=0}^{k_2}\binom{k_1}{r_1}\binom{k_2}{r_2}\frac{(-1)^{r_1+r_2}J(r_1,r_2)}{(\operatorname{log}{\tau_0})^{r_1+r_2}}\\
&+ \operatorname{O}\left(\frac{HY_k(T)X^\epsilon}{(\operatorname{log}{\tau_0})^{k-3}} +X^{1+2\epsilon}\sqrt{T} \operatorname{log}^3{T}\right),
\end{align*}
where
\begin{align*}
J(r_1, r_2):=\sum_{\substack{0\leq m_1\leq r_1\\0\leq m_2\leq r_2}}\binom{r_1}{m_1}\binom{r_2}{m_2}\sum_{\ l,q\leq X}\frac{b_l\overline{b}_q}{lq}(l,q)\left(\operatorname{log}{\frac{l}{(l,q)}}\right)^{m_1}\left(\operatorname{log}{\frac{q}{(l,q)}}\right)^{m_2}\sum_{y\leq \frac{\tau_0(l,q)}{q}}\frac{\operatorname{log}^{r}{y}}{y},
\end{align*}
and $\epsilon$ is as in \eqref{Dirpol} and $Y_k$ is as in Lemma \ref{L10}.
Here $\vartheta(k_1,k_2) = 0$ if $k_1+k_2 $ is odd and $\vartheta(k_1,k_2) = (-1)^{\frac{k_2-k_1}{2}}$ if $k_1+k_2 $ is even. Moreover, $r:= r_1+r_2-m_1-m_2$.
In particular, choose $\Phi(s)$ to be a Dirichlet polynomial with $n$-th coefficient $b_n=\mu{(n)}\operatorname{log}{(X/n)}(\operatorname{log}{X})^{-1}$ of length $X= T^{\theta}$ for $0< \theta < \frac{2k+1}{4(k+1)}$. Then for $H= T^a$ with $\frac{1}{2}+\theta < a <\frac{4k+3}{4(k+1)}$ and $k_1, k_2 \geq 1$, we have
\begin{align}\label{asym-sp}
J(T)= \vartheta(k_1,k_2)\left(1 + \frac{1}{\theta (k_1+k_2+1)} + \frac{4 k_1k_2 \theta }{3(k_1+k_2-1)} + \operatorname{o}(1)\right)H\qquad\text{as}~ T \rightarrow \infty.
\end{align}
Also, we have
\begin{align}\label{ext-sp}
J(T) = \begin{cases}
H(1+ \frac{1}{\theta} + \operatorname{o}(1)), \mbox{ if } k_1 = 0, k_2 =0,\\
\operatorname{o}(H) , \mbox{ if } k_1 = 0 , k_2 =1,\\
\vartheta{(0,k_2)} H\Big(1 +\frac{1}{(k_2+1)\theta} +\operatorname{o}(1)\Big), \mbox{ if } k_1 = 0 , k_2 \geq 2.
\end{cases}
\mbox{as } T \rightarrow \infty.
\end{align}
\end{proposition}
\begin{remark}\label{rem2}
From \eqref{final-moment} and \eqref{final-sepcial} we see that if we write the formulas \eqref{asym-sp} and \eqref{ext-sp} as a sum of a main term and an error term, then the error term would be $\operatorname{O}\Big(\frac{H(\operatorname{log}\log{T})^3}{\operatorname{log}{T}}\Big)$.
\end{remark}
\begin{proof}
Since $\chi\left(\frac{1}{2}+it\right)=e^{-2i\theta(t)}$, using Lemma \ref{L10} we write
\begin{equation}\label{nn1}
G_{k}(t)=F_k(t)+\overline{F_k}(t) + \operatorname{O}\left(\frac{Y_k(T)}{\operatorname{log}^k\tau_0}\right),\quad \mbox{ where } F_k(t):=\frac{e^{i\theta(t)}i^k}{\operatorname{log}^k\tau}\sum_{n\leqslant \tau_0}{\dfrac{(\operatorname{log}{\tau/n})^k}{n^{\frac{1}{2}+it}}}.
\end{equation}
For any two non-negative integers $k_1$ and $k_2$ we consider the product $G_{k_1}(t)G_{k_2}(t)$. Using \eqref{nn1} we get,
\begin{align}\label{n1}
J(T)&= 2\mathfrak{R}\left(\int_{T}^{T+H}{F_{k_2}(t)\overline{F_{k_1}}(t)\left|\Phi\left(\frac{1}{2}+it\right)\right|^2}dt + \int_{T}^{T+H}{F_{k_1}(t)F_{k_2}(t)\left|\Phi\left(\frac{1}{2}+it\right)\right|^2}dt \right)\\
\nonumber &+ \operatorname{O}\left(\int_{T}^{T+H}{\left(\frac{Y_{k_2}(T)|F_{k_1}(t)|}{(\operatorname{log}{\tau_0})^{k_1}}+\frac{Y_{k_1}(T)|F_{k_2}(t)|}{(\operatorname{log}{\tau_0})^{k_2}}\right)\left|\Phi\left(\frac{1}{2}+it\right)\right|^2}dt\right)\\
&\nonumber + \operatorname{O}\left(\frac{Y_{k_1}(T)Y_{k_2}(T)}{(\operatorname{log}{\tau_0})^{k_1+k_2}}\int_{T}^{T+H}{\left|\Phi\left(\frac{1}{2}+it\right)\right|^2}dt\right).
\end{align}
By using the mean value theorem for Dirichlet polynomials \cite[Theorem 5.2]{Ivic}, we estimate the last $\operatorname{O}$-term of \eqref{n1} as $ \operatorname{O}\left(\frac{HX^{2\epsilon}Y_{k_1}(T)Y_{k_2}(T)}{(\operatorname{log}{\tau_0})^{k_1+k_2-1}}\right)$. Applying the Cauchy--Schwarz inequality to the first $\operatorname{O}$-term of \eqref{n1}, we get the upper bound
\begin{align*}
\ll \left(\int_{T}^{T+H}{\Big(\frac{Y_{k_2}^2(T)|F_{k_1}^2(t)|}{(\operatorname{log}{\tau_0})^{2k_1}}+\frac{Y_{k_1}^2(T)|F_{k_2}^2(t)|}{(\operatorname{log}{\tau_0})^{2k_2}}\Big)\Big|\Phi\Big(\frac{1}{2}+it\Big)\Big|^2}dt\right)^{\frac{1}{2}}\Big(\int_{T}^{T+H}{\Big|\Phi\Big(\frac{1}{2}+it\Big)\Big|^2}dt\Big)^{\frac{1}{2}}.
\end{align*}
Again, by using \cite[Theorem 5.2]{Ivic} on the second integral in the above expression, we obtain the estimate for the first $\operatorname{O}$-term of \eqref{n1} as
$$ \operatorname{O}\left( \sqrt{H}X^\epsilon\operatorname{log}{\tau_0} \left(\int_{T}^{T+H}{\left(\frac{Y_{k_2}^2(T)|F_{k_1}^2(t)|}{(\operatorname{log}{\tau_0})^{2k_1}}+\frac{Y_{k_1}^2(T)|F_{k_2}^2(t)|}{(\operatorname{log}{\tau_0})^{2k_2}}\right)\left|\Phi\left(\frac{1}{2}+it\right)\right|^2}dt\right)^{\frac{1}{2}}\right).$$
We define $E, E'$ by
\begin{align}\label{p2eq11}
E= \int_{T}^{T+H}{F_{k_1}(t)F_{k_2}(t)\left|\Phi\left(\frac{1}{2}+it\right)\right|^2}dt
\end{align}
and
\begin{align}\label{p2eq111}
E'= \int_{T}^{T+H}{F_{k_1}(t)\overline{F_{k_2}}(t)\left|\Phi\left(\frac{1}{2}+it\right)\right|^2}dt.
\end{align}
Thus,
\begin{align}\label{neq:1}
J(T)= 2\mathfrak{R}(E + E') + \operatorname{O}\left(\sqrt{HE^{''}}X^{\epsilon}\operatorname{log}{\tau_0}\right),
\end{align}
where
\begin{align*}
E^{''} = \int_{T}^{T+H}{\left(\frac{Y_{k_2}^2(T)|F_{k_1}^2(t)|}{(\operatorname{log}{\tau_0})^{2k_1}}+\frac{Y_{k_1}^2(T)|F_{k_2}^2(t)|}{(\operatorname{log}{\tau_0})^{2k_2}}\right)\left|\Phi\left(\frac{1}{2}+it\right)\right|^2}dt.
\end{align*}
Now, we have to evaluate $E, E'$.
First of all we write
\begin{align}\label{xxxx}
E=\sum_{l,q \leq X}\frac{b_lb_q}{\sqrt{lq}}\int_{T}^{T+H}{F_{k_1}(t)F_{k_2}(t)\left(\dfrac{q}{l}\right)^{it}}dt .
\end{align}
Naturally, our next task is the evaluation of the integral on the right-hand side above. Thus, we consider the following:
\begin{align}\label{p2eq12}
E_1:= \int_T^{T+H}F_{k_1}(t)F_{k_2}(t)\left(\dfrac{\mu_2}{\mu_1}\right)^{it}dt,
\end{align}
where $\mu_1, \mu_2 \leq X$ with $(\mu_1, \mu_2)=1$. It is clear that for the choice $\mu_1=l/(l,q)$ and $\mu_2=q/(l,q)$, the integral $E_1$ is the same integral that appears in \eqref{xxxx}. For $-1\leq \sigma \leq 2, t\geq 2,$ by using Stirling's formula we write
\begin{equation}\label{proeq1}
\chi{(s)}=\left(\frac{2\pi}{t}\right)^{\sigma+it-1/2}e^{i(t+\pi/4)}
\left(1+\operatorname{O}{\left(\frac{1}{t}\right)}\right).
\end{equation}
For details on Stirling's formula one can see ~\cite[Corollary II.0.13]{Tene}.
Since $ e^{-2i\theta(t)}=\chi\left(\frac{1}{2}+it\right),$ from \eqref{proeq1} we write $E_1$ explicitly as
\begin{align}\label{n0001}
E_1=\frac{i^{k_1+k_2}}{e^{\pi i/4}}\sum_{r_1=0}^{k_1}\sum_{r_2=0}^{k_2}\binom{k_1}{r_1}\binom{k_2}{r_2}(-1)^{r_1+r_2}H(r_1,r_2)+ \operatorname{O}(\tau_0 \operatorname{log}{T}),
\end{align}
where $$H(r_1,r_2):=\sum_{\substack{ m,n\leq \tau_0}}\frac{\operatorname{log}^{r_1}{m}\operatorname{log}^{r_2}{n}}{\sqrt{nm}}I\left(\frac{2\pi mn\mu_1}{\mu_2}, r_1+r_2\right)$$ and $I(\alpha , \beta)$ is defined in Lemma \ref{EXL}. Now, we compute $E_1$ in the various cases. For the case $\mu_2/\mu_1<1$, if we restrict $$\tau_0^2\mu_2/\mu_1 \leq mn \leq \tau_1^2\mu_2/\mu_1,$$ then by Lemma \ref{EXL} we get $$ E_1=E_1^{'}+E_1^{''},$$
where
$$E_1^{''}= \left(\operatorname{log}{T}\right)^{-k_1- k_2} \sum_{\substack{ m,n\leq \tau_0}}\frac{\operatorname{log}^{k_1}{m}\operatorname{log}^{k_2}{n}}{\sqrt{nm}}R{\left(\frac{2\pi mn\mu_1}{\mu_2}\right)}$$ and
\begin{align*}
E_1^{'}=2\pi i^{k_1+k_2} \sqrt{\frac{\mu_1}{\mu_2}}\sum_{r_1=0}^{k_1}\sum_{r_2=0}^{k_2}\binom{k_1}{r_1}\binom{k_2}{r_2}(-2)^{r_1+r_2}\sideset{}{'}\sum_{m,n\leq \tau_0}\frac{(\operatorname{log}{m})^{r_1}(\operatorname{log}{n})^{r_2}e{\left(-\frac{mn\mu_1}{\mu_2}\right)}}{\left(\operatorname{log}{\frac{mn\mu_1}{\mu_2}}\right)^{r_1+r_2}}.
\end{align*}
Here $\sideset{}{'}\sum$ means that the sum has the restriction $\tau_0^2\mu_2/\mu_1 \leq mn \leq \tau_1^2\mu_2/\mu_1$, and the notation $e(x)$ means $ e^{2\pi i x}$. For $\mu_2/\mu_1<1$, if $\tau_0^2\mu_2/\mu_1 \geq mn$ or $mn \geq \tau_1^2\mu_2/\mu_1$, then $E_1$ contributes only the error term $E_1^{''}$. If we take $\mu_2/\mu_1\geq 1,$ then we see that $$\frac{2\pi mn\mu_1}{\mu_2}\leq 2\pi mn \leq T.$$ So, in this case again $E_1$ contributes only the error term $E_1^{''}$. Note that $|b_l|, |b_q| \leq X^{\epsilon}$. Thus,
\begin{align}\label{error-R}
\sum_{l,q \leq X}\frac{b_lb_q}{\sqrt{lq}}E_1^{''}=X^{2\epsilon} \sum_{\substack{l,q \leq X \\ m,n \leq \tau_0}}\frac{1}{\sqrt{lqmn}}R{\left(\frac{2\pi mnl}{q}\right)}.
\end{align}
For the $\operatorname{O}(1)$-term of $R$, the estimate for \eqref{error-R} is $\operatorname{O}(\sqrt{T}X^{1+2\epsilon})$. In fact, for $\frac{2\pi mnl}{q} \leq 3T/4$ or $\frac{2\pi mnl}{q} \geq 5T/4$ we have $R=\operatorname{O}(1)$, so the estimate for \eqref{error-R} is again $\operatorname{O}(\sqrt{T}X^{1+2\epsilon})$. The remaining case is $3T/4 \leq \frac{2\pi mnl}{q} \leq 5T/4$, where we estimate \eqref{error-R} using the second and third terms of $R$. For the second term of $R$ and $3T/4 \leq \frac{2\pi mnl}{q} \leq 5T/4$ we obtain the following:
\begin{align*}
\sum_{\substack{l,q \leq X \\ m,n \leq \tau_0}}\frac{1}{\sqrt{lqmn}}\frac{T}{|T-\frac{2\pi mnl}{q}| + \sqrt{T}} \leq \frac{1}{\sqrt{T}}\sum_{\substack{l,q \leq X \\ m,n \leq \tau_0}}\frac{1}{q}\frac{\frac{Tq}{2\pi ln}}{|\frac{Tq}{2\pi ln} -m| + \sqrt{T}\frac{q}{2\pi ln}}.
\end{align*}
The right-hand side of the above inequality is identical to \cite[eq: 8.7]{Levi}, whose estimate is $\operatorname{O}(\sqrt{T}X \operatorname{log}^3{T})$, so we obtain
\begin{align*}
\sum_{l,q \leq X}\frac{b_lb_q}{\sqrt{lq}}E_1^{''} =\operatorname{O}(\sqrt{T}X^{1+2\epsilon}\operatorname{log}^3{T}).
\end{align*}
Now, by using the binomial theorem on $E_1^{'}$ and taking into account the above error term, we deduce
\begin{align*}
E=2\pi i^{k_1}(-i)^{k_2} \sum_{l,q \leq X}\frac{b_lb_q}{\sqrt{lq}} \sqrt{\frac{\mu_1}{\mu_2}}\sideset{}{'}\sum_{m,n\leq \tau_0}\frac{(\operatorname{log}{\frac{n\mu_1}{m\mu_2}})^{k_1}(\operatorname{log}{\frac{n\mu_2}{m\mu_1}})^{k_2}e{\big(-\frac{mn\mu_1}{\mu_2}\big)}}{\left(\operatorname{log}{\frac{mn\mu_1}{\mu_2}}\right)^{k_1+k_2}} +\operatorname{O}(X^{1+2\epsilon}\tau_0 \operatorname{log}^3{T}),
\end{align*}
where $\mu_1=l/(l,q)$ and $\mu_2=q/(l,q)$.
Now, we simplify $\sideset{}{'}\sum$ by using the argument of Selberg~\cite[Page 104]{Selb} and obtain
\begin{align}\label{p2ext11}
E&=2\pi i^{k_1}(-i)^{k_2} \sum_{l,q \leq X}\frac{b_lb_q}{\sqrt{lq}} \sqrt{\frac{\mu_1}{\mu_2}}\sum_{\tau_0\frac{\mu_2}{\mu_1}<n\leq \tau_0}\sideset{}{'}\sum_{m}\frac{(\operatorname{log}{\frac{n\mu_1}{m\mu_2}})^{k_1}(\operatorname{log}{\frac{n\mu_2}{m\mu_1}})^{k_2}e{\big(-\frac{mn\mu_1}{\mu_2}\big)}}{\big(\operatorname{log}{\frac{mn\mu_1}{\mu_2}}\big)^{k_1+k_2}}\\
\nonumber & +\operatorname{O}(X^{1+2\epsilon}\tau_0 \operatorname{log}^3{T}).
\end{align}
We divide the set $\{n: \tau_0\frac{\mu_2}{\mu_1}<n\leq \tau_0\}$ into two disjoint sets according to whether $\mu_2\nmid n$ or $\mu_2\mid n$.
In the case $\mu_2\nmid n$ we have $n\mu_1\equiv \mu \pmod{\mu_2}$ for some $1\leq \mu \leq \mu_2-1.$ Then we obtain
\begin{align*}
\sideset{}{'}\sum_{m}e{\left(-\frac{mn\mu_1}{\mu_2}\right)} \ll \left\lVert\frac{\mu}{\mu_2}\right\rVert^{-1} \ll\frac{\mu_2}{\mu}+ \frac{\mu_2}{\mu_2-\mu}.
\end{align*}
Thus, by the partial summation formula we find that the corresponding $\sideset{}{'}\sum\limits_{m}$-term is bounded by $$\operatorname{O}\left(\frac{\mu_2}{\mu}+ \frac{\mu_2}{\mu_2-\mu}\right).$$
Now, if we break the set $\{n: \tau_0\frac{\mu_2}{\mu_1}<n\leq \tau_0, \mu_2\nmid n\}$ into successive subsets of length $\mu_2-1$, then the number of such subsets is $\operatorname{O}(\tau_0/\mu_2)$. Since $(\mu_1, \mu_2)=1,$ as $n$ varies over one of these subsets, $\mu$ runs over a complete system of non-zero residues modulo $\mu_2$. So, the contribution to the main term of $E$ from the terms with $\mu_2\nmid n$ is bounded by
\begin{align*}
\ll \tau_0\sum_{l,q \leq X}\frac{(lq)^{\epsilon}}{\sqrt{lq}} \sqrt{\frac{\mu_1}{\mu_2}}\operatorname{log}{(\mu_2)} \ll X^{1+2\epsilon}\tau_0 \operatorname{log}^3{T}.
\end{align*}
In the case $\mu_2\mid n$, we write $n=\mu_2r$ for some positive integer $r$; then \eqref{p2ext11} can be written as
\begin{align}\label{p2ext111}
E=2\pi i^{k_2}(-i)^{k_1} \sum_{l,q \leq X}\frac{b_lb_q}{\sqrt{lq}} \sqrt{\frac{\mu_1}{\mu_2}}\sum_{\frac{\tau_0}{\mu_1}<r\leq \frac{\tau_0}{\mu_2}}\sum_{\frac{\tau_0^2}{r\mu_1}\leq m \leq \frac{\tau_1^2}{r\mu_1}}\frac{(\operatorname{log}{\frac{m}{r\mu_1}})^{k_1}(\operatorname{log}{\frac{m\mu_1}{r\mu_2^2}})^{k_2}}{\left(\operatorname{log}{mr\mu_1}\right)^{k_1+k_2}} +\operatorname{O}(X^{1+2\epsilon}\tau_0 \operatorname{log}^3{T}).
\end{align}
Since $\tau_0^2 \leq mr\mu_1 \leq \tau_1^2$ we get
\begin{align*}
\operatorname{log}{mr\mu_1}-\operatorname{log}{\tau_0^2} \leq \operatorname{log}{\tau_1^2}-\operatorname{log}{\tau_0^2} =\operatorname{log}{\left(1+\frac{H}{T}\right)}=\operatorname{O}{\left(\frac{H}{T}\right)}.
\end{align*} Thus, we obtain $$\left(\operatorname{log}{mr\mu_1}\right)^{k_1+k_2}= \left(2\operatorname{log}{\tau_0}\right)^{k_1+k_2} + \operatorname{O}((H/T)(\operatorname{log}{T})^{k_1+k_2-1}).$$
Similarly, we deduce that
\begin{align*}
&\left(\operatorname{log}{(m/r\mu_1)}\right)^{k_1}= \left(2\operatorname{log}{(\tau_0/r\mu_1)}\right)^{k_1} + \operatorname{O}((H/T)(\operatorname{log}{T})^{k_1-1}),\\
& \left(\operatorname{log}{(m\mu_1/r\mu_2^2)}\right)^{k_2}= \left(2\operatorname{log}{(\tau_0/r\mu_2)}\right)^{k_2} + \operatorname{O}((H/T)(\operatorname{log}{T})^{k_2-1}).
\end{align*}
Putting all these estimates into the innermost sum of \eqref{p2ext111}, we finally obtain
\begin{align}\label{p2ext1111}
E= \frac{i^{k_2}(-i)^{k_1}H}{\left(\operatorname{log}{\tau_0}\right)^{k_1+k_2}}\sum_{\substack{l,q \leq X\\ q<l }}\frac{b_lb_q}{\sqrt{lq\mu_1\mu_2}}\sum_{\frac{\tau_0}{\mu_1}<r\leq \frac{\tau_0}{\mu_2}}\frac{1}{r}\left(\operatorname{log}{\frac{r\mu_1}{\tau_0}}\right)^{k_1}\left(\operatorname{log}{\frac{r\mu_2}{\tau_0}}\right)^{k_2} + \operatorname{O}(X^{1+2\epsilon}\sqrt{T} \operatorname{log}^3{T}),
\end{align}
where $\mu_1=l/(l,q)$ and $\mu_2=q/(l,q)$.
Now, we evaluate the integral $E'$ defined in \eqref{p2eq111}. We write
\begin{align}\label{neqq13}
E' &= (-i)^{k_1} i^{k_2}\sum_{\substack{m,n\leq\tau_0\\ l,q\leq X\\ nl=qm}}\frac{b_lb_q}{\sqrt{lmnq}}\int_T^{T+H}\frac{(\operatorname{log}{\tau/n})^{k_2}(\operatorname{log}{\tau/m})^{k_1}}{(\operatorname{log}{\tau})^{k_1+k_2}}dt\\
\nonumber & + (-i)^{k_1} i^{k_2} \sum_{\substack{m,n\leq\tau_0\\ l,q\leq X\\ nl\neq mq}}\frac{b_lb_q}{\sqrt{lmnq}}\int_T^{T+H}\frac{(\operatorname{log}{\tau/n})^{k_2}(\operatorname{log}{\tau/m})^{k_1}}{(\operatorname{log}{\tau})^{k_1+k_2}}\left(\frac{mq}{nl}\right)^{it}dt\\
\nonumber &= J_1 + J_2,
\end{align}
where $J_1, J_2$ denote the first and second terms, respectively. First, we estimate $J_2.$ By the binomial theorem we write
\begin{align*}
J_2 =(-i)^{k_1} i^{k_2}\sum_{r_1 =0}^{k_1}\sum_{r_2=0}^{k_2}\binom{k_1}{r_1}\binom{k_2}{r_2}(-1)^{r_1+r_2}\sum_{\substack{m,n\leq\tau_0\\ l,q\leq X\\ nl\neq mq}}\frac{b_lb_q\operatorname{log}^{r_1}{m}\operatorname{log}^{r_2}{n}}{\sqrt{lmnq}}\int_T^{T+H}\frac{\left(\frac{mq}{nl}\right)^{it}}{(\operatorname{log}{\tau})^{r_1+r_2}}dt.
\end{align*}
Since $|b_l|\leq l^\epsilon$ and $|b_q|\leq q^\epsilon$, by using~\cite[Lemma 1, p. 88]{Selb} we have
\begin{align*}
&J_2 \ll \sum_{\substack{m,n\leq\tau_0\\ l,q\leq X\\ nl\neq mq}}\frac{(lq)^\epsilon}{\sqrt{lmnq}|\operatorname{log}{\frac{mq}{nl}}|} \ll X^{2\epsilon}\sum_{\substack{a\leq X\tau_0\\ b\leq X\tau_0\\ a\neq b}}\frac{1}{\sqrt{ab}|\operatorname{log}{\frac{a}{b}}|}\ll X^{1+2\epsilon} \tau_0\operatorname{log}{(X\tau_0)}.
\end{align*}
Now, for any integer $r\geq 1$ and $H < T$, we have $\operatorname{log}{\tau} = \operatorname{log}{\tau_0}\big( 1+ \operatorname{O}(\frac{H}{T\operatorname{log}{T}}) \big)$. This gives
\begin{equation}\label{p2ext44}
\int_{T}^{T+H}\frac{dt}{\operatorname{log}^{ r}{\tau}}= \frac{H}{\operatorname{log}^{r}{\tau_0}} + \operatorname{O}\left(\frac{H}{T\operatorname{log}^{r+1}{\tau_0}}\right).
\end{equation}
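As an illustrative aside, \eqref{p2ext44} is easy to confirm numerically; the parameters $T$, $H$, and $r$ below are assumptions chosen only for the demonstration.

```python
import math

# Illustrative numerical confirmation of \eqref{p2ext44}: with
# tau = sqrt(t/(2*pi)) and H = o(T),
#   int_T^{T+H} log^{-r}(tau) dt = H/log^r(tau_0) + O(H/(T log^{r+1} tau_0)).
# The parameters T, H, r are assumptions chosen only for this demo.

T = 1.0e8
H = T ** 0.6
r = 2
tau0 = math.sqrt(T / (2 * math.pi))

# Midpoint rule; the integrand is smooth and very slowly varying on [T, T+H].
n = 1000
integral = sum(
    (H / n) / math.log(math.sqrt((T + (j + 0.5) * H / n) / (2 * math.pi))) ** r
    for j in range(n)
)

main = H / math.log(tau0) ** r
rel_err = abs(integral - main) / main
print(rel_err)  # of size about H/(T*log(tau0)), i.e. very small
```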
Thus, by using the binomial theorem and then \eqref{p2ext44}, we write
\begin{align}\label{neqq0}
J_1 = (-i)^{k_1} i^{k_2} H &\sum_{r_1 =0}^{k_1}\sum_{r_2=0}^{k_2}\binom{k_1}{r_1}\binom{k_2}{r_2}(-1)^{r_1+r_2}\frac{S(r_1,r_2)}{(\operatorname{log}{\tau_0})^{r_1+r_2}}\\
\nonumber & + \operatorname{O}\Big( \frac{H^2}{T} \sum_{r_1 =0}^{k_1}\sum_{r_2=0}^{k_2}\binom{k_1}{r_1}\binom{k_2}{r_2}\frac{|S(r_1,r_2)|}{(\operatorname{log}{\tau_0})^{r_1+r_2+1}}\Big),
\end{align}
where
\begin{align}\label{neqq01}
S(r_1,r_2) =\sum_{\substack{m,n\leq\tau_0\\ l,q\leq X\\ nl= mq}}\frac{b_lb_q\operatorname{log}^{r_1}{m}\operatorname{log}^{r_2}{n}}{\sqrt{lmnq}}.
\end{align}
Resolving the equality condition $nl=mq$ and simplifying, we write
\begin{align*}
S(r_1,r_2)
&=\sum_{\substack{0\leq m_1\leq r_1\\0\leq m_2\leq r_2}}\binom{r_1}{m_1}\binom{r_2}{m_2}\sum_{\ l,q\leq X}\frac{b_lb_q}{lq}(l,q)\left(\operatorname{log}{\frac{l}{(l,q)}}\right)^{m_1}\left(\operatorname{log}{\frac{q}{(l,q)}}\right)^{m_2}\sum_{y\leq \frac{\tau_0(l,q)}{\max{\lbrace{ l, q\rbrace}}}}\frac{\operatorname{log}^{r}{y}}{y}
\end{align*}
where $r=r_1+r_2-m_1-m_2.$ From this expression of $S(r_1, r_2)$ we obtain the trivial upper bound $S(r_1, r_2) \ll X^{2\epsilon}(\operatorname{log}{X})^{r_1+r_2+4}$ and
$E' \ll J_1 \ll X^{2\epsilon}H\operatorname{log}^4{T}$. Thus, we get
\begin{align}\label{neqq00}
J_1 = (-i)^{k_1} i^{k_2} H \sum_{r_1 =0}^{k_1}\sum_{r_2=0}^{k_2}\binom{k_1}{r_1}\binom{k_2}{r_2}(-1)^{r_1+r_2}\frac{S(r_1,r_2)}{(\operatorname{log}{\tau_0})^{r_1+r_2}} + \operatorname{O}\Big( \frac{X^{2\epsilon}H^2 \operatorname{log}^3{T}}{T} \Big).
\end{align}
By using the trivial upper bound of $E'$, we get that the upper bound of $E^{''}$ is $$\ll X^{2\epsilon}H(\operatorname{log}{T})^{4}\max\Big\{\frac{Y_{k_1}^2(T)}{\operatorname{log}^{k_2}{T}}, \frac{Y_{k_2}^2(T)}{\operatorname{log}^{k_1}{T}}\Big\}.$$
Assuming, without loss of generality, that $k_1\leq k_2$, we get $Y_{k_1}(T)\geq Y_{k_2}(T)$. Thus, we rewrite \eqref{neq:1} as
\begin{align}\label{p2ext22}
J(T)=2\mathfrak{R}(J_1+E) + \operatorname{O}\left( \frac{HX^\epsilon Y_{k}(T) }{(\operatorname{log}{\tau_0})^{k - 3}}+X^{1+2\epsilon}\sqrt{T} \operatorname{log}^3{T}\right),
\end{align}
where $k= \min \{k_1, k_2\}$. After expanding the terms of $E$ in \eqref{p2ext1111} by the binomial theorem and then adding $J_1$ and $E$ term by term, we obtain
\begin{align}\label{p2ext33}
J_1+E= (-i)^{k_1} i^{k_2} H \sum_{r_1 =0}^{k_1}\sum_{r_2=0}^{k_2}\binom{k_1}{r_1}\binom{k_2}{r_2}(-1)^{r_1+r_2}\frac{J(r_1,r_2)}{(\operatorname{log}{\tau_0})^{r_1+r_2}} + \operatorname{O}\left(X^{1+2\epsilon}\sqrt{T}\operatorname{log}^3{T}\right),
\end{align}
where
\begin{align*}
J(r_1, r_2):=\sum_{\substack{0\leq m_1\leq r_1\\0\leq m_2\leq r_2}}\binom{r_1}{m_1}\binom{r_2}{m_2}\sum_{\ l,q\leq X}\frac{b_lb_q}{lq}(l,q)\left(\operatorname{log}{\frac{l}{(l,q)}}\right)^{m_1}\left(\operatorname{log}{\frac{q}{(l,q)}}\right)^{m_2}\sum_{y\leq \frac{\tau_0(l,q)}{q}}\frac{\operatorname{log}^{r}{y}}{y}.
\end{align*}
Note that in the expression of $J(r_1, r_2)$, the innermost sum may equivalently be taken over $y\leq \frac{\tau_0(l,q)}{l}$. This proves the first part of the proposition.
Now we prove the particular case:
if we take $\Phi$ as a mollifier, with $b_n= \mu(n)(1-\operatorname{log}{n}/\operatorname{log}{X}),$ then we get
\begin{align}\label{p2ext222}
J(T)=2\mathfrak{R}(J_1+E) + \operatorname{O}\left(\frac{ Y_k(T)H}{(\operatorname{log}{\tau_0})^{k-3}} +X\sqrt{T} \operatorname{log}^3{T}\right).
\end{align}
First, we simplify $J(r_1, r_2)$ when $r_1, r_2 \geq 1$. Put $\alpha:= (l,q)$. From Lemma \ref{L9} we write
\begin{equation}\label{neqq1}
\sum_{y\leq \frac{\tau_0\alpha}{q}}\frac{\operatorname{log}^{r}{y}}{y} =\frac{1}{r+1}\left(\operatorname{log} {\tau_0}-\operatorname{log}{\dfrac{q}{\alpha}}\right)^{r+1} + (-1)^rr!\gamma_r+ \operatorname{O}\left(\frac{q \operatorname{log}^{r}{\tau_0}}{\alpha\tau_0}\right).
\end{equation}
Using binomial theorem in \eqref{neqq1} we get the expression for $ J(r_1,r_2)$ as the following:
\begin{align}\label{neqqq1}
J(r_1,r_2)&=\sum_{\substack{0\leq m_1\leq r_1\\0\leq m_2\leq r_2}}\binom{r_1}{m_1}\binom{r_2}{m_2} \frac{1}{r+1}\sum_{j=0}^{r+1}(-1)^{j}\binom{r+1}{j}(\operatorname{log}{\tau_0})^{r+1-j}A(X;m_1,m_2,j)\\
& \nonumber + (-1)^rr!\gamma_r\sum_{\substack{0\leq m_1\leq r_1\\0\leq m_2\leq r_2}}\binom{r_1}{m_1}\binom{r_2}{m_2}A(X;m_1,m_2,0)+\operatorname{O}\left(\frac{X(\operatorname{log}{\tau_0})^{k_1+k_2+1}}{\tau_0}\right),
\end{align}
where
$$\quad A(X;m_1,m_2, j):=\sum_{\ l,q\leq X}\frac{b_lb_q}{lq} \alpha \left(\operatorname{log}{\frac{l}{\alpha}}\right)^{m_1} \left(\operatorname{log}{\frac{q}{\alpha}}\right)^{m_2+j}.$$
For any positive integer $n$ we have
\begin{align*}
&n\sum_{d|n}\frac{\mu(d)}{d}\left(\operatorname{log}{\frac{ld}{n}}\right)^{m_1}\left(\operatorname{log}{\frac{qd}{n}}\right)^{m_2+j}\\
&=\sum_{\substack{0\leq t_1\leq m_1\\{0\leq t_2\leq m_2}}}\binom{m_1}{t_1}\binom{m_2+j}{t_2}n\left(\operatorname{log}{\frac{l}{n}}\right)^{m_1-t_1}\left(\operatorname{log}{\frac{q}{n}}\right)^{m_2+ j-t_2} F_{t}(n),
\end{align*}
where $t:=t_1+t_2$ and $F_t(n)$ is defined in Lemma \ref{L4}. So, by applying the M\"obius inversion formula to $$\alpha \left(\operatorname{log}{\frac{l}{\alpha}}\right)^{m_1} \left(\operatorname{log}{\frac{q}{\alpha}}\right)^{m_2+j},$$ and then simplifying the expression, we write
\begin{align}\label{neqq2}
A(X;m_1,m_2, j)=\sum_{d\leq X}\phi(d)M_d(m_1,m_2, j) + E(X),
\end{align}
where
\begin{align*}
M_d(m_1,m_2,j):=\sum_{\substack{l\leq X\\d|l}}\frac{b_l}{l}\left(\operatorname{log}{\frac{l}{d}}\right)^{m_1}\sum_{\substack{q\leq X\\d|q}}\frac{b_q}{q}\left(\operatorname{log}{\frac{q}{d}}\right)^{m_2+j}\quad \mbox{and}\\
E(X):=\sum_{\substack{0\leq t_1\leq m_1\\{0\leq t_2\leq m_2}}}\binom{m_1}{t_1}\binom{m_2+j}{t_2}\sum_{\substack{d\leq X \\ t>0}}dF_{t}(d)M_d(m_1-t_1,m_2-t_2,j).
\end{align*}
Our next task is to evaluate $M_d(m_1,m_2,j)$ asymptotically and to substitute the result into \eqref{neqq2}. Replacing $b_l,b_q$ by the corresponding expressions and summing over the quotients of the divisor $d$, we get
\begin{align}\label{neqq3}
M_d(m_1,m_2,j)=\frac{\mu^2{(d)}}{(d\operatorname{log}{X})^2}&\sum_{\substack{l\leq X/d\\(l,d)=1}}\frac{\mu(l)}{l}\left(\operatorname{log}{l}\right)^{m_1}\operatorname{log}{\frac{X}{ld}} \sum_{\substack{q\leq X/d\\(q,d)=1}}\frac{\mu(q)}{q}\left(\operatorname{log}{q}\right)^{m_2+ j}\operatorname{log}{\frac{X}{qd}}.
\end{align}
Now, first we replace $\operatorname{log}{l}$ by $\operatorname{log}{({X}/{d})}-\operatorname{log}{({X}/{ld})}$ and $\operatorname{log}{q}$ by $\operatorname{log}{({X}/{d})}-\operatorname{log}{({X}/{qd})}$ on the right-hand side of \eqref{neqq3}. Then, expanding the powers by the binomial theorem and interchanging the summations, we get
\begin{align}\label{neqq9}
\nonumber M_d(m_1,m_2,j)=\frac{\mu^2{(d)}}{(d\operatorname{log}{X})^2}&\sum_{i=0}^{m_1}\sum_{u=0}^{m_2+j}\binom{m_1}{i}\binom{m_2+j}{u}(-1)^{i+u}\left(\operatorname{log}{\frac{X}{d}}\right)^{m_1+m_2+j-i-u}\\
& \times \sum_{\substack{l\leq X/d\\(l,d)=1}}\frac{\mu(l)}{l}\left(\operatorname{log}{\frac{X}{ld}}\right)^{i+1}\sum_{\substack{q\leq X/d\\(q,d)=1}}\frac{\mu(q)}{q}\left(\operatorname{log}{\frac{X}{qd}}\right)^{u+1}.
\end{align}
For further simplification we use the following identity, which can be derived from the binomial theorem.
\begin{equation}\label{L3}
\delta_r(n):=\sum_{j=0}^{n}{(-1)^j \binom{n}{j}(j+r)} = \begin{cases} r &\mbox{if } n = 0, \\
-1 &\mbox{if } n = 1, \\
0 & \mbox{if } n \geq 2,\end{cases}
\end{equation}
where $r,n$ are any non-negative integers. Now, by using Lemma \ref{L7} in the two inner sums on the right-hand side of \eqref{neqq9} and then \eqref{L3}, we get
\begin{align*}
M_d(m_1,m_2, j)=\frac{\mu^2{(d)}}{(\phi(d)\operatorname{log}{X})^2}\left(\operatorname{log}{\frac{X}{d}}\right)^{m_1+m_2+ j}\delta_1(m_1)\delta_1(m_2+ j) + R_1,
\end{align*}
where $R_1$ is given by the expression
\begin{align*}
R_1 =\operatorname{O}\left(\frac{\mu^2{(d)}}{(d\operatorname{log}{X})^2}\left(\frac{d^b\operatorname{log}^2{L}(\operatorname{log}{\frac{X}{d}})^{m_1+m_2+j}}{f(d,1)f(d,1-b)X^b} + \frac{\operatorname{log}\log{d}(\operatorname{log}{\frac{X}{d}})^{m_1+m_2+j-1}}{f^2(d,1)}\right) \right),
\end{align*}
where $L=\operatorname{log}{\tau_0}$ and $b= (M\operatorname{log}{L})^{-1}$. Note that
\begin{align*}
\sum_{d\leq X}\frac{|\mu(d)|d^b}{df(d,1-b)}&\ll \sum_{d\leq X}\frac{|\mu(d)|}{d^{1-b}}\prod_{p|d}\left(1+\frac{1}{p^{1-b}}\right)=\sum_{d\leq X}\frac{|\mu(d)|}{d^{1-b}}\sum_{n|d}\frac{1}{n^{1-b}}\\
&\ll \sum_{n\leq X}\frac{1}{n^{2-2b}}\sum_{r\leq X/n }\frac{1}{r^{1-b}}\ll \zeta(2-2b)\int_{1}^{X}\frac{dt}{t^{1-b}}\ll \frac{X^b}{b}\ll X^b\operatorname{log}{L}.
\end{align*}
Now, using the above results we get
\begin{align*}
\sum\limits_{d \leq X}\phi(d) R_1\ll (\operatorname{log}{X})^{m_1+m_2+j-2}(\operatorname{log}{L})^3.
\end{align*}
Next, we estimate $E(X).$
Since $t=t_1+t_2 \geq 1,$ we get
$$M_d(m_1-t_1,m_2-t_2, j)\ll \frac{\mu^2(d)}{\phi^2(d)}(\operatorname{log}{X})^{m_1+m_2+ j-t_1-t_2-2},$$ and this gives the upper bound
\begin{align*}
E(X)&\ll \sum_{t_1=0}^{m_1}\sum_{t_2=0}^{m_2+j}\binom{m_1}{t_1}\binom{m_2+j}{t_2}(\operatorname{log}{X})^{m_1+m_2+j-t-2}\sum_{d\leq X}\frac{\mu^2(d)dF_{t}(d)}{\phi^2(d)}\\
&\ll (\operatorname{log}{X})^{m_1+m_2+j-2}\operatorname{log}\log{X}.
\end{align*}
Thus, we get
\begin{align*}
A(X;m_1,m_2,j)=\frac{\delta_1(m_1)\delta_1(m_2+j)}{\operatorname{log}^2{X}}\sum_{d\leq X}\frac{\mu^2{(d)}}{\phi(d)}\left(\operatorname{log}{\frac{X}{d}}\right)^{m_1+m_2+j} +R(m_1+m_2+j),
\end{align*}
where $R{(m)} = \operatorname{O}\left((\operatorname{log}{X})^{m-2}(\operatorname{log}{L})^3\right)$.
Finally, by using Lemma \ref{L1}, we get
\begin{align}\label{neqq6}
A(X;m_1,m_2,j)=\frac{\delta_1(m_1)\delta_1(m_2+j)}{m_1+m_2+j+1}(\operatorname{log}{X})^{m_1+m_2+j-1} + R(m_1+m_2+j).
\end{align}
Now, by using \eqref{L3}, we determine all cases in which the main term of $A(X;m_1,m_2,j)$ is non-vanishing, and list them in simplified form:
\begin{align}\label{neqq7}
& A(X;0,0,0)= (\operatorname{log}{X})^{-1} + R(0), \\
& \nonumber A(X;1,0,0)=A(X;0,1,0)= A(X;0,0,1)=-{1}/{2} +R(1),\\
&\nonumber A(X;1,1,0)= A(X;1,0,1)=\frac{1}{3}\operatorname{log}{X}+ R(2).
\end{align}
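For instance, the last line of \eqref{neqq7} follows from \eqref{neqq6}: since $\delta_1(0)=1$ and $\delta_1(1)=-1$,
\begin{align*}
A(X;1,1,0)=\frac{\delta_1(1)\delta_1(1)}{3}(\operatorname{log}{X})^{1} + R(2)= \frac{\operatorname{log}{X}}{3} + R(2),
\end{align*}
while any choice with $m_1\geq 2$ or $m_2+j\geq 2$ gives a vanishing main term.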
So, using \eqref{neqq7} in \eqref{neqqq1}, we obtain the following expression for $J(r_1,r_2)$:
\begin{align*}
J(r_1,r_2)&= \frac{1}{r_1+r_2+1}(\operatorname{log}{\tau_0})^{r_1+r_2+1}A(X;0,0,0) +\frac{r_1}{r_1+r_2}(\operatorname{log}{\tau_0})^{r_1+r_2}A(X;1,0,0)\\
& +\frac{r_2}{r_1+r_2}(\operatorname{log}{\tau_0})^{r_1+r_2}A(X;0,1,0)- (\operatorname{log}{\tau_0})^{r_1+r_2}A(X;0,0,1) \\
& + \frac{r_1r_2}{r_1+r_2-1}(\operatorname{log}{\tau_0})^{r_1+r_2-1}A(X;1,1,0) -r_1(\operatorname{log}{\tau_0})^{r_1+r_2-1}A(X;1,0,1)\\
& + \operatorname{O}\left((\operatorname{log}{X})^{r_1+r_2-1}\operatorname{log}^3{L}\right).
\end{align*}
Since $\operatorname{log}{X}\approx 2\theta \operatorname{log}{\tau_0}$ we get
\begin{align}\label{neqq12}
J(r_1,r_2)=\left(\dfrac{1}{2\theta(r_1+r_2+1)}+ \dfrac{2\theta r_1r_2}{3(r_1+r_2-1)} - \frac{2\theta r_1}{3}\right)&(\operatorname{log}{\tau_0})^{r_1+r_2}\\
\nonumber & + \operatorname{O}\left((\operatorname{log}{X})^{r_1+r_2-1}\operatorname{log}^3{L}\right).
\end{align}
Note that the right-hand side of \eqref{neqq12} is well defined since $r_1, r_2 \geq 1$. For the remaining cases we have $r_1 +r_2 \leq 1$, and proceeding as in the case $r_1, r_2 \geq 1$,
we obtain
$$J(0,0)=\frac{1}{2\theta} +\frac{1}{2},\quad J(1,0)=\left(\frac{1}{4\theta} +\frac{2\theta}{3}\right)\operatorname{log}{\tau_0},\quad J(0,1)=\frac{\operatorname{log}{\tau_0}}{4\theta},$$ along with error term at most $\operatorname{O}\left((\operatorname{log}{X})^{-1}\operatorname{log}^3{L}\right)$.
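As an illustration, for $r_1=r_2=0$ only the terms involving $A(X;0,0,0)$ and $A(X;0,0,1)$ occur, so that, by \eqref{neqq7} and $\operatorname{log}{X}\approx 2\theta\operatorname{log}{\tau_0}$,
\begin{align*}
J(0,0)= \operatorname{log}{\tau_0}\,A(X;0,0,0) - A(X;0,0,1) \approx \frac{1}{2\theta} + \frac{1}{2}.
\end{align*}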
Now, if we instead take the upper limit $y \leq \frac{\tau_0(l, q)}{l}$ in the innermost sum of $J(r_1, r_2)$, which is defined in Proposition~\ref{L12}, then, similarly to \eqref{neqq12}, we obtain
\begin{align}\label{alternative}
J(r_1,r_2)=\left(\dfrac{1}{2\theta(r_1+r_2+1)}+ \dfrac{2\theta r_1r_2}{3(r_1+r_2-1)} - \frac{2\theta r_2}{3}\right)&(\operatorname{log}{\tau_0})^{r_1+r_2}\\
\nonumber & + \operatorname{O}\left((\operatorname{log}{X})^{r_1+r_2-1}\operatorname{log}^3{L}\right),
\end{align}
when $r_1, r_2 \geq 1$ and we have
$$J(0,0)=\frac{1}{2\theta} +\frac{1}{2},\quad J(0,1)=\left(\frac{1}{4\theta} +\frac{2\theta}{3}\right)\operatorname{log}{\tau_0},\quad J(1,0)=\frac{\operatorname{log}{\tau_0}}{4\theta},$$ along with error term at most $\operatorname{O}\left((\operatorname{log}{X})^{-1}\operatorname{log}^3{L}\right)$.
So, from \eqref{neqq12} and \eqref{alternative} along with the exceptional cases, we get
\begin{align*}
\frac{2(J_1+E)}{(-i)^{k_1}i^{k_2}}&=\frac{H}{\theta} \sum_{r_1=0}^{k_1}\sum_{r_2=0}^{k_2}\binom{k_1}{r_1}\binom{k_2}{r_2}
\dfrac{(-1)^{r_1+r_2}}{(r_1+r_2+1)} + \frac{4H\theta}{3} \sum_{r_1=1}^{k_1}\sum_{r_2=1}^{k_2}\binom{k_1}{r_1}\binom{k_2}{r_2}\dfrac{ r_1r_2(-1)^{r_1+r_2}}{(r_1+r_2-1)}\\
& \nonumber- \frac{2H\theta}{3} \sum_{r_1=1}^{k_1}\sum_{r_2=0}^{k_2}\binom{k_1}{r_1}\binom{k_2}{r_2}r_1(-1)^{r_1+r_2} - \frac{2H\theta}{3} \sum_{r_1=0}^{k_1}\sum_{r_2=1}^{k_2}\binom{k_1}{r_1}\binom{k_2}{r_2}r_2(-1)^{r_1+r_2}\\
&+H +\operatorname{O}{\left(X\sqrt{T}\operatorname{log}^3{T} + \frac{H\operatorname{log}^3{L}}{\operatorname{log}{T}}\right)}.
\end{align*}
Hence, for $k_1, k_2 \geq 1$, the binomial identity yields
\begin{align}\label{final-moment}
\nonumber \frac{2(J_1+E)}{(-i)^{k_1}i^{k_2}} &= H + \frac{H}{\theta}\int_{0}^1(1-x)^{k_1+k_2}dx + \frac{4\theta H}{3}\int_{0}^1\prod_{j=1}^2\Big((1-x)^{k_j}\Big)'\,dx+\operatorname{O}{(X\sqrt{T}\operatorname{log}^3{T})} \\
& = H\left(1 + \frac{1}{\theta (k_1+k_2+1)} + \frac{4 k_1k_2 \theta }{3(k_1+k_2-1)}\right)+\operatorname{O}{\left(X\sqrt{T}\operatorname{log}^3{T} + \frac{H\operatorname{log}^3{L}}{\operatorname{log}{T}}\right)}.
\end{align}
If $k_1=0$, we have
\begin{align}\label{final-sepcial}
2(J_1+E) = \begin{cases}
H(1+ \frac{1}{\theta}), \mbox{ if } k_2 =0,\\
i H(1 +\frac{1}{2\theta} +\frac{2\theta}{3}), \mbox{ if } k_2 =1,\\
i^{k_2} H(1 +\frac{1}{(k_2+1)\theta} ), \mbox{ if } k_2 \geq 2.
\end{cases}
\end{align}
Note that equation \eqref{final-sepcial} holds up to an error term $\operatorname{O}\Big(\frac{H\operatorname{log}^3{L}}{\operatorname{log}{T}}\Big)$.
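As a consistency check, formally setting $k_1=0$ in \eqref{final-moment} makes the term with $k_1k_2$ vanish, giving
\begin{align*}
2(J_1+E) = i^{k_2}H\left(1 + \frac{1}{\theta (k_2+1)}\right)+\operatorname{O}{\left(X\sqrt{T}\operatorname{log}^3{T} + \frac{H\operatorname{log}^3{L}}{\operatorname{log}{T}}\right)},
\end{align*}
which matches the case $k_2 \geq 2$ of \eqref{final-sepcial}.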
Hence, using \eqref{final-moment} and \eqref{final-sepcial} in \eqref{p2ext222} we conclude the result.
\end{proof}
Next, we establish a relation between the higher derivatives of the zeta function and Hardy's $Z$-function. For the first derivative, R. R. Hall~\cite{Hall} proved the following identity:
\begin{align}\label{hall-identity}
\zeta^{'}\Big( \frac{1}{2}+it\Big)= e^{-i\theta(t)}\{Z^{'}(t) -i\theta^{'}(t)Z(t)\}.
\end{align}
Thus,
\begin{align*}
\left|\zeta^{'}\Big( \frac{1}{2}+it\Big)\right|^2= Z^{'}(t)^2 + \theta^{'}(t)^2Z(t)^2.
\end{align*}
From Hall's identity \eqref{hall-identity}, we obtain, in the following proposition, analogous identities for higher derivatives up to an error term.
\begin{proposition}\label{darivative-zeta}
For $n\geq 2$ and $T\leq t \leq 2T$, we get
\begin{align*}
i^{n-1}\zeta^{(n)}\Big( \frac{1}{2}+it\Big)= e^{-i\theta(t)} \sum_{j=0}^{n-1}\binom{n-1}{j}&\Big((-i\operatorname{log}{\tau})^{n-1-j} Z^{(j+1)}{(t)} + (-i\operatorname{log}{\tau})^{n-j}Z^{(j)}(t)\Big)\\
\nonumber & +\operatorname{O}\Big(T^{-\frac{5}{6}}\operatorname{log}^{n-1}{T}\Big),
\end{align*}
where $\tau= \sqrt{\frac{t}{2 \pi}}.$
\end{proposition}
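Note that for $n=1$ the sum in Proposition~\ref{darivative-zeta} formally reduces to
\begin{align*}
\zeta^{'}\Big( \frac{1}{2}+it\Big)= e^{-i\theta(t)}\{Z^{'}(t) -i\operatorname{log}{\tau}\, Z(t)\},
\end{align*}
which, since $\theta^{'}(t)= \operatorname{log}{\tau} + \operatorname{O}(t^{-1})$, recovers Hall's identity \eqref{hall-identity} up to an admissible error.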
\begin{proof} Set $f(t):= Z^{'}(t) -i\theta^{'}(t)Z(t)$. Then, by Leibniz's rule, we write
\begin{align}\label{zeta^n}
i^{n-1}\zeta^{(n)}\Big( \frac{1}{2}+it\Big)= \sum_{j=0}^{n-1}\binom{n-1}{j}\frac{d^{j}}{dt^{j}}f(t) \frac{d^{n-1-j}}{dt^{n-1-j}}e^{-i\theta(t)}.
\end{align}
From \cite[eq: 30]{Hall} we have $\theta^{'}(t)= \operatorname{log}{\tau} + \operatorname{O}(t^{-1})$, where $\tau= \sqrt{t/2\pi}$, and $\theta^{(k)}(t)= \operatorname{O}(t^{-k+1})$ for $k\geq 2$. So, for a positive integer $k$, we get
\begin{align}\label{e^n}
\frac{d^{k}}{dt^{k}} e^{-i\theta(t)} = (-i \operatorname{log}{\tau})^{k}e^{-i\theta(t)} + \operatorname{O}\Big(\frac{(\operatorname{log}{t})^{k-1}}{t}\Big).
\end{align}
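For instance, for $k=2$,
\begin{align*}
\frac{d^{2}}{dt^{2}} e^{-i\theta(t)} = \left((-i\theta^{'}(t))^2 -i\theta^{''}(t)\right)e^{-i\theta(t)} = (-i \operatorname{log}{\tau})^{2}e^{-i\theta(t)} + \operatorname{O}\Big(\frac{\operatorname{log}{t}}{t}\Big),
\end{align*}
by the above estimates for $\theta^{'}$ and $\theta^{''}$; the general case follows by induction on $k$.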
Similarly,
\begin{align*}
\frac{d^{k}}{dt^{k}}f(t) &= Z^{(k+1)}(t)-i \sum_{l=0}^{k}\binom{k}{l}\frac{d^{l}}{dt^{l}}\theta^{'}(t) Z^{(k-l)}(t)\\
& = Z^{(k+1)}(t)- i Z^{(k)}(t) \operatorname{log}{\tau} +\operatorname{O}\Big(\frac{|Z^{(k)}(t)|}{t}\Big).
\end{align*}
From \eqref{int2} and Lemmas \ref{L14} and \ref{L15} we obtain
\begin{align}\label{trivial bound-Z^k}
|Z^{(k)}(t)| \ll t^{\frac{1}{6}}\operatorname{log}^{k}{t}.
\end{align}
Thus,
\begin{align}\label{f^k}
\frac{d^{k}}{dt^{k}}f(t) = Z^{(k+1)}(t)- i Z^{(k)}(t) \operatorname{log}{\tau} +\operatorname{O}\Big(\frac{\operatorname{log}^k{t}}{t^{\frac{5}{6}}}\Big).
\end{align}
Hence, using \eqref{e^n} and \eqref{f^k} in \eqref{zeta^n} we obtain
\begin{align*}
i^{n-1}\zeta^{(n)}\Big( \frac{1}{2}+it\Big)= e^{-i\theta(t)} \sum_{j=0}^{n-1}\binom{n-1}{j}&\Big((-i\operatorname{log}{\tau})^{n-1-j} Z^{(j+1)}{(t)} + (-i\operatorname{log}{\tau})^{n-j}Z^{(j)}(t)\Big)\\
\nonumber & +\operatorname{O}\Big(t^{-\frac{5}{6}}\operatorname{log}^{n-1}{t}\Big).
\end{align*}
\end{proof}
\section{Proof of theorems}\label{proofs}
\begin{proof}[\textbf{Proof of Theorem} \ref{thm1}]
For $T \leq t \leq T+H$ with $H <T$ and any non-negative integer $k$, we have
$$(\operatorname{log}{\tau})^{k} =(\operatorname{log}{\tau_0})^{k} +\operatorname{O}\left(\frac{H(\operatorname{log}{\tau_0})^{k-1}}{T}\right).$$
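Indeed, for $T \leq t \leq T+H$ we have $\operatorname{log}{\tau} = \operatorname{log}{\tau_0} + \operatorname{O}(H/T)$, and hence, by the binomial expansion,
\begin{align*}
(\operatorname{log}{\tau})^{k} = \left(\operatorname{log}{\tau_0} + \operatorname{O}\Big(\frac{H}{T}\Big)\right)^{k} = (\operatorname{log}{\tau_0})^{k} +\operatorname{O}\left(\frac{H(\operatorname{log}{\tau_0})^{k-1}}{T}\right).
\end{align*}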
Since $Z^{(k_j)}(t)=G_{k_j}(t)(\operatorname{log}{\tau})^{k_j}$ for $j=1,2,$ we write
\begin{small}
\begin{align}\label{neq:00001}
&\int_{T}^{T+H}Z^{(k_1)}(t)Z^{(k_2)}(t)\Big|\Phi\Big(\frac{1}{2}+ it\Big)\Big|^2 dt = (\operatorname{log}{\tau_0})^{k_1+k_2}\int_{T}^{T+H}{G_{k_1}(t)G_{k_2}(t)\Big|\Phi\Big(\frac{1 }{2}+it\Big)\Big|^2}dt \\
&\nonumber + \operatorname{O}\left(\frac{H(\operatorname{log}{\tau_0})^{k_1+k_2-1}}{T}\Big(\int_{T}^{T+H}{\Big|G_{k_1}(t)\Phi\Big(\frac{1 }{2}+it\Big)\Big|^2}dt\Big)^{\frac{1}{2}}\Big(\int_{T}^{T+H}{\Big|G_{k_2}(t)\Phi\Big(\frac{1 }{2}+it\Big)\Big|^2}dt\Big)^{\frac{1}{2}}\right).
\end{align}
\end{small}
Taking $k_1=k_2=k$ in \eqref{neq:00001} we get
\begin{align}\label{neq:000011}
\int_{T}^{T+H}(Z^{(k)}(t))^2\Big|\Phi\Big(\frac{1}{2}+ it\Big)\Big|^2 dt & = (\operatorname{log}{\tau_0})^{2k}\int_{T}^{T+H}{G_{k}^2(t)\Big|\Phi\Big(\frac{1 }{2}+it\Big)\Big|^2}dt \\
&\nonumber + \operatorname{O}\left(\frac{H(\operatorname{log}{\tau_0})^{2k-1}}{T}\int_{T}^{T+H}{\Big|G_{k}(t)\Phi\Big(\frac{1 }{2}+it\Big)\Big|^2}dt\right).
\end{align}
Putting $k_1=k_2 =k$ in \eqref{asym-sp} and \eqref{ext-sp} we have
\begin{align}\label{men5}
\int_{T}^{T+H}{G_{k}^2(t)\Big|\Phi\Big(\frac{1 }{2}+it\Big)\Big|^2}dt \sim \left(1 + \frac{1}{\theta (2k+1)} + \frac{4 k^2 \theta }{3(2k-1)} \right)H\qquad\text{as}~ T \rightarrow \infty,
\end{align}
provided that $k$ is a non-negative integer and $H=T^a, \frac{1}{2}+\theta < a <\frac{4k+3}{4(k+1)} $.
Now, by using \eqref{men5} in \eqref{neq:000011}, we obtain Theorem~\ref{thm1} for $H=T^a$ with $\frac{1}{2}+\theta < a <\frac{4k+3}{4(k+1)}$. Whenever $\frac{4k+3}{4(k+1)} \leq a \leq 1$, we can split the interval $[T, T+H]$ into subintervals of length $T^b$, where $ b =\frac{4k+3}{4(k+1)}-\delta$ and $\delta >0$ is sufficiently small; adding the corresponding results, we obtain that \eqref{men5} holds for $\frac{1}{2}+\theta < a \leq 1$. This yields Theorem~\ref{thm1}.
\end{proof}
\begin{proof}[\textbf{Proof of Theorem} \ref{thm0}]
By using Proposition \ref{L12} with $k_1=k_2 =k$ in \eqref{neq:000011}, we write,
for $H=T^a$ with $\frac{1}{2}+\theta+\epsilon < a < \frac{4k+3}{4(k+1)}$ and $0< \theta < \frac{2k+1}{4(k+1)}$,
\begin{align*}
J(\Phi, H, k) &= 2H \sum_{0\leq r_1,r_2\leq k}(-1)^{r_1+r_2}\binom{k}{r_1}\binom{k}{r_2}{J(r_1,r_2)}{(\operatorname{log}{\tau_0})^{2k-r_1-r_2}}\\
& + \operatorname{O}\Big( HY_k(T)X^\epsilon(\operatorname{log}{T})^{k+3} + \sqrt{T}X^{1+2\epsilon}(\operatorname{log}{T})^{2k+3} \Big).
\end{align*}
Now, by using Lemma \ref{L9}, we rewrite $J(r_1,r_2)$ given in Proposition \ref{L12} as
\begin{align*}
J(r_1,r_2)= \sum_{\ l,q\leq X}\frac{b_l\overline{b}_q}{lq}(l,q)\int_{0}^{\operatorname{log}{\frac{\tau_0(l,q)}{q}}}\left(\operatorname{log}{\frac{l}{(l,q)}}+t\right)^{r_1}\left(\operatorname{log}{\frac{q}{(l,q)}} + t \right)^{r_2}dt.
\end{align*}
Interchanging the variables $l$ and $q$ leaves the main term of $J(\Phi, H, k)$ unchanged. Thus, we write
\begin{small}
\begin{align*}
J(\Phi, H, k) &= H \sum_{\ l,q\leq X}\frac{b_l\overline{b}_q}{lq}(l,q)\left(\int_{0}^{\operatorname{log}{\frac{\tau_0(l,q)}{q}}}+\int_{0}^{\operatorname{log}{\frac{\tau_0(l,q)}{l}}}\right)\Big(\operatorname{log}{\frac{\tau_0(l,q)}{l}}-t\Big)^{k}\Big(\operatorname{log}{\frac{\tau_0(l,q)}{q}} - t \Big)^{k}dt\\
& + \operatorname{O}\Big( HY_k(T)X^\epsilon(\operatorname{log}{T})^{k+3} + \sqrt{T}X^{1+2\epsilon}(\operatorname{log}{T})^{2k+3} \Big).
\end{align*}
\end{small}
Define $\beta:=\operatorname{log}{({\tau_0(l,q)}/{\sqrt{lq}})}$; then we get
$$\operatorname{log}{\frac{\tau_0(l,q)}{q}}= \beta +\operatorname{log}{\sqrt{l/q}} \mbox{ and }\operatorname{log}{\frac{\tau_0(l,q)}{l}}= \beta +\operatorname{log}{\sqrt{q/l}}.$$
Now, by the change of variables $t=\beta +z$ and then $z=y\operatorname{log}{\sqrt{{l}/{q}}}$, we find that
\begin{align*}
&\int_{\beta}^{\operatorname{log}{\frac{\tau_0(l,q)}{q}}}\Big(\operatorname{log}{\frac{\tau_0(l,q)}{l}}-t\Big)^{k}\Big(\operatorname{log}{\frac{\tau_0(l,q)}{q}} - t \Big)^{k}dt
=\frac{1}{2}\left(\frac{1}{4}\operatorname{log}{{\frac{q}{l}}}\operatorname{log}{{\frac{l}{q}}}\right)^k\operatorname{log}{{\frac{l}{q}}}\int_{0}^{1}(1-y^2)^k dy,\\
&\int_{\beta}^{\operatorname{log}{\frac{\tau_0(l,q)}{l}}}\Big(\operatorname{log}{\frac{\tau_0(l,q)}{l}}-t\Big)^{k}\Big(\operatorname{log}{\frac{\tau_0(l,q)}{q}} - t \Big)^{k}dt
=\frac{1}{2}\left(\frac{1}{4}\operatorname{log}{{\frac{q}{l}}}\operatorname{log}{{\frac{l}{q}}}\right)^k\operatorname{log}{{\frac{q}{l}}}\int_{0}^{1}(1-y^2)^k dy.
\end{align*}
Note that the sum of the above two integrals is zero. Thus,
\begin{align}\label{asym1}
J(\Phi, H, k) &= 2H \sum_{\ l,q\leq X}\frac{b_l\overline{b}_q}{lq}(l,q)\int_{0}^\beta \left(\operatorname{log}{\frac{\tau_0(l,q)}{l}}-t\right)^{k}\left(\operatorname{log}{\frac{\tau_0(l,q)}{q}} - t \right)^{k}dt\\
\nonumber & + \operatorname{O}\Big( HY_k(T)X^\epsilon(\operatorname{log}{T})^{k+3} + \sqrt{T}X^{1+2\epsilon}(\operatorname{log}{T})^{2k+3} \Big).
\end{align}
Again, by the change of variables $t=\beta + z$ and then $z=\beta x$ in the integral on the right-hand side of \eqref{asym1}, we get
\begin{align}\label{asym11}
J(\Phi, H, k) &= H \sum_{\ l,q\leq X}\frac{b_l\overline{b}_q}{lq}(l,q)2\beta\int_{0}^1\left(\beta^2x^2-\frac{1}{4}\operatorname{log}^2{\frac{q}{l}}\right)^k dx \\
& \nonumber + \operatorname{O}\Big( HY_k(T)X^\epsilon(\operatorname{log}{T})^{k+3} + \sqrt{T}X^{1+2\epsilon}(\operatorname{log}{T})^{2k+3} \Big).
\end{align}
Now, replacing $\beta$ by its exact expression in \eqref{asym11}, we conclude the result for
$\frac{1}{2}+\theta+\epsilon < a < \frac{4k+3}{4(k+1)}$. Whenever $\frac{4k+3}{4(k+1)} \leq a \leq 1$, we can split the interval $[T, T+H]$ into subintervals of length $T^b$, where $ b =\frac{4k+3}{4(k+1)}-\delta$ and $\delta >0$ is sufficiently small. The number of such subintervals is at most $HT^{-b}$. Thus, adding the corresponding results over the subintervals, we obtain Theorem~\ref{thm0} for $\frac{1}{2}+\theta+\epsilon < a \leq 1$.
\end{proof}
\begin{proof}[\textbf{Proof of Theorem} \ref{thm2}]
The proof of this theorem is an application of the well-known Littlewood theorem to $f(s)$ in the region $\mathcal{D}$ with vertices $\frac{1}{2}+iT, m+ iT, m+i(T+H)$ and $\frac{1}{2}+i(T+H)$, where $k$ is a positive integer and $f(s)$ is defined by
$$f{(s)}=\eta_k(s) \Phi(s)(\omega(s))^{-k+1}.$$ Note that $\omega(s)$ is non-vanishing and analytic on $\mathcal{D}$, so the number of zeros of $\eta_k$ in $\mathcal{D}$ is at most the number of zeros of $\eta_k\Phi$ in $\mathcal{D}$. Also, we have
\begin{align*}
\Big|\eta_k\Big( \frac{1}{2}+it\Big)\left(\omega\Big(\frac{1}{2}+it\Big)\right)^{-k+1}\Big|= |G_k(t)|.
\end{align*}
Thus from Littlewood's Theorem, we get
\begin{align}\label{pfeq7}
2\pi\sum_{\substack{\eta_k(\rho_k^{'})=0 \\\rho_k^{'} \in \mathcal{D}, \beta_k^{'}> \frac{1}{2} }} \left(\beta_k^{'} - \frac{1}{2} \right) & \leq \int_{T}^{T+H}\operatorname{log}{\Big|G_k(t)\Phi\Big(\frac{1}{2}+it\Big)\Big|}dt - \int_{T}^{T+H}\operatorname{log}|f\left(m+it\right)|dt\\
& \nonumber + \int_{\frac{1}{2}}^{m}\arg f\left(\sigma+i(T+H)\right)d\sigma - \int_{\frac{1}{2}}^{m}\arg f\left(\sigma+iT\right)d\sigma.
\end{align}
For $\sigma >1$, by using Lemma \ref{L15} in \eqref{int2} we get
\begin{align}\label{pfeq10}
\eta_k(\sigma+it)(\omega(\sigma+it))^{1-k}=\left(\frac{-1}{2}\right)^{k-1}\zeta(\sigma+it) + \operatorname{O}\left(\frac{1}{\operatorname{log}{t}}\right).
\end{align}
Since, for $\sigma >1$,
$$\operatorname{log}{\zeta(s)}= \sum_{n=2}^{\infty} \frac{\Lambda(n)}{n^s\operatorname{log}{n}},$$
it follows that
\begin{align}\label{pfeq9}
\int_{T}^{T+H}\operatorname{log}\lbrace\eta_k(m+it)(\omega(m+it))^{1-k}\rbrace dt =-(k-1)H\operatorname{log}{2} + \operatorname{O}{(H{(\operatorname{log}{T}})^{-1})}.
\end{align}
Also, for $\sigma >3/2$, the function $\operatorname{log}{\Phi(\sigma+it)}$ is analytic: writing
$\Phi(s)=1+\Phi_1(s)$, since $b_n \leq 1$ we get
$|\Phi_1(s)|< 2^{1-\sigma} <1.$ So, by using the Cauchy integral formula on the contour
$$\{\sigma+it : m \leq \sigma < \infty, T\leq t \leq T+H \},$$ we get
\begin{align}\label{pfeq8}
\int_{T}^{T+H}\operatorname{log}{\Phi{(m+it)}}dt =\operatorname{O}(1).
\end{align}
Since $\operatorname{Re}{(\operatorname{log}{f(s)})}=\operatorname{log}{|f(s)|}$, from \eqref{pfeq10}, \eqref{pfeq9} and \eqref{pfeq8} we conclude that $$\int_{T}^{T+H}\operatorname{log}|f\left(m+it\right)|dt= -(k-1)H\operatorname{log}{2} + \operatorname{O}{(H{(\operatorname{log}{T}})^{-1})}.$$
Again, taking into account the fact that
$$\arg{(f(s))}= \operatorname{Im}{(\operatorname{log}{f(s)})} =\operatorname{Im}{(\operatorname{log}{\eta_k(s)\omega(s)^{1-k}})} + \operatorname{Im}{(\operatorname{log}{\Phi(s)})},$$
from \eqref{pfeq10} it is not hard to show that
\begin{align*}
\int_{\frac{3}{2}}^{m}\arg f\left(\sigma+i(T+H)\right)d\sigma - \int_{\frac{3}{2}}^{m}\arg f\left(\sigma+iT\right)d\sigma \ll m.
\end{align*}
Now, note that $|\operatorname{Re}{f(2+it)}|> \frac{1}{2^{k+2}} > 0$ and from Lemma \ref{L14} and \eqref{int2} we obtain
\begin{align*}
f(\sigma + it) \ll X^{1- \sigma} \Big(t^{\frac{1- \sigma}{3}} +1\Big)\operatorname{log}{t}\quad \mbox{ for } \frac{1}{2} \leq \sigma \leq 2.
\end{align*}
Thus, by using \cite[Lemma, p. 213]{Titc} we obtain
\begin{align*}
\arg f\left(\sigma + iT \right) \ll \operatorname{log}{T} +\operatorname{log}{X}.
\end{align*}
A similar bound holds for $ \arg f\left(\sigma + i(T+H)\right)$. Thus, for a real number $\beta<2$ close to $2$, we get
\begin{align*}
\int_{\frac{1}{2}}^{\beta}\arg f\left(\sigma+i(T+H)\right)d\sigma - \int_{\frac{1}{2}}^{\beta}\arg f\left(\sigma+iT\right)d\sigma \ll \operatorname{log}{T}.
\end{align*}
Now, we recall the well-known inequality
\begin{align}\label{pfeq11}
\int_{T}^{T+H}\operatorname{log}{|g(\sigma+it)|}dt\leq \frac{H}{2}\operatorname{log}{\left(\frac{1}{H}\int_{T}^{T+H}|g(\sigma+it)|^2dt\right)}
\end{align}
for any analytic function $g$. Hence, we choose $g(\sigma+it)=G_k(t)\Phi\left(\frac{1}{2}+it\right)$ in \eqref{pfeq11}, replace the first integral on the right-hand side of \eqref{pfeq7} by its upper bound from Proposition \ref{L12}, and conclude the result for $H=T^a$, where $\frac{1}{2}+\theta<a < \frac{4k+3}{4(k+1)}$. For $\frac{4k+3}{4(k+1)} \leq a \leq 1$, proceeding in a similar way as in the last paragraph of the proof of Theorem~\ref{thm1}, we conclude the required result for $\frac{1}{2}+\theta <a \leq 1$.
\end{proof}
\begin{proof}[\textbf{Proof of Theorem} \ref{thm4}]
The proof of this theorem is similar to the proofs of Theorem~\ref{thm0} and Theorem~\ref{thm1}. With the notation $\beta:=\operatorname{log}{({\tau_0(l,q)}/{\sqrt{lq}})}$, we get
$$\operatorname{log}{\frac{\tau_0(l,q)}{q}}= \beta +\operatorname{log}{\sqrt{l/q}} \mbox{ and }\operatorname{log}{\frac{\tau_0(l,q)}{l}}= \beta +\operatorname{log}{\sqrt{q/l}}.$$
Now, by the change of variable $t=\beta +z$ and then $z=y\operatorname{log}{\sqrt{{l}/{q}}}$, we find that
\begin{align*}
&\Big(\int_{\beta}^{\operatorname{log}{\frac{\tau_0(l,q)}{q}}} + \int_{\beta}^{\operatorname{log}{\frac{\tau_0(l,q)}{l}}}\Big)\left(\operatorname{log}{\frac{\tau_0(l,q)}{l}}-t\right)^{k_1}\left(\operatorname{log}{\frac{\tau_0(l,q)}{q}} - t \right)^{k_2}dt \\
&= \Big(\frac{1}{2}\operatorname{log}{{\frac{q}{l}}}\Big)^{k_1}\Big(\frac{1}{2}\operatorname{log}{{\frac{l}{q}}}\Big)^{k_2+1} \int_{0}^{1}(y+1)^{k_1}(1-y)^{k_2} - (1-y)^{k_1}(y+1)^{k_2} dy.
\end{align*}
Again, by replacing $t=\beta+z$ and then $z=x\beta$ we get
\begin{align*}
&\int_{0}^\beta \Big(\operatorname{log}{\frac{\tau_0(l,q)}{l}}-t\Big)^{k_1}\Big(\operatorname{log}{\frac{\tau_0(l,q)}{q}} - t \Big)^{k_2}dt\\
&= \beta(-1)^{k_2}\int_{0}^{1}\Big(\frac{1}{2}\operatorname{log}{\frac{q}{l}}-\beta x \Big)^{k_1}\Big(\frac{1}{2}\operatorname{log}{\frac{l}{q}}-\beta x \Big)^{k_2}dx.
\end{align*}
By using the above integrals and following the proof of Theorem~\ref{thm0}, we obtain the result stated in the first part of the theorem. For the second part of this theorem, we follow the proof of Theorem~\ref{thm1}.
\end{proof}
\begin{proof}[\textbf{Proof of Theorem} \ref{thm5}]
From Proposition~\ref{darivative-zeta} we get
\begin{align}\label{zeta^n-final2}
(-i)^{n-1}\zeta^{(n)}\Big( \frac{1}{2}-it\Big)= e^{i\theta(t)} \sum_{j=0}^{n-1}\binom{n-1}{j}&\Big((i\operatorname{log}{\tau})^{n-1-j} Z^{(j+1)}{(t)} + (i\operatorname{log}{\tau})^{n-j}Z^{(j)}(t)\Big)\\
\nonumber & +\operatorname{O}\Big(t^{-\frac{5}{6}}\operatorname{log}^{n-1}{t}\Big).
\end{align}
By using Proposition \ref{darivative-zeta} and \eqref{zeta^n-final2}, we have
\begin{small}
\begin{align}\label{zeta-prod1}
\zeta^{(m)}\Big(\frac{1}{2}+it \Big)\zeta^{(n)}\Big(\frac{1}{2}-it \Big)=(-\operatorname{log}{\tau})^{m+n}
&\sum_{k=0}^{n-1}\sum_{j=0}^{m-1}\binom{m-1}{j}\binom{n-1}{k}\frac{(-i)^ki^j}{(\operatorname{log}{\tau})^{j+k+2}}A(j,k)\\
\nonumber & + \operatorname{O}\left(T^{-\frac{2}{3}}(\operatorname{log}{T})^{m+n-2}\right),
\end{align}
\end{small}
where $A(j,k)$ is given by the expression
\begin{align*}
A(j,k)& = Z^{(j+1)}(t)Z^{(k+1)}(t) +i \operatorname{log}{\tau} Z^{(j+1)}(t)Z^{(k)}(t) -i\operatorname{log}{\tau} Z^{(j)}(t)Z^{(k+1)}(t) \\
&+(\operatorname{log}{\tau})^{2} Z^{(j)}(t)Z^{(k)}(t).
\end{align*}
Rewriting the above equation in terms of $G_k(t)$, defined in Proposition~\ref{L12}, we have
\begin{align*}
\frac{A(j,k)}{(\operatorname{log}{\tau})^{j+k+2}} = G_{j+1}(t)G_{k+1}(t) + iG_{j+1}(t)G_{k}(t)-iG_{j}(t)G_{k+1}(t) + G_{j}(t)G_{k}(t).
\end{align*}
Now, applying Proposition~\ref{L12}, we evaluate the integral
$$\mathcal{I}(j,k, T):= \int\limits_{T}^{T+H}\frac{A(j,k)}{(\operatorname{log}{\tau})^{j+k+2}} \left|\Phi\left(\frac{1}{2}+ it\right)\right|^2 dt, $$
where $H=T^a$, $\frac{1}{2}+\theta< a < \frac{4k+3}{4(k+1)}$. Note that $\mathcal{I}(j,k, T)$ can also be computed by using integration by parts and Theorem~\ref{thm4}.
By using the definition of $\vartheta(j,k)$ and applying the second part of Proposition~\ref{L12}, we get
\begin{align*}
\mathcal{I}(j,k, T) &= i^{k-j}\Big[2 +\frac{1}{\theta}\Big(\frac{1}{k+j+3}+ \frac{1}{k+j+1}\Big)+ \frac{4\theta}{3}\Big(\frac{(j+1)(k+1)}{k+j+1}+ \frac{jk}{k+j-1}\Big)\Big]H\\
& + \operatorname{O}\Big(H\frac{(\operatorname{log}\log{T})^3}{\operatorname{log}{T}}\Big),
\end{align*}
if $j+k$ is even. For the case $j+k$ odd we get
\begin{align*}
\mathcal{I}(j,k, T) = i^k(-i)^j\Big[2 +\frac{2}{\theta}\Big( \frac{1}{k+j+2}\Big)+ \frac{4\theta}{3}\Big(\frac{(j+1)k}{k+j}+ \frac{j(k+1)}{k+j}\Big) + \operatorname{O}\Big(\frac{(\operatorname{log}\log{T})^3}{\operatorname{log}{T}}\Big)\Big]H.
\end{align*}
Note that, proceeding as in the proofs of Theorems \ref{thm0} and \ref{thm1}, we can extend the range of $a$ to $\frac{1}{2}+\theta < a \leq 1$ in the above formulas for $\mathcal{I}(j,k, T)$.
Next, we evaluate the sum $$\sum_{k=0}^{n-1}\sum_{j=0}^{m-1}\binom{m-1}{j}\binom{n-1}{k}(-i)^ki^j\mathcal{I}(j,k, T).$$
Now,
\begin{align*}
\sum_{k=0}^{n-1}\sum_{\substack{j=0}}^{m-1}=\sum_{k=0}^{n-1}\sum_{\substack{j=0\\j+k \mbox{ even}}}^{m-1} +\sum_{k=0}^{n-1}\sum_{\substack{j=0\\j+k \mbox{ odd}}}^{m-1}.
\end{align*}
Thus,
\begin{small}
\begin{align*}
\sum_{k=0}^{n-1}\sum_{j=0}^{m-1}\binom{m-1}{j}\binom{n-1}{k}(-i)^ki^j\mathcal{I}(j,k, T) = 2H\sum_{k=0}^{n-1}\sum_{\substack{j=0}}^{m-1}\binom{m-1}{j}&\binom{n-1}{k} + \frac{H}{\theta}\mathcal{I}_1 + \frac{4H\theta}{3}\mathcal{I}_2\\
& + \operatorname{O}\Big(\frac{H(\operatorname{log}\log{T})^3}{\operatorname{log}{T}}\Big),
\end{align*}
\end{small}
where $\mathcal{I}_1, \mathcal{I}_2$ are defined by
\begin{footnotesize}
\begin{align*}
\mathcal{I}_1 := \sum_{k=0}^{n-1}\sum_{\substack{j=0\\j+k \mbox{ even}}}^{m-1}\binom{m-1}{j}\binom{n-1}{k}\Big(\frac{1}{k+j+3}+ \frac{1}{k+j+1}\Big) + \sum_{k=0}^{n-1}\sum_{\substack{j=0\\j+k \mbox{ odd}}}^{m-1}\binom{m-1}{j}\binom{n-1}{k}\frac{2}{k+j+2},
\end{align*}
\end{footnotesize}
and
\begin{small}
\begin{align*}
\mathcal{I}_2& := \sum_{k=0}^{n-1}\sum_{\substack{j=0\\j+k \mbox{ even}}}^{m-1}\binom{m-1}{j}\binom{n-1}{k}\Big(\frac{(j+1)(k+1)}{k+j+1}+ \frac{jk}{k+j-1}\Big)\\
& + \sum_{k=0}^{n-1}\sum_{\substack{j=0\\j+k \mbox{ odd}}}^{m-1}\binom{m-1}{j}\binom{n-1}{k}\Big(\frac{(j+1)k}{k+j}+\frac{(k+1)j}{k+j}\Big).
\end{align*}
\end{small}
From the binomial identity we get
\begin{align*}
\sum_{k=0}^{n-1}\sum_{\substack{j=0}}^{m-1}\binom{m-1}{j}\binom{n-1}{k} =2^{m+n-2}.
\end{align*}
By integrating binomial identities, we get
\begin{align*}
\mathcal{I}_1 = \frac{1}{2}\int_{-1}^{1}(x^2 + 1 + 2x)(1+x)^{m+n-2}dx = \frac{2^{m+n}}{m+n+1}.
\end{align*}
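In more detail, since $\int_{-1}^{1}x^{p}\,dx$ equals $2/(p+1)$ for even $p$ and $0$ for odd $p$, in both parity cases the coefficient of $\binom{m-1}{j}\binom{n-1}{k}$ in $\mathcal{I}_1$ equals $\frac{1}{2}\int_{-1}^{1}x^{j+k}(1+x)^{2}\,dx$; summing over $j,k$ and using
\begin{align*}
\sum_{k=0}^{n-1}\sum_{j=0}^{m-1}\binom{m-1}{j}\binom{n-1}{k}x^{j+k} = (1+x)^{m+n-2}
\end{align*}
yields the integral representation above.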
Again, by integrating products of derivatives of binomial identities, we write
\begin{align*}
\mathcal{I}_2 &= \frac{1}{2} \int_{-1}^{1} \Big\lbrace(x(1+x)^{m-1})^{'} (x(1+x)^{n-1})^{'} +((1+x)^{m-1})^{'}((1+x)^{n-1})^{'} \Big\rbrace dx \\
& +\frac{1}{2} \int_{-1}^{1} \Big\lbrace(x(1+x)^{m-1})^{'} ((1+x)^{n-1})^{'} +((1+x)^{m-1})^{'}(x(1+x)^{n-1})^{'} \Big\rbrace dx \\
& = \frac{1}{2}mn\int_{-1}^{1}(1+x)^{m+n-2}dx = \frac{mn}{m+n-1}2^{m+n-2}.
\end{align*}
Hence, we get
\begin{align}\label{mean-}
\sum_{k=0}^{n-1}\sum_{j=0}^{m-1}&\binom{m-1}{j}\binom{n-1}{k}(-i)^ki^j\mathcal{I}(j,k, T) \\
\nonumber &= H2^{m+n}\Big(\frac{1}{2} + \frac{1}{\theta (m+n+1)} + \frac{mn\theta}{3(m+n-1)} +\operatorname{O}\Big(\frac{(\operatorname{log}\log{T})^3}{\operatorname{log}{T}}\Big)\Big).
\end{align}
Finally, we obtain
\begin{align*}
&\int_{T}^{T+H}\zeta^{(m)}\Big(\frac{1}{2}+it \Big)\zeta^{(n)}\Big(\frac{1}{2}-it \Big) \left|\Phi\left(\frac{1}{2}+ it\right)\right|^2 dt \\
& = (-1)^{m+n}\Big(\frac{1}{2} + \frac{1}{\theta (m+n+1)} + \frac{mn\theta}{3(m+n-1)}+\operatorname{O}\Big(\frac{(\operatorname{log}\log{T})^3}{\operatorname{log}{T}}\Big)\Big)H \Big(\operatorname{log}{\frac{T}{2 \pi}}\Big)^{m+n}.
\end{align*}
\end{proof}
\begin{proof}[\textbf{Proof of Theorem} \ref{thm6}]
\textbf{1st part:}
The proof of Theorem \ref{thm6} is similar to that of Theorem~\ref{thm2}. We apply Littlewood's lemma to $z_k(s)\Phi(s)$ in the rectangle with vertices $\frac{1}{2}+iT, 2+iT, 2+ i(T+H)$ and $\frac{1}{2}+i(T+H)$. The function $z_k(s)$ was introduced by Levinson and Montgomery~\cite[Section 3]{LM} and is given by
\begin{align*}
z_k(s)= (-1)^k2^s(\operatorname{log}{2})^{-k}\zeta^{(k)}(s).
\end{align*}
Following the proof of Theorem~\ref{thm2} and \cite[Section 3]{LM} we get
\begin{align*}
2\pi\sum_{\substack{T \leq \gamma_k\leq T+H \\ \beta_k > \frac{1}{2}}} \Big(\beta_k - \frac{1}{2} \Big)=\int_{T}^{T+H}\operatorname{log}\Big|z_k\Big(\frac{1}{2}+it\Big)\Phi\Big(\frac{1}{2}+it\Big)\Big|dt + \operatorname{O}(\operatorname{log}{T}).
\end{align*}
Thus, we get
\begin{align}\label{pfeq71}
2\pi\sum_{\substack{T \leq \gamma_k\leq T+H \\ \beta_k > \frac{1}{2}}} \Big(\beta_k - \frac{1}{2} \Big) & = \int_{T}^{T+H}\operatorname{log}\Big|\zeta^{(k)}\Big(\frac{1}{2}+it\Big)\Phi\Big(\frac{1}{2}+it\Big)\Big|dt\\
\nonumber &+ H\Big(\frac{\operatorname{log}{2}}{2}-k\operatorname{log}\log{2}\Big) + \operatorname{O}(\operatorname{log}{T}).
\end{align}
Now, by applying the arithmetic mean--geometric mean inequality to the integral appearing on the right-hand side of \eqref{pfeq71}, we have
\begin{align*}
\int_{T}^{T+H}\operatorname{log}\Big|\zeta^{(k)}\Big(\frac{1}{2}+it\Big)\Phi\Big(\frac{1}{2}+it\Big)\Big|dt \leq \frac{H}{2}\operatorname{log}\Big(\frac{1}{H}\int_{T}^{T+H}\Big|\zeta^{(k)}\Big(\frac{1}{2}+it\Big)\Phi\Big(\frac{1}{2}+it\Big)\Big|^2dt\Big).
\end{align*}
Hence, from Corollary~\ref{coro5} we obtain
\begin{align}\label{pfeq712}
2\pi\sum_{\substack{T \leq \gamma_k\leq T+H \\ \beta_k > \frac{1}{2}}} \Big(\beta_k - \frac{1}{2} \Big) \leq kH\operatorname{log}\log{\frac{T}{2\pi }} &+ \frac{H}{2}\operatorname{log}{\Big(\frac{1}{2}+ \frac{1}{(2k+1)\theta} + \frac{k^2\theta}{3(2k-1)} \Big)}\\
\nonumber & + H\Big(\frac{\operatorname{log}{2}}{2}-k\operatorname{log}\log{2}\Big) + \operatorname{O}\Big(\frac{H(\operatorname{log}\log{T})^3}{\operatorname{log}{T}}\Big).
\end{align}
\textbf{2nd part:}\\
Levinson and Montgomery~\cite[Theorem 3]{LM} proved that for $H > 0,$
\begin{align}\label{pfeq713}
2\pi\sum_{\substack{T \leq \gamma_k\leq T+H }} \Big(\beta_k - \frac{1}{2} \Big) = kH\operatorname{log}\log{\frac{T}{2\pi}} + H\Big(\frac{\operatorname{log}{2}}{2}-k\operatorname{log}\log{2}\Big) + \operatorname{O}\Big(\frac{H^2}{T\operatorname{log}{T}}+\operatorname{log}{T}\Big).
\end{align}
Also, we have
\begin{align}\label{pfeq714}
-\sum_{\substack{T \leq \gamma_k\leq T+H }} \Big(\beta_k - \frac{1}{2} \Big) = \sum_{\substack{T \leq \gamma_k\leq T+H\\ \beta_k < 1/2 }} \Big(\frac{1}{2} -\beta_k\Big) - \sum_{\substack{T \leq \gamma_k\leq T+H \\ \beta_k > 1/2}} \Big(\beta_k - \frac{1}{2} \Big).
\end{align}
Hence, using \eqref{pfeq712} and \eqref{pfeq713} in \eqref{pfeq714} under the condition $H=T^a, \frac{1}{2} + \theta < a \leq 1$, where $0< \theta < \frac{2k+1}{4(k+1)}$, we obtain
\begin{align*}
2\pi \sum_{\substack{T \leq \gamma_k\leq T+H\\ \beta_k < 1/2 }} \Big(\frac{1}{2} -\beta_k\Big) \leq \frac{H}{2}\operatorname{log}{\Big(\frac{1}{2}+ \frac{1}{(2k+1)\theta} + \frac{k^2\theta}{3(2k-1)}\Big)} +\operatorname{O}\Big(\frac{H(\operatorname{log}\log{T})^3}{\operatorname{log}{T}}\Big).
\end{align*}
\end{proof}
\section{Acknowledgement}
We would like to thank Prof. J. B. Conrey for informing us, in a personal communication, that~\cite[Theorem 2]{Con} is true for $R=0$. The authors would like to thank Prof. R. Balasubramanian for many helpful discussions. The project was initiated at the Harish-Chandra Research Institute, where the first author was a graduate student and the second author was a post-doctoral fellow; they thank the Harish-Chandra Research Institute for providing the necessary facilities and a good working environment. The second author would also like to thank The University of Hong Kong, where he was a post-doctoral fellow, for its facilities and working environment. The second author was supported by the Czech Science Foundation GACR, grant 21-00420M. The first author also thanks the Indian Statistical Institute, Kolkata, for providing a nice working environment.
\section{Introduction}
Disequilibrium species can be used to constrain the deep water abundance and the deep eddy diffusion coefficient in the atmospheres of Jupiter and Saturn \citep[e.g.,][]{PB77, FL94}. Various disequilibrium species such as CO, PH$_3$, GeH$_4$, and AsH$_3$ have been detected on Jupiter and Saturn with abundances orders of magnitude higher than their respective chemical equilibrium abundances at the pressure level where they are observable \citep[e.g.,][]{Beer75, Noll86, Ridgway76, Larson80, Fink78, Noll88b, Bezard89, Noll89}. These species at a few bars are transported upward by vertical mixing from the deep atmosphere, where they are more abundant; therefore, they carry information about the atmosphere down to a few hundred bars. In this paper, we model the vertical profiles of disequilibrium species with updated thermodynamic and kinetic data, and investigate the dependence on the water abundance and the eddy diffusion coefficient.
Our study is timely for the following reasons. The JIRAM instrument on board the Juno spacecraft will be able to measure the disequilibrium species CO, PH$_3$, GeH$_4$, and AsH$_3$ down to a few bars when it arrives at Jupiter in 2016 \citep{Grassi10}. The microwave radiometer on board Juno will also be able to measure the deep water abundance \citep{Janssen05}. With the abundances of disequilibrium species and water, constraints can be placed on the deep eddy diffusion coefficient. A Saturn probe proposal has been submitted to the ESA 2015 call for medium-class missions \citep{Mousis15}, and a similar concept is under study for a submission to the NASA 2016 New Frontiers call \citep{Atkinson12}. Current entry probes are designed to go down to 10--20 bar and can make in-situ measurements of the atmospheric composition via mass spectrometry \citep{Wong04}. However, it is unlikely that such probes will be able to descend below the water cloud deck and measure the deep water abundance. A study with updated kinetic data is therefore necessary to evaluate whether the deep water abundance can be effectively constrained by disequilibrium species.
We use the diffusion-kinetic model developed in \citet{Wang15}. A C/N/O/H reaction network is employed to predict the abundances of various carbon-bearing species. The reaction networks for P/H/O and Si/H/O species are applied for the first time to planetary atmospheres in this paper. New chemical pathways for the destruction of PH$_3$ and GeH$_4$ are then proposed. New compilations of thermochemical data, especially for P, Ge, and As, are used in our model.
The paper is organized as follows. In section 2, we introduce the current status of the measurement of disequilibrium species. In section 3, we describe our models for the chemistry and transport of disequilibrium species. In section 4, we present our results. In section 5, we discuss the implications for Juno and a Saturn entry probe. The conclusions are summarized in section 6.
\section{Measurements of disequilibrium species: current status}
The tropospheric abundances of CO, PH$_3$, SiH$_4$, GeH$_4$ and AsH$_3$ are primarily measured in the 5$\mu$m window for Jupiter and Saturn. Apart from a 1 ppb tropospheric component \citep[e.g.,][]{Larson78, Bjoraker86}, CO also has a stratospheric component \citep[e.g.,][]{Bezard02}. The tropospheric CO is supplied by vertical convective mixing from deep levels where CO prevails \citep{PB77}, while the stratospheric CO can be supplied by micrometeoroids \citep{Prather78}, infalling material from icy satellites \citep{Strobel79}, or shock chemistry from infalling kilometer- to subkilometer-sized comets \citep{Lellouch95, Bezard02}. At Jupiter and Saturn, comets are more probable than the other sources \citep{Bezard02,Cavalie10}. The tropospheric CO contains information on the deep atmosphere, and thus can be used to probe the deep water abundance and the deep eddy diffusion coefficient. The retrieval of Saturn's tropospheric CO has not been successful due to its very low mixing ratio \citep{Cavalie09}. Tropospheric PH$_3$ was measured in the 5$\mu$m window for Jupiter with a mixing ratio of $(6$--$9) \times 10^{-7}$ \citep[e.g.,][]{Kunde82, Bjoraker86, Encrenaz96, Irwin98}, and for Saturn with a mixing ratio of $(3$--$5) \times 10^{-6}$ \citep[e.g.,][]{NL91, deGraauw97, Fletcher11}. The vertical profile of PH$_3$ was retrieved from the spectra by Cassini CIRS and VIMS \citep{Fletcher09a, Fletcher11}. The PH$_3$ abundance starts to be depleted in the upper troposphere, where the pressure is about 1 bar \citep{Fletcher12}, due to decreased eddy mixing, UV photolysis and chemical re-equilibration \citep{Irwin98,Irwin04}. Tropospheric GeH$_4$ was identified and measured in the 5$\mu$m window \citep{Fink78, Noll88b} with a mixing ratio of a few times 10$^{-10}$. The mixing ratio of GeH$_4$ in the stratosphere is expected to be lower than that in the troposphere because of UV photolysis.
The tropospheric AsH$_3$ was measured on both Jupiter and Saturn \citep{Bezard89, Noll89, NL91}. The mixing ratio of AsH$_3$ on Jupiter is about $2\times10^{-10}$ \citep{Noll90}, while for Saturn it is about $3\times10^{-9}$ \citep{Bezard89, NL91}. In Table \ref{tab: compositions}, we summarize the measurements of the tropospheric CO, PH$_3$, SiH$_4$, GeH$_4$, and AsH$_3$ abundances for both Jupiter and Saturn.
\section{Model}
\subsection{Introduction to the model}
We developed a code to solve the 1-D transport-kinetic equation:
\begin{equation}\label{eqn: TK}
\frac{\partial{Y_i}}{\partial{t}} = \frac{1}{\rho} \frac{\partial}{\partial{z}}(\rho K_{\rm eddy} \frac{\partial{Y_i}}{\partial{z}}) + P_i - L_i,
\end{equation}
where $Y_i$ is the mass fraction of species $i$, $\rho$ is the density of the atmosphere, $z$ is the vertical coordinate relative to a reference point in the atmosphere (we choose the 1 bar level in the code), $K_{\rm eddy}$ is the vertical eddy diffusion coefficient, $P_i$ is the
chemical production rate of species $i$, and $L_i$ is the chemical loss rate of species $i$. Both $P_i$ and $L_i$ have units of g cm$^{-3}$ s$^{-1}$. The time evolution of $Y_i$ is controlled by two physical processes: the chemical production and destruction of species $i$, and its vertical transport. In the convective envelopes of Jupiter and Saturn, mass is transported mainly by turbulent convection. In equation (\ref{eqn: TK}), the convective transport of species is approximated by diffusive transport with a coefficient $K_{\rm eddy}$, an approximation justified by the success of mixing length theory in explaining stellar convection \citep{Stone76}. The mass fractions $Y_i$ are initialized using their local chemical equilibrium values along the adiabat. The chemical net production rate ($P_i - L_i$) is integrated using \textit{Cantera}, a software toolkit developed for problems involving chemical kinetics and thermodynamics \citep{Cantera15}. At each time step, we call \textit{Cantera} to do the integration and include the result in the solution of the continuity equation. \textit{Cantera} has been used and tested for many applications including combustion, detonation, fuel cells, and batteries. The integration is terminated when the mass fractions $Y_i$ reach steady state. The code requires three kinds of input: (1) the temperature-pressure profile ($T$-$P$ profile); (2) the thermodynamic properties of each species in the format of NASA polynomials \citep{McBride93}, together with a list of reactions between these species; and (3) the elemental composition and the vertical eddy diffusion coefficient $K_{\rm eddy}$.
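As an illustration, the operator-split time stepping described above can be sketched in a few lines of Python. The grid, the coefficients, and the linear relaxation toward local equilibrium used as the chemistry sub-step (standing in for the actual \textit{Cantera} kinetics integration) are all illustrative assumptions, not the implementation itself.

```python
def step_transport_kinetics(Y, Yeq, rho, K, dz, dt, tau_chem):
    """One explicit, operator-split step of the 1-D transport-kinetic
    equation: a diffusion sub-step for (1/rho) d/dz(rho*K*dY/dz),
    followed by a chemistry sub-step.  The chemistry here is a linear
    relaxation toward local equilibrium, a stand-in for the Cantera call."""
    n = len(Y)
    Ynew = list(Y)
    # Diffusion sub-step: central differences on interior points.
    for i in range(1, n - 1):
        flux_up = 0.25 * (rho[i] + rho[i + 1]) * (K[i] + K[i + 1]) * (Y[i + 1] - Y[i]) / dz
        flux_dn = 0.25 * (rho[i - 1] + rho[i]) * (K[i - 1] + K[i]) * (Y[i] - Y[i - 1]) / dz
        Ynew[i] = Y[i] + dt * (flux_up - flux_dn) / (rho[i] * dz)
    Ynew[0] = Ynew[1]        # zero-flux top boundary
    Ynew[-1] = Yeq[-1]       # deep boundary pinned to chemical equilibrium
    # Chemistry sub-step: relax toward the local equilibrium value.
    return [y + dt * (ye - y) / tau_chem for y, ye in zip(Ynew, Yeq)]

def run_to_steady_state(Yeq, rho, K, dz, dt, tau_chem, tol=1e-10, max_steps=100000):
    """Iterate until the per-step change falls below tol (steady state)."""
    Y = list(Yeq)  # initialize at local chemical equilibrium, as in the text
    for _ in range(max_steps):
        Ynew = step_transport_kinetics(Y, Yeq, rho, K, dz, dt, tau_chem)
        if max(abs(a - b) for a, b in zip(Ynew, Y)) < tol:
            return Ynew
        Y = Ynew
    return Y
```

In the actual code the chemistry sub-step is replaced by a call to \textit{Cantera} and the grid follows the adiabat; the structure of the time loop is otherwise the same.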
The $T$-$P$ profiles for Jupiter and Saturn are calculated following the method described in \citet{FP85}. We choose a reference point where the temperature and pressure have been measured and extrapolate into the deep atmosphere assuming a dry adiabat. For Jupiter, we use $T$ = 427.71 K at 22 bars as our reference point \citep{Seiff98}. The pressure and temperature conditions at pressures lower than 22 bars are from the Galileo entry probe measurements \citep{Seiff98}. For Saturn, we use $T$ = 134.8 K at $P$ = 1 bar as our reference point \citep{Lindal85}. The heat capacity of the atmosphere used in the calculation is computed by linearly combining the heat capacities of H$_2$ and He, which are taken from the NIST-JANAF thermochemical tables \citep{Chase98}. The helium mixing ratio we use is 0.157 for Jupiter \citep{Niemann98} and 0.135 for Saturn \citep{CG00}. The resulting $T$-$P$ profiles for Jupiter and Saturn are shown in Fig. \ref{fig: adiabat}.
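As a sketch of this procedure, the fragment below extrapolates a dry adiabat from the Jovian reference point. Treating the H$_2$ and He heat capacities as temperature-independent constants is a simplification of the actual calculation, and the numerical $c_p$ values are approximate.

```python
# Approximate constant heat capacities (J mol^-1 K^-1); the actual
# calculation uses temperature-dependent NIST-JANAF values.
CP_H2, CP_HE, R_GAS = 28.8, 20.8, 8.314

def dry_adiabat(T_ref, P_ref, P_levels, x_he):
    """Extrapolate T(P) along a dry adiabat from a reference point,
    with cp a linear combination of the H2 and He heat capacities.
    For constant cp, T = T_ref * (P/P_ref)**(R/cp)."""
    cp = (1.0 - x_he) * CP_H2 + x_he * CP_HE
    grad = R_GAS / cp                 # adiabatic gradient dlnT/dlnP
    return [T_ref * (P / P_ref) ** grad for P in P_levels]

# Jupiter reference point (Seiff et al. 1998): 427.71 K at 22 bar,
# helium mixing ratio 0.157.
profile = dry_adiabat(427.71, 22.0, [22.0, 50.0, 100.0, 500.0], x_he=0.157)
```

With these assumptions the temperature rises monotonically with pressure along the adiabat, as in Fig. \ref{fig: adiabat}.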
The thermodynamic and reaction data are gathered from various sources which are detailed in the following:
\begin{itemize}
\item{\it C/N/O/H reaction network}
Our C/N/O/H reaction network is based on the network of \citet{Venot12}, downloaded from the KIDA database \citep[][\underline{http://kida.obs.u-bordeaux1.fr}]{Wakelam12}. The network consists of 105 neutral species and 963 reactions. Among these reactions, 957 are reversible and 6 are irreversible. A complete list of the species can be found in \citet{Venot12}. The network has been validated against various combustion experiments in the temperature range between 300 K and 2000 K and in the pressure range between 0.01 bar and several hundred bars.
An alternative network applied to hydrogen-rich atmospheres is that of \citet{Moses11} and \citet{VM11}. \citet{Moses14} compared their model with the \citet{Venot12} model and found that the major difference in the CO/CH$_4$ chemistry comes from the rate coefficient of the reaction H + CH$_3$OH $\leftrightarrow$ CH$_3$ + H$_2$O. The \citet{Venot12} model used the rate coefficient obtained by \citet{Hidaka89} from laboratory experiments. However, \citet{Moses14} argued, based on quantum chemical calculations in \citet{Moses11}, that this reaction is likely prohibited by a very large energy barrier and is much slower than was estimated by \citet{Hidaka89}. It remains to be seen whether changing the rate coefficient following the suggestions of \citet{Moses11} would reproduce the experimental results of \citet{Hidaka89}. Because this discrepancy remains unresolved, we have considered two reaction networks in our model:
\begin{itemize}
\item{\it network A}: it is based on the \citet{Venot12} network with some modifications. Among the species in the list, we remove HNC because it does not participate in any reaction in the network. We include CH$_3$HN$_2$, CH$_3$HN, CH$_2$NH$_2$, and CH$_2$NH, since these species are expected to be important in a hydrogen-rich environment \citep{Moses10}. The final network A consists of 108 species and 1000 reactions. The added reactions and their rate coefficients are from \citet{DB00}. The thermodynamic properties are mainly compiled from \citet{BR05}, \citet{McBride93}, \citet{DB00} and \citet{Venot12}. An online updated version of the \citet{BR05} database can be found at \underline{http://garfield.chem.elte.hu/Burcat/burcat.html}.
The whole reaction list along with thermodynamic data and rate coefficient data are available in the KIDA database (\underline{http://kida.obs.u-bordeaux1.fr/networks.html}).
\item{\it network B}: it is the same as network A except that the rate coefficient for the reaction H + CH$_3$OH $\leftrightarrow$ CH$_3$ + H$_2$O is revised to be much slower, following the recommended rate in \citet{Moses11}. The slower rate leads to nearly two orders of magnitude increase in the CO/CH$_4$ conversion timescale near the quench level.
\end{itemize}
\item{\it H/P/O reaction network}
The H/P/O reaction network is based on \citet{Twarowski95}. The reaction network consists of 24 species and 175 reactions. The phosphorus containing species included are: PH$_3$, PH$_2$, PH, HOPO, HPO, PO,
PO$_2$, PO$_3$, HOPO$_2$, P$_2$O$_3$, P, P$_2$, P$_4$, P$_2$O, P$_2$O$_2$, HPOH, H$_2$POH. The network has been used to explain the faster recombination rate of H and OH in the presence of phosphine combustion products \citep{Twarowski96}. To make sure all species important under Jupiter's and Saturn's atmospheric conditions are included, we performed an equilibrium calculation using the NASA Chemical Equilibrium Application (CEA) \citep{GM94,MG96} for temperature and pressure conditions along the adiabats of Jupiter and Saturn. We find that H$_3$PO$_4$ is important but missing from this reaction network, so we added it and its associated reactions to the network. The thermodynamic data are primarily from \citet{BR05} and \citet{McBride93}. The thermodynamic data for P$_2$O and P$_2$O$_2$ are from \citet{Twarowski93}. The whole reaction network is available in the KIDA database.
\item{\it Si/O/H reaction network}
The Si/O/H reaction network is from \citet{Miller04}. The network consists of 69 species and 198 reactions. The silicon bearing species included in the network are:
Si, Si$_2$, Si$_3$, SiH, SiH$_2$,
cis-OSiH$_2$O, Si$_2$H$_2$, SiH$_4$, SiH$_3$, H$_3$SiSiH$_3$,
H$_3$SiSiH, H$_2$SiSiH$_2$, Si$_2$H$_5$, Si$_2$H$_3$, Si$_2$O$_2$,
Si$_3$O$_3$, Si$_4$O$_4$, Si$_5$O$_5$, Si$_6$O$_6$, Si$_7$O$_7$,
Si$_8$O$_8$, Si$_9$O$_9$, Si$_{10}$O$_{10}$, (SiH$_2$O)$_2$, SiO$_2$,
H$_3$SiOSiH$_3$, H$_3$SiOOH, H$_3$SiOO, SiOOH, H$_2$SiOH,
H$_3$SiO, HOSiO$_2$, SiO, Si(OH)$_2$, SiOH,
H$_2$Si(OH)O, H$_3$SiOH, HSiOH, HSiO, H$_2$SiO,
HSiO(OH), HSiO$_2$, HOSiO, HSiOOH, SiO$_2$(c),
SiO$_2$(l), SiO$_2$(g), H$_2$SiOOH, (HSiOOH)$_2$, Si$_2$O$_4$,
Si$_3$O$_6$, Si$_4$O$_8$, Si$_5$O$_{10}$, Si$_6$O$_{12}$, Si$_7$O$_{14}$,
Si$_8$O$_{16}$, Si$_9$O$_{18}$, Si$_{10}$O$_{20}$.
This network has been used to model the combustion of silane (SiH$_4$). The whole reaction network is available in the KIDA database.
\end{itemize}
For arsenic (As) and germanium (Ge) containing species, no reaction network is available in the literature. We use the quenching timescale approach to compute the abundances of GeH$_4$ and AsH$_3$ along the Jovian and Saturnian adiabats. This approach requires only a single rate-determining step rather than a whole reaction network. The method has been widely used in the literature as an alternative to diffusion-kinetic modeling \citep[e.g.,][]{PB77, FL94}, and can correctly predict the quench level as long as the appropriate rate-determining step is used and the recipe in \citet{Smith98} is followed to compute the mixing timescale. A detailed description of this approach can be found in \citet{Wang15}.
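The essence of the quenching timescale approach can be sketched as follows: the quench level is where the chemical timescale of the rate-determining step equals the mixing timescale $t_{\rm mix} = L^2/K_{\rm eddy}$, with $L$ an effective mixing length in the spirit of \citet{Smith98}. The Arrhenius parameters, co-reactant density, and length scale in the sketch below are purely illustrative assumptions, not the values used for GeH$_4$ or AsH$_3$.

```python
import math

def quench_temperature(k_of_T, n_coreactant, K_eddy, L_eff,
                       T_lo=300.0, T_hi=2000.0):
    """Find the temperature where the chemical timescale of the
    rate-determining step, t_chem = 1/(k(T)*n), equals the mixing
    timescale t_mix = L^2/K_eddy.  Bisection in T, assuming k(T) is
    monotonically increasing (Arrhenius form)."""
    t_mix = L_eff ** 2 / K_eddy
    f = lambda T: 1.0 / (k_of_T(T) * n_coreactant) - t_mix  # > 0 below quench
    lo, hi = T_lo, T_hi
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid   # chemistry still too slow: quench lies deeper (hotter)
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative Arrhenius rate coefficient (cm^3 molecule^-1 s^-1) and
# co-reactant number density (molecule cm^-3) -- assumed values only.
k = lambda T: 1e-11 * math.exp(-3.0e4 / T)
Tq = quench_temperature(k, n_coreactant=1e20, K_eddy=1e8, L_eff=1e7)
```

A larger $K_{\rm eddy}$ shortens $t_{\rm mix}$ and pushes the quench level deeper (to higher temperature), which is the behavior the method is designed to capture.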
For equilibrium computations, we use the NASA Chemical Equilibrium Application (CEA) as our reference \citep{GM94, MG96}. This code is in wide use in the aerodynamic and thermodynamic community. The code includes over 2000 species in the database.
The elemental abundances of Jupiter and Saturn used in this paper are summarized in Table \ref{tab: elements}. We use the solar composition table from \citet{Asplund09} as our reference. The elemental abundances are inferred from the observed abundances of their hydrogen compounds. For phosphorus, we assume the total phosphorus abundance is equal to the observed PH$_3$ abundance. For silicon and germanium, we assume they have enrichments similar to carbon, since their observed hydrogen compounds (SiH$_4$ and GeH$_4$) do not represent their total elemental abundances. For arsenic, we assume its total abundance is represented by the observed AsH$_3$ abundance.
\subsection{Validation of the Diffusion-Kinetic Code}
We first test our code without diffusion, using the C/N/O/H reaction network to compute the time evolution of the abundances of different species at constant temperature and pressure. The integration is done using \textit{Cantera}. A comparison between our results and those from the nominal model of \citet{Venot12} is shown in Fig. \ref{fig: 0_D}. The evolution is very similar for H and OH, but not exactly the same for C$_2$H$_6$ and NNH. The difference is due to the thermodynamic properties used. Indeed, when we switched to the thermodynamic data provided in \citet{Venot12}, we obtained exactly the same evolution of the abundance profiles for all four species. This prompted us to compare the thermodynamic data we used with those in \citet{Venot12thesis}. Our thermodynamic data are gathered from widely used compilations, for example \citet{McBride93} and \citet{BR05}. The species that are not available in the literature are estimated using the software THERM \citep{Ritter91}. \citet{Venot12} also gathered thermodynamic data in a similar manner. However, we do not know the source of the thermodynamic data for each species in \citet{Venot12}; therefore, comparisons for individual species are not possible. This is not expected to be a major source of uncertainty, since the uncertainties in the kinetic data are much larger than those in the thermodynamic data.
The test with both kinetics and diffusion is done by simulating the thermochemistry of Saturn's atmosphere using the C/N/O/H reaction network, and comparing against the results in Fig. 1 of \citet{Mousis14}. The reference result in \citet{Mousis14} for Saturn is computed using the same code as in \citet{Venot12}. The comparison, presented in Fig. \ref{fig: 1_D}, shows that the differences in the mixing ratios are within 10$\%$. Three sources of error may contribute to these differences: (1) the temperature-pressure profile; (2) the thermodynamic data; and (3) the elemental abundances, all of which are inputs to our code. These differences are small, and the comparison shows that our code correctly solves the diffusion-kinetic equation (\ref{eqn: TK}).
\section{Results}
In this section, we predict the abundances of various disequilibrium species in Jupiter's and Saturn's atmospheres at a few bars pressure. Above this level, the abundances are affected by photochemistry and enrichment from external sources, which are not included in our calculations. The dependence on the water abundance and the deep eddy diffusion coefficient is investigated.
We make a few definitions regarding the abundance of a species Z. Its concentration is denoted [Z], with units of molecule$\cdot$cm$^{-3}$. The mole fraction of Z is denoted $X_{\rm Z}$ = [Z]/$n$, where $n$ is the total number density of the atmosphere (molecule$\cdot$cm$^{-3}$). The mixing ratio is denoted $q_{\rm Z}$ = [Z]/[H$_2$]. For an element M, we define $E_{M}$ as the enrichment of M relative to solar, i.e., the ratio of [M]/[H] in the planet to that in the Sun, where [M] and [H] are the total number densities of M and H atoms, respectively, in whatever form. The elemental composition of the solar atmosphere is taken from \citet{Asplund09}. The elemental abundances of Jupiter and Saturn used in our simulations can be found in Table \ref{tab: elements}.
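These definitions amount to simple ratios; the short sketch below formalizes them, and the example values in the comments are hypothetical.

```python
def mole_fraction(n_Z, n_total):
    """X_Z = [Z]/n, with number densities in molecule cm^-3."""
    return n_Z / n_total

def mixing_ratio(n_Z, n_H2):
    """q_Z = [Z]/[H2]."""
    return n_Z / n_H2

def enrichment(m_over_h_planet, m_over_h_solar):
    """E_M: the planetary [M]/[H] ratio relative to the solar value."""
    return m_over_h_planet / m_over_h_solar

# Example (hypothetical numbers): a species with [Z] = 1e10 in an
# atmosphere with [H2] = 5e12 has q_Z = 2e-3.
```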
\subsection{Simulation results using the C/N/O/H reaction network}
The C/N/O/H reaction network used is described in section 3. We considered the two aforementioned reaction networks, namely network A and network B. The vertical profiles of the mixing ratios of the species are computed, and the ($E_{\rm H_2O}$, $K_{\rm eddy}$) parameter space is investigated. As an illustration, the mixing ratios along Jupiter's adiabat for parameters $E_{\rm H_2O}$ = 10 and $K_{\rm eddy}$=1$\times$10$^8$ cm$^2$s$^{-1}$, using network A, are presented in Fig. \ref{fig: carbon_jupiter}. Our calculations show that N$_2$ is the major nitrogen-bearing species after NH$_3$, with a mixing ratio of about 1 ppm. C$_2$H$_6$, CO and CO$_2$ are the major carbon-bearing species after CH$_4$, and CO and CO$_2$ are the major oxygen-bearing species after H$_2$O. The mixing ratios of CO and C$_2$H$_6$ are at the 1 ppb level, while the mixing ratio of CO$_2$ is about 0.1 ppb. Mixing ratios of other species such as CH$_3$NH$_2$ and HCN are below 1$\times$10$^{-12}$.
We focus on three species, CO, C$_2$H$_6$ and CO$_2$, and investigate their dependence on the parameters $E_{\rm H_2O}$ and $K_{\rm eddy}$. For each combination of ($E_{\rm H_2O}$, $K_{\rm eddy}$), we run the simulation to steady state. The mixing ratios of C$_2$H$_6$, CO, and CO$_2$ are extracted from each simulation, and the results are summarized in Fig. \ref{fig: carbon_jupiter}. As shown in the figure, the abundance of C$_2$H$_6$ is sensitive to the vertical eddy diffusion coefficient but is not affected by the deep water abundance. Therefore, C$_2$H$_6$ is a good tracer for the eddy diffusion coefficient alone. In contrast, the abundance of CO is sensitive to both the eddy diffusion coefficient and the deep water abundance. The constraint on the deep water abundance placed by the CO abundance is thus limited by the information on the eddy diffusion coefficient. If both CO and C$_2$H$_6$ can be measured, we can use the mixing ratio of C$_2$H$_6$ to put a tight constraint on $K_{\rm eddy}$, and then determine how much water is needed to match the observed CO abundance with the predicted one.
CO$_2$ is another tracer for the deep water abundance and the eddy diffusion coefficient. Although it is very sensitive to the water abundance ($\propto E_{\rm H_2O}^2$), its mixing ratio is an order of magnitude less than that of CO, making it more difficult to measure. In Fig. \ref{fig: carbon_jupiter}, we also show the results computed using network B. The predicted CO and CO$_2$ abundances using network B are very different from those using network A. Therefore, significant uncertainties remain in the CO kinetics, which require laboratory measurements for a definitive resolution.
The tropospheric CO abundance on Jupiter was measured at the Northern Equatorial Belt ($\sim 9^{\circ}$) by \citet{Bezard02}, with a mole fraction of $(1.0\pm0.2)\times10^{-9}$. Combined with the predicted eddy diffusion coefficient of $\sim 1\times10^8$ cm$^2$s$^{-1}$ \citep{Wang15} near the equatorial regions, the deep water abundance is constrained to be $\sim 7$ times solar with network A and $\sim 0.6$ times solar with network B.
The simulation results for Saturn are presented in Fig. \ref{fig: carbon_saturn}. As for Jupiter, we find that C$_2$H$_6$ is a good tracer for the eddy diffusion coefficient, while both CO and CO$_2$ are good tracers for the deep water abundance. Dual constraints from CO and C$_2$H$_6$ can break the degeneracy between a high (low) eddy diffusion coefficient and a high (low) deep water abundance.
The tropospheric CO abundance on Saturn has not been measured so far; an upper limit of 1 ppb on the mixing ratio was placed by \citet{Cavalie09}. Combined with the predicted eddy diffusion coefficient of $\sim 1\times10^8$ cm$^2$s$^{-1}$ \citep{Wang15} near the equator, the deep water abundance is constrained to be $\lesssim 60$ times solar with network A and $\lesssim 10$ times solar with network B.
\subsection{Simulation results using the H/P/O reaction network}
We use the NASA CEA chemical equilibrium code to compute the equilibrium abundances of phosphorus-containing species along Saturn's adiabat; the results are shown in Fig. \ref{fig: P_eq}. The elemental abundances used in the calculations are summarized in Table \ref{tab: elements}. Our calculations show that H$_3$PO$_4$ is the major phosphorus-bearing species below 700 K, rather than P$_4$O$_6$ as reported in the literature \citep[e.g.,][]{FL94, VF05}. We find the difference is due to the different thermodynamic data used for P$_4$O$_6$. \citet{FL94} and others use the standard enthalpy of formation of P$_4$O$_6$ ($\Delta H_f^{0} [\rm P_4O_6]$) from the JANAF table \citep{Chase85}, which is based on the experiments by \citet{KD52}. In comparison, the CEA code uses the $\Delta H_f^{0} [\rm P_4O_6]$ from Gurvich's table \citep{Gurvich90}, which is based on the experiments by \citet{HM63}. \citet{FL94} favored the data from the JANAF table because they saw ``no compelling reasons, such as a problem with experimental methods or data reduction, to reject the work of \citet{KD52} in favor of the work of \citet{HM63}". They pointed out that the discrepancy needs to be resolved by a new experimental determination of $\Delta H_f^{0} [\rm P_4O_6]$. However, we favor the data from Gurvich's table for the following reasons. \citet{HM63} pointed out that the sample examined in the experiment of \citet{KD52} was a mixture of P$_4$O$_6$ and P$_4$O$_{10}$ rather than pure P$_4$O$_6$. The identification of P$_4$O$_6$ is therefore not necessarily definitive, and the contamination by P$_4$O$_{10}$ may have led to the more negative values of the enthalpy of formation for P$_4$O$_6$. Later studies by \citet{Muenow70} and \citet{Smoes73} also supported the measurement by \citet{HM63}. Quantum chemical calculations by \citet{Morgon12} give values of the enthalpy of formation of P$_4$O$_6$ that are closer to those of \citet{HM63}.
We did not find any other direct or indirect measurement that supports the values of \citet{KD52}. The Burcat database \citep{BR05} adopted the values of \citet{HM63} and pointed out that the values used in JANAF are erroneous. For these reasons, we choose to take the data from Gurvich's table. Our computations show that PH$_3$ is converted to H$_3$PO$_4$ at about 700 K in Saturn's atmosphere, and at about 650 K in Jupiter's atmosphere.
The main chemical pathway for the PH$_3$/H$_3$PO$_4$ conversion is identified from the H/P/O reaction network by comparing the rates of all the reactions in the network. This method is very robust, since no information is needed other than the kinetic data. The details of our analysis are presented in Appendix B. We find that the main chemical pathway consists of the following reactions:
\begin{subequations}\label{eqn: P_pathway}
\begin{align}
& \textrm{PH}_{3} \leftrightarrow \textrm{PH}_2 + \textrm{H} \\
& \textrm{PH}_{2} + \textrm{H}_{2}\textrm{O} \leftrightarrow \textrm{H}_{2}\textrm{POH} + \textrm{H} \\
& \textrm{H}_{2}\textrm{POH} + \textrm{PH}_{2} \leftrightarrow \textrm{H}\textrm{POH} + \textrm{PH}_{3} \\
& \textrm{H}\textrm{POH} \leftrightarrow \textrm{H}\textrm{PO} + \textrm{H} \\
& \textrm{H}\textrm{PO} \leftrightarrow \textrm{PO} + \textrm{H} \\
& \textrm{PO} + \textrm{H}_{2}\textrm{O} \leftrightarrow \textrm{HOPO} + \textrm{H} \\
& \textrm{HOPO} + \textrm{H} \leftrightarrow \textrm{PO}_{2} + \textrm{H}_{2} \\
& \textrm{PO}_{2} + \textrm{H}_{2}\textrm{O} \leftrightarrow \textrm{HOPO}_{2} + \textrm{H} \\
& \textrm{HOPO}_{2} + \textrm{H}_{2}\textrm{O} \leftrightarrow \textrm{H}_{3}\textrm{PO}_{4} \\
\cline{1-2}
& \textrm{PH}_{3} + 4\, \textrm{H}_{2}\textrm{O} \leftrightarrow \textrm{H}_{3}\textrm{PO}_{4} + 4\textrm{H}_{2} \tag{\ref{eqn: P_pathway}, net}
\end{align}
\end{subequations}
The rate determining step for the pathway is reaction (\ref{eqn: P_pathway}h).
The rate coefficient for this reaction is unknown; however, the rate coefficient of the backward reaction was estimated in \citet{Twarowski95}: for the reverse of reaction (\ref{eqn: P_pathway}h), $k_{\ref{eqn: P_pathway}h,r} = 5.24\times10^{-11}e^{-6014/T}$ cm$^3$molecule$^{-1}$s$^{-1}$. Using detailed balance, the forward rate coefficient is $k_{\ref{eqn: P_pathway}h} = k_{\ref{eqn: P_pathway}h,r} K_{\ref{eqn: P_pathway}h, eq}$, where $K_{\ref{eqn: P_pathway}h, eq}$ is the equilibrium constant of reaction (\ref{eqn: P_pathway}h). The forward rate coefficient is therefore estimated to be $k_{\ref{eqn: P_pathway}h} = 2.35\times10^{-12} e^{-1.067\times10^4/T}$ cm$^3$molecule$^{-1}$s$^{-1}$.
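The two Arrhenius fits quoted above can be evaluated directly; the sketch below encodes them together with the equilibrium constant they imply through detailed balance. Its accuracy is only that of the fits themselves.

```python
import math

def k_reverse(T):
    """Backward rate coefficient of the rate-determining step, from the
    Twarowski (1995) estimate, in cm^3 molecule^-1 s^-1."""
    return 5.24e-11 * math.exp(-6014.0 / T)

def k_forward(T):
    """Forward rate coefficient quoted in the text, obtained via detailed
    balance k_f = k_r * K_eq, in cm^3 molecule^-1 s^-1."""
    return 2.35e-12 * math.exp(-1.067e4 / T)

def K_eq_implied(T):
    """Equilibrium constant implied by the two Arrhenius fits."""
    return k_forward(T) / k_reverse(T)
```

At, e.g., $T = 1000$ K, the implied equilibrium constant is a few times $10^{-4}$, so the forward direction is much slower than the backward one at this temperature.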
In Figures \ref{fig: PH3_J} and \ref{fig: PH3_S}, we plot the predicted mixing ratios of PH$_3$ as a function of $K_{\rm eddy}$, computed using the rate-determining step. Our calculations show that the mixing ratio of PH$_3$ is not sensitive to the value of $K_{\rm eddy}$ unless $K_{\rm eddy}$ is less than 1$\times$10$^5$ cm$^2$s$^{-1}$. We also compared the predicted PH$_3$ for different values of $E_{\rm H_2O}$; the mixing ratio of PH$_3$ is likewise insensitive to the value of $E_{\rm H_2O}$ unless $E_{\rm H_2O}$ is greater than 30. This insensitivity arises because the quench level of PH$_3$ lies deep in the regime where it is the dominant phosphorus species. For example, when $K_{\rm eddy}$ = 1$\times$10$^7$ cm$^2$s$^{-1}$ and $E_{\rm H_2O} = 10$, the quench level is at $T \approx 900$ K, while our equilibrium calculations in Fig. \ref{fig: P_eq} show that PH$_3$ remains stable and dominant down to about 700 K. Almost all of the phosphorus is therefore sequestered in PH$_3$ on Jupiter and Saturn.
\subsection{Simulation results for SiH$_4$}
Figure \ref{fig: Si_eq} shows the predicted equilibrium abundances of Si-bearing species along Saturn's adiabat, computed using the NASA CEA code. The elemental abundances used in the calculations are summarized in Table \ref{tab: elements}. MgSiO$_3$(l,s) and Mg$_2$SiO$_4$(s) condensates are the primary carriers of Si below 2000 K.
The mixing ratio of SiH$_4$ is 1$\times$10$^{-9}$ at about 1500 K, and decreases to 1$\times$10$^{-18}$ at about 1000 K. The equilibrium results are in agreement with the
results from \citet{FL94}. Observationally, the upper limit for $q_{\rm SiH_4}$ on Jupiter is 2.5$\times$10$^{-9}$ \citep{Treffers78},
and the upper limit for $q_{\rm SiH_4}$ on Saturn is 2.0$\times$10$^{-10}$ \citep{NL91}. This indicates that almost all of the silicon on Jupiter
and Saturn is removed by rock formation and subsequent condensation. The total oxygen in Jupiter should be at least the oxygen locked in the rocks \citep[e.g.,][]{VF05}. Since MgSiO$_3$(s) is the major condensate for silicon, each silicon atom is combined with three oxygen atoms. Assuming the total silicon enrichment relative to solar is similar to the enrichment for carbon, Si/H is about 1.43$\times$10$^{-4}$ on Jupiter and 3.24$\times$10$^{-4}$
on Saturn. The oxygen locked with silicon therefore has a mixing ratio of O/H $>$ 4.4$\times$10$^{-4}$ for Jupiter and O/H $>$ 1.0$\times$10$^{-3}$ for Saturn, corresponding to about 0.9 and 2.0 times solar, respectively. Adding this part of the oxygen to the oxygen sequestered in water would give the total oxygen in Jupiter and Saturn.
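The arithmetic above can be made explicit; the solar O/H value used below is an approximate number based on \citet{Asplund09}, and the Si/H values are those quoted in the text.

```python
# Oxygen locked in MgSiO3 rock: three O atoms per Si atom.
SOLAR_O_H = 4.9e-4            # approximate solar O/H (Asplund et al. 2009)

def o_locked_in_rock(si_h):
    """Lower limit on O/H implied by silicate (MgSiO3) condensation."""
    return 3.0 * si_h

o_h_jup = o_locked_in_rock(1.43e-4)     # ~4.3e-4 (quoted as 4.4e-4 in the text)
o_h_sat = o_locked_in_rock(3.24e-4)     # ~1.0e-3
times_solar_jup = o_h_jup / SOLAR_O_H   # ~0.9 times solar
times_solar_sat = o_h_sat / SOLAR_O_H   # ~2.0 times solar
```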
Thanks to the detailed kinetic data of \citet{Miller04}, we can extract the main chemical pathway for SiH$_4$ destruction from the reaction network. The main chemical pathway is identified by comparing the rates of all the reactions in the network; the analysis is detailed in Appendix C. The main chemical pathway of SiH$_4$ destruction is given by the following reactions:
\begin{subequations}\label{eqn: Si_pathway}
\begin{align}
& \textrm{SiH}_{4} + \textrm{M} \leftrightarrow \textrm{SiH}_{2} + \textrm{H}_{2} + \textrm{M}, \\
& \textrm{SiH}_{2} + \textrm{H}_{2}\textrm{O} \leftrightarrow \textrm{HSiOH} + \textrm{H}_2, \\
& \textrm{HSiOH} \leftrightarrow \textrm{SiO} + \textrm{H}_{2}, \\
& \textrm{SiO} + \textrm{H}_{2}\textrm{O} \leftrightarrow \textrm{Si(OH)}_2, \\
& \textrm{Si(OH)}_2 + \textrm{H} \leftrightarrow \textrm{HOSiO} + \textrm{H}_2, \\
& \textrm{Si(OH)}_2 \leftrightarrow \textrm{HSiO(OH)}, \\
& \textrm{HSiO(OH)} + \textrm{H} \leftrightarrow \textrm{HOSiO} + \textrm{H}_2, \\
& \textrm{HOSiO} \leftrightarrow \textrm{SiO}_2(cr) + \textrm{H}, \\
& \textrm{SiO}_2(cr) + \textrm{Mg(OH)}_2 \leftrightarrow \textrm{MgSiO}_{3}(s) + \textrm{H}_{2}\textrm{O}, \\
\cline{1-2}
& \textrm{SiH}_{4} + \textrm{H}_{2}\textrm{O} + \textrm{Mg(OH)}_2 \leftrightarrow \textrm{MgSiO}_{3}(s) + 4\textrm{H}_2 \tag{\ref{eqn: Si_pathway}, net}.
\end{align}
\end{subequations}
Steps (\ref{eqn: Si_pathway}a) to (\ref{eqn: Si_pathway}h) are directly identified from the network, but the last step (\ref{eqn: Si_pathway}i) is added because SiO$_2$(cr) is not a stable product in the atmospheres of Jupiter and Saturn, according to our equilibrium calculations and those of \citet{FL94}. A reaction incorporating Si into MgSiO$_3$(s) completes the chemical pathway.
Compared with the chemical path proposed in \citet{FL94}, the formation of SiH$_2$ and MgSiO$_3$(s) is the same, but the path from SiH$_2$ to SiO$_2$(cr) is different. The difference arises from the methodologies used to identify the main chemical pathways. \citet{FL94} constructed their chemical pathway by identifying important intermediates observed in experiments and then devising a pathway to connect the reactants, intermediates, and products. Other important intermediates may be missing from such a pathway, and the way of connecting them is not unique. That method is used when a reaction network or sufficient kinetic data for a detailed analysis is not available. Since we have a reaction network, we can instead simulate the chemical evolution using the network and identify the fastest route from SiH$_4$ to SiO$_2$(cr). Our analysis is detailed in Appendix C.
Within this chemical pathway, we find that reactions (\ref{eqn: Si_pathway}b), (\ref{eqn: Si_pathway}e), and (\ref{eqn: Si_pathway}g) all serve as bottlenecks; there is no unique rate-limiting step for SiH$_4$ destruction.
This analysis helps us simplify the original H/Si/O reaction network from \citet{Miller04}. The original network was designed to simulate SiH$_4$ burning in molecular oxygen, so its composition is silicon- and oxygen-rich. The atmospheres of Jupiter and Saturn, however, are hydrogen-rich. Some species in the reaction network are therefore expected to be unimportant in the atmospheres of Jupiter and Saturn, and can be removed from the network to speed up the time integration. The species retained in the simplified reaction network are H$_2$, He, H$_2$O, OH, H, SiH$_4$, SiH$_3$, SiH$_2$, SiO, Si(OH)$_2$, HSiOH, HSiO(OH), HOSiO, and SiO$_2$(cr). These species are the major intermediates for SiH$_4$ destruction, as shown in Fig. \ref{fig: path_Si} in Appendix C. This simplified reaction network is used to
predict the abundance of SiH$_4$ along Saturn's adiabat for different values of $K_{\rm eddy}$ and $E_{\rm O}$. The results are presented in Fig. \ref{fig: SiH4}. For $E_{\rm O} = 2$ and $K_{\rm eddy}$ = 1$\times$10$^9$ cm$^2$ s$^{-1}$, the predicted mixing ratio of SiH$_4$ at a few bars level is about $1\times10^{-17}$, while for $E_{\rm O} = 30$ and $K_{\rm eddy}$ = 1$\times$10$^7$ cm$^2$ s$^{-1}$, it falls to 1$\times$10$^{-26}$. In both extreme cases considered here, the SiH$_4$ abundance is far too small to be detected; we therefore conclude that SiH$_4$ is not expected in Saturn's troposphere. The same analysis applied to Jupiter gives a similar result: SiH$_4$ is not expected in Jupiter's troposphere either. The SiH$_4$ kinetics calculation of \citet{FL94} also yielded a negligibly small SiH$_4$ mixing ratio; we confirm this result using a new Si/H/O reaction network.
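The sensitivity to $K_{\rm eddy}$ follows from the standard quench-level argument: a species departs from equilibrium where its chemical lifetime grows longer than the vertical mixing timescale $\tau_{\rm mix} \approx H^2/K_{\rm eddy}$. A minimal sketch of this comparison is given below; the scale height, collider density, and Arrhenius parameters are purely illustrative placeholders, not values fitted in this work.

```python
import math

def tau_mix(H_cm, K_eddy):
    """Vertical mixing timescale (s) over one scale height H (cm)."""
    return H_cm**2 / K_eddy

def tau_chem(T, A=2.15e10, n=0.7, Ea=4956.0, n_collider=1.0e19):
    """Illustrative chemical lifetime (s) against a bimolecular loss with
    rate k = A T^n exp(-Ea/T) in cm^3 mol^-1 s^-1, converted to molecule
    units with Avogadro's number; n_collider is the co-reactant density."""
    NA = 6.022e23
    k = A * T**n * math.exp(-Ea / T) / NA  # cm^3 molecule^-1 s^-1
    return 1.0 / (k * n_collider)

# The quench level sits roughly where tau_chem ~ tau_mix: deeper (hotter)
# layers stay in equilibrium, while above the quench level the deep
# abundance is frozen in.  A larger K_eddy shortens tau_mix and moves the
# quench point to deeper, hotter levels.
```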
\subsection{Simulation results for GeH$_4$}
In Fig. \ref{fig: Ge_eq}, we show the equilibrium abundances of germanium-bearing species along Saturn's adiabat, computed using the NASA CEA code. The elemental abundances used in the calculations are summarized in Table \ref{tab: elements}.
The figures show that GeH$_4$ is not the dominant germanium-bearing species; it is converted to the more abundant GeS at lower temperatures. Condensation happens near 700 K, and the major condensates are Ge(cr), GeS(cr), and GeO$_2$(cr). Kinetic data for GeH$_4$ are limited; we therefore propose a chemical pathway for the conversion between GeH$_4$ and GeS by analogy with SiH$_4$ and SiO, since Ge lies directly below Si, and S directly below O, in the periodic table. The main chemical pathway for the conversion from SiH$_4$ to SiO is given by reactions \ref{eqn: Si_pathway}a,
\ref{eqn: Si_pathway}b, and \ref{eqn: Si_pathway}c, with \ref{eqn: Si_pathway}b the rate-determining reaction. By analogy, we propose the following main chemical pathway for the conversion from GeH$_4$ to GeS:
\begin{subequations}\label{eqn: Ge_pathway}
\begin{align}
& \textrm{GeH}_{4} + \textrm{M} \leftrightarrow \textrm{GeH}_{2} + \textrm{H}_{2} + \textrm{M}, \\
& \textrm{GeH}_{2} + \textrm{H}_{2}\textrm{S} \leftrightarrow \textrm{HSGeH} + \textrm{H}_2, \\
& \textrm{HSGeH} \leftrightarrow \textrm{GeS} + \textrm{H}_{2},
\end{align}
\end{subequations}
and the rate determining reaction is \ref{eqn: Ge_pathway}b. This chemical pathway is very similar to the one proposed in \citet{FL94}; the only difference is the step from GeH$_2$ to HSGeH. In \citet{FL94}, two steps are used: first GeH$_2$ + H$_2$S $\leftrightarrow$ H$_2$Ge=S + H$_2$, and then H$_2$Ge=S $\leftrightarrow$ HGe=SH. In the H/Si/O reaction network, the rate coefficient for SiH$_2$ + H$_2$O $\leftrightarrow$ H$_2$Si=O + H$_2$ is $3.84\times10^{10} T^{-0.6} e^{-4905/T}$ cm$^3$mol$^{-1}$s$^{-1}$, while that for SiH$_2$ + H$_2$O $\leftrightarrow$ HSi=OH + H$_2$ is 2.15$\times10^{10} T^{0.7} e^{-4956/T}$ cm$^3$mol$^{-1}$s$^{-1}$ \citep{ZT95}. At 800 K, the first reaction is slower than the second by about three orders of magnitude. By analogy, we therefore favor the single step from GeH$_2$ to HGe=SH proposed here over the two-step route of \citet{FL94}.
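This 800 K comparison can be checked directly from the two rate expressions quoted above; the short sketch below (illustrative only, not part of our kinetics code) evaluates both channels.

```python
import math

# Rate coefficients from Zachariah & Tsang (1995) as quoted in the text,
# in cm^3 mol^-1 s^-1.
def k_H2SiO(T):   # SiH2 + H2O <-> H2Si=O + H2
    return 3.84e10 * T**-0.6 * math.exp(-4905.0 / T)

def k_HSiOH(T):   # SiH2 + H2O <-> HSi=OH + H2
    return 2.15e10 * T**0.7 * math.exp(-4956.0 / T)

ratio = k_HSiOH(800.0) / k_H2SiO(800.0)
print(f"{ratio:.0f}")  # ~3e3: the HSi=OH channel dominates at 800 K
```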
The rate coefficient of reaction \ref{eqn: Ge_pathway}b is not available in the literature. Again by analogy with the reaction SiH$_2$ + H$_2$O, we use $k_{\ref{eqn: Ge_pathway}b} \approx 2.15\times10^{10}T^{0.7}e^{-4956/T}$ cm$^3$mol$^{-1}$s$^{-1}$ \citep{ZT95}. The predicted mole fraction of GeH$_4$ as a function of the vertical eddy diffusion coefficient $K_{\rm eddy}$ is shown in Figs. \ref{fig: GeH4_J} and \ref{fig: GeH4_S} for Jupiter and Saturn, respectively. For Jupiter, our predicted mole fraction is consistent with the mixing ratio of $7^{+4}_{-2} \times 10^{-10}$ observed by \citet{Bjoraker86}, for the values of $K_{\rm eddy}$ predicted by \citet{Wang15}. For Saturn, our predicted mole fraction is also consistent with the observed value of
(4$\pm$2)$\times$10$^{-10}$ \citep{Noll88b}. This consistency indicates that the rate coefficient of GeH$_2$ + H$_2$S is close to our estimate. Within a factor of 5 uncertainty on either side of the rate coefficient (a total factor of 25), the predicted mixing ratios of GeH$_4$ remain in agreement with the observed values; pushing the rate coefficient beyond this range yields disagreement between prediction and observation. Therefore, the rate coefficient of reaction \ref{eqn: Ge_pathway}b should be within a factor of 5 of our estimate. Once an accurate measurement of the rate coefficient is available, the GeH$_4$ abundance can be used to constrain the value of $K_{\rm eddy}$. The sensitivity of the GeH$_4$ abundance to $K_{\rm eddy}$ makes GeH$_4$ an effective probe of $K_{\rm eddy}$, but the kinetics must first be better constrained by laboratory measurements.
\subsection{Simulation results for AsH$_3$}
Equilibrium calculations show that AsH$_3$ is the dominant arsenic-bearing species until it is converted to As$_4$ or As$_2$S$_2$ at about 400 K \citep{FL94}.
Due to the lack of kinetic data, it is quite uncertain where the quench level is. \citet{FL94} proposed three possible chemical pathways for the conversion of AsH$_3$ to As$_4$ or As$_2$S$_2$. The first pathway starts with the combination of AsH and AsH$_3$ to form As$_2$H$_2$; As$_2$H$_2$ then decomposes into As$_2$, which combines to form the As$_4$ condensates. A similar mechanism starts with two AsH$_2$ forming As$_2$H$_2$, the rest being the same. The third starts with the combination of AsH and HS to form AsS, and two AsS combine to form the As$_2$S$_2$ condensates. The first and third chemical pathways were investigated by \citet{FL94}; the second was not, because the authors did not have thermodynamic data for AsH$_2$. However, the second pathway is more likely than the first, since AsH$_2$ is expected to be more abundant than AsH. Here we study the second pathway, which is described by the following reactions:
\begin{subequations}\label{eqn: As_pathway}
\begin{align}
& \textrm{AsH}_{3} + \textrm{H} \leftrightarrow \textrm{AsH}_{2} + \textrm{H}_{2}, \\
& \textrm{AsH}_{2} + \textrm{AsH}_{2} \leftrightarrow \textrm{As}_2\textrm{H}_2 + \textrm{H}_2, \\
& \textrm{As}_2\textrm{H}_2 \leftrightarrow \textrm{As}_2 + \textrm{H}_{2}, \\
& \textrm{As}_2 + \textrm{As}_2 \leftrightarrow \textrm{As}_4, \\
& \textrm{As}_4 \leftrightarrow \textrm{As}_4(s),
\end{align}
\end{subequations}
where the rate determining step is taken to be reaction \ref{eqn: As_pathway}b. The thermodynamic data for AsH$_2$ are from \citet{TP86}. The rate coefficient of reaction \ref{eqn: As_pathway}b is taken as $4.63\times10^{-11} T^{0.04} e^{-16.8/T}$ cm$^3$ molecule$^{-1}$ s$^{-1}$, by analogy with the rate coefficient of SiH$_2$ + SiH$_2$ \citep{Dollet07}. The analogy between As and Si is probably not good, but we are forced to adopt it by the lack of kinetic data for elements in the same column. We consider an overall two-order-of-magnitude uncertainty in the rate coefficient. The kinetic results are shown in Figs. \ref{fig: AsH3_J} and \ref{fig: AsH3_S}. From our calculations, the predicted AsH$_3$ abundance is
nearly equal to the total arsenic abundance over a large range of $K_{\rm eddy}$. With all three chemical pathways now studied, we conclude that the AsH$_3$ abundance is not sensitive to the value of $K_{\rm eddy}$, and that AsH$_3$/H $\approx$ As/H in both Jupiter and Saturn.
AsH$_3$ is observed on Jupiter with a mixing ratio of $(2.2\pm1.1)\times10^{-10}$ \citep{Noll90}, corresponding to 0.3--0.8 times the solar abundance, and on Saturn with a mixing ratio of $(3\pm1)\times10^{-9}$ \citep{Bezard89, Noll89, NL91}, corresponding to 5--10 times the solar abundance. The higher enrichment of As on Saturn than on Jupiter is consistent with other elements, such as C and P. However, the subsolar As/H ratio on Jupiter is puzzling, because other rock-forming elements such as phosphorus and germanium are all enriched relative to solar \citep[e.g.,][]{FL94}. There are three possibilities. The first is that the observations have underestimated the AsH$_3$ abundance in Jupiter, and the true abundance could be higher than solar. The second is that the observations are correct but the kinetics are not; this is hard to understand, since the same kinetics work well for Saturn. The third is that both the observations and the kinetics are correct and the subsolar abundance of As is real, in which case a mechanism is needed to deplete arsenic on Jupiter. The JIRAM instrument on board the Juno spacecraft has the capability of measuring the AsH$_3$ abundance at a few bars in Jupiter's atmosphere \citep{Grassi10}. To resolve this issue, however, experimental measurements on AsH$_3$ are necessary to better determine the arsenic kinetics.
\section{Discussion}
In Section 4, we presented our modeling results for the disequilibrium species, using updated thermodynamic and kinetic data for Jupiter and Saturn, and explored the dependence on the two free parameters of our model: the eddy diffusion coefficient and the deep water abundance. We find that CO and CO$_2$ are sensitive to both the eddy diffusion coefficient and the deep water abundance, while C$_2$H$_6$ and GeH$_4$ are sensitive only to the eddy diffusion coefficient. In this section, we discuss how Juno and a Saturn probe can improve our understanding of $K_{\rm eddy}$ and $E_{\rm H_2O}$ through measurements of disequilibrium species. We also discuss the uncertainties in the kinetic networks and suggest reactions that should be further studied.
\subsection{Further constraints by Juno}
Juno is about to arrive at Jupiter in July 2016, and its microwave radiometer is expected to map the water abundance down to $\sim$100 bars. The JIRAM instrument will measure the abundances of H$_2$O, NH$_3$, PH$_3$, CO, GeH$_4$, and AsH$_3$ at a few bars level \citep{Grassi10}. Using the measured deep water abundance and the tropospheric CO abundance measured by \citet{Bezard02}, the eddy diffusion coefficient can be constrained, as shown in Fig. \ref{fig: carbon_jupiter} of this paper and Fig. 5 of \citet{Wang15}. Another constraint on $K_{\rm eddy}$ can be placed using the measured GeH$_4$ abundance, as shown in Fig. \ref{fig: GeH4_J}. However, a quantified reaction network, or at least a determination of the rate-limiting reaction, is required to reduce the uncertainties in the GeH$_4$ kinetics.
Both \citet{FG78} and \citet{Wang15} predicted latitudinal variations in the deep eddy diffusion coefficient due to rotational effects on the convection. Such variations should result in latitudinal variations of the abundances of CO, C$_2$H$_6$, CO$_2$, and GeH$_4$ at a few bars level. Multi-latitude measurements of CO and GeH$_4$ by JIRAM would test this prediction. We predict no latitudinal variation in the PH$_3$ and AsH$_3$ abundances, since they are not sensitive to the eddy diffusion coefficient. The horizontal (latitudinal) profile of CO on Jupiter is presented in Fig. 9 of \citet{Wang15}: the mole fraction of CO is predicted to be about $1\times10^{-9}$ near the equator, decreasing to about $4\times10^{-10}$ near the pole. Fig. 10 of \citet{Wang15} shows latitudinal variations of PH$_3$ on Jupiter because the old phosphorus chemical model was used in that paper; with the new chemical model for phosphorus presented here, we predict no latitudinal variations for PH$_3$ at a few bars level. The predicted horizontal profile of GeH$_4$ is shown in Fig. \ref{fig: GeH4_J_horizontal}, using the horizontal profile of the eddy diffusion coefficient from \citet{Wang15}. The mole fraction of GeH$_4$ is predicted to be about 7$\times$10$^{-10}$ near the equator, slowly decreasing to about 3$\times$10$^{-10}$ near the pole.
\subsection{Application to a Saturn entry probe}
A Saturn entry probe with a mass spectrometer on board would be able to make in situ measurements of the composition of Saturn, including the abundances of various disequilibrium species at a few bars level. The mass spectrometer is expected to have much higher resolution than the one on the Galileo entry probe, and can therefore make more precise and sensitive measurements. The probe may not be able to descend below the water cloud deck, and therefore cannot directly determine the deep (global) water abundance. However, as shown in Fig. \ref{fig: carbon_saturn}, if both C$_2$H$_6$ and CO can be measured, useful constraints can be placed on both the eddy diffusion coefficient and the deep water abundance. The mixing ratio of C$_2$H$_6$ on Saturn is predicted to be about $1\times10^{-9}$, and that of tropospheric CO below $1\times10^{-9}$; any instrument must therefore be sensitive enough to measure sub-ppb mixing ratios. In addition, since CO and N$_2$ have nearly identical molecular weights, the payload must resolve the ppb-level CO from the ppm-level N$_2$.
In \citet{Wang15}, we predicted a higher eddy diffusion coefficient near the equator, decreasing toward higher latitudes. We therefore expect CO and C$_2$H$_6$ to be more abundant near the equator, and the probe entry site should preferentially be near the equator to maximize the probability of detecting CO and C$_2$H$_6$.
The mixing ratio of GeH$_4$ is predicted to be a few times 10$^{-10}$, and is a sensitive function of the eddy diffusion coefficient. The measurement of GeH$_4$ can add another constraint on the eddy diffusion coefficient.
The Juno mission measurements provide a potentially important synergy with the Saturn Probe measurements. By measuring both the deep water abundance and disequilibrium species, it will be possible to determine both the deep oxygen abundance and the magnitude of vertical eddy mixing. The latter determination is especially robust if the disequilibrium species are measured as a function of latitude \citep{Wang15}. Having a determination of the Jovian value of the eddy mixing will provide a useful constraint on that for Saturn, which will help lift the degeneracy between deep water abundance and vertical mixing from Saturn Probe measurements. Alternatively, if both CO and C$_2$H$_6$ can be measured by Saturn Probe, then an independent determination of the eddy mixing from those measurements will allow comparison of the vertical dynamics from the hundreds of bars pressure level upward on the two planets. All of this depends on the ability to resolve the uncertainties in the kinetics.
\subsection{Uncertainties in the kinetics}
The uncertainties in the kinetics affect the constraints on both the eddy diffusion coefficient and the deep water abundance. In the planetary science community,
various reaction networks have been developed for modeling many kinds of atmospheres, such as those of Jupiter \citep{Visscher10}, hot Jupiters \citep[e.g.,][]{Moses11, Venot12, MK14}, terrestrial exoplanets \citep[e.g.,][]{HS14}, and brown dwarfs \citep{ZM14}. These networks are successful in predicting the presence of the major species under a wide range of temperature and pressure conditions. However, the predicted mixing ratios are subject to large errors because of large uncertainties in individual rate coefficients. As an example, the CO/CH$_4$ chemistry has been studied for decades \citep[e.g.,][]{PB77, Yung88, VM11}, yet some uncertainty remains because several rate coefficients are still under debate \citep[e.g.][]{Moses14}. The accuracy of most kinetic networks has yet to be tested by experiments, with the exception of that of \citet{Venot12}, which is derived from a network used in the combustion industry and has been validated against various combustion experiments. Is this reaction network then accurate for modeling the atmospheres of giant planets? One concern is that combustion experiments are usually conducted in carbon- and oxygen-rich environments, whereas the atmospheres of giant planets are extremely hydrogen-rich (Moses, personal communication, 2015). We investigated several other reaction networks developed in the combustion community: the ``Aramco Mech v1.3" from \citet{Metcalfe13}; the ``San Diego Mechanism version 20141004" (\underline{http://web.eng.ucsd.edu/mae/groups/combustion/mechanism.html}); the ``C1/O$_2$ mechanism" from \citet{Li07}; the ``fort15 mech" from \citet{Sung98}; the ``aaumech" from \citet{Zabetta08}; and the ``GRI-30 mech" from \underline{http://www.me.berkeley.edu/gri\_mech/}. All the reaction networks listed above have been validated against many combustion experiments.
We applied these reaction networks to Jupiter's atmosphere and compared the predicted CO mixing ratios in Fig. \ref{fig: mech_comp}. The predictions do not agree with each other; in particular, the \citet{Venot12} network predicts a much lower value than the others. The source of the difference is that none of the other networks includes the channel H + CH$_3$OH $\leftrightarrow$ CH$_3$ + H$_2$O, presumably because this channel is unimportant for oxidation in an oxygen-rich environment. However, if this channel is indeed as fast as measured by \citet{Hidaka89} and adopted in the \citet{Venot12} network, it can be crucial in setting the overall CO/CH$_4$ conversion rate. The comparison in Fig. \ref{fig: mech_comp} does not imply that the \citet{Venot12} network is wrong, but it illustrates that networks validated in oxygen-rich environments can give different results when applied to a hydrogen-rich environment. Ideally, networks should be tested by experiments conducted under hydrogen-rich conditions in order to improve their accuracy. In this paper, because of these unresolved uncertainties in the kinetics, we considered two extreme cases for the CO reaction network: the \citet{Moses11} network represents the slowest pathway for CO destruction, while the \citet{Venot12} network represents the fastest. To obtain more precise predictions of CO, C$_2$H$_6$, and GeH$_4$, a few reactions need to be further investigated experimentally or theoretically to determine their rate coefficients. For a tighter CO mixing ratio prediction, the rate coefficient of H + CH$_3$OH $\leftrightarrow$ CH$_3$ + H$_2$O should be studied experimentally under conditions relevant to Jupiter and compared with the theoretical estimate of \citet{Moses11} and the experimental estimate of \citet{Hidaka89}.
For a tighter C$_2$H$_6$ mixing ratio prediction, the rate coefficient of CH$_3$ + CH$_3$ + M $\leftrightarrow$ C$_2$H$_6$ + M under high pressure should be better determined. For a better GeH$_4$ mixing ratio prediction, the rate coefficient of GeH$_2$ + H$_2$S $\leftrightarrow$ HGe=SH + H$_2$ should be determined.
The arsenic chemistry in a hydrogen rich environment should be studied to tighten the constraints on the total As abundance.
\section{Conclusions}
In this paper, we used the diffusion-kinetics code developed in \citet{Wang15} to predict the abundances of various disequilibrium species on Jupiter and Saturn with updated thermodynamic and kinetic data. The dependence on the vertical eddy diffusion coefficient and the deep water abundance has been explored. We summarize our simulation results below.
\begin{itemize}
\item{We find C$_2$H$_6$ is a useful tracer for the deep eddy diffusion coefficient. The degeneracy between high (low) eddy diffusion coefficient and high (low) deep water abundance from the CO constraint can be broken by adding the constraint from C$_2$H$_6$. }
\item{We find PH$_3$ is converted to H$_3$PO$_4$ instead of P$_4$O$_6$ as in previous studies. We identified a new chemical pathway based on an H/P/O reaction network. The PH$_3$ abundance is predicted to be insensitive to both the eddy diffusion coefficient and the deep water abundance unless $E_{\rm H_2O}$ is higher than 20. }
\item{We confirm that SiH$_4$ is not expected in the troposphere of either Jupiter or Saturn, based on an H/Si/O reaction network. A new chemical pathway for SiH$_4$/MgSiO$_3$(s) conversion is proposed. }
\item{We propose a new chemical pathway for GeH$_4$ destruction. The GeH$_4$ abundance is predicted to be a sensitive function of the eddy diffusion coefficient. }
\item{We confirm that the element As is primarily sequestered in AsH$_3$ in Jupiter and Saturn's atmosphere by exploring a new chemical pathway for AsH$_3$ destruction. }
\item{Since the eddy diffusion coefficient is predicted by theoretical models to be latitudinally dependent, we predict the tropospheric abundances of CO, C$_2$H$_6$, CO$_2$, and GeH$_4$ to have latitudinal variations, and the tropospheric abundances of PH$_3$ and AsH$_3$ to have no latitudinal variations. }
\end{itemize}
Juno can provide multiple constraints on the eddy diffusion coefficient from its measurement of disequilibrium species by JIRAM and its measurement of the deep water abundance from the microwave radiometer.
A probe with a mass spectrometer sensitive enough to detect sub-ppb levels of CO and C$_2$H$_6$ can place constraints on both the deep eddy diffusion coefficient and the deep water abundance of Saturn. The probe should be sent to an equatorial latitude to maximize the probability of detecting disequilibrium species.
The predictions of the disequilibrium chemistry are limited by the uncertainties in the kinetics. Several reactions are worth further investigation to reduce these uncertainties: H + CH$_3$OH $\leftrightarrow$ CH$_3$ + H$_2$O, CH$_3$ + CH$_3$ + M $\leftrightarrow$ C$_2$H$_6$ + M, and GeH$_2$ + H$_2$S $\leftrightarrow$ HGe=SH + H$_2$.
\section*{Acknowledgments}
We thank O. Venot for providing her reaction network. We are most grateful to B. B\'{e}zard and T. Cavali\'{e} for their careful reviews of the paper. Support from Juno project is gratefully acknowledged. This work has been partly carried out thanks to the support of the A*MIDEX project (n\textsuperscript{o} ANR-11-IDEX-0001-02) funded by the ``Investissements d'Avenir'' French Government program, managed by the French National Research Agency (ANR). D. W. and J. L. are supported by the Juno project.
\bibliographystyle{elsarticle-harv}
\section{Introduction}
Imaging Atmospheric Cherenkov Telescopes (IACTs) are ground-based telescopes that study the gamma-ray sky at very high energies (VHE; E > 100 GeV).
The current generation of IACTs includes the successful experiments H.E.S.S., located in Namibia, MAGIC, in the Canary Islands, and VERITAS, in Arizona.
Because the atmosphere is opaque to gamma rays, IACTs observe a by-product of the electromagnetic cascades initiated in the atmosphere by very energetic gamma rays, namely Cherenkov light. This light is focused onto each telescope camera and forms an image that is analyzed to characterize the primary particle and determine its energy and incoming direction.
The {\it camera acceptance} can be defined as the system response to the detection of $\gamma$-ray events \cite{fernandes14}. When observing with a single telescope, the camera acceptance is, to first approximation, radially symmetric with its maximum at the camera center: the system is more sensitive to light emitted by gamma-like showers imaged in the central part of the camera than in the outer part. Second-order asymmetries are due to large-zenith-angle observations and to the geomagnetic field.
With two or more telescopes, the camera acceptance becomes more complicated, being the intersection of the fields of view (FoV) of the triggering telescopes. The camera acceptance is expected to differ between images induced by VHE gamma rays and background images, typically due to proton-induced showers. It can also depend on other parameters \cite{hess_template}.
We present a detailed study of the camera acceptance of the MAGIC telescopes. We will discuss its dependence on the observation parameters (such as azimuth and zenith angle) and on the nature of the particle initiating the atmospheric shower.
This study is motivated by the possibility of applying new, more sophisticated methods for background reconstruction, such as the template background method \cite{hess_template}. This method is suited for the reconstruction of sky maps of complex FoVs, and is based on the detailed knowledge of both the gamma-like and hadron-like event acceptances.
\section{The MAGIC Telescopes}
MAGIC is a system of two IACTs located on the Canary island of La Palma, at 2200~m a.s.l.
It observes VHE gamma rays from 50~GeV up to several tens of TeV.
The angular resolution at energies around 200~GeV is $<0.07$ degrees, while the energy resolution is 16\% \cite{mag_analysis, magicperf14}.
During the observation of a VHE gamma-ray candidate, the system records the azimuth (Az) and zenith (Zd) angle of the pointing.
The system acceptance is the intersection of the two radially symmetric single-telescope acceptances and, to first approximation, is an ellipse centered on the camera center for a given azimuth and zenith angle.
To systematically characterize the MAGIC camera acceptance, we studied a large sample of real data collected by the telescopes between the end of 2013 and summer 2014.
\subsection{Data selection and analysis chain}
For the study of the MAGIC acceptance, we selected 79~hours of so-called Off data, i.e. data without any significant gamma-ray signal. The good Az-Zd coverage of this dataset is shown in Figure~\ref{fig:data_azzd}.
The data selection was based mainly on the event rate, an indicator of good weather conditions and a dark sky. The MAGIC telescopes can also observe during low and moderate moonlight conditions; for the dataset used here, we accepted only data taken under moonlight levels low enough that the analysis pipeline is unchanged with respect to dark-sky conditions. The data were processed using the standard MAGIC data analysis chain \cite{mag_analysis}.
\begin{figure}[htp]
\centering
\includegraphics[width=3.5in]{data_skyline.pdf}
\caption{Zenith versus azimuth angle distribution for the 79 hours of data ($5.9~\cdot~10^7$ events) used in this study. For graphical reasons, the azimuth is displayed in the range -80 to 280 degrees (instead of 0 to 360 degrees).}
\label{fig:data_azzd}
\end{figure}
In the analysis chain, a geometrical reconstruction of the camera image is performed for each triggered event, following the extended Hillas parametrization \cite{hillas85}. From this reconstruction, the energy and nature of the primary particle, as well as its arrival direction, are then estimated. Even for strong gamma-ray emitters, such as the Crab Nebula, the large majority of these images are associated with hadron-initiated showers.
To provide an independent check of the reported results, a second analysis was done with a partially different dataset for both the signal and the background characterization. This analysis found results compatible with those reported below.
\section{Study of the Acceptance}
The arrival direction estimate is refined during the last step of the analysis chain with the random forest technique \cite{mag_analysis}, where the random forest is trained on simulated gamma rays. For purely geometrical reasons, this incoming direction corresponds to a point in the camera plane.
The distribution in camera coordinates (in degrees) of the reconstructed incoming directions is used to describe the acceptance of the MAGIC telescopes (Figure~\ref{fig:acc_all}). The binning adopted in this study is 51 bins from -3 to 3 degrees on both the X and Y axes.
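In practice the acceptance map is simply a two-dimensional histogram of the reconstructed arrival directions. A minimal sketch with the binning stated above is given below; the arrays `x_cam` and `y_cam` of camera coordinates (in degrees) are assumed inputs, and the helper is hypothetical, not the MAGIC analysis code.

```python
import numpy as np

def acceptance_map(x_cam, y_cam):
    """2-D histogram of reconstructed arrival directions in camera
    coordinates: 51 x 51 bins covering -3 to 3 degrees on each axis."""
    edges = np.linspace(-3.0, 3.0, 52)  # 51 bins require 52 edges
    counts, _, _ = np.histogram2d(x_cam, y_cam, bins=[edges, edges])
    return counts, edges
```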
\begin{figure}[htp]
\centering
\includegraphics[width=2.5in]{acceptance_all.pdf}
\caption{Acceptance map for the 79~hours of Off data ($5.9~\cdot~10^6$ events) considered in this study.}
\label{fig:acc_all}
\end{figure}
\subsection{Azimuthal dependence of the acceptance}
In MAGIC data analysis, azimuth is counted from Geographic North (0 deg) in the direction of East (90 deg).
To characterize the effect of the azimuth angle of the observation on the acceptance, we divided our sample into 18 azimuth bins, spanning the full azimuth range.
Figure~\ref{fig:acc_az} shows the resulting acceptance plots, for each azimuth bin.
It is worth noticing that, while in Figure~\ref{fig:acc_all} the acceptance shows a mostly radial dependence on the distance from the camera center, when the acceptance is plotted in azimuth bins the shape becomes elliptical, though still centered on the camera center. This effect is due to the intersection of the acceptances of the two MAGIC telescopes, which is azimuth dependent\footnote{In the standard MAGIC analysis, this azimuth angle dependence is taken into account both in the spectral and morphological analyses, which are usually performed in bins of azimuth angle. However, the strategy adopted for estimating the background is not optimal in the case of complex FoVs.}.
In Figure~\ref{fig:acc_all} the effect is not visible because a large amount of data covering almost all the azimuth range is considered.
\begin{figure}[htp]
\centering
\includegraphics[width=\textwidth]{acceptance_az.pdf}
\caption{From left to right, top to bottom: acceptance maps in 18 azimuth bins, from 0 to 360 degrees, in steps of 20 degrees. The first bin (azimuth 0-20 degrees) is drawn in the upper row on the left, while the last bin (azimuth 340-360 degrees) is drawn in the bottom row on the right.}
\label{fig:acc_az}
\end{figure}
The elliptic shape assumed by the acceptance in each azimuth bin can be parametrized by the eigenvectors of the covariance matrix, which represent the directions of the semi-axes of the ellipse (red and green lines drawn in Fig.~\ref{fig:acc_az}). The eigenvalues of the covariance matrix, instead, are related to the variance of the data, and are therefore connected to the shape of the acceptance.
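As a reference, this parametrization amounts to an eigen-decomposition of the $2\times2$ covariance matrix of the arrival directions; the helper below is a hypothetical sketch, not the MAGIC analysis software.

```python
import numpy as np

def ellipse_from_covariance(x_cam, y_cam):
    """Semi-axis lengths (standard deviations, deg) and major-axis
    orientation (deg) of the acceptance ellipse, from the covariance
    matrix of the reconstructed arrival directions."""
    cov = np.cov(np.vstack([x_cam, y_cam]))   # rows are the two variables
    evals, evecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    major = evecs[:, np.argmax(evals)]        # major-axis direction
    angle = np.degrees(np.arctan2(major[1], major[0])) % 180.0
    return np.sqrt(np.sort(evals)[::-1]), angle
```

The eigenvector sign ambiguity is absorbed by taking the orientation modulo 180 degrees.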
Figure~\ref{fig:acc_angle} shows the dependence of the angle between the major axis of the ellipse and the horizontal axis of the camera plane on the azimuth angle of the observation. It is evident that this relation is linear; therefore, to first order, the azimuth dependence can be corrected by derotating the acceptance map.
Hence, to correct for the azimuth effect on the acceptance, the arrival direction coordinates of each event taken at a particular azimuth have to be derotated by an angle given by the formula:
\begin{equation}
\phi_0 = \phi - 120,
\end{equation}
where $\phi$ is the azimuth of the observation and $\phi_0$ is the derotation angle, both in degrees. This relation has been obtained by fitting the data shown in Figure~\ref{fig:acc_angle} in the azimuth range 40-240 degrees.
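Applied per event, the correction is a plain rotation of the camera coordinates by $-\phi_0$; a minimal sketch (a hypothetical helper, not the MAGIC pipeline) reads:

```python
import math

def derotate(x_cam, y_cam, azimuth_deg):
    """Rotate camera coordinates (deg) by -(azimuth - 120) degrees to
    remove the first-order azimuthal dependence of the acceptance."""
    phi0 = math.radians(azimuth_deg - 120.0)
    c, s = math.cos(phi0), math.sin(phi0)
    # Rotation matrix for angle -phi0 applied to (x_cam, y_cam).
    return c * x_cam + s * y_cam, -s * x_cam + c * y_cam
```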
\begin{figure}[htp]
\centering
\includegraphics[width=5.0in]{acceptance_angle.pdf}
\caption{ Angle between the major axis of the ellipse describing the camera acceptance and the horizontal axis of the camera plane, as a function of the azimuth angle.}
\label{fig:acc_angle}
\end{figure}
\subsection{Zenithal dependence of the acceptance}
The comparison of low ($< 22.5$ degrees) and high ($> 22.5$ degrees) zenith angle acceptance maps for events with estimated energy larger than 250\,GeV is plotted in the first row of Figure~\ref{fig:acc_zd}.
To avoid contamination from the azimuth angle dependence, we have derotated the maps following the method described in the previous section. By visual inspection, the acceptance of the MAGIC telescopes for high zenith angle events appears rounder than that obtained for low zenith angle events.
This is confirmed by the study of the difference of the two maps, normalized for the total number of entries, and its significance obtained with the Li\&Ma formula \cite{lima}. Both graphs are drawn in the bottom row of Figure~\ref{fig:acc_zd}.
The low zenith angle acceptance is found to be significantly more peaked at the camera center, and the ellipse describing this average acceptance is more elongated than the one describing the acceptance for high zenith angle events. A similar effect is observed when comparing events with low and high estimated energy.
In conclusion, we demonstrated that the zenith angle of the observation affects the acceptance of the MAGIC telescopes, and therefore it should be taken into account when the background is modeled.
\begin{figure}[htp]
\centering
\includegraphics[width=5.0in]{acceptance_zd.pdf}
\caption {Upper panels: Acceptance of low and high zenith angles events (corrected for the azimuth dependence). Bottom panels: difference between the two normalized acceptances and corresponding Li\&Ma significance map.}
\label{fig:acc_zd}
\end{figure}
\subsection{Comparison of hadron-like and gamma-like events acceptance}
A crucial point to be addressed when the acceptance of an IACT is studied
is the response of the system to gamma-ray- and background-induced (i.e., proton) showers.
As detailed in \cite{hess_template}, this is of particular relevance for the development of advanced imaging tools using, for example, the template background technique.
In order to study the MAGIC response to the two kinds of showers, we compare the acceptance
of our system to gamma-like and hadron-like events with estimated energy larger than 250\,GeV.
During the data analysis, a parameter named {\it hadronness} is assigned to each event.
This parameter is related to the probability that a given event is
induced by a primary gamma ray, and spans from 0 (gamma-like event) to 1 (hadron-like event). For the computation of the hadronness, the random forest technique is employed, where the random forest is trained on simulated gamma rays as well as on real hadronic showers.
In this study we have applied the following cuts:
\begin{itemize}
\item hadronness $< 0.28$: gamma-like events;
\item $0.4 <$ hadronness $< 0.8$: hadron-like events;
\item size $> 200$ photo-electrons: applied to all events.
\end{itemize}
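With per-event arrays, these cuts amount to simple boolean masks. The snippet below is a sketch on synthetic numbers (the array names and the uniform toy distributions are our assumptions for illustration):

```python
import numpy as np

# synthetic stand-ins for per-event hadronness and image size (photo-electrons)
rng = np.random.default_rng(0)
hadronness = rng.uniform(0.0, 1.0, size=10_000)
size = rng.uniform(50.0, 500.0, size=10_000)

size_cut = size > 200.0                                   # applied to all events
gamma_like = size_cut & (hadronness < 0.28)
hadron_like = size_cut & (hadronness > 0.4) & (hadronness < 0.8)
```

By construction the two samples are disjoint, since the two hadronness windows do not overlap.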
The resulting acceptances are plotted in Figure~\ref{fig:acc_hadronness}.
The first plot on the left of the upper panel shows the MAGIC
acceptance for gamma-like showers, while the second plot shows the acceptance for hadron-like showers.
The difference between the two maps (normalized to the entries in the inner part of the camera)
and its Li \& Ma significance is displayed on the bottom row.
These plots show that there is a clear difference in the camera acceptance between gamma-like and hadron-like events.
In particular, the positive excess at low radii and negative excess at large radii show that the acceptance for gamma-like events is more peaked than the corresponding acceptance for hadron-like events.
\begin{figure}[htp]
\centering
\includegraphics[width=5.0in]{acceptance_hadronness.pdf}
\caption{Upper panels: Acceptances of gamma- and hadron-like events. Bottom panels: difference between the two normalized acceptances, and Li\&Ma significance of the difference.}
\label{fig:acc_hadronness}
\end{figure}
\section{Conclusions}
In these proceedings, we have presented a detailed study of the acceptance of the MAGIC telescopes,
based on the analysis of nearly 80 hours of real Off data.
For a given observation, the acceptance has been found to have an elliptic shape.
We have demonstrated that, as expected for purely geometrical reasons, different azimuth
angles induce a rotation of this ellipse in the camera plane.
The zenith angle of the observations also has an effect on the MAGIC acceptance, which is
rounder for large zenith angle observations.
Finally, we have compared the acceptances for hadron-like and gamma-like events, finding significant differences
between the two responses.
The characterization of the camera acceptance of the MAGIC telescopes performed in this study
opens the possibility of applying improved background estimation methods to the MAGIC data, useful to investigate the morphology of extended or multiple sources.
\section*{Acknowledgements}
We would like to thank the Instituto de Astrof\'{\i}sica de Canarias
for the excellent working conditions at the Observatorio del Roque de los Muchachos in La Palma. The financial support of the German BMBF and MPG, the Italian INFN and INAF, the Swiss National Fund SNF, the ERDF under the Spanish MINECO (FPA2012-39502), and the Japanese JSPS and MEXT is gratefully acknowledged. This work was also supported by the Centro de Excelencia Severo Ochoa SEV-2012-0234, CPAN CSD2007-00042, and MultiDark CSD2009-00064 projects of the Spanish Consolider-Ingenio 2010 programme, by grant 268740 of the Academy of Finland, by the Croatian Science Foundation (HrZZ) Project 09/176 and the University of Rijeka Project 13.12.1.3.02, by the DFG Collaborative Research Centers SFB823/C4 and SFB876/C3, and by the Polish MNiSzW grant 745/N-HESS-MAGIC/2010/0.
Elisa Prandini gratefully acknowledges the financial support of the Marie Heim-Vogtlin grant of the Swiss National Science Foundation.
\section{Some Properties of $H^{1/2}_{0,}(a,b)$} \label{sec:NormEquivalence}
In this appendix,
we provide proofs of Lemma~\ref{lem:NormEquivalence} and Lemma~\ref{lem:FractionalNormPointtau}
of Section~\ref{sec:FctSpc} concerning the Sobolev space $H^{1/2}_{0,}(a,b)$, and
state Poincar\'e and interpolation inequalities in $H^{1/2}_{0,}(a,b)$.
The result of Lemma~\ref{lem:NormEquivalence} is well known, but
we need to make explicit the dependency of the involved constants on the interval $(a,b)$,
which is essential for the derivation of the temporal $hp$-error estimates in Section~\ref{sec:ConvRate}.
For simplicity, we restrict to the case of real-valued functions $v \colon \, (a,b) \to \mathbb R$.
All results and proofs can be generalized straightforwardly
to $X$-valued functions $v \colon \, (a,b) \to X$ for a Hilbert space $X$.
We introduce the following notation.
For the classical Sobolev space
\begin{equation*}
H^{1/2}(\mathbb R) = (H^1(\mathbb R), L^2(\mathbb R))_{1/2,2},
\end{equation*}
where $H^1(\mathbb R)$ is equipped with the norm
$\| \circ \|_{H^1(\mathbb R)} = ( \| \circ \|_{L^2(\mathbb R)}^2 + \| \partial_t \circ \|_{L^2(\mathbb R)}^2 )^{1/2}$,
we consider the interpolation norm $\| \circ \|_{H^{1/2}(\mathbb R)}$ and the Slobodetskii norm
\[
\normiii{v}_{H^{1/2}(\mathbb R)}
:=
\left(
\| v \|_{L^2(\mathbb R)}^2
+
|v|_{H^{1/2}(\mathbb R)}^2
\right)^{1/2}
\]
for $v \in H^{1/2}(\mathbb R)$, with
\[
|v|_{H^{1/2}(\mathbb R)}
:=
\left( \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \frac{|v(s)-v(t)|^2}{|s-t|^2} \mathrm ds \mathrm dt \right)^{1/2}.
\]
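For the analogous seminorm on a bounded interval, a quick numerical sanity check is possible: for $v(t) = t$ on $(0,1)$ the difference quotient equals $1$ almost everywhere, so $|v|_{H^{1/2}(0,1)}^2 = 1$. A sketch with adaptive quadrature (the guard at $s = t$ supplies the limit value):

```python
from scipy.integrate import dblquad

def v(t):
    return t  # for this choice the integrand equals 1 a.e.

def integrand(s, t):
    if s == t:
        return 1.0  # limit of the difference quotient for v(t) = t
    return (v(s) - v(t)) ** 2 / (s - t) ** 2

# |v|_{H^{1/2}(0,1)}^2 as a double integral over (0,1) x (0,1)
seminorm_sq, _ = dblquad(integrand, 0.0, 1.0, lambda t: 0.0, lambda t: 1.0)
```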
\begin{proof}[Proof of Lemma~\ref{lem:NormEquivalence}]
The equivalence of norms is proven in, e.g., \cite{McLean2000}.
We give more details about the norm equivalence constants.
For this purpose,
we introduce an extension operator and establish bounds on its norm.
Define
$\mathcal E_1 \colon \, H^1_{0,}(a,b) \to H^1(\mathbb R)$,
\begin{equation*}
\mathcal E_1 v(t) := \begin{cases}
v(t), & t \in [a,b], \\
v(2b-t), & t \in (b,2b-a], \\
0, & \text{otherwise}
\end{cases}
\end{equation*}
for $v \in H^1_{0,}(a,b)$.
The mapping $\mathcal E_0 \colon \, L^2(a,b) \to L^2(\mathbb R)$
is defined for $v \in L^2(a,b)$
as
\begin{equation*}
\mathcal E_0 v(t) := \begin{cases}
v(t), & t \in (a,b), \\
v(2b-t), & t \in (b,2b-a), \\
0, & \text{otherwise}.
\end{cases}
\end{equation*}
Evidently, $\mathcal E_1 v = \mathcal E_0 v$ for $v \in H^1_{0,}(a,b)$.
Next, for $v \in L^2(a,b)$,
\begin{equation*}
\norm{\mathcal E_0 v}_{L^2(\mathbb R)}^2
=
\int_a^b \abs{v(t)}^2 \mathrm dt + \int_b^{2b-a} \abs{v(2b-t)}^2 \mathrm dt = 2 \norm{v}_{L^2(a,b)}^2
\end{equation*}
and, for $v \in H^1_{0,}(a,b)$,
\begin{equation*}
\norm{\partial_t \mathcal E_1 v}_{L^2(\mathbb R)}^2
= \int_a^b \abs{\partial_t v(t)}^2 \mathrm dt + \int_b^{2b-a} \abs{\partial_t v(2b-t)}^2 \mathrm dt
= 2 \norm{\partial_t v}_{L^2(a,b)}^2 \;.
\end{equation*}
Hence, for $v \in H^1_{0,}(a,b)$, it holds true that
\begin{equation*}
\norm{\mathcal E_1 v}_{H^1(\mathbb R)}^2
= 2 \norm{v}_{H^1(a,b)}^2
\leq 2 \left( 1 + \frac{4(b-a)^2}{\pi^2} \right) \norm{\partial_t v}_{L^2(a,b)}^2,
\end{equation*}
where the Poincar\'e inequality (see Lemma~\ref{lem:Poincare} below)
is used in the last step.
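The doubling identities for the reflection extension can be checked numerically. The sketch below uses a simple trapezoidal rule and the test function $v(t) = t^2$ on $(0,1)$, which satisfies $v(a) = 0$ (the discretization parameters are arbitrary choices):

```python
import numpy as np

def trap(y, x):
    # composite trapezoidal rule
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

a, b = 0.0, 1.0
t = np.linspace(a, b, 20_001)
v = t**2                                  # v(a) = 0, so v is admissible

# reflected branch of E0 v on (b, 2b - a): t -> v(2b - t)
t_ref = np.linspace(b, 2 * b - a, 20_001)
v_ref = (2 * b - t_ref) ** 2

l2_in = trap(v**2, t)
l2_ext = l2_in + trap(v_ref**2, t_ref)    # the extension vanishes elsewhere
```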
Interpolation yields an operator
$\mathcal E_{1/2} \colon \, H^{1/2}_{0,}(a,b) \to H^{1/2}(\mathbb R)$
with $\mathcal E_{1/2} v = \mathcal E_0 v$ for $v \in H^{1/2}_{0,}(a,b)$
and
\begin{equation} \label{NormEquivalenceExtension}
\forall v \in H^{1/2}_{0,}(a,b) : \quad \norm{\mathcal E_{1/2}v}_{H^{1/2}(\mathbb R)}^2 \leq 2 \sqrt{ 1 + \frac{4(b-a)^2}{\pi^2}} \norm{v}_{H^{1/2}_{0,}(a,b)}^2.
\end{equation}
Next, we estimate $\normiii{\mathcal E_{1/2}v}_{H^{1/2}(\mathbb R)}$ for $v \in H^{1/2}_{0,}(a,b)$. For this purpose, we compute
\begin{align}
\abs{\mathcal E_{1/2} v}_{H^{1/2}(\mathbb R)}^2 &= \int_a^\infty \int_a^\infty() +2 \int_a^\infty \int_{-\infty}^a () + \int_{-\infty}^a \int_{-\infty}^a () \nonumber \\
&= \abs{(\mathcal E_{1/2} v)_{|(a,\infty)}}_{H^{1/2}(a,\infty)}^2 + 2 \int_a^\infty \int_{-\infty}^a \frac{|\mathcal E_{1/2}v(t)|^2}{|s-t|^{2}} \mathrm ds \mathrm dt + 0 \nonumber \\
&=\abs{(\mathcal E_{1/2} v)_{|(a,\infty)}}_{H^{1/2}(a,\infty)}^2 + 2 \int_a^\infty \frac{\abs{\mathcal E_{1/2}v(t)}^2}{t-a} \mathrm dt \label{NormEquivalenceRepresentationZero}
\end{align}
for $v \in H^{1/2}_{0,}(a,b)$, where the seminorm $|\circ|_{H^{1/2}(a,\infty)}$
is defined by \eqref{SlobodetskiiSemi} with $b=\infty$.
The integral in the bound
\eqref{NormEquivalenceRepresentationZero}
is finite due to $v \in H^{1/2}_{0,}(a,b)$, cf. \eqref{Sob:NormTriple}.
Thus, we get
\begin{multline*}
\normiii{\mathcal E_{1/2}v}_{H^{1/2}(\mathbb R)}^2
=
2 \norm{v}_{L^2(a,b)}^2 + \abs{(\mathcal E_{1/2} v)_{|(a,\infty)}}_{H^{1/2}(a,\infty)}^2
+ 2 \int_a^\infty \frac{\abs{\mathcal E_{1/2}v(t)}^2}{t-a} \mathrm dt
\\
= 2 \norm{v}_{L^2(a,b)}^2 + \abs{v}_{H^{1/2}(a,b)}^2 + 2 \int_b^\infty \int_a^b () + \int_b^\infty \int_b^\infty() + 2 \int_a^\infty \frac{\abs{\mathcal E_{1/2}v(t)}^2}{t-a} \mathrm dt.
\end{multline*}
The third term on the right side is bounded by
\begin{align*}
2 \int_b^\infty \int_a^b () &= 2 \int_b^{2b-a} \int_a^b \frac{|v(s) - v(2b-t)|^2}{|s-t|^{2}} \mathrm ds \mathrm dt + 2\int_{2b-a}^\infty \int_a^b \frac{|v(s)|^2}{|s-t|^{2}} \mathrm ds \mathrm dt \\
&= 2 \int_a^b \int_a^b \frac{|v(s) - v(t)|^2}{\underbrace{ |2b-s-t|^{2}}_{\geq |s-t|^2}} \mathrm ds \mathrm dt + 2\int_a^b \frac{|v(s)|^2}{\underbrace{2b-a-s}_{\geq s-a}} \mathrm ds \\
&\leq 2 \abs{v}_{H^{1/2}(a,b)}^2 + 2\int_a^b \frac{|v(s)|^2}{s-a} \mathrm ds,
\end{align*}
the fourth term is
\begin{align*}
&\int_b^\infty \int_b^\infty() = \int_b^{2b-a} \int_b^{2b-a}() + 2\int_{2b-a}^\infty \int_b^{2b-a}() +\int_{2b-a}^\infty \int_{2b-a}^\infty() \\
&= \int_b^{2b-a} \int_b^{2b-a} \frac{|v(2b-s) - v(2b-t)|^2}{|s-t|^{2}} \mathrm ds \mathrm dt + 2 \int_{2b-a}^\infty \int_b^{2b-a} \frac{|v(2b-s)|^2}{|s-t|^{2}} \mathrm ds \mathrm dt + 0\\
&= \abs{v}_{H^{1/2}(a,b)}^2 + 2 \int_b^{2b-a} \frac{|v(2b-s)|^2}{2b-a-s} \mathrm ds = \abs{v}_{H^{1/2}(a,b)}^2 + 2\int_a^b \frac{|v(s)|^2}{s-a} \mathrm ds,
\end{align*}
whereas for the fifth term, we have
\begin{align*}
2 \int_a^\infty \frac{\abs{\mathcal E_{1/2}v(t)}^2}{t-a} \mathrm dt &= 2 \int_a^b \frac{\abs{v(t)}^2}{t-a} \mathrm dt + 2 \int_b^{2b-a} \frac{\abs{v(2b-t)}^2}{t-a} \mathrm dt \\
&= 2 \int_a^b \frac{\abs{v(t)}^2}{t-a} \mathrm dt + 2 \int_a^b \frac{\abs{v(t)}^2}{\underbrace{2b-a-t}_{\geq t-a}} \mathrm dt \leq 4\int_a^b \frac{\abs{v(t)}^2}{t-a} \mathrm dt.
\end{align*}
Using the above estimates gives for all $v \in H^{1/2}_{0,}(a,b)$
\begin{align*}
\normiii{\mathcal E_{1/2}v}_{H^{1/2}(\mathbb R)}^2 \leq 2 \norm{v}_{L^2(a,b)}^2 + 4\abs{v}_{H^{1/2}(a,b)}^2 + 8\int_a^b \frac{\abs{v(t)}^2}{t-a} \mathrm dt \leq 8 \normiii{v}_{H^{1/2}_{0,}(a,b)}^2\;.
\end{align*}
With these properties, we have for all $v \in H^{1/2}_{0,}(a,b)$ the lower bound in the norm equivalence:
\begin{align*}
\norm{v}_{H^{1/2}_{0,}(a,b)} \leq \norm{\mathcal E_{1/2}v}_{H^{1/2}(\mathbb R)} \leq \frac{1}{C_{\mathbb R,1}} \normiii{\mathcal E_{1/2}v}_{H^{1/2}(\mathbb R)} \leq \frac{2\sqrt{2} }{C_{\mathbb R,1}} \normiii{v}_{H^{1/2}_{0,}(a,b)}\;.
\end{align*}
Here, the first inequality is proven by interpolation, the second estimate follows from
\begin{equation} \label{NormEquivalenceR}
\forall z \in H^{1/2}(\mathbb R) : \quad C_{\mathbb R,1} \norm{z}_{H^{1/2}(\mathbb R)} \leq \normiii{z}_{H^{1/2}(\mathbb R)} \leq C_{\mathbb R,2} \norm{z}_{H^{1/2}(\mathbb R)}
\end{equation}
with constants $C_{\mathbb R,1}$, $C_{\mathbb R,2} > 0$, see \cite[Theorem~B.7]{McLean2000} and \cite[Lemma~4.1]{Eskin1981}.
For the upper bound, relations
\eqref{NormEquivalenceRepresentationZero}, \eqref{NormEquivalenceR} and \eqref{NormEquivalenceExtension}
yield
\begin{align*}
\normiii{v}_{H^{1/2}_{0,}(a,b)}^2 &\leq \norm{\mathcal E_{1/2}v}_{L^2(\mathbb R)}^2 +\abs{(\mathcal E_{1/2} v)_{|(a,\infty)}}_{H^{1/2}(a,\infty)}^2 + \int_a^\infty \frac{\abs{\mathcal E_{1/2}v(t)}^2}{t-a} \mathrm dt \\
&= \norm{\mathcal E_{1/2}v}_{L^2(\mathbb R)}^2 + \frac 12 \abs{(\mathcal E_{1/2} v)_{|(a,\infty)}}_{H^{1/2}(a,\infty)}^2 + \frac 12 \abs{\mathcal E_{1/2} v}_{H^{1/2}(\mathbb R)}^2 \\
&\leq \normiii{\mathcal E_{1/2}v}_{H^{1/2}(\mathbb R)}^2 \\
&\leq (C_{\mathbb R,2})^2 \norm{\mathcal E_{1/2}v}_{H^{1/2}(\mathbb R)}^2 \leq (C_{\mathbb R,2})^2 2 \sqrt{ 1 + \frac{4(b-a)^2}{\pi^2}} \norm{v}_{H^{1/2}_{0,}(a,b)}^2,
\end{align*}
i.e., the assertion is proven.
\end{proof}
The following proof of Lemma~\ref{lem:FractionalNormPointtau}
restricts the argument of~\cite{Faermann2000} to our particular case.
\begin{proof}[Proof of Lemma~\ref{lem:FractionalNormPointtau}]
Let $v \in H^{1/2}(a,b)$ and $\tau \in (a,b)$ be given. Then, we
split the integral in the definition~\eqref{SlobodetskiiSemi} as follows:
\begin{align*}
| v |_{H^{1/2}(a,b)}^2
&= \int_a^{\tau} \int_a^b \Big( \cdots \Big) \mathrm ds \mathrm dt + \int_{\tau}^b \int_a^b \Big( \cdots \Big) \mathrm ds \mathrm dt \\
&= \int_a^{\tau} \int_a^{\tau} \Big( \cdots \Big) \mathrm ds \mathrm dt
+ 2 \int_a^{\tau} \int_{\tau}^b \Big( \cdots \Big) \mathrm ds \mathrm dt + \int_{\tau}^b \int_{\tau}^b \Big( \cdots \Big) \mathrm ds \mathrm dt\\
&=| v |_{H^{1/2}(a,\tau)}^2+2 \int_a^{\tau} \int_{\tau}^b \Big( \cdots \Big) \mathrm ds \mathrm dt+| v |_{H^{1/2}(\tau,b)}^2.
\end{align*}
For the integral on the right side, we get
\begin{align*}
2 \int_a^{\tau} \int_{\tau}^b \frac{|v(s)-v(t)|^2}{|s-t|^{2}} \mathrm ds \mathrm dt
&\leq 4 \int_a^{\tau} \int_{\tau}^b \frac{|v(s)|^2}{|s-t|^{2}} \mathrm ds \mathrm dt
+ 4 \int_a^{\tau} \int_{\tau}^b \frac{|v(t)|^2}{|s-t|^{2}} \mathrm ds \mathrm dt \\
&= 4 \int_{\tau}^b |v(s)|^2 [ (s-\tau)^{-1} - (s-a)^{-1} ] \mathrm ds \\
&\quad + 4 \int_a^{\tau} |v(t)|^2 [(\tau-t)^{-1} - (b-t)^{-1}] \mathrm dt \\
&\leq 4 \int_{\tau}^b \frac{|v(s)|^2}{s-\tau} \mathrm ds + 4 \int_a^{\tau} \frac{|v(t)|^2}{\tau-t} \mathrm dt.
\end{align*}
Thus, the assertion follows.
\end{proof}
\begin{lemma} \label{lem:Poincare}
For $a,b \in \mathbb R$, $a<b$, the Poincar\'e inequalities
\begin{align*}
\forall v \in H^{1/2}_{0,}(a,b) : \quad &\| v \|_{L^2(a,b)} \leq \sqrt{\frac{2 (b-a)}{\pi}} \| v \|_{H^{1/2}_{0,}(a,b)}, \\
\forall v \in H^1_{0,}(a,b) : \quad &\| v \|_{H^{1/2}_{0,}(a,b)} \leq \sqrt{\frac{2 (b-a)}{\pi}} \| \partial_t v \|_{L^2(a,b)}, \\
\forall v \in H^1_{0,}(a,b) : \quad &\| v \|_{L^2(a,b)} \leq \frac{2 (b-a)}{\pi} \| \partial_t v \|_{L^2(a,b)}
\end{align*}
hold true, where the constants are sharp.
\end{lemma}
\begin{proof}
By interpolation, we have the Fourier series representations
\begin{equation} \label{norm:representations}
\| v \|_{L^2(a,b)}^2 = \sum_{k=0}^\infty |v_k|^2, \quad \| v \|_{H^{1/2}_{0,}(a,b)}^2 = \sum_{k=0}^\infty \sqrt{\lambda_k} |v_k|^2, \quad \| \partial_t v \|_{L^2(a,b)}^2 = \sum_{k=0}^\infty \lambda_k |v_k|^2
\end{equation}
with coefficients $v_k$ as in \eqref{eq:FourierRepresentation} and eigenvalues $\lambda_k = \frac{\pi^2 (2k+1)^2}{4(b-a)^2}$ of the eigenvalue problem \eqref{time:eigenvalues}.
Hence, all Poincar\'e inequalities follow from these representations. The constants are sharp since for $v$ with $v_0 \neq 0$ and $v_k=0$ for $k \in \mathbb N$, equality holds true.
\end{proof}
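The constants can be read off directly from the smallest eigenvalue $\lambda_0 = \pi^2/(4(b-a)^2)$: the sharp constants of the three inequalities are $\lambda_0^{-1/4}$, $\lambda_0^{-1/4}$, and $\lambda_0^{-1/2}$, respectively. A quick numerical check (the interval endpoints are an arbitrary example):

```python
import math

a, b = 0.0, 2.0
lam0 = math.pi**2 / (4 * (b - a) ** 2)  # smallest eigenvalue (k = 0)

c12 = lam0 ** (-0.25)  # constant in the first two inequalities
c1 = lam0 ** (-0.5)    # constant in the classical Poincare inequality
```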
\begin{lemma} \label{lem:interpolationEstimate}
For $a,b \in \mathbb R$ with $a<b$, the interpolation estimate
\begin{equation*}
\forall v \in H^1_{0,}(a,b) : \, \|v\|_{H^{1/2}_{0,}(a,b)} \leq \sqrt{ \| v \|_{L^2(a,b)} \| \partial_t v \|_{L^2(a,b)} }
\end{equation*}
holds true, where $\| \circ \|_{H^{1/2}_{0,}(a,b)}$ denotes the interpolation norm \eqref{eq:H120def}.
\end{lemma}
\begin{proof}
Using the Cauchy--Schwarz inequality, the assertion follows immediately from the Fourier representations \eqref{norm:representations}.
\end{proof}
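The Cauchy--Schwarz step can be verified numerically on truncated Fourier expansions; the decaying coefficients below are an arbitrary choice for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = 0.0, 1.0
k = np.arange(50)
lam = np.pi**2 * (2 * k + 1) ** 2 / (4 * (b - a) ** 2)   # eigenvalues
vk = rng.standard_normal(50) / (1 + k) ** 2              # arbitrary coefficients

l2_sq = np.sum(vk**2)                  # ||v||_{L^2}^2
h12_sq = np.sum(np.sqrt(lam) * vk**2)  # ||v||_{H^{1/2}_{0,}}^2
h1_sq = np.sum(lam * vk**2)            # ||dv/dt||_{L^2}^2
```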
\section{Proof of Lemma~\ref{lem:H12X2Reg}} \label{sec:ProofLemH12X2Reg}
Let $b \in (0,T]$ be fixed.
According to~\eqref{eq:T'TBd} for $l=0$, the estimate
\begin{equation*}
\forall t>0 \colon
\norm{E(t)}_{\mathcal L(X_\varepsilon, X_2)}^2 \leq
\frac{1}{\sqrt{2\pi}} \left(\frac{1}{2}\right)^{2-\varepsilon}
\Gamma(3-\varepsilon)
t^{-2+\varepsilon}
\end{equation*}
holds true.
The logarithmic convexity of the gamma function gives
$\Gamma(3-\varepsilon)=\Gamma\left(2\varepsilon+3(1-\varepsilon)\right)\le\Gamma(2)^\varepsilon\Gamma(3)^{1-\varepsilon}=
2^{1-\varepsilon}$ and we obtain
\begin{equation}\label{eq:Tt}
\forall t>0 \colon \norm{E(t)}_{\mathcal L(X_\varepsilon, X_2)}
\leq
\sqrt{\frac{1}{\sqrt{2\pi}} \left(\frac{1}{2}\right) t^{-2+\varepsilon}}
=
\frac{1}{\sqrt[4]{8\pi}} \, t^{-1+\varepsilon/2}.
\end{equation}
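Both the gamma-function step and the resulting prefactor are easy to confirm numerically:

```python
import math

# Gamma(3 - eps) <= 2^(1 - eps) on (0, 1/2), by logarithmic convexity
for eps in (0.01, 0.1, 0.25, 0.4, 0.49):
    assert math.gamma(3 - eps) <= 2 ** (1 - eps)

# the prefactor: sqrt( (1/sqrt(2*pi)) * (1/2) ) = (8*pi)^(-1/4)
prefactor = math.sqrt(0.5 / math.sqrt(2 * math.pi))
```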
The solution $u$ admits the representation
\begin{equation}\label{eq:repr}
u(t) = \int_0^t E(\tau) g(t-\tau) \mathrm d\tau, \quad 0 \leq t \leq b,
\end{equation}
see~\eqref{eq:Duhamel},
and for $t \in [0,b]$, it follows that
\begin{align*}
\norm{u(t)}_{X_2} &\leq \int_0^t \norm{E(\tau) g(t-\tau)}_{X_2} \mathrm d\tau
\leq \int_0^t \norm{E(\tau)}_{\mathcal L(X_\varepsilon,X_2)} \norm{g(t-\tau)}_{X_\varepsilon} \mathrm d\tau \\
&\leq \frac{1}{\sqrt[4]{8\pi}} C_g \int_0^t \tau^{-1+\varepsilon/2} \mathrm d\tau
= \frac{1}{\sqrt[4]{8\pi}} C_g\, \frac{2}{\varepsilon}\, t^{\varepsilon/2}
= \sqrt[4]{\frac{2}{\pi}}\, C_g\, \frac{1}{\varepsilon}\, t^{\varepsilon/2}.
\end{align*}
We estimate the three terms of $\normiii{u}_{H^{1/2}_{0,}((0,b);X_2)}$
expressed as in~\eqref{Sob:NormTriple}.
\noindent
\textbf{First term:} From the previous bound for $\norm{u(t)}_{X_2}$, we derive
\begin{equation*}
\| u \|_{L^2((0,b);X_2)}^2 = \int_0^b \norm{u(t)}_{X_2}^2 \mathrm dt \leq \sqrt{\frac{2}{\pi}}\, C_g^2\, \frac{1}{\varepsilon^2} \int_0^b t^\varepsilon \mathrm dt = \sqrt{\frac{2}{\pi}}\, C_g^2\, \frac{1}{\varepsilon^2(1+\varepsilon)}\, b^{1+\varepsilon}.
\end{equation*}
\noindent
\textbf{Third term:} Similarly, we obtain
\begin{equation*}
\int_0^b \frac{\norm{u(t)}_{X_2}^2}{t} \mathrm dt \leq \sqrt{\frac{2}{\pi}}\, C_g^2\, \frac{1}{\varepsilon^2} \int_0^b t^{\varepsilon-1} \mathrm dt = \sqrt{\frac{2}{\pi}}\, C_g^2\, \frac{1}{\varepsilon^3}\, b^\varepsilon.
\end{equation*}
\noindent
\textbf{Second term:} Recalling~\eqref{SlobodetskiiSemi}, we need to
estimate $\int_0^b \int_0^b \frac{\|u(s)-u(t)\|_{X_2}^2}{|s-t|^2} \mathrm ds \mathrm dt$.\\
For $b \geq s \geq t \geq 0$, we have
\begin{align*}
&\norm{u(s)-u(t)}_{X_2} \stackrel{\eqref{eq:repr}}{=} \norm{ \int_0^s E(\tau) g(s-\tau) \mathrm d\tau - \int_0^t E(\tau) g(t-\tau) \mathrm d\tau }_{X_2} \\
&\qquad= \norm{ \int_t^s E(\tau) g(s-\tau) \mathrm d\tau + \int_0^t E(\tau) g(s-\tau) \mathrm d\tau - \int_0^t E(\tau) g(t-\tau) \mathrm d\tau }_{X_2} \\
&\qquad\leq \norm{ \int_t^s E(\tau) g(s-\tau) \mathrm d\tau }_{X_2} + \norm{ \int_0^t E(\tau) [g(s-\tau) - g(t-\tau)] \mathrm d\tau }_{X_2} \\
&\qquad\leq \int_t^s \norm{E(\tau)}_{\mathcal L(X_\varepsilon,X_2)} \norm{g(s-\tau)}_{X_\varepsilon} \mathrm d\tau \\
&\qquad\quad + \int_0^t \norm{ E(\tau) }_{\mathcal L(X_\varepsilon,X_2)} \norm{g(s-\tau) - g(t-\tau)}_{X_\varepsilon} \mathrm d\tau \\
&\qquad\stackrel{\eqref{eq:Tt}}{\leq} \frac{1}{\sqrt[4]{8\pi}} \left(\int_t^s \tau^{-1+\varepsilon/2} \norm{g(s-\tau)}_{X_\varepsilon} \mathrm d\tau + \int_0^t \tau^{-1+\varepsilon/2} \norm{\int_{t-\tau}^{s-\tau} g'(r) dr}_{X_\varepsilon} \mathrm d\tau\right) \\
&\qquad\stackrel{\eqref{eq:boundg}}{\leq} \frac{1}{\sqrt[4]{8\pi}}\, C_g \,\frac{2}{\varepsilon}\, \left((s^{\varepsilon/2}-t^{\varepsilon/2}) + t^{\varepsilon/2}\, (s-t)\right).
\end{align*}
Analogously, for $b \geq t \geq s \geq 0$, the estimate
\begin{equation*}
\norm{u(s)-u(t)}_{X_2} \leq
\frac{1}{\sqrt[4]{8\pi}}\, C_g \,\frac{2}{\varepsilon}\, \left((t^{\varepsilon/2}-s^{\varepsilon/2}) + s^{\varepsilon/2}\, (t-s)\right)
\end{equation*}
holds true.
We conclude that
\begin{align*}
\int_0^b \int_0^b &\frac{\|u(s)-u(t)\|_{X_2}^2}{|s-t|^2} \mathrm ds \mathrm dt \\
&= \int_0^b \int_0^t \frac{\|u(s)-u(t)\|_{X_2}^2}{|s-t|^2} \mathrm ds \mathrm dt + \int_0^b \int_0^s \frac{\|u(s)-u(t)\|_{X_2}^2}{|s-t|^2} \mathrm dt \mathrm ds \\
&\leq \frac{1}{\sqrt{8\pi}}\, C_g^2\, \frac{16}{\varepsilon^2} \int_0^b
\int_0^s
\left(\frac{(s^{\varepsilon/2}-t^{\varepsilon/2})^2}{(s-t)^2}+t^\varepsilon\right)
\mathrm dt \mathrm ds\\
&= \sqrt{\frac{32}{\pi}}\, \frac{1}{\varepsilon^2}\, C_g^2 \left(\int_0^b s^{-1+\varepsilon} \int_0^1 \frac{(1-r^{\varepsilon/2})^2}{(1-r)^2} \mathrm dr \mathrm ds + \frac{b^{\varepsilon + 2}}{(\varepsilon +1)(\varepsilon + 2)}\right) \\
&\stackrel{\text{$\varepsilon/2\le 1$}}{\leq} \sqrt{\frac{32}{\pi}}\, \frac{1}{\varepsilon^2}\, C_g^2 \left(\int_0^b s^{-1+\varepsilon} \int_0^1 \frac{(1-r^{\varepsilon/2})^2}{(1-r^{\varepsilon/2})^2} \mathrm dr \mathrm ds + \frac{b^{\varepsilon + 2}}{(\varepsilon +1)(\varepsilon + 2)}\right) \\
&= \sqrt{\frac{32}{\pi}}\, \frac{1}{\varepsilon^2}\, C_g^2 \left(\frac{b^\varepsilon}{\varepsilon} + \frac{b^{\varepsilon + 2}}{(\varepsilon +1)(\varepsilon + 2)}\right).
\end{align*}
\noindent
\textbf{Conclusion of the proof:}
By combining the bounds of the three terms, we arrive at the \emph{a~priori} estimate
\begin{align*}
\normiii{u}_{H^{1/2}_{0,}((0,b);X_2)} \leq \sqrt[4]{\frac{2}{\pi}}\, \frac{1}{\varepsilon}\, b^{\varepsilon/2}
\left( \frac{b}{1+\varepsilon} + \frac{3}{\varepsilon} +
\frac{4 b^2}{(\varepsilon +1)(\varepsilon + 2)}
\right)^{1/2} C_g\;,
\end{align*}
which gives the assertion.
\section{Proof of Lemma~\ref{lem:BoundGamma}} \label{sec:ProofLemBoundGamma}
This proof is a slight modification of the proof of \cite[Lemma~3.4]{DD20}.
We use Stirling's inequalities
\begin{equation} \label{Stirling}
\forall x > 0 : \quad \sqrt{2 \pi} x^{x-1/2} \mathrm{e}^{-x}
\leq \Gamma(x) \leq \sqrt{2 \pi} x^{x-1/2} \mathrm{e}^{-x} \mathrm{e}^{\frac{1}{12x}}.
\end{equation}
For $j \geq 1$, \eqref{Stirling} yields
\begin{multline*}
\frac{\Gamma(\lfloor \mu j \rfloor-j+1)}{\Gamma(\lfloor \mu j \rfloor +j+1)} \leq \frac{\Gamma(\mu j-j+1)}{\Gamma(\mu j +j)} \\
\leq \frac{\sqrt{2\pi} (\overbrace{\mu j-j+1}^{\leq \mu j \text{ as } j\geq 1})^{\overbrace{\mu j-j+1/2}^{\geq 0 \text{ as } \mu \geq 1}} \mathrm e^{-(\mu j-j+1)} \overbrace{\mathrm{e}^{1/(12( \mu j -j+1))}}^{\leq 2} }{ \sqrt{2\pi} (\underbrace{\mu j+j}_{\geq \mu j})^{\mu j + j -1/2} \mathrm e^{-(\mu j+j)} } \leq \frac{2 \mu j }{\mathrm e} \left( \frac{ \mathrm e }{\mu j} \right)^{2j}
\end{multline*}
and
\begin{equation*}
\Gamma(j+3)^2 = (\underbrace{j+2}_{\leq 3j})^2 (\underbrace{j+1}_{\leq 2j})^2 j^2 \Gamma(j)^2 \leq j^6 \cdot 72 \pi j^{2j - 1} \mathrm{e}^{-2j} \underbrace{\mathrm{e}^{\frac{1}{6j}}}_{< 2} \leq 144 \pi j^5 j^{2j} \mathrm{e}^{-2j}.
\end{equation*}
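This Stirling-based bound can be verified numerically in logarithmic form via the log-gamma function, which avoids overflow for larger $j$:

```python
import math

# check Gamma(j + 3)^2 <= 144 * pi * j^5 * j^(2j) * e^(-2j) in log form
def bound_holds(j):
    lhs = 2 * math.lgamma(j + 3)
    rhs = (math.log(144 * math.pi) + 5 * math.log(j)
           + 2 * j * math.log(j) - 2 * j)
    return lhs <= rhs

assert all(bound_holds(j) for j in range(1, 51))
```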
Thus, we have
\begin{equation*}
\forall j \in \mathbb N : \quad \alpha^{2j} \frac{\Gamma(\lfloor \mu j \rfloor-j+1)}{\Gamma(\lfloor \mu j \rfloor +j+1)} \Gamma(j+3)^2 \leq \frac{288 \pi \mu }{\mathrm e} j^6 \left( \frac{\alpha }{\mu} \right)^{2j}.
\end{equation*}
Hence, we conclude that
\begin{multline*}
\sum_{j=0}^m \alpha^{2j} \frac{\Gamma(\lfloor \mu j \rfloor-j+1)}{\Gamma(\lfloor \mu j \rfloor +j+1)} \Gamma(j+3)^2 = 4 + \sum_{j=1}^m \alpha^{2j} \frac{\Gamma(\lfloor \mu j \rfloor-j+1)}{\Gamma(\lfloor \mu j \rfloor +j+1)} \Gamma(j+3)^2 \\
\leq 4 + \frac{288 \pi \mu }{\mathrm e} \sum_{j=1}^{\infty} j^6 \left( \frac{\alpha }{\mu} \right)^{2j} < \infty,
\end{multline*}
since the ratio test gives
\begin{equation*}
\lim_{j\to \infty} \frac{ (j+1)^6 \left( \frac{\alpha }{\mu} \right)^{2(j+1)} }{ j^6 \left( \frac{\alpha }{\mu} \right)^{2j} } = \left( \frac{\alpha }{\mu} \right)^2 < 1,
\end{equation*}
i.e., the assertion is proven.
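The ratio-test limit and the resulting convergence are easy to confirm numerically for example parameters with $\alpha < \mu$ (the values below are assumptions for illustration):

```python
alpha, mu = 1.0, 1.5   # example parameters with 0 < alpha < mu

def term(j):
    return j**6 * (alpha / mu) ** (2 * j)

# successive-term ratio approaches (alpha/mu)^2 < 1 for large j ...
j = 500
ratio = term(j + 1) / term(j)

# ... so the series converges: the terms decay essentially geometrically
partial_sum = sum(term(j) for j in range(1, 400))
```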
\subsection{Numerical Implementation of ${\mathcal{H}}_T$}
\label{sec:NumHT}
We describe the assembling of the matrices
$M_t^{{\mathcal{H}}_T}$ and $A_t^{{\mathcal{H}}_T}$ in \eqref{eq:matricesTime}. The crucial
point is the realization of the modified Hilbert transformation ${\mathcal{H}}_T$,
for which different possibilities exist, see
\cite{SteinbachZankNoteHT, ZankExactRealizationHT}.
In particular, for
a uniform degree vector ${\boldsymbol{p}} := (p, p, \dots, p)$ with a fixed, low
polynomial degree $p \in \mathbb N$, e.g., $p=1$ or $p=2$, the matrices $M_t^{{\mathcal{H}}_T}$ and
$A_t^{{\mathcal{H}}_T}$ in \eqref{eq:matricesTime} can be calculated using a
series expansion based on the \textit{Legendre chi function}, which
converges very fast, independently of the temporal mesh widths; see
\cite[Subsection~2.2]{ZankExactRealizationHT}. Since for the temporal
$hp$-FEM the degree vector ${\boldsymbol{p}}$ is not uniform, it is convenient to
apply numerical quadrature rules to approximate
the matrix entries.
From the integral representation of ${\mathcal{H}}_T$,
\[
({\mathcal{H}}_T v)(t) =
- \frac{2}{\pi} v(0) \ln \tan \frac{\pi t}{4T} -\frac{1}{\pi}
\int_0^{T} \ln \left[ \tan \frac{\pi (s+t)}{4T} \tan \frac{\pi \abs{t-s}}{4T} \right] \partial_t v(s) \mathrm ds,
\]
$t \in J$, $v \in H^1(J),$ as a weakly singular integral, see
\cite[Lemma 2.1]{SteinbachZankNoteHT}, we have
\begin{align}
A_t^{{\mathcal{H}}_T}[k, l] &=
\langle \partial_t \varphi_l , {\mathcal{H}}_T \varphi_k
\rangle_{L^2(J)} \nonumber \\
&= - \frac{1}{\pi} \int_0^T \partial_t \varphi_l(t)
\int_0^T \ln \left[ \tan \frac{\pi (s+t)}{4T} \tan \frac{\pi \abs{t-s}}{4T} \right] \, \partial_t \varphi_k(s) \, \mathrm ds \, \mathrm dt \label{Num:Aht_entries}
\end{align}
and
\begin{align}
M_t^{{\mathcal{H}}_T}[k, l] &= \langle \varphi_l , {\mathcal{H}}_T \varphi_k \rangle_{L^2(J)} \nonumber \\
&= - \frac{1}{\pi} \int_0^T \varphi_l(t) \int_0^T \ln \left[ \tan \frac{\pi (s+t)}{4T} \tan \frac{\pi \abs{t-s}}{4T} \right] \, \partial_t \varphi_k(s) \, \mathrm ds \, \mathrm dt \label{Num:Mht_entries}
\end{align}
for $k,l=1,\ldots,M$, with the temporal basis functions $\varphi_l$ in~\eqref{Num:BasisVt}.
In the following, we only describe how to compute the matrix entries
$M_t^{{\mathcal{H}}_T}[k, l]$ in \eqref{Num:Mht_entries}, since the matrix entries
$A_t^{{\mathcal{H}}_T}[k, l]$ in \eqref{Num:Aht_entries} can be computed in the
same way.
The matrix entries $M_t^{{\mathcal{H}}_T}[k, l]$ in \eqref{Num:Mht_entries} are computed element-wise for the partition $\mathcal G^m_\sigma = \{I_j\}_{j=1}^m$ of $J$ into time intervals
$I_j = (t_{j-1},t_j)\subset J$, $j=1,\dots, m$. Fix two time
intervals $I_i = (t_{i-1},t_i)$, $I_j = (t_{j-1},t_j)$ with indices
$i, j \in \{1,\dots,m\}$ and related local polynomial degrees $p_i,
p_j \in \mathbb N$.
We define the local matrix $M_t^{{\mathcal{H}}_T,i,j} \in \mathbb R^{(p_i+1) \times (p_j+1)}$ by
\begin{multline} \label{Num:Mht_local}
M_t^{{\mathcal{H}}_T,i,j}[\kappa, \ell] = \\
- \frac{1}{\pi} \int_{t_{j-1}}^{t_j} \varphi_{\alpha(\ell,j)}(t)
\int_{t_{i-1}}^{t_i} \ln \left[ \tan \frac{\pi (s+t)}{4T} \tan \frac{\pi \abs{t-s}}{4T} \right] \,
\partial_t \varphi_{\alpha(\kappa,i)}(s) \, \mathrm ds \, \mathrm dt
\end{multline}
for $\kappa=1,\dots,p_i+1$ and $\ell=1,\dots,p_j+1$.
Here,
$\alpha(\kappa,i) \in \{0,1,\dots,M\}$ is the global index related to the local index
$\kappa$ for the time interval $I_i$; similarly for $\alpha(\ell,j)$.
Notice that the function $\varphi_0$, corresponding to the vertex $t=0$,
does not contribute to the global matrix $M_t^{{\mathcal{H}}_T}$. On the reference interval $(-1,1)$,
we use the Lobatto polynomials (or integrated Legendre polynomials) as hierarchical shape functions, i.e.,
we set
\[
N_1(\xi) = \frac{1-\xi}{2},
\quad
N_2(\xi) = \frac{1+\xi}{2},
\quad
N_\ell(\xi) = \int_{-1}^\xi
L_{\ell-2}(\zeta) \, \mathrm d\zeta \quad \text{ for }\ell \geq 3,
\]
$\xi \in [-1,1]$,
where $L_\ell$ denotes the $\ell$-th Legendre polynomial on $[-1,1]$,
see~\cite[Chapter~3]{Schwab98}.
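These shape functions can be sketched with NumPy's Legendre utilities (the helper name is ours):

```python
import numpy as np
from numpy.polynomial import legendre

def lobatto_shape(ell, xi):
    """Hierarchical shape functions N_ell on the reference interval [-1, 1]."""
    xi = np.asarray(xi, dtype=float)
    if ell == 1:
        return (1 - xi) / 2
    if ell == 2:
        return (1 + xi) / 2
    # N_ell is the antiderivative of the Legendre polynomial L_{ell-2},
    # normalized to vanish at xi = -1
    c = np.zeros(ell - 1)
    c[-1] = 1.0  # coefficients of L_{ell-2} in the Legendre basis
    return legendre.legval(xi, legendre.legint(c, lbnd=-1))
```

For $\ell \geq 3$ these are interior (bubble) functions with $N_\ell(\pm 1) = 0$, which is what makes the basis hierarchical.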
With these shape functions and the affine transformation $T_\iota\colon \, [-1,1] \to [t_{\iota-1},t_\iota]$
for $\iota \in \{1,\dots,m\}$, the entries \eqref{Num:Mht_local}
of the local matrix $M_t^{{\mathcal{H}}_T,i,j}$ are
\begin{multline} \label{Num:Mht_local_referenceelement}
M_t^{{\mathcal{H}}_T,i,j}[\kappa, \ell] =\\
- \frac{k_j}{2\pi} \int_{-1}^1
N_\ell(\eta) \int_{-1}^1
\ln \left[ \tan \frac{\pi (T_i(\xi) + T_j(\eta))}{4T}
\tan \frac{\pi \abs{T_j(\eta)-T_i(\xi)}}{4T} \right] \, N'_\kappa(\xi) \, \mathrm d\xi \, \mathrm d\eta
\end{multline}
for $\kappa=1,\dots,p_i+1$ and $\ell=1,\dots,p_j+1$, where $k_j = \abs{t_j - t_{j-1}}$
is the length of the time interval $I_j$. To compute the integrals in \eqref{Num:Mht_local_referenceelement},
we split these integrals into regular and singular parts, see \cite[Subsection~3.1]{SteinbachZankNoteHT}.
For the regular parts, a tensor Gauss quadrature is applied. In \cite[Subsection~3.1]{SteinbachZankNoteHT},
it is proposed to calculate the singular parts analytically or with an adapted numerical integration.
As the polynomial degrees $p_i, p_j$ may be high, we use the latter.
The singularity of the singular parts is of logarithmic type.
Thus, we apply so-called classical and nonclassical Gauss--Jacobi quadrature rules of order adapted to $p_i, p_j$,
see
\cite[Eq. (1.6), (1.7)]{GautscheLog},
to the singular parts.
These adapted integration rules allow us to calculate the singular
parts exactly.
In summary, the entries of the matrices $M_t^{{\mathcal{H}}_T}$ and $A_t^{{\mathcal{H}}_T}$ in \eqref{eq:matricesTime} can be computed
efficiently to high floating-point accuracy.
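The principle of such adapted rules for logarithmic singularities can be illustrated with SciPy's QUADPACK wrappers, which provide quadrature with a built-in $\log(x-a)$ weight; this is only an analogy for the treatment of the log-type singularity, not the actual rules of \cite{GautscheLog}:

```python
from scipy.integrate import quad

# integrate f(x) * log(x - a) on (0, 1) with f(x) = x;
# the exact value of int_0^1 x * log(x) dx is -1/4
val, abserr = quad(lambda x: x, 0.0, 1.0, weight="alg-loga", wvar=(0.0, 0.0))
```

Here `wvar=(0.0, 0.0)` sets the algebraic exponents to zero, so the weight reduces to the pure logarithm.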
\subsection{Numerical Examples in 1D}
\label{sec:Num1D}
We present a numerical example in the one-dimensional spatial domain
$\mathrm{D} = (0,1) \subset \mathbb R$ with final time $T=2$, i.e.,
$Q = J \times \mathrm{D} = (0,2) \times (0,1)\subset \mathbb R^2$.
We choose the constant right-hand side $g_1 \equiv 1$,
for which the solution to problem~\eqref{eq:modelproblem} is given by the Fourier series
\begin{equation} \label{Num:u1}
u_1(t,x) = \sum _{\eta=1}^{\infty} \frac{4 - 4 e^{-\pi ^2 (2 \eta-1)^2 t}}{\pi ^3 (2 \eta-1)^3}
\sin (\pi (2 \eta-1) x), \quad (t,x) \in \overline{Q}.
\end{equation}
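Two basic properties of the series \eqref{Num:u1} make convenient numerical checks: every coefficient vanishes at $t=0$ (homogeneous initial condition), and for $t \to \infty$ the series approaches the steady state $x(1-x)/2$ of $-\partial_{xx} u = 1$ with homogeneous Dirichlet conditions. A sketch of the truncated evaluation:

```python
import numpy as np

def u1(t, x, n_terms=1000):
    # truncated Fourier series of the exact solution
    n = 2 * np.arange(1, n_terms + 1) - 1
    coeff = (4 - 4 * np.exp(-np.pi**2 * n**2 * t)) / (np.pi**3 * n**3)
    return float(np.sum(coeff * np.sin(np.pi * n * x)))
```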
In the calculation of the errors of the space-time Galerkin
approximation \eqref{eq:IBVPVarxtPerturbed}, we truncate the
series~\eqref{Num:u1} at $\eta=1000$.
For the spatial discretization, we choose a uniform initial mesh with mesh width $h_x$ and apply a uniform refinement strategy.
In the first test, we use a temporal mesh with mesh width
$k_{\max}=k_1=\dots=k_m$ and linear polynomials, i.e., ${\boldsymbol{p}} =
(1,\dots,1) \in \mathbb N^m$. The errors and the estimated orders of
convergence (eoc) are reported in Table~\ref{Num:Tab:u1}.
We observe a reduced order of convergence, as the compatibility condition between the right-hand side $g_1 \equiv 1$ and the homogeneous initial condition is not satisfied.
Note that the forcing $g_1 \equiv 1$ satisfies the temporal analytic regularity \eqref{eq:TimeReg} for any $\varepsilon \in (0,1/2)$ with $\delta=1$ and a constant $C=C(\varepsilon)$ depending on $\varepsilon$.
\begin{table}[h]
\begin{center}
\begin{tabular}{rcccc}
\hline
$MN$ & $h_x$ & $k_{\max}$ & $[u_1 - u_1^{MN}]_{H^{1/2}_{0,}(J;L^2(\mathrm{D} ))}$ & eoc\\
\hline
12 & 0.25000 & 0.50000 & 7.330e-02 & - \\
56 & 0.12500 & 0.25000 & 3.423e-02 & 0.99 \\
240 & 0.06250 & 0.12500 & 1.355e-02 & 1.27 \\
992 & 0.03125 & 0.06250 & 5.396e-03 & 1.30 \\
4032 & 0.01562 & 0.03125 & 2.267e-03 & 1.24 \\
16256 & 0.00781 & 0.01562 & 9.531e-04 & 1.24 \\
65280 & 0.00391 & 0.00781 & 4.004e-04 & 1.25 \\
261632 & 0.00195 & 0.00391 & 1.682e-04 & 1.25 \\
1047552 & 0.00098 & 0.00195 & 7.070e-05 & 1.25 \\
4192256 & 0.00049 & 0.00098 & 2.971e-05 & 1.25 \\
\hline
\end{tabular}
\caption{Numerical results with
the space-time Galerkin
approximation~\eqref{eq:IBVPVarxtPerturbed}
for the 1D example with the right-hand side $g_1 \equiv 1$
and solution $u_1$ in \eqref{Num:u1}, for a uniform mesh refinement
strategy and piecewise linear polynomials both in space and time.} \label{Num:Tab:u1}
\end{center}
\end{table}
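The eoc column of Table~\ref{Num:Tab:u1} is consistent with measuring the rate with respect to the space-time mesh width $h_{xt} \sim (MN)^{-1/2}$; a minimal sketch reproducing it from the tabulated values:

```python
import math

# (MN, error) pairs from Table Num:Tab:u1
data = [(12, 7.330e-2), (56, 3.423e-2), (240, 1.355e-2), (992, 5.396e-3),
        (4032, 2.267e-3), (16256, 9.531e-4), (65280, 4.004e-4),
        (261632, 1.682e-4), (1047552, 7.070e-5), (4192256, 2.971e-5)]

# eoc with respect to h_xt ~ (MN)^{-1/2}:
# eoc_i = log(e_{i-1}/e_i) / log(h_{i-1}/h_i)
#       = 2 log(e_{i-1}/e_i) / log(MN_i / MN_{i-1})
eoc = [2 * math.log(data[i - 1][1] / data[i][1])
       / math.log(data[i][0] / data[i - 1][0])
       for i in range(1, len(data))]
```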
In the second test,
we use the temporal $hp$-approximation of
Subsection~\ref{sec:hpApprJ}. For this purpose, we apply a uniform
refinement strategy for the spatial discretization, i.e., the number
$N$ of degrees of freedom in the spatial discretization doubles with
each uniform refinement. Then, corresponding to a given spatial
discretization with parameter $N$, we choose the temporal mesh as in \eqref{def:tj} with
grading parameter $\sigma = 0.31$, slope parameter $\mu_{\mathrm{hp}} = 2.0$, numbers of elements $m_1 = \lfloor 1.4 \cdot \ln N \rfloor$, $m_2 = 1$, and temporal polynomial degrees $\boldsymbol{p} \in \mathbb N^m$ as in \eqref{def:pj}. This choice of the discretization parameters fulfills condition~\eqref{muhp} with $\mu_{\mathrm{hp}} = 2.0 > \frac{345}{31 \sqrt{31}} \approx 1.99883$ and condition~\eqref{m2} with $m_2 = 1 > \frac{5}{2 \sqrt{31}} \approx 0.45$. In addition, this choice balances the terms of the error bound~\eqref{eq:hpxtErrBd}, i.e., the
total number of degrees of freedom $MN$ behaves like in
\eqref{ST:dofsBehavior}.
The numerical results reported in Figure~\ref{Num:Fig:u1}
confirm Theorem~\ref{thm:Conv}.
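The parameter conditions quoted above can be checked directly (the values of $N$ used for $m_1$ below are illustrative):

```python
import math

sigma, mu_hp, m2 = 0.31, 2.0, 1
bound_mu = 345 / (31 * math.sqrt(31))   # condition (muhp): mu_hp > bound_mu
bound_m2 = 5 / (2 * math.sqrt(31))      # condition (m2):   m2 > bound_m2
# number m1 of geometrically graded temporal elements per spatial resolution N
m1 = {N: math.floor(1.4 * math.log(N)) for N in (128, 1024, 8192)}
```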
\begin{figure}
\begin{center}
\begin{tikzpicture}[font=\footnotesize,scale=0.9]
\begin{axis}[xlabel={degrees of freedom $MN$} ,ylabel={$[u_1 - u_1^{MN}]_{H^{1/2}_{0,}(J;L^2(\mathrm{D} ))}$} ,ymode=log,xmode=log,
legend entries={{ $\mathbb P^1$-FEM uniform}, {temporal $hp$-FEM}, $(MN)^{-\frac 5 8} \sim h_{xt}^{1.25}$, $(MN)^{-1} \sim h_{xt}^2$},
legend pos = {south west},
legend style={cells={align=left}},
mark size=3pt]
\addplot[color=red,mark=otimes, mark options={scale=0.8},line width=1pt] plot file {Numerics/1du1/1dhFEMH12.txt};
\addplot[color=blue,mark=square, mark options={scale=0.8},line width=1pt] plot file {Numerics/1du1/1dhpFEMH12.txt};
\addplot[color=black] plot[samples=200,domain=100:8192256] {0.9/(x^(5/8))};
\addplot[color=black, dashed] plot[samples=200,domain=10000:1092256] {2/(x^(1))};
\end{axis}
\end{tikzpicture}
\caption{
%
Numerical results with the space-time Galerkin
approximation~\eqref{eq:IBVPVarxtPerturbed} for the 1D example
with the right-hand side $g_1 \equiv 1$ and solution $u_1$
in~\eqref{Num:u1}, for a spatial uniform mesh refinement and temporal
$\mathbb P^1$-FEM approximations with uniform mesh refinement or with temporal
$hp$-FEM with geometric partition of~$J$ with
grading parameter $\sigma = 0.31$,
slope parameter $\mu_{\mathrm{hp}} = 2.0$,
numbers of elements $m_1 = \lfloor 1.4 \cdot \ln N \rfloor$, $m_2 = 1$,
and temporal polynomial degrees $\boldsymbol{p} \in \mathbb N^m$ as in \eqref{def:pj}.
}
\label{Num:Fig:u1}
\end{center}
\end{figure}
\subsection{Numerical Examples in 2D}
\label{sec:Num2D}
We present numerical examples in the two-dimensional spatial L-shaped domain
\[
\mathrm{D} = (-1,1)^2 \setminus [0,1]^2 \subset \mathbb R^2,
\]
and final time $T=2$, i.e., $Q = J \times \mathrm{D} = (0,2) \times \mathrm{D} \subset \mathbb R^3$.
\subsubsection{Spatial Meshes}
For the spatial discretization, we consider uniformly refined
meshes, see Figure~\ref{Num:Fig:LShapeUniform}, or
meshes with corner-refinements towards the origin,
where in both cases the mesh width $h_x$ decreases by a factor of 2 with
each refinement.
\begin{figure}
\begin{center}
\includegraphics[scale=0.5]{Numerics/2dmeshes/LShapeUniformLevel0}
\includegraphics[scale=0.5]{Numerics/2dmeshes/LShapeUniformLevel1}
\caption{Spatial meshes with uniform refinement strategy: starting mesh and mesh after one refinement step.}
\label{Num:Fig:LShapeUniform}
\end{center}
\end{figure}
As pointed out in Section~\ref{Sec:P1:d2},
spatial meshes with corner-refinements towards the origin are needed to ensure
second-order convergence in $L^2(\mathrm{D} )$ for $\mathbb P^1$-FEM approximations in $\mathrm{D}$.
For a given maximal mesh width $h_x>0$, we construct
spatial meshes $\mathcal T^N_\beta$ with corner-refinements towards the origin
fulfilling the grading condition
\begin{equation} \label{Num:CondGradedMeshes}
\forall \omega \in \mathcal T^N_\beta \colon \quad h_{x,\omega} \sim \begin{cases}
h_x^{1/\beta}, & \mathrm{dist}(\omega, \boldsymbol 0) = 0, \\
h_x \cdot \mathrm{dist}(\omega, \boldsymbol 0)^{1-\beta}, & 0 < \mathrm{dist}(\omega, \boldsymbol 0) \leq R,\\
h_x, & \mathrm{dist}(\omega, \boldsymbol 0) > R, \end{cases}
\end{equation}
where the mesh grading parameters $\beta \in (0,1]$ and $R > 0$ are fixed.
Here, $h_{x,\omega}$ is the spatial mesh width of the triangle $\omega \in \mathcal T^N_\beta$,
and $\mathrm{dist}(\omega, \boldsymbol 0)$ is the distance of the triangle $\omega \in \mathcal T^N_\beta$
from the origin $\boldsymbol 0$.
To get a sequence of these graded spatial meshes, we halve the maximal mesh width $h_x$
and use the newest vertex bisection for the refinement, see Remark~\ref{rmk:BisTree}.
Figure~\ref{Num:Fig:LShapeGraded} shows the spatial graded meshes for
the first four levels of refinement with mesh
grading parameters $\beta = 0.6$ and $R=0.25$,
which are used in the remainder of this section.
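A minimal sketch of the target local mesh width prescribed by the grading condition \eqref{Num:CondGradedMeshes}, with the generic constants hidden in the equivalence $\sim$ set to one for illustration:

```python
def target_width(dist, h_x, beta=0.6, R=0.25):
    """Target local mesh width h_{x,omega} from (Num:CondGradedMeshes),
    with the generic constants of '~' set to one."""
    if dist == 0:
        return h_x ** (1.0 / beta)        # strongest refinement at the corner
    if dist <= R:
        return h_x * dist ** (1.0 - beta)  # graded zone near the origin
    return h_x                             # quasi-uniform away from the corner

h_x = 2.0 ** -4
widths = [target_width(d, h_x) for d in (0.0, 0.01, 0.1, 0.25, 0.5)]
```

For $\beta \in (0,1]$ the prescribed width grows monotonically with the distance from the corner, as the sample values confirm.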
\begin{figure}
\begin{center}
\includegraphics[scale=0.5]{Numerics/2dmeshes/LShapeGradedLevel0}
\includegraphics[scale=0.5]{Numerics/2dmeshes/LShapeGradedLevel1}
\includegraphics[scale=0.5]{Numerics/2dmeshes/LShapeGradedLevel2}
\includegraphics[scale=0.5]{Numerics/2dmeshes/LShapeGradedLevel3}
\caption{Spatial meshes with corner-refinements towards the origin
fulfilling the grading condition \eqref{Num:CondGradedMeshes} with parameters $\beta=0.6$ and $R=0.25$.}
\label{Num:Fig:LShapeGraded}
\end{center}
\end{figure}
\subsubsection{Spatially Singular Solution}
\label{sec:2dSingSol}
We consider the manufactured solution
\begin{equation} \label{Num:u2}
u_2(t,x_1,x_2)
=
u_{\mathrm{reg}}(t,x_1,x_2) + t \mathrm{e}^{-t} \eta(x_1,x_2) \cdot r(x_1,x_2)^{2/3}
\cdot \sin \left(\frac2 3 \left( \arg(x_1,x_2) - \frac{\pi}{2} \right) \right)
\end{equation}
for $(t,x_1,x_2) \in \overline{Q}$ with the smooth part
\begin{equation} \label{Num:u2reg}
u_{\mathrm{reg}}(t,x_1,x_2) = \frac{1}{100} t \sin (\pi x_1) \sin (\pi x_2) \mathrm e^{ -t \left(x_1-\frac{1}{4}\right)^2 - t\left(x_2+\frac{1}{4}\right)^2}, \quad (t,x_1,x_2) \in \overline{Q},
\end{equation}
where $r(x_1,x_2) \in [0,\infty)$ is
the radial coordinate, $\arg(x_1,x_2) \in (0,2\pi]$ is the angular
coordinate, and the cutoff function $\eta
\in C^2(\mathbb R^2)$ is given by
\begin{equation} \label{Num:Cutoff}
\eta(x_1,x_2) := \begin{cases}
1, & r(x_1,x_2) \leq 1/4, \\
\frac{27}{8} -\frac{135}{4} r(x_1,x_2) +180 r(x_1,x_2)^2 \\
\quad -440 r(x_1,x_2)^3+480 r(x_1,x_2)^4 \\
\quad -192 r(x_1,x_2)^5, & 1/4 < r(x_1,x_2) \leq 3/4, \\
0, & 3/4 < r(x_1,x_2).
\end{cases}
\end{equation}
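That $\eta$ is indeed $C^2$ follows since the polynomial part matches the constant states $1$ and $0$, together with the first and second derivatives, at $r = 1/4$ and $r = 3/4$; a quick verification sketch:

```python
# polynomial part of the cutoff (Num:Cutoff), coefficients of r^0, ..., r^5
c = [27 / 8, -135 / 4, 180.0, -440.0, 480.0, -192.0]

def horner(coeffs, r):
    """Evaluate a polynomial given in ascending order of degree."""
    s = 0.0
    for a in reversed(coeffs):
        s = s * r + a
    return s

def deriv(coeffs):
    """Coefficients of the derivative, again in ascending order."""
    return [j * a for j, a in enumerate(coeffs)][1:]

c1, c2 = deriv(c), deriv(deriv(c))
left = [horner(q, 0.25) for q in (c, c1, c2)]    # matches 1, 0, 0 up to rounding
right = [horner(q, 0.75) for q in (c, c1, c2)]   # matches 0, 0, 0 up to rounding
```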
Note that the solution $u_2$ is smooth in time but has a corner
singularity in space, which leads to reduced convergence rates
when the spatial meshes are
refined uniformly. Hence, we use the spatial graded meshes as in
Figure~\ref{Num:Fig:LShapeGraded} in order to recover maximal
convergence rates. We point out that, in numerical tests not reported here, we have verified that,
for a Poisson problem whose solution has the same spatial regularity
as $u_2$ in~\eqref{Num:u2}, the $L^2(\mathrm{D})$ error converges
at rate $N^{-2/3}\sim h_x^{4/3}$ on uniform meshes, and at rate $N^{-1}\sim h_x^{2}$
on the considered graded meshes.
For the temporal discretizations, we use $\mathbb P^1$-FEM approximation
on uniformly refined meshes, or $p$-FEM for a fixed number $m=4$ of elements.
Combining these temporal discretizations with the uniform or graded
spatial meshes of Figures~\ref{Num:Fig:LShapeUniform}
and~\ref{Num:Fig:LShapeGraded}, we investigate four possibilities:
\emph{i)} uniform mesh refinement both in space and in time,
\emph{ii)} uniform mesh refinement in space and $p$-FEM in time,
\emph{iii)} graded meshes in space and uniform mesh refinement in
time,
\emph{iv)} graded meshes in space and $p$-FEM in time.
For all four cases, the numerical results for the space-time
Galerkin approximation \eqref{eq:IBVPVarxtPerturbed} of the solution
$u_2$ are reported in Figure~\ref{Num:Fig:u2}.
For a given spatial discretization parameter $N$ and $m = 4$ temporal elements,
we choose the temporal polynomial degrees $\boldsymbol{p} = (p,p,p,p)$ with $p=\lfloor \frac{\ln N}{2} \rfloor$.
This choice of the discretization parameters balances the terms of the error bound \eqref{pxtErrBd}.
Hence, the total number of degrees of freedom $MN$ behaves like in
\eqref{ST:dofsBehaviorpFEM}. The numerical results in
Figure~\ref{Num:Fig:u2} confirm Remark~\ref{remk:pFEM}.
\begin{figure}
\begin{center}
\begin{tikzpicture}[font=\footnotesize,scale=0.9]
\begin{axis}[xlabel={degrees of freedom $MN$} ,ylabel={$[u_2 - u_2^{MN}]_{H^{1/2}_{0,}(J;L^2(\mathrm{D} ))}$} ,ymode=log,xmode=log,
legend entries={{uniform in $xt$}, {uniform in $x$, \\ $p$-FEM in $t$}, {graded in $x$, \\ uniform in $t$}, {graded in $x$, \\ $p$-FEM in $t$}, $(MN)^{-\frac 1 2} \sim h_{xt}^{\frac 3 2}$, $(MN)^{-\frac 2 3} \sim h_{xt}^2$},
legend pos = {south west},
legend style={cells={align=left}},
mark size=3pt]
\addplot[color=magenta,mark=star, mark options={scale=0.8},line width=1pt] plot file {Numerics/2du2/huniform/heatHilbertTriCylTikzH12.tex};
\addplot[color=blue,mark=x, mark options={scale=0.8},line width=1pt] plot file {Numerics/2du2/puniform/heatHilbertTriCylTikzH12.tex};
\addplot[color=orange,mark=square, mark options={scale=0.8},line width=1pt] plot file {Numerics/2du2/hgradedNVBmu06/heatHilbertTriCylTikzH12.tex};
\addplot[color=red,mark=o, mark options={scale=0.6},line width=1pt] plot file {Numerics/2du2/pgradedNVBmu06/heatHilbertTriCylTikzH12.tex};
\addplot[color=black] plot[samples=200,domain=100000:803210240] {0.35/(x^(1/2))};
\addplot[color=black, dashed] plot[samples=200,domain=2000000:707825152] {0.3/(x^(2/3))};
\end{axis}
\end{tikzpicture}
\caption{
Numerical results with the space-time Galerkin
approximation \eqref{eq:IBVPVarxtPerturbed} for the 2D
example with the singular-in-space solution
$u_2$ in \eqref{Num:u2},
for all combinations of
uniform mesh refinement or graded meshes in space
(with grading parameter $\beta=0.6$),
and $\mathbb P^1$-FEM with uniform mesh refinement or $p$-FEM in
time.
For the $p$-FEM in time, for a spatial discretization of parameter $N$,
we use a fixed mesh with $m = 4$ elements and polynomial
degrees $\boldsymbol{p} = (p,p,p,p)$ with $p=\lfloor \frac{\ln N}{2} \rfloor$.
}
\label{Num:Fig:u2}
\end{center}
\end{figure}
\subsubsection{Singular Solution}
We consider the singular solution
\begin{multline} \label{Num:u3}
u_3(t,x_1,x_2) = u_{\mathrm{reg}}(t,x_1,x_2) \\
+ t^{3/5} \mathrm{e}^{-t} \eta(x_1,x_2) \cdot r(x_1,x_2)^{2/3} \cdot \sin \left(\frac2 3 \left( \arg(x_1,x_2) - \frac{\pi}{2} \right) \right)
\end{multline}
for $(t,x_1,x_2) \in \overline{Q}$ with the smooth part
$u_{\mathrm{reg}}$ in \eqref{Num:u2reg}, the radial coordinate
$r(x_1,x_2) \in [0,\infty)$, the angular coordinate $\arg(x_1,x_2) \in
(0,2\pi]$, and the cutoff function $\eta \in C^2(\mathbb R^2)$ in
\eqref{Num:Cutoff}. This solution has a temporal singularity at $t=0$.
We observe that the corresponding
right-hand side $g_3$ does not fulfill the temporal analytic
regularity \eqref{eq:TimeReg}. On the other hand, solutions with a
singular behavior like that of $u_3$ are possible even for sources $g$ that
satisfy condition~\eqref{eq:TimeReg}.
As closed-form representations of such singular solutions
do not seem to be available, we perform our numerical tests
with the manufactured solution $u_3$ in \eqref{Num:u3}.
Furthermore, the solution $u_3$ has the same spatial singularity as
$u_2$ in \eqref{Num:u2}.
Thus, in order to get the full convergence rates, we use the graded meshes in Figure~\ref{Num:Fig:LShapeGraded}
for the spatial discretization, and a temporal $hp$-FEM.
We investigate four possibilities:
\emph{i)} uniform mesh refinement both in space and in time,
\emph{ii)} uniform mesh refinement in space and $hp$-FEM in time,
\emph{iii)} graded meshes in space and uniform mesh refinement in
time,
\emph{iv)} graded meshes in space and $hp$-FEM in time.
For all four cases, the numerical results for the space-time
Galerkin approximation \eqref{eq:IBVPVarxtPerturbed} of the solution
$u_3$ are reported in Figure~\ref{Num:Fig:u3}.
For a given spatial discretization parameter $N$, we choose the temporal mesh as in \eqref{def:tj} with
grading parameter $\sigma = 0.17$, slope parameter $\mu_{\mathrm{hp}} = 1.0$, numbers of elements $m_1 = \lfloor 2.2 \cdot \ln N \rfloor$, $m_2 = 1$, and temporal polynomial degrees $\boldsymbol{p} \in \mathbb N^m$ as in \eqref{def:pj}.
This choice of the discretization parameters balances the terms of the error bound~\eqref{eq:hpxtErrBd}.
Hence, the total number of degrees of freedom $MN$ behaves like in \eqref{ST:dofsBehavior}.
The numerical results in Figure~\ref{Num:Fig:u3} are in accordance with Theorem~\ref{thm:Conv},
when the temporal analytic regularity condition \eqref{eq:TimeReg}, and hence, the conditions on parameters $\mu_{\mathrm{hp}}$, $m_2$, i.e., \eqref{muhp}, \eqref{m2}, are ignored.
\begin{figure}
\begin{center}
\begin{tikzpicture}[font=\footnotesize,scale=0.9]
\begin{axis}[xlabel={degrees of freedom $MN$} ,ylabel={$[u_3 - u_3^{MN}]_{H^{1/2}_{0,}(J;L^2(\mathrm{D} ))}$} ,ymode=log,xmode=log,
legend entries={{uniform in $xt$}, {uniform in $x$, \\ $hp$-FEM in $t$}, {graded in $x$, \\
uniform in $t$}, {graded in $x$, \\ $hp$-FEM in $t$}, $(MN)^{-\frac 1 5} \sim h_{xt}^{0.6}$, $(MN)^{-\frac 2 3} \sim h_{xt}^2$},
legend pos = {south west},
legend style={cells={align=left}},
mark size=3pt]
\addplot[color=purple,mark=otimes, mark options={scale=0.8},line width=1pt] plot file {Numerics/2du3/huniform/heatHilbertTriCylTikzH12.tex};
\addplot[color=blue,mark=x, mark options={scale=0.8},line width=1pt] plot file {Numerics/2du3/hpuniform/heatHilbertTriCylTikzH12.tex};
\addplot[color=orange,mark=square, mark options={scale=0.8},line width=1pt] plot file {Numerics/2du3/hgradedNVBmu06/heatHilbertTriCylTikzH12.tex};
\addplot[color=red,mark=o, mark options={scale=0.6},line width=1pt] plot file {Numerics/2du3/hpgradedNVBmu06/heatHilbertTriCylTikzH12.tex};
\addplot[color=black] plot[samples=200,domain=300000:803210240] {0.07/(x^(1/5))};
\addplot[color=black, dashed] plot[samples=200,domain=10000000:1003210240] {35/(x^(2/3))};
\end{axis}
\end{tikzpicture}
\caption{
Numerical results with the space-time Galerkin
approximation \eqref{eq:IBVPVarxtPerturbed} for the 2D
example with the singular solution
$u_3$ in \eqref{Num:u3},
for all combinations of
uniform mesh refinement or graded meshes in space (with
grading parameter $\beta=0.6$),
and $\mathbb P^1$-FEM with uniform mesh refinement or $hp$-FEM in
time.
For the $hp$-FEM in time, for a spatial discretization of parameter $N$,
we use a geometric temporal mesh with subdivision
parameter $\sigma = 0.17$, slope parameter $\mu_{\mathrm{hp}} = 1.0$,
numbers of elements $m_1 = \lfloor 2.2 \cdot \ln N \rfloor$, $m_2 = 1$,
and temporal polynomial degrees $\boldsymbol{p} \in \mathbb N^m$ as in \eqref{def:pj}.
}
\label{Num:Fig:u3}
\end{center}
\end{figure}
\section{Introduction} \label{sec:intro} \label{sec:Intro}
Efficient numerical solution of parabolic evolution problems is required
in many applications. In addition to the plain numerical solution of
associated initial-boundary value problems, in recent years the efficient
numerical treatment of optimal control problems and of uncertain input data
has been considered.
Here, often a large number of cases needs to be treated, and the
(numerical) solution must be stored in a data-compressed format.
Rather than the (trivial) option of a~posteriori compressing a
numerical solution obtained by a standard scheme,
novel algorithms have emerged featuring some form of
\emph{space-time compressibility} in the numerical solution process.
That is, the numerical scheme produces directly, at runtime, a
numerical solution in a compressed format. As examples,
we mention only sparse-grid and wavelet-based methods (e.g., \cite{GrOeltz}),
and wavelet-based compressive schemes (e.g., \cite{AKChS,ScSt} and the references there).
Key to successful compressive space-time discretizations is an
appropriate \emph{variational formulation} of the evolution problem under consideration.
Accordingly, recent years have seen the development of a variety of,
in general nonequivalent, space-time variational formulations of parabolic initial-boundary value
problems.
Departing from the classical Bochner-space setting used to establish well-posedness,
the novel formulations treat the parabolic evolution problem
as an operator equation between appropriate function spaces,
the primary motivation being the accommodation of efficient, compressive space-time
numerical schemes.
We mention only~\cite{ScSt,Andr,Ost,LMN_2016,CDG_2017,OStMZ,MMST_2020,FueKar,GGRSt,SW_2021} and the references there. A comprehensive account of the numerical analysis of
\emph{fixed order time-discretizations} is provided in~\cite{ThomeeBk2nd} and the references there.
In the results given in that volume, the semigroup perspective is adopted,
and the mathematical setting is based on homogeneous Sobolev spaces $\dot{H}^s(\mathrm{D})$,
which impose implicit boundary compatibilities of regular data, see \cite[Chapter~19]{ThomeeBk2nd}.
The presently investigated time-discretization approach
is based on the space-time variational formulation in \cite{OStMZ}.
It is of Petrov--Galerkin type and employs a fractional order Sobolev
space in the temporal direction.
It has been proposed and developed in a series of
papers~\cite{OStMZ, SteinbachZankNoteHT, ZankExactRealizationHT, langer2020efficient, SteinbachMissoni2022}.
We briefly recapitulate it here, and refer to \cite{OStMZ}
for full development of details.
The compressive aspect is here realized
by the $hp$-time discretization for this formulation.
Throughout, we denote by $\mathrm{D}\subset \mathbb R^d$ a bounded
interval (if $d=1$), or a bounded
polygonal (if $d=2$) or polyhedral (if $d=3$) domain, with
a Lipschitz boundary $\Gamma = \partial\mathrm{D}$ consisting of a
finite number of plane faces, and by $T>0$ a finite time horizon.
In the space-time cylinder $Q=(0,T)\times \mathrm{D}$,
we consider the parabolic initial-boundary value problem
(IBVP for short) governed by the partial differential equation
\begin{equation}\label{eq:IBVP}
Bu := \partial_t u + A(\partial_x)u = g \quad \mbox{in}\quad (0,T)\times \mathrm{D}.
\end{equation}
Here, the forcing function $g:Q\to \mathbb R$ is assumed to belong to
$\mathcal A([0,T];L^2(\mathrm{D}))$, i.e., it is analytic as a map from $[0,T]$ into $L^2(\mathrm{D})$.
The spatial differential operator $A(\partial_x)$
is assumed linear, self-adjoint, in divergence form,
i.e.,
\[
A(\partial_x) = -\nabla_x \cdot(A(x) \nabla_x)
\]
with $A\in L^\infty(\mathrm{D};\mathbb R^{d\times d})$
being a symmetric,
positive definite matrix function of $x\in \mathrm{D}$
which does not depend on the temporal variable $t$.
The PDE \eqref{eq:IBVP} is completed by initial condition
\begin{equation}\label{eq:IC}
u|_{t=0} = u_0\;,
\end{equation}
and by mixed boundary conditions
\begin{equation}\label{eq:BC}
\gamma_0(u) = u_D \quad \mbox{on}\quad \Gamma_D\;,
\quad
\gamma_1(u) = u_N \quad \mbox{on}\quad \Gamma_N\;.
\end{equation}
Here, $\Gamma_D$ and $\Gamma_N$ denote a partitioning
of $\Gamma = \partial \mathrm{D}$ into a Dirichlet and a Neumann part,
$\gamma_0$
denotes the Dirichlet trace map,
and
$\gamma_1$ denotes the conormal trace operator, given (in strong form)
by $\gamma_1(v) = n_x \cdot (A(x) \nabla_x v)|_\Gamma$,
with $\Gamma = \partial\mathrm{D}$ denoting the
boundary of $\mathrm{D}$,
and $n_x \in L^\infty(\Gamma;\mathbb R^d)$ the exterior unit normal vector
field on~$\Gamma$.
\begin{remark}\label{rmk:HomBC}
In the rest of this paper, the results are formulated for $u_0=0$, $u_D = 0$,
and $u_N = 0$. Since the IBVP~\eqref{eq:IBVP}--\eqref{eq:BC} is linear,
superposition with a sufficiently regular function $U(t,x)$ in $Q$,
which satisfies~\eqref{eq:IC} and~\eqref{eq:BC}, implies
that the function $u-U$ solves~\eqref{eq:IBVP}--\eqref{eq:BC} with
$g-BU$ in place of $g$ in~\eqref{eq:IBVP}, and with homogeneous
initial and boundary data in~\eqref{eq:IC} and~\eqref{eq:BC}.
All regularity hypotheses which we will impose
below on the source term $g$ in \eqref{eq:IBVP}
(in particular, time-analyticity \eqref{eq:TimeReg})
entail via $U$ corresponding assumptions on $u_0$, $u_D$, and $u_N$.
\end{remark}
Exploiting the analytic semigroup property of the parabolic evolution operator,
we provide in Section \ref{sec:tReg}
sufficient conditions for the time analyticity of solutions when considered
as maps from the time interval $[0,T]$ into a
suitable Sobolev space $W\subset L^2(\mathrm{D})$
on the bounded spatial domain $\mathrm{D} \subset \mathbb R^d$.
\emph{Contributions of the present paper} are a weighted analytic,
temporal regularity analysis
based on the analytic semigroup theory for linear, parabolic evolution equations,
for source terms and coefficients of finite spatial regularity,
and the proof of exponential convergence of a temporal $hp$-discretization.
For a polygonal spatial domain $\mathrm{D}\subset \mathbb R^2$, and for data without
boundary compatibility, we establish
\emph{a~priori} convergence rate bounds for
fully discrete, space-time approximations
which are based on a fractional order space-time formulation,
on $hp$-time-stepping and on $h$-FEM with
corner-refined, regular graded triangulations in~$\mathrm{D}$.
The diffusion coefficient $A(x)$ is assumed to be independent of $t$, and
to belong to $W^{1,\infty}(\mathrm{D}; \mathbb R^{2\times 2})$.
We comment on the cases $d=1$ (when $\mathrm{D}$ is a bounded interval)
and $d=3$ (when $\mathrm{D}$ is a polyhedron).
The layout of this paper is as follows:
In Section~\ref{sec:FctSpcxtForm}, we introduce notation and function spaces
of tensor product and of Bochner type, which will be used in the following.
We also provide the space-time variational formulation in fractional order
spaces and the subspaces used in discretization.
Section~\ref{sec:Reg} addresses the solution regularity, with particular attention
to temporal analytic regularity in weighted, analytic Bochner spaces
of functions taking values in corner-weighted, Kondrat'ev type spaces on
the domain $\mathrm{D}$. Section~\ref{sec:Approx} then introduces the
Galerkin approximations in space and time that will be used,
and their approximation properties.
Section~\ref{sec:ConvRate} contains the main results on the convergence
rate of the discretization. Section~\ref{sec:NumExp} describes the
numerical realization of the nonlocal temporal bilinear form, and
reports numerical results which are in full agreement with the
convergence rate analysis.
We use standard notation:
$\mathbb N = \{1,2,\dots\}$
shall denote the natural numbers, and
$\mathbb N_0:=\mathbb N\cup \{0\}$.
For Banach spaces $X$ and $Y$, $\mathcal L(X,Y)$ denotes the space of bounded
linear operators from $X$ to $Y$, and $X':=\mathcal L(X,\mathbb R)$ denotes the dual of $X$.
For $q\in [1,\infty]$,
the usual notation $L^q(\mathrm{D})$ is adopted
for Lebesgue spaces of $q$-integrable functions $u:\mathrm{D}\to\mathbb R$ over some (bounded)
domain $\mathrm{D}$ in the Euclidean space $\mathbb R^d$.
For nonnegative integers~$k$, Hilbertian
Sobolev spaces (where $q=2$) on such domains $\mathrm{D}$ are denoted by
$H^k(\mathrm{D})$.
For $k=0$, as usual, $H^0(\mathrm{D}) = L^2(\mathrm{D})$.
Hilbertian Sobolev spaces of noninteger order $s = k + \theta$
for $k\in \mathbb N_0$ and $0<\theta<1$ are defined by interpolation
(real method, with fine index $2$).
\section{Function Spaces and Space-time Variational Formulation}
\label{sec:FctSpcxtForm}
We introduce several Bochner-type Sobolev spaces
in the space-time cylinder $Q:=J\times \mathrm{D}$, with the finite time
interval $J := (0,T)$ and the bounded spatial domain $\mathrm{D}\subset \mathbb R^d$.
\subsection{Function Spaces}
\label{sec:FctSpc}
Bochner-type function spaces defined on the space-time cylinder
$Q=J\times \mathrm{D}$
are spaces of strongly measurable maps $u \colon \, J \to H^l(\mathrm{D})$,
such that $u\in H^k(J;H^l(\mathrm{D}))$ for nonnegative integers $k, l$.
Due to the Hilbertian structure of $H^k$, these separable Hilbert spaces
admit tensor product structure, i.e.,
\[
H^k(J;H^l(\mathrm{D}))
\simeq
H^k(J)\otimes H^l(\mathrm{D})
\simeq
H^l(\mathrm{D};H^k(J))\;,
\]
where
$\simeq$ denotes (isometric) isomorphism and
$\otimes$ the Hilbertian tensor product.
For any integer $k\geq 1$, we denote by $H^k_0$ the closed subspace of $H^k$
of functions with homogeneous boundary values in the sense of closure of
$C^\infty_0$ with respect to the norm of $H^k$.
For instance, $H^1_0$
denotes the closed nullspace of the Dirichlet trace operator $\gamma_0$.
To consider mixed boundary value problems on $\mathrm{D}$, we partition $\Gamma = \partial \mathrm{D}$
into two disjoint pieces $\Gamma_D$ and $\Gamma_N$.
Assuming positive $(d-1)$-dimensional
measure of $\Gamma_D$ if $d=2,3$, or
that $\Gamma_D$ contains at least one endpoint of $\mathrm{D}$ if $d=1$,
we set
\[
H^1_{\Gamma_D}(\mathrm{D})
:=
\{ v \in H^1(\mathrm{D}) |\ \gamma_0(v)_{\mid_{\Gamma_D}} = 0 \}
\;.
\]
Evidently, for $\Gamma_D\subset\Gamma$,
$H^1_0(\mathrm{D}) = H^1_\Gamma(\mathrm{D}) \subset H^1_{\Gamma_D}(\mathrm{D}) \subset H^1(\mathrm{D})$.
In the following,
we introduce Sobolev spaces for functions defined on an interval $(a,b) \subset \mathbb R$ with $a<b$.
For simplicity,
we consider real-valued functions
$v \colon \, (a,b) \to \mathbb R$.
All results and proofs can be generalized straightforwardly
to $X$-valued functions $v \colon \, (a,b) \to X$ for a Hilbert space $X$, i.e., Bochner--Sobolev spaces.
We write
\[
\begin{split}
H^1_{0,}(a,b) &= H^1_{\{a\}}(a,b) = \{v \in H^1(a,b) |\ v(a) = 0 \}, \\
H^1_{,0}(a,b) &= H^1_{\{b\}}(a,b) = \{v \in H^1(a,b) |\ v(b) = 0 \}\;.
\end{split}
\]
In either of these two spaces, the seminorm
$| \circ |_{H^1(a,b)} = \| \partial_t \circ \|_{L^2(a,b)}$ is a norm.
Thus, $| \circ |_{H^1(a,b)}$ is considered as the norm in $H^1_{0,}(a,b)$ and $H^1_{,0}(a,b)$,
whereas the space $H^1(a,b)$ is endowed with the norm
$\|\circ\|_{H^1(a,b)} = ( \|\circ\|_{L^2(a,b)}^2 + \|\partial_t \circ\|_{L^2(a,b)}^2 )^{1/2}$.
Fractional order spaces shall be defined by interpolation,
via the real method
of interpolation (see, e.g., \cite[Chapter~1]{Triebel}).
We use the fine index $q=2$ to preserve the Hilbertian structure.
Of particular interest will be the space
\[
H^{1/2}_{0,}(a,b) := (H^1_{0,}(a,b),L^2(a,b))_{1/2,2}
\;,
\]
where $|\circ|_{H^1(a,b)}= \| \partial_t \circ \|_{L^2(a,b)}$ is the norm of the space $H^1_{0,}(a,b)$.
The Sobolev space $H^{1/2}_{0,}(a,b)$ is a Hilbert space endowed
with the interpolation norm (see \cite[Section~2.3]{OStMZ} for $(a,b)=(0,T)$)
defined by
\begin{equation} \label{eq:H120def}
\| v \|_{H^{1/2}_{0,}(a,b)}
:= \left(
\sum_{k=0}^\infty \frac{\pi (2k+1)}{2(b-a)} |v_k|^2
\right)^{1/2}, \quad v \in H^{1/2}_{0,}(a,b),
\end{equation}
where the Fourier coefficients $v_k$ are given by
$
v_k
=
\int_a^b v(s) V_k(s) \mathrm ds
\;.
$
Here, we use that any $z \in L^2(a,b)$ admits a representation as a Fourier series
\begin{equation} \label{eq:FourierRepresentation}
z(t) = \sum_{k=0}^\infty z_k V_k(t), \quad z_k = \int_a^b z(s) V_k(s) \mathrm ds, \; k \in \mathbb N_0,
\end{equation}
where $V_k$ denotes an
eigenfunction corresponding to
eigenvalue $\lambda_k = \frac{\pi^2 (2k+1)^2}{4(b-a)^2}$
of
\begin{equation} \label{time:eigenvalues}
-\partial_{tt} V_k(t) = \lambda_k V_k(t) \; \text{ for } t \in (a,b), \quad V_k(a)=\partial_t V_k(b)=0, \quad \norm{V_k}_{L^2(a,b)} = 1.
\end{equation}
In particular for $J=(0,T)=(a,b)$, we have
\[
\| v \|_{H^{1/2}_{0,}(J)}
= \left( \frac{\pi}{2T}
\sum_{k=0}^\infty (2k+1) |v_k|^2
\right)^{1/2}, \quad v \in H^{1/2}_{0,}(J),
\]
with the Fourier representation
\begin{equation} \label{FS:FourierSeries}
v(t) = \sum_{k=0}^\infty v_k \sqrt{\frac{2}{T}} \sin\left(\left(\frac{\pi}{2}+k\pi\right)\frac{t}{T}\right), \; v_k = \int_0^T v(s) \sqrt{\frac{2}{T}} \sin\left(\left(\frac{\pi}{2}+k\pi\right)\frac{s}{T}\right) \mathrm ds.
\end{equation}
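As an illustration of the norm \eqref{eq:H120def}: for $v(t) = t$ on $J = (0,T)$, integration by parts gives the coefficients $v_k = \sqrt{2/T}\,(-1)^k\,T^2\,(\pi/2 + k\pi)^{-2}$ in \eqref{FS:FourierSeries}, so that $\sum_k v_k^2 = \|v\|_{L^2(J)}^2 = T^3/3$ by the Parseval identity, and $\| v \|_{H^{1/2}_{0,}(J)}^2 = 14\,\zeta(3)\,T^2/\pi^3$. A short numerical check of both identities (not part of the method itself):

```python
import math

T, K = 1.0, 100000          # time horizon and series truncation index
parseval, norm_sq = 0.0, 0.0
for k in range(K):
    omega = (math.pi / 2 + k * math.pi) / T
    # Fourier coefficient of v(t) = t (closed form via integration by parts)
    vk = math.sqrt(2.0 / T) * (-1.0) ** k / omega ** 2
    parseval += vk * vk                                   # -> T^3 / 3
    norm_sq += math.pi / (2 * T) * (2 * k + 1) * vk * vk  # -> 14 zeta(3) T^2 / pi^3
```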
Analogous to $H^{1/2}_{0,}(J)$, the Hilbert space
$ H^{1/2}_{,0}(J) := (H^1_{,0}(J),L^2(J))_{1/2,2}$
is endowed with the Hilbertian norm (see \cite[Section~2.3]{OStMZ}) defined by
\[
\| w \|_{H^{1/2}_{,0}(J)}
:= \left( \frac{\pi}{2T}
\sum_{k=0}^\infty (2k+1) |w_k|^2
\right)^{1/2}, \quad w \in H^{1/2}_{,0}(J),
\]
where the Fourier coefficients are given by
$
w_k
=
\int_0^T w(s) \sqrt{\frac{2}{T}} \cos\left(\left(\frac{\pi}{2}+k\pi\right)\frac{s}{T}\right) \mathrm ds.
$
To prove exponential convergence of a temporal $hp$-discretization,
we need further investigations of the Sobolev space $H^{1/2}_{0,}(a,b)$
and its norm $\| \circ \|_{H^{1/2}_{0,}(a,b)}$.
For this purpose,
let the classical Sobolev space $H^{1/2}(a,b)$ be endowed with the Slobodetskii norm \cite[p.~74]{McLean2000}
\begin{equation} \label{Sob:Slobodetskii}
\normiii{v}_{H^{1/2}(a,b)} :=
\left(
\| v \|_{L^2(a,b)}^2
+
|v|_{H^{1/2}(a,b)}^2
\right)^{1/2}
\end{equation}
for $v \in H^{1/2}(a,b)$ with
\begin{equation} \label{SlobodetskiiSemi}
|v|_{H^{1/2}(a,b)} := \left( \int_a^b \int_a^b \frac{|v(s)-v(t)|^2}{|s-t|^2} \mathrm ds \mathrm dt \right)^{1/2}.
\end{equation}
With the Slobodetskii norm \eqref{Sob:Slobodetskii},
we endow $H^{1/2}_{0,}(a,b)$ with the norm
\begin{equation} \label{Sob:NormTriple}
\normiii{v}_{H^{1/2}_{0,}(a,b)} :=
\left(
\| v \|_{L^2(a,b)}^2
+
|v|_{H^{1/2}(a,b)}^2
+
\int_a^b \frac{|v(t)|^2}{t-a} \mathrm dt
\right)^{1/2}
\end{equation}
for $v \in H^{1/2}_{0,}(a,b)$.
We have the following
equivalence result
for the norms defined in~\eqref{eq:H120def} and~\eqref{Sob:NormTriple},
which is proven, e.g., in~\cite{McLean2000}
(see the proof in Appendix~\ref{sec:NormEquivalence} for the characterization of the equivalence constants).
\begin{lemma} \label{lem:NormEquivalence}
There are constants $C_{\mathrm{Int},1}$, $C_{\mathrm{Int},2}>0$, which are independent of $a,b$, such that
\begin{equation*}
C_{\mathrm{Int},1} \| v \|_{H^{1/2}_{0,}(a,b)}
\leq
\normiii{v}_{H^{1/2}_{0,}(a,b)}
\leq
C_{\mathrm{Int},2} \sqrt[4]{ 1 + \frac{4(b-a)^2}{\pi^2}} \| v \|_{H^{1/2}_{0,}(a,b)}
\end{equation*}
for all $v \in H^{1/2}_{0,}(a,b)$.
\end{lemma}
The next result is used for the proof of the temporal $hp$-error estimate in Section~\ref{sec:ConvRate}.
It localizes the $H^{1/2}(a,b)$ norm in a certain sense.
We report its proof in Appendix~\ref{sec:NormEquivalence}, and refer to
\cite{Faermann2000} for a more general localization result.
\begin{lemma} \label{lem:FractionalNormPointtau}
For a number $\tau \in (a,b)$, the estimate
\begin{equation*}
| v |_{H^{1/2}(a,b)}^2
\leq
| v |_{H^{1/2}(a,\tau)}^2 + 4 \int_a^{\tau} \frac{|v(t)|^2}{\tau-t} \mathrm dt
+ 4 \int_{\tau}^b \frac{|v(s)|^2}{s-\tau} \mathrm ds + | v |_{H^{1/2}(\tau,b)}^2
\end{equation*}
holds true for $v \in H^{1/2}(a,b)$, provided that all integrals on the right-hand side are finite.
\end{lemma}
\subsection{Hilbert Transformation ${\mathcal{H}}_T$}
\label{sec:HilbTr}
A key role in the space-time variational formulation of IBVP \eqref{eq:IBVP}
is taken by the nonlocal operator ${\mathcal{H}}_T\in \mathcal L(L^2(J),L^2(J))$,
which is defined by
\begin{equation}\label{eq:DefHT}
({\mathcal{H}}_T v)(t)
:=
\sum_{k=0}^\infty v_k \sqrt{\frac{2}{T}} \cos\left(\left(\frac{\pi}{2}+k\pi\right)\frac{t}{T}\right), \quad t \in J.
\end{equation}
Here, $v \in L^2(J)$ and its Fourier coefficients $v_k = \int_0^T v(s) \sqrt{\frac{2}{T}} \sin\left(\left(\frac{\pi}{2}+k\pi\right)\frac{s}{T}\right) \mathrm ds$ are represented as in \eqref{FS:FourierSeries}.
We collect some properties of ${\mathcal{H}}_T$.
\begin{proposition}[{\cite[Section~2.4]{OStMZ}, \cite{SteinbachZankNoteHT,ZankExactRealizationHT}}] \label{prop:HT}
The modified Hilbert transformation ${\mathcal{H}}_T$ defined in \eqref{eq:DefHT} is a linear isometry as mapping
\begin{equation}\label{eq:HT1}
{\mathcal{H}}_T \colon H^{\nu}_{0,} (J) \to H^{\nu}_{,0} (J) \quad \text{ for } \nu \in \{0, 1/2, 1 \}
\end{equation}
and is $H^{1/2}_{0,}(J)$-elliptic, satisfying
\begin{equation}\label{eq:HT2}
\forall v\in H^{1/2}_{0,}(J):\quad
\langle \partial_t v , {\mathcal{H}}_T v \rangle_{L^2(J)}
=
\| v \|^2_{H^{1/2}_{0,}(J)}.
\end{equation}
Additionally, ${\mathcal{H}}_T$ fulfills the following properties:
\begin{equation}\label{eq:HT3}
\forall v,w \in H^{1/2}_{0,}(J) : \quad
\langle \partial_t w , {\mathcal{H}}_T v \rangle_{L^2(J)}
=
\langle {\mathcal{H}}_T w,\partial_t v \rangle_{L^2(J)}
=
\langle w,v \rangle_{H^{1/2}_{0,}(J)}
\;,
\end{equation}
\begin{equation}\label{eq:HT4}
\forall w \in H^1_{0,}(J), \forall v \in L^2(J) : \quad
\langle \partial_t {\mathcal{H}}_T w,v \rangle_{L^2(J)}
=
-\langle {\mathcal{H}}_T^{-1} \partial_t w,v \rangle_{L^2(J)}
\;,
\end{equation}
\begin{equation}\label{eq:HT5}
\forall v, w\in L^2(J) :
\quad
\langle {\mathcal{H}}_T v,w \rangle_{L^2(J)}
=
\langle v, {\mathcal{H}}_T^{-1} w \rangle_{L^2(J)}
\;,
\end{equation}
\begin{equation}\label{eq:HT6}
\forall v \in L^2(J): \quad \langle v, \mathcal{H}_T v \rangle_{L^2(J)} \geq 0
\;,
\end{equation}
\begin{equation}\label{eq:HT7}
\forall \nu \in \{1/2, 1\}, \forall v \in H^{\nu}_{0,}(J), v \neq 0 : \quad \langle v, \mathcal{H}_T v \rangle_{L^2(J)} > 0
\;.
\end{equation}
\end{proposition}
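As a single-mode sanity check of \eqref{eq:HT1} and \eqref{eq:HT2} (our illustration, not part of the cited results), take $v$ to be one basis function, with the abbreviation $\lambda_k := \frac{\pi}{2}+k\pi$ and the Fourier-series norm representation of $H^{1/2}_{0,}(J)$ from \eqref{eq:H120def}:

```latex
% Single sine mode: v_j = \delta_{jk}, so {\mathcal{H}}_T swaps sine for cosine.
\begin{align*}
v(t) &:= \sqrt{\tfrac{2}{T}} \sin\left(\lambda_k \tfrac{t}{T}\right),
\qquad
({\mathcal{H}}_T v)(t) = \sqrt{\tfrac{2}{T}} \cos\left(\lambda_k \tfrac{t}{T}\right), \\
\langle \partial_t v , {\mathcal{H}}_T v \rangle_{L^2(J)}
&= \frac{\lambda_k}{T} \cdot \frac{2}{T} \int_0^T \cos^2\left(\lambda_k \tfrac{t}{T}\right) \mathrm dt
= \frac{\lambda_k}{T}
= \frac{\pi}{2T}\,(2k+1)
= \| v \|^2_{H^{1/2}_{0,}(J)} \;.
\end{align*}
```

Here $\int_0^T \cos^2(\lambda_k t/T)\,\mathrm dt = T/2$, since $\sin(2\lambda_k) = \sin(\pi+2k\pi) = 0$.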
\begin{remark}\label{rmk:Tinfty}
We remark that \eqref{eq:HT1}--\eqref{eq:HT7} are valid for all $T>0$.
In particular, these identities remain stable under passage to the
limit $T\to \infty$, with appropriate modifications of spaces.
We refer to \cite{DD20} for a space-time variational formulation
and a discussion of a Petrov--Galerkin discretization for the resulting
limiting problems.
\end{remark}
\subsection{Model Scalar Initial Value Problem}
\label{sec:IVP}
In $J=(0,T)$, for a given right-hand side $f$,
consider the scalar IVP to find a function $u:J\to \mathbb R$
such that
\[
\partial_t u = f \text{ in } J, \quad u(0) = 0 \;.
\]
A weak formulation relevant for treatment of IBVP \eqref{eq:IBVP} is
to find $u\in H^{1/2}_{0,}(J)$ such that
\begin{equation}\label{eq:IVPw}
\forall w \in H^{1/2}_{,0}(J) : \; \langle \partial_t u,w \rangle_{L^2(J)} = \langle f , w \rangle_{L^2(J)}
\end{equation}
for given $f\in [H^{1/2}_{,0}(J)]'$. Here, $\langle \circ, \circ \rangle_{L^2(J)}$ denotes the inner product in $L^2(J)$ and, by continuous extension, also the duality pairing between $[H^{1/2}_{,0}(J)]'$ and $H^{1/2}_{,0}(J)$.
The continuous bilinear form on the left side of \eqref{eq:IVPw} is inf-sup stable:
\begin{equation}\label{eq:infsup}
\inf_{0\ne u\in H^{1/2}_{0,}(J)} \sup_{0 \ne w \in H^{1/2}_{,0}(J)}
\frac{\langle \partial_t u , w \rangle_{L^2(J)} }{\| u \|_{H^{1/2}_{0,}(J)} \| w \|_{H^{1/2}_{,0}(J)}}
\geq 1 \;.
\end{equation}
This is shown in~\cite[Rem.~2.10]{OStMZ} by
observing that, for every $u\in H^{1/2}_{0,}(J)$,
\[
\| u \|_{H^{1/2}_{0,}(J)}
=
\frac{\langle \partial_t u , {\mathcal{H}}_T u\rangle_{L^2(J)}}{\| {\mathcal{H}}_T u \|_{H^{1/2}_{,0}(J)}}
\leq
\sup_{0\ne w \in H^{1/2}_{,0}(J)} \frac{\langle \partial_t u , w \rangle_{L^2(J)} }{ \| w \|_{H^{1/2}_{,0}(J)} }
\;.
\]
For every $f\in [H^{1/2}_{,0}(J)]'$,
IVP \eqref{eq:IVPw} then admits a unique solution $u\in H^{1/2}_{0,}(J)$.
For the derivation of a space-time variational formulation of \eqref{eq:IBVP},
it is useful to consider a \emph{parametric IVP}:
for a given parameter $\mu\geq 0$ (eventually in the spectrum of the spatial
operator of \eqref{eq:IBVP})
and for $f\in [H^{1/2}_{,0}(J)]'$,
find $u\in H^{1/2}_{0,}(J)$ such that
$\partial_t u + \mu u = f $ in $ [H^{1/2}_{,0}(J)]' $.
A \emph{Petrov--Galerkin variational form} of this problem
is to find $u\in H^{1/2}_{0,}(J)$ such that
\begin{equation}\label{eq:pIVPPG}
\forall w \in H^{1/2}_{,0}(J) : \, \langle \partial_t u, w\rangle_{L^2(J)} + \mu \langle u, w \rangle_{L^2(J)}
=
\langle f,w \rangle_{L^2(J)}
\;.
\end{equation}
A \emph{Bubnov--Galerkin variational form}
with equal trial and test function spaces
is to find $u\in H^{1/2}_{0,}(J)$ such that
\begin{equation}\label{eq:pIVPBG}
\forall v\in H^{1/2}_{0,}(J) : \, \langle \partial_t u , {\mathcal{H}}_T v \rangle_{L^2(J)}
+
\mu \langle u,{\mathcal{H}}_T v \rangle_{L^2(J)}
=
\langle f,{\mathcal{H}}_T v \rangle_{L^2(J)}
\;.
\end{equation}
Both formulations \eqref{eq:pIVPPG} and \eqref{eq:pIVPBG} admit unique solutions
due to \eqref{eq:infsup} and \eqref{eq:HT6}.
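A closed-form sanity check (our example, not taken from the cited references): for constant forcing $f \equiv 1$ and $\mu > 0$, the function

```latex
u(t) = \frac{1-\mathrm{e}^{-\mu t}}{\mu}
\quad\text{satisfies}\quad
\partial_t u + \mu u
= \mathrm{e}^{-\mu t} + \left(1-\mathrm{e}^{-\mu t}\right)
= 1, \qquad u(0) = 0 \;,
```

and $u \in H^1_{0,}(J) \subset H^{1/2}_{0,}(J)$, so it is the unique solution of both \eqref{eq:pIVPPG} and \eqref{eq:pIVPBG}.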
\subsection{Temporal $hp$-Discretization}
\label{sec:thp}
To discretize \eqref{eq:pIVPBG}, we use
some space $V^M_t \subset H^{1/2}_{0,}(J)$
of finite dimension $M = {\rm dim}(V^M_t)$.
In the $hp$-time discretization, we build $V^M_t$ as follows:
on a partition $\mathcal G = \{ I_j \}_{j=1}^m$ of $J$ into $m$ time intervals
$I_j:= (t_{j-1},t_j)$, where $0 =: t_0 < t_1 < \dots < t_m := T$,
we choose $V^M_t$ as a space of continuous, piecewise polynomials
of degrees $p_j \geq 1$, which we collect in the
\emph{degree vector}
${\boldsymbol{p}} := (p_j)_{j=1}^m\in \mathbb N^m$.
We define
\begin{equation}\label{eq:DefVMt}
V^M_t = S^{{\boldsymbol{p}},1}_{0,}(J;\mathcal G)
:=
\{ v \in H^1_{0,}(J): v_{\mid_{I_j}} \in \mathbb P^{p_j},\; I_j \in \mathcal G\}
\;.
\end{equation}
Here, continuity between adjacent time-intervals is required to ensure
$S^{{\boldsymbol{p}},1}_{0,}(J;\mathcal G) \subset H^{1/2}_{0,}(J)$.
Then
$M = {\rm dim}(S^{{\boldsymbol{p}},1}_{0,}(J;\mathcal G)) = \left( \sum_{j=1}^m (p_j+1)\right) - m = \sum_{j=1}^m p_j$.
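For instance (illustrative numbers of our choosing), a partition into $m=3$ intervals with degree vector ${\boldsymbol{p}} = (1,2,2)$ yields

```latex
\sum_{j=1}^{3} (p_j+1) = 2+3+3 = 8, \qquad M = 8 - 3 = 5 = p_1+p_2+p_3 \;,
```

where the $m=3$ subtracted degrees of freedom account for the constraint $v(0)=0$ and for continuity at the $m-1=2$ interior nodes $t_1,t_2$.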
We restrict~\eqref{eq:pIVPBG} to $V^M_t$ to obtain the temporal
$hp$-approximation:
find $u^M_t\in V^M_t$ such that
\begin{equation}\label{eq:pIVPBGhp}
\forall v\in V^M_t : \,
\langle \partial_t u^M_t , {\mathcal{H}}_T v \rangle_{L^2(J)}
+
\mu \langle u^M_t , {\mathcal{H}}_T v \rangle_{L^2(J)}
=
\langle f,{\mathcal{H}}_T v \rangle_{L^2(J)}
\;.
\end{equation}
Due to the inf-sup stability \eqref{eq:infsup} and $\mu\geq 0$,
the discretization \eqref{eq:pIVPBGhp} is well-posed with inf-sup
constant independent of $\mathcal G$ and of ${\boldsymbol{p}}$.
Its numerical implementation will require, similarly to \cite{OStMZ,DD20},
the efficient evaluation of ${\mathcal{H}}_T v$ for $v\in V^M_t$.
We shall address this in Section \ref{sec:NumHT}
below.
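In matrix form (a sketch assuming a basis $\{\varphi_i\}_{i=1}^M$ of $V^M_t$; the matrix notation is ours, not taken from the source), \eqref{eq:pIVPBGhp} is the linear system

```latex
\left( \mathbf{A}_t + \mu \mathbf{M}_t \right) \mathbf{u} = \mathbf{f},
\qquad
\mathbf{A}_t[i,j] := \langle \partial_t \varphi_j , {\mathcal{H}}_T \varphi_i \rangle_{L^2(J)},
\quad
\mathbf{M}_t[i,j] := \langle \varphi_j , {\mathcal{H}}_T \varphi_i \rangle_{L^2(J)},
```

with $\mathbf{f}[i] := \langle f , {\mathcal{H}}_T \varphi_i \rangle_{L^2(J)}$. By \eqref{eq:HT3}, $\mathbf{A}_t[i,j] = \langle \varphi_j , \varphi_i \rangle_{H^{1/2}_{0,}(J)}$ is symmetric, and \eqref{eq:HT2} and \eqref{eq:HT7} imply $\mathbf{x}^\top \mathbf{A}_t \mathbf{x} > 0$ and $\mathbf{x}^\top \mathbf{M}_t \mathbf{x} > 0$ for $\mathbf{0} \neq \mathbf{x} \in \mathbb R^M$.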
\subsection{Space-Time Variational Formulation}
\label{sec:xtVarForm}
We consider the source problem corresponding to the spatial part
of \eqref{eq:IBVP}.
Its variational form reads: given a source term $f\in L^2(\mathrm{D})$,
find
\begin{equation}\label{eq:xPbm}
w\in H^1_{\Gamma_D}(\mathrm{D}) \;\; \mbox{such that } \forall v\in H^1_{\Gamma_D}(\mathrm{D}) : \,
a(w,v) = \langle f,v \rangle_{L^2(\mathrm{D})} \;.
\end{equation}
Here, $a(w,v) = \int_\mathrm{D} A(x)\nabla_x w(x) \cdot \nabla_x v(x) \mathrm dx$.
We assume uniform positive definiteness of~$A$:
\begin{equation}\label{eq:Acoerc}
a_{\min}:=
\essinf_{x\in \mathrm{D}}
\inf_{0 \ne \xi\in \mathbb R^d} \frac{\xi^\top A(x) \xi}{\xi^\top\xi} > 0
\;.
\end{equation}
With assumption \eqref{eq:Acoerc}, we have
\[
\forall w\in H^1_{\Gamma_D}(\mathrm{D}):
\;\;
a(w,w) \geq a_{\min} \| \nabla_x w \|_{L^2(\mathrm{D})}^2 \geq a_{\min}c\| w \|_{H^1(\mathrm{D})}^2
\]
due to $|\Gamma_D|>0$ if $d=2,3$ or $\Gamma_D\ne\emptyset$ if $d=1$,
and the Poincar\'{e} inequality.
The spectral theorem and the symmetry $a(w,v) = a(v,w)$ for all $v,w\in H^1(\mathrm{D})$
ensure that the corresponding eigenvalue problem to find
\begin{equation}\label{eq:xtevp}
0 \ne \phi\in H^1_{\Gamma_D}(\mathrm{D}), \; \mu\in \mathbb R: \;\; \forall v\in H^1_{\Gamma_D}(\mathrm{D}) : \,
a(\phi,v) = \mu \langle \phi,v \rangle_{L^2(\mathrm{D})}
\end{equation}
admits a sequence of eigenpairs $\{(\mu_k,\phi_k)\}_{k\geq 1}$ enumerated
in increasing order of the real eigenvalues $\mu_k > 0$, repeated according to multiplicity,
with the eigenfunctions $\phi_k$ orthonormal in $L^2(\mathrm{D})$ and orthogonal in $H^1_{\Gamma_D}(\mathrm{D})$,
and with $\mu_k$ accumulating only at $\infty$.
In view of the forthcoming analysis,
\emph{in what follows, we endow $H^1_{\Gamma_D}(\mathrm{D})$ with the ``energy'' norm $a(\circ,\circ)^{1/2}$}.
We remark that, for $v\in H^1_{\Gamma_D}(\mathrm{D})$,
$a(v,v)=\sum_{i=1}^\infty \mu_i|v_i|^2$,
where $v_i=\langle v,\phi_i \rangle_{L^2(\mathrm{D})}$.
The space-time variational formulation of \eqref{eq:IBVP}
is based on the intersection space
\[
H^{1,1/2}_{\Gamma_D;0,}(Q)
:=
\left(L^2(J) \otimes H^1_{\Gamma_D}(\mathrm{D})\right) \cap \left(H^{1/2}_{0,}(J)\otimes L^2(\mathrm{D})\right)\;,
\]
which we equip with the corresponding sum norm.
The space $H^{1,1/2}_{\Gamma_D;,0}(Q)$ is defined analogously.
Proceeding as in \cite[Thm.~3.2]{OStMZ},
the \emph{initial-boundary value problem \eqref{eq:IBVP}--\eqref{eq:BC}
is set as a well-posed operator equation}.
\begin{theorem}\label{thm:Biso}
Consider \eqref{eq:IBVP}--\eqref{eq:BC} with homogeneous data $u_0 = 0$
in~\eqref{eq:IC} and $u_D, u_N = 0$ in~\eqref{eq:BC}. Assume
$|\Gamma_D|>0$ if $d=2,3$ or $\Gamma_D\ne\emptyset$ if $d=1$,
and that the coefficient
$A\in L^\infty(\mathrm{D};\mathbb R^{d\times d}_{\mathrm{sym}})$ satisfies~\eqref{eq:Acoerc}.
Then, the space-time variational formulation of \eqref{eq:IBVP} to
find $u\in H^{1,1/2}_{\Gamma_D;0,}(Q)$ such that
\begin{equation}\label{eq:IBVPVar}
\forall v\in H^{1,1/2}_{\Gamma_D;,0}(Q) : \, \langle \partial_t u , v \rangle_{L^2(Q)}
+
\langle A \nabla_x u, \nabla_x v \rangle_{L^2(Q)}
=
\langle g, v \rangle_{L^2(Q)}
\end{equation}
induces an isomorphism
\[
B:= \partial_t + A(\partial_x) \in \mathcal L_{\mathrm{iso}}(H^{1,1/2}_{\Gamma_D;0,}(Q),[H^{1,1/2}_{\Gamma_D;,0}(Q)]')\;.
\]
In particular,
for every $g\in [H^{1,1/2}_{\Gamma_D;,0}(Q)]'$, IBVP $Bu = g$
in \eqref{eq:IBVP}
admits a unique solution $u\in H^{1,1/2}_{\Gamma_D;0,}(Q)$.
\end{theorem}
We remark that $\langle \circ, \circ \rangle_{L^2(Q)}$ denotes the inner product in $L^2(Q)$ and, by continuous extension, also the duality pairing between $[H^{1,1/2}_{\Gamma_D;,0}(Q)]'$ and $H^{1,1/2}_{\Gamma_D;,0}(Q)$.
The \emph{space-time discretization} of \eqref{eq:IBVPVar}
is straightforward:
for any conforming,
spatial finite element subspace $V^N_x \subset H^1_{\Gamma_D}(\mathrm{D})$ of finite dimension $N$,
and for the temporal $hp$-subspace $V^M_t\subset H^{1/2}_{0,}(J)$ introduced in \eqref{eq:DefVMt},
we restrict \eqref{eq:IBVPVar} to the space-time approximation space
\begin{equation}\label{eq:xtApprSpc}
V^M_t\otimes V^N_x \subset H^{1,1/2}_{\Gamma_D;0,}(Q) \;.
\end{equation}
That is, we seek an approximate solution
$u^{MN}\in V^M_t\otimes V^N_x$ such that
\begin{equation}\label{eq:IBVPVarxt}
\langle \partial_t u^{MN} , v \rangle_{L^2(Q)}
+
\langle A\nabla_x u^{MN}, \nabla_x v \rangle_{L^2(Q)}
=
\langle g, v \rangle_{L^2(Q)}
\end{equation}
holds true for all $v\in ({\mathcal{H}}_T V^M_t) \otimes V^N_x \subset
H^{1,1/2}_{\Gamma_D;,0}(Q)$.
For these choices of test function spaces and for \emph{any} subspace $V^N_x\subset H^1_{\Gamma_D}(\mathrm{D})$ of finite dimension $N$,
as in~\cite[Sect.~3]{OStMZ},
existence and uniqueness of the discrete solution
$u^{MN}\in V^M_t\otimes V^N_x\subset H^{1,1/2}_{\Gamma_D;0,}(Q)$
of~\eqref{eq:IBVPVarxt}
follow from the \emph{continuous inf-sup condition}
\begin{equation*}
\inf_{0\ne u\in H^{1,1/2}_{\Gamma_D;0,}(Q)} \sup_{0 \ne w \in H^{1,1/2}_{\Gamma_D;,0}(Q)}
\frac{\langle \partial_t u , w \rangle_{L^2(Q)}
+\langle A\nabla_x u, \nabla_x w \rangle_{L^2(Q)}}{\| u \|_{H^{1,1/2}_{\Gamma_D;0,}(Q)} \| w \|_{H^{1,1/2}_{\Gamma_D;,0}(Q)}}
\geq \frac12 \;.
\end{equation*}
With $H^1_{\Gamma_D}(\mathrm{D})$ endowed with the $a(\circ,\circ)^{1/2}$
norm, the proof of this condition with constant independent of
$A$ follows \emph{verbatim} that of~\cite[Thm.~3.2, Cor.~3.3]{OStMZ}
for the case $A=\mathbb{I}$.
Evidently, the stability of the discrete problem
is a consequence of the choice of the
test function space~${\mathcal{H}}_T V^M_t$,
whose efficient numerical realization will be discussed in
Section~\ref{sec:NumExp}.
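Assuming in addition bases $\{\varphi_i\}_{i=1}^M$ of $V^M_t$ and $\{\psi_k\}_{k=1}^N$ of $V^N_x$ (a sketch in our notation, not taken from the source), the discrete system \eqref{eq:IBVPVarxt} has Kronecker-product structure:

```latex
\left( \mathbf{A}_t \otimes \mathbf{M}_x + \mathbf{M}_t \otimes \mathbf{A}_x \right) \mathbf{u} = \mathbf{g},
\qquad
\begin{aligned}
\mathbf{A}_t[i,j] &:= \langle \partial_t \varphi_j , {\mathcal{H}}_T \varphi_i \rangle_{L^2(J)}, &
\mathbf{M}_t[i,j] &:= \langle \varphi_j , {\mathcal{H}}_T \varphi_i \rangle_{L^2(J)}, \\
\mathbf{A}_x[k,l] &:= a(\psi_l,\psi_k), &
\mathbf{M}_x[k,l] &:= \langle \psi_l , \psi_k \rangle_{L^2(\mathrm{D})} \;.
\end{aligned}
```

The temporal matrices $\mathbf{A}_t$, $\mathbf{M}_t$ require the evaluation of ${\mathcal{H}}_T \varphi_i$, which again points to the need for an efficient numerical realization of ${\mathcal{H}}_T$.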
\section{Regularity}
\label{sec:Reg}
To obtain convergence rate bounds, we address the regularity of the solution
$u \in H^{1,1/2}_{\Gamma_D;0,}(Q)$. We consider separately the temporal and spatial
regularity.
Since the solution operator of the parabolic equation \eqref{eq:IBVP}
is an analytic semigroup, we expect time-analyticity of $u$
for time-analytic forcing $g$ in \eqref{eq:IBVP}.
This, in turn, is well-known to imply
exponential convergence of $hp$-time-stepping as shown, e.g.,
in \cite{SS00_340,DD20} and the references there.
We shall verify this in Sections \ref{sec:Approx} and \ref{sec:ConvRate} below.
\subsection{Time-Analyticity}
\label{sec:tReg}
We quantify the temporal analyticity of the solution $u:t\mapsto u(t)$ with $u(t) := u(t,\circ) \in L^2(\mathrm{D})$.
To this end, we recall the eigenvalue problem~\eqref{eq:xtevp}.
Setting $H:=L^2(\mathrm{D})$ and thus denoting by
$\langle \circ,\circ \rangle_H$ the $L^2(\mathrm{D})$ inner product,
the solution $u(t)$ of~\eqref{eq:IBVP} at time $t>0$
for $g=0$, $u_D = u_N = 0$, and for initial data $u_0\in H$
may be written as
\begin{equation}\label{eq:T(t)}
u(t) = E(t)u_0 := \sum_{i=1}^\infty \exp(-\mu_i t) \langle u_0,\phi_i \rangle_H \phi_i
\end{equation}
with convergence of the series in $H$.
The operators $\{ E(t) \}_{t\geq 0}$ satisfy the semigroup property
in~$H$, i.e.,
\[
\forall s,t > 0: \;\; E(s+t) = E(s)
E(t),\;\; E(0) = \operatorname{Id}\;.
\]
For $r\geq 0$, we define
the scale of spaces $X_r\subset H=X_0$
\begin{equation}\label{eq:Xs}
X_r := \{ v\in H: \| v \|_{X_r}^2 := \sum_{i=1}^\infty \mu_i^r |v_i|^2 <\infty \}\;.
\end{equation}
Here, $v_i = \langle v,\phi_i \rangle_H$ denotes the $i$-th coefficient in the eigenfunction expansion
of $v$ (recall from \eqref{eq:xtevp} that the sequence $\{\phi_i\}_{i\geq 1}$ was assumed to be
an orthonormal basis of $H = X_0$).
We remark that the norm $\| \circ \|_{X_1}$ is the energy-norm on the space
$V = H^1_{\Gamma_D}(\mathrm{D})$, due to
\[
\forall v\in V:\quad
\| v \|_{X_1}^2 = a(v,v) = \sum_{i=1}^\infty \mu_i |v_i|^2 \;.
\]
For $|\Gamma_D|>0$ if $d=2,3$ or $\Gamma_D\ne\emptyset$ if $d=1$,
$\| \circ \|_{X_1}$ is equivalent to the $H^1(\mathrm{D})$ norm on $V$
and the norm bounds
$\| v \|_{X_r} \leq c \| v \|_{X_{r'}}$ for $r'\geq r$
follow from \eqref{eq:Xs} and the assumed enumeration of the
real eigenvalues $\mu_i>0$ with $\mu_i\uparrow\infty$
as $i\uparrow\infty$:
\begin{equation} \label{ineq:XrEmbedding}
\| v \|_{X_r}^2
= \sum_{i=1}^\infty \mu_i^r |v_i|^2
\leq
\left( \sup_{m \in \mathbb N} \mu_m^{r-r'} \right) \sum_{i=1}^\infty \mu_i^{r'}|v_i|^2
\leq \mu_1^{r-r'} \| v \|_{X_{r'}}^2.
\end{equation}
For $\theta,r\geq 0$ and for any $t>0$, $E(t)$ in
\eqref{eq:T(t)} belongs to $\mathcal L(X_\theta,X_r)$. In fact, for any
$t>0$ and $v\in
X_\theta$, we have
\begin{equation}\label{eq:newEt}
\| E(t) v \|_{X_r}^2
=
\displaystyle
\sum_{i=1}^\infty \mu_i^r \exp(-2\mu_i t) |v_i|^2
=
\sum_{i=1}^\infty \mu_i^{r-\theta} \exp(-2\mu_i t) \mu_i^\theta |v_i|^2.
\end{equation}
For $\theta \geq r\geq 0$, identity~\eqref{eq:newEt} implies
\[
\| E(t) v \|_{X_r}^2
\leq
\mu_1^{-(\theta-r)} \exp(-2\mu_1 t)\sum_{i=1}^\infty \mu_i^\theta |v_i|^2
=
\mu_1^{-(\theta-r)} \exp(-2\mu_1 t) \| v \|_{X_\theta}^2
\]
for all $v\in X_\theta$, i.e., $E(t) \in \mathcal L(X_\theta,X_r)$ for any $t>0$ with
\begin{equation}\label{eq:gstThetaGreaterR}
\forall \theta\geq r\geq 0, \; \forall t>0 :
\quad \| E(t) \|_{\mathcal L(X_\theta,X_r)}^2 \leq \mu_1^{-(\theta-r)} \exp(-2\mu_1 t) \;.
\end{equation}
For $r\geq \theta \geq 0$, for any $t>0$ and $v\in
X_\theta$, identity~\eqref{eq:newEt} implies
\begin{equation}\label{eq:gst}
\| E(t) v \|_{X_r}^2
\leq
\displaystyle
\sup_{i \in \mathbb N} \{ \mu_i^{r-\theta} \exp(-2\mu_i t) \} \sum_{i=1}^\infty \mu_i^\theta |v_i|^2
=:
\displaystyle
G_{r-\theta}(t) \| v \|_{X_\theta}^2 \;.
\end{equation}
To provide an upper bound for $G_{r-\theta}(t)$,
we observe that, for fixed $t, \sigma>0$, the function
$(0,\infty) \ni \mu \mapsto \mu^{2\sigma}\exp(-2\mu t)$
attains its maximum at $\mu_*:= \sigma / t$, whence
\begin{equation}\label{eq:bstBound}
\forall t>0: \quad
G_{2\sigma}(t) \leq G_{\max}(\sigma,t) := [\mu_*^{\sigma}\exp(-\mu_* t)]^2
=
\left( \frac{\sigma}{t \mathrm{e}} \right)^{2\sigma}
\;.
\end{equation}
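The maximizer $\mu_*$ and the value of the maximum follow from a one-line computation with the logarithmic derivative:

```latex
\frac{\mathrm d}{\mathrm d\mu}\left( 2\sigma \ln \mu - 2\mu t \right)
= \frac{2\sigma}{\mu} - 2t = 0
\;\Longleftrightarrow\;
\mu = \mu_* = \frac{\sigma}{t},
\qquad
\mu_*^{2\sigma} \exp(-2\mu_* t)
= \left(\frac{\sigma}{t}\right)^{2\sigma} \mathrm{e}^{-2\sigma}
= \left(\frac{\sigma}{t\mathrm{e}}\right)^{2\sigma}.
```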
Inserting~\eqref{eq:bstBound} with
$\sigma = (r-\theta)/2 > 0$ into~\eqref{eq:gst},
we arrive at
\[
\forall r\geq \theta \geq 0, \; \forall t>0 :
\quad
\| E(t)\|_{\mathcal L(X_\theta,X_r)}^2\leq
\left( \frac{r-\theta}{2t \mathrm{e}} \right)^{r-\theta}\;.
\]
The exponential decay of the Fourier coefficients for $t>0$
implied by the exponential weighting $\exp(-\mu_it)$
entails time-analyticity of the solution $t\mapsto u(t)$ for $t>0$.
To prove exponential convergence rates of $hp$-approximation
in $J = (0,T)$, we quantify the time regularity of the solution $u$
of \eqref{eq:IBVP} for $u_0=0$ and $u_D=u_N=0$ with the Duhamel representation
(see, e.g.,~\cite{Pazy})
\begin{equation}\label{eq:Duhamel}
u(t) = \int_0^t E(t-s)g(s)\mathrm ds, \quad 0<t\leq T\;.
\end{equation}
We work under the following
\emph{time-analyticity assumption} on the forcing $g$ in \eqref{eq:IBVP}:
There exist constants $C>0$ and $\delta \ge 1$
such that, for some $\varepsilon \in (0,1)$,
we have
\begin{equation}\label{eq:TimeReg}
\forall \;l\in\mathbb N_0:\;\;
\sup_{0\leq t \leq T}
\| g^{(l)}(t) \|_{X_\varepsilon} \leq C \delta^l\Gamma(l+1),
\end{equation}
where $\Gamma(\circ)$ denotes the gamma function fulfilling $\Gamma(l) = (l-1)!$ for all $l \in \mathbb N$.
\emph{Formally differentiating} \eqref{eq:Duhamel}
$l$ times with respect to $t$,
upon writing it equivalently as $u(t) = \int_0^t E(s)g(t-s)\mathrm ds$,
gives
\begin{equation}\label{eq:Tlg}
\frac{\mathrm d^l}{\mathrm dt^l}u(t)
=
\sum_{i=0}^{l-1} E^{(i)}(t) g^{(l-1-i)}(0)
+
\int_0^t E(s) g^{(l)}(t-s) \mathrm ds\;,
\quad
l\in \mathbb N, t>0\;.
\end{equation}
The right limits at $t=0$ of the time-derivatives of the forcing $g$ in \eqref{eq:IBVP}
contribute to the time-regularity.
We estimate the norm of the operators $E^{(l)}(t)$ in $\mathcal L(X_\theta, X_r)$.
\begin{lemma}\label{lem:Tt}
For $r\geq \theta \geq 0$, we have
\begin{equation}\label{eq:T'TBd}
\forall l\in \mathbb N_0, \;\forall t>0 :\;\;
\| E^{(l)}(t) \|_{\mathcal L(X_\theta,X_r)}^2
\leq
\frac{1}{\sqrt{2\pi}}\left(\frac12\right)^{2l+r-\theta}
\Gamma(2l+1+r-\theta)\, t^{-2l-(r-\theta)} \;.
\end{equation}
\end{lemma}
\begin{proof}
For $v\in H=X_0$, with $v_i = \langle v,\phi_i \rangle_H$,
the time-derivative of order $l\in \mathbb N$ applied to $v(t) = E(t)v$
represented as in \eqref{eq:T(t)} yields (with formal, term-by-term
differentiation)
\[
\frac{\mathrm d^l}{\mathrm dt^l}v(t)
=
\sum_{i=1}^\infty (-\mu_i)^l \exp(-\mu_i t) v_i \phi_i
\]
with convergence in $H$ for arbitrary, fixed $t>0$.
Therefore
\[
\forall t>0:\;\;
\| v^{(l)}(t) \|_{X_r}^2
=
\sum_{i=1}^\infty \mu_i^{2l+r-\theta} \exp(-2\mu_i t) \mu_i^\theta |v_i|^2
\;.
\]
It follows from \eqref{eq:bstBound} that
for every $t>0$
\[
\| v^{(l)}(t) \|_{X_r}^2
\leq
G_{2(l+[r-\theta]/2)}(t) \| v \|_{X_\theta}^2
\leq
G_{\max}(l+[r-\theta]/2,t) \| v \|_{X_\theta}^2
\;.
\]
Therefore, for every $v\in X_\theta$ and every $r\geq \theta \geq 0$,
we have
\[
\begin{split}
\forall l\in \mathbb N, t>0:\quad
\| v^{(l)}(t) \|_{X_r}^2
&\leq
\left(\frac{2l+ r-\theta}{2t\mathrm{e}}\right)^{2l+r-\theta} \| v
\|_{X_\theta}^2\\
&=\left(\frac12\right)^{2l+r-\theta}\left(\frac{2l+r-\theta}{\mathrm{e}}\right)^{2l+r-\theta}
t^{-2l-(r-\theta)}\| v\|_{X_\theta}^2
\;.
\end{split}
\]
For $x\in\mathbb R_+$, Stirling's formula \eqref{Stirling} states
$\sqrt{2\pi}\,x^{x-1/2}\mathrm{e}^{-x}\le \Gamma(x)$,
which implies
$(x/\mathrm{e})^x\le \frac{1}{\sqrt{2\pi}} x^{-1/2}\Gamma(x+1)$. With $x=2l+r-\theta$,
this gives the claimed bound, as $(2l+r-\theta)^{-1/2}\le 1$.
\end{proof}
\begin{lemma}\label{lem:DtBound}
Assume \eqref{eq:TimeReg} with some $\varepsilon \in (0,1)$ and some $\delta \geq 1$.
For $r \in [0,2]$, there exists a constant $C>0$ (independent of $\delta,$ $l$, $t$) such that,
for every $l\in \mathbb N_0$ and $t>0$,
we have
\begin{equation}\label{eq:dtlu}
\| u^{(l)}(t) \|_{X_r}
\leq
C \delta^l\Gamma(l+1)
\left( t^{(2-r+\min\{r,\varepsilon\})/2} + \sum_{i=0}^{l-1} t^{-i-r/2 + \varepsilon/2} \right)
\;.
\end{equation}
For $l=0$, this bound is valid without the sum.
\end{lemma}
\begin{proof}
From \eqref{eq:Tlg}, we estimate for every $0<t\leq T$
\begin{multline*}
\| u^{(l)}(t) \|_{X_r}
\leq
\sum_{i=0}^{l-1} \|E^{(i)}(t)\|_{\mathcal L(X_\varepsilon,X_r)} \| g^{(l-i-1)}(0) \|_{X_\varepsilon} \\
+
\int_0^t \|E(s)\|_{\mathcal L(X_\varepsilon ,X_r)}
\|g^{(l)}(t-s) \|_{X_\varepsilon}\mathrm{d}s.
\end{multline*}
To estimate the sum, we use \eqref{eq:T'TBd} with $\theta = \varepsilon$ and
assumption \eqref{eq:TimeReg} and obtain
\begin{align*}
\sum_{i=0}^{l-1} &\|E^{(i)}(t)\|_{\mathcal L(X_\varepsilon,X_r)} \| g^{(l-i-1)}(0) \|_{X_\varepsilon} \\
&\leq \sum_{i=0}^{l-1} C \left(\frac12\right)^{i+r/2-\varepsilon/2} \Gamma(2i+1+r-\varepsilon)^{1/2} t^{-i-r/2+\varepsilon/2}\delta^{l-1-i}\Gamma(l-i) \\
&\leq C {\delta}^{l-1} \sum_{i=0}^{l-1}\left(\frac12\right)^{i+r/2-\varepsilon/2} \Gamma(2i+1+r-\varepsilon)^{1/2}\Gamma(l-i)\, t^{-i-r/2+\varepsilon/2} \\
&\leq C {\delta}^{l-1} \sum_{i=0}^{l-1} \Gamma(i+1+r/2-\varepsilon/2)\Gamma(l-i)\, t^{-i-r/2+\varepsilon/2} \\
&\leq C {\delta}^{l-1} \Gamma(l+1) \sum_{i=0}^{l-1} t^{-i-r/2+\varepsilon/2} \;,
\end{align*}
where in the third inequality we have used the duplication formula
$\Gamma(z)\Gamma(z+1/2)=2^{1-2z}\sqrt{\pi}\Gamma(2z)$ with
$z=(2i+1+r-\varepsilon)/2$, and the fourth inequality follows from
$\max_{0\le i\le l-1}\Gamma(i+1+r/2-\varepsilon/2)\Gamma(l-i)\le
\max_{0\le i\le l-1}\Gamma(i+2)\Gamma(l-i)\le \Gamma(l+1)$.
\\
To estimate the integral term for $\varepsilon \leq r\leq 2$,
we use assumption \eqref{eq:TimeReg} with $\varepsilon \in (0,1)$
and \eqref{eq:T'TBd} with $l=0$, $\theta = \varepsilon > 0$ and
$\varepsilon \leq r\leq 2$, and obtain, for every $l\in \mathbb N_0$,
\begin{align*}
\int_0^t \| E(s) \|_{\mathcal L(X_\varepsilon, X_r)} \| g^{(l)}(t-s) \|_{X_\varepsilon} \mathrm{d}s
&\leq
C \delta^l \Gamma(l+1) \int_0^t s^{-(r-\varepsilon)/2} \mathrm{d}s \\
&=
C C_\varepsilon \delta^l \Gamma(l+1) t^{(2-r+\varepsilon)/2}
\end{align*}
with $C_\varepsilon=2/(2-r+\varepsilon)$.
It remains to estimate the integral term for $0 \leq r \leq
\varepsilon$. In this case, for every $l \in \mathbb N_0$, we have
\begin{align*}
\int_0^t \| E(s) \|_{\mathcal L(X_\varepsilon, X_r)} \| g^{(l)}(t-s) \|_{X_\varepsilon} \mathrm{d}s
&\leq
C \delta^l \Gamma(l+1) \int_0^t \mu_1^{-(\varepsilon-r)/2} \exp(-\mu_1 s) \mathrm{d}s \\
&\leq
C \widetilde{C}_\varepsilon \delta^l \Gamma(l+1) t\;,
\end{align*}
where the bound \eqref{eq:gstThetaGreaterR} and $\exp(-\mu_1 s) \leq 1$ are used
($\widetilde{C}_\varepsilon=\mu_1^{-(\varepsilon-r)/2}$). This completes the proof of the assertion.
\end{proof}
\begin{remark}\label{rmk:s<2}
For $0\leq r < 2$, the preceding result is valid under hypothesis
\eqref{eq:TimeReg} with $\varepsilon = 0$, as used, e.g., in \cite{SS00_340},
but with $C(r) \uparrow \infty$ as $r \uparrow 2$.
\end{remark}
\begin{lemma} \label{lem:boundab}
Assume \eqref{eq:TimeReg} with some $\varepsilon \in (0,1)$ and some $\delta \geq 1$.
Let $u$ be the solution of \eqref{eq:IBVPVar}. For $T\geq b > a \geq 1,$ the estimate
\begin{equation} \label{eq:L2abX2}
\forall l \in \mathbb N_0 : \quad \left(\int_a^b \| u^{(l)}(t) \|^2_{X_2} \mathrm dt\right)^{1/2} \leq \delta^l \Gamma(l+2) C(\varepsilon,a,b)
\end{equation}
holds true with a constant $C(\varepsilon,a,b)>0$ independent of $l$ and $\delta$.
Furthermore, for $J=(0,T)$, we have $u \in H^1_{0,}(J;H)$ with
\begin{equation} \label{eq:SobH1}
\left(\int_0^T \| u(t) \|^2_H \mathrm dt\right)^{1/2}
\leq
C T^{3/2}, \; \left(\int_0^T \| u'(t) \|^2_H \mathrm dt\right)^{1/2}
\leq
C \delta \, (T^{1+\varepsilon}+T^3)^{1/2}
\end{equation}
and $u \in L^2(J;X_2)$ with
\begin{equation} \label{eq:L2X2}
\| u\|_{L^2(J;X_2)} \leq C T^{\varepsilon/2+1/2},
\end{equation}
where the constant $C>0$ is independent of $\varepsilon$, $\delta$ and $T.$
\end{lemma}
\begin{proof}
The bound \eqref{eq:L2abX2} follows from \eqref{eq:dtlu}. For the estimates \eqref{eq:SobH1} and \eqref{eq:L2X2}, we use~\eqref{eq:dtlu} for $r=0$ with $l=0$ or $l=1$, and $r=2$ with $l=0$, respectively.
\end{proof}
\begin{proposition}\label{Prop:tExpBd}
Assume \eqref{eq:TimeReg} with some $\varepsilon \in (0,1)$ and some $\delta \geq 1$.
For $r \in [0,2]$, there exists a constant $C>0$ (independent of $l$, $\delta$, $a$, $b$, $t$, $q$) such that
the solution $u$ of \eqref{eq:IBVPVar}
satisfies
\begin{equation}\label{eq:dltuBd}
\forall l \in \mathbb N_0, \, \forall t \in (0, \min\{1,T\}] : \, \| u^{(l)}(t) \|_{X_r}
\leq
C \delta^l \Gamma(l+2) t^{-l+1-r/2 + \varepsilon/2} \;,
\end{equation}
%
and, for $0<a<b \leq \min\{1,T\}$,
%
\begin{equation}\label{eq:IntEst}
\forall l \in \mathbb N, l \geq 2 : \quad
\left( \int_a^b \| u^{(l)}(t) \|^2_{X_r} \mathrm dt \right)^{1/2}
\leq
C \delta^{l} \Gamma(l+2) a^{-l+3/2-r/2 + \varepsilon/2}\;,
\end{equation}
and, for arbitrary $q\geq 2$,
\begin{equation}\label{eq:SobEst}
\| u \|_{H^q((a,b);X_r)} \leq C \delta^{q} \Gamma(q+3) a^{-q+3/2-r/2 + \varepsilon/2} \;.
\end{equation}
\end{proposition}
\begin{proof}
The bound \eqref{eq:dltuBd} follows from \eqref{eq:dtlu}.
Estimate \eqref{eq:IntEst} is obtained by integrating the pointwise bound~\eqref{eq:dltuBd}.
The Sobolev bound \eqref{eq:SobEst} follows by interpolation.
\end{proof}
For the proof of the exponential convergence rate of the
space-time discretization proposed in this work,
we need the following regularity result, which is proven in Appendix~\ref{sec:ProofLemH12X2Reg}.
\begin{lemma}\label{lem:H12X2Reg}
Assume \eqref{eq:TimeReg} with some $\varepsilon \in (0,1)$ and some $\delta \geq 1$ for $l=0,1$.
Then, for $b \in (0,T]$, the solution $u$ of~\eqref{eq:IBVPVar} belongs to
$H^{1/2}_{0,}((0,b);X_2)$ and the estimate
\[
\normiii{u}_{H^{1/2}_{0,}((0,b);X_2)} \leq \sqrt[4]{\frac{2}{\pi}}\, \frac{1}{\varepsilon}\, b^{\varepsilon/2}
\left( \frac{b}{1+\varepsilon} + \frac{3}{\varepsilon} +
\frac{4 b^2}{(\varepsilon +1)(\varepsilon + 2)}
\right)^{1/2} C_g
\]
holds true,
with
\begin{equation}\label{eq:boundg}
C_g:=\| g \|_{W^{1,\infty}((0,b);X_\varepsilon)}=\max\left\{\sup_{0 \leq t \leq b} \norm{g(t)}_{X_\varepsilon},
\sup_{0 \leq t \leq b} \norm{g'(t)}_{X_\varepsilon}
\right\}.
\end{equation}
\end{lemma}
\begin{remark}
The assertion of Lemma~\ref{lem:H12X2Reg} can be generalized. For this purpose, define the interpolation space $H^{\theta}_{0,}(a,b) := (H^1_{0,}(a,b),L^2(a,b))_{\theta,2}$ for $\theta \in [1/2,1]$ with the usual Slobodetskii norm $\normiii{\circ}_{H^{\theta}(a,b)}$ as in \cite[p.~74]{McLean2000}, where $a<b$, $a,b \in \mathbb R.$ Then, under the assumption of Lemma~\ref{lem:H12X2Reg}, we have
\begin{equation*}
\normiii{u}_{H^{\theta}_{0,}((0,b);X_2)} \leq C(b,\varepsilon,\theta, g)
\end{equation*}
for $\theta \in [1/2,1/2+\varepsilon) \cap [1/2,1]$ with a constant $C(b,\varepsilon,\theta, g)>0$.
\end{remark}
\subsection{Spatial Regularity}
\label{sec:SpReg}
We elaborate here on the regularity of the solution with respect to the
spatial variable $x\in \mathrm{D}$. For \eqref{eq:IBVP}, this regularity is,
of course, dependent on the temporal variable $t$, and the spaces $X_r$
defined in \eqref{eq:Xs} via eigensystems, which are intrinsic to the spatial
operator \eqref{eq:xPbm} with \eqref{eq:Acoerc}, play a prominent role.
In order to leverage spatial approximation results,
we relate these spaces to
standard ($d=1$) or corner-weighted ($d\ge 2$) Sobolev spaces.
As we shall consider in detail only $\mathbb P^1$-Lagrangian FEM approximation
in $\mathrm{D}$, for the ensuing convergence rate analysis in Section
\ref{sec:Approx} we are mainly interested in the spaces
$X_r$ for $r=0,1,2$ as defined in \eqref{eq:Xs}.
The cases $0\leq r \leq 1$ coincide with standard Sobolev spaces
endowed with equivalent norms.
\begin{proposition}\label{prop:H0H1}
For space dimension $d\geq 2$, assume that
$\mathrm{D}\subset \mathbb R^d$ is a bounded Lipschitz domain.
Assume further that $A\in L^\infty(\mathrm{D};\mathbb R^{d\times d}_{\mathrm{sym}})$
is uniformly positive definite
in the sense that \eqref{eq:Acoerc} is satisfied.
Then,
$X_0 = L^2(\mathrm{D})$ and $X_1 \simeq
H^1_{\Gamma_D}(\mathrm{D})$
and for $0<r<1$,
$X_r \simeq (L^2(\mathrm{D}), H^1_{\Gamma_D}(\mathrm{D}))_{r,2}$.
\end{proposition}
Consider next $1<r\leq 2$.
Once we characterize $X_2$, for $1<r<2$,
$X_r$ is characterized by real interpolation.
To characterize $X_2$,
we consider the source diffusion problem \eqref{eq:xPbm},
with assumption \eqref{eq:Acoerc} in place.
In addition, we assume
\begin{equation}\label{ass:ALip}
f\in L^2(\mathrm{D}),\;\; A\in W^{1,\infty}(\mathrm{D};\mathbb R^{d\times d}_{\mathrm{sym}}).
\end{equation}
Then,
eigenfunction expansions of $f\in L^2(\mathrm{D})$
imply that the unique solution $u\in X_1$ of \eqref{eq:xPbm}
belongs to $X_2$.
Furthermore, the solution operator $(A(\partial_x))^{-1}$ is a bijection from $L^2(\mathrm{D})$ onto $X_2$,
since from~\eqref{eq:Xs} and \eqref{ass:ALip} it follows that
\begin{equation}\label{eq:X2L2}
\| u \|_{X_2}^2
= \sum_{k=1}^\infty \mu_k^2 |u_k|^2
= \| A(\partial_x) u \|_H^2
= \sum_{k=1}^\infty |f_k|^2
= \| f \|^2_{L^2(\mathrm{D})}.
\end{equation}
It remains to relate the space
$X_2$, which is defined in terms of the spatial operator $A(\partial_x)$,
to an intrinsic function space in $\mathrm{D}$.
Due to \eqref{eq:X2L2}, $X_2 = (A(\partial_x))^{-1} L^2(\mathrm{D})$.
To characterize elements in $X_2$,
we use the elliptic regularity of the BVP \eqref{eq:xPbm}
with time-independent data $f \in L^2(\mathrm{D})$ in
standard (if $d=1$)
or
corner-weighted (if $d\ge 2$)
Sobolev spaces in $\mathrm{D}\subset \mathbb R^d$.
\subsubsection{{\bf Case $d=1$}}
The spatial domain $\mathrm{D}$ is an open, bounded and connected interval,
and, by \eqref{ass:ALip}, the diffusion coefficient is a scalar
$a \in W^{1,\infty}(\mathrm{D})$ such that \eqref{eq:Acoerc} is satisfied.
Standard elliptic regularity results imply that
there exists a constant $c>0$ such that, for every
$f\in L^2(\mathrm{D})$, the solution $u=A(\partial_x)^{-1}f$ belongs to
$H^2(\mathrm{D})$ and satisfies
$\| u \|_{H^2(\mathrm{D})} \leq c \| f \|_{L^2(\mathrm{D})}$.
This, combined with~\eqref{eq:X2L2},
gives that $X_2\subset H^2(\mathrm{D})$ and
\begin{equation}\label{eq:X2d=1}
\forall v\in X_2 : \quad
\| v \|_{H^2(\mathrm{D})} \leq c \| v \|_{X_2}\;.
\end{equation}
\begin{remark}\label{rem:1Dtransmission}
For $d=1$, a continuous embedding of $X_2$ into a nonintrinsic
function space can be easily established also for transmission problems.
Assume $\mathrm{D}$ to be partitioned into $n_{\rm sub}$ disjoint, open and connected
subintervals ${\mathcal D} = \{\mathrm{D}_i\}_{i=1}^{n_{\rm sub}}$
and denote the corresponding broken Sobolev spaces
$W^{1,\infty}({\mathcal D})=\{ a\in L^\infty(\mathrm{D}):
a_{\mid_{\mathrm{D}_i}} \in W^{1,\infty}(\mathrm{D}_i), \; i=1,\dots, n_{\rm sub} \}$
and
$H^2({\mathcal D}) := \{ v\in H^1(\mathrm{D}): v_{\mid_{\mathrm{D}_i}} \in
H^2(\mathrm{D}_i), \; i=1,\dots, n_{\rm sub} \}$.
We set
$\|v\|_{H^2({\mathcal D})}^2:=\|v\|_{H^1(\mathrm{D})}^2+\sum_{i=1}^{n_{\rm sub}} |v|_{H^2(\mathrm{D}_i)}^2$.
We assume that the diffusion coefficient $a$ belongs to
$W^{1,\infty}({\mathcal D})$ and satisfies
\eqref{eq:Acoerc}.
In this case,
standard elliptic regularity results imply that
there exists a constant $c>0$ such that, for every $f\in
L^2(\mathrm{D})$, $u=A(\partial_x)^{-1}f\in H^2({\mathcal D})$ and $\| u \|_{H^2({\mathcal D})} \leq c \| f
\|_{L^2(\mathrm{D})}$.
This, combined with~\eqref{eq:X2L2}, gives $X_2\subset
H^2({\mathcal D})$ and~\eqref{eq:X2d=1} is valid with $\| v \|_{H^2({\mathcal D})}$ on the left side.
\end{remark}
\subsubsection{{\bf Case $d=2$}} \label{Sec:P1:d2}
Under \eqref{ass:ALip}, for polygonal domains $\mathrm{D} \subset \mathbb R^2$,
weak solutions of the source problem \eqref{eq:xPbm}
are known to belong to a weighted Sobolev space of Kondrat'ev type
which is defined as follows.
\begin{definition}[Kondrat'ev Spaces in dimension $d=2$] \label{def:Kma}
Assume that $\mathrm{D}\subset \mathbb R^2$ is a bounded polygonal domain
with at least three corners and straight sides,
whose boundary $\partial \mathrm{D}$ is Lipschitz.
Denote by
$r_\mathrm{D}:\mathrm{D}\to \mathbb R_{\ge 0}$
a smooth function that locally,
in a (sufficiently small) open neighborhood
of each corner of $\mathrm{D}$,
coincides with the Euclidean distance to that corner.
Then, for $m\in \mathbb N_0$ and for some constant $a>0$,
the Kondrat'ev corner-weighted Sobolev space $\mathcal K^m_a(\mathrm{D})$
is defined as
\begin{equation}\label{eq:DefKma}
\mathcal K^m_a(\mathrm{D})
:=
\left\{ v \colon \, \mathrm{D}\to\mathbb R : \; \forall |\alpha|\leq m : \,
r_\mathrm{D}^{|\alpha|-a} \partial^\alpha v \in L^2(\mathrm{D}) \right\}
\;,
\end{equation}
with
$\| v \|_{\mathcal K^m_{a}(\mathrm{D})}^2
:=
\sum_{|\alpha|\le m}\| r_\mathrm{D}^{|\alpha|-a} \partial^\alpha v\|_{L^2(\mathrm{D})}^2$.
\end{definition}
The regularity result in question is a special case of
\cite[Thm. 4.4]{BLN2017}, which we state here, for definiteness,
in the form required in what follows.
\begin{proposition}\label{prop:BLN2017}
Assume that $\mathrm{D} \subset \mathbb R^2$ is a bounded polygon
with boundary $\partial\mathrm{D}$ consisting of a finite number of
straight sides.
Consider the elliptic source problem \eqref{eq:xPbm}
with assumptions \eqref{eq:Acoerc} and \eqref{ass:ALip} in place.
Then, there exist $c>0$ and a constant $a>0$ such that,
for every $f\in L^2(\mathrm{D})$,
the weak solution $u \in X_1 = H^1_{\Gamma_D}(\mathrm{D})$
of \eqref{eq:xPbm}
belongs to $\mathcal K^2_{a+1}(\mathrm{D})$ and
satisfies the a~priori estimate
\begin{equation}\label{eq:X2apriori}
\| u \|_{\mathcal K^2_{a+1}(\mathrm{D})} \leq c \| f \|_{L^2(\mathrm{D})}.
\end{equation}
In particular, therefore, $X_2 \subset \mathcal K^2_{a+1}(\mathrm{D})$
and there exists $c>0$ such that
\begin{equation}\label{eq:X2K2a}
\forall v \in X_2: \quad \| v \|_{\mathcal K^2_{a+1}(\mathrm{D})} \leq c \| v \|_{X_2}
\;.
\end{equation}
\end{proposition}
\begin{proof}
Assumption \eqref{ass:ALip} implies that $A \in \mathcal W^{1,\infty}(\mathrm{D})$
as defined in \cite[Eqn. (5)]{BLN2017},
and that
$\| A \|_{\mathcal W^{1,\infty}(\mathrm{D})} \leq C(\mathrm{D}) \| A \|_{W^{1,\infty}(\mathrm{D})}$.
We may then use \cite[Thm. 4.4]{BLN2017} with $b_i = c =0$, $m=1$, to conclude
the \emph{a~priori} estimate
\[
\| u \|_{\mathcal K^2_{a+1}(\mathrm{D})} \leq c \| f \|_{\mathcal K^0_{a-1}(\mathrm{D})}
\]
for all $|a|<\eta$ for some (sufficiently small) $\eta > 0$.
We assume, without loss of generality, that $0 < \eta < 1$.
Then, definition \eqref{eq:DefKma} states that
$f\in \mathcal K^0_{a-1}(\mathrm{D})$ means $r_\mathrm{D}^{-(a-1)}f\in L^2(\mathrm{D})$.
As $-(a-1) > 0$, $r_\mathrm{D}^{-(a-1)}\in L^\infty(\mathrm{D})$, so that
$\| f \|_{\mathcal K^0_{a-1}(\mathrm{D})} \leq c(a,\mathrm{D}) \| f \|_{L^2(\mathrm{D})}$.
The \emph{a~priori} estimate implies then \eqref{eq:X2apriori}.
Since $\| f \|_{L^2(\mathrm{D})} =\| u \|_{X_2}$
(see \eqref{eq:X2L2}),
the \emph{a~priori} estimate also implies \eqref{eq:X2K2a}.
\end{proof}
\begin{remark}\label{rmk:Transm2d}
For transmission problems in a polygonal domain $\mathrm{D}$,
with \emph{piecewise constant, isotropic coefficients}
in materials occupying a finite number $n_{\rm sub}$ of
polygonal subdomains $\mathrm{D}_i\subset \mathrm{D}$,
regularity in the weighted spaces $\mathcal K^2_{a+1}(\mathrm{D})$, with
radial weights also at the multi-material intersection points in
$\mathrm{D}$, is stated in \cite[Theorem~3.7]{FEMTransmission}.
The assumptions in~\cite{FEMTransmission} on $A$
are more restrictive than just~\eqref{eq:Acoerc} and
$A\in W^{1,\infty}({\mathcal D};\mathbb R^{d\times d}_{\mathrm{sym}})$.
The regularity result in \cite[Theorem~3.7]{FEMTransmission}
with $m=1$ implies for $u\in X_2$ a splitting $u=u_{\mathrm{reg}}+w_s$,
with the bound \eqref{eq:X2apriori} for $u_{\mathrm{reg}}|_{\mathrm{D}_i}$
on each subdomain $\mathrm{D}_i$,
and with $w_s$ in a finite-dimensional space $W_s$,
see~\cite[Sect.~3.2]{FEMTransmission}.
\end{remark}
\subsubsection{{\bf Case $d=3$}}
\label{sec:Regd=3}
Proposition \ref{prop:BLN2017} remains valid in space dimension $d=3$.
To detail a precise statement, we still assume \eqref{ass:ALip}.
Then, \cite[Theorem 1.1]{AmannNistor08} implies \eqref{eq:X2apriori}
and \eqref{eq:X2K2a} in bounded, polyhedral domains $\mathrm{D}\subset \mathbb R^3$
with Lipschitz boundary $\partial\mathrm{D}$ consisting of a finite number
of plane faces. Similar results are shown in \cite{MR3d} and,
for the Poisson equation with $\Gamma = \Gamma_D$, in \cite[Theorem 1.2]{BNZ2005}
(with $\mu=1$ in the statement of that theorem).
\section{Approximation}
\label{sec:Approx}
We introduce the spatial and temporal (quasi-)interpolation
operators that will allow us to deduce convergence rates of the space-time
variational approximation of formulation \eqref{eq:IBVPVar}.
In order to use the tensor product construction of subspaces in~\eqref{eq:xtApprSpc},
we specify the choice of temporal subspaces $V^M_t\subset H^{1/2}_{0,}(J)$ for the temporal domain $J=(0,T)$.
In the spatial domain $\mathrm{D}$,
$V^N_x \subset H^1_{\Gamma_D}(\mathrm{D})$ will be specified in Section \ref{Sec:xAppr}
below.
\subsection{$hp$-Approximation in $\overline{J}=[0,T]$}
\label{sec:hpApprJ}
To specify the $hp$-subspace $V^M_t\subset H^{1/2}_{0,}(J)$ in~\eqref{eq:xtApprSpc},
we fix the geometric subdivision
parameter $\sigma \in (0,1)$ and the number of elements $m:= m_1 + m_2 \in \mathbb N$
with given $m_1 \in \mathbb N$, $m_1 > 2$, and $m_2 \in \mathbb N_0$.
We set $T_1 := \min \{1,T\}$.
Then, we define the time steps by
\begin{equation} \label{def:tj}
t_j := \begin{cases}
0, & j=0, \\
T_1 \sigma^{m_1-j}, & j \in \{ 1,\dots, m_1 \}, \\
\frac{T-T_1}{m_2} \cdot (j-m_1) + T_1, & j \in \{ m_1+1, \dots, m_1+m_2 \}, \quad \text{ if } m_2 > 0,
\end{cases}
\end{equation}
where the last line is omitted in the case $T_1=T$, i.e., we assume $m_2=0$ whenever $T_1=T$.
Furthermore,
we denote by $I_j = (t_{j-1},t_j)\subset J$ the corresponding time intervals of lengths
$k_j := |I_j| = t_j - t_{j-1}$, fulfilling
\begin{equation} \label{def:kj}
k_j = \begin{cases}
T_1\sigma^{m_1-1}, & j=1, \\
T_1\sigma^{m_1-j}(1-\sigma), & j \in \{ 2,\dots, m_1 \}, \\
k_T:= \frac{T-T_1}{m_2}, & j \in \{ m_1+1, \dots, m_1+m_2 \}, \quad \text{ if } m_2 > 0.
\end{cases}
\end{equation}
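The construction \eqref{def:tj}--\eqref{def:kj} can be sketched as follows (a sketch with hypothetical parameter values); it also exhibits the relation $k_j = \lambda t_{j-1}$ with $\lambda = (1-\sigma)/\sigma$ for $j=2,\dots,m_1$, which is used in the proofs of Section~\ref{sec:ConvRate}.

```python
import numpy as np

def geometric_time_mesh(T, sigma, m1, m2):
    """Time steps t_j of (def:tj): geometric refinement towards t = 0
    on [0, T1] with T1 = min(1, T), uniform steps of size (T-T1)/m2
    on [T1, T] (omitted if m2 = 0, i.e., if T1 = T)."""
    T1 = min(1.0, T)
    t = [0.0] + [T1 * sigma ** (m1 - j) for j in range(1, m1 + 1)]
    if m2 > 0:
        t += [T1 + (T - T1) * i / m2 for i in range(1, m2 + 1)]
    return np.array(t)
```

With $\sigma = 1/2$, for instance, each geometric step doubles in size, $k_j = t_{j-1}$ for $j = 2,\dots,m_1$.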
Note that the splitting of $\overline{J}=[0,T]$ into the parts $[0,T_1]$ and $[T_1,T]$
is necessary for the proofs of the $hp$-error estimate in Section~\ref{sec:ConvRate},
since Proposition~\ref{Prop:tExpBd} states estimates for $b \leq T_1 = \min\{1,T\}$ only.
In other words, we apply the temporal $hp$-FEM in $[0,T_1]$,
whereas in $[T_1,T]$ we use a temporal $p$-FEM in the case $T>1$.
With this notation,
we define a geometric partition $\mathcal G^m_\sigma = \{I_j\}_{j=1}^m$ of $J=(0,T).$ On $\mathcal G^m_\sigma$,
we introduce the distribution ${\boldsymbol{p}} = (p_1,\dots,p_m) \in \mathbb N^m$
of polynomial degrees
as follows: For a given slope parameter $\mu_{\mathrm{hp}} \in \mathbb R$, $\mu_{\mathrm{hp}} \geq 1$, we set
\begin{equation} \label{def:pj}
p_j := \begin{cases}
1, & j=1, \\
\lfloor \mu_{\mathrm{hp}} j \rfloor, & j \in \{ 2,\dots,m_1 \}, \\
p_T:= \lfloor \mu_{\mathrm{hp}} m_1 \rfloor, & j \in \{ m_1+1, \dots, m_1+m_2 \}, \quad \text{ if } m_2 > 0,
\end{cases}
\end{equation}
where $\lfloor \circ \rfloor$ denotes the floor function.
Again, in the case $m_2=0$, the last line is omitted.
Thus, we set
$S^{{\boldsymbol{p}},1}(J;\mathcal G^m_\sigma) := \{ v \in C^0(\overline{J}): v_{\mid_{I_j}} \in \mathbb P^{p_j}, \; j=1,\dots,m \},$
and the temporal subspace $V^M_t$ in \eqref{eq:xtApprSpc}
is defined as
\begin{equation}\label{eq:VMt}
S^{{\boldsymbol{p}},1}_{0,}(J;\mathcal G^m_\sigma)
:=
\{ v \in S^{{\boldsymbol{p}},1}(J;\mathcal G^m_\sigma): v(0) = 0 \}
\subset
H^{1/2}_{0,}(J)
\;.
\end{equation}
Due to the continuity requirement at $t_j$ for $j=1,\dots,m-1$, which
is mandated by the $H^{1/2}$-conformity, and the zero trace at $t=0$,
it holds that
\[
M = {\rm dim}(S^{{\boldsymbol{p}},1}_{0,}(J;\mathcal G^m_\sigma)) = \sum_{j=1}^m p_j.
\]
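The degree distribution \eqref{def:pj} and the resulting dimension $M$ can be sketched as follows (a sketch; the parameter values $\mu_{\mathrm{hp}}=3/2$, $m_1=5$, $m_2=3$ are hypothetical); for $m_2\leq m_1$ this also illustrates the bound $M \leq 2\mu_{\mathrm{hp}} m_1^2$ appearing in Lemma~\ref{lem:thpexpconv}.

```python
import numpy as np

def degree_vector(mu_hp, m1, m2):
    """Degrees p_j of (def:pj): p_1 = 1, linear increase with slope
    mu_hp on the geometric part, constant p_T = floor(mu_hp*m1) on
    the uniform part [T1, T]."""
    p = [1] + [int(np.floor(mu_hp * j)) for j in range(2, m1 + 1)]
    p += [int(np.floor(mu_hp * m1))] * m2
    return np.array(p)
```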
We introduce the temporal quasi-interpolant
$\Pi^{{\boldsymbol{p}},1}_{\mathcal G^m_\sigma} v$ for a sufficiently smooth function $v \colon \, [0,T] \to \mathbb R$ by
\begin{equation} \label{def:projection}
\left( \Pi^{{\boldsymbol{p}},1}_{\mathcal G^m_\sigma}v\right)(t) :=
\begin{cases}
v(t_1) t/t_1, & t \in \overline{I_1} \\
v(t_{j-1}) + \int^t_{t_{j-1}} (\Pi^{p_j-1}_{L^2(I_j)}v' ) (\xi)\mathrm d \xi, & t \in \overline{I_j}, \; j \in \{2,\dots,m\},
\end{cases}
\end{equation}
where $\Pi^{p_j-1}_{L^2(I_j)}$
denotes the $L^2(I_j)$ projection onto $\mathbb P^{p_j-1}$. As~\eqref{def:projection} uses point values of the interpolated
function, $\Pi^{{\boldsymbol{p}},1}_{\mathcal G^m_\sigma}$ is only defined on a subspace of the continuous functions $C^0(\overline{J})$. Note that the nodal property
\begin{equation} \label{projNodal}
\forall j \in \{0, \dots, m \}: \quad \left( \Pi^{{\boldsymbol{p}},1}_{\mathcal G^m_\sigma}v\right)(t_j) = v(t_j)
\end{equation}
holds true for a sufficiently smooth function $v$ with $v(0)=0$.
Our approach to convergence rate bounds in the fractional Sobolev norms is to
first obtain estimates in the additive integer order $L^2$ and $H^1$ norms
in the usual fashion by scaling estimates on unit size reference domains,
then to interpolate the global $L^2$ and $H^1$ norm error bounds.
For $j\geq 2$, the error bounds in $I_j$ are standard $hp$-interpolation error
estimates as can be found, e.g., in \cite[Chapter~3]{Schwab98}.
We recall the error bound on $\hat{I} = (-1,1)$, with the estimates on $I_j$
following by scaling.
\begin{lemma}\label{lem:hp}
On $\hat{I} = (-1,1)$,
for every $p\in \mathbb N$, a projector $\hat{\Pi}^p_{1} \colon \, H^1(\hat{I}) \to \mathbb P^p(\hat{I})$ exists such that,
for all $v\in H^{r+1}(\hat{I})$ with some $r\in \mathbb N$,
\begin{equation}\label{eq:hphIH1}
\| v' - (\hat{\Pi}^p_1 v)' \|_{L^2(\hat{I})}^2
\leq
\frac{(p-s)!}{(p+s)!} \|v^{(s+1)}\|^2_{L^2(\hat{I})}
\end{equation}
and
\begin{equation}\label{eq:hphIL2}
\| v - \hat{\Pi}^p_1 v \|_{L^2(\hat{I})}^2
\leq
\frac{1}{p(p+1)} \frac{(p-s)!}{(p+s)!} \|v^{(s+1)}\|^2_{L^2(\hat{I})}
\end{equation}
are valid for every integer $s$ with $0\leq s \leq \min\{r,p\}$.
Furthermore,
\[
\left(\hat{\Pi}^p_1 v\right)(\pm 1) = v(\pm 1) \;.
\]
\end{lemma}
We remark that the projectors $\hat{\Pi}^p_1$ for $p\geq 1$ are
given by
\[
\left(\hat{\Pi}^p_1 v \right) (t) := v(-1) + \int_{-1}^t \hat{\Pi}^{p-1}_0 (v') (\xi)\mathrm d \xi \;,
\quad t\in \hat{I}\;,
\]
with $\hat{\Pi}^{p-1}_0$ denoting the $L^2(\hat{I})$ projection onto
$\mathbb P^{p-1}$.
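This construction of $\hat{\Pi}^p_1$ can be sketched numerically (a sketch, not the paper's implementation; quadrature order and sample functions are hypothetical choices): the $L^2(\hat{I})$ projection of $v'$ is computed through a Legendre expansion with Gauss quadrature and then integrated from $-1$.

```python
import numpy as np
from numpy.polynomial import legendre as leg

def ref_projector(v, dv, p, quad_n=64):
    """Sketch of hat-Pi^p_1 on (-1, 1): L^2-project the derivative dv
    onto P^{p-1} via a Legendre expansion, then integrate from -1."""
    x, w = leg.leggauss(quad_n)
    # c_n = (2n+1)/2 * int_{-1}^{1} v'(t) P_n(t) dt, n = 0, ..., p-1
    c = np.array([(2 * n + 1) / 2 * np.dot(w, dv(x) * leg.legval(x, [0] * n + [1]))
                  for n in range(p)])
    C = leg.legint(c, lbnd=-1, k=0)  # antiderivative vanishing at t = -1
    return lambda t: v(-1) + leg.legval(t, C)
```

Since the $P_0$ coefficient of the projection equals the mean of $v'$, one gets $\hat{\Pi}^p_1 v(1) = v(-1) + \int_{-1}^1 v' \,\mathrm d\xi = v(1)$, i.e., the endpoint reproduction stated in Lemma~\ref{lem:hp}; moreover, $\hat{\Pi}^p_1$ reproduces polynomials of degree at most $p$.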
For $I_j \in \mathcal G^m_\sigma$ with $j\geq 2$,
the global quasi-interpolation projectors $\Pi^{{\boldsymbol{p}},1}_{\mathcal G^m_\sigma} $
are obtained by transporting $\hat{\Pi}^{p_j}_1$ from $\hat{I}$ to $I_j\in \mathcal G^m_\sigma$
via affine transformations $T_j:\hat{I}\to I_j$, resulting in local projections
$\Pi^{p_j}_{1,j}$.
We scale the projection error bounds \eqref{eq:hphIH1} and \eqref{eq:hphIL2}
to $I_j$, and apply them to strongly measurable maps $v \colon \, I_j\to X$ for
separable Hilbert space $X$ by Hilbertian tensorization
of Bochner spaces.
We denote by $\mathbb P^p(I_j;X)$ the linear space of
polynomial maps of degree $p$ with coefficients in $X$.
We obtain the following result.
\begin{lemma}\label{lem:hpIj}
For every $I_j\in \mathcal G^m_\sigma$ with $j\geq 2$ and
time-step size $k_j = |I_j|$, and for every $p\in \mathbb N$, there
exists a projector
$\Pi^{p}_{1,j}: H^1(I_j;X) \to \mathbb P^{p}(I_j;X)$
such that,
for every $v\in H^{r+1}(I_j;X)$ with some $r\in \mathbb N$,
the error bounds
\[
\| \partial_t v - \partial_t \Pi^{p}_{1,j} v \|_{L^2(I_j;X)}^2
\leq
C
\frac{(p-s)!}{(p+s)!}
\left(\frac{k_j}{2}\right)^{2s}
\|\partial^{s+1}_t v \|^2_{L^2(I_j;X)}
\]
and
\[
\| v - \Pi^{p}_{1,j} v \|_{L^2(I_j;X)}^2
\leq
C
\frac{1}{p(p+1)} \frac{(p-s)!}{(p+s)!}
\left(\frac{k_j}{2}\right)^{2(s+1)}
\|\partial^{s+1}_t v \|^2_{L^2(I_j;X)}
\]
are valid for every integer $s$ with
$0\leq s \leq \min\{r,p\}$.
Furthermore,
\[
\left(\Pi^{p}_{1,j} v\right)(t) = v(t) \;\;\mbox{in}\;\; X
\;\;\mbox{for} \;\; t \in \partial I_j = \{t_{j-1},t_j\}
\;.
\]
\end{lemma}
\subsection{$\mathbb P^1$-FEM Approximation in $\mathrm{D}$}
\label{Sec:xAppr}
We consider the choice of subspaces $V^N_x\subset H^1_{\Gamma_D}(\mathrm{D})$
in \eqref{eq:xtApprSpc} as standard, conforming $\mathbb P^1$-Lagrangian finite elements
on simplicial meshes $\mathcal T$ of $\mathrm{D}$. We denote by
$S^1(\mathrm{D};\mathcal T)$ the space of continuous, piecewise linear functions on $\mathcal T$,
and further, we define the closed subspace
\begin{equation} \label{Approx:SGammaD}
S^1_{\Gamma_D}(\mathrm{D};\mathcal T) := S^1(\mathrm{D};\mathcal T) \cap H^1_{\Gamma_D}(\mathrm{D}) \subset H^1_{\Gamma_D}(\mathrm{D}).
\end{equation}
\subsubsection{{\bf Case $d=1$}}
For any finite partition $\mathcal T$ of the open, bounded and connected
interval $\mathrm{D}$ into $N$ open subintervals
that is quasi-uniform with mesh width
$h := \max\{ |I_j|: I_j\in \mathcal T \} >0$,
there exists a constant $c>0$
independent of $N=O(h^{-1})$
such that
the nodal interpolant
$I^N: C^0(\overline{\mathrm{D}}) \to S^1(\mathrm{D};\mathcal T)$
satisfies
\begin{equation} \label{eq:X2d=1I}
\forall v\in X_2: \quad
\| v - I^N v \|_{L^2(\mathrm{D})}
+
N^{-1} \| v - I^N v \|_{H^1(\mathrm{D})}
\leq
c N^{-2} \| v \|_{H^2(\mathrm{D})}
\;.
\end{equation}
With~\eqref{eq:X2d=1},
for any $f\in L^2(\mathrm{D})$, we also have that the solution $u = A(\partial_x)^{-1}f$
satisfies
\begin{equation*}
\| u - I^N u \|_{L^2(\mathrm{D})}
+
N^{-1}
\| u - I^N u \|_{H^1(\mathrm{D})}
\leq
c N^{-2} \| f \|_{L^2(\mathrm{D})}
\;.
\end{equation*}
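The second-order $L^2$ convergence in \eqref{eq:X2d=1I} can be observed numerically (a sketch with a hypothetical smooth function; doubling $N$ should reduce the $L^2(\mathrm{D})$ error by roughly a factor of four).

```python
import numpy as np

def l2_interp_error(u, N, quad=2000):
    """L^2(0,1) error of the piecewise-linear nodal interpolant I^N u
    on a uniform mesh with N subintervals (composite midpoint rule)."""
    nodes = np.linspace(0.0, 1.0, N + 1)
    x = (np.arange(quad) + 0.5) / quad
    Iu = np.interp(x, nodes, u(nodes))  # piecewise-linear interpolant
    return np.sqrt(np.mean((u(x) - Iu) ** 2))
```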
\begin{remark}\label{rem:1Dtransmission_approx}
For transmission problems with diffusion coefficient $a\in
W^{1,\infty}({\mathcal D})$ as in Remark~\ref{rem:1Dtransmission},
assuming that $\mathcal T$
is compatible with the partition ${\mathcal D}$
(i.e., the set of nodes of $\mathcal T$
includes all interfaces in ${\mathcal D}$),
the nodal interpolant $I^N: C^0(\overline{\mathrm{D}}) \to
S^1(\mathrm{D};\mathcal T)$ satisfies~\eqref{eq:X2d=1I} with $\| v
\|_{H^2({\mathcal D})}$ instead of $\| v \|_{H^2(\mathrm{D})}$
on the right side. The subsequent estimate for
$u = A(\partial_x)^{-1}f$, $f\in L^2(\mathrm{D})$,
follows from~\eqref{eq:X2d=1} with $\| v
\|_{H^2({\mathcal D})}$ on the left side (see Remark~\ref{rem:1Dtransmission}).
\end{remark}
\subsubsection{{\bf Case $d=2$}}
$\mathrm{D}\subset\mathbb R^2$ is a polygon with a finite number of corners and straight sides.
We assume furthermore that each entire side $\Gamma_j$ carries either a
Dirichlet or a Neumann boundary condition (this can always be achieved by
subdividing sides of $\mathrm{D}$ at points where the boundary condition
changes and by increasing $M$ appropriately; points where boundary
conditions change then become ``corner points'').
As is well known (e.g.,~\cite{BKP79,Apel1999,NstrGrd2015} and the references there),
functions $u\in \mathcal K^2_{a+1}(\mathrm{D})$ allow for rate-optimal approximation
in $H^1(\mathrm{D})$ and $L^2(\mathrm{D})$ norms in terms of continuous, piecewise linear nodal
Lagrangian FEM in $\mathrm{D}$, on regular, simplicial partitions $\mathcal T^N_\beta$
(see, e.g.,~\cite{BKP79,Apel1999,NstrGrd2015} and the references there for constructions)
of $\mathrm{D}$ with $O(N)$ triangles and algebraic corner-refinement
towards the vertices of $\mathrm{D}$. The subscript $\beta\in(0,1]$ denotes
the corner-refinement parameter, with $\beta=1$
corresponding to quasi-uniform meshes.
As $\mathcal K^2_{a+1}(\mathrm{D}) \subset C(\overline{\mathrm{D}})$ (see, e.g.,~\cite{BKP79}),
the nodal interpolation operator $I^N_\beta$ is well-defined for $u\in \mathcal K^2_{a+1}(\mathrm{D})$.
Also, for $u \in \mathcal K^2_{a+1}(\mathrm{D})\cap H^1_{\Gamma_D}(\mathrm{D})$,
the interpolants $I^N_\beta u$
satisfy exactly the homogeneous Dirichlet boundary conditions on~$\Gamma_D$.
Furthermore, for suitably strong mesh grading as expressed by the
parameter $\beta$ (depending on $\mathrm{D}$, and the corner angles
at the vertices of $\mathrm{D}$),
the interpolants $I^N_\beta u$ of $u\in \mathcal K^2_{a+1}(\mathrm{D})$
converge at optimal rates under mesh refinement:
there exists a constant $c>0$ such that, for all
$N = {\rm dim}(S^1_{\Gamma_D}(\mathrm{D};\mathcal T^N_\beta)) \in \mathbb N$,
\begin{equation}\label{eq:hFEMP1Bd}
\| u - I^N_\beta u \|_{L^2(\mathrm{D})}
+
N^{-\frac 12} \| u - I^N_\beta u \|_{H^1(\mathrm{D})}
\leq
c N^{-1}
\| u \|_{\mathcal K^2_{a+1}(\mathrm{D})}
\leq
cN^{-1} \| f \|_{L^2(\mathrm{D})}
\;.
\end{equation}
Here, we used \eqref{eq:X2apriori} in the last step.
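A one-dimensional analogue illustrates the role of the grading parameter $\beta$ (a sketch; the singularity exponent $\lambda = 0.6$ and the grading $\beta = 0.3$ are hypothetical choices mimicking a corner singularity $u \sim r^\lambda$): on quasi-uniform meshes ($\beta = 1$) the $L^2$ rate is limited by the singularity, whereas sufficiently strong grading restores the optimal rate $N^{-2}$.

```python
import numpy as np

def l2_err_graded(lam, N, beta, q=50):
    """L^2(0,1) error of piecewise-linear interpolation of u(x) = x**lam
    on the graded mesh x_j = (j/N)**(1/beta); beta = 1 is quasi-uniform,
    smaller beta means stronger grading towards the 'corner' x = 0."""
    nodes = (np.arange(N + 1) / N) ** (1.0 / beta)
    u = lambda x: x ** lam
    err2 = 0.0
    for a, b in zip(nodes[:-1], nodes[1:]):
        x = a + (b - a) * (np.arange(q) + 0.5) / q  # midpoint rule per element
        lin = u(a) + (u(b) - u(a)) * (x - a) / (b - a)
        err2 += (b - a) * np.mean((u(x) - lin) ** 2)
    return np.sqrt(err2)
```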
\begin{remark}\label{rmk:BisTree}
The interpolation error bound \eqref{eq:hFEMP1Bd} is based on
the graded mesh family $\{ \mathcal T^N_\beta\}_{N\geq 1}$.
The bound \eqref{eq:hFEMP1Bd} also holds on families
of bisection tree meshes, as shown in
\cite[Theorems~5.1,~2.1]{GaspMorin}.
Such families are typically generated by adaptive algorithms,
and will also be used in the ensuing numerical experiments
in Section~\ref{sec:NumExp} below.
\end{remark}
\begin{remark}\label{rmk:Transm}
For transmission problems in $\mathrm{D}$,
with $A$ as in \eqref{eq:Acoerc}, piecewise smooth
on a finite partition $\{ \mathrm{D}_i \}_{i=1}^{n_{\mathrm{sub}}}$
of $\mathrm{D}$ in straight-sided polygons $\mathrm{D}_i$,
the results in \cite[Theorem~3.7]{FEMTransmission}
imply that, on meshes graded in each $\mathrm{D}_i$
towards the multi-material intersection points,
the interpolation error bound \eqref{eq:hFEMP1Bd}
for the graded mesh family $\{ \mathcal T^N_\beta\}_{N\geq 1}$
still remains true, by approximating $u_{\mathrm{reg}}$ and $w_s$
in the decomposition of \cite[Theorem~3.7]{FEMTransmission}
separately.
\end{remark}
\subsubsection{{\bf Case $d=3$}}
\label{sec:Approxd=3}
Only partial extensions of \eqref{eq:hFEMP1Bd}
to space dimension $d=3$
are available. We indicate the argument in one particular case.
Specifically,
we assume~\eqref{eq:Acoerc}, \eqref{ass:ALip} and, in addition,
that $A(x) = a(x){\mathbb{I}}$, with $a\in W^{1,\infty}(\mathrm{D})$.
Furthermore, we assume that $\Gamma_D = \Gamma$, i.e., we consider homogeneous
Dirichlet boundary conditions on the entire~$\Gamma$.
The temporal (analytic) regularity in Section~\ref{sec:tReg} is then
still valid and, as outlined in Section~\ref{sec:Regd=3},
the space $X_2$ is continuously embedded into a
weighted Kondrat'ev space in $\mathrm{D}$ with corner- and edge-weights.
A convergence estimate analogous to the $H^1$ bound in \eqref{eq:hFEMP1Bd}
(with rate $N^{-1/3}$ instead of $N^{-1/2}$)
is stated in~\cite[Theorem 2.1]{BNZ2005} with $m=1$,
and proven in~\cite{BNZ2007}, for standard, first-order
Lagrangian FEM in $\mathrm{D}$ on regular triangulations of $\mathrm{D}$
into simplices, with \emph{anisotropic edge refinements}.
\section{Convergence Rate of the Space-Time Discretization}
\label{sec:ConvRate}
We are in a position to establish the convergence rate of the space-time
Galerkin discretization \eqref{eq:IBVPVarxt} with
$V^M_t = S^{{\boldsymbol{p}},1}_{0,}(J;\mathcal G^m_\sigma)$ as defined in \eqref{eq:VMt}
and with
$V^N_x = S^1_{\Gamma_D}(\mathrm{D};\mathcal T^N_\beta)$
as given in \eqref{Approx:SGammaD}, where $\beta=1$ in the case $d=1$.
We will require the temporal $H^{1/2}_{0,}(J)$ projector $Q^{1/2}_t$
onto $V^M_t$ and the spatial $H^1_{\Gamma_D}(\mathrm{D})$
``Ritz'' projector $Q^1_x$ onto $V^N_x$. Being orthogonal projections,
they are stable, i.e.,
$\| Q^{1/2}_t v \|_{H^{1/2}_{0,}(J)} \leq \| v \|_{H^{1/2}_{0,}(J)}$,
$\| Q^1_x v \|_{X_1} \leq \| v \|_{X_1}$,
and optimal in the respective spaces, i.e.,
\[
\| v - Q^{1/2}_tv \|_{H^{1/2}_{0,}(J)}
=
\min_{w\in V^M_t} \| v - w \|_{H^{1/2}_{0,}(J)}
\;
\mbox{and}
\;
\| v - Q^1_x v \|_{X_1}
=
\min_{w\in V^N_x} \| v - w \|_{X_1}
\;.
\]
Here, we recall that $X_1 = H^1_{\Gamma_D}(\mathrm{D})$
denotes the ``energy'' space with
norm given by $\| v \|_{X_1} := a(v,v)^{1/2}$.
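Stability and optimality of such orthogonal projections can be checked in a finite-dimensional model (a sketch; the random SPD Gram matrix stands in for the bilinear form $a(\cdot,\cdot)$, and all dimensions and seeds are hypothetical).

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 3
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)        # SPD Gram matrix: discrete a(., .)
V = rng.standard_normal((n, k))    # basis of the trial subspace
v = rng.standard_normal(n)

# "Ritz" projection: Qv in span(V) with a(v - Qv, w) = 0 for all w in span(V)
Qv = V @ np.linalg.solve(V.T @ A @ V, V.T @ A @ v)
a_norm = lambda x: float(np.sqrt(x @ A @ x))
```

The Galerkin orthogonality of the error implies both the stability bound and the best-approximation property recalled above.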
Hence, we may write (for sufficiently regular arguments $v$)
\begin{equation}\label{eq:xQopt}
\| v - Q^1_x v \|_{X_1} \leq c \| v - I^N_\beta v \|_{H^1(\mathrm{D})}
\end{equation}
with a constant $c>0$ depending on $\mathrm{D}$ and on the coefficient $A$.
Assuming a
sufficiently strong corner-mesh refinement in $\mathrm{D}$ in the
case $d=2$,
an Aubin--Nitsche duality argument,
together with~\eqref{eq:X2d=1I} and~\eqref{eq:X2d=1} if $d=1$,
or~\eqref{eq:hFEMP1Bd} and~\eqref{eq:X2K2a} if $d=2$,
implies that there exists a constant $c>0$ such that,
for all
$N = {\rm dim}(S^{1}_{\Gamma_D}(\mathrm{D};\mathcal T^N_\beta))$
and all $w\in X_2$ (see, e.g., \cite[Thm. 5.2]{BKP79}),
\begin{equation}\label{eq:hFEML2Bd}
\| w - Q^1_x w\|_{L^2(\mathrm{D})}
\leq
c N^{-2/d} \| w \|_{X_2} \;.
\end{equation}
The optimality of the temporal projection $Q^{1/2}_t$
in $H^{1/2}_{0,}(J)$ also implies
\begin{equation}\label{eq:tQopt}
\| v - Q^{1/2}_tv \|_{H^{1/2}_{0,}(J)}
\leq
\| v - \Pi^{{\boldsymbol{p}},1}_{\mathcal G^m_\sigma}v\|_{H^{1/2}_{0,}(J)}
\end{equation}
for a sufficiently regular $v \colon \, J \to \mathbb R$.
Here, $\Pi^{{\boldsymbol{p}},1}_{\mathcal G^m_\sigma}$
is the temporal quasi-interpolant of Subsection~\ref{sec:hpApprJ}.
Proceeding as in the proof of \cite[Theorem 3.4]{OStMZ},
we obtain the following estimate (see \cite[p.~175 bottom]{OStMZ}).
\begin{lemma}\label{lem:xtQopt}
Let $u$ and $u^{MN}$ be the solutions to~\eqref{eq:IBVPVar}
and~\eqref{eq:IBVPVarxt}, respectively. We have
\begin{equation} \label{eq:xtQopt1}
\begin{split}
&\| u - u^{MN} \|_{H^{1/2}_{0,}(J;L^2(\mathrm{D}))} \leq \| u - Q^{1/2}_t u\|_{H^{1/2}_{0,}(J;L^2(\mathrm{D}))}
\\
&\qquad\qquad+
\| u-Q^1_xu \|_{H^{1/2}_{0,}(J;L^2(\mathrm{D}))}
+
\left\| (I-Q^{1/2}_t)(I-Q^1_x) u
\right\|_{H^{1/2}_{0,}(J;L^2(\mathrm{D}))}
\\
&\qquad\qquad+
\| u - Q^1_x u \|_{H^{1/2}_{0,}(J;L^2(\mathrm{D}))}
+
\| A(\partial_x)(u-Q^{1/2}_tu)\|_{[H^{1/2}_{,0}(J;L^2(\mathrm{D}))]'}
\;.
\end{split}
\end{equation}
\end{lemma}
We combine \eqref{eq:xQopt}--\eqref{eq:xtQopt1} with the preceding regularity, proven in Section~\ref{sec:Reg}, and the approximation properties of the projections $Q^{1/2}_t$, $Q^1_x$ to
obtain our main convergence rate bound. For this purpose, we address
{\bf Term$1$} through {\bf Term$5$} in the upper bound \eqref{eq:xtQopt1}.
To this end, we use that the solution $u$ to~\eqref{eq:IBVPVar} belongs to $H^{1/2}_{0,}(J;X_2)$,
which was proven in Lemma~\ref{lem:H12X2Reg}.
We start by deriving upper bounds for {\bf Term$1$} and {\bf Term$5$}.
We have
$L^2(Q)\simeq
[L^2(Q)]'\hookrightarrow [H^{1/2}_{,0}(J;L^2(\mathrm{D}))]'$ and
$H^{1/2}_{0,}(J;X_2) \hookrightarrow L^2(J;X_2)$
with continuous and dense injections.
This, together with~\eqref{eq:X2L2}, gives the following bound for {\bf Term$5$}:
\[
\begin{split}
\|A(\partial_x)(u-Q^{1/2}_tu)\|_{[H^{1/2}_{,0}(J;L^2(\mathrm{D}))]'}
&\le\tilde c (T) \|A(\partial_x)(u-Q^{1/2}_tu)\|_{L^2(Q)}
\\ &
= \tilde c (T) \|u-Q^{1/2}_tu\|_{L^2(J;X_2)}
\\ &
\le c(T) \|u-Q^{1/2}_t u\|_{H^{1/2}_{0,}(J;X_2)}\;.
\end{split}
\]
%
Using estimate~\eqref{ineq:XrEmbedding} yields that {\bf Term$1$} can be bounded by
\[
\| u - Q^{1/2}_t u\|_{H^{1/2}_{0,}(J;L^2(\mathrm{D}))} \leq c \|u-Q^{1/2}_t u\|_{H^{1/2}_{0,}(J;X_2)}
\]
with a constant $c>0$, i.e., for both {\bf Term$1$} and {\bf Term$5$}, we need an estimate of the term $\|u-Q^{1/2}_t u\|_{H^{1/2}_{0,}(J;X_2)}$. For this purpose, we use the temporal quasi-interpolant $\Pi^{{\boldsymbol{p}},1}_{\mathcal G^m_\sigma}$ of Subsection~\ref{sec:hpApprJ} and the inequality \eqref{eq:tQopt}. First, note that $\Pi^{{\boldsymbol{p}},1}_{\mathcal G^m_\sigma} u$ is well-defined since $u \colon \, [0,T] \to X_2$ is continuous, see estimate \eqref{eq:dltuBd} for $l=0$, $r=2$, and since $u \colon \, [0,T] \to X_2$ is smooth for $t>0$ due to Lemma~\ref{lem:DtBound}. Second, we have $u \in H^{1/2}_{0,}(J;X_2)$ because of Lemma~\ref{lem:H12X2Reg}, hence $u-\Pi^{{\boldsymbol{p}},1}_{\mathcal G^m_\sigma} u \in H^{1/2}_{0,}(J;X_2)$. Thus, it remains to estimate $\|u-\Pi^{{\boldsymbol{p}},1}_{\mathcal G^m_\sigma} u\|_{H^{1/2}_{0,}(J;X_2)}$, which is done in the following lemmas.
\begin{lemma} \label{lem:BoundGamma}
Let $\alpha>0$ and $m \in \mathbb N_0$ be given. For $\mu \geq 1$ with $\mu > \alpha$, there exists a constant $C_\Gamma > 0$, depending on $\alpha$ and $\mu$ but independent of $m$, such that
\begin{equation*}
\sum_{j=0}^m \alpha^{2j} \frac{\Gamma(\lfloor \mu j \rfloor-j+1)}{\Gamma(\lfloor \mu j \rfloor +j+1)} \Gamma(j+3)^2 \leq C_\Gamma.
\end{equation*}
\end{lemma}
\begin{proof}
The proof is based on \cite[Lemma~3.4]{DD20}, see Appendix~\ref{sec:ProofLemBoundGamma}.
\end{proof}
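The uniform bound of Lemma~\ref{lem:BoundGamma} can be probed numerically (a sketch with hypothetical values $\alpha=2$, $\mu=3$; the log-Gamma function is used to avoid overflow of the factorial-type terms).

```python
from math import exp, floor, lgamma, log

def partial_sum(alpha, mu, m):
    """Partial sums of the series in Lemma lem:BoundGamma, evaluated
    in log-space: alpha^(2j) * Gamma(fj-j+1)/Gamma(fj+j+1) * Gamma(j+3)^2
    with fj = floor(mu*j)."""
    s = 0.0
    for j in range(m + 1):
        fj = floor(mu * j)
        log_term = (2 * j * log(alpha)
                    + lgamma(fj - j + 1) - lgamma(fj + j + 1)
                    + 2 * lgamma(j + 3))
        s += exp(log_term)
    return s
```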
\begin{lemma} \label{lem:hperrorsmoothX2}
Assume \eqref{eq:TimeReg} with some $\varepsilon \in (0,1)$ and some $\delta \geq 1$. Let the grading parameter $\sigma \in (0,1)$ be given. Choose the slope parameter $\mu_{\mathrm{hp}} \geq 1$ such that
\begin{equation} \label{muhp}
\mu_{\mathrm{hp}} > \frac{(1-\sigma) \delta}{2 \sigma^{(3+\varepsilon)/2}},
\end{equation}
and fix the number of elements $m_2 \in \mathbb N_0$ such that
\begin{equation} \label{m2}
m_2 \begin{cases}
= 0, & T \leq 1, \\
> \frac{T-T_1}{4} \cdot \delta \sigma^{-\frac{1+\varepsilon}{2 \lfloor \mu_{\mathrm{hp}} \rfloor }}, & T>1,
\end{cases}
\end{equation}
where $T_1 = \min \{1,T\}.$
Then, for every $m_1 \in \mathbb N$ with $m_1 \geq \max\{3,m_2\}$ and $m=m_1+m_2$,
the geometric partition $\mathcal G^m_\sigma$ of $J=(0,T)$,
which is given by the time steps $t_j$ in \eqref{def:tj} with time-step sizes $k_j$ in \eqref{def:kj},
and the temporal order distribution ${\boldsymbol{p}} \in \mathbb N^m$ defined by \eqref{def:pj},
lead
to the error bound
\[
\| u - \Pi^{{\boldsymbol{p}},1}_{\mathcal G^m_\sigma} u \|_{H^{1/2}_{0,}((t_2,T);X_2)}^2 \leq C \sigma^{\varepsilon m_1},
\]
with $t_2 = T_1 \sigma^{m_1 - 2}$ and a constant $C>0$ independent of $m_1$.
\end{lemma}
\begin{proof}
Set $w=u - \Pi^{{\boldsymbol{p}},1}_{\mathcal G^m_\sigma} u$. Since $w \in H^1_{0,}((t_2,T);X_2)$, see the nodal property \eqref{projNodal}, the interpolation estimate (Lemma~\ref{lem:interpolationEstimate}) yields
\begin{equation*}
\| w \|_{H^{1/2}_{0,}((t_2,T);X_2)}^2 \leq \| \partial_t w \|_{L^2((t_2,T);X_2)} \| w \|_{L^2((t_2,T);X_2)}.
\end{equation*}
We estimate both factors on the right side using Proposition~\ref{Prop:tExpBd}, which states estimates for $b \leq \min\{1,T\}=T_1$ only. Thus, we split $[0,T]$ into the two intervals $[0,T_1]$ and $[T_1,T]$ for the case $T>1$. Without loss of generality, let us assume that $T>1$, i.e., $T_1=1$ (otherwise we examine only $[0,T] \subset [0,1]$ and omit the considerations for the second interval $[T_1,T]$). We investigate the intervals $[0,T_1]$ and $[T_1,T]$ separately.
\noindent
\textbf{Interval $[0,T_1]$:}
With $\lambda = \frac{1-\sigma}{\sigma}$, the time-step size fulfills $k_j = t_j - t_{j-1} = t_{j-1} \lambda$ for $j=2,\dots,m_1$. Lemma~\ref{lem:hpIj} with $p_j=\lfloor \mu_{\mathrm{hp}} j \rfloor$, $s_j=j$ and estimate~\eqref{eq:IntEst} in Proposition~\ref{Prop:tExpBd} yield
\begin{align*}
&\| \partial_t w \|_{L^2((t_2,T_1);X_2)}^2 = \sum_{j=3}^{m_1} \| \partial_t w \|_{L^2(I_j;X_2)}^2 \\
&\leq C \sum_{j=3}^{m_1} \frac{( \lfloor \mu_{\mathrm{hp}} j \rfloor - j)!}{(\lfloor \mu_{\mathrm{hp}} j \rfloor + j)!} \left( \frac{\lambda}{2} \right)^{2j} t_{j-1}^{2j} \delta^{2(j+1)} \Gamma(j+3)^2 t_{j-1}^{-2(j+1)+1+\varepsilon} \\
&\stackrel{\eqref{def:tj}}{=}C \delta^2 T_1^{-1+\varepsilon} \sigma^{(m_1+1)(-1+\varepsilon)} \sum_{j=3}^{m_1} \frac{\Gamma(\lfloor \mu_{\mathrm{hp}} j \rfloor-j+1)}{\Gamma(\lfloor \mu_{\mathrm{hp}} j \rfloor +j+1)} \left( \frac{\lambda \delta}{2 \sigma^{(-1+\varepsilon)/2}} \right)^{2j} \Gamma(j+3)^2 \\
&\leq C_1 \sigma^{m_1(-1+\varepsilon)},
\end{align*}
where, in the last step, Lemma~\ref{lem:BoundGamma} is applied for $\mu_{\mathrm{hp}} > \alpha = \frac{(1-\sigma) \delta}{2 \sigma^{(3+\varepsilon)/2}} \geq \frac{(1-\sigma) \delta}{2 \sigma^{(1+\varepsilon)/2}}= \frac{\lambda \delta}{2 \sigma^{(-1+\varepsilon)/2}}$ with \eqref{muhp} and the constant $C_1>0$ is independent of $m_1$. In the same way, we get from Lemma~\ref{lem:BoundGamma} for $ \mu_{\mathrm{hp}} > \alpha = \frac{(1-\sigma) \delta}{2 \sigma^{(3+\varepsilon)/2}} = \frac{\lambda \delta}{2 \sigma^{(1+\varepsilon)/2}}$ with \eqref{muhp} that
\begin{multline*}
\| w \|_{L^2((t_2,T_1);X_2)}^2 = \sum_{j=3}^{m_1} \| w \|_{L^2(I_j;X_2)}^2 \\
\leq C \sigma^{m_1(1+\varepsilon)} \sum_{j=3}^{m_1} \frac{\Gamma(\lfloor \mu_{\mathrm{hp}} j \rfloor-j+1)}{\Gamma(\lfloor \mu_{\mathrm{hp}} j \rfloor +j+1)} \left( \frac{\lambda \delta}{2 \sigma^{(1+\varepsilon)/2}} \right)^{2j} \Gamma(j+3)^2 \leq C_2 \sigma^{m_1(1+\varepsilon)}
\end{multline*}
with a constant $C_2>0$ independent of $m_1$.
\noindent
\textbf{Interval $[T_1,T]$ in the case $T>1$:}
First, note that $T_1 = 1$. From Lemma~\ref{lem:hpIj} with the choices
$p_j=s_j=p_T:=\lfloor \mu_{\mathrm{hp}} m_1 \rfloor$ and $k_j=k_T$,
estimate~\eqref{eq:L2abX2} in Lemma~\ref{lem:boundab},
and Stirling's formula
$\sqrt{2\pi n}\left(\frac{n}{\mathrm{e}}\right)^n
<n!<\sqrt{2\pi n}\left(\frac{n}{\mathrm{e}}\right)^n \mathrm{e}^{\frac{1}{12n}}$ with $\mathrm{e}^{\frac{1}{6n}}<2$,
we get
\begin{align*}
\| \partial_t w \|_{L^2((1,T);X_2)}^2 =& \sum_{j=m_1+1}^{m} \| \partial_t w \|_{L^2(I_j;X_2)}^2 \\
\leq& C \frac{1}{(2 p_T )!} \left( \frac{k_T}{2} \right)^{2 p_T} \underbrace{\sum_{j=m_1+1}^m \| \partial_t^{(p_T+1)} u \|_{L^2(I_j;X_2)}^2}_{=\| \partial_t^{(p_T+1)} u \|_{L^2((1,T);X_2)}^2}\\
\leq& C \delta^2 \frac{1}{(2 p_T )!} \left( \frac{k_T \delta}{2} \right)^{2 p_T} \Gamma(p_T+3)^2 C(\varepsilon,1,T)^2 \\
\leq& C \delta^2 C(\varepsilon,1,T)^2 (p_T+2)^4 \frac{ 4\pi p_T \left( \frac{p_T}{\mathrm{e}} \right)^{2p_T} } { \sqrt{2 \pi} \sqrt{2p_T} \left( \frac{2p_T}{\mathrm{e}} \right)^{2p_T} } \left( \frac{k_T \delta}{2} \right)^{2 p_T} \\
=& C 2 \sqrt{\pi} \delta^2 C(\varepsilon,1,T)^2 (p_T+2)^4 \sqrt{p_T} \left( \frac{k_T \delta}{4} \right)^{2 p_T} \leq C_3 \sigma^{m_1(1+\varepsilon)}
\end{align*}
with a constant $C_3 >0$ independent of $m_1$. In the last step, due to \eqref{m2}, we use that a constant $q \in (0,1)$ exists such that
\[
k_T = \frac{T-T_1}{m_2} = q \frac{4}{\delta} \sigma^{\frac{m_1(1+\varepsilon)}{2 \lfloor \mu_{\mathrm{hp}} \rfloor m_1 }} \leq q \frac{4}{\delta} \sigma^{\frac{m_1(1+\varepsilon)}{2 \lfloor \mu_{\mathrm{hp}} m_1 \rfloor }} = q \frac{4}{\delta} \sigma^{\frac{m_1(1+\varepsilon)}{2 p_T}}
\]
and therefore, $(p_T+2)^4 \sqrt{p_T} q^{2p_T} \to 0$ as $p_T \to \infty$.
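As a purely numerical illustration (not part of the proof, and with a hypothetical value of $q$), the fact that the polynomial prefactor is dominated by the geometric factor $q^{2p_T}$ can be checked directly:

```python
# check that (p+2)^4 * sqrt(p) * q**(2p) -> 0 for a fixed q in (0,1);
# q = 0.9 is a hypothetical choice, any q in (0,1) behaves the same way
q = 0.9

def prefactor_times_geometric(p):
    return (p + 2) ** 4 * p ** 0.5 * q ** (2 * p)

vals = [prefactor_times_geometric(p) for p in (10, 50, 100, 200, 400)]
assert all(a > b for a, b in zip(vals, vals[1:]))  # eventually monotone decay
assert vals[-1] < 1e-20                            # effectively zero
```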
Analogously, we obtain
\[
\| w \|_{L^2((1,T);X_2)}^2 \leq C_4 \sigma^{m_1(1+\varepsilon)}
\]
with a constant $C_4 >0$ independent of $m_1$.
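The two-sided form of Stirling's formula invoked above, together with the elementary bound $\mathrm{e}^{1/(6n)}<2$, can be sanity-checked numerically for small $n$ (a verification sketch, not part of the proof):

```python
import math

def stirling_lower(n):
    # sqrt(2*pi*n) * (n/e)^n < n!
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

def stirling_upper(n):
    # n! < sqrt(2*pi*n) * (n/e)^n * e^(1/(12n))
    return stirling_lower(n) * math.exp(1.0 / (12 * n))

for n in range(1, 30):
    assert stirling_lower(n) < math.factorial(n) < stirling_upper(n)
    assert math.exp(1.0 / (6 * n)) < 2
```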
With all estimates above, we conclude that
\begin{multline*}
\| w \|_{H^{1/2}_{0,}((t_2,T);X_2)}^2 \leq \| \partial_t w \|_{L^2((t_2,T);X_2)} \| w \|_{L^2((t_2,T);X_2)} \\
\leq \sqrt{ C_1 \sigma^{m_1(-1+\varepsilon)} + C_3 \sigma^{m_1(1+\varepsilon)}} \sqrt{ C_2 \sigma^{m_1(1+\varepsilon)} + C_4 \sigma^{m_1(1+\varepsilon)} } \leq C_{\mathrm{est}} \sigma^{\varepsilon m_1},
\end{multline*}
where $C_{\mathrm{est}}>0$ is independent of $m_1$.
\end{proof}
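The exponent bookkeeping in the final display of the proof is exact: $\sqrt{\sigma^{m_1(-1+\varepsilon)}}\sqrt{\sigma^{m_1(1+\varepsilon)}}=\sigma^{\varepsilon m_1}$, since the two exponents average to $\varepsilon m_1$. A quick numeric check with hypothetical values of $\sigma$ and $\varepsilon$:

```python
# exponents m*(-1+eps)/2 + m*(1+eps)/2 = eps*m exactly; check in floating point
s, eps = 0.25, 0.3   # hypothetical grading factor and regularity parameter
for m in (3, 7, 12):
    lhs = (s ** (m * (-1 + eps))) ** 0.5 * (s ** (m * (1 + eps))) ** 0.5
    rhs = s ** (eps * m)
    assert abs(lhs - rhs) < 1e-10 * rhs
```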
\begin{lemma} \label{lem:thpexpconv}
Under the assumptions of Lemma~\ref{lem:hperrorsmoothX2}, the estimate
\[
\| u - \Pi^{{\boldsymbol{p}},1}_{\mathcal G^m_\sigma} u \|_{H^{1/2}_{0,}(J;X_2)} \leq C \mathrm{exp}(-b \sqrt{M})
\]
holds true with a constant $C$ independent of $b$ and $M$, where $b= - \varepsilon \ln \sigma / \sqrt{8\mu_{\mathrm{hp}}} > 0$ and $M = {\rm dim}(S^{{\boldsymbol{p}},1}_{0,}(J;\mathcal G^m_\sigma)) \leq 2 \mu_{\mathrm{hp}} m_1^2 \leq 2 \mu_{\mathrm{hp}} m^2$.
\end{lemma}
\begin{proof}
Set $w=u - \Pi^{{\boldsymbol{p}},1}_{\mathcal G^m_\sigma} u$. Then, for $X_2$-valued functions, the norm equivalence in
Lemma~\ref{lem:NormEquivalence} and the localization in
Lemma~\ref{lem:FractionalNormPointtau} for $a=0$, $b=T$, $\tau = t_2$ yield
\begin{align*}
&(C_{\mathrm{Int},1})^2 \| w \|_{H^{1/2}_{0,}(J;X_2)}^2
\leq \normiii{w}_{H^{1/2}_{0,}(J;X_2)}^2 \\
&\qquad \leq
\| w \|^2_{L^2(J;X_2)}
+
| w |_{H^{1/2}((0,t_2);X_2)}^2 + 4 \int_0^{t_2} \frac{\|w(t)\|_{X_2}^2}{t_2-t} \mathrm dt \\
&\qquad\quad + 4 \int_{t_2}^T \frac{\|w(s)\|_{X_2}^2}{s-t_2} \mathrm ds + | w |_{H^{1/2}((t_2,T);X_2)}^2
+
\int_0^T \frac{\|w(t)\|_{X_2}^2}{t} \mathrm dt \\
&\qquad\leq \normiii{w}_{H^{1/2}_{0,}((0,t_2);X_2)}^2 + 4 \int_0^{t_2} \frac{\|w(t)\|_{X_2}^2}{t_2-t} \mathrm dt + 5 \normiii{w}_{H^{1/2}_{0,}((t_2,T);X_2)}^2,
\end{align*}
where we used the definition~\eqref{Sob:NormTriple} of the triple norm and the bound
$\int_{t_2}^T \frac{\|w(t)\|_{X_2}^2}{t} \mathrm dt \leq \int_{t_2}^T \frac{\|w(s)\|_{X_2}^2}{s-t_2} \mathrm ds$.
Next, we estimate the three terms on the right side.
\noindent
\textbf{First term:}
The triangle inequality, Lemma~\ref{lem:NormEquivalence}, Lemma~\ref{lem:H12X2Reg},
the Poincar\'e inequality (Lemma~\ref{lem:Poincare}), definition~\eqref{def:projection},
and estimates \eqref{eq:dltuBd}, \eqref{eq:IntEst} yield
\begin{align*}
&\normiii{w}_{H^{1/2}_{0,}((0,t_2);X_2)}^2 \leq 2 \normiii{u}_{H^{1/2}_{0,}((0,t_2);X_2)}^2 + 2 \normiii{\Pi^{{\boldsymbol{p}},1}_{\mathcal G^m_\sigma} u}_{H^{1/2}_{0,}((0,t_2);X_2)}^2 \\
&\; \leq C t_2^{\varepsilon} + 2 (C_{\mathrm{Int},2})^2 \sqrt{ 1 + \frac{4t_2^2}{\pi^2}} \underbrace{ \| \Pi^{{\boldsymbol{p}},1}_{\mathcal G^m_\sigma} u \|_{H^{1/2}_{0,}((0,t_2);X_2)}^2 }_{\leq \frac{2}{\pi} (t_2-0) \|\partial_t \Pi^{{\boldsymbol{p}},1}_{\mathcal G^m_\sigma} u \|_{L^2((0,t_2);X_2)}^2 } \\
&\; \leq C T_1^{\varepsilon} \sigma^{-2\varepsilon} \sigma^{\varepsilon m_1} + C t_2 \Big[ \underbrace{\int_0^{t_1} \frac{\| u(t_1) \|_{X_2}^2}{t_1^2} \mathrm dt}_{\leq C t_1^{-1+\varepsilon}} + \underbrace{ \int_{t_1}^{t_2} \|\Pi^{p_2-1}_{L^2(I_2)} \partial_t u (t)\|_{X_2}^2 \mathrm dt }_{\leq \|\partial_t u \|_{L^2(I_2;X_2)}^2 \leq C \delta^2 t_1^{-1+\varepsilon}} \Big] \leq C_1 \sigma^{\varepsilon m_1},
\end{align*}
with a constant $C_1 >0$ independent of $m_1$, where we used
\[
t_2 t_1^{-1+\varepsilon} = T_1 \sigma^{m_1-2} T_1^{-1+\varepsilon} \sigma^{(-1+\varepsilon)(m_1-1)} = T_1^{\varepsilon} \sigma^{-1-\varepsilon} \sigma^{\varepsilon m_1}.
\]
\noindent
\textbf{Second term:}
With the bound~\eqref{eq:dltuBd}, the nodal property \eqref{projNodal} and $k_1=t_1 = T_1 \sigma^{m_1-1}$, $k_2=T_1 \sigma^{m_1-2}(1-\sigma)$, we find
\begin{align*}
4& \int_0^{t_2} \frac{\|w(t)\|_{X_2}^2}{t_2 - t} \mathrm dt = 4 \int_0^{t_1} \frac{\|w(t)\|_{X_2}^2}{t_2 - t} \mathrm dt + 4 \int_{t_1}^{t_2} \frac{\|w(t)\|_{X_2}^2}{t_2 - t} \mathrm dt \\
&= 4 \int_0^{t_1} \frac{\|u(t) - u(t_1)t/t_1\|_{X_2}^2}{t_2 - t} \mathrm dt + 4 \int_{t_1}^{t_2} \frac{\| \int_t^{t_2} \partial_t w(\xi) \mathrm d\xi \|_{X_2}^2}{t_2 - t} \mathrm dt \\
&\leq \frac{8}{k_2} \int_0^{t_1} \|u(t)\|_{X_2}^2 \mathrm dt + \frac{8}{k_2} \int_0^{t_1} \|u(t_1)\|_{X_2}^2 \frac{t^2}{k_1^2} \mathrm dt + 4 \int_{t_1}^{t_2} \frac{\left[ \int_t^{t_2} \| \partial_t w(\xi) \|_{X_2} \mathrm d\xi \right]^2}{t_2 - t} \mathrm dt \\
&\leq \frac{C}{k_2} \int_0^{t_1} t^{\varepsilon} \mathrm dt + \frac{C t_1^{\varepsilon}}{k_1^2 k_2} \int_0^{t_1} t^2 \mathrm dt + 4 \int_{t_1}^{t_2} \| \partial_t w \|_{L^2((t,t_2);X_2)}^2 \mathrm dt \\
&\leq 2C \frac{T_1^{\varepsilon} \sigma^{1-\varepsilon}}{1-\sigma} \sigma^{\varepsilon m_1} + 4 k_2 \| \partial_t w \|_{L^2(I_2;X_2)}^2 \leq C_2 \sigma^{\varepsilon m_1},
\end{align*}
with a constant $C_2 >0$ independent of $m_1$,
where in the last step we have used the estimate~\eqref{eq:IntEst}, which yields
\begin{equation*}
4 k_2 \| \partial_t u \|_{L^2(I_2;X_2)}^2 \leq C T_1 \sigma^{m_1-2}(1-\sigma) t_1^{-1+\varepsilon} = C T_1^{\varepsilon} \frac{1-\sigma}{\sigma^{1+\varepsilon}} \sigma^{\varepsilon m_1}.
\end{equation*}
\noindent
\textbf{Third term:}
Lemma~\ref{lem:NormEquivalence} and Lemma~\ref{lem:hperrorsmoothX2} give
\[
5 \normiii{w}_{H^{1/2}_{0,}((t_2,T);X_2)}^2
\leq
C (C_{\mathrm{Int},2})^2 \sqrt{ 1 + T^2} \| w \|_{H^{1/2}_{0,}((t_2,T);X_2)}^2
\leq
C_3 \sigma^{\varepsilon m_1},
\]
with a constant $C_3 >0$ independent of $m_1$.
\noindent
\textbf{Conclusion of the proof:}
As the temporal number of degrees of freedom $M$ fulfills
\begin{equation} \label{MEstimate}
M \leq \sum_{j=1}^{m_1} \lfloor \mu_{\mathrm{hp}} j \rfloor + \lfloor \mu_{\mathrm{hp}} m_1 \rfloor m_2
\leq \mu_{\mathrm{hp}} \frac{m_1(m_1+1)}{2} + \mu_{\mathrm{hp}} m_1^2 \leq 2 \mu_{\mathrm{hp}} m_1^2
\end{equation}
with $m_2 \leq m_1$, using all the estimates above, we conclude
\[
\| w \|^2_{H^{1/2}_{0,}(J;X_2)} \leq C_4 \sigma^{\varepsilon m_1} \leq C_4 \mathrm{exp}(-2b \sqrt{M}),
\]
with a constant $C_4 >0$ independent of $m_1$, $M$
and
$b= - \varepsilon \ln \sigma / \sqrt{8\mu_{\mathrm{hp}}} > 0$,
i.e., the assertion follows.
\end{proof}
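A small counting sketch (with a hypothetical value of $\mu_{\mathrm{hp}}$) confirming the estimate~\eqref{MEstimate}, i.e.\ that the temporal degrees of freedom satisfy $M\leq 2\mu_{\mathrm{hp}}m_1^2$ whenever $m_2\leq m_1$:

```python
import math

mu_hp = 2.0   # hypothetical slope of the polynomial-degree distribution
for m1 in range(2, 50):
    m2 = m1   # worst case of the assumption m2 <= m1
    M = sum(math.floor(mu_hp * j) for j in range(1, m1 + 1)) \
        + math.floor(mu_hp * m1) * m2
    # the two-step bound from (MEstimate)
    assert M <= mu_hp * m1 * (m1 + 1) / 2 + mu_hp * m1 ** 2
    assert M <= 2 * mu_hp * m1 ** 2
```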
As Lemma~\ref{lem:thpexpconv} implies
exponential convergence bounds on {\bf Term$1$} and {\bf Term$5$},
it remains to treat {\bf Terms}$2$--$4$ in \eqref{eq:xtQopt1}.
{\bf Term$2$} and {\bf Term$4$} are identical.
We focus on {\bf Term$3$}.
Using that $Q^{1/2}_t$ is a projector in the Hilbert space
$H^{1/2}_{0,}(J)$,
the triangle inequality gives
\[
\left\| (I-Q^{1/2}_t)(I-Q^1_x) u \right\|_{H^{1/2}_{0,}(J;L^2(\mathrm{D}))}
\leq
2 \left\| u-Q^1_x u \right\|_{H^{1/2}_{0,}(J;L^2(\mathrm{D}))}\;.
\]
Thus, {\bf Term$3$} can be estimated in the same way as
{\bf Term$2$} and {\bf Term$4$}.
Using
\[
H^{1/2}_{0,}(0,T;L^2(\mathrm{D})) \simeq H^{1/2}_{0,}(J)\otimes L^2(\mathrm{D})
\simeq
L^2(\mathrm{D}) \otimes H^{1/2}_{0,}(J) \simeq L^2(\mathrm{D};H^{1/2}_{0,}(J)),
\]
we may use the $L^2(\mathrm{D})$ error bound \eqref{eq:hFEML2Bd} on the
Ritz projection $Q^1_x$ and the regularity result in Lemma~\ref{lem:H12X2Reg} for $b=T$,
in connection with the norm equivalence in
Lemma~\ref{lem:NormEquivalence} for $a=0$, $b=T$,
to arrive at
\begin{equation}\label{eq:Q1xErr}
\left\| u-Q^1_x u \right\|_{H^{1/2}_{0,}(J;L^2(\mathrm{D}))}
\leq
c N^{-2/d}
\left\| u \right\|_{H^{1/2}_{0,}(J;X_2)} \leq C N^{-2/d},
\end{equation}
with constants $c>0$, $C>0$ independent of $N$.
We combine the previous estimates
to obtain the main result of this paper.
\begin{theorem}\label{thm:Conv}
Let the space dimension $d$ be either $d=1$ or $d=2$.
Assume that the diffusion coefficient
$A\in W^{1,\infty}(\mathrm{D};\mathbb R^{d\times d}_{\mathrm{sym}})$
is uniformly positive definite, i.e., that \eqref{eq:Acoerc} is satisfied,
and that the forcing $g$ in \eqref{eq:IBVP} satisfies
the temporal analytic regularity~\eqref{eq:TimeReg}.
Furthermore, assume that the assumptions of Lemma~\ref{lem:hperrorsmoothX2} on the temporal mesh $\mathcal G^m_\sigma$ in \eqref{def:tj} with $\mu_{\mathrm{hp}} \geq 1$ and $m_2 \in \mathbb N_0$ fulfilling \eqref{muhp} and \eqref{m2}, respectively, and the temporal order distribution ${\boldsymbol{p}} \in \mathbb N^m$ in \eqref{def:pj} are satisfied.
Then the space-time Galerkin
approximation \eqref{eq:IBVPVarxt} admits a unique solution
$u^{MN} \in S^{{\boldsymbol{p}},1}_{0,}(J;\mathcal G^m_\sigma)\otimes S^1_{\Gamma_D}(\mathrm{D};\mathcal T^N_\beta)$
with the temporal $hp$-FE space $S^{{\boldsymbol{p}},1}_{0,}(J;\mathcal G^m_\sigma)$
of dimension $M = {\rm dim}(S^{{\boldsymbol{p}},1}_{0,}(J;\mathcal G^m_\sigma))$
as defined in \eqref{eq:VMt}, and
with the spatial FE space $S^1_{\Gamma_D}(\mathrm{D};\mathcal T^N_\beta)$
of continuous, piecewise linear FEM
on a sequence of
suitably graded, regular triangulations $\{\mathcal T^N_\beta\}_{N}$
in $\mathrm{D}$ ($\beta=1$, i.e., quasi-uniform partitions, if $d=1$)
of dimension $N={\rm dim}(S^1_{\Gamma_D}(\mathrm{D};\mathcal T^N_\beta))$.
Moreover,
a constant $C>0$ (independent of $M$ and $N$) exists
such that the space-time discretization \eqref{eq:IBVPVarxt}
based on these spaces satisfies the error bound
%
\begin{equation}\label{eq:hpxtErrBd}
\| u - u^{MN} \|_{H^{1/2}_{0,}(J;L^2(\mathrm{D}))}
\leq
C
\left(\exp(-b\sqrt{M}) + N^{-2/d} \right)
\end{equation}
with $b= - \varepsilon \ln \sigma / \sqrt{8\mu_{\mathrm{hp}}} > 0.$
\end{theorem}
\begin{proof}
Existence and uniqueness of the solution $u^{MN}$
were established at the end of Section~\ref{sec:xtVarForm}.
Estimate~\eqref{eq:hpxtErrBd} follows from Lemma~\ref{lem:xtQopt},
taking into account
Lemma~\ref{lem:thpexpconv}, and estimate~\eqref{eq:Q1xErr}.
\end{proof}
Balancing the terms in the upper bound \eqref{eq:hpxtErrBd} results in
\[
M \simeq O\left((\log N)^2\right) \quad \text{ or } \quad m_1 \simeq O(\log N),
\]
where $M \leq 2 \mu_{\mathrm{hp}} m_1^2 \leq 2 \mu_{\mathrm{hp}} m^2,$ see \eqref{MEstimate}.
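The balancing can be made concrete numerically: choosing $M$ so that $\exp(-b\sqrt M)=N^{-2/d}$ gives $\sqrt M=\frac{2}{bd}\log N$, i.e.\ $M=\left(\frac{2}{bd}\right)^2(\log N)^2$. A sketch with hypothetical values of $b$ and $d$:

```python
import math

b, d = 0.5, 2   # hypothetical decay rate and space dimension

def M_balanced(N):
    # M solving exp(-b*sqrt(M)) = N**(-2/d)
    return (2.0 / (b * d) * math.log(N)) ** 2

for N in (10**3, 10**6, 10**9):
    M = M_balanced(N)
    assert abs(math.exp(-b * math.sqrt(M)) - N ** (-2.0 / d)) < 1e-12
# M / (log N)^2 is a constant, i.e. M = O((log N)^2)
ratios = {round(M_balanced(N) / math.log(N) ** 2, 12) for N in (10**2, 10**5, 10**8)}
assert len(ratios) == 1
```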
Then,
the \emph{number of degrees of freedom for the space-time discretization}
behaves, as $N\to\infty$,
as
\begin{equation} \label{ST:dofsBehavior}
MN \simeq O\left(N (\log N)^2\right),
\end{equation}
i.e., it is essentially (up to the $(\log N)^2$
factor) equal to the number of
degrees of freedom for the discretization of one spatial problem.
Importantly, in the solution algorithms of~\cite{langer2020efficient},
the bound $M \simeq O\left((\log N)^2\right)$
reduces both computing time and memory requirements.
\begin{remark} \label{remk:pFEM}
Theorem~\ref{thm:Conv} remains valid for solutions $u(t,\circ)$ which depend analytically on
$t\in [0,T]$.
Classical results on
exponential rates of convergence for polynomial approximation of analytic functions in $[0,T]$
(e.g.,~\cite[Chapter~12]{Davis}) imply that for any constant number of temporal elements $m \in \mathbb N$ (e.g., $m=1$) with temporal polynomial degrees ${\boldsymbol{p}} = (p,\dots,p) \in \mathbb N^m$ with $p \in \mathbb N$, temporal exponential convergence follows when $p\to\infty$ ($p$-method).
Under the otherwise exact same assumptions as in Theorem~\ref{thm:Conv},
one obtains in place of~\eqref{eq:hpxtErrBd} the error bound
%
\begin{equation} \label{pxtErrBd}
\| u - u^{MN} \|_{H^{1/2}_{0,}(J;L^2(\mathrm{D}))}
\leq
C
\left(\exp(-bp) + N^{-2/d} \right)
\end{equation}
with $M = m p$ and constants $b>0$, $C>0$ independent of $p$ and $N$.
%
This allows us to improve \eqref{ST:dofsBehavior} to
\begin{equation} \label{ST:dofsBehaviorpFEM}
MN \simeq O(N \log N).
\end{equation}
%
\end{remark}
\section{Numerical Experiments}
\label{sec:NumExp}
\input{NumericalExperiments.tex}
\section{Conclusion}
\label{sec:Concl}
Based on a variational space-time formulation of the IBVP \eqref{eq:IBVP}--\eqref{eq:BC},
we analyzed a tensorized discretization consisting of an exponentially convergent
time-discretization of $hp$-type, combined with a first-order Lagrangian FEM in the
spatial domain, with corner-mesh refinement to account for the
presence of spatial singularities.
Stability of the considered discretization scheme is
achieved by Hilbert-transforming the temporal $hp$-trial spaces.
Details on the efficient, exponentially accurate,
numerical realization of this transformation were presented.
Several numerical examples in space dimension $d=2$ in nonconvex polygonal domains
confirmed the asymptotic error bounds.
In effect, the overall number of degrees of freedom scales essentially
as that for one instance of the spatial problem.
The presented proof of time-analyticity via eigenfunction expansions is limited to self-adjoint,
elliptic spatial differential operators.
Nonselfadjoint spatial operators which are $t$-independent
allow similar analytic regularity results
via semigroup theory (see, e.g., \cite{DSChS2001}).
The adopted space-time formulation and its operator perspective
and the error analysis extend \emph{verbatim} to self-adjoint,
elliptic spatial operators of positive order.
Also, certain nonlinear evolution equations allow for corresponding formulations
(see, e.g., \cite{ScSt17}).
Moreover, transmission problems with piecewise Lipschitz coefficients in the spatial
operators can be covered (with the local mesh refinement also at multi-material
interface points).
The present error analysis with the same convergence rates is readily
extended to a coefficient in the temporal derivative that is time-dependent and analytic in~$[0,T]$.
We finally remark that the presently adopted space-time variational formulation
will also allow for \emph{a~posteriori} time-discretization error estimation,
which is reliable and robust uniformly with respect to $p$.
Details shall be developed elsewhere.
\bibliographystyle{amsplain}
\section{\label{sec:1} Introduction}
\par All material substances interact nonlinearly with intense electromagnetic radiation leading to so-called parametric excitation or parametric instabilities \cite{armstrong62,Drake,Forslund,Kruer,background2}. In laser fusion parametric instabilities such as stimulated Brillouin and Raman scattering, filamentation and modulational instabilities \cite{armstrong62,Drake,Forslund} as well as self-focusing and plasma cavitation \cite{background3,background4} are detrimental to the coupling of the laser energy to the plasma. Stimulated Brillouin and Raman backscatter can result in a large fraction of the laser energy being scattered back out of the plasma before it reaches the critical surface, while filamentation of the laser beam creates beam break up resulting in hot spots and non-uniform illumination. To mitigate the effects of these parametric instabilities, the use of broadband or incoherent lasers is being investigated. The standard treatment used to investigate these parametric instabilities uses a coherent wave description of the laser, which is limited when dealing with a broadband laser.
\par The use of the Wigner-Moyal statistical theory has proven to be powerful in studying these instabilities in nonlinear optics \cite{WP0.8}, demonstrating the stabilization of the modulational instability, as a result of an effect similar to Landau damping, driven by random phase fluctuations of the propagating wave. In similar studies \cite{WP0.9,WP0.10}, focusing on the onset of the transverse instability in nonlinear media in the presence of a partially incoherent light, the Wigner distribution was once more confirmed as a suitable approach. This formalism is particularly well suited for nonlinear optics because of the validity of the paraxial wave approximation, which justifies a forward propagating ansatz for the evolution of electromagnetic waves in dispersive nonlinear media.
\par The Wigner-Moyal statistical approach to wave propagation has also enabled significant progress in the study of photon Landau damping \cite{photonlandau} and photon acceleration \cite{stochastic,titobook,resonant,trines2,trines3,yablon,wilks89}, where a time-dependent refractive index leads to a change in the frequency of electromagnetic waves (in contrast to a position-dependent refractive index, which leads to a change in wave number but not frequency). Both the modulational instability and photon acceleration have been extended to the study of drift waves interacting with zonal flows \cite{diamond1,diamond2,trines4,trines5,dodin2,dodin3,dodin4,dodin5}. More exotic applications include sea waves \cite{seawaves}, magneto-hydrodynamics \cite{weinberg}, dispersive Alfv\'en waves \cite{alfven1,alfven2} and neutrino-plasma interactions \cite{neutrino1,neutrino2,neutrino3}.
\par In laser-plasma interactions, in general, the standard Wigner-Moyal formalism is a limitation, as many critical aspects in ICF, fast ignition and several applications in laser-plasma and astrophysical scenarios demand a detailed analysis of the backscattered radiation. Early results on the scattering of electromagnetic waves by turbulent plasma were obtained by Bingham \textit{et al.} \cite{bingham_scatter}. In this paper we extend the work of Santos \textit{et al.} \cite{JorgePRL}, in which stimulated Raman scattering by a broadband pump was investigated using the Wigner-Moyal statistical approach, to the investigation of stimulated Brillouin scattering.
\par The inclusion of bandwidth or incoherence effects in laser driven parametric instabilities has also been studied extensively using various approaches. The addition of small random deflections to the phase of a plane wave was shown to significantly suppress the three-wave decay instability \cite{WP0.7}, which was one of the first suggestions of the manipulation of the laser coherence as a way to avoid its deleterious effects. The threshold values for some electrostatic instabilities can also be effectively increased either by applying a random amplitude modulation to the laser or by the inclusion of a finite bandwidth of the pump wave \cite{WP0.1,WP0.5}. A new method for the inclusion of finite bandwidth effects on parametric instabilities, allowing arbitrary fluctuations of any group velocity, has also been developed \cite{WP0.4,WP0.3,pesme1,pesme2,pesme3,pesme4}. As far as Stimulated Raman Scattering is concerned, it became clear from these earlier works that, although it may seriously decollimate a coherent laser beam, laser bandwidth is an effective way to suppress the instability \cite{WP0.2}. The effects of laser beam incoherence induced by ``random phase plates'' have been studied extensively, and a reduction in the growth of many instabilities, including stimulated Raman and Brillouin scattering, has been demonstrated experimentally \cite{rpp1,rpp2,rpp3,rpp4,rpp5,rpp6}.
\par Mitigation of laser-plasma instabilities through increasing the bandwidth of the driving laser beam(s) has been investigated by several groups \cite{pesme3, follett1, hanwen,hansen1, zhao1, dorrer, bates}. For a parametric instability with ``coherent'' growth rate $\gamma_0$ and an incoherent pump laser with $\Delta\omega_0 \gtrsim \gamma_0$, Pesme \emph{et al.} \cite{pesme3} use an ``incoherent'' growth rate $\gamma_\mathrm{inc} = 4\gamma_0^2/\Delta \omega_0$. In theoretical studies \cite{pesme3,follett1,hanwen}, a bandwidth of $\Delta \omega_0 > 10\gamma_0$ or $\Delta \omega_0/\omega_0 \sim 5\%$ is often employed. In experimental studies \cite{hansen1, zhao1, dorrer, bates}, $\Delta \omega_0/\omega_0$ is typically much smaller, $\Delta \omega_0/\omega_0 < 1\%$, probably dictated by the properties of the intrinsic bandwidth of the laser gain medium.
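As a numerical illustration of this scaling (taking the formula $\gamma_\mathrm{inc}=4\gamma_0^2/\Delta\omega_0$ quoted above at face value, with a hypothetical unit for $\gamma_0$), the suppression factor relative to the coherent growth rate is simply $4\gamma_0/\Delta\omega_0$:

```python
gamma0 = 1.0   # coherent growth rate in hypothetical units

def gamma_inc(dw0):
    # incoherent growth rate used by Pesme et al. for dw0 >~ gamma0
    return 4 * gamma0 ** 2 / dw0

# a bandwidth of 10*gamma0 suppresses the growth rate to 0.4*gamma0
assert gamma_inc(10 * gamma0) == 0.4 * gamma0
# larger bandwidth, stronger suppression
assert gamma_inc(100 * gamma0) < gamma_inc(10 * gamma0) < gamma0
```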
\par Previous studies have also shown that a formalism that intrinsically describes the statistical properties of broadband lasers would allow for further theoretical progress and a systematic study on the control of parametric instabilities by spectral shaping of the pump laser.
\par A statistical description of light can be achieved through the Wigner-Moyal formalism of quantum mechanics, which provides, in its original formulation, a one-mode description of systems ruled by Schr\"odinger-like equations. In order to address other processes apart from the direct forward scattering \cite{trines1}, a generalization of this Photon Kinetic theory (GPK) was developed in \cite{JorgeJMP}. This new formulation is completely equivalent to the full Klein-Gordon equation underpinning wave propagation in plasmas and was readily employed to derive a general dispersion relation for stimulated Raman scattering driven by white light \cite{JorgePRL}.
\par In this paper, we focus on the study of the properties of stimulated Brillouin scattering (SBS) driven by a broadband pump. The suppression of the growth rate of the instability as a result of the inclusion of bandwidth in the pump wave is qualitatively and quantitatively verified for realistic experimental parameters. For the sake of completeness, the less standard calculations are detailed in the appendices of the paper.
\par This paper is organized as follows. In section \ref{sec:2}, we employ GPK to derive a general dispersion relation for SBS driven by a spatially stationary field with arbitrary statistics. We perform a detailed analytical study of different regimes of SBS and compare it with classical results for the monochromatic limit of the instability. For the first time, the whole domain of unstable wave numbers is numerically explored for a wide range of bandwidth choices. Finally, in section \ref{sec:4}, we summarize the main results and state the conclusions.
\section{\label{sec:2} Broadband Stimulated Brillouin Scattering}
\par We start by restating the fluid equation describing the plasma response, together with the dependence of the radiation driving term on the plasma response obtained from GPK, which generalizes the equivalent result for monochromatic waves derived from the wave equation for the vector potential. These two equations are then the basis for deriving the dispersion relation relevant to the scenario under study.
\par In the following we use normalized units, where length is normalized to $c/\omega_{p0}$, with $c$ the velocity of light in vacuum and $\omega_{p0}=(4\pi e^2n_{e0}/m_ec^2)^{1/2}$ the electron plasma frequency, time to $1/\omega_{p0}$, mass and absolute charge to those of the electron, respectively, $m_e$ and $e$, with $e>0$. The plasma is modeled as an interpenetrating fluid of both electrons and ions, with $n_{e0}$ and $n_{i0}$ their equilibrium (zeroth order) particle densities, respectively. Densities are also normalized to the equilibrium electron density, such that the normalized zero-order densities are $n_{e0}=1$ and $n_{i0}=1/Z$, where $Z$ is the electric charge of the ions in units of $e$.
\par Following the procedure outlined by Santos \textit{et al.} \cite{JorgePRL}, we define the normalized vector potential of the circularly polarized pump field as $\textbf a_p(\textbf r,t)=2^{-1/2}(\hat z+i\hat y)a_0\int d\textbf kA(\textbf k)$exp$[i(\textbf k.\textbf r-(\textbf k^2+1)^{1/2}t)]$, where $\textbf a_p=e\textbf A_p/m_ec^2$, $(\textbf k^2+1)^{1/2}\equiv\omega(\textbf k)$ is the monochromatic dispersion relation in a uniform plasma, and $\textbf A_p$ is the vector potential of the pump field. We also allow for a stochastic component in the phase of the vector potential $A(\textbf k)=\hat{A}(\textbf k)$exp$[i\psi(\textbf r,t)]$ such that $\left<\textbf a_p^*(\textbf r+\textbf y/2,t).\textbf a_p(\textbf r-\textbf y/2,t)\right>=a_0^2m(\textbf y)$ is independent of $\textbf r$ with $m(0)=1$ and $|m(\textbf y)|$ is bounded between $0$ and $1$, which means that the field is spatially stationary, i.e., the phase average of the pump field $\langle\ldots\rangle$ is not a function of $\textbf r$. In this section, $\tilde{q}$ denotes the first-order component of a generic quantity $q$. Unless specifically stated, the same notation for the functions and their Fourier transforms is used, as the argument of such functions (either $(\textbf r,t)$ or $(\textbf k,\omega)$) avoids any confusion. To obtain a dispersion relation for SBS we must couple the typical plasma response to an independently derived driving term, obtained within the GPK framework.
\subsection{Plasma response and driving term}
\par In our previous work \cite{JorgePRL}, we studied the interaction of partially coherent light with electron plasma waves, whose (undriven, undamped) dispersion relation is given by $\omega_L^2 = 1 + (T_e/m_e) k_L^2$, with $T_e$ the electron temperature, thus covering stimulated Raman back- and forward scattering, and the relativistic modulational instability. In this work, we aim to study the interaction of partially coherent light with ion acoustic plasma waves, whose (undriven, undamped) dispersion relation is $\omega(k)^2 = (Z T_e/M) k^2$, with $M$ the ion mass. This allows us to study stimulated Brillouin back- and forward scattering, in both the weakly and strongly coupled regimes. We consider a fluid model for the plasma ion response to the ponderomotive force of the driving laser beam.
\par Combining the continuity and conservation of momentum equations for each species and closing the system with an isothermal equation of state for the electrons, we can readily state, without further details, the plasma response to the propagation of a light wave $\textbf a_p$, beating with its scattered component $\tilde{\textbf a}$ to produce the ponderomotive force of the laser; we refer the reader to \cite{Drake,Forslund,Kruer,Silva99}:
\begin{equation} \label{plasmaresponse}
\left(\frac{\partial^2}{\partial t^2}-2\tilde\nu\partial_t-c_S^2\nabla^2\right)\tilde n=\frac{Z}{M}\nabla^2\mathrm{Re}[\textbf a_p.\tilde{\textbf a}],
\end{equation}
where $c_S\equiv\sqrt{Z T_e/M}$ is the ion sound velocity and $\tilde\nu$ an integral (damping) operator whose Fourier transform is $\nu|\textbf k_S|c_S$. Other models for the plasma response e.g. with more sophisticated descriptions of $\tilde\nu$ can be easily included in our analysis.
\par We now need to describe how the incident pump wave and scattered radiation interact with the plasma. In standard formulations, the starting point is the wave equation for the vector potential with the corresponding source term given by the current associated with the plasma perturbations \cite{Drake,Forslund,Kruer,WP0.15}.
In our approach, we derive the dependence of the driving term $\mathrm{Re}[\textbf a_p.\tilde{\textbf a}]$ on the plasma perturbation, using GPK. For the sake of completeness the derivation is given in \ref{AppendixDrivingTerm}. The driving term obtained within the framework of GPK is \cite{JorgeJMP,JorgePRL,Silva99}:
\begin{equation} \label{drivingterm}
W_{\mathrm{Re}\left[\textbf a_p.\tilde{\textbf a}\right]}=\frac 12\tilde n\left[\frac{\rho_0\left(\textbf k+\frac{\textbf k_S}{2}\right)}{D_s^-}+\frac{\rho_0\left(\textbf k-\frac{\textbf k_S}{2}\right)}{D_s^+}\right],
\end{equation}
where $W_{\mathrm{Re}\left[\textbf a_p.\tilde{\textbf a}\right]}$ represents the spatial and temporal Fourier transform of the Wigner function of $\mathrm{Re}\left[\textbf a_p.\tilde{\textbf a}\right]$. We observe that $W_f \equiv W_f (\textbf r,\textbf k , t)$, and thus the Fourier transform is $W_f (\textbf k_S,\textbf k , \omega_S)$, and $\omega_S$ and $\textbf k_S$ represent the frequency and wave vector of the ion acoustic wave. In Eq. (\ref{drivingterm}), $D_s^\pm$ is given by $D_s^\pm=\omega_S^2\mp\left[\textbf k.\textbf k_S-\omega_S\omega\left(\textbf k\mp\frac{\textbf k_S}{2}\right)\right]$, where we recall that $\omega$ is a function of $\textbf k$ via the monochromatic dispersion relation $\omega(\textbf k)=(\textbf k^2+1)^{1/2}$.
\subsection{General dispersion relation for Stimulated Brillouin Scattering and classical monochromatic limit}
\par We can now perform temporal and spatial Fourier transforms on the plasma response Eq. (\ref{plasmaresponse}), ($\partial_t\rightarrow i\omega_S,\nabla_{\textbf r}\rightarrow -i\textbf k_S$), to obtain
\begin{equation}
\tilde n=\frac ZM\frac{k_S^2}{\omega_S^2+2i\nu\omega_S|\textbf k_S|c_S-c_S^2\textbf k_S^2}\mathrm{Re}[\textbf a_p.\tilde{\textbf a}],
\end{equation}
which can now be used with Eq.(\ref{drivingterm}). Taking advantage of one of the properties of the Wigner function \cite{WignerTransform.1,WignerTransform.2,WignerTransform.3,WignerTransform.4}
that states that
\begin{equation}
\int W_{f.g}d\textbf k=f^*g\Rightarrow\int\frac{W_{\mathrm{Re}\left[\textbf a_p.\tilde{\textbf a}\right]}}{\mathrm{Re}\left[\textbf a_p.\tilde{\textbf a}\right]}d\textbf k=1
\end{equation}
we obtain the dispersion relation:
\begin{equation}
\fl 1=\frac{\omega_{pi}^2}{2}\frac{\textbf k_S^2}{\omega_S^2+2i\nu\omega_S|\textbf k_S|c_S-c_S^2\textbf k_S^2}\int \left[\frac{\rho_0\left(\textbf k+\frac{\textbf k_S}{2}\right)}{D_s^-}+\frac{\rho_0\left(\textbf k-\frac{\textbf k_S}{2}\right)}{D_s^+}\right]d\textbf k,
\end{equation}
where $\omega_{pi}=\sqrt{Z/M}$ is the ion plasma frequency (in normalized units) and $f^*$ represents the complex conjugate of $f$.
\par By making an appropriate change of variables, our general dispersion relation can be written in a more compact way as
\begin{equation} \label{GeneralDispersionRelation}
1=\frac {\omega_{pi}^2}{2}\frac{\textbf k_S^2}{\omega_S^2+2i\nu\omega_S|\textbf k_S|c_S-c_S^2\textbf k_S^2} \int\rho_0(\textbf k)\left(\frac{1}{D^+}+\frac{1}{D^-}\right)d\textbf k ,
\end{equation}
with $D^\pm(\textbf k)=[\omega(\textbf k)\pm\omega_S]^2-(\textbf k\pm\textbf k_S)^2-1$. Equation (\ref{GeneralDispersionRelation}) is the main result of this section. We observe that, given the statistical properties of the pump field, it is possible to evaluate Eq. (\ref{GeneralDispersionRelation}). This general dispersion relation can also be used to understand how spectral shaping can modify and mitigate Stimulated Brillouin Scattering.
\par We first apply our general dispersion relation to the simple and common case of a pump plane wave of wave vector $\textbf k_0$, which means that $\rho_0(\textbf k)=a_0^2\delta(\textbf k-\textbf k_0)$. For the purpose of the comparisons below, we drop the contribution of the damping term by setting $\nu=0$. The dispersion relation then becomes
\begin{equation}
\eqalign{
1=\frac {\omega_{pi}^2}{2}\frac{\textbf k_S^2}{\omega_S^2-c_S^2\textbf k_S^2}a_0^2\left\{\frac 1{[\omega(\textbf k_0)+\omega_S]^2-(\textbf k_0+\textbf k_S)^2-1}\right.+ \cr
\phantom{1=}+\left.\frac 1{[\omega(\textbf k_0)-\omega_S]^2-(\textbf k_0-\textbf k_S)^2-1}\right\}.
}
\end{equation}
\par This result recovers the dispersion relation of Refs. \cite{Drake,Forslund,Kruer,lehmanndisp}, obtained for a coherent pump wave $\textbf A_S=\textbf A_{L0}\cos(\textbf k_0.\textbf r-\omega_0t)$, if we account for the difference in polarization and use $\omega_0=\omega(\textbf k_0)$. All the conclusions derived in Refs. \cite{Drake,Forslund,Kruer}, based on this dispersion relation, are then consistent with the predictions of GPK \cite{JorgeJMP}.
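This monochromatic limit can also be checked numerically: shrinking the support of a flat spectrum around $k_0$, the $\textbf k$-integral in Eq.~(\ref{GeneralDispersionRelation}) approaches $a_0^2\left[1/D^+(k_0)+1/D^-(k_0)\right]$. A 1D sketch with hypothetical, non-resonant parameter values (chosen so that $D^\pm$ does not vanish on the support):

```python
import numpy as np

a0, k0, kS, wS = 0.1, 5.0, 2.0, 0.3   # hypothetical, non-resonant parameters
w = lambda k: np.sqrt(k ** 2 + 1)      # photon dispersion, normalized units
Dp = lambda k: (w(k) + wS) ** 2 - (k + kS) ** 2 - 1
Dm = lambda k: (w(k) - wS) ** 2 - (k - kS) ** 2 - 1

def integral(sigma, n=20001):
    # trapezoidal quadrature of rho0 * (1/D+ + 1/D-) over a flat-top support
    k = np.linspace(k0 - sigma, k0 + sigma, n)
    dk = k[1] - k[0]
    f = (a0 ** 2 / (2 * sigma)) * (1 / Dp(k) + 1 / Dm(k))
    return (f.sum() - 0.5 * (f[0] + f[-1])) * dk

mono = a0 ** 2 * (1 / Dp(k0) + 1 / Dm(k0))   # delta-function (plane-wave) value
assert abs(integral(1e-3) - mono) < 1e-4 * abs(mono)
```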
\subsection{1D water-bag zero-order photon distribution function}
\par The full power of GPK becomes evident for broadband pump wave fields, where analytical results are not possible based on the standard formalism. In order to illustrate the consequences of broadband light, we consider a one-dimensional water-bag zero-order distribution function as the model for our photon distribution
\begin{equation} \label{water-bag}
\rho_0(\textbf k)=\frac{a_0^2}{\sigma_1+\sigma_2}[\theta(k-k_0+\sigma_1)-\theta(k-k_0-\sigma_2)],
\end{equation}where $\theta(k)$ is the Heaviside function and $\sigma_1$ ($\sigma_2$) represents the spectral bandwidth to the left (right) of the central wave number, $k_0$.
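The prefactor in Eq.~(\ref{water-bag}) normalizes the distribution to the total pump intensity, $\int\rho_0\,dk=a_0^2$, consistent with the plane-wave case $\rho_0=a_0^2\delta(k-k_0)$. A quick numerical check with hypothetical parameter values:

```python
import numpy as np

a0, k0, s1, s2 = 0.1, 5.0, 0.2, 0.3   # hypothetical water-bag parameters

def rho0(k):
    # Eq. (water-bag): flat top of height a0^2/(s1+s2) on [k0-s1, k0+s2)
    return a0 ** 2 / (s1 + s2) * ((k >= k0 - s1) & (k < k0 + s2))

k = np.linspace(k0 - 1.0, k0 + 1.0, 200_001)
dk = k[1] - k[0]
# Riemann sum of rho0 over k recovers the total intensity a0^2
assert abs(rho0(k).sum() * dk - a0 ** 2) < 1e-5
```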
\par For this distribution function, the autocorrelation function of the random phase $\psi(x)$ satisfies
\begin{equation}
\left<\exp\left[-i\psi\left(x+\frac y2\right)+i\psi\left(x-\frac y2\right)\right]\right>=e^{-iy\tilde\sigma}\frac{\sin(y\bar\sigma)}{y\bar\sigma},
\end{equation}where $\tilde\sigma\equiv (\sigma_2-\sigma_1)/2$ and $\bar\sigma\equiv(\sigma_1+\sigma_2)/2$. The correlation length of this distribution is $\approx \pi/(\sqrt 2\,\bar\sigma)$.
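The closed form above is the normalized Fourier transform of the flat water-bag spectrum, with the phase measured relative to $k_0$ (a Wiener--Khinchin-type relation). The following is a minimal numerical check, with illustrative bandwidth values of our own choosing and the $e^{-i(k-k_0)y}$ sign convention:

```python
import numpy as np

k0, sigma1, sigma2 = 80.0, 2.0, 6.0           # illustrative values
sig_t = (sigma2 - sigma1) / 2.0               # \tilde\sigma
sig_b = (sigma1 + sigma2) / 2.0               # \bar\sigma

def autocorr_numeric(y, n=200001):
    # normalized Fourier transform of the flat spectrum over [k0-sigma1, k0+sigma2]
    k = np.linspace(k0 - sigma1, k0 + sigma2, n)
    f = np.exp(-1j * (k - k0) * y) / (sigma1 + sigma2)
    dk = k[1] - k[0]
    return (0.5 * (f[0] + f[-1]) + f[1:-1].sum()) * dk   # trapezoid rule

def autocorr_closed(y):
    # exp(-i*y*sig_t) * sinc form quoted in the text
    return np.exp(-1j * y * sig_t) * np.sin(y * sig_b) / (y * sig_b)
```

The two agree to numerical precision, confirming that the sinc-shaped autocorrelation follows directly from the water-bag spectrum.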
\par A simplified dispersion relation for the water-bag distribution function of Eq. (\ref{water-bag}) can be derived (see \ref{AppendixWaterBag}) yielding
\begin{equation}
\eqalign{
1=\frac{a_0^2\omega_{pi}^2}{8\bar\sigma}\frac{k_S}{\omega_S^2-c_S^2k_S^2}\left[\frac{k_S^2}{k_S^2-\omega_S^2}\log\left(\frac{D_1^-D_2^+}{D_1^+D_2^-}\right)+\right. \cr
\phantom{1+}\left.+\frac{2\omega_S k_S}{\sqrt{Q_0}}(\arctanh\,b^++\arctanh\,b^-)\right],
}
\label{water_bag_distribution_function}
\end{equation}
with $\omega_{0i}=\sqrt{[k_0+(-1)^i\sigma_i]^2+1}$, $D_i^\pm=\omega_S^2-k_S^2\pm 2[(k_0+(-1)^i\sigma_i)k_S-\omega_{0i}\omega_S]$, $Q_0=(k_S^2-\omega_S^2)(k_S^2-\omega_S^2+4)$, $Q^\pm=\prod_{i=1}^2[D_i^\pm+(k_S-\omega_S)(\omega_S\mp 2\omega_{0i})]$ and $b^\pm=2k_S^2(\omega_S+k_S)\sqrt{Q_0}(2\bar\sigma+\omega_{01}-\omega_{02})/\left[Q_0k_S^2-Q^\pm(\omega_S+k_S)^2\right]$.
\par We are interested in the maximum growth rate of SBS. Analytical results can be obtained in the case where all the photons of the distribution propagate in an underdense medium, which implies that $k_0+(-1)^i\sigma_i\gg 1$. This also guarantees that $k_0>\sigma_1$, which assures that $\rho_0(k)$ represents a broadband (pump) source of forward propagating photons, as expected. From this condition, the approximations $\omega_{0i}\approx k_0+(-1)^i\sigma_i$ and $b^\pm\approx 0$ are also valid.
\par The dispersion relation (\ref{water_bag_distribution_function}) then becomes
\begin{equation}
\eqalign{
1=\frac{a_0^2\omega_{pi}^2}{8\bar\sigma}\frac{k_S^3}{\omega_S^2-c_S^2k_S^2}\frac{1}{k_S^2-\omega_S^2}\left\{\ln\left[\frac{2(k_0-\sigma_1)+(\omega_S+k_S)}{2(k_0-\sigma_1)-(\omega_S+k_S)}\right]+\right. \cr
\phantom{1=}\left.+\ln\left[\frac{2(k_0+\sigma_2)-(\omega_S+k_S)}{2(k_0+\sigma_2)+(\omega_S+k_S)}\right]\right\}.
}
\label{simplified}
\end{equation}
\par In the weak coupling limit, $a_0^2 \ll 2 c_S k_S \omega_0 c_S^2/\omega_{pi}^2$, the dispersion of the plasma mode almost fully coincides with the ideal dispersion of an ion-acoustic plasma wave.
The resonance condition for SBS can then be expressed as $\omega_S\sim k_Sc_S$, with $c_S\ll 1$ \cite{Drake,Forslund,Kruer}. Furthermore, the backscattering regime of stimulated Brillouin scattering (SBBS) is known to provide the highest growth rates \cite{Drake,Forslund,Kruer}, so we consider one of the terms $D_i^+$ resonant (corresponding to the contribution of the downshifted photons of the distribution function). By making the $D_1^+$ term resonant ($D_1^+=0\Rightarrow k_{L_{SBBS}}^m\approx 2(k_0-\sigma_1)/(1+c_S)$), we are considering the contribution of the photons of the lowest wave number, while with $D_2^+$ ($D_2^+=0\Rightarrow k_{L_{SBBS}}^M\approx 2(k_0+\sigma_2)/(1+c_S)$) we are searching for those of the highest wave number. This means that $k_S$ is of the order of $k_0$ and the range of unstable wave numbers is then given by:
\begin{equation} \label{range}
k_S\in\left[\frac{2}{1+c_S}(k_0-\sigma_1),\frac{2}{1+c_S}(k_0+\sigma_2)\right].
\end{equation}
\par We consider the upper limit case (as we will later see, the growth rate of the instability is within the same order of magnitude for the whole range of unstable wave numbers) and we note that $\omega_S\sim k_Sc_S$, with $c_S\ll 1$, implies that both $\omega_S\ll k_S$ and $\omega_S\ll k_0$.
\par To determine the growth rate of the instability in the weak coupling limit, we now write $\omega \approx k_Sc_S+i\Gamma$, with $\Gamma$ being the real growth rate of the instability and $|\Gamma|\ll k_Sc_S$. The dispersion relation (\ref{simplified}) can then be rewritten in the form $1=A\ln B$ where
\begin{equation}
A=\frac{a_0^2\omega_{pi}^2}{8\bar\sigma}\frac{k_S^3}{\omega_S^2-c_S^2k_S^2}\frac{1}{k_S^2-\omega_S^2}\approx\frac{a_0^2\omega_{pi}^2(k_0+\sigma_2)}{4i(\sigma_1+\sigma_2)\Gamma c_Sk_S};
\end{equation}
\begin{equation}
\eqalign{
B=\frac{2(k_0-\sigma_1)+(\omega_S+k_S)}{2(k_0-\sigma_1)-(\omega_S+k_S)}\frac{2(k_0+\sigma_2)-(\omega_S+k_S)}{2(k_0+\sigma_2)+(\omega_S+k_S)}\approx \cr
\phantom{B}\approx\frac{2k_0-\sigma_1+\sigma_2}{2(\sigma_1+\sigma_2)+i\Gamma}\frac{i\Gamma}{2(k_0+\sigma_2)}.
}
\end{equation}
\par We now take the imaginary part of the dispersion relation, working with a real $\Gamma$ and using the fact that, for a complex $Z=\rho e^{i\theta}$, with real $\rho$ and $\theta$, $\ln Z=\ln\rho+i\theta$. We get
\begin{equation} \label{general_case}
\Gamma c_Sk_S=\frac{a_0^2\omega_{pi}^2(k_0+\sigma_2)}{4(\sigma_1+\sigma_2)}\arctan\left[\frac{2(\sigma_1+\sigma_2)}{\Gamma}\right].
\end{equation}
\par With this result we are now able to compare our results for backscattering with those of Refs. \cite{Drake,Forslund,Kruer}. We found $k_{L_{SBBS}}^m\approx 2(k_0-\sigma_1)/(1+c_S)$ and $k_{L_{SBBS}}^M\approx 2(k_0+\sigma_2)/(1+c_S)$, which implies that, for the monochromatic limit, $k_{L_{SBBS}}^{m,pw}=k_{L_{SBBS}}^{M,pw}\equiv k_{L_{SBBS}}^{pw}= 2k_0/(1+c_S) \approx 2k_0(1-c_S)\approx 2k_0-2\omega_0c_S$, because $\omega_0\equiv\omega_{01}(\sigma_1=0)=\omega_{02}(\sigma_2=0)\approx k_0$, where we assume that the ion acoustic velocity is much smaller than the speed of light, $c_S \ll 1$. This recovers the result of Refs. \cite{Drake,Forslund,Kruer} for the wave number that maximizes the growth rate.
\par To determine the maximum growth rate in the weak coupling, or weak field (\textit{wf}), scenario, we take the limit $\sigma_1,\sigma_2\rightarrow0$ and make use of $\arctan x\sim x$ as $x\rightarrow0$:
\begin{equation} \label{weak_field_monochromatic}
\Gamma_{SBBSwf}^{pw,\mathrm{max}}=\frac{a_0\omega_{pi}}{2\sqrt{c_S}},
\end{equation}which also coincides with the monochromatic result in Refs. \cite{Drake,Forslund,Kruer} if we consider the already discussed correction for the polarization.
\par We now go back to the general case of Eq. (\ref{general_case}) and work in the opposite limit, $(\sigma_1+\sigma_2)\gg \Gamma$, so that the approximation $\arctan x\sim \pi/2- 1/x$ as $x\rightarrow\infty$ can be used, yielding
\begin{equation} \label{approximation_weak_field}
\Gamma_{SBBSwf}^{\mathrm{max}}=\frac{\pi a_0^2\omega_{pi}^2}{16c_Sk_0}\frac{k_0+\sigma_2}{\sigma_1+\sigma_2}
\left[1+\frac{a_0^2\omega_{pi}^2}{16c_Sk_0}\frac{k_0+\sigma_2}{(\sigma_1+\sigma_2)^2}\right]^{-1}.
\end{equation}
\par The corresponding saturation value for large bandwidth is
\begin{equation}
\Gamma_{SBBSwf}^{\mathrm{max},sat}=\frac{\pi a_0^2\omega_{pi}^2}{16c_Sk_0}.
\end{equation}
\par We now consider the strong coupling limit, i.e., we assume that $|\omega_S|\gg k_Sc_S$, which happens when $a_0^2 > 2 c_S k_S \omega_0 c_S^2/\omega_{pi}^2$ \cite{Drake,Forslund,Kruer}.
We work in the underdense limit, as in the weak coupling case, so that the range of unstable wave numbers still holds and we use $k_S\approx 2(k_0+\sigma_2)$ as the wave number for maximum growth, which means that $k_S$ is still of the order of $k_0$. We also neglect $|\omega_S|$ when compared to $k_0$, which establishes the scale $k_S c_S\ll |\omega_S|\ll k_S\approx k_0$, consistent with $c_S \ll 1$. This means that we are not neglecting the magnitude of the imaginary part of $\omega_S$ when compared to its real part.
\par We now expand $\omega_S=\alpha+i\beta$, with real $\alpha$ and $\beta$ and $|\alpha|, |\beta| \gg k_S c_S$, so that the dispersion relation yields (see \ref{AppendixStrongFieldLimit})
\begin{equation}
\omega_S=\left(\frac{k_Sa_0^2\omega_{pi}^2}{2}\right)^{1/3}\left(\frac 12+\frac{\sqrt 3}{2}i\right),
\end{equation}
which is, once more, the result presented in Refs. \cite{Drake,Forslund,Kruer} with the usual polarization considerations. The maximum growth rate in the strong coupling limit is then
\begin{equation}
\Gamma_{SBBSsf}^{pw,\mathrm{max}}=\frac{\sqrt 3}{2}\left(\frac{k_Sa_0^2\omega_{pi}^2}{2}\right)^{1/3}.
\end{equation}
\subsection{Numerical solution of the complete dispersion relation}
\par We now examine the numerical solution of the complete dispersion relation in order to illustrate the evolution of the strength of the instability as a function not only of the bandwidth but also of the wave number of the scattered wave itself.
\par In Fig. \ref{figuraB1} we show the maximum growth rate of the Brillouin instability as a function of the bandwidth parameter, $\sigma_2$, with $\sigma_1$ kept fixed. As expected, Eq. (\ref{approximation_weak_field}) is a good approximation to the complete solution only when we are dealing with large bandwidths: the difference between the approximate and the numerical solutions increases as the bandwidth ($\sigma_2$) decreases, and the two start to agree as $\sigma_2$ approaches $k_0$. Near the monochromatic limit, only the numerical solution should be considered, since the choice $\sigma_1=0.1k_0$ still accounts for a considerable difference between $\Gamma_{\mathrm{max}}(\sigma_1=0.1k_0,\sigma_2=0)$ and the maximum growth rate in the fully monochromatic limit, $\Gamma_{\mathrm{max}}(\sigma_1=\sigma_2=0)$, expressed by Eq. (\ref{weak_field_monochromatic}). It is clear that a bandwidth as small as $10\%$ can still reduce the growth rate of the instability by a factor of more than $100$, which is significant.
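The scale of this suppression can be bounded with the closed-form limits: comparing the monochromatic weak-coupling rate of Eq. (\ref{weak_field_monochromatic}) with the large-bandwidth saturation value $\Gamma_{SBBSwf}^{\mathrm{max},sat}$ gives the maximum possible reduction. A small arithmetic sketch, using the parameters quoted in the caption of Fig. \ref{figuraB1}:

```python
import numpy as np

a0, wpi, cs, k0 = 0.1, 0.1, 0.01, 80.0   # parameters of the figure caption

# monochromatic weak-coupling maximum growth rate, Eq. (weak_field_monochromatic)
gamma_pw = a0 * wpi / (2.0 * np.sqrt(cs))            # = 0.05

# large-bandwidth saturation value of the broadband growth rate
gamma_sat = np.pi * a0**2 * wpi**2 / (16.0 * cs * k0)

suppression = gamma_pw / gamma_sat                   # ~2e3 for these numbers
```

The saturated broadband rate is over three orders of magnitude below the plane-wave value for these parameters, consistent with the hundredfold-plus reduction already visible at $10\%$ bandwidth.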
\begin{figure}
\begin{center}\includegraphics[scale=0.55]{Gamma_maxVsSigma_2_1_2}
\end{center}
\caption{Maximum growth rate of SBBS as a function of bandwidth - $a_0=0.1$, $k_0=80.0$, $\sigma_1=0.1k_0$, $c_S=0.01$, $\omega_{pi}=0.1$. Red line - numerical solution; blue line - analytical limit for $\Gamma\ll (\sigma_1+\sigma_2)$ of Eq. (\ref{approximation_weak_field})}\label{figuraB1}
\end{figure}
\par Fig. \ref{figuraB2} shows the same results for the case of $\sigma_1\approx 0$. As in the previous case, the approximation of Eq. (\ref{approximation_weak_field}) agrees with the numerical solution as $\sigma_2$ approaches $k_0$. The monochromatic limit of Eq. (\ref{weak_field_monochromatic}) can also be confirmed at the origin of the plot, as expected.
\begin{figure}
\begin{center}\includegraphics[scale=0.47]{mergedimage_2}
\end{center}
\caption{Maximum growth rate of SBBS as a function of bandwidth - $a_0=0.1$, $k_0=80.0$, $\sigma_1\approx 0$, $c_S=0.01$, $\omega_{pi}=0.1$. Red line - numerical solution; blue line - analytical limit for $\Gamma\ll (\sigma_1+\sigma_2)$. In the inset the growth rate is shown for the regime where $\sigma_2/k_0\ll 1$}\label{figuraB2}
\end{figure}
\par We now study the behavior of the growth rate of the instability as a function of the wave number of the scattered wave. In Fig. \ref{figuraB6}, we plot the growth rate for a set of bandwidths as a function of the wave number of the instability. We observe a very good agreement with the range of unstable wave numbers predicted by Eq. (\ref{range}). The lower limit does not depend on $\sigma_2$ and remains fixed as we increase the bandwidth, while the upper bound grows linearly with $\sigma_2$.
\begin{figure}
\begin{center}\includegraphics[scale=0.45]{Gamma_maxVsSigma_2_6_2}
\end{center}
\caption{Growth rate of SBBS as a function of the wave number of the scattered wave for different bandwidths of the water-bag (from the left to the right: $\sigma_2=0.1k_0,0.2k_0,0.3k_0,0.4k_0,0.5k_0,0.6k_0$, with $a_0=0.1$, $k_0=80.0$, $\sigma_1\approx 0$, $c_S=0.01$ and $\omega_{pi}=0.1$)}\label{figuraB6}
\end{figure}
\par We should also note that the flat structure observed indicates that the growth rate remains within the same order of magnitude over the full range of unstable wave numbers, meaning that the instability can grow over a wide range of wave numbers and lead to a significant level of ion acoustic turbulence. This holds even for relatively small bandwidths, as is clear for $\sigma_2>0.1k_0$.
\par In Fig. \ref{figuraB4}, the variation of the growth rate of SBBS as a continuous function of both the bandwidth of the pump and the instability wave number allows for a global picture of the instability. As expected, we observe a strong dependence of the instability on the bandwidth of the radiation used as a driver. For a bandwidth of just $1\%$ in $k_0$, the instability is already reduced to $10\%$ of the plane wave limit, which justifies the use of bandwidth as a means of significantly mitigating or reducing the growth of the instability.
\begin{figure}
\begin{center}\includegraphics[scale=0.38]{Gamma_maxVsSigma_2_4_2}
\end{center}
\caption{Growth rate of SBBS as a function of the wave number of the scattered wave and the bandwidth of the water-bag: $a_0=0.1$, $k_0=80.0$, $\sigma_1\approx 0$, $c_S=0.01$, $\omega_{pi}=0.1$ (2D representation). The red lines illustrate the theoretical range of unstable wave numbers}\label{figuraB4}
\end{figure}
\par For fixed $k_0$, $a_0$ and $\sigma_1$, the growth rate for SBBS scales as $1/\sigma_2$, similarly to other distribution functions (e.g., asymmetric Lorentzian or Gaussian distributions of photons \cite{JorgePRL}). Both the wave number for maximum growth and the upper bound of the unstable wave number domain depend linearly on $\sigma_2$.
\section{\label{sec:4} Conclusions}
\par A general dispersion relation for stimulated Brillouin scattering, driven by a partially coherent pump field, has been derived using the GPK formalism \cite{JorgePRL}, which is formally equivalent to the coupling of the full wave equation with the plasma fluid equations. After having retrieved the monochromatic limit of the equation, we have used a one-dimensional water-bag profile for the incident field to model broadband effects. The analysis has revealed a growth rate dependence on the coherence width $\sigma$ of the radiation field which scales as $1/\sigma$, typical of three-wave processes \cite{JorgePRL}. Numerical estimates of the growth rate of the instability have been obtained as a function of the intensity of the incident field and the wave number of the scattered wave, confirming the theoretical predictions for the domain of unstable wave numbers.
\par The possibility of an accurate estimate of the growth rate of the instability, for a wide range of parameters, not only stresses the important role of bandwidth in the suppression of the instability, but also motivates an exploration of other photon distributions and a comparison with particle-in-cell simulations.
\par In this paper, we have focused on the backscattering regime of SBS, but the general dispersion relation we have derived (Eq. (\ref{GeneralDispersionRelation})) may be readily applied to different regimes. A detailed comparison with previous models for SBS pumped by a wave with finite bandwidth \cite{referenceChap3,referenceChap3_2,WP0.15,parabolicwaveequationapproximation1,parabolicwaveequationapproximation2} can then be performed and will be presented in the future, along with particle-in-cell simulations of parametric instabilities pumped by broadband radiation \cite{mythesis}.
A prediction of the suppression of SBS by the experimental mechanism of polarization smoothing \cite{polarizationsmoothing} or from other spectral distributions \cite{follett1} can also be readily obtained through GPK and will be explored in future works.
\par An oft-used method to increase the spectral bandwidth of a high-power laser beam is spectral dispersion via random phase plates \cite{rpp1}. One of the issues with longitudinal and transverse smoothing by spectral dispersion of coherent laser beams is the creation of amplitude modulation resulting in enhanced-intensity regions, seen as an adverse effect. A numerical study of the impact of beam smoothing by phase modulation on stimulated Brillouin scattering at the Laser M\'egajoule (LMJ) facility has been carried out \cite{duluc1,duluc2} and a pathway to a reduction of the amplitude modulation was demonstrated. Although it was shown that the effect can be reduced with the particular beam composition of LMJ, it is not obvious that this would be the case for all architectures, and therefore the preferred route would be to develop broadband lasers from the outset.
\par Future laser drivers for inertial confinement fusion, such as StarDriver \cite{star1}, aim to control both laser plasma instabilities and hydrodynamic instabilities by using multiple beamlets with bandwidths in the range 2\%--10\%. Lasers with bandwidths of 2\% and repetition rates of 10~Hz are already becoming available using neodymium phosphate glass \cite{star1}, and it is anticipated that this can be increased by using a range of laser gain media, such as a selection of Nd:Glass media based on phosphates, silicates and fluorides. The addition of other gain media such as Yb or Er glasses can potentially produce bandwidths of up to 10\% \cite{star1}. In this paper and our previous paper, where the GPK formalism was used to study bandwidth effects on Raman scattering \cite{JorgePRL}, we demonstrated that lasers with bandwidths in this range significantly diminish the growth rate of both stimulated Brillouin and Raman backscatter.
\section*{Acknowledgements}
Work partially supported by the European Research Council (ERC-2015-AdG Grant no. 695088). RB acknowledges support from EPSRC grant EP/R004773/1.
% arXiv:2101.04136
\section{Introduction}
Pump-probe spectroscopy has emerged as an important experimental method to probe and manipulate
correlated electronic
matter~\cite{reviews,okamoto07,fausti11,liu12,Zong,mitrano16, Kogar,Buzzi,Thewalt18}.
In this technique the system is first subjected to an intense laser pump, and then
the reaction of the system is probed with a weaker laser pulse at a later time.
Traditionally, this technique has been used mostly with pumps in the optical frequency
range, and with pulse durations that are shorter
than the relaxation time scale of the system. Such setups probe the
nonequilibrium dynamics of the system.
More recently, with the development of terahertz lasers
it has become possible to excite systems at milli-electron volt scale, which is an energy range of
great importance for correlated electron systems with interesting low temperature quantum
phases~\cite{Hebling08,Kampfrath13,Junginger12,Matsunaga12,Matsunaga13,Wu17,Wu18,Nakamura20}.
Simultaneously, it has become possible to generate
pump pulses that are long compared to the system's relaxation time.
In this case the
system stays in equilibrium in the presence of the pump, and one can probe the finite frequency
nonlinear electro-optical response of the system~\cite{Boyd}.
The purpose of this paper is to develop a theoretical framework for such nonlinear finite frequency
electro-optical responses
in ordinary centrosymmetric metals starting from basic time dependent Hamiltonian formalism.
In the context of terahertz spectroscopy there are two types of nonlinear responses that are
currently being discussed the most.
(i) Third harmonic generation, which is a measurement in the frequency domain where the system is
excited with a pump electric field with frequency $\omega$, and a response at $3\omega$ is
detected~\cite{Matsunaga14,Matsunaga17,Kaiser20,ZheWang20}.
(ii) The terahertz Kerr effect where the optical property (such as the optical
conductivity or equivalently, the refractive index)
of the system is transiently modified in the presence of the pump, the change being
proportional to second order in the pump electric
field~\cite{Hoffmann09,Freysz10,Yada13,Cornet14,Sajadi17,Katsumi18,Katsumi20}. Of special
interest is the
instantaneous Kerr effect, where the probe measures a time~($t$) dependent
response that is proportional to the square of the pump electric field
${\bf E}_{pp}(t)^2$.
In the last few years, in parallel with the experimental
developments~\cite{Matsunaga14,Matsunaga17,Kaiser20,ZheWang20,Hoffmann09,Katsumi18,
Katsumi20,Grasset18,Grasset19},
a lot of theoretical effort has been
put to study and interpret Kerr and third harmonic responses of
superconductors~\cite{Tsuji15,Cea16,Murotani17,Cea18,Jujo18,Murotani19,Silaev19,Udina19,Schwarz20,
Tsuji20,Shimano20,Seibold20,Schwarz20b}.
The motivation has been to study exotic collective modes of electrons that exist only
in broken symmetry phases, such as the superconducting gap amplitude mode or the so-called Higgs
mode~\cite{Tsuji15,Cea16,Murotani17,Cea18,Jujo18,Murotani19,Silaev19,Udina19,Schwarz20,
Tsuji20,Shimano20,Seibold20,Schwarz20b,Mueller19,Kumar19,Gabrielle20,Sun20,Yang20,Golez20}.
Recent theory works have also studied nonlinear electro-optical responses of topological
metals~\cite{Parker19,Cheng20,Ahn20,Villegas20,Rostami20}.
However, relatively little theoretical attention has been given to the ordinary nontopological metal state,
which is the parent phase of many symmetry broken exotic states such as superconductors
and density waves. On the other hand, clearly there is a need for such a study for several reasons.
Firstly, in order to
identify the role of a non-zero order parameter, it is important to compare the nonlinear signals from
the metal and the symmetry broken phases. This step invariably requires our understanding of the
nonlinear signals in a metal.
Secondly, from a purely conceptual point of view, an understanding of the nonlinear responses
of a metal can be taken as a stepping stone to develop theories of such responses
in more complex phases. More concretely, particle number conservation or gauge invariance
imposes certain constraints on the nonlinear responses.
One such constraint involves the vanishing of the nonlinear current response
for non-superconducting phases if the external vector potential is time independent.
A second constraint is a generalization of the familiar $f$-sum rule
that is invoked in the context of linear conductivity. As we show below, discussing these constraints
in the simple setting of a metal is at once convenient and instructive, and this exercise sets the stage
to study rigorously more complex phases.
With the above motivation, in this paper we develop the formalism to study the finite frequency nonlinear
electro-optical response in ordinary centrosymmetric metals.
Traditionally, in nonlinear optics the quantity of central interest is the nonlinear electric
polarization ${\bf P}_{nl}$~\cite{Boyd}. However, in the context of
metals we find it more convenient, and physically more intuitive, to develop the theory
in terms of the nonlinear electrical current ${\bf j}_{nl}$ and the associated nonlinear
conductivity $\sigma^{(3)}$. This choice is not fundamental,
and is more a matter of taste, and one can easily extract ${\bf P}_{nl}$ from ${\bf j}_{nl}$ and vice versa.
In our theoretical development we pay particular attention to the following two aspects that
have been mostly glossed over in the recent literature.
First, there is a basic dichotomy between what is experimentally measured, and what can be
computed using perturbative field theoretic techniques. As we show below,
the measured nonlinear current is the sum of several
response functions that obey causality. However, what
can be calculated using field theory are contour ordered correlators that can be factorized
using Wick's theorem. The contour can be ordered either
in real time using Keldysh's two-time formalism~\cite{Kamenev},
or it can be ordered in imaginary time using Matsubara technique~\cite{Mahan}.
The advantage of the former is that, at the end of the bookkeeping, one can avoid
an additional step which is necessary in the Matsubara method, namely having to make
analytic continuations from imaginary to real axes. The advantage
of the latter is that, in the intermediate steps, the typical expressions in the Matsubara formalism
are more compact.
Besides the technical intricacies, the careful extraction of the response functions from the correlation
functions is important to determine the presence or absence of finite temperature effects
in the nonlinear signals.
The above dichotomy between causal response functions and contour ordered correlation functions
is already present at a linear response level. The only difference here is that the causality structure, or
equivalently, the analytic continuations are more complex in the case of a nonlinear response.
Importantly, we find that the basic intuition concerning finite temperature effects remains the same
here as in linear response. Namely, thermal factors are not
important if the dominant relaxation process is elastic scattering, just as in the Drude conductivity,
while inelastic scattering (not discussed in this work) leads to nontrivial temperature dependencies.
Second, we stress the importance of obtaining results that are consistent with particle number conservation,
which can be expressed in terms of a global $U(1)$ symmetry. As noted already, this
conservation leads to the generalization of the $f$-sum rule,
and it also ensures that the response is zero for constant time independent vector potentials.
Physically, such a vector potential
implies zero electric field in the bulk, and consequently such a potential does not affect the system,
provided the electromagnetic response at the boundary is unremarkable.
In terms of the $U(1)$ symmetry, a constant vector potential can be absorbed, and
therefore ``gauged out'', in a global redefinition of the phases of the single particle
wavefunctions, provided the system is in a non-superconducting phase.
From a diagrammatic point
of view, this implies that keeping only an arbitrary subset of diagrams in the calculation of ${\bf j}_{nl}$
will often lead to unphysical answers.
The rest of the paper is organized as follows.
In section~\ref{sec2} we derive formal expressions for the
nonlinear current $\left(j_{NL}\right)_{\alpha}(\omega)$ in terms of the nonlinear current
kernel $\Pi_{\alpha \beta \gamma \delta}^{(3)}(\omega_1, \omega_2, \omega_3)$, see Eq.~(\ref{eq:2.20}),
or equivalently in terms of the nonlinear conductivity $\sigma_{\alpha \beta \gamma \delta}^{(3)}(\omega_1, \omega_2, \omega_3)$,
see Eqs.~(\ref{eq:2.21-bis}) and (\ref{eq:2.21-bis2}).
The nonlinear kernel and the conductivity are rank-four tensors, and the
indices $(\alpha, \beta, \gamma, \delta)$ denote photon polarizations. The arguments
$(\omega_1, \omega_2, \omega_3)$ denote incoming photon
frequencies, with polarizations $(\beta, \gamma, \delta)$, respectively. The outgoing photon has polarization
$\alpha$ and carries frequency $\omega = \omega_1 + \omega_2 + \omega_3$.
The nonlinear kernel itself is expressed as a sum of several current correlation functions, suitably analytically
continued from imaginary to real frequencies, see Eq.~(\ref{eq:2.21}). The mapping between the
causal response functions and the imaginary time ordered correlation functions is proven exactly using
the Lehmann representation.
In section~\ref{sec3} we prove
the following two properties of the kernel stemming from particle number
conservation. First, the nonlinear kernel vanishes if any one of the three incoming
photon frequencies is set to zero, see Eq.~(\ref{eq:3.1}).
This ensures that there is no nonlinear diamagnetic response in a metallic phase. Second, a generalization
of the $f$-sum rule which shows that the nonlinear conductivity integrated over the three incoming
frequencies is a constant that depends only on the electronic spectrum, and is independent of the electron
lifetime, see Eq.~(\ref{eq:3.15}). This sum rule has been noted earlier~\cite{Watanabe20}.
We use the first property to express the nonlinear conductivity in a manifestly gauge invariant form.
It is this gauge invariant response that is studied in the remaining sections.
In section~\ref{sec4} we use a diagrammatic method to calculate the gauge invariant
$\Pi_{\alpha \beta \gamma \delta}^{(3)}(\omega_1, \omega_2, \omega_3)$ of a Drude metal,
namely a system of non-interacting electrons in the presence of weak disorder. The third harmonic
and the terahertz instantaneous Kerr
signals are special cases of the generalized nonlinear response, and these quantities for a Drude system
are computed in sections~\ref{sec5} and~\ref{sec6}, respectively. In the concluding section~\ref{sec7}
we give a summary of the main results, and we provide directions for future works.
\section{Derivation of the nonlinear electro-optical response}
\label{sec2}
We consider a metallic system in a crystalline environment described by the Hamiltonian
\begin{equation}
\label{eq:2.1}
\hat{\mathcal{H}}_{\rm tot} = \hat{\mathcal{H}} +\hat{V}(t),
\end{equation}
where $\hat{\mathcal{H}}$ is a time independent Hamiltonian that describes the system in the absence of external
time-dependent perturbations. Depending on the context, $\hat{\mathcal{H}}$ can include electron-electron
interaction and scattering of electrons due to disorder.
To simplify the discussion we assume that only one band is relevant.
The multiband generalization of the formalism is straightforward.
Thus, the part of $\hat{\mathcal{H}}$ that describes the band dispersion
is given by $\hat{\mathcal{H}}_0 = \sum_{{\bf k}} \epsilon_{{\bf k}} c^{\dagger}_{{\bf k}} c_{{\bf k}}$, where
$(c^{\dagger}_{{\bf k}}, c_{{\bf k}})$ are creation and annihilation operators of electrons with wavevector ${\bf k}$,
and $\epsilon_{{\bf k}}$ is the band dispersion. We take the electrons to be spinless, since spin does not play any
role in the following.
$\hat{V}(t)$ is a time-dependent potential that the electrons experience
due to the electric field ${\bf E}(t)$ of the pump and the probe lasers. Since the typical
photon wavelength is much longer than the Fermi wavelength $1/k_F$, where $k_F$ is the Fermi wavevector,
the electric field can be taken as spatially uniform.
We describe the light-matter coupling by Peierls substitution, such that
$\epsilon_{{\bf k}} \rightarrow \epsilon_{{\bf k} - e {\bf A}}$ in the presence of the electromagnetic field.
Here $e$ is the electron charge
and ${\bf A}(t)$ is the vector potential to which the electric field is related by
${\bf E}(t) = -\partial_t {\bf A}(t)$. Expanding $\epsilon_{{\bf k} - e {\bf A}}$
in powers of the vector potential we get
\begin{align}
\label{eq:2.2}
\hat{V}(t) =& \left[ -e \hat{v}_{\alpha}(t) + \frac{e^2}{2!}\hat{v}_{\alpha \beta}(t) A_{\beta}(t)
- \frac{e^3}{3!}\hat{v}_{\alpha \beta \gamma}(t) A_{\beta}(t)
\right. \nonumber \\
&\left. \times A_{\gamma}(t)
+ \frac{e^4}{4!}\hat{v}_{\alpha \beta \gamma \delta}(t) A_{\beta}(t) A_{\gamma}(t) A_{\delta}(t) \right] A_{\alpha}(t),
\end{align}
where
\begin{subequations}
\label{eq:2.3}
\begin{align}
\hat{v}_{\alpha} &= \sum_{{\bf k}} \frac{\partial \epsilon_{{\bf k}}}{\partial k_{\alpha}} c^{\dagger}_{{\bf k}} c_{{\bf k}},
\label{eq:2.3a}\\
\hat{v}_{\alpha \beta} &= \sum_{{\bf k}} \frac{\partial^2 \epsilon_{{\bf k}}}{\partial k_{\alpha} \partial k_{\beta}} c^{\dagger}_{{\bf k}} c_{{\bf k}},
\label{eq:2.3b}\\
\hat{v}_{\alpha \beta \gamma} &= \sum_{{\bf k}} \frac{\partial^3 \epsilon_{{\bf k}}}
{\partial k_{\alpha} \partial k_{\beta} \partial k_{\gamma}} c^{\dagger}_{{\bf k}} c_{{\bf k}},
\label{eq:2.3c}\\
\hat{v}_{\alpha \beta \gamma \delta} &= \sum_{{\bf k}} \frac{\partial^4 \epsilon_{{\bf k}}}
{\partial k_{\alpha} \partial k_{\beta} \partial k_{\gamma} \partial k_{\delta}} c^{\dagger}_{{\bf k}} c_{{\bf k}},
\label{eq:2.3d}
\end{align}
\end{subequations}
and $(\alpha, \beta, \gamma, \delta)$ denote spatial indices $(x, y, z)$. In Eq.~(\ref{eq:2.2}) and in the rest of the paper
summation over repeated indices is implied, unless the contrary is explicitly mentioned.
The associated charge current operator
$\hat{j}_{\alpha} \equiv - \delta \hat{\mathcal{H}}[{\bf A}]/\delta A_{\alpha}$, is given by
\begin{align}
\label{eq:2.4}
\hat{j}_{\alpha}(t) =& e \hat{v}_{\alpha}(t) - e^2 \hat{v}_{\alpha \beta}(t) A_{\beta}(t) +
\frac{e^3}{2}\hat{v}_{\alpha \beta \gamma}(t) A_{\beta}(t) A_{\gamma}(t)
\nonumber \\
-& \frac{e^4}{6}\hat{v}_{\alpha \beta \gamma \delta}(t) A_{\beta}(t) A_{\gamma}(t) A_{\delta}(t).
\end{align}
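As a sanity check on the signs and factorials in Eqs.~(\ref{eq:2.2}) and (\ref{eq:2.4}), the expansion can be verified symbolically. The sketch below assumes, purely for illustration, a 1D tight-binding dispersion $\epsilon_k = -2t\cos k$; any smooth dispersion works identically.

```python
import sympy as sp

# Symbolic check of the signs and factorials in Eqs. (2.2) and (2.4),
# assuming for illustration a 1D tight-binding band eps_k = -2 t cos(k);
# any smooth dispersion works identically.
k, A, e, t = sp.symbols('k A e t')
eps_k = -2 * t * sp.cos(k)

# velocity vertices of Eq. (2.3): n-th derivatives of the dispersion
v1, v2, v3, v4 = [sp.diff(eps_k, k, n) for n in (1, 2, 3, 4)]

# V of Eq. (2.2): the O(A) ... O(A^4) part of eps_{k - e A}
V = (-e * v1 + e**2 / 2 * v2 * A - e**3 / 6 * v3 * A**2
     + e**4 / 24 * v4 * A**3) * A
series = sp.series(eps_k.subs(k, k - e * A), A, 0, 5).removeO()
assert sp.simplify(series - eps_k - V) == 0

# j = -dH/dA reproduces Eq. (2.4) term by term
j = -sp.diff(V, A)
j_expected = (e * v1 - e**2 * v2 * A + e**3 / 2 * v3 * A**2
              - e**4 / 6 * v4 * A**3)
assert sp.simplify(j - j_expected) == 0
```

The same check goes through in higher dimensions, where mixed partial derivatives generate the tensor vertices of Eq.~(\ref{eq:2.3}).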
The next step is to calculate the average current of the system, which is defined as
\begin{equation}
\label{eq:2.5}
j_{\alpha}(t) \equiv \frac{1}{Z} \sum_n e^{-\beta E_n} \langle n(t) | \hat{j}_{\alpha}(t) | n(t) \rangle,
\end{equation}
where $|n\rangle$ are the eigenstates of $\hat{\mathcal{H}}$ with $\hat{\mathcal{H}} | n \rangle = E_n | n \rangle$. Thus,
we assume that the perturbation due to the electromagnetic field does not take the system out of
equilibrium, and that the system remains in thermal equilibrium with temperature $T$. Therefore, a
measurable quantity is simply the thermal average of the associated operator. Following the usual
rules of equilibrium statistical mechanics, the eigenstate $|n\rangle$ has Boltzmann weight $e^{-\beta E_n}$,
the partition function $Z \equiv \sum_n e^{-\beta E_n}$, and $\beta \equiv 1/(k_B T)$
with $k_B$ the Boltzmann
constant. In other words, in the following the role of the time-dependent perturbation $\hat{V}(t)$ is simply
to modify the time evolution of the states and/or the operators, depending on the picture (Schr\"{o}dinger,
Heisenberg or interaction).
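The equilibrium average in Eq.~(\ref{eq:2.5}) is straightforward to realize numerically. The sketch below uses a small random Hermitian matrix as a stand-in Hamiltonian (an assumption for illustration only) and checks that the eigenbasis form of Eq.~(\ref{eq:2.5}) agrees with the equivalent trace form $\langle \hat{O} \rangle = {\rm Tr}(e^{-\beta \hat{\mathcal{H}}} \hat{O})/{\rm Tr}(e^{-\beta \hat{\mathcal{H}}})$.

```python
import numpy as np

# Numerical illustration of the equilibrium average in Eq. (2.5), using a
# small random Hermitian matrix as a stand-in Hamiltonian (an assumption
# for illustration only; it is not the band Hamiltonian of the text).
rng = np.random.default_rng(0)
dim, beta = 6, 2.0

M = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (M + M.conj().T) / 2                    # Hermitian "Hamiltonian"
O = np.diag(np.arange(dim, dtype=float))    # some observable

E, U = np.linalg.eigh(H)                    # H |n> = E_n |n>
w = np.exp(-beta * E)                       # Boltzmann weights
Z = w.sum()                                 # partition function
O_nn = np.real(np.einsum('in,ij,jn->n', U.conj(), O, U))
avg_eig = (w * O_nn).sum() / Z              # eigenbasis form of Eq. (2.5)

# equivalent trace form: <O> = Tr(e^{-beta H} O) / Tr(e^{-beta H})
rho = U @ np.diag(w) @ U.conj().T
avg_trace = np.real(np.trace(rho @ O)) / np.real(np.trace(rho))
assert np.isclose(avg_eig, avg_trace)
```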
In practice, pump-probe experiments typically measure not just an equilibrium nonlinear response,
but also a nonequilibrium response in which the system relaxes back to equilibrium after having been driven
out of equilibrium by the pump. Thus, how the nonlinear signal is modified by the simultaneous
presence of a nonequilibrium component is a question that will be both interesting and relevant to address
in the future. In the current treatment we simply assume that the out-of-equilibrium component is absent.
In the following we use the operator formalism to compute the current $j_{\alpha}(t)$, while the same can be
done using the effective action principle, see, e.g.~\cite{Udina19,benfatto04}.
We adopt the interaction picture in which the time evolution of an operator $\hat{O}(t)$ is given by
$
\hat{O}(t) = e^{i \hat{\mathcal{H}} t} \hat{O}(0) e^{-i \hat{\mathcal{H}} t},
$
and that of a state by
$
| n(t) \rangle = \hat{U} (t, t_0) | n(t_0) \rangle,
$
where the time evolution operator is
\begin{equation}
\label{eq:2.6}
\hat{U} (t, t_0) = \hat{T}_+ \exp [-i \int_{t_0}^t d t^{\prime} \hat{V}(t^{\prime})],
\end{equation}
and $\hat{T}_+$ is the time ordering operator. The reference time $t_0$ is an instant before the introduction
of the perturbation $\hat{V}(t)$. It will be convenient later to set $t_0 \rightarrow -\infty$.
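The time-ordered exponential in Eq.~(\ref{eq:2.6}) can be illustrated by its defining discretization: an ordered product of short-time propagators, with later times acting to the left. The $2 \times 2$ perturbation below is a toy choice (noncommuting at different times), not the $\hat{V}(t)$ of Eq.~(\ref{eq:2.2}).

```python
import numpy as np

# Discretized time-ordered exponential of Eq. (2.6): an ordered product of
# short-time propagators with later times acting to the left. The 2x2
# V(t) below is a toy perturbation (noncommuting at different times), not
# the V(t) of Eq. (2.2).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def V(time):
    return np.cos(time) * sx + 0.3 * time * sz

t0, t, N = 0.0, 2.0, 4000
dt = (t - t0) / N
U = np.eye(2, dtype=complex)
for n in range(N):
    tn = t0 + (n + 0.5) * dt
    # prepend each slice so that later times stand to the left (T_+ ordering)
    U = (np.eye(2) - 1j * V(tn) * dt) @ U

# U is unitary up to the O(dt) discretization error
assert np.allclose(U.conj().T @ U, np.eye(2), atol=1e-2)
```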
We assume the system to be centrosymmetric, for which the lowest-order nonlinear current is cubic in the
vector potential. Consequently, the operators $\hat{j}_{\alpha}(t)$ and $\hat{U}(t,t_0)$
need to be expanded to third order in the vector potential.
For convenience we define the quantity $\hat{J}_{\alpha}^{(3)}(t) = [\hat{U}^{\dagger}(t, t_0)
\hat{j}_{\alpha}(t) \hat{U}(t, t_0)]_{\mathcal{O}(A^3)}$, and after collecting terms we get
\begin{widetext}
\begin{align}
\label{eq:2.7}
&\hat{J}_{\alpha}^{(3)}(t) = -e^4 \left[
\frac{1}{6} \hat{v}_{\alpha \beta \gamma \delta}(t) A_{\beta}(t) A_{\gamma}(t) A_{\delta}(t)
- \frac{i}{2} \int_{t_0}^t dt_1 \left[ \hat{v}_{\alpha \beta}(t), \hat{v}_{\gamma \delta}(t_1) \right]
A_{\beta}(t) A_{\gamma}(t_1) A_{\delta}(t_1)
- \frac{i}{2} \int_{t_0}^t dt_1 \left[ \hat{v}_{\alpha \beta \gamma}(t), \hat{v}_{\delta}(t_1) \right]
\right.
\nonumber \\
&\times A_{\beta}(t) A_{\gamma}(t) A_{\delta}(t_1)
- \frac{i}{6} \int_{t_0}^t dt_1 \left[ \hat{v}_{\alpha}(t), \hat{v}_{\beta \gamma \delta}(t_1) \right]
A_{\beta}(t_1) A_{\gamma}(t_1) A_{\delta}(t_1)
+ \left\{ \frac{1}{2} \int_{t_0}^t dt_1 \int_{t_0}^t dt_2 \left(
\hat{v}_{\gamma}(t_1) \hat{v}_{\alpha \beta}(t) \hat{v}_{\delta}(t_2) + {\rm h.c.} \right) \right.
\nonumber \\
&- \left. \int_{t_0}^t dt_1 \int_{t_0}^{t_1} dt_2 \left( \hat{v}_{\alpha \beta}(t) \hat{v}_{\gamma}(t_1) \hat{v}_{\delta}(t_2)
+ {\rm h.c.} \right) \right\}
A_{\beta}(t) A_{\gamma}(t_1) A_{\delta}(t_2)
+ \left\{ \frac{1}{2} \int_{t_0}^t dt_1 \int_{t_0}^t dt_2 \left(
\hat{v}_{\beta}(t_1) \hat{v}_{\alpha}(t) \hat{v}_{\gamma \delta}(t_2) + {\rm h.c.} \right) \right.
\nonumber\\
&- \left. \frac{1}{2} \int_{t_0}^t dt_1 \int_{t_0}^{t_1} dt_2
\left( \hat{v}_{\alpha}(t) \hat{v}_{\beta}(t_1) \hat{v}_{\gamma \delta}(t_2) + {\rm h.c.} \right)
- \frac{1}{2} \int_{t_0}^t dt_2 \int_{t_0}^{t_2} dt_1
\left( \hat{v}_{\alpha}(t) \hat{v}_{\gamma \delta}(t_2) \hat{v}_{\beta}(t_1) + {\rm h.c.} \right)
\right\} A_{\beta}(t_1) A_{\gamma}(t_2) A_{\delta}(t_2)
\nonumber \\
&+ \left\{ i \int_{t_0}^t dt_1 \int_{t_0}^{t_1} dt_2 \int_{t_0}^{t_2} dt_3
\left(\hat{v}_{\alpha}(t) \hat{v}_{\beta}(t_1) \hat{v}_{\gamma}(t_2) \hat{v}_{\delta}(t_3) - {\rm h.c.} \right)
- i \int_{t_0}^t dt_1 \int_{t_0}^{t} dt_2 \int_{t_0}^{t_2} dt_3
\left(\hat{v}_{\beta}(t_1) \hat{v}_{\alpha}(t) \hat{v}_{\gamma}(t_2) \hat{v}_{\delta}(t_3) - {\rm h.c.} \right)
\right\} \nonumber \\
&\left. \times A_{\beta}(t_1) A_{\gamma}(t_2) A_{\delta}(t_3) \right].
\end{align}
\end{widetext}
In the above there are seven different types of terms, which can be distinguished by the different ways
in which the time arguments of the three factors of the vector potential appear. Therefore, using
Eqs.~(\ref{eq:2.5}) and (\ref{eq:2.7}) the measured nonlinear current, proportional to $A^3$,
can be expressed as a sum of seven terms as
\begin{align}
\label{eq:2.8}
\left(j_{NL}\right)_{\alpha}(t) &= j_{\alpha}(t)^{1p} + j_{\alpha}(t)^{2p,a} + j_{\alpha}(t)^{2p,b} + j_{\alpha}(t)^{2p,c}
\nonumber \\
&+ j_{\alpha}(t)^{3p,a} + j_{\alpha}(t)^{3p,b} + j_{\alpha}(t)^{4p},
\end{align}
where, after taking $t_0 \rightarrow -\infty$ for convenience,
\begin{subequations}
\label{eq:2.9}
\begin{align}
j_{\alpha}(t)^{1p} &= - e^4 R_{\alpha \beta \gamma \delta}^{(1p)}(t) A_{\beta}(t) A_{\gamma}(t) A_{\delta}(t),
\label{eq:2.9a} \\
j_{\alpha}(t)^{2p,a} &= -\frac{e^4}{2} \int_{-\infty}^{\infty} dt_1 R_{\alpha \beta, \gamma \delta}^{(2p,a)}(t, t_1)
A_{\beta}(t) A_{\gamma}(t_1) A_{\delta}(t_1), \label{eq:2.9b} \\
j_{\alpha}(t)^{2p,b} &= -\frac{e^4}{2} \int_{-\infty}^{\infty} dt_1 R_{\alpha \beta \gamma, \delta}^{(2p,b)}(t, t_1)
A_{\beta}(t) A_{\gamma}(t) A_{\delta}(t_1), \label{eq:2.9c} \\
j_{\alpha}(t)^{2p,c} &= -\frac{e^4}{6} \int_{-\infty}^{\infty} dt_1 R_{\alpha, \beta \gamma \delta}^{(2p,c)}(t, t_1)
A_{\beta}(t_1) A_{\gamma}(t_1) A_{\delta}(t_1), \label{eq:2.9d} \\
j_{\alpha}(t)^{3p,a} &= - e^4 \int_{-\infty}^{\infty} dt_1 \int_{-\infty}^{\infty} dt_2
R_{\alpha \beta, \gamma, \delta}^{(3p,a)}(t, t_1, t_2)
\nonumber \\
&\times A_{\beta}(t) A_{\gamma}(t_1) A_{\delta}(t_2), \label{eq:2.9e} \\
j_{\alpha}(t)^{3p,b} &= - e^4 \int_{-\infty}^{\infty} dt_1 \int_{-\infty}^{\infty} dt_2
R_{\alpha, \beta, \gamma \delta}^{(3p,b)}(t, t_1, t_2)
\nonumber \\
&\times
A_{\beta}(t_1) A_{\gamma}(t_2) A_{\delta}(t_2), \label{eq:2.9f} \\
j_{\alpha}(t)^{4p} &= - e^4 \int_{-\infty}^{\infty} dt_1 \int_{-\infty}^{\infty} dt_2
\int_{-\infty}^{\infty} dt_3
\nonumber \\
&\times R_{\alpha, \beta, \gamma, \delta}^{(4p)}(t, t_1, t_2, t_3)
A_{\beta}(t_1) A_{\gamma}(t_2) A_{\delta}(t_3), \label{eq:2.9g}
\end{align}
\end{subequations}
and the response functions are defined by
\begin{subequations}
\label{eq:2.10}
\begin{align}
&R_{\alpha \beta \gamma \delta}^{(1p)}(t) \equiv \langle \hat{v}_{\alpha \beta \gamma \delta}(t) \rangle,
\label{eq:2.10a}\\
&R_{\alpha \beta, \gamma \delta}^{(2p,a)}(t, t_1) \equiv -i \theta(t - t_1)
\langle \left[ \hat{v}_{\alpha \beta}(t) , \hat{v}_{\gamma \delta}(t_1) \right] \rangle, \label{eq:2.10b}\\
&R_{\alpha \beta \gamma, \delta}^{(2p,b)}(t, t_1) \equiv -i \theta(t - t_1)
\langle \left[ \hat{v}_{\alpha \beta \gamma}(t) , \hat{v}_{\delta}(t_1) \right] \rangle, \label{eq:2.10c}\\
&R_{\alpha, \beta \gamma \delta}^{(2p,c)}(t, t_1) \equiv -i \theta(t - t_1)
\langle \left[ \hat{v}_{\alpha}(t) , \hat{v}_{\beta \gamma \delta}(t_1) \right] \rangle, \label{eq:2.10d}\\
&R_{\alpha \beta, \gamma, \delta}^{(3p,a)}(t, t_1, t_2) \equiv \theta(t - t_1) \theta(t - t_2)
\langle \hat{v}_{\gamma}(t_1) \hat{v}_{\alpha \beta}(t) \nonumber \\
&\times \hat{v}_{\delta}(t_2)/2 + {\rm h.c.} \rangle
- \theta(t - t_1) \theta(t_1 - t_2) \langle \hat{v}_{\alpha \beta}(t) \hat{v}_{\gamma}(t_1) \nonumber\\
&\times \hat{v}_{\delta}(t_2) + {\rm h.c.} \rangle,
\label{eq:2.10e}\\
&R_{\alpha, \beta, \gamma \delta}^{(3p,b)}(t, t_1, t_2) \equiv \theta(t - t_1) \theta(t - t_2)
\langle \hat{v}_{\beta}(t_1) \hat{v}_{\alpha}(t) \nonumber \\
&\times \hat{v}_{\gamma \delta}(t_2)/2 + {\rm h.c.} \rangle - \theta(t - t_1) \theta(t_1 - t_2) \langle \hat{v}_{\alpha}(t)
\hat{v}_{\beta}(t_1) \nonumber \\
&\times \hat{v}_{\gamma \delta}(t_2)/2 + {\rm h.c.} \rangle - \theta(t - t_2) \theta(t_2 - t_1)
\langle \hat{v}_{\alpha}(t) \hat{v}_{\gamma \delta}(t_2) \nonumber\\
&\times \hat{v}_{\beta}(t_1)/2 + {\rm h.c.} \rangle,
\label{eq:2.10f}\\
&R_{\alpha, \beta, \gamma, \delta}^{(4p)}(t, t_1, t_2, t_3) \equiv i \theta(t - t_1) \theta(t_1 - t_2) \theta(t_2 - t_3)
\nonumber \\
&\times \langle \hat{v}_{\alpha}(t) \hat{v}_{\beta}(t_1) \hat{v}_{\gamma}(t_2) \hat{v}_{\delta}(t_3) - {\rm h.c.} \rangle
- i \theta(t - t_1) \theta(t - t_2) \nonumber \\
&\times \theta(t_2 -t_3) \langle \hat{v}_{\beta}(t_1) \hat{v}_{\alpha}(t)
\hat{v}_{\gamma}(t_2) \hat{v}_{\delta}(t_3) + {\rm h.c.} \rangle.
\label{eq:2.10g}
\end{align}
\end{subequations}
Here the average $\langle \hat{O} \rangle$ of an operator $\hat{O}$ is defined as
\[
\langle \hat{O} \rangle \equiv (1/Z) \sum_n \exp(-\beta E_n) \langle n| \hat{O} | n \rangle,
\]
with $| n \rangle = | n(t_0 \rightarrow - \infty) \rangle$.
In the above the indices $(1p, 2p, 3p, 4p)$ indicate that the corresponding response functions are related to
1-point, 2-point, 3-point and 4-point contour ordered current-current correlators, respectively. This link
between the response functions and the correlators will be demonstrated below. For the moment it is
clear from the definition of each response function in Eq.~(\ref{eq:2.10})
that an $n$-point response function involves $n$ current operators.
Thus, there are three types of 2-point response functions that are distinguished by the labels
$(a, b, c)$, and there are
two types of 3-point response functions that are denoted by labels $(a, b)$. Also, since the trace involves the
energy eigenstates of the time-translation-invariant Hamiltonian $\hat{\mathcal{H}}$, it is clear that
the 1-point response function is a $t$-independent constant, the 2-point responses
are functions of the single variable $(t-t_1)$, the 3-point responses are functions of the two variables
$(t-t_1)$ and $(t-t_2)$, and the 4-point response is a function of the three variables $(t-t_1)$, $(t-t_2)$
and $(t-t_3)$.
Finally, from the presence of the
$\theta$-functions in Eq.~(\ref{eq:2.10}), it is clear that the response functions are causal.
The next step is to express the nonlinear response in the frequency domain. Accordingly, we
define the Fourier transform of the nonlinear current as
\begin{equation}
\label{eq:2.11}
\left(j_{NL}\right)_{\alpha}(\omega) \equiv \int_{-\infty}^{\infty} dt e^{i \omega t} \left(j_{NL}\right)_{\alpha}(t),
\end{equation}
and likewise the Fourier transforms of the seven components $ j_{\alpha}(\omega)^{1p}$, $ j_{\alpha}(\omega)^{2p,a}$,
$\cdots$, $ j_{\alpha}(\omega)^{4p}$ such that
\begin{align}
\label{eq:2.12}
\left(j_{NL}\right)_{\alpha}(\omega) &= j_{\alpha}(\omega)^{1p} + j_{\alpha}(\omega)^{2p,a} + j_{\alpha}(\omega)^{2p,b} +
j_{\alpha}(\omega)^{2p,c}
\nonumber \\
&+ j_{\alpha}(\omega)^{3p,a} + j_{\alpha}(\omega)^{3p,b} + j_{\alpha}(\omega)^{4p}.
\end{align}
Simultaneously, we define the Fourier transforms of the response functions by
\begin{subequations}
\label{eq:2.13}
\begin{align}
&R_{\alpha \beta, \gamma \delta}^{(2p,a)}(\Omega) \equiv \int_{-\infty}^{\infty} d(t-t_1)
e^{i (\Omega + i \eta)(t-t_1)} R_{\alpha \beta, \gamma \delta}^{(2p,a)}(t- t_1),
\label{eq:2.13a}\\
&R_{\alpha \beta \gamma, \delta}^{(2p,b)}(\Omega) \equiv \int_{-\infty}^{\infty} d(t-t_1)
e^{i (\Omega + i \eta)(t-t_1)} R_{\alpha \beta \gamma, \delta}^{(2p,b)}(t- t_1),
\label{eq:2.13b}\\
&R_{\alpha, \beta \gamma \delta}^{(2p,c)}(\Omega) \equiv \int_{-\infty}^{\infty} d(t-t_1)
e^{i (\Omega + i \eta)(t-t_1)} R_{\alpha, \beta \gamma \delta}^{(2p,c)}(t- t_1),
\label{eq:2.13c}\\
&R_{\alpha \beta, \gamma, \delta}^{(3p,a)}(\Omega_1, \Omega_2) \equiv
\int_{-\infty}^{\infty} d(t-t_1) e^{i (\Omega_1 + i \eta)(t-t_1)} \nonumber \\
&\times \int_{-\infty}^{\infty} d(t-t_2) e^{i (\Omega_2 + i \eta)(t-t_2)}
R_{\alpha \beta, \gamma, \delta}^{(3p,a)}(t- t_1, t-t_2),
\label{eq:2.13d}\\
&R_{\alpha, \beta, \gamma \delta}^{(3p,b)}(\Omega_1, \Omega_2) \equiv
\int_{-\infty}^{\infty} d(t-t_1) e^{i (\Omega_1 + i \eta)(t-t_1)} \nonumber \\
&\times \int_{-\infty}^{\infty} d(t-t_2) e^{i (\Omega_2 + i \eta)(t-t_2)}
R_{\alpha, \beta, \gamma \delta}^{(3p,b)}(t- t_1, t-t_2),
\label{eq:2.13e}\\
&R_{\alpha, \beta, \gamma, \delta}^{(4p)}(\Omega_1, \Omega_2, \Omega_3) \equiv
\int_{-\infty}^{\infty} d(t-t_1) e^{i (\Omega_1 + i \eta)(t-t_1)} \nonumber \\
&\times \int_{-\infty}^{\infty} d(t-t_2) e^{i (\Omega_2 + i \eta)(t-t_2)}
\int_{-\infty}^{\infty} d(t-t_3) e^{i (\Omega_3 + i \eta)(t-t_3)} \nonumber \\
&\times R_{\alpha, \beta, \gamma, \delta}^{(4p)}(t- t_1, t-t_2, t-t_3).
\label{eq:2.13f}
\end{align}
\end{subequations}
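The role of the $+i\eta$ convergence factor in Eq.~(\ref{eq:2.13}) can be checked on a toy causal response with a single oscillator frequency (an illustrative assumption; by the Lehmann representation of Appendix~\ref{appA} the actual response functions are sums of such poles).

```python
import numpy as np

# Toy check of the +i*eta convention in Eq. (2.13): for a causal response
# with a single pole, R(s) = -1j * theta(s) * exp(-1j * W0 * s) with
# s = t - t1 (an illustrative assumption, not one of the correlators of
# the text), the transform with kernel exp(1j * (W + 1j*eta) * s) equals
# 1 / (W - W0 + 1j*eta).
W0, eta = 1.3, 0.05
s = np.linspace(0.0, 400.0, 400001)   # theta(s) restricts s >= 0
ds = s[1] - s[0]
R_s = -1j * np.exp(-1j * W0 * s)

for W in (0.0, 0.7, 2.1):
    y = np.exp(1j * (W + 1j * eta) * s) * R_s
    ft = np.sum((y[1:] + y[:-1]) / 2) * ds   # trapezoidal rule
    exact = 1.0 / (W - W0 + 1j * eta)
    assert np.isclose(ft, exact, rtol=1e-3)
```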
Using these definitions it is straightforward to check that the nonlinear current is given by
\begin{widetext}
\begin{align}
\label{eq:2.14}
&\left(j_{NL}\right)_{\alpha}(\omega) = - \frac{e^4}{6} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}
\int_{-\infty}^{\infty} \frac{d \omega_1 d \omega_2 d \omega_3}{(2\pi)^2}
\delta(\omega - \omega_1 - \omega_2 - \omega_3) A_{\beta}(\omega_1) A_{\gamma}(\omega_2) A_{\delta}(\omega_3)
\left[R_{\alpha \beta \gamma \delta}^{(1p)} + \left\{ R_{\alpha \beta, \gamma \delta}^{(2p,a)}(\omega_2 + \omega_3)
\right. \right. \nonumber\\
&+ \left. R_{\alpha \gamma, \beta \delta}^{(2p,a)}(\omega_1 + \omega_3) + R_{\alpha \delta, \beta \gamma}^{(2p,a)}(\omega_1 + \omega_2)
\right\} + \left\{ R_{\alpha \beta \gamma, \delta}^{(2p,b)}(\omega_3) + R_{\alpha \beta \delta, \gamma}^{(2p,b)}(\omega_2)
+ R_{\alpha \gamma \delta, \beta}^{(2p,b)}(\omega_1) \right\} + R_{\alpha, \beta \gamma \delta}^{(2p,c)}(\omega_1 + \omega_2 + \omega_3)
\nonumber \\
&+ \left\{ R_{\alpha \beta, \gamma, \delta}^{(3p,a)}(\omega_2, \omega_3) + R_{\alpha \beta, \delta, \gamma}^{(3p,a)}(\omega_3, \omega_2)
+ R_{\alpha \gamma, \beta, \delta}^{(3p,a)}(\omega_1, \omega_3) + R_{\alpha \gamma, \delta, \beta}^{(3p,a)}(\omega_3, \omega_1)
+ R_{\alpha \delta, \beta, \gamma}^{(3p,a)}(\omega_1, \omega_2) + R_{\alpha \delta, \gamma, \beta}^{(3p,a)}(\omega_2, \omega_1)
\right\} \nonumber\\
&+ \left\{ 2R_{\alpha, \beta, \gamma \delta}^{(3p,b)}(\omega_1, \omega_2 + \omega_3)
+ 2R_{\alpha, \gamma, \beta \delta}^{(3p,b)}(\omega_2, \omega_1 + \omega_3)
+ 2R_{\alpha, \delta, \beta \gamma}^{(3p,b)}(\omega_3, \omega_1 + \omega_2) \right\}
+ \left\{ R_{\alpha, \beta, \gamma, \delta}^{(4p)}(\omega_1, \omega_2, \omega_3) \right. \nonumber\\
&+ \left. \left.
R_{\alpha, \beta, \delta, \gamma}^{(4p)}(\omega_1, \omega_3, \omega_2) + R_{\alpha, \gamma, \beta, \delta}^{(4p)}(\omega_2, \omega_1, \omega_3)
+ R_{\alpha, \gamma, \delta, \beta}^{(4p)}(\omega_2, \omega_3, \omega_1) + R_{\alpha, \delta, \beta, \gamma}^{(4p)}(\omega_3, \omega_1, \omega_2)
+ R_{\alpha, \delta, \gamma, \beta}^{(4p)}(\omega_3, \omega_2, \omega_1) \right\} \right].
\end{align}
\end{widetext}
In the above the total nonlinear response kernel, given by the expression within the
square bracket $[ \cdots ]$, is symmetric with respect to
all permutations of the running variables $(\beta, \omega_1)$, $(\gamma, \omega_2)$ and $(\delta, \omega_3)$. The various
terms within each curly bracket $\{ \cdots \}$ are equal since they differ only in dummy variables, and they
appear in the process of symmetrization. This also ensures that all the terms within $[ \cdots ]$ have the
same symmetry factor of $1/(3!)$.
The difficulty with Eq.~(\ref{eq:2.14}) is that the response functions, defined in Eq.~(\ref{eq:2.13}),
are not contour-ordered objects, and therefore they cannot be evaluated using the standard tools of
many-body field theory. Formally, the response functions can be expressed using the Lehmann representation,
and this is done in Appendix~\ref{appA}.
However, to evaluate such expressions one needs the exact eigenstates of
$\hat{\mathcal{H}}$ which are not known in most cases of interest. To circumvent this difficulty we need to identify
each response function with a contour-ordered correlation function.
With the above motivation we define the following imaginary-time-ordered current correlation functions.
\begin{subequations}
\label{eq:2.15}
\begin{align}
C_{\alpha \beta, \gamma \delta}^{(2p,a)}(\tau, \tau_1) &\equiv - \langle T_{\tau}\, \hat{v}_{\alpha \beta}(\tau)
\hat{v}_{\gamma \delta}(\tau_1) \rangle,
\label{eq:2.15a}\\
C_{\alpha \beta \gamma, \delta}^{(2p,b)}(\tau, \tau_1) &\equiv - \langle T_{\tau}\, \hat{v}_{\alpha \beta \gamma}(\tau)
\hat{v}_{\delta}(\tau_1) \rangle,
\label{eq:2.15b}\\
C_{\alpha, \beta \gamma \delta}^{(2p,c)}(\tau, \tau_1) &\equiv - \langle T_{\tau}\, \hat{v}_{\alpha}(\tau)
\hat{v}_{\beta \gamma \delta}(\tau_1) \rangle,
\label{eq:2.15c}\\
C_{\alpha \beta, \gamma, \delta}^{(3p,a)}(\tau, \tau_1,\tau_2) &\equiv + \langle T_{\tau}\, \hat{v}_{\alpha \beta}(\tau)
\hat{v}_{\gamma}(\tau_1) \hat{v}_{\delta}(\tau_2) \rangle,
\label{eq:2.15d}\\
C_{\alpha, \beta, \gamma \delta}^{(3p,b)}(\tau, \tau_1,\tau_2) &\equiv + \langle T_{\tau}\, \hat{v}_{\alpha}(\tau)
\hat{v}_{\beta}(\tau_1) \hat{v}_{\gamma \delta}(\tau_2) \rangle,
\label{eq:2.15e}\\
C_{\alpha, \beta, \gamma, \delta}^{(4p)}(\tau, \tau_1,\tau_2,\tau_3) &\equiv - \langle T_{\tau}\, \hat{v}_{\alpha}(\tau)
\hat{v}_{\beta}(\tau_1) \hat{v}_{\gamma}(\tau_2) \hat{v}_{\delta}(\tau_3)\rangle,
\label{eq:2.15f}
\end{align}
\end{subequations}
where $T_{\tau}$ is the imaginary-time ordering operator. Note that when $n$ is odd it is convenient
to define the $n$-point correlator with an overall sign opposite to that of the case when $n$ is even.
Next we define the Fourier transforms of the correlators as functions of bosonic Matsubara frequencies as
follows.
\begin{subequations}
\label{eq:2.16}
\begin{align}
&C_{\alpha \beta, \gamma \delta}^{(2p,a)}(i \Omega_{1n}) \equiv \int_0^{\beta} d(\tau - \tau_1)
e^{i \Omega_{1n}(\tau - \tau_1)} C_{\alpha \beta, \gamma \delta}^{(2p,a)}(\tau, \tau_1)
\label{eq:2.16a}\\
&C_{\alpha \beta \gamma, \delta}^{(2p,b)}(i \Omega_{1n}) \equiv \int_0^{\beta} d(\tau - \tau_1)
e^{i \Omega_{1n}(\tau - \tau_1)} C_{\alpha \beta \gamma, \delta}^{(2p,b)}(\tau, \tau_1)
\label{eq:2.16b}\\
&C_{\alpha, \beta \gamma \delta}^{(2p,c)}(i \Omega_{1n}) \equiv \int_0^{\beta} d(\tau - \tau_1)
e^{i \Omega_{1n}(\tau - \tau_1)} C_{\alpha, \beta \gamma \delta}^{(2p,c)}(\tau, \tau_1)
\label{eq:2.16c}\\
&C_{\alpha \beta, \gamma, \delta}^{(3p,a)}(i \Omega_{1n}, i \Omega_{2n}) \equiv \frac{1}{\beta}
\int_0^{\beta} d \tau \int_0^{\beta} d \tau_1 e^{i \Omega_{1n}(\tau - \tau_1)}
\nonumber\\
&\times \int_0^{\beta} d \tau_2 e^{i \Omega_{2n}(\tau - \tau_2)}
C_{\alpha \beta, \gamma, \delta}^{(3p,a)}(\tau, \tau_1, \tau_2)
\label{eq:2.16d}\\
&C_{\alpha, \beta, \gamma \delta}^{(3p,b)}(i \Omega_{1n}, i \Omega_{2n}) \equiv \frac{1}{\beta}
\int_0^{\beta} d \tau \int_0^{\beta} d \tau_1 e^{i \Omega_{1n}(\tau - \tau_1)}
\nonumber\\
&\times \int_0^{\beta} d \tau_2 e^{i \Omega_{2n}(\tau - \tau_2)}
C_{\alpha, \beta, \gamma \delta}^{(3p,b)}(\tau, \tau_1, \tau_2)
\label{eq:2.16e}\\
&C_{\alpha, \beta, \gamma, \delta}^{(4p)}(i \Omega_{1n}, i\Omega_{2n}, i \Omega_{3n}) \equiv \frac{1}{\beta}
\int_0^{\beta} d \tau \int_0^{\beta} d \tau_1 e^{i \Omega_{1n}(\tau - \tau_1)}
\nonumber\\
&\times \int_0^{\beta} d \tau_2 e^{i \Omega_{2n}(\tau - \tau_2)}
\int_0^{\beta} d \tau_3 e^{i \Omega_{3n}(\tau - \tau_3)}
\nonumber\\
&\times C_{\alpha, \beta, \gamma, \delta}^{(4p)}(\tau, \tau_1, \tau_2, \tau_3).
\label{eq:2.16f}
\end{align}
\end{subequations}
In the above the structure of the 2-point functions is familiar from linear response theory.
For example, due to time translation symmetry $C_{\alpha \beta, \gamma \delta}^{(2p,a)}(\tau, \tau_1)$
is a function of $s_1 = \tau - \tau_1$, and consequently, there is only one way its Fourier transform
in imaginary frequency space can be defined. Furthermore, it satisfies bosonic periodicity with
$C_{\alpha \beta, \gamma \delta}^{(2p,a)}(s_1) = C_{\alpha \beta, \gamma \delta}^{(2p,a)}(s_1 + \beta)$ for $-\beta < s_1 <0$,
which further simplifies the structure of the correlator in frequency space. By contrast,
the nonlinear correlators are functions of more than one imaginary time variable. For example,
$C_{\alpha \beta, \gamma, \delta}^{(3p,a)}(\tau, \tau_1, \tau_2)$ is a function of two variables $s_1 = \tau - \tau_1$
and $s_2 = \tau - \tau_2$. Consequently, there is more than one way to take the Fourier transform,
and the appropriate one has to be chosen with care. Moreover, unlike the 2-point functions,
$C_{\alpha \beta, \gamma, \delta}^{(3p,a)}(s_1, s_2)$ and the other nonlinear correlators
do not have the property of $\beta$-periodicity. As a
consequence, in Eqs.~(\ref{eq:2.16d}), (\ref{eq:2.16e}), (\ref{eq:2.16f}) there are additional
$\tau$-integrals which are crucial to obtain the correct quantities in Matsubara space.
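For the 2-point case the statement about periodicity can be made explicit: expanding the correlator in a Fourier series on $0 < s_1 < \beta$,

```latex
% Fourier series of the beta-periodic 2-point correlator, s_1 = tau - tau_1:
C_{\alpha \beta, \gamma \delta}^{(2p,a)}(s_1)
  = \frac{1}{\beta} \sum_{n \in \mathbb{Z}}
    e^{-i \Omega_{1n} s_1} \, C_{\alpha \beta, \gamma \delta}^{(2p,a)}(i \Omega_{1n}),
\qquad
\Omega_{1n} = \frac{2 \pi n}{\beta},
```

the condition $C(s_1) = C(s_1 + \beta)$ holds term by term precisely because $e^{-i \Omega_{1n} \beta} = 1$ on the bosonic grid. No analogous restriction arises automatically for the nonlinear correlators, which is why the extra $\tau$-integrals above are required.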
The next step is to express the response functions defined by Eq.~(\ref{eq:2.13}) and the correlation
functions defined by Eq.~(\ref{eq:2.16}) using the Lehmann representation, and to compare them. The
procedure is somewhat long but straightforward, and the details of this step are given in
Appendix~\ref{appA}. Based on it, we find that the 2-point functions are related by
\begin{subequations}
\label{eq:2.17}
\begin{align}
&C_{\alpha \beta, \gamma \delta}^{(2p,a)}(i \Omega_{n} \rightarrow \Omega^+)
= R_{\alpha \beta, \gamma \delta}^{(2p,a)}(\Omega),
\label{eq:2.17a}\\
&C_{\alpha \beta \gamma, \delta}^{(2p,b)}(i \Omega_{n} \rightarrow \Omega^+)
= R_{\alpha \beta \gamma, \delta}^{(2p,b)}(\Omega),
\label{eq:2.17b}\\
&C_{\alpha, \beta \gamma \delta}^{(2p,c)}(i \Omega_{n} \rightarrow \Omega^+)
= R_{\alpha, \beta \gamma \delta}^{(2p,c)}(\Omega),
\label{eq:2.17c}
\end{align}
\end{subequations}
where $\Omega^+ \equiv \Omega + i\eta$. This mapping
is well-known from linear response theory. Next, the 3-point functions are related by
\begin{subequations}
\label{eq:2.18}
\begin{align}
&C_{\alpha \beta, \gamma, \delta}^{(3p,a)}(i \Omega_{1n} \rightarrow \Omega_1^+,
i \Omega_{2n} \rightarrow \Omega_2^+) = R_{\alpha \beta, \gamma, \delta}^{(3p,a)}(\Omega_{1}, \Omega_{2})
\nonumber\\
&+ R_{\alpha \beta, \delta, \gamma}^{(3p,a)}(\Omega_{2}, \Omega_{1}),
\label{eq:2.18a}\\
&C_{\alpha, \beta, \gamma \delta}^{(3p,b)}(i \Omega_{1n} \rightarrow \Omega_1^+,
i \Omega_{2n} \rightarrow \Omega_2^+)
= 2 R_{\alpha, \beta, \gamma \delta}^{(3p,b)}(\Omega_{1}, \Omega_{2}),
\label{eq:2.18b}
\end{align}
\end{subequations}
and the 4-point functions by
\begin{align}
\label{eq:2.19}
&C_{\alpha, \beta, \gamma, \delta}^{(4p)}(i \Omega_{1n} \rightarrow \Omega_1^+,
i\Omega_{2n} \rightarrow \Omega_2^+, i \Omega_{3n} \rightarrow \Omega_3^+)
\nonumber\\
&= R_{\alpha, \beta, \gamma, \delta}^{(4p)}(\Omega_1, \Omega_2, \Omega_3)
+ R_{\alpha, \beta, \delta, \gamma}^{(4p)}(\Omega_1, \Omega_3, \Omega_2) \nonumber\\
&+ R_{\alpha, \gamma, \beta, \delta}^{(4p)}(\Omega_2, \Omega_1, \Omega_3)
+ R_{\alpha, \gamma, \delta, \beta}^{(4p)}(\Omega_2, \Omega_3, \Omega_1)
\nonumber\\
&+ R_{\alpha, \delta, \beta, \gamma}^{(4p)}(\Omega_3, \Omega_1, \Omega_2)
+ R_{\alpha, \delta, \gamma, \beta}^{(4p)}(\Omega_3, \Omega_2, \Omega_1).
\end{align}
Using Eqs.~(\ref{eq:2.17}), (\ref{eq:2.18}) and (\ref{eq:2.19}) the nonlinear current in Eq.~(\ref{eq:2.14})
can be re-written as
\begin{align}
\label{eq:2.20}
&\left(j_{NL}\right)_{\alpha}(\omega) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}
\int_{-\infty}^{\infty} \frac{d \omega_1 d \omega_2 d \omega_3}{(2\pi)^2}
\nonumber\\
&\times
\delta(\omega - \omega_1 - \omega_2 - \omega_3) A_{\beta}(\omega_1) A_{\gamma}(\omega_2) A_{\delta}(\omega_3)
\nonumber\\
&\times \Pi_{\alpha \beta \gamma \delta}^{(3)}(\omega_1, \omega_2, \omega_3),
\end{align}
where the nonlinear current kernel $\Pi_{\alpha \beta \gamma \delta}^{(3)}(\omega_1, \omega_2, \omega_3)$
is given by
\begin{widetext}
\begin{align}
\label{eq:2.21}
&\Pi_{\alpha \beta \gamma \delta}^{(3)}(\omega_1, \omega_2, \omega_3) = - \frac{e^4}{6}
\left[C_{\alpha \beta \gamma \delta}^{(1p)} + \left\{ C_{\alpha \beta, \gamma \delta}^{(2p,a)}(\omega_2 + \omega_3 + i\eta)
+ C_{\alpha \gamma, \beta \delta}^{(2p,a)}(\omega_1 + \omega_3 + i\eta)
+ C_{\alpha \delta, \beta \gamma}^{(2p,a)}(\omega_1 + \omega_2 + i\eta) \right\}
\right. \nonumber\\
&+ \left\{ C_{\alpha \beta \gamma, \delta}^{(2p,b)}(\omega_3 + i\eta)
+ C_{\alpha \beta \delta, \gamma}^{(2p,b)}(\omega_2 + i\eta)
+ C_{\alpha \gamma \delta, \beta}^{(2p,b)}(\omega_1 + i\eta) \right\}
+ C_{\alpha, \beta \gamma \delta}^{(2p,c)}(\omega_1 + \omega_2 + \omega_3 + i\eta)
\nonumber \\
&+ \left\{ C_{\alpha \beta, \gamma, \delta}^{(3p,a)}(\omega_2 + i\eta, \omega_3 + i\eta)
+ C_{\alpha \gamma, \beta, \delta}^{(3p,a)}(\omega_1 + i\eta, \omega_3 + i\eta)
+ C_{\alpha \delta, \beta, \gamma}^{(3p,a)}(\omega_1 + i\eta, \omega_2 + i\eta)
\right\} \nonumber\\
&+ \left\{ C_{\alpha, \beta, \gamma \delta}^{(3p,b)}(\omega_1 + i\eta, \omega_2 + \omega_3 + i\eta)
+ C_{\alpha, \gamma, \beta \delta}^{(3p,b)}(\omega_2 + i\eta, \omega_1 + \omega_3 + i\eta)
+ C_{\alpha, \delta, \beta \gamma}^{(3p,b)}(\omega_3 + i\eta, \omega_1 + \omega_2 + i\eta) \right\}
\nonumber\\
&+ \left.
C_{\alpha, \beta, \gamma, \delta}^{(4p)}(\omega_1 + i\eta, \omega_2 + i\eta, \omega_3 + i\eta) \right].
\end{align}
\end{widetext}
In the above $C_{\alpha \beta \gamma \delta}^{(1p)} \equiv R_{\alpha \beta \gamma \delta}^{(1p)}$
is a frequency-independent constant. Note that the nonlinear current kernel
$\Pi_{\alpha \beta \gamma \delta}^{(3)}(\omega_1, \omega_2, \omega_3)$ is fully symmetric with respect to permutations of the
variables $(\beta, \omega_1)$, $(\gamma, \omega_2)$ and $(\delta, \omega_3)$.
The advantage of Eq.~(\ref{eq:2.20}), compared to Eq.~(\ref{eq:2.14}), is that the nonlinear
current response is now given in terms of current-current correlators. Being
contour-ordered objects, the correlators can be factorized using Wick's theorem, and therefore expressed as
products of single-particle Green's functions. In other words, standard techniques of many-body field theory
and controlled approximation schemes can be used to compute the nonlinear current response.
The current response in Eq.~(\ref{eq:2.20}) can be expressed alternatively in terms of the external electric
field ${\bf E}(\omega) = i \omega {\bf A}(\omega)$ and the third order nonlinear conductivity as
\begin{align}
\label{eq:2.21-bis}
&\left(j_{NL}\right)_{\alpha}(\omega) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}
\int_{-\infty}^{\infty} \frac{d \omega_1 d \omega_2 d \omega_3}{(2\pi)^2}
\nonumber\\
&\times
\delta(\omega - \omega_1 - \omega_2 - \omega_3) E_{\beta}(\omega_1) E_{\gamma}(\omega_2) E_{\delta}(\omega_3)
\nonumber\\
&\times \sigma_{\alpha \beta \gamma \delta}^{(3)}(\omega_1, \omega_2, \omega_3),
\end{align}
where the third order nonlinear conductivity is defined as
\begin{align}
\label{eq:2.21-bis2}
\sigma_{\alpha \beta \gamma \delta}^{(3)}(\omega_1, \omega_2, \omega_3)
\equiv \frac{i \Pi_{\alpha \beta \gamma \delta}^{(3)}(\omega_1, \omega_2, \omega_3)}
{(\omega_1 + i\eta)(\omega_2 + i\eta)(\omega_3 + i\eta)}.
\end{align}
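As an elementary consequence of Eqs.~(\ref{eq:2.21-bis}) and (\ref{eq:2.21-bis2}), a monochromatic drive generates a third-harmonic signal. The sketch below is a toy check, independent of the detailed kernel, that a current cubic in the field $E(t) = E_0 \cos(\omega_0 t)$ contains components at $\omega_0$ and $3\omega_0$.

```python
import sympy as sp

# Toy check, independent of the detailed kernel: a current cubic in a
# monochromatic field E(t) = E0 cos(w0 t) contains components at w0 and
# at 3*w0, the third-harmonic signal measured in pump-probe experiments.
t, w0, E0 = sp.symbols('t w0 E0', positive=True)
cubic = (E0 * sp.cos(w0 * t))**3
target = E0**3 * (sp.Rational(3, 4) * sp.cos(w0 * t)
                  + sp.Rational(1, 4) * sp.cos(3 * w0 * t))
# power-reduction identity: cos^3(x) = (3/4) cos(x) + (1/4) cos(3x)
assert sp.expand(sp.expand_trig(cubic - target)) == 0
```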
Eqs.~(\ref{eq:2.20}) - (\ref{eq:2.21-bis2}) constitute the main results of this section.
They express the nonlinear electro-optical response of an electronic system in terms of gauge
invariant quantities. Thus, the nonlinear current is expressed in terms of the nonlinear conductivity,
and the latter in terms of a sum of several current correlators. These relations are quite general and
they are relevant not only for metallic phases, but for superconducting ones as well.
Note that since Eqs.~(\ref{eq:2.17}), (\ref{eq:2.18}) and (\ref{eq:2.19}), which relate
the response functions to the correlators,
are proven using the Lehmann representation and the exact
eigenstates of $\hat{\mathcal{H}}$ (see Appendix~\ref{appA}),
Eqs.~(\ref{eq:2.20}) - (\ref{eq:2.21-bis2}) are formally exact to all orders
in the interaction and disorder strengths. In the rest of the paper we specialize to the case of metallic
phases.
\section{Gauge invariance and sum rule}
\label{sec3}
In this section we discuss certain general properties of the nonlinear kernel
$\Pi_{\alpha \beta \gamma \delta}^{(3)}(\omega_1, \omega_2, \omega_3)$ of metals
that follow from particle number conservation.
\subsection{Gauge invariance}
\label{subsec:3.1}
A vector potential that is constant in time, ${\bf A}(t) = {\bf A}_0$, is equivalent to zero electric
field in the bulk.
Such a potential should not affect the system, provided the electromagnetic response
of the boundary is trivial, which is the case for non-superconducting phases.
Since in frequency space such a vector
potential is ${\bf A}(\omega) = {\bf A}_0 \delta(\omega)$, we expect that for metals, and for any given set of
polarizations $(\alpha, \beta, \gamma, \delta)$
\begin{align}
\label{eq:3.1}
&\Pi_{\alpha \beta \gamma \delta}^{(3)}(\omega_1=0, \omega_2, \omega_3) =
\Pi_{\alpha \beta \gamma \delta}^{(3)}(\omega_1, \omega_2=0, \omega_3)
\nonumber\\
&= \Pi_{\alpha \beta \gamma \delta}^{(3)}(\omega_1, \omega_2, \omega_3=0)
=0.
\end{align}
Below we provide a proof of the above three relations.
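Before the formal proof, the content of Eq.~(\ref{eq:3.1}) can be previewed directly from Eq.~(\ref{eq:2.20}): taking one of the three field factors static, $A_{\beta}(\omega_1) \propto ({\bf A}_0)_{\beta}\, \delta(\omega_1)$, collapses the $\omega_1$-integral and leaves, up to numerical factors,

```latex
% One of the three equivalent static-leg assignments, schematically:
\left(j_{NL}\right)_{\alpha}(\omega) \;\supset\;
\left({\bf A}_0\right)_{\beta}
\int_{-\infty}^{\infty} \! \int_{-\infty}^{\infty} d \omega_2 \, d \omega_3 \,
\delta(\omega - \omega_2 - \omega_3) \,
A_{\gamma}(\omega_2) A_{\delta}(\omega_3) \,
\Pi_{\alpha \beta \gamma \delta}^{(3)}(0, \omega_2, \omega_3).
```

Since in a metal the current must not depend on the arbitrary constant ${\bf A}_0$, the kernel must vanish at $\omega_1 = 0$; the remaining two relations in Eq.~(\ref{eq:3.1}) follow from the permutation symmetry of $\Pi_{\alpha \beta \gamma \delta}^{(3)}$ noted below Eq.~(\ref{eq:2.21}).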
The first step is to express the current operators defined in Eq.~(\ref{eq:2.3}) in terms of the generalized
density operator
\begin{equation}
\label{eq:3.2}
\hat{\rho}_{{\bf q}} \equiv \sum_{{\bf k}} c^{\dagger}_{{\bf k} + {\bf q}} c_{{\bf k}}.
\end{equation}
The paramagnetic current operator, defined in Eq.~(\ref{eq:2.3a}), can be written as
\begin{equation}
\label{eq:3.3}
\hat{v}_{\alpha} = \lim_{{\bf q} \to 0} \frac{1}{q_{\alpha}} \left[ \hat{\mathcal{H}} , \hat{\rho}_{q_{\alpha}} \right].
\end{equation}
The above relation follows from the continuity equation, which itself is a consequence of particle number
conservation. Alternatively, it can be verified explicitly for an interacting electron Hamiltonian of the
form $\hat{\mathcal{H}} = \sum_{{\bf k}} \epsilon_{{\bf k}} c^{\dagger}_{{\bf k}} c_{{\bf k}} + \sum_{{\bf q}} V({\bf q}) \hat{\rho}_{{\bf q}} \hat{\rho}_{-{\bf q}}$,
where $V({\bf q})$ is the interaction potential. For the unscreened Coulomb potential
$V({\bf q}) \propto 1/q^2$, while for the screened Coulomb potential $V({\bf q}) \propto 1/(q^2 + q_{0}^2)$, with
$1/q_{0}$ the Thomas-Fermi screening length. Likewise, the remaining current operators defined in
Eqs.~(\ref{eq:2.3b}) - (\ref{eq:2.3d}) can be written as
\begin{equation}
\label{eq:3.4}
\hat{v}_{\alpha \beta} = \lim_{{\bf q} \to 0} \frac{1}{q_{\alpha} q_{\beta}}
\left[\left[ \hat{\mathcal{H}} , \hat{\rho}_{q_{\alpha}} \right] , \hat{\rho}_{q_{\beta}} \right],
\end{equation}
\begin{equation}
\label{eq:3.5}
\hat{v}_{\alpha \beta \gamma} = \lim_{{\bf q} \to 0} \frac{1}{q_{\alpha} q_{\beta} q_{\gamma}}
\left[ \left[\left[ \hat{\mathcal{H}} , \hat{\rho}_{q_{\alpha}} \right] , \hat{\rho}_{q_{\beta}} \right] , \hat{\rho}_{q_{\gamma}} \right],
\end{equation}
\begin{equation}
\label{eq:3.6}
\hat{v}_{\alpha \beta \gamma \delta} = \lim_{{\bf q} \to 0} \frac{1}{q_{\alpha} q_{\beta} q_{\gamma} q_{\delta}}
\left[ \left[ \left[\left[ \hat{\mathcal{H}} , \hat{\rho}_{q_{\alpha}} \right] , \hat{\rho}_{q_{\beta}} \right] , \hat{\rho}_{q_{\gamma}} \right] ,
\hat{\rho}_{q_{\delta}} \right].
\end{equation}
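Equation~(\ref{eq:3.3}) can be checked numerically at the single-particle level. The sketch below assumes a noninteracting 1D tight-binding band on an $N$-site ring (a toy choice): in the momentum basis the Hamiltonian is diagonal and $\hat{\rho}_{{\bf q}}$ shifts momentum by ${\bf q}$, so the commutator divided by $q$ reduces to a finite difference of the dispersion that converges to $\partial \epsilon_{{\bf k}}/\partial k_{\alpha}$.

```python
import numpy as np

# Single-particle check of Eq. (3.3) for a toy noninteracting 1D band
# eps(k) = -2 cos(k) on an N-site ring (illustrative assumption). In the
# momentum basis H = diag(eps_k), and rho_q shifts momentum by q, so
# [H, rho_q]_{k+q,k} = eps_{k+q} - eps_k, and (1/q) [H, rho_q] -> v(k).
N = 512
ks = 2 * np.pi * np.arange(N) / N
eps = -2 * np.cos(ks)
H = np.diag(eps)

m = 1                                   # smallest nonzero q = 2*pi*m/N
q = 2 * np.pi * m / N
rho_q = np.roll(np.eye(N), m, axis=0)   # matrix element <k+q| rho_q |k> = 1
comm = H @ rho_q - rho_q @ H

v_num = np.array([comm[(i + m) % N, i] / q for i in range(N)])
v_exact = 2 * np.sin(ks)                # d eps / dk
assert np.allclose(v_num, v_exact, atol=2e-2)
```

Increasing $N$ shrinks $q$ and tightens the agreement, mirroring the ${\bf q} \to 0$ limit in Eq.~(\ref{eq:3.3}).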
The second step is to convert, using Eqs.~(\ref{eq:3.3}) - (\ref{eq:3.6}),
the various current matrix elements that enter the definitions of the various correlators
in Appendix~\ref{appA} into equivalent density matrix elements. For this purpose we
define the following matrix elements involving the density operators in the Lehmann basis.
\begin{subequations}
\label{eq:3.7}
\begin{align}
\left(T_1^{{\bf q}} \right)_{nmpl} &\equiv (\hat{\rho}_{q_{\alpha}})_{nm}(\hat{\rho}_{q_{\beta}})_{mp}
(\hat{\rho}_{q_{\gamma}})_{pl}(\hat{\rho}_{q_{\delta}})_{ln},
\label{eq:3.7a}\\
\left(T_2^{{\bf q}} \right)_{nmpl} &\equiv (\hat{\rho}_{q_{\alpha}})_{nm}(\hat{\rho}_{q_{\beta}})_{mp}
(\hat{\rho}_{q_{\delta}})_{pl}(\hat{\rho}_{q_{\gamma}})_{ln},
\label{eq:3.7b}\\
\left(T_3^{{\bf q}} \right)_{nmpl} &\equiv (\hat{\rho}_{q_{\alpha}})_{nm}(\hat{\rho}_{q_{\gamma}})_{mp}
(\hat{\rho}_{q_{\beta}})_{pl}(\hat{\rho}_{q_{\delta}})_{ln},
\label{eq:3.7c}\\
\left(T_4^{{\bf q}} \right)_{nmpl} &\equiv (\hat{\rho}_{q_{\alpha}})_{nm}(\hat{\rho}_{q_{\gamma}})_{mp}
(\hat{\rho}_{q_{\delta}})_{pl}(\hat{\rho}_{q_{\beta}})_{ln},
\label{eq:3.7d}\\
\left(T_5^{{\bf q}} \right)_{nmpl} &\equiv (\hat{\rho}_{q_{\alpha}})_{nm}(\hat{\rho}_{q_{\delta}})_{mp}
(\hat{\rho}_{q_{\gamma}})_{pl}(\hat{\rho}_{q_{\beta}})_{ln},
\label{eq:3.7e}\\
\left(T_6^{{\bf q}} \right)_{nmpl} &\equiv (\hat{\rho}_{q_{\alpha}})_{nm}(\hat{\rho}_{q_{\delta}})_{mp}
(\hat{\rho}_{q_{\beta}})_{pl}(\hat{\rho}_{q_{\gamma}})_{ln}.
\label{eq:3.7f}
\end{align}
\end{subequations}
In the above no summation over repeated indices is implied, and $(\hat{O})_{nm} \equiv \langle n | \hat{O} | m\rangle$,
where $(n, m, p, l)$ are indices associated with the energy eigenstates of $\hat{\mathcal{H}}$ such that
$\hat{\mathcal{H}} | n \rangle = E_n | n \rangle$.
The third step of the proof is to express $\Pi_{\alpha \beta \gamma \delta}^{(3)}(\omega_1, \omega_2, \omega_3)$ in terms of
the matrix elements introduced in Eq.~(\ref{eq:3.7}). As an example of a two-point function,
$C_{\alpha \beta, \gamma \delta}^{(2p,a)}(i\Omega)$ given in Eq.~(\ref{eq:A4}) can be re-expressed as
$C_{\alpha \beta, \gamma \delta}^{(2p,a)}(i\Omega) = \lim_{{\bf q} \to 0}
C_{\alpha \beta, \gamma \delta}^{(2p,a)}(i\Omega,{\bf q})/(q_{\alpha} q_{\beta} q_{\gamma} q_{\delta})$, where
\begin{align}
&Z C_{\alpha \beta, \gamma \delta}^{(2p,a)}(i\Omega,{\bf q}) =
\frac{ e^{-\beta E_n} - e^{- \beta E_p}}{i \Omega_n + E_{np}}
\left[ \left(T_1^{{\bf q}} \right)_{nmpl} E_{nm} E_{pl}
\right. \nonumber\\
&- \left.
\left(T_2^{{\bf q}} \right)_{nmpl} E_{nm} E_{ln} \right]
- \frac{ e^{-\beta E_l} - e^{- \beta E_m}}{i \Omega_n + E_{lm}}
\nonumber\\
&\times
\left[ \left(T_4^{{\bf q}} \right)_{nmpl} E_{nm} E_{mp}
- \left(T_5^{{\bf q}} \right)_{nmpl} E_{nm} E_{pl} \right].
\nonumber
\end{align}
Likewise, as an example of a three-point function,
\[
C_{\alpha \beta, \gamma, \delta}^{(3p,a)}(i \Omega_{1n}, i \Omega_{2n}) = \lim_{{\bf q} \to 0}
\frac{C_{\alpha \beta, \gamma, \delta}^{(3p,a)}(i \Omega_{1n}, i \Omega_{2n}, {\bf q})}{q_{\alpha} q_{\beta} q_{\gamma} q_{\delta}},
\]
where
\begin{widetext}
\begin{align}
& Z C_{\alpha \beta, \gamma, \delta}^{(3p,a)}(i \Omega_{1n}, i \Omega_{2n}, {\bf q})
= - \frac{\left(T_1^{{\bf q}} \right)_{nmpl} E_{nm} E_{pl} E_{ln}}{i \Omega_{12n} + E_{np}}
\left[ \frac{e^{-\beta E_l} - e^{-\beta E_p}}{i\Omega_{1n} + E_{lp}} +
\frac{e^{-\beta E_l} - e^{-\beta E_n}}{i\Omega_{2n} + E_{nl}} \right]
+ \frac{\left(T_4^{{\bf q}} \right)_{nmpl} E_{nm} E_{mp} E_{pl}}{i \Omega_{12n} + E_{lm}}
\nonumber\\
&\times
\left[ \frac{e^{-\beta E_p} - e^{-\beta E_m}}{i\Omega_{1n} + E_{pm}} +
\frac{e^{-\beta E_p} - e^{-\beta E_l}}{i\Omega_{2n} + E_{lp}} \right]
- \frac{\left(T_2^{{\bf q}} \right)_{nmpl} E_{nm} E_{pl} E_{ln}}{i \Omega_{12n} + E_{np}}
\left[ \frac{e^{-\beta E_l} - e^{-\beta E_p}}{i\Omega_{2n} + E_{lp}} +
\frac{e^{-\beta E_l} - e^{-\beta E_n}}{i\Omega_{1n} + E_{nl}} \right]
\nonumber\\
&+ \frac{\left(T_5^{{\bf q}} \right)_{nmpl} E_{nm} E_{mp} E_{pl}}{i \Omega_{12n} + E_{lm}}
\left[ \frac{e^{-\beta E_p} - e^{-\beta E_m}}{i\Omega_{2n} + E_{pm}} +
\frac{e^{-\beta E_p} - e^{-\beta E_l}}{i\Omega_{1n} + E_{lp}} \right].
\nonumber
\end{align}
\end{widetext}
In order to re-express the four-point function we use relations such as
\begin{align}
\left(W_1 \right)_{nmpl} &\equiv (\hat{v}_{\alpha})_{nm}(\hat{v}_{\beta})_{mp}(\hat{v}_{\gamma})_{pl}(\hat{v}_{\delta})_{ln},
\nonumber\\
&= \lim_{{\bf q} \to 0} \frac{\left(T_1^{{\bf q}} \right)_{nmpl} E_{nm} E_{mp} E_{pl} E_{ln}}
{q_{\alpha} q_{\beta} q_{\gamma} q_{\delta}},
\nonumber
\end{align}
and so on, and also Eq.~(\ref{eq:A21}).
From the above discussion it is clear that the nonlinear susceptibility can be expressed as a limit
in the form
\begin{equation}
\label{eq:3.8}
\Pi_{\alpha \beta \gamma \delta}^{(3)}(\omega_1, \omega_2, \omega_3) =
\lim_{{\bf q} \to 0}
\frac{\Pi_{\alpha \beta \gamma \delta}^{(3)}(\omega_1, \omega_2, \omega_3, {\bf q})}{q_{\alpha} q_{\beta} q_{\gamma} q_{\delta}}
\end{equation}
where $\Pi_{\alpha \beta \gamma \delta}^{(3)}(\omega_1, \omega_2, \omega_3, {\bf q})$ has the structure
\begin{align}
\label{eq:3.9}
&\Pi_{\alpha \beta \gamma \delta}^{(3)}(\omega_1, \omega_2, \omega_3, {\bf q})
= - \left[ \left(T_1^{{\bf q}} \right)_{nmpl} Q_1(\omega_1, \omega_2, \omega_3)_{nmpl} \right.
\nonumber\\
&+ \left. \cdots + \left(T_6^{{\bf q}} \right)_{nmpl} Q_6(\omega_1, \omega_2, \omega_3)_{nmpl} \right] e^4 E_{nm}/6.
\end{align}
The coefficients $Q_i(\omega_1, \omega_2, \omega_3)_{nmpl}$, $i= 1, \cdots, 6$, are given
in Appendix~\ref{appB}, see Eqs.~(\ref{eq:B1})-(\ref{eq:B6}).
Now we set $\omega_3=0$. It is simple to check using Eqs.~(\ref{eq:B1})-(\ref{eq:B6}) that
$Q_i(\omega_1, \omega_2, \omega_3=0)_{nmpl} = 0$, $\forall i$. Thus,
\begin{equation}
\label{eq:3.10}
\Pi_{\alpha \beta \gamma \delta}^{(3)}(\omega_1, \omega_2, \omega_3 =0, {\bf q}) =0.
\end{equation}
Since the above relation holds for all sets of polarizations $(\alpha, \beta, \gamma, \delta)$,
it is clear from the cyclic property of the kernel that
\[
\Pi_{\alpha \beta \gamma \delta}^{(3)}(\omega_1=0, \omega_2, \omega_3, {\bf q}) =
\Pi_{\alpha \beta \gamma \delta}^{(3)}(\omega_1, \omega_2=0, \omega_3, {\bf q}) = 0
\]
will hold as well. These two relations can also be established by the following arguments.
We set $\omega_2=0$ in Eq.~(\ref{eq:3.9}), and we get $Q_3(\omega_1, \omega_2=0, \omega_3)_{nmpl}
= Q_6(\omega_1, \omega_2=0, \omega_3)_{nmpl} = 0$, while
\begin{align}
\label{eq:3.11}
&Q_1(\omega_1, \omega_2=0, \omega_3)_{nmpl}= - Q_2(\omega_1, \omega_2=0, \omega_3)_{nmpl}
\nonumber\\
&= \frac{\omega_3(\omega_{13} + E_{np}) e^{-\beta E_n}}{(\omega_{13} + E_{nm})(\omega_3 + E_{np})}
+ \frac{\omega_3 E_{mp} e^{-\beta E_m}}{(\omega_{13} + E_{nm})(\omega_1 + E_{pm})}
\nonumber\\
&-\frac{\omega_1 \omega_3 e^{-\beta E_p}}{(\omega_{3} + E_{np})(\omega_1 + E_{pm})},
\end{align}
and
\begin{align}
\label{eq:3.12}
&Q_4(\omega_1, \omega_2=0, \omega_3)_{nmpl}= - Q_5(\omega_1, \omega_2=0, \omega_3)_{nmpl}
\nonumber\\
&= \frac{\omega_3 E_{ln} e^{-\beta E_n}}{(\omega_{13} + E_{nm})(\omega_1 + E_{nl})}
+ \frac{\omega_3 (\omega_{13} + E_{lm}) e^{-\beta E_m}}{(\omega_{13} + E_{nm})(\omega_3 + E_{lm})}
\nonumber\\
&-\frac{\omega_1 \omega_3 e^{-\beta E_l}}{(\omega_{3} + E_{lm})(\omega_1 + E_{nl})}.
\end{align}
In the above equations $\omega_{13} \equiv \omega_1 + \omega_3$. Importantly,
$Q_1(\omega_1, \omega_2=0, \omega_3)_{nmpl}$ and $Q_2(\omega_1, \omega_2=0, \omega_3)_{nmpl}$ are independent
of the Lehmann basis index $l$. Thus, once $\omega_2=0$, in Eq.~(\ref{eq:3.9}) the summation over the
index $l$ can be performed for $\left(T_1^{{\bf q}} \right)_{nmpl}$ and for $\left(T_2^{{\bf q}} \right)_{nmpl}$.
Using $|l \rangle \langle l | = 1$, and the fact that $[ \hat{\rho}_{q_{\gamma}} , \hat{\rho}_{q_{\delta}}] =0$ we conclude that
\[
\sum_l \left(T_1^{{\bf q}} \right)_{nmpl} = \sum_l \left(T_2^{{\bf q}} \right)_{nmpl}.
\]
In other words, the coefficients
$Q_1(\omega_1, \omega_2=0, \omega_3)_{nmpl}$ and $Q_2(\omega_1, \omega_2=0, \omega_3)_{nmpl}$ add up
to zero in Eq.~(\ref{eq:3.9}). Likewise,
$Q_4(\omega_1, \omega_2=0, \omega_3)_{nmpl}$ and $Q_5(\omega_1, \omega_2=0, \omega_3)_{nmpl}$ are independent
of the Lehmann basis index $p$.
Using the same argument we conclude that
\[
\sum_p \left(T_4^{{\bf q}} \right)_{nmpl} = \sum_p \left(T_5^{{\bf q}} \right)_{nmpl}.
\]
Thus, the coefficients $Q_4(\omega_1, \omega_2=0, \omega_3)_{nmpl}$ and $Q_5(\omega_1, \omega_2=0, \omega_3)_{nmpl}$
also add up to zero in Eq.~(\ref{eq:3.9}), and we find
\begin{equation}
\label{eq:3.13}
\Pi_{\alpha \beta \gamma \delta}^{(3)}(\omega_1, \omega_2=0, \omega_3, {\bf q}) =0.
\end{equation}
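The resolution-of-identity step used above can be checked numerically. In the sketch below (random matrices of illustrative dimension; $A$, $B$ arbitrary and $C$, $D$ a commuting pair standing in for $\hat{\rho}_{q_{\gamma}}$ and $\hat{\rho}_{q_{\delta}}$), the $l$-summed products agree, mirroring $\sum_l (T_1^{{\bf q}})_{nmpl} = \sum_l (T_2^{{\bf q}})_{nmpl}$:

```python
import numpy as np

# Numerical illustration of sum_l (T1)_{nmpl} = sum_l (T2)_{nmpl}:
# with [C, D] = 0, sum_l C_pl D_ln = (C D)_pn = (D C)_pn.
# A, B are arbitrary; C, D stand in for the commuting rho_q operators.
rng = np.random.default_rng(0)
dim = 6
A = rng.standard_normal((dim, dim))
B = rng.standard_normal((dim, dim))
C = rng.standard_normal((dim, dim))
D = 2.0 * C @ C + 0.5 * C               # a polynomial in C, so [C, D] = 0
n, m, p = 1, 2, 3                       # fixed external Lehmann indices
T1_lsum = A[n, m] * B[m, p] * sum(C[p, l] * D[l, n] for l in range(dim))
T2_lsum = A[n, m] * B[m, p] * sum(D[p, l] * C[l, n] for l in range(dim))
assert np.isclose(T1_lsum, T2_lsum)
```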
Lastly, we set $\omega_1=0$ in Eq.~(\ref{eq:3.9}).
In this case the argument is similar. First, we find that
$Q_1(\omega_1=0, \omega_2, \omega_3)_{nmpl}$ is independent of the index $p$, which allows
$\left(T_1^{{\bf q}} \right)_{nmpl}$ to be written as $\left(T_3^{{\bf q}} \right)_{nmpl}$ in
Eq.~(\ref{eq:3.9}). Next, we find that $Q_4(\omega_1=0, \omega_2, \omega_3)_{nmpl}$ is
independent of the index $l$, which allows $\left(T_4^{{\bf q}} \right)_{nmpl}$ to be written as
$\left(T_3^{{\bf q}} \right)_{nmpl}$ as well. Likewise, both $\left(T_2^{{\bf q}} \right)_{nmpl}$
and $\left(T_5^{{\bf q}} \right)_{nmpl}$ can be written as $\left(T_6^{{\bf q}} \right)_{nmpl}$ once
$\omega_1=0$. Finally, using the fact that
\begin{align}
&Q_1(\omega_1=0, \omega_2, \omega_3)_{nmpl} + Q_3(\omega_1=0, \omega_2, \omega_3)_{nmpl}
\nonumber\\
&+ Q_4(\omega_1=0, \omega_2, \omega_3)_{nmpl} =0,
\nonumber\\
&Q_2(\omega_1=0, \omega_2, \omega_3)_{nmpl} + Q_5(\omega_1=0, \omega_2, \omega_3)_{nmpl}
\nonumber\\
&+ Q_6(\omega_1=0, \omega_2, \omega_3)_{nmpl} =0,
\nonumber
\end{align}
we conclude that
\begin{equation}
\label{eq:3.14}
\Pi_{\alpha \beta \gamma \delta}^{(3)}(\omega_1=0, \omega_2, \omega_3, {\bf q}) =0.
\end{equation}
As Eqs.~(\ref{eq:3.10}), (\ref{eq:3.13}) and (\ref{eq:3.14}) hold for a general wavevector ${\bf q}$, they also
hold in the limit ${\bf q} \rightarrow 0$. Thus, we conclude that the kernel vanishes
in the limit where one of the frequencies $\omega_i$,
$i= (1, 2,3)$, is first set to zero, and then the wavevector ${\bf q} \rightarrow 0$ (quasistatic limit). However, the
quantity of interest in Eq.~(\ref{eq:3.1}) is the one for which first the wavevector is set to zero, and then
the frequency $\omega_i \rightarrow 0$ (quasidynamic limit). Consequently, the question is whether
the two ways of taking limits commute.
In general, the non-commutation of the two ways of taking limits
signifies the presence of non-analytic terms in the kernel
$\Pi_{\alpha \beta \gamma \delta}^{(3)}(\omega_1, \omega_2, \omega_3, {\bf q})$, and there are two potential sources of
non-analyticity that need to be considered here. (i) In metals there are gapless
excitations close to the Fermi surface that can lead to non-analytic response.
However, one can show that, in the presence of a
finite elastic scattering lifetime, such non-analytic terms
are absent. This point has been discussed recently in the context of quadrupolar charge susceptibility of
metals~\cite{Gallais16}. (ii) The above proof is only a statement about the longitudinal response for which
$\nabla \times {\bf j}_{NL}({\bf r}) =0$. This follows from
Eq.~(\ref{eq:3.8}) which shows that the kernel considered here has the structure
\[
\lim_{{\bf q} \to 0} \Pi_{\alpha \beta \gamma \delta}^{(3)}(\omega_1, \omega_2, \omega_3, {\bf q})
= q_{\alpha} q_{\beta} q_{\gamma} q_{\delta} \Pi^{(3L)}(\omega_1, \omega_2, \omega_3, q) ,
\]
where $\Pi^{(3L)}(\omega_1, \omega_2, \omega_3, q)$ is a scalar function independent of the direction of ${\bf q}$.
On the other hand, in superconductors the transverse response is non-zero in the quasistatic limit
(Meissner effect). This finite transverse response also gives a nonzero
contribution in the quasidynamic limit, and
consequently Eq.~(\ref{eq:3.1}) does not hold for superconductors. For metals, however, no such
transverse response is expected, and therefore switching the two limits is justified.
This completes the proof of the assertion in Eq.~(\ref{eq:3.1}).
Note, since the proof uses the exact eigenstates of the Hamiltonian $\hat{\mathcal{H}}$, it is nonperturbative, and it
holds to all orders in electron-electron interaction and disorder strengths.
\subsection{Sum rule}
\label{subsec:3.2}
The nonlinear conductivity satisfies a generalization of the $f$-sum rule which can be expressed as
\begin{align}
\label{eq:3.15}
&\int_{-\infty}^{\infty} \int_{-\infty}^{\infty}
\int_{-\infty}^{\infty} \frac{d \omega_1 d \omega_2 d \omega_3}{(\pi)^3}
\sigma_{\alpha \beta \gamma \delta}^{(3)}(\omega_1, \omega_2, \omega_3)
\nonumber\\
&= \frac{e^4}{6} \langle \sum_{{\bf k}} \frac{\partial^4 \epsilon_{{\bf k}}}
{\partial k_{\alpha} \partial k_{\beta} \partial k_{\gamma} \partial k_{\delta}} c^{\dagger}_{{\bf k}} c_{{\bf k}} \rangle.
\end{align}
The above relation follows simply from the causal structure of the response, which guarantees that, as a
function of the three frequencies, $\sigma_{\alpha \beta \gamma \delta}^{(3)}(\omega_1, \omega_2, \omega_3)$ has poles only in
the lower half planes, and is analytic in the upper half planes. Thus, all the frequency-dependent terms in
Eq.~(\ref{eq:2.21}) necessarily have an integral of the type
\[
\int_{-\infty}^{\infty} d \omega_i \frac{1}{(\omega_i + i\eta)(\omega_i + E_0 + i\eta)} =0,
\]
where $E_0$ is an energy scale. The above integral vanishes since the contour can be completed in the
upper half plane where the integrand is analytic. Thus, the only term that survives the frequency integrals
is the constant $C_{\alpha \beta \gamma \delta}^{(1p)}$, and the above sum rule is established using
Eq.~(\ref{eq:2.3d}). The sum rule and its generalization to higher-order nonlinear conductivities
were discussed earlier~\cite{Watanabe20}. Note, since the sum rule is proven using only causality and the
general expression of the current kernel [Eq.~(\ref{eq:2.21})], which holds for all phases,
it is valid in particular for superconductors as well.
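The vanishing of the frequency integrals invoked in the argument above can be checked directly. A minimal numerical sketch (finite $\eta$ and a finite cutoff serve as numerical stand-ins; the parameter values are illustrative):

```python
import numpy as np

# Numerical check that int dw 1/[(w + i eta)(w + E0 + i eta)] = 0:
# both poles lie in the lower half plane, so closing the contour in the
# upper half plane gives zero. Finite eta and cutoff are numerical stand-ins.
eta, E0 = 0.5, 2.0
w = np.linspace(-1e4, 1e4, 2_000_001)
f = 1.0 / ((w + 1j * eta) * (w + E0 + 1j * eta))
dw = w[1] - w[0]
integral = dw * (f.sum() - 0.5 * (f[0] + f[-1]))  # trapezoidal rule
assert abs(integral) < 1e-3                       # vanishes up to cutoff error
```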
\section{Nonlinear Drude response}
\label{sec4}
\begin{figure*}
\begin{center}
\includegraphics[width=15cm,trim=0 0 0 0]{figure1.jpg}
\caption{Diagrams without vertex corrections for the nonlinear electro-optical kernel
$\Pi_{\alpha \beta \gamma \delta}^{(3)}(\omega_1, \omega_2, \omega_3)$, see Eq.~(\ref{eq:2.21}).
The solid lines are electron Green's functions, and the wiggly lines are one outgoing and three incoming
photons with polarizations $(\alpha, \beta, \gamma, \delta)$ and frequencies $(\omega, \omega_1, \omega_2, \omega_3)$,
respectively, with $\omega = \omega_1 + \omega_2 + \omega_3$. (i) is the one-point function
$C_{\alpha \beta \gamma \delta}^{(1p)}$. (ii), (iii) and (iv) are the two-point functions
$C_{\alpha \beta, \gamma \delta}^{(2p,a)}(\omega_{2} + \omega_{3})$, $C_{\alpha \beta \gamma, \delta}^{(2p,b)}(\omega_{3})$,
and $C_{\alpha, \beta \gamma \delta}^{(2p,c)}(\omega_{1} + \omega_{2} + \omega_{3})$, respectively.
Diagram (v) plus that obtained by interchanging the positions of the $(\gamma, \omega_2)$
and the $(\delta, \omega_3)$ photons give the three-point function
$C_{\alpha \beta, \gamma, \delta}^{(3p,a)}(\omega_{2}, \omega_{3})$. The diagram (vi) and
that obtained by interchanging the positions of the $(\alpha, \omega)$ and $(\beta, \omega_1)$
photons together give the three-point function
$C_{\alpha, \beta, \gamma \delta}^{(3p,b)}(\omega_{1}, \omega_{2} + \omega_{3})$.
Diagram (vii) and five others obtained by permuting the indices $(\beta, \omega_1)$, $(\gamma, \omega_2)$
and $(\delta, \omega_3)$ together give the four-point function
$C_{\alpha, \beta, \gamma, \delta}^{(4p)}(\omega_{1}, \omega_{2}, \omega_{3})$.
}
\label{fig1}
\end{center}
\end{figure*}
In this section we calculate the nonlinear electro-optical response of the simplest nontrivial system,
namely noninteracting electrons in the presence of weak disorder, using the formalism developed in
section~\ref{sec2}. Accordingly, we take
\begin{equation}
\label{eq:4.1}
\hat{\mathcal{H}} = \sum_{{\bf k}} \epsilon_{{\bf k}} c^{\dagger}_{{\bf k}} c_{{\bf k}} + \frac{1}{\mathcal{V}} \sum_{{\bf k}, {\bf q}} V_{{\bf q}} c^{\dagger}_{{\bf k} + {\bf q}}
c_{{\bf k}}.
\end{equation}
In the above $\mathcal{V}$ is the system volume, and $V_{{\bf q}}$ is the disorder potential, which obeys a Gaussian
distribution such that the disorder average gives
\[
\langle V_{{\bf q}} V_{-{\bf q}^{\prime}} \rangle_{\rm dis} = \delta_{{\bf q}, {\bf q}^{\prime}} \frac{\mathcal{V}}{2\pi \nu_0 \tau}.
\]
Here $\nu_0$ is the electron density of states at the Fermi level, and $\tau$ is the elastic scattering
lifetime. The effect of impurity scattering can be taken into account perturbatively where the small
parameter is $1/(E_F \tau)$, $E_F$ being the Fermi energy. In this case the various correlation functions that
enter in the definition of the nonlinear current-current susceptibility
$\Pi_{\alpha \beta \gamma \delta}^{(3)}(\omega_1, \omega_2, \omega_3)$ given by Eq.~(\ref{eq:2.21}) can be evaluated using diagrammatic
perturbation theory. The basic building block of such a calculation is the disorder averaged single electron
Green's function which is given by
\begin{equation}
\label{eq:4.2}
G^{-1}_{{\bf k}} (i\omega_n) = i \omega_n - \epsilon_{{\bf k}} + i/(2\tau) {\rm Sgn}(\omega_n),
\end{equation}
where ${\rm Sgn}$ is the sign function.
The set of diagrams for computing $\Pi_{\alpha \beta \gamma \delta}^{(3)}(\omega_1, \omega_2, \omega_3)$,
ignoring vertex corrections
for the moment, are given in Fig.~\ref{fig1}. They have been discussed earlier in the
literature, see, e.g.,~\cite{Udina19,benfatto04}.
The solid lines indicate the disorder averaged single electron
Green's function given by Eq.~(\ref{eq:4.2}), and the wiggly lines indicate photons.
The various current vertices involving $n= 1, \cdots, 4$ photons are given by Eq.~(\ref{eq:2.3}). The
diagram (i) represents the one-point function $C_{\alpha \beta \gamma \delta}^{(1p)}$. The diagram (ii) gives
the two-point function $C_{\alpha \beta, \gamma \delta}^{(2p,a)}(i\omega_{2n} + i\omega_{3n})$.
The diagram (iii) gives the two-point function $C_{\alpha \beta \gamma, \delta}^{(2p,b)}(i\omega_{3n})$.
The diagram (iv) gives
$C_{\alpha, \beta \gamma \delta}^{(2p,c)}(i\omega_{1n} + i\omega_{2n} + i\omega_{3n})$. The diagram (v) and that obtained by
interchanging the positions of the $(\gamma, \omega_2)$ and the $(\delta, \omega_3)$ photons give the three-point function
$C_{\alpha \beta, \gamma, \delta}^{(3p,a)}(i\omega_{2n}, i\omega_{3n})$. The diagram (vi) and
that obtained by interchanging the positions of the $(\alpha, \omega)$ and $(\beta, \omega_1)$
photons together give the three-point function
$C_{\alpha, \beta, \gamma \delta}^{(3p,b)}(i\omega_{1n}, i\omega_{2n} + i\omega_{3n})$.
Finally, diagram (vii) and five others obtained by permuting the indices $(\beta, \omega_1)$, $(\gamma, \omega_2)$
and $(\delta, \omega_3)$ give the four-point function
$C_{\alpha, \beta, \gamma, \delta}^{(4p)}(i\omega_{1n}, i\omega_{2n}, i\omega_{3n})$.
In the following we consider only the contribution of the low-energy electrons, for which the wavevector
sum can be replaced by an angular integral around the Fermi surface followed by an energy integral,
\begin{equation}
\label{eq:4.2-bis}
(1/\mathcal{V}) \sum_{{\bf k}} \rightarrow \nu_0 \int_{-\infty}^{\infty} d \epsilon_{{\bf k}} \oint_{FS} d \Omega_k.
\end{equation}
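As a sanity check of this replacement, one can compare the direct momentum integral with the density-of-states form for a toy 2D parabolic band $\epsilon_{{\bf k}} = k^2/2$ with $m = \hbar = 1$, for which $\nu_0 = 1/(2\pi)$ is constant (the band and the test function are illustrative choices, not from the text):

```python
import numpy as np

def trap(y, x):
    """Trapezoidal rule (avoids version-dependent numpy helpers)."""
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

# Toy check of (1/V) sum_k -> nu_0 int deps for a 2D parabolic band
# eps_k = k^2/2 (m = hbar = 1, so nu_0 = 1/(2 pi)); f is a test function.
f = lambda e: np.exp(-e)
kr = np.linspace(0.0, 10.0, 200_001)                 # radial momentum grid
lhs = (1.0 / (2 * np.pi) ** 2) * 2 * np.pi * trap(kr * f(kr**2 / 2), kr)
e = np.linspace(0.0, 30.0, 300_001)                  # energy grid
rhs = (1.0 / (2 * np.pi)) * trap(f(e), e)
assert np.isclose(lhs, rhs, rtol=1e-5)
```

For this isotropic band the replacement is exact; in the text it is used as an approximation valid for the low-energy electrons near the Fermi surface.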
In this approximation one can show that the three-point and four-point functions, as well as the vertex
correction terms, do not contribute. This is demonstrated in Appendix~\ref{appC}. Thus, we need to consider
only the one- and two-point functions.
We denote the various current vertices by
\begin{align}
&(v_{{\bf k}})_{\alpha} \equiv \frac{\partial \epsilon_{{\bf k}}}{\partial k_{\alpha}},
\quad \quad
(v_{{\bf k}})_{\alpha \beta} \equiv \frac{\partial^2 \epsilon_{{\bf k}}}{\partial k_{\alpha} \partial k_{\beta}},
\nonumber \\
&(v_{{\bf k}})_{\alpha \beta \gamma} \equiv \frac{\partial^3 \epsilon_{{\bf k}}}{\partial k_{\alpha} \partial k_{\beta} \partial k_{\gamma}},
\quad
(v_{{\bf k}})_{\alpha \beta \gamma \delta} \equiv \frac{\partial^4 \epsilon_{{\bf k}}}{\partial k_{\alpha} \partial k_{\beta} \partial k_{\gamma} \partial k_{\delta}}.
\nonumber
\end{align}
Using integration by parts, and setting boundary terms to zero we write $C_{\alpha \beta \gamma \delta}^{(1p)}$ as
\[
C_{\alpha \beta \gamma \delta}^{(1p)} = - \frac{1}{\beta \mathcal{V}} \sum_{{\bf k}, \nu_n}
(v_{{\bf k}})_{\alpha} (v_{{\bf k}})_{\beta \gamma \delta} G_{{\bf k}}^2(i\nu_n).
\]
The above term can be evaluated together with
$C_{\alpha, \beta \gamma \delta}^{(2p,c)}(i\omega_{1n} + i\omega_{2n} + i\omega_{3n})$. We take the external photon
frequencies $(\omega_{1n}, \omega_{2n}, \omega_{3n}) >0$, since the eventual analytic continuation is to be
performed from the upper complex frequency plane. The $\epsilon_{{\bf k}}$ integral can be performed using
the method of contours. After analytic continuation we get
\begin{align}
\label{eq:4.3}
&C_{\alpha \beta \gamma \delta}^{(1p)} + C_{\alpha, \beta \gamma \delta}^{(2p,c)}(\omega_1 + \omega_2 + \omega_3 + i\eta)
\nonumber \\
&= \nu_0 \langle (v_{{\bf k}})_{\alpha} (v_{{\bf k}})_{\beta \gamma \delta} \rangle_{FS}
\left[\frac{\omega_1 + \omega_2 + \omega_3 }{\omega_1 + \omega_2 + \omega_3 + i/\tau}\right].
\end{align}
Next, we consider the correlation functions of the type $(2p,a)$. Using integration by parts we get
\begin{align}
&C_{\alpha \beta, \gamma \delta}^{(2p,a)}(i\Omega_n) = - \frac{1}{\beta \mathcal{V}} \sum_{{\bf k}, \nu_n}
(v_{{\bf k}})_{\alpha} (v_{{\bf k}})_{\beta \gamma \delta} G_{{\bf k}}(i\nu_n)
\nonumber \\
&\times G_{{\bf k}}(i\nu_n + i\Omega_n) - \frac{1}{\beta \mathcal{V}} \sum_{{\bf k}, \nu_n}
(v_{{\bf k}})_{\alpha} (v_{{\bf k}})_{\beta} (v_{{\bf k}})_{\gamma \delta}
\nonumber \\
&\times [ G_{{\bf k}}^2(i\nu_n)G_{{\bf k}}(i\nu_n + i\Omega_n) + G_{{\bf k}}(i\nu_n)G_{{\bf k}}^2(i\nu_n + i\Omega_n) ].
\nonumber
\end{align}
In the above the second term can be set to zero since
\begin{align}
&\int_{-\infty}^{\infty} d \epsilon_{{\bf k}}
[ G_{{\bf k}}^2(i\nu_n)G_{{\bf k}}(i\nu_n + i\Omega_n) +G_{{\bf k}}(i\nu_n)
\nonumber \\
& \times G_{{\bf k}}^2(i\nu_n + i\Omega_n) ]
=0.
\nonumber
\end{align}
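This vanishing follows from $dG/d\epsilon = G^2$ for $G(z) = 1/(z - \epsilon)$, so the integrand is the total derivative $d[G(i\nu_n)G(i\nu_n + i\Omega_n)]/d\epsilon$ and integrates to its (vanishing) boundary values. A quick numerical check, with the two poles placed on opposite sides of the real axis (illustrative parameter values):

```python
import numpy as np

# Check that int deps [G^2(z1) G(z2) + G(z1) G^2(z2)] = 0 for G(z) = 1/(z - eps):
# since dG/deps = G^2, the integrand is d[G(z1) G(z2)]/deps, a total derivative.
# Frequencies chosen so the two poles sit on opposite sides of the real axis.
tau = 2.0
nu, Omega = -0.3, 1.1                      # nu_n < 0 and nu_n + Omega_n > 0
z1 = 1j * (nu - 1.0 / (2 * tau))           # i nu_n + i sgn(nu_n)/(2 tau)
z2 = 1j * (nu + Omega + 1.0 / (2 * tau))
eps = np.linspace(-1e3, 1e3, 400_001)
G1, G2 = 1.0 / (z1 - eps), 1.0 / (z2 - eps)
f = G1**2 * G2 + G1 * G2**2
deps = eps[1] - eps[0]
integral = deps * (f.sum() - 0.5 * (f[0] + f[-1]))  # trapezoidal rule
assert abs(integral) < 1e-4
```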
For the same reason, after two integration by parts the correlation function $(2p,b)$ can be expressed as
\begin{align}
&C_{\alpha \beta \gamma, \delta}^{(2p,b)}(i\Omega_n) = \frac{1}{\beta \mathcal{V}} \sum_{{\bf k}, \nu_n}
(v_{{\bf k}})_{\alpha} (v_{{\bf k}})_{\beta \gamma \delta} G_{{\bf k}}(i\nu_n)
\nonumber \\
&\times G_{{\bf k}}(i\nu_n + i\Omega_n) + \cdots,
\nonumber
\end{align}
where the terms in the ellipsis can be set to zero after the energy integral. From each of the three terms
involving correlation functions of the type $(2p,a)$ the constant $C_{\alpha \beta \gamma \delta}^{(1p)}$ can be subtracted,
and to each of the three terms involving correlation functions of the type $(2p,b)$ the constant
$C_{\alpha \beta \gamma \delta}^{(1p)}$ can be added. This makes the frequency and momentum sums in these
correlation functions fully convergent. Eventually we get
\begin{align}
\label{eq:4.4}
&C_{\alpha \beta, \gamma \delta}^{(2p,a)}(\omega_2 + \omega_3 + i\eta) - C_{\alpha \beta \gamma \delta}^{(1p)}
\nonumber \\
&= -\nu_0 \langle (v_{{\bf k}})_{\alpha} (v_{{\bf k}})_{\beta \gamma \delta} \rangle_{FS}
\left[\frac{\omega_2 + \omega_3 }{\omega_2 + \omega_3 + i/\tau}\right],
\end{align}
and
\begin{align}
\label{eq:4.5}
&C_{\alpha \beta \gamma, \delta}^{(2p,b)}(\omega_3 + i\eta) + C_{\alpha \beta \gamma \delta}^{(1p)}
\nonumber \\
&= \nu_0 \langle (v_{{\bf k}})_{\alpha} (v_{{\bf k}})_{\beta \gamma \delta} \rangle_{FS}
\left[\frac{\omega_3 }{\omega_3 + i/\tau}\right].
\end{align}
Finally, using Eqs.~(\ref{eq:4.3}), (\ref{eq:4.4}), and (\ref{eq:4.5}) the nonlinear current
kernel, defined in Eq.~(\ref{eq:2.21}), of a Drude metal is given by
\begin{align}
\label{eq:4.6}
&\Pi_{\alpha \beta \gamma \delta}^{(3)}(\omega_1, \omega_2, \omega_3) =
- \frac{e^4 \nu_0 \langle (v_{{\bf k}})_{\alpha} (v_{{\bf k}})_{\beta \gamma \delta} \rangle_{FS}}{6}
\nonumber \\
&\left[
\frac{\omega_1 + \omega_2 + \omega_3 }{\omega_1 + \omega_2 + \omega_3 + i/\tau}
- \frac{\omega_1 + \omega_2 }{\omega_1 + \omega_2 + i/\tau} - \frac{\omega_2 + \omega_3 }{\omega_2 + \omega_3 + i/\tau}
\right. \nonumber \\
& \left.
- \frac{\omega_3 + \omega_1 }{\omega_3 + \omega_1 + i/\tau} + \frac{\omega_1 }{\omega_1 + i/\tau}
+ \frac{\omega_2 }{\omega_2 + i/\tau} + \frac{\omega_3 }{\omega_3 + i/\tau} \right].
\end{align}
Note, this result is consistent with the constraints imposed by
gauge invariance in Eq.~(\ref{eq:3.1}). Finally, the nonlinear conductivity can be readily obtained from the above by using
Eq.~(\ref{eq:2.21-bis2}). Note also that Eq.~(\ref{eq:4.6}) is relevant as a low-energy asymptotic
behavior also for non-superconducting
symmetry-broken states such as nematic and density wave phases.
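The consistency with Eq.~(\ref{eq:3.1}) can be verified directly from the bracket in Eq.~(\ref{eq:4.6}): setting any one of the three frequencies to zero makes the seven Drude factors cancel pairwise. A minimal numerical sketch (the values of $\tau$ and the frequencies are illustrative):

```python
import numpy as np

# The bracket of Eq. (4.6): it must vanish whenever any one frequency is
# set to zero, as required by Eq. (3.1). Parameter values are illustrative.
def drude_bracket(w1, w2, w3, tau=1.5):
    d = lambda w: w / (w + 1j / tau)   # the elementary Drude factor
    return (d(w1 + w2 + w3) - d(w1 + w2) - d(w2 + w3) - d(w3 + w1)
            + d(w1) + d(w2) + d(w3))

assert abs(drude_bracket(0.0, 0.7, 1.3)) < 1e-12
assert abs(drude_bracket(0.7, 0.0, 1.3)) < 1e-12
assert abs(drude_bracket(0.7, 1.3, 0.0)) < 1e-12
assert abs(drude_bracket(0.7, 1.3, 0.4)) > 1e-3   # generic frequencies: nonzero
```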
Alternatively, the above result can be derived by considering the manifestly gauge invariant
susceptibility
\begin{align}
\label{eq:4.7}
&\Pi_{\alpha \beta \gamma \delta}^{(3)}(\omega_1, \omega_2, \omega_3)_{\rm inv} \equiv
\Pi_{\alpha \beta \gamma \delta}^{(3)}(\omega_1, \omega_2, \omega_3)
\nonumber \\
&- \Pi_{\alpha \beta \gamma \delta}^{(3)}(0, \omega_2, \omega_3)
- \Pi_{\alpha \beta \gamma \delta}^{(3)}(\omega_1, 0, \omega_3) - \Pi_{\alpha \beta \gamma \delta}^{(3)}(\omega_1, \omega_2, 0)
\nonumber \\
&+ \Pi_{\alpha \beta \gamma \delta}^{(3)}(0, 0, \omega_3) + \Pi_{\alpha \beta \gamma \delta}^{(3)}(0, \omega_2, 0)
+ \Pi_{\alpha \beta \gamma \delta}^{(3)}(\omega_1, 0, 0)
\nonumber \\
&- \Pi_{\alpha \beta \gamma \delta}^{(3)}(0, 0, 0).
\end{align}
In the above zeroes have been added and subtracted using the gauge invariance condition
of Eq.~(\ref{eq:3.1}).
It is simple to check that, for the gauge invariant quantity, the correlation functions $(1p)$,
$(2p,a)$ and $(2p,b)$ vanish identically, and only the correlation function $(2p,c)$ contributes.
Next, we show that the result expressed in Eq.~(\ref{eq:4.6}) is consistent with the sum rule discussed
in Section~\ref{subsec:3.2}. It is simple to perform the three frequency integrals in Eq.~(\ref{eq:3.15}),
and the left hand side gives
\begin{align}
&\int_{-\infty}^{\infty} \int_{-\infty}^{\infty}
\int_{-\infty}^{\infty} \frac{d \omega_1 d \omega_2 d \omega_3}{(\pi)^3}
\sigma_{\alpha \beta \gamma \delta}^{(3)}(\omega_1, \omega_2, \omega_3)
\nonumber\\
&= \frac{e^4}{6} \nu_0 \langle (v_{{\bf k}})_{\alpha} (v_{{\bf k}})_{\beta \gamma \delta} \rangle_{FS}.
\nonumber
\end{align}
Simultaneously the right hand side can be written as
\begin{align}
&\frac{e^4}{6 \mathcal{V}} \sum_{{\bf k}} (v_{{\bf k}})_{\alpha \beta \gamma \delta} n_F(\epsilon_{{\bf k}})
= - \frac{e^4}{6 \mathcal{V}} \sum_{{\bf k}} (v_{{\bf k}})_{\alpha} (v_{{\bf k}})_{\beta \gamma \delta} n_F^{\prime}(\epsilon_{{\bf k}})
\nonumber \\
&= \frac{e^4}{6} \nu_0 \langle (v_{{\bf k}})_{\alpha} (v_{{\bf k}})_{\beta \gamma \delta} \rangle_{FS},
\nonumber
\end{align}
where $n_F(\epsilon_{{\bf k}})$ is the Fermi function, and the prime denotes its derivative with respect to
energy. Thus, the sum rule is indeed verified.
\section{Third harmonic generation}
\label{sec5}
\begin{figure}[!!t]
\begin{center}
\includegraphics[width=8.7cm,trim=0 0 0 0]{figure2.jpg}
\caption{
(a) The amplitude $\left(\sigma_{TH}\right)_{\alpha \beta \gamma \delta}(\nu)$ (polarization indices suppressed
for clarity) and (b) the phase $\theta_{TH}(\nu)$
of the third harmonic signal as a function of frequency $\nu$, see Eqs.~(\ref{eq:5.5}) and (\ref{eq:5.6}).
$\tau$ is the elastic scattering lifetime of the electrons.}
\label{fig2}
\end{center}
\end{figure}
In third harmonic generation the system is perturbed by a monochromatic light pulse of frequency $\nu$,
and the nonlinear response at frequency $3\nu$ is studied. Below we describe the theory of the third harmonic
signal of a Drude metal.
We consider the perturbing electric field to be of the form ${\bf E}(t) = {\bf E}_{in} e^{- i \nu t}$,
which in Fourier space is ${\bf E} (\omega) = 2 \pi {\bf E}_{in} \delta (\omega - \nu)$. Using Eq.~(\ref{eq:2.21-bis}) we
find that the third harmonic current density is given by
\begin{equation}
\label{eq:5.1}
\left(j_{TH}\right)_{\alpha}(t) = \left[ \sigma_{\alpha \beta \gamma \delta}^{(3)}(\nu, \nu, \nu)
E_{in, \beta} E_{in, \gamma} E_{in, \delta} \right] e^{-3 i \nu t}.
\end{equation}
In frequency space this corresponds to
\begin{equation}
\label{eq:5.1-bis}
\left(j_{TH}\right)_{\alpha}(\omega) = 2\pi \delta(\omega - 3\nu)
\sigma_{\alpha \beta \gamma \delta}^{(3)}(\nu, \nu, \nu)
E_{in, \beta} E_{in, \gamma} E_{in, \delta}.
\end{equation}
In turn, the above current can be associated with a third harmonic electric field. Using Ohm's law
this electric field is given by
\begin{equation}
\label{eq:5.2}
\left(E_{TH}\right)_{\alpha}(\omega) = \left(j_{TH}\right)_{\alpha}(\omega)/ \sigma_{\alpha \alpha}^{(1)} (\omega),
\end{equation}
where $\omega = 3\nu$, and $\sigma_{\alpha \beta}^{(1)} (\omega)$ is the linear conductivity tensor,
which we have taken to be diagonal. Clearly, the third harmonic electric field depends on the
complex quantity $\sigma_{\alpha \beta \gamma \delta}^{(3)}(\nu, \nu, \nu)/\sigma_{\alpha \alpha}^{(1)} (3\nu)$. This
quantity can be expressed in terms of two real-valued functions, an amplitude
$(\sigma_{TH})_{\alpha \beta \gamma \delta}(\nu)$ and a phase $\theta_{TH}(\nu)$ by
\begin{equation}
\label{eq:5.3}
\left(\sigma_{TH}\right)_{\alpha \beta \gamma \delta}(\nu) \exp[i \theta_{TH}(\nu)]
\equiv \frac{\sigma_{\alpha \beta \gamma \delta}^{(3)}(\nu, \nu, \nu)}{\sigma_{\alpha \alpha}^{(1)} (3\nu)}.
\end{equation}
In terms of these the generated third harmonic electric field is given by
\begin{equation}
\label{eq:5.4}
\left(E_{TH}\right)_{\alpha}(t) = \left(\sigma_{TH}\right)_{\alpha \beta \gamma \delta}(\nu)
E_{in, \beta} E_{in, \gamma} E_{in, \delta} e^{-3 i \nu t + i \theta_{TH}(\nu)},
\end{equation}
where $0 \leq \theta_{TH}(\nu) < \pi$.
For a Drude metal the linear conductivity is
$\sigma_{\alpha \alpha}^{(1)} (\omega) = (\nu_0 e^2 \langle (v_{{\bf k}})_{\alpha}^2 \rangle_{FS}) \tau/(1- i\omega \tau)$,
and we get
\begin{equation}
\label{eq:5.5}
\left(\sigma_{TH}\right)_{\alpha \beta \gamma \delta}(\nu)
= \frac{e^2 \langle (v_{{\bf k}})_{\alpha} (v_{{\bf k}})_{\beta \gamma \delta} \rangle_{FS}}{\langle (v_{{\bf k}})_{\alpha}^2 \rangle_{FS}
\sqrt{ (\nu^2 + \tau^{-2})(4 \nu^2 + \tau^{-2})}},
\end{equation}
and
\begin{equation}
\label{eq:5.6}
\theta_{TH}(\nu) = \tan^{-1} [ 3 \nu \tau/(1 - 2 \nu^2 \tau^2)].
\end{equation}
These quantities are plotted in Fig.~\ref{fig2} as a function of the frequency of the perturbing field.
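The amplitude and phase in Eqs.~(\ref{eq:5.5}) and (\ref{eq:5.6}) are, up to an overall real prefactor, the modulus and argument of the single complex number $1/[(1 - i\nu\tau)(1 - 2i\nu\tau)]$. A quick consistency check of this decomposition (parameter values illustrative; the frequency range keeps $2\nu^2\tau^2 < 1$ so the principal branch of the arctangent applies):

```python
import numpy as np

# Consistency of Eqs. (5.5) and (5.6): they are the modulus and phase of a
# complex number proportional to 1/[(1 - i nu tau)(1 - 2 i nu tau)].
# Range restricted to 2 nu^2 tau^2 < 1 (principal arctan branch).
tau = 1.0
nu = np.linspace(0.01, 0.7, 50)
z = 1.0 / ((1 - 1j * nu * tau) * (1 - 2j * nu * tau))
amp = 1.0 / (tau**2 * np.sqrt((nu**2 + tau**-2) * (4 * nu**2 + tau**-2)))
phase = np.arctan(3 * nu * tau / (1 - 2 * nu**2 * tau**2))
assert np.allclose(np.abs(z), amp)
assert np.allclose(np.angle(z), phase)
```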
Next, we discuss how the third harmonic response depends upon the pump and the probe
polarizations~\cite{Matsunaga17,Cea18}.
We consider the pump electric field to be ${\bf E}_{in} = E_0 (\hat{x} \cos \phi + \hat{y} \sin \phi )$. For
a centrosymmetric system the third harmonic currents generated are (suppressing frequency indices)
\begin{align}
\left(j_{TH}\right)_{x} &\propto \sigma_{x x x x}^{(3)} \cos^3 \phi +
3 \sigma_{x x y y}^{(3)} \cos \phi \sin^2 \phi,
\nonumber\\
\left(j_{TH}\right)_{y} &\propto \sigma_{y y y y}^{(3)} \sin^3 \phi +
3 \sigma_{y y x x}^{(3)} \sin \phi \cos^2 \phi.
\nonumber
\end{align}
In the above we used the property
$\sigma_{x x y y}^{(3)}(\nu,\nu,\nu) = \sigma_{x y x y}^{(3)}(\nu,\nu,\nu) = \sigma_{x y y x}^{(3)}(\nu,\nu,\nu)$, and
so on. Furthermore, for a system with tetragonal or higher symmetry
$\sigma_{x x x x}^{(3)} = \sigma_{y y y y}^{(3)}$, and
$\sigma_{x x y y}^{(3)} = \sigma_{y y x x}^{(3)}$. Then, depending on whether the probe polarization is parallel
or perpendicular to the pump polarization, the third harmonic responses are
\begin{subequations}
\label{eq:5.7}
\begin{align}
\left(j_{TH}\right)_{\parallel}(\omega) &=
2\pi \delta(\omega - 3\nu) E_0^3 \left[ A(\nu) + 2 B(\nu) \sin^2(2\phi) \right],
\label{eq:5.7a}\\
\left(j_{TH}\right)_{\perp}(\omega) &= 2\pi \delta(\omega - 3\nu) E_0^3 B(\nu) \sin (4 \phi),
\label{eq:5.7b}
\end{align}
\end{subequations}
respectively, where $A(\nu) = \sigma_{x x x x}^{(3)}(\nu,\nu,\nu)$, and
$B(\nu) = [ 3 \sigma_{x x y y}^{(3)}(\nu,\nu,\nu) - \sigma_{x x x x}^{(3)}(\nu,\nu,\nu)]/4$.
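For completeness, Eqs.~(\ref{eq:5.7}) follow by projecting the third harmonic
current on the directions parallel and perpendicular to the pump polarization.
Using the tetragonal symmetry relations above,
\begin{align*}
\left(j_{TH}\right)_{\parallel} &= \left(j_{TH}\right)_{x} \cos \phi + \left(j_{TH}\right)_{y} \sin \phi
\\
&\propto \sigma_{x x x x}^{(3)} (\cos^4 \phi + \sin^4 \phi) + 6 \sigma_{x x y y}^{(3)} \sin^2 \phi \cos^2 \phi
\\
&= \sigma_{x x x x}^{(3)} + \frac{1}{2} \left[ 3 \sigma_{x x y y}^{(3)} - \sigma_{x x x x}^{(3)} \right] \sin^2 (2\phi),
\end{align*}
which equals $A + 2 B \sin^2(2\phi)$, while
$\left(j_{TH}\right)_{\perp} = \left(j_{TH}\right)_{y} \cos \phi - \left(j_{TH}\right)_{x} \sin \phi
\propto \frac{1}{4} [ 3 \sigma_{x x y y}^{(3)} - \sigma_{x x x x}^{(3)} ] \sin (4 \phi) = B \sin(4\phi)$.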
\section{Terahertz Kerr effect}
\label{sec6}
We consider the measurement of the electro-optical Kerr effect, which
involves perturbing the system with a pump
electric field ${\bf E}_{pp}(t)$ in the terahertz range, and then probing
the system with a field ${\bf E}_{pb}(t)$ at a much higher frequency, typically in the
optical range. The instantaneous
Kerr signal is the response of the system that is proportional to the
square of the pump field, ${\bf E}_{pp}(t)^2$.
In this setup the system is probed in the presence of the pump, and therefore the total nonlinear
current is proportional to $({\bf E}_{pp}(t) + {\bf E}_{pb}(t))^3$. In this expansion there are three
terms that are of the type ${\bf E}_{pp}(t)^2 {\bf E}_{pb}(t)$, which contribute to the Kerr signal. It
is simple to check that these three terms contribute equally. Thus, using Eq.~(\ref{eq:2.21-bis})
the nonlinear current associated with the Kerr effect can be expressed as
\begin{align}
\label{eq:6.1}
&\left(j_{NL}\right)_{\alpha}(\omega) = 3 \int_{-\infty}^{\infty} \int_{-\infty}^{\infty}
\int_{-\infty}^{\infty} \frac{d \omega_1 d \omega_2 d \omega_3}{(2\pi)^2}
\nonumber\\
&\times
\delta(\omega - \omega_1 - \omega_2 - \omega_3) E_{pb,\beta}(\omega_1) E_{pp, \gamma}(\omega_2) E_{pp, \delta}(\omega_3)
\nonumber\\
&\times \sigma_{\alpha \beta \gamma \delta}^{(3)}(\omega_1, \omega_2, \omega_3).
\end{align}
In the above $(\alpha, \beta)$ are fixed by the probe polarization, and $(\gamma, \delta)$ are fixed by the
pump polarization.
Since the pump frequencies $(\omega_2, \omega_3)$ are much smaller compared to the typical probe
frequency $\omega_1$, we can Taylor expand
\begin{equation}
\label{eq:6.1-bis}
\sigma_{\alpha \beta \gamma \delta}^{(3)}(\omega_1, \omega_2, \omega_3)
= \sigma_{\alpha \beta \gamma \delta}^{(3)}(\omega_1, 0, 0) + \cdots.
\end{equation}
The first term above gives the instantaneous Kerr response, while the ellipsis denotes terms that
lead to the retarded Kerr response. Since in a typical pump-probe setup the overall nonlinear response is also
accompanied by an out of equilibrium relaxational dynamics, it is nontrivial to distinguish the retarded
Kerr response from the nonequilibrium component.
Note that in setups where both the pump and the probe frequencies are in the terahertz range,
the retarded Kerr response can dominate the overall nonlinear response.
Keeping only the instantaneous Kerr component in Eq.~(\ref{eq:6.1}),
the nonlinear current in the time domain can be written as
\begin{align}
\label{eq:6.2}
\left(j_{NL}\right)_{\alpha}(t) &= \int_{-\infty}^{\infty} \frac{d \omega}{2\pi}
\left[3 \sigma_{\alpha \beta \gamma \delta}^{(3)}(\omega, 0, 0) E_{pp, \gamma}(t) E_{pp, \delta}(t) \right]
\nonumber \\
&\times E_{pb,\beta}(\omega) e^{-i \omega t}.
\end{align}
This expression is to be compared with the linear current response to the probe field which is
\[
\left(j_{L}\right)_{\alpha}(t) = \int_{-\infty}^{\infty} \frac{d \omega}{2\pi}
\sigma_{\alpha \beta}^{(1)}(\omega) E_{pb,\beta}(\omega) e^{-i \omega t}.
\]
Since the total current in the presence of the pump is ${\bf j}_L + {\bf j}_{NL}$, the
instantaneous Kerr response
can be expressed as a time and frequency dependent shift of the linear conductivity tensor
$\sigma_{\alpha \beta}^{(1)}(\omega) \rightarrow \sigma_{\alpha \beta}^{(1)}(\omega)
+ \Delta \sigma_{\alpha \beta}^{(1)}(\omega,t)$, where
\begin{equation}
\label{eq:6.3}
\Delta \sigma_{\alpha \beta}^{(1)}(\omega,t) = 3 \sigma_{\alpha \beta \gamma \delta}^{(3)}(\omega, 0, 0)
E_{pp, \gamma}(t) E_{pp, \delta}(t).
\end{equation}
Thus, if the Kerr signal is measured as a change in the reflectivity $R$, then
\begin{align}
\label{eq:6.4}
\left( \Delta R \right)_{\alpha \beta} &= 3 \left[
\left( \frac{\partial R}{\partial \sigma_1} \right) {\rm Re} \sigma_{\alpha \beta \gamma \delta}^{(3)}(\omega, 0, 0)
\right. \nonumber \\
&+ \left. \left( \frac{\partial R}{\partial \sigma_2} \right) {\rm Im} \sigma_{\alpha \beta \gamma \delta}^{(3)}(\omega, 0, 0)
\right] E_{pp, \gamma}(t) E_{pp, \delta}(t).
\end{align}
Here $\sigma_{1,2}$ are the real and imaginary parts of the complex linear conductivity, respectively.
From Eq.~(\ref{eq:4.6}) the relevant nonlinear conductivity for a Drude metal is given by
\begin{align}
\label{eq:6.5}
\sigma_{\alpha \beta \gamma \delta}^{(3)}(\omega, 0, 0) &=
\nu_0 e^4 \tau^3 \langle (v_{{\bf k}})_{\alpha} (v_{{\bf k}})_{\beta \gamma \delta} \rangle_{FS}
\nonumber\\
&\times
\frac{(3 - 3i \omega \tau - \omega^2 \tau^2)}{3 (1 - i \omega \tau)^3}.
\end{align}
The real and imaginary parts of the above are shown in Fig.~\ref{fig3} as a function of the probe
frequency. Note that in the frequency range $\omega \sim 1/\tau$, both the real and the imaginary parts of
$\sigma_{\alpha \beta \gamma \delta}^{(3)}(\omega, 0, 0)$ contribute to the instantaneous Kerr response.
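As a quick consistency check on Eq.~(\ref{eq:6.5}), the dimensionless factor
$(3 - 3i\omega\tau - \omega^2\tau^2)/[3(1-i\omega\tau)^3]$ tends to unity in the static limit
and decays at high frequency; the following minimal numerical sketch (the function name is ours,
introduced only for this illustration) verifies this:

```python
def drude_factor(x):
    """Dimensionless frequency dependence of Eq. (6.5):
    (3 - 3i*x - x**2) / (3*(1 - i*x)**3), with x = omega*tau."""
    return (3 - 3j * x - x * x) / (3 * (1 - 1j * x) ** 3)

# Static limit: the factor tends to 1, so sigma^(3)(0, 0, 0) reduces to the
# real prefactor nu_0 e^4 tau^3 <v_alpha v_{beta gamma delta}>_FS.
assert abs(drude_factor(0.0) - 1.0) < 1e-12

# At omega*tau = 1 the factor equals (1 + 5i)/12: the real and imaginary
# parts are comparable, so both enter the Kerr response there.
assert abs(drude_factor(1.0) - (1 + 5j) / 12) < 1e-12

# The factor decays at high frequency.
assert abs(drude_factor(1e6)) < 1e-5
```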
\begin{figure}[!!t]
\begin{center}
\includegraphics[width=7.0cm,trim=0 0 0 0]{figure3.pdf}
\caption{
(color online) The real (solid, blue) and the imaginary (dashed, red) parts of the nonlinear conductivity
(polarization indices suppressed for clarity)
associated with the Kerr signal as a function of probe frequency $\omega$,
see Eqs.~(\ref{eq:6.4}) and~(\ref{eq:6.5}). $\tau$ is the elastic scattering
lifetime of the electrons.}
\label{fig3}
\end{center}
\end{figure}
Next, we discuss how the instantaneous Kerr signal depends upon the pump and the probe
polarizations~\cite{Katsumi18,Katsumi20}. In a typical reflectivity measurement of the Kerr signal the
probe field is incident normally on the surface of the system. The quantity of interest
is the change in the reflectivity $( \Delta R )_{\alpha \alpha}$, where $\hat{\alpha}$ denotes the direction
of the probe polarization. We characterize $\hat{\alpha}$ by an angle $\phi_{pb}$ such that
$\hat{\alpha} = \hat{x} \cos \phi_{pb} + \hat{y} \sin \phi_{pb}$. Then,
\begin{align}
\label{eq:6.6}
2 \left( \Delta R \right)_{\alpha \alpha} &= [(\Delta R)_{xx} + (\Delta R)_{yy}]
+ \cos (2\phi_{pb}) [(\Delta R)_{xx}
\nonumber \\
&- (\Delta R)_{yy}] + \sin (2\phi_{pb}) [(\Delta R)_{xy} + (\Delta R)_{yx}].
\end{align}
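Equation~(\ref{eq:6.6}) is obtained by writing
$\left( \Delta R \right)_{\alpha \alpha} = (\Delta R)_{xx} \cos^2 \phi_{pb}
+ (\Delta R)_{yy} \sin^2 \phi_{pb}
+ [(\Delta R)_{xy} + (\Delta R)_{yx}] \sin \phi_{pb} \cos \phi_{pb}$,
and using the identities $2\cos^2 \phi_{pb} = 1 + \cos(2\phi_{pb})$,
$2\sin^2 \phi_{pb} = 1 - \cos(2\phi_{pb})$, and
$2 \sin \phi_{pb} \cos \phi_{pb} = \sin(2\phi_{pb})$.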
We take the pump polarization to be also in the $xy$-plane, making an angle $\phi_{pp}$ with $\hat{x}$.
Using Eq.~(\ref{eq:6.3}), for a centrosymmetric system we get
\begin{align}
\label{eq:6.7}
\Delta \sigma_{xx}^{(1)}(\omega,t) &= 3 E^2_{pp}(t) \left[\sigma_{xxxx}^{(3)}(\omega, 0, 0) \cos^2 \phi_{pp} \right.
\nonumber\\
&\left. + \sigma_{xxyy}^{(3)}(\omega, 0, 0) \sin^2 \phi_{pp} \right].
\end{align}
One can write similar expressions for $\Delta \sigma_{yy}^{(1)}(\omega,t)$, $\Delta \sigma_{xy}^{(1)}(\omega,t)$ and
$\Delta \sigma_{yx}^{(1)}(\omega,t)$. Using Eqs.~(\ref{eq:6.4}), (\ref{eq:6.6}) and (\ref{eq:6.7}), for a system with
tetragonal or higher symmetry, the polarization dependencies of the instantaneous Kerr response can be written as
\begin{align}
\label{eq:6.8}
&\left( \Delta R \right)_{\alpha \alpha}(\omega, t) =
3 E^2_{pp}(t) \left[ K_{A_{1g}}(\omega) + \cos (2\phi_{pp}) \cos (2\phi_{pb})
\right. \nonumber\\
&\left. \times K_{B_{1g}}(\omega) + \sin (2\phi_{pp}) \sin (2\phi_{pb}) K_{B_{2g}}(\omega) \right],
\end{align}
where
\begin{subequations}
\label{eq:6.9}
\begin{align}
K_{A_{1g}}(\omega) &\equiv \left( \frac{\partial R}{\partial \sigma_1} \right)_{xx} {\rm Re} \sigma^{(3)}_{A_{1g}}(\omega)
+ \left( \frac{\partial R}{\partial \sigma_2} \right)_{xx} {\rm Im} \sigma^{(3)}_{A_{1g}}(\omega),
\label{eq:6.9a}\\
K_{B_{1g}}(\omega) &\equiv \left( \frac{\partial R}{\partial \sigma_1} \right)_{xx} {\rm Re} \sigma^{(3)}_{B_{1g}}(\omega)
+ \left( \frac{\partial R}{\partial \sigma_2} \right)_{xx} {\rm Im} \sigma^{(3)}_{B_{1g}}(\omega),
\label{eq:6.9b}\\
K_{B_{2g}}(\omega) &\equiv \left( \frac{\partial R}{\partial \sigma_1} \right)_{x^{\prime}x^{\prime}} \! \! \!
{\rm Re} \sigma^{(3)}_{B_{2g}}(\omega)
+ \left( \frac{\partial R}{\partial \sigma_2} \right)_{x^{\prime}x^{\prime}} \! \! \! {\rm Im} \sigma^{(3)}_{B_{2g}}(\omega),
\label{eq:6.9c}
\end{align}
\end{subequations}
with $x^{\prime} \equiv (x+y)/\sqrt{2}$, and
\begin{subequations}
\label{eq:6.10}
\begin{align}
\sigma^{(3)}_{A_{1g}}(\omega) &\equiv [ \sigma_{xxxx}^{(3)}(\omega, 0, 0) + \sigma_{xxyy}^{(3)}(\omega, 0, 0)]/2,
\label{eq:6.10a}\\
\sigma^{(3)}_{B_{1g}}(\omega) &\equiv [ \sigma_{xxxx}^{(3)}(\omega, 0, 0) - \sigma_{xxyy}^{(3)}(\omega, 0, 0)]/2,
\label{eq:6.10b}\\
\sigma^{(3)}_{B_{2g}}(\omega) &\equiv \sigma_{xyxy}^{(3)}(\omega, 0, 0).
\label{eq:6.10c}
\end{align}
\end{subequations}
For a tight binding model with nearest and next nearest neighbor hoppings $t$ and $t^{\prime}$,
respectively, we expect $\sigma_{xxxx}^{(3)} \sim t^2$, $\sigma_{xxyy}^{(3)} \sim t t^{\prime}$, and
$\sigma_{xyxy}^{(3)} \sim t t^{\prime}$.
\section{Conclusion}
\label{sec7}
To summarize, in this work we developed the theoretical framework to compute the nonlinear electro-optical
responses of centrosymmetric metals. The formalism itself, starting from standard time dependent
perturbation theory, is described in section~\ref{sec2}. We showed that the nonlinear current can be
expressed in terms of a sum of several response functions that are causal, as expected. However, the
response functions do not obey Wick's theorem and, therefore, they cannot be computed directly
using perturbative field theory methods. Consequently, we associated each response function with an
imaginary time ordered correlation function that can be factorized by means of Wick's theorem.
Using the Lehmann representation we
showed that the correlation functions, analytically continued to real frequencies, map on to the
response functions exactly. This leads to formal expressions for the
nonlinear current $\left(j_{NL}\right)_{\alpha}(\omega)$ in terms of the nonlinear current
kernel $\Pi_{\alpha \beta \gamma \delta}^{(3)}(\omega_1, \omega_2, \omega_3)$, see Eq.~(\ref{eq:2.20}),
or equivalently in terms of the nonlinear conductivity $\sigma_{\alpha \beta \gamma \delta}^{(3)}(\omega_1, \omega_2, \omega_3)$,
see Eqs.~(\ref{eq:2.21-bis}) and (\ref{eq:2.21-bis2}).
The nonlinear kernel and the conductivity are rank-four tensors, and the
indices $(\alpha, \beta, \gamma, \delta)$ denote spatial directions (photon polarizations). The arguments
$(\omega_1, \omega_2, \omega_3)$ denote the incoming photon frequencies,
with $\omega = \omega_1 + \omega_2 + \omega_3$.
In section~\ref{sec3} we showed that the nonlinear kernel satisfies certain constraints,
namely that it vanishes
if either one of the three incoming photon frequencies is set to zero, see Eq.~(\ref{eq:3.1}).
This ensures that there is no nonlinear diamagnetic response in a metallic phase.
We also showed that the nonlinear conductivity satisfies a generalized $f$-sum rule.
Thus, the nonlinear conductivity integrated over the three external
frequencies is a constant that depends only on the electronic spectrum, and is independent of the electron
lifetime, see Eq.~(\ref{eq:3.15}).
The constraints and the sum rule are consequences of gauge
invariance, or particle number conservation. In section~\ref{sec4} we applied the theory to compute the
gauge invariant nonlinear kernel for a Drude metal, i.e., a system of noninteracting electrons in the
presence of weak disorder, see Eq.~(\ref{eq:4.6}). As special cases of the generalized response, we derived
expressions for the third harmonic and the instantaneous terahertz
Kerr signals in sections~\ref{sec5} and~\ref{sec6}, respectively.
The current work can be extended in several directions in the future. First, the
formalism can be applied directly to compute the nonlinear responses of broken symmetry states of metals
such as nematic and density wave phases, and to understand how the signatures of such
broken symmetry states manifest
in the nonlinear signals. Second, the diagrammatic method outlined in section~\ref{sec4} can be generalized
to include interaction effects and inelastic scattering. Third, it will serve as a stepping stone to develop a
gauge invariant theory of nonlinear responses in superconductors.
\acknowledgments
The author is grateful to Yann Gallais and Ryo Shimano for illuminating discussions.
\section{Introduction}
In the half-space $\Pi=\R_+\times\R^n$, where $\R_+=(0,+\infty)$, we consider the conservation law
\begin{equation}\label{1}
u_t+\div_x\varphi(u)=0
\end{equation}
with a jump-continuous flux vector $\varphi(u)=(\varphi_1(u),\ldots,\varphi_n(u))$. This means that at each point $u_0\in\R$ there exist one-sided limits
$\displaystyle\lim_{u\to u_0\pm}\varphi(u)\doteq\varphi(u_0\pm)$. For example, if the components $\varphi_i(u)$, $i=1,\ldots,n$, are BV-functions then the vector $\varphi(u)$ is jump-continuous. It is known that the set
$$
D=\{ \ u_0\in\R \ | \ |\varphi(u_0+)-\varphi(u_0)|+|\varphi(u_0)-\varphi(u_0-)|>0 \ \}
$$
of discontinuity points of the vector $\varphi(u)$ is at most countable (and may be an arbitrary at most countable set in $\R$). Here and in the sequel we use the notation $|\cdot|$ for Euclidean finite-dimensional norms (including the absolute value in the one-dimensional case). We will treat $\varphi(u)$ as a multi-valued vector function with values
$\bar\varphi(u_0)=[\varphi(u_0-),\varphi(u_0)]\cup [\varphi(u_0),\varphi(u_0+)]$ being a union of two segments in $\R^n$
(one may use even more general continuous curves connecting $\varphi(u_0-)$ with $\varphi(u_0+)$ and passing through $\varphi(u_0)$).
Clearly, these sets differ from a single point only if $u_0\in D$. Let us demonstrate that the graph of $\bar\varphi(u)$ admits a continuous parametrization
\begin{equation}\label{cpar}
u=b(v)\in C(\R), \quad \bar\varphi(u)\ni g(v)\in C(\R,\R^n),
\end{equation}
such that the function $b(v)$ is non-strictly increasing and coercive, i.e., $b(v)\to\pm\infty$ as $v\to\pm\infty$.
This was shown in \cite{BGMS11}, but only in the case when the set $D$ admits a monotone enumeration $D=\{u_k\}$, $k\in\N$, $u_{k+1}>u_k$ $\forall k\in\N$, i.e., when $D$ can be enumerated in increasing order. In the following lemma we construct the required parametrization in the general case.
\begin{lemma}\label{lem1}
There exists a parametrization (\ref{cpar}) with a non-strictly increasing and coercive $b(v)$.
\end{lemma}
\begin{proof}
We consider the more complicated case when $D$ is infinite (in the case of finite $D$ we only need to replace the set $\N$ in the proof below by a finite subset).
We enumerate the set $D=\{u_k\}_{k\in\N}$ and choose positive numbers $h_k$ such that $\displaystyle\sum_{k=1}^\infty h_k=c<\infty$ (we can take $h_k=2^{-k}$). We define the finite discrete measure $\displaystyle\mu(u)=\sum_{k=1}^\infty h_k\delta(u-u_k)$, where by
$\delta(u-u_k)$ we denote the Dirac mass at the point $u_k$. Then we introduce the strictly increasing function
$\alpha(u)=u+\mu((-\infty,u))$ with jumps at points in $D$. Notice that
\begin{equation}\label{est1}
u\le \alpha(u)\le u+c; \quad \alpha(u_2)-\alpha(u_1)\ge u_2-u_1 \ \forall u_1,u_2\in\R, u_2>u_1.
\end{equation}
The function $b(v)$ is defined as the inverse to the function $\alpha(u)$ considered as a maximal monotone graph, that is,
the value $b(v)$ is the $u\in\R$ such that $v\in [\alpha(u-),\alpha(u+)]$. It follows from (\ref{est1}) that
$v-c\le b(v)\le v$. If $v_1<v_2$ then, denoting $u_i=b(v_i)$, $i=1,2$, we have $v_1\le\alpha(u_1+)\le\alpha(u_2-)\le v_2$ whenever $u_1<u_2$. These relations imply that $v_2-v_1\ge \alpha(u_2-)-\alpha(u_1+)\ge u_2-u_1=b(v_2)-b(v_1)$.
Hence, $b(v_2)-b(v_1)\le v_2-v_1$. In the case $u_1=u_2$ we see that $b(v_2)=b(v_1)$ and the inequality $b(v_2)-b(v_1)\le v_2-v_1$ is evident. The obtained inequality can be written in the form $|b(v_2)-b(v_1)|\le |v_2-v_1|$, so that $b(v)$ is Lipschitz continuous. Notice also that $b(v)$ takes the values $u_k\in D$ on the segments $[a_k,b_k]=[\alpha(u_k-),\alpha(u_k+)]$ of length $h_k>0$. To define the vector $g(v)$, we set
$g(v)=\varphi(b(v))$ whenever $b(v)\notin D$. If $b(v)=u_k\Leftrightarrow v\in [a_k,b_k]$, we define $g(v)$ as the piecewise linear function
\begin{equation}\label{ext}
g(v)=\left\{\begin{array}{lcr} \displaystyle\frac{(c_k-v)\varphi(u_k-)+(v-a_k)\varphi(u_k)}{c_k-a_k} & , & a_k\le v\le c_k, \\
\displaystyle\frac{(b_k-v)\varphi(u_k)+(v-c_k)\varphi(u_k+)}{b_k-c_k} & , & c_k\le v\le b_k,
\end{array}\right.
\end{equation}
where $c_k=(a_k+b_k)/2$ (or some other point between $a_k$ and $b_k$).
Let us show that the vector $g(v)$ is continuous on $\R$. We verify that $g(v)$ is continuous at each point $v_0\in\R$.
It is clear if $v_0\in (a_k,b_k)$ for some $k\in\N$, in view of (\ref{ext}). Further, suppose that $v_0\notin [a_k,b_k]$ for all $k\in\N$. This means that
$u_0=b(v_0)\notin D$ and $\varphi(u)$ is continuous at $u_0$. Therefore, for every $\varepsilon>0$ there exists $\delta>0$ such that $|\varphi(u)-\varphi(u_0)|<\varepsilon$ whenever $|u-u_0|<2\delta$. This implies
that
\begin{equation}\label{3}
\max(|\varphi(u)-\varphi(u_0)|,|\varphi(u-)-\varphi(u_0)|,|\varphi(u+)-\varphi(u_0)|)\le\varepsilon \quad \forall u\in\R, |u-u_0|<\delta.
\end{equation}
If $|v-v_0|<\delta$ then $|b(v)-u_0|\le |v-v_0|<\delta$ and taking into account (\ref{ext}) and (\ref{3}) we conclude
$$
|g(v)-g(v_0)|\le \max(|\varphi(b(v))-\varphi(u_0)|,|\varphi(b(v)-)-\varphi(u_0)|,|\varphi(b(v)+)-\varphi(u_0)|)\le\varepsilon.
$$
Since $\varepsilon>0$ is arbitrary, this means that $g(v)$ is continuous at the point $v_0$. By similar reasoning we prove that
$$
\lim_{v\to a_k-} g(v)=\varphi(u_k-)=g(a_k), \ \lim_{v\to b_k+} g(v)=\varphi(u_k+)=g(b_k) \ \forall k\in\N.
$$
Since, in view of (\ref{ext}),
$$
\lim_{v\to a_k+} g(v)=g(a_k), \ \lim_{v\to b_k-} g(v)=g(b_k),
$$
we find that the vector $g(v)$ is continuous at the remaining points $v=a_k,b_k$, $k\in\N$. The proof is complete.
\end{proof}
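To make the construction concrete, the following is a minimal numerical sketch (our toy example, not taken from the text), assuming the scalar flux $\varphi(u)=0$ for $u<0$ and $\varphi(u)=1$ for $u\ge 0$, so that $D=\{0\}$, $\varphi(0-)=0$, $\varphi(0)=\varphi(0+)=1$, and, with $h_1=1/2$, $[a_1,b_1]=[0,1/2]$, $c_1=1/4$:

```python
# Sketch of the parametrization of Lemma 1 for phi(u) = Heaviside(u):
# alpha(u) = u for u <= 0 and alpha(u) = u + 1/2 for u > 0, hence
# [a_1, b_1] = [alpha(0-), alpha(0+)] = [0, 1/2] and c_1 = 1/4.

def b(v):
    """Generalized inverse of alpha: nondecreasing, 1-Lipschitz, coercive."""
    if v <= 0.0:
        return v
    if v <= 0.5:
        return 0.0            # b takes the jump value u_1 = 0 on [a_1, b_1]
    return v - 0.5

def g(v):
    """Continuous selection from the multivalued flux, as in Eq. (ext)."""
    if v <= 0.0:
        return 0.0            # phi(u) = 0 for u < 0
    if v <= 0.25:
        return v / 0.25       # linear from phi(0-) = 0 to phi(0) = 1
    return 1.0                # phi(0) = phi(0+) = 1, and phi(u) = 1 for u > 0

# Numerically check that b is 1-Lipschitz and g has no jumps on a fine grid.
vs = [k / 1000.0 - 2.0 for k in range(4001)]          # grid on [-2, 2]
lip = max(abs(b(vs[i + 1]) - b(vs[i])) for i in range(4000))
jump = max(abs(g(vs[i + 1]) - g(vs[i])) for i in range(4000))
assert lip <= 0.001 + 1e-9 and jump <= 0.005
```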
At least formally, after the change $u=b(v)$, equation (\ref{1}) reduces to the equation
\begin{equation}\label{1'}
b(v)_t+\div_x g(v)=0
\end{equation}
with already continuous flux $(b(v),g(v))\in\R^{n+1}$.
Recall that an entropy solution (e.s.) of equation (\ref{1'}) is a function $v=v(t,x)\in L^\infty(\Pi)$ satisfying
the Kruzhkov entropy condition: $\forall k\in\R$
\begin{equation}\label{ent}
|b(v)-b(k)|_t+\div_x[\sign(v-k)(g(v)-g(k))]\le 0
\end{equation}
in the sense of distributions on $\Pi$ (in $\D'(\Pi)$). This means that for each test function $f=f(t,x)\in C_0^1(\Pi)$, $f\ge 0$
\begin{equation}\label{enti}
\int_\Pi [|b(v)-b(k)|f_t+\sign(v-k)(g(v)-g(k))\cdot\nabla_xf]dtdx\ge 0.
\end{equation}
Taking $k=\pm\|v\|_\infty$, we derive from (\ref{ent}) that $b(v)_t+\div_x g(v)=0$ in $\D'(\Pi)$, so that any e.s. $v=v(t,x)$ of (\ref{1'}) is a weak solution of this equation.
We study the Cauchy problem for equations (\ref{1}), (\ref{1'}) with initial condition
\begin{equation}\label{ini}
u(0,x)=b(v)(0,x)=u_0(x)\in L^\infty(\R^n).
\end{equation}
This condition is understood in the sense of relation
\begin{equation}\label{ini1}
\esslim_{t\to 0} u(t,\cdot)=u_0 \ \mbox{ in } L^1_{loc}(\R^n).
\end{equation}
It is rather well known (cf. \cite[Proposition~2]{Pan02}) that conditions (\ref{ent}), (\ref{ini1}) can be written in the form of a single integral inequality similar to (\ref{enti}): for all $k\in\R$ and each non-negative test function $f=f(t,x)\in C_0^1(\bar\Pi)$, where $\bar\Pi=[0,+\infty)\times\R^n$ is the closure of $\Pi$,
\begin{equation}\label{enti1}
\int_\Pi [|b(v)-b(k)|f_t+\sign(v-k)(g(v)-g(k))\cdot\nabla_xf]dtdx+\int_{\R^n}|u_0(x)-b(k)|f(0,x)dx\ge 0.
\end{equation}
Notice that any jump-continuous function is Borel measurable and locally bounded. Therefore, $\varphi(u)\in L^\infty(\Pi)$ for all $u=u(t,x)\in L^\infty(\Pi)$, and we can define the notion of e.s. of the original problem (\ref{1}), (\ref{ini}) by the standard Kruzhkov relation like (\ref{enti1})
\begin{equation}\label{enti1a}
\int_\Pi [|u-k|f_t+\sign(u-k)(\varphi(u)-\varphi(k))\cdot\nabla_xf]dtdx+\int_{\R^n}|u_0(x)-k|f(0,x)dx\ge 0
\end{equation}
for all $k\in\R$, $f=f(t,x)\in C_0^1(\bar\Pi)$, $f\ge 0$.
But such an e.s. may not exist, see Example~\ref{ex2} below. For the correct definition we need a multivalued extension of the flux at the discontinuity points and the reduction described above to the well-established case of a continuous flux.
Apparently, the multivalued extension of the flux was first used in \cite{DFR} for a model equation arising in phase transitions.
In the sequel, we need the more general class of measure-valued solutions. Recall
(see \cite{Di,Ta}) that a measure-valued function on $\Pi$ is a weakly measurable map
$(t,x)\to\nu_{t,x}$ of $\Pi$ into the space $\mathrm{Prob}_0(\R)$ of probability Borel measures with
compact support in $\R$.
The weak measurability of $\nu_{t,x}$ means that for each continuous function $p(v)$,
the function $(t,x)\to\langle\nu_{t,x},p(v)\rangle\doteq\int p(v)d\nu_{t,x}(v)$ is Lebesgue-measurable on $\Pi$.
We say that a measure-valued function $\nu_{t,x}$ is bounded if there exists $R>0$ such
that $\supp\nu_{t,x}\subset [-R,R]$ for almost all $(t,x)\in\Pi$. We denote by $\|\nu_{t,x}\|_\infty$
the smallest such $R$.
Finally, we say that measure-valued functions of the kind $\nu_{t,x}(v)=\delta(v-v(t,x))$,
where $v(t,x)\in L^\infty(\Pi)$ and $\delta(v-v_*)$ is the Dirac measure at a point $v_*\in\R$, are regular.
We identify these measure-valued functions and the corresponding functions $v(t,x)$,
so that there is a natural embedding $L^\infty(\Pi)\subset \MV(\Pi)$, where by $\MV(\Pi)$ we denote the set of
bounded measure-valued functions on $\Pi$.
Measure-valued functions naturally arise as weak limits of bounded sequences
in $L^\infty(\Pi)$ in the sense of the following theorem by L.~Tartar \cite{Ta}.
\begin{theorem}\label{thT}
Let $v_k(t,x)\in L^\infty(\Pi)$, $k\in\N$, be a bounded sequence. Then there exist a subsequence (we
keep the notation $v_k(t,x)$ for this subsequence) and a bounded measure valued function $\nu_{t,x}\in\MV(\Pi)$
such that
\begin{equation} \label{pr2} \forall p(v)\in C(\R) \quad p(v_k)
\mathop{\to}_{k\to\infty}\langle\nu_{t,x},p(v)\rangle \quad\text{weakly-\/$*$ in } L^\infty(\Pi).
\end{equation}
Besides, $\nu_{t,x}$ is regular, i.e., $\nu_{t,x}(v)=\delta(v-v(t,x))$ if and only if $v_k(t,x)
\mathop{\to}\limits_{k\to\infty} v(t,x)$ in $L^1_{loc}(\Pi)$ (strongly).
\end{theorem}
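The classical example illustrating Theorem~\ref{thT} is a rapidly oscillating sequence; the following minimal numerical sketch (our illustration, with the concrete sequence $v_k(x)=\sign(\sin kx)$ chosen by us) shows that the weak-$*$ limits of $p(v_k)$ are averages against the Young measure $\nu=(\delta_{-1}+\delta_{+1})/2$, which is not regular:

```python
import math

def average(p, k, n=100000):
    """Midpoint-rule average of p(v_k(x)) over (0, 2*pi), where
    v_k(x) = sign(sin(k*x)); for large k it approximates the constant
    weak-* limit <nu_x, p(v)>."""
    s = 0.0
    for i in range(n):
        x = 2.0 * math.pi * (i + 0.5) / n
        v = 1.0 if math.sin(k * x) >= 0.0 else -1.0
        s += p(v)
    return s / n

# <nu, v> = 0: the sequence v_k converges weakly-* to 0 ...
assert abs(average(lambda v: v, 101)) < 1e-2
# ... but <nu, v^2> = 1 != 0^2, so nu is not regular and, by the theorem,
# v_k has no strong L^1_loc limit.
assert abs(average(lambda v: v * v, 101) - 1.0) < 1e-12
```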
More generally, the following weak precompactness property holds for bounded sequences of measure valued functions, see
for instance \cite[Theorem~2]{Pan95}.
\begin{theorem}\label{thTa}
Let $\nu^k_{t,x}\in\MV(\Pi)$, $k\in\N$, be a bounded sequence (this means that the scalar sequence $\|\nu^k_{t,x}\|_\infty$ is bounded). Then there exists a subsequence $\nu^k_{t,x}$ (not relabeled) weakly convergent to a bounded measure valued function $\nu_{t,x}\in\MV(\Pi)$ in the sense of relation
\begin{equation} \label{pr2a} \forall p(v)\in C(\R) \quad \langle\nu^k_{t,x},p(v)\rangle
\mathop{\to}_{k\to\infty}\langle\nu_{t,x},p(v)\rangle \quad\text{weakly-\/$*$ in } L^\infty(\Pi).
\end{equation}
\end{theorem}
Obviously, in the case when the sequence $\nu^k_{t,x}$ consists of regular functions $v_k$, relation (\ref{pr2a}) reduces to (\ref{pr2}). Remark that in Theorems~\ref{thT}, \ref{thTa} the half-space $\Pi$ may be replaced by an arbitrary finite-dimensional domain $\Omega$.
Recall (see \cite{Di,Pan96}) that a measure valued e.s. of (\ref{1'}), (\ref{ini}) is a bounded measure valued function
$\nu_{t,x}\in\MV(\Pi)$, which satisfies the following averaged variant of entropy relation (\ref{enti1}): for all $k\in\R$, $f=f(t,x)\in C_0^1(\bar\Pi)$, $f\ge 0$
\begin{align}\label{entim}
\int_\Pi \left[\int|b(v)-b(k)|d\nu_{t,x}(v)f_t+\int\sign(v-k)(g(v)-g(k))d\nu_{t,x}(v)\cdot\nabla_xf\right]dtdx+ \nonumber\\
\int_{\R^n}|u_0(x)-b(k)|f(0,x)dx\ge 0.
\end{align}
Now we are ready to define the notion of e.s. of original problem (\ref{1}), (\ref{ini}).
\begin{definition}[cf. \cite{BGMS11}]\label{def1}
A function $u=u(t,x)\in L^\infty(\Pi)$ is called an e.s. of problem (\ref{1}), (\ref{ini}) if there exists a measure valued e.s. $\nu_{t,x}(v)$ of (\ref{1'}), (\ref{ini}) such that the push-forward measure $b^*\nu_{t,x}(u)$ coincides with the Dirac mass $\delta(u-u(t,x))$ for a.e. $(t,x)\in\Pi$.
\end{definition}
In view of the requirement $b^*\nu_{t,x}(u)=\delta(u-u(t,x))$ entropy relation (\ref{entim}) can be written as
\begin{align}\label{entim1}
\int_\Pi \left[|u-b(k)|f_t+\int\sign(v-k)(g(v)-g(k))d\nu_{t,x}(v)\cdot\nabla_xf\right]dtdx+ \nonumber\\
\int_{\R^n}|u_0(x)-b(k)|f(0,x)dx\ge 0.
\end{align}
\begin{remark}\label{rem1}
If $u(t,x)$ is an e.s. of (\ref{1}), (\ref{ini}) then $u=-u(t,x)$ is an e.s. of the problem
\begin{equation}\label{1-}
u_t-\div_x\varphi(-u)=0, \quad u(0,x)=-u_0(x)
\end{equation}
with respect to the continuous parametrization $u=-b(-v)$, $-\bar\varphi(-u)\ni-g(-v)$ of the flux.
In fact, let $\nu_{t,x}$ be a measure valued e.s. of (\ref{1'}), (\ref{ini}) such that $b^*\nu_{t,x}(u)=\delta(u-u(t,x))$.
Then the measure valued function $\tilde\nu_{t,x}=l^*\nu_{t,x}\in\MV(\Pi)$, where $l(v)=-v$, is a measure valued e.s. of the problem (\ref{1-}). Indeed, for each $k\in\R$
\begin{align*}
\int |-b(-v)-(-b(-k))|d\tilde\nu_{t,x}(v)=\int |b(v)-b(-k)|d\nu_{t,x}(v)=|u-b(-k)|, \\
\int \sign(v-k)(-g(-v)-(-g(-k)))d\tilde\nu_{t,x}(v)=\int \sign(v+k)(g(v)-g(-k))d\nu_{t,x}(v), \\ |-u_0(x)-(-b(-k))|=|u_0(x)-b(-k)|
\end{align*}
and these equalities imply that for every $f=f(t,x)\in C_0^1(\bar\Pi)$, $f\ge 0$
\begin{align*}
\int_\Pi \left[\int|-b(-v)-(-b(-k))|d\tilde\nu_{t,x}(v)f_t+\right. \\ \left.\int\sign(v-k)(-g(-v)-(-g(-k)))d\tilde\nu_{t,x}(v)\cdot\nabla_xf\right]dtdx+ \\
\int_{\R^n}|-u_0(x)-(-b(-k))|f(0,x)dx=\\
\int_\Pi \left[\int|b(v)-b(-k)|d\nu_{t,x}(v)f_t+\int\sign(v+k)(g(v)-g(-k))d\nu_{t,x}(v)\cdot\nabla_xf\right]dtdx+ \\
\int_{\R^n}|u_0(x)-b(-k)|f(0,x)dx\ge 0
\end{align*}
by entropy relation (\ref{entim}) with $k$ replaced by $-k$. Further,
$$
(-b(-\cdot))^*\tilde\nu_{t,x}(u)=(-b)^*\nu_{t,x}(u)=l^*\delta(u-u(t,x))=\delta(u-(-u(t,x))).
$$
We conclude that $-u(t,x)$ satisfies all the requirements of Definition~\ref{def1} for the problem (\ref{1-}).
\end{remark}
\begin{remark}\label{rem2}
The notion of e.s. does not depend on the choice of parametrization (\ref{cpar}). This follows from the following observation. Let
$$
u=b(v(s)), \quad \bar\varphi(u)\ni g(v(s))
$$
be another parametrization, where $v(s)$ is a continuous increasing and coercive function on $\R$. Then $\tilde\nu_{t,x}(s)$ is a measure valued e.s. of the equation
$$
b(v(s))_t+\div_x g(v(s))=0
$$
if and only if the push-forward measure valued function $\nu_{t,x}=(v^*\tilde\nu_{t,x})(v)$ is a measure valued e.s. of (\ref{1'}).
\end{remark}
In \cite{BGMS11} (also see \cite{BGS13,GSWZ14}) the existence and uniqueness of e.s. were established only in the case
of an integrable initial function $u_0\in L^1(\R^n)$ and under the assumption of H\"older continuity of the flux vector $\varphi(u)$ at zero with exponent $\alpha\ge (n-1)/n$.
Our main result is the existence of the largest and the smallest e.s. of (\ref{1}), (\ref{ini}) in the general case $u_0\in L^\infty(\R^n)$. The uniqueness of e.s. follows from this result in the particular case when the initial function $u_0$ is periodic. This extends results of \cite{Pan02}. In the case $n=1$ we also prove the weak completeness of the set of spatially periodic e.s., generalizing the results of \cite{PanSIMA} to the case of a discontinuous flux.
\section{Some properties of e.s.}
We denote $z^\pm=\max(\pm z,0)$, $\sign^+ z=(\sign z)^+$, $\sign^- z=-\sign^+ (-z)$ (so that $\sign^\pm z=\frac{d}{dz}z^\pm$).
\begin{proposition}\label{pro1}
If $u=u(t,x)$ is an e.s. of (\ref{1}), (\ref{ini}), $c\in\R$, then for a.e. $t>0$
$$
\int_{\R^n}(u(t,x)-c)^\pm dx\le\int_{\R^n}(u_0(x)-c)^\pm dx.
$$
\end{proposition}
\begin{proof}
Without loss of generality we will suppose that $\int_{\R^n}(u_0(x)-c)^\pm dx<\infty$, otherwise the required estimate is evident.
It follows from (\ref{entim1}) with $k=\pm M$, $M\ge\|\nu_{t,x}\|_\infty$, that for each $f=f(t,x)\in C_0^1(\bar\Pi)$
\begin{equation}\label{weak}
\int_\Pi \left[uf_t+\int g(v)d\nu_{t,x}(v)\cdot\nabla_xf\right]dtdx+
\int_{\R^n}u_0(x)f(0,x)dx=0.
\end{equation}
Taking into account that for every constant $k\in\R$
$$
\int_\Pi [b(k)f_t+g(k)\cdot\nabla_xf]dtdx+\int_{\R^n}b(k)f(0,x)dx=0,
$$
we can rewrite the previous identity in the form
$$
\int_\Pi \left[(u-b(k))f_t+\int (g(v)-g(k))d\nu_{t,x}(v)\cdot\nabla_xf\right]dtdx+\nonumber\\
\int_{\R^n}(u_0(x)-b(k))f(0,x)dx=0.
$$
Putting this equality together with entropy inequality (\ref{entim1}) and taking into account that
$|z|+z=2z^+$, $\sign z+ 1 =2\sign^+ z$, we arrive at the relation
\begin{align}\label{entim1+}
\int_\Pi \left[(u-b(k))^+f_t+\int\sign^+(v-k)(g(v)-g(k))d\nu_{t,x}(v)\cdot\nabla_xf\right]dtdx+ \nonumber\\
\int_{\R^n}(u_0(x)-b(k))^+f(0,x)dx\ge 0.
\end{align}
By the coercivity condition there is $d\in\R$ such that $c=b(d)$.
Let $m\ge n$, $\delta>0$, $\beta(s)=\min((s/\delta)^+,1)^m$.
Integrating the inequality (\ref{entim1+})
over the measure $\beta'(b(k)-c)db(k)$, we arrive at the relation
\begin{align}\label{5}
\int_\Pi \left[\eta(u-c)f_t+\int q(v)d\nu_{t,x}(v)\cdot\nabla_xf\right]dtdx+
\int_{\R^n}\eta(u_0(x)-c)f(0,x)dx\ge 0,
\end{align}
where
\begin{align*}
\eta(b(v)-c)=\int_d^v (b(v)-b(k))^+\beta'(b(k)-c)db(k)=\int_d^v\beta(b(k)-c)db(k)=\\
\left\{ \begin{array}{lcr} ((b(v)-c)^+)^{m+1}/((m+1)\delta^m) & , & b(v)-c<\delta, \\
b(v)-c-m\delta/(m+1) & , & b(v)-c\ge\delta, \end{array}\right. \\
q(v)=\int_d^v\sign^+(v-k)(g(v)-g(k))\beta'(b(k)-c)d b(k).
\end{align*}
In particular, if $\supp\nu_{t,x}\subset [-M,M]$ a.e. on $\Pi$, and $\displaystyle C=2\max_{|v|\le M+d}|g(v)|$ then for all $v\in [-M,M]$
$$|q(v)|\le C\int_d^v\beta'(b(k)-c)db(k)=C\beta(b(v)-c),$$ which implies that
\begin{equation}\label{6}
\left|\int q(v) d\nu_{t,x}(v)\right|\le C\int\beta(b(v)-c)d\nu_{t,x}(v)=C\beta(u-c).
\end{equation}
Now we fix $\varepsilon>0$. Since $\beta(s)=1$ for $s>\delta$, the function $\displaystyle\gamma(s)\doteq \frac{\beta(s)}{\eta(s)+\varepsilon}$ decreases on $[\delta,+\infty)$. This implies that
$$
\max\gamma(s)=\max_{s\in[0,\delta]} \gamma(s)\le\max_{s>0}\frac{(s/\delta)^m}{\delta(s/\delta)^{m+1}/(m+1)+\varepsilon}=\max_{\sigma=s/\delta>0}\frac{m+1}{\delta \sigma+(m+1)\varepsilon \sigma^{-m}}.
$$
By direct computations we find
$$
\min_{\sigma>0}(\delta\sigma+(m+1)\varepsilon \sigma^{-m})=\frac{\delta(m+1)}{m}\left(\frac{m(m+1)\varepsilon}{\delta}\right)^{\frac{1}{m+1}}.
$$
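Indeed, the minimum is attained at the point $\sigma_*=\left(m(m+1)\varepsilon/\delta\right)^{\frac{1}{m+1}}$ where the derivative $\delta-m(m+1)\varepsilon\sigma^{-m-1}$ vanishes; since $(m+1)\varepsilon\sigma_*^{-m}=\delta\sigma_*/m$, the minimal value equals $\delta\sigma_*+\delta\sigma_*/m=\frac{m+1}{m}\delta\sigma_*$.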
Therefore,
$$
\gamma(s)\le\frac{m}{\delta}\left(\frac{\delta}{m(m+1)}\right)^{\frac{1}{m+1}}\varepsilon^{-\frac{1}{m+1}}.
$$
This together with estimate (\ref{6}) implies that
\begin{equation}\label{6a}
\left|\int q(v)d\nu_{t,x}(v)\right|\le N(\eta(u-c)+\varepsilon),
\end{equation}
where
\begin{equation}\label{Ne}
N=N(\varepsilon)=\frac{Cm}{\delta}\left(\frac{\delta}{m(m+1)}\right)^{\frac{1}{m+1}}\varepsilon^{-\frac{1}{m+1}}.
\end{equation}
Since $\int_\Pi f_tdtdx+\int_{\R^n} f(0,x)dx=0$ we can write (\ref{5}) in the form
\begin{align}\label{7}
\int_\Pi \left[(\eta(u-c)+\varepsilon) f_t+\int q(v)d\nu_{t,x}(v)\cdot\nabla_xf\right]dtdx+\nonumber\\
\int_{\R^n}(\eta(u_0(x)-c)+\varepsilon) f(0,x)dx\ge 0.
\end{align}
Let $E$ be a set of $t>0$ such that $(t,x)$ is a Lebesgue point of $u(t,x)$ for almost all $x\in\R^n$. It is rather well-known (see for example \cite[Lemma~1.2]{PaJHDE}) that $E$ is a set of full measure and $t\in E$ is a common Lebesgue point of the functions $t\to\int_{\R^n} u(t,x)h(x)dx$ for all $h(x)\in L^1(\R^n)$. Since every Lebesgue point of a bounded function $u$ is also a Lebesgue point of $p(u)$ for an arbitrary function $p\in C(\R)$, we may replace $u$ in the above property by $p(u)$, and in particular by $\eta(u-c)+\varepsilon$. We choose a function $\omega(s)\in C_0^\infty(\R)$, such that $\omega(s)\ge 0$, $\supp\omega\subset [0,1]$, $\int\omega(s)ds=1$, and define the sequences $\omega_r(s)=r\omega(rs)$, $\theta_r(s)=\int_{-\infty}^s\omega_r(\sigma)d\sigma=\int_{-\infty}^{rs}\omega(\sigma)d\sigma$, $r\in\N$. Obviously, the sequence $\omega_r(s)$ converges as $r\to\infty$ to the Dirac $\delta$-measure weakly in $\D'(\R)$ while the sequence $\theta_r(s)$ converges to the Heaviside function $\theta(s)$ pointwise and in $L^1_{loc}(\R)$.
Now we take the test function in the form
$$
f=f(t,x)=h\theta_r(t_0-t), \quad h=\rho(N(t-t_0)+|x|-R),
$$
where $\rho(\sigma)\in C^\infty(\R)$ is a decreasing function such that $\rho(\sigma)=1$ for $\sigma\le 0$ and $\rho(\sigma)=0$ for $\sigma\ge 1$
(we can take $\rho(\sigma)=1-\theta_1(\sigma)$), $R>0$, and $t_0\in E$. Observe that $f=\theta_r(t_0-t)$ in the neighborhood $|x|<R$ of the singular point $x=0$ and therefore $f\in C^\infty(\bar\Pi)$, $f\ge 0$.

Applying (\ref{7}) to the test function $f$, we arrive at the relation
\begin{align}\label{8}
\int_{\R^n}(\eta(u_0(x)-c)+\varepsilon) h(0,x)dx-\int_\Pi (\eta(u-c)+\varepsilon) h\omega_r(t_0-t)dtdx+\nonumber\\
\int_\Pi \left[N(\eta(u-c)+\varepsilon)+\int q(v)d\nu_{t,x}(v)\cdot\frac{x}{|x|}\right]\rho'(N(t-t_0)+|x|-R)\theta_r(t_0-t)dtdx\ge 0
\end{align}
for sufficiently large $r\in\N$ such that $rt_0>1$.
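The derivation of (\ref{8}) is a direct differentiation of the test function: since $f=h\theta_r(t_0-t)$ with $h=\rho(N(t-t_0)+|x|-R)$,
$$
f_t=N\rho'(N(t-t_0)+|x|-R)\theta_r(t_0-t)-h\,\omega_r(t_0-t),\qquad
\nabla_xf=\rho'(N(t-t_0)+|x|-R)\frac{x}{|x|}\,\theta_r(t_0-t),
$$
and these expressions are substituted into (\ref{7}).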
In view of (\ref{6a}) and the condition $\rho'(\sigma)\le 0$, the last integral in (\ref{8}) is non-positive and it follows that
$$
\int_\Pi (\eta(u-c)+\varepsilon) h\omega_r(t_0-t)dtdx\le \int_{\R^n}(\eta(u_0(x)-c)+\varepsilon) h(0,x)dx.
$$
Dropping $\varepsilon$ in the left integral, we obtain the inequality
$$
\int_0^\infty \left(\int_{\R^n}\eta(u(t,x)-c)h(t,x)dx\right)\omega_r(t_0-t)dt\le
\int_{\R^n}(\eta(u_0(x)-c)+\varepsilon) h(0,x)dx.
$$
Since $t_0\in E$ is a Lebesgue point of the function $t\to\int_{\R^n}\eta(u(t,x)-c)h(t,x)dx$, we can pass to the limit as
$r\to\infty$ in the above inequality, resulting in
$$
\int_{\R^n}\eta(u(t_0,x)-c)h(t_0,x)dx\le \int_{\R^n}(\eta(u_0(x)-c)+\varepsilon) h(0,x)dx.
$$
Writing out $h$ in this relation explicitly, we get
\begin{align}\label{9}
\int_{\R^n}\eta(u(t_0,x)-c)\rho(|x|-R)dx\le \int_{\R^n}(\eta(u_0(x)-c)+\varepsilon)\rho(|x|-Nt_0-R)dx\le \nonumber \\ \int_{\R^n}\eta(u_0(x)-c)dx+\varepsilon \int_{\R^n}\rho(|x|-Nt_0-R)dx.
\end{align}
With the help of (\ref{Ne}), we obtain that for some constants $c_1$, $c_2=c_2(R,\delta)$
$$
\varepsilon\int_{\R^n}\rho(|x|-N(\varepsilon)t_0-R)dx\le c_1\varepsilon(N(\varepsilon)t_0+R+1)^n\le c_2\varepsilon(1+t_0\varepsilon^{-\frac{1}{m+1}})^n\mathop{\to}_{\varepsilon\to 0+}0
$$
(recall that $m+1>n$). Therefore, passing to the limit in (\ref{9}) as $\varepsilon\to 0+$, we obtain that for all $t_0\in E$
\begin{equation}\label{10}
\int_{\R^n}\eta(u(t_0,x)-c)\rho(|x|-R)dx\le \int_{\R^n}\eta(u_0(x)-c)dx.
\end{equation}
Now observe that $0\le\eta(s)\le s^+$ and $\eta(s)\to s^+$ as $\delta\to 0$. By the Lebesgue dominated convergence theorem, it follows from (\ref{10}) in the limit as $\delta\to 0$ that for a.e. $t=t_0>0$
$$
\int_{\R^n}(u(t,x)-c)^+\rho(|x|-R)dx\le \int_{\R^n}(u_0(x)-c)^+dx<+\infty.
$$
By Fatou's lemma, this implies in the limit as $R\to\infty$ that
\begin{equation}\label{11}
\int_{\R^n}(u(t,x)-c)^+dx\le \int_{\R^n}(u_0(x)-c)^+dx,
\end{equation}
as required. In view of Remark~\ref{rem1}, the function $-u(t,x)$ is an e.s. of the problem
$u_t-\div_x\varphi(-u)=0$, $u(0,x)=-u_0(x)$. Applying (\ref{11}) to this e.s. with $c$ replaced by $-c$, we obtain the inequality
\begin{equation}\label{11a}
\int_{\R^n}(u(t,x)-c)^-dx\le \int_{\R^n}(u_0(x)-c)^-dx \quad \forall t\in E.
\end{equation}
\end{proof}
\begin{corollary}\label{cor1} Any e.s. $u=u(t,x)$ of (\ref{1}), (\ref{ini}) satisfies the \textbf{maximum/minimum principle}
$$
a=\essinf u_0(x)\le u(t,x)\le b=\esssup u_0(x) \ \mbox{ for a.e. } (t,x)\in\Pi.
$$
\end{corollary}
\begin{proof}
The maximum/minimum principles follow directly from (\ref{11}) and (\ref{11a}) with $c=b$ and $c=a$, respectively.
\end{proof}
Putting inequalities (\ref{11}), (\ref{11a}) together and using the known relation $|z|=z^++z^-$, we obtain the following
\begin{corollary}\label{cor2} If $u(t,x)$ is an e.s. of (\ref{1}), (\ref{ini}) then for a.e. $t>0$
$$
\int_{\R^n}|u(t,x)-c|dx\le \int_{\R^n}|u_0(x)-c|dx.
$$
\end{corollary}
If $u_1$, $u_2$ is a pair of e.s. and $\nu_{t,x}^{(1)}$, $\nu_{t,x}^{(2)}$ are the corresponding measure valued e.s. of (\ref{1'}), then by a measure-valued analogue of the doubling of variables method, developed in \cite{Pan96}, we have the relation
\begin{align*}
\frac{\partial}{\partial t}\iint (b(v)-b(w))^+d\nu_{t,x}^{(1)}(v)d\nu_{t,x}^{(2)}(w)+ \\ \div_x\iint\sign^+(v-w)(g(v)-g(w))d\nu_{t,x}^{(1)}(v)d\nu_{t,x}^{(2)}(w)\le 0 \mbox{ in } \D'(\Pi).
\end{align*}
Since $b(v)=u_1(t,x)$, $b(w)=u_2(t,x)$ on $\supp\nu_{t,x}^{(1)}$, $\supp\nu_{t,x}^{(2)}$, respectively, the above relation can be written as
\begin{equation}\label{12}
\frac{\partial}{\partial t}(u_1-u_2)^++\div_x\iint\sign^+(v-w)(g(v)-g(w))d\nu_{t,x}^{(1)}(v)d\nu_{t,x}^{(2)}(w)\le 0 \mbox{ in } \D'(\Pi).
\end{equation}
\begin{proposition}\label{pro2}
Let $u_1$, $u_2$ be e.s. of (\ref{1}), (\ref{ini}) with initial functions $u_{10}$, $u_{20}$, respectively. Assume that
for every $T>0$
$$
\meas\{ \ (t,x)\in (0,T)\times\R^n \ | \ u_1(t,x)\ge u_2(t,x) \ \}<+\infty.
$$
Then for a.e. $t>0$
$$
\int_{\R^n} (u_1(t,x)-u_2(t,x))^+dx\le\int_{\R^n} (u_{10}(x)-u_{20}(x))^+dx.
$$
In particular, $u_1(t,x)\le u_2(t,x)$ a.e. in $\Pi$ whenever $u_{10}(x)\le u_{20}(x)$ a.e. in $\R^n$ (the comparison principle).
\end{proposition}
\begin{proof}
Let, as above, $\nu_{t,x}^{(1)}$, $\nu_{t,x}^{(2)}$ be measure valued e.s. of (\ref{1'}) corresponding to $u_1$, $u_2$.
Let $E\subset\R_+$ be a set of full measure, similar to the one in the proof of Proposition~\ref{pro1}, consisting of
values $t>0$ such that $(t,x)$ is a Lebesgue point of $(u_1(t,x)-u_2(t,x))^+$ for a.e. $x\in\R^n$. Then $t\in E$ is a common Lebesgue point of the functions $t\to\int (u_1(t,x)-u_2(t,x))^+ h(x)dx$, $h(x)\in L^1(\R^n)$.
Let $t_0,t_1\in E$, $t_0<t_1$, $\chi_r(t)=\theta_r(t-t_0)-\theta_r(t-t_1)$, where the sequence $\theta_r(t)$, $r\in\N$, was defined in the proof of Proposition~\ref{pro1}.
Applying (\ref{12}) to the nonnegative test function $f(t,x)=\chi_r(t)q(x/R)$, where $q=q(y)\in C_0^1(\R^n)$, $0\le q\le 1$, $q(0)=1$, and $R>0$, we get
\begin{align*}
\int_\Pi (u_1(t,x)-u_2(t,x))^+(\omega_r(t-t_0)-\omega_r(t-t_1))q(x/R)dtdx+ \\
\frac{1}{R}\int_{\Pi} \iint\sign^+(v-w)(g(v)-g(w))d\nu_{t,x}^{(1)}(v)d\nu_{t,x}^{(2)}(w)\cdot\nabla_yq(x/R)\chi_r(t)dtdx\ge 0.
\end{align*}
Since $t_i$, $i=0,1$, are Lebesgue points of the function $t\to\int_{\R^n} (u_1(t,x)-u_2(t,x))^+q(x/R)dx$ while the sequence
$\chi_r(t)$ is uniformly bounded and converges pointwise to the indicator function of the interval $(t_0,t_1]$, we can pass to the limit as $r\to\infty$ in the above relation and get
\begin{align}\label{13}
\int_{\R^n} (u_1(t_1,x)-u_2(t_1,x))^+q(x/R)dx\le \int_{\R^n} (u_1(t_0,x)-u_2(t_0,x))^+q(x/R)dx+ \nonumber\\
\frac{1}{R}\int_{(t_0,t_1)\times\R^n} \iint\sign^+(v-w)(g(v)-g(w))d\nu_{t,x}^{(1)}(v)d\nu_{t,x}^{(2)}(w)\cdot\nabla_y q(x/R)dtdx.
\end{align}
It follows from the inequality
$$
|(u_1(t_0,x)-u_2(t_0,x))^+-(u_{10}(x)-u_{20}(x))^+|\le |u_1(t_0,x)-u_{10}(x)|+|u_2(t_0,x)-u_{20}(x)|
$$
and initial relation (\ref{ini1}) that
$$
\esslim_{t_0\to 0}(u_1(t_0,x)-u_2(t_0,x))^+=(u_{10}(x)-u_{20}(x))^+ \ \mbox{ in } L^1_{loc}(\R^n).
$$
This allows us to pass to the limit as $t_0\to 0$ in (\ref{13}), resulting in the following relation: for a.e. $T=t_1>0$
\begin{align}\label{14}
\int_{\R^n} (u_1(T,x)-u_2(T,x))^+q(x/R)dx\le \int_{\R^n} (u_{10}(x)-u_{20}(x))^+q(x/R)dx+ \nonumber\\
\frac{1}{R}\int_{(0,T)\times\R^n} \iint\sign^+(v-w)(g(v)-g(w))d\nu_{t,x}^{(1)}(v)d\nu_{t,x}^{(2)}(w)\cdot\nabla_y q(x/R)dtdx\le \nonumber\\ \int_{\R^n} (u_{10}(x)-u_{20}(x))^+dx+\frac{1}{R}\int_{(0,T)\times\R^n}G(t,x)\cdot\nabla_y q(x/R)dtdx,
\end{align}
where
$$
G=G(t,x)\doteq\iint\sign^+(v-w)(g(v)-g(w))d\nu_{t,x}^{(1)}(v)d\nu_{t,x}^{(2)}(w).
$$
By Definition~\ref{def1}
$b(v)\equiv u_1(t,x)$ on $\supp\nu_{t,x}^{(1)}$, $b(w)\equiv u_2(t,x)$ on $\supp\nu_{t,x}^{(2)}$ and if $u_1(t,x)<u_2(t,x)$ then $v<w$ whenever $v\in \supp\nu_{t,x}^{(1)}$, $w\in\supp\nu_{t,x}^{(2)}$ and therefore the vector-function
$G$ can be different from zero vector only on the set $\{u_1(t,x)\ge u_2(t,x)\}$, which has finite measure in any layer $\Pi_T=(0,T)\times\R^n$. Thus,
denoting $D=\{ \ (t,x)\in\Pi_T \ | \ u_1(t,x)\ge u_2(t,x) \ \}$, we find
\begin{align*}
\left|\int_{(0,T)\times\R^n} G(t,x)\cdot\nabla_y q(x/R)dtdx\right|=\\ \left|\int_D G(t,x)\cdot\nabla_y q(x/R)dtdx\right|\le \|G\|_\infty\|\nabla_y q\|_\infty\meas D<\infty
\end{align*}
(notice that $\displaystyle\|G\|_\infty\le 2\max_{|v|\le M} |g(v)|$, where $M=\max(\|\nu_{t,x}^{(1)}\|_\infty,\|\nu_{t,x}^{(2)}\|_\infty)$).
We see that the last term in (\ref{14}) disappears in the limit as $R\to\infty$ due to the factor $1/R$. Hence, passing to the limit as $R\to\infty$ and using Fatou's lemma (observe that $q(x/R)\mathop{\to}\limits_{R\to\infty} q(0)=1$), we arrive at the desired relation: for all $T\in E$
$$
\int_{\R^n} (u_1(T,x)-u_2(T,x))^+dx\le \int_{\R^n} (u_{10}(x)-u_{20}(x))^+dx.
$$
\end{proof}
The following result asserts the strong completeness of the set of e.s. of the problem (\ref{1}), (\ref{ini}). More precisely, we consider the approximate problem
\begin{equation}\label{1'r}
u_t+\div_x g(v)=0, \ u=b_r(v); \quad u(0,x)=u_{r0}(x),
\end{equation}
where the sequence $b_r(u)\in C(\R)$, $r\in\N$, of non-strictly increasing functions converges as $r\to\infty$ to $b(u)$ uniformly on any segment.
\begin{proposition}\label{pro3}
Let $u_{r0}=u_{r0}(x)$, $r\in\N$, be a bounded sequence in $L^\infty(\R^n)$, and $u_r=u_r(t,x)$ be a sequence of e.s. of (\ref{1'r}). Assume that as $r\to\infty$ the sequences $u_{r0}\to u_0=u_0(x)$, $u_r\to u=u(t,x)$ in $L^1_{loc}(\R^n)$, $L^1_{loc}(\Pi)$, respectively. Then $u$ is an e.s. of (\ref{1}), (\ref{ini}) with initial data $u_0$.
\end{proposition}
\begin{proof}
Let $M=\sup\limits_{r\in\N} \|u_{r0}\|_\infty$. By Corollary~\ref{cor1} we see that $\|u_r\|_\infty\le M$ for all $r\in\N$. By Definition~\ref{def1} there exists a sequence $\nu_{t,x}^r\in\MV(\Pi)$ such that
\begin{equation}\label{unu}
b_r^*\nu_{t,x}^r(u)=\delta(u-u_r(t,x)),
\end{equation}
and that for all $k\in\R$ and every $f=f(t,x)\in C_0^1(\bar\Pi)$, $f\ge 0$
\begin{align}\label{enti2}
\int_\Pi \left[|u_r-b_r(k)|f_t+\int\sign(v-k)(g(v)-g(k))d\nu_{t,x}^r(v)\cdot\nabla_xf\right]dtdx+ \nonumber\\
\int_{\R^n}|u_{r0}(x)-b_r(k)|f(0,x)dx\ge 0.
\end{align}
By the coercivity assumption, there exists a constant $R>0$ such that $b(-R)<-M$, $b(R)>M$. Since $b_r(\pm R)\to b(\pm R)$ as $r\to\infty$, we find that $b_r(-R)<-M$, $b_r(R)>M$ for sufficiently large $r$. Without loss of generality, we can suppose
that these inequalities hold for all $r\in\N$.
Then, in view of (\ref{unu}), $\supp\nu_{t,x}^r\subset [-R,R]$. Therefore, the sequence of measure valued functions $\nu_{t,x}^r$ is bounded, and by Theorem~\ref{thTa} some subsequence of $\nu_{t,x}^r$ converges weakly to a bounded measure valued function $\nu_{t,x}$ (in the sense of relation (\ref{pr2a})). We replace the original sequences $u_{r0}$, $u_r$, $\nu_{t,x}^r$ by the corresponding subsequences (keeping the same notation), and pass to the limit as $r\to\infty$ in (\ref{enti2}). As a result, we get
\begin{align}\label{enti3}
\int_\Pi \left[|u-b(k)|f_t+\int\sign(v-k)(g(v)-g(k))d\nu_{t,x}(v)\cdot\nabla_xf\right]dtdx+ \nonumber\\
\int_{\R^n}|u_{0}(x)-b(k)|f(0,x)dx\ge 0
\end{align}
for all $k\in\R$ and each $f=f(t,x)\in C_0^1(\bar\Pi)$, $f\ge 0$.
Moreover, passing to the limit as $r\to\infty$ in the relation (following from (\ref{unu}))
$$
\int q(b_r(v))d\nu_{t,x}^r(v)= q(u_r(t,x)) \quad \forall q(u)\in C(\R),
$$
with the help of the uniform convergence $q(b_r(v))-q(b(v))\rightrightarrows 0$ on $[-R,R]$, we obtain that for a.e. $(t,x)\in\Pi$
\begin{equation}\label{unu1}
\int q(b(v))d\nu_{t,x}(v)= q(u(t,x)).
\end{equation}
A set $E$ of full measure, consisting of points $(t,x)$ for which relation (\ref{unu1}) holds, can be chosen to be common for all $q$ from a countable dense subset of $C(\R)$. By density, this relation remains valid for all $q\in C(\R)$, which evidently means that $b^*\nu_{t,x}(u)=\delta(u-u(t,x))$ for all $(t,x)\in E$. In particular, it follows from (\ref{enti3}) that
the entropy relation (\ref{entim}) is fulfilled, and $\nu_{t,x}$ is a measure valued e.s. of (\ref{1'}), (\ref{ini}). In correspondence with Definition~\ref{def1}, we conclude that $u$ is an e.s. of (\ref{1}), (\ref{ini}), as required.
\end{proof}
\section{Existence of e.s.}
In this section we assume that the initial function is integrable, $u_0\in L^1(\R^n)\cap L^\infty(\R^n)$. The general case will be treated in the next section, where we will establish existence of the largest and the smallest e.s.
We introduce the approximations $b_r(u)=b(u)+u/r$, $r\in\N$, of $b(u)$ by strictly increasing functions. Then the equation in (\ref{1'r}) can be written in the standard form
\begin{equation}\label{ap1}
u_t+\div_x\varphi_r(u)=0,
\end{equation}
where $\varphi_r(u)=g((b_r)^{-1}(u))\in C(\R,\R^n)$.
As was established in \cite{ABK}, there exists a unique largest e.s. $u_r=u_r(t,x)$ of the Cauchy problem for equation (\ref{ap1}) with initial data $u_0(x)$. It is known that, after a possible correction on a set of null measure,
$u_r(t,\cdot)\in C([0,+\infty),L^1(\R^n))$. Moreover, for each fixed $r\in\N$ the maps $u_0\to u_r(t,\cdot)$, $t\ge 0$, are nonexpansive in $L^1(\R^n)$. It is clear that for every $\Delta x\in\R^n$ the shifted functions $u_r(t,x+\Delta x)$ are the largest e.s. of (\ref{ap1}) with the initial function $u_0(x+\Delta x)$. This implies the uniform estimate
$$
\int_{\R^n}|u_r(t_0,x+\Delta x)-u_r(t_0,x)|dx\le\int_{\R^n}|u_0(x+\Delta x)-u_0(x)|dx \quad \forall t_0>0.
$$
It follows from this estimate that
\begin{equation}\label{estx}
\int_{\R^n}|u_r(t_0,x+\Delta x)-u_r(t_0,x)|dx\le\omega^x(|\Delta x|),
\end{equation}
where $\omega^x(h)=\sup\limits_{|\Delta x|<h}\int_{\R^n}|u_0(x+\Delta x)-u_0(x)|dx$ is the continuity modulus of
$u_0$ in $L^1(\R^n)$. We then proceed as in \cite{Kr} to get a similar estimate for shifts of the time variable.
For the sake of completeness we provide the details. We choose an averaging kernel $\beta(y)\in C_0^1(\R^n)$ with the properties: $\beta(y)\ge 0$, $\supp\beta(y)\subset B_1(0)=\{ y\in\R^n | |y|\le 1 \}$, $\int_{\R^n}\beta(y)dy=1$. For a function $q(x)\in L^\infty(\R^n)$ we consider the corresponding averaged functions
$$q^h(x)=h^{-n}\int q(y)\beta((x-y)/h)dy, \quad h>0,$$
which are the convolutions $q*\beta^h(x)$, where $\beta^h(x)=h^{-n}\beta(x/h)$. It is clear that $q^h(x)\in C^1(\R^n)$ for each $h>0$, $\|q^h\|_\infty\le\|q\|_\infty$, and $q^h\to q$ as $h\to 0$ a.e. in $\R^n$. Moreover, since $\nabla q^h=q*\nabla\beta^h(x)$, we have
\begin{equation}\label{conder}
\|\nabla q^h\|_\infty\le \frac{c}{h}\|q\|_\infty, \quad c=\|\nabla_y\beta\|_1.
\end{equation}
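For completeness, we indicate the short computation behind (\ref{conder}): since $\nabla\beta^h(x)=h^{-n-1}\nabla_y\beta(x/h)$, we have
$$
|\nabla q^h(x)|=\left|\int q(y)\nabla\beta^h(x-y)dy\right|\le \|q\|_\infty\int|\nabla\beta^h(z)|dz=\frac{\|q\|_\infty}{h}\int_{\R^n}|\nabla_y\beta(y)|dy=\frac{c}{h}\|q\|_\infty.
$$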
Applying (\ref{ap1}) with $u=u_r$ to the test function $f=(\theta_\nu(t-t_0)-\theta_\nu(t-t_0-\Delta t))p(x)$, where
$t_0,\Delta t>0$, $p=p(x)\in C_0^1(\R^n)$, $\nu\in\N$, and passing to the limit as $\nu\to\infty$, we get
\begin{equation}\label{15}
\int_{\R^n} (u_r(t_0+\Delta t,x)-u_r(t_0,x))p(x)dx=\int_{(t_0,t_0+\Delta t)\times\R^n} \varphi_r(u_r)\cdot\nabla p\,dtdx.
\end{equation}
By Corollary~\ref{cor1}, $\|u_r\|_\infty\le M=\|u_0\|_\infty$ for every $r\in\N$. It follows from the coercivity assumption that there is $R>0$ such that $b(-R)<-M$, $b(R)>M$. All the more, $b_r(-R)<b(-R)<-M$, $b_r(R)>b(R)>M$ for all $r\in\N$.
This implies that $(b_r)^{-1}([-M,M])\subset (-R,R)$ and therefore for a.e. $(t,x)\in\Pi$
$$|\varphi_r(u_r)|=|g((b_r)^{-1}(u_r))|\le N\doteq\max_{|v|\le R} |g(v)|.$$
It now follows from (\ref{15}) that
\begin{equation}\label{16}
\left|\int_{\R^n} (u_r(t_0+\Delta t,x)-u_r(t_0,x))p(x)dx\right|\le N\|\nabla p\|_1\Delta t.
\end{equation}
Further, we make use of the following variant of Kruzhkov's lemma \cite[Lemma~1]{Kr} (for the sake of completeness, we provide it with a proof).
\begin{lemma}\label{lem2} Let $w(x)\in L^1(\R^n)$. Then for each $h>0$
$$
\int_{\R^n} ||w(x)|-w(x)(\sign w)^h(x)|dx\le 2\omega_w(h),
$$
where $\omega_w(h)=\sup\limits_{|\Delta x|<h}\int_{\R^n}|w(x+\Delta x)-w(x)|dx$ is the continuity modulus of
$w$ in $L^1(\R^n)$.
\end{lemma}
\begin{proof}
First, notice that for each $x,y\in\R^n$
\begin{align*}
||w(x)|-w(x)\sign w(y)|=||w(x)|-(w(x)-w(y))\sign w(y)-w(y)\sign w(y)|= \\||w(x)|-|w(y)|-(w(x)-w(y))\sign w(y)|\le \\
||w(x)|-|w(y)||+|w(x)-w(y)|\le 2|w(x)-w(y)|.
\end{align*}
With the help of above inequality we obtain
\begin{align*}
\int_{\R^n} ||w(x)|-w(x)(\sign w)^h(x)|dx= \\ \int_{\R^n} \left|\int_{\R^n} (|w(x)|-w(x)\sign w(x-y))\beta^h(y)dy\right|dx\le \\
\int_{\R^n} \int_{\R^n} ||w(x)|-w(x)\sign w(x-y)|\beta^h(y)dydx\le \\ \int_{\R^n}\int_{\R^n} 2|w(x)-w(x-y)|\beta^h(y)dydx=\\
2\int_{|y|\le h}\left(\int_{\R^n} |w(x)-w(x-y)|dx\right) \beta^h(y)dy\le2\omega_w(h),
\end{align*}
as was to be proved.
\end{proof}
As it readily follows from Lemma~\ref{lem2}, for any $\rho=\rho(x)\in C_0^1(\R^n)$
\begin{align}\label{17}
\left|\int_{\R^n} |w(x)|\rho(x)dx-\int_{\R^n}w(x)\rho(x)(\sign w)^h(x)dx\right|\le \nonumber\\
\int_{\R^n} ||w(x)|-w(x)(\sign w)^h(x)|\rho(x)dx\le 2\|\rho\|_\infty\omega_w(h).
\end{align}
We apply this relation to the function $w(x)=u_r(t_0+\Delta t,x)-u_r(t_0,x)$ for fixed $t_0,\Delta t>0$, $r\in\N$.
In view of estimate (\ref{estx}) for every $\Delta x\in\R^n$, $|\Delta x|<h$,
\begin{align*}
\int_{\R^n}|w(x+\Delta x)-w(x)|dx\le \int_{\R^n}|u_r(t_0,x+\Delta x)-u_r(t_0,x)|dx+ \\ \int_{\R^n}|u_r(t_0+\Delta t,x+\Delta x)-u_r(t_0+\Delta t,x)|dx\le 2\omega_x(h),
\end{align*}
so that $\omega_w(h)\le 2\omega^x(h)$. It follows from (\ref{17}), (\ref{16}), and (\ref{conder}) that
\begin{align}
\int_{\R^n} |w(x)|\rho(x)dx\le\left|\int_{\R^n}w(x)\rho(x)(\sign w)^h(x)dx\right|+4\|\rho\|_\infty\omega^x(h)=\nonumber\\
\left|\int_{\R^n}(u_r(t_0+\Delta t,x)-u_r(t_0,x))\rho(x)(\sign w)^h(x)dx\right|+4\|\rho\|_\infty\omega^x(h)\le \nonumber\\
N\|\nabla(\rho(x)(\sign w)^h(x))\|_1\Delta t+4\|\rho\|_\infty\omega^x(h)\le c_\rho(\Delta t/h+\omega^x(h)),
\end{align}
where $0<h<1$, and $c_\rho$ is a constant depending only on $\rho$. Since the left hand side of this estimate does not depend on $h$, we arrive at the estimate
\begin{equation}\label{estt}
\int_{\R^n} |u_r(t_0+\Delta t,x)-u_r(t_0,x)|\rho(x)dx\le c_\rho\omega^t(\Delta t),
\end{equation}
where $\displaystyle\omega^t(\Delta t)=\inf_{0<h<1}(\Delta t/h+\omega^x(h))$. Taking $h=(\Delta t)^{1/2}$, we find
$\omega^t(\Delta t)\le (\Delta t)^{1/2}+\omega^x((\Delta t)^{1/2})$ for all $\Delta t\in (0,1)$. Thus, $\omega^t(\Delta t)\to 0$ as $\Delta t\to 0$.
Both estimates (\ref{estx}), (\ref{estt}) are uniform with respect to $t_0>0$ and $r\in\N$. By the well-known compactness criterion, they imply pre-compactness of the sequence $u_r$ in $L^1_{loc}(\Pi)$. Therefore, passing to a subsequence, we can assume that $u_r\to u$ as $r\to\infty$ in $L^1_{loc}(\Pi)$. We conclude that all the requirements of Proposition~\ref{pro3} are satisfied (with the constant sequence $u_{r0}=u_0$), and by this proposition $u=u(t,x)$ is an e.s. of (\ref{1}), (\ref{ini}).
For more general initial functions $u_0(x)\in (c+L^1(\R^n))\cap L^\infty(\R^n)$, where $c\in\R$, one can make the change $\tilde u=u-c$. As is easy to verify, $u$ is an e.s. of (\ref{1}), (\ref{ini}) if and only if $\tilde u$ is an e.s. to the problem
$$
u_t+\div_x\varphi(c+u)=0, \quad u(0,x)=u_0(x)-c,
$$
corresponding to the parametrization $u=b(v)-c$, $\bar\varphi(c+u)\ni g(v)$.
The existence of such an e.s. has just been shown. This yields the existence of an e.s. to the original problem. Thus, we have proved the following result.
\begin{theorem}\label{th1}
For every initial function $u_0\in (c+L^1(\R^n))\cap L^\infty(\R^n)$, where $c\in\R$, there exists an e.s. of problem (\ref{1}), (\ref{ini}).
\end{theorem}
Concerning the uniqueness, it may fail even if $n=1$ and $u_0\in L^1(\R)\cap L^\infty(\R)$.
\begin{example}\label{ex1}
We will study the problem
\begin{equation}\label{18}
u_t+H(u)_x=0, \quad u(0,x)=u_0(x)\doteq\frac{1}{1+x^2},
\end{equation}
where $H(u)=\sign^+ u$ is the Heaviside function. The natural solution of this problem is the stationary solution
$u(t,x)\equiv u_0(x)$. To construct other e.s., we choose an appropriate continuous parametrization of the flux
(it corresponds to (\ref{ext}) if we set $H(0)=1/2$)
$$u=b(v)=\left\{\begin{array}{lcr} v & , & v<0, \\ 0 & , & 0\le v\le 1, \\ v-1 & , & v>1, \end{array}\right. \quad
\tilde H(u)\ni g(v)=\left\{\begin{array}{lcr} 0 & , & v<0, \\ v & , & 0\le v\le 1, \\ 1 & , & v>1, \end{array}\right.$$
where $\tilde H(u)=H(u)$, $u\not=0$, $\tilde H(0)=[0,1]$.
\end{example}
We are going to find an e.s. of (\ref{18}) in the form
$$
u(t,x)=\left\{\begin{array}{lcr} 1/(1+x^2) & , & x>x(t), \\ 0 & , & x<x(t), \end{array}\right.
$$
where $x(t)\in C^1((\alpha,\beta))$, $0\le\alpha<\beta\le+\infty$; $x'(t)>0$, $\lim\limits_{t\to\alpha+} x(t)=-\infty$, $\lim\limits_{t\to\beta-} x(t)=+\infty$ if $\beta<+\infty$. The corresponding measure valued e.s. $\nu_{t,x}$ is assumed to be regular, i.e., it is an e.s. $v=v(t,x)\in L^\infty(\Pi)$ of the conservation law
$b(v)_t+g(v)_x=0$ such that $u=b(v)$. In particular, $v(t,x)=1+1/(1+x^2)$ if $x>x(t)$ and $v(t,x)\in [0,1]$ if $x<x(t)$.
Since in the latter case the equation $b(v)_t+g(v)_x=0$ reduces to $v_x=0$ in the sense of distributions, we conclude that $v$ does not depend on $x$, i.e., $v=v(t)$ in the domain $x<x(t)$. As is easy to realize, both the Rankine-Hugoniot and the Oleinik conditions should be fulfilled on the discontinuity line $x=x(t)$.
They mean, respectively, that $x'(t)$ coincides with the slope of the chord connecting the points $(b(v_-),g(v_-))$, $(b(v_+),g(v_+))$ of the graph of the flux function, and that this graph lies above the indicated chord when $v$ runs between
$v_-=\lim\limits_{x\to x(t)-} v(t,x)=v(t)$ and $v_+=\lim\limits_{x\to x(t)+} v(t,x)=1+1/(1+x(t)^2)>v_-$.
Notice that the Oleinik condition is automatically satisfied while the Rankine-Hugoniot condition provides the differential equation $x'(t)=(1+x^2)(1-v(t))$. In particular, taking $v(t)\equiv 0$ and solving the above equation, we obtain the discontinuity curve $x=x(t)=\tan(t-t_0)$, $t_0-\pi/2<t<t_0+\pi/2$ with the required properties for all $t_0\ge\pi/2$. Varying $v(t)$, we can construct many other e.s. For example, choosing $v(t)=t^2/(1+t^2)$ and a particular solution $x=-1/t$ of the differential equation $x'(t)=(1+x^2)(1-v(t))=(1+x^2)/(1+t^2)$, we find the e.s. $u=1/(1+x^2)$ if $xt>-1$, $u=0$ if $xt<-1$.
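Both explicit solutions are readily verified: for $v(t)\equiv 0$ we have $\frac{d}{dt}\tan(t-t_0)=1+\tan^2(t-t_0)$, in accordance with the equation $x'(t)=1+x^2$, while for $v(t)=t^2/(1+t^2)$ the function $x(t)=-1/t$ satisfies
$$
x'(t)=\frac{1}{t^2}=\frac{1+1/t^2}{1+t^2}=\frac{1+x^2}{1+t^2}.
$$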
We conclude that an e.s. of (\ref{18}) is not unique. In the case of a merely continuous flux vector, an e.s. of the problem
(\ref{1}), (\ref{ini}) may also be non-unique, but only if $n>1$, see \cite{KrPa1,KrPa2}.
\section{Existence of the largest and the smallest e.s.}
To construct the largest e.s., we choose a strictly decreasing sequence $d_r>d=\esssup u_0(x)$, $r\in\N$, and the corresponding sequence $u_r$ of e.s. of (\ref{1}), (\ref{ini}) with initial functions
$$
u_{0r}(x)=\left\{\begin{array}{lcr} u_0(x) & , & |x|\le r, \\ d_r & , & |x|>r. \end{array}\right.
$$
Since $u_{0r}\in (d_r+L^1(\R^n))\cap L^\infty(\R^n)$, an e.s. $u_r$ indeed exists by Theorem~\ref{th1}. Observe that $\forall r\in\N$
$
u_0(x)\le u_{0r+1}(x)\le u_{0r}(x)\le d_r \ \mbox{ a.e. on } \R^n, \mbox{ and }
\lim\limits_{r\to\infty} u_{0r}(x)=u_0(x).
$
Denote $\delta_r=d_r-d_{r+1}>0$. By the maximum principle $u_r\le d_r$ for all $r\in\N$. Therefore,
$$
\{(t,x) | u_{r+1}(t,x)\ge u_r(t,x)\}\subset\{(t,x) | d_{r+1}\ge u_r(t,x)\}=\{(t,x) | d_r-u_r(t,x)\ge \delta_r\}.
$$
By Chebyshev's inequality and Corollary~\ref{cor2} for each $T>0$
\begin{align*}
\meas\{ \ (t,x)\in (0,T)\times\R^n \ | \ u_{r+1}(t,x)\ge u_r(t,x) \ \}\le \\ \meas\{ \ (t,x)\in (0,T)\times\R^n \ | \ d_r-u_r(t,x)\ge \delta_r \ \}\le \\
\frac{1}{\delta_r}\int_{(0,T)\times\R^n}|d_r-u_r|dtdx\le \frac{T}{\delta_r}\int_{\R^n}|d_r-u_{0r}|dx= \frac{T}{\delta_r}\int_{|x|<r}(d_r-u_0)dx<+\infty.
\end{align*}
We see that the assumption of Proposition~\ref{pro2}, applied to the e.s. $u_{r+1}$ and $u_r$, is satisfied, and by this proposition $u_{r+1}\le u_r$ a.e. on $\Pi$. Since
$u_{0r}\ge u_0\ge a\doteq\essinf u_0(x)$, we have $u_r\ge a$ by the minimum principle. Hence, the sequence
$$
u_r(t,x)\mathop{\to}_{r\to\infty} u_+(t,x)\doteq\inf_{r>0} u_r(t,x)
$$
a.e. on $\Pi$, as well as in $L^1_{loc}(\Pi)$. By Proposition~\ref{pro3} the limit function $u_+$ is an e.s. of original problem (\ref{1}), (\ref{ini}).
Let us demonstrate that $u_+$ is the largest e.s. of this problem.
For that, we choose an arbitrary e.s. $u=u(t,x)$ of (\ref{1}), (\ref{ini}). By the maximum principle,
$u\le d$. Therefore, for each $r\in\N$
$$
\{ (t,x)\in \Pi_T=(0,T)\times\R^n | u\ge u_r\}\subset\{(t,x)\in \Pi_T | d\ge u_r\}= \{ (t,x)\in \Pi_T | d_r-u_r\ge d_r-d\}$$
and consequently
\begin{align*}
\meas\{(t,x)\in \Pi_T | u\ge u_r\}\le\frac{1}{d_r-d}\int_{\Pi_T}|d_r-u_r|dtdx\le \frac{T}{d_r-d}\int_{|x|<r}(d_r-u_0)dx<+\infty,
\end{align*}
where we use again Chebyshev's inequality and Corollary~\ref{cor2}.
Hence, the requirement of Proposition~\ref{pro2}, applied to the e.s. $u$ and $u_r$, is satisfied and, by the comparison principle,
the inequality $u_0\le u_{0r}$ implies that $u\le u_r$ a.e. on $\Pi$. Passing to the limit as $r\to\infty$, we conclude that $u\le u_+$ a.e. on $\Pi$. Hence, $u_+$ is the largest e.s.
The smallest e.s. $u_-$ can be found as $u_-=-\tilde u_+$, where $\tilde u_+$ is the largest e.s. to the problem (\ref{1-}).
We have established the existence of the largest and the smallest e.s. Let us demonstrate that these e.s. satisfy the stability and monotonicity properties with respect to initial data.
\begin{theorem}\label{th2}
Let $u_{1+},u_{2+}\in L^\infty(\Pi)$ be the largest e.s. of (\ref{1}), (\ref{ini}) with initial functions $u_{10}$, $u_{20}$, respectively. Then for a.e. $t>0$
$$
\int_{\R^n} (u_{1+}(t,x)-u_{2+}(t,x))^+dx\le\int_{\R^n} (u_{10}(x)-u_{20}(x))^+dx.
$$
In particular, if $u_{10}\le u_{20}$ a.e. in $\R^n$ then $u_{1+}\le u_{2+}$ a.e. in $\Pi$.
\end{theorem}
\begin{proof}
We choose a decreasing sequence $d_r>d=\max(\esssup u_{10}(x),\esssup u_{20}(x))$, $r\in\N$, and define the following sequences of initial functions
$$
u_{1r}^0(x)=\left\{\begin{array}{lcr} u_{10}(x) & , & |x|\le r, \\ d_r & , & |x|>r, \end{array}\right. \quad
u_{2r}^0(x)=\left\{\begin{array}{lcr} u_{20}(x) & , & |x|\le r, \\ d_r+1 & , & |x|>r. \end{array}\right.
$$
Let $u_{1r}=u_{1r}(t,x)$, $u_{2r}=u_{2r}(t,x)$ be e.s. of problem (\ref{1}), (\ref{ini}) with initial functions $u_{1r}^0$,
$u_{2r}^0$, respectively. As was demonstrated above, the sequences $u_{1r}$, $u_{2r}$ decrease and converge in $L^1_{loc}(\Pi)$ to the largest e.s. $u_{1+}$, $u_{2+}$, respectively. By the maximum principle, $u_{1r}\le d_r$ a.e. in $\Pi$ and therefore for each $T>0$
\begin{align*}
\{ (t,x)\in\Pi_T | u_{1r}(t,x)\ge u_{2r}(t,x) \}\subset\{ (t,x)\in\Pi_T | d_r\ge u_{2r}(t,x) \}\subset \\ \{ (t,x)\in\Pi_T | d_r+1-u_{2r}(t,x)\ge 1 \}.
\end{align*}
By Chebyshev's inequality and Corollary~\ref{cor2},
\begin{align*}
\meas\{ (t,x)\in\Pi_T | u_{1r}(t,x)\ge u_{2r}(t,x) \}\le\meas\{ (t,x)\in\Pi_T | d_r+1-u_{2r}(t,x)\ge 1 \}\le \\ \int_{\Pi_T}|d_r+1-u_{2r}(t,x)|dtdx\le T\int_{\R^n}|d_r+1-u_{2r}^0(x)|dx= \\ T\int_{|x|<r}(d_r+1-u_{20}(x))dx<\infty,
\end{align*}
which allows us to apply Proposition~\ref{pro2} and conclude that for a.e. $t>0$ and all $r\in\N$
\begin{align*}
\int_{\R^n} (u_{1r}(t,x)-u_{2r}(t,x))^+dx\le \int_{\R^n}(u_{1r}^0(x)-u_{2r}^0(x))^+dx=\\
\int_{|x|<r}(u_{10}(x)-u_{20}(x))^+dx\le \int_{\R^n}(u_{10}(x)-u_{20}(x))^+dx.
\end{align*}
To complete the proof, it remains only to pass to the limit as $r\to\infty$ in the above relation with the help of Fatou's lemma.
\end{proof}
\begin{corollary}\label{cor3} With the notation of Theorem~\ref{th2}, for a.e. $t>0$
$$
\int_{\R^n} |u_{1+}(t,x)-u_{2+}(t,x)|dx\le\int_{\R^n} |u_{10}(x)-u_{20}(x)|dx.
$$
\end{corollary}
\begin{proof}
By Theorem~\ref{th2} we find that for a.e. $t>0$
\begin{align*}
\int_{\R^n} (u_{1+}(t,x)-u_{2+}(t,x))^+dx\le\int_{\R^n} (u_{10}(x)-u_{20}(x))^+dx, \\
\int_{\R^n} (u_{2+}(t,x)-u_{1+}(t,x))^+dx\le\int_{\R^n} (u_{20}(x)-u_{10}(x))^+dx.
\end{align*}
Putting these inequalities together, we derive the desired result.
\end{proof}
The analogues of Theorem~\ref{th2} and Corollary~\ref{cor3} for the smallest e.s. follow from the corresponding results for the largest e.s. of the problem (\ref{1-}) after the change $u\to -u$.
Let us return to the problem (\ref{18}) from Example~\ref{ex1} and find the largest and the smallest e.s. explicitly.
First, we demonstrate that the largest e.s. $u_+$ coincides with the stationary solution $u_0=1/(1+x^2)$. Since the e.s. $u_+$ is the largest one, $u_+\ge u_0$. Further, by Proposition~\ref{pro1}
for a.e. $t>0$
$$
\int_{\R} u_+(t,x)dx=\int_{\R} (u_+(t,x)-0)^+dx\le\int_\R (u_0(x)-0)^+dx=\int_\R u_0(x)dx,
$$
which implies the inequality
$$
\int_{\R} (u_+(t,x)-u_0(x))dx\le 0.
$$
Since $u_+\ge u_0$, we conclude that $u_+=u_0(x)$ a.e. in $\Pi$, as was claimed.
Let us show that the smallest e.s. of (\ref{18}) is given by the expression
$$
u_-(t,x)=\tilde u(t,x)\doteq\left\{\begin{array}{lcr} 1/(1+x^2) & , & x>\tan(t-\pi/2), \\ 0 & , & x<\tan(t-\pi/2),
\end{array}\right.
$$
and we agree that $\tilde u\equiv 0$ for $t\ge\pi$. As was shown in Example~\ref{ex1}, $\tilde u$ is indeed an e.s. of (\ref{18}). Therefore, for the smallest e.s. we have $u_-\le\tilde u$. By the minimum principle, we also have $u_-\ge 0$. A direct calculation shows that
\begin{equation}\label{e1}
\int \tilde u(t,x)dx=\int_{\tan(t-\pi/2)}^{+\infty}\frac{dx}{1+x^2}=(\pi-t)^+.
\end{equation}
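The last equality in (\ref{e1}) is elementary: for $0<t<\pi$ we have $t-\pi/2\in(-\pi/2,\pi/2)$, so
$$
\int_{\tan(t-\pi/2)}^{+\infty}\frac{dx}{1+x^2}=\frac{\pi}{2}-\arctan\tan(t-\pi/2)=\frac{\pi}{2}-\left(t-\frac{\pi}{2}\right)=\pi-t,
$$
while $\tilde u(t,\cdot)\equiv 0$ for $t\ge\pi$, which gives $(\pi-t)^+$.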
Observe that $(u_-)_t+H(u_-)_x=0$ in $\D'(\Pi)$, where we have to choose $H(0)=0$ because $v=v(t)\equiv 0$ for $x<\tan(t-\pi/2)$, see Example~\ref{ex1}. This easily implies that for a.e. $r>0$
$$
\frac{d}{dt}\int_{-r}^r u_-(t,x)dx=H(u_-(t,-r))-H(u_-(t,r))\ge -1 \ \mbox{ in } \D'(\R),
$$
which, in turn, implies the estimate
$
\int_{-r}^r u_-(t,x)dx\ge\int_{-r}^r u_0(x)dx-t.
$
Passing in this estimate to the limit as $r\to+\infty$, we find that
$
\int u_-(t,x)dx\ge \int u_0(x)dx-t=\pi-t.
$
Taking also into account that $u_-\ge 0$, we see that for a.e. $t>0$
$$
\int u_-(t,x)dx\ge (\pi-t)^+.
$$
Comparing this inequality with (\ref{e1}), we get
$$
\int (\tilde u(t,x)-u_-(t,x))dx\le 0
$$
for a.e. $t>0$. Since $\tilde u\ge u_-$, this implies the desired identity $u_-=\tilde u(t,x)$.
\medskip
At the end of this section we present the example promised in the Introduction, which shows the necessity of the multi-valued extension of the flux.
\begin{example}\label{ex2}
Let $n=1$ and $\chi_0(u)$ be a function that is different from zero only at the zero point, where it equals $1$, i.e. $\chi_0(u)$ is the indicator function of the singleton $\{0\}$. We consider the Riemann problem
$$
u_t+(\chi_0(u))_x=0, \quad u(0,x)=H(x),
$$
where $H(x)$ is the Heaviside function.
Putting the entropy relation
$$
|u-k|_t+[\sign(u-k)(\chi_0(u)-\chi_0(k))]_x\le 0
$$
together with the identities
$$
\pm\left((u-k)_t+(\chi_0(u)-\chi_0(k))_x\right)=0,
$$
we get that for each $k\in\R$
\begin{equation}\label{ex3}
((u-k)^\pm)_t+[\sign^\pm(u-k)(\chi_0(u)-\chi_0(k))]_x\le 0 \ \mbox{ in } \D'(\Pi).
\end{equation}
It follows from this relation that $((u-1)^+)_t\le 0$, $((u+\varepsilon)^-)_t\le 0$ in $\D'(\Pi)$ for each $\varepsilon>0$ and since $0\le u(0,x)\le 1$, we find that $(u-1)^+=(u+\varepsilon)^-=0$, that is, $-\varepsilon\le u\le 1$ a.e. in $\Pi$. In view of arbitrariness of $\varepsilon>0$, we see that $0\le u\le 1$ a.e. in $\Pi$. It again follows from (\ref{ex3}) that $((u-\varepsilon)^+)_t\le 0$ in $\D'(\Pi)$ for every $\varepsilon>0$. This implies that $(u-\varepsilon)^+\le (H(x)-\varepsilon)^+=0$ a.e. in the quarter-plane $t>0$, $x<0$. Since $\varepsilon>0$ is arbitrary, we conclude that $u(t,x)=0$ in this quarter-plane. Now, we will demonstrate that $u=1$ a.e. in the quarter-plane $t>0$, $x>0$.
For that, we apply the relation $(1-u)_t-\chi_0(u)_x=0$ to the test function $f=p(\min(R+T-t-x,x))h(t)$,
where $T>0$, $R>2$, $p(v)\in C^1(\R)$ is a function with the properties $p'\ge 0$, $p(v)=0$ for $v\le 0$, $p(v)>0$ for $v>0$, $p(v)=1$ for $v\ge 1$; $h(t)\in C_0^1((0,T))$, $h\ge 0$ (notice that $p\equiv 1$ in a neighborhood of the singular line $x=(R+T-t)/2$, $t<T$, of the function $\min(R+T-t-x,x)$, which implies that $f\in C_0^1(\Pi)$). As a result, we get
\begin{align}\label{ex4}
\int_\Pi (1-u)ph'(t)dtdx+\int_{x>R+T-t-x}(-(1-u)+\chi_0(u))p'hdtdx+\nonumber\\ \int_{x<R+T-t-x}(-\chi_0(u))p'hdtdx=0.
\end{align}
Observing that $0\le\chi_0(u)\le 1-u$ for $u=u(t,x)\in [0,1]$, and that $p'=p'(\min(R+T-t-x,x))\ge 0$, we find that
the last two integrals in (\ref{ex4}) are non-positive and therefore for all $h=h(t)\in C_0^1((0,T))$, $h\ge 0$
$$
\int_0^T\left(\int_{\R^n}(1-u)p(\min(R+T-t-x,x))dx\right)h'(t)dt=\int_\Pi (1-u)ph'(t)dtdx\ge 0.
$$
This means that
$$
\frac{d}{dt}\int_{\R^n}(1-u)p(\min(R+T-t-x,x))dx\le 0 \ \mbox{ in } \D'((0,T)).
$$
Taking into account the initial condition, we find that for a.e. $t\in (0,T)$
$$
\int_{\R^n}(1-u)p(\min(R+T-t-x,x))dx\le \int_{\R^n}(1-u_0(x))p(\min(R+T-x,x))dx=0
$$
since $u_0(x)=1$ for $x>0$ while $p(\min(R+T-x,x))=p(x)=0$ for $x\le 0$. In the limit as $R\to+\infty$, this relation implies that for a.e. $t\in (0,T)$
$$
\int_{\R^n}(1-u(t,x))p(x)dx=0.
$$
Since $p(x)>0$ for $x>0$, and $T>0$ is arbitrary, we conclude that $u(t,x)=1$ for a.e. $(t,x)\in\Pi$, $x>0$. We have established that our solution is $u=H(x)$. But this function is not even a weak solution of our equation because the Rankine-Hugoniot relation $0=\chi_0(1)=\chi_0(0)=1$ is violated on the shock line $x=0$. Hence, our Riemann problem has no e.s. in the Kruzhkov sense. As we already know, there exists an e.s. of our problem in the sense of Definition~\ref{def1},
corresponding to the multi-valued extension $\tilde\chi_0(0)=[0,1]$ of the flux.
The corresponding continuous parametrization can be given by the functions
$$
u=b(v)=\left\{\begin{array}{lcr} v+1 & , & v<-1, \\ 0 & , & -1\le v\le 1, \\ v-1 & , & v>1, \end{array}\right. \quad
\tilde\chi_0(u)\ni g(v)=\left\{\begin{array}{lcr} 0 & , & |v|>1, \\ 1-|v| & , & |v|\le 1. \end{array}\right.
$$
Let us show that the stationary solution $u=H(x)$ is an e.s. of our problem.
The corresponding e.s. $v=v(t,x)$
of the equation $b(v)_t+g(v)_x=0$ can be chosen regular. For $x>0$ it is uniquely determined by the requirement $b(v)=u=1$ and therefore $v=2$. In the case $x<0$ one can choose $v\equiv -1$ or $v\equiv 1$ (it is even possible to take the measure valued function $\nu_{t,x}(v)=(1-\alpha)\delta(v+1)+\alpha\delta(v-1)$, $\alpha=\alpha(t,x)\in [0,1]$). By construction, both the Rankine-Hugoniot and the Oleinik conditions are satisfied on the shock line $x=0$. Hence $H(x)=b(v)$ is the required e.s.
\end{example}
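For the reader's convenience, the admissibility check at the end of the example can be written out explicitly (a direct computation with the parametrization $b$, $g$ introduced above):
$$
b(\pm 1)=0, \quad b(2)=1, \qquad g(\pm 1)=1-|{\pm 1}|=0, \quad g(2)=0,
$$
so that on the stationary shock line $x=0$ (speed $s=0$) the Rankine-Hugoniot relation $s\,(b(2)-b(\pm 1))=g(2)-g(\pm 1)$ reduces to $0=0$, while for the Oleinik condition one checks that on the segment between the shock states ($[1,2]$ or $[-1,2]$) the graph of $g$ lies above the chord connecting the endpoint values, which is immediate since $g\ge 0$ and $g(\pm 1)=g(2)=0$.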
\section{The case of periodic initial functions}
Let us study the particular case when the initial function $u_0(x)$ is periodic, $u_0(x+e)=u_0(x)$ a.e. in $\R^n$ for all $e\in L$, where $L\subset\R^n$ is a lattice of periods. Without loss of generality we may suppose that $L$ is the standard lattice $\Z^n$.
\begin{theorem}\label{th3}
The largest e.s. $u_+$ and the smallest e.s. $u_-$ of the problem (\ref{1}), (\ref{ini}) are space-periodic and coincide: $u_+=u_-$.
\end{theorem}
\begin{proof}
Let $e\in L$. In view of periodicity of the initial function it is obvious that $u(t,x+e)$ is an e.s. of (\ref{1}), (\ref{ini}) if and only if $u(t,x)$ is an e.s. of the same problem. Therefore, $u_+(t,x+e)$ is the largest e.s.
of (\ref{1}), (\ref{ini}) together with $u_+$. By uniqueness, $u_+(t,x+e)=u_+(t,x)$ a.e. on $\Pi$ for all $e\in L$, that is, $u_+$ is a space-periodic function. In the same way we prove the space periodicity of the smallest e.s. $u_-$.
Let $\nu_{t,x}^\pm(v)$ be measure valued e.s. of (\ref{1'}) corresponding to the e.s. $u_\pm$.
In view of (\ref{weak}), we have
\begin{equation}\label{19}
(u_+-u_-)_t+\div_x\int g(v)d(\nu_{t,x}^+-\nu_{t,x}^-)(v)=0 \ \mbox{ in } \D'(\Pi).
\end{equation}
Let $\alpha(t)\in C_0^1(\R_+)$, $\beta(y)\in C_0^1(\R^n)$, $\displaystyle\int_{\R^n}\beta(y)dy=1$. Applying (\ref{19}) to the test function $k^{-n}\alpha(t)\beta(x/k)$, with $k\in\N$, we arrive at the relation
\begin{equation}\label{20}
k^{-n}\int_\Pi(u_+-u_-)\alpha'(t)\beta(x/k)dtdx+k^{-n-1}\int_\Pi Q\cdot\nabla_y\beta(x/k)\alpha(t)dtdx=0,
\end{equation}
where the vector $Q=Q(t,x)=\int g(v)d(\nu_{t,x}^+-\nu_{t,x}^-)(v)\in L^\infty(\Pi,\R^n)$. We observe that
\begin{align*}
k^{-n-1}\left|\int_\Pi Q\cdot\nabla_y\beta(x/k)\alpha(t)dtdx\right|\le k^{-n-1}\|Q\|_\infty\int_\Pi |\nabla_y\beta(x/k)|\alpha(t)dtdx= \\ k^{-1}\|Q\|_\infty\int_\Pi |\nabla_y\beta(y)|\alpha(t)dtdy=c/k, \quad c=\const.
\end{align*}
Therefore, in the limit as $k\to\infty$ the second integral in (\ref{20}) disappears while (see for example \cite[Lemma~2.1]{PaJDE})
$$
k^{-n}\int_\Pi(u_+-u_-)\alpha'(t)\beta(x/k)dtdx\to\int_{\R_+\times\T^n}(u_+-u_-)(t,x)\alpha'(t)dtdx,
$$
where $\T^n=[0,1)^n$ is the periodicity cell (or, what is the same, the torus $\R^n/\Z^n$). Hence, after the passage to the limit we get
$$
\int_{\R_+\times\T^n}(u_+-u_-)(t,x)\alpha'(t)dtdx=0 \quad \forall \alpha(t)\in C_0^1(\R_+).
$$
This identity means that
$$
\frac{d}{dt}\int_{\T^n}(u_+(t,x)-u_-(t,x))dx=0 \ \mbox{ in } \D'(\R_+)
$$
and implies, with the help of initial condition (\ref{ini1}), that for a.e. $t>0$
$$
\int_{\T^n}(u_+(t,x)-u_-(t,x))dx=\int_{\T^n}(u_0(x)-u_0(x))dx=0.
$$
Since $u_+\ge u_-$, we conclude that $u_+=u_-$ a.e. on $\Pi$.
\end{proof}
Since any e.s. of (\ref{1}), (\ref{ini}) lies between $u_-$ and $u_+$, we deduce the following
\begin{corollary}\label{cor4}
An e.s. of (\ref{1}), (\ref{ini}) is unique and coincides with $u_+$.
\end{corollary}
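The key step in the proof of Theorem~\ref{th3}, the conservation of the spatial mean, has an exact discrete counterpart: for any conservative scheme on a periodic grid the flux differences telescope over the periodicity cell, so the discrete mean is preserved. A minimal numerical sketch (the Lax-Friedrichs scheme and the Burgers flux $\varphi(u)=u^2/2$ are our own illustrative choices, not objects from this paper):

```python
import numpy as np

def lax_friedrichs_step(u, dt, dx, flux, alpha):
    """One step of a conservative Lax-Friedrichs scheme on a periodic grid."""
    up = np.roll(u, -1)                                       # u_{j+1}
    F = 0.5 * (flux(u) + flux(up)) - 0.5 * alpha * (up - u)   # numerical flux F_{j+1/2}
    return u - dt / dx * (F - np.roll(F, 1))                  # conservative update

flux = lambda u: 0.5 * u ** 2      # Burgers flux (illustrative choice)
N = 400
dx = 1.0 / N
x = dx * np.arange(N)
u = 0.3 + np.sin(2 * np.pi * x)    # 1-periodic initial data
mean0 = u.mean()
dt = 0.4 * dx                      # CFL: alpha * dt / dx = 0.52 <= 1
for _ in range(200):
    u = lax_friedrichs_step(u, dt, dx, flux, alpha=1.3)
# The interface fluxes telescope over the periodic cell, so the
# discrete mean of u is preserved up to rounding error.
print(abs(u.mean() - mean0))
```

After 200 steps the mean is unchanged to rounding error, mirroring the identity $\frac{d}{dt}\int_{\T^n}u\,dx=0$ used in the proof.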
\section{Weak completeness of e.s.}
In the one-dimensional case $n=1$
we consider a bounded sequence $u_r=u_r(t,x)\in L^\infty(\Pi)$, $r\in\N$, of e.s. of equation (\ref{1}) (without a prescribed initial condition), which are periodic with respect to the spatial variable, $u_r(t,x+1)=u_r(t,x)$ a.e. in $\Pi$.
Without loss of generality we can suppose that this sequence converges weakly-$*$ in $L^\infty(\Pi)$ to a function $u=u(t,x)$. It is clear that this function is $x$-periodic. The main result of this section is the following
\begin{theorem}\label{th4}
The limit function $u(t,x)$ is an e.s. of problem (\ref{1}), (\ref{ini}) with some periodic initial function $u_0(x)$.
\end{theorem}
Applying this theorem to the constant sequence $u_r=u$, we obtain that any e.s. of equation (\ref{1}) admits a strong trace $u_0$ at the initial line $t=0$ in the sense of relation (\ref{ini1}). In the case of a continuous flux function Theorem~\ref{th4} was proved in \cite{PanSIMA} and was even extended in \cite{PanSIMA1} to the case of a degenerate parabolic equation $u_t+\varphi(u)_x=A(u)_{xx}$. We underline that the statement of Theorem~\ref{th4} is purely one-dimensional: in the case $n>1$ it is no longer valid, see \cite[Remark~3]{PanSIMA}. To prove Theorem~\ref{th4}, we will follow the scheme of the paper \cite{PanSIMA}. First of all we modify the technical lemma \cite[Lemma~2.3]{PanSIMA}.
\begin{lemma}\label{lem3}
Let $\nu$ be a Borel measure with compact support in $\R$ and let $p(v)\in C(\R)$ be a function such that
\begin{equation}\label{21}
\int\sign^+(v-k)(p(v)-p(k))d\nu(v)=0 \quad \forall k\in [a,b],
\end{equation}
where $a<b=\max\supp\nu$. Then $p(v)\equiv\const$ on $[a,b]$.
\end{lemma}
\begin{proof}
We choose values $k_1,k_2\in [a,b]$ such that $p(k_1)=\min\limits_{[a,b]} p(v)$, $p(k_2)=\max\limits_{[a,b]} p(v)$. If $p(k_1)<p(b)$ then $k_1<b$. Taking $k=k_1$, we find that the integrand in (\ref{21}) is nonnegative, and strictly positive on an interval $(b-\delta,b]$, $\delta>0$. Since $b=\max\supp\nu$, we have $\nu((b-\delta,b])>0$, so the integral in (\ref{21}) is strictly positive, which contradicts this condition. Hence, $p(k_1)=p(b)$. Similarly, assuming that $p(k_2)>p(b)$ and taking $k=k_2$ in (\ref{21}), we come to a contradiction. Thus, $p(k_2)=p(k_1)=p(b)$, that is,
$\min\limits_{[a,b]} p(v)=\max\limits_{[a,b]} p(v)$. We conclude that $p(v)\equiv\const$ on $[a,b]$.
\end{proof}
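A simple concrete instance (our own illustration, not used in the sequel) may clarify the mechanism of the lemma. Take the two-point measure
$$
\nu=\alpha\delta(v-a)+(1-\alpha)\delta(v-b), \qquad 0\le\alpha<1, \quad a<b=\max\supp\nu.
$$
For every $k\in (a,b)$ condition (\ref{21}) reads
$$
\int\sign^+(v-k)(p(v)-p(k))d\nu(v)=(1-\alpha)(p(b)-p(k))=0,
$$
so that $p(k)=p(b)$ for all $k\in (a,b)$ and, by continuity, $p\equiv p(b)$ on the whole segment $[a,b]$.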
\begin{corollary}\label{cor5}
Suppose that
\begin{equation}\label{21a}
\int\sign^-(v-k)(p(v)-p(k))d\nu(v)=0 \quad \forall k\in [a,b],
\end{equation}
where $a=\min\supp\nu<b$. Then $p(v)\equiv\const$ on $[a,b]$.
\end{corollary}
\begin{proof}
After the change $v\to -v$, $k\to -k$, requirement (\ref{21a}) reduces to the following one: $\forall k\in [-b,-a]$
$$
\int\sign^+(v-k)(p(-v)-p(-k))d\tilde\nu(v)=
-\int\sign^-(-v+k)(p(-v)-p(-k))d\tilde\nu(v)=0,
$$
where $\tilde\nu$ is the push-forward measure $l^*\nu$ under the map $l(v)=-v$. Notice that
$-a=\max\supp\tilde\nu$. By Lemma~\ref{lem3} we conclude that $p(-v)\equiv\const$ on $[-b,-a]$, which is equivalent to the desired statement.
\end{proof}
Let $\nu_{t,x}^r$, $r\in\N$, be a measure valued e.s. of (\ref{1'}) corresponding to the e.s. $u_r$. Then the sequence $\nu_{t,x}^r$, $r\in\N$, is bounded and, by Theorem~\ref{thTa}, passing to a subsequence if necessary, we can suppose that this sequence converges weakly as $r\to\infty$ to a bounded measure valued function
$\nu_{t,x}\in\MV(\Pi)$. Since $b^*\nu_{t,x}^r(u)=\delta(u-u_r(t,x))$, for each $p(u)\in C(\R)$
$$
p(u_r)=\int p(b(v))d\nu_{t,x}^r(v)\mathop{\to}_{r\to\infty}\int p(b(v))d\nu_{t,x}(v).
$$
This relation implies that the push-forward measure $\bar\nu_{t,x}=b^*(\nu_{t,x})(u)$ is the limit measure valued function for the sequence $u_r$ (in the sense of Theorem~\ref{thT}). In particular, this measure valued function is space-periodic, $\bar\nu_{t,x+1}=\bar\nu_{t,x}$ for a.e. $(t,x)\in\Pi$. Notice that, in correspondence with (\ref{pr2}), the weak limit function $u(t,x)=\int ud\bar\nu_{t,x}(u)=\int b(v)d\nu_{t,x}(v)$.
Passing to the limit as $r\to\infty$ in the entropy relation
$$
\int_\Pi \left[\int|b(v)-b(k)|d\nu_{t,x}^r(v)f_t+\int\sign(v-k)(g(v)-g(k))d\nu_{t,x}^r(v)f_x\right]dtdx\ge 0,
$$
$k\in\R$, $f=f(t,x)\in C_0^1(\Pi)$, $f\ge 0$, we obtain the relation
$$
\int_\Pi \left[\int|b(v)-b(k)|d\nu_{t,x}(v)f_t+\int\sign(v-k)(g(v)-g(k))d\nu_{t,x}(v)f_x\right]dtdx\ge 0,
$$
which shows that $\nu_{t,x}$ is an e.s. of (\ref{1'}).
Using compensated compactness arguments, we establish one more important property, formulated below, of the limit measure valued e.s. $\nu_{t,x}$. We will consider the more general case of equations
\begin{equation}\label{g1}
\varphi_0(v)_t+\varphi_1(v)_x=0,
\end{equation}
where $\varphi_0(v),\varphi_1(v)$ are arbitrary continuous functions.
A measure valued e.s. $\nu_{t,x}\in\MV(\Pi)$ of this equation is characterized by the usual Kruzhkov entropy relation: for all $k\in\R$
\begin{equation}\label{gen}
\frac{\partial}{\partial t}\int\sign(v-k)(\varphi_0(v)-\varphi_0(k))d\nu_{t,x}(v)+\frac{\partial}{\partial x}\int\sign(v-k)(\varphi_1(v)-\varphi_1(k))d\nu_{t,x}(v)\le 0
\end{equation}
in $\D'(\Pi)$. Taking $k=\pm R$, $R\ge\|\nu_{t,x}\|_\infty$, we derive the identity
\begin{align*}
\frac{\partial}{\partial t}\int (\varphi_0(v)-\varphi_0(k))d\nu_{t,x}(v)+\frac{\partial}{\partial x}\int (\varphi_1(v)-\varphi_1(k))d\nu_{t,x}(v)= \\
\frac{\partial}{\partial t}\int \varphi_0(v)d\nu_{t,x}(v)+\frac{\partial}{\partial x}\int\varphi_1(v)d\nu_{t,x}(v)=0 \ \mbox{ in } \D'(\Pi)
\end{align*}
for all $k\in\R$. Putting this identity multiplied by $\pm 1$ together with (\ref{gen}), we get another (equivalent) form of entropy relation (\ref{gen})
\begin{equation}\label{gen1}
\frac{\partial}{\partial t}\int\psi_{0k}^\pm(v)d\nu_{t,x}(v)+\frac{\partial}{\partial x}\int\psi_{1k}^\pm(v)d\nu_{t,x}(v)\le 0 \ \mbox{ in } \D'(\Pi),
\end{equation}
where
$$
\psi_{ik}^\pm(v)=\sign^\pm(v-k)(\varphi_i(v)-\varphi_i(k)), \quad i=0,1, \ k\in\R.
$$
Denote by $\Co A$ the convex hull of a set $A\subset\R^n$. In the case when $A$ is a compact subset of $\R$,
$\Co A=[\min A,\max A]$.
\begin{proposition}\label{pro4} Let $\nu_{t,x}^r$, $r\in\N$, be a sequence of measure valued e.s. of equation (\ref{g1}) such that for a.e. $(t,x)\in\Pi$ and all $r\in\N$ the function $\varphi_0(v)$ is constant on $\Co\supp\nu_{t,x}^r$ (in particular, this condition is always satisfied when the measure valued functions $\nu_{t,x}^r$ are regular). Suppose that this sequence converges weakly to a measure valued function $\nu_{t,x}$ (in the sense of relation (\ref{pr2a})). Then
for a.e. $(t,x)\in\Pi$ there exists a nonzero vector $(\xi_0,\xi_1)\in\R^2$ such that $\xi_0\varphi_0(v)+\xi_1\varphi_1(v)=\const$ on $\Co\supp\nu_{t,x}$.
\end{proposition}
\begin{proof}
Since $\nu_{t,x}^r$ are measure valued e.s. of (\ref{g1}), in view of (\ref{gen1}), for all $k\in\R$ the distributions
$$
\alpha_{kr}^\pm\doteq\frac{\partial}{\partial t}\int\psi_{0k}^\pm(v)d\nu_{t,x}^r(v)+\frac{\partial}{\partial x}\int\psi_{1k}^\pm(v)d\nu_{t,x}^r(v)\le 0 \ \mbox{ in } \D'(\Pi).
$$
By the known representation of nonnegative distributions, $\alpha_{kr}^\pm=-\mu_{kr}$, where $\mu_{kr}$ are nonnegative locally finite measures on $\Pi$. We also use that $\alpha_{kr}^+=\alpha_{kr}^-$ because
$$
\alpha_{kr}^+-\alpha_{kr}^-=\frac{\partial}{\partial t}\int \varphi_0(v)d\nu_{t,x}^r(v)+\frac{\partial}{\partial x}\int\varphi_1(v)d\nu_{t,x}^r(v)=0 \ \mbox{ in } \D'(\Pi).
$$
It is clear that $\mu_{kr}=0$ for $|k|>M=\sup\limits_r\|\nu_{t,x}^r\|_\infty$ while for $|k|\le M$
\begin{align*}
\langle\mu_{kr},f\rangle=\int_{\Pi}\left[\int\psi_{0k}^\pm(v)d\nu_{t,x}^r(v)f_t+\int\psi_{1k}^\pm(v)d\nu_{t,x}^r(v)f_x\right]dtdx\le \\
2\max_{|v|\le M}(|\varphi_0(v)|+|\varphi_1(v)|)\int_{\Pi}\max(|f_t|,|f_x|)dtdx\doteq C_f
\end{align*}
for each $f=f(t,x)\in C_0^1(\Pi)$, $f\ge 0$. Since the constants $C_f$ do not depend on $r$, the sequences of nonnegative measures $\mu_{kr}$, $r\in\N$, are bounded in the space $\mathrm{M}_{loc}(\Pi)$ of locally finite measures in $\Pi$ endowed with the standard locally convex topology.
By the Murat interpolation lemma \cite{Mu1} the sequences of distributions $\alpha_{kr}^\pm$, $r\in\N$, are precompact in the Sobolev space $H_{loc}^{-1}(\Pi)$. Recall that this space consists of distributions $l$ on $\Pi$ such that for each $f\in C_0^\infty(\Pi)$ the distribution $fl$ lies in the space $H^{-1}(\R^2)$, which is dual to the Sobolev space $H^1(\R^2)$.
The topology of $H_{loc}^{-1}(\Pi)$ is generated by the seminorms $\|fl\|_{H^{-1}}$. We fix $k,l\in\R$ and denote
\begin{align*}
P_{kr}^+=\int\psi_{0k}^+(v)d\nu_{t,x}^r(v), \quad Q_{kr}^+=\int\psi_{1k}^+(v)d\nu_{t,x}^r(v), \\
P_{lr}^-=\int\psi_{0l}^-(v)d\nu_{t,x}^r(v), \quad Q_{lr}^-=\int\psi_{1l}^-(v)d\nu_{t,x}^r(v).
\end{align*}
As we already demonstrated, the sequences
$$
\alpha_{kr}^+=\frac{\partial}{\partial t} P_{kr}^++\frac{\partial}{\partial x} Q_{kr}^+, \quad
\alpha_{lr}^-=\frac{\partial}{\partial t} P_{lr}^-+\frac{\partial}{\partial x} Q_{lr}^-
$$
are precompact in $H_{loc}^{-1}(\Pi)$. By the compensated compactness theory (see \cite{Mu,Ta}), the quadratic functional
$\Phi(\lambda)=\lambda_1\lambda_4-\lambda_2\lambda_3$, $\lambda=(\lambda_1,\lambda_2,\lambda_3,\lambda_4)\in\R^4$, is weakly continuous on the sequence $(P_{kr}^+,Q_{kr}^+,P_{lr}^-,Q_{lr}^-)$.
By the definition of the measure valued limit function $\nu_{t,x}$ we find that as $r\to\infty$
\begin{align*}
P_{kr}^+\rightharpoonup P_k^+\doteq\int\psi_{0k}^+(v)d\nu_{t,x}(v), \quad Q_{kr}^+\rightharpoonup Q_k^+\doteq\int\psi_{1k}^+(v)d\nu_{t,x}(v), \\
P_{lr}^-\rightharpoonup P_l^-\doteq\int\psi_{0l}^-(v)d\nu_{t,x}(v), \quad Q_{lr}^-\rightharpoonup Q_l^-\doteq\int\psi_{1l}^-(v)d\nu_{t,x}(v)
\end{align*}
weakly-$*$ in $L^\infty(\Pi)$. By our assumption the function $\varphi_0(v)$ is constant on the segment $\Co\supp\nu_{t,x}^r$ for all $r\in\N$. Therefore,
$\psi_{0k}^+(v)\equiv P_{kr}^+$, $\psi_{0l}^-(v)\equiv P_{lr}^-$ on this segment. It follows from this observation that
\begin{align*}
P_{kr}^+Q_{lr}^--Q_{kr}^+P_{lr}^-=\int (\psi_{0k}^+(v)\psi_{1l}^-(v)-\psi_{1k}^+(v)\psi_{0l}^-(v))d\nu_{t,x}^r(v)
\mathop{\rightharpoonup}_{r\to\infty} \\ \int (\psi_{0k}^+(v)\psi_{1l}^-(v)-\psi_{1k}^+(v)\psi_{0l}^-(v))d\nu_{t,x}(v)
\ \mbox{ weakly-$*$ in } L^\infty(\Pi).
\end{align*}
On the other hand, this limit equals $P_k^+Q_l^--Q_k^+P_l^-$ in view of the above-mentioned weak continuity of the functional $\Phi(\lambda)$. Hence, we arrive at the relation
\begin{align}\label{g2}
\int (\psi_{0k}^+(v)\psi_{1l}^-(v)-\psi_{1k}^+(v)\psi_{0l}^-(v))d\nu_{t,x}(v)=
\int\psi_{0k}^+(v)d\nu_{t,x}(v)\int\psi_{1l}^-(v)d\nu_{t,x}(v)-\nonumber\\ \int\psi_{1k}^+(v)d\nu_{t,x}(v)\int\psi_{0l}^-(v)d\nu_{t,x}(v).
\end{align}
Notice that $\psi_{ik}^+(v)=0$ for $v\le k$ while $\psi_{il}^-(v)=0$ for $v\ge l$, where $i=0,1$. Therefore, the integrand in the left hand side of (\ref{g2}) is identically zero whenever $l\le k$. For all such pairs $(k,l)$ we have
\begin{equation}\label{g3}
\int\psi_{0k}^+(v)d\nu_{t,x}(v)\int\psi_{1l}^-(v)d\nu_{t,x}(v)= \int\psi_{1k}^+(v)d\nu_{t,x}(v)\int\psi_{0l}^-(v)d\nu_{t,x}(v).
\end{equation}
Let $\Omega$ be the set of common Lebesgue points of the functions $(t,x)\to\int p(v)d\nu_{t,x}(v)$, $p(v)\in F$, where
$F\subset C(\R)$ is a countable dense set. Since the set $F$ is countable, $\Omega$ is a set of full measure in $\Pi$. By the density of $F$, any point $(t,x)\in\Omega$ is a Lebesgue point of the functions $\int p(v)d\nu_{t,x}(v)$ for all
$p(v)\in C(\R)$. In particular, for each fixed $(t,x)\in\Omega$ the measure $\nu_{t,x}$ is uniquely determined.
Since identity (\ref{g3}) holds a.e. in $\Pi$, it holds at each point of $\Omega$. We fix such a point $(t,x)\in\Omega$ and denote $\nu=\nu_{t,x}$, $[a,b]=\Co\supp\nu$. We have to show that $\xi_0\varphi_0(v)+\xi_1\varphi_1(v)=\const$ on $[a,b]$ for some $\xi=(\xi_0,\xi_1)\in\R^2$, $\xi\not=0$. If $\varphi_0(v)\equiv\const$ on $[a,b]$, we can take $\xi=(1,0)$, thus completing the proof. So, assume that $\varphi_0(v)$ is not constant on $[a,b]$ and, in particular, that $a<b$. We define a smaller segment $[a_1,b_1]$, where
\begin{align*}
a_1=\max\{ c\in [a,b] \ | \ \varphi_0(v)=\varphi_0(a) \ \forall v\in [a,c] \}, \\
b_1=\min\{ c\in [a,b] \ | \ \varphi_0(v)=\varphi_0(b) \ \forall v\in [c,b] \}.
\end{align*}
If $a_1\ge b_1$ then $\varphi_0(v)\equiv\const$ on $[a,b]$, which contradicts our assumption. Therefore, $a\le a_1<b_1\le b$, and we can choose $a_2,b_2\in (a_1,b_1)$ such that $a_2<b_2$. Observe that $\varphi_0(v)$ cannot be constant on the segments
$[a,a_2]$, $[b_2,b]$ (otherwise, $a_1\ge a_2$ or $b_1\le b_2$, respectively). Therefore, there exist $l_0\in [a,a_2]$ and $k_0\in [b_2,b]$ such that $\varphi_0(l_0)$, $\varphi_0(k_0)$
are extremal values of $\varphi_0$ on the segments $[a,a_2]$, $[b_2,b]$ different from $\varphi_0(a)$, $\varphi_0(b)$, respectively. Then the functions $\psi_{0k_0}^+(v)$, $\psi_{0l_0}^-(v)$ keep their sign and are different from zero in neighborhoods of the points $b$, $a$, respectively. This implies that
$$
\int\psi_{0k_0}^+(v)d\nu(v)\not=0, \quad \int\psi_{0l_0}^-(v)d\nu(v)\not=0.
$$
Then, by relation (\ref{g3}) (with $\nu_{t,x}=\nu$)
\begin{equation}\label{g4}
\int\psi_{1l}^-(v)d\nu(v)=c\int\psi_{0l}^-(v)d\nu(v) \quad \forall l\in [a,b_2],
\end{equation}
where
$$
c=\int\psi_{1k_0}^+(v)d\nu(v)/\int\psi_{0k_0}^+(v)d\nu(v).
$$
By relation (\ref{g3}) again
\begin{equation}\label{g4a}
\int\psi_{1k}^+(v)d\nu(v)=c_1\int\psi_{0k}^+(v)d\nu(v) \quad \forall k\in [a_2,b],
\end{equation}
where
$$
c_1=\int\psi_{1l_0}^-(v)d\nu(v)/\int\psi_{0l_0}^-(v)d\nu(v).
$$
Moreover, $c_1=c$ in view of (\ref{g4}). Introducing the function $p(v)=\varphi_1(v)-c\varphi_0(v)$, we can write equalities (\ref{g4}), (\ref{g4a}) in the form
\begin{align*}
\int\sign^-(v-l)(p(v)-p(l))d\nu(v)=0 \quad \forall l\in [a,b_2]; \\
\int\sign^+(v-k)(p(v)-p(k))d\nu(v)=0 \quad \forall k\in [a_2,b].
\end{align*}
By Lemma~\ref{lem3} and its Corollary~\ref{cor5}, we conclude that $p(v)$ is constant on each segment $[a,b_2]$, $[a_2,b]$. Since $a_2<b_2$, these segments intersect and therefore $p(v)=-c\varphi_0(v)+\varphi_1(v)\equiv\const$ on $[a,b]=\Co\supp\nu$, $\nu=\nu_{t,x}$.
This completes the proof.
\end{proof}
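The weak continuity of $\Phi(\lambda)$ exploited in the proof is precisely the div-curl lemma in the $(t,x)$-plane; we record this standard reduction (cf. \cite{Mu,Ta}) for the reader's convenience. Setting
$$
V_r=(P_{kr}^+,Q_{kr}^+), \qquad W_r=(Q_{lr}^-,-P_{lr}^-),
$$
we see that the divergence $\frac{\partial}{\partial t}P_{kr}^+ +\frac{\partial}{\partial x}Q_{kr}^+=\alpha_{kr}^+$ and the rotor $\frac{\partial}{\partial t}(-P_{lr}^-)-\frac{\partial}{\partial x}Q_{lr}^-=-\alpha_{lr}^-$ are precompact in $H_{loc}^{-1}(\Pi)$, and the div-curl lemma yields the weak continuity of the scalar product
$$
V_r\cdot W_r=P_{kr}^+Q_{lr}^--Q_{kr}^+P_{lr}^-=\Phi(P_{kr}^+,Q_{kr}^+,P_{lr}^-,Q_{lr}^-).
$$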
Notice that the sequence $\nu_{t,x}^r$ of measure valued e.s. of equation (\ref{1'}) satisfies the requirements of Proposition~\ref{pro4}, and we conclude that for a.e. $(t,x)\in\Pi$ there is a vector $\xi=(\xi_0,\xi_1)\in\R^2$, $\xi\not=0$, such that $\xi_0 b(v)+\xi_1 g(v)\equiv\const$ on $\Co\supp\nu_{t,x}$. In the case of a linearly non-degenerate flux, Proposition~\ref{pro4} implies the strong convergence of the sequence $u_r$, even without the periodicity requirement.
\begin{corollary}\label{cor6} Assume that the function $\varphi(u)$ is not affine on nondegenerate intervals. Then the sequence $u_r\to u$ as $r\to\infty$ in $L^1_{loc}(\Pi)$ (strongly), and $u=u(t,x)$ is an e.s. of (\ref{1}).
\end{corollary}
\begin{proof}
By Proposition~\ref{pro4} for a.e. $(t,x)\in\Pi$ there is $\xi=(\xi_0,\xi_1)\in\R^2$, $\xi\not=0$, such that $\xi_0 b(v)+\xi_1 g(v)\equiv\const$ on $\Co\supp\nu_{t,x}$. Let us show that for such $(t,x)$ the function $b(v)\equiv\const$ on
the segment $\Co\supp\nu_{t,x}$. In fact, assuming the contrary, we see that the component $\xi_1\not=0$ and consequently $g(v)=cb(v)+\const$ for all $v\in \Co\supp\nu_{t,x}$, where $c=-\xi_0/\xi_1$. This means that $\varphi(u)=cu+\const$ on the interior of the non-degenerate interval $\{u=b(v) | v\in \Co\supp\nu_{t,x} \}$. But this contradicts our assumption. We conclude that $b(v)$ is constant (equal to $u(t,x)$) on $\Co\supp\nu_{t,x}$. Therefore, the measure valued function $\bar\nu_{t,x}=b^*\nu_{t,x}$ is regular, $\bar\nu_{t,x}(u)=\delta(u-u(t,x))$.
In correspondence with Theorem~\ref{thT} the sequence $u_r$ converges to $u(t,x)$ strongly. Moreover, as in the proof of Proposition~\ref{pro3}, we conclude that the limit function $u=u(t,x)$ is an e.s. of (\ref{1}).
\end{proof}
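The non-degeneracy assumption in Corollary~\ref{cor6} is essential. For an affine flux, say $\varphi\equiv 0$, every bounded function of $x$ alone is an e.s. of $u_t=0$, and rapidly oscillating sequences of such solutions converge weakly-$*$ without converging strongly. A small numerical illustration of this classical effect (our own sketch, not taken from the paper):

```python
import numpy as np

# For the affine (here, zero) flux, u_t = 0: every bounded u = u(x) is an
# entropy solution. The sequence u_r(x) = sign(sin(2*pi*r*x)) converges
# weakly-* to 0 but does not converge strongly in L^1_loc.
N = 200_000
x = (np.arange(N) + 0.5) / N          # midpoint grid on [0, 1]
f = np.exp(x)                         # fixed smooth test function
for r in (1, 4, 16, 64):
    u_r = np.sign(np.sin(2 * np.pi * r * x))
    weak = np.mean(u_r * f)           # ~ int_0^1 u_r f dx, tends to 0
    strong = np.mean(np.abs(u_r))     # L^1 norm, stays ~ 1 for every r
    print(r, weak, strong)
```

For growing $r$ the pairing with a fixed test function tends to zero while the $L^1$ norm stays equal to $1$, so no subsequence converges in $L^1_{loc}$.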
Below, we prove Theorem~\ref{th4} in the general case.
\subsection{Proof of Theorem~\ref{th4}.}
Let $E$ be the set of full measure in $\R_+$, introduced in the proof of Proposition~\ref{pro1}, consisting of those $t>0$ such that $(t,x)$ is a Lebesgue point of $u(t,x)$ for almost all $x\in\R$. We recall that $t\in E$ is a common Lebesgue point of all the functions $\int_{\R} u(t,x)\rho(x)dx$, $\rho(x)\in L^1(\R)$. We can choose a sequence $t_m\in E$ such that $t_m\to 0$ as $m\to\infty$ and $u(t_m,x)\rightharpoonup u_0(x)\in L^\infty(\R)$ weakly-$*$ in $L^\infty(\R)$. It is clear that $u_0(x)$ is a periodic function and that $u(t,x)\rightharpoonup u_0(x)$ as $E\ni t\to 0$. Let $\tilde u=\tilde u(t,x)$ be the unique (by Corollary~\ref{cor4}) e.s. of (\ref{1}), (\ref{ini}) with initial function $u_0$, and let $\tilde\nu_{t,x}$ be a corresponding measure valued e.s. of equation (\ref{1'}). We are going to demonstrate that $u=\tilde u$. Clearly, this will complete the proof.
Applying the equalities
\begin{align*}
\frac{\partial}{\partial t} u+\frac{\partial}{\partial x}\int g(v)d\nu_{t,x}(v)=
\frac{\partial}{\partial t}\tilde u+\frac{\partial}{\partial x}\int g(v)d\tilde\nu_{t,x}(v)=0 \ \mbox{ in } \D'(\Pi)
\end{align*}
to the test functions $f=k^{-1}\alpha(t)\beta(x/k)$ and passing to the limit as $k\to\infty$, we derive, as in the proof of Theorem~\ref{th3}, that
$$
\frac{d}{dt}\int_0^1 u(t,x)dx=\frac{d}{dt}\int_0^1 \tilde u(t,x)dx=0 \ \mbox{ in } \D'(\R_+).
$$
This implies that for a.e. $t>0$
\begin{equation}\label{22}
\int_0^1 u(t,x)dx=\int_0^1 \tilde u(t,x)dx=I\doteq\int_0^1 u_0(x)dx,
\end{equation}
where we used the initial condition for e.s. $\tilde u$ and the fact that $\forall t\in E$
$$
\int_0^1 u(t,x)dx=\int_0^1 u(t_m,x)dx\mathop{\to}_{m\to\infty}\int_0^1 u_0(x)dx.
$$
Since in $\D'(\Pi)$
$$
\frac{\partial}{\partial t}(u-\tilde u)+\frac{\partial}{\partial x}\left(\int g(v)d\nu_{t,x}(v)-\int g(v)d\tilde\nu_{t,x}(v)\right)=0,
$$
there exists a Lipschitz function $P=P(t,x)$ (a potential) such that
$$
P_x=u-\tilde u, \quad P_t=\int g(v)d\tilde\nu_{t,x}(v)-\int g(v)d\nu_{t,x}(v) \quad \mbox{ in } \D'(\Pi).
$$
By the Lipschitz condition, this function admits a continuous extension to the closure $\bar\Pi$. Since $P$ is defined up to an additive constant, we can assume that $P(0,0)=0$. It is clear that $P_x(t,x)\to P_x(0,x)$ weakly in $\D'(\R)$ as $t\to 0$. Taking into account that
$P_x(t,x)=u(t,x)-\tilde u(t,x)\rightharpoonup 0$ as $t\to 0$, running over a set of full measure, we find that $P_x(0,x)=0$ and therefore
$P(0,x)\equiv P(0,0)=0$. Further, by the spatial periodicity of $u-\tilde u$ and the condition
$$
\int_0^1(u-\tilde u)(t,x)dx=0
$$
(following from (\ref{22})), we find that the function $P(t,x)$ is spatially periodic as well, $P(t,x+1)=P(t,x)$.
Applying the doubling variables method \cite{Pan96} to the pair of measure valued e.s. $\nu_{t,x}$, $\tilde\nu_{t,x}$ of equation (\ref{1'}), we arrive at the relation
\begin{align}\label{23}
\frac{\partial}{\partial t}\iint |b(v)-b(w)|d\nu_{t,x}(v)d\tilde\nu_{t,x}(w)+\nonumber\\
\frac{\partial}{\partial x}\iint \sign(v-w)(g(v)-g(w))d\nu_{t,x}(v)d\tilde\nu_{t,x}(w)\le 0 \ \mbox{ in } \D'(\Pi).
\end{align}
Since $b(w)=\tilde u(t,x)$ on $\supp\tilde\nu_{t,x}$ and $b^*\nu_{t,x}=\bar\nu_{t,x}$, we can simplify the first integral
$$
\iint |b(v)-b(w)|d\nu_{t,x}(v)d\tilde\nu_{t,x}(w)=\int |b(v)-\tilde u(t,x)|d\nu_{t,x}(v)=\int |u-\tilde u(t,x)|d\bar\nu_{t,x}(u).
$$
We will need the following key relation
\begin{align}\label{24}
\iint |b(v)-b(w)|d\nu_{t,x}(v)d\tilde\nu_{t,x}(w)P_t(t,x)+\nonumber\\ \iint \sign(v-w)(g(v)-g(w))d\nu_{t,x}(v)d\tilde\nu_{t,x}(w)P_x(t,x)=0 \ \mbox{ a.e. in } \Pi.
\end{align}
Recall that
\begin{align*}
P_t(t,x)=\int g(w)d\tilde\nu_{t,x}(w)-\int g(v)d\nu_{t,x}(v)=-\iint (g(v)-g(w))d\nu_{t,x}(v)d\tilde\nu_{t,x}(w), \\
P_x(t,x)=u-\tilde u=\iint(b(v)-b(w))d\nu_{t,x}(v)d\tilde\nu_{t,x}(w),
\end{align*}
and (\ref{24}) can be written in the more symmetric form
\begin{align}\label{25}
\iint |b(v)-b(w)|d\nu_{t,x}(v)d\tilde\nu_{t,x}(w)\iint (g(v)-g(w))d\nu_{t,x}(v)d\tilde\nu_{t,x}(w)=\nonumber\\
\iint(b(v)-b(w))d\nu_{t,x}(v)d\tilde\nu_{t,x}(w)\iint \sign(v-w)(g(v)-g(w))d\nu_{t,x}(v)d\tilde\nu_{t,x}(w).
\end{align}
To prove (\ref{25}), we fix $(t,x)\in\Pi$, denote $\nu=\nu_{t,x}$, $\tilde\nu=\tilde\nu_{t,x}$, $[a,b]=\Co\supp\nu$,
$[a_1,b_1]=\Co\supp\tilde\nu$, and consider the following four cases:
(i) $[a,b]\cap [a_1,b_1]=\emptyset$. In this case $\sign(v-w)\equiv s$ is constant on $[a,b]\times[a_1,b_1]$. Therefore,
\begin{align*}
\iint |b(v)-b(w)|d\nu(v)d\tilde\nu(w)=s\iint (b(v)-b(w))d\nu(v)d\tilde\nu(w), \\
\iint \sign(v-w)(g(v)-g(w))d\nu(v)d\tilde\nu(w)=s\iint (g(v)-g(w))d\nu(v)d\tilde\nu(w)
\end{align*}
and (\ref{25}) follows;
(ii) $[a,b]\subset [a_1,b_1]$. Since $b(w)$ is constant on $[a_1,b_1]$, we find
\begin{equation}\label{26}
\iint (b(v)-b(w))d\nu(v)d\tilde\nu(w)=\iint |b(v)-b(w)|d\nu(v)d\tilde\nu(w)=0
\end{equation}
and (\ref{25}) is trivial;
(iii) $[a_1,b_1]\subset [a,b]$. In correspondence with Proposition~\ref{pro4} for some nonzero vector $(\xi_0,\xi_1)$ the function $\xi_0b(v)+\xi_1g(v)=\const$ on $[a,b]$. If $\xi_1=0$ then $b(v)\equiv\const$ on $[a,b]$, which implies (\ref{26}), and (\ref{25}) is trivially satisfied. For $\xi_1\not=0$ we find that $g(v)=cb(v)+\const$ on $[a,b]$, $c=-\xi_0/\xi_1$. Therefore,
\begin{align*}
\iint (g(v)-g(w))d\nu(v)d\tilde\nu(w)=c\iint (b(v)-b(w))d\nu(v)d\tilde\nu(w), \\
\iint \sign(v-w)(g(v)-g(w))d\nu(v)d\tilde\nu(w)=c\iint |b(v)-b(w)|d\nu(v)d\tilde\nu(w),
\end{align*}
and (\ref{25}) follows;
(iv) The remaining case: $a<a_1\le b<b_1$ or $a_1<a\le b_1<b$. We consider only the former subcase $a<a_1\le b<b_1$, the latter subcase is treated similarly. Since $b(w)\equiv b(b_1)$ on $[a_1,b_1]$ while $b(v)\le b(b_1)$ for all $v\in [a,b]$, we find that
\begin{equation}\label{27}
\iint |b(v)-b(w)|d\nu(v)d\tilde\nu(w)=-\iint (b(v)-b(w))d\nu(v)d\tilde\nu(w).
\end{equation}
Besides, if $b(v)\equiv\const$ on $[a,b]$ then $b(v)\equiv\const$ on $[a,b_1]=[a,b]\cup [a_1,b_1]$ and we again arrive at (\ref{26}), which readily implies the desired relation (\ref{25}). Thus, assume that $b(v)$ is not constant on $[a,b]$. In view of (\ref{27}) relation (\ref{25}) will follow from the equality
\begin{equation}\label{28}
\iint \sign(v-w)(g(v)-g(w))d\nu(v)d\tilde\nu(w)=-\iint (g(v)-g(w))d\nu(v)d\tilde\nu(w).
\end{equation}
By Proposition~\ref{pro4} we have $g(v)=cb(v)+\const$ on $[a,b]$, where $c=-\xi_0/\xi_1$ (remark that $\xi_1\not=0$, otherwise $b(v)\equiv\const$ on $[a,b]$, which contradicts our assumption). Therefore,
\begin{align*}
\iint \sign(v-w)(g(v)-g(w))d\nu(v)d\tilde\nu(w)= \\ \iint_{[a,b]\times [a_1,b]}\sign(v-w)(g(v)-g(w))d\nu(v)d\tilde\nu(w)-\\
\iint_{[a,b]\times (b,b_1]}(g(v)-g(w))d\nu(v)d\tilde\nu(w)=\\
c\iint_{[a,b]\times [a_1,b]}|b(v)-b(w)|d\nu(v)d\tilde\nu(w)-\iint_{[a,b]\times (b,b_1]}(g(v)-g(w))d\nu(v)d\tilde\nu(w)=\\
-c\iint_{[a,b]\times [a_1,b]}(b(v)-b(w))d\nu(v)d\tilde\nu(w)-\iint_{[a,b]\times (b,b_1]}(g(v)-g(w))d\nu(v)d\tilde\nu(w),
\end{align*}
where we use that $b(v)-b(w)=b(v)-b(b_1)\le 0$ for $v\in [a,b]$, $w\in [a_1,b]$. On the other hand,
\begin{align*}
\iint (g(v)-g(w))d\nu(v)d\tilde\nu(w)= \\ \iint_{[a,b]\times [a_1,b]}(g(v)-g(w))d\nu(v)d\tilde\nu(w)+
\iint_{[a,b]\times (b,b_1]}(g(v)-g(w))d\nu(v)d\tilde\nu(w)=\\
c\iint_{[a,b]\times [a_1,b]}(b(v)-b(w))d\nu(v)d\tilde\nu(w)+\iint_{[a,b]\times (b,b_1]}(g(v)-g(w))d\nu(v)d\tilde\nu(w),
\end{align*}
and (\ref{28}) follows. This completes the proof of relation (\ref{25}).
Let $\rho(r)=r^2/(1+r^2)$. Then the function $q=\rho(P(t,x))$ is nonnegative and Lipschitz. Moreover, by the chain rule for Sobolev derivatives $q_t=\rho'(P)P_t, q_x=\rho'(P)P_x$. Applying (\ref{23}) to the test function $qf$, where
$f=f(t,x)\in C_0^\infty(\Pi)$, $f\ge 0$, we obtain the relation
\begin{equation}\label{29}
\int_\Pi [BP_t+GP_x]f\rho'(P)dtdx+\int_\Pi [Bf_t+Gf_x]qdtdx\ge 0,
\end{equation}
where we denote
\begin{align*}
B=B(t,x)=\iint |b(v)-b(w)|d\nu_{t,x}(v)d\tilde\nu_{t,x}(w), \\ G=G(t,x)=\iint \sign(v-w)(g(v)-g(w))d\nu_{t,x}(v)d\tilde\nu_{t,x}(w).
\end{align*}
In view of relation (\ref{24}) $BP_t+GP_x=0$ a.e. on $\Pi$ and the first integral in (\ref{29}) disappears. Therefore,
$$
\int_\Pi [Bf_t+Gf_x]qdtdx\ge 0.
$$
Taking in this relation $f=k^{-1}\alpha(t)\beta(x/k)$, where $\alpha(t)\in C_0^1(\R_+)$, $\beta(y)\in C_0^1(\R)$ are nonnegative functions, $\int\beta(y)dy=1$, we arrive at the relation
$$
k^{-1}\int_{\Pi}B(t,x)q(t,x)\alpha'(t)\beta(x/k)dtdx+k^{-2}\int_\Pi G(t,x)q(t,x)\alpha(t)\beta'(x/k)dtdx\ge 0.
$$
In the limit as $k\to\infty$ the second term in this relation disappears while the first one
$$
k^{-1}\int_{\Pi}B(t,x)q(t,x)\alpha'(t)\beta(x/k)dtdx\mathop{\to}_{k\to\infty}\int_{\R^+\times [0,1]}B(t,x)q(t,x)\alpha'(t)dtdx,
$$
where we utilize the $x$-periodicity of $B(t,x)q(t,x)$, which allows us to apply \cite[Lemma~2.1]{PaJDE}.
As a result, we get
$$
\int_{\R^+\times [0,1]}B(t,x)q(t,x)\alpha'(t)dtdx\ge 0 \quad \forall \alpha(t)\in C_0^1(\R_+), \ \alpha(t)\ge 0.
$$
This inequality means that
$$
\frac{d}{dt}\int_0^1 B(t,x)q(t,x)dx\le 0 \ \mbox{ in } \D'(\R_+),
$$
and implies that for $t,\tau\in E$, $t>\tau$,
\begin{equation}\label{30}
0\le\int_0^1 B(t,x)q(t,x)dx\le \int_0^1 B(\tau,x)q(\tau,x)dx,
\end{equation}
where $E\subset\R_+$ is a set of full measure.
Observe that $0\le q(\tau,x)\le |P(\tau,x)|=|P(\tau,x)-P(0,x)|\le L\tau$, where $L$ is a Lipschitz constant of $P$ while the function $B(t,x)$ is bounded. Therefore,
$$
\int_0^1 B(\tau,x)q(\tau,x)dx\mathop{\to}_{E\ni\tau\to 0} 0
$$
and it follows from (\ref{30}) that
$\int_0^1 B(t,x)q(t,x)dx=0$. Since $B,q\ge 0$, we find that $Bq=0$ a.e. on $\Pi$. Let $Z\subset\Pi$ be the set where $q=0 \Leftrightarrow P=0$, that is, $Z=P^{-1}(0)$ (we do not reuse the letter $E$, already taken by the set of full measure in $\R_+$ above). By the known properties of Lipschitz functions, $\nabla P=0$ a.e. on $Z$.
In particular, $P_x=u-\tilde u=0$ a.e. in $Z$. On the other hand, for $(t,x)\in\Pi\setminus Z$ the function $q>0$ and therefore $B=0$ a.e. on this set. Since
$$
B(t,x)=\iint |b(v)-b(w)|d\nu_{t,x}(v)d\tilde\nu_{t,x}(w)=\int |b(v)-\tilde u|d\nu_{t,x}(v),
$$
we find that $b(v)=\tilde u$ on $\supp\nu_{t,x}$. In particular, again $u(t,x)=\int b(v)d\nu_{t,x}(v)=\tilde u(t,x)$.
We conclude that $u=\tilde u$ a.e. in $\Pi$, which completes the proof.
\section*{Acknowledgments}
The research was supported by the Russian Science Foundation, grant 22-21-00344.
\section{Introduction}
\IEEEPARstart{O}{ne} major research topic on random matrices is to study tail inequalities for sums of random matrices, which bound the probability that the extreme eigenvalues (or singular values) of sums of random matrices exceed a given constant ({\it cf.}\cite{ahlswede2002strong,hsu2012tail,minsker2017some,tropp2012user,vershynin2012,zhang2017lsv}). Random matrices have been widely used in many research fields, {\it e.g.,} compressed sensing \cite{vershynin2012}, quantum computing \cite{ahlswede2002strong} and optimization \cite{nemirovski2007sums,so2011moment}, to model practical system behaviours with uncertain disturbances. Crucial system characteristics can be estimated efficiently when the tail behaviour of the random fluctuations concentrates. The following are some application examples of this study:
\begin{itemize}
\item In compressed sensing, Baraniuk {\it et al.} \cite{baraniuk2008simple} introduced an alternative definition of the restricted isometry property (RIP), and then achieved a simple proof of the RIP under the concentration assumption on the measurement matrix;
\item In optimization, Nemirovski \cite{nemirovski2007sums} and So \cite{so2011moment} have pointed out that the behavior of matrix random series, which is the sum of fixed matrices weighted by independent random variables, is strongly related to the efficiently computable solutions to many optimization problems, {\it e.g.,} the chance constrained optimization problem and the quadratic optimization problem with orthogonality constraints;
\item In probability theory, Hsu {\it et al.} \cite{hsu2011dimension} used the tail inequalities of random matrices to bound the supremum of stochastic processes;
\item In machine learning, the tail inequalities have been applied to study matrix approximation via random sampling \cite{tropp2015introduction};
\item {In theoretical computer science, the matrix Chernoff bounds have been used to prove that matrix-valued samples taken from the stationary random walk on an expander graph can almost be treated as independent samples \cite{wigderson2005randomness,wigderson2008derandomizing,kyng2018matrix,garg2018matrix}. }
\item {In quantum information, quantum systems are naturally in matrix forms, and hence tail bounds are very useful to study the quantum system behaviours with random noise \cite{ahlswede2002strong}. }
\end{itemize}
Most existing tail inequalities carry the matrix dimension as a product factor, and are thus unsuitable for high-dimensional or infinite-dimensional matrices. To overcome this shortcoming, it is important yet challenging to develop dimension-free tail inequalities for sums of random matrices. Instead of the ambient matrix dimension, some pioneering works introduced the intrinsic matrix dimension to reduce the dimension dependence of the tail inequalities for sums of random matrices ({\it cf.} \cite{minsker2017some,hsu2012tail,tropp2015introduction}). Moreover, Rudelson and Vershynin \cite{rudelson2007sampling} presented dimension-free tail inequalities for the sum of rank-one random matrices, each of which is the tensor product of a bounded random vector with itself. Recently, Hsu {\it et al.} \cite{hsu2011dimension} gave tail results for sums of random matrices in which the explicit matrix dimensions are replaced by a trace quantity that can be small even when the matrix dimension is large or infinite. Magen and Zouzias \cite{magen2011low} applied the non-commutative Khintchine moment inequality to obtain a dimension-free tail inequality for sums of low-rank bounded matrices, but that inequality converges slowly because it lacks an exponential form.
Consider the sum of $K$ random Hermitian matrices ${\bf X}_1,\cdots,{\bf X}_K\in\mathbb{C}^{n\times n}$, ${\bf Y}:=\sum_{k=1}^K {\bf X}_k$. In the literature, obtaining the tail inequalities for the largest eigenvalue of ${\bf Y}$ has a common starting-point ({\it cf.} \cite{ahlswede2002strong,hsu2012tail,minsker2017some,tropp2012user}):
\begin{equation}\label{eq:tropp.laplace}
\mathbb{P}\{\lambda_{\max}({\bf Y})\geq t\}\leq \inf_{\theta>0}\big\{{\rm e}^{-\theta t}\cdot \mathbb{E}\,{\rm tr}\,{\rm e}^{\theta {\bf Y}}\big\},\quad t>0,
\end{equation}
where $\lambda_{\max}(\cdot)$ denotes the largest eigenvalue.
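As a quick sanity check (our own illustration, not part of the cited works), the Laplace-transform bound \eqref{eq:tropp.laplace} can be probed by Monte Carlo; the matrix size, the uniform entry model, and the grid of $\theta$ values below are all arbitrary illustrative choices. Since ${\rm e}^{-\theta t}\,{\rm tr}\,{\rm e}^{\theta{\bf Y}}\geq \mathbf{1}\{\lambda_{\max}({\bf Y})\geq t\}$ holds sample by sample, the empirical bound dominates the empirical tail exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
n, K, trials, t = 5, 10, 2000, 3.0

def sample_Y():
    # Y = sum of K independent random symmetric matrices with bounded entries
    G = rng.uniform(-1, 1, size=(K, n, n))
    return ((G + G.transpose(0, 2, 1)) / (2 * np.sqrt(K))).sum(axis=0)

# eigenvalues of each sampled Y (ascending order per row)
eigs = np.array([np.linalg.eigvalsh(sample_Y()) for _ in range(trials)])

# empirical tail probability of the largest eigenvalue
emp = np.mean(eigs[:, -1] >= t)

# Laplace-transform bound, minimized over a grid of theta > 0
thetas = np.linspace(0.05, 3.0, 60)
bound = min(np.exp(-th * t) * np.mean(np.exp(th * eigs).sum(axis=1))
            for th in thetas)

# the inequality holds pathwise, so the empirical tail never exceeds the bound
assert emp <= bound
print(emp, bound)
```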
This suggests that the key to bounding $\mathbb{P}\{\lambda_{\max}({\bf Y})\geq t\}$ is to obtain the relevant Laplace-transform bounds, {\it i.e.,} the upper bound of $\mathbb{E}\,{\rm tr}\,{\rm e}^{\theta {\bf Y}}$ ({\it cf.} \cite[Proposition 3.1]{tropp2012user}). By using the Golden-Thompson trace inequality, Ahlswede and Winter \cite{ahlswede2002strong} arrived at
\begin{eqnarray}\label{eq:aw}
\mathbb{E}\,{\rm tr}\,{\rm e}^{\theta {\bf Y}}={\rm tr}\,\mathbb{E}\,{\rm e}^{\theta {\bf Y}}
&\leq& ({\rm tr}{\bf I})\cdot \Big[\prod_k \lambda_{\max}(\mathbb{E}{\rm e}^{\theta {\bf X}_k}) \Big] \\
&=&{\rm dim}({\bf Y})\cdot \exp\Big(\sum_k\lambda_{\max}
\big(\log \mathbb{E} {\rm e}^{\theta {\bf X}_k} \big) \Big), \label{eq:aw01}
\end{eqnarray}
where ${\rm dim}({\bf Y})$ stands for the matrix dimension, or the ambient dimension (AD), of ${\bf Y}$. Note that there are two shortcomings in the bound:
\begin{enumerate}[(i)]
\item The form of $\sum$-over-$\lambda_{\max}$ on the right-hand side of (\ref{eq:aw01}) is potentially much larger than the optimal form of $\lambda_{\max}$-over-$\sum$.
\item The inequality has the matrix dimension ${\rm dim}({\bf Y})$ as a product factor; hence, it will become very loose when the dimension of ${\bf Y}$ is high.
\end{enumerate}
The first shortcoming has been successfully solved in Tropp's work \cite{tropp2012user}, where Lieb's concavity theorem was applied to obtain the following Laplace-transform bound \cite[Lemma 3.4]{tropp2012user}:
\begin{equation}\label{eq:tropp}
\mathbb{E}\,{\rm tr}\,{\rm e}^{\theta {\bf Y}}\leq {\rm dim}({\bf Y})\cdot
\exp\left(\lambda_{\max}\left(\sum_k\log \mathbb{E}{\rm e}^{\theta {\bf X}_k} \right)\right).
\end{equation}
Denote $v = \lambda_{\max}(\sum_k\mathbb{E} {\bf X}_k^2)$, and assume $\lambda_{\max}({\bf X}_k) \leq L$ for all $k$. One can obtain the tail inequality \cite[Theorem 6.1]{tropp2012user}:\footnote{The inequality
\eqref{eq:amb.tail01} follows from the fact that $\Gamma(t)\geq \frac{t^2}{2(1+t/3)}$ for $t>0$.}
\begin{eqnarray}\label{eq:amb.tail}
\mathbb{P}\big\{\lambda_{\max}({\bf Y})\geq t \big\} &\leq& {\rm dim}({\bf Y})\cdot \exp \left( -\frac{v}{L^2}\cdot \Gamma\left( \frac{Lt}{v} \right) \right) \\
&\leq& {\rm dim}({\bf Y})\cdot \exp \left( \frac{-t^2/2}{v+Lt/3} \right), \label{eq:amb.tail01}
\end{eqnarray}
where $t>0$ and
\begin{equation}\label{eq:gamma}
\Gamma(t):= (t+1)\log(t+1)-t.
\end{equation}
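The passage from the Bennett-type bound \eqref{eq:amb.tail} to the Bernstein-type bound \eqref{eq:amb.tail01} rests on the standard comparison $\Gamma(t)\geq \frac{t^2}{2(1+t/3)}$ for $t>0$, which a short grid check confirms (the grid range and tolerance are our own choices):

```python
import numpy as np

def Gamma(t):
    # Gamma(t) = (t+1) log(t+1) - t, as defined in the text
    return (t + 1.0) * np.log(t + 1.0) - t

t = np.linspace(1e-6, 50.0, 200_000)
# Bennett-to-Bernstein comparison: Gamma(t) >= t^2 / (2 (1 + t/3))
assert np.all(Gamma(t) >= t**2 / (2.0 * (1.0 + t / 3.0)) - 1e-12)
```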
The introduction of the intrinsic dimension (ID) $${\rm intdim}({\bf V}):=\frac{{\rm tr}({\bf V})}{\lambda_{1}({\bf V})},$$ where ${\bf V}$ satisfies $\sum_k\mathbb{E} \{{\bf X}_k {\bf X}_k^*\} \preceq {\bf V} $ and ${\bf A} \preceq {\bf B}$ means that ${\bf B}-{\bf A}$ is a positive semidefinite matrix \cite{hsu2012tail,minsker2017some,tropp2015introduction}, provides an attempt to overcome the second shortcoming. By setting $\Psi(t) := {\rm e}^{\theta t} - \theta t -1$ in the generalized Laplace-transform bound ({\it cf.} \cite[Proposition 7.4.1]{tropp2015introduction}):
\begin{equation*
\mathbb{P}\big\{\lambda_{\max}({\bf Y})\geq t \big\} \leq \frac{1}{\Psi(t)} \cdot \mathbb{E} \, {\rm tr}\, \Psi({\bf Y}),
\end{equation*}
Tropp obtained the tail inequality with the intrinsic dimension as a product factor \cite[Theorem 7.3.1]{tropp2015introduction}:
\begin{equation}\label{eq:int.tail}
\mathbb{P}\big\{\lambda_{\max}({\bf Y})\geq t \big\} \leq 4\cdot {\rm intdim}({\bf V})\cdot \exp \left( \frac{-t^2/2}{v+Lt/3} \right),
\end{equation}
for $t\geq \sqrt{v} + L/3$.
Although ${\rm intdim}({\bf V})$ can be smaller than ${\rm dim}({\bf Y})$, this method still has one caveat: the ID inequality \eqref{eq:int.tail} cannot be used to study $\mathbb{P}\big\{\lambda_{\max}({\bf Y})\geq t \big\}$ when $t$ is smaller than $\sqrt{v} + L/3$. Therefore, it is desirable to obtain tail inequalities for sums of random matrices without the aforementioned shortcomings.
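To make the contrast between the two notions of dimension concrete, the intrinsic dimension is easy to compute; the example below (a diagonal matrix with geometrically decaying spectrum) is our own illustration, not one from the cited works:

```python
import numpy as np

def intdim(V):
    # intrinsic dimension: tr(V) / lambda_1(V) for a positive-semidefinite V
    return np.trace(V) / np.linalg.eigvalsh(V)[-1]

# ambient dimension is 1000, but the spectrum decays geometrically,
# so the intrinsic dimension stays close to 2
V = np.diag(2.0 ** -np.arange(1000))
print(intdim(V))  # ~ 2.0
```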
\subsection{Overview of Main Results}
In this paper, we propose a new framework to obtain dimension-free tail inequalities for sums of random matrices. Compared with the existing works, the tail inequalities obtained under this framework have the following advantages.
\begin{enumerate}[(i)]
\item They can be used to study the tail behavior of several kinds of matrix functions, including arbitrary matrix norms, the sum of the $j$ largest singular values of complex matrices, and the absolute value of the sum of the $j$ largest eigenvalues of Hermitian matrices.
\item The tail bounds do not have a dimensional factor, and are valid for any $t>0$. Being independent of the matrix dimension, they are suitable for high-dimensional or infinite-dimensional matrices.
\end{enumerate}
Under this framework, we further obtain tail inequalities for matrix random series, which are sums of fixed matrices weighted by independent random variables. Compared with the existing works \cite{tropp2012user,zhang2018matrix}, our results are not only independent of the matrix dimension but also applicable to arbitrary probability distributions with a bounded first-order moment.
As an application in compressed sensing, we apply the resulting tail inequalities to prove the RIP of measurement matrices that can be expressed as sums of random matrices, without any assumption imposed on the matrix entries. In addition, we also discuss applications of our results in optimization, stochastic processes, matrix approximation, {and quantum information.}
The rest of this paper is organized as follows. In Section \ref{sec:notation}, we introduce some necessary preliminaries and notations. In Section \ref{sec:main}, we present the main results of this paper. In Section \ref{sec:rip}, we discuss the application of the resulting tail inequalities in compressed sensing. {The applications in stochastic processes, matrix approximation via random sampling, and quantum information are given in Section \ref{sec:process}, Section \ref{sec:approximation} and Section \ref{sec:sampling}, respectively.} The last section concludes the paper, and the proofs of our main results are given in the appendix.
\section{Preliminaries and Notations}\label{sec:notation}
In this section, we introduce some necessary preliminaries and notations, and then give a Laplace-transform bound as the starting-point to obtain the main results.
\subsection{Matrix Functions}
Let $\mu: \mathbb{M}\rightarrow \mathbb{R}$ be a matrix function defined on the matrix set $\mathbb{M}$. Assume that the function $\mu$ and the matrix set $\mathbb{M}$ satisfy the following conditions:
\begin{description}
\item[(C1)] For any ${\bf A}\in\mathbb{M}$, it holds that $\mu({\bf A}) \geq 0$.
\item[(C2)] For any ${\bf A}\in\mathbb{M}$ and any $\theta\geq 0$, it holds that $\theta\cdot {\bf A} \in\mathbb{M}$ and $\mu(\theta\cdot {\bf A}) = \theta \cdot \mu( {\bf A})$.
\item[(C3)] For any ${\bf A},{\bf B}\in\mathbb{M}$, it holds that $\mu({\bf A}+{\bf B})\leq \mu({\bf A})+\mu({\bf B})$.
\end{description}
The following are examples of the function $\mu(\cdot)$ and the matrix set $\mathbb{M}$ satisfying conditions (C1)-(C3):
\begin{enumerate}[(i)]
\item According to \cite[Corollary 3.4.3]{horn1991topics}, the function $\mu(\cdot)$ can be the sum of the $j$, $1\leq j < \min\{m,n\}$, largest singular values $\sum_{i=1}^j\sigma_{i}(\cdot)$ in the case that $\mathbb{M} = \mathbb{C}^{m\times n}$, where the notation $\mathbb{C}^{m\times n}$ stands for the set of all $m\times n$ complex matrices and all singular values $\sigma_1,\cdots,\sigma_{\min\{m,n\}}$ are in descending order, {\it i.e.}, $\sigma_1\geq \cdots\geq\sigma_{\min\{m,n\}}$.
\item According to \cite[Theorem G.1]{marshall2010inequalities}, the function $\mu(\cdot)$ can be the absolute value of the sum of the $j$, $1\leq j < n$, largest eigenvalues $\big|\sum_{i=1}^j\lambda_{i}(\cdot)\big|$ in the case that $\mathbb{M} = \mathbb{H}^{n\times n}$, where the notation $\mathbb{H}^{n\times n}$ denotes the set of all $n$-dimensional Hermitian matrices and all eigenvalues $\lambda_1,\cdots,\lambda_n$ are in descending order, {\it i.e.}, $\lambda_1\geq \cdots\geq\lambda_n$.
\item {It follows from non-negativity, homogeneity and the triangle inequality that the function $\mu(\cdot)$ can be an arbitrary matrix norm with $\mathbb{M} = \mathbb{C}^{m\times n}$.}
\end{enumerate}
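As a quick illustration (our own sketch, with $j=2$ and small random complex matrices), the Ky Fan choice in example (i) can be checked against (C1)-(C3) numerically:

```python
import numpy as np

rng = np.random.default_rng(1)

def mu(A, j=2):
    """Sum of the j largest singular values (the Ky Fan j-norm)."""
    s = np.linalg.svd(A, compute_uv=False)  # singular values, descending
    return s[:j].sum()

for _ in range(200):
    A = rng.standard_normal((4, 6)) + 1j * rng.standard_normal((4, 6))
    B = rng.standard_normal((4, 6)) + 1j * rng.standard_normal((4, 6))
    theta = rng.uniform(0, 5)
    assert mu(A) >= 0                                # (C1) non-negativity
    assert np.isclose(mu(theta * A), theta * mu(A))  # (C2) homogeneity
    assert mu(A + B) <= mu(A) + mu(B) + 1e-9         # (C3) sub-additivity
```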
{Note that all results in this paper are established under the assumption that the function $\mu:\mathbb{M}\rightarrow \mathbb{R}$ and the matrix set $\mathbb{M}$ satisfy conditions (C1)-(C3), unless otherwise stated.}
\subsection{Infinite-dimensional Diagonal Matrices}
For any $\theta>0$, define an infinite-dimensional diagonal matrix w.r.t. the function $\mu$:
\begin{align}\label{eq:def.diag}
\widehat{{\bf D}}_\mu[\theta; {\bf B}]:={\bf D}_0+ {{\bf D}}_\mu[\theta; {\bf B}]
\end{align}
with
\begin{align}\label{eq:def.diag0}
{\bf D}_0:=&\bm{\Lambda}\left[0,0,\log\frac{1}{2!}, \log\frac{1}{3!},\cdots\right]
\end{align}
and
\begin{align}\label{eq:def.diag1}
{{\bf D}}_\mu&[\theta; {\bf B}]:= \bm{\Lambda}\Big[0, \log(\theta \cdot \mu({\bf B})+1),2\log(\theta \cdot \mu({\bf B})+1), 3\log(\theta \cdot \mu({\bf B})+1),\cdots\Big],
\end{align}
where $\bm{\Lambda}[ \cdots]$ stands for the diagonal matrix with the given diagonal entries. It is immediate that ${\rm tr} \,{\rm e}^{{\bf D}_0} = {\rm e}$. Subsequently, we consider the following properties of the operation ${\bf D}_\mu[\cdot\,;\,\cdot]$:
\begin{proposition}\label{prop:operation}
Given $K$ fixed matrices ${\bf B}_1,\cdots,{\bf B}_K\in \mathbb{M}$, let $\Omega = \{\Omega_1,\cdots,\Omega_I\}$ be a partition of the index set $\{1,\cdots, K\}$, that is, $\bigcup_{i=1}^I \Omega_i=\{1,\cdots, K\}$, where $|\Omega_i|$ stands for the cardinality of the set $\Omega_i$. Then, there holds that
\begin{align}
{\bf D}_\mu\left[\theta;\sum_{k=1}^K{\bf B}_k\right] \preceq \sum_{k=1}^K{\bf D}_\mu[\theta;{\bf B}_k];&\qquad\mbox{(sub-additivity)}\label{eq:operation1}\\
\sum_{k=1}^K {\bf D}_\mu\left[\theta;{\bf B}_k\right] \preceq K\cdot {\bf D}_\mu\left[\theta;\sum_{k=1}^K\frac{{\bf A}_k}{K}\right];&\qquad\mbox{(super-additivity)}\label{eq:operation2}\\
\sum_{k=1}^K{\bf D}_\mu[\theta;{\bf B}_k] \preceq \sum_{i=1}^I \left( |\Omega_i |\cdot {\bf D}_\mu\left[\theta;\sum_{k\in\Omega_i} \frac{{\bf A}'_k}{|\Omega_i|}\right] \right),&\qquad \mbox{(partial super-additivity)}\label{eq:operation3}
\end{align}
where ${\bf A}_1,\cdots,{\bf A}_K \in\mathbb{M}$ and ${\bf A}'_1,\cdots,{\bf A}'_K \in\mathbb{M}$ satisfy that $\sum\limits_k\mu({\bf B}_k)\leq\mu\big(\sum\limits_k{\bf A}_k\big)$ and $\sum\limits_{k\in\Omega_i}\mu({\bf B}_k)\leq\mu\big(\sum\limits_{k\in\Omega_i}{\bf A}'_k\big)$, respectively.
\end{proposition}
In this proposition, the sub-additivity is a direct consequence of (C3), and the super-additivity (and partial super-additivity) can be obtained by using the arithmetic mean-geometric mean inequality: $\frac{s_1+s_2+\cdots+s_K}{K} \geq \sqrt[K]{s_1 s_2 \cdots s_K}$, $\forall \,s_1,\cdots,s_K \geq 0$. We remark that a simple choice of the matrices ${\bf A}_k$ and ${\bf A}'_k$ is as follows:
\begin{align}\label{eq:notationU}
{\bf A}_k\leftarrow& {{\bf U}}: = \mathop{\arg\max}\limits_{1\leq k\leq K}\big\{ \mu({\bf B}_k)\big\},\quad k\in\{1,2,\cdots, K\}\nonumber\\
{\bf A}'_k\leftarrow& {{\bf U}_i}: = \mathop{\arg\max}\limits_{k\in\Omega_i}\big\{ \mu({\bf B}_k)\big\},\quad k\in \Omega_i.
\end{align}
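Read off the diagonal entries, the sub- and super-additivity above boil down to scalar inequalities in $s_k=\mu({\bf B}_k)$ (with the simple choice of ${\bf A}_k$ above, so that $\sum_k\mu({\bf B}_k)$ is majorized by $K$ copies of the maximum); a minimal numeric check of the two scalar facts, our own sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

for _ in range(500):
    s = rng.uniform(0, 3, size=5)   # stand-ins for mu(B_1), ..., mu(B_K)
    theta = rng.uniform(0, 2)
    # sub-additivity of the diagonal entries:
    #   1 + theta * sum(s) <= prod(1 + theta * s_k)
    assert 1 + theta * s.sum() <= np.prod(1 + theta * s) + 1e-12
    # super-additivity via AM-GM:
    #   prod(1 + theta * s_k) <= (1 + theta * mean(s))**K
    assert np.prod(1 + theta * s) <= (1 + theta * s.mean()) ** len(s) * (1 + 1e-12)
```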
\subsection{Laplace-Transform Bounds}
Here, we use the aforementioned infinite-dimensional diagonal matrices ${\bf D}_\mu[\cdot;\cdot]$ to obtain the Laplace-transform bounds for the matrix function $\mu$. This provides a starting-point to achieve the tail inequalities for sums of random matrices.
\begin{proposition}\label{prop:diagonal}
For any ${\bf B}\in\mathbb{M}$ and $\theta>0$,
\begin{equation}\label{eq:diagonal.1}
{\rm e}^{\mu(\theta \cdot {\bf B})}={\rm e}^{-1}\cdot {\rm tr}\,{\rm e}^{\widehat{{\bf D}}_\mu[\theta; {\bf B}]},
\end{equation}
and for any $K\in\mathbb{N}$,
\begin{equation}\label{eq:diagonal.2}
{\rm e}^{(\mu(\theta \cdot {\bf B})+1)^K}={\rm tr}\,{\rm e}^{{\bf D}_0+K\cdot {\bf D}_\mu[\theta; {\bf B}]},
\end{equation}
where $\widehat{{\bf D}}_\mu[\theta; {\bf B}]$ is defined in \eqref{eq:def.diag}.
\end{proposition}
These results are derived from the Taylor's expansion of ${\rm e}^{x}$, where the function $\mu(\theta {\bf X})$ is converted into the trace operation for the infinitely-dimensional diagonal matrix ${\bf D}_\mu[\theta;{\bf X}]$. Next, we consider the Laplace-transform bound for sums of random matrices, {\it i.e.,} the upper bound of $\mathbb{E} \,{\rm e}^{\theta \mu( \sum_k{\bf X}_k)}$.
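Concretely, the $n$-th diagonal entry of ${\rm e}^{\widehat{{\bf D}}_\mu[\theta;{\bf B}]}$ is $(\theta\mu({\bf B})+1)^n/n!$, so the trace sums the Taylor series of ${\rm e}^x$ at $x=\theta\mu({\bf B})+1$, which gives \eqref{eq:diagonal.1}. A truncated-series check (the values of $\mu({\bf B})$ and $\theta$ below are arbitrary):

```python
from math import exp, factorial

def rhs(mu_B, theta, N=60):
    # e^{-1} * trace of exp(D0 + D_mu): entries (theta*mu(B)+1)^n / n!
    x = theta * mu_B + 1.0
    return exp(-1) * sum(x**n / factorial(n) for n in range(N))

mu_B, theta = 1.7, 0.9
# matches e^{mu(theta B)} = e^{theta * mu(B)} up to the truncation error
assert abs(rhs(mu_B, theta) - exp(theta * mu_B)) < 1e-9
```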
\begin{proposition}\label{prop:diagonal.sum}
Let ${\bf X}_1,\cdots,{\bf X}_K\in \mathbb{M}$ be independent random matrices. Then, it holds that for any $\theta>0$,
\begin{align}\label{eq:diagonal.sum}
\mathbb{E}{\rm e}^{\mu\left(\sum_{k=1}^K\theta {\bf X}_k\right)}\leq {\rm e}^{-1}\cdot {\rm tr}\,\exp\left({\bf D}_0+\sum_{k=1}^K \log\mathbb{E}\,{\rm e}^{{\bf D}_\mu[\theta;{\bf X}_k]}\right).
\end{align}
\end{proposition}
This bound follows from the sub-additivity of ${\bf D}_\mu[\cdot\,;\,\cdot]$. Note that the term $\sum_{k=1}^K \log\mathbb{E}\,{\rm e}^{{\bf D}_\mu[\theta;{\bf X}_k]}$ on the right-hand side of \eqref{eq:diagonal.sum} can be further related to the desired form $ \log\mathbb{E}\,{\rm e}^{{\bf D}_\mu[\theta;\sum_k{\bf X}_k]}$ by using the super-additivity of ${\bf D}_\mu[\cdot\,;\,\cdot]$. The details are provided in the next section.
\section{Main Results}\label{sec:main}
In this section, we present the dimension-free (DF) tail inequalities for sums of random matrices. We also compare our results with the dimension-dependent bounds \eqref{eq:amb.tail} and \eqref{eq:int.tail}, and obtain a trade-off relationship between the matrix dimension and the number $K$ of summand matrices. We then provide numerical experiments
showing that our tail inequalities yield a more accurate description of the tail behavior of sums of random matrices in some cases.
\subsection{Dimension-free Tail Inequalities}
Let $g(\theta,K)$ be an arbitrary function satisfying $g(\theta,K)\geq \max\{\theta,\theta^K\}$, $\forall\,\theta >0$. Given independent random matrices ${\bf X}_1,\cdots,{\bf X}_K\in \mathbb{M}$, let ${\bf B}_1,\cdots,{\bf B}_K\in\mathbb{M}$ be fixed matrices such that
\begin{equation}\label{eq:cond.b}
\mathbb{E}\,{\rm e}^{{\bf D}_\mu[\theta;{\bf X}_k]} \preceq {\rm e}^{{\bf D}_\mu[\theta;{\bf B}_k]},\quad 1\leq k\leq K.
\end{equation}
Denote $\phi:=\left[\mu\left({\bf U}\right)+1\right]^K-1$ with ${\bf U} = \mathop{\arg\max}\limits_{1\leq k\leq K} \{\mu({\bf B}_k)\}$. Then, we obtain the following master tail inequality for sums of random matrices.
\begin{proposition}\label{prop:master1}
Under Condition \eqref{eq:cond.b}, it follows that, for any $t>0$,
\begin{equation}\label{eq:master1}
\mathbb{P}\left\{\mu\left( \sum_k{\bf X}_k\right)\geq t \right\}
\leq \inf_{\theta>0}\left\{{\rm e}^{-\theta t+g(\theta,K)\cdot \phi} \right\}.
\end{equation}
\end{proposition}
As shown in the proof of this proposition, we first introduce fixed matrices ${\bf B}_1,\cdots,{\bf B}_K$ to control the behavior of the random matrices $\{{\bf X}_k\}$, and then relax the bound to the one with the term $\sum_{k=1}^K {\bf D}_\mu[\theta;{\bf B}_k]$. Based on the super-additivity \eqref{eq:operation2} of ${\bf D}_\mu[\cdot;\cdot]$, we finally obtain the bound incorporating the term $\phi$. Note that
one feasible choice of $g(\theta,K)$ is
\begin{equation}\label{eq:g}
g(\theta,K) \leftarrow g_1(\theta,K):= {\rm e}^{K \theta} -K \theta + \alpha_1(K),
\end{equation}
where
\begin{equation}\label{eq:alpha}
\alpha_1(K):= \frac{K+1}{K}\left( \log \left(\frac{K+1}{K}\right) -1 \right).
\end{equation}
The function $g_1(\theta,K)$ is tangent to $\theta$ at the point $\big(\frac{1}{K}\log \frac{K+1}{K} , \frac{1}{K} \log \frac{K+1}{K} \big)$. Figure \ref{fig:g1} plots $g_1(\theta,K)$ for $K=2$ together with $\alpha_1(K)$.
\begin{figure}[htbp]
\centering
\subfigure[\hbox{The curves of $g_1(\theta,K)$ and $\max\{ \theta ,\theta^K\}$ ($K=2$).}]{
\includegraphics[height=6cm]{g1.eps} }
\subfigure[\hbox{The curve of $\alpha_1(K)$.}]{
\includegraphics[height=6cm]{a1.eps}}
\caption{The curves of $g_1(\theta,K)$ for $K=2$ and of $\alpha_1(K)$.}\label{fig:g1}
\end{figure}
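The feasibility requirement $g_1(\theta,K)\geq \max\{\theta,\theta^K\}$ can also be confirmed on a grid (our own sanity check; the tolerance absorbs rounding at the tangency point):

```python
import numpy as np

def alpha1(K):
    # alpha_1(K) = ((K+1)/K) * (log((K+1)/K) - 1), as in the text
    return (K + 1.0) / K * (np.log((K + 1.0) / K) - 1.0)

def g1(theta, K):
    return np.exp(K * theta) - K * theta + alpha1(K)

theta = np.linspace(1e-4, 5.0, 20000)
for K in (1, 2, 3, 5, 10):
    # g_1 dominates max{theta, theta^K}, with equality at the tangency point
    assert np.all(g1(theta, K) >= np.maximum(theta, theta**K) - 1e-9)
```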
By substituting $g_1(\theta,K)$ into the master inequality \eqref{eq:master1} and then taking minimization w.r.t. $\theta$, we arrive at the following result.
\begin{theorem}\label{thm:tail1}
For any $t>0$, it holds that
\begin{align}\label{eq:tail1}
\mathbb{P}\left\{\mu\left(\sum_{k=1}^K{\bf X}_k \right)\geq t \right\}\leq& {\rm e}^{(1+\alpha_1(K))\cdot \phi}\cdot\exp\left(- \phi\cdot \Gamma\left(\frac{t}{K\phi}\right)\right) \nonumber\\
\leq& {\rm e}^{(1+\alpha_1(K))\cdot \phi}\cdot\exp\left(- \frac{t^2/2}{K^2 \phi + Kt/3}\right)\nonumber\\
%
\leq&
\left\{
\begin{array}{ll}
{\rm e}^{\phi(1+\alpha_1(K)) }\cdot {\rm e}^{\frac{-t^2}{ 4K^2 \phi }} , & \mbox{if $t< 3K\phi $;} \\
{\rm e}^{\phi(1+\alpha_1(K)) }\cdot {\rm e}^{\frac{-3t}{ 4K }}, & \mbox{if $t\geq 3K\phi $,}
\end{array}
\right.
\end{align}
where $\Gamma(t)$ and $\alpha_1(K)$ are defined in \eqref{eq:gamma} and \eqref{eq:alpha}, respectively.
\end{theorem}
Because of the appearance of the function $\Gamma(t)$, this result has a form similar to the dimension-dependent tail inequalities \eqref{eq:amb.tail} and \eqref{eq:int.tail}. However, it also has the following distinct characteristics.
\begin{enumerate}
\item It does not have a matrix-dimension term as a product factor, such as ${\rm dim} ({\bf Y})$ in \eqref{eq:amb.tail} (resp. ${\rm intdim} ({\bf V})$ in \eqref{eq:int.tail}).
\item The function $\mu(\cdot)$ can be chosen as the sum of the $j$ largest singular values for complex matrices or the absolute value of the sum of the $j$ largest eigenvalues for Hermitian matrices. However, the inequalities \eqref{eq:amb.tail} and \eqref{eq:int.tail} are designed for the largest eigenvalue of Hermitian matrices only.
\item There is no restriction on the random matrices ${\bf X}_k$ except that $\mathbb{E}\,{\rm e}^{{\bf D}_\mu[\theta;{\bf X}_k]} \preceq {\rm e}^{{\bf D}_\mu[\theta;{\bf B}_k]}$. In contrast, the inequalities \eqref{eq:amb.tail} and \eqref{eq:int.tail} require that the largest eigenvalues of ${\bf X}_k$ are bounded.
\item Since the summand number $K$ and the term $\phi$, which is of order $O[(1+\mu({\bf U}))^K]$, appear in the denominator of the exponent on the right-hand side of \eqref{eq:tail1}, a large $K$ leads to a slow rate of convergence to {\it zero} as $t$ goes to {\it infinity}. In contrast, the convergence rates of the dimension-dependent inequalities \eqref{eq:amb.tail} and \eqref{eq:int.tail} are insensitive to the value of $K$.
\end{enumerate}
To sum up, the tail inequality \eqref{eq:tail1} does not have the matrix dimension as a product factor, and thus is suitable for studying the tail behavior of many spectral problems for high-dimensional (or infinite-dimensional) random matrices. However, since its convergence rate is sensitive to the value of $K$, it converges to {\it zero} slowly as $t$ goes to {\it infinity} when $K$ is large.
To address this issue, we employ the partial super-additivity \eqref{eq:operation3} to obtain a variant of the term $\phi$ that has a lower order than $O[(1+\mu({\bf U}))^K]$.
Let $\Omega = \{\Omega_1,\cdots,\Omega_I\}$ be a partition of the index set $\{1,\cdots, K\}$ with $\bigcup_{i=1}^I \Omega_i=\{1,\cdots, K\}$ and $\tau: = \max\limits_{1\leq i\leq I}\{ |\Omega_i|\}$. Denote $\phi_\Omega:=\sum\limits_{i=1}^I\big( \big[\mu\big({\bf U}_i\big)+1\big]^{|\Omega_i|}-1\big)$ with ${\bf U}_i = \mathop{\arg\max}\limits_{k\in\Omega_i} \{\mu({\bf B}_k) \}$. Then, we get the following master tail inequality:
\begin{proposition}\label{prop:master2}
For any $t>0$, it holds that
\begin{equation}\label{eq:master.bound2}
\mathbb{P}\left\{\mu\left( \sum_k{\bf X}_k\right)\geq t \right\}
\leq \inf_{\theta>0} \exp\left( -\theta t+ g(\theta,\tau) \cdot \phi_\Omega \right).
\end{equation}
\end{proposition}
Compared with the tail bound \eqref{eq:master1}, the bound (\ref{eq:master.bound2}) incorporates the term $\phi_\Omega$ instead of the ordinary term $\phi$, which can blow up for large $K$ because of the power $K$. In contrast, if the partition $\Omega = \{\Omega_1,\cdots,\Omega_I\}$ is well designed, the variant $\phi_\Omega$ has the relatively lower power {$\tau$}, and its value is thus controlled. In the same way as for Theorem \ref{thm:tail1}, substituting $g(\theta,\tau):={\rm e}^{\tau\theta}-\tau\theta +\alpha_1(\tau) \geq \max\{\theta, \theta^{\tau}\}$ into the above master tail inequality leads to the following tail result.
\begin{theorem}\label{thm:tail2}
For any $t>0$, it holds that
\begin{align}\label{eq:tail2}
\mathbb{P}\left\{\mu\left(\sum_{k=1}^K{\bf X}_k \right)\geq t \right\}\leq& {\rm e}^{\phi_\Omega(1+\alpha_1(\tau))}\cdot\exp\left(- \phi_\Omega\cdot \Gamma\left(\frac{t}{\tau\cdot \phi_\Omega}\right)\right)\nonumber\\
\leq& {\rm e}^{\phi_\Omega(1+\alpha_1(\tau))}\cdot\exp\left(- \frac{t^2/2}{\tau^2 \phi_\Omega + \tau\cdot t/3}\right)\nonumber\\
%
\leq&
\left\{
\begin{array}{ll}
{\rm e}^{\phi_\Omega(1+\alpha_1(\tau)) }\cdot {\rm e}^{\frac{-t^2}{ 4\tau^2 \phi_\Omega }} , & \mbox{if $t< 3\tau\phi_\Omega $;} \\
{\rm e}^{\phi_\Omega(1+\alpha_1(\tau)) }\cdot {\rm e}^{\frac{-3t}{ 4\tau }}, & \mbox{if $t\geq 3\tau\phi_\Omega $.}
\end{array}
\right.
\end{align}
\end{theorem}
{The number $\tau$ (resp. the term $\phi_\Omega$) appearing in the denominator of the exponent on the right-hand side of \eqref{eq:tail2} is smaller than the number $K$ (resp. the term $\phi$) appearing in \eqref{eq:tail1}.} Therefore, compared with the aforementioned inequality \eqref{eq:tail1}, this inequality is less sensitive to the value of $K$ and converges to {\it zero} much faster as $t$ goes to {\it infinity}.
\begin{remark}\label{rem:suggestion}
Selecting the partition $\Omega = \{\Omega_1,\cdots,\Omega_I\}$ of the index set $\{1,2,\cdots,K\}$ plays an essential role in the practical application of this tail result. To control the order of the term $\phi_\Omega$, the partition $\Omega$ could be chosen in the following way:
\begin{itemize}
\item if $K$ is even, let each element of $\Omega$ contain two indices, {\it i.e.,} $I = K/2$ and $\tau=2$;
\item {if $K$ is odd, let one element of $\Omega$ contain a single index and each of the others two indices, {\it i.e.,} $I = (K+1)/2$ and $\tau=2$. }
\end{itemize}
{In the following discussion, such a partition is denoted as $\widetilde{\Omega} = \{\widetilde{\Omega}_1,\cdots,\widetilde{\Omega}_{\widetilde{I}} \}$ with $\widetilde{I} = \lceil \frac{K}{2} \rceil$}.
\end{remark}
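To see the effect of the pairing in Remark \ref{rem:suggestion}, one can compare $\phi$ and $\phi_\Omega$ directly; the values $K=10$ and $\mu({\bf B}_k)=0.5$ below are arbitrary illustrative choices:

```python
def phi_full(mus):
    # phi = (mu(U) + 1)^K - 1 with mu(U) = max_k mu(B_k)
    return (max(mus) + 1.0) ** len(mus) - 1.0

def phi_partition(mus, blocks):
    # phi_Omega = sum_i ((max over block + 1)^{|block|} - 1)
    return sum((max(mus[k] for k in blk) + 1.0) ** len(blk) - 1.0
               for blk in blocks)

K = 10
mus = [0.5] * K
pairs = [(2 * i, 2 * i + 1) for i in range(K // 2)]  # pairing with tau = 2
print(phi_full(mus))              # 1.5^10 - 1  ~ 56.7
print(phi_partition(mus, pairs))  # 5 * (1.5^2 - 1) = 6.25
```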
\begin{remark}\label{rem:number}
One key difference between the resulting DF inequality \eqref{eq:tail2} and the ambient-dimension (AD) inequality \eqref{eq:amb.tail} lies in their product factors: the former carries ${\rm e}^{(1+\alpha_1(\tau))\phi_\Omega}$, while the latter carries ${\rm dim}({\bf Y})$. Under the notations given in \eqref{eq:notationU}, we arrive at the following sufficient condition to guarantee that ${\rm e}^{(1+\alpha_1(\tau))\phi_\Omega}\leq {\rm dim}({\bf Y})$:
\begin{align}\label{eq:number}
I \leq \frac{\log {\rm dim}({\bf Y})}{\big[1+\alpha_1(\tau)\big]\cdot \big[ (\mu({\bf U})+1)^\tau-1 \big] } .
\end{align}
This condition suggests that the partition number $I$ should be of order $O(\log {\rm dim}({\bf Y}))$; meanwhile, it reflects that the DF inequality \eqref{eq:tail2} is not suitable for scenarios with a large number of summand matrices. However, there is a suboptimal way to alleviate this limitation, namely, decreasing the magnitude of the random matrices ${\bf X}_k$ to obtain a small $\mu({\bf U})$. We will demonstrate this strategy with numerical experiments in Section \ref{sec:experiment}.
\end{remark}
{In addition, consider the function $g_4(\theta):= \theta^2 +\frac{1}{4}$. It is immediate that $g_4(\theta) \geq \max\{\theta,\theta^2 \}$ for any $\theta>0$, and the curve of $g_4(\theta)$ is tangent to that of $\theta$ at the point $(\frac{1}{2},\frac{1}{2})$. By substituting $g_4(\theta)$ into Proposition \ref{prop:master2} with the partition $\widetilde{\Omega}$ of Remark \ref{rem:suggestion} (so that $\tau=2$), we arrive at an Azuma-Hoeffding-type tail inequality:
\begin{theorem}\label{thm:tail5}
For any $t>0$, there holds that
\begin{align}\label{eq:tail5}
\mathbb{P}\left\{\mu\left(\sum_{k=1}^K{\bf X}_k \right)\geq t \right\} \leq {\rm e}^{\frac{\phi_{\widetilde{\Omega}}}{4}}\cdot\exp\left\{ -\frac{t^2}{4\phi_{\widetilde{\Omega}}}\right\},
\end{align}
where $\phi_{\widetilde{\Omega}}:=\sum\limits_{i=1}^{\widetilde{I}}\big( \big[\mu\big(\widetilde{{\bf U}}_i\big)+1\big]^{|\widetilde{\Omega}_i|}-1\big)$ with $\widetilde{{\bf U}}_i = \mathop{\arg\max}\limits_{k\in\widetilde{\Omega}_i} \{\mu({\bf B}_k) \}$.
\end{theorem}
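For completeness, the bound \eqref{eq:tail5} follows by minimizing the exponent in Proposition \ref{prop:master2} with $g_4$: setting the derivative of $-\theta t+(\theta^2+\frac14)\phi_{\widetilde{\Omega}}$ to zero gives $\theta^\ast=t/(2\phi_{\widetilde{\Omega}})$, whence

```latex
\inf_{\theta>0}\exp\left( -\theta t+ \Big(\theta^2+\tfrac{1}{4}\Big) \phi_{\widetilde{\Omega}} \right)
= \exp\left( \frac{\phi_{\widetilde{\Omega}}}{4} - \frac{t^2}{4\phi_{\widetilde{\Omega}}} \right).
```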
Compared with Tropp's Azuma-Hoeffding type result \cite[Theorem 7.1]{tropp2012user}, our result has the following advantages:
\begin{enumerate}
\item it has no matrix dimension as a product factor;
\item there is no restriction on the probability behavior of the random matrices ${\bf X}_k$;
\item the matrix function $\mu(\cdot)$ can be set as many kinds of specific forms.
\end{enumerate}
Similar to \eqref{eq:number}, we can also obtain the following sufficient condition to guarantee that ${\rm e}^{\frac{\phi_{\widetilde{\Omega}}}{4}} \leq {\rm dim}({\bf Y})$:
\begin{align}\label{eq:number2}
\widetilde{I} \leq \frac{4\log {\rm dim}({\bf Y})}{ (\mu(\widetilde{{\bf U}})+1)^2-1 }
\end{align}
with $\widetilde{{\bf U}} := \mathop{\arg\max}\limits_{1\leq k\leq K} \{\mu({\bf B}_k) \}$.
}
\subsection{An Empirical Method to Generate Fixed Matrices ${\bf B}_k$} \label{sec:selectb}
The obtained tail inequalities \eqref{eq:tail1} and \eqref{eq:tail2} rely on the existence of fixed matrices ${\bf B}_k$ satisfying Condition \eqref{eq:cond.b}. In the following, we propose a {\it{constructive}} method to generate the desired matrices ${\bf B}_k$ with high probability for the cases in which
(i) $\mu(\cdot)$ is the sum of the $j$ largest singular values of complex matrices, or (ii) it is the absolute value of the sum of the $j$ largest eigenvalues of Hermitian matrices.
First, we present a sufficient condition for \eqref{eq:cond.b}.
\begin{proposition}\label{prop:validity}
Let ${\bf X}\in \mathbb{M}$ be a random matrix. If there exists a fixed matrix ${\bf B}$ such that $\mathbb{E}\mu({\bf X})\leq \mu({\bf B})$, then it holds that
\begin{equation*
\mathbb{E}\,{\rm e}^{{\bf D}_\mu[\theta;{\bf X}]} \preceq {\rm e}^{{\bf D}_\mu[\theta;{\bf B}]}.
\end{equation*}
\end{proposition}
Hence, in order to guarantee the validity of Condition \eqref{eq:cond.b}, we only need to make the value of $\mu({\bf B}_k)$ larger than or equal to the expectation $\mathbb{E}\mu({\bf X}_k)$, $1\leq k\leq K$. The following theorem provides an empirical method to evaluate $\mu({\bf B}_k)$.
\begin{theorem}\label{thm:select}
Let ${\bf X}\in \mathbb{M}$ be a random matrix and ${\bf X}^{(1)},\cdots,{\bf X}^{(N)}\in\mathbb{M}$ be $N$ i.i.d.~observations of ${\bf X}$. For any $\gamma>0$, let the fixed matrix ${\bf B}_\gamma\in\mathbb{M}$ satisfy the relation
\begin{equation}\label{eq:bgamma}
\mu({\bf B}_\gamma) \geq \frac{1}{N}\left(\sum_{n=1}^N \mu({\bf X}^{(n)})\right)+\gamma\cdot \exp\left(\frac{1}{N}\sum_{n=1}^N \log \big(\mu( {\bf X}^{(n)})+1\big) \right).
\end{equation}
Then, with probability at least $1- \exp\Big( \frac{-N (\log(1+\theta \gamma ) )^2}{2 \mathbb{E} (\log (\mu(\theta {\bf X})+1))^2 } \Big)$, it holds that
\begin{equation}\label{eq:cond.b2}
\mathbb{E}\log \big(\mu(\theta {\bf X})+1\big)\leq \log \big(\mu(\theta {\bf B}_\gamma)+1\big),\quad \theta>0.
\end{equation}
\end{theorem}
This theorem shows that
if the fixed matrix ${\bf B}_\gamma$ satisfies the relation (\ref{eq:bgamma}),
then the probability that \eqref{eq:cond.b} fails to hold decays exponentially to {\it zero} as the observation number $N$ goes to {\it infinity}.
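The estimation step in \eqref{eq:bgamma} can be sketched as follows; this is our own illustration with $\mu$ taken as the spectral norm ($j=1$), Gaussian observations, and arbitrary values of $N$ and $\gamma$:

```python
import numpy as np

rng = np.random.default_rng(3)

def mu(A):
    # illustration: mu = largest singular value (spectral norm), j = 1
    return np.linalg.svd(A, compute_uv=False)[0]

# N i.i.d. observations of a random matrix X
N = 500
obs = [rng.standard_normal((4, 4)) / 4.0 for _ in range(N)]
mus = np.array([mu(X) for X in obs])

gamma = 0.1
# right-hand side of (27): sample mean of mu(X) plus gamma times
# the geometric-mean correction term exp(mean of log(mu(X) + 1))
mu_B = mus.mean() + gamma * np.exp(np.mean(np.log(mus + 1.0)))
print(mu_B)
```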
Finally, we explicitly demonstrate how to generate the fixed matrices ${\bf B}_k$ based on the estimated value of $\mu({\bf B}_k)$ for the following two cases of $\mu(\cdot)$.
\begin{enumerate}
\item Let ${\bf B}\in\mathbb{H}^{n\times n}$ and $\mu(\cdot)$ be the absolute value of the sum of the $j$ largest eigenvalues ($j\geq 1$). Denote $\lambda_1({\bf B})\geq \lambda_2({\bf B})\geq\cdots \geq\lambda_j({\bf B})$ as the $j$ largest eigenvalues of ${\bf B}$. For arbitrary $w_1\geq w_2\geq\cdots\geq w_j >0$ with $\sum_{i=1}^j w_i =1$, we can set $\lambda_i({\bf B}) = w_i \mu({\bf B})$ ($i=1,2,\cdots,j$). Then, the matrix ${\bf B}$ can be generated in the way of matrix eigenvalue decomposition:
\begin{equation*}
{\bf B}:= {\bf V}\cdot \bm{\Lambda}[\lambda_1({\bf B}),\lambda_2({\bf B}),\cdots,\lambda_j({\bf B}),\underbrace{0,\cdots,0}_{n-j} ] \cdot {\bf V}^*,
\end{equation*}
where ${\bf V}$ is an arbitrary $n\times n$ unitary matrix.
%
\item Let ${\bf B}\in\mathbb{C}^{m\times n}$ and $\mu(\cdot)$ be the sum of the $j$ largest singular values ($j\geq 1$). Denote $\sigma_1({\bf B})\geq \sigma_2({\bf B})\geq\cdots \geq\sigma_j({\bf B})$ as the $j$ largest singular values of ${\bf B}$. Similarly, given $w_1\geq w_2\geq\cdots\geq w_j >0$ with $\sum_{i=1}^j w_i =1$, we can set $\sigma_i({\bf B}) = w_i \mu({\bf B})$ ($i=1,2,\cdots,j$). Then, the matrix ${\bf B}$ can be generated via the singular value decomposition:
\begin{equation*}
{\bf B}:= {\bf U} \left[
\begin{array}{c c c c c c c}
\sigma_1({\bf B}) & 0& \cdots & 0 & 0 & \cdots & 0 \\
0& \sigma_2({\bf B}) & \cdots & 0 & 0 & \cdots & 0 \\
\vdots& \vdots & \ddots & \vdots & \vdots& \ddots & \vdots \\
0& 0 & \cdots & \sigma_j({\bf B}) & 0& \cdots &0 \\
0& 0 & \cdots & 0 & 0 & \cdots & 0 \\
\vdots& \vdots & \ddots & \vdots & \vdots& \ddots & \vdots \\
0& 0 & \cdots & 0 & 0& \cdots &0 \\
\end{array}
\right]_{m\times n}
{\bf V}^*,
\end{equation*}
where ${\bf U}$ (resp. {\bf V}) can be an arbitrary $m\times m$ (resp. $n\times n$) unitary matrix.
\end{enumerate}
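Both constructions can be carried out numerically. The following Python sketch (with illustrative sizes, weights, and target value $\mu({\bf B})$) builds a real matrix with prescribed $j$ largest singular values along the lines of the second case, using orthogonal factors from QR decompositions in place of arbitrary unitary ones.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, j, mu_B = 6, 4, 2, 5.0
w = np.array([0.7, 0.3])              # w_1 >= ... >= w_j > 0 with sum 1
sigmas = w * mu_B                     # prescribed j largest singular values

# Orthogonal factors U, V from QR decompositions (any unitary factors work)
U, _ = np.linalg.qr(rng.standard_normal((m, m)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
S = np.zeros((m, n))
S[:j, :j] = np.diag(sigmas)
B = U @ S @ V.T                       # the j largest singular values sum to mu_B
```

By construction, the $j$ largest singular values of `B` sum back to the prescribed value $\mu({\bf B})$, and the remaining singular values vanish.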
{\subsection{Dimension-free Tail Inequalities for Matrix Random Series}
Matrix random series refers to sums of fixed matrices weighted by i.i.d. random variables, {\it i.e.,} it is of the form $\sum_{k=1}^K \xi_k {\bf A}_k$, where $\xi_1,\cdots,\xi_K$ are i.i.d. random variables and ${\bf A}_1,\cdots,{\bf A}_K\in\mathbb{C}^{m\times n}$ are fixed matrices. The study of matrix random series is motivated by applications of random matrices in
neural networks \cite{zhao17theoretical}, kernel methods \cite{choromanski2016recycling}, deep learning \cite{cheng2015exploration} and optimization \cite{nemirovski2007sums,so2011moment,zhang2018matrix}, where the random matrices of interest can be equivalently expressed as matrix random series weighted by some specific random variables. One main research direction on matrix random series is to explore their tail behavior, and several tail results have been proposed. For example, Tropp \cite{tropp2012user} presented tail inequalities for matrix Gaussian series and matrix Rademacher series, and his results can be directly generalized to matrix sub-Gaussian series.\footnote{For convenience, a matrix random series weighted by Gaussian random variables is briefly called a matrix Gaussian series, and this naming convention will be used throughout the paper if no confusion arises.} Zhang {\it et al.} \cite{zhang2018matrix} provided tail inequalities for matrix infinitely-divisible series. There are two limitations in these works: 1) all of them depend on the matrix dimension, and thus are unsuitable for the high-dimensional or infinite-dimensional scenario; and 2) they are only applicable to some specific distributions and thus lack generality.
The following dimension-free tail inequalities for matrix random series can be directly derived from Theorem \ref{thm:tail1} and Theorem \ref{thm:tail2}:
\begin{corollary}\label{cor:series}
Let ${\bf A}_1,\cdots,{\bf A}_K\in \mathbb{M}$ be fixed matrices and $\xi_1,\cdots,\xi_K$ be independent random variables with $\max\limits_{1\leq k\leq K} \mathbb{E} |\xi_k | \leq c$.
\begin{enumerate}
\item For any $t>0$,
\begin{align}\label{eq:tail3}
\mathbb{P}\left\{\mu\left(\sum_{k=1}^K\xi_k{\bf A}_k \right)\geq t \right\}\leq& {\rm e}^{(1+\alpha_1(K))\cdot \psi}\cdot\exp\left(- \psi\cdot \Gamma\left(\frac{t}{K\psi}\right)\right) ,
\end{align}
where $\psi:=\left[c\mu\left({\bf V}\right)+1\right]^K-1$ with ${\bf V} = \mathop{\arg\max}\limits_{1\leq k\leq K} \{\mu({\bf A}_k)\}$.
\item For any $t>0$,
\begin{align}\label{eq:tail4}
\mathbb{P}\left\{\mu\left(\sum_{k=1}^K\xi_k{\bf A}_k \right)\geq t \right\}\leq& {\rm e}^{\psi_\Omega(1+\alpha_1(\tau))}\cdot\exp\left(- \psi_\Omega\cdot \Gamma\left(\frac{t}{\tau\cdot \psi_\Omega}\right)\right),
\end{align}
where $\psi_\Omega:=\sum\limits_{i=1}^I\big( \big[c\mu\big({\bf V}_i\big)+1\big]^{|\Omega_i|}-1\big)$ with ${\bf V}_i = \mathop{\arg\max}\limits_{k\in\Omega_i} \{\mu({\bf A}_k) \}$.
\end{enumerate}
\end{corollary}
Compared with the existing works \cite{tropp2012user,zhang2018matrix}, the above results have the following advantages: 1) they are independent of the matrix dimension, and thus are suitable for the high-dimensional or infinite-dimensional scenario; and 2) there is no requirement on the distributions except a bounded first-order moment, and thus they enjoy better generality.}
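To illustrate the object being bounded in Corollary \ref{cor:series}, the tail probability on the left-hand side of \eqref{eq:tail3} can be estimated by Monte Carlo. The sketch below (matrix sizes, Rademacher weights, and the threshold $t$ are illustrative choices with $\mu = \sigma_{\max}$) estimates $\mathbb{P}\{\sigma_{\max}(\sum_k \xi_k {\bf A}_k) \geq t\}$ empirically.

```python
import numpy as np

rng = np.random.default_rng(0)
K, m, n, trials = 10, 30, 20, 2000
A = rng.standard_normal((K, m, n)) / np.sqrt(m)   # fixed coefficient matrices

# sigma_max(sum_k xi_k A_k) for Rademacher weights xi_k (so E|xi_k| = 1 =: c)
norms = np.empty(trials)
for i in range(trials):
    xi = rng.choice([-1.0, 1.0], size=K)
    norms[i] = np.linalg.norm(np.tensordot(xi, A, axes=1), 2)

t = np.median(norms)                  # an illustrative threshold
tail = np.mean(norms >= t)            # empirical tail probability at t
```

Replacing the Rademacher weights with any distribution of bounded first moment leaves the corollary applicable, which is exactly the generality claimed above.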
\begin{remark}\label{rem:series}
The following is an application of the above results in optimization. The pioneering work \cite{nemirovski2007sums} and its follow-up \cite{so2011moment} have pointed out that whether there exist efficiently computable solutions to some optimization problems ({\it e.g.}, chance constrained optimization problems and quadratic optimization problems with orthogonality constraints) can be reduced to a question about the tail behavior of matrix random series ({\it i.e.}, the upper bound of ${\rm Pr }\{ \|\sum_k \xi_k {\bf A}_k \|>t\}$), and the ``optimal'' answer to this question will be provided by the resolution of Nemirovski's conjecture \cite{nemirovski2007sums}. The original version of Nemirovski's conjecture requires that the random variables $\xi_k$ have {\it zero} mean and obey either a distribution supported on $[-1,1]$ or a Gaussian distribution with {\it unit} variance.
Zhang {\it et al.} \cite{zhang2018matrix} extended Nemirovski's conjecture to the infinitely-divisible setting, where $\xi_k$ can be infinitely-divisible random variables. The resulting tail inequalities \eqref{eq:tail3} and \eqref{eq:tail4} actually suggest that Nemirovski's conjecture holds in a more general setting, where $\xi_k$ only need to have bounded first-order moments. The detailed discussion is similar to that in \cite{zhang2018matrix}, so we omit it here.
\end{remark}
\subsection{Numerical Experiments}
At the end of this section, we conduct experiments to empirically examine the validity of Theorem \ref{thm:select} and then to make a comparison between the AD tail inequality \eqref{eq:amb.tail} and the resulting dimension-free (DF) inequality \eqref{eq:tail2}.\footnote{For comparability, we do not consider the ID inequality \eqref{eq:int.tail} in this experiment for two reasons: 1) its range of $t$ starts from $\sqrt{v} + L/3$ rather than the {\it origin}, and thus it cannot provide comparative information when $t\in(0,\sqrt{v} + L/3)$; and 2) its product factor $4\cdot {\rm intdim}({\bf V})$ is likely to be much bigger than the factor ${\rm dim}({\bf Y})$ of the AD inequality \eqref{eq:amb.tail} in the experiments, and thus drawing curves of the ID inequality would decrease the readability of the figures. }
\subsubsection{Examination of Theorem \ref{thm:select}} Consider the largest singular values of three types of random matrices whose entries obey the Gaussian distribution with {\it zero} mean and {\it unit} variance, the uniform distribution on $[-1,1]$ and the Rademacher distribution that takes $1$ or $-1$ with probability $1/2$ each, respectively. The matrix size is set to $50\times 10$ and the constant $\theta$ is set to $1$. {The expectation term $\mathbb{E}\log \big(\sigma_{\max}({\bf X})+1\big)$ is approximated by using the empirical term
\begin{equation*}
\frac{1}{3000}\sum_{n=1}^{3000} \log \big(\sigma_{\max}( {\bf X}^{(n)})+1\big),
\end{equation*}
where ${\bf X}^{(n)}$, $1\leq n\leq 3000$, are independent observations of the random matrix ${\bf X}$. In this manner, the values of $\mathbb{E}\log \big(\sigma_{\max}({\bf X})+1\big)$ are approximately $2.3630$, $1.8681$ and $2.3408$ for the Gaussian random matrix, the uniform random matrix and the Rademacher random matrix, respectively.\footnote{{There have been many sophisticated results characterizing the distributions of the largest singular values (or eigenvalues) of specific random matrices, for example, the quadrant law for the singular values of Gaussian random matrices \cite{shen2001singular}, the semi-circle law for the eigenvalues of Gaussian orthogonal (or unitary) ensembles \cite{wigner1993characteristic} and the Marchenko-Pastur law for the singular values of large rectangular random matrices \cite{marvcenko1967distribution}. However, these results are unsuitable (or at least cannot be directly applied) for efficient computation of the expectation term $\mathbb{E}\log \big(\mu({\bf X})+1\big)$ for arbitrary applicable choices of the matrix function $\mu$. Therefore, we only adopt the empirical approximation of this term.}}}
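The empirical approximation above can be reproduced with a short Monte Carlo routine. The sketch below follows the described setup ($50\times 10$ matrices, $3000$ observations); the exact estimates depend on the random draws and should only be close to the values reported in the text.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, N = 50, 10, 3000

def avg_log_term(sampler):
    """Empirical mean of log(sigma_max(X) + 1) over N observations."""
    vals = [np.log(np.linalg.norm(sampler((m, n)), 2) + 1.0) for _ in range(N)]
    return float(np.mean(vals))

est_gauss = avg_log_term(lambda s: rng.standard_normal(s))          # ~ 2.36
est_unif = avg_log_term(lambda s: rng.uniform(-1.0, 1.0, s))        # ~ 1.87
est_rad = avg_log_term(lambda s: rng.choice([-1.0, 1.0], size=s))   # ~ 2.34
```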
Given another set of independent observations ${\bf X}^{(1)},\cdots,{\bf X}^{(N)}$ of ${\bf X}$ with $N=100$, we compute ${\bf B}_\gamma$ according to the expression \eqref{eq:bgamma} and then examine the validity of the inequality \eqref{eq:cond.b2}. In Fig. \ref{fig:condition17}, we show the success ratios (out of 100 repeated tests) of the inequality \eqref{eq:cond.b2} for different values of $\gamma\in(0,0.02]$. For all three kinds of random matrices, the success ratios increase up to {\it one} as $\gamma$ becomes large, which supports the validity of Theorem \ref{thm:select}.
\begin{figure}[htbp]
\centering
\includegraphics[height=8cm]{condition17.eps}
\caption{Success Ratio of Inequality \eqref{eq:cond.b2} for Different Types of Random Matrices.}
\label{fig:condition17}
\end{figure}
\subsubsection{Examination of DF Tail Inequality}\label{sec:experiment} Let $\mu=\lambda_{\max}$ and {$\mathbb{M} = \mathbb{H}^{200\times 200}$.} Consider the random Hermitian matrices ${\bf X}_k = c \left( \frac{{\bf S}_k+{\bf S}_k^T}{2}\right)$, where $c$ is a positive constant to control the magnitude of ${\bf X}_k$ and the entries of ${\bf S}_k\in\mathbb{R}^{200\times 200}$ are all i.i.d. and obey the standard Gaussian distribution $\mathcal{N}(0,1)$. Thus $\mathbb{E}{\bf X}_k = {\bf 0}$, $1\leq k\leq K$. For each $k\in \{1,2 ,\cdots,K\}$, we take $1100$ observations $\widehat{{\bf S}}^{(i)}_k$ of ${\bf S}_k$ to generate the realizations {$\widehat{{\bf X}}^{(i)}_k = c\left(\frac{\widehat{{\bf S}}^{(i)}_k+(\widehat{{\bf S}}^{(i)}_k)^T}{2}\right)$}, $1\leq i\leq 1100$. To ensure that the probability $\mathbb{P}\big\{\big|\lambda_{\max}\big(\sum_k{\bf X}_k \big)\big|\geq t \big\}$ is strictly decreasing w.r.t. $t$, we alternatively consider the probability expression $\mathbb{P}\big\{\big|\lambda_{\max}\big(\sum_k{\bf X}_k \big) - \mathbb{E} \lambda_{\max}\big(\sum_k{\bf X}_k \big)\big|\geq t \big\}$, which is empirically computed by using the following function:
\begin{equation*}
h_{\rm TV}(t) = \frac{\big| \{ 1\leq i\leq 100 : \big|\lambda_{\max}\big(\sum_{k=1}^K\widehat{{\bf X}}^{(i)}_k \big) - \frac{1}{1000}\sum\limits_{j=101}^{1100}\lambda_{\max}\big(\sum_{k=1}^K\widehat{{\bf X}}^{(j)}_k \big) \big|\geq t \}\big|}{100},\quad t> 0.
\end{equation*}
In the AD inequality \eqref{eq:amb.tail}, the terms $v = \lambda_{\max}(\sum_k\mathbb{E} {\bf X}_k^2)$ and $L \geq \lambda_{\max}({\bf X}_k)$ are respectively approximated by using the empirical quantities
\begin{equation*}
\widehat{v} = \lambda_{\max}\left(\sum_{k=1}^K \frac{1}{100} \sum_{i=1}^{100} \big(\widehat{{\bf X}}^{(i)}_k\big)^2\right),
\end{equation*}
and
\begin{equation*}
\widehat{L} = \mathop{\max_{1\leq k\leq K}}_{1\leq i\leq 100} \big\{ \lambda_{\max}(\widehat{\bf X}^{(i)}_k) \big\}.
\end{equation*}
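These empirical quantities can be computed directly from the realizations. The sketch below follows the described construction but uses smaller illustrative dimensions than the $200\times 200$ setup and $100$ observations per summand.

```python
import numpy as np

rng = np.random.default_rng(0)
K, d, c, n_obs = 5, 50, 1.0, 100

# Realizations X_k^(i) = c (S + S^T)/2 with i.i.d. standard Gaussian entries in S
S = rng.standard_normal((K, n_obs, d, d))
X = c * 0.5 * (S + np.swapaxes(S, -1, -2))

# v_hat: largest eigenvalue of the summed empirical second moments
v_hat = np.linalg.eigvalsh(
    sum(np.mean(X[k] @ X[k], axis=0) for k in range(K))).max()

# L_hat: largest observed eigenvalue over all summands and observations
L_hat = max(np.linalg.eigvalsh(X[k, i]).max()
            for k in range(K) for i in range(n_obs))
```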
Then, the right-hand sides of \eqref{eq:amb.tail} and \eqref{eq:tail2} can be respectively expressed as
\begin{align*}
h_{\rm AD} (t) := {\rm dim}({\bf X}_k)\cdot \exp \left( \frac{-t^2/2}{\widehat{v}+\widehat{L}t/3} \right),&\quad t>0;\\
h_{{\rm DF}}(t) := {\rm e}^{(1+\alpha_1(\tau))\cdot \phi_\Omega}\cdot\exp\left(- \frac{t^2/2}{\tau^2 \phi_\Omega + \tau\cdot t/3}\right),&\quad t>0.
\end{align*}
The partition $\Omega$ of the index set $\{1,2,\cdots,K\}$ is designed according to the suggestion given in Remark \ref{rem:suggestion}.
As shown in Fig. \ref{fig:compare}, the DF inequality \eqref{eq:tail2} provides a precise description of the tail behavior of sums of random matrices when the summand number $K$ is small. However, if the value of $K$ increases, the value of $\phi_\Omega$ will become large and thus the upper bound of $h_{\rm TV}(t)$ provided by $h_{\rm DF}(t)$ turns out to be loose accordingly ({\it cf.} Fig. \ref{fig:compare}(c)-(d)). Following the statements in Remark \ref{rem:number}, we rescale the magnitude of the matrices ${\bf X}_k$ by setting $c<1$ to overcome this shortcoming ({\it cf.} Fig. \ref{fig:compare}(e)-(f)).
\begin{figure}[htbp]
\centering
\subfigure[\hbox{$K=5$, $c=1$}]{
\includegraphics[height=4.4cm]{5_1.eps} }
\subfigure[\hbox{$K=10$, $c=1$}]{
\includegraphics[height=4.4cm]{10_1.eps}}
\subfigure[\hbox{$K=15$, $c=1$}]{
\includegraphics[height=4.4cm]{15_1.eps}}
\subfigure[\hbox{$K=20$, $c=1$}]{
\includegraphics[height=4.4cm]{20_1.eps}}
\subfigure[\hbox{$K=15$, $c=0.5$}]{
\includegraphics[height=4.4cm]{15_5.eps} }
\subfigure[\hbox{$K=20$, $c=0.5$}]{
\includegraphics[height=4.4cm]{20_5.eps}}
\caption{Numerical comparison between the AD inequality \eqref{eq:amb.tail} and the DF inequality \eqref{eq:tail2}.}\label{fig:compare}
\end{figure}
\section{Applications in Compressed Sensing}\label{sec:rip}
In this section, we show that the resulted tail inequalities can provide a simple proof of the restricted isometry property (RIP) of a measurement matrix that is expressed as the sum of random matrices without any assumption imposed on the distributions of matrix entries. We first give a brief introduction of RIP in compressed sensing, and then show the proof of RIP for sums of random matrices.
\subsection{Introduction of the Restricted Isometry Property}
One core issue of compressed sensing is to recover a vector (or signal) ${\bf x}^\star \in\mathbb{C}^n$ by solving the underdetermined linear equation:
\begin{equation}\label{eq:cs.linear}
{\bf y} = {\bf P} {\bf x}^\star,
\end{equation}
where ${\bf y}\in\mathbb{C}^m$ ($m \ll n$) and ${\bf P} \in \mathbb{C}^{m\times n}$ are called the measurement vector and the measurement (or sensing) matrix, respectively. Since $m\ll n$, this linear equation could have infinitely many solutions. Thus, by introducing the additional condition that ${\bf x}^\star$ is $s$-sparse, {\it i.e.,} $\| {\bf x}^\star\|_0 := |{\rm supp} ({\bf x}^\star)| \leq s$,
this linear equation can be reformulated as an $\ell_0$-minimization problem:
\begin{equation}\label{eq:cs.l0min}
\min_{{\bf x}\in\mathbb{C}^n} \| {\bf x}\|_0\quad \mbox{subject to} \quad {\bf P} {\bf x} = {\bf y}.
\end{equation}
One efficient way to solve this NP-hard problem is to consider its $\ell_1$ convex relaxation {({\it cf.} \cite{candes2005decoding,candes2006stable,donoho2006compressed,ramirez2013l1})}:
\begin{equation}\label{eq:cs.l1min}
\min_{{\bf x}\in\mathbb{C}^n} \| {\bf x}\|_1\quad \mbox{subject to} \quad {\bf P} {\bf x} = {\bf y},
\end{equation}
which can be solved with efficient convex optimization methods. Candes and Tao \cite{candes2005decoding,candes2006near} have proved that if the measurement matrix ${\bf P} \in \mathbb{C}^{m\times n}$ satisfies the RIP, the solution $\hat{\bf x}$ of the $\ell_1$-minimization problem \eqref{eq:cs.l1min} approximates the true ${\bf x}^\star$ well. Hence, the RIP plays an essential role in compressed sensing.
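After the standard split ${\bf x} = {\bf u} - {\bf v}$ with ${\bf u},{\bf v}\geq 0$, the real-valued version of \eqref{eq:cs.l1min} becomes a linear program. The following sketch solves a toy basis-pursuit instance; the specific sizes and the SciPy LP solver are illustrative assumptions, and any LP solver works.

```python
import numpy as np
from scipy.optimize import linprog  # assumed available

rng = np.random.default_rng(0)
m, n, s = 20, 50, 3
P = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
y = P @ x_true

# min 1^T (u + v)  s.t.  P (u - v) = y,  u, v >= 0, so ||x||_1 = 1^T (u + v)
res = linprog(c=np.ones(2 * n), A_eq=np.hstack([P, -P]), b_eq=y,
              bounds=[(0, None)] * (2 * n), method="highs")
x_hat = res.x[:n] - res.x[n:]
```

By LP optimality, the recovered vector is feasible and has $\ell_1$ norm no larger than that of the true sparse vector; when ${\bf P}$ satisfies the RIP, it coincides with ${\bf x}^\star$.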
\begin{definition}[Restricted Isometry Property]\label{def:rip1}
Given a matrix ${\bf P}\in\mathbb{C}^{m\times n}$, for any $0\leq s\leq n$, the restricted isometry constant of ${\bf P}$ of order $s$ is defined as the smallest number $\delta_s:=\delta_s({\bf P})$ such that
\begin{equation}\label{eq:rip1}
(1-\delta_s )\| {\bf x}\|_2 \leq \|{\bf P} {\bf x} \|_2 \leq (1+\delta_s) \| {\bf x}\|_2,
\end{equation}
for all $s$-sparse ${\bf x}\in\mathbb{C}^n$. Let $\delta\in(0,1)$, the matrix ${\bf P}$ is said to satisfy the restricted isometry property (RIP) of order $s$ with parameter $\delta$, shortly, ${\rm RIP}_s(\delta)$, if $0\leq \delta_s({\bf P}) <\delta$.
\end{definition}
In the literature, many types of measurement matrices have been proven to satisfy the RIP condition with high probability, {\it e.g.}, random Gaussian or Bernoulli matrices ({\it cf.} \cite{candes2006near,mendelson2008uniform}); structured matrices with Gaussian or Bernoulli entries ({\it cf.} \cite{haupt2010toeplitz,rauhut2009circulant}); {and matrix infinitely-divisible series \cite{zhang2018matrix}.} Different from these existing works, which specify the probability distributions of the entries of the measurement matrices, we will show that if the measurement matrix ${\bf P}\in\mathbb{C}^{m\times n}$ can be expressed as the sum of random matrices ${\bf P}_1,\cdots, {\bf P}_K$ that satisfy a mild condition \eqref{eq:smallest.sigma}, its RIP still holds with high probability. To obtain this proof, we borrow the idea of \cite{baraniuk2008simple}, where, by introducing an alternative definition of the RIP, Baraniuk {\it et al.} proved the RIP of a measurement matrix under the assumption that it satisfies a concentration inequality \cite[Eq. (4.3)]{baraniuk2008simple}. We will directly use the resulting tail inequalities to prove the RIP of ${\bf P}=\sum_{k=1}^K {\bf P}_k$ without imposing any distribution assumption on ${\bf P}$.
\subsection{RIP of Sums of Random Matrices}
Given a matrix ${\bf P}\in\mathbb{C}^{m\times n}$ and any set $\mathcal{I}\subseteq \{1,2,\cdots,n\}$ of column indices, denote by $[{\bf P}]_\mathcal{I}$ the $m\times |\mathcal{I}| $ matrix composed of these columns, where $|\mathcal{I}|$ stands for the cardinality of the set $\mathcal{I}$. Similarly, for a vector ${\bf x}\in\mathbb{C}^n$, denote by ${\bf x}_\mathcal{I} $ the $|\mathcal{I}|$-dimensional vector obtained by retaining only the entries of ${\bf x}$ corresponding to the column indices in $\mathcal{I}$. Alternatively, under these notations, Baraniuk {\it et al.} \cite{baraniuk2008simple} introduced another version of the RIP definition: a matrix ${\bf P}\in\mathbb{C}^{m\times n}$ is said to satisfy the ${\rm RIP}_s(\delta)$ if there exists a $\delta_s\in(0,1)$ such that
\begin{equation}\label{eq:rip2}
(1-\delta_s) \| {\bf x}_\mathcal{I}\|_2 \leq \|[{\bf P}]_\mathcal{I} {\bf x}_\mathcal{I} \|_2 \leq (1+\delta_s) \| {\bf x}_\mathcal{I}\|_2,
\end{equation}
holds for all sets $\mathcal{I}$ with $|\mathcal{I}| \leq s$. As shown in \eqref{eq:rip2}, the ${\rm RIP}_s(\delta)$ requires that all singular values of $[{\bf P}]_\mathcal{I}$ lie in the interval $[1-\delta_s,1+\delta_s]$ for any $\mathcal{I}\subset \{1,\cdots,n\}$ with $|\mathcal{I}| \leq s$.
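For small instances, the restricted isometry constant in this formulation can be computed by brute force over all column subsets. The sketch below (toy dimensions, Gaussian columns scaled to roughly unit norm — illustrative assumptions) evaluates $\delta_s$ as the worst singular-value deviation from $1$.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
m, n, s = 12, 8, 2
P = rng.standard_normal((m, n)) / np.sqrt(m)   # columns have roughly unit norm

# delta_s per (eq:rip2): all singular values of every m x s column
# submatrix [P]_I must lie in [1 - delta_s, 1 + delta_s]
delta_s = 0.0
for I in combinations(range(n), s):
    sv = np.linalg.svd(P[:, list(I)], compute_uv=False)
    delta_s = max(delta_s, sv.max() - 1.0, 1.0 - sv.min())
```

The exhaustive search has $\binom{n}{s}$ subsets and is only feasible for toy sizes; the probabilistic arguments below avoid this enumeration.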
\begin{theorem}\label{thm:rip}
Let ${\bf P}_1,\cdots,{\bf P}_K \in \mathbb{C}^{m\times n}$ be random matrices and ${\bf P} =\sum_{k=1}^K {\bf P}_k$. Given a number $s<n$, assume that
\begin{equation}\label{eq:smallest.sigma}
\sigma_{\min} ([{\bf P}]_\mathcal{I}) \geq \frac{\sigma_{\min} ([{\bf P}_1]_\mathcal{I})+\cdots + \sigma_{\min} ([{\bf P}_K]_\mathcal{I}) }{K^2}
\end{equation}
holds for any $\mathcal{I} \subset \{1,2,\cdots,n\}$ with $|\mathcal{I}| =s$, where $[{\bf P}]_{\mathcal{I}}$ stands for the $m\times |\mathcal{I}| $ matrix composed of the columns taken from the matrix ${\bf P}$ w.r.t. the index set $\mathcal{I}$. Let $\{{\bf A}_{1},\cdots,{\bf A}_{K}\}$ and $\{{\bf B}_{1},\cdots,{\bf B}_{K}\}$ be the two fixed matrix sequences such that for any $\mathcal{I} \subset \{1,2,\cdots,n\}$,
\begin{align}\label{eq:condition.bc}
\mathbb{E}\,{\rm e}^{{\bf D}_{\sigma_{\max}}[\theta;[{\bf P}_k]_\mathcal{I}]} \preceq {\rm e}^{{\bf D}_{\sigma_{\max}}[\theta;[{\bf A}_{k}]_\mathcal{I}]}\;\;\mbox{and}\;\; \mathbb{E}\,{\rm e}^{{\bf D}_{\sigma_{\max}}[\theta;[{\bf P}_k]_\mathcal{I}^{\dag}]} \preceq {\rm e}^{{\bf D}_{\sigma_{\max}}[\theta;[{\bf B}_{k}]_\mathcal{I}]}, \quad 1\leq k\leq K,
\end{align}
where the superscript $\dag$ stands for the Moore-Penrose inverse. Let $\Omega = \{\Omega_1,\cdots,\Omega_I\}$ be a partition of the index set $\{1,\cdots, K\}$ with $\bigcup_{i=1}^I \Omega_i=\{1,\cdots, K\}$ and $\tau: = \max\limits_{1\leq i\leq I}\{ |\Omega_i|\}$. {For any $1\leq i\leq I$, let
\begin{align*}
u_i :=& \mathop{\max_{ k\in\Omega_i }}_{ \mathcal{I} \subset \{1,2,\cdots,n\}} \Big\{
\sigma_{\max}([{\bf A}_{k}]_\mathcal{I} )\Big\};\nonumber\\
v_i :=& \mathop{\max_{ k\in\Omega_i }}_{ \mathcal{I} \subset \{1,2,\cdots,n\}} \Big\{
\sigma_{\max}([{\bf B}_{k}]_\mathcal{I} )\Big\},
\end{align*}
and denote
\begin{align*}
\bar{\phi}_\Omega = \max\left\{ \sum_{i=1}^I \Big[\big(u_i+1\big)^{|\Omega_i|} -1\Big],
\sum_{i=1}^I \Big[\big(v_i+1\big)^{|\Omega_i|} -1\Big] \right\}.
\end{align*}}
For any $\delta\in(0,1)$, if there exist two positive constants $c_1$ and $c_2$ such that
\begin{equation}\label{eq:rip.cond1}
s \leq \frac{ c_1m }{ \log({\rm e} n/s) },
\end{equation}
and
\begin{equation}\label{eq:rip.cond2}
c_2\leq \frac{\bar{\phi}_\Omega}{m}\cdot \Gamma\left(\frac{t}{\tau\cdot \bar{\phi}_\Omega}\right) -c_1,
\end{equation}
then the ${\rm RIP}_s(\delta)$ \eqref{eq:rip1} holds for the random matrix ${\bf P}$ with probability at least $1-2{\rm e}^{\bar{\phi}_\Omega(1+\alpha_1(\tau))}\cdot {\rm e}^{-c_2 m}$.
\end{theorem}
Note that the tail inequality \eqref{eq:tail2} can also lead to similar RIP results, which we omit here. The validity of this theorem is determined by the following factors: 1) the validity of Condition \eqref{eq:smallest.sigma}; 2) the existence of ${\bf A}_k$ and ${\bf B}_k$; and 3) Conditions \eqref{eq:rip.cond1} and \eqref{eq:rip.cond2}, which can be satisfied by selecting a sufficiently small $c_1>0$. Subsequently, we give a detailed discussion of these factors.
\begin{remark}\label{rem:rip.condition}
{To examine the validity of condition \eqref{eq:smallest.sigma}, we alternatively consider whether it holds for any ${\bf A}_1,\cdots,{\bf A}_K \in\mathbb{C}^{m\times n}$ that
\begin{equation*}
\sigma_{\min} ({\bf A}) \geq \frac{\sigma_{\min} ({\bf A}_1)+\cdots + \sigma_{\min} ({\bf A}_K) }{K^2}\quad\mbox{with}\quad {\bf A} = \sum_{k=1}^K {\bf A}_k.
\end{equation*}
This inequality, roughly speaking, requires that the summands ${\bf A}_1,{\bf A}_2,\cdots,{\bf A}_K$ should not make ${\bf A}$ have {\it zero} singular values.} Here, we design an experiment to empirically verify the validity of {this inequality}.
Let the number $K$ of summand matrices ${\bf A}_k$ take values from the set $\{2,5,10,15,20,25,30\}$ and set the matrix sizes $m\times n$ of ${\bf A}$ to $1\times 5$, $5\times 20$, $10\times 80$, $15\times 200$ and $20\times 400$, respectively. Let the entries of ${\bf A}_k$ obey the Gaussian distribution with {\it zero} mean and {\it unit} variance. Each experimental setting is repeated $2000$ times, and the success ratio of Condition \eqref{eq:smallest.sigma} is shown in Fig. \ref{fig:condition51}. We find that the success ratio mostly reaches $1$ when $K$ is larger than $5$, which implies that Condition \eqref{eq:smallest.sigma} can be easily satisfied when the summand matrices $\mathbf{A}_k$ are not too few. In particular, a high matrix dimension is beneficial to the success ratio {as well}.
\end{remark}
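The success-ratio experiment described in the remark above can be sketched as follows; one illustrative configuration ($5\times 20$ matrices with $K=10$) is shown, whereas the full experiment sweeps all listed sizes and values of $K$.

```python
import numpy as np

rng = np.random.default_rng(0)
K, m, n, trials = 10, 5, 20, 200

def sigma_min(M):
    return np.linalg.svd(M, compute_uv=False).min()

success = 0
for _ in range(trials):
    A = rng.standard_normal((K, m, n))      # Gaussian summands A_1, ..., A_K
    lhs = sigma_min(A.sum(axis=0))          # sigma_min of the sum
    rhs = sum(sigma_min(A[k]) for k in range(K)) / K**2
    success += lhs >= rhs
ratio = success / trials
```

For this configuration the ratio is close to $1$, in line with the observation that the condition is easy to satisfy once $K$ exceeds $5$.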
\begin{figure}[htbp]
\centering
\includegraphics[height=8cm]{condition51.eps}
\caption{Success Ratio of Condition \eqref{eq:smallest.sigma} for Different Matrix Sizes.}
\label{fig:condition51}
\end{figure}
\begin{remark}
To guarantee the existence of ${\bf A}_k$ and ${\bf B}_k$, we first need to consider the validity of Condition \eqref{eq:condition.bc}, which aims to control the random behavior of the terms $[{\bf P}_k]_\mathcal{I}$ and $[{\bf P}_k]_\mathcal{I}^{\dag}$. As addressed in Proposition \ref{prop:validity}, to satisfy this condition, the fixed matrices ${\bf A}_{k}$ and ${\bf B}_{k}$ should guarantee that the inequalities $\sigma_{\max}([{\bf A}_{k}]_\mathcal{I}) \geq \mathbb{E}\sigma_{\max}( [{\bf P}_k]_\mathcal{I} )$ and $\sigma_{\max}([{\bf B}_{k}]_\mathcal{I}) \geq \mathbb{E} \sigma_{\max}( [{\bf P}_k]_\mathcal{I}^{\dag} )$ hold for any $\mathcal{I}\subset\{1,\cdots,n\}$ with $|\mathcal{I}|=s$. Next, we show how to construct such fixed matrices ${\bf A}_{k}$ and ${\bf B}_{k}$. {For any $k\in\{1,2,\cdots,K\}$, denote
\begin{align*}
a_k :=&\max_{ \mathcal{I} }\Big\{ \mathbb{E}\sigma_{\max}( [{\bf P}_k]_\mathcal{I} ) \Big\};\\
b_k :=&\max_{ \mathcal{I} } \Big\{ \mathbb{E} \sigma_{\max}( [{\bf P}_k]_\mathcal{I}^{\dag} )\Big\},
\end{align*}
and then let the fixed matrices ${\bf A}_k$ and ${\bf B}_k$ have the following forms:
\begin{equation*}
{\bf A}_k:= \left[
\begin{array}{c c c c c c c c c c c c c c c }
a_k &0&0&0 & a_k &0&0&0 & a_k & 0 & 0 & 0 & a_k & 0 &\cdots \\
0&a_k&0&0 & 0&a_k&0&0 & 0& a_k & 0 & 0 & 0& a_k &\cdots \\
\vdots&\vdots& \ddots & \vdots & \vdots&\vdots& \ddots & \vdots & \vdots& \vdots& \ddots & \vdots & \vdots& \vdots& \vdots\\
0&0&0& a_k & 0&0&0& a_k & 0& 0& 0& a_k & 0& 0& \cdots
\end{array}\right]_{m\times n},
\end{equation*}
and
\begin{equation*}
{\bf B}_k:=\left[
\begin{array}{c c c c c c c c c c c c c c c }
b_k &0&0&0 & b_k &0&0&0 & b_k & 0 & 0 & 0 & b_k & 0 &\cdots \\
0&b_k&0&0 & 0&b_k&0&0 & 0& b_k & 0 & 0 & 0& b_k &\cdots \\
\vdots&\vdots& \ddots & \vdots & \vdots&\vdots& \ddots & \vdots & \vdots& \vdots& \ddots & \vdots & \vdots& \vdots& \vdots\\
0&0&0& b_k & 0&0&0& b_k & 0& 0& 0& b_k & 0& 0& \cdots
\end{array}\right]_{m\times n}.
\end{equation*}
Taking the matrix ${\bf A}_k$ as an example, we first consider some special cases:
\begin{itemize}
\item If the index set $\mathcal{I}$ makes the $s$ column vectors selected from ${\bf A}_k$ differ from each other, the matrix product $[{\bf A}_k]_\mathcal{I}^T \cdot [{\bf A}_k]_\mathcal{I}$ is a diagonal matrix with the identical entries $a^2_k$ and thus $\sigma_{\max}([{\bf A}_k]_\mathcal{I})=a_k$.
\item In the case that $\frac{n}{m} \geq s$, if the index set $\mathcal{I}$ takes $s$ identical column vectors from ${\bf A}_k$ to form $[{\bf A}_k]_\mathcal{I}$, the matrix product $[{\bf A}_k]_\mathcal{I} \cdot [{\bf A}_k]_\mathcal{I}^T$ is a diagonal matrix with only one non-zero entry $s\cdot a^2_k$ and thus $\sigma_{\max}([{\bf A}_k]_\mathcal{I})=\sqrt{s}\cdot a_k$;
\item In the case that $\frac{n}{m} < s$, if the index set $\mathcal{I}$ selects $\big\lceil \frac{n}{m}\big\rceil$ identical and $s - \big\lceil \frac{n}{m}\big\rceil$ different column vectors from ${\bf A}_k$ to form $[{\bf A}_k]_\mathcal{I}$, the matrix product $[{\bf A}_k]_\mathcal{I} \cdot [{\bf A}_k]_\mathcal{I}^T$ is a diagonal matrix with $(s+1-\big\lceil \frac{n}{m}\big\rceil)$ non-zero entries: one is $\big\lceil \frac{n}{m}\big\rceil\cdot a^2_k$ and the others are $a^2_k$. Thus, we have $\sigma_{\max}([{\bf A}_k]_\mathcal{I})= \sqrt{ \big\lceil \frac{n}{m}\big\rceil}\cdot a_k$, where $\lceil \cdot \rceil$ stands for the ceiling function.
\end{itemize}
Combining these cases, we then have, for any $\mathcal{I} \subset \{1,2,\cdots, n\}$ with $|\mathcal{I}| =s$,
\begin{align*}
a_k& \leq \sigma_{\max}([{\bf A}_k]_\mathcal{I}) \leq
\left\{
\begin{array}{l l }
\sqrt{ s } \cdot a_k,& \mbox{if $\frac{n}{m} \geq s$; } \\
\sqrt{ \big\lceil \frac{n}{m}\big \rceil} \cdot a_k, & \mbox{otherwise,}
\end{array}
\right.\\
b_k& \leq \sigma_{\max}([{\bf B}_k]_\mathcal{I}) \leq
\left\{
\begin{array}{l l }
\sqrt{ s } \cdot b_k,& \mbox{if $\frac{n}{m} \geq s$; } \\
\sqrt{ \big\lceil \frac{n}{m}\big \rceil} \cdot b_k, & \mbox{otherwise.}
\end{array}
\right.
\end{align*}}
In this manner, the resulted matrices ${\bf A}_k$ and ${\bf B}_k$ satisfy Condition \eqref{eq:condition.bc} for any $k\in\{1,2,\cdots, K\}$.
\end{remark}
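The striped construction and the singular-value bounds above can be verified numerically. The sketch below uses toy sizes $m=4$, $n=14$, $s=3$ (so that $\frac{n}{m} \geq s$) and checks that $\sigma_{\max}([{\bf A}_k]_\mathcal{I})$ stays within $[a_k, \sqrt{s}\,a_k]$ over all column subsets.

```python
import numpy as np
from itertools import combinations

m, n, s, a_k = 4, 14, 3, 2.0
# Striped construction: column j carries a_k in row (j mod m), zeros elsewhere
A_k = np.zeros((m, n))
A_k[np.arange(n) % m, np.arange(n)] = a_k

# sigma_max over all m x s column submatrices [A_k]_I
sig = [np.linalg.svd(A_k[:, list(I)], compute_uv=False).max()
       for I in combinations(range(n), s)]
lo, hi = min(sig), max(sig)
# Here n/m = 3.5 >= s, so the bounds a_k <= sigma_max <= sqrt(s) * a_k apply
```

The lower bound is attained by subsets of columns with distinct non-zero rows, and the upper bound by $s$ identical columns, matching the special cases discussed above.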
{\section{Applications in Stochastic Processes}\label{sec:process}
The supremum of stochastic processes has long been an important issue in the field of probability theory. Hsu {\it et al.} \cite{hsu2011dimension} embedded a stochastic process into an infinite-dimensional diagonal random matrix, and then used the tail inequalities of random matrices to solve this issue. Here, we borrow this embedding idea to analyze the supremum of a stochastic process by applying the resulted DF tail inequalities.
Let $\{X_1,X_2,X_3,\cdots\}$ be a stochastic process with a constant $\beta$ such that $\mathbb{E} |X_i| \leq \beta$ holds for any $i=1,2,\cdots$. Let ${\bf X} := \bm{\Lambda}[X_1,X_2,X_3,\cdots]$ be an infinite-dimensional diagonal random matrix. Let $\mu = \sigma_{\rm max}$; then it follows from Theorem \ref{thm:tail1} and {$\phi \leq \beta$} that
\begin{align*}
\mathbb{P}\left\{\sup_{i>0} |X_i|\geq t \right\} =& \mathbb{P}\left\{\sigma_{\rm max}({\bf X} )\geq t \right\}\nonumber\\
\leq & \left\{
\begin{array}{cc}
{\rm e}^{(2\log2-1)\beta }\cdot {\rm e}^{\frac{-t^2}{ 4 \beta }} , & \mbox{if $t< 3\beta $;} \\
{\rm e}^{(2\log2-1)\beta }\cdot {\rm e}^{\frac{-3t}{ 4 }}, & \mbox{if $t\geq 3\beta $.}
\end{array}
\right.
\end{align*}
Alternatively, the above expression can be equivalently rewritten as
\begin{align}\label{eq:rp.supremum}
\left\{
\begin{array}{ll}
\mathbb{P}\left\{\sup\limits_{i>0} |X_i|\geq \sqrt{ 4\beta \big(\epsilon +(2\log2-1)\cdot\beta\big) } \right\} \leq {\rm e}^{-\epsilon} , & \mbox{if $t< 3\beta $;} \\
\mathbb{P}\left\{\sup\limits_{i>0} |X_i|\geq \frac{4}{3}\big(\epsilon +(2\log2-1)\cdot\beta\big) \right\} \leq {\rm e}^{-\epsilon}, & \mbox{if $t\geq 3\beta $.}
\end{array}
\right.
\end{align}
Compared with the existing work \cite{hsu2011dimension}, this result is independent of the matrix dimension and applicable to various kinds of stochastic processes as long as the expectation $\mathbb{E} |X_i|$ ($\forall i>0$) has a uniform upper bound.
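The diagonal embedding behind \eqref{eq:rp.supremum} is easy to reproduce for a truncated process: for ${\bf X} = \bm{\Lambda}[X_1,\cdots,X_d]$, the largest singular value coincides with $\sup_i |X_i|$. A minimal sketch follows, where Laplace marginals with $\mathbb{E}|X_i| = 1/2 \leq \beta$ are an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 200
# A truncated process with E|X_i| = 0.5 for every i (Laplace marginals)
X = rng.laplace(scale=0.5, size=d)

# Embed into a diagonal matrix: sup_i |X_i| equals sigma_max of the embedding
sup_abs = np.abs(X).max()
sigma_max = np.linalg.svd(np.diag(X), compute_uv=False).max()
```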
\section{Applications in Matrix Approximation via Random Sampling}\label{sec:approximation}
Matrix approximation via random sampling aims to estimate a complicated objective matrix by constructing structured random matrices whose expectations are identical to the objective matrix, and it has been widely used in many practical applications of linear algebra and machine learning, {\it e.g.,} matrix random sparsification \cite{drineas2011note}, randomized matrix multiplication \cite{drineas2006fast,magen2011low} and random features \cite{rahimi2008random,lopez2014randomized}.
Tail bounds of random matrices provide analytical benchmarks for the approximation quality of the matrix-approximation strategy. Tropp \cite{tropp2015introduction} applied dimension-dependent expectation bounds for sums of random matrices to provide a comprehensive analysis of these applications. Hsu {\it et al.} \cite{hsu2011dimension} obtained an upper bound on the probability that the discrepancy between the estimator and the objective matrix is large in randomized matrix multiplication. Nevertheless, their results are dependent on the matrix dimension. Here, we explore the properties of matrix approximation via random sampling from the dimension-free viewpoint.
\subsection{Dimension-free Expectation Bounds}
The following lemma is a part of the proof of Proposition \ref{prop:diagonal.sum}:
\begin{lemma}\label{lem:exp.bound}
For any $\theta>0$, it holds that
\begin{equation}\label{eq:exp.bound00}
\mathbb{E}{\rm e}^{\mu\left(\sum_{k=1}^K\theta {\bf X}_k\right)} \leq {\rm e}^{g(\theta,\tau)\cdot \phi_\Omega}.
\end{equation}
\end{lemma}
Consider the function
\begin{equation}\label{eq:g3}
g_2(\theta;c) := \frac{3\theta^2}{6-2c\theta} +\alpha_2(c),\quad 0<\theta<\frac{3}{c},\;c>0
\end{equation}
with
\begin{equation}\label{eq:alpha3}
\alpha_2(c) = \frac{3[(c+3) - \sqrt{6c+9}] }{c^2}\geq 0.
\end{equation}
It holds that $g_2(\theta;c) \geq \max\{\theta,\theta^2\}$ for any $0<\theta<\frac{3}{c}$ with $c > 0$. The curve of $g_2(\theta;c)$ is tangent to that of $\theta$ at the point $\Big(\frac{(6c+9)-3\sqrt{6c+9}}{2c^2+3c},\frac{(6c+9)-3\sqrt{6c+9}}{2c^2+3c}\Big)$, and is illustrated in Fig. \ref{fig:g3} for different values of $c$.
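The claimed inequality and the tangency point can be checked numerically. The sketch below (with $c=5$ as an illustrative value) evaluates the gap $g_2(\theta;c)-\max\{\theta,\theta^2\}$ on a grid of $\theta\in(0,3/c)$.

```python
import numpy as np

def alpha2(c):
    # alpha_2(c) = 3[(c + 3) - sqrt(6c + 9)] / c^2, cf. (eq:alpha3)
    return 3.0 * ((c + 3.0) - np.sqrt(6.0 * c + 9.0)) / c**2

def g2(theta, c):
    # g_2(theta; c) = 3 theta^2 / (6 - 2 c theta) + alpha_2(c), 0 < theta < 3/c
    return 3.0 * theta**2 / (6.0 - 2.0 * c * theta) + alpha2(c)

c = 5.0
thetas = np.linspace(1e-4, 3.0 / c - 1e-4, 1000)
gap = g2(thetas, c) - np.maximum(thetas, thetas**2)
# gap stays non-negative and nearly vanishes at the tangency point
```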
\begin{figure}[htbp]
\centering
\subfigure[\hbox{The curves of $g_2(\theta;c)$ and $\max\{ \theta ,\theta^2\}$.}]{
\includegraphics[height=6cm]{g3.eps} }
\subfigure[\hbox{The curve of $\alpha_2(c)$.}]{
\includegraphics[height=6cm]{a3.eps}}
\caption{The function curves of $g_2(\theta;c)$ ($c\in\{1,5,10,20,50\}$) and $\alpha_2(c)$.}\label{fig:g3}
\end{figure}
By substituting $g_2(\theta;c)$ and $\phi_{\widetilde{\Omega}}$ into \eqref{eq:exp.bound00}, we then obtain the following expectation bound:
\begin{proposition}\label{prop:exp.bound2}
For any $c>0$, it holds that
\begin{equation}\label{eq:exp.bound3}
\mathbb{E}\left\{ \mu\left(\sum_{k=1}^K{\bf X}_k\right) \right\}\leq \phi_{\widetilde{\Omega}} \left(\sqrt{2 \alpha_2(c) } + \frac{c \alpha_2(c)}{3} \right) .
\end{equation}
\end{proposition}
Compared with the expectation bounds given in Tropp's works ({\it cf.} Remark 5.5 of \cite{tropp2012user} and Theorem 6.6.1 of \cite{tropp2015introduction}), this result is independent of the matrix dimension and is applicable to various kinds of eigenproblems for sums of random matrices. Subsequently, we will show applications of this expectation bound in matrix approximation via random sampling.
\subsection{Applications in Matrix Approximation}
For completeness of presentation, we first introduce the setup of matrix approximation via random sampling and refer to \cite{tropp2015introduction} for further details. Suppose that ${\bf B} \in \mathbb{R}^{m\times n}$ is the objective matrix, which can be expressed as the sum of the matrices ${\bf B}_1,\cdots,{\bf B}_L \in \mathbb{R}^{m\times n}$:
\begin{equation*}
{\bf B} = \sum_{l=1}^L {\bf B}_l.
\end{equation*}
We introduce non-negative quantities $p_1,p_2,\cdots,p_L$ with $\sum_{l=1}^L p_l =1$ to quantify the importance of each summand matrix ${\bf B}_l$, $1\leq l\leq L$. Alternatively, the quantity $p_l$ can be regarded as the probability with which the corresponding matrix ${\bf B}_l$ is selected in random sampling. An unbiased estimate of the objective matrix ${\bf B}$ can be constructed as follows:
\begin{equation*}
{\bf R} = p_l^{-1} {\bf B}_l \quad \mbox{with probability $p_l$},
\end{equation*}
and it is true that $\mathbb{E}\, {\bf R} = \sum_{l=1}^{L} p_l\cdot p_l^{-1} {\bf B}_l = {\bf B}$.
Although such a random matrix ${\bf R}$ inherits the specific structure of ${\bf B}_l$, a single copy provides a poor approximation of ${\bf B}$. Thus, the average of $K$ independent copies of ${\bf R}$ is adopted to improve the approximation performance, that is,
\begin{equation*}
\widehat{{\bf R}}_K = \frac{1}{K}\sum_{k=1}^K {\bf R}_k.
\end{equation*}
We can select a specific matrix function $\mu(\widehat{{\bf R}}_K - {\bf B})$, such as any matrix norm, as a measurement of the approximation performance. The expectation bound \eqref{eq:exp.bound3} leads to the following result on the performance of matrix approximation via random sampling.
\begin{theorem}\label{thm:ran.approx}
Assume that the vector space $\mathbb{M}$ and the function $\mu:\mathbb{M}\rightarrow \mathbb{R}$ satisfy Conditions (C1)-(C3). Given a fixed matrix ${\bf B}\in\mathbb{M}$, let the random matrix ${\bf R}\in\mathbb{M}$ be an unbiased estimate of ${\bf B}$. Let ${\bf R}_1,\cdots,{\bf R}_K\in\mathbb{M}$ be the independent copies of ${\bf R}$. Denote $\widehat{{\bf R}}_K := \frac{1}{K}\sum_{k=1}^K {\bf R}_k$ and $u :=\max\limits_{1\leq k\leq K} \mu( {\bf R}_k - {\bf B})$. If there exists $\epsilon>0$ such that $u \leq \sqrt{ 1+2\epsilon \mu( {\bf B} ) }-1$, then it holds that for any $c>0$,
\begin{align}\label{eq:ran.approx2}
\frac{\mathbb{E} \mu( \widehat{{\bf R}}_K - {\bf B})}{\mu({\bf B})} \leq \epsilon \cdot \left(\sqrt{2 \alpha_2(c) } + \frac{c \alpha_2(c)}{3} \right).
\end{align}
\end{theorem}
This theorem gives a dimension-free result on the performance of matrix approximation via random sampling.
In particular, as $c$ goes to {\it infinity}, the term $\big(\sqrt{2 \alpha_2(c) } + \frac{c \alpha_2(c)}{3} \big)$ converges to {\it one}, which means that $ \frac{\mathbb{E} \mu( \widehat{{\bf R}}_K - {\bf B})}{\mu({\bf B})} \leq \epsilon$ in this limiting case. This result highlights the importance of the approximation error $\mu({\bf R}_k - {\bf B})$ incurred by each copy ${\bf R}_k$, and suggests that to achieve an accurate estimate of ${\bf B}$, it is essential to keep the individual approximation error $\mu({\bf R}_k - {\bf B})$ at a reasonable level.
\begin{remark}\label{rem:compare}
In Section 6.2 of \cite{tropp2015introduction}, Tropp gave a dimension-dependent result on the approximation error $\mathbb{E} \| \widehat{{\bf R}}_K - {\bf B}\|$ equipped with the spectral norm $\|\cdot\|$\footnote{This result can be reformulated as $\frac{\mathbb{E} \| \widehat{{\bf R}}_K - {\bf B}\|}{ \|{\bf B} \|} \leq O(\epsilon) $ in the applications of matrix random sparsification, randomized matrix multiplication and random feature ({\it cf.} Sections 6.3-6.5 of \cite{tropp2015introduction}).}: for any $\epsilon>0$, it holds that $\mathbb{E} \| \widehat{{\bf R}}_K - {\bf B}\|\leq 2\epsilon$ if
\begin{equation}\label{eq:tropp}
K \geq \frac{2m_2({\bf R}) \log (m+n)}{\epsilon^2} + \frac{2L \log (m+n)}{3\epsilon},
\end{equation}
where $\mathbf{R}\in\mathbb{C}^{m\times n}$, $\|{\bf R}\|\leq L$ and $m_2({\bf R}) := \max\{ \|\mathbb{E}({\bf R}{\bf R}^*) \|, \|\mathbb{E}({\bf R}^*{\bf R}) \| \}$. This result suggests that as long as the number of copies is large enough, the approximation error can be controlled to a satisfactory level.
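For concreteness, the sample-size rule \eqref{eq:tropp} can be evaluated numerically; the parameter values below are illustrative only, and the point is the $\log(m+n)$ growth of the required $K$ with the dimension:

```python
import math

def tropp_copy_number(m, n, m2, L, eps):
    # K >= 2 m2 log(m+n) / eps^2 + 2 L log(m+n) / (3 eps)
    log_dim = math.log(m + n)
    return math.ceil(2.0 * m2 * log_dim / eps**2 + 2.0 * L * log_dim / (3.0 * eps))

# The required number of copies grows with the dimension through log(m + n).
k_small = tropp_copy_number(10**3, 10**3, 1.0, 1.0, 0.1)
k_large = tropp_copy_number(10**9, 10**9, 1.0, 1.0, 0.1)
assert k_large > k_small
```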
The main differences between the results \eqref{eq:ran.approx2} and \eqref{eq:tropp} lie in the following aspects:
\begin{enumerate}[(1)]
\item Since Tropp's result \eqref{eq:tropp} is dimension-dependent, the number $K$ could be very large in the high-dimensional scenario. In contrast, our result is independent of the matrix dimension and thus is suitable for the high-dimensional or infinite-dimensional scenario.
\item The boundedness condition $\|{\bf R}\|\leq L$ in Tropp's result imposes a requirement on the behavior of the random matrix ${\bf R}$. In contrast, there is no such restriction on ${\bf R}$ in our result.
\item Tropp's result is based on the spectral norm; in contrast, the $\mu(\cdot)$ in our result can be chosen from various kinds of matrix functions.
\item Tropp's result describes the asymptotic behavior of the approximation error w.r.t. the number of copies $K$. In contrast, our result gives a deterministic description of the relationship between the overall and the individual approximation errors.
\end{enumerate}
\end{remark}
To sum up, the two results are complementary to each other. According to Tropp's result, given a collection of copy matrices ${\bf R}_1,{\bf R}_2,\cdots,{\bf R}_K,\cdots$, the average of any subset of them will outperform an individual copy, and this average can then be treated as a new copy of ${\bf R}$. In this manner, we can generate a series of copy matrices, each of which reaches a satisfactory approximation accuracy.
\begin{remark}
Interestingly, there is a direct way to obtain the result $\frac{\mathbb{E} \mu( \widehat{{\bf R}}_K - {\bf B})}{\mu({\bf B})} \leq \epsilon$ without the aforementioned limiting argument. It begins with the function $g_3(\theta,K) := \theta^{K+1} +\alpha_3(K)$ with
\begin{equation*}
\alpha_3(K):= \Big(\frac{K}{K+1}\Big)\cdot\Big(\frac{1}{K+1}\Big)^{\frac{1}{K}}.
\end{equation*}
We find that $g_3(\theta,K) \geq \max\{\theta,\theta^K\}$ for any $\theta \geq 0$ and the curve of $g_3(\theta,K)$ is tangent to that of $\theta$ at the point $\big(\big(\frac{1}{K+1}\big)^{\frac{1}{K}},\big(\frac{1}{K+1}\big)^{\frac{1}{K}}\big)$ ({\it cf.} Fig. \ref{fig:g2}). Substituting $g_3(\theta,K)$ into \eqref{eq:exp.bound00} leads to the expectation bound $\mathbb{E}\big\{ \mu\big(\sum_{k=1}^K{\bf X}_k\big) \big\}\leq \phi_\Omega$. Similar to Theorem \ref{thm:ran.approx}, if there exists $\epsilon>0$ such that $u \leq \sqrt[\tau]{ 1+\epsilon \tau \mu( {\bf B} ) }-1$, then it holds that $ \frac{\mathbb{E} \mu( \widehat{{\bf R}}_K - {\bf B})}{\mu({\bf B})} \leq \epsilon.$
\begin{figure}[htbp]
\centering
\subfigure[\hbox{The curves of $g_3(\theta,K)$ and $\max\{ \theta ,\theta^K\}$ ($K=2$).}]{
\includegraphics[height=6cm]{g2.eps} }
\subfigure[\hbox{The curve of $\alpha_3(K)$.}]{
\includegraphics[height=6cm]{a2.eps}}
\caption{The function curves of $g_3(\theta,K)$ ($K=2$) and $\alpha_3(K)$.}\label{fig:g2}
\end{figure}
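As with $g_2$, the claimed properties of $g_3(\theta,K)$ can be checked numerically on a grid (a sketch; the grid range and tolerances are arbitrary):

```python
import numpy as np

def alpha3(K):
    # alpha_3(K) = (K/(K+1)) * (1/(K+1))^(1/K)
    return (K / (K + 1.0)) * (1.0 / (K + 1.0))**(1.0 / K)

def g3(theta, K):
    # g_3(theta, K) = theta^(K+1) + alpha_3(K)
    return theta**(K + 1) + alpha3(K)

for K in (2, 3, 5):
    thetas = np.linspace(0.0, 3.0, 20000)
    # domination: g_3(theta, K) >= max(theta, theta^K) for theta >= 0
    assert np.all(g3(thetas, K) >= np.maximum(thetas, thetas**K) - 1e-9)
    # tangency with y = theta at theta* = (1/(K+1))^(1/K)
    t_star = (1.0 / (K + 1))**(1.0 / K)
    assert abs(g3(t_star, K) - t_star) < 1e-9
```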
\end{remark}
{\section{Applications in Matrix Expander Graphs}\label{sec:sampling}
In this section, we consider applications of the proposed framework in quantum information. In particular, we first develop dimension-free tail inequalities for matrix martingale difference sequences (MDS). Based on the resulting tail inequalities, we then provide a dimension-free analysis of the expander-walk sampling and of the cover of quantum hypergraphs.
\subsection{Dimension-free Tail Inequalities for Matrix Martingale Difference Sequence (MDS)}
Given a probability space $(\Omega, \mathcal{F},\mathbb{P})$, let $\{\mathcal{F}_k\}_{k=0}^{\infty}$ be a filtration contained in the sigma algebra $\mathcal{F}$, that is, $\mathcal{F}_0 \subset \mathcal{F}_1\subset \mathcal{F}_2\subset \cdots \subset \mathcal{F}_\infty \subset \mathcal{F}$. Equipped with such a filtration, we define the conditional expectation $\mathbb{E}_k [\cdot] := \mathbb{E} [\cdot \,|\, \mathcal{F}_k]$. A random-matrix sequence $\{{\bf X}_k\}$ is said to be adapted to the filtration if each ${\bf X}_k$ is measurable with respect to $\mathcal{F}_k$. An adapted random-matrix sequence $\{{\bf X}_k\}$ is said to be a matrix martingale if $\mathbb{E}_{k-1}{\bf X}_k = {\bf X}_{k-1}$ and $\mathbb{E} \|{\bf X}_k\|<\infty$ for $k=1,2,3,\cdots$. Given a matrix martingale $\{{\bf X}_k\}$, the associated matrix martingale difference sequence (MDS) is defined as ${\bf Z}_k := {\bf X}_k - {\bf X}_{k-1}$ for $k=1,2,3,\cdots$. We note that a matrix MDS is conditionally zero-mean, that is, $\mathbb{E}_{k-1} {\bf Z}_k = {\bf 0}$.
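A minimal numerical illustration of these definitions (a toy construction, not taken from the text): partial sums of i.i.d. zero-mean symmetric matrices form a matrix martingale, and their increments form a matrix MDS whose empirical mean vanishes:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy construction: Z_k are i.i.d. zero-mean symmetric matrices, so
# X_k = Z_1 + ... + Z_k is a matrix martingale w.r.t. the natural filtration,
# and {Z_k = X_k - X_{k-1}} is a matrix MDS with E_{k-1} Z_k = 0.
trials, K, d = 20000, 5, 3
G = rng.standard_normal((trials, K, d, d))
Z = G + G.transpose(0, 1, 3, 2)   # symmetrize: zero-mean Hermitian increments
X = np.cumsum(Z, axis=1)          # martingale paths X_1, ..., X_K

# By independence, the conditional mean equals the unconditional one, which
# vanishes; empirically, the sample mean of every Z_k is close to zero.
assert np.max(np.abs(Z.mean(axis=0))) < 0.1
```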
It is not difficult to verify that the subadditivity of matrix cumulant-generating function \cite[Lemma 3.4]{tropp2012user} still holds for a martingale difference sequence $\{{\bf Z}_1,\cdots,{\bf Z}_K\}$. Then, the result given in Proposition \ref{prop:diagonal.sum} can be extended to the setting of the matrix MDS:
\begin{proposition}\label{prop:diagonal.mds}
Let $\{{\bf Z}_1,\cdots,{\bf Z}_K\}\subset \mathbb{M}$ be a matrix MDS. Then, it holds that for any $\theta>0$,
\begin{align}\label{eq:diagonal.sum}
\mathbb{E}{\rm e}^{\mu\left(\sum_{k=1}^K\theta {\bf Z}_k\right)}\leq {\rm e}^{-1}\cdot {\rm tr}\,\exp\left({\bf D}_0+\sum_{k=1}^K \log\mathbb{E}\,{\rm e}^{{\bf D}_\mu[\theta;{\bf Z}_k]}\right).
\end{align}
\end{proposition}
Similar to the derivation of the tail inequalities \eqref{eq:tail2} and \eqref{eq:tail5} for independent matrix sequences, we can derive dimension-free tail inequalities for the matrix MDS.
\begin{theorem}\label{thm:tail.mds}
Given a matrix MDS $\{{\bf Z}_1,\cdots,{\bf Z}_K\}\subset \mathbb{M}$, let ${\bf B}_1,\cdots,{\bf B}_K\in\mathbb{M}$ be fixed matrices such that
\begin{equation}\label{eq:cond.b}
\mathbb{E} {\rm e}^{{\bf D}_\mu[\theta;{\bf Z}_k]} \preceq {\rm e}^{{\bf D}_\mu[\theta;{\bf B}_k]},\quad k=1,2,3,\cdots.
\end{equation}
Then, the following hold:
\begin{enumerate}
\item for any $t>0$,
\begin{align}\label{eq:tail.mds1}
\mathbb{P}\left\{\mu\left(\sum_{k=1}^K{\bf Z}_k \right)\geq t \right\} \leq {\rm e}^{\varphi_\Omega(1+\alpha_1(\tau))}\cdot\exp\left(- \varphi_\Omega \cdot \Gamma\left(\frac{t}{2\tau\cdot \varphi_\Omega}\right)\right)
\end{align}
where $\varphi_\Omega:=\sum\limits_{i=1}^{I}\big( \big[\mu\big({\bf U}_i\big)+1\big]^{|\Omega_i|}-1\big)$ with ${\bf U}_i = \mathop{\arg\max}\limits_{k\in\Omega_i} \{\mu({\bf B}_k) \}$.
\item for any $t>0$,
\begin{align}\label{eq:tail.mds2}
\mathbb{P}\left\{\mu\left(\sum_{k=1}^K{\bf Z}_k \right)\geq t \right\} \leq {\rm e}^{\frac{\varphi_{\widetilde{\Omega}}}{4}}\cdot\exp\left\{ -\frac{t^2}{4\varphi_{\widetilde{\Omega}}}\right\},
\end{align}
where $\varphi_{\widetilde{\Omega}}:=\sum\limits_{i=1}^{\widetilde{I}}\big( \big[\mu\big(\widetilde{{\bf U}}_i\big)+1\big]^{|\widetilde{\Omega}_i|}-1\big)$ with $\widetilde{{\bf U}}_i = \mathop{\arg\max}\limits_{k\in\widetilde{\Omega}_i} \{\mu({\bf B}_k) \}$.
\end{enumerate}
\end{theorem}
Subsequently, we will use these tail inequalities to explore the properties of the expander-walk sampling and the cover of quantum hypergraphs.
\subsection{Expander-walk Sampling}
Expander-walk sampling refers to a sampler that draws vertices from an expander graph by performing a random walk. It has been proven that the probability that the average of the resulting samples fails to be $t$-close to the true mean decreases exponentially, while the sampler uses fewer random bits \cite{gillman1998chernoff}. This finding implies that the sampling results can almost be treated as independent samples, and thus expander-walk sampling plays an essential role in quantum information. Although the effectiveness of this method for matrix sampling has been explored in some works \cite{wigderson2005randomness,wigderson2008derandomizing,kyng2018matrix,garg2018matrix}, their results all carry the matrix dimension as a product factor, and thus may not be suitable for the high-dimensional scenario. To overcome this limitation, we provide a dimension-free analysis of the sampling method under the proposed dimension-free framework.
Given a connected undirected $d$-regular graph $G = (V, E)$ on $n$ vertices, its normalized adjacency matrix ${\bf A}$ is defined as ${\bf A} = [A_{ij}]_{n\times n}$ with $A_{ij} = e_{ij} /d$, where $e_{ij}$ is the number of edges between the $i$-th and the $j$-th vertices. We note that ${\bf A}$ is a real symmetric (hence Hermitian) matrix, and the set of ${\bf A}$'s eigenvalues, called the spectrum of $G$, is of the form $1 = \lambda_1 > \lambda_2 \geq \lambda_3\geq \cdots \geq \lambda_n$. The unit eigenvector of the eigenvalue $1$ is $(1/\sqrt{n},\cdots,1/\sqrt{n})^T$, and the value $1-\lambda_2$ is called the spectral gap of ${\bf A}$. The graph $G = (V, E)$ is said to be an expander graph with spectral gap $\epsilon > 0$ if $1-\lambda_2>\epsilon$.
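These definitions can be illustrated on a small example; the cycle graph below is chosen only for concreteness (it is in fact a poor expander, since its spectral gap $1-\cos(2\pi/n)$ vanishes as $n$ grows):

```python
import numpy as np

# Normalized adjacency of a d-regular graph: A_ij = e_ij / d.
# Example: the cycle graph C_n is 2-regular with spectral gap 1 - cos(2*pi/n).
n, d = 12, 2
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] += 1.0 / d
    A[i, (i - 1) % n] += 1.0 / d

eigs = np.sort(np.linalg.eigvalsh(A))[::-1]  # descending spectrum
assert abs(eigs[0] - 1.0) < 1e-10            # top eigenvalue is 1
gap = 1.0 - eigs[1]                          # spectral gap 1 - lambda_2
assert abs(gap - (1.0 - np.cos(2 * np.pi / n))) < 1e-10
```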
Define $y_k$ ($1 \leq k \leq K$) to be the $k$-th vertex visited in a random walk on $G$, and let $\{y_1, \cdots , y_K\}$ be the sequence of vertices encountered on the walk. A random walk is said to be stationary if its starting vertex $y_1$ is chosen uniformly at random. Let
$f:V \rightarrow \mathbb{H}^{d \times d}$ be a matrix-valued function such that the Frobenius norm $\|f(y)\|_F\leq 1$ for all $y\in V$ and $\sum_{y\in V} f(y) = 0$. Let $\mathbb{E}[f(y)]$ be the mean value of $f(y)$ uniformly over all vertices. Under the assumption that $\mathbb{E}[f(y)] = 0$, we would like to analyze the behavior of the tail probability $\mathbb{P} \big\{ \big\|\frac{1}{K} \sum_{k=1}^K f(y_k)\big\| >t \big\}$ ($t>0$). Since the elements of the sequence $\{f(y_1),\cdots,f(y_K)\}$ are not independent of each other, the proposed framework cannot be used directly to address this issue. Instead, the martingale method, proposed by Garg {\it et al.} \cite[Theorem 1.6]{garg2018matrix}, converts the sum of the matrix-valued functions w.r.t. a stationary random walk on an expander graph into the sum of a martingale difference sequence:
\begin{lemma}\label{lem:martingale}
Assume that $\{y_1,\cdots,y_K\}$ is a stationary random walk on the expander graph $G = (V, E)$ with spectral gap $\epsilon > 0$. Then, for any $t>0$, there exists a martingale difference sequence $\{{\bf Z}_1,\cdots,{\bf Z}_K\}$ w.r.t. the filtration generated by initial segments of $\{y_1,\cdots,y_K\}$ such that
\begin{equation*}
\mathbb{P} \left\{ \left\|\frac{1}{K} \sum_{k=1}^K f(y_k)\right\| >t \right\} \leq
\mathbb{P} \left\{ \left\|\frac{1}{K} \sum_{k=1}^K {\bf Z}_k\right\| >\frac{t}{2} \right\},
\end{equation*}
where each ${\bf Z}_k$ is a martingale difference with bound $\|{\bf Z}_k\| \leq \frac{\log(n / t)}{\epsilon}$.
\end{lemma}
Combining Lemma \ref{lem:martingale} with Theorem \ref{thm:tail.mds}, we arrive at Bennett-type and Azuma--Hoeffding-type results; the latter has a similar form to the existing Chernoff bounds for expander-walk sampling \cite{wigderson2005randomness,wigderson2008derandomizing,kyng2018matrix,garg2018matrix}:
\begin{theorem}\label{thm:expander}
Let $G = (V, E)$ be an expander graph with spectral gap $\epsilon > 0$. For any $K \geq 1$ and $d\geq 1$, there exists a ${\rm poly}(r)$-time computable sampler $\sigma : \{0,1\}^r \rightarrow V^K$ with $r = \log(n) + O(K)$ satisfying:
\begin{enumerate}[(a)]
\item for any $t>0$,
\begin{align}\label{eq:expander1}
\mathbb{P}_{\omega \stackrel{R}{\leftarrow} \{0,1\}^r} \left\{ \left\|\frac{1}{K} \sum_{k=1}^K f(\sigma(\omega)_k)\right\| >t \right\} \leq& {\rm e}^{\varphi'_\Omega(1+\alpha_1(\tau))}\cdot\exp\left(- \varphi'_\Omega \cdot \Gamma\left(\frac{Kt}{2\tau\cdot \varphi'_\Omega}\right)\right),
\end{align}
where $\omega \stackrel{R}{\leftarrow} \{0,1\}^r$ stands for sampling $\omega$ from $\{0,1\}^r$ uniformly, and $\varphi'_\Omega:=\sum\limits_{i=1}^{I}\big( \big[\frac{\log(n / t)}{\epsilon}+1\big]^{|\Omega_i|}-1\big)$;
\item for any $t>0$,
\begin{align}\label{eq:expander2}
\mathbb{P}_{\omega \stackrel{R}{\leftarrow} \{0,1\}^r}\left\{\left\|\frac{1}{K} \sum_{k=1}^K f(\sigma(\omega)_k)\right\| >t \right\}\leq {\rm e}^{\frac{\varphi'_{\widetilde{\Omega}}}{4}}\cdot\exp\left\{ -\frac{K^2t^2}{16\varphi'_{\widetilde{\Omega}}}\right\},
\end{align}
where $\varphi'_{\widetilde{\Omega}} := \sum\limits_{i=1}^{\widetilde{I} }\big( \big[\frac{\log(n / t)}{\epsilon}+1\big]^{|\widetilde{\Omega}_i|}-1\big) $.
\end{enumerate}
\end{theorem}
\begin{IEEEproof}
As addressed in \cite[Section 5]{goldreich2011sample}, there must exist a sampler via the random walk on the expander graph satisfying the relation $r = \log(n) + O(K)$. Therefore, we only need to prove the inequalities \eqref{eq:expander1} and \eqref{eq:expander2}, which can be directly resulted from the combination of Theorem \ref{thm:tail.mds} and Lemma \ref{lem:martingale}. This completes the proof.
\end{IEEEproof}
Compared with the existing bounds, whose product factors are $2d$, our results do not involve the matrix dimension as a product factor, and thus provide a more precise description of the sampler performance when the matrix dimension is high. We note that it follows from the fact $|\widetilde{\Omega}_i| \leq 2$ ($\forall i\in\{1,2,\cdots, \widetilde{I}\}$) that
\begin{equation*}
\varphi'_{\widetilde{\Omega}} \leq \widetilde{I} \cdot \left[\Big(\frac{\log(n / t)}{\epsilon}\Big)^2+\frac{2\log(n / t)}{\epsilon}\right]= \left\lceil \frac{K}{2} \right\rceil \cdot \left[\Big(\frac{\log(n / t)}{\epsilon}\Big)^2+\frac{2\log(n / t)}{\epsilon}\right].
\end{equation*}
We then obtain a sufficient condition that guarantees the relation ${\rm e}^{\frac{\varphi'_{\widetilde{\Omega}}}{4}}\leq 2d$:
\begin{equation}\label{eq:choose.k}
K \leq \frac{8\log 2d}{\Big(\frac{\log(n / t)}{\epsilon}\Big)^2+\frac{2\log(n / t)}{\epsilon}},
\end{equation}
which suggests that the number of steps $K$ of the random walk should be less than $O(\log 2d)$.
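The sufficient condition \eqref{eq:choose.k} is easy to evaluate; the helper below (parameter values hypothetical) illustrates that the admissible walk length grows only logarithmically in $d$ and shrinks as the spectral gap decreases:

```python
import math

def max_walk_steps(n, t, eps, d):
    # Condition (choose.k): K <= 8 log(2d) / (b^2 + 2b), with b = log(n/t)/eps.
    b = math.log(n / t) / eps
    return 8.0 * math.log(2 * d) / (b * b + 2.0 * b)

# The admissible K grows only logarithmically in the matrix dimension d ...
assert max_walk_steps(16, 1.0, 0.9, 10**6) > max_walk_steps(16, 1.0, 0.9, 10)
# ... and shrinks as the spectral gap eps decreases (b grows).
assert max_walk_steps(16, 1.0, 0.5, 10**6) < max_walk_steps(16, 1.0, 0.9, 10**6)
```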
\subsection{Cover of Quantum Hypergraphs}
We first introduce some necessary preliminaries on quantum hypergraphs and refer to \cite[Section 4.3]{wigderson2005randomness} for details.
A hypergraph is a pair $(V,E)$, where $E$ is a collection of subsets of $V$. Set $|V|=d$; an edge $e\in E$ can then be represented as a $d \times d$ diagonal matrix whose $i$-th diagonal entry is $1$ if the $i$-th vertex belongs to the edge and $0$ otherwise. Denote the matrix corresponding to the edge $e$ by ${\bf M}_e$. The quantum hypergraph $(\mathcal{V},\mathcal{E})$ generalizes this construction in the following way:
\begin{enumerate}[(a)]
\item The vertex set $\mathcal{V}$ is a $d$-dimensional complex Hilbert space, and each vertex is represented as a linear combination of an orthonormal basis of $\mathcal{V}$;
\item Given an edge $e\in \mathcal{E}$ containing some vertices in $\mathcal{V}$, the corresponding matrix ${\bf M}_e$ is the projection onto the space spanned by these vertices;
\item For any edge $e\in \mathcal{E}$, the matrix ${\bf M}_e$ is not limited to a projection, but can be any Hermitian matrix satisfying ${\bf 0} \preceq {\bf M}_e \preceq {\bf I}$.
\end{enumerate}
Therefore, the quantum hypergraph can be formally defined as follows:
\begin{definition}[Quantum Hypergraph]
A hypergraph $G=(\mathcal{V},\mathcal{E})$ is said to be a quantum hypergraph if $\mathcal{V}$ is a $d$-dimensional Hilbert space and $\mathcal{E}$ is a finite set such that each $e \in \mathcal{E}$ is identified with a Hermitian matrix ${\bf M}_e$ with ${\bf 0} \preceq {\bf M}_e \preceq {\bf I}$.
\end{definition}
A finite set $\mathcal{C}\subset \mathcal{E}$ is said to be a cover of a quantum hypergraph $G=(\mathcal{V},\mathcal{E})$ if $\sum_{e\in \mathcal{C}} {\bf M}_e \succeq {\bf I}$. The size of the smallest cover is called the cover number and is denoted by ${\rm cov}(G)$. Furthermore, a fractional cover is a set of non-negative weights $w(e)$ ($e\in\mathcal{E}$) such that $\sum_{e\in \mathcal{E}} w(e) {\bf M}_e \succeq {\bf I}$, and the fractional cover number is defined as
\begin{equation*}
{\rm cov}_f(G):= \min_w \left\{ \sum_{e\in \mathcal{E}} w(e) \Big| \sum_{e\in \mathcal{E}} w(e) {\bf M}_e \succeq {\bf I} \right\}.
\end{equation*}
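A toy instance (entirely hypothetical) makes the cover conditions concrete: with coordinate projections as edges, membership in a cover reduces to a smallest-eigenvalue test:

```python
import numpy as np

# Hypothetical instance: vertices span C^3; edges are coordinate projections.
d = 3
eye = np.eye(d)
edges = [np.outer(eye[i], eye[i]) for i in range(d)]  # each 0 <= M_e <= I

def is_cover(ms):
    # C is a cover iff sum_{e in C} M_e >= I, i.e. lambda_min(sum) >= 1.
    return np.linalg.eigvalsh(sum(ms))[0] >= 1.0 - 1e-12

assert is_cover(edges)          # all three projections together sum to I
assert not is_cover(edges[:2])  # dropping one leaves a direction uncovered

# Uniform weights w(e) = 1 give sum_e w(e) M_e = I, so cov_f(G) <= 3 here.
```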
One main concern in quantum information is to verify whether the cover of a quantum hypergraph can be found in polynomial time. This issue has been discussed in previous works \cite{ahlswede2002strong},\cite[Theorem 4.5]{wigderson2005randomness}, which mainly concern the relationship between the cover size and the vertex number (matrix dimension), while the fractional cover number is treated as a constant. Instead, based on the dimension-free result \eqref{eq:tail5}, we can provide a new analysis of this issue:
\begin{theorem}\label{thm:graph}
Let $G=(\mathcal{V},\mathcal{E})$ be a quantum hypergraph with the fractional cover number ${\rm cov}_f(G)$ and $|\mathcal{V}|=d$. Then, if ${\rm cov}_f(G) \leq \frac{K}{6 \left\lceil \frac{K}{2} \right\rceil}$, one can find a $K$-size cover of $G$ in time $d^{K}$.
\end{theorem}
This result gives an upper bound on ${\rm cov}_f(G)$ that guarantees that a cover of $G$ can be found in time $d^K$, and it illustrates the difficulty of finding a cover when ${\rm cov}_f(G)$ is super-constant. Our result can be viewed as a complement to the relevant existing works.
We note that this theorem is built on independent sampling, and it is an interesting open problem whether a larger upper bound on ${\rm cov}_f(G)$ can be obtained when other sampling methods are adopted.
\subsection{Proof of Theorem \ref{thm:graph}}
\begin{IEEEproof} As addressed in the proof of \cite[Theorem 4.5]{wigderson2005randomness}, finding the cover of a quantum hypergraph $G=(\mathcal{V},\mathcal{E})$ can be reduced to a semidefinite program (SDP), and solving this SDP provides the fractional cover number to arbitrary accuracy together with a probability distribution over the edges: $p(e) = \frac{w(e)}{{\rm cov}_f(G)}$. Given $p(e)$ and ${\rm cov}_f(G)$, it follows from the definition of ${\rm cov}_f(G)$ that $\mathbb{E}_p[{\bf M}_e] \succeq \frac{1}{{\rm cov}_f(G)}{\bf I}$; we denote ${\bf M} := \mathbb{E}_p [{\bf M}_e]$.
Given an i.i.d. sample set $S \subseteq \mathcal{E}$ with $K=|S|$ drawn from the distribution $p(e)$, set $K \geq 2\,{\rm cov}_f(G)$; it then follows from \eqref{eq:tail5} that
\begin{align}\label{eq:graph.pr1}
\mathbb{P}\left\{ \sum_{e\in S} {\bf M}_e \succeq {\bf I} \right\} = & \mathbb{P}\left\{ \sum_{e\in S} ({\bf M}_e-{\bf M}) \succeq {\bf I} -K {\bf M}\right\}\nonumber\\
\geq & \mathbb{P}\left\{ \frac{1}{K} \sum_{e\in S} ({\bf M}_e-{\bf M}) \succeq \left(\frac{1}{K} - \frac{1}{{\rm cov}_f(G)}\right){\bf I} \right\}\nonumber\\
\geq & \mathbb{P}\left\{ \frac{1}{K} \sum_{e\in S} ({\bf M}_e-{\bf M}) \succeq - \frac{1}{2{\rm cov}_f(G)}{\bf I} \right\}\nonumber\\
\geq & \mathbb{P}\left\{\left\| \frac{1}{K} \sum_{e\in S} ({\bf M}_e-{\bf M})\right\| \leq \frac{1}{2{\rm cov}_f(G)} \right\}\nonumber\\
\geq & 1- {\rm e}^{\frac{\varphi_{\widetilde{\Omega}}}{4}}\cdot\exp\left\{ -\frac{K^2}{16\varphi_{\widetilde{\Omega}}{\rm cov}^2_f(G)}\right\}.
\end{align}
On the other hand, since $\|{\bf M}_e-{\bf M}\| \leq 1 $, we have
\begin{equation}\label{eq:graph.pr2}
\varphi_{\widetilde{\Omega}} \leq 3 \left\lceil \frac{K}{2} \right\rceil.
\end{equation}
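In more detail, the bound \eqref{eq:graph.pr2} can be unpacked termwise: since $\|{\bf M}_e-{\bf M}\| \leq 1$, each $\mu(\widetilde{{\bf U}}_i)$ is at most $1$, and each of the $\widetilde{I}=\left\lceil \frac{K}{2} \right\rceil$ blocks satisfies $|\widetilde{\Omega}_i|\leq 2$, so that

```latex
\varphi_{\widetilde{\Omega}}
 = \sum_{i=1}^{\widetilde{I}} \Big( \big[\mu\big(\widetilde{{\bf U}}_i\big)+1\big]^{|\widetilde{\Omega}_i|}-1 \Big)
 \leq \sum_{i=1}^{\widetilde{I}} \big[(1+1)^{2}-1\big]
 = 3\,\widetilde{I}
 = 3\left\lceil \frac{K}{2} \right\rceil .
```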
To keep the probability lower bound in \eqref{eq:graph.pr1} non-negative, combining \eqref{eq:graph.pr1} and \eqref{eq:graph.pr2} leads to
\begin{equation*}
{\rm cov}_f(G) \leq \frac{K}{6 \left\lceil \frac{K}{2} \right\rceil}.
\end{equation*}
Enumerating over the $K$ i.i.d. samples gives us a deterministic algorithm to find a cover in time $O(d^K)$. This completes the proof.
\end{IEEEproof}
}
\section{Conclusion}
In this paper, we propose a framework for obtaining dimension-free (DF) tail inequalities of a matrix function $\mu$ for sums of random matrices. We also develop tail inequalities for matrix random series. Although $\mu$ is required to satisfy Conditions (C1)-(C3), it still covers some common matrix functions as special cases, including {all matrix norms}, the absolute value of the sum of the $j$ largest eigenvalues of a Hermitian matrix, and the sum of the $j$ largest singular values of a complex matrix. Therefore, the proposed framework can be used to study the tail behavior of many eigenproblems of random matrices. Since the resulting tail inequalities are independent of the matrix dimension, they are suitable for the scenario of high-dimensional or infinite-dimensional matrices. { Compared with the existing works \cite{tropp2012user,zhang2018matrix}, our results are not only independent of the matrix dimension but also applicable to arbitrary kinds of probability distributions with bounded first-order moments.}
Moreover, we discuss applications of the resulting dimension-free tail inequalities in the following aspects:
\begin{itemize}
\item In compressed sensing, we achieve a proof of the restricted isometry property (RIP) for measurement matrices that can be expressed as sums of random matrices, without any assumption imposed on the distributions of the matrix entries. Compared to the previous work \cite{baraniuk2008simple}, instead of a concentration assumption imposed on the measurement matrices, we use the resulting tail inequalities to obtain the proof from a mild condition \eqref{eq:smallest.sigma} that can be easily satisfied (see Remark \ref{rem:rip.condition}).
\item In probability theory, we bound the supremum of a stochastic process from below. Compared with the existing work \cite{hsu2011dimension}, this bound is independent of the matrix dimension and is applicable to various kinds of stochastic processes with bounded first-order moments.
\item In machine learning, we analyze the performance of matrix approximation via random sampling. Our analysis shows that to achieve a good approximation, each copy has to approximate the objective matrix well. In contrast, the existing work \cite{tropp2015introduction} highlights the relationship among the matrix dimension, the number of copies and the approximation error.
\item In optimization, the resulting tail inequalities for matrix random series extend Nemirovski's conjecture \cite{nemirovski2007sums}, which plays an essential part in solving chance-constrained optimization problems and quadratic optimization problems with orthogonality constraints, to a more general setting, where the weights can be arbitrary random variables with bounded first-order moments instead of the original condition, that is, either a distribution with {\it zero} mean and $[-1,1]$ support or a Gaussian distribution with {\it unit} variance ({\it cf.} Remark \ref{rem:series}).
\item {In theoretical computer science, expander-walk sampling for matrix-valued data plays an essential part, and the effectiveness of this sampling method has become a topic of concern in recent years. With the help of random-matrix techniques ({\it e.g.,} matrix Chernoff bounds), this issue has been studied in many works \cite{wigderson2005randomness,wigderson2008derandomizing,kyng2018matrix,garg2018matrix}. However, their results all carry the matrix dimension as a product factor and could become loose when the matrix dimension is high. To overcome this limitation, we provide a dimension-free analysis of the effectiveness of this sampling method.}
\item In quantum information, we analyze the fractional cover number of quantum hypergraphs.
\end{itemize}
Under the proposed framework, we first obtain the DF tail inequality \eqref{eq:tail1} with the term $\phi$. Since the order of $\phi$ is $O((\mu({\bf{U}})+1)^K)$, this inequality has a rather slow rate of convergence to {\it zero} when $K$ is large. To overcome this issue, we present the tail inequality \eqref{eq:tail2}, which is equipped with the term $\phi_\Omega$. Since the order of $\phi_\Omega$ is much lower than that of $\phi$, the inequality \eqref{eq:tail2} converges to {\it zero} at a reasonable rate in spite of large $K$. The experimental results support the validity of the proposed framework and show that the inequality \eqref{eq:tail2} provides a better description of the tail behavior of the probability $\mathbb{P}\left\{\mu\left(\sum_k{\bf X}_k \right)\geq t \right\}$.
In future work, we will explore further applications of the resulting tail inequalities.
\section{Introduction}
The solar cycle activity that waxes and wanes with a period of 11 years
modulates the heliospheric environment and has potential implications
for changes in ``space weather''. It is, therefore, extremely important
to understand long term changes in solar cycle activity and to accurately predict
the behaviour of upcoming solar cycles. A number of satellites and
space missions in recent years, including many being planned for the
future, require knowledge of future solar cycle activity for planning the missions
properly. The current solar cycle 24 is the fourth successive cycle, since
cycle 21, in a continuing trend of diminishing sunspot cycles and is also one of the
weakest cycles, since cycle 14, with a peak smoothed sunspot number (SSN) of 116
in the revised sunspot scale. The maximum of solar cycle 24 is therefore known
as the ``mini solar maximum''. It must be clarified here that, as of July 2015, a
revised and updated list of the (Wolf) sunspot numbers has been adopted, referred to
as SSN V2.0 \citep{CLe16,Cli16}. Recent studies have also claimed that the Sun
may move into a period of very low sunspot activity comparable with the Dalton \citep{ZPo14}
or even the Maunder minimum \citep{ZGk15,San16}. This has caught the attention
of researchers worldwide who have attempted to predict the amplitude of solar cycle 25
\citep{UHa14,JaB15,CJS16,HUp16,KKR17,IiH17,PSc18,JiW18,UHa18,PeN18,GoM18,
MSR18,SaK18,BNa18}. The solar cycle 25 predictions made prior to 2016 usually
used the unrevised SSN observations, referred to here onward as SSN V1.0, while
the solar cycle 25 predictions made after 2016 mostly used the SSN V2.0 observations.
The different estimates of SSN in V1.0 and V2.0 for the amplitude of cycle 25 along with the
ratio of peak SSN of cycle 25 to cycle 24 are summarised in Table \ref{tab-SSN}.
Recently, \cite{Pes18} reported that the solar cycle 25 predictions which used the
SSN V1.0 observations need to be revisited as he showed that the revised SSN V2.0
observations have different values of SSN for the solar maxima and minima compared
to the original SSN V1.0 observations. In our previous solar cycle prediction
\citep{JaB15}, abbreviated henceforth as JBA15, a peak SSN of
$\sim$62$\pm$12 was reported for the amplitude of the upcoming solar cycle 25.
For that prediction, we had used the original SSN V1.0 observations, with data for the
period 1975 -- mid-2014. We therefore revisit, in this paper, our earlier prediction
in order to update the amplitude of solar cycle 25 using the revised SSN V2.0 observations
available after July 2015.
\begin{table}
\begin{center}
\caption{Estimates of the amplitude of SSN for cycle 25 as reported by
different researchers}
\label{tab-SSN}
\vspace{-0.4cm}
\begin{tabular}{@{}lccc@{}}
\hline\noalign{\smallskip}
Authors & $SSN_{max}$ & $SSN_{max}$ & $\frac{SSN25}{SSN24}$\\
& (V1.0) & (V2.0) & \\
\noalign{\smallskip}\hline\noalign{\smallskip}
\cite{UHa14} & - & - & $\sim$1 \\
\cite{JaB15} & 62$\pm$12 & - & 0.83 \\
\cite{CJS16} & - & - & $\sim$1 \\
\cite{HUp16} & - & - & $\sim$1 \\
\cite{KKR17} & 63$\pm$11.3 & - & 0.84 \\
\cite{IiH17} & - & - & $<1$ \\
\cite{KiA18} & 50-55 & - & 0.73 \\
\cite{PSc18} & - & 135$\pm$25 & 1.16 \\
\cite{JiW18} & - & 125$\pm$32 & 1.08 \\
\cite{UHa18} & - & 110.6 & 0.95 \\
\cite{PeN18} & - & 130 & 1.12 \\
\cite{GoM18} & - & - & $\sim$1 \\
\cite{MSR18} & - & 99.6 & 0.86 \\
\cite{SaK18} & - & 154$\pm$12 & 1.32 \\
\cite{BNa18} & - & 118 & 1.01 \\
{\bf This study} & 82$\pm$8 & 133$\pm$11 & 1.00, 1.14 \\
\noalign{\smallskip}\hline
\end{tabular}
\end{center}
\end{table}
Further, in our earlier prediction (JBA15), we primarily used a possible continuation of the
steady declining trend observed in unsigned solar polar fields at latitudes
$\ge 45^{\circ}$, starting from $\sim$1995 until the minimum of cycle 24, to estimate a value of
unsigned polar field and subsequently a value of heliospheric magnetic field (HMF) at the
minimum of cycle 24. The HMF value was then used as a precursor for predicting the peak SSN of
cycle 25. The study used solar photospheric magnetic fields (SPF) data covering the period
1975--mid-2014. Since then, we now have three additional years of observations (up to the
current data set of Dec. 2017) of SPF. Recently, \cite{IJB19} have claimed that the over
20-year steady decline in unsigned polar fields reported in JBA15 was followed by
an abrupt rise after July 2015 instead of a continuing declining trend. In this paper, we have
shown that this change in the declining trend of unsigned polar fields observed after July 2015 would
affect the estimated value of unsigned polar fields at the upcoming minimum of solar cycle 24 as
obtained in JBA15, and in turn our estimate of the amplitude of cycle 25. In addition, it has been argued
by some authors \citep{JaS15,ZGk15,San16}, that we might be heading towards a Maunder like Grand
minimum. In fact, sunspot numbers have been reconstructed over the past 1000 solar cycles, or $\sim$11,000 years,
using ${^{14}}$C records from tree rings, and these data have been used to identify 27 grand or prolonged
solar minima \citep{USK07}, implying that appropriate conditions can exist on the Sun to induce grand minima.
\cite{CKa12} and \cite{KCh13} used a flux transport dynamo model to characterise the onset of grand minima
seen in this $\sim$11,000 year long data set and brought out several important insights:
\begin{itemize}
\item Gradual changes in meridional flow velocity lead to a gradual onset of grand minima, while abrupt
changes lead to an abrupt onset.
\item One or two solar cycles before the onset of a grand minimum, the cycle period tends to become longer.
\end{itemize}
It may be noted that surface meridional flows over cycle 23 have shown gradual variations \citep{HRi10}
and cycle 24 started $\sim$1.3 years later than expected. There is also evidence of longer cycles before the
start of the Maunder and Sp\"{o}rer minimum \citep{MiK10}. Also, modelling studies of solar cycle 23, invoking
meridional flow variations over the cycle have shown that very deep minima are generally associated
with weak polar fields \citep{NMM11}. Given these conclusions and the fact that the declining trend in
photospheric magnetic fields is still continuing \citep{SaJ19}, it is reasonable to raise the question about
the peak SSN of cycle 25 if the oncoming solar minimum of cycle 24 were to take place in 2021 instead of
2020 as expected.
Thus, in this study, we re-estimate the amplitude of solar cycle 25. Importantly, we discuss the variations
of unsigned polar fields and HMF after July 2014, which reveal some new findings that are crucial in the context of
recent changes in the Sun's global magnetic field behaviour.
\begin{figure}
\centering
\includegraphics[width=12.0cm, height=8.0cm]{fig1.eps}
\caption{A comparison between the monthly SSN V1.0 (blue) and
the monthly SSN V2.0 (red) observations for solar cycles 14--24.
The recalibrated SSN V2.0 data has been in international use for SSN
observations since July 2015.}
\label{fig1}
\end{figure}
\section{Data and Methodology}
\subsection{Smoothed Sunspot Number (SSN)}
For SSN, we used SSN V1.0 and V2.0 observations obtained from the Royal
Observatory of Belgium, Brussels (http://www.sidc.be/silso/datafiles). The original
version SSN V1.0 was created in 1849 by R. Wolf, who derived the daily total sunspot
number by the formula $R_{z} = N_{s} + 10 \times N_{g}$, where $N_{s}$ is the number of individual sunspots and $N_{g}$
is the number of sunspot groups. The original version is maintained at the Zurich observatory, while a
recalibrated SSN V2.0 version was devised after Jul. 2015 and is maintained by the
Royal Observatory of Belgium. Observations of the SSN are available since 1749. However, in this
study, we have preferred to use SSN V1.0 and V2.0 observations from cycles 14--24 since only the HMF
values of cycles 14--24 were used in this study. Figure \ref{fig1} plots observations of
the monthly SSN V1.0 (in blue) and SSN V2.0 (in red) spanning solar cycles 14--24. It is apparent
from Fig.\ref{fig1} that there is no large change in the temporal variation of SSN V2.0, except
that the V2.0 values are about 40\%--70\% higher than V1.0. Also, there is no simple scaling factor
between V1.0 and V2.0 that could be used to re-calculate the predicted peak SSN of solar cycle 25.
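As a simple illustration, the Wolf counting rule quoted above can be sketched in Python. This is illustrative only; the observer scale factor \texttt{k} is an assumption not discussed in the text (station-dependent scale factors are applied in practice).

```python
# Sketch of the Wolf counting rule R_z = Ns + 10 x Ng quoted above.
# The observer scale factor k is an assumption (not discussed in the text).
def wolf_number(n_spots: int, n_groups: int, k: float = 1.0) -> float:
    """Daily total sunspot number from spot and group counts."""
    return k * (n_spots + 10 * n_groups)

# e.g. 23 individual spots distributed over 4 groups:
print(wolf_number(23, 4))  # -> 63.0
```

Note that, as stated above, no single constant factor of this kind converts V1.0 values into V2.0 values.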
We have therefore obtained the peak values of SSN in V1.0 and V2.0 during the solar maxima of cycles 14--24 and
listed them in Table \ref{tab-SSNmax}. It is clear from Table \ref{tab-SSNmax} that the peak SSN
for solar cycles 14--24 in the two versions have different values. Earlier, JBA15 directly employed
the correlation equation proposed by \cite{CLi11}. Based on the correlation
between the HMF at the solar minimum (B${_{min}}$) of the preceding cycle (n-1) and the peak value of
the 13-month smoothed sunspot number (SSN${_{max}}$) of the next cycle (n), these authors reported
a correlation equation given by
\begin{equation}
SSN{_{max}}= 63.4 \times B{_{min}} - 184.7
\label{eqCL}
\end{equation}
The peak values of SSN V1.0 for solar cycles 14 to 23 used by \cite{CLi11} were taken from the
National Oceanic and Atmospheric Administration Geophysical Data Center and are listed in the fourth
column of Table \ref{tab-SSNmax}. A comparison of the SSN values in the second and fourth columns of
Table \ref{tab-SSNmax} shows that the peak SSN V1.0 used in this study for cycles 14--17 are not
in agreement with the peak SSN V1.0 used by \cite{CLi11}. We have thus, in this paper, updated the
correlation between B${_{min}}$ and SSN${_{max}}$, derived by \cite{CLi11} and consequently also
updated the prediction for the amplitude of cycle 25 that was made earlier in JBA15.
\begin{table}
\begin{center}
\caption{Estimates of the peak values of SSN for solar cycles 14--24 in V1.0, V2.0,
and the one used by \cite{CLi11}.}
\label{tab-SSNmax}
\vspace{-0.4cm}
\begin{tabular}{@{}lccc@{}}
\hline\noalign{\smallskip}
Solar Cycles & $SSN_{max}$ & $SSN_{max}$ &$SSN_{max}$\\
& (V1.0) & (V2.0) & (\cite{CLi11})\\
\noalign{\smallskip}\hline\noalign{\smallskip}
Cycle 14 & 64.2 &107.1 &77.0\\
Cycle 15 & 105.4 &175.7 &126.5\\
Cycle 16 & 78.1 &130.2 &93.7\\
Cycle 17 & 119.2 &198.6 &143.0\\
Cycle 18 & 151.8 &218.7 &151.8\\
Cycle 19 & 201.3 &285.0 &201.3\\
Cycle 20 & 110.6 &156.6 &110.6\\
Cycle 21 & 164.5 &232.9 &164.5\\
Cycle 22 & 158.5 &212.5 &158.5\\
Cycle 23 & 120.8 &180.3 &120.8\\
Cycle 24 & 81.9 &116.4 & -\\
\noalign{\smallskip}\hline
\end{tabular}
\end{center}
\end{table}
\subsection{Solar Photospheric Fields (SPF)}
The SPF, for this study, were computed using medium-resolution
line-of-sight (LOS) synoptic magnetograms from the National Solar Observatory, Kitt Peak
(NSO/KP) and the Synoptic Optical Long-term Investigations of the Sun (NSO/SOLIS) facilities.
Each synoptic magnetogram is available in standard FITS format and represents
one Carrington rotation (CR) or 27.2753 day averaged SPF in
units of Gauss. The synoptic magnetograms used here were from Feb. 1975 to Dec. 2017,
covering CR1625--CR2197 and spanning solar cycles 21--24. We computed the unsigned
values of SPF in the latitude range $45^{\circ}$-$78^{\circ}$, referred to here as
polar magnetic fields (PMF). Further details about the computation of SPF can be found
in \cite{JaF18}. Researchers generally use the signed values of
polar fields for SPF studies. In the present
study, however, the unsigned values of polar fields in the latitude range
$45^{\circ}$-$78^{\circ}$ have been used, with the signed values of polar fields
employed only for comparison with the unsigned polar fields.
The signed and unsigned values of polar fields
in the latitude range $45^{\circ}$-$78^{\circ}$ were estimated
from the actual magnetic field values and from the absolute values of the
actual magnetic field, respectively.
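The signed versus unsigned distinction described above amounts to averaging the field values versus their absolute values over the latitude band. A minimal sketch, using synthetic (hypothetical, not NSO/KP) data:

```python
import numpy as np

# Minimal sketch of the signed vs unsigned polar-field estimates described
# above, on a synthetic latitude band (34 hypothetical samples, 45-78 deg).
rng = np.random.default_rng(0)
b_los = rng.normal(loc=1.5, scale=2.0, size=34)  # hypothetical LOS field, Gauss

signed_pmf = b_los.mean()            # signed: average the actual field values
unsigned_pmf = np.abs(b_los).mean()  # unsigned: average the absolute values

# By the triangle inequality the unsigned average bounds the signed one:
assert unsigned_pmf >= abs(signed_pmf)
```

The unsigned average is insensitive to polarity reversals, which is why its temporal behaviour can differ so strongly from that of the signed field.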
\subsection{Heliospheric Magnetic Field (HMF)}
We used daily measurements of HMF
obtained from the OMNI2 data base at 1 AU (http://gsfc.nasa.gov/omniweb) covering
the period Feb. 1975$-$Dec. 2017 that span solar cycles 21--24.
CR averaged values of HMF were derived in order to compare and correlate them with
CR averaged polar fields during solar cycle minima. We used CR averaged values for
1 year intervals around solar minima of cycles 20--23 \citep{WRS09} corresponding to
CR1642--1654, CR1771--1783, CR1905--1917, and CR2072--2084, respectively.
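The Carrington-rotation averaging step can be sketched as follows; the daily series here is a synthetic stand-in, not the OMNI2 data:

```python
import numpy as np

# Sketch of Carrington-rotation (CR) averaging of daily HMF values.
CR_DAYS = 27.2753                      # one Carrington rotation, in days
days = np.arange(82)                   # ~3 rotations of daily samples
b_daily = 5.0 + 0.5 * np.sin(2 * np.pi * days / CR_DAYS)  # hypothetical |B|, nT

cr_index = (days / CR_DAYS).astype(int)  # which rotation each day falls in
cr_means = np.array([b_daily[cr_index == i].mean()
                     for i in range(cr_index.max() + 1)])
# cr_means now holds one CR-averaged HMF value per rotation.
```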
\section{Results}
\subsection{Photospheric Magnetic Field (PMF)}
\label{polar}
\begin{figure}
\centering
\includegraphics[width=8.5cm, height=11cm]{fig2.eps}
\caption{(first panel) variations of NSO/KP signed PMF for the time period Feb. 1975--Dec. 2017.
Overplotted in solid red curve are smoothed NSO/KP fields while in solid black curve is WSO signed PMF.
(second panel) variations of NSO/KP unsigned PMF for the same time period as in
the first panel. The solid red and black lines are best fits to the declining trend for the annual means
while the dotted red and black lines are extrapolations of the best fits until 2020. The solid red line
is the best fit to all the annual means while the solid black line is the best fit to all the annual means
except for the years 2010 and 2011. The horizontal red and black dashed lines are marked at 2.2 G
and 2.0 G. (third panel) measurements of HMF during the same period. The horizontal red line
indicates the floor level of the HMF of 4.6 nT as proposed by \cite{SCl07}. (fourth panel) Plotted is
the monthly averaged SSN V2.0 data. The filled grey dots, in the top three panels, are CR measurements of
SPF, while the open blue circles are annual means with 1 $\sigma$ error bars. The vertical grey bands in
each panel demarcate 1 year intervals around the minima of solar cycles 20--23, while the vertical dotted
lines in each panel mark the solar maxima of cycles 21--24.
}
\label{fig2}
\end{figure}
The first panel of Figure \ref{fig2} plots the signed PMF in the latitude range
of $45^{\circ}$-$78^{\circ}$ for the period Feb.1975$-$Dec.2017, covering
solar cycles 21--24. We have overplotted the smoothed NSO/KP fields (solid
red curve) with a 13-month running mean, while for comparison, we have also
overplotted the signed WSO PMF (solid black curve) in the latitude range poleward
of $55^{\circ}$ for the period Apr. 1976--Apr. 2019, covering solar cycles 21--24.
It is clear that the overall temporal behaviour of NSO/KP signed PMF during solar
cycles 21--24 show a good agreement with WSO signed PMF. Thus, the medium
resolution NSO/KP signed PMF are useful to study the large scale nature of
PMF. The signed PMF in each solar cycle shows a maximum strength at the
start of the cycle, while at solar cycle maximum, it runs through zero and changes the
sign of the field. This is known as reversal of PMF or polar reversal. Subsequently, the
signed PMF again shows a maximum strength during the minimum of the cycle.
Thus, typically, the signed PMF shows an anti-solar cycle behaviour. In cycle 24,
after the zero-crossing or reversal of PMF, the signed PMF shows a clear rise in field
strength in the year 2015 after solar cycle maximum. Thereafter, it shows a nearly
constant value for the next two years, {\it{i.e.}} 2016 and 2017. The
steady value of signed PMF is evident up to Apr. 2019, about 1 year prior to the
solar minimum of cycle 24, from the variations of WSO signed PMF. It is to be noted that
based on zonal and meridional flow patterns during solar cycles 23 and 24, \cite{KHH18}
estimated that cycle 25 will begin in early 2020. Similar steady values of signed PMF a
few years before the solar minimum can also be seen during earlier solar cycles, {\it{i.e.}}
cycles 22 and 23. Thus, besides the typical anti-solar-cycle behaviour of the signed
PMF in each solar cycle, it also shows a steady value, or polar field plateau, prior to the minimum
of the solar cycle.
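The 13-month running mean used to smooth the NSO/KP fields can be sketched as follows; the centred flat window here is a simplification (the SIDC smoothing convention additionally half-weights the two end months):

```python
import numpy as np

# Sketch of a centred 13-month running mean, as used to smooth monthly
# solar time series. (A tapered variant half-weights the two end months.)
def smooth_13(monthly: np.ndarray) -> np.ndarray:
    """13-month running mean; output is 12 samples shorter than the input."""
    window = np.ones(13) / 13.0
    return np.convolve(monthly, window, mode="valid")

series = np.sin(np.linspace(0.0, 4.0 * np.pi, 120))  # hypothetical monthly data
smoothed = smooth_13(series)
assert smoothed.size == series.size - 12
```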
The second panel of Fig.\ref{fig2} plots the unsigned NSO/KP PMF in the latitude range
of $45^{\circ}$-$78^{\circ}$ for the period Feb.1975$-$Dec.2017, covering solar cycles
21--24. It is evident from a careful examination of the strength of the unsigned PMF
(see the solid red curve), referred to henceforth as $B{_{p}}$, that the unsigned PMF
shows a temporal behaviour entirely different from that of the signed PMF. The unsigned PMF shows
solar cycle like modulations in cycles 21 and 22, while, in cycle 23, no such solar cycle like
modulation is seen. Instead, the field strength declined steadily from the start
of cycle 23 until its end. Again, at the start of solar cycle 24, in the years 2010
and 2011, there was an increase in $B{_{p}}$. The annual means for the years 2010
and 2011 are shown by open black circles. After 2011, $B{_{p}}$ again declined from
2012 up to 2014, only to increase once more in the year 2015 after the solar maximum of cycle 24,
and it has been constant since then. Thus, in contrast to its behaviour in previous cycles,
$B{_{p}}$ in cycle 24 shows an anti-solar-cycle behaviour with a polar field plateau,
like the signed PMF: a maximum strength at the start of the cycle, a minimum
strength around solar cycle maximum, and a rise in strength after the solar
cycle maximum, thereafter attaining a steady value.
We further investigate whether the unexpected rise of $B{_{p}}$ during the
year 2015 and the subsequent constant value for the next two years would
change the declining trend. As seen in the years 2010 and 2011, we have already witnessed
a rise in the strength of PMF, but the declining trend of the PMF continued after 2011. Hence,
we believe that the break in the declining trend during 2015-2017 could be temporary and that
the declining trend may continue. However, the increase in the $B{_{p}}$ could affect its rate of
decline, and thus, change its expected value in 2020, {\it{i.e.}} at the expected minimum of cycle 24.
We, therefore, re-estimate the value of $B{_{p}}$ in 2020 assuming a continuing declining trend.
The solid red line in Fig.\ref{fig2} (second panel) is a least square fit to the declining trend of $B{_{p}}$
for all the annual means in the period 1995--2017, while the broken red line is
the extrapolation until 2020. The least square fit is statistically significant
with a Pearson correlation coefficient of $r = -0.91$, at a significance
level of $99\%$. Similarly, the solid black line in Fig.\ref{fig2} (second panel)
is a least square fit to the declining trend for all the annual means, with
the years 2010 and 2011 being left out, and the broken black line is an
extrapolation until 2020. The fit is statistically significant with a Pearson
correlation coefficient of $r = -0.94$, at a significance level of $99\%$. The
expected values of $B{_{p}}$ in 2020 for the above two cases are
$\sim$2.2 ($\pm$0.08) G and $\sim$2.0 ($\pm$0.06) G, as indicated by the dashed
horizontal lines in red and black, respectively in Fig.\ref{fig2} (second panel).
Thus, the average expected value of $B{_{p}}$ in 2020 would be $\sim$2.1 ($\pm$0.07) G.
The expected values of $B{_{p}}$ would be $\sim$2.1 ($\pm$0.08) G and $\sim$1.9 ($\pm$0.06) G
if the minimum is in 2021 instead. Thus, the average expected value of $B{_{p}}$ in 2021 would be
$\sim$2.0 ($\pm$0.07) G.
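The least-squares fit and extrapolation step can be sketched as follows; the annual means below are synthetic stand-ins (the actual NSO/KP values are not reproduced here):

```python
import numpy as np

# Sketch of the least-squares trend fit and extrapolation to the cycle-24
# minimum. The annual means are synthetic stand-ins, not the NSO/KP data.
years = np.arange(1995, 2018)
b_p = 3.9 - 0.07 * (years - 1995) + 0.05 * np.sin(years)  # hypothetical, Gauss

slope, intercept = np.polyfit(years, b_p, 1)  # least-squares linear fit
r = np.corrcoef(years, b_p)[0, 1]             # Pearson correlation coefficient
b_p_2020 = slope * 2020 + intercept           # extrapolated value at the minimum
b_p_2021 = slope * 2021 + intercept           # if the minimum slips by one year
```

Excluding particular years (here, 2010 and 2011 in the text) simply amounts to masking those entries before the fit.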
\subsection{Heliospheric Magnetic Field (HMF)}
\label{swp}
%
The third panel of Fig.\ref{fig2} plots the strength of HMF (B${_{HMF}}$) at
1 AU for the period Feb.1975$-$Dec.2017, covering cycles 21--24.
A global reduction in the B${_{HMF}}$ from solar cycle
22 through solar cycle 23 to solar cycle 24 is evident from
Fig.\ref{fig2} (third panel). Also, it is seen from Fig.\ref{fig2}
(third panel) that the HMF returns to a similar baseline value at each solar
minimum, which is known as the floor value of HMF. The floor level
of HMF is essentially determined by the baseline flux from the slow
solar wind flows. \cite{SCl07} estimated the floor level of
4.6 nT for HMF, indicated by a horizontal red line in Fig.\ref{fig2}
(third panel), from a correlation of B${_{HMF}}$ and sunspot number
in the period covering cycles 20--22. However, it can be seen that
the B${_{HMF}}$, during the minimum of cycle 23, went down well
below the floor level of 4.6 nT. Further, it is interesting to note,
from Fig.\ref{fig2} (third panel), that, in cycle 24, the B${_{HMF}}$ had
already approached the floor level of 4.6 nT by the year 2018,
about two years prior to the minimum of cycle 24. Thus, it is expected that
the B${_{HMF}}$ could again go down below the proposed floor
level of 4.6 nT in cycle 24. The fourth panel of Fig. \ref{fig2} plots the SSN V2.0
observations as a function of time for the period Feb.1975$-$Dec.2017, covering
solar cycles 21$-$24. As mentioned earlier, a global reduction of SSN from cycle
21 up to cycle 24 is evident from Fig.\ref{fig2} (fourth panel).
\begin{center}
\begin{figure}
\includegraphics[width=14.5cm, height=9cm]{fig3.eps}
\caption{B${_{HMF}}$ as function of B${_{p}}$ shown for the period
Feb. 1975--Dec. 2017 covering solar cycles 21--24 (left panel)
and for 1 year interval during the minima of cycles 20--23 (right panel). The
correlation coefficients, r = 0.4 and 0.5, are indicated in the top right corners
of the panels, respectively. The solid black lines in both the panels are
overall fit to all data points. The filled black and red dots in the left panel are
measurements for the period Feb. 1975--Dec. 1994 and Jan.1995--Dec.2017,
respectively, while the filled dots in different colours in the right panel are
measurements for cycles 20--23. The
solid blue lines in both panels indicate the floor levels of HMF of 4.6 nT as
computed by \citet{SCl07}, while the dash-dotted (left panel) and dashed (right
panel) horizontal lines indicate the floor levels of 4.2 and 3.2 nT, respectively, obtained in
this study.}
\label{fig3}
\end{figure}
\end{center}
Since the HMF results from solar photospheric fields being swept
out into the inner heliosphere and beyond, the declining
B${_{p}}$, beginning around the mid-1990s during solar cycle 22 and continuing during cycles
23 and 24, would have contributed to the observed global reduction in the
B${_{HMF}}$ and the reduction in the floor level of HMF during cycles
23 and 24. A plot of B${_{HMF}}$ vs B${_{p}}$ for the period
Feb.1975--Dec.2017 is shown in Figure \ref{fig3} (left). It is evident from Fig.\ref{fig3} that
the B${_{HMF}}$ values post-1995 (filled red-dots) show a clear reduction in their
strength as compared to the values prior to 1995 (filled black-dots). We found a moderately
statistically significant correlation of B${_{HMF}}$ with B${_{p}}$ with
a Pearson correlation coefficient of $r = 0.4$ at a significance level of
$99\%$ as indicated at the top right corner of Fig.\ref{fig3} (left).
The solid black line in Fig.\ref{fig3}
(left) is a best fit to all the data points between B${_{HMF}}$
and B${_{p}}$. The linear correlation of B${_{HMF}}$ and B${_{p}}$ can
thus be represented by the following equation,
\begin{equation}
B{_{HMF}} = 4.2 (\pm 0.2) + 0.54 (\pm 0.05) \times B{_{p}}
\label{eq1}
\end{equation}
which gives an intercept of 4.2 ($\pm$ 0.2) nT for B${_{HMF}}$ when
B${_{p}}$ = 0. This implies that the floor level of the HMF would be
4.2 nT even if the solar polar field were to drop to zero.
We, thus, see a reduced floor value of 4.2 nT for HMF (dash-dotted black
line in Fig.\ref{fig3} (left panel)), unlike the
proposed floor level of 4.6 nT by \cite{SCl07} (solid blue line in
Fig.\ref{fig3} (left panel)), from a correlation of B${_{HMF}}$
and B${_{p}}$ for the period covering solar cycles 21--24. The
reduced floor value of HMF is presumably due to the observed
global reduction of the B${_{HMF}}$ post-1995.
Since solar polar fields provide most of the HMF during solar minimum
\citep{SCK05}, we now consider the correlation of B${_{HMF}}$
and B${_{p}}$ only during solar minima, as reported in JBA15, which is
also shown in Fig.\ref{fig3} (right panel).
The overall fit to all data points between cycles 20--23
is shown by a solid black line in Fig.\ref{fig3} (right) with
an intercept of 3.2 ($\pm$0.5) nT for the HMF when B${_{p}}$ goes
to zero, represented by the equation
\begin{equation}
B{_{HMF}} = 3.2 (\pm 0.2) + 0.43 (\pm 0.11) \times B{_{p}}
\label{eq2}
\end{equation}
The floor level of HMF is thus, $\sim$3.2 nT indicated by a dashed black
line in Fig.\ref{fig3} (right panel), a value that has dropped by
more than 1 nT from the proposed floor level of 4.6 nT.
This is also due to the reduced HMF during solar cycles 23 and 24.
Thus, in order to show the contribution of the reduced HMF (due to the declining
B${_{p}}$ since the mid-1990s), we have plotted B${_{HMF}}$ vs B${_{p}}$, shown
by filled dots of different colours for each solar cycle, in Fig. \ref{fig3} (right panel).
It is evident from Fig.\ref{fig3} (right panel) that the values of B${_{HMF}}$
are above the floor level of 4.6 nT for the minima of cycles 20--22.
However, the values of B${_{HMF}}$ (filled blue dots in Fig.\ref{fig3} (right panel))
have gone below the floor level of 4.6 nT during the minimum of cycle 23, and
so we see the reduction in the floor level of HMF down to 3.2 nT. As stated earlier,
one would expect B${_{HMF}}$ to go below the floor level of 4.6 nT during the upcoming minimum
of cycle 24 too, meaning that the floor level could be around 3.2 nT.
We thus preferred equation \ref{eq2}, having a better correlation than equation \ref{eq1},
to derive the updated expected value of B${_{HMF}}$ at the minimum of cycle 24,
which was found to be 4.16 ($\pm$0.6) nT using the updated expected value of B${_{p}}$
at the minimum of cycle 24 of 2.1 G. The previous estimated value of B${_{HMF}}$ in
2020 obtained in JBA15 was 3.9 ($\pm$0.6) nT. If on the other hand the minimum of
solar cycle 24 occurs in 2021, then B${_{HMF}}$ in 2021 would be 4.12 ($\pm$0.6) nT.
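As a numerical check, evaluating the minimum-epoch correlation with the rounded coefficients quoted above reproduces these estimates to within the quoted uncertainties (the exact 4.16 and 4.12 nT values presumably come from the unrounded fit coefficients):

```python
# Evaluating the minimum-epoch B_HMF--B_p correlation with the rounded
# coefficients quoted above. The exact 4.16/4.12 nT values in the text
# presumably come from the unrounded fit coefficients.
def b_hmf_at_minimum(b_p: float) -> float:
    """Expected HMF strength (nT) at solar minimum for a given polar field (G)."""
    return 3.2 + 0.43 * b_p

b_2020 = b_hmf_at_minimum(2.1)  # ~4.1 nT for a minimum in 2020
b_2021 = b_hmf_at_minimum(2.0)  # ~4.1 nT for a minimum in 2021
```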
\subsection{Amplitude of solar cycle 25}
The correlations between B${_{min}}$ and SSN${_{max}}$
using SSN V1.0 (upper panel) and SSN V2.0
(lower panel) observations are shown in Fig.\ref{fig4}.
The values of B${_{min}}$ used in this study are from \cite{CLi11} (see Table
2 in \cite{CLi11}). The respective Pearson correlation coefficients,
r = 0.80 (for V1.0) and 0.76 (for V2.0) at a significance level of 99\% are indicated at the
top right corner of each panel.
The correlation between B${_{min}}$ and SSN${_{max}}$
for SSN V1.0 is given by
\begin{equation}
SSN{_{max}}= 52.6 (\pm 12.9) \times B{_{min}} - 136.5 (\pm 64)
\label{eqv1}
\end{equation}
Using the value of B${_{min}}$ of 3.9 nT for cycle 24 in equation \ref{eqCL}, JBA15
derived a SSN${_{max}}$ of 62$\pm$12 for cycle 25, indicated by a solid blue dot in the upper
panel of Fig.\ref{fig4}.
Using the updated value of B${_{min}}$ of 4.16 nT for cycle 24 (see section \ref{swp}) in equation
\ref{eqv1}, we predict a SSN${_{max}}$ of 82$\pm$8
for cycle 25. This value is indicated by a solid red dot with 1 sigma error-bar in the upper
panel of Fig.\ref{fig4}. The two other predictions, using SSN V1.0 observations, made
by \cite{KKR17} and \cite{KiA18} have been indicated by the coloured horizontal lines in
the upper panel of Fig.\ref{fig4}.
It is thus seen from the upper panel of Fig.\ref{fig4} that the prediction made in this study,
using SSN V1.0 observations that differ from the SSN observations used in JBA15,
clearly indicates a similar or relatively stronger cycle 25 compared to cycle 24 (which had a SSN${_{max}}$ of 81.9),
whereas the predictions made by JBA15 and other researchers \citep{KKR17,KiA18} indicated
a relatively weaker cycle 25 than cycle 24.
On the other hand, the correlation between B${_{min}}$ and SSN${_{max}}$ for SSN V2.0 data
is given by
\begin{equation}
SSN{_{max}}= 64.4 (\pm 17.9) \times B{_{min}} - 134.8 (\pm 89)
\label{eqv2}
\end{equation}
Using the value of B${_{min}}$ of 4.16 nT for cycle 24 in equation \ref{eqv2}, we predict a
SSN${_{max}}$ of 133$\pm$11 for cycle 25 if the solar minimum is in 2020. This
is shown by a solid red dot with 1 sigma error bar in Fig.\ref{fig4} (lower panel) with the
differently coloured horizontal lines indicating predictions for cycle 25 by other researchers,
using SSN V2.0 observations. The ratios of the SSN${_{max}}$ predicted for cycle 25 to
the SSN${_{max}}$ of cycle 24 by these authors are given in Table
\ref{tab-SSN}. It is already clear from both Table \ref{tab-SSN} and Fig.\ref{fig4}
(lower panel) as to why \cite{UHa18} and \cite{MSR18} argued for a solar cycle 25 that would be
more or less like solar cycle 24. In contrast, the prediction reported in this study using
SSN V2.0 observations with a SSN${_{max}}$ of 133$\pm$11 for cycle 25 suggests a relatively
stronger cycle 25 than cycle 24, which had a SSN${_{max}}$ of 116. Our prediction, in fact,
agrees with the predictions made by \cite{JiW18},
\cite{PSc18} and \cite{PeN18} who claimed a similar result for the amplitude of cycle 25.
The SSN${_{max}}$ for cycle 25 would be 82$\pm$8 and 133$\pm$11 in SSN V1.0 and SSN V2.0,
respectively, if solar minimum happens in 2020 and 80$\pm$8 and 130$\pm$11 in SSN V1.0 and
SSN V2.0, respectively, if solar minimum happens in 2021, suggesting again a relatively stronger cycle
25 than cycle 24, independent of the time of occurrence of the minimum of cycle 24.
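As a numerical check of the two correlation equations above, using the rounded coefficients from the text with the estimated B${_{min}}$ of 4.16 nT for cycle 24:

```python
# Applying the two B_min--SSN_max correlation equations quoted above
# (rounded coefficients from the text) with B_min = 4.16 nT for cycle 24.
def ssn_max_v1(b_min: float) -> float:
    return 52.6 * b_min - 136.5   # SSN V1.0 scale

def ssn_max_v2(b_min: float) -> float:
    return 64.4 * b_min - 134.8   # SSN V2.0 scale

print(round(ssn_max_v1(4.16)))  # -> 82, vs 82 +/- 8 quoted above
print(round(ssn_max_v2(4.16)))  # -> 133, vs 133 +/- 11 quoted above
```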
\begin{center}
\begin{figure}
\includegraphics[width=9cm, height=12cm]{fig4.eps}
\caption{SSN${_{max}}$ of the following cycles, from cycles 14--24, as a
function of B${_{min}}$ of the preceding cycles, from cycles 13--23,
shown for SSN V1.0 (upper panel) and SSN V2.0 (lower panel).
The filled black dots with solar cycle numbers in each panel are the
values of SSN${_{max}}$ as a function of B${_{min}}$ for each solar cycle
from cycles 14--24.
The solid black line in each panel is a best fit line
to all data points from cycles 14--24.
}
\label{fig4}
\end{figure}
\end{center}
\section{Discussion and conclusions}
%
Our study reports the temporal changes in solar photospheric fields obtained
from NSO/KP and NSO/SOLIS synoptic magnetograms, covering solar cycles 21--24,
specifically paying attention to the manner in which the unsigned
solar polar magnetic fields at latitudes $\ge$ $45^{\circ}$ behaved after the
mini-solar maximum of cycle 24. In the present study, it has been shown that, after
the solar cycle maximum of cycle 24, there has been an unexpected increase
in the unsigned solar polar field strength in the year 2015, which has subsequently
shown only a slow and steady change, interrupting the declining trend that had begun around
the mid-1990s and persisted for about 22 years. Importantly, it appears
that the unsigned solar polar fields have switched their behaviour from
solar-cycle-like in cycles 21 and 22 to anti-solar-cycle-like in cycle 24,
showing no correlation with the solar cycle in between, suggesting a transition
phase of solar magnetic fields during cycle 23.
As per the current understanding of
the solar dynamo process that gives rise to the solar cycle and as proposed by
solar dynamo models \citep{Cha10}, sunspot or toroidal fields are generated from
poloidal fields by solar differential rotation, while poloidal or polar fields
are regenerated from toroidal fields by the Babcock-Leighton mechanism
\citep{Bab61,Lei69}. This mechanism depends on the systematic tilt angle
distribution of bipolar sunspot regions, which, in turn, is determined by the
Coriolis force acting on the magnetic flux tubes that rise through the solar
surface at different latitudes to produce bipolar sunspot regions \citep{DCh93}.
This whole process results in a large scatter in the tilt angle distribution,
which, along with diffusion and surface flux transport processes, could
change the solar polar field behaviour in any given solar cycle, thereby making it totally
different from the previous cycle. The sudden transition in the behaviour of the
unsigned solar polar fields in cycle 24 can, thus, be
attributed to fluctuations in the Babcock-Leighton mechanism that decide the net
flux transported towards the poles and thereby ultimately determine the net
polar field strength at the end of a cycle.
Based on the unexpected rise in solar polar fields after July 2014, we
have re-estimated, in this study, the new average field strength of $\sim$2.1 G
for the unsigned solar polar fields in 2020 for the upcoming solar minimum
of cycle 24. This value is quite different from the estimate of unsigned solar
polar fields of $\sim$1.6 G in 2020 that was reported in JBA15.
A reduction in the strength of HMF at 1 AU is also clearly seen post
July 2014, during solar cycle 24. Also, we have shown that the strength of HMF in cycle 24, post solar
maximum and about 2 years prior to the minimum, has nearly approached the average floor level of HMF of
4.6 nT as proposed by \cite{SCl07}. Using the correlation between the strength of HMF and the unsigned
solar polar field strength during cycles 21--24, we found a floor level of 4.2 nT. This decrease in the
floor level of the HMF is actually due to the observed global reduction in the HMF during the last two
solar cycles 23 and 24.
Using the correlation
between the strength of HMF and the unsigned solar polar photospheric field
strength at minima of cycles 20-23,
we have estimated a value of 4.16 nT for the HMF in 2020 for the upcoming minimum of cycle 24.
Further, based on the correlation of HMF at solar minimum and SSN${_{max}}$ from
cycles 14--24, we estimated a value of SSN${_{max}}$ of 82$\pm$8 (V1.0) and 133 $\pm$11
(V2.0) for the upcoming solar cycle 25. Allowing for a possible delay in the minimum of cycle 24
by one year, {\it{i.e.}} to 2021, we also estimated a value of SSN${_{max}}$ of 80$\pm$8 (V1.0) and 130$\pm$11
(V2.0) for the upcoming solar cycle 25.
Our estimate suggests that the oncoming sunspot
cycle 25 will be relatively stronger than cycle 24, but weaker
than cycle 23. This is different from the prediction made by JBA15, which reported
a relatively weaker cycle 25 than cycle 24.
Using the values of solar spectral
irradiance at 10.7 cm (F10.7) and the averaged polar magnetic field, \cite{PSc18}
computed a solar dynamo amplitude (SODA) index. They used the SODA index as a precursor
for predicting the next cycle's amplitude and estimated a maximum SSN V2.0 of 135$\pm$25
for cycle 25. The Fe XIV coronal green line emission appears at high latitudes
($\ge$ $50^{\circ}$) just before solar cycle maximum, which subsequently drifts to the
poles. This is known as rush-to-the-poles (RTTP). Based on the correlations of the rise
rate of the RTTP to the delay time between the end of the RTTP and the maximum of the
following cycle, \cite{PeN18} estimated a maximum SSN V2.0 of 130 for cycle 25. Similarly,
using a surface flux transport model, \cite{JiW18} predicted the polar field strength at
the end of cycle 24 and estimated the amplitude of cycle 25 to be 125$\pm$32, indicating
a 10\% stronger cycle 25 than cycle 24. Also, using continuous century-scale data-driven
surface flux transport simulations, \cite{BNa18} reported a slightly stronger solar cycle
25 than cycle 24, with an SSN$_{max}$ ranging between 109 and 139 (V2.0). Thus,
using the revised SSN V2.0 and other existing correlations that relate to the strength
of the following cycle, other researchers have also arrived at a conclusion for the
amplitude of cycle 25 similar to the relatively stronger cycle 25 proposed in this
study. The relatively stronger upcoming cycle 25 can be understood from the fact that the
axial dipole moment during cycle 24 (by Dec. 2017) has been stronger than that
during cycle 23 \citep{JiW18}. This is in keeping with the flux transport dynamo model of
\cite{CCJ07}, wherein the authors predicted a weaker cycle 24 based on the weaker axial
dipole moment during cycle 23.
With space missions like the Parker Solar Probe being operational and upcoming
solar missions like the ADITYA-L1 mission, by India, planned for launch in
2020 \citep{JaS17}, cycle 25 is bound to reveal more crucial insights
into as yet unknown aspects of the internal workings of our sun.
\acknowledgments
This work has made use of NASA's OMNIWeb data services.
The authors thank the free data use policy of the National Solar
Observatory (NSO/KP, NSO/SOLIS and NSO/GONG), OMNI2 from NASA
and WDC-SILSO at Royal Observatory, Belgium, Brussels. SKB
acknowledges the support by the PIFI (Project No. 2015PM066)
program of the Chinese Academy of Sciences and the NSFC
(Grant No. 11750110422, 11433006, 11790301, and 11790305).
SA acknowledges an INSA Honorary Scientist position.
|
2006.03304
|
\section{Introduction}
\label{sec:intro}
Radio detection of cosmic-ray air-showers is a promising technique providing cost-effective and high-duty-cycle measurements.
Although radio emission from extensive air-showers (EAS) has been detected and studied for a long time, the technique became applicable for field measurements only at the beginning of this century, with the progress of modern digital data acquisition and processing systems.
Most ground-based EAS radio arrays perform measurements jointly with master detectors (particle or optical detectors), which produce the trigger for the entire facility.
The main challenge in the development of a self-triggered radio array is the low signal-to-noise ratio (SNR).
Plenty of radio-frequency interference (RFI) and natural and man-made background distort and overlap the EAS signal.
The impossibility of independent radio measurements makes it necessary to support each radio array with particle or light detectors, and consequently makes it impossible to build large cost-effective radio arrays.
At present there is no successfully operating self-triggered ground-based EAS radio array, except ARIANNA~\cite{Barwick:2016mxm}, which indeed operates in an extremely radio-quiet area.
Besides it, research in this direction is carried out at a few experiments, namely OVRO-LWA~\cite{ovro} and LOFAR~\cite{lofar}.
The main bottleneck in the radio detection of EAS is the increased flow of raw data compared to other detectors, since the analysis of radio data requires recording the full electric field from each antenna station (cf. charges or waveforms from particle and Cherenkov detectors).
A self-trigger for radio requires real-time processing of raw data at a rate of about one GB/s for a single antenna station. To decrease the resulting data flow dumped from the fast ADC, it is also necessary to apply additional online reconstruction (e.g. of the arrival direction).
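As a rough sanity check of the quoted per-station rate (a sketch; the assumption that 12-bit samples are stored in 16-bit words is ours, not a statement about the actual Tunka-Rex DAQ):

```python
# Back-of-the-envelope raw data rate for one antenna station:
# 200 MHz sampling, 12-bit ADC samples (assumed padded to 16 bits = 2 bytes),
# two polarization channels per station.
SAMPLE_RATE_HZ = 200e6
BYTES_PER_SAMPLE = 2      # 12-bit sample in a 16-bit word (assumption)
CHANNELS = 2              # two perpendicular polarizations

rate_bytes_per_s = SAMPLE_RATE_HZ * BYTES_PER_SAMPLE * CHANNELS
print(f"{rate_bytes_per_s / 1e9:.1f} GB/s")  # 0.8 GB/s, i.e. of order GB/s
```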
The proposed pipeline can be implemented using FPGA technology.
In the present work we discuss the architecture of a radio trigger and algorithms for the classification of broadband pulses.
\section{Tunka-Rex}
\label{sec:trex}
Tunka Radio Extension (Tunka-Rex)~\cite{trex} is a digital antenna array located in Eastern Siberia, which measures EAS radio emission in the primary-particle energy range of $10^{17}$--$10^{18}$ eV and in the frequency band 30--80 MHz.
Measurements are triggered by the host detectors Tunka-133 (air-Cherenkov array) and Tunka-Grande (scintillator array) of TAIGA~\cite{Kostunin:2019nzy}, depending on the operation mode.
The array consists of 63 antennas covering an area of about 3 km$^{2}$.
Each antenna station is based on a SALLA (Short Aperiodic Loaded Loop Antenna)~\cite{salla} and measures two perpendicular horizontal polarizations of the EAS pulse.
The full signal chain has the following structure: SALLA $\to$ 30 m cable $\to$ analog filter-amplifier $\to$ ADC with ring buffer (200 MHz, 12 bit) $\to$ DAQ.
When the trigger is received, Tunka-Rex records radio traces with a duration of 5 {\textmu}s from the ring buffer of each station and sends them to the DAQ.
The EAS signal has a duration of a few tens of nanoseconds and is located in a specific region of the recorded trace, the ``signal window''.
The signal timestamp in the recorded trace is different for each station, but is known from the cable lengths and hardware delays.
The standard Tunka-Rex pipeline for data processing and reconstruction of EAS parameters uses the timestamp and arrival direction given by the host detector.
For a detailed explanation of the recent quality cuts and the reconstruction procedure see Ref.~\cite{trex-template}.
In recent years, all data collected by Tunka-Rex have been stored in the Tunka-Rex Virtual Observatory (TRVO)~\cite{trvo}, a database with fast access to different data layers. Using this database, we develop and test methods for independent, self-triggered detection of EAS pulses in the raw data flow.
\section{Architecture of self trigger}
\label{sec:arch}
The first level of independent EAS pulse detection is a threshold trigger at channel, station and cluster level.
In ideal radio-quiet conditions it would be possible to perform measurements using this method alone, but in a realistic environment we have to apply additional processing before triggering and recording, because a significant amount of transient RFI, in conjunction with the limited bandwidth of the DAQ communication lines and storage capacity, leads to an unacceptable fraction of dead time.
The bottleneck of modern data acquisition systems is the data transmission channels to the data center.
A simple threshold trigger fires frequently with a low fraction of true positives, which overloads the data acquisition system.
Of course, this problem can be solved using fast network equipment, powerful servers and large storage, but this leads to a significant increase in the cost of the final setup.
To reduce the load on the transmission channels, it is necessary to implement hardware rejection of background data.
This step includes the development of methods for deciphering the nature of short radio pulses, in order to separate noise from the EAS pulse, as well as the subsequent hardware implementation of these methods.
The classical approach to radio detection uses advanced offline analysis, but such analysis is not applicable to real-time trigger generation.
For a hardware trigger implementation we need a set of simple algorithms that can be easily implemented on FPGA.
Our approach suggests using compact antenna clusters and a multi-level trigger generation scheme.
Experience has shown that compact antenna clusters bring better fault tolerance and improved reconstruction accuracy.
The main idea of this approach is a gradual decrease of the data flow with a simultaneous increase in the complexity of the algorithms searching for the EAS signal.
\pagebreak
We suggest the following hierarchical structure of algorithms for real-time processing, to reduce the data flow and decrease the dead time caused by readout:
\begin{enumerate}
\item \textit{Channel level.}
Improving signal quality using digital filters, such as a bandpass filter and a median filter.
\item \textit{Station level.}
Two-channel analysis and data pre-selection.
The analysis includes an SNR cut and RFI suppression algorithms.
RFI from a known source (a typical case is other detectors located at the Tunka facility, associated with pulses from the near-horizon region) can be easily suppressed by a specially tuned algorithm.
The trigger is produced under the following conditions: the amplitude and SNR of the signal exceed given thresholds, and the pulse is not associated with known RFI sources.
\item \textit{Cluster level.}
Pre-selection of station-level data by cluster-level analysis.
The analysis includes matched filtering, arrival-direction cuts and suppression of known RFI by waveform.
\end{enumerate}
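Before committing to an FPGA design, the channel and station levels of this hierarchy can be prototyped in software. The following is a minimal illustration (function names, window lengths and thresholds are our own placeholders, not Tunka-Rex parameters), assuming NumPy/SciPy:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, medfilt

def channel_level(trace, fs=200e6, band=(30e6, 80e6)):
    """Channel level: bandpass to the 30-80 MHz band, then a median filter."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    return medfilt(sosfiltfilt(sos, trace), kernel_size=3)

def station_level(ch1, ch2, snr_threshold=10.0):
    """Station level: simple SNR cut on the two polarization channels."""
    def snr(tr):
        noise = np.std(tr[: len(tr) // 4])  # noise estimate from trace start
        return np.max(np.abs(tr)) / noise
    return snr(ch1) > snr_threshold and snr(ch2) > snr_threshold

# Toy usage: a strong pulse embedded in noise passes, pure noise does not.
rng = np.random.default_rng(0)
t = np.arange(1024) / 200e6
noise1, noise2 = rng.normal(0, 1, (2, t.size))
pulse = 50 * np.exp(-((t - 3e-6) / 30e-9) ** 2) * np.sin(2 * np.pi * 55e6 * t)
print(station_level(channel_level(noise1 + pulse), channel_level(noise2 + pulse)))  # True
print(station_level(channel_level(noise1), channel_level(noise2)))                  # False
```

The cluster level (matched filtering and arrival-direction cuts) would then combine such station-level decisions across a compact cluster.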
\section{Methods of classification of broadband pulses}
\label{sec:methods}
Effective noise reduction is not possible without a clear understanding of the noise environment of the experiment.
Short pulses like the EAS ones are the most difficult to filter.
Such pulses can be generated by various kinds of pulsed electronics, and they have a stable shape and amplitude.
To identify such pulses, we developed a method for searching for pulses from stable RFI sources.
The main idea of this method is the application of filters tuned for RFI detection, followed by an analysis of the amplitude and arrival direction of the RFI.
As a first step, we developed a single-station (two-channel) analysis and tested it on Tunka-Rex data.
As the RFI detector we use the root mean square (RMS) in a window of 640~ns.
The method is defined as follows:
\begin{enumerate}
\item Apply a sliding window to each recorded trace and calculate the RMS in this window separately for each channel.
The window containing the maximum RMS is defined as the window containing an RFI pulse candidate.
The RFI position is defined as the position of the peak of the Hilbert envelope in the corresponding window.
\item Create a channel-to-channel RMS distribution of the RFI candidates for all processed data.
\item Find RFI cores in this distribution by an iterative Gauss fit (see Fig.~\ref{fig:RMSDistr}).
\item Average the signals inside each core of the RMS distribution to obtain the template of the RFI pulse.
\end{enumerate}
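The first step above can be sketched as follows (a simplified single-channel illustration with our own variable names; the 640 ns window corresponds to 128 samples at 200 MHz):

```python
import numpy as np
from scipy.signal import hilbert

def rfi_candidate(trace, window=128):
    """Sliding-window RMS; return (max RMS, pulse position) for one channel.

    The window with the maximum RMS is taken to contain the RFI candidate;
    the pulse position is the peak of the Hilbert envelope in that window.
    """
    # RMS in a sliding window via a cumulative sum of squares.
    sq = np.concatenate(([0.0], np.cumsum(trace ** 2)))
    rms = np.sqrt((sq[window:] - sq[:-window]) / window)
    start = int(np.argmax(rms))                      # best window start index
    envelope = np.abs(hilbert(trace[start:start + window]))
    return rms[start], start + int(np.argmax(envelope))

# Toy usage: locate a burst injected around sample 500.
rng = np.random.default_rng(1)
trace = rng.normal(0, 1, 2048)
trace[500:520] += 20 * np.sin(2 * np.pi * 0.3 * np.arange(20))
max_rms, pos = rfi_candidate(trace)
print(max_rms, pos)  # position close to the injected burst
```

The channel-to-channel RMS distribution of step 2 is then simply the scatter of these per-channel maximum-RMS values over many traces.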
\begin{figure}[t]
\includegraphics[width=1\linewidth]{images/distr_templ.pdf}
\caption{
\textit{Top:} examples of channel RMS distributions from neighboring Tunka-Rex stations.
\textit{Bottom:} Pulses obtained by averaging of signatures indicated by red circles.
One can see that the detected RFI pulses are similar for all three antennas, which can point to the location of the noise source.
}
\label{fig:RMSDistr}
\end{figure}
Applying this method to the data recorded at a given station allows one to extract the specific types of high-amplitude pulses detected at this station.
These high-amplitude pulses are then pooled into a pulse library specific to each station.
The pulse library will be used for the rejection of high-amplitude pulses that are not associated with an EAS pulse.
The proposed rejection criterion is the convolution between the noise pulse and the raw time trace surpassing a threshold.
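This criterion can be sketched as a normalized cross-correlation of the raw trace with each library template, rejecting the pulse when the correlation exceeds a threshold (an illustration with placeholder names and an arbitrary threshold; the actual threshold would be tuned on TRVO data):

```python
import numpy as np

def matches_library(trace, templates, threshold=0.8):
    """Reject a pulse if its normalized correlation with any known
    RFI template in the station's pulse library exceeds the threshold."""
    trace = trace / np.linalg.norm(trace)
    for tmpl in templates:
        tmpl = tmpl / np.linalg.norm(tmpl)
        # Full cross-correlation: maximum over all relative time shifts.
        if np.max(np.abs(np.correlate(trace, tmpl, mode="full"))) > threshold:
            return True   # known RFI -> reject
    return False          # keep as an EAS candidate

# Toy usage with one library template.
rng = np.random.default_rng(2)
rfi_template = np.sin(2 * np.pi * 0.2 * np.arange(64)) * np.hanning(64)
trace = np.zeros(512)
trace[200:264] = rfi_template                         # trace contains a library pulse
print(matches_library(trace, [rfi_template]))         # True: rejected
print(matches_library(rng.normal(0, 1, 512), [rfi_template]))  # False: kept
```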
\section{Discussion and conclusion}
\label{sec:concl}
We developed a method for the detection and shape determination of RFI from stable sources using data from only one antenna station.
Tests using Tunka-Rex archival data (TRVO) showed the efficiency of this technique.
Our method can be easily adapted to various methods of RFI detection, such as matched filtering with a known RFI template and neural-network processing.
This technique can also be adapted to search for EAS signals.
In continuation of this analysis we will develop methods of multi-station analysis for RFI identification.
By investigating the timing profile and lateral distribution function of the RFI, we can determine the position of the noise source.
Moreover, by analyzing the data from several antenna stations, we can reconstruct the shape of the RFI pulses in more detail.
The next steps towards independent radio triggering are the optimization of the algorithms defining RFI pulses, building a library of specific pulses for each station, and developing filtering methods and testing their efficiency with TRVO data.
In this way we will reduce the raw data flow and be ready to develop and test a hardware prototype.
In addition to the self-trigger technique, the algorithms developed in the frame of this work can complement methods for lowering the threshold, e.g. those using deep learning~\cite{Bezyazeekov:2019jbe}, as well as future low-threshold detectors~\cite{Schroder:2018dvb} or precise 21-cm cosmology operating in the 30--80~MHz domain~\cite{Kostunin:2019gho}.
\acknowledgments
This work was supported by the Russian Federation Ministry of Science and High Education (agreement \textnumero~075-15-2019-1631, project. FZZE-2020-0024), by the Russian Science Foundation Grant No. 19-72-00010 (section 2,3) and by Russian Foundation for Basic Research grant 18-32-20220.
|
1302.1481
|
\section{\bigskip Introduction}
Topological superconductors (TSCs) and superfluids (TSFs)\cite{Qi,Qi1} have
attracted considerable interest in condensed matter physics because of their
potential applications in fault-tolerant topological quantum computation
(TQC)\cite{Nayak}. One of the remarkable features of TSCs/TSFs is the helicity
or chirality of the unconventional pairings. Unfortunately, there are very
few natural materials\cite{Osheroff,Mackenzie} exhibiting these kinds of
unconventional pairings. Starting with the pioneering work by Fu $et$ $al$.
\cite{Fu}, some new classes of hybridized systems
\cite{Sau,Mao,Qi2,Suk} have recently been proposed as possible candidates for TSCs,
where the unconventional pairings are induced by proximity to
conventional $s$-wave superconductor films. However, the impurities or
disorder in the materials hosting the electron gas increase the difficulty
of investigating the topological properties of the hybridized systems in
experiments\cite{Mourik}. Therefore, it is not only interesting but
also necessary to design other systems that present TSC/TSF phases.
Ultra-cold atomic gases combined with optical lattice technology provide an
ideal platform to realize and investigate topological
phases\cite{Zhu,Sato,Shao,Zhang1,Zhang2} due to their controllability and
cleanness. In particular, some chiral $p$-wave TSFs\cite{Sato, Zhang1} have
been proposed based on laser-induced artificial gauge fields
\cite{Osterloh,Ruseckas} in cold-atom systems. The effect of the artificial
gauge fields is equivalent to spin-orbit coupling, a key ingredient for inducing
topological phases. More recently, several experimental groups reported the
realization of strong spin-orbit coupling in ultra-cold fermionic atomic gases of
$^{40}$K and $^{6}$Li\cite{Wang,Cheuk}. This new technique brings great
hope of realizing many exotic states related to spin-orbit coupling.
Recently, triangular optical lattices (TOLs) have been widely investigated in
experiment and theory, and external fields and different types of
interactions among the trapped ultra-cold atoms can induce rich quantum phases
in TOLs\cite{Struck,Hauke,Tieleman}. In this paper, we propose that an
exotic chiral $f$-wave TSF can be realized through an effective $k^{3}$
Rashba spin-orbit coupling (RSOC)\cite{Rashba}, a Zeeman field (ZF) and an $s$-wave
Feshbach resonance in TOLs. The effective $k^{3}$
RSOC and ZF are produced by laser-atom interactions through
modulating the applied laser beams. The $s$-wave Feshbach resonance is utilized to
induce the SF states\cite{Chin} of the trapped atoms. We find that there
exists a phase transition separating the TSF and the normal superfluid (NSF),
which is determined by the bulk-gap-closing mechanism\cite{Sato3}. The TSF
resembles the SF with $f$-wave pairing symmetry\cite{Hung}, which is consistent
with the geometrical symmetry of the TOLs. The chiral $f$-wave TSF is fully
gapped in the bulk and has three chiral gapless edge states located on the
boundary. More interestingly, the TSF can be modulated by initializing
the lasers. Furthermore, there is one stable Majorana fermion bound to each
vortex in the TSF, and the commensurability between the SF vortex lattice
structures and the TOLs is an advantage for investigating the properties of
Majorana fermions. Hence, these properties make the system a potential
candidate for performing TQC.
The paper is organized as follows. In Sec. II, we propose a scheme to simulate
the RSOC and ZF through laser-atom interactions in triangular lattices, and
an effective tight-binding Hamiltonian describing the fermionic atoms in the dark
states is deduced. In Sec. III, applying the $s$-wave Feshbach
resonance, we discuss the properties of the TSF and NSF within the mean-field
approximation. Furthermore, we discuss the Majorana zero mode in the vortex
structure of the TSF state. In Sec. IV, we summarize our results.
\section{\bigskip Simulation and Model Hamiltonian}
\begin{figure}[ptb]
\begin{center}
\includegraphics[width=1.0\linewidth]{fig1.eps}
\end{center}
\caption{(Color online) (a) Uniform triangular lattices are formed from the
maxima of the potential $V(\mathbf{r})=V_{L}\sum_{i=1}^{3}\cos(\mathbf{k}_{i}\cdot\mathbf{r})$.
The defined lattice vectors
$\mathbf{s}_{i}$ and $\mathbf{d}_{i}$ are shown. The hexagonal zone encircled
by the black-dashed lines is the unit cell of the triangular lattice. (b) The
Brillouin zone (BZ) of the triangular lattice, with the high-symmetry points
marked.}
\end{figure}
Firstly, we apply three blue-detuned laser beams to create a two-dimensional
TOL, which can trap atoms at the lattice sites. The three laser beams have the same
wave-vector length but different polarizations, and are applied along three
different directions: $\pm\frac{\sqrt{3}}{2}\hat{e}_{x}-\frac{1}{2}\hat{e}_{y}$
and $\hat{e}_{y}$, respectively. The total potential is given by
$V(\mathbf{r})=V_{L}\sum_{i=1}^{3}\cos(\mathbf{k}_{i}\cdot\mathbf{r})$ with
wave vectors $\mathbf{k}_{1,2}=k\left( \pm\frac{\sqrt{3}}{2},\frac{1}{2}\right)$
and $\mathbf{k}_{3}=k\left( 0,1\right)$. The pattern of the
potential is shown in Fig.~1(a), where the maxima form a perfect TOL.
In order to simulate the RSOC, we consider ultra-cold fermionic atoms
trapped in the TOL and having a tripod-type level configuration (e.g., the
lowest three Zeeman levels of $^{6}$Li atoms near the broad $s$-wave Feshbach
resonance)\cite{Ruseckas,Stanescu,Zhu}, shown in Fig. 2(a). Three degenerate
hyperfine ground states $|1\rangle$, $|2\rangle$ and $|3\rangle$ are coupled
to an excited state $|4\rangle$ through two sets of spatially modulated lasers
with the corresponding Rabi frequencies $\Omega_{v,1}$, $\Omega_{v,2}$ and
$\Omega_{v,3}$, with $v=a,b$ denoting the two independent sets. The Rabi
frequencies can be parameterized as
$\Omega_{v,1}=\frac{1}{2}\Omega_{v}\sin\theta_{v}\cos\phi_{v}e^{iS_{v,1}}$,
$\Omega_{v,2}=\frac{1}{2}\Omega_{v}\sin\theta_{v}\sin\phi_{v}e^{iS_{v,2}}$ and
$\Omega_{v,3}=\frac{1}{2}\Omega_{v}\cos\theta_{v}e^{iS_{v,3}}$, with
$\sum_{i=1}^{3}\left\vert \Omega_{v,i}\right\vert ^{2}=\frac{1}{4}\Omega_{v}^{2}$.
Thus, the system can be described by the Hamiltonian
\begin{equation}
H_{0}=H_{tb}+H_{l-a} \label{H_lat}
\end{equation}
with
\begin{equation}
H_{tb}=-\sum_{i,\alpha}\mu a_{i,\alpha}^{\dag}a_{i,\alpha}-\sum_{i,j,\alpha,\beta}t_{i,j}a_{i,\alpha}^{\dag}a_{j,\beta} \label{H_tb}
\end{equation}
and
\begin{equation}
H_{l-a}=\underset{i,v}{\sum}\delta_{v}a_{i,4}^{\dag}a_{i,4}-\underset{i,v,\alpha}{\sum}\hbar\Omega_{v,\alpha}a_{i,\alpha}^{\dag}a_{i,4}+H.c.
\label{H_la}
\end{equation}
Here, $H_{tb}$ is the tight-binding Hamiltonian describing atom hopping
between different sites, and $H_{l-a}$ describes the laser-atom coupling.
$\mu$ is the chemical potential, $a_{i,\alpha}^{\dag}$ is the creation
operator of an atom on site $i$ in state $|\alpha\rangle$ with $\alpha=1,2,3$,
$t_{i,j}$ is the hopping integral between sites $i$ and $j$, and $\delta_{v}$ is
the detuning from the excited state $|4\rangle$.
Since the energy scale of $\delta_{v}$ and $\hbar\Omega_{v,\alpha}$ is much
larger than that of $\mu$ and $t_{i,j}$ (see the discussion of the
parameters below), we first consider $H_{l-a}$. The eigenvalues of
$H_{l-a}$ can be obtained by diagonalization, namely
$E_{i,v,n=1,2,3,4}=0,0,\frac{1}{2}(\delta_{v}\mp\sqrt{\delta_{v}^{2}+\hbar^{2}\Omega_{v}^{2}})$.
The corresponding eigenstates (dressed states) are\begin{widetext}
\begin{align}
|D_{i,v,1}\rangle & =\sin\phi_{v}e^{iS_{v,3,1}}\left\vert 1\right\rangle
-\cos\phi_{v}e^{iS_{v,3,2}}\left\vert 2\right\rangle \nonumber\\
|D_{i,v,2}\rangle & =\cos\theta_{v}\cos\phi_{v}e^{iS_{v,3,1}}\left\vert
1\right\rangle +\cos\theta_{v}\sin\phi_{v}e^{iS_{v,3,2}}\left\vert
2\right\rangle -\sin\theta_{v}\left\vert 3\right\rangle \nonumber\\
|B_{i,v,1}\rangle & =\frac{1}{\sqrt{\Omega_{v}^{2}+(2E_{i,v,3})^{2}}}\left[
\Omega_{v}\left( \sin\theta_{v}\cos\phi_{v}e^{iS_{v,3,1}}\left\vert
1\right\rangle +\sin\theta_{v}\sin\phi_{v}e^{iS_{v,3,2}}\left\vert
2\right\rangle +\cos\theta_{v}\left\vert 3\right\rangle \right)
-2E_{i,v,3}e^{iS_{v,3}}\left\vert 4\right\rangle \right] \nonumber\\
|B_{i,v,2}\rangle & =\frac{1}{\sqrt{\Omega_{v}^{2}+(2E_{i,v,4})^{2}}}\left[
\Omega_{v}\left( \sin\theta_{v}\cos\phi_{v}e^{iS_{v,3,1}}\left\vert
1\right\rangle +\sin\theta_{v}\sin\phi_{v}e^{iS_{v,3,2}}\left\vert
2\right\rangle +\cos\theta_{v}\left\vert 3\right\rangle \right)
-2E_{i,v,4}e^{iS_{v,3}}\left\vert 4\right\rangle \right]. \label{eigenstate}
\end{align}
\end{widetext}Here, $|D_{i,v,1/2}\rangle$ represent the zero-energy dark states,
while $|B_{i,v,1/2}\rangle$ represent the nonzero-energy bright states. That
means the energy of the dark states is not adjusted by the laser fields. Moreover,
the dark states $|D_{i,v,1/2}\rangle$ have no coupling with the
excited state $|4\rangle$. Therefore, the dark states are stable under atomic
spontaneous emission. With the adiabatic approximation\cite{Zhu,Stanescu}, we
can neglect all the couplings that simultaneously involve the dark states and
bright states and reduce the Hamiltonian $H_{0}$ into the subspace spanned by
the dark states. In general, the dark states $|D_{i,a,1/2}\rangle$ produced by
laser set $a$ (see the red lines in Fig. 2(a)) are different from the dark
states $|D_{i,b,1/2}\rangle$ produced by laser set $b$ (see the green lines in
Fig. 2(a)). However, the two sets of dark states can be made the same by
initializing the parameters of the lasers, and this equivalence is attributed
to the periodicity of the TOLs. In the present work, we concentrate on the two
sets of laser configurations illustrated in Fig. 2(b) and (c). The initialized
parameters for the laser fields are $\theta_{a}=k_{a,2}y+\varphi$,
$\phi_{a}=\frac{\pi}{4}$, $S_{a,1}=k_{a,1}x$, $S_{a,2}=-k_{a,1}x$,
$S_{a,3}=k_{a,3}z$ and $\theta_{b}=k_{b,1}x+\varphi$,
$\phi_{b}=k_{b,2}y+\frac{\pi}{4}$, $S_{b,1}=0$, $S_{b,2}=0$, $S_{b,3}=k_{b,3}z$,
where $\varphi$ is an arbitrary phase and $k_{v,1}$, $k_{v,2}$ and $k_{v,3}$
are the wave vectors of the lasers along the $x$, $y$, $z$ axes, respectively.
The wave vectors of the lasers are initialized to fulfil the relations
$k_{v,1}$=$4\pi$ and $k_{v,2}$=$4\pi/\sqrt{3}$, so that the commensuration with
the TOLs is guaranteed. Now, $|D_{i,v,1}\rangle$=$\frac{\sqrt{2}}{2}(\left\vert
1\right\rangle -\left\vert 2\right\rangle )$ and
$|D_{i,v,2}\rangle$=$\frac{\sqrt{2}}{2}(\left\vert 1\right\rangle +\left\vert
2\right\rangle )\cos\varphi-\sin\varphi\left\vert 3\right\rangle$ for both
$v=a$ and $b$. That means the Hamiltonian (\ref{H_lat}) can be projected into
the subspace spanned by $|D_{i,\uparrow}\rangle$ and $|D_{i,\downarrow}\rangle$,
with $|D_{i,\uparrow}\rangle\equiv|D_{i,v,1}\rangle$ and
$|D_{i,\downarrow}\rangle\equiv|D_{i,v,2}\rangle$, if the atoms are initially
pumped to these dark states and remain in them. Here, we use
$\sigma=\uparrow,\downarrow$ to denote the two pseudo-spins. Then, in the
dark-state subspace, the Hamiltonian $H_{0}$ can be projected into the
following form:
\begin{figure}[ptb]
\begin{center}
\includegraphics[width=1.0\linewidth]{fig2.eps}
\end{center}
\caption{(Color online) Illustration of the light-atom interaction for the
generation of effective non-Abelian gauge fields and effective Zeeman fields.
(a) The configuration of the hyperfine levels of the ultra-cold atom and three
sets of laser beams characterized by the Rabi frequencies $\Omega_{a,i}$,
$\Omega_{b,i}$ and $\Omega_{c,j}$ with $i=1,2,3$ and $j=1,2$. The atom and the
laser fields interact through Raman-type coupling with a large
single-photon detuning $\delta_{a/b/c}$. The laser-beam configurations for
$\Omega_{a}$, $\Omega_{b}$ and $\Omega_{c}$ are shown in (b), (c) and (d). (e)
The relative energy levels modulated by the atom-laser couplings.}
\end{figure}
\begin{equation}
H_{eff}=-\sum_{i,\sigma}\mu c_{i,\sigma}^{\dag}c_{i,\sigma}-\sum_{i,j,\sigma,\sigma^{\prime}}t_{i,j}c_{i,\sigma}^{\dag}U_{\sigma,\sigma^{\prime}i,j}c_{j,\sigma^{\prime}}\text{.} \label{H_lat2}
\end{equation}
Here, $c_{i,\sigma}^{\dag}$ is the creation operator of an atom on site $i$ in
the dark state $|D_{i,\sigma}\rangle$. The Peierls phase factor is
\begin{equation}
U_{\sigma,\sigma^{\prime}i,j}=U_{\sigma,\sigma^{\prime}i}U_{\sigma,\sigma^{\prime}j}^{\dag}=e^{i\int_{\mathbf{r}_{j}}^{\mathbf{r}_{i}}(\mathbf{\tilde{A}}_{a,\sigma,\sigma^{\prime}}+\mathbf{\tilde{A}}_{b,\sigma,\sigma^{\prime}})\cdot d\mathbf{r}} \label{phase factor}
\end{equation}
where $\mathbf{\tilde{A}}_{v,\sigma,\sigma^{\prime}}$ is the
laser-field-induced gauge vector potential,
$\mathbf{\tilde{A}}_{v,\sigma\sigma^{\prime}}=i\hbar\langle D_{v,\sigma}|\nabla|D_{v,\sigma^{\prime}}\rangle$.
We list the forms of $\mathbf{\tilde{A}}_{v,\sigma\sigma^{\prime}}$ for completeness:
\begin{align}
\mathbf{\tilde{A}}_{v,\uparrow\uparrow} & =\hbar\left( \cos^{2}\phi_{v}\nabla S_{v,2,3}+\sin^{2}\phi_{v}\nabla S_{v,1,3}\right) \nonumber\\
\mathbf{\tilde{A}}_{v,\uparrow\downarrow} & =\hbar\cos\theta_{v}\left(\frac{1}{2}\sin2\phi_{v}\nabla S_{v,1,2}-i\nabla\phi_{v}\right) \nonumber\\
\mathbf{\tilde{A}}_{v,\downarrow\downarrow} & =\hbar\cos^{2}\theta_{v}\left( \cos^{2}\phi_{v}\nabla S_{v,1,3}+\sin^{2}\phi_{v}\nabla S_{v,2,3}\right) \label{vector_potential}
\end{align}
Note that since the two sets of lasers $a$ and $b$ have different detunings
$\delta_{a}$ and $\delta_{b}$ from the excited state $|4\rangle$, there are no
interference effects between the two sets, and they interact with the atoms
independently. The total gauge vector potential is the simple sum
$\mathbf{\tilde{A}}_{a,\sigma,\sigma^{\prime}}+\mathbf{\tilde{A}}_{b,\sigma,\sigma^{\prime}}$.
Now we show that the effective RSOC can be simulated by the
aforementioned two sets of lasers $a$ and $b$. For convenience, we define the
(next-)nearest-neighbor lattice vectors $\mathbf{s}_{n}$ ($\mathbf{d}_{n}$)
shown in Fig. 1(a) with $n$=$1$...$6$, and set the lattice constant to $1$. For
the two laser configurations illustrated in Fig. 2(b) and (c), we
find that $U_{\mathbf{s}_{1}}$=$U_{\mathbf{s}_{4}}^{\dag}$=$e^{i4\pi\cos\varphi\sigma_{x}}$
and $U_{\mathbf{d}_{2}}$=$U_{\mathbf{d}_{5}}^{\dag}$=$e^{i4\pi\cos\varphi\sigma_{y}}$
are the only nontrivial phase factors; the other $U_{\mathbf{s}_{n}/\mathbf{d}_{n}}$
are trivial and equal to $\mathbf{1}$.
Since the TOLs have the rotation symmetry of the point group $C_{3\upsilon}$,
rotating the laser beams or the lattice system by $\pm\frac{2\pi}{3}$ gives
another two groups of nontrivial phase factors:
$U_{\mathbf{s}_{3}}$=$U_{\mathbf{s}_{6}}^{\dag}$=$e^{i4\pi\cos\varphi\sigma_{x}}$,
$U_{\mathbf{d}_{1}}$=$U_{\mathbf{d}_{4}}^{\dag}$=$e^{-i4\pi\cos\varphi\sigma_{y}}$ and
$U_{\mathbf{s}_{2}}$=$U_{\mathbf{s}_{5}}^{\dag}$=$e^{-i4\pi\cos\varphi\sigma_{x}}$,
$U_{\mathbf{d}_{3}}$=$U_{\mathbf{d}_{6}}^{\dag}$=$e^{-i4\pi\cos\varphi\sigma_{y}}$,
respectively. Here, $\sigma_{x/y}$ are the two Pauli matrices. For a
two-component spin system, we have the following relation for the unitary
operator:
\begin{equation}
U_{\sigma\sigma^{\prime}}=e^{i\alpha(\sigma_{x/y})_{\sigma\sigma^{\prime}}}=\cos\alpha+i(\sigma_{x/y})_{\sigma\sigma^{\prime}}\sin\alpha. \label{U1}
\end{equation}
With Eq. (\ref{U1}), we find that $H_{eff}$ in Eq. (\ref{H_lat2}) takes the
following form: \begin{widetext}
\begin{align}
H_{eff} & =-\sum_{i,\sigma}\mu c_{i,\sigma}^{\dag}c_{i,\sigma}-t\cos\alpha\sum_{i,n,\sigma}c_{i,\sigma}^{\dag}c_{i+\mathbf{s}_{n},\sigma}-t^{\prime}\cos\alpha\sum_{i,n,\sigma}c_{i,\sigma}^{\dag}c_{i+\mathbf{d}_{n},\sigma}\nonumber\\
& -it\sin\alpha\sum_{i,n,\sigma,\sigma^{\prime}}(-1)^{n+1}c_{i,\sigma}^{\dag}(\sigma_{x})_{\sigma\sigma^{\prime}}c_{i+\mathbf{s}_{n},\sigma^{\prime}}-it^{\prime}\sin\alpha\sum_{i,n,\sigma,\sigma^{\prime}}(-1)^{n}c_{i,\sigma}^{\dag}(\sigma_{y})_{\sigma\sigma^{\prime}}c_{i+\mathbf{d}_{n},\sigma^{\prime}}.\label{H_effn}
\end{align}
\end{widetext} Here, $t$ and $t^{\prime}$ are the original nearest- and
next-nearest-neighbor hopping integrals, and $\alpha=4\pi\cos\varphi$. The first
three terms in Eq. (\ref{H_effn}) are the modulated normal hopping parts of
the Hamiltonian, while the last two terms describe the effective RSOC. More
importantly, by adjusting the gauge flux $\alpha$, one can change the
relative strength between the hopping and the RSOC, which is nearly impossible in
condensed matter systems. The $k^{3}$ type of RSOC can be found explicitly
in the momentum-space form of Eq. (\ref{H_effn}), which we will discuss in the
next section.
In the remaining part of this section, we show how to generate an
effective ZF to split the two pseudo-spin states $|D_{i,\uparrow}\rangle$ and
$|D_{i,\downarrow}\rangle$. We apply two additional laser beams that couple
the states $\left\vert 1\right\rangle$ and $\left\vert 2\right\rangle$ to
the excited state $\left\vert 4\right\rangle$ with a large detuning
$\delta_{c}$\cite{Zhu1} (see the blue lines in Fig. 2(a)). The laser-atom
interaction is
\begin{equation}
H_{l-a}^{\prime}=-\underset{i}{\sum}\delta_{c}a_{i,4}^{\dag}a_{i,4}+\sum_{i}\sum_{\alpha=1}^{2}\hbar\Omega_{c,\alpha}a_{i,\alpha}^{\dag}a_{i,4}+H.c. \label{H_lan}
\end{equation}
The corresponding Rabi frequencies are parameterized as
$\Omega_{c,1}=\frac{\sqrt{2}}{4}\Omega_{c}e^{i4\pi x}$ and
$\Omega_{c,2}=\frac{\sqrt{2}}{4}\Omega_{c}e^{-i4\pi x}$, with
$\sqrt{\left\vert \Omega_{c,1}\right\vert^{2}+\left\vert \Omega_{c,2}\right\vert^{2}}=\frac{1}{2}\Omega_{c}$
and $\hbar\Omega_{c}\ll\delta_{c}$. The laser configuration is illustrated in
Fig. 2(d).
The eigenvalues of $H_{l-a}^{\prime}$ can be obtained from the
diagonalization. Namely, $E_{i,n=1,2,3}^{\prime}=0,-\frac{1}{2}(\delta_{c
\mp\sqrt{\delta_{c}^{2}+\hbar^{2}\Omega_{c}^{2}})$. The corresponding
eigenstates are
\begin{align}
\left\vert \chi_{1}\right\rangle & =\frac{\sqrt{2}}{2}e^{-i4\pi x}\left\vert
1\right\rangle -\frac{\sqrt{2}}{2}e^{i4\pi x}\left\vert 2\right\rangle
\nonumber\\
\left\vert \chi_{2}\right\rangle & =\frac{\sqrt{2}}{2}\cos\beta e^{-i4\pi
x}\left\vert 1\right\rangle +\frac{\sqrt{2}}{2}\cos\beta e^{i4\pi x}\left\vert
2\right\rangle -\sin\beta\left\vert 4\right\rangle \nonumber\\
\left\vert \chi_{3}\right\rangle & =\frac{\sqrt{2}}{2}\sin\beta e^{-i4\pi
x}\left\vert 1\right\rangle +\frac{\sqrt{2}}{2}\sin\beta e^{i4\pi x}\left\vert
2\right\rangle +\cos\beta\left\vert 4\right\rangle \label{eigenstate2}
\end{align}
Here, $\tan\beta=(\sqrt{\delta_{c}^{2}+\hbar^{2}\Omega_{c}^{2}}-\delta_{c})/\delta_{c}$. Due to $\hbar\Omega_{c}\ll\delta_{c}$, we get $\tan\beta\sim\hbar\Omega_{c}/\delta_{c}\sim0$ and $E_{i,2}^{\prime}\sim\hbar^{2}\Omega_{c}^{2}/4\delta_{c}$, so that
\begin{equation}
\left\vert \chi_{2}\right\rangle \sim\frac{\sqrt{2}}{2}e^{-i4\pi x}\left\vert
1\right\rangle +\frac{\sqrt{2}}{2}e^{i4\pi x}\left\vert 2\right\rangle
,\left\vert \chi_{3}\right\rangle \sim\left\vert 4\right\rangle . \label{chi3}
\end{equation}
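The small-$\beta$ expansion above is easy to check numerically. The following sketch uses illustrative values (not taken from the paper), with $\hbar\Omega_{c}/\delta_{c}=0.01$ in units where $\delta_{c}=1$, and confirms that $\tan\beta$ is negligible and that $E_{i,2}^{\prime}$ is well approximated by $\hbar^{2}\Omega_{c}^{2}/4\delta_{c}$:

```python
import numpy as np

# Illustrative values (not from the paper): delta_c = 1, hbar*Omega_c = 0.01,
# i.e. deep in the large-detuning regime hbar*Omega_c << delta_c.
delta_c = 1.0
hbar_Omega_c = 0.01

# Exact eigenvalue E'_{i,2} = -(delta_c - sqrt(delta_c^2 + (hbar*Omega_c)^2))/2
E2_exact = 0.5 * (np.sqrt(delta_c**2 + hbar_Omega_c**2) - delta_c)
E2_approx = hbar_Omega_c**2 / (4 * delta_c)   # quoted approximation

tan_beta = (np.sqrt(delta_c**2 + hbar_Omega_c**2) - delta_c) / delta_c

assert tan_beta < 1e-3                                # mixing angle is tiny
assert abs(E2_exact - E2_approx) / E2_approx < 1e-3   # E'_{i,2} ~ hbar^2 Omega_c^2 / (4 delta_c)
```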
Since $E_{i,1}^{\prime}=0$, $\left\vert \chi_{1}\right\rangle $ has no effect
on the ground states $\left\vert 1\right\rangle $ and $\left\vert 2\right\rangle $,
and $\left\vert \chi_{3}\right\rangle $ also has no effect on
$\left\vert 1\right\rangle $ and $\left\vert 2\right\rangle $.
Hence, we need only consider the effect of $\left\vert \chi_{2}\right\rangle $
on $\left\vert 1\right\rangle $ and $\left\vert 2\right\rangle $. Defining
$d_{i}^{\dag}$ as the operator that creates an atom on site $i$ in the eigenstate
$\left\vert \chi_{2}\right\rangle $, a perturbation Hamiltonian can be written
as $H_{p}=\hbar\Omega_{p}\sum_{i}d_{i}^{\dag}d_{i}$. With Eq.
(\ref{chi3}), $H_{p}$ has the form
\begin{equation}
H_{p}=H_{ac}+\hbar\Omega_{p}\underset{i}{\sum}e^{-i8\pi x}a_{i,1}^{\dag}a_{i,2}+H.c. \label{Hp2}
\end{equation}
where $\Omega_{p}=\hbar\Omega_{c}^{2}/8\delta_{c}$ and $H_{ac}=\hbar
\Omega_{p}\underset{i}{\sum}(a_{i,1}^{\dag}a_{i,1}+a_{i,2}^{\dag}a_{i,2})$ is
a constant ac-Stark shift, whose effect can be canceled with a frequency
offset of the laser beams $\Omega_{v,3}$ applied to the level $\left\vert
3\right\rangle $\cite{Zhu1}. Therefore, we need only consider the effect of the
second term in Eq. (\ref{Hp2}). From Eq. (\ref{eigenstate}), we get the
following relation
\begin{align}
\left\vert 1\right\rangle & =\frac{\sqrt{2}}{2}\left( \left\vert
D_{i,\uparrow}\right\rangle +\cos\varphi\left\vert D_{i,\downarrow
}\right\rangle \right) \nonumber\\
\left\vert 2\right\rangle & =\frac{\sqrt{2}}{2}\left( -\left\vert
D_{i,\uparrow}\right\rangle +\cos\varphi\left\vert D_{i,\downarrow
}\right\rangle \right) \label{eigen3}
\end{align}
where we have applied the conditions that at all the lattice sites, $e^{\pm
i8\pi x}=1$, $e^{\pm iS_{v,1,2}}=1$, $\theta=\varphi$ and $\phi=\pi/4$. With
Eq. (\ref{eigen3}), we find that the second term in Eq. (\ref{Hp2}) induces a
splitting between $|D_{i,\uparrow}\rangle$ and $|D_{i,\downarrow}\rangle$ as
\begin{align*}
H_{s} & =-\hbar\Omega_{p}\underset{i}{\sum}(c_{i,\uparrow}^{\dag}c_{i,\uparrow}-\cos^{2}\varphi c_{i,\downarrow}^{\dag}c_{i,\downarrow})\\
& =-h_{0}\underset{i,\sigma}{\sum}c_{i,\sigma}^{\dag}c_{i,\sigma}-h_{z}\underset{i}{\sum}(c_{i,\uparrow}^{\dag}c_{i,\uparrow}-c_{i,\downarrow}^{\dag}c_{i,\downarrow}),
\end{align*}
with $h_{0}=\hbar\Omega_{p}(1-\cos^{2}\varphi)/2$ and $h_{z}=\hbar\Omega_{p}(1+\cos^{2}\varphi)/2$.
$h_{0}$ can be renormalized into the chemical potential term in $H_{eff}$
(Eq. (\ref{H_effn})), and $h_{z}$ describes the effective ZF. In order to
guarantee that $H_{s}$ cannot pump the atoms out of the dark-state subspace,
the condition $\hbar\Omega_{p}\ll\left\vert E_{b,3}\right\vert <\left\vert
E_{a,3}\right\vert $ must be fulfilled (see Fig. 2(e)). Now, the new
Hamiltonian including the effective RSOC and ZF is
\begin{equation}
H_{0}^{\prime}=H_{eff}+H_{s}. \label{H_lat3}
\end{equation}
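The rewriting of $H_{s}$ into the $h_{0}$ and $h_{z}$ terms is a simple identity on the number operators; a quick numerical sketch over the occupation basis verifies it:

```python
import numpy as np

# Verify -hbar*Omega_p*(n_up - cos^2(phi)*n_dn) = -h0*(n_up + n_dn) - hz*(n_up - n_dn)
# for h0 = hbar*Omega_p*(1 - cos^2 phi)/2 and hz = hbar*Omega_p*(1 + cos^2 phi)/2.
hbar_Omega_p = 1.0  # the overall energy scale drops out of the identity
for cosphi in np.linspace(-1.0, 1.0, 21):
    h0 = hbar_Omega_p * (1 - cosphi**2) / 2
    hz = hbar_Omega_p * (1 + cosphi**2) / 2
    for n_up in (0, 1):
        for n_dn in (0, 1):
            lhs = -hbar_Omega_p * (n_up - cosphi**2 * n_dn)
            rhs = -h0 * (n_up + n_dn) - hz * (n_up - n_dn)
            assert abs(lhs - rhs) < 1e-12
```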
\section{Topological SF and Majorana Fermion}
The SF states can be induced by the atomic interaction from $s$-wave scattering.
The interaction term is described by the Hamiltonian
\begin{equation}
H_{int}=\sum_{i}\sum_{\alpha<\beta}V_{\alpha\beta}a_{i,\alpha}^{\dag}a_{i,\beta}^{\dag}a_{i,\beta}a_{i,\alpha}, \label{H_int}
\end{equation}
where $\alpha$ and $\beta$ label the three ground states of the atoms and
$V_{\alpha\beta}$ is proportional to the $s$-wave scattering length of the
$\alpha$, $\beta$ channel. In the $s$-wave SF state, $H_{int}$ can be decoupled
at the mean-field level,
\begin{equation}
H_{mf}=\sum_{i}\sum_{\alpha<\beta}\Delta_{\alpha\beta}a_{i,\alpha}^{\dag}a_{i,\beta}^{\dag}+H.c. \label{H_int1}
\end{equation}
with $\Delta_{\alpha\beta}=V_{\alpha\beta}\left\langle a_{i,\beta}a_{i,\alpha}\right\rangle $
the SF order parameter. Under the condition $\Delta_{\alpha\beta}\ll\Omega_{a/b}$,
it is safe to consider the SF in the dark-state subspace, because $H_{mf}$ cannot
pump the atoms out of the dark-state subspace. Then, we project $H_{mf}$ to the
dark-state subspace and have
\begin{equation}
H_{mf}^{\prime}=\sum_{i}\Delta_{0}c_{i,\uparrow}^{\dag}c_{i,\downarrow}^{\dag}+H.c. \label{H_int2}
\end{equation}
with $\Delta_{0}$ a linear combination of the $\Delta_{\alpha\beta}$.
In the following parts of the paper, we focus on the total Hamiltonian which
describes the SF states
\begin{equation}
H_{t}=H_{0}^{\prime}+H_{mf}^{\prime}\text{.} \label{H_tot2}
\end{equation}
After the Fourier transformation, in the momentum-space Nambu basis
$[c_{\mathbf{k}\uparrow},c_{\mathbf{k}\downarrow},c_{-\mathbf{k}\uparrow}^{\dag},c_{-\mathbf{k}\downarrow}^{\dag}]^{T}$, $H_{t}$ can be expressed as
\begin{equation}
H_{t}(\mathbf{k})=\left[
\begin{array}
[c]{cc}
\varepsilon_{\mathbf{k}}-h_{z}\sigma_{z}+\mathbf{g}_{\mathbf{k}}\cdot\mathbf{\sigma} & -i\Delta_{0}\sigma_{y}\\
i\Delta_{0}\sigma_{y} & -(\varepsilon_{\mathbf{k}}-h_{z}\sigma_{z})+\mathbf{g}_{\mathbf{k}}\cdot\mathbf{\sigma}^{\ast}
\end{array}
\right] . \label{H_T}
\end{equation}
Here $\mathbf{g}_{\mathbf{k}}=(a_{\mathbf{k}},b_{\mathbf{k}})$,
$\mathbf{\sigma}=(\sigma_{x},\sigma_{y})$, and the explicit forms of
$\varepsilon_{\mathbf{k}}$, $a_{\mathbf{k}}$ and $b_{\mathbf{k}}$ are listed
\begin{align}
\varepsilon_{\mathbf{k}} & =-2t_{1}(\cos k_{x}+2\cos\frac{k_{x}}{2}\cos\frac{\sqrt{3}k_{y}}{2})\nonumber\\
& -2t_{3}(\cos\sqrt{3}k_{y}+2\cos\frac{3k_{x}}{2}\cos\frac{\sqrt{3}k_{y}}{2})-\mu^{\prime} \label{Ekk}
\end{align}
\begin{align}
a_{\mathbf{k}} & =2t_{2}(\sin k_{x}-2\sin\frac{k_{x}}{2}\cos\frac{\sqrt{3}k_{y}}{2})\nonumber\\
b_{\mathbf{k}} & =2t_{4}(-\sin\sqrt{3}k_{y}+2\sin\frac{\sqrt{3}k_{y}}{2}\cos\frac{3k_{x}}{2}) \label{akbk}
\end{align}
in which $t_{1}=t\cos\alpha$, $t_{2}=t\sin\alpha$, $t_{3}=t^{\prime}\cos\alpha$, $t_{4}=t^{\prime}\sin\alpha$ with $\alpha=4\pi\cos\varphi$ and $\mu^{\prime}=\mu+h_{0}$.
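As a numerical sanity check on these definitions (a sketch with $t$ set to $1$ and the values $\cos\varphi=-0.101$ and $t^{\prime}/t=1/(3\sqrt{3})$ used later in the text), the RSOC amplitude $\lambda_{so}=t_{2}/2$ quoted below indeed comes out near $-0.48t$:

```python
import numpy as np

t = 1.0                        # energy unit
tp = t / (3 * np.sqrt(3))      # t'/t = 1/(3*sqrt(3)), the value adopted in the text
cosphi = -0.101                # example value used in the text
alpha = 4 * np.pi * cosphi     # gauge flux alpha = 4*pi*cos(phi)

t1, t2 = t * np.cos(alpha), t * np.sin(alpha)
t3, t4 = tp * np.cos(alpha), tp * np.sin(alpha)

lam_so = t2 / 2                # k^3 RSOC amplitude quoted later in the text
assert abs(lam_so - (-0.48 * t)) < 0.01 * t
```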
Before discussing the properties of the SF states described by $H_{t}$ in Eq.
(\ref{H_tot2}), we estimate the parameters involved in the aforementioned
simulations to ensure the experimental feasibility and rationality of our
proposal. For trapped $^{6}$Li atoms, the wavelength of the laser beams used
to produce the TOL is $\lambda_{L}\sim1\mu m$, and the lattice constant is
$a_{L}=\frac{2\lambda_{L}}{\sqrt{3}}$. The recoil energy is
$E_{r}=\frac{\hbar^{2}k_{L}^{2}}{2m}\sim\hbar\times2\pi\times30$ kHz $\sim1\mu
K$ with $k_{L}=\frac{2\pi}{\lambda_{L}}$, and $t=\frac{4E_{r}}{\sqrt{\pi}}(\frac{V_{0}}{E_{r}})^{\frac{3}{4}}e^{-2\sqrt{V_{0}/E_{r}}}$\cite{Zwerger}
when $V_{0}\gg E_{r}$, with $V_{0}\sim4V_{L}$ the depth of the TOL. Moreover,
$t^{\prime}/t=e^{-\eta(\sqrt{3}-1)\sqrt{(V_{0}-E_{kin})/E_{r}}k_{L}a_{L}}$
with $E_{kin}$ the kinetic energy of the atom and $\eta$ a renormalization
factor. The typical atomic velocity is about several centimeters per second,
and $E_{kin}$ has the same order as $E_{r}$. $\eta$ depends on the geometry of
the lattice. According to the 1D lattice results\cite{Bloch} and $\eta<1$, we
estimate that $V_{0}\sim3E_{r}$ is enough to get $t^{\prime}/t=\frac{1}{3\sqrt{3}}$.
Actually, the TSF is robust even when $t^{\prime}/t\sim10^{-3}$.
Here, without loss of generality, we set $t^{\prime}/t\equiv\frac{1}{3\sqrt{3}}$.
Then $t\sim0.1E_{r}$ from the aforementioned formula. From the
harmonic-potential approximation, the energies of the atoms tightly confined
at a single lattice site are quantized into levels separated by $2\pi\hbar
\omega_{0}$=$2E_{r}\sqrt{\frac{V_{0}}{E_{r}}}$\cite{Bloch}. On the other hand,
the maximal band width for the TOL is $W_{m}\sim10t$ $\sim E_{r}$. Therefore,
it is safe to describe the system within the single-band approximation because
$2\pi\hbar\omega_{0}>W_{m}$. The Rabi frequencies $\Omega_{a/b/c}$ are
$10^{3}E_{r}/\hbar$, and $\Omega_{p}$ can be tuned from $0$ to
$E_{r}/\hbar$, which is enough for $\hbar\Omega_{p}\ll\left\vert E_{b,3}\right\vert <\left\vert E_{a,3}\right\vert $. Then, the adiabatic
approximation\cite{Zhu1} is reasonable. The typical $s$-wave pairing potential
$\Delta_{\alpha\beta}$ in experiments is about $0.1E_{r}/\hbar$\cite{Ketterle},
which is much smaller than $\Omega_{a/b}$. Hence, our proposal is
experimentally feasible when the parameters lie in the estimated region.
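These estimates can be reproduced with a few lines. The sketch below works in recoil-energy units; note that the tight-binding formula is strictly valid only for $V_{0}\gg E_{r}$, so at $V_{0}\sim3E_{r}$ the numbers are order-of-magnitude only:

```python
import numpy as np

# Work in units of the recoil energy, E_r = 1.
V0 = 3.0                         # lattice depth V0 ~ 3 E_r from the text
# t = (4/sqrt(pi)) (V0/E_r)^{3/4} exp(-2 sqrt(V0/E_r)); valid for V0 >> E_r,
# so at V0 ~ 3 E_r this is an order-of-magnitude estimate only.
t = (4 / np.sqrt(np.pi)) * V0**0.75 * np.exp(-2 * np.sqrt(V0))
assert 0.05 < t < 0.3            # consistent with the quoted t ~ 0.1 E_r

W_m = 10 * t                     # maximal band width W_m ~ 10 t
level_spacing = 2 * np.sqrt(V0)  # 2*pi*hbar*omega_0 = 2 E_r sqrt(V0/E_r)
assert level_spacing > W_m       # single-band approximation is justified
```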
For convenience in discussing the properties of the SF, we rewrite
$H_{t}(\mathbf{k})$ in the new basis $[\hat{\psi}_{\mathbf{k},+},\hat{\psi}_{-\mathbf{k},+}^{\dag},-\hat{\psi}_{-\mathbf{k},-}^{\dag},-\hat{\psi}_{\mathbf{k},-}]^{T}$ with $\hat{\psi}_{\mathbf{k},\pm}=\frac{1}{\sqrt{2}}\left[ c_{\mathbf{k}\uparrow}\pm c_{-\mathbf{k}\downarrow}^{\dag}\right] $.
Then $H_{t}^{\prime}$ has the following form
\begin{equation}
H_{t}^{\prime}(\mathbf{k})=\left[
\begin{array}
[c]{cc}
H_{+}(\mathbf{k}) & -i\varepsilon_{\mathbf{k}}\sigma_{y}\\
i\varepsilon_{\mathbf{k}}\sigma_{y} & H_{-}(\mathbf{k})
\end{array}
\right] . \label{H_R}
\end{equation}
Here
\begin{equation}
H_{\pm}(\mathbf{k})=\pm\left[ (-h_{z}\mp\Delta_{0})\sigma_{z}+a_{k}\sigma_{x}\pm b_{k}\sigma_{y}\right] . \label{Hpm0}
\end{equation}
The spectra of Hamiltonian (\ref{H_R}) are $\pm E_{\pm}(k)$, and
\begin{equation}
E_{\pm}(k)=\sqrt{\varepsilon_{\mathbf{k}}^{2}+\Delta_{0}^{2}+\Theta_{k}^{2}\pm2\sqrt{h_{z}^{2}\Delta_{0}^{2}+\varepsilon_{\mathbf{k}}^{2}\Theta_{k}^{2}}}, \label{Epm}
\end{equation}
in which $\Theta_{k}=\sqrt{h_{z}^{2}+\left\vert \mathbf{g}_{\mathbf{k}}\right\vert ^{2}}$.
The topological transition point is determined by the bulk-gap-closing
condition $E_{-}(k)=0$ (Fig. 3(d)). The spectra are fully gapped away from
this point, and the SF is topologically nontrivial when
$h_{z}>\sqrt{\Delta_{0}^{2}+\varepsilon_{\mathbf{k}}^{2}}|_{\mathbf{k}=(0,0)}$
(Fig. 3(c)) and trivial when $h_{z}<\sqrt{\Delta_{0}^{2}+\varepsilon_{\mathbf{k}}^{2}}|_{\mathbf{k}=(0,0)}$ (Fig. 3(e)). Around the $\Gamma$ point
in the BZ,
\begin{equation}
\mathcal{H}_{\pm}(\mathbf{k})=\mp\lbrack(\Delta_{0}\pm h_{z})\sigma_{z}+\lambda_{so}(k_{-}^{3}\sigma_{\pm}+k_{+}^{3}\sigma_{\mp})]\text{,}
\label{H_plusandminus}
\end{equation}
where $\sigma_{\pm}=\frac{1}{2}\left( \sigma_{x}\pm\sigma_{y}\right) $ and
$k_{\pm}=\frac{1}{2}\left( k_{x}\pm k_{y}\right) $. The term $\lambda_{so}(k_{-}^{3}\sigma_{\pm}+k_{+}^{3}\sigma_{\mp})$ is the $k^{3}$ RSOC with
amplitude $\lambda_{so}=\frac{t_{2}}{2}\sim-0.48t$ for $\cos\varphi=-0.101$.
It is explicit that $\mathcal{H}_{\pm}(\mathbf{k})$ has a well-defined
$f$-wave chirality. That means the topological Chern number\cite{Thouless}
can be calculated as $\mathcal{C}_{\mathcal{H}_{-}}=\frac{3}{2}\left[
\text{sign}(h_{z}-\Delta_{0})_{h_{z}>\Delta_{0}}-\text{sign}(h_{z}-\Delta_{0})_{h_{z}<\Delta_{0}}\right] =3$, while
$\mathcal{C}_{\mathcal{H}_{+}}=0$. Based on the square-lattice result that
$\Delta_{0}$ is maximal at a filling of about one atom per
site\cite{Hofstetter}, we set $\Delta_{0}=0.5t$ and $\hbar\Omega_{p}=3\Delta_{0}$ for a chemical potential around $\mu_{1}$ and a filling of about
0.6 atoms per site. When the chemical potential lies in the region of
$\mu_{4}$ (Fig. 3(b)), the low-energy behavior is dominated by
$H_{t}^{\prime}(\mathbf{k})$ with $\mathbf{k}\sim(0,\frac{2\sqrt{3}\pi}{3})$.
Then we cannot define a specific chirality, and we ascribe these cases to the
NSF. According to the above analysis, we draw the phase diagram in
Fig. 3(f). We find that the TSF strongly depends on the initial parameter
$\varphi$ of the laser beams and on the filling. That means we can control the
topological properties of the SF by modulating the parameters of the lasers,
which makes the TSF convenient to investigate.
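The quasi-particle spectrum (\ref{Epm}) can be cross-checked against direct diagonalization of the $4\times4$ Bogoliubov matrix $H_{t}(\mathbf{k})$. The sketch below uses representative values for $\Delta_{0}$, $h_{z}$ and $\mu^{\prime}$ (chosen here for illustration; the check itself is independent of the parameter choice) and random momenta:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# hopping amplitudes t1..t4 from the text, with t = 1 and cos(phi) = -0.101
t, cosphi = 1.0, -0.101
alpha = 4 * np.pi * cosphi
tp = t / (3 * np.sqrt(3))
t1, t2 = t * np.cos(alpha), t * np.sin(alpha)
t3, t4 = tp * np.cos(alpha), tp * np.sin(alpha)
Delta0, hz, mup = 0.5 * t, 0.75 * t, -1.85 * t   # representative values

def bands(kx, ky):
    """Numeric eigenvalues of H_t(k) vs the closed-form +/- E_{+/-}(k)."""
    eps = (-2 * t1 * (np.cos(kx) + 2 * np.cos(kx / 2) * np.cos(np.sqrt(3) * ky / 2))
           - 2 * t3 * (np.cos(np.sqrt(3) * ky)
                       + 2 * np.cos(3 * kx / 2) * np.cos(np.sqrt(3) * ky / 2)) - mup)
    a = 2 * t2 * (np.sin(kx) - 2 * np.sin(kx / 2) * np.cos(np.sqrt(3) * ky / 2))
    b = 2 * t4 * (-np.sin(np.sqrt(3) * ky)
                  + 2 * np.sin(np.sqrt(3) * ky / 2) * np.cos(3 * kx / 2))
    H = np.zeros((4, 4), dtype=complex)
    H[:2, :2] = eps * I2 - hz * sz + a * sx + b * sy       # particle block
    H[2:, 2:] = -(eps * I2 - hz * sz) + a * sx - b * sy    # hole block, g.sigma^*
    H[:2, 2:] = -1j * Delta0 * sy
    H[2:, :2] = 1j * Delta0 * sy
    Theta2 = hz**2 + a**2 + b**2
    root = np.sqrt(hz**2 * Delta0**2 + eps**2 * Theta2)
    Ep = np.sqrt(eps**2 + Delta0**2 + Theta2 + 2 * root)
    Em = np.sqrt(max(eps**2 + Delta0**2 + Theta2 - 2 * root, 0.0))
    return np.sort(np.linalg.eigvalsh(H)), np.sort([-Ep, -Em, Em, Ep])

rng = np.random.default_rng(1)
for kx, ky in rng.uniform(-np.pi, np.pi, size=(20, 2)):
    numeric, analytic = bands(kx, ky)
    assert np.allclose(numeric, analytic, atol=1e-9)
```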
In the lattice case, the ground-state Chern number of $H_{t}$ can be calculated
with
\begin{equation}
\mathcal{C}_{n}=\frac{1}{2\pi}\int_{BZ}d^{2}k\,2\,\mathrm{Im}\langle\frac{\partial u_{n}(\mathbf{k})}{\partial k_{x}}|\frac{\partial u_{n}(\mathbf{k})}{\partial k_{y}}\rangle, \label{chernnumber1}
\end{equation}
where $u_{n}(\mathbf{k})$ is the ground-state wave function of the $n$th
occupied band ($n$=1,2). A straightforward calculation gives $\mathcal{C}_{1}=0$ and $\mathcal{C}_{2}=3$ when $h_{z}>\sqrt{\Delta_{0}^{2}+\varepsilon_{\mathbf{k}}^{2}}|_{\mathbf{k}=(0,0)}$, and $\mathcal{C}_{1}=0$ and $\mathcal{C}_{2}=0$ when $h_{z}<\sqrt{\Delta_{0}^{2}+\varepsilon_{\mathbf{k}}^{2}}|_{\mathbf{k}=(0,0)}$. By the bulk-edge
correspondence, a non-zero Chern number implies the same number of gapless
edge states. From the energy spectrum of $H_{t}(k_{y},x)$ shown in Fig. 4,
we find that three gapless chiral edge states propagate on one edge of the TSF
(see Fig. 4(a) and (d)). The effective Hamiltonian describing the chiral edge
states is
\begin{equation}
\mathcal{H}_{edge}=\sum_{k_{y}\geq0}\nu_{0}k_{y}\hat{\Psi}_{k_{y}}^{\dag}(x)\hat{\Psi}_{k_{y}}(x), \label{Hedge1}
\end{equation}
where $\nu_{0}$ is the effective velocity at $\Gamma$, and $\hat{\Psi}_{k_{y}}^{\dag}(x)=\sum_{\sigma}\int dx\left[ u_{k_{y},\sigma}(x)c_{\sigma}^{\dag}(x)+v_{k_{y},\sigma}(x)c_{\sigma}(x)\right] $. The Majorana condition
requires $\hat{\Psi}_{-k_{y}}(x)=\hat{\Psi}_{k_{y}}^{\dag}(x)$, i.e.,
$u_{-k_{y},\sigma}(x)=v_{k_{y},\sigma}^{\ast}(x)$. We have checked that this is
indeed the case in our model, which means the three edge states are chiral
Majorana edge states.
\begin{figure}[ptb]
\begin{center}
\includegraphics[width=1.0\linewidth]{fig3.eps}
\end{center}
\caption{(color online) (a) The band structures $E_{k}=\varepsilon_{k}\pm\Theta_{k}$ with $\cos\varphi=-0.101$ and $\mu=-2.59$. Here, we set
$k_{x}\in\lbrack-\frac{4\pi}{3},\frac{4\pi}{3}]$ and $k_{y}\in\lbrack
-\frac{2\pi}{\sqrt{3}},\frac{2\pi}{\sqrt{3}}]$. (b) The band structures along
the high-symmetry lines (Fig. 2(b)). The dashed blue lines are $\varepsilon_{k}\pm\left\vert g_{k}\right\vert $ and the solid red lines correspond to
(a). Four different fillings with chemical potentials $\mu_{1/2/3/4}=-2.59t$,
$-3.436t$, $-3.89t$, $0.45t$ are shown. (c), (d), (e) are the SF quasi-particle
spectra corresponding to $\mu_{1}$, $\mu_{2}$ and $\mu_{3}$. (f) The phase
diagram as a function of $\cos\varphi$ and $\mu$. Two different phases, NSF and
TSF, are identified. $\Delta_{0}=0.5t$ and $\hbar\Omega_{p}=3\Delta_{0}$.}
\end{figure}\begin{figure}[ptb]
\begin{center}
\includegraphics[width=1.0\linewidth]{fig4.eps}
\end{center}
\caption{(color online) The spectra of Hamiltonian (\ref{H_tot2}) with open
boundaries in the $x$ direction, $i_{x}\in(1,31)$. (a), (b) and (c) correspond
to the $\mu_{1}$, $\mu_{2}$ and $\mu_{3}$ cases in Fig. 3(b). The dashed black
lines indicate the contributions from the first BZ. (d) The edge states (the
states crossing the gap, denoted with the solid red lines in (a)) propagate
along the two edges.}
\end{figure}
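The Chern number (\ref{chernnumber1}) can also be evaluated numerically. The sketch below swaps in the standard Fukui-Takahashi-Suzuki lattice method for the continuum integral, working on the reciprocal unit cell of the triangular lattice and using the TSF-phase parameters quoted in the text ($\mu=\mu_{1}=-2.59t$, $\cos\varphi=-0.101$, $\Delta_{0}=0.5t$, $\hbar\Omega_{p}=3\Delta_{0}$); up to the sign fixed by the orientation convention, the total Chern number of the two occupied bands comes out as $3$:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

t, cosphi = 1.0, -0.101
alpha = 4 * np.pi * cosphi
tp = t / (3 * np.sqrt(3))
t1, t2 = t * np.cos(alpha), t * np.sin(alpha)
t3, t4 = tp * np.cos(alpha), tp * np.sin(alpha)
Delta0 = 0.5 * t
hbar_Omega_p = 3 * Delta0
h0 = hbar_Omega_p * (1 - cosphi**2) / 2
hz = hbar_Omega_p * (1 + cosphi**2) / 2
mup = -2.59 * t + h0            # mu' = mu + h0 with mu = mu_1 = -2.59 t

def Ht(kx, ky):
    eps = (-2 * t1 * (np.cos(kx) + 2 * np.cos(kx / 2) * np.cos(np.sqrt(3) * ky / 2))
           - 2 * t3 * (np.cos(np.sqrt(3) * ky)
                       + 2 * np.cos(3 * kx / 2) * np.cos(np.sqrt(3) * ky / 2)) - mup)
    a = 2 * t2 * (np.sin(kx) - 2 * np.sin(kx / 2) * np.cos(np.sqrt(3) * ky / 2))
    b = 2 * t4 * (-np.sin(np.sqrt(3) * ky)
                  + 2 * np.sin(np.sqrt(3) * ky / 2) * np.cos(3 * kx / 2))
    H = np.zeros((4, 4), dtype=complex)
    H[:2, :2] = eps * I2 - hz * sz + a * sx + b * sy
    H[2:, 2:] = -(eps * I2 - hz * sz) + a * sx - b * sy
    H[:2, 2:] = -1j * Delta0 * sy
    H[2:, :2] = 1j * Delta0 * sy
    return H

# Reciprocal lattice vectors of the triangular lattice (lattice constant 1);
# the cell s1*b1 + s2*b2 with s in [0,1)^2 covers the BZ torus exactly once.
b1 = np.array([2 * np.pi, -2 * np.pi / np.sqrt(3)])
b2 = np.array([0.0, 4 * np.pi / np.sqrt(3)])

N, occ = 48, 2                  # k-grid size; two occupied (negative-energy) bands
u = np.empty((N, N, 4, occ), dtype=complex)
for i in range(N):
    for j in range(N):
        k = (i / N) * b1 + (j / N) * b2
        _, v = np.linalg.eigh(Ht(k[0], k[1]))
        u[i, j] = v[:, :occ]

def link(A, B):                 # U(occ) overlap determinant between neighbors
    return np.linalg.det(A.conj().T @ B)

C = 0.0                         # gauge-invariant lattice field strength summed over plaquettes
for i in range(N):
    for j in range(N):
        ii, jj = (i + 1) % N, (j + 1) % N
        C += np.angle(link(u[i, j], u[ii, j]) * link(u[ii, j], u[ii, jj])
                      * link(u[ii, jj], u[i, jj]) * link(u[i, jj], u[i, j]))
C /= 2 * np.pi
assert abs(round(C)) == 3       # |C_1 + C_2| = 3 in the TSF phase
```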
In general, topological defects bind Majorana zero modes in
TSC/TSF\cite{Qi,Read}. Here, we consider the topological excitations of vortex
structures in our system. The quasi-particle excitations are usually described
by the Bogoliubov-de Gennes (BdG) equation. From Fig. 4(a), we find that
the wave function of the low-energy excitations can be constructed from the
contributions of quasi-particles around $\Gamma$. For simplicity, only the
non-trivial part $\mathcal{H}_{-}(k)$ of Eq. (\ref{H_plusandminus}) is taken
into account. Define $z=x+iy$; then $\partial_{z/z^{\ast}}=\partial_{x}\pm
i\partial_{y}$. The BdG equation for the quasi-particle has the form
$\mathcal{H}_{-}(z,z^{\ast})\Psi_{0}=E\Psi_{0}$ with $\Psi_{0}=[u_{0}^{\ast},v_{0}^{\ast}]^{T}$ and the corresponding quasi-particle creation operator
$\hat{\Psi}_{0}^{\dag}={\displaystyle\int}dzdz^{\ast}\left[ u_{0}\hat{\psi}_{-}^{\dag}+v_{0}\hat{\psi}_{-}\right] $.
In the uniform TSF state, we can take a trial wave function
$u_{0}=e^{i\pi/4}z^{-\frac{3}{2}}e^{-\frac{2}{3}(\frac{h_{z}-\Delta_{0}}{\lambda_{so}})^{1/3}(zz^{\ast})^{\frac{3}{2}}}$ and $v_{0}=u_{0}^{\ast}$ for
the BdG equation. The SF order parameter with a vortex structure can be
approximately expressed as $\Delta_{0}(r)$=$0$ for $r<r_{c}$ and
$\Delta_{0}(r)$=$\Delta_{0}e^{i\theta}$ for $r>r_{c}$ with vorticity $1$. We
imagine that the vortex is created adiabatically by changing the wave function
slowly enough so that it always remains an eigenstate. The wave function of
the vortex state can be obtained from a singular gauge transformation
$\Psi_{0}\rightarrow\Psi_{0}e^{iq\theta}$ with $q=\pm1$ identifying the
quasi-particle and quasi-anti-particle. In reverse, the vortex can be gauged
away by the inverse singular gauge transformation $k\rightarrow
k-\nabla\theta/2$ and $\Delta_{0}e^{i\theta}\rightarrow\Delta_{0}$\cite{Sato2}.
After the inverse transformation, the state is one of the eigenstates, namely,
the vortex excited state. In analogy to Laughlin's argument\cite{Laughlin}
about the vortex excitation in the quantum Hall state, we get the wave
function describing the vortex zero mode as $u_{0}^{\prime}\sim
e^{-i(\frac{\theta}{2}-\frac{\pi}{4})}r^{-\frac{3}{2}}e^{-\frac{2}{3}\left(
\frac{h_{z}-\Delta_{0}}{\lambda_{so}}\right) ^{\frac{1}{3}}r^{3}}$ and
$v_{0}^{\prime}=(u_{0}^{\prime})^{\ast}$. The uniqueness of the zero mode in the
$f$-wave case is proven by numerical calculation\cite{Mao}.
The stability of the Majorana zero mode is measured by the mini-gap $E_{g}\sim\Delta_{0}^{2}/E_{f}$ with $E_{f}$ the Fermi energy. In our case,
$E_{f}$ is set by $t$, not by $E_{kin}$. Hence, the ratio $E_{g}/\Delta_{0}$ can be large enough to protect the Majorana zero mode. Taking half
filling as an example, we assume the optimized $\Delta_{0}\sim t$ and roughly
estimate $E_{f}\sim3t$; then $E_{g}/\Delta_{0}\sim1/3$. By contrast, in the
hybridized systems\cite{Fu,Sau,Mao}, the energy scale of $E_{f}$ is of the
order of the electrons' kinetic energy $E_{kin}$, and $\Delta_{0}$ from the
proximity effect is much smaller than $E_{kin}$. So the mini-gap in
hybridized systems may be relatively small compared to the superconducting gap
$\Delta_{0}$.
\section{Conclusions}
In summary, we have proposed a scheme to produce $k^{3}$ RSOC and a ZF through
the laser-atom interaction in TOLs, and a novel chiral $f$-wave TSF is
realized with the help of the $s$-wave Feshbach resonance. We find that there
exist three Majorana edge states located on the boundary of the system and one
Majorana fermion bound to each vortex in the TSF state. The TSF can be
controlled by modulating the parameters of the lasers, and this controllability
makes the properties of the TSF convenient to investigate. Our proposal
enlarges the TSF family and presents some advantages for studying Majorana fermions.
\emph{Acknowledgments:} Ningning Hao thanks J. Li, and Guocai Liu thanks S. L.
Zhu, for helpful discussions. The work is supported by the Ministry of Science
and Technology of China 973 program (2012CB821400), NSFC-1190024, NSFC-11147171
and NSFC-11247011.
\section{Defense Evaluation}
This section reports the defense performance of our proposed PPB and three existing defensive mechanisms (Basic, DP, and ADV). We consider the defense performance both with and without adaptive attacks.
Due to the large computational cost of exploring proper hyper-parameters for all the defenses, we only report the defense performance on a subset of settings.
\subsection{Defense Performance Against Non-adaptive Attacks}
The proposed PPB defense significantly reduces the attack accuracy to around 50\% in most settings, which demonstrates the effectiveness of PPB. Compared with the existing defenses, PPB achieves a better trade-off between utility and privacy (\textit{i}.\textit{e}.\@\xspace, high prediction accuracy and low attack accuracy).
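For reference, attack accuracy is typically reported on a balanced member/non-member split, so 50\% corresponds to random guessing. The toy sketch below (synthetic scores and a hypothetical threshold attack, not our experimental pipeline) shows how such a balanced attack accuracy is computed:

```python
import numpy as np

# Synthetic attack-confidence scores for illustration only (not experimental data):
# training members get slightly higher scores than non-members.
rng = np.random.default_rng(0)
member_scores = rng.normal(0.6, 0.2, 1000)
nonmember_scores = rng.normal(0.4, 0.2, 1000)

threshold = 0.5                                  # hypothetical decision threshold
tpr = np.mean(member_scores > threshold)         # members correctly flagged
tnr = np.mean(nonmember_scores <= threshold)     # non-members correctly rejected
attack_acc = (tpr + tnr) / 2                     # balanced attack accuracy

# An ideal defense would push attack_acc down toward 0.5 (random guessing).
assert 0.5 < attack_acc < 1.0
```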
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar10_densenet121_level_0.6.pdf}
\caption{L1 Unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar10_densenet121_l1filter_0.6.pdf}
\caption{L1 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar10_densenet121_l2filter_0.6.pdf}
\caption{L2 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar10_densenet121_slim_0.6.pdf}
\caption{Slimming}
\end{subfigure}
\caption{Performance of defenses (CIFAR10, DenseNet121, Sparsity 0.6).}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar10_vgg16_level_0.6.pdf}
\caption{L1 Unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar10_vgg16_l1filter_0.6.pdf}
\caption{L1 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar10_vgg16_l2filter_0.6.pdf}
\caption{L2 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar10_vgg16_slim_0.6.pdf}
\caption{Slimming}
\end{subfigure}
\caption{Performance of defenses (CIFAR10, VGG16, Sparsity 0.6).}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar10_resnet18_level_0.7.pdf}
\caption{L1 Unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar10_resnet18_l1filter_0.7.pdf}
\caption{L1 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar10_resnet18_l2filter_0.7.pdf}
\caption{L2 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar10_resnet18_slim_0.7.pdf}
\caption{Slimming}
\end{subfigure}
\caption{Performance of defenses (CIFAR10, ResNet18, Sparsity 0.7).}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar10_densenet121_level_0.7.pdf}
\caption{L1 Unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar10_densenet121_l1filter_0.7.pdf}
\caption{L1 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar10_densenet121_l2filter_0.7.pdf}
\caption{L2 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar10_densenet121_slim_0.7.pdf}
\caption{Slimming}
\end{subfigure}
\caption{Performance of defenses (CIFAR10, DenseNet121, Sparsity 0.7).}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar10_vgg16_level_0.7.pdf}
\caption{L1 Unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar10_vgg16_l1filter_0.7.pdf}
\caption{L1 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar10_vgg16_l2filter_0.7.pdf}
\caption{L2 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar10_vgg16_slim_0.7.pdf}
\caption{Slimming}
\end{subfigure}
\caption{Performance of defenses (CIFAR10, VGG16, Sparsity 0.7).}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar100_resnet18_level_0.6.pdf}
\caption{L1 Unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar100_resnet18_l1filter_0.6.pdf}
\caption{L1 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar100_resnet18_l2filter_0.6.pdf}
\caption{L2 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar100_resnet18_slim_0.6.pdf}
\caption{Slimming}
\end{subfigure}
\caption{Performance of defenses (CIFAR100, ResNet18, Sparsity 0.6).}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar100_vgg16_level_0.6.pdf}
\caption{L1 Unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar100_vgg16_l1filter_0.6.pdf}
\caption{L1 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar100_vgg16_l2filter_0.6.pdf}
\caption{L2 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar100_vgg16_slim_0.6.pdf}
\caption{Slimming}
\end{subfigure}
\caption{Performance of defenses (CIFAR100, VGG16, Sparsity 0.6).}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar100_densenet121_level_0.6.pdf}
\caption{L1 Unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar100_densenet121_l1filter_0.6.pdf}
\caption{L1 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar100_densenet121_l2filter_0.6.pdf}
\caption{L2 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar100_densenet121_slim_0.6.pdf}
\caption{Slimming}
\end{subfigure}
\caption{Performance of defenses (CIFAR100, DenseNet121, Sparsity 0.6).}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_chmnist_resnet18_level_0.6.pdf}
\caption{L1 Unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_chmnist_resnet18_l1filter_0.6.pdf}
\caption{L1 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_chmnist_resnet18_l2filter_0.6.pdf}
\caption{L2 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_chmnist_resnet18_slim_0.6.pdf}
\caption{Slimming}
\end{subfigure}
\caption{Performance of defenses (CHMNIST, ResNet18, Sparsity 0.6).}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_chmnist_densenet121_level_0.6.pdf}
\caption{L1 Unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_chmnist_densenet121_l1filter_0.6.pdf}
\caption{L1 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_chmnist_densenet121_l2filter_0.6.pdf}
\caption{L2 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_chmnist_densenet121_slim_0.6.pdf}
\caption{Slimming}
\end{subfigure}
\caption{Performance of defenses (CHMNIST, DenseNet121, Sparsity 0.6).}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_svhn_resnet18_level_0.6.pdf}
\caption{L1 Unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_svhn_resnet18_l1filter_0.6.pdf}
\caption{L1 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_svhn_resnet18_l2filter_0.6.pdf}
\caption{L2 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_svhn_resnet18_slim_0.6.pdf}
\caption{Slimming}
\end{subfigure}
\caption{Performance of defenses (SVHN, ResNet18, Sparsity 0.6).}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_svhn_vgg16_level_0.6.pdf}
\caption{L1 Unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_svhn_vgg16_l1filter_0.6.pdf}
\caption{L1 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_svhn_vgg16_l2filter_0.6.pdf}
\caption{L2 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_svhn_vgg16_slim_0.6.pdf}
\caption{Slimming}
\end{subfigure}
\caption{Performance of defenses (SVHN, VGG16, Sparsity 0.6).}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_svhn_densenet121_level_0.6.pdf}
\caption{L1 Unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_svhn_densenet121_l1filter_0.6.pdf}
\caption{L1 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_svhn_densenet121_l2filter_0.6.pdf}
\caption{L2 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_svhn_densenet121_slim_0.6.pdf}
\caption{Slimming}
\end{subfigure}
\caption{Performance of defenses (SVHN, DenseNet121, Sparsity 0.6).}
\end{figure*}
\clearpage
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_location_column_level_0.6.pdf}
\caption{Sparsity 0.6}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_location_column_level_0.7.pdf}
\caption{Sparsity 0.7}
\end{subfigure}
\caption{Performance of defenses (Location, FC, L1 Unstructured).}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_purchase_column_level_0.6.pdf}
\caption{Sparsity 0.6}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_purchase_column_level_0.7.pdf}
\caption{Sparsity 0.7}
\end{subfigure}
\caption{Performance of defenses (Purchase, FC, L1 Unstructured).}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_texas_column_level_0.6.pdf}
\caption{Sparsity 0.6}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_texas_column_level_0.7.pdf}
\caption{Sparsity 0.7}
\end{subfigure}
\caption{Performance of defenses (Texas, FC, L1 Unstructured).}
\end{figure*}
\subsection{Defense Performance Against Adaptive Attacks}
The proposed PPB defense still reduces the attack accuracy significantly under adaptive attacks. PPB and ADV are the two best defenses for L1 Unstructured and Slimming pruning, while PPB performs the best for L1 Structured and L2 Structured pruning in most cases.
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar10_densenet121_level_0.6_adp.pdf}
\caption{L1 Unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar10_densenet121_l1filter_0.6_adp.pdf}
\caption{L1 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar10_densenet121_l2filter_0.6_adp.pdf}
\caption{L2 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar10_densenet121_slim_0.6_adp.pdf}
\caption{Slimming}
\end{subfigure}
\caption{Performance of defenses against adaptive attacks (CIFAR10, DenseNet121, Sparsity 0.6).}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar10_vgg16_level_0.6_adp.pdf}
\caption{L1 Unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar10_vgg16_l1filter_0.6_adp.pdf}
\caption{L1 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar10_vgg16_l2filter_0.6_adp.pdf}
\caption{L2 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar10_vgg16_slim_0.6_adp.pdf}
\caption{Slimming}
\end{subfigure}
\caption{Performance of defenses against adaptive attacks (CIFAR10, VGG16, Sparsity 0.6).}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar10_resnet18_level_0.7_adp.pdf}
\caption{L1 Unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar10_resnet18_l1filter_0.7_adp.pdf}
\caption{L1 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar10_resnet18_l2filter_0.7_adp.pdf}
\caption{L2 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar10_resnet18_slim_0.7_adp.pdf}
\caption{Slimming}
\end{subfigure}
\caption{Performance of defenses against adaptive attacks (CIFAR10, ResNet18, Sparsity 0.7).}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar10_vgg16_level_0.7_adp.pdf}
\caption{L1 Unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar10_vgg16_l1filter_0.7_adp.pdf}
\caption{L1 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar10_vgg16_l2filter_0.7_adp.pdf}
\caption{L2 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar10_vgg16_slim_0.7_adp.pdf}
\caption{Slimming}
\end{subfigure}
\caption{Performance of defenses against adaptive attacks (CIFAR10, VGG16, Sparsity 0.7).}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar10_densenet121_level_0.7_adp.pdf}
\caption{L1 Unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar10_densenet121_l1filter_0.7_adp.pdf}
\caption{L1 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar10_densenet121_l2filter_0.7_adp.pdf}
\caption{L2 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar10_densenet121_slim_0.7_adp.pdf}
\caption{Slimming}
\end{subfigure}
\caption{Performance of defenses against adaptive attacks (CIFAR10, DenseNet121, Sparsity 0.7).}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar100_resnet18_level_0.6_adp.pdf}
\caption{L1 Unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar100_resnet18_l1filter_0.6_adp.pdf}
\caption{L1 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar100_resnet18_l2filter_0.6_adp.pdf}
\caption{L2 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar100_resnet18_slim_0.6_adp.pdf}
\caption{Slimming}
\end{subfigure}
\caption{Performance of defenses against adaptive attacks (CIFAR100, ResNet18, Sparsity 0.6).}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar100_vgg16_level_0.6_adp.pdf}
\caption{L1 Unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar100_vgg16_l1filter_0.6_adp.pdf}
\caption{L1 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar100_vgg16_l2filter_0.6_adp.pdf}
\caption{L2 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar100_vgg16_slim_0.6_adp.pdf}
\caption{Slimming}
\end{subfigure}
\caption{Performance of defenses against adaptive attacks (CIFAR100, VGG16, Sparsity 0.6).}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_chmnist_resnet18_level_0.6_adp.pdf}
\caption{L1 Unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_chmnist_resnet18_l1filter_0.6_adp.pdf}
\caption{L1 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_chmnist_resnet18_l2filter_0.6_adp.pdf}
\caption{L2 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_chmnist_resnet18_slim_0.6_adp.pdf}
\caption{Slimming}
\end{subfigure}
\caption{Performance of defenses against adaptive attacks (CHMNIST, ResNet18, Sparsity 0.6).}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_chmnist_densenet121_level_0.6_adp.pdf}
\caption{L1 Unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_chmnist_densenet121_l1filter_0.6_adp.pdf}
\caption{L1 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_chmnist_densenet121_l2filter_0.6_adp.pdf}
\caption{L2 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_chmnist_densenet121_slim_0.6_adp.pdf}
\caption{Slimming}
\end{subfigure}
\caption{Performance of defenses against adaptive attacks (CHMNIST, DenseNet121, Sparsity 0.6).}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_svhn_resnet18_level_0.6_adp.pdf}
\caption{L1 Unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_svhn_resnet18_l1filter_0.6_adp.pdf}
\caption{L1 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_svhn_resnet18_l2filter_0.6_adp.pdf}
\caption{L2 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_svhn_resnet18_slim_0.6_adp.pdf}
\caption{Slimming}
\end{subfigure}
\caption{Performance of defenses against adaptive attacks (SVHN, ResNet18, Sparsity 0.6).}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_svhn_vgg16_level_0.6_adp.pdf}
\caption{L1 Unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_svhn_vgg16_l1filter_0.6_adp.pdf}
\caption{L1 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_svhn_vgg16_l2filter_0.6_adp.pdf}
\caption{L2 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_svhn_vgg16_slim_0.6_adp.pdf}
\caption{Slimming}
\end{subfigure}
\caption{Performance of defenses against adaptive attacks (SVHN, VGG16, Sparsity 0.6).}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_svhn_densenet121_level_0.6_adp.pdf}
\caption{L1 Unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_svhn_densenet121_l1filter_0.6_adp.pdf}
\caption{L1 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_svhn_densenet121_l2filter_0.6_adp.pdf}
\caption{L2 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_svhn_densenet121_slim_0.6_adp.pdf}
\caption{Slimming}
\end{subfigure}
\caption{Performance of defenses against adaptive attacks (SVHN, DenseNet121, Sparsity 0.6).}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_location_column_level_0.6_adp.pdf}
\caption{Sparsity 0.6}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_location_column_level_0.7_adp.pdf}
\caption{Sparsity 0.7}
\end{subfigure}
\caption{Performance of defenses against adaptive attacks (Location, FC, L1 Unstructured).}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_purchase_column_level_0.6_adp.pdf}
\caption{Sparsity 0.6}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_purchase_column_level_0.7_adp.pdf}
\caption{Sparsity 0.7}
\end{subfigure}
\caption{Performance of defenses against adaptive attacks (Purchase, FC, L1 Unstructured).}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_texas_column_level_0.6_adp.pdf}
\caption{Sparsity 0.6}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_texas_column_level_0.7_adp.pdf}
\caption{Sparsity 0.7}
\end{subfigure}
\caption{Performance of defenses against adaptive attacks (Texas, FC, L1 Unstructured).}
\end{figure*}
\section{Extended Experiments for Attack Evaluation}
In this section, we present extended experiments for attack evaluation and report the prediction accuracy and the attack performance in the remaining experimental settings, \textit{i}.\textit{e}.\@\xspace, datasets (CIFAR10, CIFAR100, CHMNIST, SVHN, Location, Purchase, and Texas) and model architectures (VGG16, ResNet18, DenseNet121).
\clearpage
\subsection{Prediction Accuracy of Pruned Models}
This section presents the prediction accuracy of pruned models using different pruning approaches and sparsity levels in the remaining experimental settings, including the model architectures (ResNet18, DenseNet121, VGG16, and FC) and the datasets (CIFAR10, CIFAR100, CHMNIST, SVHN, Location, Purchase, and Texas).
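As a point of reference for how a sparsity level translates into zeroed weights, L1 unstructured (magnitude) pruning can be sketched in a few lines. This is a generic NumPy illustration under the usual magnitude-pruning definition, not the exact implementation used in our experiments.

```python
import numpy as np

def l1_unstructured_prune(weights, sparsity):
    """Zero out the `sparsity` fraction of entries with the smallest |w|."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)          # number of weights to remove
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    mask = np.abs(weights) > threshold     # keep only larger-magnitude weights
    return weights * mask

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
pruned = l1_unstructured_prune(w, 0.6)
achieved = 1.0 - np.count_nonzero(pruned) / pruned.size
print(f"achieved sparsity: {achieved:.2f}")
```

Structured variants (L1/L2 Structured) score whole filters instead of individual entries, but the thresholding logic is analogous.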
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{figs/exp/accuracy/acc_cifar10_vgg16.pdf}
\caption{CIFAR10, VGG16}
\label{fig:mia_cifar10_vgg16}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{figs/exp/accuracy/acc_cifar100_resnet18.pdf}
\caption{CIFAR100, ResNet18}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{figs/exp/accuracy/acc_cifar100_vgg16.pdf}
\caption{CIFAR100, VGG16}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{figs/exp/accuracy/acc_chmnist_resnet18.pdf}
\caption{CHMNIST, ResNet18}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{figs/exp/accuracy/acc_chmnist_densenet121.pdf}
\caption{CHMNIST, DenseNet121}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{figs/exp/accuracy/acc_chmnist_vgg16.pdf}
\caption{CHMNIST, VGG16}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{figs/exp/accuracy/acc_svhn_resnet18.pdf}
\caption{SVHN, ResNet18}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{figs/exp/accuracy/acc_svhn_densenet121.pdf}
\caption{SVHN, DenseNet121}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{figs/exp/accuracy/acc_svhn_vgg16.pdf}
\caption{SVHN, VGG16}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=\linewidth]{figs/exp/accuracy/acc_purchase_column.pdf}
\caption{Purchase, FC}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=\linewidth]{figs/exp/accuracy/acc_texas_column.pdf}
\caption{Texas, FC}
\end{subfigure}
\caption{Prediction accuracy (test accuracy) of the pruned models using different pruning approaches and sparsity levels. Each point indicates the prediction accuracy achieved by the pruned model with a specific pruning approach and sparsity level. The black line indicates the prediction accuracy of the original models.}
\end{figure*}
\clearpage
\subsection{Privacy Risks of Pruned Models}
\label{app:attack_acc}
This section presents the attack accuracy in the remaining experimental settings.
Similar to the observation in the main paper, we find that in most cases the pruned models become more vulnerable to membership inference attacks than the original models.
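To make the comparison concrete, a minimal confidence-thresholding membership inference attack can be sketched as follows. The synthetic confidence scores and the fixed threshold are illustrative assumptions; practical attacks typically tune the threshold on shadow data.

```python
import numpy as np

def conf_attack(member_conf, nonmember_conf, threshold=0.9):
    """Predict 'member' when the model's top confidence exceeds the threshold."""
    tp = np.sum(member_conf > threshold)       # members correctly flagged
    tn = np.sum(nonmember_conf <= threshold)   # non-members correctly rejected
    return (tp + tn) / (member_conf.size + nonmember_conf.size)

rng = np.random.default_rng(1)
# Overfit models tend to be more confident on training (member) samples.
member = np.clip(rng.normal(0.97, 0.03, 1000), 0.0, 1.0)
nonmember = np.clip(rng.normal(0.80, 0.10, 1000), 0.0, 1.0)
acc = conf_attack(member, nonmember)
print(f"attack accuracy: {acc:.2f}")
```

An attack accuracy above 0.5 (random guessing) indicates membership leakage; the figures below report this quantity across pruning approaches and sparsity levels.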
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/acc_attack_cifar10_vgg16.pdf}
\caption{CIFAR10, VGG16}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/acc_attack_cifar100_resnet18.pdf}
\caption{CIFAR100, ResNet18}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/acc_attack_cifar100_vgg16.pdf}
\caption{CIFAR100, VGG16}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/acc_attack_chmnist_resnet18.pdf}
\caption{CHMNIST, ResNet18}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/acc_attack_chmnist_densenet121.pdf}
\caption{CHMNIST, DenseNet121}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/acc_attack_chmnist_vgg16.pdf}
\caption{CHMNIST, VGG16}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/acc_attack_svhn_resnet18.pdf}
\caption{SVHN, ResNet18}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/acc_attack_svhn_densenet121.pdf}
\caption{SVHN, DenseNet121}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/acc_attack_svhn_vgg16.pdf}
\caption{SVHN, VGG16}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/acc_attack_texas_column.pdf}
\caption{Texas, FC}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/acc_attack_purchase_column.pdf}
\caption{Purchase, FC}
\end{subfigure}
\caption{Privacy Risks of Neural Network Pruning (w.r.t. prediction accuracy). Most pruning approaches result in higher attack accuracy than the original models at a similar prediction accuracy. We present the attack accuracy of SAMIA for the pruned models and the attack accuracy of the Conf attack for the original models.}
\end{figure*}
\newpage
\subsection{Effectiveness of SAMIA}
We investigate the effectiveness of SAMIA.
Although most investigated MIAs are effective in attacking pruned models, SAMIA results in higher attack accuracy than other attacks in most cases.
SAMIA does not rank among the best attacks mainly in two cases: 1) when the attack accuracy is close to 50\% (\textit{i}.\textit{e}.\@\xspace, random guessing); 2) when the sparsity level is low (\textit{e}.\textit{g}.\@\xspace, $0.5$). When the sparsity level is low, the divergence in confidence and sensitivity is reduced, so these features cannot be fully exploited by SAMIA, which brings its attack performance close to that of the existing attacks.
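The sensitivity feature discussed above can be illustrated as the change in a model's output under a small input perturbation. This is one hedged reading of the feature; the stand-in linear model, the perturbation scale, and the L1 output distance below are illustrative assumptions, not the exact construction used by SAMIA.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def sensitivity(predict, x, eps=0.01, seed=0):
    """Output change under a small random input perturbation
    (illustrative definition of the 'sensitivity' feature)."""
    rng = np.random.default_rng(seed)
    p0 = softmax(predict(x))
    p1 = softmax(predict(x + eps * rng.standard_normal(x.shape)))
    return np.abs(p0 - p1).sum(axis=-1)

def predict(x):
    # Stand-in "model": a fixed linear scorer (illustrative only).
    W = np.arange(30, dtype=float).reshape(3, 10) / 10.0
    return x @ W.T

x = np.ones((5, 10))   # 5 samples, 10 features
s = sensitivity(predict, x)
print(s.round(4))
```

Member samples, on which the model is typically more stable, tend to yield smaller sensitivity values than non-members, which is what makes this feature useful alongside confidence.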
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_cifar10_densenet121_level.pdf}
\caption{L1 unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_cifar10_densenet121_l1filter.pdf}
\caption{L1 structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_cifar10_densenet121_l2filter.pdf}
\caption{L2 structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_cifar10_densenet121_slim.pdf}
\caption{Slimming}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_cifar10_densenet121_level_2.pdf}
\caption{L1 unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_cifar10_densenet121_l1filter_2.pdf}
\caption{L1 structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_cifar10_densenet121_l2filter_2.pdf}
\caption{L2 structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_cifar10_densenet121_slim_2.pdf}
\caption{Slimming}
\end{subfigure}
\caption{Attack performance comparison of MIAs (CIFAR10, DenseNet121).}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_cifar10_vgg16_level.pdf}
\caption{L1 unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_cifar10_vgg16_l1filter.pdf}
\caption{L1 structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_cifar10_vgg16_l2filter.pdf}
\caption{L2 structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_cifar10_vgg16_slim.pdf}
\caption{Slimming}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_cifar10_vgg16_level_2.pdf}
\caption{L1 unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_cifar10_vgg16_l1filter_2.pdf}
\caption{L1 structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_cifar10_vgg16_l2filter_2.pdf}
\caption{L2 structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_cifar10_vgg16_slim_2.pdf}
\caption{Slimming}
\end{subfigure}
\caption{Attack performance comparison of MIAs (CIFAR10, VGG16).}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_cifar100_resnet18_level.pdf}
\caption{L1 unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_cifar100_resnet18_l1filter.pdf}
\caption{L1 structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_cifar100_resnet18_l2filter.pdf}
\caption{L2 structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_cifar100_resnet18_slim.pdf}
\caption{Slimming}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_cifar100_resnet18_level_2.pdf}
\caption{L1 unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_cifar100_resnet18_l1filter_2.pdf}
\caption{L1 structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_cifar100_resnet18_l2filter_2.pdf}
\caption{L2 structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_cifar100_resnet18_slim_2.pdf}
\caption{Slimming}
\end{subfigure}
\caption{Attack performance comparison of MIAs (CIFAR100, ResNet18). }
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_cifar100_vgg16_level.pdf}
\caption{L1 unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_cifar100_vgg16_l1filter.pdf}
\caption{L1 structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_cifar100_vgg16_l2filter.pdf}
\caption{L2 structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_cifar100_vgg16_slim.pdf}
\caption{Slimming}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_cifar100_vgg16_level_2.pdf}
\caption{L1 unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_cifar100_vgg16_l1filter_2.pdf}
\caption{L1 structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_cifar100_vgg16_l2filter_2.pdf}
\caption{L2 structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_cifar100_vgg16_slim_2.pdf}
\caption{Slimming}
\end{subfigure}
\caption{Attack performance comparison of MIAs (CIFAR100, VGG16). }
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_chmnist_resnet18_level.pdf}
\caption{L1 unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_chmnist_resnet18_l1filter.pdf}
\caption{L1 structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_chmnist_resnet18_l2filter.pdf}
\caption{L2 structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_chmnist_resnet18_slim.pdf}
\caption{Slimming}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_chmnist_resnet18_level_2.pdf}
\caption{L1 unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_chmnist_resnet18_l1filter_2.pdf}
\caption{L1 structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_chmnist_resnet18_l2filter_2.pdf}
\caption{L2 structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_chmnist_resnet18_slim_2.pdf}
\caption{Slimming}
\end{subfigure}
\caption{Attack performance comparison of MIAs (CHMNIST, ResNet18). }
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_chmnist_densenet121_level.pdf}
\caption{L1 unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_chmnist_densenet121_l1filter.pdf}
\caption{L1 structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_chmnist_densenet121_l2filter.pdf}
\caption{L2 structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_chmnist_densenet121_slim.pdf}
\caption{Slimming}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_chmnist_densenet121_level_2.pdf}
\caption{L1 unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_chmnist_densenet121_l1filter_2.pdf}
\caption{L1 structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_chmnist_densenet121_l2filter_2.pdf}
\caption{L2 structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_chmnist_densenet121_slim_2.pdf}
\caption{Slimming}
\end{subfigure}
\caption{Attack performance comparison of MIAs (CHMNIST, DenseNet121). }
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_chmnist_vgg16_level.pdf}
\caption{L1 unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_chmnist_vgg16_l1filter.pdf}
\caption{L1 structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_chmnist_vgg16_l2filter.pdf}
\caption{L2 structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_chmnist_vgg16_slim.pdf}
\caption{Slimming}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_chmnist_vgg16_level_2.pdf}
\caption{L1 unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_chmnist_vgg16_l1filter_2.pdf}
\caption{L1 structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_chmnist_vgg16_l2filter_2.pdf}
\caption{L2 structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_chmnist_vgg16_slim_2.pdf}
\caption{Slimming}
\end{subfigure}
\caption{Attack performance comparison of MIAs (CHMNIST, VGG16). }
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_svhn_resnet18_level.pdf}
\caption{L1 unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_svhn_resnet18_l1filter.pdf}
\caption{L1 structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_svhn_resnet18_l2filter.pdf}
\caption{L2 structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_svhn_resnet18_slim.pdf}
\caption{Slimming}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_svhn_resnet18_level_2.pdf}
\caption{L1 unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_svhn_resnet18_l1filter_2.pdf}
\caption{L1 structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_svhn_resnet18_l2filter_2.pdf}
\caption{L2 structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_svhn_resnet18_slim_2.pdf}
\caption{Slimming}
\end{subfigure}
\caption{Attack performance comparison of MIAs (SVHN, ResNet18). }
\label{fig:mia_svhn_resnet}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_svhn_densenet121_level.pdf}
\caption{L1 unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_svhn_densenet121_l1filter.pdf}
\caption{L1 structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_svhn_densenet121_l2filter.pdf}
\caption{L2 structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_svhn_densenet121_slim.pdf}
\caption{Slimming}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_svhn_densenet121_level_2.pdf}
\caption{L1 unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_svhn_densenet121_l1filter_2.pdf}
\caption{L1 structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_svhn_densenet121_l2filter_2.pdf}
\caption{L2 structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_svhn_densenet121_slim_2.pdf}
\caption{Slimming}
\end{subfigure}
\caption{Attack performance comparison of MIAs (SVHN, DenseNet121). }
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_svhn_vgg16_level.pdf}
\caption{L1 unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_svhn_vgg16_l1filter.pdf}
\caption{L1 structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_svhn_vgg16_l2filter.pdf}
\caption{L2 structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_svhn_vgg16_slim.pdf}
\caption{Slimming}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_svhn_vgg16_level_2.pdf}
\caption{L1 unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_svhn_vgg16_l1filter_2.pdf}
\caption{L1 structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_svhn_vgg16_l2filter_2.pdf}
\caption{L2 structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_svhn_vgg16_slim_2.pdf}
\caption{Slimming}
\end{subfigure}
\caption{Attack performance comparison of MIAs (SVHN, VGG16). }
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_purchase_column_level.pdf}
\caption{Purchase, FC}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_purchase_column_level_2.pdf}
\caption{Purchase, FC}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_texas_column_level.pdf}
\caption{Texas, FC}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_texas_column_level_2.pdf}
\caption{Texas, FC}
\end{subfigure}
\caption{Attack performance comparison of MIAs (L1 Unstructured).}
\end{figure*}
\clearpage
\subsection{Impact of Confidence Gap, Sensitivity Gap, and Generalization Gap}
This section reports the impact of the confidence gap, sensitivity gap, and generalization gap in the remaining experimental settings. We observe a strong correlation between the gaps (\textit{i}.\textit{e}.\@\xspace, confidence gap, sensitivity gap, and generalization gap) and the attack accuracy.
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/conf_gap_cifar10_densenet121.pdf}
\caption{Confidence gap}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/sens_gap_cifar10_densenet121.pdf}
\caption{Sensitivity gap}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/gen_gap_cifar10_densenet121.pdf}
\caption{Generalization gap}
\end{subfigure}
\caption{Impact of confidence gap, sensitivity gap, and generalization gap (CIFAR10, DenseNet121).}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/conf_gap_cifar10_vgg16.pdf}
\caption{Confidence gap}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/sens_gap_cifar10_vgg16.pdf}
\caption{Sensitivity gap}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/gen_gap_cifar10_vgg16.pdf}
\caption{Generalization gap}
\end{subfigure}
\caption{Impact of confidence gap, sensitivity gap, and generalization gap (CIFAR10, VGG16).}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/conf_gap_cifar100_resnet18.pdf}
\caption{Confidence gap}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/sens_gap_cifar100_resnet18.pdf}
\caption{Sensitivity gap}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/gen_gap_cifar100_resnet18.pdf}
\caption{Generalization gap}
\end{subfigure}
\caption{Impact of confidence gap, sensitivity gap, and generalization gap (CIFAR100, ResNet18).}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/conf_gap_cifar100_densenet121.pdf}
\caption{Confidence gap}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/sens_gap_cifar100_densenet121.pdf}
\caption{Sensitivity gap}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/gen_gap_cifar100_densenet121.pdf}
\caption{Generalization gap}
\label{fig:gen_gap_cifar100_densenet}
\end{subfigure}
\caption{Impact of confidence gap, sensitivity gap, and generalization gap (CIFAR100, DenseNet121).}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/conf_gap_cifar100_vgg16.pdf}
\caption{Confidence gap}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/sens_gap_cifar100_vgg16.pdf}
\caption{Sensitivity gap}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/gen_gap_cifar100_vgg16.pdf}
\caption{Generalization gap}
\end{subfigure}
\caption{Impact of confidence gap, sensitivity gap, and generalization gap (CIFAR100, VGG16).}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/conf_gap_chmnist_resnet18.pdf}
\caption{Confidence gap}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/sens_gap_chmnist_resnet18.pdf}
\caption{Sensitivity gap}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/gen_gap_chmnist_resnet18.pdf}
\caption{Generalization gap}
\end{subfigure}
\caption{Impact of confidence gap, sensitivity gap, and generalization gap (CHMNIST, ResNet18).}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/conf_gap_chmnist_densenet121.pdf}
\caption{Confidence gap}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/sens_gap_chmnist_densenet121.pdf}
\caption{Sensitivity gap}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/gen_gap_chmnist_densenet121.pdf}
\caption{Generalization gap}
\end{subfigure}
\caption{Impact of confidence gap, sensitivity gap, and generalization gap (CHMNIST, DenseNet121).}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/conf_gap_chmnist_vgg16.pdf}
\caption{Confidence gap}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/sens_gap_chmnist_vgg16.pdf}
\caption{Sensitivity gap}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/gen_gap_chmnist_vgg16.pdf}
\caption{Generalization gap}
\end{subfigure}
\caption{Impact of confidence gap, sensitivity gap, and generalization gap (CHMNIST, VGG16).}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/conf_gap_svhn_resnet18.pdf}
\caption{Confidence gap}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/sens_gap_svhn_resnet18.pdf}
\caption{Sensitivity gap}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/gen_gap_svhn_resnet18.pdf}
\caption{Generalization gap}
\end{subfigure}
\caption{Impact of confidence gap, sensitivity gap, and generalization gap (SVHN, ResNet18).}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/conf_gap_svhn_densenet121.pdf}
\caption{Confidence gap}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/sens_gap_svhn_densenet121.pdf}
\caption{Sensitivity gap}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/gen_gap_svhn_densenet121.pdf}
\caption{Generalization gap}
\end{subfigure}
\caption{Impact of confidence gap, sensitivity gap, and generalization gap (SVHN, DenseNet121).}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/conf_gap_svhn_vgg16.pdf}
\caption{Confidence gap}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/sens_gap_svhn_vgg16.pdf}
\caption{Sensitivity gap}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/gen_gap_svhn_vgg16.pdf}
\caption{Generalization gap}
\end{subfigure}
\caption{Impact of confidence gap, sensitivity gap, and generalization gap (SVHN, VGG16).}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/conf_gap_location_column.pdf}
\caption{Confidence gap}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/sens_gap_location_column.pdf}
\caption{Sensitivity gap}
\label{fig:sens_gap_location}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/gen_gap_location_column.pdf}
\caption{Generalization gap}
\end{subfigure}
\vspace{-0.5em}
\caption{\xy{Impact of confidence gap, sensitivity gap, and generalization gap (Location, FC).}}
\label{fig:gap_attack_location_fc}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/conf_gap_purchase_column.pdf}
\caption{Confidence gap}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/sens_gap_purchase_column.pdf}
\caption{Sensitivity gap}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/gen_gap_purchase_column.pdf}
\caption{Generalization gap}
\end{subfigure}
\caption{Impact of confidence gap, sensitivity gap, and generalization gap (Purchase, FC).}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/conf_gap_texas_column.pdf}
\caption{Confidence gap}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/sens_gap_texas_column.pdf}
\caption{Sensitivity gap}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/gen_gap_texas_column.pdf}
\caption{Generalization gap}
\end{subfigure}
\caption{Impact of confidence gap, sensitivity gap, and generalization gap (Texas, FC).}
\end{figure*}
\clearpage
\subsection{Impact of Dataset Split Strategies}
\label{sec:dataset_split}
This section reports the attack performance when the target model is trained on an unevenly split dataset.
In the main paper, we split each dataset into training, test, and validation sets in the ratio 45\%/45\%/10\%, so that the numbers of training and test samples are equal and the random-guessing attack accuracy is 50\%. Here we evaluate the attack performance using different, uneven ratios. Given the uneven split, we use the Area Under the Curve (AUC) to report the attack performance. As shown in Table~\ref{tab:split}, an uneven split may cause a slight decrease in attack AUC compared with an even split, but the privacy risk remains high (over 70\% attack AUC).
\begin{table*}[!h]
\centering
\begin{tabular}{@{}cc@{}}
\toprule
Dataset Split Ratio & \multirow{2}{*}{Attack AUC} \\
Train(\%)/Test(\%)/Dev(\%) & \\ \midrule
45/45/10 & 75.5 \\
50/40/10 & 72.7 \\
60/30/10 & 71.6 \\
70/20/10 & 72.7 \\\bottomrule
\end{tabular}%
\caption{Impact of different dataset split strategies (CIFAR10, ResNet18, L1 Unstructured, Sparsity 0.7).}
\label{tab:split}
\end{table*}
\clearpage
\subsection{Attack Accuracy With Unknown Sparsity Levels}
\label{app:attack_unknown}
This section reports the attack accuracy with unknown sparsity levels on the CIFAR10, CIFAR100, Purchase, and Texas datasets. Without knowing the sparsity level (\textit{i}.\textit{e}.\@\xspace, with different sparsity levels used in the target model and the shadow model), the adversary can still achieve similar (in some cases even higher) attack accuracy.
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=\linewidth]{figs/exp/unknown/unknown_cifar10_densenet121_level.pdf}
\caption{L1 Unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=\linewidth]{figs/exp/unknown/unknown_cifar10_densenet121_l1filter.pdf}
\caption{L1 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=\linewidth]{figs/exp/unknown/unknown_cifar10_densenet121_l2filter.pdf}
\caption{L2 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=\linewidth]{figs/exp/unknown/unknown_cifar10_densenet121_slim.pdf}
\caption{Slimming}
\end{subfigure}
\caption{Attack accuracy with unknown sparsity levels (CIFAR10, DenseNet121).}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=\linewidth]{figs/exp/unknown/unknown_cifar10_vgg16_level.pdf}
\caption{L1 Unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=\linewidth]{figs/exp/unknown/unknown_cifar10_vgg16_l1filter.pdf}
\caption{L1 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=\linewidth]{figs/exp/unknown/unknown_cifar10_vgg16_l2filter.pdf}
\caption{L2 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=\linewidth]{figs/exp/unknown/unknown_cifar10_vgg16_slim.pdf}
\caption{Slimming}
\end{subfigure}
\caption{Attack accuracy with unknown sparsity levels (CIFAR10, VGG16).}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=\linewidth]{figs/exp/unknown/unknown_cifar100_densenet121_level.pdf}
\caption{L1 Unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=\linewidth]{figs/exp/unknown/unknown_cifar100_densenet121_l1filter.pdf}
\caption{L1 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=\linewidth]{figs/exp/unknown/unknown_cifar100_densenet121_l2filter.pdf}
\caption{L2 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=\linewidth]{figs/exp/unknown/unknown_cifar100_densenet121_slim.pdf}
\caption{Slimming}
\end{subfigure}
\caption{Attack accuracy with unknown sparsity levels (CIFAR100, DenseNet121).}
\end{figure*}
\begin{figure*}[!h]
\centering
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=\linewidth]{figs/exp/unknown/unknown_purchase_column_level.pdf}
\caption{Purchase, FC}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=\linewidth]{figs/exp/unknown/unknown_texas_column_level.pdf}
\caption{Texas, FC}
\end{subfigure}
\caption{Attack accuracy with unknown sparsity levels (Purchase and Texas, FC).}
\end{figure*}
\input{appendix/defense}
\input{appendix/defense_adp}
\subsection{Divergence of Prediction Behaviors}\label{4.1}
\xy{To investigate the prediction behaviors of pruned neural networks, we first introduce two metrics: prediction confidence and prediction sensitivity. Specifically, given an input sample $\bm{x}$ and a pruned model $f_p$, the prediction confidence is defined as $\mathrm{PC} = f_p(\bm{x})$.
In addition, to further measure the prediction behavior changes in terms of slight input change, we introduce prediction sensitivity, which is defined as\vspace{-0.2em}
\begin{equation}\label{eq:sensitivity}
\mathrm{PS} = \frac{1}{n} \sum_{i=1}^n \frac{|f_p(\bm{x} + \epsilon\bm{\delta_i}) - f_p(\bm{x})|}{\epsilon},
\end{equation}
where $\bm{\delta_i} \sim \mathcal{N}(0, 1)$ is a random Gaussian noise vector added to the input data $\bm{x}$, and $\epsilon$ controls the magnitude of input changes.
A similar idea has been used for gradient estimation in black-box adversarial attacks~\cite{chen2017zoo}. It has been shown that a small number of noise vectors suffices for a good estimate of prediction changes, so we set a small query budget ($n=10$) in the evaluation~\cite{chen2017zoo,bhagoji2018practical}.
Accordingly, we use the confidence and sensitivity to measure the divergence between members and non-members. We define the confidence gap as\vspace{-0.2em}
\begin{equation}
\frac{1}{|\mathcal{D}_{train}|}\sum_{(\bm{x_i}, y_i)\in \mathcal{D}_{train}}f_p^{y_i}({\bm{x_i}}) - \frac{1}{|\mathcal{D}_{test}|}\sum_{(\bm{x_i}, y_i)\in \mathcal{D}_{test}}f_p^{y_i}(\bm{x_i}),
\end{equation}
where $f_p^{y_i}$ denotes the prediction confidence of ground-truth class $y_i$.
The confidence gap measures the difference in average ground-truth-class confidence between members and non-members.
Similarly, we define the sensitivity gap as\vspace{-0.2em}
\begin{equation}
\frac{1}{|\mathcal{D}_{train}|}\sum_{(\bm{x_i}, y_i)\in \mathcal{D}_{train}}\mathrm{PS}^{y_i}(\bm{x_i}) - \frac{1}{|\mathcal{D}_{test}|}\sum_{(\bm{x_i}, y_i)\in \mathcal{D}_{test}}\mathrm{PS}^{y_i}(\bm{x_i}),
\end{equation}
where $\mathrm{PS}^{y_i}$ denotes the prediction sensitivity (Eq.~\ref{eq:sensitivity}) of the ground-truth class $y_i$. The sensitivity gap measures the difference in average ground-truth-class sensitivity between members and non-members.}
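To make these definitions concrete, the prediction sensitivity of Eq.~\ref{eq:sensitivity} and the two gap metrics can be sketched as follows; this is an illustrative NumPy version with hypothetical inputs (a callable model and $(N, C)$ score matrices), not our actual implementation.

```python
import numpy as np

def prediction_sensitivity(f_p, x, n=10, eps=1e-3, seed=0):
    """Monte-Carlo estimate of prediction sensitivity: average
    |f_p(x + eps*delta) - f_p(x)| / eps over n Gaussian noise vectors.
    f_p is a callable mapping an input vector to a confidence vector."""
    rng = np.random.default_rng(seed)
    base = f_p(x)
    total = np.zeros_like(base)
    for _ in range(n):
        delta = rng.standard_normal(x.shape)  # delta_i ~ N(0, 1)
        total += np.abs(f_p(x + eps * delta) - base) / eps
    return total / n

def gap(train_scores, train_y, test_scores, test_y):
    """Difference of average ground-truth-class scores between members
    (training set) and non-members (test set)."""
    member = train_scores[np.arange(len(train_y)), train_y].mean()
    non_member = test_scores[np.arange(len(test_y)), test_y].mean()
    return member - non_member
```

Applying `gap` to confidence matrices yields the confidence gap; applying it to per-sample sensitivity matrices yields the sensitivity gap.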
As illustrated in Figure~\ref{fig:histogram}, neural network pruning increases the divergence of prediction confidences and \xy{prediction sensitivities}, which introduces new attack vectors for MIAs and thus makes pruned models more vulnerable. Moreover, the divergences of prediction confidences and sensitivities from the pruned model vary widely across the classes of the training and test data.
\xy{Figure~\ref{fig:cls_div} shows that the divergences of the pruned models' prediction behavior (confidence and sensitivity) over members and non-members are significantly different among classes.} Similar observations of prediction confidences on different classes after pruning have been made in other fields such as model fairness and transparency~\cite{paganini2020prune,hooker2019compressed,hooker2020characterising}.
\subsection{SAMIA: Self-Attention MIA}
\xy{Based on the above observations, we propose one hypothesis: \textit{the divergences among classes, \textit{i}.\textit{e}.\@\xspace, the confidence gap and sensitivity gap, can provide fine-grained ``evidence'' for MIAs, leading to serious privacy leakage}. In addition, most existing MIA research only considers the confidence gap and a single threshold on the ground-truth class, which may underestimate the privacy risks of MIAs in neural network pruning. Hence, we propose SAMIA, a self-attention MIA, to fully utilize the increased divergence information along with the class information for a finer-grained analysis.} Specifically, self-attention is a neural network module that captures global dependencies among inputs and allows the inputs to interact with each other. Despite the recent success of the self-attention mechanism in many areas, such as natural language processing~\cite{vaswani2017attention,devlin2019bert} and computer vision~\cite{zhang2019self,wang2018non,fu2019dual}, it has not yet been well exploited in privacy attack research.
\xy{In SAMIA, we leverage the self-attention mechanism to automatically extract the finer-grained ``thresholds'' from different classes by capturing the dependency between predicted information (confidence and sensitivity) and class information and allowing them to interact with each other. Specifically, SAMIA takes the pruned model's prediction confidence and sensitivity and ground-truth labels as inputs. Given a specific class, the self-attention mechanism finds out the specific confidence information and sensitivity information that the attack ``threshold'' should pay more ``attention'' to.}
\begin{figure}[!t]
\centering
\includegraphics[width=0.7\linewidth]{figs/transformer.pdf}
\vspace{-0.5em}
\caption{\xy{Attack model architecture in SAMIA.}}
\label{fig:transformer}
\vspace{-0.5em}
\end{figure}
\xy{Figure~\ref{fig:transformer}} illustrates the network architecture of the attack model used in SAMIA, \xy{inspired by the Transformer~\cite{vaswani2017attention}, \textit{i}.\textit{e}.\@\xspace, one of the most widely used self-attention architectures. We first convert the ground-truth label into a one-hot vector and then feed the pruned model's prediction confidence, prediction sensitivity, and the one-hot vector into the attack model as the input features.
The input features are encoded into a vector using a Fully Connected (FC) layer, which is then fed into the multi-head self-attention modules. In each module, we encode the features as query $Q$, key $K$, and value $V$ vectors using linear functions, following the self-attention strategy.} The attention module calculates the attention scores in a scaled dot-product way:
$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}(QK^T)V,$
where $\mathrm{softmax}()$ denotes the softmax function, which normalizes the attention scores to sum to $1$.
The output of the attention module is the weighted sum of the value vectors, where the weight assigned to each value is derived from the attention scores $\mathrm{softmax}(QK^T)$. \xy{In addition, we compute four sets of attention scores (\textit{i}.\textit{e}.\@\xspace, multi-head attention) to capture different attention strategies. After the attention module, we add its output to the input features and apply layer normalization~\cite{ba2016layer} to stabilize the attack model training. The result is fed into another FC layer with layer normalization. We treat these operations as a block, repeat the block three times, and follow it with two fully connected layers.}
A non-linear activation function (ReLU) is applied to the outputs of the intermediate FC layers. A softmax function is applied to the last FC layer to produce the binary prediction of membership status.
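For concreteness, the core attention computation can be sketched as a single-head NumPy function (with the standard $1/\sqrt{d_k}$ scaling of the Transformer); this is an illustrative sketch, not our multi-head implementation.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Single-head scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = K.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d_k))  # each row of weights sums to 1
    return weights @ V  # weighted sum of the value vectors
```

In the multi-head variant, this computation is repeated with independent linear projections of the inputs (four in SAMIA) and the results are combined.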
\xy{Compared with existing MIAs that learn a single threshold on the prediction confidence to determine membership, SAMIA captures both confidence and sensitivity information and learns diverse, class-specific thresholds. Our evaluation results demonstrate that SAMIA achieves higher attack accuracy than state-of-the-art attacks.}
\subsection{Neural Network Pruning}
The state-of-the-art neural networks are usually deep and resource-hungry, requiring large amounts of computation and memory, which is a particular challenge on resource-constrained end devices. As one of the most popular network compression approaches, neural network pruning has attracted great attention in recent years~\cite{han2015deep,li2016pruning,liu2017learning,blalock2020state}. In general, most network pruning studies follow the ``train-prune-fine-tune'' workflow. For example, Han~\textit{et al}.\@\xspace~\cite{han2015deep} proposed removing the individual parameters with the lowest magnitude. Removing individual parameters reduces the model size, but the resulting irregular sparsity may not facilitate hardware optimization or accelerate neural network computation. Therefore, many methods remove parameters in an organized way, as groups of parameters (\textit{i}.\textit{e}.\@\xspace, structured pruning). For example, Li~\textit{et al}.\@\xspace~\cite{li2016pruning} removed the entire filters with the lowest magnitude in the neural network, which leads to significant speedup compared with unstructured pruning.
Liu~\textit{et al}.\@\xspace~\cite{liu2017learning} removed entire channels according to the corresponding scaling factors in the subsequent batch normalization layers. In this paper, we investigate the privacy risks of both unstructured and structured pruning approaches.
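As an illustration of the structured criterion above, the L1 filter-selection step can be sketched as follows, assuming a hypothetical convolutional weight tensor of shape $(C_{out}, C_{in}, k_h, k_w)$; the helper name is illustrative, not tied to any specific framework.

```python
import numpy as np

def l1_filter_mask(conv_w, sparsity):
    """Binary mask that zeroes the fraction `sparsity` of filters with the
    smallest L1 norm (structured pruning over output channels)."""
    # L1 norm of each filter (one norm per output channel)
    norms = np.abs(conv_w).reshape(conv_w.shape[0], -1).sum(axis=1)
    n_prune = int(sparsity * conv_w.shape[0])
    pruned = np.argsort(norms)[:n_prune]  # indices of lowest-magnitude filters
    mask = np.ones_like(conv_w)
    mask[pruned] = 0.0
    return mask
```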
More recently, new pruning approaches have been proposed, which prune parameters by searching for an optimal neural architecture~\cite{cai2019once,li2020eagleeye} or fine-tune the pruned model by rewinding the parameters to earlier states~\cite{frankle2018lottery,renda2020comparing}. The privacy risks discussed in this paper may also exist in these new pruning approaches.
We will investigate their privacy risks in our future work.
On the other hand, recent efforts have examined neural network pruning from other important perspectives. Paganini~\cite{paganini2020prune} investigated unfairness and systematic biases in pruned models. Hooker \textit{et al}.\@\xspace~\cite{hooker2019compressed} demonstrated biased performance on different groups and classes after pruning. Given the potential for pervasive deployment of neural network pruning, this work targets another critical and urgent aspect of neural network pruning, \textit{i}.\textit{e}.\@\xspace, training data privacy.
\vspace{-0.5em}
\subsection{Membership Inference Attacks (MIAs)}
Membership inference attacks pose serious privacy threats by determining whether a record was in the training dataset of a neural network model via querying that model. Given a target neural network model $f: \mathbb{R}^n \rightarrow \mathbb{R}$, the process of an MIA can be formally defined as:
\vspace{-0.5em}
\begin{equation}
\mathcal{A}: \bm{x}, f \rightarrow \{0, 1\},\vspace{-0.5em}
\end{equation}
where $\mathcal{A}$ denotes the attack model, which is a binary classifier. If the data sample $\bm{x}$ is used to train the target model $f$, the attack model $\mathcal{A}$ outputs $1$ (\textit{i}.\textit{e}.\@\xspace, member), and $0$ otherwise (\textit{i}.\textit{e}.\@\xspace, non-member).
Due to practical considerations, most MIAs focus on the black-box setting, where an adversary only has access to the target model's outputs. By leveraging the target model's prediction confidences, Shokri~\textit{et al}.\@\xspace~\cite{shokri2015privacy} proposed a black-box MIA. They constructed several shadow models to mimic the behavior of a target model. The shadow models are then used to generate data to train a neural-network-based binary classifier that determines the membership of a record against the target model, \textit{i}.\textit{e}.\@\xspace, whether the record belongs to the target model's training dataset. Salem~\textit{et al}.\@\xspace~\cite{salem2019ml} further simplified this attack by using only a single shadow model. To further improve the attack accuracy, Nasr~\textit{et al}.\@\xspace~\cite{nasr2018machine} included more features, such as the class labels of data samples, to train the binary classifier. In addition to the aforementioned neural-network-based binary classifiers, Leino~\textit{et al}.\@\xspace~\cite{leino2020stolen}, Yeom~\textit{et al}.\@\xspace~\cite{yeom2018privacy}, and Song~\textit{et al}.\@\xspace~\cite{song2019privacy,song2020systematic} proposed metric-based binary classifiers, where the membership of a record is directly determined by a predefined threshold on metrics such as the prediction confidence, entropy, or modified entropy of the record.
Song and Mittal showed that, by setting a class-dependent threshold, the metric-based classifier can achieve inference performance comparable to or even better than the neural-network-based classifier~\cite{song2020systematic}.
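As an illustration, a metric-based classifier with class-dependent thresholds can be sketched in a few lines; the names and thresholds below are hypothetical, and this is not the exact calibration procedure of~\cite{song2020systematic}.

```python
import numpy as np

def threshold_attack(conf, labels, thresholds):
    """Metric-based MIA sketch: predict member (1) when the ground-truth-class
    confidence exceeds that class's threshold, non-member (0) otherwise.

    conf: (N, C) confidence matrix; labels: (N,) ground-truth classes;
    thresholds: (C,) one calibrated threshold per class (assumed given).
    """
    gt_conf = conf[np.arange(len(labels)), labels]
    return (gt_conf > thresholds[labels]).astype(int)
```

In practice, the per-class thresholds would be calibrated on a shadow model's member and non-member confidences.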
\xy{Despite the extensive research on MIAs, none of them is designed for pruned models. Therefore, we propose SAMIA to investigate the privacy risks of pruned models.}
\vspace{-0.5em}
\subsection{Defenses against MIAs}
Recent efforts have been made to defend against MIAs. As one of the most popular privacy-preserving techniques, differential privacy (DP) provides a provable defense against MIAs by adding noise to the gradients or parameters during model training~\cite{dwork2008differential,dwork2006calibrating,abadi2016deep}. However, DP usually requires noise of large magnitude to achieve a meaningful privacy guarantee, which seriously degrades the performance of the protected models~\cite{jayaraman2019evaluating}.
On the other hand, regularization~\cite{shokri2017membership}, dropout, and model stacking~\cite{salem2019ml} have been used in model training to reduce the privacy risks caused by overfitting. Although these approaches reduced the vulnerability by bridging the generalization gap between member and non-member data samples, in many cases, the privacy risks after applying these approaches are still high.
Recent adversarial learning techniques~\cite{goodfellow2014explaining,yuan2019adversarial}
have been introduced to defend against MIAs by adding noise to the prediction confidences to mislead the adversary~\cite{nasr2018machine,jia2019memguard}.
In a recent analysis of defense mechanisms, Song and Mittal showed that the early-stopping mechanism achieves performance comparable to most defenses~\cite{song2020systematic}.
\xy{In this paper, we provide a comprehensive analysis of defenses in neural network pruning, including our proposed PPB defense and existing defense mechanisms.}
\subsection{Neural Network Pruning Workflow}
This paper focuses on a general neural network pruning process whose workflow includes three key stages: original network training, coarse pruning, and fine-tuning, as illustrated in Figure~\ref{fig:pipeline}. Specifically,
\begin{enumerate}
\item \textit{Original network training}: Train a large (sometimes over-parameterized) model $f(\bm{x}; \bm{W})$ with parameters $\bm{W}$;
\vspace{-0.5em}
\item \textit{Coarse pruning}: Prune the trained model by removing parameters according to a certain criterion. Pruning takes $f(\bm{x}; \bm{W})$ as input and produces a new model $f(\bm{x}; \bm{M}\odot \bm{W})$, where $\bm{M}\in \{0, 1\}^{|\bm{W}|}$ denotes the binary mask that sets certain parameters to 0, and $\odot$ denotes element-wise multiplication;
\vspace{-0.5em}
\item \textit{Fine-tuning}: Fine-tune the pruned model by retraining it on the same training data for $N$ epochs. After fine-tuning, we obtain the pruned model $f(\bm{x}; \bm{M}\odot \bm{W}')$ with a new set of parameters $\bm{W}'$ different from $\bm{W}$. For simplicity, in the rest of the paper, we use $f$ to denote the original model $f(\bm{x}; \bm{W})$ and $f_p$ to denote the pruned model $f(\bm{x}; \bm{M}\odot \bm{W}')$.
\end{enumerate}
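A minimal sketch of the coarse-pruning step for the unstructured (magnitude) case, assuming a hypothetical weight matrix; the helper below is illustrative, not our actual implementation.

```python
import numpy as np

def magnitude_prune(W, sparsity):
    """Return the binary mask M that zeroes the `sparsity` fraction of
    smallest-magnitude entries of W, and the pruned weights M * W
    (element-wise product), mirroring f(x; M o W)."""
    k = int(sparsity * W.size)
    # (k+1)-th smallest absolute value; entries at or above it survive
    thresh = np.sort(np.abs(W).ravel())[k] if k < W.size else np.inf
    M = (np.abs(W) >= thresh).astype(W.dtype)
    return M, M * W
```

Fine-tuning then retrains only the surviving entries, keeping the mask fixed.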
\subsection{Adversarial Knowledge}
The goal of a membership inference attack is to determine the membership of a given data sample, \textit{i}.\textit{e}.\@\xspace, whether the data sample was used to train the target pruned model.
We assume the adversary has the following knowledge:
\begin{itemize}
\item The adversary has access to the target pruned model but not to the original model, which is practical in most real-world cases: the original model is not exposed to the public; only the target pruned model in production is released for public use.
\vspace{-0.5em}
\item The adversary has access to the prediction vector provided by the target pruned model, \textit{i}.\textit{e}.\@\xspace, we consider black-box membership inference attacks, where the internal information of the target pruned model (\textit{e}.\textit{g}.\@\xspace, the activations in the model) and all information related to the original model are inaccessible to the adversary. Compared with white-box attacks (\textit{e}.\textit{g}.\@\xspace, \cite{nasr2019comprehensive}), the black-box setting poses a broader threat since the attack can be conducted by anyone who can query the target pruned model.
\vspace{-0.5em}
\item The adversary knows the approaches used for neural network pruning. The adversary can train a shadow model using the same pruning approach.
\end{itemize}
\subsection{Adaptive Adversary}
The arms race between attacks and defenses is one of the main challenges in machine learning privacy.
If a defense is designed without assuming that the adversary knows the defense mechanism, its performance is usually substantially degraded when evaluated against adaptive attacks.
Hence, it is critical to consider both non-adaptive and adaptive attacks in evaluating the defense mechanisms.
In this paper, we consider two types of attacks: 1) non-adaptive attacks, where the adversary does not know the defense mechanism and trains the shadow pruned model using neural network pruning alone; and 2) adaptive attacks, where the adversary knows the defense mechanism and trains the shadow pruned model using neural network pruning with the same defense.
\subsection{Neural Network Pruning}
The state-of-the-art neural networks are usually deep and resource-hungry, requiring large amounts of computation and memory, which is a particular challenge on resource-constrained end devices. As one of the most popular network compression approaches, neural network pruning has attracted great attention in recent years~\cite{mozer1989skeletonization,janowsky1989pruning,segee1991fault,reed1993pruning}.
Train-prune-fine-tune is the most common pipeline for achieving good performance in pruned models, and we evaluate the privacy risk of pruned models in this setting. Recently, alternative approaches, such as rewinding the pruned model's weights to an earlier state~\cite{frankle2018lottery} and fine-tuning the pruned model using the original training schedule~\cite{renda2020comparing}, have shown performance comparable to the original model. We will investigate the privacy risks of these pruning approaches in future work.
To prune the parameters of a neural network, Han \textit{et al}.\@\xspace proposed removing the parameters with the lowest absolute values~\cite{han2015deep}. To facilitate hardware optimization and accelerate neural network computation, many methods remove parameters in an organized way (structured pruning) by removing entire filters or channels~\cite{li2016pruning,liu2017learning}.
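As an illustration, magnitude-based unstructured pruning amounts to masking the smallest-magnitude weights. The following is a minimal NumPy sketch of this idea (an illustrative stand-in, not the implementation of~\cite{han2015deep}):

```python
import numpy as np

def l1_unstructured_prune(weights, sparsity):
    """Zero out the `sparsity` fraction of entries with the smallest
    absolute values; returns the pruned weights and the binary mask."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)                  # number of entries to remove
    if k == 0:
        return weights.copy(), np.ones_like(weights)
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    # entries tied with the threshold are also removed
    mask = (np.abs(weights) > threshold).astype(weights.dtype)
    return weights * mask, mask

W = np.array([[0.5, -0.1], [0.02, -2.0]])
pruned, mask = l1_unstructured_prune(W, sparsity=0.5)
# the two smallest-magnitude entries (-0.1 and 0.02) are zeroed
```

Structured pruning differs only in that whole rows, filters, or channels are scored (e.g., by their norms) and removed together rather than individual entries.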
Although most studies on network pruning focus on the overall accuracy of pruned models on test data, some efforts have been devoted to other effects caused by network pruning.
Paganini~\cite{paganini2020prune} investigated the unfairness and systematic biases in the pruned models. Hooker \textit{et al}.\@\xspace~\cite{hooker2019compressed} demonstrated the biased performance on different groups and classes after pruning.
In this work, we investigate another urgent aspect of network pruning: training data privacy. We study four widely used pruning approaches, including structured and unstructured ones, and evaluate the privacy risk of models after pruning.
\subsection{Membership Inference Attacks}
Given a target machine learning model $f: \mathbb{R}^n \rightarrow \mathbb{R}^k$ that maps an input sample to a prediction vector over $k$ classes, the goal of membership inference attacks is to infer whether a given data sample was used in the model training or not.
Formally, a membership inference attack can be defined as:
\begin{equation}
\mathcal{A}: \bm{x}, f \rightarrow \{0, 1\},
\end{equation}
where $\mathcal{A}$ denotes the attack model, a binary classifier. If the data sample $\bm{x}$ was used to train the target model $f$, $\mathcal{A}$ outputs $1$ (\textit{i}.\textit{e}.\@\xspace, member), and $0$ otherwise (\textit{i}.\textit{e}.\@\xspace, non-member).
In this paper, we consider the black-box setting of membership inference attacks, where the adversary only has access to the target model's prediction given a data sample.
For example, Shokri~\textit{et al}.\@\xspace~\cite{shokri2015privacy} constructed several shadow models to mimic the target model's different behaviors on members and non-members and then trained a binary classifier on the shadow models' predictions as an attack model to determine the membership of data samples against the target model. Salem~\textit{et al}.\@\xspace~\cite{salem2018ml} found that membership inference attacks succeed even with a single shadow model. Nasr~\textit{et al}.\@\xspace~\cite{nasr2018machine} added the class labels of data samples to the attack model's input to improve attack accuracy.
In addition to neural network-based attack models, Leino~\textit{et al}.\@\xspace~\cite{leino2020stolen}, Yeom~\textit{et al}.\@\xspace~\cite{yeom2018privacy}, and Song~\textit{et al}.\@\xspace~\cite{song2019privacy,song2020systematic} found that certain metrics derived from model predictions can signal membership. Metric-based attacks determine that a given data sample is a member if its metric value (\textit{e}.\textit{g}.\@\xspace, confidence, entropy, modified entropy) crosses a certain threshold.
Song~\textit{et al}.\@\xspace recently showed that, by setting class-dependent thresholds, metric-based attacks can achieve comparable or even better performance than neural network-based attacks~\cite{song2020systematic}.
Hence, in this paper, we use two metric-based attacks proposed in~\cite{song2020systematic} as our baseline attacks.
Besides, most neural network-based attack models are built upon fully-connected neural networks, which may not thoroughly explore the connections between predictions and labels. In this paper, we propose a self-attention-based membership inference attack to leverage the dependencies among input features such as model predictions and sample labels. Our evaluation demonstrates the superior performance of the proposed self-attention-based attack.
\subsection{Defenses against Membership Inference Attacks}
Several defenses against membership inference attacks have been proposed to mitigate privacy risks in machine learning models.
Differential privacy (DP)~\cite{dwork2008differential,dwork2006calibrating,abadi2016deep} is a widely used privacy-preserving technique to provide provable privacy risk mitigation by adding noise to the gradient or parameters during training.
However, DP requires noise of large magnitude to achieve a meaningful theoretical privacy guarantee, which significantly degrades the performance of the protected models~\cite{jayaraman2019evaluating}.
Regularization~\cite{shokri2017membership} as well as dropout and model stacking~\cite{salem2019ml} have been used in target model training to reduce privacy risks by avoiding overfitting. Although these approaches reduce vulnerability by narrowing the generalization gap between member and non-member data, in many cases the privacy risks after applying them remain high.
Nasr~\textit{et al}.\@\xspace~\cite{nasr2018machine} trained the target model with additional adversarial regularization to defend against membership inference attacks.
Jia~\textit{et al}.\@\xspace~\cite{jia2019memguard} conducted adversarial attacks~\cite{yuan2019adversarial} against the adversary's attack model by intentionally modifying the target model's predictions to mislead the attack model.
According to recent analyses of defenses against membership inference attacks, most of these defenses are either of limited effectiveness against adaptive attacks~\cite{niu2020membership,song2020systematic} or no better than early stopping~\cite{song2020systematic}.
Therefore, in this paper, we use the early stopping mechanism as our baseline defense. We evaluate the performance of our proposed defense with both non-adaptive and adaptive attacks.
In this section, we first provide insights into why neural network pruning makes a model more vulnerable to membership inference attacks and then describe our general attack design against neural network pruning. We introduce two types of membership inference attacks to evaluate the privacy risks of pruned models.
\newsavebox{\measurebox}
\begin{figure*}[!t]
\centering
\sbox{\measurebox}{%
\begin{minipage}[b]{.4\textwidth}
\begin{subfigure}[b]{\textwidth}
\includegraphics[width=0.9\textwidth]{figs/cifar100_mobilenetv2_avg_div.png}
\caption{CIFAR100 dataset}
\end{subfigure}
\end{minipage}}
\usebox{\measurebox}\qquad
\begin{minipage}[b][\ht\measurebox][s]{.4\textwidth}
\centering
\begin{subfigure}[b]{\textwidth}
\includegraphics[width=0.9\textwidth]{figs/cifar10_mobilenetv2_avg_div.png}
\caption{CIFAR10 dataset}
\end{subfigure}
\begin{subfigure}[b]{\textwidth}
\vspace{0.2cm}
\hspace{0.7cm}
\includegraphics[width=0.8\textwidth]{figs/svhn_mobilenetv2_avg_div.png}
\caption{SVHN dataset}
\end{subfigure}
\end{minipage}
\caption{\textbf{Divergence of the Pruned Model's Prediction Confidences over Different Classes.} We prune three MobileNetV2 models with 70\% sparsity on the CIFAR100, CIFAR10, and SVHN datasets, respectively. The red bar indicates the average prediction confidence of members in each class. The green bar indicates the average prediction confidence of non-members. We observe that the divergence of predictions between members and non-members (the difference of average prediction confidence between members and non-members, shown as the blue curve) differs among classes. For example, for the CIFAR100 dataset, the divergence in the class ``fox'' is around three times higher than that in the class ``cloud''. (We only show the first 40 classes of the CIFAR100 dataset due to the space limit.)}
\label{fig:cls_div}
\end{figure*}
\subsection{Divergence of Prediction Confidences}
Although neural network pruning helps remove the redundant parameters, recent research on neural network pruning usually focuses on improving the trade-off between the pruned models' sparsity and their overall prediction performance (\textit{i}.\textit{e}.\@\xspace, test accuracy).
However, we observe that although the overall prediction performance is not degraded, neural network pruning may disproportionately affect the pruned model's prediction performance on the training data (members) and test data (non-members). Such disproportionate effects can increase the divergence between predictions over members and non-members (Figure~\ref{fig:histogram}).
Given that most existing membership inference attacks predict the membership status based on the difference of predictions between members and non-members, such increased divergence in neural network pruning makes the pruned models more vulnerable to membership inference attacks.
More importantly, we observe that the disproportionate effects of neural network pruning vary widely among the different subgroups of training and test data.
Figure~\ref{fig:cls_div} shows that the divergence of the pruned models' predictions over members and non-members is significantly different among classes.
Moreover, the divergence may differ greatly among the subgroups within each class, since neural network pruning can affect the predictions over different subgroups to different degrees.
Similar observations of divergence across subgroups after neural network pruning have been made in other fields such as model fairness and transparency~\cite{paganini2020prune,hooker2019compressed,hooker2020characterising}.
We hypothesize that such divergence among subgroups may provide fine-grained ``evidence'' for membership inference attacks and lead to more serious privacy leakage.
Therefore, in the following sections, we investigate the pruned models' privacy risks by analyzing the divergence of prediction confidences over members and non-members and among different classes.
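The per-class divergence plotted in Figure~\ref{fig:cls_div} boils down to a simple statistic: the average true-label confidence of members minus that of non-members, per class. A minimal NumPy sketch (function and variable names are illustrative, not from the paper's code):

```python
import numpy as np

def per_class_divergence(member_conf, member_y, nonmember_conf, nonmember_y, num_classes):
    """Average true-label confidence of members minus that of non-members,
    computed separately for each class (the blue curve in the figure)."""
    div = np.zeros(num_classes)
    for c in range(num_classes):
        div[c] = (member_conf[member_y == c].mean()
                  - nonmember_conf[nonmember_y == c].mean())
    return div

# toy example: class 0 shows a much larger member/non-member gap than class 1
member_conf = np.array([0.90, 0.80, 0.95, 0.70])
member_y = np.array([0, 0, 1, 1])
nonmember_conf = np.array([0.60, 0.50, 0.90, 0.60])
nonmember_y = np.array([0, 0, 1, 1])
div = per_class_divergence(member_conf, member_y, nonmember_conf, nonmember_y, 2)
```

A large value of `div[c]` means class $c$ leaks more membership signal; classes (and subgroups) with large divergence are exactly where threshold-based attacks gain accuracy.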
\begin{figure}[!t]
\centering
\includegraphics[width=0.95\linewidth]{figs/attack_pipeline.png}
\caption{{The framework of membership inference attacks (MIA) against neural network pruning.}}
\label{fig:attack_pipeline}
\end{figure}
\subsection{Proposed Attack Design against Neural Network Pruning}
To exploit the performance divergence caused by neural network pruning,
we adopt the shadow model training strategy, which is widely used in membership inference attacks, and train both a shadow model and a shadow pruned model.
The shadow pruned model provides evidence of how neural network pruning changes the predictions for each subgroup.
In our proposed membership inference attacks, the adversary first trains a shadow model from shadow member data (\textit{i}.\textit{e}.\@\xspace, training data used in the shadow model), and then prunes and fine-tunes the shadow model using the same neural network pruning approach.
Based on the predictions of the shadow pruned model, the adversary trains an attack model to learn how to distinguish shadow member data and shadow non-member data.
To extract sensitive membership information from the target pruned model, the adversary first obtains the predictions for a given input sample by querying the target pruned model. Then the adversary feeds the predictions into the trained attack model, which outputs the binary classification of the membership status.
Figure~\ref{fig:attack_pipeline} illustrates the general design of membership inference attacks against neural network pruning.
Next, we introduce two types of membership inference attacks to explore the privacy risks in neural network pruning.
\subsection{Class Dependent MIA}
Inspired by recent threshold-based membership inference attacks~\cite{leino2020stolen,yeom2018privacy,song2020systematic}, we introduce class-dependent threshold-based attacks to infer the membership of data samples. The adversary learns the thresholds from the shadow pruned model according to different class-dependent metrics (\textit{e}.\textit{g}.\@\xspace, prediction confidence, correctness, and entropy). The adversary predicts that an input sample is a member if its metric value is on the member side of the class threshold, and a non-member otherwise.
Specifically, we select two metrics for class-dependent threshold-based attacks: prediction confidence and modified entropy. Yeom \textit{et al}.\@\xspace used prediction confidence to identify the membership status of an input sample~\cite{yeom2018privacy}. Song \textit{et al}.\@\xspace recently proposed modified entropy, which achieves better performance than prediction confidence~\cite{song2020systematic}.
Given an input sample $\bm{x}$ with class $y$ and a target pruned model $f_p$, the class-dependent threshold-based attack using prediction confidence (the ``conf'' attack for short) is defined as follows:
\begin{equation}
I_{\mathrm{conf}}(f_p, (\bm{x},y)) = \mathbbm{1}\{f_p^{(y)}(\bm{x})\geq\zeta_y\},
\end{equation}
where $f_p^{(y)}(\bm{x})$ denotes the prediction confidence of class $y$ and $\zeta_y$ is the threshold for class $y$ derived from the shadow pruned model. $\mathbbm{1}\{\cdot\}$ denotes the indicator function, which outputs $1$ (\textit{i}.\textit{e}.\@\xspace, member) if the prediction confidence is larger than the threshold for class $y$, and $0$ (\textit{i}.\textit{e}.\@\xspace, non-member) otherwise. Similarly, we define the class-dependent threshold-based attack using another metric, modified entropy (the ``mentr'' attack for short), as follows:
\begin{equation}
I_{\mathrm{mentr}}(f_p, (\bm{x},y)) = \mathbbm{1}\{\mathrm{mentr}(f_p(\bm{x}), y)\leq\zeta_y\},
\end{equation}
where modified entropy is defined as:
\begin{align}
\mathrm{mentr}(f_p(\bm{x}), y) =& - (1 - f_p^{(y)}(\bm{x}))\log (f_p^{(y)}(\bm{x})) \nonumber\\
&- \sum_{t\neq y}f_p^{(t)}(\bm{x})\log(1-f_p^{(t)}(\bm{x})).
\end{align}
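Both decision rules translate directly into code. The sketch below is a NumPy transcription under our reading that low modified entropy indicates membership; the per-class thresholds $\zeta_y$ are assumed to have been tuned beforehand on the shadow pruned model's member and non-member outputs:

```python
import numpy as np

def modified_entropy(pred, y):
    """Modified entropy of a softmax vector `pred` with true class `y`;
    low values indicate member-like (confident, correct) predictions."""
    eps = 1e-12                       # guard against log(0)
    m = -(1.0 - pred[y]) * np.log(pred[y] + eps)
    for t in range(len(pred)):
        if t != y:
            m -= pred[t] * np.log(1.0 - pred[t] + eps)
    return m

def conf_attack(pred, y, zeta):
    # member (1) iff the true-class confidence reaches the class threshold
    return int(pred[y] >= zeta[y])

def mentr_attack(pred, y, zeta):
    # member (1) iff the modified entropy is at most the class threshold
    return int(modified_entropy(pred, y) <= zeta[y])

# a confident prediction (member-like) vs. an uncertain one (non-member-like)
member_like = np.array([0.90, 0.05, 0.05])
nonmember_like = np.array([0.40, 0.30, 0.30])
```

The threshold vectors `zeta` here are hypothetical inputs; in the attack they would be chosen per class to best separate the shadow member and non-member data.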
By extracting class-dependent thresholds for these two metrics, the adversary can identify the fine-grained divergence between members and non-members.
The adversary infers that an input sample is a member if its prediction confidence is above (or its modified entropy below) the class-dependent threshold, and a non-member otherwise.
The implementation of our class-dependent threshold-based attacks is adapted from Song \textit{et al}.\@\xspace's work\footnote{\url{https://github.com/inspire-group/membership-inference-evaluation}}.
Our evaluation shows that class-dependent threshold-based attacks expose more serious privacy risks in pruned models than in the original models.
\subsection{Self-Attention MIA (SAMIA)}
\begin{figure}[!t]
\centering
\includegraphics[width=\linewidth]{figs/self-attention.png}
\caption{Network Architecture for Self-Attention-based Membership Inference Attack.}
\label{fig:self_attention}
\end{figure}
Class-dependent threshold-based membership inference attacks may still underestimate the privacy risks caused by neural network pruning, since the divergence of predictions disproportionately affects not only different classes but also specific subgroups of data samples.
A single threshold for a class may not represent the different degrees of the divergence of predictions among subgroups.
To conduct a finer-grained analysis of the divergence among subgroups, we propose a self-attention-based membership inference attack (SAMIA). Self-attention is an essential component for capturing global dependencies and is widely used in state-of-the-art natural language processing models such as GPT-3~\cite{brown2020language} and BERT~\cite{devlin2019bert,vaswani2017attention}.
Despite these advances, the self-attention mechanism has not yet been applied to research on privacy attacks.
In SAMIA, we leverage the self-attention mechanism to automatically extract finer-grained ``thresholds'' for different classes and subgroups by capturing the dependency between predictions and class labels using neural networks.
SAMIA takes the pruned model's predictions and ground-truth labels as inputs.
The self-attention mechanism automatically constructs subgroups based on model predictions and labels and, moreover, determines which subgroups the attack ``threshold'' should pay more ``attention'' to.
The output of the self-attention mechanism is the weighted sum of the inputs over the attention scores.
By applying the self-attention mechanism, SAMIA derives finer-grained ``thresholds'' over subgroups than the class-dependent thresholds.
Our evaluation demonstrates that SAMIA achieves higher attack accuracy than the two threshold-based attacks.
Figure~\ref{fig:self_attention} illustrates the network architecture of the attack model used in SAMIA. We first convert the ground-truth label into a one-hot label and then feed both the pruned model's prediction and one-hot label into the attack model as the input features.
The input features are first encoded into a vector by a fully connected (FC) layer to form subgroups and then transformed into query $Q$, key $K$, and value $V$ vectors by linear functions.
The attention module calculates the attention scores of the subgroups via dot-product attention:
\begin{equation}
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}(QK^T)V,
\end{equation}
where $\mathrm{softmax}(\cdot)$ denotes the softmax function, which makes the attention scores sum to $1$.
The output of the attention module is the weighted sum of the value vector, where the weight assigned to each value is derived by the attention scores $\mathrm{softmax}(QK^T)$.
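The attention computation itself is compact; below is a minimal NumPy sketch of $\mathrm{softmax}(QK^T)V$ as written above (the FC encoder, dropout, and residual connection around the module are omitted, as is the $1/\sqrt{d}$ scaling often used in Transformers):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))  # numerically stable
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q, K, V):
    """softmax(Q K^T) V: each output row is a weighted sum of the rows
    of V, with weights (attention scores) that sum to 1."""
    scores = softmax(Q @ K.T)
    return scores @ V, scores

# 4 "subgroup" tokens with 8-dimensional Q/K/V vectors (illustrative sizes)
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
out, scores = attention(Q, K, V)
```

Each row of `scores` is the attention distribution for one subgroup, so inspecting it shows where the attack ``pays attention.''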
After the attention module, we apply dropout to its output to avoid overfitting and add a shortcut (residual) connection between the input and output of the attention module to mitigate vanishing gradients.
In the attack model of SAMIA, we adopt two attention modules, followed by two fully connected layers.
A non-linear activation function, LeakyReLU~\cite{maas2013rectifier}, is applied to the first and fourth FC layers in Figure~\ref{fig:self_attention}.
We use softmax to provide the binary prediction on the membership.
Much of the progress in artificial intelligence over the past decade has been driven by deep neural networks. These powerful networks contain large numbers of parameters and consume considerable storage and memory bandwidth, which makes it challenging to deploy state-of-the-art neural networks on resource-constrained devices. To address this issue, neural network pruning, as one of the most popular compression technologies, has attracted great attention~\cite{mozer1989skeletonization,han2015deep}. Recent research has shown that, by removing weak connections from a deep neural network, pruning can substantially reduce the size of a neural network without largely compromising prediction accuracy~\cite{han2015deep,li2016pruning,liu2017learning,blalock2020state}.
In general, neural network pruning consists of three main stages: 1) train an original (often over-parameterized) deep neural network; 2) remove the insignificant parameters; 3) fine-tune the remaining parameters on the training dataset to recover the expected performance~\cite{blalock2020state}. Most existing research on neural network pruning has focused on improving the trade-off between accuracy and sparsity by strategically designing the last two stages~\cite{han2015deep,li2016pruning,liu2017learning,blalock2020state}. However, this reuse of the training dataset poses serious privacy risks for neural network pruning due to the potentially increased memorization of training samples.
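The three-stage pipeline can be sketched end to end on a toy linear model; this is a didactic stand-in (all names illustrative), not a deep network, but it exercises the same train-prune-fine-tune loop, with the pruning mask kept fixed during fine-tuning:

```python
import numpy as np

def gd_step(w, X, y, lr, mask=None):
    # one full-batch gradient step on mean-squared error;
    # `mask` keeps pruned weights pinned at zero during fine-tuning
    grad = 2.0 * X.T @ (X @ w - y) / len(y)
    w = w - lr * grad
    return w * mask if mask is not None else w

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 10))
true_w = np.zeros(10)
true_w[:3] = [1.0, -2.0, 0.5]            # sparse ground truth
y = X @ true_w

# stage 1: train the original (dense) model
w = np.zeros(10)
for _ in range(400):
    w = gd_step(w, X, y, lr=0.1)

# stage 2: remove the 70% of weights with the smallest magnitudes
k = int(0.7 * w.size)
threshold = np.sort(np.abs(w))[k - 1]
mask = (np.abs(w) > threshold).astype(float)
w = w * mask

# stage 3: fine-tune the surviving weights on the same training data
for _ in range(400):
    w = gd_step(w, X, y, lr=0.1, mask=mask)
```

Stage 3 is the step this paper's threat model centers on: the training data is touched a second time, which is where additional memorization can arise.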
\begin{figure}[!t]
\centering
\begin{subfigure}[b]{0.8\linewidth}
\includegraphics[width=\linewidth]{figs/exp/cifar10_mobilenetv2_org.png}
\caption{Original Model}
\end{subfigure}
\begin{subfigure}[b]{0.8\linewidth}
\includegraphics[width=\linewidth]{figs/exp/cifar10_mobilenetv2_prune.png}
\caption{Pruned Model}
\end{subfigure}
\caption{{Histogram of prediction confidences of the ground-truth label on the CIFAR10 MobileNetv2 model.} We remove 70\% of the parameters in the original model using $\ell_1$ unstructured pruning. The figure shows the frequency of prediction confidences for the ground-truth labels on the training and test data. The vertical lines indicate the average prediction confidences of the training data (members) and test data (non-members). Neural network pruning makes the distance between the two vertical lines larger than in the original model, indicating the increased divergence between the prediction confidences of members and non-members after pruning.}
\label{fig:histogram}
\end{figure}
Recent research has shown the privacy risks of neural networks, which are prone to memorizing sensitive information from the training data. Taking membership inference attacks as an example, an adversary can infer with high accuracy whether a given data sample was used to train a neural network, which poses severe privacy threats to individual information.
For instance, if an adversary can infer that an individual's information is used to train a machine learning model for classifying the type of lung cancers, then the adversary knows that the individual has lung cancer.
Shokri \textit{et al}.\@\xspace~\cite{shokri2017membership} first conducted membership inference attacks against black-box machine learning models, where the adversary only has access to the data sample and the predictions of the target model.
Recently, membership inference attacks have been extended to a wide range of scenarios such as against white-box machine learning models~\cite{nasr2019comprehensive,leino2020stolen}, against generative models~\cite{hayes2019logan,chen2020gan} and graph models~\cite{olatunji2101membership}, against machine translation~\cite{hisamoto2020membership}, text generation systems~\cite{song2019auditing}, against genomic data~\cite{chen2020differential} and clinic data~\cite{jordon2020hide}, and against transfer learning models~\cite{zou2020privacy}.
Although extensive studies have investigated the privacy risks of neural networks, privacy leakage in pruned neural networks remains unexplored.
In view of this, this paper focuses on one fundamental question: {\textit{compared with their unpruned counterparts, are pruned neural networks more vulnerable to membership inference attacks?}}
Compared with conventional neural network training, neural network pruning may raise privacy risks for two main reasons:
First, most pruning approaches re-train the pruned model on the training data to fine-tune the remaining parameters after the redundant ones are removed.
This additional training increases the risk that the network memorizes sensitive training samples.
Second, neural network pruning may disproportionately affect the predictions on training data versus test/unseen data.
Although the model's overall prediction accuracy is not degraded due to pruning, the distribution of prediction confidences may be changed significantly.
Figure~\ref{fig:histogram} compares the divergence between prediction confidences on training data (members) and test data (non-members) before and after pruning.
Most existing membership inference attacks rely on the difference in predictions between members and non-members to infer membership status.
Given the increased divergence in neural network pruning, we hypothesize that the pruned models become more vulnerable to membership inference attacks.
Moreover, we observe that such divergence varies across classes and subgroups, which may provide additional ``evidence'' for membership inference attacks and thus pose more serious threats.
Our main contributions are summarized below:
\vspace{-0.7em}
\begin{itemize}[leftmargin=*]
\item We conduct a pioneering study to explore the privacy risks of neural network pruning by performing membership inference attacks against the pruned models.
To the best of our knowledge, this is the first work to analyze the privacy risks of neural network pruning. With the growing use of pruning on end devices, we believe these risks deserve more attention from the research community.
\vspace{-0.7em}
\item We propose two membership inference attacks tailored to neural network pruning, namely the class-dependent membership inference attack (CDMIA) and the self-attention-based membership inference attack (SAMIA). By exploiting the divergence of prediction confidences over members and non-members in the pruned models, both attacks achieve high accuracy in identifying membership status. In particular, SAMIA performs a finer-grained analysis of the prediction confidences and excels at exposing the privacy risks of pruned models; it can serve as a strong baseline attack for future research.
\vspace{-0.7em}
\item To rigorously evaluate the privacy risks of neural network pruning, we conduct extensive membership inference experiments on seven commonly used datasets, four pruning approaches, five sparsity levels, and 154 pruned models in total. The experimental results demonstrate that pruned models are indeed more vulnerable to membership inference attacks.
The adversary can successfully reveal membership status even without knowing the pruning approach used for the target model. Moreover, we evaluate the privacy impacts of different pruning approaches and sparsity levels.
\vspace{-0.7em}
\item Existing defenses proposed for unpruned models may be ineffective at mitigating the privacy risks introduced by pruning. We propose a new defense, pair-based posterior balancing (PPB), which narrows the divergence between prediction confidences during model fine-tuning by penalizing their KL divergence.
Our evaluation demonstrates that the proposed defense successfully mitigates the privacy risks introduced by pruning, relative to the original model, while maintaining the pruned models' sparsity and prediction accuracy.
\end{itemize}
\subsection{Design Principles of Defenses}
\xyy{
Two major design principles guide the defenses for pruned neural networks. On the one hand, an effective defense should reduce the behavioral discrepancy introduced by pruning. The attack evaluation above demonstrated that the privacy risks of pruned models stem from the increased divergence of prediction confidences and sensitivities between members and non-members; it is therefore essential to reduce this divergence. On the other hand, the defenses need to respect the resource constraints of low-end devices. Since neural network pruning aims to reduce the computational cost of inference, a defense must not increase that cost.
Therefore, the defenses should mitigate the privacy risks of pruned models before they are deployed on devices, introducing no additional overhead in the inference phase.
}
\begin{figure}[!tb]
\centering
\begin{subfigure}[b]{0.49\linewidth}
\centering
\includegraphics[width=\linewidth]{figs/mia_cifar10_densenet121_level_small_conf_pair_2.pdf}
\caption{Confidence gap}
\end{subfigure}
\begin{subfigure}[b]{0.49\linewidth}
\centering
\includegraphics[width=\linewidth]{figs/mia_cifar10_densenet121_level_small_sens_pair_2.pdf}
\caption{Sensitivity gap}
\end{subfigure}
\caption{\xy{Divergence of the pruned model's prediction confidences and sensitivities using PPB defense (CIFAR10, DenseNet121).}}
\label{fig:hist_defense}
\end{figure}
\begin{figure}[!tb]
\centering
\begin{subfigure}[b]{0.49\linewidth}
\centering
\includegraphics[width=\linewidth]{figs/conf_gap_cifar10_densenet121_level_pair2.pdf}
\caption{Confidence gap}
\end{subfigure}
\begin{subfigure}[b]{0.48\linewidth}
\centering
\includegraphics[width=\linewidth]{figs/sens_gap_cifar10_densenet121_level_pair2.pdf}
\caption{Sensitivity gap}
\end{subfigure}
\caption{Divergence of the pruned model's prediction confidences and sensitivities over different classes with PPB defense (CIFAR10, DenseNet121). }
\label{fig:div_cls_defense}
\end{figure}
\subsection{Proposed Defense: PPB}
\xyy{Following these two design principles, we propose a countermeasure named pair-based posterior balancing (PPB). }
\xy{The main idea of PPB is to suppress the pruning-induced changes in prediction confidence and sensitivity by aligning the posterior predictions of different input samples. In this way, PPB reduces both the divergence of prediction confidence between members and non-members and the degree of sensitivity. Specifically, for any pair of input samples, we make the distributions of their ranked posterior predictions as close as possible.}
The difference between ranked posterior distributions is measured by the Kullback--Leibler (KL) divergence~\cite{kullback1951information}. Given two posterior predictions $P$ and $Q$, the KL divergence is defined as:
\begin{equation}
\mathcal{L}_{\mathrm{KL}}(P, Q) = \sum_x P(x)\log\frac{P(x)}{Q(x)}.
\end{equation}
The KL divergence serves as a regularization term during the fine-tuning stage of neural network pruning.
The loss function combines the prediction loss and the KL-divergence loss:
\begin{align}
\label{eq:defense_obj}
\mathcal{L}(f_p(\bm{x}), \bm{y}) =& \sum_i \mathcal{L}_{\mathrm{predict}}(f_p(\bm{x}_i), y_i) \nonumber\\
&+ \lambda \sum_{j, k (j\neq k)}\mathcal{L}_{\mathrm{KL}}(R(f_p(\bm{x}_j)), R(f_p(\bm{x}_k))),
\end{align}
where $\mathcal{L}_{\mathrm{KL}}$ and $\mathcal{L}_{\mathrm{predict}}$ denote the KL-divergence loss and the prediction loss (\textit{e}.\textit{g}.\@\xspace, cross-entropy loss for classification tasks), respectively, $R(\cdot)$ sorts the posteriors produced by the pruned model $f_p$ in decreasing order, and $\lambda$ is a hyper-parameter balancing the two losses. Computing the KL loss over all possible pairs of training samples would be computationally costly. To address this, during fine-tuning we sample pairs within each mini-batch by randomly pairing data samples without replacement; hence, in a mini-batch of size $B$, the KL loss is computed over $B/2$ pairs of training samples.
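To make the pairing concrete, the following NumPy sketch computes the PPB regularization term for one mini-batch; pairing consecutive rows stands in for the random pairing within a mini-batch, and the function names are our own illustrative choices:

```python
import numpy as np

def kl_div(p: np.ndarray, q: np.ndarray) -> float:
    """L_KL(P, Q) = sum_x P(x) log(P(x) / Q(x))."""
    return float(np.sum(p * np.log(p / q)))

def ppb_regularizer(posteriors: np.ndarray, lam: float = 1.0) -> float:
    """Pair-based posterior balancing term for one mini-batch of B
    posterior vectors: pair the samples without replacement (here:
    consecutive rows) and penalize the KL divergence between their
    *ranked* posteriors. `lam` is the balancing hyper-parameter."""
    ranked = np.sort(posteriors, axis=1)[:, ::-1]   # R(.): sort descending
    loss = 0.0
    for j in range(0, len(ranked) - 1, 2):          # B/2 pairs
        loss += kl_div(ranked[j], ranked[j + 1])
    return lam * loss

batch = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.6, 0.3]])
reg = ppb_regularizer(batch, lam=1.0)  # small when ranked shapes are close
```

Note that the term depends only on the sorted confidence values, not on which class they belong to, which is why minimizing it does not change the predicted class ordering.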
\begin{figure*}[!tb]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar10_resnet18_level_0.6.pdf}
\caption{L1 Unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar10_resnet18_l1filter_0.6.pdf}
\caption{L1 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar10_resnet18_l2filter_0.6.pdf}
\caption{L2 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar100_resnet18_slim_0.6.pdf}
\caption{Slimming}
\end{subfigure}
\caption{Performance of defenses for different pruning approaches (CIFAR10, ResNet18, Sparsity 0.6).}
\label{fig:defense_cifar10_resnet}
\end{figure*}
\begin{figure*}[!tb]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar10_densenet121_level_0.6.pdf}
\caption{CIFAR10, DenseNet121}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar10_vgg16_level_0.6.pdf}
\caption{CIFAR10, VGG16}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar100_densenet121_level_0.6.pdf}
\caption{CIFAR100, DenseNet121}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_location_column_level_0.6.pdf}
\caption{Location, FC}
\end{subfigure}
\caption{Performance of defenses for different datasets (L1 Unstructured, Sparsity 0.6).}
\label{fig:defense_l1unstructured_0.6}
\end{figure*}
\xyy{In addition, the PPB defense is applied only during the fine-tuning stage of neural network pruning, using the KL divergence as a regularization term. It therefore incurs no additional computational cost in the inference phase. }
\xyy{After applying the PPB defense, the divergence between member and non-member data is significantly reduced (compare Figure~\ref{fig:hist_defense} with Figure~\ref{fig:histogram}).
The decrease is also visible per class (compare Figure~\ref{fig:cls_div} with Figure~\ref{fig:div_cls_defense}).
Both changes in the distributions of the pruned model's posterior predictions indicate that PPB prevents the attack model from learning binary classification thresholds based on prediction confidence and sensitivity.}
Moreover, PPB is designed to change the distribution of the predictions rather than their order; it does not alter the predicted classes of the pruned models during fine-tuning, and thus largely preserves their prediction accuracy.
As Figure~\ref{fig:div_cls_defense} shows, after applying PPB the per-class divergence of the pruned model is close to that of the original model, which indicates the effectiveness of the defense.
\subsection{Defense Evaluation}
This section evaluates the effectiveness of PPB by comparing its performance with that of state-of-the-art defenses\footnote{\xyy{The defenses are evaluated via empirical experiments; we will investigate strict privacy guarantees in future work.
}}.
\subsubsection{State-of-the-art Defenses}
We investigate three state-of-the-art defenses against MIAs in the context of neural network pruning.
\vspace{0.2em}
\noindent
\textbf{Early Stopping and L2 Regularization (Basic).}
Early stopping and L2 regularization have been successfully used to defend against membership inference attacks with competitive performance~\cite{shokri2017membership,salem2019ml,song2020systematic}. As discussed in Section~\ref{4.1}, an adversary infers the membership of a sample from the divergence of prediction confidences between members and non-members. This divergence grows with the number of training epochs due to increased memorization. Hence, early stopping (fewer training epochs) and L2 regularization (penalizing over-training) can trade a slight reduction in model accuracy for a lower privacy risk.
In the evaluation, early stopping halts training and fine-tuning once the validation loss has not decreased for five epochs, and the L2 regularization factor is set to $0.0005$. \textit{Note that we apply early stopping and L2 regularization in all the other defenses as well to improve their performance.}
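The early stopping rule can be expressed as a small patience counter; this is a generic sketch (class and method names are ours), not the authors' training code:

```python
class EarlyStopping:
    """Stop training/fine-tuning once the validation loss has not
    improved for `patience` consecutive epochs (patience = 5 here)."""

    def __init__(self, patience: int = 5):
        self.patience = patience
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss: float) -> bool:
        """Record one epoch's validation loss; return True to stop."""
        if val_loss < self.best:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience
```

In a training loop, `if stopper.step(val_loss): break` after each epoch implements the mechanism described above.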
\vspace{0.2em}
\xy{\noindent
\textbf{Differential Privacy (DP).}
\xyy{Differential privacy bounds the exposure of individual information when running an algorithm and has been widely investigated for preventing privacy leakage under membership inference attacks~\cite{truex2019effects,rahimian2019differential,chen2020differential}.
We implement differentially private SGD (DPSGD)~\cite{abadi2016deep,rahimian2019differential}, one of the most widely used defense techniques, to train neural networks with DP guarantees.
Following DPSGD, we first clip each gradient, add noise sampled from a Gaussian distribution $\mathcal{N}(0,\sigma)$, and use the resulting noisy gradient to update the model's parameters. To achieve $(\epsilon, \delta)$-DP, the standard deviation $\sigma$ of the Gaussian distribution should be of order $\Omega(q\sqrt{T\log (1/\delta)}/\epsilon)$, where $q$ denotes the sampling ratio and $T$ the total number of iterations.
Accordingly, the privacy guarantee of the DP defense can be derived from $\sigma$, which plays the central role in balancing utility and privacy.
In the defense evaluation, we therefore evaluate the effectiveness of the DP defense and explore the impact of different privacy budgets (\textit{i}.\textit{e}.\@\xspace, different values of $\sigma$).
}
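A minimal NumPy sketch of one DPSGD update, under the simplifying assumptions that gradients are flat vectors and that the noise standard deviation scales with the clipping norm (function and parameter names are ours, not the paper's implementation):

```python
import numpy as np

def dpsgd_update(params, per_sample_grads, lr=0.001, clip_norm=1.0,
                 sigma=1.0, rng=None):
    """One DPSGD step: clip each per-sample gradient to L2 norm
    <= clip_norm, average, add Gaussian noise with std
    sigma * clip_norm / B to the averaged gradient, then apply a
    plain SGD update. `sigma` is the noise multiplier governing the
    (epsilon, delta) guarantee."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        clipped.append(g / max(1.0, norm / clip_norm))  # scale down if too big
    mean_grad = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, sigma * clip_norm / len(per_sample_grads),
                       size=mean_grad.shape)
    return params - lr * (mean_grad + noise)
```

With `sigma = 0` this degenerates to ordinary clipped SGD, which is why larger noise multipliers trade prediction accuracy for a tighter privacy budget.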
\vspace{0.2em}\noindent
\textbf{Adversarial Regularization (ADV).}
Nasr~\textit{et al}.\@\xspace proposed to incorporate the membership inference adversary into the training process~\cite{nasr2018machine}. The defender first trains a surrogate attack model to distinguish members from non-members, and then trains the target model to minimize the prediction loss while maximizing the classification loss of the surrogate attack model. A parameter $\alpha$ balances prediction performance against privacy risk.
We apply ADV during the fine-tuning stage of pruning to protect the privacy of pruned models.
}
\begin{figure*}[!tb]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar10_resnet18_level_0.6_adp.pdf}
\caption{L1 Unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar10_resnet18_l1filter_0.6_adp.pdf}
\caption{L1 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar10_resnet18_l2filter_0.6_adp.pdf}
\caption{L2 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar10_resnet18_slim_0.6_adp.pdf}
\caption{Slimming}
\end{subfigure}
\caption{Performance of defenses against adaptive attacks for different pruning approaches (CIFAR10, ResNet18, Sparsity 0.6).}
\label{fig:defense_cifar10_resnet0.6_adp}
\end{figure*}
\begin{figure*}[!tb]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar10_densenet121_level_0.6_adp.pdf}
\caption{CIFAR10, DenseNet121}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar10_vgg16_level_0.6_adp.pdf}
\caption{CIFAR10, VGG16}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_cifar100_densenet121_level_0.6_adp.pdf}
\caption{CIFAR100, DenseNet121}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/defend/defend_location_column_level_0.6_adp.pdf}
\caption{Location, FC}
\end{subfigure}
\caption{Performance of defenses against adaptive attacks for different datasets (L1 Unstructured, Sparsity 0.6).}
\label{fig:defense_l1unstructured_0.6_adp}
\end{figure*}
\subsubsection{Experimental Results of Defenses}\label{sec:defense_eval}
\xy{
We use the same settings as the attack evaluations in Section~\ref{sec:eval} and run the following experiments with defenses applied during pruning.
Since implementing defenses always involves a trade-off between privacy and prediction accuracy, we explore different hyper-parameter settings for each mechanism: $\lambda \in \{1, 2, 4, 8, 16\}$ in PPB, $\sigma \in \{0.01, 0.1, 1, 10, 100\}$ for the DP noise, and $\alpha \in \{0.5, 1, 2, 4, 8, 16\}$ in ADV.
Figures~\ref{fig:defense_cifar10_resnet} and~\ref{fig:defense_l1unstructured_0.6} show the prediction accuracy and attack accuracy under the different defense mechanisms. For clarity, we omit a result if the model with a given hyper-parameter fails to reach 75\% of the basic defense's prediction accuracy (\textit{i}.\textit{e}.\@\xspace, poor prediction performance) or yields a higher attack accuracy (\textit{i}.\textit{e}.\@\xspace, an ineffective defense).
We observe that PPB effectively protects all pruning approaches from the attacks: it reduces the attack accuracy to around 50\% (random-guessing accuracy) without degrading the prediction accuracy much.
Hence, \textit{the PPB defense preserves privacy with minimal degradation of prediction accuracy}.
In addition, ADV is also effective for L1 unstructured and Slimming pruning, but fails to achieve a good balance between prediction performance and privacy for L1 structured and L2 structured pruning. Consistent with recent work~\cite{jayaraman2019evaluating}, we observe that DP can hardly balance this utility--privacy tradeoff.}
\subsubsection{Defenses against Adaptive Attacks}
To rigorously evaluate the defense performance, we consider adaptive attacks, in which the adversary knows all details of the defenses as well as the pruning information.
\xyy{In an adaptive attack, the adversary trains a shadow pruned model following the same defense mechanism (\textit{e}.\textit{g}.\@\xspace, Basic, DP, ADV, or the proposed PPB) and the same pruning process, and then performs the SAMIA attack based on this shadow pruned model.
As shown in Figures~\ref{fig:defense_cifar10_resnet0.6_adp} and~\ref{fig:defense_l1unstructured_0.6_adp}, PPB reduces the accuracy of adaptive attacks compared with attacks on undefended pruned models, and provides the best protection for L1 structured and L2 structured pruning. For L1 unstructured and Slimming pruning, PPB and ADV are the two best defenses.
\textit{PPB is designed for pruned models and reduces both the confidence gap and the sensitivity gap; it therefore provides good protection across all pruning approaches.}
In addition, ADV is designed to mitigate the confidence gap, which increases most under L1 unstructured and Slimming pruning (as discussed in Section~\ref{sec:impact_gap}); hence, ADV is also effective for models pruned with these two approaches.
}
\subsection{Evaluation Setup}
\xyy{
In the evaluation, we consider the most widely used datasets, neural network architectures, and optimization approaches following recent research of MIAs~\cite{shokri2017membership,hui2019practical,salem2019ml,song2020systematic}. }
\begin{figure*}[!t]
\centering
\begin{subfigure}[b]{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{figs/exp/accuracy/acc_cifar10_resnet18.pdf}
\caption{CIFAR10, ResNet18}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{figs/exp/accuracy/acc_cifar10_densenet121.pdf}
\caption{CIFAR10, DenseNet121}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\centering
\includegraphics[width=\linewidth]{figs/exp/accuracy/acc_cifar100_densenet121.pdf}
\caption{CIFAR100, DenseNet121}
\label{fig:prune_acc_cifar100_densenet}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=\linewidth]{figs/exp/accuracy/acc_location_column.pdf}
\caption{Location, FC}
\end{subfigure}
\caption{Prediction accuracy (test accuracy) of the pruned models using different pruning approaches and sparsity levels. Each point indicates the prediction accuracy achieved by the pruned model with a certain pruning approach and sparsity level. The black line indicates the prediction accuracy of the original models.}
\label{fig:prune_acc}
\end{figure*}
\subsubsection{Datasets}
We consider \xy{seven} popular datasets in the experiments: CIFAR10, CIFAR100, \xy{CHMNIST}, SVHN, Texas, Location, and \xy{Purchase}.
\begin{itemize}[leftmargin=*] \setlength{\itemsep}{0.1em} \setlength{\parskip}{0.1em}
\item {\textit{CIFAR10}} and \textit{CIFAR100}~\cite{krizhevsky2009learning}. These are two benchmark datasets for image classification. CIFAR10 dataset contains 60,000 $32\times 32$ color images in 10 classes, with 6,000 images per class. CIFAR100 dataset contains 60,000 color images in 100 classes, with 600 images per class.
\item \xy{\textit{CHMNIST}~\cite{kather2016multi}. This dataset consists of 5,000 histological images of human colorectal cancer containing 10 classes of tissues. We resize all images to $32\times 32$, the same dimension as CIFAR10 and CIFAR100.}
\item \textit{SVHN}~\cite{netzer2011reading}. This dataset consists of 99,289 $32\times 32$ color images from house numbers in the Google Street View dataset, containing 10 classes from 0 to 9.
\item \textit{Location}~\cite{yang2016participatory,yang2015nationtelescope}. This dataset contains location ``check-in'' records of mobile users in the Foursquare social network, restricted to the Bangkok area. It is used to predict a user's geosocial type from geographical history features, namely whether the user visited a certain region or location type. We use the preprocessed Location dataset provided by Shokri \textit{et al}.\@\xspace~\cite{shokri2017membership}, which contains 5,010 data samples, 446 binary features, and 30 classes.
\item \textit{Texas}~\cite{hospital2020}. This dataset comes from the Hospital Discharge Data Public Use Data File provided by the Texas Department of State Health Services. It is used to predict the type of a patient's main procedure from a wide range of features, such as external causes of injury, the patient's diagnosis, procedures the patient underwent, and other generic information. We use the preprocessed Texas dataset provided by~\cite{shokri2017membership}, which contains 67,330 data samples, 6,169 binary features, and 100 classes.
\item \xy{\textit{Purchase}~\cite{acquire2014}. This dataset is presented in Acquire Valued Shoppers Challenge to predict which shoppers will become repeat buyers based on the purchase history. We use the preprocessed purchase dataset provided by Shokri \textit{et al}.\@\xspace~\cite{shokri2017membership}, which contains 197,324 data samples, 600 binary features, and 100 classes. }
\end{itemize}
\xy{Each dataset above is first split randomly and equally into two parts: one for the target model and one for the shadow model. Each part is then split into training (45\%), validation (10\%), and test (45\%) sets. The validation set determines when to stop training or fine-tuning under the early stopping mechanism.}
Therefore, membership inference by random guessing yields 50\% attack accuracy.
\xy{Due to space limits, we only show the results for the CIFAR10 and Purchase datasets; the remaining results are presented in the Appendix.}
\subsubsection{Neural Network Architectures}
For the \xy{four} image datasets, \textit{i}.\textit{e}.\@\xspace, CIFAR10, CIFAR100, \xy{CHMNIST}, and SVHN, we consider \xy{three representative neural network architectures: ResNet18, VGG16, and DenseNet121\footnote{All neural networks are trained using \url{https://github.com/huyvnphan/PyTorch_CIFAR10}}.} For the other \xy{three} datasets, \textit{i}.\textit{e}.\@\xspace, Texas, \xy{Purchase}, and Location, we implement fully connected (FC) neural networks with two hidden layers of 256 and 128 neurons, respectively. All FC layers except the last are followed by ReLU activations. The Adam optimizer~\cite{kingma2014adam} with a learning rate of 0.001 and a batch size of 128 is used to train all original models and to fine-tune all pruned models.
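For concreteness, here is a NumPy sketch of this FC architecture's forward pass, reading ``two layers'' as two hidden layers (256 and 128 units) before the classification layer; the dimensions below use the Location dataset (446 features, 30 classes) as an example, and the random weights are purely illustrative:

```python
import numpy as np

def fc_forward(x, params):
    """Forward pass of the FC architecture for the non-image datasets:
    input -> 256 -> 128 -> n_classes, with ReLU after every layer
    except the last and softmax on the output.
    `params` is a list of (W, b) tuples, one per layer."""
    h = x
    for W, b in params[:-1]:
        h = np.maximum(0.0, h @ W + b)          # ReLU hidden layers
    W, b = params[-1]
    logits = h @ W + b
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))  # stable softmax
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
dims = [446, 256, 128, 30]  # e.g. Location: 446 features, 30 classes
params = [(rng.normal(scale=0.05, size=(i, o)), np.zeros(o))
          for i, o in zip(dims[:-1], dims[1:])]
probs = fc_forward(rng.normal(size=(4, 446)), params)  # 4 samples in a batch
```

The posterior vectors returned here are exactly the quantities the threshold-based and neural-network-based attacks later operate on.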
\subsubsection{Neural Network Pruning Approaches}
Four representative neural network pruning approaches are considered, including L1 unstructured pruning, L1 structured pruning, L2 structured pruning, and Network slimming.
\begin{itemize}[leftmargin=*]\setlength{\itemsep}{0.1em} \setlength{\parskip}{0.1em}
\item \textit{L1 unstructured pruning}~\cite{han2015deep} (L1 unstructured), which individually removes the weights with the smallest absolute values. This approach produces a sparse network with a small size, but may not improve efficiency given existing hardware and software optimizations.
\item \textit{L1 structured pruning}~\cite{li2016pruning} (L1 structured), which removes entire filters with the smallest L1 norms from the convolution layers. By removing whole filters, this method yields significant speedups over unstructured pruning, since dense-matrix optimizations can still be exploited for efficient computation.
\item \textit{L2 structured pruning} (L2 structured), which removes the entire filters with the lowest L2 norm values from the convolution layers, similar to L1 structured pruning.
\item \textit{Network slimming}~\cite{liu2017learning} (Slimming), which associates scaling factors used in the batch normalization layer with each channel and removes the entire channels with the lowest scaling factors. This method automatically identifies the insignificant channels and finds the target architectures.
\end{itemize}
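The contrast between unstructured and structured criteria can be sketched in a few lines of NumPy; here the filter scores use the L1 norm, and replacing the absolute-value sum with an L2 norm yields the L2 structured variant (function and variable names are illustrative, not the paper's implementation):

```python
import numpy as np

def l1_structured_prune(conv_w: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out entire filters whose L1 norms are smallest, in contrast
    to unstructured pruning, which zeroes individual weights.
    conv_w has shape (out_channels, in_channels, k, k)."""
    n_filters = conv_w.shape[0]
    n_prune = int(round(sparsity * n_filters))
    norms = np.abs(conv_w).reshape(n_filters, -1).sum(axis=1)  # per-filter L1
    drop = np.argsort(norms)[:n_prune]                         # weakest filters
    pruned = conv_w.copy()
    pruned[drop] = 0.0
    return pruned

w = np.arange(1.0, 17.0).reshape(4, 1, 2, 2)  # 4 filters, growing magnitude
p = l1_structured_prune(w, sparsity=0.5)       # drops the 2 weakest filters
```

Because whole filters disappear, the surviving computation remains a dense matrix multiply, which is why structured pruning speeds up inference while unstructured pruning generally does not.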
We apply L1 unstructured pruning to all models.
Since the structured approaches, \textit{i}.\textit{e}.\@\xspace, L1 structured, L2 structured, and Slimming, apply only to convolution layers, we evaluate them on the ResNet18, VGG16, and DenseNet121 models trained on the CIFAR10, CIFAR100, and SVHN datasets. In addition, five sparsity levels $\gamma=\{0.5, 0.6, 0.7, 0.8, 0.9\}$ are investigated for all pruning approaches, denoting the fraction of removed parameters\footnote{Since structured pruning only removes parameters in the convolution layers, the sparsity levels for structured pruning count only the removed parameters in those layers rather than in the entire network.}. We follow the typical pruning procedure: train the original model, prune it with one of the above approaches, and fine-tune the pruned model.
Figure~\ref{fig:prune_acc} shows the prediction accuracy of the original model and of the pruned models under different pruning approaches and sparsity levels. The pruned models achieve performance close to the original model when the sparsity level is not high, and their accuracy decreases as the pruning sparsity increases. Occasionally, pruned models achieve higher accuracy than the original models, as also reported in recent studies of neural network pruning~\cite{blalock2020state}. Unstructured pruning usually outperforms structured pruning in our evaluation, since structured pruning removes parameters in a restricted (filter-wise) manner, which limits the accuracy of the pruned model but speeds up inference.
\xy{
\subsubsection{State-of-the-art MIAs}
\label{sec:state_mia}
To thoroughly evaluate the proposed SAMIA, we compare it against eight state-of-the-art MIAs\footnote{We implement the Conf, Xent, Mentr, and Top1-Conf attacks based on \url{https://github.com/inspire-group/membership-inference-evaluation} and the BlindMI attack based on \url{https://github.com/hyhmia/BlindMI/blob/master/BlindMI_Diff_W.py}.}.
\begin{itemize}[leftmargin=*]\setlength{\itemsep}{0.1em} \setlength{\parskip}{0.1em}
\item \textit{Ground-truth class confidence-based threshold attack (Conf)}. Yeom \textit{et al}.\@\xspace used the prediction confidence of the ground-truth class to identify membership status~\cite{yeom2018privacy}. The adversary learns a class-wise threshold for deciding membership from the confidence of the ground-truth class. Given an input sample $\bm{x}$, its class $y$, and the pruned model $f_p$, the attack function is defined as $I_{\mathrm{conf}}(f_p, (\bm{x},y)) = \mathbbm{1}\{f_p^{(y)}(\bm{x})\geq\zeta_y\}$, where $f_p^{(y)}$ is the prediction confidence of class $y$ and $\zeta_y$ is the threshold for class $y$ derived from the shadow pruned model.
\item \textit{Cross-entropy-based threshold attack (Xent)}. The cross-entropy loss can also be used to derive the threshold from the shadow pruned model~\cite{yeom2018privacy}. The attack function is defined as $I_{\mathrm{xent}}(f_p, (\bm{x},y)) = \mathbbm{1}\{\mathrm{xent}(f_p(\bm{x}), y)\leq\zeta_y\}$, where $\mathrm{xent}$ denotes the cross-entropy loss (members tend to have lower loss).
\item \textit{Modified-entropy-based threshold attack (Mentr)}. Song and Mittal proposed a modified entropy that incorporates the ground-truth class, achieving better performance than using prediction confidence alone~\cite{song2020systematic}. The attack function is defined as $I_{\mathrm{mentr}}(f_p, (\bm{x},y)) = \mathbbm{1}\{\mathrm{mentr}(f_p(\bm{x}), y)\leq\zeta_y\}$, where $\mathrm{mentr}(f_p(\bm{x}), y) = - (1 - f_p^{(y)}(\bm{x}))\log (f_p^{(y)}(\bm{x})) - \sum_{t\neq y}f_p^{(t)}(\bm{x})\log(1-f_p^{(t)}(\bm{x}))$ (members tend to have lower modified entropy).
\item \textit{Top-1 confidence-based threshold attack (Top1-Conf)}. Salem~\textit{et al}.\@\xspace proposed to derive the threshold from the highest prediction confidence~\cite{salem2019ml}. The attack function is defined as $I_{\mathrm{top1}}(f_p, \bm{x}) = \mathbbm{1}\{\mathrm{top1}(f_p(\bm{x}))\geq\zeta\}$, where $\mathrm{top1}$ returns the highest value of the prediction confidence vector.
\item \textit{Confidence-based Neural Network attack (NN)}. Shokri \textit{et al}.\@\xspace proposed to use prediction confidence as features to train a neural network from the shadow model~\cite{shokri2017membership}, which is used to distinguish member and non-member data.
\item \textit{Top-3 Confidence-based Neural Network attack (Top3-NN)}. Salem~\textit{et al}.\@\xspace proposed to use the top-3 prediction confidences as features~\cite{salem2019ml} to train a neural network classifier.
\item \textit{Confidence-based Neural Network attack with ground-truth class (NNCls)}. Nasr~\textit{et al}.\@\xspace combined one-hot encoded class labels with the prediction confidence as features to train a neural network classifier~\cite{nasr2018machine}.
\item \textit{Blind Membership Inference Attack (BlindMI)}. Hui~\textit{et al}.\@\xspace proposed to determine the membership of a data sample by moving it to a non-member set and checking whether the move increases the distance between the member and non-member sets~\cite{hui2019practical}. BlindMI considers the data sample as a non-member if the distance is increased. We use the default BlindMI attack provided in~\cite{hui2019practical}.
\end{itemize}
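To make the threshold-style attacks above concrete, the following Python sketch computes the ground-truth confidence and the modified-entropy (Mentr) score and applies a class-wise threshold. The function names, the threshold value, and the example probability vectors are our own illustrations; in the actual attacks, the thresholds are derived from a shadow pruned model.

```python
import math

def modified_entropy(probs, y):
    """Mentr score (Song & Mittal): small for confident, correct predictions."""
    eps = 1e-12
    p_y = probs[y]
    score = -(1.0 - p_y) * math.log(p_y + eps)
    for t, p_t in enumerate(probs):
        if t != y:
            score -= p_t * math.log(1.0 - p_t + eps)
    return score

def conf_attack(probs, y, zeta_y):
    """Conf attack: predict member (1) iff ground-truth confidence >= threshold."""
    return int(probs[y] >= zeta_y)

# Hypothetical outputs of a pruned model on two samples of class 0.
member_like = [0.90, 0.05, 0.05]     # confident and correct: low Mentr
nonmember_like = [0.40, 0.30, 0.30]  # uncertain: high Mentr
assert modified_entropy(member_like, 0) < modified_entropy(nonmember_like, 0)
assert conf_attack(member_like, 0, zeta_y=0.8) == 1
```

The sketch illustrates why Mentr tends to separate members from non-members better than raw confidence: it also penalizes probability mass placed on wrong classes.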
In the main paper, we present the results of the five attacks that achieve the highest attack accuracies in most experiments, \textit{i}.\textit{e}.\@\xspace, Conf, Mentr, NNCls, BlindMI, and SAMIA. The results of the remaining attacks are reported in the Appendix.}
In addition, to provide a practical analysis of privacy risks, we adopt early stopping and L2 regularization as a baseline defense mechanism and apply it to all the following membership inference attack experiments. Other defense mechanisms are discussed in Section~\ref{sec:defense}.
\subsubsection{SAMIA Settings}
\xy{Following the experimental setting in~\cite{shokri2017membership}, we first train five shadow models and their pruned models. The predictions of the shadow models on the shadow training and shadow test datasets are used to train an attack model. In the attack model, we use four attention heads, 64 hidden units, and the GeLU activation function~\cite{hendrycks2016gaussian} in each self-attention module, with a 20\% dropout rate.
We use the SGD optimizer~\cite{saad1998online} to train the attack models for 100 epochs with batch size 128. The learning rate is set to 0.01 and reduced to 0.001 and 0.0001 at 1/2 and 3/4 of the training process (\textit{i}.\textit{e}.\@\xspace, the 50th and 75th epochs).}
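The step learning-rate schedule described above can be sketched as follows; the function name and interface are our own, while the milestones at 1/2 and 3/4 of training follow the text.

```python
def lr_at_epoch(epoch, total_epochs=100, base_lr=0.01):
    """Step schedule: base_lr until the halfway epoch, x0.1 until 3/4, x0.01 after."""
    if epoch >= total_epochs * 3 // 4:
        return base_lr * 0.01
    if epoch >= total_epochs // 2:
        return base_lr * 0.1
    return base_lr

assert lr_at_epoch(10) == 0.01
assert abs(lr_at_epoch(50) - 0.001) < 1e-12
assert abs(lr_at_epoch(75) - 0.0001) < 1e-12
```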
\xyy{
Due to the large number of settings evaluated in the attacks and defenses and the high computational cost of each setting, we conduct each experiment only once. Thus, experimental variation may be observed due to the randomness in neural network pruning and membership inference attacks (\textit{e}.\textit{g}.\@\xspace, parameter initialization, dataset shuffling).}
\begin{figure*}[!t]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/acc_attack_cifar10_resnet18.pdf}
\caption{CIFAR10, ResNet18}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/acc_attack_cifar10_densenet121.pdf}
\caption{CIFAR10, DenseNet121}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/acc_attack_cifar100_densenet121.pdf}
\caption{CIFAR100, DenseNet121}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/acc_attack_location_column.pdf}
\caption{Location, FC}
\end{subfigure}
\caption{Privacy Risks of Neural Network Pruning (w.r.t. prediction accuracy). Compared with the original models, most pruning approaches result in a higher attack accuracy at a similar prediction accuracy. We present the attack accuracy of SAMIA for pruned models and the attack accuracy of the Conf attack for the original models.}
\label{fig:acc_attack}
\end{figure*}
\begin{figure*}[!t]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/mia_cifar10_resnet18.pdf}
\caption{CIFAR10, ResNet18}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/mia_cifar10_densenet121.pdf}
\caption{CIFAR10, DenseNet121}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/mia_cifar100_densenet121.pdf}
\caption{CIFAR100, DenseNet121}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/mia_location_column.pdf}
\caption{Location, FC}
\end{subfigure}
\caption{Privacy Risks of Neural Network Pruning (w.r.t. model sparsity). We present the attack accuracy of SAMIA for pruned models and the attack accuracy of Conf attack for the original models.}
\label{fig:mia_sparsity}
\end{figure*}
\begin{figure*}[!t]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/conf_gap_cifar10_resnet18.pdf}
\caption{Confidence gap}
\label{fig:conf_gap_cifar10_resnet18}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/sens_gap_cifar10_resnet18.pdf}
\caption{Sensitivity gap}
\label{fig:sens_gap_cifar10_resnet}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\linewidth]{figs/exp/attack/gen_gap_cifar10_resnet18.pdf}
\caption{Generalization gap}
\end{subfigure}
\caption{\xy{Impact of confidence gap, sensitivity gap, and generalization gap (CIFAR10, ResNet18).} We present the relationship between the gap and the attack accuracy of SAMIA.}
\label{fig:gap_cifar10_resnet18}
\end{figure*}
\begin{figure*}[!t]
\centering
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_cifar10_resnet18_level.pdf}
\caption{L1 unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_cifar10_resnet18_l1filter.pdf}
\caption{L1 structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_cifar10_resnet18_l2filter.pdf}
\caption{L2 structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_cifar10_resnet18_slim.pdf}
\caption{Slimming}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_cifar10_resnet18_level_2.pdf}
\caption{L1 unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_cifar10_resnet18_l1filter_2.pdf}
\caption{L1 structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_cifar10_resnet18_l2filter_2.pdf}
\caption{L2 structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_cifar10_resnet18_slim_2.pdf}
\caption{Slimming}
\end{subfigure}
\caption{\xy{Attack performance comparison of MIAs (CIFAR10, ResNet18). We present the attack accuracy of state-of-the-art membership inference attacks and compare them with the proposed SAMIA. We present the results of pruned CIFAR10 ResNet18 models using four pruning approaches. The black line presents the attack accuracy of the original models using the Conf attack, \textit{i}.\textit{e}.\@\xspace, Conf (Org.).}}
\label{fig:mias_cifar10_resnet}
\end{figure*}
\subsection{Privacy Risk Discussions}
\xyy{
In this section, we evaluate the privacy risks of the pruned models against those of the original models, and investigate several key factors behind the privacy risks of neural network pruning. We further compare the privacy risks of different pruning approaches, and discuss the effectiveness of the proposed SAMIA as well as the impact of unknown sparsity levels and pruning approaches.
\subsubsection{Privacy Risks of Neural Network Pruning}\label{sec:5.2.1}
Since different pruning approaches and sparsity levels may achieve distinct prediction accuracy, we take the prediction accuracy into consideration when evaluating the privacy risks of pruning, so as to make a fair comparison.
Figure~\ref{fig:acc_attack} shows the relationship between prediction accuracy and (SAMIA) attack accuracy when we apply different pruning approaches and sparsity levels to the CIFAR10, CIFAR100, and Location datasets.
We observe that \textit{when the pruned model achieves a comparable prediction accuracy with the original model, most pruning approaches result in an increased attack accuracy (\textit{i}.\textit{e}.\@\xspace, privacy risk).
}
The attack accuracy may decrease along with the prediction accuracy, as the pruned model becomes less effective for both prediction and attack.
However, we still observe that in most cases, even when the pruned model performs worse than the original model, its attack accuracy remains higher than the original one's.
Therefore, the pruned models are more vulnerable to membership inference attacks than the original models.
When a low sparsity level is used, we always observe an increased privacy risk of the pruned model (Figure~\ref{fig:mia_sparsity}). This is because, with a low sparsity level, the pruned model is more likely to achieve a comparable or even higher prediction accuracy than the original model, which increases the accuracy of the prediction confidence used in MIAs and further increases the privacy risk.
\begin{figure}[!t]
\centering
\begin{subfigure}[b]{0.49\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_cifar10_densenet121_level.pdf}
\caption{CIFAR10, DenseNet121}
\end{subfigure}
\begin{subfigure}[b]{0.49\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_cifar10_densenet121_level_2.pdf}
\caption{CIFAR10, DenseNet121}
\end{subfigure}
\begin{subfigure}[b]{0.49\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_cifar100_densenet121_level.pdf}
\caption{CIFAR100, DenseNet121}
\label{fig:mias_cifar100_densenet1}
\end{subfigure}
\begin{subfigure}[b]{0.49\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_cifar100_densenet121_level_2.pdf}
\caption{CIFAR100, DenseNet121}
\label{fig:mias_cifar100_densenet2}
\end{subfigure}
\begin{subfigure}[b]{0.49\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_location_column_level.pdf}
\caption{Location, FC}
\end{subfigure}
\begin{subfigure}[b]{0.49\linewidth}
\includegraphics[width=0.95\linewidth]{figs/exp/mias/mias_location_column_level_2.pdf}
\caption{Location, FC}
\end{subfigure}
\caption{\xy{Attack performance comparison of MIAs on different datasets (L1 Unstructured). We present the attack accuracy of state-of-the-art membership inference attacks and compare them with the proposed SAMIA. We present the attack accuracy of three models (CIFAR10 DenseNet121, CIFAR100 DenseNet121, Location FC) pruned by L1 unstructured pruning. The black line presents the attack accuracy of the original models using the Conf attack, \textit{i}.\textit{e}.\@\xspace, Conf (Org.).}}
\label{fig:mias_dataset}
\end{figure}
\subsubsection{Impact of Confidence, Sensitivity, and Generalization Gap.}
\label{sec:impact_gap}
As aforementioned, we hypothesize that neural network pruning leads to the increased \textit{confidence gap} and \textit{sensitivity gap} (in ground-truth class) of pruned models, thus increasing their membership inference risks.
Meanwhile, overtraining is considered as one of the key causes of membership leakage in previous research~\cite{yeom2018privacy,song2020systematic}, leading to our evaluation on \textit{generalization gap}, \textit{i}.\textit{e}.\@\xspace, the difference between training accuracy and testing accuracy.
From Figure~\ref{fig:gap_cifar10_resnet18}, we observe that {neural network pruning increases the gaps between members and non-members, \textit{i}.\textit{e}.\@\xspace, the confidence gap, sensitivity gap, and generalization gap, in most settings}. Further, as these gaps increase, the attack accuracy increases as well, which indicates \textit{a strong correlation between the gaps, \textit{i}.\textit{e}.\@\xspace, the confidence gap, sensitivity gap, and generalization gap, and the increased privacy risk.}
The strong correlation of the confidence gap and sensitivity gap validates our intuition that these gaps can be leveraged by the adversary to infer the membership status, introducing a new attack surface in neural network pruning.
By investigating the attack results, we find that the confidence gap plays the most important role in the privacy risk. L1 unstructured and slimming pruning usually lead to an increased confidence gap, which can be leveraged by the adversary and results in a higher attack accuracy. Additionally, the sensitivity gap can also leak membership information. For example, the confidence gap of L1 unstructured pruning on a CIFAR10 ResNet18 model (Figure~\ref{fig:conf_gap_cifar10_resnet18}) is close to that of the original model, but due to the increased sensitivity gap (Figure~\ref{fig:sens_gap_cifar10_resnet}), the pruning still results in an increased attack accuracy.
\subsubsection{Privacy Risks of Pruning Approaches.}
Following the same settings as above, we investigate the privacy risks of different pruning approaches by comparing the attack accuracy at a similar prediction accuracy.
As shown in Figures~\ref{fig:acc_attack} and~\ref{fig:mia_sparsity}, given a similar prediction accuracy of the pruned models, L1 unstructured and slimming pruning result in the highest attack accuracy.
L1 structured pruning achieves the lowest attack accuracy among all pruning approaches, but in some cases its attack accuracy is still higher than the original model's, even at a similar or lower prediction accuracy.
The structured constraint used in L1 structured pruning regularizes the model during fine-tuning and thereby reduces the privacy risk.
\subsubsection{Effectiveness of SAMIA}
To investigate the effectiveness of the proposed SAMIA, we compare SAMIA with state-of-the-art MIAs in terms of attack accuracy. As shown in Figure~\ref{fig:mias_cifar10_resnet}, we observe that \textit{our proposed SAMIA achieves the highest attack accuracy in most cases compared with the baseline attacks}, mainly because SAMIA best leverages both the confidence gap and the sensitivity gap (in the ground-truth class) introduced by pruning. Besides, the Top1-Conf and Mentr attacks are also effective, as both take advantage of the confidence gap, the most important factor for model privacy.
We also observe that when the pruning introduces a high generalization gap, all attacks can achieve a high attack accuracy (\textit{e}.\textit{g}.\@\xspace, CIFAR100 DenseNet121 in Figures~\ref{fig:mias_cifar100_densenet1},~\ref{fig:mias_cifar100_densenet2} and Appendix Figure~\ref{fig:gen_gap_cifar100_densenet}), which has been discussed in previous MIA research.
}
\subsubsection{Unknown Sparsity Level and Pruning Approach}
\label{section:unknownSLPA}
\xy{In the evaluation above, we assume the adversary has knowledge of the sparsity levels and pruning approaches used in network pruning.
In this section, we explore the privacy risks in a more realistic scenario, \textit{i}.\textit{e}.\@\xspace, when the adversary has no prior knowledge of the sparsity levels and the pruning approaches.
\vspace{0.2em}
\noindent
\textbf{Unknown sparsity level.} We assume the adversary only knows the pruning approach but not the sparsity level, which is the major factor of model efficiency. We evaluate the attack accuracy of SAMIA when the adversary prunes target models and shadow models using different sparsity levels. We also consider the case where the target model is not pruned, \textit{i}.\textit{e}.\@\xspace, sparsity level $= 0$. As shown in Figures~\ref{fig:unknown_sparsity_cifar10_resnet} and~\ref{fig:unknown_sparsity_dataset}, the attack accuracy is not much affected by the mismatch in sparsity levels between target models and shadow models. In some cases, using a different sparsity level in pruning shadow models can even increase the attack accuracy.
\textit{The attack accuracy mainly depends on the performance of the shadow model}, and thus the adversary can attack victim models with higher attack accuracy by selecting a good pruned shadow model.
For instance, the adversary can use each shadow model to attack other shadow models with different sparsity levels and select the one with the highest attack accuracy.
}
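The shadow-selection heuristic above can be sketched as follows. Here, `attack_accuracy` is a placeholder for evaluating one pruned shadow model's attack against another, and the candidate list stands for shadow models pruned at different sparsity levels; both names and the toy accuracy values are our own illustrations.

```python
def pick_best_shadow(shadow_models, attack_accuracy):
    """Score each candidate shadow model by its average attack accuracy
    against the remaining shadow models; return the best-scoring one."""
    best, best_score = None, float("-inf")
    for i, s in enumerate(shadow_models):
        others = [t for j, t in enumerate(shadow_models) if j != i]
        score = sum(attack_accuracy(s, t) for t in others) / len(others)
        if score > best_score:
            best, best_score = s, score
    return best

# Toy example with made-up pairwise attack accuracies between
# shadow models pruned at sparsity levels 0.5, 0.7, and 0.9.
acc = {("s0.5", "s0.7"): 0.70, ("s0.5", "s0.9"): 0.80,
       ("s0.7", "s0.5"): 0.60, ("s0.7", "s0.9"): 0.60,
       ("s0.9", "s0.5"): 0.90, ("s0.9", "s0.7"): 0.50}
assert pick_best_shadow(["s0.5", "s0.7", "s0.9"], lambda a, b: acc[(a, b)]) == "s0.5"
```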
\begin{figure*}[!t]
\centering
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=\linewidth]{figs/exp/unknown/unknown_cifar10_resnet18_level.pdf}
\caption{L1 Unstructured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=\linewidth]{figs/exp/unknown/unknown_cifar10_resnet18_l1filter.pdf}
\caption{L1 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=\linewidth]{figs/exp/unknown/unknown_cifar10_resnet18_l2filter.pdf}
\caption{L2 Structured}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=\linewidth]{figs/exp/unknown/unknown_cifar10_resnet18_slim.pdf}
\caption{Slimming}
\end{subfigure}
\caption{Attack accuracy with unknown sparsity levels (CIFAR10, ResNet18).}
\label{fig:unknown_sparsity_cifar10_resnet}
\end{figure*}
\begin{figure*}[!t]
\centering
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=\linewidth]{figs/exp/unknown/unknown_cifar10_densenet121_level.pdf}
\caption{CIFAR10, DenseNet121}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=\linewidth]{figs/exp/unknown/unknown_cifar100_densenet121_level.pdf}
\caption{CIFAR100, DenseNet121}
\end{subfigure}
\begin{subfigure}[b]{0.24\linewidth}
\includegraphics[width=\linewidth]{figs/exp/unknown/unknown_location_column_level.pdf}
\caption{Location, FC}
\end{subfigure}
\caption{\xy{Attack accuracy with unknown sparsity levels (L1 Unstructured).}}
\label{fig:unknown_sparsity_dataset}
\end{figure*}
\vspace{0.2em}
\noindent
\xyy{
\textbf{Unknown sparsity level and pruning approach.}
Since we assume the adversary has no prior knowledge of the sparsity level and pruning approach, the adversary may randomly pick a sparsity level and a pruning approach to prune a shadow model for attacks. To evaluate the attack accuracy, we conduct 20 experiments for the aforementioned four image datasets and the corresponding neural networks. In each experiment, we randomly select the sparsity levels and pruning approaches for target models and shadow models, respectively. For example, the target model uses L1 Structured pruning with 0.5 sparsity level while the shadow model uses Slimming pruning with 0.8 sparsity level. The sparsity levels are selected from the set of $\{0.5, 0.6, 0.7, 0.8, 0.9\}$ and the pruning approaches are selected from the four pruning approaches.
To measure the privacy risks, we define the attack accuracy loss as $(\mathit{acc}_{\mathrm{known}} - \mathit{acc}_{\mathrm{unknown}}) / \mathit{acc}_{\mathrm{known}}$, where $\mathit{acc}_{\mathrm{known}}$ denotes the attack accuracy when the adversary knows all the pruning information and $\mathit{acc}_{\mathrm{unknown}}$ denotes the attack accuracy without knowing any pruning information. Table~\ref{tab:acc_loss_unknown} shows the average attack accuracy loss over 20 experiments for each dataset and model.
We observe that \textit{without knowing the sparsity levels and pruning approaches, the attack is still effective in most cases except the CIFAR10 VGG16 and CIFAR100 ResNet18 models}.
The poor attack performance on these two models is due to the ineffectiveness of shadow models pruned with certain sparsity levels and pruning approaches.
For example, we observe a significant drop in attack accuracy (from 90\% to 50\%) when applying L1 structured and slimming pruning with sparsity levels from 0.7 to 0.9 on the CIFAR10 VGG16 model (Figure~\ref{fig:mia_cifar10_vgg16} in Appendix). The large gap makes shadow models pruned with these settings ineffective in attacking the unknown victim model.
}
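The attack accuracy loss metric used above is straightforward to compute; a minimal sketch:

```python
def attack_accuracy_loss(acc_known, acc_unknown):
    """Relative drop in attack accuracy when pruning settings are unknown:
    (acc_known - acc_unknown) / acc_known."""
    return (acc_known - acc_unknown) / acc_known

# E.g., a drop from 80% to 60% attack accuracy is a 25% relative loss.
assert abs(attack_accuracy_loss(0.80, 0.60) - 0.25) < 1e-9
```

Note that the metric can be negative, as in the CHMNIST VGG16 row of the table, when not knowing the pruning settings happens to increase the attack accuracy.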
\begin{table}[!tb]
\centering
\caption{\xy{Attack accuracy loss with unknown sparsity levels and pruning approaches.}}
\label{tab:acc_loss_unknown}
\small
\begin{tabular}{@{}llr@{}}
\toprule
Dataset & Model & Attack Acc Loss \\ \midrule
\multirow{3}{*}{CIFAR10} & ResNet18 & 4.77\% \\
& DenseNet121 & 1.63\% \\
& VGG16 & 26.83\% \\\midrule
\multirow{3}{*}{CIFAR100} & ResNet18 & 12.41\% \\
& DenseNet121 & 6.90\% \\
& VGG16 & 2.43\% \\\midrule
\multirow{3}{*}{SVHN} & ResNet18 & 0.60\% \\
& DenseNet121 & 0.12\% \\
& VGG16 & 0.05\% \\\midrule
\multirow{3}{*}{CHMNIST} & ResNet18 & 0.78\% \\
& DenseNet121 & 0.52\% \\
& VGG16 & -0.58\% \\ \bottomrule
\end{tabular}%
\end{table}
\section{Introduction}
\input{intro}
\section{Background and Related Work}
\label{sec:back}
\input{back}
\section{System Overview}
\input{threat}
\section{MIA against Neural Network Pruning}
\label{sec:attack}
\input{attack}
\section{Attack Evaluation}
\label{sec:eval}
\input{eval}
\section{Defenses against MIAs}
\label{sec:defense}
\input{defend}
\section{Conclusion}
This paper conducted the first analysis of privacy risks in neural network pruning. We first explored the impact of neural network pruning on prediction divergence, \xy{based on which we proposed a new membership inference attack, \textit{i}.\textit{e}.\@\xspace, the self-attention membership inference attack (SAMIA), against pruned neural network models. Through comprehensive and rigorous evaluation, we demonstrated the substantially increased privacy risks of the pruned models. We found that the privacy risks of the pruned models are tightly related to the confidence, sensitivity, and generalization gaps caused by pruning. Besides, even without knowing the pruning approach, membership inference attacks can still achieve high attack accuracy against the pruned model.}
In particular, the proposed SAMIA showed superiority in identifying the pruned models' prediction divergence by using finer-grained prediction metrics, and we recommend it as a competitive baseline attack for future privacy risk studies of neural network pruning.
In addition, to defend against the attacks, we proposed a pair-based posterior balancing defense, named PPB, which reduces the prediction divergence during the fine-tuning process of neural network pruning. We experimentally demonstrated that PPB can reduce the attack accuracy to around 50\% (random guessing accuracy) without considering adaptive attacks and \xy{achieves the best protection compared with the three existing defenses.} Moreover, PPB showed competitive performance even when defending against adaptive attacks.
\xyy{
The proposed SAMIA attack will be further explored under more challenging MIA settings, such as the label-only MIA without available confidences, where the existing label-only MIA attacks using data augmentation~\cite{choquette2021label} and black-box adversary~\cite{li2021membership} can be potentially integrated for more powerful attack capability. }
We hope our work convinces the community of the importance of exploring innovative neural network pruning approaches that take privacy preservation into consideration.
\section*{Acknowledgement}
We would like to thank our shepherd, Yinzhi Cao, and the anonymous reviewers for their constructive suggestions. This work was supported in part by National Science Foundation (CCF-2106754).
\section*{Availability}
\xyy{Our code is publicly available at \url{https://github.com/Machine-Learning-Security-Lab/mia_prune} for the purpose of reproducible research. }
\bibliographystyle{unsrt}
\subsection{Neural Network Pruning Workflow}\label{S3.1}
This paper focuses on a general neural network pruning process, whose workflow includes three key stages: original network training, pruning, and fine-tuning, as illustrated in Figure~\ref{fig:pipeline}. Specifically,
\vspace{-0.3em}
\begin{enumerate}[leftmargin=*]\setlength{\itemsep}{0.1em} \setlength{\parskip}{0.1em}
\item \textit{Original network training}: A large original neural network model $f(\bm{x}; \bm{W})$ (sometimes over-parameterized) is first trained at this stage, where $\bm{x}$ is the training data and $\bm{W}$ denotes the model parameters;
\item \textit{Pruning}: Upon the original network, pruning is conducted by removing insignificant parameters or groups of parameters according to a specific criterion. The pruned network can be given by $f(\bm{x}; \bm{M}\odot \bm{W})$, where $\bm{M}\in \{0, 1\}^{|\bm{W}|}$ denotes the binary mask that sets a parameter to 0, and $\odot$ denotes element-wise multiplication;
\item \textit{Fine-tuning}: To recover the performance loss due to pruning, a pruned network can be fine-tuned by reusing the training data. After $N$-epoch fine-tuning, a pruned network can be given by $f(\bm{x}; \bm{M}\odot \bm{W}^N)$.
\end{enumerate}
\vspace{-0.3em}
For the sake of simplicity, in the rest of the paper we use $f$ to denote the original model $f(\bm{x}; \bm{W})$ and $f_p$ to denote the pruned model $f(\bm{x}; \bm{M}\odot \bm{W}^N)$.
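As a concrete (simplified) illustration of the pruning stage, the sketch below builds the binary mask for L1 unstructured (magnitude) pruning over a flat list of weights; real implementations operate per tensor or per layer on the full parameter set, and the function name and sample weights are our own.

```python
def l1_unstructured_mask(weights, sparsity):
    """Binary mask in {0,1}^|W| zeroing the smallest-magnitude
    fraction (= sparsity) of the weights."""
    k = int(sparsity * len(weights))  # number of weights to prune
    if k == 0:
        return [1.0] * len(weights)
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [1.0 if abs(w) > threshold else 0.0 for w in weights]

W = [0.1, -0.5, 0.3, -0.2, 0.4, 0.05]
M = l1_unstructured_mask(W, sparsity=0.5)
pruned_W = [m * w for m, w in zip(M, W)]  # elementwise product of M and W, before fine-tuning
assert sum(M) == 3 and pruned_W[1] == -0.5 and pruned_W[0] == 0.0
```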
\subsection{Adversarial Knowledge}
The goal of MIAs is to find the membership of a data sample, \textit{i}.\textit{e}.\@\xspace, whether the sample is used to train a target model or not. In this paper, we assume the adversary of the MIAs against a pruned neural network has the following knowledge.
\begin{itemize}[leftmargin=*] \setlength{\itemsep}{0.1em} \setlength{\parskip}{0.1em}
\item \textit{Access to query the pruned network}. The pruned model is made available to the public, \textit{i}.\textit{e}.\@\xspace, queryable. Due to practical considerations, the original model is assumed to be unpublished \xy{and inaccessible}.
\item \textit{Access to the prediction confidences}. We consider the practical black-box MIAs~\cite{nasr2019comprehensive}. The adversary can only acquire the output, \textit{i}.\textit{e}.\@\xspace, the prediction confidences, of the pruned network. Any internal information about the pruned model and the original model, such as the network architecture and activation functions, is inaccessible to the adversary.
\item \textit{Access to the pruning approach and the sparsity level}. We consider two different types of adversaries \xy{with or without knowledge of the pruning approach and the sparsity level.}
\item \textit{Access to the defense approach}.
The arms race between attacks and defenses is one main challenge in machine learning privacy. If the defense mechanisms are designed without considering the adversary's knowledge, their performance might be substantially degraded when adaptive attacks are used against those defensive mechanisms~\cite{jia2019memguard,song2020systematic}. Hence, we consider both non-adaptive and adaptive attacks to evaluate defense mechanisms: 1) non-adaptive attacks, \textit{i}.\textit{e}.\@\xspace, the adversary has no access to the defense mechanisms; 2) adaptive attacks, where the adversary has full knowledge of the defense mechanisms and performs the MIAs by taking the defensive mechanisms into account.
\end{itemize}
\section{{\bf Introduction}}\label{Sec.1}
A Lidstone series provides a generalization of the Taylor series that approximates a given function in a neighborhood of two points instead of one \cite{Lidstone}. Recently, Ismail and Mansour \cite{Ismail and Mansour} introduced a $q$-analog of the Lidstone expansion theorem. They proved that, under certain conditions, an entire function $f(z)$ can be expanded in the form
\begin{equation}\label{q-Lidstone series} f(z)= \sum_{n=0}^{\infty} \Big[ A_n(z) D_{q^{-1}}^{2n}\, f(1)- B_n(z)D_{q^{-1}}^{2n}\, f(0)\Big],\end{equation}
where $A_n(z)$ and $B_n(z)$ are the $q$-Lidstone polynomials defined by
$$A_n(z)=\large{\eta}^1_{q^{-1}} B_n(z) \mbox{ and } B_n(z)=\frac{2^{2n+1}}{[2n+1]_q!}B_{2n+1}(z/2;q).$$
Here $\large{\eta}^y_{q^{-1}}$ denotes the $q$-translation operator defined by
\begin{equation*}{\large{\eta}_{q^{-1}}^y} z^n
=q^{\frac{n(n-1)}{2}}z^n(-y/z ;q^{-1})_n=y^n(-z/y ;q)_n,\end{equation*}
and $B_{n}(z;q)$ is the $q$-analog of the Bernoulli polynomials, defined by the
generating function
\begin{equation}
\dfrac{t\,E_q(zt)}{E_q(t/2)e_q(t/2)-1}=\sum_{n=0}^{\infty}B_n(z;q)
\frac{t^n}{[n]_q!},
\end{equation}
where $E_q(z)$ and $e_q(z)$ are the $q$-exponential functions defined by
$$ E_q(z):= \sum_{j=0}^\infty q^{j(j-1)/2}\,\frac{z^j}{[j]_q!}; \, z\in \mathbb{C} \, \mbox{ and } \,
e_q(z):= \sum_{j=0}^\infty \frac{z^j}{[j]_q!}; \, |z|< 1.$$
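The two product forms in the definition of the $q$-translation operator acting on $z^n$, namely $q^{n(n-1)/2}z^n(-y/z;q^{-1})_n = y^n(-z/y;q)_n$, can be checked numerically for small $n$; the sketch below uses arbitrary test values of $q$, $z$, and $y$, and writes the finite $q$-shifted factorials as explicit products.

```python
import math

q, z, y = 0.5, 1.3, 0.7
for n in range(7):
    # (-y/z; q^{-1})_n and (-z/y; q)_n as finite q-shifted factorials
    lhs = q ** (n * (n - 1) / 2) * z ** n * math.prod(
        1 + (y / z) * q ** (-j) for j in range(n))
    rhs = y ** n * math.prod(1 + (z / y) * q ** j for j in range(n))
    assert math.isclose(lhs, rhs, rel_tol=1e-12)
```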
\vskip5mm
This paper aims to construct $q$-Lidstone polynomials which are $q$-Bernoulli and $q$-Euler polynomials generated by the third Jackson $q$-Bessel function, and then to derive two forms of a $q$-Lidstone expansion theorem. More precisely,
we will prove that an entire function may be expanded in terms of $q$-Lidstone polynomials in two different forms. In the first form, the $q$-Lidstone polynomials are $q$-Bernoulli polynomials and the coefficients are the even powers of the $q$-derivative $\frac{\delta_q f(z)}{\delta_q z}$ at $0$ and $1$. The other form expands the function in $q$-Lidstone polynomials based on $q$-Euler polynomials, and the coefficients contain both the even and odd powers of the $q$-derivative $\frac{\delta_q f(z)}{\delta_q z}$.
The publications \cite{Mansour and AL-Towaileb,Mansour and AL-Towaileb 2} are the most affiliated with this work.
\vskip5mm
This article is organized as follows: in Section 2, we state some definitions and present some background on $q$-analysis needed in our investigations. In Sections 3 and 4, we introduce $q$-Bernoulli and $q$-Euler polynomials generated by the third Jackson $q$-Bessel function. Section 5 contains a $q$-Lidstone expansion theorem involving $q$-Bernoulli polynomials, while Section 6 contains a $q$-Lidstone series
involving $q$-Euler polynomials.
\section{{\bf Definitions and Preliminary results}}
Throughout this paper, unless otherwise stated, $q$ is a positive number less than one, and we follow the notations and terminology in \cite{AMbook,GR}.
The symmetric $q$-difference operator $\delta_q$ is defined by
$$ \delta_q f(z)= f(q^{\frac{1}{2}}z)-f(q^{\frac{-1}{2}}z),$$
(see \cite{Cardoso011,GR}) and then
\begin{equation}\label{delta}
\frac{\delta_q f(z)}{\delta_q z}:= \dfrac{f(q^{\frac{1}{2}}z)-f(q^{\frac{-1}{2}}z)}{z(q^{\frac{1}{2}}- q^{\frac{-1}{2}})} \quad z\neq 0.
\end{equation}
We use a third $q$-exponential function $exp_q(z)$, which has the series representation
\begin{equation}\label{Def EX} exp_q(z)=\sum_{n=0}^{\infty} \frac{q^{\frac{n(n-1)}{4}}}{[n]_q!}z^n;\quad z\in\mathbb{C}.\end{equation}
This function has the property $ \displaystyle \lim_{q\rightarrow 1} exp_{q}(z)= e^z$ for $z\in\mathbb{C}$, and it is an entire function of $z$ of order zero (see \cite{GR}).
\begin{rem}
From the identity $[n]_{1/q}!=q^{\frac{n(1-n)}{2}}[n]_q!$, one can verify that
\begin{equation}\label{exp_q&exp_{1/q}} exp_{q}(z)= exp_{q^{-1}}(z); \quad z\in\mathbb{C}.\end{equation}
\end{rem}
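The identity \eqref{exp_q&exp_{1/q}} can be confirmed numerically by comparing truncated partial sums of the series \eqref{Def EX} at $q$ and $q^{-1}$; the truncation depth and test values below are chosen ad hoc, and the helper computes $[n]_q!$ from $[k]_q = (1-q^k)/(1-q)$.

```python
import math

def qfact(n, q):
    """[n]_q! with [k]_q = (1 - q^k) / (1 - q)."""
    f = 1.0
    for k in range(1, n + 1):
        f *= (1.0 - q ** k) / (1.0 - q)
    return f

def exp_q(z, q, terms=40):
    """Truncated series exp_q(z) = sum_n q^{n(n-1)/4} z^n / [n]_q!."""
    return sum(q ** (n * (n - 1) / 4) * z ** n / qfact(n, q)
               for n in range(terms))

# exp_q and exp_{1/q} agree term by term
assert math.isclose(exp_q(0.8, 0.5), exp_q(0.8, 2.0), rel_tol=1e-9)
```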
We consider the domain $\displaystyle \Omega:= \{ z\in \mathbb{C}: \, |1-exp_q(z)|< 1\}$.
\begin{lem}\label{inverse of exp.}
Let $z\in \Omega$. Then \begin{equation}\label{the inverse of exp.}
\dfrac{1}{exp_q(z)}:= 1+ \sum_{n=1}^{\infty} c_n\, z^n\,,
\end{equation}
where
\begin{equation*} c_n= \sum_{k=1}^{n} (-1)^{k}\, \sum_{s_1+s_2+\ldots+s_k=n\atop s_i>0\,(i=1,\ldots,k) } \dfrac{ q^{ \sum_{i=1}^k s_i(s_i-1)/4 }}{[s_1]_q! [s_2]_q!\ldots [s_{k}]_q!}\,.\end{equation*}
\end{lem}
\begin{proof} Observe that, for $z\in \Omega$, the function $\dfrac{1}{exp_q(z)}$ can be represented as
\begin{equation*}
\dfrac{1}{exp_q(z)} := \Big[\frac{1}{1+(exp_q(z)-1)}\Big] = \sum_{k=0}^{\infty}(-1)^k \Big[ exp_q(z)-1 \Big]^k.\end{equation*}
Using the series expansion \eqref{Def EX} of $exp_q(z)$, we get
\begin{eqnarray*}
\dfrac{1}{exp_q(z)} &=& \sum_{k=0}^{\infty}(-1)^k \left(\sum_{n=1}^{\infty} q^{n(n-1)/4 }\, \frac{z^n}{[n]_q!}\right)^k \\
&=& 1+\sum_{k=1}^{\infty}(-1)^k \left(\sum_{n=1}^{\infty} q^{n(n-1)/4 }\, \frac{z^{n}}{[n]_q!}\right)^k \\
&=&1+\sum_{k=1}^{\infty}(-1)^k\sum_{n=k}^{\infty} z^{n} \sum_{s_1+s_2+\ldots+s_k=n\atop s_i>0\,(i=1,\ldots,k) }\dfrac{ q^{ \sum_{i=1}^k s_i(s_i-1)/4 }}{[s_1]_q! [s_2]_q!\ldots [s_{k}]_q!}\,.
\end{eqnarray*}
Put $\displaystyle a_n(k)= \sum_{s_1+s_2+\ldots+s_k=n\atop s_i>0\,(i=1,\ldots,k) } \dfrac{ q^{ \sum_{i=1}^k s_i(s_i-1)/4 }}{[s_1]_q! [s_2]_q!\ldots [s_{k}]_q!}$. Then, the power series of $\dfrac{1}{exp_q(z)}$ takes the form
\begin{equation*}
\dfrac{1}{exp_q(z)}= 1+ \sum_{n=1}^{\infty} z^n \, \sum_{k=1}^{n} (-1)^{k}a_n(k),
\end{equation*}and then we obtain the desired result.
\end{proof}
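As a sanity check of Lemma \ref{inverse of exp.}, the sketch below (our helper names; $q=\frac12$) evaluates $c_n$ from the composition sum and compares it with the coefficients obtained by inverting the power series of $exp_q(z)$ directly:

```python
def q_number(n, q):
    return (1.0 - q**n) / (1.0 - q)

def q_factorial(n, q):
    out = 1.0
    for k in range(1, n + 1):
        out *= q_number(k, q)
    return out

def exp_coeff(n, q):
    # coefficient of z^n in exp_q(z)
    return q ** (n * (n - 1) / 4.0) / q_factorial(n, q)

def compositions(n, k):
    # all k-tuples of positive integers summing to n
    if k == 1:
        yield (n,)
        return
    for first in range(1, n - k + 2):
        for rest in compositions(n - first, k - 1):
            yield (first,) + rest

def c_coeff(n, q):
    # c_n from the lemma's composition formula
    total = 0.0
    for k in range(1, n + 1):
        for s in compositions(n, k):
            term = q ** (sum(si * (si - 1) for si in s) / 4.0)
            for si in s:
                term /= q_factorial(si, q)
            total += (-1) ** k * term
    return total

q, N = 0.5, 6
a = [exp_coeff(n, q) for n in range(N + 1)]
b = [1.0]                       # direct power-series inverse of exp_q
for n in range(1, N + 1):
    b.append(-sum(a[k] * b[n - k] for k in range(1, n + 1)))
max_err = max(abs(b[n] - c_coeff(n, q)) for n in range(1, N + 1))
```

The two computations agree to machine precision, since the inverse power series is unique.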
The $q$-sine and $q$-cosine, $S_q(z)$ and $C_q(z)$, are defined by $$ exp_q(iz):= C_q(z)+iS_q(z),$$ where
\begin{equation}\label{S&C}\begin{split} C_q(z) &:= \sum_{n=0}^{\infty} (-1)^n\, \frac{q^{n(n-\frac{1}{2})}}{[2n]_q!}z^{2n}, \\ S_q(z) &:= \sum_{n=0}^{\infty}(-1)^n\, \frac{q^{n(n+\frac{1}{2})}}{[2n+1]_q!}z^{2n+1}.\end{split}
\end{equation}
These functions can be written in terms of the third Jackson $q$-Bessel function (also called the Hahn-Exton $q$-Bessel
function \cite{Koelink and Swarttouw}) as
\begin{equation*}
\begin{split}
C_q(z)&:= q^{-\frac{3}{8}}\, \frac{(q^2;q^2)_{\infty}}{(q;q^2)_{\infty}}((1-q)z)^{\frac{1}{2}}\, J^{(3)}_{-\frac{1}{2}}(q^{\frac{-3}{4}}(1-q)z;q^2),\\ S_q(z)&:= q^{\frac{1}{8}}\, \frac{(q^2;q^2)_{\infty}}{(q;q^2)_{\infty}}((1-q)z)^{\frac{1}{2}}\, J^{(3)}_{\frac{1}{2}}(q^{\frac{-1}{4}}(1-q)z;q^2),
\end{split}\end{equation*}
and satisfy
\begin{equation}\label{delta Sine and Cosine}
\frac{\delta_q C_q(wz)}{\delta_q z}= -w\,S_q(wz), \quad \frac{\delta_q S_q(wz)}{\delta_q z}= w\,C_q(wz).\end{equation}
(see~\cite{Cardoso011,GR}). Therefore, \begin{equation}\label{delta E} \frac{\delta_q\, exp_q(wz)}{\delta_q z}= w\, exp_q(wz).\end{equation}
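The relations \eqref{delta Sine and Cosine} can be verified numerically from the series \eqref{S&C}. A small Python sketch (helper names are ours; sample values $q=\frac12$, $w=0.3$, $z=0.8$):

```python
def q_number(n, q):
    return (1.0 - q**n) / (1.0 - q)

def q_factorial(n, q):
    out = 1.0
    for k in range(1, n + 1):
        out *= q_number(k, q)
    return out

def C_q(x, q, terms=40):
    return sum((-1)**n * q**(n * (n - 0.5)) * x**(2*n) / q_factorial(2*n, q)
               for n in range(terms))

def S_q(x, q, terms=40):
    return sum((-1)**n * q**(n * (n + 0.5)) * x**(2*n + 1) / q_factorial(2*n + 1, q)
               for n in range(terms))

def delta_q(f, z, q):
    # symmetric q-derivative: (f(q^{1/2}z) - f(q^{-1/2}z)) / (z (q^{1/2} - q^{-1/2}))
    r = q ** 0.5
    return (f(r * z) - f(z / r)) / (z * (r - 1.0 / r))

q, w, z = 0.5, 0.3, 0.8
err_C = abs(delta_q(lambda t: C_q(w * t, q), z, q) + w * S_q(w * z, q))
err_S = abs(delta_q(lambda t: S_q(w * t, q), z, q) - w * C_q(w * z, q))
```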
\begin{figure}[h]
\includegraphics[width=10 cm, height=5cm]{graph.eps}
\caption{The roots of $C_q(z)$ and $S_q(z)$ at $q=\frac{1}{2}$}
\label{Fig(1)}
\end{figure}
Note that since the third Jackson $q$-Bessel functions have only real and simple roots (see \cite{Koelink and Swarttouw}), the roots of $C_q(z)$ and $S_q(z)$ are also real and simple, as shown in Figure~\ref{Fig(1)}. Moreover, since $C_q(z)$ and $S_q(z)$ are even and odd, respectively, their roots are symmetric about the origin.
\noindent Throughout this paper, $S_1$ and $C_1$ denote the smallest positive zeros of the functions $S_q(z)$ and $C_q(z)$, respectively.
The $q$-analogs of the hyperbolic functions $\sinh z$ and $\cosh z$ are defined for $z\in \mathbb{C}$ by
\begin{equation}\label{sh+cosh}
\begin{gathered}
Sinh_q(z):= -i S_q(iz)=\dfrac{exp_q(z)-exp_q(-z)}{2} \\
Cosh_q(z):= C_q(iz) = \dfrac{exp_q(z)+exp_q(-z)}{2}.\end{gathered}\end{equation}
\vskip 1cm
\section{{\bf $q$-Bernoulli polynomials generated by the third Jackson $q$-Bessel function}} \label{q-Bernoulli and Euler}
In this section, we use the third $q$-exponential function $exp_q(x)$ to define a $q$-analog of the Bernoulli polynomials which are suitable for our approach.
\begin{defn}\label{q-Bernoulli} The $q$-Bernoulli polynomials $\widetilde{B}_n(z;q)$ are defined by the generating function
\begin{equation}\label{GF:Bernoulli-poly} \dfrac{w\,exp_q(zw)\, exp_q(\frac{-w}{2})}{exp_q(\frac{w}{2})-exp_q(\frac{-w}{2})}=\sum_{n=0}^{\infty}\widetilde{B}_n(z;q) \frac{w^n}{[n]_q!},\end{equation}
and $\widetilde{\beta}_n(q):= \widetilde{B}_n(0;q)$ are the $q$-Bernoulli numbers. Therefore,
\begin{equation}\label{qBernoulli-numbers}
\dfrac{w\, exp_q(-w/2)}{exp_q(\frac{w}{2})-exp_q(\frac{-w}{2})} =\sum_{n=0}^{\infty}\frac{\widetilde{\beta}_n(q)}{[n]_q!}w^n.\end{equation} \end{defn}
\begin{rem}
$\widetilde{B}_{2n+1}(\frac{1}{2};q)=0$. Indeed, for $z=\frac{1}{2}$, the left hand side of Equation \eqref{GF:Bernoulli-poly} is an even function. Therefore, the odd powers of $w$ on the left hand side vanish. Also, note that $$\widetilde{B}_0(z;q)= \dfrac{w\,exp_q(zw)\, exp_q(\frac{-w}{2})}{exp_q(\frac{w}{2})-exp_q(\frac{-w}{2})}\Big|_{w=0}=1.$$
\end{rem}
\begin{prop}\label{prop:1-1}
The $q$-Bernoulli polynomials $\widetilde{B}_n(z;q)$ are given recursively by $\widetilde{B}_0(z;q)=1$, and for $n\in\mathbb{N}$ \[\widetilde{B}_n(z;q)=\sum_{k=0}^{n}\qbinom{n}{k} q^{\frac{k(k-1)}{4}}\, \widetilde{\beta}_{n-k}(q) z^k.\] \end{prop}
\begin{proof}
By substituting \eqref{qBernoulli-numbers} into \eqref{GF:Bernoulli-poly} and using the series representation of $exp_q(wz)$ we obtain
\begin{equation*}
\begin{split}
\dfrac{w\,exp_q(zw)\, exp_q(\frac{-w}{2})}{exp_q(\frac{w}{2})-exp_q(\frac{-w}{2})} &=\sum_{m=0}^{\infty}\widetilde{\beta}_m(q) \frac{w^{m}}{[m]_q!}\,\sum_{j=0}^{\infty} \frac{q^{\frac{j(j-1)}{4}}}{[j]_q!} (wz)^j \\ &= \sum_{n=0}^{\infty} \frac{w^{n}}{[n]_q!}
\sum_{k=0}^{n}\qbinom{n}{k} q^{\frac{k(k-1)}{4}}\, \widetilde{\beta}_{n-k}(q) z^k.
\end{split}
\end{equation*}
This implies
\begin{equation} \label{Eq:2}
\sum_{n=0}^{\infty} \frac{w^{n}}{[n]_q!}\,
\sum_{k=0}^{n}\qbinom{n}{k} q^{\frac{k(k-1)}{4}}\, \widetilde{\beta}_{n-k}(q) z^k =\sum_{n=0}^{\infty} \widetilde{B}_n(z;q)\frac{w^n}{[n]_q!}. \end{equation}
Comparing the coefficient of $\frac{w^n}{[n]_q!}$, we obtain the required result.
\end{proof}
\begin{prop}\label{Ber.q and 1/q}
For $n\in\mathbb{N}$ and $z\in \mathbb{C}$, we have
\begin{eqnarray} \label{Equiv-Rel:1}\widetilde{B}_n(z;q)&=& q^{\frac{n(n-1)}{2}}\widetilde{B}_n(z;1/q),\\
\label{Equiv-Rel:2}
\widetilde{\beta}_n(q)&=&q^{\frac{n(n-1)}{2}}\widetilde{\beta}_n(1/q).
\end{eqnarray}
\end{prop}
\begin{proof}
Replacing $q$ by $1/q$ in the generating function \eqref{GF:Bernoulli-poly}, and then using Equation \eqref{exp_q&exp_{1/q}} together with the identity $[n]_{1/q}!=q^{\frac{n(1-n)}{2}}[n]_q!$, we obtain
\[\sum_{n=0}^{\infty} q^{\frac{n(n-1)}{2}}\widetilde{B}_n(z;1/q)
\frac{w^n}{[n]_q!}=\sum_{n=0}^{\infty}\widetilde{B}_n(z;q)\frac{w^n}{[n]_q!}.\]
Equating the coefficients of $w^n$ yields \eqref{Equiv-Rel:1} and substituting with $z=0$ in \eqref{Equiv-Rel:1} yields directly \eqref{Equiv-Rel:2}.
\end{proof}
\vskip5mm
\begin{thm}\label{Eq.q-D(TH)}
The $q$-Bernoulli polynomials satisfy the $q$-difference equation
\begin{equation}\label{Eq.q-D} \frac{\delta_q \widetilde{B}_n(z;q)}{\delta_q z}=[n]_q\, \widetilde{B}_{n-1}(z;q)\quad
(n\in\mathbb{N}).\end{equation}
\end{thm}
\begin{proof}
Applying the $q$-difference operator \eqref{delta} with respect to $z$ to both sides of \eqref{GF:Bernoulli-poly} and using Equation \eqref{delta E}, we obtain
\begin{equation*}
\dfrac{w^2 exp_q(zw)exp_q(\frac{-w}{2})}{exp_q(\frac{w}{2})-exp_q(\frac{-w}{2})}=\sum_{n=1}^{\infty}
\frac{\delta_q \widetilde{B}_n(z;q)}{\delta_q z} \frac{w^{n}}{[n]_q!}.
\end{equation*}
This implies
\begin{equation}\label{11} \sum_{n=1}^{\infty} \frac{\delta_q \widetilde{B}_n(z;q)}{\delta_q z} \,\frac{w^{n}}{[n]_q!}
= \sum_{n=1}^{\infty} \widetilde{B}_{n-1}(z;q)\,\frac{w^{n}}{[n-1]_q!}.\end{equation}
Equating the corresponding $n$th power of $w$ in the two series of \eqref{11}, we obtain
the required result.
\end{proof}
\begin{cor}
For $ k\geq 2$, we have
\begin{equation*}\
\frac{\delta^{2}_q \widetilde{B}_{k}(z;q)}{\delta_q z^{2}} = [k]_q[k-1]_q \widetilde{B}_{k-2}(z;q).
\end{equation*}
\end{cor}
\begin{proof}
It follows directly by applying Theorem \ref{Eq.q-D(TH)} twice.
\end{proof}
\begin{prop}
The $q$-Bernoulli numbers of odd index satisfy
\begin{equation}\label{odd-Bernoulli-numbers} \widetilde{\beta}_1(q)=-\frac{1}{2},
\quad \widetilde{\beta}_{2n+1}(q)=0; \quad n\in\mathbb{N}.\end{equation}
\end{prop}
\begin{proof}
Observe that,
\begin{equation*}
\dfrac{w\, exp_q(\frac{-w}{2})}{exp_q(\frac{w}{2})-exp_q(\frac{-w}{2})} =-w + \dfrac{w\, exp_q(\frac{w}{2})}{exp_q(\frac{w}{2})-exp_q(\frac{-w}{2})}.\end{equation*}
Since replacing $w$ by $-w$ in \eqref{qBernoulli-numbers} shows that the last term equals $\displaystyle\sum_{n=0}^{\infty}\widetilde{\beta}_n(q)\frac{(-w)^n}{[n]_q!}$, we can write Equation \eqref{qBernoulli-numbers} in the form
\begin{equation*}
\sum_{n=0}^{\infty}\frac{\widetilde{\beta}_n(q)}{[n]_q!} w^n= -w+ \sum_{n=0}^{\infty}\frac{\widetilde{\beta}_n(q)}{[n]_q!} (-w)^n.
\end{equation*}
This implies
\begin{equation*}
\sum_{n=0}^{\infty} (1-(-1)^n)\, \widetilde{\beta}_n(q)\,\frac{w^n}{[n]_q!}= -w.
\end{equation*}
Therefore, $\widetilde{\beta}_1(q)=-\frac{1}{2}$ and $\widetilde{\beta}_{2n+1}(q)=0$ for every
$n\in\mathbb{N}$.
\end{proof}
\vskip5mm
\begin{thm}\label{sum:B}
For $z\in\mathbb{C}$ and $n\in\mathbb{N}$, we have the identity
\begin{equation*}\frac{q^{\frac{n(n-1)}{4}}}{[n]_q!}\,(\frac{-1}{2})^{n} (2q^{\frac{1-n}{2}}\, z;q)_n=\sum_{k=0}^{[\frac{n}{2}]}(\frac{1}{2})^{2k} \frac{q^{\frac{k(2k+1)}{2}}}{[2k+1]_q!}\, \frac{\widetilde{B}_{n-2k}(z;q)}{[n-2k]_q!}.
\end{equation*}
\end{thm}
\begin{proof}
By using \eqref{GF:Bernoulli-poly}, we have
\begin{equation}\label{Eq.1} w\,exp_q(zw)\, exp_q(\frac{-w}{2})= \Big[exp_q(\frac{w}{2})-exp_q(\frac{-w}{2})\Big]\sum_{n=0}^{\infty}\widetilde{B}_n(z;q) \frac{w^n}{[n]_q!}.\end{equation}
Using the series representation of $exp_q(zw)$, we can prove that
\begin{equation}\label{Eq.2}
exp_q(zw)\, exp_q(\frac{-w}{2})= \sum_{n=0}^{\infty} q^{\frac{n(n-1)}{4}}\frac{w^n}{[n]_q!}(\frac{-1}{2})^{n}(2q^{\frac{1-n}{2}} z;q)_n. \end{equation}
Substituting \eqref{Eq.2} into \eqref{Eq.1} and using \eqref{Def EX}, we obtain
\begin{equation*} \begin{split}
&\sum_{n=0}^{\infty} q^{\frac{n(n-1)}{4}} \frac{w^n}{[n]_q!}(\frac{-1}{2})^{n}(2q^{\frac{1-n}{2}} z;q)_n \\ &= \sum_{n=0}^{\infty} \frac{q^{\frac{n(2n+1)}{2}}}{[2n+1]_q!} (\frac{w}{2})^{2n}\sum_{n=0}^{\infty}\widetilde{B}_n(z;q) \frac{w^n}{[n]_q!} \\ &= \sum_{n=0}^{\infty} w^n \sum_{k=0}^{[\frac{n}{2}]}(\frac{1}{2})^{2k} \frac{q^{\frac{k(2k+1)}{2}}}{[2k+1]_q!}\, \frac{\widetilde{B}_{n-2k}(z;q)}{[n-2k]_q!}.
\end{split}\end{equation*}
Comparing the coefficient of $w^n$ we obtain the required result.
\end{proof}
\noindent Note that if we substitute $z=0$ in the identity of Theorem \ref{sum:B} and replace $n$ by $2n$, we get the following recurrence relation:
\begin{cor}
For $n\in\mathbb{N}$, we have
\begin{equation*} \frac{q^{\frac{n(2n-1)}{2}}}{[2n]_q!}\,(\frac{1}{2})^{2n}=\sum_{k=0}^{n}(\frac{1}{2})^{2k} \frac{q^{\frac{k(2k+1)}{2}}}{[2k+1]_q!}\, \frac{\widetilde{\beta}_{2n-2k}(q)}{[2n-2k]_q!}.
\end{equation*}\end{cor}
\noindent As a consequence of the above result, we have \begin{equation*}\begin{gathered} \widetilde{\beta}_{0}(q)=1, \quad \widetilde{\beta}_{1}(q)= -\frac{1}{2}, \quad \widetilde{\beta}_{2}(q)= \dfrac{(1-q^3)q^{\frac{1}{2}}-(1-q)\,q^{\frac{3}{2}}}{4(1-q^3)},\\
\widetilde{\beta}_{3}(q)=0, \quad \widetilde{\beta}_{4}(q)=\dfrac{q^3}{16}-\dfrac{q^2(1+q^2)^2(1-q)}{16(1-q^3)}-\dfrac{q^5(1-q)}{16(1-q^5)}.
\end{gathered}\end{equation*}
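The recurrence of the corollary above can be solved numerically. The sketch below (helper names are ours) does so for the even $q$-Bernoulli numbers and checks them against the closed forms $\widetilde{\beta}_2(q)=\frac{(1-q^3)q^{1/2}-(1-q)q^{3/2}}{4(1-q^3)}$ and $\widetilde{\beta}_4(q)=\frac{q^3}{16}-\frac{q^2(1+q^2)^2(1-q)}{16(1-q^3)}-\frac{q^5(1-q)}{16(1-q^5)}$, as well as the classical limits $\frac{1}{6}$ and $-\frac{1}{30}$ as $q\to 1$:

```python
def q_number(n, q):
    return (1.0 - q**n) / (1.0 - q)

def q_factorial(n, q):
    out = 1.0
    for k in range(1, n + 1):
        out *= q_number(k, q)
    return out

def even_q_bernoulli(max_index, q):
    """Solve the corollary's recurrence for beta_0, beta_2, ..., beta_{max_index}."""
    beta = {0: 1.0}
    for n in range(1, max_index // 2 + 1):
        lhs = q**(n * (2*n - 1) / 2.0) * 0.25**n / q_factorial(2*n, q)
        tail = sum(0.25**k * q**(k * (2*k + 1) / 2.0) / q_factorial(2*k + 1, q)
                   * beta[2*(n - k)] / q_factorial(2*(n - k), q)
                   for k in range(1, n + 1))
        beta[2*n] = (lhs - tail) * q_factorial(2*n, q)
    return beta

q = 0.5
beta = even_q_bernoulli(4, q)
beta2_closed = ((1 - q**3) * q**0.5 - (1 - q) * q**1.5) / (4 * (1 - q**3))
beta4_closed = (q**3 / 16.0
                - q**2 * (1 + q**2)**2 * (1 - q) / (16 * (1 - q**3))
                - q**5 * (1 - q) / (16 * (1 - q**5)))
err2 = abs(beta[2] - beta2_closed)
err4 = abs(beta[4] - beta4_closed)
beta_lim = even_q_bernoulli(4, 0.999999)   # classical Bernoulli numbers in the limit
```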
\vskip5mm
\noindent In the following result we show that the function $Coth_q(z):=Cosh_q(z)/Sinh_q(z)$ admits an expansion containing only odd powers of $z$.
\begin{prop}\label{coth_q}
Let $w$ be a complex number such that $0 <|\frac{w}{2}|< S_1$. Then
\begin{equation*}Coth_q(\frac{w}{2})= (\frac{w}{2})^{-1}+ \sum_{n=1}^{\infty}2\widetilde{\beta}_{2n}(q) \frac{w^{2n-1}}{[2n]_q!}.
\end{equation*}\end{prop}
\begin{proof}
By using Equation \eqref{qBernoulli-numbers} and the identity
\begin{equation*}Coth_q(\frac{w}{2}) = \dfrac{exp_q(\frac{w}{2})+ exp_q(\frac{-w}{2})}{exp_q(\frac{w}{2})-exp_q(\frac{-w}{2})}\, ,\end{equation*}
we obtain \begin{equation*} \begin{split}
\sum_{n=0}^{\infty}\widetilde{\beta}_{n}(q) \frac{w^{n}}{[n]_q!} &= w\,Coth_q(\frac{w}{2})- \dfrac{w\,exp_q(\frac{w}{2})}{exp_q(\frac{w}{2})-exp_q(\frac{-w}{2})}\\
&= w\,Coth_q(\frac{w}{2})- \sum_{n=0}^{\infty}\widetilde{\beta}_{n}(q) \frac{(-w)^{n}}{[n]_q!}.
\end{split}\end{equation*} Therefore, $\displaystyle w\,Coth_q(\frac{w}{2})= 2+ \sum_{n=1}^{\infty}2\widetilde{\beta}_{2n}(q) \frac{w^{2n}}{[2n]_q!}$, and the result follows upon dividing both sides by $w$.\end{proof}
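Proposition \ref{coth_q} can be tested numerically by comparing a partial sum of the expansion against $Cosh_q/Sinh_q$ computed directly from \eqref{sh+cosh}; a sketch with $q=\frac12$, $w=0.4$ (helper names are ours):

```python
def q_number(n, q):
    return (1.0 - q**n) / (1.0 - q)

def q_factorial(n, q):
    out = 1.0
    for k in range(1, n + 1):
        out *= q_number(k, q)
    return out

def exp_q(z, q, terms=60):
    total, term = 1.0, 1.0
    for n in range(1, terms):
        term *= q ** ((n - 1) / 2.0) * z / q_number(n, q)
        total += term
    return total

def coth_q(z, q):
    # Coth_q(z) = Cosh_q(z)/Sinh_q(z), both expressed through exp_q
    return (exp_q(z, q) + exp_q(-z, q)) / (exp_q(z, q) - exp_q(-z, q))

def even_q_bernoulli(max_index, q):
    # q-Bernoulli numbers of even index, from the recurrence of Section 2
    beta = {0: 1.0}
    for n in range(1, max_index // 2 + 1):
        lhs = q**(n * (2*n - 1) / 2.0) * 0.25**n / q_factorial(2*n, q)
        tail = sum(0.25**k * q**(k * (2*k + 1) / 2.0) / q_factorial(2*k + 1, q)
                   * beta[2*(n - k)] / q_factorial(2*(n - k), q)
                   for k in range(1, n + 1))
        beta[2*n] = (lhs - tail) * q_factorial(2*n, q)
    return beta

q, w = 0.5, 0.4
beta = even_q_bernoulli(10, q)
partial = 2.0 / w + sum(2.0 * beta[2*n] * w**(2*n - 1) / q_factorial(2*n, q)
                        for n in range(1, 6))
err = abs(coth_q(w / 2.0, q) - partial)
```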
\vskip5mm
We define the polynomials $\widetilde{A}_n(z;q)$ by the generating function
\begin{equation}\label{A_n(z;q)}
\dfrac{w\,exp_q(zw)}{exp_q(\frac{w}{2})-exp_q(\frac{-w}{2})}=\sum_{n=0}^{\infty}\widetilde{A}_n(z;q) \frac{w^n}{[n]_q!}.\end{equation}
\begin{prop}\label{Prop:B&A}
For $n\in\mathbb{N}$, the $q$-Bernoulli polynomials $\widetilde{B}_n(z;q)$ can be represented in terms of $\widetilde{A}_{n}(z;q)$ as
\begin{equation}\label{prop:1-2}
\widetilde{B}_n(z;q)=\sum_{k=0}^{n}\qbinom{n}{k} \, (-\frac{1}{2})^k\, q^{\frac{k(k-1)}{4}}\, \widetilde{A}_{n-k}(z;q).
\end{equation} \end{prop}
\begin{proof}
From \eqref{A_n(z;q)}, \eqref{Def EX} and Definition \ref{q-Bernoulli} we get
\begin{equation*}\begin{split}
\sum_{n=0}^{\infty}\widetilde{B}_n(z;q) \frac{w^n}{[n]_q!} &= \dfrac{w\,exp_q(zw)\, exp_q(-w/2)}{exp_q(w/2)-exp_q(-w/2)} \\
&=\sum_{m=0}^{\infty}\widetilde{A}_m(z;q) \frac{w^{m}}{[m]_q!}\,\sum_{j=0}^{\infty} \frac{q^{\frac{j(j-1)}{4}}}{[j]_q!} (\frac{-w}{2})^j \\
&= \sum_{n=0}^{\infty} \frac{w^{n}}{[n]_q!}\,
\sum_{k=0}^{n}\qbinom{n}{k} (-\frac{1}{2})^k q^{\frac{k(k-1)}{4}}\, \widetilde{A}_{n-k}(z;q).
\end{split}\end{equation*}
Comparing the coefficient of $\frac{w^n}{[n]_q!}$ we obtain the required result. \end{proof}
\begin{thm}\label{inverse of A_n}
Let $z\in \mathbb{C}$ and $n\in\mathbb{N}_0$. Then, the polynomials $\widetilde{A}_n(z;q)$ can be represented in terms of the $q$-Bernoulli polynomials $\widetilde{B}_{n}(z;q)$ as \begin{equation}\label{Thm. An}
\widetilde{A}_{n}(z;q)= [n]_q!\sum_{j=0}^{n} (-\frac{1}{2})^{j}\, \frac{c_j}{[n-j]_q!}\, \widetilde{B}_{n-j}(z;q),\end{equation}
where $c_0=1$ and, for $j\in\mathbb{N}$,
\begin{equation}\label{cons.}
c_j= \sum_{k=1}^{j} (-1)^{k}\, \sum_{s_1+s_2+\ldots+s_k=j\atop s_i>0\,(i=1,\ldots,k) } \dfrac{ q^{ \sum_{i=1}^k s_i(s_i-1)/4 }}{[s_1]_q! [s_2]_q!\ldots [s_{k}]_q!}
\end{equation}
are the coefficients of Lemma \ref{inverse of exp.}.
\end{thm}
\begin{proof}
We can write the generating function of the $q$-polynomials $\widetilde{A}_n(z;q)$ as
\begin{equation*}\dfrac{w\,exp_q(zw)}{exp_q(w/2)-exp_q(-w/2)}= \dfrac{1}{exp_q(-w/2)}\,\Big[ \dfrac{w\, exp_q(zw)\, exp_q(-w/2)}{exp_q(w/2)-exp_q(-w/2)}\Big].\end{equation*}
By Lemma \ref{inverse of exp.} with $z$ replaced by $-\frac{w}{2}$, we have $\displaystyle\dfrac{1}{exp_q(-w/2)}= \sum_{j=0}^{\infty} (-\frac{1}{2})^{j} c_j\, w^j$ with $c_0=1$. Hence, by \eqref{GF:Bernoulli-poly},
\begin{eqnarray*}
\sum_{n=0}^{\infty} \widetilde{A}_n(z;q)\frac{w^n}{[n]_q!} &=& \Big(\sum_{j=0}^{\infty} (-\frac{1}{2})^{j} c_j\, w^j\Big)\,\Big( \sum_{m=0}^{\infty} \widetilde{B}_{m}(z;q) \frac{w^m}{[m]_q!}\Big) \\ &=&
\sum_{n=0}^{\infty} w^n\, \sum_{j=0}^{n} (-\frac{1}{2})^{j}\, c_j\, \frac{\widetilde{B}_{n-j}(z;q)}{[n-j]_q!}.
\end{eqnarray*}
Comparing the coefficients of $w^n$, we obtain the required result.
\end{proof}
\begin{cor}\label{Cor.2}
For $n\in\mathbb{N}_0$ and $z\in \mathbb{C}$, the power series of the polynomial $\widetilde{A}_{n}(z;q)$ takes the form
\begin{equation*}
\widetilde{A}_{n}(z;q)= \sum_{m=0}^{n} \tilde{c}_m(n) \, \frac{z^m}{[m]_q!},
\end{equation*} where
\begin{equation} \tilde{c}_m(n)= [n]_q!\, q^{\frac{m(m-1)}{4}}\sum_{j=0}^{n-m} (-\frac{1}{2})^{j}\,\frac{c_j}{[n-j-m]_q!}\, \widetilde{\beta}_{n-j-m}(q).\end{equation}
\end{cor}
\begin{proof}
From Theorem \ref{inverse of A_n} and Proposition \ref{prop:1-1}, we get
\begin{equation*}
\begin{split}
\widetilde{A}_{n}(z;q) &= [n]_q!\sum_{j=0}^{n} (-\frac{1}{2})^{j}\, \frac{c_j}{[n-j]_q!}\, \widetilde{B}_{n-j}(z;q) \\
&=[n]_q!\sum_{j=0}^{n} (-\frac{1}{2})^{j}\, \frac{c_j}{[n-j]_q!} \sum_{m=0}^{n-j}\qbinom{n-j}{m} q^{\frac{m(m-1)}{4}}\, \widetilde{\beta}_{n-j-m}(q)\, z^m \\
&= \sum_{m=0}^{n} \Big([n]_q!\, q^{\frac{m(m-1)}{4}}\sum_{j=0}^{n-m} (-\frac{1}{2})^{j}\,\frac{c_j}{[n-j-m]_q!}\, \widetilde{\beta}_{n-j-m}(q)\Big)\, \frac{z^m}{[m]_q!}.
\end{split}
\end{equation*}
\end{proof}
\begin{cor}\label{Sinh_q}
Let $w$ be a complex number such that $0<|w|< S_1$. Then
\begin{equation*}\frac{1}{Sinh_q(w)}= \frac{1}{w}\,\sum_{n=0}^{\infty} d_n\, (2w)^{n},\end{equation*}
where $d_0=1$, $\displaystyle d_n= \sum_{j=0}^{n} (-\frac{1}{2})^{j}\, \frac{c_j}{[n-j]_q!}\, \widetilde{\beta}_{n-j}(q)$, and $c_j$ are the constants defined in \eqref{cons.}.
\end{cor}
\begin{proof}
Setting $z=0$ in \eqref{A_n(z;q)} and replacing $w$ by $2w$ gives $\displaystyle\dfrac{w}{Sinh_q(w)}=\sum_{n=0}^{\infty}\widetilde{A}_n(0;q)\,\frac{(2w)^n}{[n]_q!}$, and the result follows from \eqref{Thm. An}, since $d_n=\widetilde{A}_n(0;q)/[n]_q!$.
\end{proof}
\begin{rem}
According to the definition \eqref{qBernoulli-numbers} of the $q$-Bernoulli numbers, we have $\widetilde{\beta}_{n}(q)= \widetilde{A}_{n}(-\frac{1}{2};q)$. That is, for $n\in \mathbb{N}_0$ we have
\begin{equation*}
\widetilde{\beta}_{n}(q)= [n]_q!\sum_{j=0}^{n} (-\frac{1}{2})^{j}\, \frac{c_j}{[n-j]_q!}\, \widetilde{B}_{n-j}(-\frac{1}{2};q),\end{equation*}
where $c_j$ are the constants defined in \eqref{cons.}.
\end{rem}
\section{{\bf $q$-Euler polynomials generated by the third Jackson $q$-Bessel function}} \label{q-Euler}
\begin{defn} The $q$-Euler polynomials $\widetilde{E}_n(z;q)$ are defined by the generating function
\begin{equation}\label{Defn.Euler}
\dfrac{2\,exp_q(zw)\, exp_q(\frac{-w}{2})}{exp_q(\frac{w}{2})+exp_q(\frac{-w}{2})}=\sum_{n=0}^{\infty}\widetilde{E}_n(z;q)
\frac{w^n}{[n]_q!},\end{equation}
and the $q$-Euler numbers $\widetilde{e}_n(q)$ are defined by the generating function
\begin{equation}\label{qEuler-numbers}
\dfrac{2}{exp_q(w)+exp_q(-w)} =\sum_{n=0}^{\infty}\widetilde{e}_n(q)\frac{w^n}{[n]_q!}.\end{equation} \end{defn}
\noindent Clearly, $\widetilde{e}_{2n+1}(q)=0$ for all $n\in\mathbb{N}_0$.
Consequently,
\begin{equation}\label{Cosine}
\dfrac{1}{C_q (z)}=\sum_{n=0}^{\infty}(-1)^n
\dfrac{\widetilde{e}_{2n}(q)}{[2n]_q!}\, z^{2n}; \quad |z|<C_1,\end{equation}
where $C_1$ is the smallest positive zero of $C_q (z)$.
\noindent We write $\widetilde{E}_n$ for the $q$-Euler polynomial evaluated at zero, i.e.,
\begin{equation*} \widetilde{E}_n := \widetilde{E}_n(0;q), \quad (n\in \mathbb{N}_0).\end{equation*}
\begin{prop}
The $q$-Euler polynomials $\widetilde{E}_n(z;q)$ are given by
\[\widetilde{E}_0(z;q)=1,\]
and for $n\in\mathbb{N}$
\[\widetilde{E}_n(z;q)=\sum_{k=0}^{n}\qbinom{n}{k} q^{\frac{k(k-1)}{4}}
\widetilde{E}_{n-k}\, z^k.\]
\end{prop}
\begin{proof}
The proof is similar to the proof of Proposition~\ref{prop:1-1} and is omitted.
\end{proof}
\begin{prop}
For $n\in\mathbb{N}_0$, we have
\begin{equation}\label{E(even)} \widetilde{E}_{n}(\frac{1}{2};q)=(\frac{1}{2})^{n} \sum_{k=0}^{n}\,\qbinom{n}{k}\, (-1)^k\,q^{\frac{k(k-1)}{4}}
(q^{\frac{1-k}{2}};q)_k\, \widetilde{e}_{n-k}(q).\end{equation}
\end{prop}
\begin{proof}
Since
\begin{equation*}
exp_q(\frac{w}{2})\,exp_q(\frac{-w}{2})= \sum_{n=0}^{\infty} (\frac{-1}{2})^n q^{\frac{n(n-1)}{4}}\, (q^{\frac{1-n}{2}};q)_n \frac{w^n}{[n]_q!},
\end{equation*}
then, by using \eqref{Defn.Euler} and \eqref{qEuler-numbers} we get
\begin{equation*} \sum_{n=0}^{\infty} \widetilde{E}_{n}(\frac{1}{2};q)\, \frac{w^n}{[n]_q!}= \sum_{n=0}^{\infty} \frac{w^n}{[n]_q!} \sum_{k=0}^{n}(\frac{1}{2})^{n} \,(-1)^k\qbinom{n}{k}\,q^{\frac{k(k-1)}{4}}
(q^{\frac{1-k}{2}};q)_k\, \widetilde{e}_{n-k}(q), \end{equation*}
which implies the result.
\end{proof}
Note that if $z=\frac{1}{2}$, then the left hand side of \eqref{Defn.Euler} is an even function. Hence,
\begin{equation}\widetilde{E}_{2n+1}(\frac{1}{2};q)=0.\end{equation}
\begin{prop}
For $n\in\mathbb{N}_0$, we have $\displaystyle \widetilde{E}_{2n}=\delta_{n,0}$\,,
where $\delta_{n,0}$ is the Kronecker delta.
\end{prop}
\begin{proof}
Observe that
\begin{equation}
\dfrac{2\,exp_q(\frac{-w}{2})}{exp_q(\frac{w}{2})+exp_q(\frac{-w}{2})}-1=
\dfrac{exp_q(\frac{-w}{2})-exp_q(\frac{w}{2})}{exp_q(\frac{w}{2})+exp_q(\frac{-w}{2})}.
\end{equation}
So, we obtain
\begin{equation}\label{Prop 3.1}
\sum_{n=0}^{\infty}\frac{\widetilde{E}_n}{[n]_q!} w^n
= 1+ \dfrac{exp_q(\frac{-w}{2})-exp_q(\frac{w}{2})}{exp_q(\frac{w}{2})+exp_q(\frac{-w}{2})}.\end{equation}
The second term on the right hand side of \eqref{Prop 3.1} is an odd function of $w$; therefore, the coefficients of the positive even powers of $w$ on the left hand side vanish.
Hence $\widetilde{E}_0=1$ and $\widetilde{E}_{2n}=0$ for every $n\in\mathbb{N}$.
\end{proof}
The following results can be proved in the same way as Proposition \ref{Ber.q and 1/q}, Theorem \ref{Eq.q-D(TH)} and Theorem \ref{sum:B}.
\begin{prop} For $n\in\mathbb{N}$ and $z\in \mathbb{C}$, we have
\begin{enumerate}
\item $\displaystyle \widetilde{E}_n(z;q)= q^{\frac{n(n-1)}{2}}\widetilde{E}_n(z;1/q)$;
\item $\displaystyle \frac{\delta_q \widetilde{E}_n(z;q)}{\delta_q z}=[n]_q\, \widetilde{E}_{n-1}(z;q)$.
\end{enumerate}
\end{prop}
\begin{thm}\label{Euler-Relation}
For $z\in\mathbb{C}$, we have the identities
\begin{equation*}\begin{split}
\frac{q^{\frac{n(n-1)}{4}}}{[n]_q!}\,(\frac{-1}{2})^{n} (2q^{\frac{1-n}{2}}\, z;q)_n &= \sum_{k=0}^{[\frac{n}{2}]}(\frac{1}{2})^{2k} \frac{q^{\frac{k(2k-1)}{2}}}{[2k]_q!}\, \frac{\widetilde{E}_{n-2k}(z;q)}{[n-2k]_q!};\\
\frac{q^{\frac{n(n-1)}{4}}}{[n]_q!}\,(\frac{-1}{2})^{n} &= \sum_{k=0}^{[\frac{n}{2}]}(\frac{1}{2})^{2k} \frac{q^{\frac{k(2k-1)}{2}}}{[2k]_q!}\, \frac{\widetilde{E}_{n-2k}}{[n-2k]_q!}.\end{split}\end{equation*}
\end{thm}
As a consequence of Theorem \ref{Euler-Relation}, we get
\begin{equation*}\begin{gathered} \widetilde{E}_{0}=1, \quad \widetilde{E}_{1}= -\frac{1}{2}, \quad \widetilde{E}_{2}=0, \quad
\widetilde{E}_{3}= \dfrac{q^{\frac{1}{2}}(1+q^2)}{8},\\
\widetilde{E}_{5}=\dfrac{q^3\,[5]_q-q(1+q^2)^2\,[5]_q-q^5}{32}.
\end{gathered}\end{equation*}
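The second identity of Theorem \ref{Euler-Relation} determines the numbers $\widetilde{E}_n$ recursively. The sketch below (our helper names; $q=\frac12$) solves it and checks the simplified closed forms $\widetilde{E}_3=\frac{q^{1/2}(1+q^2)}{8}$ and $\widetilde{E}_5=\frac{q^3[5]_q-q(1+q^2)^2[5]_q-q^5}{32}$, together with $\widetilde{E}_1=-\frac12$ and $\widetilde{E}_{2}=\widetilde{E}_{4}=0$:

```python
def q_number(n, q):
    return (1.0 - q**n) / (1.0 - q)

def q_factorial(n, q):
    out = 1.0
    for k in range(1, n + 1):
        out *= q_number(k, q)
    return out

def q_euler_numbers(max_n, q):
    """Solve q^{n(n-1)/4}(-1/2)^n/[n]_q! =
    sum_k (1/4)^k q^{k(2k-1)/2}/[2k]_q! * E_{n-2k}/[n-2k]_q!  for E_n."""
    E = {0: 1.0}
    for n in range(1, max_n + 1):
        lhs = q**(n * (n - 1) / 4.0) * (-0.5)**n / q_factorial(n, q)
        tail = sum(0.25**k * q**(k * (2*k - 1) / 2.0) / q_factorial(2*k, q)
                   * E[n - 2*k] / q_factorial(n - 2*k, q)
                   for k in range(1, n // 2 + 1))
        E[n] = (lhs - tail) * q_factorial(n, q)
    return E

q = 0.5
E = q_euler_numbers(5, q)
E3_closed = q**0.5 * (1 + q**2) / 8.0
E5_closed = (q**3 * q_number(5, q) - q * (1 + q**2)**2 * q_number(5, q) - q**5) / 32.0
```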
\begin{prop}
We have the identity
\begin{equation}\label{tansh}Tanh_q(\frac{w}{2})=-\sum_{n=0}^{\infty} \widetilde{E}_{2n+1} \frac{w^{2n+1}}{[2n+1]_q!}, \quad |\frac{w}{2}|< C_1.
\end{equation}\end{prop}
\begin{proof}
The proof is similar to the proof of Proposition~\ref{coth_q} and is omitted.
\end{proof}
\vskip5mm
Recall that the $q$-tangent and $q$-secant numbers are defined by the series expansions of $Tan_q z$ and $Sec_q z$:
\begin{equation}\label{tan:exp}\begin{split}Tan_qz &=\sum_{n=0}^{\infty}T_{2n+1}(q)\frac{z^{2n+1}}{[2n+1]_q!},\\ Sec_qz &=\frac{1}{C_q(z)}=\sum_{n=0}^{\infty}S_{2n}(q)\frac{z^{2n}}{[2n]_q!},\end{split}\end{equation}
(for more details see~\cite{Foata,Huber}).
\noindent Let $S_q(z)$ and $C_q(z)$ be the functions defined in \eqref{S&C}. Then, from \eqref{Cosine} and \eqref{tansh} we get
\[T_{2n+1}(q)=(-1)^{n+1}\, 2^{2n+1}\,\widetilde{E}_{2n+1}, \quad S_{2n}(q)=(-1)^n \widetilde{e}_{2n}(q).\]
\begin{thm}
For $n\in\mathbb{N}_0$
\begin{equation}\label{Convol.}\sum_{k=0}^{n}(-1)^k\, 2^{2k} \frac{\widetilde{\beta}_{2k}(q)}{[2k]_q!}\frac{T_{2n-2k+1}(q)}{[2n-2k+1]_q!}=\delta_{0,n}\,,\end{equation}
where $\delta_{0,n}$ is the Kronecker delta.
\end{thm}
\begin{proof}
From Proposition \ref{coth_q} and the relation $Cot_q(z)=-i\,Coth_q(-iz)$, we have
\begin{equation}\label{zcot:exp} z\, Cot_q(z)=\sum_{n=0}^{\infty}(-1)^n\, 2^{2n}\,\widetilde{\beta}_{2n}(q)\frac{z^{2n}}{[2n]_q!}.\end{equation} Observe that
$z\, Tan_q (z)\, Cot_q(z)= z$. So, by using \eqref{tan:exp} and \eqref{zcot:exp} we obtain
\[z=\sum_{n=0}^{\infty} (-1)^n\, 2^{2n}\,\widetilde{\beta}_{2n}(q)\frac{z^{2n}}{[2n]_q!}\,\sum_{m=0}^{\infty} T_{2m+1}(q)\,\frac{z^{2m+1}}{[2m+1]_q!}.\]
Therefore,
\[\sum_{n=0}^{\infty} z^{2n}\sum_{k=0}^{n}(-1)^k 2^{2k} \frac{\widetilde{\beta}_{2k}(q)}{[2k]_q!}\frac{T_{2n-2k+1}(q)}{[2n-2k+1]_q!}=1.\]
Comparing the coefficient of $z^{2n}$, we obtain the desired result.
\end{proof}
\begin{cor}
Let $n\in\mathbb{N}_0$. Then, the $q$-tangent numbers $T_{2n+1}(q)$ are positive numbers.
\end{cor}
\begin{proof}
From Equation \eqref{Convol.}, we get
\[\frac{T_{2n+1}(q)}{[2n+1]_q!}=\sum_{k=1}^{n}(-1)^{k-1}\, 2^{2k}\, \frac{\widetilde{\beta}_{2k}(q)}{[2k]_q!}\frac{T_{2n-2k+1}(q)}{[2n-2k+1]_q!}.\]
Since $(-1)^{k-1}\widetilde{\beta}_{2k}(q)>0$ for $k\in\mathbb{N}$ and $T_1(q)=1>0$, the result follows by induction on $n$.
\end{proof}
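The induction above can be carried out numerically: the sketch below (helper names are ours) builds the $\widetilde{\beta}_{2k}(q)$ from the recurrence of Section \ref{q-Bernoulli and Euler}, then the $T_{2n+1}(q)$ from \eqref{Convol.}, and checks positivity at $q=\frac12$ together with the classical limits $T_3\to 2$ and $T_5\to 16$:

```python
def q_number(n, q):
    return (1.0 - q**n) / (1.0 - q)

def q_factorial(n, q):
    out = 1.0
    for k in range(1, n + 1):
        out *= q_number(k, q)
    return out

def even_q_bernoulli(max_index, q):
    beta = {0: 1.0}
    for n in range(1, max_index // 2 + 1):
        lhs = q**(n * (2*n - 1) / 2.0) * 0.25**n / q_factorial(2*n, q)
        tail = sum(0.25**k * q**(k * (2*k + 1) / 2.0) / q_factorial(2*k + 1, q)
                   * beta[2*(n - k)] / q_factorial(2*(n - k), q)
                   for k in range(1, n + 1))
        beta[2*n] = (lhs - tail) * q_factorial(2*n, q)
    return beta

def q_tangent(max_n, q):
    """T_{2n+1}(q) from the convolution identity with the q-Bernoulli numbers."""
    beta = even_q_bernoulli(2 * max_n, q)
    T = {1: 1.0}
    for n in range(1, max_n + 1):
        T[2*n + 1] = q_factorial(2*n + 1, q) * sum(
            (-1)**(k - 1) * 4.0**k * beta[2*k] / q_factorial(2*k, q)
            * T[2*(n - k) + 1] / q_factorial(2*(n - k) + 1, q)
            for k in range(1, n + 1))
    return T

T_half = q_tangent(2, 0.5)         # T_1, T_3, T_5 at q = 1/2
T_limit = q_tangent(2, 0.999999)   # classical tangent numbers 1, 2, 16
```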
\vskip5mm
\noindent We define a sequence of polynomials $\widetilde{M}_n(z;q)$ by the generating function
\begin{equation}\label{M_n(z;q)} \dfrac{exp_q(zw)}{exp_q(w/2)+exp_q(-w/2)}=\sum_{n=0}^{\infty}\widetilde{M}_n(z;q) \frac{w^n}{[n]_q!}.\end{equation}
Similarly to Proposition \ref{Prop:B&A}, Theorem \ref{inverse of A_n} and Corollary \ref{Sinh_q}, we have the following results.
\begin{prop}\label{prop:E}
For $n\in\mathbb{N}$, the $q$-Euler polynomials $\widetilde{E}_n(z;q)$ can be represented in terms of $\widetilde{M}_{n}(z;q)$ as
\begin{equation}\label{prop:E-rep}
\widetilde{E}_n(z;q)= \sum_{k=0}^{n}\qbinom{n}{k} \, (-1)^k\, (\frac{q^{k/4}}{2})^{k-1}\, \widetilde{M}_{n-k}(z;q).
\end{equation} \end{prop}
\begin{thm}\label{Lem:2}
For $n\in\mathbb{N}_0$ and $z\in\mathbb{C}$, $\widetilde{M}_n(z;q)$ can be represented in terms of the $q$-Euler polynomials $\widetilde{E}_{n}(z;q)$ as
\begin{equation}
\widetilde{M}_{n}(z;q)= \frac{[n]_q!}{2}\sum_{j=0}^{n} (-\frac{1}{2})^{j}\, \frac{c_j}{[n-j]_q!}\, \widetilde{E}_{n-j}(z;q),\end{equation}
where $c_0=1$ and $c_j$ are the constants defined in \eqref{cons.}.
\end{thm}
\begin{cor}
For $n\in\mathbb{N}_0$ and $z\in\mathbb{C}$, the power series of the polynomial $\widetilde{M}_{n}(z;q)$ takes the form
\begin{equation*}
\widetilde{M}_{n}(z;q)= \sum_{m=0}^{n} c_m(n) \, \frac{z^m}{[m]_q!},
\end{equation*} where
\begin{equation*} c_m(n)= \frac{[n]_q!}{2}\, q^{\frac{m(m-1)}{4}}\sum_{j=0}^{n-m} (-\frac{1}{2})^{j}\,\frac{c_j}{[n-j-m]_q!}\, \widetilde{E}_{n-j-m}.\end{equation*}
\end{cor}
\begin{prop}
For $w\in\mathbb{C}$ with $|\frac{w}{2}|< C_1$, we have
\begin{equation*}\frac{1}{Cosh_q(\frac{w}{2})}= \sum_{n=0}^{\infty} \widetilde{d}_n\, w^{n},\end{equation*}
where $\displaystyle \widetilde{d}_n= \sum_{j=0}^{n} (-\frac{1}{2})^{j}\, \frac{c_j}{[n-j]_q!}\, \widetilde{E}_{n-j}$, with $c_0=1$ and $c_j$ the constants defined in \eqref{cons.}.
\end{prop}
\section{{\bf A $q$-Lidstone series involving $q$-Bernoulli polynomials}} \label{q-Lidstone-S}
Our aim in this section is to prove that an entire function $f$ (of suitable growth) may be expanded in terms of $q$-Lidstone polynomials, where the coefficients of these polynomials are the even-order $q$-derivatives of $f$ at $0$ and $1$.
\vskip3mm
We begin by recalling some definitions and results from \cite{Ramis} which will be used in the proof of the main result.
\begin{defn}\label{def:Ramiss} Let $k$ be a nonzero real number, and let $p$ be a real number with $|p|>1$. An entire function $f$ has $p$-exponential growth of order $k$ and a finite type if there exist real numbers $K>0$ and $\alpha$ such that
\[|f(z)|< K p^{\frac{k}{2}\left(\frac{\log |z|}{\log p}\right)^2} |z|^{\alpha},\]
or equivalently,
\[|f(z)|\leq K e^{\frac{k}{2\log p}(\log |z|)^2+\alpha \log |z|}.\]
\end{defn}
\begin{defn}
Let $k$ be a nonzero real number and let $p$ be a real number with
$|p|>1$. A formal power series $ \hat{f}:=\sum_{n=0}^{\infty} a_n z^n$ is $p$-Gevrey of order $\frac{1}{k}$ (or of level $k$) if there exist real numbers $C$, $A>0$ such that
\[|a_n|<Cp^{\frac{n(n+1)}{2k} }\, A^n.\]
\end{defn}
\begin{prop}\label{prop:Ramiss}
Let $k$ be a non zero real number and $p$ be a real number, with
$p>1$. The following statements are equivalent.
\begin{itemize}
\item[(i)] The series $\hat{f}:=\sum_{n=0}^{\infty}a_n x^n$ is $p$-Gevrey of
order $-k$;
\item[(ii)] The series $\hat{f}$ is the power series expansion at the origin of
an entire function $f$ having a $p$-exponential growth of order $k$ and
a finite type $\alpha$, where
\[|a_n|<Ke^{-\frac{(n-\alpha)^2}{2k} },\quad K>0.\]
\end{itemize}
\end{prop}
\begin{rem}
The series $\sum_{n=0}^\infty\, \frac{q^{\frac{n(n-1)}{4}}}{[n]_q!}\, z^n$ which defines the function $exp_q(z)$ is $q^{-1}$-Gevrey of order $-2$. Consequently, $exp_q(z)$ has $q^{-1}$-exponential growth of order $2$.
\end{rem}
\vskip5mm
\begin{prop}\label{prop:An} Let $z $ and $w$ be complex numbers
such that $|w|<S_1$. Then
\begin{equation}\label{Eq:Ln}
Sinh_q(wz)\, Csch_q(w)
=\sum_{n=0}^{\infty} \frac{2^{2n+1}}{[2n+1]_q!} \widetilde{A}_{2n+1}(z/2;q)\, w^{2n},\end{equation}
where $Csch_q(w):= 1/Sinh_q(w)$ and $\widetilde{A}_{n}(z;q)$ are the $q$-polynomials defined in \eqref{A_n(z;q)}.
\end{prop}
\begin{proof}First, note that the function $g_q(z,w):= Sinh_q(wz)\, Csch_q (w)$ is holomorphic for $|w|<S_1$. By using \eqref{sh+cosh}, we can write
\[ g_q(z,w):= \dfrac{exp_q(zw)-exp_q(-zw)}{exp_q (w)-exp_q (-w)}.\]
Then, by using \eqref{A_n(z;q)} we get
\begin{eqnarray*}
g_q(z,w)&:=&\dfrac{exp_q(zw)-exp_q(-zw)}{exp_q(w)-exp_q(-w)}\\
&=& \frac{1}{2w} \dfrac{2w\, exp_q(zw)}{exp_q(w)-exp_q(-w)} - \frac{1}{2w}\dfrac{2w\, exp_q(-zw)}{exp_q(w)-exp_q(-w)}\\
&=& \frac{1}{2w} \sum_{n=0}^{\infty} \widetilde{A}_{n}(z/2;q)\frac{(2w)^n}{[n]_q!}- \frac{1}{2w}\sum_{n=0}^{\infty} \widetilde{A}_{n}(z/2;q)\frac{(-2w)^n}{[n]_q!}\\
&=&\sum_{n=0}^{\infty} \frac{2^{2n+1}}{[2n+1]_q!} \widetilde{A}_{2n+1}(z/2;q)\, w^{2n}.
\end{eqnarray*}
\end{proof}
Henceforth, we use the notation \begin{equation}\label{A_n2}\widetilde{A}_n(z)=\frac{2^{2n+1}}{[2n+1]_q!} \widetilde{A}_{2n+1}(z/2;q).\end{equation}
With this notation, Proposition \ref{prop:An} can be restated in the following form:
\begin{equation}\label{Eq:An2}
\dfrac{exp_q(zw)-exp_q(-zw)}{exp_q(w)-exp_q(-w)}=\sum_{n=0}^{\infty} \widetilde{A}_{n}(z) w^{2n}.\end{equation}
\begin{cor}\label{cor:A}
For $n\in\mathbb{N}$, the $q$-polynomials $\widetilde{A}_n(z)$ satisfy the
$q$-difference equation
\[ \frac{\delta^{2}_q \,\widetilde{A}_n(z) }{\delta_q z^2} =\widetilde{A}_{n-1}(z),\]
with the boundary conditions $\widetilde{A}_n(0)=\widetilde{A}_n(1)=0$, and $\widetilde{A}_0(z)=z$.
\end{cor}
\begin{proof}
By using \eqref{delta E} we obtain
\begin{eqnarray*} \frac{\delta^{2}_q \,g_q(z,w)}{\delta_q z^2} &=& \sum_{n=0}^{\infty} \frac{\delta^{2}_q \,\widetilde{A}_n(z) }{\delta_q z^2}\, w^{2n}\\ &=& w^2 \dfrac{exp_q(zw)-exp_q(-zw)}{exp_q(w)-exp_q(-w)} \\ &=& \sum_{n=0}^{\infty} \widetilde{A}_n(z)\, w^{2n+2}.\end{eqnarray*}
Therefore, $ \frac{\delta^{2}_q \,\widetilde{A}_n(z) }{\delta_q z^2} =\widetilde{A}_{n-1}(z) \quad (n\in\mathbb{N})$. Furthermore,
$$ \widetilde{A}_{0}(z)=\lim_{w\rightarrow 0} \dfrac{exp_q(zw)-exp_q(-zw)}{exp_q(w)-exp_q(-w)} =z.$$
Substituting $z=0$ and $z=1$ in Equation \eqref{Eq:An2}, we obtain
$$\widetilde{A}_n(0)=\widetilde{A}_n(1)=0 \, \mbox{ for all } \, n\in\mathbb{N}.$$
\end{proof}
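Corollary \ref{cor:A} can be checked numerically by computing the $\widetilde{A}_n(z)$ through term-by-term division of the two series in \eqref{Eq:An2}. The sketch below (helper names are ours; $q=\frac12$) verifies the boundary values, $\widetilde{A}_0(z)=z$, and the $q$-difference equation at a sample point:

```python
def q_number(n, q):
    return (1.0 - q**n) / (1.0 - q)

def q_factorial(n, q):
    out = 1.0
    for k in range(1, n + 1):
        out *= q_number(k, q)
    return out

def A_tilde(z, q, N=6):
    """A_0(z), ..., A_N(z): divide the odd series Sinh_q(wz) by Sinh_q(w)
    coefficient-wise in powers of w^2 (the leading denominator term is 1)."""
    num = [q**(m * (m + 0.5)) * z**(2*m + 1) / q_factorial(2*m + 1, q)
           for m in range(N + 1)]
    den = [q**(m * (m + 0.5)) / q_factorial(2*m + 1, q) for m in range(N + 1)]
    A = []
    for n in range(N + 1):
        A.append(num[n] - sum(den[n - j] * A[j] for j in range(n)))
    return A

q = 0.5
bnd = max(abs(v) for v in A_tilde(0.0, q) + A_tilde(1.0, q)[1:])  # A_n(0)=A_n(1)=0
A0_err = abs(A_tilde(0.37, q)[0] - 0.37)                          # A_0(z) = z

r = q ** 0.5
def delta2(f, z):
    # the symmetric q-derivative applied twice
    g = lambda t: (f(r * t) - f(t / r)) / (t * (r - 1.0 / r))
    return (g(r * z) - g(z / r)) / (z * (r - 1.0 / r))

z0 = 0.6
ode_err = abs(delta2(lambda t: A_tilde(t, q)[2], z0) - A_tilde(z0, q)[1])
```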
\begin{prop}\label{prop:2} Let $z $ and $w$ be complex numbers such that $|w|<S_1$. Then
\begin{equation}
\dfrac{exp_q (zw)exp_q(-w)-exp_q(-zw)exp_q(w)}{exp_q(w)-exp_q(-w)}
=\sum_{n=0}^{\infty} \widetilde{B}_n(z)\,w^{2n},\end{equation}
where
\[\widetilde{B}_n(z)=\frac{2^{2n+1}}{[2n+1]_q!}\widetilde{B}_{2n+1}(z/2;q).\]
\end{prop}
\begin{proof}If $z $ and $w$ are complex numbers such that
$|w|<S_1$, then
\begin{eqnarray*}
&&\dfrac{exp_q(zw)exp_q(-w)-exp_q(-zw)exp_q(w)}{exp_q(w)-exp_q(-w)}\\
&=&\frac{1}{2w}\left[\dfrac{2w\, exp_q(zw)exp_q(-w)}{ exp_q(w)-exp_q(-w)} \right]-
\frac{1}{2w}\left[\dfrac{2w\, exp_q(-zw)exp_q(w)}{ exp_q(w)-exp_q(-w)}\right]\\
&=&\frac{1}{2w}\sum_{n=0}^{\infty}\dfrac{(2w)^{n}-(-2w)^n}{[n]_q!}\,
\widetilde{B}_{n}(z/2;q)\\
&=&\sum_{n=0}^{\infty}\frac{w^{2n}}{[2n+1]_q!}\,2^{2n+1}\,\widetilde{B}_{2n+1}(z/2;q).
\end{eqnarray*}
\end{proof}
As in Corollary~\ref{cor:A}, one can verify that $\widetilde{B}_0(z)=z-1$ and that,
for $n\in\mathbb{N}$, the $q$-polynomials $\widetilde{B}_n(z)$ satisfy the
$q$-difference equation
\[ \frac{\delta^{2}_q \,\widetilde{B}_n(z) }{\delta_q z^2} =\widetilde{B}_{n-1}(z),\]
with the boundary conditions $\widetilde{B}_n(0)=\widetilde{B}_n(1)=0$.
\vskip6mm
Now, observe that
\[\begin{gathered}exp_q(zw) =\dfrac{exp_q(zw)exp_q(-w)-exp_q(-zw)exp_q(w)}{exp_q(-w)-exp_q(w)}\\
+\,exp_q(w)\,\dfrac{exp_q(zw)-exp_q(-zw)}{exp_q(w)-exp_q(-w)}.\end{gathered}\]
So, from Proposition \ref{prop:An} and Proposition \ref{prop:2} we get immediately the following result.
\begin{prop}\label{Prop:3} If $z$ and $w$ are complex numbers such that $|w|<S_1$, then
\begin{equation}
exp_q(zw)=exp_q(w)\, \sum_{n=0}^{\infty} \widetilde{A}_n(z)w^{2n} -\sum_{n=0}^{\infty}\widetilde{B}_n(z)w^{2n}.
\end{equation}
\end{prop}
\vskip6mm
In the following, we assume that $\Psi$ is a comparison function, i.e. $\Psi(t)=\sum_{n=0}^{\infty}\Psi_n t^n$ such that
$\Psi_n>0$ and $\Big(\Psi_{n+1}/\Psi_n\Big) \downarrow 0$ (see \cite{Boas-Buck,Nachbin}). We denote by
$\mathcal{R}_{\Psi}$ the class of all entire functions $f$ such that, for some constants $M>0$ and $\tau\geq 0$,
\begin{equation}\label{tau}|f(re^{i\theta})|\leq M \Psi(\tau \,r),\end{equation}
as $r\rightarrow \infty$. Here, the complex variable $z$ is written as $z = r e^{i\theta}$ to emphasize that the bound must hold in all directions $\theta$. The infimum of the numbers $\tau$ for which \eqref{tau} holds is called the $\Psi$-type of the function $f$. This type can be computed by applying Nachbin's theorem \cite{Nachbin}, which states that a function $f(z)=\sum_{n=0}^{\infty}
f_n z^n$ is of $\Psi$-type $\tau $ if and only if
$$\tau= \limsup_{n\rightarrow \infty} \Big|\frac{f_n}{\Psi_n}\Big|^{\frac{1}{n}}.$$
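As a minimal numerical sketch of Nachbin's formula, one can estimate $\tau$ from truncated coefficient sequences. The snippet below treats only the classical special case $\Psi(t)=e^t$ (so $\Psi_n=1/n!$), where the $\Psi$-type reduces to the ordinary exponential type; the test function $f(z)=e^{2z}$ and the truncation order are illustrative choices, not taken from the text.

```python
import math

def psi_type_estimate(f_coeffs, psi_coeffs):
    """Estimate tau = limsup_n |f_n / Psi_n|^(1/n) from truncated coefficients."""
    terms = [abs(f / p) ** (1.0 / n)
             for n, (f, p) in enumerate(zip(f_coeffs, psi_coeffs)) if n > 0]
    # For well-behaved series, the tail of this sequence approaches the limsup.
    return max(terms[len(terms) // 2:])

N = 80
psi = [1.0 / math.factorial(n) for n in range(N)]     # Psi(t) = e^t
f = [2.0 ** n / math.factorial(n) for n in range(N)]  # f(z) = e^{2z}

tau = psi_type_estimate(f, psi)
print(tau)  # the exponential type of e^{2z} is exactly 2
```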
\noindent In~\cite{Boas-Buck}, the authors applied Nachbin's theorem for the generalized Borel transform \begin{equation*}
F(w)=\sum_{n=0}^{\infty} \dfrac{f_n}{\Psi_n w^{n+1}},\end{equation*}
and they proved the following result.
\begin{thm}\label{thm-Kernel expansion}
Let $f(z)$ belong to the class
$\mathcal{R}_{\Psi}$, and let $D(f)$ be the closed set consisting of the
union of the set of all singular points of $F$ and the set of all points
exterior to the domain of $F$. Then
\[f(z)=\frac{1}{2\pi i} \int_{\Gamma} \Psi(zw) F(w)\,dw\]
where $\Gamma$ encloses $D(f)$.
\end{thm}
Based on the above arguments and results, we now prove the main theorem.
\begin{thm}\label{Thm:qLidstone expansion 2}
Let $S_1$ be the smallest positive zero of $S_q(z)$. Assume that one of the following conditions holds:
\begin{enumerate}
\item[(i)] The function $f(z)$ is an entire function of
$q^{-1}$-exponential growth of order $2$ and a finite type $\alpha$, where
\begin{equation}\label{Co. alpha}
\alpha < 2\,\Big(\frac{1}{4}- \frac{\log S_1}{\log q}\Big).
\end{equation}
\noindent \item[(ii)] The function $f(z)$ is an entire function of $q^{-1}$-exponential growth of order less than $2$.
\end{enumerate}
Then $f(z)$ has a convergent $q$-Lidstone representation
\[f(z)=\sum_{n=0}^{\infty}\left[\widetilde{A}_n(z)\,\frac{\delta^{2n}_q \, f(1)}{\delta_q z^{2n}}-\widetilde{B}_n(z)\,
\frac{\delta^{2n}_q \, f(0)}{\delta_q z^{2n}}\right],\]
where $\widetilde{A}_n(z)$ is the polynomial of degree $2n+1$ defined in
\eqref{A_n2} and
\[\widetilde{B}_n(z):=\frac{2^{2n+1}}{[2n+1]_q!}\widetilde{B}_{2n+1}(z/2;q).\]
\end{thm}
\begin{proof}
We apply Theorem~\ref{thm-Kernel expansion} with $\Psi(z)$ chosen as $exp_q(z)$ and
\[ \Psi_n=\frac{q^{\frac{n(n-1)}{4}}}{[n]_q!}.\] Note that the sequence \[\dfrac{\Psi_{n+1}}{\Psi_n}=\frac{q^{n/2}(1-q)}{1-q^{n+1}}= \frac{q^{n/2}}{[n+1]_q}\]
is decreasing and tends to zero as $n\to\infty$, so $\Psi$ is a comparison function. By Proposition \ref{prop:Ramiss}, for any entire function $f(z)=\sum_{n=0}^{\infty} a_n z^n$ of $q^{-1}$-exponential growth of order $k$ and finite type $\alpha$, there exists a real number $K>0$ such that
\[|a_n|\leq Kq^{\frac{(n-\alpha)^2}{2k}}.\]
According to the assumption, we have two cases:
\noindent Case 1. If $k=2$, then $|a_n|\leq Kq^{\frac{(n-\alpha)^2}{4}}$. This implies that \eqref{tau} holds and $f\in \mathcal{R}_{\Psi}$. Here, the $\Psi$-type of the function $f$ satisfies
\begin{eqnarray*}
\tau &:=&\limsup_{n\rightarrow \infty} \Big|\frac{a_n}{\Psi_n}\Big|^{\frac{1}{n}} \\
&\leq& \frac{q^{\frac{1}{4}- \alpha/2}}{(1-q)} \limsup_{n\rightarrow \infty} \Big( K\,(q;q)_n q^{\alpha^2/4}\Big)^{\frac{1}{n}}\\
&\leq& q^{\frac{1}{4}- \alpha/2}< S_1.\end{eqnarray*}
\noindent Case 2. If $k<2$, then $\tau=0$.
\noindent So, $D(f)$ lies in the closed disk $|w|\leq \tau \leq q^{\frac{1}{4}- \alpha/2} < S_1$, and we may take the curve $\Gamma$ to be a circle $|w|=\tau+\epsilon <S_1$, $\epsilon>0$, which encloses $D(f)$. Note that the inequality $q^{\frac{1}{4}- \alpha/2} < S_1$ is equivalent to the condition \eqref{Co. alpha} on the type of $f(z)$. We obtain
\[f(z)=\frac{1}{2\pi i} \int_{\Gamma} exp_q(zw) F(w)\,dw.\]
Therefore,
\begin{eqnarray*}
\frac{\delta^{2n}_q \, f(0)}{\delta_q z^{2n}}&=&\frac{1}{2\pi i}
\int_{\Gamma} \frac{\delta^{2n}_q}{\delta_q z^{2n}}\, exp_q (zw)|_{z=0}\, F(w)\,dw\\
&=&\frac{1}{2\pi i} \int_{\Gamma} w^{2n}\, F(w)\,dw ,\\
\frac{\delta^{2n}_q \, f(1)}{\delta_q z^{2n}}&=&\frac{1}{2\pi i}\int_{\Gamma} w^{2n}\,exp_q (w) \,F(w)\,dw.
\end{eqnarray*}
Now, by using Proposition \ref{Prop:3} we have
\begin{eqnarray*}
&& f(z)= \frac{1}{2\pi\,i}\int_{\Gamma}exp_q(zw)\,F(w)\,dw \\
&=&\frac{1}{2\pi\,i}\int_{\Gamma}\left\{ exp_q(w)\, \sum_{n=0}^{\infty} \widetilde{A}_n(z)w^{2n} -\sum_{n=0}^{\infty}\widetilde{B}_n(z)w^{2n} \right\} F(w)\,dw\\
&=& \sum_{n=0}^{\infty}\left[\widetilde{A}_n(z)\,\frac{\delta^{2n}_q \, f(1)}{\delta_q z^{2n}}-\widetilde{B}_n(z)\,
\frac{\delta^{2n}_q \, f(0)}{\delta_q z^{2n}} \right].
\end{eqnarray*}
\end{proof}
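The monotonicity of the ratio $\Psi_{n+1}/\Psi_n=q^{n/2}/[n+1]_q$ used at the start of the proof can also be checked numerically; a small sketch (the value $q=0.5$ is an arbitrary illustrative choice):

```python
q = 0.5

def q_number(n, q):
    """The q-number [n]_q = (1 - q^n) / (1 - q)."""
    return (1 - q ** n) / (1 - q)

# Ratios Psi_{n+1}/Psi_n = q^{n/2} / [n+1]_q for n = 0, 1, 2, ...
ratios = [q ** (n / 2) / q_number(n + 1, q) for n in range(50)]

# The sequence decreases strictly and tends to zero, as required of a
# comparison function in the sense of Boas and Buck.
assert all(a > b for a, b in zip(ratios, ratios[1:]))
assert ratios[-1] < 1e-7
print(ratios[:3])
```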
\begin{rem}\label{Rem.1}
In Theorem \ref{Thm:qLidstone expansion 2}, it is clear that if
$$ \frac{\delta^{2n}_q \, f(0)}{\delta_q z^{2n}}= \frac{\delta^{2n}_q \, f(1)}{\delta_q z^{2n}}=0 \quad (n\in \mathbb{N}),$$ then $f(z)$ is identically zero.
\end{rem}
The following example shows that equality cannot be admitted in \eqref{Co. alpha}.
\begin{exa}
Consider $f(z)= S_q(S_1z)$. Then $f$ is an entire function of
$q^{-1}$-exponential growth of order $2$ and finite type $\alpha= \frac{1}{2}-2\, \frac{\log S_1}{\log q}$. By using \eqref{delta Sine and Cosine}, one can verify that
$$\frac{\delta^{2n}_q \, f(0)}{\delta_q z^{2n}}= \frac{\delta^{2n}_q \, f(1)}{\delta_q z^{2n}}=0\,\, (n\in \mathbb{N}).$$
This implies the $q$-Lidstone expansion of $f(z)$ vanishes identically but the function does not.
\end{exa}
\vskip 5mm
We end this section by giving the $q$-Lidstone series of the functions $(z;q)_n$.
\begin{exa}
Consider the functions $g_n(z)=(z;q)_n$, $n\in \mathbb{N}$. Then, Condition (ii) of Theorem \ref{Thm:qLidstone expansion 2} is satisfied. So, these polynomials have a $q$-Lidstone representation
\[g_n(z)=\sum_{m=0}^{n}\left[\widetilde{A}_m(z)\,\frac{\delta^{2m}_q \,g_n(1)}{\delta_q z^{2m}}-\widetilde{B}_m(z)\,
\frac{\delta^{2m}_q \, g_n(0)}{\delta_q z^{2m}}\right].\]
One can verify that
\[\begin{gathered} \frac{\delta^{2m}_q \, g_n(z)}{\delta_q z^{2m}}= q^{-m(m+\frac{1}{2})}\, [n]_q[n-1]_q\cdots[n-2m+1]_q\, (q^mz;q)_{n-2m}.\end{gathered}\]
Therefore, $g_n(z)$ has the convergent $q$-Lidstone representation
\[\begin{gathered}
g_n(z)=\\ \sum_{m=0}^{n} q^{-m(m+\frac{1}{2})}\, [n]_q\,[n-1]_q \cdots [n-2m+1]_q \left[(q^m;q)_{n-2m}\, \widetilde{A}_m(z) -\widetilde{B}_m(z)\right].\end{gathered}\]
\end{exa}
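For instance, the case $n=1$ can be checked directly from $\widetilde{A}_0(z)=z$ and $\widetilde{B}_0(z)=z-1$: since $g_1(1)=(1;q)_1=0$ and $g_1(0)=1$, the representation gives
\[
g_1(z)=\widetilde{A}_0(z)\,g_1(1)-\widetilde{B}_0(z)\,g_1(0)
      = z\cdot 0-(z-1)\cdot 1
      = 1-z=(z;q)_1 .
\]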
\section{{\bf A $q$-Lidstone series involving $q$-Euler polynomials}}
In this section, we introduce another $q$-extension of the Lidstone theorem: we expand functions in the $q$-Lidstone polynomials given by the $q$-Euler polynomials $\widetilde{E}_n(z;q)$ defined by the generating function \eqref{Defn.Euler}.
\noindent All the results can be established in the same manner as those of the previous section.
\begin{prop}\label{prop:Mn} If $z $ and $w$ are complex numbers
such that $|w|<C_1$, then
\begin{equation}\label{Eq:Mn}
Cosh_q(wz)\, Sech_q(w)
=\sum_{n=0}^{\infty}\widetilde{M}_n(z) w^{2n},\end{equation}
where \begin{equation}\label{Def:Mn}\widetilde{M}_n(z):=\frac{2^{2n}}{[2n]_q!} \widetilde{M}_{2n}(z/2;q),\end{equation}
and $\widetilde{M}_{n}(z;q)$ are the $q$-polynomials defined in \eqref{M_n(z;q)}.
\end{prop}
\begin{prop}\label{prop:2E} If $z $ and $w$ are complex numbers such
that $|w|<C_1$, then
\begin{equation}\begin{gathered}
\dfrac{exp_q(zw)exp_q(-w)-exp_q(-zw)exp_q(w)}{exp_q(w)+exp_q(-w)}\\
=\sum_{n=0}^{\infty}\frac{w^{2n+1}}{[2n+1]_q!}2^{2n+1}\widetilde{E}_{2n+1}(z/2;q).
\end{gathered}\end{equation}
\end{prop}
\begin{prop}\label{Prop:3E} If $z$ and $w$ are complex numbers such
that $|w|<C_1$, then
\begin{equation}
exp_q(zw)=exp_q(w)\, \sum_{n=0}^{\infty} \widetilde{M}_n(z)w^{2n} -\sum_{n=0}^{\infty}\widetilde{N}_{n+1}(z)w^{2n+1},
\end{equation} where
\[\widetilde{N}_{n+1}(z)=\frac{2^{2n+1}}{[2n+1]_q!}\widetilde{E}_{2n+1}(z/2;q).\]
\end{prop}
\begin{thm}
Assume that one of the following conditions holds:
\begin{enumerate}
\item[(i)] The function $f(z)$ is an entire function of
$q^{-1}$-exponential growth of order $2$ and a finite type $\alpha$,
where \begin{equation}\label{Co. alpha C}\alpha< 2 \left(\frac{1}{4}-\frac{\log C_1}{\log q}\right);
\end{equation}
\noindent \item[(ii)] The function $f(z)$ is an entire function of $q^{-1}$-exponential growth of order less than $2$.
\end{enumerate}
Then $f(z)$ has the convergent representation
\[f(z)=\sum_{n=0}^{\infty}\left[\widetilde{M}_n(z)\frac{\delta^{2n}_q \, f(1)}{\delta_q z^{2n}}-\widetilde{N}_{n+1}(z)
\frac{\delta^{2n+1}_q \, f(0)}{\delta_q z^{2n+1}}\right],\]
where $\widetilde{M}_n$ is the polynomial defined in \eqref{Def:Mn} and
\[\widetilde{N}_{n+1}(z):=\frac{2^{2n+1}}{[2n+1]_q!}\, \widetilde{E}_{2n+1}(z/2;q).\]
\end{thm}
As in Remark \ref{Rem.1} and the example following it, equality cannot be admitted in \eqref{Co. alpha C}. For example, the function $f(z)=
\text{C}_q (C_1z)$ is an entire function of $q^{-1}$-exponential growth of order $2$ and type $\left(\frac{1}{2}-2\frac{\log C_1}{\log q}\right)$, and one can verify that
\[\frac{\delta^{2n}_q \, f(1)}{\delta_q z^{2n}} = 0 = \frac{\delta^{2n+1}_q \, f(0)}{\delta_q z^{2n+1}}.\]
Hence, the $q$-Lidstone expansion of $f(z)$ vanishes while the function does not.
\section{\bf{ Concluding Remarks}}
The $q$-Lidstone series approximates an entire function in a neighborhood of two points in terms of $q$-analogues of the Lidstone polynomials. In \cite{Ismail and Mansour}, the authors introduced such polynomials, namely $q$-Bernoulli polynomials generated by the second Jackson $q$-Bessel function.
In this paper, we presented $q$-Bernoulli and $q$-Euler polynomials generated by the third Jackson $q$-Bessel function and used them to construct new types of the $q$-Lidstone expansion theorem of \cite{Ismail and Mansour}.
This work provides the basis for several applications that we may pursue in the future. Firstly, we are interested in studying generalizations of the $q$-Lidstone series; the analogous problem in the classical case was studied by Whittaker \cite{Whittaker}. Secondly, we are interested in constructing the $q$-Fourier series for the $q$-Lidstone polynomials $\widetilde{A}_n(z)$ and $\widetilde{B}_n(z)$, and in applying such expansions to the solution of certain $q$-boundary value problems as in \cite{Mansour and AL-Towaileb} and \cite{Mansour and AL-Towaileb 2}.
solv-int/9504002
\section{Introduction}
In recent years, essential progress has been
achieved in the investigation
of integrable quantum field theories. This success owes much to the fact
that these models are characterized by infinite-dimensional Hopf algebra
symmetries, known as affine quantum group symmetries.
These symmetries are generated by
non-local conserved currents which in many cases
can be constructed explicitly. Such
an approach permits one to obtain non-perturbative
solutions in quantum field theory using algebraic methods
\cite{smi}-\cite{bab}.
The situation is analogous to the one taking place in Conformal Field Theory
(CFT). In particular, in CFT, as a result of the
infinite-dimensional Virasoro algebra
(or other extended algebras), exact solutions are
successfully obtained with the help of the Ward identities \cite{BPZ}.
Explicit currents that generate a $q$-deformation of affine Kac-Moody
algebras \cite{dri},\cite{jim} were constructed for the Sine-Gordon theory and
its generalization to imaginary coupling affine Toda theory in \cite{BL},
and shown to completely characterize the $S$-matrices. At special values
of the coupling where these quantum field theories have ordinary
Lie group $G$ invariance, the quantum affine symmetry becomes the
$G$-Yangian symmetry \cite{ber},\cite{lus}.
The affine quantum group invariance fixes the $S$-matrices up to
overall scalar factors, which in turn can be fixed
using crossing symmetry,
unitarity and analyticity. These quantum group invariant $S$-matrices,
which are specializations of the $R$-matrices, satisfy the
Yang-Baxter equation.
In the present work a series of new integrable models is identified
and its $q$-deformed structure is studied.
In particular, the organization of the paper is as follows.
In section \ref{hyperelliptic-surfaces},
a brief description of the minimal conformal models on
hyper-elliptic surfaces which can be represented as
two-sheet coverings of a ramified sphere is given.
In section \ref{new-model},
a model of perturbed CFT is proposed; the relevant perturbation
is the highest weight vector of the Virasoro algebra at the
branching points. The characters of this model are calculated
and the existence of an infinite series of Integrals of Motion (IMs) is
proved; the integrability of the model is thus established.
Furthermore, the $\beta$-function of the model
is calculated and it is shown that the theory
is massive.
In the last section, section \ref{nonlocal-charges},
the non-local currents are constructed. These are related
by non-trivial braiding relations which lead
to the $q$-deformed
algebra of the conserved charges of the model.
\section{CFT on Hyper-Elliptic Surfaces}
\label{hyperelliptic-surfaces}
Conformal field theories on compact Riemann surfaces, and in particular
on hyper-elliptic surfaces, have been considered by many authors. One
of the pioneering works on hyper-elliptic surfaces was Zamolodchikov's
work
for the Ashkin-Teller models \cite{zam87}; another important contribution
was Knizhnik's work \cite{kni}
on two-loop
calculations in string theory. Finally, in \cite{CSS},
the minimal models on hyper-elliptic
surfaces were thoroughly discussed.
Let $\Gamma$ be a compact Riemann surface of genus $g\geq 1$. If $\Gamma$
is a Riemann surface of an algebraic function $y=y(z)$ given by the
equation
\begin{equation}
R(y,z)=y^{n}+a_{1}(z)y^{n-1}+\ldots+a_{n}(z)=0~,
\end{equation}
where $R(y,z)$ is a polynomial of the form shown above,
then the affine part of $\Gamma$ coincides with
the complex algebraic curve $R(y,z)=0$ in ${\Bbb C}^2$, provided this curve is
ordinary (smooth). Of special importance to us is the example of
hyper-elliptic curves given by equations of the form
\begin{equation}
\label{form1}
y^2=P_{2g+1}(z)~,
\end{equation}
or
\begin{equation}
\label{form2}
y^2=P_{2g+2}(z)~,
\end{equation}
where $P_h(z),~h=2g+1,2g+2,$ is a polynomial of degree $h$ without
multiple roots.
In both cases, the
genus of the corresponding Riemann surface is $g$. It is noteworthy that any
Riemann surface of genus $g=1$ or $g=2$ has a representation
in one of the forms \calle{form1} or \calle{form2},
while the same statement
is not true for surfaces of genus $g=3$. We label the two sheets of the
Riemann surface $\Gamma$
by the numbers $l=0,1$:
\begin{equation}
y^{(l)}(z)= e^{i\pi l}\,P_h^{1/2}(z)=
e^{i\pi l}\,\prod_{i=1}^h\, (z-w_i)^{1/2}~.
\end{equation}
Let $A_a,\, B_a,~a=1,2,\dots,g$ be the basic cycles of the surface.
As we encircle the point $w_i$ along the
contours $A_a,\, B_a$,
in the case of an $A_a$ cycle we stay on the
same sheet, while in the case of a
$B_a$ cycle we pass from the $l$-th sheet
to the $(l+1)$-th one.
We shall denote the process of encircling the points $w_i$
on the cycles $A_a, \, B_a$ by the symbols
$\hat{\pi}_{A_a}$, $\hat{\pi}_{B_a}$ respectively.
These generators form the monodromy group,
which in our case of a two-sheet covering of the sphere coincides
with the group ${\Bbb Z}_{2}$.
We consider the energy-momentum tensor with representation $T^{(l)}(z)$
on each of these sheets.
The above definition of the monodromy properties
along the cycles $A_a,~B_a$ implies that the following
boundary conditions should be satisfied by the energy-momentum
tensor:
\begin{equation}
\hat{\pi}_{A_{a}}T^{(l)}=T^{(l)} ,\quad \hat{\pi}_{B_{a}}T^{(l)}=T^{(l+1)}~.
\end{equation}
It is convenient to pass to a basis, in which the operators
$\hat{\pi}_{A_a}$, $\hat{\pi}_{B_a}$ are diagonal
\begin{eqnarray}
T=T^{(0)}+T^{(1)}~,&&\quad T^{-}=T^{(0)}-T^{(1)}~,\\
\hat{\pi}_{A_{a}}T=T~,&& \quad \hat{\pi}_{A_{a}}T^{-}=T^{-}~,
\label{BC1}\\
\hat{\pi}_{B_{a}}T=T~,&& \quad \hat{\pi}_{B_{a}}T^{-}=-T^{-}~.
\label{BC2}
\end{eqnarray}
The corresponding operator product expansions
(OPEs) of the $T,~T^-$ fields can be
determined by taking into account the
OPEs of $T^{(l)},~T^{(l')}$. On the same
sheet, the OPE $T^{(l)}(z_{1})T^{(l)}(z_{2})$ is the same
as on the sphere, while fields on different sheets do not correlate, i.e.
$T^{(l)}(z_{1})T^{(l+1)}(z_{2})\sim {\rm reg}$.
Thus, in the diagonal
basis the OPEs can be found to be
\begin{eqnarray}
T(z_{1})T(z_{2})&=&{c\over 2\,z_{12}^4}+
{2\,T(z_2)\over z_{12}^2}+
{T'(z_2)\over z_{12}} + {\rm reg}~,
\label{OPE1} \\
T^{-}(z_1)T^{-}(z_{2})&=&{c\over 2\,z_{12}^4}+
{2\,T(z_2)\over z_{12}^2}+
{T'(z_2)\over z_{12}} + {\rm reg}~,
\label{OPE2}\\
T(z_1)T^{-}(z_2)&=&{2\over z_{12}^2}\,T^{-}(z_2)+
{T'^{-}(z_2)\over z_{12}}+ {\rm reg}~,
\label{OPE3}
\end{eqnarray}
where $c=2\hat{c}$, and $\hat{c}$ is the central charge in the OPE of
$T^{(l)}(z_{1})T^{(l)}(z_{2})$. It is seen from
\calle{OPE3} that $T^-$ is a
primary field with respect to $T$. To write the algebra
\calle{OPE1}-\calle{OPE3} in
graded form, we determine the mode expansions of $T$ and $T^-$:
\begin{eqnarray}
T(z)V_{(k)}(0)&=&\sum_{n\in {\Bbb Z}}\, z^{n-2}L_{-n}V_{(k)}(0)~,\\
T^-(z)V_{(k)}(0)&=&\sum_{n\in {\Bbb Z}}\, z^{n-2-k/2}L_{-n+k/2}^-V_{(k)}(0)~,
\end{eqnarray}
where $k$ ranges over the values 0,1 and determines the parity sector in
conformity with the boundary conditions
\calle{BC1} and \calle{BC2}. Standard calculations
lead to the following algebra for the operators $L_{-n}$ and
$L_{-n+k/2}^{-}$:
\begin{eqnarray}
\lbrack L_n,L_m\rbrack &=& (n-m)\,L_{n+m}+\frac{c}{12}\,
(n^3-n)\,\delta_{m+n,0}~,\nonumber\\
\lbrack L_{m+k/2}^{-},L_{n+k/2}^{-}\rbrack
&=&(m-n)\,L_{n+m+k}+\frac{c}{12}\lbrack (m+k/2)^3-
(m+k/2)\rbrack \, \delta_{n+m+k,0}~,~~~~~~
\label{algebra} \\
\lbrack L_m,L_{n+k/2}^- \rbrack &=& (m-n-k/2) \, L_{m+n+k/2}^-~.
\nonumber
\end{eqnarray}
The operators $\overline{L}_n$ and $\overline{L}_{m+k/2}^-$ satisfy the same
relations and commute with $L_n$ and $L_{m+k/2}^-$.
To describe
the representations of the algebra \calle{algebra},
it is necessary to consider
separately the non-twisted sector with $k=0$ and
the twisted
sector with $k=1$. In order
to write the $\lbrack V_{(k)}\rbrack$ representation of the
algebra \calle{algebra} in a more explicit form, it is convenient to
consider the highest weight states.
In the $k=0$ sector, the
highest weight state $\vline\, \Delta , \Delta^-\rangle$
is determined with the help of a primary field $V_{(0)}$
by means of the formula
\begin{equation}
\label{state1}
\vline \,\Delta , \Delta^-\rangle=V_{(0)}\, \vline\, \emptyset
\rangle ~.
\end{equation}
Using the definition of vacuum, it is easy to see that
\begin{equation}
\begin{array}{l}
L_0\,\vline\,\Delta, \Delta^-\rangle=\Delta
\, \vline\, \Delta ,\Delta^-\rangle~ ,\quad
L_0^-\,\vline\, \Delta, \Delta^-\rangle=
\Delta^-\,\vline\, \Delta ,\Delta^-\rangle~, \\
\nonumber\\
L_n\,\vline\, \Delta, \Delta^-\rangle =
L_n^-\,\vline\, \Delta, \Delta^-\rangle=0 ,
\quad n \geq 1~.
\end{array}
\end{equation}
In the $k=1$ sector, we define the vector of highest weight $|\Delta\rangle$
of the algebra to be
\begin{equation}
\label{state2}
\vline\, \Delta \rangle=V_{(1)}\,\vline \,\emptyset\rangle~,
\end{equation}
where $V_{(1)}$ is a primary field
with respect to $T$. In analogy with the non-twisted sector we
obtain
\begin{equation}
L_0\,\vline \,\Delta \rangle=\Delta \,\vline \,\Delta \rangle,\quad
L_n\,\vline\, \Delta \rangle=L_{n-1/2}^- \,\vline\, \Delta \rangle=0,
\quad n \geq 1~.
\end{equation}
Thus, the Verma module over the algebra \calle{algebra}
is obtained by the action
of any number of $L_{-n}$ and $L_{-m+k/2}^-$ operators with $n,m>0$
on the states \calle{state1} and \calle{state2}. As was shown in ref.
\cite{CSS} by means of the GKO
(coset construction) method, the central charge of a reducible unitary
representation of the algebra \calle{algebra} has the form
\begin{equation}
\label{ccharge}
c=2-\frac{12}{p(p+1)}=2\hat{c}~ ,\quad p=3,4,\ldots~.
\end{equation}
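The first few members of the central-charge spectrum \calle{ccharge} are easily tabulated; a minimal numeric sketch (the values follow directly from the formula):

```python
def central_charge(p):
    """Central charge c = 2 - 12/(p(p+1)) of the unitary series, p = 3, 4, ..."""
    return 2 - 12 / (p * (p + 1))

values = [central_charge(p) for p in (3, 4, 5, 6)]
print(values)  # starts at c = 1.0 for p = 3 and increases toward c = 2
```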
Using ref. \cite{FF}, Dotsenko and Fateev \cite{DF}
gave the complete solution for the
minimal model correlation functions on the sphere. They were able to write
down the integral representation for the conformal blocks of
the chiral vertices
in terms of the correlation functions of
the vertex operators of a free bosonic
scalar field $\Phi$ coupled to a background charge $\alpha_0$.
This construction has become known as the Coulomb Gas Formalism
(CGF).
In the present case, this approach is also applicable by
considering a Coulomb gas for each sheet separately
but coupled to the same background charge:
\begin{equation}
\begin{array}{l}
T^{(l)}=-\frac{1}{4}(\partial_z\Phi^{(l)})^{2} + i\alpha_0\partial_z^2
\Phi^{(l)}~,\quad
\langle\Phi^{(l)}(z)\Phi^{(l')}(z')\rangle=-\delta^{ll'}
\,\ln|z-z'|^2~,\\
\\
\hat{\pi}_{A_a}\partial_z\Phi^{(l)}=\partial_z\Phi^{(l)}~ ,\quad
\hat{\pi}_{B_a}\partial_z\Phi^{(l)}=\partial_z\Phi^{(l+1)}~,
\nonumber
\end{array}
\end{equation}
where $c=2-24\alpha_0^2$, i.e. $\alpha_0^2=\frac{1}{2p(p+1)}$.
Passing to the basis which diagonalizes the operators $\hat{\pi}_{A_a}$ ,
$\hat{\pi}_{B_a}$, i.e.
\begin{eqnarray}
\Phi=\Phi^{(0)} + \Phi^{(1)}~,\quad \Phi^- = \Phi^{(0)} - \Phi^{(1)}
~,\nonumber\\
\hat{\pi}_{A_a}\partial_z\Phi = \partial_z\Phi~ ,\quad
\hat{\pi}_{B_a}\partial_z\Phi = \partial_z\Phi~,\\
\hat{\pi}_{A_a}\partial_z\Phi^- = \partial_z\Phi^-~ ,\quad
\hat{\pi}_{B_a}\partial_z\Phi^- = -\partial_z\Phi^-~,
\nonumber
\end{eqnarray}
we finally obtain the bosonization rule for the operators $T$ , $T^-$ in the
diagonal basis
\begin{eqnarray}
T &=& -\frac{1}{4}(\partial_z\Phi)^2 + i\alpha_0\partial_z^2\Phi -
\frac{1}{4}(\partial_z \Phi^-)^2~,\nonumber \\
\\
T^- &=& -\frac{1}{2}\partial_z\Phi\partial_z\Phi^- +
i\alpha_0\partial_z^2\Phi^- ~.
\nonumber
\end{eqnarray}
In conventions of ref. \cite{CSS},
the vertex operator with charges $\alpha$, $\beta$
in the $k=0$ (non-twisted) sector is given by
\begin{equation}
\label{vertex1}
V_{\alpha\beta}(z) = e^{i\alpha\Phi+i\beta\Phi^-}~,
\end{equation}
with conformal weights $\Delta=\alpha^2-2\alpha_0\alpha
+\beta^2$ and $\Delta^-=2\alpha\beta-2\alpha_0\beta$.
In the $k=1$ (twisted) sector the situation is slightly different. Here we
have an antiperiodic bosonic field $\Phi^-$, i.e.
$\Phi^-(e^{2\pi i}z) = -\Phi^-(z)$;
this leads to the deformation of
the geometry of space-time. If we recall that the circle is parametrized by
$\Phi^- \in S^1 \lbrack 0,2\pi R\rbrack$, the
condition $\Phi^- \sim -\Phi^-$ means that pairs of points of $S^1$ have been
identified.
Thus, $\Phi^-$ lives on the orbifold $S^1/{\Bbb Z}_2$; under the
identification
$\Phi^- \sim -\Phi^-$ the two points
$\Phi^-=0$ and $\Phi^-=\frac{1}{2}(2\pi R)$ are fixed points. One can try
to define the twist fields $\sigma_\epsilon(z),~\epsilon=0,1,$ for
the bosonic
field $\Phi^-$, with respect to which $\Phi^-$ is antiperiodic. Notice
that there
is a separate twist field for each fixed point. The OPE of the
current $I^-=i\partial_z\Phi^-$ with the field $\sigma_\epsilon$ is then
\begin{equation}
\begin{array}{l}
I^-(z)\sigma_{\epsilon}(0)=\frac{1}{2}z^{-1/2}\hat{\sigma}_{\epsilon}(0) +
\ldots~,\\
\nonumber\\
I^-(z)\hat{\sigma}_{\epsilon}(0)=\frac{1}{2}z^{-3/2}
\sigma_{\epsilon}(0) + 2z^{-1/2}\sigma'_{\epsilon}(0) + \ldots~.
\end{array}
\end{equation}
The twist fields $\sigma_\epsilon$ and $\hat{\sigma}_\epsilon$
are primary fields with respect to $T_{\rm orb}=-\frac{1}{4}(\partial_z\Phi^-)^{2}$,
with dimensions $\Delta_{\epsilon}=1/16$ and $\hat{\Delta}_{\epsilon}=
9/16$ respectively. So, in the twisted sector
the highest weight vectors (or primary
fields) can be written as follows
\begin{equation}
\label{vertex2}
V_{\gamma\,\epsilon}^{(t)}=e^{i\gamma\Phi}\sigma_{\epsilon}~ ,\quad
\Delta^{(t)}=\gamma^2-2\alpha_0\gamma+{1\over 16}~.
\end{equation}
In ref. \cite{CSS},
the anomalous dimensions of the primary fields of the minimal models
for the algebra \calle{algebra}
were obtained both in the non-twisted and twisted sectors in
conformity with the spectrum of the central charge \calle{ccharge};
in particular, it was found that
the charges $\alpha,\beta,\gamma$
of the primary fields corresponding to $k=0$ and $k=1$ sectors
have the form:
\begin{equation}
\begin{array}{l}
\alpha_{n'm'}^{nm}={2-n-n'\over 2}\,\alpha_{+}+
{2-m-m'\over 2}\,\alpha_{-}~,\\
\nonumber\\
\beta_{n'm'}^{nm}={n-n'\over 2}\,\alpha_{+}+
{m-m'\over 2}\,\alpha_{-}~,\\
\nonumber\\
\gamma_{nm}={2-n\over 2}\,\alpha_{+}+
{2-m\over 2}\,\alpha_{-}~,\\
\nonumber\\
1\leq n,n'\leq p ,\quad 1\leq m,m'\leq p-1~,
\end{array}
\end{equation}
where the constants $\alpha_{\pm}$
are expressed in terms of the background charge $\alpha_0$:
\begin{equation}
\alpha_{\pm}=\alpha_{0}/2 \pm \sqrt{\alpha_{0}^{2}/4+1/2} ~.
\end{equation}
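Expanding this expression gives two relations that are convenient in Coulomb-gas manipulations (both follow directly from the formula above):
\[
\alpha_{+}+\alpha_{-}=\alpha_{0}~,\qquad
\alpha_{+}\,\alpha_{-}=\frac{\alpha_{0}^{2}}{4}
-\Big(\frac{\alpha_{0}^{2}}{4}+\frac{1}{2}\Big)=-\frac{1}{2}~.
\]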
We denote the corresponding fields by $V^{nm}_{n'm'}$, $V^{(t)}_{nm}$
and their conformal weights by $\Delta^{nm}_{n'm'}$, $\Delta^{(t)}_{nm}$.
We can thus represent the CFT on a hyper-elliptic surface as a CFT on
the plane with an additional symmetry, exactly as
described by the algebra \calle{algebra}.
The corresponding highest weight vectors of the algebra are
given by \calle{vertex1} and \calle{vertex2}; finally, the
central charge is given by \calle{ccharge}.
We will confine ourselves to the minimal models on
hyper-elliptic surfaces as presented above; keeping this in mind
we pass to the construction of
perturbed models of these CFTs.
\section{Perturbation by $V_{nm}^{(t)}$ and Integrals of Motion}
\label{new-model}
\setcounter{equation}{0}
Let $S_p$ be the action of the $p$-th conformal minimal model on the
hyper-elliptic surface $\Gamma$
\begin{equation}
S_p\lbrack\Phi,\Phi^-\rbrack\,\sim \, \int\,d^2z\,(
\, \partial_z \Phi \partial_{\overline z}\Phi - i\alpha_0R\Phi) +
\int\,d^2z\,\partial_z\Phi^-\partial_{\overline z}\Phi^-~.
\end{equation}
We now consider the perturbation of this conformal
field theory by the degenerate relevant operator $V_{nm}^{(t)}$:
\begin{equation}
S_\lambda\,=\,S_p\lbrack\Phi,\Phi^-\rbrack
+\lambda\,\int\,d^2\,z\,e^{i\gamma_{nm}
\Phi(z,\overline{z})}\,\sigma_{\epsilon}(z,\overline{z})~.
\end{equation}
The parameter $\lambda$ is a coupling constant with conformal weight
$(1-\Delta_{nm}^{(t)}\, , \, 1-\Delta_{nm}^{(t)})$.
Obviously, for a generic perturbation the new action $S_\lambda$
does not describe an integrable model.
We are going to choose the perturbation in a way that the corresponding
field theory is integrable.
To prove the integrability of this massive theory (the claim that the
theory is massive is proved at the end of the present section), one
must calculate the characters of the modules of the identity $I$ and of
$V_{nm}^{(t)}$.
The ``basic" currents $T(z)$ and $T^-(z)$ generate an infinite-dimensional
vector subspace $\Lambda$ in the representation space. This subspace can
be constructed by successive applications of the generators $L_{-n}$ and
$L_{-m}^-$ with $n,m>0$ to the identity operator $I$.
$\Lambda$ can be decomposed into a direct sum of eigenspaces of $L_0$, i.e.
\begin{equation}
\Lambda\,=\,\bigoplus_{s=0}^{\infty} \Lambda_{s}~,\quad
L_0\,\Lambda_s = s\,\Lambda_s~.
\end{equation}
The space $\Lambda$ contains the subspace $\Lambda'=\partial_z\Lambda$.
Therefore, in order to separate the maximal linearly independent set,
one must take
the factor space $\hat{\Lambda}=\Lambda/(L_{-1}\Lambda\,\oplus\,L_{-1}^{-}
\Lambda)$ instead of $\Lambda$. The space $\hat{\Lambda}$ admits a similar
decomposition as a direct sum of eigenspaces of $L_0$.
It follows that the formula of the character for $\hat{\Lambda}$
takes the form
\begin{equation}
\chi_0 = (1-q)^2 \prod_{n=1}^{+\infty}\,\frac{1}{(1-q^n)^2}~.
\end{equation}
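The low-order coefficients of $\chi_0$ can be generated by elementary series arithmetic. A small sketch (the truncation order is an arbitrary choice; the product over $n$ is the generating function of partitions whose parts come in two colors):

```python
N = 12  # truncation order in powers of q

# Coefficients of prod_{n>=1} (1 - q^n)^(-2): two-colored partitions.
a = [0] * (N + 1)
a[0] = 1
for n in range(1, N + 1):
    for _ in range(2):          # two colors for each part size n
        for j in range(n, N + 1):
            a[j] += a[j - n]

# Multiply by (1 - q)^2 = 1 - 2q + q^2 to obtain chi_0.
chi0 = [a[s]
        - 2 * (a[s - 1] if s >= 1 else 0)
        + (a[s - 2] if s >= 2 else 0)
        for s in range(N + 1)]
print(chi0[:7])  # [1, 0, 2, 2, 5, 6, 13]
```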
The dimensionalities of the subspaces
$\hat{\Lambda}_s$ can be determined from the
character formula
\begin{equation}
\sum_{s=0}^{\infty} \, q^s\, \dim(\hat{\Lambda}_s) = (1-q)\,\chi_0 + q~.
\end{equation}
\indent
On the other hand,
the module $V$ of the primary field $V_{nm}^{(t)}$ can be constructed by
successively applying the generators $L_{-k}$ and $L_{1/2-l}^-$
with $k,l>0$ to
the primary field $V_{nm}^{(t)}$. This space $V$ and
the corresponding factor space $\widehat{V} =
V/L_{-1}V$ may also be decomposed in a direct sum of
$L_0$ eigenspaces:
\begin{equation}
V=\bigoplus_{s=0}^{\infty}\,V_s^{(t)}~,\quad L_0\,V_s^{(t)}=s\,
V_s^{(t)}~.
\end{equation}
The dimensionalities of $\widehat{V}_s^{(t)}$
in the factor space associated with the relevant field
\begin{equation}
V_{(1,1)}^{(t)}=e^{i\frac{\alpha_0}{2}\Phi}\sigma_{\epsilon}
\end{equation}
are given by
the character formula
\begin{equation}
\sum_{s\in {\Bbb N}/2}\, q^{s+\Delta_{(1,1)}^{(t)}}\,
\dim(\hat{V}_s^{(t)})=
\chi_{\Delta_{(1,1)}^{(t)}}\, (1-q)~,
\label{char1}
\end{equation}
where
\begin{eqnarray}
\label{char2}
\chi_{\Delta_{(1,1)}^{(t)}}&=&q^{\Delta_{(1,1)}^{(t)}}
\prod_{n=1}^{+\infty}\frac{1}{(1-q^{n})(1-q^{n-1/2})}~,\\
\Delta_{(1,1)}^{(t)}&=&\frac{1}{16}\left(1-{6\over p(p+1)}\right)~.
\end{eqnarray}
When the dimensionalities of $\widehat{V}_s^{(t)}$
(calculated from \calle{char1}, \calle{char2}) are
compared to those of $\hat{\Lambda}_{s+1}$,
we see that for $s=1,3,5,\dots$ the dimension $\dim(\widehat{\Lambda}_{s+1})$
exceeds $\dim(\widehat{V}^{(t)}_s)$
by at least one, i.e.
$\dim(\widehat{\Lambda}_{s+1})>
\dim(\widehat{V}^{(t)}_s),~s=1,3,5,\dots~.$
This proves that the model
\begin{equation}
\label{action}
S_{\lambda}=S_p + \lambda\,\int\,d^2z\,e^{i\frac{\alpha_0}{2}
\Phi(z,\overline{z})}\,\sigma_{\epsilon'}(z,\overline{z})
\end{equation}
possesses an infinite set of non-trivial IMs.
We
note here that there are no such IMs for
perturbations by the operators $V_{nm}^{(t)}$ with $n,m>1$.
We now briefly study the renormalization
group flow behaviour in the vicinity of the fixed point.
Solving the Callan-Symanzik equation \cite{IZ} up to third order, one can
obtain the $\beta$-function
\begin{equation}
\beta=\varepsilon\, g\, \left( 1 + \frac{Y}{6}\, g^2\right) + {\cal O}(g^4) ~.
\end{equation}
In the above equation, we have denoted
\begin{equation}
\varepsilon = 1-\Delta_{(1,1)}^{(t)}
\end{equation}
and
\begin{equation}
Y = \int d^2 z_1 \int d^2 z_2 \,\langle V_{(1,1)}^{(t)}(z_1,\overline{z}_1)
V_{(1,1)}^{(t)}(z_2,\overline{z}_2)V_{(1,1)}^{(t)}(1,1)
V_{(1,1)}^{(t)}(0,0) \rangle ~.
\end{equation}
Since $Y>0$, we conclude that there is no reason
to expect the existence of any non-trivial zeros of the $\beta$-function.
In the absence of
zeros, the field theory described by the action \calle{action}
has a finite correlation length
$R_c\sim \lambda^{-1/2\varepsilon}$ and the spectrum consists of particles
with non-zero mass
of order $m\sim R_c^{-1}$. In this case, the IMs force the scattering
of the particles to be factorizable, i.e. there is no particle production,
the set of particle momenta is preserved, the $n$-particle
$S$-matrix is a product of 2-particle $S$-matrices, etc.
\section{Infinite Quantum Group Symmetry}
\label{nonlocal-charges}
\setcounter{equation}{0}
\indent
In this section we briefly review the method developed in ref.
\cite{BL} and then we apply it to our model.
We consider a CFT perturbed by a relevant operator with zero Lorentz spin.
The Euclidean action is given by
\begin{equation}
\label{pert-action}
S_\lambda=S_{\rm CFT}+\frac{\lambda}{2\pi}
\,\int\,d^2z\,V_{\rm pert}(z,\overline{z})~,
\end{equation}
where the perturbation field can be written as $V_{\rm pert}(z,\overline{z})=
V_{\rm pert}(z)\overline{V}_{\rm pert}(\overline{z})$
(or a sum of such terms but
in our case this is irrelevant).
Let us assume that for the conformal
invariant action $S_{\rm CFT}$ there exist the chiral currents $J(z)$,
$\overline{J}(\overline{z})$ satisfying equations $\partial_{\overline
z}J(z)=0$,
$\partial_z\overline{J}(\overline{z})=0$. Then for the action
\calle{pert-action} $S_\lambda$, the
perturbed currents, which are local with respect to the perturbing field, up
to the first order, are given by Zamolodchikov's equations \cite{zam89}
\begin{equation}
\begin{array}{l}
\partial_{\overline z}J(z,\overline{z})=\lambda\oint_z\,
{d\omega\over 2\pi i}\, V_{\rm pert}
(\omega,\overline{z})J(z)~,\\ \\
\partial_z\overline{J}(z,\overline{z})=\lambda\oint_{\overline{z}}\,
{d\overline{\omega}\over 2\pi i}\,
V_{\rm pert}(z,\overline{\omega})\overline{J}(\overline{z})~.
\end{array}
\end{equation}
The condition for the conservation of the currents up to first order in
perturbation theory is that the residues of OPEs appearing in the above
contour integrals are total derivatives:
\begin{equation}
\begin{array}{l}
{\rm Res}\Big(V_{\rm pert}(\omega)J(z)\Big)=\partial_zh(z)~,
\\ \\
{\rm Res}\Big(\overline{V}_{\rm pert}(\overline{\omega})
\overline{J}(\overline{z})\Big)
=\partial_{\overline{z}}
\overline{h}(\overline{z})~.
\end{array}
\end{equation}
Then Zamolodchikov's equations for the currents are written in the form
\begin{equation}
\label{continuity-equation}
\begin{array}{l}
\partial_{\overline{z}}J(z,\overline{z})=\partial_zH(z,\overline{z})~,\\
\\
\partial_z\overline{J}(z,\overline{z})=
\partial_{\overline{z}}\overline{H}(z,\overline{z})~,
\end{array}
\end{equation}
where the fields $H$, $\overline{H}$ are
\begin{equation}
\begin{array}{l}
H(z,\overline{z})=\lambda\, \lbrack h(z)\overline{V}_{\rm pert}(\overline{z})
+\dots\rbrack~,\\ \\
\overline{H}(z,\overline{z})=\lambda\,\lbrack
V_{\rm pert}(z)\overline{h}(\overline{z})+\dots\rbrack~,
\end{array}
\end{equation}
where the dots represent contributions coming from terms in the OPEs
which are more singular
than the residue term.
The conserved charges following from the conserved currents
\calle{continuity-equation}
are
\begin{equation}
\label{charges}
\begin{array}{l}
Q=\int\,{dz\over 2\pi i}\,J+\int {d\overline{z}\over 2\pi i}\, H~,\\ \\
\overline{Q}=\int\,{d\overline{z}\over 2\pi i}\,\overline{J}
+\int\,{dz\over 2\pi i}\,\overline{H}~.
\end{array}
\end{equation}
Using the non-trivial braiding relations between the
conserved currents, one can
obtain the $q$-deformed affine Lie algebra for the conserved charges
\calle{charges}.
We are now going to implement the above construction of non-local charges
for the theory described by the action \calle{action}.
We will thus derive the $q$-deformed Lie algebra underlying the theory.
Using the construction explained above, we can show that the
action \calle{action} admits the following non-local conserved
quantum currents:
\begin{equation}
\label{continuity2}
\begin{array}{l}
\partial_{\overline{z}}J =\partial_zH~,\\
\nonumber\\
\partial_z\overline{J}=\partial_{\overline z}
\overline{H}~,
\end{array}
\end{equation}
where
\begin{equation}
\label{currents}
\begin{array}{l}
J=\colon e^{ia\varphi(z)}\,
e^{ib\varphi^-(z)}\colon\, \sigma(z)~,\\ \\
\overline{J}=
\colon e^{ia\overline{\varphi}(\overline{z})}e^{ib
\overline{\varphi}^-(\overline{z})}\colon
\,\overline{\sigma}(\overline{z})~,\\ \\
H(z,\overline{z})=\lambda\, A \, \colon
e^{i(a+\alpha_0/2) \varphi (z)}e^{i(b+k) \varphi^-(z)}
\overline{\sigma}(\overline{z})
e^{i\frac{\alpha_{0}}{2}
\overline{\varphi}(\overline{z})}\colon~,\\ \\
\overline{H}(z,\overline{z})=\lambda\, A\,
\colon e^{i(a+\alpha_0/2) \overline{\varphi}(\overline{z})}
e^{i(b+k) \overline{\varphi}^-(\overline{z})}
\sigma (z)e^{i\frac{\alpha_0}{2}\varphi(z)}
\colon~,
\end{array}
\end{equation}
and
\begin{eqnarray}
a &=& -(15/8+k^{2})/(\alpha_{0}+4k^{2}/\alpha_{0})~,
\nonumber\\
b &=& 2k a/\alpha_0~,
\label{constants}\\
A &=& \alpha_0/2(a + \alpha_0/2)~.\nonumber
\end{eqnarray}
In the derivation of \calle{currents}, we used the OPEs
\begin{equation}
\begin{array}{l}
\sigma(z)\, \sigma(x)=(z-x)^
{k^2-1/8}:e^{ik\varphi^{-}(x)}:+\ldots~,\\ \\
\overline{\sigma}(\overline z)\overline{\sigma}(\overline x)=
(\bar z-\bar x)^{\overline{k}^2-1/8}
\,:e^{i\overline{k}\overline{\varphi}^-(\overline{x})}:+\ldots~.
\end{array}
\end{equation}
From the continuity equations
\calle{continuity2} we define the conserved charges
\begin{equation}
\begin{array}{l}
Q =\int\,\frac{dz}{2\pi i}\,J + \int\,\frac{d\overline{z}}{2\pi i}\,H
~,\\
\nonumber\\
\overline{Q} =\int\,\frac{dz}{2\pi i}\,\overline{H} +
\int\frac{d\overline{z}}{2\pi i}\,\overline{J}~.
\end{array}
\end{equation}
To find the commutation relations between the charges $Q$ and
$\overline{Q}$, we must first derive the braiding relations
of the non-local
conserved currents $J$, $\overline{J}$. To this end we will make
use of the well-known identity
\begin{equation}
e^A\,e^B=e^B\,e^A\,e^{\lbrack A,B\rbrack}~,
\quad \lbrack A,\lbrack A, B\rbrack\rbrack=
\lbrack B,\lbrack A,B\rbrack\rbrack=0~.
\end{equation}
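As an illustration, the first of the relations below follows by setting $A=ia\varphi(z)$, $B=ib\varphi(z')$ and assuming the standard equal-time commutator of the quasi-chiral boson, $\lbrack\varphi(z),\varphi(z')\rbrack=\pm i\pi$ for $z\lessgtr z'$:
\begin{equation}
e^{ia\varphi(z)}\,e^{ib\varphi(z')}=
e^{ib\varphi(z')}\,e^{ia\varphi(z)}\,
e^{-ab\lbrack\varphi(z),\varphi(z')\rbrack}=
e^{\mp i\pi ab}\,e^{ib\varphi(z')}\,e^{ia\varphi(z)}~,\quad z\lessgtr z'~.
\end{equation}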
We then obtain the following braiding relations
\begin{equation}
\begin{array}{ll}
e^{ia\varphi(z)}e^{ib\varphi(z')}=
e^{\mp i\pi ab}\,e^{ib\varphi(z')}e^{ia\varphi(z)}~,
&\quad z\lessgtr z'~,\\ \\
e^{ia\varphi^{-}(z)}e^{ib\varphi^{-}(z')}=
e^{\mp i\pi ab}\,e^{ib\varphi^-(z')}e^{ia\varphi^-(z)}
~, &\quad z\lessgtr z'~,\\ \\
e^{ia\overline{\varphi}(\overline{z})}
e^{ib\overline{\varphi}(\overline{z}')}=
e^{\pm i\pi ab}\,e^{ib\overline{\varphi}(\overline{z}')}
e^{ia\overline{\varphi}(\overline{z})}~,
&\quad \overline{z}\lessgtr \overline{z}'~,\\ \\
e^{ia\overline{\varphi}^-(\overline{z})}
e^{ib\overline{\varphi}^-(\overline{z}')}=
e^{\pm i\pi ab}\,e^{ib\overline{\varphi}^-(\overline{z}')}
e^{ia\overline{\varphi}^-
(\overline{z})}~, &\quad \overline{z}\lessgtr \overline{z}'~,\\ \\
e^{ia\varphi(z)}e^{ib\overline{\varphi}(\overline{z}')}=e^{i\pi ab}
\,e^{ib\overline{\varphi}(\overline{z}')}
e^{ia\varphi(z)}~, &\quad \forall z,\overline{z}'~,\\ \\
e^{ia\varphi^-(z)}e^{ib\overline{\varphi}^-(\overline{z}')}=
e^{i\pi ab}\,e^{ib\overline{\varphi}^-(\overline{z}')}e^{ia\varphi^-(z)}~,
&\quad \forall z,\overline{z}'~.
\end{array}
\end{equation}
Using the representation of the twist fields $\sigma,
\overline{\sigma}$ in terms of
scalar bosonic fields which was proposed in ref. \cite{AZ},
we can derive the following
braiding relations:
\begin{equation}
\begin{array}{ll}
\sigma(z)\sigma(z')=e^{\mp i\pi/8}\,\sigma(z')\sigma(z)~,&\quad z\lessgtr z'~,
\\ \\
\overline{\sigma}(\overline{z})\overline{\sigma}
(\overline{z}')=e^{\pm i\pi/8}\,\overline{\sigma}(\overline{z}')
\overline{\sigma}(\overline{z})~,&\quad \overline{z}\lessgtr \overline{z}'~,
\\ \\
\sigma(z)\overline{\sigma}(\overline{z}')=
e^{+i\pi/8}\,\overline{\sigma}(\overline{z}')\sigma(z)~,&
\quad \forall z,\overline{z}'~.
\nonumber
\end{array}
\end{equation}
Consequently the non-local conserved currents have the non-trivial braiding
relations
\begin{equation}
J(x,t)\overline{J}(y,t)=
q^{\nu}\,\overline{J}(y,t)J(x,t)~,
\end{equation}
where
\begin{equation}
q=e^{-i\pi}~,\quad
\nu = 1/8-a^2-b^2~.
\end{equation}
\indent
Using the above braiding relations and the expressions
\calle{currents}, one finds that
the conserved charges satisfy the relations
\begin{eqnarray}
Q\overline{Q}-q^{\nu}\,\overline{Q}Q
=\frac{\lambda}{2\pi i}\,\int_t\,
(dz\partial_z+d\overline{z}\partial_{\overline{z}})\,
A\, e^{i(a+\alpha_0/2)\varphi(z)}
e^{i(b+k)\varphi^{-}(z)}\times \nonumber\\
\times A\,e^{i(a+\alpha_{0}/2)\overline{\varphi}(\overline{z})}
e^{i(b+k)\overline{\varphi}^-(\overline{z})}~.
\label{QQ}
\end{eqnarray}
Now let us recall that the scalar field $\varphi^-$ lives on the orbifold
$S^1 / {\Bbb Z}_2$ and hence the momentum $k$ must be
quantized. Therefore, the above
relations must be transformed to
\begin{eqnarray}
\widehat Q_{\epsilon}\widehat{\overline{Q}}_{\overline{\epsilon}}-
q^{\nu_{\epsilon\overline{\epsilon}}}\,
\widehat{\overline{Q}}_{\overline{\epsilon}}\widehat Q_{\epsilon}&=&
{\lambda\over 2\pi i}\, \sum\,
A_L^{nm}A_R^{nm}\, \int_t \, (dz\,\partial_z+
d\overline{z}\,\partial_{\overline{z}})\times \nonumber\\
&\times & e^{i(a_L^{nm}+\alpha_0/2)\varphi(z)+
i(a_R^{nm}+\alpha_0/2)\overline{\varphi}(\overline{z})}\times\nonumber\\
&\times & e^{i(b_L^{nm}+k_L^{nm})\varphi^-(z)+
i(b_R^{nm}+k_R^{nm})\overline{\varphi}^-(\overline{z})}~,
\end{eqnarray}
where
\begin{equation}
\begin{array}{l}
\nu_{\overline{\epsilon}\epsilon}=
1/8-a_L^{nm}a_R^{nm}-b_L^{nm}b_R^{nm}
\\ \\
k_L^{nm}=k_L^{nm}(\epsilon,\epsilon')=
{n\over R} + \left( m+{\epsilon+\epsilon'\over 2}
\right)\,{R\over 2}~,
\\ \\
k_R^{nm}=k_R^{nm}(\overline{\epsilon},\epsilon')=
{n\over R}-\left( m+
{\overline{\epsilon}+\epsilon'\over 2}\right) \,{R\over 2}~.
\nonumber
\end{array}
\end{equation}
The constants $a_L^{nm}$, $a_R^{nm}$, $b_L^{nm}$,
$b_R^{nm}$, $A_L^{nm}$, $A_R^{nm}$ are obtained from the
relations \calle{constants} and
$\epsilon,\overline{\epsilon},
\epsilon'\in\{0,1\}$.
Finally, the topological charge for the model \calle{action}
is defined as follows:
\begin{eqnarray}
{\cal T}_{\rm top}&=&\int_{-\infty}^{+\infty}\,dx\,\partial_x\Phi(x)+
\int_{-\infty}^{+\infty}\,dx\,\partial_x\Phi^-(x)\nonumber\\
&=&\int_{-\infty}^{+\infty}\,dx\,\partial_x\,(\varphi +
\overline{\varphi})+
\int_{-\infty}^{+\infty}\,dx\,\partial_x(\varphi^- +
\overline{\varphi}^-)
\nonumber\\
&=&T_{\rm top}+\overline{T}_{\rm top}+
T_{\rm top}^-+\overline{T}_{\rm top}^-~,
\label{top-charg}
\end{eqnarray}
where $\Phi$, $\Phi^-$ and the quasi-chiral components
$\varphi, \overline{\varphi},
\varphi^-,\overline{\varphi}^-$ are related by the following
equations:
\begin{equation}
\begin{array}{l}
\varphi(x,t)=\frac{1}{2}\,
\left(\Phi(x,t)+\int_{-\infty}^x\, dy\, \partial_t
\Phi(y,t)\right)~,\\
\nonumber\\
\overline{\varphi}(x,t)=\frac{1}{2}\,\left(\Phi(x,t)-
\int_{-\infty}^x\, dy\,\partial_t
\Phi(y,t)\right)~,\\
\nonumber\\
\varphi^-(x,t)=\frac{1}{2}\,\left(\Phi^-(x,t)+
\int_{-\infty}^x\,dy\,\partial_t\Phi^-(y,t)\right)~,\\
\nonumber\\
\overline{\varphi}^-(x,t)=\frac{1}{2}\,\left(\Phi^-(x,t)-
\int_{-\infty}^x\,dy\,
\partial_t\Phi^-(y,t)\right)~.
\end{array}
\end{equation}
These equations guarantee that
$\Phi=\varphi+\overline{\varphi}$ and $\Phi^-=\varphi^-+
\overline{\varphi}^-$. Taking all this into account, the
right-hand side of equation \calle{QQ}
can be re-expressed in terms of the
topological charges defined in \calle{top-charg}:
\begin{eqnarray}
\widehat{Q}_\epsilon\widehat{\overline{Q}}_{\overline{\epsilon}} -
q^{\nu_{\overline{\epsilon}\epsilon}}\,
\widehat{\overline{Q}}_{\overline{\epsilon}}\widehat{Q}_\epsilon
= \frac{\lambda}{2\pi i}\,
\sum\, A_L^{nm}A_R^{nm}\,
\lbrack 1-e^{i(a_L^{nm}+\alpha_0/2)T_{\rm top}+
i(a_R^{nm}+\alpha_0/2)\overline{T}_{\rm top}}\times\nonumber\\
\times e^{i(b_L^{nm}+k_L^{nm})T_{\rm top}^-+
i(b_R^{nm}+k_R^{nm})\overline{T}_{\rm top}^-}\rbrack~.~~~~~
\label{QQ2}
\end{eqnarray}
Then, one can easily calculate the commutators
\begin{equation}
\label{TQ}
\begin{array}{l}
\lbrack T_{\rm top},Q_\epsilon^{nm}\rbrack=
a_L^{nm}\, Q_{\epsilon}^{nm}~,\quad
\lbrack \overline{T}_{\rm top},
\overline{Q}_{\overline{\epsilon}}^{nm}\rbrack=
a_R^{nm}\,\overline{Q}_{\overline{\epsilon}}^{nm}~, \\ \\
\lbrack T_{\rm top}^-,Q_{\epsilon}^{nm}\rbrack=
b_{L}^{nm}\, Q_{\epsilon}^{nm}~,\quad
\lbrack\overline{T}_{\rm top}^-,
\overline{Q}_{\overline{\epsilon}}^{nm}\rbrack=
b_R^{nm}\,\overline{Q}_{\overline{\epsilon}}^{nm}~.
\end{array}
\end{equation}
Thus, the commutation relations \calle{TQ},
together with the relations \calle{QQ2},
constitute, to the lowest non-trivial order in perturbation theory,
the algebra which is the symmetry of the $S$-matrix of the theory.
Unfortunately, the isomorphism between the algebra
\calle{QQ2},\calle{TQ} and the Hopf
algebra has not yet been established; hence, the universal $R$-matrix of
this hidden Hopf algebra has not been studied. We hope to address
these open questions in future work.
\section{Conclusions}
To summarize, in the present paper we have introduced a new
integrable model
in quantum field theory. The novelty of the model resides in the fact that
it is built on a hyper-elliptic surface instead of the usual
Euclidean plane. The quantum symmetry of the model has been identified
in terms of the non-local conserved charges. This has led to a generalization
of the method first introduced by Bernard and LeClair \cite{BL}
for the affine Toda field theories where only boson fields are involved.
As is understood very well by now, the quantum non-local conserved charges
provide a quantum field theoretic basis for understanding quantum groups.
Unfortunately, the mapping from the physical algebra satisfied by the non-local
charges to the $q$-deformed Lie algebra has not been discovered yet.
If this mapping is found, one will be able to study the universal $R$-matrix
and consequently uncover the structure of the $S$-matrix.
\vspace{.5cm}
{\bf Acknowledgements}
We would like to thank A. LeClair, F. Smirnov and R. Poghossian for helpful
discussions.
\vspace{.5cm}
chem-ph/9504003
\section{Introduction}
The conventional approach to solving electronic structure problems has been
through the use of basis set expansion of wavefunctions.\cite{szabo/ostlund}
While these methods can
produce highly accurate results, there are a few drawbacks. Amongst them, the
completeness of the basis set is always a concern, treating aperiodic systems
with plane wave bases leads to waste in computational effort, and most
importantly the method scales unfavorably with system size.
Several recent studies have shown that accurate {\it ab initio} results can be
generated by a real space representation of the same problem. By decomposing
the multicenter problem into several single center problems and by propagating
the orbital residues in real space, Becke\cite{becke} has obtained impressive
accuracy in
Density Functional calculations on polyatomic systems. More recently,
Chelikowsky et al.\cite{chelikowsky/troullier/saad} have developed a
finite-difference pseudopotential method
and successfully applied it to the {\it ab initio} computation of properties of
several diatomics.
Considerable effort has been expended in recent years towards developing
linear scaling solutions to the electronic structure problem.
Researchers have focussed on using multigrid (MG) methods and/or localized
orbitals to overcome the unfavorable scaling.
Bernholc et al.\cite{bernholc/yi/sullivan} have used a full MG algorithm with
non-uniform grids to
perform real space electronic structure calculations. They present results for
H and H$_2$. Davstad\cite{davstad} has discretized the Hartree-Fock (HF)
equations and used
MG methods to solve the resultant equations for diatomics. Teter
and coworkers\cite{white/wilkins/teter} have used a finite element basis in
conjunction with MG to solve
for the electronic structure of several one orbital systems.
Baroni and Giannozzi\cite{baroni/giannozzi} represented the Hamiltonian in real
space and developed a Lanczos method which solves directly for the
ground state electron
density.
Within the plane wave basis scheme, Galli and
Parrinello\cite{galli/parrinello} have proposed a nonorthogonal, localized
orbital
approach.
Mauri et al.\cite{mauri/galli/car} and Ordejon et
al.\cite{ordejon/drabold/grumbach/martin} have
developed related methods employing localized, orthogonal
(or generalized Wannier) orbitals.
Stechel et al.\cite{stechel/williams/feibelman} have presented a general
algorithm for iteratively obtaining
the occupied subspace using nonorthogonal, localized orbitals. A
different approach has been taken by three
groups\cite{li/nunes/vanderbilt,daw,carlsson} who have developed
methods for variational solution for the one electron density matrix.
These methods utilize cutoffs in the density matrix beyond some length
scale, and a `purification transformation' to preserve idempotency in
the density matrix. Finally, an exact path integral formulation of
Kohn-Sham (KS)-Density Functional Theory (DFT) has been
developed\cite{harris/pratt,yang,pratt/hoffman/harris} over the last
ten years; it is the only approach that uses just the diagonal one electron
density.
In Density Functional Theory\cite{parr/yang,dreizler/gross} within the
Local Density Approximation (LDA),
solving for the ground electronic state of a collection of nuclei
and electrons is equivalent to minimization of the Kohn-Sham Energy
Functional (KSEF). In broad terms, the principal components of a real space
minimization of the KSEF
are: (1) solving for the electrostatic
potential due to the nuclei and electrons, which serves as an input for (2)
propagating the KS orbitals while maintaining orthonormality. The evolving KS
orbitals define a new electronic distribution, which in turn defines a new
potential for the orbitals. Several approaches exist for the iteration of the
above process to self-consistency. It is essential that both (1) and (2) be
solved by linear scaling methods to achieve favorable scaling for the entire
solution process.
Orthogonalization of $N$ delocalized orbitals requires $O(N^3)$ steps. In the
context of generalized
Wannier functions\cite{kohn,kohn-II}, one can obtain orbitals that are
exponentially localized in
systems with band gaps, and localized with power-law decay for metals. One of the
principal advantages of the real space approach is that these localized
orbitals need not be orthogonalized if they possess no overlap in space. If
such is the case, then methods such as Full Approximation Scheme-Multi Grid
(FAS-MG) developed by Brandt et al.\cite{brandt,brandt/mccormick/ruge} can be
used to propagate
the KS orbitals in a rigorously $O(N)$ scheme.
Conceptually, we are then left with the task of generating the electrostatic
potential due to the electrons and nuclei by a linear scaling
method.
Traditionally, FFT methods (which scale
as $N\log N$) have been used to solve for the potential resulting from the
electron-electron and electron-nuclei interaction.
Becke's\cite{becke/dickson} method generates the potential by
decomposing the charge density around various nuclei in the system.
The Poisson equation is solved on a radial and angular mesh around each
ion center.
The overall potential is recovered by addition of the single center potentials.
The electrostatic energy due to the interaction of nuclei has
typically been solved by Ewald (scales as $N^{3/2}$) summation.
York and Yang\cite{york/yang} have modified the Ewald method to develop the
fast Fourier
Poisson method that scales as $N\log N$.
We have developed a physically
intuitive method that solves for the entire electrostatic potential
`in one shot' and exhibits rigorous linear scaling.\cite{merrick/iyer/beck} It
involves approximating the singular nucleus as
distributed over a portion of the grid and solving the Poisson equation for
the resultant charge distribution (electrons and nuclei) using
a full multigrid solver. In this research,
we use a unit cube of charge multiplied by the atomic number $Z$.
The size of the cube at a given scale
is dictated by the grid separation $h$.
The electrons and nuclei are thus placed
on an equal footing in terms of the Poisson equation. In this way,
the entire electrostatic problem is solved, including all electron-electron,
electron-nucleus, {\it and} nucleus-nucleus interactions, in a fast
linear scaling step. A distinct advantage of this approach is that
it obviates the need for Ewald summation to compute the
nuclear contributions to the total energy for periodic systems;
we
have computed electrostatic
energies of periodic ionic lattices to high accuracy with this
method.\cite{merrick/iyer/beck}
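As a quick plausibility check of the distributed-nucleus idea (our own sketch, not the paper's multigrid solver): the potential of a small uniform cube of total charge $Z$ agrees with the point-charge value $Z/r$ a few grid spacings away, since the cube's dipole and quadrupole moments vanish by symmetry. The `cube_potential` helper below is hypothetical; it evaluates the potential by direct summation over sub-cells.

```python
import numpy as np

def cube_potential(Z, side, point, n=8):
    """Potential (atomic units) at `point` due to total charge Z spread
    uniformly over a cube of the given side length centred at the origin,
    evaluated by direct summation over n**3 equal sub-cells."""
    c = (np.arange(n) + 0.5) / n * side - side / 2.0   # sub-cell centres
    X, Y, W = np.meshgrid(c, c, c, indexing="ij")
    q = Z / n**3                                       # charge per sub-cell
    r = np.sqrt((X - point[0])**2 + (Y - point[1])**2 + (W - point[2])**2)
    return float(np.sum(q / r))
```

For a unit cube of side $h=0.5$ the far field already matches $Z/r$ to high accuracy at $r=3$, which is why boundary conditions of the form $Z/R$ remain accurate for the bare-nucleus problem.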
This work deals with the use of this novel approach to solve for the
electrostatic
potential in minimization of the KSEF.\cite{acs/dc/note}
We have also used a simple nested iteration
scheme, as a precursor to incorporating Brandt's FAS-MG, to propagate the KS
orbitals in coordinate space. Section II deals briefly with DFT-LDA and
presents details of our algorithm. In the following section, we present results
on model multi-orbital atomic and molecular systems to exhibit the accuracy of
the distributed
nucleus approximation.
We summarize our findings and discuss
future research plans in Section IV.
\section{Theory and Methods}
\subsection{Definitions}
The Kohn-Sham total
energy\cite{parr/yang,payne/teter/allan/arias/joannopoulos} can be represented
as (we consider only doubly occupied
states here, and atomic units are used throughout):
\begin{equation}
E[\{ \psi_i \}] = 2 \sum_{i=1}^{N/2} \int \psi_i^* \left[ - \frac{1}{2}
\nabla^2 \right] \psi_i \, d^3 {\bf r} + \int V_{ion}({\bf r}) \rho ({\bf r})
d^3 {\bf r}
\label{eq:ksef}
\end{equation}
$$
+ \frac{1}{2} \int \frac{\rho({\bf r}) \rho({\bf r}')}
{\mid {\bf r} - {\bf r}' \mid } d^3 {\bf r} \, d^3 {\bf r}'
+ E_{XC}[\rho ({\bf r})] + E_{nucleus}(\{ {\bf R}_N \} ) . $$
\noindent
The set of all wavefunctions, $\{ \psi_i \}$, are the occupied one electron
orbitals.
The first term is the total kinetic energy, the second
is
due to the electron-nucleus electrostatic energy, the third is the
electron-electron electrostatic interaction, the fourth is the
exchange-correlation
energy, which if known exactly would give the exact ground state
energy, and the final term is the total nucleus-nucleus electrostatic
energy.
The electron density is given by:
\begin{equation}
\rho({\bf r}) = 2 \sum_{i=1}^{N/2} \mid \psi_i ({\bf r}) \mid ^2 .
\label{eq:rho}
\end{equation}
\noindent
The objective, then, is to determine the set of KS orbitals, $\{ \psi_i \}$,
that
minimize the Kohn-Sham energy functional. The self-consistent solution of the
KS equations define the orbitals that minimize the KSEF:
\begin{equation} [- \frac{1}{2} \nabla^2 + V_{eff} ] \psi_i({\bf r}) = E_i
\psi_i({\bf r}),
\label{eq:ks}
\end{equation}
where
\begin{equation} V_{eff}({\bf r}) =
V_{ion}({\bf r}) + \int \frac{\rho({\bf r'})}
{\mid {\bf r} - {\bf r'} \mid} d {\bf r'}
+ V_{xc}(\rho({\bf r})) .
\label{eq:terms}
\end{equation}
The first two terms in the effective potential are the total
electrostatic contribution to the electronic part of the
total energy, which is long ranged, while the
exchange correlation potential in the LDA depends only on the local
electron density.
We have used the exchange-correlation potential of Vosko et
al.\cite{vosko/wilk/nusair}
(VWN) which was parametrized from the Monte Carlo data of Ceperley
and Alder.\cite{ceperley/alder} We have assumed the paramagnetic form here
since we are only
interested in doubly occupied states in this work.
\subsection{Grid Representation}
We represent the wavefunctions and operators on an evenly
spaced Cartesian grid. The nuclei are represented as a cube of charge located
at the grid point corresponding to the nucleus position. The effective
potential (operator) is diagonal in the coordinate representation; thus
its application is trivial. In this paper, we represent the
kinetic energy operator using a finite-difference (FD) representation.
For atomic and molecular
problems, we find that we need at least a 6$^{th}$ order FD form
to obtain accurate results; all computations in this work have used an
8$^{th}$ order form.
Our findings are consistent with
those of Chelikowsky et al.\cite{chelikowsky/troullier/saad}, who
recently discussed use
of a FD representation in DFT calculations.
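To make the stencil concrete, the following sketch (our own illustration, not the production code) applies the standard 8$^{th}$ order central-difference second-derivative stencil to a one-dimensional grid function with zero boundary values; the three-dimensional kinetic operator is the sum of such terms along each axis.

```python
import numpy as np

# Standard 8th-order central-difference coefficients for d^2/dx^2.
C8 = np.array([-1/560, 8/315, -1/5, 8/5, -205/72, 8/5, -1/5, 8/315, -1/560])

def kinetic_1d(psi, h):
    """Apply -1/2 d^2/dx^2 to a 1-D grid function (spacing h),
    treating values outside the grid as zero."""
    lap = np.zeros_like(psi)
    n = len(psi)
    for k, c in zip(range(-4, 5), C8):
        # accumulate c * psi[i + k] over the indices where i + k is valid
        lo, hi = max(0, -k), min(n, n - k)
        lap[lo:hi] += c * psi[lo + k:hi + k]
    return -0.5 * lap / h**2
```

For $\psi=\sin x$ the interior values reproduce $\tfrac{1}{2}\sin x$ essentially to machine precision at modest grid spacings, illustrating why high-order forms are attractive on uniform grids.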
Full or `exact' solution of the grid problem corresponds to
completely solving
a discretized version of the continuous problem.
Thus, there are two issues of accuracy. First,
how accurate is the grid representation of the partial differential
equations? Second, how close is one to a complete solution to the
grid represented problem? We note here that, since our problem is not
represented by a Hamiltonian in a complete
basis set, one is {\it not guaranteed} total energies above the exact
ground state. That is, the grid-based approach is a variational
calculation, but does not necessarily satisfy the variational
theorem. One simply knows that by going to a higher resolution
representation, results closer to the exact energy will be
obtained if the problem is completely solved at that finer
scale.
\subsection{Minimization Strategy}
In order to locate the ground state electron density, we must either
solve the Kohn-Sham one electron equations (Eqn. \ref{eq:ks}) to self
consistency, or
(equivalently)
directly minimize the KSEF (Eqn. \ref{eq:ksef}) with respect to wavefunction
variations. The latter leads to the familiar steepest descent equation:
\begin{equation} \dot{\psi_i}({\bf r}) = -\frac{\delta E[\{\psi_i ({\bf
r})\}]}
{\delta \psi_i^* ({\bf r})} .
\label{eq:steep}
\end{equation}
Locating the ground state amounts to propagating Eqn. \ref{eq:steep},
while maintaining orthonormality constraints, until a limit in the
magnitude of the forces is reached.
We have minimized the KSEF by using steepest descent, Gauss-Seidel (SOR)
and conjugate gradient methods. We have experimented with various `step sizes'
for
steepest descent and Gauss-Seidel calculations and chosen the one that leads
to fastest convergence.
In Gauss-Seidel
propagation, the updated wavefunction value at grid
point $i-1$ is used to update the old value at grid point $i$.
That is, instead of updating all values and then writing
the new wavefunction vector into the old, the new values
are written sequentially as the propagation passes through
the grid.
We found Gauss-Seidel
propagation to be substantially more efficient than steepest descent, and
we employed Gauss-Seidel in our later calculations.
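The in-place update pattern can be illustrated on a model problem (our sketch; the paper applies the same idea to the KS orbital propagation). For the 1-D equation $-u''=b$ with a 3-point stencil, a Gauss-Seidel sweep writes each new value immediately, so point $i$ already sees the fresh value at $i-1$:

```python
import numpy as np

def gauss_seidel_sweep(v, b, h):
    """One in-place Gauss-Seidel sweep for -u'' = b on a uniform grid
    with spacing h and fixed (zero) boundary values.  The updated value
    at i-1 is used immediately when point i is relaxed."""
    for i in range(1, len(v) - 1):
        v[i] = 0.5 * (v[i - 1] + v[i + 1] + h * h * b[i])
    return v
```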
Conjugate gradient provides an efficient and robust minimizer. We have
used the algorithm developed by Payne et
al.\cite{payne/teter/allan/arias/joannopoulos} in certain minimization
calculations. In our method, however, all of the propagation equations are in
coordinate space.
The wavefunctions are orthogonalized at each step of propagation
by the Gram-Schmidt procedure.
The method is
efficient and accurate,
it breaks possible spurious symmetries generated by initial
conditions in the wavefunctions, and leads to the preservation
of ordered energy states.
Orthonormalization is essential
to prevent the collapse of all wavefunctions to the ground state.
With the orthonormalization, the minimization is a well-defined
process for the many electron problem; it should,
when completed, locate
a single minimum in the energy functional represented on the grid
(although multiple minima may in principle occur, we did not encounter
unphysical states in this work). The resulting electron density is the
grid solution to the functional minimization problem of locating
the ground state electron density in Kohn-Sham theory.
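A minimal sketch of the orthonormalization step (we use modified Gram-Schmidt, the numerically stabler variant; the paper does not specify which variant was employed). Orbitals are stored as flattened grid vectors, and the discrete inner product carries the volume element $h^3$:

```python
import numpy as np

def gram_schmidt(orbitals, h):
    """Orthonormalize the rows of `orbitals` (flattened grid functions)
    under the discrete inner product <f|g> = h**3 * sum(f * g)."""
    w = h**3
    for i in range(len(orbitals)):
        for j in range(i):
            # subtract the projection onto the already-finished orbital j
            orbitals[i] -= w * np.dot(orbitals[j], orbitals[i]) * orbitals[j]
        orbitals[i] /= np.sqrt(w * np.dot(orbitals[i], orbitals[i]))
    return orbitals
```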
\subsection{Nested Iteration for the Orbitals}
In the nested iteration for the wavefunction variational
calculation, minimization is carried out on each
scale until the solution reaches a limit value, beginning
on the coarsest scale. Then, the
solutions (both the wavefunctions and the electrostatic potential)
are linearly interpolated to the next finer
scale, and minimization begins on this scale.
The process is continued until the finest scale is reached, where
we iterate until a self-consistent solution is obtained.
Typically, we use three grid levels, where the
next finer grid spacing was always a factor of two smaller
than the previous coarser scale.
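In one dimension, the coarse-to-fine interpolation used in the nested iteration looks as follows (our sketch for a factor-of-two refinement): coincident points are copied, and new midpoints are averaged from their two coarse neighbours.

```python
import numpy as np

def prolongate_1d(coarse):
    """Linearly interpolate a 1-D grid function to the grid with half
    the spacing (2n-1 points from n points)."""
    fine = np.zeros(2 * len(coarse) - 1)
    fine[::2] = coarse                              # coincident points
    fine[1::2] = 0.5 * (coarse[:-1] + coarse[1:])   # new midpoints
    return fine
```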
\subsection{Multigrid for the Poisson Equation}
At each iteration step in the minimization process, the Poisson
equation must be solved to generate the electrostatic portion
of the evolving effective potential:
\begin{equation}
\nabla^2 \phi ({\bf r}) = - 4 \pi \rho ({\bf r}).
\label{eq:poisson}
\end{equation}
\noindent
We solve this equation using a full multigrid cycle.
Multigrid for the Poisson equation is known to be
a linear scaling process.
The solution of the Poisson equation is embedded within
the nested iteration for the orbitals.
The Poisson equation is discretized on the same grid as the
Schr{\"o}dinger equation. For consistency,
the same representation (8$^{th}$ order FD) for $\nabla^2$
is used as for the kinetic energy operator.
The Poisson equation is an elliptic partial differential
equation: solution requires the input of the charge density and
boundary conditions (either finite or periodic). In this paper,
we treat finite systems and fix the values of $\phi({\bf r})$
at the boundaries of the grid. Once a new value of the orbitals
is obtained following a propagation step, a new charge density is
constructed, the old values of the potential are taken as the
initial $\phi({\bf r})$ (except for the very first
solution of the Poisson equation), and the multigrid
process is initiated. Since the input values of $\phi({\bf r})$
are then relatively close to the solution, the process is
rapid.
On a grid, the Poisson equation
can be written as the following matrix equation:
\begin{equation}
{\bf Au} = {\bf b}
\label{eq:matrix}
\end{equation}
\noindent
where $\bf A$ is the matrix representation of the $\nabla^2$ operator
in real space, $\bf u$ is the potential vector on the grid, and $\bf b$ is
the vector representing $- 4 \pi \rho ({\bf r})$. If $\bf u$ is the exact
solution for the given
fine grid representation, and $\bf v$ the evolving solution during
iteration, then Eqn.\ref{eq:matrix} can equally well be represented as:
\begin{equation}
{\bf Ae} = {\bf r}
\label{eq:error}
\end{equation}
\noindent
where ${\bf e} = {\bf u} - {\bf v}$ (the error) and ${\bf r} = {\bf b}
- {\bf Av}$ (the residual). The residual is known at the beginning
of the iteration, and solution of Eqn. \ref{eq:error} yields the complete
error
in the initial guess for the solution. By adding $\bf e$ to $\bf v$,
the solution $\bf u$ is obtained.
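The relation between Eqns. \ref{eq:matrix} and \ref{eq:error} can be verified directly on a small dense system (a sketch using a direct solve in place of coarse-grid iteration): the residual is computable from the current iterate alone, and adding the solution of the error equation recovers $\bf u$ exactly.

```python
import numpy as np

def correct(A, b, v):
    """Correct an approximate solution v of A u = b by solving the
    residual equation A e = r and returning v + e."""
    r = b - A @ v               # residual, known during iteration
    e = np.linalg.solve(A, r)   # error equation  A e = r
    return v + e
```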
Once the fine grid approximation is obtained, the multigrid
cycle generates corrections to the initial guess by passing
the error and residual to
coarser scales (restriction process) and iterating on the residual
equation. This process is carried out on a sequence of grids going
from fine to coarse and back to fine (interpolation).
The mathematical arguments
for the convergence behavior of multigrid are subtle and are not
discussed here. In words, by passing to a coarser scale, the
long wavelength modes in the error
appear more oscillatory and are thus damped at a greater rate.
By generating corrections on a sequence of coarser scales
and passing this error information back to the fine scale,
critical slowing down
can be overcome. The resulting algorithm is fast and linear
scaling, often requiring on the order of 10 iterations
or less on the fine scale.
Interested readers are referred to
excellent review articles of Brandt\cite{brandt} and Briggs\cite{briggs} for
more details.
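The cycle just described can be condensed into a few lines for the 1-D model problem $-u''=b$ (3-point stencil, zero boundaries): smooth, restrict the residual, recurse for the coarse-grid correction, interpolate back, and smooth again. This is an illustration of the idea only, not the full-multigrid solver used in the paper.

```python
import numpy as np

def smooth(v, b, h, sweeps=3):
    # Gauss-Seidel relaxation for -u'' = b with zero boundary values.
    for _ in range(sweeps):
        for i in range(1, len(v) - 1):
            v[i] = 0.5 * (v[i - 1] + v[i + 1] + h * h * b[i])

def v_cycle(v, b, h):
    """One multigrid V-cycle on a grid of 2**k + 1 points."""
    smooth(v, b, h)                     # pre-smoothing
    if len(v) <= 3:
        return v
    r = np.zeros_like(v)                # fine-grid residual r = b - A v
    r[1:-1] = b[1:-1] - (2 * v[1:-1] - v[:-2] - v[2:]) / h**2
    rc = np.zeros((len(v) + 1) // 2)    # full-weighting restriction
    rc[1:-1] = 0.25 * (r[1:-2:2] + 2 * r[2:-1:2] + r[3::2])
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h)   # coarse-grid A e = r
    e = np.zeros_like(v)                # linear interpolation of e
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    v += e
    smooth(v, b, h)                     # post-smoothing
    return v
```

A handful of cycles drives the algebraic error below the $O(h^2)$ discretization error, at $O(N)$ work per cycle.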
\section{Results}
\subsection{Hydrogen-like atoms and H$_2^+$}
One approach to solve for the relevant orbital is to treat these one electron
systems
as paramagnetic and incorporate the full effective
potential.\cite{puska/nieminen/manninen} Instead, we
treat the electrostatic potential as a function of the bare nucleus alone.
This reduces the problem to a fixed potential eigenvalue problem and provides
a stringent test of the distributed nucleus approximation.
As stated earlier,
the nucleus is represented as a cube of charge, of magnitude Z, at the
grid point corresponding to the nucleus position.
Since the nuclear potential is long ranged, and there is no electronic
density to shield the nucleus, $Z/R$ boundary conditions are imposed
on $\phi({\bf r})$ when
solving the Poisson equation. Since we treat the electrons and nuclei
together, our potential has an additional nucleus self interaction term
associated with it. This self energy is a constant for given Z and grid
spacing and needs to be subtracted from the computed energy to obtain the true
energy of the system.
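The Poisson solve and self-energy subtraction can be sketched as follows. The grid size and spacing are illustrative, and plain Jacobi relaxation stands in for the multigrid and Gauss-Seidel solvers used in the actual calculations; the key ingredients are the cube of charge at the nuclear grid point, the Z/R boundary values, and the constant self-energy $\frac{1}{2}\int\rho\phi$.

```python
import numpy as np

# Sketch (illustrative parameters): a cube of charge Z at the central grid
# point, Z/R Dirichlet boundary conditions, and Jacobi relaxation for the
# Poisson equation grad^2 phi = -4 pi rho (atomic units). The nucleus
# self-energy E_self = (1/2) Z phi(nucleus) is the constant to be subtracted.
Z, n, h = 1.0, 17, 0.5
c = n // 2
ax = h * (np.arange(n) - c)
X, Y, Zc = np.meshgrid(ax, ax, ax, indexing='ij')
R = np.sqrt(X ** 2 + Y ** 2 + Zc ** 2)

rho = np.zeros((n, n, n))
rho[c, c, c] = Z / h ** 3                 # cube of charge at the nucleus point

phi = Z / np.maximum(R, h)                # initial guess; boundary values = Z/R
for _ in range(2000):                     # Jacobi sweeps (interior only)
    phi[1:-1, 1:-1, 1:-1] = (
        phi[:-2, 1:-1, 1:-1] + phi[2:, 1:-1, 1:-1] +
        phi[1:-1, :-2, 1:-1] + phi[1:-1, 2:, 1:-1] +
        phi[1:-1, 1:-1, :-2] + phi[1:-1, 1:-1, 2:] +
        4.0 * np.pi * h ** 2 * rho[1:-1, 1:-1, 1:-1]) / 6.0

e_self = 0.5 * h ** 3 * np.sum(rho * phi)  # = (1/2) Z phi at the nucleus
```

The computed self-energy scales as $Z^2/h$, which is why it is a fixed constant for a given Z and grid spacing; away from the nucleus the solution reproduces the Z/R Coulomb tail.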
The energy functional
has been minimized on a three-tiered regular
Cartesian grid. The grids are labelled coarse, intermediate and fine.
We have experimented with several starting configurations for the orbitals
ranging from particle-in-a-box states to hydrogen-like wavefunctions. We
propagate the orbitals using the steepest descent, Gauss-Seidel or conjugate
gradient recipes.
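The steepest-descent recipe can be sketched on a one-dimensional toy problem: the orbital is pushed along the negative gradient $(H-E)\psi$ of the constrained energy functional and renormalized after each step. The harmonic potential, grid, and step size below are illustrative stand-ins for the actual nuclear problem.

```python
import numpy as np

# Steepest-descent minimization of <psi|H|psi> under normalization, for the
# toy Hamiltonian H = -1/2 d^2/dx^2 + x^2/2 (exact ground-state energy 0.5).
n, box = 201, 20.0
h = box / (n - 1)
x = np.linspace(-box / 2, box / 2, n)
v = 0.5 * x ** 2                            # stand-in for the nuclear potential

def apply_h(psi):
    hpsi = np.zeros_like(psi)
    hpsi[1:-1] = (-0.5 * (psi[:-2] - 2.0 * psi[1:-1] + psi[2:]) / h ** 2
                  + v[1:-1] * psi[1:-1])
    return hpsi

psi = np.exp(-x ** 2)                       # crude starting guess
psi /= np.sqrt(h * np.sum(psi ** 2))
tau = 0.002                                 # step size below the stability limit
for _ in range(5000):
    hpsi = apply_h(psi)
    e = h * np.sum(psi * hpsi)              # Rayleigh quotient (current energy)
    psi -= tau * (hpsi - e * psi)           # steepest-descent step
    psi /= np.sqrt(h * np.sum(psi ** 2))    # re-impose normalization
```

Gauss-Seidel and conjugate gradient replace the fixed-step descent direction but act on the same residual $(H-E)\psi$, which is why the choice of propagator mainly affects the cost per sweep rather than which modes converge slowly.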
Regardless of the starting configuration we find convergence to be rapid.
Table \ref{tab:h-like-results} summarizes the
results for these calculations, where we have started with particle-in-a-box
states. We note several interesting points: We are
pleasantly surprised by the accuracy of the distributed nucleus approximation
in solving for the ground electronic state. We compute accurate ground state
energies and the virial theorem is satisfied to around 1\% accuracy.
As expected, more accurate results
are obtained as we go to finer grids. Even using a simple nested scheme
(as opposed to the FAS-MG) we find great gains in computational efficiency in
locating the minimum of the energy functional.
Table \ref{tab:h2+-results} presents the results for the H$_2^+$ ion at various
internuclear
separations. The grids were chosen to accommodate each nucleus on grid
points. This calculation illustrates that we are able to generate accurate
absolute energies and
equally accurate binding energies.\cite{wind} Generation of accurate binding
energies is
crucial in structure determination and Monte Carlo or Molecular
Dynamics calculations.
\subsection{Neon}
We now attempt to locate the ground state electronic density of the 10
electron Ne atom. Our calculation, DFT-LDA (VWN), treats all electrons
explicitly and
involves the propagation of five KS orbitals. Within the DFT-LDA approximation
the energy for the neon atom is
-128.214 Hartree.\cite{perdew/zunger} The corresponding Hartree-Fock energy is
-128.547 Hartree.\cite{fischer}
As with hydrogen-like atoms, the nucleus is treated as a cube of charge.
However, there are a few important differences between this multiorbital
computation and
the previous one electron calculations. Since the electrostatic potential is
dependent on the charge density, the Poisson equation has to be re-solved with
every update of the wavefunctions.
Further, due to the shielding of the nucleus by the
electronic density we expect the potential to vanish at the boundaries of the
computational grid.
Our results for these calculations, where we have started with
particle-in-a-box states, are summarized in Tables
\ref{tab:ne-results}, \ref{tab:ne-moments-r} and \ref{tab:ne-moments-r2}. As
stated earlier, we are not guaranteed total energies above that of the exact
ground state and this is borne out in the calculation with $h_{fine}$=0.90/4
(Table \ref{tab:ne-results}). However, for a given grid spacing the
minimization is robust and we solve the discretized equations accurately. Using
a nested iteration scheme with $h_{fine}$=0.95/4
we are within 0.5\% of the calculated energy for the neon atom. This result
was obtained by iterating 256 times on the coarse, 128 times on the
intermediate and 64 times on the fine grid.
Figure \ref{fig:fig-I}a illustrates the significant gains one obtains by
using this approach as opposed to iterating only on the fine scale. One
requires on the order of 10$^3$ direct iterations on
the fine scale alone to obtain an energy within 1 Hartree of the converged
results. In contrast, we require merely 20 iterations on the fine scale with
the nested scheme to attain such accuracy. Also note that iterations on the
coarse scale are roughly 1/64$^{th}$ (and 1/8$^{th}$ on the intermediate
scale) as expensive as on the fine scale. In addition, iterating on the coarser
scales
allows us to remove substantial portions of the long wavelength errors in the
orbitals more
efficiently. While we have reduced the effects of critical slowing down (CSD),
the phenomenon is not completely eliminated (Figure \ref{fig:fig-I}b) by the
nested scheme. After 64 iterations on the fine scale the energy of the system
is
computed to be -127.900 Hartree. After 128 iterations it is -127.992 Hartree,
after 256 iterations it is -128.011 and finally, after 512 iterations the
energy
for the neon atom is -128.013 Hartree. Thus, a considerable number of expensive
fine scale iterations have to be performed to obtain the final converged
solution. Also, we observe deviations from the
Virial theorem by as much as 15 to 20\%. Our investigation indicates that
this is primarily due to the poor representation of the core 1s orbital, which
contains an overwhelming portion of the energy. A calculation for a
hydrogenic atom
with Z=10 (Table \ref{tab:h-like-results}),
with the same grid as for neon, indicated a similar 15$\%$ departure.
To analyze the convergence of the KS orbitals we have calculated their radial
moments as the solution evolves. $\langle R \rangle$ probes the regions closer
to
the nucleus and $\langle R^2 \rangle$ provides an understanding of
the tails of the orbitals. The $\langle R \rangle$ and $\langle R^2 \rangle$
we calculate are somewhat different from those calculated by Perdew and
Zunger.\cite{perdew/zunger} To facilitate a direct comparison with their
calculation we have used a modified VWN potential with the exchange term
only.\cite{compare}
That our results are different should come as no surprise since the grid
representation of the orbitals
is somewhat crude. Despite this, we observe atomic shell structure in the
radial distribution function consistent with HF calculations. It appears that
the convergence of the 1s
orbital to the eventual solution is rapid.
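The grid evaluation of the radial moments can be sketched as follows, using a hydrogen 1s orbital (exact $\langle R \rangle = 1.5$ and $\langle R^2 \rangle = 3$ in atomic units) as a stand-in for a converged KS orbital; the grid parameters are illustrative.

```python
import numpy as np

# Radial moments of a grid-represented orbital, as simple lattice sums:
# <R> = h^3 sum |psi|^2 r  and  <R^2> = h^3 sum |psi|^2 r^2.
n, h = 81, 0.25
ax = h * (np.arange(n) - n // 2)
X, Y, Z = np.meshgrid(ax, ax, ax, indexing='ij')
R = np.sqrt(X ** 2 + Y ** 2 + Z ** 2)

psi = np.exp(-R)                             # hydrogen 1s, unnormalized
psi /= np.sqrt(h ** 3 * np.sum(psi ** 2))    # normalize on the grid
r_mean = h ** 3 * np.sum(psi ** 2 * R)       # probes regions near the nucleus
r2_mean = h ** 3 * np.sum(psi ** 2 * R ** 2) # probes the orbital tails
```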
We speculate
that two interlinked factors contribute to the somewhat slower convergence
of the 2s and 2p orbitals. Firstly, since the forces on the 1s orbital are
much greater (due to its proximity to the nucleus) the convergence process
is accelerated. This is borne out by the rate of convergence of $\langle R
\rangle$
and $\langle R^2 \rangle$ for this orbital (no wonder the frozen core
approximation is so good!). The 2s and 2p orbitals are more delocalized
and therefore have longer wavelength modes associated with them.
It is well known that residual modes of $\lambda \leq 4h$ (high frequency) are
readily eliminated by iterating on a scale with grid spacing $h$. It is the
slow elimination of the longer wavelength error modes in the evolving solution
that causes CSD (Figure \ref{fig:fig-I}). Thus, while the nested scheme provides
significant improvement
we still encounter some CSD. This appears to be independent of the method
(Figure \ref{fig:fig-II}) chosen to propagate the KS orbitals.
\section{Discussion}
In summary, we have used the distributed nucleus approximation to compute the
overall electrostatic potential accurately with a linear scaling algorithm. We
have obtained encouraging results in using this approximation for the solution
of the Kohn-Sham orbitals for single electron and multiorbital cases. In
general, we compute energies to high accuracy, and the orbital representation
is adequate. We feel that this method can be successfully employed to
perform large scale simulations of interesting condensed phase systems. For the
purposes of high resolution electronic structure calculations one would need
a significantly larger number of grid points.
The nested iteration scheme highlights the importance of length scales in
solving for the KS orbitals. We have presented clear evidence that direct
iteration on the fine scale alone is an inefficient process. The use of coarser
scales enables us to obtain dramatic improvements in convergence and postpones
the onset of critical slowing down until we are closer to the eventual
solution. This phenomenon is evident in all three propagation methods and
suggests that smoothing of long wavelength modes of the error in the solution
is of more importance than
the propagation method used on each scale.
The evidence we have presented suggests that we would benefit greatly by
adopting the
FAS-MG scheme. The advantages are: (1) the method scales rigorously in a linear
fashion, as long as the $N^3$ orthogonalization bottleneck is overcome by use of
localized orbitals, and critical slowing down is completely overcome (as has
been done in the solution of the Poisson equation), (2) the method lends itself
readily to the use of adaptive grids which should improve the orbital
representation around the nucleus, and finally (3) with the incorporation of
computational `zones of refinement' the storage requirements can be reduced to
modest amounts in
large scale simulations.
\section{Acknowledgments}
We would like to thank Achi Brandt, David Hoffman, Donald Kouri,
Randall LaViolette, Thomas Marchioro II, Ruth Pachter, Frank Pinski,
Lawrence Pratt, Ellen
Stechel, and Priya Vashishta for helpful discussions concerning
this work. This research was supported by NSF grant CHE-9225123.
We thank the Ohio Supercomputer Center for their support of this
project through grant time on the Cray-YMP, on which some of these
calculations were performed.
TLB would like to thank AFOSR and Dr.\ Ruth Pachter at Wright
Patterson Air Force Base for support during a summer faculty
fellowship.
\newpage
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\bf Z & \bf n$_{fine}$ & \bf h$_{fine}$ & \bf Nested ? & \bf $\Delta$ E\% & \bf
[1- E$_{pot}$/2E$_{kin}$]\% & \bf Iterations \\
\hline
1 & 4 & 1.2 & No & -4.882 & 4.728 & 35 \\
\hline
1 & 8 & 0.6 & No & -0.623 & 2.770 & 57 \\
\hline
1 & 16& 0.3 & No & 0.268 & 1.516 & 186\\
\hline
1 & 16& 1.2/4& Yes & 0.267 & 1.609 & 44 \\
\hline
1 & 4 & 1.4 & No & -6.200 & 2.694 & 37 \\
\hline
1 & 8 & 0.7 & No & -1.470 & 2.412 & 61 \\
\hline
1 & 16& 0.35 & No & -0.096 & 0.637 & 198\\
\hline
1 & 16& 1.4/4& Yes & -0.096 & 0.577 & 46 \\
\hline
5 & 16& 0.4/4& Yes & -0.582 & 0.577 & 64 \\
\hline
10& 16& 0.225/4& Yes & -0.833 & 1.190 & 68 \\
\hline
10& 16& 0.950/4& Yes & -2.236 & 15.219 & 64 \\
\hline
100 & 16& 0.03125/4& Yes & -2.092 & 2.521 & 73 \\
\hline
\end{tabular}
\caption{Results for calculations on hydrogen-like atoms using the distributed
nucleus approximation. Z refers to the nuclear charge. There are $2 n_{fine}
+1$ lattice points (in one dimension) on the calculation grid. An answer of Yes
under
$\bf{Nested ?}$ indicates that a three tiered scheme was used to propagate the
KS orbitals. An answer of No corresponds to direct iteration on only the
single grid with grid spacing $h_{fine}$. We specify only the number of
iterations on the finest grid. Note that no effort has been made to optimize
grid spacing for Z=5, 10 and 100. The calculation for Z=10 and $h_{fine} =
0.95$ was performed to compare against calculations for the neon atom.}
\label{tab:h-like-results}
\end{table}
\begin{table}
\begin{tabular}{|c|c|c|c|c|}
\hline
\bf R & \bf $\Delta$ E\% & \bf $\Delta$ E$_{binding}$\% & \bf ($\langle
T_{calc} \rangle
- \langle T_V \rangle$)/$\langle T_V \rangle$ \% & \bf ($\langle V_{calc}
\rangle - \langle V_V \rangle$)/$\langle V_V \rangle$ \% \\
\hline
0.6 & 0.192 & -0.551 & -0.532 & -0.186 \\
\hline
0.8 & 0.457 & -3.390 & -0.473 & -0.199 \\
\hline
1.2 & 0.004 & 0.183 & 0.480 & 0.183 \\
\hline
1.4 & 0.128 & 1.659 & -1.630 & -0.615 \\
\hline
1.6 & 0.194 & 0.291 & -0.539 & -0.195 \\
\hline
1.8 & 0.262 & 0.936 & -0.064 & -0.023 \\
\hline
2.0 & 0.340 & 0.819 & 0.121 & 0.043 \\
\hline
2.2 & 0.433 & 0.669 & 0.623 & 0.213\\
\hline
2.6 & 0.664 & 0.144 & 1.600 & 0.572\\
\hline
\end{tabular}
\caption{Presented are the results for the H$^+_2$ ion at various internuclear
separations. All calculations were performed with the nested iteration scheme
using the Gauss-Seidel algorithm for wavefunction propagation. Grid spacings
were chosen to accommodate the nuclei at specific lattice points. Exact
results were obtained from the calculations of Wind. The last two columns
indicate deviations from the molecular Virial theorem.}
\label{tab:h2+-results}
\end{table}
\begin{table}
\begin{tabular}{|c|c|c|}
\hline
\bf h$_{fine}$ & \bf Energy &\bf [1- E$_{pot}$/2E$_{kin}$]\% \\
\hline
PZ & -128.214 & 0.0 \\
\hline
0.90/4 & -129.096 & 14.353 \\
\hline
0.95/4 & -127.900 & 15.332 \\
\hline
1.00/4 & -126.607 & 18.496 \\
\hline
1.05/4 & -125.237 & 20.592 \\
\hline
\end{tabular}
\caption{We present results for the Ne atom for various grid spacings.
All calculations were performed with the nested iteration scheme
using the Gauss-Seidel algorithm for wavefunction propagation. We have started
these calculations with 9 grid points and 256 iterations on the coarse scale.
As we move to finer scales the number of grid points scales up and the number
of iterations scales down by a factor of 2 for each successive scale. In this
and all other captions `PZ' refers to the DFT-LDA results of Perdew and
Zunger. The last column tabulates deviations from the Virial theorem.}
\label{tab:ne-results}
\end{table}
\begin{table}
\begin{tabular}{|c|c|c|c|}
\hline
& \bf 1s & \bf 2s & \bf 2p \\
\hline
HF & 0.158 & 0.892 & 0.965 \\
\hline
PZ & 0.159 & 0.906 & 0.990 \\
\hline
32 & 0.1069 & 0.9205 & 1.1429 \\
\hline
64 & 0.1061 & 0.8609 & 1.1007 \\
\hline
128 & 0.1062 & 0.8264 & 1.0790 \\
\hline
256 & 0.1062 & 0.8128 & 1.0725 \\
\hline
512 & 0.1062 & 0.8097 & 1.0914 \\
\hline
\end{tabular}
\caption{$\langle R \rangle$ for the Ne orbitals are presented at different
stages of iteration on the fine scale. For example, $\langle R
\rangle_{1s}$=0.1069 after 32 iterations on the fine scale. This calculation
was performed using the nested scheme with
$h_{fine}$=0.95/4. `HF' refers to moments calculated from Hartree-Fock
wavefunctions. To facilitate a direct comparison with `PZ' we have used a
modified VWN potential with exchange only.}
\label{tab:ne-moments-r}
\end{table}
\begin{table}
\begin{tabular}{|c|c|c|c|}
\hline
& \bf 1s & \bf 2s & \bf 2p \\
\hline
HF & 0.034 & 0.967 & 1.229 \\
\hline
PZ & 0.034 & 1.005 & 1.326 \\
\hline
32 & 0.0323 & 1.0736 & 1.6997 \\
\hline
64 & 0.0319 & 0.9511 & 1.6025 \\
\hline
128 & 0.0319 & 0.8715 & 1.5465 \\
\hline
256 & 0.0319 & 0.8363 & 1.5263 \\
\hline
512 & 0.0319 & 0.8279 & 1.5219 \\
\hline
\end{tabular}
\caption{$\langle R^2 \rangle$ for the Ne orbitals are presented at different
stages of iteration on the fine scale. For example, $\langle R^2
\rangle_{1s}$=0.0323 after 32 iterations on the fine scale. This calculation
was performed using the nested scheme with
$h_{fine}$=0.95/4. }
\label{tab:ne-moments-r2}
\end{table}
\newpage
\begin{figure}
\epsfysize=6.0in
\centerline{\epsffile{latest.2.ps}}
\caption{Figure a illustrates the significant gain in computational
efficiency that one obtains with a multiscale method to propagate the KS
orbitals. For the nested scheme, iterations on the coarse scale are weighted by
a factor of 1/64 and those on the intermediate scale by 1/8. The fact that
critical slowing down is not completely eliminated by the nested scheme is
illustrated in Figure b.}
\label{fig:fig-I}
\end{figure}
\newpage
\begin{figure}
\epsfysize=5.0in
\centerline{\epsffile{2.port.ps}}
\caption{This figure illustrates that some degree of CSD remains on all
scales, even though the nested cycle has led to substantial acceleration. We
have
made no effort to optimize the number of iterations on each scale. While
conjugate gradient requires the fewest number of iterations, the time per
iteration (update of all five orbitals) is roughly 3-4 times that of
Gauss-Seidel, making the methods competitive.}
\label{fig:fig-II}
\end{figure}
\newpage
\section{Introduction}
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{figures/motivation.pdf}
\caption{Users' multi-domain behaviors. The definition of \textit{domain} can be \textit{recommendation scenario} or \textit{item category}.}
\label{fig:motivation}
\end{figure}
As an effective remedy for information overload, personalized recommender systems (RSs) aim to make effective and satisfying choices for users, and have been deployed in many online services such as news feeds, E-commerce platforms and social media.
Advanced RSs usually involve multiple recommendation \textbf{scenarios} or domains, and each scenario contains a set of items related to the scenario's topic and marketing strategy. Users interact with these scenarios to satisfy their diverse demands~\cite{feng2018learning}. For example, the E-commerce platform Taobao provides diversified shopping spots including product search, homepage feed and banner, live broadcast and so on, as shown in the left part of Fig.~\ref{fig:motivation}. Baidu serves as a comprehensive website where users can read news, watch videos and more.
Broadly speaking, different item \textbf{categories} can also be regarded as multiple domains. As in the right part of Fig.~\ref{fig:motivation}, users usually interact with various categories such as clothes, food and more for their different demands.
\textbf{Multi-domain recommendation} (MDR) has attracted increasing research attention in recent years, the goal of which is to improve the recommendation performance of all domains simultaneously.
There are both commonality and diversity among users' multi-domain behaviors.
For the commonality, multiple domains usually have common users and overlapped items. A user tends to have similar behavior patterns across different domains, e.g., preferring ordinary or fashionable goods. The users' domain-invariant preference and items' static information can be shared across domains.
For the diversity, domains have different topics and marketing strategies, and their item sets are also different.
As a result, users take different behaviors and have domain-specific preferences in different domains.
Graph neural network (GNN) based methods have proven to be powerful for recommendation problems. The user-item interaction log is naturally suitable for modeling as a graph, and GNNs can exploit the interaction graph to capture user preference and facilitate better recommendation services.
Conventional GNN-based methods for MDR can be divided into two types. The first deals with multiple domains separately: for each domain, a user-item interaction graph is constructed and a recommendation model is trained independently. This learns separate representations for different domains to characterize users' domain-specific preferences. However, the sparseness of user-item interactions~\cite{gu2021self,chen2020scenario} is a crucial obstacle, as a long tail of users provide too little feedback to capture their preferences accurately, especially in new domains.
The second alternatively constructs a unified user-item interaction graph from multi-domain data and trains a shared model to provide recommendation services for all domains~\cite{PPGN}.
Considering the intrinsic differences among domains, users tend to have specific demands and preferences under some domains, but the shared model neglects these domain-specific characteristics, which results in limited recommendation performance.
Researchers have proposed some advanced methods~\cite{pretrain,BiTGCF,DAN,dagcn,da-gtcdr} that exploit the prominent feature extraction ability of GNNs and incorporate knowledge transfer to alleviate the sparseness.
\citet{pretrain} employ a pre-training and fine-tuning paradigm, which transfers a pre-trained graph encoder to initialize the node embeddings on the target domain. Since the pretrain-finetune paradigm only improves recommendation accuracy on a single target domain, some works attempt to improve recommendation accuracy on both domains simultaneously.
\citet{BiTGCF} further realize two-way transfer across two domains with a bi-directional feature transfer module. \citet{da-gtcdr} propose a graphical and attentional framework to combine the embeddings of common users from both domains, thus enhancing the quality of user embeddings and improving the recommendation performance on each domain.
Despite their effectiveness, these methods focus on information transfer between only two domains. When employed in more than two domains, they only capture pair-wise relations between domains and miss the high-order connections.
For effective multi-domain recommendation, an appropriate way is to learn from user feedback in all domains and acquire transferable knowledge to obtain better user representations that characterize their domain-specific preferences.
In this paper we propose $\mathsf{H^3Trans}$, a \textbf{h}ierarchical \textbf{h}ypergrap\textbf{h} network based correlative preference \textbf{trans}fer framework to improve multi-domain recommendation. As a general topological structure, a hyperedge can connect an arbitrary number of nodes, and thus hypergraph provides a means for modeling high-order connections in multiple domains.
Above all, we integrate users' multi-domain behaviors into a hypergraph. We view each user as multiple nodes, one per domain, where the representation of each user node characterizes the domain-specific preference. For item nodes, because items' properties are more static than users', we view each item as a single node shared by all domains.
The core of the hypergraph structure constructed by $\mathsf{H^3Trans}$ is two novel types of hyperedges, namely Hyper-I and Hyper-U.
Specifically, we first design a dynamic item transfer module named Hyper-I. For a given domain, we dynamically seek out related items from the user-item interactions of other domains, and construct a hyperedge (named \textit{hyperedge-i}) to connect them, encoding cross-domain item relations. \textit{Hyperedge-i} helps build relations between the items of different domains and captures users' correlative preferences from cross-domain behaviors without interference information. Moreover, we propose a structure-aware aggregator with an attention mechanism to model the message passing through the hyperedges, which makes the item representations more correlated with the target domain and thus improves multi-domain recommendation performance.
We further introduce an adaptive user aggregation module named Hyper-U. Recall that each user is regarded as a separate node per domain; that is, for a given user we acquire separate user representations in multiple domains.
We utilize a hyperedge (named \textit{hyperedge-u}) to connect the separate nodes of a given user, which aggregates the user's scattered preferences among multiple domains. To effectively model the high-order connections among multiple domains, we employ an attention mechanism in the message propagation within such hyperedges. \textit{Hyperedge-u} contributes to transferring correlative preferences from source domains and capturing the commonality among multiple domains. Note that each domain can be viewed as the target domain (with the others as sources), thus our proposed $\mathsf{H^3Trans}$ can improve the quality of user representations for all domains simultaneously.
The contributions are as follows:
\begin{itemize}
\item We propose $\mathsf{H^3Trans}$, a hierarchical hypergraph network based correlative preference transfer framework for multi-domain recommendation. $\mathsf{H^3Trans}$ proposes two hyperedge-based modules, namely Hyper-I and Hyper-U. To our knowledge, this is the first work that investigates hypergraph-based preference transfer in multi-domain recommendation.
\item To improve item representations for cross-domain transfer, Hyper-I performs dynamic item transfer which helps extract correlative preference from the cross-domain behaviors without interference information.
\item To model the high-order connections among users' multi-domain behaviors, Hyper-U aggregates users' scattered preferences in multiple domains and exploits the high-order connections with an attention based propagation layer.
\item Extensive experiments on large-scale production datasets and public datasets are conducted to analyze our proposed $\mathsf{H^3Trans}$, and the results demonstrate its superiority.
\end{itemize}
\section{Preliminary}
\subsection{Definition of Hypergraph}
Compared to an ordinary graph, a hypergraph is a more general topological structure where a \textbf{hyperedge} can connect an arbitrary number of nodes. Formally, a hypergraph is composed of a node set and a hyperedge set. The connectivity of a hypergraph can be represented by an incidence matrix ${H}$, where $h_{ve}=1$ if the hyperedge $e$ contains the node $v$, otherwise $h_{ve} = 0$. Besides, we use $E_v$ to denote a set of hyperedges that connect to node $v$, and use $V_e$ to denote a set of nodes connected to hyperedge $e$. Also, we can define the neighbors $N_v$ of node $v$ as a set of nodes that share at least one hyperedge with node $v$.
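The definitions above (incidence matrix $H$, incident hyperedges $E_v$, connected nodes $V_e$, and neighbors $N_v$) can be made concrete with a toy hypergraph; the node and hyperedge sets below are arbitrary illustrative examples.

```python
import numpy as np

# Toy hypergraph: 5 nodes and 3 hyperedges; each hyperedge lists the
# (arbitrary number of) nodes it connects.
hyperedges = {0: [0, 1, 2], 1: [2, 3], 2: [1, 3, 4]}
num_nodes = 5

# Incidence matrix H: H[v, e] = 1 iff hyperedge e contains node v.
H = np.zeros((num_nodes, len(hyperedges)), dtype=int)
for e, nodes in hyperedges.items():
    for v in nodes:
        H[v, e] = 1

def E_v(v):
    # Set of hyperedges that connect to node v.
    return set(np.flatnonzero(H[v]))

def V_e(e):
    # Set of nodes connected to hyperedge e.
    return set(np.flatnonzero(H[:, e]))

def N_v(v):
    # Neighbors: nodes sharing at least one hyperedge with v.
    return {u for e in E_v(v) for u in V_e(e)} - {v}
```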
\subsection{Problem Definition}
Consider domains $\mathcal{D}_1, \mathcal{D}_2, \dots, \mathcal{D}_T$, where $T$ denotes the number of domains.
For domain $\mathcal{D}_t$, we utilize $\bm{U}^t$ and $\bm{I}^t$ to denote its user ID set and item ID set respectively.
Let $\mathcal{R}^t \in \mathbb{R}^{|\bm{U}^t| \times |\bm{I}^t|}$ denote the user-item interaction matrix of domain $\mathcal{D}_t$. An entry $r_{ui}^t = 1$ means that user $u$ interacted with item $i$ under domain $t$. In this work, we consider click behavior as the interaction type.
The problem of single-domain recommendation is to estimate the scores of unobserved entries in one interaction matrix $\mathcal{R}^t$, and we compute the score between a user and an item as:
\begin{equation}
{\hat{r}}^t_{u, i} = f(z_u, z_i \mid t)
\end{equation}
Here $z_u$ and $z_i$ denote the learned representations of user $u\in\bm U^t$ and item $i\in\bm I^t$ for domain $t$, and $f(\cdot)$ is the similarity function.
The problem of multi-domain recommendation is to estimate the unobserved scores for all interaction matrices $\{\mathcal{R}^t\}_{t=1}^T$. Specifically, the \textbf{user set} $\bm U$ is shared among all $T$ domains, i.e., $\bm{U} = \bm{U}^1 = \bm{U}^2 = \dots = \bm{U}^T$, because each user may actively interact with all domains. For the \textbf{item set} $\bm I$, each domain has its own set and we denote the total item candidate pool as $\bm{I} = \bm{I}^1 \cup \bm{I}^2 \cup \dots \cup \bm{I}^T$.
\begin{figure*}[t]
\centering
\centerline{\includegraphics[width=2\columnwidth]{figures/Overview2.pdf}}
\caption{Overall architecture of $\mathsf{H^3Trans}$. It contains two hyperedge-based modules: adaptive user aggregation (Hyper-U) and dynamic item transfer (Hyper-I). These two modules compose a hierarchical hypergraph network. Different colors refer to different domains. Here we regard the first domain $\mathcal{D}_1$ as the target domain and the others as sources.}
\label{fig:overview}
\end{figure*}
\section{Methodology}
Fig.~\ref{fig:overview} shows the overall architecture of $\mathsf{H^3Trans}$, a \textbf{h}ierarchical \textbf{h}yper-grap\textbf{h} network based correlative preference \textbf{trans}fer framework for multi-domain recommendation.
We introduce the construction of the multi-domain graph and the basic graph neural network in subsections \ref{hypergraph_construction} and \ref{basic_gnn}. Two core modules, namely dynamic item transfer and adaptive user aggregation, compose a hierarchical hypergraph neural network, detailed in subsections \ref{hyper-i} and \ref{hyper-u}. Finally, the training procedure and optimization strategy are introduced in subsection~\ref{hypergraph_train}.
\subsection{Unified Multi-domain Graph}\label{hypergraph_construction}
To improve the recommendation performance in all domains, instead of constructing individual graphs for each domain, we integrate users' multi-domain behaviors into a unified graph $\mathcal{G}=(\mathcal{V}, \mathcal{E})$.
In detail, the node set $\mathcal{V}$ consists of user nodes and item nodes, i.e., $\mathcal{V} = \mathcal{U} \cup \mathcal{I}$. For \textbf{user nodes}, considering the domain discrepancy and the diversity of users' multi-domain behaviors, it is necessary to acquire separate representations for different domains. Thus we regard each user as separate nodes positioned in different domains (these nodes share the same attributes). Specifically, a given user $u\in \bm U$ corresponds to $T$ nodes $(u^1, u^2, \cdots, u^T)$, so the user node set size and the user ID set size satisfy $|\mathcal U| = |\bm U|\cdot T$. Each user node representation characterizes the user's preference under a specific domain. For \textbf{item nodes}, items' properties are more static than users', so we treat each item $i\in \bm I$ as a single node shared across domains; that is, each item $i$ corresponds to exactly one node in the graph, also denoted as $i$.
The \textbf{basic edge set} collects the user-item interactions from all domains, i.e., $\mathcal{R} = (\mathcal{R}^1, \mathcal{R}^2, \cdots, \mathcal{R}^T)$, where $\mathcal{R}^t$ denotes the user-item interaction matrix of domain $\mathcal{D}_t$. This work considers click behavior as the interaction type.
An entry $r_{ui}^t=1$ means that user $u$ has interacted with item $i$ under domain $t$, and we build an interaction edge between the corresponding user node $u^t$ and item node $i$, denoted as $e(u^t, i)$. To clarify which domain the edges belong to, we utilize distinct edge types for different domains. For domain $t$, the edge subset is denoted as $\mathcal{E}^t$, and the whole edge set is the union over all domains, i.e., $\mathcal{E} = \mathcal{E}^1 \cup \mathcal{E}^2 \cup \cdots \cup \mathcal{E}^T$.
With access to user-item interactions in any domain, it is convenient to additionally leverage hyperedges to build cross-domain relations and capture the correlative knowledge during transfer.
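The graph construction above can be sketched with toy data: each user $u$ yields $T$ domain-specific nodes $(u, t)$, each item is one shared node, and a click $r^t_{ui}=1$ becomes an edge $((u,t), i)$ tagged with its domain. All user/item IDs and interactions below are hypothetical.

```python
# Toy construction of the unified multi-domain graph (hypothetical data).
T = 3
users = ['u0', 'u1']
items = ['i0', 'i1', 'i2']
# interactions[t]: (user, item) clicks observed in domain t
interactions = {0: [('u0', 'i0')],
                1: [('u0', 'i1'), ('u1', 'i1')],
                2: [('u1', 'i2')]}

user_nodes = [(u, t) for u in users for t in range(T)]   # |U_nodes| = |U| * T
item_nodes = list(items)                                 # one shared node per item
edges = {t: [((u, t), i) for (u, i) in pairs]            # domain-typed edge sets E^t
         for t, pairs in interactions.items()}
```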
\subsection{Basic Graph Neural Network}\label{basic_gnn}
Based on the unified multi-domain graph, we employ a basic graph neural network that applies the neighborhood aggregation scheme to obtain expressive feature representations for nodes: the representation of an ego node is updated by iteratively aggregating information from its neighbors, encoding the collaborative signal into the node representations. The basic GNN includes four modules: (1) an embedding module that transforms the sparse attribute features of nodes into low-dimensional embedding vectors; (2) a message-passing module with several layers that refines the node representations by aggregating information from neighbors; (3) a readout module that generates the final representations with a readout function; (4) a prediction module that generates the prediction score of how likely user $u$ would interact with item $i$ under domain $t$, based on the learned node representations.
\subsubsection{\textbf{Embedding Module}}
This module maps each node (user or item) into a $d$-dimensional embedding vector $x_{u^t}$ (or $x_i$). For each user node $u^t\in \mathcal{U}$ (or item node $i \in \mathcal{I}$), we acquire its embedding $x_{u^t}$ (or $x_i$) from an embedding look-up table $X \in \mathbb{R}^{(|\mathcal{U}| + |\mathcal{I}|)\times d}$, which can be taken as a parameter matrix. It is noted that a given user corresponds to $T$ nodes, and these nodes share the same initial embedding vector.
\subsubsection{\textbf{Message Passing Module}}
The message-passing module consists of several layers that follow the neighborhood aggregation framework. It can be taken as a two-stage process to refine the node representation by aggregating the information from neighbors. The two stages are neighbor aggregation and node update:
\textbf{Neighbor aggregation}:
\begin{equation}
\begin{split}
h_{\mathcal{N}_{u^t}}^{(l+1)} & = \mathrm{AGG}_\mathrm{U}\left(\{h_i^{(l)} \mid i \in \mathcal{N}_{u^t} \}\right) \\
h_{\mathcal{N}_i}^{(l+1)} & = \mathrm{AGG}_\mathrm{I}\left(\{h_{u^t}^{(l)} \mid {u^t} \in \mathcal{N}_i \}\right) \\
\end{split}
\end{equation}
\textbf{Node update}:
\begin{equation}
\begin{split}
h_{u^t}^{(l+1)} &= \mathrm{UP}_\mathrm{U} \left(h_{u^t}^{(l)}, h_{\mathcal{N}_{u^t}}^{(l+1)} \right) \\
h_i^{(l+1)} &= \mathrm{UP}_\mathrm{I} \left(h_i^{(l)}, h_{\mathcal{N}_{i}}^{(l+1)} \right)
\end{split}
\end{equation}
where $l$ denotes the $l$-th message-passing layer, and $h_{u^t}^{(l)}$ and $h_i^{(l)}$ denote the hidden representations of user node ${u^t}$ and item node $i$, respectively. $\mathrm{AGG}_\mathrm{U}$ and $\mathrm{AGG}_\mathrm{I}$ are the aggregation functions for user and item nodes, and likewise $\mathrm{UP}_\mathrm{U}$ and $\mathrm{UP}_\mathrm{I}$ are the corresponding update functions. Many designs exist for the aggregation and update functions; here we use mean pooling for aggregation and a linear transformation for the node update. Note that the initial representations come from the embedding module, i.e., $h_{u^t}^{(0)}=x_{u^t}, h_i^{(0)}=x_i$.
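As a concrete illustration, the following NumPy sketch performs one user-side message-passing step with the mean-pooling aggregator and linear update described above. The weight matrix, dimensions, and random inputs are purely illustrative; this is a minimal sketch, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4

def agg_mean(neighbor_embs):
    # AGG_U / AGG_I: permutation-invariant mean pooling over neighbors
    return neighbor_embs.mean(axis=0)

def update_linear(h_self, h_agg, W):
    # UP_U / UP_I: linear transformation of the concatenated vectors
    return np.concatenate([h_self, h_agg]) @ W

W = rng.normal(size=(2 * d, d))       # trainable in practice
h_user = rng.normal(size=d)           # h_{u^t}^{(l)}
h_items = rng.normal(size=(3, d))     # neighbor item representations

h_agg = agg_mean(h_items)             # h_{N_{u^t}}^{(l+1)}
h_user_next = update_linear(h_user, h_agg, W)  # h_{u^t}^{(l+1)}
print(h_user_next.shape)              # (4,)
```

The item-side update is symmetric, aggregating over the user nodes adjacent to each item.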
\subsubsection{\textbf{Readout Module}}
After obtaining the representations from all $L$ layers, we utilize a readout layer to generate the final representations:
\begin{equation}
\begin{split}
z_{u^t} & = \mathrm{Readout} \left(h_{u^t}^{(l)} \mid l \in [0, \dots, L]\right), \\
z_i & = \mathrm{Readout} \left(h_i^{(l)} \mid l \in [0, \dots, L]\right).
\end{split}
\end{equation}
Common designs for the readout function include last-layer only, concatenation, and weighted sum. Here we only utilize the output of the last message-passing layer.
\subsubsection{\textbf{Prediction Module}}
The prediction module produces a score estimating how likely user $u$ would interact with item $i$ under domain $t$, based on the learned node representations. It is formulated as:
\begin{equation}
\hat{r}^t_{u, i} = f(z_{u^t}, z_i)
\end{equation}
where $f$ is the scoring function, typically a similarity measure such as the inner product or cosine similarity.
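A minimal sketch of the last-layer readout together with the inner-product scoring function $f$ follows; the toy vectors are illustrative, not learned representations.

```python
import numpy as np

def readout(layer_reprs):
    # last-layer-only readout, as adopted above
    return layer_reprs[-1]

def predict(z_u, z_i):
    # f(z_{u^t}, z_i): inner-product score
    return float(np.dot(z_u, z_i))

layers_u = [np.array([0.0, 1.0]), np.array([1.0, 2.0])]  # h^(0), h^(1)
z_u = readout(layers_u)
z_i = np.array([2.0, 0.5])
print(predict(z_u, z_i))  # 1*2 + 2*0.5 = 3.0
```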
\subsection{Hierarchical Hypergraph Network}
Based on the unified multi-domain graph, we further utilize hyperedges to explore the high-order connections among users' multi-domain behaviors. In this section, we introduce the two core hyperedge-based modules that compose the hierarchical hypergraph neural network: the dynamic item transfer module (Hyper-I) and the adaptive user aggregation module (Hyper-U).
\subsubsection{\textbf{Hyper-I: Dynamic Item Transfer Module}} \label{hyper-i}
In multi-domain recommendation, each domain contains a set of items related to the domain's topic and marketing strategy. Due to these intrinsic differences, directly transferring users' cross-domain behaviors from the source domains to the target is not a good approach: it introduces interfering information and degrades the user representations. To extract correlative preferences from users' cross-domain behaviors for transfer, we design a dynamic item transfer module, named Hyper-I. It dynamically adjusts the source item representations to be more relevant to a given target domain, which helps capture correlative user preferences from the source domains.
Take domain $\mathcal D_t$ as the target domain and the others as source domains. For each source domain $s$, before the message-passing layer that obtains the user node representation by aggregating information from neighboring items, we adjust the item representations to eliminate the domain discrepancy.
Specifically, for each item a user has interacted with under a source domain, we seek out similar items from the target domain $t$ and then construct a hyperedge (named \textbf{hyperedge-i}) to connect these item nodes. This hyperedge encodes a two-level relationship: first, the interacted source item is related to the picked target items; second, the picked target items are also related to each other. We first introduce the method for seeking out the related target items, and then design a structure-aware hypergraph layer to adjust the item representations.
\paragraph{\textbf{Hyperedge Construction.}}
For a given interacted item $i$ in a source domain $s$, we seek out a similar item set $\mathcal S_i^t$ from the target domain $t$, and construct a hyperedge connecting the source item node $i$ and the item nodes of the picked set $\mathcal S_i^t$.
We offer two ways to get similar items: path-based and embedding-based.
\begin{itemize}
\item Path-based: utilize the co-occurrence relation among items. If a user $u$ has clicked on both items $i$ and $j$, the two items are considered similar. We design a walk path $(i\rightarrow u^s \rightarrow u^t \rightarrow j)$ and sample items $j$ in the target domain as similar items.
\item Embedding-based: the path-based method is intuitive but relies heavily on users' interaction histories. The embedding-based method instead uses the hidden item representations $h_i^{(l-1)}$ and a nearest-neighbor algorithm to find the top-$k$ similar items in the target domain.
\end{itemize}
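Assuming inner-product similarity between item embeddings, the embedding-based lookup can be sketched as a brute-force top-$k$ search; this is a stand-in for a proper nearest-neighbor index, and the toy embeddings are illustrative.

```python
import numpy as np

def topk_similar(h_src, target_embs, k=2):
    # rank target-domain items by inner-product similarity to the
    # source item and keep the k most similar ones (the set S_i^t)
    sims = target_embs @ h_src
    return np.argsort(-sims)[:k].tolist()

target_embs = np.array([[1.0, 0.0],
                        [0.0, 1.0],
                        [0.9, 0.1]])
h_src = np.array([1.0, 0.0])
print(topk_similar(h_src, target_embs))  # [0, 2]
```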
\paragraph{\textbf{Graphormer Layer.}}
To perform message passing within a hyperedge, UniGNN~\cite{unignn} and AllSet~\cite{Chien2022YouAA} propose message-passing paradigms on hypergraphs. UniGNN rethinks the message-passing layer of the basic GNN as a two-stage aggregation process: in the first stage, for each hyperedge, a permutation-invariant function aggregates the information of the nodes within it; in the second stage, each node is updated from its incident hyperedges using another aggregation function. The approach of AllSet is similar.
We argue that the above message-passing paradigm fails to model the two-level relationship within {hyperedge-i}. Instead, we employ an attention mechanism~\cite{vaswani2017attention} to adjust the item representations. Moreover, to effectively exploit the two-level relations and leverage the topological structure within {hyperedge-i}, we introduce the matrix of shortest-path distances among the picked nodes (denoted $B$) into the attention layers, as introduced in \cite{graphormer}. The details of this module are illustrated in Fig.~\ref{fig:overview}. Specifically,
\begin{equation}
{h}_i^{(l-1)} \leftarrow \mathrm{GH}_{\mathrm{HyperI}}\left(\mathrm{Concat}\left(h_i^{(l-1)}, \left\{h_j^{(l-1)} | j\in \mathcal S_i^t\right\}\right)\right)[0]
\end{equation}
where $\mathrm{GH}_{\mathrm{HyperI}}(\cdot)$ is the graphormer layer for Hyper-I module:
\begin{equation}
\begin{split}
\mathrm{GH}_{\mathrm{HyperI}} (H_{\mathrm{I}}) & = \mathrm{Concat}\left(\mathrm{Attn}_{\mathrm{I},1}(H_{\mathrm{I}}), \cdots, \mathrm{Attn}_{\mathrm{I},M}(H_{\mathrm{I}}) \right) W_{\mathrm{I}}^O , \\
\mathrm{Attn}_{\mathrm{I},m}(H_{\mathrm{I}}) &= \mathrm{softmax}\left(\frac{Q_{\mathrm{I},m} {K_{\mathrm{I},m}} ^{\top}}{\sqrt{d_{h_i}/M}} + \Phi({B}) \right)V_{\mathrm{I},m}, \\
Q_{\mathrm{I},m} & = H_{\mathrm{I}} W_{\mathrm{I},m}^Q, K_{\mathrm{I},m} = H_{\mathrm{I}} W_{\mathrm{I},m}^K, V_{\mathrm{I},m} = H_{\mathrm{I}} W_{\mathrm{I},m}^V \\
\end{split}
\end{equation}
where $\Phi(\cdot)$ is a learnable function, shared across all layers, that maps the distance between every pair of nodes to a scalar, and $W_{\mathrm{I},m}^Q, W_{\mathrm{I},m}^K, W_{\mathrm{I},m}^V$, and $W_{\mathrm{I}}^O$ are trainable parameters.
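The following single-head sketch illustrates the structure-aware attention above, with the additive bias $\Phi(B)$ supplied as a precomputed matrix. It uses one head, no output projection, and random weights; shapes and values are illustrative, not the authors' implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def biased_attention(H, Wq, Wk, Wv, bias):
    # Attn(H) = softmax(Q K^T / sqrt(d) + Phi(B)) V, single head
    Q, K, V = H @ Wq, H @ Wk, H @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1]) + bias
    return softmax(scores) @ V

rng = np.random.default_rng(0)
n, d = 4, 8                    # source item plus 3 picked target items
H = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
# stand-in for Phi(B): penalize structurally distant node pairs
bias = -rng.integers(0, 3, size=(n, n)).astype(float)
out = biased_attention(H, Wq, Wk, Wv, bias)
print(out.shape)               # (4, 8)
```

The first row of `out` then plays the role of the adjusted source-item representation, matching the `[0]` indexing in the equation above.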
\subsubsection{\textbf{Hyper-U: Adaptive User Aggregation Module}}\label{hyper-u}
After adjusting the item representations with Hyper-I, we acquire the representations of the separate user nodes by aggregating information from their neighboring items. Each user corresponds to multiple nodes that characterize the user's domain-specific preferences. The next step is to transfer correlative user preferences from the source domains to the target. We refine the user representation of the target domain by taking advantage of the user's multi-domain behaviors.
Note that preference transfer in multi-domain recommendation involves more than one source. The key point is how to aggregate a user's scattered preferences across multiple domains and adequately exploit the high-order connections among them. To this end, we integrate a hyperedge-based module, Hyper-U, to realize adaptive user aggregation.
\paragraph{\textbf{Hyperedge Construction}}
We utilize a hyperedge to connect the nodes belonging to the same user, and we name this type of hyperedge \textbf{hyperedge-u}. Within a {hyperedge-u}, each separate node representation characterizes the user's preference under a specific domain. The {hyperedge-u} connects these separate user nodes and bridges information propagation across domains, thus realizing preference transfer. Moreover, since a hyperedge connects multiple nodes, {hyperedge-u} can further exploit the high-order (beyond pairwise) connections among multiple domains. In this way, each user node can access the user's preferences in other domains and adaptively extract correlative information to refine its representation.
\paragraph{\textbf{Multi-head Attention Layer.}}
We design a new message-passing layer for {hyperedge-u} to replace the original layer. For the $l$-th layer, we first acquire the user's separate representations under the multiple domains, denoted $[h_{u^1}^{(l)}, h_{u^2}^{(l)}, \cdots, h_{u^T}^{(l)}]$. The Hyper-U module takes these separate representations as input and refines them by aggregating the user's scattered preferences and transferring knowledge from other domains. Considering the domain discrepancy and the diversity of users' multi-domain behaviors, we employ a self-attention mechanism in the Hyper-U module to adaptively fuse users' cross-domain interest representations. Specifically,
\begin{equation}
[{h}_{u^1}^{(l)}, {h}_{u^2}^{(l)}, \cdots, {h}_{u^T}^{(l)}] \leftarrow \mathrm{MHA}_{\mathrm{HyperU}}([h_{u^1}^{(l)}, h_{u^2}^{(l)}, \cdots, h_{u^T}^{(l)}]),
\end{equation}
where $\mathrm{MHA}_{\mathrm{HyperU}}(\cdot)$ denotes the multi-head attention layer for Hyper-U module:
\begin{equation}
\small
\begin{split}
\mathrm{MHA}_{\mathrm{HyperU}}(H_{\mathrm{U}}) & = \mathrm{Concat}\left(\mathrm{Attn}_{\mathrm{U},1}(H_{\mathrm{U}}), \cdots, \mathrm{Attn}_{\mathrm{U},M}(H_{\mathrm{U}}) \right) W_{\mathrm{U}}^O , \\
\mathrm{Attn}_{\mathrm{U},m}(H_{\mathrm{U}}) &= \mathrm{softmax}\left(\frac{Q_{\mathrm{U},m} {K_{\mathrm{U},m}} ^{\top}}{\sqrt{d_{h_u}/M}} \right)V_{\mathrm{U},m}, \\
Q_{\mathrm{U},m} & = H_{\mathrm{U}} W_{\mathrm{U},m}^Q, K_{\mathrm{U},m} = H_{\mathrm{U}} W_{\mathrm{U},m}^K, V_{\mathrm{U},m} = H_{\mathrm{U}} W_{\mathrm{U},m}^V
\end{split}
\end{equation}
where $W_{\mathrm{U},m}^Q, W_{\mathrm{U},m}^K, W_{\mathrm{U},m}^V$, and $W_{\mathrm{U}}^O$ are trainable parameters.
The multi-head attention layer takes the user's separate node representations as input and exploits the high-order connections with the self-attention mechanism. For each domain, the corresponding node can adaptively refine its preference representation by extracting correlative information from the other domains.
\subsection{Model Optimization}\label{hypergraph_train}
The two hyperedge-based modules, the dynamic item transfer module (Hyper-I) and the adaptive user aggregation module (Hyper-U), compose a hierarchical hypergraph neural network. It realizes correlative preference transfer and exploits the high-order connections among users' multi-domain behaviors.
For model optimization, we mix the multi-domain data and randomly select a sample $(u^t, i)$ from domain $t$ at each training step. Domain $t$ is taken as the target domain and the others as source domains. Moreover, we employ the contrastive InfoNCE loss~\cite{van2018representation} to learn more effective representations, which maximizes the agreement between positive pairs. The loss is formulated as:
\begin{equation}
L(u, i\mid t) = - \log \frac{\exp (sim(z_{u^t}, z_i)/\tau)}{\sum_{i_{-}} \exp(sim(z_{u^t}, z_{i_{-}})/\tau)}
\end{equation}
where $sim(\cdot)$ denotes a similarity measure, for which we use the inner product, $(u^t, i_{-})$ is a randomly sampled negative pair with $r^t_{u, i_{-}} = 0$, and $\tau$ is the temperature hyperparameter.
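A minimal sketch of this loss with inner-product similarity follows. Following the common InfoNCE convention, the positive pair is included in the denominator alongside the sampled negatives; the toy embeddings are illustrative.

```python
import numpy as np

def info_nce(z_u, z_pos, z_negs, tau=0.1):
    # -log exp(sim(u,i)/tau) / sum over {i} U {i_-} of exp(sim/tau);
    # the positive is included in the denominator, per common practice
    sims = np.array([z_u @ z_pos] + [z_u @ z for z in z_negs]) / tau
    sims -= sims.max()        # numerical stability
    return float(-np.log(np.exp(sims[0]) / np.exp(sims).sum()))

z_u = np.array([1.0, 0.0])
z_pos = np.array([1.0, 0.0])      # interacted item
z_negs = [np.array([-1.0, 0.0])]  # sampled negative
print(round(info_nce(z_u, z_pos, z_negs, tau=1.0), 4))  # 0.1269
```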
\section{Experiments}
In this section, we conduct both offline and online experiments to validate the effectiveness of our method. The experiments are designed to answer the following research questions:
\begin{itemize}
\item \textbf{RQ1}: How does our proposed method perform when compared with other state-of-the-art GNN-based methods?
\item \textbf{RQ2}: How do the different components (e.g., unified multi-domain graph, adaptive user aggregation module, dynamic item transfer module) contribute to the model performance?
\item \textbf{RQ3}: Does our method help alleviate the behavior sparseness issue and improve recommendation performance for the relatively inactive users (with fewer interaction items)?
\item \textbf{RQ4}: Does our method achieve improvement when deployed to industrial advertising systems?
\end{itemize}
\subsection{Experimental Settings}
\begin{table}[b]
\centering
\vspace{-1em}
\setlength{\abovecaptionskip}{0cm}
\footnotesize
\caption{Dataset Statistics}
\begin{tabular}{cccc|cccc}
\toprule
\multicolumn{4}{c|}{Product Dataset} & \multicolumn{4}{c}{Public Amazon Dataset} \\
Domains & \#user & \#item & \#click & Domains & \#user & \#item & \#click \\
\midrule
MDR-A & 84.6M & 6.3M & 3.1B & Books & 1.67M & 0.99M & 26.8M \\
MDR-B & 34.0M & 1.4M & 0.6B & Music & 0.11M & 0.12M & 1.5M \\
MDR-C & 24.7M & 0.5M & 0.3B & Movie & 0.23M & 0.08M & 3.1M \\
MDR-D & 29.1M & 0.6M & 0.2B & - & - & - & - \\
\bottomrule
\end{tabular}
\label{tab:dataset}
\end{table}
\subsubsection{\textbf{Datasets}}
We conduct extensive offline experiments on both the public dataset and the product dataset.
\textbf{Public Dataset:}
The Amazon dataset~\cite{ni2019justifying} is a popular dataset for multi-domain recommendation experiments. It provides dozens of domains, of which the frequently used ones are Books, Movies and TV (Movie), and CDs and Vinyl (Music). Following existing research, we binarize the ratings to 1 and 0 (ratings greater than or equal to 4 are positive, the others negative). Besides, we filter out users and items with fewer than 5 interactions.
\textbf{Product Dataset:}
The product dataset is collected from four real-world scenarios of an industrial advertising platform, named MDR-A, MDR-B, MDR-C, and MDR-D.
These four sub-datasets share the same user set and have overlapping items. Each subset consists of users' interacted items. We additionally filter the datasets to retain users/items with at least 5 interactions. Table~\ref{tab:dataset} lists the statistics of both the product dataset and the public Amazon dataset.
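The preprocessing applied above (rating binarization for the Amazon data and the minimum-interaction filter for both datasets) can be sketched as follows. This is pure Python with illustrative field names; a strict 5-core would iterate the filter to a fixed point, which is omitted for brevity.

```python
from collections import Counter

def binarize_and_filter(ratings, pos_threshold=4, min_inter=5):
    # ratings >= 4 become positive (1), others negative (0); then drop
    # users and items with fewer than min_inter interactions (one pass)
    data = [(u, i, 1 if r >= pos_threshold else 0) for u, i, r in ratings]
    u_cnt = Counter(u for u, _, _ in data)
    i_cnt = Counter(i for _, i, _ in data)
    return [(u, i, y) for u, i, y in data
            if u_cnt[u] >= min_inter and i_cnt[i] >= min_inter]

toy = [("u1", "i1", 5), ("u1", "i2", 3), ("u2", "i1", 4)]
print(binarize_and_filter(toy, min_inter=1))
# [('u1', 'i1', 1), ('u1', 'i2', 0), ('u2', 'i1', 1)]
```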
\begin{table*}[t]
\centering
\setlength{\abovecaptionskip}{0.1cm}
\caption{Main results on product dataset}
\begin{tabular}{c|ccc|ccc|ccc|ccc}
\toprule
\multirow{2}{*}{Method} & \multicolumn{3}{c|}{MDR-A} & \multicolumn{3}{c|}{MDR-B} & \multicolumn{3}{c|}{MDR-C} & \multicolumn{3}{c}{MDR-D} \\
 & MRR & HR@20 & HR@50 & MRR & HR@20 & HR@50 & MRR & HR@20 & HR@50 & MRR & HR@20 & HR@50 \\
\midrule
Base & 0.0368 & 2.37\% & 6.46\% & 0.0625 & 4.87\% & 12.60\% & 0.0640 & 4.78\% & 13.20\% & 0.0753 & 5.11\% & 12.65\% \\
PPGN (Mix) & 0.0481 & 2.98\% & 8.47\% & 0.0603 & 4.22\% & 11.61\% & 0.1017 & 8.58\% & 18.99\% & 0.1131 & 7.60\% & 17.59\% \\
MGNN & 0.0544 & 3.68\% & 8.11\% & 0.0699 & 5.34\% & 14.28\% & 0.1079 & 12.22\% & 21.34\% & 0.1428 & 10.67\% & 21.81\% \\
PCRec & 0.0635 & 4.38\% & 9.71\% & 0.0845 & 7.31\% & 16.63\% & 0.1546 & 14.71\% & 25.99\% & 0.1738 & 15.16\% & 26.59\% \\
BiTGCF & 0.0663 & 4.59\% & 10.61\% & 0.0986 & 8.66\% & 18.46\% & 0.1591 & 15.48\% & 26.49\% & 0.1577 & 13.73\% & 23.66\% \\
BiTGCF+ & 0.0750 & 5.08\% & 12.31\% & 0.1237 & 9.87\% & 20.71\% & 0.1713 & 16.15\% & 28.63\% & 0.1685 & 14.76\% & 25.85\% \\
\midrule
$\mathsf{H^3Trans}$ & \textbf{0.1171} & \textbf{7.20\%} & \textbf{16.79\%} & \textbf{0.1686} & \textbf{14.29\%} & \textbf{28.65\%} & \textbf{0.2084} & \textbf{18.78\%} & \textbf{34.89\%} & \textbf{0.2158} & \textbf{18.69\%} & \textbf{32.73\%} \\
\bottomrule
\end{tabular}
\label{tab:main_results}
\vspace{-0.6em}
\end{table*}
\begin{table}[t]
\centering
\small
\setlength{\abovecaptionskip}{0.1cm}
\caption{Main results on the public Amazon dataset}
\begin{tabular}{c|cc|cc|cc}
\toprule
\multirow{2}{*}{Method} & \multicolumn{2}{c|}{Books} & \multicolumn{2}{c|}{Music} & \multicolumn{2}{c}{Movie} \\
& NDCG & HR@20 & NDCG & HR@20 & NDCG & HR@20 \\
\midrule
Base & 0.0270 & 4.71\% & 0.0631 & 13.39\% & 0.0433 & 10.45\% \\
PPGN & 0.0289 & 4.96\% & 0.0660 & 13.93\% & 0.0473 & 11.23\% \\
MGNN & 0.0311 & 5.12\% & 0.0672 & 14.14\% & 0.0465 & 11.03\% \\
PCRec & 0.0331 & 5.31\% & 0.0742 & 15.67\% & 0.0489 & 11.52\% \\
BiTGCF & 0.0359 & 5.57\% & 0.0694 & 14.65\% & 0.0495 & 11.78\% \\
BiTGCF+ & 0.0381 & 5.78\% & 0.0719 & 15.29\% & 0.0509 & 12.02\% \\
\midrule
$\mathsf{H^3Trans}$ & \textbf{0.0399} & \textbf{5.97\%} & \textbf{0.0761} & \textbf{16.01\%} & \textbf{0.0524} & \textbf{12.33\%}\\
\bottomrule
\end{tabular}
\label{tab:public_main_results}
\vspace{-1.3em}
\end{table}
\subsubsection{\textbf{Compared methods}}
We compare $\mathsf{H^3Trans}$ with the following strong baselines. Except for the base model, all baselines attempt to transfer information from other domains in different ways.
\begin{itemize}
\item \textbf{Base}. Base method constructs a user-item bipartite graph and trains models individually for each domain with its user behavior data.
\item \textbf{PPGN}. PPGN~\cite{PPGN} fuses the interaction information of multiple domains into a graph and shares the user features learned from the joint interaction graph. Note that each user has only one node within the joint graph.
\item \textbf{MGNN}. MGNN~\cite{MGNN} integrates users' multi-domain behaviors and constructs the unified multi-domain graph. Nodes belonging to the same user share the same attribute.
MGNN learns domain-specific representation for user nodes.
\item \textbf{PCRec}. PCRec~\cite{pretrain} adopts a pre-training and fine-tuning paradigm to transfer knowledge from the source domain to the target. Here we first pre-train a graph model on the joint graph and then fine-tune it on each domain.
\item \textbf{BiTGCF}. BiTGCF~\cite{BiTGCF} is proposed for dual-target recommendation. It connects the common users of both domains as a bridge and designs a feature transfer layer to realize the two-way transfer of knowledge across two domains. Here we randomly pick two domains to realize the combination layer.
\item \textbf{BiTGCF+}. BiTGCF+ is an extended version of BiTGCF. Here we modify the feature transfer layer and extend it to multi-domain recommendation.
\end{itemize}
\subsubsection{\textbf{Evaluation Protocol}}
We adopt the widely used leave-one-out evaluation: we take the last interaction from each user's interaction history as the test set, and the remaining interactions are used for training.
For users in the test set, we follow the all-ranking protocol~\cite{wang2019neural} to evaluate the top-K recommendation performance. For the product dataset, we report the average HitRate@K (HR@K) and Mean Reciprocal Rank (MRR) on each domain. For the public dataset, we report HR@K and NDCG@K, as these metrics are more common in experiments on public benchmarks.
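Given the rank of each user's held-out item under the all-ranking protocol, the reported HR@K and MRR can be computed as follows; the ranks are illustrative.

```python
def hr_at_k(ranks, k):
    # HitRate@K: fraction of test interactions whose held-out item
    # appears in the top-K of the full ranking
    return sum(r <= k for r in ranks) / len(ranks)

def mrr(ranks):
    # Mean Reciprocal Rank of the held-out items
    return sum(1.0 / r for r in ranks) / len(ranks)

ranks = [1, 3, 25, 60]       # rank of the held-out item per test user
print(hr_at_k(ranks, 20))    # 0.5
print(round(mrr(ranks), 4))  # 0.3475
```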
\subsubsection{\textbf{Implementation Details}}
We provide the implementation details of our proposed model and the baselines. The graph neural network has two layers, and the hidden embedding dimensions are set to [128, 64]. We sample $20$ related items to build each \textit{hyperedge-i} in the Hyper-I module. For model training, we set the batch size to $N = 512$ and adopt the Adam optimizer~\cite{kingma2014adam} with a learning rate of $0.01$.
\subsection{Performance Comparison (RQ1)}
Table~\ref{tab:main_results} and Table~\ref{tab:public_main_results} present the experimental results of $\mathsf{H^3Trans}$ compared with other baselines. From these two tables, we have the following observations.
\begin{itemize}[leftmargin=*]
\item The Base method performs poorly on all domains, which indicates that training a model individually for each domain limits the recommendation performance in multi-domain recommendation.
\item PPGN mixes the multi-domain data and constructs a joint graph for model training. As a result, it achieves a large improvement in most domains. But it still has negative effects on some domains, such as MDR-B, because the domains share the same user representation and neglect the user's domain-specific preferences: the user representation is dominated by the data-rich domains.
\item MGNN takes account of both the common features and the domain-specific features of different domains, which brings improvement to the recommendation service. Note, however, that the common features are acquired only through the shared node attributes, so the information transfer among domains is limited.
\item PCRec performs transfer learning by adopting the pre-training and fine-tuning paradigm. Pre-training on the joint graph helps learn users' common preferences across domains, and fine-tuning on each domain's individual graph makes the user node representations more suitable for that domain. However, fine-tuning is time- and space-consuming for multi-domain recommendation.
\item BiTGCF and BiTGCF+ are two competitive baselines in our experiments. BiTGCF leverages a combination layer to realize the two-way transfer across domains; we extend its feature transfer layer to multiple domains as BiTGCF+. We can see that BiTGCF+ achieves a larger improvement than BiTGCF because it introduces more domains into the multi-domain recommendation. But the improvement is still limited because it simply sums the user's multi-domain representations and neglects the high-order connections among them.
\item $\mathsf{H^3Trans}$ achieves the best performance with significant improvement on all metrics of all domains. This indicates that $\mathsf{H^3Trans}$ benefits from learning the high-order connections among multiple domains extracted by Hyper-U module and transferring correlative information via Hyper-I. The high-quality representations learned from the hypergraph enhance the recommendation performance in all domains.
\end{itemize}
\begin{table*}[t]
\centering
\setlength{\abovecaptionskip}{0.2cm}
\caption{Ablation study on product dataset. Methods refer to different variants of $\mathsf{H^3Trans}$.}
\begin{tabular}{c|ccc|ccc|ccc|ccc}
\toprule
\multirow{2}{*}{Method} & \multicolumn{3}{c|}{MDR-A} & \multicolumn{3}{c|}{MDR-B} & \multicolumn{3}{c|}{MDR-C} & \multicolumn{3}{c}{MDR-D} \\
 & MRR & HR@20 & HR@50 & MRR & HR@20 & HR@50 & MRR & HR@20 & HR@50 & MRR & HR@20 & HR@50 \\
\midrule
Vanilla & 0.0544 & 3.68\% & 8.11\% & 0.0699 & 5.34\% & 14.28\% & 0.1079 & 12.22\% & 21.34\% & 0.1428 & 10.67\% & 21.81\% \\
HU & 0.0750 & 5.08\% & 12.31\% & 0.1237 & 9.87\% & 20.71\% & 0.1712 & 16.15\% & 28.63\% & 0.1685 & 14.76\% & 25.85\% \\
HU+ & 0.0894 & 5.56\% & 13.68\% & 0.1383 & 10.53\% & 23.08\% & 0.1848 & 17.01\% & 29.82\% & 0.1846 & 16.38\% & 28.48\% \\
PHI & 0.1016 & 6.35\% & 15.22\% & 0.1509 & 11.96\% & 24.52\% & 0.1887 & 17.58\% & 30.92\% & 0.1913 & 17.21\% & 29.80\% \\
EHI & 0.1051 & 6.53\% & 15.68\% & 0.1581 & 12.34\% & 25.54\% & 0.1958 & 17.93\% & 31.64\% & 0.1937 & 17.84\% & 30.62\% \\
EHI+ & \textbf{0.1171} & \textbf{7.20\%} & \textbf{16.79\%} & \textbf{0.1686} & \textbf{14.29\%} & \textbf{28.65\%} & \textbf{0.2084} & \textbf{18.78\%} & \textbf{34.89\%} & \textbf{0.2158} & \textbf{18.69\%} & \textbf{32.73\%} \\
\bottomrule
\end{tabular}
\vspace{-0.5em}
\label{tab:ablation_study}
\end{table*}
\subsection{Ablation Study (RQ2)}
For further analysis, we compare different variants of $\mathsf{H^3Trans}$ on the product dataset as an ablation study; the results are listed in Table~\ref{tab:ablation_study}. Vanilla is a basic graph model trained on the unified multi-domain graph, where user nodes learn the common interest only through the shared node attributes.
\subsubsection{\textbf{Effect of Hyper-U module}:}
HU adds the Hyper-U module but without the attention-based layer: it only utilizes a vanilla combination layer to combine a user's separate representations from multiple domains. HU+ integrates our self-attention-based message-passing layer into HU. From the table, we can see that aggregating users' scattered preferences and modeling the high-order connections among multiple domains helps refine the user representation for each separate domain. Moreover, the self-attention mechanism contributes to further improving the representation quality, because the attention layer adaptively extracts correlative knowledge from the source domains.
\subsubsection{\textbf{Effect of Hyper-I module}:}
PHI and EHI are two models that additionally integrate the Hyper-I module, equipped with the path-based and embedding-based methods for seeking out similar items, respectively. Table~\ref{tab:ablation_study} shows that both perform better than HU+, which indicates that the dynamic item transfer module eliminates the domain discrepancy and adjusts the latent item representations to be more correlative to the target domain, without interfering information.
Besides, EHI achieves a marginal improvement over PHI, which shows that the embedding-based method is slightly better than the path-based one. EHI+ is the best variant of our model; it further employs the graphormer layer to exploit the structural information within \textit{hyperedge-i}, consistently gaining around 1\% on HR@20 and 2\% on HR@50.
\subsubsection{\textbf{Effect of multiple domains}:}
Multi-domain recommendation jointly optimizes the recommendation performance of all domains. Intuitively, with more domains we can access more of a user's behaviors and better characterize the user's interests. Here we analyze the effect of introducing different numbers of domains into multi-domain recommendation. The results are reported in Figure~\ref{fig:domains_num}. We can see that performance indeed improves as more domains are introduced, because knowledge can be transferred from more source domains and $\mathsf{H^3Trans}$ helps exploit the high-order connections among them. Additionally, the marginal improvement decreases as more domains are introduced.
\begin{figure}[t]
\centering
\centerline{\includegraphics[width=\columnwidth]{figures/res1.pdf}}
\vspace{-1em}
\caption{Performance comparison over different numbers of domains in MDR}
\label{fig:domains_num}
\vspace{-1em}
\end{figure}
\subsection{Improvement on Behavior Sparseness (RQ3)}
As stated before, GNN-based methods suffer from the behavior sparseness issue, so here we conduct a detailed analysis of the improvement for behavior-sparse users. Specifically, we split the users into five groups G1, G2, G3, G4, and G5 in order of increasing number of interactions: the larger the group ID, the more behaviors the users have. Figure~\ref{fig:sparsity_analysis} reports the percentage increase compared with the Base model. We find that the improvement achieved in the first three groups is more significant than that in the last two. We conclude that $\mathsf{H^3Trans}$ helps more for relatively inactive users (with fewer user-item interactions), indicating that it alleviates the sparseness of user behaviors by transferring knowledge from other domains.
\subsection{Online Experiment (RQ4)}
We have deployed $\mathsf{H^3Trans}$ online in the retrieval stage of our advertising system and conducted an online A/B test for one week. For a fair comparison, we follow the same configuration as the best retrieval model deployed online. The online metrics include CTR and ROI (return on investment).
We observe that $\mathsf{H^3Trans}$ achieves a \textbf{+2.8\%} lift in CTR and a \textbf{+7.26\%} lift in ROI; thus $\mathsf{H^3Trans}$ improves the important online metrics and boosts the performance of our system.
\begin{figure}[t]
\centering
\centerline{\includegraphics[width=\columnwidth]{figures/Sparsity.pdf}}
\vspace{-0.5em}
\caption{Performance comparison over different user groups (percentage increase relative to Base model)}
\label{fig:sparsity_analysis}
\vspace{-1.6em}
\end{figure}
\section{Related Work}
\subsection{Multi-domain Recommendation}
Multi-domain recommendation aims to improve the recommendation performance of all domains by transferring knowledge from related domains. MCF~\cite{zhang2010multi} and ICAN~\cite{xie2020internal} considered multiple collaborative filtering tasks in different domains simultaneously and exploited the relationships between domains.
\citet{ma2018your} further introduced cross-media content information.
Some works focus on users' multiple behavior types (e.g., click, collect, cart, buy). MBGCN~\cite{jin2020multi} and MGNN~\cite{MGNN} propose multi-behavior graph convolutional networks to capture the different influences of each behavior on the target behavior. These models can be used in MDR by replacing the multi-type behaviors with multi-domain behaviors.
Furthermore, by considering each domain as a task, multi-task approaches can be directly applied in MDR. For general MDR, MMoE~\cite{ma2018modeling} models the tradeoffs between domain-specific objectives and inter-domain relationships with a new multi-gate expert strategy.
\subsection{GNNs for Cross-domain Recommendation}
Inspired by the success of graph neural networks~\cite{hamilton2017inductive,kipf2016semi}, researchers have made efforts to exploit the user-item interaction graph to learn users' interest preferences.
GNN-based methods~\cite{he2020lightgcn,wang2019neural,ying2018graph} suffer from the sparseness of user behaviors in new and small domains, and some researchers have sought to alleviate this by transferring information from other domains~\cite{PPGN,pretrain,BiTGCF}. PPGN~\cite{PPGN} fuses the interaction information of two domains into a graph and learns shared features for users. \citet{pretrain} propose a pre-training and fine-tuning paradigm to transfer information to the target domain. \citet{BiTGCF} further realize the two-way transfer of knowledge across two domains with a bi-directional feature transfer module. \citet{da-gtcdr} propose a graphical and attentional framework to combine the embeddings of common users from both domains, thus enhancing the quality of user embeddings and improving the recommendation performance in each domain. Nevertheless, these methods fail to model the high-order connections among more domains when employed in multi-domain recommendation.
\subsection{Hypergraph Learning for Recommendation}
A hypergraph, as a more general topological structure for modeling high-order connections, has been exploited in recommendation~\cite{ji2020dual,yu2021self,xia2021self,wang2020next,zhang2021double,chen2020neural}. \citet{xia2021self} model session-based data as a hypergraph and propose a hypergraph convolutional network for session-based recommendation. \citet{yu2021self} propose a multi-channel hypergraph convolutional network to enhance social recommendation by leveraging high-order user connections. \citet{zhang2021double} incorporate the complex tuple-wise correlations into a hypergraph and propose a self-supervised hypergraph learning framework for group recommendation. Our work is the first to investigate hypergraph learning in multi-domain recommendation, which can exploit the high-order connections among multiple domains and realize correlative preference transfer.
\section{Conclusion}
In this paper, we propose a correlative preference transfer framework with a hierarchical hypergraph network ($\mathsf{H^3Trans}$) to improve multi-domain recommendation. $\mathsf{H^3Trans}$ constructs a unified multi-domain graph and integrates two hypergraph-based modules: an adaptive user aggregation module (Hyper-U) and a dynamic item transfer module (Hyper-I).
$\mathsf{H^3Trans}$ not only exploits the high-order connections among users' scattered preferences in multiple domains, but also transfers correlative user preferences to alleviate the behavior sparseness of each single domain. Extensive experiments demonstrate the superiority of our method.
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
Testing for white noise is important in many econometric contexts. Ignoring
autocorrelation of the residuals in a linear regression model can lead to
erroneous confidence intervals or tests. Correlation of residuals from an ARMA
model or of the squared residuals from an ARCH model can indicate an improper
choice of the order. Investigating the autocorrelation function is also a popular
diagnostic tool in macroeconomics and finance; see e.g. Durlauf (1991) and
Campbell, Lo and MacKinlay (1997).
The earliest tests for white noise were based on confidence intervals for
autocorrelation coefficients, as described in Brockwell and Davis (2006) or
Fan and Yao (2005). See also Xiao and Wu (2011), who recently derived the
asymptotic distribution of the maximum standardized sample covariance of weak
white noise processes. A second approach was established by Grenander and
Rosenblatt (1952), who extended goodness-of-fit tests such as the Kolmogorov and
Cram\'{e}r-von Mises tests to white noise testing. See also Durlauf (1991),
Anderson (1993) and Deo (2000). Following the popular Lagrange multiplier
approach, Delgado, Hidalgo and Velasco (2005) propose a modified test
statistic to be used with estimated residuals. Shao (2011a) has recently
extended this setup to cover the weak white noise null hypothesis.
An appealing feature of Cram\'{e}r-von Mises type tests is the detection of Pitman
local directional alternatives converging to the null at the parametric rate
$n^{-1/2}$, where $n$ is the sample size. This contrasts with the detection
results for Box-Pierce type tests as in Hong (1996) or Paparoditis (2000), who
both consider slower rates of convergence for local alternatives defined
through the spectral density function. Such a finding suggests that
Cram\'{e}r-von Mises tests are more powerful than Box-Pierce ones. One of the
contributions of the present paper is to deliver an opposite conclusion for a
new class of alternatives defined through the autocovariance function. The new
class of alternatives formalizes the idea that small autocorrelation
coefficients, say of order $\rho_{n}$, can be detected provided that there are
enough of them regrouped at reasonably small lags. An important finding of the
paper is that detection is still possible for very small $\rho_{n}=o\left(
n^{-1/2}\right) $. The intuition is as follows. As seen from Hong (1996),
Shao (2011b) and Xiao and Wu (2011), the critical region of the Box-Pierce
test of order $p_{n}\rightarrow\infty$ is%
\begin{equation}
\frac{\sum_{j=1}^{p_{n}}\left( n\widehat{R}_{j}^{2}/\widehat{R}_{0}^{2}-1\right) }{\left( 2p_{n}\right) ^{1/2}}\geq c_{\alpha}\text{,}%
\label{BPcrit}%
\end{equation}
where $c_{\alpha}$ is a normal critical value. Consider an alternative close
enough to the null of independence, so that $\xi_{j}=n^{1/2}\left(
\widehat{R}_{j}/\widehat{R}_{0}-R_{j}/R_{0}\right) $ can be considered as
approximately independent standard normal, where $\widehat{R}_{j}$ and $R_{j}$
denote respectively sample and population covariance at lag $j$. See e.g.
Brockwell and Davis (2006, Theorem 11.2.2) for a justification of such a
rough and intuitive setup. Then taking this approximation as exact gives,
since $\sum_{j=1}^{\infty}R_{j}^{2}/R_{0}^{2}<\infty$,
\begin{align}
\frac{\sum_{j=1}^{p_{n}}\left( n\widehat{R}_{j}^{2}/\widehat{R}_{0}^{2}-1\right) }{\left( 2p_{n}\right) ^{1/2}} & =\frac{n\sum_{j=1}^{p_{n}}R_{j}^{2}/R_{0}^{2}}{\left( 2p_{n}\right) ^{1/2}}+\frac{2n^{1/2}\sum_{j=1}^{p_{n}}R_{j}\xi_{j}/R_{0}}{\left( 2p_{n}\right) ^{1/2}}+\frac{\sum_{j=1}^{p_{n}}\left( \xi_{j}^{2}-1\right) }{\left( 2p_{n}\right) ^{1/2}}\nonumber\\
& =\frac{n\sum_{j=1}^{p_{n}}R_{j}^{2}/R_{0}^{2}}{\left( 2p_{n}\right) ^{1/2}}\left( 1+o_{\mathbb{P}}\left( 1\right) \right) +O_{\mathbb{P}}\left( 1\right) .\label{BPasymp}%
\end{align}
This expansion shows that the Box-Pierce test is consistent provided $\left(
n/\left( 2p_{n}\right) ^{1/2}\right) \sum_{j=1}^{p_{n}}R_{j}^{2}/R_{0}^{2}$
is large enough or diverges. Let $N_{n}$ be the number of correlation
coefficients $R_{j}^{2}/R_{0}^{2}\geq\rho_{n}^{2}$ for $j\in\left[
1,p_{n}\right] $, so that $\left( n/\left( 2p_{n}\right) ^{1/2}\right)
\sum_{j=1}^{p_{n}}R_{j}^{2}/R_{0}^{2}\geq nN_{n}\rho_{n}^{2}/\left(
2p_{n}\right) ^{1/2}$. Hence the Box-Pierce test is consistent provided
\begin{equation}
n^{1/2}\left( \frac{N_{n}}{p_{n}^{1/2}}\right) ^{1/2}\rho_{n}\rightarrow
\infty,\label{BPcons}%
\end{equation}
a condition which allows for $\rho_{n}=o\left( n^{-1/2}\right) $ provided
there are enough correlation coefficients larger than $\rho_{n}$, that is
$N_{n}/p_{n}^{1/2}\rightarrow\infty$ which holds in particular when the exact
order of $N_{n}$ is $p_{n}$. In plain words, summing sample correlations as in
the Box-Pierce statistic makes it possible to detect very small population
correlations provided they are not too sparse and are concentrated at lags
smaller than $p_{n}$. Such a detection feature is lost by Cram\'{e}r-von Mises
type tests, which weight down high-order correlation coefficients, and by the
Xiao and Wu (2011) maximum test. As detailed in Section \ref{Optimal}, such
alternatives include $MA$ processes with a significant long term multiplier but
$o\left( n^{-1/2}\right) $ impulse response coefficients. Such processes therefore
correspond to a macroeconomic scenario where short term policies have no
significant effects whereas long term ones may have an impact.
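The heuristic consistency argument above is easy to try numerically. The sketch below is our own illustration (not the authors' code): it computes the standardized Box-Pierce statistic entering the critical region (\ref{BPcrit}) and contrasts white noise with an $AR(1)$ alternative.

```python
import numpy as np

def standardized_box_pierce(u, p):
    """Standardized Box-Pierce statistic: sum_{j<=p}(n Rhat_j^2/Rhat_0^2 - 1),
    divided by (2p)^{1/2}; compare with a standard normal critical value."""
    u = np.asarray(u, dtype=float)
    n = len(u)
    R0 = np.mean(u * u)                          # Rhat_0, the sample variance
    stat = 0.0
    for j in range(1, p + 1):
        Rj = np.sum(u[: n - j] * u[j:]) / n      # Rhat_j, sample autocovariance
        stat += n * (Rj / R0) ** 2 - 1.0         # centred chi-square(1) summand
    return stat / np.sqrt(2.0 * p)

rng = np.random.default_rng(0)
z_null = standardized_box_pierce(rng.standard_normal(2000), p=20)
```

Under the null, `z_null` is approximately standard normal; under even modest correlation the first term of the expansion dominates and the statistic diverges with $n$.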
An important limitation of the critical region (\ref{BPcrit}) is the use of an
ad hoc order $p_{n}$. Hong (1996), Shao (2011b) and Xiao and Wu (2011)
consider a deterministic $p_{n}\rightarrow\infty$. This is inadequate to
detect alternatives with low-lag correlations: taking $p_{n}=30$ is unlikely
to give a test with power against popular $AR(1)$ or $MA(1)$ alternatives with
a reasonable sample size. Conversely, taking a fixed $p_{n}$ as in the
original Box and Pierce (1970) paper is not suitable to detect higher order
alternatives. The need to properly address the tuning of a smoothing parameter
which plays a role similar to $p_{n}$ has spurred the development of
data-driven approaches for various nonparametric testing problems. A recent
approach, the so-called adaptive approach, focuses on data-driven tests which
detect alternatives in a smoothness class converging to the null at the
fastest possible rate, given that the smoothness class is unknown to the test
builder. See in particular Fan (1996), Spokoiny (1996), Horowitz and Spokoiny
(2001), Guerre and Lavergne (2005), Guay and Guerre (2006) and Chen and Gao
(2007) for various nonparametric models and related null hypotheses of
theoretical or practical relevance. Golubev, Nussbaum and Zhou (2010) have
proved Le Cam equivalence of Gaussian time series with spectral density
functions in a Besov space and the corresponding continuous time Gaussian
white noise model considered in Spokoiny (1996). This result, limited to
Gaussian time series, is of a theoretical nature and cannot deliver ready-to-apply
white noise tests, especially for the weakly correlated alternatives in
(\ref{BPcons}). In fact, most of the data-driven choices of $p_{n}$ proposed
in the white noise testing literature do not consider the adaptive
rate-optimality issue. An exception is Fan and Yao (2005), which outlines, but
does not analyze, a data-driven test based on the maximum of a set of
Box-Pierce statistics. Popular data-driven choices of the order build on Newey
and West (1994); see, among others, the simulation section of Hong and Lee
(2005). This practice is however difficult to justify theoretically, since such
a data-driven order is expected at best to be optimal for estimation of a long
run variance, which is the purpose of Newey and West (1994). As is well
known, this will not produce an adaptive rate-optimal test since the optimal
order for testing differs from the estimation one, see Ingster (1993) and
Guerre and Lavergne (2002). Escanciano and Lobato (2009) propose a
data-driven choice of the order based on an AIC/BIC criterion which is
suitable for estimation but not adaptive rate-optimal for white noise testing.
This contrasts with the new test proposed here which is adaptive rate-optimal
with respect to a class of alternatives allowing for correlation coefficients
of order $o\left( n^{-1/2}\right) $ as in (\ref{BPcons}) which, as far as we
know, has not been previously considered.
A third issue addressed in the paper concerns the behavior of the data-driven
test under the weak white noise hypothesis. For directly observed variables,
Escanciano and Lobato (2009) consider a more restrictive martingale
difference null hypothesis and, as far as we know, the null limit distribution
of test statistics using a data-driven order as in Newey and West (1994) has
not been studied yet. Shao (2011b) considers the case of directly observed or
estimated residuals. He finds that the standardized Box-Pierce statistic has a
standard normal limit distribution under the weak white noise hypothesis, but
only provided $p_{n}\rightarrow\infty$, so that the resulting test would have low
power against low order $AR$ or $MA$ alternatives. When $p_{n}$ is fixed,
available choices of critical values involve the block bootstrap, as in Romano
and Thombs (1996) and Lobato, Horowitz, Nankervis and Savin (2006), or a matrix
standardization of the sample covariances which considerably modifies the
Box-Pierce statistic, as in Lobato, Nankervis and Savin (2002), Francq, Roy and
Zakoian (2005) or Delgado and Velasco (2010). These critical values involve a
block length or a bandwidth for which there is no obvious choice. We design
instead a data-driven choice $\widehat{p}$ of the order used in the test which
is asymptotically equal to $1$ under the weak white noise hypothesis, for
directly observed or estimated residuals. It then suffices to use critical
values for $p=1$, that is for the simple statistic $n\widehat{R}_{1}^{2}$. The
robust critical values of Lobato (2001) can be used when the residuals are
directly observed; for estimated residuals, those of Kuan and Lee (2006) apply.
The paper is organized as follows. Section \ref{Construction of the test}
details the penalty approach leading to the data-driven order $\widehat{p}$
and the construction of the rejection region of the test. Section
\ref{Main results} studies the test under the general weak white noise null
hypothesis and under the new class of alternatives mentioned above. It
illustrates the importance of the choice of a suitable penalty both under the
null and the alternative. Section \ref{Optimal} states our adaptive
rate-optimality results and compares the new test with the Cram\'{e}r-von
Mises test in Deo (2000), the data-driven test of Escanciano and Lobato (2009)
and the Xiao and Wu (2011) maximum test. Section \ref{Simulation experiments}
presents a simulation experiment which proposes a calibration of the penalty
term and compares our automatic test with other data-driven ones, including
the Escanciano and Lobato (2009) test and the Newey and West (1994) plug-in
choice of the order. Section \ref{Concluding remarks} concludes the paper. Our main
assumptions are gathered and discussed in an Appendix, while proofs are
grouped in a supplementary material document.
\section{Construction of the test and choice of the critical
values\label{Construction of the test}}
\setcounter{equation}{0}
Consider a parametric model $m(X_{t},X_{t-1},\ldots,Z_{t};\theta)=u_{t}$ and
observations $X_{t}$, $Z_{t}$, $t=1,...,n$. The scalar error term $u_{t}$\ has
zero mean and finite variance and is unobservable when $\theta$\ is unknown.
In simpler situations $u_{t}$ can be directly observed as for financial
returns. We are interested in testing that $u_{t}$\ is uncorrelated. Let
$\widehat{\theta}$\ be an estimator of $\theta$ and estimate the population
residual $u_{t}$\ with its sample counterpart $\widehat{u}_{t}=u_{t}%
(\widehat{\theta})$. Suppose $\{u_{t}\}$\ is a stationary process with zero
mean and covariance function $R_{j}=\mathrm{Cov}(u_{t},u_{t+j})$. Then the
null and alternative hypotheses are
\[
\mathcal{H}_{0}:R_{j}=0\text{ for all }j\neq0,\quad\quad\text{versus}%
\quad\quad\mathcal{H}_{1}:R_{j}\neq0\text{ for some }j\neq0.
\]
A natural estimator of the covariance is $\widehat{R}_{j}=\sum_{t=1}%
^{n-|j|}\widehat{u}_{t}\widehat{u}_{t+|j|}/n$, $j=0,\pm1,\ldots,\pm(n-1)$
which uses the estimated residuals as if they were the true ones. Given the
kernel spectral density estimator
\[
\hat{f}_{n}(\lambda;p)=\frac{1}{2\pi}\sum_{j=-\infty}^{\infty}K\left(
\frac{j}{p}\right) \widehat{R}_{j}\exp\left( -ij\lambda\right)
\text{,\quad}K\left( 0\right) =1,\text{ }K\left( x\right) =K\left(
-x\right) \text{, and }\int K\left( x\right) dx=1,
\]
where $i=\sqrt{-1}$ and the support of $K\left( \cdot\right) $ is $\left[
0,1\right] $, Hong (1996) has proposed a Lagrange multiplier type test
statistic
\begin{equation}
\widehat{S}_{p}=n\pi\int_{-\pi}^{\pi}\left\vert \hat{f}_{n}(\lambda
;p)-\frac{\widehat{R}_{0}}{2\pi}\right\vert ^{2}d\lambda=n\sum_{j=1}%
^{n-1}K^{2}\left( \frac{j}{p}\right) \widehat{R}_{j}^{2}.\label{Sp hat}%
\end{equation}
For the uniform kernel $K(t)=\mathbb{I}(t\in\lbrack0,1])$, $\widehat{S}_{p}$
is the Box-Pierce statistic $\widehat{BP}_{p}=n\sum_{j=1}^{p}\widehat{R}%
_{j}^{2}$. Large values of $\widehat{S}_{p}$\ indicate evidence against the
null. The proposed test builds on a data-driven choice of the order $p$ in a
set $\left[ 1,\overline{p}_{n}\right] $, $\overline{p}_{n}\leq n-1$. Under
standard weak dependence of $\left\{ u_{t}\right\} $ and for $p$ large
enough, the mean and variance of $\left( \widehat{S}_{p}-\widehat{S}%
_{1}\right) /R_{0}^{2}$ are asymptotically close to, respectively%
\begin{align*}
E_{\Delta}(p) & =\sum_{j=1}^{n-1}\left( 1-\frac{j}{n}\right) \left(
K^{2}\left( \frac{j}{p}\right) -K^{2}\left( j\right) \right) ,\\
V_{\Delta}^{2}(p) & =2\sum_{j=1}^{n-1}\left( 1-\frac{j}{n}\right)
^{2}\left( K^{2}\left( \frac{j}{p}\right) -K^{2}\left( j\right) \right)
^{2},
\end{align*}
see e.g. Hong (1996) for independent $\left\{ u_{t}\right\} $ and Shao
(2011b) for the weak white noise case. In these notations, the subscript
\textquotedblleft$\Delta$\textquotedblright\ indicates the difference
$\widehat{S}_{p}-\widehat{S}_{1}$. For the Box-Pierce statistic, $E_{\Delta
}(p)$ and $V_{\Delta}^{2}(p)$ are respectively equal to $\left( p-1\right)
\left( 1+O\left( p/n\right) \right) $ and $2\left( p-1\right) \left(
1+O\left( p/n\right) \right) $. We propose to select $\widehat{p}$\ as the
smallest maximizer of a penalized statistic,%
\begin{align}
\widehat{p} & =\arg\max_{p\in\left[ 1,\overline{p}_{n}\right] }\left(
\frac{\widehat{S}_{p}}{\widehat{R}_{0}^{2}}-E\left( p\right) -\gamma
_{n}V_{\Delta}(p)\right) \nonumber\\
& =\arg\max_{p\in\left[ 1,\overline{p}_{n}\right] }\left( \frac
{\widehat{S}_{p}-\widehat{S}_{1}}{\widehat{R}_{0}^{2}}-E_{\Delta}%
(p)-\gamma_{n}V_{\Delta}(p)\right) ,\label{Hatp}%
\end{align}
where $E(p)=\sum_{j=1}^{n-1}\left( 1-j/n\right) K^{2}\left( j/p\right) $.
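A minimal sketch of the selection rule (\ref{Hatp}) for the uniform-kernel (Box-Pierce) case may help fix ideas. This is our illustration, not the authors' code; the scalar `gamma` stands in for the penalty sequence $\gamma_{n}$.

```python
import numpy as np

def select_order(u, p_max, gamma, K=lambda x: 1.0 * (np.abs(x) <= 1.0)):
    """Smallest maximizer p-hat of S_p/Rhat_0^2 - E(p) - gamma*V_Delta(p)
    over p in [1, p_max], as in eq. (Hatp).  Sketch for kernels supported
    on [0, 1], so only lags j <= p_max contribute."""
    u = np.asarray(u, dtype=float)
    n = len(u)
    R0 = np.mean(u * u)
    j = np.arange(1, p_max + 1)
    Rj = np.array([np.sum(u[: n - jj] * u[jj:]) / n for jj in j])
    K1 = K(j / 1.0) ** 2                         # weights K(j/1)^2 at p = 1
    best_p, best_val = 1, -np.inf
    for p in range(1, p_max + 1):
        Kp = K(j / p) ** 2
        Sp = n * np.sum(Kp * Rj ** 2)                                    # S_p
        Ep = np.sum((1.0 - j / n) * Kp)                                  # E(p)
        Vp = np.sqrt(2.0 * np.sum((1.0 - j / n) ** 2 * (Kp - K1) ** 2))  # V_Delta(p)
        val = Sp / R0 ** 2 - Ep - gamma * Vp
        if val > best_val + 1e-12:               # strict improvement: smallest maximizer
            best_p, best_val = p, val
    return best_p
```

Under the null and a penalty satisfying (\ref{Gam}), `select_order` returns $1$ with probability tending to one; under a correlated alternative it picks a larger lag.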
Such a penalization procedure is similar to Guay and Guerre (2006) or Guerre
and Lavergne (2005). It differs from the ones used in the suboptimal AIC or
BIC procedures reviewed in Hart (1997), which use a higher penalty term
$\gamma_{n}E\left( p\right) $ in place of $E\left( p\right) +\gamma
_{n}V_{\Delta}(p)$. Escanciano and Lobato (2009) similarly use a penalty term
$\widehat{\gamma}_{n}E\left( p\right) $ for $p$ in a finite set. The
rationale for (\ref{Hatp}) is better understood when $\widehat{S}_{p}$ is the
Box-Pierce statistic $\widehat{BP}_{p}$ with directly observed or estimated
residuals. In this case $\left( \widehat{BP}_{p}-\widehat{BP}_{1}\right)
/\widehat{R}_{0}^{2}-E_{\Delta}(p)$ is an estimator of $n\sum_{j=2}^{p}%
R_{j}^{2}/R_{0}^{2}$ with a standard deviation which can be proxied with
$V_{\Delta}(p)$ when $p\rightarrow\infty$, see Shao (2011b). Hence a large
penalized statistic $\left( \widehat{S}_{p}-\widehat{S}_{1}\right)
/\widehat{R}_{0}^{2}-E_{\Delta}(p)-\gamma_{n}V_{\Delta}(p)$ suggests that
$n\sum_{j=2}^{p}R_{j}^{2}/R_{0}^{2}$ is large so that this particular order
$p$\ should be preferred to $p=1$. The selected $\widehat{p} $\ will retain
the best order $p$\ with respect to this detection criterion. In particular,
under the null, the Markov inequality yields that $\left( \widehat{S}%
_{p}-\widehat{S}_{1}\right) /\widehat{R}_{0}^{2}-E_{\Delta}(p)-\gamma
_{n}V_{\Delta}(p)=-\gamma_{n}\left( 1+o_{\mathbb{P}}\left( 1\right)
\right) V_{\Delta}(p)$, uniformly with respect to $p\in\left[ 1,\overline
{p}_{n}\right] $ provided $\gamma_{n}$ diverges fast enough. Since
$V_{\Delta}(p)>V_{\Delta}(1)=0$ for all $p>1$ and $\gamma_{n}%
\rightarrow\infty$, $1=\arg\max_{p\in\left[ 1,\overline{p}_{n}\right]
}\left( -\gamma_{n}V_{\Delta}(p)\right) $ so that $\widehat{p}=1$ should
hold with a probability tending to $1$ under $\mathcal{H}_{0}$. It follows
that $\widehat{S}_{\widehat{p}}=\widehat{S}_{1}+o_{\mathbb{P}}\left(
1\right) $ under the null, for directly observed or estimated residuals. This
leads to the following rejection region of the test
\begin{equation}
\widehat{S}_{\widehat{p}}\geq z(\alpha),\label{Test}%
\end{equation}
where the critical value $z(\alpha)=\widehat{z}_{n}(\alpha)$ satisfies%
\begin{equation}
\lim_{n\rightarrow\infty}\mathbb{P}\left( \widehat{S}_{1}\geq z(\alpha
)\right) =\alpha\text{ under }\mathcal{H}_{0}\text{.}\label{Alpha}%
\end{equation}
The critical values used here for directly observed residuals are from Lobato
(2001) and from Kuan and Lee (2006) for estimated residuals.
A choice of critical values when $\left\{ u_{t}\right\} $ is directly
observed is as follows. Let a tilde indicate this
specific case more explicitly,%
\begin{equation}
\widetilde{S}_{p}=n\sum_{j=1}^{p}K^{2}\left( \frac{j}{p}\right)
\widetilde{R}_{j}^{2}\text{ where }\widetilde{R}_{j}=\frac{1}{n}\sum
_{t=1}^{n-|j|}u_{t}u_{t+|j|}.\label{TildeR}%
\end{equation}
As seen from Francq et al. (2005) or Lobato et al. (2002), the limit
distribution of $\widetilde{S}_{1}=nK^{2}\left( 1\right) \widetilde{R}%
_{1}^{2}$ depends upon the limit distribution $n^{1/2}\left( \widetilde{R}%
_{1}-R_{1}\right) $, which is under standard conditions a centered normal
with variance%
\[
\lim_{n\rightarrow\infty}n\mathrm{Var}\left( \frac{1}{n}\sum_{t=1}^{n-1}u_{t}u_{t+1}\right) =\mathrm{Var}\left( u_{0}u_{1}\right) +2\sum_{k=1}^{\infty}\mathbb{E}\left[ \left( u_{0}u_{1}-R_{1}\right) \left( u_{k}u_{k+1}-R_{1}\right) \right] =\Gamma_{1}.
\]
Hence, under $\mathcal{H}_{0}$,\ the limit distribution of $\widetilde{S}%
_{1}/K^{2}(1)$\ is the one of a chi square with one degree of freedom times
the nuisance parameter $\Gamma_{1}$. The HAC\ approach developed by Kiefer,
Vogelsang and Bunzel (2000), Sun, Phillips and Jin (2008) and pioneered by
Lobato (2001) for weak white noise testing delivers a pivotal test statistic
with a null limit distribution which does not depend upon $\Gamma_{1}$. Under
suitable conditions, Lobato (2001) shows that%
\[
\frac{n\widetilde{R}_{1}^{2}}{\widetilde{\Gamma}_{1}},\text{ where
}\widetilde{\Gamma}_{1}=\frac{1}{\left( n-1\right) ^{2}}\sum_{t=1}%
^{n-1}\left( \sum_{j=1}^{t}\left( u_{j}u_{j+1}-\frac{1}{n-1}\sum_{j=1}%
^{n-1}u_{j}u_{j+1}\right) \right) ^{2},
\]
converges in distribution to
\begin{equation}
\frac{W^{2}\left( 1\right) }{\int_{0}^{1}\left( W\left( r\right)
-rW\left( 1\right) \right) ^{2}dr}\label{StdW}%
\end{equation}
where $W$\ is a standard Brownian motion, a limit distribution which is free
from the nuisance parameter $\Gamma_{1}$. Hence following Lobato (2001)
suggests the critical values%
\begin{equation}
\widetilde{z}_{L}(\alpha)=K^{2}\left( 1\right) \widetilde{\Gamma}_{1}%
z_{L}\left( \alpha\right) ,\label{Zlob}%
\end{equation}
where $z_{L}\left( \alpha\right) $\ are the critical values obtained from
the distribution of the random variable (\ref{StdW}) which are tabulated in
Lobato (2001, Table 1). Alternative approaches would use a Newey and West
(1994) estimator of $\Gamma_{1}$\ as in Francq et al. (2005), Lobato et al.
(2002) or fixed bandwidth asymptotic as in Sun et al. (2008). Note however
that these alternative procedures involve an additional choice of a tuning
parameter and may be more involved than (\ref{Zlob}).
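Both the self-normalized statistic and the limit law (\ref{StdW}) are straightforward to sketch numerically. The code below is our own illustration: `lobato_statistic` implements $n\widetilde{R}_{1}^{2}/\widetilde{\Gamma}_{1}$ from the displayed formulas, and `simulate_W_ratio` draws from (\ref{StdW}) by discretizing the Brownian motion, one way to approximate the tabulated critical values $z_{L}\left( \alpha\right) $.

```python
import numpy as np

def lobato_statistic(u):
    """Self-normalized statistic n * Rtilde_1^2 / Gamma_tilde_1 of Lobato (2001),
    computed from the displayed formulas (illustrative sketch, no bandwidth)."""
    u = np.asarray(u, dtype=float)
    n = len(u)
    v = u[:-1] * u[1:]                            # products u_t u_{t+1}, t = 1..n-1
    R1 = v.sum() / n                              # Rtilde_1
    partial = np.cumsum(v - v.mean())             # demeaned partial sums
    Gamma1 = np.sum(partial ** 2) / (n - 1) ** 2  # Gamma_tilde_1
    return n * R1 ** 2 / Gamma1

def simulate_W_ratio(n_paths, n_grid, seed=0):
    """Monte Carlo draws from W(1)^2 / int_0^1 (W(r) - r W(1))^2 dr, eq. (StdW),
    by discretizing the Brownian motion on n_grid points."""
    rng = np.random.default_rng(seed)
    dW = rng.standard_normal((n_paths, n_grid)) / np.sqrt(n_grid)
    W = np.cumsum(dW, axis=1)                     # W(k / n_grid)
    r = np.arange(1, n_grid + 1) / n_grid
    bridge = W - r * W[:, -1:]                    # W(r) - r W(1)
    return W[:, -1] ** 2 / np.mean(bridge ** 2, axis=1)
```

The empirical upper quantiles of `simulate_W_ratio(...)` approximate the critical values tabulated in Lobato (2001, Table 1).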
The HAC procedure of Lobato (2001) has been extended by Kuan and Lee (2006) to
deal with the case of estimated residuals. Let $\widehat{\theta}_{t}$ be the
estimator $\widehat{\theta}$\ computed with the first $t$\ observations and
consider the recursive estimator of $\Gamma_{1}$%
\[
\widehat{\Gamma}_{1}=\frac{1}{\left( n-1\right) ^{2}}\sum_{t=1}^{n-1}\left(
\sum_{j=1}^{t}\left( \widehat{u}_{j}\left( \widehat{\theta}_{t}\right)
\widehat{u}_{j+1}\left( \widehat{\theta}_{t}\right) -\frac{n}{n-1}%
\widehat{R}_{1}\right) \right) ^{2}.
\]
Kuan and Lee (2006) show that, under suitable conditions, $n\widehat{R}%
_{1}^{2}/\widehat{\Gamma}_{1}$ and $n\widetilde{R}_{1}^{2}/\widetilde{\Gamma
}_{1}$\ have the same limit distribution. Hence this suggests the critical
values%
\begin{equation}
\widehat{z}_{L}(\alpha)=K^{2}\left( 1\right) \widehat{\Gamma}_{1}%
z_{L}\left( \alpha\right) ,\quad z_{L}\left( \alpha\right) \mathit{\ }%
\text{as in (\ref{Zlob}).}\label{Zest}%
\end{equation}
We shall also consider a modified version of the test which uses a
standardization of the sample covariances as in Deo (2000) or Escanciano and
Lobato (2009),%
\begin{equation}
\widehat{S}_{p}^{\ast}=n\sum_{j=1}^{n-1}K^{2}\left( \frac{j}{p}\right)
\left( \frac{\widehat{R}_{j}}{\widehat{\tau}_{j}}\right) ^{2}\text{ where
}\widehat{\tau}_{j}^{2}=\frac{1}{n-j}\sum_{t=1}^{n-j}\widehat{u}_{t}%
^{2}\widehat{u}_{t+j}^{2}-\left( \frac{n}{n-j}\widehat{R}_{j}\right)
^{2}.\label{Sstar}%
\end{equation}
The sample variance $\widehat{\tau}_{j}^{2}$ is an estimator of $\tau_{j}%
^{2}=\operatorname*{Var}\left( u_{t}u_{t-j}\right) $ which is the asymptotic
variance of $n^{1/2}\left( \widetilde{R}_{j}-R_{j}\right) $ in the case of
uncorrelated $u_{t}u_{t+j}$ or for martingale differences.\footnote{Note
however that $\tau_{j}^{2}$ differs from $\operatorname*{Var}\left( \sqrt
{n}\left( \widetilde{R}_{j}-R_{j}\right) \right) $, which would be the
appropriate standardization. But in fact all these quantities go to
$R_{0}^{2}$ when $j\rightarrow\infty$, which is the reason why they can be
used here.} As above, $u_{t}$ should be used in place of $\widehat{u}_{t}$
when the residuals are directly observed, leading to statistics $\widetilde{S}%
_{p}^{\ast}$ and $\widetilde{\tau}_{j}^{2}$ with a tilde instead of
a hat. The corresponding data-driven $p$ and critical values are%
\begin{align}
\widehat{p}^{\ast} & =\arg\max_{p\in\left[ 1,\overline{p}_{n}\right]
}\left( \widehat{S}_{p}^{\ast}-E\left( p\right) -\gamma_{n}V_{\Delta
}(p)\right) ,\label{Hatpstar}\\
\widetilde{z}_{L}^{\ast}(\alpha) & =\frac{\widetilde{z}_{L}(\alpha
)}{\widetilde{\tau}_{1}^{2}}\text{ and }\widehat{z}_{L}^{\ast}(\alpha
)=\frac{\widehat{z}_{L}(\alpha)}{\widehat{\tau}_{1}^{2}}.\label{Zstar}%
\end{align}
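The standardized statistic (\ref{Sstar}) is again easy to sketch; the code below is our illustration. A useful sanity check is scale invariance: rescaling the residuals leaves $\widehat{S}_{p}^{\ast}$ unchanged, since $\widehat{R}_{j}^{2}$ and $\widehat{\tau}_{j}^{2}$ scale by the same factor.

```python
import numpy as np

def S_star(u, p, K=lambda x: 1.0 * (np.abs(x) <= 1.0)):
    """Standardized statistic S*_p of eq. (Sstar): n times the kernel-weighted
    sum of (Rhat_j / tau_hat_j)^2, with tau_hat_j^2 the sample variance of
    the lag-j products u_t u_{t+j} (illustrative sketch)."""
    u = np.asarray(u, dtype=float)
    n = len(u)
    total = 0.0
    for j in range(1, n):
        w = K(j / p) ** 2
        if w == 0.0:
            continue                              # kernel supported on [0, 1]
        prod = u[: n - j] * u[j:]                 # u_t u_{t+j}, t = 1..n-j
        Rj = prod.sum() / n                       # Rhat_j
        tau2 = np.mean(prod ** 2) - (n / (n - j) * Rj) ** 2   # tau_hat_j^2
        if tau2 > 0:                              # guard against degenerate lags
            total += w * Rj * Rj / tau2
    return n * total
```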
\section{Asymptotic level and consistency\label{Main results}}
\setcounter{equation}{0} This section deals with the behavior of the test
under the null and the alternative hypotheses. For the sake of exposition and
brevity the associated assumptions are grouped and discussed in Appendix A. In
what follows, $a_{n}\asymp b_{n}$ means that the sequences $\left\{
a_{n}\right\} $ and $\left\{ b_{n}\right\} $ have the same order, i.e. that
there is a constant $C>1$ such that $\left\vert a_{n}\right\vert
/C\leq\left\vert b_{n}\right\vert \leq C\left\vert a_{n}\right\vert $ for $n$
large enough.
An important issue in the construction of the test (\ref{Test}) is the choice
of the penalty sequence. Choosing $\gamma_{n}$\ large enough gives that
$\widehat{p}$ stays close to $1$, hence that the test statistic $\widehat{S}%
_{\widehat{p}}$\ remains close to $\widehat{S}_{1}$. Hence, on the one hand, using
a large $\gamma_{n}$ ensures that the level of the test is close to its
nominal size due to the choice (\ref{Alpha}) of critical values, which is
driven by the asymptotic distribution of $\widehat{S}_{1}$. On the other hand,
a large $\gamma_{n}$ may drastically limit the power of the test, since the
statistic $\widehat{S}_{\widehat{p}}$ would then hardly differ from
$\widehat{S}_{1}$. This trade-off between size and power
is addressed by the first two results of this section. Consider first
the null hypothesis $\mathcal{H}_{0}$. The following theorem gives a lower
bound for $\gamma_{n}$ which ensures that the test is asymptotically of level
$\alpha$ under the null.
\begin{thm}
Let Assumptions \ref{Kernel}, \ref{M}, \ref{P} and \ref{Reg} in Appendix A
hold. If the penalty sequence $\{\gamma_{n},n\geq1\}$ satisfies
\begin{equation}
\gamma_{n}\geq\left( 1+\epsilon\right) \left( 2\ln\ln n\right)
^{1/2}\text{\quad for some }\epsilon>0,\label{Gam}%
\end{equation}
then, under $\mathcal{H}_{0}$, $\lim_{n\rightarrow\infty}\mathbb{P}\left(
\widehat{p}=1\right) =1$ and the test (\ref{Test}) is asymptotically of level
$\alpha$. \label{Level}
\end{thm}
\noindent The main result of Theorem \ref{Level} is about the asymptotic
behavior of the selected order $\widehat{p}$, which is asymptotically equal to
$1$. It then follows that $\widehat{S}_{\widehat{p}}=\widehat{S}%
_{1}+o_{\mathbb{P}}\left( 1\right) $ so that the choice of the critical
values (\ref{Zlob}) or (\ref{Zest}), which accounts for estimation of the
residuals and white noise dependence, ensures that the test is asymptotically
of level $\alpha$. Note that $\widehat{S}_{\widehat{p}}=\widehat{S}%
_{1}+o_{\mathbb{P}}\left( 1\right) $ makes it possible to use critical values other
than (\ref{Zlob}) or (\ref{Zest}), such as the standard one-degree-of-freedom
chi-squared ones, which are valid under the stronger null of independence. A key
result is therefore that $\lim_{n\rightarrow\infty}\mathbb{P}\left(
\widehat{p}=1\right) =1$ holds whether the residuals are estimated or not and
under various white noise structures. That this holds for estimated or directly
observed residuals comes from (\ref{Gam}) which imposes $\gamma_{n}%
\rightarrow\infty$. When $\widehat{\theta}$ is $\sqrt{n}$-consistent as
assumed here and under the considered assumptions, estimating the residuals
gives test statistics satisfying $\widehat{S}_{p}=\widetilde{S}_{p}%
+O_{\mathbb{P}}\left( 1\right) $ uniformly in $p$. The fact that the
remainder term $O_{\mathbb{P}}\left( 1\right) $ is negligible compared to
$\gamma_{n}$ is a key element to show that the asymptotic behavior of
$\widehat{p}$ is not affected by residuals estimation under the null. Compared
to the existing adaptive results of Horowitz and Spokoiny (2001), Guerre and
Lavergne (2005), Guay and Guerre (2006) or Chen and Gao (2007), an important
technical contribution is that Theorem \ref{Level} holds without assuming that
the set of admissible $p$ is a geometric grid such as $\left\{ a^{j},j\in
\mathbb{N}\right\} $, $a>1$.
Another important finding is that the penalty sequence $\gamma_{n}$ can
diverge with the low order $\left( \ln\ln n\right) ^{1/2}$ as allowed by
(\ref{Gam}). This contrasts with the larger order $\ln n$ used in the BIC
selection procedure and in the corresponding data-driven tests, see e.g. Hart
(1997). In view of the potential negative impact of a large $\gamma_{n} $ on
the power of the test, it is worth asking if the lower bound (\ref{Gam}) can
be improved. The proof suggests that it is not the case. The key argument
comes from the expression%
\begin{align}
\mathbb{P}\left( \widehat{p}\neq1\right) & =\mathbb{P}\left( \left(
\widetilde{S}_{p}-\widetilde{S}_{1}\right) /\widetilde{R}_{0}^{2}-E_{\Delta
}(p)-\gamma_{n}V_{\Delta}(p)\geq0\text{ for some }p\in\left[ 2,\overline
{p}_{n}\right] \right) \nonumber\\
& =\mathbb{P}\left( \max_{p\in\left[ 2,\overline{p}_{n}\right] }\left(
\frac{\left( \widetilde{S}_{p}-\widetilde{S}_{1}\right) /\widetilde{R}%
_{0}^{2}-E_{\Delta}(p)}{V_{\Delta}(p)}\right) \geq\gamma_{n}\right)
,\label{Hatpnot}%
\end{align}
for the probability of not selecting $1$ when the residuals are directly
observed. In the case of the Box-Pierce statistic, $\widetilde{S}%
_{p}-\widetilde{S}_{1}=n\sum_{j=2}^{p}\widetilde{R}_{j}^{2}$. The proof then
uses a martingale approximation for $\widetilde{R}_{j}$ as in Xiao and Wu
(2011), Shao (2011b), a smooth approximation of the maximum by the $L_{e}$
norm, $e\rightarrow\infty$,\footnote{For positive $x_{1},\ldots,x_{m}$,
$\left( \sum_{k=1}^{m}x_{k}^{e}\right) ^{1/e}=\left( 1+O\left( e^{-1}\ln
m\right) \right) \max_{k\in\left[ 1,m\right] }x_{k}$.} and repeated
applications of the Lindeberg technique, see Pollard (2002, p.179), to
approximate the LHS of (\ref{Hatpnot}) with%
\[
\mathbb{P}\left( \max_{p\in\left[ 2,\overline{p}_{n}\right] }\left(
\frac{1}{\left( 2\left( p-1\right) \right) ^{1/2}}\sum_{j=2}^{p}\left(
\zeta_{j}^{2}-1\right) \right) \geq\gamma_{n}\right) ,
\]
where the $\zeta_{j}$ are i.i.d. standard normal. Hence the best order
ensuring that $\mathbb{P}\left( \widehat{p}\neq1\right) =o(1)\,$is the order
of the maximum of the standardized sum $(2k)^{-1/2}\sum_{j=1}^{k}\left(
\zeta_{j}^{2}-1\right) $, $k=1,\ldots,\overline{p}_{n}-1$, which is $\left(
2\ln\ln\left( \overline{p}_{n}-1\right) \right) ^{1/2}\asymp\left( 2\ln\ln
n\right) ^{1/2}$ as shown by Darling and Erd\"{o}s (1956). The term
$(1+\epsilon)$ is used to control for the fact that the variances of the
$n^{1/2}\widetilde{R}_{j}/\mathrm{Var}\left( u_{t}\right) $ are close but not equal
to $1$ due to possible dependence of the uncorrelated $\left\{ u_{t}\right\}
$. Hence the bound (\ref{Gam}) cannot be improved.
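The Darling and Erd\"{o}s (1956) order of the maximum can be visualized by simulation. The following sketch is our own illustration, not part of the paper:

```python
import numpy as np

def max_standardized_sum(p_bar, rng):
    """One draw of max over k = 1..p_bar-1 of (2k)^{-1/2} sum_{j<=k}(zeta_j^2 - 1)
    for i.i.d. standard normal zeta_j (illustrative sketch)."""
    zeta2 = rng.standard_normal(p_bar - 1) ** 2
    k = np.arange(1, p_bar)
    return np.max(np.cumsum(zeta2 - 1.0) / np.sqrt(2.0 * k))

rng = np.random.default_rng(0)
draws = np.array([max_standardized_sum(5000, rng) for _ in range(200)])
darling_erdos_order = np.sqrt(2.0 * np.log(np.log(5000)))   # about 2.07
```

The simulated maxima are of the same order as `darling_erdos_order`, which is why a penalty of exact order $\left( 2\ln\ln n\right) ^{1/2}$ is both sufficient and necessary in (\ref{Gam}).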
Let us now turn to the detection properties of the test. Consider first the
case of directly observed residuals $\left\{ u_{t}\right\} $. In our setup,
the correlated alternative $\left\{ u_{t}\right\} $ may depend on the sample
size and the observations should be denoted $u_{t,n}$, $t=1,\ldots,n$ with a
covariance function $R_{j}=R_{j,n}$, where for each $n$ $\left\{
u_{t,n}\right\} $ is stationary. This includes for instance local $MA\left(
\infty\right) $ alternatives $\varepsilon_{t}+\sum_{i=1}^{\infty}%
a_{i,n}\varepsilon_{t-i}$ where $a_{i,n}\rightarrow0$ when $n$ grows. For
estimated residuals $\widehat{u}_{t}=u_{t}\left( \widehat{\theta}\right) $,
we assume that $\sqrt{n}\left( \widehat{\theta}-\theta_{n}\right) $ is
asymptotically centered for some pseudo true value $\theta_{n}\ $and we set
$u_{t}\left( \theta_{n}\right) =u_{t,n}$ since this residual plays an
identical role to the alternative $\left\{ u_{t,n}\right\} $ of the
directly observed case. However, for the sake of brevity, we write in both
cases $u_{t}$ and $R_{j}$ instead of $u_{t,n}$ and $R_{j,n}$.
The new class of alternatives is defined similarly to (\ref{BPcons}) in the
Introduction section. Consider first a sequence $\rho_{n}\rightarrow0$ and a
lag order $P_{n}$. Autocorrelation coefficients smaller than $\rho_{n}$ are
considered as negligible and an important parameter for detection is the
number of correlations above $\rho_{n}$,%
\begin{equation}
N_{n}=N_{n}\left( P_{n},\rho_{n}\right) =\#\left\{ |R_{j}/R_{0}|\geq
\rho_{n},\quad1\leq j\leq P_{n}\right\} .\label{Sparse2}%
\end{equation}
The next theorem gives a detection condition on $N_{n}$, $P_{n}$ and $\rho
_{n}$ which is similar to (\ref{BPcons}). Note however that (\ref{BPcons}) was
derived assuming a known $P_{n}$, a condition which is now relaxed.
\begin{thm}
\label{Sparse} Suppose Assumptions \ref{Kernel}, \ref{M}, \ref{Reg} and
\ref{P} in Appendix A hold. Then, there exists a constant $\kappa_{\ast}>0$
such that the test (\ref{Test}) is consistent against all alternatives
$\{u_{t}\}$ satisfying, for some $\rho_{n}>0$ and $P_{n}\in\left[
1,\overline{p}_{n}/2\right] $,%
\begin{equation}
n^{1/2}\left( \frac{N_{n}}{\gamma_{n}P_{n}^{1/2}}\right) ^{1/2}\rho_{n}%
\geq\kappa_{\ast}.\label{Sparse3}%
\end{equation}
\end{thm}
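To make the theorem concrete, the sketch below (our illustration) evaluates $N_{n}$ of (\ref{Sparse2}) and the left-hand side of (\ref{Sparse3}) for the equal-coefficient $MA(q)$ alternative mentioned in the Introduction, whose $q$ autocorrelations are all of order $1/q$.

```python
import numpy as np

def ma_autocorrelations(a, max_lag):
    """Population autocorrelations of X_t = eps_t + sum_i a_i eps_{t-i}
    for unit-variance white noise eps_t (illustrative helper)."""
    b = np.concatenate(([1.0], np.asarray(a, dtype=float)))  # impulse responses
    R = np.array([b[: len(b) - j] @ b[j:] for j in range(max_lag + 1)])
    return R / R[0]

def detection_lhs(rho_series, rho_n, P_n, n, gamma_n):
    """(N_n, lhs): N_n counts lags 1..P_n with |R_j/R_0| >= rho_n (eq. Sparse2);
    lhs = n^{1/2} (N_n/(gamma_n P_n^{1/2}))^{1/2} rho_n, the LHS of (Sparse3)."""
    N_n = int(np.sum(np.abs(rho_series[1 : P_n + 1]) >= rho_n))
    lhs = np.sqrt(n) * np.sqrt(N_n / (gamma_n * np.sqrt(P_n))) * rho_n
    return N_n, lhs

n, q = 10_000, 400                     # q >> n^{1/2}: coefficients 1/q = o(n^{-1/2})
rho = ma_autocorrelations(np.full(q, 1.0 / q), max_lag=q)  # long-run multiplier 1
gamma_n = np.sqrt(2.0 * np.log(np.log(n)))                 # the lower bound (Gam)
N_n, lhs = detection_lhs(rho, rho_n=1.0 / (2 * q), P_n=q, n=n, gamma_n=gamma_n)
```

All $q$ autocorrelations exceed the threshold, so $N_{n}=P_{n}$ (the "saturated" regime of (\ref{Bestrho})) and the left-hand side stays of constant order even though the correlations are several times smaller than $n^{-1/2}$.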
The most noticeable difference between the detection conditions (\ref{Sparse3}%
) and (\ref{BPcons}) is that the lag index $p_{n}$ in (\ref{BPcons}) is used
in the associated critical region (\ref{BPcrit}), whereas the lag index $P_{n}$ in
(\ref{Sparse3}) is unknown and can take any value in $\left[ 1,\overline
{p}_{n}/2\right] $. This illustrates the adaptive feature of the new test. A
second difference is that (\ref{Sparse3}) involves the penalty sequence. In
fact (\ref{Sparse3}) deteriorates with the penalty sequence, since increasing
$\gamma_{n}$ requires increasing $\rho_{n}$ or $N_{n}$ to ensure that the
condition still holds. This illustrates the potential negative impact of the
penalty sequence on the power of the test.
For alternatives such that $P_{n}$ and $N_{n}$ are prescribed in advance, the
detection condition (\ref{Sparse3}) allows for a rate $\rho_{n}^{\ast}$
satisfying%
\begin{equation}
\rho_{n}^{\ast}\asymp\frac{1}{n^{1/2}}\left( \frac{\gamma_{n}P_{n}^{1/2}%
}{N_{n}}\right) ^{1/2}.\label{Bestrho}%
\end{equation}
Two different regimes emerge in view of (\ref{Bestrho}). Of special interest
is $\lim_{n\rightarrow\infty}\gamma_{n}P_{n}^{1/2}/N_{n}=0$\ since
(\ref{Bestrho}) shows that the test can detect correlation coefficients
converging to $0$ at a rate that is faster than the parametric rate $n^{-1/2}
$. The best possible rate in this case is $\rho_{n}^{\ast}\asymp\gamma
_{n}^{1/2}/\left( nP_{n}^{1/2}\right) ^{1/2}$ which is achieved for
\textquotedblleft saturated\textquotedblright\ alternatives with $N_{n}\asymp
P_{n}$. A less favorable case corresponds to sparser correlation
coefficients satisfying $\lim_{n\rightarrow\infty}\gamma_{n}P_{n}^{1/2}%
/N_{n}=\infty$. In this case (\ref{Bestrho}) no longer allows for
correlation coefficients converging to $0$ at the rate $n^{-1/2}$. This case is
covered by Donoho and Jin (2004) and Ingster (1997) for a theoretical model
obtained when observing a known number $P_{n}$ of independent Gaussian
variables with mean $n\left( R_{j}/R_{0}\right) ^{2}$ and variance $1$. In
such a setup, the authors show that the best possible detection rate $\rho
_{n}$ is $\left( \ln n/n\right) ^{1/2}$, a rate which is achieved by the
maximum white noise test of Xiao and Wu (2011). This suggests that our test
may not be optimal when $\lim_{n\rightarrow\infty}\gamma_{n}P_{n}^{1/2}%
/N_{n}=\infty$. However, it will be shown in Proposition \ref{WildCvM} below
that the Xiao and Wu (2011) test does not detect moderately sparse
alternatives satisfying (\ref{Bestrho}) with $\lim_{n\rightarrow\infty}%
\gamma_{n}P_{n}^{1/2}/N_{n}=0$ and $\gamma_{n}\asymp\left( 2\ln\ln n\right)
^{1/2}$.
We conclude this section by showing that the test statistic $\widehat{S}%
_{\widehat{p}^{\ast}}^{\ast}$ from (\ref{Sstar}) and (\ref{Hatpstar}) behaves
similarly to $\widehat{S}_{\widehat{p}}$.
\begin{thm}
\label{Extension}Suppose Assumptions \ref{Kernel}, \ref{M} and \ref{P} in
Appendix A hold. Then using $\left( \widetilde{S}_{\widehat{p}^{\ast}}^{\ast
},\widetilde{z}_{L}^{\ast}\left( \alpha\right) \right) $ or $\left(
\widehat{S}_{\widehat{p}^{\ast}}^{\ast},\widehat{z}_{L}^{\ast}\left(
\alpha\right) \right) $ in (\ref{Test}) instead of $\left( \widetilde{S}%
_{\widehat{p}},\widetilde{z}_{L}\left( \alpha\right) \right) $ or $\left(
\widehat{S}_{\widehat{p}},\widehat{z}_{L}\left( \alpha\right) \right) $
gives a test which satisfies the conclusions of Theorems \ref{Level} and
\ref{Sparse}.
\end{thm}
\setcounter{equation}{0}
\section{Adaptive rate-optimality and comparisons with other tests
\label{Optimal}}
\setcounter{equation}{0} While Theorem \ref{Level} gives a sharp lower bound
(\ref{Gam}) of order $\left( 2\ln\ln n\right) ^{1/2}$ for the penalty
sequence $\gamma_{n}$ ensuring that the test is asymptotically of level
$\alpha$, Theorem \ref{Sparse} suggests that increasing $\gamma_{n}$ can
damage its detection properties. Hence a good compromise for the choice of the
penalty sequence suitable both under $\mathcal{H}_{0}$ and $\mathcal{H}_{1}$
is $\gamma_{n}\asymp\left( 2\ln\ln n\right) ^{1/2}$. Once such a choice is
made one may wonder if the detection properties of the resulting test can be
improved or not. Focusing on alternatives with a prescribed $P_{n}$ and
$N_{n}$, this amounts to showing that there is no test detecting alternatives
with $\sup_{j\in\left[ 1,P_{n}\right] }\left\vert R_{j}/R_{0}\right\vert
=o\left( \rho_{n}^{\ast}\right) $ where $\rho_{n}^{\ast}$ is as in
(\ref{Bestrho}), a property that we call adaptive rate-optimality. More
generally, this amounts to checking that there is no test that can detect
alternatives satisfying a condition less restrictive than (\ref{Sparse3}),
i.e. that allowing for $\kappa_{\ast}=\kappa_{n}\rightarrow0$ in (\ref{Sparse3})
leads to alternatives that cannot be detected by any possible test.
The next theorem establishes such adaptive rate-optimality for alternatives
satisfying $\lim_{n\rightarrow\infty}\gamma_{n}P_{n}^{1/2}/N_{n}%
=0$.\footnote{As discussed after (\ref{Bestrho}), the test (\ref{Test}) is not
optimal for detection of sparse alternatives with $\lim_{n\rightarrow\infty
}\gamma_{n}P_{n}^{1/2}/N_{n}=\infty$ which are not considered here.}
\begin{thm}
\label{Sparseopt} Consider the case where $\left\{ u_{t}\right\} $ is
directly observed. For any $\kappa_{n}\rightarrow0$, there exists a sequence
of alternatives $\left\{ u_{t}\right\} $ such that, for some $P_{n}%
\in\left[ 1,\overline{p}_{n}\right] $ and a $\rho_{n}>0$ with
\[
\rho_{n}\geq\frac{\kappa_{n}}{n^{1/2}}\left( \frac{\left( 2\ln\ln n\right)
^{1/2}P_{n}^{1/2}}{N_{n}}\right) ^{1/2},\quad\lim_{n\rightarrow\infty}%
\frac{\left( 2\ln\ln n\right) ^{1/2}P_{n}^{1/2}}{N_{n}}=0,
\]
and satisfying the other assumptions of Theorem \ref{Sparse}, that cannot be
detected by any possible asymptotically $\alpha$-level test, $\alpha\in\left(
0,1\right) $.
\end{thm}
Hence, when $\gamma_{n}\asymp\left( 2\ln\ln n\right) ^{1/2}$, it is not
possible to improve the detection condition (\ref{Sparse3}) and the rate
$\rho_{n}^{\ast}$ in (\ref{Bestrho}) is optimal. We shall now give
alternatives which are detected by the test (\ref{Test}) but not by other
popular tests. Consider the following high-order moving average process,%
\begin{equation}
u_{t}=u_{t,n}=\varepsilon_{t}+\frac{\nu\gamma_{n}^{1/2}}{n^{1/2}P_{n}^{1/4}%
}\sum_{k=1}^{P_{n}}\psi_{k}\varepsilon_{t-k},\text{\quad}\sum_{k=1}^{P_{n}%
}\psi_{k}^{2}=O(P_{n}),\quad\lim_{n\rightarrow\infty}P_{n}=\infty
,\label{Wildalt}%
\end{equation}
where $\left\{ \varepsilon_{t}\right\} $ is a strong white noise with
variance $\sigma^{2}$, $\nu$ is a scaling constant and $\gamma_{n}%
\asymp\left( 2\ln\ln n\right) ^{1/2}$. This alternative has $MA$
coefficients of order $\gamma_{n}^{1/2}/\left( n^{1/2}P_{n}^{1/4}\right) $,
which go to $0$ faster than $n^{-1/2}$ provided $P_{n}$ diverges at a
polynomial rate. Hence short-term shocks have a statistically negligible
impact. However the long-term multiplier of (\ref{Wildalt}) is, when $\psi
_{k}=1$ for all $k$, equal to $\nu\left( \gamma_{n}P_{n}^{3/2}/n\right)
^{1/2}$, which is of larger order than $n^{-1/2}$. The following lemma
describes the covariance function of the alternative (\ref{Wildalt}).
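To make the construction concrete, here is a minimal simulation sketch of (\ref{Wildalt}) with $\psi_{k}=1$ and $\gamma_{n}=\left( 2\ln\ln n\right) ^{1/2}$ (the function name and defaults are ours):

```python
import numpy as np

def simulate_wildalt(n, P_n, nu=1.0, sigma=1.0, seed=0):
    """Draw u_1, ..., u_n from the MA(P_n) alternative (Wildalt), psi_k = 1."""
    rng = np.random.default_rng(seed)
    gamma_n = np.sqrt(2 * np.log(np.log(n)))
    # common MA coefficient, of order gamma_n^{1/2} / (n^{1/2} P_n^{1/4})
    theta = nu * np.sqrt(gamma_n) / (np.sqrt(n) * P_n ** 0.25)
    eps = sigma * rng.standard_normal(n + P_n)
    # u_t = eps_t + theta * (eps_{t-1} + ... + eps_{t-P_n})
    u = np.array([eps[t] + theta * eps[t - P_n:t].sum()
                  for t in range(P_n, n + P_n)])
    return u, theta
```

With $n=1{,}000$ and $P_{n}=100$, the common coefficient $\theta$ is about $0.014$, well below $n^{-1/2}\approx0.032$, while the long-term multiplier $1+P_{n}\theta$ is of larger order, as described above.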
\begin{lem}
\label{Wildlem} If $P_{n}=o((n/\gamma_{n})^{2/3})$ and $\lim_{n\rightarrow
\infty}\left( \gamma_{n}/n\right) =0$, then the alternative $\left\{
u_{t}\right\} $ in (\ref{Wildalt}) satisfies $R_{0}=\sigma^{2}\left(
1+O\left( \gamma_{n}P_{n}^{1/2}/n\right) \right) $ and, uniformly in
$j\in\left[ 1,P_{n}\right] $,%
\[
R_{j}=\frac{\nu\gamma_{n}^{1/2}}{n^{1/2}P_{n}^{1/4}}\psi_{j}\sigma
^{2}+o\left( \frac{\gamma_{n}^{1/2}}{n^{1/2}P_{n}^{1/4}}\right) .
\]
\end{lem}
\noindent Hence a distinctive feature of the alternative (\ref{Wildalt})
when $\max_{1\leq k\leq P_{n}}\left\vert \psi_{k}\right\vert
=O\left( 1\right) $ is that both its moving average and
correlation coefficients approach zero uniformly faster than $n^{-1/2}%
$, provided $P_{n}/\gamma_{n}^{2}$ tends to infinity. We shall show below
that the new test (\ref{Test}) detects these alternatives but that this is not
the case for the three following test statistics, based on directly observed
variables and with $\widetilde{\tau}_{j}^{2}$ as in (\ref{Sstar}),%
\begin{equation}
W_{n}=b_{n}\left( n^{1/2}\max_{j\in\left[ 1,J_{n}\right] }\left\vert
\frac{\widetilde{R}_{j}}{\widetilde{\tau}_{j}}\right\vert -b_{n}\right)
,\quad b_{n}=\left( 2\ln J_{n}-\ln\ln J_{n}-\ln\left( 4\pi\right) \right)
^{1/2},\label{Wu}%
\end{equation}%
\begin{equation}
CvM_{n}=\frac{n}{\pi^{2}}\sum_{j=1}^{J_{n}}\frac{\widetilde{R}_{j}^{2}}%
{j^{2}\widetilde{\tau}_{j}^{2}},\label{CvM}%
\end{equation}%
\begin{align}
EL_{n} & =\widetilde{BP}_{\widetilde{p}_{EL}^{\ast}}^{\ast},\quad
\widetilde{p}_{EL}^{\ast}=\arg\max_{p\in\left[ 1,J_{n}\right] }\left\{
\widetilde{BP}_{p}^{\ast}-\widetilde{\gamma}_{EL}^{\ast}p\right\} \text{
where}\label{EL}\\
& \widetilde{\gamma}_{EL}^{\ast}=\left\{
\begin{array}
[c]{ll}%
\ln n & \text{if }n^{1/2}\max_{j\in\left[ 1,J_{n}\right] }\left\vert
\frac{\widetilde{R}_{j}}{\widetilde{\tau}_{j}}\right\vert \leq\left( 2.4\ln
n\right) ^{1/2},\\
2 & \text{otherwise.}%
\end{array}
\right. \nonumber
\end{align}
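For reference, the max-type and Cram\'{e}r-von Mises statistics can be sketched as follows (a simplification of ours: the studentization $\widetilde{\tau}_{j}$ is replaced by the naive $\widetilde{R}_{0}$, which is only appropriate for a strong white noise):

```python
import numpy as np

def benchmark_statistics(u, J_n):
    """Sketch of W_n (max-type, cf. (Wu)) and CvM_n (cf. (CvM))."""
    n = len(u)
    v = u - u.mean()
    R = np.array([v[: n - j] @ v[j:] / n for j in range(J_n + 1)])
    r = R[1:] / R[0]  # naive standardized autocorrelations
    b_n = np.sqrt(2 * np.log(J_n) - np.log(np.log(J_n)) - np.log(4 * np.pi))
    W_n = b_n * (np.sqrt(n) * np.abs(r).max() - b_n)
    CvM_n = (n / np.pi ** 2) * np.sum(r ** 2 / np.arange(1, J_n + 1) ** 2)
    return W_n, CvM_n
```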
The statistic (\ref{Wu}) is studied in Xiao and Wu (2011), who show that
$W_{n}$ asymptotically has an extreme value distribution. The statistic
(\ref{CvM}) is due to Deo (2000) and is a version of the Cram\'{e}r-von Mises
test of Durlauf (1991) partially corrected for heteroskedasticity. The test
statistic $EL_{n}$ was introduced in Escanciano and Lobato (2009), who
consider a fixed $J_{n}$. We show that it can also work when $J_{n}$
increases with the sample size. The numerical value $2.4$ in the definition of
$\widetilde{\gamma}_{EL}^{\ast}$ is the one used in the simulation experiment of
Escanciano and Lobato (2009), but the proof of Proposition \ref{WildCvM} below
suggests that any real number strictly larger than $2$ would also work. As for
our test, the Escanciano and Lobato (2009) selected order $\widetilde{p}%
_{EL}^{\ast}$ is asymptotically equal to $1$ under $\mathcal{H}_{0}$ and
similar critical values can be used. To show that these tests do not detect
alternatives with small correlation coefficients, it is sufficient to consider
the single Gaussian null hypothesis $G_{0}$: $\left\{ u_{t}\right\} $ is a
Gaussian white noise $\left\{ \varepsilon_{t}\right\} $ with variance
$\sigma^{2}$ and the Gaussian alternative $G_{1}$: $\left\{ u_{t}\right\} $
is given by (\ref{Wildalt}) with Gaussian i.i.d. $\left\{ \varepsilon
_{t}\right\} $, $\sum_{k=1}^{P_{n}}\psi_{k}^{2}=O(P_{n})$, $\max_{1\leq k\leq
P_{n}}\left\vert \psi_{k}\right\vert =O\left( 1\right) ,$ $\min_{1\leq k\leq
P_{n}}\left\vert \psi_{k}\sigma^{2}\right\vert \geq1$, $\nu>0$, $\gamma
_{n},P_{n}\rightarrow\infty$ with $\gamma_{n}/P_{n}^{1/2}=o\left( 1/\ln
n\right) $ and $P_{n}=O\left( \left( n/\gamma_{n}\right) ^{1/14}\right)
\leq\overline{p}_{n}/2$, $\gamma_{n}\asymp\left( 2\ln\ln n\right) ^{1/2}$
satisfies (\ref{Gam}). We also assume $J_{n}=O\left( n^{1/2}\right) $.
\begin{prop}
Let $\left\{ u_{t}\right\} $ be directly observed. Suppose that Assumptions
\ref{Kernel} and \ref{P} in Appendix A hold. Then, for $\nu$ large enough,
the alternative $G_{1}$ satisfies (\ref{Sparse3}) and
\textit{(i)} The new test (\ref{Test}) and its $\widetilde{S}_{\widehat{p}%
^{\ast}}^{\ast}$ version consistently detect $G_{1}$;
\textit{(ii)} By contrast, the statistics $W_{n}$, $CvM_{n}$ and $EL_{n}$ have
the same asymptotic distribution under $G_{0}$ and $G_{1}$ and the
corresponding tests are therefore not consistent.\label{WildCvM}
\end{prop}
\noindent Proposition \ref{WildCvM}-(ii) implies that tests based on $W_{n}$,
$CvM_{n}$ or $EL_{n}$ are not adaptive rate-optimal. This is due to a
continuous behavior of these test statistics that prevents detection of
$G_{1}$ as explained now. Let $\widetilde{R}_{0,j}/\widetilde{\tau}_{0,j}$ and
$\widetilde{R}_{1,j}/\widetilde{\tau}_{1,j}$ be the standardized sample
covariance computed under $G_{0}$ and $G_{1}$ respectively. It is established
in the proof of Proposition \ref{WildCvM} that%
\begin{equation}
\max_{j\in\left[ 1,J_{n}\right] }\left\vert \frac{\widetilde{R}_{0,j}%
}{\widetilde{\tau}_{0,j}}-\frac{\widetilde{R}_{1,j}}{\widetilde{\tau}_{1,j}%
}\right\vert =o_{\mathbb{P}}\left( \frac{1}{\left( n\log n\right) ^{1/2}%
}\right) ,\label{G01cov}%
\end{equation}
a fact which implies that the tests $W_{n}$ and $CvM_{n}$ are not consistent.
The case of the $EL_{n}$ test is more delicate. It is first shown that
(\ref{G01cov}) yields $\widetilde{\gamma}_{EL}^{\ast}=\ln n$\ with a
probability tending to 1 under both $G_{0}$\ and $G_{1}$ because the statistic
$W_{n}$\ has the same asymptotic behavior under $G_{0}$\ and $G_{1} $. The
next step is to show that $P\left( \widetilde{p}_{EL}^{\ast}=1\right)
\rightarrow1$\ under $G_{0}$\ and $G_{1}$. This holds by construction under
$G_{0}$. To understand that this also holds under $G_{1}$, observe that
$\widetilde{p}_{EL}^{\ast}=p>1$ implies in particular that $\widetilde{BP}%
_{p}^{\ast}-\widetilde{\gamma}_{EL}^{\ast}p\geq\widetilde{BP}_{1}^{\ast
}-\widetilde{\gamma}_{EL}^{\ast}$ by (\ref{EL}), an inequality which is
equivalent to%
\begin{equation}
\frac{\widetilde{BP}_{p}^{\ast}-\widetilde{BP}_{1}^{\ast}}{p-1}\geq
\widetilde{\gamma}_{EL}^{\ast}=\ln n+o_{\mathbb{P}}\left( 1\right)
.\label{ELpnot1}%
\end{equation}
But, due to the division by $p-1$, the asymptotic behavior of $\left(
\widetilde{BP}_{p}^{\ast}-\widetilde{BP}_{1}^{\ast}\right) /\left(
p-1\right) $\ is the same under $G_{0}$\ and $G_{1}$ by (\ref{G01cov}), so
that $P\left( \widetilde{p}_{EL}^{\ast}=1\right) \rightarrow1$\ under
$G_{1}$. Since (\ref{G01cov}) also gives that the limit distribution of
$\widetilde{BP}_{1}^{\ast}$\ is the same under $G_{0}$\ and $G_{1}$, this
would also be the case of the test statistic $\widetilde{BP}_{\widetilde{p}%
_{EL}^{\ast}}^{\ast}=\widetilde{BP}_{1}^{\ast}+o_{\mathbb{P}}\left( 1\right)
$, so that the Escanciano and Lobato (2009) test $EL_{n}$\ is inconsistent
against $G_{1}$. As seen from (\ref{ELpnot1}), this is due to a too high
penalization term $\widetilde{\gamma}_{EL}^{\ast}p$,\ proportional to $p$, for
$\widetilde{BP}_{p}^{\ast}$. This contrasts with our test which applies a
penalty of lower order $\gamma_{n}p^{1/2}$ to the \textquotedblleft
debiased\textquotedblright\ test statistics $\widetilde{BP}_{p}^{\ast
}-E\left( p\right) $.
\section{Simulation experiments\label{Simulation experiments}}
\setcounter{equation}{0} This simulation experiment aims to propose a
reasonable value of the penalty sequence $\gamma_{n}$ and to assess the
resulting test on various strong and weak white noise processes and various
alternatives.
As preliminary experiments have shown that the test statistic $\widehat{S}%
_{\widehat{p}}$ may yield an oversized test for some practically relevant
white noise processes, we consider the test based on $\widehat{S}%
_{\widehat{p}^{\ast}}^{\ast}$ as in (\ref{Sstar}) and (\ref{Hatpstar}) with
the critical values (\ref{Zest}), $\widetilde{z}^{\ast}\left( \alpha\right)
$ for directly observed variables and $\widehat{z}^{\ast}\left(
\alpha\right) $ for estimated residuals, $\alpha=10\%$, $5\%$ and $1\%$. To
investigate the impact of choosing a large $\overline{p}_{n}$ we allow for
all possible orders and set $\overline{p}_{n}=n-1$. We consider two kernel
choices. The first is $K\left( t\right) =\mathbb{I}\left( \left\vert
t\right\vert \leq1\right) $, which gives the Box-Pierce statistic, so that the
corresponding tests are labelled $BP$. The second is a modified Parzen kernel%
\[
k(t)=\left\{
\begin{array}
[c]{lll}%
1-6t^{2}+6\left\vert t\right\vert ^{3}, & & \left\vert t\right\vert \leq1/2,\\
2(1-|t|)^{3}, & & 1/2<\left\vert t\right\vert \leq1,\\
0 & & \text{otherwise.}%
\end{array}
\right.
\]
Since $k\left( 1\right) =0$, we use the choice $K\left( t\right) =k\left(
t/2\right) /k\left( 1/2\right) $ and label the corresponding tests as $Par$.
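In code, the two kernel choices read as follows (a direct transcription; the function names are ours):

```python
def parzen(t):
    """Parzen kernel k(t)."""
    a = abs(t)
    if a <= 0.5:
        return 1 - 6 * a ** 2 + 6 * a ** 3
    if a <= 1:
        return 2 * (1 - a) ** 3
    return 0.0

def K_bp(t):
    """Truncation kernel K(t) = I(|t| <= 1) of the BP test."""
    return 1.0 if abs(t) <= 1 else 0.0

def K_par(t):
    """Rescaled Parzen kernel K(t) = k(t/2) / k(1/2) of the Par test,
    used because k(1) = 0 while K_par(1) = 1."""
    return parzen(t / 2) / parzen(0.5)
```

Note that $K_{par}(1)=1$, like the truncation kernel, whereas the unmodified Parzen kernel vanishes at $t=1$.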
The first experiment parallels Theorem \ref{Level} and aims to calibrate the
penalty sequence. It analyzes the sensitivity of the test to the penalty term.
It investigates the behavior of the test under the null for $\gamma_{n}%
=\gamma\left( 2\ln\ln\left( n-2\right) \right) ^{1/2}$ where the
proportionality coefficient $\gamma$ ranges from $2.8$ to $3.8$. The
considered white noise is a directly observed $\left\{ u_{t}\right\} $ with
a standard normal distribution. The next table reports the simulated levels
from $50,000$ replications and the percentage $\%\left\{ \widehat{p}^{\ast
}\neq1\right\} $, an important indicator to decide whether a difference
between nominal and observed levels is due to $\widehat{p}^{\ast}$ or to the
choice of critical values.
\[
\text{\textbf{[INSERT TABLE 1 HERE]}}%
\]
In Table 1, a `*' indicates a statistically oversized test, i.e. with a level
statistically greater than the nominal one at the $1\%$ level. A threshold
value for the $BP$ test is $\gamma=3.4$, which ensures that the observed sizes
are close to the nominal ones for $n=1,000$. The $Par$ test is slightly better
behaved in this respect. Both tests have very similar $\%\left\{
\widehat{p}^{\ast}\neq1\right\} $ which is well below $1\%$ for $\gamma=3.4$.
The rest of the simulation experiments will use $\gamma=3.4$.
Let us now introduce some benchmark tests. We shall compare our $BP$ and
$Par$ tests with the data-driven test $EL$ based on the statistic $EL_{n}$ in
(\ref{EL}) with $J_{n}=n-1$ and the Lobato (2001) and Kuan and Lee (2006)
critical values in (\ref{Zest}). We also consider the Newey-West data-driven
order $\widehat{p}_{IMSE}$ used in Hong and Lee (2005) and the test statistic%
\[
\widehat{p}_{IMSE}=\left( 1\vee\widetilde{c}^{1/5}\left( f\right) \right)
n^{1/5}\text{,\quad where\quad}\widetilde{c}\left( f\right) =\frac
{144\sum_{j=-(n-1)}^{n-1}k\left( j/\widetilde{p}\right) j^{4}\widehat{R}%
_{j}^{2}/\widehat{\tau}_{j}^{2}}{0.539285\sum_{j=-(n-1)}^{n-1}k\left(
j/\widetilde{p}\right) \widehat{R}_{j}^{2}/\widehat{\tau}_{j}^{2}},
\]%
\[
IMSE=\frac{\sum_{j=1}^{\widehat{p}_{IMSE}}k^{2}\left( j/\widehat{p}%
_{IMSE}\right) \left\{ \widehat{R}_{j}^{2}/\widehat{\tau}_{j}^{2}-\left(
1-\frac{j}{n}\right) \right\} }{\left( 2\sum_{j=1}^{\widehat{p}_{IMSE}%
}k^{4}\left( j/\widehat{p}_{IMSE}\right) \left( 1-\frac{j}{n}\right)
^{2}\right) ^{1/2}},
\]
where $k\left( \cdot\right) $ is the Parzen kernel and $\widehat{\tau}%
_{j}^{2}$ is as (\ref{Sstar}). In the definition of $\widehat{p}_{IMSE}$,
$\widetilde{p}$ is a pilot bandwidth set to $\widetilde{p}=(4n/100)^{4/25}$.
Observe that $\widetilde{c}\left( f\right) $ remains potentially stochastic
under the null, so that the null limit distribution of $IMSE$ may
differ from the standard normal one obtained by Hong (1996), Xiao and Wu (2011)
and Shao (2011b) for deterministic $p$. We however follow common practice and
the $IMSE$ test will use standard normal critical values. The last benchmark
test, $CvM$, is the Deo (2000) Cram\'{e}r-von Mises statistic $CvM_{n}$ in
(\ref{CvM}) and uses the critical values tabulated in Anderson and Darling (1952).
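The Newey-West style order displayed above can be sketched as follows (a simplification of ours: the naive standardization $\widehat{R}_{0}$ replaces $\widehat{\tau}_{j}$, and the $IMSE$ statistic itself is omitted):

```python
import numpy as np

def parzen_vec(t):
    """Parzen kernel, vectorized."""
    a = np.abs(t)
    return np.where(a <= 0.5, 1 - 6 * a ** 2 + 6 * a ** 3,
                    np.where(a <= 1, 2 * (1 - a) ** 3, 0.0))

def p_imse(u):
    """Newey-West style data-driven order p_IMSE following the displayed
    formulas, with tau_j approximated by R_0 (our simplification)."""
    n = len(u)
    v = u - u.mean()
    R = np.array([v[: n - j] @ v[j:] / n for j in range(n)])
    r2 = (R / R[0]) ** 2                 # naive (R_j / tau_j)^2
    p_tilde = (4 * n / 100) ** (4 / 25)  # pilot bandwidth
    j = np.arange(-(n - 1), n)
    w = parzen_vec(j / p_tilde)
    r2_sym = r2[np.abs(j)]               # uses R_{-j} = R_j
    c = 144 * np.sum(w * j ** 4 * r2_sym) / (0.539285 * np.sum(w * r2_sym))
    return max(1.0, c ** 0.2) * n ** 0.2
```

By construction the order is at least $n^{1/5}$, consistent with the discussion of Table 2 where $\widehat{p}_{IMSE}$ remains moderate.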
The first comparison under $\mathcal{H}_{0}$ is based on i.i.d. $\left\{
u_{t}\right\} $ with the following distributions: standard normal (`Nor' in
Table 2), a Student with three degrees of freedom (`Stud'), and a centered chi
square with one degree of freedom (`Chi'). The Student distribution is used to
test the sensitivity of our test to the lack of higher-order moments and the
chi square one can reveal sensitivity to skewness.%
\[
\text{\textbf{[INSERT TABLE 2 HERE]}}%
\]
As in Table 1, the $Par$ test is slightly better than the $BP$ test but both
behave well here. The highest ${\small \%}\left\{ \widehat{p}^{\ast}%
\neq1\right\} $ for the tests $BP$ and $Par$ are achieved for the centered chi
square distribution. $BP$ and $Par$ are slightly oversized under `Chi' but still
behave better than the $CvM$ test, which is their best competitor in this
experiment. $BP$ and $Par$ seem insensitive to the lack of higher-order
moments, as revealed by the `Stud' experiment. The $EL$ test is oversized due
to a very high ${\small \%}\left\{ \widehat{p}^{\ast}\neq1\right\} $ since
${\small \%}\left\{ \widehat{\gamma}_{EL}^{\ast}\neq\ln n\right\} $ is also
high. Escanciano and Lobato (2009) report a similar behavior even when
$J_{n}$ remains finite. The $IMSE$ test is conservative at the $10\%$ level
but has a level which seems quite far from the nominal size when $\alpha=5\%$
or $1\%$. This is due to the fact that $\widehat{p}_{IMSE}$ remains moderate
and quite close to $1$ while the normal critical values of the $IMSE$ test
build on the fact that $\widehat{p}_{IMSE}$ should theoretically diverge under
$\mathcal{H}_{0}$. The $CvM$ test behaves well except for `Chi' where it is
more oversized than $BP$ and $Par$ for $n=200$.
The next experiment considers directly observed or estimated weak white noise
$\left\{ u_{t}\right\} $. Two conditionally heteroskedastic martingale
differences are examined. The first process is a GARCH(1,1) with $u_{t}%
=s_{t}\zeta_{t}$ and $s_{t}^{2}=0.001+0.90s_{t-1}^{2}+0.05u_{t-1}^{2}$ where
the i.i.d. $\zeta_{t}$ are standard normal. This process, which puts a high
weight on $s_{t-1}^{2}$, has been used in many simulation experiments; see
Lobato et al. (2002), who justify this choice with financial market examples,
or Escanciano and Lobato (2009) among others. The second martingale difference
is the ARCH(1) $u_{t}=s_{t}\zeta_{t}$ and $s_{t}^{2}=0.001+0.9u_{t-1}^{2}$
with a dynamic of $s_{t}^{2}$ carried by $u_{t-1}^{2}$. Due to an ARCH
coefficient larger than $1/3$, $\mathbb{E}\left[ u_{t}^{4}\right] =\infty$
and the tests are, in principle, not expected to behave well in this
experiment. The next three processes are uncorrelated but are not martingale
differences, so that the $CvM$ test is not expected to have a correct size
and is just reported here as a benchmark. The first, labelled `Bilinear' in
Table 3 below, is a bilinear model $u_{t}=\zeta_{t}+0.9\zeta_{t-1}u_{t-2}$.
The second, labelled `No-MDS', is given by $u_{t}=\zeta_{t-1}\zeta
_{t-2}\left( 1+\zeta_{t-2}+\zeta_{t}\right) $ and is from Lobato (2001). The
third, `All-Pass', is an All-Pass ARMA(1,1) process (Breidt, Davis, and
Trindade, 1999) as in Lobato et al. (2002), $u_{t}-0.5u_{t-1}=\zeta_{t}%
-\zeta_{t-1}/0.5$ where the i.i.d. $\zeta_{t}$ have a Student distribution
with $9$ degrees of freedom. Since the root of the $MA$ part is the inverse of
the $AR$ root, the resulting process is uncorrelated but the $u_{t}$ are
dependent due to non Gaussian $\zeta_{t}$. Finally, the last process `ARRes'
considers estimated residuals from the $AR\left( 1\right) $ $y_{t}%
=0.8y_{t-1}+\zeta_{t}$, $\widehat{u}_{t}=y_{t}-\widehat{\theta}y_{t-1}$,
$\widehat{\theta}=\sum_{t=0}^{n-1}y_{t}y_{t+1}/\sum_{t=0}^{n-1}y_{t}^{2}$. The
$BP$, $Par$ and $EL$ tests are all adjusted to tackle the estimation effect by
using the critical values $\widehat{z}^{\ast}\left( \alpha\right) $ of
(\ref{Zstar}). The critical values of the tests $IMSE$ and $CvM$ are not
adjusted so they are not expected to perform well for this case.%
\[
\text{\textbf{[INSERT\ TABLE 3 HERE]}}%
\]
The behavior of the $BP$ and $Par$ tests is very good with observed levels
which are not oversized in general. This is due in part to a ${\small \%}%
\left\{ \widehat{p}^{\ast}\neq1\right\} $ which is always much smaller than
$1\%$. However the $BP$ and $Par$ tests can be undersized as in the case of
`ARCH(1)'. But even in this case ${\small \%}\left\{ \widehat{p}^{\ast}%
\neq1\right\} $ remains very small, suggesting that this is due to the Lobato
(2001) critical values. An unreported simulation study indeed shows
that using standard chi-squared critical values instead gives much better
observed levels, around $10\%$, $4.5\%$ and $0.7\%$ for the $BP$ and $Par$
tests, $n=200$ or $1,000$. The behavior of the $EL$ test is much more erratic,
with observed levels which can be severely oversized, as for `Bilinear', or
undersized, as for `ARCH(1)'. This comes from a ${\small \%}\left\{
\widehat{p}^{\ast}\neq1\right\} $ which is much higher than for the $BP$ and
$Par$ tests. The $IMSE$ test can also be severely undersized or oversized
especially at the nominal $5\%$ level. The $CvM$ test performs well except, as
expected, for weak white noises or estimated residuals.
We now consider $\mathcal{H}_{1}$. A first set of low-lag alternatives is
calibrated using the Cram\'{e}r-von Mises norm $D_{CvM}^{2}=\sum
_{j=1}^{n-1}R_{j}^{2}/\left( \pi^{2}j^{2}R_{0}^{2}\right) $, which is the
counterpart of $CvM_{n}/n$. We consider lacunary $AR(P)$, $u_{t}=\theta
u_{t-P}+\varepsilon_{t}$, and $MA\left( P\right) $, $u_{t}=\varepsilon
_{t}+\theta\varepsilon_{t-P}$, processes with i.i.d. $N\left( 0,1\right) $
$\varepsilon_{t}$, selecting the positive AR and MA coefficients $\rho_{P,n}$
and $\theta_{P,n}$ so that $D_{CvM}^{2}=3/n$. It can
be shown that this gives%
\[
\rho_{P,n}=\frac{3^{1/2}P}{n^{1/2}}\left( 1+o\left( 1\right) \right)
\text{\quad\quad and\quad\quad}\theta_{P,n}=\frac{3^{1/2}P}{n^{1/2}}\left(
1+o\left( 1\right) \right) ,
\]
for fixed $P$, so that the resulting alternatives can also be viewed as local
Pitman alternatives approaching the null at the parametric rate $n^{-1/2}$. For
$P=1$ and $n=200$ or $1,000$, we consider the $AR(1)$ and $MA(1)$ alternatives
calibrated for $n=200$, with a view to seeing the impact of increasing the
sample size on the power. We also consider larger values $P=4$ and $6$, in
which case we allow the alternative to vary with the sample size. This gives
the six following alternatives: $MA1$, $u_{t}=\varepsilon_{t}%
+0.1244\varepsilon_{t-1}$ for $n=200$ and $1,000$ ; $AR1$, $u_{t}%
=0.1233u_{t-1}+\varepsilon_{t}$ for $n=200$ and $1,000$; $MA4$, $u_{t}%
=\varepsilon_{t}+0.8165\varepsilon_{t-4}$ for $n=200$ and $u_{t}%
=\varepsilon_{t}+0.2307\varepsilon_{t-4}$ for $n=1,000$; $AR6$, $u_{t}%
=0.6849u_{t-6}+\varepsilon_{t}$ for $n=200$ and $u_{t}=0.3242u_{t-6}%
+\varepsilon_{t}$ for $n=1,000$. In Tables 4, 5 and 6, $\overline
{\widehat{p}^{\ast}}$ and $s_{\widehat{p}^{\ast}}$ are the simulation mean and
standard deviation of $\widehat{p}^{\ast}$. Such statistics are useful to
conjecture the impact of $\overline{p}_{n}$ on the power, since large values of
$\overline{\widehat{p}^{\ast}}$ and $s_{\widehat{p}^{\ast}}$ suggest that
decreasing $\overline{p}_{n}$ can decrease the power.
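As a check on this calibration (a sketch of ours: for the lacunary $MA(P)$, only the lag-$P$ correlation $\theta/(1+\theta^{2})$ is nonzero, and we match it to $3^{1/2}P/n^{1/2}$), the reported MA coefficients can be recovered numerically:

```python
import numpy as np

def ma_coeff(P, n):
    """Smallest positive theta with theta / (1 + theta^2) = 3^{1/2} P / n^{1/2},
    the lag-P autocorrelation target of the lacunary MA(P) alternative."""
    c = np.sqrt(3) * P / np.sqrt(n)
    # solve the quadratic theta^2 * c - theta + c = 0 for its smaller root
    return (1 - np.sqrt(1 - 4 * c ** 2)) / (2 * c)
```

This reproduces the values used above: `ma_coeff(1, 200)` is approximately $0.1244$ ($MA1$) and `ma_coeff(4, 1000)` approximately $0.2307$ ($MA4$, $n=1,000$).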
\[
\text{\textbf{[INSERT TABLE 4 HERE]}}%
\]
The low-lag $AR1$ and $MA1$ experiments have very similar characteristics.
The data-driven tests $BP$, $Par$ and $EL$ seem to be outperformed by the
$IMSE$ and $CvM$ tests. This is actually due to the fact that the former use
the robust critical values (\ref{Zstar}). Using chi-square critical values as
in Table 5 shows that all the tests perform similarly. The fact that the $IMSE$
test seems more powerful at the nominal $1\%$ level is not really meaningful
since Table 2 reveals that $IMSE$ is oversized at this level.
\[
\text{\textbf{[INSERT TABLE 5 HERE]}}%
\]
For the higher order experiments $MA4$ and $AR6$, the $BP$, $Par$ and $EL$
tests behave very similarly and clearly outperform their competitors with a
power close to $100\%$. An interesting fact is the very high values achieved
by $\overline{\widehat{p}^{\ast}}$ and $s_{\widehat{p}^{\ast}}$ for the $BP$
and $Par$ tests. This is due to the selection procedure (\ref{Hatpstar})
which, compared to (\ref{EL}), penalizes large $p$ less. Note that such a
behavior of $\widehat{p}^{\ast}$ prevents using the selected order to
estimate the order of the underlying process.
The second set of alternatives consists of randomized small-correlation
processes as in (\ref{Wildalt}),
\begin{equation}
u_{t}=\varepsilon_{t}+\frac{\left( 2.5\times\gamma_{n}\right) ^{1/2}%
}{n^{1/2}P^{1/4}}\sum_{k=1}^{P}\psi_{k,b}\varepsilon_{t-k}\text{,\quad
\quad\quad\quad}\psi_{k,b}\overset{\text{i.i.d.}}{\sim}N\left( 0,1\right)
\text{.}\label{Bayeswild}%
\end{equation}
In this setting $b=1,...,10,000$ is the simulation index. New $MA$
coefficients $\left\{ \psi_{k,b}\right\} $ are drawn for each simulation.
Randomizing the moving average coefficients allows us to explore various
shapes of the correlation function. The noise $\left\{ \varepsilon
_{t}\right\} $ is independent of the moving average coefficients $\left\{
\psi_{k,b}\right\} $ and is drawn randomly from the standard normal
distribution. Since $\sum_{k=1}^{P}\psi_{k,b}^{2}=P\left( 1+o_{\mathbb{P}%
}\left( 1\right) \right) $ when $P$ tends to infinity, the covariance
structure of the alternatives (\ref{Bayeswild}) is described in Lemma
\ref{Wildlem}. We consider two scenarios. In the experiment `LOW', $P$ is set
to $15$ for $n=200$ and to $75$ when $n=1,000$. The experiment `HIGH' doubles
the order $P$, $P=30$ for $n=200$ and $P=150$ for $n=1,000$. The next table
reports our simulation results.%
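A single replication of (\ref{Bayeswild}) can be sketched as follows (the function name and seed handling are ours):

```python
import numpy as np

def simulate_bayeswild(n, P, seed=0):
    """One path of the randomized MA(P) alternative (Bayeswild); fresh
    Gaussian MA coefficients psi_k are drawn for each replication."""
    rng = np.random.default_rng(seed)
    gamma_n = np.sqrt(2 * np.log(np.log(n)))
    scale = np.sqrt(2.5 * gamma_n) / (np.sqrt(n) * P ** 0.25)
    psi = rng.standard_normal(P)                 # psi_{k,b} i.i.d. N(0, 1)
    eps = rng.standard_normal(n + P)             # strong Gaussian white noise
    # u_t = eps_t + scale * sum_{k=1}^{P} psi_k eps_{t-k}
    u = np.array([eps[t] + scale * psi @ eps[t - P:t][::-1]
                  for t in range(P, n + P)])
    return u
```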
\[
\text{\textbf{[INSERT TABLE 6 HERE]}}%
\]
The $BP$ test outperforms its competitors. The $EL$ test achieves a similar
performance only in the LOW experiment when $P=15$ and $n=200$. The $Par$ test
performs similarly to the $BP$ test only for large $P=75,150$ and $n=1,000$,
in which case it also outperforms $EL$, $IMSE$ and $CvM$. The $IMSE$ test
performs poorly and is even dominated by $CvM$, due to a $\widehat{p}_{IMSE}$
which remains very close to $2$. Likewise, the average value of $\widehat{p}%
_{EL}$ is much lower than the ones achieved for the $BP$ and $Par$ tests. The
high values of $\overline{\widehat{p}^{\ast}}_{BP}$ and $\overline
{\widehat{p}^{\ast}}_{Par}$ may suggest that these tests would be affected by
a lower choice of $\overline{p}_{n}$. However, setting $\overline{p}%
_{n}=3\left[ \left( n/2\right) ^{1/2}\right] $ gives similar conclusions
for the $BP$ test.
\section{Concluding remarks\label{Concluding remarks}}
\setcounter{equation}{0} The paper proposes an automatic test for the weak white noise null
hypothesis when the variables are directly observed or estimated residuals.
The test is based on a new data-driven selection procedure of the order used
in a Box and Pierce (1970) test statistic. The critical region uses Lobato
(2001) robust critical values when the variables are directly observed and
Kuan and Lee (2006) ones for estimated residuals. An important theoretical
finding is that the new test can consistently detect alternatives with small
autocorrelation coefficients of order $\rho_{n}=o\left( n^{-1/2}\right) $
where $n$ is the sample size, provided that the number of $O\left( \rho
_{n}\right) $ autocorrelation coefficients at reasonably moderate lags
remains large enough. The proposed test is shown to be adaptive rate-optimal
against this class of alternatives. The paper gives examples of MA
alternatives with small autocorrelation coefficients of order $o\left(
n^{-1/2}\right) $ which are detected by the new test but not by previous
procedures proposed by Deo (2000), Escanciano and Lobato (2009) or Xiao and Wu
(2011). These alternatives correspond to a plausible macroeconomic scenario
where a temporary shock has no significant impact whereas permanent ones may
cause some significant changes. A simulation experiment has shown that the new
test can cope with various weak white noises including some ARCH\ or GARCH
processes popular in empirical finance. The simulation experiment has also
confirmed the good power properties of the test regarding detection of
standard $AR(1)$ and $MA(1)$ alternatives as well as detection of small
$o\left( n^{-1/2}\right) $ autocorrelation coefficients. The methodology
considered here can be applied to many econometric problems which involve
models with many parametric coefficients, including inference for impulse
response functions, VAR causality testing, significance testing in series
expansions, or detection of potentially weak instruments.
\section*{Appendix A: Main assumptions\label{Notations and main assumptions}}
\renewcommand{\baselinestretch}{1.5} \setcounter{lem}{0} \setcounter{prop}{0}
\setcounter{thm}{0} \renewcommand{\thelem}{A.\arabic{lem}}
\renewcommand{\theprop}{A.\arabic{prop}}
\renewcommand{\thethm}{A.\arabic{thm}} \setcounter{equation}{0}
\setcounter{subsection}{0} \renewcommand{\theequation}{A.\arabic{equation}}
\renewcommand{\thesubsection}{A.\arabic{subsection}} In what follows,
$\left\Vert Z\right\Vert _{a}=\mathbb{E}^{1/a}\left[ \left\vert Z\right\vert
^{a}\right] $ where $Z$ is a real r.v. and $a$ a positive real number. When
studying the performance of the test under the alternative, we consider a
sequence $\left\{ u_{t,n}\right\} $ of stationary alternatives with
autocovariance coefficients $\left\{ R_{j,n}\right\} $. This means that for
each given $n$, the process $\left\{ u_{t,n},t\in\mathbb{N}\right\} $ is
stationary. Note that $\left\{ u_{t,n}\right\} $ and $\left\{
R_{j,n}\right\} $ were abbreviated into $\left\{ u_{t}\right\} $ and
$\left\{ R_{j}\right\} $ in the main body of the paper. We follow, under the
null and the alternative, Xiao and Wu (2011) and Shao (2011b) and shall restrict
ourselves to uncorrelated stationary processes satisfying a Moment Contraction
condition from Wu (2005). We shall assume that $u_{t,n}=F_{n}\left(
\ldots,e_{t-1},e_{t}\right) $ for some measurable $F_{n}\left( \cdot\right) $,
where the $e_{t}$, $t=-\infty,\ldots,+\infty$, are i.i.d. (univariate or
not) random variables. Consider an independent copy $\left\{ e_{t}^{\prime}\right\} $ of
$\left\{ e_{t}\right\} $ and define for $\tau\leq t\leq n$%
\[
u_{t,n}^{\tau}=F_{n}\left( \ldots,e_{\tau-1},e_{\tau}^{\prime},e_{\tau
+1},\ldots,e_{t-1},e_{t}\right) ,
\]
that is, $e_{\tau}$ is replaced by $e_{\tau}^{\prime}$ in $u_{t,n}^{\tau}$.
The magnitude of the differences $u_{t,n}-u_{t,n}^{\tau}$ measures the
sensitivity of the process to shocks on the past innovations. More formally,
assume that for some $a>0$ and for all $j\geq0$%
\[
\left\Vert u_{t,n}-u_{t,n}^{t-j}\right\Vert _{a}\leq\delta_{a}\left(
j\right) \text{, where }\delta_{a}\left( j\right) \text{ decreases to
}0\text{ when }j\rightarrow\infty.
\]
Shao (2011b) assumes that $\delta_{a}\left( j\right) $ decreases with an
exponential rate, a condition which is fulfilled by many linear or nonlinear
time series models, including threshold, stochastic volatility, bilinear or
GARCH models, see Shao (2011b), Wu (2005, 2007, 2009) and the references
therein. Our main assumptions are given below.
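For concreteness, here is a small Monte Carlo sketch, of our own making, of the coefficient $\delta_{2}\left( j\right) $ for a causal AR(1) written as $u_{t}=\sum_{k\geq0}\phi^{k}e_{t-k}$: replacing $e_{t-j}$ by an independent copy changes $u_{t}$ by exactly $\phi^{j}\left( e_{t-j}-e_{t-j}^{\prime}\right) $, so that $\delta_{2}\left( j\right) =\left\vert \phi\right\vert ^{j}\sqrt{2}$ for standard normal innovations, an exponential rate consistent with the discussion of Shao (2011b).

```python
import math, random

def delta2_mc(phi, j, reps=20000, seed=1):
    """Monte Carlo estimate of delta_2(j) = || u_t - u_t^{t-j} ||_2 for the
    AR(1) representation u_t = sum_k phi^k e_{t-k}: only the lag-j term of
    the sum differs between u_t and its coupled version u_t^{t-j}."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(reps):
        e = rng.gauss(0.0, 1.0)    # innovation e_{t-j}
        e2 = rng.gauss(0.0, 1.0)   # independent copy e'_{t-j}
        acc += (phi ** j * (e - e2)) ** 2
    return math.sqrt(acc / reps)

phi = 0.5
exact = [abs(phi) ** j * math.sqrt(2.0) for j in range(5)]  # closed form
mc = [delta2_mc(phi, j) for j in range(5)]                  # simulation
```

The simulated coefficients match the geometric closed form up to Monte Carlo error, illustrating the exponential decay of $\delta_{a}\left( j\right) $ for this model.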
\setcounter{hp}{10}
\begin{hp}
The kernel function $K\left( \cdot\right) $ from $\mathbb{R}^{+}$ to
$\left[ 0,\infty\right) $ is nonincreasing, bounded away from $0$ on
$[0,1/2]$ and continuously differentiable over its support $[0,1]$%
.\label{Kernel}
\end{hp}
\setcounter{hp}{15}
\begin{hp}
The maximal order $\overline{p}_{n}$ diverges faster than some power of $n$
and satisfies $\overline{p}_{n}=o(n^{1/\left( 2\left( 1+3/a\right) \right) })$ as
$n\rightarrow\infty$, where $a>1$ is as in Assumption \ref{Reg} below. The
penalty sequence $\gamma_{n}$ satisfies $\gamma_{n}>0$, $\gamma_{n}%
\rightarrow\infty$ and $\gamma_{n}=o\left( n^{1/4}\right) $ as
$n\rightarrow\infty$.\label{P}
\end{hp}
\setcounter{hp}{17}
\begin{hp}
Under $\mathcal{H}_{0}$ and $\mathcal{H}_{1}$, $\sup_{t}\left\Vert
u_{t,n}\right\Vert _{12a}<C_{0}R_{0,n}^{1/2}$ for some $a>1$ and, for some
$b>0$, $\delta_{12a}\left( j\right) \leq C_{1}j^{-7-b}$. Moreover
$1/C_{2}\leq R_{0,n}\leq C_{2}$, and $\max_{j\in\left[ 1,\overline{p}%
_{n}\right] }R_{0,n}^{2}/\operatorname*{Var}\left( u_{t,n}u_{t+j,n}\right)
\leq C_{3}$.\label{Reg}
\end{hp}
\setcounter{hp}{12}
\begin{hp}
The processes $\{u_{t,n}\}$, the model and the estimators $\left\{
\widehat{\theta}_{t}\right\} $ are such that: \label{M} (i) There is a
sequence $\left\{ \theta_{n}\right\} $, with $\theta_{n}=\theta_{0}$ for all
$n$ under $\mathcal{H}_{0}$, such that
\begin{equation}
\left\{ \left( n^{1/2}\left( \widehat{\theta}_{[ns]}-\theta_{n}\right)
^{\prime},n^{-1/2}\sum_{t=1}^{[ns]}\left( u_{t,n}u_{t-1,n}-\mathbb{E}\left[
u_{t,n}u_{t-1,n}\right] \right) \right) ^{\prime},s\in\left[ 0,1\right]
\right\} \label{M1}%
\end{equation}
$D_{[0,1]}$-converges in distribution to a Brownian Motion with a full rank
matrix; (ii) The residual function admits a second-order expansion
$u_{t}\left( \theta\right) =u_{t,n}+(\theta-\theta_{n})^{\prime}%
u_{t,n}^{(1)}+\left( \theta-\theta_{n}\right) ^{\prime}u_{t,n}^{(2)}\left(
\theta-\theta_{n}\right) +\mathfrak{r}_{t,n}\left( \theta\right) $ where,
for any $C>0$,
\begin{equation}
\sup_{t\in\left[ 1,n\right] }\sup_{\theta;\left\Vert \theta-\theta
_{n}\right\Vert \leq Cn^{-1/2}}\left\vert \mathfrak{r}_{t,n}\left(
\theta\right) \right\vert =o_{\mathbb{P}}\left( \frac{1}{n}\right)
\label{M2}%
\end{equation}
and, for each $n$, $\{u_{t,n},u_{t,n}^{(1)},u_{t,n}^{(2)}\}$ is a stationary
process with $\mathbb{E}^{1/2}\left[ \left\Vert a_{t}\right\Vert ^{2}\right]
\leq C_{4}$, $\left\{ a_{t}\right\} $ being successively $\left\{
u_{t,n}^{(1)}\right\} $, $\left\{ u_{t,n}^{(2)}\right\} $, $\left\{
u_{t,n}^{2}\right\} $, $\left\{ u_{t,n}u_{t,n}^{(1)}\right\} $, $\left\{
u_{t,n}^{\left( 1\right) }u_{t,n}^{(1)^{\prime}}\right\} $, $\left\{
u_{t,n}u_{t,n}^{(2)}\right\} $, $\sum_{j=-\infty}^{\infty}\mathbb{E}\left[
\left\Vert u_{t-j,n}^{\left( 1\right) }u_{t,n}\right\Vert ^{2}\right] \leq
C_{5}$ and $\sup_{j\in\mathbb{Z}}\mathbb{E}\left[ \left\Vert n^{-1/2}%
\sum_{t=j+1}^{n}\left( u_{t-j,n}^{\left( 1\right) }u_{t,n}-\mathbb{E}%
[u_{t-j,n}^{\left( 1\right) }u_{t,n}]\right) \right\Vert ^{2}\right] \leq
C_{6}$, $\sup_{j\in\mathbb{Z}}\mathbb{E}\left[ \left\Vert u_{t,n}^{\left(
1\right) }u_{t,n}u_{t-j,n}^{2}\right\Vert \right] \leq C_{7}$ and
$\sup_{j\in\mathbb{Z}}\mathbb{E}\left[ \left\Vert n^{-1/2}\sum_{t=j+1}%
^{n}\left( u_{t,n}^{\left( 1\right) }u_{t,n}u_{t-j,n}^{2}-\mathbb{E}%
[u_{t,n}^{\left( 1\right) }u_{t,n}u_{t-j,n}^{2}]\right) \right\Vert
^{2}\right] \leq C_{8}$.
\end{hp}
The compact sets $\left[ 0,1/2\right] $ and $\left[ 0,1\right] $ in
Assumption \ref{Kernel} are somewhat arbitrary and can be replaced by any
nested compact intervals. Note however that Assumption \ref{Kernel} forbids
the use of the Daniell kernel $K\left( x\right) =\sin\left( x\right) /x$,
which violates the nonincreasing and bounded-support conditions.
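The conditions of Assumption \ref{Kernel} are easy to check numerically. The sketch below, which we add for illustration, uses the Bartlett kernel $K\left( x\right) =\left( 1-x\right) _{+}$ as an example satisfying the assumption, and also confirms that the Daniell kernel does not vanish outside $\left[ 0,1\right] $.

```python
import math

def bartlett(x):
    """Bartlett kernel K(x) = (1 - x)_+ : support [0,1], nonincreasing."""
    return max(0.0, 1.0 - x)

def daniell(x):
    """Daniell kernel K(x) = sin(x)/x : support is all of R^+, not [0,1]."""
    return 1.0 if x == 0 else math.sin(x) / x

grid = [i / 1000 for i in range(2001)]  # fine grid on [0, 2]
bart = [bartlett(x) for x in grid]

# The three conditions of the kernel assumption, checked on the grid:
nonincreasing = all(a >= b for a, b in zip(bart, bart[1:]))
bounded_below = min(bartlett(x) for x in grid if x <= 0.5) >= 0.5
zero_outside = all(bartlett(x) == 0 for x in grid if x > 1)

# The Daniell kernel fails the bounded-support condition:
daniell_outside = any(abs(daniell(x)) > 0 for x in grid if x > 1)
```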
Assumption \ref{Reg}-(i) assumes a polynomial decay for the coefficients
$\delta_{12a}\left( j\right) $, a condition which is weaker than the
exponential rate assumed in Shao (2011b). Note also that, in Assumption
\ref{P}, the order of $\overline{p}_{n}$ can come closer to $n^{1/2}$ when $a$
increases. Under Assumption \ref{Reg}-(i), $\left\{ u_{t,n}\right\} $ must
have finite moments of order twelve at least. This is mostly needed for a
proof of Theorem \ref{Level} based on the Lindeberg substitution method, see
Pollard (2002, p.179), which requires bounding moments like
$\mathbb{E}\left[ \left( u_{t}^{2}u_{t+j}^{2}\right) ^{3}\right]
\leq\mathbb{E}\left[ u_{t}^{12}\right] $. However our simulation
experiments suggest that the test (\ref{Test}) is well behaved, with a low
$\%\left\{ \widehat{p}\neq1\right\} $, even when $\left\{ u_{t}\right\} $
has a fat-tailed Student distribution or no fourth moment, as in the ARCH(1)
experiment. It is possible that a proof of Theorem \ref{Level} making better
use of the self-normalization of $\widetilde{S}_{p}/\widetilde{R}_{0}^{2}$
could work under weaker moment conditions, since moderate deviations, which
are at the core of our proof, do not require the existence of higher moments
under self-normalization (see e.g. de la Pe\~{n}a, Lai and Shao, 2009).
Assumption \ref{M} is a shortened version of Assumptions B1 and A2 of Kuan and
Lee (2006), who use a standard linear expansion $n^{1/2}\left(
\widehat{\theta}-\theta_{n}\right) =n^{-1/2}\sum_{t=1}^{n}\psi_{t}%
+o_{\mathbb{P}}\left( 1\right) $ to show that (\ref{M1}) satisfies a
Functional Central Limit Theorem (FCLT) as required in Assumption
\ref{M}-(i). This FCLT
is mostly used under $\mathcal{H}_{0}$ to show that (\ref{Alpha}) holds. Note
that the full rank FCLT condition in Assumption \ref{M}-(i) can be quite
restrictive. For a correctly specified $AR(1)$ model $X_{t}-\theta
X_{t-1}=u_{t}$, this rules out for instance $\theta=0$. However such an issue
can be addressed when an additional test statistic $T$ with proper critical
values $t\left( \alpha\right) $ is available, such as the ones proposed by Francq
et al. (2005) or by Delgado and Velasco (2010, Theorem 3), which give a
general approach to obtaining test statistics that are not affected by parameter
estimation. Indeed, because $\mathbb{P}\left( \widehat{p}=1\right)
\rightarrow1$ under $\mathcal{H}_{0}$ as shown in Theorem \ref{Level}, setting
$z\left( \alpha\right) =\widehat{S}_{1}+T-t\left( \alpha\right) $ gives
$\lim_{n\rightarrow\infty}\mathbb{P}\left( \widehat{S}_{\widehat{p}}\geq
z\left( \alpha\right) \right) =\lim_{n\rightarrow\infty}\mathbb{P}\left(
\widehat{S}_{1}\geq z\left( \alpha\right) \right) =\lim_{n\rightarrow
\infty}\mathbb{P}\left( T\geq t\left( \alpha\right) \right) =\alpha$. When
$\left\{ u_{t}\right\} $ is directly observed, Assumption \ref{M} amounts to
Assumption 1 in Lobato (2001) and the FCLT for $n^{-1/2}\sum_{t=1}%
^{[ns]}\left( u_{t}u_{t-1}-\mathbb{E}\left[ u_{t}u_{t-1}\right] \right) $
is a consequence of Assumption \ref{Reg}-(i) and Wu (2007). Assumption \ref{M} is easily
checked for simple linear models and OLS estimation where $u_{t,n}^{(2)}$ and
$\mathfrak{r}_{t,n}$ can be set to $0$, see also Francq et al. (2005), Hong
(1996) and Shao (2011b).
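To illustrate why Assumption \ref{M} is immediate for simple linear models, the toy sketch below, which is our own example, verifies that for an AR(1) residual function $u_{t}\left( \theta\right) =X_{t}-\theta X_{t-1}$ the expansion holds exactly with $u_{t,n}^{(1)}=-X_{t-1}$, $u_{t,n}^{(2)}=0$ and a zero remainder $\mathfrak{r}_{t,n}$.

```python
import random

random.seed(2)
theta0 = 0.4
X = [0.0]
for _ in range(200):  # simulate X_t = theta0 * X_{t-1} + e_t
    X.append(theta0 * X[-1] + random.gauss(0.0, 1.0))

def residual(theta, t):
    """Residual function u_t(theta) = X_t - theta * X_{t-1}."""
    return X[t] - theta * X[t - 1]

theta = theta0 + 0.01  # a theta within an O(n^{-1/2}) neighborhood of theta0

# The expansion u_t(theta) = u_t(theta0) + (theta - theta0) * (-X_{t-1})
# is exact for this linear model: the remainder is numerically zero.
max_remainder = max(
    abs(residual(theta, t)
        - (residual(theta0, t) + (theta - theta0) * (-X[t - 1])))
    for t in range(1, len(X)))
```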
\setcounter{equation}{0} \setcounter{footnote}{0}
\section*{References}
\textsc{Anderson T. W.} (1993). Goodness of Fit Tests for Spectral
Distributions. \textit{The Annals of Statistics} \textbf{21}, 830--847.
\textsc{Anderson T. W.} and \textsc{D.A. Darling} (1952). Asymptotic Theory of
Certain \textquotedblleft Goodness of Fit\textquotedblright\ Criteria Based on
Stochastic Processes. \textit{Annals of Mathematical Statistics} \textbf{23}, 193--212.
\textsc{Box, G.} and \textsc{D. Pierce} (1970). Distribution of Residual
Autocorrelations in Autoregressive-Integrated Moving Average Time Series
Models. \textit{Journal of American Statistical Association} \textbf{65}, 1509--1526.
\textsc{Breidt, F.J.}, \textsc{R.A. Davis} and \textsc{A. Trinidade} (1999).
Least Absolute Deviation Estimation for All-Pass Time Series Models. Preprint,
Colorado State University.
\textsc{Brockwell, P.J.} and \textsc{R.A. Davis} (2006). \textit{Time Series:
Theory and Methods}. Second Edition, Springer.
\textsc{Campbell, J.Y., A.W. Lo} and \textsc{A.C. MacKinlay} (1997).
\textit{The Econometrics of Financial Markets}. Second Edition, Princeton
University Press.
\textsc{Chen, S.X.} and \textsc{J. Gao} (2007). An adaptive Empirical
Likelihood Test for Parametric Time Series Regression Models. \textit{Journal
of Econometrics }\textbf{141}, 950--972.
\textsc{Darling, D.} and \textsc{P. Erd\"{o}s} (1956). A Limit Theorem for the
Maximum of Normalized Sums of Independent Random Variables. \textit{Duke
Mathematical Journal }\textbf{23}, 143--155.
\textsc{Delgado, M.A.} and \textsc{C. Velasco} (2010). Distribution-Free Tests
for Time Series Models Specification. \textit{Journal of Econometrics
}\textbf{155}, 128--137.
\textsc{Delgado, M.A., J. Hidalgo} and \textsc{C. Velasco} (2005).
Distribution Free Goodness-of-Fit Tests for Linear Processes. \textit{The
Annals of Statistics }\textbf{33}, 2568--2609.
\textsc{Deo, R.S.} (2000). Spectral Tests of the Martingale Hypothesis under
Conditional Heteroscedasticity. \textit{Journal of Econometrics} \textbf{99}, 291--315.
\textsc{Donoho, D.} and \textsc{J. Jin} (2004). Higher Criticism for Detecting
Sparse Heterogeneous Mixtures. \textit{The Annals of Statistics} \textbf{32}, 962--994.
\textsc{Durlauf, S.N.} (1991). Spectral Based Testing of the Martingale
Hypothesis. \textit{Journal of Econometrics} \textbf{50}, 355--376.
\textsc{Escanciano, J.C.} and \textsc{I.N. Lobato} (2009). An Automatic
Portmanteau Test for Serial Correlation. \textit{Journal of Econometrics
}\textbf{151}, 140--149.
\textsc{Fan, J.} (1996). Test of Significance Based on Wavelet Thresholding
and Neyman's Truncation. \textit{Journal of the American Statistical
Association} \textbf{91}, 674--688.
\textsc{Fan, J.} and \textsc{Q. Yao} (2005). \textit{Nonlinear Time Series.
Nonparametric and Parametric Methods}. Springer.
\textsc{Francq, C.}, \textsc{R. Roy} and \textsc{J.M. Zakoian} (2005).
Diagnostic Checking in ARMA Models With Uncorrelated Errors. \textit{Journal
of the American Statistical Association }\textbf{100}, 532--544.
\textsc{Golubev, G.K.}, \textsc{M. Nussbaum} and \textsc{H.H. Zhou} (2010).
Asymptotic Equivalence of Spectral Density Estimation and Gaussian White
Noise. \textit{The Annals of Statistics }\textbf{38}, 181--214.
\textsc{Grenander, U.} and \textsc{M. Rosenblatt} (1952). On Spectral Analysis
of Stationary Time-series. \textit{Proceedings of the National Academy of
Sciences U.S.A. }\textbf{38}, 519--521.
\textsc{Guay, A.} and \textsc{E. Guerre} (2006). A Data-Driven Nonparametric
Specification Test for Dynamic Regression Models. \textit{Econometric Theory}
\textbf{22}, 543--586.
\textsc{Guerre, E.} and \textsc{P. Lavergne} (2002). Optimal Minimax Rates for
Nonparametric Specification Testing in Regression Models. \textit{Econometric
Theory} \textbf{18}, 1139--1171.
\textsc{Guerre, E.} and \textsc{P. Lavergne} (2005). Rate-Optimal Data-Driven
Specification Testing for Regression Models. \textit{The Annals of Statistics}
\textbf{33}, 840--870.
\textsc{Hart, J.D.} (1997). \textit{Nonparametric Smoothing and Lack of Fit
Tests}. Springer-Verlarg, New-York.
\textsc{Hong, Y.} (1996). Consistent Testing for Serial Correlation of Unknown
Form. \textit{Econometrica} \textbf{64}, 837--864.
\textsc{Hong, Y.} and \textsc{Y.J. Lee} (2005). Generalized Spectral Tests for
Conditional Mean Models in Time Series with Conditional Heteroscedasticity of
Unknown Form. \textit{Review of Economic Studies} \textbf{72}, 499--541.
\textsc{Horowitz, J.L.} and \textsc{V.G. Spokoiny} (2001). An Adaptive,
Rate-Optimal Test of a Parametric Mean-Regression Model Against a
Nonparametric Alternative. \textit{Econometrica} \textbf{69}, 599--631.
\textsc{Ingster, Y.I.} (1993). Asymptotically Minimax Hypothesis Testing for
Nonparametric Alternatives. I, II, III. \textit{Mathematical Methods of
Statistics} \textbf{2}, 85--114, 171--189 and 249--268.
\textsc{Ingster, Y.I.} (1997). Some Problems of Hypothesis Testing Leading to
Infinitely Divisible Distribution. \textit{Mathematical Methods of Statistics
}\textbf{6}, 647--669.
\textsc{Kiefer, N.M., T.J. Vogelsang,} and \textsc{H. Bunzel} (2000). Simple
Robust Testing of Regression Hypotheses. \textit{Econometrica} \textbf{68}, 695--714.
\textsc{Kuan, C.M.} and \textsc{W.M. Lee} (2006). Robust \textit{M} Tests
Without Consistent Estimation of the Asymptotic Covariance Matrix.
\textit{Journal of the American Statistical Association }\textbf{101}, 1264--1275.
\textsc{Lee, J.,} and \textsc{Y. Hong} (2001). Testing for Serial Correlation
of Unknown Form Using Wavelet Methods. \textit{Econometric Theory }%
\textbf{17}, 386--423.
\textsc{Lobato, I.N.} (2001). Testing That a Dependent Process Is
Uncorrelated. \textit{Journal of the American Statistical Association
}\textbf{96}, 1066--1076.
\textsc{Lobato, I.N.}, \textsc{J.L. Horowitz, J.C. Nankervis} and \textsc{N.E.
Savin} (2006). Bootstrapping the Box-Pierce $Q$ Test: A Robust Test of
Uncorrelatedness. \textit{Journal of Econometrics }\textbf{133}, 841--862.
\textsc{Lobato, I.N.}, \textsc{J.C. Nankervis} and \textsc{N.E. Savin} (2002).
Testing for Zero Autocorrelation in the Presence of Statistical Dependence.
\textit{Econometric Theory }\textbf{18}, 730--743.
\textsc{Newey, W.K.} and \textsc{K. West} (1994). Automatic Lag Selection in
Covariance Matrix Estimation. \textit{Review of Economic Studies}
\textbf{61}, 631--653.
\textsc{de la Pe\~{n}a, V.H.}, \textsc{T.L. Lai} and \textsc{Q.M. Shao}
(2009). \textit{Self-Normalized Processes: Limit Theory and Statistical
Applications. }Springer.
\textsc{Paparoditis, E.} (2000). Spectral Density Based Goodness-of-Fit Tests
for Time Series Models. \textit{Scandinavian Journal of Statistics}
\textbf{27}, 143--176.
\textsc{Pollard, D.} (2002). \textit{A User's Guide to Measure Theoretic
Probability}. Cambridge University Press.
\textsc{Romano, J.P.} and \textsc{L.A. Thombs} (1996). Inference For
Autocorrelations Under Weak Assumptions. \textit{Journal of the American
Statistical Association} \textbf{91}, 590--600.
\textsc{Shao, X.} (2011a). A Bootstrap-Assisted Spectral Test of White Noise
under Unknown Dependence. \textit{Journal of Econometrics} \textbf{162}, 213--224.
\textsc{Shao, X.} (2011b). Testing for White Noise under Unknown Dependence
and its Applications to Goodness-of-Fit for Time Series Models.
\textit{Econometric Theory }\textbf{27}, 312--343.
\textsc{Spokoiny, V.G.} (1996). Adaptive Hypothesis Testing Using Wavelets.
\textit{The Annals of Statistics} \textbf{24}, 2477--2498.
\textsc{Sun, Y., P.C.B. Phillips} and \textsc{S. Jin} (2008). Optimal
Bandwidth Selection in Heteroskedasticity-Autocorrelation Robust Tests.
\textit{Econometrica} \textbf{76}, 175--194.
\textsc{Wu, W.B.} (2005). Nonlinear system theory: Another look at
dependence. \textit{Proceedings of the National Academy of Sciences of the
United States of America }\textbf{102}, 14150--14154.
\textsc{Wu, W.B.} (2007). Strong Invariance Principles for Dependent Random
Variables. \textit{The Annals of Probability }\textbf{35}, 2294--2320.
\textsc{Xiao, H.} and \textsc{W.B. Wu} (2011). Asymptotic Inference of
Autocovariances of Stationary Processes. University of Chicago, arXiv:1105.3423v1.
\begin{figure}[ht]
\centering
\includegraphics{GGLtables1.pdf}
\label{fig:GGLtables1}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics{GGLtables2.pdf}
\label{fig:GGLtables2}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics{GGLtables3.pdf}
\label{fig:GGLtables3}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics{GGLtables4.pdf}
\label{fig:GGLtables4}
\end{figure}
\portrait\parindent=0.3in
\thispagestyle{empty}
\vspace*{1cm}
\begin{center}
Robust Adaptive Rate-Optimal Testing for the White Noise Hypothesis:
Supplementary Material
\vspace*{0.5 cm}
Alain Guay\footnote{CIRP\'EE and CIREQ, Universit\'e du Qu\'ebec \`a
Montr\'eal, e-mail: \texttt{guay.alain@uqam.ca}} \\[0pt]
\medskip
Emmanuel Guerre\footnote{School of Economics and Finance, Queen Mary,
University of London, e-mail: \texttt{e.guerre@qmul.ac.uk }}\\[0pt]
\medskip
\v{S}t\v{e}p\'{a}na Lazarov\'{a}\footnote{School of Economics and Finance,
Queen Mary, University of London, e-mail: \texttt{s.lazarova@qmul.ac.uk }%
}\\[0pt]\vspace{0.5cm}
This version: 4th October 2011
\vspace{1cm}
\end{center}
\renewcommand{\thefootnote}{\arabic{footnote}} \setcounter{page}{0} \newpage
\section*{Appendix B: proofs of main results}
\renewcommand{\baselinestretch}{1.5} \setcounter{lem}{0} \setcounter{prop}{0}
\setcounter{thm}{0} \renewcommand{\thelem}{B.\arabic{lem}}
\renewcommand{\theprop}{B.\arabic{prop}}
\renewcommand{\thethm}{B.\arabic{thm}} \setcounter{equation}{0}
\setcounter{subsection}{0} \renewcommand{\theequation}{B.\arabic{equation}} \renewcommand{\thesubsection}{B.\arabic{subsection}}
This section contains the proofs of the results of Section 3. $C$ and
$C^{\prime}$ are constants that may vary from line to line but only depend on
the constants of the assumptions. Notation $\left[ \cdot\right] $ is used
for the integer part of a real number and $a\vee b=\max\left( a,b\right) $,
$a\wedge b=\min\left( a,b\right) $. Let $\overline{u}_{t}^{t-j}=\overline
{u}_{t,n}^{t-j}$ be a copy of $u_{t}=F_{n}\left( \ldots,e_{t-1},e_{t}\right)
$ obtained by changing $e_{t-j}$, $e_{t-j-1}$, $\ldots$ into $e_{t-j}^{\prime
}$, $e_{t-j-1}^{\prime}$, $\ldots$. Then the condition $\left\Vert u_{t}%
-u_{t}^{t-j}\right\Vert _{a}\leq\delta_{a}\left( j\right) $ ensures that%
\begin{equation}
\left\Vert u_{t}-\overline{u}_{t}^{t-j}\right\Vert _{a}\leq\Theta_{a}\left(
j\right) \text{ where }\Theta_{a}\left( j\right) =\sum_{i=j}^{\infty}%
\delta_{a}\left( i\right) .\label{Thet}%
\end{equation}
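As a numerical sanity check of (\ref{Thet}), the sketch below, an illustration of our own using a causal AR(1) with $\phi=0.5$ and $\delta_{2}\left( i\right) =\sqrt{2}\,\phi^{i}$, compares the exact $L^{2}$ distance between $u_{t}$ and its coupled copy $\overline{u}_{t}^{t-j}$ with the dominating tail sum $\Theta_{2}\left( j\right) $.

```python
import math

phi = 0.5

def lhs(j):
    """Exact L2 distance || u_t - ubar_t^{t-j} ||_2 for the AR(1): all
    innovations from lag j onward are replaced, so the squared distance is
    2 * sum_{i >= j} phi^{2i} (truncated at 200 terms, negligible tail)."""
    return math.sqrt(2.0 * sum(phi ** (2 * i) for i in range(j, 200)))

def theta2(j):
    """Theta_2(j) = sum_{i >= j} delta_2(i) = sqrt(2) * phi^j / (1 - phi)."""
    return math.sqrt(2.0) * phi ** j / (1.0 - phi)

bound_holds = all(lhs(j) <= theta2(j) for j in range(20))
```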
We first state some intermediary results that are used in the proofs of our
main results. These intermediary results are proven in Appendix C. Lemma
\ref{Ordersums} gives the order of standardization terms $E(p)$, $E_{\Delta
}(p)$ and $V_{\Delta}(p)$. Propositions \ref{Covesti} and \ref{Esti} deal with
the impact of the estimation of $\theta$. Proposition \ref{Selection} is used
to study the asymptotic null behavior of the test and to show that
$\mathbb{P}\left( \widehat{p}=1\right) \rightarrow1$ in Theorem \ref{Level}.
Proposition \ref{Selection} deals with directly observed or estimated
residuals thanks to Propositions \ref{Covesti} and \ref{Esti}. Propositions
\ref{MeanH1} and \ref{VarH1} are the key tools for our consistency result,
Theorem \ref{Sparse}. They deal with directly observed variables but are
combined with Propositions \ref{Covesti} and \ref{Esti} to handle
estimation errors in the proof of Theorem \ref{Sparse}.
\begin{lem}
\label{Ordersums}Suppose Assumption \ref{Kernel} holds and that $\overline
{p}_{n}/n\leq1/2$. (i) There exists a constant $C>1$ such that, for $q=1,2$
and for any $1\leq p\leq\overline{p}_{n}$, $\frac{p}{C}\leq\sum_{j=1}%
^{n-1}\left( 1-\frac{j}{n}\right) ^{q}K^{2q}\left( \frac{j}{p}\right) \leq
Cp$, $\frac{p}{C}\leq\sum_{j=1}^{n-1}K^{2q}\left( \frac{j}{p}\right) \leq
Cp$, $V_{\Delta}^{2}(p)\leq Cp$, and $E_{\Delta}(p)\leq\sum_{j=1}^{n-1}\left(
K^{2}\left( \frac{j}{p}\right) -K^{2}\left( j\right) \right) \leq
Cp^{1/2}V_{\Delta}(p)$; (ii) Under Assumption \ref{P}, for all $n$ and all
$p\in\left[ 1,\overline{p}_{n}\right] $, $V_{\Delta}(p)\geq C(p-1)^{1/2}$
and $E_{\Delta}(p)\geq0$.
\end{lem}
\begin{lem}
\label{L} Suppose Assumptions \ref{M} and \ref{Reg} hold. Then the statistics
and associated critical values $\left( \widetilde{S}_{1},\widetilde{z}%
_{L}\left( \alpha\right) \right) $, $\left( \widetilde{S}_{1}^{\ast
},\widetilde{z}_{L}^{\ast}\left( \alpha\right) \right) $, $\left(
\widehat{S}_{1},\widehat{z}_{L}\left( \alpha\right) \right) $ and $\left(
\widehat{S}_{1}^{\ast},\widehat{z}_{L}^{\ast}\left( \alpha\right) \right) $
satisfy (\ref{Alpha}), that is, give an asymptotic $\alpha$-level test.
Moreover, under $\mathcal{H}_{1}$, $\widetilde{z}_{L}\left( \alpha\right) $,
$\widetilde{z}_{L}^{\ast}\left( \alpha\right) $, $\widehat{z}_{L}\left(
\alpha\right) $ and $\widehat{z}_{L}^{\ast}\left( \alpha\right) $ are all
$O_{\mathbb{P}}\left( 1\right) $.
\end{lem}
\begin{lem}
Under Assumption \ref{Reg}, $\sup_{0\leq j\leq n-1}\operatorname*{Var}\left(
\widetilde{R}_{j}\right) \leq\frac{C}{n}$. \label{Varcov}
\end{lem}
\begin{prop}
\label{Covesti}Suppose Assumptions \ref{M}, \ref{P} and \ref{Reg} hold. Then
$\max_{j\in\left[ 0,\overline{p}_{n}\right] }\left\vert \widehat{R}%
_{j}-\widetilde{R}_{j}\right\vert =O_{\mathbb{P}}\left( n^{-1/2}\right) $,
$\max_{p\in\left[ 0,n-1\right] }n\sum_{j=1}^{p}\left( \widehat{R}%
_{j}-\widetilde{R}_{j}\right) ^{2}=O_{\mathbb{P}}\left( 1\right) $, and%
\begin{align*}
\max_{j\in\left[ 0,n-1\right] }\left\vert \widetilde{R}_{j}-\left(
1-\frac{j}{n}\right) R_{j,n}\right\vert & =O_{\mathbb{P}}\left( \left(
\frac{\log n}{n}\right) ^{1/2}\right) ,\\
\max_{j\in\left[ 0,\overline{p}_{n}\right] }\left\vert \widehat{R}%
_{j}-R_{j,n}\right\vert & =O_{\mathbb{P}}\left( \left( \frac{\log n}%
{n}\right) ^{1/2}\right) ,\\
\max_{j\in\left[ 0,n-1\right] }\left( 1-\frac{j}{n}\right) \left\vert
\widetilde{\tau}_{j}^{2}-\tau_{j,n}^{2}\right\vert & =O_{\mathbb{P}}\left(
\left( \frac{\log n}{n}\right) ^{1/2}\right) ,\\
\max_{j\in\left[ 0,\overline{p}_{n}\right] }\left\vert \widehat{\tau}%
_{j}^{2}-\tau_{j,n}^{2}\right\vert & =O_{\mathbb{P}}\left( \left(
\frac{\log n}{n}\right) ^{1/2}\right) .
\end{align*}
\end{prop}
\begin{prop}
\label{Esti}Let Assumptions \ref{Kernel}, \ref{M}, \ref{P} and \ref{Reg} hold.
Let $\widetilde{S}_{p}$ be as in (\ref{TildeR}). Then
\[
\max_{p\in\left[ 2,\overline{p}_{n}\right] }\frac{|\left( \widehat{S}%
_{p}-\widehat{S}_{1}\right) -\left( \widetilde{S}_{p}-\widetilde{S}%
_{1}\right) |}{1+\left( n\sum_{j=1}^{p}R_{j,n}^{2}\right) ^{1/2}%
}=O_{\mathbb{P}}\left( 1\right)
\]
and for any $p_{n}=O(n^{1/2})$, $\widehat{S}_{p_{n}}-\widetilde{S}_{p_{n}%
}=O_{\mathbb{P}}\left( 1+\left( n\sum_{j=1}^{p_{n}}R_{j,n}^{2}\right)
^{1/2}\right) .$
\end{prop}
\begin{prop}
\label{Selection}Suppose Assumptions \ref{Kernel}, \ref{M}, \ref{P} and
\ref{Reg} hold and that $\mathcal{H}_{0}$ is true. Then (\ref{Gam}) ensures
that
\[
\lim_{n\rightarrow\infty}\mathbb{P}\left( \max_{p\in\left[ 2,\overline
{p}_{n}\right] }\frac{(\widehat{S}_{p}-\widehat{S}_{1})/\widehat{R}_{0}%
^{2}-E_{\Delta}(p)}{V_{\Delta}(p)}\geq\gamma_{n}\right) =0.
\]
\end{prop}
\begin{prop}
Under Assumptions \ref{Kernel}, \ref{P} and \ref{Reg}, there are some
$C,C^{\prime}>0$ such that for $n$ large enough and uniformly in $p\in\left[
1,\overline{p}_{n}\right] $,
\begin{align*}
\mathbb{E}\left[ \widetilde{S}_{p}\right] -R_{0,n}^{2}E\left( p\right) &
\geq Cn\sum_{j=1}^{p/2}R_{j,n}^{2}-C^{\prime}R_{0,n}^{2},\\
\mathbb{E}\left[ \sum_{j=1}^{n-1}K\left( \frac{j}{p}\right) \frac
{\widetilde{R}_{j}^{2}}{\tau_{j,n}^{2}}\right] -E\left( p\right) & \geq
Cn\sum_{j=1}^{p/2}\left( \frac{R_{j,n}}{R_{0,n}}\right) ^{2}-C^{\prime}.
\end{align*}
\label{MeanH1}
\end{prop}
\begin{prop}
Under Assumptions \ref{Kernel}, \ref{P} and \ref{Reg}, there is a constant
$C>0$ such that for $n$ large enough and uniformly in $p\in\left[
1,\overline{p}_{n}\right] $,%
\begin{align*}
\operatorname*{Var}\left( \widetilde{S}_{p}\right) & \leq C\left(
n\sum_{j=1}^{p}R_{j,n}^{2}+p\right) ,\\
\operatorname*{Var}\left( \sum_{j=1}^{n-1}K\left( \frac{j}{p}\right)
\frac{\widetilde{R}_{j}^{2}}{\tau_{j,n}^{2}}\right) & \leq C\left(
n\sum_{j=1}^{p}\frac{R_{j,n}^{2}}{R_{0,n}^{2}}+p\right) .
\end{align*}
\label{VarH1}
\end{prop}
\subsection{Proof of Theorem \ref{Level}}
(\ref{Hatpnot}), (\ref{Gam}) and Proposition \ref{Selection} give that
$\lim_{n\rightarrow\infty}\mathbb{P}(\widehat{p}\neq1)=0$. Hence
$\widehat{S}_{\widehat{p}}=\widehat{S}_{1}+o_{\mathbb{P}}\left( 1\right) $
and Lemma \ref{L}, which ensures that the retained critical value satisfies
(\ref{Alpha}), yield that the test (\ref{Test}) is asymptotically of level
$\alpha$.\hspace*{\fill}$\Box$
\subsection{Proof of Theorem \ref{Sparse}}
The definition (\ref{Hatp}) of $\widehat{p}$ gives, for any $p\in\left[
1,\overline{p}_{n}\right] $,%
\begin{align*}
\widehat{S}_{\widehat{p}} & =\max_{p\in\left[ 1,\overline{p}_{n}\right]
}\left\{ \widehat{S}_{p}-\widehat{R}_{0}^{2}E\left( p\right) -\gamma
_{n}\widehat{R}_{0}^{2}V_{\Delta}\left( p\right) \right\} +\widehat{R}%
_{0}^{2}E\left( \widehat{p}\right) +\gamma_{n}\widehat{R}_{0}^{2}V_{\Delta
}\left( \widehat{p}\right) \\
& \geq\widehat{S}_{p}-\widehat{R}_{0}^{2}E\left( p\right) -\gamma
_{n}\widehat{R}_{0}^{2}V_{\Delta}\left( p\right) .
\end{align*}
Since the critical value $z\left( \alpha\right) $ in (\ref{Test}) is bounded
under $\mathcal{H}_{1}$ by Lemma \ref{L}, it is sufficient to find a $p_{n}%
\in\left[ 1,\overline{p}_{n}\right] $ such that $\widehat{S}_{p_{n}%
}-\widehat{R}_{0}^{2}E\left( p_{n}\right) -\gamma_{n}\widehat{R}_{0}%
^{2}V_{\Delta}\left( p_{n}\right) \overset{\mathbb{P}}{\rightarrow}+\infty$.
Let $p_{n}=2P_{n}$ where $P_{n}$ is as in (\ref{Sparse3}). Set
\[
\mathcal{R}_{n}^{2}=\sum_{j=1}^{P_{n}}\left( \frac{R_{j,n}}{R_{0,n}}\right)
^{2}.
\]
The detection condition (\ref{Sparse3}) gives%
\begin{equation}
n\mathcal{R}_{n}^{2}\geq n\rho_{n}^{2}\sum_{j=1}^{P_{n}}\mathbb{I}\left\{
\left( \frac{R_{j,n}}{R_{0,n}}\right) ^{2}\geq\rho_{n}^{2}\right\}
=nN_{n}\rho_{n}^{2}\geq\frac{\kappa_{\ast}^{2}\gamma_{n}p_{n}^{1/2}}{2^{1/2}%
}\rightarrow\infty,\label{Sparse4}%
\end{equation}
with a constant $\kappa_{\ast}$ which can be chosen as large as needed. Lemmas
\ref{Ordersums}, \ref{Varcov}, Assumption \ref{P} which ensures $P_{n}%
=o\left( n^{1/2}\right) $ and $\gamma_{n}=o\left( n^{1/4}\right) $, and
Proposition \ref{Covesti} for the case of estimated residuals yield that%
\begin{align*}
& \widehat{S}_{p_{n}}-\widehat{R}_{0}^{2}E\left( p_{n}\right) -\gamma
_{n}\widehat{R}_{0}^{2}V_{\Delta}\left( p_{n}\right) \\
& =\widetilde{S}_{p_{n}}+O_{\mathbb{P}}\left( 1+n^{1/2}R_{0,n}\mathcal{R}%
_{n}\right) -R_{0,n}^{2}E\left( p_{n}\right) -\gamma_{n}R_{0,n}%
^{2}V_{\Delta}\left( p_{n}\right) +O_{\mathbb{P}}\left( \frac{p_{n}%
+\gamma_{n}p_{n}^{1/2}}{n^{1/2}}\right) \\
& \geq\widetilde{S}_{p_{n}}+O_{\mathbb{P}}\left( 1+n^{1/2}R_{0,n}%
\mathcal{R}_{n}\right) -R_{0,n}^{2}E\left( p_{n}\right) -C\gamma_{n}%
R_{0,n}^{2}p_{n}^{1/2}.
\end{align*}
Now the Chebyshev inequality and Propositions \ref{MeanH1} and \ref{VarH1} give%
\[
\widetilde{S}_{p_{n}}=\mathbb{E}\left[ \widetilde{S}_{p_{n}}\right]
+O_{\mathbb{P}}\left( \operatorname*{Var}\nolimits^{1/2}\left(
\widetilde{S}_{p_{n}}\right) \right) \geq R_{0,n}^{2}E\left( p_{n}\right)
+C^{\prime}R_{0,n}^{2}n\mathcal{R}_{n}^{2}+O_{\mathbb{P}}\left( p_{n}%
^{1/2}+n^{1/2}\mathcal{R}_{n}\right) .
\]
Hence substituting gives, since $n\mathcal{R}_{n}^{2}\rightarrow\infty$ by
(\ref{Sparse4}),%
\[
\widehat{S}_{p_{n}}-\widehat{R}_{0}^{2}E\left( p_{n}\right) -\gamma
_{n}\widehat{R}_{0}^{2}V_{\Delta}\left( p_{n}\right) \geq C^{\prime}%
R_{0,n}^{2}n\mathcal{R}_{n}^{2}\left( 1+o_{\mathbb{P}}\left( 1\right)
\right) -C\gamma_{n}R_{0,n}^{2}p_{n}^{1/2}\left( 1+o_{\mathbb{P}}\left(
1\right) \right) .
\]
Since Assumption \ref{Reg} ensures that $R_{0,n}^{2}$ stays bounded away from
$0$, (\ref{Sparse4}) gives that $\widehat{S}_{p_{n}}-\widehat{R}_{0}%
^{2}E\left( p_{n}\right) -\gamma_{n}\widehat{R}_{0}^{2}V_{\Delta}\left(
p_{n}\right) \overset{\mathbb{P}}{\rightarrow}+\infty$ as required, provided
$\kappa_{\ast}^{2}>C^{\prime}/C$. $\hfill\square$
\subsection{Proof of Theorem \ref{Extension}}
Consider first the null hypothesis. As seen from the proof of Theorem
\ref{Level}, it suffices to show that%
\[
\lim_{n\rightarrow\infty}\mathbb{P}\left( \max_{p\in\left[ 2,\overline
{p}_{n}\right] }\frac{(\widehat{S}_{p}^{\ast}-\widehat{S}_{1}^{\ast
})-E_{\Delta}(p)}{V_{\Delta}(p)}\geq\gamma_{n}\right) =0,
\]
a statement which implies that $\widehat{p}^{\ast}=1+o_{\mathbb{P}}\left(
1\right) $ so that Lemma \ref{L} implies that the conclusion of Theorem
\ref{Level} holds for the test based upon $\widehat{S}_{\widehat{p}^{\ast}%
}^{\ast}$. Since $\left\vert R_{j,n}\right\vert \leq\left\Vert u_{t,n}%
\right\Vert _{2}\left\Vert u_{t,n}-\overline{u}_{t,n}^{t-j}\right\Vert _{2}$
and%
\begin{align*}
\mathbb{E}\left[ u_{t,n}^{2}u_{t-j,n}^{2}\right] & =\mathbb{E}\left[
\left( \overline{u}_{t,n}^{t-j}\right) ^{2}u_{t-j,n}^{2}\right]
+\mathbb{E}\left[ \left( u_{t,n}^{2}-\left( \overline{u}_{t,n}%
^{t-j}\right) ^{2}\right) u_{t-j,n}^{2}\right] \\
& =R_{0,n}^{2}+\mathbb{E}\left[ \left( u_{t,n}-\overline{u}_{t,n}%
^{t-j}\right) \left( u_{t,n}+\overline{u}_{t,n}^{t-j}\right) u_{t-j,n}%
^{2}\right] ,
\end{align*}
(\ref{Thet})\ shows
\begin{equation}
\left\vert \tau_{j,n}^{2}-R_{0,n}^{2}\right\vert \leq C\left\Vert
u_{t,n}\right\Vert _{8}^{3}\Theta_{2}\left( j\right) \leq Cj^{-6}%
\label{Tau2sig}%
\end{equation}
for all $j\geq1$. Now Lemmas \ref{Ordersums} and \ref{Varcov}, Assumptions
\ref{Kernel}, \ref{P} and \ref{Reg}, and Proposition \ref{Covesti} give%
\begin{align*}
& \max_{p\in\left[ 2,\overline{p}_{n}\right] }\frac{\left\vert
(\widehat{S}_{p}^{\ast}-\widehat{S}_{1}^{\ast})-(\widehat{S}_{p}%
-\widehat{S}_{1})/\widehat{R}_{0}^{2}\right\vert }{V_{\Delta}(p)}\leq
C\max_{p\in\left[ 1,\overline{p}_{n}\right] }\frac{\left\vert \widehat{S}%
_{p}^{\ast}-\widehat{S}_{p}/\widehat{R}_{0}^{2}\right\vert }{p^{1/2}}\\
& \leq C\max_{p\in\left[ 1,\overline{p}_{n}\right] }\frac{n}{p^{1/2}}%
\sum_{j=1}^{p}\left( \frac{\widehat{R}_{j}}{\widehat{R}_{0}}\right)
^{2}\left\{ \left\vert \frac{\widehat{\tau}_{j}^{2}}{\widehat{R}_{0}^{2}%
}-\frac{\tau_{j,n}^{2}}{R_{0,n}^{2}}\right\vert +\left\vert \frac{\tau
_{j,n}^{2}}{R_{0,n}^{2}}-1\right\vert \right\} \\
& \leq Cn\overline{p}_{n}^{1/2}O_{\mathbb{P}}\left( \left( \frac{\log n}%
{n}\right) ^{3/2}\right) +O_{\mathbb{P}}\left( 1\right) n\sum
_{j=1}^{\overline{p}_{n}}\frac{\widehat{R}_{j}^{2}}{j^{6}}\\
& =o_{\mathbb{P}}\left( 1\right) +O_{\mathbb{P}}\left( \sum_{j=1}%
^{\overline{p}_{n}}\frac{\operatorname*{Var}\left( n^{1/2}\widehat{R}%
_{j}\right) }{j^{6}}\right) =O_{\mathbb{P}}\left( 1\right) .
\end{align*}
Hence (\ref{Gam}) and Proposition \ref{Selection} give%
\begin{align*}
& \mathbb{P}\left( \max_{p\in\left[ 2,\overline{p}_{n}\right] }%
\frac{(\widehat{S}_{p}^{\ast}-\widehat{S}_{1}^{\ast})-E_{\Delta}(p)}%
{V_{\Delta}(p)}\geq\gamma_{n}\right) \\
& \text{ }=\mathbb{P}\left( \max_{p\in\left[ 2,\overline{p}_{n}\right]
}\frac{(\widehat{S}_{p}-\widehat{S}_{1})/\widehat{R}_{0}^{2}-E_{\Delta}%
(p)}{V_{\Delta}(p)}+O_{\mathbb{P}}\left( 1\right) \geq\gamma_{n}\right) \\
& \text{ }\leq\mathbb{P}\left( \max_{p\in\left[ 2,\overline{p}_{n}\right]
}\frac{(\widehat{S}_{p}-\widehat{S}_{1})/\widehat{R}_{0}^{2}-E_{\Delta}%
(p)}{V_{\Delta}(p)}\geq\left( 1+\frac{\epsilon}{2}\right) \left( 2\ln\ln
n\right) ^{1/2}\right) +o\left( 1\right) \\
& \text{ }=o\left( 1\right) ,
\end{align*}
which gives the desired result under $\mathcal{H}_{0}$.
Consider now Theorem \ref{Sparse} and $\mathcal{H}_{1}$. Define%
\[
\widehat{S}_{p}^{\bigstar}=n\sum_{j=1}^{p}K^{2}\left( \frac{j}{p}\right)
\frac{\widehat{R}_{j}^{2}}{\tau_{j,n}^{2}},\quad\widetilde{S}_{p}^{\bigstar
}=n\sum_{j=1}^{p}K^{2}\left( \frac{j}{p}\right) \frac{\widetilde{R}_{j}^{2}%
}{\tau_{j,n}^{2}}.
\]
Let $P_{n}$ be as in (\ref{Sparse3}) and define $p_{n}=2P_{n}$ and
$\mathcal{R}_{n}$ as in the proof of Theorem \ref{Sparse}. Then Assumptions
\ref{Kernel} and \ref{Reg}, and Propositions \ref{Covesti} and \ref{Esti} give
\begin{align*}
\left\vert \widehat{S}_{p_{n}}^{\ast}-\widehat{S}_{p_{n}}^{\bigstar
}\right\vert & \leq Cn\sum_{j=1}^{p_{n}}\frac{\widehat{R}_{j}^{2}}{\tau
_{j,n}^{2}}\left\vert \frac{\tau_{j,n}^{2}}{\widehat{\tau}_{j}^{2}%
}-1\right\vert =O_{\mathbb{P}}\left( \left( \frac{\log n}{n}\right)
^{1/2}\right) \widehat{S}_{p_{n}}^{\bigstar},\\
\left\vert \widehat{S}_{p_{n}}^{\bigstar}-\widetilde{S}_{p_{n}}^{\bigstar
}\right\vert & \leq C\left\vert \widehat{S}_{p_{n}}-\widetilde{S}_{p_{n}%
}\right\vert =O_{\mathbb{P}}\left( n^{1/2}\mathcal{R}_{n}\right) .
\end{align*}
Hence, for directly observed or estimated residuals,%
\[
\widehat{S}_{p_{n}}^{\ast}=\left( 1+O_{\mathbb{P}}\left( \left( \frac{\log
n}{n}\right) ^{1/2}\right) \right) \widetilde{S}_{p_{n}}^{\bigstar
}+O_{\mathbb{P}}\left( n^{1/2}\mathcal{R}_{n}\right) .
\]
The proof now follows the steps of the proof of Theorem \ref{Sparse}, based on
the order above, Propositions \ref{MeanH1} and \ref{VarH1}, and Lemma
\ref{Varcov}, which gives $\mathbb{E}\left[ \widetilde{S}_{p_{n}}^{\bigstar
}\right] \leq C\left( p_{n}+n\mathcal{R}_{n}^{2}\right) $. Hence, since
$p_{n}=o\left( \left( n/\log n\right) ^{1/2}\right) $,%
\begin{align*}
\widehat{S}_{\widehat{p}^{\ast}}^{\ast} & =\max_{p\in\left[
1,\overline{p}_{n}\right] }\left\{ \widehat{S}_{p}^{\ast}-E\left( p\right)
-\gamma_{n}V_{\Delta}\left( p\right) \right\} +E\left( \widehat{p}^{\ast
}\right) +\gamma_{n}V_{\Delta}\left( \widehat{p}^{\ast}\right) \\
& \geq\widehat{S}_{p_{n}}^{\ast}-E\left( p_{n}\right) -C\gamma_{n}%
p_{n}^{1/2}\\
& =\left( 1+O_{\mathbb{P}}\left( \left( \frac{\log n}{n}\right)
^{1/2}\right) \right) \left( \mathbb{E}\left[ \widetilde{S}_{p_{n}%
}^{\bigstar}\right] +\operatorname*{Var}\nolimits^{1/2}\left( \widetilde{S}%
_{p_{n}}^{\bigstar}\right) \right) -E\left( p_{n}\right) -C\gamma_{n}%
p_{n}^{1/2}\\
& =C^{\prime}R_{0,n}^{2}n\mathcal{R}_{n}^{2}-C\gamma_{n}R_{0,n}^{2}p_{n}%
^{1/2}+O_{\mathbb{P}}\left( p_{n}^{1/2}+n^{1/2}\mathcal{R}_{n}+\left(
\frac{\log n}{n}\right) ^{1/2}\left( p_{n}+n\mathcal{R}_{n}^{2}\right)
\right) \\
& =C^{\prime}R_{0,n}^{2}n\mathcal{R}_{n}^{2}\left( 1+o_{\mathbb{P}}\left(
1\right) \right) -C\gamma_{n}R_{0,n}^{2}p_{n}^{1/2}\left( 1+o_{\mathbb{P}%
}\left( 1\right) \right) \overset{\mathbb{P}}{\rightarrow}+\infty
\end{align*}
provided $\kappa_{\ast}$ is large enough.$\hfill\square$
\subsection{Proof of Theorem \ref{Sparseopt}}
We first introduce a set of alternatives. Let $f\left( \cdot\right) $ denote
the spectral density of a centered Gaussian stationary process $\left\{
u_{t}\right\} $ with covariance coefficients $R_{j}$. Define a H\"{o}lder
class of processes as
\[
\text{H\"{o}lder}\left( L\right) =\left\{ \left\{ u_{t}\right\} \text{:
}1/3\leq\inf_{\lambda\in\left[ -\pi,\pi\right] }f\left( \lambda\right)
\leq\sup_{\lambda\in\left[ -\pi,\pi\right] }f\left( \lambda\right)
\leq3\text{, }\sup_{\lambda\in\left[ -\pi,\pi\right] }\left\vert f^{\prime
}\left( \lambda\right) \right\vert \leq L,\text{ }\sum_{j=0}^{\infty
}\left\vert R_{j}\right\vert \leq L\right\} .
\]
The next lemma describes a family of alternatives which satisfies Assumption
\ref{Reg} uniformly for prescribed constants and a given $\delta_{a}\left(
j\right) $.
\begin{lem}
\label{Altexp} Consider a centered stationary Gaussian process $\left\{
u_{t}\right\} $ with spectral density function $f\left( \lambda\right)
=\exp\left( g\left( \lambda\right) \right) /\left( 2\pi\right) $, where
\begin{equation}
g\left( \lambda\right) =2\rho\sum_{k=1}^{p}b_{k}\cos\left( k\lambda\right)
,\text{\quad\quad\quad\quad}b_{k}\in\left\{ -1,0,1\right\} .\label{Logsd}%
\end{equation}
If $p\geq1$ and $\rho\geq0$ are such that $p^{2}\rho\leq\epsilon\leq1/6$ then
there is some constant $L>0$, independent of $\epsilon$, $p$, $\rho$ and
$b=\left( b_{k},k\in\left[ 1,p\right] \right) $, such that (i) $\left\vert
R_{0}-1\right\vert \leq6\rho\epsilon$ and $\left\vert R_{j}-\rho
b_{j}\right\vert \leq6\rho\epsilon$ for $j\in\left[ 1,p\right] $; (ii)
$\left\vert R_{j}\right\vert \leq3\rho\left( 2\epsilon\right) ^{\ell}$ for
all $j$ in $\left[ \ell p+1,\left( \ell+1\right) p\right) $ and all
$\ell\geq1$; (iii) $\left\{ u_{t}\right\} $ is in H\"{o}lder$\left(
L\right) $; (iv) Suppose that $\rho_{n}^{2}=\rho_{n}^{2}(p)=2\kappa_{n}%
^{2}\left( 2\log\log n\right) ^{1/2}/\left( np^{1/2}\right) $ for some
$\kappa_{n}>0$ and bounded away from infinity, and that $p\in\left[
1,P_{n}\right] $ with $P_{n}=o\left( \left( n/\left( \kappa_{n}^{2}%
\log\log n\right) ^{1/2}\right) ^{1/14}\right) $. Then the associated
family of processes $\left\{ u_{t}\left( b,p\right) ;b\in\left\{
-1,0,1\right\} ^{p},p\in\left[ 1,P_{n}\right] \right\} $ satisfies
Assumption \ref{Reg} for any $a>0$ and a $\delta_{a}\left( j\right)
=O\left( j^{-7-1/4}\right) $.
\end{lem}
\noindent\textbf{Proof of Lemma \ref{Altexp}. }Rewrite $g$ as $g\left(
\lambda\right) =\rho\sum_{k=-p}^{p}b_{k}\exp\left( ik\lambda\right) $,
$b_{0}=0$, $b_{k}=b_{-k}=b_{\left\vert k\right\vert }$. Since $\exp\left(
x\right) =\sum_{m=0}^{\infty}x^{m}/m!$ uniformly over any compact set and
$\max_{\lambda}\left\vert g\left( \lambda\right) \right\vert \leq2p\rho
\leq2\epsilon\leq1/3$, we have
\begin{equation}
R_{j}=\int_{-\pi}^{\pi}\exp\left( -ij\lambda\right) f\left( \lambda\right)
d\lambda=\frac{1}{2\pi}\sum_{m=0}^{\infty}\frac{1}{m!}\int_{-\pi}^{\pi}%
\exp\left( -ij\lambda\right) \left( g\left( \lambda\right) \right)
^{m}d\lambda.\label{Ralt}%
\end{equation}
For $m>0$, since $\int_{-\pi}^{\pi}\exp\left( -ij\lambda\right)
d\lambda=2\pi$ if $j=0$ and $0$ if $j\neq0$,
\begin{align}
& \frac{1}{2\pi}\int_{-\pi}^{\pi}\exp\left( -ij\lambda\right) \left(
g\left( \lambda\right) \right) ^{m}d\lambda\nonumber\\
& =\frac{\rho^{m}}{2\pi}\sum_{\left( k_{1},...,k_{m}\right) \in K_{m}%
}b_{k_{1}}\times\cdots\times b_{k_{m}}\int_{-\pi}^{\pi}\exp\left( i\left(
k_{1}+\ldots+k_{m}-j\right) \lambda\right) d\lambda\nonumber\\
& =\rho^{m}\sum_{\left( k_{1},...,k_{m}\right) \in K_{m}\left( j\right)
}b_{k_{1}}\times\cdots\times b_{k_{m}},\label{Fourierg}%
\end{align}
where $K_{m}$ is the set of $m$-tuples with entries in $\left[ -p,p\right]
\setminus\left\{ 0\right\} $ so that $\#K_{m}=\left( 2p\right) ^{m}$ and
$K_{m}\left( j\right) $ contains $m$-tuples in $K_{m}$ for which
$k_{1}+\cdots+k_{m}=j$ so that $\#K_{m}(j)\leq\left( 2p\right) ^{m-1}$.
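These counting bounds can be confirmed by brute-force enumeration for small values of $p$ and $m$; the following snippet (illustrative values, not part of the proof) checks them directly.

```python
from itertools import product

# Brute-force check of the counting bounds #K_m = (2p)^m and
# #K_m(j) <= (2p)^{m-1}; p and m are small illustrative values.
p, m = 3, 3
ks = [k for k in range(-p, p + 1) if k != 0]   # tuple entries in [-p,p]\{0}
assert len(ks) ** m == (2 * p) ** m            # #K_m = (2p)^m
for j in range(-m * p, m * p + 1):
    count = sum(1 for t in product(ks, repeat=m) if sum(t) == j)
    assert count <= (2 * p) ** (m - 1)         # #K_m(j) <= (2p)^{m-1}
print("counting bounds verified")
```

The second bound holds because fixing $k_{1},\ldots,k_{m-1}$ determines $k_{m}=j-k_{1}-\cdots-k_{m-1}$, which the enumeration confirms.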
\textit{Proof of (i).} Part (i) is a consequence of (\ref{Ralt}),
(\ref{Fourierg}) and inequality $2p\rho\leq2\epsilon<1$ which together imply
that for $j\in\left[ 0,p\right] $, $\left\vert R_{j}-\mathbb{I}\left(
j=0\right) -\rho b_{j}\right\vert \leq\rho\sum_{m=2}^{\infty}\frac{\left(
2p\rho\right) ^{m-1}}{m!}\leq2p\rho^{2}\sum_{m=0}^{\infty}\frac{1}{m!}%
\leq2e\rho\epsilon<6\rho\epsilon$.
\textit{Proof of (ii).} Let $\ell p+1\leq j<\left( \ell+1\right) p$. Observe
that $K_{m}\left( j\right) $ is an empty set when $m\leq\ell$. Hence it
follows from (\ref{Ralt}) and (\ref{Fourierg}) that $\left\vert R_{j}%
\right\vert \leq\left\vert \frac{1}{2\pi}\sum_{m=\ell+1}^{\infty}\frac{1}%
{m!}\int_{-\pi}^{\pi}\exp\left( -ij\lambda\right) \left( g\left(
\lambda\right) \right) ^{m}d\lambda\right\vert \leq\rho\sum_{m=\ell
+1}^{\infty}\frac{\left( 2p\rho\right) ^{m-1}}{m!}\leq\rho\left(
2\epsilon\right) ^{\ell}e$.
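As a numerical sanity check of bounds (i) and (ii), one can evaluate the $R_{j}$ by quadrature for illustrative values of $p$, $\rho$ and $b$ with $p^{2}\rho=\epsilon\leq1/6$; the snippet below is a sketch under these choices, not part of the proof.

```python
import numpy as np

# Quadrature check of bounds (i)-(ii) for f(l) = exp(g(l))/(2*pi) with
# g(l) = 2*rho*sum_k b_k*cos(k*l); p, rho and b are illustrative values.
p, rho = 3, 0.01
eps = p ** 2 * rho                              # 0.09 <= 1/6
b = np.array([1.0, -1.0, 1.0])                  # b_k in {-1, 0, 1}

lam = np.linspace(-np.pi, np.pi, 20001)
g = 2.0 * rho * sum(b[k] * np.cos((k + 1) * lam) for k in range(p))
f = np.exp(g) / (2.0 * np.pi)

def R(j):
    # R_j = int exp(-i*j*l) f(l) dl; f is even, so only the cosine part remains
    h = np.cos(j * lam) * f
    return np.sum((h[:-1] + h[1:]) / 2) * (lam[1] - lam[0])   # trapezoid rule

assert abs(R(0) - 1.0) <= 6 * rho * eps                       # bound (i), j = 0
assert all(abs(R(j) - rho * b[j - 1]) <= 6 * rho * eps for j in range(1, p + 1))
assert all(abs(R(j)) <= 3 * rho * 2 * eps for j in range(p + 1, 2 * p))  # (ii), l = 1
print("bounds (i)-(ii) verified")
```

The bounds hold with a wide margin here, since the second-order corrections are of size $\rho^{2}$.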
\textit{Proof of (iii). }Observe that $\left\vert g\left( \lambda\right)
\right\vert \leq2\rho p\leq2\epsilon\leq1/3$ and that therefore%
\[
1/3<1-1/3<\exp\left( -1/3\right) \leq f\left( \lambda\right) \leq
\exp\left( 1/3\right) \leq e\leq3\quad\quad\quad\quad\text{for all }%
\lambda\in\left[ -\pi,\pi\right] .
\]
Parts (i), (ii) and $0\leq\rho\leq\epsilon<1/6$, $p\rho\leq1/6$ yield that,
for $L$ large enough,%
\begin{align*}
\sum_{j=0}^{\infty}\left\vert R_{j}\right\vert & \leq R_{0}+\sum_{j=1}%
^{p}\left\vert R_{j}\right\vert +\sum_{\ell=1}^{\infty}\sum_{j=\ell
p+1}^{\left( \ell+1\right) p}\left\vert R_{j}\right\vert \leq1+6\rho
\epsilon+\left( 1+6\epsilon\right) p\rho+3\sum_{\ell=1}^{\infty}\left(
\ell+1\right) p\rho\left( 2\epsilon\right) ^{\ell}\\
& \leq1+1+1+1+\sum_{\ell=1}^{\infty}\left( \ell+1\right) \left(
2\epsilon\right) ^{\ell}\leq L.
\end{align*}
Since $f^{\prime}\left( \lambda\right) =g^{\prime}\left( \lambda\right)
f\left( \lambda\right) $ with $g^{\prime}\left( \lambda\right) =-2\rho
\sum_{k=1}^{p}b_{k}k\sin\left( k\lambda\right) $, we have $\sup_{\lambda
\in\left[ -\pi,\pi\right] }\left\vert f^{\prime}\left( \lambda\right)
\right\vert \leq3\times2p^{2}\rho\leq1$.
\textit{Proof of (iv). }Let $u_{t}=\varepsilon_{t}+\sum_{j=1}^{\infty}\psi
_{j}\varepsilon_{t-j}$ be the Wold decomposition of the process. Brillinger
(2001) and $\int_{-\pi}^{\pi}\log f\left( \lambda\right) \exp\left(
ij\lambda\right) d\lambda/2\pi=\rho b_{j}$ give%
\begin{align*}
\psi_{j} & =\frac{\int_{-\pi}^{\pi}\exp\left( \rho\sum_{k=1}^{p}b_{k}%
\exp\left( -ik\lambda\right) \right) \exp\left( ij\lambda\right)
d\lambda}{\int_{-\pi}^{\pi}\exp\left( \rho\sum_{k=1}^{p}b_{k}\exp\left(
-ik\lambda\right) \right) d\lambda},\\
\operatorname*{Var}\left( \varepsilon_{t}\right) & =\left\vert \frac
{1}{2\pi}\int_{-\pi}^{\pi}\exp\left( \rho\sum_{k=1}^{p}b_{k}\exp\left(
-ik\lambda\right) \right) d\lambda\right\vert ^{2}.
\end{align*}
Arguing as in (i) and (ii) with an expansion as in (\ref{Ralt}) gives
$\operatorname*{Var}\left( \varepsilon_{t}\right) =1$, $\left\vert \psi
_{j}-\rho b_{j}\right\vert \leq C\rho\epsilon$ for $j\in\left[ 1,p\right] $
and $\left\vert \psi_{j}\right\vert \leq C\rho\left( 2\epsilon\right)
^{\ell}$ for all $j\in\left[ \ell p+1,\left( \ell+1\right) p\right) $ and
all $\ell\geq1$. Gaussianity, the choice of $\rho$ in (iv) with the
restriction on $P_{n}$ and Wu (2005) give, for any $a>1$, $\delta_{12a}\left(
j\right) \leq C_{a}\left\vert \psi_{j}\right\vert \leq C_{a}j^{-7-1/4}$. That
the other conditions of Assumption \ref{Reg} hold uniformly in $p\in\left[
1,P_{n}\right] $ follows from (i) and (ii).$\hfill\square$
We will now define a family $\mathcal{F}_{n}$\ of correlated Gaussian
alternatives. We first introduce some notation. Consider $\widetilde{\gamma
}_{n}=\left( 2\ln\ln n\right) ^{1/2}$ and $\mathcal{P}^{\prime}=\left\{
2^{j},j=1,\ldots,J_{n}\right\} $, $2^{J_{n}}=P_{n}=o\left( \overline{p}%
_{n}\wedge\left( n/\widetilde{\gamma}_{n}\right) ^{1/14}\right) $ so that
$\mathcal{P}^{\prime}\subset\left[ 1,\overline{p}_{n}\right] $ for $n$ large
enough. Define also
\begin{equation}
\rho_{n}^{2}(p)=2\frac{\kappa_{n}^{2}\widetilde{\gamma}_{n}}{np^{1/2}}%
,\quad\widetilde{\rho}_{n}(p)=2\rho_{n}(p),\quad\epsilon_{n}=P_{n}^{2}%
\rho_{n}(P_{n})=\frac{\left( \widetilde{\gamma}_{n}\right) ^{1/2}\kappa
_{n}P_{n}^{7/4}}{n^{1/2}}=o\left( 1\right) .\label{Rhop}%
\end{equation}
Since $p^{2}\rho_{n}(p)\leq\epsilon_{n}$ for all $p\in\mathcal{P}^{\prime}$,
$\epsilon_{n}$ plays the role of the real number $\epsilon$ of Lemma
\ref{Altexp} and we assume from now on that $n$ is so large that $\epsilon
_{n}\leq1/6$. Consider the following log-spectral density functions:%
\[
g\left( \lambda;b,p\right) =2\widetilde{\rho}_{n}(p)\sum_{k\in\left[
p,2p\right) }b_{k}\cos\left( k\lambda\right) ,\quad b=\left( b_{1}%
,\ldots,b_{P_{n}}\right) \in\left\{ -1,1\right\} ^{P_{n}},\quad
p\in\mathcal{P}^{\prime}.
\]
Functions $g$ are of the form specified in (\ref{Logsd}). Let $W$ be a
symmetric standard Brownian motion process. Consider a centered stationary
Gaussian processes%
\[
u_{t,n}\left( b,p\right) =\frac{1}{\left( 2\pi\right) ^{1/2}}\int_{-\pi
}^{\pi}\exp\left( \frac{g\left( \lambda;b,p\right) }{2}\right) \exp\left(
it\lambda\right) dW\left( \lambda\right) .
\]
Observe that $\left\{ u_{t,n}\left( 0,p\right) \right\} $ does not depend
on $p$ and is a Gaussian white noise process with variance 1. Let $\left\{
R_{j,n}\left( b,p\right) \right\} $ denote the covariance function of
$\left\{ u_{t,n}\left( b,p\right) \right\} $. The family $\mathcal{F}_{n}%
$ of Gaussian processes can now be defined as%
\[
\mathcal{F}_{n}=\left\{ \left\{ u_{t,n}\left( b,p\right) \right\}
,b\in\left\{ -1,1\right\} ^{P_{n}},p\in\mathcal{P}^{\prime}\right\} .
\]
Lemma \ref{Altexp} implies that all sequences $\left\{ u_{t,n}\right\} $ in
$\mathcal{F}_{n}$ satisfy Assumption \ref{Reg} and that
$\mathcal{F}_{n}\subset$H\"{o}lder$\left( L\right) $. We now study the
asymptotic behavior of the stochastic covariance sequence $\left\{
R_{j,n}\left( B,P\right) \right\} $. Let $N_{n}\left( b,p\right) $ be as
in (\ref{Sparse2}), that is
\[
N_{n}\left( b,p\right) =N_{n}\left( \left\{ u_{t,n}\left( b,p\right)
\right\} ,p,\rho_{n}\left( p\right) \right) =\#\left\{ \left\vert
\frac{R_{j,n}\left( b,p\right) }{R_{0,n}\left( b,p\right) }\right\vert
\geq\rho_{n}\left( p\right) ,\text{ }j\in\left[ 1,p\right] \right\} .
\]
Lemma \ref{Altexp}-(i,ii) and (\ref{Rhop}) give that $N_{n}\left(
b,p\right) =p/2$ for $n$ large enough and uniformly in $p=2^{j}\in
\mathcal{P}^{\prime}$, so that $\rho_{n}^{2}(p)=2\kappa_{n}^{2}%
\widetilde{\gamma}_{n}/\left( np^{1/2}\right) =\kappa_{n}^{2}%
\widetilde{\gamma}_{n}p^{1/2}/\left( nN_{n}\left( b,p\right) \right) $.
Hence the sequences $\left\{ u_{t,n}\right\} $ in $\mathcal{F}_{n}$
satisfy condition (i) in Theorem \ref{Sparseopt}. Therefore the theorem will
be proved if we show that $\sup_{T_{n}}\min_{\left\{ u_{t,n}\right\}
\in\mathcal{F}_{n}}\mathbb{P}\left( T_{n}=0\right) \leq\alpha+o\left(
1\right) $, where $\sup_{T_{n}}$ is a supremum over asymptotically $\alpha
$-level tests. Since the equivalence result of Golubev et al. (2010) holds
over $\mathcal{F}_{n}\subset$H\"{o}lder$\left( L\right) $ this is equivalent
to showing that $\sup_{T_{n}}\min_{\left\{ U_{n}\right\} \in\mathcal{F}_{n}%
}\mathbb{Q}\left( T_{n}=0\right) \leq\alpha+o\left( 1\right) $,
$\mathbb{Q}$ being the distribution of the continuous time regression model
\[
dU_{n}\left( \lambda;b,p\right) =g\left( \lambda;b,p\right) d\lambda
+2\pi^{1/2}\frac{dW\left( \lambda\right) }{n^{1/2}},\quad\quad\lambda
\in\left[ -\pi,\pi\right] ,
\]
where $W\left( \cdot\right) $ is a Brownian motion over $\lambda\in\left[
-\pi,\pi\right] $. This can be done as in Spokoiny (1996, Proof of Theorem
2.3) by bounding $\sup_{T_{n}}\min_{\left\{ U_{n}\right\} \in\mathcal{F}%
_{n}}\mathbb{Q}\left( T_{n}=0\right) $ with a Bayes risk, based on the
choice of a uniform distribution for $p$ and a Bernoulli one for $b$%
.$\hfill\square$
\subsection{Proof of Lemma \ref{Wildlem}}
The first approximation $R_{0,n}=\sigma^{2}\left( 1+O\left( \gamma_{n}%
P_{n}^{1/2}/n\right) \right) $ follows easily from the definition
(\ref{Wildalt}) of the alternative. To show that the second approximation is
valid, note that for $j=1,...,P_{n}$,%
\[
R_{j,n}=\frac{\nu\gamma_{n}^{1/2}}{n^{1/2}P_{n}^{1/4}}\psi_{j}\sigma
^{2}+\left( \frac{\nu\gamma_{n}^{1/2}}{n^{1/2}P_{n}^{1/4}}\right)
^{2}\left( \psi_{j+1}\psi_{1}+\cdots+\psi_{P_{n}}\psi_{P_{n}%
-j}\right) \sigma^{2}.
\]
By the Cauchy-Schwarz inequality, $\left\vert \psi_{j+1}\psi_{1}%
+\cdots+\psi_{P_{n}}\psi_{P_{n}-j}\right\vert \leq\sum_{k=1}^{P_{n}}%
\psi_{k}^{2}=O(P_{n})$\ for all $j=1,...,P_{n}$, hence, uniformly in
$j=1,...,P_{n}$,%
\[
R_{j,n}=\frac{\nu\gamma_{n}^{1/2}}{n^{1/2}P_{n}^{1/4}}\psi_{j}\sigma
^{2}+O\left( \frac{\gamma_{n}P_{n}^{1/2}}{n}\right) =\frac{\nu\gamma
_{n}^{1/2}}{n^{1/2}P_{n}^{1/4}}\psi_{j}\sigma^{2}+o\left( \frac{\gamma
_{n}^{1/2}}{n^{1/2}P_{n}^{1/4}}\right)
\]
since $P_{n}=o((n/\gamma_{n})^{2/3})$.\hspace*{\fill}$\Box$
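The covariance decomposition above is an exact identity for the moving-average form $u_{t}=\varepsilon_{t}+c\sum_{k=1}^{P}\psi_{k}\varepsilon_{t-k}$, writing $c$ for $\nu\gamma_{n}^{1/2}/(n^{1/2}P_{n}^{1/4})$; the values of $P$, $c$, $\sigma^{2}$ and $\psi$ below are illustrative.

```python
import numpy as np

# Exact MA covariance decomposition: for u_t = e_t + c*sum_k psi_k e_{t-k}
# with Var(e_t) = sigma^2,
#   R_j = c*psi_j*sigma^2 + c^2*(psi_{j+1}psi_1 + ... + psi_P psi_{P-j})*sigma^2.
rng = np.random.default_rng(0)
P, c, sigma2 = 5, 0.1, 2.0
psi = rng.standard_normal(P)                  # illustrative MA coefficients

a = np.concatenate(([1.0], c * psi))          # full MA(P) coefficients of u_t
for j in range(1, P + 1):
    Rj_exact = sigma2 * np.dot(a[:-j], a[j:])            # sum_i a_i a_{i+j}
    Rj_formula = sigma2 * (c * psi[j - 1]
                           + c ** 2 * np.dot(psi[j:], psi[:P - j]))
    assert abs(Rj_exact - Rj_formula) < 1e-12
print("decomposition verified for j = 1..P")
```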
\subsection{Proof of Proposition \ref{WildCvM}}
Let us now check consistency of the test (\ref{Test}) under the assumption
that $\min_{k\in\left[ 1,P_{n}\right] }\left\vert \psi_{k}\sigma
^{2}\right\vert \geq1$. Define $\rho_{n}=\left( \nu/2\right) \gamma
_{n}^{1/2}/\left( n^{1/2}P_{n}^{1/4}\right) $. Lemma \ref{Wildlem} implies
that $N_{n}=P_{n}\left( 1+o(1)\right) $ for such a $\rho_{n}$, which
therefore satisfies
\[
\rho_{n}=\left( 1+o\left( 1\right) \right) \left( \nu/2\right) \left(
\gamma_{n}P_{n}^{1/2}/N_{n}\right) ^{1/2}/n^{1/2},
\]
so that (\ref{Sparse3}) asymptotically holds provided $\nu\geq3\kappa^{\ast}$
and the test is consistent if $1\leq P_{n}\leq\overline{p}_{n}/2$ by Theorem
\ref{Sparse}, provided the considered alternatives satisfy Assumption
\ref{Reg}. Wu (2005) gives that the alternative (\ref{Wildalt}) satisfies, for
any $a>0$,%
\[
\delta_{12a}\left( j\right) \leq C_{a}\frac{\nu\gamma_{n}^{1/2}}%
{n^{1/2}P_{n}^{1/4}}\left\vert \sigma\psi_{j}\right\vert \text{ for all }%
j\in\left[ 1,P_{n}\right] \text{, }\delta_{12a}\left( j\right) =0\text{
for all }j>P_{n}.
\]
Hence the condition $P_{n}=O\left( \left( n/\gamma_{n}\right)
^{1/14}\right) $ gives that $\delta_{12a}\left( j\right) \leq Cj^{-7-1/4}$
since the $\left\vert \sigma\psi_{j}\right\vert $ are bounded. Moreover,
Gaussianity ensures that%
\[
\left\Vert u_{t,n}-\varepsilon_{t}\right\Vert _{12a}\leq C_{a}\sigma\left(
\frac{\nu^{2}\gamma_{n}}{nP_{n}^{1/2}}\sum_{k=1}^{P_{n}}\psi_{k}^{2}\right)
^{1/2}=O\left( \frac{\nu\gamma_{n}^{1/2}P_{n}^{1/4}}{n^{1/2}}\right)
=o\left( 1\right) ,
\]
which gives $\operatorname*{Var}\left( u_{t,n}\right) =\sigma^{2}+o\left(
1\right) $ and $\max_{j\in\left[ 1,n\right] }\operatorname*{Var}^{2}\left(
u_{t,n}\right) /\operatorname*{Var}\left( u_{t,n}u_{t+j,n}\right)
=1+o\left( 1\right) $ so that Assumption \ref{Reg} holds. This ends the
proof of Proposition \ref{WildCvM}-(i).
Consider now the other tests in Proposition \ref{WildCvM}-(ii). Define
$\widetilde{R}_{1,j}=\sum_{t=1}^{n-j}u_{t,n}u_{t+j,n}/n$, $\widetilde{R}%
_{0,j}=\sum_{t=1}^{n-j}\varepsilon_{t}\varepsilon_{t+j}/n$, $\widetilde{\tau
}_{1,j}^{2}=\sum_{t=1}^{n-j}u_{t,n}^{2}u_{t+j,n}^{2}/\left( n-j\right)
-n\widetilde{R}_{1,j}^{2}/\left( n-j\right) $ and $\widetilde{\tau}%
_{0,j}^{2}=\sum_{t=1}^{n-j}\varepsilon_{t}^{2}\varepsilon_{t+j}^{2}/\left(
n-j\right) -n\widetilde{R}_{0,j}^{2}/\left( n-j\right) $. Define also
$\eta_{t}=\eta_{t,n}=\nu\sum_{k=1}^{\infty}\psi_{k}\varepsilon_{t-k}$, setting
$\psi_{k}=0$ for $k>P_{n}$, so that $u_{t,n}=\varepsilon_{t}+\gamma_{n}%
^{1/2}\eta_{t}/\left( n^{1/2}P_{n}^{1/4}\right) $. We have%
\[
\left\vert \widetilde{R}_{1,j}-\widetilde{R}_{0,j}\right\vert \leq\frac
{\gamma_{n}^{1/2}}{n^{3/2}P_{n}^{1/4}}\left\vert \sum_{t=1}^{n-j}\eta
_{t}\varepsilon_{t+j}\right\vert +\frac{\gamma_{n}^{1/2}}{n^{3/2}P_{n}^{1/4}%
}\left\vert \sum_{t=1}^{n-j}\varepsilon_{t}\eta_{t+j}\right\vert +\frac
{\gamma_{n}}{n^{2}P_{n}^{1/2}}\left\vert \sum_{t=1}^{n-j}\eta_{t}\eta
_{t+j}\right\vert .
\]
The Burkholder inequality gives, for any $a>1$,%
\begin{align*}
& \left\Vert \frac{\gamma_{n}^{1/2}}{n^{3/2}P_{n}^{1/4}}\sum_{t=1}^{n-j}%
\eta_{t}\varepsilon_{t+j}\right\Vert _{a}\leq C\frac{\gamma_{n}^{1/2}\left(
n-j\right) ^{1/2}}{n^{3/2}P_{n}^{1/4}}\left\Vert \eta_{t}\right\Vert _{a}\leq
C\frac{\gamma_{n}^{1/2}P_{n}^{1/4}}{n},\\
& \left\Vert \frac{\gamma_{n}^{1/2}}{n^{3/2}P_{n}^{1/4}}\sum_{t=1}%
^{n-j}\left( \varepsilon_{t}\eta_{t+j}-\psi_{j}\varepsilon_{t}^{2}\right)
\right\Vert _{a}\leq\left\Vert \frac{\gamma_{n}^{1/2}}{n^{3/2}P_{n}^{1/4}}%
\sum_{t=1}^{n-j}\varepsilon_{t}\left( \sum_{k=0}^{j-1}\psi_{k}\varepsilon
_{t+j-k}\right) \right\Vert _{a}\\
& +\left\Vert \frac{\gamma_{n}^{1/2}}{n^{3/2}P_{n}^{1/4}}\sum_{t=1}%
^{n-j}\left( \sum_{k=j+1}^{\infty}\psi_{k}\varepsilon_{t+j-k}\right)
\varepsilon_{t}\right\Vert _{a}\leq C\frac{\gamma_{n}^{1/2}P_{n}^{1/4}}{n},\\
& \left\Vert \frac{\gamma_{n}^{1/2}}{n^{3/2}P_{n}^{1/4}}\sum_{t=1}%
^{n-j}\left( \varepsilon_{t}^{2}-\sigma^{2}\right) \right\Vert _{a}\leq
C\frac{\gamma_{n}^{1/2}}{nP_{n}^{1/4}},\quad\left\Vert \frac{\gamma_{n}}%
{n^{2}P_{n}^{1/2}}\sum_{t=1}^{n}\eta_{t}^{2}\right\Vert _{a}\leq\frac
{\gamma_{n}}{nP_{n}^{1/2}}\leq C\frac{\gamma_{n}P_{n}^{1/2}}{n},
\end{align*}
for all $j$. Note also that $\left\vert \sum_{t=1}^{n-j}\eta_{t}\eta
_{t+j}\right\vert \leq\sum_{t=1}^{n}\eta_{t}^{2}$. This and the Markov
inequality give, for $a$ large enough and since $\gamma_{n}P_{n}^{1/2}%
=o(n^{1/4})$,
\begin{align*}
& \max_{j\in\left[ 1,n\right] }\left\vert \widetilde{R}_{1,j}-\widetilde{R}%
_{0,j}\right\vert ^{a}\\
& \text{ }=O_{\mathbb{P}}\left( \sum_{j=1}^{n}\left\Vert \frac{\gamma
_{n}^{1/2}}{n^{3/2}P_{n}^{1/4}}\sum_{t=1}^{n-j}\eta_{t}\varepsilon_{t+j}%
+\sum_{t=1}^{n-j}\varepsilon_{t}\eta_{t+j}\right\Vert _{a}^{a}+\left\Vert
\frac{\gamma_{n}}{n^{2}P_{n}^{1/2}}\sum_{t=1}^{n}\eta_{t}^{2}\right\Vert
_{a}^{a}\right) \\
& \text{ }=O_{\mathbb{P}}\left( n\left( \frac{\gamma_{n}^{1/2}P_{n}^{1/4}%
}{n}\right) ^{a}+\left( \frac{\gamma_{n}P_{n}^{1/2}}{n}\right) ^{a}\right)
=o_{\mathbb{P}}\left( \frac{1}{n^{7a/8-1}}+\frac{1}{n^{3a/4}}\right) \\
& \text{ }=o_{\mathbb{P}}\left( \frac{1}{\left( n\log n\right) ^{a/2}%
}\right) .
\end{align*}
Hence%
\begin{equation}
\max_{j\in\left[ 1,n\right] }\left\vert \widetilde{R}_{1,j}-\widetilde{R}%
_{0,j}\right\vert =o_{\mathbb{P}}\left( \frac{1}{\left( n\log n\right)
^{1/2}}\right) .\label{MaxtildR}%
\end{equation}
Arguing similarly for the $\widetilde{\tau}_{k,j}^{2}$ gives, since
$J_{n}=O\left( n^{1/2}\right) $,
\begin{equation}
\max_{j\in\left[ 1,J_{n}\right] }\left\vert \widetilde{\tau}_{1,j}%
^{2}-\widetilde{\tau}_{0,j}^{2}\right\vert =o_{\mathbb{P}}\left( \frac
{1}{\left( n\log n\right) ^{1/2}}\right) ,\quad\max_{j\in\left[
1,J_{n}\right] }\left\vert \widetilde{\tau}_{0,j}^{2}-\sigma^{4}\right\vert
=O_{\mathbb{P}}\left( \frac{\log^{1/2}n}{n^{1/2}}\right) ,\label{Maxtildsig}%
\end{equation}
where the latter is from Proposition \ref{Covesti}. Note that (\ref{MaxtildR})
and (\ref{Maxtildsig}) give (\ref{G01cov}). Let $W_{k,n}$, $CvM_{k,n}$,
$EL_{k,n}$ be the statistics computed under $G_{k}$, $k=0,1$, i.e. with
$\widetilde{R}_{0,j}/\widetilde{\tau}_{0,j}$ and $\widetilde{R}_{1,j}%
/\widetilde{\tau}_{1,j}$. Note that (\ref{MaxtildR}) and (\ref{Maxtildsig})
give $W_{1,n}=W_{0,n}+o_{\mathbb{P}}\left( 1\right) $. (\ref{G01cov}) and
Proposition \ref{Covesti} give%
\begin{align*}
& \left\vert CvM_{1,n}-CvM_{0,n}\right\vert \leq\frac{2}{\pi^{2}}\sum
_{j=1}^{J_{n}}\frac{n\left\vert \left( \widetilde{R}_{1,j}/\widetilde{\tau
}_{1,j}+\widetilde{R}_{0,j}/\widetilde{\tau}_{0,j}\right) \left(
\widetilde{R}_{1,j}/\widetilde{\tau}_{1,j}-\widetilde{R}_{0,j}/\widetilde{\tau
}_{0,j}\right) \right\vert }{j^{2}}\\
& \text{ }\leq2\max_{j\in\left[ 1,J_{n}\right] }\frac{\left\vert
n^{1/2}\widetilde{R}_{0,j}\right\vert }{\widetilde{\tau}_{0,j}}\times
\max_{j\in\left[ 1,J_{n}\right] }\left\vert n^{1/2}\left( \frac
{\widetilde{R}_{1,j}}{\widetilde{\tau}_{1,j}}-\frac{\widetilde{R}_{0,j}%
}{\widetilde{\tau}_{0,j}}\right) \right\vert \frac{2}{\pi^{2}}\sum
_{j=1}^{J_{n}}\frac{1}{j^{2}}\\
& \text{ }+\max_{j\in\left[ 1,J_{n}\right] }n\left( \frac{\widetilde{R}%
_{1,j}}{\widetilde{\tau}_{1,j}}-\frac{\widetilde{R}_{0,j}}{\widetilde{\tau
}_{0,j}}\right) ^{2}\frac{2}{\pi^{2}}\sum_{j=1}^{J_{n}}\frac{1}{j^{2}}\\
& \text{ }=n^{1/2}O_{\mathbb{P}}\left( \left( \frac{\log n}{n}\right)
^{1/2}\right) n^{1/2}o_{\mathbb{P}}\left( \frac{1}{\left( n\log n\right)
^{1/2}}\right) +no_{\mathbb{P}}\left( \frac{1}{n\log n}\right)
=o_{\mathbb{P}}\left( 1\right) .
\end{align*}
Hence $CvM_{1,n}=CvM_{0,n}+o_{\mathbb{P}}\left( 1\right) $. For $EL_{n}$,
$W_{1,n}=W_{0,n}+o_{\mathbb{P}}\left( 1\right) $ and Xiao and Wu (2011)
gives that $\max_{j\in\left[ 1,J_{n}\right] }\left\vert \widetilde{R}%
_{k,j}/\widetilde{\tau}_{k,j}\right\vert \leq\left( 2\ln n\right)
^{1/2}\left( 1+o_{\mathbb{P}}\left( 1\right) \right) $ for $k=0,1$ so that
$\mathbb{P}\left( \widetilde{\gamma}_{EL}^{\ast}=\ln n\right) \rightarrow1$
under $G_{0}$ and $G_{1}$. We now show that $\mathbb{P}\left( \widetilde{p}%
_{EL}^{\ast}=1\right) \rightarrow1$ under $G_{0}$. Propositions \ref{MeanH1}
and \ref{VarH1} and (\ref{Maxtildsig}) give
\begin{align*}
& \mathbb{P}\left( \widetilde{p}_{0,EL}^{\ast}\neq1\right) =\mathbb{P}%
\left( \max_{p\in\left[ 2,J_{n}\right] }\frac{\widetilde{BP}_{0,p}^{\ast
}-\widetilde{BP}_{0,1}^{\ast}}{p-1}>\ln n\right) +o\left( 1\right) \\
& \text{ }=\mathbb{P}\left( \left( 1+o_{\mathbb{P}}\left( 1\right)
\right) \max_{p\in\left[ 2,J_{n}\right] }\frac{n\sum_{j=2}^{p}%
\widetilde{R}_{0,j}^{2}/\sigma^{4}}{p-1}>\ln n\right) +o\left( 1\right) \\
& \text{ }=\mathbb{P}\left( \frac{n\sum_{j=2}^{p}\widetilde{R}_{0,j}%
^{2}/\sigma^{4}}{p-1}>\frac{1}{2}\ln n\text{ for some }p\in\left[
2,J_{n}\right] \right) +o\left( 1\right) \\
& \text{ }\leq\sum_{p=2}^{J_{n}}\mathbb{P}\left( \frac{n\sum_{j=2}^{p}\left(
\widetilde{R}_{0,j}^{2}/\sigma^{4}-\mathbb{E}\left[ \widetilde{R}_{0,j}%
^{2}/\sigma^{4}\right] \right) }{p-1}>\frac{1}{2}\ln n-\frac{n\sum_{j=2}%
^{p}\mathbb{E}\left[ \widetilde{R}_{0,j}^{2}/\sigma^{4}\right] }%
{p-1}\right) +o\left( 1\right) \\
& \text{ }\leq\sum_{p=2}^{J_{n}}\frac{\operatorname*{Var}\left( \frac
{n\sum_{j=2}^{p}\left( \widetilde{R}_{0,j}^{2}/\sigma^{4}-\mathbb{E}\left[
\widetilde{R}_{0,j}^{2}/\sigma^{4}\right] \right) }{p-1}\right) }{\left(
\frac{1}{2}\ln n-\frac{1}{p-1}\sum_{j=2}^{p}\left( 1-j/n\right) \right)
^{2}}+o\left( 1\right) \\
& \text{ }\leq\frac{C}{\log^{2}n}\sum_{p=2}^{J_{n}}\frac{1}{p-1}+o\left(
1\right) =O\left( \frac{1}{\log n}\right) +o\left( 1\right) =o\left(
1\right) .
\end{align*}
Now, observe that Proposition \ref{Covesti} and (\ref{G01cov}) give%
\begin{align*}
& \max_{p\in\left[ 2,J_{n}\right] }\left\vert \frac{\widetilde{BP}%
_{0,p}^{\ast}-\widetilde{BP}_{0,1}^{\ast}}{p-1}-\frac{\widetilde{BP}%
_{1,p}^{\ast}-\widetilde{BP}_{1,1}^{\ast}}{p-1}\right\vert \leq\max
_{p\in\left[ 2,J_{n}\right] }\left\vert \frac{n\sum_{j=2}^{p}\left(
\widetilde{R}_{0,j}^{2}/\widetilde{\tau}_{0,j}^{2}-\widetilde{R}_{1,j}%
^{2}/\widetilde{\tau}_{1,j}^{2}\right) }{p-1}\right\vert \\
& \text{ }\leq2\max_{j\in\left[ 1,J_{n}\right] }\left\vert n^{1/2}%
\frac{\widetilde{R}_{0,j}}{\widetilde{\tau}_{0,j}}\right\vert \times\max
_{j\in\left[ 1,J_{n}\right] }\left\vert n^{1/2}\left( \frac{\widetilde{R}%
_{0,j}}{\widetilde{\tau}_{0,j}}-\frac{\widetilde{R}_{1,j}}{\widetilde{\tau
}_{1,j}}\right) \right\vert +\left( \max_{j\in\left[ 1,J_{n}\right]
}\left\vert n^{1/2}\left( \frac{\widetilde{R}_{0,j}}{\widetilde{\tau}_{0,j}%
}-\frac{\widetilde{R}_{1,j}}{\widetilde{\tau}_{1,j}}\right) \right\vert
\right) ^{2}\\
& \text{ }=n^{1/2}O_{\mathbb{P}}\left( \left( \frac{\log n}{n}\right)
^{1/2}\right) n^{1/2}o_{\mathbb{P}}\left( \frac{1}{\left( n\log n\right)
^{1/2}}\right) +no_{\mathbb{P}}\left( \frac{1}{n\log n}\right)
=o_{\mathbb{P}}\left( 1\right) .
\end{align*}
This, combined with $\max_{p\in\left[
2,J_{n}\right] }\left\vert \left( \widetilde{BP}_{0,p}^{\ast}-\widetilde{BP}%
_{0,1}^{\ast}\right) /\left( p-1\right) \right\vert =O_{\mathbb{P}}\left(
\log^{1/2}n\right) $, which follows by arguing as in the bound above, implies
that $\max_{p\in\left[ 2,J_{n}\right]
}\left\vert \left( \widetilde{BP}_{1,p}^{\ast}-\widetilde{BP}_{1,1}^{\ast
}\right) /\left( p-1\right) \right\vert \leq\log n$ with probability
tending to $1$, and then $\mathbb{P}\left( \widetilde{p}_{EL}^{\ast}=1\right)
\rightarrow1$ under $G_{1}$. Hence (\ref{G01cov}) gives that $EL_{1,n}%
=\widetilde{BP}_{1,1}^{\ast}+o_{\mathbb{P}}\left( 1\right) =\widetilde{BP}%
_{0,1}^{\ast}+o_{\mathbb{P}}\left( 1\right) =EL_{0,n}+o_{\mathbb{P}}\left(
1\right) $, so that $EL_{n}$ converges in distribution to a chi-square
distribution with one degree of freedom under $G_{0}$ and $G_{1}$.\hspace*{\fill}$\Box$
\section*{Appendix C: Proofs of intermediary
results\label{Proofs of intermediary results}}
\setcounter{lem}{0} \setcounter{thm}{0} \renewcommand{\thelem}{C.\arabic{lem}}
\renewcommand{\thethm}{C.\arabic{thm}} \setcounter{equation}{0}
\renewcommand{\theequation}{C.\arabic{equation}} \setcounter{subsection}{0}
\renewcommand{\thesubsection}{C.\arabic{subsection}} \renewcommand{\thesubsubsection}{C.\arabic{subsection}.\arabic{subsubsection}}
The proofs also use the notion of cumulants, see for example Brillinger (2001,
p. 19) or Xiao and Wu (2011)\ for a definition. Let
\[
\mathrm{Cum}\left( u_{t_{1,n}},\ldots,u_{t_{q,n}}\right) =\Gamma_{n}%
(t_{1},\ldots,t_{q})
\]
denote the $q$th-order cumulant of $\left\{ u_{t,n}\right\} $. The next
theorem on cumulant summability is Theorem 21 in Xiao and Wu (2011). These
authors do not formally consider sequences $\left\{ u_{t,n}\right\} $, but
the following result is a straightforward extension of theirs.
\begin{thm}
[Xiao and Wu (2011)]\label{XW11} Suppose $\left\{ u_{t,n}\right\} $ is
stationary for each $n$, with
\[
\sup_{n}\left\Vert u_{t,n}\right\Vert _{q+1}<\infty\text{ and }\sup
_{n}\left\Vert u_{t,n}-u_{t,n}^{t-j}\right\Vert _{q}\leq\delta_{q}\left(
j\right) \text{ where }\sum_{j=0}^{\infty}j^{q-2}\delta_{q}\left( j\right)
<\infty.
\]
Then there is a $\mathcal{C}$ which only depends on $\sup_{n}\left\Vert
u_{t,n}\right\Vert _{q+1}$ and $\sum_{j=0}^{\infty}j^{q-2}\delta_{q}\left(
j\right) $ such that%
\[
\sum_{t_{2},\ldots,t_{q}=-\infty}^{\infty}\left\vert \Gamma_{n}(0,t_{2}%
,\ldots,t_{q})\right\vert \leq\mathcal{C}\text{.}%
\]
\end{thm}
In what follows, we drop subscript $n$ in expressions like $u_{t,n}$,
$R_{j,n}$, $\Gamma_{n}\left( \cdot\right) $ and $\theta_{n}$ when there is
no ambiguity. We denote%
\begin{equation}
K_{jp}=K^{2}\left( \frac{j}{p}\right) -K^{2}\left( j\right) \quad
\quad\text{and}\quad\quad K_{1n}(p)=\sum_{j=1}^{n-1}K_{jp}.\label{kjdifcov}%
\end{equation}
\subsection{\textbf{Proof of Lemma \ref{Ordersums}}}
(i) The first three bounds of the lemma follow directly from Assumption
\ref{Kernel} which implies that $K^{2}\left( j/p\right) \geq K^{2}\left(
j\right) $ for all $j$ and $\mathbb{I}(x\in\lbrack0,1/2])/C\leq K^{2q}(x)\leq
C\mathbb{I}(x\in\lbrack0,1])$ for some $C>0$. The Cauchy-Schwarz inequality
implies that for any $p\in\lbrack1,n/2]$, $E_{\Delta}(p)=\sum_{j=1}%
^{n-1}\left( 1-\frac{j}{n}\right) K_{jp}\leq K_{1n}(p)\leq p^{1/2}\left(
\sum_{j=1}^{n-1}K_{jp}^{2}\right) ^{1/2}\leq Cp^{1/2}V_{\Delta}(p)$, which
is the last bound in (i). (ii) Write $p=1+\nu$. Since $p\leq\overline{p}%
_{n}\leq n/2$, the support of $K\left( \cdot\right) $ is $\left[
0,1\right] $ and $K\left( \cdot\right) $ is a decreasing function, we have
\begin{align*}
V_{\Delta}^{2}(p) & \geq\frac{1}{2}\times2\sum_{j=2}^{p}K^{2}\left( \frac
{j}{p}\right) \geq\sum_{j=1}^{\nu}K^{2}\left( \frac{1+j}{1+\nu}\right)
\geq\sum_{j=1}^{\nu}\int_{j}^{j+1}K^{2}\left( \frac{1+x}{1+\nu}\right) dx\\
& =\int_{1}^{\nu+1}K^{2}\left( \frac{1+x}{1+\nu}\right) dx=\nu\int_{0}%
^{1}K^{2}\left( \frac{2+z\nu}{1+\nu}\right) dz.
\end{align*}
The map $\nu\longmapsto\left( 2+z\nu\right) /\left( 1+\nu\right) $,
$z\in\left[ 0,1\right) $, is decreasing. Hence, for $\nu\geq2$, $V_{\Delta
}^{2}(p)\geq\nu\int_{0}^{1/2}K^{2}\left( \frac{2+2z}{3}\right) dz\geq
C\left( p-1\right) $. Now $V_{\Delta}^{2}(2)\geq2\left( K^{2}\left(
\frac{1}{2}\right) -K^{2}\left( 1\right) \right) ^{2}>0$ gives the desired
result for $V_{\Delta}(p)$. Since $K$ is nonincreasing, $p\longmapsto
E_{\Delta}(p)$ is nondecreasing and $E_{\Delta}(p)\geq0$ for all
$p\in\mathcal{P}$.\hspace*{\fill}$\Box$
\subsection{Proof of Lemma \ref{L}}
Under $\mathcal{H}_{0}$, the proof repeats the steps of Lobato (2001) and
Kuan and Lee (2006) using the joint FCLT of Assumption \ref{M}, which also
gives that the critical values are $O_{\mathbb{P}}\left( 1\right) $ under
$\mathcal{H}_{1}$.\hspace*{\fill}$\Box$
\subsection{\textbf{Proof of Lemma \ref{Varcov}}}
Equation (5.3.21) in Priestley (1981) and Theorem \ref{XW11} give, uniformly
in $j$,
\begin{align*}
\operatorname*{Var}\left( \widetilde{R}_{j}\right) & =\frac{1}{n}%
\sum_{j_{1}=-n+j+1}^{n-j-1}\left( 1-\frac{|j_{1}|+j}{n}\right) \left(
R_{j_{1}}^{2}+R_{j_{1}+j}R_{j_{1}-j}+\Gamma\left( 0,j_{1},j,j_{1}+j\right)
\right) \\
& \leq\frac{2}{n}\sum_{j_{1}=-2n}^{2n}R_{j_{1}}^{2}+\frac{1}{n}\sum
_{j_{2},j_{3},j_{4}=-\infty}^{+\infty}\left\vert \Gamma\left( 0,j_{2}%
,j_{3},j_{4}\right) \right\vert \\
& \leq\frac{4}{n}\sum_{j=0}^{\infty}R_{j}^{2}+\frac{1}{n}\sum_{j_{2}%
,j_{3},j_{4}=-\infty}^{+\infty}\left\vert \Gamma\left( 0,j_{2},j_{3}%
,j_{4}\right) \right\vert <C.\hfill\square
\end{align*}
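As a sanity check (not used in the sequel), when $\left\{ u_{t}\right\} $ is
an i.i.d. sequence with variance $\sigma^{2}$, all the terms in the display
above vanish for $j\geq1$ except the one with $R_{j_{1}}^{2}$ at $j_{1}=0$,
so that
\[
\operatorname*{Var}\left( \widetilde{R}_{j}\right) =\frac{1}{n}\left(
1-\frac{j}{n}\right) \sigma^{4},\quad j\geq1,
\]
which is of order $1/n$ uniformly in $j$, in line with the stated bound.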
\subsection{\textbf{Proof of Proposition \ref{Covesti}}}
For the sake of brevity we assume that $\theta$ is unidimensional. That
\begin{align*}
\max_{j\in\left[ 0,n-1\right] }\left\vert \widetilde{R}_{j}-\left(
1-\frac{j}{n}\right) R_{j,n}\right\vert & =O_{\mathbb{P}}\left( \left(
\frac{\log n}{n}\right) ^{1/2}\right) ,\\
\max_{j\in\left[ 0,n-1\right] }\left( 1-\frac{j}{n}\right) \left\vert
\widetilde{\tau}_{j}^{2}-\tau_{j,n}^{2}\right\vert & =O_{\mathbb{P}}\left(
\left( \frac{\log n}{n}\right) ^{1/2}\right) ,
\end{align*}
follow from Xiao and Wu (2011, Theorem 2). Note that these authors consider a
fixed stationary sequence rather than triangular arrays $\left\{
u_{t,n}\right\} $, but their arguments carry over under Assumption \ref{Reg}.
Hence it suffices to study $\max
_{j\in\left[ 0,\overline{p}_{n}\right] }\left\vert \widehat{R}%
_{j}-\widetilde{R}_{j}\right\vert $ and $\max_{j\in\left[ 0,\overline{p}%
_{n}\right] }\left\vert \widehat{\tau}_{j}^{2}-\widetilde{\tau}_{j}%
^{2}\right\vert $ since $\overline{p}_{n}/n=o\left( n^{-1/2}\right) $ under
Assumption \ref{P}. We now show that $\max_{j\in\left[ 0,\overline
{p}_{n}\right] }\left\vert \widehat{R}_{j}-\widetilde{R}_{j}\right\vert
=O_{\mathbb{P}}\left( n^{-1/2}\right) $. Let $e_{t}=\widehat{u}_{t}-u_{t}$,
so that%
\[
\widehat{R}_{j}=\frac{1}{n}\sum_{t=1}^{n-{j}}\left( u_{t}+e_{t}\right)
\left( u_{t+j}+e_{t+j}\right) =\widetilde{R}_{j}+\frac{1}{n}\sum
_{t=1}^{n-{j}}\left( u_{t}e_{t+j}+e_{t}u_{t+j}\right) +\frac{1}{n}\sum
_{t=1}^{n-{j}}e_{t}e_{t+j}%
\]
with, by the Cauchy-Schwarz inequality, $\left\vert \sum_{t=1}^{n-{j}}%
e_{t}e_{t+j}\right\vert /n\leq\sum_{t=1}^{n}e_{t}^{2}/n$ and, under Assumption
\ref{M}, for $\widehat{\mathfrak{r}}_{t}=\mathfrak{r}_{t}\left(
\widehat{\theta}\right) $,%
\[
\frac{1}{n}\sum_{t=1}^{n-{j}}u_{t}e_{t+j}=\left( \widehat{\theta}%
-\theta\right) \frac{1}{n}\sum_{t=1}^{n-{j}}u_{t}u_{t+j}^{\left( 1\right)
}+\frac{1}{2}\left( \widehat{\theta}-\theta\right) ^{2}\frac{1}{n}\sum
_{t=1}^{n-{j}}u_{t}u_{t+j}^{\left( 2\right) }+\frac{1}{n}\sum_{t=1}^{n-{j}%
}u_{t}\widehat{\mathfrak{r}}_{t+j}.
\]
Now, observe that Assumption \ref{M} gives $\widehat{\theta}-\theta
=O_{\mathbb{P}}\left( n^{-1/2}\right) $, $\max_{t\in\left[ 1,n\right]
}\left\vert \widehat{\mathfrak{r}}_{t}\right\vert =o_{\mathbb{P}}\left(
1/n\right) $ and%
\[
\frac{1}{n}\sum_{t=1}^{n}e_{t}^{2}\leq3\left( \widehat{\theta}-\theta\right)
^{2}\frac{1}{n}\sum_{t=1}^{n}\left( u_{t}^{\left( 1\right) }\right)
^{2}+\frac{3}{4}\left( \widehat{\theta}-\theta\right) ^{4}\frac{1}{n}%
\sum_{t=1}^{n}\left( u_{t}^{\left( 2\right) }\right) ^{2}+\frac{3}{n}%
\sum_{t=1}^{n}\widehat{\mathfrak{r}}_{t}^{2}=O_{\mathbb{P}%
}\left( \frac{1}{n}\right) ,
\]%
\[
\max_{j\in\left[ 1,n\right] }\left\vert \frac{1}{n}\sum_{t=1}^{n-{j}}\left(
u_{t}\widehat{\mathfrak{r}}_{t+j}+u_{t+j}\widehat{\mathfrak{r}}_{t}\right)
\right\vert \leq\frac{2\max_{t\in\left[ 1,n\right] }\left\vert
\widehat{\mathfrak{r}}_{t}\right\vert }{n}\sum_{t=1}^{n-{j}}\left\vert
u_{t}\right\vert =o_{\mathbb{P}}\left( \frac{1}{n}\right) .
\]
This gives, uniformly in $j\in\left[ 1,n\right] $%
\begin{align}
& \left\vert \widehat{R}_{j}-\widetilde{R}_{j}\right\vert \leq\left\vert
\widehat{\theta}-\theta\right\vert \left\vert \mathbb{E}\left[ u_{t}%
u_{t+j}^{\left( 1\right) }+u_{t+j}u_{t}^{\left( 1\right) }\right]
\right\vert \nonumber\\
& \text{ }+\left\vert \widehat{\theta}-\theta\right\vert \left\vert \frac
{1}{n}\sum_{t=1}^{n-{j}}\left( u_{t}u_{t+j}^{\left( 1\right) }+u_{t+j}%
u_{t}^{\left( 1\right) }-\mathbb{E}\left[ u_{t}u_{t+j}^{\left( 1\right)
}+u_{t+j}u_{t}^{\left( 1\right) }\right] \right) \right\vert
+O_{\mathbb{P}}\left( \frac{1}{n}\right) .\label{HatR2tildeR}%
\end{align}
It also follows from Assumption \ref{M} and $\overline{p}_{n}=o\left(
n^{1/2}\right) $ that $\left\vert \widehat{\theta}-\theta\right\vert
\max_{j\in\left[ 1,n\right] }\left\vert \mathbb{E}\left[ u_{t}%
u_{t+j}^{\left( 1\right) }+u_{t+j}u_{t}^{\left( 1\right) }\right]
\right\vert =O_{\mathbb{P}}\left( 1/n^{1/2}\right) $, $n\left(
\widehat{\theta}-\theta\right) ^{2}\sum_{j=0}^{\infty}\mathbb{E}^{2}\left[
u_{t}u_{t+j}^{\left( 1\right) }+u_{t+j}u_{t}^{\left( 1\right) }\right]
=O_{\mathbb{P}}\left( 1\right) $, and for $A_{t}\left( j\right)
=u_{t}u_{t+j}^{\left( 1\right) }+u_{t+j}u_{t}^{\left( 1\right)
}-\mathbb{E}\left[ u_{t}u_{t+j}^{\left( 1\right) }+u_{t+j}u_{t}^{\left(
1\right) }\right] $%
\begin{align*}
& \left\vert \widehat{\theta}-\theta\right\vert \max_{j\in\left[
0,\overline{p}_{n}\right] }\left\vert \frac{1}{n}\sum_{t=1}^{n-{j}}%
A_{t}\left( j\right) \right\vert \leq O_{\mathbb{P}}\left( \frac{1}%
{n^{1/2}}\right) \sum_{j=0}^{\overline{p}_{n}}\left\vert \frac{1}{n}%
\sum_{t=1}^{n-{j}}A_{t}\left( j\right) \right\vert \\
& \text{ }=O_{\mathbb{P}}\left( \frac{1}{n}\right) O_{\mathbb{P}}\left(
\sum_{j=0}^{\overline{p}_{n}}\mathbb{E}^{1/2}\left[ \left( \frac{1}{n^{1/2}%
}\sum_{t=1}^{n-{j}}A_{t}\left( j\right) \right) ^{2}\right] \right) \\
& \text{ }=O_{\mathbb{P}}\left( \frac{1}{n}\right) O_{\mathbb{P}}\left(
\overline{p}_{n}\max_{j\in\left[ 0,\overline{p}_{n}\right] }\mathbb{E}%
^{1/2}\left[ \left(
\frac{1}{n^{1/2}}\sum_{t=1}^{n-{j}}A_{t}\left( j\right) \right)
^{2}\right] \right) =O_{\mathbb{P}}\left( \frac{1}{n^{1/2}}\right) ,
\end{align*}%
\begin{align*}
& n\sum_{j=0}^{n-1}\left( \widehat{\theta}-\theta\right) ^{2}\left(
\frac{1}{n}\sum_{t=1}^{n-{j}}A_{t}\left( j\right) \right) ^{2}\\
& \text{ }=O_{\mathbb{P}}\left( 1\right) \frac{1}{n}O_{\mathbb{P}}\left(
\sum_{j=0}^{n-1}\mathbb{E}\left[ \left( \frac{1}{n^{1/2}}\sum_{t=1}^{n-{j}%
}A_{t}\left( j\right) \right) ^{2}\right] \right) \\
& \text{ }=O_{\mathbb{P}}\left( 1\right) \frac{1}{n}O_{\mathbb{P}}\left(
n\max_{j\in\left[ 0,n\right] }\mathbb{E}\left[ \left( \frac{1}{n^{1/2}%
}\sum_{t=1}^{n-{j}}A_{t}\left( j\right) \right) ^{2}\right] \right)
=O_{\mathbb{P}}\left( 1\right) .
\end{align*}
This gives $\max_{j\in\left[ 0,\overline{p}_{n}\right] }\left\vert
\widehat{R}_{j}-\widetilde{R}_{j}\right\vert =O_{\mathbb{P}}\left(
n^{-1/2}\right) $ and $\max_{p\in\left[ 0,n-1\right] }n\sum_{j=1}%
^{p}\left( \widehat{R}_{j}-\widetilde{R}_{j}\right) ^{2}=O_{\mathbb{P}%
}\left( 1\right) $. The study of $\max_{j\in\left[ 0,\overline{p}%
_{n}\right] }\left\vert \widehat{\tau}_{j}^{2}-\widetilde{\tau}_{j}%
^{2}\right\vert $ is similar.$\hfill\square$
\subsection{\textbf{Proof of Proposition \ref{Esti}}}
For the sake of brevity we assume that $\theta$ is unidimensional. Since
$\widehat{R}_{j}^{2}-\widetilde{R}_{j}^{2}=\left( \widehat{R}_{j}%
-\widetilde{R}_{j}\right) ^{2}+2\widetilde{R}_{j}\left( \widehat{R}%
_{j}-\widetilde{R}_{j}\right) $, Proposition \ref{Esti} is a direct
consequence of Proposition \ref{Covesti} and Lemma \ref{Crosscov} below.
\begin{lem}
\label{Crosscov}Assume that Assumptions \ref{Kernel}, \ref{M}, \ref{P} and
\ref{Reg} hold. Then
\[
\max_{p\in\left[ 2,\overline{p}_{n}\right] }\frac{\left\vert n\sum
_{j=1}^{n-1}\left( K^{2}(j/p)-K^{2}(j)\right) \widetilde{R}_{j}\left(
\widehat{R}_{j}-\widetilde{R}_{j}\right) \right\vert }{\left( 1+n\sum
_{j=1}^{p}R_{j}^{2}\right) ^{1/2}}=O_{\mathbb{P}}\left( 1\right)
\]
and $n\sum_{j=1}^{n-1}K^{2}(j/p_{n})\widetilde{R}_{j}\left( \widehat{R}%
_{j}-\widetilde{R}_{j}\right) =O_{\mathbb{P}}\left( \left( 1+n\sum
_{j=1}^{p_{n}}R_{j}^{2}\right) ^{1/2}\right) $ for any $p_{n}=O(n^{1/2})$.
\end{lem}
\noindent\textbf{Proof of Lemma \ref{Crosscov}.} We just
prove the first equality since the proof of the second is very similar. Define
$\overline{R}_{j}=\mathbb{E}\left[ \widetilde{R}_{j}\right] =(1-j/n)R_{j}$.
We have
\begin{align*}
& \left\vert n\sum_{j=1}^{n-1}K_{jp}\widetilde{R}_{j}\left( \widehat{R}%
_{j}-\widetilde{R}_{j}\right) \right\vert \leq C_{n}(p)+D_{n}(p),\text{
where}\\
& C_{n}(p)=\left\vert n\sum_{j=1}^{n-1}K_{jp}R_{j}\left( \widehat{R}%
_{j}-\widetilde{R}_{j}\right) \right\vert ,\\
& D_{n}(p)=\left\vert n\sum_{j=1}^{n-1}K_{jp}\left( \widetilde{R}%
_{j}-\overline{R}_{j}\right) \left( \widehat{R}_{j}-\widetilde{R}%
_{j}\right) \right\vert .
\end{align*}
The Cauchy-Schwarz inequality and Assumption \ref{Kernel} give
\[
C_{n}(p)\leq C\left( n\sum_{j=1}^{p}R_{j}^{2}\right) ^{1/2}\left(
n\sum_{j=1}^{p}\left( \widehat{R}_{j}-\widetilde{R}_{j}\right) ^{2}\right)
^{1/2}.
\]
Hence Proposition \ref{Covesti} yields that $\max_{p\in\left[ 2,\overline
{p}_{n}\right] }|C_{n}(p)/\left( n\sum_{j=1}^{p}R_{j}^{2}\right)
^{1/2}|=O_{\mathbb{P}}\left( 1\right) $. For $D_{n}(p)$, Assumptions
\ref{Kernel}, \ref{M}, (\ref{HatR2tildeR}) and $\widehat{\mathfrak{r}}%
_{t}=\mathfrak{r}_{t}\left( \widehat{\theta}\right) $ give
\begin{align*}
\max_{p\in\left[ 2,\overline{p}_{n}\right] }D_{n}(p) & \leq O_{\mathbb{P}%
}(n^{-1/2})\left( \max_{p\in\left[ 2,\overline{p}_{n}\right] }%
D_{1n}(p)+\max_{p\in\left[ 2,\overline{p}_{n}\right] }D_{2n}(p)\right)
+O_{\mathbb{P}}(n^{-1})\max_{p\in\left[ 2,\overline{p}_{n}\right] }%
D_{3n}(p)\\
& +\left( \frac{1}{n}\sum_{t=1}^{n}e_{t}^{2}+2\frac{\max_{t\in\left[
1,n\right] }\left\vert \widehat{\mathfrak{r}}_{t}\right\vert }{n}\sum_{t=1}%
^{n}\left\vert u_{t}\right\vert \right) \max_{p\in\left[ 2,\overline{p}%
_{n}\right] }D_{4n}(p),
\end{align*}
where $D_{1n}(p)=n\sum_{j=1}^{p}\left\vert \widetilde{R}_{j}-\overline{R}%
_{j}\right\vert \left\vert \mathbb{E}\left[ u_{t}u_{t+j}^{(1)}+u_{t+j}%
u_{t}^{(1)}\right] \right\vert $,
\begin{align*}
D_{2n}(p) & =n\sum_{j=1}^{p}\left\vert \widetilde{R}_{j}-\overline{R}%
_{j}\right\vert \left\vert \frac{1}{n}\sum_{t=1}^{n-j}\left( u_{t}%
u_{t+j}^{(1)}+u_{t+j}u_{t}^{(1)}-\mathbb{E}\left[ u_{t}u_{t+j}^{(1)}%
+u_{t+j}u_{t}^{(1)}\right] \right) \right\vert ,\\
D_{3n}(p) & =n\sum_{j=1}^{p}\left\vert \widetilde{R}_{j}-\overline{R}%
_{j}\right\vert \left\vert \frac{1}{n}\sum_{t=1}^{n-j}\left( u_{t}%
u_{t+j}^{(2)}+u_{t+j}u_{t}^{(2)}\right) \right\vert ,\\
D_{4n}(p) & =n\sum_{j=1}^{p}\left\vert \widetilde{R}_{j}-\overline{R}%
_{j}\right\vert .
\end{align*}
By Assumptions \ref{Kernel} and \ref{M} and by Lemma \ref{Varcov}, we have%
\[
\mathbb{E}\left[ \max_{p\in\left[ 2,\overline{p}_{n}\right] }%
D_{1n}(p)\right] \leq Cn\sum_{j=1}^{\overline{p}_{n}}\operatorname*{Var}%
\nolimits^{1/2}\left( \widetilde{R}_{j}\right) \left\vert \mathbb{E}\left[
u_{t}u_{t+j}^{(1)}+u_{t+j}u_{t}^{(1)}\right] \right\vert \leq Cn^{1/2},
\]%
\begin{align*}
\mathbb{E}\left[ \max_{p\in\left[ 2,\overline{p}_{n}\right] }%
D_{2n}(p)\right] & \leq Cn^{1/2}\sum_{j=1}^{\overline{p}_{n}}%
\operatorname*{Var}\nolimits^{1/2}\left( \widetilde{R}_{j}\right) \\
& \times\mathbb{E}^{1/2}\left[ \left\vert \frac{1}{n^{1/2}}\sum_{t=1}%
^{n}\left( u_{t}u_{t+j}^{(1)}+u_{t+j}u_{t}^{(1)}-\mathbb{E}\left[
u_{t}u_{t+j}^{(1)}+u_{t+j}u_{t}^{(1)}\right] \right) \right\vert ^{2}\right]
\\
& \leq C\overline{p}_{n},
\end{align*}%
\[
\mathbb{E}\left[ \max_{p\in\left[ 2,\overline{p}_{n}\right] }%
D_{3n}(p)\right] \leq Cn\sum_{j=1}^{\overline{p}_{n}}\operatorname*{Var}%
\nolimits^{1/2}\left( \widetilde{R}_{j}\right) \mathbb{E}^{1/2}\left[
\left\vert \frac{1}{n}\sum_{t=1}^{n}\left( u_{t}u_{t+j}^{(2)}+u_{t+j}%
u_{t}^{(2)}\right) \right\vert ^{2}\right] \leq C\overline{p}_{n}n^{1/2},
\]%
\[
\mathbb{E}\left[ \max_{p\in\left[ 2,\overline{p}_{n}\right] }%
D_{4n}(p)\right] \leq Cn\sum_{j=1}^{\overline{p}_{n}}\mathbb{E}\left[
\left\vert \widetilde{R}_{j}-\overline{R}_{j}\right\vert \right] \leq
Cn\sum_{j=1}^{\overline{p}_{n}}\operatorname*{Var}\nolimits^{1/2}\left(
\widetilde{R}_{j}\right) \leq Cn^{1/2}\overline{p}_{n}.
\]
The Markov inequality gives us the stochastic orders of magnitude of the four
maxima in the bound for $\max_{p\in\left[ 2,\overline{p}_{n}\right] }%
D_{n}(p)$. Since $\overline{p}_{n}=O\left( n^{1/2}\right) $ by Assumption
\ref{P}, $\max_{t\in\left[ 1,n\right] }\left\vert \widehat{\mathfrak{r}}%
_{t}\right\vert =o_{\mathbb{P}}\left( 1/n\right) $ and $n^{-1}\sum_{t=1}%
^{n}e_{t}^{2}=O_{\mathbb{P}}(n^{-1})$\ by Assumption \ref{M}, we have
$\max_{p\in\left[ 2,\overline{p}_{n}\right] }\left\vert D_{n}(p)\right\vert
=O_{\mathbb{P}}\left( 1+\frac{\overline{p}_{n}}{n^{1/2}}\right)
=O_{\mathbb{P}}\left( 1\right) $. This together with $\max_{p\in\left[
2,\overline{p}_{n}\right] }|C_{n}(p)/\left( n\sum_{j=1}^{p}R_{j}^{2}\right)
^{1/2}|=O_{\mathbb{P}}\left( 1\right) $ completes the proof of the
lemma.\hspace*{\fill}$\Box$
\subsection{Proof of Proposition \ref{Selection}}
The proof of Proposition \ref{Selection} is long and divided into three steps.
In the first two steps, we focus on directly observed residuals. In the first
step, we approximate the sample covariance $\widetilde{R}_{j}$ by a martingale
counterpart $\sum_{t=1}^{n}D_{jt}/n$, $j\in\left[ 1,\overline{p}_{n}\right]
$, as in Shao (2011b), see the notation below and Lemmas \ref{CrossDD},
\ref{Sumvareta}, and \ref{Momsig}. The second step deals with the deviation
probability of
\[
\frac{n\sum_{j=1}^{p}\left( \frac{1}{n}\sum_{t=j+1}^{n}D_{jt}\right)
^{2}\left( K^{2}\left( j/p\right) -K^{2}\left( j\right) \right)
-\sigma^{4}E_{\Delta}\left( p\right) }{\sigma^{4}V_{\Delta}\left( p\right)
}%
\]
which is approximated with some Gaussian counterparts through the Lindeberg
technique, see Lemma \ref{Lindeberg}. The third step concludes and explicitly
deals with the case of estimated residuals thanks to Propositions
\ref{Covesti} and \ref{Esti}.
Let us now introduce additional notation. Let $\mathcal{F}_{k}$ be the sigma
field generated by $e_{k},e_{k-1},\ldots$. Define $\mathbf{P}_{t}\left[
Z\right] =\mathbb{E}\left[ Z\left\vert \mathcal{F}_{t}\right. \right]
-\mathbb{E}\left[ Z\left\vert \mathcal{F}_{t-1}\right. \right] $. Wu (2007,
Proposition 3) establishes that $\left\Vert \mathbf{P}_{t}\left[
u_{t+k}\right] \right\Vert _{a}\leq\delta_{a}\left( k\right) $ and Shao
(2011b) has shown that%
\begin{equation}
\left\Vert \mathbf{P}_{0}\left[ u_{k}u_{k-j}\right] \right\Vert _{a}%
\leq2\left\Vert u_{k}\right\Vert _{2a}\left( \delta_{2a}\left( k\right)
+\delta_{2a}\left( k-j\right) \mathbb{I}\left( j\leq k\right) \right)
,\label{Dep}%
\end{equation}
which is smaller than $4\left\Vert u_{k}\right\Vert _{2a}\delta_{2a}\left(
k-j\right) $ when $j\leq k$. Now define the vector of martingale differences
$D_{t}=\left[ D_{1t},\ldots,D_{\overline{p}_{n}t}\right] ^{\prime}$ with%
\[
D_{jt}=\sum_{k=t}^{\infty}\mathbf{P}_{t}\left[ u_{k}u_{k-j}\right]
\]
which converges a.s. and satisfies $\mathbb{E}\left[ D_{jt}\left\vert
\mathcal{F}_{t-1}\right. \right] =0$, $\max_{j}\mathbb{E}\left[ \left\vert
D_{jt}\right\vert ^{a}\right] <\infty$, provided $\left\Vert u_{t}\right\Vert
_{2a}<\infty$ and $\sum_{k=0}^{\infty}\delta_{2a}\left( k\right) <\infty$.
Consider the martingale $M_{j}=M_{jn}=\sum_{t=j+1}^{n}D_{jt}$, which is an
approximation of $n\widetilde{R}_{j}$. Shao (2011b, Lemma A.1) gives, under
Assumption \ref{Reg} and for any $\mathfrak{a}\in\left[ 1,6a\right] $,%
\begin{equation}
\left( \mathbb{E}^{\frac{1}{\mathfrak{a}}}\left[ \left\vert \sum_{t=j+1}%
^{n}u_{t}u_{t-j}-M_{j}\right\vert ^{\mathfrak{a}}\right] \right) ^{2}\leq
C.\label{Cov2M}%
\end{equation}
We shall also use a $\mathfrak{p}$-dependent version of $D_{t}$, denoted
$D_{t}^{_{t-\mathfrak{p}+1}}$, with entries%
\begin{align}
& D_{jt}^{_{t-\mathfrak{p}+1}}=\mathbb{E}\left[ D_{jt}\left\vert e_{t}%
,\ldots,e_{t-\mathfrak{p}+1}\right. \right] =\sum_{k=t}^{\infty}%
\mathbf{P}_{t}^{\prime}\left[ u_{k}u_{k-j}\right] ,\text{ where}%
\label{Djtp}\\
\text{ } & \mathbf{P}_{t}^{\prime}\left[ Z\right] =\mathbf{P}%
_{t}^{t-\mathfrak{p}+1}\left[ Z\right] =\mathbb{E}\left[ Z\left\vert
e_{t},\ldots,e_{t-\mathfrak{p}+1}\right. \right] -\mathbb{E}\left[
Z\left\vert e_{t-1},\ldots,e_{t-\mathfrak{p}+1}\right. \right] .\nonumber
\end{align}
Arguing as in Shao (2011b, Lemma A.2-(iii)) gives%
\begin{equation}
\left\Vert D_{jt}-D_{jt}^{_{t-\mathfrak{p}+1}}\right\Vert _{\mathfrak{a}}\leq
C\left\Vert u_{t}\right\Vert _{2\mathfrak{a}}\Theta_{2\mathfrak{a}}\left(
\mathfrak{p-}j\right) ,\quad\text{for all }j\in\left[ 1,\mathfrak{p}\right]
.\label{Shaolem52}%
\end{equation}
\subsubsection{Martingale approximation and preliminary lemmas}
An important property of $D_{t}$ and $D_{t}^{_{t-\mathfrak{p}+1}}$ is as follows.
\begin{lem}
\label{CrossDD}Suppose Assumptions \ref{Kernel} and \ref{Reg} hold. Let
$K_{jp}$ be as in (\ref{kjdifcov}). Then for any $p\leq\mathfrak{p}$, $t$, and
any $s\leq t-\mathfrak{p}$, $\left\Vert \sum_{j=1}^{p}K_{jp}D_{js}%
D_{jt}^{_{t-\mathfrak{p}+1}}\right\Vert _{3a}\leq Cp^{1/2}$.
\end{lem}
\noindent\textbf{Proof of Lemma \ref{CrossDD}}. We have%
\begin{align}
& \left\Vert \sum_{j=1}^{p}K_{jp}D_{js}D_{jt}^{t-\mathfrak{p}+1}\right\Vert
_{3a}\nonumber\\
& \quad=\left\Vert \sum_{j=1}^{p}K_{jp}\sum_{k_{1}=0}^{\infty}\mathbf{P}%
_{s}\left[ u_{s+k_{1}}u_{s+k_{1}-j}\right] \sum_{k_{2}=0}^{\infty}%
\mathbf{P}_{t}^{\prime}\left[ u_{t+k_{2}}u_{t+k_{2}-j}\right] \right\Vert
_{3a}\nonumber\\
& \quad\leq\left\Vert \sum_{j=1}^{p}K_{jp}\sum_{k_{1}=0}^{j-1}\mathbf{P}%
_{s}\left[ u_{s+k_{1}}u_{s+k_{1}-j}\right] \sum_{k_{2}=0}^{j-1}%
\mathbf{P}_{t}^{\prime}\left[ u_{t+k_{2}}u_{t+k_{2}-j}\right] \right\Vert
_{3a}\label{TildeDD1}\\
& \quad+\left\Vert \sum_{j=1}^{p}K_{jp}\sum_{k_{1}=0}^{j-1}\mathbf{P}%
_{s}\left[ u_{s+k_{1}}u_{s+k_{1}-j}\right] \sum_{k_{2}=j}^{\infty}%
\mathbf{P}_{t}^{\prime}\left[ u_{t+k_{2}}u_{t+k_{2}-j}\right] \right\Vert
_{3a}\label{TildeDD2}\\
& \quad+\left\Vert \sum_{j=1}^{p}K_{jp}\sum_{k_{1}=j}^{\infty}\mathbf{P}%
_{s}\left[ u_{s+k_{1}}u_{s+k_{1}-j}\right] \sum_{k_{2}=0}^{j-1}%
\mathbf{P}_{t}^{\prime}\left[ u_{t+k_{2}}u_{t+k_{2}-j}\right] \right\Vert
_{3a}\label{TildeDD3}\\
& \quad+\left\Vert \sum_{j=1}^{p}K_{jp}\sum_{k_{1}=j}^{\infty}\mathbf{P}%
_{s}\left[ u_{s+k_{1}}u_{s+k_{1}-j}\right] \sum_{k_{2}=j}^{\infty}%
\mathbf{P}_{t}^{\prime}\left[ u_{t+k_{2}}u_{t+k_{2}-j}\right] \right\Vert
_{3a}.\label{TildeDD4}%
\end{align}
We have for (\ref{TildeDD1})%
\begin{align*}
\text{(\ref{TildeDD1})} & =\left\Vert \sum_{j=1}^{p}K_{jp}\sum_{k_{1}%
=0}^{p-1}\mathbb{I}\left( k_{1}<j\right) u_{s+k_{1}-j}\mathbf{P}_{s}\left[
u_{s+k_{1}}\right] \sum_{k_{2}=0}^{p-1}\mathbb{I}\left( k_{2}<j\right)
u_{t+k_{2}-j}\mathbf{P}_{t}^{\prime}\left[ u_{t+k_{2}}\right] \right\Vert
_{3a}\\
& =\left\Vert \sum_{k_{1}=0}^{p-1}\sum_{k_{2}=0}^{p-1}\left( \sum
_{j=k_{1}\vee k_{2}}^{p-1}K_{jp}u_{s+k_{1}-j}u_{t+k_{2}-j}\right)
\mathbf{P}_{s}\left[ u_{s+k_{1}}\right] \mathbf{P}_{t}^{\prime}\left[
u_{t+k_{2}}\right] \right\Vert _{3a}\\
& \leq\sum_{k_{1}=0}^{p-1}\sum_{k_{2}=0}^{p-1}\left\Vert \sum_{j=k_{1}\vee
k_{2}}^{p-1}K_{jp}u_{s+k_{1}-j}u_{t+k_{2}-j}\right\Vert _{6a}\delta
_{12a}\left( k_{1}\right) \delta_{12a}\left( k_{2}\right) ,
\end{align*}
using $\left\Vert \mathbf{P}_{t}^{\prime}\left[ u_{t+k_{2}}\right]
\right\Vert _{12a}\leq\left\Vert \mathbf{P}_{t}\left[ u_{t+k_{2}}\right]
\right\Vert _{12a}=\delta_{12a}\left( k_{2}\right) $. Now (\ref{Cov2M}) and
the Burkholder inequality give%
\begin{align*}
& \left\Vert \sum_{j=k_{1}\vee k_{2}}^{p-1}K_{jp}u_{s+k_{1}-j}u_{t+k_{2}%
-j}\right\Vert _{6a}\leq\left\Vert \sum_{j=k_{1}\vee k_{2}}^{p-1}%
K_{jp}D_{t+k_{2}-j,t-s+k_{2}-k_{1}}\right\Vert _{6a}\\
& \quad+\left\Vert \sum_{j=k_{1}\vee k_{2}}^{p-1}K_{jp}\left( u_{s+k_{1}%
-j}u_{t+k_{2}-j}-D_{t+k_{2}-j,t-s+k_{2}-k_{1}}\right) \right\Vert _{6a}\leq
Cp^{1/2}.
\end{align*}
Hence (\ref{TildeDD1}) is smaller than $Cp^{1/2}$. For (\ref{TildeDD2}),
since $\left\{ u_{s+k_{1}-j},j\in\left[ 1,k_{1}\right] \right\} $ and
$\left\{ \mathbf{P}_{t}^{\prime}\left[ u_{t+k_{2}}u_{t+k_{2}-j}\right]
,j\in\left[ 1,k_{1}\right] ,k_{2}\geq0\right\} $ are independent, we have%
\begin{align*}
\text{(\ref{TildeDD2})} & =\left\Vert \sum_{k_{1}=0}^{p-1}\sum_{k_{2}%
=0}^{\infty}\left( \sum_{j=k_{1}}^{p-1}K_{jp}u_{s+k_{1}-j}\mathbf{P}%
_{t}^{\prime}\left[ u_{t+k_{2}+j}u_{t+k_{2}}\right] \right) \mathbf{P}%
_{s}\left[ u_{s+k_{1}}\right] \right\Vert _{3a}\\
& \leq\sum_{k_{1}=0}^{p-1}\sum_{k_{2}=0}^{\infty}\left\Vert \sum_{j=k_{1}%
}^{p-1}K_{jp}u_{s+k_{1}-j}\mathbf{P}_{t}^{\prime}\left[ u_{t+k_{2}%
+j}u_{t+k_{2}}\right] \right\Vert _{6a}\delta_{6a}\left( k_{1}\right) .
\end{align*}
Let $d_{t}=\sum_{k=t}^{\infty}\mathbf{P}_{t}\left[ u_{k}\right] $ be the
martingale difference approximation of $u_{t}$, see Wu (2007). Now, since
$\left\{ u_{s+k_{1}-j},d_{s+k_{1}-j,}j\in\left[ 1,k_{1}\right] \right\} $
and $\left\{ \mathbf{P}_{t}^{\prime}\left[ u_{t+k_{2}}u_{t+k_{2}-j}\right]
,j\in\left[ 1,k_{1}\right] ,k_{2}\geq0\right\} $ are independent, arguing
as in the proof of Theorem 1 in Wu (2007), (\ref{Cov2M}) and the Burkholder
inequality give%
\begin{align*}
& \left\Vert \sum_{j=k_{1}}^{p-1}K_{jp}u_{s+k_{1}-j}\mathbf{P}_{t}^{\prime
}\left[ u_{t+k_{2}+j}u_{t+k_{2}}\right] \right\Vert _{6a}^{2}\\
& \text{ }\leq2\left\Vert \sum_{j=k_{1}}^{p-1}K_{jp}d_{s+k_{1}-j}%
\mathbf{P}_{t}^{\prime}\left[ u_{t+k_{2}+j}u_{t+k_{2}}\right] \right\Vert
_{6a}^{2}+2\left\Vert \sum_{j=k_{1}}^{p-1}K_{jp}\left( u_{s+k_{1}-j}%
-d_{s+k_{1}-j}\right) \mathbf{P}_{t}^{\prime}\left[ u_{t+k_{2}+j}u_{t+k_{2}}\right]
\right\Vert _{6a}^{2}\\
& \text{ }\leq C\left\Vert \sum_{j=k_{1}}^{p-1}K_{jp}d_{s+k_{1}-j}^{2}\left(
\mathbf{P}_{t}^{\prime}\left[ u_{t+k_{2}+j}u_{t+k_{2}}\right] \right)
^{2}\right\Vert _{3a}+C\left\Vert \mathbf{P}_{t}^{\prime}\left[ u_{t+k_{2}%
+j}u_{t+k_{2}}\right] \right\Vert _{6a}^{2}\leq Ck_{1}\delta_{6a}^{2}\left(
k_{2}\right) .
\end{align*}
Hence Assumption \ref{Reg} gives (\ref{TildeDD2})$\leq C\sum_{k_{1}=0}^{p-1}%
\sum_{k_{2}=0}^{\infty}k_{1}^{1/2}\delta_{6a}\left( k_{2}\right) \delta
_{6a}\left( k_{1}\right) \leq C$.
For (\ref{TildeDD3}), observe first that (\ref{Cov2M}) gives%
\begin{align*}
\text{(\ref{TildeDD3})} & =\left\Vert \sum_{k_{1}=0}^{\infty}\sum_{k_{2}%
=0}^{p-1}\sum_{j=1}^{p}K_{jp}\mathbb{I}\left( j\leq k_{1}\right)
\mathbf{P}_{s}\left[ u_{s+k_{1}}u_{s+k_{1}-j}\right] \mathbb{I}\left(
k_{2}<j\right) \mathbf{P}_{t}^{\prime}\left[ u_{t+k_{2}}u_{t+k_{2}%
-j}\right] \right\Vert _{3a}\\
& \leq\sum_{k_{1}=0}^{\infty}\sum_{k_{2}=0}^{p-1}\sum_{j=k_{2}}^{p}%
\mathbb{I}\left( j\leq k_{1}\right) \delta_{6a}\left( k_{1}-j\right)
\left\Vert \mathbf{P}_{t}^{\prime}\left[ u_{t+k_{2}}u_{t+k_{2}-j}\right]
\right\Vert _{6a}\\
& \leq\left( \sum_{k_{1}=0}^{\infty}\delta_{6a}\left( k_{1}\right) \right)
\times\sum_{k_{2}=0}^{p-1}\sum_{j=k_{2}}^{p}\left\Vert \mathbf{P}_{t}^{\prime
}\left[ u_{t+k_{2}}u_{t+k_{2}-j}\right] \right\Vert _{6a}.
\end{align*}
Since $\overline{u}_{t+k_{2}-j}^{t}$ is independent of $e_{t},\ldots
,e_{t-\mathfrak{p}+1}$ and $\mathbf{P}_{t}\left[ u_{t+k_{2}}\right] $,%
\begin{align}
& \left\Vert \mathbf{P}_{t}^{\prime}\left[ u_{t+k_{2}}u_{t+k_{2}-j}\right]
\right\Vert _{6a}\leq\left\Vert \underset{0}{\underbrace{\mathbb{E}\left[
\overline{u}_{t+k_{2}-j}^{t}\mathbf{P}_{t}\left[ u_{t+k_{2}}\right]
\left\vert e_{t},\ldots,e_{t-\mathfrak{p}+1}\right. \right] }}\right\Vert
_{6a}\nonumber\\
& \text{ }+\left\Vert \mathbb{E}\left[ \left( u_{t+k_{2}-j}-\overline
{u}_{t+k_{2}-j}^{t}\right) \mathbf{P}_{t}\left[ u_{t+k_{2}}\right]
\left\vert e_{t},\ldots,e_{t-\mathfrak{p}+1}\right. \right] \right\Vert
_{6a}\nonumber\\
& \text{ }\leq\left\Vert u_{t+k_{2}-j}-\overline{u}_{t+k_{2}-j}^{t}\right\Vert
_{12a}\left\Vert \mathbf{P}_{t}\left[ u_{t+k_{2}}\right] \right\Vert
_{12a}\leq\Theta_{12a}\left( k_{2}-j\right) \delta_{12a}\left(
k_{2}\right) .\label{Pprime}%
\end{align}
Substituting gives that (\ref{TildeDD3})$\leq C\sum_{k_{2}=0}^{p-1}%
\sum_{j=k_{2}}^{p}\Theta_{12a}\left( k_{2}-j\right) \delta_{12a}\left(
k_{2}\right) \leq C$.
For (\ref{TildeDD4}), (\ref{Dep}) and (\ref{Pprime}) give%
\begin{align*}
\text{(\ref{TildeDD4})} & \leq C\sum_{j=1}^{p}\left( \sum_{k_{1}=j}^{\infty
}\left\Vert \mathbf{P}_{s}\left[ u_{s+k_{1}}u_{s+k_{1}-j}\right] \right\Vert
_{6a}\right) \sum_{k_{2}=j}^{\infty}\left\Vert \mathbf{P}_{t}^{\prime}\left[
u_{t+k_{2}}u_{t+k_{2}-j}\right] \right\Vert _{6a}\\
& \leq C\sum_{j=1}^{p}\left( \sum_{k_{1}=j}^{\infty}\delta_{6a}\left(
k_{1}-j\right) \right) \sum_{k_{2}=j}^{\infty}\Theta_{12a}\left(
k_{2}-j\right) \delta_{12a}\left( k_{2}\right) \leq C.
\end{align*}
Hence substituting gives $\left\Vert \sum_{j=1}^{p}K_{jp}D_{js}D_{jt}%
^{t-\mathfrak{p}+1}\right\Vert _{3a}\leq Cp^{1/2}$. \hfill$\square$
\bigskip
We now define a suitable sequence of Gaussian vectors. Let $2\overline{p}%
_{n}\leq\ell\leq3\overline{p}_{n}$ be an integer. Consider a sequence
of independent centered Gaussian vectors $\eta_{t}=\left[ \eta_{1t}%
,\ldots,\eta_{\overline{p}_{n}t}\right] ^{\prime}$ with%
\begin{equation}
\mathbb{E}\left[ \eta_{j_{1}t}\eta_{j_{2}t}\right] =\mathbb{E}\left[
D_{j_{1}t}^{_{t-\ell+1}}D_{j_{2}t}^{_{t-\ell+1}}\right] .\label{Vareta}%
\end{equation}
We shall also assume that $\left\{ \eta_{t}\right\} $ and $\left\{
e_{t}\right\} $ are independent.
\begin{lem}
\label{Sumvareta}Let $\left\{ \eta_{t}\right\} $ be as in (\ref{Vareta}) and
suppose Assumption \ref{Reg} holds. Then for all $p\in\left[ 1,\overline
{p}_{n}\right] $ and $t,s\in\left[ 1,n\right] $,%
\begin{align*}
& \sum_{j_{1}\neq j_{2}\in\left[ 1,\overline{p}_{n}\right] }\left\vert
\operatorname*{Cov}\left( \eta_{j_{1}t},\eta_{j_{2}t}\right) \right\vert
\leq C\text{ and }\sum_{j=1}^{\overline{p}_{n}}\left\vert \operatorname*{Var}%
\left( \eta_{jt}\right) -\sigma^{4}\right\vert \leq C,\\
& \left\vert \sum_{j=1}^{p}\left( 1-\frac{j}{n}\right) K_{jp}\left(
\operatorname*{Var}\left( \eta_{jt}\right) -\sigma^{4}\right) \right\vert
\leq C,\\
& \left\vert \left( 2\sum_{j=1}^{p}\left( 1-\frac{j}{n}\right) ^{2}%
K_{jp}^{2}\operatorname*{Var}\nolimits^{2}\left( \eta_{jt}\right) \right)
^{1/2}-\sigma^{4}V_{\Delta}\left( p\right) \right\vert \leq C,\\
& \operatorname*{Var}\left( \frac{1}{p^{1/2}}\sum_{j=1}^{p}K_{jp}D_{js}%
\eta_{jt}\left\vert D_{s}\right. \right) \leq\frac{C}{p}\sum_{j=1}^{p}%
K_{jp}^{2}D_{js}^{2}.
\end{align*}
\end{lem}
\noindent\textbf{Proof of Lemma \ref{Sumvareta}.} (\ref{Cov2M}) gives for all
$j_{1}$, $j_{2}$,%
\[
\operatorname*{Cov}\left( D_{j_{1}t},D_{j_{2}t}\right) =\lim_{n\rightarrow
\infty}\operatorname*{Cov}\left( \frac{\sum_{t=j_{1}+1}^{n}u_{t}u_{t-j_{1}}%
}{\left( n-j_{1}\right) ^{1/2}},\frac{\sum_{t=j_{2}+1}^{n}u_{t}u_{t-j_{2}}%
}{\left( n-j_{2}\right) ^{1/2}}\right) =\sum_{k=-\infty}^{\infty}%
\mathbb{E}\left[ u_{0}u_{j_{1}}u_{k}u_{k+j_{2}}\right] ,
\]
see also Lemma A.2 in Shao (2011b), provided $\sum_{k=-\infty}^{\infty
}\left\vert \mathbb{E}\left[ u_{0}u_{j_{1}}u_{k}u_{k+j_{2}}\right]
\right\vert <\infty$ as shown below. (\ref{Shaolem52}) and (\ref{Vareta}) give%
\begin{equation}
\max_{j_{1},j_{2}\in\left[ 0,\overline{p}_{n}\right] }\left\vert
\operatorname*{Cov}\left( \eta_{j_{1}t},\eta_{j_{2}t}\right) -\sum
_{k=-\infty}^{\infty}\mathbb{E}\left[ u_{0}u_{j_{1}}u_{k}u_{k+j_{2}}\right]
\right\vert \leq C\Theta_{12a}\left( \overline{p}_{n}\right) .\label{Coveta}%
\end{equation}
Now the relation between cumulants and moments in Brillinger (2001) and
Theorem \ref{XW11} give absolute summability of the $4$th moments. Hence
$\Theta_{12a}\left( \overline{p}_{n}\right) =O(\overline{p}_{n}^{-6})$ gives
the first bound of the Lemma. For the second and the third bound, observe that
under the null%
\[
\left\vert \sum_{k=-\infty}^{\infty}\mathbb{E}\left[ u_{0}u_{j}u_{k}%
u_{k+j}\right] -\sigma^{4}\right\vert \leq\left\vert \mathbb{E}\left[
u_{0}^{2}u_{j}^{2}\right] -\mathbb{E}\left[ u_{0}^{2}\right] \mathbb{E}%
\left[ u_{j}^{2}\right] \right\vert +2\left\vert \sum_{k=1}^{\infty
}\mathbb{E}\left[ u_{0}u_{j}u_{k}u_{k+j}\right] \right\vert .
\]
$\left\vert \mathbb{E}\left[ u_{0}^{2}u_{j}^{2}\right] -\mathbb{E}\left[
u_{0}^{2}\right] \mathbb{E}\left[ u_{j}^{2}\right] \right\vert \leq
C\Theta_{12a}\left( j\right) =O\left( j^{-6}\right) $ and absolute
summability of the $4$th moments gives the second bound. This also gives the
fourth one since%
\begin{align*}
& \left\vert \left( 2\sum_{j=1}^{p}\left( 1-\frac{j}{n}\right) ^{2}%
K_{jp}^{2}\operatorname*{Var}\nolimits^{2}\left( \eta_{jt}\right) \right)
^{1/2}-\sigma^{4}V_{\Delta}\left( p\right) \right\vert \\
& \text{ }\leq\left( 2\sum_{j=1}^{p}\left( 1-\frac{j}{n}\right) ^{2}%
K_{jp}^{2}\left( \operatorname*{Var}\left( \eta_{jt}\right) -\sigma
^{4}\right) ^{2}\right) ^{1/2}\\
& \text{ }\leq2^{1/2}\left\vert \sum_{j=1}^{p}\left( 1-\frac{j}{n}\right)
K_{jp}\left( \operatorname*{Var}\left( \eta_{jt}\right) -\sigma^{4}\right)
\right\vert \leq C.
\end{align*}
For the last one, observe first that%
\[
\sum_{1\leq j_{1}<j_{2}\leq\overline{p}_{n}}\left\vert \operatorname*{Cov}%
\left( \eta_{j_{1}t},\eta_{j_{2}t}\right) \right\vert ^{2}\leq\left(
\sum_{1\leq j_{1}<j_{2}\leq\overline{p}_{n}}\left\vert \operatorname*{Cov}%
\left( \eta_{j_{1}t},\eta_{j_{2}t}\right) \right\vert \right) ^{2}<\infty
\]
by Theorem \ref{XW11} since the second-order cumulants are the covariances.
This
gives, for any $z=\left[ z_{1},\ldots,z_{\overline{p}_{n}}\right] ^{\prime}%
$,
\begin{align*}
\operatorname*{Var}\left( z^{\prime}\eta\right) & =z^{\prime}%
\mathbb{E}\left[ \eta\eta^{\prime}\right] z\leq\sum_{j=1}^{\overline{p}_{n}%
}\operatorname*{Var}\left( \eta_{jt}\right) z_{j}^{2}+2\sum_{1\leq
j_{1}<j_{2}\leq\overline{p}_{n}}\left\vert \operatorname*{Cov}\left(
\eta_{j_{1}t},\eta_{j_{2}t}\right) \right\vert \left\vert z_{j_{1}%
}\right\vert \left\vert z_{j_{2}}\right\vert \\
& \leq Czz^{\prime}+2\left( \sum_{1\leq j_{1}<j_{2}\leq\overline{p}_{n}%
}\left\vert \operatorname*{Cov}\left( \eta_{j_{1}t},\eta_{j_{2}t}\right)
\right\vert ^{2}\right) ^{1/2}\left( \sum_{1\leq j_{1}<j_{2}\leq\overline
{p}_{n}}z_{j_{1}}^{2}z_{j_{2}}^{2}\right) ^{1/2}\\
& \leq Cz^{\prime}z.
\end{align*}
Hence $\operatorname*{Var}\left( \sum_{j=1}^{p}K_{jp}D_{js}\eta
_{jt}\left\vert D_{s}\right. \right) \leq C\sum_{j=1}^{p}K_{jp}%
^{2}D_{js}^{2}$ since $\left\{ D_{t}\right\} $ and $\left\{
\eta_{t}\right\} $ are independent.\hfill$\square$
\subsubsection{The deviation probability of the maximum of Proposition
\ref{Selection}}
The proof is based on a smooth approximation of the maximum of real numbers
$x_{1},\ldots,x_{\overline{p}_{n}}$. Consider an increasing and three times
continuously differentiable real function $f$ with%
\begin{equation}
\lim_{x\rightarrow-\infty}f\left( x\right) =1,\quad f\left( x\right)
=x\text{ for }x\geq2,\quad\max_{i=1,2,3}\sup_{x}\left\vert f^{(i)}\left(
x\right) \right\vert <\infty.\label{f}%
\end{equation}
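One concrete choice of such an $f$, given for illustration only, is
$f\left( x\right) =1+\left( x-1\right) \psi\left( x\right) $, where $\psi$
is any three times continuously differentiable nondecreasing function with
$\psi\left( x\right) =0$ for $x\leq1$ and $\psi\left( x\right) =1$ for
$x\geq2$: then $f\left( x\right) =1$ for $x\leq1$, $f\left( x\right) =x$
for $x\geq2$, $f^{\prime}\left( x\right) =\psi\left( x\right) +\left(
x-1\right) \psi^{\prime}\left( x\right) \geq0$, and the first three
derivatives of $f$ are bounded.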
Let $e=e_{n}\rightarrow\infty$ with $\ln\left( \overline{p}_{n}\right)
/e=o(1)$. Then the inequalities $\max_{p\in\left[ 1,\overline{p}_{n}\right]
}\left\{ f\left( x_{p}\right) \right\} \leq\left( \sum_{p=1}%
^{\overline{p}_{n}}f^{e}\left( x_{p}\right) \right) ^{1/e}\leq\overline
{p}_{n}^{1/e}\max_{p\in\left[ 1,\overline{p}_{n}\right] }\left\{ f\left(
x_{p}\right) \right\} $ give%
\begin{equation}
\left( \sum_{p=1}^{\overline{p}_{n}}f^{e}\left( x_{p}\right) \right)
^{1/e}=\left( 1+O\left( \frac{\ln\overline{p}_{n}}{e}\right) \right)
\max_{p\in\left[ 1,\overline{p}_{n}\right] }\left\{ f\left( x_{p}\right)
\right\} .\label{Smthmax}%
\end{equation}
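The sandwich bound behind (\ref{Smthmax}) can be illustrated numerically; the values \texttt{f\_vals} below are hypothetical stand-ins for $f(x_{1}),\ldots,f(x_{\overline{p}_{n}})\geq1$:

```python
import math
import random

# Numerical illustration of the smooth-max approximation: the power mean
# (sum_p f^e(x_p))^{1/e} tracks max_p f(x_p) up to a relative error of
# order ln(p_bar)/e.
random.seed(0)
p_bar = 500
e = 200  # smoothing exponent, chosen so that ln(p_bar)/e is small
f_vals = [1.0 + 4.0 * random.random() for _ in range(p_bar)]  # stand-ins >= 1

m = max(f_vals)
# (sum f^e)^{1/e}, computed stably by factoring out the maximum
smooth_max = m * math.exp(math.log(sum((v / m) ** e for v in f_vals)) / e)

# sandwich inequality: max <= smooth max <= p_bar^{1/e} * max
assert m <= smooth_max <= p_bar ** (1.0 / e) * m
# hence a relative error of order ln(p_bar)/e
assert smooth_max / m - 1.0 <= 2.0 * math.log(p_bar) / e
```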
We will first find a suitable approximation for the distribution of%
\begin{equation}
\mathcal{M}=\left( \sum_{p=1}^{\overline{p}_{n}}f^{e}\left( \check{s}%
_{p}\right) \right) ^{1/e}\text{ where }\check{S}_{p}=n\sum_{j=1}^{p}%
K_{jp}\left( \frac{M_{jn}}{n}\right) ^{2},\quad\check{s}_{p}=\frac{\check
{S}_{p}-\sigma^{4}E_{\Delta}(p)}{\sigma^{4}V_{\Delta}(p)}\text{.}%
\label{Pseudomax}%
\end{equation}
Define, for $\eta=\left[ \eta_{1},\ldots,\eta_{\overline{p}_{n}}\right]
^{\prime}$ and $x\in\left[ 0,1\right] $,%
\begin{align}
M_{jt}\left( x;\eta\right) & =\sum_{s=j+1}^{t-1}D_{js}+x\eta_{j}%
+\sum_{s=t+1}^{n}\eta_{js},\quad R_{jt}\left( x;\eta\right) =\frac
{M_{jt}\left( x;\eta\right) }{n}\nonumber\\
\check{s}_{pt}\left( x;\eta\right) & =\frac{n\sum_{j=1}^{p}K_{jp}%
R_{jt}^{2}\left( x;\eta\right) -\sigma^{4}E_{\Delta}(p)}{\sigma^{4}%
V_{\Delta}\left( p\right) },\quad\Sigma_{t}\left( x;\eta\right) =f\left(
\check{s}_{pt}\left( x;\eta\right) \right) ,\nonumber\\
\mathcal{M}_{t}\left( x;\eta\right) & =\left( \sum_{p=1}^{\overline
{p}_{n}}\Sigma_{t}^{e}\left( x;\eta\right) \right) ^{\frac{1}{e}}%
,\quad\mathcal{M}_{t}\left( \eta\right) =\mathcal{M}_{t}\left(
1;\eta\right) ,\label{Meta}%
\end{align}
and%
\begin{align*}
\check{s}_{pt}^{(1)}\left( x;\eta\right) & =\frac{d\check{s}_{pt}\left(
x;\eta\right) }{dx}=\frac{2\sum_{j=1}^{p}K_{jp}\left( \sum_{s=j+1}%
^{t-1}D_{js}+x\eta_{j}+\sum_{s=t+1}^{n}\eta_{js}\right) \eta_{j}}{n\sigma
^{4}V_{\Delta}\left( p\right) },\\
\check{s}_{pt}^{(2)}\left( x;\eta\right) & =\frac{d^{2}\check{s}%
_{pt}\left( x;\eta\right) }{dx^{2}}=\frac{2\sum_{j=1}^{p}K_{jp}\eta_{j}^{2}%
}{n\sigma^{4}V_{\Delta}\left( p\right) },\\
\Sigma_{pt}^{(1)}\left( x;\eta\right) & =f^{(1)}\left( \check{s}%
_{pt}\left( x;\eta\right) \right) \check{s}_{pt}^{(1)}\left(
x;\eta\right) ,\\
\Sigma_{pt}^{(2)}\left( x;\eta\right) & =f^{(2)}\left( \check{s}%
_{pt}\left( x;\eta\right) \right) \left( \check{s}_{pt}^{(1)}\left(
x;\eta\right) \right) ^{2}+f^{(1)}\left( \check{s}_{pt}\left(
x;\eta\right) \right) \check{s}_{pt}^{(2)}\left( x;\eta\right) ,\\
\Sigma_{pt}^{(3)}\left( x;\eta\right) & =f^{(3)}\left( \check{s}%
_{pt}\left( x;\eta\right) \right) \left( \check{s}_{pt}^{(1)}\left(
x;\eta\right) \right) ^{3}+3f^{(2)}\left( \check{s}_{pt}\left(
x;\eta\right) \right) \check{s}_{pt}^{(1)}\left( x;\eta\right) \check
{s}_{pt}^{(2)}\left( x;\eta\right) .
\end{align*}
We first bound the moments of $\Sigma_{pt}^{(1)}\left( x;\eta\right) $,
$\Sigma_{pt}^{(2)}\left( x;\eta\right) $ and $\Sigma_{pt}^{(3)}\left(
x;\eta\right) $ when $\eta$ is set to $D_{t}$ or $\eta_{t}$.
\begin{lem}
Under Assumption \ref{Reg} and if $\overline{p}_{n}=O\left( n^{1/2}\right)
$, we have uniformly in $p\in\left[ 1,\overline{p}_{n}\right] $,
$x\in\left[ 0,1\right] $ and $t=1,\ldots,n$,\label{Momsig}%
\begin{align}
\max\left\{ \left\Vert \Sigma_{pt}^{(1)}\left( x;D_{t}\right) \right\Vert
_{3a},\left\Vert \Sigma_{pt}^{(1)}\left( x;\eta_{t}\right) \right\Vert
_{3a}\right\} & \leq\frac{C}{n^{1/2}},\label{Sig1}\\
\max\left\{ \left\Vert \Sigma_{pt}^{(2)}\left( x;D_{t}\right) \right\Vert
_{3a/2},\left\Vert \Sigma_{pt}^{(2)}\left( x;\eta_{t}\right) \right\Vert
_{3a/2}\right\} & \leq\frac{Cp^{1/2}}{n},\label{Sig2}\\
\max\left\{ \left\Vert \Sigma_{pt}^{(3)}\left( x;D_{t}\right) \right\Vert
_{a},\left\Vert \Sigma_{pt}^{(3)}\left( x;\eta_{t}\right) \right\Vert
_{a}\right\} & \leq\frac{Cp^{1/2}}{n^{3/2}}.\label{Sig3}%
\end{align}
\end{lem}
\noindent\textbf{Proof of Lemma \ref{Momsig}.} (\ref{f}) gives%
\begin{align}
\left\vert \Sigma_{pt}^{(1)}\left( x;\eta\right) \right\vert & \leq
C\left\vert \check{s}_{pt}^{(1)}\left( x;\eta\right) \right\vert
,\quad\left\vert \Sigma_{pt}^{(2)}\left( x;\eta\right) \right\vert \leq
C\left( \left( \check{s}_{pt}^{(1)}\left( x;\eta\right) \right)
^{2}+\left\vert \check{s}_{pt}^{(2)}\left( x;\eta\right) \right\vert
\right) ,\nonumber\\
\left\vert \Sigma_{pt}^{(3)}\left( x;\eta\right) \right\vert & \leq
C\left\vert \check{s}_{pt}^{(1)}\left( x;\eta\right) \right\vert \left(
\left( \check{s}_{pt}^{(1)}\left( x;\eta\right) \right) ^{2}+\left\vert
\check{s}_{pt}^{(2)}\left( x;\eta\right) \right\vert \right)
.\label{Boundss}%
\end{align}
(\ref{Boundss}) shows that the lemma directly follows from%
\begin{align}
\max\left\{ \left\Vert \check{s}_{pt}^{(1)}\left( x;D_{t}\right)
\right\Vert _{3a},\left\Vert \check{s}_{pt}^{(1)}\left( x;\eta_{t}\right)
\right\Vert _{3a}\right\} & \leq\frac{C}{n^{1/2}},\label{sig1}\\
\max\left\{ \left\Vert \check{s}_{pt}^{(2)}\left( x;D_{t}\right)
\right\Vert _{3a/2},\left\Vert \check{s}_{pt}^{(2)}\left( x;\eta_{t}\right)
\right\Vert _{3a/2}\right\} & \leq\frac{Cp^{1/2}}{n}.\label{sig2}%
\end{align}
(\ref{sig2}) follows directly from the triangle inequality. For (\ref{sig1}),
we first bound $\left\Vert \check{s}_{pt}^{(1)}\left( x;D_{t}\right)
\right\Vert _{3a}$. We have%
\begin{align}
& \left\Vert \check{s}_{pt}^{(1)}\left( x;D_{t}\right) \right\Vert _{3a}\leq
C\left\Vert \frac{\sum_{s=1}^{t-1}\left( \sum_{j=1}^{p}K_{jp}D_{js}%
D_{jt}\right) }{np^{1/2}}\right\Vert _{3a}\label{Momsig1}\\
& +C\left\Vert \frac{\sum_{j=1}^{p}K_{jp}D_{jt}^{2}}{np^{1/2}}\right\Vert
_{3a}+C\left\Vert \frac{\sum_{s=t+1}^{n}\left( \sum_{j=1}^{p}K_{jp}D_{jt}%
\eta_{js}\right) }{np^{1/2}}\right\Vert _{3a}.\label{Momsig2}%
\end{align}
We have, for the first item (\ref{Momsig1}),%
\begin{align*}
\text{(\ref{Momsig1})} & \leq\left\Vert \frac{\sum_{j=1}^{p}K_{jp}D_{jt}%
\sum_{s=1}^{t-\mathfrak{p}}D_{js}}{np^{1/2}}\right\Vert _{3a}+\left\Vert
\frac{\sum_{j=1}^{p}K_{jp}D_{jt}\sum_{s=t-\mathfrak{p}+1}^{t-1}D_{js}%
}{np^{1/2}}\right\Vert _{3a}\\
& \leq\left\Vert \frac{\sum_{j=1}^{p}K_{jp}D_{jt}\sum_{s=1}^{t-\mathfrak{p}}%
D_{js}}{np^{1/2}}\right\Vert _{3a}+\frac{1}{np^{1/2}}\sum_{j=1}%
^{p}\left\Vert K_{jp}D_{jt}\right\Vert _{6a}\left\Vert \sum
_{s=t-\mathfrak{p}+1}^{t-1}D_{js}\right\Vert _{6a}\\
& \leq\left\Vert \frac{\sum_{j=1}^{p}\sum_{s=1}^{t-\mathfrak{p}}K_{jp}%
D_{jt}D_{js}}{np^{1/2}}\right\Vert _{3a}+\frac{Cp^{1/2}\mathfrak{p}^{1/2}}{n},
\end{align*}
where $\mathfrak{p}\geq p$, using the Burkholder inequality for the last
term. Now let $\widetilde{D}_{jt}=D_{jt}^{t-\mathfrak{p}+1}$ be as in
(\ref{Djtp}). Since $\sum_{j=1}^{p}K_{jp}D_{js}\widetilde{D}_{jt}$ is a
martingale difference given $e_{t},\ldots,e_{t-\mathfrak{p}+1}$,
(\ref{Shaolem52}), the Burkholder and triangle inequalities, and Lemma
\ref{CrossDD} give%
\begin{align*}
& \left\Vert \frac{\sum_{j=1}^{p}\sum_{s=1}^{t-\mathfrak{p}}K_{jp}D_{js}%
D_{jt}}{np^{1/2}}\right\Vert _{3a}\\
& \text{ }\leq\left\Vert \frac{\sum_{s=1}^{t-\mathfrak{p}}\sum_{j=1}^{p}%
K_{jp}D_{js}\widetilde{D}_{jt}}{np^{1/2}}\right\Vert _{3a}+\frac{1}{np^{1/2}%
}\sum_{j=1}^{p}\left\vert K_{jp}\right\vert \left\Vert \sum_{s=1}%
^{t-\mathfrak{p}}D_{js}\right\Vert _{6a}\left\Vert D_{jt}-\widetilde{D}%
_{jt}\right\Vert _{6a}\\
& \text{ }\leq\frac{C}{np^{1/2}}\left( \sum_{s=1}^{t-\mathfrak{p}}\left\Vert
\sum_{j=1}^{p}K_{jp}D_{js}\widetilde{D}_{jt}\right\Vert _{3a}^{2}\right)
^{1/2}+C\frac{\Theta_{6a}\left( \mathfrak{p}-p\right) }{p^{1/2}}\\
& \text{ }\leq\frac{C}{np^{1/2}}\left( \left\vert t-\mathfrak{p}\right\vert
p\right) ^{1/2}+C\frac{\Theta_{6a}\left( \mathfrak{p}-p\right) }{p^{1/2}%
}\leq C\left( \frac{1}{n^{1/2}}+\frac{\Theta_{6a}\left( \mathfrak{p}%
-p\right) }{p^{1/2}}\right) .
\end{align*}
Hence substituting gives%
\begin{equation}
\left\Vert \frac{\sum_{s=1}^{t-1}\left( \sum_{j=1}^{p}K_{jp}D_{js}%
D_{jt}\right) }{np^{1/2}}\right\Vert _{3a}\leq C\left( \frac{1}{n^{1/2}%
}+\frac{p^{1/2}\mathfrak{p}^{1/2}}{n}+\frac{\Theta_{6a}\left( \mathfrak{p}%
-p\right) }{p^{1/2}}\right) .\label{Boundsig1}%
\end{equation}
For the first item in (\ref{Momsig2}), (\ref{sig2}) gives a bound $C/n^{1/2}$.
For the second item in (\ref{Momsig2}), conditional Gaussianity of the
$\left\{ \sum_{j=1}^{p}K_{jp}D_{jt}\eta_{js}\right\} $ and Lemma
\ref{Sumvareta} give%
\begin{align*}
& \left\Vert \frac{\sum_{s=t+1}^{n}\left( \sum_{j=1}^{p}K_{jp}D_{jt}\eta
_{js}\right) }{np^{1/2}}\right\Vert _{3a}\\
& \text{ }\leq\frac{C}{np^{1/2}}\left\Vert \left\{ \sum_{s=t+1}^{n}\left(
\sum_{j=1}^{p}K_{jp}^{2}D_{jt}^{2}\right) \right\} ^{1/2}\right\Vert
_{3a}\leq\frac{C}{np^{1/2}}\left\Vert \sum_{s=t+1}^{n}\left( \sum_{j=1}%
^{p}K_{jp}^{2}D_{jt}^{2}\right) \right\Vert _{3a/2}^{1/2}\\
& \text{ }\leq\frac{C}{np^{1/2}}\left( \sum_{s=t+1}^{n}\sum_{j=1}^{p}%
K_{jp}^{2}\left\Vert D_{jt}\right\Vert _{3a}^{2}\right) ^{1/2}\leq\frac
{C}{np^{1/2}}\left( \left( n-t\right) p\right) ^{1/2}\leq\frac{C}{n^{1/2}%
}.
\end{align*}
Substituting the last two bounds into (\ref{Momsig2}), and (\ref{Boundsig1})
into (\ref{Momsig1}), shows that
\begin{equation}
\max\left\{ \left\Vert \check{s}_{pt}^{(1)}\left( x;D_{t}\right)
\right\Vert _{3a},\left\Vert \check{s}_{pt}^{(1)}\left( x;\eta_{t}\right)
\right\Vert _{3a}\right\} \leq C\left( \frac{1}{n^{1/2}}+\frac
{p^{1/2}\mathfrak{p}^{1/2}}{n}+\frac{\Theta_{6a}\left( \mathfrak{p}-p\right)
}{p^{1/2}}\right) .\label{sig1a}%
\end{equation}
Observe that $\Theta_{6a}\left( \mathfrak{p}-p\right) \leq C\left(
\mathfrak{p}-p\right) ^{-11/2}$ by Assumption \ref{Reg}. Consider now%
\[
\mathfrak{p}=\max\left( 2p,\left( \frac{n}{p}\right) ^{\frac{1}{6}}\right)
\geq2p,
\]
which is such that, since $p\in\left[ 1,\overline{p}_{n}\right] $ with
$\overline{p}_{n}=O\left( n^{1/2}\right) $,%
\begin{align*}
\text{If }\left( \frac{n}{p}\right) ^{\frac{1}{6}} & \geq2p\text{, }%
\frac{\left( \mathfrak{p}-p\right) ^{-11/2}}{p^{1/2}}\asymp\frac
{p^{1/2}\mathfrak{p}^{1/2}}{n}\leq\frac{\mathfrak{p}}{n}\leq\frac{1}{n^{5/6}%
}\leq\frac{1}{n^{1/2}},\\
\text{If }\left( \frac{n}{p}\right) ^{\frac{1}{6}} & <2p\Leftrightarrow
\left( \frac{n}{2^{6}}\right) ^{\frac{1}{7}}<p\text{, }\frac{\Theta
_{6a}\left( \mathfrak{p}-p\right) }{p^{1/2}}\leq Cp^{-6}\leq\frac{C}%
{n^{1/2}}\text{, }\frac{p^{1/2}\mathfrak{p}^{1/2}}{n}\leq\frac{\overline
{p}_{n}}{n}\leq\frac{C}{n^{1/2}}.
\end{align*}
Hence (\ref{sig1a}) gives (\ref{sig1}).\hfill$\square$
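The exponent bookkeeping in the first case above can be double-checked mechanically: with $\Theta_{6a}\left( \mathfrak{p}-p\right) \leq C\left( \mathfrak{p}-p\right) ^{-11/2}$ and $\mathfrak{p}=(n/p)^{1/6}$, both rates reduce to $p^{5/12}n^{-11/12}$. The exact-rational computation below verifies this arithmetic (a side check, not part of the proof):

```python
from fractions import Fraction

# Write each rate as p^alpha * n^beta and compare exponents.
# p_frak = (n/p)^(1/6), so its exponents are p^(-1/6) n^(1/6).
pf_p, pf_n = Fraction(-1, 6), Fraction(1, 6)

# rate 1: p_frak^{-11/2} / p^{1/2}
r1 = (Fraction(-11, 2) * pf_p - Fraction(1, 2), Fraction(-11, 2) * pf_n)
# rate 2: p^{1/2} p_frak^{1/2} / n
r2 = (Fraction(1, 2) + Fraction(1, 2) * pf_p, Fraction(1, 2) * pf_n - 1)

# both equal p^{5/12} n^{-11/12}, as claimed by the \asymp in the display
assert r1 == r2 == (Fraction(5, 12), Fraction(-11, 12))
```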
\bigskip
Let $I\left( \cdot\right) $ be a three times differentiable real function
and define for $\mathcal{M}_{t}\left( \eta\right) $ as in (\ref{Meta}),%
\[
\mathcal{I}_{t}\left( \eta\right) =\mathcal{I}_{tn}\left( \eta\right)
=I\left( \mathcal{M}_{t}\left( \eta\right) \right) ,\quad\mathcal{I}%
_{t}\left( x;\eta\right) =I\left( \mathcal{M}_{t}\left( x;\eta\right)
\right) ,\quad\mathcal{I}_{t}^{(j)}\left( x;\eta\right) =\frac
{d^{j}\mathcal{I}_{t}\left( x;\eta\right) }{dx^{j}},\quad j=1,2,3.
\]
Observe that $I\left( \mathcal{M}\right) =I\left( \mathcal{M}_{n}\left(
D_{n}\right) \right) =\mathcal{I}_{n}\left( D_{n}\right) $, $\mathcal{I}%
_{t}\left( D_{t}\right) =\mathcal{I}_{t+1}\left( \eta_{t+1}\right) $, and
that $I\left( \mathcal{M}_{1}\left( \eta_{1}\right) \right) $
$=\mathcal{I}_{1}\left( \eta_{1}\right) $ is a function of the Gaussian
vectors $\eta_{1},\ldots,\eta_{n}$ only.
\begin{lem}
Let $\mathcal{M}$ and $\mathcal{M}_{1}\left( \eta_{1}\right) $ be as in
(\ref{Pseudomax}) and (\ref{Meta}). Consider a real function $I\left(
\cdot\right) $, which may depend on $n$, that is three times continuously
differentiable with $\max_{j=1,2,3}\sup_{x}\left\vert I^{(j)}\left( x\right)
\right\vert \leq C$. Then under Assumptions \ref{P}, \ref{Reg} and if
$e=O\left( \overline{p}_{n}^{1/(2a)}\right) $,\label{Lindeberg}%
\[
\left\vert \mathbb{E}\left[ I\left( \mathcal{M}\right) -I\left(
\mathcal{M}_{1}\left( \eta_{1}\right) \right) \right] \right\vert \leq
C\left( \frac{\overline{p}_{n}^{1+3/a}}{n^{1/2}}+\frac{1}{\overline{p}%
_{n}^{1-1/a}}\right) .
\]
\end{lem}
\noindent\textbf{Proof of Lemma \ref{Lindeberg}. }The proof works by
successively changing $D_{n}$ into $\eta_{n}$, $D_{n-1}$ into $\eta_{n-1}$,
and so on: the so-called Lindeberg technique described in Pollard (2002,
p.~179). This amounts to decomposing $I\left( \mathcal{M}\right) -I\left(
\mathcal{M}_{1}\left( \eta_{1}\right) \right) $ into the following sum of
differences,
\begin{align*}
& I\left( \mathcal{M}\right) -I\left( \mathcal{M}_{1}\left( \eta
_{1}\right) \right) \\
& \text{ }=\mathcal{I}_{n}\left( D_{n}\right) -\mathcal{I}_{n-1}\left(
D_{n-1}\right) +\mathcal{I}_{n-1}\left( D_{n-1}\right) -\mathcal{I}%
_{n-2}\left( D_{n-2}\right) +\cdots+\mathcal{I}_{1}\left( D_{1}\right)
-\mathcal{I}_{1}\left( \eta_{1}\right) \\
& \text{ }=\mathcal{I}_{n}\left( D_{n}\right) -\mathcal{I}_{n}\left(
\eta_{n}\right) +\mathcal{I}_{n-1}\left( D_{n-1}\right) -\mathcal{I}%
_{n-1}\left( \eta_{n-1}\right) +\cdots+\mathcal{I}_{1}\left( D_{1}\right)
-\mathcal{I}_{1}\left( \eta_{1}\right) .
\end{align*}
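The telescoping identity underlying this decomposition is exact and can be illustrated on a toy example; \texttt{g}, \texttt{D} and \texttt{eta} below are hypothetical scalar stand-ins for $I(\mathcal{M}_{t}(\cdot))$ and the blocks $D_{t}$, $\eta_{t}$:

```python
import random

# Toy illustration of the Lindeberg telescoping scheme: replacing the
# coordinates of D by those of eta one at a time (last coordinate first,
# as in the decomposition above), the differences sum exactly to
# g(D) - g(eta).
random.seed(1)
n = 8
D = [random.gauss(0.0, 1.0) for _ in range(n)]
eta = [random.gauss(0.0, 1.0) for _ in range(n)]
g = lambda v: sum(x * x for x in v) + max(v)  # arbitrary function of n scalars

# hybrid(t): the last t coordinates replaced by eta
hybrid = lambda t: D[: n - t] + eta[n - t:]

telescoped = sum(g(hybrid(t)) - g(hybrid(t + 1)) for t in range(n))
assert abs((g(D) - g(eta)) - telescoped) < 1e-12
```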
Since $\mathcal{I}_{t}(\eta)=\mathcal{I}_{t}(1;\eta)$ and $\mathcal{I}%
_{t}(0;\eta)=\mathcal{I}_{t}(0)$ does not depend on $\eta$, a third-order
Taylor expansion around $x=0$ with integral remainder gives%
\begin{align*}
& \mathbb{E}\left[ \mathcal{I}_{t}(D_{t})-\mathcal{I}_{t}(\eta_{t})\right]
=\mathbb{E}\left[ \mathcal{I}_{t}^{(1)}(0;D_{t})-\mathcal{I}_{t}^{(1)}%
(0;\eta_{t})\right] \\
& +\frac{1}{2}\mathbb{E}\left[ \mathcal{I}_{t}^{(2)}(0;D_{t})-\mathcal{I}%
_{t}^{(2)}(0;\eta_{t})\right] +\frac{1}{2}\int_{0}^{1}(1-x)^{2}%
\mathbb{E}\left[ \mathcal{I}_{t}^{(3)}(x;D_{t})-\mathcal{I}_{t}^{(3)}%
(x;\eta_{t})\right] dx.
\end{align*}
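The third-order Taylor formula with integral remainder used here, $g(1)=g(0)+g^{(1)}(0)+\tfrac{1}{2}g^{(2)}(0)+\tfrac{1}{2}\int_{0}^{1}(1-x)^{2}g^{(3)}(x)dx$, can be sanity-checked numerically on a toy smooth $g$; the functions below are hypothetical stand-ins for $x\mapsto\mathcal{I}_{t}(x;\eta)$ and its derivatives:

```python
import math

# Numerical check of the exact third-order Taylor identity with integral
# remainder, for a toy three-times differentiable g.
g = lambda x: math.sin(2.0 * x) + 0.3 * x ** 3
g1 = lambda x: 2.0 * math.cos(2.0 * x) + 0.9 * x ** 2
g2 = lambda x: -4.0 * math.sin(2.0 * x) + 1.8 * x
g3 = lambda x: -8.0 * math.cos(2.0 * x) + 1.8

# midpoint rule for (1/2) * int_0^1 (1-x)^2 g'''(x) dx
n_grid = 20000
remainder = 0.5 * sum(
    (1.0 - (k + 0.5) / n_grid) ** 2 * g3((k + 0.5) / n_grid) for k in range(n_grid)
) / n_grid

taylor = g(0.0) + g1(0.0) + 0.5 * g2(0.0) + remainder
assert abs(taylor - g(1.0)) < 1e-6
```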
Since $\left\{ D_{t}\right\} $ is a martingale difference sequence,
$\mathbb{E}\left[ \mathcal{I}_{t}^{(1)}(0;D_{t})-\mathcal{I}_{t}^{(1)}%
(0;\eta_{t})\right] =0$ due to the expression of $\mathcal{I}_{t}%
^{(1)}\left( 0;\eta\right) $ given below. Hence%
\begin{align}
& \left\vert \mathbb{E}\left[ I\left( \mathcal{M}\right) \right]
-\mathbb{E}\left[ I\left( \mathcal{M}_{1}\left( \eta_{1}\right) \right)
\right] \right\vert \leq\frac{1}{2}\left\vert \sum_{t=1}^{n}\mathbb{E}\left[
\mathcal{I}_{t}^{(2)}(0;D_{t})-\mathcal{I}_{t}^{(2)}(0;\eta_{t})\right]
\right\vert \label{Lind2}\\
& +\frac{1}{2}\int_{0}^{1}(1-x)^{2}\left\{ \sum_{t=1}^{n}\left\vert
\mathbb{E}\left[ \mathcal{I}_{t}^{(3)}(x;D_{t})-\mathcal{I}_{t}^{(3)}%
(x;\eta_{t})\right] \right\vert \right\} dx.\label{Lind3}%
\end{align}
We now compute the differentials $\mathcal{I}_{t}^{(j)}\left( x;\eta\right)
$, $j=1,2,3$. We have%
\begin{align*}
\mathcal{I}_{t}^{(1)}\left( x;\eta\right) & =I^{\prime}\left(
\mathcal{M}_{t}\left( x;\eta\right) \right) \mathcal{M}_{t}^{(1)}\left(
x;\eta\right) ,\\
\mathcal{I}_{t}^{(2)}\left( x;\eta\right) & =I^{^{\prime\prime}}\left(
\mathcal{M}_{t}\left( x;\eta\right) \right) \left( \mathcal{M}_{t}%
^{(1)}\left( x;\eta\right) \right) ^{2}+I^{\prime}\left( \mathcal{M}%
_{t}\left( x;\eta\right) \right) \mathcal{M}_{t}^{(2)}\left(
x;\eta\right) ,\\
\mathcal{I}_{t}^{(3)}\left( x;\eta\right) & =I^{^{\prime\prime\prime}%
}\left( \mathcal{M}_{t}\left( x;\eta\right) \right) \left( \mathcal{M}%
_{t}^{(1)}\left( x;\eta\right) \right) ^{3}+3I^{^{\prime\prime}}\left(
\mathcal{M}_{t}\left( x;\eta\right) \right) \mathcal{M}_{t}^{(1)}\left(
x;\eta\right) \mathcal{M}_{t}^{(2)}\left( x;\eta\right) \\
& +I^{\prime}\left( \mathcal{M}_{t}\left( x;\eta\right) \right)
\mathcal{M}_{t}^{(3)}\left( x;\eta\right) .
\end{align*}
We compute the differentials of $\mathcal{M}_{t}$. We have%
\begin{align*}
\mathcal{M}_{t}^{(1)}\left( x;\eta\right) & =\left( \sum_{p=1}%
^{\overline{p}_{n}}\Sigma_{pt}^{e}\left( x;\eta\right) \right) ^{1/e-1}%
\sum_{p=1}^{\overline{p}_{n}}\Sigma_{pt}^{e-1}\left( x;\eta\right)
\Sigma_{pt}^{(1)}\left( x;\eta\right) \\
& =\mathcal{M}_{t}^{1-e}\left( x;\eta\right) \sum_{p=1}^{\overline{p}_{n}%
}\Sigma_{pt}^{e-1}\left( x;\eta\right) \Sigma_{pt}^{(1)}\left(
x;\eta\right) ,\\
\mathcal{M}_{t}^{(2)}\left( x;\eta\right) & =\mathcal{M}_{1t}^{(2)}\left(
x;\eta\right) +\mathcal{M}_{2t}^{(2)}\left( x;\eta\right) +\mathcal{M}%
_{3t}^{(2)}\left( x;\eta\right) ,\\
\mathcal{M}_{t}^{(3)}\left( x;\eta\right) & =\mathcal{M}_{1t}^{(3)}\left(
x;\eta\right) +\cdots+\mathcal{M}_{6t}^{(3)}\left( x;\eta\right) ,
\end{align*}
where, dropping the variables $x$, $\eta$ for notational convenience,%
\begin{align*}
\mathcal{M}_{1t}^{(2)} & =\left( \frac{1}{e}-1\right) \mathcal{M}%
_{t}^{1-2e}\left( \sum_{p=1}^{\overline{p}_{n}}\Sigma_{pt}^{e-1}\Sigma
_{pt}^{(1)}\right) ^{2},\\
\mathcal{M}_{2t}^{(2)} & =\mathcal{M}_{t}^{1-e}\sum_{p=1}^{\overline{p}_{n}%
}\Sigma_{pt}^{e-1}\Sigma_{pt}^{(2)},\\
\mathcal{M}_{3t}^{(2)} & =\left( e-1\right) \mathcal{M}_{t}^{1-e}\sum
_{p=1}^{\overline{p}_{n}}\Sigma_{pt}^{e-2}\left( \Sigma_{pt}^{(1)}\right)
^{2},\\
\mathcal{M}_{1t}^{(3)} & =\left( \frac{1}{e}-1\right) \left( \frac{1}%
{e}-2\right) \mathcal{M}_{t}^{1-3e}\left( \sum_{p=1}^{\overline{p}_{n}%
}\Sigma_{pt}^{e-1}\Sigma_{pt}^{(1)}\right) ^{3},\\
\mathcal{M}_{2t}^{(3)} & =3\left( \frac{1}{e}-1\right) \mathcal{M}%
_{t}^{1-2e}\sum_{p=1}^{\overline{p}_{n}}\Sigma_{pt}^{e-1}\Sigma_{pt}^{(1)}%
\sum_{p=1}^{\overline{p}_{n}}\Sigma_{pt}^{e-1}\Sigma_{pt}^{(2)},\\
\mathcal{M}_{3t}^{(3)} & =3\left( \frac{1}{e}-1\right) \left( e-1\right)
\mathcal{M}_{t}^{1-2e}\sum_{p=1}^{\overline{p}_{n}}\Sigma_{pt}^{e-1}%
\Sigma_{pt}^{(1)}\sum_{p=1}^{\overline{p}_{n}}\Sigma_{pt}^{e-2}\left(
\Sigma_{pt}^{(1)}\right) ^{2},\\
\mathcal{M}_{4t}^{(3)} & =\left( 3e-1\right) \mathcal{M}_{t}^{1-e}%
\sum_{p=1}^{\overline{p}_{n}}\Sigma_{pt}^{e-2}\Sigma_{pt}^{(2)}\Sigma
_{pt}^{(1)},\\
\mathcal{M}_{5t}^{(3)} & =\left( e-1\right) \left( e-2\right)
\mathcal{M}_{t}^{1-e}\sum_{p=1}^{\overline{p}_{n}}\Sigma_{pt}^{e-2}\left(
\Sigma_{pt}^{(1)}\right) ^{3},\\
\mathcal{M}_{6t}^{(3)} & =\mathcal{M}_{t}^{1-e}\sum_{p=1}^{\overline{p}_{n}%
}\Sigma_{pt}^{e-1}\Sigma_{pt}^{(3)}.
\end{align*}
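As a sanity check of the first of these formulas, $\mathcal{M}_{t}^{(1)}=\mathcal{M}_{t}^{1-e}\sum_{p}\Sigma_{pt}^{e-1}\Sigma_{pt}^{(1)}$, one can compare it with a finite difference on toy smooth functions; the $\Sigma_{p}$'s below are hypothetical stand-ins:

```python
import math

# Finite-difference check of M^(1)(x) = M(x)^{1-e} * sum_p S_p(x)^{e-1} S_p'(x)
# for M(x) = (sum_p S_p(x)^e)^{1/e}, with toy smooth S_p >= 1.
e = 7
sig = [lambda x, a=a: 1.0 + a * x + 0.1 * math.sin(x + a) for a in (0.3, 0.7, 1.1)]
sig1 = [lambda x, a=a: a + 0.1 * math.cos(x + a) for a in (0.3, 0.7, 1.1)]

def M(x):
    return sum(s(x) ** e for s in sig) ** (1.0 / e)

def M1(x):  # closed form from the display above
    return M(x) ** (1 - e) * sum(s(x) ** (e - 1) * s1(x) for s, s1 in zip(sig, sig1))

h = 1e-6
x0 = 0.4
fd = (M(x0 + h) - M(x0 - h)) / (2.0 * h)  # central difference
assert abs(fd - M1(x0)) < 1e-6
```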
\medskip\textbf{The third-order term (\ref{Lind3})}. Since%
\begin{align*}
& \frac{1}{2}\int_{0}^{1}(1-x)^{2}\left\{ \sum_{t=1}^{n}\left\vert
\mathbb{E}\left[ \mathcal{I}_{t}^{(3)}(x;D_{t})-\mathcal{I}_{t}^{(3)}%
(x;\eta_{t})\right] \right\vert \right\} dx\\
& \text{ }\leq\frac{1}{2}\int_{0}^{1}(1-x)^{2}\left\{ \sum_{t=1}^{n}\left(
\left\vert \mathbb{E}\left[ \mathcal{I}_{t}^{(3)}(x;D_{t})\right]
\right\vert +\left\vert \mathbb{E}\left[ \mathcal{I}_{t}^{(3)}(x;\eta
_{t})\right] \right\vert \right) \right\} dx,
\end{align*}
it is sufficient to bound $\sum_{t=1}^{n}\left\vert \mathbb{E}\left[
\mathcal{I}_{t}^{(3)}(x)\right] \right\vert $ independently of $x$ where
$\mathcal{I}_{t}^{(3)}(x)$ stands for $\mathcal{I}_{t}^{(3)}(x;\eta_{t})$ or
$\mathcal{I}_{t}^{(3)}(x;D_{t})$. We have, dropping the dependence on $x$
for ease of notation,%
\begin{align*}
\sum_{t=1}^{n}\left\vert \mathbb{E}\left[ \mathcal{I}_{t}^{(3)}\right]
\right\vert & \leq C\sum_{t=1}^{n}\left\{ \mathbb{E}\left[ \left\vert
\mathcal{M}_{t}^{(1)}\right\vert ^{3}\right] +\mathbb{E}\left[ \left\vert
\mathcal{M}_{t}^{(1)}\mathcal{M}_{1t}^{(2)}\right\vert \right] +\mathbb{E}%
\left[ \left\vert \mathcal{M}_{t}^{(1)}\mathcal{M}_{2t}^{(2)}\right\vert
\right] \right\} \\
& +C\sum_{t=1}^{n}\left\{ \mathbb{E}\left[ \left\vert \mathcal{M}_{t}%
^{(1)}\mathcal{M}_{3t}^{(2)}\right\vert \right] +\sum_{j=1}^{6}%
\mathbb{E}\left[ \left\vert \mathcal{M}_{jt}^{(3)}\right\vert \right]
\right\} .
\end{align*}
We now study the ten items above.
\bigskip
\textbf{(1)} $\sum_{t=1}^{n}\mathbb{E}\left[ \left\vert \mathcal{M}_{t}%
^{(1)}\right\vert ^{3}\right] $. We have for $a$, $\overline{a}\geq1$ with
$1/a=1-1/\overline{a}$,%
\begin{align*}
\mathbb{E}\left[ \left\vert \mathcal{M}_{t}^{(1)}\right\vert ^{3}\right] &
=\mathbb{E}\left[ \left\vert \mathcal{M}_{t}^{1-e}\sum_{p=1}^{\overline
{p}_{n}}\Sigma_{pt}^{e-1}\Sigma_{pt}^{(1)}\right\vert ^{3}\right] \\
& \leq\sum_{p_{1},p_{2},p_{3}=1}^{\overline{p}_{n}}\mathbb{E}\left[
\left\vert \mathcal{M}_{t}^{3(1-e)}\Sigma_{p_{1}t}^{e-1}\Sigma_{p_{2}t}%
^{e-1}\Sigma_{p_{3}t}^{e-1}\Sigma_{p_{1}t}^{(1)}\Sigma_{p_{2}t}^{(1)}%
\Sigma_{p_{3}t}^{(1)}\right\vert \right] \\
& \leq\max_{p,t}\left\Vert \Sigma_{pt}^{(1)}\right\Vert _{3a}^{3}\sum
_{p_{1},p_{2},p_{3}=1}^{\overline{p}_{n}}\mathbb{E}^{1/\overline{a}}\left[
\left\vert \mathcal{M}_{t}^{3(1-e)}\Sigma_{p_{1}t}^{e-1}\Sigma_{p_{2}t}%
^{e-1}\Sigma_{p_{3}t}^{e-1}\right\vert ^{\overline{a}}\right] \\
& \leq\frac{C}{n^{3/2}}\sum_{p_{1},p_{2},p_{3}=1}^{\overline{p}_{n}}%
\mathbb{E}^{1/\overline{a}}\left[ \left\vert \mathcal{M}_{t}^{3(1-e)}%
\Sigma_{p_{1}t}^{e-1}\Sigma_{p_{2}t}^{e-1}\Sigma_{p_{3}t}^{e-1}\right\vert
^{\overline{a}}\right] ,
\end{align*}
by (\ref{Sig1}) for all $x\in\left[ 0,1\right] $. Now, since $t\mapsto
t^{1/\overline{a}}$, $t\mapsto t^{1-1/e}$ are concave and $\sum_{p=1}%
^{\overline{p}_{n}}t_{p}^{\overline{a}}\leq\left( \sum_{p=1}^{\overline
{p}_{n}}t_{p}\right) ^{\overline{a}}$, the definition of $\mathcal{M}_{t}$
gives%
\begin{align*}
& \sum_{p_{1},p_{2},p_{3}=1}^{\overline{p}_{n}}\mathbb{E}^{1/\overline{a}%
}\left[ \left\vert \mathcal{M}_{t}^{3(1-e)}\Sigma_{p_{1}t}^{e-1}\Sigma
_{p_{2}t}^{e-1}\Sigma_{p_{3}t}^{e-1}\right\vert ^{\overline{a}}\right] \\
& \text{ }=\overline{p}_{n}^{3}\times\frac{1}{\overline{p}_{n}^{3}}\sum
_{p_{1},p_{2},p_{3}=1}^{\overline{p}_{n}}\mathbb{E}^{1/\overline{a}}\left[
\left\vert \mathcal{M}_{t}^{3(1-e)}\Sigma_{p_{1}t}^{e-1}\Sigma_{p_{2}t}%
^{e-1}\Sigma_{p_{3}t}^{e-1}\right\vert ^{\overline{a}}\right] \\
& \text{ }\leq\overline{p}_{n}^{3}\left( \frac{1}{\overline{p}_{n}^{3}%
}\mathbb{E}\left[ \sum_{p_{1},p_{2},p_{3}=1}^{\overline{p}_{n}}%
\mathcal{M}_{t}^{3\overline{a}(1-e)}\Sigma_{p_{1}t}^{\overline{a}%
e(1-1/e)}\Sigma_{p_{2}t}^{\overline{a}e(1-1/e)}\Sigma_{p_{3}t}^{\overline
{a}e(1-1/e)}\right] \right) ^{1/\overline{a}}\\
& \text{ }=\overline{p}_{n}^{3}\left( \mathbb{E}\left[ \left( \sum
_{p=1}^{\overline{p}_{n}}\Sigma_{pt}^{e}\right) ^{-3\overline{a}%
(1-1/e)}\left( \frac{1}{\overline{p}_{n}}\sum_{p=1}^{\overline{p}_{n}}%
\Sigma_{pt}^{\overline{a}e(1-1/e)}\right) ^{3}\right] \right)
^{1/\overline{a}}\\
& \text{ }\leq\overline{p}_{n}^{3}\left( \mathbb{E}\left[ \left( \sum
_{p=1}^{\overline{p}_{n}}\Sigma_{pt}^{\overline{a}e}\right) ^{-3(1-1/e)}%
\left( \frac{1}{\overline{p}_{n}}\sum_{p=1}^{\overline{p}_{n}}\Sigma
_{pt}^{\overline{a}e}\right) ^{3(1-1/e)}\right] \right) ^{1/\overline{a}%
}\\
& \text{ }\leq\overline{p}_{n}^{3\left( 1-1/\overline{a}\right)
+3/(e\overline{a})}\leq C\overline{p}_{n}^{3/a},
\end{align*}
uniformly w.r.t. $t$ since $\left( \ln\overline{p}_{n}\right) /e=o(1)$.
Hence for all $x\in\left[ 0,1\right] $
\begin{equation}
\sum_{t=1}^{n}\mathbb{E}\left[ \left\vert \mathcal{M}_{t}^{(1)}\right\vert
^{3}\right] \leq C\frac{\overline{p}_{n}^{3/a}}{n^{1/2}}.\label{M13}%
\end{equation}
\medskip
\textbf{(2)} $\sum_{t=1}^{n}\mathbb{E}\left[ \left\vert \mathcal{M}_{t}%
^{(1)}\mathcal{M}_{1t}^{(2)}\right\vert \right] $. We have, since
$\mathcal{M}_{t}\geq1$,%
\begin{align*}
\mathbb{E}\left[ \left\vert \mathcal{M}_{t}^{(1)}\right\vert \left\vert
\mathcal{M}_{1t}^{(2)}\right\vert \right] & \leq C\mathbb{E}\left[
\mathcal{M}_{t}^{2-3e}\left\vert \sum_{p=2}^{\overline{p}_{n}}\Sigma
_{pt}^{e-1}\Sigma_{pt}^{(1)}\right\vert ^{3}\right] \leq C\mathbb{E}\left[
\mathcal{M}_{t}^{3-3e}\left\vert \sum_{p=2}^{\overline{p}_{n}}\Sigma
_{pt}^{e-1}\Sigma_{pt}^{(1)}\right\vert ^{3}\right] \\
& \leq C\mathbb{E}\left[ \left\vert \mathcal{M}_{t}^{(1)}\right\vert
^{3}\right] ,
\end{align*}
for all $t$, such that $\sum_{t=1}^{n}\mathbb{E}\left[ \left\vert
\mathcal{M}_{t}^{(1)}\mathcal{M}_{1t}^{(2)}\right\vert \right] \leq
C\sum_{t=1}^{n}\mathbb{E}\left[ \left\vert \mathcal{M}_{t}^{(1)}\right\vert
^{3}\right] $. Hence a bound similar to (\ref{M13}) holds.
\medskip
\textbf{(3)} $\sum_{t=1}^{n}\mathbb{E}\left[ \left\vert \mathcal{M}_{t}%
^{(1)}\mathcal{M}_{2t}^{(2)}\right\vert \right] $. Let $\overline{a}>1$ be
such that $1/\overline{a}=1-1/a$. Arguing as for \textbf{(1)} with
(\ref{Sig1}) and (\ref{Sig2}),
\begin{align*}
\mathbb{E}\left[ \left\vert \mathcal{M}_{t}^{(1)}\mathcal{M}_{2t}%
^{(2)}\right\vert \right] & \leq C\sum_{p_{1},p_{2}=1}^{\overline{p}_{n}%
}\mathbb{E}\left[ \mathcal{M}_{t}^{2(1-e)}\left\vert \Sigma_{p_{1}t}%
^{e-1}\Sigma_{p_{2}t}^{e-1}\Sigma_{p_{1}t}^{(1)}\Sigma_{p_{2}t}^{(2)}%
\right\vert \right] \\
& \leq C\max_{p,t}\left\{ \left\Vert \Sigma_{pt}^{(1)}\right\Vert
_{3a}\left\Vert \Sigma_{pt}^{(2)}\right\Vert _{3a/2}\right\} \sum
_{p_{1},p_{2}=1}^{\overline{p}_{n}}\mathbb{E}^{1/\overline{a}}\left[
\left\vert \mathcal{M}_{t}^{2(1-e)}\Sigma_{p_{1}t}^{e-1}\Sigma_{p_{2}t}%
^{e-1}\right\vert ^{\overline{a}}\right] \\
& \leq C\frac{\overline{p}_{n}^{1/2}}{n^{3/2}}\times\overline{p}_{n}^{2}%
\times\mathbb{E}^{1/\overline{a}}\left[ \left( \sum_{p=1}^{\overline{p}_{n}%
}\Sigma_{pt}^{e}\right) ^{-2\overline{a}(1-1/e)}\left( \frac{1}{\overline
{p}_{n}}\sum_{p=1}^{\overline{p}_{n}}\Sigma_{pt}^{e\overline{a}(1-1/e)}%
\right) ^{2}\right] \\
& \leq C\frac{\overline{p}_{n}^{1/2}}{n^{3/2}}\times\overline{p}_{n}^{2}%
\times\mathbb{E}^{1/\overline{a}}\left[ \left( \sum_{p=1}^{\overline{p}_{n}%
}\Sigma_{pt}^{e\overline{a}(1-1/e)}\right) ^{-2}\left( \frac{1}{\overline
{p}_{n}}\sum_{p=1}^{\overline{p}_{n}}\Sigma_{pt}^{e\overline{a}(1-1/e)}%
\right) ^{2}\right] \\
& =C\frac{\overline{p}_{n}^{1/2}}{n^{3/2}}\times\overline{p}_{n}^{2}%
\times\overline{p}_{n}^{-2/\overline{a}}=C\frac{\overline{p}_{n}^{\frac{1}%
{2}\left( 1+4/a\right) }}{n^{3/2}}.
\end{align*}
Hence, uniformly w.r.t. $x\in\left[ 0,1\right] $,%
\begin{equation}
\sum_{t=1}^{n}\mathbb{E}\left[ \left\vert \mathcal{M}_{t}^{(1)}%
\mathcal{M}_{2t}^{(2)}\right\vert \right] \leq C\frac{\overline{p}_{n}%
^{\frac{1}{2}\left( 1+4/a\right) }}{n^{1/2}}.\label{M1M22}%
\end{equation}
\medskip
\textbf{(4)} $\sum_{t=1}^{n}\mathbb{E}\left[ \left\vert \mathcal{M}_{t}%
^{(1)}\mathcal{M}_{3t}^{(2)}\right\vert \right] $. Proceeding as in
\textbf{(1)} and \textbf{(3)} gives, since $\inf_{p,t}\Sigma_{pt}\geq1$,%
\[
\mathbb{E}\left[ \left\vert \mathcal{M}_{t}^{(1)}\mathcal{M}_{3t}%
^{(2)}\right\vert \right] \leq Ce\sum_{p_{1},p_{2}=1}^{\overline{p}_{n}%
}\mathbb{E}\left[ \mathcal{M}_{t}^{2(1-e)}\left\vert \Sigma_{p_{1}t}%
^{e-1}\Sigma_{p_{2}t}^{e-1}\Sigma_{p_{1}t}^{(1)}\left( \Sigma_{p_{2}t}%
^{(1)}\right) ^{2}\right\vert \right] \leq C\frac{e\overline{p}_{n}^{2/a}%
}{n^{3/2}}\leq C\frac{\overline{p}_{n}^{3/a}}{n^{3/2}},
\]
provided $e=O(\overline{p}_{n}^{1/a})$. Hence $\sum_{t=1}^{n}\mathbb{E}\left[
\left\vert \mathcal{M}_{t}^{(1)}\mathcal{M}_{3t}^{(2)}\right\vert \right] $
can be bounded as in (\ref{M13}).
\medskip
\textbf{(5)} $\sum_{t=1}^{n}\mathbb{E}\left[ \left\vert \mathcal{M}%
_{1t}^{(3)}\right\vert \right] $ can be bounded as in (\ref{M13}) since
$\mathcal{M}_{t}\geq1$ gives $\mathbb{E}\left[ \left\vert \mathcal{M}%
_{1t}^{(3)}\right\vert \right] \leq C\mathbb{E}\left[ \mathcal{M}%
_{t}^{3(1-e)}\left\vert \sum_{p=2}^{\overline{p}_{n}}\Sigma_{pt}^{e-1}%
\Sigma_{pt}^{(1)}\right\vert ^{3}\right] .$
\medskip
\textbf{(6)} $\sum_{t=1}^{n}\mathbb{E}\left[ \left\vert \mathcal{M}%
_{2t}^{(3)}\right\vert \right] $. Arguing as in \textbf{(3)} gives that
$\sum_{t=1}^{n}\mathbb{E}\left[ \left\vert \mathcal{M}_{2t}^{(3)}\right\vert
\right] $ can be bounded as in (\ref{M1M22}).
\medskip
\textbf{(7)} $\sum_{t=1}^{n}\mathbb{E}\left[ \left\vert \mathcal{M}%
_{3t}^{(3)}\right\vert \right] $. Arguing as in \textbf{(4)} shows that this
item is negligible compared to (\ref{M13}).
\medskip
\textbf{(8)} $\sum_{t=1}^{n}\mathbb{E}\left[ \left\vert \mathcal{M}%
_{4t}^{(3)}\right\vert \right] $. Let $\overline{a}>1$ be such that
$1/\overline{a}=1-1/a$. We have, since $\inf_{p,t}\Sigma_{pt}\geq1$,
\begin{align*}
\mathbb{E}\left[ \left\vert \mathcal{M}_{4t}^{(3)}\right\vert \right] &
\leq Ce\mathbb{E}\left[ \mathcal{M}_{t}^{1-e}\sum_{p=1}^{\overline{p}_{n}%
}\left\vert \Sigma_{pt}^{e-2}\Sigma_{pt}^{(2)}\Sigma_{pt}^{(1)}\right\vert
\right] \leq Ce\sum_{p=p_{o}}^{\overline{p}_{n}}\mathbb{E}^{1/\overline{a}%
}\left[ \left( \mathcal{M}_{t}^{1-e}\Sigma_{pt}^{e-1}\right) ^{\overline
{a}}\right] \left\Vert \Sigma_{pt}^{(2)}\right\Vert _{3a/2}\left\Vert
\Sigma_{pt}^{(1)}\right\Vert _{3a}\\
& \leq C\frac{e\overline{p}_{n}^{1/2}\overline{p}_{n}^{1-1/\overline{a}}%
}{n^{3/2}}\leq C\frac{\overline{p}_{n}^{\frac{1}{2}\left( 1+4/a\right) }%
}{n^{3/2}},
\end{align*}
provided $e=O\left( \overline{p}_{n}^{1/a}\right) $. This gives a bound
similar to (\ref{M1M22}) for $\sum_{t=1}^{n}\mathbb{E}\left[ \left\vert
\mathcal{M}_{4t}^{(3)}\right\vert \right] $.
\medskip
\textbf{(9)} $\sum_{t=1}^{n}\mathbb{E}\left[ \left\vert \mathcal{M}%
_{5t}^{(3)}\right\vert \right] $ can be bounded as in (\ref{M13}) provided
$e=O(\overline{p}_{n}^{1/(2a)})$.
\medskip
\textbf{(10)} $\sum_{t=1}^{n}\mathbb{E}\left[ \left\vert \mathcal{M}%
_{6t}^{(3)}\right\vert \right] $ can be bounded as in (\ref{M1M22}).
\medskip
\noindent Hence, collecting the dominant bounds (\ref{M13}) and (\ref{M1M22})
in \textbf{(1)}-\textbf{(10)} gives%
\begin{equation}
\frac{1}{2}\int_{0}^{1}(1-x)^{2}\left\{ \sum_{t=1}^{n}\left\vert
\mathbb{E}\left[ \mathcal{I}_{t}^{(3)}(x;D_{t})-\mathcal{I}_{t}^{(3)}%
(x;\eta_{t})\right] \right\vert \right\} dx\leq C\frac{\overline{p}%
_{n}^{\frac{3}{a}}+\overline{p}_{n}^{\frac{1}{2}\left( 1+4/a\right) }%
}{n^{1/2}}\leq C\left( \frac{\overline{p}_{n}^{1+\frac{4}{a}}}{n}\right)
^{\frac{1}{2}}.\label{Lind3bnd}%
\end{equation}
\textbf{The second-order term (\ref{Lind2})}. Note that $\mathcal{I}_{t}%
^{(2)}(0;\eta)=\eta^{\prime}A_{t}\eta$ where $A_{t}$ depends upon
$D_{1},\ldots,D_{t-1}$ and $\eta_{t+1},\ldots,\eta_{n}$. In the standard
Lindeberg method, $\left\{ D_{t},t\in\left[ 1,n\right] \right\} $ and
$\left\{ \eta_{t},t\in\left[ 1,n\right] \right\} $ are both sequences of
independent variables with identical means and variances, so that the
second-order term, which can be written as a sum of terms $\mathbb{E}\left[
D_{t}^{\prime}A_{t}D_{t}\right] -\mathbb{E}\left[ \eta_{t}^{\prime}%
A_{t}\eta_{t}\right] $, is equal to $0$ in this simpler case. However this
does not hold in our setting. In this step, the second-order term is dealt
with by removing from $\mathcal{I}_{t}^{(2)}(0;\eta)$ a block $\sum
_{j=1}^{p}K_{jp}\sum_{s=t-\ell}^{t-1}D_{js}$ and by changing the $D_{jt}$
into $D_{jt}^{t-\ell+1}=\mathbb{E}\left[ D_{jt}\left\vert e_{t}%
,\ldots,e_{t-\ell+1}\right. \right] $.
Observe that $\mathcal{I}_{t}^{(2)}(0;\eta)=\mathcal{I}_{1t}^{(2)}%
(0;\eta)+\mathcal{I}_{2t}^{(2)}(0;\eta)+\mathcal{I}_{3t}^{(2)}(0;\eta
)+\mathcal{I}_{4t}^{(2)}(0;\eta)$ with, dropping the dependence upon $0$ and
$\eta$,%
\begin{align*}
\mathcal{I}_{1t}^{(2)} & =\left( \frac{1}{e}-1\right) I_{tn}^{(1)}%
\mathcal{M}_{t}^{1-2e}\left( \sum_{p=1}^{\overline{p}_{n}}\Sigma_{pt}%
^{e-1}\Sigma_{pt}^{(1)}\right) ^{2},\quad I_{tn}^{(1)}=I^{\prime}\left(
\mathcal{M}_{t}\right) ,\\
\mathcal{I}_{2t}^{(2)} & =I_{tn}^{(1)}\mathcal{M}_{t}^{1-e}\sum
_{p=1}^{\overline{p}_{n}}\Sigma_{pt}^{e-1}\Sigma_{pt}^{(2)},\quad
\mathcal{I}_{3t}^{(2)}=\left( e-1\right) I_{tn}^{(1)}\mathcal{M}_{t}%
^{1-e}\sum_{p=1}^{\overline{p}_{n}}\Sigma_{pt}^{e-1}\left( \Sigma_{pt}%
^{(1)}\right) ^{2},\\
\mathcal{I}_{4t}^{(2)} & =I^{^{\prime\prime}}\left( \mathcal{M}_{t}\right)
\left( \mathcal{M}_{t}^{1-e}\sum_{p=1}^{\overline{p}_{n}}\Sigma_{pt}%
^{e-1}\Sigma_{pt}^{(1)}\right) ^{2}.
\end{align*}
Observe $\mathcal{M}_{t}\left( 0;D_{t}\right) =\mathcal{M}_{t}\left(
0;\eta_{t}\right) $ and $\Sigma_{pt}\left( 0;D_{t}\right) =\Sigma
_{pt}\left( 0;\eta_{t}\right) $ and that these quantities do not depend upon
$\eta_{t}$ or $D_{t}$. We shall first focus on $\mathcal{I}_{1t}^{(2)}$. Let
$\ell\geq2\overline{p}_{n}$ be an integer. Define, for $y\in\left[
0,1\right] $,%
\begin{align*}
\mathfrak{S}_{pt}\left( y;\eta\right) & =\frac{2\sum_{j=1}^{p}K_{jp}\left(
\sum_{s=j+1}^{t-\ell-1}D_{js}+y\sum_{s=t-\ell}^{t-1}D_{js}+\sum_{s=t+1}%
^{n}\eta_{js}\right) \eta_{j}}{n\sigma^{4}V_{\Delta}\left( p\right) },\\
\mathfrak{S}_{pt}\left( y\right) & =\mathfrak{S}_{pt}\left( y;yD_{t}%
+\left( 1-y\right) D_{t}^{t-\ell+1}\right) ,\\
\mathfrak{T}_{pt}\left( y;\eta\right) & =\check{s}_{pt}^{(2)}(y;\eta
)=\frac{2\sum_{j=1}^{p}K_{jp}\eta_{j}^{2}}{n\sigma^{4}V_{\Delta}\left(
p\right) },\quad\mathfrak{T}_{pt}\left( y\right) =\mathfrak{T}_{pt}\left(
y;yD_{t}+\left( 1-y\right) D_{t}^{t-\ell+1}\right) ,
\end{align*}
which are such that $\mathfrak{S}_{pt}\left( 1;\eta\right) =\check{s}%
_{pt}^{(1)}(0;\eta)$, $\mathfrak{S}_{pt}\left( 1\right) =\check{s}%
_{pt}^{(1)}(0;D_{t})$, $\mathfrak{T}_{pt}\left( 1\right) =\check{s}%
_{pt}^{(2)}(0;D_{t})$. Define also%
\begin{align*}
\mathbf{M}_{jt}\left( y\right) & =\sum_{s=j+1}^{t-\ell-1}D_{js}%
+y\sum_{s=t-\ell}^{t-1}D_{js}+\sum_{s=t+1}^{n}\eta_{js},\quad\mathbf{R}%
_{jt}\left( y\right) =\frac{\mathbf{M}_{jt}\left( y\right) }{n},\\
\mathbf{s}_{pt}\left( y\right) & =\frac{n\sum_{j=1}^{p}K_{jp}%
\mathbf{R}_{jt}^{2}\left( y\right) -\sigma^{4}E_{\Delta}(p)}{\sigma
^{4}V_{\Delta}\left( p\right) },\quad\mathbf{\Sigma}_{pt}\left( y\right)
=f\left( \mathbf{s}_{pt}\left( y\right) \right) ,\\
\widetilde{\Sigma}_{pt}^{\left( 1\right) }\left( y;\eta\right) &
=f^{\left( 1\right) }\left( \mathbf{s}_{pt}\left( y\right) \right)
\mathfrak{S}_{pt}\left( y;\eta\right) ,\\
\widetilde{\Sigma}_{pt}^{\left( 2\right) }\left( y;\eta\right) &
=f^{\left( 1\right) }\left( \mathbf{s}_{pt}\left( y\right) \right)
\mathfrak{T}_{pt}\left( y;\eta\right) +f^{\left( 2\right) }\left(
\mathbf{s}_{pt}\left( y\right) \right) \left( \mathfrak{S}_{pt}\left(
y;\eta\right) \right) ^{2},\\
\widetilde{\Sigma}_{pt}^{\left( 1\right) }\left( y\right) &
=\widetilde{\Sigma}_{pt}^{\left( 1\right) }\left( y;yD_{t}+\left(
1-y\right) D_{t}^{t-\ell+1}\right) ,\\
\widetilde{\Sigma}_{pt}^{\left( 2\right) }\left( y\right) &
=\widetilde{\Sigma}_{pt}^{\left( 2\right) }\left( y;yD_{t}+\left(
1-y\right) D_{t}^{t-\ell+1}\right) ,\\
\mathfrak{M}_{t}\left( y\right) & =\left( \sum_{p=1}^{\overline{p}_{n}%
}\mathbf{\Sigma}_{pt}^{e}\left( y\right) \right) ^{\frac{1}{e}}%
,\quad\mathfrak{I}_{tn}^{(1)}\left( y\right) =I^{\prime}\left(
\mathfrak{M}_{t}\left( y\right) \right) ,
\end{align*}
and the counterpart of $\mathcal{I}_{1t}^{(2)}\left( 0;\eta_{t}\right) $ and
$\mathcal{I}_{1t}^{(2)}\left( 0;D_{t}\right) $ as%
\begin{align*}
\mathfrak{I}_{t}\left( y;\eta\right) & =\left( \frac{1}{e}-1\right)
\mathfrak{I}_{tn}^{(1)}\left( y\right) \mathfrak{M}_{t}^{1-2e}\left(
y\right) \left( \sum_{p=1}^{\overline{p}_{n}}\mathbf{\Sigma}_{pt}%
^{e-1}\left( y\right) \widetilde{\Sigma}_{pt}^{\left( 1\right) }\left(
y;\eta\right) \right) ^{2},\\
\mathfrak{I}_{t}\left( y\right) & =\mathfrak{I}_{t}\left( y;yD_{t}+\left(
1-y\right) D_{t}^{t-\ell+1}\right) .
\end{align*}
Observe that $\mathcal{I}_{1t}^{(2)}\left( 0;\eta_{t}\right) =\mathfrak{I}%
_{t}\left( 1;\eta_{t}\right) $ and $\mathcal{I}_{1t}^{(2)}\left(
0;D_{t}\right) =\mathfrak{I}_{t}\left( 1\right) $. Hence $\mathbb{E}\left[
\mathcal{I}_{1t}^{(2)}\left( 0;D_{t}\right) -\mathcal{I}_{1t}^{(2)}\left(
0;\eta_{t}\right) \right] =\mathbb{E}\left[ \mathfrak{I}_{t}\left(
1\right) -\mathfrak{I}_{t}\left( 1;\eta_{t}\right) \right] $ and%
\begin{align}
\mathbb{E}\left[ \mathcal{I}_{1t}^{(2)}\left( 0;D_{t}\right) -\mathcal{I}%
_{1t}^{(2)}\left( 0;\eta_{t}\right) \right] & =\mathbb{E}\left[
\mathfrak{I}_{t}\left( 0\right) -\mathfrak{I}_{t}\left( 0;\eta_{t}\right)
\right] \label{I12}\\
& +\int_{0}^{1}\mathbb{E}\left[ \mathfrak{I}_{t}^{(1)}\left( y\right)
-\mathfrak{I}_{t}^{(1)}\left( y;\eta_{t}\right) \right] dy,\label{IntI12}%
\end{align}
where $\mathfrak{I}_{t}^{(1)}\left( y\right) =d\mathfrak{I}_{t}\left(
y\right) /dy$ and $\mathfrak{I}_{t}^{(1)}\left( y;\eta_{t}\right)
=d\mathfrak{I}_{t}\left( y;\eta_{t}\right) /dy$.
We first consider the integral term $\int_{0}^{1}\left\vert \mathbb{E}\left[
\mathfrak{I}_{t}^{(1)}\left( y\right) \right] \right\vert dy$ from
(\ref{IntI12}) and start by computing the components $\mathfrak{I}%
_{it}^{(1)}\left( y\right) $, $i=1,\ldots,4$, of $\mathfrak{I}_{t}%
^{(1)}\left( y\right) $. Define%
\begin{align*}
\mathfrak{S}_{pt}^{(1)}\left( y\right) & =\frac{d\mathfrak{S}_{pt}\left(
y\right) }{dy}=\frac{2\sum_{j=1}^{p}K_{jp}\left( \sum_{s=t-\ell}^{t-1}%
D_{js}\right) \left( yD_{jt}+\left( 1-y\right) D_{jt}^{t-\ell+1}\right)
}{n\sigma^{4}V_{\Delta}\left( p\right) }\\
& +\frac{2\sum_{j=1}^{p}K_{jp}\left( \sum_{s=j+1}^{t-\ell-1}D_{js}%
+y\sum_{s=t-\ell}^{t-1}D_{js}+\sum_{s=t+1}^{n}\eta_{js}\right) \left(
D_{jt}-D_{jt}^{t-\ell+1}\right) }{n\sigma^{4}V_{\Delta}\left( p\right) },
\end{align*}%
\begin{align*}
\mathfrak{T}_{pt}^{(1)}\left( y\right) & =\frac{d\mathfrak{T}_{pt}\left(
y\right) }{dy}=\frac{4\sum_{j=1}^{p}K_{jp}\left( yD_{jt}+\left( 1-y\right)
D_{jt}^{t-\ell+1}\right) \left( D_{jt}-D_{jt}^{t-\ell+1}\right) }%
{n\sigma^{4}V_{\Delta}\left( p\right) },\\
\mathbf{s}_{pt}^{(1)}\left( y\right) & =\frac{d\mathbf{s}_{pt}\left(
y\right) }{dy}=\frac{2\sum_{j=1}^{p}K_{jp}\mathbf{M}_{jt}\left( y\right)
\sum_{s=t-\ell}^{t-1}D_{js}}{n\sigma^{4}V_{\Delta}\left( p\right) },\\
\widetilde{\Sigma}_{pt}^{\left( 1,1\right) }\left( y\right) &
=\frac{d\widetilde{\Sigma}_{pt}^{\left( 1\right) }\left( y\right) }%
{dy}=f^{\left( 2\right) }\left( \mathbf{s}_{pt}\left( y\right) \right)
\mathbf{s}_{pt}^{(1)}\left( y\right) \mathfrak{S}_{pt}\left( y\right)
+f^{\left( 1\right) }\left( \mathbf{s}_{pt}\left( y\right) \right)
\mathfrak{S}_{pt}^{\left( 1\right) }\left( y\right) ,
\end{align*}%
\begin{align*}
\widetilde{\Sigma}_{pt}^{\left( 2,1\right) }\left( y\right) &
=\frac{d\widetilde{\Sigma}_{pt}^{\left( 2\right) }\left( y\right) }%
{dy}=f^{\left( 2\right) }\left( \mathbf{s}_{pt}\left( y\right) \right)
\mathbf{s}_{pt}^{(1)}\left( y\right) \mathfrak{T}_{pt}\left( y\right)
+f^{\left( 1\right) }\left( \mathbf{s}_{pt}\left( y\right) \right)
\mathfrak{T}_{pt}^{\left( 1\right) }\left( y\right) \\
& +f^{\left( 3\right) }\left( \mathbf{s}_{pt}\left( y\right) \right)
\mathbf{s}_{pt}^{(1)}\left( y\right) \left( \mathfrak{S}_{pt}\left(
y\right) \right) ^{2}+2f^{\left( 2\right) }\left( \mathbf{s}_{pt}\left(
y\right) \right) \mathfrak{S}_{pt}\left( y\right) \mathfrak{S}%
_{pt}^{\left( 1\right) }\left( y\right) ,
\end{align*}%
\[
\mathfrak{I}_{tn}^{(2)}\left( y\right) =I^{\prime\prime}\left(
\mathfrak{M}_{t}\left( y\right) \right) ,
\]
and%
\begin{align*}
\mathfrak{I}_{1t}^{(1)}\left( y\right) & =\left( \frac{1}{e}-1\right)
\mathfrak{I}_{tn}^{(2)}\left( y\right) \mathfrak{M}_{t}^{2-3e}\left(
y\right) \left( \sum_{p=1}^{\overline{p}_{n}}\mathbf{\Sigma}_{pt}%
^{e-1}\left( y\right) \widetilde{\Sigma}_{pt}^{\left( 1\right) }\left(
y\right) \right) ^{2}\sum_{p=1}^{\overline{p}_{n}}\mathbf{\Sigma}_{pt}%
^{e-1}\left( y\right) \mathbf{\Sigma}_{pt}^{(1)}\left( y\right) ,\\
\mathfrak{I}_{2t}^{(1)}\left( y\right) & =\left( \frac{1}{e}-1\right)
\left( \frac{1}{e}-2\right) \mathfrak{I}_{tn}^{(1)}\left( y\right)
\mathfrak{M}_{t}^{1-3e}\left( y\right) \left( \sum_{p=1}^{\overline{p}_{n}%
}\mathbf{\Sigma}_{pt}^{e-1}\left( y\right) \widetilde{\Sigma}_{pt}^{\left(
1\right) }\left( y\right) \right) ^{2}\sum_{p=1}^{\overline{p}_{n}%
}\mathbf{\Sigma}_{pt}^{e-1}\left( y\right) \mathbf{\Sigma}_{pt}^{(1)}\left(
y\right) ,\\
\mathfrak{I}_{3t}^{(1)}\left( y\right) & =2\left( \frac{1}{e}-1\right)
\left( e-1\right) \mathfrak{I}_{tn}^{(1)}\left( y\right) \mathfrak{M}%
_{t}^{1-2e}\left( y\right) \left( \sum_{p=1}^{\overline{p}_{n}%
}\mathbf{\Sigma}_{pt}^{e-1}\left( y\right) \widetilde{\Sigma}_{pt}^{\left(
1\right) }\left( y\right) \right) \left( \sum_{p=1}^{\overline{p}_{n}%
}\mathbf{\Sigma}_{pt}^{e-2}\left( y\right) \left( \mathbf{\Sigma}%
_{pt}^{(1)}\left( y\right) \right) ^{2}\right) ,\\
\mathfrak{I}_{4t}^{(1)}\left( y\right) & =2\left( \frac{1}{e}-1\right)
\mathfrak{I}_{tn}^{(1)}\left( y\right) \mathfrak{M}_{t}^{1-2e}\left(
y\right) \left( \sum_{p=1}^{\overline{p}_{n}}\mathbf{\Sigma}_{pt}%
^{e-1}\left( y\right) \widetilde{\Sigma}_{pt}^{\left( 1\right) }\left(
y\right) \right) \left( \sum_{p=1}^{\overline{p}_{n}}\mathbf{\Sigma}%
_{pt}^{e-1}\left( y\right) \widetilde{\Sigma}_{pt}^{\left( 1,1\right)
}\left( y\right) \right) .
\end{align*}
To bound the moments of $\widetilde{\Sigma}_{pt}^{\left( 1\right) }\left(
y\right) $, $\widetilde{\Sigma}_{pt}^{\left( 1,1\right) }\left( y\right)
$ and $\mathbf{\Sigma}_{pt}^{(1)}\left( y\right) $, consider first
$\left\Vert \mathfrak{S}_{pt}\left( y\right) \right\Vert _{3a}$, $\left\Vert
\mathfrak{S}_{pt}^{(1)}\left( y\right) \right\Vert _{3a}$ and $\left\Vert
\mathbf{s}_{pt}^{(1)}\left( y\right) \right\Vert _{3a}$. For $\left\Vert
\mathfrak{S}_{pt}\left( y\right) \right\Vert _{3a}$ and $\left\Vert
\mathfrak{S}_{pt}^{(1)}\left( y\right) \right\Vert _{3a}$, (\ref{Sig1}), the
Burkholder inequality, (\ref{Shaolem52}), $\overline{p}_{n}=O\left(
n^{1/2}\right) $, $2\overline{p}_{n}\leq\ell\leq3\overline{p}_{n}$ and
$\Theta_{6a}\left( \ell-\overline{p}_{n}\right) \leq C\overline{p}_{n}^{-1}$
give%
\begin{align*}
& \left\Vert \mathfrak{S}_{pt}\left( y\right) \right\Vert _{3a}\\
& \leq\left\Vert \frac{2\sum_{j=1}^{p}K_{jp}\left( \sum_{s=j+1}^{t-\ell
-1}D_{js}+y\sum_{s=t-\ell}^{t-1}D_{js}+\sum_{s=t+1}^{n}\eta_{js}\right)
D_{jt}}{n\sigma^{4}V_{\Delta}\left( p\right) }\right\Vert _{3a}\\
& +2\left\vert 1-y\right\vert \sum_{j=1}^{p}\frac{\left\vert K_{jp}\right\vert
}{n\sigma^{4}V_{\Delta}\left( p\right) }\left\Vert \left( \sum
_{s=j+1}^{t-\ell-1}D_{js}+y\sum_{s=t-\ell}^{t-1}D_{js}+\sum_{s=t+1}^{n}%
\eta_{js}\right) \right\Vert _{6a}\left\Vert D_{jt}-D_{jt}^{t-\ell
+1}\right\Vert _{6a}\\
& \leq C\left( \frac{1}{n^{1/2}}+\frac{\overline{p}_{n}}{n}+\left(
\frac{\overline{p}_{n}}{n}\right) ^{1/2}\Theta_{6a}\left( \ell-\overline
{p}_{n}\right) \right) \leq\frac{C}{n^{1/2}},
\end{align*}
\begin{align*}
& \left\Vert \mathfrak{S}_{pt}^{(1)}\left( y\right) \right\Vert _{3a}\\
& \text{ }\leq\left\Vert \frac{2\sum_{j=1}^{p}K_{jp}\left( \sum_{s=t-\ell
}^{t-1}D_{js}\right) D_{jt}}{n\sigma^{4}V_{\Delta}\left( p\right)
}\right\Vert _{3a}\\
& \text{ }+2\left\vert 1-y\right\vert \sum_{j=1}^{p}\frac{\left\vert
K_{jp}\right\vert }{n\sigma^{4}V_{\Delta}\left( p\right) }\left\Vert
\sum_{s=t-\ell}^{t-1}D_{js}\right\Vert _{6a}\left\Vert D_{jt}-D_{jt}%
^{t-\ell+1}\right\Vert _{6a}\\
& \text{ }+2\sum_{j=1}^{p}\frac{\left\vert K_{jp}\right\vert }{n\sigma
^{4}V_{\Delta}\left( p\right) }\left\Vert \sum_{s=j+1}^{t-\ell-1}%
D_{js}+y\sum_{s=t-\ell}^{t-1}D_{js}+\sum_{s=t+1}^{n}\eta_{js}\right\Vert
_{6a}\left\Vert D_{jt}-D_{jt}^{t-\ell+1}\right\Vert _{6a}\\
& \text{ }\leq C\left( \frac{\ell^{1/2}}{n}+\frac{\ell^{1/2}\overline{p}%
_{n}^{1/2}}{n}\Theta_{6a}\left( \ell-\overline{p}_{n}\right) +\left(
\frac{\overline{p}_{n}}{n}\right) ^{1/2}\Theta_{6a}\left( \ell-\overline
{p}_{n}\right) \right) \\
& \text{ }\leq C\left( \frac{\overline{p}_{n}^{1/2}}{n}+\frac{1}{\left(
n\overline{p}_{n}\right) ^{1/2}}\right) ,
\end{align*}%
\[
\left\Vert \mathfrak{T}_{pt}\left( y\right) \right\Vert _{3a}\leq
C\frac{\overline{p}_{n}^{1/2}}{n},\quad\left\Vert \mathfrak{T}_{pt}^{\left(
1\right) }\left( y\right) \right\Vert _{3a}\leq\frac{C}{n\overline{p}_{n}}.
\]
For $\left\Vert \mathbf{s}_{pt}^{(1)}\left( y\right) \right\Vert _{3a}$,
(\ref{Sig1}), $\overline{p}_{n}=O\left( n^{1/2}\right) $ and the Burkholder
inequality give%
\begin{align*}
& \left\Vert \mathbf{s}_{pt}^{(1)}\left( y\right) \right\Vert _{3a}\\
& \text{ }\leq\left\Vert 2\sum_{s_{1}=t-\ell}^{t-1}\sum_{j=1}^{p}\frac{K_{jp}%
}{n\sigma^{4}V_{\Delta}\left( p\right) }\left( \sum_{s_{2}=j+1}^{t-\ell
-1}D_{js_{2}}\right) D_{js_{1}}\right\Vert _{3a}+\left\Vert \frac{2\sum
_{j=1}^{p}K_{jp}\left( \sum_{s=t-\ell}^{t-1}D_{js}\right) ^{2}}{n\sigma
^{4}V_{\Delta}\left( p\right) }\right\Vert _{3a}\\
& \text{ }+\left\Vert \frac{2\sum_{j=1}^{p}K_{jp}\left( \sum_{s=t-\ell}%
^{t-1}D_{js}\right) \left( \sum_{s=t+1}^{n}\eta_{js}\right) }{n\sigma
^{4}V_{\Delta}\left( p\right) }\right\Vert _{3a}\\
& \text{ }\leq C\left( \sum_{s_{1}=t-\ell}^{t-1}\left\Vert \sum_{j=1}%
^{p}\frac{K_{jp}}{n\sigma^{4}V_{\Delta}\left( p\right) }\left( \sum
_{s_{2}=j+1}^{t-\ell-1}D_{js_{2}}\right) D_{js_{1}}\right\Vert _{3a}%
^{2}\right) ^{1/2}+C\sum_{j=1}^{p}\frac{\left\vert K_{jp}\right\vert
}{n\sigma^{4}V_{\Delta}\left( p\right) }\left\Vert \sum_{s=t-\ell}%
^{t-1}D_{js}\right\Vert _{6a}^{2}\\
& \text{ }+C\left\Vert \frac{\left( \sum_{j=1}^{p}K_{jp}^{2}\left(
\sum_{s=t-\ell}^{t-1}D_{js}\right) ^{2}\right) ^{1/2}}{\left( np\right)
^{1/2}}\right\Vert _{3a}\\
& \text{ }\leq C\left( \ell^{1/2}\left( \frac{1}{n^{1/2}}+\frac{\overline
{p}_{n}}{n}\right) +\frac{\overline{p}_{n}^{1/2}\ell}{n}+\frac{\ell^{1/2}%
}{n^{1/2}}\right) \leq C\left( \frac{\overline{p}_{n}}{n}\right) ^{1/2}.
\end{align*}
These bounds and (\ref{f}) give, uniformly in $y$, $p$ and $t$,%
\begin{align*}
\left\Vert \widetilde{\Sigma}_{pt}^{\left( 1\right) }\left( y\right)
\right\Vert _{3a} & \leq\frac{C}{n^{1/2}},\quad\left\Vert \mathbf{\Sigma
}_{pt}^{\left( 1\right) }\left( y\right) \right\Vert _{3a}\leq C\left(
\frac{\overline{p}_{n}}{n}\right) ^{1/2},\\
\left\Vert \widetilde{\Sigma}_{pt}^{\left( 1,1\right) }\left( y\right)
\right\Vert _{3a/2} & \leq C\left( \frac{\overline{p}_{n}^{1/2}}{n}+\left(
\frac{\overline{p}_{n}}{n}\right) ^{3/2}+\frac{\overline{p}_{n}^{1/2}%
}{n^{3/2}}+\frac{1}{n\overline{p}_{n}^{1/2}}\right) \leq C\frac{\overline
{p}_{n}^{1/2}}{n}.
\end{align*}
Now, arguing as for the study of (\ref{Lind3}), $e=O\left( \overline{p}%
_{n}^{1/a}\right) $ gives, uniformly in $p$, $t$ and $y$,%
\[
\mathbb{E}\left[ \left\vert \mathfrak{I}_{1t}^{(1)}\left( y\right)
\right\vert \right] +\mathbb{E}\left[ \left\vert \mathfrak{I}_{2t}%
^{(1)}\left( y\right) \right\vert \right] +\mathbb{E}\left[ \left\vert
\mathfrak{I}_{4t}^{(1)}\left( y\right) \right\vert \right] \leq
C\frac{\overline{p}_{n}^{1/2+3/a}}{n^{3/2}},\quad\mathbb{E}\left[ \left\vert
\mathfrak{I}_{3t}^{(1)}\left( y\right) \right\vert \right] \leq
C\frac{\overline{p}_{n}^{1+3/a}}{n^{3/2}}.
\]
It then follows that $\sum_{t=1}^{n}\int_{0}^{1}\left\vert \mathbb{E}\left[
\mathfrak{I}_{t}^{(1)}\left( y\right) \right] \right\vert dy\leq
C\overline{p}_{n}^{1+3/a}/n^{1/2}$. Since $\sum_{t=1}^{n}\int_{0}%
^{1}\left\vert \mathbb{E}\left[ \mathfrak{I}_{t}^{(1)}\left( y;\eta
_{t}\right) \right] \right\vert dy$ satisfies a similar bound, we have for
(\ref{IntI12}),%
\[
\sum_{t=1}^{n}\left\vert \int_{0}^{1}\mathbb{E}\left[ \mathfrak{I}_{t}%
^{(1)}\left( y\right) -\mathfrak{I}_{t}^{(1)}\left( y;\eta_{t}\right)
\right] dy\right\vert \leq C\frac{\overline{p}_{n}^{1+3/a}}{n^{1/2}}.
\]
Consider now (\ref{I12}). Since $D_{jt}^{t-\ell+1}$ and $\eta_{t}$ are
independent of $\mathfrak{I}_{tn}^{(1)}\left( 0\right) $, $\mathfrak{M}%
_{t}^{1-2e}\left( 0\right) $ and $\mathbf{\Sigma}_{pt}\left( 0\right) $,
we have, using (\ref{Vareta}),%
\begin{align*}
& \mathbb{E}\left[ \mathfrak{I}_{t}\left( 0\right) -\mathfrak{I}_{t}\left(
0;\eta_{t}\right) \right] \\
& =\frac{4}{n}\mathbb{E}\left[ \left( \frac{1}{e}-1\right) \mathfrak{I}%
_{tn}^{(1)}\left( 0\right) \mathfrak{M}_{t}^{1-2e}\left( 0\right) \right.
\\
& \sum_{p_{1},p_{2}=1}^{\overline{p}_{n}}\mathbf{\Sigma}_{p_{1}t}%
^{e-1}\left( 0\right) \mathbf{\Sigma}_{p_{2}t}^{e-1}\left( 0\right)
f^{\left( 1\right) }\left( \mathbf{s}_{p_{1}t}\left( 0\right) \right)
f^{\left( 1\right) }\left( \mathbf{s}_{p_{2}t}\left( 0\right) \right)
\sum_{j_{1}=1}^{p_{1}}\sum_{j_{2}=1}^{p_{2}}\left( \mathbb{E}\left[
D_{j_{1}t}^{t-\ell+1}D_{j_{2}t}^{t-\ell+1}\right] -\mathbb{E}\left[
\eta_{j_{1}t}\eta_{j_{2}t}\right] \right) \\
& \left. \frac{K_{j_{1}p_{1}}\left( \sum_{s_{1}=j_{1}+1}^{t-\ell-1}%
D_{j_{1}s_{1}}+\sum_{s_{1}=t+1}^{n}\eta_{j_{1}s_{1}}\right) }{n^{1/2}%
\sigma^{4}V_{\Delta}\left( p_{1}\right) }\frac{K_{j_{2}p_{2}}\left(
\sum_{s_{2}=j_{2}+1}^{t-\ell-1}D_{j_{2}s_{2}}+\sum_{s_{2}=t+1}^{n}%
\eta_{j_{2}s_{2}}\right) }{n^{1/2}\sigma^{4}V_{\Delta}\left( p_{2}\right)
}\right] \\
& =0.
\end{align*}
Hence (\ref{I12}) and (\ref{IntI12}) give%
\[
\left\vert \sum_{t=1}^{n}\mathbb{E}\left[ \mathcal{I}_{1t}^{(2)}\left(
0;D_{t}\right) -\mathcal{I}_{1t}^{(2)}\left( 0;\eta_{t}\right) \right]
\right\vert \leq C\frac{\overline{p}_{n}^{1+3/a}}{n^{1/2}}.
\]
To study $\left\vert \mathbb{E}\left[ \mathcal{I}_{2t}^{(2)}\left(
0;D_{t}\right) -\mathcal{I}_{2t}^{(2)}\left( 0;\eta_{t}\right) \right]
\right\vert $, observe that, uniformly with respect to $p$, $t$ and $y$,%
\begin{align*}
\max\left( \left\Vert \widetilde{\Sigma}_{pt}^{\left( 2\right) }\left(
y\right) \right\Vert _{3a/2},\left\Vert \widetilde{\Sigma}_{pt}^{\left(
2\right) }\left( y;\eta_{t}\right) \right\Vert _{3a/2}\right) & \leq
C\frac{\overline{p}_{n}^{1/2}}{n},\\
\max\left( \left\Vert \widetilde{\Sigma}_{pt}^{\left( 2,1\right) }\left(
y\right) \right\Vert _{a},\left\Vert \widetilde{\Sigma}_{pt}^{\left(
2,1\right) }\left( y;\eta_{t}\right) \right\Vert _{a}\right) & \leq
C\left( \frac{\overline{p}_{n}}{n^{3/2}}+\frac{1}{n\overline{p}_{n}}\right)
.
\end{align*}
Arguing as for $\sum_{t=1}^{n}\mathbb{E}\left[ \mathcal{I}_{1t}^{(2)}\left(
0;D_{t}\right) -\mathcal{I}_{1t}^{(2)}\left( 0;\eta_{t}\right) \right] $
gives $\left\vert \sum_{t=1}^{n}\mathbb{E}\left[ \mathcal{I}_{2t}%
^{(2)}\left( 0;D_{t}\right) -\mathcal{I}_{2t}^{(2)}\left( 0;\eta
_{t}\right) \right] \right\vert \leq C\left( \frac{\overline{p}_{n}%
^{1+2/a}}{n^{1/2}}+\frac{\overline{p}_{n}^{1/a}}{\overline{p}_{n}}\right) $,
and provided $e=O\left( \overline{p}_{n}^{1/(2a)}\right) $%
\[
\left\vert \sum_{t=1}^{n}\mathbb{E}\left[ \mathcal{I}_{3t}^{(2)}\left(
0;D_{t}\right) -\mathcal{I}_{3t}^{(2)}\left( 0;\eta_{t}\right) \right]
\right\vert +\left\vert \sum_{t=1}^{n}\mathbb{E}\left[ \mathcal{I}_{4t}%
^{(2)}\left( 0;D_{t}\right) -\mathcal{I}_{4t}^{(2)}\left( 0;\eta
_{t}\right) \right] \right\vert \leq C\frac{\overline{p}_{n}^{1+3/a}%
}{n^{1/2}}.
\]
It then follows%
\begin{equation}
\left\vert \sum_{t=1}^{n}\mathbb{E}\left[ \mathcal{I}_{t}^{(2)}\left(
0;D_{t}\right) -\mathcal{I}_{t}^{(2)}\left( 0;\eta_{t}\right) \right]
\right\vert \leq C\left( \frac{\overline{p}_{n}^{1+3/a}}{n^{1/2}}+\frac
{1}{\overline{p}_{n}^{1-1/a}}\right) .\label{Lind2bnd}%
\end{equation}
Substituting (\ref{Lind3bnd}) and (\ref{Lind2bnd}) into (\ref{Lind3}) and
(\ref{Lind2}) shows that the Lemma is proved.\hfill$\square$
\subsubsection{End of the proof of Proposition \ref{Selection}}
The rest of the proof is divided into three steps.\smallskip
\textbf{Step 1: Martingale approximation}. Let $\widetilde{S}_{p}$ and
$\check{S}_{p}$ be as in (\ref{TildeR}) and (\ref{Pseudomax}) respectively.
Let $\mathfrak{a}=4a/3$. The Cauchy-Schwarz inequality gives%
\begin{align*}
\left\vert \check{S}_{p}-\widetilde{S}_{p}\right\vert & \leq\sum_{j=1}%
^{p}\left( \left\vert K_{jp}\right\vert \frac{1}{n^{1/2}}\left\vert
M_{jn}-\sum_{t=j+1}^{n}u_{t}u_{t-j}\right\vert \times\frac{1}{n^{1/2}%
}\left\vert M_{jn}+\sum_{t=j+1}^{n}u_{t}u_{t-j}\right\vert \right) \\
& \leq C\left( \sum_{j=1}^{p}\frac{1}{n}\left( M_{jn}-\sum_{t=j+1}^{n}%
u_{t}u_{t-j}\right) ^{2}\right) ^{1/2}\left( \sum_{j=1}^{p}\frac{1}%
{n}\left( M_{jn}+\sum_{t=j+1}^{n}u_{t}u_{t-j}\right) ^{2}\right) ^{1/2}.
\end{align*}
Hence%
\begin{align*}
& \left\Vert \check{S}_{p}-\widetilde{S}_{p}\right\Vert _{\mathfrak{a}/2}\\
& \text{ }\leq C\mathbb{E}^{\frac{1}{\mathfrak{a}}}\left[ \left( \sum
_{j=1}^{p}\frac{1}{n}\left( M_{jn}-\sum_{t=j+1}^{n}u_{t}u_{t-j}\right)
^{2}\right) ^{\frac{\mathfrak{a}}{2}}\right] \mathbb{E}^{\frac
{1}{\mathfrak{a}}}\left[ \left( \sum_{j=1}^{p}\frac{1}{n}\left( M_{jn}%
+\sum_{t=j+1}^{n}u_{t}u_{t-j}\right) ^{2}\right) ^{\frac{\mathfrak{a}}{2}%
}\right] .
\end{align*}
Observe now that (\ref{Cov2M}) gives%
\begin{align*}
& \mathbb{E}^{\frac{1}{\mathfrak{a}}}\left[ \left( \sum_{j=1}^{p}\frac{1}%
{n}\left( M_{jn}-\sum_{t=j+1}^{n}u_{t}u_{t-j}\right) ^{2}\right)
^{\frac{\mathfrak{a}}{2}}\right] \\
& \text{ }\leq\left( \frac{1}{n}\sum_{j=1}^{p}\mathbb{E}^{\frac
{2}{\mathfrak{a}}}\left[ \left\vert M_{jn}-\sum_{t=j+1}^{n}u_{t}%
u_{t-j}\right\vert ^{\mathfrak{a}}\right] \right) ^{1/2}\leq C\left(
\frac{p}{n}\right) ^{1/2}.
\end{align*}
Since the Burkholder inequality and $\max_{j}\mathbb{E}\left[ \left\vert
D_{jt}\right\vert ^{\mathfrak{a}}\right] <\infty$ give $\max_{j\in\left[
1,\overline{p}_{n}\right] }\mathbb{E}^{1/\mathfrak{a}}\left[ \left\vert
M_{jn}\right\vert ^{\mathfrak{a}}\right] \leq Cn^{1/2}$, we also have%
\begin{align*}
& \mathbb{E}^{\frac{1}{\mathfrak{a}}}\left[ \left( \sum_{j=1}^{p}\frac{1}%
{n}\left( M_{jn}+\sum_{t=j+1}^{n}u_{t}u_{t-j}\right) ^{2}\right)
^{\frac{\mathfrak{a}}{2}}\right] \\
& \leq\left( \frac{1}{n}\sum_{j=1}^{p}\left( \mathbb{E}^{\frac
{1}{\mathfrak{a}}}\left[ \left\vert M_{jn}+\sum_{t=j+1}^{n}u_{t}%
u_{t-j}\right\vert ^{\mathfrak{a}}\right] \right) ^{2}\right) ^{1/2}\\
& \leq\left( \frac{1}{n}\sum_{j=1}^{p}\left( 2\mathbb{E}^{\frac
{1}{\mathfrak{a}}}\left[ \left\vert M_{jn}\right\vert ^{\mathfrak{a}}\right]
+\mathbb{E}^{\frac{1}{\mathfrak{a}}}\left[ \left\vert \sum_{t=j+1}^{n}%
u_{t}u_{t-j}-M_{jn}\right\vert ^{\mathfrak{a}}\right] \right) ^{2}\right)
^{1/2}\\
& \text{ }\leq\left( \frac{p(Cn^{1/2}+C)^{2}}{n}\right) ^{1/2}\leq Cp^{1/2}.
\end{align*}
It then follows that $\left\Vert \check{S}_{p}-\widetilde{S}_{p}\right\Vert
_{\mathfrak{a}/2}\leq Cp/n^{1/2}$ and then $\max_{p\in\left[ 1,\overline
{p}_{n}\right] }\mathbb{E}\left[ \left\vert \left( \check{S}_{p}%
-\widetilde{S}_{p}\right) /p^{1/2}\right\vert ^{\mathfrak{a}/2}\right] \leq
C\left( \overline{p}_{n}/n\right) ^{\mathfrak{a}/4}$. Hence the Markov
inequality gives%
\begin{align*}
& \mathbb{P}\left( \max_{p\in\left[ 1,\overline{p}_{n}\right] }\left\vert
\frac{\check{S}_{p}-\widetilde{S}_{p}}{p^{1/2}}\right\vert \geq t\right)
\leq\sum_{p=1}^{\overline{p}_{n}}\mathbb{P}\left( \left\vert \frac{\check
{S}_{p}-\widetilde{S}_{p}}{p^{1/2}}\right\vert \geq t\right) \\
& \text{ }\leq\frac{\overline{p}_{n}}{t^{\mathfrak{a}/2}}\max_{p\in\left[
1,\overline{p}_{n}\right] }\mathbb{E}\left[ \left\vert \frac{\check{S}%
_{p}-\widetilde{S}_{p}}{p^{1/2}}\right\vert ^{\frac{\mathfrak{a}}{2}}\right]
\leq\frac{C}{t^{\mathfrak{a}/2}}\left( \frac{\overline{p}_{n}^{1+\frac{4}{\mathfrak{a}}%
}}{n}\right) ^{\mathfrak{a}/4},
\end{align*}
and $\overline{p}_{n}=o\left( n^{1/\left( 2\left( 1+4/\mathfrak{a}\right)
\right) }\right) $ gives%
\begin{equation}
\max_{p\in\left[ 1,\overline{p}_{n}\right] }\left\vert \frac{\check{S}%
_{p}-\widetilde{S}_{p}}{p^{1/2}}\right\vert =o_{\mathbb{P}}%
(1).\label{Check2tilde}%
\end{equation}
\textbf{Step 2: some Gaussian approximations}. Let $\gamma_{n}^{\prime}%
=\gamma_{n}\left( 1+\epsilon/2\right) /\left( 1+\epsilon\right) $.
(\ref{Gam}) gives $\gamma_{n}\geq\gamma_{n}^{\prime}\geq\widetilde{\gamma}%
_{n}=\left( 2\ln\ln\overline{p}_{n}\right) ^{1/2}\left( 1+\epsilon
/3\right) $. Consider a three times continuously differentiable function
$\iota\left( x\right) $ with $\max_{j=1,2,3}\sup_{x}\left\vert
\iota^{\left( j\right) }\left( x\right) \right\vert <\infty$ and
$\mathbb{I}\left( x\geq0\right) \leq\iota\left( x\right) \leq
\mathbb{I}\left( x\geq-\epsilon\right) $. Let $\mathcal{I}\left( x\right)
=\iota\left( x-\gamma_{n}^{\prime}\right) $. Let $\check{s}_{p}$ be as in
(\ref{Pseudomax}). Then Lemma \ref{Lindeberg} with $e=\overline{p}%
_{n}^{1/(2a)}$, (\ref{f}) and (\ref{Pseudomax}), and Assumption \ref{Reg} give%
\begin{align*}
& \mathbb{P}\left( \max_{p\in\left[ 2,\overline{p}_{n}\right] }\left\{
\check{s}_{p}\right\} \geq\gamma_{n}^{\prime}\right) \leq\mathbb{P}\left(
\mathcal{M}\geq\gamma_{n}^{\prime}\right) \leq\mathbb{E}\left[
\mathcal{I}\left( \mathcal{M}\right) \right] \\
& \text{ }\leq\mathbb{E}\left[ \mathcal{I}\left( \mathcal{M}_{1}\left(
\eta_{1}\right) \right) \right] +o\left( 1\right) \leq\mathbb{P}\left(
\mathcal{M}_{1}\left( \eta_{1}\right) \geq\gamma_{n}^{\prime}-\epsilon
\right) +o\left( 1\right) .
\end{align*}
We now look for a more explicit expression for the RHS. Recall that
$\mathcal{M}_{1}\left( \eta_{1}\right) =\left( \sum_{p=1}^{\overline{p}%
_{n}}f^{e}\left( \check{s}_{p1}\left( 1;\eta_{1}\right) \right) \right)
^{1/e}$. Consider $\Omega\left( p\right) =\left[ \omega_{1},\ldots
,\omega_{p}\right] ^{\prime}$ where the $\omega_{j}$'s are i.i.d. standard
normal variables,
\begin{align*}
\mathcal{K}\left( p\right) & =\operatorname*{Diag}\left( \left(
1-j/n\right) K_{jp},j=1,\ldots,p\right) ,\\
\mathcal{C}_{\eta}\left( p\right) & =\left[ \operatorname*{Cov}\left(
\eta_{j_{1}t},\eta_{j_{2}t}\right) ,j_{1},j_{2}=1,\ldots,p\right] ,\\
\mathcal{V}_{\eta}\left( p\right) & =\mathcal{C}_{\eta}^{1/2}\left(
p\right) \mathcal{K}\left( p\right) \mathcal{C}_{\eta}^{1/2}\left(
p\right) ,
\end{align*}
and $\mathcal{D}_{\eta}\left( p\right) =\operatorname*{Diag}\left( \left(
1-j/n\right) K_{jp}\operatorname*{Var}\left( \eta_{jt}\right)
,j=1,\ldots,p\right) $ the $p\times p$ diagonal matrix obtained from the
diagonal entries of $\mathcal{V}_{\eta}\left( p\right) $. Then the
$\check{s}_{p1}\left( 1;\eta_{1}\right) $, $p=1,\ldots,\overline{p}_{n}$,
have the same joint distribution as%
\[
\tilde{s}_{p}=\frac{\Omega\left( p\right) ^{\prime}\mathcal{V}_{\eta}\left(
p\right) \Omega\left( p\right) -\sigma^{4}E_{\Delta}\left( p\right)
}{\sigma^{4}V_{\Delta}\left( p\right) },\quad p=1,\ldots,\overline{p}_{n},
\]
so that $\mathcal{M}_{1}\left( \eta_{1}\right) $ and $\widetilde{\mathcal{M}%
}=\left( \sum_{p=1}^{\overline{p}_{n}}f^{e}\left( \tilde{s}_{p}\right)
\right) ^{1/e}$ have the same distribution, and then%
\[
\mathbb{P}\left( \max_{p\in\left[ 2,\overline{p}_{n}\right] }\left\{
\check{s}_{p}\right\} \geq\gamma_{n}^{\prime}\right) \leq\mathbb{P}\left(
\widetilde{\mathcal{M}}\geq\gamma_{n}^{\prime}-\epsilon\right) +o\left(
1\right) .
\]
Define now%
\[
\bar{s}_{p}=\frac{\Omega\left( p\right) ^{\prime}\mathcal{D}_{\eta}\left(
p\right) \Omega\left( p\right) -\sigma^{4}E_{\Delta}\left( p\right)
}{\sigma^{4}V_{\Delta}\left( p\right) }=\frac{\sum_{j=1}^{p}\left(
1-\frac{j}{n}\right) K_{jp}\operatorname*{Var}\left( \eta_{jt}\right)
\omega_{j}^{2}-\sigma^{4}E_{\Delta}\left( p\right) }{\sigma^{4}V_{\Delta
}\left( p\right) }.
\]
Then uniformly in $p=1,\ldots,\overline{p}_{n}$,
\begin{align*}
& \left\vert \tilde{s}_{p}-\bar{s}_{p}\right\vert =\left\vert \frac
{\Omega\left( p\right) ^{\prime}\left( \mathcal{V}_{\eta}\left( p\right)
-\mathcal{D}_{\eta}\left( p\right) \right) \Omega\left( p\right) }%
{\sigma^{4}V_{\Delta}\left( p\right) }\right\vert \\
& \text{ }\leq C\sum_{1\leq j_{1}\neq j_{2}\leq\overline{p}_{n}}\left\vert
\operatorname*{Cov}\left( \eta_{j_{1}t},\eta_{j_{2}t}\right) \right\vert
\left\vert \omega_{j_{1}}\right\vert \left\vert \omega_{j_{2}}\right\vert
=O_{\mathbb{P}}\left( 1\right) ,
\end{align*}
by Lemma \ref{Sumvareta}. Hence since $f\left( x\right) \leq2\vee x$ by
(\ref{f}) and using (\ref{Smthmax}),%
\begin{align*}
\widetilde{\mathcal{M}} & \leq\left( 1+O\left( \frac{\ln n}{\overline
{p}_{n}^{1/(2a)}}\right) \right) \max_{p\in\left[ 2,\overline{p}%
_{n}\right] }\left\{ 2\vee\tilde{s}_{p}\right\} \leq\left( 1+O\left(
\frac{\ln n}{\overline{p}_{n}^{1/(2a)}}\right) \right) 2\vee\max
_{p\in\left[ 2,\overline{p}_{n}\right] }\left\{ \tilde{s}_{p}\right\} \\
& \leq\left( 1+O\left( \frac{\ln n}{n^{1/(8a)}}\right) \right) \max
_{p\in\left[ 2,\overline{p}_{n}\right] }\left\{ \bar{s}_{p}\right\}
+O_{\mathbb{P}}\left( 1\right) .
\end{align*}
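The passage from $\widetilde{\mathcal{M}}$ to the maximum rests on the
elementary smooth-maximum bounds for the $L^{e}$ norm: for positive
$x_{1},\ldots,x_{P}$, $\max_{p}x_{p}\leq\left( \sum_{p=1}^{P}x_{p}%
^{e}\right) ^{1/e}\leq P^{1/e}\max_{p}x_{p}$, so the relative gap is of order
$\ln P/e$. A quick numerical sketch with illustrative values of $P$ and $e$
(not those used in the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
P, e = 1000, 50.0
x = np.abs(rng.standard_normal(P)) + 2.0   # mimics the terms 2 v s_p >= 2

# Numerically stable L^e norm: factor out the maximum before exponentiating.
m = x.max()
smooth_max = m * (np.sum((x / m) ** e)) ** (1.0 / e)

# max_p x_p <= smooth_max <= P^(1/e) * max_p x_p,
# i.e. a relative gap of exp(ln P / e) - 1, here about 15%.
print(m, smooth_max, m * P ** (1.0 / e))
```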
Define now%
\[
\mathsf{V}_{\Delta}\left( p\right) =\left( 2\sum_{j=1}^{p}K_{jp}%
^{2}\right) ^{1/2},\text{\quad}\mathsf{s}_{p}=\frac{\sum_{j=1}^{p}%
K_{jp}\left( \omega_{j}^{2}-1\right) }{\mathsf{V}_{\Delta}\left( p\right)
},
\]
which is such that%
\begin{align*}
& \left\vert \bar{s}_{p}-\mathsf{s}_{p}\right\vert \leq\left\vert
\mathfrak{e}_{1p}\right\vert +\left\vert \mathfrak{e}_{2p}\right\vert \text{
where}\\
& \text{ }\mathfrak{e}_{1p}=\left( \frac{\mathsf{V}_{\Delta}\left( p\right)
}{V_{\Delta}\left( p\right) }-1\right) \mathsf{s}_{p},\\
& \text{ }\mathfrak{e}_{2p}=\frac{\sum_{j=1}^{p}\left\{ \left( 1-\frac{j}%
{n}\right) \operatorname*{Var}\left( \eta_{jt}\right) -\sigma^{4}\right\}
K_{jp}\omega_{j}^{2}-\sigma^{4}\sum_{j=1}^{p}\frac{j}{n}K_{jp}}{\sigma
^{4}V_{\Delta}\left( p\right) }.
\end{align*}
Since $K^{\prime}\left( \cdot\right) $ is continuous on $\left[ 0,1\right]
$, the Weierstrass Theorem implies that it can be uniformly approximated by a
sequence of polynomial functions. Hence (\ref{kjdifcov}), Assumption
\ref{Kernel} and the LIL for weighted sums in Li and Tomkins (1996) give%
\[
\limsup_{p\rightarrow\infty}\frac{\left\vert \mathsf{V}_{\Delta}\left(
p\right) \mathsf{s}_{p}\right\vert }{p^{1/2}\left( 2\ln\ln p\right) ^{1/2}%
}\leq\left( 2\int K^{4}\left( t\right) dt\right) ^{1/2}\text{, almost
surely.}%
\]
Since, under Assumption \ref{Kernel}, $\mathsf{V}_{\Delta}\left( p\right)
/p^{1/2}\rightarrow\left( 2\int K^{4}\left( t\right) dt\right) ^{1/2}$ by
convergence of Riemann sums, this gives%
\begin{equation}
\sup_{p\in\left[ 2,\overline{p}_{n}\right] }\left\vert \mathsf{s}%
_{p}\right\vert \leq\left( 2\ln\ln\overline{p}_{n}\right) ^{1/2}\left(
1+o_{\mathbb{P}}\left( 1\right) \right) .\label{LIL}%
\end{equation}
Observe also that Lemma \ref{Ordersums}-(ii), $\overline{p}_{n}=o\left(
n^{1/2}\right) $, and Assumption \ref{Kernel} give, uniformly in $p\in\left[
1,\overline{p}_{n}\right] $,
\[
\left\vert \frac{\mathsf{V}_{\Delta}\left( p\right) }{V_{\Delta}\left(
p\right) }-1\right\vert \leq C\left( \frac{1}{p}\sum_{j=1}^{p}\frac{j^{2}%
}{n^{2}}K_{jp}^{2}\right) ^{1/2}=o\left( \frac{1}{n^{1/2}}\right) .
\]
Hence%
\[
\max_{p\in\left[ 2,\overline{p}_{n}\right] }\left\vert \mathfrak{e}%
_{1p}\right\vert =o_{\mathbb{P}}\left( \left( \frac{2\ln\ln\overline{p}_{n}%
}{n}\right) ^{1/2}\right) =o_{\mathbb{P}}\left( 1\right) .
\]
Now, for $\max_{p\in\left[ 2,\overline{p}_{n}\right] }\left\vert
\mathfrak{e}_{2p}\right\vert $, we have by Lemmas \ref{Ordersums}-(ii) and
\ref{Sumvareta}, $\overline{p}_{n}=o\left( n^{1/2}\right) $, and Assumption
\ref{Kernel},%
\[
\max_{p\in\left[ 2,\overline{p}_{n}\right] }\left\vert \mathfrak{e}%
_{2p}\right\vert \leq C\left\{ \sum_{j=1}^{\overline{p}_{n}}\left\vert
\operatorname*{Var}\left( \eta_{jt}\right) -\sigma^{4}\right\vert \omega
_{j}^{2}+\frac{1}{n}\sum_{j=1}^{\overline{p}_{n}}j\omega_{j}^{2}%
+\frac{\overline{p}_{n}^{3/2}}{n}\right\} =O_{\mathbb{P}}\left( 1\right)
+O_{\mathbb{P}}\left( \frac{\overline{p}_{n}^{2}}{n}\right) =O_{\mathbb{P}%
}\left( 1\right) .
\]
Hence $\max_{p\in\left[ 2,\overline{p}_{n}\right] }\left\vert \bar{s}%
_{p}-\mathsf{s}_{p}\right\vert =O_{\mathbb{P}}\left( 1\right) $.
Substituting into the bounds above for $\mathbb{P}\left( \max_{p\in\left[
2,\overline{p}_{n}\right] }\left\{ \check{s}_{p}\right\} \geq\gamma
_{n}^{\prime}\right) $ and $\widetilde{\mathcal{M}}$ then gives, by
(\ref{Gam}), $\gamma_{n}^{\prime}=\gamma_{n}\left( 1+\epsilon/2\right)
/\left( 1+\epsilon\right) $, $\gamma_{n}^{\prime}\geq\left( 2\ln
\ln\overline{p}_{n}\right) ^{1/2}\left( 1+\epsilon/3\right) $ and
(\ref{LIL}),%
\begin{align}
\mathbb{P}\left( \max_{p\in\left[ 2,\overline{p}_{n}\right] }\left\{
\check{s}_{p}\right\} \geq\gamma_{n}^{\prime}\right) & =\mathbb{P}\left(
\left( 1+O\left( \frac{\ln n}{n^{1/(8a)}}\right) \right) \max_{p\in\left[
2,\overline{p}_{n}\right] }\left\{ \mathsf{s}_{p}\right\} +O_{\mathbb{P}%
}\left( 1\right) \geq\gamma_{n}^{\prime}-\epsilon\right) +o\left( 1\right)
\nonumber\\
& \leq\mathbb{P}\left( \max_{p\in\left[ 2,\overline{p}_{n}\right] }\left\{
\mathsf{s}_{p}\right\} \geq\left( 2\ln\ln\overline{p}_{n}\right)
^{1/2}\left( 1+\epsilon/3\right) \right) +o\left( 1\right) \nonumber\\
& =o\left( 1\right) .\label{DE56}%
\end{align}
\textbf{Step 3: Conclusion}. Propositions \ref{Esti} and \ref{Covesti}, Lemma
\ref{Ordersums}, $\overline{p}_{n}=O\left( n^{1/2}\right) $, the
expression of $\check{S}_{p}$ and $\check{s}_{p}$ in (\ref{Pseudomax}), and
(\ref{Check2tilde}) give%
\begin{align*}
& \max_{p\in\left[ 2,\overline{p}_{n}\right] }\frac{\left( \widehat{S}%
_{p}-\widehat{S}_{1}\right) /\widehat{R}_{0}^{2}-E_{\Delta}\left( p\right)
}{V_{\Delta}\left( p\right) }=\max_{p\in\left[ 2,\overline{p}_{n}\right]
}\frac{\left( \widehat{S}_{p}-\widehat{S}_{1}\right) -\widehat{R}_{0}%
^{2}E_{\Delta}\left( p\right) }{\widehat{R}_{0}^{2}V_{\Delta}\left(
p\right) }\\
& \text{ }=\left( 1+o_{\mathbb{P}}\left( 1\right) \right) \max
_{p\in\left[ 2,\overline{p}_{n}\right] }\frac{\left( \widetilde{S}%
_{p}-\widetilde{S}_{1}\right) -R_{0}^{2}E_{\Delta}\left( p\right) }%
{R_{0}^{2}V_{\Delta}\left( p\right) }+O_{\mathbb{P}}\left( 1+\overline
{p}_{n}^{1/2}\left( \widehat{R}_{0}^{2}-R_{0}^{2}\right) \right) \\
& \text{ }=\left( 1+o_{\mathbb{P}}\left( 1\right) \right) \max
_{p\in\left[ 2,\overline{p}_{n}\right] }\left\{ \check{s}_{p}\right\}
+O_{\mathbb{P}}\left( 1\right) .
\end{align*}
Hence (\ref{DE56}) gives, since $\gamma_{n}-\gamma_{n}^{\prime}\rightarrow
+\infty$,%
\[
\mathbb{P}\left( \max_{p\in\left[ 2,\overline{p}_{n}\right] }\frac{\left(
\widehat{S}_{p}-\widehat{S}_{1}\right) /\widehat{R}_{0}^{2}-E_{\Delta}\left(
p\right) }{V_{\Delta}\left( p\right) }\geq\gamma_{n}\right) =\mathbb{P}%
\left( \max_{p\in\left[ 2,\overline{p}_{n}\right] }\left\{ \check{s}%
_{p}\right\} \geq\gamma_{n}^{\prime}\right) +o\left( 1\right) =o\left(
1\right) .
\]
This ends the proof of the Proposition.\hspace*{\fill}$\square$
\subsection{Proof of Propositions \ref{MeanH1} and \ref{VarH1}}
\noindent When studying the mean and variance of $\widetilde{S}_{p}$, we make
use of Theorem 2.3.2 in Brillinger (2001), which implies in particular that,
for any real zero-mean random variables $Z_{1},\ldots,Z_{4}$,
\begin{align}
& \operatorname*{Cov}\left( Z_{1}Z_{2},Z_{3}Z_{4}\right)
=\operatorname*{Cov}(Z_{1},Z_{3})\operatorname*{Cov}(Z_{2},Z_{4}%
)+\operatorname*{Cov}(Z_{1},Z_{4})\operatorname*{Cov}(Z_{2},Z_{3})\nonumber\\
& +\operatorname*{Cum}\left( Z_{1},Z_{2},Z_{3},Z_{4}\right) .\label{Brill}%
\end{align}
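As a numerical sanity check of identity (\ref{Brill}) (illustrative only, not part of the proof; the sample construction below is our own choice), one can treat a finite sample as the full population, so that all expectations are exact averages and the identity holds up to floating-point rounding:

```python
import math
import random

# Build four dependent zero-mean "population" variables: the seed, sample
# size and dependence structure are arbitrary illustrative choices.
random.seed(0)
N = 500
raw = [[random.gauss(0.0, 1.0) for _ in range(4)] for _ in range(N)]
Z = [[r[q] + 0.5 * r[(q + 1) % 4] for q in range(4)] for r in raw]
cols = [list(c) for c in zip(*Z)]
cols = [[x - sum(c) / N for x in c] for c in cols]  # centre each variable

def E(idx):
    """Exact population mean of the product of the variables indexed by idx."""
    return sum(math.prod(cols[i][t] for i in idx) for t in range(N)) / N

def cov(a, b):
    return E([a, b])  # variables are centred, so Cov = E[Z_a Z_b]

# Fourth joint cumulant of zero-mean variables.
cum4 = (E([0, 1, 2, 3]) - E([0, 1]) * E([2, 3])
        - E([0, 2]) * E([1, 3]) - E([0, 3]) * E([1, 2]))
lhs = E([0, 1, 2, 3]) - E([0, 1]) * E([2, 3])  # Cov(Z1*Z2, Z3*Z4)
rhs = cov(0, 2) * cov(1, 3) + cov(0, 3) * cov(1, 2) + cum4
print(abs(lhs - rhs))  # zero up to rounding
```

The two sides agree identically because the fourth cumulant is defined exactly as the remainder of the product-covariance after the two pair terms are removed.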
Note that Assumption \ref{Reg} and Theorem \ref{XW11} imply that%
\begin{equation}
\sup_{n,q\in\left[ 2,8\right] }\sum_{t_{2},\ldots,t_{q}=-\infty}^{\infty
}\left\vert \Gamma_{n}\left( 0,t_{2},\ldots,t_{q}\right) \right\vert
<\infty.\label{Sumcum}%
\end{equation}
\subsubsection{\textbf{Proof of Proposition \ref{MeanH1}}}
Identity (\ref{Brill}) yields
\begin{align*}
& \mathbb{E}\left[ \widetilde{R}_{j}^{2}\right] =\frac{1}{n^{2}}\sum
_{t_{1},t_{2}=1}^{n-j}\mathbb{E}\left[ u_{t_{1}}u_{t_{1}+j}u_{t_{2}}%
u_{t_{2}+j}\right] \\
& \text{ }=\frac{1}{n^{2}}\sum_{t_{1},t_{2}=1}^{n-j}\left( R_{j}^{2}%
+R_{t_{2}-t_{1}}^{2}+R_{t_{2}-t_{1}+j}R_{t_{2}-t_{1}-j}+\Gamma\left(
0,j,t_{2}-t_{1},t_{2}-t_{1}+j\right) \right) ,
\end{align*}
where
\begin{align*}
\sum_{t_{1},t_{2}=1}^{n-j}R_{t_{2}-t_{1}}^{2} & =(n-j)R_{0}^{2}+2\sum
_{\ell=1}^{n-j-1}(n-j-\ell)R_{\ell}^{2},\\
\sum_{t_{1},t_{2}=1}^{n-j}R_{t_{2}-t_{1}+j}R_{t_{2}-t_{1}-j} & =(n-j)R_{j}%
^{2}+2\sum_{\ell=1}^{n-j-1}(n-j-\ell)R_{\ell+j}R_{\ell-j},\\
\sum_{t_{1},t_{2}=1}^{n-j}\Gamma\left( 0,j,t_{2}-t_{1},t_{2}-t_{1}+j\right)
& =\sum_{\ell=-n+j+1}^{n-j-1}\left( n-j-|\ell|\right) \Gamma\left(
0,j,\ell,\ell+j\right) .
\end{align*}
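The grouping of the double sums by the lag $\ell=t_{2}-t_{1}$ used in the first display above can be checked numerically for any even sequence; the sequence and range below are arbitrary illustrative choices (here $M$ plays the role of $n-j$).

```python
# Check: sum_{t1,t2=1}^{M} R_{t2-t1}^2 = M*R_0^2 + 2*sum_{l=1}^{M-1} (M-l)*R_l^2,
# obtained by grouping the double sum according to the lag l = t2 - t1.
M = 40

def R(l):
    return 0.8 ** abs(l)  # an arbitrary even sequence standing in for R_l

direct = sum(R(t2 - t1) ** 2
             for t1 in range(1, M + 1) for t2 in range(1, M + 1))
grouped = M * R(0) ** 2 + 2 * sum((M - l) * R(l) ** 2 for l in range(1, M))
print(abs(direct - grouped))  # agree up to floating-point rounding
```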
Set $k_{j}=K^{2}\left( j/p\right) $ to prove the first equality and
$k_{j}=K^{2}\left( j/p\right) /\tau_{j}^{2}$ for the second. Note that
Assumptions \ref{Kernel} and \ref{Reg} give, in both cases, $\max_{j\in\left[
1,n-1\right] }k_{j}\leq C$ and $k_{j}\geq C\mathbb{I}\left( j\leq
p/2\right) $. The equalities above give
\begin{align}
& \mathbb{E}\left[ \sum_{j=1}^{n-1}k_{j}\widetilde{R}_{j}^{2}\right]
-R_{0}^{2}\sum_{j=1}^{n-1}\left( 1-\frac{j}{n}\right) k_{j}\nonumber\\
& \text{ }=n\sum_{j=1}^{n-1}\left( \left( 1-\frac{j}{n}\right) ^{2}%
+\frac{1}{n}\left( 1-\frac{j}{n}\right) \right) k_{j}R_{j}^{2}%
\label{EtildeSp}\\
& \text{ }+2\sum_{j=1}^{n-1}k_{j}\sum_{\ell=1}^{n-j-1}\left( 1-\frac{j+\ell
}{n}\right) \left( R_{\ell}^{2}+R_{\ell+j}R_{\ell-j}\right) \nonumber\\
& \text{ }+\sum_{j=1}^{n-1}k_{j}\sum_{\ell=-n+j+1}^{n-j-1}\left(
1-\frac{j+|\ell|}{n}\right) \Gamma\left( 0,j,\ell,\ell+j\right) .\nonumber
\end{align}
We start with the term $R_{0}^{2}\sum_{j=1}^{n-1}\left( 1-\frac{j}{n}\right)
k_{j}$, which is equal to $R_{0}^{2}E\left( p\right) $ when $k_{j}%
=K^{2}\left( j/p\right) $, that is, when proving the first equality. When
$k_{j}=K^{2}\left( j/p\right) /\tau_{j}^{2}$, (\ref{Tau2sig}) gives, under
Assumptions \ref{Kernel} and \ref{Reg},%
\[
\left\vert R_{0}^{2}\sum_{j=1}^{n-1}\left( 1-\frac{j}{n}\right)
k_{j}-E\left( p\right) \right\vert \leq C\sum_{j=1}^{p}\left\vert \tau
_{j}^{2}-R_{0}^{2}\right\vert \leq C\sum_{j=1}^{\infty}j^{-6}%
\]
so that $R_{0}^{2}\sum_{j=1}^{n-1}\left( 1-j/n\right) k_{j}\geq E\left(
p\right) -C^{\prime}$.
Let us now turn to the other terms. The lower bound $k_{j}\geq C\mathbb{I}(j\leq p/2)$
gives that (\ref{EtildeSp}) is larger than $Cn\sum_{j=1}^{p/2}R_{j}^{2}$. To
bound the remaining terms in (\ref{EtildeSp}), we note that by Assumptions
\ref{Kernel}, \ref{Reg} and (\ref{Sumcum}),%
\[
\left\vert \sum_{j=1}^{n-1}k_{j}\sum_{\ell=1}^{n-j-1}\left( 1-\frac{j+\ell
}{n}\right) R_{\ell}^{2}\right\vert \leq C\sum_{j=1}^{n-1}\mathbb{I}(j\leq
p)\times\sum_{j=1}^{\infty}R_{j}^{2}\leq Cp\sum_{j=1}^{\infty}R_{j}%
^{2}=o(n)\sum_{j=1}^{\infty}R_{j}^{2},
\]%
\[
\left\vert \sum_{j=1}^{n-1}k_{j}\sum_{\ell=1}^{n-j-1}\left( 1-\frac{j+\ell
}{n}\right) R_{\ell+j}R_{\ell-j}\right\vert \leq C\sum_{j=1}^{+\infty}%
\sum_{\ell=1}^{+\infty}\left\vert R_{\ell+j}R_{\ell-j}\right\vert \leq
C\left( \sum_{j=0}^{\infty}|R_{j}|\right) ^{2}\leq C,
\]%
\[
\left\vert \sum_{j=1}^{n-1}k_{j}\sum_{\ell=-n+j+1}^{n-j-1}\left(
1-\frac{j+\ell}{n}\right) \Gamma\left( 0,j,\ell,\ell+j\right) \right\vert
\leq C\sum_{t_{2},t_{3},t_{4}=-\infty}^{\infty}\left\vert \Gamma(0,t_{2}%
,t_{3},t_{4})\right\vert \leq C
\]
uniformly with respect to $p\in\left[ 1,\overline{p}_{n}\right] $.
Substituting these bounds in the equality above establishes the proposition.
\hspace*{\fill}$\Box$
\subsubsection{\textbf{Proof of Proposition \ref{VarH1}}}
Let $f$ be the spectral density of the alternative. Using (\ref{Sumcum}), we
obtain%
\begin{equation}
\sup_{\lambda\in\lbrack-\pi,\pi]}\left\vert f\left( \lambda\right)
\right\vert \leq C\text{\quad and\quad}\sum_{j=1}^{\infty}R_{j}^{2}\leq
C\label{L2}%
\end{equation}
because $\sup_{\lambda\in\lbrack-\pi,\pi]}\left\vert f\left( \lambda\right)
\right\vert \leq\left( |R_{0}|+2\sum_{j=1}^{\infty}|R_{j}|\right) /(2\pi)$
and $\sum_{j=1}^{\infty}R_{j}^{2}\leq\left( \sum_{j=1}^{\infty}%
|R_{j}|\right) ^{2}$. We recall that $\widetilde{R}_{j}=\sum_{t=1}^{n-j}%
u_{t}u_{t+j}/n$\ and define $\overline{R}_{j}=\mathbb{E}\left[ \widetilde{R}%
_{j}\right] =\left( 1-j/n\right) R_{j}$. Set $k_{j}=K^{2}\left(
j/p\right) $ to prove the first equality and $k_{j}=K^{2}\left( j/p\right)
/\tau_{j}^{2}$ for the second. Note that Assumptions \ref{Kernel} and
\ref{Reg} give, in both cases, $k_{j}\leq C\mathbb{I}\left( j\leq p\right) $.
To lighten the notation, redefine $\widetilde{S}_{p}$ as $\sum_{j=1}%
^{n-1}k_{j}\widetilde{R}_{j}^{2}$. Define $D_{j}=\widetilde{R}_{j}%
-\overline{R}_{j}$. We have $\mathbb{E}\left[ D_{j}\right] =0$ and
$\widetilde{S}_{p}=n\sum_{j=1}^{n-1}k_{j}\overline{R}_{j}^{2}+2n\sum
_{j=1}^{n-1}k_{j}\overline{R}_{j}D_{j}+n\sum_{j=1}^{n-1}k_{j}D_{j}^{2}$. The
inequality $(a+b)^{2}\leq2a^{2}+2b^{2}$ implies that
\begin{equation}
\operatorname*{Var}\left( \widetilde{S}_{p}\right) \leq8\operatorname*{Var}%
\left( n\sum_{j=1}^{n-1}k_{j}\overline{R}_{j}\widetilde{R}_{j}\right)
+2\operatorname*{Var}\left( n\sum_{j=1}^{n-1}k_{j}D_{j}^{2}\right)
.\label{VarH1.1}%
\end{equation}
By identity (\ref{Brill}),
\[
\operatorname*{Var}\left( n\sum_{j=1}^{n-1}k_{j}\overline{R}_{j}%
\widetilde{R}_{j}\right) =\sum_{j_{1},j_{2}=1}^{n-1}k_{j_{1}}k_{j_{2}%
}\overline{R}_{j_{1}}\overline{R}_{j_{2}}\sum_{t_{1}=1}^{n-j_{1}}\sum
_{t_{2}=1}^{n-j_{2}}\operatorname*{Cov}\left( u_{t_{1}}u_{t_{1}+j_{1}%
},u_{t_{2}}u_{t_{2}+j_{2}}\right) \leq V_{1}+K_{1}%
\]
with
\begin{align*}
V_{1} & =\left\vert \sum_{j_{1},j_{2}=1}^{n-1}k_{j_{1}}k_{j_{2}}\overline
{R}_{j_{1}}\overline{R}_{j_{2}}\sum_{t_{1}=1}^{n-j_{1}}\sum_{t_{2}=1}%
^{n-j_{2}}\left( R_{t_{2}-t_{1}}R_{t_{2}-t_{1}+j_{2}-j_{1}}+R_{t_{2}%
-t_{1}-j_{1}}R_{t_{2}-t_{1}+j_{2}}\right) \right\vert ,\\
K_{1} & =\left\vert \sum_{j_{1},j_{2}=1}^{n-1}k_{j_{1}}k_{j_{2}}\overline
{R}_{j_{1}}\overline{R}_{j_{2}}\sum_{t_{1}=1}^{n-j_{1}}\sum_{t_{2}=1}%
^{n-j_{2}}\Gamma\left( t_{1},t_{1}+j_{1},t_{2},t_{2}+j_{2}\right)
\right\vert .
\end{align*}
For the variance in the second term on the right of (\ref{VarH1.1}), we have
\[
\operatorname*{Var}\left( n\sum_{j=1}^{n-1}k_{j}D_{j}^{2}\right) =n^{2}%
\sum_{j_{1},j_{2}=1}^{n-1}k_{j_{1}}k_{j_{2}}\operatorname*{Cov}\left(
D_{j_{1}}^{2},D_{j_{2}}^{2}\right) .
\]
Applying (\ref{Brill}) twice we obtain
\begin{align*}
\lefteqn{\operatorname*{Cov}\left( D_{j_{1}}^{2},D_{j_{2}}^{2}\right) }\\
& \text{ }=\frac{1}{n^{4}}\sum_{t_{1},t_{2}=1}^{n-j_{1}}\sum_{t_{3},t_{4}%
=1}^{n-j_{2}}\operatorname*{Cov}\left[ \prod_{q=1}^{2}\left( u_{t_{q}%
}u_{t_{q}+j_{1}}-\mathbb{E}[u_{t_{q}}u_{t_{q}+j_{1}}]\right) ,\prod_{q=3}%
^{4}\left( u_{t_{q}}u_{t_{q}+j_{2}}-\mathbb{E}[u_{t_{q}}u_{t_{q}+j_{2}%
}]\right) \right] \\
& \text{ }=\frac{1}{n^{4}}\sum_{t_{1},t_{2}=1}^{n-j_{1}}\sum_{t_{3},t_{4}%
=1}^{n-j_{2}}\left[ \operatorname*{Cov}\left( u_{t_{1}}u_{t_{1}+j_{1}%
},u_{t_{3}}u_{t_{3}+j_{2}}\right) \operatorname*{Cov}\left( u_{t_{2}%
}u_{t_{2}+j_{1}},u_{t_{4}}u_{t_{4}+j_{2}}\right) \right. \\
& \left.
\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;+\operatorname*{Cov}%
\left( u_{t_{1}}u_{t_{1}+j_{1}},u_{t_{4}}u_{t_{4}+j_{2}}\right)
\operatorname*{Cov}\left( u_{t_{2}}u_{t_{2}+j_{1}},u_{t_{3}}u_{t_{3}+j_{2}%
}\right) \right] \\
& +\frac{1}{n^{4}}\sum_{t_{1},t_{2}=1}^{n-j_{1}}\sum_{t_{3},t_{4}=1}^{n-j_{2}%
}\operatorname*{Cum}\left( u_{t_{1}}u_{t_{1}+j_{1}},u_{t_{2}}u_{t_{2}+j_{1}%
},u_{t_{3}}u_{t_{3}+j_{2}},u_{t_{4}}u_{t_{4}+j_{2}}\right) \\
& \text{ }=\frac{2}{n^{4}}\left( \sum_{t_{1}=1}^{n-j_{1}}\sum_{t_{2}%
=1}^{n-j_{2}}\left( R_{t_{2}-t_{1}}R_{t_{2}-t_{1}+j_{2}-j_{1}}+R_{t_{2}%
-t_{1}-j_{1}}R_{t_{2}-t_{1}+j_{2}}+\Gamma(t_{1},t_{1}+j_{1},t_{2},t_{2}%
+j_{2})\right) \right) ^{2}\\
& +\frac{1}{n^{4}}\sum_{t_{1},t_{2}=1}^{n-j_{1}}\sum_{t_{3},t_{4}=1}^{n-j_{2}%
}\operatorname*{Cum}\left( u_{t_{1}}u_{t_{1}+j_{1}},u_{t_{2}}u_{t_{2}+j_{1}%
},u_{t_{3}}u_{t_{3}+j_{2}},u_{t_{4}}u_{t_{4}+j_{2}}\right) .
\end{align*}
Since $(a+b+c)^{2}\leq3(a^{2}+b^{2}+c^{2})$, we can write $\operatorname*{Var}%
\left( n\sum_{j=1}^{n-1}k_{j}D_{j}^{2}\right) \leq6V_{2}+K_{2}%
+6K_{2}^{\prime}$ with
\begin{align*}
\lefteqn{V_{2}=\frac{1}{n^{2}}\sum_{j_{1},j_{2}=1}^{n-1}k_{j_{1}}k_{j_{2}%
}\left( \left( \sum_{t_{1}=1}^{n-j_{1}}\sum_{t_{2}=1}^{n-j_{2}}%
R_{t_{2}-t_{1}}R_{t_{2}-t_{1}+j_{2}-j_{1}}\right) ^{2}+\left( \sum_{t_{1}%
=1}^{n-j_{1}}\sum_{t_{2}=1}^{n-j_{2}}R_{t_{2}-t_{1}-j_{1}}R_{t_{2}-t_{1}%
+j_{2}}\right) ^{2}\right) ,}\\
& K_{2}=\left\vert \frac{1}{n^{2}}\sum_{j_{1},j_{2}=1}^{n-1}k_{j_{1}}k_{j_{2}%
}\sum_{t_{1},t_{2}=1}^{n-j_{1}}\sum_{t_{3},t_{4}=1}^{n-j_{2}}%
\operatorname*{Cum}\left( u_{t_{1}}u_{t_{1}+j_{1}},u_{t_{2}}u_{t_{2}+j_{1}%
},u_{t_{3}}u_{t_{3}+j_{2}},u_{t_{4}}u_{t_{4}+j_{2}}\right) \right\vert ,\\
& K_{2}^{\prime}=\frac{1}{n^{2}}\sum_{j_{1},j_{2}=1}^{n-1}k_{j_{1}}k_{j_{2}%
}\left( \sum_{t_{1}=1}^{n-j_{1}}\sum_{t_{2}=1}^{n-j_{2}}\Gamma\left(
t_{1},t_{1}+j_{1},t_{2},t_{2}+j_{2}\right) \right) ^{2}.
\end{align*}
Substituting in (\ref{VarH1.1}) shows that the proposition holds if the
following inequalities hold:
\[
V_{1}\leq Cn\sum_{j=1}^{p}R_{j}^{2},\quad V_{2}\leq Cp,\quad K_{1}\leq C,\quad
K_{2}^{\prime}\leq C,\quad K_{2}\leq C\frac{p^{2}}{n}.
\]
We establish these inequalities in five steps.
\textit{Step 1: bound for }$V_{1}$\textit{.} We note that $|\overline{R}%
_{j}|\leq|R_{j}|$ and that under Assumption \ref{Kernel}, $0\leq k_{j}\leq C $
for all $j$. Using a covariance spectral representation $R_{j}=\int_{-\pi
}^{\pi}\exp(\pm ij\lambda)f(\lambda)d\lambda$, the Cauchy-Schwarz inequality
and (\ref{L2}), we obtain by Assumption \ref{Kernel}%
\begin{align*}
\lefteqn{\left\vert \sum_{j_{1},j_{2}=1}^{n-1}k_{j_{1}}k_{j_{2}}\overline
{R}_{j_{1}}\overline{R}_{j_{2}}\sum_{t_{1}=1}^{n-j_{1}}\sum_{t_{2}=1}%
^{n-j_{2}}R_{t_{2}-t_{1}}R_{t_{2}-t_{1}+j_{2}-j_{1}}\right\vert }\\
& \text{ }=\int_{-\pi}^{\pi}\int_{-\pi}^{\pi}\left\vert \sum_{j=1}^{n-1}%
k_{j}\overline{R}_{j}\sum_{t=1}^{n-j}\text{e}^{it\lambda_{1}}\text{e}%
^{i(t+j)\lambda_{2}}\right\vert ^{2}f(\lambda_{1})f(\lambda_{2})d\lambda
_{1}d\lambda_{2}\\
& \text{ }\leq\left( \sup_{\lambda\in\lbrack-\pi,\pi]}|f(\lambda)|\right)
^{2}\int_{-\pi}^{\pi}\int_{-\pi}^{\pi}\sum_{j_{1},j_{2}=1}^{n-1}k_{j_{1}%
}\overline{R}_{j_{1}}k_{j_{2}}\overline{R}_{j_{2}}\sum_{t_{1}=1}^{n-j_{1}}%
\sum_{t_{2}=1}^{n-j_{2}}\text{e}^{it_{1}\lambda_{1}}\text{e}^{i(t_{1}%
+j_{1})\lambda_{2}}\text{e}^{-it_{2}\lambda_{1}}\text{e}^{-i(t_{2}%
+j_{2})\lambda_{2}}d\lambda_{1}d\lambda_{2}\\
& \text{ }\leq C\sum_{j=1}^{n-1}(n-j)k_{j}^{2}\overline{R}_{j}^{2}\leq
Cn\sum_{j=1}^{p}R_{j}^{2},
\end{align*}%
\begin{align*}
\lefteqn{\left\vert \sum_{j_{1},j_{2}=1}^{n-1}k_{j_{1}}k_{j_{2}}\overline
{R}_{j_{1}}\overline{R}_{j_{2}}\sum_{t_{1}=1}^{n-j_{1}}\sum_{t_{2}=1}%
^{n-j_{2}}R_{t_{2}-t_{1}-j_{1}}R_{t_{2}-t_{1}+j_{2}}\right\vert }\\
& \text{ }=\left\vert \int_{-\pi}^{\pi}\int_{-\pi}^{\pi}\sum_{j_{1}=1}%
^{n-1}k_{j_{1}}\overline{R}_{j_{1}}\sum_{t_{1}=1}^{n-j_{1}}\text{e}%
^{-i(t_{1}+j_{1})\lambda_{1}}\text{e}^{-it_{1}\lambda_{2}}\times\sum_{j_{2}%
=1}^{n-1}k_{j_{2}}\overline{R}_{j_{2}}\sum_{t_{2}=1}^{n-j_{2}}\text{e}%
^{it_{2}\lambda_{1}}\text{e}^{i(t_{2}+j_{2})\lambda_{2}}f(\lambda_{1})f(\lambda
_{2})d\lambda_{1}d\lambda_{2}\right\vert \\
& \text{ }\leq\int_{-\pi}^{\pi}\int_{-\pi}^{\pi}\left\vert \sum_{j=1}%
^{n-1}k_{j}\overline{R}_{j}\sum_{t=1}^{n-j}\text{e}^{it\lambda_{1}}%
\text{e}^{i(t+j)\lambda_{2}}\right\vert ^{2}f(\lambda_{1})f(\lambda
_{2})d\lambda_{1}d\lambda_{2}\leq Cn\sum_{j=1}^{p}R_{j}^{2}.%
\end{align*}
This establishes the bound for $V_{1}$.
\textit{Step 2: bound for }$V_{2}$\textit{.} We define $t_{2}=t_{1}%
+t_{2}^{\prime}$, $j_{2}=j_{1}+j_{2}^{\prime}$. By Assumption \ref{Kernel} and
by (\ref{Sumcum}),
\begin{align*}
\lefteqn{\frac{1}{n^{2}}\sum_{j_{1},j_{2}=1}^{n-1}k_{j_{1}}k_{j_{2}}\left(
\sum_{t_{1}=1}^{n-j_{1}}\sum_{t_{2}=1}^{n-j_{2}}R_{t_{2}-t_{1}}R_{t_{2}%
-t_{1}-j_{1}+j_{2}}\right) ^{2}}\\
& \text{ }\leq\frac{C}{n^{2}}\sum_{j_{1}=1}^{n-1}K^{2}(j_{1}/p)\sum
_{j_{2}^{\prime}=-\infty}^{\infty}\left( n\sum_{t_{2}^{\prime}=-\infty
}^{+\infty}\left\vert R_{t_{2}^{\prime}}R_{t_{2}^{\prime}+j_{2}^{\prime}%
}\right\vert \right) ^{2}\\
& \text{ }\leq Cp\times\left( \sum_{j_{2},t_{1},t_{2}=-\infty}^{\infty
}\left\vert R_{t_{1}}R_{t_{1}+j_{2}}R_{t_{2}}R_{t_{2}+j_{2}}\right\vert
\right) \leq Cp\left( \sum_{t=-\infty}^{\infty}|R_{t}|\right) ^{4}\leq Cp,
\end{align*}%
\begin{align*}
\lefteqn{\frac{1}{n^{2}}\sum_{j_{1},j_{2}=1}^{n-1}k_{j_{1}}k_{j_{2}}\left(
\sum_{t_{1}=1}^{n-j_{1}}\sum_{t_{2}=1}^{n-j_{2}}R_{t_{2}-t_{1}-j_{1}}%
R_{t_{2}-t_{1}+j_{2}}\right) ^{2}}\\
& \leq\frac{C}{n^{2}}\sum_{j_{1}=1}^{n-1}K^{2}(j_{1}/p)\sum_{j_{2}^{\prime
}=-\infty}^{\infty}\left( n\sum_{t_{2}^{\prime}=-\infty}^{+\infty
}\left\vert R_{t_{2}^{\prime}-j_{1}}R_{t_{2}^{\prime}+j_{1}+j_{2}^{\prime}%
}\right\vert \right) ^{2}\\
& \text{ }\leq Cp\sum_{j_{2}^{\prime},t_{1},t_{2}=-\infty}^{\infty}\left\vert
R_{t_{1}-j_{1}}R_{t_{1}+j_{1}+j_{2}^{\prime}}R_{t_{2}-j_{1}}R_{t_{2}%
+j_{1}+j_{2}^{\prime}}\right\vert \leq Cp\sum_{j,t_{1},t_{2}=-\infty}^{\infty
}\left\vert R_{t_{1}}R_{t_{1}+j}R_{t_{2}}R_{t_{2}+j}\right\vert \\
& \text{ }\leq Cp\left( \sum_{t=-\infty}^{\infty}|R_{t}|\right) ^{4}\leq Cp,
\end{align*}
therefore $V_{2}\leq Cp$.
\textit{Step 3: bound for }$K_{1}$\textit{.} Define $t_{2}=t_{1}+t$.
Assumption \ref{Kernel} and (\ref{Sumcum}) yield
\[
K_{1}\leq C\sum_{j_{1},j_{2}=1}^{p}\sum_{t=-\infty}^{\infty}\left\vert
\Gamma(0,j_{1},t,t+j_{2})\right\vert \leq C\sum_{t_{1},t_{2},t_{3}=-\infty
}^{\infty}\left\vert \Gamma(0,t_{1},t_{2},t_{3})\right\vert \leq C.
\]
\textit{Step 4: bound for }$K_{2}^{\prime}$\textit{.} Bound (\ref{Sumcum}) gives
\begin{align*}
& K_{2}^{\prime}\leq\frac{1}{n^{2}}\sum_{j_{1},j_{2}=1}^{n-1}k_{j_{1}}%
k_{j_{2}}\left( \sum_{t_{1}=1}^{n-j_{1}}\sum_{t_{2}=1}^{n-j_{2}}\left\vert
\Gamma\left( 0,j_{1},t_{2}-t_{1},t_{2}-t_{1}+j_{2}\right) \right\vert
\right) ^{2}\\
& \leq C\sum_{j_{1},j_{2}=1}^{+\infty}\left( \sum_{t=-\infty}^{\infty
}\left\vert \Gamma(0,j_{1},t,t+j_{2})\right\vert \right) ^{2}\\
& =C\sum_{j_{1},j_{2}=1}^{+\infty}\sum_{t_{1},t_{2}=-\infty}^{\infty
}\left\vert \Gamma(0,j_{1},t_{1},t_{1}+j_{2})\Gamma(0,j_{1},t_{2},t_{2}%
+j_{2})\right\vert \\
& \leq C\left( \sum_{t_{2},t_{3},t_{4}=-\infty}^{\infty}\left\vert
\Gamma(0,t_{2},t_{3},t_{4})\right\vert \right) ^{2}\leq C.
\end{align*}
\textit{Step 5: bound for }$K_{2}$\textit{.}\quad Bounding $K_{2}$ requires
additional notation. First set $t_{5}=t_{1}+j_{1}$, $t_{6}=t_{2}+j_{1}$,
$t_{7}=t_{3}+j_{2}$ and $t_{8}=t_{4}+j_{2}$, and note that $t_{5},\ldots
,t_{8}$ depend upon $t_{1},\ldots,t_{4}$ and $j_{1},j_{2}$ only. For a
partition $B=\{B_{\ell},\ell=1,\ldots,d_{B}\}$ of $\{1,\ldots,8\}$, where
$d_{B}=\operatorname*{Card}B$, define $\Gamma_{B}(t_{1},\ldots,t_{8}%
)=\prod_{\ell=1}^{d_{B}}\operatorname*{Cum}\left( u_{t_{q}},q\in B_{\ell
}\right) $, and recall that $\operatorname*{Cum}(u_{t})=\mathbb{E}\left[
u_{t}\right] =0$. Then the largest $d_{B}$ yielding a non-vanishing
$\Gamma_{B}$ is $d_{B}=4$. When $d_{B}=4$, $B$ is a pairwise partition of
$\{1,\ldots,8\}$ so that $\Gamma_{B}$ is a product of covariances. Let
$\mathcal{B}$ be the set of indecomposable partitions of the two-way table
\[%
\begin{array}
[c]{cc}%
1 & 5\\
2 & 6\\
3 & 7\\
4 & 8
\end{array}
,
\]
see Brillinger (2001, p. 20) for a definition. Then according to Brillinger
(2001, Theorem 2.3.2),
\begin{align*}
& \operatorname*{Cum}\left( u_{t_{1}}u_{t_{1}+j_{1}},u_{t_{2}}u_{t_{2}+j_{1}%
},u_{t_{3}}u_{t_{3}+j_{2}},u_{t_{4}}u_{t_{4}+j_{2}}\right) \\
& \text{ }=\sum_{B\in\mathcal{B}}\Gamma_{B}(t_{1},\ldots,t_{8})=\sum
_{B\in\mathcal{B},d_{B}\leq3}\Gamma_{B}(t_{1},\ldots,t_{8})+\sum
_{B\in\mathcal{B},d_{B}=4}\Gamma_{B}(t_{1},\ldots,t_{8}).
\end{align*}
Some properties of partitions in $\mathcal{B}$ are as follows. Call $\{1,5\}$,
$\{2,6\}$, $\{3,7\}$ and $\{4,8\}$ fundamental pairs and say that a $B_{1}$ in
a partition $B$ breaks the pair $\{1,5\}$ if $\{1,5\}$ is not a subset of
$B_{1}$. Then partitions $B\in\mathcal{B}$ are such that each $B_{\ell}\in B$
must break a fundamental pair. Note that fundamental pairs play a symmetric
role. Since $t_{q+4}-t_{q}$ equals $j_{1}$ or $j_{2}$, and $k_{j_{1}}$ or
$k_{j_{2}}$ vanishes when $j_{1}$ or $j_{2}$ is larger than $p$, the indexes
$t_{q}$ and $t_{q+4}$ of a fundamental pair also play a symmetric role in the
computations below. We now discuss the contribution to $K_{2}$ of partitions
of $\{1,\ldots,8\}$ according to the possible values $1,\ldots,4$ of $d_{B}$.
Due to symmetry, we only consider representative partitions for each case.
Under Assumption \ref{Kernel} and (\ref{Sumcum}), the case $d_{B}=1$ gives a
contribution to $K_{2}$ bounded by
\begin{align*}
\left\vert \frac{1}{n^{2}}\sum_{j_{1},j_{2}=1}^{n-1}k_{j_{1}}k_{j_{2}}%
\sum_{t_{1},t_{2}=1}^{n-j_{1}}\sum_{t_{3},t_{4}=1}^{n-j_{2}}\Gamma\left(
t_{1},\ldots,t_{8}\right) \right\vert & \leq\frac{C}{n^{2}}\sum
_{t_{1},\ldots,t_{8}=-n}^{n}\left\vert \Gamma\left( 0,t_{2}-t_{1}%
,\ldots,t_{8}-t_{1}\right) \right\vert \\
& \leq\frac{C}{n}\sum_{t_{2}^{\prime},\ldots,t_{8}^{\prime}=-\infty}^{\infty
}\left\vert \Gamma\left( 0,t_{2}^{\prime},\ldots,t_{8}^{\prime}\right)
\right\vert \leq\frac{C}{n}.
\end{align*}
The case $d_{B}=2$ corresponds to $\{\operatorname*{Card}B_{1}%
,\operatorname*{Card}B_{2}\}$ being $\{2,6\}$, $\{3,5\}$ or $\{4,4\}$. These
cases are very similar and we limit ourselves to $\{2,6\}$ and $B_{1}%
=\{1,2\}$. The corresponding contribution to $K_{2}$ is bounded by
\begin{align*}
\lefteqn{\left\vert \frac{1}{n^{2}}\sum_{j_{1},j_{2}=1}^{n-1}k_{j_{1}}%
k_{j_{2}}\sum_{t_{1},t_{2}=1}^{n-j_{1}}\sum_{t_{3},t_{4}=1}^{n-j_{2}}%
\Gamma_{B}\left( t_{1},\ldots,t_{8}\right) \right\vert \leq\frac{C}{n^{2}%
}\sum_{t_{1},\ldots,t_{8}=-n}^{n}\left\vert \Gamma\left( 0,t_{2}%
-t_{1}\right) \Gamma\left( t_{3}-t_{1},\ldots,t_{8}-t_{1}\right)
\right\vert }\\
& \text{ }\leq\frac{C}{n}\sum_{t_{2}^{\prime},\ldots,t_{8}^{\prime}=-n}%
^{n}\left\vert \Gamma\left( 0,t_{2}^{\prime}\right) \Gamma\left(
t_{3}^{\prime},\ldots,t_{8}^{\prime}\right) \right\vert \leq\frac{C}{n}%
\sum_{t=-n}^{n}\left\vert R_{t}\right\vert \sum_{t_{3}^{\prime},\ldots
,t_{8}^{\prime}=-n}^{n}\left\vert \Gamma\left( 0,t_{4}^{\prime}-t_{3}%
^{\prime},\ldots,t_{8}^{\prime}-t_{3}^{\prime}\right) \right\vert \\
& \text{ }\leq C\sum_{t=-\infty}^{\infty}\left\vert R_{t}\right\vert \sum_{t_{2}%
,\ldots,t_{6}=-\infty}^{\infty}\left\vert \Gamma\left( 0,t_{2},\ldots
,t_{6}\right) \right\vert \leq C,
\end{align*}
by Assumption \ref{Kernel} and (\ref{Sumcum}).
The case $d_{B}=3$ corresponds to $\{ \operatorname*{Card}B_{1}%
,\operatorname*{Card}B_{2},\operatorname*{Card}B_{3}\}$ being $\{2,2,4\}$ or
$\{2,3,3\}$. We start with $\operatorname*{Card}B_{1}=2$,
$\operatorname*{Card}B_{2}=2$ and $\operatorname*{Card}B_{3}=4$. The
discussion concerns the number of fundamental pairs broken by $B_{3}$. Note
that the situation where $B_{3}$ breaks exactly 3 or exactly 1 fundamental
pair is impossible. The case where $B_{3}$ does not break any fundamental pair
corresponds to partitions that are not indecomposable, so that the only
possible cases are those where $B_{3}$ breaks $4$ or $2$ fundamental pairs.
\begin{itemize}
\item $B_{3}$ breaks 4 fundamental pairs. Consider $B_{3}=\{1,2,3,4\}$,
$B_{2}=\{5,6\}$ and $B_{1}=\{7,8\}$. The corresponding contribution to $K_{2}$
is bounded by
\begin{align*}
& \left\vert \frac{1}{n^{2}}\sum_{j_{1},j_{2}=1}^{n-1}k_{j_{1}}k_{j_{2}}%
\sum_{t_{1},t_{2}=1}^{n-j_{1}}\sum_{t_{3},t_{4}=1}^{n-j_{2}}\Gamma_{B}\left(
t_{1},\ldots,t_{8}\right) \right\vert \\
& =\left\vert \frac{1}{n^{2}}\sum_{j_{1},j_{2}=1}^{n-1}k_{j_{1}}k_{j_{2}}%
\sum_{t_{1},t_{2}=1}^{n-j_{1}}\sum_{t_{3},t_{4}=1}^{n-j_{2}}\Gamma\left(
0,t_{2}-t_{1},t_{3}-t_{1},t_{4}-t_{1}\right) R_{t_{2}-t_{1}}R_{t_{4}-t_{3}%
}\right\vert \\
& \text{ }\leq C\frac{p^{2}}{n}\sup_{j}|R_{j}|^{2}\sum_{t_{2},t_{3}%
,t_{4}=-\infty}^{\infty}\left\vert \Gamma\left( 0,t_{2},t_{3},t_{4}\right)
\right\vert \leq C\frac{p^{2}}{n}%
\end{align*}
by Assumption \ref{Kernel} and (\ref{Sumcum}).
\item $B_{3}$ breaks 2 fundamental pairs. Take $B_{3}=\{1,2,3,5\}$,
$B_{2}=\{4,6\}$ and $B_{1}=\{7,8\}$. The change of variables $t_{2}%
=t_{2}^{\prime}+t_{1}$, $t_{3}=t_{3}^{\prime}+t_{1}$ and $t_{4}=t_{4}^{\prime
}+t_{3}$ shows that the contribution to $K_{2}$ is bounded by
\begin{align*}
& \left\vert \frac{1}{n^{2}}\sum_{j_{1},j_{2}=1}^{n-1}k_{j_{1}}k_{j_{2}}%
\sum_{t_{1},t_{2}=1}^{n-j_{1}}\sum_{t_{3},t_{4}=1}^{n-j_{2}}\Gamma_{B}\left(
t_{1},\ldots,t_{8}\right) \right\vert \\
& \text{ }=\left\vert \frac{1}{n^{2}}\sum_{j_{1},j_{2}=1}^{n-1}k_{j_{1}%
}k_{j_{2}}\sum_{t_{1},t_{2}=1}^{n-j_{1}}\sum_{t_{3},t_{4}=1}^{n-j_{2}}%
\Gamma\left( 0,t_{2}-t_{1},t_{3}-t_{1},j_{1}\right) R_{t_{4}-t_{2}-j_{1}%
}R_{t_{4}-t_{3}}\right\vert \\
& \text{ }\leq\frac{C}{n}\sum_{j_{2}=1}^{n-1}K^{2}(j_{2}/p)\sum_{t_{2}%
^{\prime},t_{3}^{\prime},j_{1}=-\infty}^{\infty}\left\vert \Gamma\left(
0,t_{2}^{\prime},t_{3}^{\prime},j_{1}\right) \right\vert \sum_{t_{4}^{\prime
}=-\infty}^{+\infty}\left\vert R_{t_{4}^{\prime}}\right\vert \times\sup
_{j}|R_{j}|\leq C\frac{p}{n}%
\end{align*}
under Assumption \ref{Kernel} and (\ref{Sumcum}).
\end{itemize}
We now turn to the case $\operatorname*{Card}B_{3}=\operatorname*{Card}%
B_{2}=3$ and $\operatorname*{Card}B_{1}=2$. Observe that $B_{3}$ or $B_{2}$
must break 3 or 1 fundamental pair. The discussion now concerns the
fundamental pairs which are simultaneously broken by $B_{3}$ and $B_{2}$. Note
that $B_{3}$ and $B_{2}$ cannot break the same 3 fundamental pairs because, if
they did, $B_{1}$ would be given by the remaining fundamental pair, in which
case $B_{1}$ could not communicate with $B_{2}$ or $B_{3}$, a fact that would
contradict the requirement that the partition $\{B_{1},B_{2},B_{3}\}$ is indecomposable.
\begin{itemize}
\item $B_{3}$ and $B_{2}$ break 3 fundamental pairs, 2 of which are the same.
Take $B_{3}=\{1,2,3\}$, $B_{2}=\{4,5,6\}$ and $B_{1}=\{7,8\}$. Using change of
variables $t_{2}=t_{1}+t_{2}^{\prime}$, $t_{3}=t_{1}+t_{3}^{\prime}$ and
$t_{4}=t_{3}+t_{4}^{\prime}$, we can see that under Assumption \ref{Kernel}
and (\ref{Sumcum}) the contribution to $K_{2}$ of this case is bounded by
\begin{align*}
& \left\vert \frac{1}{n^{2}}\sum_{j_{1},j_{2}=1}^{n-1}k_{j_{1}}k_{j_{2}}%
\sum_{t_{1},t_{2}=1}^{n-j_{1}}\sum_{t_{3},t_{4}=1}^{n-j_{2}}\Gamma_{B}\left(
t_{1},\ldots,t_{8}\right) \right\vert \\
& \text{ }=\left\vert \frac{1}{n^{2}}\sum_{j_{1},j_{2}=1}^{n-1}k_{j_{1}%
}k_{j_{2}}\sum_{t_{1},t_{2}=1}^{n-j_{1}}\sum_{t_{3},t_{4}=1}^{n-j_{2}}%
\Gamma\left( 0,t_{2}-t_{1},t_{3}-t_{1}\right) \Gamma\left( 0,t_{1}%
-t_{4}+j_{1},t_{2}-t_{4}+j_{1}\right) R_{t_{4}-t_{3}}\right\vert \\
& \text{ }\leq\frac{C}{n}\sum_{j_{1},j_{2}=1}^{n-1}K^{2}(j_{1}/p)K^{2}%
(j_{2}/p)\sup_{t_{2},t_{3}}\left\vert \Gamma(0,t_{2},t_{3})\right\vert
\sum_{t_{2}^{\prime},t_{3}^{\prime}=-\infty}^{\infty}\left\vert \Gamma\left(
0,t_{2}^{\prime},t_{3}^{\prime}\right) \right\vert \sum_{t_{4}^{\prime
}=-\infty}^{+\infty}\left\vert R_{t_{4}^{\prime}}\right\vert \leq C\frac
{p^{2}}{n}.%
\end{align*}
Note that the case where $B_{3}$ and $B_{2}$ break 3 fundamental pairs with
less than one in common is impossible.
\end{itemize}
The next case assumes that $B_{2}$ breaks only $1$ fundamental pair, which is
also necessarily broken by $B_{3}$ since $B_{2}$ must contain the remaining
unbroken pair.
\begin{itemize}
\item $B_{3}$ breaks 3 fundamental pairs and $B_{2}$ breaks only 1 pair. Take
$B_{3}=\{1,2,3\}$, $B_{2}=\{4,5,8\}$ and $B_{1}=\{6,7\}$ and consider the change
of variables $t_{2}=t_{1}+t_{2}^{\prime}$, $t_{3}=t_{1}+t_{3}^{\prime}$ and
$t_{4}=t_{1}+j_{1}-t_{4}^{\prime}$. Under Assumption \ref{Kernel} and
(\ref{Sumcum}), the contribution of this term to $K_{2}$ is bounded by
\begin{align*}
& \left\vert \frac{1}{n^{2}}\sum_{j_{1},j_{2}=1}^{n-1}k_{j_{1}}k_{j_{2}}%
\sum_{t_{1},t_{2}=1}^{n-j_{1}}\sum_{t_{3},t_{4}=1}^{n-j_{2}}\Gamma_{B}\left(
t_{1},\ldots,t_{8}\right) \right\vert \\
& \text{ }=\left\vert \frac{1}{n^{2}}\sum_{j_{1},j_{2}=1}^{n-1}k_{j_{1}%
}k_{j_{2}}\sum_{t_{1},t_{2}=1}^{n-j_{1}}\sum_{t_{3},t_{4}=1}^{n-j_{2}}%
\Gamma\left( 0,t_{2}-t_{1},t_{3}-t_{1}\right) \Gamma\left( t_{1}%
-t_{4}+j_{1},0,j_{2}\right) R_{t_{3}-t_{2}+j_{2}-j_{1}}\right\vert \\
& \leq\frac{C\sup_{j}|R_{j}|}{n}\sum_{j_{1}=1}^{n-1}K^{2}(j_{1}/p)\sum
_{t_{2}^{\prime},t_{3}^{\prime}=-\infty}^{\infty}\left\vert \Gamma
(0,t_{2}^{\prime},t_{3}^{\prime})\right\vert \sum_{t_{4}^{\prime}%
,j_{2}=-\infty}^{\infty}\left\vert \Gamma\left( t_{4}^{\prime},0,j_{2}%
\right) \right\vert \leq C\frac{p}{n}.
\end{align*}
\item $B_{3}$ and $B_{2}$ break only 1 pair. Note that $B_{3}$ and $B_{2}$
cannot break the same pair because $B_{1}$ would then be the remaining pair and
could not communicate with the other blocks, so that the partition would not be
indecomposable. Hence all the
partitions in this case are similar to $B_{3}=\{1,2,5\}$, $B_{2}=\{3,4,8\}$,
$B_{1}=\{6,7\}$. The change of variable $t_{2}=t_{1}+t_{2}^{\prime}$,
$t_{3}=-j_{2}+t_{2}+j_{1}+t_{3}^{\prime}$ and $t_{4}=t_{3}-t_{4}^{\prime}$
yields a contribution to $K_{2}$ bounded by
\begin{align*}
& \left\vert \frac{1}{n^{2}}\sum_{j_{1},j_{2}=1}^{n-1}k_{j_{1}}k_{j_{2}}%
\sum_{t_{1},t_{2}=1}^{n-j_{1}}\sum_{t_{3},t_{4}=1}^{n-j_{2}}\Gamma_{B}\left(
t_{1},\ldots,t_{8}\right) \right\vert \\
& \text{ }=\left\vert \frac{1}{n^{2}}\sum_{j_{1},j_{2}=1}^{n-1}k_{j_{1}%
}k_{j_{2}}\sum_{t_{1},t_{2}=1}^{n-j_{1}}\sum_{t_{3},t_{4}=1}^{n-j_{2}}%
\Gamma\left( 0,t_{2}-t_{1},j_{1}\right) \Gamma\left( t_{3}-t_{4}%
,0,j_{2}\right) R_{t_{3}-t_{2}+j_{2}-j_{1}}\right\vert \\
& \text{ }\leq C\sum_{j_{1},t_{2}^{\prime}=-\infty}^{\infty}\left\vert
\Gamma(0,t_{2}^{\prime},j_{1})\right\vert \sum_{j_{2},t_{4}^{\prime}=-\infty
}^{\infty}\left\vert \Gamma(t_{4}^{\prime},0,j_{2})\right\vert \sum
_{t_{3}^{\prime}=-\infty}^{\infty}\left\vert R_{t_{3}^{\prime}}\right\vert
\leq C.
\end{align*}
\end{itemize}
This completes the proof of the Proposition.\hspace*{\fill}$\Box$
\section*{Additional references for the supplementary material}
\textsc{Brillinger, D.R.} (2001). \textit{Time Series Analysis: Data Analysis
and Theory}. Holt, Rinehart \& Winston, New York.
\textsc{Chow, Y.S.} and \textsc{H. Teicher} (1988). \textit{Probability
Theory. Independence, Interchangeability, Martingales}. Second Edition, Springer.
\textsc{Li, D.} and \textsc{R.J. Tomkins} (1996). Laws of the Iterated
Logarithm for Weighted Independent Random Variables. \textit{Statistics and
Probability Letters} \textbf{27}, 247--254.
\textsc{Priestley, M.B.} (1981). \textit{Spectral Analysis and Time Series}.
John Wiley, New York.
\end{document}
\section{Introduction}
The use of copulas has become commonplace for dependence modelling in finance, insurance, and risk management (see, for example, \citet{CLV2004}, \citet{FV1998}, and \citet{MFE2005}).
The Archimedean copulas---a subclass of copulas---have received particular attention in the literature for both their tractability and practical convenience.
An $n$-dimensional Archimedean copula $C:[0,1]^n\rightarrow[0,1]$ can be written as
\begin{equation}
C(\mathbf{u})=h(h^{-1}(u_1)+\cdots+h^{-1}(u_n)),
\end{equation}
where $h$ is the \emph{generator function} of $C$.
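For readers less familiar with this form, here is a minimal numerical illustration (not from the paper; the family and parameter value are our own choices) using the Clayton generator $h(t)=(1+t)^{-1/\theta}$, $\theta>0$:

```python
# Clayton copula evaluated through its generator h(t) = (1 + t)^(-1/theta).
theta = 2.0  # illustrative dependence parameter

def h(t):
    return (1.0 + t) ** (-1.0 / theta)

def h_inv(u):
    return u ** (-theta) - 1.0

def clayton(*u):
    """n-dimensional Archimedean copula C(u) = h(h^{-1}(u_1)+...+h^{-1}(u_n))."""
    return h(sum(h_inv(ui) for ui in u))

# Uniform margins: C(u, 1, ..., 1) = u; and C is exchangeable in its arguments.
print(clayton(0.3, 1.0, 1.0))                  # ~0.3
print(clayton(0.2, 0.7) == clayton(0.7, 0.2))  # True
```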
We introduce a family of multivariate stochastic processes that we call \emph{Archimedean survival processes} (ASPs).
ASPs are constructed in such a way that they are naturally linked to Archimedean copulas.
An ASP is defined over a finite time horizon, and its terminal value has an $\ell_1$-norm symmetric distribution.
This implies that the terminal value of an ASP has an Archimedean survival copula.
Indeed, there is a bijection from the class of Archimedean copulas to the class of ASPs.
\begin{comment}
\citet{SS2001} and \citet{RS2003} describe continuous-time processes that have Archimedean copulas at all times, and they are used to model default times in credit-risk applications.
By construction, these processes are limited to have copulas with completely-monotone generating functions.
The dynamics of these processes are quite different from the dynamics of ASPs.
\end{comment}
A random vector $\mathbf{X}$ has a multivariate Liouville distribution if
\begin{equation}
\mathbf{X} \law R\frac{\mathbf{G}}{\sum_{i=1}^n G_i},
\end{equation}
where $R$ is a non-negative random variable, and $\mathbf{G}$ is a vector of $n$ independent gamma random variables with identical scale parameters
(see, for example, \citet{FKN1990}).
In the special case where $\mathbf{G}$ is a vector of identical exponential random variables, $\mathbf{X}$ has an $\ell_{1}$-norm symmetric distribution.
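A sampling sketch of this representation (seeds and parameter choices below are ours, purely for illustration): with i.i.d. exponential $G_i$'s the normalised vector is uniform on the unit simplex, so $\mathbf{X}$ is $\ell_1$-norm symmetric with radial part $R$.

```python
import random

random.seed(1)

def l1_symmetric_sample(n, draw_R):
    """Draw X = R * G / ||G||_1 with i.i.d. unit-exponential G_i."""
    G = [random.expovariate(1.0) for _ in range(n)]
    s = sum(G)
    R = draw_R()  # non-negative radial variable, independent of G
    return R, [R * g / s for g in G]

# Illustrative radial law: a Gamma(3, 1) variable.
R, x = l1_symmetric_sample(4, lambda: random.gammavariate(3.0, 1.0))
print(abs(sum(x) - R))  # the l1 norm of X recovers the radial part R
```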
\citet{MN2009} give a account of how Archimedean copulas coincide with survival copulas of $\ell_{1}$-norm symmetric distributions which have no point-mass at the origin.
This particular relationship relies on the characterization of $n$-monotone functions through an integral transform of \citet{W1956}.
Then in \citep{MN2010}, \citeauthor{MN2010} generalise Archimedean copulas to so-called Liouville copulas, which are defined as the survival copulas of multivariate Liouville distributions.
\citet{RN2} suggested using a randomly-scaled gamma bridge (also called a Dirichlet process) for modelling the cumulative payments made on insurance claims (see also \citet{BHM3}).
Such a process $\{\xi_{tT}\}_{0\leq t\leq T}$ can be constructed as
\begin{equation}
\label{eq:gam_br}
\xi_{tT}=R \g_{tT},
\end{equation}
where $R$ is a positive random variable and $\{\g_{tT}\}$ is an independent gamma bridge satisfying $\g_{0T}=0$ and $\g_{TT}=1$, for some $T\in(0,\infty)$.
This is an increasing process and so lends itself to the modelling of cumulative gains or losses,
where the random variable $R$ represents the total, final gain.
We can interpret $R$ as a signal and the gamma bridge $\{\g_{tT}\}$ as independent multiplicative noise.
\citeauthor{BHM3} show that $\{\xi_{tT}\}$ is a Markov process, and that
\begin{equation}
\E[R \,|\, \xi_{tT}=x]=\frac{\int_x^{\infty}z^{2-mT}(z-x)^{m(T-t)-1} \, \nu(\dd z)}
{\int_x^{\infty}z^{1-mT}(z-x)^{m(T-t)-1} \, \nu(\dd z)},
\end{equation}
where $\nu$ is the law of $R$, and $m>0$ is a parameter.
The process $\{\xi_{tT}\}$ can be considered to be a gamma process conditioned to have the marginal law $\nu$ at time $T$,
and so belongs to the class of \levy random bridges (see \citet{HHM1}).
As such, we call a process that can be decomposed as in (\ref{eq:gam_br}) a `gamma random bridge' (GRB).
Archimedean survival processes are an $n$-dimensional extension of gamma random bridges.
Let the process $\{(\xi_t^{(1)},\xi_t^{(2)},\ldots,\xi_t^{(n)})^\tp\}_{0\leq t \leq T}$ be an ASP.
Then each one-dimensional marginal process $\{\xi^{(i)}_t\}$ is a GRB.
Thus we can write
\begin{equation}
\xi_t^{(i)}=X_i \g_{tT}^{(i)},
\end{equation}
for some gamma bridge $\{\g_{tT}^{(i)}\}$ and some independent $X_i>0$.
The $X_i$'s are identical but in general not independent, and the $\{\g^{(i)}_{tT}\}$'s are identical but in general not independent.
We shall construct each $\{\xi_t^{(i)}\}$ by splitting a `master' GRB into $n$ non-overlapping subprocesses.
This method of splitting a \levy random bridge into subprocesses (which are themselves \levy random bridges) was used by \citet{HHM2} to develop a bivariate insurance reserving model based on random bridges of the stable-1/2 subordinator.
A remarkable feature of the proposed construction is that the terminal vector $(\xi_T^{(1)},\xi_T^{(2)},\ldots,\xi_T^{(n)})^\tp$ has an $\ell_1$-norm symmetric distribution, and hence an Archimedean survival copula.
In particular, we shall show that
\begin{align}
\label{eq:archform}
\P\left[ \bar{F}(\xi_T^{(1)})\leq u_1,\bar{F}(\xi_T^{(2)})\leq u_2,\ldots,\bar{F}(\xi_T^{(n)})\leq u_n \right]= \bar{F}\left(\sum^{n}_{i=1}\bar{F}^{-1}(u_i)\right),
\end{align}
where
\begin{equation}
\bar{F}(u)=\P\left[\xi_T^{(i)}>u\right], \qquad \text{for $i=1,2,\ldots,n$.}
\end{equation}
Here $\bar{F}(x)$ is the marginal survival function of the $\xi_T^{(i)}$'s, and $\bar{F}^{-1}(u)$ is its (generalised) inverse.
The right-hand side of (\ref{eq:archform}) is an Archimedean copula with the generator function $\bar{F}(x)$.
We shall also construct Liouville processes by splitting a GRB into $n$ pieces.
By allowing more flexibility in the splitting mechanism and by employing some deterministic time changes, a broader range of behaviour can be achieved by Liouville processes than ASPs.
For example, the one-dimensional marginal processes of a Liouville process are in general not identical.
A direct application of ASPs and Liouville processes is to the modelling of multivariate cumulative gain (or loss) processes.
Consider, for example, an insurance company that underwrites several lines of motor business (such as personal motor, fleet motor or private-hire vehicles) for a given accident year.
A substantial payment made on one line of business is unlikely to coincide with a substantial payment made on another line of business
(e.g.~a large payment is unlikely to be made on a personal motor claim at the same time as a large payment is made on a fleet motor claim).
However, the total sums of claims arising from the lines of business will depend on certain common factors such as prolonged periods of adverse weather or the quality of the underwriting process at the company.
Such common factors will produce dependence across the lines.
An ASP or a Liouville process might be a suitable model for the cumulative paid-claims processes of the lines of motor business.
The one-dimensional marginal processes of a Liouville process are increasing and do not exhibit simultaneous large jumps, but they can display strong correlation.
ASPs can be used to interpolate the dependence structure when using Archimedean copulas in discrete-time models.
Consider a risk model where the marginal distributions of the returns on $n$ assets are fitted for the future dates $t_1<\cdots<t_n<T<\infty$.
An Archimedean copula $C$ is used to model the dependence of the returns to time $T$.
At this stage we have a model for the joint distribution of returns to time $T$, but we have only the one-dimensional marginal distributions at the intertemporal times $t_1,\ldots,t_n$.
The problem then is to choose copulas to complete the joint distributions of the returns to the times $t_1,\ldots,t_n$ in a way that is consistent with the time-$T$ joint distribution.
For each time $t_i$, this can be achieved by using the time-$t_i$ survival copula implied by the Archimedean survival process with survival copula $C$ at terminal time $T$.
This paper is organized as follows: In Section 2, we review multivariate $\ell_1$-norm symmetric distributions, multivariate Liouville distributions, Archimedean copulas and gamma random bridges.
In Section 3, we define ASPs and provide various characterisations of their law.
We also detail how to construct a multivariate process such that each one-dimensional marginal is uniformly distributed.
In Section 4, we generalise ASPs to Liouville processes.
\section{Preliminaries}
This work draws together ideas from mathematical statistics and the theory of stochastic processes.
This extended preliminary section gives relevant background results from both of these subjects.
We fix a probability space $(\Omega,\F,\P)$ and assume that all processes and filtrations under consideration are c\`adl\`ag.
We let $f^{-1}$ denote the generalised inverse of a monotonic function $f$, i.e.~
\begin{equation}
f^{-1}(y)=\left\{\begin{aligned}
& \inf\{x: f(x)\geq y\}, && \text{$f$ increasing,}
\\ & \inf\{x: f(x)\leq y\}, && \text{$f$ decreasing.}
\end{aligned} \right.
\end{equation}
We denote the $\ell_1$ norm of a vector $\mathbf{x}\in\R^n$ by $\|\mathbf{x}\|$, i.e.~
\begin{equation}
\|\mathbf{x}\|=\sum_{i=1}^n |x_i|.
\end{equation}
\subsection{Multivariate distributions}
In this subsection we present some definitions and results from the theory of multivariate distributions.
We refer the reader to the thorough exposition by \citet{FKN1990} for further details.
\subsubsection{Multivariate $\ell_1$-norm symmetric distributions}
The multivariate $\ell_1$-norm symmetric distributions form a family of distributions that are closely related to Archimedean copulas.
The definition of the $n$-dimensional $\ell_1$-norm symmetric distribution is in terms of a random variable uniformly distributed on the simplex
\begin{equation}
\label{eq:simplex}
S=\left\{\mathbf{u}\in [0,1]^n : \|\mathbf{u}\|=1 \right\}.
\end{equation}
Such a random variable $\mathbf{U}$ has the stochastic representation
\begin{equation}
\mathbf{U}\law\frac{\mathbf{E}}{\|\mathbf{E}\|},
\end{equation}
where $\mathbf{E}$ is a vector of $n$ independent, identically-distributed, exponential random variables.
Note that this representation holds for any value of the rate parameter $\lambda>0$ of the exponential random variables, and that the random variable $\|\mathbf{E}\|$ has a gamma distribution with shape parameter $n$, and scale parameter $\lambda^{-1}$.
Each marginal variable $U_i$ has a beta distribution with parameters $\a=1$ and $\b=n-1$;
thus the survival function of $U_i$ is
\begin{equation}
\P[U_i>u]=(1-u)^{n-1},
\end{equation}
for $0\leq u\leq 1$.
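As a numerical illustration (a sketch assuming NumPy; the dimension, sample size, seed and level $u$ below are arbitrary choices), the simplex representation and the beta marginal can be checked by simulation:

```python
import numpy as np

rng = np.random.default_rng(42)
n, N = 4, 200_000

# U = E / ||E||_1, with E a vector of i.i.d. unit-rate exponentials,
# is uniformly distributed on the simplex S
E = rng.exponential(scale=1.0, size=(N, n))
U = E / E.sum(axis=1, keepdims=True)

# each marginal U_i is Beta(1, n-1): P[U_i > u] = (1 - u)^(n-1)
u = 0.3
empirical = (U[:, 0] > u).mean()
theoretical = (1.0 - u) ** (n - 1)

# ||E||_1 has a gamma distribution with shape parameter n (mean n)
norm_mean = E.sum(axis=1).mean()
```

The same check goes through for any rate parameter of the exponentials, since the rate cancels in the ratio.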
\begin{defn} \label{defn:ell1}
A random variable $\mathbf{X}$ taking values in $\R^n$ has a \emph{multivariate $\ell_1$-norm symmetric distribution} if
\begin{equation}
\label{eq:l1rv}
\mathbf{X}\law R \mathbf{U},
\end{equation}
where $R$ is a non-negative random variable, and $\mathbf{U}$ is a random vector uniformly distributed on the simplex $S$.
We say that the law of $R$ is the \emph{generating law} of the distribution.
\end{defn}
\begin{rem}
The construction of multivariate $\ell_1$-norm symmetric random variables is similar to the construction of elliptical random variables.
To be precise, in \textnormal{(\ref{eq:l1rv})} if $\mathbf{U}$ was uniformly distributed on the unit sphere in $\R^n$, then $\mathbf{X}$ would have an elliptical distribution.
\end{rem}
Note that if $R$ admits a density, then $\mathbf{X}$ satisfying (\ref{eq:l1rv}) admits a density, and the contours of this density are simplices.
This is analogous to the elliptical contours of elliptical distributions.
If $\mathbf{X}$ is a multivariate $\ell_1$-norm symmetric random variable with generating law $\nu$, then the survival function of each one-dimensional marginal of $\mathbf{X}$ is
\begin{align}
\bar{F}(x)&=\P[X_i>x] \nonumber
\\ &=\int_x^{\infty} (1-x/r)^{n-1}\, \nu(\dd r), \label{eq:l1survival}
\end{align}
for $x\geq 0$.
The survival function $\bar{F}$ determines the law $\nu$.
Indeed, using the results of \citet{W1956}, \citet{MN2009} showed that
\begin{equation}
\nu([0,x])=1-\sum_{k=0}^{n-2}\frac{(-1)^kx^k\bar{F}^{(k)}_0(x)}{k!}-\frac{(-1)^{n-1}x^{n-1}\max[0,\bar{F}^{(n-1)}_0(x)]}{(n-1)!},
\end{equation}
where $\bar{F}^{(k)}_0$ is the $k$th derivative of $\bar{F}_0$, and
\begin{equation}
\bar{F}_0(x)=\left\{\begin{aligned}
&\bar{F}(x), && x>0 \\
&1-\bar{F}(0), && x=0.
\end{aligned} \right.
\end{equation}
The following theorem provides the multivariate version of (\ref{eq:l1survival}); a proof can be found in \citet[Theorem 5.4]{FKN1990}.
\begin{thm} \label{thm:l1survival}
If $\mathbf{X}$ has a multivariate $\ell_1$-norm symmetric distribution with generating law $\nu$, then the joint survival function of $\mathbf{X}$ is
\begin{align*}
\P[X_1>x_1,X_2>x_2,\ldots,X_n>x_n]&=\int_{\|\mathbf{x}\|}^{\infty} (1-\|\mathbf{x}\|/r)^{n-1} \, \nu(\dd r)
\\ &=\bar{F}\left( \|\mathbf{x}\| \right),
\end{align*}
for $\mathbf{x}\in\R^n_+$.
\end{thm}
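The transform (\ref{eq:l1survival}) can be checked numerically. In the sketch below (NumPy assumed), the generating law is taken, purely for illustration, to be a gamma law with shape parameter $n$ and unit scale; in that case $\mathbf{X}$ has i.i.d.~unit-exponential components, the integral collapses to $\e^{-x}$, and the joint survival function of Theorem \ref{thm:l1survival} is $\e^{-\|\mathbf{x}\|}$:

```python
import numpy as np

n = 4

def factorial(k):
    return float(np.prod(np.arange(1, k + 1))) if k > 0 else 1.0

def nu_density(r):
    # illustrative generating law: gamma with shape n, unit scale
    return r ** (n - 1) * np.exp(-r) / factorial(n - 1)

def survival(x):
    # midpoint-rule evaluation of the mixture representation
    # F(x) = integral_x^inf (1 - x/r)^(n-1) nu(dr)
    h = 1e-4
    r = np.arange(x + h / 2.0, x + 60.0, h)   # truncate the far tail
    return float(np.sum((1.0 - x / r) ** (n - 1) * nu_density(r)) * h)

# for this nu the integral equals exp(-x), the independence case
```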
\subsubsection{Multivariate Liouville distributions}
The multivariate Liouville distribution is an extension of the multivariate $\ell_1$-norm symmetric distribution.
Before defining the multivariate Liouville distribution, it is convenient to first define the Dirichlet distribution.
The $n$-dimensional Dirichlet distribution is a distribution on the simplex $S$ defined in (\ref{eq:simplex}).
\begin{defn} \label{def:dirichlet}
Let $\mathbf{G}$ be a vector of independent random variables such that $G_i$ is a gamma random variable with shape parameter $\a_i>0$ and scale parameter unity.
Then the random vector
\begin{equation}
\label{eq:DRV}
\mathbf{D}=\frac{\mathbf{G}}{\|\mathbf{G}\|},
\end{equation}
has a \emph{Dirichlet distribution} with \emph{parameter vector} $\ab=(\a_1,\ldots,\a_n)^\tp$.
\end{defn}
\begin{rem}
The scaling property of the gamma distribution implies that $\kappa \mathbf{G}$, $\kappa>0$, is a vector of gamma random variables each with scale parameter $\kappa$.
Since \textnormal{(\ref{eq:DRV})} is unchanged when $\mathbf{G}$ is replaced by $\kappa \mathbf{G}$, an arbitrary positive scale parameter could have been used in Definition \ref{def:dirichlet}.
\end{rem}
In two dimensions, a Dirichlet random variable can be written as $(B,1-B)^\tp$, where $B$ is a beta random variable.
If all the elements of the parameter vector $\ab$ are identical, then $\mathbf{D}$ is said to have a \emph{symmetric} Dirichlet distribution.
Notice that if $\a_i=1$ for $i=1,2,\ldots,n$, then $\mathbf{D}$ is uniformly distributed on the simplex $S$.
The density of $(D_1,D_2,\ldots,D_{n-1})^\tp$ is
\begin{equation}
\label{eq:Dir_den}
\mathbf{x} \mapsto \frac{\G\left[\|\ab\|\right]}{\prod_{i=1}^{n}\G[\a_i]} \prod_{i=1}^n x_i^{\a_i-1},
\end{equation}
for $\mathbf{x}\in[0,1]^{n-1}$, $\|\mathbf{x}\| \leq 1$,
where $x_n=1-\sum_{i=1}^{n-1} x_i$, and $\G[x]$ is the gamma function, defined as usual for $x>0$ by
\begin{equation}
\G[x]=\int_0^\infty u^{x-1} \e^{-u} \d u.
\end{equation}
The first- and second-order moments of the Dirichlet distribution are given by
\begin{align}
\E[D_i]&=\frac{\a_i}{\|\ab\|},
\\ \var[D_i]&=\frac{\a_i(\|\ab\|-\a_i)}{\|\ab\|^2(\|\ab\|+1)},
\\ \cov[D_i,D_j]&=-\frac{\a_i\a_j}{\|\ab\|^2(\|\ab\|+1)}, \qquad \text{for $i\ne j$.}
\end{align}
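These moment formulas are easily sanity-checked by simulation (a sketch assuming NumPy; the parameter vector, seed and sample size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
alpha = np.array([2.0, 3.0, 5.0])   # illustrative parameter vector
a0 = alpha.sum()

D = rng.dirichlet(alpha, size=500_000)

# first- and second-order moments of the Dirichlet distribution
mean_th = alpha / a0
var_th = alpha * (a0 - alpha) / (a0 ** 2 * (a0 + 1.0))
cov01_th = -alpha[0] * alpha[1] / (a0 ** 2 * (a0 + 1.0))

mean_emp = D.mean(axis=0)
var_emp = D.var(axis=0)
cov01_emp = np.cov(D[:, 0], D[:, 1])[0, 1]
```

Note that the pairwise covariances are always negative, as they must be for components constrained to sum to one.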
The Dirichlet distribution is an extension of the uniform distribution on the simplex $S$.
The multivariate Liouville distribution is a similar extension of the multivariate $\ell_1$-norm symmetric distribution.
\begin{defn} \label{def:liouville}
A random variable $\mathbf{X}$ has a \emph{multivariate Liouville distribution} if
\begin{align}
\mathbf{X}\law R \mathbf{D}, \label{eq:MLRV}
\end{align}
for $R\geq 0$ a random variable, and $\mathbf{D}$ a Dirichlet random variable with parameter vector $\ab$.
We call the law of $R$ the \emph{generating law} and $\ab$ the \emph{parameter vector} of the distribution.
\end{defn}
In the case where $R$ has a density $p$, the density of $\mathbf{X}$ exists and can be written as
\begin{equation}
\label{eq:MLD}
\mathbf{x}\mapsto \G[\|\ab\|]\frac{p\left(\|\mathbf{x}\|\right)}{\left(\|\mathbf{x}\|\right)^{\|\ab\|-1}} \prod_{i=1}^n \frac{x_i^{\a_i-1}}{\G[\a_i]},
\end{equation}
for $\mathbf{x}\in\R_+^n$. Writing $\mu_1=\E[R]$ and $\mu_2=\E[R^2]$ (when these moments exist), the first- and second-order moments of $\mathbf{X}$ are given by
\begin{align}
\E[X_i]&=\mu_1\frac{\a_i}{\|\ab\|}, \label{eq:Expected}
\\ \var[X_i]&=\frac{\a_i}{\|\ab\|}\left(\mu_2\frac{\a_i+1}{\|\ab\|+1}-\mu_1^2\frac{\a_i}{\|\ab\|}\right), \label{eq:Var}
\\ \cov[X_i,X_j]&=\frac{\a_i\a_j}{\|\ab\|}\left(\frac{\mu_2}{\|\ab\|+1}-\frac{\mu_1^2}{\|\ab\|}\right), \qquad \text{for $i\ne j$.} \label{eq:Covar}
\end{align}
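The moment formulas (\ref{eq:Expected})--(\ref{eq:Covar}) can likewise be verified by simulating $\mathbf{X}=R\mathbf{D}$ (a sketch assuming NumPy; the gamma generating law and parameter vector are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(11)
alpha = np.array([2.0, 3.0, 5.0])   # illustrative parameter vector
a0 = alpha.sum()
k = 3.0                             # illustrative choice: R ~ Gamma(k, 1)

N = 500_000
R = rng.gamma(k, 1.0, size=N)
D = rng.dirichlet(alpha, size=N)
X = R[:, None] * D                  # X = R D has a Liouville distribution

mu1, mu2 = k, k * (k + 1.0)         # E[R], E[R^2] for this gamma law

mean_th = mu1 * alpha / a0
var_th = (alpha / a0) * (mu2 * (alpha + 1.0) / (a0 + 1.0)
                         - mu1 ** 2 * alpha / a0)
cov01_th = (alpha[0] * alpha[1] / a0) * (mu2 / (a0 + 1.0) - mu1 ** 2 / a0)
```

Unlike the Dirichlet case, the covariances here can take either sign, depending on the variance of $R$.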
\subsection{Archimedean copulas}
A copula is a distribution function on the unit hypercube with the added property that each one-dimensional marginal distribution is uniform.
For further details, we refer to \citet{N2006}.
We define a copula as follows:
\begin{defn}
An $n$-copula defined on the $n$-dimensional unit hypercube $[0,1]^{n}$ is a function $C:[0,1]^{n}\rightarrow [0,1]$, which satisfies the following:
\begin{enumerate}
\item $C(\textbf{u})=0$ whenever $u_{j}=0$ for at least one $j=1,2,\ldots,n$.
\item $C(\textbf{u})=u_{j}$ if $u_{i}=1$ for all $i\neq j$.
\item \label{num:dist} $C$ is $n$-increasing on $[0,1]^{n}$, that is
\begin{equation}
\sum_{i_1=1}^2\cdots\sum_{i_n=1}^2 (-1)^{i_1+\cdots+i_n} C(u_{1,i_1},\ldots,u_{n,i_n}) \geq 0,
\end{equation}
for all $(u_{1,1},u_{2,1},\ldots,u_{n,1})^\tp$ and $(u_{1,2},u_{2,2},\ldots,u_{n,2})^\tp$ in $[0,1]^n$ with $u_{j,1}\leq u_{j,2}$.
\end{enumerate}
\end{defn}
In the definition above, condition \ref{num:dist} is necessary to ensure that the function $C$ is a well-defined distribution function.
The theory of copulas is founded upon a theorem of Sklar.
This theorem was reformulated in terms of survival functions by \citet{MN2009} as follows:
\begin{thm}
Let $\bar{H}$ be an $n$-dimensional survival function with margins $\bar{F}_i$, $i=1,2,\ldots,n$.
Then there exists a copula $C$, referred to as the \emph{survival copula} of $\bar{H}$, such that, for any $\mathbf{x}\in\R^n$,
\begin{equation}
\label{eq:survfn}
\bar{H}(\mathbf{x})=C(\bar{F}_1(x_1),\ldots,\bar{F}_n(x_n)).
\end{equation}
Furthermore, $C$ is uniquely determined on
\[ D=\left\{ \mathbf{u}\in[0,1]^n:\mathbf{u}\in\mathrm{ran } \bar{F}_1 \times \cdots \times \mathrm{ran } \bar{F}_n\right\}, \]
where $\mathrm{ran } f$ denotes the range of $f$.
In addition, for any $\mathbf{u}\in D$,
\[ C(\mathbf{u})=\bar{H}(\bar{F}^{-1}_1(u_1),\ldots,\bar{F}^{-1}_n(u_n)).\]
Conversely, given a copula $C$ and univariate survival functions $\bar{F}_i$, $i=1,\ldots,n$, $\bar{H}$ defined by \textnormal{(\ref{eq:survfn})}
is an $n$-dimensional survival function with margins $\bar{F}_1,\ldots,\bar{F}_n$ and survival copula $C$.
\end{thm}
From a modelling perspective, one of the attractive features of copulas is that they allow the fitting of one-dimensional marginal distributions to be performed separately from the fitting of cross-sectional dependence.
However, this two-step approach of modelling multivariate phenomena by first specifying marginals and then choosing a copula is not suited to all situations
(for criticism see, for example, \citet{M2006}).
Archimedean copulas are copulas that take a particular functional form.
The following definition, given by \citet{MN2009}, is convenient for the present work:
\begin{defn}
A decreasing and continuous function $h:[0,\infty)\rightarrow[0,1]$ which satisfies the conditions $h(0)=1$ and $\lim_{x\rightarrow\infty}h(x)=0$,
and is strictly decreasing on $[0,\inf\{x:h(x)=0\}]$ is called an \emph{Archimedean generator}.
An $n$-dimensional copula $C$ is called an \emph{Archimedean copula} if it permits the representation
\[ C(\mathbf{u})=h(h^{-1}(u_1)+\cdots+h^{-1}(u_n)), \qquad \mathbf{u}\in[0,1]^n,\]
for some Archimedean generator $h$ with inverse $h^{-1}:[0,1]\rightarrow[0,\infty)$, where we set $h(\infty)=0$ and $h^{-1}(0)=\inf\{u:h(u)=0\}$.
\end{defn}
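For concreteness, the Clayton generator $h(x)=(1+x)^{-1/\theta}$, $\theta>0$, is an Archimedean generator, and in two dimensions it produces the copula $(u^{-\theta}+v^{-\theta}-1)^{-1/\theta}$. A minimal sketch of the generator representation (Clayton is an illustrative choice, not one required by the theory):

```python
theta = 2.0                                  # illustrative Clayton parameter

def h(x):
    # Clayton generator: h(x) = (1 + x)^(-1/theta)
    return (1.0 + x) ** (-1.0 / theta)

def h_inv(u):
    # inverse generator on (0, 1]
    return u ** (-theta) - 1.0

def C(u):
    # Archimedean copula C(u) = h(h^{-1}(u_1) + ... + h^{-1}(u_n))
    return h(sum(h_inv(ui) for ui in u))

# closed form for the bivariate Clayton copula
u, v = 0.4, 0.7
closed_form = (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)
```

The boundary conditions of the copula definition (setting any argument to one leaves the others unchanged) follow from $h^{-1}(1)=0$.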
If $\mathbf{X}$ is a random vector with a multivariate $\ell_1$-norm symmetric distribution such that $\P[\mathbf{X}=\mathbf{0}]=0$,
then its marginal survival function $\bar{F}$ given in (\ref{eq:l1survival}) is continuous.
Hence it follows from Theorem \ref{thm:l1survival} that
\begin{align}
\P[\bar{F}(X_1)\leq u_1,\bar{F}(X_2)\leq u_2,\ldots,\bar{F}(X_n)\leq u_n]=\bar{F}\left(\sum^{n}_{i=1}\bar{F}^{-1}(u_i)\right).
\end{align}
In other words, $\mathbf{X}$ has an Archimedean survival copula with generator $h(x)=\bar{F}(x)$.
\citet{MN2009} showed that the converse is also true:
\begin{thm}
Let $\mathbf{U}$ be a random vector whose distribution function is an $n$-dimensional Archimedean copula $C$ with generator $h$.
Then $(h^{-1}(U_1),h^{-1}(U_2),\ldots,h^{-1}(U_n))^{\tp}$ has a multivariate $\ell_1$-norm symmetric distribution with survival copula $C$ and generating law $\nu$.
Furthermore, $\nu$ is uniquely determined by
\begin{equation*}
\nu([0,x])=1-\sum_{k=0}^{n-2}\frac{(-1)^kx^kh^{(k)}(x)}{k!}-\frac{(-1)^{n-1}x^{n-1}\max[0,h^{(n-1)}(x)]}{(n-1)!}.
\end{equation*}
\end{thm}
\begin{rem}
There is a one-to-one mapping from distribution functions on the positive half-line to the class of $n$-dimensional Archimedean copulas
through the invertible transformation $\nu \leftrightarrow h$.
\end{rem}
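This correspondence yields a direct sampling recipe for such copulas: draw $\mathbf{X}=R\mathbf{U}$ and apply the marginal survival function $\bar{F}=h$ componentwise. The sketch below (NumPy assumed; seed and levels arbitrary) takes the simplest case $R\equiv 1$, for which $\mathbf{X}$ is uniform on the simplex and, by the beta marginal noted earlier, $h(x)=(1-x)^{n-1}$:

```python
import numpy as np

rng = np.random.default_rng(3)
n, N = 3, 200_000

# with R = 1 a.s. the generator is the marginal survival function of
# the uniform law on the simplex: h(x) = (1 - x)^(n-1)
def h(x):
    return np.clip(1.0 - x, 0.0, None) ** (n - 1)

def h_inv(u):
    return 1.0 - u ** (1.0 / (n - 1))

# X = R * E/||E|| with R = 1: uniformly distributed on the simplex S
E = rng.exponential(size=(N, n))
X = E / E.sum(axis=1, keepdims=True)

# the distribution function of (h(X_1), ..., h(X_n)) is the
# Archimedean copula C(u) = h(h^{-1}(u_1) + ... + h^{-1}(u_n))
V = h(X)
marg = (V[:, 0] <= 0.5).mean()            # each margin is Uniform(0,1)

u = np.array([0.6, 0.7, 0.8])
joint_emp = np.mean(np.all(V <= u, axis=1))
joint_th = h(h_inv(u).sum())
```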
\subsection{Gamma random bridges}
A gamma random bridge is an increasing stochastic process, and both the gamma process and gamma bridge are special cases.
\subsubsection{Gamma process}
A gamma process is a subordinator (an increasing \levy process) with gamma distributed increments (see, for example, \citet{Sato1999}).
The law of a gamma process is uniquely determined by its mean and variance at time 1, which are both positive.
Let $\{\g_t\}$ be a gamma process with mean and variance $m>0$ at time 1; then
\begin{align}
\label{eq:gamma_choice}
\E[\g_t]=mt, \hspace{0.1in} \text{and} \hspace{0.1in} \var[\g_t]=mt.
\end{align}
The density of $\g_t$ is
\begin{equation}
\label{eq:gamma_m_den}
f_t(x)= \1_{\{x>0\}} \frac{ x^{mt-1}}{\G[mt]} \e^{-x}.
\end{equation}
Due to the scaling property of the gamma distribution, if $\kappa>0$ then the process $\{\kappa \g_t\}$ is a gamma process with mean $m\kappa$, and variance $m\kappa^2$ at $t=1$.
The characteristic function of $\g_t$ is
\begin{equation}
\E[\e^{\i\l\g_t}]=(1-\i\l)^{-mt}, \qquad \text{for $\l\in\R$.}
\end{equation}
As noted in \citet{BHM3}, the parameter $m$ has units of inverse time, and so $\{\g_t\}$ is dimensionless.
Taking $\k=1/m$, the scaled process $\{\k \g_t\}$ has units of time, making this alternative parameterisation suitable as a basis for a stochastic time change
(see, for example, \citet{MS1990}).
The characteristic function of $\k\g_t$ is then
\begin{equation}
\E[\e^{\i\l\k\g_t}]=(1-\i\l/m)^{-mt}.
\end{equation}
In the limit $m\rightarrow \infty$, this characteristic function tends to $\e^{\i\l t}$, which is the characteristic function of the Dirac measure centred at $t$.
It follows that $\{\k \g_t\}\overset{\text{law}}{\longrightarrow} \{t\}$ as $m\rightarrow \infty$.
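This degeneration can be observed numerically (a minimal sketch assuming NumPy; the values of $\l$, $t$ and the grid of $m$'s are arbitrary):

```python
import numpy as np

lam, t = 1.3, 2.0

def phi(m):
    # characteristic function of kappa * gamma_t with kappa = 1/m:
    # (1 - i lam / m)^(-m t)
    return (1.0 - 1j * lam / m) ** (-m * t)

limit = np.exp(1j * lam * t)

# the error |phi(m) - exp(i lam t)| decays roughly like t lam^2 / (2m)
errors = [abs(phi(m) - limit) for m in (10, 100, 10_000)]
```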
\subsubsection{Gamma bridge}
A gamma bridge is a gamma process conditioned to have a fixed value at a fixed future time.
A gamma bridge is a \levy bridge, and hence a Markov process.
Gamma bridges exhibit a number of remarkable similarities to Brownian bridges, some of which have been presented by \citet{EY2004}.
Let $\{\g_{tT}\}_{0\leq t\leq T}$ be a gamma bridge identical in law to the gamma process $\{\g_t\}$ pinned to the value 1 at time $T$.
By Bayes' theorem, the transition law of $\{\g_{tT}\}$ is given by
\begin{align}
\P\left[\g_{tT}\in \dd y \left|\, \g_{sT}=x\right.\right]&=\P\left[\g_t\in \dd y \left|\, \g_s=x, \g_T=1 \right.\right] \nonumber
\\ &=\frac{f_{t-s}(y-x) f_{T-t}(1-y)}{f_{T-s}(1-x)} \nonumber
\\ &=\1_{\{x<y< 1\}}\frac{\left(\frac{y-x}{1-x}\right)^{m(t-s)-1} \left(\frac{1-y}{1-x}\right)^{m(T-t)-1}}
{(1-x)\mathrm{B}[m(t-s),m(T-t)]} \d y, \label{eq:gambrtrans}
\end{align}
for $0\leq s <t\leq T$ and $0\leq x< 1$.
Here $\mathrm{B}[\a,\b]$ is the beta function, defined for $\a,\b>0$ by
\begin{equation}
\mathrm{B}[\a,\b]=\int_0^1 x^{\a-1}(1-x)^{\b-1}\d x=\frac{\G[\a]\G[\b]}{\G[\a+\b]}.
\end{equation}
We say that $m$ is the \emph{activity parameter} of $\{\g_{tT}\}$.
If the gamma bridge $\{\g_{tT}\}$ has reached the value $x$ at time $s$, then it must yet travel a distance $1-x$ over the time period $(s,T]$.
Equation (\ref{eq:gambrtrans}) shows that the proportion of this distance that the gamma bridge will cover over $(s,t]$ is a random variable with a beta distribution
(with parameters $\a=m(t-s)$ and $\b=m(T-t)$).
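This beta-proportion property yields a simple sequential scheme for simulating gamma bridge sample paths (a sketch assuming NumPy; the grid, activity parameter and number of paths are illustrative):

```python
import numpy as np

def gamma_bridge_path(rng, times, m, T):
    """Sample a gamma bridge on the given grid: conditional on the
    bridge being at x at time s, the proportion of the remaining
    distance 1 - x covered over (s, t] is Beta(m(t-s), m(T-t))."""
    path = [0.0]
    for s, t in zip(times[:-1], times[1:]):
        x = path[-1]
        frac = 1.0 if t >= T else rng.beta(m * (t - s), m * (T - t))
        path.append(x + (1.0 - x) * frac)
    return np.array(path)

rng = np.random.default_rng(1)
m, T = 2.0, 1.0
times = np.linspace(0.0, T, 11)

paths = np.array([gamma_bridge_path(rng, times, m, T)
                  for _ in range(20_000)])
mean_mid = paths[:, 5].mean()   # E[gamma_{tT}] = t/T, so ~0.5 at t = 0.5
```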
The conditional characteristic function of $\g_{tT}$ is
\begin{equation}
\label{eq:gambrcf}
\E\left[\left. \e^{\i\l\g_{tT}} \,\right| \g_{sT}=x \right]=M[m(t-s),m(T-s),\i(1-x)\l],
\end{equation}
where $M[\a,\b,z]$ is Kummer's confluent hypergeometric function of the first kind, which can be expanded as the power series \citep[13.1.2]{AS1964}
\begin{equation}
\label{eq:Kummer}
M[\a,\b,z]=1+\frac{\a}{\b}z+\frac{\a(\a+1)}{\b(\b+1)}\frac{z^2}{2!}+\frac{\a(\a+1)(\a+2)}{\b(\b+1)(\b+2)}\frac{z^3}{3!}+\cdots.
\end{equation}
Taking the limit as $m\rightarrow\infty$ in (\ref{eq:gambrcf}), we have
\begin{align}
\E\left[\left. \e^{\i\l\g_{tT}} \,\right| \g_{sT} =x\right]&\rightarrow \sum_{k=0}^{\infty}\left(\frac{t-s}{T-s} \right)^k \frac{(\i(1-x)\l)^k}{k!} \nonumber
\\&=\exp\left( \i\frac{t-s}{T-s}(1-x)\l \right),
\end{align}
which is the characteristic function of the Dirac measure centred at $(1-x)(t-s)/(T-s)$.
It then follows from the Markov property of gamma bridges that $\{\g_{tT}\}\overset{\text{law}}{\longrightarrow}\{t/T\}$ as $m\rightarrow \infty$.
It is a property of gamma processes that the renormalised process $\{\g_t/\g_T\}_{0\leq t \leq T}$ is independent of $\g_T$
(indeed, this independence property characterises the gamma process among \levy processes).
This leads to the remarkable identity
\begin{equation}
\label{eq:gamma_ratio}
\left\{ \frac{\g_t}{\g_T} \right\}\law \{\g_{tT}\}.
\end{equation}
The identity (\ref{eq:gamma_ratio}) can be proved by showing that the process on the left-hand side is Markov, and then verifying that its transition law is the same as (\ref{eq:gambrtrans}).
This can be done using the results in \citet{BHM3}.
We note three properties of gamma bridges that follow from (\ref{eq:gamma_ratio}).
The first is that the bridge of the scaled gamma process $\{\kappa \g_t\}$ is, for any $\k>0$, identical in law to the bridge of the unscaled process.
The second property is that a $\{\g_t\}$-bridge to the value $z>0$ at time $T$ is identical in law to the process $\{z \g_{tT}\}$.
The third is that the joint distribution of increments of a gamma bridge is Dirichlet.
To see this last fact, fix times $0=t_0<t_1<\cdots<t_n=T$ and define
\begin{align}
\bar{\D}_i&=\g_{t_i}-\g_{t_{i-1}},
\\ \D_i&=\g_{t_i,T}-\g_{t_{i-1},T}.
\end{align}
Then $\bar{\D}_i$ has a gamma distribution with shape parameter $\a_i=m(t_i-t_{i-1})$ and scale parameter unity.
Hence
\begin{align}
(\D_1,\D_2,\ldots,\D_n)&\law \frac{(\bar{\D}_1,\bar{\D}_2,\ldots,\bar{\D}_n)}{\|(\bar{\D}_1,\bar{\D}_2,\ldots,\bar{\D}_n)\|}.
\end{align}
The fact then follows from Definition \ref{def:dirichlet}.
\subsubsection{Gamma random bridge}
Under a different name, the gamma random bridge was introduced by \citet{RN2} as a model for a cumulative payment process in an insurance model.
It was also studied in detail by \citet{BHM3}.
We define a gamma random bridge as follows:
\begin{defn}
\label{def:GRB}
The process $\{\G_{t}\}_{0\leq t \leq T}$ is a \emph{gamma random bridge} if
\begin{equation} \label{eq:GRB}
\{\G_{t}\}\law \{R \g_{tT}\},
\end{equation}
for $R>0$ a random variable, and $\{\g_{tT}\}$ a gamma bridge.
We say that $\{\G_{t}\}$ has \emph{generating law $\nu$} and \emph{activity parameter $m$}, where $\nu$ is the law of $R$ and $m$ is the activity parameter of $\{\g_{tT}\}$.
\end{defn}
\begin{rem}
Suppose that $\{\G_t\}$ is a GRB satisfying \textnormal{(\ref{eq:GRB})}.
If $\P[R=z]=1$ for some $z>0$, then $\{\G_t\}$ is a gamma bridge.
If $R$ is a gamma random variable with shape parameter $mT$ and scale parameter $\k$,
then $\{\G_t\}$ is a gamma process such that $\E[\G_t]=m\k t$ and $\var[\G_t]=m\k^2t$, for $t\in[0,T)$.
\end{rem}
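The gamma-process special case in the remark above is easy to check by simulation, using $\G_t=R\g_{tT}$ together with the beta marginal of the gamma bridge (a sketch assuming NumPy; the parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
m, T, kappa = 2.0, 1.0, 1.5
t = 0.4
N = 400_000

# Gamma_t = R * gamma_{tT}: R ~ Gamma(mT, kappa), and the marginal
# gamma_{tT} ~ Beta(mt, m(T-t)), independent of R
R = rng.gamma(m * T, kappa, size=N)
B = rng.beta(m * t, m * (T - t), size=N)
G_t = R * B

# for this choice of R the GRB is a gamma process on [0, T):
mean_th = m * kappa * t          # E[Gamma_t] = m kappa t
var_th = m * kappa ** 2 * t      # Var[Gamma_t] = m kappa^2 t
```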
Gamma random bridges (GRBs) fall within the class of \levy random bridges described by \citet{HHM1}.
The process $\{\G_{t}\}$ is identical in law to a gamma process defined over $[0,T]$ conditioned to have the law of $R$ at time $T$.
The bridges of a GRB are gamma bridges.
GRBs are Markov processes, and the transition law of $\{\G_{t}\}$ is given by
\begin{multline}
\label{eq:grb_law1}
\P[\G_{t} \in \dd y \,|\, \G_{s}=x] \\
=\frac{\1_{\{y>x\}}}{\mathrm{B}[m(T-t),m(t-s)]}\frac{\int_{y}^{\infty}(z-y)^{m(T-t)-1}z^{1-mT}\,
\nu(\dd z)}{\int_{x}^{\infty}(z-x)^{m(T-s)-1}z^{1-mT}\,\nu(\dd z)}(y-x)^{m(t-s)-1} \d y,
\end{multline}
and
\begin{equation}
\label{eq:grb_law2}
\P[\G_{T} \in \dd y \,|\, \G_{s}=x]
=\frac{\1_{\{y>x\}}(y-x)^{m(T-s)-1}y^{1-mT}\,\nu(\dd y)}{\int_{x}^{\infty}(z-x)^{m(T-s)-1}z^{1-mT}\,\nu(\dd z)},
\end{equation}
where $\mathrm{B}[\a,\b]$ is the beta function.
Since increments of a gamma bridge have a Dirichlet distribution,
it follows from Definition \ref{def:liouville} that the increments of a gamma random bridge have a multivariate Liouville distribution.
The following proposition, stated as a corollary by \citet{HHM1} for a general \levy random bridge, is a key result for the construction of Archimedean survival processes:
\begin{prop} \label{lem:crucial}
Let $\{\G_{t}\}$ be a GRB with generating law $\nu$ and activity parameter $m$.
\begin{enumerate}
\item [\textnormal{({\bf A})}]
Fix times $s_1\geq 0$ and $T_1$ satisfying $0<T_1\leq T-s_1$. The time-shifted, space-shifted partial process
\begin{equation}
\xi_{t}^{(1)}=\G_{s_1+t}-\G_{s_1}, \qquad (0\leq t \leq T_1), \notag
\end{equation}
is a gamma random bridge with activity parameter $m$, and with generating law
\[ \nu^{(1)}(\dd x)=\frac{x^{mT_1-1}}{\B[mT_1,m(T-T_1)]} \int_{z=x}^{\infty} z^{1-mT}(z-x)^{m(T-T_1)-1} \nu(\d z) \d x. \]
\item[\textnormal{({\bf B})}]
Construct partial processes $\{\xi^{(i)}_{t}\}_{0\leq t \leq T_i}$, $i=1,\ldots,n$, from non-overlapping portions of $\{\G_{t}\}$, in a similar way to that above.
The intervals $[s_i,s_i+T_i]$, $i=1,\ldots,n$, are non-overlapping except possibly at the endpoints.
Set $\xi^{(i)}_{t}=\xi^{(i)}_{T_i}$ when $t>T_i$.
If $u>t$, then
\begin{multline}
\P\left[\left. \xi^{(1)}_{u}-\xi^{(1)}_{t}\leq x_1,\ldots,\xi^{(n)}_{u}-\xi^{(n)}_{t}\leq x_n \,\right| \F_t \right]=
\\ \P\left[ \xi^{(1)}_{u}-\xi^{(1)}_{t}\leq x_1,\ldots,\xi^{(n)}_{u}-\xi^{(n)}_{t}\leq x_n \left|\, \sum_{i=1}^n \xi^{(i)}_{t} \right.\right], \notag
\end{multline}
where the filtration $\{\F_{t}\}$ is given by
\begin{equation}
\F_{t}=\s\left(\left\{ \xi^{(i)}_{s} \right\}_{0\leq s \leq t}, i=1,2,\ldots,n \right). \notag
\end{equation}
\end{enumerate}
\end{prop}
\begin{rem}
Define the process $\{R_t\}$ by
\begin{equation}
R_t=\sum_{i=1}^n \xi^{(i)}_{t},
\end{equation}
for $t\in[0,\max_i T_i]$.
Then $\{R_t\}$ is a GRB with generating law $\nu$, and time-dependent activity parameter
\begin{equation}
M(t)=m \sum_{i=1}^n \1_{\{t\leq T_i\}}.
\end{equation}
The proof of this result is similar to the proof of the special case that appears later in \textnormal{Proposition \ref{prop:R_GRB}}.
\end{rem}
We can construct an $n$-dimensional Markov process $\{\xib_t\}$ from the partial processes of Proposition \ref{lem:crucial}, part (B), by setting
\begin{equation}
\xib_t=(\xi^{(1)}_t,\ldots,\xi^{(n)}_t)^\tp.
\end{equation}
The Markov property means that, for any fixed time $s\geq 0$,
the $\F_s$-conditional law of $\{\xib_t\}_{s\leq t}$ is identical to the $\xib_s$-conditional law of $\{\xib_t\}_{s\leq t}$.
The remarkable feature of Proposition \ref{lem:crucial}, part (B), is that
the $\F_s$-conditional law of $\{\xib_t-\xib_s\}_{s\leq t}$ is identical to the $R_s$-conditional law of $\{\xib_t-\xib_s\}_{s\leq t}$.
Hence the increment probabilities of the $n$-dimensional process $\{\xib_t\}$ can be described by the one-dimensional state process $\{R_t\}$.
\section{Archimedean survival process}
We construct an Archimedean survival process (ASP) by splitting a gamma random bridge into $n$ non-overlapping subprocesses.
We start with a `master' GRB $\{\G_t\}_{0\leq t \leq n}$ with activity parameter $m=1$ and generating law $\nu$, where $n\in\N_+$, $n\geq 2$.
In this section, we write $f_t$ for the gamma density with shape parameter $t$ and unit scale parameter (that is, we set $m=1$ in (\ref{eq:gamma_m_den})).
That is
\begin{equation} \label{eq:gam_den_2}
f_t(x)=\frac{x^{t-1} \e^{-x}}{\G[t]}.
\end{equation}
\begin{defn}
The process $\{\xib_{t}\}_{0\leq t \leq 1}$ is an \emph{$n$-dimensional Archimedean survival process} if
\begin{equation*}
\{\xib_t\}_{0\leq t \leq 1}\law\left\{\left[ \begin{aligned}
& \G_{t}-\G_{0}
\\ & \vdots
\\ & \G_{(i-1)+t}-\G_{i-1}
\\ & \vdots
\\ & \G_{(n-1)+t}-\G_{n-1}
\end{aligned} \right]\right\}_{0\leq t \leq 1}
\end{equation*}
where $\{\G_t\}_{0\leq t \leq n}$ is a gamma random bridge with activity parameter $m=1$.
We say that the generating law of $\{\G_t\}$ is the \emph{generating law} of $\{\xib_{t}\}$.
\end{defn}
Note that, from Definition \ref{def:GRB}, $\P[\G_n=0]=0$, and so $\P[\xib_t=\mathbf{0}]=0$.
Each one-dimensional marginal process of an ASP is a subprocess of a GRB, and hence a GRB.
Thus ASPs are a multivariate generalisation of GRBs.
We defined ASPs over the time interval $[0,1]$;
it is straightforward to restate the definition to cover an arbitrary closed interval.
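The construction in the definition can be sketched directly in code: simulate the master GRB on $[0,n]$ via the beta-proportion property of gamma bridges, then split it into $n$ subprocesses (NumPy assumed; the generating law, dimension and grid below are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(9)
n = 3                       # dimension of the ASP
steps = 10                  # grid points per unit interval

# master GRB on [0, n] with activity m = 1: Gamma_t = R * gamma_{t,n},
# where R is drawn from the generating law (illustratively Gamma(2, 1))
R = rng.gamma(2.0, 1.0)

# sample the gamma bridge gamma_{t,n} on the grid via beta proportions
grid = np.linspace(0.0, n, n * steps + 1)
g = [0.0]
for s, t in zip(grid[:-1], grid[1:]):
    x = g[-1]
    frac = 1.0 if t >= n else rng.beta(t - s, n - t)   # m = 1
    g.append(x + (1.0 - x) * frac)
master = R * np.array(g)

# split into n subprocesses: xi^(i)_t = Gamma_{(i-1)+t} - Gamma_{i-1}
xi = np.array([master[i * steps:(i + 1) * steps + 1] - master[i * steps]
               for i in range(n)])

terminal = xi[:, -1]        # (xi^(1)_1, ..., xi^(n)_1); sums to Gamma_n = R
```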
\begin{prop}
The terminal value of an Archimedean survival process has an Archimedean survival copula.
\end{prop}
\begin{proof}
Let $\{\xib_{t}\}$ be an $n$-dimensional ASP with generating law $\nu$. Then we have
\begin{align}
\P[\xib_1\in\dd \mathbf{x}]&=\P\left[\G_1\in\dd x_1,\G_2-\G_1\in\dd x_2,\ldots,\G_n-\G_{n-1}\in\dd x_n\right] \nonumber
\\ &=\P\left[R\frac{\g_1}{\g_n}\in\dd x_1,R\frac{\g_2-\g_1}{\g_n}\in\dd x_2,\ldots,R\frac{\g_n-\g_{n-1}}{\g_n}\in\dd x_n\right],
\end{align}
for $\mathbf{x}\in\R^n$, $R$ a random variable with law $\nu$, and $\{\g_t\}$ a gamma process such that $\g_t$ has the density (\ref{eq:gam_den_2}).
Each increment $\g_i-\g_{i-1}$ has an exponential distribution (with unit rate).
Thus
\begin{equation}
\P[\xib_1\in\dd \mathbf{x}]=\P\left[R\frac{\mathbf{E}}{\|\mathbf{E}\|}\in \dd \mathbf{x}\right],
\end{equation}
for $\mathbf{E}$ an $n$-vector of independent, identically-distributed, exponential random variables.
From Definition \ref{defn:ell1}, $\xib_1$ has a multivariate $\ell_1$-norm symmetric distribution. Therefore, it has an Archimedean survival copula.
\end{proof}
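The proposition can be illustrated numerically. Taking the generating law to be gamma with shape parameter $n$ and unit scale (an illustrative choice under which $\xib_1$ has i.i.d.~unit-exponential components, so the survival copula is the independence copula and $\bar{F}(x)=\e^{-x}$):

```python
import numpy as np

rng = np.random.default_rng(21)
n, N = 4, 200_000

# terminal value of an ASP: xi_1 = R * E/||E|| with R drawn from the
# generating law; here nu = Gamma(n, 1), so xi_1 has i.i.d.
# unit-exponential components (the independence case)
R = rng.gamma(n, 1.0, size=N)
E = rng.exponential(size=(N, n))
X = R[:, None] * E / E.sum(axis=1, keepdims=True)

# marginal survival function F(x) = exp(-x); the distribution function
# of (F(X_1), ..., F(X_n)) is the Archimedean survival copula, here
# C(u) = u_1 * ... * u_n
V = np.exp(-X)
u = np.array([0.5, 0.6, 0.7, 0.8])
joint_emp = np.mean(np.all(V <= u, axis=1))
joint_th = u.prod()
```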
\begin{rem}
Let $g_{i}:\R_{+}\rightarrow\R$ be strictly decreasing for $i=1,\ldots,n$, and let $\{\xib_t\}$ be an ASP.
Then the vector-valued process
\begin{equation*}
\left\{\left(g_{1}(\xi^{(1)}_{t}),\ldots,g_{i}(\xi^{(i)}_{t}),\ldots,g_{n}(\xi^{(n)}_{t})\right)^\tp\right\}_{0\leq t \leq 1}
\end{equation*}
has an Archimedean copula at time $t=1$.
\end{rem}
\subsection{Characterisations}
In this subsection we shall characterise ASPs, first through their finite-dimensional distributions, and then through their transition probabilities.
\subsubsection{Finite-dimensional distributions}
The finite-dimensional distributions of the master process $\{\G_t\}$ are given by
\begin{equation}
\label{eq:master_fd}
\P[\G_{t_1} \in \dd x_1, \ldots, \G_{t_k} \in \dd x_k, \G_n\in \dd z]
= \P[\G_{t_1} \in \dd x_1, \ldots, \G_{t_k} \in \dd x_k \,|\, \G_n=z] \, \nu(\dd z),
\end{equation}
where $x_0=0$, for all $k\in\N_+$, all partitions $0=t_0<t_1<\cdots<t_k<n$, all $z\in\R_+$, and all $(x_1,\ldots,x_k)^\tp=\mathbf{x} \in \R_+^k$.
It was mentioned earlier that the bridges of a GRB are gamma bridges.
(In fact, this is the basis of the definition of \levy random bridges given in \citet{HHM1}.)
Hence, for $\{\g_t\}$ a gamma process such that $\E[\g_1]=1$ and $\var[\g_1]=1$, we have
\begin{align} \label{eq:master_fd2}
\P[\G_{t_1} \in \dd x_1, \ldots, \G_{t_k} \in \dd x_k, \G_n&\in \dd z] \nonumber
\\ &= \P[\g_{t_1} \in \dd x_1, \ldots, \g_{t_k} \in \dd x_k \,|\, \g_n=z] \, \nu(\dd z).
\end{align}
From (\ref{eq:gamma_ratio}) and (\ref{eq:GRB}), we have
\begin{equation}
(\G_{t_1}-\G_{t_0},\ldots,\G_{t_k}-\G_{t_{k-1}},\G_n-\G_{t_k}) \law \frac{R}{\g_n}(\g_{t_1}-\g_{t_0},\ldots,\g_{t_k}-\g_{t_{k-1}},\g_n-\g_{t_k}).
\end{equation}
Hence, from Definition \ref{def:liouville}, $(\G_{t_1}-\G_{t_0},\ldots,\G_{t_k}-\G_{t_{k-1}},\G_n-\G_{t_k})^{\tp}$
has a multivariate Liouville distribution with generating law $\nu$ and parameter vector $(t_1-t_0,\ldots,t_k-t_{k-1},n-t_k)^\tp$.
We can use these results to characterise the law of the ASP $\{\xib_t\}$ through the joint distribution of its increments.
Fix $k_i\geq 1$ and the partitions
\begin{equation}
0=t_0^i<t_1^i<\cdots<t_{k_i}^i=1,
\end{equation}
for $i=1,\ldots,n$. Then define the non-overlapping increments $\{\D_{ij}\}$ by
\begin{equation}
\D_{ij}=\xi^{(i)}_{t^i_j}-\xi^{(i)}_{t^i_{j-1}},
\end{equation}
for $j=1,\ldots,k_i$ and $i=1,\ldots,n$. The distribution of the vector
\begin{align}
\boldsymbol{\D}=(&\D_{11},\D_{12},\ldots,\D_{1k_1}, \nonumber
\\ &\D_{21},\D_{22},\ldots,\D_{2k_2}, \nonumber
\\ &\vdots \nonumber
\\ &\D_{n1},\D_{n2},\ldots,\D_{nk_n})^\tp
\end{align}
characterises the finite-dimensional distributions of the ASP $\{\xib_t\}$.
Thus it follows from the Kolmogorov extension theorem that the distribution of $\boldsymbol{\D}$ characterises the law of $\{\xib_t\}$.
Note that $\boldsymbol{\D}$ contains non-overlapping increments of the master GRB $\{\G_t\}$ such that $\|\boldsymbol{\D}\|=\G_n$.
Hence $\boldsymbol{\D}$ has a multivariate Liouville distribution with parameter vector
\begin{align}
\boldsymbol{\a}=(&t^1_1-t^1_0,t^1_2-t^1_1,\ldots,t^1_{k_1}-t^1_{k_1-1}, \nonumber
\\ &t^2_1-t^2_0,t^2_2-t^2_1,\ldots,t^2_{k_2}-t^2_{k_2-1}, \nonumber
\\ &\vdots \nonumber
\\ &t^n_1-t^n_0,t^n_2-t^n_1,\ldots,t^n_{k_n}-t^n_{k_n-1})^\tp,
\end{align}
and the generating law $\nu$.
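This characterisation is computationally convenient: a multivariate Liouville vector with generating law $\nu$ and parameter vector $\boldsymbol{\a}$ is equal in law to $R\mathbf{D}$, where $R\sim\nu$ and $\mathbf{D}\sim\mathrm{Dirichlet}(\boldsymbol{\a})$ are independent, so the whole increment vector $\boldsymbol{\D}$ can be drawn in one shot. A sketch (the choices $n=3$, dyadic partitions, and $\nu=\mathrm{Exp}(1)$ are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_increments(alpha, sample_nu, size):
    """Sample Delta ~ Liouville(nu, alpha) as R * Dirichlet(alpha).

    A Dirichlet(alpha) vector is built from independent Gamma(alpha_j, 1)
    variables normalised by their sum.
    """
    G = rng.gamma(np.asarray(alpha), size=(size, len(alpha)))
    D = G / G.sum(axis=1, keepdims=True)      # Dirichlet(alpha)
    return sample_nu(size)[:, None] * D       # scale by R ~ nu

# Increments over the partition {0, 1/2, 1} of each of n = 3 coordinates:
alpha = [0.5, 0.5] * 3                        # t^i_j - t^i_{j-1}
delta = sample_increments(alpha, lambda m: rng.exponential(1.0, m), 200_000)
```

With $\nu=\mathrm{Exp}(1)$, each row of `delta` sums to a draw of $\G_n$, and each increment has mean $\a_{ij}\,\E[R]/\|\boldsymbol{\a}\|$.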
\subsubsection{Transition law}
We denote the filtration generated by $\{\xib_t\}_{0\leq t \leq 1}$ by $\{\F_t\}$.
From Lemma \ref{lem:crucial}, $\{\xib_t\}$ is a Markov process with respect to $\{\F_t\}$.
We shall calculate the transition probabilities of $\{\xib_t\}$ after introducing some further notation.
For a set $B\subset\R$ and a constant $x\in\R$, we write $B+x$ for the shifted set
\begin{equation}
\label{eq:shiftedset}
B+x=\{y\in\R: y-x\in B\}.
\end{equation}
In what follows, we assume that $\{\xib_t\}$ is an $n$-dimensional ASP with generating law $\nu$, and that $\{\G_t\}$ is a master process of $\{\xib_t\}$.
We define the process $\{R_t\}_{0\leq t \leq 1}$ by setting
\begin{equation}
R_t=\sum_{i=1}^n \xi^{(i)}_t=\|\xib_t\|.
\end{equation}
Note that the terminal value of $\{R_t\}$ is the terminal value of the master process $\{\G_t\}$, i.e.~$R_1=\G_n$.
We define a family of unnormalised measures, indexed by $t\in[0,1)$ and $x\in\R_+$, as follows:
\begin{align}
\psi_0(B;x)&=\nu(B),
\\ \psi_t(B;x)&=\int_B \frac{f_{n(1-t)}(z-x)}{f_n(z)} \, \nu(\dd z) \nonumber
\\ &=\frac{\G[n]\e^{x}}{\G[n(1-t)]}\int_{B} \1_{\{z>x\}} z^{1-n}(z-x)^{n(1-t)-1} \, \nu(\dd z)
\end{align}
for $B\in\Borel$.
We also write
\begin{equation}
\Psi_t(x)=\psi_t([0,\infty);x).
\end{equation}
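When $\nu$ admits a density, $\Psi_t(x)$ can be evaluated numerically straight from the expression above. The sketch below (an illustration, not part of the formal development) checks the quadrature against the degenerate choice $\nu=\mathrm{Gamma}(n,1)$, i.e.~the law of $\g_n$ itself, for which the ratio $f_{n(1-t)}(z-x)/f_n(z)$ integrates to one against $\nu$, so $\Psi_t(x)=1$ identically:

```python
import numpy as np
from math import gamma as Gamma
from scipy import integrate, stats

def Psi(t, x, n, nu_pdf):
    """Psi_t(x) = int_x^infty f_{n(1-t)}(z-x) / f_n(z) nu(dz), computed by
    quadrature, with f_a(y) = y^{a-1} e^{-y} / Gamma(a)."""
    c = Gamma(n) * np.exp(x) / Gamma(n * (1 - t))
    integrand = lambda z: z**(1 - n) * (z - x)**(n * (1 - t) - 1) * nu_pdf(z)
    val, _ = integrate.quad(integrand, x, np.inf)
    return c * val

n = 3
nu_pdf = stats.gamma(a=n).pdf    # nu = law of gamma_n: Psi_t should be 1
vals = [Psi(t, x, n, nu_pdf) for t in (0.2, 0.5, 0.8) for x in (0.1, 1.0, 3.0)]
```

The integrable endpoint singularity at $z=x$ (exponent $n(1-t)-1>-1$) is handled by the adaptive quadrature.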
It follows from (\ref{eq:master_fd2}) and the independent increments of gamma processes that
\begin{align}
\P[\G_{t_1} \in \dd x_1, \ldots, \G_{t_k} \in \dd x_k, \G_n\in \dd z]
&= \prod_{i=1}^k[f_{t_i-t_{i-1}}(x_i-x_{i-1}) \d x_i] \frac{f_{n-t_k}(z-x_k)}{f_n(z)} \nu(\dd z) \nonumber
\\ &= \prod_{i=1}^k[f_{t_i-t_{i-1}}(x_i-x_{i-1}) \d x_i] \psi_{t_k/n}(\dd z;x_k). \label{eq:useful}
\end{align}
\begin{prop}
The ASP $\{\xib_t\}$ is a Markov process with the transition law given by
\begin{multline}
\label{eq:ASP1}
\P\left[\left. \xi_1^{(1)}\in\dd z_1,\ldots, \xi_1^{(n-1)}\in\dd z_{n-1},\xi_1^{(n)}\in B \,\right| \xib_s=\mathbf{x} \right]=
\\ \frac{\psi_{\t(s)}(B+\sum_{i=1}^{n-1}z_i;x_n+\sum_{i=1}^{n-1}z_i)}{\Psi_s(\|\mathbf{x}\|)}
\prod_{i=1}^{n-1}\frac{(z_i-x_i)^{-s}\e^{-(z_i-x_i)}}{\G[1-s]}\d z_i,
\end{multline}
and
\begin{equation}
\label{eq:ASP2}
\P\left[ \xib_t\in \d\mathbf{y} \,|\, \xib_s=\mathbf{x} \right]=
\frac{\Psi_{t}(\|\mathbf{y}\|)}{\Psi_s(\|\mathbf{x}\|)}
\prod_{i=1}^{n}\frac{(y_i-x_i)^{(t-s)-1}\e^{-(y_i-x_i)}}{\G[t-s]}\d y_i,
\end{equation}
where $\t(t)=1-(1-t)/n$, $0\leq s<t<1$, and $B\in\Borel$.
\end{prop}
\begin{proof}
We begin by verifying (\ref{eq:ASP1}).
From the Bayes theorem we have
\begin{multline}
\label{eq:A}
\P\left[\left. \xi_1^{(1)}\in\dd z_1,\ldots, \xi_1^{(n-1)}\in\dd z_{n-1},\xi_1^{(n)}\in B \,\right| \xib_s=\mathbf{x} \right]=
\\ \frac{\P\left[\xi_1^{(1)}\in\dd z_1,\ldots, \xi_1^{(n-1)}\in\dd z_{n-1},\|\xib_1\|\in B+\sum_{i=1}^{n-1} z_i, \xib_s\in\d\mathbf{x} \right]}
{\P\left[\xib_s\in\d\mathbf{x} \right]}.
\end{multline}
The law of $R_1=\|\xib_1\|$ is $\nu$; hence using (\ref{eq:useful}) the numerator of (\ref{eq:A}) is
\begin{multline}
\label{eq:num}
\int_{u\in B+\sum_{i=1}^{n-1} z_i}
\P\left[\left.\xi_1^{(1)}\in\dd z_1,\ldots, \xi_1^{(n-1)}\in\dd z_{n-1}, \xib_s\in\d\mathbf{x} \,\right| R_1=u \right] \nu(\dd u)=
\\ \prod_{i=1}^{n}[f_{s}(x_i)\d x_i] \prod_{i=1}^{n-1}[f_{1-s}(z_i-x_i)\d z_i]
\int_{u \in B+\sum_{i=1}^{n-1} z_i} \frac{f_{1-s}(u-x_n-\sum_{i=1}^{n-1}z_i)}{f_n(u)} \,\nu(\dd u),
\end{multline}
and the denominator is
\begin{equation}
\label{eq:denom}
\int_{u=0}^{\infty}\P\left[\xib_s\in\d\mathbf{x} \,|\, R_1=u\right] \nu(\dd u)=
\prod_{i=1}^{n}[f_{s}(x_i)\d x_i] \int_{u=0}^{\infty} \frac{f_{n(1-s)}(u-\|\mathbf{x}\|)}{f_n(u)} \,\nu(\dd u).
\end{equation}
In equations (\ref{eq:num}) and (\ref{eq:denom}) we have used the fact that, given the terminal value $R_1=\|\xib_1\|$, $\{\xib_t\}$ is a vector of subprocesses of a gamma bridge.
Dividing (\ref{eq:num}) by (\ref{eq:denom}) yields
\begin{multline}
\frac{\int_{u \in B+\sum_{i=1}^{n-1} z_i}
\frac{1}{f_n(u)} f_{1-s}(u-x_n-\sum_{i=1}^{n-1}z_i) \,\nu(\dd u)}
{\int_{u=0}^{\infty} \frac{1}{f_n(u)} f_{n(1-s)}(u-\|\mathbf{x}\|) \,\nu(\dd u)} \prod_{i=1}^{n-1}[f_{1-s}(z_i-x_i)\d z_i]=
\\ \frac{\psi_{\t(s)}(B+\sum_{i=1}^{n-1}z_i;x_n+\sum_{i=1}^{n-1}z_i)}{\psi_s([0,\infty);\|\mathbf{x}\|)}
\prod_{i=1}^{n-1}\frac{(z_i-x_i)^{-s}\e^{-(z_i-x_i)}}{\G[1-s]}\d z_i,
\end{multline}
as required.
We shall now verify (\ref{eq:ASP2}) following similar steps.
From the Bayes theorem we have
\begin{equation}
\label{eq:B}
\P[\xib_t\in\dd \mathbf{y} \,|\, \xib_s= \mathbf{x}]
=\frac{\P[\xib_t\in\dd \mathbf{y} , \xib_s\in\dd \mathbf{x} ]}{\P[\xib_s\in\dd \mathbf{x}]}.
\end{equation}
The numerator of (\ref{eq:B}) is
\begin{multline}
\label{eq:num2}
\int_{z=0}^{\infty} \P\left[\xib_t\in\dd \mathbf{y} , \xib_s\in\dd \mathbf{x} \,|\, R_1=z \right] \, \nu(\dd z)=
\\ \prod_{i=1}^n[f_{s}(x_i) \d x_i] \prod_{i=1}^n[f_{t-s}(y_i-x_i) \d y_i] \int_{z=0}^{\infty} \frac{f_{n(1-t)}(z-\|\mathbf{y}\|)}{f_n(z)} \, \nu(\dd z),
\end{multline}
and the denominator is given in (\ref{eq:denom}).
Dividing (\ref{eq:num2}) by (\ref{eq:denom}) yields
\begin{multline}
\frac{\int_{z=0}^{\infty} \frac{1}{f_n(z)}f_{n(1-t)}(z-\|\mathbf{y}\|) \, \nu(\dd z)}
{\int_{z=0}^{\infty} \frac{1}{f_n(z)}f_{n(1-s)}(z-\|\mathbf{x}\|)\, \nu(\dd z)} \, \prod_{i=1}^n[f_{t-s}(y_i-x_i) \d y_i]=
\\ \frac{\psi_{t}([0,\infty);\|\mathbf{y}\|)}{\psi_s([0,\infty);\|\mathbf{x}\|)}
\prod_{i=1}^{n}\frac{(y_i-x_i)^{(t-s)-1}\e^{-(y_i-x_i)}}{\G[t-s]}\d y_i,
\end{multline}
which completes the proof.
\end{proof}
\begin{rem} \label{rem:ASPtransitiondensity}
When the generating law $\nu$ admits a density $p$, \textnormal{(\ref{eq:A})} is equivalent to
\begin{equation}
\label{eq:nu_den_td}
\P\left[ \xib_1\in \d\mathbf{z} \,|\, \xib_s=\mathbf{x} \right]=
\frac{\G[n] \e^{\|\mathbf{x}\|} p(\|\mathbf{z}\|)}{\Psi_s(\|\mathbf{x}\|)\|\mathbf{z}\|^{n-1}} \prod_{i=1}^n \frac{(z_{i}-x_i)^{-s}}{\G[1-s]}\d z_i.
\end{equation}
\end{rem}
\subsubsection{Increments of ASPs}
We shall show that increments of an ASP have $n$-dimensional Liouville distributions.
Indeed, at time $s\in[0,1)$, the increment $\xib_t-\xib_s$, $t\in(s,1]$, has a multivariate Liouville distribution with a generating law that can be expressed in terms of the $\xib_s$-conditional law of the norm variable $R_t=\|\xib_t\|$.
Before we show this, we first examine the law of the process $\{R_t\}$.
\begin{prop} \label{prop:R_GRB}
The process $\{R_t\}_{0\leq t\leq 1}$ is a GRB with generating law $\nu$ and activity parameter $n$.
That is,
\begin{align}
\P[R_t\in \dd r\,|\, \xib_s=\mathbf{x}]&= \frac{\Psi_{t}(r)}{\Psi_{s}(\|\mathbf{x}\|)}
\frac{(r-\|\mathbf{x}\|)^{n(t-s)-1}\exp(-(r-\|\mathbf{x}\|))}{\G[n(t-s)]} \dd r, \label{eq:R_law1}
\\\intertext{and}
\P[R_1\in \dd r\,|\, \xib_s=\mathbf{x}]&= \frac{\psi_{s}(\dd r;\|\mathbf{x}\|)}{\Psi_{s}(\|\mathbf{x}\|)}, \label{eq:R_law2}
\end{align}
for $0<s<t<1$.
\end{prop}
Before proceeding to the proof, note that, after simplification, (\ref{eq:R_law1}) and (\ref{eq:R_law2}) are consistent with (\ref{eq:grb_law1}) and (\ref{eq:grb_law2}).
\begin{proof}
Since $\{\xib_t\}$ is a Markov process with respect to $\{\F_t\}$, $\{R_t\}$ is a Markov process with respect to $\{\F_t\}$.
Thus to prove the proposition we need only verify that the transition probabilities of $\{R_t\}$ match those given in (\ref{eq:R_law1}) and (\ref{eq:R_law2}).
We first verify the $\xib_s$-conditional law of $R_1$.
We can calculate this using the Bayes theorem;
\begin{align} \label{eq:R1conditional}
\P[R_1\in \dd r\,|\, \xib_s=\mathbf{x}] &= \frac{\P[\xib_s\in\dd \mathbf{x} \,|\, R_1=r]\P[R_1\in \d r]}
{\int_{r=0}^{\infty}\P[\xib_s\in\dd \mathbf{x} \,|\, R_1=r]\P[R_1\in \d r] } \nonumber
\\ &= \frac{\frac{1}{f_n(r)}f_{n(1-s)}(r-\|\mathbf{x}\|) \, \nu(\dd r)}
{\int_{r=0}^{\infty}\frac{1}{f_n(r)}f_{n(1-s)}(r-\|\mathbf{x}\|) \, \nu(\dd r)} \nonumber
\\ &= \frac{\psi_{s}(\dd r;\|\mathbf{x}\|)}{\Psi_{s}(\|\mathbf{x}\|)}.
\end{align}
Similarly, the $\xib_s$-conditional law of $R_t$ for $t\in(s,1)$ can be derived by the use of the Bayes theorem;
\begin{align}
\P[R_t\in \dd r\,|\, \xib_s=\mathbf{x}] &= \frac{\int^{\infty}_{z=0}\P[\xib_s\in\dd \mathbf{x}, R_t\in\dd r \,|\, R_1=z]\P[R_1\in \d z]}
{\int^{\infty}_{z=0}\int_{r=0}^{\infty}\P[\xib_s\in\dd \mathbf{x}, R_t\in\dd r \,|\, R_1=z]\dd r\P[R_1\in \d z] } \nonumber
\\ &= \frac{\int^{\infty}_{z=0}\frac{1}{f_n(z)}f_{n(t-s)}(r-\|\mathbf{x}\|)f_{n(1-t)}(z-r)\dd r \, \nu(\dd z)}
{\int^{\infty}_{z=0}\frac{1}{f_n(z)}\int_{r=\|\mathbf{x}\|}^{z}f_{n(t-s)}(r-\|\mathbf{x}\|)f_{n(1-t)}(z-r)\dd r \, \nu(\dd z)} \label{eq:denom_cov}
\\ &= \frac{\Psi_{t}(r)}{\Psi_{s}(\|\mathbf{x}\|)}f_{n(t-s)}(r-\|\mathbf{x}\|)\dd r.
\end{align}
The denominator of (\ref{eq:denom_cov}) was simplified using the fact that gamma densities with common scale parameter are closed under convolution.
\end{proof}
We define the measure $\nu_{st}$, $0\leq s<t\leq 1$, by
\begin{align}
\nu_{st}(B)&=\P[R_t \in B \,|\, \xib_s].
\\\intertext{Thus we have}
\label{eq:nu_s1}
\nu_{s1}(\dd r)&= \frac{\psi_{s}(\dd r;R_s)}{\Psi_{s}(R_s)},
\\\intertext{and}
\label{eq:nu_st}
\nu_{st}(\dd r)&=\frac{\Psi_{t}(r)}{\Psi_{s}(R_s)}\frac{(r-R_s)^{n(t-s)-1}\exp(-(r-R_s))}{\G[n(t-s)]} \dd r, \qquad (t<1).
\end{align}
When $\nu_{st}$ admits a density, we denote it by $p_{st}(r)=\nu_{st}(\dd r)/\dd r$.
We see from (\ref{eq:nu_st}) that $p_{st}$ exists for $t<1$.
When $t=1$, it follows from the definition of $\psi_t$ that $p_{st}$ only exists if $\nu$ admits a density.
Note that $\P[R_t\in \dd r\,|\, \xib_s]=\P[R_t\in \dd r\,|\, R_s]$ for $t\in(s,1]$.
This is not surprising since $\{R_t\}$ is a GRB, and hence a Markov process with respect to its natural filtration.
\bigskip
We now show that the increments of an ASP are of multivariate Liouville-type.
\begin{prop}
Fix $s\in[0,1)$.
Given $\xib_s$, the increment $\xib_t-\xib_s$, $t\in(s,1]$, has an $n$-variate Liouville distribution with generating law
\begin{equation} \label{eq:nu_star}
\nu^*(B)=\nu_{st}(B+R_s),
\end{equation}
and parameter vector $\a=(t-s,\ldots,t-s)^\tp$.
\end{prop}
\begin{proof}
First we prove the case $t<1$.
In this case the density $p_{st}$ exists.
From (\ref{eq:ASP2}) and (\ref{eq:nu_st}), we have
\begin{align}
\P[\xib_t-\xib_s\in\dd \mathbf{y} \,|\, \xib_s]&=\frac{\Psi_t(\|\mathbf{y}\|+R_s)}{\Psi_s(R_s)} \prod_{i=1}^n \frac{y_i^{(t-s)-1} \e^{-y_i}}{\G[t-s]} \d y_i \nonumber
\\ &=\frac{p_{st}(\|\mathbf{y}\|+R_s) \G[n(t-s)]}{\|\mathbf{y}\|^{n(t-s)-1}} \prod_{i=1}^n \frac{y_i^{(t-s)-1}}{\G[t-s]} \d y_i. \label{eq:tran_den}
\end{align}
Comparing (\ref{eq:tran_den}) to (\ref{eq:MLD}) shows it to be the law of a Liouville distribution
with generating law $p_{st}(x+R_s)\dd x$ and parameter vector $(t-s,\ldots,t-s)^\tp$.
Noting that $p_{st}(x+R_s)\dd x=\nu^*(\dd x)$, where $\nu^*$ is given by (\ref{eq:nu_star}), yields the required result.
We now consider the case $t=1$ when $\nu$ admits a density $p$.
In this case the density $p_{s1}$ exists.
From (\ref{eq:nu_den_td}) and (\ref{eq:nu_s1}), we have
\begin{align}
\P[\xib_1-\xib_s \in\dd \mathbf{y}\,|\, \xib_s]&=\frac{\G[n]\e^{R_s}p(\|\mathbf{y}\|+R_s)}{\Psi_s(R_s)(\|\mathbf{y}\|+R_s)^{n-1}}
\prod_{i=1}^n \frac{y_i^{-s}}{\G[1-s]} \d y_i \nonumber
\\ &=\frac{\G[n(1-s)]p_{s1}(\|\mathbf{y}\|+R_s)}{\|\mathbf{y}\|^{n(1-s)-1}} \prod_{i=1}^n \frac{y_i^{-s}}{\G[1-s]} \d y_i.
\end{align}
Hence $\xib_1-\xib_s$ has the required density.
For the final case where $t=1$ and $\nu$ has no density we only outline the proof since the details are far from illuminating.
Given $\xib_s$, the law of $\xib_1-\xib_s$ is characterised by (\ref{eq:ASP1}).
We then need to show that this law is equal to the law of $X\mathbf{D}$,
where $X$ is a random variable with law $\nu^*$ given by (\ref{eq:nu_star}),
and $\mathbf{D}$ is a Dirichlet random variable, independent of $X$, with parameter vector $(1-s,\ldots,1-s)^{\tp}$.
This is possible by mixing the Dirichlet density (\ref{eq:Dir_den}) with the random scale parameter $X$.
\end{proof}
\subsection{Moments}
In this subsection we fix a time $s\in [0,1)$, and we assume that the first two moments of $\nu$ exist and are finite.
\begin{prop}
The first- and second-order moments of $\xib_t$, $t\in(s,1]$, are
\begin{align*}
&\textbf{1.}
&& \E\left[\left.\xi^{(i)}_{t}\,\right| \xib_s\right]
=\frac{1}{n}\mu_1 +\xi_{s}^{(i)},
\\
&\textbf{2.}
&& \var\left[\left.\xi^{(i)}_{t}\,\right| \xib_{s}\right]
=\frac{1}{n}\left[\left(\frac{t-s+1}{n(t-s)+1}\right)\mu_2-\frac{1}{n}\mu_1^2\right],
\\
&\textbf{3.}
&& \cov\left[\left.\xi^{(i)}_{t}, \xi^{(j)}_{t} \,\right| \xib_{s}\right]
=\frac{t-s}{n}\left[\frac{\mu_2}{n(t-s)+1}-\frac{\mu_1^2}{n(t-s)}\right], \quad (i\ne j),
\end{align*}
where
\begin{align*}
\mu_1&=\frac{t-s}{1-s}\left(\E[R_1 \,|\, R_{s}]-R_{s}\right),
\\ \mu_2&=\frac{(t-s)(1+n(t-s))}{(1-s)(1+n(1-s))}\E[(R_1-R_s)^2 \,|\, R_{s}].
\end{align*}
\end{prop}
\begin{proof}
Given $\xib_s$, the increment $\xib_{t}-\xib_{s}$ has an $n$-dimensional Liouville distribution with generating law
\begin{equation}
\nu^{*}(A)=\nu_{st}(A+R_s),
\end{equation}
for $t\in(s,1]$, and with parameter vector $(t-s,\ldots,t-s)^{\tp}$.
We have
\begin{align}
\mu_1 &= \int_0^{\infty} y \, \nu^*(\dd y)
= \int_{R_s}^{\infty} y \, \nu_{st}(\dd y)-R_s
= \E[R_t \,|\, \xib_{s}]-R_s,
\\\intertext{and}
\mu_2 &= \int_0^{\infty} y^2 \, \nu^*(\dd y)
= \int_{R_s}^{\infty} (y-R_s)^2 \, \nu_{st}(\dd y)
= \E[(R_t-R_s)^2 \,|\, \xib_{s}].
\end{align}
It then follows from equations (\ref{eq:Expected})-(\ref{eq:Covar}) that
\begin{align*}
&\text{1.}
&& \E\left[\left.\xi^{(i)}_{t}\,\right| \xib_s\right]
=\frac{1}{n}\left(\E[R_t \,|\, \xib_{s}]-R_s\right) +\xi_{s}^{(i)},
\\
&\text{2.}
&& \var\left[\left.\xi^{(i)}_{t}\,\right| \xib_{s}\right]
=\frac{1}{n}\left[\left(\frac{t-s+1}{n(t-s)+1}\right)\E[(R_t-R_s)^2\,|\,\xib_s]-\frac{1}{n}\left(\E[R_t \,|\, \xib_{s}]-R_s\right)^2\right],
\\
&\text{3.}
&& \cov\left[\left.\xi^{(i)}_{t}, \xi^{(j)}_{t} \,\right| \xib_{s}\right]
=\frac{t-s}{n}\left[\frac{\left( \E[(R_t-R_s)^2\,|\,\xib_s] \right)}{n(t-s)+1}-\frac{\left(\E[R_t \,|\, \xib_{s}]-R_s\right)^2}{n(t-s)}\right], \quad (i\ne j).
\end{align*}
To compute $\E[R_t\,|\,\xib_s]$ and $\E[(R_t-R_s)^2\,|\,\xib_s]$, we use two results about \levy random bridges found in \citet{HHM1}.
First, we can write
\begin{align*}
\E[R_t \,|\, R_{s}]&=\frac{t-s}{1-s}\E[R_1 \,|\, R_{s}]+\frac{1-t}{1-s} R_{s}.
\end{align*}
The expression for $\mu_{1}$ then follows directly.
Second, given $R_s$, the process $\{R_t-R_s\}_{s\leq t\leq 1}$ is a GRB with terminal law $\bar{\nu}(B)=\nu_{s1}(B+R_s)$ and activity parameter $n$.
Hence, given $R_s$,
\begin{equation}
\{R_t-R_s\}_{s\leq t\leq 1}\law \{X \g_{t1} \}_{s\leq t\leq 1},
\end{equation}
where $X$ is a random variable with law $\bar{\nu}$, and $\{\g_{t1}\}_{s\leq t\leq 1}$ is a gamma bridge with activity parameter $n$,
independent of $X$, satisfying $\g_{s1}=0$ and $\g_{11}=1$.
Note that $\g_{t1}$, $t\in(s,1)$, is a beta random variable with parameters $\a=n(t-s)$ and $\b=n(1-t)$.
Thus
\begin{align}
\E[(R_t-R_s)^2\,|\,R_s]&=\E[\g_{t1}^{2}]\E[X^2] \nonumber
\\ &=\E[\g_{t1}^{2}]\int_0^\infty x^2 \,\bar{\nu}(\dd x) \nonumber
\\ &=\E[\g_{t1}^{2}]\int_{R_s}^\infty (y-R_s)^2 \,\nu_{s1}(\dd y) \nonumber
\\ &=\frac{(t-s)(1+n(t-s))}{(1-s)(1+n(1-s))} \E[(R_1-R_s)^2\,|\,R_s].
\end{align}
The expression for $\mu_2$ follows.
\end{proof}
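These moment formulae are straightforward to confirm by Monte Carlo in the unconditional case $s=0$ (so $\xib_s=0$ and $R_s=0$), using the fact, from the ASP construction, that the interleaved vector $(\xi_t^{(1)},\text{remainder}_1,\ldots,\xi_t^{(n)},\text{remainder}_n)$ is $R$ times a $\mathrm{Dirichlet}(t,1-t,\ldots,t,1-t)$ vector. A sketch with the arbitrary choice $\nu=\mathrm{Exp}(1)$ (so $\E[R_1]=1$, $\E[R_1^2]=2$):

```python
import numpy as np

rng = np.random.default_rng(2)
n, t = 3, 0.4
N = 400_000

# nu = Exp(1): E[R_1] = 1, E[R_1^2] = 2.
R = rng.exponential(1.0, N)

# (xi_t^{(1)}, rest_1, ..., xi_t^{(n)}, rest_n) = R * Dirichlet(t,1-t,...,t,1-t):
shapes = np.tile([t, 1.0 - t], n)
G = rng.gamma(shapes, size=(N, 2 * n))
xi_t = R[:, None] * G[:, ::2] / G.sum(axis=1, keepdims=True)

# Proposition with s = 0 (so R_s = 0 and xi_s = 0):
mu1 = t * 1.0                             # (t / 1) * E[R_1]
mu2 = t * (1 + n * t) / (1 + n) * 2.0     # ((t)(1 + nt) / (1 + n)) * E[R_1^2]
mean_th = mu1 / n
var_th = (1 / n) * ((t + 1) / (n * t + 1) * mu2 - mu1**2 / n)
```

The sample means and variances of the columns of `xi_t` should agree with `mean_th` and `var_th` up to Monte Carlo error.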
\subsection{Measure change}
In this section we shall show that the law of an $n$-dimensional ASP is equivalent to that of an $n$-dimensional gamma process.
To demonstrate this result it is convenient to begin by assuming that under some measure $\Q$ the process $\{\xib_t\}$ is an $n$-dimensional gamma process,
and then show that $\{\xib_t\}$ is an ASP under an equivalent measure $\P$.
Under $\Q$, we assume that $\{\xib_t\}$ is an $n$-dimensional gamma process such that
\begin{equation}
\Q[\xib_t \in\dd \mathbf{x}]=\prod_{i=1}^n \frac{x_i^{t-1}}{\G[t]} \e^{-x_i} \d x_i.
\end{equation}
Hence the gamma processes $\{\xi^{(i)}_t\}$, $i=1,2,\ldots,n$, are independent and identical in law.
The process $\{R_t\}_{0\leq t \leq 1}$, defined as above by $R_t=\|\xib_t\|$, is a one-dimensional gamma process and satisfies
\begin{equation}
\Q[R_t\in \dd x]=\frac{x^{nt-1}}{\G[nt]} \e^{-x} \d x.
\end{equation}
As before, the filtration $\{\F_t\}$ is that generated by $\{\xib_t\}$.
We shall show that the process $\{\Psi_t(R_t)\}_{0\leq t <1}$ is a martingale, where
\begin{align}
\Psi_t(R_t)&=\int_{R_t}^{\infty} \frac{f_{n(1-t)}(z-R_t)}{f_n(z)} \, \nu(\dd z) \nonumber
\\ &=\frac{\G[n]\exp(R_t)}{\G[n(1-t)]} \int_{R_t}^{\infty}z^{1-n}(z-R_t)^{n(1-t)-1} \, \nu(\dd z).
\end{align}
For times $0\leq s<t<1$, we have
\begin{align}
\E_{\Q}\left[\Psi_t(R_t) \left|\, \F_s \right.\right]
&=\E_{\Q}\left[\left.\int_{R_t}^{\infty}\frac{f_{n(1-t)}(z-R_t)}{f_n(z)}\, \nu(\dd z)\,\right|\F_s \right] \nonumber
\\ &=\E_{\Q}\left[\left.\int_{R_t}^{\infty}\frac{f_{n(1-t)}(z-R_s-(R_t-R_s))}{f_n(z)}\, \nu(\dd z)\,\right|\xib_{s} \right] \nonumber
\\ &=\int_{y=0}^{\infty}\int_{z=R_s+y}^{\infty}\frac{f_{n(1-t)}(z-R_s-y)}{f_n(z)}\, \nu(\dd z)\,f_{n(t-s)}(y) \d y \nonumber
\\ &=\int_{z=R_s}^{\infty}\frac{1}{f_n(z)}\int_{y=0}^{z-R_s}f_{n(1-t)}(z-R_s-y)f_{n(t-s)}(y)\d y \, \nu(\dd z) \nonumber
\\ &=\int_{R_s}^{\infty}\frac{f_{n(1-s)}(z-R_s)}{f_n(z)} \, \nu(\dd z) \nonumber
\\ &=\Psi_s(R_s).
\end{align}
Since $\Psi_0(R_0)=1$ and $\Psi_t(R_t)>0$, the process $\{\Psi_t(R_t)\}_{0\leq t<1}$ is a Radon-Nikodym density process.
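A concrete instance makes this easy to check numerically. For the (arbitrary, illustrative) choice $\nu=\mathrm{Gamma}(n+1,1)$, the substitution $u=z-R_t$ collapses the integral above to the closed form $\Psi_t(x)=(n(1-t)+x)/n$, and the martingale property reduces to $\E_{\Q}[R_t]=nt$. A Monte Carlo sketch of the unconditional consequence $\E_{\Q}[\Psi_t(R_t)]=1$:

```python
import numpy as np

rng = np.random.default_rng(3)
n, s, t = 3, 0.2, 0.7
N = 500_000

def Psi(t, x):
    # Closed form for nu = Gamma(n+1, 1), derived by hand for this check:
    # Psi_t(x) = (n(1-t) + x) / n.
    return (n * (1 - t) + x) / n

# Under Q, R_t = ||xi_t|| is a gamma process: R_s ~ Gamma(ns, 1), with
# independent increment R_t - R_s ~ Gamma(n(t-s), 1).
R_s = rng.gamma(n * s, size=N)
R_t = R_s + rng.gamma(n * (t - s), size=N)

# E_Q[Psi_t(R_t) | F_s] = Psi_s(R_s); here we verify the unconditional
# consequence E_Q[Psi_t(R_t)] = E_Q[Psi_s(R_s)] = Psi_0(0) = 1.
m_s, m_t = Psi(s, R_s).mean(), Psi(t, R_t).mean()
```

Both sample means should be close to one, consistent with $\{\Psi_t(R_t)\}$ being a unit-mean $\Q$-martingale.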
\begin{prop}
Define a measure $\P$ by
\begin{equation}
\label{eq:RNDP}
\left.\frac{\dd \P}{\dd \Q}\right|_{\F_t}=\Psi_t(R_t).
\end{equation}
Under $\P$, $\{\xib_{t}\}_{0\leq t< 1}$ is an ASP with generating law $\nu$.
\end{prop}
\begin{proof}
We prove the proposition by verifying that the transition law of $\{\xib_t\}$ under $\P$ is that of an ASP.
\begin{align}
\P\left[\xib_t \in \dd \mathbf{x} \,|\, \F_s\right]
&=\E_{\P}[\1\{\xib_t\in\dd\mathbf{x}\} \,|\, \F_s] \nonumber
\\ &=\frac{1}{\Psi_s(R_s)}\,\E_{\Q}[\Psi_t(R_t) \1\{\xib_t\in\dd\mathbf{x}\} \,|\, \xib_s] \nonumber
\\ &=\frac{\Psi_{t}(\|\mathbf{x}\|)}{\Psi_{s}(R_{s})}\prod_{i=1}^n f_{t-s}(x_i-\xi^{(i)}_s) \d x_i \nonumber
\\ &=\frac{\Psi_{t}(\|\mathbf{x}\|)}{\Psi_{s}(R_{s})}\prod_{i=1}^n \frac{(x_i-\xi^{(i)}_s)^{(t-s)-1}\e^{-(x_i-\xi_s^{(i)})}}{\G[t-s]}\d x_i. \label{eq:MC}
\end{align}
Comparing equations (\ref{eq:MC}) and (\ref{eq:ASP2}) completes the proof.
\end{proof}
We can restate the results of this subsection as follows:
\begin{prop}
Suppose that $\{\xib_{t}\}_{0\leq t\leq 1}$ is an ASP with generating law $\nu$ under some measure $\P$.
Then
\begin{equation}
\left.\frac{\dd \Q}{\dd \P}\right|_{\F_t}=\Psi_t(R_t)^{-1}
\end{equation}
defines a probability measure $\Q$ for $t\in[0,1)$.
Furthermore, under $\Q$, $\{\xib_t\}_{0\leq t<1}$ is an $n$-dimensional gamma process such that
\begin{equation*}
\Q[\xib_t \in\dd \mathbf{x}]=\prod_{i=1}^n \frac{x_i^{t-1}}{\G[t]} \e^{-x_i} \d x_i.
\end{equation*}
\end{prop}
\subsection{Independent gamma bridges representation}
The increments of an $n$-dimensional ASP are identical in law to a positive random variable
multiplied by the Hadamard product of an $n$-dimensional Dirichlet random variable and a vector of $n$ independent gamma bridges.
For notational convenience, in this subsection we denote a gamma bridge defined over $[0,1]$ as $\{\g(t)\}$ (instead of $\{\g_{t1}\}$).
For vectors $\mathbf{x},\mathbf{y}\in\R^n$, we denote their Hadamard product as $\mathbf{x}\circ \mathbf{y}$.
That is,
\begin{equation}
\mathbf{x}\circ \mathbf{y}=(x_1y_1,\ldots,x_ny_n)^{\tp}.
\end{equation}
\begin{prop}
Given the value of $\xib_s$, the ASP $\{\xib_t\}$ satisfies the following identity in law:
\[ \{\xib_t-\xib_s \}_{s\leq t\leq 1} \law \{R^* \, \mathbf{D} \circ \boldsymbol{\g}_{t} \}_{s\leq t\leq 1}, \]
where
\begin{enumerate}
\item
$\mathbf{D}\in [0,1]^n$ is a symmetric Dirichlet random variable with parameter vector $(1-s,\ldots,1-s)^\tp$;
\item
$\{\boldsymbol{\g}_{t}\}$ is a vector of $n$ independent gamma bridges, each with activity parameter $m=1$,
starting at the value 0 at time $s$, and terminating with unit value at time 1;
\item
$R^*>0$ is a random variable with law $\nu^*$ given by
\[ \nu^*(A)=\nu_{s1}(A+R_s);\]
\item
$R^*$, $\mathbf{D}$, and $\{\boldsymbol{\g}_{t}\}$ are mutually independent.
\end{enumerate}
\end{prop}
\begin{proof}
Fix $k_i\geq 1$ and the partition
\begin{equation}
s=t_0^i<t_1^i<\cdots<t_{k_i}^i=1,
\end{equation}
for $i=1,\ldots,n$.
Define the non-overlapping increments $\{\D_{ij}\}$ by
\begin{equation}
\D_{ij}=\xi^{(i)}_{t^i_j}-\xi^{(i)}_{t^i_{j-1}},
\end{equation}
for $j=1,\ldots,k_i$ and $i=1,\ldots,n$. The distribution of the vector
\begin{align}
\boldsymbol{\D}=(&\D_{11},\D_{12},\ldots,\D_{1k_1}, \nonumber
\\ &\D_{21},\D_{22},\ldots,\D_{2k_2}, \nonumber
\\ &\vdots \nonumber
\\ &\D_{n1},\D_{n2},\ldots,\D_{nk_n})^\tp
\end{align}
characterises the finite-dimensional distributions of the process $\{\xib_t-\xib_s \}_{s\leq t\leq 1}$.
It follows from the Kolmogorov extension theorem that the distribution of $\boldsymbol{\D}$ characterises the law of $\{\xib_t-\xib_s \}$.
Note that $\boldsymbol{\D}$ contains non-overlapping increments of the master GRB $\{\G_t\}$.
Thus, given $\xib_s$, $\boldsymbol{\D}$ has a multivariate Liouville distribution with parameter vector
\begin{align}
\boldsymbol{\a}=(&t^1_1-t^1_0,t^1_2-t^1_1,\ldots,t^1_{k_1}-t^1_{k_1-1}, \nonumber
\\ &t^2_1-t^2_0,t^2_2-t^2_1,\ldots,t^2_{k_2}-t^2_{k_2-1}, \nonumber
\\ &\vdots \nonumber
\\ &t^n_1-t^n_0,t^n_2-t^n_1,\ldots,t^n_{k_n}-t^n_{k_n-1})^\tp,
\end{align}
and generating law
\begin{equation}
\nu^*(A)=\nu_{s1}(A+R_s),
\end{equation}
since $\|\boldsymbol{\D}\|=R_1-R_s$. It follows from \citet[theorem 6.9]{FKN1990} that
\begin{equation}
(\D_{i1},\ldots,\D_{ik_i})^\tp \law R^* \, D_i \mathbf{Y}_i, \qquad \text{for $i=1,\ldots,n$,}
\end{equation}
where
(i) $R^*$ has law $\nu^*$,
(ii) $\mathbf{D}=(D_1,\ldots,D_n)^\tp$ has a Dirichlet distribution with parameter vector $(1-s,\ldots,1-s)^\tp$,
(iii) $\mathbf{Y}_i\in[0,1]^{k_i}$ has a Dirichlet distribution with parameter vector $(t^i_1-t^i_0,\ldots,t^i_{k_i}-t^i_{k_i-1})^\tp$,
(iv) $\mathbf{Y}_1,\ldots,\mathbf{Y}_n$, $R^*$, and $\mathbf{D}$ are mutually independent.
Let $\{\g(t)\}_{s\leq t \leq 1}$ be a gamma bridge with activity parameter $m=1$ such that $\g(s)=0$ and $\g(1)=1$.
Then the increment vector
\begin{equation}
\label{eq:gambr_inc}
(\g(t^i_1)-\g(t^i_0), \ldots, \g(t^i_{k_i})-\g(t^i_{k_i-1}))^\tp
\end{equation}
has a Dirichlet distribution with parameter vector $(t^i_1-t^i_0,\ldots,t^i_{k_i}-t^i_{k_i-1})^\tp$.
Hence the increment vector (\ref{eq:gambr_inc}) is identical in law to $\mathbf{Y}_i$.
From the Kolmogorov extension theorem, this identity characterises the law of $\{\g(t)\}$.
It follows that
\begin{equation}
\{\xi_t^{(i)}-\xi_s^{(i)}\}_{s\leq t\leq 1} \law \{R^* \, D_i \g_{t}\}_{s\leq t\leq 1}, \qquad \text{for $i=1,\ldots,n$},
\end{equation}
which completes the proof.
\end{proof}
\subsection{Uniform process}
We construct a multivariate process from the ASP $\{\xib_t\}$ such that each one-dimensional marginal is uniformly distributed for every time $t\in (0,1]$.
Fix a time $t\in(0,1]$.
Each $\xi^{(i)}_{t}$ is a scale-mixed beta random variable with survival function
\begin{align}
\bar{F}_{t}(x)&=\int_x^{\infty} \left(1-I_{x/y}[t,n-t]\right)\nu(\dd y) \nonumber
\\ &= \int_x^{\infty} I_{1-x/y}[n-t,t] \, \nu(\dd y), \label{eq:copula}
\end{align}
where ${I}_{z}[\alpha,\beta]$ is the regularised incomplete beta function, defined as usual for $z\in [0,1]$ by
\begin{align}
I_{z}[\a,\b]&=\frac{\int_0^{z}u^{\alpha-1}(1-u)^{\beta-1}\,\dd u}{\int_0^{1}u^{\alpha-1}(1-u)^{\beta-1}\,\dd u} \qquad (\a,\b>0).
\end{align}
The random variables
\begin{equation}
Y^{(i)}_{t}=\bar{F}_{t}(\xi^{(i)}_{t}), \qquad i=1,\ldots,n,
\end{equation}
are then uniformly distributed.
We now define a process $\{\mathbf{Y}_t\}_{0\leq t\leq 1}$ by
\begin{equation}
\mathbf{Y}_t=\left(\bar{F}_t(\xi^{(1)}_t),\ldots, \bar{F}_t(\xi^{(n)}_t)\right)^\tp.
\end{equation}
By construction, each one-dimensional marginal $Y^{(i)}_t$ is uniform for $t>0$.
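For the simple illustrative case where $\nu=\delta_c$ is a point mass at $c>0$ (a choice of ours that removes the outer integral), each $\xi^{(i)}_t$ is $c$ times a $\mathrm{Beta}(t,n-t)$ variable and the survival function reduces to $\bar{F}_t(x)=I_{1-x/c}[n-t,t]$ for $0<x<c$. The sketch below checks that $Y^{(i)}_t=\bar{F}_t(\xi^{(i)}_t)$ is indeed uniform:

```python
import numpy as np
from scipy.special import betainc

rng = np.random.default_rng(5)
n, t, c = 4, 0.3, 2.5
N = 200_000

# With nu = delta_c, xi_t^{(i)} = c * Beta(t, n - t) and the survival
# function is F_bar_t(x) = I_{1 - x/c}[n - t, t] for 0 < x < c.
xi = c * rng.beta(t, n - t, size=N)
Y = betainc(n - t, t, 1.0 - xi / c)   # Y = F_bar_t(xi), should be Uniform(0,1)
```

Since $1-\xi/c\sim\mathrm{Beta}(n-t,t)$ and `betainc` is its own distribution function, `Y` is uniform by the probability integral transform.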
\section{Liouville process}
We generalise ASPs to a family of stochastic processes that we call \textit{Liouville processes}.
A Liouville process is a Markov process whose finite-dimensional distributions are multivariate Liouville.
Liouville processes display a broader range of dynamics than ASPs.
The one-dimensional marginal processes of a Liouville process are in general not identical.
This generalisation comes at the expense of losing the direct connection to Archimedean copulas.
\begin{defn} \label{def:LP}
Fix $n\in\N_+$, $n\geq 2$, and the vector $\mathbf{m}\in\R^n$ satisfying $m_i>0$, $i=1,\ldots,n$.
Define the strictly increasing sequence $\{u_{i}\}^{n}_{i=0}$ by
\begin{align}
u_{0}&=0, \nonumber
\\ u_{i}&=u_{i-1}+m_{i}, \qquad \text{for $i=1,\ldots,n$.}
\end{align}
Then a process $\{\xib_{t}\}_{0\leq t \leq 1}$ satisfying
\begin{equation*}
\{\xib_t\}_{0\leq t \leq 1}\law\left\{\left[ \begin{aligned}
& \G_{t(u_{1})}-\G_{0}
\\ & \vdots
\\ & \G_{t(u_{i}-u_{i-1})+u_{i-1}}-\G_{u_{i-1}}
\\ & \vdots
\\ & \G_{t(u_{n}-u_{n-1})+u_{n-1}}-\G_{u_{n-1}}
\end{aligned} \right]\right\}_{0\leq t \leq 1}
\end{equation*}
for $\{\G_t\}_{0\leq t \leq u_{n}}$ a GRB with activity parameter $m=1$, is an \emph{$n$-dimensional Liouville process}.
We say that the generating law of $\{\G_t\}$ is the \emph{generating law} of $\{\xib_{t}\}$ and the \emph{activity parameter} of $\{\xib_{t}\}$ is $\textbf{m}$.
\end{defn}
Note that allowing the activity parameter of the master process to differ from unity in Definition \ref{def:LP} would not broaden the class of processes.
Indeed, changing the activity parameter of the master process would be equivalent to multiplying the vector $\mathbf{m}$ by a scale factor.
Let $\{\xib_t\}$ be a Liouville process with generating law $\nu$ and activity parameter $\mathbf{m}$.
Each one-dimensional marginal process of $\{\xib_t\}$ is a GRB with activity parameter $m_{i}$,
and Definition \ref{def:LP} ensures that $\xib_{t}$ is defined for $t\in[0,1]$.
It is straightforward to adjust the definition so that a Liouville process is defined over an arbitrary closed interval.
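Sampling a Liouville process at a fixed time $t$ follows the same interleaving device as for ASPs, except that the Dirichlet shapes are scaled by the activity vector: by the definition above, $(\xi_t^{(1)},\text{remainder}_1,\ldots)$ equals $R$ times a $\mathrm{Dirichlet}(tm_1,(1-t)m_1,\ldots,tm_n,(1-t)m_n)$ vector. A sketch checking the first moment $\E[\xi_t^{(i)}]=(m_i/T)\,t\,\E[R_1]$ (the choices of $\mathbf{m}$ and $\nu=\mathrm{Exp}(1)$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(6)
m = np.array([0.5, 1.0, 2.5])        # activity vector (illustrative)
T = m.sum()
t, N = 0.6, 300_000

# nu = Exp(1), so E[R_1] = 1.  From the master-GRB construction:
# (xi_t^{(1)}, rest_1, ..., xi_t^{(n)}, rest_n)
#     = R * Dirichlet(t*m_1, (1-t)*m_1, ..., t*m_n, (1-t)*m_n).
R = rng.exponential(1.0, N)
shapes = np.column_stack([t * m, (1 - t) * m]).ravel()
G = rng.gamma(shapes, size=(N, 2 * len(m)))
xi_t = R[:, None] * G[:, ::2] / G.sum(axis=1, keepdims=True)
```

Unlike the ASP case, the column means now differ across coordinates, in proportion to $m_i$.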
In the language of \citet{MN2010}, $\xib_{1}$ has a \emph{Liouville copula}.
That is, the survival copula of $\xib_{1}$ is the survival copula of a multivariate Liouville distribution.
We shall provide the transition law, moments and an independent gamma bridge representation of a Liouville process.
We present the results as propositions.
Proofs are omitted since they are similar to the proofs in Section 3.
We define a family of unnormalised measures, indexed by $t\in[0,1)$ and $x\in\R_+$, as
\begin{align}
\psi_0(B;x)&=\nu(B),
\\ \psi_t(B;x)&=\int_B \frac{f_{T(1-t)}(z-x)}{f_T(z)} \, \nu(\dd z) \nonumber
\\ &=\frac{\G[T]\e^{x}}{\G[T(1-t)]}\int_{B} \1_{\{z>x\}} z^{1-T}(z-x)^{T(1-t)-1} \, \nu(\dd z),
\end{align}
for $B\in\Borel$, where $T=\|\mathbf{m}\|$.
Again we write
\begin{equation}
\Psi_t(x)=\psi_t([0,\infty);x),
\end{equation}
and
\begin{equation}
R_t=\|\xib_t\|.
\end{equation}
The process $\{R_t\}$ is a GRB with generating law $\nu$ and activity parameter $T$.
Given $\xib_{s}$, the law of $R_{1}$ is
\begin{equation} \label{nu1s2}
\nu_{s1}(\dd r)= \frac{\psi_{s}(\dd r;R_s)}{\Psi_{s}(R_s)},
\end{equation}
and the law of $R_{t}$ is
\begin{equation} \label{nuts2}
\nu_{st}(\dd r)=\frac{\Psi_{t}(r)}{\Psi_{s}(R_s)}\frac{(r-R_s)^{T(t-s)-1}\exp(-(r-R_s))}{\G[T(t-s)]}\dd r
\end{equation}
for $t\in(s,1)$.
\begin{prop}
The Liouville process $\{\xib_t\}$ is a Markov process with the transition law given by
\begin{multline}
\P\left[\left. \xi_1^{(1)}\in\dd z_1,\ldots, \xi_1^{(n-1)}\in\dd z_{n-1},\xi_1^{(n)}\in B \,\right| \xib_s=\mathbf{x} \right]=
\\ \frac{\psi_{\t(s)}(B+\sum_{i=1}^{n-1}z_i;x_n+\sum_{i=1}^{n-1}z_i)}{\Psi_s(\|\mathbf{x}\|)}
\prod_{i=1}^{n-1}\frac{(z_i-x_i)^{m_{i}(1-s)-1}\e^{-(z_i-x_i)}}{\G[m_{i}(1-s)]}\d z_i,
\end{multline}
and
\begin{equation}
\P\left[ \xib_t\in \d\mathbf{y} \,|\, \xib_s=\mathbf{x} \right]=
\frac{\Psi_{t}(\|\mathbf{y}\|)}{\Psi_s(\|\mathbf{x}\|)}
\prod_{i=1}^{n}\frac{(y_i-x_i)^{m_{i}(t-s)-1}\e^{-(y_i-x_i)}}{\G[m_{i}(t-s)]}\d y_i,
\end{equation}
where $\t(t)=1-(1-t)/T$, $0\leq s<t<1$, and $B\in\Borel$.
\end{prop}
\begin{prop}
Fix $s\in[0,1)$.
The first- and second-order moments of $\xib_t$, $t\in(s,1]$, are
\begin{align*}
&\textbf{1.}
&& \E\left[\left.\xi^{(i)}_{t}\,\right| \xib_s\right]
=\frac{m_{i}}{T}\mu_1 +\xi_{s}^{(i)},
\\
&\textbf{2.}
&& \var\left[\left.\xi^{(i)}_{t}\,\right| \xib_{s}\right]
=\frac{m_{i}}{T}\left[\left(\frac{m_{i}(t-s)+1}{T(t-s)+1}\right)\mu_2-\frac{m_{i}}{T}\mu_1^2\right],
\\
&\textbf{3.}
&& \cov\left[\left.\xi^{(i)}_{t}, \xi^{(j)}_{t} \,\right| \xib_{s}\right]
=\frac{m_{i}m_{j}(t-s)}{T}\left[\frac{\mu_2}{T(t-s)+1}-\frac{\mu_1^2}{T(t-s)}\right], \quad (i\ne j),
\end{align*}
where
\begin{align*}
\mu_1&=\frac{t-s}{1-s}(\E[R_1 \,|\, R_{s}]-R_{s}),
\\ \mu_2&=\frac{(t-s)(1+T(t-s))}{(1-s)(1+T(1-s))}\E[(R_1-R_s)^2 \,|\, R_{s}].
\end{align*}
\end{prop}
The law of the increments of an $n$-dimensional Liouville process can be characterised by a positive random variable
multiplied by the Hadamard product of an $n$-dimensional Dirichlet random variable and a vector of $n$ independent gamma bridges.
\begin{prop}
Given the value of $\xib_s$, the Liouville process $\{\xib_t\}$ satisfies the following identity in law:
\[ \{\xib_t-\xib_s \}_{s\leq t\leq 1} \law \{R^* \, \mathbf{D} \circ \boldsymbol{\g}_{t} \}_{s\leq t\leq 1}, \]
where
\begin{enumerate}
\item
$\mathbf{D}\in [0,1]^n$ has a Dirichlet distribution with parameter vector $(1-s)\textbf{m}$;
\item
$\{\boldsymbol{\g}_{t}\}$ is a vector of $n$ independent gamma bridges, such that the $i$th marginal process is a gamma bridge with activity parameter $m_{i}$,
starting at the value 0 at time $s$, and terminating with unit value at time 1;
\item
$R^*>0$ is a random variable with law $\nu^*$ given by
\[ \nu^*(A)=\nu_{st}(A+R_{s});\]
\item
$R^*$, $\mathbf{D}$, and $\{\boldsymbol{\g}_{t}\}$ are mutually independent.
\end{enumerate}
\end{prop}
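As a numerical sanity check of this identity, one can simulate the right-hand side directly. The sketch below (in Python) uses the Beta$(m_i(t-s),\,m_i(1-t))$ marginal of a standard gamma bridge at an intermediate time and, since the law $\nu^*$ depends on the model, substitutes an arbitrary gamma draw for $R^*$; only the terminal-time constraint is verified exactly, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
m = np.array([1.5, 2.0, 0.5])    # activity parameters m_i (illustrative)
s = 0.3                          # conditioning time

def increment_sample(t, R_star):
    """One draw of xi_t - xi_s ~ R* (D o gamma_t): D is Dirichlet((1-s)m)
    and the gamma-bridge marginal is gamma_t^{(i)} ~ Beta(m_i(t-s), m_i(1-t))."""
    D = rng.dirichlet((1 - s) * m)
    if t < 1.0:
        g = rng.beta(m * (t - s), m * (1 - t))
    else:
        g = np.ones_like(m)       # every bridge terminates with unit value
    return R_star * D * g

R_star = rng.gamma(2.0)           # stand-in draw; the true law nu* is model-dependent
inc = increment_sample(1.0, R_star)
# at t = 1 the Dirichlet weights sum to 1 and each bridge equals 1,
# so the coordinates of the increment sum exactly to R*
assert np.isclose(inc.sum(), R_star)
```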
\section{Introduction}
Moving toward the post-pandemic world, we are experiencing a new, previously inconceivable change in our lives. A large part of our daily activities are going online, and we are witnessing numerous use-cases in shopping, mobility, healthcare, and manufacturing. As a platform to embrace this dramatic change,
the fifth generation (5G) wireless systems have been standardized and commercialized in 2019, and as of now, 5G services have been rolled out in more than 70 countries.
Notwithstanding the many technological advances brought by 5G, there are still many hurdles and challenges in realizing an envisioned boundless and hyper-connected society where intelligent things such as machines, vehicles, sensors, and robots are seamlessly connected both in physical and virtual spaces without limitation on the data rates and/or transmission delays \cite{Samsung6G}. To back up these disruptive changes and therefore make the slogan a reality, we need to prepare the sixth generation (6G) of wireless networks by identifying key enabling technologies that cannot be achieved through a simple evolution of 5G.
One important expectation in the 6G era is that
machines and things will be the main consumers of mobile data traffic, and thus there will be more than 55 billion devices connected to the Internet by 2025. These devices will continuously sense, process, act, and communicate with the surrounding environment, generating more than 73 zettabytes of data per year \cite{IDC}.
Since 6G wireless networks should support a variety of form factors with different service requirements and hardware capabilities, the enabling technologies should be clearly distinct from those of 5G in various aspects.
The primary purpose of this article is to discuss key challenges arising from these considerations. These include the provision of terabits-per-second (Tbps) wireless connectivity, zero coverage-hole networks, and pervasive computing for connected intelligence, along with enabling technologies that can be used as a core for the 6G system. The key concepts of 6G hyper-connectivity are illustrated in Fig. \ref{fig:concept2}.
After the introduction, the rest of this article is organized as follows. In Section II, we address the challenges facing 6G hyper-connectivity. In Section III, we present the key enabling technologies for realizing this 6G vision. In Section IV, we discuss future research issues and provide our concluding remarks.
\begin{figure*}
\centering
\includegraphics[width=6.7in]{fig_rev8.jpg}
\caption{Key concepts of 6G hyper-connectivity.}
\label{fig:concept2}
\end{figure*}
\section{Challenges and Prospects Facing 6G Hyper-Connectivity} \label{sec:2}
\subsection{Tbps for Immersive User Experience}
Since the target of 5G is to support $20$ gigabits-per-second (Gbps) peak data rate and $0.1$ Gbps user-experienced data rate, it is not too difficult to support advanced multimedia services such as $8$K augmented reality (AR) streaming, $16$K virtual reality (VR) streaming, and extended reality (XR) requiring beyond-gigabit data rates.
It is however not easy to support truly immersive mobile services such as digital twin or metaverse since the virtual model accurately reflecting the physical world using XR devices or high-fidelity mobile holographic displays requires a rate of up to Tbps \cite{Samsung6G}.
For example, a full-view $24$K VR transmission with a $64$ pixels per degree (PPD) minimum resolution, a $200\,\text{Hz}$ minimum refresh rate, and a $300\!\!:\!\!1$ compression rate requires at least $6.37$ Gbps.
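The quoted rate can be reproduced by a back-of-the-envelope calculation. The sketch below assumes a full $360^\circ \times 180^\circ$ field of view and 36 bits per pixel (12-bit RGB); these two parameters are not stated above and are chosen because they reproduce the quoted figure.

```python
# Back-of-the-envelope check of the full-view 24K VR rate.
# Assumed (not stated in the text): 360x180 degree field, 36 bits/pixel.
ppd = 64                    # pixels per degree (minimum resolution)
width_px = 360 * ppd        # 23,040 horizontal pixels ("24K")
height_px = 180 * ppd       # 11,520 vertical pixels
bits_per_px = 36            # 12-bit RGB color depth (assumption)
refresh_hz = 200            # minimum refresh rate
compression = 300           # 300:1 compression rate

rate_bps = width_px * height_px * bits_per_px * refresh_hz / compression
print(f"{rate_bps / 1e9:.2f} Gbps")   # -> 6.37 Gbps
```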
Clearly, this level of rate cannot be supported anytime anywhere in the current 5G systems since the typical user-experienced data rate of 5G is in the range of 0.1$\sim$1 Gbps.
One way to support emerging services requiring tremendous data rates is to exploit the terahertz (THz) frequency band.
The THz band offers colossal spectrum, but THz waves suffer severe path loss, molecular absorption, and atmospheric attenuation.
One straightforward option to overcome these problems is to densify the network, meaning that we reduce the distance between the base station (BS) and mobile device.
In such ultra-dense scenarios, pencil beamforming via ultra-massive multi-input multi-output (UM-MIMO) plasmonic nano-antenna arrays can provide a 10 to 100$\times$ improvement in data rate \cite{UM-MIMO}.
While the densified small cells equipped with massive multi-input multi-output (MIMO) systems are a good fit for THz communication systems, their deployment will make the notions of cell and handover obsolete.
Thus, we need to design cell-free networks from scratch by considering various technical challenges, such as beam tracking/management, user association and BS coordination, and synchronization and initial access.
Other key enabling technologies to support Tbps data rates include orbital angular momentum (OAM) multiplexing, full-duplex, and spatial modulation-MIMO (SM-MIMO).
\vspace{-2mm}
\subsection{Zero Coverage-Hole Networks}
Another important goal of 5G systems is to provide advanced wireless connectivity for a wide variety of vertical applications, including self-driving cars and drones, robot-aided smart factories and healthcare, and remote control for disaster scenarios.
To accommodate such applications,
wireless access should be available anywhere, meaning that there should be no coverage holes.
Although many network operators claim nationwide coverage for 5G, there are still many coverage holes in places such as tunnels, basements, elevators, mountains, and oceans.
Moreover, higher frequencies (3.5 GHz at FR1 and 28 GHz at FR2) used in 5G systems tend to be vulnerable to blockage owing to their high directivity and severe path loss.
For these reasons, more often than not we experience link failure in our daily use of 5G services.
Recently, several new approaches to enhance the coverage and ensure universal and seamless connectivity have been proposed.
An intelligent reflecting surface (IRS), an approach that provides an artificial path in the absence of line-of-sight (LOS) links, is a viable candidate
to eliminate coverage holes, particularly in dense urban areas \cite{RIS}.
Wave-controlled holographic MIMO surfaces that flexibly shape electromagnetic waves using sub-wavelength metallic or dielectric scattering particle structures are another good option.
In remote areas such as deserts, oceans, and mountain areas, in which the deployment of terrestrial networks (TNs) is almost impossible, non-terrestrial networks (NTNs) with satellites, high-altitude platform stations (HAPSs), and unmanned aerial vehicles (UAVs) can be an attractive solution because they ensure coverage without relying on the ground infrastructure.
By the deliberate combination of TNs and NTNs \cite{SAGINnew}, one can dramatically improve the coverage and thus generate the network with virtually no coverage hole.
\begin{figure*}
\begin{subfigure}{0.646\textwidth}
\includegraphics[width=1\textwidth]{fig2.png}
\caption{}
\label{fig:cell_free1}
\end{subfigure}
\begin{subfigure}{0.354\textwidth}
\hspace{-8mm}
\includegraphics[width=1.1\textwidth]{fig3.png}
\vspace{5.9mm}
\hspace{8mm} \caption{}
\label{fig:cell_free2}
\end{subfigure}
\caption{Left-hand side: Illustrations of 6G cell-free massive MIMO network; right-hand side: CDFs of user SINR in cell-free massive MIMO networks, small cell, and centralized massive MIMO networks.}
\label{fig:cell_free}
\end{figure*}
\subsection{Pervasive Computing for Connected Intelligence}
With the advent of truly immersive mobile services, such as XR and the metaverse, computational overhead has become a serious concern since these services incur high energy consumption, long end-to-end latency, and heavy processor and memory overhead.
In 5G, mobile edge computing (MEC) has been widely used to delegate computation-intensive tasks to the edge (a local server near a BS). However, MEC would not be enough to support future applications.
In a VR service, for example, a mobile VR headset must provide photorealistic and $6$ degrees of freedom (6-DoF) rendered images within 20 ms of motion-to-render-to-photon (M2R2P) latency.
Controlling this overhead solely on a mobile device or offloading it to a nearby BS will be by no means easy, so a new paradigm is required to deal with the surge of computation.
Fortunately, as the density of wireless networks increases and beyond-gigabit transmissions can be supported using mmWave and THz band transmissions,
one can exploit computational resources in nearby BSs and mobile devices that are equipped with high performance computing processors.
For instance, a vehicle equipped with a high-performance battery ($1000\,\text{Wh/L}$) and powerful graphics processing units (GPUs) can perform environment sensing, learning, and computation-oriented tasks and send the results to the destination device.
Particularly, computation offloading is useful for the artificial intelligence (AI) applications, such as object detection and classification, speech recognition, and metaverse control, since these tasks usually require heavy deep neural network (DNN) computation, but the inference results are often very simple.
In order to facilitate the boundless use of computation resources over the network, one should use advanced wireless distributed computing techniques.
Decentralized learning, which obtains a global model from localized data via peer-to-peer communication among mobile devices, is one good choice.
When security matters, federated learning, where the mobile devices jointly train the global model without sharing their local datasets with the central server, would be a proper choice.
Yet another option to support large-scale models is split learning, in which large-scale DNNs are divided into several pieces and then trained separately on mobile devices.
\section{Key Enabling Technologies for 6G Hyper-Connectivity}
In this section, we discuss key enabling technologies to achieve beyond Gbps for immersive user experience, zero coverage-hole networks, and pervasive computing for connected intelligence.
\subsection{Distributed $\&$ Intelligence-Aware Cell-Free Massive MIMO Network}
In the 6G era, a network will be heavily densified and thus, the areal density of BSs will be comparable to, or even surpass, the density of mobile devices.
As the coverage of a cell gets smaller, the distance between a BS and a mobile device also becomes shorter.
To make the most of the proximity between BSs and mobile devices, a network should be designed such that a group of small cells coherently serves users in a user-centric manner (see Fig. \ref{fig:cell_free1}) \cite{cell-free}.
In most urban scenarios, user-centric cell association will generate multiple LOS links between BSs and the mobile so that the mobile can easily achieve a high degree of macro-diversity gain and spectral efficiency improvement.
To evaluate the efficacy of the cell-free massive MIMO network, we plot cumulative distribution functions (CDFs) of user signal-to-interference-and-noise ratio (SINR) in mmWave ($100\,\text{GHz}$) and THz ($1\,\text{THz}$) communication scenarios.
One can see from Fig. \ref{fig:cell_free2} that the SINR variation of the cell-free massive MIMO network is much smaller, meaning that the cell-free massive MIMO network can well suppress the inter-cell interference through the cooperative beamforming of BSs.
For example, in the THz systems, the cell-free massive MIMO network can provide at least $7\,$dB SINR to more than $95\%$ of users while the conventional small cell network guarantees only $-2.5\,$dB.
We also observe that the SINR gain of the THz band is higher than that for the mmWave band, meaning that it can effectively compensate for the huge path loss of the THz systems.
To fully enjoy the benefits of cell-free massive MIMO, we need to address the following challenges.
\smallskip \noindent
\textbf{Intelligence-aware network architecture: }
In order to provide user-centric coverage through the joint beamforming of cooperating BSs, the cell control information, channel state information (CSI), and transmit data of local BSs should be shared with the central unit (CU) through the fronthaul link.
Sharing such a large amount of information among all network entities will cause a significant fronthaul overhead, not to mention the tremendous computational overhead required for channel estimation, data transmission/reception, and resource allocation \cite{cell-free2}.
A promising approach to mitigate the overhead is the decentralized processing. One such example is the multi-agent deep reinforcement learning (MA-DRL) where the operations like beamforming, resource allocation, and user association are performed locally and individually by the DRL agent in each small cell.
\smallskip \noindent
\textbf{Multi-sensory data fusion for channel prediction and beamforming:}
In order to realize a truly zero coverage-hole cell-free massive MIMO network, the network should have mobility-aware functions such as fast-fading channel estimation, mobile trajectory tracking, blockage prediction, and dynamic BS clustering and beamforming.
One promising direction is to use the multi-modal sensory data (e.g., RGB depth vision information of camera, range information of LiDAR, SLAM-based user location) along with the computer vision techniques to identify the 3D shape of nearby wireless environments (e.g., obstacle locations, LOS and non-line-of-sight (NLOS) paths, and mobility patterns).
In this scheme, using a set of panoramic images obtained from the stereo camera, the DL-based object detection can extract the location (3D Cartesian coordinate $(x, y, z)$) of target objects in an image and then determine its class.
In acquiring the location information of a mobile device from input images, a convolutional neural network (CNN) that extracts the spatially correlated feature using the convolution filter can be employed.
Once the location of the mobile is identified, by measuring the angle and distance from the BS to the target object (e.g., mobile phone, vehicle, or urban air mobility (UAM)), the mmWave and THz beam can be generated without a complicated beam management process.
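The geometry of this vision-aided beam generation can be sketched as follows, assuming a simple uniform linear array and hypothetical coordinates; the steering-vector construction is standard, while the function names and positions are purely illustrative.

```python
import numpy as np

def steering_angles(bs, target):
    """Azimuth/elevation (radians) from the BS to a detected target's
    (x, y, z) coordinate, e.g. as output by a vision pipeline."""
    d = np.asarray(target, float) - np.asarray(bs, float)
    az = np.arctan2(d[1], d[0])
    el = np.arctan2(d[2], np.hypot(d[0], d[1]))
    return az, el

def ula_weights(az, n_ant=64, spacing=0.5):
    """Phase-steered weights for an n-antenna uniform linear array
    (element spacing in wavelengths) pointed at azimuth az."""
    k = np.arange(n_ant)
    return np.exp(-2j * np.pi * spacing * k * np.sin(az)) / np.sqrt(n_ant)

az, el = steering_angles(bs=(0.0, 0.0, 10.0), target=(20.0, 20.0, 1.5))
w = ula_weights(az)
# array response toward az; matched weights give the full gain sqrt(N)
a = np.exp(-2j * np.pi * 0.5 * np.arange(64) * np.sin(az))
print(abs(a.conj() @ w))   # -> 8.0 (= sqrt(64))
```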
\smallskip \noindent
\textbf{Computation-oriented end-to-end cell-free massive MIMO network:}
In order to support the computation-intensive and/or latency-critical applications, the future 6G network should have the computation offloading functionality \cite{EdgeComp}. Computation offloading can be realized by utilizing the local computing power at the BSs as well as cloud computing at the CU. For example, latency-sensitive computation tasks will be processed at the BSs in proximity to the user, and the computation-intensive tasks can be handled via the cooperation of BSs and CU.
To ensure successful computation offloading, an integrated cell-free network design exploring both communication and computational perspectives needs to be suggested. In general, it should be guaranteed that computing operations can be delivered by reliable and low-latency communication links through a user, BSs, and CU. From a computation perspective, edge servers located at the BSs may have limited computing capabilities in terms of computing power, memory size, and energy consumption compared to the central server at the CU. In addition, since the user needs to offload computation tasks to nearby BSs, the randomness in computation latency at each BS should be taken into account to guarantee low-latency and reliable computing services.
\begin{figure*}
\centering
\includegraphics[width=7.2in]{fig2_0714.jpg}
\caption{Satellite communication systems with different altitudes: Round-trip delay and single satellite coverage as a fraction of the Earth’s surface.}
\label{fig:3D_connectivity}
\end{figure*}
\smallskip \noindent
\textbf{Integrated RF and optical X-haul provisioning:}
In the cell-free massive MIMO network, fronthaul and backhaul overhead is a serious yet underrepresented problem.
Recently, advanced fronthaul/backhaul techniques have been proposed.
Replacing the current CPRI interface with the optical fiber-based fronthaul/backhaul is a simple option but will increase the CAPEX significantly.
A radio stripe network-based fronthaul and backhaul where the BSs are serially connected using daisy-chain architecture to the same cables would be a cost-effective option for the densely deployed BSs.
Also, an integrated access and backhaul (IAB) network where only a few BSs are connected to traditional fiber infrastructures while other BSs wirelessly relay the fronthaul/backhaul traffic would be a promising option.
Yet another simple solution is free-space optical (FSO) communication where the modulated optical beams are transmitted through the free space.
When the LOS link is guaranteed, FSO-based backhaul can offer several Gbps data rates with a much lower deployment cost than the optical fiber-based backhaul.
When the LOS link is absent, optical IRS, a planar metasurface that controls the optical propagation environment by adjusting the phase shifts of reflecting elements can be used to offer the virtual LOS link between the BSs and CU.
\smallskip \noindent
\textbf{Rank-sufficient THz LOS communications using cell-free massive MIMO network:}
In the THz band, due to the severe path loss and high directivity, the conventional MIMO techniques exploiting the rich scattering of multipath environments will not work.
In the case where the LOS path is dominant, the cell-free massive MIMO systems can be an appealing solution to address the rank-deficiency issue of THz communications.
Specifically, through the joint transmission of small BSs, the cell-free massive MIMO systems can generate a virtual multipath propagation.
Further, the shortened communication distance, together with the extremely large number of transmit antennas, induces a spherical wavefront that creates a rich non-linear pattern of phase variations and therefore increases the channel rank.
\subsection{Boundless and Fully Integrated Terrestrial $\&$ Non-Terrestrial Networks}
In remote and rural areas where the existing terrestrial cellular network is unreachable \cite{6GNTN}, NTNs will be a useful means of providing cost-effective and high-speed internet services.
Depending on the flying altitude and coverage area, NTN devices can be classified as UAV, HAPS, and satellite.
Satellite stations are further divided into geostationary orbit (GEO), medium earth orbit (MEO), and low earth orbit (LEO) based on their orbital altitude.
In Fig. \ref{fig:3D_connectivity}, we plot the round-trip delay (RTD) and single satellite coverage on the Earth's surface for various altitude levels.
When a satellite revolves in an orbit close to the surface of the Earth, a faster response time can be guaranteed, but the satellite's coverage decreases owing to its proximity to the ground.
For example, only three equally spaced GEO satellites are needed to provide near-global coverage because each GEO satellite covers almost 35\% of the Earth's area, but the RTD in this case exceeds 0.5 second. Conversely, the RTD of a LEO satellite is less than 0.1 second, but its coverage is just 0.2\% at an altitude of 550 km.
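These coverage and delay figures follow from standard spherical-cap geometry. The sketch below assumes minimum ground-station elevation angles of 10\textdegree\ for GEO and 40\textdegree\ for LEO (the latter matching the pre-2021 Starlink value mentioned later); these assumed angles approximately reproduce the quoted 35\% and 0.2\% figures, and the bent-pipe RTD is computed at nadir.

```python
import math

RE = 6371.0         # Earth radius, km
C = 299_792.458     # speed of light, km/s

def coverage_fraction(h_km, min_elev_deg=0.0):
    """Fraction of the Earth's surface visible to one satellite at
    altitude h, via the spherical-cap formula with a minimum
    ground-station elevation angle."""
    e = math.radians(min_elev_deg)
    # Earth central angle of the coverage cap
    lam = math.acos(RE * math.cos(e) / (RE + h_km)) - e
    return (1 - math.cos(lam)) / 2

def bent_pipe_rtd(h_km):
    """Ground-satellite-ground round-trip delay at nadir, seconds."""
    return 4 * h_km / C

# GEO (35,786 km) vs. LEO (550 km, 40-degree minimum elevation)
print(coverage_fraction(35_786, 10))   # ~0.34 of the Earth's surface
print(coverage_fraction(550, 40))      # ~0.002 (0.2%)
print(bent_pipe_rtd(35_786))           # ~0.48 s at nadir
```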
One can infer from this discussion that a large number of LEO satellites, often called a LEO mega-constellation, would be a good option to guarantee latency on the order of a few tens of milliseconds together with global coverage.
We note that there exists a trade-off between the ground observer's minimum elevation angle and link quality (see Fig. \ref{fig:3D_connectivity}).
If we increase the minimum elevation angle, the LOS probability of the satellite link also increases; however, the beam coverage and visible time of the satellite will be reduced.
To balance this trade-off, the ground station’s minimum elevation angle should be carefully determined in practice.
Recently, SpaceX decided to change the minimum elevation angle of Starlink LEO satellites from 40\textdegree\ to 25\textdegree\ (granted by the FCC in April 2021) to improve coverage with a small number of LEO satellites \cite{starlink}.
In analyzing the NTN communication of satellite and aerial devices, areal capacity ($\mathrm{bits/s/m^2}$) is no longer useful; therefore, a new metric called volumetric traffic capacity ($\mathrm{bits/s/m^3}$) that measures the quality of user experience in 3D space is needed.
To obtain a sufficient volumetric traffic capacity to ensure universal global connectivity for 6G systems, the following issues must be addressed.
\smallskip \noindent
\textbf{Mobility-aware channel modeling and tracking in 3D space:} Since the NTN terminals are located at a high altitude and are moving fast with respect to the ground stations, LOS propagation paths are guaranteed, but the wireless signals suffer significant Doppler shift. The first step in addressing the distortions caused by high mobility and a large coverage area is an accurate 3D channel model that accounts for the Doppler shift and propagation delay.
In addition, dynamic channel prediction based on the real-time location tracking of NTN terminals is of great importance
to achieve agile and effective NTN management.
\smallskip \noindent
\textbf{Boundless multi-layered TN/NTN architecture:} To provide sufficient and continuous coverage everywhere, the large LEO satellite constellations should be combined with GEO/MEO using the coverage-aware protocol and NTN-backhaul connection. In fact, a boundless multi-layered architecture fully integrating TNs and NTNs is essential to guarantee the reliable connection in the ocean and desert, as well as high-speed trains and airplanes \cite{38821}. UAV and HAPS play a key role in improving the connection quality because they can be used as relays between the LEO/MEO/GEO satellite and ground stations. In addition, an integrated network protocol that includes multi-layer routing, network mobility management, and resource allocation should be introduced.
\smallskip \noindent
\textbf{Seamless inter-beam/satellite/orbit handover:}
Owing to the high-speed movement of the LEO satellites, the maximum visible time of a satellite at the ground station is at most 10$\sim$20 minutes, causing frequent handovers even for a fixed ground station.
To mitigate this problem, LEO satellites should be connected to nearby satellites in the same or adjacent orbits through laser or radio frequency (RF) links. This so-called inter-satellite link (ISL) connectivity supports relatively low end-to-end latency even for the long distance communication scenario.
Candidate techniques to establish an efficient ISL connectivity include spatial-mode diversity and multiplexing, wavelength division multiplexing, and adaptive beam control using pointing/acquisition/tracking (PAT) process.
%
Another challenge of NTNs is that the cell size varies according to the elevation angle of the NTN terminals; therefore, conventional handover techniques might not be suitable.
%
%
To address this challenge, a forecast-based handover decision on the basis of satellite position estimation is required.
%
%
\begin{figure}[!t]
\centering
\includegraphics[width=3.7in]{fig3-1_0714.jpg}
\caption{Over-the-air distributed computing scenario where a VR headset assigns linear computation tasks to four edges with uncoded/coded offloading.}
\label{fig_comp}
\end{figure}
\smallskip
\noindent
\textbf{3D cell planning and fast beam management:}
Since the cell size varies according to the altitude and elevation angle of the satellite, we need to introduce 3D cell planning based on the orbits of satellites.
A satellite with a higher altitude and lower elevation angle tends to form a larger cell size owing to its wider beam footprint on the ground.
Specifically, GEO satellites have beam footprints of up to 3,500 km, while LEO satellites at an altitude of approximately 600 km have footprints of up to 1,000 km.
In addition, 3D cell types can be classified into earth-moving and earth-fixed cells with respect to the Earth's surface, depending on whether the satellite beams can be fixed or steered.
The earth-fixed cell should be designed to be highly robust to satellite beam pointing errors, since even a very small error places the satellite beam footprint tens of kilometers away from the initially designated area on the Earth; the earth-moving cell, in contrast, may require more frequent handovers because of the dynamics of beam movement on the ground.
As for the cell planning in multi-layered TN/NTN architecture, a truncated octahedron-shaped cell planning distinguished from the hexagon-shaped 2D cell layout is desired when considering the volumetric quotient in 3D space.
\smallskip
\noindent
\textbf{Intelligent spectrum management and NTN system optimization:}
To achieve seamless and broad coverage in integrated terrestrial and non-terrestrial networks, a powerful spectrum management scheme is needed.
To this end, intelligent and dynamic cooperation between primary and cognitive users is preferred over the semi-static spectrum sharing.
Note that because the S-band (i.e., 2170$\sim$2200 MHz) is adjacent to the 3G/4G LTE band (i.e., 2110$\sim$2170 MHz), the large Doppler shift of the high-speed satellites causes severe interference in the adjacent band, deteriorating the service quality of 3G/4G TN services.
A possible option to mitigate the interference is the on-board pre-compensation of the Doppler shift at the center of the beam on the ground while the residual Doppler shift can be corrected at each receiver side.
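The magnitude of the shift can be estimated from basic orbital mechanics. The sketch below, with illustrative parameters, computes the worst-case S-band Doppler shift for a 550-km LEO satellite and the residual left after beam-center pre-compensation; the 0.995 velocity-projection factor for the beam center is an arbitrary illustrative choice.

```python
import math

C = 299_792_458.0                     # speed of light, m/s
MU = 3.986004418e14                   # Earth's gravitational parameter, m^3/s^2
RE = 6_371_000.0                      # Earth radius, m

def orbital_speed(h_m):
    """Circular-orbit speed at altitude h."""
    return math.sqrt(MU / (RE + h_m))

def doppler_shift(fc_hz, h_m, cos_angle=1.0):
    """Carrier Doppler shift; cos_angle is the cosine of the angle
    between the satellite velocity and the line of sight."""
    return fc_hz * orbital_speed(h_m) * cos_angle / C

fc = 2.2e9                            # S-band carrier
raw = doppler_shift(fc, 550e3)        # worst case, roughly 56 kHz
# on-board pre-compensation: remove the shift seen at beam center,
# leaving each receiver only a small residual to correct locally
center = doppler_shift(fc, 550e3, cos_angle=0.995)
residual = raw - center               # a few hundred Hz
print(raw, residual)
```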
To further improve spectral efficiency, we should have an appropriate frequency reuse pattern among multiple beams, available channels, and antenna gains.
\begin{figure}[!t]
\centering
\includegraphics[width=3.2in]{fig3-2_0714.jpg}
\caption{Task completion times when a user offloads linear computation tasks to four edges over two different channel conditions (Scenario 1: LOS channel at a distance of 15m; Scenario 2: NLOS channel at a distance of 300m).}
\label{fig_comp2}
\end{figure}
\begin{figure*}[!t]
\centering
\includegraphics[width=6.5in]{Fig6.png}
\caption{Hyper-connected 6G network architecture.}
\label{fig_NWA}
\end{figure*}
\subsection{Over-the-air Distributed Computing for Artificial Intelligence Applications}
A future trend in dealing with the relentless growth in computing-intensive applications is pervasive computing, in which mobile devices offload computational overhead to network edges or nearby devices.
Notwithstanding the rosy outlook, a number of issues have to be addressed since there is a fundamental trade-off between computation and communication. In fact, when one tries to offload computational overhead, one should investigate the overhead of the communication link.
Fig. \ref{fig_comp} depicts computation offloading scenarios where a VR headset assigns linear computations to four edges with various offloading schemes \cite{CodedComp}.
Since the reduction in the communication load over the network increases the computation overhead at the edges, we need to come up with a proper offloading scheme optimized for the computation capability and network conditions.
One thing to note is that the wireless channels and computing capabilities differ widely across devices, so there is a large variation in the task completion time.
Particularly, this problem is critical for the ultra-reliable and low-latency communications (URLLC) services since the task completion time is determined by the device with the worst channel condition or straggling/malfunctioning devices (e.g., edge failure in Fig. \ref{fig_comp}).
For the seamless integration of communication and computing,
the following approaches can be considered.
\smallskip \noindent
\textbf{Communication-aware pervasive computation design: } To alleviate the difficulties caused by the channel uncertainty, we need to exploit the mmWave and THz frequency bands.
%
If sufficient bandwidth is provided and LOS channel conditions are guaranteed for every computing node, wireless links will not be a serious bottleneck.
%
Obviously, this ideal condition cannot always be guaranteed, so we need to come up with an environment-aware and communication-friendly computing mechanism.
%
For example, the communication load can be reduced by quantization, compression, and sparsification of the transmit information as long as it does not severely degrade the task quality.
%
For special cases where the (weighted) average of the computation tasks is needed, we can design the system such that the computation results are added during the wireless transmission of the same time/frequency resources. This so-called over-the-air computation can significantly reduce the communication and computation overhead.
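A minimal noiseless sketch of over-the-air computation, assuming perfect channel-state knowledge at each transmitter: every device pre-inverts its own channel, and the superposition at the receiver directly yields the sum of the local results. Fading, noise, and power constraints are deliberately ignored here.

```python
import numpy as np

rng = np.random.default_rng(1)
K = 5                                        # number of devices
local = rng.standard_normal(K)               # local computation results
h = rng.standard_normal(K) + 1j * rng.standard_normal(K)  # channel gains
# each device inverts its own channel so the superposed signal is the sum
tx = local / h
rx = np.sum(h * tx)    # the wireless channel adds all transmissions "over the air"
assert np.allclose(rx.real, local.sum())
```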
\smallskip \noindent
\textbf{Collaborative and robust coded computation offloading:} A naïve approach to exploit multiple computing nodes is to split the whole computation task into multiple pieces and then distribute them equally to multiple nodes. This split-based offloading will not be effective in the presence of link failure and straggling/malfunctioning devices, since the whole task can be
completed only when all subtasks are finished. One way to avoid this problem is to replicate a part of computation tasks and assign them to the multiple devices.
%
One option to overcome difficulties caused by the channel heterogeneity, link failure, and straggling/malfunctioning is to apply coding techniques to distributed task allocation.
%
By introducing redundancy in distributed tasks, coded distributed computing can be more resilient to the link and edge failures. For example, if computation tasks are designed using maximum distance separable (MDS) codes, one can recover the straggling tasks from other completed
tasks \cite{CodedComp2}.
%
As illustrated in Fig. \ref{fig_comp2}, one can achieve a better computation-communication trade-off by assigning coded computation sub-tasks to computing nodes.
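A minimal sketch of MDS-style coded computation for a distributed matrix-vector product, in the spirit of the coded-offloading scheme above: the task is encoded with a polynomial (Vandermonde) code over $n=4$ edges so that any $k=2$ returned results suffice, tolerating two stragglers. The function names and dimensions are illustrative.

```python
import numpy as np

def encode_tasks(A, k, n):
    """Split A into k row blocks and build n coded blocks via polynomial
    evaluation at distinct points (a Reed-Solomon-style MDS code)."""
    blocks = np.split(A, k)                 # k equal row blocks
    x = np.arange(1, n + 1)                 # distinct evaluation points
    coded = [sum(blocks[i] * (xi ** i) for i in range(k)) for xi in x]
    return coded, x

def decode(results, x_used, k):
    """Recover the k uncoded results A_i @ v from any k coded results
    by inverting the k x k Vandermonde system."""
    V = np.vander(x_used, k, increasing=True)   # V[j, i] = x_j**i
    uncoded = np.linalg.solve(V, np.stack(results))
    return np.concatenate(uncoded)

# toy example: offload A @ v to n = 4 edges, tolerate 2 stragglers (k = 2)
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5))
v = rng.standard_normal(5)
coded, x = encode_tasks(A, k=2, n=4)
worker_out = [c @ v for c in coded]             # computed at each edge
# edges 2 and 4 straggle: decode from the other two returned results
recovered = decode([worker_out[0], worker_out[2]], x[[0, 2]], 2)
assert np.allclose(recovered, A @ v)
```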
\smallskip \noindent
\textbf{Scalable and over-the-air deep learning:}
To perform a large-scale deep learning (DL) task over wireless networks, we must decide how to allocate the tasks among the cloud, network edges, and the mobile devices \cite{scalable}.
For instance, network edges perform DNN training, and the mobile device performs the inference using the trained model from edges.
To perform the flexible DL using abundant network resources, we can consider the following: 1) federated learning, which trains local models on each device and then passes updated parameters to the central cloud for a global model; 2) distributed learning, in which each device communicates directly with its neighbors; and 3) split learning, in which devices or the central cloud trains each split model individually.
These DL techniques can be combined with over-the-air techniques \cite{air} to effectively aggregate large-scale distributed data in ultra-low-latency
scenarios. In addition, a scalable approach can be applied to resource-constrained mobile devices to reduce communication costs by leveraging gradient clustering, correction, and quantization methods.
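As an illustration of option 1), the sketch below runs a toy FedAvg-style loop for noiseless linear regression: each device takes local gradient steps on its private data, and only the model parameters are averaged across devices. All dimensions and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K = 4, 5                                   # model dimension, devices
w_true = rng.standard_normal(d)
# each device holds a private, noiseless linear-regression dataset
data = []
for _ in range(K):
    X = rng.standard_normal((50, d))
    data.append((X, X @ w_true))

def local_update(w, X, y, lr=0.05, steps=10):
    """A few local gradient-descent steps on one device's data."""
    for _ in range(steps):
        w = w - lr * (2 / len(y)) * X.T @ (X @ w - y)
    return w

w = np.zeros(d)
for _ in range(30):                           # FedAvg communication rounds
    # only parameters are exchanged, never the raw local data
    w = np.mean([local_update(w, X, y) for X, y in data], axis=0)

assert np.linalg.norm(w - w_true) < 1e-3
```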
\section{Summary and Outlook}
In this article, we discussed the vision and challenges to achieve the hyper-connected society in 6G.
We have identified several key enabling technologies and their roles in supporting ultra-broadband, ubiquitous 3D, and computation-aware connectivity in 6G. With these disruptive technologies, 6G is envisioned to collapse the existing boundaries of space/air/ground networks, communication/computing/intelligence capabilities, as well as physical/virtual worlds.
We anticipate that this will bring the truly {\it 6G hyper-connectivity} society to humans, machines, and sensors in ubiquitous 3D global coverage (beyond ground-level 2D connectivity for 5G), even in space, oceans, and deserts.
In Fig. 6, we summarized the hyper-connected 6G network architecture by highlighting the key enabling technologies along with other remaining major challenges.
Going forward, achieving Tbps data rates, zero coverage holes, and pervasive computing for connected intelligence
will contribute to reducing differences in social and regional infrastructures and economic opportunities, thereby ameliorating many social issues such as the digital divide, regional polarization, and education inequality.
Our hope is that this article will spark further interest and expedite the research activities for 6G.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\section{Introduction}
Moving toward the post-pandemic world, we are experiencing a new, previously inconceivable change in our lives. A large part of our daily activities are going online, and we are witnessing numerous use-cases in shopping, mobility, healthcare, and manufacturing. As a platform to embrace this dramatic change,
the fifth generation (5G) wireless systems were standardized and commercialized in 2019, and as of now, 5G services have been rolled out in more than 70 countries.
Notwithstanding the many technological advances brought by 5G, there are still many hurdles and challenges in realizing an envisioned boundless and hyper-connected society where intelligent things such as machines, vehicles, sensors, and robots are seamlessly connected both in physical and virtual spaces without limitation on the data rates and/or transmission delays \cite{Samsung6G}. To back up these disruptive changes and therefore make the slogan a reality, we need to prepare the sixth generation (6G) of wireless networks by identifying key enabling technologies that cannot be achieved through a simple evolution of 5G.
One important expectation in the 6G era is that
machines and things will be the main consumers of mobile data traffic, and thus there will be more than 55 billion devices connected to the Internet by 2025. These devices will continuously sense, process, act, and communicate with the surrounding environment, generating more than 73 zettabytes of data per year \cite{IDC}.
Since 6G wireless networks should support a variety of form factors with different service requirements and hardware capabilities, enabling technologies should be clearly distinct from those of 5G in various aspects.
The primary purpose of this article is to discuss key challenges arising from these considerations. These include the provision of terabits-per-second (Tbps) wireless connectivity, zero coverage-hole networks, and pervasive computing for connected intelligence, along with enabling technologies that can be used as a core for the 6G system. The key concepts of 6G hyper-connectivity are illustrated in Fig. \ref{fig:concept2}.
After the introduction, the rest of this article is organized as follows. In Section II, we address the challenges facing 6G hyper-connectivity. In Section III, we present the key enabling technologies for realizing this 6G vision. In Section IV, we discuss future research issues and provide our concluding remarks.
\begin{figure*}
\centering
\includegraphics[width=6.7in]{fig_rev8.jpg}
\caption{Key concepts of 6G hyper-connectivity.}
\label{fig:concept2}
\end{figure*}
\section{Challenges and Prospects Facing 6G Hyper-Connectivity} \label{sec:2}
\subsection{Tbps for Immersive User Experience}
Since the target of 5G is to support $20$ gigabits-per-second (Gbps) peak data rate and $0.1$ Gbps user-experienced data rate, it is not too difficult to support advanced multimedia services such as $8$K augmented reality (AR) streaming, $16$K virtual reality (VR) streaming, and extended reality (XR) requiring beyond-gigabit data rates.
It is however not easy to support truly immersive mobile services such as digital twin or metaverse since the virtual model accurately reflecting the physical world using XR devices or high-fidelity mobile holographic displays requires a rate of up to Tbps \cite{Samsung6G}.
For example, a full-view $24$K VR transmission with a $64$ pixels per degree (PPD) minimum resolution, a $200\,\text{Hz}$ minimum refresh rate, and a $300\!\!:\!\!1$ compression rate requires at least $6.37$ Gbps.
Clearly, this level of rate cannot be supported anytime anywhere in the current 5G systems since the typical user-experienced data rate of 5G is in the range of 0.1$\sim$1 Gbps.
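The quoted $24$K VR figure can be reproduced with a quick back-of-the-envelope calculation. Note that the full $360^{\circ}\times180^{\circ}$ field of view and $36$ bits per pixel are our assumptions, since the article states only the PPD, refresh rate, and compression ratio:

```python
ppd = 64           # pixels per degree (from the text)
refresh_hz = 200   # minimum refresh rate (from the text)
compression = 300  # compression ratio (from the text)
bits_per_pixel = 36          # assumed: 12-bit RGB
width_px = 360 * ppd         # assumed full panoramic view
height_px = 180 * ppd

raw_bps = width_px * height_px * bits_per_pixel * refresh_hz
rate_gbps = raw_bps / compression / 1e9
print(f"required rate: {rate_gbps:.2f} Gbps")  # matches the quoted 6.37 Gbps
```

Under these assumptions the compressed rate works out to exactly $6.37$ Gbps, consistent with the figure above.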
One way to support emerging services requiring tremendous data rates is to exploit the terahertz (THz) frequency band.
The THz band offers colossal spectrum but the THz waves suffer high attenuation due to the attendant significant path loss, molecular absorption, and atmospheric attenuation.
One straightforward option to overcome these problems is to densify the network, meaning that we reduce the distance between the base station (BS) and mobile device.
In such ultra-dense scenarios, pencil beamforming via ultra-massive multi-input multi-output (UM-MIMO) plasmonic nano-antenna arrays can be useful in providing 10 to 100$\times$ improvement in data rate \cite{UM-MIMO}.
While the densified small cells equipped with massive multi-input multi-output (MIMO) systems are a good fit for THz communication systems, their deployment will make the notions of cell and handover obsolete.
Thus, we need to design cell-free networks from scratch by considering various technical challenges, such as beam tracking/management, user association and BS coordination, and synchronization and initial access.
Other key enabling technologies to support Tbps data rates include orbital angular momentum (OAM) multiplexing, full-duplex, and spatial modulation-MIMO (SM-MIMO).
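For a feel of the attenuation numbers mentioned above, a minimal free-space path-loss sketch (Friis spreading loss only, ignoring molecular absorption, with $c \approx 3\times10^8$ m/s):

```python
import math

C = 3e8  # speed of light, m/s

def fspl_db(d_m, f_hz):
    # Free-space path loss: 20*log10(4*pi*d*f/c)
    return 20 * math.log10(4 * math.pi * d_m * f_hz / C)

for f_hz, label in [(28e9, "28 GHz"), (300e9, "0.3 THz"), (1e12, "1 THz")]:
    print(f"{label}: {fspl_db(100, f_hz):.1f} dB over 100 m")
```

Over the same 100 m link, 1 THz suffers roughly 31 dB more spreading loss than 28 GHz, before molecular absorption and atmospheric attenuation are even counted, which is why network densification and pencil beamforming are needed.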
\vspace{-2mm}
\subsection{Zero Coverage-Hole Networks}
Another more important goal of 5G systems is to provide advanced wireless connectivity for a wide variety of vertical applications including self-driving cars and drones, robot-aided smart factories and healthcare, and remote control for disaster scenarios.
To accommodate such applications,
wireless access should be available anywhere, meaning that there should be no coverage holes.
Although many network operators claim nationwide coverage for 5G, there are still many coverage holes in places such as tunnels, basements, elevators, mountains, and oceans.
Moreover, higher frequencies (3.5 GHz at FR1 and 28 GHz at FR2) used in 5G systems tend to be vulnerable to blockage owing to their high directivity and severe path loss.
For these reasons, more often than not we experience link failure in our daily use of 5G services.
Recently, several new approaches to enhance the coverage and ensure universal and seamless connectivity have been proposed.
An intelligent reflecting surface (IRS), an approach that provides an artificial path in the absence of line-of-sight (LOS) links, is a viable candidate
to eliminate coverage holes, particularly in dense urban areas \cite{RIS}.
Wave-controlled holographic MIMO surfaces that flexibly shape electromagnetic waves using sub-wavelength metallic or dielectric scattering particle structures are another good option.
In remote areas such as deserts, oceans, and mountain areas, in which the deployment of terrestrial networks (TNs) is almost impossible, non-terrestrial networks (NTNs) with satellites, high-altitude platform stations (HAPSs), and unmanned aerial vehicles (UAVs) can be an attractive solution because they ensure coverage without relying on the ground infrastructure.
By the deliberate combination of TNs and NTNs \cite{SAGINnew}, one can dramatically improve the coverage and thus generate the network with virtually no coverage hole.
\begin{figure*}
\begin{subfigure}{0.646\textwidth}
\includegraphics[width=1\textwidth]{fig2.png}
\caption{}
\label{fig:cell_free1}
\end{subfigure}
\begin{subfigure}{0.354\textwidth}
\hspace{-8mm}
\includegraphics[width=1.1\textwidth]{fig3.png}
\vspace{5.9mm}
\hspace{8mm} \caption{}
\label{fig:cell_free2}
\end{subfigure}
\caption{Left-hand side: Illustrations of 6G cell-free massive MIMO network; right-hand side: CDFs of user SINR in cell-free massive MIMO networks, small cell, and centralized massive MIMO networks.}
\label{fig:cell_free}
\end{figure*}
\subsection{Pervasive Computing for Connected Intelligence}
With the advent of truly immersive mobile services, such as XR and metaverse, computational overhead has become a serious concern since these services incur high energy consumption, long end-to-end latency, and heavy processor and memory overhead.
In 5G, mobile edge computing (MEC) has been widely used to delegate computation-intensive tasks to the edge (a local server near a BS). However, MEC would not be enough to support future applications.
In a VR service, for example, a mobile VR headset must provide photorealistic and $6$ degrees of freedom (6-DoF) rendered images within 20 ms of motion-to-render-to-photon (M2R2P) latency.
Controlling this overhead solely in a mobile device or offloading it to a nearby BS will be by no means easy, so a new paradigm to deal with the surge of computation is required.
Fortunately, as the density of wireless networks increases and beyond-gigabit transmissions can be supported using mmWave and THz band transmissions,
one can exploit computational resources in nearby BSs and mobile devices that are equipped with high performance computing processors.
For instance, a vehicle equipped with a high-performance battery ($1000\,\text{Wh/L}$) and powerful graphics processing units (GPUs) can perform environment sensing, learning, and computation-oriented tasks and send the results to the destination device.
Particularly, computation offloading is useful for the artificial intelligence (AI) applications, such as object detection and classification, speech recognition, and metaverse control, since these tasks usually require heavy deep neural network (DNN) computation, but the inference results are often very simple.
In order to facilitate the boundless use of computation resources over the network, one should use advanced wireless distributed computing techniques.
Decentralized learning, which obtains a global model from localized data via peer-to-peer communication among mobile devices, can be one good choice.
When security matters, federated learning where the mobile devices jointly train the global model without exchanging the local dataset with the central server would be a proper choice.
Yet another option to support large-scale models is split learning, in which large-scale DNNs are divided into several pieces and then trained separately on mobile devices.
\section{Key Enabling Technologies for 6G Hyper-Connectivity}
In this section, we discuss key enabling technologies to achieve beyond Gbps for immersive user experience, zero coverage-hole networks, and pervasive computing for connected intelligence.
\subsection{Distributed $\&$ Intelligence-Aware Cell-Free Massive MIMO Network}
In the 6G era, a network will be heavily densified and thus, the areal density of BSs will be comparable to, or even surpass, the density of mobile devices.
As the coverage of a cell gets smaller, the distance between the BS and the mobile device will also shrink.
To make the most of the proximity between BSs and mobile devices, a network should be designed such that a group of small cells coherently serve users in a user-centric manner (see in Fig. \ref{fig:cell_free1}) \cite{cell-free}.
In most urban scenarios, user-centric cell association will generate multiple LOS links between BSs and the mobile device, so that the mobile can easily achieve a high degree of macro-diversity gain and spectral efficiency improvement.
To evaluate the efficacy of the cell-free massive MIMO network, we plot cumulative distribution functions (CDFs) of user signal-to-interference-and-noise ratio (SINR) in mmWave ($100\,\text{GHz}$) and THz ($1\,\text{THz}$) communication scenarios.
One can see from Fig. \ref{fig:cell_free2} that the SINR variation of the cell-free massive MIMO network is much smaller, meaning that the cell-free massive MIMO network can well suppress the inter-cell interference through the cooperative beamforming of BSs.
For example, in the THz systems, the cell-free massive MIMO network can provide at least $7\,$dB SINR to more than $95\%$ of users while the conventional small cell network guarantees only $-2.5\,$dB.
We also observe that the SINR gain of the THz band is higher than that for the mmWave band, meaning that it can effectively compensate for the huge path loss of the THz systems.
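A toy Monte Carlo drop illustrates why cooperative service compresses the SINR distribution. The path-loss exponent, noise power, and the idealized single-user, interference-free cell-free model are all assumptions for illustration, not the simulation behind Fig. \ref{fig:cell_free2}:

```python
import math, random

random.seed(0)
ALPHA = 3.5    # assumed path-loss exponent
NOISE = 1e-10  # assumed normalized noise power

def channel_gains(n_bs=16):
    """Drop n_bs BSs and one user uniformly in a 1x1 km square."""
    bs = [(random.random(), random.random()) for _ in range(n_bs)]
    ux, uy = random.random(), random.random()
    # clamp the distance at 10 m to avoid the near-field singularity
    return [max(math.hypot(bx - ux, by - uy), 0.01) ** -ALPHA for bx, by in bs]

def median(xs):
    return sorted(xs)[len(xs) // 2]

small, cellfree = [], []
for _ in range(2000):
    g = channel_gains()
    s = max(g)
    small.append(s / (sum(g) - s + NOISE))  # strongest BS serves; rest interfere
    cellfree.append(sum(g) / NOISE)         # all BSs cooperate (no interference)

db = lambda x: 10 * math.log10(x)
print(f"median SINR: small cell {db(median(small)):.1f} dB, "
      f"cell-free {db(median(cellfree)):.1f} dB")
```

Even in this crude sketch, turning the non-serving BSs from interferers into cooperating transmitters lifts the SINR floor substantially, which is the qualitative effect seen in the CDFs.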
To fully enjoy the benefits of cell-free massive MIMO, we need to address the following challenges.
\smallskip \noindent
\textbf{Intelligence-aware network architecture: }
In order to provide the user-centric coverage through the joint beamforming of cooperating BSs, cell control information, channel state information (CSI), and the transmit data of local BSs should be shared with the central unit (CU) through the fronthaul link.
Sharing such a large amount of information among all network entities will cause a significant fronthaul overhead, not to mention the tremendous computational overhead required for the channel estimation, data transmission/reception, and resource allocation \cite{cell-free2}.
A promising approach to mitigate the overhead is decentralized processing. One such example is multi-agent deep reinforcement learning (MA-DRL), where operations such as beamforming, resource allocation, and user association are performed locally and individually by the DRL agent in each small cell.
\smallskip \noindent
\textbf{Multi-sensory data fusion for channel prediction and beamforming:}
In order to realize a truly zero coverage-hole cell-free massive MIMO network, the network should have mobility-aware functions such as fast-fading channel estimation, mobile trajectory tracking, blockage prediction, and dynamic BS clustering and beamforming.
One promising direction is to use the multi-modal sensory data (e.g., RGB depth vision information of camera, range information of LiDAR, SLAM-based user location) along with the computer vision techniques to identify the 3D shape of nearby wireless environments (e.g., obstacle locations, LOS and non-line-of-sight (NLOS) paths, and mobility patterns).
In this scheme, using a set of panoramic images obtained from the stereo camera, the DL-based object detection can extract the location (3D Cartesian coordinate $(x, y, z)$) of target objects in an image and then determine its class.
In acquiring the location information of a mobile device from input images, a convolutional neural network (CNN) that extracts the spatially correlated feature using the convolution filter can be employed.
Once the location of the mobile is identified, by measuring the angle and distance from the BS to the target object (e.g., mobile phone, vehicle, or urban air mobility (UAM)), the mmWave and THz beam can be generated without a complicated beam management process.
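Once a detector reports the target's 3D Cartesian position, the steering direction follows from simple geometry. A minimal sketch, with all coordinates and the function name being illustrative:

```python
import math

def beam_angles(bs_xyz, target_xyz):
    """Azimuth/elevation (deg) from the BS toward a detected target."""
    dx, dy, dz = (t - b for t, b in zip(target_xyz, bs_xyz))
    az = math.degrees(math.atan2(dy, dx))
    el = math.degrees(math.atan2(dz, math.hypot(dx, dy)))
    return az, el

# BS array mounted at 10 m height; detector reports a vehicle at (30, 40, 1) m
az, el = beam_angles((0.0, 0.0, 10.0), (30.0, 40.0, 1.0))
print(f"steer to azimuth {az:.1f} deg, elevation {el:.1f} deg")
```

The negative elevation here simply means the beam tilts slightly below the array's horizontal plane toward the ground-level target.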
\smallskip \noindent
\textbf{Computation-oriented end-to-end cell-free massive MIMO network:}
In order to support the computation-intensive and/or latency-critical applications, the future 6G network should have the computation offloading functionality \cite{EdgeComp}. Computation offloading can be realized by utilizing the local computing power at the BSs as well as cloud computing at the CU. For example, latency-sensitive computation tasks will be processed at the BSs in proximity to the user, and the computation-intensive tasks can be handled via the cooperation of BSs and CU.
To ensure successful computation offloading, an integrated cell-free network design exploring both communication and computational perspectives needs to be suggested. In general, it should be guaranteed that computing operations can be delivered by reliable and low-latency communication links through a user, BSs, and CU. From a computation perspective, edge servers located at the BSs may have limited computing capabilities in terms of computing power, memory size, and energy consumption compared to the central server at the CU. In addition, since the user needs to offload computation tasks to nearby BSs, the randomness in computation latency at each BS should be taken into account to guarantee low-latency and reliable computing services.
\begin{figure*}
\centering
\includegraphics[width=7.2in]{fig2_0714.jpg}
\caption{Satellite communication systems with different altitudes: Round-trip delay and single satellite coverage as a fraction of the Earth’s surface.}
\label{fig:3D_connectivity}
\end{figure*}
\smallskip \noindent
\textbf{Integrated RF and optical X-haul provisioning:}
In the cell-free massive MIMO network, fronthaul and backhaul overhead is a serious yet underrepresented problem.
Recently, advanced fronthaul/backhaul techniques have been proposed.
Replacing the current CPRI interface with the optical fiber-based fronthaul/backhaul is a simple option but will increase the CAPEX significantly.
A radio stripe network-based fronthaul and backhaul, where the BSs are serially connected to the same cables in a daisy-chain architecture, would be a cost-effective option for the densely deployed BSs.
Also, an integrated access and backhaul (IAB) network where only a few BSs are connected to traditional fiber infrastructures while other BSs wirelessly relay the fronthaul/backhaul traffic would be a promising option.
Yet another simple solution is free-space optical (FSO) communication where the modulated optical beams are transmitted through the free space.
When the LOS link is guaranteed, FSO-based backhaul can offer several Gbps data rates with a much lower deployment cost than the optical fiber-based backhaul.
When the LOS link is absent, an optical IRS, a planar metasurface that controls the optical propagation environment by adjusting the phase shifts of reflecting elements, can be used to offer a virtual LOS link between the BSs and CU.
\smallskip \noindent
\textbf{Rank-sufficient THz LOS communications using cell-free massive MIMO network:}
In the THz band, due to the severe path loss and high directivity, the conventional MIMO techniques exploiting the rich scattering of multipath environments will not work.
In the case where the LOS path is dominant, the cell-free massive MIMO systems can be an appealing solution to address the rank-deficiency issue of THz communications.
Specifically, through the joint transmission of small BSs, the cell-free massive MIMO systems can generate a virtual multipath propagation.
Further, the shortened communication distance along with the extremely large number of transmit antennas incurs a spherical wavefront, which creates a rich non-linear pattern of phase variations and therefore increases the channel rank.
\subsection{Boundless and Fully Integrated Terrestrial $\&$ Non-Terrestrial Networks}
In remote and rural areas where the existing terrestrial cellular network is unreachable \cite{6GNTN}, NTNs will be a useful means of providing cost-effective and high-speed internet services.
Depending on the flying altitude and coverage area, NTN devices can be classified as UAV, HAPS, and satellite.
Satellite stations are further divided into geostationary orbit (GEO), medium earth orbit (MEO), and low earth orbit (LEO) based on their orbital altitude.
In Fig. \ref{fig:3D_connectivity}, we plot the round-trip delay (RTD) and single satellite coverage on the Earth's surface for various altitude levels.
When a satellite revolves in orbit close to the surface of the earth, a faster response time can be guaranteed, but the satellite's coverage decreases owing to its proximity to the ground.
For example, only three equally spaced GEO satellites are needed to provide near-global coverage because each GEO satellite covers almost 35\% of the Earth's area, but the RTD in this case exceeds 0.5 seconds. Conversely, the RTD of the LEO satellite is less than 0.1 second, but the coverage will be just 0.2\% at an altitude of 550 km.
One can infer from this discussion that a large number of LEO satellites, often called LEO mega-constellation, would be a good option to guarantee the latency in the order of a few tens of milliseconds and global coverage.
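The coverage and delay numbers above can be sanity-checked with basic spherical geometry. The minimum elevation angles ($10^{\circ}$ for GEO, $40^{\circ}$ for LEO, Starlink-like) are our assumptions, and the RTD is a nadir lower bound for a bent-pipe link:

```python
import math

RE = 6371.0  # mean Earth radius, km

def coverage_fraction(h_km, min_elev_deg):
    # Spherical-cap half-angle for users at minimum elevation eps:
    # lam = arccos(RE*cos(eps)/(RE+h)) - eps; fraction = (1 - cos(lam))/2
    eps = math.radians(min_elev_deg)
    lam = math.acos(RE * math.cos(eps) / (RE + h_km)) - eps
    return (1 - math.cos(lam)) / 2

def bent_pipe_rtd_s(h_km):
    # Nadir lower bound: the signal crosses the altitude four times
    return 4 * h_km * 1e3 / 3e8

geo_cov = coverage_fraction(35786, 10)  # GEO, assumed 10 deg min elevation
leo_cov = coverage_fraction(550, 40)    # LEO, assumed 40 deg min elevation
print(f"GEO: {100*geo_cov:.1f}% coverage, RTD >= {bent_pipe_rtd_s(35786):.2f} s")
print(f"LEO: {100*leo_cov:.2f}% coverage, RTD >= {bent_pipe_rtd_s(550)*1e3:.1f} ms")
```

With these assumptions, the GEO coverage lands near 34\% and the LEO coverage near 0.2\%, consistent with the figures quoted in the text; slant paths at the coverage edge push the GEO round trip beyond 0.5 seconds.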
We note that there exists a trade-off between the ground observer's minimum elevation angle and link quality (see Fig. \ref{fig:3D_connectivity}).
If we increase the minimum elevation angle, the LOS probability of the satellite link also increases; however, the beam coverage and visible time of the satellite will be reduced.
To balance this trade-off, the ground station’s minimum elevation angle should be carefully determined in practice.
Recently, SpaceX decided to change the minimum elevation angle of Starlink LEO satellites from 40\textdegree\ to 25\textdegree\ (granted by the FCC in April 2021) to improve coverage with a small number of LEO satellites \cite{starlink}.
In analyzing the NTN communication of satellite and aerial devices, areal capacity ($\mathrm{bits/s/m^2}$) is no longer useful; therefore, a new metric called volumetric traffic capacity ($\mathrm{bits/s/m^3}$) that measures the quality of user experience in 3D space is needed.
To obtain a sufficient volumetric traffic capacity to ensure universal global connectivity for 6G systems, the following issues must be addressed.
\smallskip \noindent
\textbf{Mobility-aware channel modeling and tracking in 3D space:} Since the NTN terminals are located at a high altitude and are moving fast with respect to the ground stations, LOS propagation paths are guaranteed, but the wireless signals suffer significant Doppler shift. The first step in addressing the distortions caused by high mobility and a large coverage area is an accurate 3D channel model that accounts for the Doppler shift and propagation delay.
In addition, dynamic channel prediction based on the real-time location tracking of NTN terminals is of great importance
to achieve agile and effective NTN management.
\smallskip \noindent
\textbf{Boundless multi-layered TN/NTN architecture:} To provide sufficient and continuous coverage everywhere, the large LEO satellite constellations should be combined with GEO/MEO using the coverage-aware protocol and NTN-backhaul connection. In fact, a boundless multi-layered architecture fully integrating TNs and NTNs is essential to guarantee the reliable connection in the ocean and desert, as well as high-speed trains and airplanes \cite{38821}. UAV and HAPS play a key role in improving the connection quality because they can be used as relays between the LEO/MEO/GEO satellite and ground stations. In addition, an integrated network protocol that includes multi-layer routing, network mobility management, and resource allocation should be introduced.
\smallskip \noindent
\textbf{Seamless inter-beam/satellite/orbit handover:}
Owing to the high-speed movement of the LEO satellites, the visible time of a satellite at the ground station is at most 10$\sim$20 minutes, causing frequent handovers even for a fixed ground station.
To mitigate this problem, LEO satellites should be connected to nearby satellites in the same or adjacent orbits through laser or radio frequency (RF) links. This so-called inter-satellite link (ISL) connectivity supports relatively low end-to-end latency even for the long distance communication scenario.
Candidate techniques to establish an efficient ISL connectivity include spatial-mode diversity and multiplexing, wavelength division multiplexing, and adaptive beam control using pointing/acquisition/tracking (PAT) process.
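The latency advantage of ISLs over long distances comes from propagation at $c$ in vacuum versus roughly $c/1.47$ in silica fiber. A minimal one-way comparison, with the 10,000 km path length and 550 km altitude being illustrative assumptions:

```python
C = 3e8            # speed of light in vacuum, m/s
FIBER_INDEX = 1.47 # assumed refractive index of silica fiber
LEO_ALT_M = 550e3  # assumed LEO altitude, m

ground_m = 10_000e3  # a 10,000 km long-haul path
fiber_ms = ground_m * FIBER_INDEX / C * 1e3
# ISL route: up to the constellation, along it at c, back down
isl_ms = (2 * LEO_ALT_M + ground_m) / C * 1e3
print(f"fiber: {fiber_ms:.0f} ms, LEO ISL: {isl_ms:.0f} ms (one way)")
```

Despite the extra 1,100 km of altitude crossings, the ISL route is about 12 ms faster one way in this sketch, which is why ISL connectivity can support relatively low end-to-end latency for long-distance scenarios.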
%
Another challenge of NTNs is that the cell size varies according to the elevation angle of the NTN terminals; therefore, conventional handover techniques might not be suitable.
%
%
To address this challenge, a forecast-based handover decision on the basis of satellite position estimation is required.
%
%
\begin{figure}[!t]
\centering
\includegraphics[width=3.7in]{fig3-1_0714.jpg}
\caption{Over-the-air distributed computing scenario where a VR headset assigns linear computation tasks to four edges with uncoded/coded offloading.}
\label{fig_comp}
\end{figure}
\smallskip
\noindent
\textbf{3D cell planning and fast beam management:}
Since the cell size varies according to the altitude and elevation angle of the satellite, we need to introduce 3D cell planning based on the orbits of satellites.
A satellite with a higher altitude and lower elevation angle tends to form a larger cell size owing to its wider beam footprint on the ground.
Specifically, GEO satellites have beam footprints of up to 3,500 km, whereas LEO satellites at an altitude of approximately 600 km have beam footprints of up to 1,000 km.
In addition, 3D cell types can be classified into earth-moving and earth-fixed cells with respect to the Earth's surface, depending on whether the satellite beams can be fixed or steered.
The earth-fixed cell should be designed to be highly robust to satellite beam pointing errors since even a very small error shifts the satellite beam footprint tens of kilometers away from the initially designated area on the Earth; whereas, the earth-moving cell may require more frequent handovers because of the dynamics of beam movement on the ground.
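The "tens of kilometers" sensitivity can be checked with a near-nadir small-angle sketch; the specific error angles below are illustrative assumptions:

```python
import math

def footprint_shift_km(alt_km, pointing_err_deg):
    # Near-nadir approximation: the footprint moves by roughly alt * tan(error)
    return alt_km * math.tan(math.radians(pointing_err_deg))

print(f"LEO 600 km, 1 deg error:     {footprint_shift_km(600, 1.0):.1f} km")
print(f"GEO 35786 km, 0.05 deg error: {footprint_shift_km(35786, 0.05):.1f} km")
```

A $1^{\circ}$ error at LEO or a mere $0.05^{\circ}$ error at GEO already displaces the footprint by roughly 10 and 31 km, respectively.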
As for the cell planning in multi-layered TN/NTN architecture, a truncated octahedron-shaped cell planning distinguished from the hexagon-shaped 2D cell layout is desired when considering the volumetric quotient in 3D space.
\smallskip
\noindent
\textbf{Intelligent spectrum management and NTN system optimization:}
To achieve seamless and broad coverage in integrated terrestrial and non-terrestrial networks, a powerful spectrum management scheme is needed.
To this end, intelligent and dynamic cooperation between primary and cognitive users is preferred over the semi-static spectrum sharing.
Note that because the S-band (i.e., 2170$\sim$2200 MHz) is adjacent to the 3G/4G LTE band (i.e., 2110$\sim$2170 MHz), the large Doppler shift of the high-speed satellites causes severe interference in the adjacent band, deteriorating the service quality of 3G/4G TN services.
A possible option to mitigate the interference is the on-board pre-compensation of the Doppler shift at the center of the beam on the ground while the residual Doppler shift can be corrected at each receiver side.
To further improve spectral efficiency, we should have an appropriate frequency reuse pattern among multiple beams, available channels, and antenna gains.
\begin{figure}[!t]
\centering
\includegraphics[width=3.2in]{fig3-2_0714.jpg}
\caption{Task completion times when a user offloads linear computation tasks to four edges over two different channel conditions (Scenario 1: LOS channel at a distance of 15m; Scenario 2: NLOS channel at a distance of 300m).}
\label{fig_comp2}
\end{figure}
\begin{figure*}[!t]
\centering
\includegraphics[width=6.5in]{Fig6.png}
\caption{Hyper-connected 6G network architecture}
\label{fig_NWA}
\end{figure*}
\subsection{Over-the-air Distributed Computing for Artificial Intelligence Applications}
A future trend in dealing with the relentless growth in computing-intensive applications is pervasive computing, in which mobile devices offload computational overhead to network edges or nearby devices.
Notwithstanding the rosy outlook, a number of issues have to be addressed since there is a fundamental trade-off between computation and communication. In fact, when one tries to offload computational overhead, one should investigate the overhead of the communication link.
Fig. \ref{fig_comp} depicts computation offloading scenarios where a VR headset assigns linear computations to four edges with various offloading schemes \cite{CodedComp}.
Since the reduction in the communication load over the network increases the computation overhead at the edges, we need to come up with a proper offloading scheme optimized for the computation capability and network conditions.
One thing to note is that the wireless channels and computing capabilities differ widely across devices, so there is a large variation in the task completion time.
Particularly, this problem is critical for the ultra-reliable and low-latency communications (URLLC) services since the task completion time is determined by the device with the worst channel condition or straggling/malfunctioning devices (e.g., edge failure in Fig. \ref{fig_comp}).
For the seamless integration of communication and computing,
the following approaches can be considered.
\smallskip \noindent
\textbf{Communication-aware pervasive computation design: } To alleviate the difficulties caused by the channel uncertainty, we need to exploit the mmWave and THz frequency bands.
%
If sufficient bandwidth is provided and LOS channel conditions are guaranteed for every computing node, wireless links will not be a serious bottleneck.
%
Obviously, this ideal condition cannot always be guaranteed, so we need to come up with an environment-aware and communication-friendly computing mechanism.
%
For example, the communication load can be reduced by quantization, compression, and sparsification of the transmit information as long as it does not severely degrade the task quality.
%
For special cases where the (weighted) average of the computation tasks is needed, we can design the system such that the computation results are added during the wireless transmission of the same time/frequency resources. This so-called over-the-air computation can significantly reduce the communication and computation overhead.
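The over-the-air averaging idea can be sketched in a few lines: each device pre-inverts its (assumed perfectly known, real-valued) channel, all devices transmit on the same resource, and the medium itself performs the summation. Values and gains below are made up for illustration:

```python
import random

random.seed(1)
N = 8
values = [random.uniform(0.0, 1.0) for _ in range(N)]  # local computation results
gains = [random.uniform(0.5, 2.0) for _ in range(N)]   # real channel gains
# Each device scales its result by 1/h; the superposition of the N
# simultaneous transmissions then delivers the sum in one channel use.
rx = sum(h * (v / h) for v, h in zip(values, gains))
air_avg = rx / N
print(f"over-the-air average: {air_avg:.4f}")
```

With perfect channel inversion the receiver obtains the exact average in a single channel use, instead of $N$ sequential uploads; in practice, noise and imperfect CSI make the aggregate an estimate rather than an exact sum.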
\smallskip \noindent
\textbf{Collaborative and robust coded computation offloading:} A naïve approach to exploit multiple computing nodes is to split the whole computation task into multiple pieces and then distribute them equally to multiple nodes. This split-based offloading will not be effective in the presence of link failure and straggling/malfunctioning devices, since the whole task can be
completed only when all subtasks are finished. One way to avoid this problem is to replicate a part of computation tasks and assign them to the multiple devices.
%
One option to overcome difficulties caused by the channel heterogeneity, link failure, and straggling/malfunctioning is to apply coding techniques to distributed task allocation.
%
By introducing redundancy in distributed tasks, coded distributed computing can be more resilient to the link and edge failures. For example, if computation tasks are designed using maximum distance separable (MDS) codes, one can recover the straggling tasks from other completed
tasks \cite{CodedComp2}.
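A tiny numerical instance of the MDS idea for a distributed matrix-vector product ($k=2$ data blocks, $n=3$ workers; all matrices are illustrative): the parity result substitutes for a straggler.

```python
# Split A into two row blocks A1, A2 and add a parity block A3 = A1 + A2.
# Any 2 of the 3 partial products y_i = A_i x recover y = [A1; A2] x.
A1 = [[1, 2], [3, 4]]
A2 = [[5, 6], [7, 8]]
A3 = [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(A1, A2)]
x = [1, -1]

def matvec(M, v):
    return [sum(m * w for m, w in zip(row, v)) for row in M]

y1, y3 = matvec(A1, x), matvec(A3, x)  # worker 2 straggles and never returns
y2 = [c - a for a, c in zip(y1, y3)]   # its block recovered from the parity
print("recovered y2:", y2)
```

The task completes as soon as any two of the three workers finish, so a single straggling or failed edge no longer dictates the completion time, at the cost of 50\% extra computation.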
%
As illustrated in Fig. \ref{fig_comp2}, one can achieve the better computation-communication trade-off by assigning coded computation sub-tasks to computing nodes.
\smallskip \noindent
\textbf{Scalable and over-the-air deep learning:}
To perform a large-scale deep learning (DL) task over wireless networks, we must decide how to allocate the tasks among the cloud, network edges, and the mobile devices \cite{scalable}.
For instance, network edges perform DNN training, and the mobile device performs the inference using the trained model from edges.
To perform the flexible DL using abundant network resources, we can consider the following: 1) federated learning, which trains local models on each device and then passes updated parameters to the central cloud for a global model; 2) distributed learning, in which each device communicates directly with its neighbors; and 3) split learning, in which devices or the central cloud trains each split model individually.
These DL techniques can be combined with over-the-air techniques \cite{air} to effectively aggregate large-scale distributed data in ultra-low-latency
scenarios. In addition, a scalable approach can be applied to resource-constrained mobile devices to reduce communication costs by leveraging gradient clustering, correction, and quantization methods.
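The federated averaging pattern in option 1) can be sketched with a deliberately tiny model: one scalar parameter per device, each device running a few local gradient steps on its own (made-up) objective before the server averages the results.

```python
# Toy federated averaging: each device minimizes (w - target_i)^2 locally;
# the targets and hyperparameters are illustrative assumptions.
def local_update(w, target, steps=5, lr=0.1):
    for _ in range(steps):
        w -= lr * 2.0 * (w - target)   # gradient step on (w - target)^2
    return w

targets = [1.0, 2.0, 3.0, 6.0]   # each device's local optimum
w_global = 0.0
for _ in range(20):              # communication rounds
    w_global = sum(local_update(w_global, t) for t in targets) / len(targets)
print(round(w_global, 4))        # converges to the mean of the targets, 3.0
```

Only the updated parameter, never the local data, crosses the network each round, which is precisely what makes the scheme attractive when privacy or uplink cost matters.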
%
%
%
%
%
%
%
%
%
%
%
%
%
%
\section{Summary and Outlook}
In this article, we discussed the vision of, and the challenges in achieving, a hyper-connected society in 6G.
We identified several key enabling technologies and their roles in supporting ultra-broadband, ubiquitous 3D, and computation-aware connectivity in 6G. With these disruptive technologies, 6G is envisioned to collapse the existing boundaries of space/air/ground networks, communication/computing/intelligence capabilities, as well as physical/virtual worlds.
We anticipate that this will bring about a true {\it 6G hyper-connectivity} society for humans, machines, and sensors with ubiquitous 3D global coverage (beyond the ground-level 2D connectivity of 5G), even in space, oceans, and deserts.
In Fig. 6, we summarize the hyper-connected 6G network architecture, highlighting the key enabling technologies along with the other remaining major challenges.
Going forward, achieving Tbps data rates, zero coverage holes, and pervasive computing for connected intelligence
will contribute to reducing disparities in social and regional infrastructure and economic opportunity, thereby ameliorating many social issues such as the digital divide, regional polarization, and educational inequality.
Our hope is that this article will spark further interest and expedite the research activities for 6G.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
|
2007.13226
|
\section{Introduction and Preliminaries}
Let $E$ and $F$ be two Riesz spaces. A mapping $T:E\rightarrow F$ is said to
be a \textit{lattice homomorphism}, following Mena and Roth [7], whenever
\[
T(x\vee y)=Tx\vee Ty\quad\text{and}\quad T(x\wedge y)=Tx\wedge Ty,\quad\text{for all }x,y\in E.
\]
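For instance (a simple illustration of the definition, not taken from the references), the map $T:\mathbb{R}\rightarrow\mathbb{R}$ defined by $T(x)=x^{3}$ satisfies
\[
T(x\vee y)=Tx\vee Ty\quad\text{and}\quad T(x\wedge y)=Tx\wedge Ty
\]
because $t\mapsto t^{3}$ is increasing, yet $T$ is not additive; thus a lattice homomorphism need not be linear.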
A linear lattice homomorphism is called a \textit{Riesz homomorphism}. We will look at
the following question: when is every lattice homomorphism a Riesz homomorphism?
This problem can be treated in different manners, depending on the domains and
co-domains on which the maps act. In the last decade, several authors have
studied this problem on different Archimedean Riesz spaces. The study of the
relationship between lattice and Riesz homomorphisms was inaugurated in
1978 by the fundamental work of Mena and Roth [7], who were interested
essentially in the setting of the Riesz spaces of real-valued continuous
functions defined on compact Hausdorff spaces. They proved that if $X$ and
$Y$ are compact Hausdorff spaces and $T:C(X)\rightarrow C(Y)$ is a lattice
homomorphism such that $T(\lambda1)=\lambda T(1)$ for all $\lambda\in\mathbb{R}$,
then $T$ is linear. Later, many authors took interest in this problem. In [9],
Thanh generalized Mena and Roth's result to the case where $X$ and $Y$ are
realcompact spaces. For another generalization, by Lochan and Strauss, see [5]. So
far the best results in this field are due to Ercan and Wickstead [2].
Starting from the theorem of Mena and Roth and using the Kakutani
representation theorem, they showed that if $E$ and $F$ are uniformly complete Archimedean
Riesz spaces with weak order units $e_{1}\in E$ and $e_{2}\in F$ and if
$T:E\rightarrow F$ is a lattice homomorphism such that $T(\lambda
e_{1})=\lambda e_{2}$ for all $\lambda\in\mathbb{R}$, then $T$ is linear. As an application, they gave a corresponding result for
the case where $E$ and $F$ are two uniformly complete Archimedean Riesz spaces
with disjoint complete systems of projections, as an example of two $\sigma
$-Dedekind complete Riesz spaces. It should be noted that the proofs of these
results rely heavily on the uniform structure of the spaces, so it cannot
be expected that they can be adapted to the general Riesz space case. Surprisingly
enough, to the best of our knowledge, no attention has been paid in the
literature to this problem in the general case of Riesz spaces. The main reason
for this seems to lie in the topological richness of
the uniformly complete structure of Riesz spaces with weak order units, which
gives a particular interest to lattice homomorphisms on those spaces. It seems
natural, therefore, to ask: what happens in the general case of Riesz spaces?
What weaker conditions guarantee that a lattice homomorphism defined on a
Riesz space is linear? The above results tell us what happens in some
cases, but there are certainly gaps in the general case. The complications are
partly due to the delicate behavior of lattice homomorphisms, and partly to the
topological poverty of the domain. If we can find a way to extend the proofs
of the aforementioned results from an Archimedean Riesz
space to its uniform completion, then we may hope to establish more. In
the present paper, as indicated by its title, we take a look at this problem.
Our main goal is to prove that, in the results of Mena and Roth [7], Thanh
[9], Lochan and Strauss [5], and Ercan and Wickstead [2], the assumption of
uniform completeness on the domain of the mapping is superfluous. As
applications, we give some generalizations of those results. Indeed, we
provide not only new results but also new techniques, which we think are
useful additions to the literature. Our results here are to be compared with
the corresponding ones in the papers just mentioned; in this respect, we find
that the lattice structure of the maps is quite natural in this setting, and even
that it has some advantages, as the key results are obtained in a more direct and constructive manner.
We take it for granted that the reader is familiar with the notions of Riesz
spaces (or vector lattices) and operators between them. For terminology,
notations, and concepts that are not explained in this paper, one can refer to
the standard monographs [1,6,8].
In order to avoid unnecessary repetition we will assume throughout the paper
that all Riesz spaces under consideration are Archimedean.
A Riesz space is called \textit{Dedekind complete} whenever every nonempty
subset that is bounded from above has a supremum. Similarly, a Riesz space is
said to be $\sigma$\textit{-Dedekind complete} whenever every countable subset
that is bounded from above has a supremum. We say that a vector subspace
$G$ of a Riesz space $E$ is \textit{majorizing} $E$ whenever for each $x\in E$
there exists some $y\in G$ with $x\leq y$ (or, equivalently, whenever for each
$x\in E$ there exists some $y\in G$ with $y\leq x$). A Dedekind complete
Riesz space $L$ is said to be a \textit{Dedekind completion} of the Riesz
space $E$ whenever $E$ is Riesz isomorphic to an order dense majorizing Riesz
subspace of $L$ (which we identify with $E$). It is a classical result that
every Archimedean Riesz space $E$ has a Dedekind completion, which we shall
denote by $E^{\delta}$.
The relatively uniform topology on Riesz spaces plays a key role in the
context of this work. Let us recall the definition of \textit{relatively
uniform convergence}. Let $E$ be an Archimedean Riesz space and let
$u\in E^{+}$. A sequence $(x_{n})_{n}$ of elements of $E$ \textit{converges
}$u$\textit{-uniformly} to the element $x\in E$ whenever, for every
$\epsilon>0$, there exists a natural number $N_{\epsilon}$ such that
$|x_{n}-x|\leq\epsilon u$ holds for all $n\geq N_{\epsilon}$. This will be
denoted by $x_{n}\rightarrow x(u)$. The element $u$ is called the
\textit{regulator of the convergence}. The sequence $(x_{n})_{n}$
\textit{converges relatively uniformly} to $x\in E$ whenever $x_{n}\rightarrow x(u)$
for some $u\in E^{+}$. We shall write $x_{n}\rightarrow
x(r.u)$ if we do not want to specify the regulator. \textit{Relatively uniform
limits} are unique if and only if $E$ is Archimedean [6, Theorem 63.2]. A
nonempty subset $D$ of $E$ is said to be \textit{relatively uniformly closed}
if every relatively uniformly convergent sequence in $D$ has its limit in
$D$. We emphasize that the regulator need not be an element of $D$.
The relatively uniformly closed subsets are the closed sets of a topology on
$E$, namely, the relatively uniform topology. The closure of the Riesz space $E$
in its Dedekind completion, with respect to the relatively uniform topology,
is a uniformly complete vector lattice, denoted by $E^{ru}$ and referred to as
the uniform completion of $E$ (see [6]). Thus $E$ is an order dense majorizing
Riesz subspace of $E^{ru}$.
The notion of \textit{relatively uniform Cauchy sequences} is defined in the
obvious way. A Riesz space $E$ is said to be \textit{relatively uniformly
complete} whenever every relatively uniformly Cauchy sequence has a (unique)
limit. For more details we refer the reader to [6].
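For example (a standard observation, recorded here for orientation), if $E=C(X)$ with $X$ compact and the regulator is $u=1$, then the condition $|x_{n}-x|\leq\epsilon 1$ for all $n\geq N_{\epsilon}$ means precisely that $x_{n}\rightarrow x$ in the uniform norm; so relatively uniform convergence extends norm convergence to Riesz spaces that carry no natural norm.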
\section{Main results}
In order to reach our main result we need some prerequisites.
\begin{lemma}
Let $E$ be a Riesz space and $E^{ru}$ its uniform completion. Then for every
$x\in E^{ru}$ there exists a sequence $(x_{n})_{n}$ in $E$ such that
$x_{n}\nearrow x(r.u)$.
\end{lemma}
\begin{proof}
Let $0\neq x\in E^{ru}$. Since $E$ is majorizing in $E^{ru}$, there exists
$0<y\in E$ such that
\[
0<\left\vert x\right\vert \leq y.
\]
Now let $E_{y}$ be the principal ideal generated by $y$ in $E$, and let
$E_{y}^{ru}$ be the principal ideal generated by $y$ in $E^{ru}$. By the
Kakutani theorem [8, Theorem 2.1.3], $E_{y}^{ru}$ is order isomorphic to
$C(X)$ for some compact Hausdorff space $X$, and so $E_{y}$ is uniformly dense in
$C(X)=E_{y}^{ru}$. Now, since in $C(X)$ relatively uniform
convergence and uniform convergence of sequences coincide, the relatively
uniform closure of a set is the familiar uniform closure, and the relatively
uniform topology is the norm topology with respect to the uniform norm. It
follows immediately that the pseudo-closure and the closure of any set with
respect to this topology coincide. Since $x\in E_{y}^{ru}$, there exists a
sequence $(x_{n})_{n}$ in $E_{y}$ such that $x_{n}\nearrow x(r.u)$.
\end{proof}
The following results deal with prime ideals in Riesz spaces, which play an
important role in our approach. A prime ideal $P$ of a Riesz space $E$ is a
nonempty proper lattice subset of $E$ (not necessarily a vector subspace) that
contains with any element all smaller ones, and such that $x\in P$ or $y\in P$
whenever $x\wedge y\in P$. We say that a prime ideal $P$ in $C(X)$ is
associated with a point $x\in X$ if $g\in P$ whenever $f\in P$ and
$g(x)<f(x)$. In [4], Kaplansky proved that for $X$ compact, every prime ideal
in $C(X)$ is associated with some point of $X$, and that this point is unique if $X$
is also Hausdorff. In the same paper, he also proved that if $P\subset Q$,
where $P$ and $Q$ are prime ideals in $C(X)$ and $X$ is a compact Hausdorff space,
then $P$ and $Q$ are associated with the same point. For more details about
prime ideals in $C(X)$ see [4].
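As a concrete illustration (ours, for orientation), fix a point $x\in X$ and put $P_{x}=\left\{ f\in C(X):f(x)\leq0\right\}$. Then $P_{x}$ is proper, contains with any element all smaller ones, and $f\wedge g\in P_{x}$ gives $\min(f(x),g(x))\leq0$, so $f\in P_{x}$ or $g\in P_{x}$; hence $P_{x}$ is a prime ideal, and it is clearly associated with the point $x$.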
The following result will be of great use in what follows.
\begin{proposition}
Let $E$ be a Riesz space and let $P$ be a prime ideal of $E$. Then $P^{ru}$, the
uniform completion of $P$, is a prime ideal of $E^{ru}$.
\end{proposition}
\begin{proof}
First we prove that $P^{ru}$ is a proper subset of $E^{ru}$. Suppose that
$P^{ru}=E^{ru}$; then $E\subset P^{ru}$. On the other hand, for every $x\in
E\subset P^{ru}$ there exists $y\in P$ such that $x\leq y$ (because $P$ is
majorizing in $P^{ru}$), which implies that $x\in P$, as $P$ is an ideal
in $E$. Hence $E\subset P$, so $P=E$, which contradicts the fact that $P$ is
proper. Consequently, $P^{ru}$ is a proper subset of $E^{ru}$. Secondly, it is not
hard to prove that $P^{ru}$ is a lattice ideal of $E^{ru}$. It remains to show
the primeness of $P^{ru}$. To this end, let $x,y\in E^{ru}$ be such that
$x\wedge y\in P^{ru}$; we claim that $x\in P^{ru}$ or $y\in P^{ru}$. By
the preceding lemma, there exist two sequences $(x_{n})_{n}$, $(y_{n})_{n}$ in $E$
such that $x_{n}\nearrow x(r.u)$ and $y_{n}\nearrow y(r.u)$, and then $x_{n}\wedge
y_{n}\nearrow x\wedge y$. Now, by the majorizing property of $P$ in
$P^{ru}$, there exists $z\in P$ such that $x\wedge y\leq z$, and thus
$x_{n}\wedge y_{n}\leq x\wedge y\leq z$. Therefore $x_{n}\wedge y_{n}\in P$,
which implies that $x_{n}\in P$ or
$y_{n}\in P$. Thus $x\in P^{ru}$ or $y\in P^{ru}$, and we are done.
\end{proof}
The main result of this paper is strongly based on the following proposition.
\begin{proposition}
Let $E$ be a uniformly dense Riesz subspace of $C(X)$, where $X$ is a compact
Hausdorff space, and let $P$ be a prime ideal of $E$. Then $P$ and $P^{ru}$ are
associated with the same point $x\in X$.
\end{proposition}
\begin{proof}
By the preceding proposition, $P^{ru}$ is a prime ideal of $E^{ru}=C(X)$, so
by [4] there exists $x\in X$ such that $P^{ru}$ is associated with $x$. We
claim that $P$ is also associated with $x$. To see this, let $g\in E$ and
$f\in P$ be such that $g(x)<f(x)$; we claim that $g\in P$. Since
$E\subset E^{ru}$ and $P\subset P^{ru}$, we have $g\in E^{ru}$ and $f\in P^{ru}$,
so from $g(x)<f(x)$ we obtain $g\in P^{ru}$ (because $P^{ru}$ is associated with
$x$). Then there exists $h\in P$ such that $g\leq h$ (because $P$ is majorizing
in $P^{ru}$), which implies $g\in P$ (because $P$ is an ideal of $E$),
which is the desired result.
\end{proof}
We now arrive at one of the central results of this paper, in which
we provide a representation of every real lattice homomorphism on a uniformly
dense Riesz subspace of $C(X)$ as evaluation at some point of $X$. We note
that the proof of the corresponding result in the case of $C(X)$ (see [7]) is
heavily based on the structure of $X$, and hence it cannot be expected that it
can be adapted directly to the general Riesz space case. The complications are
essentially due to the topological poverty of the domain. Our curiosity about
this problem was in part sparked by the fact that, after reading the
details of the proofs of the aforementioned results closely, we were certain that with similar
techniques we could do better. A crucial piece of information that we need from
the richness of the uniform completion structure comes from its
prime ideals, as established in the preceding results. We give a generalization of the results in
[2,7] to uniformly dense Riesz subspaces of $C(X)$ by adding some facts to
Mena and Roth's proofs. To clarify our contribution, we give detailed proofs of
the two following results.
\begin{proposition}
Let $E$ be a uniformly dense Riesz subspace of $C(X)$, where $X$ is a compact
Hausdorff space, such that $1\in E$. Let $\phi:E\rightarrow\mathbb{R}$ be a
lattice homomorphism satisfying $\phi(\lambda)=\lambda$ for all
$\lambda\in\mathbb{R}$. Then there exists $x\in X$ such that $\phi=\delta_{x}$,
the point evaluation map defined for $f\in C(X)$ by $\delta_{x}(f)=f(x)$.
\end{proposition}
\begin{proof}
Let $I_{a}^{-}=\left\{ r\in\mathbb{R}:r\leq a\right\}$ for $a\in\mathbb{R}$.
As mentioned in [7], $\left\{ I_{a}^{-}:a\in\mathbb{R}\right\}$ and
$\left\{ \phi^{-1}(I_{a}^{-}):a\in\mathbb{R}\right\}$ are chains of prime
ideals in $\mathbb{R}$ and $E$, respectively. Now, by Proposition 1,
$\left\{ (\phi^{-1}(I_{a}^{-}))^{ru}:a\in\mathbb{R}\right\}$ is a chain of
prime ideals in $E^{ru}=C(X)$. Then, by the preceding proposition,
$(\phi^{-1}(I_{a}^{-}))^{ru}$ and $\phi^{-1}(I_{a}^{-})$ are associated with
the same point $x\in X$ for all $a\in\mathbb{R}$. We claim that
$\phi=\delta_{x}$. To this end, let $f\in E$. Using the fact that
$f\in\phi^{-1}(I_{\phi(f)}^{-})$ and that $x$ is an associated point for
$\phi^{-1}(I_{\phi(f)}^{-})$, we have $r\in\phi^{-1}(I_{\phi(f)}^{-})$ for
every $r<f(x)$. It follows that $r\leq\phi(f)$ for every $r<f(x)$, which
implies that $f(x)\leq\phi(f)$ (letting $r\rightarrow f(x)$). Also, using the
fact that $r\in\phi^{-1}(I_{r}^{-})$ and that $x$ is an associated point for
$\phi^{-1}(I_{r}^{-})$, we have $f\in\phi^{-1}(I_{r}^{-})$ for every $r>f(x)$.
It follows that $\phi(f)\leq r$ for every $r>f(x)$, which implies that
$\phi(f)\leq f(x)$ (letting $r\rightarrow f(x)$). Therefore
$\phi(f)=f(x)=\delta_{x}(f)$ for every $f\in E$, which implies that
$\phi=\delta_{x}$.
\end{proof}
Now we can give a generalization of the approaches of Mena and Roth, Thanh,
Lochan and Strauss, and Ercan and Wickstead [2,5,7,9] for Riesz spaces with a
strong order unit, as follows:
\begin{theorem}
Let $E$ be a Riesz space with strong order unit $e$ and let $F$ be a Riesz
space. If $T$ is a lattice homomorphism from $E$ into $F$ such that $T(\lambda
e)=\lambda T(e)$ for each $\lambda\in\mathbb{R}$, then $T$ is linear (a Riesz
homomorphism).
\end{theorem}
\begin{proof}
First we claim that $T(E)\subset F_{T(e)}$, the principal ideal generated by
$T(e)$ in $F$. Take $y\in T(E)$; then there exists $x\in E$ such that
$y=T(x)$. Now, since $e$ is a strong order unit, there exists
$\lambda\in\mathbb{R}^{+}$ such that $-\lambda e\leq x\leq\lambda e$. Since $T$
is an increasing map, we obtain $T(-\lambda e)=-\lambda T(e)\leq T(x)\leq
T(\lambda e)=\lambda T(e)$. Finally, $\left\vert T(x)\right\vert \leq\lambda
T(e)$, thus $T(x)\in F_{T(e)}$, which implies that $T(E)\subset F_{T(e)}$.
We may therefore regard $T$ as a map from $E$ into $(F_{T(e)})^{ru}$, the
uniform completion of $F_{T(e)}$. According to the Kakutani theorem
[8, Theorem 2.1.3], there exist two compact Hausdorff spaces $X$ and $Y$ such
that $E^{ru}$ and $(F_{T(e)})^{ru}$ can be identified with $C(X)$ and $C(Y)$,
respectively. Now let $y\in Y$; then the map $\delta_{y}\circ
T:E\rightarrow\mathbb{R}$ defined by $\delta_{y}\circ T(f)=T(f)(y)$ for all
$f\in E$ is a lattice homomorphism fixing the constants, and thus by the above
proposition there exists a unique $x\in X$ with
\[
\delta_{y}\circ T=\delta_{x},
\]
which implies that $T(f)(y)=f(x)$ for all $f\in E$. Writing $x=\varphi(y)$, we
thus have
\[
T(f)=f\circ\varphi\text{.}
\]
Consequently, $T$ is linear, which gives the desired result.
\end{proof}
In the next result, we prove that the $\sigma$-Dedekind completeness condition
in Ercan and Wickstead's result [2, Corollary 8] can be removed, as follows:
\begin{theorem}
Let $E$ and $F$ be Riesz spaces. If $T$ is a lattice homomorphism from $E$
into $F$ such that $T(\lambda x)=\lambda T(x)$ for each $\lambda\in\mathbb{R}$
and $x\in E$, then $T$ is linear (a Riesz homomorphism).
\end{theorem}
\begin{proof}
It suffices to prove that $T$ is additive. To this end, let $x,y\in E$ and
put $e=\left\vert x\right\vert +\left\vert y\right\vert$. Denote by $E_{e}$
the principal ideal generated by $e$ in $E$ and by $T_{e}$ the restriction of $T$
to $E_{e}$. Then $E_{e}$ is a Riesz space with strong order unit $e$, and
$T_{e}:E_{e}\rightarrow F$ is a lattice homomorphism which satisfies
$T_{e}(\lambda e)=\lambda T_{e}(e)$. According to the preceding theorem,
$T_{e}$ is linear. Since $x,y,x+y\in E_{e}$, it follows that
$T(x+y)=T_{e}(x+y)=T_{e}(x)+T_{e}(y)=T(x)+T(y)$, which gives the desired result.
\end{proof}
We are now able to announce the main result of this paper, in which we give a
Riesz space version of the approaches of Mena and Roth, Thanh, Lochan and
Strauss, and Ercan and Wickstead [2,5,7,9], as follows. For the proof, we use
techniques similar to those of Lemma 1 in [2], with some additions and more details.
\begin{theorem}
Let $E$ and $F$ be Riesz spaces with weak order units $e$ and $f$,
respectively. If $T$ is a lattice homomorphism from $E$ into $F$ satisfying
$T(\lambda e)=\lambda f$ for each $\lambda\in\mathbb{R}$, then $T$ is a Riesz
homomorphism (linear).
\end{theorem}
\begin{proof}
Let $E_{e}$ and $F_{f}$ be the principal order ideals generated by $e$ in $E$
and by $f$ in $F$, respectively. As in the proof of Theorem 1, we have
$T(E_{e})\subset F_{f}$. Denote by $T_{e}$ the restriction of $T$ to $E_{e}$,
that is, $T_{e}:E_{e}\rightarrow F_{f}$ is defined by $T_{e}(x)=T(x)$. By
applying Theorem 1, $T_{e}$ is a Riesz homomorphism (linear). On the other
hand, $T(e)=f$ is a weak order unit of $F$, so for every $x\geq0$ (note that
$T(x)\geq0$ because $T$ is positive) we have
\[
T(x)=\sup_{n}(T(x)\wedge nT(e))=\sup_{n}T(x\wedge ne)=\sup_{n}T_{e}(x\wedge ne).
\]
We claim that $T$ is additive on $E^{+}$. Let $x,y\in E^{+}$. The
inequality $(x+y)\wedge ne\leq x\wedge ne+y\wedge ne$ for all
$n\in\mathbb{N}$ shows that
\[
T_{e}((x+y)\wedge ne)\leq T_{e}(x\wedge ne+y\wedge ne)=T_{e}(x\wedge
ne)+T_{e}(y\wedge ne)\leq T(x)+T(y),
\]
and hence
\[
T(x+y)\leq T(x)+T(y)\text{.}
\]
On the other hand, the inequality $x\wedge ne+y\wedge me\leq x+y$ for all
$n,m\in\mathbb{N}$ implies that
\[
T(x\wedge ne+y\wedge me)=T_{e}(x\wedge ne+y\wedge me)=T_{e}(x\wedge
ne)+T_{e}(y\wedge me)\leq T(x+y)\text{.}
\]
Now, by taking the supremum over $n,m\in\mathbb{N}$, we obtain
\[
T(x)+T(y)\leq T(x+y),
\]
and therefore
\[
T(x+y)=T(x)+T(y)
\]
for all $x,y\in E^{+}$. Let $S_{0}$ be the restriction of $T$ to $E^{+}$. By the
Kantorovich theorem (see [1, Theorem 1.7]), $S_{0}$ extends uniquely to a
positive linear operator $S$ from $E$ into $F$ given by
\[
S(x)=S_{0}(x^{+})-S_{0}(x^{-})=T(x^{+})-T(x^{-})
\]
for all $x\in E$. Next, we claim that $T(-x)=-T(x)$ for all $x\in E^{+}$.
Since $-T(-x)\geq0$ (because $T$ is increasing and $T(0)=0$), we have, for all
$x\in E^{+}$,
\begin{align*}
-T(-x) & =\sup_{n}\big((-T(-x))\wedge nT(e)\big)=\sup_{n}\big(-(T(-x)\vee(-nT(e)))\big)\\
& =\sup_{n}\big(-(T(-x)\vee T(-ne))\big)=\sup_{n}\big(-T((-x)\vee(-ne))\big)\\
& =\sup_{n}\big(-T_{e}((-x)\vee(-ne))\big)=\sup_{n}T_{e}(x\wedge ne)=T(x)\text{.}
\end{align*}
Consequently,
\[
T(x)^{-}=(-T(x))\vee0=-(T(x)\wedge0)=-T(x\wedge0)=-T(-x^{-})=T(x^{-})
\]
for all $x\in E$, and similarly $T(x)^{+}=T(x)\vee0=T(x\vee0)=T(x^{+})$. Hence
\[
T(x)=T(x)^{+}-T(x)^{-}=T(x^{+})-T(x^{-})=S(x),
\]
which implies that $T$ is linear, so $T$ is a Riesz homomorphism and the
proof is finished.
\end{proof}
As an immediate application to the above theorem we give the following result.
\begin{corollary}
Let $E$ and $F$ be Riesz spaces, with $e$ a weak order unit of $E$. If $T$ is a
lattice isomorphism from $E$ onto $F$ such that $T(\lambda e)=\lambda T(e)$
for each $\lambda\in\mathbb{R}$, then $T$ is a Riesz isomorphism (linear).
\end{corollary}
\begin{proof}
It is sufficient to prove that $Te$ is a weak order unit of $F$. To see this,
let $0\leq y\in F$ with $y\wedge T(e)=0$, and choose $0\leq x\in E$ with
$T(x)=y$; then
\[
0=y\wedge T(e)=T(x)\wedge T(e)=T(x\wedge e)\text{.}
\]
This implies that $x\wedge e=0$ (because $T$ is an isomorphism), and then $x=0$
because $e$ is a weak order unit of $E$, so $y=0$. Hence $Te$ is a weak order
unit of $F$. Now, from the above theorem, $T$ is a Riesz isomorphism.
\end{proof}
At the end of this paper, we give an important Banach--Stone type theorem for
non-linear homomorphisms between lattices of Lipschitz functions on metric
spaces (which are, in general, not uniformly complete spaces). For a metric
space $(X,d)$ we denote by $Lip(X)$ the unital vector lattice of all
Lipschitz real functions on $X$. In the following, we show that the unital
vector lattice structure of $Lip(X)$ determines the Lipschitz structure of
a complete metric space $X$. Completeness cannot be avoided here, since every
metric space has the same Lipschitz functions as its completion.
\begin{theorem}
Let $(X,d)$ and $(Y,d^{\prime})$ be complete metric spaces and let
$T:Lip(X)\rightarrow Lip(Y)$ be a lattice isomorphism with $T(\lambda
1)=\lambda T(1)$ for each $\lambda\in\mathbb{R}$. Then $X$ and $Y$ are
Lipschitz homeomorphic.
\end{theorem}
\begin{proof}
It is plain that $1$ is a weak order unit of $Lip(X)$, so from Corollary 1, $T$
is a Riesz isomorphism. Now, according to [3, Theorem 3.10], $X$ and $Y$ are
Lipschitz homeomorphic.
\end{proof}
|
2203.12328
|
\section{Introduction}
\label{sec1}
In mobile communication networks, orthogonal frequency division multiplexing (OFDM) systems have to deal with time-selectivity of the channel in addition to frequency-selectivity. A key issue in OFDM is the problem of channel estimation/equalization in time-varying channels \cite{coleri}. In addition to time-selectivity of the channel, OFDM receivers are known to be sensitive to impairments due to local oscillator phase noise \cite{tomba}. With a conventional single-tap equalizer, the bit error rate (BER) performance of OFDM floors due to inter-carrier interference (ICI) caused by phase noise \cite{smaini}. Recently, deep neural networks have found use in a wide range of problems in physical layer design, including transceiver design \cite{shea}-\cite{mehran}. Fully connected neural network based channel estimation in OFDM systems has been considered in \cite{ce1},\cite{ce2}. These works do not consider the time-selectivity of the fading channel. The convolutional neural network (CNN) is a type of neural network that has been widely used in the field of image and video processing for applications like de-noising and improving the resolution of an image. The application of CNNs for channel estimation in OFDM systems has been considered in \cite{mehran}. In \cite{mehran}, the time-frequency (TF) grid of channel coefficients is modelled as an image. The estimates obtained at the pilot locations are interpolated across time and frequency and filtered through a super-resolution network and a de-noising network. This approach does not yield a single trained architecture that performs consistently across all signal-to-noise ratio (SNR) values. Also, these works do not consider the problem of channel estimation in the presence of phase noise (PN). Several methods have been reported for PN estimation and compensation, e.g., \cite{ce5},\cite{ce7}. In \cite{ce7}, the authors use learning based schemes to estimate PN and decode symbols in doubly-selective channels. 
However, the approach assumes the channel to be static over the subframe duration. Further, its computational complexity and pilot density are high. In this work, we propose a novel, low-complexity learning based scheme for channel estimation in time-varying channels (where the channel varies in time within one OFDM subframe depending on the Doppler) and phase noise compensation in OFDM systems. Under such rapidly time-varying channel conditions, the proposed scheme achieves better performance at lower complexity and pilot density compared to other state-of-the-art techniques. The new contributions in this letter are summarized as follows.
We consider the doubly-selective TF channel grid as a 2D image and solve the problem of channel estimation as an image completion problem using sparse data, the sparse data here being the pilot symbols sparsely populated in the TF grid. We propose a 2D CNN architecture for this, which is a natural fit. A novelty here is that, in order to render the network robust to PN, a training methodology is employed in which the training data is rotated by random phases before being fed to the network. We further propose a simple time-domain PN compensation scheme that uses the channel estimates from the estimator network and the knowledge of the pilot symbols to estimate the PN at the pilot locations, which is then 2D-interpolated across the entire TF grid. Simulation results for the Vehicular A (VehA) channel model show that the proposed channel estimator network and PN compensation scheme achieve robust mean square error and bit error performance in the presence of PN.
\vspace{-2mm}
\section{System Model}
\label{sys_mod}
Consider a single-input single-output (SISO) OFDM system with $N_c$ subcarriers. Let $\{X_k\}_{k=0}^{N_c-1}$ be the information symbols multiplexed on the $N_c$ subcarriers of one OFDM symbol in the frequency domain. Let the corresponding time domain sequence after inverse discrete Fourier transform (IDFT) be ${\bf x}_t = \{x_n\}_{n=0}^{N_c-1}$. An $N_{\text{cp}}$-length cyclic prefix (CP) is added, and the cyclic-prefixed time domain sequence is transmitted through a frequency-selective channel with $L$ taps ($N_{\text{cp}} \geq L$). The received signal is affected by PN induced multiplicative distortion. The received time domain sequence at the receiver, after removing the CP, is given by
\begin{align}
{\bf y}_{t} = {\bf \Phi}_t\left({\bf h}_t \text{\textcircled{$*$}}{\bf x}_t + {\bf n}_t\right),
\label{sys1}
\end{align}
where ${\bf \Phi}_t$ = diag$(e^{j\phi_0}, e^{j\phi_1}, \cdots, e^{j\phi_{N_c-1}}) \in \mathbb{C}^{N_c \times N_c}$ is a diagonal matrix with PN realizations on the diagonal, ${\bf h}_t \in \mathbb{C}^{N_c \times 1}$ is the channel impulse response (padded with $N_c-L$ zeros), and ${\bf n}_t \in \mathbb{C}^{N_c \times 1}$ contains i.i.d. circularly symmetric Gaussian noise samples with variance $\sigma^2$. Let {\boldmath $\psi$}$_t = [e^{j\phi_0} \ e^{j\phi_1} \cdots e^{j\phi_{N_c-1}}]^T$ denote the time domain PN vector. The receiver converts ${\bf y}_{t}$ to a frequency domain vector ${\bf y}_{f} \in \mathbb{C}^{N_c \times 1}$ using discrete Fourier transform (DFT), which can be written as
\begin{align}
{\bf y}_{f} = {\bf \Psi}_f\text{\textcircled{$*$}}\left({\bf h}_f \odot {\bf x}_f + {\bf n}_f\right),
\label{sys2}
\end{align}
where ${\bf \Psi}_f =[\psi_0 \ \psi_1 \cdots \psi_{N_c-1}]^T \in \mathbb{C}^{N_c \times 1}$ represents the DFT coefficient vector of the time domain PN vector {\boldmath $\psi$}$_t$, and ${\bf h}_f,{\bf x}_f,{\bf n}_f \in \mathbb{C}^{N_c \times 1}$ represent the channel response, transmitted symbol, and noise vector in the frequency domain, respectively. Defining a circulant matrix ${\bf \Theta}_f \in \mathbb{C}^{N_c \times N_c}$ as
\begin{align}
{\bf \Theta}_f =
\begin{bmatrix}
\psi_0 & \psi_{N_c-1} & \cdots & \psi_1 \\
\psi_1 & \psi_0 & \cdots & \psi_2 \\
\vdots & \vdots & \ddots & \vdots \\
\psi_{N_c-1} & \psi_{N_c-2} & \cdots & \psi_0 \\
\end{bmatrix},
\label{sys3}
\end{align}
the circular convolution in \eqref{sys2} is equivalently represented as
\begin{align}
{\bf y}_{f} = {\bf \Theta}_f\left({\bf h}_f \odot {\bf x}_f + {\bf n}_f\right).
\label{sys4}
\end{align}
Using the distributive property of matrix multiplication, \eqref{sys4} can be simplified to
\begin{align}
{\bf y}_{f} & \hspace{0.5mm} = \hspace{0.5mm} {\bf \Theta}_f{\bf h}_f \odot {\bf x}_f + {\bf \Theta}_f{\bf n}_f
\hspace{0.5mm} = \hspace{0.5mm} {\bf \tilde{h}}_f \odot {\bf x}_f + {\bf \tilde{n}}_f,
\label{sys5}
\end{align}
where ${\bf \tilde{h}}_f={\bf \Theta}_f{\bf h}_f$ and ${\bf \tilde{n}}_f={\bf \Theta}_f{\bf n}_f$ are the PN-affected channel frequency response and noise vectors, respectively, and $({\bf \tilde{n}}_f)_i \sim \mathcal{CN}(0, \sigma^2)$, $i=0, 1, \cdots, N_c-1$.
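The equivalence between the time domain model \eqref{sys1} and the frequency domain form \eqref{sys5} can be checked numerically. The following sketch (our own, using \texttt{numpy} FFT conventions, with the $1/N_c$ DFT factor absorbed into $\psi_f$) verifies it in the noiseless case:

```python
import numpy as np

Nc, L = 8, 3
rng = np.random.default_rng(0)
x_f = rng.standard_normal(Nc) + 1j * rng.standard_normal(Nc)   # one OFDM symbol
h = rng.standard_normal(L) + 1j * rng.standard_normal(L)       # L-tap channel
h_t = np.concatenate([h, np.zeros(Nc - L)])                    # zero-padded CIR
phi = 0.05 * rng.standard_normal(Nc)                           # PN realizations

# time domain, Eq. (1) without noise: y_t = Phi_t (h_t circularly convolved with x_t)
x_t = np.fft.ifft(x_f)
y_t = np.exp(1j * phi) * np.fft.ifft(np.fft.fft(h_t) * np.fft.fft(x_t))
y_f = np.fft.fft(y_t)

# frequency domain, Eq. (5): y_f = Theta_f (h_f ⊙ x_f), Theta_f circulant in psi_f
psi_f = np.fft.fft(np.exp(1j * phi)) / Nc
Theta = np.array([[psi_f[(i - j) % Nc] for j in range(Nc)] for i in range(Nc)])
assert np.allclose(y_f, Theta @ (np.fft.fft(h_t) * x_f))
```

The circulant structure of ${\bf \Theta}_f$ in \eqref{sys3} appears here as the index pattern $\psi_{(i-j) \bmod N_c}$.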
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{Fig1.eps}
\caption{Pilot and data symbols placement in the TF grid.}
\label{fig:frame_partition}
\end{figure}
Transmission is divided into subframes. Each subframe consists of $N_s$ OFDM symbols as shown in Fig. \ref{fig:frame_partition}. From \eqref{sys5}, the received signal matrix in the frequency domain corresponding to one subframe can be written as
\begin{align}
{\bf Y}_{f} &= {\bf \tilde{H}}_f \odot {\bf X}_f + {\bf \tilde{N}}_f,
\label{sys6}
\end{align}
where ${\bf X}_f = [{\bf x}_{f}^{(0)} \ {\bf x}_{f}^{(1)}\cdots{\bf x}_{f}^{(N_s-1)}] \in \mathbb{C}^{N_c \times N_s}$ denotes the $N_s$ transmitted OFDM symbols
with the $i$ in ${\bf x}_{f}^{(i)}$ denoting the OFDM symbol index. Likewise, ${\bf \tilde{H}}_f = [{\bf \tilde{h}}_{f}^{(0)} \ {\bf \tilde{h}}_{f}^{(1)} \cdots {\bf \tilde{h}}_{f}^{(N_s-1)}]$ $\in \mathbb{C}^{N_c \times N_s}$, ${\bf \tilde{N}}_f = [{\bf \tilde{n}}_{f}^{(0)} \ {\bf \tilde{n}}_{f}^{(1)} \cdots {\bf \tilde{n}}_{f}^ {(N_s-1)}] \in \mathbb{C}^{N_c \times N_s}$, and ${\bf Y}_f = [{\bf y}_{f}^{(0)} \ {\bf y}_{f}^{(1)} \cdots {\bf y}_{f}^{(N_s-1)}] \in \mathbb{C}^{N_c \times N_s}$ are the channel response, noise, and the received OFDM symbols, respectively.
Towards estimating the coefficients of the channel matrix ${\bf \tilde{H}}_f$ for a subframe, pilot symbols are placed at known locations in the subframe. Figure \ref{fig:frame_partition} shows one such arrangement, called lattice-type pilot arrangement, wherein $N_p$ pilot symbols are placed in the subframe. The pilot symbols are separated in time by $S_t$ time slots and in frequency by $S_f$ subcarriers. Let ${\bf x}_{f, p} \in \mathbb{C}^{N_p \times 1}$ be the vector of transmitted pilot symbols and ${\bf y}_{f, p}$ be the corresponding received vector. Let ${\bf \tilde{h}}_{f, p} \in \mathbb{C}^{N_p \times 1}$ be the vector of channel coefficients seen by the pilot symbols. The vector of least squares (LS) channel estimates at the pilot locations, ${\bf \hat{\tilde{h}}}_{f, p}$, is obtained as
\begin{align}
{\bf \hat{\tilde{h}}}_{f, p} = \underset{{\bf \tilde{h}}_{f, p}}{\text{argmin}} \Vert {\bf y}_{f, p} - {\bf \tilde{h}}_{f, p} \odot {\bf x}_{f, p} \Vert^2,
\label{sys7}
\end{align}
which on solving gives
\begin{align}
{\bf \hat{\tilde{h}}}_{f, p} = \frac{{\bf y}_{f, p}}{{\bf x}_{f, p}}.
\label{sys8}
\end{align}
Typically, using the knowledge of ${\bf \hat{\tilde{h}}}_{f, p}$ at the pilot locations, interpolation is carried out to obtain the estimates for the entire TF grid, i.e., to obtain an estimate of ${\bf \tilde{H}}_f$. But due to the time-selective nature of the channel and the random nature of rotations introduced by PN, such traditional approaches yield poor estimates/performance. It is therefore necessary to $i)$ learn and track the time variations of the channel, and $ii)$ estimate and compensate the PN in order to achieve robust performance. In the following section, we propose a CNN architecture to solve the first task, and solve the second task using the estimates obtained from the first task.
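For reference, the traditional baseline described above can be sketched as follows (our illustration, with assumed variable names: LS estimates at the lattice pilot positions followed by separable linear interpolation over the full TF grid):

```python
import numpy as np

def ls_interpolate(Y, X_p, pilot_rows, pilot_cols):
    """Baseline: LS channel estimates at lattice pilot positions (Eq. (8)),
    then separable linear interpolation over the full Nc x Ns TF grid."""
    H_ls = Y[np.ix_(pilot_rows, pilot_cols)] / X_p      # per-pilot LS estimates
    Nc, Ns = Y.shape
    # interpolate along frequency (subcarriers) for each pilot-bearing symbol
    freq = np.stack([np.interp(np.arange(Nc), pilot_rows, H_ls[:, j])
                     for j in range(len(pilot_cols))], axis=1)
    # then along time (OFDM symbols) for every subcarrier
    return np.stack([np.interp(np.arange(Ns), pilot_cols, freq[i, :])
                     for i in range(Nc)], axis=0)
```

This piecewise-linear interpolation is exact only when the channel varies linearly between pilots, which is precisely what fails under rapid time-selectivity and random PN rotations.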
\section{Proposed channel estimation and PN compensation}
\label{ch_est}
Figure \ref{fig:block_dia} shows the block diagram of the proposed CNN based channel estimator network and PN compensation scheme for OFDM systems. At the transmitter,
an OFDM subframe comprising $N_s$ OFDM symbols (with pilot and data symbols as shown in Fig. \ref{fig:frame_partition}), represented by ${\bf X}_f$, is converted to the time domain using the IDFT operation to obtain ${\bf X}_t \in \mathbb{C}^{N_c \times N_s}$, and prefixed with a CP. The subframe is transmitted over a doubly-selective fading channel. The channel matrix for the subframe in the frequency domain is ${\bf H}_f = [{\bf h}_f^{(0)} {\bf h}_f^{(1)} \cdots {\bf h}_f^{(N_s-1)}]$, where ${\bf h}_f^{(i)}$ is the channel response of the $i$th OFDM symbol. The receiver introduces additive noise (${\bf N}_t$) and multiplicative distortion (${\bf \Psi}_t$) due to PN. The CP is removed and the resultant matrix ${\bf Y}_t$ is converted to the frequency domain using the DFT to obtain ${\bf Y}_f$. The matrix ${\bf Y}_f$ comprises both pilot and data symbols. The pilot symbols are used to obtain an estimate ${\bf \hat{H}}_f = [{\bf \hat{h}}_f^{(0)} {\bf \hat{h}}_f^{(1)} \cdots {\bf \hat{h}}_f^{(N_s-1)}]$ of the channel matrix ${\bf H}_f$ using the proposed channel estimator network.
Following this, ${\bf \hat{H}}_f$ and the received pilot symbols are used to estimate the PN samples and compensate the received subframe ${\bf Y}_f$ to obtain ${\bf Y}'_f$ using the proposed PN compensation algorithm.
A second set of channel estimates, ${\bf \hat{H}}'_f$, is obtained using the channel estimator network with ${\bf Y}'_f$ as the received subframe. Finally, ${\bf \hat{H}}'_{f}$ is used for decoding the data symbols in ${\bf Y}'_f$.
\begin{figure*}
\centering
\includegraphics[width=0.8\linewidth]{Fig2.eps}
\caption{Proposed CNN based channel estimator network and PN compensation scheme.}
\label{fig:block_dia}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{Fig3.eps}
\caption{Proposed CNN based channel estimator network.}
\label{fig:ch_est}
\end{figure}
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|}
\hline
{\bf Layer} & {\bf Input channels} & {\bf Output channels} & {\bf Kernel size}\\
\hline
1 & 1 & 64 & (16, 4) \\
\hline
2 & 64 & 32 & (16, 4) \\
\hline
3 & 32 & 21 & (17, 5) \\
\hline
4 & 21 & 1 & (20, 8) \\
\hline
\end{tabular}
\caption{Parameters of the 2D-CNN layers in the channel estimator network.}
\label{tab:ch_est_par}
\end{table}
\subsection{Proposed channel estimator network and training}
\label{sec3a}
The proposed channel estimator network comprises four 2D-CNN layers, with the stride of each layer set to one and the padding adjusted so that the output of each layer has the same dimensions as the input. The input to the network is a sparse TF grid comprising the LS estimates (using \eqref{sys8}) of the channel at the pilot locations and zeros elsewhere. The sparse TF grid is separated into real and imaginary parts and estimation is performed sequentially on each. Using the sparse information available at the input, the network is trained to `complete' the TF grid, i.e., to provide estimates for the entire TF grid. This is depicted in Fig. \ref{fig:ch_est}, wherein the squares marked yellow at the input represent the available LS estimates at the pilot locations, and the estimator network provides estimates for the full TF grid at the output, tracking the channel variations in both time and frequency. The other parameters of the channel estimator network are presented in Table \ref{tab:ch_est_par}.
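The layer configuration of Table \ref{tab:ch_est_par} can be sketched in PyTorch as follows. The excerpt specifies only the convolutional parameters, so the ReLU activations between layers are an assumption of this sketch.

```python
import torch
import torch.nn as nn

class ChannelEstimator(nn.Module):
    """Four 2D-CNN layers (cf. Table of layer parameters): stride 1, 'same' padding."""
    def __init__(self):
        super().__init__()
        cfg = [(1, 64, (16, 4)), (64, 32, (16, 4)),
               (32, 21, (17, 5)), (21, 1, (20, 8))]
        layers = []
        for i, (cin, cout, k) in enumerate(cfg):
            layers.append(nn.Conv2d(cin, cout, k, stride=1, padding='same'))
            if i < len(cfg) - 1:
                layers.append(nn.ReLU())  # assumed inter-layer activation
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

# sparse TF-grid input: N_c = 72 subcarriers x N_s = 14 OFDM symbols,
# real and imaginary parts fed as separate single-channel images
est = ChannelEstimator()
out = est(torch.zeros(1, 1, 72, 14))
```

With stride one and `'same'` padding, each layer preserves the $72 \times 14$ grid, so the output is a completed TF grid of the same shape as the input.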
Doubly-selective channel realizations are generated and used for training the channel estimator network. During training, PN induced rotations are introduced at the input of the network. The network is trained using PN samples that are Gaussian distributed \cite{smaini} with zero mean and $\sigma_{{\tiny \mbox{PN}}}^2$ variance.
Modelling the PN samples with a Gaussian distribution improves training robustness, since the model learns both with the effect of PN (when the PN magnitude is large) and effectively without it (when the PN magnitude is close to zero). This helps the model generalize beyond the $\sigma_{{\tiny \mbox{PN}}}$ values seen during training.
To train the network, the input is set to be a sparse TF-grid, ${\bf H}_p$, where the pilot locations contain channel coefficients of a channel realization with PN and zeros elsewhere. The output of the network is compared against the channel realization, ${\bf H}_{\text{act}}$, using an $L_1$-loss function given by
\begin{align}
L = \overline{\sum_{{\bf H}_{\text{act}}} \vert f({\bf \Theta}_{\text{CE}}, {\bf H}_p) - {\bf H}_{\text{act}} \vert},
\label{l1_loss}
\end{align}
where $f(\cdot)$ represents the channel estimator network, ${\bf \Theta}_{\text{CE}}$ is the set of all trainable parameters in the network, and $\overline{\cdot}$ denotes the mean operation over all the training samples.
The other hyper-parameters used in the training are shown in Table \ref{tab:training_hp}.
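A single training step under the $L_1$ objective \eqref{l1_loss} with the optimizer settings of Table \ref{tab:training_hp} might look as follows; the one-layer stand-in network and the random tensors are placeholders for the estimator network and the channel data, not the paper's actual pipeline.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Conv2d(1, 1, 3, padding=1)   # stand-in for the channel estimator network
opt = torch.optim.Adam(model.parameters(), lr=0.001)
# learning rate divided by 2 every 2000 epochs, per the hyper-parameter table
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=2000, gamma=0.5)

H_p = torch.zeros(64, 1, 72, 14)        # mini-batch of sparse pilot grids
H_act = torch.randn(64, 1, 72, 14)      # corresponding true channel realizations

opt.zero_grad()
loss = nn.functional.l1_loss(model(H_p), H_act)  # mean |f(Theta_CE, H_p) - H_act|
loss.backward()
opt.step()
sched.step()
loss_val = float(loss)
```

`l1_loss` with its default mean reduction matches the averaged absolute error in \eqref{l1_loss}.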
\begin{table}
\centering
\begin{tabular}{|l|l|}
\hline
{\bf Hyper-parameter} & {\bf Value} \\
\hline
Epochs & 10000 \\
\hline
Optimizer & Adam \\
\hline
Learning rate & 0.001, divide by 2 every 2000 epochs \\
\hline
Batch size & 1000 \\
\hline
Mini-batch size & 64 \\
\hline
\end{tabular}
\caption{Hyper-parameters used for training the channel estimator network.}
\label{tab:training_hp}
\end{table}
\subsection{Proposed PN compensation algorithm}
\label{subsec3b}
The proposed PN compensation algorithm begins by estimating the PN samples in the TF grid, following which the received subframe is compensated. Using the estimate ${\bf \hat{H}}_f$ obtained from the channel estimator network, the $i$th OFDM symbol (consisting of pilot and data symbols) in the received subframe can be approximated as (using \eqref{sys2})
\begin{align}
{\bf y}_{f}^{(i)} = {\bf \Psi}_f^{(i)}\text{\textcircled{$*$}}\left({\bf \hat{h}}_f^{(i)} \odot {\bf x}_f^{(i)} + {\bf n}_f^{(i)} \right).
\label{comp1}
\end{align}
Representing \eqref{comp1} in the time domain yields (the superscript $i$ is dropped for brevity)
\begin{align}
{\bf y}_{t} = \boldsymbol{\psi}_t\left({\bf \hat{h}}_t \text{\textcircled{$*$}}{\bf x}_t + {\bf n}_t\right).
\label{comp2}
\end{align}
Defining a circulant matrix ${\bf \hat{H}}_t$ as
\begin{align}
{\bf \hat{H}}_t =
\begin{bmatrix}
\hat{h}_t^{(0)} & \hat{h}_t^{(N_c-1)} & \cdots & \hat{h}_t^{(1)} \\
\hat{h}_t^{(1)} & \hat{h}_t^{(0)} & \cdots & \hat{h}_t^{(2)} \\
\vdots & \vdots & \ddots & \vdots \\
\hat{h}_t^{(N_c-1)} & \hat{h}_t^{(N_c-2)} & \cdots & \hat{h}_t^{(0)} \\
\end{bmatrix},
\label{comp3}
\end{align}
\eqref{comp2} can be equivalently written as
\begin{align}
{\bf y}_{t}
= \boldsymbol{\psi}_t\left({\bf \hat{H}}_t{\bf x}_t + {\bf n}_t\right) = \boldsymbol{\psi}_t\left({\bf \hat{H}}_t{\bf F}^{H}{\bf x}_f + {\bf n}_t\right),
\label{comp4}
\end{align}
where ${\bf F}$ represents the $N_c$-point DFT matrix and ${\bf x}_f$ is the frequency domain vector corresponding to ${\bf x}_t$. Let ${\mathcal J}$ denote the set of subcarrier indices at which pilot symbols are present in ${\bf x}_f$. For indices $j \in {\mathcal J}$, \eqref{comp4} can be written as
\begin{align}
{\bf y}_{t}^{(j \in {\mathcal J})} = \boldsymbol{\psi}_t^{(j \in {\mathcal J})}\left(\left({\bf \hat{H}}_t{\bf F}^{H}{\bf x}_f\right)^{(j \in {\mathcal J})} + {\bf n}_t^{(j \in {\mathcal J})}\right).
\label{comp5}
\end{align}
To obtain an estimate of PN samples at locations indexed by ${\mathcal J}$, the following objective function is minimized:
\begin{align}
\boldsymbol{\hat{\psi}}_t^{(j \in {\mathcal J})} = \underset{\boldsymbol{\psi}_t^{(j \in {\mathcal J})}}{\text{argmin}} \left\Vert {\bf y}_{t}^{(j \in {\mathcal J})} - \boldsymbol{\psi}_t^{(j \in {\mathcal J})}\left({\bf \hat{H}}_t{\bf F}^{H}{\bf x}_f\right)^{(j \in {\mathcal J})} \right\Vert^2.
\label{comp6}
\end{align}
Equation \eqref{comp6} is used on all OFDM symbols containing pilots in the subframe. The PN estimates at all the pilot locations are interpolated across the entire TF grid using an MMSE interpolator to obtain $\boldsymbol{\hat{\Psi}}_t$. For the $i$th received symbol ${\bf y}_t^{(i)}$, a compensated symbol ${\bf y}_t'^{(i)} = \boldsymbol{\hat{\Psi}}^{*(i)}_t{\bf y}_t^{(i)}$ is obtained, where $(\cdot)^*$ denotes conjugation. The compensated time domain symbols are converted to the frequency domain to obtain the compensated subframe ${\bf Y}_f'$. A final set of channel estimates, ${\bf \hat{H}}_f'$, is obtained from the pilot locations in ${\bf Y}_f'$ using the channel estimator network again; this is done because ${\bf \hat{H}}_f'$ is more accurate than ${\bf \hat{H}}_f$, which was obtained from ${\bf Y}_f$ while it still carried the PN-induced rotations. ${\bf \hat{H}}'_f$ is used for decoding the data symbols in ${\bf Y}'_f$.
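Because the objective in \eqref{comp6} separates over the entries indexed by ${\mathcal J}$, each PN sample admits the per-sample LS solution $\hat{\psi} = y\,s^{*}/|s|^{2}$, where $s$ is the corresponding entry of ${\bf \hat{H}}_t{\bf F}^{H}{\bf x}_f$. A NumPy sketch of this step on a noiseless toy signal follows; the dimensions, pilot spacing, and PN statistics are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N_c = 72

# small-angle PN samples, psi_t = exp(j * theta)
theta = np.deg2rad(2.78) * rng.standard_normal(N_c)
psi_t = np.exp(1j * theta)

h_t = (rng.standard_normal(N_c) + 1j * rng.standard_normal(N_c)) / np.sqrt(N_c)
H_t = np.array([np.roll(h_t, k) for k in range(N_c)]).T  # circulant matrix, eq. (comp3)
F = np.fft.fft(np.eye(N_c)) / np.sqrt(N_c)               # unitary N_c-point DFT matrix
x_f = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2), N_c)

s = H_t @ F.conj().T @ x_f       # signal part of eq. (comp4)
y_t = psi_t * s                  # noise omitted for clarity

J = np.arange(0, N_c, 6)         # assumed pilot subcarrier indices (spacing S_f = 6)
# per-sample LS solution of eq. (comp6)
psi_hat = y_t[J] * np.conj(s[J]) / np.abs(s[J]) ** 2
```

In the noiseless case the per-pilot estimate is exact; with noise, the MMSE interpolation across the TF grid described above smooths the per-pilot estimates.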
In summary, the operations in \eqref{comp1} through \eqref{comp6} are carried out using the channel estimates ${\bf \hat{H}}_f$ from the channel estimator network to estimate the PN values; the compensated subframe ${\bf Y}_f'$ is then formed using these PN estimates and fed back to the channel estimator network to obtain ${\bf \hat{H}}'_f$.
\section{Results and Discussions}
\label{results}
In this section, we present the mean square error (MSE) and BER performance of the proposed channel estimator network and the PN compensation algorithm. For all the simulations presented below, for each subframe, $N_s=14$ and $N_c=72$ as per LTE standards \cite{lte}, $N_p=48, S_f = 6,$ and $S_t=7$.
50000 realizations of the Vehicular A (VehA) channel model defined by ITU-R \cite{itu}, with six taps, a carrier frequency of $f_c=2.1$ GHz, a bandwidth of 1.6 MHz, and a user equipment (UE) speed of 50 km/h (corresponding Doppler frequency $f_D=97$ Hz), are generated. 35000 realizations are used for training the proposed channel estimator network (training data), 5000 realizations for validating the training (validation data), and the remaining 10000 realizations for testing (test data).
Each tap in the VehA model is Rayleigh distributed with the time selectivity based on Jakes model\cite{jakes}.
For all the results presented below, a single trained channel estimator network is obtained using VehA channel realizations with PN $\sim \mathcal{N}\left(0, \sigma_{\tiny \mbox{PN}}^2\right), \sigma_{\tiny \mbox{PN}} = 1.58^\circ$, and $f_D=97$ Hz as training data. We use the PyTorch machine learning library for the implementation, training, and testing of the channel estimator network, and an Nvidia RTX 3090 GPU to carry out all the simulations. PN samples are generated from their power spectral density, given by \cite{smaini}
\begin{align}
L(f_m) = \frac{B_{\text{PLL}}^2L_0}{B_{\text{PLL}}^2 + f_m^2}\left(1 + \frac{f_{\text{corner}}}{f_m}\right) + L_{\text{floor}},
\label{psd}
\end{align}
where $B_{\text{PLL}}$ is the -3 dB bandwidth of the phase locked loop (PLL), $L_0$ is the in-band phase noise level in rad$^2$/Hz (dBc/Hz), $f_m$ is the frequency offset from the carrier frequency, $f_{\text{corner}}$ is the flicker corner frequency, and $L_{\text{floor}}$ is the noise floor. For performance evaluation, we choose three sets of values for the parameters in \eqref{psd}, that correspond to $\sigma_{\tiny \mbox{PN}}=2.78^\circ, 5.46^\circ$, and $10.85^\circ$, respectively, as shown in Table \ref{tab:phase_noise}.
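For reference, \eqref{psd} can be evaluated numerically to inspect the parameter sets of Table \ref{tab:phase_noise}. A small sketch follows; the conversion of the dBc/Hz levels to linear rad$^2$/Hz units is an assumption about the intended convention.

```python
import numpy as np

def pn_psd(f_m, B_pll, L0_dbc, f_corner, L_floor_dbc):
    """Phase-noise PSD of eq. (psd), with L0 and L_floor given in dBc/Hz."""
    L0 = 10.0 ** (L0_dbc / 10.0)
    L_floor = 10.0 ** (L_floor_dbc / 10.0)
    return (B_pll**2 * L0 / (B_pll**2 + f_m**2)) * (1.0 + f_corner / f_m) + L_floor

# parameter set 1 of the phase-noise table
f = np.logspace(2, 7, 200)   # offsets from 100 Hz to 10 MHz
psd = pn_psd(f, B_pll=1e7, L0_dbc=-95, f_corner=1e3, L_floor_dbc=-150)
psd_dbc = 10.0 * np.log10(psd)
```

Close to the carrier the flicker term $f_{\text{corner}}/f_m$ lifts the PSD above $L_0$, while far from the carrier the PLL roll-off and the noise floor dominate.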
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|c|c|}
\hline
{\bf Set} & $B_{\text{PLL}}$ & $L_0$ & $f_{\text{corner}}$ & $L_{\text{floor}}$ &$\sigma_{\tiny \mbox{PN}}$ \\
& (Hz) & (dBc/Hz) & (Hz) & (dBc/Hz) & (degree)\\
\hline
1 & 10$^7$ & -95 & 10$^3$ & -150 & 2.78$^\circ$\\
\hline
2 & $4\times 10^7$ & -95 & 10$^3$ & -150 & 5.46$^\circ$ \\
\hline
3 & $4\times 10^7$ & -89 & 10$^3$ & -150 & 10.85$^\circ$\\
\hline
\end{tabular}
\caption{Phase noise PSD parameters.}
\label{tab:phase_noise}
\end{table}
\begin{figure}
\centering
\includegraphics[width=9.25cm, height=7.00cm]{Fig4.eps}
\caption{MSE performance of the proposed channel estimator network without and with the proposed PN compensation for different values of $\sigma_{\tiny \mbox{PN}}$.}
\label{fig:mse_pn}
\end{figure}
Figure \ref{fig:mse_pn} shows the MSE performance of the proposed channel estimator network as a function of pilot SNR at a Doppler frequency of $f_D=97$ Hz for the three values of $\sigma_{\tiny \mbox{PN}}$ considered. The MSE performance of the first set of channel estimates (${\bf \hat{H}}_f$) and the second set of channel estimates refined by the PN estimation and compensation algorithm (${\bf \hat{H}}_f'$) are plotted. The performance with no PN is also plotted for comparison. In addition, the MSE performance achieved by a 2D spline interpolation scheme is also shown. It is seen that the MSE performance of the interpolation scheme is poor due to the presence of PN and time selectivity of the channel. On the other hand, the proposed network is able to learn and estimate the channel much better. For example, the MSE of the first estimate ${\bf \hat{H}}_f$ itself is much better while the MSE of the second estimate ${\bf \hat{H}}_f'$ is close to the MSE with no PN. This demonstrates the generalization ability of the proposed channel estimator network, wherein the same trained network is able to perform well under different PN levels.
Next, Fig. \ref{fig:comp_trad_meth} shows the BER performance of the OFDM system as a function of $E_b/N_0$ for 4-QAM, $f_D=97$ Hz, and 30 dB pilot SNR.
The BER performance is evaluated for the three considered values of $\sigma_{\tiny \mbox{PN}}$ using the channel estimates ${\bf \hat{H}}_f$ (without PN compensation) and ${\bf \hat{H}}_f'$ (with PN compensation).
For comparison purposes, the BER performance with perfect channel state information (CSI) and no PN is also plotted.
The following observations can be made from the figure.
Without PN, the BER performance with the proposed channel estimator network is very close to that with perfect CSI. In the presence of PN, using the first channel estimate ${\bf \hat{H}}_f$, the BER floors, whereas using the refined estimate ${\bf \hat{H}}_f'$, the performance gets close to that with perfect CSI. For example, while the BER floors at $10^{-2}$ for $\sigma_{\tiny \mbox{PN}}=10.85^\circ$ when ${\bf \hat{H}}_f$ is used, it improves to about $4\times 10^{-4}$ at $E_b/N_0=30$ dB for the same $\sigma_{\tiny \mbox{PN}}$ when ${\bf \hat{H}}_f'$ is used. Note that the BER with perfect CSI and no PN at the same $E_b/N_0$ is $2.5\times 10^{-4}$.
Figure \ref{fig:comp_trad_meth} also presents a comparison of the performance of the proposed scheme with that of the PN compensation scheme (ref. scheme) proposed in \cite{sec4a}.
It is observed that, for all $\sigma_{\tiny \mbox{PN}}$ values, the BER performance of the ref. scheme is comparable with that using the channel estimates $\mathbf{\hat{H}}_f$, while the performance with channel estimates $\mathbf{\hat{H}}_f'$ in the proposed scheme is superior. This can be attributed to the sub-optimality of the iterative scheme followed in \cite{sec4a}.
Further, the transmitted subframe in \cite{sec4a} consists of an initial block-type pilot and comb-type pilots thereafter (about 30\% of the symbols are pilots), while the proposed approach uses lattice-type pilots (only 5\% of the symbols are pilots) and hence has better bandwidth efficiency.
\begin{figure}
\centering
\includegraphics[width=9.25cm, height=7.00cm]{Fig5.eps}
\caption{BER performance of the proposed channel estimator network, without and with the proposed PN compensation, compared with the PN compensation scheme in \cite{sec4a}.}
\label{fig:comp_trad_meth}
\vspace{-2mm}
\end{figure}
\begin{figure}
\begin{subfigure}{0.5\linewidth}
\centering
\includegraphics[width=4.7cm, height=4.7cm]{Fig6a.eps}
\caption{BER vs SNR}
\label{fig:comp_state_art}
\end{subfigure}
\begin{subfigure}{0.45\linewidth}
\centering
\includegraphics[width=4.7cm, height=4.7cm]{Fig6b.eps}
\caption{BER vs pilot SNR}
\label{fig:ber_vs_pil_snr}
\end{subfigure}
\caption{BER performance comparison between the proposed scheme and the PN compensation scheme in \cite{ce7}.}
\label{fig:perf_state_art}
\vspace{-2mm}
\end{figure}
\subsection{Comparison with NN-based PN compensation in \cite{ce7}}
\label{subsec4b}
Figure \ref{fig:comp_state_art} shows the performance comparison between the proposed channel estimation and PN compensation scheme and the scheme in \cite{ce7} (ref. scheme) for 16-QAM, $f_D=97$ Hz, and 30 dB pilot SNR. The PN is modelled as a Brownian motion process (as in \cite{ce7}) with PN bandwidth parameter $\beta$. The BER performance of the ref. scheme for $\beta = 10^3$ Hz is observed to floor. Further, increasing the number of iterations ($N_{\text{iter}}$) used to obtain the initial estimates in the ref. scheme improves its performance in the low and mid SNR regimes. However, the proposed scheme performs better than the ref. scheme throughout the considered SNR range. This can be attributed to the absence of pre-processing in the proposed scheme, whereas in the ref. scheme the networks are trained using estimates obtained from the first iteration of a non-linear least squares estimation algorithm. Next, for $\beta = 10^2$ Hz, the performance of both schemes is observed to be close, and close to the perfect CSI performance. We note that the ref. scheme considers the pilot arrangement of \cite{sec4a}, which has lower bandwidth efficiency than the proposed scheme. The ref. scheme is also computationally expensive: for each OFDM symbol with $N_c=160$, it involves 4 NNs with about $10^8$ floating point operations (FLOPs), while the proposed scheme's NN requires only about $3\times10^7$ FLOPs.
The BER performance as a function of pilot SNR is plotted in Fig. \ref{fig:ber_vs_pil_snr} for data SNRs of 20 and 30 dB.
As expected, BERs are high at low pilot SNRs due to the increased MSE. The gap from the respective perfect CSI performance widens when the data SNR is high: at a high data SNR (e.g., 30 dB), the channel estimation MSE dominates the effect of thermal noise, whereas the opposite holds at a low data SNR (e.g., 20 dB). It is also seen that the proposed scheme performs better than the ref. scheme in \cite{ce7} across all pilot SNRs considered.
\vspace{-2mm}
\section{Conclusions}
\label{conclusions}
We proposed a 2D CNN based learning network to estimate the doubly-selective channel coefficients of a TF grid in an OFDM system by treating the problem as an image completion problem using sparsely available data (i.e., pilot symbols). Numerical results showed that a single trained channel estimator network along with a PN compensation scheme performed well under different PN levels and Doppler frequencies, outperforming other recent schemes.
Compensation of PN considering the effects of interference is suggested as an area for future research.
\chapter{Assumptions and prerequisites}
Throughout this paper we will assume ${\cal K}$ is a homogeneous metric abstract elementary class with perturbations and complete type spaces (see [HH]) that is weakly simple and $d^p$-superstable.
We write $a,b$ etc. for finite tuples. As a shorthand $ab$ will denote the concatenated tuple of $a$ and $b$. For sets $A$ and $B$, $AB$ will denote their union.
By $\lambda ({\cal K} )$ we mean the least cardinal $\lambda$ such that
as a homogeneous AEC, ${\cal K}$ is $\lambda$-stable (i.e. $\lambda$-stable
in the sense of [HS]).
By $\kappa({\cal K})$ we denote the least cardinal $\kappa$ such that there is no strongly splitting sequence of length $\kappa$.
We say that $a$ and $b$ have the same Lascar strong types over $A$, $Lstp(a/A)=Lstp(b/A)$, if $E(a,b)$ holds for any $A$-invariant equivalence relation with a bounded number of equivalence classes.
\th 1 Fact.
Let ${\bf M}$ be strongly $\lambda$-homogeneous. For every $\kappa<\lambda$ there is a cardinal $H(\kappa)$
such that if $A$ is a set of size $\leq\kappa$ and $(a_i)_{i<H(\kappa)}\subset{\bf M}$ then there exists an $A$-indiscernible sequence $(b_i)_{i<\omega}\in{\bf M}$ such that for every $n<\omega$ there exist $i_0<\cdots<i_n<H(\kappa)$ such that
$$t^g(b_0,\dots,b_n/A)=t^g(a_{i_0},\dots,a_{i_n}/A).$$
Note that the fact ensures that there are fewer than $H(\abs{A})$ Lascar strong types over any set $A$.
In a stable homogeneous class we can define an independence notion based on strong splitting as done in [HS]:
We write $a\downarrow_AB$ if there is $C\subseteq A$ of power $<\kappa({\cal K})$ such that for all $D\supseteq A\cup B$ there is $b$ which satisfies $t^g(b/AB)=t^g(a/AB)$ such that $t^g(b/D)$ does not split strongly over $C$. For an arbitrary set $C$, $C\downarrow_AB$ means $a\downarrow_AB$ for all $a\in C$.
\th 2 Fact. (Hyttinen-Shelah [HS]) In a stable homogeneous class $\downarrow$ satisfies:
(i) If $A\subseteq A'\subseteq B'\subseteq B$ and $a\downarrow_AB$ then $a\downarrow_{A'}B'$.
(ii) If $A\subseteq B$, $a\downarrow_AA$ and $t^g(a/A)$ is unbounded, then there is $b$ such that $b\downarrow_AB$ and $Lstp(b/A)=Lstp(a/A)$.
(iii) If $A\subseteq B$, $a\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}_AB$ and $a\downarrow_AA$ and $t^g(a/A)$ is unbounded then there is some finite $B'\subseteq B$ such that $a\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}_AB'$.
(iv) For all $a$, $b$ and $A$, $b\downarrow_AA$ and $a\downarrow_Ab$ implies $b\downarrow_Aa$. By finite character this generalises to: if $A\downarrow_BC$ and $C\downarrow_BB$ then $C\downarrow_BA$.
(v) If $b\downarrow_AD$ and $c\downarrow_{Ab}D$ then $bc\downarrow_AD$.
(vi) If $a\downarrow_Ac$, $b\downarrow_Ac$ and $Lstp(a/A)=Lstp(b/A)$ then $t^g(a/Ac)=t^g(b/Ac)$.
\th 3 Lemma. If $C\downarrow_AB$ and $D\downarrow_{AC}B$ then $CD\downarrow_AB$.
{\bf Proof}.\enspace By strong extension and stationarity of Lascar strong types we may assume $B$ is $\lambda({\cal K})$-saturated.
We may also assume $A$ and $C$ are of power $<\kappa({\cal K})$.
Now if $CD\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}_AB$ there are finite $c\in C$ and $d\in D$ such that $cd\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}_AB$. So there is an $A$-indiscernible sequence $I=(a_i)_{i<\omega}\subset B$ such that $t^g(cda_0/A)\neq t^g(cda_1/A)$. If $I$ is $AC$-indiscernible, this contradicts $D\downarrow_{AC}B$. So $I$ cannot be indiscernible over $AC$ but then (by re-enumerating) for some $n$ and $c\in C$ $t^g(c,a_0,\dots,a_{n-1}/A)\neq t^g(c,a_n,\dots,a_{2n-1}/A)$ giving an $A$-indiscernible sequence contradicting $C\downarrow_AB$. $\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3$
\th 4 Corollary. If $A\subseteq B$, $a\downarrow_AB$, $a\downarrow_BC$, $C\downarrow_BB$ and $B\downarrow_AA$ then $a\downarrow_AC$.
{\bf Proof}.\enspace By symmetry $B\downarrow_Aa$ and $C\downarrow_Ba$ and thus by Lemma 3
$BC\downarrow_Aa$. By symmetry we then have $a\downarrow_ABC$. $\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3$
In this context we define weak simplicity as $a\downarrow_AA$ for all $a$ and finite $A$. Thus if ${\cal K}$ is stable and weakly simple then $\downarrow$ satisfies monotonicity and stationarity of strong types, and over finite sets in addition transitivity, symmetry and strong extension.
As in [HK1], we write $Lstp^{w}(a/A)=Lstp^{w}(b/A)$
if for all finite $B\subseteq A$,
$Lstp(a/B)=Lstp(b/B)$. Following [HK2], the types
$Lstp^{w}$ are called Lascar types.
By homogeneity,
if $Lstp^{w}(a/A)=Lstp^{w}(b/A)$ then
$t^g(a/A)=t^g(b/A)$.
The following lemma is needed because the $d^{p}$-distance of
Galois types we use need not be a metric, see [HH].
If there are no perturbations, i.e.
$F_{\epsilon}=F_{0}$ for all $\epsilon >0$, then it is a metric and we can
choose $n(\epsilon )=\epsilon /n$ and $\delta_n=2^{-n}$.
\th 5 Lemma.
(i) For all $n>1$ and $\epsilon >0$, there is $n(\epsilon )>0$ such that
for all $a_{i}$, $i\le n$, if for all $i<n$
$d^{p}(t^g(a_{i}/\emptyset ),t^g(a_{i+1}/\emptyset ))\le n(\epsilon )$,
then $d^{p}(t^g(a_{0}/\emptyset ),t^g(a_{n}/\emptyset ))\le\epsilon$.
(ii) There are $\delta_n>0$ such that if $d^p(t^g(a_n/\emptyset),t^g(a_{n+1}/\emptyset))\le\delta_n$ for all $n<\omega$ then $(t^g(a_i/\emptyset))$ is a Cauchy sequence (wrt $d^p$).
{\bf Proof}.\enspace Immediate by the definitions, see [HH]. $\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3$
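In the perturbation-free case, where $d^{p}$ is a genuine metric, the choices stated above are justified directly by the triangle inequality:
$$d^{p}(t^g(a_{0}/\emptyset ),t^g(a_{n}/\emptyset ))\le\sum_{i<n}d^{p}(t^g(a_{i}/\emptyset ),t^g(a_{i+1}/\emptyset ))\le n\cdot {\epsilon\over n}=\epsilon ,$$
and for (ii) the tail bound $\sum_{n\ge m}\delta_{n}=\sum_{n\ge m}2^{-n}=2^{-m+1}\to 0$ shows that the sequence is $d^{p}$-Cauchy.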
Note also the following:
\th 6 Fact.
If ${\cal K}$ is homogeneous, satisfies the perturbation property and has complete type spaces (as defined in [HH]) then $d^{p}$-Cauchy sequences over any parameter set converge.
\chapter{Measuring independence}
In this section we define a distance-like relation $d^p_a$ on the space of Lascar types. As for $d^p$ in [HH] it is not exactly a metric but defines a metrisable topology. Using $d^p_a$ we define $\epsilon$-independence (Definition 10) and explore its properties.
\th 7 Definition.
(i) For finite $A$ we define
$$d^{p}_{a}(Lstp(a/A),Lstp(b/A))=\sup\{d^{p}(t^g(a/B),t^g(b/B)):A\subseteq B \, {\sl finite},\, B\downarrow_Aab\}.$$
(ii) For any set $B$ we then define
$$
d^{p}_{a}(Lstp^w(a/B),Lstp^w(b/B)) = \sup\{d^{p}_{a}(Lstp(a/A),Lstp(b/A)):A\subseteq B,\, A\, {\sl finite}\}.$$
\th 8 Lemma.
(i) For $A$ finite, the definitions 7(i) and 7(ii) give the same result.
(ii) If $Lstp^{w}(a/B)=Lstp^{w}(a'/B)$
and $Lstp^{w}(b/B)=Lstp^{w}(b'/B)$, then
$$d^{p}_{a}(Lstp^{w}(a/B),Lstp^{w}(b/B))=
d^{p}_{a}(Lstp^{w}(a'/B),Lstp^{w}(b'/B)).$$
(iii) If $A$ is finite and $ab\downarrow_AB$ then
$$
d^p_a(Lstp^w(a/B),Lstp^w(b/B))= d^p_a(Lstp(a/A),Lstp(b/A)).
$$
(iv) If $d^{p}_{a}(Lstp^{w}(a/B),Lstp^{w}(b/B))=0$,
then $Lstp^{w}(a/B)=Lstp^{w}(b/B)$.
{\bf Proof}.\enspace (i)--(iii): Immediate by the definitions.
(iv): It suffices to show that for finite $A$,
if $d^{p}_{a}(Lstp(a/A),Lstp(b/A))=0$,
then $Lstp(a/A)=Lstp(b/A)$. For this choose
$c$ so that $Lstp(c/A)=Lstp(a/A)$ and
$c\downarrow_{A}ab$. Then by the assumption,
$d^{p}(t^g(a/Ac),t^g(b/Ac))=0$ and thus by the perturbation
property, $t^g(a/Ac)=t^g(b/Ac)$ and thus
$Lstp(a/A)=Lstp(b/A)$. $\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3$
Although $d^{p}_{a}$ may not satisfy the triangle
inequality, it gives rise, as $d^{p}$ does in [HH],
to a metrisable topology on the set of
all Lascar types over any fixed set $B$. In fact we have the following analogue of Lemma 5 (i):
\th 9 Lemma. For all $n>1$ and $\epsilon >0$, for all $a_{i}$, $i\le n$, and all $A$ if for all $i<n$ $d^{p}_{a}(Lstp^w(a_{i}/A ),Lstp^w(a_{i+1}/A))\le n(\epsilon )$, where $n(\epsilon)$ is as in Lemma 5 (i),
then $d^{p}_{a}(Lstp^w(a_{0}/A ),Lstp^w(a_{n}/A))\le\epsilon$.
{\bf Proof}.\enspace It suffices to prove this when $A$ is finite. For this assume that $d^{p}_{a}(Lstp(a_{i}/A ),Lstp(a_{i+1}/A))\le n(\epsilon )$ and let $D\supseteq A$ be finite and such that $D\downarrow_Aa_0a_n$. Choose $D'\supseteq A$ satisfying $Lstp(D'/Aa_0a_n)=Lstp(D/Aa_0a_n)$ and $D'\downarrow_A\bigcup_{i\leq n}a_i$. Then by assumption $d^p(t^g(a_i/D'),t^g(a_{i+1}/D'))\leq n(\epsilon)$ and thus by Lemma 5 (i), $d^p(t^g(a_0/D'),t^g(a_n/D'))=d^p(t^g(a_0D'/\emptyset),t^g(a_nD'/\emptyset))\leq\epsilon$. As $t^g(D'/Aa_0a_n)=t^g(D/Aa_0a_n)$ we are done.
$\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3$
We are ready to define the main notion of this paper:
\th 10 Definition. For $\epsilon >0$, we write
$a\downarrow^{\epsilon}_{A}B$ if for all finite $C\subseteq A$,
there is finite $C\subseteq D\subseteq A$ and
$b$ such that
$Lstp(b/D)=Lstp(a/D)$, $b\downarrow_{D}AB$ and
$d^{p}_{a}(Lstp^{w}(b/AB),Lstp^{w}(a/AB))\le\epsilon$.
By $a\downarrow^{0}_{A}B$ we mean that $a\downarrow^{\epsilon}_{A}B$ holds
for all $\epsilon >0$.
This independence notion has some immediate properties. Note, however, that we only have partial monotonicity.
\th 11 Lemma. Suppose $\epsilon >0$.
(i) If $A\subseteq C\subseteq D$
and $a\downarrow^{\epsilon}_{A}D$, then $a\downarrow^{\epsilon}_{A}C$.
(ii) If $ab\downarrow^{\epsilon}_{A}B$, then $a\downarrow^{\epsilon}_{A}B$.
(iii) If $a\downarrow^{\epsilon}_{A}B$, $A'\subseteq A$ is finite, then there is some finite $A''$ such that $A'\subseteq A''\subseteq A$ and $a\downarrow^{\epsilon}_{A''}B$.
(iv) If $A$ is finite and
$a\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}^{\epsilon}_{A}B$, then there is finite $C\subseteq B$
such that $a\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}^{\epsilon}_{A}C$.
(v) If $\epsilon >\delta >0$, then
$a\downarrow^{\delta}_{A}B$ implies $a\downarrow^{\epsilon}_{A}B$.
(vi) If $A$ is finite and $a\downarrow_{A}B$, then $a\downarrow^{0}_{A}B$.
(vii) If $A$ is finite, $a\downarrow_AB$ and $a\downarrow^\epsilon_{AB}C$ then $a\downarrow^\epsilon_ABC$.
(viii) If $A$ is finite, $a\downarrow_AB$ and $a\downarrow^\epsilon_ABC$ then $a\downarrow^\epsilon_{AB}C$.
{\bf Proof}.\enspace Immediate by the definitions.
$\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3$
$d^p$-superstability ensures that $\downarrow^\epsilon$ has local character.
\th 12 Lemma.
(i) For all $\epsilon >0$, $a$ and $A$ there is finite $B\subseteq A$
such that $a\downarrow^{\epsilon}_{B}A$.
(ii) For all $a$ and $A$, $a\downarrow^{0}_{A}A$.
{\bf Proof}.\enspace (i): By Lemma 11, it is enough to show that
there are no $a$ and finite $A_{i}$, $i<\omega$,
such that for all $i<\omega$, $A_{i}\subseteq A_{i+1}$
and $a\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}^{\epsilon}_{A_{i}}A_{i+1}$. For a contradiction,
suppose such $a$ and $A_{i}$, $i<\omega$, exist.
Let $\kappa >\lambda ({\cal K} )$ be a cardinal of cofinality $\omega$
and such that ${\cal K}$ is $\kappa$-$d^{p}$-stable.
We define a new increasing sequence of finite sets $A'_i$ such
that every $b\models t^g(a/A'_i)$ with
$b\downarrow_{A'_i}A'_{i+1}$ satisfies $d^p(t^g(b/A'_{i+1}),t^g(a/A'_{i+1}))>\epsilon$:
For each $i<\omega$ let $c_i\models Lstp(a/A_i)$ with $c_i\downarrow_{A_i}a$.
Now let $A'_0=A_0c_0$. When $A'_i$ has been defined
such that $b\models t^g(a/A'_i)$ implies $b\models Lstp(a/A_i)$
and $a\downarrow_{A_i}A'_i$ let $b_i\models Lstp(a/A_i)$, $b_i\downarrow_{A_i}A_{i+1}$.
As $a\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}^\epsilon_{A_i}A_{i+1}$, we have $d^p_a(Lstp(b_i/A_{i+1}),Lstp(a/A_{i+1}))>
\epsilon$, i.e. there is some finite $B_i\supseteq A_{i+1}$ with $B_i\downarrow_{A_{i+1}}ab_i$ such that $d^p(t^g(a/B_i),t^g(b_i/B_i))>\epsilon$ and we may assume $B_i\downarrow_{A_{i+1}}ab_ic_{i+1}$. Then define $A'_{i+1}=A'_iB_ic_{i+1}$.
Now if $b\models t^g(a/A'_i)$ and $b\downarrow_{A'_i}A'_{i+1}$, these imply $b\models Lstp(a/A_i)$ and $b\downarrow_{A_i}B_i$. So $b\models t^g(b_i/B_i)$ and thus $d^p(t^g(a/B_i),t^g(b/B_i))>\epsilon$.
Then we can proceed with the usual construction from [Sh]:
For all $\eta\in\kappa^\omega$ and all $n<\omega$, choose $A_{\eta\restriction n}$ and $a_\eta$ so that
(a) for all $\eta\in \kappa^{\omega}$ there is an automorphism $F_{\eta}$
of the monster model such that $F_{\eta}(a_{\eta})=a$,
for all $i<\omega$, $F_{\eta}(A_{\eta\restriction i})=A'_{i}$ and if
$\xi\in \kappa^{\omega}$ and
$\eta\restriction i=\xi\restriction i$, then $F_{\eta}\restriction A_{\eta\restriction i}=
F_{\xi}\restriction A_{\eta\restriction i}$,
(b) for all $\eta\in \kappa^{\omega}$ and $i<\omega$,
$$a_{\eta}\downarrow_{A_{\eta\restriction i}}\cup\{A_{\xi}\vert\ \xi\in \kappa^{<\omega},
\ \eta\restriction i+1\not\subseteq\xi\} .$$
Let $D =\cup_{\eta\in \kappa^{<\omega}}A_{\eta}$. Now
clearly $d^{p}(t^g(a_{\eta}/D ),t^g(a_{\xi}/D ))>\epsilon$
for distinct $\eta ,\xi\in \kappa^{\omega}$. This contradicts
the choice of $\kappa$.
(ii): Note that if $a\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}^{\epsilon}_{A}A$ then there is a finite $A'\subseteq A$ such that $a\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}^{\epsilon}_{B}A$ for all finite $A'\subseteq B\subseteq A$. Then proceed as in (i). $\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3$
\th 13 Corollary. For all $A$ and $a$, there is countable $B\subseteq A$ such that $a\downarrow^0_BA$.
{\bf Proof}.\enspace We define $B$ by induction. Let $B_0=\emptyset$. When a finite $B_n$ has been defined we define a finite $B_{n+1}\supseteq B_n$ such that $a\downarrow^{1/(n+1)}_{B_{n+1}}A$: By Lemma 12 (ii) $a\downarrow^0_AA$ and thus by Lemma 11 (iii) there is some finite $B_{n+1}$ with $B_n\subseteq B_{n+1}\subseteq A$ such that $a\downarrow^{1/(n+1)}_{B_{n+1}}A$. Finally let $B=\bigcup_{n<\omega}B_n$. Then for any $\epsilon>0$ and finite $C\subseteq B$ there is $n>1/\epsilon$ such that $C\subseteq B_n$ and $B_n$ witnesses (as $D$ in Definition
10) $a\downarrow^\epsilon_BA$. $\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3$
\th 14 Corollary. Suppose the class ${\cal K}$ is stable and simple. Then the following are equivalent.
(i) ${\cal K}$ is $d^p$-superstable.
(ii) For no $\epsilon>0$ is there an infinite $\downarrow^\epsilon$-forking sequence.
(iii) For all $a$, $A$ and $\epsilon>0$, there is a finite $B\subseteq A$ such that $a\downarrow^\epsilon_BA$.
{\bf Proof}.\enspace (i)$\Rightarrow$(ii)$\Rightarrow$(iii) follow by Lemmas 12 and 11
so we prove (iii)$\Rightarrow$(i).
Let $H(\aleph_0)$ be as in Fact 1. We claim that ${\cal K}$ is $d^p$-stable in every
$\kappa\geq H(\aleph_0)$. In fact, the density character of the set of Lascar types ($Lstp^w$) over a set $A$ (with respect to $d^p_a$) is at most $\abs{A}+H(\aleph_0)$.
So let $\abs{A}=\kappa\geq H(\aleph_0)$. If the density character of the set
of Lascar types over $A$ is greater than $\kappa$ there are some $\epsilon>0$
and tuples $a_i$ for $i<\kappa^+$ such that
$d^p_a(Lstp^w(a_i/A),Lstp^w(a_j/A))>\epsilon$ for all $i\neq j$.
Let $\delta=2(\epsilon)$. By (iii) there are finite sets $A_i\subset A$ such
that $a_i\downarrow^\delta_{A_i}A$. Since there are only $\kappa$ finite
subsets of $A$, $\kappa^+$ many of the $A_i$s are the same set $A'$.
Further, since there are less than $H(\aleph_0)$ Lascar
types over $A'$, for $\kappa^+$ many indices the Lascar strong type
$Lstp(a_i/A')$ is the same. Choose $a$ such that $Lstp(a/A')=Lstp(a_i/A')$ for these indices and $a\downarrow_{A'}A$.
Then for $\kappa^+$ many indices $d^p_a(Lstp^w(a/A),Lstp^w(a_i/A))\leq\delta$
and thus by Lemma 5
$d^p_a(Lstp^w(a_i/A),Lstp^w(a_j/A))\leq\epsilon$ for $\kappa^+$ many $i,j<\kappa^+$, a contradiction. $\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3$
\th 15 Lemma. Suppose $A$ and $B$ are finite, $\epsilon >0$, $a\downarrow^{\epsilon}_{A}B$ and $a\downarrow_{AB}C$.
Then $a\downarrow^{\epsilon}_{A}BC$.
{\bf Proof}.\enspace Clearly it is enough to prove this for $A$, $B$ and $C$ such that $A\subseteq B\subseteq C$ and $C$ is finite. For this let $b$ be such that
$Lstp(b/A)=Lstp(a/A)$ and $b\downarrow_{A}C$.
We need to prove that
$d^{p}_{a}(Lstp(b/C),Lstp(a/C))\le\epsilon$. By Lemma 8(ii)
$d^{p}_{a}(Lstp(c/C),Lstp(a/C))=d^{p}_{a}(Lstp(b/C),Lstp(a/C))$
for all $c$ such that $Lstp(c/C)=Lstp(b/C)$
and thus we may assume that
$b\downarrow_{A}Ca$. Let $D\supseteq C$ be finite
and such that $D\downarrow_{C}ab$.
We need to show that $d^{p}(t^g(a/D),t^g(b/D))\le\epsilon$.
But now $a\downarrow^{\epsilon}_{A}B$ and thus
$d^{p}_{a}(Lstp(b/B),Lstp(a/B))\le\epsilon$.
By transitivity
$D\downarrow_{B}ab$, so by Lemma 8(iii) $d^{p}(t^g(a/D),t^g(b/D))\le\epsilon$. $\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3$
\th 16 Corollary.
If $A$ and $B$ are finite then for all $\epsilon >0$, if $a\downarrow^{\epsilon}_{A}B$,
then for all $C$ there is $b$ such that
$b\downarrow^{\epsilon}_{A}BC$ and $Lstp(b/AB)=Lstp(a/AB)$.
{\bf Proof}.\enspace Immediate by Lemmas 12 and 15. $\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3$
\th 17 Lemma. For all $\epsilon>0$ there exists a $\delta>0$ such that if $A\subseteq B\subseteq C$, $a\downarrow^{\delta}_AB$ and $a\downarrow^{\delta}_BC$, then $a\downarrow^{\epsilon}_AC$. In particular $\downarrow^0$ satisfies transitivity.
{\bf Proof}.\enspace Let $\delta=2(\epsilon)$. By Lemma 11 we may assume $A\subseteq B\subseteq C$ are finite. Let $c$ be such that $Lstp(c/A)=Lstp(a/A)$ and $c\downarrow_AC$. We wish to show that $d^p_a(Lstp(a/C),Lstp(c/C))\leq\epsilon$.
Choose $b$ such that $Lstp(b/B)=Lstp(a/B)$ and $b\downarrow_BC$. As $a\downarrow^{\delta}_BC$, we must have $d^p_a(Lstp(a/C),Lstp(b/C))\leq\delta$. By Lemma 15 $b\downarrow^{\delta}_AC$, and thus $d^p_a(Lstp(b/C),Lstp(c/C))\leq\delta$. But by Lemma 9
$d^p_a(Lstp(a/C),Lstp(c/C))\leq \epsilon$ proving $a\downarrow^{\epsilon}_AC$. $\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3$
\th 18 Lemma. Suppose $Lstp^w(a/A)=Lstp^w(b/A)$, $a\downarrow^0_AB$ and $b\downarrow^0_AB$. Then $Lstp^w(a/B)=Lstp^w(b/B)$.
{\bf Proof}.\enspace Let $\epsilon>0$. We show that $d^p_a(Lstp^w(a/B),Lstp(b/B))\leq\epsilon$. Let $\delta\le2(\epsilon)$ and $\delta'\leq 2(\delta)$. By Lemma 12 let $C\subseteq A$ be finite such that $ab\downarrow^{\delta'}_CA$. Then $a\downarrow^{\delta'}_CA$ and $a\downarrow^0_AB$ so by Lemma 17
$a\downarrow^\delta_CB$. Similarly $b\downarrow^\delta_CB$. By assumption $Lstp(a/C)=Lstp(b/C)$ and if we choose $c$ such that $Lstp(c/C)=Lstp(a/C)$ and $c\downarrow_CB$, we have $d^p_a(Lstp^w(a/B),Lstp^w(c/B))\leq\delta$ and $d^p_a(Lstp^w(b/B),Lstp^w(c/B))\leq\delta$. By Lemma 9
$d^p_a(Lstp^w(a/B),Lstp^w(b/B))\leq\epsilon$. $\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3$
\th 19 Lemma.
For any $a$ and $B$ and any countable $A\subseteq B$ there exists some $a'$ satisfying $Lstp^w(a'/A)=Lstp^w(a/A)$ and $a'\downarrow^{0}_{A}B$.
{\bf Proof}.\enspace We may assume $B$ is $\lambda ({\cal K})$-saturated.
For each $i<\omega$, let $\delta_i$ be as in Lemma 5 (ii). By Lemmas 12 and 11
we can find an increasing sequence of finite sets $A_i$ such that
$a\downarrow^{\delta_i}_{A_i}A$ and $\bigcup_{i<\omega}A_i=A$. Further choose
$a_i$ such that $Lstp(a_i/A_i)=Lstp(a/A_i)$ and $a_i\downarrow_{A_i}B$.
By Lemma 15, $a_{i+1}\downarrow^{\delta_i}_{A_i}B$ and thus
$d^{p}_{a}(Lstp(a_{i+1}/B),Lstp(a_i/B))\leq\delta_i$ which implies
$d^{p}(t^g(a_{i+1}/B),t^g(a_i/B))\leq\delta_i$. Now the types $t^g(a_i/B)$
form a $d^{p}$-Cauchy
sequence and thus have a limit $a'$, i.e., for any $\epsilon>0$ there is $i<\omega$ such that $d^{p}(t^g(a_i/B),t^g(a'/B))<\epsilon$ and, as $B$ was $\lambda ({\cal K})$-saturated, $d^{p}_{a}(Lstp^w(a_i/B),Lstp^w(a'/B))<\epsilon$.
If $A'\subset A$ is finite and $\epsilon>0$ there is $n<\omega$ such that $A'\subseteq A_n$ and $d^p_a(Lstp^w(a'/B),Lstp^w(a_n/B))<\epsilon$ and as $a_n\models Lstp(a/A_n)$,
$d^p_a(Lstp(a'/A'),Lstp(a/A'))<\epsilon$. As $\epsilon>0$ was arbitrary we must have $Lstp(a'/A')=Lstp(a/A')$ and this must hold for all finite $A'\subset A$. So $Lstp^w(a'/A)=Lstp^w(a/A)$. Finally $a'\downarrow^0_AB$ is witnessed by the pairs $(a_n,A_n)$. $\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3$
\th 20 Corollary. If $Lstp^{w}(a/A)=Lstp^{w}(b/A)$,
then $Lstp(a/A)=Lstp(b/A)$.
{\bf Proof}.\enspace For finite $A$ the claim is trivial,
so let $A$ be infinite. By Corollary 13 there is a countable
$B\subseteq A$ such that $ab\downarrow^0_BA$. Let ${\cal A}\supset A$ be
$\lambda({\cal K})$-saturated. By Lemma 19 choose $a'$ and $b'$ such that
$Lstp^w(a'b'/B)=Lstp^w(ab/B)$ and $a'b'\downarrow^0_B{\cal A}$. By Lemma 18,
$Lstp^w(a'b'/A)=Lstp^w(ab/A)$ and thus, by homogeneity, they
have the same Galois-type. So there exists an automorphism $F$
such that $F\restriction A=id$ and $F(a'b')=ab$. Then $ab\downarrow^0_BF({\cal A})$ so
by Lemma 18, $Lstp^w(a/F({\cal A}))=
Lstp^w(b/F({\cal A}))$. As $F({\cal A})\supseteq A$ and $F({\cal A})$ is $\lambda({\cal K})$-saturated, $Lstp(a/A)=Lstp(b/A)$. $\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3$
\th 21 Lemma. Let $A$ be finite or countable. Then $a\downarrow^0_AB$ if and only if $a\downarrow_AB$.
{\bf Proof}.\enspace First assume $A$ is finite. Then the direction from right to left is Lemma 11(vi). To prove the claim from left to right, let $a\downarrow^0_AB$ and $b$ be such that $Lstp(b/A)=Lstp(a/A)$ and $b\downarrow_AB$. By Lemma 11(vi) $b\downarrow^0_AB$ so by Lemma 18 and Corollary 20, $Lstp(a/B)=Lstp(b/B)$ and thus $a\downarrow_AB$.
Then assume $A$ is countable and $a\downarrow^0_AB$. By Lemmas 19 and 18 we may assume $B$ is $\lambda({\cal K})$-saturated and $B\supset A$. Now if $a\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}_AB$, $t^g(a/B)$ splits strongly over $A$.
So there are $b,c\in B$, some $\epsilon>0$ and finite $A'\subset A$
satisfying $Lstp(b/A)=Lstp(c/A)$ but $d^p(t^g(b/A'a),t^g(c/A'a))>\epsilon$.
We claim that then $a\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}^{2(\epsilon)}_AB$, namely for any finite
$A''$ with $A'\subseteq A''\subset A$ if $Lstp(a'/A'')=Lstp(a/A'')$
and $a'\downarrow_{A''}B$ we must have $d^p_a(Lstp^w(a'/B),Lstp^w(a/B))>2(\epsilon)$.
Otherwise we would have $d^p(t^g(bA'a/\emptyset),t^g(bA'a'/\emptyset))\leq2(\epsilon)$
and $d^p(t^g(cA'a/\emptyset),t^g(cA'a'/\emptyset))\leq2(\epsilon)$ and by
stationarity $t^g(bA'a'/\emptyset)=t^g(cA'a'/\emptyset)$,
adding up to $d^p(t^g(bA'a/\emptyset),t^g(cA'a/\emptyset))\leq\epsilon$, a contradiction.
For the other direction assume $a\downarrow_AB$. By Lemma 19 and Corollary 20 choose $a'$ such that $Lstp(a'/A)=Lstp(a/A)$ and $a'\downarrow^0_AB$. By the previous direction $a'\downarrow_AB$ and thus by stationarity, $t^g(a'/B)=t^g(a/B)$, i.e. $a\downarrow^0_AB$. $\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3$
Compare the following with the result from [HS]
for stable homogeneous AEC's
that
there is
$\kappa ({\cal K} )<\beth_{(2^{LS({\cal K} )})^{+}}$ such that for
all $a$ and $\lambda ({\cal K} )$-saturated ${\cal A}$ there is
$A\subseteq{\cal A}$ of power $<\kappa ({\cal K} )$ such that
$a\downarrow_{A}{\cal A}$. Even in the first-order case $\kappa ({\cal K} )$
cannot be chosen to be smaller than $LS({\cal K} )^{+}$.
\th 22 Corollary. For all $A$ and $a$, there is a countable $B\subseteq A$
such that $a\downarrow_{B}A$.
{\bf Proof}.\enspace Follows from Corollary 13 and Lemma 21. $\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3$
\th 23 Corollary. ${\cal K}$ is simple, i.e. $a\downarrow_AA$ holds for any $a$ and $A$ (giving us transitivity, symmetry and strong extension over any set).
{\bf Proof}.\enspace Follows from Corollary 22 and monotonicity of $\downarrow$. $\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3$
Note that weak simplicity does not in general imply simplicity. An example of a class that is homogeneous, stable and weakly simple but not simple can be constructed by modifying an example by Shelah in [HL] showing that $\omega$-stability does not imply simplicity in the setting of homogeneous models.
The language contains a binary relation symbol $E_i$ for each $i<\omega+\omega$. We let our monster model ${\bf M}$ consist of functions $f:\omega+\omega\to\kappa$ such that for some $i<\omega+\omega$, $f(j)=0$ for all $j>i$. On this model we let $E_i$ be an equivalence relation such that $(f,g)\in E_i$ if
(a) $i<\omega$ and $f\restriction i+1=g\restriction i+1$ or
(b) $i\geq\omega$, $f\restriction\omega=g\restriction\omega$ and $f(j)=g(j)$ for all $j>i$.
Then the class consisting of elementary submodels of ${\bf M}$ is homogeneous
and stable. It is not simple: let, for $n<\omega$, $f_n$ be such
that $f_n(i)=1$ if $i\leq n$ and $f_n(i)=0$ otherwise and define
$A=\{f_n:n<\omega\}$. Further let $f$ be such that $f(i)=1$ if $i<\omega$
and $f(i)=0$ otherwise. Then $t^g(f/A)$ has no free extension so $f\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}_AA$.
However, this is the only way we do not get free extensions, so the
class is weakly simple.
\th 24 Lemma.
For every $\epsilon>0$ there exists some $\delta>0$ such that if $d^p_a(Lstp^w(a/A),Lstp^w(b/A))\leq\delta$ and $ab\downarrow^0_AB$, then $d^p_a(Lstp^w(a/B),Lstp^w(b/B))\leq\epsilon$.
{\bf Proof}.\enspace Let $\delta=3(\epsilon)$. First note that by transitivity of
$\downarrow^0$ we may assume $A$ to be countable. Let $B'\subset B$
be finite. We need to show $d^p_a(Lstp(a/B'),Lstp(b/B'))\leq\epsilon$.
For this let $D\supset B'$ be finite and such that $D\downarrow_{B'}ab$.
We may assume $D\downarrow_{B'}abA$ and thus $ab\downarrow_{B'A}D$. Now by Lemma 21
$ab\downarrow^0_{B'A}D$ and thus by Lemma 17 $ab\downarrow^0_AB'D$. Now let $A'\subset A$
be finite and such that $ab\downarrow^{\delta}_{A'}D$ and choose $a'b'$ satisfying
$Lstp(a'b'/A')=Lstp(ab/A')$ and $a'b'\downarrow_{A'}D$.
Then $d^p_a(Lstp(a/D),Lstp(a'/D))\leq\delta$ and $d^p_a(Lstp(b'/D),Lstp(b/D))\leq\delta$ and by Lemma 8 (iii) $d^p_a(Lstp(a'/D),Lstp(b'/D))\leq\delta$. This sums up to $d^p(t^g(a/D),t^g(b/D))\leq\epsilon$. $\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3$
\th 25 Lemma. For every $\epsilon>0$ there is a $\delta>0$ such that if $a\downarrow^{\delta}_AB$ and $a\downarrow_{AB}C$ then $a\downarrow^{\epsilon}_AC$.
{\bf Proof}.\enspace Let $\delta$ be given by Lemma 24 and assume $a\downarrow^\delta_AB$ and
$a\downarrow_{AB}C$. Note that we may assume $A\subseteq B\subseteq C$ and
by Corollary 22 and transitivity of $\downarrow$ we may assume $B$ is countable.
Now if $a\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}^\epsilon_AC$, then there is some finite $A'\subseteq A$ such
that $a\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}^\epsilon_{A^+}C$ for all finite $A^+$ with $A'\subseteq A^+\subseteq A$.
As $a\downarrow^\delta_AB$, by Lemma 11, there is some finite $A''$ with
$A'\subseteq A''\subseteq A$ such that $a\downarrow^\delta_{A''}B$ and we still have
$a\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}^\epsilon_{A''}C$. So
if there is a counterexample to the claim, we may find one with $A$ finite, $B$ at most countable and $A\subseteq B\subseteq C$ so it is enough to prove the lemma for such sets.
To prove $a\downarrow^\epsilon_AC$, let $b$ be such that $Lstp(b/A)=Lstp(a/A)$ and $b\downarrow_AC$. We may assume that $b\downarrow_ACa$. As $a\downarrow^\delta_AB$, we have $d^p_a(Lstp^w(a/B),Lstp^w(b/B))\leq\delta$. Also now $ab\downarrow_BC$ so by Lemma 21 $ab\downarrow^0_BC$ and thus by Lemma 24 $d^p_a(Lstp^w(a/C),Lstp^w(b/C))\leq\epsilon$. $\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3$
\th 26 Corollary. For every $\epsilon>0$ there is a $\delta>0$ such that if $a\downarrow^\delta_AB$ then for all $C$ there is $b$ satisfying $Lstp(b/AB)=Lstp(a/AB)$ and $b\downarrow^\epsilon_ABC$.
{\bf Proof}.\enspace Follows from Lemma 25
by taking $b\models Lstp(a/AB)$ satisfying $b\downarrow_{AB}C$. $\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3$
\chapter{Lascar $\epsilon$-splitting}
In this section we define and study Lascar $\epsilon$-splitting. Via a characterisation of $\downarrow^\epsilon$ using Lascar splitting we can finally prove monotonicity of $\downarrow^0$ and show that $\downarrow$ and $\downarrow^0$ are equal over all sets.
\th 27 Definition.
(i) If $A$ is finite and $A\subseteq B$, we say that $t^g(a/B)$ Lascar $\epsilon$-splits over $A$ if for all $\delta >0$, there are $b,c\in B$
such that $d^{p}_{a}(Lstp(b/A),Lstp(c/A))<\delta$ but
$d^{p}(t^g(ab/A),t^g(ac/A))>\epsilon$.
(ii) We say that $t^g(a/B)$ locally Lascar $\epsilon$-splits over $A\subseteq B$ if it Lascar $\epsilon$-splits over every finite $A'\subseteq A$. (Note that for finite $A$ this is equivalent to (i).)
\th 28 Lemma. For all $\epsilon >0$, there is $\delta >0$ such that
if $a\downarrow^{\delta}_{A}B$, $A\subseteq B$, then
$t^g(a/B)$ does not locally Lascar $\epsilon$-split over $A$.
{\bf Proof}.\enspace Let $\delta =3(\epsilon )$, let $A'\subseteq A$ be finite such that $a\downarrow^{\delta}_{A'}B$, and let $a'$ be such that $Lstp(a'/A')=Lstp(a/A')$
and $a'\downarrow_{A'}B$. Now let $b,c\in B$ be such that
$d^{p}_{a}(Lstp(b/A'),Lstp(c/A'))<\delta$. It suffices to show that
$d^{p}(t^g(ab/A'),t^g(ac/A'))\le\epsilon$.
Since $a\downarrow_{A'}^{\delta}B$,
$d^{p}(t^g(ab/A'),t^g(a'b/A'))$, $d^{p}(t^g(ac/A'),t^g(a'c/A'))<\delta$.
By the choice of $b$ and $c$, and by Lemma 8 (iii), $d^{p}(t^g(a'b/A'),t^g(a'c/A'))<\delta$. As $\delta=3(\epsilon)$, these three distances sum up to $d^{p}(t^g(ab/A'),t^g(ac/A'))\le\epsilon$. $\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3$
\th 29 Theorem. For $A\subseteq B$ the following are equivalent:
(i) $a\downarrow_{A}B$,
(ii) for all $\epsilon >0$, there is finite $C\subseteq A$ with the following property:
for all $B'\supseteq B$ there is
$b$ such that $t^g(b/B)=t^g(a/B)$ and $t^g(b/B')$ does not
Lascar $\epsilon$-split over $C$.
{\bf Proof}.\enspace (i)$\Rightarrow$(ii): Let $\epsilon>0$ be given. Let $\delta$ be as in Lemma 28 for $\epsilon$ and let $\delta'$ be as in Lemma 25 for $\delta$. Then choose finite $C\subseteq A$ so that $a\downarrow^{\delta'}_{C}A$. Now for any $B'\supseteq B$ there is $b$ such that $t^g(b/B)=t^g(a/B)$ and $b\downarrow_BB'$. Then $b\downarrow_AB'$ and by Lemma 25 $b\downarrow^{\delta}_CB'$. By Lemma 28 we are done.
(ii)$\Rightarrow$(i): Let ${\cal D}\supseteq B$ be a saturated model
of power $>\vert B\vert$.
For all $n>0$, choose $b_{n}$ and $C_{n}$ as in (ii) for
$\epsilon =1/n$ and $B'={\cal D}$. We can choose these so that in addition,
for all $n>0$, $Lstp(b_{n}/B)=Lstp(a/B)$:
By the choice of $b_{n}$ there is an automorphism $F$ such that
$F\restriction B=id$ and $F(b_{n})=a$. Then choose $b'\in F({\cal D} )$
and $b''\in{\cal D}$ so that
$Lstp(b''/B)=Lstp(b'/B)=Lstp(a/B)$. Since ${\cal D}$ and $F({\cal D} )$ are saturated,
there is an automorphism $G$ such that $G(F({\cal D} ))={\cal D}$,
$G\restriction B=id$ and $G(b')=b''$. Now $G(a)$ is as wanted.
Let $c$ be such that $Lstp(c/A)=Lstp(a/A)$ and
$c\downarrow_{A}{\cal D}$ and let $d\in B$. It is enough to show that
$t^g(cd/\emptyset )=t^g(ad/\emptyset )$. Let $\epsilon >0$. It is enough
to show that $d^{p}(t^g(cd/\emptyset ),t^g(ad/\emptyset ))<\epsilon$.
Let $n>0$ be such that $1/n <\epsilon$. Since $d\in B$,
it is enough to show that $d^{p}(t^g(cd/\emptyset ),t^g(b_{n}d/\emptyset ))<\epsilon$.
Choose $d'\in{\cal D}$ so that $Lstp(d'/A)=Lstp(d/A)$
and $d'\downarrow_{A}b_{n}c$. Then
$t^g(cd/\emptyset)=t^g(cd'/\emptyset)=t^g(b_{n}d'/\emptyset )$.
Thus it is enough to show that
$d^{p}(t^g(b_{n}d/\emptyset ),t^g(b_{n}d'/\emptyset ))<\epsilon$.
But since $t^g(b_{n}/{\cal D})$ does not Lascar $1/n$-split over
$C_{n}\subseteq A$, even
$d^{p}(t^g(b_{n}d/C_{n}),t^g(b_{n}d'/C_{n}))\le 1/n<\epsilon$. $\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3$
\th 30 Definition. We say that $A$ is almost strongly
$\omega$-saturated if for all finite $B\subseteq A$, $\epsilon >0$ and
$a$ there is $b\in A$ such that $d^{p}_{a}(Lstp(b/B),Lstp(a/B))<\epsilon$.
\th 31 Corollary. Galois-types over almost strongly $\omega$-saturated sets are
stationary.
{\bf Proof}.\enspace Let $A$ be almost strongly $\omega$-saturated, $A\subseteq B$,
$t^g(a/A)=t^g(b/A)$, $a\downarrow_AB$ and $b\downarrow_AB$. Now if
$t^g(a/B)\neq t^g(b/B)$ there is $\epsilon>0$ and some finite $B'\subset B$
such that $d^p(t^g(a/B'),t^g(b/B'))>\epsilon$. Let $\delta<2(\epsilon)$.
By Theorem 29 there is some finite $A'\subset A$ such that
$t^g(a/B)$ and $t^g(b/B)$ do not Lascar $\delta$-split over $A'$, so
for some $\delta'>0$ whenever $c_1,c_2\in B$ satisfy
$d^p_a(Lstp(c_1/A'),Lstp(c_2/A'))<\delta'$ we have
$d^p(t^g(ac_1/A'),t^g(ac_2/A'))\leq\delta$ and
$d^p(t^g(bc_1/A'),t^g(bc_2/A'))\leq\delta$.
Now let $B^{\delta'}\subset A$ satisfy $d^p_a(Lstp(B^{\delta'}/A'),Lstp(B'/A'))<\delta'$. Then $d^p(t^g(aB^{\delta'}/A'),t^g(aB'/A'))\leq\delta$ and $d^p(t^g(bB^{\delta'}/A'),t^g(bB'/A'))\leq\delta$. As $t^g(a/B^{\delta'})=t^g(b/B^{\delta'})$ this gives $d^p(t^g(a/B'),t^g(b/B'))\leq\epsilon$, a contradiction. $\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3$
By taking a closer look at the proof of (ii)$\Rightarrow$(i)
from Theorem 29, we get the following:
\th 32 Theorem. For all $\epsilon >0$ there is $\delta >0$ such that
if $B\supseteq A$ then for all $a$,
(*) below implies that $a\downarrow^{\epsilon}_{A}B$.
(*) For all $D\supseteq B$ there is $b$ such that
$t^g(b/B)=t^g(a/B)$ and $t^g(b/D)$ does not locally Lascar $\delta$-split over
$A$.
{\bf Proof}.\enspace We first prove that (*) implies the following:
(*') There is a finite $A'\subset A$ such that for all $D\supseteq B$ there is $b$ such that $t^g(b/B)=t^g(a/B)$ and $t^g(b/D)$ does not Lascar $\delta$-split over $A'$.
For this assume (*) and let ${\cal D}\supset B$ be a saturated model of power $>\abs{B}$.
By (*) there is $b$ such that $t^g(b/B)=t^g(a/B)$ and $t^g(b/{\cal D})$ does
not Lascar $\delta$-split over some finite $A'\subset A$.
Then let $D'\supset B$ be any set and choose $b'$ such
that $t^g(b'/B)=t^g(a/B)$ and $b'\downarrow_BD'$. We claim that $t^g(b'/D')$
does not Lascar $\delta$-split over $A'$. Otherwise for any $\delta'>0$
there are $c,d\in D'$ such that $d^p_a(Lstp(c/A'),Lstp(d/A'))<\delta'$
but $d^p(t^g(b'c/A'),t^g(b'd/A'))>\delta$. Then there is an automorphism
$F$ such that $F\restriction B=id$ and $F(b')=b$. Denote $c'=F(c)$, $d'=F(d)$.
As $b'\downarrow_BD'$, we have $b\downarrow_Bc'd'$ and we may assume $c'd'\downarrow_Bb{\cal D}$. By saturation of ${\cal D}$ we can find $c^+,d^+\in{\cal D}$ such that $Lstp(c^+d^+/B)=Lstp(c'd'/B)$ and $c^+d^+\downarrow_Bb$. Thus $t^g(c^+d^+/Bb)=t^g(c'd'/Bb)$ and $t^g(c^+d^+b/B)=t^g(c'd'b/B)=t^g(cdb'/B)$. In particular $d^p_a(Lstp(c^+/A'),Lstp(d^+/A'))<\delta'$ and $d^p(t^g(bc^+/A'),t^g(bd^+/A'))>\delta$, a contradiction.
Now for the theorem, let $\delta =2(\epsilon )$. To prove $a\downarrow^{\epsilon}_AB$, let $C\subseteq A$ be finite. Let $A'\subseteq A$ be finite as given by (*') and define $A^+=A'\cup C$. Then let $b$ satisfy $Lstp(b/A^+)=Lstp(a/A^+)$ and $b\downarrow_{A^+}B$.
Let ${\cal D}\supseteq B$ be a large saturated model
such that ${\cal D}\downarrow_{B}ab$ (in particular $b\downarrow_{A^+}{\cal D}$).
Clearly it is enough to show that $d^{p}(t^g(b/{\cal D} ),t^g(a/{\cal D} ))\le\epsilon$.
For this, let $d\in{\cal D}$. It is enough to show that
$d^{p}(t^g(bd/\emptyset ),t^g(ad/\emptyset ))\le\epsilon$. Let $b'$
be such that $t^g(b'/B)=t^g(a/B)$ and
$t^g(b'/{\cal D} )$ does not Lascar $\delta$-split over $A'$ (and thus not over $A^+$).
As in the proof of (ii)$\Rightarrow$(i)
in Theorem 29, we can choose $b'$ so that in addition
$Lstp(b'/B)=Lstp(a/B)$.
By Lemma 5, it is enough to show that
$d^{p}(t^g(bd/\emptyset ),t^g(b'd/\emptyset ))\le\delta$ and
$d^{p}(t^g(b'd/\emptyset ),t^g(ad/\emptyset ))\le\delta$.
For the first one choose $d'\in{\cal D}$ such that $Lstp(d'/A^+)=Lstp(d/A^+)$ and $d'\downarrow_{A^+}bb'$. Then $t^g(bd/\emptyset)=t^g(bd'/\emptyset)=t^g(b'd'/\emptyset)$ and as $t^g(b'/{\cal D})$ does not Lascar $\delta$-split over $A^+$, $d^p(t^g(b'd/\emptyset),t^g(b'd'/\emptyset))\leq\delta$.
For the second, choose $d'\in{\cal D}$ so that $Lstp(d'/B)=Lstp(d/B)$ and $d'\downarrow_{B}ab'$.
Then $t^g(ad/\emptyset )=t^g(ad'/\emptyset )=t^g(b'd'/\emptyset )$
and since $t^g(b'/{\cal D} )$ does not Lascar $\delta$-split over $A^+\subset B$,
$d^{p}(t^g(b'd'/\emptyset ),t^g(b'd/\emptyset ))\le\delta$. $\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3$
\th 33 Corollary. For all $\epsilon >0$ there is $\delta >0$
such that for all $A\subseteq B\subseteq C$ and $a$,
if $a\downarrow^{\delta}_{A}C$,
then $a\downarrow^{\epsilon}_{B}C$.
{\bf Proof}.\enspace Let $\delta=3(3(2(\epsilon)))$ and assume $a\downarrow^\delta_AC$. To use Theorem 32, let $D\supseteq C$ and choose $b$ such that $t^g(b/C)=t^g(a/C)$ and $b\downarrow_CD$. Then by Lemma 25, $b\downarrow^{3(2(\epsilon))}_AD$. By Lemma 28 $t^g(b/D)$ does not locally Lascar $2(\epsilon)$-split over $A$ and thus not over $B$. By Theorem 32, $a\downarrow^\epsilon_BC$. $\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3$
\th 34 Corollary. For all $\epsilon >0$, there is $\delta >0$ for which
there are no $a$ and $b_{n},c_{n}$, $n>0$,
such that for all $n>0$, the following holds:
(i) $d^{p}_{a}(Lstp(b_{n}/A_{n}),Lstp(c_{n}/A_{n}))<\delta$,
where $A_{n}=\bigcup_{i<n}b_{i}c_{i}$,
(ii) $d^{p}(t^g(ab_{n}/A_{n}),t^g(ac_{n}/A_{n})) >\epsilon$.
{\bf Proof}.\enspace Let $\delta=3(3(2(3(\epsilon ))))$.
For a contradiction, suppose that
$a$, $b_{n},c_{n}$ for $n<\omega$ exist
such that (i) and (ii) hold.
We can find a finite $A\subseteq\bigcup_{n<\omega}A_{n}$
such that $a\downarrow^{\delta}_{A}\bigcup_{n<\omega}A_{n}$.
Choose $n<\omega$ so that $A\subseteq A_{n}$. By the proof of Corollary 33,
$a\downarrow^{3(\epsilon )}_{A_{n}}b_{n}c_{n}$. Let $b$ satisfy $Lstp(b/A_n)=Lstp(a/A_n)$ and $b\downarrow_{A_n}b_nc_n$. Then $d^p_a(Lstp(b/A_nb_nc_n),Lstp(a/A_nb_nc_n))\leq 3(\epsilon)$.
Further since $d^{p}_{a}(Lstp(b_{n}/A_{n}),Lstp(c_{n}/A_{n}))<\delta\le 3(\epsilon)$, by Lemma 8 (iii) we also have that $d^p_a(Lstp(b_n/A_nb),Lstp(c_n/A_nb))\leq 3(\epsilon)$. Then finally by Lemma 9 this sums up to $d^p_a(Lstp(ab_n/A_n),Lstp(ac_n/A_n))\leq\epsilon$, a contradiction. $\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3$
Corollary 33 ensures full monotonicity for $\downarrow^0$, giving us the following generalisations of Lemma 21 and Lemma 19:
\th 35 Corollary. $a\downarrow^0_AB$ if and only if $a\downarrow_AB$.
{\bf Proof}.\enspace Assume $a\downarrow^0_AB$. By Corollary 13 let $A_0\subseteq A$ be countable and such that $a\downarrow^0_{A_0}A$. By Lemma 17 $a\downarrow^0_{A_0}B$ and by Lemma 21 $a\downarrow_{A_0}B$, and thus $a\downarrow_AB$.
For the other direction suppose $a\downarrow_AB$. Again let $A_0\subseteq A$ be countable and such that $a\downarrow^0_{A_0}A$. By Lemma 25 $a\downarrow^0_{A_0}B$ and by Corollary 33 $a\downarrow^0_AB$. $\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3$
\th 36 Corollary. For any $a$ and $A\subseteq B$ there exists some $a'$ satisfying $Lstp(a'/A)=Lstp(a/A)$ and $a'\downarrow^{0}_{A}B$. $\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3$
\chapter{Almost summability and $\downarrow^{>\epsilon}$}
In this section we study a property which we call almost summability. It allows us to add small distances to a given distance in the type space without the combined distance growing too big. We also study a weakening of $\downarrow^\epsilon$, which under almost summability is very well behaved.
\th 37 Definition. We say that the perturbation system $({\bf F}_\epsilon)_{\epsilon\geq0}$ is {\it almost summable} if for all $\epsilon>\delta>0$ there exists some $m(\epsilon,\delta)>0$ such that for all $a_i$, $i\leq 2$, if $d^p(t^g(a_0/\emptyset),t^g(a_1/\emptyset))\leq\delta$ and $d^p(t^g(a_1/\emptyset),t^g(a_2/\emptyset))\leq m(\epsilon,\delta)$ then $d^p(t^g(a_0/\emptyset),t^g(a_2/\emptyset))\leq\epsilon$.
\rem 38 Remark.
(i) As in Lemma 9 one can show that if the perturbation system is almost summable then with $\epsilon>\delta>0$ and $m(\epsilon,\delta)$ as in the definition, also $d^p_a$-distances of $\delta$ and $m(\epsilon,\delta)$ add up to $\epsilon$.
(ii) Almost summability holds e.g. for the perturbation system of Hilbert spaces with an automorphism [BUZ] or for linear isomorphisms of Banach spaces. Below we give an example where almost summability fails.
\rem 39 Example. We give an example of a class that is homogeneous with complete type spaces
but whose perturbation system is not almost summable. The vocabulary is
$L=\{P_n,E,<,R_q,d\}_{n<\omega,q\in\hbox{\bf Q}\cap(0,2]}$ where the $P_n$
are unary predicates and $E$, $<$ and $R_q$ are binary. $E$ is an
equivalence relation, the predicates $P_n$ partition the universe and
each predicate is a union of $E$-equivalence classes. $<$ is an order
on each equivalence class such that for each equivalence class there
exists a real $1\leq r\leq10$
such that $([a]_E,<)$ is isomorphic to the ordered real interval $[r,2r]$.
$R_q(a,b)$ holds if and only if $[a]_E=[b]_E$ and $b/a=q$.
The metric $d$ is defined as the one induced by the interval
$[r,2r]$ within the equivalence classes and $d(a,b)=10$ if
$a$ and $b$ are in different equivalence classes. $d$ and
$<$ together fix the $r$ and a unique isomorphism $l:[a]_E\to[r,2r]$
for each element $a$. Thus we can define $r_a$ as the real $r$ given by
the isomorphism above and the length of $a$ as $l(a)\in[r_a,2r_a]$.
We define the perturbation system as follows: $f\in{\bf F}_\epsilon$ if $f$ is an $L\backslash\{d\}$-isomorphism and if $a\in P_n$ then also
$$
e^{-n\epsilon}\leq\frac{l(f(a))}{l(a)}\leq e^{n\epsilon}.
$$
The above condition makes sure that ${\bf F}_0=\bigcap_{\epsilon>0}{\bf F}_\epsilon$. As the $R_q$
prevent $\epsilon$-isomorphisms from stretching the intervals $[r,2r]$, the error
in metric arises from mapping equivalence classes onto each other and
thus switching the $r$. As $r$ varies between 1 and 10, this can
increase distances by a factor of at most 10, and thus $\epsilon$-isomorphisms are bi-Lipschitz
with Lipschitz constant 10 (regardless of $\epsilon$) so they are uniformly
continuous. The rest of the conditions of a perturbation system are trivial.
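To see, for instance, that composition behaves as required (a sketch using only the definitions above): if $f\in{\bf F}_\epsilon$, $g\in{\bf F}_\delta$ and $a\in P_n$, then $f(a)\in P_n$ and
$$
e^{-n(\epsilon+\delta)}\leq\frac{l(g(f(a)))}{l(f(a))}\cdot\frac{l(f(a))}{l(a)}=\frac{l(g(f(a)))}{l(a)}\leq e^{n(\epsilon+\delta)},
$$
so $g\circ f\in{\bf F}_{\epsilon+\delta}$.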
It is not hard to see that this gives a MAEC with perturbations that is homogeneous with JEP, AP, the perturbation property and complete type spaces.
The perturbation system, however, is not almost summable. If $\epsilon>\delta>0$ are such that $\epsilon<4\delta$ and $\delta<2$,
we show that no $\delta'>0$ can serve as the $m(\epsilon,\delta)$ in Definition 37. So
let $\delta'>0$ be given and choose $n$ such that $e^{n\delta'}>10$.
Within $P_n$ let $a,b,c$ be elements in an equivalence class corresponding
to an interval $[r,2r]$ with $r<2$ and such that $a<b<c$ and $d(b,c)=\delta$.
Then $d^p(t^g(ab/\emptyset),t^g(ac/\emptyset))=d(ab,ac)=\delta$. Now we can
map $a,b,c$ with a $\delta'$-isomorphism to elements $a',b',c'$ in an
equivalence class, inside $P_n$, corresponding to an interval $[r',2r']$
with $r'>9$. This shows that
$d^p(t^g(ac/\emptyset),t^g(a'c'/\emptyset))\leq\delta'$. But as this is the only way we can map $a,b,c$ to that interval, we must have $d^p(t^g(ab/\emptyset),t^g(a'c'/\emptyset))\geq d(a'b',a'c')>4\delta>\epsilon$.
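To spell out the final estimate (a sketch, assuming the metric on pairs is the supremum of the coordinatewise distances): the relations $R_q$ force any $L\backslash\{d\}$-isomorphism to preserve the ratios of lengths within an equivalence class, so the class of $a$ is mapped onto the class of $a'$ by scaling lengths with the factor $r'/r$. Hence
$$
d(a'b',a'c')=d(b',c')=\frac{r'}{r}\,d(b,c)>\frac{9}{2}\,\delta>4\delta>\epsilon .
$$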
\th 40 Definition. We define $a\downarrow^{>\epsilon}_AB$ if $a\downarrow^{\xi}_AB$ for all $\xi>\epsilon$. Note that with this notation $a\downarrow^0_AB$ if and only if $a\downarrow^{>0}_AB$.
\rem 41 Remark. If $A$ is finite, $a\downarrow^{>\epsilon}_AB$ if and only if $a\downarrow^{\epsilon}_AB$. This is easily seen via the observation that if $A$ is finite then $a\downarrow^\epsilon_AB$ says that $a$ is $\epsilon$-$d^p_a$-close to the free extension of $Lstp(a/A)$ over $B$. With an almost summable perturbation system a similar characterisation holds for any $A$:
\th 42 Lemma. Assume the perturbation system is almost summable and $A\subseteq B$.
(i) If $a\downarrow^{>\epsilon}_AB$, $Lstp^w(b/A)= Lstp^w(a/A)$ and $b\downarrow_AB$, then we have $d^p_a(Lstp^w(a/B),Lstp^w(b/B))\leq\epsilon$.
(ii) If $Lstp^w(b/A)= Lstp^w(a/A)$, $b\downarrow_AB$ and $d^p_a(Lstp^w(a/B),Lstp^w(b/B))\leq\epsilon$ then $a\downarrow^{>\epsilon}_AB$.
{\bf Proof}.\enspace (i) Assume $a\downarrow^{>\epsilon}_AB$, $b\models Lstp^w(a/A)$ and $b\downarrow_AB$. We prove that $d^p_a(Lstp^w(a/B),Lstp^w(b/B))\leq\xi$ for every $\xi>\epsilon$. So let $\xi>\epsilon$ be given and let $\xi>\epsilon''>\epsilon'>\epsilon$, $\delta^+=\min\{m(\xi,\epsilon''),m(\epsilon'',\epsilon')\}$
and $\delta=3(\delta^+)$. By Lemma 12 find a finite $A_\delta\subseteq A$ such that $ab\downarrow^\delta_{A_\delta}A$. As in the proof of Lemma 17 (since $\delta\leq m(\epsilon'',\epsilon')$) we get $a\downarrow^{\epsilon''}_{A_\delta}B$. Now let $c\models Lstp(a/A_\delta)$ and $c\downarrow_{A_\delta}Bb$. Then $d^p_a(Lstp^w(a/B),Lstp^w(c/B))\leq\epsilon''$. Also, as $c\downarrow_{A_\delta}A$, we have $d^p_a(Lstp^w(b/A),Lstp^w(c/A))\leq\delta$. Further by Fact 2 and Corollary 35 $bc\downarrow^0_AB$ so by Lemma 24 $d^p_a(Lstp^w(b/B),Lstp^w(c/B))\leq\delta^+$. But then $d^p_a(Lstp^w(a/B),Lstp^w(b/B))\leq\xi$.
(ii) Assume $b\models Lstp(a/A)$, $b\downarrow_AB$ and
$d^p_a(Lstp^w(a/B),Lstp^w(b/B))\leq\epsilon$ and let $\xi>\epsilon$.
Let $\delta=2(m(\xi,\epsilon))$. Now $b\downarrow^0_AA$, so by Lemma 11, for every
finite $C\subseteq A$ there is some finite $A'$, $C\subseteq A'\subseteq A$
such that $b\downarrow^\delta_{A'}A$ and by Lemma 17 $b\downarrow^{m(\xi,\epsilon)}_{A'}B$.
Now if $c\models Lstp(a/A')=Lstp(b/A')$ and $c\downarrow_{A'}B$, we have
$d^p_a(Lstp^w(b/B),Lstp^w(c/B))\leq m(\xi,\epsilon)$. Together with the
assumption this yields $d^p_a(Lstp^w(a/B),Lstp^w(c/B))\leq\xi$ proving
$a\downarrow^\xi_AB$. $\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3$
\th 43 Corollary. If the perturbation system is almost summable, $a\downarrow_AB$ and $a\downarrow^{>\epsilon}_{AB}C$ then $a\downarrow^{>\epsilon}_ABC$.
{\bf Proof}.\enspace Assume $a\downarrow_AB$ and $a\downarrow^{>\epsilon}_{AB}C$ and let $b\models Lstp(a/A)$, $b\downarrow_ABC$. By stationarity $Lstp(b/AB)=Lstp(a/AB)$, so by $a\downarrow^{>\epsilon}_{AB}C$ and Lemma 42 (i), $d^p_a(Lstp^w(a/ABC),Lstp^w(b/ABC))\leq\epsilon$. But then by Lemma 42 (ii) $a\downarrow^{>\epsilon}_ABC$. $\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3$
\th 44 Lemma. If the perturbation system is almost summable then $a\downarrow^{>\epsilon}_ABC$ and $a\downarrow_AB$ imply $a\downarrow^{>\epsilon}_{AB}C$.
{\bf Proof}.\enspace Let $\xi>\epsilon$ and some finite $B'\subseteq AB$ be given. Let $\xi>\epsilon'>\epsilon$, $\delta=m(\xi,\epsilon')$ and $\delta'=3(3(2(\delta)))$ (from Corollary 33). Denote $A'=B'\cap A$. By $a\downarrow_AB$ there is a finite $A''\supseteq A'$ such that $a\downarrow^{\delta'}_{A''}AB$. By $a\downarrow^{>\epsilon}_ABC$ there is a finite $A^+\supseteq A''$ such that $a\downarrow^{\epsilon'}_{A^+}ABC$. Define $B^+=A^+\cup B'$ and let $b\models Lstp(a/B^+)$ such that $b\downarrow_{B^+}ABC$. We need to show that $d^p_a(Lstp^w(a/ABC),Lstp^w(b/ABC))\leq\xi$.
Now by Corollary 33 and the choice of $\delta'$ we have $a\downarrow^\delta_{A^+}AB$ and thus $b\downarrow^\delta_{A^+}B^+$. By Lemma 15 we get $b\downarrow^\delta_{A^+}ABC$. Let $b'\models Lstp(a/A^+)=Lstp(b/A^+)$ such that $b'\downarrow_{A^+}ABC$. Then $d^p_a(Lstp^w(b/ABC),Lstp^w(b'/ABC))\leq\delta$. By $a\downarrow^{\epsilon'}_{A^+}ABC$ we also have $d^p_a(Lstp^w(a/ABC),Lstp^w(b'/ABC))\leq\epsilon'$. By almost summability we are done. $\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3$
\chapter{Finding a pregeometry in ${\bf M}^{eq}$}
In this section we study a closure operator defined by $\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}$ on the set of realisations of a Lascar type. We find conditions on $\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}^\epsilon$ that guarantee that there is an equivalence relation on this set such that the closure operator forms a pregeometry on the set of equivalence classes. The $p$-adic integers, studied at the end of this paper, form an example of a class where this happens, but where the type itself is not regular.
Let $D$ be the set of all realisations of some unbounded $p=Lstp^w(a/A)$.
Let $E$ be an $A$-invariant equivalence relation.
Denote $a^*=a/E$. We define in $D/E$ a closure operator by
$a^*\in cl(b_1^*,\dots,b_n^*)$ if for all $a'\in a^*$ and $b_i'\in b_i^*$, $i=1,\dots,n$, $a'\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}_Ab_1'\dots b_n'$. For an arbitrary $B^*\subseteq D/E$ we define $a^*\in cl(B^*)$ if $a^*\in cl(B_0^*)$ for some finite $B_0^*\subseteq B^*$.
\th 45 Lemma. $cl$ as defined above satisfies Steinitz' exchange property, i.e. if $a^*\in cl(b_1^*,\dots,b_n^*,c^*)\backslash cl(b_1^*,\dots,b_n^*)$ then $c^*\in cl(b_1^*,\dots,b_n^*,a^*)$.
{\bf Proof}.\enspace Assume $a^*\in cl(b_1^*,\dots,b_n^*,c^*)\backslash cl(b_1^*,\dots,b_n^*)$.
If there are $c'\in c^*,a'\in a^*, b_k'\in b_k^*$, $1\leq k\leq n$,
such that $c'\downarrow_A b_1'\dots b_n'a'$ then we can form a Morley sequence
$c_i$, $i<\lambda({\cal K})$ such that $c_i\models Lstp(c'/A)$ and
$c_i\downarrow_Aa'b_1'\dots b_n'\bigcup_{j<i}c_j$. Now for each $i<\lambda({\cal K})$
there is an automorphism $F_i\in Aut({\bf M}/Ab_1'\dots b_n'a')$ mapping
$c_i$ to $c'$ and since it fixes the $b_k'$, $1\leq k\leq n$, and $a'$ it
must fix their equivalence classes setwise.
Now as $a^*\notin cl(b_1^*,\dots,b_n^*)$ there are $a''\in a^*$ and $b_k''\in b_k^*$, for $1\leq k\leq n$, such that $a''\downarrow_Ab_1''\dots b_n''$. Now $F_i(a'')\in a^*$, $F_i(b_k'')\in b_k^*$ and $F_i(c_i)=c'\in c^*$ and as $a^*\in cl(b_1^*,\dots,b_n^*,c^*)$ we have $F_i(a'')\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}_AF_i(b_1'')\dots F_i(b_n'')c'$ so $a''\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}_Ab_1''\dots b_n'' c_i$ for every $i<\lambda({\cal K})$. As $a''\downarrow_A b_1''\dots b_n''$ this implies $a''\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}_{Ab_1''\dots b_n''}c_i$. By symmetry $c_i\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}_{Ab_1''\dots b_n''}a''$
and further $c_i\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}_Ab_1''\dots b_n''a''$. But as the $c_i$ form a Morley sequence this implies $b_1''\dots b_n''a''\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}_{A\bigcup_{j<i}c_j}c_i$ for every $i<\lambda({\cal K})$ but this gives a strongly splitting chain of length $\lambda({\cal K})$, a contradiction. So we must have $c'\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}_A b_1'\dots b_n'a'$ for all $a'\in a^*$, $b_k'\in b_k^*$, $1\leq k\leq n$. $\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3$
\th 46 Lemma. Assume $A$ is finite or the perturbation system is almost summable.
If (*) below holds, then $(D/E,cl)$ is a pregeometry.
(*) There is $\epsilon>0$ such that for all $b\in D$ and
$B\subseteq D$ the following are equivalent:
(i) $b\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}_AB$
(ii) $b\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}^{>\epsilon}_AB$
(iii) for all $c\in D$ there exists $b'\in b^*$ such that $c\downarrow^{>\epsilon}_{AB} b'$.
{\bf Proof}.\enspace Monotonicity is clear, finite character was built into the definition
and exchange was proved in Lemma 45 so all that remains is $cl(cl(B))=cl(B)$.
It is enough to consider the case where $c^*\in cl(b_1^*,\dots,b_n^*)$ and
$a^*\in cl(b_1^*,\dots,b_n^*,c^*)$ and show that $a^*\in cl(b_1^*,\dots,b_n^*)$.
So let $a'\in a^*,b_i'\in b_i^*$. We need to show $a'\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}_Ab_1'\dots b_n'$.
As all $c\in c^*$ satisfy $c\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}_Ab_1'\dots b_n'$ by (*) there exists
$c'\in c^*$ such that $a'\downarrow^{>\epsilon}_{Ab_1'\dots b_n'}c'$.
Now if $a'\downarrow_Ab_1'\dots b_n'$ and $A$ is finite then by Lemma 11 (vii) $a'\downarrow^{>\epsilon}_A b_1'\dots b_n'c'$ and by (*) $a'\downarrow_A b_1'\dots b_n'c'$, a contradiction.
If $d^p$ is almost summable then by Corollary 43
we again get $a'\downarrow_A b_1'\dots b_n'c'$. So $a'\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}_Ab_1'\dots b_n'$. $\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3$
\rem 47 Remark. Note that in the lemma above $p$ itself need not be regular as we will see in the $p$-adic example.
\th 48 Lemma. If $E$ is an equivalence relation such that (*) of Lemma 46 holds, then $aEb$ implies $a\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}_Ab$.
{\bf Proof}.\enspace We first show that if (*) holds then $E$ has more than one equivalence class. Assume $aEb$ and let $B$ be such that $a\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}_AB$. Now by the equivalence of (i) and (iii) in (*) and since $a^*=b^*$ we get $b\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}_AB$. Thus if $c\downarrow_AB$, $c$ is not in the $E$-class of $a$, so $E\neq D^2$.
Now if $aEb$, $a\downarrow_Ab$ and $c\in D$ is arbitrary, choose $d\in D$ with $d\downarrow_Abc$; then $t^g(d/Ab)=t^g(a/Ab)$, so $dEb$. Further $t^g(c/Ad)=t^g(b/Ad)$, so $cEb$ and $E$ has $D$ as its only equivalence class, a contradiction. $\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3$
\th 49 Corollary. If $E$ is an equivalence relation such that (*) of Lemma 46 holds and either $A$ is finite or the perturbation system is almost summable, then $\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}_A$ is transitive on $D$ (and thus an equivalence relation) and (*) holds for this equivalence relation.
{\bf Proof}.\enspace Assume towards a contradiction that $a\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}_Ab$ and $b\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}_Ac$ but $a\downarrow_Ac$. As $b\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}_Ac$ there is $b'$ such that $b'Eb$ and $a\downarrow^{>\epsilon}_{Ac}b'$ and as $a\downarrow_Ac$ we have $a\downarrow^{>\epsilon}_Acb'$ and thus $a\downarrow_Ab'$. Further, as $b\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}_Aa$ there is $b''$ such that $b''Eb$ and $b'\downarrow^{>\epsilon}_{Aa}b''$. Now as $b'\downarrow_Aa$ we get $b'\downarrow^{>\epsilon}_Aab''$ and thus $b'\downarrow_Ab''$, contradicting Lemma 48.
Now $\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}_A$ forms an equivalence relation on $D$ such that each $\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}_A$-equivalence class is a union of $E$-equivalence classes. Thus (iii) with respect to $E$ implies (iii) with respect to $\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}_A$.
Now if for all $c\in D$ there is $b'$ such that $b'\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}_Ab$ and $c\downarrow^{>\epsilon}_{AB}b'$ then in particular this holds for $c=b$. Then if $b\downarrow_AB$ we have $b\downarrow^{>\epsilon}_ABb'$ and by equivalence of (i) and (ii) $b\downarrow_Ab'$, a contradiction. $\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3$
The relation $\downarrow^\epsilon$ measures distances to free extensions. Another view is looking at how much a type can still fork.
\th 50 Definition. We define a real-valued rank function
$$
R(a/A)=\sup\{\epsilon : a\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}^\epsilon_AB {\sl\ for\ some\ }B\}
$$
and $R(a/A)=0$ if the above set is empty.
\rem 51 Remark.
(i) $R(a/A)=0$ if and only if $t^g(a/A)$ is bounded.
(ii) $R$ is not in general monotone (i.e. it is not always the case that $R(a/AB)\leq R(a/A)$), as can be seen by considering functions $\omega\to X$ for some set $X$ and defining the following metric:
$$d(f,g) = \cases{ 0.9 &if $\min\{n:f(n)\neq g(n)\}=0$\cr
m^{-1} &if $m=\min\{n:f(n)\neq g(n)\}>0$.\cr}$$
\th 52 Lemma. If either $A$ is finite or the perturbation system is almost summable then $a\downarrow_AC$ implies $R(a/AC)=R(a/A)$.
{\bf Proof}.\enspace Assume $a\downarrow_AC$. If $R(a/A)>\epsilon$ then for some $B$, $a\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}^{>\epsilon}_AB$ and thus by monotonicity $a\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}^{>\epsilon}_ABC$. As $a\downarrow_AC$ this implies $a\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}^{>\epsilon}_{AC}B$ (by Lemma 11(vii) if $A$ is finite and by Corollary 43 if the perturbation system is almost summable) which shows that $R(a/AC)>\epsilon$.
If $R(a/A)\leq\epsilon$ then for all $B$, $a\downarrow^{>\epsilon}_AB$, in particular $a\downarrow^{>\epsilon}_ABC$. Then either by Lemma 11(viii) or Lemma 44 and $a\downarrow_AC$ we get $a\downarrow^{>\epsilon}_{AC}B$ for all $B$, proving $R(a/AC)\leq\epsilon$. $\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3$
\th 53 Lemma. Let $D$ be as above and assume either that $A$ is finite or the perturbation system is almost summable. Suppose there is $\epsilon>0$ such that for all $B\subset D$ and all $b\in D$ the following are equivalent
(i) $b\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}_AB$
(ii) $b\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}^{>\epsilon}_AB$
(iii) $R(b/AB)\leq\epsilon$
Then $a\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}_Ab$ is an equivalence relation on $D$.
{\bf Proof}.\enspace Suppose $a,b,c\in D$, $b\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}_Aa$ and $b\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}_Ac$. For a contradiction suppose $a\downarrow_Ac$. Then $b\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}_{Aa}c$. Let $b'\models Lstp^w(b/Aa)$ and $b'\downarrow_{Aa}c$. Since $R(b/Aa)\leq\epsilon$, we have $b\downarrow^{>\epsilon}_{Aa}c$ so $d^p_a(Lstp^w(b'/Aac),Lstp^w(b/Aac))\leq\epsilon$. In particular, $d^p_a(Lstp^w(b'/Ac),Lstp^w(b/Ac))\leq\epsilon$.
On the other hand, $b'\downarrow_Ac$ and so by (ii), $d^p_a(Lstp^w(b'/Ac),Lstp^w(b/Ac))>\epsilon$, a contradiction. $\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3$
\th 54 Corollary. Let $D$ be as above. Assume there is $\epsilon>0$ such that for all $b\in D$ and $B\subseteq D$ the following are equivalent:
(i) $b\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}_AB$
(ii) $b\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}^{>\epsilon}_AB$
(iii) $R(b/AB)\leq\epsilon$
(iv) for all $c\in D$ there exists $b'\in D$ with $b'\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}_Ab$ such that $c\downarrow^{>\epsilon}_{AB} b'$
Then $a\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}_Ab$ is an equivalence relation on $D$ and $(D/\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}_A,cl)$ is a pregeometry.
{\bf Proof}.\enspace This is clear by Lemmas 53 and 46. $\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3$
\chapter{Example: the $p$-adics}
\def\K{{\cal K}}
\def\pcl#1{\left\langle#1\right\rangle_P}
\def\cl#1{\left\langle#1\right\rangle}
\def\pnorm#1{\left\Vert #1\right\Vert_p}
As an example
we consider a class of ultrametric spaces, namely that of models $\overline{\hbox{\bf Z}_p^{(\kappa)}}$ for a fixed prime $p$, where $\hbox{\bf Z}_p$ is the set of $p$-adic integers (i.e. the completion of the integers in the $p$-adic topology). Recall that the $p$-adic topology is given by the $p$-adic norm $\pnorm{a}=p^{-\max\{k : p^k\mid a\}}$.
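As a quick computational illustration of the norm just recalled (not part of the original development; the helper names are our own), the $p$-adic norm of a nonzero integer is determined by its $p$-adic valuation:

```python
def p_valuation(a, p):
    """Largest k such that p**k divides the nonzero integer a."""
    k = 0
    while a % p == 0:
        a //= p
        k += 1
    return k

def p_norm(a, p):
    """p-adic norm ||a||_p = p**(-max{k : p^k | a}), with ||0||_p = 0."""
    if a == 0:
        return 0.0
    return p ** (-p_valuation(a, p))

# Example: 18 = 3^2 * 2, so its 3-adic valuation is 2 and ||18||_3 = 3^-2,
# while 7 is coprime to 3 and has norm 1.
print(p_norm(18, 3), p_norm(7, 3))
```

The norm satisfies the ultrametric inequality $\Vert a+b\Vert_p\leq\max(\Vert a\Vert_p,\Vert b\Vert_p)$, which is what makes the models of ${\cal K}_p$ ultrametric spaces.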
We briefly recall some group-theoretic notions. An element $a\in A$ is {\it divisible by $n$} if there is $a'\in A$ such that $na'=a$. A subgroup $B\leq A$ is {\it pure} if for every $n\in\hbox{\bf Z}$ each $b\in B$ which is divisible by $n$ in $A$ is divisible by $n$ already in $B$. The {\it $p$-height} of $a$ is the largest $k\in\hbox{\rm I\hskip-0.14em N}$ such that $a$ is divisible by $p^k$. All $p$-adic integers are divisible by all $n\in\hbox{\rm I\hskip-0.14em N}$ coprime to $p$, so in the $p$-adic integers {\it height} refers to $p$-height.
We work in the vocabulary of groups $L=\{0,+,-\}$. The class ${\cal K}_p$ consists of completions of direct sums of copies of the $p$-adic integers, $\overline{\hbox{\bf Z}_p^{(\kappa)}}$ with $\kappa$ any cardinal. We let $A\preccurlyeq_{\K} B$ if $A$ is a closed pure subgroup of $B$.
Although we work in the vocabulary of groups, we will use the fact that our models, as completions in the $p$-adic topology of $\hbox{\bf Z}$-modules, are $p$-adic modules (modules over the ring of $p$-adic integers). Thus we can use structure theorems of complete modules. By a pure submodule of a $p$-adic module $A$ we mean a submodule $B$ such that $p^kB=B\cap p^kA$ for all $k\in\hbox{\rm I\hskip-0.14em N}$. Thus a pure closed subgroup of a $p$-adic module is a pure submodule.
The facts below actually hold for complete modules over any complete discrete valuation ring (complete principal ideal ring with exactly one prime element), but we constrain our attention to $p$-adic modules.
\th 55 Facts. ([Ka, \S 16])
(i) The completion, in the $p$-adic topology, of a module with no elements of infinite height is again a module with no elements of infinite height.
(ii) A module with no elements of infinite height is pure in its $p$-adic completion.
(iii) If $T$ is a pure submodule of a complete module $M$ then the closure of $T$ is likewise pure.
(iv) A module with no elements of infinite height which is complete in its $p$-adic topology is the completion of a direct sum of cyclic modules.
(v) If $M$ is a $\hbox{\bf Z}_p$-module and $S$ is a pure submodule of $M$ with no elements of infinite height which is complete in its $p$-adic topology then $S$ is a direct summand of $M$.
\th 56 Corollary. If $A\in{\cal K}_p$ then $B\in{\cal K}_p$ and $B\preccurlyeq_{\K} A$ if and only if $B$ is a direct summand of $A$.
{\bf Proof}.\enspace If $A,B\in{\cal K}_p$ and $B\preccurlyeq_{\K} A$ then $B$ is a direct summand by (v) of Fact 55. On the other hand if $B$ is a direct summand of $A$ then $B$ is a pure closed subgroup of $A$ and by (iv) of Fact 55 $B\in{\cal K}_p$. $\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3$
Taking into account that a product of groups $A_i$ is complete in the $p$-adic topology if and only if every $A_i$ is we may write our models in the form $\overline{\hbox{\bf Z}^{(\kappa)}}$. The backbone $\hbox{\bf Z}^{(\kappa)}$ of the model is what Fuchs [Fu] calls a $p$-basic subgroup:
\th 57 Definition. A $p$-basic subgroup $B$ of $A$ is a subgroup of $A$ satisfying the following three conditions:
(i) $B$ is a direct sum of cyclic $p$-groups and infinite cyclic groups,
(ii) $B$ is $p$-pure in $A$ ($p^kB=B\cap p^kA$ for $k\in\hbox{\rm I\hskip-0.14em N}$),
(iii) $A/B$ is $p$-divisible ($p^kA/B=A/B$ for $k\in\hbox{\rm I\hskip-0.14em N}$).
If $B$ is a $p$-basic subgroup of $A$ then $B$ has a basis which is said to be a $p$-basis of $A$. This basis is $p$-independent, i.e. for every finite subsystem $a_1,\dots,a_m$
$$
n_1a_1+\cdots +n_ma_m\in pA \quad (n_ia_i\neq 0,n_i\in\hbox{\bf Z})
$$
implies
$$
p\mid n_i \quad (i=1,\dots,m).
$$
\th 58 Facts. ([Fu])
(i) A subgroup generated by a $p$-independent system in $A$ is $p$-pure in $A$.
(ii) Every $p$-independent system of $A$ can be expanded to a $p$-basis of $A$.
(iii) For a given prime $p$ all $p$-basic subgroups of a group are isomorphic.
\th 59 Proposition. For a given prime $p$, ${\cal K}_p$ is a MAEC with L\"owenheim-Skolem number $\aleph_0$.
{\bf Proof}.\enspace Both ${\cal K}_p$ and $\preccurlyeq_{\K}$ are closed under isomorphism. If $A\preccurlyeq_{\K} B$
then $A$ is a substructure of $B$ and $\preccurlyeq_{\K}$ is a partial order of
${\cal K}$. For unions note that if $(A_i)$ is an increasing chain of groups in
${\cal K}_p$ with $A_i$ pure in $A_j$ for $i\leq j$, then $\bigcup_iA_i$ is a
torsion-free $\hbox{\bf Z}_p$-module with no non-zero elements of infinite height.
By Fact 55 its completion is again a $\hbox{\bf Z}_p$-module with no elements of
infinite height and thus of the form $\overline{\hbox{\bf Z}_p^{(\kappa)}}$ and is
a model in ${\cal K}_p$.
Also, as each $A_i$ is pure in $\bigcup_iA_i$ which in turn by Fact 55 is pure in its completion, each $A_i$ is pure in $\overline{\bigcup_iA_i}$ and if for all $i$ $A_i\preccurlyeq_{\K} B\in{\cal K}_p$ then $\bigcup_iA_i$ is pure in $B$ and by Fact 55 so is $\overline{\bigcup_iA_i}$.
For the coherence axiom note that if $A\preccurlyeq_{\K} C$ then $A$ is a pure subgroup of any subgroup of $C$ that it is contained in. Thus if $B\preccurlyeq_{\K} C$ and $A\subset B$ we have $A\preccurlyeq_{\K} B$.
The L\"owenheim-Skolem number $LS^d({\cal K}_p)$ is $\aleph_0$. To see this let $C$ be a subset of $A\in{\cal K}_p$. Clearly $\overline{\pcl{C}}$ is the smallest pure closed subgroup of $A$ containing $C$, so $C\subset\overline{\pcl{C}}\preccurlyeq_{\K} A$ and as $\pcl{C}$ has cardinality at most $\abs{C}+\aleph_0$ we are done. $\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3$
In this example we only consider isometric isomorphisms so the $d^p$-metric from [HH] reduces to the infimum-distance metric $d(p,q)=\inf\{d(a,b) : a\models p,b\models q\}$. Also almost summability trivially holds.
\th 60 Proposition. The class ${\cal K}_p$ has the joint embedding and amalgamation properties.
{\bf Proof}.\enspace As direct sums of (disjoint) models are models this is clear by Corollary 56. $\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3$
If $A$ is a subset of a group $B$, $\cl{A}$ denotes the subgroup generated by $A$ and $\pcl{A}$ denotes the pure subgroup in $B$ generated by $A$, i.e. $\pcl{A}=\{b\in B : \exists n\in\hbox{\rm I\hskip-0.14em N}^\ast\ nb\in\cl{A}\}$. When using this notation $B$ will be clear from the context.
\th 61 Lemma. If $A\preccurlyeq_{\K} B$, the $p$-basis of $A$ has strictly smaller cardinality than that of $B$, and $f:A\to B$ is a ${\cal K}$-embedding, i.e. an embedding such that $f(A)\preccurlyeq_{\K} B$, then $f$ can be extended to an automorphism of $B$.
{\bf Proof}.\enspace Write $B=A\oplus B_1=f(A)\oplus B_2$ and note that by cardinality considerations $B_1$ and $B_2$ must be isomorphic. $\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3$
By the above lemma any large enough model acts as a monster model and below we shall assume we work inside such a model $M\in{\cal K}_p$.
\th 62 Lemma. If $A\subset M$ is a set, $a\in M$ an element and $a\notin\overline{\pcl{A}}$ then the Galois-type of $a$ over $A$, $t^g(a/A)$, is determined exactly by the distance of $a$ to $\pcl{A}$ and the set $A_a\subset\pcl{A}$ of closest elements, i.e. $A_a=\{b\in\pcl{A} : d(a,b)=d(a,\pcl{A})\}$.
{\bf Proof}.\enspace First note that $\pcl{A}$ (and thus also its closure) is fixed pointwise by any automorphism fixing $A$ pointwise. This is because in a torsion-free group the equation $nx=a$ has at most one solution. Thus also distances to elements within $\pcl{A}$ must be preserved.
Now if $a$ and $b$ have the same positive distance to $\pcl{A}$, say $p^{-k}$, and the same set of closest elements, choose one of these, say $c$, and write $a=p^ka'+c$, $b=p^kb'+c$, where $p\nmid a',b'$. Let $I$ be a $p$-basis for $\pcl{A}$. Then $I\cup\{a'\}$ is $p$-independent: Let $a_1,\dots,a_m\in I$ and $n_0,\dots,n_m\in\hbox{\bf Z}$ be such that $p\mid n_0a'+n_1a_1+\cdots+n_ma_m$. If $p\nmid n_i$ for some $i\leq m$, we are left with three scenarios:
(i) $p\mid n_i$ for $1\leq i\leq m$ but $p\nmid n_0$. But then $p\mid a'$ contradicting the maximality of $k$ (in the distance of $a$ to $\pcl{A}$).
(ii) $p\mid n_0$ but for some $1\leq i\leq m$ $p\nmid n_i$. This contradicts $p$-independence of $I$.
(iii) $p\nmid n_0$ and for some $1\leq i\leq m$ $p\nmid n_i$. By removing terms we may assume $p\nmid n_i$ for all $i\leq m$. As $p\nmid n_0$ and $\pcl{A}$ is pure in the $\hbox{\bf Z}_p$-module $M$, there is $b\in\pcl{A}$ such that $n_0b=n_1a_1+\cdots+n_ma_m$. Thus $p\mid a'+b$, i.e. $p^{k+1}\mid a-c+p^kb$ contradicting the maximality of $k$.
So $I\cup\{a'\}$ is $p$-independent and similarly $I\cup\{b'\}$. Thus we can construct a ${\cal K}$-embedding of $\pcl{Aa}$ into $M$ by fixing $I$ pointwise and mapping $a'$ to $b'$. By Lemma 61 this extends to an automorphism of $M$. $\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3$
Note that in the lemma the set of closest elements $A_a=\{b\in\pcl{A} : d(a,b)=d(a,\pcl{A})\}$ is a closed ball of radius $d(a,\pcl{A})$ so to check that two elements with the same distance to $\pcl{A}$ have the same type it is enough to show that their sets of closest elements intersect.
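The intersection criterion above rests on the standard ultrametric fact that two closed balls of the same radius either coincide or are disjoint. A finite sanity check of this fact, computed in the quotient $\hbox{\bf Z}/p^m\hbox{\bf Z}$ with the (truncated) $p$-adic distance, can be sketched as follows (our own illustration; the function names are not from the paper):

```python
def val(a, p, m):
    """p-adic valuation of a computed in Z/p^m Z, capped at m."""
    a %= p ** m
    if a == 0:
        return m
    k = 0
    while a % p == 0:
        a //= p
        k += 1
    return k

def ball(center, k, p, m):
    """Closed ball {x : ||x - center||_p <= p^-k} inside Z/p^m Z."""
    return frozenset(x for x in range(p ** m)
                     if val(x - center, p, m) >= k)

p, m, k = 3, 3, 1
balls = {ball(c, k, p, m) for c in range(p ** m)}
# Any two closed balls of the same radius coincide or are disjoint,
# so two such balls are equal as soon as they intersect.
assert all(b1 == b2 or not (b1 & b2) for b1 in balls for b2 in balls)
print(len(balls))  # the p^k = 3 cosets of 3Z/27Z
```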
\th 63 Proposition. The class ${\cal K}_p$ is homogeneous.
{\bf Proof}.\enspace Let $(a_i)_{i<\alpha}$ and $(b_i)_{i<\alpha}$ be sequences of elements in $M$ such that
$$t^g((a_{i_k})_{k<n}/\emptyset)=t^g((b_{i_k})_{k<n}/\emptyset)\quad{\rm for\ each}\ n<\omega.$$
Now define $f$ by $a_i\mapsto b_i$. As finite tuples of $(a_i)_{i<\alpha}$ and $(b_i)_{i<\alpha}$ have the same type, this induces a group isomorphism between $\pcl{(a_i)_{i<\alpha}}$ and $\pcl{(b_i)_{i<\alpha}}$ which naturally extends to their closures. Thus $f$ extends to a map from $\overline{\pcl{(a_i)_{i<\alpha}}}$ to $\overline{\pcl{(b_i)_{i<\alpha}}}$ and as these are models the map further extends to an automorphism of $M$ by Lemma 61. $\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3$
\th 64 Proposition. The class ${\cal K}_p$ has the perturbation property, i.e., if $d^p(t^g(a/\emptyset),t^g(b/\emptyset))=0$ then $t^g(a/\emptyset)=t^g(b/\emptyset)$.
{\bf Proof}.\enspace Let $(b_i)_{i<\omega}$ be a sequence of tuples in a large model $M$
such that $t^g(b_i/\emptyset)=t^g(b_j/\emptyset)$ for all $i,j<\omega$ and
assume $(b_i)_{i<\omega}$ converges to $b$. Denote $B_i=\pcl{b_i}$.
By assumption the mappings mapping $b_0$ to $b_i$ induce isomorphisms
$f_i:B_0\to B_i$. Now consider $B=\pcl{b}$. Define a map $f:B_0\to B$ by
letting $f(a)$, for $a\in B_0$, be the limit of $(f_i(a))_{i<\omega}$.
As $b_i\to b$ it is easy to see that linear combinations of $\cl{b_i}$
converge to the corresponding linear
combination of $\cl{b}$. Also each $f_i$ must preserve divisibility and if $(na_i')_{i<\omega}$ converges to some $a$ then $(a_i')_{i<\omega}$ must be convergent and its limit $a'$ must satisfy $na'=a$. Thus $f$ is a group isomorphism $B_0\to B$ and extends to the closures. These in turn are models, so $f$ extends to an automorphism of $M$ mapping $b_0\mapsto b$. $\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3$
Note that as we only consider isometric mappings and $d^p$ thus coincides with the infimum-distance metric, completeness of type spaces ($d^p$-Cauchy sequences of types over $\emptyset$ have a limit) is just completeness of the model.
\th 65 Proposition. The class ${\cal K}_p$ is $\omega$-$d^p$-stable, i.e. the set of types over a parameter set of cardinality (or density) $\aleph_0$ has density character $\aleph_0$ in the $d^p$-topology.
{\bf Proof}.\enspace Let $A$ be a separable (i.e. containing a dense countable set)
model of ${\cal K}_p$ and let $B=A\oplus\overline{\hbox{\bf Z}_p^{(\omega)}}$. It is
enough to show that all types over $A$ can be realised in $B$.
Since all types over $A$ can be realised in a separable strong extension
$C\succcurlyeq A$ we need to show that such a $C$ can be embedded over $A$
into $B$. Now as $A\preccurlyeq_{\K} C$, $A$ is a direct summand of $C$ so $C=A\oplus C'$
and $C'$ is either trivial or of the form $\overline{\hbox{\bf Z}_p^{(\alpha)}}$.
Then $\alpha$ is
either finite or $\omega$ and thus $C'$ can be embedded into the complement of $A$ in $B$. Combining this with the identity map on $A$ we are done. $\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3$
\th 66 Proposition. For elements $a$, $a\downarrow_AB$ if and only if $d(a,\pcl{A})=d(a,\pcl{AB})$.
{\bf Proof}.\enspace Assume $d(a,\pcl{AB})<d(a,\pcl{A})$ (which implies $d(a,\pcl{A})\neq 0$).
Choose an element $b\in\pcl{AB}$ such that $d(a,b)<d(a,\pcl{A})$
(and if $d(a,\pcl{AB})>0$ we can actually choose $b$ such that
$d(a,b)=d(a,\pcl{AB})$). Now let $b_A$ be a closest element to $b$ in
$\pcl{A}$. If $J_A$ is a $p$-basis for $\pcl{A}$ and $d(b,\pcl{A})=p^{-k}$
we can write $b=b_A+p^kb'_0$ where $J_A\cup\{b'_0\}$ is $p$-independent.
We can extend this to a $p$-basis $J_B$ for $\pcl{AB}$ and further to a
$p$-independent
sequence $J_B\cup\{b'_i\}_{0<i<\omega}$. Then define $b_i=b_A+p^kb'_i$. The $b_i$ form an $A$-indiscernible sequence, $b_0=b$ and for $i\neq j$ $d(b_i,b_j)=d(b_0,\pcl{A})=d(b,b_A)$. As $d(a,b)<d(a,b_A)$ we must have $d(a,b_A)=d(b,b_A)$. Now if $a'\models t^g(a/AB)$ then $d(a',b)=d(a,b)$ and we must have $d(a',b_1)=d(b_0,b_1)>d(a',b_0)$. So $t^g(a'/AB\cup\{b'_i\}_{0<i<\omega})$ splits strongly over $A$. This proves $a\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}_AB$.
For the other direction, assume $d(a,\pcl{A})=d(a,\pcl{AB})$. If this
distance is 0, then there is a countable $A'\subset A$ s.t.
$a\in\overline{\pcl{A'}}$. Then for any $B'\supseteq AB$, $t^g(a/B')$
does not split strongly over $A'$ as any $A'$-indiscernible sequence must
be $A'a$-indiscernible. If the distance is positive, let $A'\subset A$
be finite such that $d(a,\pcl{A'})=d(a,\pcl{A})$. Let $B'\supseteq AB$.
We need to find $b\models t^g(a/AB)$ such that $t^g(b/B')$ does not split
strongly over $A'$. As $d(a,\pcl{A'})=d(a,\pcl{AB})$
we can find $a_{A'}\in\pcl{A'}$ such that it is a closest element to $a$
in $\pcl{AB}$ and write $a=a_{A'}+a'$. Let $b=a_{A'}+b'$ where
$d(b',\pcl{B'})=\pnorm{a'}$. Then $b\models t^g(a/AB)$ and we prove
that $t^g(b/B')$ does not split strongly over $A'$: Let
$\{b_i:i<\omega\}\subset B'$ be $A'$-indiscernible. Then
$\overline{\pcl{A'b_0}}$ is a model and the map generated by fixing
$A'$ and mapping $b_0$ to $b_1$ is ${\cal K}$-elementary and by Lemma 61 extends
to an automorphism of $\overline{\pcl{B'}}$. Now looking at $\overline{\pcl{B'}}$ inside any larger model containing $b'$ we see that we can extend the mapping to one fixing $b'$. Then we have a map showing that $t^g(bb_0/A')=t^g(bb_1/A')$. $\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3$
\th 67 Corollary. ${\cal K}_p$ is simple.
{\bf Proof}.\enspace Let $a$ be a finite tuple and $A$ a set. If $a$ is a single element then $a\downarrow_AA$ by Proposition 66. If $a=a_1\dots a_n$, use induction on $n$ and Fact 2 (v). $\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3$
\th 68 Lemma. For single elements $a$, $R(a/A)=d(a,\pcl{A})$.
{\bf Proof}.\enspace If $a'\models t^g(a/A)$ then $d(a,a')\leq d(a,\pcl{A})$ so $a\downarrow^{d(a,\pcl{A})}_AB$ for any $B$. On the other hand if $a\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}_AB$ and $a'\downarrow_AB$ then $d(a,a')=d(a,\pcl{A})$ so $a\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}^\epsilon_AB$ for all $\epsilon<d(a,\pcl{A})$. $\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3$
\th 69 Lemma. In ${\cal K}_p$, for any set $A$ any type of a single element satisfies the assumptions of Corollary 54.
{\bf Proof}.\enspace Let $p=Lstp^w(a/A)$ where $a$ is a single element. Define $d=d(a,\pcl{A})$ and let $d^-$ be the largest distance smaller than $d$ (recall that the positive values of the metric are discrete). Choose any $\epsilon$ with $d^-<\epsilon<d$. Let $b\in D$ and $B\subseteq D$.
(i)$\Leftrightarrow$(iii) is clear by Lemmas 68 and 66.
(ii)$\Rightarrow$(i) is trivial.
(iv)$\Rightarrow$(ii): Assume (iv) holds for $c=b$, i.e., there exists $b'$ such that $b'\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}_Ab$ and $b\downarrow^{>\epsilon}_{AB}b'$. Now if $b\downarrow^{>\epsilon}_AB$, then by Lemma 68 and the choice of $\epsilon$, $b\downarrow_AB$. Then by Corollary 43 $b\downarrow^{>\epsilon}_ABb'$ and again by Lemma 68 $b\downarrow_Ab'$, a contradiction.
(i)$\Rightarrow$(iv): Assume $b\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}_AB$. Then $d(b,\pcl{AB})\leq d^-$ so there is $b'\in\pcl{AB}$ satisfying $d(b,b')\leq d^-$. Then $b'\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}_Ab$ and $\pcl{ABb'}=\pcl{AB}$ so any element $c\in D$ must satisfy $c\downarrow_{AB}b'$ and thus $c\downarrow^{>\epsilon}_{AB}b'$. $\mathchoice\sqr34\sqr34\sqr{2.1}3\sqr{1.5}3$
Thus the $\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}_A$-equivalence classes of a type (over $A$) form a pregeometry.
Note that we really need to look at equivalence classes to get a pregeometry.
If we define the closure simply by $cl(B)=\{a:a\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}_AB\}$ the property
$cl(cl(B))=cl(B)$ fails. This can be seen by considering $p$-independent
elements $b_i$ and letting, e.g., $A=\emptyset$, $p$ be the type of any
element of length $1$, $B=\{b_1-pb_2\}$, $c=pb_0+b_1$ and $a=b_0+b_2$.
Then $c\in cl(B)$, $a\in cl(Bc)$ but $a\notin cl(B)$. This reflects the way the structure theorem for (nice)
Abelian groups looks at the Ulm invariants, i.e. the dimensions of $p^\alpha G/p^{\alpha+1}G$, considered as vector spaces over the integers mod $p$.
Note that when $A=\emptyset$ and $a,b\in G-pG$, they are in the same $\mathrel{\lower0pt\hbox to 3pt{\kern3pt$\not$\hss}\downarrow}$-equivalence class if and only if, when $G/pG$ is considered as a vector space over $\hbox{\bf Z}/p\hbox{\bf Z}$, the cosets of $a$ and $b$ span the same linear subspace, $(\hbox{\bf Z}/p\hbox{\bf Z})a/pG=(\hbox{\bf Z}/p\hbox{\bf Z})b/pG$.
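The coset condition in this last remark can be made concrete in a finite-dimensional toy setting: two nonzero vectors over $\hbox{\bf Z}/p\hbox{\bf Z}$ span the same line exactly when one is a nonzero scalar multiple of the other. A small sketch of that check (our own illustration, not from the paper):

```python
def same_line_mod_p(a, b, p):
    """Do the reductions of integer vectors a, b mod p span the same
    1-dimensional subspace of (Z/pZ)^n?  Assumes both are nonzero mod p."""
    a = [x % p for x in a]
    b = [x % p for x in b]
    # b spans the same line as a iff b = lam * a for some lam in 1..p-1
    return any(all((lam * x - y) % p == 0 for x, y in zip(a, b))
               for lam in range(1, p))

# With p = 3: (1, 2) and (2, 4) span the same line (take lam = 2),
# while (1, 2) and (1, 1) do not.
print(same_line_mod_p([1, 2], [2, 4], 3))  # True
print(same_line_mod_p([1, 2], [1, 1], 3))  # False
```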
\end
\section{Introduction}
\label{Intro}
This paper concerns operations on knots and links, particularly infection by string links. Classically, knots and links are considered as isotopy classes of embeddings of a 1-manifold into a 3-manifold, such as $\mathbb{R}^3$, $D^3$, or $S^3$. Instead of considering just isotopy classes, we consider the whole \emph{space} of links, that is, the space of embeddings of a certain 1-manifold into a certain 3-manifold.
We also consider spaces parametrizing the operations and organize all of these spaces via the concept of an operad (or colored operad). The operad framework is in turn convenient for studying spaces of links and generalizing statements about isotopy classes to the space level. Finding such statements to generalize was the motivation for recent work of the authors and R Blair on isotopy classes of string links \cite{StringLinkMonoid}.
Our work closely follows the work of Budney.
Budney first showed that the little 2-cubes operad $\mathcal{C}_2$ acts on the space $\mathcal{K}$ of (long) knots, which implies the well known commutativity of connect-sum of knots on isotopy classes. He showed that $\mathcal{K}$ is freely generated over $\mathcal{C}_2$ by the space $\mathcal{P}$ of prime knots, generalizing the prime decomposition of knots of Schubert from isotopy classes to the level of the space of knots \cite{Budney}. Later, he constructed a splicing operad $\mathcal{SP}$ which encodes splicing of knots. He showed that $\mathcal{K}$ is freely generated over a certain suboperad of $\mathcal{SP}$ by the subspace of torus and hyperbolic knots, thus generalizing the satellite decomposition of knots from isotopy classes to the space level \cite{BudneySplicing}.
Infection by string links is a generalization of splicing from knots to links. This operation is most commonly used in studying knot concordance. One instance where string link infection arises is in the clasper surgery of Habiro \cite{Habiro}, which is related to finite-type invariants of knots and links. In another vein, Cochran, Harvey, and Leidy observed that iterating the infection operation gives rise to a fractal-like structure \cite{CHLPrimaryDecomposition}. This motivated our work, and we provide another perspective on the structure arising from string link infection. We do this by constructing a colored operad which encodes this infection operation. We then prove a statement that decomposes part of the space of 2-component string links via our colored operad.
Splicing and infection are both generalizations of the connect-sum operation. The latter is always a well defined operation on isotopy classes of knots, but if one considers long knots, it is even well defined on the knots themselves. This connect-sum operation (i.e., ``stacking'') is also well defined for long (a.k.a.~string) links with any number of components. Thus we restrict our attention to string links.
\subsection{Basic definitions and remarks}
Let $I=[-1,1]$ and let $D^2 \subset \mathbb{R}^2 \cong \mathbb{C}$ be the unit disk with boundary.
\begin{definition}
\label{StringLink}
A \emph{$c$-component string link} (or \emph{$c$-string link}) is a proper embedding of $c$ disjoint intervals
\[
\coprod_c I \hookrightarrow I \times D^2
\]
whose values and derivatives of all orders at the boundary points agree with those of a fixed embedding $i_c$.
For concreteness, we take $i_c$ to be the map which on the $i^\mathrm{th}$ copy of $I$ is given by $t \mapsto (t, x_i)$
where $x_i = \left(\frac{i-1}{c}, 0\right)$. We will call $i_c$ the trivial string link.
Another example of a string link is shown in Figure \ref{stringlink}.
\end{definition}
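To make the boundary conditions concrete, the trivial string link $i_c$ of Definition \ref{StringLink} can be written out directly. The following Python sketch (the function name and the representation of maps as closures are ours, purely for illustration) parametrizes its components:

```python
def trivial_string_link(c):
    """Return the c component maps of the trivial string link i_c.
    Component i (1-indexed) sends t in I = [-1, 1] to (t, x_i) in I x D^2,
    where x_i = ((i - 1)/c, 0) is a point of the unit disk D^2."""
    def component(i):
        x_i = ((i - 1) / c, 0.0)
        return lambda t: (t, x_i)  # constant disk coordinate, linear in t
    return [component(i) for i in range(1, c + 1)]

# Each strand runs straight across the cylinder at a fixed disk coordinate:
links = trivial_string_link(3)
print(links[0](-1.0))  # (-1.0, (0.0, 0.0))
```

Each strand here is a straight arc, and all derivatives at the boundary agree with those of the affine parametrization, as required by the definition.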
\begin{figure}[h!]
\begin{picture}(300,108)
\put(37,0){\includegraphics[scale=.7]{stringlink.pdf}}
\put(-3,33){$D^2 \times \{0\}$}
\put(335,85){$D^2 \times \{1\}$}
\end{picture}
\caption{A string link}
\label{stringlink}
\end{figure}
In our work \cite{StringLinkMonoid}, our definition of string links allowed more general embeddings, and the ones defined above were called ``pure string links.'' We choose the definition above in this paper because infection by string links behaves more nicely with this more restrictive notion of string link. (Specifically, it preserves the number of components in the infected link.)
The condition on derivatives is not always required in the literature.\footnote{The homotopy type of the space of such embeddings would be unchanged by omitting the condition on derivatives, since the space of possible tangent vectors and higher-order derivatives at the boundary is contractible.} We impose it because this allows us to identify a $c$-string link with an embedding $\coprod_c \mathbb{R} \hookrightarrow \mathbb{R} \times D^2$ which agrees with a fixed embedding outside of $I \times D^2$. Let $\mathcal{L}_c = \mathrm{Emb}(\coprod_c \mathbb{R}, \mathbb{R}\times D^2)$ denote the space of $c$-string links, equipped with the $C^\infty$ Whitney topology. An isotopy of string links is a path in this space, so the path components
of $\mathcal{L}_c$ are precisely the isotopy classes of $c$-string links. Often we will write $\mathcal{K}$ for the space $\mathcal{L}_1$ of long knots.
The braids which qualify as string links under Definition \ref{StringLink} are precisely the pure braids.
There is a map from $\mathcal{L}_c$ to the space $\mathrm{Emb}(\coprod_c S^1, \mathbb{R}^3)$ of closed links in $\mathbb{R}^3$ by taking the closure of a string link. When $c=1$, this map is an isomorphism on $\pi_0$. In other words, isotopy classes of long knots correspond to isotopy classes of closed knots.
In general, this map is easily seen to be surjective on $\pi_0$, but it is not injective on $\pi_0$. For example, any string link and its conjugate by a pure braid yield isotopic closed links, and for $c \geq 3$, there are string links whose conjugates by pure braids are not isotopic to the original string link.
We will sometimes write just ``link'' rather than ``string link'' or ``closed link'' when the type of link is either clear from the context or unimportant.
\subsection{Main results}
Our first main result is the construction of a colored operad encoding string link infection. An operad $\mathcal{O}$ consists of spaces $\mathcal{O}(n)$ of $n$-ary operations for all $n\in \mathbb{N}$. Roughly, an operad acts on a space $X$ if each $\mathcal{O}(n)$ can parametrize ways of multiplying $n$ elements in $X$. (We provide thorough definitions in Section \ref{Operads}.) A colored operad arises when different types of inputs must be treated differently. In our case, we have to treat string links with different numbers of components differently, so the colors in our colored operad are the natural numbers. This theorem is proven as Theorem \ref{DefnThm} and Proposition \ref{CommRelations}.
\begin{T1}
There is a colored operad $\mathcal{I}$ which encodes the infection operation and acts on spaces of string links $\mathcal{L}_c$ for $c=1,2,3,...$.
\begin{itemize}
\item
When restricting to the color 1, the (ordinary) operad $\mathcal{I}_{\{1\}}$ which we recover is Budney's splicing operad, and the action of $\mathcal{I}_{\{1\}}$ on $\mathcal{K}$ is the same as Budney's splicing operad action.
\item
For any $c$, the operad $\mathcal{I}_{\{c\}}$ obtained by restricting to the color $c$ admits a map from the little intervals operad $\mathcal{C}_1$. The resulting $\mathcal{C}_1$-action on $\mathcal{L}_c$ encodes the operation of stacking string links.
\item
On the level of $\pi_0$, our infection operad encodes all the relations in the whole 2-string link monoid.
\end{itemize}
\end{T1}
We then use our colored operad to decompose part of the space of string links. We rely on an analogue of prime decomposition for 2-string links proven in our joint work with R Blair \cite{StringLinkMonoid}, so we must restrict to $c=2$.
We consider a ``stacking operad'' $\mathcal{I}_\#$, which is a suboperad of $\mathcal{I}_{\{2\}}$ and which is homeomorphic to the little intervals operad. This operad simply encodes the operation of stacking 2-string links in $I \times D^2$, with the little intervals acting in the $I$ factor.
The theorem below is proven as Theorem \ref{DecompThm}.
\begin{T2}
Let $\pi_0 \mathcal{S}_2$ denote the submonoid of $\pi_0 \mathcal{L}_2$ generated by those prime 2-string links which are not central. (By \cite{StringLinkMonoid}, this monoid is free.) Let $\mathcal{S}_2$ be the subspace of $\mathcal{L}_2$ consisting of the path components of $\mathcal{L}_2$ that are in $\pi_0 \mathcal{S}_2$.
Then $\mathcal{S}_2$ is freely generated over the stacking suboperad $\mathcal{I}_\#$; the generating space is the subspace of $\mathcal{S}_2$ consisting of those components which correspond to prime string links.
\end{T2}
\subsection{Organization of the paper}
In Section \ref{Infection}, we review the definition of string link infection.
In Section \ref{Operads}, we review the definitions of an operad and the particular example of the little cubes operad. We then give the more general definition of a colored operad.
In Section \ref{Budney}, we review Budney's operad actions on the space of knots. This includes his action of the little 2-cubes operad, as well as the action of his splicing operad.
In Section \ref{InfectionOperad}, we define our colored operad for infection and prove Theorem 1. We make some remarks about our operad related to pure braids and rational tangles, and we briefly discuss a generalization to embedding spaces of more general manifolds.
In Section \ref{Decomp}, we focus on the space of 2-string links. We prove Theorem 2, which decomposes part of the space of 2-string links in terms of a suboperad of our infection colored operad. We conclude with several other statements about the homotopy type of certain components of the space of 2-string links. \\
\textbf{Notation:}
\begin{itemize}
\item
$\coprod_c X$ means $\underset{\mbox{$c$ times}}{\underbrace{X \sqcup ... \sqcup X}}$
\item
$f|A$ means the restriction of $f$ to $A$
\item
$\overline{X}$ denotes the closure of $X$; $\overset{\circ}{X}$ denotes the interior of $X$
\item
$[a]$ denotes the equivalence class represented by an element $a$; $[a_1,...,a_n]$ denotes the equivalence class of a tuple $(a_1,...,a_n)$.
\end{itemize}
\subsection{Acknowledgments}
The authors thank Tom Goodwillie for useful comments and conversations. They thank Ryan Budney for useful explanations and especially for his work which inspired this project. They thank Ryan Blair for useful conversations and for invaluable contributions in their joint work with him, on which Theorem 2 depends. They thank a referee for a careful reading of the paper and useful comments. They thank Connie Leidy for suggesting the rough idea of this project. They thank Slava Krushkal for suggesting terminology and for pointing out the work of Habiro. Finally, they thank David White for introducing the authors to each other. The second author was supported partly by NSF grant DMS-1004610 and partly by a PIMS Postdoctoral Fellowship.
\section{Infection}
\label{Infection}
Infection is an operation which takes a link with additional decoration together with a string link and produces a link. This operation is a generalization of splicing which in turn is a generalization of the connect-sum operation. Infection has been called multi-infection by Cochran, Friedl, and Teichner \cite{CochranFriedlTeichner}, infection by a string link by Cochran \cite{Cochran2004} and tangle sum by Cochran and Orr \cite{CochranOrr1994}. Special cases of this construction have been used extensively since the late 1970s, for example in the work of Gilmer \cite{Gilmer1983}; Livingston \cite{Livingston2005}; Cochran, Orr, and Teichner \cite{CochranOrrTeichner2003, CochranOrrTeichner2004}; Harvey \cite{Harvey2008}; and Cimasoni \cite{Cimasoni2006}. The operad we define in this paper will encode a slightly more general operation than the infection operation that has been defined in previous literature. This section is meant to
inform the reader of the definition in previous literature and
provide motivation for the infection operad.
\subsection{Splicing}
Consider a link $R \subset S^3$ and a closed curve $\eta \subset S^3 \setminus R$ such that $\eta$ bounds an embedded disk in $S^3$ ($\eta$ is unknotted in $S^3$) which intersects the link components transversely. Given a knot $K$, one can create a new link $R_{\eta}(K)$, with the same number of components as $R$, called the result of splicing $R$ by $K$ at $\eta$. Informally, the splicing process is defined by taking the disk in $S^3$ bounded by $\eta$; cutting $R$ along the disk; grabbing the cut strands; tying them into the knot $K$ (with no twisting among the strands); and regluing. The result of splicing for a particular $R$, $\eta$, and $K$ is shown in Figure \ref{splicing}. Note that if $\eta$ simply links one strand of $R$, then the result of the splicing is isotopic to the connect-sum of $R$ and $K$.
\begin{figure}[h!]
\begin{picture}(420,120)
\put(0,20){\includegraphics[scale=.8]{infection_knot.pdf}}
\put(45,0){$R$}
\put(100,63){$\eta$}
\put(145,32){$K$}
\put(301,0){$R_{\eta}(K)$}
\end{picture}
\caption{An example of the splicing operation.}
\label{splicing}
\end{figure}
Formally, $R_{\eta}(K)$ is arrived at by first removing a tubular neighborhood, $N(\eta)$, of $\eta$ from
$S^3$. Note $S^3 \setminus N(\eta) \subset S^3$ is a solid torus with $R$ embedded in its interior. Let $C_K$ denote the complement in $S^3$ of a tubular neighborhood of $K$.
Since the boundary of $C_K$ is also a torus, one can identify these two manifolds along their boundary.
In order to specify the identification, we use the terminology of meridians and longitudes.
Recall that the meridian of a knot is the simple closed curve, up to ambient isotopy, on the boundary of the complement of the knot which bounds a disk in the closure of the tubular neighborhood of the knot and has +1 linking number with the knot. Also recall that the longitude of a knot is the simple closed curve, up to ambient isotopy, on the boundary of the complement of the knot which has +1 intersection number with the meridian of the knot and has zero linking number with the knot.
The gluing of $S^3 \setminus N(\eta)$ to $C_K$ is done so that
the meridian of the boundary of $S^3 \setminus N(\eta)$
is identified with the meridian of $K$ in the boundary of $C_K$. Note that this process describes a Dehn surgery with surgery coefficient $\infty$ along $K \subset S^3$ where the solid torus glued in is $S^3 \setminus N(\eta)$. Thus, the resulting manifold will be a 3-sphere with a subset of disjoint embedded circles whose union is $R_{\eta}(K)$ (the image of $R$ under this identification).
Although the embedding of $R_\eta(K)$ in $S^3$ depends on the identification of the surgered 3-manifold with $S^3$, its isotopy class is independent of this choice of identification.
\subsection{String link infection}
Although there is a well studied generalization of the connect-sum operation from closed knots to closed links, there is no generalization of splicing by a closed link. There is, however, a generalization of splicing called infection by a string link, which we will now define. See the work of Cochran, Friedl, and Teichner \cite[Section 2.2]{CochranFriedlTeichner} for a thorough reference.
By an \emph{$r$-multi-disk} $\mathbb{D}$ we mean the oriented disk $D^2$ together with $r$ ordered embedded open disks $D_1, \dots, D_r$ (see Figure \ref{Multidisk}). Given a link $L \subset S^3$ we say that an embedding $\varphi \colon\thinspace \mathbb{D} \rightarrow S^3$ of an $r$-multi-disk into $S^3$ is \emph{proper} if the image of the multi-disk, denoted by $\mathcal{D}$, intersects the link components transversely and only in the images of the disks $D_1, \dots, D_r$, as in Figure \ref{Multidisk}. We will denote the boundary curves of $\varphi(D_1), \dots, \varphi(D_r)$ by $\eta_1, \dots, \eta_r$.
\begin{figure}[h!]
\begin{picture}(400,255)
\put(60,0){\includegraphics[scale=1]{MultiDisk.pdf}}
\put(110,240){$\mathbb{D}$}
\put(73,171){$D_1$}
\put(139,160){$D_2$}
\put(270,162){$D_r$}
\put(63,93){$\mathcal{D}$}
\put(70,35){$\eta_1$}
\put(300,35){$\eta_r$}
\end{picture}
\caption{An $r$-multi-disk and a properly embedded multi-disk}
\label{Multidisk}
\end{figure}
Suppose $R \subset S^3$ is a link, $\mathcal{D} \subset S^3$ is the image of a properly embedded $r$-multi-disk, and $L$ is an $r$-component string link. Then informally,
the infection of $R$ by $L$ at $\mathcal{D}$, denoted by $R_{\mathcal{D}}(L)$, is the link obtained by tying the $r$ collections of strands of $R$ that intersect the disks $\varphi(D_1), \dots, \varphi(D_r)$ into the pattern of the string link $L$: the strands linked by $\eta_i$ are identified with the $i^\text{th}$ component of $L$, so that the $i^\text{th}$ collection of strands becomes parallel copies of the $i^\text{th}$ component of $L$. Figure \ref{infection} shows an example of this operation.
We now define this operation formally.
Given a string link $L \colon\thinspace \coprod_r I \hookrightarrow I \times D^2$, let $C_L$ denote the complement of a tubular neighborhood of (the image of) $L$ in $I \times D^2$. In Figure \ref{stringlinkwithcomplement} an example of a string link is shown with its complement to the right. The meridian of a component of a string link is the simple closed curve, up to ambient isotopy, on the $\partial D^2 \times I$ boundary of the closure of the tubular neighborhood of the component which bounds a disk and has +1 linking number with the component. We call the set of such meridians the meridians of the string link. The longitude of a component of a string link is a properly embedded line segment $f\colon\thinspace I \rightarrow \partial D^2 \times I$, up to ambient isotopy, on the $\partial D^2 \times I$ boundary of the closure of the tubular neighborhood of the component; it is required to have +1 intersection number with the meridian of that component, to have zero linking number with that component, and to satisfy $f(0) = (1,0) \in \partial D^2 \times \{0\}$ and $f(1) = (1,1) \in \partial D^2 \times \{1\}$. We call the set of such longitudes the longitudes of the string link. In Figure \ref{stringlinkwithcomplement} the meridians, $\mu_i$, and longitudes, $\ell_i$, are shown on the boundary of the complement. Note that the boundary of the complement of any $r$-component string link is homeomorphic to a genus-$r$ orientable surface.
\begin{figure}[h!]
\begin{picture}(300,135)
\put(43,0){\includegraphics[scale=1]{stringlinkwithcomplement.pdf}}
\end{picture}
\put(-27,25){$\footnotesize{\ell_1}$}
\put(-54,85){$\footnotesize{\mu_1}$}
\put(-35,114){$\footnotesize{\mu_2}$}
\put(3,85){$\footnotesize{\ell_2}$}
\caption{A string link and its complement.}
\label{stringlinkwithcomplement}
\end{figure}
Let $R \subset S^3$ be a link, and let $L\colon\thinspace \coprod_r I \hookrightarrow I \times D^2$ be a string link.
Fix a proper embedding of a thickened $r$-multidisk $\mathcal{D} \times I$ in $S^3 \setminus R$.
Formally, the infection of $R$ by $L$ at $\mathcal{D}$ is obtained by removing $(\mathcal{D} \setminus \sqcup_i \varphi(D_i)) \times I$ from $S^3$ and gluing in the complement of $L$. Note that $(\mathcal{D} \setminus \sqcup_i \varphi(D_i))\times I$ is the complement of an $r$-component trivial string link $T$ (see Figure \ref{infection}), and thus the boundary of $S^3 \setminus ((\mathcal{D} \setminus \sqcup_i \varphi(D_i))\times I)$ is a genus-$r$ orientable surface. One identifies this boundary with the boundary $\partial C_L$ of the complement of $L$ as follows: $\partial \mathcal{D} \times I$ is identified with the $\partial D^2 \times I$ subset of the boundary of $C_L$, where $\partial D^2 \times I$ is taken to be a subset of the boundary of the $D^2 \times I$ in which $L$ lives; $(\mathcal{D} \setminus \sqcup_i \varphi(D_i)) \times \{0,1\}$ is identified with $(D^2 \setminus N(L)) \times \{0,1\}$; and the $D^2 \times I$ components of the closures of $N(T)$ and $N(L)$ are identified so that the meridians and longitudes of $L$ are identified with the meridians and longitudes of $T$.
We claim that the resulting manifold is $S^3$ containing a link $R_{\mathcal{D}}(L)$ (the image of $R$ under this identification). The resulting manifold is homeomorphic to $S^3$ because
\begin{align*}
& S^3 \setminus Int((\mathcal{D} \setminus \sqcup_i \varphi(D_i)) \times I) \,\,\, \cup \,\,\, (D^2 \times I) \setminus N(L) \\
= &(S^3 \setminus \mathcal{D} \times I) \,\,\, \cup \,\,\, \left( (\sqcup_i(\varphi(D_i) \times I)) \,\,\, \cup \,\,\, (D^2\times I)\setminus N(L)\right) \\
\cong & S^3
\end{align*}
where the last homeomorphism follows from the observation that the previous space is the union of two 3-balls.
Again, the specific embedding of $R_{\mathcal{D}}(L)$ will depend on the choice of homeomorphism, but all choices will yield isotopic embeddings.
\begin{figure}[h!]
\begin{picture}(450,300)
\put(80,0){\includegraphics[scale=.7]{infectionexample.pdf}}
\put(100,183){$R$}
\put(129,284){$\mathcal{D}$}
\end{picture}
\caption{Infection of the string link $R$ along $\mathcal{D}$ by the string link $L$ from Figure \ref{stringlinkwithcomplement}.}
\label{infection}
\end{figure}
\section{Operads}
\label{Operads}
We start by reviewing the definitions of an operad $\mathcal{O} (=\{\mathcal{O}(n)\}_{n\in \mathbb{N}})$, and
an action of $\mathcal{O}$ on $X$
(a.k.a.~an algebra $X$ over $\mathcal{O}$). We then proceed to colored operads. Technically, the definition of a colored operad subsumes the definition of an ordinary operad, but for ease of readability, we first present ordinary operads. Readers familiar with these concepts may safely skip this section.
\subsection{Operads}
Operads can be defined in any symmetric monoidal category, but we will only consider the category of topological spaces. In this case, the rough idea is as follows. An algebra $X$ over an operad $\mathcal{O}$ is a space with a multiplication $X\times X\rightarrow X$, and the space $\mathcal{O}(n)$ parametrizes ways of multiplying $n$ elements of $X$, i.e., maps $X^n\rightarrow X$. In other words, $\mathcal{O}(n)$ captures homotopies between different ways of multiplying the elements, as well as homotopies between these homotopies, etc. Thus an element of $\mathcal{O}(n)$ is an operation with $n$ inputs and one output. This can be visualized as a tree with $n$ leaves and a root, and in fact, free operads are certain spaces of decorated trees.
For a more detailed introduction, the reader may wish to consult the book of Markl, Shnider, and Stasheff \cite{MarklShniderStasheff}, May's book \cite{goils}, or the expository paper of McClure and Smith \cite{MSIntro}.
\begin{definition}
An operad $\mathcal{O}$ (in the category of spaces) consists of
\begin{itemize}
\item
a space $\mathcal{O}(n)$ for each $n=1,2,...$ with an action of the symmetric group $\Sigma_n$
\item
structure maps
\begin{equation}
\label{OperadMaps}
\mathcal{O}(n) \times \mathcal{O}(k_1) \times ... \times \mathcal{O}(k_n) \rightarrow \mathcal{O}(k_1 + ... + k_n)
\end{equation}
\end{itemize}
such that the following three conditions are satisfied:
\begin{itemize}
\item
\emph{Associativity}: the following diagram commutes:
\[
\xymatrix{
\mathcal{O}(n) \times \prod_{i=1}^n \mathcal{O}(k_i) \times \prod_{i=1}^n\prod_{j=1}^{k_i} \mathcal{O}(\ell_{i,j}) \ar[r] \ar[d] & \mathcal{O}(n) \times \prod_{i=1}^n \mathcal{O}(\sum_{j=1}^{k_i} \ell_{i,j}) \ar[d] \\
\mathcal{O}(k_1+...+k_n) \times \prod_{i=1}^n \prod_{j=1}^{k_i} \mathcal{O}(\ell_{i,j}) \ar[r] & \mathcal{O}(\sum_{i=1}^n\sum_{j=1}^{k_i} \ell_{i,j})
}
\]
\item
\emph{Symmetry}:
Let $\sigma \times \sigma$ denote the diagonal action on the product $\mathcal{O}(n) \times (\mathcal{O}(k_1)\times...\times \mathcal{O}(k_n))$
coming from the actions of $\Sigma_n$ on $\mathcal{O}(n)$ and on $\mathcal{O}(k_1)\times...\times \mathcal{O}(k_n)$ by permuting the factors. For a partition $\vec{k}=(k_1,...,k_n)$ of a natural number $k_1+...+k_n$, let $\sigma_{\vec{k}} \in \Sigma_{k_1+...+k_n}$ denote the ``block permutation'' induced by $\sigma$ and the partition $\vec{k}$.
We require that the following composition agrees with the map (\ref{OperadMaps}):
\[
\xymatrix{
\mathcal{O}(n) \times \prod_{i=1}^n \mathcal{O}(k_i) \ar[r]^{\sigma \times \sigma} & \mathcal{O}(n) \times \prod_{i=1}^n \mathcal{O}(k_{\sigma(i)}) \ar[r] & \mathcal{O}(\sum_{i=1}^n k_i) \ar[r]^{\sigma_{\vec{k}}^{-1}}& \mathcal{O}(\sum_{i=1}^n k_i)
}
\]
We also require that for $\tau_i \in \Sigma_{k_i}$ for $i=1,...,n$, the following diagram commutes:
\[
\xymatrix{
\mathcal{O}(n) \times \prod_{i=1}^n \mathcal{O}(k_i) \ar[d]_{id \times \tau_1 \times... \times \tau_n} \ar[r] & \mathcal{O}(\sum_{i=1}^n k_i) \ar[d]^{\tau_1 \times ... \times \tau_n} \\
\mathcal{O}(n) \times \prod_{i=1}^n \mathcal{O}(k_i) \ar[r] & \mathcal{O}(\sum_{i=1}^n k_i)
}
\]
\item
\emph{Identity}:
There exists an element $1 \in \mathcal{O}(1)$ (i.e., a map $\ast \rightarrow \mathcal{O}(1)$) which induces the identity on $\mathcal{O}(k)$ via
\begin{align*}
\mathcal{O}(1) \times \mathcal{O}(k) &\rightarrow \mathcal{O}(k) \\
(1, L) &\mapsto L
\end{align*}
and which induces the identity on $\mathcal{O}(n)$ via
\begin{align*}
\mathcal{O}(n) \times \mathcal{O}(1) \times \mathcal{O}(1) \times ... \times \mathcal{O}(1) & \rightarrow \mathcal{O}(n) \\
(L,1,1,...,1) &\mapsto L.
\end{align*}
\qed
\end{itemize}
\end{definition}
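The block permutation $\sigma_{\vec{k}}$ appearing in the symmetry axiom is easy to compute explicitly. The following Python sketch (the function name and the convention for representing permutations are ours, not from the literature) realizes it by its action on a list of $k_1+\dots+k_n$ items: the $n$ consecutive blocks are rearranged, each keeping its internal order.

```python
def apply_block_permutation(sigma, sizes, items):
    """Act by the block permutation induced by sigma and the partition
    'sizes'.  Here sigma is a permutation of range(n) given as a list,
    with sigma[j] the index of the block placed in position j, and
    sizes[i] is the block size k_i.  The items are cut into consecutive
    blocks of the given sizes and the blocks are rearranged, each block
    keeping its internal order."""
    assert len(items) == sum(sizes) and sorted(sigma) == list(range(len(sizes)))
    blocks, start = [], 0
    for k in sizes:
        blocks.append(items[start:start + k])
        start += k
    return [x for j in sigma for x in blocks[j]]

# Swapping two blocks of sizes 2 and 1:
print(apply_block_permutation([1, 0], [2, 1], ['a', 'b', 'c']))  # ['c', 'a', 'b']
```

This is exactly the shuffling needed to compare $\mathcal{O}(k_1+\dots+k_n)$ before and after permuting the $n$ inner factors.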
Some authors define the structure maps via $\circ_i$ operations, i.e., plugging in just one operation into the $i^\mathrm{th}$ input, as opposed to $n$ operations into all $n$ inputs. These $\circ_i$ maps can be recovered from the above definition by setting $k_j=1$ for all $j\neq i$ and using the identity element in $\mathcal{O}(1)$.
\begin{definition}
Given an operad $\mathcal{O}$, an
\emph{action of $\mathcal{O}$ on $X$}
(also called an \emph{algebra $X$ over $\mathcal{O}$}) is a space $X$ together with maps
\[
\mathcal{O}(n) \times X^n \rightarrow X
\]
such that the following conditions are satisfied:
\begin{itemize}
\item
\emph{Associativity}:
The following diagram commutes:
\[
\xymatrix{
\mathcal{O}(n) \times \mathcal{O}(k_1)\times ... \times \mathcal{O}(k_n) \times X^{k_1+...+k_n} \ar[r] \ar[d] & \mathcal{O}(n) \times X^n \ar[d] \\
\mathcal{O}(k_1+...+k_n) \times X^{k_1+...+k_n} \ar[r] & X
}
\]
\item
\emph{Symmetry}:
For each $n$, the action map is $\Sigma_n$-invariant, where $\Sigma_n$ acts on $\mathcal{O}(n)$ by definition, on $X^n$ by permuting the factors, and on the product diagonally. In other words, the action map descends to a map
\[
\mathcal{O}(n) \times_{\Sigma_n} X^n \rightarrow X
\]
\item
\emph{Identity}:
The identity element $1\in \mathcal{O}(1)$ together with the map
\[
\mathcal{O}(1) \times X \rightarrow X
\]
induce the identity map on $X$, i.e., the map takes $(1,x)\mapsto x$.
\end{itemize}
\qed
\end{definition}
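Every space $X$ carries a tautological example: the endomorphism operad $\mathrm{End}(X)$, with $\mathrm{End}(X)(n)$ the maps $X^n \to X$, and an action of $\mathcal{O}$ on $X$ amounts to a map of operads $\mathcal{O} \to \mathrm{End}(X)$. A small Python sketch of the endomorphism structure maps (the names are ours, and reading arities off positional parameters is an illustrative shortcut, not a general mechanism):

```python
def end_compose(f, gs):
    """Structure map of the endomorphism operad End(X): plug the
    k_i-ary operations gs = [g_1, ..., g_n] into the n-ary f, producing
    an operation of arity k_1 + ... + k_n."""
    arities = [g.__code__.co_argcount for g in gs]  # each g has fixed positional args

    def composed(*xs):
        assert len(xs) == sum(arities)
        args, start = [], 0
        for g, k in zip(gs, arities):
            args.append(g(*xs[start:start + k]))  # feed g its block of inputs
            start += k
        return f(*args)
    return composed

# Plugging two binary additions into a binary addition gives a 4-ary sum:
add = lambda x, y: x + y
print(end_compose(add, [add, add])(1, 2, 3, 4))  # 10
```

The associativity and identity axioms for an action are precisely the statement that this plugging-in is compatible with composing the operations themselves.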
\subsection{The little cubes operad}
Our infection colored operad extends Budney's splicing operad, which in turn was an extension of Budney's action of the little 2-cubes operad on the space of long knots. Thus the little 2-cubes operad is of interest here.
\begin{definition}
The \emph{little $j$-cubes operad} $\mathcal{C}_j$ is the operad with $\mathcal{C}_j(n)$ the space of maps
\[
(L_1,...,L_n) \colon\thinspace \coprod_n I^j \hookrightarrow I^j
\]
which are embeddings when restricted to the interiors of the $I^j$ and which are increasing affine-linear maps in each coordinate. The structure maps are given by composition:
\begin{align*}
\mathcal{C}_j(n) \times \mathcal{C}_j(k_1)\times ... \times \mathcal{C}_j(k_n) &\rightarrow \mathcal{C}_j(k_1+...+k_n) \\
(L_1,...,L_n), (L^1_1,...,L^1_{k_1}),...,(L^n_1,...,L^n_{k_n}) &\mapsto (L_1 \circ (L^1_1,...,L^1_{k_1}),..., L_n \circ (L^n_1,...,L^n_{k_n}))
\end{align*}
\qed
\end{definition}
Note that for all $j\geq 2$, the multiplication induced by choosing (any) element in $\mathcal{C}_j(2)$ is commutative \emph{up to homotopy}, which can be seen via the same picture that shows that $\pi_j X$ is abelian for $j\geq 2$.
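For $j=1$ the structure maps are elementary affine algebra, since an increasing affine map $I \to I$ is determined by a scale and a shift. A minimal Python sketch of composition in $\mathcal{C}_1$ (the representation of a little interval as a pair and the function name are ours):

```python
def compose_little_intervals(outer, inners):
    """Structure map of the little intervals operad C_1.  A little
    interval is the increasing affine map t -> scale*t + shift, stored
    as a pair (scale, shift) with scale > 0.  'outer' is an n-tuple of
    little intervals and 'inners' is a list of n tuples of little
    intervals; each inner tuple is reparametrized into the
    corresponding outer interval."""
    result = []
    for (s, b), inner in zip(outer, inners):
        for (s2, b2) in inner:
            # (s*t + b) after (s2*t + b2) is (s*s2)*t + (s*b2 + b).
            result.append((s * s2, s * b2 + b))
    return result

# Two disjoint halves of I = [-1, 1], each receiving the identity interval:
outer = [(0.5, -0.5), (0.5, 0.5)]
print(compose_little_intervals(outer, [[(1.0, 0.0)], [(1.0, 0.0)]]))
# [(0.5, -0.5), (0.5, 0.5)]
```

Disjointness of the interiors of the images is preserved by this composition, since each inner configuration is carried into its own outer interval.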
\subsection{Colored Operads}
Now we present the precise definitions of a colored operad and an action of a colored operad on a space. This generalization of an operad is necessary to generalize Budney's operad from splicing of knots to infection by links.
\begin{definition}
\label{ColoredOperad}
A \emph{colored operad} $\mathcal{O} = (\mathcal{O}, C)$ (in the category of spaces) consists of
\begin{itemize}
\item
a set of colors $C$
\item
a space $\mathcal{O}(c_1,...,c_n; c)$ for each $(n+1)$-tuple $(c_1,...,c_n,c)\in C$ together with compatible maps $\mathcal{O}(c_1,...,c_n; c)\rightarrow \mathcal{O}(c_{\sigma(1)},...,c_{\sigma(n)}; c)$ for each $\sigma \in \Sigma_n$
\item
(continuous) maps
\[
\mathcal{O}(c_1,...,c_n;c) \times \mathcal{O}(d_{1,1},...,d_{1,k_1}; c_1) \times ... \times \mathcal{O}(d_{n,1},...,d_{n,k_n}; c_n) \rightarrow \mathcal{O}(d_{1,1},...,d_{n,k_n}; c)
\]
\end{itemize}
where the maps satisfy the following three conditions:
\begin{itemize}
\item
\emph{Associativity}: The map below is the same regardless of whether one first applies the structure maps to the first two factors or the last two factors on the left-hand side:
\end{itemize}
\[
\xymatrix{
\mathcal{O}(c_1,...,c_n;c) \times \prod_{i=1}^n \mathcal{O}(d_{i,1},...,d_{i,k_i}; c_i) \times \prod_{i=1}^n \prod_{j=1}^{k_i} \mathcal{O}(e_{i,j,1},..., e_{i,j,\ell_{i,j}}; d_{i,j}) \ar[r] &
\mathcal{O}\negthinspace\left(e_{1,1,1},...,e_{n,k_n,\ell_{n,k_n}}; c\right)
}
\]
\begin{itemize}
\item
\emph{Symmetry}:
The following diagram below commutes. The vertical map is induced by $\sigma$ in both the first factor and the last $n$ factors, and $\sigma_{\vec{k}} \in \Sigma_{k_1+...+k_n}$ is the block permutation induced by $\sigma$ and the partition $(k_1,...,k_n)$.
\[
\xymatrix{
\mathcal{O}(c_1,...,c_n; c) \times \prod_{i=1}^n \mathcal{O}(d_{i,1},...,d_{i,k_i}; c_i) \ar[r] \ar[d]_{\sigma \times \sigma} & \mathcal{O}(d_{1,1},...,d_{n,k_n}; c) \ar[d]^{\sigma_{\vec{k}}} \\
\mathcal{O}(c_{\sigma(1)},...,c_{\sigma(n)}; c) \times \prod_{i=1}^n \mathcal{O}(d_{\sigma(i),1},...,d_{\sigma(i), k_{\sigma(i)}}; c_{\sigma(i)}) \ar[r] &
\mathcal{O}(d_{1,1},...,d_{n,k_n}; c)
}
\]
We also require that, for $\tau_i \in \Sigma_{k_i}$, $i=1,...,n$, the following diagram commutes:
\[
\xymatrix{
\mathcal{O}(c_1,...,c_n; c) \times \prod_{i=1}^n \mathcal{O}(d_{i,1},...,d_{i,k_i}; c_i) \ar[r] \ar[d]_{id \times \tau_1 \times...\times \tau_n} & \mathcal{O}(d_{1,1},...,d_{n,k_n}; c) \ar[d]^{\tau_1 \times... \times \tau_n} \\
\mathcal{O}(c_1,...,c_n; c) \times \prod_{i=1}^n \mathcal{O}(d_{i,1},...,d_{i, k_i}; c_i) \ar[r] &
\mathcal{O}(d_{1,1},...,d_{n,k_n}; c)
}
\]
\item
\emph{Identity}: For every $c\in C$, there is an element $1_c\in \mathcal{O}(c;c)$ which together with
\[
\mathcal{O}(c;c) \times \mathcal{O}(c_1,...,c_n;c) \rightarrow \mathcal{O}(c_1,...,c_n;c)
\]
induces the identity map on $\mathcal{O}(c_1,...,c_n;c)$. We also require that the elements $1_{c_1},...,1_{c_n}$ together with
\[
\mathcal{O}(c_1,...,c_n;c) \times \mathcal{O}(c_1;c_1) \times ...\times \mathcal{O}(c_n;c_n) \rightarrow \mathcal{O}(c_1,...,c_n;c)
\]
induce the identity map on $\mathcal{O}(c_1,...,c_n;c)$.
\end{itemize}
\qed
\end{definition}
The colors $c_1,...,c_n$ can be thought of as the colors of the inputs, while $c$ is the color of the output. A colored operad with $C=\{c\}$ is just an operad, where
$\mathcal{O}(\underset{\mbox{$n$ times}}{\underbrace{c,...,c}};c)$
is $\mathcal{O}(n)$. Sometimes, for brevity, we write ``operad'' to mean ``colored operad.''
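For example, when $C=\{c\}$ the structure maps reduce to the familiar composition maps of a one-colored operad,
\[
\gamma\colon\thinspace \mathcal{O}(n) \times \mathcal{O}(k_1) \times ... \times \mathcal{O}(k_n) \rightarrow \mathcal{O}(k_1+...+k_n),
\]
and the associativity, symmetry, and identity conditions above specialize to the usual operad axioms.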
Note that if we have a colored operad $\mathcal{O}$ with colors $C$ and a subset $C' \subset C$, we can restrict to another colored operad $\mathcal{O}_{C'}$ consisting of just the spaces $\mathcal{O}(c_1,...,c_n;c)$ with $c_i, c \in C'$ (and the same structure maps as $\mathcal{O}$).
\begin{definition}
Given a colored operad $\mathcal{O}=(\mathcal{O}, C)$,
an \emph{action} of $\mathcal{O}$ on $A$
(also called an $\mathcal{O}$-\emph{algebra} $A$) consists of a collection of spaces $\{ A_c\}_{c\in C}$ together with maps
\begin{equation}
\label{ColoredOperadAction}
\mathcal{O}(c_1,...,c_n; c) \times A_{c_1} \times ... \times A_{c_n} \rightarrow A_c
\end{equation}
satisfying the following conditions:
\begin{itemize}
\item
\emph{Associativity}: The following diagram commutes:
\[
\xymatrix{
\mathcal{O}(c_1,...,c_n;c) \times \prod_{i=1}^n \mathcal{O}(d_{i,1},...,d_{i,k_i}; c_i) \times \prod_{i=1}^n \prod_{j=1}^{k_i} A_{d_{i,j}}
\ar[r] \ar[d] &
\mathcal{O}(c_1,...,c_n; c) \times \prod_{i=1}^n A_{c_i}
\ar[d] \\
\mathcal{O}(d_{1,1},...,d_{n,k_n}; c) \times \prod_{i=1}^n \prod_{j=1}^{k_i} A_{d_{i,j}}
\ar[r] & A_c}
\]
\item
\emph{Symmetry}: For each $\sigma \in \Sigma_n$, the following diagram commutes, where the vertical map is induced by the $\Sigma_n$-action and permuting the factors of $A$:
\[
\xymatrix{
\mathcal{O}(c_1,...,c_n; c) \times A_{c_1} \times ... \times A_{c_n} \ar[r] \ar[d] & A_c \\
\mathcal{O}(c_{\sigma(1)},...,c_{\sigma(n)}; c) \times A_{c_{\sigma(1)}} \times ... \times A_{c_{\sigma(n)}} \ar[ur] & }
\]
\item
\emph{Identity}:
The element $1_c\in \mathcal{O}(c;c)$ together with the map $\mathcal{O}(c;c) \times A_c \rightarrow A_c$ induces the identity on $A_c$.
\end{itemize}
\qed
\end{definition}
If we have a subset $C' \subset C$, the restriction colored operad $\mathcal{O}_{C'}$ acts on the collection of spaces $\{A_c\}_{c\in C'}$.
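Equivalently (though we will not need this reformulation), an action of $\mathcal{O}$ on the collection $\{A_c\}_{c\in C}$ is the same as a map of colored operads from $\mathcal{O}$ to the endomorphism colored operad of $\{A_c\}$, given by
\[
\mathrm{End}(A)(c_1,...,c_n; c) := \mathrm{Map}(A_{c_1} \times ... \times A_{c_n}, A_c)
\]
with structure maps defined by composition of functions.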
\begin{example}
A \emph{planar algebra} as in the work of Jones \cite{JonesPlanarAlg} is an algebra over a certain colored operad. In fact, planar diagrams form a colored operad called the \emph{planar operad} $\mathcal P$. The colors $C$ are the even natural numbers, and $\mathcal{P}(c_1,...,c_n; c)$ is the space of diagrams with $n$ holes, $c_i$ strands incident to the $i$-th boundary circle, and $c$ strands incident to the outer boundary circle. If $A_c$ denotes the space of tangle diagrams in $D^2$ with $c$ endpoints on $\partial D^2$, then the collection $\{ A_c \}_{c\in C}$ is an example of an algebra over $\mathcal{P}$ (a.k.a.~a planar algebra).
\end{example}
\section{A review of Budney's operad actions}
\label{Budney}
\subsection{Budney's 2-cubes action}
The operation of connect-sum of knots is always well defined on isotopy classes of knots. If one considers \emph{long} knots, one can further define connect-sum (or stacking) of knots themselves, rather than just the isotopy classes. That is, there is a well defined map
\[
\#\colon\thinspace \mathcal{K} \times \mathcal{K} \rightarrow \mathcal{K}
\]
where $\mathcal{K}=\mathrm{Emb}(\mathbb{R}, \mathbb{R}\times D^2)$ is the space of long knots. If one descends to isotopy classes, this operation is commutative, i.e., $\#$ is \emph{homotopy-commutative}. See Budney's paper \cite[p.~4, Figure 2]{Budney} for a beautiful picture of the homotopies involved. This picture suggests that one can parametrize the operation $\#$ by $S^1 \simeq \mathcal{C}_2(2)$. Thus it suggests that the little $2$-cubes operad $\mathcal{C}_2$ acts on $\mathcal{K}$.
Budney succeeded in constructing such a 2-cubes action, but to do so, he had to consider a space of \emph{fat} long knots
\[
\mathrm{EC}(1,D^2) := \{f\colon\thinspace \mathbb{R}^1 \times D^2 \hookrightarrow \mathbb{R}^1 \times D^2 | \>\> \mathrm{supp}(f)\subset I\times D^2\}
\]
where $\mathrm{supp}(f)$ is defined as the closure of $\{x \in \mathbb{R}^1 \times D^2 | f(x) \neq x\}$.
The notation $\mathrm{EC}(1,D^2)$ stands for (self-)\emph{embeddings} of $\mathbb{R}^1 \times D^2$ with \emph{cubical} support.
This space is equivalent to the space of \emph{framed} long knots, but one can restrict to the subspace where the linking number of the curves $f|_{\mathbb{R}\times(0,0)}$ and $f|_{\mathbb{R}\times(0,1)}$ is zero; this subspace is then equivalent to the space of long knots.
The advantage of $\mathrm{EC}(1,D^2)$ is that one can compose elements. In the 2-cubes action on this space, the first coordinate of a cube acts on the $\mathbb{R}$ factor in $\mathbb{R} \times D^2$, while the second coordinate dictates the order of composition of embeddings. Precisely, the action is defined as follows. For one little cube $L$, let $L^y$ be the embedding $I\hookrightarrow I$ given by projecting to the last factor, and let $L^x$ be the embedding $I\hookrightarrow I$ given by projecting to the first factor. Let $\sigma\in \Sigma_n$ be a permutation (thought of as an ordering of $\{1,...,n\}$) such that $L^y_{\sigma(1)}(0) \leq ... \leq L^y_{\sigma(n)}(0)$. The action
\[
\mathcal{C}_2(n) \times \mathrm{EC}(1,D^2)^n \rightarrow \mathrm{EC}(1,D^2)
\]
is given by
\[
(L_1,...,L_n) \cdot (f_1,...,f_n) \mapsto L^x_{\sigma(n)} \circ f_{\sigma(n)} \circ (L^x_{\sigma(n)})^{-1} \circ ... \circ L^x_{\sigma(1)} \circ f_{\sigma(1)} \circ (L^x_{\sigma(1)})^{-1}.
\]
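For example, if $n=2$ and the two cubes are disjoint with $L^y_1(0) \leq L^y_2(0)$, then $\sigma$ is the identity permutation and
\[
(L_1,L_2) \cdot (f_1,f_2) = L^x_2 \circ f_2 \circ (L^x_2)^{-1} \circ L^x_1 \circ f_1 \circ (L^x_1)^{-1}.
\]
That is, $f_1$ and $f_2$ are reparametrized so that their supports lie in $\mathrm{im}(L^x_1) \times D^2$ and $\mathrm{im}(L^x_2) \times D^2$ respectively, and then composed. Since these supports are disjoint, the two factors commute, and sliding the cubes around one another realizes the homotopy-commutativity of $\#$.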
\subsection{The splicing operad}
In the above 2-cubes action, the second coordinate is only used to order the embeddings. Thus instead of the 2-cubes operad, one could consider an operad of ``overlapping intervals'' $\mathcal{C}_1'$. An element in $\mathcal{C}_1'(n)$ is $n$ intervals in the unit interval, not necessarily disjoint, but with an order dictating which interval is above the other when two intervals do overlap. Precisely, an element of $\mathcal{C}_1'(n)$ is an equivalence class $(L_1,...,L_n, \sigma)$ where each $L_i$ is an embedding $I\hookrightarrow I$ and where $\sigma\in \Sigma_n$. Elements $(L_1,...,L_n, \sigma)$ and $(L_1',...,L_n', \sigma')$ are equivalent if $L_i=L_i'$ for all $i$ and if whenever $L_i$ and $L_j$ intersect, $\sigma^{-1}(i) \leq \sigma^{-1}(j) \Leftrightarrow (\sigma')^{-1}(i) \leq (\sigma')^{-1}(j)$. It is not hard to see what the structure maps for the operad are (and they are given in Budney's paper \cite{BudneySplicing}). Budney then easily recasts his 2-cubes action as an action of the overlapping intervals operad $\mathcal{C}_1'$.
The splicing operad $\mathcal{SC}_1^{D^2}$ (which we abbreviate for now as $\mathcal{SC}$) is formally similar to the overlapping intervals operad, in that $\mathcal{SC}(n)$ consists of equivalence classes of elements $(L_0,L_1,...,L_n, \sigma)$ with the same equivalence relation as for $\mathcal{C}_1'$. In the splicing operad, however, $L_0$ is in $\mathrm{EC}(1,D^2)$, $L_1,...,L_n$ are embeddings $L_i\colon\thinspace I\times D^2 \hookrightarrow I \times D^2$, and all the $L_i$ are required to satisfy a ``continuity constraint,'' as follows. One considers $\sigma \in \Sigma_n$ as an element of $\Sigma_{n+1}=\mathrm{Aut}\{0,....,n\}$ which fixes 0. If $\sigma^{-1}(i) < \sigma^{-1}(k)$ one can think of $L_i$ as inner (or first in order of composition) with respect to $L_k$. One wants the ``round boundary'' of $L_k$ not to touch $L_i$, but for the operad to have an identity element, one needs to allow for $L_k$ to be flush around $L_i$. The precise requirement needed is that for $0 \leq \sigma^{-1}(i) < \sigma^{-1} (k)$
\[
\overline{\mathrm{im}\> L_i \setminus \mathrm{im}\> L_k} \cap L_k(\overset{\circ}{I} \times \partial D^2)= \emptyset.
\]
Note that $\mathcal{SC}$ is a much ``bigger'' operad than $\mathcal{C}_1'$. One can think of $L_0$ as the ``starting (thickened long) knot'' for the splicing operation and of the other $L_i$ as $n$ ``hockey pucks'' with which one grabs $L_0$ and ties up into $n$ knots. This gives a map
\[
\mathcal{SC}(n) \times \mathcal{K}^n \rightarrow \mathcal{K}
\]
which will define the action of the splicing operad on $\mathcal{K}$. To fully construct $\mathcal{SC}$ as an operad, one needs the operad structure maps, which also come from the map above. Roughly speaking, the structure maps are as follows. Given one splicing diagram with $n$ pucks and $n$ other splicing diagrams each with $k_i$ pucks ($i=1,...,n$), put the $i^\mathrm{th}$ splicing diagram into the $i^\mathrm{th}$ puck by composing the ``starting knots'' $L_0$ and ``taking the pucks along for the ride.'' For a precise definition and pictures, the reader may either consult \cite{BudneySplicing} or read the next section, which closely follows Budney's construction and subsumes the splicing operad.
\section{The infection colored operad}
\label{InfectionOperad}
\begin{definition}
\label{TrivialStringLink}
Fix for each $c=1,2,3,...$ a trivial $c$-component fat string link
\[
i_c\colon\thinspace \coprod_c I\times D^2 \hookrightarrow I\times D^2
\]
with image denoted $S_c := \mathrm{im}\> (i_c) \subset I\times D^2$.
\end{definition}
We will be more concerned with this image of the fixed trivial fat string link rather than the embedding itself.
A convenient way of choosing an $i_c$ is to fix an embedding $\coprod_c D^2 \hookrightarrow D^2$ and then take the product with the identity map on $I$. For $c\geq 2$, we choose an embedding which takes the centers of the $c$ $D^2$'s to the points $x_1,...,x_c$ from our Definition \ref{StringLink} of string links. Beyond that, we remain agnostic about this fixed embedding. For $c=1$, we choose $i_1$ to be the identity map. This will recover Budney's splicing operad from our colored operad when all the colors are 1.
Now we define the space of $c$-component fat string links to be
\[
\mathrm{FSL}_c := \{ f\colon\thinspace \coprod_c I\times D^2 \hookrightarrow I\times D^2 |\> \mbox{$f$ agrees with $i_c$ on $\partial I \times D^2$}\}.
\]
These are the spaces on which the infection colored operad will act.
An element of $\mathrm{FSL}_3$ is displayed in Figure \ref{fatstringlink}. By our condition on the fixed trivial fat string link, we can restrict $f$ to the cores of the solid cylinders to obtain an ordinary string link $f |_{I \times \{x_1,...,x_c\}}$ as in Definition \ref{StringLink}.
\begin{figure}[h!]
\begin{picture}(216,130)
\put(52,0){\includegraphics[scale=1.3]{fatstringlink.pdf}}
\end{picture}
\caption{A fat string link, or more precisely, an element of $\mathrm{FSL}_3$.}
\label{fatstringlink}
\end{figure}
\subsection{The definition of the infection colored operad}
We now define our colored operad $\mathcal{I}=(\mathcal{I}, C)$. We put $C=\mathbb{N}^+$, so each color $c$ is a positive natural number.
\begin{definition}[\emph{The spaces in the colored operad $\mathcal{I}$}]
\label{InfectionSpaces}
An \emph{infection diagram} is a tuple $(L_0, L_1,...,L_n, \sigma)$ with $L_0 \in \mathrm{FSL}_c$, $\sigma\in \Sigma_n$, and $L_i$ an embedding $L_i\colon\thinspace I\times D^2 \hookrightarrow I \times D^2$ (for $i=1,...,n$) satisfying a certain continuity constraint. The constraint is that if $0 \leq \sigma^{-1}(i) < \sigma^{-1}(k)$, then
\begin{align*}
\overline{L_i(I \times D^2) \setminus L_k(S_{c_k})} \cap L_k(\overset{\circ}{I} \times (D^2\setminus \overset{\circ}{S_{c_k}})) = \emptyset & & (\dagger)
\end{align*}
where $S_{c_k}$ is the image of a fixed trivial string link, as in Definition \ref{TrivialStringLink}.
As in the splicing operad, we think of $\sigma\in \Sigma_n$ as a permutation in $\Sigma_{n+1}=\mathrm{Aut}\{0,1,...,n\}$ which fixes 0.
The space $\mathcal{I}(c_1,...,c_n;c)$ is the space of equivalence classes $[L_0,...,L_n, \sigma]$ of infection diagrams, where $(L_0,...,L_n, \sigma)$ and $(L_0',...,L_n', \sigma')$ are equivalent if $L_i=L_i'$ for all $i$, and if whenever the images of $L_i$ and $L_k$ intersect, $\sigma^{-1}(i) \leq \sigma^{-1}(k)$ if and only if $(\sigma')^{-1}(i) \leq (\sigma')^{-1}(k)$.
\qed
\end{definition}
\begin{figure}[h!]
\begin{picture}(325,190)
\put(21,0){\includegraphics[scale=.8]{infectionDiagram.pdf}}
\put(61,147){$L_0$}
\put(101,10){$L_4$}
\put(119,21.3){$L_1$}
\put(108,93){$L_2$}
\put(218,7){$L_3$}
\put(306,7){$L_5$}
\put(219,161){$L_4(S_3)$}
\put(162,8){$L_1(S_1)$}
\end{picture}
\caption{An infection diagram, or more precisely, an element of
$\mathcal{I}(1,2,2,3,1;3)$.
}
\label{infectiondiagram}
\end{figure}
Informally, the $L_i$ are like the hockey pucks in Budney's splicing operad, and the permutation $\sigma$ is a map that sends the order of composition to the index $i$ of $L_i$. The difference is that instead of re-embedding a hockey puck into itself, we will re-embed the image of $S_{c_i}$, a subspace of thinner inner cylinders, into the puck. Thus we keep track of the image of $S_{c_i}$, and our pucks can be thought of as having cylindrical holes drilled in them, the holes with which we will grab the string link (or long knot) $L_0$. Following a suggestion of V.~Krushkal, we call the restrictions of the $L_i$ to $(I\times D^2) \setminus \overset{\circ}{S_{c_i}}$ ``mufflers'' (motivated by the picture for $c_i=2$).
The generalization of Budney's continuity constraint to the constraint $(\dagger)$ is the key technical ingredient in defining our colored operad. The need for this constraint is explained precisely in Remark \ref{ContinuityConstraintRmk} below. The rough meaning of this condition is that a muffler which acts earlier should be inside a hole of a muffler that acts later; in other words, the ``solid part'' of a higher $L_k$ (which remains after drilling out the trivial string link) should not intersect any part of a lower $L_i$, where ``higher'' and ``lower'' are in the semi-linear ordering determined by $\sigma$. However, we must allow for the possibility of the boundaries of the mufflers intersecting in certain ways. Figure \ref{mufflercrosssection} displays the cross-section of a set of mufflers which satisfy constraint $(\dagger)$.
\begin{figure}[h!]
\begin{picture}(325,190)
\put(80,0){\includegraphics[scale=.8]{muffler_crosssection.pdf}}
\end{picture}
\caption{The cross-section of a set of thirteen mufflers, including seven one-holed mufflers (or hockey pucks), satisfying the constraint $(\dagger)$. Each grey area is the ``forbidden region'' $L_k(\overset{\circ}{I} \times (D^2\setminus \overset{\circ}{S_{c_k}}))$ of the $k^\mathrm{th}$ muffler, i.e., the region where no other muffler may lie.}
\label{mufflercrosssection}
\end{figure}
So far we haven't finished defining the operad, since we haven't defined the structure maps. We start by defining the action on the space of fat string links. Only after that will we define the structure maps and check that they form a colored operad and that the definition below is a colored operad action.
\begin{definition}[\emph{The action of $\mathcal{I}$ on fat string links}]
\label{InfectionAction}
Consider $[L_0,L_1,...,L_n,\sigma] \in \mathcal{I}(c_1,...,c_n;c)$ and fat string links $f_1,...,f_n$ where $f_k\in \mathrm{FSL}_{c_k}$. Let $L_k^{in}$ be the map obtained from $L_k$ by restricting the domain to $S_{c_k}$ and restricting the codomain to its image. We use the shorthand notation
\[
L_k \cdot f_k \quad \mbox{ to denote the map } \quad L_k \circ f_k \circ (L_k^{in})^{-1}\colon\thinspace L_k(S_{c_k}) \rightarrow I \times D^2.
\]
Then we define
\begin{align*}
\mathcal{I}(c_1,...,c_n; c)\times \mathrm{FSL}_{c_1}\times ... \times \mathrm{FSL}_{c_n} &\rightarrow \mathrm{FSL}_c \\
\mbox{by} \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad & \\
([L_0,L_1,...,L_n, \sigma], f_1,...,f_n) & \mapsto (L_{\sigma(n)}\cdot f_{\sigma(n)}) \circ ... \circ (L_{\sigma(1)}\cdot f_{\sigma(1)}) \circ L_0 .
\qed
\end{align*}
\end{definition}
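For example, when $n=1$ this action specializes to the classical infection (or generalized satellite) operation: given $[L_0, L_1, \iota] \in \mathcal{I}(c_1; c)$ (with $\iota$ the unique element of $\Sigma_1$) and $f \in \mathrm{FSL}_{c_1}$,
\[
([L_0, L_1, \iota], f) \mapsto (L_1 \circ f \circ (L_1^{in})^{-1}) \circ L_0,
\]
which modifies $L_0$ by re-routing the strands passing through the holes of the muffler $L_1$ according to the pattern $f$.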
\begin{figure}[p]
\begin{picture}(325,520)
\put(25,10){\includegraphics[scale=.7]{action.pdf}}
\put(39,455){$f_1$}
\put(341,455){$f_2$}
\put(115,382){$f_3$}
\put(55,186){$L_0$}
\put(180,336){$L_1$}
\put(203,215){$L_3$}
\put(275,317){$L_2$}
\put(135,4){$(L_3 \cdot f_3) \circ ... \circ (L_1 \cdot f_1) \circ L_0$}
\put(185,158){$L_1 \cdot f_1$}
\put(205,35){$L_3 \cdot f_3$}
\put(275,135){$L_2 \cdot f_2$}
\end{picture}
\caption{The result of an infection diagram acting on three fat string links, or more precisely a map $\mathcal{I}(1,3,2;2) \times \mathrm{FSL}_1 \times \mathrm{FSL}_3 \times \mathrm{FSL}_2 \rightarrow \mathrm{FSL}_2$. The 2-component fat string link at the bottom is the result of this action. }
\label{action}
\end{figure}
\begin{remark}
\label{ContinuityConstraintRmk}
Strictly speaking, each map $L_{\sigma(k)} \cdot f_{\sigma(k)}$ is only defined on $L_{\sigma(k)}(S_{c_{\sigma(k)}}) = \mathrm{im}\> L_{\sigma(k)}^{in}$, so one might worry whether the above composition is well defined. We claim that the conditions on the support of the $f_{\sigma(k)}$ and the continuity constraint ($\dagger$) guarantee that we can continuously extend each $L_{\sigma(k)} \cdot f_{\sigma(k)}$ by the identity on $\mathrm{im}\> L_0 \setminus \mathrm{im}\> L_{\sigma(k)}^{in}$.
In fact, first write
\[
\partial (\mathrm{im}\> L_{\sigma(k)}^{in}) = L_{\sigma(k)}\left((\partial I \times \coprod_{c_{\sigma(k)}} D^2) \cup (I \times \partial \coprod_{c_{\sigma(k)}} D^2)\right).
\]
Since each $f_{\sigma(k)}$ is the identity on the $\partial I \times \coprod_{c_{\sigma(k)}} D^2$ part of its domain (the ``flat boundary''), the map $L_{\sigma(k)} \cdot f_{\sigma(k)}$ is the identity on the $L_{\sigma(k)}(\partial I \times \coprod_{c_{\sigma(k)}} D^2)$ part of $\mathrm{im}\> L_{\sigma(k)}^{in}$.
The constraint $(\dagger)$ says that
\[
\overline{\mathrm{im}\> L_0 \setminus \mathrm{im}\> L_{\sigma(k)}^{in}} \cap L_{\sigma(k)}(\overset{\circ}{I} \times \partial (\coprod_{c_{\sigma(k)}} D^2) ) = \emptyset,
\]
hence
\[
\overline{\mathrm{im}\> L_0 \setminus \mathrm{im}\> L_{\sigma(k)}^{in}} \cap \mathrm{im}\> L_{\sigma(k)} \subseteq L_{\sigma(k)}(\partial I \times \coprod_{c_{\sigma(k)}} D^2).
\]
So the continuity constraint guarantees that we don't need to worry about extending past the $I \times \partial \coprod D^2$ part of the boundary (the ``round boundary'').
Hence this defines the composition on the whole image of $L_0$.
\qed
\end{remark}
\begin{definition}[\emph{The structure maps in $\mathcal{I}$}]
\label{InfectionMaps}
The structure maps
\begin{equation}
\label{InfectionStructureMap}
\mathcal{I}(c_1,...,c_n;c) \times \mathcal{I}(d_{1,1},...,d_{1,k_1}; c_1) \times ... \times \mathcal{I}(d_{n,1},...,d_{n, k_n}; c_n) \rightarrow \mathcal{I}(d_{1,1},...,d_{n,k_n}; c)
\end{equation}
\[
\left( (J_0,...,J_n,\rho), (L_{1,0},...,L_{1,k_1}, \sigma_1), ..., (L_{n,0},...,L_{n,k_n}, \sigma_n) \right) \mapsto ((J\cdot \vec{L})_0,(J\cdot \vec{L})_{1,1}, ..., (J\cdot \vec{L})_{n,k_n}, \tau)
\]
are defined as follows. (Here $\vec{L}= (L_{1,*},...,L_{n,*})$, which can be thought of as $n$ infection diagrams, and $J\cdot \vec{L}$ is just shorthand for the result on the right-hand side.) The ``starting'' fat string link is
\begin{align*}
(J\cdot \vec{L})_0 & := \left( \bigcirc_{i=1}^n J_{\rho(i)} \cdot L_{\rho(i),0} \right) \circ J_0 \\
&:= (J_{\rho(n)} \circ L_{\rho(n),0} \circ (J_{\rho(n)}^{in})^{-1}) \circ ... \circ (J_{\rho(1)} \circ L_{\rho(1),0} \circ (J_{\rho(1)}^{in})^{-1}) \circ J_0.
\end{align*}
Given $a\in \{1,...,n\}$ and $b\in \{1,...,k_a\}$, the $(a,b)^{\mathrm{th}}$ puck is
\[
(J\cdot \vec{L})_{a,b} := (\bigcirc_{i=\rho^{-1}(a) +1}^n J_{\rho(i)} \cdot L_{\rho(i),0} ) \circ (J_a \circ L_{a,b}).
\]
Finally, the permutation $\tau$ associated to $J\cdot \vec{L}$ is given by
\[
\tau^{-1}(a,b) :=
\tau^{-1} \left(b + \sum_{i=1}^{a-1} k_i \right) :=
\sigma^{-1}_a(b) + \sum_{i=1}^{\rho(a) -1}k_{\rho(i)}.
\]
In other words
\begin{align}
\label{PermutationList}
\tau^{-1} \colon\thinspace (1,1), (1,2),...,(n, k_n) \mapsto (\rho^{-1}(1), \sigma_1^{-1}(1)), (\rho^{-1}(1), \sigma_1^{-1}(2)),..., (\rho^{-1}(n), \sigma_n^{-1}(k_n))
\end{align}
where the set acted on can be thought of as a set of ordered pairs (though not a cartesian product) with a lexicographical ordering as on the left.
\qed
\end{definition}
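For example, if $\rho$ and all the $\sigma_i$ are identity permutations, then
\[
\tau^{-1}(a,b) = b + \sum_{i=1}^{a-1} k_i,
\]
so $\tau$ is the identity permutation and the pucks of the composed diagram are simply ordered lexicographically, as in (\ref{PermutationList}).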
\begin{figure}[h!]
\begin{picture}(325,230)
\put(25,0){\includegraphics[scale=.7]{structuremap.pdf}}
\put(122,165){$J_2$}
\put(168,143){$L_1 \cdot J_1$}
\put(207,23){$L_3 \cdot J_3$}
\put(275,121){$L_2 \cdot J_2$}
\end{picture}
\caption{A slight variation of Figure \ref{action}, using the same $(L_0,L_1,L_2,L_3)$ but replacing the fat string link $f_2$ in Figure \ref{action} by the infection diagram $J_2$ shown above, gives an example of the operad structure maps.
The infection diagrams $J_1$ and $J_3$ have zero mufflers, and their 0-th components are respectively $f_1$ and $f_3$.
Thus the picture above is the image of $((L_0,...,L_3),J_1,J_2,J_3)$ under the structure map
$\mathcal{I}(1,3,2; 2) \times \mathcal{I}(\emptyset; 1) \times \mathcal{I}(1;3) \times \mathcal{I}(\emptyset; 2) \rightarrow \mathcal{I}(1; 2)$.}
\label{structure}
\end{figure}
Notice that the action maps are just special cases of the structure maps. In fact, $\mathrm{FSL}_c$ is precisely $\mathcal{I}(\emptyset; c)$, where $\emptyset$ denotes the empty tuple of positive integers (i.e., the sequence with zero terms). Thus each action map can be written as
\[
\mathcal{I}(c_1,...,c_n; c) \times \mathcal{I}(\emptyset; c_1) \times ... \times \mathcal{I}(\emptyset; c_n) \rightarrow \mathcal{I}(\emptyset; c).
\]
Hence, a slight modification of Figure \ref{action} produces a picture of a structure map (that is not an action map), as in Figure \ref{structure}.
\begin{theorem}
\label{DefnThm}
\begin{itemize}
\item[(A)]
The spaces and maps in Definitions \ref{InfectionSpaces} and \ref{InfectionMaps} make $\mathcal{I}$ a colored operad with an action on the space of fat string links given by Definition \ref{InfectionAction}.
\item[(B)]
When restricting to the single color $c=1$, one recovers Budney's splicing operad $\mathcal{SC}_1^{D^2}$. Thus $\mathcal{C}_2$ maps to this part of the colored operad.
\item[(C)]
There is a map of the little intervals operad $\mathcal{C}_1$ to the restriction $\mathcal{I}_{\{c\}}$ of $\mathcal{I}$ to \emph{any} single color $c$.
\end{itemize}
\end{theorem}
\begin{proof}
For (A), we can first see that a composed operation (i.e., an infection diagram on the right-hand side $\mathcal{I}(d_{1,1},...,d_{n,k_n}; c)$ of (\ref{InfectionStructureMap})) satisfies the constraint $(\dagger)$, as follows. Any two non-disjoint mufflers in the composed diagram are the images of mufflers in some $\mathcal{I}(d_{i,1},...,d_{i,k_i}; c_i)$ under a composition of embeddings. But if the constraint $(\dagger)$ holds for $L_i,L_k$, then it holds for the compositions of $L_i, L_k$ with these embeddings, since ``image under an embedding'' commutes with complement, closure, and intersection.
Now we need to check the conditions of (a) associativity, (b) symmetry, and (c) identity for the structure maps. The corresponding conditions for the action maps will then follow because the action maps are special cases of the structure maps.
(a)
Suppose we have $J=(J_0,...,J_j, \rho)$, $\vec{L} = ((L_{1,0}, ..., L_{1,\ell_1}, \sigma_1),...,(L_{j,0},...,L_{j,\ell_j}, \sigma_j))$,
and $\vec{M} = ((M_{1,1,0},..., M_{1,1,m_{1,1}}, \tau_{1,1}), ... , (M_{j, \ell_j,0},..., M_{j, \ell_j,m_{j,\ell_j}}, \tau_{j, \ell_j}))$. Then $(J\cdot \vec{L}) \cdot {\vec{M}}$ has 0-th component
\[
\left( \bigcirc_{(h,k)= \nu^{-1}(1,1)}^{\nu^{-1}(j, \ell_j)}
\left( \left( \bigcirc_{i=\rho^{-1}(h)+1}^j J_{\rho(i)} \cdot L_{\rho(i),0} \right)
\circ J_h \circ L_{h,k} \right) \cdot M_{h,k,0} \right) \circ
\left( \bigcirc_{i=1}^j J_{\rho(i)} \cdot L_{\rho(i),0}\right) \circ J_0
\]
where $\nu$ is the permutation for $J\cdot \vec{L}$, and where the order of the terms in the leftmost composition is given by the indices $\nu^{-1}(1,1), \nu^{-1}(1,2), ..., \nu^{-1}(j, \ell_j)$. On the other hand, $J\cdot (\vec{L} \cdot {\vec{M}})$ has 0-th component
\[
\bigcirc_{i=1}^j J_{\rho(i)} \cdot
\left( \left( \bigcirc_{k=1}^{\ell_{\rho(i)}} L_{\rho(i),\sigma_{\rho(i)}(k)} \cdot M_{\rho(i), \sigma_{\rho(i)}(k),0} \right) \circ L_{\rho(i),0} \right) \circ J_0.
\]
These two expressions agree by cancelling adjacent terms $J_{\rho(i)}$, $(J_{\rho(i)}^{in})^{-1}$ and adjacent terms $L_{\rho(i),0}$, $(L_{\rho(i),0})^{-1}$ in the expression for $((J \cdot \vec{L}) \cdot \vec{M})_0$. For example, if $J=(J_0, J_1,J_2, \iota)$, $\vec{L} = ((L_{1,0}, L_{1,1}, \iota), (L_{2,0}, L_{2,1}, \iota))$, and $\vec{M}=((M_{1,1,0}, M_{1,1,1}, \iota), (M_{2,1,0}, M_{2,1,1}, \iota))$ (with $\iota$ denoting the identity permutation), then
\begin{align*}
((J \cdot \vec{L}) \cdot \vec{M})_0
= & [(J \cdot \vec{L})_{2,1} \cdot M_{2,1,0} ] \circ [(J \cdot \vec{L})_{1,1} \cdot M_{1,1,0} ] \circ
[J_2 \cdot L_{2,0}] \circ [J_1 \cdot L_{1,0}] \circ J_0 \\
=& [J_2 \circ L_{2,1} \circ M_{2,1,0} \circ (L_{2,1}^{in})^{-1} \circ \cancel{J_2^{-1}}] \circ \\
& [\cancel{J_2} \circ L_{2,0} \circ (J_2^{in})^{-1} \circ J_1 \circ L_{1,1} \circ M_{1,1,0}
\circ (L_{1,1}^{in})^{-1} \circ \cancel{J_1^{-1}} \circ \cancel{J_2^{in}} \circ \cancel{(L_{2,0})^{-1}} \circ \cancel{J_2^{-1}}] \circ \\
&[\cancel{J_2} \circ \cancel{L_{2,0}} \circ \cancel{(J_2^{in})^{-1}}] \circ [\cancel{J_1} \circ L_{1,0} \circ (J_1^{in})^{-1}] \circ J_0 \\
=& [J_2 \cdot (\vec{L} \cdot \vec{M})_{2,1,0}] \circ [J_1 \cdot (\vec{L} \cdot \vec{M})_{1,1,0}] \circ J_0 \\
= & (J \cdot (\vec{L} \cdot \vec{M}))_0.
\end{align*}
Checking that the $(a,b,c)^{\mathrm{th}}$ mufflers of these two infection diagrams agree similarly involves cancelling adjacent terms in the expression for $((J \cdot \vec{L})\cdot \vec{M})_{a,b,c}$. (Also cf. \cite{BudneySplicing}.)
Finally, to check that the permutations for these two infection diagrams agree, note that the inverse of either one is given (with notation as in (\ref{PermutationList})) by $(i,k,h) \mapsto (\rho^{-1}(i), \sigma_i^{-1}(k), \tau_{i,k}^{-1}(h))$.
(b) We need to check that both diagrams in the symmetry condition in Definition \ref{ColoredOperad} commute for $\mathcal{O}=\mathcal{I}$. The maps involved consist of permutations of labels on mufflers and labels on infection diagrams. The commutativity of these diagrams is easily verified.
(c) The identity $1_c \in \mathcal{I}(c;c)$ is the element $[L_0, L_1, e]$ with $L_0$ the fixed trivial $c$-component fat string link $i_c$, $L_1$ the identity map on $I\times D^2$, and $e$ the unique element of $\Sigma_1$.
Part (B) of the theorem follows quickly from our definitions. One can check that by choosing the identity map for the trivial fat 1-string link, our constraint $(\dagger)$ reduces to Budney's continuity constraint. The rest of our definitions are then exactly as in Budney's splicing operad.
For part (C), the map $\mathcal{C}_1 \rightarrow \mathcal{I}_{\{c\}}$ is easy to construct. An element of $\mathcal{C}_1(n)$ is $(a_1,...,a_n)$ where each $a_i\colon\thinspace I \hookrightarrow I$ is the restriction of an affine-linear, increasing map. The map
$\mathcal{C}_1 \rightarrow \mathcal{I}_{\{c\}}$ is given by $(a_1,...,a_n) \mapsto (i_c, a_1 \times id_{D^2},...,a_n \times id_{D^2}, \iota)$ where $i_c$ is the fixed trivial fat $c$-string link, and where $\iota$ is the identity permutation. (Actually, we could choose any permutation, since the mufflers are disjoint.)
\end{proof}
\begin{remark} For $c\neq 1$, it is clear that $\mathcal{C}_2$ cannot map to the operad $\mathcal{I}_{\{c\}}$, for then connect-sum of string links would be (homotopy-)commutative. But this is not the case. For $c\geq 3$, the pure braid group is not abelian, and for $c=2$, the monoid of string links up to isotopy is nonabelian. The latter result can be deduced either from our recent results on the structure of this monoid \cite{StringLinkMonoid} or from work of Le Dimet in the late 1980s \cite{LeDimet1988} on the group of string links up to cobordism.
\qed
\end{remark}
Just as Budney's fat long knots are equivalent to framed long knots, our fat string links are equivalent to framed string links.
In more detail, given a fat string link $L \in \mathrm{FSL}_c$, we can restrict to the ``cores of the tubes'' to get an ordinary string link $L|(I \times \{x_1,...,x_c\})$. Thus we have a map $\mathrm{FSL}_c \rightarrow \mathcal{L}_c$, which is a fibration, since in general restriction maps are fibrations. The fiber $\mathrm{Fib}_L$ over $L$ is the space of tubular neighborhoods of $\mathrm{im}\> L$ which are fixed at the boundaries. We express such a neighborhood as a map $\eta\colon\thinspace \coprod_c I \times D^2 \rightarrow I \times D^2$ and associate to $\eta$ a collection of $c$ loops in $SO(2)$; these are obtained by taking the derivative at $(0,0)$ of the map $\{t\} \times D^2 \rightarrow I \times D^2$, for each $t\in \coprod_c I$. Thus we can map the fiber $\mathrm{Fib}_L$ to $(\Omega SO(2))^c$. This ``derivative map'' is a homotopy equivalence (by shrinking $\eta$ to a small neighborhood of $\coprod_c I \times \{0\}$). Since $\Omega SO(2) \cong \mathbb{Z}$, we can write the fibration as
\[
\xymatrix{ \mathbb{Z}^c \ar[r] & \mathrm{FSL}_c \ar[r] & \mathcal{L}_c.
}
\]
For $L \in \mathrm{FSL}_c$, there are $c$ framing numbers $\omega_1,...,\omega_c$, one for each component. The $j^{\mathrm{th}}$ framing number is given by the linking number of $L(I_j \times \{(0,0)\})$ with $L(I_j \times \{(1,0)\})$, where $I_j$ is the $j^\mathrm{th}$ copy of $I$ in $\coprod_c I$. The map $\omega_1 \times ... \times \omega_c \colon\thinspace \mathrm{FSL}_c \rightarrow \mathbb{Z}^c$ gives a splitting of the above fibration. Then we consider the product fibration $\mathbb{Z}^c \rightarrow \mathcal{L}_c \times \mathbb{Z}^c \rightarrow \mathcal{L}_c$ and the map from the above fibration to this one induced by the splitting. The long exact sequence of homotopy groups for a fibration together with the five lemma imply that the map from $\mathrm{FSL}_c$ to $\mathcal{L}_c \times \mathbb{Z}^c$ is a weak equivalence, hence a homotopy equivalence. Thus $\widehat{\mathcal{L}}_c := (\omega_1 \times...\times \omega_c)^{-1}\{(0,0,...,0)\}$ is equivalent to $\mathcal{L}_c$.
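Explicitly, the splitting induces a map of fibrations
\[
\xymatrix{
\mathbb{Z}^c \ar[r] \ar[d] & \mathrm{FSL}_c \ar[r] \ar[d] & \mathcal{L}_c \ar@{=}[d] \\
\mathbb{Z}^c \ar[r] & \mathcal{L}_c \times \mathbb{Z}^c \ar[r] & \mathcal{L}_c
}
\]
in which the left-hand vertical map is a bijection of discrete spaces, so the five lemma applies to the two long exact sequences in homotopy.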
\begin{corollary}
By restricting to the subspaces $\widehat{\mathcal{L}}_c\subset \mathrm{FSL}_c$ of fat string links with zero framing number in every component, we obtain an action of $\mathcal{I}$ on spaces homotopy-equivalent to the spaces of $c$-component string links.
\end{corollary}
\subsection{Mufflers, rational tangles, and pure braids}
\label{RationalTangles1}
We now briefly discuss how general an infection our operad $\mathcal{I}$ encodes. Informally, one might wonder how twisted the inner cylinders (i.e., the holes) $L^{in}$ of a muffler can be. Clearly, a fat string link can appear as $L^{in}$ if and only if the pair $(I\times D^2, \mathrm{im}\>(L^{in}))$ is diffeomorphic to the pair $(I\times D^2, S_c)$. The purpose of the following well-known proposition is just to give an alternative and perhaps more intuitive way of thinking about such string links. Recall from Definition \ref{TrivialStringLink} that $S_c$ is the image of the trivial fat $c$-string link $i_c$.
\begin{proposition}
\label{RationalTFAE}
The following are equivalent:
\begin{itemize}
\item[(1)]
There is a
diffeomorphism of pairs $\xymatrix{(I \times D^2, S_c) \ar[r]^-\cong& (I \times D^2, \mathrm{im}\>(L)).}$
\item[(2)]
There is an isotopy from $L$ to the trivial string link which keeps the endpoints $\partial (\coprod_c I)$ in
$\partial (I \times D^2)$ throughout. Note that the isotopy need not \emph{fix} the endpoints of $\coprod_c I$.
\end{itemize}
\end{proposition}
\begin{proof}
(1) $\Rightarrow$ (2):
Suppose we have a
diffeomorphism of pairs $h$ as in (1).
It suffices to show that the identity can be connected to this diffeomorphism by a path of diffeomorphisms of $I \times D^2$, for then we can restrict to $S_c$ to obtain the desired isotopy.
By Cerf's Theorem \cite{Cerf}, the space of diffeomorphisms of $S^3$ is connected. As a corollary, so is the space of diffeomorphisms of $D^3$ whose values and derivatives agree with the identity on the boundary. In fact, this follows by considering the fibration
\[
\mathrm{Diff}(D^3,\partial D^3) \rightarrow \mathrm{Diff}(S^3) \rightarrow \mathrm{Emb}(D^3, S^3)
\]
given by restricting to a hemisphere of $S^3$. The base space is homotopy-equivalent to $SO(3)$, which is connected, while the fiber is the space of diffeomorphisms of $D^3$ fixed on the boundary.
Now a diffeomorphism $\varphi\colon\thinspace (I \times D^2, S_c) \overset{\cong}{\rightarrow} (I \times D^2, \mathrm{im}\>(L))$ is clearly isotopic to one that is the identity outside of a ball $D^3$ contained in $I \times D^2$. Combining this with Cerf's Theorem, we get a path from $\varphi$ to the identity, as desired.
(2) $\Rightarrow$ (1): by the isotopy extension theorem (see for example Hirsch's text \cite{Hirsch}), an isotopy as in (2) can be extended to a diffeotopy of the whole space $I\times D^2$. The diffeotopy at time 1 then gives the desired diffeomorphism.
\end{proof}
The 2-string links which satisfy the above condition(s) are by definition precisely the 2-string links which are also \emph{rational 2-tangles}. (Here we consider only string links, not arbitrary tangles; the reader may consult the work of Conway \cite{Conway} for more details about rational tangles in general.) Note that pure braids are examples of rational 2-tangles, since it is easy to see that a pure braid satisfies (2) above.
We immediately have the following result, which informally says that ``a muffler can grab the string link in the shape of any rational 2-tangle'':
\begin{proposition}
A fat 2-string link $L^{in}$
\[
\xymatrix{
S_c \cong \coprod_c I \times D^2 \ar@{^(->}[r]^-{L^{in}} & I \times D^2
}
\]
extends to a diffeomorphism $L$ of $I\times D^2$ if and only if the core $L^{in}|(I\times \{x_1,x_2\})$ of $L^{in}$ is a rational tangle.
\qed
\end{proposition}
\subsection{Generalizations to other embedding spaces}
For $j \in \mathbb{N}^+$ and $M$ a compact manifold with boundary, let $\mathrm{EC}(j,M)$ be the space of ``cubical embeddings'' $\mathbb{R}^j \times M \hookrightarrow \mathbb{R}^j \times M$, that is, all such embeddings which are the identity outside $I^j \times M$. Budney constructs the actions of the little 2-cubes operad $\mathcal{C}_2$ and the splicing operad $\mathcal{SC}_1^{D^2}$ on the space of long knots as special cases of actions of the operads $\mathcal{C}_j$ and $\mathcal{SC}_j^M$ on $\mathrm{EC}(j,M)$. Our extension of the splicing operad to string links also gives an extension of the more general splicing operad $\mathcal{SC}_j^M$ to a colored operad acting on spaces of embeddings $I^j \times \coprod_c M \hookrightarrow I^j \times M$.
For each $c\in \mathbb{N}^+$ fix an embedding
\[
i_c\colon\thinspace \coprod_c I^j \times M \hookrightarrow I^j \times M
\]
by fixing an embedding $\coprod_c M \hookrightarrow M$. Let $S_c$ be the image of $i_c$.
Let
\[
\mathrm{EC}^{\coprod_c}(j,M) := \{ f\colon\thinspace \coprod_c I^j \times M \hookrightarrow I^j \times M |\> \mbox{$f$ agrees with $i_c$ on $\partial I^j \times M$}\}.
\]
\begin{definition}[\emph{The spaces in the colored operad $\mathcal{I}_j^M$}]
An element in $\mathcal{I}_j^M(c_1,...,c_n;c)$ is an equivalence class of tuples $(L_0, L_1,...,L_n, \sigma)$. Here $L_0 \in \mathrm{EC}^{\coprod_c}(j,M)$, $\sigma\in \Sigma_n$, and for $i=1,...,n$, $L_i$ is an embedding $L_i\colon\thinspace I^j \times M \hookrightarrow I^j \times M$ subject to the constraint that for $0 \leq \sigma^{-1}(i) < \sigma^{-1}(k)$
\[
\overline{\mathrm{im}\> L_i \setminus L_k(S_c)} \cap L_k(\overset{\circ}{I^j} \times (M\setminus \overset{\circ}{S_c})) = \emptyset.
\]
Here we think of $\sigma\in \Sigma_n$ as a permutation in $\Sigma_{n+1}=\mathrm{Aut}\{0,1,...,n\}$ which fixes 0.
Tuples $(L_0,...,L_n, \sigma)$ and $(L_0',...,L_n', \sigma')$ are equivalent if $L_i=L_i'$ for all $i$ and if whenever the images of $L_i$ and $L_k$ intersect, $\sigma^{-1}(i) \leq \sigma^{-1}(k)$ if and only if $(\sigma')^{-1}(i) \leq (\sigma')^{-1}(k)$.
\qed
\end{definition}
The structure maps of $\mathcal{I}_j^M$, as well as an action of $\mathcal{I}_j^M$ on the spaces $\{ \mathrm{EC}^{\coprod_c}(j,M)\}_{c\in \mathbb{N}^+}$, can be defined exactly as in the special case where $j=1$ and $M=D^2$.
\section{Decomposing the space of 2-string links using the infection operad}
\label{Decomp}
\subsection{The monoid of 2-string links}
Note that given any monoid $\mathcal{M}$ and subset $\mathcal{C}$ of central elements, the quotient monoid $\mathcal{M}/\mathcal{C}$ is well defined. We are interested in the monoid $\mathcal{M}=\pi_0 \mathcal{L}_c$ of isotopy classes of $c$-string links, especially for $c=2$. The units in $\pi_0\mathcal{L}_c$ are precisely the pure braids \cite[Proposition 2.7]{StringLinkMonoid}.
We say that a non-unit $c$-string link $L$ is \emph{prime} if $L=L_1\# L_2$ implies that either $L_1$ or $L_2$ is a unit (pure braid).
\begin{definition}
\label{SplitAndCable}
\begin{itemize}
\item[(1)]
A string link $L$ is \emph{split} if there exists a properly embedded 2-disk $(D, \partial D) \hookrightarrow (I \times D^2, \partial(I \times D^2))$ whose image is disjoint from $L$ and such that the two 3-balls into which $D$ separates $I \times D^2$ each contain component(s) of $L$. Such a 2-disk is called a \emph{splitting disk}. See Figure \ref{SplitLink}.
\item[(2)]
A \emph{1-strand cable} is a string link $L$ which has a neighborhood $N \cong I \times D^2$ such that $L$ considered as a link in $N$ is a (pure) braid $B$. In other words, ``all the strands are tied into a knot.'' We call $\partial N \setminus \partial(I\times D^2)$ a \emph{cabling annulus} for $L$.
See Figure \ref{Cable}.
\end{itemize}
\end{definition}
\begin{figure}[h!]
\begin{picture}(216,130)
\put(112,0){\includegraphics[scale=.6]{splitlink.pdf}}
\end{picture}
\caption{An example of a split link.}
\label{SplitLink}
\end{figure}
\begin{figure}[h]
\begin{picture}(216,130)
\put(107,0){\includegraphics[scale=.7]{cabling.pdf}}
\end{picture}
\caption{An example of a 1-strand cable, shown together with a cabling annulus.}
\label{Cable}
\end{figure}
Since we are now focusing on 2-string links, we need not consider (or even define) $k$-strand cables for $k>1$. Hence we will often refer to 1-strand cables as just \emph{cables}.
\begin{theorem}[proven in \cite{StringLinkMonoid}]
\label{hypo}
The monoid $\pi_0\mathcal{L}_2$ has center $\mathcal{C}$ generated by the pure braids, split links, and 1-strand cables. The quotient $\pi_0\mathcal{L}_2 / \mathcal{C}$ is free. Furthermore, every 2-component string link can be written as a product of prime factors
\[
L_1\#...\#L_m \# K_1 \#...\# K_{n-m}
\]
where the $K_i$ are precisely the factors which are in the center. Such an expression is unique up to reordering the $K_i$ and multiplying any of the factors by units (pure braids).
\end{theorem}
\subsection{Removing twists}
\label{RemovingTwists}
Next note that the linking number gives a map $\ell\colon\thinspace \mathcal{L}_2\rightarrow \mathbb{Z}$ which descends to a monoid homomorphism $\pi_0\ell\colon\thinspace \pi_0 \mathcal{L}_2 \rightarrow \mathbb{Z}$. For $n\in \mathbb{Z}$, let $\mathcal{L}^n_2 = \ell^{-1}\{n\}$. We might like to think of this as $0 \rightarrow \mathcal{L}^0_2 \rightarrow \mathcal{L}_2 \rightarrow \mathbb{Z} \rightarrow 0$, though if we wanted this to be a short exact sequence of monoids, we should instead write
\[
0 \rightarrow \pi_0 \mathcal{L}^0_2 \rightarrow \pi_0 \mathcal{L}_2 \rightarrow \mathbb{Z} \rightarrow 0,
\]
since $\mathcal{L}_2$ is only a monoid up to homotopy. There is an action of $\mathbb{Z}$ on $\mathcal{L}_2$ where the generator $1\in \mathbb{Z}$ acts by following the embedding in $\mathcal{L}_2$ by the map $I \times D^2 \rightarrow I \times D^2$ given by $(t,z) \mapsto (t, e^{2\pi i t} z)$. The action of any $m\in \mathbb{Z}$ thus gives a continuous map\footnote{We can see that the map does indeed have this codomain because the resulting twisted link can be taken by an isotopy to a link where the twists are on one end, in which case the linking number is clearly increased by $m$.} $\mathcal{L}^n_2 \rightarrow \mathcal{L}^{n+m}_2$
with continuous inverse given by the action of $-m$.
Thus $\mathcal{L}_2 \cong \mathcal{L}^0_2 \times \mathbb{Z}$, and it suffices to study $\mathcal{L}^0_2$ to understand $\mathcal{L}_2$. Note that Theorem \ref{hypo} above implies that an element of $\pi_0 \mathcal{L}^0_2$ can be written as a product of primes $L_1\#...\#L_m \# K_1 \#...\# K_{n-m}$ which is unique up to \emph{only reordering the $K_i$}. We similarly define a subspace $\widehat{\L}^0_2 \subset \widehat{\L}_2$ in the space of fat string links with zero framing number; note that $\widehat{\L}^0_2 \simeq \mathcal{L}^0_2$.
\begin{proposition}
\label{CommRelations}
The isotopies that yield the commutativity relations in $\pi_0\mathcal{L}^0_2$ (which by Theorem \ref{hypo} are \emph{all} the relations in $\pi_0 \mathcal{L}^0_2$) can be realized as paths in the spaces $\mathcal{I}(c_1,...,c_n;2)$, where $c_i \in \{1,2\}$.
\end{proposition}
\begin{proof}
Note that by Theorem \ref{hypo} any 2-string link can be obtained from infections of the trivial 2-string link by prime knots and non-central prime 2-string links; these infections can be chosen to commute with each other (so that they can be carried out ``all at once''). In terms of fat string links in $\widehat{\mathcal{L}}_2^0 \left(\simeq \mathcal{L}_2^0\right)$, we can express these operations using a relatively small class of 2-holed mufflers and hockey pucks, as follows.
\begin{figure}
\includegraphics[scale=.55]{2-coloredsuboperad.pdf}
\caption{An element of $\mathcal{I}(1,1,2,1;2)$, where the $L_i$ are ordered from left to right. In this element, we have a hockey puck of type (B), then a hockey puck of type (A1), then a two-holed muffler, then a hockey puck of type (A2).}
\label{OperadEncodesCommRelations}
\end{figure}
Recall that an element $a \in \mathcal{C}_1(1)$ is just an affine-linear map $I \hookrightarrow I$.
Let $e_1, e_2\colon\thinspace D^2 \hookrightarrow D^2$ denote the restrictions of
the trivial fat 2-string link $i_2\colon\thinspace \coprod_2 I \times D^2 \hookrightarrow I \times D^2$
to the two components in the 0-time slice:
\[
e_1 \sqcup e_2 \colon\thinspace (\{0\} \times D^2) \sqcup (\{0\} \times D^2) \hookrightarrow \{0\} \times D^2.
\]
(Equivalently, $e_1, e_2$ are the restrictions of $i_2$ to the two components of a time-slice at any time $t\in I$).
Consider infection diagrams $(L_0, M_1,...,M_n, \sigma)$ representing classes in $\mathcal{I}(c_1,...,c_n;2)$ which satisfy the following three conditions (see Figure \ref{OperadEncodesCommRelations}):
\begin{itemize}
\item
$L_0$ is the trivial 2-string link.
\item
If $c_i=1$, then either
\begin{itemize}
\item[(A1)]
$L_i = a_i \times e_1$ for some $a_i \in \mathcal{C}_1(1)$ or
\item[(A2)]
$L_i = a_i \times e_2$ for some $a_i \in \mathcal{C}_1(1)$ or
\item[(B)]
$L_i = a_i \times id \colon\thinspace I \times D^2 \hookrightarrow I \times D^2$ for some $a_i \in \mathcal{C}_1(1)$.
\end{itemize}
\item
If $c_i=2$, then $L_i = a_i \times id \colon\thinspace I \times D^2 \hookrightarrow I \times D^2$ for some $a_i \in \mathcal{C}_1(1)$.
\end{itemize}
Notice that plugging knots into pucks of types (A1) and (A2) produces a split link, while plugging a knot into a puck of type (B) produces a cable. Hockey pucks of types (A1) and (A2) can move through the inside of the two-holed mufflers, while the pucks of type (B) can move through the two-holed mufflers on the outside. These two motions correspond to the centrality of split links and cables, which by Theorem \ref{hypo} are all the commutativity relations in $\pi_0 \mathcal{L}^0_2$. This proves the proposition. (Since $\mathcal{L}_2 \cong \mathcal{L}^0_2 \times \mathbb{Z}$, this is fairly close to a statement about all of $\mathcal{L}_2$.)
\end{proof}
\subsection{A suboperad of the 2-colored restriction}
Let $\mathcal{I}_{\{2\}}$ denote the suboperad of $\mathcal{I}$ corresponding to the color $\{2\} \subset \mathbb{N}^+$. Note that $\mathcal{I}_{\{2\}}$ is an ordinary operad.
\begin{definition}
\label{StackingSuboperad}
We define the \emph{stacking suboperad} $\mathcal{I}_\# \subset \mathcal{I}_{\{2\}}$ as the suboperad
where each space $\mathcal{I}_\#(n)$
consists of elements of $\mathcal{I}(2,...,2;2)$ represented by infection diagrams $(L_0, M_1,...,M_n, \sigma)$ satisfying the following conditions:
\begin{itemize}
\item
$L_0$ is the trivial $2$-string link.
\item
$M_i = a_i \times id \colon\thinspace I \times D^2 \hookrightarrow I \times D^2$ for some $a_i \in \mathcal{C}_1(1)$.
\end{itemize}
\end{definition}
\begin{figure}
\begin{picture}(216,130)
\put(18,0){\includegraphics[scale=0.55]{stackingsuboperad.pdf}}
\end{picture}
\caption{An element of $\mathcal{I}_\#(3)$ ($\cong \mathcal{C}_1(3)$).}
\label{StackingSuboperadFig}
\end{figure}
The following is obvious:
\begin{proposition}
\label{Little1Cubes}
For each $n$, the space $\mathcal{I}_\#(n)$ is homeomorphic to $\mathcal{C}_1(n)$. Thus $\mathcal{I}_\#(n)$ has contractible components and is equivalent to $\Sigma_n$.
\qed
\end{proposition}
\subsection{A decomposition theorem}
Recall that $\widehat{\L}_c$ is the space of fat $c$-string links with zero framing number in each component; $\widehat{\L}^0_c \subset \widehat{\L}_c$ is the subspace where the linking number is 0 (defined in Section \ref{RemovingTwists}); and we have homotopy equivalences $\widehat{\L}_c \simeq \mathcal{L}_c$ and $\widehat{\L}^0_c \simeq \mathcal{L}^0_c$.
Let $\mathcal{P}_c \subset \widehat{\L}_c$ be the subspace of prime $c$-component fat string links.
We will decompose a certain subspace of $\widehat{\L}_2$ in terms of our infection operad and the prime links in this subspace.
\begin{definition}
Define $\mathcal{S}_2$ to be the subspace of $\widehat{\L}_2$ consisting of certain components of $\widehat{\L}_2$: the component of $\widehat{\L}_2$ corresponding to a string link $L$ is in $\mathcal{S}_2$ if and only if $L$ is a product of prime string links, each of which is \emph{not} in the center of $\pi_0\mathcal{L}_2$. (In other words, each prime factor of $L$ is neither a split link nor a cable.)
Let $\mathcal{PS}_2:= \mathcal{P}_2 \cap \mathcal{S}_2$, let $\mathcal{S}^0_2 := \mathcal{S}_2 \cap \widehat{\L}^0_2$, and let $\mathcal{PS}^0_2 := \mathcal{PS}_2 \cap \widehat{\L}^0_2 = \mathcal{P}_2 \cap \mathcal{S}_2 \cap \widehat{\L}^0_2$.
\end{definition}
Before stating our decomposition theorem, we review a useful lemma, well known to embedding theorists. First we fix some notation and definitions.
\begin{itemize}
\item
Let
$\widehat{\L}=\coprod_{c=1}^\infty \widehat{\L}_c$.
\item
For $L\in \widehat{\L}$, let $\widehat{\L}(L)$ denote the component of $L$ in $\widehat{\L}$.
\item
Recall that if $L$ is an embedding of a 3-manifold with boundary into $I\times D^2$, $C_L:=D^3 \setminus \overset{\circ}{(\mathrm{im}\> L)}$, where we identify $I\times D^2$ with $D^3$.
\item
For a manifold with boundary $M$, let $\mathrm{Diff}(M; \partial)$ denote the space of diffeomorphisms of $M$ which are the identity on the boundary.
\item
For a group $G$, let $BG$ denote the classifying space of $G$.
\end{itemize}
\begin{lemma}
\label{LinkSpaceCompsAreBGs}
For any $L \in \widehat{\L}$, $\widehat{\L}(L) \simeq B\mathrm{Diff}(C_L; \partial)$.
\end{lemma}
\begin{proof}
Restricting a diffeomorphism in $\mathrm{Diff}(D^3; \partial)$ to the image of $L$ gives a fibration
\[
\xymatrix{\mathrm{Diff}(C_L; \partial) \ar[r] & \mathrm{Diff}(D^3; \partial) \ar[r] & \widehat{\L}(L).}
\]
Hatcher showed that $\mathrm{Diff}(D^3; \partial)$ is contractible (the Smale Conjecture \cite{HatcherSmaleConj}), which implies the result.
\end{proof}
\begin{theorem}
\label{DecompThm}
The subspace $\mathcal{S}^0_2$ is freely generated over the
stacking suboperad $\mathcal{I}_\#$
by its subspace $ \mathcal{PS}^0_2$ of non-split, non-cable prime string links.
More precisely,
\[
\mathcal{S}^0_2 \simeq
\mathcal{I}_\#(\mathcal{PS}^0_2 \sqcup \{*\}) :=
\coprod_{n=0}^\infty \mathcal{I}_\#(n) \times_{\Sigma_n} (\mathcal{PS}^0_2 \sqcup \{*\})^n
\left( \simeq \coprod_{n=0}^\infty (\mathcal{PS}^0_2 \sqcup \{*\})^n \right)
\]
where $\{ * \}$ corresponds to the component of the trivial 2-string link. Furthermore $\mathcal{S}_2 \cong \mathcal{S}^0_2 \times \mathbb{Z}$.
\end{theorem}
\begin{proof}
First note that by Theorem \ref{hypo} we have a bijection on $\pi_0$. In fact, a prime decomposition $L=L_1\#...\#L_n$ corresponds to an isotopy class of an equivalence class of infection diagram in $\mathcal{I}_\#$ with $n$ mufflers exactly as in Definition \ref{StackingSuboperad}.
Now we will check that we have an equivalence on each component of $\mathcal{S}^0_2$. So fix $L \in \mathcal{S}^0_2$.
Let $C_L$ denote the complement of $L$ in $D^3$, as above.
\begin{definition}
For a $c$-component string link $L$, a \emph{decomposing disk} $D \subset C_L$ is a 2-disk with $c$ open 2-disks removed which is properly embedded in $C_L$ in such a way that $c$ of its boundary components are (isotopic to) the $c$ meridians of $L$.
\end{definition}
Note that a decomposing disk $D$ is incompressible in $C_L$ \cite[Lemma 2.9]{StringLinkMonoid}.
A prime decomposition $L= L_1\#...\#L_n$ corresponds to a maximal collection of decomposing disks $D_1,...,D_{n-1}$ such that no two $D_i$ are isotopic.
Thus the decomposing disks $D_1,...,D_{n-1}$ cut $C_L$ into $n$ pieces that are precisely $C_{L_1},...,C_{L_n}$.
Recall the uniqueness of prime decompositions for $L \in \mathcal{S}^0_2$ given by Theorem \ref{hypo}. The proof of this theorem implies that (the image of) such a maximal collection of decomposing disks is unique up to isotopy. Note that the prime factors of $L\in \mathcal{S}^0_2$ cannot even be reordered.
Now consider the fibration
\begin{equation}
\label{DiffDiffEmb}
\xymatrix{
\mathrm{Diff}\left(\coprod_{i=1}^n C_{L_i}; \partial \right) \ar[r] & \mathrm{Diff}(C_L; \partial) \ar[r] &
\mathrm{Emb} \left(\coprod_{i=1}^n D_i, \,C_L \right)
}
\end{equation}
Hatcher proved \cite{HatcherIncomprSurfs} that for a 3-manifold $M$ and a properly embedded incompressible surface $S\subset M$,
the space $\mathrm{Emb}(S,M)$ has contractible components unless $S$ is a torus. (Strictly speaking, Hatcher proves this for connected $S$, but for $S=\sqcup_{i=1}^n S_i$ with each $S_i$ a connected surface, one can use the fibration
\[
\xymatrix{
\mathrm{Emb}\left(S_n,\, M \setminus \left(\coprod_{i=1}^{n-1} S_i\right)\right) \ar[r] &
\mathrm{Emb}\left(\coprod_{i=1}^n S_i,\, M\right) \ar[r] &
\mathrm{Emb}\left(\coprod_{i=1}^{n-1} S_i,\, M\right)
}
\]
and induction on $n$ to get the result, noting that Hatcher's theorem applies when the 3-manifold is a component of $M \setminus (\sqcup_{i=1}^{n-1} S_i)$.)
Thus the components of the base space in (\ref{DiffDiffEmb}) are contractible. Since the images of the $D_i$ are determined up to isotopy, we may replace $\mathrm{Emb}(\coprod_{i=1}^n D_i, C_L)$ by $\mathrm{Diff}(\coprod_{i=1}^n D_i)$ (since the latter space also has contractible components). Note that the fiber in (\ref{DiffDiffEmb}) is $\prod_{i=1}^n \mathrm{Diff}(C_{L_i}; \partial)$. So we have
\[
\xymatrix{
\prod_{i=1}^n \mathrm{Diff}(C_{L_i}; \partial) \ar[r] & \mathrm{Diff}(C_L; \partial) \ar[r] & \mathrm{Diff}(\coprod_{i=1}^n D_i).
}
\]
Now apply the classifying space functor $B(-)$ to the above fibration. By Lemma \ref{LinkSpaceCompsAreBGs}, we get
\[
\xymatrix{
\prod_{i=1}^n \widehat{\L}(L_i) \ar[r] & \widehat{\L}(L) \ar[r] & \prod_{i=1}^n \mathrm{Conf}_2(D^2)
}
\]
where $\mathrm{Conf}_2(D^2)$ is the space of ordered pairs of distinct points in $D^2$ (a classifying space for the pure braid group on two strands). The base space is a $\mathrm{K}(\pi,1)$, i.e., it has trivial $\pi_i$ for $i>1$. We claim that on $\pi_1$, the fibration is the zero map: in fact, if $\alpha \in \pi_1(\widehat{\L}(L))$ produced a nontrivial braid (say, in the $i^\mathrm{th}$ factor), then in $\alpha(1)$, at least one of the two summands determined by $D_i$ would have nonzero $\ell$ (number of twists), contradicting that $\alpha$ is a loop (in $\mathcal{S}_2^0$).
So by the long exact sequence in homotopy groups for a fibration, the map from fiber to total space is an isomorphism on $\pi_i$ for all $i\geq 0$. Then by the Whitehead Theorem,
\[
\widehat{\L}(L) \simeq \prod_{i=1}^n \widehat{\L}(L_i).
\]
The right-hand space can be rewritten as $\Sigma_n \times_{\Sigma_n} \prod_{i=1}^n \widehat{\L}(L_i)$, which by Proposition \ref{Little1Cubes} is equivalent to $ \mathcal{I}_\#(n) \times_{\Sigma_n} \prod_{i=1}^n \widehat{\L}(L_i)$. This proves the main assertion of the theorem.
The remaining assertion, that $\mathcal{S}_2 \cong \mathcal{S}^0_2 \times \mathbb{Z}$, follows immediately from Section \ref{RemovingTwists}.
\end{proof}
\subsection{Final remarks and future directions}
We have described the components of links in $\mathcal{S}_2$ in terms of the components of the prime links in $\mathcal{S}_2$. In general, we do not have descriptions of the components of the prime links in $\mathcal{S}_2$ themselves.
However, we can describe some components of $\mathcal{L}_2$. We believe that at least some of these descriptions have been known to experts.
\begin{proposition}
The component of a 2-string link $R \in \widehat{\L}_2$ which is a rational tangle is contractible.
\end{proposition}
\begin{proof}
We have a fibration
\begin{equation}
\label{UnlinkFibn}
\xymatrix{ \mathrm{Diff}(C_R; \partial) \ar[r] & \mathrm{Diff}(D^3; \partial) \ar[r] & \widehat{\L}(R)}
\end{equation}
given by restricting to the image of $R$. The total space is contractible by the Smale Conjecture.
So it suffices to show that the fiber $\mathrm{Diff}(C_R; \partial)$ is contractible. Note that $C_R$ is a genus-2 handlebody.
We claim that for any 3-dimensional handlebody $H$, $\mathrm{Diff}(H; \partial)$ is contractible. This can be proven by induction on the genus. The basis case of genus 0 is the Smale Conjecture. For the induction step, let $S$ be a meridional disk in $H$. Consider the fibration
\[F\rightarrow \mathrm{Diff}(H; \partial) \rightarrow \mathrm{Emb}(S,H)\]
where the base is the space of proper embeddings of $S$ with fixed behavior on $\partial S$. The fiber $F$ is the space of diffeomorphisms of a handlebody whose genus is 1 less than that of $H$, and it is contractible by the induction hypothesis. Hatcher's result on incompressible surfaces says that $\mathrm{Emb}(S,H)$ has contractible components.
Furthermore, we claim that any two such embeddings of $S$ are isotopic; this can be proven using the fact that handlebodies are irreducible (i.e., every 2-sphere in $H$ bounds a 3-ball) and standard ``innermost disk'' arguments from 3-manifold theory. Hence $\mathrm{Emb}(S,H)$ is connected, hence contractible.
Thus $\mathrm{Diff}(H; \partial)$ is contractible. Thus the base space in the fibration (\ref{UnlinkFibn}) is also contractible.
\end{proof}
Recall the definitions of split links and splitting disks from Definition \ref{SplitAndCable}.
\begin{proposition}
\label{SplitLinkProp}
If $L$ is a split string link which splits as links $L_1, L_2$, then $\widehat{\L}(L) \simeq \widehat{\L}(L_1) \times \widehat{\L}(L_2)$.
\end{proposition}
\begin{proof}
Let $D$ be a splitting disk for $L$. Consider the fibration
\[
\mathrm{Diff}(C_{L_1}; \partial) \times \mathrm{Diff}(C_{L_2}; \partial) \rightarrow \mathrm{Diff}(C_L; \partial) \rightarrow \mathrm{Emb}(D, C_L)
\]
where $\mathrm{Emb}(D, C_L)$ is the space of embeddings of $D$ which agree on $\partial D$ with the given embedding of $D$. By Hatcher's theorem on incompressible surfaces, this space has contractible components. Irreducibility of $C_L$ implies further that any two such embeddings of $D$ in $C_L$ are isotopic, showing that the base space is connected, hence contractible.
This gives us the desired equivalence.
\end{proof}
If we restrict our attention to 2-string links, the split links are just those links which are obtained by tying a knot in one or both strands. So Budney's work \cite{BudneyTop} together with Proposition \ref{SplitLinkProp} gives a description of the homotopy type of each such component of $\mathcal{L}_2$.
We conclude by mentioning two open problems that immediately stand out as follow-ups to Theorem \ref{DecompThm}:
\begin{itemize}
\item[(1)]
to determine the homotopy types of components of prime non-central 2-string links.
\item[(2)]
to understand how different types of 2-string links interact, i.e., find a generalization of Theorem \ref{DecompThm} from the subspace $\mathcal{S}_2$ to the space of all 2-string links.
\end{itemize}
\bibliographystyle{plain}
% arXiv:2011.07858
\section{Introduction}
The Troitsk nu-mass experiment is dedicated to the search for sterile neutrinos. The setup consists of a windowless gaseous tritium source (WGTS), an electrostatic spectrometer with adiabatic magnetic collimation (MAC-E spectrometer), and a semiconductor detector (Fig.~\ref{fig:setup}). The setup with a smaller spectrometer was used in measurements of the electron anti-neutrino mass before 2010 (\cite{Aseev:2011dq, Belesev:2012hx, Belesev:2013cba, Nozik:2019jgm}), and the larger version was used after 2010 to search for sterile neutrinos (\cite{Abdurashitov:2015jha, Abdurashitov:2017kka}). The tritium source remained the same.
The principle of operation is the following:
\begin{enumerate}
\item Electrons are produced in beta-decay in the WGTS decay volume.
\item Then they are transported via an adiabatic magnetic transport system to the spectrometer. During transport inside the decay volume they can scatter on tritium molecules (the gas pressure is low, but the scattering probability in one pass can still reach $\sim 0.5$).
\item In the so-called pinch magnet at the entrance of the spectrometer (nominal field $7.2~T$) the electrons reach the maximum angle between their velocity and the spectrometer axis.
\item The field in the center of the spectrometer (the so-called analyzing plane) drops to $10^{-3}~T$. Due to conservation of the adiabatic invariant, the transversal component of the electron energy in the analyzing plane is quite small (up to $10^{-4}$ of the full energy).
\item A stopping electric field parallel to the spectrometer axis is applied near the analyzing plane. All electrons with energy higher than the stopping potential pass the spectrometer and are registered by the detector; all others are reflected. The spectrum is measured by scanning the stopping potential.
\end{enumerate}
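The stopping condition in step 5 can be sketched as a simple transmission function. The snippet below is a non-relativistic illustration only; the field values are the nominal ones quoted above, and the function name and parameters are our own:

```python
import math

def transmitted(E_kin, theta_deg, U_stop, B_start=7.2, B_ana=1e-3):
    """Sketch of the MAC-E filter transmission condition (non-relativistic).

    E_kin: electron kinetic energy (eV) at the pinch magnet,
    theta_deg: pitch angle at the pinch (field B_start, tesla),
    U_stop: retarding potential (V), B_ana: analyzing-plane field (tesla).
    By conservation of the adiabatic invariant mu = E_perp / B, the
    transversal energy at the analyzing plane is scaled by B_ana / B_start.
    """
    E_perp_start = E_kin * math.sin(math.radians(theta_deg)) ** 2
    E_perp_ana = E_perp_start * B_ana / B_start  # adiabatic scaling
    E_long_ana = E_kin - E_perp_ana              # energy available against the barrier
    return E_long_ana > U_stop
```

Note that the scaling factor $B_{ana}/B_{pinch} = 10^{-3}/7.2 \approx 1.4\times 10^{-4}$ reproduces the $\sim 10^{-4}$ transversal-energy fraction mentioned in step 4.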
\begin{figure}
\centering
\includegraphics[width = 0.95\linewidth]{images/setup.jpg}
\caption{Troitsk nu-mass setup. The tritium source is marked by number 8, the spectrometer electrode by number 4, and the pinch magnet by number 6.}
\label{fig:setup}
\end{figure}
The term ``trapping effect'' was introduced and first discussed in \cite{Lobashev:1999tp}. A magnetic trap is a region in space where the magnetic field is bounded on both sides by a higher magnetic field. The higher field causes so-called magnetic reflection for electrons with specific angles between their velocities and the trap axis. There are two large traps in the setup: the spectrometer itself and the WGTS decay volume.
The spectrometer is supposed to be ``clean'', meaning that it does not contain radioactive materials that could produce high-energy electrons. Low-energy electrons are occasionally trapped in it, but they do not affect measurements as long as their energy is below the detection threshold. Occasionally trapped high-energy electrons from cosmic rays produce so-called ``bunches'', short signals of elevated count rate. These bunches are treated during the signal analysis and do not critically affect the resulting spectrum (see \cite{Lobashev:1999tp} and \cite{Aseev:2011dq} for details).
The trap in the source, on the other hand, is constantly fed by tritium decays at large angles. The magnetic bottle scheme is shown in Fig.~\ref{fig:trap}. Electrons with large angles relative to the bottle axis are trapped indefinitely unless they scatter on the tritium gas and change their angle. The tritium cannot be removed from the source, since it is needed to produce the electrons in the first place. These electrons therefore keep escaping from the trap at a constant rate, creating an irremovable physical background with a distorted original spectrum.
In this article, we will discuss trapping-effect physics and simulation results.
\section{Trapping physics}
Electrons are produced during $\beta$-decay of tritium in the so-called tritium source, a tube 3 m long and 5 cm in diameter. The axial field in the source is $0.6~T$, and a stronger field (nominal value $3.6~T$) is created by the transport magnets. These fields adiabatically transport electrons to the spectrometer (the field scheme is shown in Fig.~\ref{fig:trap}). Adiabatic motion means the conservation of the invariant $\mu = \frac{m v_\perp^2}{2B} = const$, where $v_\perp$ is the velocity component perpendicular to the field $B$ and $m$ is the electron mass.
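Conservation of $\mu$ together with conservation of kinetic energy yields the standard magnetic mirror condition: an electron with pitch angle $\theta$ in the field $B_{source}$ is reflected before reaching a region of maximal field $B_{max}$ exactly when
\[
\sin^2\theta \,\frac{B_{max}}{B_{source}} \geq 1,
\qquad \text{i.e.} \qquad
\theta \geq \arcsin\sqrt{\frac{B_{source}}{B_{max}}}.
\]
This is the form used for the reflection angles below.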
\begin{figure}
\centering
\includegraphics[width = 0.9\textwidth]{images/Trap.png}
\caption{Troitsk nu-mass tritium source magnetic field}
\label{fig:trap}
\end{figure}
The principal scheme of the trapping is shown as an angle diagram in Fig.~\ref{fig:scheme}. In this scheme we show only angles between the electron velocity direction and the trap axis. To simplify the explanation we assume that the density of the gas in the trap is small (the pressure in the source is up to $10^{-4}~mbar$), so that an electron, after being born or after scattering with some angle relative to the trap axis, has the same angle near the trap boundary. The simulation below takes the density effect into account, but it does not change the picture dramatically. Electrons are assumed to be born isotropically. The electrons in the left red zone leave the trap at the rear side and are lost. Electrons in the right green region are accepted into the spectrometer. Electrons in the top and bottom white regions are trapped in the magnetic bottle. The red gap to the right is specific to Troitsk nu-mass: electrons can escape the trap in the tritium source but are reflected from the pinch magnet. In this case they pass back through the source, escape through the rear trap exit, and are lost.
\begin{figure}
\centering
\includegraphics[width = 0.5\textwidth]{images/scheme}
\caption{The principal scheme for trapping. $\theta$ is the angle between the magnetic field axis and the electron's velocity.}
\label{fig:scheme}
\end{figure}
The reflection angles are determined by the magnetic field ratios:
\begin{gather}
\theta_{pinch} = \arcsin \sqrt{\frac{B_{source}}{B_{pinch}}} = 16.8^\circ,\\
\theta_{transport} = \arcsin \sqrt{\frac{B_{source}}{B_{transport}}} = 24.1^\circ,
\end{gather}
where $B_{source}$ is the axial field in the tritium source (trap body), $B_{transport}$ is the field in the transport channels, and $B_{pinch}$ is the field inside the pinch magnet. Thus only about 5\% of electrons are accepted by the spectrometer, and more than 80\% of all produced electrons are trapped. The remaining 15\% escape through the rear magnetic mirror. Direct simulation of escaped electrons shows that the chance of an escaped electron getting back into the source is rather small (less than $10^{-5}$), so they can be ignored at this level of precision. The numbers could differ for other field configurations; for example, in the KATRIN experiment (\cite{Schonung:2016tal}) there is no magnetic reflection, and back-scattering from the rear wall is the dominant effect.
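As a cross-check, the two reflection angles follow directly from the quoted nominal field values; a minimal computation (variable and function names are ours):

```python
import math

# Nominal fields from the text: 0.6 T source, 3.6 T transport, 7.2 T pinch.
B_SOURCE, B_TRANSPORT, B_PINCH = 0.6, 3.6, 7.2

def mirror_angle(b_low, b_high):
    """Reflection (escape-cone) angle in degrees: arcsin(sqrt(b_low / b_high))."""
    return math.degrees(math.asin(math.sqrt(b_low / b_high)))

theta_pinch = mirror_angle(B_SOURCE, B_PINCH)          # ~16.8 degrees
theta_transport = mirror_angle(B_SOURCE, B_TRANSPORT)  # ~24.1 degrees
```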
Electrons scatter on the residual gas and change their angle and energy. They can escape the trap in two ways:
\begin{itemize}
\item Drift to the right side of Fig.~\ref{fig:scheme}, then jump the gap in one single interaction and arrive in the green accepted zone (pass to the spectrometer).
\item Drift to the left and escape through the rear mirror.
\end{itemize}
The electrons that drift toward the right green zone but cannot cross the gap in one interaction are reflected from the pinch magnet. In most cases, those electrons pass back through the source and escape through the rear mirror (since the chance of an interaction in one pass is small).
The interaction of electrons with residual gas is described by three processes:
\begin{itemize}
\item Quasi-elastic scattering on a nucleus. In this process, the energy change is rather small (fractions of an eV). At the same time, Coulomb scattering can produce significant angles (more than the $3.65^\circ$ required to jump the gap).
\item Ionization of the hydrogen molecule. In this case, the angle change is minimal, but the energy-loss probability is inversely proportional to the loss value squared, and the loss can reach 100~eV (more details in \cite{Abdurashitov:2016nrv}).
\item Excitation of the hydrogen molecule. It works the same way as ionization, but with the energy loss equal to the energy difference between levels (up to 15.4~eV).
\end{itemize}
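The $1/\varepsilon^2$ scaling of the ionization loss mentioned above can be sampled by inverting its cumulative distribution. The sketch below is illustrative; the bounds $\varepsilon_{min}$ and $\varepsilon_{max}$ are assumptions, not values from the experiment's cross-section model:

```kotlin
import kotlin.random.Random

// Inverse-CDF sampling of an ionization energy loss with pdf f(eps) ~ 1/eps^2
// on [epsMin, epsMax]. For this pdf, F(eps) = (1/epsMin - 1/eps)/(1/epsMin - 1/epsMax),
// which inverts in closed form.
fun invertIonizationCdf(u: Double, epsMin: Double, epsMax: Double): Double =
    1.0 / (1.0 / epsMin - u * (1.0 / epsMin - 1.0 / epsMax))

fun sampleIonizationLoss(
    rng: Random,
    epsMin: Double = 15.0,   // assumed lower bound, eV
    epsMax: Double = 1500.0  // assumed upper bound, eV
): Double = invertIonizationCdf(rng.nextDouble(), epsMin, epsMax)
```

Because the pdf falls off as $1/\varepsilon^2$, the median sampled loss is close to $2\varepsilon_{min}$ for $\varepsilon_{max}\gg\varepsilon_{min}$, reflecting the dominance of small losses.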
The excitation and ionization losses behave similarly, so they can be joined into one group called inelastic scattering. While the elastic cross-section sets the rate of angle change, the ratio between the elastic and inelastic cross-sections determines the amount of ``friction'', i.e.\ the energy loss per angle-changing scattering.
The electron drifts in angular space toward the escape angle (both forward and backward) and then leaves with a smaller energy. This process can be roughly described as evaporation in angular space.
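This evaporation picture can be caricatured by a random walk of the pitch angle between a reflecting barrier at $90^\circ$ and an absorbing escape angle, with a fixed inelastic loss per angle-changing scattering. All step sizes below are illustrative assumptions, not parameters of the real simulation:

```kotlin
import kotlin.random.Random

// Crude cartoon of "evaporation in angular space". Each scattering kicks the
// pitch angle by a uniform random amount and costs a fixed energy loss; the
// electron escapes once its angle drops below the escape angle.
data class EscapeRecord(val scatterings: Int, val energyLostEv: Double)

fun evaporate(
    startAngleDeg: Double,
    escapeAngleDeg: Double = 24.1,      // assumed escape cone
    maxKickDeg: Double = 5.0,           // assumed kick amplitude
    lossPerScatteringEv: Double = 30.0, // assumed mean loss per scattering
    rng: Random = Random(1)
): EscapeRecord {
    var angle = startAngleDeg
    var n = 0
    while (angle > escapeAngleDeg) {
        angle += maxKickDeg * (2.0 * rng.nextDouble() - 1.0) // uniform kick
        if (angle > 90.0) angle = 180.0 - angle              // reflect at 90 deg
        n++
    }
    return EscapeRecord(n, n * lossPerScatteringEv)
}
```

Larger initial angles require a longer walk and hence a larger accumulated loss, which is the qualitative mechanism behind the flat loss spectrum discussed later.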
\section{Simulation}
\begin{figure}
\centering
\includegraphics[width = 0.9\textwidth]{images/Cross.pdf}
\caption{Quasi-elastic, ionization and excitation cross-section dependency on incident electron energy.}
\label{fig:cross-sections}
\end{figure}
The simulation was implemented in Kotlin (\cite{Nozik:2019gmi, Nozik:2020wiy}) and utilizes heavy-duty parallel computations for electron propagation. It uses a simplified two-dimensional phase-space geometry, tracking only the electron energy ($E$), the angle between the electron velocity and the trap axis ($\theta$), and the position along the trap axis ($z$). The distribution of electrons across the plane perpendicular to the trap axis is assumed to be uniform. The non-uniformity of the magnetic field near the trap ends causes a small transversal drift of electrons, but a quantitative estimate shows that the time required for this drift to become significant is much larger than the time needed for an electron to escape the trap or for its energy to drop below the cutoff. The gyration phase around the field line is taken into account during scattering but is assumed to be random (it changes continuously during the motion due to the rotation around a magnetic field line).
The code for scattering cross-sections, angle change, and energy-loss computation was provided by Ferenc Gluck and Sebastian Voecking (\cite{kasiopea}). The computed cross-sections are shown in Fig.~\ref{fig:cross-sections}. The code was rewritten in Kotlin and optimized for parallel computations (\cite{trapping-repo}).
The simulation has the following input parameters:
\begin{itemize}
\item Initial electron energy.
\item Low-energy cutoff. When the electron energy drops below this value, the simulation stops. The parameter is introduced to reduce the computation time.
\item Maximum magnetic fields in the source, transport magnets and pinch-magnet: $B_{source}$, $B_{transport}$ and $B_{pinch}$.
\item Gas density to calculate the free path.
\item Optionally, a full field map can be provided as well. If it is provided, the angles after scattering are calculated using the field at the specific scattering point (the transport between scattering points is still adiabatic). If the field map is not provided, the field is considered uniform. The introduction of the field map reduces the computation speed but does not significantly affect the results.
\end{itemize}
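The inputs listed above can be collected into a single parameter object. Names, units, and the sample values used in the usage note are illustrative; they are not taken from the actual simulation code (trapping-repo):

```kotlin
// Simulation inputs as a parameter object (illustrative sketch).
data class SimulationParameters(
    val initialEnergyEv: Double,
    val lowEnergyCutoffEv: Double,
    val bSourceTesla: Double,
    val bTransportTesla: Double,
    val bPinchTesla: Double,
    val gasDensityPerM3: Double,
    // Optional axial field map B(z); null means a uniform field is assumed.
    val fieldMap: ((z: Double) -> Double)? = null
)
```

For example, `SimulationParameters(18000.0, 10000.0, 0.6, 3.6, 7.2, 1e19)` describes a run with an 18 keV mono-energetic source and no field map (all field values here are hypothetical).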
At the end of the simulation, an electron could have one of the following statuses:
\begin{itemize}
\item \textbf{PASS} -- the electron has an angle accepted by the spectrometer without any scattering.
\item \textbf{REJECTED} -- the electron leaves the trap through the rear plug, with or without scattering.
\item \textbf{ACCEPTED} -- the electron is within the spectrometer acceptance angle after at least one scattering.
\item \textbf{LOWENERGY} -- the electron energy falls below the energy cut.
\end{itemize}
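The four statuses map naturally onto an enumeration. The classifier below is an illustrative sketch of the bookkeeping; the real code assigns statuses while tracking full trajectories:

```kotlin
// Final-state bookkeeping mirroring the four statuses above.
enum class Status { PASS, REJECTED, ACCEPTED, LOWENERGY }

fun classify(
    energyEv: Double,
    cutoffEv: Double,
    inAcceptanceCone: Boolean, // escaped toward the spectrometer within the cone
    scatterings: Int
): Status = when {
    energyEv < cutoffEv -> Status.LOWENERGY
    inAcceptanceCone && scatterings == 0 -> Status.PASS
    inAcceptanceCone -> Status.ACCEPTED
    else -> Status.REJECTED
}
```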
\begin{figure}
\centering
\includegraphics[width = 0.9\textwidth]{images/spectra.pdf}
\caption{The energy spectra of escaped electrons for different starting energies. $\varepsilon = E_{initial} - E_{escaped}$ is the energy loss. The vertical axis shows the number of electrons per 100~eV energy bin for $10^7$ initial electrons.}
\label{fig:energy-spectra}
\end{figure}
\begin{figure}
\centering
\includegraphics[width = 0.7\textwidth]{images/precise_tracking.jpg}
\caption{Comparison of full-tracking simulation with simplified model. The normalized number of escaped electrons in 50 eV energy bin versus electron energy in eV.}
\label{fig:track-compare}
\end{figure}
Figure~\ref{fig:energy-spectra} shows the resulting energy distributions of \textbf{ACCEPTED} electrons for fixed starting energies from 12 to 18~keV. In each case, we take a mono-energetic starting distribution; it can later be convolved with the actual energy distribution if needed. The spectrum is characterized by a small rise near zero losses, caused by electrons that suffer only one or a few scatterings before hitting the acceptance zone. This part of the spectrum mostly follows the $1/\varepsilon^2$ law of the ionization-loss cross-section. The rest of the region is an almost flat curve whose height depends only on the initial electron energy. Due to this flatness, it was possible to replace the trapping effect with a simplified formula (with a constant rate for each energy) in \cite{Aseev:2011dq} and \cite{Abdurashitov:2017kka}.
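The convolution of the mono-energetic responses with an arbitrary initial spectrum amounts to a binned matrix-vector product. A minimal sketch (binning and normalization conventions are assumptions left to the caller):

```kotlin
// Convolve an initial (binned) energy spectrum with a per-energy loss kernel:
// out[j] = sum_i spectrum[i] * kernel[i][j], where kernel[i][j] is the
// probability for an electron born in energy bin i to escape in energy bin j.
fun convolveSpectra(spectrum: DoubleArray, kernel: Array<DoubleArray>): DoubleArray {
    require(spectrum.size == kernel.size)
    val out = DoubleArray(kernel[0].size)
    for (i in spectrum.indices)
        for (j in out.indices)
            out[j] += spectrum[i] * kernel[i][j]
    return out
}
```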
The simulation uses the simplified geometric model and assumes that the reflection from the magnetic bottleneck is instantaneous. In reality, the magnetic field rises from $B_{source}$ to $B_{transport}$ gradually over a length of approximately 10~cm. Scattering in the magnetic plug produces electrons with a different angular distribution, so this process could in theory affect the resulting energy distribution of escaped electrons. To check this possibility and to take the three-dimensional picture into account, a single simulation was performed with full three-dimensional electromagnetic tracking. The energy range for this simulation was significantly reduced to keep the simulation time feasible. The precise tracking simulation was performed by Aino Skasyrskaya at INR RAS. Figure~\ref{fig:track-compare} shows the result of this calculation: the results of the precise and simplified tracking match within statistical errors.
\section{Discussion}
\begin{figure}
\begin{minipage}{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{images/lossVSrun.pdf}
\caption{Energy loss versus full electron run length for ACCEPTED electrons. The initial energy was 20 keV. }
\label{fig:loss-vs-length}
\end{minipage}
~
\begin{minipage}{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{images/runVSangle.pdf}
\caption{Full electron run length versus initial angle relative to field axis for ACCEPTED electrons. The initial energy was 20 keV. }
\label{fig:length-vs-angle}
\end{minipage}
\end{figure}
The resulting flat spectrum is quite peculiar: the mono-energetic initial spectrum is transformed into an almost uniform ``white noise'' spectrum after evaporation. The result, while somewhat counter-intuitive, is reproduced quite well in different simulations. Intuitively, one can understand this spectrum in the following way: the escape time is proportional to the difference between the initial angle and the escape angle; the energy loss is proportional to the escape time, and therefore to this angle difference as well. Hence the loss is proportional to the initial angle.
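This hand-waving argument can be condensed into a one-line estimate (a heuristic illustration only, not something used in the simulation), with $\kappa$ an assumed proportionality constant between loss and angle difference:

```latex
\begin{equation*}
\varepsilon \simeq \kappa\,(\theta_0-\theta_{esc})
\quad\Rightarrow\quad
\frac{dN}{d\varepsilon}
= \frac{dN}{d\theta_0}\,\frac{d\theta_0}{d\varepsilon}
\simeq \frac{1}{\kappa}\,\frac{dN}{d\theta_0}\,,
\end{equation*}
```

so for an initial angular distribution that is roughly flat across the trapped range, the loss spectrum comes out roughly flat as well.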
One must note that the simulation does not directly track the time. Instead, we compute the total run length, which depends monotonically on time. Figures \ref{fig:loss-vs-length} and \ref{fig:length-vs-angle} show the dependence of the energy loss on the total flight path (which serves as a measure of time in this simulation) and the dependence of the flight path on the initial angle. The dependence is clear in Fig.~\ref{fig:loss-vs-length}. The dependence of the length on the angle (Fig.~\ref{fig:length-vs-angle}) is fuzzier due to possible large-angle scatterings in quasi-elastic interactions, but the tendency can still be seen.
\begin{figure}
\begin{minipage}{0.45\linewidth}
\centering
\includegraphics[width = \linewidth]{images/CrossDivision.pdf}
\caption{The ratio between quasi-elastic and excitation/ionization cross-sections.}
\label{fig:cross-section-div}
\end{minipage}
~
\begin{minipage}{0.45\linewidth}
\centering
\includegraphics[width = \linewidth]{images/TrapRatio.pdf}
\caption{The ratio between tritium beta-spectrum with trapping and tritium original spectrum without trapping.}
\label{fig:trap-ratio}
\end{minipage}
\end{figure}
The simulation results show only a weak dependence on the magnetic field parameters and no dependence on the pressure. The only effective parameter is the ratio between the quasi-elastic and inelastic cross-sections shown in Fig.~\ref{fig:cross-section-div} (some details on the cross-section models can be found in \cite{Abdurashitov:2016nrv}). This ratio decreases slightly with energy, which explains the deviation from a constant spectrum in Fig.~\ref{fig:energy-spectra}. The ratio between the tritium beta-spectrum with trapping and the tritium beta-spectrum without trapping is shown in Fig.~\ref{fig:trap-ratio}.
A precise experimental check of the trapping effect is not possible at the moment. The integral value of the effect is rather large (as shown in Fig.~\ref{fig:trap-ratio}), but its differential value in the measurement region is small, and it is not possible to completely disentangle the trapping spectrum from the tritium beta-spectrum (because the tritium beta-spectrum amplitude is not fixed). The only way to check the trapping directly is to introduce a monochromatic radioactive electron source with an isotropic angular distribution. There was an effort to do so with a Kr source (see \cite{Belesev:2008zz} for details), but the intensity of the Kr source was not sufficient (relative to the residual tritium background) to draw any quantitative conclusions. The simulated spectrum with full tracking (see Fig.~\ref{fig:track-compare}) was used in \cite{Aseev:2011dq} in the electron antineutrino mass search, and the fast tracking was used in later articles in the search for a sterile neutrino. In all cases, there is good agreement between the data and the total model spectrum, which includes the trapping effect. This agreement can be used as indirect experimental proof of the simulations presented in this work.
Our study of different magnetic field configurations shows that the general shape of the spectrum remains the same independently of the magnetic trap shape. This means that the effect could be observed not only in the Troitsk nu-mass experiment but in other setups as well.
One such case is the KATRIN experiment (\cite{Aker:2019uuj}). There is no significant magnetic trap in the KATRIN source due to the absence of a rear magnetic mirror, but at some scale trapping is still possible in places like the pumping sections, where the tritium pressure is still high enough to produce a significant amount of trapped electrons. The simulation code presented in this paper could be useful for calculating the spectrum of evaporated electrons there as well.
Another possible case is the trapping of charged particles in Earth's magnetic field. Of course, for this purpose the code would need to be modified, taking into account that the trapped particles are mostly protons rather than electrons. It is also possible that the transversal drift cannot be discarded on such large scales.
\section{Conclusion}
The simulations presented in this article allow taking into account the effect of the evaporation of electrons from a magnetic trap, which is noticeable in the Troitsk nu-mass experiment. The shape of the spectrum of electrons escaping the trap can be predicted with good precision, and the simulations provide robust estimates of this spectrum. The Kotlin code computes millions of trapped particles per minute, a few orders of magnitude faster than standard solutions (like GEANT4). This allows us to reach sufficient statistical precision for the final spectrum and to explore the parameter space. The same effect could be estimated in other cases of magnetic trapping as well.
The authors would like to thank those who consulted them at different stages of writing this article: Aino Skasyrskaya, Viktor Matushko, Polina Okuneva, and Oleg Komoltsev. Special thanks to Vladislav Pantuev and Peter Klimai for valuable discussions and for proofreading the manuscript. The JetBrains Research organization provided infrastructure and informational support for the authors.
This work is supported by the Ministry of Science and Higher Education of the Russian Federation under the contract 075-15-2020-778.
\bibliographystyle{unsrt}
\section{Introduction}\label{sec:intro}
In contrast to the classical notion of vacuum, describing the absence of everything, the vacuum of a quantum field theory (QFT) amounts to a highly non-trivial state.
It is characterized by the omnipresence of quantum fluctuations of all the dynamical degrees of freedom of the underlying QFT.
These fluctuations effectively endow the quantum vacuum with medium-like properties, such as a non-vanishing non-linear response to applied electromagnetic fields.
The latter is in particular triggered by fluctuations of charged particles, which couple directly to electromagnetic fields, and depends on the charges and masses of all fluctuating particles.
Within the Standard Model of particle physics the leading effective interactions between electromagnetic fields are governed by quantum electrodynamics (QED).
A central quantity in
the study of the effective nonlinear interactions of macroscopic electromagnetic fields in the QED vacuum is the Heisenberg-Euler effective action $\Gamma_{\rm HE}$ \cite{Heisenberg:1935qt,Weisskopf:1996bu,Schwinger:1951nm}.
The latter arises from the microscopic theory of QED in a given prescribed (non-quantized) electromagnetic field $\bar F=\bar F^{\mu\nu}$ by integrating out the dynamical degrees of freedom, namely the quantized spinor fields, describing electrons and positrons, and the quantum photon field; cf., e.g., Ref.~\cite{Gies:2016yaa}.
This supplements the classical Maxwell action $\Gamma_{\rm MW}[\bar F]=-\frac{1}{4}\int_x\bar F_{\mu\nu}\bar F^{\mu\nu}$ with effective, nonlinear self-interactions of the prescribed field.
Apart from the applied electromagnetic field $\bar F$ and derivatives $\partial=\partial^\rho$ thereof, at zero temperature and vanishing chemical potential the only physical parameters characterizing the latter are the electron/positron mass $m$, and the elementary charge $e$ mediating the coupling between charges and electromagnetic fields.
As the quantum fields only appear as virtual states, their momenta are integrated over and hence not determined, eliminating the possibility of any explicit reference to them.
In terms of Feynman diagrams $\Gamma_{\rm HE}[\bar F]$ can be represented as an infinite set of vacuum diagrams, with the charged particle lines dressed to all orders in the external electromagnetic field and its derivatives.
The simplest diagram is a one-loop diagram.
Diagrams featuring more loops are parametrically suppressed with powers of the fine-structure constant $\alpha=e^2/(4\pi)\simeq1/137$.
Upon combination with the speed of light $c$ and the Planck constant $\hbar$, the ratio of $m^2$ and $e$ can be converted into electric $E_{\rm cr}=m^2c^3/(e\hbar)\approx1.3\times10^{18}\,{\rm V}/{\rm m}$ and magnetic $B_{\rm cr}=E_{\rm cr}/c\approx4\times10^9\,{\rm T}$ reference field strengths.
Analogously, the inverse of the electron mass can be converted into spatial $\lambdabar_{\rm C}=\hbar/mc\approx3.8\times10^{-13}\,{\rm m}$ and temporal $\tau_{\rm C}=\lambdabar_{\rm C}/c\simeq1.3\times10^{-21}\,{\rm s}$ reference scales.
The former quantities can be used to render the applied electric and magnetic fields dimensionless, and the latter ones the derivatives.
Hence, in slowly varying electromagnetic fields, characterized by typical spatial (temporal) scales of variation much larger than $\lambdabar_{\rm C}$ ($\tau_{\rm C}$), derivative corrections should be suppressed relative to contributions scaling with the same power of $\bar F$ but featuring no derivatives.
The present work is devoted to the study of the leading derivative corrections to the Heisenberg-Euler effective action.
The one-loop Heisenberg-Euler effective action in constant fields has been worked out in Refs.~\cite{Heisenberg:1935qt,Weisskopf:1996bu,Schwinger:1951nm}, and the leading derivative correction in Refs.~\cite{Gusynin:1995bc,Gusynin:1998bt}.
For $\Gamma_{\rm HE}$ in constant fields at two loops, see Refs.~\cite{Ritus:1975cf,Ritus:1977iu,Gies:2016yaa}. Apart from this, higher-loop results in constant fields and lower space-time dimensions \cite{Huet:2011kd,Huet:2018ksz}, as well as one-loop results for specific purely electric or magnetic (one-dimensional) field inhomogeneities are available \cite{Narozhnyi:1970uv,Mamaev:1981dt,Cangemi:1995ee,Dunne:1997kw,Dunne:1998ni,Kim:2009pg}. See also Ref.~\cite{Navarro-Salas:2020oew} for an adiabatic propertime expansion of $\Gamma_{\rm HE}$ at one-loop, and Ref.~\cite{Pegoraro:2021whz} for a study of nonlinear waves in a dispersive vacuum described with a high order derivative electromagnetic Lagrangian.
Our article is organized as follows: after detailing the strategy devised to determine the leading derivative corrections to the Heisenberg-Euler effective action in Sec.~\ref{sec:approach}, we employ our approach to determine the leading derivative correction to the Heisenberg-Euler effective action in Sec.~\ref{sec:calc}. Thereafter, in Sec.~\ref{sec:purefield} we focus on the special cases of magnetic- and electric-like fields for which only one of the secular invariants of the electromagnetic field does not vanish. Finally, we end with conclusions and an outlook in Sec.~\ref{sec:concls}.
\section{Our Approach}\label{sec:approach}
Here, we demonstrate that the leading derivative correction to the Heisenberg-Euler effective action can efficiently be determined from the vacuum polarization tensor evaluated in a generic constant and homogeneous background field $\bar F$.
In position space, this correction contains exactly two derivatives but arbitrary powers of the electromagnetic field $\bar F$.
Our derivation -- which is somewhat reminiscent of the approach~\cite{Karbstein:2007be} devised in a different context -- constitutes an alternative route to the result of Gusynin and Shovkovy \cite{Gusynin:1995bc,Gusynin:1998bt}, who determined this correction at one-loop order.
To this end, we first note that the photon polarization tensor generically mediates a quantum-fluctuation induced effective interaction between two inhomogeneous electromagnetic fields characterized by the vector potential $A(x)$.
In turn, it is a central ingredient to the effective action describing the physics of arbitrary-frequency fields in the presence of a constant background field \cite{Gies:1999vb,Karbstein:2011ja}. In position space, this effective action reads
\begin{equation}
\Gamma[A(x)]=-\frac{1}{4}\int_xF_{\mu\nu}(x)F^{\mu\nu}(x)-\frac{1}{2} \int_x\int_{x'} A_\mu(x)\Pi^{\mu\nu}(x-x'|\bar F)A_\nu(x') + {\cal O}(A^3)\,. \label{eq:GammaAgenfreq}
\end{equation}
Here, $F(x)$ denotes the field strength tensor of the manifestly inhomogeneous field $A(x)$, and $\Pi^{\mu\nu}(x-x'|\bar F)$ is the polarization tensor evaluated in the background field $\bar F$.
The neglected higher-order terms encode effective self-interactions of the field $A(x)$. To keep notations compact, throughout this work we employ the shorthand notations $\int_x\equiv\int{\rm d}^4x$ and $\int_k\equiv\int{\rm d}^4k/(2\pi)^4$ for integrations over position and momentum space, respectively. Moreover, we use the Heaviside-Lorentz system with $c=\hbar=1$;
$g^{\mu\nu}={\rm diag}(-1,+1,+1,+1)$.
Due to translational invariance in homogeneous constant fields, in momentum space the polarization tensor $\Pi^{\mu\nu}(k,k'|\bar F)=\int_x \int_{x'}{\rm e}^{{\rm i}kx}\,\Pi^{\mu\nu}(x-x'|\bar F)\,{\rm e}^{{\rm i}k'x'}$ does not depend explicitly on both the in- and outgoing momenta, but is a function of the momentum transfer $k$ only. This implies that $\Pi^{\mu\nu}(k,k'|\bar F)\sim(2\pi)^4\delta(k+k')$ and resembles the situation at zero background field, where the vacuum polarization tensor can be solely expressed in terms of $k$.
There, the Ward identity $k_\mu\Pi^{\mu\nu}=\Pi^{\mu\nu}k_\nu=0$ immediately constrains its tensor structure to be spanned by $(k^2g^{\mu\nu}-k^{\mu}k^{\nu})$.
In the present case, the field strength tensor of the background field $\bar F$ provides an additional building block to form tensor structures compatible with the Ward identity.
However, as both $\Pi$ and $\bar F$ have two Minkowski indices, and the former is a function of $k$ and $\bar F$ only, $\Pi^{\mu\nu}(k,k'|\bar F)$ has to be even in $k$.
Besides, it is even in $\bar F$ and regular at $\bar F=0$.
Upon transformation to position space, insertion into \Eqref{eq:GammaAgenfreq}, and making use of partial integrations, the contribution to $\Pi^{\mu\nu}(k,k'|\bar F)$ which is quartic in $k$ gives rise to an effective interaction term which can be schematically expressed as $\Gamma[F,\bar F]|_{\sim\partial^2}=\int_x\,h^{(2)}(\bar F)\,(\partial F)^2$.
Here, the scalar function $h^{(2)}(\bar F)$ accounts for arbitrary powers of the background field $\bar F$, and we explicitly ensured that a single derivative acts on each factor of the inhomogeneous fields $F(x)$.
Finally, substituting $\partial F\to \partial\bar F$, where $\bar F=\bar F(x)$ is now to be understood as a slowly varying electromagnetic field, we arrive at
\begin{equation}
\Gamma_{\rm HE}[\bar F]|_{\sim\partial^2}=\int_x\,h^{(2)}(\bar F)\,(\partial\bar F)^2\,, \label{eq:GammaPartial2}
\end{equation}
which corresponds to the desired derivative correction to the Heisenberg-Euler effective action $\Gamma_{\rm HE}$ featuring exactly two derivatives, but arbitrary powers of the slowly varying field.
As the derivation of $\Pi^{\mu\nu}(k,k'|\bar F)$ explicitly accounts for all possible variants of coupling the in- and out-fields with momenta $k$, $k'$ and Minkowski indices $\mu$, $\nu$ to the charged particle loop, the procedure outlined above indeed ensures that \Eqref{eq:GammaPartial2} can be identified with the leading derivative correction to the Heisenberg-Euler effective action in the field $F=\bar F+\partial\bar F+\ldots$
We emphasize that for this identification the regrouping of the terms such that each power of the inhomogeneous field $F$ comes with a derivative acting on it prior to the substitution is absolutely essential.
Moreover, we note that though upon insertion into \Eqref{eq:GammaAgenfreq} and appropriate integrations by parts, the contribution to $\Pi^{\mu\nu}(k,k'|\bar F)$ which is quadratic in $k$ results in a contribution $\sim \int_x\,h^{(0)}(\bar F)\,F^2$, this expression does not reproduce the zero-derivative result for $\Gamma_{\rm HE}$ in the limit of $F\to \bar F$. The reason for this is the fact that in the derivation of the photon polarization tensor and \Eqref{eq:GammaAgenfreq} the fields $\bar F$ and $F$ are assumed to be manifestly different. Inconsistencies arise as soon as (at least) one of the couplings to the field $F$ is identified with a coupling to the background field.
The contributions to $\Pi^{\mu\nu}(k,k'|\bar F)$ beyond quartic order, which translate into terms with a higher number of derivatives, are also not helpful for a systematic derivation of higher-order derivative corrections to $\Gamma_{\rm HE}$.
This is a direct consequence of the fact that there is no unambiguous way of assigning the additional derivatives to either of the two inhomogeneous fields $F(x)$ before invoking the substitution $F\to \bar F$. The possibility of partial integrations, which after this substitution also act on the factors of $\bar F$ in the scalar functions $h^{(2n)}(\bar F)$, renders different assignments inequivalent for $n>1$ and implies inconsistent results.
On the other hand, along the lines outlined above the result for the contribution to $\Gamma_{\rm HE}$ containing $n$ derivatives, but arbitrary powers of the field could be extracted from the $n$-rank polarization tensor evaluated in the homogeneous constant background field $\bar F$.
As the determination of the $n$-derivative contribution only requires knowledge of the term scaling as $k^{2n}\sim k^{\sigma_1}\ldots k^{\sigma_{2n}}$ of the $n$-rank polarization tensor, in cases where the required polarization tensor has not yet been determined it suffices to evaluate this tensor only to an accuracy of order $k^{2n}$.
\section{Explicit Calculation}\label{sec:calc}
Subsequently, we employ the strategy outlined above to explicitly determine the quadratic derivative correction to the Heisenberg-Euler effective action at one loop \cite{Gusynin:1995bc,Gusynin:1998bt}.
The determination of this contribution is particularly straightforward because $\Pi^{\mu\nu}(k,k'|\bar F)$ is known analytically at one-loop order \cite{BatShab,Baier:1974hn,Urrutia:1977xb,Dittrich:2000wz,Schubert:2000yt,Dittrich:2000zu}.
However, we emphasize that our approach is not limited to one loop. For instance, a result for the two-loop photon polarization tensor evaluated in a homogeneous constant background field could be readily employed to extract the quadratic derivative correction to $\Gamma_{\rm HE}$ at two loops.
Following the notations of \cite{Dittrich:2000zu}, the photon polarization tensor can be expressed as
\begin{equation}
\Pi^{\mu\nu}(k,k'|\bar F)=(2\pi)^4\delta(k+k')\Bigl\{\Pi_0 P_T^{\mu\nu} +(\Pi_\perp-\Pi_0) P_\perp^{\mu\nu} +(\Pi_\parallel-\Pi_0) P_\parallel^{\mu\nu} +\pi_Q Q^{\mu\nu}\Bigr\}\,, \label{eq:PiF0}
\end{equation}
where
$\Pi_{0,\parallel,\perp}$ and $\pi_Q$ are scalar functions which depend both on the background field $\bar F$ and the transferred momentum $k$. Its tensor structure is spanned by
\begin{equation}
P^{\mu\nu}_T=g^{\mu\nu}-\frac{k^\mu k^\nu}{k^2}\,, \quad
P^{\mu\nu}_\perp=\frac{v_\perp^\mu v_\perp^\nu}{v_\perp^2}\,, \quad
P^{\mu\nu}_\parallel=\frac{v_\parallel^\mu v_\parallel^\nu}{v_\parallel^2}\,, \quad
Q^{\mu\nu}=v_\parallel^\mu v_\perp^\nu +v_\perp^\mu v_\parallel^\nu\,, \label{eq:Tensors}
\end{equation}
where the four-vectors $v_{\parallel,\perp}$ are defined as
\begin{equation}
v_{\parallel/\perp}^\mu=\frac{c_{\pm}(k{}^\star\!\bar F)^\mu\mp c_\mp (k\bar F)^\mu}{c_+^2+c_-^2}\,, \quad\text{such that}\quad v_{\parallel/\perp}^2=\frac{(k\bar F)^2\mp k^2c_\pm^2}{c_+^2+c_-^2}\,.
\end{equation}
Also note that $v_\perp^2-v_\parallel^2=k^2$.
Here, we use the shorthand notation $(k\bar F)^\mu=k_\nu\bar F^{\nu\mu}$, etc., and $c_\pm$ denote the secular invariants of the electromagnetic field. The latter are related to the gauge and Lorentz invariants ${\cal F}=\frac{1}{4}\bar F_{\mu\nu}\bar F^{\mu\nu}$ and ${\cal G}=\frac{1}{4}\bar F_{\mu\nu}{}^\star\!\bar F^{\mu\nu}$ as $c_\pm=(\sqrt{{\cal F}^2+{\cal G}^2}\pm{\cal F})^{1/2}$; ${}^\star\!\bar F^{\mu\nu}$ is the dual field strength tensor.
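For orientation, in conventions where ${\cal F}=(\vec{B}^2-\vec{E}^2)/2$ (signs and prefactors differ between references), the secular invariants reduce in the two limiting cases treated later to:

```latex
\begin{gather}
\text{pure magnetic field:}\quad {\cal G}=0,\ {\cal F}>0 \;\Rightarrow\; c_+=\sqrt{2{\cal F}}=|\vec{B}|\,,\quad c_-=0\,,\nonumber\\
\text{pure electric field:}\quad {\cal G}=0,\ {\cal F}<0 \;\Rightarrow\; c_+=0\,,\quad c_-=\sqrt{-2{\cal F}}=|\vec{E}|\,.\nonumber
\end{gather}
```

Both follow directly from $c_\pm=(\sqrt{{\cal F}^2+{\cal G}^2}\pm{\cal F})^{1/2}$ with ${\cal G}=0$.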
The above definitions are such that the three tensors $P_{\parallel,\perp}^{\mu\nu}$ and $P_0^{\mu\nu}=P_T^{\mu\nu}-P_\parallel^{\mu\nu}-P_\perp^{\mu\nu}$ are projectors and fulfill the usual projector identities. At the same time, $Q^{\mu\nu}$ is only orthogonal to $P_0^{\mu\nu}$ and not a projector.
Defining $\pi_T=\Pi_0/k^2$ and $\pi_{\parallel/\perp}=(\Pi_{\parallel/\perp}-\Pi_0)/v_{\parallel/\perp}^2$, \Eqref{eq:PiF0} can alternatively
be represented as
\begin{align}
\Pi^{\mu\nu}(k,k'|\bar F)&=(2\pi)^4\delta(k+k')\Bigl\{(k^2g^{\mu\nu}-k^\mu k^\nu)\,\pi_T + (k\bar F)^\mu(k\bar F)^\nu \pi_{\bar F\bar F} + (k{}^\star\!\bar F)^\mu(k{}^\star\!\bar F)^\nu\pi_{{}^\star\!\bar F{}^\star\!\bar F} \nonumber\\
&\hspace*{3.5cm}+[(k{}^\star\!\bar F)^\mu(k\bar F)^\nu + (k\bar F)^\mu(k{}^\star\!\bar F)^\nu]\pi_{{}^\star\!\bar F \bar F}\Bigr\}\,.
\label{eq:Pi_F}
\end{align}
The scalar coefficients $\pi_p$ in \Eqref{eq:Pi_F} are given by
\begin{align}
\pi_{\bar F \bar F}&=\frac{1}{(c_+^2+c_-^2)^2}\bigl[c_+^2\pi_\perp+c_-^2\pi_\parallel-2c_+c_-\pi_Q\bigr]\,, \nonumber\\
\pi_{{}^\star\!\bar F{}^\star\!\bar F}&=\frac{1}{(c_+^2+c_-^2)^2}\bigl[c_-^2\pi_\perp+c_+^2\pi_\parallel+2c_+c_-\pi_Q\bigr]\,, \nonumber\\
\pi_{{}^\star\!\bar F\bar F}&=\frac{1}{(c_+^2+c_-^2)^2}\bigl[c_+c_-(\pi_\perp-\pi_\parallel)+(c_+^2-c_-^2)\pi_Q\bigr]\,.
\label{eq:pi_FF}
\end{align}
While this structure is general, the explicit expressions for the scalar functions encoding the nontrivial dependences on $\bar F$ and $k$ at one-loop order can be cast in the following form,
\begin{equation}
\left\{\begin{array}{c}
\!\!\pi_T\!\! \\ \!\!\pi_\parallel\!\! \\ \!\!\pi_\perp\!\! \\ \!\!\pi_Q\!\! \\
\end{array}\right\}
=\frac{\alpha}{2\pi}\int_0^\infty\frac{{\rm d}s}{s}\,{\rm e}^{-{\rm i}m^2s}\!\left[\int_0^1{\rm d}\nu\,{\rm e}^{-{\rm i}(v_\perp^2 n_\perp-v_\parallel^2n_\parallel)s}\,\frac{zz'}{\sin z\sinh z'}
\left\{\begin{array}{c}
N_0 \\ \!\!N_0-N_1\!\! \\ \!\!N_2-N_0\!\! \\ -N_3
\end{array}\right\}
- \left\{\begin{array}{c}
\!\!\frac{2}{3}\!\! \\ 0 \\ 0 \\ 0
\end{array}\right\}\right]\!, \label{eq:pi}
\end{equation}
with
\begin{align}
N_0&=\cos(\nu z)\cosh(\nu z')-\cot z\sin(\nu z)\coth z'\sinh(\nu z')\,, \nonumber\\
N_1&=2\cos z\,\frac{\cosh z'-\cosh(\nu z')}{\sinh^2 z'}\,, \quad N_2=N_1|_{z\leftrightarrow-{\rm i}z'}\,, \nonumber\\
N_3&=\frac{1-\cos z\cos(\nu z)}{\sin z}\frac{1-\cosh z'\cosh(\nu z')}{\sinh z'}+\sin(\nu z)\sinh(\nu z')\,, \nonumber\\
n_\parallel&=\frac{\cosh z'-\cosh(\nu z')}{2z'\sinh z'}\,, \quad n_\perp=n_\parallel|_{z\leftrightarrow-{\rm i}z'}\,,
\end{align}
where we used the shorthand notations $z=ec_+s$ and $z'=ec_-s$. Here and in the following, the prescription $m^2\to m^2-{\rm i}0^+$ for the square of the electron mass $m$ is implicitly assumed. Besides, the integration contour of the propertime integration is implicitly assumed to lie slightly below the real positive axis \cite{Karbstein:2013ufa}.
Note that the entire momentum dependence of \Eqref{eq:pi} is encoded in the phase of the propertime integral over $s$.
Hence, it is obvious that all the scalar functions $\pi_p$ introduced above can be formally expanded as $\pi_p=\sum_{n=0}^\infty\pi_p^{(2n)}$, with $\pi_p^{(2n)}\sim k^{2n}$.
The contributions $\pi_p^{(2n)}$ constitute the photon polarization tensor $\Pi^{\mu\nu}(k,k'|\bar F)$ at order $k^{2n+2}$. In turn, here we are specifically interested in $\pi_p^{(2)}$; the polarization at this order $\Pi^{(2)\mu\nu}(k,k'|\bar F)$ follows from \Eqref{eq:Pi_F} upon substitution of the coefficients $\pi_p\to\pi_p^{(2)}$.
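Since the entire $k$ dependence of \Eqref{eq:pi} resides in the phase factor and $v_{\parallel}^2,v_{\perp}^2\sim k^2$, the order-$k^2$ coefficients follow (schematically) from expanding the exponential to first order,

```latex
\begin{equation*}
{\rm e}^{-{\rm i}(v_\perp^2 n_\perp - v_\parallel^2 n_\parallel)s}
= 1 - {\rm i}\,(v_\perp^2 n_\perp - v_\parallel^2 n_\parallel)\,s + {\cal O}(k^4)\,,
\end{equation*}
```

such that $\pi_p^{(2)}$ is obtained from the term linear in $s\,(v_\perp^2 n_\perp - v_\parallel^2 n_\parallel)$.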
Clearly, central building blocks to $\pi_p^{(2)}$ are
\begin{equation}
{\cal N}_i^{\parallel}=\int_0^1{\rm d}\nu\,n_{\parallel}N_i \quad\text{and}\quad {\cal N}_i^{\perp}=\int_0^1{\rm d}\nu\,n_{\perp}N_i\,, \label{eq:Niparallelperp}
\end{equation}
with $i\in\{0,1,2,3\}$.
The integral over $\nu$ in \Eqref{eq:Niparallelperp} can be performed explicitly, yielding
\begin{align}
{\cal N}^\parallel_0(z,z')&=\frac{1}{z^2+4z'^2}\frac{z'}{z}\biggl[\frac{3}{2}\frac{z^2}{z^2+z'^2}\coth z'\Bigl(\frac{\cosh z'}{\sin z}-\frac{z'}{z}\frac{\cos z}{\sinh z'}\Bigr)-\frac{\sin z}{\sinh z'}\biggr], \nonumber\\
{\cal N}^\parallel_1(z,z')&=\frac{1}{z'}\frac{\cos z}{\sinh z'}\biggl[1+\frac{3}{2}\frac{1}{\sinh z'}\Bigl(\frac{1}{\sinh z'}-\frac{\cosh z'}{z'}\Bigr)\biggr], \nonumber\\
{\cal N}_2^\parallel(z,z')&=\frac{\cosh z'}{\sin z}\biggl[\frac{1}{z^2+z'^2}\Bigl(\frac{z'}{z}\coth z' +\frac{z^2}{z'^2}\cot z\Bigr)-\frac{\cot z\coth z'}{z'}\biggr], \nonumber\\
{\cal N}^\parallel_3(z,z')&=\frac{3}{4}\frac{\coth z'}{\sin z}\frac{1}{z'}\Bigl(\frac{1}{\sinh z'} -\frac{\cosh z'}{z'}\Bigr)+\frac{1}{2}\frac{1}{z^2+z'^2}\frac{z^2}{z'^2}\frac{\sinh z'}{\sin z} \nonumber\\
&\quad+\frac{3}{2}\frac{1}{z^2+4z'^2}\frac{z'^2}{z^2+z'^2}\biggl[2\coth z'\Bigl(\frac{\cosh z'}{\sin z}-\frac{z'}{z}\frac{\cos z}{\sinh z'}\Bigr)-\frac{\sin z}{\sinh z'}\biggr],
\label{eq:calN}
\end{align}
as well as ${\cal N}_0^\perp={\cal N}_0^\parallel|_{z\leftrightarrow-{\rm i}z'}$, $ {\cal N}^\perp_1={\cal N}^\parallel_2 |_{z\leftrightarrow-{\rm i}z'}$, $ {\cal N}^\perp_2={\cal N}^\parallel_1 |_{z\leftrightarrow-{\rm i}z'}$ and $ {\cal N}^\perp_3={\cal N}^\parallel_3 |_{z\leftrightarrow-{\rm i}z'}$.
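As an illustrative cross-check (not part of the derivation), the $\nu$-integration can be verified numerically. The following sketch, with arbitrary sample values for $z$ and $z'$, compares a simple quadrature of $\int_0^1{\rm d}\nu\,n_\parallel N_1$ against the closed form of ${\cal N}_1^\parallel$ quoted above.

```python
import math

def n_parallel(nu, zp):
    # n_parallel = (cosh z' - cosh(nu z')) / (2 z' sinh z')
    return (math.cosh(zp) - math.cosh(nu * zp)) / (2 * zp * math.sinh(zp))

def N1(nu, z, zp):
    # N_1 = 2 cos z (cosh z' - cosh(nu z')) / sinh^2 z'
    return 2 * math.cos(z) * (math.cosh(zp) - math.cosh(nu * zp)) / math.sinh(zp) ** 2

def simpson(f, a, b, n=2000):
    # composite Simpson rule with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def calN1_parallel_closed(z, zp):
    # closed form of {\cal N}_1^\parallel(z, z') quoted in the text
    return (math.cos(z) / (zp * math.sinh(zp))) * (
        1 + 1.5 / math.sinh(zp) * (1 / math.sinh(zp) - math.cosh(zp) / zp)
    )

z, zp = 0.7, 0.9  # arbitrary sample values of z = e c_+ s and z' = e c_- s
numeric = simpson(lambda nu: n_parallel(nu, zp) * N1(nu, z, zp), 0.0, 1.0)
closed = calN1_parallel_closed(z, zp)
```

The same comparison can be repeated for the remaining ${\cal N}_i^{\parallel,\perp}$.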
For completeness and later reference, we also provide the leading contributions of these quantities in a weak-field expansion. The respective results are
\begin{align}
{\cal N}^\parallel_0(z,z')&=\frac{2}{15}-\frac{e^2}{315}(c_+^2+2c_-^2)s^2+{\cal O}(\bar F^4)\,, \nonumber\\
{\cal N}^\parallel_1(z,z')&=\frac{2}{15}-\frac{e^2}{315}(21c_+^2+13c_-^2)s^2+{\cal O}(\bar F^4)\,, \nonumber\\
{\cal N}^\parallel_2(z,z')&=\frac{2}{15}+\frac{e^2}{315}(10c_+^2+18c_-^2)s^2+{\cal O}(\bar F^4)\,, \nonumber\\
{\cal N}^\parallel_3(z,z')&=-\frac{e^2}{35}c_+c_-s^2+{\cal O}(\bar F^4)\,. \label{eq:calN_series}
\intertext{Moreover, note that}
\frac{zz'}{\sin z\sinh z'}&=1+\frac{e^2}{6}(c_+^2-c_-^2)s^2+{\cal O}(\bar F^4)\,. \nonumber
\end{align}
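The quoted ${\cal O}(s^2)$ coefficient of $zz'/(\sin z\sinh z')$ can be spot-checked numerically; the sketch below (with arbitrary sample values for $e$, $c_+$ and $c_-$) extracts the $s^2$ coefficient from a small-propertime evaluation.

```python
import math

# arbitrary sample values for the coupling and the secular invariants
e, cp, cm = 1.0, 1.3, 0.4

def f(s):
    z, zp = e * cp * s, e * cm * s
    return (z * zp) / (math.sin(z) * math.sinh(zp))

# extract the s^2 coefficient of f(s) - 1 at a small propertime s
s = 1e-3
coeff_numeric = (f(s) - 1.0) / s**2
coeff_series = e**2 * (cp**2 - cm**2) / 6  # quoted expansion coefficient
```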
Introducing the shorthand notations
\begin{equation}
{\cal N}_i^-={\cal N}_i^\parallel-{\cal N}_i^\perp \quad\text{and}\quad {\cal N}_i^+=\frac{c_+^2}{c_+^2+c_-^2}\,{\cal N}_i^\parallel+\frac{c_-^2}{c_+^2+c_-^2}\,{\cal N}_i^\perp\,,
\end{equation}
the functions $\pi_p^{(2)}$ can then be compactly expressed as
\begin{equation}
\pi^{(2)}_p = k_\alpha h^{\alpha\beta}_p(\bar F)k_\beta\,,
\quad\text{with}\quad
h^{\alpha\beta}_p(\bar F) = \frac{\bar F^\alpha_{\ \, \tau}\bar F^{\beta\tau}}{c_+^2+c_-^2} h_p^-(c_+,c_-) -g^{\alpha\beta}h_p^+(c_+,c_-)\,,
\label{eq:h_p}
\end{equation}
where
\begin{equation}
\left\{\begin{array}{c}
h_T^\pm \\ \!h_\parallel^\pm\! \\ \!h_\perp^\pm\! \\ h_Q^\pm
\end{array}\right\}
={\rm i}\frac{\alpha}{2\pi}\int_0^\infty{\rm d}s\,{\rm e}^{-{\rm i}m^2s}\,\frac{zz'}{\sin z\sinh z'}
\left\{\begin{array}{c}
{\cal N}_0^\pm \\ \!{\cal N}_0^\pm-{\cal N}_1^\pm\! \\ \!{\cal N}_2^\pm-{\cal N}_0^\pm\! \\ -{\cal N}_3^\pm
\end{array}\right\}.
\label{eq:hs}
\end{equation}
The fact that the functions $\pi^{(2)}_p$ are regular at $c_+=c_-=0$ and feature asymptotic expansions in terms of combinations of $c_+$ and $c_-$ is not obvious.
However, at least at low orders one can easily convince oneself that this is indeed the case by performing explicit expansions; cf. also \Eqref{eq:calN_series}.
Besides, we note that Eqs.~\eqref{eq:pi_FF} and \eqref{eq:h_p} imply that $h_{\bar F\bar F}^{\alpha\beta}$ ($h_{\bar F\bar F}^\pm$) relates to $h_\perp^{\alpha\beta}$, $h_\parallel^{\alpha\beta}$ and $h_Q^{\alpha\beta}$ ($h_\perp^\pm$, $h_\parallel^\pm$ and $h_Q^\pm$) in exactly the same way as $\pi_{\bar F\bar F}$ relates to $\pi_\perp$, $\pi_\parallel$ and $\pi_Q$, etc.
With these preparations, we can now explicitly determine the quadratic derivative correction $\Gamma_{\rm HE}[\bar F]|_{\sim \partial^2}$ to the Heisenberg-Euler effective action.
Following the strategy outlined above and using the Fourier representation of the gauge field $A^\mu(x)=\int_k{\rm e}^{{\rm i}kx}A^\mu(k)$, we first evaluate the quantity
\begin{equation}
\Gamma[F,\bar F]\big|_{\sim\partial^2} =-\frac{1}{2}\int_k \int_{k'} A_\mu(k)\,\Pi^{(2)\mu\nu}(k,k'|\bar F)\, A_\nu(k')\,.
\label{eq:DeltaGamma}
\end{equation}
A direct consequence of our definition of the momentum space representation of the gauge field is
$F^{\mu\nu}(x)= \int_k{\rm e}^{{\rm i}kx} F^{\mu\nu}(k)$ with $F^{\mu\nu}(k)={\rm i}\bigl(k^\mu A^\nu(k)-k^\nu A^\mu(k)\bigr)$.
Therewith it is easy to show that $(k\bar F)^\nu A_\nu(k)=-{\rm i}\bar{F}^{\rho\nu} F_{\rho\nu}(k)/2$ and analogously $(k {}^*\!\bar F)^\nu A_\nu(k)=-{\rm i}\,{}^*\!\bar{F}^{\rho\nu} F_{\rho\nu}(k)/2$.
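The first of these index identities is a short exercise in antisymmetry; it can be spot-checked numerically as follows (arbitrary sample numbers, with $A_\nu(k)$ taken real for simplicity).

```python
# Spot-check of (k \bar F)^\nu A_\nu = -(i/2) \bar F^{\rho\nu} F_{\rho\nu}(k),
# with F_{\rho\nu}(k) = i (k_\rho A_\nu - k_\nu A_\rho).
k = [0.3, -1.1, 0.7, 0.2]    # k_mu (lower index), arbitrary sample values
A = [1.0, 0.5, -0.8, 0.4]    # A_nu(k) (lower index), taken real for simplicity

# arbitrary antisymmetric \bar F^{\rho\nu} (upper indices)
Fbar = [[0.0, 1.2, -0.4, 0.9],
        [-1.2, 0.0, 0.6, -0.3],
        [0.4, -0.6, 0.0, 1.1],
        [-0.9, 0.3, -1.1, 0.0]]

# left-hand side: k_rho \bar F^{rho nu} A_nu
lhs = sum(k[r] * Fbar[r][n] * A[n] for r in range(4) for n in range(4))

# right-hand side: -(i/2) \bar F^{rho nu} F_{rho nu}(k)
F_k = [[1j * (k[r] * A[n] - k[n] * A[r]) for n in range(4)] for r in range(4)]
rhs = -0.5j * sum(Fbar[r][n] * F_k[r][n] for r in range(4) for n in range(4))
```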
Correspondingly, we find
\begin{align}
\Gamma[F,\bar F]\big|_{\sim\partial^2}&=\frac{1}{4}\int_k \,\Bigl\{\bigl[k_\alpha F_{\mu\nu}(k)\bigr]\!\bigl[-k_\beta F^{\mu\nu}(-k)\bigr]\, h^{\alpha\beta}_T(\bar F)\nonumber\\
& \hspace*{2cm} + \frac{1}{2}\bigl[ k_\alpha F_{\sigma\mu}(k)\bigr]\!\bigl[- k_\beta F_{\rho\nu}(-k)\bigr] \Bigl[\bar{F}^{\sigma\mu}\bar{F}^{\rho\nu} h^{\alpha\beta}_{\bar F\bar F}(\bar F) + {}^*\!\bar{F}^{\sigma\mu} {}^*\!\bar{F}^{\rho\nu} h^{\alpha\beta}_{{}^*\! \bar F{}^*\! \bar F}(\bar F)\nonumber\\
&\hspace*{8cm} + 2 {}^*\!\bar{F}^{\sigma\mu}\bar{F}^{\rho\nu} h^{\alpha\beta}_{{}^*\! \bar F \bar F}(\bar F)\Bigr]\Bigr\}\,.
\label{eq:DeltaGamma2}
\end{align}
Accounting for the identities $\int_k {\rm e}^{{\rm i}kx} [k_\alpha F_{\mu\nu}(k)]=-{\rm i}\partial_\alpha F_{\mu\nu}(x)$ and $\int_k u(k)v(-k)=\int_x u(x)v(x)$, this expression can be readily transformed to position space.
Finally substituting $F\to\bar F(x)$ and $\bar F\to\bar F(x)$, \Eqref{eq:DeltaGamma2} yields the desired contribution to the Heisenberg-Euler effective action,
\begin{align}
\Gamma_{\rm HE}[\bar F]\big|_{\sim \partial^2}&=-\frac{1}{4}\int_x \,\Bigl\{\partial_\alpha \bar F_{\mu\nu} \partial_\beta \bar F^{\mu\nu}\, h^{\alpha\beta}_T(\bar F)\nonumber\\
& \hspace*{2.2cm} + \frac{1}{2} \partial_\alpha \bar F_{\sigma\mu} \partial_\beta \bar F_{\rho\nu} \Bigl[\bar{F}^{\sigma\mu}\bar{F}^{\rho\nu} h^{\alpha\beta}_{\bar F\bar F}(\bar F) + {}^*\!\bar{F}^{\sigma\mu} {}^*\!\bar{F}^{\rho\nu} h^{\alpha\beta}_{{}^*\!\bar F{}^*\!\bar F}(\bar F)\nonumber\\
&\hspace*{7cm} + 2 {}^*\!\bar{F}^{\sigma\mu}\bar{F}^{\rho\nu} h^{\alpha\beta}_{{}^*\!\bar F \bar F}(\bar F)\Bigr]\Bigr\}\,,
\label{eq:Gamma2derivs_gen}
\end{align}
where $\bar F=\bar F(x)$ is to be implicitly understood. We note that this expression with tensor structures~\eqref{eq:h_p} is generic and holds at all loop orders.
With the help of \Eqref{eq:calN_series}, we infer the following weak-field limits for the tensor structures in \Eqref{eq:Gamma2derivs_gen} at one loop,
\begin{align}
h_T^{\alpha\beta}(\bar F)&=-\frac{1}{15}\frac{\alpha}{\pi}\frac{1}{m^2}\Bigl[1-\frac{1}{7}\Bigl(\frac{e}{m^2}\Bigr)^2\bar F_{\kappa\lambda}\bar F^{\kappa\lambda}\Bigr]g^{\alpha\beta}+\frac{1}{105}\frac{\alpha}{\pi}\frac{1}{m^2}\Bigl(\frac{e}{m^2}\Bigr)^2 \bar F^\alpha_{\ \, \tau}\bar F^{\beta\tau}+{\cal O}(\bar F^4)\,, \nonumber\\
h_{\bar F\bar F}^{\alpha\beta}(\bar F)&=\frac{11}{315}\frac{\alpha}{\pi}\frac{1}{m^2}\Bigl(\frac{e}{m^2}\Bigr)^2g^{\alpha\beta}+{\cal O}(\bar F^2)\,, \nonumber\\
h_{{}^*\!\bar F{}^*\!\bar F}^{\alpha\beta}(\bar F)&=\frac{4}{63}\frac{\alpha}{\pi}\frac{1}{m^2}\Bigl(\frac{e}{m^2}\Bigr)^2g^{\alpha\beta}+{\cal O}(\bar F^2)\,, \nonumber\\
h_{{}^*\!\bar F\bar F}^{\alpha\beta}(\bar F)&={\cal O}(\bar F^2)\,. \label{eq:weakfield}
\end{align}
Upon plugging these results into \Eqref{eq:Gamma2derivs_gen} and using the identity~\eqref{eq:id:*F*FFF} to eliminate the dependences of the dual field strength tensor, we obtain
\begin{align}
{\cal L}_{\rm HE}^{1\text{-loop}}(\bar F)\big|_{\sim \partial^2}&=\frac{1}{60}\frac{\alpha}{\pi}\frac{1}{m^2}\partial_\alpha\bar F_{\mu\nu}\partial^\alpha\bar F^{\mu\nu} \nonumber\\
&\quad+\frac{\alpha}{\pi}\Bigl(\frac{e}{m^2}\Bigr)^2\frac{1}{m^2}\Bigl[\frac{1}{180}\bar F_{\mu\nu}\bar F^{\mu\nu}\partial^\alpha\bar F_{\rho\sigma}\partial_\alpha\bar F^{\rho\sigma} +\frac{1}{280}\bar F_{\mu\nu}\bar F_{\rho\sigma}\partial^\alpha \bar F^{\mu\nu} \partial_\alpha\bar F^{\rho\sigma} \nonumber\\
&\hspace*{2cm}-\frac{2}{63}\bar F_{\rho\mu}\bar F^{\sigma\mu} \partial^\alpha\bar F_{\sigma\nu} \partial_\alpha\bar F^{\rho\nu}-\frac{1}{420}\bar F_{\rho\sigma}\bar F^{\rho\alpha}\partial^\sigma\bar F_{\mu\nu}\partial_\alpha\bar F^{\mu\nu}\Bigr] \nonumber\\
&\quad+{\cal O}(\bar F^6)\,. \label{eq:LHEpartial^2LO}
\end{align}
It is noteworthy that the contribution to \Eqref{eq:LHEpartial^2LO} which is quartic in the field strength can be expressed in terms of just four different tensor structures.
\section{Magnetic- and Electric-like Field Configurations}\label{sec:purefield}
In the remainder, we focus on the special situation where only one of the two invariants $c_+$ or $c_-$ does not vanish. The remaining parameter may be arbitrarily strong.
This grants access to the cases of a purely magnetic and electric field, respectively.
In these cases, additional insights are possible: (i) the asymptotic expansion for perturbatively weak fields can be organized in terms of a single infinite sum, with all expansion coefficients known explicitly; moreover, (ii) the propertime integration over $s$ can even be performed explicitly, and the result can be expressed in terms of the Hurwitz zeta function $\zeta(l,\chi)=\sum_{n=0}^\infty(\chi+n)^{-l}$ and derivatives thereof; primes on $\zeta$ denote derivatives with respect to $l$.
First of all, we note that for either $c_+=0$ or $c_-=0$ \Eqref{eq:Gamma2derivs_gen} simplifies significantly due to the fact that in this case $\partial_\alpha\bar F_{\sigma\mu}{}^*\!\bar F^{\sigma\mu}=2\partial_\alpha{\cal G}=0$, which implies that
\begin{align}
{\cal L}_{\rm HE}(\bar F)\big|_{\sim \partial^2}=-\frac{1}{4}\partial_\alpha\bar F_{\mu\nu} \partial_\beta\bar F^{\mu\nu}\, h^{\alpha\beta}_T(\bar F)
- \frac{1}{8} \partial_\alpha\bar F_{\sigma\mu} \partial_\beta\bar F_{\rho\nu}\bar{F}^{\sigma\mu}\bar{F}^{\rho\nu} h^{\alpha\beta}_{\bar F\bar F}(\bar F) \,.
\label{eq:Gamma2derivs_B(E)}
\end{align}
Hence, the only quantities to be determined in this specific limit are $h^{\alpha\beta}_T(\bar F)$ and $h^{\alpha\beta}_{\bar F\bar F}(\bar F)$.
Aiming at their explicit determination, we note that for finite $c_+$ but $c_-=0\leftrightarrow z'=0$ we have
\begin{align}
\frac{z}{\sin z}{\cal N}^\parallel_0(z,0)&=-\frac{1}{z^2}\Bigl[\frac{3}{2}\Bigl(\partial_z+\frac{1}{z}\Bigr)\cot z+1\Bigr], \nonumber\\
\frac{z}{\sin z}{\cal N}^\parallel_1(z,0)&=\frac{2}{15}z\cot z\,, \nonumber\\
\frac{z}{\sin z}{\cal N}^\parallel_2(z,0)&=-\Bigl[\frac{1}{2}\Bigl(\frac{1}{z}+\frac{z}{3}\Bigr)\partial_z+\frac{1}{z^2}\Bigr]\partial_z\cot z, \nonumber\\
\frac{z}{\sin z}{\cal N}^\parallel_3(z,0)&=0\,,
\label{eq:pure_z_parallel}
\end{align}
and
\begin{align}
\frac{z}{\sin z}{\cal N}^\perp_0(z,0)&=-\frac{3}{8}\Bigl\{\frac{1}{z^2}+\Bigl[\frac{1}{2z}\partial_z+\Bigl(\frac{1}{z^2}+\frac{2}{3}\Bigr)\Bigr]\partial_z\cot z\Bigr\}\,, \nonumber\\
\frac{z}{\sin z}{\cal N}^\perp_1(z,0)&=\Bigl[\Bigl(\frac{1}{z^2}+\frac{1}{3}\Bigr)\partial_z+\frac{1}{z^3}\Bigr]\cot z+\frac{1}{z^2}+\frac{1}{3}\,, \nonumber\\
\frac{z}{\sin z}{\cal N}_2^\perp(z,0)&=-\frac{1}{4}\Bigl(\partial_z+\frac{3}{z}\Bigr)\partial_z^2\cot z\,, \nonumber\\
\frac{z}{\sin z}{\cal N}^\perp_3(z,0)&=0\,.
\label{eq:pure_z_perp}
\end{align}
Obviously, these quantities can be written entirely in terms of products of powers of $z$ and $\cot z$ as well as derivatives thereof.
The analogous expressions for $c_+=0\leftrightarrow z=0$ but finite $c_-$ follow straightforwardly with the identities given below \Eqref{eq:calN}.
In turn, the only two non-trivial identities needed to determine the perturbative weak field expansions of \Eqref{eq:Gamma2derivs_B(E)} are
\begin{equation}
\cot(z)=\sum_{n=0}^\infty(-1)^n\frac{2^{2n}{\cal B}_{2n}}{(2n)!}z^{2n-1}\quad\text{for}\quad |z|<\pi\,,\quad\quad
(\text{\cite{Gradshteyn}}: 1.411.11)
\end{equation}
where ${\cal B}_{2n}$ denote Bernoulli numbers, and
\begin{equation}
\int_0^\infty{\rm d}s\,z^{n+\epsilon}\,{\rm e}^{-{\rm i}m^2 s} =\frac{1}{ec_+}\frac{\Gamma(n+1+\epsilon)}{{\rm i}^{n+1+\epsilon}}\Bigl(\frac{ec_+}{m^2}\Bigr)^{n+1+\epsilon} \,,\quad\quad
(\text{\cite{Gradshteyn}}: 3.551.2)\label{eq:ints1}
\end{equation}
which holds individually for $n+\epsilon>-1$.
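The Bernoulli-number series for $\cot z$ can be checked numerically at a sample point inside its disk of convergence; the following sketch (not part of the derivation) generates the ${\cal B}_{2n}$ with the standard recursion and compares a partial sum against $\cos z/\sin z$.

```python
from fractions import Fraction
from math import comb, cos, sin

def bernoulli(nmax):
    # Bernoulli numbers B_0..B_nmax via the recursion
    # sum_{j=0}^{m-1} C(m+1, j) B_j = -(m+1) B_m  (convention B_1 = -1/2)
    B = [Fraction(1)]
    for m in range(1, nmax + 1):
        B.append(-Fraction(1, m + 1) * sum(comb(m + 1, j) * B[j] for j in range(m)))
    return B

B = bernoulli(40)

def cot_series(z, nterms=20):
    # partial sum of cot z = sum_n (-1)^n 2^{2n} B_{2n} / (2n)! z^{2n-1}, |z| < pi
    total = 0.0
    fact = 1  # running value of (2n)!
    for n in range(nterms):
        if n > 0:
            fact *= (2 * n - 1) * (2 * n)
        total += (-1) ** n * 4 ** n * float(B[2 * n]) / fact * z ** (2 * n - 1)
    return total

z = 0.5  # arbitrary sample point with |z| < pi
approx = cot_series(z)
exact = cos(z) / sin(z)
```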
Therewith we infer the following expressions for the scalar coefficients determining the tensors $h_p^{\alpha\beta}$ in \Eqref{eq:Gamma2derivs_B(E)} for the case of $c_-=0$,
\begin{align}
h_T^+(c_+,0)&=-\frac{\alpha}{\pi}\frac{1}{m^2}\sum_{n=0}^\infty\frac{12{\cal B}_{2(n+2)}}{(2n+1)(2n+2)(2n+3)}\Bigl(\frac{2ec_+}{m^2}\Bigr)^{2n}, \nonumber\\
h_T^-(c_+,0)&=\frac{\alpha}{\pi}\frac{1}{m^2}\sum_{n=1}^\infty \frac{1}{4}\frac{1}{n+1}\Bigl[\frac{3(2n-5){\cal B}_{2(n+2)}}{(2n+1)(2n+3)}-{\cal B}_{2(n+1)}\Bigr]\Bigl(\frac{2ec_+}{m^2}\Bigr)^{2n}\,, \label{eq:h+-B}\\
h^+_{\bar F\bar F}(c_+,0)&=-\frac{\alpha}{\pi}\frac{1}{m^2}\Bigl(\frac{e}{m^2}\Bigr)^2 \sum_{n=0}^\infty 4\frac{n+1}{n+2}\Bigl[\frac{4{\cal B}_{2(n+3)}}{(2n+3)(2n+5)}-\frac{{\cal B}_{2(n+2)}}{3}\Bigr]\Bigl(\frac{2ec_+}{m^2}\Bigr)^{2n}\,, \nonumber\\
h^-_{\bar F\bar F} (c_+,0)&=\frac{\alpha}{\pi}\frac{1}{m^2}\Bigl(\frac{e}{m^2}\Bigr)^2\sum_{n=1}^\infty\frac{1}{n+2}\Bigl[\frac{16n^2+50n+49}{(2n+3)(2n+5)}{\cal B}_{2(n+3)}+\frac{4n+7}{3}{\cal B}_{2(n+2)}\Bigr]\Bigl(\frac{2ec_+}{m^2}\Bigr)^{2n}\,. \nonumber
\end{align}
On the other hand, when aiming at performing the propertime integration over $s$ without resorting to an expansion, we need another identity apart from \Eqref{eq:ints1}, namely
\begin{align}
&\int_0^\infty{\rm d}s\,(as)^{n+\epsilon}\,{\rm e}^{-{\rm i}m^2 s}\, {\rm coth}(as) \nonumber\\
&\quad\quad=\frac{1}{a}\frac{\Gamma(n+1+\epsilon)}{2^{n+1+\epsilon}}\biggl[2\zeta\bigl(n+1+\epsilon,\tfrac{{\rm i}m^2}{2a}\bigr)-\Bigl(\frac{2a}{{\rm i}m^2}\Bigr)^{n+1+\epsilon}\biggr] \,, \quad\quad
(\text{\cite{Gradshteyn}}: 3.551.3)\label{eq:ints2}
\end{align}
which holds individually for $n+\epsilon>0$ and $a=|a|\,{\rm e}^{{\rm i}\delta}$ with $0\leq\delta<\frac{\pi}{2}$. The conditions on $n+\epsilon$ are rendered irrelevant upon combination of these integrals in the explicit determination of the coefficients $h_p^\pm$.
To perform the integrals involving derivatives of $\cot z$ we moreover make use of the identity $\partial_z^n \cot z=\frac{1}{z^n}\partial_c^n\,\cot(cz)\big|_{c=1}$.
The resulting expressions for the coefficients encoding the non-trivial field dependence of \Eqref{eq:Gamma2derivs_B(E)} in the limit of $c_-=0$ are
\begin{align}
h_T^+(c_+,0)& =\frac{\alpha}{\pi}\frac{1}{ec_+}\biggl\{-9\zeta'\bigl(-2,\tfrac{1}{2}\tfrac{m^2}{ec_+}\bigr) +6(\tfrac{1}{2}\tfrac{m^2}{ec_+})\zeta'\bigl(-1,\tfrac{1}{2}\tfrac{m^2}{ec_+}\bigr) \nonumber\\
&\hspace*{2cm}+\frac{1}{4}\Bigl[1+2(\tfrac{1}{2}\tfrac{m^2}{ec_+})^2 \Bigr] (\tfrac{1}{2}\tfrac{m^2}{ec_+})
-\Bigl[\frac{3}{2}(\tfrac{1}{2}\tfrac{m^2}{ec_+})-1\Bigr](\tfrac{1}{2}\tfrac{m^2}{ec_+})\ln(\tfrac{1}{2}\tfrac{m^2}{ec_+}) \biggr\}\,, \nonumber\\
h_T^-(c_+,0)& =\frac{\alpha}{\pi}\frac{1}{ec_+}\biggl\{-\frac{27}{4}\zeta'\bigl(-2,\tfrac{1}{2}\tfrac{m^2}{ec_+}\bigr) +3(\tfrac{1}{2}\tfrac{m^2}{ec_+})\zeta'\bigl(-1,\tfrac{1}{2}\tfrac{m^2}{ec_+}\bigr) +\frac{3}{4}(\tfrac{1}{2}\tfrac{m^2}{ec_+})^2 \zeta'\bigl(0,\tfrac{1}{2}\tfrac{m^2}{ec_+}\bigr) \nonumber\\
&\hspace*{2cm}+\frac{1}{4}\Bigl[1+3(\tfrac{1}{2}\tfrac{m^2}{ec_+})^2 \Bigr] (\tfrac{1}{2}\tfrac{m^2}{ec_+})
-\Bigl[\frac{3}{2}(\tfrac{1}{2}\tfrac{m^2}{ec_+})-\frac{5}{8}\Bigr](\tfrac{1}{2}\tfrac{m^2}{ec_+})\ln(\tfrac{1}{2}\tfrac{m^2}{ec_+}) \nonumber\\
&\hspace*{2cm}
+\frac{1}{4}(\tfrac{1}{2}\tfrac{m^2}{ec_+})\psi(\tfrac{1}{2}\tfrac{m^2}{ec_+})+\frac{1}{8} \biggr\}\,,\label{eq:explexpsh}\\
h^+_{\bar F\bar F}(c_+,0)&=\frac{\alpha}{\pi}\frac{1}{ec_+} \frac{1}{c_+^2}\biggl\{3\zeta'\bigl(-2,\tfrac{1}{2}\tfrac{m^2}{ec_+}\bigr) +2\bigl(\tfrac{1}{2}\tfrac{m^2}{ec_+}\bigr)\zeta'\bigl(-1,\tfrac{1}{2}\tfrac{m^2}{ec_+}\bigr) -2(\tfrac{1}{2}\tfrac{m^2}{ec_+})^2 \zeta'\bigl(0,\tfrac{1}{2}\tfrac{m^2}{ec_+}\bigr)\nonumber\\
&\hspace*{2.5cm} -\frac{1}{12}\Bigl[5+14(\tfrac{1}{2}\tfrac{m^2}{ec_+})^2\Bigr](\tfrac{1}{2}\tfrac{m^2}{ec_+}) +\Bigl[\frac{3}{2}(\tfrac{1}{2}\tfrac{m^2}{ec_+})-1\Bigr](\tfrac{1}{2}\tfrac{m^2}{ec_+})\ln(\tfrac{1}{2}\tfrac{m^2}{ec_+}) \nonumber\\
&\hspace*{2.5cm} +\frac{1}{3}(\tfrac{1}{2}\tfrac{m^2}{ec_+})\psi(\tfrac{1}{2}\tfrac{m^2}{ec_+}) +\frac{1}{6} (\tfrac{1}{2}\tfrac{m^2}{ec_+})^2 \zeta\bigl(2,\tfrac{1}{2}\tfrac{m^2}{ec_+}\bigr)+\frac{1}{12} \biggr\}\,,\nonumber\\
h^-_{\bar F\bar F}(c_+,0)& =\frac{\alpha}{\pi}\frac{1}{ec_+} \frac{1}{c_+^2} \biggl\{\frac{15}{4}\zeta'\bigl(-2,\tfrac{1}{2}\tfrac{m^2}{ec_+}\bigr) -\bigl(\tfrac{1}{2}\tfrac{m^2}{ec_+}\bigr)\zeta'\bigl(-1,\tfrac{1}{2}\tfrac{m^2}{ec_+}\bigr)+\frac{1}{4}(\tfrac{1}{2}\tfrac{m^2}{ec_+})^2 \zeta'\bigl(0,\tfrac{1}{2}\tfrac{m^2}{ec_+}\bigr) \nonumber\\
&\hspace*{2.5cm} -\frac{1}{6}\Bigl[3+3(\tfrac{1}{2}\tfrac{m^2}{ec_+})-\frac{5}{2}(\tfrac{1}{2}\tfrac{m^2}{ec_+})^2 \Bigr](\tfrac{1}{2}\tfrac{m^2}{ec_+}) +\Bigl[\frac{3}{2}(\tfrac{1}{2}\tfrac{m^2}{ec_+})-\frac{5}{8}\Bigr](\tfrac{1}{2}\tfrac{m^2}{ec_+})\ln(\tfrac{1}{2}\tfrac{m^2}{ec_+}) \nonumber\\
&\hspace*{2.5cm}
+\Bigl[\frac{1}{12}-(\tfrac{1}{2}\tfrac{m^2}{ec_+})^2\Bigr](\tfrac{1}{2}\tfrac{m^2}{ec_+})\psi(\tfrac{1}{2}\tfrac{m^2}{ec_+})
+\frac{1}{6} (\tfrac{1}{2}\tfrac{m^2}{ec_+})^2 \zeta\bigl(2,\tfrac{1}{2}\tfrac{m^2}{ec_+}\bigr) -\frac{1}{24}\biggr\}\,, \nonumber
\end{align}
where $\psi(\cdot)$ denotes the digamma function.
Making use of the all-orders asymptotic expansions of the Hurwitz zeta function and its derivatives for large arguments, given, e.g., in Ref.~\cite{NIST}, it can be straightforwardly checked that \Eqref{eq:h+-B} is recovered from \Eqref{eq:explexpsh}.
The strong-field expansions of \Eqref{eq:explexpsh} follow from the series representations of the Hurwitz zeta function and its derivatives, cf., e.g., Refs.~\cite{Dunne:2004nc,Dowker:2015vya,Wolfram}. They read
\begin{align}
h_T^+(c_+,0)& =\frac{\alpha}{\pi}\frac{1}{ec_+}\biggl\{ -9\zeta'(-2)+\Bigl[\ln\bigl(\tfrac{1}{2}\tfrac{m^2}{ec_+}\bigr)-12\zeta'(-1)-\frac{1}{2}\Bigr] \bigl(\tfrac{1}{2}\tfrac{m^2}{ec_+}\bigr)\nonumber\\
&\hspace*{0.8cm}+\frac{3}{2}\Bigl[\ln\bigl(\tfrac{1}{2}\tfrac{m^2}{ec_+}\bigr)+\ln(2\pi)-\frac{5}{2}\Bigr] \bigl(\tfrac{1}{2}\tfrac{m^2}{ec_+}\bigr)^2-\bigl(\tfrac{1}{2}\tfrac{m^2}{ec_+}\bigr)^3 \nonumber\\
&\hspace*{0.8cm}+6\sum_{j=0}^\infty (-1)^j\frac{j+1}{(j+2)(j+3)(j+4)} \zeta(j+2)\bigl(\tfrac{1}{2}\tfrac{m^2}{ec_+}\bigr)^{j+4}\biggr\}\,, \nonumber\\
h_T^-(c_+,0)& =\frac{\alpha}{\pi}\frac{1}{ec_+}\biggl\{ -\frac{27}{4}\zeta'(-2)-\frac{1}{8}
+\frac{1}{2}\Bigl[\frac{5}{4}\ln\bigl(\tfrac{1}{2}\tfrac{m^2}{ec_+}\bigr)-\frac{\gamma}{2}-21\zeta'(-1)-\frac{5}{8}\Bigr]\bigl(\tfrac{1}{2}\tfrac{m^2}{ec_+}\bigr)
\nonumber\\
&\hspace*{0.8cm}+\frac{3}{2}\Bigl[\ln\bigl(\tfrac{1}{2}\tfrac{m^2}{ec_+}\bigr) +\ln(2\pi) +\frac{\pi^2}{36}-\frac{19}{8}\Bigr]\bigl(\tfrac{1}{2}\tfrac{m^2}{ec_+}\bigr)^2 -\frac{1}{4}\Bigl[\frac{9}{2}+\zeta(3)\Bigr] \bigl(\tfrac{1}{2}\tfrac{m^2}{ec_+}\bigr)^3 \nonumber\\
&\hspace*{0.8cm}
+\frac{1}{4}\sum_{j=0}^\infty(-1)^j\Bigl[\frac{3(j^2+11j+10)}{(j+2)(j+3)(j+4)}\zeta(j+2)+\zeta(j+4)\Bigr]\bigl(\tfrac{1}{2}\tfrac{m^2}{ec_+}\bigr)^{j+4} \biggr\}\,,\label{eq:sfe_h}\\
h^+_{\bar F\bar F}(c_+,0)&=\frac{\alpha}{\pi}\frac{1}{ec_+} \frac{1}{c_+^2}\biggl\{3\zeta'(-2)-\frac{1}{12}-\Bigl[\ln\bigl(\tfrac{1}{2}\tfrac{m^2}{ec_+}\bigr)+\frac{\gamma}{3}-8\zeta'(-1)+\frac{1}{6}\Bigr]\bigl(\tfrac{1}{2}\tfrac{m^2}{ec_+}\bigr)\nonumber\\
&\hspace*{0.8cm} -\frac{1}{2}\Bigl[3\ln\bigl(\tfrac{1}{2}\tfrac{m^2}{ec_+}\bigr)+3\ln(2\pi)-\frac{\pi^2}{6}-\frac{13}{2}\Bigr]\bigl(\tfrac{1}{2}\tfrac{m^2}{ec_+}\bigr)^2 +\frac{2}{3}\Bigl[2-\zeta(3)\Bigr]\bigl(\tfrac{1}{2}\tfrac{m^2}{ec_+}\bigr)^3\nonumber\\
&\hspace*{0.8cm} -\sum_{j=0}^\infty(-1)^j\Bigl[\frac{2(j^2+6j+5)}{(j+2)(j+3)(j+4)}\zeta(j+2)-\frac{j+5}{6}\zeta(j+4)\Bigr]\bigl(\tfrac{1}{2}\tfrac{m^2}{ec_+}\bigr)^{j+4} \biggr\}\,,\nonumber\\
h^-_{\bar F\bar F}(c_+,0)& =\frac{\alpha}{\pi}\frac{1}{ec_+} \frac{1}{c_+^2} \biggl\{\frac{15}{4}\zeta'(-2)+\frac{1}{24}-\Bigl[\frac{5}{8}\ln \bigl(\tfrac{1}{2}\tfrac{m^2}{ec_+}\bigr)+\frac{\gamma}{12}-\frac{13}{2}\zeta'(-1)+\frac{3}{16}\Bigr] \bigl(\tfrac{1}{2}\tfrac{m^2}{ec_+}\bigr)\nonumber\\
&\hspace*{0.8cm} -\frac{1}{2}\Bigl[3\ln \bigl(\tfrac{1}{2}\tfrac{m^2}{ec_+}\bigr)+3\ln(2\pi)-\frac{\pi^2}{12}-\frac{45}{8}\Bigr] \bigl(\tfrac{1}{2}\tfrac{m^2}{ec_+}\bigr)^2+\frac{1}{12}\Bigl[\frac{43}{2}-5\zeta(3)\Bigr] \bigl(\tfrac{1}{2}\tfrac{m^2}{ec_+}\bigr)^3\nonumber\\
&\hspace*{0.8cm}
-\frac{1}{4}\sum_{j=0}^\infty(-1)^j\Bigl[\frac{4j^3+35j^2+101j+70}{(j+2)(j+3)(j+4)}\zeta(j+2)-\frac{2j+7}{3}\zeta(j+4)\Bigr] \bigl(\tfrac{1}{2}\tfrac{m^2}{ec_+}\bigr)^{j+4} \biggr\}\,, \nonumber
\end{align}
where $\gamma$ is the Euler-Mascheroni constant, $\zeta(\cdot)$ is the Riemann zeta function, and $\zeta'(\cdot)$ is its derivative.
The analogous results determining \Eqref{eq:Gamma2derivs_B(E)} for a finite value of $c_-$ but $c_+=0$ follow from Eqs.~\eqref{eq:h+-B}, \eqref{eq:explexpsh} and \eqref{eq:sfe_h}
via the identity $h_p^\pm(0,c_-)=\pm h_p^\pm(c_+,0)|_{c_+\to-{\rm i}c_-}$ with $p\in\{T,FF\}$.
In the special case of a purely magnetic field $\vec{B}$, we have $c_+=|\vec{B}|=B$ and \Eqref{eq:Gamma2derivs_B(E)} can be expressed as
\begin{align}
{\cal L}_{\rm HE}(\vec{B})\big|_{\sim \partial^2}&= -\frac{1}{2}\Bigl\{(\partial_0\vec{B})^2h_T^+(B,0) +[\vec{B}\cdot(\partial_0\vec{B})]^2h_{\bar F\bar F}^+(B,0) \nonumber\\
&\hspace*{1.4cm}+(\partial_i\vec{B})^2\bigl[h_T^-(B,0)-h_T^+(B,0)\bigr] +[\vec{B}\cdot(\partial_i\vec{B})]^2\bigl[h_{\bar F\bar F}^-(B,0)-h_{\bar F\bar F}^+(B,0)\bigr] \nonumber\\
&\hspace*{1.4cm}-\bigl[(\hat{\vec{B}}\cdot\vec{\nabla})\vec{B}\bigr]^2h_T^-(B,0)
-\bigl\{\vec{B}\cdot[(\hat{\vec{B}}\cdot\vec{\nabla})\vec{B}]\bigr\}^2h_{\bar F\bar F}^-(B,0)\Bigr\}\,,
\label{eq:Gamma2derivs_explB}
\end{align}
with $\vec{B}=B\hat{\vec{B}}$ and $|\hat{\vec{B}}|=1$. In \Eqref{eq:Gamma2derivs_explB} the Einstein summation convention over the index $i\in\{1,2,3\}$ is implicitly assumed.
On the other hand, it is well-known that the Heisenberg-Euler Lagrangian develops a manifestly non-perturbative imaginary part in electromagnetic fields for which $c_-\neq0$. The latter can be readily evaluated with the residue theorem. As obvious from Eqs.~\eqref{eq:pure_z_parallel} and \eqref{eq:pure_z_perp}, particularly for the case of $c_+=0$ this evaluation boils down to the use of the single identity
\begin{equation}
{\rm Im}\Bigl\{{\rm i}\frac{\alpha}{2\pi}\int_0^\infty{\rm d}s\,{\rm e}^{-{\rm i}m^2s}\,g(z)\cot z\big|_{z\to-{\rm i}z'}\Bigl\}\,=\frac{\alpha}{2}\frac{1}{ec_-}\sum_{n=1}^\infty {\rm e}^{-\frac{m^2}{ec_-}n\pi}\,g(-n\pi)\,, \label{eq:ImId}
\end{equation}
where $g(z)$ is an analytic function: all expressions $g(z)\cot z$ to be considered here are regular at $z\to0$ such that there is no pole at $z=0\,\leftrightarrow\,n=0$; cf. \Eqref{eq:calN_series}.
Therewith, we infer
\begin{align}
{\rm Im}\bigl\{h_T^+(0,c_-)\bigr\}&= \frac{\alpha}{4}\frac{1}{ec_-}\sum_{n=1}^\infty {\rm e}^{-\frac{m^2}{ec_-}n\pi}\biggl[\frac{m^2}{ec_-} +\frac{3}{n\pi}\biggr]\frac{3}{(n\pi)^2}\,, \nonumber\\
{\rm Im}\bigl\{h_T^-(0,c_-)\bigr\}&= \frac{\alpha}{4}\frac{1}{ec_-}\sum_{n=1}^\infty {\rm e}^{-\frac{m^2}{ec_-}n\pi}\biggl[\Bigl(\frac{m^2}{ec_-}\Bigr)^2\frac{3}{8n\pi}+\frac{1}{2}\frac{m^2}{ec_-}\Bigl(1-\frac{3}{(n\pi)^2}\Bigr)-\frac{27}{4(n\pi)^3}\biggr]\,,\label{eq:Imhc+} \\
{\rm Im}\bigl\{h_{\bar F\bar F}^+(0,c_-)\bigr\}&= -\frac{\alpha}{4}\frac{1}{ec_-} \frac{1}{c_-^2}\sum_{n=1}^\infty {\rm e}^{-\frac{m^2}{ec_-}n\pi}\biggl[\Bigl(\frac{m^2}{ec_-}\Bigr)^2\Bigl(\frac{n\pi}{3}+\frac{1}{n\pi}\Bigr)-\frac{m^2}{ec_-}\Bigl(\frac{2}{3}-\frac{1}{(n\pi)^2}\Bigr)-\frac{3}{(n\pi)^3}\biggr]\,, \nonumber\\
{\rm Im}\bigl\{h_{\bar F\bar F}^-(0,c_-)\bigr\}&=-\frac{\alpha}{4}\frac{1}{ec_-} \frac{1}{c_-^2} \sum_{n=1}^\infty {\rm e}^{-\frac{m^2}{ec_-}n\pi}\biggl[\frac{1}{2}\Bigl(\frac{m^2}{ec_-}\Bigr)^3-\Bigl(\frac{m^2}{ec_-}\Bigr)^2\Bigl(\frac{n\pi}{3}-\frac{1}{8n\pi}\Bigr)\nonumber\\
&\hspace*{5cm}+ \frac{1}{2}\frac{m^2}{ec_-}\Bigl(\frac{1}{3}+\frac{1}{(n\pi)^2}\Bigr)+\frac{15}{4(n\pi)^3}\biggr]\,. \nonumber
\end{align}
These expressions constitute the imaginary part of \Eqref{eq:Gamma2derivs_B(E)} for $c_+=0$ and result in corrections to the Schwinger formula describing the decay of the quantum vacuum via electron-positron pair production in slowly varying electric fields: the leading derivative correction to the vacuum decay rate $w(\bar F)=2\,{\rm Im}\{{\cal L}_{\rm HE}(\bar F)\}$ \cite{Heisenberg:1935qt,Schwinger:1951nm} is given by $w(\bar F)|_{\sim\partial^2}=2\,{\rm Im}\{{\cal L}_{\rm HE}(\bar F)|_{\sim\partial^2}\}$; cf. also Refs.~\cite{Dittrich:1985yb,Dunne:2004nc, Cohen:2008wz} and references therein.
Especially for a purely electric field $\vec{E}$ we have $c_-=E$, such that \Eqref{eq:Gamma2derivs_B(E)} becomes
\begin{align}
{\cal L}_{\rm HE}(\vec{E})\big|_{\sim \partial^2}&= \frac{1}{2}\Bigl\{(\partial_0\vec{E})^2\bigl[h_T^-(0,E)+h_T^+(0,E)\bigr] -[\vec{E}\cdot(\partial_0\vec{E})]^2\bigl[h_{\bar F\bar F}^-(0,E)+h_{\bar F\bar F}^+(0,E)\bigr]\nonumber\\
&\hspace*{1.4cm} -(\partial_i\vec{E})^2h_T^+(0,E) +[\vec{E}\cdot(\partial_i\vec{E})]^2h_{\bar F\bar F}^+(0,E)\nonumber\\
&\hspace*{1.4cm}-\bigl[(\hat{\vec{E}}\cdot\vec{\nabla})\vec{E}\bigr]^2h_T^-(0,E)
+\bigl\{\vec{E}\cdot[(\hat{\vec{E}}\cdot\vec{\nabla})\vec{E}]\bigr\}^2h_{\bar F\bar F}^-(0,E)\Bigr\}\,.
\label{eq:Gamma2derivs_explE}
\end{align}
The associated derivative correction to the vacuum decay rate is $w(\vec{E})|_{\sim\partial^2}=2\,{\rm Im}\{{\cal L}_{\rm HE}(\vec{E})|_{\sim\partial^2}\}$.
A comparison of \Eqref{eq:Gamma2derivs_explE} with \Eqref{eq:Gamma2derivs_explB} implies that
\begin{equation}
{\cal L}_{\rm HE}(\vec{E})\big|_{\sim \partial^2}= -{\cal L}_{\rm HE}(\vec{B})\big|_{\sim \partial^2}\Big|_{B\to-{\rm i}E,\partial_0\leftrightarrow\partial_i}\,.
\label{eq:Gamma2derivs_explE2}
\end{equation}
Recall that $h_p^\pm(B,0)\big|_{B\to-{\rm i}E}= \pm h_p^\pm(0,E) $.
It can be straightforwardly checked that for the special cases considered explicitly by Refs.~\cite{Lee:1989vh,Gusynin:1998bt}, namely either a purely magnetic field directed along the $z$ axis which only depends on $x$ and $y$, or a purely electric field directed along the $x$ axis which exclusively depends on $t$ and $x$, the known results are recovered.
In fact, the non-trivial structures of the effective Lagrangians associated with these cases are fully determined by
$h_T^-(B,0)- h_T^+(B,0)+B^2[h_{\bar F\bar F}^-(B,0)- h_ {\bar F\bar F}^+(B,0)]\sim\int_0^\infty{\rm d}s\,{\rm e}^{-{\rm i}m^2s}\frac{z}{\sin z}\,{\cal N}_2^\perp(z,0)$ for the magnetic field $\vec{B}=B(x,y)\,\vec{e}_{\rm z}$, and similarly
$h_T^-(0,E)+ h_T^+(0,E)-E^2[h_{\bar F\bar F}^-(0,E)+ h_ {\bar F\bar F}^+(0,E)]\sim\int_0^\infty{\rm d}s\,{\rm e}^{-{\rm i}m^2s}\frac{z'}{\sinh z'}\,{\cal N}_1^\parallel(0,z')$ for the electric field $\vec{E}=E(t,x)\,\vec{e}_{\rm x}$.
Finally, we note that in the limit of crossed fields of the same amplitude characterized by $\vec{E}(x)\cdot\vec{B}(x)=0$ and $|\vec{E}(x)|=|\vec{B}(x)|$, we have $c_+=c_-={\cal F}={\cal G}=0$. Because of $\partial_\alpha\bar F_{\sigma\mu}\bar F^{\sigma\mu}=2\partial_\alpha {\cal F}=0$, in this case
\Eqref{eq:Gamma2derivs_B(E)} takes an especially simple form, namely
\begin{align}
{\cal L}_{\rm HE}(\bar F)\big|_{\sim \partial^2}=-\frac{1}{4}\partial_\alpha\bar F_{\mu\nu} \partial_\beta\bar F^{\mu\nu}\, h^{\alpha\beta}_T(\bar F) \,.
\label{eq:Gamma2derivs_c+c-=0_v0}
\end{align}
Accounting for the fact that in this limit the tensor structure $h^{\alpha\beta}_T(\bar F)$ can be compactly represented as, cf. Eqs.~\eqref{eq:h_p}, \eqref{eq:weakfield}
and \eqref{eq:h+-B},
\begin{equation}
h^{\alpha\beta}_T(\bar F) = \frac{1}{15} \frac{\alpha}{\pi} \frac{1}{m^2}\Bigl(\bar F^\alpha_{\ \, \tau}\bar F^{\beta\tau}\frac{1}{7}\Bigl(\frac{e}{m^2}\Bigr)^2 -g^{\alpha\beta}\Bigr)\,,
\end{equation}
\Eqref{eq:Gamma2derivs_c+c-=0_v0} becomes
\begin{align}
{\cal L}_{\rm HE}(\bar F)\big|_{\sim \partial^2}=\frac{1}{60} \frac{\alpha}{\pi} \frac{1}{m^2}\Bigl[\partial_\alpha\bar F_{\mu\nu} \partial^\alpha\bar F^{\mu\nu}-\frac{1}{7}\Bigl(\frac{e}{m^2}\Bigr)^2\partial_\alpha\bar F_{\mu\nu} \partial_\beta\bar F^{\mu\nu}\bar F^\alpha_{\ \, \tau}\bar F^{\beta\tau}\Bigr]\,.
\label{eq:Gamma2derivs_c+c-=0}
\end{align}
As expected, this expression vanishes identically in plane-wave fields \cite{Schwinger:1951nm}.
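This vanishing can be verified explicitly for a linearly polarized plane wave; the sketch below (a numerical illustration with an arbitrarily chosen light-like wave vector $\kappa^\mu$, transverse polarization $\epsilon^\mu$ and phase, not part of the paper) evaluates both invariant contractions entering \Eqref{eq:Gamma2derivs_c+c-=0} and finds zero, since every contraction produces factors of $\kappa^2=0$ or $\epsilon\cdot\kappa=0$.

```python
import math

# metric g = diag(1, -1, -1, -1); lower-index wave vector and polarization
g = [[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, -1]]
kap = [1.0, 0.0, 0.0, -1.0]   # kappa_mu, light-like: kappa^2 = 0
eps = [0.0, 1.0, 0.0, 0.0]    # eps_mu, transverse: eps . kappa = 0

# For A_mu(x) = eps_mu cos(kappa.x):
#   F_{mu nu}      = -(kap_mu eps_nu - kap_nu eps_mu) sin(kappa.x)
#   d_al F_{mu nu} = -(kap_mu eps_nu - kap_nu eps_mu) kap_al cos(kappa.x)
phase = 0.37  # arbitrary value of kappa.x
F = [[-(kap[m] * eps[n] - kap[n] * eps[m]) * math.sin(phase)
      for n in range(4)] for m in range(4)]
dF = [[[-(kap[m] * eps[n] - kap[n] * eps[m]) * kap[a] * math.cos(phase)
        for n in range(4)] for m in range(4)] for a in range(4)]

# P_{al be} = d_al F_{mu nu} d_be F^{mu nu}
P = [[sum(g[m][r] * g[n][s] * dF[a][m][n] * dF[b][r][s]
          for m in range(4) for n in range(4)
          for r in range(4) for s in range(4))
      for b in range(4)] for a in range(4)]

# first invariant: g^{al be} P_{al be}
term1 = sum(g[a][b] * P[a][b] for a in range(4) for b in range(4))

# Q^{al be} = F^{al}_{ tau} F^{be tau}
Fmix = [[sum(g[a][r] * F[r][t] for r in range(4)) for t in range(4)] for a in range(4)]
Fup = [[sum(g[a][r] * g[b][s] * F[r][s] for r in range(4) for s in range(4))
        for b in range(4)] for a in range(4)]
Q = [[sum(Fmix[a][t] * Fup[b][t] for t in range(4)) for b in range(4)] for a in range(4)]

# second invariant: P_{al be} Q^{al be}
term2 = sum(P[a][b] * Q[a][b] for a in range(4) for b in range(4))
```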
\section{Conclusions and Outlook}\label{sec:concls}
In this work, we put forward an alternative way to evaluate derivative corrections to the Heisenberg-Euler effective action in slowly varying electromagnetic fields. Using the explicit results available in the literature for the one-loop vacuum polarization tensor in the presence of a constant electromagnetic field as central input, we arrive at a rather compact expression for the quadratic derivative correction to the Heisenberg-Euler effective action at one loop.
For the special cases of magnetic- and electric-like field configurations characterized by the vanishing of one of the secular invariants of the electromagnetic field, we obtain closed-form expressions and work out all-orders weak- and strong-field expansions.
Apart from providing insights into fundamental aspects of strong-field QED, our results are relevant for precision studies of quantum vacuum nonlinearities in experimentally realistic field configurations beyond the locally constant field approximation.
Of course, the strategy devised in the present work to determine derivative corrections to the Heisenberg-Euler effective action for QED in four space-time dimensions can be readily extended to QED in other space-time dimensions as well as to other field theories, such as scalar QED; cf.\ also Ref.~\cite{Gusynin:1998bt}.
\acknowledgments
This work has been funded by the Deutsche Forschungsgemeinschaft (DFG) under Grant No. 416607684 within the Research Unit FOR2783/1. In memoriam \texttt{Maria Rohrmeier (12.8.1981 - 2.8.2021)}.
1802.01828
\section{Introduction}
Let $\mathcal{H}(\mathbb{D})$ denote the class of analytic functions defined on the unit disk $\D= \{z \in \C:\, |z|<1\}$
with the (metrizable) topology of uniform convergence on compact subsets of $\D$; the boundary of $\D$ is denoted by $\T$.
A weighted composition operator is a combination of a multiplication operator and a composition operator.
These operators have mainly been studied on various Banach and Hilbert spaces of functions in $\mathcal{H}(\mathbb{D})$.
Recently,
Ar\'evalo et al. \cite{arxiv W-CO}
initiated the study of weighted composition operator restricted to the Carath\'eodory class $\mathcal{P}_{1}$, which consists of all
$f \in \mathcal{H}(\mathbb{D})$ with positive real part and normalization $f(0)=1$. Clearly, the class $\mathcal{P}_{1}$
is not a linear space, but it is useful for solving extremal problems in geometric
function theory; see \cite{Hallenbeck:Book}.
In this article, we generalize the recent work of Ar\'evalo et al. \cite{arxiv W-CO} by considering
weighted composition operators preserving the class $\mathcal{P}_{\alpha}$. This class is connected
with various geometric subclasses of $\mathcal{H}(\mathbb{D})$ in the univalent function theory
(see \cite{Duren,Hallenbeck:Book,Pommerenke:Book}).
Since the class $\mathcal{P}_{\alpha}$ is not a
linear space, for a given map on $\mathcal{P}_{\alpha}$, questions about operator-theoretic properties are not
meaningful. However, one can talk about special classes of self-maps of $\mathcal{P}_{\alpha}$ and fixed points
of those maps. This is the main purpose of this article.
The article is organized as follows. In Section \ref{MP4Sec2}, we introduce the class $\mathcal{P}_{\alpha}$ and
list some basic properties of
this class. In Section \ref{MP4Sec3}, we give a characterization for weighted composition operators to be self-maps
of the class $\mathcal{P}_{\alpha}$ (see Theorem \ref{MP4Main1}). The above situation is analyzed closely for various
special cases of symbols in Section \ref{MP4Sec4}. In Section \ref{MP4Sec5}, we present some simple examples.
\section{Some preliminaries about the class $\mathcal{P}_{\alpha}$}\label{MP4Sec2}
For $f$ and $g \in \mathcal{H}(\mathbb{D})$, we say that \textit{$f$ is subordinate to $g$} (denoted by $f(z)\prec g(z)$ or $f\prec g$) if
there exists an analytic function $\omega:\D\rightarrow \D$ such that $\omega(0)=0$ and $f=g\circ \omega$. If $f(z)\prec z$, then
$f$ is called a \textit{Schwarz function}. For $|\alpha|\leq 1$, $\alpha \neq -1$, define $h_\alpha$ on $\D$ by
$ h_\alpha(z)=\frac{1+\alpha z}{1-z}$ and the half plane $\mathbb{H}_\alpha$ is described by
$$ \mathbb{H}_\alpha :=
h_\alpha(\D)=\{ w\in \C: 2{\rm Re\,}\{(1+\overline{\alpha})w\}>1-|\alpha|^2 \}.
$$
In particular, if $\alpha \in \R$ and $-1<\alpha\leq1$, then
$$
h_\alpha(\D)=\{ w\in \C: {\rm Re\,} w> (1-\alpha)/2\}
$$
so that ${\rm Re\,}h_\alpha(z)> (1-\alpha)/2$ in $\D$.
For $|\alpha|\leq 1, \alpha \neq -1$, it is natural to consider the class $\mathcal{P}_{\alpha}$ defined by
$$ \mathcal{P}_{\alpha}:= \{f \in \mathcal{H}(\mathbb{D}) : f(z)\prec h_\alpha(z) \}.
$$
It is worth noting that for every $f \in \mathcal{P}_{\alpha}$, there is a unique Schwarz function $\omega$
such that
$$f(z)=\frac{1+\alpha \omega(z)}{1-\omega(z)}.$$
It is well-known \cite[Lemma 2.1]{Pommerenke:Book} that, if $g$ is a univalent analytic function on $\D$, then
$f(z)\prec g(z)$ if and only if $f(0)=g(0)$ and $f(\D)\subset g(\D)$. In view of this result, the class
$\mathcal{P}_{\alpha}$ can be stated in an equivalent form as
$$ \mathcal{P}_{\alpha}= \{f \in \mathcal{H}(\mathbb{D}) : f(0)=1, f(\D)\subset \mathbb{H}_\alpha\}.
$$
We continue the discussion by stating a few basic and useful properties of the class $\mathcal{P}_{\alpha}$.
\bprop
If $f \in \mathcal{P}_{\alpha}$ and $f(z)=1+\sum_{n=1}^{\infty}a_n z^n$, then $|a_n|\leq |\alpha+1|$ for all $n\in \N$.
The bound is sharp as the function $h_\alpha(z)=1+\sum_{n=1}^{\infty}(1+\alpha)z^n$ shows.
\eprop
\bpf
This result is an immediate consequence of Rogosinski's result \cite[Theorem X]{subordination} (see also \cite[Theorem 6.4(i), p.~195]{Duren})
because $h_\alpha(z)$ (and hence, $(h_\alpha(z)-1)/(1+\alpha)$) is a convex function.
\epf
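The coefficient bound can also be checked numerically. The following Python sketch expands $f=h_\alpha\circ\omega$ as a truncated power series for an illustrative choice of $\alpha$ and a polynomial Schwarz function $\omega$ (both chosen here only for illustration) and verifies $|a_n|\leq|\alpha+1|$, together with its sharpness for $\omega(z)=z$:

```python
def series_mul(p, q, n):
    """Truncated product (up to degree n) of power series given as coefficient lists."""
    out = [0j] * (n + 1)
    for i, a in enumerate(p[:n + 1]):
        for j, b in enumerate(q[:n + 1 - i]):
            out[i + j] += a * b
    return out

def p_alpha_coeffs(alpha, omega, n):
    """Taylor coefficients of h_alpha(omega(z)) = 1 + (1+alpha) * sum_{k>=1} omega(z)^k,
    where omega is a polynomial Schwarz function (coefficient list, omega[0] == 0)."""
    f = [1 + 0j] + [0j] * n
    wk = [1 + 0j] + [0j] * n              # running power omega^k, starting at omega^0
    for _ in range(n):
        wk = series_mul(wk, omega, n)
        for i, c in enumerate(wk):
            f[i] += (1 + alpha) * c
    return f

alpha = 0.3 + 0.2j                        # illustrative, |alpha| <= 1, alpha != -1
omega = [0, 0.5, 0.4]                     # omega(z) = 0.5 z + 0.4 z^2, a Schwarz function
coeffs = p_alpha_coeffs(alpha, omega, 20)
assert all(abs(a) <= abs(1 + alpha) + 1e-12 for a in coeffs[1:])

# sharpness: for omega(z) = z every coefficient equals 1 + alpha
sharp = p_alpha_coeffs(alpha, [0, 1.0], 10)
assert all(abs(c - (1 + alpha)) < 1e-12 for c in sharp[1:])
```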
\bprop {\rm (Growth estimate)}
Let $f \in \mathcal{P}_{\alpha}$. Then for all $z\in \D$, one has
$$ \frac{1-|\alpha z|}{1+|z|}\leq |f(z)|\leq \frac{1+|\alpha z|}{1-|z|}.
$$
\eprop
\bpf
This result follows from the classical Schwarz lemma together with the triangle inequality.
\epf
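The growth estimate can be verified at sample points: any $f\in\mathcal{P}_{\alpha}$ has the form $h_\alpha\circ\omega$, so the following Python sketch (with an illustrative $\alpha$ and the Schwarz function $\omega(z)=z^{2}$) checks both bounds:

```python
def h(alpha, z):
    # h_alpha(z) = (1 + alpha z) / (1 - z), the half-plane map
    return (1 + alpha * z) / (1 - z)

def check_growth(alpha, omega, z):
    """Check (1-|alpha z|)/(1+|z|) <= |h_alpha(omega(z))| <= (1+|alpha z|)/(1-|z|)."""
    f = h(alpha, omega(z))
    lo = (1 - abs(alpha * z)) / (1 + abs(z))
    hi = (1 + abs(alpha * z)) / (1 - abs(z))
    return lo <= abs(f) <= hi

alpha = 0.5 + 0.3j                 # illustrative, |alpha| <= 1, alpha != -1
omega = lambda z: z * z            # a Schwarz function: omega(0) = 0, |omega| < 1 on D
pts = [0.3, -0.4, 0.2 + 0.5j, -0.1 - 0.6j]
assert all(check_growth(alpha, omega, z) for z in pts)
```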
From the growth estimate and Montel's theorem on normal families, one easily obtains the following result.
\bprop
The class $\mathcal{P}_{\alpha}$ is a compact family in the compact-open topology (that is, topology of uniform convergence
on compact subsets of $\D$).
\eprop
Because the half plane $\mathbb{H}_\alpha$ is convex, the following result is obvious.
\bprop
The class $\mathcal{P}_{\alpha}$ is a convex family.
\eprop
For $p\in(0,\infty)$, the Hardy space $H^{p}$ consists of those analytic functions $f$ on $\mathbb{D}$ for which
$$\|f\|_{p}:=\sup\limits_{ r \in [0,1)}\left \{\frac{1}{2\pi}\int_0^{2\pi} |f(re^{i\theta})|^{p}\,d\theta\right \}^{\frac{1}{p}}
$$
is finite, and $H^\infty$ denotes the set of all bounded analytic functions on $\mathbb{D}$. We refer to
\cite{Duren:Hpspace} for the theory of Hardy spaces.
By Littlewood's subordination theorem \cite[Theorem 2]{littlewood thm}, it follows that if $f\prec g$ and
$g\in H^p$ for some $0<p\leq \infty$, then $f\in H^p$ for the same $p$. As a consequence we easily have the following.
\bprop
The class $\mathcal{P}_{\alpha}$ is a subset of the Hardy space $H^p$ for each $0<p<1$.
\eprop
\bpf
Because $(1-z)^{-1} \in H^p$ for each $0<p<1$, it follows easily that $h_\alpha \in H^p$ for each $0<p<1$ and
for $|\alpha|\leq 1, \alpha \neq -1$. The desired conclusion follows.
\epf
\brem
Although $\mathcal{P}_{\alpha}$
does not possess a linear structure, since it is contained in $H^p$, results about $H^p$ spaces, such as those about
boundary behavior, remain valid for functions in the class $\mathcal{P}_{\alpha}$.
\erem
\section{Weighted composition on $\mathcal{P}_{\alpha}$}\label{MP4Sec3}
For an analytic self-map $\phi$ of $\D$, the composition operator $C_\phi$ is defined by
$$
C_\phi(f)=f\circ \phi \mbox{~for~}
f \in \mathcal{H}(\mathbb{D}). $$
We refer to \cite{Cowen:Book} for the study of composition operators on various
function spaces on the unit disk. Throughout this article, unless stated otherwise, $\alpha$ denotes a complex number with
$|\alpha|\leq 1$, $\alpha \neq -1$, and $\phi$ denotes an analytic self-map of $\D$.
The following result deals with composition operator when it is restricted to the class
$\mathcal{P}_{\alpha}$.
\bprop
The composition operator $C_\phi$ induced by the symbol $\phi$ preserves the class $\mathcal{P}_{\alpha}$ if and
only if $\phi$ is a Schwarz function.
\eprop
\bpf
Suppose that $C_\phi$ preserves the class $\mathcal{P}_{\alpha}$. Then $C_\phi (h_\alpha) \in \mathcal{P}_{\alpha}$,
and thus
$$\frac{1+\alpha \phi(0)}{1-\phi(0)}=1.
$$
This gives that $\phi(0)=0$ which implies that $\phi$ is a Schwarz function.
The converse part holds trivially.
\epf
For a given analytic self-map $\phi$ of $\D$ and analytic map $\psi$ of $\D$, the corresponding weighted
composition operator $C_{\psi,\phi}$ is defined by
$$ C_{\psi,\phi}(f)= \psi (f\circ \phi) \mbox {~for~} f \in \mathcal{H}(\mathbb{D}).
$$
If $\psi\equiv 1$, then $C_{\psi,\phi}$ reduces to the composition operator $C_{\phi}$, and if $\phi(z)=z$ for all $z\in \D$, then $C_{\psi,\phi}$
reduces to the multiplication operator $M_{\psi}$.
For a given analytic map $\psi$ of $\D$, the corresponding multiplication operator $M_\psi$ is then defined by
$$ M_\psi(f)= \psi f \mbox {~for~} f \in \mathcal{H}(\mathbb{D}).
$$
The characterization of $M_{\psi}$ that preserves the class $\mathcal{P}_{\alpha}$ is given in Section \ref{MP4Sec4}.
The study of weighted composition operators goes back to Banach. In \cite{Banach}, Banach proved the classical Banach-Stone theorem: the surjective isometries between the spaces of continuous real-valued functions on a closed and bounded interval are certain weighted composition operators. In \cite{Forelli}, Forelli proved that the isometric isomorphisms of the Hardy space $H^p$ ($p\neq 2$) are also weighted composition operators. The analogous result for the Bergman space was proved by Kolaski in \cite{Bergman isometry}.
The study of weighted composition operators can be viewed as a natural generalization of the well known field in the analytic function theory, namely, the composition operators. Moreover, weighted composition operators appear in applied areas such as dynamical systems and evolution equations. For example, classification of dichotomies in certain dynamical systems is connected to weighted composition operators, see \cite{Dynamical}.
In this section, we discuss
weighted composition operators that preserve $\mathcal{P}_{\alpha}$. Before we do this, let us recall some useful results from the theory of extreme points.
\blem {\rm (\cite[Theorem 5.7]{Hallenbeck:Book})}
The set of extreme points of the class $\mathcal{P}_{\alpha}$ consists of the functions
$$
f_\lambda(z)= \frac{1+\alpha \lambda z}{1-\lambda z},~~ |\lambda|=1.
$$
\elem
Recall that a point $p$ of a convex set $A$ is called an \textit{extreme point} if $p$ is not an interior point of any line segment
that lies entirely in $A$. We denote the set of all extreme points of the class $\mathcal{P}_{\alpha}$ by
$\mathcal{E}_{\alpha}$; that is, $\mathcal{E}_{\alpha}= \{f_\lambda: |\lambda|=1\}.$ Now, we recall the well-known
theorem of Krein and Milman \cite{Milman}.
\blem {\rm (\cite[Theorem 4.4]{Hallenbeck:Book})}
Let $X$ be a locally convex, topological vector space and $A$ be a convex, compact subset of $X$. Then, the closed
convex hull of extreme points of $A$ is equal to $A$.
\elem
The original version of this result was proved in \cite{Milman}.
On $\mathcal{H}(\mathbb{D})$, we write $f_n \xrightarrow{\text{u.c}} f$ when $f_n$ converges to $f$
uniformly on compact subsets of $\D$. It is easy to see that $ C_{\psi,\phi}(f_n) \xrightarrow{\text{u.c}}
C_{\psi,\phi}(f)$ whenever $f_n \xrightarrow{\text{u.c}} f$. Thus, $ C_{\psi,\phi}$ is continuous on
$\mathcal{H}(\mathbb{D})$ (in particular on $\mathcal{P}_{\alpha}$).
\bprop
Suppose that $C_{\psi,\phi}$ preserves the class $\mathcal{P}_{\alpha}$. Then $\phi$ is a Schwarz function and
there exists a Schwarz function $\omega$ such that
$$\psi= h_\alpha \circ \omega= \frac{1+\alpha \omega}{1-\omega}.
$$
\eprop
\bpf
Suppose that $C_{\psi,\phi}$ preserves the class $\mathcal{P}_{\alpha}$. Take $f\equiv 1$ to be a constant function,
which belongs to $\mathcal{P}_{\alpha}$. Thus, $C_{\psi,\phi} (f)=\psi \in \mathcal{P}_{\alpha}$ and hence,
there exists a Schwarz function $\omega$ such that
$$\psi= h_\alpha \circ \omega= \frac{1+\alpha \omega}{1-\omega}.
$$
In particular, $\psi(0)=1$.
Since $h_\alpha \in \mathcal{P}_{\alpha}$, we have $\psi (0) (h_\alpha ( \phi(0)))=1$, which gives $\phi(0)=0$.
Hence $\phi$ is a Schwarz function.
\epf
In view of the above result, from now on we will assume that $\psi= h_\alpha \circ \omega= \frac{1+\alpha \omega}{1-\omega}$
and $\phi,\,\omega$ are Schwarz functions.
\bthm\label{MP4Main1}
Let $\phi$, $\omega$ and $\psi$ be as above. Then, $C_{\psi,\phi}$ preserves the class $\mathcal{P}_{\alpha}$ if and only if
\begin{equation}\label{mp4 eq1}
2Q(\omega) |\phi|< (1-|\omega|^2)+P(\omega) |\phi|^2 \mbox{~on~} \D,
\end{equation}
where, $P(\omega)= |\alpha \omega|^2-|1+(\alpha-1)\omega|^2$ and
$Q(\omega)= |(\alpha-1)|\omega|^2 + \overline{\omega}-\alpha \omega|$.
\ethm
\bpf
First, we prove that $C_{\psi,\phi}$ preserves the class $\mathcal{P}_{\alpha}$ if and only if
$C_{\psi,\phi}(\mathcal{E}_{\alpha})\subset\mathcal{P}_{\alpha}$. To this end, suppose that
$C_{\psi,\phi} (\mathcal{E}_{\alpha})\subset \mathcal{P}_{\alpha}$. Since $\mathcal{P}_{\alpha}$ is a
convex family, we obtain
$$ C_{\psi,\phi} (\mbox{~convex hull~}(\mathcal{E}_{\alpha}))\subset \mathcal{P}_{\alpha}.
$$
Now, by Krein-Milman theorem and the fact that $ C_{\psi,\phi}$ is continuous on a compact family
$\mathcal{P}_{\alpha}$, we see that $C_{\psi,\phi} (\mathcal{P}_{\alpha})\subset \mathcal{P}_{\alpha}$.
The converse part is trivial.
Next, we prove that $C_{\psi,\phi} (\mathcal{E}_{\alpha})\subset \mathcal{P}_{\alpha}$ if and only if
$$ 2Q(\omega) |\phi|< (1-|\omega|^2)+P(\omega) |\phi|^2 \mbox{~on~} \D.
$$
Assume that $C_{\psi,\phi} (\mathcal{E}_{\alpha})\subset \mathcal{P}_{\alpha}$. This gives
$\psi (f_\lambda \circ \phi)\in \mathcal{P}_{\alpha}$ for all $|\lambda|=1$. Thus, for all $|\lambda|=1$,
there exists a Schwarz function $\omega_\lambda$ such that
$\psi (f_\lambda \circ \phi)= h_\alpha \circ \omega_\lambda$. That is,
$$ \frac{1+\alpha\omega}{1-\omega} \frac{1+\alpha\lambda\phi}{1-\lambda\phi}=
\frac{1+\alpha\omega_\lambda}{1-\omega_\lambda}.
$$
Solving this equation for $\omega_\lambda$, we get that
$$ \omega_\lambda=\frac{\omega+\lambda\phi+(\alpha-1)\lambda\omega\phi}{1+\alpha\lambda\phi\omega}.
$$
For each $|\lambda|=1$, $\omega_\lambda$ is a Schwarz function if and only if
$$ |\omega+\lambda\phi+(\alpha-1)\lambda\omega\phi|^2< |1+\alpha\lambda\phi\omega|^2
~\mbox{ for all }~ |\lambda|=1,
$$
which is equivalent to
$$ 2{\rm Re~}(\lambda\phi\{(\alpha-1)|\omega|^2 + \overline{\omega}-\alpha \omega\}) <
(1-|\omega|^2)+|\phi|^2(|\alpha \omega|^2-|1+(\alpha-1)\omega|^2),
$$
for all $|\lambda|=1$. By taking supremum over $\lambda$ on both sides, the last inequality
gives \eqref{mp4 eq1}.
The converse part follows by repeating the above arguments in the reverse direction.
\epf
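One direction of the proof can be illustrated numerically: at a point $z$, if inequality \eqref{mp4 eq1} holds with $w=\omega(z)$ and $p=\phi(z)$, then the solved-for $\omega_\lambda$ must lie in the unit disk for every unimodular $\lambda$. A Python sketch (the sample values of $\alpha$, $w$, $p$ are illustrative choices):

```python
import cmath, math

def condition_implies_schwarz(alpha, w, p, n=180):
    """Pointwise check: with w = omega(z) and p = phi(z),
    if 2 Q(w)|p| < (1 - |w|^2) + P(w)|p|^2, then
    omega_lambda = (w + lam p + (alpha-1) lam w p) / (1 + alpha lam p w)
    lies in the unit disk for every unimodular lam."""
    P = abs(alpha * w) ** 2 - abs(1 + (alpha - 1) * w) ** 2
    Q = abs((alpha - 1) * abs(w) ** 2 + w.conjugate() - alpha * w)
    if not 2 * Q * abs(p) < (1 - abs(w) ** 2) + P * abs(p) ** 2:
        return True   # hypothesis fails at this point; nothing to verify
    return all(
        abs((w + lam * p + (alpha - 1) * lam * w * p)
            / (1 + alpha * lam * p * w)) < 1
        for lam in (cmath.exp(2j * math.pi * k / n) for k in range(n)))

assert condition_implies_schwarz(1 + 0j, 0.1 + 0.05j, 0.2 + 0j)
assert condition_implies_schwarz(0.5j, 0.2 + 0j, 0.1 + 0j)
```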
\brem \label{p and q}
Suppose that $\alpha=a+ib$ and $\omega(z)=u(z)+iv(z)$. Then,
\beqq
P(\omega)&=& |\alpha \omega|^2-|1+(\alpha-1)\omega|^2\\
&=& a(|\omega|^2-1)+(a-1)|\omega-1|^2+2bv.
\eeqq
Set $q(\omega)=(\alpha-1)|\omega|^2 + \overline{\omega}-\alpha \omega $ so that
$Q(\omega)= |q(\omega)|$. Upon simplifying, we get that
$$q(\omega)= (\alpha-1)\,\overline{\omega}\,(\omega-1)-2i\alpha\,v=(\alpha-1)( |\omega|^2- \omega) - 2iv$$ and thus
\begin{equation}\label{mp4 eq2}
q(\omega)=[(a-1)(|\omega|^2-u)+bv]+i[b(|\omega|^2-u)-v(a+1)].
\end{equation}
Also, it is easy to see that
\begin{equation}\label{mp4 eq3}
-q(\omega)= |1-\omega|^2\psi+(|\omega|^2-1) ~\mbox{ with }~ \psi=\frac{1+\alpha \omega}{1-\omega}.
\end{equation}
\erem
\section{Special cases}\label{MP4Sec4}
In this section, we first recall some familiar results on the Hardy space $H^p$ which will be used
in what follows.
\bprop {\rm (\cite[Theorem 1.3]{Duren:Hpspace})}\label{radial limit}
For every bounded analytic function $f$ on $\D$, the radial limit $\lim\limits_{r\rightarrow 1} f(re^{i\theta})$
exists almost everywhere (abbreviated by a.e.).
\eprop
In view of Proposition \ref{radial limit}, every Schwarz function has a radial limit a.e.; since the function
$h_{\alpha}$ also has a radial limit a.e., it follows that every function $f\in \mathcal{P}_{\alpha}$ has a radial limit
a.e. on $\T$. Also, it is well-known that (see \cite[Section 2.3]{Duren:Hpspace})
$$ \sup\limits_{|z|<1}|f(z)|= \esssup_{0\leq \theta\leq 2 \pi}|f(e^{i\theta})|,
$$
for every $f\in H^\infty$. Now, we will state a classical theorem of Nevanlinna.
\bprop {\rm (\cite[Theorem 2.2]{Duren:Hpspace})}\label{posite mre-radial limit}
If $f\in H^p$ for some $p>0$ and its radial limit $f(e^{i\theta})=0$ on a set of positive measure, then $f\equiv 0$.
\eprop
Since every Schwarz function $f$ belongs to $H^\infty$ and every $f\in \mathcal{P}_{\alpha}$
belongs to $H^p$ for $0<p<1$, the above result is valid for functions in the class
$\mathcal{P}_{\alpha}$ and for Schwarz functions.
An analytic function $f$ on $\D$ is said to be an \textit{inner function} if $|f(z)|\leq 1$ for all $z\in \D$
and its radial limit satisfies $|f(\zeta)|=1$ a.e. on $|\zeta|=1$.
\bthm\label{MP4Main2}
Suppose that $\phi$ and $\omega$ are Schwarz functions, $\phi$ is inner and $\psi=\frac{1+\alpha \omega}{1-\omega}$.
Then, $C_{\psi,\phi}$ preserves the class $\mathcal{P}_{\alpha}$ if and only if
$\psi\equiv 1$ (i.e., $\omega\equiv 0$).
\ethm
\bpf
If $\psi\equiv 1$, then $C_{\psi,\phi}$ becomes a composition operator $C_{\phi}$ and thus,
$C_{\psi,\phi}$ preserves the class $\mathcal{P}_{\alpha}$, because $\phi$ is a Schwarz function.
Conversely, suppose that $C_{\psi,\phi}$ preserves the class $\mathcal{P}_{\alpha}$.
Then, by Theorem \ref{MP4Main1}, one has the inequality
$$
2Q(\omega) |\phi|< (1-|\omega|^2)+P(\omega) |\phi|^2 \mbox{~on~} \D.
$$
With an abuse of notation, we denote the radial limits of $\phi$, $\omega$ and $\psi$
again by $\phi$, $\omega$ and $\psi$, respectively. Also, let $\alpha=a+ib$ and $\omega(z)=u(z)+iv(z)$.
By allowing $|z|\ra 1$ in \eqref{mp4 eq1}, we get that
$$
2Q(\omega) \leq (1-|\omega|^2)+P(\omega) \mbox{~a.e. on~} \T,
$$
which after computation is equivalent to
$$
Q(\omega) \leq (a-1)(|\omega|^2-u)+bv \mbox{~a.e. on~} \T.
$$
In view of \eqref{mp4 eq2} in Remark \ref{p and q}, the above inequality can be rewritten as
$$
|q(\omega)| \leq {\rm Re\,}[q(\omega)] \mbox{~a.e. on~} \T
$$
which gives that
${\rm Im\,}[q(\omega)]=0 \mbox{~a.e. on~} \T$.
Again by using \eqref{mp4 eq3} in Remark \ref{p and q}, we have
$$|1-\omega|^2{\rm Im}(\psi)=0 \mbox{~a.e. on~} \T.$$
Applying the classical theorem of Nevanlinna (see Proposition \ref{posite mre-radial limit}) to the function $1-\omega$,
one gets that ${\rm Im\,}\psi=0$ a.e. on $\T$. Now the proof of $\psi\equiv 1$ is as follows:
Consider the analytic map $f=e^{-i(\psi-1)}$. Then, $|f|=e^{{\rm Im\,}\psi}=1$ a.e. on $\T$ and
$$
1=f(0)\leq \sup\limits_{|z|<1}|f(z)|= \esssup_{0\leq \theta\leq 2 \pi}|f(e^{i\theta})|=1.
$$
Hence, by the maximum modulus principle, we get that $f\equiv 1$ which gives $\psi\equiv 1$.
\epf
\bcor
$M_\psi$ preserves the class $\mathcal{P}_{\alpha}$ if and only if
$\psi\equiv 1$.
\ecor
\bpf
The desired result follows if we set $\phi(z)\equiv z$ in Theorem \ref{MP4Main2}.
\epf
\bthm\label{MP4Main3}
Suppose that $\alpha$ is a real number, $\phi$ and $\omega$ are Schwarz functions, $\omega$ is an inner function
and $\psi=\frac{1+\alpha \omega}{1-\omega}$.
Then, $C_{\psi,\phi}$ preserves the class $\mathcal{P}_{\alpha}$ if and only if
$\phi$ is identically zero.
\ethm
\bpf
If $\phi\equiv 0$, then $C_{\psi,\phi}$ becomes a constant map $\psi$ and
hence it preserves $\mathcal{P}_{\alpha}$.
Conversely, suppose that $C_{\psi,\phi}$ preserves the class $\mathcal{P}_{\alpha}$.
Then, by Theorem \ref{MP4Main1},
$$
2Q(\omega) |\phi|< (1-|\omega|^2)+P(\omega) |\phi|^2 \mbox{~on~} \D.
$$
By allowing $|z|\ra 1$, we get that
$$
2|1-\omega|^2|\psi|\,|\phi|\leq (2 {\rm Im\,}\alpha\,{\rm Im\,}\omega+
({\rm Re\,}\alpha-1)|1-\omega|^2)|\phi|^2 \mbox{~a.e. on~} \T,
$$
from which we obtain that
$$
|1-\omega|^2|\psi|\,|\phi|\leq 0 \mbox{~a.e. on~} \T.
$$
By the hypothesis on $\omega$ and $\psi$, and the classical theorem of Nevanlinna, we find
that $\phi\equiv 0$.
\epf
Here is an easy consequence of Theorem \ref{MP4Main3}.
\bcor
Let $\alpha$ be a real number, let $\phi$ and $\omega$ be Schwarz functions, and suppose that $\phi\not \equiv 0$.
Suppose that $C_{\psi,\phi}$ preserves the class $\mathcal{P}_{\alpha}$, and
$$ E=\{ \zeta \in \T: |\omega(\zeta)|=1\}.
$$
Then the Lebesgue arc length measure of the set $E$ is zero, i.e., $m(E)=0$.
\ecor
\section{Examples for special cases}\label{MP4Sec5}
In this section, we give specific examples of $\phi$ and $\psi$ so that
$C_{\psi,\phi}$ preserves the class $\mathcal{P}_{\alpha}$. For a bounded analytic function on $\D$,
we denote $\sup\limits_{|z|<1}|f(z)|$ by $\|f\|$.
\beg
Suppose that $\|\phi\|<1$. If $\|\omega\|<\frac{1-\|\phi\|}{1+\|\phi\|}$,
then $C_{\psi,\phi}$ preserves the class $\mathcal{P}_{\alpha}$, for $\alpha \in [0,1]$.
\eeg
\bpf In view of Theorem \ref{MP4Main1}, it suffices to verify the inequality \eqref{mp4 eq1}.
This inequality can be rewritten as
$$
2|(1- \alpha)\,\overline{\omega}\,(\omega-1)+2i\alpha\, {\rm Im\,}\omega|\,|\phi|+(1- \alpha)(|\omega|^2 +|1-\omega|^2|\phi|^2-1)
< \alpha(1-|\omega|^2)(1-|\phi|^2).
$$
We may set $\|\omega\|=A$ and $\|\phi\|=B$. Thus, it is enough to check that
$$
2(1- \alpha)A(A+1)B+4\alpha AB+(1- \alpha)(A^2 +(1+A)^2B^2-1)
< \alpha(1-A^2)(1-B^2)
$$
which is equivalent to
$$
[A+B+AB-1][(1- \alpha)(A+B+AB+1)+\alpha (A+B-AB+1)]<0.
$$
This yields the condition $A+B+AB-1<0$. This means that $A<\frac{1-B}{1+B}$ and the desired conclusion follows.
\epf
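The key algebraic step of the proof, the factorization of the difference of the two sides, can be confirmed numerically. A Python sketch checking the identity on a grid (the grid itself is an arbitrary choice):

```python
def lhs_minus_rhs(alpha, A, B):
    """Left side minus right side of the displayed inequality, A = ||omega||, B = ||phi||."""
    return (2 * (1 - alpha) * A * (A + 1) * B + 4 * alpha * A * B
            + (1 - alpha) * (A**2 + (1 + A)**2 * B**2 - 1)
            - alpha * (1 - A**2) * (1 - B**2))

def factored(alpha, A, B):
    """The factorized form [A+B+AB-1][(1-alpha)(A+B+AB+1) + alpha(A+B-AB+1)]."""
    return ((A + B + A * B - 1)
            * ((1 - alpha) * (A + B + A * B + 1) + alpha * (A + B - A * B + 1)))

grid = [i / 10 for i in range(11)]
assert all(abs(lhs_minus_rhs(al, A, B) - factored(al, A, B)) < 1e-12
           for al in (0.0, 0.3, 0.7, 1.0) for A in grid for B in grid)
```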
Since the condition $A+B+AB-1<0$ gives $B<\frac{1-A}{1+A}$, we have the following result.
\beg
Suppose that $\|\omega\|<1$. If $\|\phi\|<\frac{1-\|\omega\|}{1+\|\omega\|}$,
then $C_{\psi,\phi}$ preserves the class $\mathcal{P}_{\alpha}$, for $\alpha \in [0,1]$.
\eeg
\beg
Suppose that $\phi(z)=z(az+b)$, where $a$ and $b$ are non-zero real numbers such that $|a|+|b|=1$.
Take $\omega(z)=z(cz+d)$ with
$$c=-\frac{ab}{K} ~\mbox{ and } ~d=\frac{1-(a^2+b^2)}{K} ~\mbox{ for $K>2+\sqrt{5}$}.
$$
Then $C_{\psi,\phi}$ preserves the class $\mathcal{P}_{1}$.
\eeg
\bpf
Clearly $|\phi(z)|^2\leq a^2+ b^2+ 2abx$ for $z=x+iy$ and thus,
$$
0< 1-(a^2+ b^2)-2abx\leq (1-|\phi|^2).
$$
Also note that
$$ |{\rm Im\,}\omega|\leq |2cx+d| = \frac{1-(a^2+ b^2)-2abx}{K}
$$
and
$$
|\omega(z)|\leq |c|+|d|= \frac{1-|ab|-(|a|-|b|)^2}{K}\leq \frac{1}{K}.
$$
The inequality \eqref{mp4 eq1} for $\alpha=1$ reduces to
$$
4|\phi|\,|{\rm Im\,}\omega|<(1-|\omega|^2)(1-|\phi|^2).
$$
Since $4|\phi|\,|{\rm Im\,}\omega|\leq 4|{\rm Im\,}\omega|\leq 4|2cx+d|$ and
$$ \left (1-\frac{1}{K^2}\right )K|2cx+d|\leq(1-|\omega|^2)(1-|\phi|^2),
$$ to verify the inequality \eqref{mp4 eq1},
it suffices to verify the inequality
$$ \frac{4}{K}< 1-\frac{1}{K^2}, ~\mbox{ i.e., }~ K^2-4K-1>0.
$$
This gives the condition $K>2+\sqrt{5}$ and the proof is complete.
\epf
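For a concrete instance of this example, one may take $a=b=1/2$ and $K=4.3>2+\sqrt{5}$ (illustrative values) and verify inequality \eqref{mp4 eq1} for $\alpha=1$ on a grid of points in $\D$; a Python sketch:

```python
import cmath, math

a, b, K = 0.5, 0.5, 4.3                   # |a| + |b| = 1 and K > 2 + sqrt(5) ~ 4.236
c, d = -a * b / K, (1 - (a**2 + b**2)) / K

def phi(z):
    return z * (a * z + b)

def omega(z):
    return z * (c * z + d)

def condition_alpha1(z):
    """Inequality (1) specialized to alpha = 1: 4|phi||Im omega| < (1-|omega|^2)(1-|phi|^2)."""
    w, p = omega(z), phi(z)
    return 4 * abs(p) * abs(w.imag) < (1 - abs(w)**2) * (1 - abs(p)**2)

pts = [r * cmath.exp(1j * 2 * math.pi * k / 24)
       for r in (0.1, 0.3, 0.5, 0.7, 0.9, 0.95) for k in range(24)]
assert all(condition_alpha1(z) for z in pts)
```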
\brem\label{Rem3-new}
By letting $\alpha=0$ in the Theorem \ref{MP4Main1}, we see that $C_{\psi,\phi}$ preserves the class
$\mathcal{P}_{0}$ if and only if $|1-\omega| \, |\phi|+|\omega|<1 $ on $\D.$
\erem
\beg
If $|\phi| \leq |\omega|<\sqrt{2}-1$ on $\D$, then $C_{\psi,\phi}$ preserves $\mathcal{P}_{0}$.
\eeg
\bpf
In view of Remark \ref{Rem3-new} and the assumption that $|\phi| \leq |\omega|$, it is enough to show that
$|\omega|\,|1-\omega|<1-|\omega|$ which, by squaring and then simplifying, is seen to be equivalent to the
inequality
$$|\omega|^4-2{\rm Re\,}\omega |\omega|^2 +2|\omega|-1<0.
$$
In order to verify the last inequality, we observe that
\beqq
|\omega|^4-2{\rm Re\,}\omega |\omega|^2 +2|\omega|-1 &\leq & |\omega|^4+2|\omega|^3+2|\omega|-1\\
& =&(|\omega|^2+1)(|\omega|^2+2|\omega|-1)
\eeqq
which is negative whenever $|\omega|^2+2|\omega|-1<0$, i.e., $|\omega|<\sqrt{2}-1$.
The desired result follows.
\epf
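The criterion of Remark \ref{Rem3-new} can be tested directly: in the worst case $|\phi|=|\omega|$, the quantity $|1-\omega|\,|\omega|+|\omega|$ stays below $1$ for moduli up to $\sqrt{2}-1\approx 0.4142$. A Python sketch on a grid of sample points:

```python
import cmath, math

def criterion_alpha0(w, p):
    """Remark: C_{psi,phi} preserves P_0 iff |1 - omega||phi| + |omega| < 1 pointwise."""
    return abs(1 - w) * abs(p) + abs(w) < 1

# worst case |phi| = |omega|, with moduli approaching sqrt(2) - 1 from below
for t in (0.05, 0.15, 0.25, 0.35, 0.41):
    for k in range(36):
        w = t * cmath.exp(1j * 2 * math.pi * k / 36)
        assert criterion_alpha0(w, w)
```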
\beg
If $|\phi| \leq |\omega|<s_0$ or $|\omega|\leq |\phi|<s_0$ on $\D$, then $C_{\psi,\phi}$ preserves
$\mathcal{P}_{\alpha}$ for every $\alpha$ with $-1<\alpha<0$, where $s_0 ~(\approx 0.2648)$ is the unique positive root of the
polynomial $P(x)=2x^4+8x^3+12x^2-1$.
\eeg
\bpf
Without loss of generality, we assume that $|\phi| \leq |\omega|$. In view of Remark \ref{p and q} and the assumption that
$\alpha \in (-1,0)$, the inequality \eqref{mp4 eq1} can be rewritten as
$$
2|(1- \alpha)\,\overline{\omega}\,(\omega-1)+2i\alpha\, {\rm Im\,}\omega|\,|\phi|+(1- \alpha)(|\omega|^2
+|1-\omega|^2|\phi|^2-1)-\alpha(1-|\omega|^2)(1-|\phi|^2)<0.
$$
By setting $\|\omega\|=A$ and $\|\phi\|=B$ (so that $B\leq A$), it suffices to check that
$$
2(1-\alpha)A(A+1)B-4\alpha AB+(1- \alpha)(A^2 +(1+A)^2B^2-1)- \alpha <0,
$$
which is equivalent to
$$
-\alpha[(A+B+AB)^2+4AB]+(A+B+AB)^2-1<0.
$$
Since $B\leq A$ and $\alpha \in (-1,0)$, the last inequality holds if
$$
[(2A+A^2)^2+4A^2]+(2A+A^2)^2-1 =2A^4+8A^3+12A^2-1<0.
$$
Clearly, the function $P(x)=2x^4+8x^3+12x^2-1$ is
strictly increasing on $(0,\infty)$ and thus, $P(x)<0$ for $0\leq x< s_0$, where $s_0$
is the unique positive root of $P(x)$. The desired result follows.
\epf
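The constant $s_0$ can be computed by bisection; a Python sketch confirming $s_0\approx 0.2648$:

```python
def P(x):
    return 2 * x**4 + 8 * x**3 + 12 * x**2 - 1

# bisection on (0, 1): P(0) = -1 < 0, P(1) = 21 > 0, and P is increasing there
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if P(mid) < 0 else (lo, mid)
s0 = (lo + hi) / 2
assert abs(s0 - 0.2648) < 1e-3
```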
\section{Fixed points}\label{MP4Sec6}
In this section, we discuss the fixed points of weighted composition operators. It is time to recall
a result due to Yu-Qing Chen \cite[Theorem 2.1]{Fixed points}. The modern way of writing it
is as follows:
\bprop
Let $X$ be a metrizable topological vector space and $C$ be a convex compact subset of $X$.
Then, every continuous mapping $T:C\rightarrow C$ has a fixed point in $C$.
\eprop
We set $X=\mathcal{H}(\mathbb{D})$, $C=\mathcal{P}_{\alpha}$, $T=C_{\psi,\phi}$ and observe that
every weighted composition operator on $\mathcal{P}_{\alpha}$ has a fixed point.
Indeed, one has something more than this as we can see below.
\bthm
Let $\phi$, $\psi$, $\omega$ be as before, where $\phi$ is not a rotation. Suppose that $C_{\psi, \phi}$
is a self-map of $\mathcal{P}_{\alpha}$. Then, $C_{\psi, \phi}$ has a
unique fixed point which can be obtained by iterating $C_{\psi, \phi}$ on any
$f \in \mathcal{P}_{\alpha}$. Furthermore, if $\phi$ is an inner function, then the fixed point
is the constant function $1$.
\ethm
\bthm
Let $\phi$, $\psi$, $\omega$ be as before, where $\phi$ is a rotation. Suppose that $C_{\psi, \phi}$
is a self-map of $\mathcal{P}_{\alpha}$ and $F$ denotes the set of all fixed points of $C_{\psi, \phi}$.
Then, there are three distinct cases:
\bee
\item If $\phi(z)\equiv z$, then $F=\mathcal{P}_{\alpha}$.
\item If $\phi(z)\equiv \lambda z$ and $\lambda^n\neq 1$ for every $n\in \N$, then $F=\{1\}.$
\item If $\phi(z)\equiv \lambda z$ and $\lambda^n=1$ for some $n>1$, then
$$ F=\{ f: f(z)=g(z^n)\mbox{~ for some ~} g\in\mathcal{P}_{\alpha} \}.
$$
\eee
\ethm
The proofs of these two theorems follow along the lines of the proofs of the corresponding
results in Section $4$ of \cite{arxiv W-CO}. Moreover, the key tools for the proofs come
from Section $6.1$ of Shapiro's book \cite{Shapiro:Book}, so we omit the details.
\subsection*{Acknowledgement}
The first author thanks the Council of Scientific and Industrial Research (CSIR), India,
for providing financial support in the form of a SPM Fellowship to carry out this research.
The second author is currently at ISI Chennai Centre, Chennai, India.
1802.01594
\section{Introduction}
\cite{Bekenstein1972}, based on a suggestion by Geroch in 1971, argued
for an engine, the so-called Geroch-Bekenstein engine, converting mass to energy with
almost $100\%$ efficiency in the extreme gravitational potential of a black hole.
While such an efficient conversion is generally very difficult in an accretion
disc, \cite{Bisnovatyi-Kogan1974,Bisnovatyi-Kogan1976} and later \cite{Narayan2003}, by numerical simulations, showed that it is possible in the presence
of very strong large-scale fields. Such a strongly
magnetic field dominated flow with a significant advection drags poloidal
magnetic fields to the inner region owing to flux freezing. This is expected to
result in the accumulation of significant field tending to disrupt the axisymmetric
accretion flow relatively far away from the black hole. Inside the radius
of disruption the matter is shown to accrete as discrete blobs with a velocity
much smaller than the free-fall velocity and almost the entire rest mass
of infalling matter is converted to energy, similar to Geroch-Bekenstein engine.
The latter authors named such an accretion flow a Magnetically Arrested Disc (MAD).
Note that radiatively efficient Keplerian discs are cooler and,
hence, the magnetic field would slip through the matter by means of ambipolar
diffusion preventing the accumulation of fields. The same is true if anomalous
magnetic diffusivity is large \citep{Lovelace1994,Lubow1994}.
Relativistic jets are very common in accreting black hole sources,
observed in stellar-mass black holes (for a review, see \citealt{Remillard2006})
as well as in supermassive black holes, in particular AGN \citep{Tremaine2002} sources.
Sometimes powerful jets are observed with energy exceeding the Eddington limit of the
black hole \citep{Rawlings1991, Ghisellini2010, McNamara2011}, which argues for an efficient engine
underlying their formation. Indeed the signature of dynamically important magnetic
fields was found in the black hole source at the center of our galaxy
\citep{Eatough2013} and also based on the correlation of jet magnetic fields with
accretion disc luminosity for 76 radio-loud galaxies, the jet
launching region was concluded to exhibit dynamically important magnetic fields \citep{Zamaninasab2014}. All these observations/inferences support the idea of MAD.
By global three-dimensional non-radiative
general relativistic magnetohydrodynamics (GRMHD) simulations, \cite{Tchekhovskoy2011} described a MAD impeding accretion and efficiently producing
outflowing energy in jets and winds. They showed that the combined effects of large-scale
magnetic fields and black hole spin could yield an energy output even exceeding $100\%$ of the supplied energy.
Of course, the outflowing energy can exceed what is supplied only because of the fast
spin of the black hole. However, the spin effect of the black hole \citep{Blandford1977} must be supplemented by magnetic fields to reveal
higher and higher efficiency, which is possible for an underlying MAD.
The ordered magnetic field threading the disc matter corotates with the accreting
material and also helps to power the jets via the released gravitational energy \citep{Blandford1982}. However, outward movement of magnetic fields is also possible
\citep{Bisnovatyi-Kogan1974, Bisnovatyi-Kogan1976, van Ballegooijen1989, Lubow1994, Ogilvie2001}, and the final inner accumulation of field lines is determined
by the balance between advection and outward diffusion of large-scale magnetic fields.
Nevertheless, it was shown that significant inward dragging of fields is possible
in flows having significant advection, unlike Keplerian flows (see also \citealt{Cao2011}).
When the flow acquires strong magnetic fields, in particular far away from the
black hole, the magnetic tension and hence the corresponding shearing stress
could be significant. Hence, the underlying Maxwell stress could play a role
in transporting angular momentum outward and matter inward, as viscous
shearing stress does (in the presence of molecular viscosity). \cite{Mukhopadhyay2015} showed that large-scale magnetic fields indeed can transport angular momentum
in advective accretion flows as efficiently as the $\alpha$-viscosity \citep{Shakura1973} does. Nevertheless, their choice of field strength was
low enough, in particular in the inner region of the flow, that the magnetic barrier remained too weak
to reveal a MAD. As a result, in that model framework there is continuous accretion
driven by large-scale magnetic fields.
In the present paper, we plan to explore stronger large-scale magnetic fields
transporting angular momentum, as well as the possibility of a magnetic barrier
eventually forming in the inner flow region.
Here, Alfv\'enic critical points control the flow behaviour, rather than
the fast magnetosonic points of the weaker-field case discussed by \cite{Mukhopadhyay2015}.
The plan of this paper is as follows. In \S \ref{sec:equations} we present the basic equations for magnetized advective accretion flow, considering the magnetic heating term in a more general way. In \S \ref{sec:sonic} we apply those equations to evaluate the critical point conditions. In \S \ref{sec:result} we discuss the results, including both the disc flow behaviours and the origin of different magnetic barriers. In \S \ref{sec:discussion} we summarize and give overall conclusions.
\section{HEIGHT-AVERAGED EQUATIONS OF MAGNETIZED ADVECTIVE ACCRETION FLOWS}\label{sec:equations}
Following standard practice, we vertically average the geometrically thick accretion flow equations and consider the motion to be confined in the two-dimensional equatorial $r-\phi$ plane. We assume a steady and axisymmetric flow such that $\partial/\partial t\equiv \partial/\partial \phi \equiv 0$ and all the dynamical flow parameters, namely, radial velocity ($v$), specific angular momentum ($\lambda$), mass density ($\rho$), fluid pressure ($p$), radial ($B_{r}$) and azimuthal ($B_{\phi}$) components of magnetic field are functions of radial coordinate $r$ only.
Here, we plan to investigate the effects of large-scale strong magnetic fields on advective accretion flows transporting matter, as well as the possible origin of a magnetic barrier supporting the ``magnetically arrested disc'' (MAD) model, in the pseudo-Newtonian framework with the \citet{Mukhopadhyay2002} potential. The choice of this potential allows us to work in a Newtonian framework while capturing certain important features of general relativity quite accurately, compared to what would appear in the full general relativistic framework.
Throughout our calculation, we express the radial and vertical coordinates in units of $GM_{BH}/c^{2}$, where $G$ is Newton's gravitational constant, $M_{BH}$ is the mass of the black hole and $c$ is the speed of light. We also express the velocity in units of $c$ and the specific angular momentum in units of $GM_{BH}/c$ to make all the variables dimensionless. Hence, the equation of continuity, the radial and azimuthal components of the momentum equation and the energy equation are, respectively,
\begin{eqnarray}
&\frac{d}{dr}\left(r\rho h v\right)=0,\label{eq:continuity}& \\ &v\frac{dv}{dr}-\frac{\lambda^{2}}{r^{3}}+\frac{1}{\rho}\frac{dp}{dr}+F=-\frac{B_{\phi}}{4 \pi \rho}\left(\frac{dB_{\phi}}{dr}+\frac{B_{\phi}}{r}\right),\label{eq:radmomentum}& \\ &v\frac{d\lambda}{dr}=\frac{1}{r\rho h}\frac{d}{dr}\left(r^{2}W_{r\phi}h\right)+\frac{B_{r}}{4 \pi \rho h}\frac{d}{dr}\left(r B_{\phi} h\right),\label{eq:angmomentum}& \\ &\Sigma vT\frac{ds}{dr}=\frac{h(r)v}{\Gamma_{3}-1}\left(\frac{dp}{dr}-\frac{\Gamma_{1} p}{\rho}\frac{d\rho}{dr}\right)=Q^{+}-Q^{-}=f_{m}Q^{+}. \label{eq:energy}&
\end{eqnarray}
Here, $F$ is the magnitude of the pseudo-Newtonian gravitational force given by \citet{Mukhopadhyay2002} as
\begin{eqnarray*}
F=\frac{(r^{2}-2a\sqrt{r}+a^{2})^{2}}{r^{3}\lbrace \sqrt{r}(r-2)+a\rbrace ^{2}}\,,
\end{eqnarray*}
where, throughout our calculation, the Kerr parameter $a=0$ as for a non-rotating black hole (same as \citealt{Paczy1980}), $W_{r\phi}$ is the viscous shearing stress, which can be written using the \citet{Shakura1973} prescription with the appropriate modification given by \citet{MukhopadhyayGhosh2003}, and, from the vertical equilibrium assumption, the half-thickness of the disc in the presence of magnetic field can be written as
\begin{eqnarray}
h(r)= r^{1/2} F^{-1/2} \sqrt{\left(p+\frac{B^{2}}{8\pi}\right)\big/\rho}.
\end{eqnarray}
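As an illustrative numerical check of the half-thickness expression (a sketch only; the values of $p$, $\rho$ and $B$ below are arbitrary and not taken from the self-consistent solution), note that for $a=0$ the pseudo-Newtonian force reduces to $F=1/(r-2)^{2}$:

```python
import math

def pw_force(r):
    # Pseudo-Newtonian force for a = 0 (Paczynski-Wiita limit of the
    # Mukhopadhyay 2002 potential), in units of G = M_BH = c = 1.
    return 1.0 / (r - 2.0)**2

def half_thickness(r, p, rho, B=0.0):
    # h(r) = r^{1/2} F^{-1/2} sqrt((p + B^2/8pi) / rho)
    F = pw_force(r)
    return math.sqrt(r / F) * math.sqrt((p + B**2 / (8.0 * math.pi)) / rho)

# Illustrative (not from the paper) dimensionless values:
r, p, rho = 10.0, 1e-2, 1.0
h_gas = half_thickness(r, p, rho, B=0.0)
h_mag = half_thickness(r, p, rho, B=0.5)   # with magnetic pressure added
```

Including the magnetic pressure term $B^{2}/8\pi$ thickens the disc relative to the purely gaseous case, as expected from the vertical equilibrium condition.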
In equation~(\ref{eq:energy}), the left-hand side is the radially advected entropy, where $\Sigma$ is the vertically integrated mass density, $T$ is the (ion) temperature and $s$ is the entropy density of the flow. The adiabatic exponents can be written as \citep[e.g.][]{Chandrasekhar1939}
\begin{eqnarray*}
\Gamma_{1}=\beta + \frac{\left(4-3\beta\right)^{2}\left(\gamma-1\right)}{\beta+12\left(\gamma-1\right)\left(1-\beta\right)},\qquad \Gamma_{3}=1+\frac{\Gamma_{1}-\beta}{4-3\beta},
\end{eqnarray*}
where $\gamma$ is the ratio of the specific heats and $\beta$ is the ratio of gas pressure to total pressure, given by
\begin{eqnarray*}
\beta=\frac{\rho k_{B}T/\mu m_{p}}{\overline{a}T^{4}/3+\rho k_{B}T/\mu m_{p}+B^{2}/8\pi}.
\end{eqnarray*}
Here, $\overline{a}$ is the radiation constant, $k_{B}$ is the Boltzmann constant, $\mu$ is the mean molecular weight and $m_{p}$ is the proton mass. In the two limiting cases, $\beta=1$ and $\Gamma_{1}=\gamma=\Gamma_{3}$ for a gas pressure-dominated flow, and $\beta=0$ and $\Gamma_{1}=4/3=\Gamma_{3}$ for a radiation-dominated flow. The right-hand side of equation~(\ref{eq:energy}) gives the difference between the net rate of heat energy generated per unit area, $Q^{+}$, and the energy radiated out per unit area, $Q^{-}$. The energy generation term can be written as $Q^{+}=Q^{+}_{vis}+Q^{+}_{mag}$, where the contributions come from viscous and magnetic effects respectively. The details of the viscous contribution are given in the existing literature \citep[e.g.][]{Chakrabarti1996,Mukhopadhyay2015}. An abundant supply of magnetic energy and the annihilation of magnetic fields are responsible for magnetic heating, whose contribution per unit area is given by \citep[e.g.][]{Bisnovatyi-Kogan1974,Choudhuri1998,BalbusHawley1998,Mukhopadhyay2015}
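These limiting cases can be verified directly; a minimal sketch, using the standard Chandrasekhar relation $\Gamma_{3}=1+(\Gamma_{1}-\beta)/(4-3\beta)$:

```python
def adiabatic_exponents(beta, gamma):
    # Gamma_1 and Gamma_3 for a mixture of gas and radiation pressure
    # (Chandrasekhar 1939); beta = p_gas / p_total.
    g1 = beta + (4 - 3*beta)**2 * (gamma - 1) / (beta + 12*(gamma - 1)*(1 - beta))
    g3 = 1 + (g1 - beta) / (4 - 3*beta)
    return g1, g3

gamma = 5.0 / 3.0
g1_gas, g3_gas = adiabatic_exponents(1.0, gamma)   # gas pressure-dominated limit
g1_rad, g3_rad = adiabatic_exponents(0.0, gamma)   # radiation-dominated limit
```

In the gas-dominated limit both exponents reduce to $\gamma$, and in the radiation-dominated limit both reduce to $4/3$, exactly as quoted above.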
\begin{eqnarray}
Q^{+}_{mag}=\frac{h(r)}{4 \pi}\left[B_{r}^{2} \frac{dv}{dr}+B_{\phi} B_{r} \left(\frac{1}{r}\frac{d\lambda}{dr}-\frac{\lambda}{r^{2}}\right)\right].
\end{eqnarray}
The factor $f_{m}$ measures the degree to which the flow is cooling- or advection-dominated (see \citealt{Narayan1994}) and varies from $0$ to $1$: in the extreme limit of very efficient cooling $f_{m}=0$, while for no cooling $f_{m}=1$.
In order to have a full dynamical theory of the magnetohydrodynamic (MHD) flow, we now require two more equations, namely the magnetic induction equation and the no-magnetic-monopole condition, given respectively by
\begin{eqnarray}
&\nabla \times \left(\mathbf{v}\times\mathbf{B} \right)+\nu_{m}\nabla^{2}\mathbf{B}=0, \label{eq:induction}& \\ &\frac{d}{dr}\left(r B_{r}\right)=0, \label{eq:nomonopole}&
\end{eqnarray}
where $\mathbf{v}$ and $\mathbf{B}$ are respectively the velocity and magnetic field vectors and $\nu_{m}$ is the magnetic diffusivity. Taking the ratio of the orders of the first to the second term on the left-hand side of equation~(\ref{eq:induction}), we obtain the dimensionless magnetic Reynolds number $\mathcal{R}_{m}=LV/\nu_{m}$, where $L$ and $V$ are respectively the characteristic length scale and velocity of the system. When $\mathcal{R}_{m}$ is very large, as is the case for an accretion disc, the second term of equation~(\ref{eq:induction}) can be neglected. Hence, the induction equation becomes
\begin{eqnarray}
\frac{d}{dr}\left(vB_{\phi}-\frac{\lambda B_{r}}{r}\right)=0. \label{eq:induction2}
\end{eqnarray}
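To see why the diffusive term is negligible, consider rough, assumed order-of-magnitude values (illustrative only, not derived here): an inner-disc length scale $L\sim 10^{8}\,$cm for a stellar-mass black hole, an infall speed $V\sim 10^{9}\,$cm$\,$s$^{-1}$ and a microscopic magnetic diffusivity $\nu_{m}\sim 10^{3}\,$cm$^{2}\,$s$^{-1}$:

```python
def magnetic_reynolds(L, V, nu_m):
    # R_m = L V / nu_m : ratio of field advection to ohmic diffusion.
    return L * V / nu_m

# Illustrative values (assumptions, not from the paper):
Rm = magnetic_reynolds(1e8, 1e9, 1e3)   # enormously larger than unity
```

With any such astrophysical numbers $\mathcal{R}_{m}$ is many orders of magnitude above unity, which is what justifies dropping the $\nu_{m}\nabla^{2}\mathbf{B}$ term.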
To obtain the full dynamical solutions, the initial and boundary conditions are very important. For the present purpose, at the beginning of the sub-Keplerian flow, far away from the black hole at the transition radius, $\lambda=\lambda_{K}$ (where $\lambda_{K}$ is the Keplerian angular momentum per unit mass), which corresponds to the outer boundary $r=r_{out}$; the event horizon of the black hole is the inner boundary, where the velocity becomes of the order of unity. In addition, an important condition has to be supplied at the magnetosonic radius, discussed in \S \ref{sec:sonic}. We also have to supply $M_{BH}$, $\dot{M}$, $f_{m}$, $\alpha$ and $\gamma$ for a flow, where $\dot{M}$ is the constant mass accretion rate and $\alpha$ is the \cite{Shakura1973} viscosity parameter.
\section{SOLUTION PROCEDURE AND MAGNETOSONIC POINT ANALYSIS}\label{sec:sonic}
The set of six coupled differential equations (\ref{eq:continuity})--(\ref{eq:energy}), (\ref{eq:nomonopole}) and (\ref{eq:induction2}) can be solved simultaneously using appropriate boundary conditions, including those at the sonic/critical point(s), to obtain the solutions for six important dynamical variables: $v$, $\lambda$, $B_{r}$, $B_{\phi}$, $p$ and $\rho$, as functions of the independent variable $r$. Note that in the presence of the strong magnetic fields giving rise to the magnetic shearing stress considered here, the magnetorotational instability is expected to be suppressed and hence $\alpha\sim 0$. On the other hand, it can also be checked that for a reasonable value of $\alpha$, the second (magnetic) term on the right-hand side of equation (\ref{eq:angmomentum}) is generally at least an order of magnitude larger than the first (nonmagnetic viscous) term for the fields eventually considered in the subsequent sections. Thus, in the rest of the computation we assume $\alpha=0$. Indeed, our main aim here is to examine the flow behaviour, and the underlying possible transport, solely via strong magnetic fields. Now, combining all the above equations, we can write $dv/dr$ in terms of the other dynamical variables and the independent variable $r$ only, as
\begin{eqnarray}
\frac{dv}{dr}=\frac{\mathcal{N}}{\mathcal{D}}=\frac{\mathcal{N}}{\mathcal{A}v^{4}+\mathcal{B}v^{2}+\mathcal{C}}, \label{eq:slope}
\end{eqnarray}
where the numerator $\mathcal{N}$ is
\begin{eqnarray*}
&\mathcal{N}=\frac{p}{\rho}\left(2\frac{p}{\rho}+v_{A}^{2}\right)\frac{1}{F}\frac{dF}{dr}v\left(v_{Ar}^{2}-v^{2}\right)\Gamma_{1}+ \\
&Fv\left(v_{Ar}^{2}-v^{2}\right)\left(2\frac{p}{\rho}(1+\Gamma_{1})+v_{A}^{2}\right)+ \\
&2v^{2}v_{Ar}v_{A\phi}\left(2\frac{p}{\rho}+v_{A}^{2}\right)\frac{\lambda}{r^{2}}+ \\
&\left(2\frac{p}{\rho}+v_{A}^{2}\right)\frac{\lambda^{2}}{r^{3}}v\left(v^{2}-v_{Ar}^{2}\right)+ \\
&\frac{p}{\rho}\left(\frac{6p/\rho}{r}+\frac{v_{A}^{2}}{r}+2\frac{\lambda^{2}}{r^{3}}\right)\Gamma_{1}v\left(v^{2}-v_{Ar}^{2}\right)- \\
&\left(2\frac{p}{\rho}+v_{A}^{2}\right)v^{3}v_{A\phi}^{2}/r+ \\
&\left(2\frac{p}{\rho}+v_{A}^{2}\right)f_{m}\left(v^{2}\lambda +v_{Ar}^{2}\lambda - rv_{Ar}v_{A\phi}v \right)\left(\Gamma_{3}-1\right)\frac{v_{Ar}v_{A\phi}}{r^{2}},
\end{eqnarray*}
where $v_{Ar}=B_{r}/\sqrt{4\pi \rho}$, $v_{A\phi}=B_{\phi}/\sqrt{4\pi \rho}$ and the Alfv\'en velocity $v_{A}=\sqrt{v_{Ar}^{2}+v_{A\phi}^{2}}.$ The coefficients of the denominator $\mathcal{D}$ are
\begin{eqnarray*}
&\mathcal{A}=\left(2\frac{p}{\rho}(1+\Gamma_{1})+v_{A}^{2}\right), \\
&\mathcal{B}=\left(2\frac{p}{\rho}+v_{A}^{2}\right)\left(v_{Ar}^{2}f_{m}\left(\Gamma_{3}-1\right)-v_{A}^{2}-2\frac{p}{\rho}\Gamma_{1}\right)-2v_{Ar}^{2}\frac{p}{\rho}\Gamma_{1}, \\
&\mathcal{C}=v_{Ar}^{2}\left(2\frac{p}{\rho}+v_{A}^{2}\right)\left(2\frac{p}{\rho}\Gamma_{1}-f_{m}v_{A}^{2}\left(\Gamma_{3}-1\right)\right).
\end{eqnarray*}
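Since $\mathcal{D}$ is biquadratic in $v$, setting $\mathcal{D}=0$ yields the two critical Mach numbers in closed form. A sketch of this computation, adopting for illustration equal Alfv\'en components $v_{Ar}^{2}=v_{A\phi}^{2}=c_{s}^{2}/2$, a gas pressure-dominated flow with $\Gamma_{1}=\Gamma_{3}=\gamma=4/3$, and $f_{m}=0.5$ (all illustrative choices, not unique to the model):

```python
import math

def critical_mach_numbers(cs2, vAr2, vAp2, gamma=4.0/3.0, f_m=0.5):
    # Roots of D = A v^4 + B v^2 + C = 0, returned as Mach numbers
    # M = v/c_s.  cs2 = p/rho; vAr2, vAp2 are squared Alfven components.
    g1 = g3 = gamma                      # gas pressure-dominated flow
    vA2 = vAr2 + vAp2
    A = 2*cs2*(1 + g1) + vA2
    B = (2*cs2 + vA2)*(vAr2*f_m*(g3 - 1) - vA2 - 2*cs2*g1) - 2*vAr2*cs2*g1
    C = vAr2*(2*cs2 + vA2)*(2*cs2*g1 - f_m*vA2*(g3 - 1))
    disc = math.sqrt(B*B - 4*A*C)
    v2_fast = (-B + disc) / (2*A)        # fast magnetosonic branch
    v2_alf = (-B - disc) / (2*A)         # Alfvenic branch
    return math.sqrt(v2_fast/cs2), math.sqrt(v2_alf/cs2)

# Equal field components, f_r = f_phi = 1: v_Ar^2 = v_Aphi^2 = cs^2/2.
M_fast, M_alf = critical_mach_numbers(1.0, 0.5, 0.5)
```

For these parameters the fast magnetosonic critical Mach number exceeds unity while the Alfv\'enic one lies below it, consistent with the two branches discussed next.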
To guarantee a smooth solution around a point where $\mathcal{D}=0$, $\mathcal{N}$ must vanish there as well. Such points, $r=r_{c}$, are called ``critical points'', and variables with the subscript `$c$' refer to their values at that critical radius. Since $dv/dr=0/0$ at $r=r_{c}$, using l'Hospital's rule and after some algebra, it is easy to show that the velocity gradient of the accreting matter at the critical point, $(dv/dr)_{c}$, has two values: one valid for the accretion solution and the other for the wind. The nature of the critical point depends on these two values. When both velocity gradients are real and of opposite sign, the critical point is of `saddle' type; when they are real and of the same sign, it is of `nodal' type; when they are complex, it is of `spiral' type (or `O'-type for a non-dissipative system). For details of the classification, see \citealt{Chakrabarti1990}.
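The classification just described can be summarized schematically (the two roots of $(dv/dr)_{c}$ themselves follow from l'Hospital's rule; the hypothetical helper below only encodes the saddle/nodal/spiral criteria):

```python
def classify_critical_point(dvdr_roots):
    # dvdr_roots: the two values of (dv/dr)_c obtained via l'Hospital's
    # rule.  Complex roots -> 'spiral' ('O'-type if non-dissipative);
    # real roots of opposite sign -> 'saddle'; same sign -> 'nodal'.
    r1, r2 = dvdr_roots
    if isinstance(r1, complex) or isinstance(r2, complex):
        return "spiral"
    return "saddle" if r1 * r2 < 0 else "nodal"
```

Only a saddle-type critical point admits the transonic accretion solution that passes smoothly through it.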
We assume the Alfv\'en velocity components at $r_{c}$ to be expressed in terms of the sound speed $c_{sc}$ as
\begin{eqnarray}
v_{Arc}=\frac{c_{sc}}{f_{r}\sqrt{2}},\\
v_{A\phi c}=\frac{c_{sc}}{f_{\phi}\sqrt{2}},
\end{eqnarray}
where $c_{s} \simeq \sqrt{p/\rho}$, the factor $\sqrt{2}$ is a normalization factor and the constants $f_{r}$ and $f_{\phi}$ represent the inverse of the magnetic field strength.
In general, a steady MHD flow reveals three critical points, at which the radial velocity is of the order of the propagation speed of each of the three different types of mode: the fast magnetosonic wave, the Alfv\'en wave and the slow magnetosonic wave (\citealt{Weber1967}; also see \citealt{sakurai,Das2007}). The Alfv\'en wave is purely transverse and magnetic tension is its only restoring force. The other two magnetosonic waves are mixtures of acoustic and magnetic waves. The slow magnetosonic point is absent here due to the cold nature of the flow \citep[e.g.][]{Li1992,Gammie1999}. From $\mathcal{D}=0$, we can obtain the expressions for the Mach number $M_{c}$, the ratio of radial velocity $v$ to sound speed $c_{s}$, at the critical point for the two remaining modes: Alfv\'enic and fast magnetosonic. Figure~\ref{fig:figure1} shows the variation of $M_{c}$ with the constant parameter $f_{r}$, for different relative strengths of the radial and azimuthal components of the magnetic field obtained by adjusting the other constant parameter $f_{\phi}$. Figure~\ref{fig:figure1}$(a)$ is for the Alfv\'enic mode, whereas Figure~\ref{fig:figure1}$(b)$ is for the fast magnetosonic mode. Figure~\ref{fig:figure1}$(a)$ shows that $M_{c}$ corresponding to the Alfv\'enic mode shrinks to the point of disappearing when $f_{r}$ is very large (which corresponds to a weak magnetic field). On the other hand, in Figure~\ref{fig:figure1}$(b)$, in this large-$f_{r}$ limit, $M_{c}$ corresponding to the fast magnetosonic mode becomes unity and hence the disc behaves like a simple hydrodynamic one. Now, in the low-$f_{r}$ limit (corresponding to a very strong magnetic field), for the Alfv\'enic mode, $v$ is a more sensitive parameter than $c_{s}$, depending on whether the disc is poloidally or toroidally dominated.
For the $B_{r}$-dominated case (dotted line), matter is dragged inward more than it circulates, whereas for the $B_{\phi}$-dominated case (dashed line), matter rotates more than it accelerates inward, making $v$ smaller. Hence, $M_{c}$ is higher for the $B_{r}$-dominated case (dotted line) than for the $B_{\phi}$-dominated case (dashed line). However, in Figure~\ref{fig:figure1}$(b)$, for the fast magnetosonic mode, both $v$ and $c_{s}$ are sensitive parameters for different relative strengths. Here the matter density is much higher in the $B_{r}$-dominated case (dotted line) than in the $B_{\phi}$-dominated case (dashed line), making both $c_{s}$ and $v$ smaller for the former. For the Alfv\'enic mode, the critical Mach number profile shows a maximum and hence decreases with lowering values of $f_{r}$. This is because of the absence of a vertical magnetic field. Since we consider purely the disc (which is vertically integrated), the disc sustains all the magnetic field lines in the two-dimensional flow only, unlike what could happen in the presence of vertical motion. \cite{Mukhopadhyay2015} already initiated exploring the disc dynamics for large $f_{r}$ $(\sim 100)$, and hence for the weak-field limit of the fast magnetosonic mode. Here, we plan to address the dynamics for small $f_{r}$, mostly less than unity, for the Alfv\'enic mode.
\begin{figure*}
\includegraphics[width=16cm]{figure1}\caption{The variation of the Mach number at the critical point as a function of the magnetic field strength. The constant factor $f_{r}$ represents the inverse of the field strength; $(a)$ is for the Alfv\'en wave and $(b)$ for the fast magnetosonic wave, where the different lines are for different relative strengths of the magnetic field components (radial and azimuthal) at the critical point, namely $B_{rc}=B_{\phi c}$ (solid lines), $B_{rc}=B_{\phi c}/2$ (dashed lines) and $B_{rc}=2B_{\phi c}$ (dotted lines). The other parameters are $M_{BH}=10M_{\odot}, \ \dot{M}=0.01\dot{M}_{Edd}$ and $f_{m}=0.5$.}
\label{fig:figure1}
\end{figure*}
\section{RESULTS}\label{sec:result}
\citet{Mukhopadhyay2015} showed that the removal of angular momentum is possible in the presence of large scale magnetic stress in a geometrically thick, advective, sub-Keplerian accretion flow, in the complete absence of $\alpha$-viscosity. It was suggested that an externally generated large-scale poloidal magnetic field, originating from the environment, say the interstellar medium, would be dragged inward and greatly squeezed near the black hole by the accreting plasma \citep[e.g.][]{Bisnovatyi-Kogan1974,Bisnovatyi-Kogan1976}. In this case, when the large scale magnetic field is strong enough, the accretion flow is arrested by the magnetic field in the inner region of the disc, modifying the disc structure such that it becomes a MAD \citep[e.g.][]{Narayan2003,Igumenshchev2008,Mck12}.
Here, we plan to understand the following. $(1)$ What is the nature of the accretion flow near the black hole in the presence of a large scale strong magnetic field? $(2)$ Will there be any magnetic barrier, such that accretion will stop? $(3)$ What will be the fate of matter after knocking against the barrier? Will it go back to infinity?
\begin{figure*}
\includegraphics[width=16cm]{figure2}\caption{$(a)$ Mach number, $(b)$ angular momentum per unit mass, $(c)$ sound speed and $(d)$ plasma-$\beta$, where the different lines are for different relative strengths of the magnetic field components (radial and azimuthal) at the critical point, namely $B_{rc}=B_{\phi c}$ (solid lines), $B_{rc}=B_{\phi c}/2$ (dashed lines) and $B_{rc}=2B_{\phi c}$ (dotted lines). Here, $\lambda_{c}=3.2$. The other parameters are $M_{BH}=10M_{\odot}, \ \dot{M}=0.01\dot{M}_{Edd}$ and $f_{m}=0.5$.}
\label{fig:figure2}
\end{figure*}
\subsection{The origin of magnetic barrier}\label{sec:barrier}
To get an idea of the different types of magnetic barrier and their origin, we have to understand the contributions of all the forces acting on the accretion flow in the presence of a large scale magnetic field: the gravitational force in the pseudo-Newtonian regime, the centrifugal force, and the forces due to fluid pressure and magnetic fields. For this purpose, we have to look at the radial momentum balance equation~(\ref{eq:radmomentum}) more carefully. The first term on the R.H.S. of this equation comes from magnetic pressure, whereas the second term comes from magnetic tension. Now, it is very easy to understand that the magnetic tension will always support gravity. What about the magnetic pressure, which generally acts against gravity, like normal fluid pressure (see \citealt{Spruit2013})? The terms associated with the radial magnetic field from the pressure and tension parts are exactly equal and opposite, hence they cancel each other. In these circumstances, the profile of the azimuthal component plays the key role in creating any kind of magnetic barrier. The accretion process is suppressed when the forces acting against gravity dominate, such that
\begin{eqnarray}
-\frac{1}{4\pi \rho}\left(B_{\phi}\frac{dB_{\phi}}{dr}\right)-\frac{1}{\rho}\frac{dp}{dr}+\frac{\lambda^{2}}{r^{3}}>F+\frac{1}{4\pi \rho}\frac{B_{\phi}^{2}}{r}.
\label{eq:barrierIaI}
\end{eqnarray}
Hence, the essential conditions under which matter faces the barrier are
\begin{eqnarray}
B_{\phi}\frac{dB_{\phi}}{dr}<0 \quad \textrm{and} \quad \Bigg\vert B_{\phi}\frac{dB_{\phi}}{dr}\Bigg\vert \gg \frac{B_{\phi}^{2}}{r} \label{eq:barrierIa}.
\end{eqnarray}
Combining these, we obtain $\frac{dB_{\phi}}{dr} \ll -\frac{B_{\phi}}{r}$, and from equation~(\ref{eq:nomonopole}) we already know that $\frac{dB_{r}}{dr} = -\frac{B_{r}}{r}.$ Hence, the barrier appears when the disc is poloidally dominated.
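As a toy illustration of the barrier conditions (a power-law profile $B_{\phi}\propto r^{-n}$ is assumed here purely for demonstration; the self-consistent $B_{\phi}$ is coupled to the full dynamics), $dB_{\phi}/dr=-n\,B_{\phi}/r$, so the two conditions of equation~(\ref{eq:barrierIa}) together require $n\gg 1$:

```python
def barrier_terms(Bphi, r, n):
    # For an assumed power law B_phi ~ r^{-n}: dB_phi/dr = -n * B_phi / r.
    dBphi_dr = -n * Bphi / r
    pressure_like = Bphi * dBphi_dr     # magnetic-pressure gradient term
    tension_like = Bphi**2 / r          # magnetic-tension term
    return pressure_like, tension_like

# Steep profile (n = 10): gradient term negative and dominant -> barrier.
grad, tens = barrier_terms(Bphi=1.0, r=5.0, n=10.0)
```

The ratio $|B_{\phi}\,dB_{\phi}/dr|/(B_{\phi}^{2}/r)$ equals $n$ for such a profile, so only a sharply declining (poloidally dominated) azimuthal field satisfies the barrier criterion.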
An accretion disc can carry small as well as large scale magnetic fields; of course, there is a certain upper limit to the amount of magnetic flux that the disc around a black hole can sustain. The strength of the magnetic field plays a very crucial role in the dynamics of the accretion flow. The large scale field generally cannot be produced in the disc. However, a seed magnetic field can be generated from a zero initial field through, e.g., the Biermann battery mechanism \citep{Safarzadeh2017}, when there are non-aligned gradients in the density and temperature profiles, which arise automatically in the accretion disc structure. On the other hand, an externally generated field can be captured from the environment, say the interstellar medium, and dragged inward by the accretion flow. This weak magnetic field can become dynamically dominant through flux freezing, due to the inward advection of the magnetic field in this quasi-spherical accretion flow. This large amount of poloidal magnetic field can neither escape, due to the continued inward accretion pressure, nor be absorbed by the central black hole. In this situation, matter has to fight its way down to the event horizon, facing this type of magnetic barrier. The origin of this magnetic barrier is described elaborately in Figures~\ref{fig:figure2} and \ref{fig:figure3}.
Figure~\ref{fig:figure2} shows the solutions for some important dynamical variables for three different relative field configurations: $B_{rc}=B_{\phi c}$ (solid line), $B_{rc}=B_{\phi c}/2$ (dashed line) and $B_{rc}=2B_{\phi c}$ (dotted line) at $r_{c}=5.9$, where $\lambda_{c}=3.2$ is the same for all three cases. Note that here, and in the subsequent cases discussed below, similar respective results can be obtained for highly relativistic ($\gamma\sim 4/3$) as well as less relativistic (with $\gamma\sim 1.4$ or so) flows with a slight readjustment of $r_c$ or $\lambda_c$.
In Figure~\ref{fig:figure2}$(a)$, for the Mach number profile, the solid curve indicates that matter initially drags inward and $M$ increases gradually until it faces the magnetic barrier at $r\approx 3.7$. While trying to move away from the black hole, it faces another barrier at $r\approx 4.4$ and then slowly falls back onto the event horizon. The dashed line indicates that $M$ increases gradually as matter drags inward without facing any magnetic barrier, but it always remains in the sub-sonic region. The dotted one shows that, even though there is no magnetic barrier, the magnetic field arrests the infalling matter and slows it down towards the horizon. The corresponding specific angular momentum profile is shown in Figure~\ref{fig:figure2}$(b)$. Here, angular momentum transport occurs via the large scale magnetic stress. The sound speed and the plasma-$\beta$ are also shown in Figures~\ref{fig:figure2}$(c)$ and \ref{fig:figure2}$(d)$ respectively. The details of the dynamics will be explained in the next section; here we address only the origin of the magnetic barrier.
Figures~\ref{fig:figure3}$(a)$, \ref{fig:figure3}$(c)$ and \ref{fig:figure3}$(e)$ show the profiles of the magnetic field components for three different conditions at the critical point, and the corresponding net forces are given in Figures~\ref{fig:figure3}$(b)$, \ref{fig:figure3}$(d)$ and \ref{fig:figure3}$(f)$. Here, the net force indicates the sum over all the forces mentioned above, and it is negative when the inward, gravity-supporting forces dominate. Symbolically, the net force is the R.H.S. of the equation given below:
\begin{eqnarray}
v\frac{dv}{dr}=\frac{\lambda^{2}}{r^{3}}-\frac{1}{\rho}\frac{dp}{dr}-F-\frac{1}{4\pi \rho}\left(B_{\phi}\frac{dB_{\phi}}{dr}+\frac{B_{\phi}^{2}}{r}\right) \label{eq:netforce}.
\end{eqnarray}
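Equation~(\ref{eq:netforce}) can be read off numerically; the sketch below (with $F=1/(r-2)^{2}$ for $a=0$ and arbitrary, non-self-consistent local values) shows how a steeply declining $B_{\phi}$ flips the sign of the net force from inward to outward:

```python
import math

def net_force(r, lam, rho, dp_dr, Bphi, dBphi_dr):
    # R.H.S. of the radial momentum equation: positive -> outward push
    # (possible barrier), negative -> inward pull (accretion proceeds).
    F = 1.0 / (r - 2.0)**2          # pseudo-Newtonian force for a = 0
    centrifugal = lam**2 / r**3
    pressure = -dp_dr / rho
    magnetic = -(Bphi*dBphi_dr + Bphi**2 / r) / (4.0*math.pi*rho)
    return centrifugal + pressure - F + magnetic

# Weak field, gentle gradients: net inward force (accretion continues).
f_in = net_force(r=10.0, lam=3.2, rho=1.0, dp_dr=-1e-4,
                 Bphi=0.1, dBphi_dr=-0.01)
# Steeply declining B_phi (poloidally dominated regime): net outward force.
f_out = net_force(r=10.0, lam=3.2, rho=1.0, dp_dr=-1e-4,
                  Bphi=1.0, dBphi_dr=-0.5)
```

The sign change is driven entirely by the $B_{\phi}\,dB_{\phi}/dr$ term, in line with the barrier conditions of equation~(\ref{eq:barrierIa}).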
Figure~\ref{fig:figure3}($a$) shows that the disc is poloidally dominated before the flow faces the barrier at $r\approx 3.7$, and the field components satisfy the barrier conditions given in equation~(\ref{eq:barrierIa}). The corresponding net force is shown in Fig.~\ref{fig:figure3}$(b)$: initially it increases gradually to larger negative values as matter drags inward, and it jumps discontinuously at the barrier location at $r\approx 3.7$ from the negative to the positive direction, indicating that matter faces a negative impulse and tries to go back to infinity. The long dotted arrows indicate the infinite discontinuous jump and the small arrows indicate the direction of matter movement. Fig.~\ref{fig:figure3}($e$) shows that the disc is toroidally dominated, and the corresponding net force, shown in Fig.~\ref{fig:figure3}$(f)$, is always negative. Hence the magnetic barrier does not appear and matter drags inward freely. In between these two cases, in Figures~\ref{fig:figure3}$(c)$ and \ref{fig:figure3}$(d)$, the barrier tries to appear in such a way that the poloidal and toroidal magnetic fields arrest the infalling mass and slow it down towards the black hole. Hence, the accretion flow is decelerated near the black hole by the large scale magnetic field.
What will be the fate of matter after knocking against the barrier? As shown in Figures~\ref{fig:figure2} (solid line), \ref{fig:figure3}$(a)$ and \ref{fig:figure3}$(b)$, after knocking against the barrier, matter tries to move away from the black hole. In this context, the behaviour of the magnetic stress-tensor is very important, since the field lines have already arrested the particles. The magnetic stress acts like a negative pressure along the field lines, as in the case of a stretched elastic wire, and this negative stress is known as `magnetic tension' (see \citealt{Spruit2013}). On its way away from the black hole, matter totally loses its angular momentum, and the cumulative action of the inward strong gravity and the magnetic tension along the field lines controls the system. Hence, matter is prevented from escaping by the dominance of the net inward, gravity-supporting forces. This is the origin of the second magnetic barrier, shown in Figures~\ref{fig:figure3}$(a)$ and \ref{fig:figure3}$(b)$ at $r\approx 4.4$.
\begin{figure*}
\includegraphics[width=16cm]{figure3}\caption{The radial (dotted lines) and azimuthal (solid lines) components of the magnetic field, describing the origin of the magnetic barrier for different relative field strengths at the critical point as in Figure~\ref{fig:figure1}, such that at $r=r_{c}$: $(a)$ $B_{rc}=B_{\phi c}$, $(c)$ $B_{rc}=2B_{\phi c}$, $(e)$ $B_{rc}=B_{\phi c}/2$. The corresponding net forces are given in $(b)$, $(d)$ and $(f)$ respectively. In $(b)$ the long dotted arrows indicate the infinite discontinuous jump, whereas the small arrows indicate the direction of matter movement. The other parameters are the same as in Figure~\ref{fig:figure2}.}
\label{fig:figure3}
\end{figure*}
\subsection{Disc dynamics}\label{sec:dynamics}
Now we concentrate on the disc flow behaviour in detail. Depending on the location of the critical points and the corresponding relative magnetic field strengths, the size of the disc with sub-Keplerian flow varies. The conditions for all the different cases are supplied in Table~\ref{tab:table1}.
\begin{table}
\centering
\caption{Conditions at the critical point and at outer boundary.}
\label{tab:table1}
\begin{tabular}{lcccr}
\hline
Figure & $r_{out}$ & $r_{c}$ & $\lambda_{c}$ & $B_{ic}$\\
\hline
\ref{fig:figure2} \& \ref{fig:figure3} & 6.74 & 5.87 & 3.2 & $B_{rc}=B_{\phi c}$\\
\ref{fig:figure5} & 67 & 25.1 & 3.2 & $B_{rc}=B_{\phi c}$\\
\ref{fig:figure7} & 220 & 41 & 3.2 & $B_{rc}=B_{\phi c}$\\
\ref{fig:figure8} & 152.5 & 41 & 3.6 & $B_{rc}=B_{\phi c}$\\
\ref{fig:figure2} \& \ref{fig:figure3} & 6.38 & 5.85 & 3.2 & $B_{rc}=2B_{\phi c}$\\
\ref{fig:figure6} & 174 & 55 & 3.2 & $B_{rc}=2B_{\phi c}$\\
\ref{fig:figure2} \& \ref{fig:figure3} & 7.19 & 5.9 & 3.2 & $B_{rc}= B_{\phi c}/2$\\
\ref{fig:figure4} & 122 & 23 & 3.2 & $B_{rc}= B_{\phi c}/2$\\
\hline
\end{tabular}
\end{table}
Figure~\ref{fig:fig4} illustrates the nature of the magnetic field lines in the accretion flows around a black hole considered here. Although our model describes the behaviour of accretion flows in the upper half of the disc (positive scale height only), by symmetry the lower half-plane can be visualized as well. Figure~\ref{fig:fig4} shows that the field lines point towards the black hole in the upper half-plane, whereas they point oppositely in the lower half-plane. This is indeed expected in accordance with equation (\ref{eq:nomonopole}), which, along with the solution for $B_{\phi}$, gives rise to a split-monopole-like feature for the present model.
\begin{figure}
\includegraphics[width=10cm]{figure4}
\caption{The nature of magnetic field vectors in a typical model accretion flow
around a black hole considered here. Vectors in both upper- and lower-half planes are considered together. }
\label{fig:fig4}
\end{figure}
The solution for some important dynamical and thermodynamical variables is shown in Figure~\ref{fig:figure4}, where $r_{c}=23$ and $r_{out}=122$. At $r_{c}$, the Mach number $M_{c}\approx 0.47$, i.e. $v_{c}\approx 0.47c_{sc}$. As matter advances towards the event horizon, which is located at $r=2$ for a non-rotating black hole, $M$ increases. This is quite common in accretion discs, but an interesting feature arises at $r\approx 5.3$, shown in Figure~\ref{fig:figure4}$(a)$, below which matter fails to accelerate and hence $M$ decreases. This is due to the dynamically dominant magnetic field. The strong magnetic field arrests the infalling matter and slows it down further towards the black hole. The field components are shown in Figure~\ref{fig:figure4}$(e)$. Since the disc is toroidally dominated, the infalling matter rotates more than it is dragged inward. Figure~\ref{fig:figure4}$(f)$ shows the corresponding net force given by equation~(\ref{eq:netforce}), which is negative when the net inward force dominates. Below $r\approx 5.3$, the net force indeed becomes positive, where the disc is already arrested by the field lines.
Figure~\ref{fig:figure4}$(b)$ shows that the outward transport of angular momentum is apparent in the presence of large scale magnetic stress, with $\lambda=\lambda_{K}$ at the outer boundary $r_{out} \approx 122$. However, in the inner region, where the disc is arrested mostly by the toroidal magnetic field, $\lambda$ does not decrease and the infalling matter rotates very fast rather than accelerating inward; hence, the velocity components are $v\approx 0.23$ and $v_{\phi}=\lambda/r \approx 0.97$ near the event horizon. Figure~\ref{fig:figure4}$(c)$ shows that the sound speed increases monotonically towards the horizon and reaches a maximum of $0.32$, corresponding to a temperature $T\approx 10^{12}$K, as expected in an advective accretion flow. Figure~\ref{fig:figure4}$(d)$ shows the trend of the plasma-$\beta$, with the magnetic pressure comparable to the fluid pressure, and sometimes even exceeding it, due to the presence of the large scale strong magnetic field.
\begin{figure*}
\includegraphics[width=16cm]{figure5}\caption{$(a)$ Mach number, $(b)$ angular momentum per unit mass, $(c)$ sound speed, $(d)$ plasma-$\beta$, $(e)$ the radial (dotted line) and azimuthal (solid line) components of magnetic field and (f) net force. Here, $r_{c}=23$, $r_{out}=122$, $\lambda_{c}=3.2$, $B_{rc}=B_{\phi c}/2$ and the other parameters are same as in Figure~\ref{fig:figure2}.}
\label{fig:figure4}
\end{figure*}
The nature of the magnetic field vectors in the $x$-$y$ plane around a black hole for the above flow configuration is shown in Figure~\ref{fig:fig6}, and the corresponding three-dimensional visualization in the upper half-plane of the disc is shown in Figure~\ref{fig:fig7}. Both results are in accordance with Figure~\ref{fig:figure4}, where matter is infalling throughout.
\begin{figure*}
\includegraphics[width=16cm]{figure6}\caption{The nature of magnetic field $(a)$ vectors, and $(b)$ stream lines, in the
$x-y$ plane of the accretion flow around a black hole for the case shown in Figure~\ref{fig:figure4}.}
\label{fig:fig6}
\end{figure*}
\begin{figure}
\includegraphics[width=10cm]{figure7}
\caption{Three-dimensional visualization of the magnetic field lines shown in
Figure~\ref{fig:fig6}.}
\label{fig:fig7}
\end{figure}
For a different flow configuration, the solution for the same dynamical variables is shown in Figure~\ref{fig:figure5}. Here $r_{c}=25.1$. Figure~\ref{fig:figure5}$(a)$ shows the Mach number profile, with $M_{c}\approx 0.8$. Initially $M$ increases monotonically up to $r\approx 9.5$, where the first magnetic barrier appears due to the accumulation of a significant amount of poloidal magnetic field. After knocking against the barrier, matter tends to move away from the event horizon, but on the way it faces another barrier at $r\approx 15.6$, and hence is prevented from escaping by the cumulative action of strong gravity and the magnetic tension along the field lines. The origin of these barriers is already explained in \S \ref{sec:barrier}. The small loop in the Mach number profile between these two barriers indicates the existence of a `centre'-type or `O'-type critical point. Inside this region, $M$ again increases and matter reaches the event horizon, where the radial velocity $v=1$. The corresponding net force acting on the matter is shown in Figure~\ref{fig:figure5}$(f)$; it is negative when the net inward force dominates. Initially it increases negatively, indicating accretion, but at the first barrier location $(r\approx 9.5)$ it shows an infinite discontinuous jump from negative to positive. A similar infinite discontinuous jump from negative to positive also happens at the outer barrier location $(r\approx 15.6)$. After that, it again increases negatively, indicating accretion. In Figure~\ref{fig:figure5}$(f)$, we particularly focus on the regions around the barriers.
Figure~\ref{fig:figure5}$(e)$ shows the field components: the radial magnetic field follows an $r^{-1}$ profile over the whole solution, as given by the no-monopole equation~(\ref{eq:nomonopole}), whereas the azimuthal part is coupled with the other dynamical variables and cannot be expressed in such a simple fashion. Near the first barrier location (at $r\approx 9.5$), the disc is poloidally dominated, as discussed in the previous section. The azimuthal magnetic field profile, along with the specific angular momentum, decides how big the loop (due to the presence of the `O'-type critical point) in the $M$-profile is between the two barriers. This will become more apparent in the next two cases.
Figure~\ref{fig:figure5}$(b)$ shows that the outward transport of angular momentum occurs in the presence of large scale magnetic stress, with the outer boundary, corresponding to $\lambda=\lambda_{K}$, at $r_{out}\approx 67$. Depending on the conditions at the critical point, the slope $\partial \lambda/ \partial r$ here is steeper than in the case shown in Figure~\ref{fig:figure4} and, hence, the steeper $\lambda$ profile helps the matter lose angular momentum faster, making the disc size smaller than in the case of Figure~\ref{fig:figure4}. Initially $\partial \lambda/ \partial r$ is positive, indicating that matter is dragging inward. After knocking against the first barrier at $r\approx 9.5$, matter tends to move away from the black hole with $\partial \lambda/ \partial r <0$. But on this way, matter totally loses its angular momentum and, due to the dominance of the net inward forces, faces another barrier at $r\approx 15.6$ and falls back to the horizon. Figure~\ref{fig:figure5}$(c)$ shows that the sound speed reaches a maximum of $0.28$ at the horizon, corresponding to a temperature $T\gtrsim 10^{11}$K. Figure~\ref{fig:figure5}$(d)$ shows the trend of the plasma-$\beta$, indicating the domination of magnetic pressure over fluid pressure almost throughout the flow.
\begin{figure*}
\includegraphics[width=16cm]{figure8}\caption{$(a)$ Mach number, $(b)$ angular momentum per unit mass, $(c)$ sound speed, $(d)$ plasma-$\beta$, $(e)$ the radial (dotted line) and azimuthal (solid line) components of magnetic field and $(f)$ net force. Here, $r_{c}=25.1$, $r_{out}=67$, $\lambda_{c}=3.2$, $B_{rc}=B_{\phi c}$ and the other parameters are same as in Figure~\ref{fig:figure2}.}
\label{fig:figure5}
\end{figure*}
Next we address the disc dynamics for the case where the two barriers merge into a single point such that there is no `O'-type critical point between them. This is shown in Figure~\ref{fig:figure6} for $r_{c}=55$, $B_{rc}=2B_{\phi c}$, $\lambda_{c} = 3.2$ and $M_{c}=1.03$. Figure~\ref{fig:figure6}$(a)$ shows that the Mach number profile has a small kink at $r\approx 23$, which indicates the merging point of the two barriers. Hence, $M$ increases monotonically, as expected in an accretion flow, and reaches a maximum at the event horizon, where the radial velocity becomes unity. Figure~\ref{fig:figure6}$(b)$ confirms the outward transport of $\lambda$ by the large scale magnetic stress; the outer boundary corresponding to $\lambda=\lambda_{K}$ is at $r_{out} \approx 174$. Figure~\ref{fig:figure6}$(c)$ shows the sound speed, which essentially carries the information about the disc temperature. Figures~\ref{fig:figure6}$(d)$ and \ref{fig:figure6}$(e)$ show the plasma-$\beta$ parameter and the magnetic field components, respectively. The net force in Figure~\ref{fig:figure6}$(f)$ increases in magnitude in the negative direction as matter is dragged inward towards the event horizon, as expected in an accretion flow.
\begin{figure*}
\includegraphics[width=16cm]{figure9}\caption{$(a)$ Mach number, $(b)$ angular momentum per unit mass, $(c)$ sound speed, $(d)$ plasma-$\beta$, $(e)$ the radial (dotted line) and azimuthal (solid line) components of magnetic field and $(f)$ net force. Here, $r_{c}=55$, $r_{out}=174$, $\lambda_{c}=3.2$, $B_{rc}=2B_{\phi c}$ and the other parameters are the same as in Figure~\ref{fig:figure2}.}
\label{fig:figure6}
\end{figure*}
Figure~\ref{fig:figure7} shows a very important and unique solution. Here, the critical point is located at $r_{c}=41$ and at this point $B_{rc}=B_{\phi c}$ and $\lambda_{c}=3.2$. Figure~\ref{fig:figure7}$(a)$ shows that $M$ increases continuously as matter is dragged inward and, due to the accumulation of a large amount of poloidal magnetic flux, matter faces the magnetic barrier at $r\approx 12.5$. After hitting the barrier, matter goes back to infinity. This type of profile should contribute very significantly to outflows/jets. In a more realistic three-dimensional model, matter would move vertically after hitting the barrier, revealing outflow. The angular momentum profile in Figure~\ref{fig:figure7}$(b)$ shows that, starting from the outer boundary corresponding to $\lambda=\lambda_{K}$ at $r_{out}=220$, $\lambda$ decreases as matter is dragged inward; after hitting the barrier it increases gradually for larger orbits and reaches the Keplerian limit again at $r\approx 110$. The negative sign simply signifies that the matter rotates in the opposite direction after facing the barrier. The sound speed profile in Figure~\ref{fig:figure7}$(c)$ shows that it initially increases up to $0.15$ at the barrier location and then decreases monotonically as matter moves away from the black hole, as expected from the decreasing potential energy. In Figure~\ref{fig:figure7}$(d)$, the plasma-$\beta$ is less than unity, which indicates that the magnetic pressure dominates the fluid pressure. The origin of this type of profile is hidden in the magnetic field strength profile, given in Figure~\ref{fig:figure7}$(e)$. As usual, the matter faces the magnetic barrier at $r\approx 12.5$ due to the dominant poloidal magnetic field in the inner region. After hitting the barrier, matter moves far away from the black hole. Since both $B_{\phi}$ and $\partial B_{\phi}/ \partial r$ gradually become very weak, the magnetic tension is not large enough to prevent the matter from escaping. The net force in Figure~\ref{fig:figure7}$(f)$ initially increases gradually in the negative direction as matter is dragged inward and jumps discontinuously from negative to positive at the barrier location. The arrows indicate the direction of the matter flow.
\begin{figure*}
\includegraphics[width=16cm]{figure10}\caption{$(a)$ Mach number, $(b)$ angular momentum per unit mass, $(c)$ sound speed, $(d)$ plasma-$\beta$, $(e)$ the radial (dotted line) and azimuthal (solid line) components of magnetic field and $(f)$ net force. Here, $r_{c}=41$, $r_{out}=220$, $\lambda_{c}=3.2$, $B_{rc}=B_{\phi c}$ and the other parameters are the same as in Figure~\ref{fig:figure2}.}
\label{fig:figure7}
\end{figure*}
The nature of the magnetic field vectors in the $x-y$ plane for this unique flow is shown in
Figure~\ref{fig:fig11}, and the corresponding three-dimensional visualization in the upper half plane of the disc is shown
in Figure~\ref{fig:fig12}. Both figures depict the absence of any magnetic vector in the inner flow region, in accordance
with Figure~\ref{fig:figure7}. It is important to note that in the region $220\ge r\ge 12.5$, where the matter is falling in,
the field vectors point inward. On the other hand, in $12.5\le r\le 110$, where the matter is flowing out,
the field vectors point outward. Hence, there is a zone in Figures \ref{fig:fig11} and \ref{fig:fig12} where
field lines of both directions appear simultaneously.
\begin{figure*}
\includegraphics[width=16cm]{figure11}\caption{ The nature of $(a)$ magnetic field vectors, and $(b)$ magnetic field vectors near the
barrier location, in the $x-y$ plane of the accretion flow around a black hole for the case shown in Figure~\ref{fig:figure7}.}
\label{fig:fig11}
\end{figure*}
\begin{figure}
\includegraphics[width=10cm]{figure12}
\caption{Three-dimensional visualization of the magnetic field lines shown in
Figure~\ref{fig:fig11}.}
\label{fig:fig12}
\end{figure}
The solutions shown in Figure~\ref{fig:figure8} are obtained with the same conditions as in Figure~\ref{fig:figure7}, except
that $\lambda_{c}=3.6$ instead of 3.2. The importance of this figure is that it shows how the second barrier can arise, with an `O'-type critical point between the two barriers, depending on the relative dependence of the specific angular momentum and the magnetic field strength at the critical point. The azimuthal magnetic field profile is strongly linked to $\lambda$ according to equation~(\ref{eq:angmomentum}). Hence, the larger angular momentum at the critical point makes the slope $\partial \lambda / \partial r$ steeper than in Figure~\ref{fig:figure7}, which not only makes the disc smaller but also makes the slope $\partial B_{\phi} / \partial r$ steeper even after the matter hits the first barrier. Therefore, the net inward force dominates due to the magnetic contribution given in equation~(\ref{eq:netforce}), and the second barrier appears. In other words, these quantities determine the size of the loop in the $M$-profile between the two barriers, around the `O'-type critical point.
\begin{figure*}
\includegraphics[width=16cm]{figure13}\caption{$(a)$ Mach number, $(b)$ angular momentum per unit mass, $(c)$ sound speed, $(d)$ plasma-$\beta$, $(e)$ the radial (dotted line) and azimuthal (solid line) components of magnetic field and $(f)$ net force. Here, $r_{c}=41$, $r_{out}=152.5$, $\lambda_{c}=3.6$, $B_{rc}=B_{\phi c}$ and the other parameters are the same as in Figure~\ref{fig:figure2}.}
\label{fig:figure8}
\end{figure*}
The nature of the magnetic field vectors in the $x-y$ plane for this flow is shown in
Figure~\ref{fig:fig14}, and the corresponding three-dimensional visualization in the upper half plane of the disc is shown
in Figure~\ref{fig:fig15}. Both figures depict a zone where
field lines of both directions appear simultaneously. This corresponds to the region of the loop around the `O'-type critical point.
\begin{figure}
\includegraphics[width=\columnwidth]{figure14}\caption{The nature of magnetic field vectors in the $x-y$ plane of the accretion
flow around a black hole for the case shown in Figure~\ref{fig:figure8}.}
\label{fig:fig14}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{figure15}\caption{Three-dimensional visualization of the magnetic field lines shown in
Figure~\ref{fig:fig14}.}
\label{fig:fig15}
\end{figure}
\section{DISCUSSION AND CONCLUSIONS}\label{sec:discussion}
We have explored the effects of a strong large scale magnetic field on the advective accretion flow in transporting angular momentum, as well as the origin of different magnetic barriers in the MAD regime. Here, the radial accretion velocity is typically high compared to that of the standard thin disc model and is proportional to the magnetic tension. This is because the radial velocity is determined by how fast the magnetic tension can transfer angular momentum outwards. The specific angular momentum of the flow is much smaller than the local Keplerian value, and the outer boundary of this model corresponds to $\lambda=\lambda_{K}$, the beginning of the sub-Keplerian flow far away from the black hole. Hence, the disc size depends principally on the magnetic field strength.
We have demonstrated the possible formation of four distinct flow classes: (1) no barrier and matter
reaches the black hole, (2) a barrier stops the infall and the matter goes back completely, (3) infalling
matter faces two barriers but eventually reaches the black hole event horizon, and (4) matter decelerates
near the event horizon and eventually falls into the black hole.
The magnetic field plays a very crucial role in the dynamics of the accretion flow. The accretion disc can carry small as well as large scale magnetic fields. However, there is an upper limit to the magnetic field that a disc around a black hole can sustain in steady state, and it is reached when the accretion flow attains the MAD state. Generally, a large scale magnetic field cannot be generated within the disc; rather, it is captured from the environment, say the interstellar medium, and dragged inward by the accretion flow. This magnetic field can become dynamically dominant near the event horizon through flux freezing, due to the inward advection of the magnetic flux in this quasi-spherical accretion flow. The accumulated poloidal field cannot be absorbed by the black hole, nor can it escape, due to the continued inward accretion pressure. Under these circumstances, matter faces the magnetic barrier and goes back to infinity. This yields two Mach numbers at the same radius, one corresponding to infalling matter and the other to outgoing matter in this quasi-spherical accretion flow. These types of profile are expected to play a crucial role in the generation of various kinds of outflow. More generally, this may be the building block for producing jets/outflows. Further, the Bernoulli parameter $b$ (given in APPENDIX~\ref{sec:appendix A}) is positive here. This also implies that a highly magnetized advective accretion flow provides a generic explanation of unbound matter and hence of outflows and jets.
This is not the whole story: since in the MAD state the magnetic field strength is large, it can act in its own way. After hitting the barrier, when matter tries to move away from the black hole, it totally loses its angular momentum. At the same time, the cumulative action of the inward pull of gravity and the magnetic tension along the field lines can be large enough to prevent the matter from escaping as well. Hence, matter may face a second magnetic barrier and fall back to the black hole. This can be a possible explanation of episodic jet phenomena, where the magnetic field can lock the matter between these two barriers, depending on the relative dependence between the specific angular momentum and the magnetic field strength.
Is there any observational evidence of such strong magnetic fields as discussed here? From the magnetic field profile, we find that the field strength in the inner region of the accretion disc is of the order of $10^{7}-10^{8}$\,G for stellar mass black holes. Interestingly, these field values agree closely with observations, based on a model relating the observed kinetic power of relativistic jets to the magnetic field of the accretion disc \citep{Garofalo2010,Piotrovich2014}.
The sound speed in this magnetized advective flow is much higher than that in a Shakura-Sunyaev disc, and the ion temperature of the accreting gas is nearly virial, of the order of $10^{12}$\,K. This is expected since there is no efficient cooling.
In the present context, for simplicity, we have assumed the flow to be vertically averaged, without allowing any vertical component of the flow, while considering the maximum upper limit of the magnetic field strength required to achieve the MAD regime. Our next step will be to investigate the coupled disc-outflow system more self-consistently by including the vertical components of the flow variables in this strong large scale magnetic field regime.
\section*{ACKNOWLEDGEMENTS}
The work was partly supported by the project funded by ISRO with research Grant No.
ISTC/PPH/BMP/0362.
% arXiv:1009.0071
\section{Introduction}
Multiwavelength constraints on the thermal emission of hot Jupiters are crucial to precisely defining
the spectral energy distributions of these planets and understanding their energy budgets.
Interestingly, most hot Jupiter thermal emission detections to date have not been at the blackbody peaks of these planets, but at longer wavelengths with the Spitzer Space Telescope
($\lambda > 3~\mu m$; e.g. \citealt{Charbonneau05,Deming05}).
Probing shorter near-infrared wavelengths at
the blackbody peaks of these planets has only recently been proven feasible, first through space-based
observations with the Hubble Space Telescope (HST; \citealt{Swain09}),
and then from the ground (e.g. \citealt{deMooij09,Sing09,Gillon09}).
Our program to detect near-infrared thermal emission from the hottest of the
hot Jupiters has also been successful using the
Wide-field Infrared Camera (WIRCam) on the Canada-France-Hawaii Telescope (CFHT) to detect the Ks-band thermal
emission of: TrES-2b \citep{CrollTrESTwo}, TrES-3b including an H-band upper-limit \citep{CrollTrESThree},
and two eclipses of WASP-3b, including a limit on its temporal variability \citep{CrollWASPThree}.
In the near-infrared, multiple-band detections have been performed on only a handful of occasions;
such multiple-band detections were performed in narrow wavelength regimes from space via spectroscopy with
HST for HD 209458 and HD 189733 \citep{Swain09,Swain09b}, and arguably
recently from the ground for HD 189733 using the Infrared Telescope Facility \citep{Swain10}, as well as from the ground
using the Very Large Telescope in the H \& K-bands for the highly irradiated
hot Jupiter WASP-19b \citep{Anderson10,Gibson10}.
Multiple band detections in
the near-infrared are therefore rare compared to the frequent
multiple-band detections at longer wavelengths using the IRAC \citep{Fazio04}, IRS \citep{Houck04}, or MIPS \citep{Rieke04} instruments
on the Spitzer Space Telescope.
Multiwavelength thermal emission measurements with Spitzer have revealed a wealth of information, including that the most highly irradiated exoplanets
seem to harbour hot stratospheres and temperature inversions \citep{Knutson08,Charbonneau08,Machalek08,Knutson08b}.
One could imagine that obtaining multiwavelength constraints on a planet's thermal emission in the near-infrared could be equally informative.
Furthermore, the near-infrared is also an ideal place to directly constrain these planets' pressure-temperature profiles at depth, their dayside bolometric luminosities,
and the fraction of the incident stellar radiation that is transported from the tidally locked
daysides to the nightsides deep in these planets' atmospheres \citep{Barman08}.
Here we continue our program using WIRCam on CFHT to detect thermal emission from
some of the hottest of the hot Jupiters. Our target was
the highly irradiated hot Jupiter WASP-12b.
The discovery of the inflated, transiting exoplanet WASP-12b
was of immediate interest to those attempting to measure the loss in flux
during the secondary eclipses of hot Jupiters in the near-infrared, because WASP-12b circles
a late F-type star with a period of only $\sim$26 hours \citep{Hebb09}.
It is thus exposed to extremely
high stellar insolation, with an incident flux of $\sim$9$\times$10$^{9}$~erg\,s$^{-1}$\,cm$^{-2}$.
The planet is also one of the most inflated hot Jupiters, with a radius of $R_{P}$$\sim$1.8$R_{J}$ and
a favourable planet-to-star radius ratio ($R_{P}/R_{*}$$\sim$0.12; \citealt{Hebb09}). It should
be heated to an equilibrium temperature of over $\sim$2500~K
assuming isotropic reradiation and a zero Bond albedo\footnote{The Bond albedo is the fraction of the bolometric flux reflected from the planet compared to the incident bolometric radiation.}.
For these reasons it was predicted
to display near-infrared thermal emission on the order of 0.1-0.3\% of the stellar flux in the J, H \& Ks near-infrared
bands, assuming isotropic reradiation and a zero Bond albedo.
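These numbers can be reproduced with a simple blackbody estimate. The sketch below uses illustrative system parameters (roughly those of \citealt{Hebb09}; they are assumptions, not the exact values adopted here), and a Planck-ratio eclipse depth that ignores the planet's true emergent spectrum:

```python
import math

# Illustrative (assumed) WASP-12 parameters, approximately those of
# Hebb et al. (2009); not the exact values adopted in this paper.
T_star = 6300.0       # stellar effective temperature [K]
a_over_rstar = 3.1    # scaled semi-major axis a/R_*
rp_over_rstar = 0.12  # planet-to-star radius ratio

# Equilibrium temperature for isotropic reradiation and zero Bond albedo:
# T_eq = T_* * sqrt(R_* / (2a))
T_eq = T_star * math.sqrt(1.0 / (2.0 * a_over_rstar))

def planck(lam, T):
    """Planck function B_lambda (arbitrary normalization)."""
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    return 1.0 / (lam ** 5 * (math.exp(h * c / (lam * k * T)) - 1.0))

# Blackbody estimate of the Ks-band (2.15 micron) secondary-eclipse depth.
lam = 2.15e-6
depth = rp_over_rstar ** 2 * planck(lam, T_eq) / planck(lam, T_star)

print(T_eq)   # roughly 2500 K
print(depth)  # roughly 0.002, i.e. ~0.2% of the stellar flux
```

With these assumptions the estimate lands in the quoted 0.1-0.3\% range, consistent with the predicted near-infrared eclipse depths.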
\citet{LopezMorales10} have already reported a detection of the secondary eclipse of WASP-12b in z'-band
(0.9 $\mu m$), and more recently \citet{Campo10} have presented detections of two eclipses in the four
IRAC channels for WASP-12b. \citet{Campo10}, however, did not report the eclipse depths for WASP-12b, and
for reasons discussed below the \citet{LopezMorales10} detection has recently been called into question.
Thus the atmospheric characteristics of WASP-12b remain largely unconstrained.
In addition to receiving extremely high stellar insolation, WASP-12b is intriguing because the combination of
its close proximity
to its star and its putative original eccentricity ($e$=0.049$\pm$0.015; \citealt{Hebb09}) suggests that it could be
precessing at a rate that is detectable with current instruments.
Such a putative precession signal was recently claimed by \citet{Campo10}.
Although the IRAC eclipses reported by \citet{Campo10} suggest an $e$cos$\omega$ constraint similar to that expected for a circular orbit
($e$cos$\omega$$=$-0.0054$\pm$0.0030), \citet{LopezMorales10} had earlier reported an
eclipse detection that was
considerably offset from a circular orbit ($|$$e$cos$\omega$$|$$=$0.016$^{+0.011}_{-0.009}$).
While at first glance the two measurements
are inconsistent, this is not the case if the planet precesses. By combining their secondary eclipses with those of
\citet{LopezMorales10},
together with the original radial velocity data for the system \citep{Hebb09}, as well as a series of transit-time measurements
from the original detection paper and ground-based amateurs (from the Exoplanet Transit
Database; \citealt{Poddany10}), \citet{Campo10} show that a precessing orbital
model best-fits the data with 2$\sigma$ confidence.
The authors caution that this detection is heavily dependent on the secondary eclipse offset
reported by \citet{LopezMorales10}.
Even more recently,
radial velocity observations of WASP-12b have suggested that the eccentricity of WASP-12b is small
($e$=0.017$^{+0.015}_{-0.010}$; \citealt{Husnoo10}) and likely zero, constraining the
\citet{Campo10} precession signal and calling into question
the \citet{LopezMorales10} eclipse detection.
Nevertheless, the best-fit eccentricity of WASP-12b remains non-zero, and thus this planet
could be precessing at a much slower rate than
\citet{Campo10} claim. The definitive nail in the coffin for the claim that WASP-12b is precessing
at a detectable rate will thus only come from further detections
of this planet's secondary eclipse well separated in time from the original eclipse detections.
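The link between a measured secondary-eclipse time and the $e\cos\omega$ values quoted above is, to first order in eccentricity, $e\cos\omega \approx (\pi/2)\,\Delta t/P$, where $\Delta t$ is the offset of the eclipse from orbital phase 0.5. A sketch with illustrative (not measured) timing numbers:

```python
import math

def e_cos_omega(t_eclipse, t_transit, period):
    """First-order constraint from secondary-eclipse timing:
    e*cos(omega) ~ (pi/2) * (t_ecl - t_tr - P/2) / P."""
    dt = t_eclipse - t_transit - period / 2.0
    return (math.pi / 2.0) * dt / period

# Illustrative: an eclipse arriving 10 minutes late on a ~1.09-day
# period (roughly WASP-12b's) implies e*cos(omega) of about 0.01.
P = 1.09  # days (approximate, for illustration only)
print(e_cos_omega(0.5 * P + 10.0 / 1440.0, 0.0, P))
```

This is why even a few-minute shift in a measured eclipse time translates into an $e\cos\omega$ offset comparable to the values under debate.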
Also, preliminary evidence was recently presented that material may be tidally
stripped from WASP-12b, possibly forming a circumstellar disk
in this system. \citet{Li10} predicted that this system may have such a disk from material overfilling the Roche lobe
of WASP-12b, because
WASP-12b's observed radius in the optical ($R_p$$\sim$1.79 $R_{J}$; \citealt{Hebb09})
is already close to its 2.36 $R_{J}$ Roche lobe radius (as quoted in \citealt{Fossati10}).
That WASP-12b may exhibit material overfilling its Roche lobe, and a circumstellar disk formed from this material,
has recently received possible confirmation from HST Cosmic Origins Spectrograph (COS)
observations of this system.
From these observations \citet{Fossati10} find increased transit depths in the ultraviolet
when compared to the optical,
indicative of material surrounding WASP-12b overfilling its Roche lobe and blocking out a larger fraction
of the stellar flux at these wavelengths. In addition, they observe an early ingress
of the transit of WASP-12b in their near-ultraviolet data;
\citet{Fossati10} interpret this early ingress as a putative sign of
previously stripped material from WASP-12b forming a circumstellar disk.
These putative signs of a disk are interesting to observers in the near-infrared, specifically the K-band,
as \citet{Li10} predicted that such a disk in this system may exhibit CO emission
as bright as 10 mJy at 2.292 $\mu m$. WASP-12 does not, however, display a significant near-infrared excess \citep{Fossati10b}.
Here we present detections of WASP-12b's thermal emission in the Ks (\XSigmaWASPTwelveJointAll$\sigma$), H (\XSigmaSixWASPTwelveJointAll$\sigma$)
and J-bands (\XSigmaNineWASPTwelveJointAll$\sigma$). Our J-band detection is the first thermal emission
measurement in this band.
Our photometry favours a circular orbit for WASP-12b
($e$$\cos$$\omega$$=$\ECosOmegaWASPTwelveJointAll$^{+\ECosOmegaPlusWASPTwelveJointAll}_{-\ECosOmegaMinusWASPTwelveJointAll}$).
By combining our secondary eclipse times with those of \citet{LopezMorales10} and \citet{Campo10}, as well as the radial velocity data of \citet{Hebb09}
and \citet{Husnoo10}, and all the transit-time data for the system, we are able to show that
not only is there no evidence to date that WASP-12b is precessing at a detectable rate,
but also that the orbit of WASP-12b is likely circular.
Our analysis also allows us to constrain the characteristics of the atmosphere of WASP-12b;
our Ks-band eclipse depth argues in favour of inefficient redistribution of heat from the day to nightside,
while our J and H-band observations seem to be probing deeper, higher pressure atmospheric
layers that are slightly more homogenized.
We also show that our Ks-band photometry may feature a longer than expected eclipse duration that could
arguably be interpreted as evidence for material streaming from the planet or a circumstellar disk in this system.
\section{Observations and Data Reduction}
\label{SecReduction}
\begin{figure}
\centering
\includegraphics[scale=0.40,angle=90]{fig_Combined_Reference_Stars_Loic_Ks.ps}
\includegraphics[scale=0.40,angle=90]{fig_Combined_Reference_Stars_Loic_H.ps}
\includegraphics[scale=0.40,angle=90]{fig_Combined_Reference_Stars_Loic_J.ps}
\caption{ The normalized flux from our target star and reference stars for our Ks-band photometry (top two panels),
our H-band photometry (middle two panels), and our J-band photometry (bottom panels).
For each set of panels the top panel displays the flux
from the target star (black) and the reference stars (various colours)
that are used to calibrate the flux of WASP-12b in the various sets of photometry.
The bottom panel in each set of panels displays the
residuals of the normalized flux of the target star after correction by the normalized flux of the reference stars.
}
\label{FigWASP12RefStars}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.45, angle = 270]{bin_by_n_plot_WASPTwelveKs.eps}
\includegraphics[scale=0.45, angle = 270]{bin_by_n_plot_WASPTwelveH.eps}
\includegraphics[scale=0.45, angle = 270]{bin_by_n_plot_WASPTwelveJ.eps}
\caption{ The root-mean-square of our out-of-eclipse photometry (solid line)
following the corrections documented in $\S$\ref{SecReduction}
for our Ks-band photometry (top), our H-band photometry (middle), and our J-band photometry (bottom).
The dashed line in each panel displays the
expectation for Gaussian noise, which scales as one over the square root of the bin size.
}
\label{FigPoisson}
\end{figure}
We obtained observations with WIRCam on CFHT of WASP-12 ($J$$\sim$10.48, $H$$\sim$10.23, $K$$\sim$10.19)
on 2009 December 26, 27 and 28
in the J, H \& Ks-bands respectively.
Our J-band observations on Dec. 26 lasted for 3.9 hours and started in mid-eclipse and persisted
for 2.2 hours after the end of eclipse.
Our observations on Dec. 27 and 28 lasted
for 6.0 hours in H-band and 6.2 hours in Ks-band, respectively, evenly bracketing the
predicted secondary eclipse of WASP-12b.
Numerous reference stars were also observed in the 21$\times$21 arcmin
field-of-view of WIRCam. The telescope was defocused for our various observations to approximately
1.5mm (J-band), 1.8mm (H-band), and 2.0mm (Ks-band), resulting
in the flux of our target star being spread over a ring $\sim$19, $\sim$23, and $\sim$26 pixels in diameter
(6, 7 and 8\arcsec) on our array.
For each observation, as the telescope temperature changed over the course of the night, we
used the focus stage model and kept the defocus amount constant, thus achieving a
stable PSF over the entire observation set.
We used ``Staring Mode'' for our J and Ks-band observations, in which we did not dither for the duration of the observations; for the
H-band eclipse the queue observations mistakenly used micro-dithering, which introduced small 0.5 pixel shifts between
consecutive exposures.
The exposure times for our J, H \& Ks-band observations were 5-seconds.
The effective duty cycle after accounting for readout and for saving exposures was 34\%.
For all observations, the data were reduced and aperture photometry was performed on our target
star and our reference stars as discussed in \citet{CrollTrESThree}
(with the details provided in \citealt{CrollTrESTwo}).
We used an aperture with a radius of 17 pixels for our Ks-band photometry, and 16.5 pixels for our H and J-band photometry.
We used an annulus to define the sky with an inner
radius of 22, and an outer radius of 34 pixels for all our photometry.
We ensured that these choices of aperture were optimal
by testing smaller and larger aperture sizes in increments of 0.5 pixels and ensuring
these choices displayed the smallest root-mean-square (RMS) outside of occultation and the least
time-correlated red-noise.
Following our aperture photometry we correct the flux of our target star with a number of nearby reference stars
as discussed in \citet{CrollTrESTwo}.
We used \HowManyJ, \HowManyH, and \HowManyKs \ reference stars to correct our J, H and Ks-band eclipse photometry, respectively.
The normalized flux of WASP-12 and the various reference stars that are used
to correct the flux of our target star
are displayed in Figure \ref{FigWASP12RefStars}.
For our Ks-band photometry we corrected our photometry for a small trend
with the x and y pixel position of the target star on the chip\footnote{The correction is described in \citet{CrollTrESTwo}.}. We did not notice such trends in our H and J-band photometry.
For our H-band photometry the airmass, $X$,
was high at the start of the observations (X$\sim$1.9), and fell to $X$$\sim$1.2 by mid-eclipse.
We noticed a downward trend in our H-band photometry following the correction with
nearby reference stars that appeared to be correlated with airmass.
We found this effect
was reduced, but not removed, for our H-band photometry
by correcting the flux of WASP-12 with reference stars solely on the same WIRCam chip as WASP-12; this downward trend
in flux of our target star compared to the reference stars is still
apparent at the start of our H-band photometry.
To reduce the impact of these systematics, we scale up the errors of the first $\sim$25 minutes of data in our H-band photometry
by a factor of 1.3.
The root-mean-square (RMS) of our photometry per minute following the above corrections improved from
\XPointXXKs \ to \YPointYYKs \ in Ks-band, \XPointXXH \ to \YPointYYH \ in H-band and \XPointXXJ \ to \YPointYYJ \ in J-band.
To evaluate the impact of systematics and the presence of red noise in our photometry, we bin our data and compare
the out-of-eclipse photometric precision to the Gaussian noise expectation of one over the square root of the bin size (Figure \ref{FigPoisson}).
The Ks-band data bin down very close to the Gaussian noise limit, while
the J-band data bin down marginally above this limit;
the H-band data are worse, possibly due to the
systematics introduced by the micro-dithering.
To ensure we do not underestimate the uncertainties in our model parameters, we
employ the method of \citet{Winn08} to account for time-correlated red noise in our photometry.
We scale up the uncertainties
on our individual data points by a factor $\beta$; $\beta$ is equal to the factor by which the
binned out-of-eclipse RMS scales above the Gaussian noise expectation in the absence of red noise.
We use a binning time of $\sim$12 minutes.
For our H-band photometry we exclude the first 15 minutes of data
from this calculation, due to the obvious systematic that we believe to be correlated with airmass that
does not appear to affect the rest of the photometry.
For our photometry
$\beta$ is equal to \BetaJ, \BetaH \ and \BetaKs \ for our
J, H \& Ks-band data, respectively.
We note that our observations are still well above the predicted photon noise RMS
limit of 2.7$\times$10$^{-4}$, 2.2$\times$10$^{-4}$
and 3.1$\times$10$^{-4}$ per minute in the J, H \& Ks-bands, respectively.
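The red-noise scaling described above can be sketched generically: bin the out-of-eclipse residuals, compare the binned RMS to the white-noise expectation $\sigma_{1}/\sqrt{N}$, and inflate the per-point errors by the resulting factor $\beta$. This is an illustration of the \citet{Winn08}-style procedure, not the exact implementation used for these data:

```python
import math
import random

def rms(x):
    """Root-mean-square of a list of residuals."""
    return math.sqrt(sum(v * v for v in x) / len(x))

def beta_factor(residuals, bin_size):
    """Ratio of the binned RMS to the Gaussian expectation
    rms(residuals) / sqrt(bin_size); beta > 1 signals red noise."""
    n_bins = len(residuals) // bin_size
    binned = [sum(residuals[i * bin_size:(i + 1) * bin_size]) / bin_size
              for i in range(n_bins)]
    expected = rms(residuals) / math.sqrt(bin_size)
    return rms(binned) / expected

# Pure white noise should give beta near 1 (no error inflation needed).
random.seed(1)
white = [random.gauss(0.0, 1.0) for _ in range(3600)]
print(beta_factor(white, 12))  # close to 1
```

Strongly correlated residuals, by contrast, drive $\beta$ well above unity, which is why the per-point uncertainties are multiplied by $\beta$ before fitting.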
\section{Analysis}
\begin{deluxetable*}{ccccccc}
\setlength{\tabcolsep}{0.01in}
\tablecaption{Best-fit secondary eclipse parameters}
\tabletypesize{\scriptsize}
\tablehead{
\colhead{Parameter} & \colhead{Ks-band} & \colhead{H-band} & \colhead{J-band} & \colhead{Joint} & \colhead{Ks-band MCMC} & \colhead{Joint MCMC} \\
\colhead{} & \colhead{MCMC} & \colhead{MCMC} & \colhead{ MCMC} & \colhead{MCMC} & \colhead{variable eclipse} & \colhead{variable eclipse} \\
\colhead{} & \colhead{Solution} & \colhead{Solution} & \colhead{Solution} & \colhead{Solution} & \colhead{duration solution} & \colhead{duration solution} \\
}
\startdata
reduced $\chi^{2}$ & \ChiWASPTwelveKs$^{+\ChiPlusWASPTwelveKs}_{-\ChiMinusWASPTwelveKs}$ & \ChiWASPTwelveH$^{+\ChiPlusWASPTwelveH}_{-\ChiMinusWASPTwelveH}$ & \ChiWASPTwelveJ$^{+\ChiPlusWASPTwelveJ}_{-\ChiMinusWASPTwelveJ}$ & \ChiWASPTwelveJointAll$^{+\ChiPlusWASPTwelveJointAll}_{-\ChiMinusWASPTwelveJointAll}$ & \ChiWASPTwelveVariableKs$^{+\ChiPlusWASPTwelveVariableKs}_{-\ChiMinusWASPTwelveVariableKs}$ & \ChiWASPTwelveJointVariableAll$^{+\ChiPlusWASPTwelveJointVariableAll}_{-\ChiMinusWASPTwelveJointVariableAll}$ \\
$\Delta F_{Ks}$ & \FpOverFStarPercentAbstractWASPTwelveKs$^{+\FpOverFStarPercentAbstractPlusWASPTwelveKs}_{-\FpOverFStarPercentAbstractMinusWASPTwelveKs}$\% & n/a & n/a & \FpOverFStarPercentAbstractWASPTwelveJointAll$^{+\FpOverFStarPercentAbstractPlusWASPTwelveJointAll}_{-\FpOverFStarPercentAbstractMinusWASPTwelveJointAll}\% $ & \FpOverFStarPercentAbstractWASPTwelveVariableKs$^{+\FpOverFStarPercentAbstractPlusWASPTwelveVariableKs}_{-\FpOverFStarPercentAbstractMinusWASPTwelveVariableKs}$\% & \FpOverFStarPercentAbstractWASPTwelveJointVariableAll$^{+\FpOverFStarPercentAbstractPlusWASPTwelveJointVariableAll}_{-\FpOverFStarPercentAbstractMinusWASPTwelveJointVariableAll}$\% \\
$\Delta F_H$ & n/a & \FpOverFStarPercentAbstractWASPTwelveH$^{+\FpOverFStarPercentAbstractPlusWASPTwelveH}_{-\FpOverFStarPercentAbstractMinusWASPTwelveH}$\% & n/a & \ParamSixWASPTwelveJointAll$^{+\ParamSixPlusWASPTwelveJointAll}_{-\ParamSixMinusWASPTwelveJointAll}$\% & n/a & \ParamSixWASPTwelveJointVariableAll$^{+\ParamSixPlusWASPTwelveJointVariableAll}_{-\ParamSixMinusWASPTwelveJointVariableAll}$\% \\
$\Delta F_J$ & n/a & n/a & \FpOverFStarPercentAbstractWASPTwelveJ$^{+\FpOverFStarPercentAbstractPlusWASPTwelveJ}_{-\FpOverFStarPercentAbstractMinusWASPTwelveKs}$\% & \ParamNineWASPTwelveJointAll$^{+\ParamNinePlusWASPTwelveJointAll}_{-\ParamNineMinusWASPTwelveJointAll}$\% & n/a & \ParamNineWASPTwelveJointVariableAll$^{+\ParamNinePlusWASPTwelveJointVariableAll}_{-\ParamNineMinusWASPTwelveJointVariableAll}$\% \\
$t_{offset}$ ($min$)\tablenotemark{a} & \TOffsetWASPTwelveKs$^{+\TOffsetPlusWASPTwelveKs}_{-\TOffsetMinusWASPTwelveKs}$ & \TOffsetWASPTwelveH$^{+\TOffsetPlusWASPTwelveH}_{-\TOffsetMinusWASPTwelveH}$ & \TOffsetWASPTwelveJ$^{+\TOffsetPlusWASPTwelveJ}_{-\TOffsetMinusWASPTwelveJ}$ & \TOffsetWASPTwelveJointAll$^{+\TOffsetPlusWASPTwelveJointAll}_{-\TOffsetMinusWASPTwelveJointAll}$ & \TOffsetWASPTwelveVariableKs$^{+\TOffsetPlusWASPTwelveVariableKs}_{-\TOffsetMinusWASPTwelveVariableKs}$ & \TOffsetWASPTwelveJointVariableAll$^{+\TOffsetPlusWASPTwelveJointVariableAll}_{-\TOffsetMinusWASPTwelveJointVariableAll}$ \\
$t_{eclipse Ks}$ (BJD-2450000) & \JDOffsetONEWASPTwelveKs$^{+\JDOffsetPlusONEWASPTwelveKs}_{-\JDOffsetMinusONEWASPTwelveKs}$ & n/a & n/a & \JDOffsetTHREEWASPTwelveJointAll$^{+\JDOffsetPlusTHREEWASPTwelveJointAll}_{-\JDOffsetMinusTHREEWASPTwelveJointAll}$ & \JDOffsetONEWASPTwelveVariableKs$^{+\JDOffsetPlusONEWASPTwelveVariableKs}_{-\JDOffsetMinusONEWASPTwelveVariableKs}$ & \JDOffsetTHREEWASPTwelveJointVariableAll$^{+\JDOffsetPlusTHREEWASPTwelveJointVariableAll}_{-\JDOffsetMinusTHREEWASPTwelveJointVariableAll}$ \\
$t_{eclipse H}$ (BJD-2450000) & n/a & \JDOffsetONEWASPTwelveH$^{+\JDOffsetPlusONEWASPTwelveH}_{-\JDOffsetMinusONEWASPTwelveH}$ & n/a & \JDOffsetTWOWASPTwelveJointAll$^{+\JDOffsetPlusTWOWASPTwelveJointAll}_{-\JDOffsetMinusTWOWASPTwelveJointAll}$ & n/a & \JDOffsetTWOWASPTwelveJointVariableAll$^{+\JDOffsetPlusTWOWASPTwelveJointVariableAll}_{-\JDOffsetMinusTWOWASPTwelveJointVariableAll}$ \\
$t_{eclipse J}$ (BJD-2450000) & n/a & n/a & \JDOffsetONEWASPTwelveJ$^{+\JDOffsetPlusONEWASPTwelveJ}_{-\JDOffsetMinusONEWASPTwelveJ}$ & \JDOffsetONEWASPTwelveJointAll$^{+\JDOffsetPlusONEWASPTwelveJointAll}_{-\JDOffsetMinusONEWASPTwelveJointAll}$ & n/a & \JDOffsetONEWASPTwelveJointVariableAll$^{+\JDOffsetPlusONEWASPTwelveJointVariableAll}_{-\JDOffsetMinusONEWASPTwelveJointVariableAll}$ \\
$c_{1Ks}$ & \cOneWASPTwelveKs$^{+\cOnePlusWASPTwelveKs}_{-\cOneMinusWASPTwelveKs}$ & n/a & n/a & \cOneWASPTwelveJointAll$^{+\cOnePlusWASPTwelveJointAll}_{-\cOneMinusWASPTwelveJointAll}$ & \cOneWASPTwelveVariableKs$^{+\cOnePlusWASPTwelveVariableKs}_{-\cOneMinusWASPTwelveVariableKs}$ & \cOneWASPTwelveJointVariableAll$^{+\cOnePlusWASPTwelveJointVariableAll}_{-\cOneMinusWASPTwelveJointVariableAll}$ \\
$c_{2Ks}$ ($d^{-1}$) & \cTwoWASPTwelveKs$^{+\cTwoPlusWASPTwelveKs}_{-\cTwoMinusWASPTwelveKs}$ & n/a & n/a & \cTwoWASPTwelveJointAll$^{+\cTwoPlusWASPTwelveJointAll}_{-\cTwoMinusWASPTwelveJointAll}$ & \cTwoWASPTwelveVariableKs$^{+\cTwoPlusWASPTwelveVariableKs}_{-\cTwoMinusWASPTwelveVariableKs}$ & \cTwoWASPTwelveJointVariableAll$^{+\cTwoPlusWASPTwelveJointVariableAll}_{-\cTwoMinusWASPTwelveJointVariableAll}$ \\
$c_{1H}$ & n/a & \cOneWASPTwelveH$^{+\cOnePlusWASPTwelveH}_{-\cOneMinusWASPTwelveH}$ & n/a & \ParamSevenWASPTwelveJointAll$^{+\ParamSevenPlusWASPTwelveJointAll}_{-\ParamSevenMinusWASPTwelveJointAll}$ & n/a & \ParamSevenWASPTwelveJointVariableAll$^{+\ParamSevenPlusWASPTwelveJointVariableAll}_{-\ParamSevenMinusWASPTwelveJointVariableAll}$ \\
$c_{2H}$ ($d^{-1}$) & n/a & \cTwoWASPTwelveH$^{+\cTwoPlusWASPTwelveH}_{-\cTwoMinusWASPTwelveH}$ & n/a & \ParamEightWASPTwelveJointAll$^{+\ParamEightPlusWASPTwelveJointAll}_{-\ParamEightMinusWASPTwelveJointAll}$ & n/a & \ParamEightWASPTwelveJointVariableAll$^{+\ParamEightPlusWASPTwelveJointVariableAll}_{-\ParamEightMinusWASPTwelveJointVariableAll}$ \\
$c_{1J}$ & n/a & n/a & \cOneWASPTwelveJ$^{+\cOnePlusWASPTwelveJ}_{-\cOneMinusWASPTwelveJ}$ & \ParamTenWASPTwelveJointAll$^{+\ParamTenPlusWASPTwelveJointAll}_{-\ParamTenMinusWASPTwelveJointAll}$ & n/a & \ParamTenWASPTwelveJointVariableAll$^{+\ParamTenPlusWASPTwelveJointVariableAll}_{-\ParamTenMinusWASPTwelveJointVariableAll}$ \\
$c_{2J}$ ($d^{-1}$) & n/a & n/a & \cTwoWASPTwelveJ$^{+\cTwoPlusWASPTwelveJ}_{-\cTwoMinusWASPTwelveJ}$ & \ParamElevenWASPTwelveJointAll$^{+\ParamElevenPlusWASPTwelveJointAll}_{-\ParamElevenMinusWASPTwelveJointAll}$ & n/a & \ParamElevenWASPTwelveJointVariableAll$^{+\ParamElevenPlusWASPTwelveJointVariableAll}_{-\ParamElevenMinusWASPTwelveJointVariableAll}$ \\
$\phi$ \tablenotemark{a} & \PhaseAbstractWASPTwelveKs$^{+\PhaseAbstractPlusWASPTwelveKs}_{-\PhaseAbstractMinusWASPTwelveKs}$ & \PhaseAbstractWASPTwelveH$^{+\PhaseAbstractPlusWASPTwelveH}_{-\PhaseAbstractMinusWASPTwelveH}$ & \PhaseAbstractWASPTwelveJ$^{+\PhaseAbstractPlusWASPTwelveJ}_{-\PhaseAbstractMinusWASPTwelveJ}$ & \PhaseAbstractWASPTwelveJointAll$^{+\PhaseAbstractPlusWASPTwelveJointAll}_{-\PhaseAbstractMinusWASPTwelveJointAll}$ & \PhaseAbstractWASPTwelveVariableKs$^{+\PhaseAbstractPlusWASPTwelveVariableKs}_{-\PhaseAbstractMinusWASPTwelveVariableKs}$ & \PhaseAbstractWASPTwelveJointVariableAll$^{+\PhaseAbstractPlusWASPTwelveJointVariableAll}_{-\PhaseAbstractMinusWASPTwelveJointVariableAll}$ \\
$\Phi_{II/I}$ & n/a & n/a & n/a & n/a & \WidthFactorWASPTwelveVariableKs$^{+\WidthFactorPlusWASPTwelveVariableKs}_{-\WidthFactorMinusWASPTwelveVariableKs}$ & \WidthFactorWASPTwelveJointVariableAll$^{+\WidthFactorPlusWASPTwelveJointVariableAll}_{-\WidthFactorMinusWASPTwelveJointVariableAll}$\\
$\Phi_{II}$ (hours) & \EclipseDurationWASPTwelveKs & \EclipseDurationWASPTwelveH & \EclipseDurationWASPTwelveJ & \EclipseDurationWASPTwelveJointAll & \EclipseDurationWASPTwelveVariableKs$^{+\EclipseDurationPlusWASPTwelveVariableKs}_{-\EclipseDurationMinusWASPTwelveVariableKs}$ & \EclipseDurationWASPTwelveJointVariableAll$^{+\EclipseDurationPlusWASPTwelveJointVariableAll}_{-\EclipseDurationMinusWASPTwelveJointVariableAll}$ \\
$T_{B Ks}$ ($K$) & \TBrightWASPTwelveKs$^{+\TBrightPlusWASPTwelveKs}_{-\TBrightMinusWASPTwelveKs}$ & n/a & n/a & \TBrightWASPTwelveJointAll$^{+\TBrightPlusWASPTwelveJointAll}_{-\TBrightMinusWASPTwelveJointAll}$ & \TBrightWASPTwelveVariableKs$^{+\TBrightPlusWASPTwelveVariableKs}_{-\TBrightMinusWASPTwelveVariableKs}$ & \TBrightWASPTwelveJointVariableAll$^{+\TBrightPlusWASPTwelveJointVariableAll}_{-\TBrightMinusWASPTwelveJointVariableAll}$ \\
$T_{B H}$ ($K$) & n/a & \TBrightWASPTwelveH$^{+\TBrightPlusWASPTwelveH}_{-\TBrightMinusWASPTwelveH}$ & n/a & \TBrightSixWASPTwelveJointAll$^{+\TBrightSixPlusWASPTwelveJointAll}_{-\TBrightSixMinusWASPTwelveJointAll}$ & n/a & \TBrightSixWASPTwelveJointVariableAll$^{+\TBrightSixPlusWASPTwelveJointVariableAll}_{-\TBrightSixMinusWASPTwelveJointVariableAll}$ \\
$T_{B J}$ ($K$) & n/a & n/a & \TBrightWASPTwelveJ$^{+\TBrightPlusWASPTwelveJ}_{-\TBrightMinusWASPTwelveJ}$ & \TBrightNineWASPTwelveJointAll$^{+\TBrightNinePlusWASPTwelveJointAll}_{-\TBrightNineMinusWASPTwelveJointAll}$ & n/a & \TBrightNineWASPTwelveJointVariableAll$^{+\TBrightNinePlusWASPTwelveJointVariableAll}_{-\TBrightNineMinusWASPTwelveJointVariableAll}$ \\
$e \cos(\omega)$ \tablenotemark{a} & \ECosOmegaWASPTwelveKs$^{+\ECosOmegaPlusWASPTwelveKs}_{-\ECosOmegaMinusWASPTwelveKs}$ & \ECosOmegaWASPTwelveH$^{+\ECosOmegaPlusWASPTwelveH}_{-\ECosOmegaMinusWASPTwelveH}$ & \ECosOmegaWASPTwelveJ$^{+\ECosOmegaPlusWASPTwelveJ}_{-\ECosOmegaMinusWASPTwelveJ}$ & \ECosOmegaWASPTwelveJointAll$^{+\ECosOmegaPlusWASPTwelveJointAll}_{-\ECosOmegaMinusWASPTwelveJointAll}$ & \ECosOmegaWASPTwelveVariableKs$^{+\ECosOmegaPlusWASPTwelveVariableKs}_{-\ECosOmegaMinusWASPTwelveVariableKs}$ & \ECosOmegaWASPTwelveJointVariableAll$^{+\ECosOmegaPlusWASPTwelveJointVariableAll}_{-\ECosOmegaMinusWASPTwelveJointVariableAll}$\\
$e \sin(\omega)$ & n/a & n/a & n/a & n/a & \ESinOmegaWASPTwelveVariableKs$^{+\ESinOmegaPlusWASPTwelveVariableKs}_{-\ESinOmegaMinusWASPTwelveVariableKs}$ & \ESinOmegaWASPTwelveJointVariableAll$^{+\ESinOmegaPlusWASPTwelveJointVariableAll}_{-\ESinOmegaMinusWASPTwelveJointVariableAll}$\\
$f_{Ks}$ & \fReradiationWASPTwelveKs$^{+\fReradiationPlusWASPTwelveKs}_{-\fReradiationMinusWASPTwelveKs}$ & n/a & n/a & \fReradiationWASPTwelveJointAll$^{+\fReradiationPlusWASPTwelveJointAll}_{-\fReradiationMinusWASPTwelveJointAll}$ & \fReradiationWASPTwelveVariableKs$^{+\fReradiationPlusWASPTwelveVariableKs}_{-\fReradiationMinusWASPTwelveVariableKs}$ & \fReradiationWASPTwelveJointVariableAll$^{+\fReradiationPlusWASPTwelveJointVariableAll}_{-\fReradiationMinusWASPTwelveJointVariableAll}$\\
$f_{H}$ & n/a & \fReradiationWASPTwelveH$^{+\fReradiationPlusWASPTwelveH}_{-\fReradiationMinusWASPTwelveH}$ & n/a & \fReradiationSixWASPTwelveJointAll$^{+\fReradiationSixPlusWASPTwelveJointAll}_{-\fReradiationSixMinusWASPTwelveJointAll}$ & n/a & \fReradiationSixWASPTwelveJointVariableAll$^{+\fReradiationSixPlusWASPTwelveJointVariableAll}_{-\fReradiationSixMinusWASPTwelveJointVariableAll}$\\
$f_{J}$ & n/a & n/a & \fReradiationWASPTwelveJ$^{+\fReradiationPlusWASPTwelveJ}_{-\fReradiationMinusWASPTwelveJ}$ & \fReradiationNineWASPTwelveJointAll$^{+\fReradiationNinePlusWASPTwelveJointAll}_{-\fReradiationNineMinusWASPTwelveJointAll}$ & n/a & \fReradiationNineWASPTwelveJointVariableAll$^{+\fReradiationNinePlusWASPTwelveJointVariableAll}_{-\fReradiationNineMinusWASPTwelveJointVariableAll}$\\
\enddata
\tablenotetext{a}{We account for the increased light travel-time in the system \citep{Loeb05}, and use the best-fit period for the non-precessing case reported by \citet{Campo10}.}
\label{TableParams}
\end{deluxetable*}
\begin{figure}
\centering
\includegraphics[scale=0.35,angle=270]{CFHT_analysis_four_panel_WASPTwelveKs.eps}
\caption{
CFHT/WIRCam photometry of the secondary eclipse of WASP-12b observed in
the Ks-band on 28 December 2009.
The top panel shows the unbinned lightcurve with the best-fit secondary eclipse and background from our MCMC analysis of the Ks-band data with the
fixed eclipse duration (red line).
The second panel shows the lightcurve
with the data binned every $\sim$7.0 minutes and again our best-fit eclipse and background.
The third panel shows the binned data after the subtraction of
the best-fit background, $B_f$, along with the best-fit eclipse model.
The bottom panel shows the binned residuals from the best-fit model.
}
\label{FigWASP12Ks}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.35,angle=270]{CFHT_analysis_four_panel_WASPTwelveH.eps}
\caption{
The same as figure \ref{FigWASP12Ks} except that the data is our H-band photometry obtained on 27 December 2009.
}
\label{FigWASP12H}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.35,angle=270]{CFHT_analysis_four_panel_WASPTwelveJ.eps}
\caption{
The same as figure \ref{FigWASP12Ks} except that the data is our J-band photometry obtained on 26 December 2009. Note that the photometry covers a partial eclipse only,
and starts in eclipse and extends well out of eclipse.
}
\label{FigWASP12J}
\end{figure}
\begin{figure*}
\centering
\includegraphics[scale=0.45,angle=270]{contour_new_WASPTwelveKsb_13.eps}
\includegraphics[scale=0.45,angle=270]{contour_new_WASPTwelveHb_13.eps}
\includegraphics[scale=0.45,angle=270]{contour_new_WASPTwelveJb_13.eps}
\caption{ The 68.3\% (1$\sigma$; solid-line), 95.5\% (2$\sigma$; dashed-line), and 99.7\% (3$\sigma$; dotted-line)
credible regions
from our individual MCMC analyses
with a fixed eclipse duration of our Ks-band photometry (left), H-band photometry (middle), and J-band photometry (right).
The ``x'' in the middle of the plots marks the best-fit point from our MCMC analyses.}
\label{FigContour}
\end{figure*}
For many of our other WIRCam data-sets we have observed
residual background trends in the reduced
data that seem to affect our target stars differently than our reference stars \citep{CrollTrESTwo,CrollTrESThree,CrollWASPThree}.
For our Ks, H \& J-band photometry these backgrounds, $B_f$,
displayed a near-linear slope.
We fit our Ks, H \& J-band data-sets with linear backgrounds of the form:
\begin{equation}
B_f = 1 + c_1 + c_2 dt
\end{equation}
where $dt$ is the interval from the beginning of the observations
and $c_1$ and $c_2$ are fit parameters.
We use Markov Chain Monte Carlo (MCMC) fitting to fit for our background as well as a secondary eclipse model calculated from the
\citet{Mandel02}
algorithm without limb darkening.
We fit for the background,
the depth of the secondary eclipse ($\Delta F$) and the offset
that the eclipse occurs later than the
expected eclipse center ($t_{offset}$).
Our Markov Chain Monte Carlo method is discussed in \citet{CrollMCMC} and \citet{CrollTrESTwo}.
We obtain our stellar and planetary parameters for WASP-12 from \citet{Hebb09}, while the planetary period
and ephemeris are obtained from \citet{Campo10} from their non-precessing best-fit.
The best-fit secondary eclipses from our individual MCMC analyses with a fixed eclipse duration
are presented in Figures \ref{FigWASP12Ks}, \ref{FigWASP12H} and \ref{FigWASP12J}
and the best-fit eclipse parameters are presented in Table \ref{TableParams} along with associated parameters,
such as the best-fit phase, $\phi$, and the barycentric julian date of the eclipse center in the
terrestrial time format\footnote{As calculated using the routines of \citet{Eastman10}.}, $t_{eclipse}$.
The phase dependence of these fits is presented in Figure \ref{FigContour}.
We also perform a joint analysis of the three secondary eclipses with a common offset from the eclipse
center ($t_{offset}$); the fit parameters are thus $\Delta F_{Ks}$, $\Delta F_{H}$, $\Delta F_{J}$, $t_{offset}$, and $c_1$ and $c_2$
in each band. The resulting best-fit parameters of this joint fit are listed in Table \ref{TableParams}.
We also repeat our fit for our Ks-band photometry, our highest signal-to-noise photometry, and for the joint analysis
while fitting for an additional parameter - the duration of the secondary eclipse, $\Phi_{II}$.
We parameterize this by the duration of the eclipse divided by the duration of the transit, $\Phi_{II/I}$,
using the duration of the transit ($\Phi_I$$\sim$2.93 $h$) reported by \citet{Hebb09}. The results from this fit are presented in
Table \ref{TableParams}. We do not fit our J-band or H-band data individually with this additional parameter, $\Phi_{II}$,
as the J-band data is a partial eclipse and thus the duration of the secondary eclipse is degenerate with an offset of the eclipse center,
and the H-band data suffers from additional time-correlated systematics that could lead to erroneous conclusions.
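The fitting approach described above (a linear background multiplied by an eclipse model, explored with MCMC) can be sketched as follows. This is an illustrative toy only: it uses a flat-bottomed eclipse with instantaneous ingress/egress rather than the \citet{Mandel02} model, a bare-bones Metropolis-Hastings sampler rather than the method of \citet{CrollMCMC}, and made-up parameter values.

```python
import math
import random

def model(t, depth, c1, c2, t1, t2):
    # Flat-bottomed eclipse (no limb darkening, instantaneous
    # ingress/egress between contact times t1 and t2), multiplied by
    # the linear background B_f = 1 + c1 + c2*dt used in the text.
    flux = 1.0 - depth if t1 <= t <= t2 else 1.0
    return flux * (1.0 + c1 + c2 * t)

def chi2(params, data, sigma, t1, t2):
    depth, c1, c2 = params
    return sum((f - model(t, depth, c1, c2, t1, t2)) ** 2 / sigma ** 2
               for t, f in data)

def mcmc(data, sigma, t1, t2, start, steps=2000, scale=1e-4, seed=1):
    # Bare-bones Metropolis-Hastings: Gaussian proposals, accepted with
    # probability min(1, exp(-(chi2_new - chi2_old)/2)).
    rng = random.Random(seed)
    cur, cur_chi2 = list(start), chi2(start, data, sigma, t1, t2)
    chain = []
    for _ in range(steps):
        prop = [p + rng.gauss(0.0, scale) for p in cur]
        prop_chi2 = chi2(prop, data, sigma, t1, t2)
        if prop_chi2 < cur_chi2 or rng.random() < math.exp(-(prop_chi2 - cur_chi2) / 2.0):
            cur, cur_chi2 = prop, prop_chi2
        chain.append(list(cur))
    return chain
```

The posterior chain returned by such a sampler is what yields the asymmetric credible intervals quoted in Table \ref{TableParams}.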
\section{Discussion}
We strongly detect the secondary eclipse in each of the three near-infrared bands in which we observed.
The individual analyses of our three eclipses confirm
that all three secondary eclipses are fit with a consistent phase (Table \ref{TableParams}); thus the
best-fit parameters from our joint analysis are similar to the parameters returned by the analyses
of the individual eclipses. We therefore quote the results of the joint
analysis below. The best-fit eclipse depths from our joint analysis are
\FpOverFStarPercentAbstractWASPTwelveJointAll$^{+\FpOverFStarPercentAbstractPlusWASPTwelveJointAll}_{-\FpOverFStarPercentAbstractMinusWASPTwelveJointAll}$\% in Ks-band,
\ParamSixWASPTwelveJointAll$^{+\ParamSixPlusWASPTwelveJointAll}_{-\ParamSixMinusWASPTwelveJointAll}$\% in H-band and
\ParamNineWASPTwelveJointAll$^{+\ParamNinePlusWASPTwelveJointAll}_{-\ParamNineMinusWASPTwelveJointAll}$\% in J-band.
\subsection{Eccentricity and Precession of WASP-12b}
\label{SecEccentricity}
\begin{deluxetable*}{cccc}
\tabletypesize{\footnotesize}
\tablecaption{WASP-12b orbital parameters}
\tablehead{
\colhead{Parameter} & \colhead{Precessing} & \colhead{Non-Precessing} & \colhead{Non-Precessing Case without }\\
\colhead{} & \colhead{Case} & \colhead{Case} & \colhead{the \citet{LopezMorales10} eclipse }\\
}
\startdata
$P$ (days) & \PrecessionOneLikely$^{+\PrecessionOnePlus}_{-\PrecessionOneMinus}$ & \PrecessionOneNoPrecessLikely$^{+\PrecessionOneNoPrecessPlus}_{-\PrecessionOneNoPrecessMinus}$ & \PrecessionOneNoPrecessNoLopezLikely$^{+\PrecessionOneNoPrecessNoLopezPlus}_{-\PrecessionOneNoPrecessNoLopezMinus}$ \\
$e$ & \PrecessionTwoLikely$^{+\PrecessionTwoPlus}_{-\PrecessionTwoMinus}$ & \PrecessionTwoNoPrecessLikely$^{+\PrecessionTwoNoPrecessPlus}_{-\PrecessionTwoNoPrecessMinus}$ & \PrecessionTwoNoPrecessNoLopezLikely$^{+\PrecessionTwoNoPrecessNoLopezPlus}_{-\PrecessionTwoNoPrecessNoLopezMinus}$ \\
$T_o$ (BJD-2450000) & \PrecessionZeroLikely$^{+\PrecessionZeroPlus}_{-\PrecessionZeroMinus}$ & \PrecessionZeroNoPrecessLikely$^{+\PrecessionZeroNoPrecessPlus}_{-\PrecessionZeroNoPrecessMinus}$ & \PrecessionZeroNoPrecessNoLopezLikely$^{+\PrecessionZeroNoPrecessNoLopezPlus}_{-\PrecessionZeroNoPrecessNoLopezMinus}$ \\
$\omega_o$ ($^{o}$) & \PrecessionThreeLikely$^{+\PrecessionThreePlus}_{-\PrecessionThreeMinus}$ \tablenotemark{a} & \PrecessionThreeNoPrecessLikely$^{+\PrecessionThreeNoPrecessPlus}_{-\PrecessionThreeNoPrecessMinus}$ \tablenotemark{a} & \PrecessionThreeNoPrecessNoLopezLikely$^{+\PrecessionThreeNoPrecessNoLopezPlus}_{-\PrecessionThreeNoPrecessNoLopezMinus}$ \tablenotemark{a} \\
$\dot{\omega}$ ($^{o}$ d$^{-1}$) & \PrecessionFourLikely$^{+\PrecessionFourPlus}_{-\PrecessionFourMinus}$ & 0.0 \tablenotemark{b} & 0.0 \tablenotemark{b} \\
$e$cos$\omega_o$ & \PrecessionECosOmegaLikely$^{+\PrecessionECosOmegaPlus}_{-\PrecessionECosOmegaMinus}$ & \PrecessionECosOmegaNoPrecessLikely$^{+\PrecessionECosOmegaNoPrecessPlus}_{-\PrecessionECosOmegaNoPrecessMinus}$ & \PrecessionECosOmegaNoPrecessNoLopezLikely$^{+\PrecessionECosOmegaNoPrecessNoLopezPlus}_{-\PrecessionECosOmegaNoPrecessNoLopezMinus}$ \\
$e$sin$\omega_o$ & \PrecessionESinOmegaLikely$^{+\PrecessionESinOmegaPlus}_{-\PrecessionESinOmegaMinus}$ & \PrecessionESinOmegaNoPrecessLikely$^{+\PrecessionESinOmegaNoPrecessPlus}_{-\PrecessionESinOmegaNoPrecessMinus}$ & \PrecessionESinOmegaNoPrecessNoLopezLikely$^{+\PrecessionESinOmegaNoPrecessNoLopezPlus}_{-\PrecessionESinOmegaNoPrecessNoLopezMinus}$ \\
$\chi^{2}$ & \PrecessionChiSquaredLikely & \PrecessionChiSquaredNoPrecessLikely & \PrecessionChiSquaredNoPrecessNoLopezLikely \\
BIC & \PrecessionBICLikely & \PrecessionBICNoPrecessLikely & \PrecessionBICNoPrecessNoLopezLikely \\
\enddata
\tablenotetext{a}{These distributions are bimodal with strong peaks at $\omega$$\sim$90$^{o}$ and -90$^{o}$ (where $\cos$$\omega$$\sim$0).}
\tablenotetext{b}{By definition.}
\label{TablePrecession}
\end{deluxetable*}
The best-fit phase of the joint analysis is $\phi$=\PhaseAbstractWASPTwelveJointAll$^{+\PhaseAbstractPlusWASPTwelveJointAll}_{-\PhaseAbstractMinusWASPTwelveJointAll}$.
The resulting limit on the eccentricity, $e$, and argument of periastron, $\omega$, is
$e$cos$\omega$=\ECosOmegaWASPTwelveJointAll$^{+\ECosOmegaPlusWASPTwelveJointAll}_{-\ECosOmegaMinusWASPTwelveJointAll}$, a result
that is consistent with a circular orbit and the \citet{Campo10} results.
This value is inconsistent, however, with the \citet{LopezMorales10} $e$cos$\omega$ result. The
discrepancy between the
\citet{LopezMorales10} result and that of
\citet{Campo10} and our own
could be due to WASP-12b precessing - we explore this possibility below.
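The conversion between the measured eclipse phase and $e\cos\omega$ is, to first order in $e$, $\phi - 0.5 \approx (2/\pi)\,e\cos\omega$. A minimal sketch of this standard relation (ignoring the light travel-time correction applied in our actual fits):

```python
import math

def ecosw_from_phase(phase):
    # First-order relation between the observed secondary-eclipse
    # phase and e*cos(omega): phi - 0.5 ~ (2/pi) * e * cos(omega).
    return (math.pi / 2.0) * (phase - 0.5)

def phase_from_ecosw(ecosw):
    # Inverse: the phase at which the eclipse is expected to occur.
    return 0.5 + (2.0 / math.pi) * ecosw
```

For example, an eclipse arriving at $\phi$$\sim$0.510, as in the \citet{LopezMorales10} detection, corresponds to $e\cos\omega$$\sim$0.016, while $\phi$=0.5 gives exactly zero.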
\begin{figure}
\centering
\includegraphics[scale=0.45,angle=270]{Precession_Plot_Extended__WASP-12.eps}
\includegraphics[scale=0.45,angle=270]{Precession_Plot_Extended_NoPrecess_WASP-12.eps}
\caption{ Transit (black points) and eclipse (red points) times for WASP-12b
compared to the best-fit orbital models for the
precessing case (top),
and the non-precessing case (bottom).
The best-fit models for the transit times (dotted black line)
and the eclipse times (solid red line) are also shown.
Both diagrams show the observed-minus-calculated (O-C) times from
a linear ephemeris calculated using $T_o$ and $P$; the secondary
eclipse O-C times are compared to a calculated eclipse centre of $T_o$ + $\frac{P}{2}$.
The eclipse points in the two panels are from left-to-right, the \citet{Campo10} Spitzer/IRAC eclipses (BJD-2450000$\sim$4750),
the \citet{LopezMorales10} eclipse (BJD-2450000$\sim$5000), while the last
three red-points are the eclipse times reported here (BJD-2450000$\sim$5200).
}
\label{FigPrecession}
\end{figure}
\citet{Campo10} performed an analysis of the reported transit times and secondary eclipse times and presented tentative evidence
that WASP-12b may be precessing at an observable rate, $\dot{\omega}$ = 0.02 $\pm$ 0.01$^{o}$ d$^{-1}$, with a period as short as 40 years.
The primary evidence for the precession was the
ground-based secondary eclipse detection of \citet{LopezMorales10}, which
occurred late by $\sim$15 minutes (at a phase of
$\phi$=0.5100$^{+0.0072}_{-0.0061}$ using the \citealt{Hebb09}
ephemeris and period).
We repeat the \citet{Campo10} precession
analysis adding in our three secondary eclipse detections. We summarize the \citet{Campo10} precession
model that we employ here. The mid-transit time of the $N^{th}$ transit, $T_N$,
in our precessing model
is predicted to occur at:
\begin{equation}
T_N = T_o + P_s N - \frac{e P_a}{\pi} (\cos \omega_N - \cos \omega_o).
\label{EqunPrecession}
\end{equation}
$T_o$ and $\omega_o$ are the transit time and argument of periastron
at the reference epoch, $\omega_N$ is the argument of periastron of the $N^{th}$ transit,
$P_s$ is the sidereal period, $P_a$ is the period between successive periastron passages, and $e$ has
already been defined
as the eccentricity.
$P_a$ is not an independent variable, but is related to the sidereal
period, $P_s$, and the constant precession rate, $\dot{\omega}$:
$P_a = \frac{P_s}{1-P_s\frac{\dot{\omega}}{2\pi}}$.
The argument of periastron of the $N^{th}$ transit is simply $\omega_N$ = $\dot{\omega} (T_N - T_o) + \omega_o$.
Equation \ref{EqunPrecession} is solved iteratively for $T_N$ after it is
expanded to fifth order in $e$
(as shown in equation (22) of \citealt{RagozzineWolf09}).
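As a sketch, the iterative solution of Equation \ref{EqunPrecession} can be written as a fixed-point loop. This toy version keeps only the first-order term in $e$ (the full analysis uses the fifth-order expansion of \citealt{RagozzineWolf09}) and takes $\dot{\omega}$ in radians per day rather than degrees:

```python
import math

def transit_time(N, T_o, P_s, e, omega_o, omega_dot, tol=1e-10, max_iter=100):
    # Solve T_N = T_o + P_s*N - (e*P_a/pi)*(cos(w_N) - cos(w_o)) by
    # fixed-point iteration, with w_N = w_o + omega_dot*(T_N - T_o)
    # and P_a = P_s / (1 - P_s*omega_dot/(2*pi)).
    P_a = P_s / (1.0 - P_s * omega_dot / (2.0 * math.pi))
    T_N = T_o + P_s * N  # initial guess: linear ephemeris
    for _ in range(max_iter):
        w_N = omega_o + omega_dot * (T_N - T_o)
        T_new = (T_o + P_s * N
                 - (e * P_a / math.pi) * (math.cos(w_N) - math.cos(omega_o)))
        if abs(T_new - T_N) < tol:
            return T_new
        T_N = T_new
    return T_N
```

With $e$=0 (or $\dot{\omega}$=0 and a fixed $\omega$) the loop reduces to the linear ephemeris, so the precession signal enters only through the slow drift of $\omega_N$.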
We fit the radial velocity data from \citet{Hebb09}
and \citet{Husnoo10}\footnote{\citet{Husnoo10} argue that there may be correlated
red noise in the \citet{Hebb09} radial-velocity data, possibly due to a systematic offset
in the RV zero-point from night to night. As a result we scale-up
the errors for the \citet{Hebb09} data by a factor of 8 and those of \citet{Husnoo10} by a factor of 2 to account for possible
offsets between these two data-sets. We refer the reader to \citet{Husnoo10} for further discussion.},
and the transits listed in Table 2 of \citet{Campo10} as well as four additional,
recent transits\footnote{The additional transits have mid-transit times (HJD) of 2455246.77604$\pm$0.00217 (A. Gibson, TRESCA), 2455253.32414$\pm$0.00287 (F. Lomoz, TRESCA), 2455257.69131 (G. Haagen, TRESCA), and 2455265.33327$\pm$0.00129 (H. Kucakova, TRESCA).}
from the Exoplanet Transit database \citep{Poddany10}
and our own secondary eclipse data along with those of
\citet{LopezMorales10}, and \citet{Campo10}. We exclude the in-transit radial
velocity data as we do not model for the Rossiter-McLaughlin effect \citep{GaudiWinn07}.
We follow \citet{Campo10}, and quote the \citet{LopezMorales10} eclipse point
that results from the combined photometry from 1.5 eclipses,
at a single epoch
halfway between their observations (HJD$\sim$2455002.8560 $\pm$ 0.0073).
$T_N$ of course gives the transit
time to compare to the data; we use $e$, $\omega_N$, $T_N$ and $P_a$ to calculate the eclipse times, and $\omega(t)$ to calculate
the radial velocity values. We use the MCMC techniques explained above to calculate the best-fit precessing
model, and non-precessing models, except that we fit for $e$$\cos$$\omega$ and $e$$\sin$$\omega$, instead of $e$ and $\omega$,
as
$\omega$ is poorly constrained as the eccentricity approaches zero.
\begin{figure*}
\centering
\includegraphics[scale=0.60,angle=270]{HIST71_HISTO.eps}
\includegraphics[scale=0.60,angle=270]{HIST70_HISTO.eps}
\includegraphics[scale=0.60,angle=270]{HIST62_HISTO.eps}
\includegraphics[scale=0.64,angle=270]{contour_new_NoPrecessNoLopez_203.eps}
\caption{ Top-left, top-right and bottom-left panels:
Marginalized likelihood for WASP-12b's $e$$\cos$$\omega$, $e$$\sin$$\omega$
and its eccentricity from the non-precessing MCMC chain with
the \citet{LopezMorales10} point excluded.
The best-fit value for each panel is given with the solid vertical line (for the
bottom-left panel this value is nearly indistinguishable from zero),
while the 68\% credible region
is indicated by the dotted vertical line.
Bottom-right panel: Contour plot showing the eccentricity, $e$,
and the argument of periastron, $\omega$, of WASP-12b again from the
same MCMC chain.
The 68.3\% (1$\sigma$; solid-line), 95.5\% (2$\sigma$; dashed-line), and 99.7\% (3$\sigma$; dotted-line)
credible regions are indicated.
}
\label{FigHisto}
\end{figure*}
We plot our precessing and non-precessing best-fit models in
Figure \ref{FigPrecession} and present the MCMC results in Table \ref{TablePrecession}.
The best-fit models with and without precession are similar. The best-fit precessing model features
a very small rate of precession ($\dot{\omega}$ = \PrecessionFourLikely$^{+\PrecessionFourPlus}_{-\PrecessionFourMinus}$$^{o}$ d$^{-1}$),
that barely provides a superior fit once the extra degrees of freedom are taken into account (a Bayesian Information Criterion\footnote{For the Bayesian Information Criterion \citep{Liddle07} lower-values indicate superior fits corrected for the number of free parameters: $BIC$=$\chi^2$ + $k$ln$N$, where $k$ is the number of free parameters and $N$ is the number of data points.}
of $BIC$=\PrecessionBICLikely \ for the precessing case, compared
to $BIC$=\PrecessionBICNoPrecessLikely \ for the non-precessing case).
Thus there is not convincing evidence at this date that WASP-12b is precessing.
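The model comparison above rests on the BIC penalty: the precessing model, with its extra free parameters, must lower $\chi^2$ by more than $\Delta k \ln N$ to be preferred. A minimal sketch (the numbers in the test are illustrative, not our fitted values):

```python
import math

def bic(chi2, k, n):
    # Bayesian Information Criterion (Liddle 2007): chi^2 + k*ln(N),
    # where k is the number of free parameters and N the number of
    # data points; lower values indicate a preferred model.
    return chi2 + k * math.log(n)

def delta_chi2_needed(extra_params, n):
    # Improvement in chi^2 that a more complex model must achieve
    # simply to tie the simpler model on the BIC.
    return extra_params * math.log(n)
```

This is why the very small best-fit $\dot{\omega}$ above, despite marginally improving $\chi^2$, does not survive the penalty for its extra degrees of freedom.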
Given that the timing offset of the \citet{LopezMorales10} eclipse detection may be suspect,
we also refit the non-precessing case with this eclipse excluded,
and present the MCMC results in Table \ref{TablePrecession}.
The distribution of eccentricity values from our MCMC chain without the \citet{LopezMorales10} eclipse
is non-Gaussian (the bottom left panel of Figure \ref{FigHisto}) and favours a near-zero eccentricity
with a tail to higher eccentricity values;
this limit is
$e$=\PrecessionTwoNoPrecessNoLopezLikely$^{+\PrecessionTwoNoPrecessNoLopezPlus}_{-\PrecessionTwoNoPrecessNoLopezMinus}$.
This is due to the fact that although
the $e$$\cos$$\omega_o$ values for WASP-12b are well-constrained from the radial-velocity data
and the combination of the timing of the eclipses and transits (the top-left panel of Figure \ref{FigHisto}),
the $e$$\sin$$\omega_o$ values are not well-constrained
and thus higher eccentricity values are allowed (the top-right panel of Figure \ref{FigHisto})
for an argument of periastron where
$\cos$$\omega_o$$\sim$0 at $\omega_o$$\sim$90$^{o}$ and -90$^{o}$ (as can be seen in the
contour plot in the bottom-right panel of Figure \ref{FigHisto}).
Although we are not able to rule out
higher eccentricity values for WASP-12b with high confidence,
the orbit of WASP-12b is likely
circular; thus WASP-12b is no longer an outlier from the expectation of the timescale
of tidal circularization for close-in giant exoplanets.
The above analysis would be improved by including an a
priori constraint on $e$$\sin$$\omega_o$ using the eclipse duration values from
our own eclipses and the \citet{Campo10} Spitzer/IRAC eclipses. Unfortunately, although \citet{Campo10} indicate that
their best-fit eclipse durations are similar to that of the transits and should thus place a tight
constraint on $e$$\sin$$\omega_o$ near
zero, \citet{Campo10} do not formally fit for the duration of the eclipse
and do not include the associated uncertainties. We discuss the implications of fitting our own
eclipse durations below.
\subsection{A longer duration secondary eclipse; possible signs of material stripped from the planet?}
\label{SecDisk}
\begin{figure}
\centering
\includegraphics[scale=0.35,angle=270]{CFHT_analysis_four_panel_WASPTwelveVariableKs.eps}
\caption{
The same as figure \ref{FigWASP12Ks} except the best-fit model is our variable eclipse duration model for our Ks-band photometry.
}
\label{FigWASP12KsVariable}
\end{figure}
We also fit our Ks-band photometry and our joint J, H \& Ks-band photometry
with an eclipse model with the eclipse duration as a free parameter.
Our best-fit Ks-band variable eclipse duration fit is presented in Figure \ref{FigWASP12KsVariable}. Our variable eclipse duration fit
does argue for a marginally wider secondary eclipse than transit:
$\Phi_{II/I}$ = \WidthFactorWASPTwelveVariableKs$^{+\WidthFactorPlusWASPTwelveVariableKs}_{-\WidthFactorMinusWASPTwelveVariableKs}$,
although this result is only significant at the \WidthFactorSigmaWASPTwelveVariableKs$\sigma$-level.
The associated eclipse duration is
$\Phi_{II}$ = \EclipseDurationWASPTwelveVariableKs$^{+\EclipseDurationPlusWASPTwelveVariableKs}_{-\EclipseDurationMinusWASPTwelveVariableKs}$ hours,
longer than the $\sim$2.93 hour optical transit found by \citet{Hebb09}, and longer than
the similar duration IRAC eclipses found by \citet{Campo10}.
That the data suggest a wider secondary eclipse than our best-fit model can be seen
in the ingress and egress of our Ks-band
photometry (Figure \ref{FigWASP12Ks}).
Our joint analysis of our J, H \& Ks-band data also argues for a marginally wider secondary eclipse
than transit:
$\Phi_{II/I}$ = \WidthFactorWASPTwelveJointVariableAll$^{+\WidthFactorPlusWASPTwelveJointVariableAll}_{-\WidthFactorMinusWASPTwelveJointVariableAll}$, or that the duration of
the eclipse is $\Phi_{II}$ = \EclipseDurationWASPTwelveJointVariableAll$^{+\EclipseDurationPlusWASPTwelveJointVariableAll}_{-\EclipseDurationMinusWASPTwelveJointVariableAll}$ hours.
As our J-band data is a partial eclipse, it has no ability to constrain the eclipse duration on its own.
Similarly, as our H-band data suffers from significant systematics prior to and during ingress, its ability to constrain
the eclipse duration is compromised; in fact, the systematics at the beginning of the H-band photometry
that manifest themselves as a sharp decrease in flux can be
well-fit by a significantly wider and deeper secondary eclipse that is unlikely to be physical.
These facts, combined with a visual inspection of
Figures \ref{FigWASP12Ks}-\ref{FigWASP12J},
suggest that the wider secondary eclipse in our joint analysis is in fact dominated by our Ks-band photometry, and that
the longer duration eclipse may not be
credible for the joint analysis.
Our Ks-band photometry is best-fit with
a wider secondary eclipse than expected:
$\Phi_{II/I}$=\WidthFactorWASPTwelveVariableKs$^{+\WidthFactorPlusWASPTwelveVariableKs}_{-\WidthFactorMinusWASPTwelveVariableKs}$.
The first possible explanation for this wider than expected
eclipse is systematic, time-correlated red noise in our photometry, which would not be unexpected
as the eclipse is wider than expected at less than the 3$\sigma$ level.
Another possibility for this wider eclipse is that the planet has
a small eccentricity ($e$$\sin$$\omega$=\ESinOmegaWASPTwelveVariableKs$^{+\ESinOmegaPlusWASPTwelveVariableKs}_{-\ESinOmegaMinusWASPTwelveVariableKs}$).
We have already presented strong evidence that the eccentricity of WASP-12b is quite likely
near zero in $\S$\ref{SecEccentricity} and the \citet{Campo10} Spitzer/IRAC eclipse photometry
does not feature a longer duration secondary eclipse. Also, although the $e$sin$\omega$ of the planet is less well constrained
in $\S$\ref{SecEccentricity}, an $e$sin$\omega$ value necessary to explain our longer duration eclipse
can be ruled out at several sigma and thus we
find this possibility uncompelling.
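The $e\sin\omega$ alternative can be quantified with the standard first-order relation between the eclipse-to-transit duration ratio and $e\sin\omega$, $\Phi_{II}/\Phi_{I} \approx (1 + e\sin\omega)/(1 - e\sin\omega)$. A sketch of the conversion (the duration ratio in the example is illustrative):

```python
def duration_ratio(e_sin_w):
    # First-order eclipse-to-transit duration ratio:
    # Phi_II / Phi_I ~ (1 + e*sin(w)) / (1 - e*sin(w)).
    return (1.0 + e_sin_w) / (1.0 - e_sin_w)

def esinw_from_ratio(ratio):
    # Inverse: e*sin(w) = (ratio - 1) / (ratio + 1).
    return (ratio - 1.0) / (ratio + 1.0)
```

A duration ratio of, say, $\Phi_{II/I}$$\sim$1.2 would demand $e\sin\omega$$\sim$0.09, which illustrates why the wider eclipse cannot easily be attributed to eccentricity given the near-zero $e$ found in $\S$\ref{SecEccentricity}.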
Another possibility -- perhaps the most intriguing possibility --
is that if this apparently wider secondary eclipse
is not due to
systematic effects or due to a small $e$$\sin$$\omega$
for WASP-12b, then it could be due to radiation from gas
that is escaping from the planet and possibly
forming a circumstellar disk.
The latter was
predicted by \citet{Li10}, while the former was arguably recently confirmed by \citet{Fossati10} through observations that
WASP-12b displayed increased transit depths in the UV with COS/HST.
An eclipse of
this duration could argue for material surrounding the planet
with a projected radius
that is approximately \RWideWASPTwelve \ times
the optical radius of the planet, or at a radius of \RWideJupiter \ $R_{Jupiter}$,
and would thus argue for radiating material that exceeds the Roche lobe and streams from
the planet.
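As a rough check of this geometry, a secondary-eclipse width ratio can be converted into a projected radius for the occulted material under the simplifying assumption of a central eclipse, for which the total eclipse duration is proportional to $R_{*}+R_{occulted}$. The sketch below uses an assumed 10\% width increase and round system-scale parameters purely for illustration; it does not use our fitted values:

```python
# Rough geometric sketch: convert a secondary-eclipse width ratio into an
# effective radius for emitting material around the planet.
# Assumes a central eclipse (impact parameter ~ 0), so the total eclipse
# duration is proportional to (R_star + R_occulted).

R_SUN = 6.957e8   # m
R_JUP = 7.1492e7  # m

def effective_radius(width_ratio, r_star, r_planet):
    """Radius of the occulted emitting region implied by an eclipse that is
    width_ratio times longer than a bare-planet eclipse."""
    return width_ratio * (r_star + r_planet) - r_star

# Illustrative numbers only (roughly the scale of the WASP-12 system):
r_star = 1.57 * R_SUN
r_planet = 1.79 * R_JUP
r_eff = effective_radius(1.10, r_star, r_planet)  # assumed 10% wider eclipse
print(r_eff / r_planet, r_eff / R_JUP)
```

With these assumed numbers the emitting material would extend to roughly twice the planet's optical radius; the actual scale quoted above follows from our fitted $\Phi_{II/I}$.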
This emission could be due to CO $\sim$2.292 $\mu m$ bandhead emission, as predicted by \citet{Li10}, although the material
around the planet should be cooler than the $\sim$4000-5000 K temperatures they predicted for the circumstellar disk and will
thus result in reduced emission.
In the ``accretion stream'' hypothesis, advocated by \citet{Lai10},
the material streaming from the planet towards the star may be highly localized in a line passing through the inner
Lagrangian point. The extra emission from this stream would be obscured
by the star earlier than the planet during eclipse
ingress and later than the planet during eclipse egress.
Such a scenario is arguably favoured over simply a
sphere of evaporating material \RWideJupiter \ $R_{Jupiter}$ in radius, as in the accretion stream scenario
the emission will arise from a smaller surface area; otherwise
the Ks-band brightness temperature of the planet
would have to be anomalously low, given that the
$\Delta F_{Ks}$=\FpOverFStarPercentAbstractWASPTwelveJointAll$^{+\FpOverFStarPercentAbstractPlusWASPTwelveJointAll}_{-\FpOverFStarPercentAbstractMinusWASPTwelveJointAll}$\%
that we observe would have to be a combination of the planet and the enveloping
material that would have a much larger surface area of emitting material
than the planet itself.
Alternatively,
this wider eclipse could be interpreted as the planet passing behind a circumstellar disk that is optically thick
at these wavelengths and extends marginally from the star (at least $\sim$1.11 times
the stellar radius), and therefore
obscures the planet earlier and later than expected. If the disk is optically thick it will have to be due to gas
opacity, as the temperature of the disk will be well above the dust sublimation temperature, and
the temperature and
density of the disk will have to be high enough for the material to be largely photoionized
to avoid the ``opacity gap'' \citep{Thompson05}.
The disk would also
have to be optically thick at around 2 $\mu m$, but not at the longer wavelengths probed by Spitzer/IRAC
(3.6 to 8.0 $\mu m$) as the durations of the eclipses are not discrepant from
the expected duration in the \citet{Campo10} photometry. The ``accretion stream'' hypothesis is arguably less contrived, but
the observed eclipse duration could also result from a combination of both scenarios.
An obvious way to differentiate between these two scenarios would be observations
of WASP-12 during transit in the Ks-band.
If WASP-12 is surrounded by a
circumstellar disk that is emitting in Ks-band then the transit duration will increase.
If there is material surrounding WASP-12b then its transit will be of the
expected duration if the material is optically thin, and the transit will display an increased
depth if the material is optically thick.
We plan to perform such follow-up observations of the transit and eclipse of
WASP-12b in the Ks and H-bands to differentiate between these various scenarios,
and to confirm the near-zero eccentricity of WASP-12b. Until such follow-up observations take place we emphasize that
our Ks-band photometry is best-fit with a wider eclipse at less than the 3$\sigma$ level.
\subsection{The properties of WASP-12b's atmosphere}
Our measurements of the thermal emission of WASP-12b allow us to constrain the characteristics of its atmosphere,
including:
its Bond albedo, the level of redistribution of heat from the day to the nightside at various depths,
and the planet's dayside bolometric luminosity.
We parameterize the level of redistribution by the reradiation factor,
$f$, following the \citet{LopezMoralesSeager07} definition (i.e.
$f$=$\frac{1}{4}$ denotes isotropic reradiation, while $f$=$\frac{1}{2}$
denotes redistribution and reradiation from the dayside only).
Our eclipse depths are consistent with a range of Bond albedos, $A_B$,
and overall day to nightside redistribution of heat, $f_{tot}$ (Figure \ref{FigBondReradiation}).
If we assume a Bond albedo near zero, consistent with observations of other hot Jupiters \citep{Charbonneau99,Rowe08}
and with model predictions \citep{Burrows08}, the best-fit reradiation factor, $f_{tot}$, that results from our three near-infrared
eclipse measurements is
$f_{tot}$ = \fReradiationWASPTwelveALL$^{+\fReradiationPlusWASPTwelveALL}_{-\fReradiationMinusWASPTwelveALL}$.
This suggests that the dayside of WASP-12b reradiates most of the incident stellar flux
without redistributing it to the nightside.
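In the \citet{LopezMoralesSeager07} parameterization the dayside equilibrium temperature is $T_{eq} = T_{eff}\sqrt{R_{*}/a}\,[f(1-A_B)]^{1/4}$. A minimal sketch of this relation follows; the stellar parameters and $a \approx 0.0229$ AU are assumed round values for the WASP-12 system, not fitted quantities:

```python
# Sketch of the Lopez-Morales & Seager (2007) parameterization:
# T_eq = T_eff * sqrt(R_star / a) * [ f * (1 - A_B) ]**0.25
# f = 1/4 -> isotropic reradiation; f = 1/2 -> dayside-only reradiation.

R_SUN = 6.957e8   # m
AU = 1.496e11     # m

def t_eq(f, bond_albedo=0.0, t_eff=6300.0, r_star=1.57 * R_SUN, a=0.0229 * AU):
    """Dayside equilibrium temperature for reradiation factor f."""
    return t_eff * (r_star / a) ** 0.5 * (f * (1.0 - bond_albedo)) ** 0.25

print(t_eq(0.35))  # modest redistribution
print(t_eq(0.50))  # dayside-only reradiation
```

With these assumptions the sketch reproduces the $T_{eq}$$\sim$2735 $K$ ($f$=0.35) and $T_{eq}$$\sim$2990 $K$ ($f$=$\frac{1}{2}$) blackbody temperatures used in Figure \ref{FigModel}.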
\begin{figure}
\centering
\includegraphics[scale=0.65,angle=270]{contour_new_WASP-12_69.eps}
\caption{
1$\sigma$ (solid-lines), 2$\sigma$ (dashed-lines), and 3$\sigma$
(dotted-lines) constraints on the Bond albedo and reradiation factor, $f_{tot}$
from our Ks, H \& J-band secondary eclipse observations of WASP-12b.
}
\label{FigBondReradiation}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.49,angle=270]{Blackbody_plot_WASP-12.eps}
\includegraphics[scale=0.49,angle=270]{Blackbody_plot_infrared_WASP-12.eps}
\caption{
Dayside planet-to-star flux ratios (top) and dayside flux at the planet's surface (bottom).
The Ks-band ($\sim$2.15 $\mu m$), H-band ($\sim$1.60 $\mu m$) and J-band ($\sim$1.25 $\mu m$)
points are our own, while the z'-band point ($\sim$0.9 $\mu m$) is from \citet{LopezMorales10}.
Blackbody curves for modest redistribution ($f$=0.35; $T_{eq}$$\sim$2735 $K$; blue dashed line),
and dayside only emission ($f$=$\frac{1}{2}$; $T_{eq}$$\sim$2990 $K$; grey dotted line) are
also plotted.
We also plot one-dimensional, radiative transfer spectral models \citep{Fortney06,Fortney08}
for various reradiation factors and with and without TiO/VO.
We plot models with modest redistribution ($f$=0.35)
with and without TiO/VO (magenta-dotted and green-dashed lines, respectively),
and for dayside only emission ($f$=$\frac{1}{2}$) with and without TiO/VO (orange dotted and cyan dot-dashed lines, respectively).
The models with TiO/VO display temperature inversions.
The models on the top panel are divided by a stellar atmosphere model \citep{Hauschildt99} of WASP-12
using the parameters from \citet{Hebb09}
($M_{*}$=1.35 $M_{\odot}$, $R_{*}$=1.57 $R_{\odot}$, $T_{eff}$=6300 $K$, and log $g$=4.38).
We plot the WIRCam Ks, H and J-band transmission curves
inverted at arbitrary scale at the top
of both panels (dotted black lines). We integrate our models over the WIRCam bandpasses and display the result
in the appropriately coloured triangles.
}
\label{FigModel}
\end{figure}
As the atmospheres of hot Jupiters may be highly vertically stratified,
different atmospheric layers may redistribute heat
much more or much less efficiently than other layers.
The best-fit brightness temperatures and reradiation factors of the individual
atmospheric layers probed by our various wavelengths of observations are:
$T_{B Ks}$=\TBrightWASPTwelveKs$^{+\TBrightPlusWASPTwelveKs}_{-\TBrightMinusWASPTwelveKs}$$K$ and $f_{Ks}$=\fReradiationWASPTwelveKs$^{+\fReradiationPlusWASPTwelveKs}_{-\fReradiationMinusWASPTwelveKs}$ for our Ks-band observations,
$T_{B H}$ =\TBrightWASPTwelveH$^{+\TBrightPlusWASPTwelveH}_{-\TBrightMinusWASPTwelveH}$$K$ and $f_{H}$=\fReradiationWASPTwelveH$^{+\fReradiationPlusWASPTwelveH}_{-\fReradiationMinusWASPTwelveH}$ for our H-band observations,
$T_{B J}$ =\TBrightWASPTwelveJ$^{+\TBrightPlusWASPTwelveJ}_{-\TBrightMinusWASPTwelveJ}$$K$ and $f_{J}$=\fReradiationWASPTwelveJ$^{+\fReradiationPlusWASPTwelveJ}_{-\fReradiationMinusWASPTwelveJ}$ for our J-band observations.
Our three different bands
should be probing high pressure regions, deep into the atmosphere of WASP-12b.
Specifically, if the near-infrared opacity is dominated by water vapour opacity, the J, H \& K-bands should
be windows in the water opacity \citep{Fortney08}, and the Ks, H \& J-bands should see progressively
deeper into WASP-12b's atmosphere.
Within the errors the brightness temperatures displayed in our three near-infrared bands are similar.
However, the J and H-band brightness temperatures are marginally lower, and taken at face value
compared to the Ks-band
brightness temperature they suggest a modest temperature inversion at
very high pressures of $\sim$100 to 500 mbar, deep in the atmosphere of WASP-12b.
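For reference, a band brightness temperature is the blackbody temperature that reproduces the measured planet-to-star flux ratio, $\Delta F \approx (R_p/R_*)^2\,B_\lambda(T_B)/B_\lambda(T_{eff})$. A minimal sketch that inverts this relation by bisection follows; the 0.30\% depth and $R_p/R_* \approx 0.117$ below are illustrative assumptions, not our fitted values:

```python
import math

# Invert depth = (Rp/Rstar)**2 * B(T_B) / B(T_eff) for the brightness
# temperature T_B, using the Planck function B_lambda at band centre.

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

def planck(wav, temp):
    """Planck spectral radiance B_lambda(T) (W m^-3 sr^-1)."""
    return (2 * H * C**2 / wav**5) / math.expm1(H * C / (wav * KB * temp))

def brightness_temp(depth, wav, t_star, radius_ratio):
    """Bisect for T_B such that the modelled eclipse depth matches `depth`."""
    target = depth / radius_ratio**2 * planck(wav, t_star)
    lo, hi = 500.0, t_star
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if planck(wav, mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative: a 0.30% Ks-band (2.15 micron) eclipse with Rp/Rstar ~ 0.117
print(brightness_temp(0.0030, 2.15e-6, 6300.0, 0.117))
```

The bisection converges because $B_\lambda(T)$ is monotonic in $T$; with these illustrative inputs the recovered $T_B$ is near 3000 K.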
One explanation for why WASP-12b might display decreased flux at these shorter wavelengths as compared to the Ks-band,
is that the atmospheric depths and pressures probed by these shorter wavelength observations
may be more homogenized than higher altitude layers.
The efficiency of redistribution of the incident stellar flux from the dayside to the nightside should
be proportional to the ratio of the reradiative ($\tau_{rad}$) to advective timescales ($\tau_{adv}$).
It is thought that the
reradiative timescale should increase with pressure and
depth\footnote{The radiative time-scale (how quickly the planet reradiates the incident stellar flux) is thought to be proportional to $\tau_{rad}$ $\sim$ $\frac{c_{P} P}{4 g \sigma T^3}$ \citep{ShowmanGuillot02}, where $c_{P}$ is the specific heat capacity, $\sigma$ is the Stefan-Boltzmann constant, $T$ is the temperature of the atmospheric layer, and $g$ is the gravitational acceleration of the planet.}.
The advective timescale \footnote{It is thought that the advective timescale (how quickly the planet advects the heat to the nightside of the planet; $\tau_{adv}$) can be approximated by the radius of the planet, $R_{P}$, divided by the horizontal windspeed, $U$: $\tau_{adv}$ $\sim$ $R_{P}$/U \citep{ShowmanGuillot02}.}
is also thought to increase with pressure, although it is generally thought that advection should
win out over reradiation as one descends through the atmosphere of a typical hot Jupiter \citep{Seager05,Fortney08}.
Thus, one might expect more efficient redistribution of heat
at the layers probed by our shorter wavelength observations compared to the layers probed by our Ks-band observations.
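The two footnoted scalings are simple to evaluate at an order-of-magnitude level. In the sketch below every input (heat capacity, gravity, wind speed, and the probed pressure and temperature) is an assumed hot-Jupiter-like value rather than a measured quantity:

```python
# Order-of-magnitude radiative vs. advective timescales
# (the Showman & Guillot 2002 scalings quoted in the footnotes):
#   tau_rad ~ c_P * P / (4 * g * sigma * T**3)
#   tau_adv ~ R_p / U

SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
R_JUP = 7.1492e7  # m

def tau_rad(pressure, temp, c_p=1.3e4, g=11.0):
    """Radiative timescale (s); c_p and g assumed for an H2-rich hot Jupiter."""
    return c_p * pressure / (4.0 * g * SIGMA * temp**3)

def tau_adv(r_planet=1.79 * R_JUP, wind=2.0e3):
    """Advective timescale (s); ~2 km/s horizontal winds assumed."""
    return r_planet / wind

# At an assumed ~300 mbar, ~2900 K layer:
print(tau_rad(3.0e4, 2900.0) / 3600.0, tau_adv() / 3600.0)  # hours
```

With these assumed values $\tau_{rad}$ is much shorter than $\tau_{adv}$, i.e. reradiation wins at this depth; since $\tau_{rad}$ grows linearly with pressure, the balance shifts toward advection, and thus more efficient redistribution, deeper down.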
Other explanations
for the relatively higher Ks-band emission
than the H and J-band emission are certainly possible, including:
extra flux from
a circumstellar disk or material streaming from the planet in the Ks-band, an atmospheric emission feature at Ks-band,
or absorption features over the H and J-bands.
The eclipse depths from the \citet{Campo10} Spitzer/IRAC measurements will not shed much additional light on this matter,
as, if water opacity dominates, the Spitzer/IRAC bands do not probe as deeply as the JHK near-infrared bands.
We compare the depths of our near-infrared eclipses to a series of planetary atmosphere models in Figure \ref{FigModel}.
This comparison is made quantitatively as well as qualitatively by integrating the models over the WIRCam J, H \& Ks band-passes
and calculating the $\chi^{2}$ of the thermal emission data compared to the models.
We include the \citet{LopezMorales10} eclipse depth in Figure \ref{FigModel}, but do not include it in our $\chi^{2}$ calculation due to the aforementioned
uncertainty with the timing and depth of this eclipse. The Spitzer/IRAC eclipse depths \citep{Campo10} are also not included, as of the time of writing only the
central eclipse times have been reported.
We first plot two blackbody
models, the first one displaying modestly efficient heat redistribution ($f$=0.35; blue dotted line; $T_{eq}$$\sim$2735 $K$),
while the latter features emission from the dayside only ($f$=$\frac{1}{2}$; grey dotted line; $T_{eq}$$\sim$2990 $K$).
The $f$=$\frac{1}{2}$ blackbody model provides an excellent fit to the longer wavelength
Ks-band emission, and does a reasonable job of fitting our H and J-band emission ($f$=$\frac{1}{2}$: $\chi^{2}$=\BlackbodyTwoChi);
it is thus a quantitatively better fit than the
modest redistribution model ($f$=$0.35$: $\chi^{2}$=\BlackbodyOneChi),
which generally underpredicts the observed emission.
In Figure \ref{FigModel} we also compare our measurements
to a series of one-dimensional, radiative transfer, spectral models
\citep{Fortney05,Fortney06,Fortney08} with different reradiation factors
that specifically include or exclude gaseous TiO/VO
into the chemical equilibrium and opacity calculations.
In these models when TiO/VO are present in gaseous form in the upper atmosphere
they act as absorbers at high altitudes and lead to
hot stratospheres and temperature inversions \citep{Hubeny03}.
We present models with modest redistribution ($f$=0.35)
and dayside only emission ($f$=$\frac{1}{2}$) with and without TiO/VO.
The associated $\chi^{2}$ for the $f$=$\frac{1}{2}$
models with and without TiO/VO are $\chi^{2}$=\FortneyFourChi \ and $\chi^{2}$=\FortneyThreeChi, while the $f$=0.35 models
with and without TiO/VO are $\chi^{2}$=\FortneyTwoChi \ and $\chi^{2}$=\FortneyOneChi.
None of these models provide quantitative improvements over the $f$=$\frac{1}{2}$ blackbody model,
as they do not do as good a job of matching
the longer wavelength Ks-band thermal emission, nor do they feature reduced emission in H and J-band.
Our near-infrared measurements also allow us to estimate the bolometric dayside luminosity of WASP-12b, $L_{day}$. We use a blackbody model
with a total reradiation factor equal to the best-fit value we calculate from our three near-infrared bands ($f_{tot}$=\fReradiationWASPTwelveALL);
by integrating over this model we can estimate $L_{day}$ as \BolometricFluxDayside$\times$10$^{-3}$$L_{\odot}$.
Another way of parameterizing the efficiency of the day-to-nightside heat redistribution rather than the reradiation
factor is comparing the bolometric dayside luminosity, $L_{day}$, to the nightside luminosity, $L_{night}$.
By following elementary thermal equilibrium calculations one can deduce that WASP-12b
should display a total bolometric luminosity of
$L_{tot}$ = \BolometicFluxTotal$\times$10$^{-3}$$L_{\odot}$. This suggests
that \DaysidePercentage\% of the incident stellar irradiation is reradiated by the dayside, leaving a mere \NightsidePercentage\% to be advected to the nightside and reradiated.
However, caution is encouraged with this conclusion as shorter and longer wavelength emission for this planet may deviate
significantly from that of a blackbody.
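In this parameterization the day/night split follows directly from $f$: since $T_{day}^4 = f(1-A_B)\,T_{eff}^4(R_*/a)^2$ and the dayside hemisphere has area $2\pi R_p^2$, the dayside reradiates a fraction $2f$ of the absorbed stellar power. A trivial sketch of this bookkeeping (the derivation above is the standard one and is assumed here, not taken from our fits):

```python
# Fraction of the absorbed stellar power reradiated by the dayside in the
# Lopez-Morales & Seager (2007) parameterization: with
#   T_day**4 = f * (1 - A_B) * T_eff**4 * (R_star/a)**2
# and a dayside hemisphere of area 2*pi*Rp**2, the dayside luminosity is
#   L_day = 2*f * L_absorbed,
# so f = 1/2 corresponds to 100% dayside reradiation (capped at 1).

def dayside_fraction(f):
    """Fraction of absorbed stellar power reradiated by the dayside."""
    return min(2.0 * f, 1.0)

for f in (0.25, 0.35, 0.50):
    print(f, dayside_fraction(f), 1.0 - dayside_fraction(f))
```

For the best-fit $f_{tot}$, this bookkeeping is what yields the dayside and nightside percentages quoted above.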
\subsection{Future Prospects}
We lastly note that the combination of
thermal emission as prominent as that displayed here with near-infrared photometry this precise
suggests
that thermal phase curve measurements may be possible from the ground.
For the shortest period exoplanets
($P$$\sim$1$d$ or less) even in a single night of observing (8-9 hours) one could conceivably view the flux maximum of the phase
curve where hot gas is advected downwind on the planet, the decrement in flux during the secondary eclipse, and then
view a significant fraction of the near-sinusoidal
decrease as the cool nightside face of the exoplanet rotates into sight.
WASP-12b is an ideal target for such observations with its short 1.09$d$ period, and its bright dayside emission
suggests that thermal phase curve observations for this planet should reveal a large asymmetry over the course of the orbit
as WASP-12b's nightside should be cold.
Thermal phase curve observations from the ground in the near-infrared would require
one to control the background systematic trends that are present in our near-infrared
photometry even after we correct the flux of our target star with a great many reference stars; the feasibility of this task
is as yet unproven.
Nevertheless, we will be investigating the possibility of obtaining
such near-infrared phase curve information in this photometry as well as with
future observations of WASP-12b.
These near-infrared phase curve observations will be accompanied by near-simultaneous, 3.6 and 4.5
$\mu m$ Spitzer/IRAC thermal phase curve observations
of a full orbit of WASP-12b (P.I. P. Machalek) that will allow for an unprecedented understanding of the characteristics of the
day and nightside deep atmosphere of this planet.
We also plan to reobserve a full, rather than partial, eclipse of WASP-12b in J-band
so as to better define its thermal emission at that wavelength. Lastly we plan to observe the transit of WASP-12b
in the near-infrared Ks and H-bands, combined with our aforementioned
planned reobservations of the eclipse of WASP-12b in these bands. These combined transit and eclipse observations will allow us to confirm
if the Ks-band eclipse is indeed longer in duration than the optical transit, and if so whether this is due to material
tidally stripped from the planet that may or may not form a circumstellar disk in this system.
\acknowledgements
The Natural Sciences and Engineering Research Council of Canada supports the research of B.C. and R.J.
The authors thank Daniel Fabrycky, and Darin Ragozzine for helpful discussions on the effects of precession,
and Nicolas Cowan for helpful discussions on the putative disk in this system.
The authors especially appreciate the hard-work and diligence of the CFHT staff
for both scheduling these challenging observations and ensuring these ``Staring Mode'' observations were successful.
\section{Preliminaries on certain graphs}\label{sec_graph_prelim}
In this section we consider directed graphs $g$ whose vertices are labeled by the letters $A$, $B$,
$C$, $D$, $I$ (as in \textit{Initial}) and $F$ (as in \textit{Final}), and whose edges are labeled by
integers. The sets of vertices and edges are denoted respectively by $V(g)$ and $E(g)$. The labels of an
edge
$e$ and a vertex $v$ are denoted respectively by $L(e)$ and $L(v)$. The starting and final vertices of
$e$ are denoted respectively by $s(e)$ and $t(e)$.
The Hilbert space spanned by the vertices of $g$ is denoted by $l^2g$; elements of its canonical basis
are denoted by $\zeta_v$, $v\in V(g)$. The scalar product in $l^2g$ is denoted by $\langle \zeta_1,
\zeta_2 \rangle$. The convention about which place is linear and which is conjugate linear is such that for a
given vector $\zeta\in l^2g$ and $v\in V(g)$, the coefficient of $\zeta_v$ in the representation of $\zeta$ in
the canonical basis is equal to $\langle \zeta, \zeta_v \rangle$.
We say that a vertex $v$ is \textit{directly smaller} (resp. \textit{directly greater}) than a vertex
$w$, and denote this by $v \!\leftarrow\! w$ (resp. $v \!\rightarrow\! w$), if and only if there exists an
edge from
$w$ to $v$ (resp. from $v$ to $w$). The notation $v<w$ will be used for the binary relation generated by the
relation $\leftarrow$. The words ``greatest'' and ``smallest'' will be used with respect to this relation.
Given a graph $g$ we will consider an operator $T^g:l^2g\to l^2 g$ defined in the following way:
$$
T^g (\zeta_v) := \sum_{e\in E(g):\,\, s(e)=v} L(e)\zeta_{t(e)} +
\left\{
\begin{array}{l l}
0 & \quad \mbox{if $L(v) \in \{ I,F \}$}\\
\zeta_v & \quad \mbox{otherwise}\\
\end{array} \right.
$$
Sometimes we use the letter $T$ alone when $g$ is understood.
\begin{definition}
For a vertex $v\in V(g)$ and $\zeta \in l^2 g$ define the \textit{incoming flow at $v$ with
respect to $\zeta$} to be
$$
\sum_{e\in E(g): t(e)=v} L(e) \cdot \langle \zeta, \zeta_{s(e)} \rangle.
$$
\end{definition}
The following lemma will be used many times. It follows directly from the definition of $T^g$.
\begin{lemma}[``flow lemma'']\label{lemma_flow}
If $\zeta \in \ker T$ then for every vertex $v$ with label other than $I$ or $F$, $-\langle \zeta,
\zeta_v \rangle$ is equal to the incoming flow at $v$. For a vertex with label $I$ or $F$ the incoming flow is
$0$.
\end{lemma}
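The flow lemma is easy to verify numerically. The toy graph below is an illustrative example chosen for this sketch; it is not one of the graphs $g(k)$, $h(l)$ considered later:

```python
import numpy as np

# Numerically verify the flow lemma on a small labeled directed graph.
# Vertices carry labels in {A, B, C, D, I, F}; edges carry integer labels.
# T(zeta_v) = sum over outgoing edges e with s(e)=v of L(e)*zeta_{t(e)},
# plus zeta_v itself unless v is labeled I or F.

labels = {0: "I", 1: "A", 2: "B", 3: "F"}
edges = [(0, 1, 1), (1, 2, 2), (2, 3, 1), (0, 2, -2)]  # (s(e), t(e), L(e))

n = len(labels)
T = np.zeros((n, n))
for v in range(n):
    if labels[v] not in ("I", "F"):
        T[v, v] = 1.0
for s, t, lab in edges:
    T[t, s] += lab  # column v of T holds the coefficients of T(zeta_v)

# Kernel via SVD: right-singular vectors with (numerically) zero singular value.
_, svals, vt = np.linalg.svd(T)
kernel = [vt[i] for i in range(n) if svals[i] < 1e-10]

for zeta in kernel:
    for v in range(n):
        inflow = sum(lab * zeta[s] for s, t, lab in edges if t == v)
        expected = 0.0 if labels[v] in ("I", "F") else -zeta[v]
        assert abs(inflow - expected) < 1e-9
```

The matrix is built column by column directly from the definition of $T^g$, so the assertions check exactly the statement of the flow lemma: the incoming flow at a vertex equals $-\langle \zeta, \zeta_v \rangle$ away from the $I$- and $F$-labeled vertices, and vanishes there.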
\subsec{The graph $g(k)$}
\label{sec_graph_g}
The graph $g(k)$, $k\in\{1,2,\ldots\}$, is depicted in Figure \ref{fig_graph_g}.
We need some notation for vertices. The greatest vertex with label $A$ will be called $a_1$; for
$m<k$ the vertex with label $A$ which is directly smaller than $a_m$ will be called $a_{m+1}$. The smallest
vertex with label $B$ will be called $b_1$; for $m<k$ the vertex with label $B$ which is directly greater than
$b_m$ will be called $b_{m+1}$.
\begin{figure}[h]%
\resizebox{0.8\textwidth}{!}{\input{graph-g.pdf_t}}
\caption{The graph $g(k)$}
\label{fig_graph_g}
\end{figure}
\begin{lemma}
$\dim \ker T^{g(k)} = 0$
\end{lemma}
\begin{proof}
We first check the case $k=1$, by explicitly writing down the matrix of $T$.
For $k>1$ suppose that $\zeta\in l^2(g(k))$ is such that $T(\zeta)=0$. From the flow lemma we see that
$\langle \zeta, \zeta_{a_1} \rangle = \langle \zeta, \zeta_{b_1} \rangle$, and using induction we
prove that $\langle \zeta, \zeta_{a_1} \rangle = \langle \zeta, \zeta_{b_k} \rangle$ and finally that
$\langle \zeta, \zeta_{a_1} \rangle = \langle \zeta, \zeta_{a_k} \rangle.$
But on the other hand from the flow lemma it follows by induction that $\langle \zeta, \zeta_{a_1} \rangle =
2^{k-1} \langle \zeta, \zeta_{a_k} \rangle$. Since $k>1$, this proves that $\langle \zeta, \zeta_{a_1} \rangle
= 0$, and thus $\langle \zeta, \zeta_{a_i}\rangle = \langle \zeta, \zeta_{b_i} \rangle = 0$.
\end{proof}
\subsec{The graph $h(l)$}\label{sec_graph_h}
The graph $h(l)$, $l\in \{1,2,\ldots\}$, is depicted in Figure \ref{fig_graph_h}.
Let the unique vertex with label $F$ be denoted by $f$. Let the greatest vertex with label $C$ (resp.
$D$) be called $c_1$ (resp.
$d_1$); for $m<l$ the vertex with label $C$ (resp. $D$) which is directly smaller than $c_m$ (resp. $d_m$)
will be called $c_{m+1}$ (resp. $d_{m+1}$).
\begin{figure}[h]%
\resizebox{0.67\textwidth}{!}{\input{graph-h.pdf_t}}
\caption{The graph $h(l)$}
\label{fig_graph_h}
\end{figure}
\begin{lemma}
$\dim \ker T^{h(l)} = 1$
\end{lemma}
\begin{proof}
Let us consider the matrix of $T$ in the basis $\zeta_{c_1},\ldots, \zeta_{c_l}, \zeta_{d_1},\ldots
, \zeta_{d_l}, \zeta_{f}$. This matrix is lower triangular, and the diagonal consists of $2l$ $1$'s and one
$0$ (the entry corresponding to $\zeta_f$). This proves the lemma.
\end{proof}
\subsec{The graph $j(k,l)$}\label{sec_graph_j}
The graph $j(k,l)$, $k,l\in \{1,2,\ldots\}$, is depicted in Figure \ref{fig_graph_j}. It consists of a copy of
the graph $g(k)$, a copy of the
graph $h(l)$, and one additional vertex with the label $I$ together with three additional edges. The vertex
with the label $I$ will be denoted by $\iota$. The rest of the vertices will be
denoted in the way described in the two previous subsections.
\begin{figure}[h]%
\resizebox{0.95\textwidth}{!}{\input{graph-j.pdf_t}}
\caption{The graph $j(k,l)$}
\label{fig_graph_j}
\end{figure}
\begin{lemma} If $l= 2^{k-1}-1$ then $\dim \ker T^{j(k,l)} = 2$. Otherwise $\dim \ker T^{j(k,l)}=1$.
\end{lemma}
\begin{proof}
We will focus on the case $k>1$. The arguments in the case $k=1$ are very similar and are left to the reader.
First, let $l=2^{k-1}-1$. The first generator of $\ker T$ is $\zeta_f$, and the coefficients of
another generator of $\ker T$ are depicted in Figure \ref{fig_graph_j_ker}.
\begin{figure}[h]%
\resizebox{0.95\textwidth}{!}{\input{graph-j-ker.pdf_t}}
\caption{Coefficients of the second generator of $\ker T^{j(k,2^{k-1}-1)}$}
\label{fig_graph_j_ker}
\end{figure}
To see that these two vectors generate the whole $\ker T$ let us prove a general (i.e. valid for all pairs
$(k,l)$) claim:
\begin{claim*}
Let $\zeta\in \ker T$ be such that $\langle \zeta, \zeta_f \rangle=0$ and $\langle \zeta,
\zeta_{a_1} \rangle=0$.
Then $\zeta=0$.
\end{claim*}
\begin{proof}
First we see from the flow lemma that $\langle \zeta, \zeta_{a_1} \rangle=0$ implies $\langle
\zeta, \zeta_{\iota} \rangle= \langle \zeta, \zeta_{c_1} \rangle=0$ on the one hand, and on the other we see
inductively
that $\langle \zeta, \zeta_{a_i} \rangle= \langle \zeta, \zeta_{b_i} \rangle=0$ for $i=1,\ldots, k$.
Now, $\langle \zeta,\zeta_{c_1} \rangle= \langle \zeta,\zeta_{\iota} \rangle = 0$ implies that $\langle
\zeta,\zeta_{d_1} \rangle = 0$, and $\langle \zeta,\zeta_{c_i} \rangle =0$ gives us $\langle
\zeta,\zeta_{c_{i+1}} \rangle=0$. Finally $\langle \zeta,\zeta_{d_i} \rangle = \langle \zeta,\zeta_{c_{i+1}}
\rangle=0$ implies $\langle \zeta,\zeta_{d_{i+1}} \rangle=0$ which shows that in fact also $\langle
\zeta,\zeta_{c_i} \rangle= \langle \zeta,\zeta_{d_i} \rangle =0$ for all $i=1,\ldots, l$; and $\langle
\zeta,\zeta_{f} \rangle$ is equal to $0$ by assumption.
\end{proof}
Thus to finish the proof it is enough to show that if $\zeta\in \ker T$ is such that $\langle \zeta,\zeta_{a_1}
\rangle =1$ then $l=2^{k-1}-1$.
Indeed, $\langle \zeta,\zeta_{a_1} \rangle =1$ implies $\langle \zeta,\zeta_{a_2} \rangle=2$ and, inductively,
$\langle \zeta,\zeta_{a_k} \rangle = 2^{k-1}$. This implies that $\langle \zeta,\zeta_{b_k} \rangle =2^{k-1}$,
and, by induction, $\langle \zeta,\zeta_{b_1} \rangle=2^{k-1}$.
Now, $\langle \zeta,\zeta_{a_1} \rangle =1$ and $\langle \zeta,\zeta_{b_1} \rangle = 2^{k-1}$ imply that
$\langle \zeta,\zeta_{\iota} \rangle = 2^{k-1}-1$. On the other hand $\langle \zeta,\zeta_{a_1} \rangle=1$
implies also $\langle \zeta,\zeta_{c_1} \rangle=1$; since $\langle \zeta,\zeta_{c_i} \rangle=1$ clearly
implies $\langle \zeta,\zeta_{c_{i+1}} \rangle=1$ we get $\langle \zeta,\zeta_{c_i} \rangle=1$ for $i=1,\ldots,
l$.
Note that $\langle \zeta,\zeta_{\iota} \rangle =2^{k-1}-1$ and $\langle \zeta,\zeta_{c_1} \rangle =1$ imply
$\langle \zeta,\zeta_{d_1} \rangle = 2^{k-1}-2$; but from the flow lemma we see $\langle
\zeta,\zeta_{d_{i+1}} \rangle = \langle \zeta,\zeta_{d_i} \rangle - \langle \zeta,\zeta_{c_{i+1}} \rangle =
\langle \zeta,\zeta_{d_i} \rangle - 1$ so using induction we get that $\langle \zeta,\zeta_{d_l} \rangle =
2^{k-1} - l-1$.
Note that $T(\zeta_{d_l}) = \zeta_{d_l} - \zeta_f$, and that $T(\zeta_{d_l}^\perp) \perp \zeta_f$,
where $\zeta_{d_l}^\perp$ denotes the orthogonal complement of the subspace spanned by $\zeta_{d_l}$.
This means that $0=\langle T(\zeta), \zeta_f \rangle = \langle \,\,\langle\zeta,\zeta_{d_l}\rangle
T(\zeta_{d_l}), \zeta_f\,\,\rangle = - \langle\zeta, \zeta_{d_l}\rangle$ and thus $2^{k-1}-l-1=0$.
\end{proof}
\section{Introduction}
\subsec{Presentation of the result}
Let $G$ be a countable discrete group. We will say that a non-negative real number $r$ is an
\textit{$l^2$-Betti number arising from $G$} if and only if there exists $\theta\in M_m(\Q G)$, a matrix over the
rational group ring of $G$, such that the von
Neumann dimension of the kernel of $\theta$ is $r$. The motivation for the name is as follows: when $G$ is finitely
presented and $r$ is an $l^2$-Betti number arising from $G$, then there exists a closed manifold $M$ whose
fundamental group is $G$, and such that one of the $l^2$-Betti numbers of the universal cover of $M$ is equal
to $r$. We refer to \cite{Eckmann_intro} and \cite{Lueck:Big_book} for more details.
The following problem is a fine-grained version of a question asked by Atiyah in \cite{Atiyah1976}.
\begin{question}[The Atiyah problem for a group $G$] What is the set of $l^2$-Betti numbers arising from $G$?
\end{question}
Let us call this set the \textit{$l^2$-complexity of $G$}, and denote it by $\cc C(G)$. For a class of groups
$\mathbf G$ define $\cc C(\mathbf G) = \cup_{G\in\mathbf G} \cc C(G)$.
So far $\cc C(G)$ has been computed only in cases where $\cc C(G)$ is a subset of $\Q$. In fact, what has
come to be known as the \textit{Atiyah conjecture for torsion-free groups} says that $\cc C(G)=\N$ for any
torsion-free group, and until the article \cite{Dicks_Schick} of Dicks and Schick it was widely
conjectured that $\cc C(G) \subset \Q$ for every group $G$. However, Dicks and Schick gave an example of an
operator $\theta\in \Q((\Zmod{2}\wr\Z)^2)$ together with a heuristic
argument showing why $\dim_{vN} \ker \theta$ is probably irrational. Their work was motivated
by the article \cite{Grigorchuk_Zuk2001} of Grigorchuk and Żuk.
Only recently Austin has been able to obtain a definite result by proving in \cite{arxiv:austin-2009}
that $\cc C(\text{Finitely generated groups})$ is uncountable. This work has motivated much of the
subsequent effort by showing that the computation of $\dim_{vN}$ can sometimes be done by analyzing certain
dynamical systems and using Pontryagin duality.
Subsequently it has been shown independently by the author in \cite{arxiv:grabowski-2010} and by Pichot,
Schick and Żuk in
\cite{arxiv:pichot_schick_zuk-2010} that in fact $\cc C($Finitely generated groups$)=\R_{\ge 0}$ and that
$\cc C($Finitely presented groups$)
\nsubseteq \Q$. Moreover, in \cite{arxiv:grabowski-2010} it is shown that $\cc C((\Zmod{2}\wr \Z)^3)
\nsubseteq
\Q$.
More recently, Lehner and Wagner showed in \cite{arxiv:lehner_wagner-2010} that $\cc C(\Zmod{p} \wr
F_d)$ contains irrational
algebraic numbers, where $F_d$ is the free group on $d$ generators, and $d\ge 2, p \ge 2d-1$.
In all the articles cited above the following is easy to check: whenever it is proven for a given group
$G$ that $\cc C(G)\nsubseteq \Q$, there exists $p$ such that $\Zmod{p} \wr \Z \subset G$.
In other words, according to the current state of knowledge, $\Zmod{p} \wr \Z \subset G$ could be a
necessary condition for $\cc C(G)\nsubseteq \Q$. We prove that it is a sufficient condition. Indeed, it is very
easy to see that if $A\subset B$ are groups then $\cc C(A)\subset \cc C(B)$ (see for example Corollary 4.2.2
in \cite{arxiv:grabowski-2010}) and here we prove the following theorem.
\begin{result}\label{result_main}
Let $p\ge 2$. Then $\cc C (\Zmod{p}\wr \Z)$ contains transcendental numbers.
\end{result}
We finish this subsection by stating two related open questions. The first one summarizes the current
state of knowledge on irrational $l^2$-Betti numbers.
\begin{question} Is it the case that $\cc C(G) \nsubseteq \Q$ is equivalent to $\Zmod{p}\wr \Z\subset G$
for some $p$?
\end{question}
As mentioned above, $\cc C(G)$ has been computed only in cases where in fact $\cc C(G)\subset \Q$. The
``easiest'' group known so far for which $\cc C(G)\nsubseteq \Q$ is $\Zmod{2}\wr
\Z$, and hence the following question.
\begin{question} What is $\cc C(\Zmod{2}\wr \Z)$?
\end{question}
This question contains many interesting subquestions. For example, does $\cc C(\Zmod{2}\wr \Z)$ contain
irrational algebraic numbers?
\subsec{Outline of the paper}
In order to prove Theorem \ref{result_main} we need to find an operator in $M_m(\Q (\Zmod{p}\wr \Z))$ whose
kernel has transcendental von Neumann dimension. However, Lemma \ref{lemma_group_ext} says that $|H|\cdot \cc C
(G\times H) = \cc C(G)$, for any group $G$ and any finite group $H$, so we may as well find such an operator in
$\Q(\Zmod{p}\wr \Z \times H)$, where $H$ is some finite group.
In Section \ref{sec_back_to_lamplighter}, \textit{Back to the lamplighter groups}, we see how Pontryagin
duality allows us to replace the above question with a question about the existence of an operator in the von
Neumann algebra $L^\infty(X) \rtimes {\Gamma}$ whose kernel has transcendental von Neumann dimension, where $X:=
\Zmod{p}^\Z \times \Zmod{2}^3$, and ${\Gamma}:= \Z \times GL_3(\Zmod{2})$.
The operator $T\in L^\infty(X)\rtimes {\Gamma}$, whose dimension we are able to calculate, is defined
in Section \ref{sec_the_operator}, \textit{Description of the operator}, in terms of another operator $S$. Our
computational tool is the one developed in \cite{arxiv:grabowski-2010}, and we present it in
Section \ref{sec_tool}, \textit{Our computational tool}.
The main idea is as follows: we are given a probability measure space $(X,\mu)$, an action $\rho\colon{\Gamma}
\curvearrowright X$ by measure preserving maps, an operator $S\in L^\infty(X)\rtimes {\Gamma}$, and another operator
$T$ which is defined in terms of $S$. In order to compute $\dim_{vN} \ker T$ we proceed as follows: we
decompose $X$ into a family of sets, each of which is the set of vertices of a certain graph $g$ -- this
decomposition depends on the operator $S$. Next, we ``restrict'' the operator $T$ to an operator $T^g$ defined
on the Hilbert space $l^2g$ spanned by the vertices of $g$ (i.e. points of $X$). Computing $\dim \ker T^g$ turns
out to be relatively easy, and it turns out that to obtain $\dim_{vN} \ker T$ one needs to ``integrate'' the
function $\dim\ker T^g$ over all the graphs $g$ which appear as ``subgraphs'' of $X$.
The graphs which appear in the decomposition of $X$ induced by our $S$ are described in Section
\ref{sec_graph_prelim}, \textit{Preliminaries on certain graphs}. In Section \ref{sec_application},
\textit{Application of the computational tool}, we prove that the graphs described in Section
\ref{sec_graph_prelim} are indeed all the graphs we need to consider. After this we are
ready to apply the computational tool: Corollary \ref{cory_dimT} gives the value of $\dim_{vN} \ker T$;
its transcendence follows from the work
\cite{Tanaka:Transcendence_of_the_values_of_certain_series_with_Hadamard's_gaps} of Tanaka.
\subsec{Basic notation}\label{subsec_basic_notation}
The symbols $\N$, $\Z$, $\Q$, $\R$ and $\C$ denote respectively the set $\{0,1,\ldots\}$ of natural numbers, the set of
integers, the set of rational numbers, the set of real numbers and the set of complex numbers. We choose one
of the two generators of $\Z$ once and for all and denote it by $t$.
The cyclic group of order $p$ is denoted by $\Zmod{p}$.
For two groups $A$ and $B$, $A^B$ denotes the set of functions $B\to A$. Usually $B$ will be equal to $\Z$
in which case $A$-valued functions will be identified with $A$-valued sequences. $A^{\oplus B}$
denotes the set of finitely supported functions $B\to A$.
The wreath product of a group $A$ with $\Z$ is defined as $A \wr \Z := A^{\oplus \Z}\rtimes_\rho \Z$, where
$[\rho(t)((a_i))]_j :=a_{j+1}$.
Given a group $G$, the Hilbert space spanned by the elements of $G$ is denoted by $l^2G$; elements of
the canonical basis of $l^2 G$ are denoted by $\zeta_g$, $g\in G$. Given a subfield $K$ of the complex numbers we
often consider the group ring $KG$ of linear combinations of elements of $G$ with coefficients in $K$.
$KG$ acts on $l^2 G$ by the linear extension of the rule $g\cdot \zeta_{h} := \zeta_{gh}$, $g,h\in G$.
Given a ring $R$ and a positive integer $m$, $M_m(R)$ denotes the ring of $m\times m$ matrices over $R$.
The elements of the matrix ring $M_m(\Q G)= \Q G \otimes M_m(\Q)$ act on the Hilbert space $(l^2 G)^m
=l^2G\otimes \C^m$.
Given $\theta\in \Q G$, we can investigate the kernel $\ker \theta\subset l^2 G$ of $\theta$.
The von Neumann dimension $\dim_{vN} \ker \theta$ of kernel of $\theta$ is defined as
$$
\dim_{vN} \ker \theta:= \tr_{vN}(P_\theta),
$$
where $P_\theta\colon l^2 G \to l^2 G$ is the orthogonal projection onto $\ker \theta$, and the von
Neumann trace $\tr_{vN}$ of a given operator $T$ is defined as $\tr_{vN}(T) := \langle T \zeta_{e},
\zeta_{e} \rangle$, with $e$ being the neutral element of $G$. We proceed similarly when
$\theta\in M_m(\Q G)$, by defining the von Neumann trace on $B(l^2G) \otimes M_m(\C)$ as $\tr_{vN} \otimes
\tr$, where $\tr$ is the standard matrix trace. For details and motivations see \cite{Eckmann_intro} or
\cite{Lueck:Big_book}.
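For a finite group $G$ the von Neumann trace is just the matrix trace normalized by $|G|$, so $\dim_{vN} \ker \theta$ is the ordinary nullity of $\theta$ divided by $|G|$. The following is a minimal numerical sketch of this special case (Python, purely illustrative and not part of the argument; the choice $G = \Zmod{n}$, the test element $\theta = 1 - t$ and the helper names are ours):

```python
import numpy as np

def group_ring_matrix(coeffs, n):
    """Matrix of theta = sum_j coeffs[j] * t^j acting on l^2(Z/n),
    via the rule g . zeta_h = zeta_{gh}."""
    M = np.zeros((n, n))
    for j, c in coeffs.items():
        for h in range(n):
            M[(j + h) % n, h] += c  # t^j sends the basis vector zeta_h to zeta_{j+h}
    return M

def dim_vN_ker(M, n):
    """For a finite group, dim_vN ker = ordinary nullity divided by |G|."""
    return (M.shape[0] - np.linalg.matrix_rank(M)) / n

n = 12
theta = group_ring_matrix({0: 1, 1: -1}, n)  # theta = 1 - t in Q[Z/n]
print(dim_vN_ker(theta, n))  # kernel = constant vectors, so 1/12 ~ 0.0833...
```

Here $\theta = 1 - t$ acts as a circulant matrix whose kernel consists of the constant vectors, so the computed von Neumann dimension is $1/n$.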
\subsec{Thanks and acknowledgements}
I thank Manuel Koehler for committing his time to discussions which helped to clarify the arguments presented
here.
I also thank Światosław Gal, Jarek Kędra, Thomas Schick and Andreas Thom, whose many valuable comments
greatly improved the clarity and readability of this paper.
\section{Description of the operator}\label{sec_the_operator}
Let us fix $p\in\{2,3,\ldots\}$ and let $(X,\mu)$ be the compact abelian group $\Zmod{p}^\Z \times
\Zmod{2}^3$ with the normalized Haar measure,
let
${\Gamma}$ be the group $\Z \times GL_3(\Zmod{2})$, and let $\rho\colon{\Gamma}\curvearrowright X$ be the action of ${\Gamma}$
on
$X$ by the following measure-preserving group automorphisms: the generator $t$ of $\Z$ acts on $\Zmod{p}^\Z$ by
$[\rho(t)( (a_i))]_j = a_{j+1}$, and $GL_3(\Zmod{2})$ acts in the natural way on $\Zmod{2}^3$.
We will now describe an operator $T$ in the von Neumann algebra $L^\infty(X) \rtimes {\Gamma}$. One standard
monograph on the subject of von Neumann algebras is \cite{Sakai}. For our notation see Subsection 2.2 of
\cite{arxiv:grabowski-2010}.
It is convenient to think of elements of $\Zmod{2}^3$ as ``labels''. Thus let $A$, $B$, $C$, $D$, $F$, $I$,
$U_1$, $U_2$ ($U$ stands for ``unimportant'') denote the elements of $\Zmod{2}^3$. The only assumption on the
bijection between the above letters and the elements of $\Zmod{2}^3$ is that the first $6$ symbols correspond
to non-zero elements of $\Zmod{2}^3$.
For every pair $(x,y)$ of elements of $\{A, B, C, D, F, I\}$, let us fix an automorphism $[xy]\in GL_3(\Zmod{2})$ which sends
$x$ to $y$, in such a way that
\begin{equation}\label{eq_loops_inv}
[xy] = [yx]^{-1}
\end{equation}
and
\begin{equation}\label{eq_loops_init}
[AC][CD]=[AI][ID].
\end{equation}
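Such automorphisms $[xy]$ exist because $GL_3(\Zmod{2})$ acts transitively on the seven non-zero vectors of $\Zmod{2}^3$. The following brute-force sketch (Python, illustrative only; the assignment of the labels $A$ and $C$ to particular vectors is an arbitrary choice of ours, not the one used in the paper) confirms transitivity and the order $|GL_3(\Zmod{2})| = 168$:

```python
import itertools
import numpy as np

# All 3x3 matrices over Z/2 that are invertible, i.e. the group GL_3(Z/2).
GL = []
for bits in itertools.product([0, 1], repeat=9):
    M = np.array(bits).reshape(3, 3)
    if int(round(np.linalg.det(M))) % 2 == 1:  # integer det is odd iff M is invertible mod 2
        GL.append(M)

def sending(x, y):
    """Return some automorphism in GL_3(Z/2) mapping the non-zero vector x to y."""
    for M in GL:
        if np.array_equal(M.dot(x) % 2, y):
            return M
    return None

# Hypothetical assignment of two labels to non-zero vectors (illustration only).
A, C = np.array([1, 0, 0]), np.array([0, 1, 0])
print(len(GL), sending(A, C) is not None)  # 168 True
```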
When dealing with subsets of $\Zmod{p}$ and $\Zmod{p}^\Z$, the symbol $0$ will denote the set $\{0\}\subset
\Zmod{p}$ and the symbol $1$ will denote the set $\{1,2,3,\ldots, p-1\}\subset \Zmod{p}$. Let
$$
({\varepsilon}_{-a}{\varepsilon}_{-a+1}\ldots{\varepsilon}_{-1}\underline{{\varepsilon}_{0}}{\varepsilon}_1\ldots{\varepsilon}_b, x),
$$
where ${\varepsilon}_i\in \{0,1\} \subset 2^{\Zmod{p}}$, denote the following subset of $X$:
$$
\{((m_i), y)\in \Zmod{p}^\Z \times \Zmod{2}^3: m_{-a} \in {\varepsilon}_{-a}, \ldots, m_b \in {\varepsilon}_b, y= x\}.
$$
Let
$$
\chi({\varepsilon}_{-a}{\varepsilon}_{-a+1}\ldots{\varepsilon}_{-1}\underline{{\varepsilon}_{0}}{\varepsilon}_1\ldots{\varepsilon}_b, x)
$$
be the characteristic function
of $({\varepsilon}_{-a}{\varepsilon}_{-a+1}\ldots{\varepsilon}_{-1}\underline{{\varepsilon}_{0}}{\varepsilon}_1\ldots{\varepsilon}_b, x)$.
Let us define an operator $S$ as the sum of the following summands:
\begin{eqnarray}
\label{eq_Sdef_1} ( -t[ID] + t^{-1}[IA]) &\cdot& \chi(1\underline{0}1,I) \\
\label{eq_Sdef_2} (-t^2[AC] - 2t^{-1} ) &\cdot&\chi(1\underline{1}01,A) \\
\label{eq_Sdef_3} -t^2[AC] &\cdot& \chi(0\underline{1}01,A) \\
\label{eq_Sdef_4} -2t^{-1} &\cdot& \chi(1\underline{1}00,A) \\
\label{eq_Sdef_5} 0 &\cdot& \chi(0\underline{1}00,A) \\
\label{eq_Sdef_6} -2t^{-1} &\cdot& \chi(1\underline{1}1,A) \\
\label{eq_Sdef_7} -[AB] &\cdot& \chi(0\underline{1}1,A) \\
\label{eq_Sdef_8} -t &\cdot& \chi(\underline{1}1,B) \\
\label{eq_Sdef_9} -[BA] &\cdot& \chi(\underline{1}0,B) \\
\label{eq_Sdef_10} (-t + [CD]) &\cdot& \chi(\underline{1}1,C) \\
\label{eq_Sdef_11} +[CD] &\cdot& \chi(\underline{1}0,C) \\
\label{eq_Sdef_12} -t &\cdot&\chi(\underline{1}1,D) \\
\label{eq_Sdef_13} -[DF] &\cdot& \chi(\underline{1}0,D) \\
\label{eq_Sdef_14} 0 &\cdot& \chi(\underline{1}0,F) \\
0 &\cdot& \chi(U),
\end{eqnarray}
where $U$ denotes ``all the rest'', i.e. the complement of the union of the sets $(1\underline{0}1,I)$,
$(1\underline{1}01,A)$, $(0\underline{1}01,A)$, $(1\underline{1}00,A)$, $(0\underline{1}00,A)$,
$(1\underline{1}1,A)$, $(0\underline{1}1,A)$, $(\underline{1}1,B)$, $(\underline{1}0,B)$, $(\underline{1}1,C)$,
$(\underline{1}0,C)$, $(\underline{1}1,D)$, $(\underline{1}0,D)$ and $(\underline{1}0,F)$; and $\chi(U)$ is
the characteristic function of $U$.
The operator $T$ in which we are interested is defined as
\begin{equation}\label{eq_Tdef}
T:=S + (1-\chi(U) -\chi(1\underline{0}1,I) - \chi(\underline{1}0,F)).
\end{equation}
\section{Application of the computational tool}\label{sec_application}
We will now compute $\dim_{vN}\ker T$, where $T$ is the operator from Section \ref{sec_the_operator}. First we
determine the (countable) measure space $S\text{-Graphs}_\text{fin}$ ($S$ is also the operator from Section
\ref{sec_the_operator}).
\subsec{The trivial $S$-graph $\mathbf{u}$}\label{subsec_sgraph_u}
The $S$-graph $\mathbf u$ is shown in Figure \ref{fig_sgraph_u}. It consists of a single vertex with label $U$
and no edges.
\begin{figure}[h]%
\resizebox{0.029\textwidth}{!}{\input{sgraph-u.pdf_t}}
\caption{The $S$-graph $\mathbf{u}$}
\label{fig_sgraph_u}
\end{figure}
\begin{lemma}\label{lemma_sgraph_u}
\mbox{}
\begin{enumerate}
\item $\dim \ker T^{\mathbf u} = 1$
\item The $S$-graph $\mathbf u$ does not possess non-trivial automorphisms.
\item The $S$-graph $\mathbf u$ is simply-connected.
\item $\mu (\mathbf u) =\frac{1}{8}(2+5\frac1p+ \frac{1}{p^3} + 2\frac{p-1}{p}\frac{1}{p^2} + \frac{p-1}{p} + (\frac{p-1}{p})^2) $
\end{enumerate}
\end{lemma}
\begin{proof}
Note that properties (1)-(3) concern $S$-graphs (as opposed to embedded $S$-graphs).
(1) is clear since $T^{\mathbf u}$ is the $0$-endomorphism of a one-dimensional space.
(2) and (3) are also clear.
As to (4), note that for every point $x$ of $U$ we get an embedded $S$-graph $(\mathbf u, \phi)$ by sending
the unique vertex of $\mathbf u$ to $x$. We will now check that $(\mathbf u, \phi)$ is in fact a maximal
embedded $S$-graph.
Note that $U = (\underline{0}, A) \cup (\underline{0}, B)\cup (\underline{0},C) \cup (\underline{0}, D) \cup
(\cdot, U_1) \cup (\cdot, U_2) \cup (\underline{0}, F) \cup (\underline{1}1, F) \cup (\underline{1}, I) \cup
(1\underline{0}0, I) \cup (0\underline{0}1, I) \cup (0\underline{0}0,I).$
Suppose for example that $x\in (\underline{0},A)$, and consider for example the summand \eqref{eq_Sdef_1} of
$S$, i.e. $( -t[ID] + t^{-1}[IA]) \cdot \chi(1\underline{0}1,I)$. According to Definition \ref{def_S-graph}
we need to check that $x\notin \rho(t[ID])((1\underline{0}1,I)) \cup
\rho(t^{-1}[IA])((1\underline{0}1,I))$. This is clear since $\rho(t[ID])((1\underline{0}1,I))\subset (\cdot,
D)$ and $\rho(t^{-1}[IA])((1\underline{0}1,I)) = (\underline{1}01, A)$.
All the remaining cases \eqref{eq_Sdef_2}-\eqref{eq_Sdef_14} are checked in an analogous straightforward
fashion, and similarly when $x$ is an element of another component of $U$.
This shows that $\mu (\mathbf u) \ge \mu(U)$, which is easily computed to be $\frac{1}{8}(2+5\frac1p+ \frac{1}{p^3} +
2\frac{p-1}{p}\frac{1}{p^2} + \frac{p-1}{p} + (\frac{p-1}{p})^2)$. The opposite
inequality is clear since the unique vertex of $\mathbf u$ has to be sent to $U$.
\end{proof}
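The value of $\mu(U)$ obtained in the proof can be cross-checked by exact arithmetic: each cylinder set contributes $\frac18$ times a product of factors $\frac1p$ (for a symbol $0$) and $\frac{p-1}{p}$ (for a symbol $1$). A sketch in Python (the function names are ours, introduced only for this check):

```python
from fractions import Fraction

def mu_U(p):
    """mu(U) as the sum of the measures of the twelve sets in the decomposition of U."""
    a, b = Fraction(1, p), Fraction(p - 1, p)  # mu(0) = 1/p, mu(1) = (p-1)/p
    pieces = (5 * [a]                          # (0,A), (0,B), (0,C), (0,D), (0,F)
              + 2 * [Fraction(1)]              # (., U_1), (., U_2)
              + [b * b, b]                     # (11, F), (1, I)
              + [b * a * a, a * a * b, a**3])  # (100,I), (001,I), (000,I)
    return sum(pieces) / 8                     # 1/8 is the measure of a single label

def mu_u_closed(p):
    """Closed form from the lemma: (2 + 5/p + 1/p^3 + 2(p-1)/p^3 + (p-1)/p + ((p-1)/p)^2)/8."""
    a, b = Fraction(1, p), Fraction(p - 1, p)
    return (2 + 5 * a + a**3 + 2 * b * a**2 + b + b**2) / 8

print(all(mu_U(p) == mu_u_closed(p) for p in (2, 3, 5, 7)))  # True
```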
\subsec{The $S$-graph $\mathbf{g}(k)$}\label{subsec_sgraph_g}
The $S$-graph $\mathbf g(k)$, $k\in\{1,2,\ldots\}$, is shown in Figure \ref{fig_sgraph_g}.
\begin{figure}[h]%
\resizebox{0.9\textwidth}{!}{\input{sgraph-g.pdf_t}}
\caption{The $S$-graph $\mathbf{g}(k)$}
\label{fig_sgraph_g}
\end{figure}
It is straightforward to see that there is a unique bijection $V(\mathbf{g}(k)) \to
V(g(k))$ which induces an isomorphism of directed graphs, and which sends vertices with labels of the form
$(...,x)$ to vertices with the label $x$, for every $x\in \{A,B,C,D, I,F\}$. Note that this bijection induces an
isomorphism $l^2 \mathbf g \to l^2 g$ which intertwines $T^{\mathbf g}$ with $T^g$.
\begin{lemma}\label{lemma_sgraph_g}\mbox{}\begin{enumerate}
\item $\dim \ker T^{\mathbf{g}(k)} = 0$
\item The $S$-graphs $\mathbf{g}(k)$ do not possess non-trivial automorphisms.
\item The $S$-graphs $\mathbf{g}(k)$ are simply-connected.
\item $\mu (\mathbf g(k)) \ge 2k\cdot \frac18\cdot (\frac{1}{p})^3\cdot (\frac{p-1}{p})^k$
\end{enumerate}
\end{lemma}
\begin{proof}
(1) follows from the existence of an isomorphism $l^2 \mathbf g \to l^2 g$ intertwining $T^{\mathbf
g}$ with $T^g$.
(2) and (3) are straightforward to check using the fact that $[AB][BA]=\id$ (which follows from equation
\eqref{eq_loops_inv}).
As to (4), let $x$ be a fixed element of the set $(0 1^{k-1} \underline{1}00, A)$, where $1^m$ denotes $m$
symbols $1$. Let us denote $x$ by $(- 01^{k-1} \underline{1}00-, A)$. Similarly, for
example $t^{-1}(x)$ will be denoted by $(-01^{k-2} \underline{1}100-, A)$.
In Figure \ref{fig_sgraph_g_emb} we show an embedded $S$-graph $(\mathbf g(k),\phi)$. The label of a given vertex
is the value of $\phi$ at that vertex. In particular, different vertices are mapped to different points of
$X$.
\begin{figure}[h]%
\resizebox{0.7\textwidth}{!}{\input{sgraph-g-emb.pdf_t}}
\caption{The embedded $S$-graph $(\mathbf{g}(k),\phi)$}
\label{fig_sgraph_g_emb}
\end{figure}
As in Lemma \ref{lemma_sgraph_u}, it is straightforward, although tedious, to check from the definition of $S$
that Figure \ref{fig_sgraph_g_emb} contains in fact a maximal embedded $S$-graph. It follows that $\mu (\mathbf
g(k))$ is at least equal to $|V(\mathbf g(k))| \cdot \mu((0 1^{k-1} \underline{1}00, A)) = 2k\cdot \frac18\cdot
(\frac{1}{p})^3(\frac{p-1}{p})^k$.
\end{proof}
\subsec{The $S$-graph $\mathbf h(l)$}\label{subsec_sgraph_h}
The $S$-graph $\mathbf h(l)$ is shown in Figure \ref{fig_sgraph_h}.
\begin{figure}[h]%
\resizebox{0.8\textwidth}{!}{\input{sgraph-h.pdf_t}}
\caption{The $S$-graph $\mathbf{h}(l)$}
\label{fig_sgraph_h}
\end{figure}
As in Subsection \ref{subsec_sgraph_g}, note the existence of a bijection $V(\mathbf{h}(l)) \to V(h(l))$
which induces an isomorphism $l^2 \mathbf{h}(l) \to l^2 h(l)$ intertwining $T^{\mathbf h(l)}$ and $T^{h(l)}$.
\begin{lemma}
\mbox{}
\begin{enumerate}
\item $\dim \ker T^{\mathbf{h}(l)} = 1$
\item The $S$-graphs $\mathbf{h}(l)$ do not possess non-trivial automorphisms.
\item The $S$-graphs $\mathbf{h}(l)$ are simply-connected.
\item $\mu (\mathbf h(l)) \ge (2l+1)\cdot \frac18\cdot (\frac{1}{p})^3 \cdot (\frac{p-1}{p})^l$
\end{enumerate}
\end{lemma}
\begin{proof}
(1), (2) and (3) are proved as in Lemma \ref{lemma_sgraph_g}.
To prove (4) we proceed also as in Lemma \ref{lemma_sgraph_g}, and we use analogous
notation. Thus let $x = (-00\underline{1}1^{l-1}0-,C)$ be a fixed element of the set
$(00\underline{1}1^{l-1}0,C)$. In Figure \ref{fig_sgraph_h_emb} we show an embedded $S$-graph $(\mathbf h(l),\phi)$.
\begin{figure}[h]%
\resizebox{0.61\textwidth}{!}{\input{sgraph-h-emb.pdf_t}}
\caption{The embedded $S$-graph $(\mathbf{h}(l),\phi)$}
\label{fig_sgraph_h_emb}
\end{figure}
It is again straightforward but tedious to check that Figure \ref{fig_sgraph_h_emb} contains in fact a maximal
embedded $S$-graph. It follows that $\mu (\mathbf h(l))$ is
at least equal to $|V(\mathbf h(l))| \cdot \mu((00 \underline{1} 1^{l-1}0, C)) = (2l+1)\cdot \frac18\cdot
(\frac{1}{p})^3 \cdot (\frac{p-1}{p})^l$.
\end{proof}
\subsec{The $S$-graph $\mathbf j(k,l)$}\label{subsec_sgraph_j}
The $S$-graph $\mathbf j(k,l)$ is shown in Figure \ref{fig_sgraph_j}.
\begin{figure}[h]%
\resizebox{0.95\textwidth}{!}{\input{sgraph-j.pdf_t}}
\caption{The $S$-graph $\mathbf{j}(k,l)$}
\label{fig_sgraph_j}
\end{figure}
As in Subsection \ref{subsec_sgraph_g}, note the existence of a bijection $V(\mathbf{j}(k,l)) \to V(j(k,l))$
which induces an isomorphism $l^2 \mathbf j(k,l) \to l^2 j(k,l)$ intertwining $T^{\mathbf j(k,l)}$ and
$T^{j(k,l)}$.
\begin{lemma}\label{lemma_sgraph_j}
\mbox{}
\begin{enumerate}
\item $\dim \ker T^{\mathbf{j}(k,l)} = \left\{
\begin{array}{l l}
2 & \quad \mbox{if $l=2^{k-1}-1$}\\
1 & \quad \mbox{otherwise}\\
\end{array} \right. $
\item The $S$-graphs $\mathbf{j}(k,l)$ do not possess non-trivial automorphisms.
\item The $S$-graphs $\mathbf{j}(k,l)$ are simply-connected.
\item $\mu (\mathbf j(k,l)) \ge (2k+2l+2) \cdot \frac18\cdot (\frac{1}{p})^3\cdot (\frac{p-1}{p})^{k+l}$
\end{enumerate}
\end{lemma}
\begin{proof}
(1), (2) are proved as in Lemma \ref{lemma_sgraph_g}. (3) follows from the fact that $[AB][BA]=[CD][DC]
=1$ (eq. \eqref{eq_loops_inv}) and $[AC][CD]=[AI][ID]$ (eq. \eqref{eq_loops_init}).
To prove (4) we proceed also as in Lemma \ref{lemma_sgraph_g}, and we use analogous
notation. Thus let $x = (-01^k\underline{0}1^{l}0-,I)$ be a fixed element of the set
$(01^k\underline{0}1^{l}0,I)$. In Figure \ref{fig_sgraph_j_emb} we show an embedded $S$-graph $(\mathbf j(k,l),\phi)$.
\begin{figure}[h]%
\resizebox{1\textwidth}{!}{\input{sgraph-j-emb.pdf_t}}
\caption{The embedded $S$-graph $(\mathbf{j}(k,l),\phi)$}
\label{fig_sgraph_j_emb}
\end{figure}
It is again straightforward and quite tedious to check from the definition of $S$ that Figure
\ref{fig_sgraph_j_emb} contains in fact a maximal embedded $S$-graph. It follows that $\mu (\mathbf j(k,l))$ is
at least equal to $|V(\mathbf j(k,l))| \cdot \mu((-01^k\underline{0}1^{l}0-,I)) = (2k+2l+2) \cdot \frac18\cdot
(\frac{1}{p})^3\cdot (\frac{p-1}{p})^{k+l}$.
\end{proof}
\subsec{The measure space $S\textnormal{-Graphs}_\textnormal{fin}$}
In this subsection let ${\alpha}=\frac1p$, ${\beta}=\frac{p-1}{p}$.
\begin{cory}\label{cory_is_probabilistic}
The measure space $(S\text{-Graphs}_\text{fin}, \mu)$ is a probability measure space. Its only points with
non-trivial
measure are $\mathbf u$, $\mathbf g(k)$, $k\ge 1$, $\mathbf h(l)$, $l\ge 1$, and $\mathbf j(k,l)$, $k,l\ge
1$.
Their measures are as follows:
\begin{eqnarray*}
\mu(\mathbf u) &=& \frac{1}{8}(2+5\frac1p+ \frac{1}{p^3} + 2\frac{p-1}{p}\frac{1}{p^2} + \frac{p-1}{p} + (\frac{p-1}{p})^2), \\
\mu(\mathbf g(k)) &=& 2k\cdot \frac18\cdot (\frac{1}{p})^3\cdot (\frac{p-1}{p})^k,\\
\mu (\mathbf h(l)) &=& (2l+1)\cdot \frac18\cdot (\frac{1}{p})^3 \cdot (\frac{p-1}{p})^l, \\
\mu (\mathbf j(k,l)) &=& (2k+2l+2) \cdot \frac18\cdot (\frac{1}{p})^3\cdot (\frac{p-1}{p})^{k+l}.
\end{eqnarray*}
\end{cory}
\begin{proof}
We know from Section 5.4 of \cite{arxiv:grabowski-2010} (see in particular the proof of Theorem 5.4.12) that
the measure space $S\text{-Graphs}_\text{fin}$ is always a subspace of a probability measure space. On the
other hand we know already that
\begin{eqnarray*}
\mu(\mathbf u) &\ge& \frac{1}{8}(2+5\frac1p+ \frac{1}{p^3} + 2\frac{p-1}{p}\frac{1}{p^2} + \frac{p-1}{p} + (\frac{p-1}{p})^2), \\
\mu(\mathbf g(k)) &\ge& 2k\cdot \frac18\cdot (\frac{1}{p})^3\cdot (\frac{p-1}{p})^k,\\
\mu (\mathbf h(l)) &\ge& (2l+1)\cdot \frac18\cdot (\frac{1}{p})^3 \cdot (\frac{p-1}{p})^l, \\
\mu (\mathbf j(k,l)) &\ge& (2k+2l+2) \cdot \frac18\cdot (\frac{1}{p})^3\cdot (\frac{p-1}{p})^{k+l},
\end{eqnarray*}
so to prove the corollary it is enough to check that
\begin{eqnarray*}
\frac{1}{8}(2+5\frac1p+ \frac{1}{p^3} + 2\frac{p-1}{p}\frac{1}{p^2} + \frac{p-1}{p} + (\frac{p-1}{p})^2) &+& \\
\sum_{k=1}^\infty 2k\cdot \frac18\cdot (\frac{1}{p})^3\cdot (\frac{p-1}{p})^k \,&+&\,\\
\sum_{l=1}^\infty (2l+1)\cdot \frac18\cdot (\frac{1}{p})^3 \cdot (\frac{p-1}{p})^l \,&+&\, \\
\sum_{k,l=1}^\infty (2k+2l+2) \cdot \frac18\cdot (\frac{1}{p})^3\cdot (\frac{p-1}{p})^{k+l} \,&=&\,
1.
\end{eqnarray*}
Recall the formula
$$
\sum_{n=1}^\infty (n+C) x^n = \frac{x}{(1-x)^2} + \frac{Cx}{1-x}
$$
for $0\le x < 1$. Using this formula we see that
$$
\sum_{k\ge 1} 2k\cdot \frac18\cdot {\alpha}^3\cdot
{\beta}^k= \frac{{\alpha}^3}{4} \sum_{k\ge 1} k {\beta}^k = \frac{{\alpha}^3}{4}\cdot \frac{{\beta}}{{\alpha}^2}=\frac{{\alpha} {\beta}}{4}.
$$
Similarly
$$\sum_{l\ge 1} (2l+1)\cdot \frac18\cdot {\alpha}^3 {\beta}^l =
\frac{{\alpha}^3}{4}\sum_{l\ge 1} l {\beta}^l + \frac{{\alpha}^3}{8}\sum_{l\ge 1} {\beta}^l =
\frac{{\alpha}{\beta}}{4} + \frac{{\alpha}^3}{8} \frac{{\beta}}{1-{\beta}} =
\frac{{\alpha}{\beta}}{4} + \frac{{\alpha}^2{\beta}}{8}.
$$
Finally
\begin{eqnarray*}
\sum_{k,l\ge 1} (2k+2l+2)\cdot \frac18\cdot {\alpha}^3 {\beta}^{k+l} &=& \frac{{\alpha}^3}{4} \sum_k {\beta}^k\sum_l
(l+ (k+1)){\beta}^l \\
&=& \frac{{\alpha}^3}{4} \sum_k {\beta}^k(\frac{(k+1){\beta}}{{\alpha}}+ \frac{{\beta}}{{\alpha}^2}) \\
&=& \frac{{\alpha}^2{\beta}}{4} \sum_k (k+1){\beta}^k + \frac{{\alpha}{\beta}}{4} \sum_k {\beta}^k \\
&=& \frac{{\alpha}^2{\beta}}{4}(\frac{{\beta}}{{\alpha}^2} + \frac{{\beta}}{{\alpha}}) + \frac{{\alpha}{\beta}}{4} \frac{{\beta}}{{\alpha}} \\
&=& \frac{{\beta}^2}{2} +\frac{{\alpha}{\beta}^2}{4}.
\end{eqnarray*}
Putting everything together we get
\begin{multline*}
\frac{1}{8}(2+5{\alpha}+ {\alpha}^3 + 2{\beta}{\alpha}^2 + {\beta} + {\beta}^2) \,+\,
\frac{{\alpha} {\beta}}{4} \,+\,
(\frac{{\alpha}{\beta}}{4} + \frac{{\alpha}^2{\beta}}{8}) \,+\,
(\frac{{\beta}^2}{2} +\frac{{\alpha}{\beta}^2}{4}) \,=\, \\
\,=\,
\left(\frac14 +\frac58{\alpha} + \frac18{\alpha}^3 + \frac14{\alpha}^2 - \frac14{\alpha}^3 + \frac18 -\frac18{\alpha}
+\frac18-\frac14{\alpha}+\frac18{\alpha}^2\right) \,+\, \\
\,+\, \left( \frac14{\alpha} - \frac14{\alpha}^2\right) + \left(\frac14{\alpha} - \frac14{\alpha}^2 +\frac18{\alpha}^2
-\frac18{\alpha}^3\right) + \left(\frac12 -{\alpha}+\frac12{\alpha}^2 + \frac14{\alpha}-\frac12{\alpha}^2+\frac14{\alpha}^3\right) \,=\,
1,
\end{multline*}
as required.
\end{proof}
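The identity proved above can also be checked numerically by truncating the geometric series; a short floating-point sketch (Python, illustrative only, with helper names of our own choosing):

```python
def total_measure(p, K=400):
    """Sum of mu over u, g(k), h(l) and j(k,l), truncating each series at K."""
    a, b = 1.0 / p, (p - 1.0) / p
    total = (2 + 5 * a + a**3 + 2 * b * a**2 + b + b**2) / 8              # mu(u)
    total += sum(2 * k * a**3 * b**k / 8 for k in range(1, K))            # mu(g(k))
    total += sum((2 * l + 1) * a**3 * b**l / 8 for l in range(1, K))      # mu(h(l))
    total += sum((2 * k + 2 * l + 2) * a**3 * b**(k + l) / 8              # mu(j(k,l))
                 for k in range(1, K) for l in range(1, K))
    return total

print([round(total_measure(p), 10) for p in (2, 3, 5)])  # [1.0, 1.0, 1.0]
```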
\begin{cory}\label{cory_dimT} Let $T$ be the operator defined in Section \ref{sec_the_operator}. Then
$$
\dim_{vN}\ker T =
\frac{4p^3+3p^2+2p-1}{8p^3} + \frac1{8p^2(p-1)} \sum_{k=1}^\infty (\frac{p-1}{p})^{k+2^{k-1}},
$$ which is a transcendental number.
\end{cory}
\begin{proof}
As Lemmas \ref{lemma_sgraph_u}-\ref{lemma_sgraph_j} and Corollary
\ref{cory_is_probabilistic} show, we can use Theorem \ref{thm_computational_tool}:
$$
\dim_{vN} \ker T = \sum_{\mathbf g\in S\text{-Graphs}_\text{fin}} \frac{\mu(\mathbf g)}{|V(\mathbf g)|}
\dim\ker T^{\mathbf g}.
$$
According to Corollary \ref{cory_is_probabilistic} the above sum can be written as
\begin{eqnarray*}
\frac{1}{8}(2+5{\alpha}+ {\alpha}^3 + 2{\beta}{\alpha}^2 + {\beta} + {\beta}^2) \cdot \dim\ker T^\mathbf{u} &+& \\
\sum_{k=1}^\infty \frac18\cdot {\alpha}^3\cdot {\beta}^k\cdot \dim\ker T^{\mathbf g(k)} &+& \\
\sum_{l=1}^\infty \frac18\cdot {\alpha}^3 {\beta}^l \dim\ker T^{\mathbf h(l)} &+& \\
\sum_{k,l=1}^\infty \frac18\cdot {\alpha}^3 {\beta}^{k+l} \dim\ker T^{\mathbf j(k,l)}.& &
\end{eqnarray*}
Substituting the values of $\dim\ker T^{\mathbf g}$ from Lemmas \ref{lemma_sgraph_u}-\ref{lemma_sgraph_j} we get
\begin{eqnarray*}
\frac{1}{8}(2+5{\alpha}+ {\alpha}^3 + 2{\beta}{\alpha}^2 + {\beta} + {\beta}^2) &+&\\
0 &+&\\
\sum_{l=1}^\infty \frac18\cdot {\alpha}^3{\beta}^l &+&\\
\sum_{k,l=1}^\infty \frac18\cdot {\alpha}^3 {\beta}^{k+l} + \sum_{k=1}^\infty \frac18\cdot {\alpha}^3
{\beta}^{k+2^{k-1}-1}.& &
\end{eqnarray*}
Noting that $\sum_{k,l=1}^\infty {\beta}^{k+l} = \sum_k {\beta}^k\sum_l{\beta}^l = (\frac{{\beta}}{{\alpha}})^2$ we get
$$
\frac{1}{8}(2+5{\alpha}+ {\alpha}^3 + 2{\beta}{\alpha}^2 + {\beta} + {\beta}^2) +
\frac18{\alpha}^2{\beta} + \frac18 {\alpha}{\beta}^2 +
\frac18 \frac{{\alpha}^3}{{\beta}}\sum_{k=1}^\infty {\beta}^{k+2^{k-1}},
$$
which is easily seen to be what we want.
Clearly, to prove transcendence of $\dim_{vN} \ker T$ it is enough to prove that $\sum_{k=1}^\infty
(\frac{p-1}{p})^{k+2^{k-1}}$ is
transcendental. This follows directly from Tanaka's Theorem 1 in
\cite{Tanaka:Transcendence_of_the_values_of_certain_series_with_Hadamard's_gaps}. Although similar series have
been investigated already by Mahler in
\cite{Mahler:Arithmetische_Eigenschaften_der_Losungen_einer_Klasse_von_Funktionalgleichungen}, to the author's
best knowledge \cite{Tanaka:Transcendence_of_the_values_of_certain_series_with_Hadamard's_gaps} is the first
work which implies transcendence of $\sum_{k=1}^\infty (\frac{p-1}{p})^{k+2^{k-1}}$.
\end{proof}
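The bookkeeping leading to the closed form can be double-checked in exact arithmetic: the series-free terms collected in the proof should sum to $\frac{4p^3+3p^2+2p-1}{8p^3}$, and the prefactor $\frac18\frac{{\alpha}^3}{{\beta}}$ should equal $\frac1{8p^2(p-1)}$. A Python sketch (function names are ours):

```python
from fractions import Fraction

def series_free_part(p):
    """Series-free terms from the proof:
    (1/8)(2 + 5a + a^3 + 2a^2 b + b + b^2) + (1/8)a^2 b + (1/8)a b^2."""
    a, b = Fraction(1, p), Fraction(p - 1, p)
    return (2 + 5*a + a**3 + 2*b*a**2 + b + b**2) / 8 + a**2 * b / 8 + a * b**2 / 8

def closed_form(p):
    """Rational part of the final answer: (4p^3 + 3p^2 + 2p - 1)/(8p^3)."""
    return Fraction(4 * p**3 + 3 * p**2 + 2 * p - 1, 8 * p**3)

def series_prefactor(p):
    """(1/8) a^3 / b, which should equal 1/(8 p^2 (p-1))."""
    a, b = Fraction(1, p), Fraction(p - 1, p)
    return a**3 / b / 8

print(all(series_free_part(p) == closed_form(p) and
          series_prefactor(p) == Fraction(1, 8 * p**2 * (p - 1))
          for p in (2, 3, 5, 7)))  # True
```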
\section{Back to the lamplighter groups}\label{sec_back_to_lamplighter}
In the previous section we have seen that the operator $T\in L^\infty(\Zmod{p}^\Z \times \Zmod{2}^3)\rtimes (\Z
\times GL_3(\Zmod{2}))$ defined in Section \ref{sec_the_operator} has kernel with transcendental von
Neumann dimension. Using Pontryagin duality (see for example Subsection 4.2 of \cite{arxiv:grabowski-2010} for
details) we get an operator $\wh{T}\in K \Big[ \left(\Zmod{p}^{\oplus \Z} \rtimes \Z\right)
\times\left(\Zmod{2}^3\rtimes GL_3(\Zmod{2}) \right)\Big]$ with the same von Neumann dimension of the kernel,
where $K$ is the smallest subfield of $\C$ such that all the characteristic functions which appear in the
definition of $S$, i.e. in equations \eqref{eq_Sdef_1}-\eqref{eq_Sdef_14}, and of $T$, i.e. equation
\eqref{eq_Tdef}, are in the image of the Fourier transform
$$
K(\Zmod{p}^{\oplus \Z} \oplus \Zmod{2}^3) \to L^\infty(\Zmod{p}^\Z\times \Zmod{2}^3),
$$
where $K(\Zmod{p}^{\oplus \Z}\oplus \Zmod{2}^3)$ is the group ring over $K$ of the group $\Zmod{p}^{\oplus
\Z} \oplus \Zmod{2}^3$.
We claim that in our case $K=\Q$. Indeed, all the functions in the equations
\eqref{eq_Sdef_1}-\eqref{eq_Sdef_14} and \eqref{eq_Tdef} are products of functions of two types: (1) functions
of the form
$$
\Zmod{p}^\Z \times \Zmod{2}^3 \to \Zmod{2} \stackrel{f}{\to} \R,
$$
where $f$ is the characteristic function of either the set $\{0\}$ or $\{1\}$; and (2) functions of
the form
$$
\Zmod{p}^\Z \times \Zmod{2}^3 \to \Zmod{p} \stackrel{g}{\to} \R,
$$
where $g$ is either the characteristic function of the set $\{0\}$ or of the set $\{1,2,\ldots,p-1\}$. Thus our
claim follows from functoriality of the Pontryagin duality and the fact that both $f$ and $g$ are in the
images of Fourier transforms
$$
\Q\wh{\Zmod{2}} \to L^\infty(\Zmod{2})
$$
and, respectively,
$$
\Q\wh{\Zmod{p}} \to L^\infty(\Zmod{p}).
$$
Indeed, it is straightforward to check that $\pi:=\frac{0+1}{2}\in \Q\wh{\Zmod{2}}$ is mapped to the
characteristic function of $\{0\}\subset \Zmod{2}$, and $1-\pi$ is mapped to the characteristic function of
$\{1\}\subset\Zmod{2}$; similarly, ${\sigma}:= \frac{0+1+\ldots +(p-1)}{p}\in \Q\wh{\Zmod{p}}$ is mapped to the
characteristic function of $\{0\}\subset \Zmod{p}$, and $1-{\sigma}$ is mapped to the characteristic function of
$\{1,2,\ldots, p-1\}\subset \Zmod{p}$.
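The last computation is easy to verify numerically: under one standard normalization of the Fourier transform, an element $\sum_j c_j\, j$ of $\C\wh{\Zmod{p}}$ goes to the function $k\mapsto \sum_j c_j e^{2\pi i jk/p}$ on the dual group. A Python sketch (the conclusion is independent of the chosen normalization; the names are ours):

```python
import numpy as np

def fourier(coeffs):
    """Fourier transform C[Z/p] -> L^infty(dual of Z/p): the element
    sum_j coeffs[j]*j goes to k -> sum_j coeffs[j] * exp(2 pi i j k / p)."""
    p = len(coeffs)
    w = np.exp(2j * np.pi / p)
    return np.array([sum(coeffs[j] * w ** (j * k) for j in range(p)) for k in range(p)])

p = 7
sigma = np.full(p, 1.0 / p)            # sigma = (0 + 1 + ... + (p-1))/p
delta0 = np.zeros(p); delta0[0] = 1.0  # characteristic function of {0}
one_minus_sigma = -sigma               # coefficients of 1 - sigma ...
one_minus_sigma[0] += 1.0              # ... = 1 - 1/p at 0, -1/p elsewhere
print(np.allclose(fourier(sigma), delta0),
      np.allclose(fourier(one_minus_sigma), 1 - delta0))  # True True
```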
It is clear that to finish the proof of Theorem \ref{result_main} it is enough to prove the following lemma.
\begin{lemma}\label{lemma_group_ext}
Let $G$ be a discrete countable group and let $H$ be a finite group. Then $ |H|\cdot \cc C(G \times H) =
\cc C(G)$.
\end{lemma}
\begin{proof}
Note that $ |H|\cdot \cc C(G \times H) \supseteq \cc C(G)$, since there is a projection $\pi$ in $\Q H\subset
\Q (G\times H)$ whose trace is $\frac{1}{|H|}$ and which commutes with $\Q G\subset \Q(G\times H)$. It is
easy to check that $|H|\cdot \dim_{vN}\ker ((1-\pi) + \pi\theta) = \dim_{vN}\ker \theta$ (see for example the
proof of Proposition 4.2.7 in \cite{arxiv:grabowski-2010}).
For the other containment note first that the regular representation of $H$ gives rise to a unital injection of
${}^{*}$-algebras
$\iota\colon \Q H \hookrightarrow M_{|H|} (\Q)$ such that $|H|\tr_H(\theta) = \tr(\iota(\theta))$. This means
that
the unital ${}^*$-homomorphism $\wh{\iota}:=\id \otimes \iota: M_k\left(\Q (G\times H)\right) = M_k(\Q G)
\otimes \Q H \to M_k(\Q G) \otimes M_{|H|}(\Q) = M_{k|H|}(\Q G)$ also has the property $|H|\tr_H(\theta) =
\tr(\wh\iota(\theta))$.
Now the result follows for example from Lemma 4.2.1 in \cite{arxiv:grabowski-2010} by taking $G$ there to be
equal to $G\times H$ here, and $L$ there to be $M_{k|H|}(\Q G)$ with normalized trace.
\end{proof}
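The identity $|H|\cdot \dim_{vN}\ker((1-\pi)+\pi\theta) = \dim_{vN}\ker\theta$ used in the proof can be illustrated in a finite-dimensional model (Python sketch; here $G = \Zmod{n}$ and $H = \Zmod{2}$ replace the groups of the paper, and $\theta = 1 - t$ is an arbitrary test element of our choosing):

```python
import numpy as np

def circulant(coeffs, n):
    """Matrix of theta = sum_j coeffs[j] * t^j acting on l^2(Z/n)."""
    M = np.zeros((n, n))
    for j, c in coeffs.items():
        for h in range(n):
            M[(j + h) % n, h] += c
    return M

def nullity(M):
    return M.shape[0] - np.linalg.matrix_rank(M)

n = 9
Theta = circulant({0: 1, 1: -1}, n)  # theta = 1 - t, with one-dimensional kernel
P = np.full((2, 2), 0.5)             # pi = (e + h)/2 in the regular representation of H = Z/2
Big = np.kron(np.eye(n), np.eye(2) - P) + np.kron(Theta, P)  # (1 - pi) + pi * theta
dim_G = nullity(Theta) / n           # von Neumann dimension over G = Z/n
dim_GH = nullity(Big) / (2 * n)      # von Neumann dimension over G x H
print(dim_G == 2 * dim_GH)  # True
```

In this model the kernel of $(1-\pi)+\pi\theta$ is $\ker\theta \cap \pi\, l^2(G\times H)$, which has the same ordinary dimension as $\ker\theta\subset l^2 G$; the factor $|H|$ compensates for the larger normalization of the trace.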
\section{Our computational tool}\label{sec_tool}
In this section $(X,\mu)$ can be taken to be any probability measure space, and $\rho\colon{\Gamma} \curvearrowright
X$
any probability measure preserving action.
Let us recall some definitions from Section 5 of \cite{arxiv:grabowski-2010}.
Let $S\in L^\infty(X)\rtimes {\Gamma}$ be given as $S:= \sum_{i=1}^n \theta_i \chi_i$, where $\theta_i$'s are
elements of the group ring $\C {\Gamma}$, and $\chi_i$'s are characteristic functions of pairwise disjoint
measurable sets $X_i$. The coefficients of $\theta_i$'s will be denoted by $\theta_i({\gamma})$, i.e. $\theta_i =
\sum_{{\gamma}\in {\Gamma}} \theta_i({\gamma}){\gamma}$.
In what follows we can, without loss of generality, assume that the union of the sets $X_i$ is the whole
of $X$, by adding to $S$ an additional summand $0\cdot \chi_{X-\cup X_i}$.
In a directed graph $g$ whose edges and vertices are labeled, $L(v)$ and $L(e)$ will denote, as in Section
\ref{sec_graph_prelim}, respectively the label of a vertex $v$ and of an edge $e$. The rest of the
notation from Section \ref{sec_graph_prelim} will also be adopted.
\begin{definition}\label{def_S-graph}
An \textit{$S$-graph} is a directed graph $\mathbf g$ whose vertices are labeled by elements of the set
$\{1,\ldots,
n\}$, and whose edges are labeled by elements of ${\Gamma}$, in such a way that the following conditions hold.
\begin{enumerate}
\item For every vertex $v$ the labels of the edges starting at $v$ are pairwise different.
\item For every vertex $v$ and every ${\gamma}\in \supp\, \theta_{L(v)}$ there exists an edge starting at $v$ with
label ${\gamma}$.
\end{enumerate}
An \textit{$X$-embedded $S$-graph} is a pair $(\mathbf g, \phi)$, where $\mathbf g$ is an $S$-graph and $\phi\colon
V(\mathbf g)\to
X$ is an injection such that for every edge $e\in E(\mathbf g)$ we have $\phi(t(e)) = \rho(L(e))(\phi(s(e)))$.
A \textit{maximal $X$-embedded $S$-graph} is an $X$-embedded $S$-graph $(\mathbf g,\phi)$ such that if $x\in
X_i$ and ${\gamma}\in \supp\,\theta_i$ are such that $\rho({\gamma})(x)\in \phi(V(\mathbf g))$ then $x\in \phi(V(\mathbf
g))$.
\end{definition}
\begin{remark}
In the applications it is often convenient to enumerate the vertices of a given $S$-graph by the sets $X_i$
(instead of the numbers $1,\ldots,n$).
\end{remark}
Given a (not necessarily directed) path $p$ in an $S$-graph $\mathbf g$, one can define the label $L(p)$ of $p$
as the product of labels and inverses of labels of consecutive edges in $p$, depending on their orientation (see
Definition 5.3.5 in \cite{arxiv:grabowski-2010} for details).
\begin{definition}
We will say that an $S$-graph $\mathbf g$ is \textit{simply connected} if and only if for every closed path
$p$ in
$\mathbf g$ the label
$L(p)$ of $p$ is the neutral element of ${\Gamma}$.
\end{definition}
There is a natural notion of isomorphism for $S$-graphs (a bijection between the sets of vertices which is
${\Gamma}$-equivariant wherever it can be) and for maximal $X$-embedded $S$-graphs (a bijection as before which
commutes with the embedding maps); see Definition 5.3.8 in \cite{arxiv:grabowski-2010} for details. Let
$S\text{-Graphs}_\text{fin}$ denote the set of isomorphism classes of those $S$-graphs $\mathbf g$ such that
$V(\mathbf g)$
is finite and such that there exists a maximal $X$-embedded $S$-graph $(\mathbf g,\phi)$. We will sometimes
identify maximal $X$-embedded $S$-graphs with a finite number of vertices with their images
in $S\text{-Graphs}_\text{fin}$.
For an element $\mathbf g\in S\text{-Graphs}_\text{fin}$ define $\mu(\mathbf g)$ to be equal to $\mu(\{x\in X:$
there exists a maximal $X$-embedded $S$-graph $(\mathbf g,\phi)$ such that $x\in \phi(V(\mathbf g))\, \}$). This
gives a measure on the countable set $S\text{-Graphs}_\text{fin}$.
For an $S$-graph $\mathbf g$ and every $i=1,\ldots,n$ we define $\chi_i^{\mathbf g}:l^2\mathbf g\to
l^2\mathbf g$ on the canonical basis by
$$
\chi_i^{\mathbf g} (\zeta_v) :=\left\{ \begin{array}{l l}
\zeta_v & \quad \mbox{if $i=L(v)$}\\
0 & \quad \mbox{otherwise.}\\
\end{array} \right.
$$
Similarly for ${\gamma}\in {\Gamma}$, let ${\gamma}^{\mathbf g}$ be given by
$$
{\gamma}^{\mathbf g} (\zeta_{s(e)}) := \left\{ \begin{array}{l l}
\zeta_{t(e)} & \quad \mbox{if $L(e)={\gamma}$}\\
0 & \quad \mbox{otherwise.}\\
\end{array} \right.
$$
Finally define $\theta_i^{\mathbf g} := \sum_{{\gamma}\in {\Gamma}} \theta_i({\gamma})\cdot {\gamma}^{\mathbf g}$ and $S^{\mathbf
g}:= \sum_{i=1}^n \theta_i^{\mathbf g} \chi_i^{\mathbf g}$.
Let $T\in L^\infty(X)\rtimes {\Gamma}$ be a polynomial expression in $S$ and $\chi_i$'s. For a given $S$-graph
$\mathbf g$ define $T^{\mathbf g}\colon l^2\mathbf g \to l^2 \mathbf g$ to be the same polynomial expression in
$S^{\mathbf g}$ and $\chi_i^{\mathbf g}$'s.
\begin{theorem}\label{thm_computational_tool}
Suppose that
\begin{enumerate}
\item the measure $\mu$ on $S\text{-Graphs}_\text{fin}$ is a probability measure,
\item the elements of $S\text{-Graphs}_\text{fin}$ are simply-connected,
\item the elements of $S\text{-Graphs}_\text{fin}$ do not possess non-trivial automorphisms (as $S$-graphs).
\end{enumerate}
Then
$$
\dim_{vN} \ker T = \sum_{\mathbf g\in S\text{-Graphs}_\text{fin}} \frac{\mu(\mathbf g)}{|V(\mathbf g)|}
\dim\ker T^{\mathbf g}.
$$
\end{theorem}
This is a direct consequence of Theorem 5.4.12 in \cite{arxiv:grabowski-2010}.
\subsection{Graph} Rysunek grafu z etykietami krawedzi (skalary) i wierzcholkow (A,B,C,D, I, F), opis
( -t[ID] + t^{-1}[IA] ) \chi(I,1(0)1)
(-t^2(AC) - 2t{-1} +1) \chi(A,1(1)01)
(zjedz w dol i na skos na druga strone (A,0(1)01)
(identycznosc) (A,(1)00)
(-2t^{-1} + 1) \chi(A, 1(1)1)
(-[AB] + 1) \chi(A,0(1)1)
(-2t + 1) \chi(B,(1)1)
(-[BA] + 1) \chi(B, (1)0)
(-t + [CD] + 1) \chi(C, (1)1)
([CD] +1) \chi(C, (1)0)
(-t +1) \chi(D, (1)1)
(-[DF] + 1) \chi(D, (1)0)
0 \chi(F, (1)0)
0 \chi(all the rest)
Let T = \sum theta_i \chi_i be given, with theta_i = \sum k_j {\gamma}_j.
A T-graph: conditions 1) and 3) below.
A maximal X-embedded T-graph is a directed graph whose vertices are labeled by 1,...,n, edges by {\gamma}_i, together with a map
\iota: V(g) \to X such that
1) L(e) \in supp \theta_{s(e)},
2) equivariance,
3) all labels are used up,
4) if some point hits the image then it is in the image.
There is a natural notion of an isomorphism of T-graphs,
and a natural notion of T_g : l^2(g) \to l^2(g).
Assume: 1) If (g, i) is an X-embedded T-graph then V(g) is finite (a.a.); g has no automorphisms.
Let T-graphs_fin be a set whose points are the different isomorphism classes of those T-graphs g
for which there exists a maximal X-embedded T-graph (g, i).
For a point g \in T-graphs_fin define \mu(g) = \mu(x \in X : there exists a maximal embedded (g', i) such that g is isomorphic to
g' and x \in i(V)).
Thm: dim_vN ker T = \sum_{g \in T-graphs_fin} \mu(g)/|V(g)| dim ker T_g.
Rem: Since g has no automorphisms, \mu(g)/|V(g)| can be computed as follows: (vertex measure).
We compute those graphs g \in T-graphs_fin which have non-zero \mu(g) (the sum of the measures will be 1, which shows that
assumption 1) is satisfied):
1) the one-vertex graph with label ``all the rest''
...[X], ...[Y]
(0)[A,B,C,D,F]
(the measure is easy to compute)
2) Graphs passing through a point ...0111111(0)11110...[I] -> such-and-such measure + which points are in the image
2a) Graphs 01(0)11110 (l>1)
2b) Graphs 0111(0)10 k>1
2c) Graphs 01(0)10
3) Graphs passing through a point ....011111(1)00.... [A or B] (l=0) k>=1
4) Graphs passing through a point 00(1)111110... [C or D] k=0, l>=1
2202.01169
\section{Introduction}
\label{introduction}
It is a commonly held belief that increasing the size of a neural network leads to better performance, especially when training on large and diverse real-world datasets.
This vague and debated notion has become increasingly justified as large empirical studies have shown that the performance of models on many interesting classes of problems is well described by power laws, where a multiplicative increase in model size leads to an additive reduction in the model's loss \citep{kaplan2020scaling, hernandez2021scaling, henighan2020scaling, rosenfeld2019constructive}. These relationships are not well understood, but a key implication is that a sequence of small\footnote{Measured as training or inference floating point operations, devices or time required, financial cost, carbon emissions, etc.} models can be used both to infer the performance of models many times more powerful and to provide global information about the scalability of an architecture.
Enter Routing Networks: models with the unusual property that each input interacts with only a subset of the network's parameters --- chosen independently for each datapoint \citep{bengio2015conditional, bengio2013estimating, denoyer2014deep}. For a Routing Network, the number of parameters is nearly independent of the computational cost of processing a datapoint. This bifurcates the definition of size and prevents a scaling law in parameters alone from fully describing the model class.
Specific Routing Networks have been trained successfully at large scales \citep{fedus2021switch, du2021glam, artetxe2021efficient}, but the general scaling behavior is not well understood. In this work we analyze the behavior of routed language models so that we might infer the scaling laws that describe their performance.
\paragraph{Key contributions.} We analyze three different techniques for training Routing Networks, detailed in \autoref{sec:ref}: \mbox{Sinkhorn-\textsc{base}\xspace}, a sparse mixture-of-experts (\textsc{smoe}\xspace) approach modifying \textsc{base}\xspace \citep{lewis2021base}; non-parametric \textsc{hash}\xspace Layers \citep{roller2021hash}; and routing via Reinforcement Learning (\mbox{\textsc{rl-r}}\xspace). With models up to 200 billion parameters, we observe the following:
\begin{enumerate}[wide,itemsep=0pt,topsep=0pt, labelindent=3pt]
\item Routing improves the performance of language models across all sizes and variants attempted (see \autoref{fig:main}).
\item Training a Routing Network with RL (\autoref{subsec:rl}), a technique used in early routing work \citep{bengio2013estimating}, is of comparable effectiveness to state-of-the-art techniques.
\item The performance of all Routing Networks is accurately described by scaling laws in the number of experts and in the underlying dense model size (\autoref{sec:scaling}) which generalize those from \citet{kaplan2020scaling}.
\item These laws can be restated in terms of parameter count and inference compute, capturing an even wider set of routing architectures under a shared fit (\autoref{sec:generalizations}).
\item They further imply an \textit{Effective Parameter Count}: a mapping equating the performance and scaling for both dense and routed networks (\autoref{sec:applications}).
\end{enumerate}
\section{Background}
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{pdf_figures/fig1_final.pdf}
\vspace{-5mm}
\caption{
\textbf{(a)} The performance achieved by Routing Networks when varying the number of experts for a fixed dense model size is described by a bilinear function (Eq.~\ref{eq:real_joint_scaling_law}), \textbf{(b)} whose level curves indicate how to trade model size with expert count to maintain a fixed performance, \textbf{(c)} and which can be manipulated to align dense and routed model performance under a shared power law.}\label{fig:main}
\end{figure*}
We first review the language modelling problem and existing scaling laws before discussing the process of routing a neural network and how it is applied to language models.
\paragraph{Language modelling.}
We consider the problem of autoregressively predicting natural language, a task with consistent and predictable scaling characteristics across many orders of magnitude \citep{henighan2020scaling, kaplan2020scaling}.
The objective is to maximize the likelihood of a sequence of tokens $P(x_1, \dotsc , x_T)$ factored auto-regressively as $p \left (x_1, \dotsc , x_T \right ) = \prod_{i=1}^T p \left (x_i \mid x_{j < i} \right )$.
Our primary metric of performance is the negative log-likelihood of a validation dataset whose statistics match the training distribution. We focus on this validation loss, but briefly consider zero-shot transfer to other tasks in \autoref{apsec:transfer}.
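As a concrete reading of this factorization (with made-up probabilities, purely for illustration), the per-token loss is the average of $-\log p(x_i \mid x_{j<i})$ over the sequence:

```python
import math

def per_token_nll(cond_probs):
    """Average negative log-likelihood per token, given the conditional
    probabilities p(x_i | x_{j<i}) the model assigns to the true tokens."""
    return -sum(math.log(p) for p in cond_probs) / len(cond_probs)

# Hypothetical conditionals for a 4-token sequence; their product is the
# sequence likelihood p(x_1, ..., x_T).
loss = per_token_nll([0.5, 0.25, 0.8, 0.1])
```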
\paragraph{Scaling laws for large-scale data.} We train on a multi-trillion-token compendium of English language text comprising documents from the internet alongside open-source text datasets, details of which are given in \citet{rae2021scaling}. In this setting \citet{kaplan2020scaling} argue that the converged performance of a model trained on a dataset of infinite size is a power-law in the model's parameter count~$N$. Our dataset is not infinite, but its size -- and the lack of any observed overfitting -- make this a reasonable approximation. We consider the final (and best) evaluation value as the converged value, though this is also an approximation which is discussed further in~ \autoref{apsec:convergence}.
\subsection{Routing Networks}
\label{sec:routing}
Power-law scaling implies the performance of a language model increases with size, but so too does the compute needed to train the model. This undesirable connection between size and computation motivates a search for architectures wherein the two are disentangled. Routing Networks are one such class of model: a type of neural
network that incorporates a specific flavor of conditional
computation. In a Routing Network, each input (e.g., a token of text) is transformed into an output while only interacting with a fixed subset of the network's parameters -- dynamically selected based on the input itself. Many sparsely-activated networks have this property, but here we exclusively study the layout based on Sparse Mixtures of Experts \citep{shazeer2017outrageously} where multiple sub-components of a deep neural network (i.e., several layers) are independently converted to routed equivalents and jointly trained with the rest of the network.
\paragraph{Routing a single layer.} The core idea of a routed layer is that multiple versions of the parameters are kept, and a per-input decision on which version to use is made. To route a layer $f_\theta$ in $E$ ways, we start by creating $E$ separate versions of the parameters $\theta$ ($\{\theta_1, \dotsc, \theta_E\}$) where $f$ using the $i$-th version of the parameters ($f_i\triangleq f_{\theta_i}$) is termed the \textit{$i$-th Expert}. To determine which expert to pick given the input, we introduce an additional \textit{router} function $\rho: \mathbb{R}^M \to \left[1,E\right]$ associated to the layer, typically a small network itself, with parameters $\phi$. The routed form $h$ of $f$ is then given by $h(x) \triangleq f_{\rho(x)}(x)$.
When performance increases with $E$, routing gives a method by which to improve a neural network with minimal computational increase (corresponding only to the compute needed by $\rho(x)$).
We also consider the $K$-way routed generalization, where the router outputs a set of integers as $\rho(\cdot): \mathbb{R}^M \to \left[1,E\right]^K$, and we set the output of the layer to be the sum of the outputs of each expert, namely $h(x) \triangleq \sum_{i \in \rho(x)} f_i(x)$. We default to $K=1$, but revisit this in \autoref{sec:generalizations}.
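The definitions above can be sketched in a few lines. In this toy sketch the experts are plain linear maps and the router is a random, untrained linear scorer; all names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)
M, E, K = 4, 8, 2                       # input width, expert count, experts per input

# E separate versions of the layer parameters theta (here: E linear maps).
thetas = [rng.standard_normal((M, M)) for _ in range(E)]
W_router = rng.standard_normal((E, M))  # a tiny linear router (illustrative only)

def route(x):
    """rho(x) in [1,E]^K: indices of the K largest router scores."""
    return np.argsort(W_router @ x)[-K:]

def h(x):
    """Routed layer: sum of the K selected experts' outputs f_i(x)."""
    return sum(thetas[i] @ x for i in route(x))

y = h(rng.standard_normal(M))           # only K of the E experts are evaluated
```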
\paragraph{Routed Transformers}
We apply routing to a decoder-only Transformer \citep{vaswani2017attention} to measure the scaling properties that result: an architecture chosen due to its state-of-the-art performance. Details of the baseline architecture we use are in \autoref{app:archtecture}. We will refer to non-routed Transformers as \textit{dense} models, in opposition to Routed Transformers which \textit{sparsely} activate some of their parameters. Our conversion to a Routed Transformer is the same as is used in prior work \citep{lepikhin2020gshard, fedus2021switch}. Namely, we apply routing to every other set of feedforward components (FFWs) of the Transformer, sub-components that act on each timestep independently. Though different layers can have
different numbers of experts, here all routed
layers share the same number of experts $E$, and we will refer
to the network as being \textit{routed $E$ ways}.
\paragraph{Model size and inference cost.}
We use $N$ to indicate a network's \textit{dense model size}: the number of parameters any one input interacts with. This is in opposition to $P$: the total number of parameters. For a dense model, $P = N$, whereas for a Routing Network $P$ is roughly proportional to $N\cdot E$, with factors that depend on details of the routing architecture (\autoref{sec:generalizations}).
Except for a small overhead due to running the routers, the cost $F$ (in TeraFLOPs) of executing a Routed Transformer is the same as its dense equivalent.
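A toy accounting of this distinction (the fraction $r$ of dense parameters sitting in routed layers is a made-up illustrative number, not taken from the paper):

```python
def param_counts(N, E, K=1, r=0.5):
    """Toy accounting: a fraction r of the N dense parameters live in
    layers routed E ways, K experts per token. Returns (n, P): the
    parameters one token touches, and the total stored parameters."""
    routed, shared = r * N, (1 - r) * N
    n = shared + K * routed       # dense model size seen by one input
    P = shared + E * routed       # every expert copy is stored
    return n, P

n, P = param_counts(N=1e8, E=64)  # P grows roughly like N * E; n stays N
```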
\paragraph{Training Details.}\label{ssec:training_details}
All models are trained on TPUs with JAX \citep{bradbury2020jax} using a combination of data, expert (see \autoref{apsec:routing}) and sharding parallelism \citep{shoeybi2019megatron}. Models were trained with a sequence length of $2048$ and batch size of $256$ for 250,000 steps, i.e. 130 billion tokens, regardless of $N$. This is an important detail, and we discuss some of the implications in \autoref{apsec:convergence}. All were optimized with AdamW \citep{loshchilov2018decoupled} and ZeRO Stage 1 was used to shard the optimizer state \citep{rajbhandari2020zero}. Appendix \ref{app:archtecture} contains further details.
\section{Routing Techniques}
\label{sec:ref}
If the benefit of Routing Networks is the decoupling of parameter capacity from network cost, the fundamental difficulty is in effectively learning the parameters $\phi$ of the router given the non-differentiability of its output. Much research in Routing Networks has therefore focused on techniques for learning $\phi$. A major finding of this work is that three notably different techniques of training Routing Networks are effectively described by the same scaling laws. We now introduce and contextualize these three methods.
\subsection{Sparse Mixture-of-Experts via Weighting}
Sparse Mixture-of-Experts (\textsc{smoe}\xspace) methods \citep{shazeer2017outrageously} solve the problem of non-differentiability by reusing the probability of expert selection as a scalar multiplier on that expert's output, guaranteeing a gradient passed to the logits of selected experts despite the non-differentiability of sampling from those logits. Formally, the router is given as $\rho(x) = \topk(Wx + b)$, where $Wx + b$ is an unnormalized distribution over $\left[1,E\right]$ from which the experts corresponding to the top $K$ values are selected. In the final output of the routed layer, the normalized logits are reused as \textit{gating weights}, i.e. $h(x) = \sum_{i \in \rho(x)} g_i(x) f_i(x)$ where $g(x) = \softmax(Wx + b)$.
Though this formulation supplies a gradient to $\phi = (W,b)$, it represents changes to the scalar multiplier and does not directly correspond to optimizing expert selection. This method is nevertheless effective, and can be seen as a sparse approximation to dense mixture of experts models \citep{eigen2013learning, jacobs1991adaptive} where the likelihood of skipping an expert is inversely proportional to the value of its scalar gate $g_i$.
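A sketch of this layer (illustrative only: random, untrained parameters, and the expert functions are arbitrary linear maps):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def smoe(x, experts, W, b, K=2):
    """SMoE layer: pick the top-K logits, weight each selected expert's
    output by its softmax gate g_i so the logits receive a gradient."""
    logits = W @ x + b
    selected = np.argsort(logits)[-K:]          # rho(x)
    g = softmax(logits)                         # gating weights
    return sum(g[i] * experts[i](x) for i in selected)

rng = np.random.default_rng(1)
E, M = 4, 3
x = rng.standard_normal(M)
W, b = rng.standard_normal((E, M)), np.zeros(E)
experts = [lambda v, A=rng.standard_normal((M, M)): A @ v for _ in range(E)]
y = smoe(x, experts, W, b)
```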
It was conjectured that \textsc{smoe}s\xspace require $(K \geq 2)$-way routing to produce effective gradients in the routers \citep{shazeer2017outrageously}, and many attempts at incorporating routing into large Transformers use $K = 2$ \citep{lepikhin2020gshard, du2021glam}. However, this has recently been challenged, and stable modifications have been proposed for $K = 1$; namely the Switch Transformer \citep{fedus2021switch}. Most \textsc{smoe}s\xspace, including Switch, rely on auxiliary balancing losses which encourage the router output $\rho(x)$ to be more uniform across minibatches of inputs. To improve on this, \textsc{base}\xspace \citep{lewis2021base} post-processes the router output with a Hungarian Matching algorithm that re-assigns expert selections to ensure that all experts are selected evenly.
Our implementation of \textsc{base}\xspace replaces the Hungarian Matching with a regularized Optimal Transport formulation \citep{cuturi2013sinkhorn} using the Sinkhorn algorithm as an approximate matching step during expert selection. This substantially improves routing efficiency on accelerated hardware (details in \autoref{sec:sinkhorn}). We call the resulting method Sinkhorn-\textsc{base}\xspace (\textsc{s-base}\xspace), and use it as the representative of \textsc{smoe}\xspace methods, as early tests showed the benefit of its balancing mechanism.
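A generic sketch of the Sinkhorn balancing idea (the standard alternating-normalization scheme of \citet{cuturi2013sinkhorn} applied to a batch of router logits; the hyperparameters and names here are ours, not the production implementation):

```python
import numpy as np

def sinkhorn_plan(logits, n_iters=50, eps=1.0):
    """Alternately normalize exp(logits/eps) so each token distributes one
    unit of mass and each expert receives (approximately) T/E units."""
    T, E = logits.shape
    P = np.exp(logits / eps)
    for _ in range(n_iters):
        P = P / P.sum(axis=0, keepdims=True) * (T / E)  # balance experts
        P = P / P.sum(axis=1, keepdims=True)            # one unit per token
    return P

rng = np.random.default_rng(2)
plan = sinkhorn_plan(rng.standard_normal((16, 4)))
choice = plan.argmax(axis=1)   # a near-balanced expert assignment
```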
\subsection{Input-based Deterministic Hash Routing}
\begin{figure*}[t!]
\centering \includegraphics[width=\linewidth]{pdf_figures/fig2_final.pdf}
\caption{Validation losses with fits from Equation \ref{eq:real_joint_scaling_law} plotted as a dotted line for \textsc{s-base}\xspace, \textsc{hash}\xspace and \mbox{\textsc{rl-r}}\xspace respectively. On the right, the prediction curves for all model sizes and all techniques overlapping to show relative performance. Fits to Eq.~\eqref{eq:no_tailoff} are overlaid in grey.}
\label{fig:joint_curves}
\end{figure*}
An alternative approach eschews extra parameters completely and represents $\rho$ as a fixed function of the input. This is the concept pioneered by \textsc{hash}\xspace Layers \citep{roller2021hash} which circumvents the need to simultaneously learn $\phi$ and $\theta$. Our implementation takes the token ID assigned to the input by the SentencePiece tokenizer \citep{kudo2018sentencepiece} and uses the remainder of it divided by $E$ as the expert selection. See~\autoref{sec:hash_details} for details.
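The router then reduces to a single line (the token id below is an arbitrary example value):

```python
def hash_route(token_id: int, E: int) -> int:
    """Parameter-free routing: the expert is the token id modulo E,
    so every occurrence of a token always reaches the same expert."""
    return token_id % E

expert = hash_route(token_id=51397, E=128)   # fixed for the life of the model
```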
\subsection{Routing via Reinforcement Learning}\label{subsec:rl}
Finally, we re-analyze a technique that optimizes the router via Reinforcement Learning
(a class of methods we call \mbox{\textsc{rl-r}}\xspace), which was proposed in early work on neural conditional computation \citep{bengio2013estimating, bengio2015conditional, bengio2017reinforcement, denoyer2014deep}. In this approach each router is seen as a policy whose actions are the selection of an expert in each routed layer and whose observations are the activations passed to that router. After completing the forward pass, the probability the Routed Transformer assigns to the correct output token can be used as a reward, maximization of which is equivalent to minimization of NLL. To jointly train the experts and the router, we minimize a composite loss formed with the language modelling loss and a policy-gradient term \citep{sutton2000policy} using the selected set of experts as actions. We highlight that the optimal expert selection is dependent not only on the input activations but on the parameters of the rest of the network. This disrupts the theoretical underpinning, crucial to RL, that this is a Markov Decision Process. Nevertheless, it has been observed that this theoretical issue does not affect the practicality of the method \citep{rosenbaum2019routing}.
Relative to \textsc{smoe}\xspace, \mbox{\textsc{rl-r}}\xspace benefits from directly optimizing actions to improve the language modelling loss. However, this absence of bias comes with complications, especially the high variance of the gradient \citep{rosenbaum2019routing, denoyer2014deep}. We use \textsc{reinforce} with a learned baseline \citep{williams1992simple, sutton2018reinforcement} to address this issue, so that improving the policy means increasing the likelihood of selecting experts which lead to a better than average next token prediction. As with \textsc{smoe}\xspace, we find it useful to add a balancing term. To our knowledge, we are the first to experiment with routing via Reinforcement Learning on large Transformer-based language models---we therefore explore key ablations in Appendix \ref{apsec:rlr_variants}.
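A minimal sketch of the policy-gradient term for one routing decision (the names are ours, and the learned baseline is stubbed as a scalar):

```python
import numpy as np

def log_softmax(logits):
    z = logits - logits.max()
    return z - np.log(np.exp(z).sum())

def rlr_router_loss(logits, action, reward, baseline):
    """REINFORCE surrogate for one routing decision: minimizing it raises
    the log-probability of the chosen expert when the reward (here, the
    probability of the correct next token) beats the learned baseline."""
    advantage = reward - baseline      # baseline subtraction lowers variance
    return -advantage * log_softmax(logits)[action]

# One routed layer with 4 experts; the router picked expert 2 and the
# model then assigned probability 0.9 to the correct token.
loss = rlr_router_loss(np.zeros(4), action=2, reward=0.9, baseline=0.5)
```

In training, this term is added to the language modelling loss, one contribution per routed layer and timestep.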
\section{Scaling Behavior at Convergence}
\label{sec:scaling}
Our main hypothesis is that the converged log-loss of a Routing Network is bilinear in the terms $\log N$ and $\log \widehat{E}$, where $\hat E$ is a saturating transformation of $E$. Specifically, we fit the 6-parameter scaling law:
\begin{align}
&\!\!\!\log L (N, E) \triangleq a \log N {+} b \log \widehat{E} {+} c \log N \log \widehat{E} {+} d \label{eq:real_joint_scaling_law}\\
&\text{where}\quad \frac{1}{\widehat{E}} \triangleq
\frac{1}{E - 1 + \left(\frac{1}{E_\text{start}} - \frac{1}{E_{\max}}\right)^{-1}} + \frac{1}{E_{\max}}.\notag
\end{align}
We can generalize this law across a wider range of routing architectures by a change of variables, using the model inference cost $F$ and the total number of parameters~$P$, as:
\begin{align}
\log L (F, B) \triangleq a \log F {+} b \log \widehat{B} {+} c \log F \log \widehat{B} {+} d,\label{eq:real_joint_scaling_law_fp}
\end{align}
\begin{table}[t]
\centering
\begin{minipage}{.6\linewidth}
\caption{Leave-One-Out RMSLE Fit in $(N, E)$. The last row is \linebreak computed for each model size independently; this gives a lower \linebreak bound on the error of any joint scaling law.}\label{tab:rmse}
\begin{tabular}{ c c | c c c}
\toprule
$L$ log-log prediction & Eq. & \textsc{s-base}\xspace & \mbox{\textsc{rl-r}}\xspace & \textsc{hash}\xspace \\
\midrule
Separably linear in $N$, $E$ & \eqref{eq:separable} & 80e-4 & 90e-4 & 90e-4 \\
Bilinear in $(N, E)$ & \eqref{eq:no_tailoff} & 60e-4 & 57e-4 & 60e-4 \\
Bilin. + saturat. in $(N, E)$ & \eqref{eq:real_joint_scaling_law} & 58e-4 & 56e-4 & 56e-4 \\
\midrule
Per-$N$ fits in $(E)$ & \eqref{eq:hopeful_power} & \textbf{46e-4} & \textbf{29e-4} & \textbf{19e-4} \\
\bottomrule
\end{tabular}
\end{minipage}%
\begin{minipage}{.4\linewidth}
\centering
\caption{Dense scaling values (see also~\autoref{apsec:convergence}).}\label{tab:joint_alphas}
\begin{tabular}{ c | c c}
& $\alpha_N$ & $N_c$ \\
\midrule
\textbf{Ours} & $0.078$ & $3.568 \times 10^{13}$\\
\textbf{\citet{kaplan2020scaling}} & $0.076$ & $8.8 \times 10^{13}$ \\
\bottomrule
\end{tabular}
\end{minipage}
\end{table}
where $B \triangleq \frac{P}{F}$ and $B \to \hat B$ is the same saturating transform as $E \to \hat E$. Before justifying Equation~\eqref{eq:real_joint_scaling_law}, we validate its candidacy by fitting it to empirical data obtained on a large sweep of models. This sweep consists of a Routing Network trained for each of the three techniques described in \autoref{sec:ref}: across six model sizes (described in \autoref{tab:architectures}) while varying $E$ across $[2, 4, 8, 16, 32, 64, 128, 256, 512]$.
This totals 168 different models, including dense baselines.
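Eq.~\eqref{eq:real_joint_scaling_law} can be fit in stages: with the saturation constants $E_\text{start}$ and $E_{\max}$ held fixed, the law is linear in $(a, b, c, d)$ and ordinary least squares applies. A sketch on synthetic losses (the saturation constants and the ``true'' coefficients below are made-up illustrative values, not the paper's fits):

```python
import numpy as np

def e_hat(E, E_start=2.0, E_max=256.0, E_min=1.0):
    """The saturating transform of Eq. (1)."""
    inv = 1.0 / (E - E_min + 1.0 / (1.0 / E_start - 1.0 / E_max)) + 1.0 / E_max
    return 1.0 / inv

def fit_bilinear(N, E, logL):
    """Least squares for log L = a logN + b logE_hat + c logN logE_hat + d."""
    x, y = np.log10(N), np.log10(e_hat(E))
    A = np.stack([x, y, x * y, np.ones_like(x)], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, logL, rcond=None)
    return coeffs

# Synthetic sweep: 3 dense sizes x 9 expert counts, with losses generated
# from known coefficients, which the fit should recover.
N, E = np.meshgrid(10.0 ** np.arange(7, 10), 2.0 ** np.arange(0, 9))
N, E = N.ravel(), E.ravel()
true = np.array([-0.08, -0.10, 0.006, 1.2])
x, y = np.log10(N), np.log10(e_hat(E))
a, b, c, d = fit_bilinear(N, E, true @ np.stack([x, y, x * y, np.ones_like(x)]))
```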
The observed losses for each model are shown in \autoref{fig:joint_curves}(a-c). We fit Eq.~\eqref{eq:real_joint_scaling_law} to each routing method and plot predictions for fixed values of $N$ as dotted lines. The goodness-of-fit across all methods is apparent, as is the clear behavior that increasing $E$ leads to a reduction in validation loss. \autoref{fig:joint_curves}(d) plots the relative predictions for all three techniques, clearly showing that \textsc{s-base}\xspace performs best across all model sizes, followed by \mbox{\textsc{rl-r}}\xspace, followed by \textsc{hash}\xspace (see~\autoref{sec:comparisons}).
The remainder of this section justifies the chosen functional forms~\eqref{eq:real_joint_scaling_law} and~\eqref{eq:real_joint_scaling_law_fp}; first supposing independent power laws in $N$ and $E$ (\autoref{section:conditional_scaling}), then introducing a multiplicative interaction (\autoref{subsec:quadratic}) and saturation in the second term (\autoref{sec:tail_off}), followed by a change of variables (\autoref{sec:generalizations}). The benefit gained by this progression of fits can be seen in~\autoref{tab:rmse}. Notations are recalled in \autoref{fig:alpha_es}.
\subsection{Separable Scaling Laws in Model Size and Experts}
\label{section:conditional_scaling}
\citet{kaplan2020scaling} argue that the converged performance of a dense model with $N$ parameters can be modelled accurately as the two-parameter power law
\begin{equation}\label{eq:dense_law}
\log L(N) \triangleq a \log N + d,
\quad\text{i.e.}\quad
L(N) = {\left(\frac{N_c}{N}\right)}^{\alpha_N}
\end{equation}
where $\alpha_N \triangleq -a$ and $N_c \triangleq 10^{d/-a}$. We can re-estimate these coefficients from the performance of our own dense models, leading to the estimates in \autoref{tab:joint_alphas}. The similarity of $\alpha_N$ is a reassuring sanity check (there are differences in dataset, vocabulary, tokenization and model which affect $N_c$).
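The conversion from the fitted line to $(\alpha_N, N_c)$ is mechanical. A sketch on synthetic, noiseless dense losses generated from the \autoref{tab:joint_alphas} values, which the fit then recovers (real losses would of course carry noise):

```python
import numpy as np

def dense_law(N, L):
    """Fit log10 L = a log10 N + d, then alpha_N = -a and N_c = 10^(d/-a)."""
    a, d = np.polyfit(np.log10(N), np.log10(L), 1)
    return -a, 10.0 ** (d / -a)

# Synthetic dense losses following L(N) = (N_c / N)^alpha_N.
alpha_N, N_c = 0.078, 3.568e13
N = 10.0 ** np.arange(7, 12)
alpha_fit, Nc_fit = dense_law(N, (N_c / N) ** alpha_N)
```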
An immediate hypothesis is that for all values of $N$, scaling in $E$ obeys a similar power law:
\begin{equation}
\log L_N(E) \triangleq b \log E + d'
\label{eq:hopeful_power}
\end{equation}
Because $L_N(1) = L(N)$ (a fact we will call \textit{dense equivalence}),
\eqref{eq:dense_law} and~\eqref{eq:hopeful_power} can be combined into:
\begin{equation}
\log L_N(E) \triangleq a \log N + b \log E + d,\label{eq:separable}
\end{equation}
corresponding to the multiplicative separated power law:
\begin{equation}
L_N(E) = \left( \frac{10^{d/a}}{N}\right)^{a} \left( \frac{1}{E} \right)^{b}\label{eq:multi_separable}
\end{equation}
If Eq.~\eqref{eq:hopeful_power} fits observed data for each $N$ we can proceed with an assumption that scaling in $E$ obeys a power-law for fixed $N$. Observing a constant $b$ across $N$ would allow us to fit Eq.~\eqref{eq:separable} to models ranging across $N$ and $E$ simultaneously.
\paragraph{Fitting.}
The first hypothesis is easily tested and confirmed to a reasonable degree. We fit Eq.~\eqref{eq:hopeful_power} for each technique and value of $N$ separately, plotted as colored lines in \autoref{fig:linear_fits}. The values of $b$ are shown in \autoref{fig:alpha_es}.
\begin{figure}[t]
\centering
\hfill
\begin{minipage}{0.3\linewidth}
\includegraphics[width=\linewidth]{pdf_figures/fig4.pdf}
\end{minipage}
\hspace{2em}
\hfill
\begin{minipage}{.45\linewidth}
\begin{tabular}{l|l}
$N$ & Parameter Count in Base Model \\
$E$ & Number of Experts \\
$P$ & Total Number of Parameters \\
$F$ & Compute per Inference (in TeraFLOPs) \\
$B$ & Parameter Utilization Ratio \\
$\bar N$ & \textsc{epc}\xspace: The Effective Parameter Count \\
\end{tabular}
\vfill
\end{minipage}
\caption{\textit{Left}: $b(N)$ increases with $N$. \textit{Right}: Notations.}\label{fig:alpha_es}
\end{figure}
We observe that $b(N)$ is \textit{increasing} with $N$ (values listed in \autoref{tab:alphas}), corresponding to a reduction in benefit from routing as size increases, with a slope that is approximately linear in $\log N$ (\autoref{fig:alpha_es}). Eq.~\eqref{eq:separable} requires that $b$ remains fixed across $N$; therefore we expect it to poorly predict model performance. We can attempt a fit nevertheless: plotted in grey in \autoref{fig:linear_fits}. Qualitatively, this mis-predicts some validation losses by over 0.2, particularly overestimating the performance at large $N$ and $E$. As reported in~\autoref{tab:rmse}, the fit has held-out RMSLE values greater than 80e-4.
\subsection{Quadratic Interaction in $N$ and $E$}
\label{subsec:quadratic}
\begin{figure}[t]
\centering
\includegraphics[width=0.3\linewidth]{pdf_figures/fig3_SparseMixtureOfExperts.pdf}%
\hspace{-0.3em}%
\includegraphics[width=0.3\linewidth]{pdf_figures/fig3_Reinforce.pdf}
\hspace{-0.3em}%
\includegraphics[width=0.3\linewidth]{pdf_figures/fig3_HashLayer.pdf}
\caption{Fits for \textsc{s-base}\xspace, \mbox{\textsc{rl-r}}\xspace and \textsc{hash}\xspace. Dashed lines are solutions to Eq.~\eqref{eq:hopeful_power} with $b(N)$ given by Table \ref{tab:alphas}, while dotted lines are solutions to Eq.~\eqref{eq:no_tailoff}. Solutions for Eq.~\eqref{eq:separable} are in grey. The separable solution fails to account for the diminishing improvement from expert scaling.}\label{fig:linear_fits}
\end{figure}
This motivates us to introduce a simple extension: that of a multiplicative interaction between $\log N$ and $\log E$. This is conveniently the exact function which leads to $b$ scaling with $\log N$ and takes the following form:
\begin{gather}
\!\!\!\log L(N, E) \triangleq a \log N {+} b \log E {+}
c \log N \log E {+} d\label{eq:no_tailoff}
\end{gather}
This function has the property that the log-log slope in both $N$ and $E$ are affine in the logarithm of the other variable. In other words, with $E$ or $N$ fixed, the performance $L$ scales with $N$ or $E$ following~\eqref{eq:dense_law} and \eqref{eq:hopeful_power} with slopes given by:
\begin{gather}
a(E) \triangleq - \frac{\partial \log L}{\partial \log N} = a + c \log(E)\label{eq:a_e}\\
b(N) \triangleq - \frac{\partial \log L}{\partial \log E} = b + c \log (N), \notag
\end{gather}
$b(N)$ matches the behavior reported in \autoref{tab:alphas}. A transposed table, fitting sets of models with fixed $E$ and changing $N$, can be found to match the behavior predicted by $a(E)$ (see \autoref{tab:all_alpha_ns}). There are two symmetric non-logarithmic representations of \eqref{eq:no_tailoff}, useful for comparison to \eqref{eq:multi_separable}:
\begin{align}
\stepcounter{equation}
L(N, E) &= \left(\frac{10^{d/a}}{N}\right)^a\left(\frac{1}{E}\right)^{b + c \log (N)}\tag{\arabic{equation}a}\label{eq:factorized},\\[4pt]
&= \left(\frac{10^{d/b}}{E}\right)^b\left(\frac{1}{N}\right)^{a + c\log(E)}.\tag{\arabic{equation}b}\label{eq:bad_factorized}
\end{align}
\begin{figure*}[t]
\centering
\includegraphics[width=0.6\textwidth]{figures/figure_5a.pdf}
\includegraphics[width=0.6\textwidth]{figures/figure_5b.pdf}
\caption{Level curves for Equation \eqref{eq:real_joint_scaling_law} and Equation \eqref{eq:real_joint_scaling_law_fp} on \textsc{s-base}\xspace for $K \in \{1, 2, 4\}$ (left two), $R \in \{1.0, 0.5, 0.25\}$ (right two). Scaling laws in $(N, E)$ differ for models with different values of $(K, R)$: indicated by non-overlapping level-curves. A change of variables to $(F, P)$ leads to almost-overlapping functions: allowing the same fits to be reused across changes in the routing architecture.
}
\label{fig:ben_topk_fits}
\end{figure*}
\paragraph{Fitting.}
Fitting the bilinear~\eqref{eq:no_tailoff} instead of~\eqref{eq:separable} substantially reduces the prediction error for large~$N$ (\autoref{tab:rmse}, Eq.~\eqref{eq:separable} vs Eq.~\eqref{eq:no_tailoff}), as displayed in~\autoref{fig:linear_fits} (dotted lines match the dashed ones, where the grey separable fit doesn't). We verify dense equivalence: $\alpha_N \approx a$, while $N_c\approx \exp(d/a)$, and thus the law~\eqref{eq:no_tailoff} gives similar prediction to the reference law~\eqref{eq:dense_law} for dense models. Predictions for fixed $N$ are visualized as grey lines in \autoref{fig:joint_curves}.
\paragraph{Interpretation.}
In Eq.~\eqref{eq:no_tailoff}, when $c$ is positive, the expert improvement slope~$b(N)$ reduces with model size~$N$. All three routing techniques considered therefore predict \textit{diminishing improvements from routing when increasing scale}. However, the scaling of \textsc{s-base}\xspace is predicted (and seen) to be substantially better. When designing a new technique, we can fit~\eqref{eq:no_tailoff} and predict a better scaling behavior if the fitted $c$ is lower than with other techniques. A clear goal for future work in routing techniques should be to find a method with scaling coefficient $c \approx 0$.
\subsection{Bounded Scaling in $E$}
\label{sec:tail_off}
Equation \eqref{eq:separable} models scaling in $E$ as a power law. For both small and large values of $E$, there are reasons to expect some deviation. If a routing technique degrades with $E$ (for instance, the variance of gradients in \mbox{\textsc{rl-r}}\xspace will increase), performance for large $E$ might be worse than predicted. On the other hand, fixed overhead (e.g., interference from auxiliary losses) might worsen scaling for low values of $E$, counter-intuitively leading to better than expected performance. Both phenomena appear clearly in \autoref{fig:joint_curves}. We seek to model this saturation such that the limit behavior in $E$ is bounded on both sides. We choose the following transformation, but discuss in~\autoref{sec:epc} a number of implications which are independent of the specific saturating form used:
\begin{align}
\frac{1}{\widehat{E}} \triangleq
\frac{1}{E - E_{\min} + \left(\frac{1}{E_\text{start}} - \frac{1}{E_{\max}}\right)^{-1}} + \frac{1}{E_{\max}}.\label{eq:saturation}
\end{align}
This is constructed
so that we have $\hat E(E_{\min}) = E_{\text{start}}$, while $\hat{E} \to E_{\max}$ as $E \to \infty$. We fix $E_{\min} = 1$, indicating the lower bound of meaningful expert counts.
$\hat E$ can be seen as a thresholded version of $E$: increasing past $E_{\max}$ will give improvement, but not following a power law. Similarly, when $E_{\start} > 1$, $\hat{E} > E$ for small values of $E$. Practically, the fit is the same over a wide range of different thresholding functions.
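The transformation \eqref{eq:saturation} can be written as a small helper function; the sketch below (illustrative code, not from the paper) checks its two defining properties, using the fitted \textsc{s-base}\xspace values of $E_\text{start}$ and $E_{\max}$ from \autoref{tab:final_coeffcients}:

```python
def e_hat(E, E_start, E_max, E_min=1.0):
    """Saturating transform of the expert count E, per Eq. (saturation):
    1/E_hat = 1/(E - E_min + (1/E_start - 1/E_max)^(-1)) + 1/E_max.
    Satisfies e_hat(E_min) == E_start and e_hat(E) -> E_max as E -> inf."""
    offset = 1.0 / (1.0 / E_start - 1.0 / E_max)
    return 1.0 / (1.0 / (E - E_min + offset) + 1.0 / E_max)

# Fitted s-base values (rounded) from the coefficients table.
E_START, E_MAX = 1.847, 314.478
assert abs(e_hat(1.0, E_START, E_MAX) - E_START) < 1e-9   # E_hat(E_min) = E_start
assert abs(e_hat(1e9, E_START, E_MAX) - E_MAX) < 1e-3     # saturates at E_max
```

The construction works because at $E = E_{\min}$ the first denominator reduces to the inverse of $\frac{1}{E_\text{start}} - \frac{1}{E_{\max}}$, so the two $\frac{1}{E_{\max}}$ contributions cancel exactly.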
\paragraph{Fitting.} Fitting Equation~\eqref{eq:real_joint_scaling_law}, obtained from Eq.~\eqref{eq:no_tailoff} by substituting $E \to \hat{E}$, is complicated by its non-convexity.
We find the coefficients $(a, b, c, d, E_{\start}, E_{\max})$ as the best of repeated solutions provided by the L-BFGS-B algorithm \citep{byrd1995limited}. \autoref{fig:joint_curves} shows fitted curves from these equations; coefficients are reported in~\autoref{tab:final_coeffcients}.
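A sketch of this restart strategy is below. It is an illustration under stated assumptions, not the paper's exact setup: the bounds, restart count, and initialization scheme are all hypothetical choices, and the objective is mean squared error in $\log_{10}$ space.

```python
import numpy as np
from scipy.optimize import minimize

def fit_saturating_law(N, E, logL, n_restarts=20, seed=0):
    """Fit (a, b, c, d, E_start, E_max) for the saturating law by keeping
    the best of repeated L-BFGS-B runs from random starting points.
    Bounds below are illustrative assumptions, not the paper's."""
    logN = np.log10(N)

    def e_hat(E, E_start, E_max, E_min=1.0):
        off = 1.0 / (1.0 / E_start - 1.0 / E_max)
        return 1.0 / (1.0 / (E - E_min + off) + 1.0 / E_max)

    def objective(theta):
        a, b, c, d, E_start, E_max = theta
        lE = np.log10(e_hat(E, E_start, E_max))
        pred = a * logN + (b + c * logN) * lE + d
        return np.mean((pred - logL) ** 2)

    rng = np.random.default_rng(seed)
    # keep E_start strictly below E_max so the transform stays well defined
    bounds = [(-1, 0), (-1, 0), (0, 0.1), (0, 3), (1.01, 9.0), (20.0, 1000.0)]
    best = None
    for _ in range(n_restarts):
        x0 = [rng.uniform(lo, hi) for lo, hi in bounds]
        res = minimize(objective, x0, method="L-BFGS-B", bounds=bounds)
        if best is None or res.fun < best.fun:
            best = res
    return best.x
```

Random restarts are a standard mitigation for the non-convexity introduced by $E_\text{start}$ and $E_{\max}$ entering the model nonlinearly.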
\paragraph{Interpretation.} Relative to using the simple bilinear law~\eqref{eq:no_tailoff}, fitting Eq.~\eqref{eq:real_joint_scaling_law} improves prediction for the lowest and highest values of $E$ considered. Crucially, while the deviation from a power-law (and therefore improvement in RMSLE) is relatively minor for the values of $E$ considered, the deviation is nonetheless clear (seen best looking at the raw losses in \autoref{fig:experts_and_params1}). We believe it is important to model this saturation because (as argued in \autoref{sec:n_max}) the limit behavior of model performance as $N$ increases is substantially different when bounded, with important properties that are independent of $E_{\max}$. We further hypothesize that future work, able to test still larger values of $E$, will see a more quantitative benefit from including these terms.
This can already be observed in \autoref{fig:fig3_hash}: the fit no longer over- and under-estimates the performance for $E \in \{2, 4, 256, 512\}$, as the law~\eqref{eq:no_tailoff} does in \autoref{fig:linear_fits}.
Level curves of Eq.~\eqref{eq:real_joint_scaling_law} enumerate the $\{(N, E)\}$ which are predicted to achieve fixed performance, as visualized in Fig.~\ref{fig:main}(b). This demonstrates the power of routing: a model with $N=5M$ and $E=128$ equals the performance of a model with $N=55M$ and $E=1$, which requires over ten times more compute per inference.
\subsection{Generalizing Across Architecture Variants}
\label{sec:generalizations}
The models trained so far use fixed choices for two key details of routing: the number of experts executed per-datapoint $K$ and the frequency of routed layers across depth $R$ (previously set at 1 and $0.5$, respectively). For any selected value of $K$ and $R$ we may fit Eq.~\eqref{eq:real_joint_scaling_law} to observed performance, but since these variables are independent of $N$ and $E$, we do not expect the same coefficients to remain valid across values of $K$ and $R$. To allow for a unified scaling law, we modify Eq.~\eqref{eq:real_joint_scaling_law} to use terms in $F$, the TeraFLOPs required per forward pass, and in the ratio $B \triangleq \frac{P}{F}$ where $P$ is the total number of parameters. Specifically, $F$ is motivated by the approximation from \citet{kaplan2020scaling} that $F = 2N$. $B$, the \textit{parameter utilization ratio}, is an affine function of $E$, close to linear when most parameters lie in the routed components of the model.
Using $(F, B)$ instead of $(N, E)$ (and setting $E_{\min}$ to $\frac{1}{2}$) results in Eq.~\eqref{eq:real_joint_scaling_law_fp}.
To show the advantage of this change of variables we conduct two experiments: varying $K$ across $\{1, 2, 4\}$ and $R$ across $\{0.25, 0.5, 1.0\}$. In both cases, we vary $E \in \{8, 64, 256\}$ and $N \in \{15M, 370M, 870M\}$.
\paragraph{Fitting.} Eq.~\eqref{eq:real_joint_scaling_law_fp} predicts the scaling behavior of models as well as Eq.~\eqref{eq:real_joint_scaling_law} for a given routing architecture, as indicated in~\autoref{apfig:pf_and_expert_fits}.
The benefit of the change of variables is seen most clearly in \autoref{fig:ben_topk_fits}, which plots contours of fixed loss value as functions of $(N, E)$ and of $(F, B)$. For varying $(K, R)$, the loss surface as a function of $N$ and $E$ changes: meaning a joint fit would be inaccurate. Plotted as functions of $(F, B)$, the loss surface is almost the same, suggesting a shared fit between all three methods (see \autoref{apfig:pf_fits_topk_individual} and \autoref{apfig:pf_fits_ben_individual} for joint fits for $K$ and $R$ respectively). We highlight that $R = 0.25$ deviates slightly. Plausible explanations are discussed in \autoref{sssec:route_one_layer}. The viability of a shared fit indicates a singular takeaway: the architectural details $K$ and $R$ have little effect on the scaling behavior of a Routing Network. The loss of the network can thus be predicted based only on inference flops $F$ and total number of parameters $P$.
\section{Scaling Law Applications}\label{sec:applications}
Next we provide two applications of the scaling laws presented. We re-emphasize that all values are only valid at the specific token count all models were trained at: 130B. \autoref{apsec:convergence} provides evidence that our analysis, if not the numerical values, is nevertheless robust to token count.
\subsection{Effective Parameter Equivalence}
\label{sec:epc}
We leverage Eq.~\eqref{eq:real_joint_scaling_law} to compute the size~$\bar N$ of a dense model giving the same performance as a Routing Network. Specifically, we solve for $L(\bar{N}, 1) = L(N, E)$, yielding
\begin{equation}
\label{eq:equivalence_tail_off}
\bar{N} \triangleq
{\left(N\right)}^{\alpha(\hat E) / \alpha(E_{\start})}
{\left(\hat{E} / E_{\start}\right)}^{b / \alpha(E_{\start})}.
\end{equation}
Here $\alpha(E) = a + c \log E$. Given a model with $N$ and $E$, we call $\bar{N}$ that model's \textit{Effective Parameter Count} (or \textsc{epc}\xspace). Eq.~\eqref{eq:real_joint_scaling_law} predicts that the performance of all models increases as a power law in this variable:
\begin{equation}
\log L(N, E) = a \log \bar{N}(N, E) + d.
\end{equation}
The result of plotting all models as a function of $\bar N$ is shown in \autoref{fig:main}(c): a good fit across four orders of magnitude. Scaling in terms of $\bar{N}$ results in a unifying power law: valid for dense and routed language models alike.
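Eq.~\eqref{eq:equivalence_tail_off} can be evaluated directly. The sketch below is illustrative (not the paper's code) and assumes base-10 logarithms, matching the fits; with the rounded \textsc{s-base}\xspace coefficients of \autoref{tab:final_coeffcients} it maps $N=5$M, $E=128$ to $\bar N \approx 52$M, consistent up to rounding with the quoted 55M dense equivalence.

```python
import math

def effective_params(N, E, a, b, c, E_start, E_max, E_min=1.0):
    """Effective Parameter Count N_bar per Eq. (equivalence_tail_off),
    with alpha(E) = a + c*log10(E) and E_hat the saturating transform."""
    off = 1.0 / (1.0 / E_start - 1.0 / E_max)
    e_hat = 1.0 / (1.0 / (E - E_min + off) + 1.0 / E_max)
    alpha = lambda e: a + c * math.log10(e)
    return (N ** (alpha(e_hat) / alpha(E_start))
            * (e_hat / E_start) ** (b / alpha(E_start)))

sb = (-0.082, -0.108, 0.009, 1.847, 314.478)   # rounded s-base row of the table
assert abs(effective_params(5e6, 1, *sb) - 5e6) < 1.0   # dense model: N_bar = N
```

Note that $E = E_{\min}$ gives $\hat E = E_\text{start}$ exactly, so a dense model's \textsc{epc}\xspace is its own parameter count, as required for the unified law to cover dense models.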
\subsection{Routing Behavior for Large $N$}
\label{sec:n_max}
\begin{table}[t]
\centering
\caption{Fitted coefficients for Eq.~\eqref{eq:real_joint_scaling_law}.}
\begin{tabular}{ c | c c c c c c}
\toprule
& a & b & c & d & $E_\text{start}$ & $E_{\max}$ \\
\midrule
\textbf{\textsc{s-base}\xspace} & -0.082 & -0.108 & 0.009 & 1.104 & 1.847 & 314.478 \\
\textbf{\mbox{\textsc{rl-r}}\xspace} & -0.083 & -0.126 & 0.012 & 1.111 & 1.880 & 469.982 \\
\textbf{\textsc{hash}\xspace} & -0.087 & -0.136 & 0.012 & 1.157 & 4.175 & 477.741 \\
\bottomrule
\end{tabular}
\label{tab:final_coeffcients}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=.56\linewidth]{pdf_figures/fig6_final.pdf}
\caption{Maximum effective parameter count as a function of base model size. Routing helps until a certain size $N_{\textrm{cutoff}}$, which varies strongly between methods (\textsc{s-base}\xspace being the best).}
\label{fig:epc_max}
\end{figure}
\textsc{epc}\xspace leads to a better grasp of the behavior of routing as $N$ increases. Of immediate interest is $N_{\textrm{cutoff}}$: the value of $N$ beyond which $\bar{N}(N, E) \leq N$. For larger $N$, routing will not improve performance. This is easily found to obey $\log N_{\textrm{cutoff}} = -\frac{b}{c}$. $N_{\textrm{cutoff}}$ equals $937\textrm{B}$, $85\textrm{B}$ and $83\textrm{B}$ for \textsc{s-base}\xspace, \mbox{\textsc{rl-r}}\xspace and \textsc{hash}\xspace respectively. These values are highly dependent on the number of tokens seen, and $N_{\textrm{cutoff}}$ is expected to increase with increased numbers of tokens.
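For concreteness, $N_{\textrm{cutoff}} = 10^{-b/c}$ can be computed from the rounded coefficients of \autoref{tab:final_coeffcients}. Caveat: three-decimal rounding of $b$ and $c$ shifts $10^{-b/c}$ considerably, so this sketch reproduces the quoted cutoffs ($937$B, $85$B, $83$B) only to within an order of magnitude.

```python
# N_cutoff = 10**(-b/c): the model size at which the effective expert slope
# b(N) = b + c*log10(N) reaches zero, so routing stops helping.
# Coefficients are the rounded table values; results are therefore rough.
coeffs = {"s-base": (-0.108, 0.009),
          "rl-r":   (-0.126, 0.012),
          "hash":   (-0.136, 0.012)}
n_cutoff = {name: 10 ** (-b / c) for name, (b, c) in coeffs.items()}
# s-base: ~1e12 parameters, consistent with the quoted ~937B after rounding.
```

The sensitivity of $10^{-b/c}$ to small changes in $b$ and $c$ is itself instructive: cutoff estimates inherit substantial uncertainty from the fit.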
Next we consider $\bar N_{\max}(N) \triangleq \max_E \bar N(N, E)$, i.e. the maximal effective parameter count that a routing network can reach. Eq.~\eqref{eq:equivalence_tail_off} predicts that $\log \bar N$ is an affine function of $\log N$ for any fixed $E$, and $\bar N_{\max}(N) = N$ for $N > N_{\textrm{cutoff}}$. Therefore $\log {\bar N}_{\max}$ is piecewise-affine in $\log N$, as displayed in \autoref{fig:epc_max}:
\begin{align}
\forall \, N \leq N_{\textrm{cutoff}} = 10^{-\frac{b}{c}},\quad \bar N_{\max}(N) &= \bar N(N, E_{\max}),\notag \\
\forall \, N \geq N_{\textrm{cutoff}},\quad \bar N_{\max}(N) &= N.
\end{align}
Note that $\bar{N}_{\max}$ is continuous near $N_{\textrm{cutoff}}$, since for all $E$, $\bar N(N_{\textrm{cutoff}}, E) = N_{\textrm{cutoff}}$. Moreover, the slope of $\bar N_{\max}(\cdot)$ for $N \leq N_{\textrm{cutoff}}$ is positive whenever $E_{\max} \leq E_{\start} 10^{-a/c}$, which is true for our coefficients. In this setting $\bar N_{\max}(\cdot)$ is a non-decreasing function of $N$.
Therefore for any routing network where $N < N_{\textrm{cutoff}}$, we have $N \leq \bar N_{\max}(N) \leq N_{\textrm{cutoff}}$: routing will never yield a model more powerful than a dense model of size $N_{\textrm{cutoff}}$. Note that despite this value not depending on $E_{\max}$, its existence crucially depends on the saturating transformation: without it ${\bar N}_{\max}$ is unbounded.
\subsection{Comparative Analysis}
\label{sec:comparisons}
\citet{kaplan2020scaling} use scaling laws to encapsulate and contrast the behavior of entire model classes. Here we mirror this analysis by using the scaling laws we have proposed to summarize the relative behavior of the three routing techniques considered. We make five concrete observations:
\begin{itemize}[itemsep=3pt]
\item \textsc{s-base}\xspace consistently outperforms \mbox{\textsc{rl-r}}\xspace and \textsc{hash}\xspace, though \mbox{\textsc{rl-r}}\xspace is very competitive at smaller $N$.
\item All routing techniques suffer from reducing efficacy as $N$ increases. Amongst the three techniques, \textsc{s-base}\xspace scales best: the fitted parameter $c$ is lowest.
\item For small $N$, \mbox{\textsc{rl-r}}\xspace and \textsc{s-base}\xspace scale similarly with expert count and better than \textsc{hash}\xspace (as indicated by computing the effective expert slope $b(N) = b + c \log N$).
\item
\textsc{hash}\xspace and \mbox{\textsc{rl-r}}\xspace maintain power-law behavior for longer than \textsc{s-base}\xspace (larger $E_{\max}$). However, they suffer from more interference (larger $c$), leading to worse performance at most model sizes.
\item
\textsc{hash}\xspace has a large initial overhead (bigger $E_\text{start}$), clearly visible as more pronounced curvature at small $E$.
\end{itemize}
For a practitioner interested in applying routing techniques, we conclude with some recommendations:
\begin{enumerate}[itemsep=0pt,topsep=0pt]
\item Use routing when training any model with $N \leq \text{1.3B}$.
\item \textsc{s-base}\xspace is a good default routing algorithm. \mbox{\textsc{rl-r}}\xspace will sometimes match \textsc{s-base}\xspace in performance but is less robust and scalable (\autoref{sec:sensitivity}).
\item Target using $E \in \{64, 128\}$ experts. Larger values will continue to improve, but with diminishing returns.
\item Use $K{=}1$ experts. Route layers at frequency $0.5 \leq R \leq 1$; lower frequency reduces performance.
\item Future routing research should focus on the terms $c$ and $E_{\max}$; indicative of limits to arbitrary scaling.
\item New routing techniques must be validated at multiple values of $N$ and $E$ when comparing with prior work. Results on single sizes cannot be extrapolated.
\end{enumerate}
\section{Related Work}
In studying the empirical aspects of scaling, this work follows \citet{kaplan2020scaling}, which triggered much subsequent research including \citet{henighan2020scaling}, \citet{hernandez2021scaling} and \citet{ghorbani2021scaling}. The underlying theory is less well understood, but there is some exploration of this space, including \citet{hutter2021learning} and \citet{bahri2021explaining}.
These studies, and ours, are mutually reliant on a large corpus of work improving the scalability of Transformers. This includes models like GPT-2 \citep{radford2019language}, GPT-3 \citep{brown2020language}, Jurassic-1 \citep{lieber2021jurassic} and Gopher \citep{rae2021scaling}, as well as work improving the ability of these models to be efficiently parallelized across multiple devices, including \citet{shoeybi2019megatron}, \citet{harlap2018pipedream}, \citet{kim2021scalable} and \citet{xu2021gspmd}.
Parallel to all this has been a long study of Routing Networks, a term introduced by \citet{rosenbaum2018routing} but developed extensively in the literature as Conditional Computation \citep{bengio2013estimating,bengio2015conditional,bengio2017reinforcement,denoyer2014deep} and Mixture of Experts~\citep{jacobs1991adaptive,collobert2003scaling,eigen2013learning}. The framework is sometimes further generalized, seen as per-example architecture search in \citet{ramachandran2018diversity} or as a graph problem in \citet{denoyer2014deep}. Routing was popularized for large scale training by \citet{shazeer2017outrageously}, and furthered by work including GShard \citep{lepikhin2020gshard}, Switch Transformer \citep{fedus2021switch} and GLaM \citep{du2021glam}. In this vein, \citet{artetxe2021efficient} undertake a comparative analysis of dense networks and \textsc{smoe}s\xspace with $E = 512$ that aligns with our results. Finally, the core routing architecture is still being improved.
\citet{nie2021dense} adapt $K$ through training where \citet{hazimeh2021dselect} learn it via a differentiable loss. \citet{ramachandran2018diversity} increase $K$ through depth and encourage architectural diversity across experts. \citet{caccia2021anytime} grows $E$ throughout training and \citet{rajbhandari2022deepspeedmoe} propose networks where $E$ changes with depth.
\section{Conclusion}
Using conditional computation to scale neural networks has long been a research goal, and methods based on Routing Networks have been increasing in popularity. Here we have introduced a scaling law (Eq.~\eqref{eq:real_joint_scaling_law}) that models the behavior of these networks. This scaling law predicts that, for all models considered, introducing routing into a language model improves performance. That improvement follows a power-law in the number of experts $E$ that diminishes with model size $N$,
and can be further generalized across routing architectures with Eq.~\eqref{eq:real_joint_scaling_law_fp}.
These scaling laws quantify the differences between three different routing techniques and lead to a single scalar (Eq.~\eqref{eq:equivalence_tail_off}) that simultaneously describes the performance of routed and dense models alike.
This work provides an empirical framework with which to analyze future innovations in routing. We hope the overwhelming evidence we provide towards the benefits of routing encourages its more rapid adoption as a powerful tool for model improvement, one whose scaling characteristics align with traditional methods of scaling (in depth and width) and which will remain beneficial up to models with base model size greater than 900 billion parameters.
\pagebreak
\section*{Acknowledgments}
We would like to thank Marc'Aurelio Ranzato, Nando de Freitas, Jacob Menick and Andy Brock for useful comments and feedback on early drafts of this paper. The infrastructure needed to train these models wouldn't have been possible without the dedicated work of the JAX and XLA teams, especially Peter Hawkins, Roy Frostig and James Bradbury who all were crucial in the development of the routing software.
\bibliographystyle{plainnat}
\section{Introduction}
Supersymmetric quantum mechanics (SUSYQM) has proven to be a key
technique in the construction of exactly-solvable potentials and in
the understanding of shape-invariance. The supersymmetric partners of
the harmonic oscillators are known as rational extensions because the
corresponding potentials have the form of a harmonic oscillator plus a
rational term that vanishes at infinity.
There has been some recent interest in rational extensions possessing
ladder operators, which may be thought of as higher order analogues of
the classical creation and annihilation operators. There are
applications of such ladder operators to superintegrable systems
\cite{MQ1,MQ2}, rational solutions of Painlev\'e equations \cite{MN},
and coherent states \cite{HHMZ}.
In this note we classify the ladder operators corresponding to the
class of rational extensions of the harmonic oscillator. Rational
extensions are naturally associated with combinatorial objects called
Maya diagrams. We show that any two rational extensions are related
by an intertwining relation. It therefore makes sense to endow both
Maya diagrams and rational extensions with the structure of a
category, and to interpret the relation Maya diagram $\mapsto$
rational extension as a functor between these categories. This
approach allows us to classify ladder operators and syzygies of ladder
operators, and thereby to generalize the results of \cite{MQ1,MQ2}.
\section{Maya diagrams}
A Maya diagram is a set of integers $M\subset \mathbb{Z}$ containing a finite
number of positive integers, and excluding a finite number of negative
integers. We visualize a Maya diagram as a horizontally extended
sequence of $\rlap{\hskip0.25ex\raise0.2ex\hbox{$\bullet$}}\square$ and $\hbox{$\square$}$ symbols, with the filled symbol
$\rlap{\hskip0.25ex\raise0.2ex\hbox{$\bullet$}}\square$ in position $m$ indicating membership $m\in M$. The
defining assumption now manifests as the condition that a Maya diagram
begins with an infinite filled $\rlap{\hskip0.25ex\raise0.2ex\hbox{$\bullet$}}\square$ segment and terminates with
an infinite empty $\hbox{$\square$}$ segment.
A Maya diagram may also be regarded as a strictly decreasing sequence
of integers $m_1 > m_2> \cdots$, subject to the constraint that
$m_{i+1} = m_i-1$ for $i$ sufficiently large. It follows that there
exists a unique integer $\sigma$, called the index of $M$, such that
$m_i = -i+\sigma$ for $i$ sufficiently large.
Let $\mathcal{M}$ denote the set of all Maya diagrams. The flip at position
$k\in \mathbb{Z}$ is the involution $f_k:\mathcal{M}\to \mathcal{M}$ defined by
\begin{equation}\label{eq:flipdef}
f_k : M \mapsto
\begin{cases}
M \cup \{ k \}, & \text{if}\quad k\notin M, \\
M \setminus \{ k \},\quad & \text{if}\quad k\in M.
\end{cases}\qquad M\in \mathcal{M}.
\end{equation}
\noindent
In the first case, we say that the flip acts on $M$ by a
state-deleting transformation ($\hbox{$\square$}\to$ $\rlap{\hskip0.25ex\raise0.2ex\hbox{$\bullet$}}\square$), and in the
second case, by a state-adding transformation
($\rlap{\hskip0.25ex\raise0.2ex\hbox{$\bullet$}}\square$$\to\hbox{$\square$}$).
Let $\mathcal{Z}_p$ denote the set of subsets of $\mathbb{Z}$ having cardinality $p$,
and $\mathcal{Z} = \bigcup_p \mathcal{Z}_p$ the set of all finite subsets of $\mathbb{Z}$.
For $K\in \mathcal{Z}_p$ consisting of distinct $k_1,\ldots, k_p\in \mathbb{Z}$ we
define the multi-flip $f_K:\mathcal{M} \to \mathcal{M}$ by
\begin{equation}
\label{eq:fKMdef}
f_K(M) = (f_{k_1} \circ \cdots \circ f_{k_p})(M),\quad M\in \mathcal{M}.
\end{equation}
Since flips commute, the action of $f_K$ does not depend upon the
order of $k_1,\ldots, k_p$.
It is useful to regard $\mathcal{M}$ as a complete graph whose edges are
multi-flips. For Maya diagrams $M_1,M_2\in \mathcal{M}$, the symmetric
difference
\[ M_1\ominus M_2 = (M_1\setminus M_2) \cup (M_2 \setminus M_1) \]
is the edge that connects $M_1$ and $M_2$. Indeed, if
\[ K = M_1\ominus M_2 = M_2 \ominus M_1, \] then $f_K(M_1) = M_2$ and
$f_K(M_2) = M_1$.
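These operations are easy to realize computationally. The sketch below (an illustration, not from the paper) encodes a Maya diagram by its finite index set $K$, i.e. its symmetric difference with the trivial diagram $M_\emptyset = \mathbb{Z}_-$; flips and multi-flips then become symmetric differences of index sets, since $f_S(M) = M \ominus S$ corresponds to $K \mapsto K \ominus S$.

```python
# Membership rule: m in M  <=>  (m < 0) XOR (m in K),
# where K is the finite index set of the Maya diagram M.
def member(K, m):
    return (m < 0) != (m in K)

def flip(K, k):            # the involution f_k
    return K ^ {k}

def multi_flip(K, S):      # f_S; flips commute, so order is irrelevant
    return K ^ S

M1 = frozenset({-2, 3})    # index set of one Maya diagram
M2 = frozenset({-1, 0, 5})
edge = M1 ^ M2             # symmetric difference M1 (-) M2
assert multi_flip(M1, edge) == M2 and multi_flip(M2, edge) == M1
```

That flips are involutions is immediate here: applying `flip(K, k)` twice returns `K`.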
Multi-flips can also be used to define a bijection $\mathcal{Z}\to\mathcal{M}$ given by
$ K\mapsto f_K(M_\emptyset)$, where $M_\emptyset:=\mathbb{Z}_{-}$
denotes the trivial Maya diagram. We refer to $K\in \mathcal{Z}$ as the index
set of the Maya diagram $f_K(M_\emptyset)$.
The additive group $\mathbb{Z}$ acts on $\mathcal{M}$, because for $M\in \mathcal{M}$ and
$n\in \mathbb{Z}$, the set
\[ M+n = \{ m+n \colon m\in M \}\] is also a Maya
diagram. Moreover, we have
\begin{equation}
\label{eq:indexshift}
\sigma_{M+n}=\sigma_M+n.
\end{equation}
We will refer to an equivalence class of Maya diagrams related by
translations as an \textit{unlabelled Maya diagram}, and denote the
set of all unlabelled Maya diagrams by $\mathcal{M}/\mathbb{Z}$. One can visualize
the passage from an unlabelled to a labelled Maya diagram as choosing
the placement of the origin.
For $B\in \mathcal{Z}_p$, where $p=2g+1$ is odd, we define the Maya
diagram
\begin{equation}
\label{eq:MBi}
\Xi(B)= (-\infty,b_{0}) \cup [b_{1},b_{2}) \cup
\ \cdots \cup [b_{2g-1},b_{2g}),
\end{equation}
where $b_0 <b_1<\cdots < b_{2g}$ is an increasing enumeration of $B$
and where $[m,n) = \{ j\in \mathbb{Z} \colon m\leq j < n\}$. Every Maya
diagram has a unique representation of the form $\Xi(B)$ for some
$B\in \mathcal{Z}_{2g+1}$. We will call the corresponding $g\geq 0$ the genus
of $M= \Xi(B)$ and refer to $(b_0,\ldots, b_{2g})$ as the block
coordinates of $M$. The block coordinates may also be characterized
as the unique set $B\in \mathcal{Z}$ such that $f_B(M) = M+1$.
\begin{figure}[h]
\begin{tikzpicture}[scale=0.6]
\draw (1,1) grid +(15 ,1);
\path [fill] (0.5,1.5) node {\huge ...}
++(1,0) circle (5pt) ++(1,0) circle (5pt) ++(1,0) circle (5pt)
++(1,0) circle (5pt) ++(1,0) circle (5pt)
++(2,0) circle (5pt) ++(1,0) circle (5pt)
++ (3,0) circle (5pt) ++(1,0) circle (5pt) ++ (1,0) circle (5pt)
++ (3,0) node {\huge ...} +(1,0);
\draw[line width=1pt] (4,1) -- ++ (0,1.5);
\foreach \x in {-3,...,11} \draw (\x+4.5,2.5) node {$\x$};
\path (6.5,0.5) node {$b_0$} ++ (1,0) node {$b_1$}
++ (2,0) node {$b_2$}++ (2,0) node {$b_3$}++ (3,0) node {$b_4$}
;
\end{tikzpicture}
\caption{Block coordinates $(b_0,\ldots, b_4) = (2,3,5,7,10)$
of a genus $2$ Maya diagram
$M = (-\infty,b_0)\cup [ b_1,b_2) \cup [
b_3,b_4)$. Note that the genus is both the number of
finite-size empty blocks and the number of finite-size filled
blocks.}\label{fig:genusM}
\end{figure}
Figure~\ref{fig:genusM} explains the visual meaning of block
coordinates and of genus. After removal of the initial infinite
$\rlap{\hskip0.25ex\raise0.2ex\hbox{$\bullet$}}\square$ segment and the trailing infinite $\hbox{$\square$}$ segment, a
Maya diagram consists of alternating empty $\hbox{$\square$}$ and filled
$\rlap{\hskip0.25ex\raise0.2ex\hbox{$\bullet$}}\square$ segments of variable length. The genus $g$ counts the
number of such pairs. The even block coordinates $b_{2i}$ indicate
the starting positions of the empty segments, and the odd block
coordinates $b_{2i+1}$ indicate the starting positions of the filled
segments.
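The characterization of the block coordinates as the unique finite set $B$ with $f_B(M) = M+1$ makes them easy to compute: they are exactly the positions $m$ where the membership of $m$ and $m-1$ differ. A small illustrative sketch (with Maya diagrams encoded by their finite index sets, as above; not from the paper):

```python
def block_coordinates(K):
    """Block coordinates B of the Maya diagram with index set K: the unique
    finite set with f_B(M) = M + 1, i.e. the positions m where membership
    of m and m - 1 differ. The genus is g = (len(B) - 1) // 2."""
    def member(m):                       # m in M  <=>  (m < 0) XOR (m in K)
        return (m < 0) != (m in K)
    # All boundaries lie near elements of K or near 0.
    span = range(min(K | {0}) - 1, max(K | {0}) + 2)
    return sorted(m for m in span if member(m) != member(m - 1))

# The genus-2 example of Figure 1: M = (-inf,2) u [3,5) u [7,10),
# whose index set is K = {0, 1, 3, 4, 7, 8, 9}.
B = block_coordinates(frozenset({0, 1, 3, 4, 7, 8, 9}))
assert B == [2, 3, 5, 7, 10] and (len(B) - 1) // 2 == 2
```

The odd cardinality $|B| = 2g+1$ reflects that a Maya diagram starts filled and ends empty, so membership must change an odd number of times.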
\section{Rational extensions}
For $n\in \mathbb{Z}$, set
\[ \psi_n(x) =
\begin{cases}
e^{-\frac{x^2}{2}} H_n(x) & \text{ if } n\geq 0\\
e^{\frac{x^2}{2}} \tilde{H}_{-n-1}(x) & \text{ if } n<0
\end{cases}
\]
where
\[
H_n(x) = (-1)^n e^{x^2} \frac{d^n}{dx^n} e^{-x^2} ,\quad n=0,1,2,\ldots
\]
are the Hermite polynomials, and
\[ \tilde{H}_n(x) = (-\mathrm{i} )^{n} H_n(\mathrm{i} x) \] are the conjugate Hermite
polynomials. We then have
\[ -\psi_n''(x) + x^2 \psi_n(x) = (2n+1) \psi_n(x),\quad n\in \mathbb{Z}.\]
For $n\geq 0$, the above solutions correspond to the bound states of
the quantum harmonic oscillator. The solutions for $n<0$ do not
satisfy the boundary conditions at $\pm\infty$ and therefore represent
virtual states.
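As a quick sanity check (illustrative SymPy code, not part of the note), one can verify the eigenvalue relation $-\psi_n'' + x^2\psi_n = (2n+1)\psi_n$ symbolically for a few $n \in \mathbb{Z}$, including the virtual states with $n < 0$ built from the conjugate Hermite polynomials:

```python
import sympy as sp

x = sp.symbols('x')

def psi(n):
    """psi_n for n >= 0 (bound states) and n < 0 (virtual states),
    using the conjugate Hermite polynomials H~_m(x) = (-i)^m H_m(i x)."""
    if n >= 0:
        return sp.exp(-x**2 / 2) * sp.hermite(n, x)
    m = -n - 1
    return sp.exp(x**2 / 2) * sp.expand((-sp.I)**m * sp.hermite(m, sp.I * x))

# Check  -psi'' + x^2 psi = (2n + 1) psi  for a few n in Z.
for n in (-3, -1, 0, 2):
    lhs = -sp.diff(psi(n), x, 2) + x**2 * psi(n)
    assert sp.simplify(lhs - (2 * n + 1) * psi(n)) == 0
```

For $n < 0$ the eigenvalue $2n+1$ is negative, consistent with these solutions lying below the bound-state spectrum.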
For $M\in \mathcal{M}$ with index set $K\in \mathcal{Z}_p$,
let $s_1>\cdots > s_r\geq 0$ and $t_1>\cdots > t_q\geq 0$
be the uniquely specified lists of natural numbers such that
\[ K = \{ -1-s_1,\ldots, -1-s_r, t_q,\ldots, t_1 \},\quad
p=q+r.\]
We will refer to $(s_1,\ldots, s_r \mid t_q,\ldots ,t_1)$ as the
\textit{Frobenius symbol} of $M$. It is easy to check that the index
of $M$ is given by $\sigma = q-r$.
Let us now define
\begin{equation} H_M(x) = e^{\sigma_M \frac{x^2}{2}}\operatorname{Wr}[\psi_{k_1},\ldots, \psi_{k_p} ],
\end{equation}
where $k_1<\cdots < k_p$ is an increasing enumeration of $K$, where
$\sigma_M\in \mathbb{Z}$ is the index, and $\operatorname{Wr}$ is the usual Wronskian
determinant. The polynomial nature of $H_M(x)$ becomes evident in the
following pseudo-Wronskian \cite{GGM2} realization:
\begin{equation}\label{eq:pWdef2} H_M =
\begin{vmatrix} \tilde{H}_{s_1} & \tilde{H}_{s_1+1} & \ldots &
\tilde{H}_{s_1+r+q-1}\\ \vdots & \vdots & \ddots & \vdots\\ \tilde{H}_{s_r} &
\tilde{H}_{s_r+1} & \ldots & \tilde{H}_{s_r+r+q-1}\\ H_{t_q} & H'_{t_q} &
\ldots & H^{(r+q-1)}_{t_q}\\ \vdots & \vdots & \ddots & \vdots\\
H_{t_1} & H'_{t_1} & \ldots & H^{(r+q-1)}_{t_1}
\end{vmatrix}.
\end{equation}
A suitably normalized pseudo-Wronskian is a translation invariant of
the underlying Maya diagram. The following result was proved in
\cite{GGM2}. Set
\begin{equation}
\label{eq:hHdef}
\widehat{H}_M =
\frac{(-1)^{rq}H_M}{
\prod_{i<j} 2(s_j-s_i)\prod_{i<j} 2( t_i-t_j)}.
\end{equation}
Then for $M\in \mathcal{M}$ and $n\in\mathbb{Z}$ we have
\begin{equation} \label{eq:HMequiv}
\widehat{H}_M = \widehat{H}_{M+n}.
\end{equation}
The potential
\begin{equation}
\label{eq:UMdef}
\begin{aligned}
U_M(x)
&= x^2 - 2 \frac{d^2}{dx^2}\log \operatorname{Wr}[ \psi_{k_1},\ldots, \psi_{k_p}
],\\
&= x^2 + 2\left( \frac{H_M'}{H_M}\right)^2 - \frac{2H_M''}{H_M} - 2 \sigma_M
\end{aligned}
\end{equation}
is known as a rational extension \cite{GGM} of the harmonic
oscillator. The corresponding Hamiltonian operators
\begin{equation}
\label{eq:TMdef}
T_M = -\frac{d^2}{dx^2} + U_M
\end{equation}
are exactly solvable with
\[ T_M[\psi_{M,k}] = (2k+1) \psi_{M,k},\]
where
\[
\psi_{M,k}
= e^{ \frac{\epsilon x^2}{2}} \frac{H_{f_k(M)}}{H_M},\qquad
\epsilon =
\begin{cases}
+1 & \text{ if } k\in M\\
-1 & \text{ if } k\notin M
\end{cases}.
\]
Note that, as a consequence of \eqref{eq:indexshift} and
\eqref{eq:HMequiv}, $T_M$ is translation covariant:
\begin{equation}
\label{eq:TM+n}
T_{M+n} = T_M + 2n,\quad n\in \mathbb{Z}.
\end{equation}
Let $(b_0,b_1,\ldots, b_{2g})$ be the block coordinates of $M$. By
the Krein-Adler theorem \cite{adler,GGM,krein}, the polynomial $H_M$
has no real zeros if and only if $b_{2j}-b_{2j-1}$ is even for all
$j=1,\ldots, g$, i.e., if all the finite $\rlap{\hskip0.25ex\raise0.2ex\hbox{$\bullet$}}\square$ segments of $M$
have even size. For such Maya diagrams, the potential $U_M$ is
non-singular and hence $T_M$ corresponds to a self-adjoint operator.
The bound states of the operator correspond to the empty boxes of $M$,
i.e., to $k\notin M$. It is precisely for such $M\in \mathcal{M}$ and
$k\notin M$ that the eigenfunction $\psi_{M,k}$ is
square-integrable. For such $M$ and $k$, the polynomial part of
$\psi_{M,k}$ is known as an exceptional Hermite polynomial \cite{GGM}.
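The Krein-Adler parity condition is straightforward to test from the block coordinates. A minimal sketch (hypothetical helper, not from the note):

```python
def is_nonsingular(block_coords):
    """Krein-Adler criterion in block coordinates (b_0, ..., b_{2g}):
    U_M is non-singular iff every finite filled block [b_{2j-1}, b_{2j})
    has even size, i.e. b_{2j} - b_{2j-1} is even for j = 1, ..., g."""
    b = list(block_coords)
    g = (len(b) - 1) // 2
    return all((b[2 * j] - b[2 * j - 1]) % 2 == 0 for j in range(1, g + 1))

# The genus-2 diagram of Figure 1 has a filled block [7, 10) of odd size 3,
# so its rational extension is singular; enlarging it to [7, 11) repairs this.
assert not is_nonsingular([2, 3, 5, 7, 10])
assert is_nonsingular([2, 3, 5, 7, 11])
```

Note that the trivial diagram ($B = (b_0)$, genus $0$) passes vacuously, recovering the ordinary harmonic oscillator.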
\section{Categorical Structure}
In this section, we define $\mathbb{MD}$, a category whose objects are Maya
diagrams and whose arrows are multi-flips, and ${\mathbb{REXT}}$, another
category whose objects are rational extensions and whose arrows are
intertwining operators (definition given below). We then
exhibit a functor $\mathbb{MD} \to {\mathbb{REXT}}$ that we use to classify ladder
operators.
In order to define composition of arrows, it will first be necessary
to generalize the notion of a multi-flip. A multi-set is a
generalized set object that allows for multiple instances of each of
its elements.
Let $\widehat{\cZ}_p$ denote the set of integer multi-sets of cardinality $p$
and $\widehat{\cZ} = \bigcup_p \widehat{\cZ}_p$ the set of finite integer multi-sets.
We express a multi-set $K\in \widehat{\cZ}$ as
\begin{equation}
\label{eq:Kki}
K = \{ k_1^{p_1},\ldots, k_q^{p_q}\} \,
\end{equation}
where $k_1,\ldots, k_q\in \mathbb{Z}$ are distinct, and where $p_i>0$ indicate
the multiplicity of element $k_i$. The cardinality is then given by
$p=p_1+\cdots +p_q$. The notion of a multi-flip extends naturally
from sets to multi-sets. Indeed, for $K\in \widehat{\cZ}$, we re-use
\eqref{eq:fKMdef} to define the multi-flip $f_{K}:\mathcal{M}\to \mathcal{M}$.
We say that $K$ is an even multi-set if all of its elements have an
even multiplicity. Since flips are involutions, $f_K$ is the identity
transformation if and only if $K$ is even. If $K$ is an even
multi-set then it has the unique decomposition $K=K_1\cup K_1$ where
$K_1$ has the same elements as $K$ but with the multiplicities divided
by $2$. More generally, every multi-set $K\in \widehat{\cZ}$ has a unique
decomposition of the form
\begin{equation}
\label{eq:KK0K1}
K = K_0 \cup K_1\cup K_1, \quad K_0\in \mathcal{Z},\; K_1 \in \widehat{\cZ},
\end{equation}
where $K_0$ is the set of integers that occur in $K$ with an odd
multiplicity. Again, since flips are involutions, we have
$f_K = f_{K_0}$.
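The decomposition \eqref{eq:KK0K1} amounts to splitting multiplicities by parity; an illustrative sketch (not from the note), with multi-sets modeled as `collections.Counter` objects:

```python
from collections import Counter

def decompose(K):
    """Decomposition K = K0 + K1 + K1 of a multi-set (Counter):
    K0 is the plain set of elements with odd multiplicity, and K1 holds
    the halved even remainder. Since flips are involutions, f_K = f_{K0}."""
    K0 = {k for k, p in K.items() if p % 2 == 1}
    K1 = Counter({k: p // 2 for k, p in K.items() if p >= 2})
    return K0, K1

K = Counter({3: 3, 5: 2, -1: 1})     # the multi-set {3^3, 5^2, (-1)^1}
K0, K1 = decompose(K)
assert K0 == {3, -1}
assert K1 == Counter({3: 1, 5: 1})
assert Counter(K0) + K1 + K1 == K    # recover K from the decomposition
```

The uniqueness of the decomposition is visible here: parity of each multiplicity determines $K_0$, and integer halving of the rest determines $K_1$.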
The objects of $\mathbb{MD}$ are labelled Maya diagrams $\mathcal{M}$, and the arrows
are pairs $(M,K)\in \mathcal{M}\times \widehat{\cZ}$.
The source of $(M,K)$ is $M$, and the target is $f_K(M)$.
Composition of morphisms is given by the union of multi-sets:
\[ (M_2,K_2) \circ (M_1,K_1) = (M_1, K_1 \cup K_2), \] where
$M_1\in \mathcal{M},\; K_1,K_2\in \widehat{\cZ},\; M_2 = f_{K_1}(M_1)$.
For differential operators $A,T_1,T_2$, we say that $A$ intertwines
$T_1,T_2$ if
\[ A T_1 = T_2 A. \] The objects of ${\mathbb{REXT}}$ are the rational
extensions $T_M,\; M\in \mathcal{M}$, and the arrows are monic differential
operators that intertwine two rational extensions. Observe that if
$A$ intertwines $T_1, T_2$ then so does $A\circ p(T_1)$, where $p(x)$
is an arbitrary polynomial. Given $T_1, T_2$, we say that $A$ is a
primitive intertwiner if it does not include a nontrivial right factor
$p(T_1)$.
For a Maya diagram
$M\in \mathcal{M}$ and a set $K\in \mathcal{Z}_p$, we define the operator
\[ A_{M,K}[y] = \frac{\operatorname{Wr}[\psi_{M,k_1},\ldots,
\psi_{M,k_p},y]}{\operatorname{Wr}[\psi_{M,k_1},\ldots, \psi_{M,k_p}]} .\] By
construction, $A_{M,K}$ is a monic differential operator of order
$p$. These intertwining operators have their origin in SUSYQM
(supersymmetric quantum mechanics), and
obey the intertwining relation
\[ A_{M_1,K} T_{M_1} = T_{M_2} A_{M_1,K},\quad M_2 = f_K(M_1),\quad
M_1,M_2\in \mathcal{M},\; K\in \mathcal{Z}.\]
It is possible to show that $A_{M,K}$ is a primitive intertwiner
between $T_M$ and $T_{f_K(M)}$. Moreover, it is possible to show
\cite[Proof of Theorem 3.10]{GFGUM} that every arrow in ${\mathbb{REXT}}$
has the form $A_{M,K}\circ p(T_M)$, where $A_{M,K}$ is primitive
(i.e., $K$ is a set), and $p(x)$ is a polynomial. We also note that
these intertwiners are translation invariant:
\begin{equation}
\label{eq:AMK+n}
A_{M+n,K+n} = A_{M,K},\quad n\in \mathbb{Z}.
\end{equation}
In order to describe the composition of intertwiners, we need to
extend the above definition to include multi-sets. For $K\in \widehat{\cZ}$,
let $K_0\in \mathcal{Z}$ and $K_1\in \widehat{\cZ}$ be as per \eqref{eq:KK0K1}. For
$M\in \mathcal{M}$, we now define
\begin{equation}
\label{eq:AMKK0}
A_{M,K} = A_{M,K_0}\circ \prod_{k\in K_1} (2k+1-T_M).
\end{equation}
In other words, if $K\in \widehat{\cZ}$ contains elements of higher
multiplicity, then $A_{M,K}$ is no longer primitive. The arrows of
${\mathbb{REXT}}$ are the operators $A_{M,K},\; M\in \mathcal{M},\; K\in \widehat{\cZ}$.
Composition of arrows is just the usual composition of differential
operators.
\begin{theorem}
\label{thm:functor}
The correspondence $M\mapsto T_M,\, M\in \mathcal{M}$ and
$(M,K)\mapsto A_{M,K},\, K\in \widehat{\cZ}$ is a covariant
functor $\mathbb{MD}\to {\mathbb{REXT}}$.
\end{theorem}
\begin{proof}
It suffices to observe that for $M_1\in \mathcal{M},\; K_1,K_2\in \widehat{\cZ}$ we
have
\[ A_{M_2,K_2}\circ A_{M_1,K_1} = A_{M_1,K_1\cup K_2},\quad M_2 =
f_{K_1}(M_1).\]
\end{proof}
\section{Ladder operators}
We define a ladder operator to be an intertwiner $A$ such that
\[ A T_M = (T_M+\lambda) A \] for some $M\in \mathcal{M}$ and constant
$\lambda$. Since $T_{M+n} = T_M+2n$, Theorem \ref{thm:functor}
implies that for every rational extension $T_M,\;M\in \mathcal{M}$, and
$n\in \mathbb{Z}$, there exists a ladder operator $A_{M,K}$, where
$K = (M +n)\ominus M$. By \eqref{eq:AMK+n} no generality is lost if
we index such ladder operators in terms of unlabelled Maya diagrams
$[M]\in \mathcal{M}/\mathbb{Z}$.
A recent result provides a characterization of translational
multi-flips \cite{GGM4} in terms of cyclic Maya diagrams. This
characterization makes it possible to establish the order of a ladder
operator \cite{GGM3}.
\begin{theorem}
\label{thm:M+nM}
Let $M\in \mathcal{M}$ and $n =1,2,\ldots$.
Then,
\begin{equation}
\label{eq:pngi}
|(M+n) \ominus M| = n + 2\sum_{i=0}^{n-1} g_i ,
\end{equation}
where $g_i$ is the genus of the Maya diagram
\[ M_i = \{ m\in \mathbb{Z} \colon mn + i \in M \},\quad i=0,1,\ldots, n-1.\]
\end{theorem}
\begin{proof}
Let $B_i\in \mathcal{Z}_{2g_i+1}$ be the block coordinates of $M_i$, and set
\[ B = \bigcup_{i=0}^{n-1} (nB_i+i) = \bigcup_{i=0}^{n-1} \{ nb + i
\colon b \in B_i\}. \] Since $B_i$ is the unique set such that
$f_{B_i}(M_i) = M_i+1$, it follows that $B$ is the unique set such
that $f_B(M) = M+n$. Therefore $B=(M+n)\ominus M$.
\end{proof}
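The counting in \eqref{eq:pngi} can be checked numerically. The sketch below is illustrative only; in particular, the finite encoding of a Maya diagram by its symmetric difference $D = M \ominus \mathbb{Z}_-$ is our own convention. It computes $|(M+n)\ominus M|$ directly and compares it with $n + 2\sum_i g_i$, reading off each genus from $|(M_i+1)\ominus M_i| = 2g_i+1$:

```python
def member(D, m):
    # A Maya diagram M is encoded by the finite set D = M ⊖ Z_-:
    # m ∈ M iff (m < 0) XOR (m ∈ D).
    return (m < 0) != (m in D)

def symdiff_size(D, n):
    # |(M+n) ⊖ M|; membership of M+n at m equals membership of M at m-n.
    lo = min(D | {0}) - abs(n) - 2
    hi = max(D | {0}) + abs(n) + 2
    return sum(member(D, m) != member(D, m - n) for m in range(lo, hi + 1))

def genus(D):
    # the flip set realizing f_B(M) = M+1 has 2g+1 elements
    return (symdiff_size(D, 1) - 1) // 2

def interleaved(D, n, i):
    # finite encoding of the interleaved diagram M_i = { m : mn + i ∈ M }
    lo = (min(D | {0}) - n) // n
    hi = (max(D | {0}) + n) // n + 1
    return {m for m in range(lo, hi + 1) if member(D, m * n + i) != (m < 0)}

# Example: M = Z_- \ {-2} ∪ {1}, i.e. D = {-2, 1}, shifted by n = 3
D, n = {-2, 1}, 3
lhs = symdiff_size(D, n)
rhs = n + 2 * sum(genus(interleaved(D, n, i)) for i in range(n))
print(lhs, rhs)  # 5 5
```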
Fix a Maya diagram $M\in \mathcal{M}$. An immediate consequence of Theorem
\ref{thm:M+nM} is the existence of a primitive ladder operator that
intertwines $T_M$ and $T_M+2n$ for every $n\in \mathbb{Z}$. The ladder
operator in question is $L_{n}:=A_{M,K}$, where $K=(M+n)\ominus M$.
The order of $L_{n}$ is given by \eqref{eq:pngi}. If $n>0$, then both
$L_{n}$ and $L_{1}^n$ intertwine $T_M$ and $T_M+2n$; it follows that
there must be a syzygy of the form
\[ L_{1}^n = L_{n} \circ p(T_M),\] where the roots of the polynomial $p$
are determined by \eqref{eq:AMKK0}.
The action of ladder operators on states is that of a lowering
or raising operator according to
\[ L_n[\psi_{M,k}] = C_{M,n,k} \psi_{M,k-n},\quad k\notin M, \] where
$C_{M,n,k}$ is zero if $\psi_{M,k-n}$ is not a bound state, i.e., if
$k-n\in M$. Otherwise, $C_{M,n,k}$ is a rational number whose explicit
form can be derived on the basis of \eqref{eq:hHdef}. As a particular
example, suppose that the index set of $M$ consists of positive
integers $0<k_1<\cdots < k_p$, that $n>0$, and that $k\notin M$. In
this case,
\[ C_{M,n,k} =
\begin{cases}
\prod_{i\in M\setminus(M+n)} (2i-2k) \times(k-n+1)_n\, 2^n &
\text{ if } k-n \notin M\\
0 &\text{ otherwise.}
\end{cases}
\]
\section{Examples} The articles \cite{MQ1,MQ2} considered a particular
class of ladder operators corresponding to Maya diagrams obtained by a
single state-adding transformation. Fix some $n=1,2,\ldots$, and let
$\tilde{M}_n$ be the Maya diagram with index set $\{ -n\}$, i.e., let
$\tilde{M}_n = \mathbb{Z}_{-} \setminus \{ -n \}$. We set
\[ \hat{M}_n = \tilde{M}_n+n = \mathbb{Z}_{-} \cup \{ 1,\ldots, n-1 \},\] and observe
that $\hat{M}_n$ has index set $\{ 1,\ldots, n-1\}$. Hence,
\[ L_n := A_{\tilde{M}_n,\{ -n,1,\ldots, n-1 \}},\] is an $n$th order
ladder operator that intertwines $T_{\tilde{M}_n}$ and $T_{\hat{M}_n}$. Ordering
the flips in ascending order, we obtain the following factorization
into first-order intertwiners:
\[ L_n = A_{\hat{M}_{n-1},\{n-1\}} \cdots
A_{\hat{M}_2,\{2\}} A_{\hat{M}_1,\{1\}} A_{\tilde{M}_n,\{-n\}} ;\] each flip
corresponds to a state-deleting transformation.
Let us also observe that $\tilde{M}_n$ is a genus 1 Maya diagram. It follows
that
\[ L_{1} := A_{\tilde{M}_n,\{-n,-n+1,0\}} \]
is a third-order ladder operator that intertwines
$\tilde{M}_n$ and $\tilde{M}_n+1$.
The composition $L_{1}^n$ is represented by the multi-set
\[ \bigcup_{j=0}^{n-1}\{ -n+j,-n+j+1,j\} = \{ -n,1,\ldots, n-1\} \cup
\{ (-n+1)^2,\ldots, (-1)^2, (0)^2 \},\] where the superscripts
indicate repetition (and not a square). The syzygy between $L_{n}$
and $L_{1}$ is therefore
\[ L_{1}^n = L_{n} \prod_{j=-n+1}^0 (2j+1-T_{\tilde{M}_n}) .\]
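As a concrete check (worked out here for illustration), the case $n=2$ of the multi-set identity and of the resulting syzygy reads
\[ \bigcup_{j=0}^{1}\{-2+j,-1+j,j\} = \{-2,-1,0\}\cup\{-1,0,1\}
   = \{-2,1\}\cup\{(-1)^2,(0)^2\}, \]
so that
\[ L_{1}^2 = L_{2}\,\bigl(-1-T_{\tilde{M}_2}\bigr)\bigl(1-T_{\tilde{M}_2}\bigr). \]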
1904.03120
\section{Introduction}
Electrons and muons carry a magnetic moment, which is correctly predicted by Dirac's original theory of the
electron to within a permille of precision. The proportionality factor between the spin and the magnetic
moment of the lepton $\ell$ is parameterized by the gyromagnetic ratio
$g$. In Dirac's theory, $g=2$, and one characterizes the deviation of
$g$ from this reference value by $a_\ell=(g-2)_\ell/2$. Testing the ability
of Quantum Electrodynamics (QED) to correctly predict this precision
observable has played a crucial role in the development of quantum
field theory in general. Presently, the achieved experimental
precision of 540\,ppb on the measurement of the anomalous magnetic
moment of the muon~\cite{Bennett:2006fi}, $a_\mu$, requires the effects of all three
interactions of the Standard Model (SM) of particle physics to be
included in the theory prediction. In fact, a tension of about 3.5
standard deviations exists between the SM prediction and the
experimental measurement. For reviews on the subject, we refer the
reader to~\cite{Jegerlehner:2009ry,Blum:2013xva,Jegerlehner:2017gek}.
Presently, the E989 experiment at Fermilab is performing a new direct
measurement of $a_\mu$~\cite{Grange:2015fou}, and a further experiment using a
different experimental technique is planned at J-PARC~\cite{Mibe:2011zz}. The final goal
of these experiments is to reduce the uncertainty on $a_\mu$ by a
factor of four. A reduction of the theory error is thus of paramount
importance, as the first results from the Fermilab experiment are
expected within the next few months. These will likely
reach the same precision as the current world average.
On the theory side, the precision of the SM prediction for $a_\mu$ is
completely dominated by hadronic uncertainties. The leading
hadronic contribution enters at second order in the fine-structure
constant $\alpha$ via the vacuum polarization and must be determined
at the few-permille level in order to match the upcoming precision
of the direct measurements of $a_\mu$. In this paper we undertake a
first-principles lattice QCD calculation of this hadronic
contribution (see~\cite{Meyer:2018til} for a recent review of previous
lattice results). A further hadronic effect, the light-by-light
scattering contribution which enters at third order in the
fine-structure constant, currently contributes at a comparable level
to the theory uncertainty budget and is being addressed
both by dispersive and lattice methods
(see~\cite{Blum:2016lnc,Asmussen:2018oip,Colangelo:2017urn} and references therein).
Our calculation of the hadronic vacuum polarization to the anomalous
magnetic moment of the muon, $a_\mu^{\rm hvp}$, fully includes the effects of
the up, down and strange quarks, while the charm quark (whose
contribution to $a_\mu^{\rm hvp}$ is small) is treated only at the valence
level. We use ensembles of SU(3) gauge field configurations generated
with an O($a$) improved Wilson quark action as part of the Coordinated
Lattice Simulations (CLS)
initiative~\cite{Bruno:2014jqa,Bali:2016umi}. In particular, the generation
of a physical-mass ensemble~\cite{Mohler:2017wnb} (labelled E250) was largely motivated
by the goal of improving the lattice determination of $a_\mu^{\rm hvp}$.
We use four different
lattice spacings to control the continuum limit, and the $(u,d,s)$
quark masses are varied at constant average quark mass in order to
perform a chiral interpolation to the physical values of the quark
masses~\cite{Bruno:2014jqa}. Our calculation is performed at equal up and
down quark masses, and no QED effects are included; however, in the
future both of these isospin-breaking effects will be taken into
account as corrections~\cite{Risch:2017xxe,Risch:2018ozp}.
Lattice QCD, which is formulated in Euclidean space, is well suited
for computing $a_\mu^{\rm hvp}$, since the latter only involves the two-point
function of the hadronic component of the electromagnetic current at
spacelike momenta~\cite{Blum:2002ii}. In this work we employ the
representation of $a_\mu^{\rm hvp}$ as a Euclidean-time integral over the
two-point function in the time-momentum
representation (TMR)~\cite{Bernecker:2011gh}, i.e.\ projected to
vanishing spatial momentum. This representation does not require a
parameterization of the vacuum polarization function and has a clear
spectral interpretation in terms of vector hadronic states in the
center-of-mass frame. The main difficulty in obtaining $a_\mu^{\rm hvp}$ with
good statistical precision is that it probes the TMR
correlator at Euclidean times well beyond 2\,fm, where its relative
precision deteriorates rapidly. Therefore a dedicated treatment of the
tail of the correlator which does not compromise the first-principles
nature of the calculation is needed. Here the spectral representation
of the correlator plays a central role.
An important source of systematic uncertainty is the correction to
$a_\mu^{\rm hvp}$ due to the use of a finite spatial torus. On our lattice
ensembles, this finite-size effect (FSE) mostly stems from the tail of
the isovector component of the TMR correlator. Thanks to precise
relations~\cite{Luscher:1991cf,Meyer:2011um} between the properties of
the discrete quantum states on the torus and the pion form factor at
timelike momenta, we are able to correct for the dominant part of the
FSE. Finally, the quark-disconnected diagrams, while making only a
few-percent contribution to $a_\mu^{\rm hvp}$, require a dedicated set of
calculations for their evaluation, which demand a large computing-time
investment.
The rest of this paper is organized as follows.
Section~\ref{sec:metho} describes the methodology followed in our calculation,
including the renormalization and improvement of the TMR correlator
and the treatment of the charm contribution. Section~\ref{sec:results}
presents our lattice data and the extraction of the observable $a_\mu^{\rm hvp}$
on each individual lattice ensemble. In section~\ref{sec:phys}, the
lattice-spacing and quark-mass dependence of these intermediate
results is fitted in order to arrive at our final result. Finally, we
compare the latter with phenomenological as well as other recent
lattice determinations in section~\ref{sec:discussion}.
\section{Methodology\label{sec:metho}}
\subsection{Time-momentum correlators}
We start by providing all relevant relations in the continuum and infinite-volume Euclidean theory.
In the time-momentum representation (TMR), the leading-order hadronic vacuum polarization contribution to $(g-2)_\mu$
is given by the convolution integral
\begin{equation}\label{eq:TMRamu}
a_\mu^{\rm hvp} = \left(\frac{\alpha}{\pi}\right)^2\int_0^{\infty}\,
dt\,\widetilde K(t)G(t),
\end{equation}
where an analytic expression for the QED kernel function
$\widetilde K(t)$ is given in Appendix~B of Ref.\ \cite{DellaMorte:2017dyu}, and
\begin{equation}\label{eq:Gx0def}
G(t)\,\delta_{kl} = -\int d^3x\,\Big\< J_k(t,\vec x)\;J_l(0) \Big\>
\end{equation}
is the spatially summed QCD two-point function of the electromagnetic current
$\vec J=\frac{2}{3}\bar u\vec\gamma u - \frac{1}{3} \bar d \vec\gamma d -\frac{1}{3} \bar s \vec\gamma s + \frac{2}{3} \bar c \vec\gamma c$.
In isospin-symmetric QCD, we can write
\begin{equation}\label{eq:Gdecomp}
G(t) = \frac{5}{9} G_l(t) + \frac{1}{9}G_s(t) + \frac{4}{9} G_c(t) + G_{\rm disc}(t),
\end{equation}
where $G_f(t)$ denotes a quark-connected contribution associated with flavour $f$
and $G_{\rm disc}(t)$ is the quark-disconnected contribution.
An alternative decomposition based on the isospin quantum number $I$ yields
\begin{equation}\label{eq:GdecompI}
G(t) = G^{I=1}(t) + G^{I=0}(t),\qquad
G^{I=1}(t) = \frac{1}{2} G_l(t).
\end{equation}
Physically, the latter decomposition is more transparent. In particular, at light pion masses
the dominant finite-size effects, as well as a logarithmic singularity as $m_\pi\to0$, only concern the isovector
contribution, $a_\mu^{{\rm hvp},I=1}$. Computationally however, the disconnected contributions
are obtained very differently from the connected ones: they are costly and amount only to a few percent of the total.
Therefore, in our numerical analysis there is an interesting interplay between the two choices of bases to compute $a_\mu^{\rm hvp}$.
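The flavour weights in \eqref{eq:Gdecomp} are simply the squared quark charges of the connected contractions; a quick exact-arithmetic check (illustrative, not part of the paper's analysis):

```python
from fractions import Fraction as F

# electromagnetic charges of the quark flavours
q = {'u': F(2, 3), 'd': F(-1, 3), 's': F(-1, 3), 'c': F(2, 3)}

# connected weight for each flavour is q_f^2; degenerate u and d combine
w_light = q['u']**2 + q['d']**2
w_strange = q['s']**2
w_charm = q['c']**2
print(w_light, w_strange, w_charm)  # 5/9 1/9 4/9

# isovector current V^3 = (u-bar gamma u - d-bar gamma d)/2: its charge
# factor (q_u - q_d)/2 = 1/2 is behind G^{I=1} = (1/2) G_l
assert (q['u'] - q['d']) / 2 == F(1, 2)
```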
With $m_\mu$ the muon mass, the kernel behaves as $\widetilde K(t)\sim \frac{\pi^2}{9}m_\mu^2 t^4$
for $t\ll m_\mu^{-1}$ and as $\widetilde K(t)\sim 2\pi^2 t^2$ for $t\gg m_\mu^{-1}$.
Since the lattice data for the correlator $G(t)$ is in lattice units, the muon mass must be known in those units, $am_\mu$.
The knowledge of the lattice spacing in $\mathrm{GeV}^{-1}$ thus plays a crucial role in a
precision determination of $a_\mu^{\rm hvp}$~\cite{DellaMorte:2017dyu,DellaMorte:2017khn}.
There are then two ways to proceed.
In lattice QCD, where often the physical quark masses are reached only after an extrapolation or interpolation,
$a_\mu^{\rm hvp}$ can either be calculated using the fixed, physical value of $m_\mu=105.66\,\mathrm{MeV}$; or the muon mass
can be rescaled by a quantity with dimension of mass known experimentally~\cite{Feng:2011zk}.
In our calculation, we have explored both paths. In our final results,
we adopt the ``rescaling strategy'' for the connected light contribution.
As a rescaling quantity, we choose the pion decay constant $f_\pi$,
so that we set\footnote{We use the normalization convention $f_\pi\simeq 92$\,MeV.}
\begin{equation}
am_\mu = \Big(\frac{m_\mu}{f_\pi}\Big)_{\rm pheno}\cdot (af_\pi)_{\rm lattice} = 1.144\cdot (af_\pi)_{\rm lattice}
\end{equation}
on every lattice ensemble. Our choice is motivated, first, by $f_\pi$
being determined precisely and reliably, both in phenomenology and on the
lattice; and secondly, since $f_\pi$ increases with the pion mass,
this choice has the effect of making the $m_\pi$ dependence of $a_\mu^{\rm hvp}$ weaker.
To intuitively understand the effect of the rescaling, it is instructive to consider
the calculation of the anomalous magnetic moment of the electron; in this case,
obtaining $a_e^{\rm hvp} = ({4\alpha^2}/{3}) m_e^2 \Pi_1$
requires computing the time moment $\Pi_1\equiv ({1}/{12})\int_0^\infty dt\;t^4\,G(t)$.
Thus the rescaling simply amounts to computing the dimensionless quantity $f_\pi^2 \Pi_1$,
and converting the result into $a_e^{\rm hvp}$ by using the phenomenological value of $(m_e^2/f_\pi^2)$.
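The moment $\Pi_1$ can be illustrated on a toy correlator (a single vector state, i.e.\ the spectral sum truncated to one term; the numbers below are invented): for $G(t)=\frac{Z^2}{2E}e^{-Et}$ one has $\int_0^\infty t^4 e^{-Et}\,dt = 24/E^5$ and hence $\Pi_1 = Z^2/E^6$, which a discretized time integral reproduces:

```python
import numpy as np

Z, E = 0.1, 0.77          # toy amplitude and energy (illustrative numbers)

t = np.linspace(0.0, 200.0, 2_000_001)   # fine grid; integrand decays fast
G = Z**2 / (2 * E) * np.exp(-E * t)      # single-state model correlator
f = t**4 * G
h = t[1] - t[0]
# trapezoidal estimate of Π₁ = (1/12) ∫ t^4 G(t) dt
Pi1 = h * (f.sum() - 0.5 * (f[0] + f[-1])) / 12.0

print(Pi1, Z**2 / E**6)  # numerically equal to the analytic value Z²/E⁶
```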
\subsection{Simulation parameters\label{sec:simpar}}
\begin{table}[t!]
\caption{Parameters of the simulations: $\beta=6/g_0^2$ is the bare gauge coupling, $\kappa_{l,s}$ are the hopping parameters
of the light and strange quarks, $a$ is the lattice spacing and $(L,T)$ are the lattice dimensions in space and time.
Ensembles E250 and B450 have periodic boundary conditions in time, all others have open boundary conditions.
The last column contains the number of gauge configurations used.
Ensembles with an asterisk are not included in the final analysis but are used to control finite-size effects.}
\vskip 0.1in
\begin{centering}
{\footnotesize
\begin{tabular}{lcl@{\hskip 01em}c@{\hskip 01em}l@{\hskip 01em}l@{\hskip 01em}c@{\hskip 01em}c@{\hskip 01em}c@{\hskip 01em}c@{\hskip 01em}l}
\hline
id & $\quad\beta\quad$ & $L^3\times T$ & $a\,[{\rm{fm}}]$ & $~~~\kappa_l$ & $~~~\kappa_s$ &
$m_{\pi}\,[\mathrm{MeV}]$ & $m_{K}\,[\mathrm{MeV}]$ & $m_{\pi}L$ & $L\,[{\rm{fm}}]$ & conf. \\
\hline
H101 & 3.40 & $32^3\times96$ & 0.08636 & 0.136760 & 0.136760 & 416(5) & 416(5) & 5.8 & 2.8 & 2000 \\
H102 & & $32^3\times96$ & & 0.136865 & 0.13654934 & 354(5) & 438(4) & 5.0 & 2.8 & 1900 \\
H105$^*$ & & $32^3\times96$ & & 0.136970 & 0.13634079 & 284(4) & 460(4) & 3.9 & 2.8 & 2800 \\
N101 & & $48^3\times128$ & & 0.136970 & 0.13634079 & 282(4) & 460(4) & 5.9 & 4.1 & 1500 \\
C101 & & $48^3\times96$ & & 0.137030 & 0.13622204 & 221(2) & 472(8) & 4.7 & 4.1 & 2600 \\
\hline
B450 & 3.46 & $32^3\times64$ & 0.07634 & 0.136890 & 0.136890 & 416(4) & 416(4) & 5.2 & 2.4 & 1500 \\
S400 & & $32^3\times96$ & & 0.136984 & 0.13670239 & 351(4) & 438(5) & 4.3 & 2.4 & 2800\\
N401 & & $48^3\times128$ & & 0.137062 & 0.13654808 & 287(4) & 462(5) & 5.3 & 3.7 & 1100 \\
\hline
H200$^*$ & 3.55 & $32^3\times96$ & 0.06426 & 0.137000 & 0.137000 & 419(5) & 419(5) & 4.4 & 2.1 & 2000 \\
N202 & & $48^3\times128$ & & 0.137000 & 0.137000 & 410(5) & 410(5) & 6.4 & 3.1 & 900 \\
N203 & & $48^3\times128$ & & 0.137080 & 0.13684028 & 345(4) & 441(5) & 5.4 & 3.1 & 1500 \\
N200 & & $48^3\times128$ & & 0.137140 & 0.13672086 & 282(3) & 463(5) & 4.4 & 3.1 & 1700 \\
D200 & & $64^3\times128$ & & 0.137200 & 0.13660175 & 200(2) & 480(5) & 4.2 & 4.1 & 2000 \\
E250 & & $96^3\times192$ & & 0.137233 & 0.13653663 & 130(1) & & 4.1 & 6.2 & 500 \\
\hline
N300 & 3.70 & $48^3\times128$ &0.04981 & 0.137000 & 0.137000 & 421(4) & 421(4) & 5.1 & 2.4 & 1700 \\
N302 & & $48^3\times128$ & & 0.137064 & 0.13687218 & 346(4) & 458(5) & 4.2 & 2.4 & 2200 \\
J303 & & $64^3\times192$ & & 0.137123 & 0.13675466 & 257(3) & 476(5) & 4.2 & 3.2 & 600 \\
\hline
\end{tabular} }
\end{centering}
\label{tab:simul}
\end{table}
Our work is based on a subset of the Coordinated Lattice Simulations
(CLS) ensembles with $N_{\rm f} = 2+1$ dynamical quarks. They are generated~\cite{Bruno:2014jqa}
using the openQCD suite\footnote{\tt
http://luscher.web.cern.ch/luscher/openQCD/} \cite{Luscher:2012av} and are based on the
O($a$)-improved Wilson-Clover action for fermions, with the parameter
$c_{\rm sw}$ determined non-perturbatively in Ref.\ \cite{Bulava:2013cta}, and the tree-level
O($a^2$) improved L\"uscher-Weisz gauge action. The ensembles used in this analysis
were generated at a constant value of the average bare quark
mass such that the improved bare coupling $\tilde g_0$ is kept constant
along the chiral trajectory~\cite{Bruno:2014jqa}. In particular, five of the ensembles are
at the SU(3)-symmetric point, $m_u=m_d=m_s$. The parameters of the
simulations are summarized in Table \ref{tab:simul}.
Results are obtained at four values of the lattice spacing in the
range $a=0.050 - 0.086$\,fm. The scale setting was performed in Ref.\ \cite{Bruno:2016plf}
using a linear combination of the pion and kaon decay constants with a
precision of 1\%.
The pion masses used in our determination of $a_\mu^{\rm hvp}$ lie in the range $m_\pi\approx 130-420$\,MeV.
All the ensembles included in the
final analysis satisfy $m_\pi L > 4$. Furthermore, at two values of the pion mass ($m_\pi
= 280$ and 420\,MeV), two ensembles with the same bare lattice
parameters but different volumes are used to study finite-size
effects. These ensembles with smaller volumes are not included in the
final analysis and are marked by an asterisk in Table \ref{tab:simul}.
All ensembles have periodic boundary conditions (BC) in space.
In the time direction, ensembles E250 and B450 have periodic BCs,
while all others have open temporal BCs.
The choice of open boundary conditions was
made in order to address the issue of long auto-correlation times
associated with the topological charge at small lattice
spacing~\cite{Luscher:2011kk}.
Our use of ensembles with open BCs constitutes part of our motivation
for employing correlators in the time-momentum representation.
The boundary couples to a tower of states with
vacuum quantum numbers. Therefore, in order to extract vacuum
correlators, sources and sinks of correlation functions should be
placed at a sufficient Euclidean-time separation away from the
boundaries\footnote{In a large volume, the energy of the first excited
state emanating from the boundary is expected to be $2m_\pi$.}.
On the ensembles with periodic temporal BCs on the other hand, we exploit the translation invariance
in time to increase statistics.
For all ensembles, except E250, the TMR correlation functions
are computed using point sources, randomly distributed in space and
placed at the center of the lattice in the time direction. As described in the
next subsection, we use the local vector current at the source and
both the local and the conserved vector currents at the sink. For the
ensemble E250, propagators are estimated using stochastic sources,
with noise partitioning in spin, colour and
time~\cite{Wilcox:1999ab,Foley:2005ac}. Each source has support on a
single, randomly chosen timeslice. To improve statistics, the TMR
correlator in Eq.\ (\ref{eq:Gx0def}) is averaged over the three spatial
directions. Errors are estimated throughout the calculation using the
jackknife procedure with blocking in order to take into account
auto-correlation effects.
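The blocked jackknife mentioned above can be sketched generically (this is an illustration of the procedure, not the paper's analysis code): the per-configuration data are first averaged into blocks to tame auto-correlations, then one block at a time is deleted.

```python
import numpy as np

def blocked_jackknife(samples, block_size):
    """Jackknife error of the mean with blocking.

    samples: 1D array of per-configuration measurements.
    block_size: number of consecutive configurations per block."""
    n = len(samples) // block_size
    blocks = samples[: n * block_size].reshape(n, block_size).mean(axis=1)
    total = blocks.sum()
    theta = (total - blocks) / (n - 1)     # delete-one-block averages
    mean = blocks.mean()
    err = np.sqrt((n - 1) / n * np.sum((theta - mean) ** 2))
    return mean, err

rng = np.random.default_rng(1)
data = rng.normal(0.0, 1.0, size=10_000)   # iid toy data
mean, err = blocked_jackknife(data, block_size=20)
# for iid data the jackknife error reproduces sigma/sqrt(N) ≈ 0.01
print(mean, err)
```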
In addition to the direct calculation of the TMR
correlators, the auxiliary calculation of the $\pi\pi$ $I=\ell=1$
scattering phase plays an important role in our determination of
$a_\mu^{\rm hvp}$. In Ref.\ \cite{Andersen:2018mau}, it has been determined on
ensembles C101, N401, N200, D200 and J303. On all these ensembles except
C101, the pion form factor at timelike kinematics has also been
determined in~\cite{Andersen:2018mau}.
As compared to the latter reference,
the number of gauge configurations used for our spectroscopy calculation on
ensemble D200 has roughly been doubled.
Additionally, we have performed a spectroscopy calculation
on ensemble N203 using about 200 gauge configurations.
We have computed the
quark-disconnected contribution to $a_\mu^{\rm hvp}$ on ensembles N401, N203,
N200, D200 and N302. This selection provides us with a handle on the
discretization effects at $m_\pi\simeq 345\,\mathrm{MeV}$ and $m_\pi\simeq 285\,\mathrm{MeV}$,
and allows us to investigate the chiral behaviour of the disconnected contribution
via the fixed lattice-spacing sequence of ensembles N203, N200, D200.
The disconnected quark loops are computed using four-dimensional,
hierarchically probed noise sources~\cite{Stathopoulos:2013aci} with
512 Hadamard vectors. More technical details on our implementation can
be found in~\cite{Djukanovic:2019jtp}.
\subsection{Lattice correlators, renormalization and O($a$) improvement\label{sec:improvt}}
There are two commonly used discretizations of the vector current in Wilson lattice QCD,
the local and the conserved current. For a single quark flavour $q$, their expressions are
\begin{eqnarray}
V_\mu^{\scriptscriptstyle\rm L}(x) &=& \bar q(x) \gamma_\mu q(x),
\\
\label{eq:Jdef}
V_\mu^{\scriptscriptstyle\rm C} ( x ) &=& \frac{1}{2} \Big(\bar q ( x + a\hat\mu ) ( 1 + \gamma_\mu ) U_\mu^\dagger ( x ) q ( x )
- \bar q ( x )( 1 - \gamma_\mu ) U_\mu ( x ) q ( x + a\hat\mu ) \Big).
\end{eqnarray}
In our calculation of correlation functions, we always place the local vector current at the origin
in Eq.\ (\ref{eq:Gx0def}); at point $x$, we use either the local or the
conserved vector current. This provides us with two discretizations of the TMR correlator which share the same continuum limit.
The conserved vector current has the advantage of not undergoing any renormalization or flavour-mixing.
As for the flavour structure, we note that the electromagnetic current can be decomposed in the SU(3) Gell-Mann basis as
$J_\mu = V_\mu^3 + \frac{1}{\sqrt{3}} V_\mu^8$,
where $V_\mu^a = \bar\psi \gamma_\mu \frac{\lambda^a}{2}\psi$, with $\bar\psi=(\bar u,\;\bar d,\;\bar s)$. Therefore, the
local current only requires the \emph{non-singlet} renormalization factor $Z_{\rm V}$. The charm-quark contribution is treated separately, at the ``partially
quenched'' level; our treatment of this (small) contribution is described in the next subsection.
We have implemented the Symanzik O($a$) improvement programme as described in Ref.\ \cite{Luscher:1996sc}.
Since our lattice action is O($a$) improved, we now describe the improvement and renormalization of the vector currents
in order to consistently carry out the Symanzik programme.
The first step is to add to the local vector current an additive O($a$) counterterm with a tuned coefficient $c_{\rm V}^{\,\scriptscriptstyle\rm L}$
(respectively $c_{\rm V}^{\,\scriptscriptstyle \rm C}$ for the conserved current) compensating chiral-symmetry violating effects in on-shell correlation functions,
\begin{equation}\label{eq:Vimp}
(V_\mu^{{\scriptscriptstyle\rm L},a})^{\rm I}(x) = V_\mu^{{\scriptscriptstyle\rm L},a}(x) + a\,c_{\rm V}^{\,\scriptscriptstyle\rm L}\; \widetilde\partial_\nu \Sigma_{\mu\nu}^a(x),
\end{equation}
where $\widetilde\partial_\nu$ denotes the symmetric lattice derivative\footnote{For the charm, we actually make a different choice described at the end
of this subsection.} and
$\Sigma_{\mu\nu}^a(x) = -\frac{1}{2}\bar\psi(x) [\gamma_\mu,\gamma_\nu]\frac{\lambda^a}{2}\psi(x)$.
The second step, which is only required for the local current, is to take into account the following
renormalization pattern,
\begin{eqnarray}
\widehat V_\mu^{{\scriptscriptstyle\rm L,3}} = Z_3 (V_\mu^{{\scriptscriptstyle\rm L,3}})^{\rm I},
&\qquad&
\widehat V_\mu^{{\scriptscriptstyle\rm L,8}} = Z_8 (V_\mu^{{\scriptscriptstyle\rm L,8}})^{\rm I} + Z_{80} V_\mu^{{\scriptscriptstyle\rm L,0}}.
\end{eqnarray}
We denote by $V_\mu^{{\scriptscriptstyle\rm L,0}} = \frac{1}{2}\bar\psi \gamma_\mu \psi$
the flavour-singlet current, and the mass-dependent renormalization
factors are given by~\cite{Bhattacharya:2005rb,Gerardin:2018kpy}
\begin{eqnarray}
Z_3 &=& Z_{\rm V}(\tilde g_0)\; (1+ 3\overline{b}_{\rm V}\; am_{\rm q}^{\rm av} + b_{\rm V}\;am_{{\rm q},l}),
\\
Z_8 &=& Z_{\rm V}(\tilde g_0) \Big(1+ 3 \overline{b}_{\rm V}\; am_{\rm q}^{\rm av} + \frac{b_{\rm V}}{3}\; a(m_{{\rm q},l}+2m_{{\rm q},s})\Big),
\\
Z_{80} &=& Z_{\rm V}(\tilde g_0) ({\textstyle\frac{1}{3}}b_{\rm V}+f_{\rm V})\; \frac{2}{\sqrt{3}}a(m_{{\rm q},l}-m_{{\rm q},s}).
\end{eqnarray}
Here $(m_{{\rm q},l},m_{{\rm q},l},m_{{\rm q},s})$ are the bare subtracted quark masses, $m_{\rm q}^{\rm av}$ is their average and $\tilde g_0$ is the
O($a$) improved bare coupling.
We note that the mixing coefficient $Z_{80}$ is of order $a$ and vanishes in the SU(3)-flavour-symmetric limit.
We use the values of the renormalization factor $Z_{\rm V}$, the critical hopping parameter $\kappa_{\rm crit}$, as well as the improvement coefficients
$b_{\rm V}$, $\overline{b}_{\rm V}$, $c_{\rm V}^{\,{\scriptscriptstyle\rm L}}$ and $c_{\rm V}^{\,{\scriptscriptstyle\rm C}}$,
which are functions of the bare coupling $g_0$, determined recently in~\cite{Gerardin:2018kpy}.
There it was shown that the obtained values of $Z_{\rm V}$ differ by percent-level
O($a^2$) effects from an independent high-precision determination \cite{DallaBrida:2018tpn}.
The improvement coefficient $f_{\rm V}(g_0)$, which is of order $g_0^6$ and only affects the isoscalar contribution
to $a_\mu^{\rm hvp}$, is neglected. We estimate that the systematic error incurred by this approximation is at present negligible.
Strictly speaking, the connected strange correlator taken in isolation requires an independent, partially quenched improvement coefficient
in the mass-dependent part of the renormalization factor in order to be consistent with O($a$) improvement;
however, we have neglected this effect.
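The renormalization pattern above is straightforward to encode. The sketch below (with invented coefficient values, not the paper's) checks the stated property that $Z_3 = Z_8$ and the mixing coefficient $Z_{80}$ vanishes at the SU(3)-symmetric point:

```python
def Z_factors(ZV, bV, bVbar, fV, a_ml, a_ms):
    """Mass-dependent renormalization factors, following the formulas in
    the text. All numerical inputs here are illustrative, not the
    paper's determined values."""
    m_av = (2 * a_ml + a_ms) / 3           # average bare subtracted quark mass
    Z3 = ZV * (1 + 3 * bVbar * m_av + bV * a_ml)
    Z8 = ZV * (1 + 3 * bVbar * m_av + bV * (a_ml + 2 * a_ms) / 3)
    Z80 = ZV * (bV / 3 + fV) * (2 / 3**0.5) * (a_ml - a_ms)
    return Z3, Z8, Z80

# SU(3)-symmetric point: Z3 = Z8 and the singlet mixing vanishes
print(Z_factors(0.71, 1.5, 0.1, 0.0, a_ml=0.003, a_ms=0.003))
```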
The desired quantity $a_\mu^{\rm hvp}$ is obtained using Eq.\ (\ref{eq:TMRamu}),
where the integral is replaced by a sum over timeslices. Note that in
the improvement terms entering the TMR correlator, only the temporal
derivative of the tensor current contributes. For the connected light
and strange contributions, we have compared the use of the symmetric
lattice derivative in Eq.\ (\ref{eq:Vimp}) with an alternative implementation
where an integration by parts is used in order to apply the temporal
derivative on the QED kernel $\widetilde K(t)$, and found the
difference to be negligible; therefore we have used the symmetric lattice derivative throughout.
For the charm contribution, however, we have
found it advantageous to use the discrete derivative on the `away'
side, i.e.\ in such a way that the vector-tensor correlator is not
evaluated at a shorter time separation than the vector-vector
correlator itself~\cite{Harris:2015vfa}. Finally, we remark that we
do not include the O($a^2$) term consisting of the correlation of two
tensor currents.
\subsection{Treatment of the charm contribution}
We treat the charm quark at the partially quenched level: it does not
appear in the simulated action, nor do we include the contribution of
quark-disconnected diagrams containing charm loops. Given that the charm
contribution is about two percent of the total, these approximations
appear fully sufficient at our present level of precision.
The first task is to tune the value of the bare charm quark mass on
each lattice ensemble. The mass of the ground state pseudoscalar
$c\bar s$ meson is computed for several values of $\kappa_c$, using
stochastic sources with colour, spin and time dilution. The value of
$\kappa_c$ used in the calculation of $a_\mu^{\rm hvp}$ is then obtained from a
linear interpolation of the squared mass of the lightest $c\bar s$
meson in $1/\kappa_c$ to the point where this mass equals the
experimental value of the $D_s$ meson mass.
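The tuning step can be sketched generically: compute the $c\bar s$ pseudoscalar mass at a few trial values of $\kappa_c$, interpolate its square linearly in $1/\kappa_c$, and solve for the target mass. All numbers below are invented for illustration only.

```python
import numpy as np

# invented trial values: (kappa_c, c-sbar pseudoscalar mass in GeV)
kappas = np.array([0.1270, 0.1280, 0.1290])
masses = np.array([2.05, 1.97, 1.89])

mDs_phys = 1.9683  # experimental D_s meson mass in GeV

# linear fit of m^2 in 1/kappa, then solve m^2(1/kappa) = mDs_phys^2
slope, intercept = np.polyfit(1.0 / kappas, masses**2, 1)
kappa_c = 1.0 / ((mDs_phys**2 - intercept) / slope)
print(kappa_c)  # lands between the two nearest trial values
```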
We perform a dedicated determination of the multiplicative,
mass-dependent renormalization factor $Z_{\rm V}^c$ for the local
charm current on every lattice ensemble. The determination is based
on requiring the charm quantum number of the pseudoscalar $c\bar s$ meson to be
exactly unity. It follows the method used in \cite{Gerardin:2018kpy}
for the light isovector current, and a similar method was already used
in \cite{DellaMorte:2017dyu}. As for the improvement coefficients
$c_{\rm V}^{\,{\scriptscriptstyle\rm L}}$ and $c_{\rm V}^{\,{\scriptscriptstyle\rm C}}$, we use the same values as for
the $u,d,s$ quark flavours. The results for $\kappa_c$ and $Z_{\rm
V}^c$ are given in Table \ref{tab:resultsSC}, while the individual
pseudoscalar $c\bar s$ meson masses used for the determination of $\kappa_c$ are
collected for reference in Table \ref{tab:charmTableLong} of Appendix~\ref{sec:apda}.
\subsection{Infrared aspects of $a_\mu^{\rm hvp}$: correlator tails, finite-size effects and the chiral limit \label{sec:IR}}
There are a number of aspects of the calculation of $a_\mu^{\rm hvp}$ related to
the long-distance physics of vector correlators that are best
discussed together. Here, we summarize our understanding of these
issues before applying it to the treatment of lattice data.
In preparation, recall that the TMR correlator can be written, via the
spectral decomposition in finite volume, as the sum of the (positive)
contributions of individual vector states. In particular, only
isovector vector states contribute to the correlator
\begin{equation}\label{eq:specsum}
G^{I=1}(t) = \sum_{n=0}^\infty \frac{Z_n^2}{2E_n} e^{-E_n t},
\end{equation}
where the amplitudes $Z_n$ are real, and the discrete, ordered energies $E_n$ are
real and positive. A similar expression holds for the isoscalar correlator $G^{I=0}(t)$.
\subsubsection*{Controlling the long-time tail of the TMR correlators}
The contribution of the tail of the correlator to $a_\mu^{\rm hvp}$ is enhanced
by the QED kernel. Yet the correlator is affected by a growing
statistical error, as well as a large relative finite-size effect. We
discuss these two issues in turn.
In order to handle the tail of the correlators, two types of
treatment have been proposed. Both are based on the fact that at
large Euclidean times, a few terms in the sum of
Eq.~(\ref{eq:specsum}) saturate the correlator to a high degree of
precision, which was one of the motivations for introducing the time-momentum
representation~\cite{Bernecker:2011gh}. In the first type of
treatment, one explicitly constructs an extension of the correlator for
$t>t_c$, motivated by the spectral representation
(\ref{eq:specsum}). The simplest incarnation of this method, partly
used in our earlier calculation~\cite{DellaMorte:2017dyu}, is to keep
only the lightest of those states and thus to perform a
one-exponential fit to the correlator for Euclidean times around
$t_c$. When a dedicated spectroscopy calculation is available, several
energy levels $E_n$ as well as the overlaps $Z_n$ can be used, so that
the summed contributions of these states already saturate the TMR
correlator at smaller Euclidean times.
A second type of treatment consists in bounding the Euclidean
correlator from above and
below~\cite{CLehnerBounding,Borsanyi:2017zdw,Blum:2018mom}, exploiting
the positivity of the prefactors $Z_n^2/(2E_n)$,
\begin{equation}\label{eq:bndg}
0\leq G(t_c) e^{-E_{\rm eff}(t_c)(t-t_c)} \leq G(t) \leq G(t_c) e^{-E_N(t-t_c)}, \qquad t\geq t_c,
\end{equation}
where $N=0$ in the simplest variant,
and $E_{\rm eff}(t)\equiv -\frac{d}{dt}\log G(t)$ is the ``effective mass'' of the correlator.
As a refined variant of this method, a dedicated spectroscopy calculation delivering the energies and matrix elements
of the $N$ lowest-lying states allows one to improve
the control over the tail by applying the bound Eq.\ (\ref{eq:bndg}) to the subtracted correlator
\begin{equation}\label{eq:Gsub}
\widetilde G(t) = G(t) - \sum_{n=0}^{N-1} \frac{Z_n^2}{2E_n} e^{-E_n t}.
\end{equation}
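The bounding procedure can be illustrated with a minimal Python sketch on a synthetic two-state correlator obeying the spectral form of Eq.\ (\ref{eq:specsum}); the energies and amplitudes below are invented for the example:

```python
import numpy as np

# Synthetic two-state correlator G(t) = sum_n Z_n^2/(2 E_n) exp(-E_n t)
# (energies and amplitudes in lattice units, purely illustrative).
E  = np.array([0.35, 0.55])
Z2 = np.array([0.010, 0.030])
t  = np.arange(1, 40)

def G(t):
    return np.sum(Z2[:, None] / (2 * E[:, None]) * np.exp(-E[:, None] * t), axis=0)

g = G(t)
tc = 15
# Discrete effective mass E_eff(tc) = log(G(tc)/G(tc+1)).
E_eff = np.log(g[tc - 1] / g[tc])

tail  = t[t > tc]
lower = g[tc - 1] * np.exp(-E_eff * (tail - tc))   # lower bound on G(t)
upper = g[tc - 1] * np.exp(-E[0] * (tail - tc))    # upper bound, N = 0 variant
```

The lower bound follows from the log-convexity of a positive spectral sum, the upper bound from $E_n \geq E_0$; both sandwich the exact correlator for $t \geq t_c$.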
A challenge one eventually faces in exploiting lattice spectroscopy information is that
the number of states required to saturate the TMR correlator at a given $t_c$
increases with decreasing pion mass and (roughly proportionally) with the volume.
However, for the ensembles used in this work, the number of states needed is at most four.
\subsubsection*{Finite-size effects on $a_\mu^{\rm hvp}$ in the time-momentum representation}
We now come to the closely related issue of the finite-size effect
on the observable $a_\mu^{\rm hvp}$ calculated in the time-momentum representation.
At asymptotically large volumes, the finite-size effect is of order $e^{-m_\pi L}$ and can be computed
in chiral perturbation theory~\cite{Aubin:2006xv,Francis:2013fzp,Aubin:2015rzx}. At low
pion masses, the leading finite-size effect is expected to come from
the $\pi\pi$ channel, and thus affects the isovector channel only, $G^{I=1}(t)$.
Working in the flavour decomposition of Eq.\ (\ref{eq:Gdecomp}), we take this observation into account
by applying $10/9$ of the isovector finite-size correction to the connected light-quark contribution,
and $-1/9$ of the same correction to the disconnected contribution.
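In the isospin limit these weights follow from the observation that the isovector correlator is purely quark-connected: writing $G^{ll}$ for the connected correlator of a single light flavour with unit charge, standard charge-factor counting gives (a short consistency check)
\begin{equation*}
G^{I=1} = \tfrac{1}{2}\,G^{ll}, \qquad \tfrac{5}{9}\,G^{ll} = \tfrac{10}{9}\,G^{I=1},
\end{equation*}
so that a correction to $G^{I=1}$ enters the connected light-quark contribution with weight $10/9$, while the long-distance relation $G_{\rm disc}\to-\tfrac{1}{9}G^{I=1}$ fixes the weight $-1/9$ of the disconnected part; the two weights add up to unity, as they must.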
As for the finite-size effect on the correlator as a function of
Euclidean time, it has been pointed
out~\cite{Bernecker:2011gh,Francis:2013fzp} that, for a given spatial
box size $L$, the tail of the correlator is affected by an
unsuppressed finite-size effect.
One may define a time $t_i$ beyond which the finite-size effect becomes
sizeable. While $t_i$ grows with $L$, we find that the
overall finite-size effect on $a_\mu^{\rm hvp}$ is dominated by the tail
in our present calculation.
For $m_\pi L = 4$--$5$, the tail of the finite-volume isovector correlator
is accurately described by the contribution of a handful of energy
eigenstates; this point will be illustrated in
Fig.\ \ref{fig:GEVPD200}. On the other hand, the tail of the
infinite-volume correlator can be obtained from the timelike pion form
factor. Thus, knowledge of this form factor allows one to correct the
tail of the isovector correlator~\cite{Bernecker:2011gh}. In this
work, we apply the same finite-size correction method as in our
previous calculation~\cite{DellaMorte:2017dyu}, parameterizing the
pion form factor with the Gounaris-Sakurai (GS)
model~\cite{Gounaris:1968mw}. While too simplistic a model for a
study of the form factor for its own
sake~\cite{Feng:2014gba,Andersen:2018mau}, we expect it to be
sufficient for the purpose of reducing the residual finite-size
effects to a level that is small compared to our current statistical
precision. We emphasize that we only use the GS parametrization of
the pion form factor for the finite-size
correction, and not for the treatment of the tail of the correlators.
\subsubsection*{The chiral dependence of $a_\mu^{\rm hvp}$}
The TMR correlator for non-interacting pions was given in Ref.\ \cite{Francis:2013fzp}.
For massless pions, it is given by $G(t) = {1}/(24\pi^2
|t|^3)$; combined with the asymptotic form of the QED kernel for a
finite muon mass, $\widetilde K(t) \sim 2\pi^2 t^2$, this contribution
generates a logarithmic divergence, which is regulated by a small
but nonzero pion mass and then yields
\begin{equation}\label{eq:amu_chpt1}
a_\mu^{\rm hvp} {\sim} \frac{\alpha^2}{24\pi^2}\log \frac{m_\mu^2}{4m_\pi^2},
\qquad m_\pi\to0,~m_\mu {\rm ~fixed}.
\end{equation}
This result and further terms in the expansion have been derived in~\cite{Golterman:2017njs},
where the systematics of the chiral extrapolation has been studied in detail.
The asymptotic form (\ref{eq:amu_chpt1}) only becomes a decent approximation for $m_\pi/m_\mu$ well below
$1/10$. Thus this logarithmic divergence is largely
irrelevant when describing the pion-mass dependence of $a_\mu^{\rm hvp}$ in the
range $130<m_\pi/{\rm MeV}<300$.
On the other hand, if $m_\mu\ll m_\pi$ and both are small compared to the $\rho$ meson mass,
one finds the leading behaviour
\begin{equation}\label{eq:amu_chpt2}
a_\mu^{\rm hvp} {\sim} \frac{\alpha^2}{90\pi^2} \frac{m_\mu^2}{4m_\pi^2},
\qquad m_\mu\ll m_\pi\ll m_\rho.
\end{equation}
It turns out that this asymptotic form is rather robust, holding down
to fairly small values of $m_\pi/m_\mu$. In fact, within the
framework of chiral perturbation theory at next-to-leading
order\footnote{The expression for the momentum-space vector
correlators at next-to-next-to-leading order can be found
in~\cite{Golowich:1995kd,Amoros:1999dp}.} underlying
Eqs.\ (\ref{eq:amu_chpt1}) and (\ref{eq:amu_chpt2}), the combination
$(1+\frac{4m_\pi^2}{m_\mu^2})a_\mu^{\rm hvp}$ only varies by $2\%$ for
$m_\pi/m_\mu$ in the interval $[1.25,3.0]$ relevant to our lattice
calculations.
At physical quark masses, the overall magnitude of
expression (\ref{eq:amu_chpt2}) is enhanced by the (squared) pion
form factor at timelike kinematics.
In addition, the contribution of the $\pi\pi$ states with
a center-of-mass energy well below the $\rho$-meson mass is numerically
subdominant compared to the resonant contribution. The $\rho$-meson
mass depends only mildly on the light-quark mass, and thus the steep
behaviour predicted by Eq.\ (\ref{eq:amu_chpt2}) as a function of
$m_\pi$ is superimposed on a larger, more slowly varying contribution.
In our chiral extrapolations, presented in section \ref{sec:phys}, we use these
observations to construct suitable fit ans\"atze for the chiral extrapolation.
The singular chiral behaviour comes from the isovector channel,
while we expect the isoscalar channel to have a much milder dependence on the pion mass.
Working in the basis of Eq.\ (\ref{eq:Gdecomp}),
the singular chiral behaviour is split between the connected light-quark contribution
and the disconnected contribution.
Indeed, in the limit that $m_\mu$ and $m_\pi$ are much smaller than the hadronic scale,
we have $a_\mu^{{\rm hvp,\,disc}}= -\frac{1}{9} a_\mu^{{\rm hvp}}$, and hence, from Eq.\ (\ref{eq:amu_chpt2}),
\begin{equation}\label{eq:amudiscchiral}
a_\mu^{{\rm hvp,\,disc}} {\sim} -\frac{\alpha^2}{810\pi^2} \frac{m_\mu^2}{4m_\pi^2},
\qquad m_\mu\ll m_\pi\ll m_\rho.
\end{equation}
For orientation, we note that if one inserts the physical pion mass
into this expression, one obtains $a_\mu^{{\rm hvp,\,disc}} =
-10\times 10^{-10}$, and we expect this value to be further enhanced
by the pion form factor. The important point is that the singular chiral behaviour
present in the connected light-quark contribution to $a_\mu^{\rm hvp}$ must be present in
the disconnected contribution as well, with a relative factor of $-1/10$.
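As a quick arithmetic check of the quoted estimate, Eq.\ (\ref{eq:amudiscchiral}) can be evaluated at the physical masses (a back-of-the-envelope evaluation, not part of the lattice analysis):

```python
import math

alpha = 1.0 / 137.036   # fine-structure constant
m_mu  = 105.658         # muon mass, MeV
m_pi  = 134.977         # neutral-pion mass, MeV

# Leading chiral behaviour of the disconnected contribution, Eq. (eq:amudiscchiral)
amu_disc = -alpha**2 / (810 * math.pi**2) * m_mu**2 / (4 * m_pi**2)
print(f"a_mu^disc ~ {amu_disc * 1e10:.1f} x 10^-10")
```

which reproduces the $\approx -10\times 10^{-10}$ quoted in the text.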
\section{Results\label{sec:results}}
In this section we describe the main features of the TMR correlators obtained
on the different lattice ensembles with a view to computing
$a_\mu^{\rm hvp}$. Particular attention is devoted to the correlators at
Euclidean times in the range $[1.5, 4.0]$\,fm.
In the rescaling of the muon mass, we use the values of $af_\pi$
given in Table \ref{tab:resultsL}, corrected for finite-size effects~\cite{Colangelo:2005gd}
and interpolated via a global fit in the pion mass and the lattice spacing.
\subsection{The quark-connected contributions\label{sec:conn}}
The integrand of Eq.\ (\ref{eq:TMRamu}) for the connected light,
strange and charm contributions is displayed in
Fig.\ \ref{fig:integrand} for our two ensembles with quark masses
closest to their physical values. The left (right) panel corresponds
to a pion mass of about 200\,MeV (131\,MeV). The light contribution is
clearly dominant; note that the charm and strange contributions have been scaled by a factor of six
for better visibility. On a given ensemble, the integrand peaks at
increasingly long distances as one goes from the charm to the strange to the
light quarks, and the tail becomes more extended. At the same time,
the statistical precision deteriorates. Comparing the left to the
right panel, it is clear that the light contribution becomes harder to
determine with the desired precision as the physical quark masses are approached.
Nevertheless, these plots by themselves do not fully reflect all the known constraints
on the TMR correlator, which is guaranteed to be given by a
sum of decaying exponentials with positive coefficients, as discussed in section \ref{sec:IR}.
\begin{figure}[t!]
\includegraphics*[width=0.49\linewidth]{plotz/Integrand_D200_all}
\includegraphics*[width=0.49\linewidth]{plotz/Integrand_E250_all}
\caption{Integrand of Eq.\ (\ref{eq:TMRamu}) in the time-momentum representation for the connected light,
strange and charm contributions. Left: ensemble D200 with a pion mass of 200 MeV. Right: ensemble E250 at the physical pion mass.
For better visibility, the strange and charm contributions have been scaled by a factor six. The displayed discretization is the local-local one
for the light and strange contributions, and the local-conserved one for the charm.
The muon mass is the $f_\pi$ rescaled one for the light integrand and the physical one for the strange and charm integrands.}
\label{fig:integrand}
\end{figure}
\begin{figure}[t!]
\includegraphics*[width=0.72\linewidth]{plotz/Integrand_D200_GEVP_lowstat.pdf}
\caption{Reconstruction of the TMR correlator at long distances using a dedicated spectroscopy analysis on ensemble D200.
The same gauge configurations are used for the spectroscopy and for the TMR correlator calculation.}
\label{fig:GEVPD200}
\end{figure}
Having described the state-of-the-art methods to handle the tail of the correlation function in section \ref{sec:IR},
we now explain how these methods were applied to our data.
For the strange and charm quark contributions, the TMR correlator is
determined so accurately that practically no particular treatment of
the tail is needed. We apply the bounding method, Eq.\ (\ref{eq:bndg}) with $N=0$,
and obtain the results given in Table \ref{tab:resultsSC}.
As for the connected contribution of the light quarks, our
choice for the final analysis is again the bounding method on all
ensembles; the only exception is the physical-pion-mass ensemble E250,
to which we return below. In applying Eq.\ (\ref{eq:bndg}), we employ
the expression containing the effective mass as a lower bound and, as
an estimate for the lowest-lying energy level in the channel, use the
energy obtained from a one-exponential fit to the tail of the TMR
correlator. On ensemble D200, on which the ground state
lies clearly below the $\rho$ mass and has a relatively weak coupling
to the vector current, we use the auxiliary spectroscopy calculation
to determine its energy. We find it to be close to, but slightly below
the value corresponding to two non-interacting pions,
$E_0^{\rm free} \equiv 2[\left({2\pi}/{L}\right)^2+m_\pi^2]^{1/2}$.
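For orientation, the non-interacting level is easily evaluated; a small numerical sketch with rounded, illustrative inputs in the ballpark of ensemble D200 ($m_\pi \approx 200$\,MeV, $L \approx 4.1$\,fm; the precise analysis values differ):

```python
import math

hbarc = 197.327   # MeV fm
m_pi  = 200.0     # MeV (illustrative, rounded)
L     = 4.1       # fm  (illustrative, rounded)

p = 2 * math.pi * hbarc / L               # back-to-back momentum 2*pi/L, in MeV
E0_free = 2 * math.sqrt(p**2 + m_pi**2)   # free two-pion energy E_0^free
print(f"m_pi*L = {m_pi * L / hbarc:.2f}, E0_free = {E0_free:.0f} MeV")
```

For these inputs the free level lies somewhat above 700\,MeV, i.e.\ just below the $\rho$ region, consistent with the qualitative picture described in the text.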
Table \ref{tab:resultsL} contains our results for the connected contributions of the light quarks.
As discussed in detail in the next subsection,
the improved statistical precision gained
by exploiting spectroscopic information can be quite significant for light
pion masses, $m_\pi\lesssim200\,$MeV. Indeed we find that on the
physical mass ensemble E250, on which we do not have direct
spectroscopic information, we cannot achieve a comparable control over
the statistical and systematic error with the simplest variant of the
bounding method. Therefore we proceed as follows. The isovector
vector energy levels computed on ensembles N203, N200 and D200 allow
us to determine the scattering phase in the $I=\ell=1$ $\pi\pi$
channel~\cite{Andersen:2018mau} for energies up to the four-pion
threshold via the L\"uscher formalism~\cite{Luscher:1991cf}\footnote{See~\cite{Alexandrou:2017mpi,Fu:2016itp,Guo:2016zos,Wilson:2015dqa,Bali:2015gji}
for other recent calculations of the scattering phase in the $\rho$ channel.}.
The scattering phase is well described by the effective range formula,
\begin{equation}\label{eq:effrange}
\frac{k^3}{E}\cot\delta_{11} = \frac{4k_\rho^5}{m_\rho^2 \Gamma_\rho} \Big(1- \frac{k^2}{k_\rho^2}\Big),
\end{equation}
with $k\equiv \frac{1}{2}\sqrt{E^2-4m_\pi^2}$ and $k_\rho$ being the
value of $k$ for $E=m_\rho$. The parameters $m_\rho$ and
$\Gamma_\rho$ correspond to the $\rho$ meson mass and width.
Furthermore, it has been observed in lattice simulations that, if the
width is parameterized as
\begin{equation}\label{eq:Gamrho}
\Gamma_\rho = \frac{g_{\rho\pi\pi}^2}{6\pi}\,\frac{k_\rho^3}{m_\rho^2},
\end{equation}
the coupling $g_{\rho\pi\pi}$ has only a weak pion-mass
dependence.
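Equations (\ref{eq:effrange}) and (\ref{eq:Gamrho}) together define the phase shift from only two parameters. A minimal Python sketch evaluating $\delta_{11}(E)$, with invented values for $(m_\pi, m_\rho, g_{\rho\pi\pi})$ used purely for illustration:

```python
import math

m_pi  = 135.0   # MeV (illustrative)
m_rho = 775.0   # MeV (illustrative)
g_rpp = 6.0     # rho-pi-pi coupling (illustrative)

def k_of(E):
    return 0.5 * math.sqrt(E**2 - 4 * m_pi**2)

k_rho = k_of(m_rho)
Gamma_rho = g_rpp**2 / (6 * math.pi) * k_rho**3 / m_rho**2   # Eq. (eq:Gamrho)

def delta11(E):
    k = k_of(E)
    # effective-range formula, Eq. (eq:effrange): k^3/E * cot(delta_11) = RHS
    rhs = 4 * k_rho**5 / (m_rho**2 * Gamma_rho) * (1 - k**2 / k_rho**2)
    cot_d = rhs * E / k**3
    return math.atan2(1.0, cot_d)   # phase in (0, pi)

# The phase passes through 90 degrees exactly at E = m_rho.
print(math.degrees(delta11(m_rho)))
```

With these inputs the width comes out near 150\,MeV, a reasonable ballpark for the $\rho$ meson.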
Therefore, we extrapolate the parameters $(m_\rho,g_{\rho\pi\pi})$
determined on the ensembles N203, N200 and D200 (see Table \ref{tab:spectro}) to obtain their values
for the pion mass corresponding to ensemble E250.
Using these values, we can predict the low-lying energy levels $E_n$ on ensemble E250
by using the L\"uscher correspondence between them and the scattering phase in reverse.
In order to obtain an extension of the TMR correlator on E250, we then fit the
squared amplitudes $Z_n^2$, given the energy levels. Note that this can be formulated as a linear fit.
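Indeed, at fixed energies $E_n$ the model is linear in the coefficients $c_n = Z_n^2/(2E_n)$, so the fit reduces to linear least squares. A minimal sketch on noiseless synthetic data (the energies, coefficients and fit window are illustrative, not the E250 values):

```python
import numpy as np

# Predicted energy levels (illustrative, lattice units) and synthetic "data".
E = np.array([0.30, 0.45, 0.60])
true_c = np.array([0.020, 0.010, 0.005])        # c_n = Z_n^2 / (2 E_n)
t_fit = np.arange(20, 33)
G_data = (true_c[None, :] * np.exp(-np.outer(t_fit, E))).sum(axis=1)

# Linear least-squares fit for the coefficients c_n at fixed E_n.
A = np.exp(-np.outer(t_fit, E))                 # design matrix A[t, n]
c_fit, *_ = np.linalg.lstsq(A, G_data, rcond=None)

# Multi-exponential extension of the correlator beyond the fit window:
def G_ext(t):
    return (c_fit * np.exp(-np.outer(np.atleast_1d(t), E))).sum(axis=1)
```

In this noiseless example the fitted coefficients reproduce the input values, and $G_{\rm ext}$ provides the extension used in place of the data at large $t$.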
In our final choice of parameters, we fit the TMR correlator on E250 in the
interval $26 < t/a < 37$. Then the TMR correlator is summed from
$t=0$ to $t=28a$, and the multi-exponential extension is used beyond
that time. The numbers given for E250 in Table \ref{tab:resultsL} are
the results from this procedure.
\begin{figure}[t!]
\includegraphics*[width=0.49\linewidth]{plotz/amu_D200_bounding_lowstat.pdf}
\includegraphics*[width=0.49\linewidth]{plotz/amu_D200_improved_bounding_lowstat.pdf}
\caption{Bounding method with the contribution of $N=0$ (method (1), left) and $N=2$ (method (2), right) states subtracted
on ensemble D200 for the local-local correlator and the $f_\pi$-rescaled muon mass. Results based on 1100 gauge configurations. }
\label{fig:bounding}
\end{figure}
\subsection{Comparing different methods of extracting $a_\mu^{{\rm hvp},l}$ on ensemble D200 \label{sec:D200}}
\begin{figure}[t!]
\includegraphics*[width=0.49\linewidth]{plotz/amu_D200_GEVP_lowstat.pdf}
\includegraphics*[width=0.49\linewidth]{plotz/amu_D200_fit_tail_lowstat.pdf}
\caption{Determination of $a_\mu^{{\rm hvp},l}$ with the $f_\pi$-rescaled muon mass
using the extension of the connected light (local-local) correlator
using $N=2$ energy levels on ensemble D200.
On the left (method 3), the amplitudes corresponding to energy levels were predetermined in
a spectroscopy calculation, while on the right (method 4), they are fitted to the TMR correlator.
Results based on 1100 gauge configurations.}
\label{fig:D200tail}
\end{figure}
On ensemble D200 at $m_\pi=200\,$MeV, we have detailed information on
the scattering phase and the timelike pion form factor. We can thus
test the validity of the procedure we applied on the physical
pion-mass ensemble E250, described in the previous subsection.
Thus on D200 we apply and compare four different methods to handle the tail of the
light connected correlator:
\begin{enumerate}
\item[(1)] the bounding method without subtractions ($N=0$);
\item[(2)] the bounding method after subtracting the contribution of $N=2$ states;
\item[(3)] the extension of the correlator using the auxiliary information on the first two energy levels $E_n$
and their amplitudes $Z_n$;
\item[(4)] the extension of the correlator using the auxiliary information on the first two energy levels $E_n$,
but fitting the amplitudes to the TMR correlator.
\end{enumerate}
One motivation for comparing these particular methods is that on E250, we cannot apply the second or third method,
while the first method would result in a large statistical error. Therefore, we apply the last method on E250, and
here test whether it gives consistent results on ensemble D200.
Fig.\ \ref{fig:bounding} compares the results for $a_\mu^{\rm hvp}$ from methods
(1) and (2), as a function of the time $t_c$ at which the upper and
lower bounds start to be used instead of the TMR correlator itself.
The values are consistent with each other; however, method (2) yields a
significantly reduced statistical uncertainty. This outcome is not
surprising, since important auxiliary information is used in method
(2).
A comparison of methods (3) and (4) is presented in
Fig.\ \ref{fig:D200tail}, which shows the resulting $a_\mu^{\rm hvp}$ as a
function of the time $t_{c}$ at which the TMR correlator is
replaced by the multi-exponential extension. The result of method (4)
is consistent with that of method (3), albeit with an enlarged
statistical uncertainty. In addition, we have checked that the values
of the amplitudes of the first two states, as extracted from the fit in
method (4), are fully consistent with their direct spectroscopic
determination. Table \ref{fig:D200methods} presents the results
obtained on D200 with the four different methods.
\begin{table}
\caption{Dependence of the D200 result for
$10^{10}\times a_\mu^{{\rm hvp},l}$
on the methods described in the text, using the local-local TMR correlator.}
\vskip 0.1in
\begin{tabular}{c@{~~~}c@{~~~}c}
\hline
Method & No rescaling & With $f_\pi$ rescaling \\
\hline
1. & 605.9(6.3) & 604.2(7.4) \\
2. & 599.0(4.0) & 597.1(5.2) \\
3. & 599.4(3.9) & 597.7(5.0) \\
4. & 607.7(11.8) & 605.7(11.8) \\
\hline
\end{tabular}
\label{fig:D200methods}
\end{table}
\subsection{Finite-volume effects\label{sec:FSE}}
\begin{figure}[t!]
\includegraphics*[width=0.72\linewidth]{plotz/FSE_280_light_zoom.pdf}
\caption{Testing the finite-size correction procedure described in the main text
on the ensembles N101 and H105 at a pion mass of 280\,MeV.
The scale-setting uncertainty is not displayed, since both ensembles have the same lattice spacing.
}
\label{fig:FSE}
\end{figure}
As explained in section \ref{sec:IR}, in the isospin basis, we would
correct the $I=1$ correlator for finite-size effects stemming from the
$\pi\pi$ states, and neglect such effects on the $I=0$ correlator. However,
we work in the basis of Eq.\ (\ref{eq:Gdecomp}). In this basis, such a correction
corresponds to applying an additive finite-size correction to the
connected light contribution ($\frac{5}{9}G_l(t)$), weighted by a
factor of $10/9$ relative to the correction of the $I=1$
correlator. At the same time, the disconnected contribution $G_{\rm
disc}$ must be corrected by $-1/9$ of the $I=1$ correction. It is
indeed well known that the tail of $G_{\rm disc}(t)$ is given by
$(-1/9)G^{I=1}(t)$~\cite{Francis:2013fzp}.
The $I=1$ finite-size corrections are given in Table \ref{tab:FSE} for
every ensemble. They are computed as in~\cite{DellaMorte:2017dyu},
assuming a GS parametrization of the pion form factor. However, in
contrast to~\cite{DellaMorte:2017dyu}, the parameters of the GS
parametrization are obtained either by fitting the tail of the TMR
correlator using the relations between the $(E_n,Z_n)$ and the pion
form factor~\cite{Luscher:1991cf,Meyer:2011um}, or by using the
results for $m_\rho$ and $g_{\rho\pi\pi}$ from a dedicated pion form
factor calculation, when available. This concerns ensembles C101,
N401, N203, N200, D200 and J303.
We have neglected finite-size effects for the connected strange
contribution, except for the SU(3) symmetric ensembles, where
finite-size effects are the same as for the light-connected
contribution\footnote{At the SU(3) symmetric point, the isovector
correlator receives an additional finite-size correction due to kaon
loops, which amounts to half the correction due to the pion loop.}.
Similarly, no finite-volume correction is applied to the charm-quark
contribution.
We have performed a direct lattice calculation of the finite-size effects on two ensembles,
N101 and H105, with different volumes, $L=2.8\,$fm and 4.1\,fm, at a
common pion mass of 280\,MeV. Figure~\ref{fig:FSE} shows that a
finite-size effect is clearly visible and statistically significant.
After the finite-size correction obtained via the GS model for the
pion form factor, the two correlators are in excellent agreement. This
test gives us confidence that the finite-size correction we apply is
reliable at our level of statistical precision.
\begin{figure}[t!]
\includegraphics*[width=0.64\linewidth]{plotz/Integrand_N200_disc.pdf}
\caption{Integrand of Eq.\ (\ref{eq:TMRamu}) in the time-momentum representation for the disconnected contribution on ensemble N200, using the local-local discretization and the physical muon mass.
}
\label{fig:disc_intgd}
\end{figure}
\subsection{The quark-disconnected contribution}
We have computed the quark-disconnected contribution on a number of
lattice ensembles, namely H105, N401, N203, N200, D200, N302. A
typical integrand is shown in Fig.\ \ref{fig:disc_intgd}. The signal
for the quark-disconnected contribution is lost around $t=1.5\,$fm.
Given that the \emph{absolute} error of the integrand for $a_\mu^{\rm hvp}$
grows asymptotically, it is clear that additional information
constraining the tail of the disconnected TMR correlator is mandatory.
We have therefore adopted the following strategy. In our $N_{\rm f}=2+1$ simulations,
the isoscalar correlator $G^{I=0,c\!\!/}(t)$ of the $(u,d,s)$ quarks\footnote{The notation $G^{I=0,c\!\!/}$ is introduced
to distinguish this correlator from the full isoscalar contribution $G^{I=0}$, which also contains the charm contribution.}
admits a positive spectral representation
analogous to Eq.\ (\ref{eq:specsum}), with positive prefactors
multiplying the exponentials. We expect that on the ensembles on which we have
computed the disconnected diagrams, the dominant exponential in a large window of Euclidean times
corresponds to the $\omega$ meson mass.
As we did not perform a dedicated calculation of the $\omega$ mass,
we use our determination of the $\rho$ resonance mass.
Since the latter is slightly lower than the $\omega$ mass, this is a conservative choice.
We can therefore apply the bounding method in the following form,
\begin{equation}
0\leq G^{I=0,c\!\!/}(t) \leq G^{I=0,c\!\!/}(t_c) e^{-m_\rho(t-t_c)}, \qquad t\geq t_c.
\end{equation}
In order to quote a value $a_\mu^{\rm hvp,disc}$ for the quark-disconnected contribution to $a_\mu^{\rm hvp}$,
we subtract the connected light and strange contributions from the isoscalar contribution $a_\mu^{{\rm hvp},I=0,c\!\!/}$,
\begin{equation}
a_\mu^{\rm hvp,disc} = a_\mu^{{\rm hvp},I=0,c\!\!/} - \frac{1}{10} a_\mu^{{\rm hvp},l} - a_\mu^{{\rm hvp},s}.
\end{equation}
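The factor $1/10$ follows from the charge weights (a short check for orientation): with $j_\mu^{I=0} = \frac{1}{6}(\bar u \gamma_\mu u + \bar d \gamma_\mu d) - \frac{1}{3}\bar s \gamma_\mu s$, the connected part of $G^{I=0,c\!\!/}$ contains the light quarks with weight $2\times(1/6)^2 = 1/18$ and the strange quark with weight $(1/3)^2 = 1/9$. Since the connected light and strange contributions carry the charge factors $5/9$ and $1/9$, respectively, the pieces to be subtracted are $\frac{1/18}{5/9}\,a_\mu^{{\rm hvp},l} = \frac{1}{10}\,a_\mu^{{\rm hvp},l}$ and $a_\mu^{{\rm hvp},s}$.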
Our results for $a_\mu^{\rm hvp,disc}$ are listed in Table \ref{tab:disc} in Appendix \ref{sec:ResTabs}.
\section{Results at the physical point\label{sec:phys}}
Having determined the various contributions to $a_\mu^{\rm hvp}$ on a number of
gauge ensembles, we proceed to extrapolate these results to the continuum
and to the physical pion mass, $m_\pi=134.97\,$MeV.
We use as chiral expansion variable the dimensionless ratio
\begin{equation}
\widetilde y = \frac{m_\pi^2}{16\pi^2 f_\pi^2},
\end{equation}
where $m_\pi$ and $f_\pi$ have been determined on each ensemble.
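For reference, this variable is small at the physical point; a one-line evaluation, assuming here the $f_\pi \approx 92.4$\,MeV normalization convention:

```python
import math

m_pi = 134.97   # MeV, physical pion mass as used in the text
f_pi = 92.4     # MeV; assumes the ~92 MeV normalization of f_pi

y_tilde = m_pi**2 / (16 * math.pi**2 * f_pi**2)
print(f"y_tilde(phys) = {y_tilde:.4f}")
```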
\subsection{The connected strange and charm contributions}
\begin{figure}[t!]
\includegraphics*[width=0.49\linewidth]{plotz/fit_y_strange_std}
\includegraphics*[width=0.49\linewidth]{plotz/fit_y_charm_std}
\caption{Extrapolation of the connected strange and charm contributions to $a_\mu^{\rm hvp}$ with a muon mass fixed to its physical value.
The black curve represents the chiral dependence in the continuum, and the black point the final result at the physical pion mass.}
\label{fig:amusc}
\end{figure}
For the strange-quark contribution, the statistical error
(excluding the lattice spacing uncertainty) is below 1\% for all the
ensembles, and in many cases below 0.5\%, typically for those ensembles with close-to-physical quark masses.
See Table \ref{tab:resultsSC}.
The error is therefore dominated by the scale-setting uncertainty, which enters through the combination $tm_\mu$
in the integrand (\ref{eq:TMRamu}).
We extrapolate the results of the individual ensembles to the physical point using the fit ansatz
\begin{equation}
a_\mu^{{\rm hvp},s}(a,\widetilde y,d) = a_\mu^{{\rm hvp},s}(0,\widetilde y_{\rm exp})
+ \delta_d\;a^2 + \gamma_1 (\widetilde y - \widetilde y_{\rm exp} )
+ \gamma_2 \,(\widetilde y\log{\widetilde y}- \widetilde y_{\rm exp} \log \widetilde y_{\rm exp}).
\end{equation}
The index $d$ labels the discretization, local-local or local-conserved.
We observe a rather mild continuum extrapolation, and both discretizations are in very good
agreement. The fit goes right through the result from our physical-mass ensemble,
and our final result for the connected strange-quark contribution is
\begin{equation}
a_\mu^{{\rm hvp},s} = (54.5\pm 2.4\pm 0.6)\times 10^{-10},
\end{equation}
where the first error is statistical and the second is the
systematic error from the chiral extrapolation. The latter
is estimated from the difference between the results
obtained if one includes or excludes ensembles with $m_\pi>300\,$MeV.
The chiral and continuum extrapolation is illustrated in the left panel of Fig.\ \ref{fig:amusc}.
For the charm-quark contribution, the statistical error is below 0.3\%
for all the ensembles, and the error on the tuning of the charm
hopping parameter is of similar magnitude. The error is again
dominated by the scale-setting uncertainty. As can be seen in the
right panel of Fig.\ \ref{fig:amusc}, the lattice discretization of
the correlator using two local vector currents leads to large cut-off
effects: we observe a discretization effect of almost 70\% at our
coarsest lattice spacing. By contrast, for the local-conserved discretization the
discretization effect is only 8\%. Thus we prefer not to use the
local-local discretization in our continuum
extrapolation of the connected charm contribution. Furthermore, the data suggest a very flat chiral
behaviour, and we therefore use the fit ansatz
\begin{equation}
a_\mu^{{\rm hvp},c}(a,\widetilde y) = a_\mu^{{\rm hvp},c}(0,\widetilde y_{\rm exp}) + \delta\;a^2 + \gamma_1 (\widetilde y - \widetilde y_{\rm exp} ).
\end{equation}
At the physical point, we obtain
\begin{equation}
a_\mu^{{\rm hvp},c} = (14.66\pm 0.45 \pm 0.06)\times 10^{-10},
\end{equation}
where the first error is statistical and the second is the systematic error induced by the chiral extrapolation.
The chiral and continuum extrapolation is illustrated in Fig.\ \ref{fig:amusc} (right panel).
A comparison of the strange and charm contributions to $a_\mu^{\rm hvp}$ with recent publications is shown in Fig.\ \ref{fig:amucomp}.
\subsection{The connected light-quark contribution}
\begin{figure}[t!]
\includegraphics*[width=0.49\linewidth]{plotz/fit_y_light_std}
\includegraphics*[width=0.49\linewidth]{plotz/fit_y_light_fpi}
\caption{Extrapolation of the connected light contribution to $a_\mu^{\rm hvp}$, using
the physical value of the muon mass in the kernel $\widetilde K(t)$ on all ensembles (left panel),
and using the rescaled mass $m_\mu^{\rm phys}\cdot \frac{f_\pi^{\rm latt}}{f_\pi^{\rm phys}}$ (right panel).
The result of the fit based on Eq.\ (\ref{eq:extrap_1ovy}) is shown. The black curve represents the chiral
dependence in the continuum, and the black point the final result at the physical pion mass.
}
\label{fig:amul}
\end{figure}
We have achieved a statistical error of just over 2\% on
$a_\mu^{{\rm hvp},l}$ on the physical-mass ensemble E250, and of
$1.0$--$1.2\%$ on all other ensembles. An important role of the other
ensembles is to constrain the continuum limit, which would be very
costly to achieve directly at the physical pion mass. Our lattice
data points are displayed as a function of $\widetilde y$ in
Fig.~\ref{fig:amul}, with and without the rescaling of the muon mass
with $f_\pi$. We observe that the rescaled data in the right panel
show a reduced dependence on $\widetilde y$, as well as on the lattice
spacing. We therefore use the rescaled data for our primary analysis,
but also perform the analysis of the unrescaled data in parallel for comparison.
The expected chiral behaviour of the light connected contribution is reviewed in section \ref{sec:IR}.
Taking into account these considerations, we have used the following ans\"atze to simultaneously extrapolate our results to the continuum
and to physical quark masses:
\begin{subequations}
\begin{align}
a_\mu^{{\rm hvp},l}(a,\widetilde{y},d) &=a_\mu^{{\rm hvp},l}(0,\widetilde{y}_{\exp}) + \delta_d \, a^2 + \gamma_1 \, \left( \widetilde{y} - \widetilde{y}_{\exp} \right) + \gamma_2 \, \left( \log \widetilde{y} - \log \widetilde{y}_{\exp} \right), \label{eq:extrap_log} \\
a_\mu^{{\rm hvp},l}(a,\widetilde{y},d) &=a_\mu^{{\rm hvp},l}(0,\widetilde{y}_{\exp}) + \delta_d \, a^2 + \gamma_3 \, \left( \widetilde{y} - \widetilde{y}_{\exp} \right) + \gamma_4 \, \left( \widetilde{y}^2 - \widetilde{y}_{\exp}^2 \right), \label{eq:extrap_ys} \\
a_\mu^{{\rm hvp},l}(a,\widetilde{y},d) &=a_\mu^{{\rm hvp},l}(0,\widetilde{y}_{\exp}) + \delta_d \, a^2 + \gamma_5 \, \left( \widetilde{y} - \widetilde{y}_{\exp} \right) + \gamma_6 \, \left( 1/\widetilde{y} - 1/\widetilde{y}_{\exp} \right), \label{eq:extrap_1ovy} \\
a_\mu^{{\rm hvp},l}(a,\widetilde{y},d) &=a_\mu^{{\rm hvp},l}(0,\widetilde{y}_{\exp}) + \delta_d \, a^2 + \gamma_7 \, \left( \widetilde{y} - \widetilde{y}_{\exp} \right) + \gamma_8 \, \left(\widetilde{y} \log \widetilde{y} - \widetilde{y}_{\exp} \log \widetilde{y}_{\exp} \right), \label{eq:extrap_ylog}
\end{align}
\end{subequations}
where $d$ is a label for the local-local or local-conserved correlator.
All ans\"atze contain four parameters to be fitted, including an O($a^2$) term to account for discretization errors.
Ansatz (b) assumes a purely polynomial behaviour in the variable $\widetilde y$,
while fit (d) allows for a non-analytic $\widetilde y \log\widetilde y$ term. The latter ansatz was used in our previous $N_{\rm f}=2$
calculation~\cite{DellaMorte:2017dyu}. Ans\"atze (a) and (c) are directly motivated by the discussion in section \ref{sec:IR},
(a) containing the logarithmic singularity that appears in the limit $m_\pi\to 0$ at fixed muon mass,
while (c) contains the $1/m_\pi^2$ term relevant in the regime $m_\mu\ll m_\pi\ll m_\rho$.
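Since all four ans\"atze are linear in their fit parameters, a combined continuum and chiral fit of this type can be set up as a simple (weighted) linear least-squares problem. The sketch below illustrates this for ansatz (a) with purely synthetic data; the lattice spacings, $\widetilde y$ values, "true" parameters and the value of $\widetilde{y}_{\exp}$ are invented for demonstration and are not our lattice results, and in practice one would of course minimize a correlated $\chi^2$.

```python
import numpy as np

Y_EXP = 0.27  # hypothetical physical value of the variable y-tilde

# Ansatz (a) is linear in its four parameters, so an (unweighted) linear
# least-squares fit suffices here: the columns of the design matrix
# multiply A = a_mu(0, y_exp), delta_d, gamma_1 and gamma_2, respectively.
def design(a, y):
    return np.column_stack([np.ones_like(a), a**2,
                            y - Y_EXP, np.log(y) - np.log(Y_EXP)])

# synthetic "lattice" data (NOT our results): lattice spacings [fm],
# values of y-tilde, and a_mu^{hvp,l} in units of 1e-10
a_lat = np.array([0.086, 0.076, 0.064, 0.050, 0.064, 0.050])
y_lat = np.array([0.60, 0.55, 0.50, 0.45, 0.35, 0.28])
true_params = np.array([674.0, -800.0, -150.0, -40.0])
rng = np.random.default_rng(1)
data = design(a_lat, y_lat) @ true_params + rng.normal(0.0, 1.0, 6)

params, *_ = np.linalg.lstsq(design(a_lat, y_lat), data, rcond=None)
A_fit = params[0]  # extrapolated value a_mu^{hvp,l}(0, y_exp)
```

Note that both chiral columns vanish at $\widetilde{y}=\widetilde{y}_{\exp}$ by construction, so the extrapolated value is read off directly as the first fit parameter.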
We give the results we obtain from these four ans\"atze, with and without rescaling $m_\mu$, in Table \ref{tab:fit}.
We have performed these fits either including all ensembles, or imposing cuts on $\widetilde y$, corresponding to pion masses
below 360\,MeV or, alternatively, below 300\,MeV.
Focusing first on the rescaled data, we note that fits (a), (c) and (d) yield $\chi^2/{\rm d.o.f.}\approx 1.0$
while fit (b) produces higher values of around 1.6.
With the pion-mass cut at 360\,MeV, one sees that results (a) and (c) show good consistency
and yield somewhat larger values of $a_\mu^{{\rm hvp},l}$ than fits (b) and (d).
Given the more singular chiral behaviour of ans\"atze (a) and (c), this outcome is not unexpected.
Looking at the stability of the final value for $a_\mu^{{\rm hvp},l}$ as a function of the pion-mass cut,
we observe excellent stability in the case of fits (a) and (c), while the results of fits (b) and (d) systematically
drift upward as a stronger pion-mass cut is imposed. With the strongest cut, $m_\pi<300\,$MeV, all four ans\"atze
yield the same result within half a standard deviation. In view of the greater stability of fits (a) and (c) against
pion-mass cuts, and the stronger theoretical motivation underlying them,
we choose to average the results of fit (a) and (c) with the cut $m_\pi<360\,$MeV for our final central value.
As a systematic error, we take the full difference between the results of these fits, and thus our final
result for the connected light-quark contribution is
\begin{equation} \label{eq:amulight_final}
a_\mu^{{\rm hvp},l} = (674\pm 12 \pm 5)\times 10^{-10}.
\end{equation}
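As a cross-check, this number can be reproduced directly from the rescaled entries with the 360\,MeV cut in Table \ref{tab:fit}:

```python
# rescaled data, pion-mass cut at 360 MeV (right part of Table tab:fit),
# in units of 1e-10
fit_a = 671.0  # ansatz (a), Eq. (eq:extrap_log)
fit_c = 676.0  # ansatz (c), Eq. (eq:extrap_1ovy)

central = 0.5 * (fit_a + fit_c)  # average of fits (a) and (c)
syst = abs(fit_a - fit_c)        # full difference taken as systematic
print(round(central), syst)      # -> 674 5.0
```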
A few further remarks are in order.
It is important to note that the results of fits (a) and (c) are in very good agreement with the values of $a_\mu^{{\rm hvp},l}$
directly obtained on ensemble E250 with the rescaled muon mass; see Table \ref{tab:resultsL}.
We also remark that the statistical uncertainty on the final result Eq.~(\ref{eq:amulight_final})
is only 20\% lower than the statistical uncertainties on E250; we conclude that the chiral extrapolation of our results obtained at heavier
pion masses, which tend to be more precise, does not lead to an artificially small final uncertainty.
A comparison with the extrapolated results obtained from the standard kernel, shown in the left part of Table~\ref{tab:fit},
shows that the latter lie systematically higher than the rescaled ones, with statistical uncertainties
that are about 50\% larger than in the rescaled case.
Still, when the statistical and systematic uncertainties of Eq.\ (\ref{eq:amulight_final}) are combined in quadrature,
the central value of fit (c) with the standard kernel lies only 1.6 standard deviations above our final central value.
\begin{table}[t]
\caption{Results of the connected light-quark contribution in units of $10^{-10}$ using different fits and cuts. Left: using the standard kernel. Right: using the rescaling of the muon mass using $f_{\pi}$. }
\vskip 0.1in
\renewcommand{\arraystretch}{1.1}
{\footnotesize
\begin{tabular}{l@{\hskip 01em}|@{\hskip 01em}c@{\hskip 01em}c@{\hskip 01em}c@{\hskip 01em}|@{\hskip 01em}c@{\hskip 01em}c@{\hskip 01em}c}
\hline
&
\multicolumn{3}{c|@{\hskip 01em}}{Standard kernel} &
\multicolumn{3}{c@{\hskip 01em}}{Kernel with rescaling using $f_{\pi}$} \\
&
cut 300~MeV &
cut 360~MeV &
no cut &
cut 300~MeV &
cut 360~MeV &
no cut \\
\hline
Fit Eq.~(\ref{eq:extrap_log}) &
700(22) & 695(19) &
700(18) & 675(14) & 671(11) & 671(10) \\
Fit Eq.~(\ref{eq:extrap_ys}) &
700(23) & 689(19) &
683(17) & 669(14) & 656(09) & 645(07) \\
Fit Eq.~(\ref{eq:extrap_1ovy})
& 700(22) &
697(19) & 704(18) & 677(14) & 676(12) & 681(11) \\
Fit Eq.~(\ref{eq:extrap_ylog})
& 700(22) &
692(19) & 692(17) &
672(14) & 663(10) & 657(08) \\
\hline
\end{tabular}
}
\label{tab:fit}
\end{table}
\subsection{The quark-disconnected contribution}
\begin{figure}[t!]
\includegraphics*[width=0.72\linewidth]{plotz/fit_y_disc_comb_imp}
\caption{Extrapolation of the disconnected contribution to $a_\mu^{\rm hvp}$ in the SU(3)-breaking variable
$\Delta_2\equiv m_K^2-m_\pi^2$. The data points for the local-local and the local-conserved discretizations are shown.
A linear fit (straight black line) and a fit based on ansatz (\ref{eq:ansatzdisc}) are shown.}
\label{fig:amudisc}
\end{figure}
The quark-disconnected contributions have been computed on a subset of
the gauge ensembles, as described in Section \ref{sec:simpar}.
Three ensembles at the same lattice spacing -- N203, N200 and D200 --
allow us to study the chiral behaviour. Two other ensembles, N401
and N302, enable us to constrain the discretization effects.
The quark-disconnected contribution vanishes exactly for the ensembles generated at the SU(3) symmetric point.
In fact, it is a double zero in the SU(3) breaking combination $(m_s-m_l)$.
Since our ensembles follow a chiral trajectory at fixed bare average quark mass $(2m_{{\rm q},l}+m_{{\rm q},s})$,
we can consider the values of $a_\mu^{\rm disc}$ as being to a good approximation\footnote{A residual dependence on
the independent combination $(\frac{1}{2}m_\pi^2 +m_K^2)$ persists at higher orders in the chiral expansion and
via O($a$) discretization effects.} a function
of the single variable $m_K^2-m_\pi^2$.
The results of all five ensembles are thus displayed in Fig.\ \ref{fig:amudisc}
as a function of $\Delta_2^{\,2}$, where $\Delta_2\equiv m_K^2-m_\pi^2$, since close to the SU(3) symmetric point,
the dependence of $a_\mu^{\rm disc}$ on $\Delta_2^{\,2}$ is linear.
We observe that discretization effects are negligible at the current level of
precision. The result of an extrapolation to the physical point $\Delta_2=0.227\,\mathrm{GeV}^2$ assuming a
linear dependence on $\Delta_2^{\,2}$ is $a_\mu^{{\rm hvp,\,disc}}=-18.6(2.2)\times10^{-10}$.
As discussed below Eq.\ (\ref{eq:amudiscchiral}), the disconnected contribution
has a singular behaviour in the limit $m_\pi\to 0$, closely related to
the corresponding behaviour of the connected light contribution.
Therefore, we consider the possibility that the disconnected contribution contains
a term with precisely the dependence given in Eq.\ (\ref{eq:amudiscchiral}).
In order to make this term consistent with the double zero of the disconnected correlator
at $\Delta_2=0$, we fix $\hat M^2 \equiv \frac{1}{2}m_\pi^2 + m_K^2$ to its physical value,
express $m_\pi^2$ through the variable $\Delta_2$ and use the ansatz
\begin{equation}\label{eq:ansatzdisc}
a_\mu^{{\rm hvp,\,disc}}(\Delta_2) = \gamma_8 \Delta_2^{\,2} -\frac{\alpha^2m_\mu^2}{3240\pi^2} \cdot
\frac{3}{2}\Big[ \frac{1}{\hat M^2 - \Delta_2} - \frac{\Delta_2}{\hat M^4} - \frac{1}{\hat M^2}\Big].
\end{equation}
Fitting the single free parameter $\gamma_8$, we obtain $a_\mu^{{\rm hvp,\,disc}}=-27.7(2.2)\times10^{-10}$.
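The size of the chirally singular term in ansatz (\ref{eq:ansatzdisc}) can be illustrated by evaluating it at the physical point. The sketch below inserts standard values for the meson and muon masses (an assumption for illustration; with these inputs $\Delta_2\approx0.226\,\mathrm{GeV}^2$, close to the value $0.227\,\mathrm{GeV}^2$ quoted above), and the resulting shift is of the same order as the difference between the linear and the chirally singular fit results.

```python
import math

alpha = 1.0 / 137.036   # fine-structure constant
m_mu = 0.105658         # muon mass [GeV]
m_pi = 0.13497          # neutral pion mass [GeV]
m_K = 0.4937            # kaon mass [GeV]

M2 = 0.5 * m_pi**2 + m_K**2   # hat M^2, fixed to its physical value [GeV^2]
D2 = m_K**2 - m_pi**2         # Delta_2 at the physical point [GeV^2]

# bracket of Eq. (eq:ansatzdisc)
bracket = 1.0 / (M2 - D2) - D2 / M2**2 - 1.0 / M2
chiral_term = -(alpha**2 * m_mu**2 / (3240 * math.pi**2)) * 1.5 * bracket

print(chiral_term * 1e10)  # contribution in units of 1e-10, approx. -8.1
```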
From Fig.\ \ref{fig:amudisc}, it is clear that both the linear fit in $\Delta_2^{\,2}$ and the one
based on ansatz (\ref{eq:ansatzdisc}) are consistent with the lattice data.
While a singular chiral behaviour must be present in $a_\mu^{{\rm hvp,\,disc}}$,
the ansatz (\ref{eq:ansatzdisc}) may lead to an overestimate of this effect.
Therefore, we quote as our final result the average of the linear and the chirally singular fit,
\begin{equation}\label{eq:amudisc}
a_\mu^{{\rm hvp,\,disc}}= (-23.2 \pm 2.2 \pm 4.5)\times 10^{-10},
\end{equation}
where the first error is statistical and
the second is a systematic error associated with
the extrapolation to the physical point, taken to be the half-distance between the two extrapolated values.
\subsection{The total $a_\mu^{\rm hvp}$ }
In summary, adding up the connected light, strange and charm contributions as well as the quark-disconnected
contribution, our result for $a_\mu^{\rm hvp}$ in isospin-symmetric QCD at $m_\pi=134.97\,$MeV and $f_\pi=92.4\,\mathrm{MeV}$ is
\begin{equation}\label{eq:amuQCD}
a_\mu^{\rm hvp} = (720.0\pm 12.4 \pm 6.8)\times 10^{-10},
\end{equation}
where the first error is statistical and the second is the systematic error.
The latter is dominated by the chiral extrapolation of the light-connected and the disconnected contributions.
The result Eq.\ (\ref{eq:amuQCD}) does not contain any correction for QED or strong isospin-breaking effects.
For now, we do not attempt to include such a correction, but rather add (in quadrature) a systematic uncertainty
of $7.2\times 10^{-10}$ corresponding to a recent lattice calculation of these effects~\cite{Giusti:2019xct}.
This then leads to our final result given in Eq.\ (\ref{eq:final}) below.
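The combination of the extrapolation systematics of Eq.\ (\ref{eq:amuQCD}) with this isospin-breaking uncertainty can be checked explicitly:

```python
import math

# in units of 1e-10
syst_extrap = 6.8  # systematic error of Eq. (eq:amuQCD)
syst_ib = 7.2      # isospin-breaking uncertainty from Giusti:2019xct

syst_total = math.hypot(syst_extrap, syst_ib)  # quadrature sum
print(round(syst_total, 1))  # -> 9.9
```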
\section{Discussion and comparison\label{sec:discussion}}
\begin{figure}[t!]
\includegraphics*[width=0.99\linewidth]{plotz/amuHVP_all}
\caption{Compilation of lattice results for the connected contributions to
$a_\mu^{\rm hvp}$ from individual charm, strange and light quarks (left to right).
In the rightmost panel,
the full results, including (where available) the contributions from
quark-disconnected diagrams and corrections due to isospin-breaking,
are compared to the phenomenological determination of
Ref. \cite{Keshavarzi:2018mgv}, represented by the red vertical
band. Our result is compared to the calculations labelled
FNAL-HPQCD-MILC\,19 \cite{Chakraborty:2014mwa, Chakraborty:2017tqp,
Davies:2019efs}, PACS\,19 \cite{Shintani:2019wai}, ETMC\,19
\cite{Giusti:2017jof, Giusti:2018mdh, Giusti:2019xct}, RBC/UKQCD\,18
\cite{Blum:2018mom}, BMW\,17 \cite{Borsanyi:2017zdw}, as well as our
previous calculation in two-flavour QCD \cite{DellaMorte:2017dyu}
(Mainz/CLS\,17).
}
\label{fig:amucomp}
\end{figure}
In this paper we have presented a calculation of the hadronic vacuum
polarization contribution to $a_\mu$ based on gauge ensembles with
$N_f=2+1$ flavours of O($a$) improved Wilson quarks. Our final result
is
\begin{equation} \label{eq:final}
a_\mu^{\rm hvp}=(720.0\pm12.4_{\rm stat}\,\pm9.9_{\rm syst})\cdot10^{-10},
\end{equation}
where the first error is statistical, and the second is an estimate of
the total systematic uncertainty, which also accounts for the fact
that the corrections due to isospin breaking have not been included.
We thus find that the overall error of our determination is 2.2\%. In
Fig.~\ref{fig:amucomp} we compare our results to those of several
other recent lattice calculations \cite{DellaMorte:2017dyu,
Borsanyi:2017zdw, Blum:2018mom, Giusti:2019xct, Shintani:2019wai,
Davies:2019efs}. While our estimate is at the higher end of lattice
results, we note that the direct difference with the result based on
dispersion theory of Ref.~\cite{Keshavarzi:2018mgv} is $(26.6\pm16.0)\times10^{-10}$,
which amounts to $\sim1.7$ standard deviations and may signal a slight
tension.
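The quoted significance follows directly from the stated numbers:

```python
# in units of 1e-10
diff = 26.6   # central difference to the dispersive result
sigma = 16.0  # combined uncertainty of the difference

n_sigma = diff / sigma
print(round(n_sigma, 1))  # -> 1.7
```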
There are several ways in which our result can be improved without
relying on the obvious strategy of adding more ensembles and
increasing the overall statistics. First, we have seen in section
\ref{sec:D200} that the use of detailed spectroscopy information in
the isovector channel is a huge advantage, as it nearly halves the
statistical uncertainty in the estimate for $a_\mu^{{\rm hvp},l}$ on ensemble
D200. This is the result of either constructing the vector correlator
from the energies and overlaps determined via the GEVP or of using
this information in the improved bounding method. Extending these
calculations to more ensembles -- in particular those with physical
and near-physical pion masses -- will boost the statistical accuracy
and reliability significantly.
Second, we have pointed out that it is advantageous to split the
correlator into isovector and isoscalar components according to
Eq.~(\ref{eq:GdecompI}) rather than focussing on separating the
contributions from individual quark flavours. One reason is that the
singular chiral behaviour expected from Eq.~(\ref{eq:amu_chpt2}) is
shared between the light quark connected and the disconnected
contributions. This will help to better constrain the pion mass
dependence of the quark-disconnected contribution, which is often
still obtained from an extrapolation to the physical point from a set
of results at heavier pion masses. The decomposition according to
isospin also gives a better handle on finite-volume effects, which are
partly compensated between the light connected and disconnected
contributions. This is of particular importance, since finite-volume
corrections for the disconnected part of the vector correlator could
be sizeable but, to our knowledge, have not been estimated so far.
The third refinement concerns the determination of isospin-breaking
corrections. We stress again that our final estimate in
Eq.~(\ref{eq:final}) is valid at a well-defined reference point of the
isospin-symmetric theory, given by the mass of the neutral pion in the
continuum limit. The determination of the corrections due to isospin
breaking relies on the definition of an alternative reference point
that is consistent with the effects induced by a non-vanishing mass
splitting between the up and down quarks and the coupling between quarks
and photons. This requires an adjustment of bare parameters and the
re-evaluation of a number of observables that enter the calculation of
$a_\mu^{\rm hvp}$. An account of the status of our activities in this direction
is given in Refs. \cite{Risch:2017xxe,Risch:2018ozp}. In the absence
of a complete evaluation, we have refrained from simply adding results
for the isospin-breaking correction from the literature. Instead, we
have opted for an additional systematic error which is as large as the
correction determined in \cite{Giusti:2019xct}.
As the community awaits the first results from the E989 experiment at
Fermilab, it is remarkable that several collaborations using different
setups and discretizations of the QCD action obtain largely consistent
estimates for $a_\mu^{\rm hvp}$ with overall errors at the level of
2\%. However, the collection of available results does not allow for a
firm conclusion as to whether the phenomenological estimate or the
so-called ``No New Physics'' scenario is confirmed.
\vspace{-0.1in}
\acknowledgments{\noindent
We thank D.\ Djukanovic, T.\ Harris, K.\ Miura, A.\ Nyffeler and A.\ Risch for helpful discussions.
This work is partly supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) grant HI 2048/1-1
and by the DFG-funded Collaborative Research Centre SFB\,1044 \emph{The low-energy frontier of the Standard Model}.
The Mainz $(g-2)_\mu$ project is also supported by the Cluster of Excellence \emph{Precision Physics, Fundamental Interactions, and Structure of Matter} (PRISMA+ EXC 2118/1) funded by the DFG within the German Excellence Strategy (Project ID 39083149).
Calculations for this project were partly performed on the HPC clusters ``Clover'' and ``HIMster II'' at the Helmholtz-Institut Mainz and ``Mogon II'' at JGU Mainz. M.C.\ thanks A.\ Rago for pointing out Ref.\ \cite{Boyle:2017xcy}
on how to best exploit the network performance on Mogon II and HIMster II in the early stages of running.
Additional computer time has been allocated through project HMZ21 on the BlueGene supercomputer system ``JUQUEEN'' at NIC, J\"ulich.
The authors also gratefully acknowledge the Gauss Centre for Supercomputing
e.V.\ (www.gauss-centre.eu) for funding this project by providing
computing time on the GCS Supercomputer HAZEL HEN at H\"ochstleistungsrechenzentrum Stuttgart (www.hlrs.de) under project GCS-HQCD.
Our programs use the deflated SAP+GCR solver from the openQCD package~\cite{Luscher:2012av}, as well as the QDP++ library
\cite{Edwards:2004sx}.
We are grateful to our colleagues in the CLS initiative for sharing ensembles.
}
\section{Introduction}
In recent years, the continuous development of face recognition systems has opened up various applications such as automatic phone unlocking, border control, and public surveillance. Current state-of-the-art face recognition systems~\cite{deng2019arcface, meng2021magface} achieve impressive performance on popular benchmark datasets such as LFW~\cite{huang2014lfw}, MegaFace~\cite{kemelmacher2016megaface}, or IJB-B~\cite{whitelam2017iarpa}. However, these systems are primarily designed to operate in controlled environments, \emph{e.g.}, on images with high quality or resolution, and their performance deteriorates significantly in uncontrolled environments, \emph{e.g.}, on low-resolution images~\cite{li2018face}. Driven by the demand for face recognition systems that remain robust in such crucial scenarios, more and more approaches are being published. Various approaches focus on robust face recognition against age gaps, head-pose variations, misalignment, adversarial attacks, occlusions, and masks. Only a few authors focus on image resolution (cf. \cref{sec:related-work}).
Knoche \emph{et al}\onedot~\cite{knoche2021image} extensively analyzed the susceptibility of face recognition systems to image resolution. Their work demonstrates that the face verification accuracy of the popular ArcFace~\cite{deng2019arcface} approach with a ResNet50~\cite{he2016resnet} backbone network drops significantly for image resolutions below about $50$$\times$$50\,$px. As illustrated later in our experiments, we confirm this effect also on other architectures such as MobileNet~\cite{sandler2018mobilenetv2} or iResNet50~\cite{duta2021iresnet}. In~\cite{knoche2021image}, the authors also stated that the face transformer structure~\cite{zhong2021facetransformer} is less affected by varying image resolution, which is in line with our findings in \cref{sec:results}.
Generally, one can distinguish between two face recognition scenarios concerning image resolution: 1) low-resolution face verification considers two facial images with the same low resolution; 2) the verification of two images with different resolutions is described as cross-resolution face verification. Despite the increased amount of information present in high-resolution images, the latter problem is even more challenging due to the distinct inherent visual properties of high- and low-resolution images. This setting emerges in the context of surveillance applications where, \emph{e.g.}, low-resolution surveillance images are compared with high-quality passport images. Another example is the automatic tagging of people in movies or social media, where image resolution is often compromised due to compression.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{figures/figures_1-loss-vis-alt.eps}
\caption{The proposed octuplet loss exploits the relation between four high-resolution images (upper left) and four low-resolution images (lower right) incorporating four triplet loss ($\lossTri$) terms.}
\label{fig:1-loss-vis}
\end{figure}
In this work, we tackle cross-resolution face recognition with a novel metric learning approach called octuplet loss fine-tuning. This objective constitutes fine-tuning an existing network to increase its robustness against image resolution while maintaining its performance in controlled scenarios. As depicted in~\cref{fig:1-loss-vis}, we exploit the advantages of the widespread triplet loss~\cite{schroff2015facenet} and build upon it. Our key innovation is the combination of four triplet loss terms, which exploit the relationship between high- and low-resolution images and identity labels.
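To make the construction concrete, the following NumPy sketch shows one way four triplet terms over high- and low-resolution embeddings can be combined. The specific pairing of anchors, positives, and negatives shown here is a hypothetical arrangement chosen for illustration (the precise definition is given in \cref{sec:method}); the margin value follows the default $m=25$ on squared Euclidean distances used in our experiments.

```python
import numpy as np

def triplet(anchor, pos, neg, margin=25.0):
    """Single triplet term on squared Euclidean embedding distances."""
    d_ap = np.sum((anchor - pos) ** 2)
    d_an = np.sum((anchor - neg) ** 2)
    return max(d_ap - d_an + margin, 0.0)

def octuplet_like_loss(hr_a, hr_p, hr_n, lr_a, lr_p, lr_n, margin=25.0):
    """Sum of four triplet terms mixing high-resolution (hr) and
    low-resolution (lr) embeddings; hypothetical arrangement."""
    return (triplet(hr_a, hr_p, hr_n, margin)     # HR anchor, HR pos/neg
            + triplet(lr_a, lr_p, lr_n, margin)   # LR anchor, LR pos/neg
            + triplet(hr_a, lr_p, lr_n, margin)   # cross-resolution terms
            + triplet(lr_a, hr_p, hr_n, margin))
```

The cross-resolution terms are what pull high- and low-resolution embeddings of the same identity together in a shared feature space.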
Our main contributions are summarized as follows:
\begin{itemize}
\item We propose a novel loss function called octuplet loss that leverages four triplet loss terms to capture the relationships between high- and low-resolution faces.
\item A fine-tuning strategy is introduced, which can be easily applied to existing networks to improve their robustness against image resolution while maintaining comparable performance on high-resolution images.
\item We demonstrate that fine-tuning several state-of-the-art networks with our proposed octuplet loss leads to significant improvements for cross-resolution and low-resolution face verification on numerous popular datasets.
\end{itemize}
The rest of our paper is organized as follows: in~\cref{sec:related-work}, we review the literature related to this area; \cref{sec:method} introduces the triplet loss concept, describes the applied mining strategy, and presents the octuplet loss function in detail; in~\cref{sec:results}, we describe datasets and experimental settings, followed by quantitative results, \emph{i.e.}, improvements on existing networks with our approach and ablation studies; finally, \cref{sec:conclusion} concludes this work and indicates possible directions for further research.
\section{Related Work}
\label{sec:related-work}
There is a wide variety of works related to robust face recognition. Reviewing it would be out of the scope of this paper, so we only briefly describe the most relevant recent work in cross-resolution face recognition. Those methods can be divided into transformation-based and non-transformation-based approaches.
\subsection{Transformation-Based Approaches}
Transformation-based or hallucination-based approaches tackle this challenging problem by super-resolving low-resolution faces prior to matching them with high-resolution images. Jiang \emph{et al}\onedot~\cite{jiang2021deep} provided an exhaustive review of face super-resolution in general.
Recently, prior guided~\cite{kalarot2020component, wang2021heatmap, li2021organ, wang2021dclnet, liu2021face} and attribute constrained~\cite{lu2018attribute, yu2018super, yu2019semantic, xin2020facial} face super-resolution approaches were presented. However, they aim for a visually pleasant reconstruction ignoring identity-related information. Therefore, numerous works~\cite{zhang2018super, bayramli2019fh, huang2019wavelet, lai2019low, abello2019optimizing, grm2019face, cheng2021face, kim2021edge, ataer2019verification, li2020learning, cheng2019identity} leveraged face recognition networks to ensure face feature similarity and optimized the super-resolution to preserve identity information.
To cope with weakly labeled datasets, Hsu \emph{et al}\onedot~\cite{hsu2019sigan} apply an identity-preserving contrastive loss, whereas Kazemi \emph{et al}\onedot~\cite{kazemi2019identity} utilize an adversarial face verification loss. Very recently, Ghosh \emph{et al}\onedot~\cite{ghosh2022suprear} presented an end-to-end supervised resolution enhancement and recognition network using a heterogeneous quadruplet loss metric to train a generative adversarial network (GAN), which super-resolves images without corrupting the discriminative information of the low-resolution images.
\subsection{Non-Transformation-Based Approaches}
Non-transformation-based approaches aim to directly project facial features from arbitrary resolution images into a common feature space. In \cite{zangeneh2020low}, this was accomplished by a non-linear coupled mapping architecture using two deep convolutional neural networks (CNNs). \cite{massoli2020cross} approached the problem differently with a student-teacher method. A deep coupled ResNet model containing one trunk network and two branch networks was introduced by~\cite{lu2018deep}. The trunk network extracts facial features, while the two branch networks transform high-resolution and the corresponding low-resolution features into a common feature space. In \cite{talreja2019attribute}, Talreja \emph{et al}\onedot proposed an attribute-guided cross-resolution face recognition model utilizing a coupled GAN and multiple loss functions. Ge \emph{et al}\onedot~\cite{ge2018low} focused on low computational costs and introduced a new learning approach via selective knowledge distillation. A two-stream technique, comprising a large teacher model and a lightweight student model, is employed to transfer selected knowledge from the teacher model to the student model. Sun \emph{et al}\onedot~\cite{sun2020classifier} proposed a shared classifier between high- and low-resolution images to further narrow the domain gap. To fully exploit intermediate features and loss constraints, they embed a multi-hierarchy loss into intermediate layers, reducing the distance of intermediate features after the max-pooling layer and avoiding an over-utilization of intermediate features.
Knoche \emph{et al}\onedot~\cite{knoche2021image} provided the BT-M model, which was trained straightforwardly with half the number of images within each batch being low-resolution. Additionally, they contributed two networks (ST-M1 and ST-M2), which both incorporate a siamese network structure, enabling optimization with respect to an additional feature distance loss. Similar to the BT-M model of ~\cite{knoche2021image}, Zeng \emph{et al}\onedot~\cite{zeng2016towards} presented a resolution-invariant deep network and trained it directly with unified low- and high-resolution images. In~\cite{mudunuri2018genlr}, the authors applied a cross-resolution contrastive loss on higher-level features of two separate network branches, with each branch focusing precisely on one resolution (high and low). The following two methods go one step further: \cite{lai2021deep} tackled the problem with a deep siamese network structure and combined a classification loss with a cross-resolution triplet loss. Zha and Chao~\cite{zha2019tcn} also applied cross-resolution triplet loss, but in contrast to~\cite{lai2021deep}, they used a two-branch network similar to~\cite{mudunuri2018genlr}.
In the recent past, \cite{mishra2021multiscale} proposed a multi-scale parallel deep CNN feature fusion architecture. In contrast to most other face recognition systems, they provide an end-to-end approach and directly predict the similarity score of two input images.
Very recently, Li \emph{et al}\onedot~\cite{li2022deep} proposed a novel deep rival penalized competitive learning strategy for low-resolution face recognition.
The work of Terh\"orst \emph{et al}\onedot~\cite{terhorst2021qmagface} pursued a distinct goal. They focused on a more general quality-aware face recognition, \emph{i.e.}, they do not concentrate solely on the physical image quality but also consider pose and age variations. Their approach combines a quality-aware comparison score, utilizing model-specific face image qualities, with a face recognition model based on a magnitude-aware angular margin loss. A rather unusual method in this field of research but still relevant is the work of Zhao~\cite{zhao2021homogeneous}, which shows a new technique for correlation feature-based face recognition.
Despite the recent advances in resolution-robust face recognition, a closer look at the works on this topic reveals that there is no standard benchmark method. Not only are different datasets used for training and cross-/same-resolution evaluation, but the synthetic down-sampling also differs across coding platforms and tools.
\section{Experiments}
\label{sec:results}
\subsection{Datasets}
\label{subsec:datasets}
This work uses the MS1M-V2~\cite{guo2016ms, deng2019arcface} database for training and validation, comprising $5.7\text{M}$ images of $87$k identities. The vast majority ($\sim99.9\%$) is used for our fine-tuning strategy, and only $\sim1\permil$ is retained for validation. From the latter subset, we randomly generated $3000$ genuine and $3000$ imposter image pairs to measure face verification performance during training. Since each identity within a mini-batch must appear exactly twice (cf. \cref{subsec:hard-batch-mining}), we employ a dedicated algorithm to construct the mini-batches. Images are picked from the entire dataset according to the number of unpicked images per identity. By updating the underlying probability distribution after every batch, we ensure diverse batches even at the end of every epoch.
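A minimal sketch of such a batch-construction scheme might look as follows; the sampling rule (picking probabilities proportional to the number of still-unpicked images, two images per chosen identity) follows the description above, while the function names and the termination criterion are our own simplifications for illustration.

```python
import numpy as np

def sample_batches(images_per_id, batch_size, seed=0):
    """Yield mini-batches of identity labels; each chosen identity
    contributes exactly two images, and identities are drawn with
    probability proportional to their remaining unpicked images."""
    rng = np.random.default_rng(seed)
    remaining = np.array(images_per_id, dtype=float)
    ids_per_batch = batch_size // 2
    while np.count_nonzero(remaining >= 2) >= ids_per_batch:
        eligible = np.flatnonzero(remaining >= 2)
        p = remaining[eligible] / remaining[eligible].sum()
        chosen = rng.choice(eligible, size=ids_per_batch,
                            replace=False, p=p)
        remaining[chosen] -= 2  # two images consumed per chosen identity
        yield chosen

# toy example: five identities with unequal image counts
batches = list(sample_batches([10, 8, 6, 4, 4], batch_size=4))
```

Updating the probabilities after every batch keeps identities with many remaining images from being exhausted early, so batches stay diverse until the end of the epoch.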
We evaluate all models on the well-known face verification dataset Labeled Faces in the Wild (LFW)~\cite{huang2014lfw}. Moreover, we apply our models to several publicly available variants of LFW: XQLFW~\cite{knoche2021xqlfw} (large image quality difference), CALFW~\cite{zheng2017calfw} (large age gap), CPLFW~\cite{zheng2018cplfw} (large pose variations), and SLLFW~\cite{deng2017sllfw} (similar faces). Finally, we evaluate the face verification accuracy on AgeDB~\cite{moschoglou2017agedb} (large age gap) and CFP-FP/CFP-FF~\cite{sengupta2016cfp-fp} (frontal-profile/frontal-frontal image pairs). All protocols consist of $3000$ genuine and $3000$ imposter pairs, except for CFP-FP/CFP-FF, which contain $3500$ genuine and $3500$ imposter pairs.
\subsection{Settings}
\label{subsec:settings}
To demonstrate the effectiveness of our octuplet loss, we employ it on various pre-trained approaches, i.e., we take a pre-trained model and fine-tune it only with our proposed octuplet loss function. For the MagFace~\cite{meng2021magface} model, we use stochastic gradient descent with a learning rate of $0.001$ for one epoch. The FaceTransformer~\cite{zhong2021facetransformer} is fine-tuned one epoch employing the AdamW~\cite{loshchilov2017adamw} algorithm ($\epsilon = 10^{-8}$), with a learning rate of $0.0005$. Both latter networks converge already within the first epoch. All other networks are fine-tuned for $6$ epochs with AdaGrad~\cite{duchi2011adaptive} optimizer ($\epsilon = 1.0$) using a learning rate of $0.01$, which is divided by $10$ after epochs $2$, $4$, and $5$. Due to hardware restrictions, we use a mini-batch size $B = 64$ for the FaceTransformer and iResNet50~\cite{duta2021iresnet}, whereas $B = 256$ for all remaining architectures. If not stated otherwise, we utilize the Euclidean distance, set the margin $m$ to $25$, and do not normalize our features.
Fine-tuning on an NVIDIA RTX 3090 (24GB) took approximately $18$ hours for ResNet50~\cite{he2016resnet} ($3$ hours per epoch), which is twice as time-consuming per epoch as pre-training with the ArcFace~\cite{deng2019arcface} loss ($1.5$ hours per epoch). Fine-tuning took $16$ hours for iResNet50~\cite{duta2021iresnet}, $34$ hours for the FaceTransformer~\cite{zhong2021facetransformer}, and $2$ hours for the MobileNetV2~\cite{sandler2018mobilenetv2} architecture. We follow~\cite{deng2019arcface} in data preprocessing and generate normalized face crops ($112$$\times$$112$\,px) with five facial landmarks extracted with the MTCNN~\cite{zhang2016joint} for all our experiments. Additionally, besides horizontal flipping, random brightness and saturation variations are applied as data augmentation. For the generation of deteriorated images, bicubic down-sampling with anti-aliasing is used. To retrieve the face verification performance for different image resolutions, we deteriorate the second image of each pair (according to the protocol) to the particular resolution in the evaluation protocol (cf. \cref{subsec:octuplet-loss}).
We assess the robustness of the face recognition systems to image resolution in terms of their face verification accuracy. We employ the cosine distance as our distance metric for all evaluations and determine the absolute accuracy with 10-fold cross-validation.
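The evaluation procedure can be sketched as follows; the helper below is an illustrative reimplementation (not our evaluation code) that selects, for each fold, the distance threshold maximizing accuracy on the remaining folds, using midpoints between sorted distances as threshold candidates.

```python
import numpy as np

def cosine_distance(u, v):
    return 1.0 - np.sum(u * v, axis=-1) / (
        np.linalg.norm(u, axis=-1) * np.linalg.norm(v, axis=-1))

def cv_accuracy(emb1, emb2, labels, n_folds=10):
    """10-fold cross-validation: pick the best distance threshold on
    nine folds, evaluate it on the held-out fold, average accuracies.
    labels[i] is True for a genuine pair, False for an imposter pair."""
    d = cosine_distance(emb1, emb2)
    folds = np.array_split(np.arange(len(d)), n_folds)
    accs = []
    for k in range(n_folds):
        test = folds[k]
        train = np.setdiff1d(np.arange(len(d)), test)
        s = np.sort(d[train])
        candidates = (s[1:] + s[:-1]) / 2.0  # midpoints between distances
        best = max(candidates,
                   key=lambda t: np.mean((d[train] < t) == labels[train]))
        accs.append(np.mean((d[test] < best) == labels[test]))
    return float(np.mean(accs))
```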
\subsection{Results}
\label{subsec:results}
\subsubsection{Improvements on SOTA Methods}
We apply our fine-tuning strategy to several state-of-the-art face recognition models. For evaluation purposes, XQLFW~\cite{knoche2021xqlfw} fits the purpose of our investigation perfectly since the pairs in the evaluation protocol show a large difference in resolution. In addition, we synthetically deteriorate images of several other datasets to analyze the robustness of our approach to specific image resolutions $r$.
\Cref{tab:sota-comp} summarizes the results and highlights the tremendous robustness increase originating from the octuplet loss \lossOct.
\begin{table*}[t]
\centering
\caption{Improvement of cross-resolution face verification accuracy [$\%$] with our proposed octuplet loss $\lossOct$, evaluated on several datasets (see~\cref{subsec:datasets}) for different image resolutions.}
\resizebox{\linewidth}{!}{\input{tab-sota-comp}}%
\label{tab:sota-comp}%
\end{table*}%
Without \lossOct, the models (BT-M, ST-M1, and ST-M2)~\cite{knoche2021image} are already trained to be resolution-invariant and perform best on XQLFW~\cite{knoche2021xqlfw} and on very low-resolution images ($7\,$px) of the other datasets. All remaining models are very susceptible to image resolution and show a decrease in accuracy for low-resolution images. However, although the FaceTransformer~\cite{zhong2021facetransformer} network tends to be more robust than structures solely based on CNNs, its performance is still worse for very low-resolution images. This is in line with the findings of~\cite{knoche2021xqlfw} and renders a reliable real-world application impossible.
After fine-tuning with our proposed octuplet loss \lossOct, all models perform significantly better on images with low resolutions while maintaining their performance on high-resolution images.
Only a few minor deteriorations can be observed for the FaceTransformer~\cite{zhong2021facetransformer} and iResNet~\cite{duta2021iresnet} architecture, which we investigate in \cref{subsec:analysis}. The largest improvement is observed for the ResNet50~\cite{he2016resnet} architecture pre-trained with the ArcFace~\cite{deng2019arcface} method. We boost the accuracy from $74.22\%$ to $93.27\%$ on the most realistic cross-resolution dataset XQLFW~\cite{knoche2021xqlfw} while even slightly surpassing the baseline accuracy on LFW~\cite{huang2014lfw} with $99.55\%$. Our method further improves the face verification accuracy for BT-M~\cite{knoche2021image}, ST-M1~\cite{knoche2021image}, and ST-M2~\cite{knoche2021image} on high-resolution images, i.e., it recovers the prior drop in accuracy reported in \cite{knoche2021image}. This behavior shows that our method better exploits the available network capabilities and makes them more robust. With the exception of the $7$\,px resolution, the best overall performance after fine-tuning with \lossOct is accomplished by the FaceTransformer network. The vast increase in accuracy, which is observed on four different architectures and four unique pre-training loss functions, demonstrates that our approach is universally applicable and works on various network architectures.
Finally, we measure the face verification accuracy with pairs of images with the same image resolution. Our experimental results in \cref{tab:same-res-comp} indicate that our baseline model is slightly worse in same-resolution face verification than in the cross-resolution scenario (cf. \cref{tab:sota-comp}). This discrepancy is understandable due to the reduced information content of both low-resolution images. However, our approach substantially increases the performance from $77.57\%$ to $89.74\%$ on average across all image resolutions. These outcomes show that our technique is not limited to cross-resolution scenarios and can also be applied in same-resolution scenarios.
\begin{table}[b]
\centering
\caption{Improvement of same-resolution face verification accuracy [$\%$] with our proposed octuplet loss $\lossOct$. Values are averaged across several datasets (see~\cref{subsec:datasets}) for each image resolution.}
\resizebox{0.6\linewidth}{!}{\input{tab-same-res}}%
\label{tab:same-res-comp}%
\end{table}%
In conclusion, these improvements constitute a further step toward universal, resolution-independent face recognition systems.
\subsubsection{Comparison with other SOTA Approaches}
After demonstrating that our octuplet loss \lossOct improves the robustness of various face recognition models in cross-resolution scenarios, we compare \lossOct with state-of-the-art cross-resolution methods. For this purpose, we evaluated our two best-performing approaches (FaceTransformer~\cite{zhong2021facetransformer} and MagFace~\cite{meng2021magface} with \lossOct) on LFW~\cite{huang2014lfw} with particular resolutions to match the evaluation conditions of other approaches and enable a direct comparison. The results are reported in~\cref{tab:sota-comp-2} and show that our approaches outperform all other methods except for $r=8$\,px image resolution, where Lai and Lam~\cite{lai2021deep} achieved a higher accuracy. However, a notable drawback of their approach is the weak performance for high-resolution images. We must interpret the results of Ge et al.~\cite{ge2018low} carefully as their approach is based on a teacher model that performs worse on high-resolution images (only $97.15\%$) and they report numbers of specific models for each image resolution. Moreover, the training resolution is inconsistent across the compared methods and can lead to slight deviations. In conclusion, these results establish our approach as state of the art and underline its advantages.
\begin{table}[h]
\centering
\caption{Cross-resolution face verification accuracy [$\%$], evaluated on LFW for different image resolutions. The best accuracy per resolution is marked in bold.}
\resizebox{0.6\linewidth}{!}{\input{tab-sota-comp-2}}%
\label{tab:sota-comp-2}%
\end{table}%
In addition, we compare our octuplet loss \lossOct with the approach of Terh\"{o}rst et al.~\cite{terhorst2021qmagface}. While they perform worse on XQLFW ($83.95\%$), they report a slightly higher accuracy on LFW and much better results on AgeDB and CFP-FP. However, they aim at general quality-robust face recognition encompassing resolution, age, and pose. In contrast, we focus exclusively on the images' resolution; hence, this is not a fair comparison and should be considered with caution.
\subsection{Analysis and Characteristics}
\label{subsec:analysis}
For the analysis, we use a re-implementation of the popular ArcFace~\cite{deng2019arcface} approach, pre-trained on MS1M-V2~\cite{guo2016ms, deng2019arcface}. It consists of a ResNet50~\cite{he2016resnet} backbone network followed by a $512$-dimensional fully connected layer, which acts as a bottleneck layer during pre-training and provides the facial features $\vec{f}$ for the octuplet loss $\lossOct$. In the following, we refer to this model as our baseline network.
\begin{figure}[b]
\centering
\includegraphics[width=0.5\linewidth]{figures/figures_5-roc-vis.eps}
\caption{Cross-resolution receiver operating characteristics (ROC) curve comparison of the baseline model (dashed) with our proposed octuplet loss $\lossOct$ fine-tuning (solid) on XQLFW, and LFW for selected image resolutions. The equal error rate (EER) is indicated by a dotted line.}
\label{fig:roc_curve}
\end{figure}
We provide the receiver operating characteristics curve in~\cref{fig:roc_curve} to obtain deeper insights into the fine-tuning effect on our baseline network on XQLFW~\cite{knoche2021xqlfw} and the LFW~\cite{huang2014lfw} database at different scales. Primarily at very low false acceptance rates (FAR), the performance gain after the fine-tuning is tremendous. While the baseline model fails for challenging situations, the fine-tuned version achieves superior results. On the XQLFW dataset, our approach increases the true acceptance rate (TAR) for very low FARs from $0\%$ to over $65\%$. This improvement is similar to the behavior on the LFW dataset at $7$\,px. The effect diminishes with increasing resolution until, at $112$\,px, the rates are nearly equal. Overall, this improvement shows the benefit of our method, especially in security applications, e.g., manhunts via surveillance cameras.
Moreover, we investigate the deviation in the accuracy change between several datasets. In~\cref{fig:dataset-comp}, we break down the accuracy change for several datasets and different image resolutions. We observe a significant performance reduction of the baseline model on challenging datasets that focus on age, pose, person similarity, or low image quality. This indicates that the image resolution is even more critical in combination with other adverse conditions. Our proposed octuplet loss fine-tuning strategy accomplishes the best accuracy for LFW~\cite{huang2014lfw} and CFP-FF~\cite{sengupta2016cfp-fp} with over $90\%$ at all resolutions, which are the easiest benchmarks. In contrast, CPLFW~\cite{zheng2018cplfw} seems to be the most challenging dataset, with a performance below $90\%$ at all scales. The chart also reveals that for datasets with large pose variations, such as CPLFW~\cite{zheng2018cplfw} and CFP-FP~\cite{sengupta2016cfp-fp}, there is still a moderate increase in accuracy at $28\,$px image resolution, whereas the boost at that scale is marginal for all other datasets. Only on data with a large age gap (AgeDB~\cite{moschoglou2017agedb} and CALFW~\cite{zheng2017calfw}) and similar faces (SLLFW~\cite{deng2017sllfw}) does our approach marginally reduce the accuracy at $56\,$px and $112\,$px image resolutions.
\begin{figure}[h]
\centering
\includegraphics[width=0.6\linewidth]{figures/figures_2-dataset-vis.eps}
\caption{Cross-resolution face verification accuracy comparison of the baseline model and our proposed octuplet loss $\lossOct$ fine-tuning on several datasets (see~\cref{subsec:datasets}) for different image resolutions. An improvement is highlighted with darker colors, whereas a deterioration is indicated with lighter colors.}
\label{fig:dataset-comp}
\end{figure}
In conclusion, this analysis uncovers that our proposed approach is not limited to relatively simple datasets like LFW~\cite{huang2014lfw} but also offers immense benefits to more challenging datasets, which involve, e.g., large age gaps or head pose variances.
\subsection{Ablation Studies}
\label{subsec:ablation}
We conduct multiple ablation studies to understand the influence of our loss terms, distance metric, feature normalization, margins, and batch size. Firstly, we study each single triplet loss term's contribution and then investigate the distance metric's influence, followed by clarifying the impact of normalizing features. Since the margin is crucial in our proposed octuplet loss, we empirically search for the optimal value for fine-tuning the baseline network. This study, combined with the effect of the batch size, is finally presented in this section. As in \cref{subsec:analysis}, we use the re-implementation of ArcFace~\cite{deng2019arcface} with a ResNet50~\cite{he2016resnet} as the pre-trained network for all ablation studies.
\begin{table*}[t]
\centering
\caption{Two ablation studies: Influence on the cross-resolution face verification accuracy $[\%]$ from each triplet loss term (upper part) and the influence of the distance metric and normalization of features (lower part) using our proposed octuplet loss $\lossOct$ fine-tuning evaluated on several datasets (see~\cref{subsec:datasets}) and different image resolutions. The best performance within each study is marked in bold.}
\resizebox{\linewidth}{!}{\input{tab-loss-dist-comp}}%
\label{tab:loss-dist-comp}%
\end{table*}%
\subsubsection{Loss Terms}
Our proposed octuplet loss consists of four different triplet loss functions (cf. \cref{eq:Octupletloss}), and each term affects the overall performance. Hence, we conduct experiments to obtain the contribution of each term. In this study, we use the Euclidean distance and no feature normalization. As depicted in the upper part of~\cref{tab:loss-dist-comp}, we start from the best mean accuracy across all datasets and image resolutions, which is obtained by including all triplet loss terms.
As expected, utilizing only $\lossTri(\Th)$ leads to the worst results on XQLFW~\cite{knoche2021xqlfw} and on low image resolutions ($7$--$28\,$px), but interestingly, it does not improve accuracy on the high-resolution dataset LFW~\cite{huang2014lfw}, whereas it does improve the accuracy on average across all datasets. We suspect that: 1) the performance is already saturating for LFW, and 2) there might be a few lower-quality images in the LFW dataset, although we expect them to be exclusively in high resolution. However, this term is essential to constrain the network so that it does not focus entirely on low resolution. In contrast, utilizing only $\lossTri(\Tl)$ significantly improves the performance on low-resolution images. Nevertheless, it drastically reduces the verification accuracy on high-quality images and thus is not preferable. Considering only $\lossTri(\Thl)$, $\lossTri(\Tlh)$, or the inclusion of both terms leads to a moderate increase in robustness to image resolution but also comes with the trade-off of reducing the accuracy on high-resolution images. A similar effect occurs for the combination of $\lossTri(\Th)$ and $\lossTri(\Tl)$. Interestingly, the $\lossTri(\Th)+\lossTri(\Thl)$ or $\lossTri(\Th)+\lossTri(\Tlh)$ configuration yields the best performance on intermediate image resolutions ($28\,$px and $56\,$px). Furthermore, experiments with three \lossTri terms reveal that removing $\lossTri(\Th)$ or $\lossTri(\Tl)$ leads to a marginal decline in performance.
This breakdown of the individual loss terms shows that each term contributes to the overall performance. However, the benefit of including both $\lossTri(\Th)$ and $\lossTri(\Tl)$ instead of simply one of them is only minor since they both connect high-resolution images with low-resolution images (cf.~\cref{fig:1-loss-vis}).
\subsubsection{Distance Metric and Feature Normalization}
As described in~\cref{subsec:settings}, our proposed approach follows the work of Hermans et al.~\cite{hermans2017defense} and uses the Euclidean distance metric without feature normalization. However, as proposed in other works~\cite{schroff2015facenet, boutros2022self, parkhi2015deep, feng2020triplet}, we also experimented with the squared Euclidean distance. Additionally, we conduct experiments with the cosine distance metric. Since these configurations affect the magnitude of the margin, we empirically determine the best margin for each configuration.
\Cref{tab:loss-dist-comp} illustrates the face verification accuracies and points out that the Euclidean distance is best for low-resolution images. In contrast, for intermediate- and high-resolution images (from $28\,$px up to $112\,$px), the squared Euclidean distance with feature normalization leads to the best results. The improvement for this configuration is also evident on the XQLFW~\cite{knoche2021xqlfw} dataset and might be preferred for real-world applications. Due to the utilization of the cosine distance in our evaluation protocols, one would expect that utilizing this metric in our octuplet loss fine-tuning strategy leads to the best results. However, this is not the case, as seen in the bottom row of \cref{tab:loss-dist-comp}. We can achieve similar performance on LFW~\cite{huang2014lfw} and XQLFW, but for lower image resolutions, fine-tuning with the cosine distance leads to a smaller improvement.
\subsubsection{Margin and Batch Size}
To conclude our ablation studies, we report the face verification accuracy after fine-tuning our baseline network with different margins $m$ and batch sizes $B$. Keeping in mind that our baseline network achieves $84.90\%$ accuracy, \cref{fig:margin-batchsize} shows that the improvement is more prominent for a larger number of samples within each batch. This effect is unsurprising since a larger batch size increases the probability of the hard sample mining algorithm finding even more challenging samples. Batches containing fewer than $64$ samples lead to even worse accuracy, and hence, they are not further investigated in this work. Due to hardware limitations, we were unable to conduct experiments with larger batch sizes. Nevertheless, we expect this trend to continue until our hard sample mining algorithm only selects outliers, i.e., incorrectly labeled images.
\begin{figure}[h]
\centering
\includegraphics[width=0.7\linewidth]{figures/figures_3-margin-vis.eps}
\caption{Cross-resolution face verification accuracy $[\%]$ after fine-tuning with our proposed octuplet loss $\lossOct$ for different values of the margin $m$ and batch size $B$, evaluated on our validation split of MS1M-V2. Note that we report the mean accuracy over image resolutions $r \in \{7, 14, 28, 56, 112\}$.}
\label{fig:margin-batchsize}
\end{figure}
In addition, we evaluated the face verification accuracy for margin values between $1$ and $500$. A value of $m = 25$ leads to peak performance. This maximum is consistent across different batch sizes, indicating that the optimal margin is independent of the batch size.
\section{Method}
\label{sec:method}
\subsection{Triplet Loss}
\label{subsec:triplet-loss}
Recent works~\cite{schroff2015facenet, feng2020triplet, zha2019tcn, lai2021deep} showed that triplet-based learning helps extract more discriminative face embeddings. As it forms the foundation of our octuplet loss function, we first review the concept of triplet loss~\cite{schroff2015facenet} in the face recognition domain in more detail.
Let $\B = \{\im{1},\im{2},\,\ldots\,,\boldsymbol{I}_B\}$ be a mini-batch of $B$ facial images $\im{} \in \R^{112\times112\times3}$, whereby each image belongs to a particular identity $\id{\im{}}$. Given that every identity within the mini-batch is represented by at least two images, we define a set of triplets according to the following rule:
\begin{equation}
\T(\B_1, \B_2, \B_3) := \big\{ (\A,\P,\N) : \A \in \B_1, \P \in \B_2,
\N \in \B_3, \id{\A} = \id{\P},
\id{\A} \ne \id{\N}, \A \ne \P \big\},\hspace{0.6cm}
\label{equ:setT}
\end{equation}\vspace{1mm}
with $\A$ denoting an anchor image, $\P$ being its related positive image, which belongs to the same identity, and $\N$ being its related negative image of a different identity. Using a feature extractor $\feat{\im{}}$, we obtain facial embeddings $\vec{f}=\feat{\im{}}$ in a $d$-dimensional Euclidean space. The triplet loss then aims to indirectly enlarge the feature distance $\dist{\cdot,\cdot}$ between $\P$ and $\N$ by pulling $\feat{\P}$ and $\feat{\A}$ together and simultaneously repelling $\feat{\N}$ from $\feat{\A}$. In this work, we consider three different feature distance metrics: cosine $\dcos$, Euclidean $\deuc$, and squared Euclidean $\deucsqa$, which are defined by
\begin{align}
&\dcos(\vec{f}_1,\vec{f}_2) = 1 - \frac{\vec{f}_1 \cdot \vec{f}_2}{\lVert \vec{f}_1 \rVert_2 \, \lVert \vec{f}_2 \rVert_2},\\[0.2cm]
&\deuc(\vec{f}_1,\vec{f}_2) = \lVert \vec{f}_1 - \vec{f}_2 \rVert_{2},\text{ and}\\[0.2cm]
&\deucsqa(\vec{f}_1,\vec{f}_2) = \deuc(\vec{f}_1,\vec{f}_2)^2.
\end{align}\vspace{1mm}
A margin $m$ enforces a minimum distance between the positive and the negative image: triplets for which the distance between the negative and the positive image already exceeds the margin do not affect the loss (cf. \cite{schroff2015facenet, kaya2019deep}). The objective triplet loss function $\lossTri$ can then be formulated as follows:
\begin{equation}
\lossTri(\T) = \frac{1}{|\T|}\sum_{\substack{(\A,\P,\N)\\ \in \T}}
\Big[\text{d}\big(\text{f}(\A),\text{f}(\P)\big) -
\text{d}\big(\text{f}(\A),\text{f}(\N)\big)+ m\Big]_+,
\label{equ:tripletloss}
\end{equation}\vspace{1mm}
where $[\cdot]_+$ denotes $\text{max}(0,\cdot)$.
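To make the notation concrete, the following is a minimal NumPy sketch of the three distance metrics and of $\lossTri$ as defined above (an illustration under our notation, not the original implementation; `f` stands for any embedding function):

```python
import numpy as np

def d_cos(f1, f2):
    """Cosine distance between two embedding vectors."""
    return 1.0 - np.dot(f1, f2) / (np.linalg.norm(f1) * np.linalg.norm(f2))

def d_euc(f1, f2):
    """Euclidean distance between two embedding vectors."""
    return float(np.linalg.norm(f1 - f2))

def d_euc_sq(f1, f2):
    """Squared Euclidean distance."""
    return d_euc(f1, f2) ** 2

def triplet_loss(triplets, f, d=d_euc, m=25.0):
    """Mean of [d(f(A), f(P)) - d(f(A), f(N)) + m]_+ over a set of
    (anchor, positive, negative) triplets."""
    hinge = [max(0.0, d(f(a), f(p)) - d(f(a), f(n)) + m)
             for (a, p, n) in triplets]
    return sum(hinge) / len(hinge)
```

The default margin of $25$ matches the value used in the experiments above.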
\subsection{Hard Sample Mining}
\label{subsec:hard-batch-mining}
Given the constraint that in all our experiments a mini-batch strictly contains two randomly selected images of the same identity, the number of identities within each mini-batch is $B/2$. For each anchor image $\A$, we can find exactly one positive image $\P$ and $B-2$ negative images $\N$. Hence, the cardinality of the set is $|\T| = B^2-2B$ (cf. \cref{equ:setT}). With this set of triplets $\T$, the maximum information within each mini-batch is exploited by the triplet loss. However, the majority of triplets within $\T$, according to \cref{equ:setT}, do not contribute towards $\lossTri$ as they are already correctly classified and thus fulfill $\text{d}\big(\feat{\A},\feat{\P}\big) + m < \text{d}\big(\feat{\A},\feat{\N}\big)$ (cf. \cref{equ:tripletloss}). To accelerate the training procedure, we follow Hermans et al.~\cite{hermans2017defense} and select only the most relevant negative sample $\Nh$, which is obtained for a given anchor image $\A$ by
\begin{equation}
\Nh = \argmin_{\N}\,\text{d}\big(\text{f}(\A), \text{f}(\N)\big)\,.
\end{equation}\vspace{1mm}
This additional constraint leads to a more meaningful set $\T$ with a much smaller cardinality $|\T| = B$. However, selecting the most challenging sample is prone to include outliers, e.g., incorrectly labeled data, and thus hinders $f$ in learning meaningful associations. Nevertheless, in line with~\cite{hermans2017defense}, we observed in our experiments that the large number of triplets within each mini-batch mitigates this effect. Thus, we consider this hard sample mining strategy a valid method for fine-tuning.
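This mining rule can be sketched in pure Python as follows (our illustration; for simplicity, embeddings stand in for images and each identity appears exactly twice, as in our mini-batches):

```python
def hard_negative_triplets(images, identities, f, d):
    """For each anchor, keep its positive and only the hardest negative
    N* = argmin_N d(f(A), f(N)), shrinking |T| from B^2 - 2B to B."""
    feats = [f(im) for im in images]
    triplets = []
    for i in range(len(images)):
        pos = [j for j in range(len(images))
               if identities[j] == identities[i] and j != i]
        neg = [j for j in range(len(images)) if identities[j] != identities[i]]
        if not pos or not neg:
            continue
        j_star = min(neg, key=lambda j: d(feats[i], feats[j]))
        triplets.append((i, pos[0], j_star))  # indices of (A, P, N*)
    return triplets
```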
\subsection{Octuplet Loss}
\label{subsec:octuplet-loss}
The primary purpose of this work is to improve the robustness of existing face recognition models by elegantly exploiting the triplet loss. Inspired by~\cite{gomez2019triplet,lai2021deep}, we formulate four different triplet loss terms combining high- and low-resolution images. In contrast to \cite{lai2021deep}, we follow the idea of fine-tuning rather than training from scratch utilizing a classification loss. With our octuplet loss, we aim to allow any network to directly learn the connection between high and low resolution while maintaining its performance on high-resolution images. The concept of applying the triplet loss to features from different image resolutions is also proposed in \cite{zha2019tcn}. However, their features are computed via two separate branches of the network, thus increasing the computational costs. Instead, we aim to directly project embeddings from images with arbitrary resolutions $r$ into a common feature space.
Nowadays, benchmarks and applications typically utilize the distance between facial embeddings to distinguish between same or different identities. Therefore, it is reasonable to employ the feature distances directly in the training phase. Due to the lack of large face recognition training datasets containing both low and high-resolution images, we simulate a lower image resolution by synthetically down-sampling images to a particular resolution $r \in \{7, 14, 28\}$ and subsequent up-sampling to restore the original resolution (in our experiments $112$$\times$$112$, cf. \cref{subsec:triplet-loss}). For both operations, we apply a bicubic kernel and anti-aliasing. Since we only use square images, we specify the image resolution by the first dimension for the remaining part of this work. With this image degradation method, we double the size of every mini-batch, such that it comprises $B$ high-resolution images $\B$ with their corresponding low-resolution images $\lr{\B}$. Together with the hard sample mining strategy (cf. \cref{subsec:hard-batch-mining}), we define the following four sets of triplets:
\begin{equation}
\Th := \big\{(\A,\P,\Nh_1) \in \T(\B,\B,\B)\,:
\Nh_1 = \argmin_{\N}\,\text{d}\big(\text{f}(\A), \text{f}(\N)\big)\big\}\,,
\end{equation}\vspace{1mm}
which exclusively consists of high-resolution images.
\begin{equation}
\Thl := \big\{(\A,\lr{\P},\lr{\Nh_2}) \in \T(\B,\lr{\B},\lr{\B})\,:
\lr{\Nh_2} = \argmin_{\lr{\N}}\,\text{d}\big(\text{f}(\A), \text{f}(\lr{\N})\big)\big\}\,
\end{equation}\vspace{1mm}
and
\begin{equation}
\Tlh := \big\{(\lr{\A},\P,\Nh_3) \in \T(\lr{\B},\B,\B)\,:
\Nh_3 = \argmin_{\N}\,\text{d}\big(\text{f}(\lr{\A}), \text{f}(\N)\big)\big\}\,,
\end{equation}\vspace{1mm}
which contain a mix of low- and high-resolution images. Lastly,
\begin{equation}
\Tl := \big\{(\lr{\A},\lr{\P},\lr{\Nh_4}) \in \T(\lr{\B},\lr{\B},\lr{\B})\,:
\lr{\Nh_4} = \argmin_{\lr{\N}}\,\text{d}\big(\text{f}(\lr{\A}), \text{f}(\lr{\N})\big)\big\}\,,
\end{equation}\vspace{1mm}
which comprises solely low-resolution images. With this configuration, we ensure that $\P$ and $\N$ are either both degraded or both non-degraded.
Note that the hard sample mining strategy is applied separately for each set of triplets. Simultaneously calculating the triplet loss for each set will result in considering the feature distances between up to eight different images for every $\A \in \B$. Thus, the combination of all four triplet losses consequently depends on the octuplet $(\A,\lr{\A},\P,\lr{\P},\Nh_1,\lr{\Nh_2},\Nh_3,\lr{\Nh_4})$. As a result, our novel loss is named octuplet loss \lossOct and is computed by
\begin{equation}
\lossOct = \lossTri(\Th) + \lossTri(\Thl) +
\lossTri(\Tlh) + \lossTri(\Tl)\,.
\label{eq:Octupletloss}
\end{equation}\vspace{1mm}
This way, \cref{eq:Octupletloss} encompasses all three cases: low-resolution face pairs ($\lossTri(\Tl)$), cross-resolution face pairs ($\lossTri(\Thl)$ and $\lossTri(\Tlh)$), and high-resolution face pairs ($\lossTri(\Th)$). Consequently, we not only increase the robustness against low- and cross-resolution face pairs but also guarantee that the network does not forget to handle high-resolution face pairs.
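Putting the four terms together, the computation above can be sketched as follows (our simplified illustration, not the original implementation: the hardest negative is mined separately within each set, and, as a simplification, the index of the anchor is always excluded as positive, which the formal definition only requires for the same-resolution sets):

```python
def octuplet_loss(batch_hr, batch_lr, identities, f, d, m=25.0):
    """L_oct = L_tri(T^H) + L_tri(T^HL) + L_tri(T^LH) + L_tri(T^L)."""
    def mined_triplet_loss(anchors, others):
        total, count = 0.0, 0
        for i in range(len(anchors)):
            pos = [j for j in range(len(others))
                   if identities[j] == identities[i] and j != i]
            neg = [j for j in range(len(others))
                   if identities[j] != identities[i]]
            if not pos or not neg:
                continue
            j_star = min(neg, key=lambda j: d(anchors[i], others[j]))
            total += max(0.0, d(anchors[i], others[pos[0]])
                         - d(anchors[i], others[j_star]) + m)
            count += 1
        return total / max(count, 1)

    fh = [f(im) for im in batch_hr]  # embeddings of high-resolution images
    fl = [f(im) for im in batch_lr]  # embeddings of their degraded versions
    return (mined_triplet_loss(fh, fh)    # T^H : HR anchor, HR pos/neg
            + mined_triplet_loss(fh, fl)  # T^HL: HR anchor, LR pos/neg
            + mined_triplet_loss(fl, fh)  # T^LH: LR anchor, HR pos/neg
            + mined_triplet_loss(fl, fl)) # T^L : LR anchor, LR pos/neg
```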
\section{Conclusion}
\label{sec:conclusion}
This work conducts further research on low-/cross-resolution face recognition and proposes a novel fine-tuning strategy with an octuplet loss function for existing models to boost their robustness against varying image resolutions. Our contribution involves a combination of four triplet loss terms applied simultaneously to high- and low-resolution images. This interaction exploits not only the relationship between different resolutions of the same image but also between different images of the same identity. The most significant advantage compared to other approaches is that this method can be built on top of existing approaches instead of requiring a costly re-training.
We demonstrated the effectiveness of our fine-tuning strategy with several state-of-the-art face recognition approaches and observed a vast increase in robustness against image resolution without any significant trade-off on high-resolution images. Our approach performs best on the recently published cross-quality labeled faces in the wild dataset, achieving $95.12\%$ accuracy. Additionally, we exhaustively analyzed the improvements on several popular datasets and concluded that our method is universally applicable. Moreover, our ablation study revealed that all four triplet loss terms are needed for superior performance. We discovered that the distance metric and feature normalization play a less important role as long as the margin for the triplet loss terms is chosen correctly.
Our future work will focus on reducing the amount of data needed for the octuplet loss fine-tuning strategy to further reduce the training time. In other words, we want to follow up on the hard sample mining strategy. An intelligent distillation process of the training set, i.e., keeping only the most relevant images, could potentially achieve even faster convergence. In masked face recognition, we witness many analogies to cross-resolution face recognition, so it would be interesting to explore whether our octuplet loss concept could be beneficial there.
We believe that our contribution can help the community build more robust face recognition systems in the future. Code and details are released under the MIT license.
2103.15344
\section{1. Introduction}
Breakthroughs in synthesis can give rise to new material concepts. For example, graphene together with a variety of single-layer structures has substantiated and popularized the concept of 2-D materials \cite{Geim}. On the other hand, varying the attributes associated with existing conceptions can help discover new types of compounds. For example, the discovery of high-entropy alloys (HEA) \cite{HEA04, HEA} was motivated by the idea of tuning the number of alloy constituents and their concentrations, two basic attributes in present concepts of alloys. Such synthesis-conception duality is essential for material development, and the dual aspects are mutually inspiring. The recent discovery and study of $AeA$Fe$_4$As$_4$ ($A$ = alkali, $Ae$ = alkaline earth) 1144-phase compounds \cite{Iyo, Mou, MeierPRM, MeierNat, Song, pinSC, bilayer, 2dSC} is such an example. Its synthesis demonstrated the creation of quaternary compounds by judicious combination of related ternary materials $A$Fe$_2$As$_2$ and $Ae$Fe$_2$As$_2$; on the other hand, choosing and varying the cation attributes can yield a broader class of intermetallic compounds, as conceptualized in this work.
\begin{figure}
\includegraphics[scale=0.7]{figure1.png}
\caption{\label{f1}(color online): (a) The crystal structure of the 1144-phase. If the \textit{A} and \textit{B} sites are occupied by the same cations, it becomes the 122-phase. (b) Structure parameters with broken glide symmetry. (c) Charge transfer from cation layers to TM layers. (d) Schematic charge distribution of KCaRu$_4$P$_4$. The layers of K$^+$ and Ca$^{2+}$ carry positive charge; each TM layer carries $-1.5e$ per unit cell. Pn stands for pnictogen and TM for transition metal.}
\end{figure}
The 1144-phase (Fig.~\ref{f1}(a)) can be regarded as the 122-phase with alternating cation layers replaced by hetero-cations \cite{Iyo}. Stoichiometrically, it is equivalent to 50\% doping of \textit{Ae}Fe$_2$As$_2$ by \textit{A} (or of \textit{A}Fe$_2$As$_2$ by \textit{Ae}). However, this view obscures the fact that the Fe-As layers are significantly altered by cation polarization in 1144-phases, which sets them apart from 122-phases \cite{Sasmal, Ralloy, SrNa}. Compared with 122-phases, 1144-phases feature distinct parameters (e.g., bonding angles, pnictogen heights) above and below the transition-metal (TM) layer (Fig.~\ref{f1}(b)). Consequently, the \textit{n}-glide plane across the Fe-layer disappears, and the space group is reduced from $I4$/$mmm$ in 122-phases to $P4$/$mmm$ in 1144-phases \cite{Song}. This fact might be relevant to the emergence of hedgehog spin textures \cite{MeierNat}, low-dimensional superconductivity (SC) \cite{2dSC}, FM-SC coexistence \cite{CaoRbEu}, suppressed $T_c$ with pressure \cite{Xiang}, and disputed nematic behavior \cite{MeierNat, NoNeum, Anna_Boehmer, SrNa}. Besides, the 1144 crystal surface has been suggested as a platform for testing Majorana zero modes \cite{DingH, Surf}.
The 1144-phase is a representative of ``ordered stacking'' \cite{ChgCao}, and it prompts an intriguing conjecture that such compounds exist widely, subject to certain rules about the building blocks: cation and TM layers. Alloys mix elementary atoms into a random solid solution (SS), while 1144-phases take layers as basic units and pack them into an ordered pattern. Since interesting phenomena (e.g., SC \cite{MeierNat, pinSC, 2dSC}, magnetism \cite{CaoRbEu}) arise from \textit{d}-orbitals in TM-layers, the most revealing view is that the TM-layer resides in between two hetero-cation layers (Fig.~\ref{f1}(a)). It is reminiscent of 2-D materials sandwiched between a substrate and vacuum. Varying the species of hetero-cations serves as an attribute to adjust chemical potentials and symmetry, just like choosing an underlying substrate and on-top dopants as the counterpart tuning for graphene \cite{Geim, Gap}. The pnictogen height \textit{h} (Fig.~\ref{f1}(a)) exhibits considerable asymmetry on the two sides, with 5\%-10\% changes compared with 122-phases. The manipulation can be further enriched by the diverse candidates for substitution: Ru-Si \cite{RRuSi}, VO$_2$ \cite{VO2} for TM-layers; and \textit{A}, \textit{Ae}, rare earth (RE) for cation layers \cite{bqs}.
The paper is organized as follows. Sec. 2 introduces a general class of intermetallics inspired by the 1144-phase. In Sec. 3, four families of phosphides \textit{AB}(TM)$_4$P$_4$ (TM = Fe, Ru, Co, Ni) are chosen for a case study. We discuss their formation conditions at zero and finite temperatures, accounting for the configurational, vibrational, and electronic degrees of freedom. A number of 1144-compounds (especially Ru- and Fe-based phosphides) are predicted to be stable. We also examine the stability mechanism, electronic structures, and latent features of the 1144 crystals. Sec. 4 presents our synthesis of high-purity KCaRu$_4$P$_4$; its structure is determined as a proof-of-principle example.
In Sec. 5, we survey evidence that 1144-like crystals are broadly existent, tuning-friendly, and host versatile physical properties, such as the interplay of magnetism and superconductivity \cite{David, DaiP}, volume-collapsed phases \cite{Collapse, Borisov, TwoTr}, and heavy fermion behaviors \cite{RRuSi}. This class also presents an avenue for designing complex many-constituent intermetallic compounds beyond ternary alloys.
\section{2. Definition of hetero-layer intermetallic crystals}
The 1144-phase raises many questions. Why does an ordered phase emerge at 50\% concentration, where the configurational entropy is maximal? This is the exact opposite of high entropy alloys \cite{HEA}, which target equal concentrations to maximize the tendency toward randomness. Do such ordered compounds exist beyond Fe-arsenides, or beyond 1144-phases? If so, what are their common features? Which minor distinctions can be ignored? Is there a general descriptor for them?
Working on these questions, we naturally arrive at a broader class of $A$-$B$ hetero-layer intermetallic crystals defined below. For short, we call them hetero-crystals (HC): ``hetero" refers to the $A$-$B$ stacking, and ``crystal" reminds us that they are bulk rather than low-dimensional or thin-film systems. Note that hetero-crystal is merely an abbreviation in this context.
(i) They are bulk crystals (i.e., the stacking units are atomically thick and periodic along the $z$-axis), manufactured by liquid growth \cite{MeierPRM} or solid-state reaction \cite{Iyo}, in contrast to low-dimensional or thin-film hetero-structures (such as super-lattices \cite{Superlat} or tunneling junctions \cite{Hstr}), which are usually prepared by vapor deposition.
(ii) They consist of two sub-systems: cation layers (usually metal elements) and skeleton layers (e.g., TM-pnictogen or TM-chalcogen layers). Note that ``layer" is indispensable, i.e., it requires that bonding along the $z$-axis be weaker than that in the $x$-$y$ plane. Thus, structures without layer divisions, such as pyrochlores and fluorites, are not within the present definition, although they might exhibit cation ordering \cite{PFtr, NaYbO2}. The layer construction significantly affects physical properties. For example, the two sub-systems differ in electronegativity, and the charge transfer between them (Fig.~\ref{f1}(c)(d)) leads to an ionic type of inter-layer bonding. The TM-layer motif (usually a tetrahedron) causes particular hybridizations of \textit{d}-orbitals that generate partially filled bands, promoting metallic properties and itinerant magnetism \cite{David}. These features set hetero-crystals apart from hetero-structures in the context of semiconductors \cite{Hstr}.
(iii) They show an \textit{A}-\textit{B} stacking of alternating cation layers. As such, the concentration of \textit{A} (or \textit{B}) is fixed to 50\% and disorder in the cation layers is strongly suppressed. More importantly, the \textit{A}-\textit{B} stacking creates asymmetric up-and-down environments and distorts the TM-layer, which could be Fe-Pn, NiO$_2$ \cite{NiX, NiL}, VO$_2$, or Mo$_N$O$_{3N-1}$ ($N$ = 1, 2, 3, ...) \cite{VO2}. The structural change associated with this symmetry breaking is large compared with that achievable by applying pressure. For instance, our calculations show that the asymmetry of the pnictogen height $h$ (Fig.~\ref{f1}(b)) is up to 5\%${\sim}$10\%. Thus it seems plausible that hetero-crystals will behave distinctively from SS phases with identical stoichiometry.
(iv) They are formed by combining two \textit{stable} parent phases. For example, the 1144-phase can be synthesized by mixing two 122-compounds, as done in this work. In general, the parent phase refers to the phase that shares the skeleton layer with the hetero-crystal but has mono-cations. It is required that the involved parent phases be stable (or meta-stable) \cite{bqs}. Condition (iv) suggests that seeking hetero-crystals should begin with looking for stable parent phases. Hundreds of 122-phases have been synthesized \cite{122Base}, and this reservoir of parent compounds facilitates synthesis.
Besides the four conditions above, it is worth mentioning that although the 1144-phase is quaternary, it is effectively binary, as the main variables are the cations $A$ and $B$. When the parent phases are mixed, entropy favors random mixing of $A$ and $B$, forming a uniform SS phase; if an ordered phase occurs instead, it must be favored by enthalpy. Thus, the emergence of the 1144-phase is closely linked to the competition between enthalpy and configurational entropy.
\begin{figure}
\includegraphics[scale=0.46]{figure2.png}
\caption{\label{fig:epsart}(Color online): Lattice parameters $a$ (red) and $c$ (blue) of the tetragonal unit cell of 122-phases. Filled and open points represent experimental and theoretical values, respectively. Missing points correspond to systems that are unstable or not yet reported. Experimental results are cited from the Springer Materials database.\label{f3}}
\end{figure}
\section{3. Theoretical results}
\subsection{3A. Stability of 1144-phase}
The 1144-phase is presently limited to Fe-arsenides with cations from the alkali metal group (IA) or alkaline earth group (IIA) \cite{bqs}, and it is highly desirable to extend the chemical scope. Empirical rules for stable 1144-phases have been recognized: a large cation-radius mismatch ${\Delta}R$ seems a favorable factor; in addition, the lattice mismatch ${\Delta}a$ is also relevant \cite{Iyo, MeierPRM}. The relevance of the two descriptors ${\Delta}R$ and ${\Delta}a$ can be rationalized by the enthalpy change due to elastic deformations \cite{Song}. The mechanism is generic, so it applies to Ru-P 1144 systems and other TM-pnictides. On the other hand, the descriptors ${\Delta}R$ and ${\Delta}a$ fail to encode charge information (i.e., the valence states exhibited by the cations) \cite{ChgCao}, and are thus incapable of treating situations with valence variation, e.g., when trivalent rare earth cations RE$^{3+}$ are introduced. This implies that additional descriptors are needed.
We start the search from parent compounds of 122-phase pnictides, which exist broadly for different valences ($X$ = \textit{A}$^+$, \textit{Ae}$^{2+}$, or RE$^{3+}$), an ideal testing ground for shedding light on the charge effect. Also, the iso-valent substitution between As and P is likely to yield stable phases. In terms of properties, Ru might introduce strong spin-orbit coupling, and the radius of P is smaller, making it easier to realize the half-collapsed phase \cite{Collapse, Borisov, TwoTr}. The lattice parameters (experimental and calculated) are listed in Fig.~\ref{f3}(a)-(d). Notably, 122-phases are unstable with certain cations. The calculations are based on density functional theory (DFT) \cite{Perdew} with the Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional, implemented in the Vienna ab initio simulation package (VASP). Both the unit cell and the internal coordinates are fully relaxed. Calculation parameters are given in the supplementary information (SI) \cite{SI}. The calculated lattice parameters are close to the experimental values for most systems, with differences $\leq$1\%, except for Ce-containing 122-phases, where lattice parameters are consistently overestimated, probably due to inherent issues of pseudo-potentials in treating the variable valence of Ce.
The 1144-phase is obtained by mixing two 122-phases with common skeleton layers (chosen from the four groups in Fig.~\ref{f3}). The reaction merely involves a redistribution of cations, either ordered or random. Thus, the question becomes the competition between the 1144-phase and the 122-solution-phase (122(s)-phase) \cite{Iyo, Song}. In general, this simplification to a two-phase competition is made possible by condition (iv) of Sec. 2, i.e., the existence of stable parent phases. This is indispensable, because stable parent phases provide starting points and allow the reaction to be studied in a simpler landscape.
At zero temperature, phase stability is dictated by the formation enthalpy ${\Delta}H = H_{122(s)} - H_{1144}$. The enthalpy of the ordered phase can be computed from super-cell calculations within DFT. For the solution phase, however, the enthalpy involves random configurations, and we adopt the ideal-solution approximation:
\begin{equation}
H_{122(s)}=x{\cdot}H_{122}^{A}+(1-x){\cdot}H_{122}^{B},
\label{Hx}
\end{equation}
where $x$ is the concentration of cation $A$, and $x=1/2$ in this case. The stable 1144-phases (${\Delta}H{>}5$ meV/atom) are summarized in Fig.~\ref{f4}(a) (see Sec. 2 of the SI \cite{SI} for the complete list), where ${\Delta}H$ is plotted against two descriptors:
\begin{equation}
{\Delta}a = -{\mid}a_{122}^{A}-a_{122}^{B}{\mid},
\label{descriptor_a}
\end{equation}
\begin{equation}
{\Delta}c = {\mid}c_{122}^{A} - c_{122}^{B}{\mid}
\label{descriptor_c}
\end{equation}
with $a$ and $c$ denoting the lattice parameters of the tetragonal unit cells of the parent 122-phases. A number of 1144-phases are found to be stable, especially Fe- and Ru-phosphides. The stable 1144-phases concentrate in a region of small $|{\Delta}a|$ and large $|{\Delta}c|$ (upper right corner of Fig.~\ref{f4}(a)), a tendency aligned with earlier findings \cite{Iyo, Song}. However, a closer inspection reveals some discrepancies. For example, the two red hexagons (black arrow) deep in the upper-right corner (CsErRu$_4$P$_4$ and CsHoRu$_4$P$_4$) are not the most stable, although they best satisfy the criteria. This implies additional descriptors, as discussed shortly.
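As an illustration of this zero-temperature screening, the following sketch evaluates the ideal-solution enthalpy of Eq.~\ref{Hx} and the descriptors of Eqs.~\ref{descriptor_a}-\ref{descriptor_c}; all numerical inputs are hypothetical placeholders, not DFT results from this work.

```python
# Minimal sketch (hypothetical numbers) of the zero-temperature screening:
# ideal-solution enthalpy (Eq. 1) and size-mismatch descriptors (Eqs. 2-3).

def formation_enthalpy(H_1144, H_122_A, H_122_B, x=0.5):
    """dH = H_122(s) - H_1144 in meV/atom; positive dH favors the 1144-phase."""
    H_solution = x * H_122_A + (1.0 - x) * H_122_B  # ideal-solution approximation
    return H_solution - H_1144

def descriptors(a_A, c_A, a_B, c_B):
    """Descriptors from parent 122-phase lattice parameters (Angstrom)."""
    da = -abs(a_A - a_B)  # sign convention of Eq. 2 (da <= 0)
    dc = abs(c_A - c_B)   # Eq. 3 (dc >= 0)
    return da, dc

# Illustrative inputs (not actual DFT values):
dH = formation_enthalpy(H_1144=-7450.0, H_122_A=-7442.0, H_122_B=-7444.0)
da, dc = descriptors(a_A=3.88, c_A=12.6, a_B=3.92, c_B=11.7)
print(dH, da, dc)  # dH = 7.0 meV/atom -> 1144-phase favored
```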
At finite temperature, the phase stability is determined by the free energy:
\begin{equation}
\begin{split}
{\Delta}G&=G_{122(s)}-G_{1144} \\
&={\Delta}H+{\Delta}E_0-{\Delta}S_{conf}T-{\Delta}S_{vib}T-{\Delta}S_{e}T,
\end{split}
\label{eq4}
\end{equation}
where $E_0$ is the zero-point vibration energy, and $S_{conf}$, $S_{vib}$, and $S_e$ are the configurational, vibrational, and electronic entropies, respectively. The configurational entropy $S_{conf}$ of the 1144-phase is zero. The $S_{conf}$ of the 122(s)-phase (per cation site) is estimated by \cite{Gaskell}
\begin{equation}
S_{conf}=-k_{B}(x{\log}x+(1-x){\log}(1-x)),
\label{eq5}
\end{equation}
where \textit{x} is defined in Eq.~\ref{Hx}. In this case, $S_{conf}$ is a constant, 0.012 meV/(atom K). The electronic entropy is estimated by
\begin{equation}
S_{e}=-k_{B}{\int}D(E)\left[f{\log}f+(1-f){\log}(1-f)\right]dE,
\label{eq6}
\end{equation}
where $f$ is the Fermi distribution function and $D(E)$ is the density of states, both obtained from DFT calculations. Calculations of the zero-point energy and the vibrational entropy $S_{vib}$ are performed with the code \textit{phonopy} within the harmonic approximation \cite{Phonon}. For the 122(s)-phase, $S_{vib}$ and $S_{e}$ are estimated as the average of the two 122-phases, analogous to Eq.~\ref{Hx}.
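For concreteness, the electronic-entropy integral of Eq.~\ref{eq6} can be evaluated by simple quadrature. The sketch below uses a toy flat density of states rather than the DFT $D(E)$; for a flat DOS it reproduces the Sommerfeld result $S_e=\frac{\pi^2}{3}k_B^2 T D$.

```python
import numpy as np

k_B = 8.617333e-5  # Boltzmann constant, eV/K

def electronic_entropy(E, D, mu, T):
    """Eq. 6: S_e = -k_B * int D(E) [f ln f + (1-f) ln(1-f)] dE  (eV/K).
    E: uniform energy grid (eV), D: density of states (states/eV)."""
    f = 1.0 / (np.exp((E - mu) / (k_B * T)) + 1.0)
    f = np.clip(f, 1e-300, 1.0 - 1e-16)       # guard the logarithms
    integrand = D * (f * np.log(f) + (1.0 - f) * np.log(1.0 - f))
    dE = E[1] - E[0]
    return -k_B * integrand.sum() * dE        # rectangle-rule quadrature

# Toy flat DOS of 1 state/eV around the Fermi level (illustration only):
E = np.linspace(-1.0, 1.0, 4001)
S_e = electronic_entropy(E, np.ones_like(E), mu=0.0, T=300.0)
# Compare with the Sommerfeld value (pi^2/3) k_B^2 T D ~ 7.3e-6 eV/K
```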
\begin{figure}
\includegraphics[scale=0.52]{figure3.png}
\caption{\label{fig:epsart}(Color online): (a) ${\Delta}H$ (proportional to the hexagon size) plotted against the two descriptors ${\Delta}a = -{\mid}a_{122}^{A}-a_{122}^{B}{\mid}$ and ${\Delta}c = {\mid}c_{122}^{A} - c_{122}^{B}{\mid}$, where $a$ and $c$ are lattice parameters of the two parent 122-phases \textit{A}(TM)$_2$P$_2$ and \textit{B}(TM)$_2$P$_2$. The most promising systems are highlighted with solid hexagons and the corresponding cations are denoted. (b) Temperature dependence of the free energy ${\Delta}G=G_{122(s)}-G_{1144}$ (for ${\Delta}G>0$ the 1144-phase is stable). The dashed line includes only $S_{conf}$; the blue line further includes $S_{vib}$ and $S_{e}$.\label{f4}}
\end{figure}
In Fig.~\ref{f4}(b) we plot ${\Delta}G$ as a function of temperature for three typical stable systems. At high temperature, entropy will eventually drive the ordered 1144-phase into the 122(s)-phase. We define $T^{*}$ as the temperature where ${\Delta}G=0$; below it, the 1144-phase is stable. Comparing the dashed (including only $S_{conf}$) and solid curves in Fig.~\ref{f4}(b), we find that including $S_{vib}$ and $S_{e}$ modifies the free energy only minimally, and thus conclude that $S_{conf}$ is the main contribution, consistent with previous findings \cite{Song}. As such, ${\Delta}H/S_{conf}$ gives a rough estimate of $T^{*}$. In Sec. 3 of \cite{SI} we provide ${\Delta}G$ calculations for more systems.
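Since $S_{conf}$ dominates, the estimate $T^{*}{\approx}{\Delta}H/S_{conf}$ can be checked by hand. The sketch below assumes two cation sites in the 10-atom tetragonal cell and reproduces the constant $S_{conf}{\approx}0.012$ meV/(atom K) quoted above.

```python
import math

k_B_meV = 8.617333e-2  # Boltzmann constant, meV/K

def s_conf_per_atom(x=0.5, cation_sites=2, atoms_per_cell=10):
    """Ideal-mixing configurational entropy per atom, meV/(atom K)."""
    s_site = -k_B_meV * (x * math.log(x) + (1.0 - x) * math.log(1.0 - x))
    return s_site * cation_sites / atoms_per_cell

s = s_conf_per_atom()   # ~0.012 meV/(atom K), matching the text
T_star = 10.0 / s       # for dH = 10 meV/atom, T* ~ 840 K
print(s, T_star)
```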
\begin{figure*}
\includegraphics[scale=0.43]{figure4.png}
\caption{\label{fig:epsart}(Color online): The formation enthalpy ${\Delta}H$ of (a) \textit{AB}Fe$_4$As$_4$, with $A$, $B$ being alkali metal (AI) and alkaline earth (AII), and (b) \textit{AB}Ru$_4$P$_4$. Each spot represents a particular $A$-$B$ pair. The color scale represents ${\Delta}H$, with red meaning a stable 1144-phase (unit: meV/atom). Roman numerals denote the cation valence groups, e.g., I+II means AI+AII and II+II means AII+AII. Eu, La, etc., denote the specific rare earth (RE) combined with another AI or AII. ${\Delta}H$ of the subgroups (c) I+II and (d) II+III; other subgroups are shown in \cite{SI}. The shapes of the points are consistent with the legend in (b). (e)(f) Contributions to ${\Delta}H$ from ${\Delta}H_a$, ${\Delta}H_c$, and ${\Delta}H_0$. The dashed line represents the magnitude $|{\Delta}H_0|$. Note that ${\Delta}H_0$ is positive in (e), but negative in (f). \label{f5}}
\end{figure*}
\subsection{3B. General rules of phase stability}
The stability of 1144-phases obeys certain rules. As shown previously, descriptors ${\Delta}c$ given by Eq.~\ref{descriptor_c} (or atomic radius mismatch ${\Delta}R$), and ${\Delta}a$ given by Eq.~\ref{descriptor_a}, proved useful \cite{Iyo,Song}. They were first exemplified in Fe-arsenide 1144-systems and understood
with an elastic model \cite{Song}. Next we will find a descriptor associated with the charge balance between cation layers and pnictogen (Pn) layers, which will subtly affect the 1144/122(s) stability.
The formation enthalpies ${\Delta}H$ of $AB$Fe$_4$As$_4$ are plotted in Fig.~\ref{f5}(a), which shows that the stability is enhanced from the lower-left to the upper-right of the graph. Thus, $AB$Fe$_4$As$_4$ supports the established rule \cite{Iyo, Song} that the 1144-phase is stabilized by large $|{\Delta}c|$ and small $|{\Delta}a|$. However, the rule seems inadequate for Ru-phosphides (Fig.~\ref{f5}(b)), as the stable and unstable systems remain unseparated. To elucidate the situation, we divide the data into subgroups by cation valence states, e.g., I+II (Fig.~\ref{f5}(c)) and II+III (Fig.~\ref{f5}(d)); then the rule manifests itself. Evidently, the cation pair affects the stability: group I+II forms stable 1144-phases while II+III does not. Such a distinction relies on the charge balance between the cation layer and the TM-layer. In fact, charge balance is a generic factor, which also affects the 122-phases (Fig.~\ref{f3}(a)-(d)). For instance, mono-valent cations do not favor the 122-phase $A$Ni$_2$P$_2$ (Fig.~\ref{f3}(d)); on the other hand, tri-valent cations never stabilize $A$Fe$_2$As$_2$ \cite{bqs}. Note that the favorable charge balance for Ru-phosphides is alkali metal (AI) + alkaline earth (AII) (or AI+Eu), i.e., an effective valence state of 1.5+, the same as in the Fe-arsenide systems \cite{Song, bqs}, consistent with the anticipation for iso-valent substitution.
At the same time, the size effect still plays a role, as ${\Delta}H$ shows a linear correlation with $({\Delta}c)^2$ within each subgroup (insets of Fig.~\ref{f5}(c) and (d)). This suggests that the charge effect cooperates with the size effect, and we accordingly write the formation enthalpy ${\Delta}H$ as two additive parts accounting for the size and charge effects,
\begin{equation}
{\Delta}H={\Delta}H_{size}+{\Delta}H_{charge}
\label{eq7}
\end{equation}
The size part essentially accounts for the elastic energy and could be approximated as \cite{Song}
\begin{equation}
{\Delta}H_{size}= -{\frac{1}{4}}k_a({\Delta}a)^2+{\frac{1}{2}}k_c({\Delta}c)^2
\label{eq8}
\end{equation}
The term ${\Delta}H_{charge}$ is a functional of the charge distribution ${\rho}(r)$, and its exact form could be exceedingly complex. However, it can be simplified in the 1144 scenario by noticing that there are limited choices of ${\rho}(r)$. Neglecting minor distinctions due to the lattice constants, the charge distributions of, for instance, CaKRu$_4$P$_4$ and SrRbRu$_4$P$_4$ should be similar, both represented by the sketch in Fig.~\ref{f1}(c). Such resemblance helps classify 1144-phosphides into 13 groups, by the criterion that in-group compounds share the same TM and cation valences (Sec. 5 of SI \cite{SI}). For example, BaRbFe$_4$P$_4$ belongs to the same group as CaKFe$_4$P$_4$, whereas BaRbRu$_4$P$_4$ (distinct TM) and LaKFe$_4$P$_4$ (distinct cation valence) do not. Within each group, the ${\rho}(r)$ are considered identical. The limited options for ${\rho}(r)$ reduce the functional ${\Delta}H_{charge}[{\rho}(r)]$ to a function taking 13 distinct values (Table~\ref{tab0}), and within each group we take ${\Delta}H_{charge}$ to be a constant ${\Delta}H_{0}$ (positive meaning favorable for the 1144-phase). In all, three parameters pertain to each group: $k_a$ and $k_c$, characterizing size effects in the $a$-$b$ plane and along the $c$-axis, respectively; and ${\Delta}H_{0}$, characterizing the charge balance effect. These parameters are evaluated by fitting ${\Delta}H$ obtained from first-principles calculations with Eqs.~\ref{eq7} and \ref{eq8}. The results are tabulated in Table~\ref{tab0}.
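The per-group fit of Eqs.~\ref{eq7}-\ref{eq8} is linear in $(k_a, k_c, {\Delta}H_0)$ and amounts to ordinary least squares. The sketch below uses synthetic data generated from the I+II Ru parameters of Table~\ref{tab0} purely to illustrate the procedure; it is not the actual fit of this work.

```python
import numpy as np

def fit_group(da, dc, dH):
    """Least-squares fit of dH = -(1/4) k_a da^2 + (1/2) k_c dc^2 + dH_0.
    Returns (k_a, k_c, dH_0); da, dc in Angstrom, dH in meV/atom."""
    da, dc, dH = map(np.asarray, (da, dc, dH))
    A = np.column_stack([-0.25 * da**2, 0.5 * dc**2, np.ones_like(dH)])
    coeffs, *_ = np.linalg.lstsq(A, dH, rcond=None)
    return coeffs

# Synthetic data built from known parameters (illustration only):
rng = np.random.default_rng(1)
da = -rng.uniform(0.0, 0.1, size=20)   # small in-plane mismatch
dc = rng.uniform(0.2, 1.5, size=20)    # larger c-axis mismatch
dH = -0.25 * 1675 * da**2 + 0.5 * 1.48 * dc**2 + 7.17
k_a, k_c, dH_0 = fit_group(da, dc, dH)
print(k_a, k_c, dH_0)   # recovers ~ (1675, 1.48, 7.17)
```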
\begin{table}
\caption{\label{tab:table1} Force constants $k_a$, $k_c$ and the charge term ${\Delta}H_{0}$ for 13 different subgroups, obtained by fitting Eqs.~\ref{eq7} and \ref{eq8} to DFT results. Units for $k$ and ${\Delta}H_0$ are meV/(atom${\cdot}$\AA{}$^2$) and meV/atom, respectively.\label{tab0}}
\begin{ruledtabular}
\begin{tabular}{c c c c c c c c}
Ru & $k_a$ & $k_c$ & ${\Delta}H_0$ & Co & $k_a$ & $k_c$ & ${\Delta}H_0$ \\
\hline
I+II & 1675 & 1.48 & 7.17 & I+II & 2147 & -0.69 & -6.31 \\
I+III & 3395 & -0.097 & 4.34 & I+III & -1016 & -1.68 & -20.93 \\
II+II & 3816 & -1.78 & 0.39 & -- & & & \\
II+III & -10206 & -3.92 & -5.74 & II+III & -2632 & -4.74 & -1.81 \\
\hline
Fe & $k_a$ & $k_c$ & ${\Delta}H_0$ & Ni & $k_a$ & $k_c$ & ${\Delta}H_0$ \\
\hline
I+II & 4879 & 0.45 & 5.51 & -- & & & \\
I+III & 4370 & -0.24 & 6.34 & -- & & & \\
II+II & 1869 & -1.18 & 0.71 & II+II & 15177 & -1.43 & 0.76 \\
II+III & 393 & -2.44 & -3.86 & II+III &679 & 0.06 & 1.57
\end{tabular}
\end{ruledtabular}
\end{table}
All groups yield $|k_c|{\ll}|k_a|$ due to a common feature of 1144-phases: stronger in-plane bonding and weaker inter-layer bonding. This is understandable because covalent bonding (intra-layer) is usually stronger than ionic interaction (inter-layer). Note that the negative ${\Delta}a$ and positive ${\Delta}c$ adopted in Fig.~\ref{f5} are merely a sign convention, while the signs of $k_a$ and $k_c$ are not subject to choice but are physical, determined from the DFT fitting (insets of Fig.~\ref{f5}(c)(d)). The $k_a$ and $k_c$ have the physical meaning of force constants defined in \cite{Song} and thus should be positive. A negative $k_a$ or $k_c$ indicates unstable structures, as seen in II+II and II+III for Fe and Ru; conversely, when $k_a$ and $k_c$ are both positive (e.g., I+II of Fe, II+III of Ni), stable structures are suggested. Knowing $k_a$ and $k_c$, one can straightforwardly evaluate the contributions from the $a$-$b$ plane, ${\Delta}H_a=-\frac{1}{4}k_a ({\Delta}a)^2$, and from the $c$-axis, ${\Delta}H_c=\frac{1}{2}k_c ({\Delta}c)^2$. Let us take Ru-phosphide 1144 systems as an example (results for Fe-, Co-, and Ni-phosphides are found in the SI \cite{SI}). Despite $|k_c|{\ll}|k_a|$, ${\Delta}H_c$ is substantially larger than ${\Delta}H_a$ (Fig.~\ref{f5}(e) and (f)) because ${\Delta}a{\ll}{\Delta}c$. Thus, ${\Delta}c$ serves as the primary descriptor for Ru-phosphides; this rationalizes why ${\Delta}c$ alone accounts well for the change of ${\Delta}H$ of Ru-phosphides, as shown in the insets of Fig.~\ref{f5}(c) and (d). (Mind that the contribution of ${\Delta}a$ is not always negligible; see Sec. 4 of \cite{SI}.)
As for the charge balance, ${\Delta}H_0$ of \textit{AB}Ru$_4$P$_4$ with I+II cations amounts to 7.17 meV/atom, comparable to the total ${\Delta}H{\sim}10$ meV/atom of 1144-phases that have been experimentally realized \cite{Song}. For II+III, ${\Delta}H_0$ is negative, battling against the positive ${\Delta}H_c$; moreover, $|{\Delta}H_0|$ is overwhelmingly larger than ${\Delta}H_c$. Thus, the instability of II+III is due to unfavorable charge balance rather than size effects. Note that the iso-valent Fe and Ru, which are fitted independently, yield similar ${\Delta}H_0$ in the corresponding subgroups I+II, II+II, etc. This justifies our approximation of neglecting lattice-parameter effects on ${\rho}(r)$. In contrast, the iso-valent replacement of Fe by Ru drastically alters $k_a$ and $k_c$. The distinct responses of the size effects ($k_a$, $k_c$) and the charge effect (${\Delta}H_0$) to TM-replacement are one latent feature of the 1144-phase, and make it beneficial to separate ${\Delta}H$ into the two terms of Eq.~\ref{eq7}. In addition, for the case II+II, 1144-phases share the same charge distribution as 122-phases, so ${\Delta}H$ is contributed solely by size effects and the fit yields ${\Delta}H_0{\sim}0$, as expected.
In short, the entangled contributions to ${\Delta}H$ are separated into size and charge effects, which are quantitatively evaluated. The hierarchy of significance for Ru-phosphides is ${\Delta}H_0{\simeq}{\Delta}H_c{\gg}{\Delta}H_a$ (Fig.~\ref{f5}(e) and (f)). For broader HC, however, ${\Delta}H_0$, ${\Delta}H_c$, and ${\Delta}H_a$ are comparable, and none can be safely neglected. On that account, ${\Delta}a$ and ${\Delta}c$ are incomplete, although they are useful descriptors that provide screening rules for stable 1144-phases; the complexity caused by charge effects still demands quantitative evaluation.
\begin{figure*}
\includegraphics[scale=0.40]{figure5.png}
\caption{\label{fig:epsart} (Color online) (a)-(c) FS in the $k_z$=0 plane (unit for \textbf{k}: $1/a$; orbitals are colored). The main nesting vector is denoted (the nesting vector runs from the origin to the highest peak of ${\chi}(\textbf{q})$). (d)-(f) Non-interacting electronic susceptibility ${\chi}(\textbf{q})$ (arbitrary units, with $\chi(0)$ normalized to unity). (g)(h) FS of CaKFe$_4$As$_4$ and CaFe$_2$As$_2$. The B.Z. is based on the tetragonal cell (Fig.~\ref{f1}(a)), which contains 10 atoms. Dashed lines represent an alternative B.Z. based on the primitive cell of 122-phases (containing 5 atoms) projected onto the $k_z=0$ plane (octagon), where $\bar{X}$ is equivalent to the $M$ point of 1144-phases. \label{f6}}
\end{figure*}
\subsection{3C. Electronic structures}
A fascinating aspect of Fe-based SC is the phase diagram involving various degrees of freedom and symmetries \cite{Xiang, David, DaiP}. Understanding the interplay between spins, orbitals, and lattice is at the heart of unraveling the origin of SC \cite{feld, Chu, Valenti} and other intertwined orders \cite{Kivelson, neumatic}. For instance, it has been suggested that SC is mediated by magnetic fluctuations, which are intensified in a scenario with a two-pocket Fermi surface (FS) \cite{Anderson, Schmalian}. Thus, it is interesting to examine the FS and nesting vectors of 1144-phase Ru-phosphides and compare them with 122-phase or 1144-phase Fe-arsenides.
Generally speaking, the two-pocket features include hole-pockets near ${\Gamma}$ and electron-pockets near the corner $M$ of the B.Z. (Fig.~\ref{f6}). The FS pockets are cylinder-like along $k_z$ owing to the weak band dispersion along the $z$-axis (Sec. 6 of SI \cite{SI}). That is, introducing hetero-cations impacts the FS similarly to reducing the dimensionality, although the periodicity along the $z$-axis remains intact. Thus, the FS can be characterized by the $k_z$=0 plane (Fig.~\ref{f6}), and a two-dimensional simplification of the FS \cite{2Dmodel} seems plausible for the 1144-phase. On the other hand, the FS of LaRu$_2$P$_2$ strays away from the cylinder shape. This system exhibits isotropic superconductivity with $T_c\sim 4.1$ K, which can be understood via phonon mediation, evidenced by a consistent electron-phonon coupling strength and a large density of carriers at the Fermi level \cite{Razzoli, La}.
In 1144-phases, the $d$-electrons of Ru$^{2+}$ occupy the two spin channels equally, exhibiting zero magnetic moment, which suggests a diamagnetic or paramagnetic ground state. In contrast, in Fe-arsenides, Fe carries a fraction of an effective magnetic moment \cite{MeierNat, Anderson, Sasmal}. For instance, KCaFe$_4$As$_4$ bears ${\sim}0.2~{\mu}_B$ and forms long-range spin-vortex ordering after Ni- or Co-doping \cite{MeierNat}. Note that even paramagnetic phases might exhibit AFM magnetic fluctuations \cite{Yuji}, a relic of long-range ordering. Thus, it is unclear whether the absence of long-range order might be changed by doping or other tunings. To shed light on this, we estimate the non-interacting electronic susceptibility \cite{Hubbard, Kiq} (Fig.~\ref{f6}(d)-(f)):
\begin{equation}
\begin{split}
{\chi}(\textbf{q})=&-\frac{1}{V}{\sum}_{\textbf{k}, n, n'}\frac{f_n(\textbf{k}+\textbf{q})-f_{n'}(\textbf{k})}{{\varepsilon}_n(\textbf{k}+\textbf{q})-{\varepsilon}_{n'}(\textbf{k})+i{\delta}}\\
&{\times}{\langle}\textbf{k},n|e^{-i\textbf{q}{\cdot}\textbf{r}}|\textbf{k}+\textbf{q},n'{\rangle}{\langle}\textbf{k}+\textbf{q},n'|e^{i\textbf{q}{\cdot}\textbf{r}}|\textbf{k},n{\rangle}.
\end{split}
\label{eq9}
\end{equation}
Here $f_n$ and ${\varepsilon}_n$ are the Fermi distribution and the $n$th band energy, $V$ is the volume, and $|\textbf{k},n\rangle$ stands for a Bloch state. In our calculation, the transition matrix element ${\langle}\textbf{k}+\textbf{q},n'|e^{i\textbf{q}{\cdot}\textbf{r}}|\textbf{k},n{\rangle}$ is set to 1. Thus, $\chi(\textbf{q})$ is the non-interacting susceptibility within the constant-matrix-element approximation, and can shed light on the actual susceptibility.
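As a minimal illustration of Eq.~\ref{eq9} under this approximation, the sketch below evaluates the real part of the Lindhard function for a single 2D nearest-neighbor tight-binding band. This is a toy model rather than the actual DFT bands, but it reproduces the textbook peak at $(\pi,\pi)$ for the perfectly nested half-filled case.

```python
import numpy as np

def chi_q(qx, qy, t=1.0, mu=0.0, T=0.05, N=64, delta=1e-2):
    """Real part of the constant-matrix-element Lindhard function chi(q)
    per site, for eps(k) = -2t(cos kx + cos ky) on an N x N k-grid."""
    k = 2.0 * np.pi * np.arange(N) / N
    kx, ky = np.meshgrid(k, k, indexing="ij")
    eps = lambda x, y: -2.0 * t * (np.cos(x) + np.cos(y))   # band energy
    fermi = lambda e: 1.0 / (np.exp((e - mu) / T) + 1.0)    # Fermi function
    e_k, e_kq = eps(kx, ky), eps(kx + qx, ky + qy)
    num = fermi(e_kq) - fermi(e_k)
    den = e_kq - e_k + 1j * delta
    return float(np.real(-np.sum(num / den)) / N**2)

# At half filling the band is perfectly nested at Q = (pi, pi),
# so chi(Q) dominates generic wave vectors:
print(chi_q(np.pi, np.pi), chi_q(np.pi / 2, np.pi / 2))
```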
The main nesting vector (black in Fig.~\ref{f6}(a)-(c)) varies with the cation combination. CaCsRu$_4$P$_4$ displays a commensurate vector (0, ${\pi}$) or (${\pi}$, 0), reflecting nesting between pockets at two neighboring $M$ points. LaCsRu$_4$P$_4$ shows a FS similar to that of CaKFe$_4$As$_4$ (Fig.~\ref{f6}(g)) and exhibits the same vector (${\pi}$, ${\pi}$), due to nesting between ${\Gamma}$ and $M$. It has been found that below a certain Pn-height the nesting vector (0, ${\pi}$) changes to (${\pi}$, ${\pi}$) \cite{Nest}. In our case, the (minimum) Pn-height is 1.13 \AA{} for CsCa and 1.11 \AA{} for CsLa, which endorses the tendency found in the 11-phase scenario \cite{Nest}. In EuCsRu$_4$P$_4$, the pocket at ${\Gamma}$ is absent but an additional FS sheet appears, for which ${\chi}(\textbf{q})$ reaches cusps at the incommensurate positions $\frac{2}{3}({\pi}, {\pi})$ (Fig.~\ref{f6}(f)). In comparison, 122-phase Fe-arsenides, such as CaFe$_2$As$_2$ (Fig.~\ref{f6}(h)), exhibit a vector $\frac{1}{2}({\pi}, {\pi})$. Note that two choices of the first B.Z. are commonly used for 122-phases \cite{David}: body-centered tetragonal (two formula units) and primitive; the latter is more convenient for comparison with the 1144-phase. The two choices are compared in Fig.~\ref{f6}(h).
\subsection{3D. Crystal structures}
\begin{figure}
\includegraphics[scale=0.55]{figure6.png}
\caption{\label{fig:epsart} (Color online) (a) The $c$-lattice parameters (calculated) for various 1144-phases (solid) and their parent phases (hollow). Each solid spot lies at the average of the two corresponding hollow spots. The full list of $AB$ (from left to right): KRb, KCs, RbCs, KCa, RbCa, CsCa, KSr, RbSr, CsSr, CaSr, KBa, RbBa, CsBa, CaBa, SrBa, (K, Rb, Cs, Ca, Sr, Ba)Ln (Ln = La, Ce, Pr, Eu). For brevity, only $B$ is listed at section boundaries. (b) The middle-point rule fails for the $a$-lattice parameters, as considerable deviations are observed for CsCa, RbBa, CsBa, etc. (c) Linear fitting of ${\Delta}c$ against ${\Delta}R$. The slopes of different 1144-systems are quite close: \textit{AB}Fe$_4$As$_4$: 4.16, \textit{AB}Ru$_4$P$_4$: 4.90, \textit{AB}Fe$_4$P$_4$: 4.78, \textit{AB}Co$_4$P$_4$: 5.69. The inset shows the fitting with cation combinations grouped, not distinguishing TM; the slopes are 4.91, 5.00, and 4.61 for I+II, I+III, and II+III cations, respectively.\label{f7}}
\end{figure}
The crystal parameters (e.g., pnictogen angles and lattice constants) are interesting to study \cite{Angle, StrPro, PnHa, PnHb, Broholm}. We discuss them from three aspects. The first concerns two general rules about 1144-phases, namely the \textit{middle-point rule} and the $R$-$c$ \textit{rule}. The middle-point rule relates the lattice constants of the 1144-phase to those of its parent phases: the $c$-lattice parameter of the 1144-phase is $c_{1144} = \frac{1}{2}(c_{122}^A + c_{122}^B)$, where $c_{122}^A$ and $c_{122}^B$ are the $c$-lattice parameters of the two 122-phases (Fig.~\ref{f7}(a)). For example, KCaRu$_4$P$_4$ lies at the middle point of KRu$_2$P$_2$ and CaRu$_2$P$_2$. Remarkably, the rule applies generally to 1144-phases of phosphides and arsenides, and of various TM. However, the rule holds only for the $c$-lattice parameter; considerable deviations are observed for the $a$-lattice parameters (Fig.~\ref{f7}(b)). This finding suggests that the complexity in 1144-systems (probably also in general hetero-crystals) can be segregated: the $z$-axis is the easy part, and complexity mainly arises from the $x$-$y$ plane. The other rule, the $R$-$c$ rule, links crystal features with atomic information: the mismatch of the $c$-lattice parameters of two 122-phases, ${\Delta}c=|c_{122}^A-c_{122}^B|$, is proportional to the mismatch of atomic radii ${\Delta}R=|R_A-R_B|$, where $R_A$ and $R_B$ are the atomic radii of the cation elements (metallic radii \cite{radius}). Notably, a linear correlation is satisfactorily obtained by fitting for various TMs (Fig.~\ref{f7}(c)) and cation combinations (inset of Fig.~\ref{f7}(c)); again, such a correlation does not hold for the $a$-lattice parameters.
The linearity can be partially rationalized by the intuitive picture of treating the cation as a ``hard ball". Since each unit cell contains two layers of cations, ${\Delta}c$ should be linear in two times the diameter difference, i.e., $4{\Delta}R$. However, the average slope is around $5{\Delta}R$, somewhat deviating from this simple picture. The $R$-$c$ rule is well verified by existing experimental data \cite{Iyo, Song}, and a comparison is given in Sec. 8 of the SI \cite{SI}. This reconciles the distinct descriptors used in previous studies \cite{Iyo, Song}, i.e., ${\Delta}R$ and ${\Delta}c$ are equivalent.
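The two rules amount to simple arithmetic once the parent parameters and cation radii are known; a sketch with hypothetical inputs (not measured values):

```python
def c_midpoint(c_122_A, c_122_B):
    """Middle-point rule: c_1144 = (c_122^A + c_122^B) / 2."""
    return 0.5 * (c_122_A + c_122_B)

def dc_from_radii(R_A, R_B, slope=5.0):
    """R-c rule: dc ~ slope * |R_A - R_B|, with slope ~ 4-5 from the fits."""
    return slope * abs(R_A - R_B)

# Hypothetical parent c-parameters and metallic radii (Angstrom):
print(c_midpoint(12.6, 11.7))     # ~12.15
print(dc_from_radii(2.27, 1.97))  # ~1.5
```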
The second aspect concerns the strong asymmetry of the structural parameters. Since the 1144-phase is a stacking of two 122-phases, one may wonder whether the TM-plane separation (inter-layer distance) in the 1144-phase remains the same as in the 122-phases. We define ${\Delta}d=d_{TM}^{1144}-d_{TM}^{122}$ to describe the displacement direction and amplitude (Sec. 7c of SI \cite{SI}). For Ru-phosphides, the TM-planes tend to move toward the higher-valence cation and away from the lower-valence one. For instance, in CaKRu$_4$P$_4$, the separation between the TM-layer and Ca$^{2+}$ shrinks, while that between the TM-layer and K$^+$ increases. On the contrary, for Co the TM-plane displaces toward the lower-valence side, and Ni- and Fe-1144-phases display both tendencies \cite{SI}. Whichever direction it moves, the displacement magnitude $|{\Delta}d/d_{122}|{\sim}5\%$-$10\%$ (Sec. 7 of \cite{SI}) is considerably larger than the structural change achievable by applying pressure. Besides, the pnictogen (Pn) heights and Pn-TM-Pn bonding angles also undergo large changes. Such large changes do not occur in the solid-solution phase, where each cation layer carries an equal amount of charge; in 1144-phases, by contrast, $A$ and $B$ reside in separate layers, which strongly breaks the glide symmetry and affects superconductivity \cite{Angle, PnHa, PnHb}.
The third aspect regards derivative phases, known as the half-collapsed tetragonal phase (h-CT). In general, derivative phases can arise from particular architectures; for example, molecular folding emerges from long-chain architectures. h-CT refers to an abrupt collapse of the lattice constant $c$ at a certain external pressure, first observed in KCaFe$_4$As$_4$ \cite{Collapse}. The collapse happens in two steps: the first step occurs in the Ca-layers, the second in the K-layers \cite{TwoTr}; h-CT thus refers to a state where one layer has collapsed and the other has not. h-CT is peculiar to hetero-crystals, as it is determined by two general features: layer-building and the distinct compressive capacities of the hetero-cation layers \cite{Xiang, Borisov, TwoTr}. h-CT is a consequence of $A$-$B$ symmetry breaking, and it also grants a method of tuning the asymmetry, which leads to, for instance, suppression of SC \cite{LoseSC}. Forming the collapsed phase is related to the energy valley of a double-minimum \cite{Pbond}. In the language of bonding, it requires the Pn-Pn bonding to be strengthened and the Pn-TM bonding to be weakened. Therefore, there exists a critical distance $d_P$ (defined in Fig.~\ref{f1}(a)), below which the collapsed phase forms. The critical $d_P$ is found to be 2.8$\sim$3.0 \AA{} for Fe-arsenide 1144-phases, which is achievable with a moderate external pressure \cite{Borisov}. Noteworthily, Ru-phosphide 1144-systems exhibit smaller $d_P$ and thereby may form the h-CT with no or very little pressure, more easily than the Fe-arsenides. The critical $d_P$ for phosphides is about 2.5 \AA{} (Sec. 7e of \cite{SI}).
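The collapse criterion is a simple threshold test on the Pn-Pn distance; a minimal sketch of ours, using the numerical windows quoted above:

```python
def is_collapsed(d_P, d_crit):
    """Collapsed phase forms once the Pn-Pn distance d_P (Angstrom)
    drops below the critical distance d_crit."""
    return d_P < d_crit

D_CRIT_FE_AS = (2.8, 3.0)   # critical window for Fe-arsenide 1144-phases
D_CRIT_P = 2.5              # phosphides (Sec. 7e of SI)

# A phosphide at d_P = 2.6 A sits above its threshold (no collapse),
# while the same d_P is already below the Fe-arsenide window (collapse).
```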
\section{4. Experimental results}
In this section we present our design and discovery of CaKRu$_4$P$_4$, a new example of a 1144-type compound and a proof of principle of our ability to identify and create new complex phases. CaKRu$_4$P$_4$ was chosen as a target phase based on the simultaneously large difference in the $c$-lattice parameter values and small difference in the $a$-lattice parameter values exhibited by the KRu$_2$P$_2$ and CaRu$_2$P$_2$ ternary compounds. In addition, from a technical point of view, we had already mastered the simultaneous use of K and Ca as part of our efforts to grow CaKFe$_4$As$_4$. \cite{MeierPRM} We first synthesized KRu$_2$P$_2$ and CaRu$_2$P$_2$ as precursor compounds and then combined them to create the quaternary CaKRu$_4$P$_4$. We want to emphasize that this synthesis truly was a designed one: the compound was chosen based on our understanding of the requirements needed to stabilize the 1144 structure, and this design was then implemented in our synthesis.
\subsection{4A. Synthesis procedure}
Polycrystalline CaKRu$_{4}$P$_{4}$ samples were grown by solid-state reaction. CaRu$_{2}$P$_{2}$ and KRu$_{2}$P$_{2}$ polycrystalline samples were synthesized first as precursors. They were prepared by putting lumps of potassium metal (Alfa Aesar 99.95\% metals basis) or calcium metal pieces (Ames Laboratory, Materials Preparation Center (MPC) 99.9\%), ruthenium powder (Alfa Aesar 99.95\% metals basis) and lumps of red phosphorus (Alfa Aesar 99.999\% metals basis) into an alumina crucible that was arc-welded into a Ta tube and finally sealed into an amorphous silica ampoule. Because of the high vapor pressure of phosphorus, the furnace temperature was ramped slowly, dwelling at 230\textcelsius\ and 500\textcelsius\ for 1 and 10 hours respectively, before being increased to 900\textcelsius\ and reacting for 96 hours. After cooling down to room temperature, the initial reaction product was ground into powder and pressed into a pellet with a tungsten carbide die in an argon-filled glove-box. The pressed pellet was sealed in the manner described above and reacted at 900\textcelsius\ for another 96 hours. This process was repeated until phase-pure KRu$_{2}$P$_{2}$ polycrystalline samples were synthesized. A similar process was repeated several times for CaRu$_{2}$P$_{2}$ until the phase was almost pure. Finally, a pellet was pressed from a 1:1 molar ratio of CaRu$_{2}$P$_{2}$ and KRu$_{2}$P$_{2}$. This pellet was heated to 850\textcelsius\ and kept there for 95 hours, after which the pellet was cooled to room temperature, ground, pressed and resealed. This procedure was repeated until the CaKRu$_{4}$P$_{4}$ phase (88\% purity) was formed. Higher purity is thought to be possible with proper adjustment of the precursor ratio before reaction.
\subsection{4B. Structural analysis}
Powder x-ray diffraction measurements were carried out on finely ground CaRu$_{2}$P$_{2}$ and KRu$_{2}$P$_{2}$ using a Rigaku MiniFlex II powder diffractometer with Cu K$\alpha$ radiation ($\lambda$ = 1.5406 \AA{}), with refinements done by GSAS.{\cite{1}} The PXRD data of CaKRu$_{4}$P$_{4}$ were recorded at room temperature on a PANalytical X-Pert Pro Diffraction System with Co K$\alpha$ radiation ($\lambda$ = 1.78897 \AA{}). The powdered sample was evenly dispersed on a zero-background Si-holder with the aid of a small quantity of vacuum grease. Intensities were collected for the 2$\theta$ range from 15$\degree$ to 110$\degree$ in steps of 0.02$\degree$ with a dwell time of 300 s per step. The FullProf Suite program package was used for Rietveld refinement of the crystal structures {\cite{2}}.
\begin{table}
\caption{X-ray data refinement including space group, formula units/cell and refined lattice parameters for
CaRu$_2$P$_2$ and KRu$_2$P$_2$. \label{table0}}
\begin{ruledtabular}
\begin{tabular}{c c c c c c}
CaRu$_2$P$_2$ & \textit{I4/mmm} & $a$: 4.044(4) & $c$: 9.834(1) & & \\
\hline
Atom(Mult) & $x$ & $y$ & $z$ & Uiso & Occ. \\
\hline
Ca (2) & 0 & 0 & 0 & 0.00361 & 1.0 \\
Ru (4) & 0 & 1/2 & 1/4 & 0.00357 & 1.0 \\
P (4) & 0 & 0 & 0.368(7) & 0.00300 & 1.0 \\
\hline
KRu$_2$P$_2$ & \textit{I4/mmm} & $a$: 4.016(7) & $c$: 12.356(9) & & \\
\hline
Atom(Mult) & $x$ & $y$ & $z$ & Uiso & Occ. \\
\hline
K (2) & 0 & 0 & 0 & 0.00869 & 1.0 \\
Ru (4) & 0 & 1/2 & 1/4 & 0.00160 & 1.0 \\
P (4) & 0 & 0 & 0.340(5) & 0.00437 & 1.0 \\
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{table}
\caption{X-ray data collection, refinement and residual parameters for the experimental sample, including space group (SG), formula units/cell (\textit{Z}) and refined lattice parameters for each of the constituent phases. \label{table1}}
\begin{ruledtabular}
\begin{tabular}{c c c c}
& CaKRu$_4$P$_4$ & RuP & CaRu$_2$P$_2$ \\
\hline
SG, \textit{Z} & \textit{P4/mmm}(123), 1 & \textit{Pnma}(62), 4 & \textit{I4/mmm}(139),2 \\
$a$(${\AA}$) & 4.048(1) & 5.524(1) & 4.050(1) \\
$b$(${\AA}$) & & 3.158(1) & \\
$c$(${\AA}$) & 10.913(1) & 6.128(1) & 9.756(1) \\
$V$(${\AA}^3$) & 178.85(3) & 106.95(2) & 160.34(3) \\
Radiation & & Co $K_{\alpha}$=1.78901 & \\
2${\theta}$ & & 15-110$^{\circ}$ & \\
No. of data & & 5679 & \\
Uniq. data & 152 & 112 & 70 \\
No. of Var. & & 31 & \\
residuals & $R_B$=0.12 & $R_B$=0.22 & $R_B$=0.17
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{table}
\caption{Atomic coordinates, isotropic displacement parameters (\AA{}$^{2}$), and site occupancies for the main CaKRu$_4$P$_4$ phase.\label{table2}}
\begin{ruledtabular}
\begin{tabular}{c c c c c c c c}
atom & position & Symm. & $x$ & $y$ & $z$ & Biso & Occ. \\
\hline
Ca & 1\textit{a} & 4/\textit{mmm} & 0 & 0 & 0 & 0.83(1) & 1.0 \\
K & 1\textit{d} & 4/\textit{mmm} & 1/2 & 1/2 & 1/2 & 1.25(1) & 1.0 \\
Ru & 4\textit{i} & 2\textit{mm} & 0 & 1/2 & 0.217(1) & 1.22(1) & 1.0 \\
P1 & 2\textit{g} & 4\textit{mm} & 0 & 0 & 0.340(1) & 1.92(1) & 1.0 \\
P2 & 2\textit{h} & 4\textit{mm} & 1/2 & 1/2 & 0.107(1) & 0.28(1) & 1.0
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{figure}
\includegraphics[width=8.0cm]{figure7}
\caption{(a),(b) Powder XRD patterns and refinements of CaRu$_2$P$_2$ and KRu$_2$P$_2$. (c) PXRD and Rietveld refinement results for CaKRu$_4$P$_4$. The (003), (005) and (007) peaks are indicated by the red arrows. The observed profile is indicated by red circles and the calculated profile by the solid line. Bragg peak positions are indicated by vertical ticks, and the difference diffractogram is shown in blue. \label{figure8}}
\end{figure}
The powder XRD patterns and Rietveld refinements for KRu$_{2}$P$_{2}$, CaRu$_{2}$P$_{2}$ and CaKRu$_{4}$P$_{4}$ are shown in Fig.~\ref{figure8}. X-ray refinement results for CaRu$_2$P$_2$ and KRu$_2$P$_2$ are given in Table \ref{table0}. The small difference in $a$-axis lattice parameters (4.044 versus 4.016 \AA{}) and the large difference in $c$-lattice parameters (9.834 versus 12.356 \AA{}) for CaRu$_2$P$_2$ and KRu$_2$P$_2$, respectively, are clearly seen. For the CaKRu$_4$P$_4$ sample, the X-ray data collection specifications and refined lattice parameters, along with residual factors for all three phases, are given in Table \ref{table1}, whereas the refined coordinates and isotropic displacement factors for the main phase are presented in Table \ref{table2}. The clear appearance of peaks with $h+k+l$ odd in Fig.~\ref{figure8}(c) is the qualitative indication that the CaKRu$_4$P$_4$ phase has formed \cite{Iyo}.
For the CaKRu$_{4}$P$_{4}$ refinement, first the zero-point shift and scale factor were refined. Next the lattice constants were refined, followed by the background parameters, which involved a gradual increase of the order of the polynomial used in the fit up to sixth order. Subsequently, the positional parameters were refined, beginning with the ruthenium atoms and followed by all phosphorus atoms. This was followed by refinement of the isotropic thermal parameters for all atoms at fixed 100\% occupancies. The occupancy parameters were then released and refined together with the thermal parameters. This step confirms the 1:1:4:4 stoichiometry of the phase, as well as the Ca/K segregation to their separate unique sites, since minimal fluctuation ($<$1\%) of the occupancies is revealed at virtually the same thermal parameters. Afterwards, profile-shape parameters were included with a peak-asymmetry correction for 2$\theta<40\degree$. The final refinements, with 31 parameters (including preferential alignment) and 5,679 data points, were then performed. Phase analysis of the data in Fig.~\ref{figure8}(c) gives an estimated $\sim$88\% CaKRu$_{4}$P$_{4}$ with $\sim$10\% CaRu$_{2}$P$_{2}$ and $\sim$2\% RuP after the final reaction. The existence of these other phases may be due to an imbalance in the molar amounts of CaRu$_2$P$_2$ and KRu$_2$P$_2$ during the reaction.
\section{5. Discussion}
Here we discuss hetero-crystals beyond 1144-phases. The so-called 12442-phase (e.g., K(Cs)Ca$_2$Fe$_4$As$_4$F$_2$, \cite{C12442, CaoNew} Pr$_4$Fe$_2$As$_2$TeO$_4$ \cite{C42214}) is obtained by stacking two 1111-phases on either side of one 122-phase (Fig.~\ref{f9}(a)). Since Ln-O or Ca-F layers effectively serve as 1+ cations, it can be considered as an alternating stacking of elemental cations Ca$^{2+}$ and composite cations (Ln-O)$^{+}$ or (Ca-F)$^{+}$; the TM-layer is (Fe-As)$^{1.5-}$. Notice that the 12442-phase has a second stacking type (Fig.~\ref{f9}(b)), with Fe-As and La-O interchanging their roles. We may still think of it as 1111+122, but an alternative viewpoint is the Fe-As layer being sandwiched by two dislocated layers of 122-phases. The latter viewpoint is more insightful because the Fe-As layer is the playground for SC and magnetism, while Ca or La-O merely serve as electron donors. A particular stacking of the two can be achieved by selecting proper precursors \cite{C12442, C42214}. The 12442-phase implies that the cation can be a composite one, like (Ln-O)$^{+}$, expanding the pool of cations as well as allowing an extra tuning parameter: the stacking type (the distinction between Fig.~\ref{f9}(a) and (b)). Other examples include the 22241-phase (Fig.~\ref{f9}(c)) \cite{C22241}, which contains an alternating stacking of 1221-phases and 122-phases. Interestingly, it builds in oxide motifs, which are another big family of parent compounds. Recently, 112-phase nickelate SCs attracted much attention \cite{NiX, NiL}; they belong to the same structural family as the cuprate SCs \cite{Cufmly} and correspond to the limiting case $n\rightarrow{\infty}$ of the series of compounds La$_{n+1}$Ni$_n$O$_{2n+1}$ \cite{NiOfmly}. In 112-phases, RE or IIA cation atoms are sandwiched by NiO$_2$ layers. We have seen many hetero-crystals in Fe-SC, and it is interesting to examine Ni- or Cu-based SCs as well.
\begin{figure}
\includegraphics[scale=0.42]{figure8.png}
\caption{(Color online) Examples of constructing hetero-crystals. (a),(b) Two types of 12442-phase. Differences: type A has 4 layers of Fe-As and 2 layers of composite cation layers (Ca-F)$^{+}$ per unit cell; type B has 2 layers of Fe-As and 4 layers of cation layers (Pr-O)$^{+}$. (c) 22241-phase.\label{f9}}
\end{figure}
Here, we revisit the hetero-crystal's definition and compare it with some similar-looking compounds. (1) A layered structure ((ii) in Sec. 2) is an indispensable defining feature, i.e., the bonding along the $z$-axis should be weaker. On top of that, quasi-2D FSs can form, and nesting and unconventional SC are established. On the other hand, structures such as pyrochlores or fluorite \cite{PFtr} are 3D corner-sharing networks featuring very different properties. Since they lack ``layers'', the notions developed here, such as charge balance between layers, layer-stacking distortions, and the corresponding descriptors, may not apply. Thus, even with similar looks ($A$-$B$ ordering), they are of a different type. (2) Asymmetry is another crucial characteristic ((iii) in Sec. 2). This does not exclude situations such as, for instance, $AABAAB$-stacking, because the double-layer $AA$ can be viewed as a single combined layer and asymmetry still occurs at the interface between $AA$ and $B$. On the same account, it is acceptable to have a certain amount of stacking faults or disorder (e.g., the $A$-layer contains 10\% $B$), as long as the imperfections do not eliminate all the asymmetry. Conceptually, a hetero-crystal is \emph{not} equated with being disorder-free; in fact, disorder can be introduced on purpose, as seen in Ni- or Co-doped CaKFe$_4$As$_4$ \cite{NoNeum}. In that case, disorder enters through the TM-layers rather than the cation layers, and the cation polarization is maintained. The asymmetry can be characterized by defining $\varepsilon=(d_{TM}(A)-d_{TM}(B))/{\Delta}d_0$ or ${\varepsilon}=({\alpha}(A)-{\alpha}(B))/{\Delta}{\alpha}_0$, where ${\Delta}d_0$ and ${\Delta}{\alpha}_0$ are normalization factors. The definition then refers to a finite regime of $\varepsilon$, rather than to the single perfect point ${\varepsilon}=1$. (3) A hetero-crystal hybridizes two \textit{stable} phases/motifs ((iv) in Sec. 2). However, compounds like NaYbO$_2$ \cite{NaYbO2} cannot be regarded as NaO+YbO since each individual phase is unstable.
Thus, NaYbO$_2$ is one phase with $A$-$B$ ordered cations, rather than an $A$-$B$ ordered hybrid of two phases. Note that in the context of, for instance, the pyrochlore-fluorite transition, the initial phase is the ``parent'' and the resultant phase is the ``derivative''. Here, ``parent phases'' has a different meaning: the phases must come as a pair and then be hybridized. Without the \textit{two} parent phases, a pseudo-binary system is not guaranteed, and the entropy formula, Eq.~\ref{eq5}, becomes groundless.
\begin{figure}
\includegraphics[scale=0.42]{figure9.png}
\caption{(Color online) (a) Traditional phase diagram with $N=3$ (triangle) and $N=4$ (tetrahedron). The complexity of the phase diagram increases quickly with the number of ingredients $N$. Stable phases are only sparsely distributed, leaving phase separation in most regions. \cite{Gaskell} (b) Formation enthalpy of the 1144-phases $AB$Ru$_4$P$_4$, $AB$Fe$_4$P$_4$, $AB$Fe$_4$As$_4$ (color scale, meV/atom) vs collective descriptors as coordinates, with which the phase diagram can include compounds with flexible compositions and concentrations. It shows a clear separation between the stable (red) and the unstable (blue) and is a generalization of the phase diagram in \cite{Iyo}.\label{f10}}
\end{figure}
Hetero-crystals allude to a fundamental view of exploring complex alloys, as reflected by a new type of phase diagram (Fig.~\ref{f10}). A traditional phase diagram is coordinated by information about individual atoms (e.g., elemental species and concentrations) and becomes cumbersome when $N>3$ (Fig.~\ref{f10}(a)). We introduce a different set of coordinates: ${\Delta}c$ (or ${\Delta}a$) defined by Eqs.~\ref{descriptor_a}, \ref{descriptor_c}, and a tuple ${\Omega}=(n_A+n_B, |n_A-n_B|)$, where $n_A$, $n_B$ are the valence electron numbers of cations $A$ and $B$. For example, KCaRu$_4$P$_4$ has $n_A=1$ (K), $n_B=2$ (Ca), thus ${\Omega}=(3,1)$. These coordinates rely on the whole collection of involved atoms, rather than on each individual one, and are thus referred to as ``collective". This is akin to using transformed coordinates $q$ to describe collective modes; meanwhile, the information remains unchanged, since ${\Delta}c$, ${\Delta}a$, ${\Omega}$ ultimately derive from the individual information.
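The charge descriptor ${\Omega}$ is straightforward to compute; a minimal sketch of ours, where the valence electron counts assigned to each cation are our assumption, matching the usual oxidation states:

```python
# Assumed valence electron counts of the cations (usual oxidation states).
VALENCE = {"K": 1, "Rb": 1, "Cs": 1, "Ca": 2, "Eu": 2, "La": 3}

def omega(A, B):
    """Collective charge descriptor Omega = (n_A + n_B, |n_A - n_B|)."""
    n_A, n_B = VALENCE[A], VALENCE[B]
    return (n_A + n_B, abs(n_A - n_B))

pair = omega("K", "Ca")   # (3, 1), the example from the text
```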
\begin{table}
\caption{\label{tab:table2} Formation enthalpies for stable 1144-systems. Units for ${\Delta}a$ (${\Delta}c$) and ${\Delta}H$ are ${\AA}$ and meV/atom, respectively.\label{table4}}
\begin{ruledtabular}
\begin{tabular}{c c c c c c c c}
Ru-P & $|{\Delta}a|$ & $|{\Delta}c|$ & ${\Delta}H$ & Fe-P & $|{\Delta}a|$ & $|{\Delta}c|$ & ${\Delta}H$ \\
\hline
KCa & 0.026 & 2.623 & 14.98 & KEu & 0.009 & 1.806 & 8.14 \\
RbCa & 0.022 & 3.328 & 15.44 & RbEu & 0.007 & 2.380 & 8.68 \\
CsCa & 0.033 & 4.057 & 18.04 & CsEu & 0.009 & 3.130 & 8.87 \\
KLa & 0.009 & 1.721 & 10.04 & KLa & 0.030 & 1.966 & 7.58 \\
RbLa & 0.005 & 2.426 & 10.19 & RbLa & 0.028 & 2.540 & 8.50 \\
CsLa & 0.016 & 3.155 & 11.79 & CsLa & 0.012 & 3.290 & 9.40 \\
KEu & 0.008 & 1.611 & 12.90 & & & & \\
RbEu & 0.004 & 2.316 & 14.19 & & & & \\
CsEu & 0.015 & 3.045 & 15.17 & & & &
\end{tabular}
\end{ruledtabular}
\end{table}
As such, one obtains a new phase diagram (Fig.~\ref{f10}(b)) of reduced dimensions, which can embody entries with flexible chemical ingredients; for example, Fig.~\ref{f10}(b) includes Fe-arsenides as well as Ru- and Fe-phosphides. The descriptors ${\Delta}c$, ${\Omega}$ do not depend on the number of constituents, so the complexity does not rely on the compounds being ternary, quaternary or beyond. In addition, the separation between stable and unstable structures is sharp, which reflects the fact that ${\Delta}c$ and ${\Omega}$ capture two universal factors for hetero-crystals: size and charge effects. That is, the $x$-axis ${\Delta}c$ governs ${\Delta}H_{size}$ and the $y$-axis ${\Omega}$ governs ${\Delta}H_{charge}$.
The traditional phase diagram is an efficient apparatus for SS, which forms when entropy dictates. The hetero-crystal is an abnormality emerging out of the SS region, and thus invokes new principles and methodology. The new paradigm changes the viewpoint on composite materials, which used to be based on combining multiple elementary atoms and tagged by the number of species, such as binary or ternary. In contrast, a hetero-crystal is always viewed as a ``binary" system that combines two building units; but these units can be composite objects. In other words, for complex alloys, it is beneficial (and also necessary) to introduce intermediate structural units in between the elementary level and the final crystal, just as we consider biological structures as aggregations of protein molecules rather than of elementary atoms. Parent phases or motifs play the role of the building units. Accordingly, the attributes must be modified: ${\Delta}c$, ${\Delta}a$ are derived from parent phases and are known as collective descriptors. The strategy for synthesis is also modified. A common procedure to discover new compounds is to choose a structure prototype and substitute a certain atom with another of similar individual properties, like radius or electronegativity. Our approach shifts the focus to studying the matching (or mismatch) of parent phases and motifs, which might lead to numerous hidden states of matter.
Finally, we discuss the experimental realization. The compound La$A$Fe$_4$As$_4$ ($A$=K, Rb, Cs) is predicted to be stable \cite{bqs}, but its synthesis is still lacking. This is probably because the parent phase LaFe$_2$As$_2$ is meta-stable, a significant difference from the phosphides examined in the present work. However, LaFe$_2$As$_2$ was recently synthesized \cite{LaFeAs}. Using LaFe$_2$As$_2$ as a precursor renews the hope. This ``failure" example also highlights the importance of stable parent phases (condition (iv) in Sec. 2).
\section{6. Summary}
In summary, we introduce a class of $A$-$B$ hetero-layer intermetallic crystals and address the following specific questions. What are they? Do they exist broadly? What are their main features? What influences or insights will they bring?
The idea is inspired by a simple intuition: turn the random substitution in metallic alloys into an ordered $A$-$B$ stacking; in other words, replace the layers in semiconductor hetero-structures by metallic ingredients, such as TM-layers and cations. To make the idea precise, four defining conditions are given (Sec. 2): (i) bulk materials, (ii) cation layers plus TM-layers, (iii) $A$-$B$ stacking and reduced symmetry, (iv) formed from two parent phases.
Hetero-crystals may exist broadly, as seen in Figs.~\ref{f5}, \ref{f10}. By computational searches in phosphides of the 1144-phase (TM=Fe, Ru, Co, Ni), we found a series of stable structures at ambient temperature and pressure, particularly for Fe- and Ru-phosphides. The most promising ones are listed in Table~\ref{table4} for a quick survey. Our prediction is supported by the synthesis of high-purity KCaRu$_4$P$_4$. The underlying mechanism is the competition between enthalpy and configurational entropy. The enthalpy arises from size effects and charge balance, which can be characterized by two descriptors, ${\Delta}c$ and ${\Omega}$ (Fig.~\ref{f10}).
Regarding crystal features, the TM-layers commonly have strong distortions (5\%${\sim}$10\%) compared with the parent phases. Besides, we suggest two universal latent rules: the middle-point rule $c_{1144} = \dfrac{1}{2}(c_{122}^A + c_{122}^B)$, and the $R$-$c$ rule ${\Delta}c{\simeq}5.0{\cdot}{\Delta}R$. Regarding electronic features, these compounds commonly exhibit Fermi pockets and weakened dispersion along the $z$-axis (Sec. 3C). Because they host $d$- and $f$-orbitals, they are often compounds of interest for SC, magnetism, heavy fermions, etc.
These compounds upgrade our viewpoint on alloys, as they are always viewed as ``binary" systems that combine two parent phases; that is, the parent phase plays the role of a structural motif that bridges the atom and the crystal. This brings renewed interest to many known phases that can serve as parent phases. Such insights also bring a new phase diagram that is coordinated by the so-called collective descriptors originating from the parent phases (Fig.~\ref{f10}).
\input acknowledgement.tex
1404.3739
\section{Introduction}
There has been stimulating progress in observational inflationary cosmology.
The BICEP2 experiment \cite{Ade:2014xna} has announced the observation of the ratio of tensor to scalar perturbations
of the metric to be
\begin{eqnarray}
r = 0.2^{+0.07}_{-0.05}
\end{eqnarray}
which has triggered a discussion \cite{Kehagias:2014wza,Ibanez:2014zsa,Dvali:2014ssa,Lyth:2014yya,
Kaloper:2008fb,Kaloper:2014zba} of the implications of these findings on
theoretical models for physics beyond the electroweak scale.
The simplest model which is favored by the combined PLANCK and BICEP2 \cite{Ade:2014xna,Ade:2013uln} data is the quadratic model
of chaotic inflation \cite{Linde:1983gd}, with potential
\begin{eqnarray}
\label{chaotic}
{\cal V} = \frac{1}{2} m^2 \phi^2
\end{eqnarray}
which for $50$
e-foldings of inflation gives rise to a ratio of tensor to scalar perturbation of $r \approx 0.16$.
The new experimental results and the fact that inflation is a high energy process gives us a new
possibility to study models of physics beyond the Standard Model. In particular, any serious candidate
for new physics will have to be able to reproduce these results.
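The quoted $r \approx 0.16$ follows from the standard leading-order slow-roll formulas for the quadratic potential; a quick numerical check (our sketch, in units $M_P = 1$):

```python
# Slow-roll predictions for V = m^2 phi^2 / 2 (units M_P = 1):
# epsilon = 2/phi^2 and N = phi^2/4, hence r = 16*epsilon = 8/N,
# and n_s = 1 - 6*epsilon + 2*eta = 1 - 2/N (since eta = epsilon here).

def quadratic_slow_roll(N):
    phi_sq = 4.0 * N            # field value squared, N e-folds before the end
    eps = 2.0 / phi_sq          # first slow-roll parameter
    r = 16.0 * eps              # tensor-to-scalar ratio
    n_s = 1.0 - 2.0 / N         # scalar spectral index
    return r, n_s

r, n_s = quadratic_slow_roll(50)   # r = 0.16, n_s = 0.96
```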
In this work we will focus on supersymmetric theories.
It is then important to embed the chaotic model of inflation (\ref{chaotic}) into
the theory of supergravity \cite{Lyth:1998xn,Linde:2007fr}.
An embedding into old-minimal supergravity was carried out successfully in Ref. \cite{Kawasaki:2000yn},
by employing chiral multiplets and was further discussed in Ref. \cite{Demozzi:2010aj}.
General chaotic inflation models, which are easier to embed into phenomenological models, were also discussed in Refs. \cite{Kallosh:2010ug,Kallosh:2010xz}.
An interesting attempt to give a geometric origin to a chaotic inflationary phase in supergravity
was initiated recently in Ref. \cite{Ferrara:2014ima},
again involving two chiral multiplets in the dual picture,
and further discussed in Ref. \cite{KLr,El,jap}.
Another interesting possibility is to study models with massive vector multiplets,
first coupled to the old-minimal supergravity in Ref. \cite{Mukhi:1979wc,VanProeyen:1979ks},
where also the quadratic potential was initially discussed \cite{VanProeyen:1979ks}.
The relation of massive vector multiplets to new-minimal higher derivative supergravity was
first pointed out in Ref. \cite{Cecotti:1987qe}.
In a series of papers \cite{Farakos:2013cqa,Ferrara:2013rsa,Ferrara:2013kca}
single-field inflationary models utilizing
massive vector multiplets and possible higher derivative corrections were systematically studied,
and a single-field chaotic model was introduced in Ref. \cite{Ferrara:2013rsa}.
Moreover, the gauged isometries of the minimal supergravity models of inflation
were investigated in Ref. \cite{Ferrara:2013eqa,Ferrara:2014rya}.
Quadratic chaotic inflationary models where the D-term dominates over the F-term
were studied in Ref. \cite{Kadota:2007nc,Kadota:2008pm}.
Finally, a different perspective on chaotic inflation from D-terms
in supergravity may be found in Ref. \cite{Dalianis:2014sqa}.
In general,
the potential (\ref{chaotic}) is not straightforward to reproduce.
The usual issues one encounters are
\begin{itemize}
\item Identify the one and only scalar which drives inflation.
\item Stabilize the other scalars, and explain why they do not ruin inflation.
\item Higher order corrections may spoil inflation; the notorious $\eta$-problem.
\end{itemize}
In this work we investigate the possibility of embedding the quadratic model of inflation in supergravity,
with the use of massive vector multiplets and we will see how the aforementioned issues are addressed.
In particular, a generic property of inflationary models utilizing massive vector multiplets
is that there is no need to stabilize any additional fields nor identify the inflaton;
these are by construction single-field models, and thus the first two aforementioned issues are automatically solved.
To address the issue of higher order corrections we will rely on the existence of a softly broken shift symmetry
and invoke technical naturalness in the sense of 't Hooft \cite{'tHooft:1979bh}.
\section{Chaotic inflation in new-minimal supergravity}
Let us start with a real linear multiplet,
and couple it to the new-minimal supergravity \cite{Sohnius:1981tp,Sohnius:1982fw,Ferrara:1988qxa,Ovrut:1988fd}.
The definition of the real linear superfield in this framework is
\begin{eqnarray}
\label{new}
\nabla^2 L = \bar \nabla^2 L = 0 .
\end{eqnarray}
The definitions of the bosonic components are
\begin{eqnarray}
L | = \phi \ \ , \ \ -\frac{1}{2} [\nabla_\alpha , \bar \nabla_{\dot \alpha} ] L | = h_{\alpha \dot \alpha}
\end{eqnarray}
with
\begin{eqnarray}
h_m = - \frac{1}{2} \epsilon_{mnrs} \partial^n b^{rs} - 2 \phi H_m
\end{eqnarray}
where $b_{mn}$ is the two form of the real linear multiplet and $H_m$
is an auxiliary field of the new-minimal supergravity formulation which we will review later.
It is easy to verify that the minimal kinetic term
\begin{eqnarray}
\label{kin}
- \int d^4 \theta E L^2 = - \frac{1}{2} e \, \partial_m \phi \, \partial^m \phi + \frac{1}{2} e \, h_m h^m
\end{eqnarray}
allows a shift symmetry for the superfield
\begin{eqnarray}
\label{shift}
L \rightarrow L + c M_P
\end{eqnarray}
for some real constant $c$, which translates into
\begin{eqnarray}
\label{shiftphi}
\phi \rightarrow \phi + c M_P
\end{eqnarray}
for the real scalar lowest component.
We would like to stress that the shift (\ref{shift}) is also a symmetry of (\ref{new}) and thus
does not violate the definition of the real linear multiplet.
The shift symmetry (\ref{shift}) protects the minimal kinetic term from higher order corrections
and will later shield the model against the $\eta$-problem.
For example the possible higher order correction
\begin{eqnarray}
\nonumber
\label{L4}
-\frac{1}{(\text{some scale})^4}\int d^4 \theta E L^4 \rightarrow \text{violates symmetry}
\end{eqnarray}
is ruled out since the quartic term ($L^4$) violates the shift symmetry.
Of course, this exact shift symmetry (\ref{shift}) rules
out the possibility of introducing a potential for the theory.
Here is where the Green-Schwarz term comes in.
It is well established that a gauge anomaly can be canceled by introducing a two-form which
couples to the gauge field and gives rise to tree diagrams that cancel the anomalous loop diagrams.
This mechanism exists also in supergravity theories and in four dimensions the coupling of the two-form
with the gauge field is given by the gauge invariant contact term
\begin{eqnarray}
\begin{split}
\label{GS}
- g M \int d^4 \theta \,E \, L \, V = - \frac{1}{2} e g M \phi \text{D}
+ \frac{1}{2} e g M v_m \left( h^m + 2 \phi H^m \right)
\end{split}
\end{eqnarray}
where $V$ is the vector superfield of the would-be-anomalous $U(1)$,
and $L$ is a real linear multiplet containing the two-form.
The fields $\text{D}$ and $v_m$ are the auxiliary real scalar
and the physical vector of the $U(1)$ vector superfield.
In Ref. \cite{CFG87} it was shown that the effect of this term is to introduce a realization of the
Stueckelberg mechanism in which the two-form of the real linear multiplet is eaten by the vector.
The Green-Schwarz mechanism was further investigated in minimal supergravity in Ref. \cite{Lopes Cardoso:1991zt}.
It is in fact a way to write down massive vector multiplets \cite{Siegel:1979ai,CFG87}.
On the other hand this same term (\ref{GS}) violates the shift symmetry (\ref{shift}) and as we will see it creates a potential.
Thus the small breaking of the shift symmetry
is generated by the Green-Schwarz term
and it is expected that as long as
\begin{eqnarray}
\label{gmh}
g M \ll H
\end{eqnarray}
where $H$ is the Hubble constant,
the effect of the small breaking on the kinetic term (\ref{kin}) is negligible.
This is natural in the 't Hooft sense since for $g M \rightarrow 0$ the shift symmetry is restored,
and the symmetry of the system is enhanced \cite{'tHooft:1979bh}.
To better understand this argument it is convenient to think of the mass $M$ as
parameterizing the flow of the theory through some parameter space.
When the theory sits on the $M_*=0$ point, all the operators that violate the symmetry vanish.
Far away from the special point $M_*=0$, the symmetry violating operators become large.
Thus, for a small value of $M$ close to $M_*=0$ the operators that violate the shift symmetry have to be highly suppressed.
More specifically for example
\begin{eqnarray}
\nonumber
-\frac{1}{(\text{some scale})^4}\int d^4 \theta E L^4 \rightarrow \text{naturally suppressed} .
\end{eqnarray}
For similar considerations in the old-minimal supergravity framework,
in a model of two chiral multiplets, see Ref. \cite{Kawasaki:2000yn}.
Taking this into account we may proceed to investigate the model in more detail.
As we have mentioned, our interest now lies in the new-minimal supergravity \cite{Sohnius:1981tp},
which originates from the superconformal supergravity after appropriate
gauge fixing \cite{Kugo:1982cu,Ferrara:1983dh,Gates:1983nr,Buchbinder:1995uq,Butter:2009cp}.
Note that the new-minimal supergravity allows only R-invariant Lagrangians.
The bosonic sector of the pure theory reads
\begin{eqnarray}
-2 \int d^4 \theta \,E \, V_{\text{R}} = \frac{1}{2} e \left( R + 6 H_m H^m \right) + 2 e A^-_m H^m .
\end{eqnarray}
On top of the graviton $e^a_m$ and the gravitino $\psi_m^\alpha$,
this theory contains two auxiliary fields:
the $A_m$ which gauges the R-symmetry and the two-form $B_{mn}$,
which only appears through the dual of its field strength $H^m$.
Let us mention that
\begin{eqnarray}
A^-_m = A_m -3 H_m.
\end{eqnarray}
The properties of this minimal supergravity were investigated in a series of papers \cite{Sohnius:1981tp,Sohnius:1982fw,Ferrara:1983dh,CFG87,Cecotti:1987qe,Cecotti:1987mr,
Ferrara:1988pd,Ferrara:1988qxa,Ovrut:1988fd,Ovrut:1989bh,Ovrut:1990nk}.
For the total Lagrangian to be manifestly supersymmetric we will write it down
in new-minimal supergravity superspace \cite{Ferrara:1988qxa},
and then turn to component form.
The theory we wish to consider is
\begin{eqnarray}
\begin{split}
\label{A1}
{\cal L}=
&
-2 M_P^2 \int d^4 \theta \,E \, V_{\text{R}}
+ \frac{1}{4} \left[ \int d^2 \theta \, {\cal E} \, W^2(V) +h.c. \right]
\\
&
-g M \int d^4 \theta \,E \, L V
- \int d^4 \theta \,E \, L^2
\end{split}
\end{eqnarray}
which, as we mentioned, has a matter sector consisting of a real linear superfield $L$
and a vector superfield $V$, combined so as to reproduce the Green-Schwarz mechanism in supergravity \cite{CFG87}.
Here
\begin{eqnarray}
W_{\alpha}(V) = - \frac{1}{4} \bar \nabla^2 \nabla_{\alpha} V
\end{eqnarray}
is the standard field strength chiral superfield.
For the pure gauge sector we have the bosonic components
\begin{eqnarray}
\begin{split}
\frac{1}{4 } \int d^2 \theta \, {\cal E} \, W^2(V) + c.c. =
- \frac{1}{4 } e F^{mn} F_{mn} (v) + \frac{1}{2 } e \text{D}^2
\end{split}
\end{eqnarray}
for
\begin{eqnarray}
F_{mn}(v) = \partial_m v_n - \partial_n v_m .
\end{eqnarray}
Now we can find the full bosonic sector of (\ref{A1})
\begin{eqnarray}
\label{A2bos}
\begin{split}
e^{-1} {\cal L}^B = & \frac{1}{2} M_P^2 \left( R + 6 H_m H^m \right) + 2 M_P^2 A^-_m H^m
\\
& - \frac{1}{2} \partial \phi \partial \phi + \frac{1}{2} h_m h^m
+ \frac{1}{2 } \text{D}^2 - \frac{1}{4 } F^{mn} F_{mn}
\\
& + \frac{1}{2} g M v_m \left( h^m + 2 \phi H^m \right)
- \frac{1}{2} g M \phi \text{D} .
\end{split}
\end{eqnarray}
Let us integrate out the supergravity auxiliary fields and the $h$-field.
First we make $h$ and $H$ unconstrained by introducing Lagrange multipliers $X$ and $Y$
\begin{eqnarray}
\begin{split}
e^{-1} {\cal L}_{aux} =& 3 M_P^2 H_m H^m + 2 M_P^2 A^-_m H^m + \partial_n X H^n
\\
& - \frac{1}{2} g M \phi \text{D} + \frac{1}{2} g M v_m \left( h^m + 2 \phi H^m \right)
\\
& + \frac{1}{2} \text{D}^2
+ \frac{1}{2} h_m h^m + \partial_m Y ( h^m + 2 \phi H^m ) .
\end{split}
\end{eqnarray}
From the equations of motion of $A^-_m$ we find the condition
\begin{eqnarray}
\label{Heq}
H^m = 0
\end{eqnarray}
and from the $H_m$ equations we find
\begin{eqnarray}
\begin{split}
6 M_P^2 H_m + 2 M_P^2 A^-_m
+gM v_m \phi
+ \partial_m X + 2 \phi \partial_m Y = 0
\end{split}
\end{eqnarray}
which combined with (\ref{Heq}) leads to
\begin{eqnarray}
\label{A-}
A^-_m =
- \frac{gM v_m \phi
+ \partial_m X + 2 \phi \partial_m Y }{( 2 M_P^2) } .
\end{eqnarray}
In fact the solution (\ref{A-}) does not enter the bosonic Lagrangian; it appears only in the supersymmetry transformations
of the on-shell theory.
The auxiliary Lagrangian becomes
\begin{eqnarray}
\begin{split}
e^{-1} {\cal L}_{aux} =
- \frac{1}{2} g M \phi \text{D} + \frac{1}{2}g M v_m h^m
+ \frac{1}{2} \text{D}^2
+ \frac{1}{2} h_m h^m + \partial_m Y h^m .
\end{split}
\end{eqnarray}
Integrating out $h^m$ and D we find
\begin{eqnarray}
e^{-1} {\cal L}_{aux} =
- \frac{1}{8} (g M v_m +2 \partial_m Y )^2
- \frac{g^2}{ 8} M^2 \phi^2.
\end{eqnarray}
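The intermediate step is the solution of the auxiliary field equations of motion,
\begin{eqnarray}
\nonumber
\text{D} = \frac{1}{2} g M \phi \ \ , \ \ h_m = - \frac{1}{2} \left( g M v_m + 2 \partial_m Y \right) ,
\end{eqnarray}
which, substituted back into ${\cal L}_{aux}$, reproduce the two terms above.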
The model (\ref{A1}) after integrating out all the non-propagating fields is
\begin{eqnarray}
\label{fin}
\begin{split}
e^{-1} {\cal L}^B =
\frac{1}{2} M_P^2 R - \frac{1}{2} \partial \phi \partial \phi - \frac{g^2}{ 8} M^2 \phi^2
- \frac{1}{4} F^{mn} F_{mn} (v)
- \frac{1}{8} g^2 M^2 v^m v_m
\end{split}
\end{eqnarray}
where we have shifted
\begin{eqnarray}
\label{stueckelberg}
v_m \rightarrow v_m + \frac{2}{g M} \partial_m Y .
\end{eqnarray}
It is clear from (\ref{stueckelberg}) that the Stueckelberg mechanism is at work.
The sector relevant to inflation reads
\begin{eqnarray}
e^{-1} {\cal L}_{scalar} = \frac{1}{2} M_P^2 R - \frac{1}{2} \partial \phi \partial \phi
- \frac{m^2}{ 2} \phi^2
\end{eqnarray}
where we have replaced
\begin{eqnarray}
g M = 2 m
\end{eqnarray}
which is fixed by the observational data (see for example \cite{Lyth:1998xn}) to be
\begin{eqnarray}
m \sim 10^{13} \, \text{GeV} .
\end{eqnarray}
During inflation, since the $\eta$ slow-roll parameter is small, we see that indeed
\begin{eqnarray}
\frac{g M}{H} \sim \frac{M_P}{\phi} \ll 1
\end{eqnarray}
and relation (\ref{gmh}) holds.
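Indeed, the slow-roll Friedmann equation for the quadratic potential gives
\begin{eqnarray}
\nonumber
3 M_P^2 H^2 \simeq \frac{1}{2} m^2 \phi^2 \ \ \Rightarrow \ \ \frac{g M}{H} = \frac{2 m}{H} \simeq 2 \sqrt{6} \, \frac{M_P}{\phi} ,
\end{eqnarray}
which is small for the super-Planckian field values $\phi \gg M_P$ traversed during chaotic inflation.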
We see that the model (\ref{A1}) successfully reproduces the simplest model of chaotic inflation (\ref{chaotic}).
Moreover there is no ambiguity in choosing the inflaton field,
which is a common issue in supergravity inflation.
The ambiguity is resolved by the simple fact that there is no other scalar in the first place.
Indeed, thanks to the Stueckelberg mechanism the second scalar of the inflaton multiplet is eaten by the vector field.
Thus we have a model of single-field chaotic inflation in supergravity which is technically natural
employing a softly broken shift symmetry.
We should mention that single-field inflationary models with the use of massive vector multiplets
have been introduced only recently in the literature \cite{Farakos:2013cqa,Ferrara:2013rsa,Ferrara:2013kca},
and in particular in Ref. \cite{Ferrara:2013rsa} a very interesting discussion on their properties can be found,
including a realization of the chaotic model.
In fact the model studied here is dual to the chaotic models of Ref. \cite{Ferrara:2013rsa},
and thus essentially reproduces equivalent results.
On the other hand the ``linear superfield--vector superfield'' picture we presented makes
the technical naturalness of the
model manifest in the new-minimal supergravity formulation.
As we have mentioned earlier,
the Lagrangian (\ref{A1}) describes a massive vector multiplet.
This may be seen by rewriting (\ref{A1}) as
\begin{eqnarray}
\begin{split}
\label{A2}
{\cal L}=& -2 M_P^2 \int d^4 \theta \,E \, V_{\text{R}}
-g M \int d^4 \theta \,E \, L (V + \frac{1}{g M} \Phi + \frac{1}{g M} \bar \Phi)
\\
&
+ \frac{1}{4} \left[ \int d^2 \theta \, {\cal E} \, W^2(V) +h.c. \right]
- \int d^4 \theta \,E \, L^2
\end{split}
\end{eqnarray}
where now $L$ is unconstrained.
From (\ref{A2}) we see that the chiral superfield $\Phi$ has to carry a vanishing R-charge
for the chiral-linear duality to be possible.
Then by integrating out $L$ from (\ref{A2}) we have
\begin{eqnarray}
\begin{split}
\label{A3}
{\cal L}=& -2 M_P^2 \int d^4 \theta \,E \, V_{\text{R}}
+ \frac{1}{4} \left[ \int d^2 \theta \, {\cal E} \, W^2(V) +h.c. \right]
+ \frac{1}{4} g^2 M^2 \int d^4 \theta \,E \, V^2
\end{split}
\end{eqnarray}
where we have shifted
\begin{eqnarray}
V \rightarrow V - \frac{1}{g M} \Phi - \frac{1}{g M} \bar \Phi .
\end{eqnarray}
The Lagrangian (\ref{A3}) describes a massive vector multiplet coupled to the new-minimal supergravity.
It is easy to see that the bosonic sector of (\ref{A3}) will be given again by (\ref{fin}).
In the limit $gM \rightarrow 0$ gauge invariance is restored,
and the theory will be described by a massless vector multiplet
and a massless chiral multiplet.
Indeed in this limit the theory will become
\begin{eqnarray}
\begin{split}
\label{A33}
{\cal L}=& -2 M_P^2 \int d^4 \theta \,E \, V_{\text{R}}
+ \frac{1}{4} \left[ \int d^2 \theta \, {\cal E} \, W^2(V) +h.c. \right]
+ \frac{1}{2} \int d^4 \theta \,E \, \bar \Phi \Phi
\end{split}
\end{eqnarray}
and the shift symmetry will translate into
\begin{eqnarray}
\label{dd}
\Phi \rightarrow \Phi + d \, M_P .
\end{eqnarray}
In (\ref{dd}) the constant $d$ can be complex.
Note that (\ref{A2}) allowed only a purely imaginary shift of the chiral superfield,
and thus $d$ can be complex only in the $gM \rightarrow 0$ limit.
The fact that the shift (\ref{dd}) is a symmetry of (\ref{A33}) is connected to the
structure of the new-minimal supergravity, which gives
\begin{eqnarray}
\int d^4 \theta \,E \, \ \Phi = \int d^4 \theta \,E \, \ \bar \Phi = 0
\end{eqnarray}
for a chiral superfield with vanishing R-charge.
If we instead start with the most general (up to two derivatives),
gauge invariant coupling of the real linear with the vector multiplet,
the superspace Lagrangian reads
\begin{eqnarray}
\begin{split}
\label{A11}
{\cal L}=& -2 M_P^2 \int d^4 \theta \,E \, V_{\text{R}}
+ \frac{1}{4} \left[ \int d^2 \theta \, {\cal E} \, W^2(V) +h.c. \right]
\\
&
-g M \int d^4 \theta \,E \, L V
- \int d^4 \theta \,E \, {\cal F}(L)
\end{split}
\end{eqnarray}
and after integrating out all the auxiliary field sector we find the bosonic part
\begin{eqnarray}
\label{fin2}
\begin{split}
e^{-1} {\cal L}^B = \frac{1}{2} M_P^2 R - \frac{1}{4} {\cal F}'' \partial \phi \partial \phi - \frac{g^2}{ 8} M^2 \phi^2
- \frac{1}{4} F^{mn} F_{mn} (v)
- \frac{1}{4 {\cal F}''} g^2 M^2 v^m v_m
\end{split}
\end{eqnarray}
where
\begin{eqnarray}
{\cal F}''(\phi) = \frac{\partial^2 {\cal F}(\phi)}{\partial \phi \partial \phi} .
\end{eqnarray}
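As a consistency check, the quadratic choice ${\cal F}(L) = L^2$ gives ${\cal F}'' = 2$, for which (\ref{fin2}) reduces to
\begin{eqnarray}
\nonumber
e^{-1} {\cal L}^B = \frac{1}{2} M_P^2 R - \frac{1}{2} \partial \phi \partial \phi - \frac{g^2}{ 8} M^2 \phi^2
- \frac{1}{4} F^{mn} F_{mn} (v)
- \frac{1}{8} g^2 M^2 v^m v_m ,
\end{eqnarray}
in agreement with (\ref{fin}).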
Again in (\ref{fin2}) the vector has eaten the Lagrange multiplier $Y$.
For a general kinetic function
\begin{eqnarray}
\label{FF}
{\cal F}(\phi) = c_0 + c_2 \phi^2 + c_3 \phi^3 + \dots
\end{eqnarray}
when all the higher order terms are present there is no shift symmetry,
and in general ${\cal F}(\phi)$ will receive large corrections.
Only in the case when
\begin{eqnarray}
c_n = 0 \ \ , \ \ n\ge 3
\end{eqnarray}
does $\cal F$ respect the shift symmetry and any correction to the higher order terms will have to be generated
by the Green-Schwarz term and thus be suppressed.
\section{Chaotic inflation in old-minimal supergravity}
Now we investigate the embedding of the
quadratic chaotic model in the old-minimal superspace \cite{Wess:1992cp}.
This supergravity also originates from the superconformal supergravity after appropriate
gauge fixing \cite{Kugo:1982cu,Ferrara:1983dh,Gates:1983nr,Buchbinder:1995uq,Butter:2009cp}.
A complete treatment of the curvature superfields of this theory can be found in Ref. \cite{Ferrara:1988qx}.
In addition to the graviton and the gravitino, the pure theory contains two auxiliary fields:
a complex scalar $u$, and a real vector $b_m$.
It is well known how to couple a self-interacting massive vector multiplet
to the old-minimal supergravity \cite{Mukhi:1979wc,VanProeyen:1979ks}.
Following the results of the previous section,
we consider the superspace Lagrangian
in the old-minimal formulation
\begin{eqnarray}
\begin{split}
\label{B1}
{\cal L}=& - 3 M_P^2 \int d^2 \Theta \,2 {\cal E} \, {\cal R} + h.c.
+ \frac{1}{4} \left[ \int d^2 \Theta \, 2{\cal E} \, W^2(V) +h.c. \right]
\\
&
-g M \int d^4 \theta \,E \, {\cal Q} V
- \int d^4 \theta \,E \, {\cal G}( {\cal Q})
\end{split}
\end{eqnarray}
where $\cal Q$ is a real linear multiplet with definition \cite{Binetruy:2000zx}
\begin{eqnarray}
\label{oldL}
(\bar {\cal D}^2 - 8 {\cal R}) {\cal Q} = 0 .
\end{eqnarray}
We see from (\ref{oldL}) that the shift symmetry argument used in the new-minimal case does not apply,
since the shift
\begin{eqnarray}
{\cal Q} \rightarrow {\cal Q} + c M_P
\end{eqnarray}
for a real constant $c$, violates the definition (\ref{oldL}).
Thus a choice of a quadratic kinetic function
\begin{eqnarray}
\label{Q^2}
{\cal G}( {\cal Q}) = {\cal Q}^2
\end{eqnarray}
is not protected by a shift symmetry.
To find the component form and relate to the known results,
it is better to rewrite the Lagrangian (\ref{B1}) as the coupling of a massive vector
multiplet to supergravity.
For the part containing the real linear superfield we have
\begin{eqnarray}
\begin{split}
\label{B2}
{\cal L}_{\cal Q} =& -g M \int d^4 \theta \,E \, {\cal Q} V
- \int d^4 \theta \,E \, {\cal G}( {\cal Q})
\\
= & -g M \int d^4 \theta \,E \, {\cal Q} (V + \frac{\Phi}{g M} + \frac{\bar \Phi}{g M} )
- \int d^4 \theta \,E \, {\cal G}( {\cal Q}) .
\end{split}
\end{eqnarray}
In (\ref{B2}) the superfield $\cal Q$ is unconstrained,
and it may be integrated out via the equations of motion
\begin{eqnarray}
{\cal G}'( {\cal Q}) =- g M (V + \frac{\Phi}{g M} + \frac{\bar \Phi}{g M} ) .
\end{eqnarray}
The theory then becomes
\begin{eqnarray}
\label{oldminZ}
\begin{split}
{\cal L} = \frac{1}{4} \int d^2 \Theta \, 2 {\cal E} \, W^2(V) + c.c.
+ \int d^2 \Theta \, 2 {\cal E} \left\{ -\frac{1}{8} ( \bar {\cal D}^2 -8 {\cal R} ) {\cal Z}(V) \right\} +c.c.
\end{split}
\end{eqnarray}
with
\begin{eqnarray}
{\cal Z}(V) = -3 M_P^2 - \left[ {\cal G}( {\cal Q}) +g M {\cal Q} V \right]_{ {\cal G}'( {\cal Q})=-gMV} .
\end{eqnarray}
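For instance, for the quadratic kinetic function (\ref{Q^2}) the condition ${\cal G}'({\cal Q}) = -gMV$ gives ${\cal Q} = -\frac{1}{2} g M V$, and therefore
\begin{eqnarray}
\nonumber
{\cal Z}(V) = -3 M_P^2 - \left[ \frac{1}{4} g^2 M^2 V^2 - \frac{1}{2} g^2 M^2 V^2 \right] = -3 M_P^2 + \frac{1}{4} g^2 M^2 V^2 ,
\end{eqnarray}
a purely quadratic self-coupling of the vector superfield.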
The Lagrangian (\ref{oldminZ}) is defined for
any hermitian function $ {\cal Z}(V)$ of dimension $ [{\cal Z}(V)]=2$ of the real vector superfield $V$.
To turn to component form let us first define the components of the massive vector multiplet as
\begin{eqnarray}
\begin{split}
C &=V|
\\
N &= -\frac{1}{4} {\cal D}^2 V|
\\
v_{\alpha \dot \alpha} &= - \frac{1}{2} [{\cal D}_\alpha , \bar {\cal D}_{\dot \alpha} ] V|
\\
\text{D} &= \frac{1}{8} {\cal D}^\alpha ( \bar {\cal D}^2 -8 {\cal R} ) {\cal D}_\alpha \, V| .
\end{split}
\end{eqnarray}
The bosonic sector of (\ref{oldminZ}) reads
\begin{eqnarray}
\label{compZ}
\begin{split}
e^{-1} {\cal L}^B = & - \frac{1}{4} F^{mn} F_{mn} (v) +\frac{1}{2} \text{D}^2 -\frac{1}{4} {\cal Z}'' b^m b_m
-\frac{1}{3} {\cal Z}' \bar u \bar N -\frac{1}{3} {\cal Z}' u N + \frac{1}{9} {\cal Z} u \bar u
\\
&+ \frac{1}{6} {\cal Z} R + {\cal Z}'' N \bar N -\frac{1}{9} {\cal Z} b^m b_m
+ \frac{1}{2} {\cal Z}' \text{D} - \frac{1}{4} {\cal Z}'' \partial C \partial C +\frac{1}{3} {\cal Z}' b^m v_m
\end{split}
\end{eqnarray}
where now ${\cal Z}$ is a function of the lowest component $C$ and
\begin{eqnarray}
{\cal Z}' = \frac{\partial {\cal Z}}{\partial C} \ \ , \ \ {\cal Z}'' = \frac{\partial^2 {\cal Z}}{\partial C \partial C} .
\end{eqnarray}
After integrating out the auxiliary sector and performing the appropriate Weyl rescalings one finds \cite{VanProeyen:1979ks}
\begin{eqnarray}
\label{OLD}
\begin{split}
e^{-1} {\cal L}^B = & \frac{1}{2} M_P^2 R
+ \frac{1}{2} M_P^2 {\cal J}'' \partial C \partial C
- \frac{1}{ 2} M_P^4 ({\cal J}')^2
\\
&- \frac{1}{4} F^{mn} F_{mn} (v)
+ \frac{1}{2} M_P^2 {\cal J}'' v^m v_m
\end{split}
\end{eqnarray}
where
\begin{eqnarray}
{\cal J}(C) = \frac{3}{2} \text{ln} \left[ - \frac{1}{3 M_P^2} {\cal Z}(C) \right].
\end{eqnarray}
For a ghost-free theory one should have
\begin{eqnarray}
{\cal J}'' < 0 .
\end{eqnarray}
It can be easily seen that the ${\cal J}(C)$ which reproduces the quadratic chaotic model is given by
\begin{eqnarray}
\label{chaold}
{\cal J} = - \frac{m^2}{2 M_P^2} C^2
\end{eqnarray}
and the part of (\ref{OLD}) relevant to inflation will read
\begin{eqnarray}
\label{infold}
e^{-1} {\cal L}_{scalar} = \frac{1}{2} M_P^2 R - \frac{1}{2} \partial \psi \partial \psi
- \frac{m^2}{ 2} \psi^2
\end{eqnarray}
for the inflaton
\begin{eqnarray}
\psi = m C .
\end{eqnarray}
Again this is a single-field inflationary model.
Nevertheless the function (\ref{chaold})
does not correspond to a vector superfield self-coupling of the form
\begin{eqnarray}
\label{oldmass}
- \frac{1}{2} m^2 \int d^4 \theta E \, V^2
\end{eqnarray}
but will come from a more involved function of $V$.
Indeed, the vector superfield function inside (\ref{oldminZ}) has to have the form
\begin{eqnarray}
\label{expV}
{\cal Z}(V) = - 3 M_P^2 e^{-\frac{m^2}{3 M_P^2} V^2}
\end{eqnarray}
which leads to (\ref{chaold}) and (\ref{infold}).
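Indeed, inserting (\ref{expV}) into the definition of ${\cal J}$ gives
\begin{eqnarray}
\nonumber
{\cal J}(C) = \frac{3}{2} \text{ln} \left[ e^{-\frac{m^2}{3 M_P^2} C^2} \right] = - \frac{m^2}{2 M_P^2} C^2 ,
\end{eqnarray}
so that ${\cal J}'' = - m^2/M_P^2 < 0$, the ghost-free condition is satisfied, and the scalar sector of (\ref{OLD}) reduces to (\ref{infold}) for $\psi = m C$.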
On the other hand, a quadratic term such as (\ref{oldmass}) will give rise to an exponential potential.
In fact the coupling of the massive vector multiplet to standard supergravity was
investigated in Ref. \cite{Ferrara:2013rsa} reproducing, among other models, also quadratic chaotic inflation.
Let us recapitulate for a moment.
We have shown that starting with a theory of quadratic chaotic inflation we can embed it in supergravity in two distinct ways.
In one case (the new-minimal)
there is a softly broken shift symmetry in superspace which can be used to give a technical naturalness argument
against higher order corrections.
The embedding can be also straightforwardly carried out in the old-minimal formulation,
but in this case there is no obvious way to render the theory technically natural.
Finally, one may also interpret (\ref{oldminZ}) as a supergravity
model with a gauged chiral sector \cite{Wess:1992cp}
of the form
\begin{eqnarray}
\label{oldmingauged}
\begin{split}
{\cal L} = & \int d^2 \Theta \, 2 {\cal E} \left\{ -\frac{1}{8} ( \bar {\cal D}^2
-8 {\cal R} ) {\cal Z}(\text{ln}[ \bar \Phi e^V \Phi] ) \right\} +c.c.
\\
& + \frac{1}{4} \int d^2 \Theta \, 2 {\cal E} \, W^2(V) + c.c.
\end{split}
\end{eqnarray}
where the real part of the lowest component of the chiral superfield
\begin{eqnarray}
S = \text{ln}\, \Phi
\end{eqnarray}
will drive inflation,
while the imaginary part will be eaten by the massive vector.
\section{Conclusions}
We have studied the embedding of the quadratic model of chaotic inflation into the new-minimal and the
old-minimal theories of supergravity,
with the use of massive vector multiplets.
This embedding is quite straightforward and reproduces a single-field inflation model.
This stems from the underlying Stueckelberg (or BEH) mechanism,
where the second unwanted component of the scalar multiplet is eaten by the massive vector.
Indeed this mechanism has been investigated previously in a series
of papers \cite{Farakos:2013cqa,Ferrara:2013rsa,Ferrara:2013kca}.
The simplicity and generality of the models indicate that
their detailed cosmological properties deserve further study, a problem
to which we will return in the future.
Our additional interest here was the notion of technical naturalness in superspace.
As we have demonstrated, the model in the new-minimal formulation naturally evades the $\eta$-problem,
due to a softly broken superspace shift symmetry of the real linear superfield.
In the old-minimal formulation, even though the quadratic chaotic model can be successfully embedded,
we could not identify a corresponding mechanism.
Closing, let us make a final comment on the 4D Green-Schwarz terms.
These were merely introduced to generate a potential for the inflaton,
and not to cancel some anomaly.
Nevertheless,
the specific combination of couplings we have used
exactly reproduces the 4D analog of the Green-Schwarz mechanism,
which leads one to hope that it would be possible to embed the theory into a UV complete superstring model.
However, in order to protect the form of the potential it is essential that the model exhibits the
shift symmetry.
If such a model existed one could extend the technical naturalness of this paper to a top-down naturalness.
The identification of specific superstring sectors with the supergravity properties discussed in this article
would offer a new insight to string inflation which we leave for future research.
\section*{Acknowledgements}
We thank I. Dalianis and A. Kehagias for discussion and correspondence.
We thank R. Kallosh, A. Linde and M. Porrati for comments on the first version.
This work is supported by the Grant Agency of the Czech Republic under the grant P201/12/G028.
\section{Introduction}
Let $G$ be a semisimple Lie group and $\mathcal{X}$ the associated symmetric space of dimension $n$.
Let $M$ be a connected, orientable, aspherical, tame manifold of the same dimension as $\mathcal{X}$. First assume that $M$ is compact. To each representation $\rho :\pi_1(M) \rightarrow G$, one can associate a volume of $\rho$ in the following way. First, associate a flat bundle $E_\rho$ over $M$ with fiber $\mathcal{X}$ to $\rho$. Since $\mathcal{X}$ is contractible, there always exists a section $s : M \rightarrow E_\rho$. Let $\omega_{\mathcal{X}}$ be the Riemannian volume form on $\mathcal{X}$. One may think of $\omega_{\mathcal{X}}$ as a closed differential form on $E_\rho$ by spreading $\omega_{\mathcal{X}}$ over the fibers of $E_\rho$. Then the volume of $\rho$ is defined by
$$\mathrm{Vol}(\rho)=\int_M s^*\omega_{\mathcal{X}}.$$
Since any two sections are homotopic to each other, the volume $\mathrm{Vol}(\rho)$ does not depend on the choice of section.
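This can be seen directly: if $H : M \times [0,1] \rightarrow E_\rho$ is a homotopy between two sections $s_0$ and $s_1$, then by Stokes' theorem,
$$\int_M s_1^*\omega_{\mathcal{X}} - \int_M s_0^*\omega_{\mathcal{X}} = \int_{\partial(M\times[0,1])} H^*\omega_{\mathcal{X}} = \int_{M\times[0,1]} H^*d\omega_{\mathcal{X}} = 0,$$
since $M$ is closed and $\omega_{\mathcal{X}}$ is a closed form.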
The volume of representations has been used to characterize discrete faithful representations.
Let $\Gamma$ be a uniform lattice in $G$. Then the volume of representations satisfies a Milnor-Wood type inequality. More precisely, it holds that for any representation $\rho :\Gamma\rightarrow G$, \begin{eqnarray}\label{MWinequality} |\mathrm{Vol}(\rho)| \leq \mathrm{Vol}(\Gamma\backslash \mathcal{X}).\end{eqnarray}
Furthermore, equality holds in (\ref{MWinequality}) if and only if $\rho$ is discrete and faithful. This is the so-called \emph{volume rigidity theorem}. Goldman \cite{Go92} proved the volume rigidity theorem in the higher rank case, and Besson, Courtois and Gallot \cite{BCG07} proved it in the rank $1$ case.
Now assume that $M$ is noncompact.
Then the definition of the volume of representations given above is no longer valid, since integrability problems arise. So far, three definitions of the volume of representations have been given under certain conditions on $M$. Let us first fix the following notation, used throughout the paper.
\smallskip
\noindent {\bf Setup.} Let $M$ be a noncompact, connected, orientable, aspherical, tame manifold. Denote by $\overline M$ the compact manifold with boundary whose interior is homeomorphic to $M$. Assume that each connected component of $\partial \overline M$ has amenable fundamental group. Let $G$ be a rank $1$ semisimple Lie group with trivial center and no compact factors. Let $\mathcal{X}$ be the associated symmetric space of dimension $n$. Assume that $M$ has the same dimension as $\mathcal{X}$.
\smallskip
First of all, Dunfield \cite{Du99} introduced the notion of pseudo-developing map to define the volume of representations of a nonuniform lattice $\Gamma$ in $\mathrm{SO}(3,1)$. This successfully produces an invariant associated with a representation $\rho :\Gamma \rightarrow \mathrm{SO}(3,1)$, but he did not prove that the volume of representations is independent of the chosen pseudo-developing map.
After that, Francaviglia \cite{Fr04} proved the well-definedness of the volume of representations. Then Francaviglia and Klaff \cite{FK06} extended the definition of volume of representations and the volume rigidity theorem to general nonuniform hyperbolic lattices.
We call the definition of volume of representations via pseudo-developing map {\bf D1}. For more detail about {\bf D1}, see \cite{FK06} or Section \ref{sec:pseudo}.
The second definition {\bf D2} of volume of representations was given by Bucher, Burger and Iozzi \cite{BBI}, which generalizes the one introduced in \cite{BIW10} for noncompact surfaces. They used the bounded cohomology theory to make an invariant associated with a representation.
Given a representation $\rho : \pi_1(M) \rightarrow G$, one cannot get any information from the pull-back map in degree $n$ in continuous cohomology, $\rho^*_c : H^n_c(G,\mathbb R) \rightarrow H^n(\pi_1(M),\mathbb R)$, since $H^n(\pi_1(M),\mathbb R) \cong H^n(M,\mathbb R)$ is trivial. However the situation is different in continuous bounded cohomology. Not only may the pull-back map $\rho^*_b : H^n_{c,b}(G,\mathbb R) \rightarrow H^n_b(\pi_1(M),\mathbb R)$ be nontrivial, but it also encodes subtle algebraic and topological properties of a representation, such as injectivity and discreteness.
Bucher, Burger and Iozzi \cite{BBI} gave a proof of the volume rigidity theorem for representations of hyperbolic lattices from the point of view of bounded cohomology. We refer the reader to \cite{BBI} or Section \ref{sec:bounded} for further discussion about {\bf D2}.
Recently, S. Kim and I. Kim \cite{KK14} gave a new definition, called {\bf D3}, of the volume of representations in the case that $M$ is a complete Riemannian manifold with finite Lipschitz simplicial volume. See \cite{KK14} or Section \ref{sec:lipschitz} for the exact definition of {\bf D3}. In {\bf D3}, it is not necessary that each connected component of $\partial \overline M$ have amenable fundamental group, while this amenability condition on $\partial \overline M$ is necessary in {\bf D2}. They only use the bounded cohomology and $\ell^1$-homology of $M$. This is quite useful for defining the volume of representations when the amenability condition on $\partial \overline M$ fails. They gave a proof of the volume rigidity theorem for representations of lattices in an arbitrary semisimple Lie group in their setting.
In this note, we will give another definition of the volume of representations, called {\bf D4}. In {\bf D4}, $\rho$-equivariant maps are involved as in {\bf D1}, and the bounded cohomology of $M$ is involved as in {\bf D2} and {\bf D3}. In fact, {\bf D4} serves as a bridge connecting the other definitions {\bf D1}, {\bf D2} and {\bf D3}. Eventually we show that all the definitions are equivalent.
\begin{theorem}\label{thm:main}
Let $G$ be a rank $1$ simple Lie group with trivial center and no compact factors. Let $M$ be a noncompact, connected, orientable, aspherical, tame manifold. Suppose that each end of $M$ has amenable fundamental group. Then all definitions {\bf D1}, {\bf D2} and {\bf D3} of volume of representations of $\pi_1(M)$ into $G$ are equivalent. Furthermore if $M$ admits a complete Riemannian metric with finite Lipschitz simplicial volume, all definitions {\bf D1}, {\bf D2}, {\bf D3} and {\bf D4} are equivalent.
\end{theorem}
The paper is organized as follows: For our proof, we recall the definitions of volume of representations in the order {\bf D2, D4, D1, D3}.
In Section \ref{sec:bounded}, we first recall the definition {\bf D2}. In Section \ref{sec:cone}, we give the definition {\bf D4} and then prove that {\bf D2} and {\bf D4} are equivalent. In Section \ref{sec:pseudo}, after recalling the definition {\bf D1}, we show the equivalence of {\bf D1} and {\bf D4}. Finally in Section \ref{sec:lipschitz}, we complete the proof of Theorem \ref{thm:main} by proving that {\bf D3} and {\bf D4} are equivalent.
\section{Bounded cohomology and Definition {\bf D2}}\label{sec:bounded}
We choose the appropriate complexes for the continuous cohomology and continuous bounded cohomology of $G$ for our purpose.
Consider the complex $C^*_c(\mathcal X,\mathbb R)_\mathrm{alt}$ with the homogeneous coboundary operator, where
$$C^k_c(\mathcal X,\mathbb R)_\mathrm{alt} =\{ f : \mathcal X^{k+1} \rightarrow \mathbb R \ | \ f \text{ is continuous and alternating} \}.$$
The action of $G$ on $C^k_c(\mathcal X,\mathbb R)_\mathrm{alt}$ is given by $$g\cdot f (x_0,\ldots,x_k)=f(g^{-1}x_0,\ldots,g^{-1}x_k).$$ Then the continuous cohomology $H^*_c(G,\mathbb R)$ can be isomorphically computed by the cohomology of the $G$-invariant complex $C^*_c(\mathcal X,\mathbb R)_\mathrm{alt}^G$ (see \cite[Chapitre III]{Gu80}). According to the Van Est isomorphism \cite[Proposition IX.5.5]{BW}, the continuous cohomology $H^*_c(G,\mathbb R)$ is isomorphic to the set of $G$-invariant differential forms on $\mathcal{X}$. Hence in degree $n$, $H^n_c(G,\mathbb R)$ is generated by the Riemannian volume form $\omega_{\mathcal X}$ on $\mathcal{X}$.
Let $C^k_{c,b}(\mathcal{X},\mathbb R)_\mathrm{alt}$ be the subcomplex of continuous, alternating, bounded real valued functions on $\mathcal{X}^{k+1}$.
The continuous bounded cohomology $H^*_{c,b}(G,\mathbb R)$ is obtained by the cohomology of the $G$-invariant complex $C^*_{c,b}(\mathcal{X},\mathbb R)_\mathrm{alt}^G$ (see \cite[Corollary 7.4.10]{Mo01}). The inclusion of complexes $C^*_{c,b}(\mathcal{X},\mathbb R)^G_\mathrm{alt} \subset C^*_{c}(\mathcal{X},\mathbb R)^G_\mathrm{alt}$ induces a comparison map $H^*_{c,b}(G,\mathbb R) \rightarrow H^*_{c}(G,\mathbb R)$.
Let $Y$ be a countable CW-complex. Denote by $C^k_b(Y,\mathbb R)$ the complex of bounded real valued $k$-cochains on $Y$.
For a subspace $B \subset Y$, let $C^k_b(Y,B,\mathbb R)$ be the subcomplex of those bounded $k$-cochains on $Y$ that vanish on simplices with image contained in $B$. The complexes $C^*_b(Y,\mathbb R)$ and $C^*_b(Y,B,\mathbb R)$ define the bounded cohomologies $H^*_b(Y,\mathbb R)$ and $H^*_b(Y,B,\mathbb R)$ respectively. For our convenience, we give another complex which computes the bounded cohomology $H^*_b(Y,\mathbb R)$ of $Y$. Let $C^k_b(\widetilde Y,\mathbb R)_\mathrm{alt}$ denote the complex of bounded, alternating real valued Borel functions on $(\widetilde Y)^{k+1}$. The $\pi_1(Y)$-action on $C^*_b(\widetilde Y,\mathbb R)_\mathrm{alt}$ is defined in the same way as the $G$-action on $C^*_c(\mathcal X,\mathbb R)$. Then Ivanov \cite{Iva85} proved that the $\pi_1(Y)$-invariant complex $C^*_b(\widetilde Y,\mathbb R)_\mathrm{alt}^{\pi_1(Y)}$ defines the bounded cohomology of $Y$.
Bucher, Burger and Iozzi \cite{BBI} used bounded cohomology to define the volume of representations.
Let $\overline M$ be a connected, orientable, compact manifold with boundary. Suppose that each component of $\partial \overline M$ has amenable fundamental group. In that case, it is proved in \cite{BBIPP,KK} that the natural inclusion $i:(\overline M,\emptyset) \rightarrow (\overline M,\partial \overline M)$ induces an isometric isomorphism in bounded cohomology, $$i_b^* : H^*_b(\overline M, \partial \overline M,\mathbb R) \rightarrow H^*_b(\overline M,\mathbb R),$$ in degrees $* \geq 2$. Noting the remarkable result of Gromov \cite[Section 3.1]{Gro82} that the natural map $H^n_b(\pi_1(\overline M),\mathbb R)\rightarrow H^n_b(\overline M,\mathbb R)$ is an isometric isomorphism in bounded cohomology, for a given representation $\rho : \pi_1(M) \rightarrow G$ we have a map $$\rho^*_b : H^n_{c,b}(G,\mathbb R) \rightarrow H^n_b(\pi_1(\overline M),\mathbb R) \cong H^n_b(\overline M,\mathbb R) \cong H^n_b(\overline M,\partial \overline M,\mathbb R).$$
The $G$-invariant Riemannian volume form $\omega_\mathcal{X}$ on $\mathcal{X}$ gives rise to a continuous bounded cocycle $\Theta :\mathcal{X}^{n+1} \rightarrow \mathbb R$ defined by $$\Theta(x_0,\ldots,x_n)=\int_{[x_0,\ldots,x_n]}\omega_\mathcal{X},$$ where $[x_0,\ldots,x_n]$ is the geodesic simplex with ordered vertices $x_0,\ldots,x_n$ in $\mathcal{X}$. The boundedness of $\Theta$ is due to the fact that the volume of geodesic simplices in $\mathcal{X}$ is uniformly bounded from above \cite{IY82}. Hence the cocycle $\Theta$ induces a continuous cohomology class $[\Theta]_c \in H^n_c(G,\mathbb R)$ and moreover, a continuous bounded cohomology class $[\Theta]_{c,b} \in H^n_{c,b}(G,\mathbb R)$.
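Note that $\Theta$ is indeed a cocycle: since the faces of a geodesic simplex are themselves geodesic simplices, Stokes' theorem and the closedness of $\omega_\mathcal{X}$ give
$$\delta\Theta(x_0,\ldots,x_{n+1})=\sum_{i=0}^{n+1}(-1)^i\int_{[x_0,\ldots,\hat{x}_i,\ldots,x_{n+1}]}\omega_\mathcal{X} = \int_{\partial[x_0,\ldots,x_{n+1}]}\omega_\mathcal{X} = \int_{[x_0,\ldots,x_{n+1}]}d\omega_\mathcal{X}=0,$$
where $\hat{x}_i$ means that the vertex $x_i$ is omitted.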
The image of $((i^*_b)^{-1} \circ \rho^*_b)[\Theta]_{c,b}$ via the comparison map $c : H^n_b(\overline M,\partial \overline M,\mathbb R) \rightarrow H^n(\overline M,\partial \overline M,\mathbb R)$ is an ordinary relative cohomology class. Its evaluation on the relative fundamental class $[\overline M,\partial \overline M]$ gives an invariant associated with $\rho$.
\begin{definition}[{\bf D2}]
For a representation $\rho : \pi_1(M) \rightarrow G$, define an invariant $\mathrm{Vol}_2(\rho)$ by
$$\mathrm{Vol}_2(\rho)= \left\langle (c\circ (i^*_b)^{-1} \circ \rho^*_b) [\Theta]_{c,b}, [\overline M, \partial \overline M] \right\rangle.$$
\end{definition}
In the definition {\bf D2}, a specific continuous bounded volume class $[\Theta]_{c,b}$ in $H^n_{c,b}(G,\mathbb R)$ is involved. A natural question is whether the value of the volume of representations changes when another continuous bounded volume class is used in {\bf D2} instead of $[\Theta]_{c,b}$. One would expect {\bf D2} not to depend on this choice, but it does not seem easy to obtain an answer directly. It turns out that {\bf D2} is indeed independent of the choice of continuous bounded volume class. For a proof, see Section \ref{sec:lipschitz}.
\begin{proposition}\label{prop:indepwb}
The definition {\bf D2} does not depend on the choice of continuous bounded volume class, that is, for any two continuous bounded volume classes $\omega_b$, $\omega_b' \in H^n_{c,b}(G,\mathbb R)$,
$$\left\langle (c\circ (i^*_b)^{-1} \circ \rho^*_b) (\omega_b), [\overline M, \partial \overline M] \right\rangle=\left\langle (c\circ (i^*_b)^{-1} \circ \rho^*_b) (\omega_b'), [\overline M, \partial \overline M] \right\rangle.$$
\end{proposition}
Bucher, Burger and Iozzi proved the volume rigidity theorem for hyperbolic lattices as follows.
\begin{theorem}[Bucher, Burger and Iozzi, \cite{BBI}]
Let $n\geq 3$. Let $i :\Gamma \hookrightarrow \mathrm{Isom}^+(\mathbb H^n)$ be a lattice embedding and let $\rho:\Gamma \rightarrow \mathrm{Isom}^+(\mathbb H^n)$ be any representation. Then $$| \mathrm{Vol}_2(\rho)| \leq |\mathrm{Vol}_2(i)|=\mathrm{Vol}(\Gamma \backslash \mathbb H^n),$$
with equality if and only if $\rho$ is conjugate to $i$ by an isometry.
\end{theorem}
\section{New definition {\bf D4}}\label{sec:cone}
In this section we give a new definition of volume of representations.
It will turn out that the new definition is useful in proving that all definitions of volume of representations are equivalent.
\subsection{End compactification}
Let $\widehat{M}$ be the end compactification of $M$ obtained by adding one point for each end of $M$.
Let $\widetilde M$ denote the universal cover of $M$. Let $\widehat{\widetilde{M}}$ denote the space obtained by adding to $\widetilde{M}$ one point for each lift of each end of $M$. The points added to $M$ (resp.\ $\widetilde M$) are called \emph{ideal points} of $M$ (resp.\ $\widetilde M$). Denote by $\partial \widehat M$ (resp.\ $\partial \widehat{\widetilde M}$) the set of ideal points of $M$ (resp.\ $\widetilde M$). Let $p : \widetilde{M} \rightarrow M$ be the universal covering map. It extends to a map $\widehat{p}: \widehat{\widetilde{M}} \rightarrow \widehat{M}$, and moreover the action of $\pi_1(M)$ on $\widetilde{M}$ by covering transformations induces an action on $\widehat{\widetilde{M}}$. The latter action is not free because each point of $\partial \widehat{\widetilde{M}}$ is stabilized by some peripheral subgroup of $\pi_1(M)$.
Note that $\widehat M$ can be obtained by collapsing each connected component of $\partial \overline M$ to a point.
Similarly, $\widehat{\widetilde M}$ can be obtained by collapsing each connected component of $\bar p^{-1}(\partial \overline M)$ to a point where $\bar p : \widetilde{\overline M} \rightarrow \overline M$ is the universal covering map.
We denote the collapsing map by $\pi : \widetilde {\overline M} \rightarrow \widehat{\widetilde M}$.
One advantage of $\widehat M$ is the existence of a fundamental class in singular homology.
While the top dimensional singular homology of $M$ vanishes, the top dimensional singular homology of $\widehat M$ with coefficients in $\mathbb Z$ is isomorphic to $\mathbb Z$. Moreover, it can be easily seen that $H_*(\widehat M,\mathbb R)$ is isomorphic to $H_*(\overline M,\partial \overline M,\mathbb R)$ in degrees $* \geq 2$. Hence the fundamental class of $\widehat M$ is well defined; we denote it by $[\widehat M]$.
\subsection{The cohomology groups}
Let $Y$ be a topological space and suppose that a group $L$ acts continuously on $Y$.
Then the cohomology group $H^*(Y;L,\mathbb R)$ associated with $Y$ and $L$ is defined in the following way. Our main reference for this cohomology is \cite{Du}.
For $k>0$, define $$F^k_\mathrm{alt}(Y,\mathbb R)=\{ f : Y^{k+1} \rightarrow \mathbb R \ | \ f \text{ is alternating} \}.$$ Let $F^k_\mathrm{alt}(Y,\mathbb R)^L$ denote the subspace of $L$-invariant functions where the action of $L$ on $F^k_\mathrm{alt}(Y,\mathbb R)$ is given by
$$(g \cdot f)(y_0,\ldots,y_k)=f(g^{-1}y_0,\ldots, g^{-1}y_k),$$ for $f \in F^k_\mathrm{alt}(Y)$ and $g \in L$.
Define a coboundary operator $\delta_k : F^k_\mathrm{alt}(Y,\mathbb R) \rightarrow F^{k+1}_\mathrm{alt}(Y,\mathbb R)$ by the usual formula
$$(\delta_k f)(y_0,\ldots,y_{k+1})=\sum_{i=0}^{k+1} (-1)^i f(y_0,\ldots, \hat y_i,\ldots, y_{k+1}).$$
The coboundary operator restricts to the complex $F^*_\mathrm{alt}(Y,\mathbb R)^L$. The cohomology $H^*(Y;L,\mathbb R)$ is defined as the cohomology of this complex. Define $F^*_{\mathrm{alt},b}(Y,\mathbb R)$ as the subspace of $F^*_\mathrm{alt}(Y,\mathbb R)$ consisting of bounded alternating functions. Clearly the coboundary operator restricts to the complex $F^*_{\mathrm{alt},b}(Y,\mathbb R)^L$ and so it defines a cohomology, denoted by $H^*_b(Y;L,\mathbb R)$.
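As a standard sanity check (included here only for concreteness; it is not needed later), one verifies directly that $\delta_{k+1} \circ \delta_k = 0$. For instance, for $k=1$ and $f \in F^1_\mathrm{alt}(Y,\mathbb R)$,
{\setlength\arraycolsep{2pt}
\begin{eqnarray*}
(\delta_2 \delta_1 f)(y_0,y_1,y_2,y_3) &=& \big(f(y_2,y_3)-f(y_1,y_3)+f(y_1,y_2)\big)-\big(f(y_2,y_3)-f(y_0,y_3)+f(y_0,y_2)\big) \\
&& +\,\big(f(y_1,y_3)-f(y_0,y_3)+f(y_0,y_1)\big)-\big(f(y_1,y_2)-f(y_0,y_2)+f(y_0,y_1)\big) \\
&=& 0,
\end{eqnarray*}}
since every term appears twice with opposite signs. Hence $H^*(Y;L,\mathbb R)$ and $H^*_b(Y;L,\mathbb R)$ are well defined.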
In particular, for a manifold $M$, the cohomology $H^*(\widetilde M;\pi_1(M),\mathbb R)$ is actually isomorphic to the group cohomology $H^*(\pi_1(M),\mathbb R)$ and, $H^*_b(\widetilde M;\pi_1(M), \mathbb R)$ is isomorphic to the bounded cohomology $H^*_b(\pi_1(M),\mathbb R)$.
\begin{remark}\label{remark1}
Let $L$ and $L'$ be groups acting continuously on topological spaces $Y$ and $Y'$, respectively. Given a homomorphism $\rho :L \rightarrow L'$, any $\rho$-equivariant continuous map $P:Y \rightarrow Y'$ defines a chain map $$P^* :F^*_\mathrm{alt}(Y',\mathbb R)^{L'}\rightarrow F^*_\mathrm{alt}(Y,\mathbb R)^L.$$
Thus it gives a morphism in cohomology. Let $Q:Y \rightarrow Y'$ be another $\rho$-equivariant map. For each $k>0$, one may define $$H_k (y_0,\ldots,y_k)=\sum_{i=0}^k (-1)^i (P(y_0),\ldots,P(y_i),Q(y_i),\ldots, Q(y_k)).$$
Then by a straightforward computation, $$(\partial_{k+1}H_k + H_{k-1}\partial_k)(y_0,\ldots,y_k)=(P(y_0),\ldots,P(y_k))-(Q(y_0),\ldots, Q(y_k)).$$
It follows from the above identity that for any cocycle $f \in F^k_\mathrm{alt}(Y',\mathbb R)^{L'}$, $$ (P^*f - Q^*f)(y_0,\ldots,y_k)=\delta_k (f \circ H_{k-1})(y_0,\ldots,y_k).$$
From this usual procedure in cohomology theory, one could expect that $P$ and $Q$ induce the same morphism in cohomology.
However, since $f \circ H_{k-1}$ may not be alternating, $P$ and $Q$ need not induce the same morphism in cohomology.
\end{remark}
Recall that $\Theta :\mathcal{X}^{n+1} \rightarrow \mathbb R$ is a $G$-invariant continuous bounded alternating cocycle; it yields a bounded cohomology class $[\Theta]_b \in H^n_b(\mathcal{X};G,\mathbb R).$
Let $\overline \mathcal{X}$ be the compactification of $\mathcal{X}$ obtained by adding the ideal boundary $\partial \mathcal{X}$.
Extending the $G$-action on $\mathcal{X}$ to $\overline{\mathcal{X}}$, we can define a cohomology $H^*(\overline{\mathcal{X}};G,\mathbb R)$ and bounded cohomology $H^*_b(\overline{\mathcal{X}};G,\mathbb R)$. In the rank $1$ case, since the geodesic simplex is well-defined for any $(n+1)$-tuple of points of $\overline \mathcal{X}$, the cocycle $\Theta$ can be extended to a $G$-invariant alternating bounded cocycle $\overline \Theta :\overline \mathcal{X}^{n+1} \rightarrow \mathbb R$. Hence $\overline \Theta$ determines a cohomology class $[\overline \Theta] \in H^n(\overline \mathcal{X};G,\mathbb R)$ and $[\overline \Theta]_b \in H^n_b(\overline \mathcal{X};G,\mathbb R)$.
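As an illustration in the simplest rank $1$ case (not needed in the sequel): when $\mathcal{X}=\mathbb H^n$, the uniform bound on volumes of geodesic simplices is explicit. By a classical result of Haagerup and Munkholm, $$\left| \int_{[\bar x_0,\ldots,\bar x_n]} \omega_{\mathbb H^n} \right| \leq v_n$$ for all $(\bar x_0,\ldots,\bar x_n) \in \overline{\mathbb H^n}^{\,n+1}$, where $v_n$ denotes the volume of a regular ideal simplex in $\mathbb H^n$; in particular $\|\overline \Theta\|_\infty = v_n$ in this case.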
Let $\widehat D:\widehat{\widetilde{M}} \rightarrow \overline \mathcal{X}$ be a $\rho$-equivariant continuous map whose restriction to $\widetilde{M}$ is a $\rho$-equivariant continuous map from $\widetilde M$ to $\mathcal{X}$. We will consider only such kinds of equivariant maps throughout the paper. Denote by $D : \widetilde M \rightarrow \mathcal X$ the restriction of $\widehat D$ to $\widetilde M$.
Then $\widehat D$ induces a homomorphism in cohomology, $$\widehat D^* : H^n(\overline \mathcal{X};G,\mathbb R) \rightarrow H^n(\widehat{\widetilde{M}};\pi_1(M),\mathbb R).$$
Note that the action of $\pi_1(M)$ on $\widehat{\widetilde M}$ is not free and hence $H^*(\widehat{\widetilde{M}};\pi_1(M),\mathbb R)$ may not be isomorphic to $H^*(\widehat M,\mathbb R)$.
Let $H^*_{simp}(\widehat M,\mathbb R)$ be the simplicial cohomology induced from a simplicial structure on $\widehat M$.
Then there is a natural restriction map $H^*(\widehat{\widetilde M};\pi_1(M),\mathbb R) \rightarrow H^*_{simp}(\widehat M,\mathbb R) \cong H^*(\widehat M,\mathbb R)$.
Thus we regard the cohomology class $\widehat D^*[\overline \Theta]$ as a cohomology class of $H^n(\widehat M,\mathbb R)$.
Let $[\widehat M]$ be the fundamental class in $H_n(\widehat M,\mathbb R)\cong \mathbb R$.
\begin{definition}[{\bf D4}] Let $D:\widetilde M \rightarrow \mathcal{X}$ be a $\rho$-equivariant continuous map which is extended to a $\rho$-equivariant map $\widehat D : \widehat{\widetilde{M}} \rightarrow \overline \mathcal{X}$. Then we define an invariant $\mathrm{Vol}_4(\rho,D)$ by
$$\mathrm{Vol}_4(\rho,D)=\langle \widehat D^*[\overline{\Theta}], [\widehat M] \rangle.$$
\end{definition}
As observed before, $\widehat D^*[\overline{\Theta}]$ may depend on the choice of $\rho$-equivariant map. However it turns out that the value $\mathrm{Vol}_4(\rho,D)$ is independent of the choice of $\rho$-equivariant continuous map as follows.
\begin{proposition}\label{prop:1}
Let $\rho :\pi_1(M) \rightarrow G$ be a representation. Then
$$\mathrm{Vol}_2(\rho)=\mathrm{Vol}_4(\rho,D).$$
\end{proposition}
\begin{proof}
Recall that the continuous bounded cohomology $H^*_{c,b}(G,\mathbb R)$ can be computed from the complex $C^*_{c,b}(\mathcal X,\mathbb R)_\mathrm{alt}$, and that there is a natural inclusion $C^*_{c,b}(\mathcal X,\mathbb R)_\mathrm{alt} \subset F^*_{\mathrm{alt},b}(\mathcal X,\mathbb R)$. Denote by $i_G : H^k_{c,b}(G,\mathbb R)\rightarrow H^k_b(\mathcal{X};G,\mathbb R)$ the homomorphism in cohomology induced from the inclusion. Clearly, $i_G([\Theta]_{c,b})=[\Theta]_b$.
The bounded cohomology $H^*_b(\pi_1(M),\mathbb R)$ is obtained by the cohomology of the complex $C^*_b(\widetilde M,\mathbb R)_\mathrm{alt}^{\pi_1(M)}$. Since $C^*_b(\widetilde M,\mathbb R)_\mathrm{alt}= F^*_{\mathrm{alt},b}(\widetilde M,\mathbb R)$, the induced map $i_M : H^k_b(\pi_1(M),\mathbb R) \rightarrow H^k_b(\widetilde M;\pi_1(M),\mathbb R)$ is the identity map.
Let $\widehat D : \widehat{\widetilde M}\rightarrow \overline{\mathcal X}$ be a $\rho$-equivariant map which maps $\widetilde M$ to $\mathcal X$. Then consider the following commutative diagram.
$$ \xymatrixcolsep{4pc}\xymatrix{
H^n(\overline \mathcal{X};G,\mathbb R) \ar[r]^-{\widehat D^*} & H^n(\widehat{\widetilde{M}};\pi_1(M),\mathbb R) \ar[rd]^-{\pi^*} \\
H^n_b(\overline{\mathcal{X}};G,\mathbb R) \ar[r]^-{\widehat D^*_b} \ar[d]^-{res_\mathcal{X}} \ar[u]_-{\bar c} &
H^n_b(\widehat{\widetilde{M}};\pi_1(M),\mathbb R) \ar[d]^-{res_M} \ar[rd]^-{\pi^*_b} \ar[u]_-{\hat c} & H^n(\overline M,\partial \overline M,\mathbb R)\\
H^n_b(\mathcal{X};G,\mathbb R) \ar[r]^-{D_b^*} &
H^n_b(\widetilde M;\pi_1(M),\mathbb R) &
H^n_b(\overline M,\partial \overline M,\mathbb R) \ar[l]_-{i^*_b} \ar[u]_-{c} \\
H^n_{c,b}(G,\mathbb R) \ar[u]_-{i_G} \ar[r]^-{\rho^*_b} & H^n_b(\pi_1(M),\mathbb R) \ar[u]_-{i_M}
}$$
where $\pi : \widetilde{\overline M} \rightarrow \widehat{\widetilde M}$ is the collapsing map. Note that the map $\rho^*_b$ at the bottom of the diagram is actually induced from the restriction map $D: \widetilde M \rightarrow \mathcal{X}$. However, it does not depend on the choice of equivariant map but only on the homomorphism $\rho$: any continuous equivariant map from $\widetilde M$ to $\mathcal{X}$ gives rise to the same map $\rho^*_b: H^*_{c,b}(G,\mathbb R) \rightarrow H^*_b(\pi_1(M),\mathbb R)$. For this reason, we denote it by $\rho^*_b$ instead of $D^*_{c,b}$.
Note that $\pi$ induces a map $\pi^* : F^*_\mathrm{alt}(\widehat{\widetilde M},\mathbb R) \rightarrow F^*_\mathrm{alt}(\widetilde{\overline M},\mathbb R)$. It follows from the alternating property that the image of $\pi^*$ is contained in $C^*(\overline M,\partial \overline M,\mathbb R)$. Hence the map $\pi^* : H^n(\widehat{\widetilde M};\pi_1(M),\mathbb R) \rightarrow H^n(\overline M,\partial \overline M,\mathbb R)$ makes sense. One can understand $\pi^*_b : H^n_b(\widehat{\widetilde M};\pi_1(M),\mathbb R) \rightarrow H^n_b(\overline M,\partial \overline M,\mathbb R)$ in a similar way.
Noting that $\bar c([\overline \Theta]_b)=[\overline \Theta]$ and $res_\mathcal{X}([\overline \Theta]_b)=[\Theta]_b$, it follows from the above commutative diagram that
{\setlength\arraycolsep{2pt}
\begin{eqnarray*}
((i^*_b)^{-1}\circ i_M \circ \rho_b^*)[\Theta]_{c,b} &=& ((i^*_b)^{-1}\circ D^*_b \circ i_G) [\Theta]_{c,b} \\
&=& ((i^*_b)^{-1}\circ D^*_b \circ res_\mathcal{X}) [\overline \Theta]_b\\
&=& ((i^*_b)^{-1}\circ res_M \circ \widehat D^*_b) [\overline \Theta]_b \\
&=& (\pi_b^* \circ \widehat D^*_b)[\overline \Theta]_b.
\end{eqnarray*}}
Hence
{\setlength\arraycolsep{2pt}
\begin{eqnarray*}
\mathrm{Vol}_2(\rho)&=& \langle (c \circ (i^*_b)^{-1}\circ i_M \circ \rho_b^*)[\Theta]_{c,b}, [\overline M, \partial \overline M] \rangle \\
&=& \langle (c \circ \pi_b^* \circ \widehat D^*_b)[\overline \Theta]_b, [\overline M, \partial \overline M] \rangle \\
&=& \langle (\pi^* \circ \widehat D^* \circ \bar c)[\overline \Theta]_b, [\overline M, \partial \overline M] \rangle \\
&=& \langle (\pi^* \circ \widehat D^*)[\overline \Theta], [\overline M, \partial \overline M] \rangle \\
&=& \langle \widehat D^*[\overline \Theta], \pi_* [\overline M, \partial \overline M] \rangle \\
&=& \langle \widehat D^*[\overline \Theta], [\widehat M] \rangle \\
&=& \mathrm{Vol}_4(\rho,D).
\end{eqnarray*}}
This completes the proof.
\end{proof}
Proposition \ref{prop:1} implies that the value $\mathrm{Vol}_4(\rho,D)$ does not depend on the choice of continuous equivariant map. Hence from now on we use the notation $\mathrm{Vol}_4(\rho):=\mathrm{Vol}_4(\rho,D)$. Furthermore,
Proposition \ref{prop:1} allows us to interpret the invariant $\mathrm{Vol}_2(\rho)$ in terms of a pseudo-developing map via $\mathrm{Vol}_4(\rho)$ in the next section. Note that a pseudo-developing map for $\rho$ is a specific kind of $\rho$-equivariant continuous map $\widehat{\widetilde{M}} \rightarrow \overline \mathcal{X}$.
\section{Pseudo-developing map and Definition {\bf D1}}\label{sec:pseudo}
Dunfield \cite{Du99} introduced the notion of pseudo-developing map in order to define the volume of representations $\rho : \pi_1(M)\rightarrow \mathrm{SO}(3,1)$ for a noncompact complete hyperbolic $3$-manifold $M$ of finite volume. We start by recalling the definition of pseudo-developing map.
\begin{definition}[Cone map]
Let $\mathcal A$ be a set, $t_0 \in \mathbb R$, and let $Cone(\mathcal A)$ be the cone obtained from $\mathcal A\times [t_0,\infty]$ by collapsing $\mathcal A \times \{\infty\}$ to a point, called $\infty$. A map $\widehat D:Cone(\mathcal A) \rightarrow \overline \mathcal{X}$ is a \emph{cone map} if $\widehat D (Cone(\mathcal A))\cap \partial \mathcal{X} =\{\widehat D(\infty)\}$ and, for all $a \in \mathcal A$, the map $\widehat D|_{a\times [t_0,\infty]}$ is either constant, equal to $\widehat D(\infty)$, or the geodesic ray from $\widehat D(a,t_0)$ to $\widehat D(\infty)$, parametrized in such a way that the parameter $(t-t_0)$, $t\in [t_0,\infty]$, is the arc length.
\end{definition}
For each ideal point $v$ of $M$, fix a product structure $T_v \times [0,\infty)$ on the end relative to $v$. The fixed product structure induces a cone structure on a neighborhood of $v$ in $\widehat M$, which is obtained from $T_v \times [0,\infty]$ by collapsing $T_v \times \{\infty\}$ to a point $v$. We lift such structures to the universal cover.
Let $\tilde v$ be an ideal point of $\widetilde M$ that projects to the ideal point $v$. Denote by $E_{\tilde v}$ the cone at $\tilde v$ that is homeomorphic to $P_{\tilde v} \times [0,\infty]$, where $P_{\tilde v}$ covers $T_v$ and $P_{\tilde v} \times \{\infty\}$ is collapsed to $\tilde v$.
\begin{definition}[Pseudo-developing map]\label{def:3.2}
Let $\rho : \pi_1(M) \rightarrow G$ be a representation. A \emph{pseudo-developing map} for $\rho$ is a piecewise smooth $\rho$-equivariant map $D : \widetilde M \rightarrow \mathcal{X}$. Moreover $D$ is required to extend to a continuous map
$\widehat D: \widehat{\widetilde{M}} \rightarrow \overline \mathcal{X}$ with the property that there exists $t \in \mathbb R^+$ such that for each end $E_{\tilde v}=P_{\tilde v} \times [0,\infty]$ of $\widehat{\widetilde{M}}$, the restriction of $\widehat D$ to $P_{\tilde v} \times [t,\infty]$ is a cone map.
\end{definition}
\begin{definition}
A \emph{triangulation} of $\widehat M$ is an identification of $\widehat M$ with a complex obtained by gluing simplices together via simplicial attaching maps. The complex is not required to be simplicial, but open simplices are required to embed.
\end{definition}
Note that a triangulation of $\widehat M$ always exists and it lifts uniquely to a triangulation of $\widehat{\widetilde M}$.
Given a triangulation of $\widehat M$, one can define the straightening of pseudo-developing maps as follows.
\begin{definition}[Straightening map]
Let $\widehat M$ be triangulated.
Let $\rho:\pi_1(M)\rightarrow G$ be a representation and $D :\widetilde{M} \rightarrow \mathcal{X}$ a pseudo-developing map for $\rho$.
A straightening of $D$ is a continuous piecewise smooth $\rho$-equivariant map $Str( D):\widehat{\widetilde{M}} \rightarrow \overline \mathcal{X}$ such that
\begin{itemize}
\item for each simplex $\sigma$ of the triangulation, $Str( D)$ maps $\widetilde \sigma$ to $Str( D \circ \widetilde \sigma)$,
\item for each end $E_{\tilde v}=P_{\tilde v}\times [0,\infty]$ there exists $t\in \mathbb R$ such that $Str( D)$ restricted to $P_{\tilde v} \times [t,\infty]$ is a cone map.
\end{itemize}
where $\widetilde \sigma$ is a lift of $\sigma$ to $\widehat{\widetilde M}$ and $Str(D \circ \widetilde \sigma)$ is the geodesic straightening of $ D\circ \widetilde \sigma : \Delta^n \rightarrow \overline \mathcal{X}$.
\end{definition}
Note that any straightening of a pseudo-developing map is also a pseudo-developing map.
\begin{lemma}
Let $\widehat M$ be triangulated. Let $\rho :\pi_1(M) \rightarrow G$ be a representation and $ D:\widetilde{M} \rightarrow \mathcal{X}$ a pseudo-developing map for $\rho$. Then a straightening $Str(D)$ of $D$ exists and furthermore, $Str(D) : \widehat{\widetilde{M}} \rightarrow \overline \mathcal{X}$ is always equivariantly homotopic to $\widehat D$ via a homotopy that fixes the vertices of the triangulation.
\end{lemma}
\begin{proof}
First, set $Str(D)(V)=\widehat D(V)$ for every vertex $V$ of the triangulation. Then extend $Str(D)$ to a map that is piecewise straight with respect to the triangulation. This is always possible because $\mathcal{X}$ is contractible.
Note that $\widehat D$ and $Str(D)$ agree on the ideal vertices of $\widehat{\widetilde{M}}$ and are equivariantly homotopic via the straight line homotopy between them. Hence it can be easily seen that the extension is a straightening of $D$.
\end{proof}
For any pseudo-developing map $D:{\widetilde{M}} \rightarrow \mathcal{X}$ for $\rho$,
$$\int_M D^*\omega_\mathcal{X}$$ is always finite. This can be seen as follows.
We stick to the notations used in Definition \ref{def:3.2}. We may assume that the restriction of $\widehat D$ to each $E_{\tilde v}=P_{\tilde v} \times [0,\infty]$ is a cone map. Choose a fundamental domain $F_0$ of $T_{v}$ in $P_{\tilde v}$.
Then, there exists $t\in \mathbb R^+$ such that $$ \left|\int_{T_v \times [t,\infty)} D^*\omega_\mathcal{X} \right| =\mathrm{Vol}_n (\mathrm{Cone}(D( F_0 \times \{t\}))) \leq \frac{1}{n-1}\mathrm{Vol}_{n-1}(D(F_0\times \{t \}))$$ where $\mathrm{Vol}_{n-1}$ denotes the $(n-1)$-dimensional volume.
The last inequality holds for any Hadamard manifold with sectional curvature at most $-1$. See \cite[Section 1.2]{Gro82}.
Hence the integral of $D^*\omega_\mathcal{X}$ over $M$ is finite.
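The displayed inequality can be justified by the following comparison sketch (assuming the standard Jacobi field estimate in sectional curvature at most $-1$). Parametrize the cone by flowing $D(F_0 \times \{t\})$ along the geodesic rays asymptotic to the cone point at infinity; after time $s$, each of the $n-1$ directions transverse to the rays is contracted at least by a factor $e^{-s}$, so
$$\mathrm{Vol}_n (\mathrm{Cone}(D( F_0 \times \{t\}))) \leq \int_0^\infty e^{-(n-1)s}\, \mathrm{Vol}_{n-1}(D(F_0\times \{t \}))\, ds = \frac{1}{n-1}\,\mathrm{Vol}_{n-1}(D(F_0\times \{t \})).$$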
\begin{definition}[{\bf D1}]
Let $D:{\widetilde{M}} \rightarrow \mathcal{X}$ be a pseudo-developing map for a representation $\rho : \pi_1(M) \rightarrow G$.
Define an invariant $\mathrm{Vol}_1(\rho,D)$ by
$$\mathrm{Vol}_1(\rho,D)=\int_M D^*\omega_\mathcal{X}. $$
\end{definition}
In the case that $G=\mathrm{SO}(n,1)$, Francaviglia \cite{Fr04} showed that the definition {\bf D1} does not depend on the choice of pseudo-developing map. We give a self-contained proof of this in the rank $1$ case.
\begin{proposition}\label{prop:14equi}
Let $\rho : \pi_1(M) \rightarrow G$ be a representation. Then for any pseudo-developing map $D :\widetilde M \rightarrow \mathcal{X}$, $$\mathrm{Vol}_1(\rho,D)=\mathrm{Vol}_4(\rho).$$
Thus, $\mathrm{Vol}_1(\rho,D)$ does not depend on the choice of pseudo-developing map.
\end{proposition}
\begin{proof}
Let $\mathcal T$ be a triangulation of $\widehat M$ with simplices $\sigma_1, \ldots, \sigma_N$. Then the triangulation gives rise to a fundamental cycle $\sum_{i=1}^N \sigma_i$ of $\widehat M$. Let $Str(D)$ be a straightening of $D$ with respect to the triangulation $\mathcal T$.
Since $Str(D)$ is a $\rho$-equivariant continuous map,
we have
\begin{eqnarray*}
\mathrm{Vol}_4(\rho):=\mathrm{Vol}_4(\rho,D)&=&\langle Str(D)^*[\overline \Theta], [\widehat M] \rangle
= \langle \overline \Theta, \sum_{i=1}^N Str(\widehat D(\sigma_i))\rangle \\
&=& \sum_{i=1}^N \int_{Str(\widehat D(\sigma_i))} \omega_\mathcal{X}
= \int_M Str(D)^*\omega_\mathcal{X}.
\end{eqnarray*}
Since both $Str(D)$ and $\widehat D$ are pseudo-developing maps for $\rho$ that agree on the ideal points of $\widehat{\widetilde{M}}$, it can be proved, using the same arguments as in the proof of \cite[Lemma 2.5.1]{Du99}, that $$\int_M Str(D)^*\omega_\mathcal{X} =\int_M D^*\omega_\mathcal{X}=\mathrm{Vol}_1(\rho,D).$$
Finally we obtain the desired equality.
\end{proof}
\begin{remark}
While {\bf D1} is defined only for pseudo-developing maps, the definition {\bf D4} works with any equivariant map. This is one advantage of the definition {\bf D4}. By Proposition \ref{prop:14equi}, the notation $\mathrm{Vol}_1(\rho):=\mathrm{Vol}_1(\rho,D)$ makes sense.
\end{remark}
\section{Lipschitz simplicial volume and Definition {\bf D3}}\label{sec:lipschitz}
In this section, $M$ is assumed to be a Riemannian manifold with finite Lipschitz simplicial volume.
Gromov \cite[Section 4.4]{Gro82} introduced the Lipschitz simplicial volume of Riemannian manifolds. One can define the Lipschitz constant of a singular simplex in $M$ by endowing the standard simplices with the Euclidean metric. The Lipschitz constant of a locally finite chain $c$ of $M$ is then defined as the supremum of the Lipschitz constants of all singular simplices occurring in $c$. The Lipschitz simplicial volume of $M$ is defined as the infimum of the $\ell^1$-norms of all locally finite fundamental cycles with finite Lipschitz constant. Let $[M]_\mathrm{Lip}^{\ell^1}$ be the set of all locally finite fundamental cycles of $M$ with finite $\ell^1$-seminorm and finite Lipschitz constant. If $[M]_\mathrm{Lip}^{\ell^1}=\emptyset$, the Lipschitz simplicial volume of $M$ is infinite.
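In symbols (the notation $\| M \|_\mathrm{Lip}$ is introduced here only for emphasis), the definition reads
$$\| M \|_\mathrm{Lip} = \inf \left\{ \| c \|_1 \ : \ c \in [M]^{\ell^1}_\mathrm{Lip} \right\}, \qquad \| c \|_1 = \sum_{i=1}^\infty |a_i| \ \text{ for } \ c = \sum_{i=1}^\infty a_i \sigma_i,$$
with the convention $\inf \emptyset = \infty$.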
In the case that $[M]_\mathrm{Lip}^{\ell^1} \neq \emptyset$, S. Kim and I. Kim \cite{KK14} gave a new definition of the volume of representations as follows.
Given a representation $\rho : \pi_1(M) \rightarrow G$, $\rho$ induces a canonical pullback map
$\rho^*_b : H^*_{c,b}(G,\mathbb{R}) \rightarrow H^*_b(\pi_1(M),\mathbb{R})\cong H^*_b(M,\mathbb R)$ in continuous bounded cohomology.
Hence for any continuous bounded volume class $\omega_b \in H^n_{c,b}(G,\mathbb R)$, we obtain a bounded cohomology class $\rho^*_b(\omega_b)\in H^n_b(M,\mathbb{R})$.
Then, the bounded cohomology class $\rho^*_b(\omega_b)$ can be evaluated on $\ell^1$-homology classes in $H^{\ell^1}_n(M,\mathbb{R})$ by the Kronecker products
$$ \langle\cdot ,\cdot \rangle : H^*_b(M,\mathbb{R}) \otimes H^{\ell^1}_*(M,\mathbb{R}) \rightarrow \mathbb{R}.$$
For more detail about this, see \cite{KK14}.
\begin{definition}[{\bf D3}]
We define an invariant $\mathrm{Vol}_3(\rho)$ of $\rho$ by
$$\mathrm{Vol}_3(\rho) = \inf \langle \rho^*_b(\omega_b), \alpha \rangle$$
where the infimum is taken over all $\alpha\in [M]^{\ell^1}_\mathrm{Lip}$ and all $\omega_b \in H^n_{c,b}(G,\mathbb R)$ with $c(\omega_b)=\omega_{\mathcal X}$.
\end{definition}
One advantage of {\bf D3} is that it does not require the isomorphism $H^n_b(\overline M,\partial \overline M,\mathbb R) \rightarrow H^n_b(\overline M,\mathbb R)$.
When $M$ admits the isomorphism above, we will verify that the definition {\bf D3} is equivalent to the other definitions of volume of representations.
\begin{lemma}\label{lem:indep}
Suppose that $M$ is a noncompact, connected, orientable, aspherical, tame Riemannian manifold with finite Lipschitz simplicial volume and that each end of $M$ has amenable fundamental group.
Then for any $\alpha \in [M]^{\ell^1}_\mathrm{Lip}$ and any continuous bounded volume class $\omega_b$, $$\langle \rho_b^* (\omega_b), \alpha \rangle = \langle (c\circ (i^*_b)^{-1} \circ \rho^*_b)(\omega_b), [\overline M,\partial \overline M] \rangle.$$
\end{lemma}
\begin{proof}
When $M$ is a $2$-dimensional manifold, the proof is given in \cite{KK14}. The proof in the general case is the same.
We here sketch the proof for the reader's convenience.
Let $K$ be a compact core of $M$. Note that $K$ is a compact submanifold with boundary that is a deformation retract of $M$. Consider the following commutative diagram,
$$ \xymatrixcolsep{2pc}\xymatrix{
C^*_b(M,\mathbb{R}) &
C^*_b(\overline M,\mathbb{R}) \ar[l]_-{j_b^*} &
C^*_b(\overline M,\partial \overline M,\mathbb{R}) \ar[l]_-{i^*_b} \\
& C^*_b(\overline M, \overline M-K, \mathbb{R}) \ar[u]^-{l_b^*} \ar[ru]_-{q_b^*} & }$$
where every map in the above diagram is the map induced from the canonical inclusion.
Every map in the diagram induces an isomorphism in bounded cohomology in degrees $*\geq 2$.
Thus, there exists a cocycle $z_b \in C^n_b(\overline M,\overline M-K,\mathbb{R})$ such that
$l^*_b([z_b]) = \rho^*_b(\omega_b)$.
Let $c=\sum_{i=1}^\infty a_i \sigma_i$ be a locally finite fundamental $\ell^1$-cycle with finite Lipschitz constant representing $\alpha \in [M]^{\ell^1}_\mathrm{Lip}$. Then, we have
$$\langle \rho^*_b(\omega_b),\alpha \rangle = \langle l_b^* ([z_b]), \alpha \rangle = \langle z_b, c \rangle.$$
Since $z_b$ vanishes on simplices with image contained in $\overline M-K$, we have $\langle z_b, c \rangle =\langle z_b, c|_K \rangle$
where $c|_K=\sum_{\mathrm{im}\sigma_i \cap K \neq \emptyset} a_i \sigma_i$. It is a standard fact that $c|_K$ represents the relative fundamental class $[\overline M,\overline M-K]$ in $H_n(\overline M, \overline M-K,\mathbb{R})$ (see \cite[Theorem 5.3]{Loh07}).
On the other hand, we have
\begin{eqnarray*}
\langle (c \circ (i^*_b)^{-1} \circ \rho_b^*)(\omega_b), [\overline M,\partial \overline M] \rangle &=& \langle (c\circ q^*_b)([z_b]), [\overline M,\partial \overline M] \rangle \\
&=& \langle [z_b], q_*[\overline M,\partial \overline M] \rangle \\
&=& \langle [z_b], [\overline M,\overline M-K] \rangle=\langle z_b,c|_K \rangle .
\end{eqnarray*}
Therefore, we finally get the desired identity.
\end{proof}
By Lemma \ref{lem:indep} we can reformulate the definition {\bf D3} as follows.
$$ \mathrm{Vol}_3(\rho)= \inf_{\omega_b} \langle (c\circ (i^*_b)^{-1} \circ \rho_b^*)(\omega_b), [\overline M,\partial \overline M] \rangle$$
where the infimum is taken over all continuous bounded volume classes. Noting that $[\Theta]_{c,b} \in H^n_{c,b}(G,\mathbb R)$ is a continuous bounded volume class, it is clear that $$\mathrm{Vol}_3(\rho)\leq \mathrm{Vol}_2(\rho).$$
It is conjectured that the comparison map $H^n_{c,b}(G,\mathbb R) \rightarrow H^n_c(G,\mathbb R)$ is an isomorphism for any connected semisimple Lie group $G$ with finite center; this would immediately imply $\mathrm{Vol}_2(\rho)=\mathrm{Vol}_3(\rho)$. Although the conjecture remains open, we will prove that $\mathrm{Vol}_2(\rho)=\mathrm{Vol}_3(\rho)$ by using the definition {\bf D4}.
\begin{lemma}\label{lem:extend}
Let $\omega_b \in H^n_{c,b}(G,\mathbb R)$ be a continuous bounded volume class. Let $f_b :\mathcal X^{n+1} \rightarrow \mathbb R$ be a continuous bounded alternating $G$-invariant cocycle representing $\omega_b$. Then $f_b$ extends to a bounded alternating $G$-invariant cocycle $\bar f_b : \overline{\mathcal X}^{n+1} \rightarrow \mathbb R$. Furthermore, $\bar f_b$ is uniformly continuous on $\mathcal{X}^n \times \{\xi\}$ for any $\xi \in \partial \mathcal{X}$.
\end{lemma}
\begin{proof}
For any $(\bar x_0, \ldots, \bar x_n) \in \overline{\mathcal X}^{n+1}$, define
$$\bar f_b(\bar x_0,\ldots, \bar x_n) = \lim_{t\rightarrow \infty} f_b(c_0(t),\ldots,c_n(t)),$$
where each $c_i(t)$ is a geodesic ray toward $\bar x_i$. Here, for $x \in \mathcal{X}$, we say that $c : [0,\infty) \rightarrow \mathcal{X}$ is a geodesic ray toward $x$ if there exists $t\in [0,\infty)$ such that the restriction $c|_{[0,t]}$ of $c$ to $[0,t]$ is a geodesic with $c(t)=x$ and $c|_{[t,\infty)}$ is constant, equal to $x$.
Then it is clear that $\bar f_b(x_0,\ldots,x_n)=f_b(x_0,\ldots, x_n)$ for $(x_0,\ldots,x_n) \in \mathcal X^{n+1}$.
To see the well-definedness of $\bar f_b$, we need to show that for other geodesic rays $c_i'(t)$ toward $\bar x_i$,
\begin{eqnarray}\label{eqn:welldefine} \lim_{t\rightarrow \infty} f_b(c_0(t),\ldots,c_n(t))=\lim_{t\rightarrow \infty} f_b(c_0'(t),\ldots,c_n'(t)).\end{eqnarray}
Note that the limit always exists because $f_b$ is bounded.
In the rank $1$ case, the distance between two geodesic rays with the same endpoint decays exponentially to $0$ as they approach the endpoint. Moreover, since $f_b$ is $G$-invariant and $G$ acts transitively on $\mathcal{X}$, $f_b$ is uniformly continuous on $\mathcal{X}^{n+1}$. Thus, for any $\epsilon>0$ there exists some number $T>0$ such that
$$ | f_b(c_0(t),\ldots,c_n(t)) - f_b(c_0'(t),\ldots,c_n'(t)) | <\epsilon$$
for all $t>T$. This implies (\ref{eqn:welldefine}) and hence $\bar f_b$ is well-defined.
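The estimate above can be made explicit as follows (a sketch, using a modulus of continuity $\omega$ for $f_b$, that is, $|f_b(x_0,\ldots,x_n)-f_b(x_0',\ldots,x_n')| \leq \omega(\max_i d(x_i,x_i'))$ with $\omega(\delta)\rightarrow 0$ as $\delta \rightarrow 0$, which exists by uniform continuity). Replacing one coordinate at a time,
$$\left| f_b(c_0(t),\ldots,c_n(t)) - f_b(c_0'(t),\ldots,c_n'(t)) \right| \leq \sum_{i=0}^n \omega\big(d(c_i(t),c_i'(t))\big),$$
and each $d(c_i(t),c_i'(t))$ tends to $0$ as $t \rightarrow \infty$ by the exponential decay of the distance between rays with the same endpoint.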
The alternating property of $\bar f_b$ is inherited from that of $f_b$: using that $f_b$ is alternating, we have \begin{eqnarray*}
\bar f_b(\bar x_0, \ldots, \bar x_i,\ldots, \bar x_j,\ldots, \bar x_n) &=& \lim_{t\rightarrow \infty} f_b(c_0(t),\ldots,c_i(t),\ldots,c_j(t),\ldots, c_n(t)) \\
&=&\lim_{t\rightarrow \infty} -f_b(c_0(t),\ldots,c_j(t),\ldots,c_i(t),\ldots, c_n(t))\\
&=&-\bar f_b(\bar x_0,\ldots, \bar x_j,\ldots, \bar x_i,\ldots, \bar x_n).
\end{eqnarray*}
Therefore we conclude that $\bar f_b$ is alternating.
The boundedness and $G$-invariance of $\bar f_b$ follow immediately from the boundedness and $G$-invariance of $f_b$. Furthermore, a direct computation shows that $\bar f_b$ is a cocycle.
Now it remains to prove that $\bar f_b$ is uniformly continuous on $\mathcal{X}^n\times \{\xi\}$. It is obvious that $\bar f_b$ is continuous on $\mathcal{X}^n\times \{\xi\}$. Noting that the parabolic subgroup of $G$ stabilizing $\xi$ acts transitively on $\mathcal{X}$, it can be easily seen that $\bar f_b$ is uniformly continuous on $\mathcal{X}^n\times \{\xi\}$.
\end{proof}
The existence of $\bar f_b$ allows us to reformulate $\mathrm{Vol}_3$ in terms of $\mathrm{Vol}_4$.
Following the proof of Proposition \ref{prop:1}, we get
\begin{eqnarray}\label{eqn:A} \langle (c\circ (i^*_b)^{-1} \circ \rho_b^*)(\omega_b), [\overline M,\partial \overline M] \rangle = \langle \widehat D^* [\bar f_b], [\widehat M] \rangle \end{eqnarray}
The last term $\langle \widehat D^* [\bar f_b], [\widehat M] \rangle$ above can be computed as $\langle \widehat D^*\bar f_b, \widehat c \rangle$ for any equivariant map $\widehat D$ and any fundamental cycle $\widehat c$ of $\widehat M$. By choosing a suitable equivariant map and fundamental cycle, we will show that $\langle \widehat D^* [\bar f_b], [\widehat M] \rangle$ does not depend on the choice of continuous bounded volume class.
\begin{proposition}\label{lem:indepwb}
Let $\omega_b$ and $\omega_b'$ be continuous bounded volume classes. Let $\bar f_b$ and $\bar f_b'$ be the bounded alternating $G$-invariant cocycles in $F^n_{\mathrm{alt},b}(\overline \mathcal{X},\mathbb R)^G$ associated with $\omega_b$ and $\omega_b'$, respectively, as in Lemma \ref{lem:extend}. Then
$$\langle \widehat D^* [\bar f_b], [\widehat M] \rangle=\langle \widehat D^* [\bar f_b'], [\widehat M] \rangle.$$
\end{proposition}
\begin{proof}
It suffices to prove that for some $\rho$-equivariant map $\widehat D :\widehat{\widetilde M} \rightarrow \overline{\mathcal{X}}$ and fundamental cycle $\widehat c$ of $\widehat M$,
$$\langle \widehat D^*\bar f_b, \widehat c\rangle = \langle \widehat D^*\bar f_b', \widehat c\rangle.$$
To show this, we will prove that for some sequence $(\widehat c_k)_{k\in \mathbb N}$ of fundamental cycles of $\widehat M$ $$\lim_{k\rightarrow \infty} \left( \langle \widehat D^*\bar f_b, \widehat c_k \rangle - \langle \widehat D^*\bar f_b', \widehat c_k\rangle \right)=0.$$
Let $v_1,\ldots,v_s$ be the ideal points of $M$.
As in Section \ref{sec:pseudo}, fix a product structure $T_{v_i} \times [0,\infty]$ on the end relative to $v_i$ for each $i=1,\ldots,s$ and then lift such structures to the universal cover. We stick to the notations used in Section \ref{sec:pseudo}.
Set $$M_k = M-\cup_{i=1}^s T_{v_i} \times (k,\infty].$$
Then $(M_k)_{k\in \mathbb N}$ is an exhausting sequence of compact cores of $M$. The boundary $\partial M_k$ of $M_k$ consists of $\cup_{i=1}^s T_{v_i} \times \{k\}$. Let $\mathcal T_0$ be a triangulation of $M_0$. Then we extend it to a triangulation on $\widehat M$ as follows. First note that $\mathcal T_0$ induces a triangulation on each $T_{v_i}$.
Let $\tau$ be an $(n-1)$-simplex of the induced triangulation on $T_{v_i}$ for some $i \in \{1,\ldots,s\}$.
Then we attach $\pi(\tau \times [0,\infty])$ to $T_{v_i}\times\{0\}$ along $\tau \times \{0\}$, where $\pi :\overline M \rightarrow \widehat M$ is the collapsing map.
Since $\pi$ is an embedding on $\tau \times [0,\infty)$ and $\pi$ maps $\tau \times \{\infty\}$ to the ideal point $v_i$,
it can be easily seen that $cone(\tau):=\pi(\tau \times [0,\infty])$ is an $n$-simplex. Hence we can obtain a triangulation of $\widehat M$ by attaching each $cone(\tau)$ to $\partial M_0$, which is denoted by $\widehat{\mathcal T_0}$.
Next, we extend $\mathcal T_0$ to a triangulation of $M_k$. In fact, $M_k$ is decomposed as follows. $$M_k = M_0 \cup \bigcup_{i=1}^s T_{v_i} \times [0,k].$$ Hence attach each $\tau \times [0,k]$ to $M_0$ along $\tau \times\{0\}$ and then triangulate $\tau \times [0,k]$ by using the prism operator \cite[Chapter 2.1]{Hatcher}. Via this process, we obtain a triangulation of $M_k$, denoted by $\mathcal T_k$. Note that $\mathcal T_0$ and $\mathcal T_k$ induce the same triangulation on each $T_{v_i}$. In addition, one can obtain a triangulation $\widehat{\mathcal T_k}$ of $\widehat M$ from $\mathcal T_k$ in a similar way that $\widehat{\mathcal T_0}$ is obtained from $\mathcal T_0$ as above.
Let $c_k$ be the relative fundamental class of $(M_k,\partial M_k)$ induced from $\mathcal T_k$. Then it can be seen that
$$\widehat c_k = c_k + (-1)^{n+1}cone(\partial c_k)$$
is the fundamental cycle of $\widehat M$ induced from $\widehat{\mathcal T_k}$. Any simplex occurring in $c_k$ is contained in $M_k$.
Now we choose a pseudo-developing map $\widehat D :\widehat{\widetilde M} \rightarrow \overline X$.
Let $\tilde v_i$ be a lift of $v_i$ to $\widehat{\widetilde M}$. Let $P_{\tilde v_i} \times [0,\infty]$ be the cone structure of a neighborhood of $\tilde v_i$ where $P_{\tilde v_i} $ covers $T_{v_i}$ and $P_{\tilde v_i}\times \{\infty\}$ is just the ideal point $\tilde v_i$. We may assume that $\widehat D$ is a cone map on each $P_{\tilde v_i} \times [0,\infty]$. Let $\tilde c_k$ be a lift of $c_k$ to a chain in $\widetilde M$ and $\widetilde{\partial c_k}$ be a lift of $\partial c_k$. Let $\tau \times \{0\}$ be an $(n-1)$-simplex in $T_{v_i} \times \{0\}$ occurring in $\partial c_0$ and
$\tilde \tau$ be a lift of $\tau$ to $P_{\tilde v_i}$. Then $\tilde \tau \times \{k\}$ is a lift of $\tau \times \{k\} \in \partial c_k$. Since $\widehat D$ is a cone map on $P_{\tilde v_i} \times [0,\infty]$, $D(\tilde \tau \times [0,\infty])$ is the geodesic cone over $\tilde \tau \times \{0\}$ with top point $\tilde v_i$ in $\overline{\mathcal{X}}$. Hence the diameter of $D(\tilde \tau \times \{k\})$ decays exponentially to $0$ as $k \rightarrow \infty$ for each $\tau$.
By a direct computation, we have
\begin{eqnarray*}
\langle \widehat D^*\bar f_b - \widehat D^*\bar f_b' , \widehat c_k\rangle &=& \langle \widehat D^*\bar f_b - \widehat D^*\bar f_b', \tilde c_k \rangle + (-1)^{n+1}\langle \widehat D^*\bar f_b - \widehat D^*\bar f_b', cone(\widetilde{\partial c_k}) \rangle \\
&=& \langle \bar f_b - \bar f_b', \widehat D_*(\tilde c_k) \rangle + (-1)^{n+1}\langle \bar f_b - \bar f_b', \widehat D_*(cone(\widetilde{\partial c_k}))\rangle \\
&=& \langle f_b - f_b', D_*(\tilde c_k) \rangle + (-1)^{n+1}\langle \bar f_b - \bar f_b', \widehat D_*(cone(\widetilde{\partial c_k}))\rangle
\end{eqnarray*}
The last equality comes from the fact that $\widehat D_*(\tilde c_k)$ is a singular chain in $\mathcal{X}$. Since $f_b$ and $f_b'$ are continuous bounded alternating cocycles representing the continuous volume class $\omega_{\mathcal X} \in H^n_c(G,\mathbb R)$, there is a continuous alternating $G$-invariant function $\beta : \mathcal{X}^n \rightarrow \mathbb R$ such that $f_b -f_b' =\delta \beta$. Hence
$$\langle f_b - f_b', D_*(\tilde c_k) \rangle = \langle \delta \beta, D_*(\tilde c_k) \rangle = \langle \beta, \partial D_*(\tilde c_k) \rangle = \langle \beta, D_*(\widetilde{\partial c_k}) \rangle.$$
As observed before, since the diameter of all simplices occurring in $D_*(\widetilde{\partial c_k})$ decays to $0$ as $k \rightarrow \infty$ and moreover $\beta$ is uniformly continuous on $\mathcal{X}$, we have $$\lim_{k\rightarrow \infty} \langle \beta, D_*(\widetilde{\partial c_k}) \rangle =0$$
Note that $D(cone(\tilde \tau \times \{k\}))$ is the geodesic cone over $D(\tilde \tau \times \{k\})$ with top point $\tilde v_i$. By Lemma \ref{lem:extend}, both $\bar f_b$ and $\bar f_b'$ are uniformly continuous on $\mathcal{X}^n \times \{\tilde v_i\}$.
Since the diameter of $D(\tilde \tau \times \{k\})$ decays to $0$ as $k\rightarrow \infty$,
$$\lim_{k \rightarrow \infty} \langle \bar f_b, D(cone(\tilde \tau \times \{k\})) \rangle =\lim_{k \rightarrow \infty} \langle \bar f_b', D(cone(\tilde \tau \times \{k\})) \rangle = 0.$$
Applying this to each $\tau$, we can conclude that
$$\lim_{k \rightarrow \infty} \langle \bar f_b,D_*(cone(\widetilde{\partial c_k})) \rangle =\lim_{k \rightarrow \infty} \langle \bar f_b', D_*(cone(\widetilde{\partial c_k}))\rangle =0.$$
In the end, it follows that $$\lim_{k\rightarrow \infty} \langle \widehat D^*\bar f_b - \widehat D^*\bar f_b' , \widehat c_k \rangle=0.$$
As we mentioned, the value $\langle \widehat D^*\bar f_b - \widehat D^*\bar f_b' , \widehat c_k \rangle$ does not depend on the choice of fundamental cycle, so it is constant in $k$. Since its limit is $0$, we conclude that $\langle \widehat D^*\bar f_b - \widehat D^*\bar f_b' , \widehat c_k \rangle=0$ for every $k$.
This implies that $\langle \widehat D^*\bar f_b, \widehat c \rangle = \langle \widehat D^*\bar f_b', \widehat c\rangle$ for any fundamental cycle $\widehat c$ of $\widehat M$, which completes the proof.
\end{proof}
Combining Proposition \ref{lem:indepwb} with (\ref{eqn:A}), Proposition \ref{prop:indepwb} immediately follows.
\begin{proposition}
The definitions of {\bf D3} and {\bf D4} are equivalent.
\end{proposition}
\begin{proof}
By Lemma \ref{lem:indep} and Proposition \ref{prop:1}, we have
\begin{eqnarray*}
\mathrm{Vol}_3(\rho) &=& \inf \{ \langle \rho^*_b(\omega_b),\alpha \rangle \ | \ c(\omega_b)=\omega_{\mathcal{X}} \text{ and } \alpha\in [M]_\mathrm{Lip}^{\ell^1} \} \\
&=& \inf \{ \langle (c\circ (i^*_b)^{-1} \circ \rho^*_b)(\omega_b), [\overline M,\partial \overline M] \rangle \ | \ c(\omega_b)=\omega_\mathcal{X} \} \\
&=& \inf \{ \langle \widehat D^* [\bar f_b], [\widehat M] \rangle \ | \ c(\omega_b)=\omega_\mathcal{X} \} \\
&=& \langle \widehat D^* [\overline \Theta], [\widehat M] \rangle \\
&=& \mathrm{Vol}_4(\rho),
\end{eqnarray*}
which completes the proof.
\end{proof}
\section{Introduction}
A popular research problem in data mining is graph anomaly detection, which has applications in areas ranging from finance to power grid operation to detecting social trends \cite{noble2003graph,eberle2007anomaly,akoglu2015graph}. In this paper we explore using description
length for graph anomaly detection; that is, we encode the graph using
a lossless source coder, and use the resulting codelength as the decision
criterion. While minimum description length (MDL) has been used in
connection with graph anomaly detection, the application has only
been for model selection in time-series analysis. As far as we know, this paper
is the first to consider using description length directly for anomaly
detection.
Reference \cite{ChoiSzpankowski12} was the first paper to develop practical source coding
algorithms for graphs. To use source coding for description length analysis, the codelength has
to reflect the information in the graph, and the only information that \cite{ChoiSzpankowski12} really reflects is the edge probability $p$ (see the discussion later). This paper therefore develops new practical
(universal) source coding algorithms based on more informative statistics. This focus is different from that of other recent papers
in graph coding \cite{DelgoshaAnantharam17,LuczakSzpankowski17, LuczakSzpankowski17b, AsadiAbbeVerdu17}, which are aimed more at entropy analysis.
\subsection{Graphs}
The structure of a graph is defined by the set of \textbf{vertices}
(also called nodes) $\mathcal{V}$, and the set of \textbf{edges},
$\mathcal{E}$. Usually, the ordering of the vertices is irrelevant, and in that
case we call the graph \textbf{unlabeled}; we will only consider unweighted, unlabeled, undirected graphs in this paper.
A graph, $G(\mathcal{V},\mathcal{E})$, is often represented by the
\textbf{adjacency matrix}, $\mathbf{A}=[A_{ij}]$, a $|\mathcal{V}|\times|\mathcal{V}|$
matrix where $A_{ij}=1$ if $(i,j)\in\mathcal{E}$. The degree of a vertex is the number of edges emanating from the vertex.
The \textbf{degree distribution} is the collection of the degrees
of all the nodes in the graph and is an often-used statistic for differentiating between classes of random graphs such as Erd\"{o}s-R\'{e}nyi, Barab\'{a}si-Albert, or Watts-Strogatz graphs \cite{barabasi2016network}. There is a one-to-one correspondence between binary, symmetric matrices
and unweighted, undirected graphs, and coding of graphs is therefore equivalent to coding
binary, symmetric matrices.
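As a small illustration of this correspondence (our own sketch, not from the paper), the following Python builds the symmetric adjacency matrix of an undirected, unweighted graph and reads off the vertex degrees:

```python
# Sketch: adjacency matrix of an undirected, unlabeled, unweighted graph,
# and the degrees as row sums.

def adjacency_matrix(n, edges):
    """Symmetric 0/1 matrix A with A[i][j] = 1 iff (i, j) is an edge."""
    A = [[0] * n for _ in range(n)]
    for i, j in edges:
        A[i][j] = 1
        A[j][i] = 1
    return A

def degrees(A):
    """Degree of each vertex = number of ones in its row."""
    return [sum(row) for row in A]

# A 4-cycle: every vertex has degree 2.
A = adjacency_matrix(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
print(degrees(A))  # [2, 2, 2, 2]
```

The degree distribution is then simply the histogram of these row sums.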
\subsection{Description Length}
The description length of the data is the number of bits required
to describe the data exactly: the data is turned into a stream of
bits, and from this the data should be able to be recovered exactly
by a decoder. We are only concerned with the length of the encoding, i.e.,
the number of bits output by the encoder.
The central idea here is that the description length has some relationship
with the \textquotedbl{}meaning\textquotedbl{} of data. For example,
Rissanen considered \textquotedbl{}useful information\textquotedbl{}
in \cite{Rissanen86b}. More concretely, description length can be
used for data analysis. A traditional application, in particular in
terms of minimum description length (MDL) \cite{Rissanen83},
has been for model selection in data analysis.
The methodology
we will develop for graph coding can also be used for model selection
for more general data sets. However, we are more interested in description
length as a general data processing tool beyond simple model selection.
One example is atypicality, which is described in Section \ref{Anomaly.sec}.
A central principle of description length is the constraint that a
decoder should be able to reconstruct the original data from an (infinite)
stream of bits. One manifestation is of course the Kraft inequality
\cite{CoverBook}, but the principle is more general. Since most source
coding algorithms are sequential, decodability then means that the
decoder can only use past decoded information to decode future data.
For graphs, this is much more complicated to satisfy than for sequences. Decodability
now becomes an algorithmic constraint rather than a probabilistic
one, moving description length theory closer to Kolmogorov complexity
\cite{LiVitanyi,CoverBook}.
\section{\label{Coding.sec}Coding}
We will base graph coding on the adjacency matrix \textendash{} due
to symmetry, only the lower triangular part has to be coded. However,
usually the numbering of nodes is irrelevant. The resulting graph
modulo automorphisms is called the structure \cite{ChoiSzpankowski12}.
Using this in encoding can lead to a smaller codelength. Importantly,
for data analysis the structure is clearly more relevant, and the description
length should therefore be based on the structure.
\begin{table}[tbh]
\begin{centering}
\begin{tabular}{cccc}
1 & \textcolor{blue}{1} & \textcolor{red}{1} & $\cdots$\tabularnewline
1 & \textcolor{blue}{1} & \textcolor{red}{1} & $\cdots$\tabularnewline
1 & \textcolor{blue}{1} & \textcolor{red}{0} & $\cdots$\tabularnewline
1 & \textcolor{blue}{0} & \textcolor{cyan}{1} & $\cdots$\tabularnewline
1 & \textcolor{blue}{0} & \textcolor{cyan}{0} & $\cdots$\tabularnewline
0 & \textcolor{green}{1} & \textcolor{purple}{1} & $\cdots$\tabularnewline
0 & \textcolor{green}{1} & \textcolor{purple}{0} & $\cdots$\tabularnewline
0 & \textcolor{green}{0} & \textcolor{pink}{1} & $\cdots$\tabularnewline
0 & \textcolor{green}{0} & \textcolor{pink}{0} & $\cdots$\tabularnewline
\end{tabular}
\par\end{centering}
\caption{\label{Stein.tab}The first column has one group, the second two (blue/green),
the third four (red/cyan/purple/pink).}
\vspace{-0.3in}
\end{table}
The adjacency matrix is a binary matrix, and coding this is therefore
similar to the problem considered by Steinruecken in \cite{Steinruecken15},
on which we will base our coding. Steinruecken considered coding of
unordered iid sequences, which we will think of as a matrix. We can
state the approach more abstractly as follows: we first sort the rows
according to some criterion (e.g., lexicographically). The coding
is done on the sorted matrix, and only the sorted matrix is reproduced
(exactly) at the receiver. The trick is to sort in such a way that
coding of the sorted matrix is more efficient than coding the original
matrix. The procedure in \cite{Steinruecken15} is to first sort the
sequences lexicographically (with 1 coming before 0). We say that
the sequences are grouped: the first group is all sequences, the next
two groups are sequences that start with 1/0, which is then subgrouped
into sequences starting with 11/10/01/00, see Table \ref{Stein.tab}.
An efficient code is as follows: we first transmit the number of ones
in the first column (the first group). The next column is divided
into two groups: those rows that have 1 in the first column, and those
that have 0. We transmit the number of ones in each group. When the
sequences are iid, the number of ones is binomially distributed, which
can be used for encoding. We continue this way (with empty groups
not encoded).
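Our reading of this scheme can be sketched as follows for unordered iid binary sequences; the function names are ours, and an actual coder would drive an arithmetic coder with these binomial probabilities rather than summing ideal codelengths:

```python
import math

# Sketch of the grouped column code: rows are sorted lexicographically
# (1 before 0); column j splits the rows into groups sharing the same
# first j bits, and only the *count* of ones in each group is encoded,
# using a Binomial(s, p) coding distribution.  Empty groups are skipped.

def binom_pmf(k, s, p):
    return math.comb(s, k) * p**k * (1 - p)**(s - k)

def grouped_codelength(rows, p):
    """Total ideal codelength in bits for an iid Bernoulli(p) row model."""
    rows = sorted(rows, reverse=True)        # lexicographic, 1 before 0
    n_cols = len(rows[0])
    total = 0.0
    groups = [rows]                          # first group: all rows
    for j in range(n_cols):
        new_groups = []
        for g in groups:
            ones = [r for r in g if r[j] == 1]
            zeros = [r for r in g if r[j] == 0]
            total += -math.log2(binom_pmf(len(ones), len(g), p))
            new_groups += [x for x in (ones, zeros) if x]   # skip empty
        groups = new_groups
    return total

rows = [(1, 1), (1, 0), (0, 1), (0, 1)]
print(round(grouped_codelength(rows, 0.5), 2))  # ≈ 4.42 bits
```

For comparison, coding the same four 2-bit rows in order at $p=1/2$ would cost 8 bits; the saving comes from not encoding the row order.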
This approach can also be applied to adjacency matrices, with the
modification that when we permute the rows during sorting, we have
to do the same permutation of columns to preserve symmetry. This turns
out to be equivalent to the algorithm in \cite{ChoiSzpankowski12},
but describing it this way reveals that the approach in \cite{ChoiSzpankowski12}
is strongly aimed at Erd\"{o}s-R\'{e}nyi graphs. From a data analysis point of view
this is problematic. The only parameter the algorithm in \cite{ChoiSzpankowski12}
is sensitive to is the average node degree $\bar{k}$ (equivalently
$p$). Consider anomalous graph detection in terms of atypicality
(this is described in more detail in Section \ref{AnomolousGraph.sec}):
We compare the codelength of encoding the graph with a given learned
coder and a universal coder. Since the only parameter \cite{ChoiSzpankowski12}
is sensitive to is $p$, this corresponds to a hypothesis test of
$p=p_{0}$ versus $p\neq p_{0}$. This is not irrelevant, but it is
far from what we do with sequences, where we can test a given FSM
against the whole class of alternative FSMs. Thus, to be effective
for data analysis, we need much more effective coders. In the following
we will describe two such coders.
\subsection{\label{Degree.sec}Coding Using Degree Distribution}
Assume we know the degree distribution $P(k)$, either from a model,
from learning, or from the given graph. How can we take this into
account in coding? Consider coding of a given column of the sorted
adjacency matrix, as outlined above. Important here is what the decoder
already knows, from previous columns: it knows the number of ones
above the diagonal, it knows the number of groups $g$, and it knows
the size $s_{i}$ of each group; let $s=\sum_{i=1}^{g}s_{i}$. We
first encode the (total) degree of the node. Call the number of ones
above the diagonal $\bar{k}$. We can use the coding distribution
\begin{equation}
P(k|k\geq\bar{k})=\frac{P(k)}{\sum_{j=\bar{k}}^{\infty}P(j)}\label{Pk.eq}
\end{equation}
The decoder now knows the number of new ones (or edges) that remain to be encoded.
The encoder needs to encode which configuration of the $k-\bar{k}$
ones is seen; that is, how many ones $k_{i}$ are in each group, subject
to the total count being $k-\bar{k}$. We assume that every sequence
with $k-\bar{k}$ ones is equally likely, so calculating the probability
of seeing a specific configuration is just a counting problem. In
total there are $\left(\begin{array}{c}
s\\
k-\bar{k}
\end{array}\right)$ sequences with $k-\bar{k}$ ones, and there are $\left(\begin{array}{c}
s_{i}\\
k_{i}
\end{array}\right)$ ways to arrange the $k_{i}$ ones in each group. The codelength
of a specific configuration therefore is
\[
-\log P=\log\left(\begin{array}{c}
s\\
k-\bar{k}
\end{array}\right)-\sum_{i=1}^{g}\log\left(\begin{array}{c}
s_{i}\\
k_{i}
\end{array}\right)
\]
A central assumption here is that at time of decoding a given column,
the decoder knows the number of ones $\bar{k}$ above the diagonal
so that it can calculate (\ref{Pk.eq}). This is satisfied if the
rows and columns are first sorted lexicographically, which can be
seen as follows. Suppose $i$ columns have been coded/decoded. The
decoder knows the first $i$ columns and rows in the (sorted) adjacency
matrix: these can clearly be reconstructed from the number of
ones in each group up to column $i$ together with the fact that the matrix is sorted.
The next row is chosen by the encoder among the remaining
$n-i$ columns as the one with the highest sort order based on the first $i$
columns. No matter which column is chosen, the decoder knows the first
$i$ bits, and therefore the number of ones above the diagonal.
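The per-column codelength just described can be sketched as follows; this is a simplified illustration with hypothetical inputs, where `P` is the assumed degree distribution and the group sizes and one-counts are taken as given:

```python
import math

# Sketch: ideal codelength for one column of the sorted adjacency matrix.
# The decoder already knows kbar (ones above the diagonal) and the group
# sizes s_i; we encode the total degree k with the truncated coding
# distribution P(k | k >= kbar), then the configuration of the k - kbar
# new ones across the groups by counting.

def column_codelength(P, k, kbar, group_ones, group_sizes):
    # 1) degree, with the truncated coding distribution from the text
    tail = sum(P[kbar:])
    bits = -math.log2(P[k] / tail)
    # 2) configuration: all placements of k - kbar ones are equally likely
    s = sum(group_sizes)
    bits += math.log2(math.comb(s, k - kbar))
    for si, ki in zip(group_sizes, group_ones):
        bits -= math.log2(math.comb(si, ki))
    return bits

# Toy example: uniform degree distribution over degrees 0..4.
P = [0.2] * 5
print(round(column_codelength(P, k=3, kbar=1, group_ones=[1, 1],
                              group_sizes=[2, 3]), 3))
```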
It is not necessary to explicitly sort the adjacency matrix. Instead
one can use the same partitioning algorithm from \cite{ChoiSzpankowski12}.
While not made very explicit in that paper, its rule for choosing the next
node to encode in effect sorts the adjacency matrix. This is seen most
clearly from \cite[Fig. 3]{ChoiSzpankowski12}.
\subsection{Coding of Triangles}
Edges are the most fundamental building block of graphs. A more complex
building block is the triangle, i.e., a cycle graph with three nodes,
which is also a 3-clique. Statistics about triangles are often used
to characterize graphs \cite{barabasi2016network}. One statistic is the following. Consider
three connected nodes $i\leftrightarrow j\leftrightarrow k$; we let
$p_{\triangle}$ be the probability that there is also an edge $i\leftrightarrow k$.
We can use this for coding as follows. Let the current node to be
coded be node $i$, and suppose we want to code whether or not there
is an edge to node $k$. We now look for a common neighbor $j$ of
nodes $(i,k)$ \emph{among nodes already coded}. If such a node exists,
we encode the edge $i\leftrightarrow k$ using $p_{\triangle}$; otherwise,
we use $p$. This can be used together with the structure encoding
of Table \ref{Stein.tab}: Notice that all nodes in a group have exactly the
same connections to previously encoded nodes. Thus either all the nodes $k$
in a group have a common previously encoded neighbor with node
$i$, or none of them do. Therefore, they can all be encoded with either
$p_{\triangle}$ or $p$. That is, the number of ones in the group
can be encoded with a binomial distribution with probability either
$p_{\triangle}$ or $p$.
\subsection{\label{Calc.sec}Calculation and Encoding of Statistics}
We consider encoding in two scenarios. The first is learned coding, where we are
given a set of training graphs and have to learn the statistics; these
statistics are known to both encoder and decoder. The second is universal
coding, where the encoder encodes a single graph and also has to communicate
the statistics to the decoder.
For learned coding, the edge probability $p$ can be estimated straightforwardly
as an average. The degree distribution is estimated through a histogram.
Estimating $p_{\triangle}$ is trickier. We randomly select three
connected nodes $i\leftrightarrow j\leftrightarrow k$ and calculate
$p_{\triangle}$ as an average. However, the value of $p_{\triangle}$
depends on how the nodes are selected. When $p_{\triangle}$ is used
for coding, the triple of nodes is chosen in a specific way. The best
estimate is therefore found by performing the coding on the training
graphs. Notice that in that case the edges are divided into those
coded with the triangle probability $p_{\triangle}$ and those coded
with $p$. However, those edges not coded as part of a triangle could be
special. Instead of using the general $p$ for them, we could estimate that
probability directly; we call this estimate $\check{p}_{\triangle}$. In general $p\neq\check{p}_{\triangle}$,
but in many cases they are very close.
For universal coding, there are two possible approaches, best outlined
in \cite[Section 13.2]{CoverBook}: the encoder can estimate the parameters
of the coding distribution and inform the decoder of the estimate.
Or, the coding distribution can be sequentially calculated. For encoding $p$ for iid coding
the two approaches are essentially equivalent. The number of bits
required to encode the number of ones is about $\log\frac{n(n-1)}{2}\approx2\log n$
bits. For the degree distribution, we calculate the degree histogram
for the whole graph, and use this for coding. The degree of a node
is between 0 and $n-1$. We can therefore think of the degree histogram
as putting each of the $n$ (unlabeled) nodes into one of $n$ buckets,
and encoding this can be done by encoding the counts in the buckets.
The number of possible configurations is a standard problem in combinatorics:
$\left(\begin{array}{c}
2n-1\\
n
\end{array}\right)$, which can be transmitted with $\log\left(\begin{array}{c}
2n-1\\
n
\end{array}\right)=(2n-1)H\left(\frac{n}{2n-1}\right)+\frac{1}{2}\log\frac{2n-1}{n^{2}}+c\approx 2n-\frac{1}{2}\log n$ bits ($|c|\leq2$). Of course, there is a relationship between the
degrees of nodes in the graph, and if we took this into consideration,
it might be possible to encode the degree histogram slightly more
efficiently.
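The counting argument above can also be evaluated exactly (our own sketch): the overhead of transmitting the degree histogram is $\log_2\binom{2n-1}{n}$ bits.

```python
import math

# Exact overhead of the degree histogram: n indistinguishable nodes in
# n degree buckets give comb(2n-1, n) possible histograms, i.e. about
# log2 comb(2n-1, n) bits to transmit.

def histogram_overhead_bits(n):
    return math.log2(math.comb(2 * n - 1, n))

for n in (10, 100, 1000):
    print(n, round(histogram_overhead_bits(n), 1))
```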
For triangle coding, we use sequential estimation of $p_{\triangle}$
and $\check{p}_{\triangle}$, specifically the KT estimator \cite{KrichevskyTrofimov81,WillemsAl95},
which is
$
\hat{p}=\frac{n_{1}+\frac{1}{2}}{n_{1}+n_{0}+1}
$,
where $n_{1},n_{0}$ is the number of ones and zeros seen previously.
The probabilities $p_{\triangle}$ and $\check{p}_{\triangle}$ are
not updated after each bit, but rather after each group is encoded.
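The KT estimator quoted above is a one-liner; as a sanity check (our example), it starts at $1/2$ with no observations and then tracks the empirical counts:

```python
# Krichevsky-Trofimov estimate of a Bernoulli parameter from the
# previously seen counts of ones (n1) and zeros (n0).

def kt_estimate(n1, n0):
    return (n1 + 0.5) / (n1 + n0 + 1)

print(kt_estimate(0, 0))   # 0.5
print(kt_estimate(3, 1))   # 0.7
```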
\subsection{Numerical Results}
Some results can be seen in Fig. \ref{ERcomp.fig}-\ref{nws.fig}.
In all cases, learning was done on 50 graphs prior to coding. For
Erd\"{o}s-R\'{e}nyi graphs, the iid structure code is most efficient, but all structure
codes give about the same codelength. For Barab\'{a}si-Albert\ graphs, coding using
the degree distribution is most efficient, and for Newman Watts Strogatz
graphs \cite{watts1998collective}, using the triangle probability is most efficient. This shows
that there is no single efficient code for all graph structures.
\begin{figure}[tbh]
\begin{centering}
\includegraphics[width=3.5in]{ERcl2}
\par\end{centering}
\caption{\label{ERcomp.fig} Comparison of different codelengths for an ER graph
with $p=\min\{\frac{1}{2},\frac{100}{n}\}$.}
\vspace{-0.2in}
\end{figure}
\begin{figure}[tbh]
\begin{centering}
\includegraphics[width=3.5in]{BAcl2}
\par\end{centering}
\caption{\label{GraphCodeComp.fig} Comparison of different codelengths for
a BA graph with $m=20$.}
\end{figure}
\begin{figure}[tbh]
\includegraphics[width=3.5in]{NWScl2}
\caption{\label{nws.fig}Comparison of different codelengths for a Watts Strogatz graph \cite{watts1998collective}
with $k=5$ and $p=0.1$.}
\vspace{-0.2in}
\end{figure}
We also did some experiments on real-world graphs, both obtained from \cite{Davis11}. For those graphs
there is no training, so the universal coding is needed. For both
graphs, using degree distribution is most efficient. However, transmitting
the degree histogram is expensive, and considering that, the triangle
coding is most efficient. In light of this one could consider better
ways to represent the degree distribution (e.g., a parametric representation),
but we have not explored that.
\begin{table}[tbh]
\begin{centering}
\begin{tabular}{|c||c|c|}
\hline
Codelength $\searrow$ & Protein graph & Power graph\tabularnewline
\hline
\hline
Labeled iid & 20513 & 81077\tabularnewline
\hline
Structure iid & 8796 & 32013\tabularnewline
\hline
Degree distribution & 7290 & 27651\tabularnewline
\hline
Degree distribution with overhead & 8743 & 32586\tabularnewline
\hline
Triangle & 8369 & 26507\tabularnewline
\hline
\end{tabular}
\par\end{centering}
\caption{\label{RealGraph.tab}Real-world graphs. The protein graph is the largest connected component of a network of protein interactions in the yeast Saccharomyces cerevisiae. The power graph represents the US Western power grid.}
\vspace{-0.3in}
\end{table}
\section{\label{Anomaly.sec}Anomaly Detection}
For detecting anomalous graphs, we will use atypicality developed
in \cite{HostSabetiWalton15}, which is defined by
\begin{defn}
\label{atypdef.thm}A sequence is atypical if it can be described
(coded) with fewer bits in itself rather than using the (optimum)
code for typical sequences.
\end{defn}
The papers \cite{HostSabetiWalton15,Host16BD} show that atypicality
has many desirable theoretical properties and that it works experimentally for
sequences. Specifically for anomaly detection, the paper \cite{Host16BD}
shows that atypicality is (asymptotically) optimum for finite state
machines (FSMs). We will say that two FSMs are \emph{distinct} if they
have no identical classes. Then
\begin{thm}[\cite{Host16BD}]
\label{FSMoptimality.thm}
Suppose that the typical model is an FSM. Let the atypicality detector
be given an anomalous sequence generated by an FSM distinct from the
typical FSM. Then as the length of the sequence $l\to\infty$, the
probability of detecting the anomaly converges to 1, while the probability
of false alarm converges to 0.
\end{thm}
As far as we know, nothing similar has been proven for any other anomaly
detection method.
\subsection{\label{AnomolousGraph.sec}Anomalous Graph Detection}
In anomalous graph detection, we are given a set of training graphs
$G_{1},\ldots,G_{T}$, and the problem is then to determine if a given
graph $G$ is anomalous based on the training. We will apply atypicality
to this problem. The methodology follows directly from Definition
\ref{atypdef.thm}. We learn a code for typical graphs (Section \ref{Calc.sec})
and compare this with applying a universal source coder to $G$. In this paper, we consider unweighted, undirected graphs.
For Erd\"{o}s-R\'{e}nyi \ graphs, atypicality reduces to a hypothesis
test of $\hat{p}=p$ versus $\hat{p}\neq p$, which is of the form
$|\hat{p}-p|\geq\tau$ for some threshold. There is no reason to use
coding, and even coding structure as in \cite{ChoiSzpankowski12}
does not help: in a test of $\hat{p}=p$ versus $\hat{p}\neq p$ ,
the structural decomposition would be the same, only the coding of
the resulting bitstreams would be different.
For more complicated classes of random graphs such as Barab\'{a}si-Albert\ or Watts-Strogatz \cite{watts1998collective}, more information can be obtained using the coding algorithms developed in Section \ref{Coding.sec}. The general procedure is as follows:
\begin{enumerate}
\item On the set of training graphs, we run all the coding algorithms. For
each we learn the values of the parameters (e.g., the histogram) for
the algorithm. We choose the coder that gives the shortest codelength.
The typical coder is now that algorithm with the learned parameters.
Both coder and decoder know the values of the parameters, so this
does not need to be encoded.
\item On the set of test graphs, we first run the typical coder and obtain the typical codelength $L_T$. We then
run all the coding algorithms from Section \ref{Coding.sec}; to each
codelength we have to add the overhead of encoding the parameters
(e.g., histogram). The atypical codelength, $L_A$, is now the minimum of these
codelengths, plus a few bits to tell which coder was the shortest.
The atypicality measure is the difference between the atypical
codelength and the typical codelength, $L_A-L_T$. If $L_A-L_T < 0$, or more generally smaller than some threshold\footnote{The threshold has a coding interpretation: it is the number of bits required to tell the decoder that an atypical coder is used \cite{HostSabetiWalton15}.}, then following Definition~\ref{atypdef.thm}, the graph is declared atypical (anomalous).
\end{enumerate}
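The two steps above can be sketched as follows (all names and the constant codelengths are ours, purely for illustration):

```python
import math

# Sketch of the atypicality measure.  `typical_coder` is the learned
# typical code, `coders` maps a coder name to a codelength function
# graph -> bits, and `overhead` gives each universal coder's
# parameter-encoding cost.

def atypicality_measure(graph, typical_coder, coders, overhead):
    L_T = typical_coder(graph)
    # best atypical description, plus bits to say which coder was used
    L_A = min(code(graph) + overhead[name] for name, code in coders.items())
    L_A += math.log2(len(coders))
    return L_A - L_T

# Toy stand-ins with constant codelengths, for illustration only:
measure = atypicality_measure(
    graph=None,
    typical_coder=lambda g: 100.0,
    coders={"iid": lambda g: 90.0, "degree": lambda g: 80.0},
    overhead={"iid": 2.0, "degree": 12.0},
)
print(measure)  # -7.0: shorter in itself, so declared atypical
```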
We tested this procedure by generating various random graphs with
$n=100$ nodes.
The typical graphs were generated using the Barab\'{a}si-Albert\ graph model ($m=10$). We
trained on 100 randomly generated graphs. We then generated
500 test graphs each of:
\begin{enumerate}
\item Barab\'{a}si-Albert\ graphs ($m=10$) (i.e., typical graphs)
\item Barab\'{a}si-Albert\ graphs ($m=9$)
\item Erd\"{o}s-R\'{e}nyi graphs ($p=0.182$), chosen so that the graph has the same average degree as the typical graph.
\item Mixture graph: combination of Barab\'{a}si-Albert\ graphs ($m=10$) and Erd\"{o}s-R\'{e}nyi graphs with $p=0.5$; these are essentially
Barab\'{a}si-Albert\ graphs with extra edges added (with probability $p$) to make more triangles.
\end{enumerate}
We then estimated the probability density function (pdf) of the atypicality measure:
$L_A-L_T$. The results are in Fig.
\ref{Anomaly1.fig}. We can see that Erd\"{o}s-R\'{e}nyi and Barab\'{a}si-Albert ($m=9$) test graphs can be easily distinguished from the typical graphs, Barab\'{a}si-Albert\ ($m=10$). Distinguishing mixture graphs from Barab\'{a}si-Albert\ ($m=10$) is more difficult. However, due to the law of large numbers, anomaly detection improves as graph size increases. Figure~\ref{Anomaly2.fig} shows the estimated pdf of the atypicality measure for mixture graphs versus Barab\'{a}si-Albert\ ($m=10$) for graphs with $n=400$ nodes; if we choose the threshold to be 305, we get $P_{\text{false alarm}}=P_{\text{miss}}=2.4\%$.
\begin{figure}[tbh]
\includegraphics[width=3.5in]{Anomaly1}
\caption{\label{Anomaly1.fig}Pdf of atypicality measure for different types
of graphs ($n=100$). The typical graphs are BA(10), which are used
for training.}
\end{figure}
\begin{figure}[tbh]
\includegraphics[width=3.5in]{Anomaly2}
\caption{\label{Anomaly2.fig}Pdf of atypicality measure for different types
of graphs ($n=400$). The typical graphs are BA(10), which are used
for training.}
\end{figure}
\section{Conclusions and Future Work}
In this paper we have developed a number of new universal graph coding
algorithms. The minimum codelength is found by coding with each algorithm,
and then finding the minimum (or weighting as in \cite{VolfWillems98}).
However, this is still far from the state of the art for sequences, where
single algorithms such as CTW \cite{WillemsAl95} and Lempel-Ziv \cite{CoverBook}
can code sequences with variable complexity. One possibility
is to generalize the triangle coding to consider structures of variable
complexity, and weight these in an approach similar to CTW.
We have shown that the coding algorithms can be used for graph anomaly
detection based on structure alone. We will consider a number of extensions. First, in most
graph-based anomaly detection problems, the anomaly is in the data
on the graph. Our idea is to combine graph structure coding with
coding of the data to get a single measure that takes into account
both data and structure. Second, we need to be able to consider graphs
of variable size; the complication here is that statistics might very
well depend on size. Finally, we will consider detecting anomalous
subgraphs.
\section{Introduction}
A popular research problem in data mining is graph anomaly detection, which has applications in areas ranging from finance to power grid operation to detecting social trends \cite{noble2003graph,eberle2007anomaly,akoglu2015graph}. In this paper we explore using description
length for graph anomaly detection; that is, we encode the graph using
a lossless source coder and use the resulting codelength as the decision
criterion. While minimum description length (MDL) has been used in
connection with graph anomaly detection, the application has only
been for model selection in time-series analysis. As far as we know, this paper
is the first to consider using description length directly for anomaly
detection.
Reference \cite{ChoiSzpankowski12} was the first paper to develop practical source coding
algorithms for graphs. To use source coding for description length analysis, the codelength has
to reflect the information in the graph, and the only information the code of \cite{ChoiSzpankowski12} really reflects is the edge probability $p$ (see discussion later). This paper therefore develops new practical
(universal) source coding algorithms based on more informative statistics. This focus is different from that of other recent papers
in graph coding \cite{DelgoshaAnantharam17,LuczakSzpankowski17, LuczakSzpankowski17b, AsadiAbbeVerdu17}, which are aimed more at entropy analysis.
\subsection{Graphs}
The structure of a graph is defined by the set of \textbf{vertices}
(also called nodes) $\mathcal{V}$, and the set of \textbf{edges},
$\mathcal{E}$. Usually, the ordering of the vertices is irrelevant, and in that
case we call the graph \textbf{unlabeled}; we will only consider unweighted, unlabeled, undirected graphs in this paper.
A graph, $G(\mathcal{V},\mathcal{E})$, is often represented by the
\textbf{adjacency matrix}, $\mathbf{A}=[A_{ij}]$, a $|\mathcal{V}|\times|\mathcal{V}|$
matrix where $A_{ij}=1$ if $(i,j)\in\mathcal{E}$. The degree of a vertex is the number of edges emanating from the vertex.
The \textbf{degree distribution} is the collection of the degrees
of all the nodes in the graph and is an often used statistic to differentiate between different classes of random graphs such as Erd\"{o}s-R\'{e}nyi, Barab\'{a}si-Albert, or Watts-Strogatz graphs \cite{barabasi2016network}. There is a one-to-one correspondence between binary, symmetric matrices
and unweighted, undirected graphs, and coding of graphs is therefore equivalent to coding
binary, symmetric matrices.
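As a minimal illustration of these definitions (a sketch, not part of the paper's implementation), the degree sequence and empirical degree distribution can be read directly off the adjacency matrix:

```python
from collections import Counter

def degree_distribution(A):
    """Degrees and empirical degree distribution of an undirected,
    unweighted graph given by a symmetric 0/1 adjacency matrix A."""
    degrees = [sum(row) for row in A]  # degree = row sum
    n = len(A)
    counts = Counter(degrees)
    return degrees, {k: c / n for k, c in counts.items()}

# 4-cycle: every vertex has degree 2
A = [[0, 1, 0, 1],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [1, 0, 1, 0]]
degrees, pdf = degree_distribution(A)
```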
\subsection{Description Length}
The description length of the data is the number of bits required
to describe the data exactly: the data is turned into a stream of
bits, from which a decoder should be able to recover the data exactly. We are only concerned with the length of the encoding, i.e.,
the number of bits output by the encoder.
The central idea here is that the description length has some relationship
with the \textquotedbl{}meaning\textquotedbl{} of data. For example,
Rissanen considered \textquotedbl{}useful information\textquotedbl{}
in \cite{Rissanen86b}. More concretely, description length can be
used for data analysis. A traditional application, in particular in
terms of minimum description length (MDL) \cite{Rissanen83},
has been for model selection in data analysis.
The methodology
we will develop for graph coding can also be used for model selection
for more general data sets. However, we are more interested in description
length as a general data processing tool beyond simple model selection.
One example is atypicality which is described in Section \ref{Anomaly.sec}.
A central principle of description length is the constraint that a
decoder should be able to reconstruct the original data from an (infinite)
stream of bits. One manifestation is of course the Kraft inequality
\cite{CoverBook}, but the principle is more general. Since most source
coding algorithms are sequential, decodability then means that the
decoder can only use past decoded information to decode future data.
For graphs, this is much more complicated to satisfy than for sequences. Decodability
now becomes an algorithmic constraint rather than a probabilistic
one, moving description length theory closer to Kolmogorov complexity
\cite{LiVitanyi,CoverBook}.
\section{\label{Coding.sec}Coding}
We will base graph coding on the adjacency matrix \textendash{} due
to symmetry, only the lower triangular part has to be coded. However,
usually the numbering of nodes is irrelevant. The resulting graph
modulo automorphisms is called the structure \cite{ChoiSzpankowski12}.
Using this in encoding can lead to smaller codelength. Importantly,
for data analysis the structure is clearly more relevant, and description
length should therefore be based on the structure.
\begin{table}[tbh]
\begin{centering}
\begin{tabular}{cccc}
1 & \textcolor{blue}{1} & \textcolor{red}{1} & $\cdots$\tabularnewline
1 & \textcolor{blue}{1} & \textcolor{red}{1} & $\cdots$\tabularnewline
1 & \textcolor{blue}{1} & \textcolor{red}{0} & $\cdots$\tabularnewline
1 & \textcolor{blue}{0} & \textcolor{cyan}{1} & $\cdots$\tabularnewline
1 & \textcolor{blue}{0} & \textcolor{cyan}{0} & $\cdots$\tabularnewline
0 & \textcolor{green}{1} & \textcolor{purple}{1} & $\cdots$\tabularnewline
0 & \textcolor{green}{1} & \textcolor{purple}{0} & $\cdots$\tabularnewline
0 & \textcolor{green}{0} & \textcolor{pink}{1} & $\cdots$\tabularnewline
0 & \textcolor{green}{0} & \textcolor{pink}{0} & $\cdots$\tabularnewline
\end{tabular}
\par\end{centering}
\caption{\label{Stein.tab}The first column has one group, the second two (blue/green),
the third four (red/cyan/purple/pink).}
\vspace{-0.3in}
\end{table}
The adjacency matrix is a binary matrix, and coding this is therefore
similar to the problem considered by Steinruecken in \cite{Steinruecken15},
on which we will base our coding. Steinruecken considered coding of
unordered iid sequences, which we will think of as a matrix. We can
state the approach more abstractly as follows: we first sort the rows
according to some criterion (e.g., lexicographically). The coding
is done on the sorted matrix, and only the sorted matrix is reproduced
(exactly) at the receiver. The trick is to sort in such a way that
coding of the sorted matrix is more efficient than coding the original
matrix. The procedure in \cite{Steinruecken15} is to first sort the
sequences lexicographically (with 1 coming before 0). We say that
the sequences are grouped: the first group is all sequences; the next
two groups are the sequences that start with 1/0, which are then subgrouped
into sequences starting with 11/10/01/00, see Table \ref{Stein.tab}.
An efficient code is as follows: we first transmit the number of ones
in the first column (the first group). The next column is divided
into two groups: those rows that have 1 in the first column, and those
that have 0. We transmit the number of ones in each group. When the
sequences are iid, the number of ones is binomially distributed, which
can be used for encoding. We continue this way (with empty groups
not encoded).
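The grouped column-by-column scheme can be sketched as follows. This is an idealized codelength computation (an actual arithmetic coder adds negligible overhead), under the assumption that entries are iid with a known parameter p:

```python
from math import comb, log2

def column_group_codelength(matrix, p):
    """Ideal codelength (bits) for Steinruecken-style column-by-column
    coding of a binary matrix: rows are grouped by their prefix, and for
    each group only the number of ones in the current column is encoded,
    using a Binomial(group size, p) coding distribution."""
    n_rows, n_cols = len(matrix), len(matrix[0])
    bits = 0.0
    groups = [list(range(n_rows))]  # initially one group: all rows
    for col in range(n_cols):
        new_groups = []
        for g in groups:
            s = len(g)
            k = sum(matrix[r][col] for r in g)  # ones in this group
            prob = comb(s, k) * p**k * (1 - p)**(s - k)
            bits += -log2(prob)
            # split the group by the current column (empty groups dropped)
            ones = [r for r in g if matrix[r][col] == 1]
            zeros = [r for r in g if matrix[r][col] == 0]
            new_groups += [grp for grp in (ones, zeros) if grp]
        groups = new_groups
    return bits
```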
This approach can also be applied to adjacency matrices, with the
modification that when we permute the rows during sorting, we have
to do the same permutation of columns to preserve symmetry. This turns
out to be equivalent to the algorithm in \cite{ChoiSzpankowski12},
but describing it this way reveals that the approach in \cite{ChoiSzpankowski12}
is strongly aimed at Erd\"{o}s-R\'{e}nyi graphs. From a data analysis point of view
this is problematic. The only parameter the algorithm in \cite{ChoiSzpankowski12}
is sensitive to is the average node degree $\bar{k}$ (equivalently
$p$). Consider anomalous graph detection in terms of atypicality
(this is described in more detail in Section \ref{AnomolousGraph.sec}):
We compare the codelength of encoding the graph with a given learned
coder and a universal coder. Since the only parameter \cite{ChoiSzpankowski12}
is sensitive to is $p$, this corresponds to a hypothesis test of
$p=p_{0}$ versus $p\neq p_{0}$. This is not irrelevant, but it is
far from what we do with sequences, where we can test a given FSM
against the whole class of alternative FSMs. Thus, to be effective
for data analysis, we need much more capable coders. In the following
we will describe two such coders.
\subsection{\label{Degree.sec}Coding Using Degree Distribution}
Assume we know the degree distribution $P(k)$, either from a model,
from learning, or from the given graph. How can we take this into
account in coding? Consider coding of a given column of the sorted
adjacency matrix, as outlined above. Important here is what the decoder
already knows, from previous columns: it knows the number of ones
above the diagonal, it knows the number of groups $g$, and it knows
the size $s_{i}$ of each group; let $s=\sum_{i=1}^{g}s_{i}$. We
first encode the (total) degree of the node. Call the number of ones
above the diagonal $\bar{k}$. We can use the coding distribution
\begin{equation}
P(k|k\geq\bar{k})=\frac{P(k)}{\sum_{j=\bar{k}}^{\infty}P(j)}\label{Pk.eq}
\end{equation}
The decoder now knows the number of new ones (edges) that remain to
be encoded. The encoder needs to encode which configuration of the $k-\bar{k}$
ones is seen; that is, how many ones $k_{i}$ are in each group, subject
to the total count being $k-\bar{k}$. We assume that every sequence
with $k-\bar{k}$ ones is equally likely, so calculating the probability
of seeing a specific configuration is just a counting problem. In
total there are $\left(\begin{array}{c}
s\\
k-\bar{k}
\end{array}\right)$ sequences with $k-\bar{k}$ ones, and there are $\left(\begin{array}{c}
s_{i}\\
k_{i}
\end{array}\right)$ ways to arrange the $k_{i}$ ones in each group. The resulting codelength
for a specific configuration therefore is
\[
-\log P=\log\left(\begin{array}{c}
s\\
k-\bar{k}
\end{array}\right)-\sum_{i=1}^{g}\log\left(\begin{array}{c}
s_{i}\\
k_{i}
\end{array}\right)
\]
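The truncated coding distribution (\ref{Pk.eq}) translates into a simple ideal-codelength computation. A sketch, assuming a finite-support degree distribution given as a dictionary (the distribution `P` below is purely illustrative):

```python
from math import log2

def degree_codelength(P, k, k_bar):
    """Bits to encode the total degree k of the current node, given
    that k_bar ones have already been seen above the diagonal, using
    the truncated distribution P(k | k >= k_bar).
    P maps degree -> probability (finite support assumed)."""
    tail = sum(prob for deg, prob in P.items() if deg >= k_bar)
    return -log2(P[k] / tail)

# hypothetical learned degree distribution
P = {0: 0.25, 1: 0.25, 2: 0.5}
```

Conditioning on $k\geq\bar{k}$ shortens the codeword: with the example distribution, encoding degree 2 costs 1 bit unconditionally but only about 0.58 bits once one edge has already been seen.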
A central assumption here is that at the time of decoding a given column,
the decoder knows the number of ones $\bar{k}$ above the diagonal
so that it can calculate (\ref{Pk.eq}). This is satisfied if the
rows and columns are first sorted lexicographically, which can be
seen as follows. Suppose $i$ columns have been coded/decoded. The
decoder knows the first $i$ columns and rows in the (sorted) adjacency
matrix: this is clearly possible to reconstruct from the number of
ones in each group up to column $i$ and the fact of the sorting.
The next row is chosen by the encoder among the remaining
$n-i$ columns as the one with the highest sort order based on the first $i$
columns. No matter which column is chosen, the decoder knows its first
$i$ bits, and therefore the number of ones above the diagonal.
It is not necessary to explicitly sort the adjacency matrix. Instead
one can use the same partitioning algorithm from \cite{ChoiSzpankowski12}.
While this is not very explicit in that paper, the authors actually sort the adjacency
matrix through the way they choose the next node to encode. This is seen most
clearly in \cite[Fig. 3]{ChoiSzpankowski12}.
\subsection{Coding of Triangles}
Edges are the most fundamental building block of graphs. A more complex
building block is the triangle, i.e., a cycle graph with three nodes,
which is also a 3-clique. Statistics about triangles are often used
to characterize graphs \cite{barabasi2016network}. One statistic is the following. Consider
three connected nodes $i\leftrightarrow j\leftrightarrow k$; we let
$p_{\triangle}$ be the probability that there is also an edge $i\leftrightarrow k$.
We can use this for coding as follows. Let the current node to be
coded be node $i$, and suppose we want to code whether or not there
is an edge to node $k$. We now look for a common neighbor $j$ of
nodes $(i,k)$ \emph{among nodes already coded}. If such a node exists,
we encode the edge $i\leftrightarrow k$ using $p_{\triangle}$; otherwise,
we use $p$. This can be used together with the structure encoding
of Table \ref{Stein.tab}: notice that all nodes in a group have exactly the
same connections to prior encoded nodes. Thus either all the nodes $k$
in a group have a common previously encoded neighbor with node
$i$, or none have. Therefore, they can all be encoded with either
$p_{\triangle}$ or $p$. That is, the number of ones in the group
can be encoded with a binomial distribution with probability either
$p_{\triangle}$ or $p$.
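The rule for switching between the two coding probabilities can be sketched as follows (illustrative only; `adj` is the adjacency matrix and `coded` the set of already-encoded nodes):

```python
def edge_coding_prob(adj, coded, i, k, p, p_tri):
    """Coding probability for a potential edge (i, k): if i and k share
    an already-coded common neighbor j, the edge would close a triangle
    and is coded with p_tri; otherwise the plain edge probability p is
    used."""
    for j in coded:
        if adj[i][j] and adj[k][j]:
            return p_tri  # common previously coded neighbor exists
    return p

# triangle on three nodes
adj = [[0, 1, 1],
       [1, 0, 1],
       [1, 1, 0]]
```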
\subsection{\label{Calc.sec}Calculation and Encoding of Statistics}
We consider encoding in two scenarios. First, learned coding, where we are
given a set of training graphs and have to learn the statistics; these
statistics are then known by both encoder and decoder. Second, universal
coding, where the encoder encodes a single graph and also has to communicate
the statistics to the decoder.
For learned coding, the edge probability $p$ can be estimated straightforwardly
as an average. The degree distribution is estimated through a histogram.
Estimating $p_{\triangle}$ is trickier. We select three
connected nodes $i\leftrightarrow j\leftrightarrow k$ at random and calculate
$p_{\triangle}$ as an average. However, the value of $p_{\triangle}$
depends on how the nodes are selected. When $p_{\triangle}$ is used
for coding, the triple of nodes is chosen in a specific way. The best
estimate is therefore found by performing the coding on the training
graphs. Notice that in that case the edges are divided into those
coded with the triangle probability $p_{\triangle}$ and those coded
with $p$. However, the edges not coded in a triangle could be
special. Instead of using the general $p$, we could estimate that
$p$ directly; we call this estimate $\check{p}_{\triangle}$. In general $p\neq\check{p}_{\triangle}$,
but in many cases they are very close.
For universal coding, there are two possible approaches, best outlined
in \cite[Section 13.2]{CoverBook}: the encoder can estimate the parameters
of the coding distribution and inform the decoder of the estimate.
Or, the coding distribution can be calculated sequentially. For iid coding of $p$
the two approaches are essentially equivalent. The number of bits
required to encode the number of ones is about $\log\frac{n(n-1)}{2}\approx2\log n$
bits. For the degree distribution, we calculate the degree histogram
for the whole graph, and use this for coding. The degree of a node
is between 0 and $n-1$. We can therefore think of the degree histogram
as putting each of the $n$ (unlabeled) nodes into one of $n$ buckets,
and encoding this can be done by encoding the counts in the buckets.
The number of possible configurations is a standard problem in combinatorics:
$\left(\begin{array}{c}
2n-1\\
n
\end{array}\right)$, which can be transmitted with $\log\left(\begin{array}{c}
2n-1\\
n
\end{array}\right)=(2n-1)H\left(\frac{n}{2n-1}\right)+\frac{1}{2}\log\frac{2n-1}{n^{2}}+c\approx2n-\frac{1}{2}\log n$ bits ($|c|\leq2$). Of course, there are dependencies between the
degrees of the nodes in the graph, and if we took these into consideration,
it might be possible to encode the degree histogram slightly more
efficiently.
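The stars-and-bars count used above can be checked numerically; a small sketch:

```python
from math import comb, log2
from itertools import combinations_with_replacement

def histogram_configurations(n):
    """Number of possible degree histograms: ways to place n unlabeled
    nodes into n degree buckets (degrees 0..n-1), a stars-and-bars
    count equal to C(2n-1, n)."""
    return comb(2 * n - 1, n)

n = 5
# brute force: multisets of size n drawn from n bucket labels
brute = sum(1 for _ in combinations_with_replacement(range(n), n))
bits = log2(histogram_configurations(n))  # exact cost in bits
```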
For triangle coding, we use sequential estimation of $p_{\triangle}$
and $\check{p}_{\triangle}$, specifically the KT estimator \cite{KrichevskyTrofimov81,WillemsAl95},
which is
$
\hat{p}=\frac{n_{1}+\frac{1}{2}}{n_{1}+n_{0}+1}
$,
where $n_{1}$ and $n_{0}$ are the numbers of ones and zeros seen previously.
The probabilities $p_{\triangle}$ and $\check{p}_{\triangle}$ are
not updated after each bit, but rather after each group is encoded.
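The KT estimator is one line; a sketch for reference:

```python
def kt_estimate(n1, n0):
    """Krichevsky-Trofimov sequential estimate of the probability of a
    one after observing n1 ones and n0 zeros, as used above for the
    triangle probabilities."""
    return (n1 + 0.5) / (n1 + n0 + 1)
```

Note the estimate starts at 1/2 with no observations and never reaches 0 or 1, which keeps every codeword finite.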
\subsection{Numerical Results}
Some results can be seen in Figs. \ref{ERcomp.fig}-\ref{nws.fig}.
In all cases, learning was done on 50 graphs prior to coding. For
Erd\"{o}s-R\'{e}nyi graphs, the iid structure code is the most efficient, but all structure
codes give about the same codelength. For Barab\'{a}si-Albert\ graphs, coding using
the degree distribution is the most efficient, and for Newman-Watts-Strogatz
graphs \cite{watts1998collective}, using the triangle probability is the most efficient. This shows
that there is no single efficient code for all graph structures.
\begin{figure}[tbh]
\begin{centering}
\includegraphics[width=3.5in]{ERcl2}
\par\end{centering}
\caption{\label{ERcomp.fig} Comparison of different codelengths for an ER graph
with $p=\min\{\frac{1}{2},\frac{100}{n}\}$.}
\vspace{-0.2in}
\end{figure}
\begin{figure}[tbh]
\begin{centering}
\includegraphics[width=3.5in]{BAcl2}
\par\end{centering}
\caption{\label{GraphCodeComp.fig} Comparison of different codelengths for
a BA graph with $m=20$.}
\end{figure}
\begin{figure}[tbh]
\includegraphics[width=3.5in]{NWScl2}
\caption{\label{nws.fig}Comparison of different codelengths for a Watts Strogatz graph \cite{watts1998collective}
with $k=5$ and $p=0.1$.}
\vspace{-0.2in}
\end{figure}
We also did some experiments on two real-world graphs, both obtained from \cite{Davis11}. For these graphs
there is no training, so universal coding is needed. For both
graphs, using the degree distribution is the most efficient. However, transmitting
the degree histogram is expensive, and taking that into account, the triangle
coding is the most efficient. In light of this one could consider better
ways to represent the degree distribution (e.g., a parametric representation),
but we have not explored that.
\begin{table}[tbh]
\begin{centering}
\begin{tabular}{|c||c|c|}
\hline
Codelength $\searrow$ & Protein graph & Power graph\tabularnewline
\hline
\hline
Labeled iid & 20513 & 81077\tabularnewline
\hline
Structure iid & 8796 & 32013\tabularnewline
\hline
Degree distribution & 7290 & 27651\tabularnewline
\hline
Degree distribution with overhead & 8743 & 32586\tabularnewline
\hline
Triangle & 8369 & 26507\tabularnewline
\hline
\end{tabular}
\par\end{centering}
\caption{\label{RealGraph.tab}Real-world graphs. The protein graph is the largest connected component of a network of protein interactions in the yeast Saccharomyces cerevisiae. The power graph represents the US Western power grid.}
\vspace{-0.3in}
\end{table}
\section{\label{Anomaly.sec}Anomaly Detection}
For detecting anomalous graphs, we will use atypicality developed
in \cite{HostSabetiWalton15}, which is described by
\begin{defn}
\label{atypdef.thm}A sequence is atypical if it can be described
(coded) with fewer bits in itself rather than using the (optimum)
code for typical sequences.
\end{defn}
The papers \cite{HostSabetiWalton15,Host16BD} show that atypicality
has many desirable theoretical properties and that it works experimentally for
sequences. Specifically for anomaly detection, the paper \cite{Host16BD}
shows that atypicality is (asymptotically) optimum for finite state
machine (FSM) sources. We will say that two FSMs are \emph{distinct} if they
have no identical classes. Then
\begin{thm}[\cite{Host16BD}]
\label{FSMoptimality.thm}
Suppose that the typical model is an FSM. Let the atypicality detector
be given an anomalous sequence generated by an FSM distinct from the
typical FSM. Then as the length of the sequence $l\to\infty$, the
probability of detecting the anomaly converges to 1, while the probability
of false alarm converges to 0.
\end{thm}
As far as we know, nothing similar has been proven for any other anomaly
detection method.
\subsection{\label{AnomolousGraph.sec}Anomalous Graph Detection}
In anomalous graph detection, we are given a set of training graphs
$G_{1},\ldots,G_{T}$, and the problem is then to determine if a given
graph $G$ is anomalous based on the training. We will apply atypicality
to this problem. The methodology follows directly from Definition
\ref{atypdef.thm}. We learn a coder for typical graphs (Section \ref{Calc.sec})
and compare this with applying a universal source coder to $G$. In this paper, we consider unweighted, undirected graphs.
For Erd\"{o}s-R\'{e}nyi \ graphs, atypicality reduces to a hypothesis
test of $\hat{p}=p$ versus $\hat{p}\neq p$, which is of the form
$|\hat{p}-p|\geq\tau$ for some threshold. There is no reason to use
coding, and even coding structure as in \cite{ChoiSzpankowski12}
does not help: in a test of $\hat{p}=p$ versus $\hat{p}\neq p$ ,
the structural decomposition would be the same, only the coding of
the resulting bitstreams would be different.
For more complicated classes of random graphs such as Barab\'{a}si-Albert\ or Watts-Strogatz \cite{watts1998collective}, more information can be obtained using the coding algorithms developed in Section \ref{Coding.sec}. The general procedure is as follows:
\begin{enumerate}
\item On the set of training graphs, we run all the coding algorithms. For
each we learn the values of the parameters (e.g., the histogram) for
the algorithm. We choose the coder that gives the shortest codelength.
The typical coder is now that algorithm with the learned parameters.
Both coder and decoder know the values of the parameters, so this
does not need to be encoded.
\item On the set of test graphs, we first run the typical coder and obtain the typical codelength $L_T$. We then
run all the coding algorithms from Section \ref{Coding.sec}; to each
codelength we have to add the overhead of encoding the parameters
(e.g., the histogram). The atypical codelength, $L_A$, is now the minimum of these
codelengths, plus a few bits to tell which coder was the shortest.
The atypicality measure is the difference between the atypical
codelength and the typical codelength, $L_A-L_T$. If $L_A-L_T < 0$, or is smaller than some threshold\footnote{The threshold has a coding interpretation: it is the number of bits required to tell the decoder that an atypical coder is used \cite{HostSabetiWalton15}.}, then following Definition~\ref{atypdef.thm}, the graph is declared atypical (anomalous).
\end{enumerate}
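The decision step above can be sketched as follows; the codelengths and overheads are illustrative placeholders, and the few bits naming the winning coder are omitted for simplicity:

```python
def atypicality_measure(L_T, candidate_lens, overheads):
    """Atypicality measure L_A - L_T: L_A is the minimum over all
    universal coders of codelength plus parameter overhead; the graph
    is declared anomalous when the measure falls below a threshold."""
    L_A = min(c + o for c, o in zip(candidate_lens, overheads))
    return L_A - L_T

# hypothetical numbers: typical coder needs 1000 bits; two universal
# coders give 950 and 990 bits with overheads 30 and 50 bits
measure = atypicality_measure(1000, [950, 990], [30, 50])
```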
We tested this procedure by generating various random graphs with
$n=100$ nodes.
The typical graphs were generated using the Barab\'{a}si-Albert\ graph model ($m=10$). We
trained on 100 randomly generated graphs. We then generated
500 test graphs each of:
\begin{enumerate}
\item Barab\'{a}si-Albert\ graphs ($m=10$) (i.e., typical graphs)
\item Barab\'{a}si-Albert\ graphs ($m=9$)
\item Erd\"{o}s-R\'{e}nyi graphs ($p=0.182$), chosen so that the graph has the same average degree as the typical graph.
\item Mixture graph: combination of Barab\'{a}si-Albert\ graphs ($m=10$) and Erd\"{o}s-R\'{e}nyi graphs with $p=0.5$; these are essentially
Barab\'{a}si-Albert\ graphs with extra edges added ($p$) to make more triangles.
\end{enumerate}
We then estimated the probability density function (pdf) of the atypicality measure
$L_A-L_T$. The results are in Fig.
\ref{Anomaly1.fig}. We can see that the Erd\"{o}s-R\'{e}nyi and Barab\'{a}si-Albert ($m=9$) test graphs can easily be distinguished from the typical graphs, Barab\'{a}si-Albert\ ($m=10$). Distinguishing the mixture graphs from Barab\'{a}si-Albert\ ($m=10$) is more difficult. However, due to the law of large numbers, anomaly detection improves as the graph size increases. Figure~\ref{Anomaly2.fig} shows the estimated pdfs of the atypicality measures for the mixture graphs and Barab\'{a}si-Albert\ ($m=10$) for graphs with $n=400$ nodes; if we choose the threshold to be 305, we get $P_{\text{false alarm}}=P_{\text{miss}}=2.4\%$.
\begin{figure}[tbh]
\includegraphics[width=3.5in]{Anomaly1}
\caption{\label{Anomaly1.fig}Pdf of atypicality measure for different types
of graphs ($n=100$). The typical graphs are BA(10), which are used
for training.}
\end{figure}
\begin{figure}[tbh]
\includegraphics[width=3.5in]{Anomaly2}
\caption{\label{Anomaly2.fig}Pdf of atypicality measure for different types
of graphs ($n=400$). The typical graphs are BA(10), which are used
for training.}
\end{figure}
\section{Conclusions and Future Work}
In this paper we have developed a number of new universal graph coding
algorithms. The minimum codelength is found by coding with each algorithm
and then taking the minimum (or weighting as in \cite{VolfWillems98}).
However, this is still far from the state of the art for sequences, where
there are single algorithms such as CTW \cite{WillemsAl95} and Lempel-Ziv \cite{CoverBook}
that can code sequences with variable complexity. One possibility
is to generalize the triangle coding to consider structures of variable
complexity, and to weight these in an approach similar to CTW.
We have shown that the coding algorithms can be used for graph anomaly
detection based on structure alone. We will consider a number of extensions. First, in most
graph-based anomaly detection problems, the anomaly is in the data
on the graph. Our idea is to combine graph structure coding with
coding of the data to get a single measure that takes into account
both data and structure. Second, we need to be able to consider graphs
of variable size; the complication here is that statistics might very
well depend on size. Finally, we will consider detecting anomalous
subgraphs.
|
2111.08514
|
\section{Introduction}
Electronics and signal processing, especially in their linear modalities,
have largely relied on the algebraic operations of sum, subtraction, and product of signals.
Filtering, self- and cross correlations are just some examples of the interesting
applications of linear signal processing
(e.g.~\cite{brigham:1988,oppenheim:2009,raikos:2009,Lathi}) and electronics
(e.g.~\cite{horowitz:2015,streetman:2016,thomson:1976}).
One problem with classic signal processing applications is that
the required real-valued products are not easily implemented, demanding specific
high-performance digital circuits.
Multisets (e.g.~\cite{Hein,Knuth,Blizard,Blizard2,Thangavelu,Singh}) correspond to
an interesting and conceptually powerful extension of set theory that allows repeated
elements to be taken into account. In a sense, multiset theory seems to be even more
compatible with human intuition than the now classic set theory.
While multisets have mostly been applied to categorical or non-negative values, they
can be generalized to real values, including possibly negative values~\cite{CostaJaccard,CostaMset,CostaAnalogies}. This can be achieved by
allowing the multiset difference operation to lead to negative multiplicities, which
implies that the universe multiset is identical to the empty multiset, therefore establishing
a stable complement operation.
Real-value multisets have been further generalized to real function spaces~\cite{CostaMset}, allowing
the integration of multiset concepts and properties with the whole set of algebraic operations,
so that hybrid expressions such as:
\begin{equation}
\left[f(t) \cup g(t) \right]^C \cos(h(t) \cap -g(t)) \nonumber
\end{equation}
can be obtained~\cite{CostaMset,CostaAnalogies}.
When applied to real function spaces, multisets have been called \emph{multifunctions},
while their image values are associated with the real-valued multiset multiplicities.
These generalizations paved the way to a wide range of possible developments and applications
in the most diverse areas. For instance, it has been shown that the common product
between two functions provides substantially enhanced potential for performing filtering
and pattern recognition operations, including template matching~\cite{CostaMset,CostaComparing}.
More specifically, sharper and narrower matching peaks are typically obtained at the
same time as secondary matches and noise are effectively eliminated. These desirable
features stem from the fact that, though involving extremely simple operations such as
the binary minimum and maximum (in the sense of taking two arguments),
the common product, as well as several multifunction operations, is ultimately
non-linear. Results derived from these developments have also been found to
allow impressive performance for clustering (non-supervised pattern recognition)~\cite{CostaCluster}
and complex network representations and community finding~\cite{CostaCCompl}.
The present work addresses the implementation of signal processing methods involving real-valued
multisets and multifunctions as electronic circuits. There
are several motivations for doing so. First, an effective implementation of
operations such as the common product would allow especially accurate and
effective real-time applications in several related areas, including pattern recognition,
deep learning, and control systems. Particularly promising is the
implementation of the suggested electronic operators in integrated electronics.
Second, the implementation of the multifunction operations in electronic devices
paves the way to their effective incorporation into the area of signal and image
processing.
After introducing and illustrating some of the main multiset/multifunction
operations, we briefly outline the common product in its elementwise and functional
forms, as well as the correlation methods obtainable from them.
Subsequently, we propose respective implementations as relatively simple electronic circuits,
involving a combination of a few standard linear and digital devices, including analog switches,
operational amplifiers, comparators, and the logic equivalence operation.
A complete implementation of the elementwise common product is then proposed and
discussed.
\section{Basic Real-Valued Multiset Operations} \label{sec:basic}
Given a signal $f(t)$, its multiset \emph{complement} is immediately obtained as
$-f(t)$.
The \emph{sign function} of $f(t)$ is henceforth understood to correspond to:
\begin{equation}
s_f(t) =
\left\{
\begin{array}{l}
+1 \quad \emph{ if } f(t) \geq 0 \nonumber \\
-1 \quad \emph{ otherwise.} \\
\end{array}
\right.
\end{equation}
Observe that $s_f(t) f(t) = |f(t)|$.
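As a minimal sketch, the sign function for a single sample value (note the convention that zero maps to $+1$):

```python
def sgn(v):
    """Sign function as defined above: +1 for v >= 0, -1 otherwise."""
    return 1 if v >= 0 else -1
```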
Given an additional signal $g(t)$, the \emph{intersection} between these signals can
be expressed as:
\begin{equation}
\min \left\{ f(t), g(t) \right\} =
\left\{
\begin{array}{l}
f(t) \quad \emph{ if } f(t) \leq g(t) \nonumber \\
g(t) \quad \emph{ otherwise.} \\
\end{array}
\right.
\end{equation}
Similarly, the \emph{union} between the two signals can be expressed as:
\begin{equation}
\max \left\{ f(t), g(t) \right\} =
\left\{
\begin{array}{l}
f(t) \quad \emph{ if } f(t) \geq g(t) \nonumber \\
g(t) \quad \emph{ otherwise.} \\
\end{array}
\right.
\end{equation}
The \emph{conjoint sign function} between the signals $f(t)$ and $g(t)$ is defined as:
\begin{equation}
s_{fg}(t) = s_f(t) s_g(t)
\end{equation}
Figure~\ref{fig:functions} illustrates two signals, namely a cosine (a) and
sine (b) along a complete respective period, as well as the associated sign (c-d) and
conjoint (e) sign functions.
\begin{figure*}[h!]
\begin{center}
\includegraphics[width=0.9\linewidth]{functions.png} \\
\caption{Two functions, namely a complete period of cosine (a) and sine (b),
as well as their respective sign functions (c-d) and conjoint sign function (e). }
\label{fig:functions}
\end{center}
\end{figure*}
\vspace{0.5cm}
Shown in Figure~\ref{fig:ex} are the results of these real-valued multiset operations
with respect to the two signals $f(t)$ and $g(t)$ shown in Figure~\ref{fig:functions}.
\begin{figure*}[h!]
\begin{center}
\includegraphics[width=0.8\linewidth]{operations.png} \\
\caption{Real-valued multifunction operations of
intersection (a) and union (b) of $f(t)$ and $g(t)$, and
absolute value (c) of $f(t)$. The elementwise
common product (Section~\ref{sec:cprod}) is shown in
(d) together with the original functions $f(t)$ and $g(t)$. Observe
that the common product corresponds to the region shared by the two
functions when the horizontal axis is taken as reference.
The common product functional applied to the signal in (d), which corresponds to
integrating this signal along its support, yields zero,
indicating a null relationship between the cosine and sine functions.}
\label{fig:ex}
\end{center}
\end{figure*}
\vspace{0.5cm}
\section{The Common Product and Correlation} \label{sec:cprod}
Given two signals $f(t)$ and $g(t)$, their \emph{elementwise common
product}~\cite{CostaMset,CostaSimilarity} can be defined as:
\begin{equation}
f(t) \diamond g(t) = s_{fg}(t) \min\left\{ s_f(t) f(t), s_g(t) g(t) \right\}
\end{equation}
This operation is illustrated in Figure~\ref{fig:ex}(d) with respect to a full period of the
sine and cosine functions. Observe that the respective result can be
understood as the common region of the functions comprised between their
extrema and the horizontal axis.
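A minimal numerical sketch of the elementwise common product (an illustrative example, assuming uniform sampling over one period) also verifies the null result mentioned above for sine and cosine:

```python
import numpy as np

def common_product(f, g):
    # f <> g = s_fg * min(|f|, |g|), with s_fg = s_f * s_g
    s_f = np.where(f >= 0, 1.0, -1.0)
    s_g = np.where(g >= 0, 1.0, -1.0)
    return s_f * s_g * np.minimum(np.abs(f), np.abs(g))

t = np.linspace(0, 2 * np.pi, 100001)
h = common_product(np.cos(t), np.sin(t))

# Trapezoidal integration of the common product over a full period
# yields ~0, indicating a null relationship between cosine and sine.
integral = np.sum(0.5 * (h[1:] + h[:-1]) * np.diff(t))
assert abs(integral) < 1e-6
```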
The functional associated with the common product, along a support region $S$, can now be
expressed~\cite{CostaJaccard,CostaMset,CostaSimilarity} as:
\begin{equation}
\ll f(t), g (t) \gg \ = \int_{S} s_{fg}(t) \min\left\{ s_f(t) f(t), s_g(t) g(t) \right\} dt
\end{equation}
Observe that, though analogous to the classic inner product,
this functional is actually non-bilinear, therefore not formally constituting an inner
product. It is precisely this non-linear characteristic of the operation that
allows its enhanced performance when applied to filtering and pattern recognition.
Yet, the operation is characterized by great conceptual and informational simplicity,
requiring only a signed addition in computational terms.
Given the common product functional, the respective cross-correlation can be
immediately obtained as:
\begin{equation}
f \Box g \left[\tau \right] \ = \ \ll f(t), g (t-\tau) \gg
\end{equation}
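As an illustrative discrete sketch (not from the original papers), the common product cross-correlation can be evaluated by computing the common product functional at each circular lag of a uniformly sampled periodic signal:

```python
import numpy as np

def common_product(f, g):
    s_fg = np.where(f >= 0, 1.0, -1.0) * np.where(g >= 0, 1.0, -1.0)
    return s_fg * np.minimum(np.abs(f), np.abs(g))

def common_correlation(f, g, t):
    # Evaluate the common product functional at every circular lag;
    # the support S is taken as one full period (an assumption here).
    dt = t[1] - t[0]
    return np.array([np.sum(common_product(f, np.roll(g, k))) * dt
                     for k in range(len(t))])

t = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
f = np.cos(t)
corr = common_correlation(f, f, t)

# At zero lag the common product reduces to |f|, so the functional
# approximates the integral of |cos| over one period, i.e. 4.
assert abs(corr[0] - 4.0) < 1e-2
```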
This operation has been observed to yield interesting results in filtering and
pattern recognition applications~\cite{CostaComparing}. When employed jointly with other multifunction
operations, the common product correlation gives rise to the real-valued Jaccard
and coincidence indices~\cite{CostaJaccard}, which have been verified to allow remarkable performance
in tasks such as unsupervised classification, complex network representation,
and community enhancement~\cite{CostaComparing,CostaCluster,CostaCCompl}.
As such, it becomes of particular interest to contemplate the implementation in
electronic hardware of the elementwise common product, which provides the basis
for a wide range of applications, including those commented on above. This
implementation is addressed in the two following sections.
\section{Electronic Implementation}
Interestingly, all the basic real-valued multiset operations presented in Section~\ref{sec:basic}
can be readily and effectively implemented in analog circuitry
(e.g.~\cite{AnalogDesign,ArtAnalogDesign}), though, as we will see, special
attention is required regarding switching noise, as well as ensuring that the relative
delays between the involved operations are synchronized as much as possible.
All the circuit implementations proposed in the remainder of this work have been
conceived mostly from a didactic perspective, as a proof of concept of the
possibilities put forward in the current work.
Figure~\ref{fig:sign} illustrates a possible implementation of the sign function
by using the electronic device known as \emph{comparator}, which basically
corresponds to an operational amplifier optimized for fast switching. This is
a classic basic circuit involving an operational amplifier~\cite{tobey:1971,graeme:1973,raikos:2009}.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.7\linewidth]{sign_func.png} \\
\caption{The sign detection operation can be immediately
implemented by using a comparator. }
\label{fig:sign}
\end{center}
\end{figure}
\vspace{0.5cm}
The \emph{intersection} between signals $f(t)$ and $g(t)$ can be conveniently obtained
by using an operational amplifier and an analog switch as illustrated in Figure~\ref{fig:minmax}(a),
while the signal \emph{union} can be readily implemented by swapping the operational
amplifier inputs as shown in Figure~\ref{fig:minmax}(b).
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.7\linewidth]{minimum.png} \\ (a) \vspace{0.3cm} \\
\includegraphics[width=0.7\linewidth]{maximum.png} \\ (b) \\
\caption{The intersection and union real-valued multiset operations can be
readily implemented by using an analog switch and an
operational amplifier, both of which being standard devices
in electronics. }
\label{fig:minmax}
\end{center}
\end{figure}
\vspace{0.5cm}
The absolute value of $f(t)$, namely $s_f f(t)$, can be easily obtained by employing an
analog switch, a comparator, and an inverting amplifier, as illustrated in Figure~\ref{fig:absolute}.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.9\linewidth]{absolute.png} \\
\caption{The absolute value operation $s_f f(t)$ can be
implemented by using an analog switch, a comparator,
and a unit gain inverting operational amplifier. }
\label{fig:absolute}
\end{center}
\end{figure}
\vspace{0.5cm}
The conjoint sign function between the signals $f(t)$ and $g(t)$, illustrated in
Figure~\ref{fig:conjoint}, requires two comparators and an analog equivalence gate.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.9\linewidth]{conjoint.png} \\
\caption{The conjoint sign function $s_{fg}$ can be obtained by
combining two comparators and an analog equivalence gate.}
\label{fig:conjoint}
\end{center}
\end{figure}
\vspace{0.5cm}
Another multiset operation that needs to be electronically implemented is what is
here called \emph{signification}, which consists of multiplying a respective
function $f(t)$ by the sign provided by a sign function $s_{f}$.
Observe that this operation can be understood as the inverse of the absolute
value operation with respect to the same sign function. Indeed, the absolute
value operation on any signal $f(t)$, followed by the respective signification,
recovers the original function $f(t)$.
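A brief numerical sketch (an illustrative example, not from the original text) of the signification operation inverting the absolute value operation:

```python
import numpy as np

t = np.linspace(0, 2 * np.pi, 1000)
f = np.sin(t)

s_f = np.where(f >= 0, 1.0, -1.0)  # sign function of f(t)
abs_f = s_f * f                    # absolute value operation: |f(t)|

# Signification: multiply the absolute value back by the stored sign;
# this inverts the absolute value operation and recovers f(t).
recovered = s_f * abs_f
assert np.allclose(recovered, f)
```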
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.9\linewidth]{unfold.png} \\
\caption{The \emph{signification operation} takes an
absolute value function $s_f f(t)$ and recovers its
respective signed original form $f(t)$. Observe that, except for eventual
electronic artifacts, the absolute value operation followed by the respective
signification will have no effect on the input signal $f(t)$, as these
two operations are inverses of each other. }
\label{fig:signification}
\end{center}
\end{figure}
\vspace{0.5cm}
\section{Elementwise Common Product Implementation}
Having proposed preliminary respective electronic implementations for several important
multifunction operations, we are now in position to propose a complete design
of an elementwise common product operator, which is shown in Figure~\ref{fig:cprod}.
This suggested design involves only three operational amplifiers, five comparators,
four switches, and an analog equivalence gate.
This implementation is aimed mostly as a proof of concept, being by no means
intended to be particularly operational or efficient. Indeed, much more efficient
designs can be achieved at the level of more basic components such as transistors,
especially when considering implementations in integrated electronics.
\begin{figure*}[h!]
\begin{center}
\includegraphics[width=1\linewidth]{common_prod.png} \\
\caption{An implementation of the elementwise common product employing
comparators, operational amplifiers, and analog switches. This circuit is
capable of identifying the common area between the two input signals $f(t)$
and $g(t)$ as illustrated in Fig.~\ref{fig:ex}(d). The integration of the elementwise
product, which can be
obtained by adding just another operational amplifier, provides an effective
quantification of the similarity between the two signals. Observe that this
operation is intrinsically non-linear as a consequence of the minimum operator.
More effective designs can be obtained by taking advantage of redundant portions of this
configuration, particularly in integrated electronics implementations.
A point deserving special attention regards the switching noise implied by the
analog switches, as well as the synchronization of the delays introduced by the
incorporated modules.}
\label{fig:cprod}
\end{center}
\end{figure*}
\vspace{0.5cm}
One aspect deserving particular attention regards the need to condition and control
the high frequency switching noise implied by the four analog switches. This can
be addressed by incorporating respective low-pass filtering and related
techniques, though probably at the expense of signal speed. Additional research is
required before an operational version of the proposed implementation of the elementwise
common product can be obtained.
Another important issue regards the relative delays implied by each of the composing
subsystems. In other words, it is important to ensure that these delays are properly
matched so as to ensure proper synchronization when combining the partial
results. This issue, however, is critical only for particularly high frequency signals.
\section{Concluding Remarks}
Electronics and signal processing have relied intensively on algebraic operations such
as sums, subtractions, and products between functions, the latter being particularly
complex and requiring relatively large circuitry.
The recent generalization of multiset concepts to take into account real-valued functions has
paved the way to a wide range of possible new concepts, developments, and applications.
In this work, we addressed the possibility of establishing analog implementations of each
of the main multiset/multifunction operations, including the sign and conjoint sign functions,
the minimum (intersection) and maximum (union) between pairs of signals, as well
as the absolute value and its inverse operation of signification.
These developments allowed us to propose a complete possible implementation of the
elementwise common product, which is the basic element in the respective common product
and common product correlation between signals, all of which have been shown to have impressive
potential for several applications such as in signal processing, pattern recognition, deep
learning, and control systems.
Further developments include, but are not limited to, devising more effective and operational
implementations of the elementwise common product, as well as circuits capable of
performing the common product and respective correlation. Also of interest are respective
implementations in the context of digital signal processing
(e.g.~\cite{Proakis,brigham:1988,oppenheim:2009}), as well as shape and image
analysis~\cite{shapebook}.
\vspace{0.7cm}
\emph{Acknowledgments.}
Luciano da F. Costa
thanks CNPq (grant no.~307085/2018-0) and FAPESP (grant 15/22308-2).
\vspace{1cm}
astro-ph/0502167
\section{Introduction}
The first ever stellar trigonometric parallax was reported by
F. Bessel in 1838 for 61 Cygni, after a ``race'' in which he narrowly
defeated F. G. Wilhelm Struve and T. Henderson, who published the
parallaxes for Vega and $\alpha$ Centauri, respectively, in the next
year. Since then, trigonometric parallax measurements have provided
one of the most important parameters for understanding stellar
astronomy --- distance --- and provide one of the sturdiest rungs on
the cosmic distance ladder. Trigonometric parallaxes are used to
derive the intrinsic luminosities of stars, calculate accurate masses
for binary system components, and answer questions about stellar
populations and Galactic structure. In addition, the solar
neighborhood is mapped out via trigonometric parallaxes, and these
nearby objects provide the brightest examples of many stellar types,
supplying the benchmarks to which more distant stars are compared.
Two of the most important parallax references are the Yale Parallax
Catalog \citep{YPC} and Hipparcos Catalog \citep{Hipparcos}.
Combined, they offer $\sim$120,000 parallaxes both from space and
ground observations. Of these, 92\% of trigonometric parallaxes are
from the Hipparcos mission. However, because of the relatively bright
magnitude limit of Hipparcos, many nearby star candidates were
excluded. Consequently, the faint members of the solar neighborhood
are underrepresented, and these faint red, brown, and white dwarfs are
the objects targeted by recent trigonometric parallax efforts,
including the one discussed in this paper. Recent results for nearby
red and brown dwarfs include the efforts of \citet{Ianna1996},
\citet{Tinney1995, Tinney2003}, \citet{Dahn2002}, and
\citet{Vrba2004}, which together have provided $\sim$130 total
ground-based parallaxes since 1995.
\section{The CTIOPI Effort and Sample}
In order to recover ``missing'' members in the solar neighborhood, the
Research Consortium On Nearby Stars (RECONS) group is currently
carrying out a southern sky parallax survey known as Cerro Tololo
Inter-American Observatory Parallax Investigation (CTIOPI). The
primary goals of CTIOPI are to discover and characterize nearby red,
brown, and white dwarfs that remain unidentified in the solar
neighborhood. This program was selected as a NOAO Survey Program, and
observations commenced in 1999 August. CTIOPI used both 0.9-m and
1.5-m telescopes during the NOAO Survey Program, and has continued on
the 0.9-m as part of the SMARTS (Small and Moderate Aperture Research
Telescope System) Consortium beginning in 2003 February. The RECONS
team at Georgia State University is responsible for data reduction for
the 0.9-m program, while data from the 1.5-m program is being analyzed
by E. Costa and R. M\'{e}ndez of the Universidad de Chile in Santiago.
The extended 0.9-m program has recently surpassed 400 systems on the
observing list, while the final 1.5-m program included $\sim$50
systems that were fainter (and for which less observing time was
awarded).
Most of the target stars (hereafter, the ``pi stars'') are selected
for CTIOPI because available astrometric (e.g., high proper motion),
photometric, or spectroscopic data indicate that they might be closer
than 25 pc. In the 0.9-m program, roughly 95\% of the pi stars are
red dwarfs and the remainder are white dwarfs. The fainter brown
dwarf candidates were included in the 1.5-m program. In all,
$\sim$30\% of the 0.9-m targets are members of what we call the MOTION
sample --- stellar systems having proper motions of at least
1\farcs0/yr. This paper describes the first definitive astrometric
results of CTIOPI, focusing on the results for 36 MOTION systems.
\section{Observations}
\subsection{Astrometry Observations}
The 0.9-m telescope is equipped with a 2048 $\times$ 2048 Tektronix
CCD camera with 0\farcs401/pixel plate scale \citep{Jao03}. All
observations were made using the central quarter of the chip, yielding
a 6\farcm8 square field of view, through $V_{J}$, $R_{KC}$ and
$I_{KC}$ filters (hereafter, the subscripts are not
given)\footnote{Subscript: J $=$ Johnson, KC $=$ Kron-Cousins. The
central wavelengths for $V_{J}$, $R_{KC}$ and $I_{KC}$ are 5475\AA,
6425\AA~and 8075\AA, respectively.}. The dewar containing the CCD
camera is mounted on the telescope with columns oriented in the
north-south direction. A slight rotation relative to the sky is
possible because of instrument flexure and repositioning during
telescope maintenance. This rotation angle can be calibrated, as
discussed in section~\ref{astro.initial.steps}.
The observing procedures employed during CTIOPI mimic those used in
University of Virginia southern parallax program at the Siding Spring
Observatory in Australia, led by P. Ianna, who is a member of the
CTIOPI team. When a star is observed for the first time, exploratory
exposures are taken in the $VRI$ filters to find a suitable set of
reference stars in the field. The parallax filter and position of the
field are selected to balance the brightness of the pi star with
available potential reference stars, and that filter is used for all
subsequent parallax frames. Because most of our pi stars are nearby
star candidates, they are brighter than most of the field stars. We
attempt to place the pi star on the chip so that 5 to 15 field stars
of adequate flux can be included. Typically, a good reference star is
not more than twice as bright as the pi star (in the few cases when
the pi star is not the brightest star in the field), but has at least
1000 peak counts during a typical parallax exposure.
Bias frames and dome flats are taken at the beginning of each night to
allow for basic data reduction calibration. Parallax observations are
usually made within $\pm$30 minutes of a pi star's transit in order to
minimize the corrections required for differential color refraction
(DCR) corrections, which are discussed in section~\ref{dcr}. A few
faint pi stars are observed with a wider hour angle tolerance because
frame acquisition takes longer. Exposure times for parallax frames
typically provide a peak of $\sim$50,000 counts for the pi star
(saturation occurs at 65,535 counts), in an effort to maximize the
number of counts available for pi star and reference star centroiding.
Usually, 3--10 frames are taken in each visit, depending primarily on
the exposure time required. Multiple frames are taken to reduce the
errors on the pi star and reference star positions at each observation
epoch. The typical set of observations required to determine a final
parallax and proper motion includes four seasons of observations
carried out over at least 2.5 years (further details in
section~\ref{pi.quality.check}).
\subsection{$VRI$ Photometry Observations}
\label{sec:phot.reduce}
The $VRI$ photometry reported here was acquired at the CTIO 0.9-m
utilizing the same instrument setup used for the astrometry frames.
All of the results are from observations taken between November 1999
and September 2004. As with the astrometry observations, bias and
dome flat images were acquired nightly and used for basic science
frame calibration.
Most pi stars were observed at $\sec z < 1.8$ (a few were
between 1.8 and 2.0 airmasses because of extreme northern or southern
declinations). Various exposure times were used to reach S/N $>$ 100
for pi stars in each of the $VRI$ filters. Standard star fields,
typically containing $\sim$10 stars in total, from \citet{Landolt1992} and/or E-regions
from \citet{Graham1982} were observed several times each night to
derive transformation equations and extinction curves. In addition,
one or two very red standard stars with $V-I >$ 3.7 were observed in
order to derive extinction coefficients for stars with a wide range of
colors. Typically, a total of 4--5 standard star fields were observed
2--3 times each per night.
\section{Astrometry Reductions}
\subsection{Initial Data Processing, Reference Star Selection, and Trail Plate Selection}
\label{astro.initial.steps}
The basic data reduction for the astrometry CCD frames includes
overscan correction, bias subtraction and flat-fielding, performed
using a customized IRAF package called {\em redpi} (because our pi
stars are primarily red dwarfs). After processing the raw data,
frames are sorted into storage directories by object until there are
enough parallax frames and time coverage to derive a reliable
astrometric solution, typically at least 40 frames over at least two
years. When sufficient frames are available, {\em SExtractor}
\citep{sextractor} is used to determine centroids for each pi star and
a set of reference stars that is chosen using the following general
guidelines:
\begin{enumerate}
\item A single frame is selected that was taken using the parallax
filter. The seeing is required to be better than 1\farcs5 and images
must have ellipticity less than $\sim$20\%.
\item Five to 15 reference stars in the field are selected that evenly
surround the pi star in order to determine reliable plate
rotation, translation, and scaling coefficients.
\item Each reference star must have a complete set of $VRI$
photometry, which is required for DCR corrections and the conversion
of relative to absolute parallax.
\item Using the IRAF {\it imexam} task, each reference star is checked
to make sure that it is not a resolved binary or galaxy.
\item Ideally, all of the reference stars have peak counts above
1000, although some fields require the selection of fainter stars in
order to have a sufficient number of reference stars.
\end{enumerate}
\noindent After the first round of parallax reductions, each reference
star is reexamined for a sizable parallax or proper motion, and
removed from the reference field if necessary.
In order to calculate the parallax factors, accurate coordinates and a
value for the Earth to Solar System barycenter distance need to be
known. The coordinates used for parallax factor calculations are
extracted from the Two Micron All Sky Survey (2MASS) All-Sky Point
Source Catalog via OASIS. Because these objects are high proper
motion stars, all of them were manually identified by comparison with
finding charts instead of retrieving data blindly by setting a
searching radius around a given RA and DEC. The coordinates listed in
Table~\ref{tbl:pi.result} for the pi stars have been shifted to epoch
2000 using available proper motion measurements, primarily
\citet{LHS}, instead of the epoch at which the images were taken by
2MASS. To compute an accurate distance from the Earth to the Solar
System barycenter at the time of observation, the JPL ephemeris DE405
is used.
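The parallax factors mentioned above can be sketched with the standard textbook expressions (an illustrative example; sign conventions vary between authors, and this is not necessarily the exact convention of the CTIOPI pipeline):

```python
import numpy as np

def parallax_factors(alpha, delta, bary_xyz):
    """Standard parallax factors in RA and DEC.
    alpha, delta: star coordinates in radians.
    bary_xyz: equatorial barycentric position of the Earth in AU
    (in practice taken from an ephemeris such as JPL DE405)."""
    X, Y, Z = bary_xyz
    p_alpha = (X * np.sin(alpha) - Y * np.cos(alpha)) / np.cos(delta)
    p_delta = (X * np.cos(alpha) * np.sin(delta)
               + Y * np.sin(alpha) * np.sin(delta)
               - Z * np.cos(delta))
    return p_alpha, p_delta

# Earth on the +X axis, star at alpha = 90 deg on the equator:
# the RA factor is maximal (full 1 AU baseline), the DEC factor vanishes.
pa, pd = parallax_factors(np.pi / 2, 0.0, (1.0, 0.0, 0.0))
assert abs(pa - 1.0) < 1e-12 and abs(pd) < 1e-12
```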
Before calculating the parallax and proper motion of the pi star using
frames taken at many epochs, a single ``trail plate'' is selected as a
fundamental reference frame to which all other images are compared.
This trail plate is used to remove any rotation, translation, and
scaling changes between frames. A customized program, which organizes
the set of frames used during the reductions for a particular field, is
run to calculate the hour angle, parallax factors, and FWHM of the
images in each frame; a trail plate is then selected from the results
using the following criteria:
\begin{enumerate}
\item All reference stars and pi star(s) have peak counts less than
65500 and greater than 100.
\item All reference stars and pi star(s) have ellipticity less than
$\sim$20\%.
\item All the reference stars and pi star(s) have FWHM less than
2\farcs5. This criterion has been relaxed relative to the frame used
for the initial selection of reference stars (for which 1\farcs5 is the
limit) in order to allow the trail plate to be taken nearly coincident
with the meridian.
\item The hour angle is within 2 minutes of the meridian at the
midpoint of the integration. A frame taken very near the meridian
provides a trail plate with minimal DCR.
\end{enumerate}
\noindent Usually, the definitive trail plate is the one having the
smallest hour angle and best seeing of the frames available.
The rotation angle of the trail plate is calculated relative to the
Guide Star Catalog 2.2 (GSC2.2) using WCSTools/imwcs\footnote{WCSTools
is available at
\url{http://tdc-www.harvard.edu/software/wcstools/}.}. Our parallax
images are usually deeper than GSC2.2, so stellar objects with
apparent magnitudes brighter than 18.0 and FWHM smaller than 2\farcs5
(but larger than 0\farcs6 to avoid centroiding on cosmic rays and bad
pixels) are used. Once the rotation angle is determined, the parallax
factors and centroids for all reference stars and pi stars on the
trail plate are recalculated and used as the fundamental reference
frame.
\subsection{Differential Color Refraction Corrections}
\label{dcr}
DCR corrections are required because the pi star and reference stars
are not of identical color; therefore, their positions as seen from
underneath Earth's atmosphere shift relative to one another because of
different, but calibratable, amounts of refraction. Although most of
our parallax observations suffer minimal DCR because they are made
within 30 minutes of the meridian, sometimes frames are taken far
enough from the meridian that it is advantageous to make DCR
corrections, e.g. for important targets observed in non-ideal
observing seasons and in cases when the total number of available
frames can be boosted by utilizing photometry frames taken in the
parallax filter. Different observing and reduction methods used to
measure DCR have been discussed by \citet{Monet1992},
\citet{Tinney1993}, \citet{Stone1996}, and \citet{Stone2002}. Here we
use both the theoretical methods proposed by \citet{Stone1996} and the
empirical methodology proposed by \citet{Monet1992} to measure DCR for
the CTIOPI program, and to make final corrections during astrometric
reductions.
DCR calibration observations for CTIOPI were made during four
photometric nights in December 2002 using the 0.9-m telescope at CTIO.
This is the identical combination of telescope, filters, and CCD
camera used during the parallax program. Ten different fields spread
from zenith to low altitude that contained blue ($V-I = 0.57$) to red
($V-I = 3.22$) stars were selected and observed through the $V$, $R$
and $I$ filters. Each field was observed up to five times per
night through hour angles of $\pm$4 hours. Exposure times were chosen
to be the same as used in the parallax program for each field so that
the faintest reference stars could be analyzed for DCR. In total, 72
stars were included in the final DCR analysis. Although refraction
is, in general, a function of temperature, pressure, and humidity, due
to the stable observing conditions throughout this run, these factors
can be ignored, as discussed in \citet{Monet1992} and
\citet{Stone2002}.
In order to provide a zero point reference frame for the DCR
calculation, one set of images must be taken when the field transits.
In other words, there is no refraction in the RA direction, and we
assign zero refraction in the DEC direction, when the hour angle is
zero. The components of refraction in the RA and DEC directions,
$R_{m}Z_{x}$ and $R_{m}Z_{y}$ respectively (where $R_{m}$ is the mean
refraction), are given by
\begin{eqnarray}
(\alpha-\alpha^{\prime})\cos\delta =\frac{R_{m}\sin HA
\cos\phi}{\cos\zeta}= R_{m}\cos\phi\sin HA\sec\zeta=R_{m}Z_{x} \\
\delta-\delta^{\prime} = R_{m} S \sin\phi\sec\delta(\sec\zeta-\sec(\phi-\delta))=R_{m}Z_{y},
\label{eqn:dcr.correction}
\end{eqnarray}
\noindent where ($\alpha$, $\delta$) are the coordinates without
atmospheric refraction and ($\alpha^{\prime}$, $\delta^{\prime}$) are
the coordinates after atmospheric refraction. The angle $\phi$ is the
latitude of the observing site, HA is the hour angle of a given star
and $\zeta$ is its zenith distance. As discussed in
\citet{Monet1992}, $S$ merely represents the sign of the declination
term, i.e.~$S = 1$ if $(\phi-\delta)\geq 0$ and $S = -1$ if
$(\phi-\delta)< 0$. These empirical measurements assume that $R_{m}$
is a polynomial function of $V-I$ color (see also \citet{Monet1992}).
We have determined the $V$, $R$, and $I$ magnitudes for all 72 stars
in the ten fields used to calculate the DCR so that each filter can be
calibrated against the $V-I$ color (thereby producing three sets of
equations as shown in the next section).
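The geometric factors $Z_x$ and $Z_y$ of Equations 1--2 can be evaluated numerically as follows (an illustrative sketch; the zenith distance is obtained from the standard spherical-astronomy relation, which is an assumption not spelled out in the text):

```python
import numpy as np

def refraction_factors(ha, delta, phi):
    """Geometric factors Z_x, Z_y of Equations 1-2; angles in radians.
    ha: hour angle, delta: declination, phi: site latitude."""
    # Standard relation for the zenith distance zeta:
    cos_zeta = (np.sin(phi) * np.sin(delta)
                + np.cos(phi) * np.cos(delta) * np.cos(ha))
    sec_zeta = 1.0 / cos_zeta
    S = 1.0 if (phi - delta) >= 0 else -1.0   # sign of the DEC term
    Z_x = np.cos(phi) * np.sin(ha) * sec_zeta
    Z_y = (S * np.sin(phi) / np.cos(delta)
           * (sec_zeta - 1.0 / np.cos(phi - delta)))
    return Z_x, Z_y

# At transit (HA = 0) the zenith distance equals |phi - delta|, so both
# refraction components vanish -- the zero point of the calibration.
phi = np.radians(-30.17)   # CTIO latitude (approximate)
Z_x, Z_y = refraction_factors(0.0, np.radians(-26.0), phi)
assert abs(Z_x) < 1e-12 and abs(Z_y) < 1e-12
```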
\subsection{The Final DCR Model for CTIOPI}
The images taken for the DCR model were reduced in a manner identical
to the parallax frames, as discussed in
section~\ref{astro.initial.steps}. The three $VRI$ frames having the
smallest hour angle were selected as trail plates assumed to have no
DCR. Plate constants are calculated using the GaussFit\footnote{This
is a program for least squares and robust estimation that is available
from the Hubble Space Telescope (HST) Astrometry Team {\em
ftp://clyde.as.utexas.edu/pub/gaussfit/manual/}.} program
\citep{Jefferys1987}. Six plate constants are derived so that field
rotation, translation, and scaling can be removed (see
section~\ref{astro.many.epochs}). We ignore any effects of source
proper motion or parallax during the four nights of DCR observations
because they are negligible on that time scale. Consequently, after
calculating the plate constants, the only shifts in stellar centroids
are because of atmospheric refraction. The amount of centroid shift
from the trail plate in the X direction is a direct measure of the
refraction, as represented by the quantity on the far left side of
Equation 1. \citet{Monet1992} (see their Figure 2) showed that because
the refraction in the Y direction has been defined as shown in
Equation 2 (effectively removing any shift in the Y direction for zero
hour angle), the X shift ($R_{m}Z_{x}$) will have more variation than
the Y shift ($R_{m}Z_{y}$) when the hour angle is different from
zero. Therefore, we concentrate on the RA direction to determine the
empirical polynomial function for $R_{m}$.
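The six-constant plate model mentioned above (removing rotation, translation, and scaling between a frame and the trail plate) can be sketched as a generic least-squares fit; this is an illustrative example, whereas the paper itself uses GaussFit for this step:

```python
import numpy as np

def solve_plate_constants(xy_frame, xy_trail):
    """Fit the six-constant linear plate model
        x' = a*x + b*y + c,   y' = d*x + e*y + f
    mapping frame coordinates onto the trail plate."""
    x, y = xy_frame[:, 0], xy_frame[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, xy_trail[:, 0], rcond=None)
    (d, e, f), *_ = np.linalg.lstsq(A, xy_trail[:, 1], rcond=None)
    return a, b, c, d, e, f

# Synthetic check: a small rotation plus scale and offset is recovered.
rng = np.random.default_rng(0)
pts = rng.uniform(0, 1024, size=(12, 2))
theta, s, tx, ty = 0.002, 1.001, 3.0, -2.0
R = s * np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
mapped = pts @ R.T + [tx, ty]
a, b, c, d, e, f = solve_plate_constants(pts, mapped)
assert abs(c - tx) < 1e-6 and abs(f - ty) < 1e-6
```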
To determine the functional form of $R_{m}$, first the hour angle and
$Z_{x}$ for every useful star in the ten DCR calibration fields are
derived. Then, based on the centroid shift and $Z_{x}$ for various
stars observed during the observing run, the slope of $R_{m}$ versus
$Z_{x}$ can be found. Figure~\ref{fig:dcr.slope} shows an example for
LHS 158 and a reference star in the field. The field was observed
from 1.47 hours east of the meridian to 3.71 hours west, including
eight sets of $VRI$ observations at different hour angles. A linear
fit, whose slope is $R_{m}$, was made for each filter to each of the
72 stars selected in the ten fields in order to provide an ensemble of
values, $R_{m}$, as a function of $V-I$ color.
We set the zero point for DCR to be $R_{m} = 0$ when $V-I = 0$,
thereby defining a star of that color to show no DCR, while all other
stars' DCR is measured relative to that. The $0^{th}$ order
coefficient for each field is slightly different from the others
because there is rarely a star with $V-I = 0$ in a frame, but the
offset can be computed by a least squares fit for a polynomial
function to all stars that are present in a given field. By combining
the $R_{m}$ slopes and the $V-I$ values for all 72 stars, we generate
the plots in Figure~\ref{fig:dcr.fit.plot}, showing the empirical fits
with solid curves.
The mean empirical DCR functions\footnote{Polynomial fits of different
orders were calculated for each filter; those with reasonable slopes and
point distributions are given.} for the three
different filters are given by:
\begin{eqnarray}
R_{m,V} =-0.0407(V-I)+0.00941(V-I)^{2}, \nonumber \\
R_{m,R} =-0.0417(V-I)+0.0482(V-I)^{2}-0.0245(V-I)^{3}+0.0036(V-I)^{4}, \nonumber \\
R_{m,I} =+0.0007(V-I).
\label{eqn:dcr.function}
\end{eqnarray}
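These empirical polynomials can be evaluated directly (a sketch transcribed from the equations above; units follow the paper's convention):

```python
def r_m(v_minus_i, band):
    """Mean refraction slope R_m as a function of V-I color,
    transcribed from the empirical DCR functions above."""
    c = v_minus_i
    if band == 'V':
        return -0.0407 * c + 0.00941 * c**2
    if band == 'R':
        return -0.0417 * c + 0.0482 * c**2 - 0.0245 * c**3 + 0.0036 * c**4
    if band == 'I':
        return 0.0007 * c
    raise ValueError("band must be 'V', 'R', or 'I'")

# The I band shows the least DCR of the three filters,
# and for very red stars (V-I > 2.6) R exceeds V, as noted in the text.
assert abs(r_m(2.0, 'I')) < abs(r_m(2.0, 'V'))
assert abs(r_m(3.0, 'R')) > abs(r_m(3.0, 'V'))
```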
The theoretical curves for all three filters were also calculated
using the model from \citet{Stone1996} and are shown in
Figure~\ref{fig:dcr.fit.plot} as dashed lines\footnote{The FORTRAN
code used to generate the curves was kindly provided by M. Begam from
the Siding Spring Observatory parallax project, led by P. Ianna.}. A
hypothetical field at DEC $= -26^{\circ}$, ``observed'' during a night with
temperature $T = 12^{\circ}$C and $40\%$ humidity, was chosen to
generate the model curves. These conditions are similar to those
encountered during CTIOPI observations. Twelve stars with spectral
types of A0 V to M5 V were selected for the model, and were
``observed'' at positions that were 0 to 3 hours from the meridian.
As expected, Figure~\ref{fig:dcr.fit.plot} shows that $I$ band has the
least DCR of the three filters. In all three filters the average
difference between the model and the empirical curve is always less
than 6 mas for stars with $V-I <$ 3.2. Because our DCR sample is
deficient in very red stars, the difference between the empirical and
theoretical curves increases at the red end of the $R$ band
calibration. Note that when the stellar color is redder than $V-I =
2.6$, stars observed through the $R$ filter will actually experience
more DCR than they will when observed through the $V$ filter. This
result arises because we are discussing ``differential'' color
refraction among a set of stars. At a given position in the
sky, the amount of refraction is caused primarily by two factors ---
how photons in a given filter bandpass are refracted by the Earth's
atmosphere, and how the number of photons changes within the bandpass,
i.e. the slopes of the various stellar spectra. In the case of the
$VRI$ filters, the $R$ filter has the largest amount of refraction for
the reddest stars because both factors are important, whereas in the
$V$ band the slopes of the stellar spectra do not change much for very
red stars, and in the $I$ band, the atmosphere does not refract the
photons significantly, regardless of a star's color. Consequently, the
DCR correction for each star can be made by obtaining its $V$ and $I$
photometry and applying Equations 1--3.
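The empirical corrections above can be evaluated directly from a star's $V-I$ color. The sketch below is an illustrative encoding of Equations 1--3, not the actual {\em redpi} pipeline code; the function name is invented, and the slope units are those of the empirical fits in Figure~\ref{fig:dcr.fit.plot}.

```python
# Illustrative encoding of the empirical DCR slope functions (Eqs. 1-3).
# Input: the star's V-I color; output: the slope R_m for the requested
# filter, in the units of the empirical fits. Not the actual pipeline.

def dcr_slope(v_minus_i, band):
    """Empirical DCR slope R_m as a function of V-I for filter 'band'."""
    c = v_minus_i
    if band == "V":
        return -0.0407 * c + 0.00941 * c**2
    if band == "R":
        return (-0.0417 * c + 0.0482 * c**2
                - 0.0245 * c**3 + 0.0036 * c**4)
    if band == "I":
        return +0.0007 * c
    raise ValueError("band must be 'V', 'R', or 'I'")
```

Consistent with the discussion above, for colors redder than $V-I \approx 2.6$ the magnitude of the $R$-band slope exceeds that of the $V$-band slope.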
A valuable comparison of astrometry reductions is shown in
Figure~\ref{fg:dcr.gj1061}, in which results from two reductions are
presented for the same data --- one with and one without DCR
corrections. A series of high hour angle measurements were taken in
mid-2003 to test our DCR protocol. The effects of DCR corrections are
clearly seen when comparing these two panels. In the case of no DCR
corrections (the two plots on the left), the X direction residuals
show a very deep ``valley'' and the Y direction residuals show a large
scatter. After the DCR corrections are applied, the X residuals
flatten out, and the Y residuals are reduced and more symmetric around
zero. The standard deviations for X and Y residuals drop from 12.5 and
8.6 mas to 5.6 and 8.4 mas, respectively, when DCR corrections are
made. The larger reduction in the X direction is expected because the
differential refraction is more significant in the RA direction.
\subsection{Least Squares Reduction of Images Taken at Many Epochs}
\label{astro.many.epochs}
Once DCR corrections are incorporated into the data reduction
pipeline, the positions of a pi star and a set of reference stars can
be accurately computed for an ensemble of frames, with each frame in
the ensemble being compared to the trail plate. The relationship
between a frame and the trail plate is based on the measured positions
of reference stars only (not the pi star). A new set of coordinates
for each reference star is derived as a function of the trail plate
coordinates and a set of constants:
\begin{eqnarray}
\xi=Ax+By+C, \nonumber \\
\eta=Dx+Ey+F,
\label{eq:plate.constant}
\end{eqnarray}
\noindent where $(x,y)$ are the original coordinates of a reference
star, A--F are the {\em plate constants}, and $(\xi,\eta)$ are the
coordinates after the transformation. This six-constant model allows
for a different scale in both the X and Y directions, compensates for
different amounts of translation in both directions, and includes a
correction for any instrument rotation. The higher order plate
constants --- radial distortion: $Rx(x^{2}+y^{2})$, coma: $Smx$ (m is
magnitude), and decentering distortion: $P(x^{2}+y^{2})$
\citep{Eichhorn1974} --- are not included in the current calculations
because parallax results from our standard stars are within 2$\sigma$
of all other observations and no systematic differences are seen
(discussed in section~\ref{pi.quality.check}).
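As a minimal sketch (not the pipeline's actual solver), the six plate constants of Equation 4 can be recovered by ordinary linear least squares from the measured reference-star positions $(x,y)$ and their trail-plate counterparts $(\xi,\eta)$:

```python
# Hedged sketch: solve for the six plate constants A-F of Equation 4 by
# linear least squares, given matched reference-star coordinates on a
# frame (x, y) and on the trail plate (xi, eta). Illustration only.
import numpy as np

def solve_plate_constants(xy, xieta):
    """xy, xieta: (n, 2) arrays of reference-star coordinates.
    Returns (A, B, C, D, E, F) minimizing the residuals of
    xi = A x + B y + C and eta = D x + E y + F."""
    x, y = xy[:, 0], xy[:, 1]
    M = np.column_stack([x, y, np.ones_like(x)])
    (A, B, C), *_ = np.linalg.lstsq(M, xieta[:, 0], rcond=None)
    (D, E, F), *_ = np.linalg.lstsq(M, xieta[:, 1], rcond=None)
    return A, B, C, D, E, F
```

With a well-distributed set of reference stars the two solves are well conditioned, and the same design matrix serves both coordinates.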
Analysis of the stellar path of the pi star must take into account
both proper motion and parallax, but each reference star also
experiences both motions on the sky. Because accurate proper motions
and parallaxes are rarely known for reference stars, we assume that
the reference grid has $\sum_{i}\pi_{i} = 0$ and $\sum_{i}\mu_{i} = 0$
(\citealt{Altena1986}, \citealt{Benedict1999}). Hence, the set of
constants for each frame outlined in Equation 4 above is expanded to
include the reference star motions, resulting in an expanded set of
equations:
\begin{eqnarray}
\xi_{1}^{t} = A^{1}x_{1}^{1}+B^{1}y_{1}^{1}+C^{1}+\mu_{x1}T+\pi_{1} P_{\alpha 1}, \nonumber \\
\xi_{2}^{t} = A^{1}x_{2}^{1}+B^{1}y_{2}^{1}+C^{1}+\mu_{x2}T+\pi_{2} P_{\alpha 2}, \nonumber \\
\xi_{3}^{t} = A^{1}x_{3}^{1}+B^{1}y_{3}^{1}+C^{1}+\mu_{x3}T+\pi_{3} P_{\alpha 3}, \\
... \nonumber\\
\xi_{n}^{t} = A^{1}x_{n}^{1}+B^{1}y_{n}^{1}+C^{1}+\mu_{xn}T+\pi_{n} P_{\alpha n}, \nonumber
\end{eqnarray}
\noindent where superscripts indicate frame numbers, subscripts
indicate the identification numbers of reference stars, the product
$\mu T$ is the star's total proper motion relative to the date of the
trail plate, the product $\pi P$ is the parallax offset from the date
of the trail plate ($P_{\alpha}$ is the parallax factor in RA), and
$\xi_{n}^{t}$ represents the x coordinate for the trail plate. The
plate constants $A$, $B$, and $C$ can be calculated from these
equations using least squares methods which are constrained by the
conditions of reference star parallaxes and proper motions summing to
zero. A similar set of equations is obtained for the y coordinate
(plate constants $D$, $E$, and $F$ in Equation 4). After the plate
constants and reference star values for $\mu$ and $\pi$ are acquired,
$\mu$ and $\pi$ (and their errors) are computed for the pi star.
The least squares calculation is run using Gaussfit (discussed in
section 4.3), which typically requires three iterations to minimize
$\chi^{2}$. The image quality of each frame and the reliability of
reference stars are determined using the results of the initial run of
Gaussfit. At this stage, reference stars with high proper motion,
large parallax, large centroid residuals, or high photometric parallax
are deleted. Entire frames with high residuals are also removed. The
Gaussfit program is then run again to derive the final pi star $\mu$
and $\pi$ values.
\subsection{Conversion from Relative Parallax to Absolute Parallax}
What we have measured is the parallax of the pi star relative to the
set of reference stars, i.e.~the relative trigonometric parallax,
$\pi$. As discussed in \cite{Altena1974} and \cite{Altena1988}, there
are (at least) three different ways to convert this {\em relative
parallax} to the~{\em absolute parallax}, which is a measure of the
true distance to the pi star --- using statistical methods,
spectroscopic parallaxes, or photometric parallaxes for the reference
stars.
Statistical methods rely on a model of the Galaxy for the disk and
halo. By adopting a Galactic model and knowing the apparent
magnitudes and Galactic coordinates of the reference stars, parallaxes
can be estimated for the reference stars. No reference star color
information is used. For example, \cite{Altena1988} concludes that
faint halo stars ($14.5 < V < 15.5$) have a narrow distribution of
parallaxes for fields near the north Galactic pole. However,
bright disk stars ($10.5 < V < 11.5$) exhibit a wide range of
parallaxes. Therefore, faint reference stars have smaller mean
parallaxes and require a small correction for the relative to absolute
parallax conversion, while brighter reference stars require larger
corrections. As discussed in section~\ref{astro.initial.steps}, the
reference stars chosen for CTIOPI are the brightest available in the
pi star fields (in order to obtain better centroids), so we do not use
a statistical methodology for the conversion of relative to absolute
parallax.
Using spectroscopic parallaxes is arguably the most reliable method to
determine the correction from relative to absolute parallax because
the spectral type and luminosity class of every reference star are
determined. This allows us to distinguish main sequence stars from
giants and subdwarfs, and to apply correct $M_{V}$--color relations for
each class of star. However, this method requires a significant
amount of observing time, and is not practical for CTIOPI, in which
several hundred stars with $\sim$10 reference stars each are observed.
Instead, we use the photometric parallax method to convert the pi
star's relative parallax to its absolute parallax. $VRI$ magnitudes
for the pi star and all reference stars have already been acquired for
the DCR corrections, so the same data can be used to estimate a
parallax of each reference star. However, because of the lack of
information about the luminosity class of these stars, these
corrections assume that all of the reference stars are main-sequence
stars. Additional corrections for the contamination by giants or
galactic reddening have not been included because such corrections are
anticipated to be much smaller than the typical errors on the final
parallaxes.
The fundamental relations between $M_{V}$ and color used in CTIOPI are
based on the sample of stars within 10 pc (\citealt{Henry1997}, \citealt{Henry2004}).
Close multiple stars, subdwarfs, evolved stars, and stars with poor
trigonometric parallaxes have been deleted from this sample to provide
reliable $M_{V}$--color relations. Three different colors, $V-R$,
$V-I$, and $R-I$, are used to calculate the mean photometric parallax
for each reference star. The error on the photometric parallax for an
individual star is taken to be the average difference between the mean
photometric parallax and the parallax from each color. The weighted
mean photometric parallax of the entire set of reference stars is then
calculated, and represents the final correction from relative to
absolute parallax. The error in the final correction is determined
from
\begin{equation}
\frac{1}{n}\sum_{i=1}^{n}\frac{err_{i}}{\pi^{phot}_{i}}\times \pi_{weighted-mean},
\end{equation}
\noindent where $n$ is the number of reference stars, $err_{i}$ is the
photometric parallax error of star $i$, and $\pi_{weighted-mean}$ is
the weighted mean photometric parallax of the ensemble of reference
stars. We note that the mean absolute parallax correction for all 36
MOTION stars in Table~\ref{tbl:pi.result} is 1.47 $\pm$ 0.17 mas.
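The correction and its error formula above can be sketched as follows. The use of inverse-variance weights for the weighted mean is an assumption (the text does not specify the weighting), and the function name is invented.

```python
# Hedged sketch: the relative-to-absolute correction from per-star
# photometric parallaxes, and its error per Equation 6. Illustration
# only; inverse-variance weighting is an assumption.
import numpy as np

def absolute_correction(pi_phot, err):
    """pi_phot, err: photometric parallaxes of the reference stars and
    their errors (mas). Returns (weighted mean correction, its error)."""
    pi_phot = np.asarray(pi_phot, dtype=float)
    err = np.asarray(err, dtype=float)
    w = 1.0 / err**2                  # inverse-variance weights (assumed)
    mean = np.sum(w * pi_phot) / np.sum(w)
    mean_err = np.mean(err / pi_phot) * mean   # Equation 6
    return mean, mean_err
```

For identical stars the correction reduces to the common parallax, and the error to the common fractional error times that parallax.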
\section{Photometry Reductions}
The same {\em redpi} package discussed in
section~\ref{astro.initial.steps} is used to process the raw
photometry data. Stars of interest, including pi stars, reference
stars, and photometric standard stars, are tagged and enclosed in an
aperture with a 7\arcsec~radius if there are no nearby background
stars that might contaminate the photometry. A 7\arcsec~radius
aperture was used for the standard stars in order to match the
aperture typically used by \citet{Landolt1992}. After removing cosmic
rays, the instrumental magnitude is determined by summing all of the
counts for pixels falling in the aperture. In the few cases where a
contaminating source is within the 7\arcsec~aperture, an aperture
correction is performed. A sky annulus with 20\arcsec~inner radius
and 3\arcsec~width was applied to calculate the sky background counts.
The transformation equation for apparent magnitude is
\begin{equation}
m_{standard}=m_{inst}+a_{1}+a_{2}(AM)+a_{3}(color)+a_{4}(color)(AM),
\end{equation}
\noindent where $m_{inst}$ is the instrumental magnitude from {\em
IRAF/DAOPHOT}, $a_{1}$ through $a_{4}$ are the transformation
coefficients, $color$ is the color term (which may have various
permutations using $VRI$ magnitudes), $AM$ is the airmass and
$m_{standard}$ is the standard magnitude from \citet{Landolt1992}.
The {\em IRAF/fitparam} task is used to compute these coefficients via
a least squares method. To generate the final $VRI$ magnitudes on the
Johnson-Kron-Cousins system, the transformation equation is applied
using a custom-made Perl task. The advantage of this Perl script over
the {\em IRAF/evalfit} task is that the output file contains not only
the $VRI$ apparent magnitudes, but image names, magnitude errors, and
the date of data reduction. These output files are then concatenated
into a large master photometry database for future access.
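Once the coefficients $a_{1}$--$a_{4}$ are fit, applying the transformation equation is straightforward; the sketch below is illustrative, and the coefficient values in the usage note are placeholders, not fitted CTIOPI values.

```python
# Applying the transformation equation once the coefficients a1-a4 have
# been fit (by IRAF/fitparam in the actual pipeline). Placeholder
# coefficients only; not fitted CTIOPI values.

def to_standard(m_inst, airmass, color, a1, a2, a3, a4):
    """m_standard = m_inst + a1 + a2*AM + a3*color + a4*color*AM."""
    return m_inst + a1 + a2 * airmass + a3 * color + a4 * color * airmass
```

For example, with placeholder coefficients $a_{1}=25.0$, $a_{2}=-0.15$, $a_{3}=0.05$, $a_{4}=-0.02$, an instrumental magnitude of 10.0 at airmass 1.2 and color 1.5 maps to 34.859.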
\section{Parallax Results}
\subsection{Parallax Results for Calibration Stars}
\label{pi.quality.check}
Seven parallax standard stars were selected to check the reliability
of CTIOPI results. They were selected so that different parts of the
sky were represented. All but one, LHS 1777, are within 10 pc and
have final parallax determinations with more than 60 frames spanning
more than 2.5 years.
The trigonometric parallax results for these stars from CTIOPI and
other sources are shown in Table~\ref{tb:parallax.standard} and
Figure~\ref{fg:parallax.standard.plot}. Note that all of the measured
CTIOPI parallaxes are within $2\sigma$ of all other observations,
indicating that the current parallax pipeline, DCR corrections, and
conversion from relative to absolute parallax produce reliable
results. The final parallax error is a combination of many factors,
including (1) the accuracy of the coordinates, (2) the quality of the
reference star frame (brightness, distribution), (3) the accuracy of
the (x,y) centroids, including any ellipticity caused by any close
component, (4) the total number of parallax images, (5) the time span
of the available frame series, (6) the parallax factor coverage, (7)
the DCR corrections, and (8) the correction of relative to final
absolute parallax. The first three factors can not easily be modified
after they are chosen. However, the number of observations, the
duration of the frames series, and the parallax factor coverage, can
be controlled and depend only on the resources, staffing, and stamina
of the CTIOPI Team. At present, a pi star is generally considered
``finished'' when all of the following criteria are
met\footnote{Exceptions occur when the pi star is faint, when only
poor reference star configurations are available, or when a pi star is
blended with a background source or close physical companion.}:
\begin{enumerate}
\item the relative parallax error is less than 3 mas
\item the pi star has been observed for 4 or more seasons (one season
includes 2-3 months of observations)
\item the pi star has been observed for at least 2.5 years
\item there are at least 40 frames of the field
\item $VRI$ photometry has been obtained for the field
\end{enumerate}
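The five criteria above amount to a simple checklist; as a sketch (thresholds from the text, argument names invented):

```python
# The five "finished" criteria as a checklist predicate. Thresholds come
# from the text; the argument names are invented for illustration.

def is_finished(rel_pi_err_mas, n_seasons, years, n_frames, has_vri):
    return (rel_pi_err_mas < 3.0 and n_seasons >= 4 and
            years >= 2.5 and n_frames >= 40 and has_vri)
```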
In practice, an extended time span results in meeting most of these
criteria, so it is perhaps the best single benchmark to be used to
evaluate parallax errors for the entire survey.
Figure~\ref{fg:pi.time.line} illustrates how time coverage affects the
relative parallax error for 10 different stars within 10 pc (six are
calibration stars and four are additional CTIOPI targets). Parallax
reductions were executed using various subsets of the complete data
sets (each star indicated with a different symbol). A few stars show
errors of only $\sim$2 mas after about one year of observations. In
these cases, the parallaxes determined can be quite inaccurate, but a
good fit with minimal formal error can be made to the proper motion
and parallactic motion even though they have not yet been adequately
decoupled. When key high parallax factor images are taken later, a
different stellar path is determined and the error represents reality.
The mean error for all of the reductions for all 10 fields is 2.45
mas. This error is reached at a time point 2.32 years into an
observing sequence. We therefore conclude that $\sim$2.5 years of
coverage is sufficient to determine accurate parallaxes with
acceptable final errors based on the current time baseline we
have. This is consistent with the results of \citet{Dahn2002}, who
find that the USNO parallaxes are stable after about 2 years of
observation.
\subsection{Parallax Results for MOTION Stars}
Complete astrometric results for 36 MOTION systems and the seven
calibration stars are presented in Table~\ref{tbl:pi.result}. These
are the first trigonometric parallaxes for 33 of the MOTION systems
(GJ 545, GJ 754, and LHS 500/501 have improved parallaxes; see
section~\ref{sec:notes} below). The first two columns are the
identifiers and coordinates. The third column reports the filter used
for parallax frames. The next four columns provide observational
statistics. N$_{sea}$ indicates the number of seasons observed, where
2-3 months of observations count as one season. The letter ``c''
indicates a continuous set of observations where multiple nights of
data were taken in each season, whereas an ``s'' indicates scattered
observations when some seasons have only a single night of
observations. Generally, ``c'' observations are better. A $+$
indicates that three or fewer individual images are used in one or
more seasons that are not counted in N$_{sea}$. N$_{frm}$ is the
total number of frames used in the final reduction, and Years
indicates the number of years spanned by the full reduction set.
N$_{ref}$ indicates the number of reference stars used during parallax
reductions. Columns 8-10 report the relative parallax, size of the
relative to absolute parallax correction, and the final absolute
parallax, respectively. The next two columns are the proper motion
and the direction of proper motion. The thirteenth column is the
derived tangential velocity for each pi star. The last column contains
a ``!'' if there are notes in section~\ref{sec:notes}.
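The tangential velocity in the thirteenth column follows from the standard relation $v_{tan} = 4.74\,\mu/\pi$ km/s, with $\mu$ in arcsec/yr and $\pi$ in arcsec; as a sketch:

```python
# Standard relation for tangential velocity: v_tan [km/s] = 4.74 * mu /
# pi, with proper motion mu in arcsec/yr and absolute parallax pi in
# arcsec. (4.74 km/s corresponds to 1 AU/yr.)

def tangential_velocity(mu_arcsec_per_yr, pi_arcsec):
    return 4.74 * mu_arcsec_per_yr / pi_arcsec
```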
\subsection{Notes on Individual Systems}
\label{sec:notes}
Here we comment on individual systems that have a ``!'' in the notes column
of Table~\ref{tbl:pi.result}.
{\bf GJ 1050 (LHS 157)} The field lacks bright reference stars, so
some reference stars with fewer than 100 peak counts are included,
causing a relatively large parallax error of 4.44 mas. The
photometric distance from the \citet{Henry2004} relations is
14.9$\pm$2.2 pc, which is consistent with our trigonometric distance,
thereby precluding any relatively bright unseen companion that might
cause the high error.
{\bf GJ 1068 (LHS 22)} is a new RECONS sample member at a distance of
6.97 $\pm$ 0.09 pc. \citet{Ianna1994} reported a preliminary parallax
of 0\farcs1416$\pm$0\farcs0029 (Ianna 2004, private communication,
unpublished).
{\bf LHS 193AB} is a new multiple system reported in \citet{Jao03}
with a separation of 12\farcs6. A parallax has been determined only
for LHS 193A because LHS 193B is too faint. The LHS 193AB system is a
member of the MOTION sample based on the LHS catalog \citep{LHS} value
of $\mu =$ 1\farcs023/yr, but \citet{Bakos2002} flag this object as
having a problematic proper motion. Our result of $\mu =$
0\farcs9964/yr indicates a proper motion slightly less than
1\arcsec/yr. We now have a longer time base than given in
\citet{Jao03}, but no orbital motion is detected. Reference star \#3
(RA $=$ 04 32 25.54, DEC $=$ $-$39 03 14.6, epoch $=$ J2000.0) is
relatively nearby, having $\pi_{rel} =$ 0\farcs03054$\pm$0\farcs00168,
$\mu =$ 0\farcs035/yr, and $V-I = 2.71$, and was dropped from the
final reduction.
{\bf LHS 225AB} is a multiple system reported in \citet{Jao03} and
also in NLTT Catalog \citep{NLTT} with a separation of 2\farcs5.
Parallaxes are determined for both components, but images with
ellipticity greater than 20\% had to be included during data reduction
because of the proximity of the two sources. This causes both
parallaxes to have relatively high errors.
{\bf GJ 1123 (LHS 263)} is a new RECONS sample member at a distance of
9.02 $\pm$ 0.16 pc. The spectroscopic and photometric distances
estimated by \citet*[7.6 pc]{Henry2002} and \citet*[7.5$\pm$1.2
pc]{Henry2004} both agree with this measurement to within 17\%.
{\bf GJ 1128 (LHS 271)} is a new RECONS sample member at a distance of
6.53 $\pm$ 0.10 pc, confirming the distance estimates of 6.6 pc in
\citet{Henry2002} and 6.4 $\pm$ 1.0 pc in \citet{Henry2004}.
{\bf GJ 1129 (LHS 273)} is a new NStars sample member at 11.00 $\pm$
0.46 pc, confirming the distance estimate of 11.6 pc in
\citet{Henry2002}. Images of the pi star are contaminated by a faint
background star within a few arcseconds throughout the frame series,
and blended during the last two epochs. This contamination causes the
parallax residuals to have a ``perturbation-like'' curve resulting in a
relatively large parallax error of 3.78 mas. Nonetheless, the
parallax result after the first 2.02 years matches the result after
the full 4.27 years of the current dataset, so the result is reliable.
{\bf DENIS J1048-3956} is a new RECONS sample member at a distance of
4.04 $\pm$ 0.03 pc, confirming the distance estimate of 4.5 $\pm$ 0.7
pc in \citet{Henry2004}. \citet{Deacon2001} determined a
trigonometric parallax of 0\farcs192$\pm$0\farcs037 using five
SuperCOSMOS photographic plates. CTIOPI has improved the result to
0\farcs24771$\pm$0\farcs00155\footnote{The result from CTIOPI 1.5m
\citep{Costa2004} is 0\farcs24978$\pm$0\farcs00181.}, making DENIS
J1048-3956 the 28th nearest stellar system (after including two stars
that are slightly closer for which we have preliminary, but as yet
unpublished, parallax values).
{\bf LHS 300AB} is a new multiple system reported in \citet{Jao03}
with a separation of 4\farcs3. A mixture of resolved and unresolved
images are included in the dataset, but because the B component is 4.9
mag fainter than A in the filter chosen for parallax frames, $R$, the
centroid is not significantly corrupted by B.
{\bf LHS 382} is close to the ecliptic, so the axis of the parallactic
ellipse is small in the Y direction. Strong nebulosity is seen in this
field.
{\bf LTT 6933 (LHS 3292)} is a member of the MOTION sample based on
the \cite{Bakos2002} value of $\mu =$ 1\farcs03/yr derived using
POSS-I and POSS-II plates separated by 13.8 years. The LHS catalog
reports $\mu =$ 0\farcs996/yr, which is confirmed by our result of
$\mu =$ 0\farcs9593/yr.
{\bf GJ 1226AB (LHS 263AB)} is a multiple system reported in
\citet{Jao03} and \citet{Vilkki1984} with a separation of 1\farcs4.
Parallaxes were determined for both components. All 105 available
frames were examined manually and only 59 images with good seeing were
selected for data reduction. The absolute parallax correction for
this field is over 5 mas, much larger than our mean correction.
Consequently, the mean correction of 1.47 mas is adopted
instead. Further investigation is necessary.
The Yale Parallax Catalog \citep{YPC} gives parallaxes for {\bf GJ 545
(LHS 369)}, $\pi_{trig}$ $=$ 0\farcs0911$\pm$0\farcs015, {\bf GJ 754
(LHS 60)}, $\pi_{trig}$ $=$ 0\farcs1752$\pm$0\farcs0101, and {\bf LHS
500/501}, $\pi_{trig}$ $=$ 0\farcs075$\pm$0\farcs0171. Our results
have significantly improved the parallaxes by factors of 11, 7, and
11, respectively. LHS 500/501 is a wide binary with separation
107\arcsec~for which we have determined parallaxes for both
components. The two parallaxes are consistent, differing by only
1.8$\sigma$.
{\bf Proxima Centauri (LHS 49)} is one of our parallax calibration
stars. Proxima is brighter than the 0.9-m telescope limit in the $I$
band, so $VRI$ from \citet{Bessel1990} has been adopted for it, with
proper transformations to the Johnson-Kron-Cousins system used for the
calculation of the DCR corrections. The last epoch of data presented
here (Dec 2003) was taken at an hour angle greater than 4 hours, so
the theoretical model of \citet{Stone1996} was used for DCR, rather
than the empirical model. We note that our value of $\pi_{trig}$ $=$
0\farcs77425$\pm$0\farcs00208 is the most precise ground-based
parallax ever determined for the nearest star to our Solar System, and
has a formal error 13\% smaller than the Hipparcos result.
\section{$VRIJHK_{s}$ Photometry Results}
The $VRIJHK_{s}$ photometry for the 48 stars in 43 systems is
presented in Table~\ref{tbl:phot.result}. After the two names, the
next four columns are the new optical $VRI$ photometry and the number
of new observations taken during CTIOPI. For comparison purposes,
references for previously published photometry are listed in the
seventh column. The next three columns are the infrared $JHK_{s}$
photometry (rounded to the hundredth) from 2MASS. Spectral types and
references are given in the last two columns.
The $VRI$ data have been reduced as discussed in
section~\ref{sec:phot.reduce}. Most of the stars are reduced using an
aperture 7\arcsec~in radius. A few stars required smaller aperture
sizes in order to separate two close components: LHS 193B (4\arcsec),
LHS 225AB (2\arcsec), LHS 300AB (2\arcsec), and GJ 1226AB (1\arcsec).
Errors from the fits of standard stars (external errors) are estimated
to be $\pm$0.02 at $V$, $R$, and $I$. Because most of the pi stars are
bright, the signal-to-noise errors (internal errors) are usually
from 0.001 to 0.008 mag. The exceptions are LHS 193B (0.04, 0.05,
0.05 mag at $VRI$, respectively), DENIS J1048-3956 (0.02 at $V$),
and LHS 300B (0.01, 0.02, 0.02). We estimate that night-to-night
repeatability errors for the faintest stars in the CTIOPI program (the
worst case) are $\sim$0.03, as discussed in \citet{Henry2004}, except
for those stars that are possibly variable, e.g.~DENIS J1048-3956. Thus,
the combination of all three errors for the relatively bright stars
presented here is typically $\sim$0.03 mag at $VRI$.
Infrared photometry in the $JHK_{s}$ system has been extracted from
2MASS. The $JHK_{s}$ magnitude errors from the total photometric
uncertainties, including global and systematic terms, are almost
always less than 0.05 mag and are typically 0.02-0.03 mag. The
exceptions are LHS 193B (errors of 0.11, 0.16 and 0.18 at $JHK_{s}$,
respectively), LHS 271 (0.05 at $H$), Proxima Cen (0.06 at $H$) and
LHS 3292 (0.06 at $H$).
\section{Discussion}
\label{discussion}
In this paper, the CTIOPI team presents the first substantial set of
trigonometric parallaxes for stars with proper motion greater than
1\farcs0/year since the Yale Parallax Catalog and the Hipparcos
mission. \citet{Hambly1999}, \citet{Dahn2002}, \citet{Tinney2003} and
\citet{Vrba2004} have reported a total of 1, 1, 2 and 5 first
trigonometric parallaxes for MOTION systems, respectively. All of
those studies concentrated on the (very) cool end of main sequence, L
or T dwarfs, or in a single case, a white dwarf. Obviously, the
MOTION systems are potentially nearby stars, and this is borne out by
our results --- Table~\ref{tbl:pi.result} shows that four of the
systems are new entrants to the RECONS 10 pc sample, which requires a
reliable trigonometric parallax published in a refereed journal for
inclusion. Furthermore, 22 additional systems are new members of the
NStars (25 pc) sample, and only seven systems lie beyond the NStars
horizon. In sum, the first trigonometric parallaxes reported here for
33 MOTION systems provide reliable distances to 41\% of the MOTION
systems south of $\delta$ $=$ 0 that previously had no trigonometric
distance measurements.
The combination of accurate $\pi_{trig}$ and $VRIJHK_{s}$ photometry
permits the construction of reliable HR diagrams and offers the
opportunity for insight into the MOTION sample. Here we present HR
diagrams for the MOTION stars with new and improved parallaxes from
CTIOPI, split into single-star systems and binaries (the parallax
standard stars are not included in this discussion). In particular,
we discuss the identification of new nearby subdwarfs, and two
remarkable new K/M type subdwarf-white dwarf binaries (hereafter,
sdK/M+WD).
\subsection{HR Diagram for Single MOTION Stars}
In Figure~\ref{fg:color.mag.1}, we plot $M_{K_{s}}$ against the $V-K_{s}$
color for all stars in Table~\ref{tbl:phot.result}, excluding the
parallax calibration stars and the five binary systems in the sample
--- LHS 193AB, LHS 225AB, LHS 300AB, LHS 500/501, and GJ 1226AB.
These binaries will be discussed in the next section. Because of the
high quality $\pi_{trig}$ and $VRIJHK_{s}$ photometry, the errors in
$M_{K_{s}}$ and $V-K_{s}$ are roughly the size of the symbols.
By comparing our sample with the main sequence stars from the RECONS
10 pc sample and subdwarfs from \citet{Gizis1997}, we can estimate the
luminosity classes for several stars without spectral types. Of the
31 single MOTION stars, the seven labeled on the plot (and indicated
with open circles) do not have spectral types. Three of these are new
subdwarfs --- LHS 158 at 40.1 pc, LHS 382 at 48.3 pc and LHS 521 at
46.3 pc. It is clear from Figure~\ref{fg:color.mag.1} that LHS 521 is
an extreme subdwarf. The remaining four stars without spectral types
are main sequence stars that are all new NStars members --- ER 2 at
11.9 pc, WT 1827 at 12.3 pc, LTT 6933 at 16.4 pc, and LHS 539 at
18.9 pc. Among the 24 stars with spectral types, three have been
misclassified as main sequence stars, but are likely to be nearby
subdwarfs --- LHS 406 at 21.1 pc ($M_{K_{s}}=$ 7.39, $V-K_{s}=$ 4.04), WT
248 at 26.0 pc ($M_{K_{s}}=$ 7.79, $V-K_{s}=$ 4.65), and LHS 440 at 27.1
pc ($M_{K_{s}}=$ 6.79, $V-K_{s}=$ 4.03). Spectroscopic observations are
necessary to confirm their luminosity classes.
This highly kinematically biased sample is of course likely to include
Galactic thick disk members and even a few high velocity field halo
subdwarfs. The tangential velocities of the new subdwarfs LHS 158
(191 km/sec), LHS 382 (327 km/sec), and LHS 521 (221 km/sec) are,
indeed, quite high, implying that they belong to an old population.
In order to analyze the full kinematics for these systems, future
radial velocity observations are necessary.
\subsection{HR Diagram for Binary MOTION Stars}
\label{sec:binary}
Binary systems provide several opportunities to glean additional
insight into stellar properties because the components are assumed to
have formed simultaneously (so have the same age), and from the same
gas cloud (so have identical composition). If parallaxes can be
determined for both stars in a binary, a consistent match also
indicates that our observing and reduction methodology is sound. The
five binary systems in Tables~\ref{tbl:pi.result} and \ref{tbl:phot.result} are shown on the $M_{V}$
vs.~$V-I$ HR diagram in Figure~\ref{fg:color.mag.2}. $M_{V}$ has been
used instead of $M_{K_{s}}$ because $K_{s}$ magnitudes are not available
for the individual components in LHS 225AB, LHS 300AB, and GJ 1226AB.
As discussed in the previous section, the RECONS and subdwarf samples
are plotted for comparison, and have been supplemented in
Figure~\ref{fg:color.mag.2} with white dwarfs from \citet[hereafter,
BLR]{Bergeron2001}.
It is clear from Figure~\ref{fg:color.mag.2} that the components of
the close binaries LHS 225AB and GJ 1226AB are nearly identical main
sequence M dwarfs. Our separate $\pi_{trig}$ determinations for the
components of the wide LHS 500/501 pair are consistent, and show that
the two components are both main sequence M dwarfs.
The two remaining binaries, LHS 193AB and LHS 300AB, are both
comprised of a subdwarf of late K/early M type and a white dwarf. A
search of the literature indicates that few such systems are known.
\citet{Gizis1997b} reported that LHS 2139/2140 is a common proper
motion sdK/M+WD pair, based on a noisy but featureless spectrum for
the B component, but no parallax is available to confirm the nature
of the system. \citet{Gizis1998} argued that GJ 781AB, an
unresolved spectroscopic binary, is another sdK/M+WD binary based on
its mass function.
The two new wide sdK/M+WD pairs reported here have complete parallax
and $VRI$ photometry. In both cases, Figure~\ref{fg:color.mag.2}
clearly indicates that the primary star is a subdwarf and that the
secondary is a white dwarf. However, both white dwarfs are redder
than all white dwarfs with $V-I$ and parallax reported in BLR. The
locations of LHS 193B and LHS 300B in the HR diagram can possibly be
explained several ways, including multiplicity, composition, very low
mass (and hence large size), and/or dust.
In particular, models by \citet{Bergeron1995} indicate that very low
mass helium white dwarfs may have the colors observed. Both
\citet{Hansen1998} and \citet{Bergeron2001b} argue that as the
$T_{eff}$ decreases to 3000 K for old ($t \gtrsim$ 11 Gyr) white
dwarfs with hydrogen atmospheres, their location in the HR diagram
swings back {\em blueward} of the white dwarf cooling sequence. This is
caused by strong H$_{2}$ molecular absorption features expanding into
the optical regions. This implies that both LHS 193B and LHS 300B,
which lie outside the grid for typical hydrogen white dwarfs, may be
helium white dwarfs (or hydrogen white dwarfs with lower surface
gravity values than have been included in the model grid -- unlikely,
because of the observed distribution of surface gravities for hydrogen
white dwarfs). Very low S/N spectra currently in hand indicate that
the two white dwarfs are featureless, in particular having no
H$\alpha$ line. Obviously, high S/N spectra are desirable, and will
be the focus of future work.
\section{Conclusions}
Accurate $\pi_{trig}$ and $VRIJHK_{s}$ for nearby stars assist in
constructing the basic framework of stellar astronomy. Here we
provide a valuable contribution to studies of the solar neighborhood
by targeting MOTION stars. A total of 46 parallaxes from CTIOPI are
presented, including 39 parallaxes for 36 MOTION systems and 7
additional parallaxes for calibration stars. Thirty-three MOTION
systems have trigonometric parallaxes determined for the first time.
Already, several new nearby systems have been revealed. Four of the
MOTION systems --- GJ 1068, GJ 1123, GJ 1128 and DENIS J1048-3956 ---
are new members of the RECONS 10 pc sample \citep{Henry1997}. An
additional 22 systems are new members of the NStars 25 pc sample
\citep{Henry2003}. In addition, valuable new nearby subdwarfs have
been identified, and two rare sdK/M+WD pairs have been discovered.
Both of these samples are valuable probes of the history of our
Galaxy.
This work once again shows that faint, high proper motion stars are
excellent candidates to discover nearby stars. Yet, 48 MOTION systems
south of $\delta$ $=$ 0 still do not have parallaxes. In future
papers, we will present $\pi_{trig}$ and $VRIJHK_{s}$ for several
additional samples of stars, including more MOTION systems, stars
neglected in the LHS Catalog, and new discoveries from our SuperCOSMOS
RECONS search (\citealt*{Hambly2004}, \citealt*{Henry2004},
\citealt*{Subasavage2005}), as well as others.
Finally, CTIOPI has been expanded in recent years under the SMARTS
Consortium to carry out a program called ASPENS (Astrometric Search
for Planets Encircling Nearby Stars), led by David Koerner at Northern
Arizona University. Red and white dwarf systems within 10 pc south of
$\delta$ $=$ 0, including the four new RECONS members and six of the
seven calibration stars, are being observed intensely to reveal any
possible long term astrometric perturbations.
\section{Acknowledgments}
We would like to thank Barbara McArthur, Mike Begam, Dave Monet, and
Myles Standish for their assistance in building the data reduction
pipeline. We gratefully acknowledge assistance in the early stages of
the CTIOPI effort from Claudio Anguita, Rafael Pujals, Maria Teresa
Ruiz and Pat Seitzer. Without the extensive observing support of
Alberto Miranda, Edgardo Cosgrove, Arturo Gomez and the staff at CTIO,
CTIOPI would not be possible. We also thank Jacob Bean, Thom
Beaulieu, Charlie Finch, and Jennifer Winters for their assistance on
data organization, reduction and observing. We thank Pierre Bergeron,
John Gizis, Hugh Harris, Dave Latham, James Liebert, Hektor Monteiro,
and Terry Oswalt for suggestions concerning the subdwarf-white dwarf
binaries.
We are deeply indebted to NOAO for providing us a long term observing
program at CTIO via the NOAO Surveys Project, using both the CTIO
0.9-m and 1.5-m telescopes. We also thank the continuing support of
the members of the SMARTS Consortium without whom the completion of
many astrometric series reported here would not have been
possible. The early phase of CTIOPI was supported by the NASA/NSF
Nearby Star (NStars) Project through NASA Ames Research Center. The
RECONS team at Georgia State University is supported by NASA's Space
Interferometry Mission and GSU. This work has used data products from
the Two Micron All Sky Survey, which is a joint project of the
University of Massachusetts and the Infrared Processing and Analysis
Center at California Institute of Technology funded by NASA and NSF.
EC and RAM acknowledge support by the Fondo Nacional de Investigaci\'on
Cient\'ifica y Tecnol\'ogica (proyecto Fondecyt No. 1010137), and by the
Chilean Centro de Astrof\'isica FONDAP (No. 15010003).
This project has made generous use of the 10\% Chilean time.
astro-ph/0502501
\section{Introduction}
The jets observed emanating from the nuclei of quasars and other
active galactic nuclei (AGN) represent the most energetic long-lived phenomenon
in the universe. Although jet imaging is no longer the sole prerogative of radio
astronomy, the exquisite resolution of Very Long Baseline Interferometry (VLBI)
at radio wavelengths remains an unreached goal for sub-millimeter (sub-mm)
and shorter wavelengths. It is thought that accretion onto a black hole drives the
jet outward via magnetic forces \citep[e.g.][]{Meier00}. Currently, the most
direct way to provide observational evidence for such a model is to study the
linear polarization of jets at different
frequencies (from radio to optical) along with changes in the innermost jet
structure. The most promising objects in such investigations
are blazars, flat-spectrum radio-loud quasars and BL~Lac objects characterized
by high optical polarization up to 46\% \citep{Mead90,IT90}, pronounced and
rapid variability of flux, and one-sided jets with knots that move at superluminal
apparent velocities, as fast as $\gtrsim$30$c$ \citep{J01,KL04}. According to
\citet{Beverley92}, there is a highly significant correlation between optical polarization and
dominance of a compact radio core-jet structure, which suggests beaming of the optical
flux along with the radio emission. Comparison of radio-loud quasars with low and
high optical polarization shows a higher fractional
polarization of the radio core for the latter \citep{LS00}, which implies a co-spatial
origin of the emission at these wavelengths. In support of this, the optical
polarization in blazars is affected by the formation and emergence of new VLBI
knots \citep{G94,G96}.
These results indicate that simultaneous multifrequency polarization
monitoring, together with high resolution polarimetric imaging of the radio jets,
provides a unique tool for identifying the location of the regions responsible for
variability at different wavelengths and for relating the magnetic field geometry to
the structure of the jet.
We have obtained total and polarized intensity images of 15 AGNs with the Very Long
Baseline Array (VLBA) at 7~mm (43~GHz) at 17 epochs over three years. The VLBA observations
are accompanied at many epochs by nearly simultaneous (within two weeks) measurements
of polarization at 1.35/0.85 mm (230/350~GHz) and at optical wavelengths. In the
second half of the program simultaneous polarization observations at 3~mm were
performed at several epochs. The main goals of the project are to relate emission
regions at high frequencies to the parsec-scale jet structure and to investigate
the strength, direction, and variability of the magnetic field close to the central
engine. These can be achieved only after detailed study of the jet kinematics.
This paper is devoted to an analysis of the jet structure and its variability
associated with ejection and propagation of disturbances down the jet, which
appear on radio maps as knots of enhanced brightness.
The paper is the first in a series based on the entire data set collected during
the project. Some results on individual sources have been published already
by \citet[3C~120]{MAR02}, \citet[BL~Lac]{AS03}, and \citet[3C~279]{J04}.
Other papers will include (1) comparison of the polarization parameters at different
frequencies with the jet structure and disturbances in the jet; (2) analysis of the
available radio, sub-mm, optical, and X-ray light curves to relate the flux variability
to activity in the jet; (3) results regarding stability
of the VLBI core position based on phase-referencing observations obtained for
five sources in the sample; and (4) structure of the core and intraday total and polarized
intensity variability in the parsec-scale jets.
Although much progress has been made in studying the kinematics of jets since
the VLBA started to operate fully in 1995 \citep{Pear98,KL98,HO01,J01,G43,KL04},
our program is unique regarding the number of sources observed in such detail
over a rather long
period of time. This allows us to separate the fast and slow, ballistic and curved
motion in the jet flow, define physical parameters for both individual jet features
and entire parsec-scale jets, and compare the results across different classes of AGNs.
\section{Sample Selection}
Our program is designed for comparison of the linear polarization at high frequencies
(mm, sub-mm, and optical wavelengths) with the parsec scale jet structure of AGNs.
This has defined the main criteria for selecting the sample and frequency of our VLBA
observations:
\begin{enumerate}
\item{The sources should be bright, $\geq$0.5~Jy, and polarized, $\geq$3\%,
at sub-mm wavelengths.}
\item{The size of the sample and brightness of sources should allow us to perform VLBA
observations at a single epoch during 24 hours with sufficient uv-coverage to produce
total and polarized intensity images at 43~GHz with high dynamic range.}
\item{The sample should contain sources with resolved radio structure from
different sub-classes of AGNs for which variability in the jet flow can
be expected on timescales of months.}
\item{The sources should be convenient for monitoring in the northern hemisphere
and their coordinates should cover the whole range of right ascensions.}
\end{enumerate}
Following these constraints, we have formed a sample of 15 AGNs that consists of 8
quasars, 5 BL~Lac objects, and 2 radio galaxies (3C~120 of Fanaroff-Riley type 1
and 3C~111 of type 2) after obtaining information on the
linear polarization at sub-mm wavelengths from \citet{N98}. The sources are listed
in Table \ref{Sample}.
\section{Observations and Data Reduction}
We have observed the objects in our sample in four different regions of the
electromagnetic spectrum: at 43~GHz (7 mm) with the VLBA, at 350/230~GHz (0.85/1.3~mm)
with the {\it James Clerk Maxwell Telescope} (JCMT, Mauna Kea, Hawaii) using SCUBA
\citep{SCUBA} and its polarimeter \citep{POL03}, at the Steward Observatory 1.5~m
telescope (Mt. Lemmon, Arizona) with the Two-Holer Polarimeter/Photometer
\citep{SSS85} over an effective wavelength range of $\sim$6000--7000~\AA,
and at 86~GHz (3~mm) with the {\it Berkeley-Illinois-Maryland Array}
(BIMA, Hat Creek, California). The observations were performed from 1998 March to
2001 April. The VLBA monitoring was carried out roughly bimonthly (17 epochs).
For the JCMT and optical observations the number of epochs depends on the source,
with the maximum being 11 and 7 epochs and the minimum 5 and 3 epochs, respectively,
except for the BL~Lac object 1803+784, which was not observed in the optical
region owing to its inaccessibly high declination.
The majority of the JCMT polarization observations were accompanied
by total flux measurements. Differential $V$-band photometry
($R$-band photometry in the case of 3C~273) was carried out
when conditions were photometric. The BIMA polarization observations started
in 2000 April and for many sources in the sample were performed simultaneously
with the VLBA observations at 3-4 epochs. However, each of these epochs
usually was accompanied by a series of BIMA observations separated by several
days so that for some sources polarization measurements at 3~mm were obtained at
20 epochs. The polarization data at 3~mm include all four Stokes parameters.
In this paper the optical, JCMT, and BIMA polarization results are shown only graphically
to illustrate the data collected. Detailed descriptions of the data reduction
and tables of measurements will be presented in a later paper.
\subsection{Radio Observations}
The 43~GHz observations were carried out with the VLBA recording
system using eight 8 MHz wide channels, each in right and left circular
polarization, with 15--20 scans of 3--5 minute duration for each object.
All 10 antennas were used for each source except at epochs affected by weather or
receiver failure.
Table \ref{Antenna} lists the antennas operated at each epoch and typical parameters of
the synthesized beam (those of the OJ 287 observations) for uniform weighting
(used for imaging all sources) to illustrate the consistency of the $uv$-coverage
over epochs. Initial correlation was carried out at the National Radio
Astronomy Observatory (NRAO) Array Operations Center in Socorro, NM.
Subsequent calibration was performed with the Astronomical Image Processing
System (AIPS) software supplied by NRAO, while images were made with
the Caltech software Difmap \citep{Difmap}. The calibration included
application of
the nominal antenna-based gain curves and system temperatures, and correction
for sky opacity, followed by iterative imaging plus phase and amplitude
self-calibration. For each epoch we calculated
the total flux density in the images of sources that are known to have very
weak emission outside the angular size range of the VLBA images
(0420-014, 0528+134, OJ 287, and BL~Lac). These values were compared
with total flux densities obtained by interpolating in time the
measurements of the monitoring program at 37 GHz at the Mets\"ahovi Research
Station, Finland \citep{T04}. The comparison produced the flux density correction
factors, $f_{\rm amp}$, given in Table \ref{Antenna}. The factors were
applied for the final adjustment of the flux density scale in the images.
We have constructed
light curves of the VLBI core of each source to check for correlated behavior
among sources, which would indicate residual amplitude calibration
errors. The light curves of the 15 sources peak at 9 different epochs
and the correlation coefficients are spread between $-$0.4 and +0.3, consistent
with no systematic calibration errors in the flux density scaling.
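The correlation check described above can be sketched numerically; the light curves below are synthetic stand-ins for the per-source VLBI core fluxes, and the variable names are illustrative.

```python
import numpy as np

# Hypothetical 43 GHz core light curves (Jy) for two sources at 17 epochs.
# Residual amplitude-calibration errors would imprint a common pattern on
# all sources, i.e. strong positive correlations between every pair.
rng = np.random.default_rng(0)
epochs = np.arange(17)
core_a = 1.0 + 0.3 * np.sin(0.5 * epochs) + 0.05 * rng.standard_normal(17)
core_b = 2.0 + 0.4 * np.cos(0.3 * epochs) + 0.05 * rng.standard_normal(17)

# Pearson correlation coefficient; values scattered around zero (as the
# range -0.4 to +0.3 quoted in the text) argue against shared errors.
r = np.corrcoef(core_a, core_b)[0, 1]
```

In practice this would be repeated over all source pairs, flagging any systematically positive correlations.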
We performed a cross-hand fringe fit using a scan of 3C~279 averaged over
all baselines (the data were corrected for parallactic angle rotation
of phases in advance).
The resulting right-left phase and rate delay corrections
were processed with the AIPS task POLSN and applied to the full data set.
After preliminary images were produced, the images were used in AIPS
task CALIB to self-calibrate the phases of all sources; this includes
the removal of residual R-L phase differences owing to calibration errors.
The instrumental polarization ``D-terms'' were determined via the
method of \citet{RWB94} and \citet{LZD95}. A final set of D-terms at each epoch
was obtained by averaging of the solutions for those sources in the sample
for which there is the best agreement between the D-terms
(usually, 3C~111, OJ~287, 3C~345, and BL~Lac). The electric vector position angle
(EVPA) calibration was obtained by different methods: comparison between the
Very Large Array (VLA) and VLBA integrated EVPAs at quasi-simultaneous epochs,
D-terms method (see below), and using EVPA-stable features in the images
of the jets of 3C~279, OJ 287, and CTA~102. Over the period 1998 March -- 1999 February
we used results of the VLA observations performed for 0420$-$014, 3C~120,
OJ 287, 3C~279, BL~Lac, and 3C~454.3 by \citet{Dterm}. Later we obtained polarization
measurements for 0420$-$014, 0528+134, OJ 287, and 1803+784 at two epochs, 2000 April
and 2000 July, with the VLA $C$ and $CnD$ configuration, respectively. At epochs
where VLA data were not available we used the D-terms method, which
is based on the assumption that the instrumental polarization parameters change
slowly with time \citep{LZD95}. A detailed description of the method is given in
\citet{Dterm}. In addition, later we checked our calibration for these epochs using
the NRAO data base at {\bf http://www.vla.nrao.edu/astro/calib/polar/} for sources
0420$-$014, 0528+134,
OJ287, and BL~Lac when the NRAO data were available at epochs close to the VLBA dates.
The results are completely consistent with our calibration.
The EVPA-stable feature in 3C~279 is the well-known superluminal component
$C4$ \citep[e.g.,][]{H03}, which underwent a change in EVPA in the beginning of
1998 \citep{J04} but maintained this EVPA until the end of our program.
The EVPA-stable features in OJ~287 and CTA~102
are quasi-stationary components at $\sim$1~mas and $\sim$2~mas from the core,
respectively. However, both became very weak in the second half of 2000 in the
43~GHz images. The final EVPA calibration is a result of the best agreement among
the different methods. The accuracy of the calibrated EVPA measurements,
$\sigma$(EVPA), is indicated in Table \ref{Antenna}.
We have combined the images obtained for every source in a sequence
of total intensity maps convolved with the same beam, corresponding
to the average beam during the epochs when all 10 antennas were in operation.
The contours are in terms
of the global peak of the maximum total intensity observed over all epochs.
The sequences are presented in Figures \ref{Alla}-\ref{Allo}, which also show the JCMT, BIMA,
and optical polarization measurements. All images are oriented with north toward the
top and east toward the left. The scale in mas is indicated along one of the axes,
while the epoch of each image is given along the other axis. The peaks of the total,
$I_{\rm peak}$, and polarized, $I^p_{\rm peak}$, intensity, parameters of the average beam,
the total intensity of the lowest contour, $I_{\rm min}$, and the lowest level
of the polarized intensity, $I^p_{\rm min}$, of the combined maps are indicated
in Table \ref{Sample}. The polarization at 7~mm is shown inside the
total intensity images by line segments, which are plotted if the local polarized
intensity exceeds $I^p_{\rm min}$. The segments are oriented in the local direction
of the polarization and their length is proportional to the local polarized intensity,
with the maximum length corresponding to the global peak of the polarized intensity
over all epochs, $I^p_{\rm peak}$. Table \ref{Antenna} marks epochs when the BIMA, JCMT,
and optical observations were performed within two weeks of the corresponding VLBA epoch.
\subsection{Model Fitting of the VLBA Images}
We employed the task MODELFIT in Difmap to represent each total intensity image
(Stokes parameter $I$) as a sequence of circular Gaussian components that are
characterized by flux density, size, and position relative to the map center.
Initially, point-like components were used to obtain an image with 100:1 dynamic
range. When a set of point-like components was found, a final group of 100 iterations
was executed with all parameters of all components allowed to vary to define their
properties. In addition, because we have roughly bimonthly observations, we repeated the
model fitting with the components from the previous epoch used as the initial model
(except for the first epoch, 1998 March). This improves identification of components
over epochs and provides an estimate of the accuracy of the parameters by comparison
of the outcomes obtained with different initial models. For several sources (3C~66A,
OJ 287, 3C~279, PKS~1510$-$089, and 1803+784) at four epochs (1998 March, 1999
October, 2000 July, and 2001 January) we derived estimates of the
1$\sigma$ uncertainties of the best-fit models using the method described
by \citet{DIFER} and realized in the package Difwrap
\citep{Difwrap}. The uncertainties depend significantly on the brightness and size
of components:
1) for {\it bright} (knot flux $\geq$100$\times$ the rms noise level) and {\it compact}
(size $\leq$0.1~mas) features, the uncertainties in flux density are $\sim$1\%,
in position $\sim$0.01~mas, and in size $<$1\%;
2) for the majority of components, with sizes of 0.1--0.3~mas and fluxes $\geq$50~mJy,
the uncertainties in flux density are $\sim$3\%, in position
$\sim$1/5 of the beam size, and in size $\sim$5\%;
3) for diffuse components, the uncertainties in flux density are $\sim$10\%,
in position comparable to half the size of the knot, and in size $\sim$10\%;
4) for components weaker than 50~mJy, the uncertainties in flux density are $\sim$50\%
and the positional uncertainties correspond to the size of the average beam
(see Table \ref{Sample}); however, for the majority of
weak components these uncertainties were later modified as discussed in \S 4
(Errors of Polynomial Parameters).
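The four empirical uncertainty categories above can be encoded as a simple lookup. The thresholds are those quoted in the text; the function name, argument order, and return format are illustrative.

```python
def modelfit_uncertainties(flux_jy, size_mas, rms_jy, beam_mas):
    """Approximate 1-sigma uncertainties for a model-fit component:
    (fractional flux error, positional error in mas, fractional size
    error), following the four empirical categories in the text."""
    if flux_jy < 0.050:                                 # weaker than 50 mJy
        return 0.50, beam_mas, None                     # size error undefined
    if flux_jy >= 100.0 * rms_jy and size_mas <= 0.1:   # bright and compact
        return 0.01, 0.01, 0.01
    if 0.1 <= size_mas <= 0.3:                          # typical components
        return 0.03, beam_mas / 5.0, 0.05
    return 0.10, size_mas / 2.0, 0.10                   # diffuse components

# e.g. a 200 mJy, 0.2 mas knot with 1 mJy map rms and a 0.3 mas beam:
frac_flux, pos_err, frac_size = modelfit_uncertainties(0.200, 0.2, 0.001, 0.3)
```

This reproduces the per-category numbers only; the text notes that weak-component uncertainties are later revised during the polynomial fitting.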
We determined the parameters of polarized components by applying the task MODELFIT
to the $uv$-data in Stokes parameters $Q$ and $U$ separately. For this purpose only
point-like components were used in the models. Because the Q and U components can
be either positive or negative, the procedure needs to be carried out carefully by
checking with the polarized intensity image to avoid the inclusion of false features
generated by the imaging procedure. The derived positional parameters of the $Q$ and $U$
components were compared with the parameters of $I$ components. We consider a $Q$ and/or
$U$ component to be associated with an $I$ component if they are co-spatial to within
the uncertainties. If both polarized components are present,
the position of the polarized component is defined by the average location, weighted by
the absolute values of the $Q$ and $U$ fluxes. There are a few cases
in which a significant polarized component does not have an $I$ counterpart.
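The combination of separate $Q$ and $U$ model-fit components into one polarized feature can be sketched as follows; the flux and position values are hypothetical.

```python
import math

# Illustrative Q and U point components (flux in Jy, position in mas)
# associated with one total-intensity knot.
q_flux, q_x, q_y = -0.030, 1.02, 0.48
u_flux, u_x, u_y = +0.045, 0.98, 0.52

# Polarized flux and EVPA follow directly from the Stokes parameters:
p_flux = math.hypot(q_flux, u_flux)                    # sqrt(Q^2 + U^2)
evpa = 0.5 * math.degrees(math.atan2(u_flux, q_flux))  # (1/2) atan2(U, Q)

# Position of the polarized component: average of the Q and U locations,
# weighted by the absolute values of the Q and U fluxes (as in the text).
w_q, w_u = abs(q_flux), abs(u_flux)
x_p = (w_q * q_x + w_u * u_x) / (w_q + w_u)
y_p = (w_q * q_y + w_u * u_y) / (w_q + w_u)
```

A candidate ($x_p$, $y_p$) would then be compared against the $I$ component positions to decide on an association.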
Uncertainties in the values of the parameters of polarized components are
difficult to define. Estimates of the accuracy of polarization parameters in jet
features at 15 and 22~GHz are discussed by \citet{HO02}. For components that are bright
in the total intensity and highly polarized (percent polarization $\ge$5\%), we
find uncertainties of $\sim$1\% in fractional polarization and $\sim$5$^\circ$ in
polarization angle.
A table of parameters of jet features for each source over all epochs can be found at
the web-site {\bf www.bu.edu/blazars/multi.html}. The columns of these tables are as follows:
1 - epoch, 2 - flux density in Jy, 3 - relative right ascension in mas,
4 - relative declination
in mas, 5 - distance from the core in mas, 6 - position angle relative to the core in
degrees, 7 - angular size in mas, 8 - polarized flux density of a polarized component
associated with a total intensity component, in Jy, 9 - distance of polarized component
from the core in mas, 10 - position angle of polarized component relative to the core
in degrees, 11 - EVPA in degrees.
\section{Technique to Measure Motion in the Jets}
The data allow us to follow the evolution of major features of the jets
that are observed in the total and polarized intensity VLBA images
of the 15 AGNs at 17 epochs over 3 years. For each image we identify
a component, $A0$, as the VLBI core. For all sources $A0$ is located at one
end of the jet, although it is not always the brightest feature. The position of the core
in right ascension $x$ and declination $y$ is defined as $x=0$ and $y=0$. In our
analysis we assume that the core is stationary over all epochs. Locations of other
features are determined relative to the core. Components are categorized as follows:
knots $A$ (other than $A0$) are features that either are stationary
(the proper motion is less than or equal to its uncertainty), undergo reverse motion
(toward the core), or move at subluminal apparent speeds.
Knots $B$, $C$, and $D$ are superluminal features, where $B$ components are the
fastest knots in the jet, $D$ components are the farthest knots
from the core, ejected before our monitoring period started, and the remainder are
labeled as $C$. The designation
of components is different for 3C~120, 3C~279, 3C~345, and BL~Lac, where the naming
corresponds to \citet{G43}, \citet{W01}, \citet{RZL00}, and \citet{AS03}, respectively.
Each component is characterized by the following parameters: total $S$ and polarized
$S_p$ flux density; positions $x$ (RA), $y$ (Dec), and $R$, where $R=\sqrt{x^2+y^2}$;
position angle $\Theta=\tan^{-1}(x/y)$; size (FWHM) of component $a$; and electric
vector position angle, EVPA. To define the temporal evolution of the jet
and determine the apparent velocities of the jet flow we perform the following
steps.
1. {\it Identification of components at different epochs.}
The identification of a component is based on comparison of the parameters during
the epochs when it is visible on the images. We assume that the
same component has similar total, $S$, and polarized, $S_{\rm p}$, fluxes,
position angles $\Theta$ and EVPA, and size, $a$, at successive epochs.
However, some components evolve dramatically even over a two-month interval
between epochs, splitting into one or more subcomponents or merging with
other features of the jet. In this paper we analyze those jet features
whose identification is supported by the similarity of a number of
parameters at different epochs.
2. {\it Fitting Various Polynomials.}
The well-sampled sequences of images allow us to search for acceleration or
deceleration of the jet flow and for non-ballistic projected trajectories.
We fit the $x$,$y$ positions of a component over $N$ epochs by different
polynomials of order $l$:
\begin{equation}
x(t_i)=a_0+a_1\times (t_i-t_{mid})+a_2\times (t_i-t_{mid})^2+\ldots+
a_l\times (t_i-t_{mid})^l,\label{e1}
\end{equation}
\begin{equation}
y(t_i)=b_0+b_1\times (t_i-t_{mid})+b_2\times (t_i-t_{mid})^2+\ldots+
b_l\times (t_i-t_{mid})^l, \label{e2}
\end{equation}
where $t_i$ is the epoch of observation, $i=1,\ldots,N$, and $t_{mid}=(t_1+t_N)/2$.
We use the program LSQPOL of the FORTRAN version of the package DATAN \citep{DATAN}
to find the optimal polynomials of order $l$, where $l$ runs from $0$ to $4$.
The upper limit, $l=4$, is a consequence of the maximum number of epochs of observation
of a knot, equal to 17. The program provides the value $M_l$, which is the
goodness-of-fit by a polynomial of order $l$. For each order, we perform a $\chi^2$
test to determine the polynomial that best fits the data. We choose a polynomial of the
lowest order for which $M_l< M_{\chi^2}$, where $M_{\chi^2}$ is the
value of the $\chi^2$ distribution corresponding to significance level
$\zeta$=0.05 for $f=N-l-1$ degrees of freedom \citep{BL72}.
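The order-selection procedure can be sketched in a few lines. This is not the DATAN/LSQPOL implementation used in the text: the critical values here come from the Wilson--Hilferty approximation to the $\chi^2$ quantile, and the synthetic knot track is illustrative.

```python
import numpy as np

def chi2_crit(f, z95=1.6449):
    """Approximate 0.95 quantile of chi-square with f degrees of freedom
    (Wilson-Hilferty); stands in for tabulated critical values."""
    return f * (1.0 - 2.0 / (9.0 * f) + z95 * np.sqrt(2.0 / (9.0 * f))) ** 3

def best_poly_order(t, x, sigma, max_order=4):
    """Lowest order l whose weighted chi-square M_l falls below the
    critical value for f = N - l - 1 dof (significance 0.05)."""
    tau = t - 0.5 * (t[0] + t[-1])          # t_i - t_mid
    for l in range(max_order + 1):
        f = len(t) - l - 1
        if f <= 0:
            break
        coeffs = np.polyfit(tau, x, l, w=1.0 / sigma)
        m_l = np.sum(((x - np.polyval(coeffs, tau)) / sigma) ** 2)
        if m_l < chi2_crit(f):
            return l, coeffs
    return None, None                        # fall back to the l=1 special case

# Synthetic knot track: linear motion sampled at 16 epochs.
rng = np.random.default_rng(1)
t = np.linspace(1998.2, 2001.3, 16)
x = 0.10 + 0.25 * (t - t.mean()) + 0.005 * rng.standard_normal(16)
order, coeffs = best_poly_order(t, x, sigma=np.full(16, 0.01))
```

For a genuinely linear track the procedure stops at $l=1$, mirroring the 3C~66A example; curved tracks such as $B1$ in 3C~273 require higher orders.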
Examples of selection of polynomials that fit the data are given in Tables
\ref{Poly1} and \ref{Poly2} for component $C1$ identified in the jet of
3C~66A at 16 epochs and component $B1$, also seen at 16 epochs, in 3C~273.
These tables show parameters of the best-fit polynomials of order from 0 to 4,
the corresponding values of the goodness of fit to the data $M$, and the
goodness required by the $\chi^2$-test.
Table \ref{Poly1} shows that $l$=1 satisfies the $\chi^2$ test for coordinates
$x$ and $y$ in the case of $C1$ for 3C~66A.
For $B1$, $l$=4 is required to match motion in RA, while a second-order
polynomial adequately describes the data for declination.
In a few cases the $\chi^2$ test is not satisfied even by polynomials of order
4. An increase in the order of the polynomial does not improve
the situation since, given the limited number of observations, the number of degrees
of freedom becomes too small. Such cases are approximated by a straight line
with special consideration for uncertainties in the $x$,$y$ values
(see below). These cases are marked in Table \ref{Speed}
by ``1*'' under the polynomial order.
3. {\it Uncertainties in Polynomial Parameters.}
After the best-fit polynomial is found, we run the subroutine LSQASN of the package DATAN
to estimate errors in the derived parameters. This method is valid for
a normal distribution of the unknowns and when the true value lies with probability
$W$ within the confidence region around the estimated value ($W$=0.95 is used).
However, the errors depend significantly on the uncertainties in the individual
observations that are obtained from the model fitting (see \S 3). This allows us
to lessen the weight of data at epochs with bad weather, a failure of one or more antennas,
or when a given component was very weak or diffuse. For component
motion that cannot be represented well by polynomials $l\leq$4 (see above),
we approximate the motion by a first-order polynomial and determine uncertainties
in the $x$,$y$ position
using the program LSQPOL and the method suggested by \citet{HO01}. We set the uncertainty
for each data point equal to the average beam size and
calculate the optimal polynomial of first order corresponding to a preliminary
$\chi^2$ value. Taking this preliminary $\chi^2$ value, we uniformly re-scale the
uncertainties in the data points such that $\chi^2$ would correspond to the $\chi^2$
value for $\zeta$=0.05 and $f=N-2$.
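The uncertainty-rescaling step for poorly fit components can be sketched as follows, under the stated assumptions (initial uncertainties set to the beam size, then uniformly rescaled so the linear fit's $\chi^2$ matches the critical value for $f=N-2$); the data values are synthetic.

```python
import numpy as np

def rescale_uncertainties(t, x, beam, chi2_crit):
    """Set each positional uncertainty to the average beam size, fit a
    straight line, then uniformly rescale the uncertainties so that the
    fit's chi-square equals the critical value for f = N - 2 dof."""
    sigma0 = np.full_like(x, beam)
    coeffs = np.polyfit(t, x, 1, w=1.0 / sigma0)
    resid = x - np.polyval(coeffs, t)
    chi2_prelim = np.sum((resid / sigma0) ** 2)
    return sigma0 * np.sqrt(chi2_prelim / chi2_crit)

# Synthetic track at 12 epochs; 18.31 is the 0.95 chi-square quantile
# for f = 10 degrees of freedom.
t = np.linspace(0.0, 3.0, 12)
x = 0.2 * t + 0.05 * np.random.default_rng(2).standard_normal(12)
sigma = rescale_uncertainties(t, x, beam=0.3, chi2_crit=18.31)
```

By construction the rescaled uncertainties make the straight-line fit exactly marginally acceptable at $\zeta=0.05$.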
4. {\it Calculation of Proper Motion, Acceleration, and Ejection Time.}
We define the average proper motion as a vector ($<\mu>$,$<\Phi>$), where
$<\mu>$ represents the mean angular speed of motion and $<\Phi>$ gives the average
direction of motion. The average values are derived as follows:
$<\mu>=\sqrt{<\mu_x>^2+<\mu_y>^2}$ and $<\Phi>=\tan^{-1}(\frac{<\mu_x>}{<\mu_y>})$,
where $<\mu_x>=\int_{t_1}^{t_N}{\dot{x}dt}/(t_N-t_1)$ and
$<\mu_y>=\int_{t_1}^{t_N}{\dot{y}dt}/(t_N-t_1)$.
In the case of first or second-order polynomials, $<\mu_x>$=$a_1$ and $<\mu_y>$=$b_1$.
For a polynomial of order higher than unity we compute an average vector of
acceleration, ($\dot{\mu}_\parallel$,$\dot{\mu}_\perp$),
where $\dot{\mu}_\parallel$ is along the direction of the average velocity, $<\Phi>$.
In the case of a second-order polynomial
$\dot{\mu}_x=2\;a_2$ and $\dot{\mu}_y=2\;b_2$. For a higher order
polynomial, $\dot{\mu}_x=\int_{t_1}^{t_N}{\ddot{x}dt}/(t_N-t_1)$ and
$\dot{\mu}_y=\int_{t_1}^{t_N}{\ddot{y}dt}/(t_N-t_1)$.
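For a second-order polynomial the averages above reduce to the fitted coefficients, which makes the bookkeeping easy to sketch; the coefficient values here are hypothetical, and the position-angle convention follows $\Theta=\tan^{-1}(x/y)$ from the text (measured from north).

```python
import math

# Hypothetical best-fit coefficients for one knot (mas/yr and mas/yr^2),
# with x(t) = a0 + a1*(t - t_mid) + a2*(t - t_mid)^2, likewise for y.
a1, a2 = 0.21, 0.04
b1, b2 = 0.35, -0.02

# For l <= 2 the time-averaged rates are the linear coefficients:
# <mu_x> = a1 and <mu_y> = b1.
mu_x, mu_y = a1, b1
mu = math.hypot(mu_x, mu_y)                 # <mu> in mas/yr
phi = math.degrees(math.atan2(mu_x, mu_y))  # <Phi>, from north through east

# Average acceleration (mu_dot_x = 2*a2, mu_dot_y = 2*b2) projected onto
# directions parallel and perpendicular to the mean velocity <Phi>.
mu_dot_x, mu_dot_y = 2.0 * a2, 2.0 * b2
p = math.radians(phi)
mu_dot_par = mu_dot_x * math.sin(p) + mu_dot_y * math.cos(p)
mu_dot_perp = mu_dot_x * math.cos(p) - mu_dot_y * math.sin(p)
```

For higher-order fits the coefficients are replaced by the time-averaged first and second derivatives, as in the integrals above.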
\section {Jet Velocities}
We find superluminal apparent speeds for 19 of 22 components in the radio galaxies, 19 of
31 knots in the BL Lac objects, and 46 of 53 knots in the quasars.
Table \ref{Speed} lists the average apparent speed, $<\beta_{app}>$, calculated in units
of the speed of light, $c$, using the average proper motion, $<\mu>$.
A Friedmann-Lema\^{\i}tre-Robertson-Walker cosmology,
with $\Omega_m=0.3$, $\Omega_\Lambda=0.7$, and Hubble constant
$H_\circ$=70~km~s$^{-1}$~Mpc$^{-1}$ \citep{KKT00}, is adopted for the calculations.
We derive the uncertainties in
the average proper motion, acceleration, and apparent speed from the uncertainties in the
polynomial coefficients. Table \ref{Speed} contains also the number of epochs, $N$,
at which the component has been observed; the average total flux, $<S>$, in Jy;
the average distance from the core, $<R>$, in mas; the average position angle
of the component, $<\Theta>$, in degrees, with uncertainty corresponding to the scatter
across the epochs; the average direction of the velocity vector, $<\Phi>$, in degrees;
and the ejection time (epoch of zero separation), $T_\circ$.
The ejection time is the extrapolated time of coincidence of the position of
a moving knot with the core in the VLBA images. $T_\circ$ is the average of
$t_{x\circ}$ and $t_{y\circ}$ weighted
by their uncertainties, where $t_{x\circ}$ and $t_{y\circ}$ are roots of the best-fit
polynomials. In the case of a polynomial of order 3 and higher, the roots are computed
by successive iterations with an accuracy of $10^{-5}$.
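The ejection-time computation can be sketched for the simple linear case; the polynomial coefficients, epochs, and uncertainties below are hypothetical, and general root finding replaces the iterative scheme mentioned in the text.

```python
import numpy as np

# Hypothetical best-fit positional polynomials, highest order first
# (numpy convention), with time tau measured in years from t_mid.
x_coeffs = np.array([0.25, 0.30])   # x(tau) = 0.25*tau + 0.30
y_coeffs = np.array([0.40, 0.50])
t_mid = 1999.75

def zero_separation_epoch(coeffs, t_mid):
    """Real root(s) of the positional polynomial: the epoch(s) at which
    the coordinate extrapolates back to the core."""
    roots = np.roots(coeffs)
    return roots[np.isreal(roots)].real + t_mid

t_x0 = zero_separation_epoch(x_coeffs, t_mid)[0]   # tau = -1.20
t_y0 = zero_separation_epoch(y_coeffs, t_mid)[0]   # tau = -1.25

# T0: mean of t_x0 and t_y0 weighted by (hypothetical) inverse variances.
s_x, s_y = 0.10, 0.15
w_x, w_y = 1.0 / s_x**2, 1.0 / s_y**2
T0 = (w_x * t_x0 + w_y * t_y0) / (w_x + w_y)
```

Higher-order polynomials can yield several real roots, in which case the physically meaningful one (closest to the observed track) must be selected by hand.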
Table \ref{Accel} gives the parameters of acceleration for
components in Table \ref{Speed} having best-fit polynomials of order $\ge$2
(some columns repeat information from Table \ref{Speed} for convenience).
Figure \ref{Ident} shows the positions of the jet components relative
to the core for both the total (open circles) and polarized (filled
circles) intensity images of each source. The solid lines/curves indicate the
best polynomial fit to the data. Components presented in Figure \ref{Ident}
are marked in Figures \ref{Alla}-\ref{Allo} at the first epoch if they are detected
then, or else at the epoch at which they first appear in the jet.
Figure \ref{Map} displays an image from one particular
epoch for each source when the most prominent jet features are seen. Table \ref{Tmap}
lists the parameters of the maps shown in Figure \ref{Map} (rms(I) and rms(Ip) are
the root mean squares of the residual total and polarized intensity on the images,
respectively). The trajectories of all components with non-ballistic motion are plotted
in Figure \ref{traj}. Appendix A contains a description of our results for each
individual object in the sample.
\section{Apparent Speed as a Probe of Parsec-Scale Jets}
Our intensive, prolonged high-frequency VLBI monitoring reveals complex,
changing structure of parsec scale jets, including significant variations
in the apparent speed of knots in the jet, $\beta_{\rm app}$, within individual
objects. The apparent speed is determined by the intrinsic velocity,
$\beta$, and the angle between the trajectory and the line of sight, $\Theta_\circ$:
\begin{equation}
\beta_{app}=\beta\;\sin\;\Theta_\circ(1-\beta\;\cos\;\Theta_\circ)^{-1}. \label{e3}
\end{equation}
The intrinsic velocity defines the Lorentz factor of the knot, $\Gamma=1/\sqrt{1-\beta^2},$
where $\beta$ is in units of the speed of light.
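As a numerical illustration of equation (\ref{e3}) and the Lorentz factor (a sketch; the function names are ours):

```python
import math

def lorentz_factor(beta):
    """Gamma = 1/sqrt(1 - beta^2), with beta in units of c."""
    return 1.0 / math.sqrt(1.0 - beta ** 2)

def apparent_speed(beta, theta_deg):
    """Equation (3): beta_app = beta sin(Theta) / (1 - beta cos(Theta))."""
    th = math.radians(theta_deg)
    return beta * math.sin(th) / (1.0 - beta * math.cos(th))
```

For a given $\beta$, the apparent speed peaks at $\cos\Theta_\circ=\beta$, where it equals $\beta\Gamma=\sqrt{\Gamma^2-1}$, which is why $\beta_{\rm app}$ can greatly exceed unity for small viewing angles.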
Large scatter in the apparent speed within a source can be caused by
(1) different patterns of jet components, such as ``blobs'' of energetic
plasma or forward, reverse, or stationary shocks; (2) variable Lorentz factor of
the jet flow; or (3) different trajectories of components in the jet.
The latter can result from a change in the direction of the jet
or in the paths of components if each given knot does not fill
the entire cross-section of the jet.
The observed flux density of a superluminal component is boosted in the
direction of the observer by a factor $\delta^{3+\alpha}$, where
\begin{equation}
\delta=[\Gamma(1-\beta\;\cos\;\Theta_\circ)]^{-1}\label{e4}
\end{equation}
is the Doppler factor and
$\alpha$ is the spectral index ($S_\nu\propto\nu^{-\alpha}$) of the knot.
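The Doppler factor of equation (\ref{e4}) and the resulting boosting factor $\delta^{3+\alpha}$ follow the same pattern (again a sketch with our own names; $\alpha=0.7$ is used only as a representative optically thin default):

```python
import math

def doppler_factor(beta, theta_deg):
    """Equation (4): delta = [Gamma (1 - beta cos Theta)]^{-1}."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return 1.0 / (gamma * (1.0 - beta * math.cos(math.radians(theta_deg))))

def boosting_factor(beta, theta_deg, alpha=0.7):
    """Flux boost delta^(3 + alpha) for a moving knot with S_nu ~ nu^-alpha."""
    return doppler_factor(beta, theta_deg) ** (3.0 + alpha)
```

At $\Theta_\circ=0$ the Doppler factor reduces to $\Gamma(1+\beta)\approx 2\Gamma$, so a head-on relativistic knot is boosted by orders of magnitude.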
The flux of a component decreases with distance from the core as the result
of radiative energy losses and expansion (``adiabatic'' cooling). The emergence
of a new component is associated with one or more flares in the radio light
curves \citep[e.g.,][]{SAV02}. The flare usually has a sharp peak and
nearly symmetric exponential rise and decay at high frequencies \citep{TV94}.
In the shock-in-jet model, which accounts for the appearance of superluminal knots
as well as variations in flux, spectral energy distribution, and polarization \citep{MG85,HAA85},
the symmetric light curves of flares suggest that the flux variability is
controlled by light-travel delays across the shocked region \citep[see][]{SOK04}.
This assumption allows us to calculate the
Doppler factor for each superluminal component using its flux density variability
and size measured from the VLBA data.
We can then use the derived Doppler factors to study the physical parameters
of the jets in our sample.
\subsection{Physical Parameters of the Jet Components}
Figure \ref{flux} shows light curves of all superluminal components listed in
Table \ref{Speed} (except knots classified as trailing features, see \S 6.6).
Each light curve is normalized by the corresponding average flux indicated
in Table \ref{Speed}. We define the timescale of the variability for each
superluminal component as $\Delta t_{\rm var}=dt/\ln(S_{\rm max}/S_{\rm min})$
\citep{BJO74}, where $S_{\rm max}$ and $S_{\rm min}$ are the measured maximum
and minimum flux densities, respectively, and $dt$ is the time in years between
$S_{\rm max}$ and $S_{\rm min}$. The variability Doppler
factors are derived as
\begin{equation}
\delta_{\rm var}=\frac{s\;D}{c\;\Delta t_{\rm var}\;(1+z)}, \label{e5}
\end{equation}
where $D$ is the luminosity distance, $s$ is the angular size of the component,
equal to $1.6a$ for a Gaussian with FWHM=$a$ measured at the epoch of maximum flux
if the true geometry is similar to a uniform face-on disk.
In the case of a point-like knot, we adopt $a$=0.1~mas, which
yields an upper limit to $\delta_{\rm var}$. This definition of the Doppler
factor assumes that the variability timescale corresponds to the light-travel time
across the knot. This will be true if the radiative cooling time is shorter
than the light crossing time, which in turn is shorter than the timescale for
cooling by adiabatic expansion. We can roughly verify this assumption by using
the relation between flux density and size of a shocked region derived in
\citet{MG85} during the ``adiabatic'' stage. We calculate the timescale,
$\Delta t_{\rm a}$, of the variability in size for each non-point-like component
as $\Delta t_{\rm a}=dt/\ln(a_{\rm max}/a_{\rm min})$
(we use $a_{\rm max}=a_{\rm S_{min}}$ and $a_{\rm min}=a_{\rm S_{max}}$
if $a_{\rm S_{min}}>a_{\rm S_{max}}$, which is valid for the majority of
components). Figure \ref{tau_tau} plots the size variability timescale
versus the flux variability timescale. The straight line indicates the expected
relation between $\Delta t_{\rm a}$ and $\Delta t_{\rm var}$ for adiabatic
losses for optically thin shocked gas with $\alpha=0.7$ \citep{MG85}.
Figure \ref{tau_tau} shows that the majority of components have shorter flux
variability timescales than those predicted for adiabatic expansion.
This implies that at high radio frequencies the decay in flux is
driven by radiative losses and, therefore, equation (\ref{e5}) should apply to our data.
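The variability timescale and equation (\ref{e5}) are mostly unit bookkeeping; a minimal sketch (our names and constants, with the luminosity distance supplied externally, e.g. from the adopted cosmology):

```python
import math

C_M_S = 2.998e8                       # speed of light, m/s
MAS_RAD = math.radians(1.0) / 3.6e6   # 1 mas in radians
YR_S = 3.156e7                        # 1 yr in s
MPC_M = 3.086e22                      # 1 Mpc in m

def var_timescale_yr(s_max, s_min, dt_yr):
    """Delta t_var = dt / ln(S_max / S_min), in years."""
    return dt_yr / math.log(s_max / s_min)

def delta_var(a_mas, d_lum_mpc, z, t_var_yr):
    """Equation (5) with s = 1.6 a (Gaussian FWHM a, face-on disk geometry)."""
    s_rad = 1.6 * a_mas * MAS_RAD
    return s_rad * d_lum_mpc * MPC_M / (C_M_S * t_var_yr * YR_S * (1.0 + z))
```

For representative blazar values (sub-mas knot, Gpc distance, variability over months), this yields $\delta_{\rm var}$ of order ten, consistent with the scale of Doppler factors discussed in the text.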
A widely used method to
estimate the Doppler factor from VLBI data assumes
that the highest apparent speed detected in a source defines the lowest possible
Lorentz factor of the jet, $\Gamma\ge\sqrt{1+\beta_{\rm app}^2}$.
The viewing angle is then taken to be $\Theta_\circ\leq\sin^{-1}(1/\beta_{\rm app})$;
this leads to $\delta_{\beta_{\rm app}}\sim\beta_{\rm app}$.
In Figure \ref{delta_delta} Doppler factors thus derived are
plotted versus the Doppler factor computed from the
flux variability and size of components having the highest apparent speed in the jets.
Figure \ref{delta_delta} demonstrates
that there is reasonable agreement between the Doppler factors estimated
by these two different methods. The best least-square linear fit to the dependence
is $\delta_{\beta_{\rm app}}=(0.72\pm 0.15)\delta_{\rm var}$,
consistent with $\delta_{\rm var}$ being the true value and $\delta_{\beta_{\rm app}}$
being a lower limit, taking into account that $\delta_{\beta_{\rm app}}$ is estimated
for the lowest possible Lorentz factor. We conclude that the combination of
variability of flux and measurement of angular sizes provides a new, robust
method for deriving Doppler factors from well-sampled sequences of VLBI data.
Tables \ref{Q_Param}, \ref{B_Param}, and \ref{G_Param} give estimates of
the Lorentz factor, viewing angle, Doppler factor, observed brightness
temperature, $T_{\rm b,obs}$, and intrinsic brightness temperature,
$T_{\rm b,int}$, for 43 knots in the quasars, 19 knots in the BL Lacs, and
15 knots in the radio galaxies. The Lorentz factor and viewing
angle are solutions of the system that combines equations (\ref{e3}) and (\ref{e4})
for a knot having measured apparent speed with the Doppler factor derived via equation
(\ref{e5}). The observed brightness
temperature is computed based on the VLBI measurements as
$T_{\rm b,obs}=7.5\times 10^8\;S_{\rm max}/s^2$~K, where $S_{\rm max}$ is
the maximum flux of the component in Jy and $s$ is its measured angular size
in mas at the epoch of maximum flux (see above). We apply the Doppler factors derived with
equation (\ref{e5}) to estimate the intrinsic brightness temperature,
$T_{\rm b,int}= T_{\rm b,obs}\;(1+z)^{1.7}/\delta^{1.7}$, where we adopt
$\alpha=0.7$.
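The two brightness-temperature relations translate directly into code (a sketch with our function names; $S_{\rm max}$ in Jy, $s$ in mas, and the exponent 1.7 follows the adopted $\alpha=0.7$):

```python
def tb_obs(s_max_jy, size_mas):
    """Observed brightness temperature T_b,obs = 7.5e8 * S_max / s^2, in K."""
    return 7.5e8 * s_max_jy / size_mas ** 2

def tb_int(tb_obs_k, z, delta):
    """Intrinsic value T_b,int = T_b,obs * (1+z)^1.7 / delta^1.7."""
    return tb_obs_k * ((1.0 + z) / delta) ** 1.7
```

A 1~Jy knot of size 0.5~mas gives $T_{\rm b,obs}=3\times10^9$~K; with $\delta=10$ and low $z$ this de-boosts to $T_{\rm b,int}\sim6\times10^7$~K, the scale quoted for the BL~Lac knots.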
\subsection{Opening Angle of the Jet}
We assume that jets have conical structure and constant angle, $\theta$,
between the jet axis and surface of the cone that contains the entire
region of emission. Therefore, $\theta$ is the actual half opening angle of the jet.
This assumption is likely correct for the section of the
jet within 1-2~mas from the core, while at larger distances ($\ge$10~mas)
the jet could trace out a helical structure covering a wide range of position angles
even if the body of the jet remains narrow \citep[e.g.,][]{LK03}.
We estimate the projected half opening angle, $\theta_{\rm p}$, for each source
using the ratio between apparent transverse size, $s_{\rm t}$, of the jet and
apparent longitudinal distance, $s_{\rm l}$, of components:
$\theta_{\rm p}=\tan^{-1}\;\psi$, where $\psi$ is the slope
of the best linear fit to the relation between $s_{\rm t}$ and $s_{\rm l}$ as defined
at the position of each component that is brighter than 1\% of the peak intensity.
The values of $s_{\rm t}$ and $s_{\rm l}$ are calculated as follows: $s_{\rm l}=R$,
where $R$
is the observed separation of the component from the core; and
$s_{\rm t}=R\;\sin\;(|\Theta_{\rm jet}-\Theta|)+a/2$, where $\Theta_{\rm jet}$ is the
projected direction of the jet as defined by the mean position angle of all
sufficiently bright components over all epochs, $\Theta$ is the position
angle of the component, and $a$ is the size of the component. We first plot
the values of $s_{\rm t}$ against $s_{\rm l}$ to estimate a preliminary linear dependence.
The preliminary relationship is used to remove
points that deviate from the dependence by more than 3$\sigma$
of the linear model. Then the value of $\psi$ is obtained by minimizing
$\chi^2$ of the adjusted data. In Figure \ref{Opan} all pairs of $(s_{\rm t},s_{\rm l})$ are
plotted and the best linear fit of the adjusted data is presented for each source.
Figure \ref{Opan} shows that a jet model with constant opening angle provides
a good approximation to the inner jet, although the plots of two BL~Lac objects
(3C~66A and 1803+784) display an increase in $\theta_{\rm p}$ beyond 2 and 1~mas,
respectively, which might be a general feature of many BL~Lac jets. In the radio
galaxy 3C~111 and quasar 3C~345 a decrease in $\theta_{\rm p}$ is observed,
and is most likely caused by the weakness and diffuse nature of components at large
distance from the
core, such that our images only contain portions of these features.
Table \ref{Source_par} gives the parameters $\Theta_{\rm jet}$, $\theta_{\rm p}$,
$<\Theta_\circ>$,
$\theta$, $<\Gamma>$, and $<\delta>$ for each source. Parameters $<\Gamma>$,
$<\delta>$, and $<\Theta_\circ>$ are weighted averages of the values for individual
components listed in Tables \ref{Q_Param}, \ref{B_Param}, and \ref{G_Param}, with
weights inversely proportional to the uncertainty in apparent speed. The intrinsic
half opening angle, $\theta$, is estimated as
$\theta=\theta_{\rm p}\;\sin<\Theta_\circ>$, where $<\Theta_\circ>$ is the angle
between the jet axis and the line of sight.
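The opening-angle estimate reduces to a linear fit of $s_{\rm t}$ against $s_{\rm l}$ followed by deprojection. A sketch under stated assumptions: the zero intercept forced in the fit and the function names are ours, and the preliminary $3\sigma$ outlier-rejection step is omitted for brevity.

```python
import math

def transverse_size(r_mas, pa_deg, pa_jet_deg, a_mas):
    """s_t = R sin|Theta_jet - Theta| + a/2 (all sizes in mas)."""
    dpa = math.radians(abs(pa_jet_deg - pa_deg))
    return r_mas * math.sin(dpa) + a_mas / 2.0

def half_opening_angles(s_l, s_t, theta_view_deg):
    """Least-squares slope psi through the origin, then
    theta_p = atan(psi) and theta = theta_p * sin(Theta_view)."""
    psi = sum(l * t for l, t in zip(s_l, s_t)) / sum(l * l for l in s_l)
    theta_p = math.degrees(math.atan(psi))
    return theta_p, theta_p * math.sin(math.radians(theta_view_deg))
```

Because $\sin\Theta_\circ$ is small for blazars, projected opening angles of several degrees deproject to intrinsic half opening angles well under a degree, as in Table \ref{Source_par}.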
\subsection{Jet Parameters}
We have constructed the dependence between the Doppler beaming factor, $\delta_{\rm var}$,
and apparent speed (Fig. \ref{V_Doppler}) to compare derived parameters
with results from the 2-cm VLBA survey \citep{KL04}. These authors determined
Doppler factors using the variability method of \citet{LV99}, with an intrinsic
brightness temperature of $2\times 10^{10}$~K that they assume corresponds to
the region of the VLBI core where the main variability at radio frequencies
occurs \citep{SAV02}. In Figure \ref{V_Doppler}
the majority of points follow the expectation that $\beta_{\rm app}\lesssim\delta_{\rm var}$ and
lie inside the ``$1/\beta_{\rm app}$ cone''. Moreover, despite our sample consisting
of blazars or blazar-like sources for which a higher Lorentz factor is expected,
the upper limit to $\beta_{\rm app}$ corresponding to $\Gamma=25$ \citep{KL04}
applies to our sample as well (dotted curve in Fig. \ref{V_Doppler}).
Figure \ref{h_Gamma} shows the distribution of Lorentz factors of superluminal
components. There is a significant scatter in $\Gamma$ for the quasars and BL~Lacs,
and the Lorentz factors of the radio galaxies are lower than for the other classes.
The distributions, in general, agree with the distribution of Lorentz factors
found in the 2-cm survey.
Monte-Carlo simulations for flux limited
samples of radio sources predict a connection, although not a direct correlation,
between the Lorentz factor and viewing angle of the jets: the higher the Lorentz factor,
the smaller is the viewing angle \citep{LM97}.
Our sample of the blazars and blazar-like sources shows a significant
correlation between 1/$\Gamma$ and $\Theta_\circ$ (coefficient of correlation 0.83,
Fig. \ref{GT}); this supports the above prediction. Small intrinsic bends in jets
oriented very close to the line of sight should be greatly amplified in
projection on the sky \citep{R78}.
The expected morphology has not been confirmed by VLBI surveys. For example,
$\gamma$-ray blazars, which exhibit the highest
apparent speeds \citep{J01,KL04}, do not possess more pronounced bends than
sources not yet detected in $\gamma$-rays \citep{KL98,J01}. Estimates of the viewing angle
obtained for our sample allow us to test the relation between morphology
and viewing angle of the jets more directly. We have
determined the average projected position angle of the jet, $\Theta_{\rm jet}$, and
its standard deviation, $\sigma(\Theta_{\rm jet})$, for each source (see Table
\ref{Source_par}). In sources with jet direction close to
the line of sight, a small intrinsic change of the component trajectory
should result in a significant bend in the projected path, leading
to a large scatter in $\Theta_{\rm jet}$ and yielding a high value of
$\sigma(\Theta_{\rm jet})$. A similar test is to compare the projected opening angle
with the viewing angle of the jets: jets viewed very close to the
line of sight should have a wide projected opening angle on VLBI maps.
Figure \ref{SigmaT} presents
both plots. The left panel shows the relationship between $\Theta_\circ$ and
$\sigma(\Theta_{\rm jet})$. Although there is a general decrease
in $\sigma(\Theta_{\rm jet})$ with increasing angle between the jet axis and line
of sight (coefficient of correlation $-$0.58), the relationship is very weak
or absent for viewing angles $\lesssim 5^\circ$ (coefficient of correlation $-$0.18).
The right panel shows that there is no
connection between the projected opening angle and viewing angle of the jet
(coefficient of correlation $-$0.16). This result
implies that amplification of the projected size by a small angle between the
jet axis and line of sight is partly (or sometimes completely) canceled by
the proportionality between opening angle and $1/\Gamma$ (see below). Possible
reasons behind the lack of correlation between degree of apparent bending
and small viewing angle include: (1) an inverse relationship between bulk
Lorentz factor and degree of intrinsic bending, given that the momentum of
the jet is proportional to $\Gamma$, (2) broader opening angles in slower
jets, with filamentary structure appearing similar to bending of the axis,
and (3) the resolution available transverse to the jet not being sufficient to
detect bending in many high-$\Gamma$ objects.
The Lorentz factor plays a significant role in determining the jet geometry.
According to standard models of relativistic jets, the opening angle of the jet
should be inversely proportional to the Lorentz factor \citep[e.g.,][]{BK79}.
In Figure \ref{Open_G}, the estimated half opening angles are plotted versus
the derived Lorentz factors. There is an obvious decrease of $\theta$ toward higher
values of $\Gamma$. According to a $\chi^2$ test the {\it observed} dependence
can be described by the relation $\theta\approx\rho/\Gamma$~rad, where
$\rho=0.17\pm 0.08$ (solid line in Fig. \ref{Open_G}) at a 2.5\% level
of significance ($\chi^2$=6.875, f=2). The current
models for the formation of relativistic jets that employ confinement of
a jet by magnetic forces \citep[e.g.][]{Meier00,VK04} do not yet specify
the opening angle of the jet as a function of the model parameters.
Such an expression, however, has been derived based on
gas dynamics for a
relativistic jet confined by pressure equilibrium with its surroundings
\citep{DM88}. In this model the opening angle depends on the Lorentz
factor of the flow and the ratio of the external pressure, $P_{ext}$,
to the initial pressure, $P_\circ$, of the plasma in the core region,
$\xi=\sqrt{P_{ext}/P_\circ}$.
Using equations (18), (19a), and (19b) in \citet{DM88}, we calculate
dependences of the opening angle on
Lorentz factor for different values of $\xi$ (dotted curves in Fig. \ref{Open_G}).
The best fit of the {\it observed} dependence (5\% level of significance)
coincides with the model in which $\xi=0.6$ ($\chi^2$=4.586, f=2), corresponding
to a pressure ratio in the core region $P_{ext}/P_\circ\approx 1/3$.
Figure \ref{TI} presents the distribution of intrinsic brightness temperature of
the jet components. Note that $T_{\rm b,int}$ is measured at distances ranging
from several parsecs to a few kiloparsecs from the core (see Fig. \ref{TI},
{\it right panel}), where the knots are generally optically thin. The distribution
peaks at $T_{\rm b,int}\sim 2\times 10^9$~K for the quasar knots and $T_{\rm b,int}
\sim 6\times 10^7$~K for the BL~Lac knots, while for the radio galaxies
the temperatures are evenly distributed between these values.
The maxima should be adjusted to slightly higher temperatures since for 30\%
of the quasar and BL~Lac components we have obtained only upper limits to the Doppler
factors. Comparison of these brightness temperatures with
the intrinsic equipartition brightness temperature of the optically thick part
of the jet ($T_{\rm b,int}\sim 2-5\times 10^{10}$~K), thought to depend
weakly on source parameters \citep{Readhead,LV99}, implies a faster drop of the
intrinsic temperature from the compact core region to the more extended structure
in the BL~Lac jets. One possible explanation for a lower intrinsic brightness
temperature is the presence of a stronger magnetic field \citep{Readhead}. This
would lead to more severe radiative losses and weaker jet components relative to
the core in the BL~Lac objects, and perhaps a lower intrinsic brightness temperature
in the cores as well.
In Figure \ref{DT} we plot the average Doppler factor vs. average viewing angle
({\it left panel}), and the average viewing angle vs.
the intrinsic half opening angle of the jet for each source ({\it right panel}).
The bold crosses indicate the average position of the quasars, BL~Lac objects,
and radio galaxies, with the size of each cross equal to the 1$\sigma$ uncertainty
of the parameters. Although each subclass has a large scatter around the average
values and statistically the difference in the parameters of the quasars and
BL~Lac objects is negligible, the points form a continuous sequence on a 3-D plot of
$\delta$, $\Theta_\circ$, $\theta$ with:
quasars ($<\delta>$=23$\pm$11, $<\Theta_\circ>$=2.6$^\circ\pm$1.9$^\circ$, $\theta$=0.5$^\circ
\pm$0.3$^\circ$)
$\longrightarrow$ BL~Lacs ($<\delta>$=13.5$\pm$6.7, $<\Theta_\circ>$=4.4$^\circ\pm$3.0$^\circ$,
$\theta$=0.6$^\circ\pm$0.4$^\circ$) $\longrightarrow$ radio galaxies
($<\delta>$=2.8$\pm$0.9, $<\Theta_\circ>$=19.5$^\circ\pm$3.2$^\circ$, $\theta$=3.2$^\circ
\pm$0.5$^\circ$).
\subsection{Accelerating/Decelerating Flow}
According to the $\chi^2$ test, one component in the radio galaxy 3C~120,
five components in two BL Lac objects, and 13 components in six quasars
exhibit a statistically significant change in the proper motion
with time corresponding to acceleration/deceleration with distance from the core.
Therefore, in 9 out of 15 sources the apparent speed of individual
components varies. Moreover, although 86 knots are classified
as moving ballistically, $\sim 38\%$ have trajectories that we
are unable to fit well by a polynomial of any order, causing
us to suspect non-ballistic motion for these as well. Figure \ref{V_Change}
shows the evolution of apparent velocity, including direction along the jet,
derived from the best polynomial approximation of component positions across
epochs for knots with a detected change in apparent speed.
We have analyzed the results to search for a general trend in the variation
of apparent speed with distance from the core. For each component, we compare
the instantaneous apparent speed at different distances from the core with the
average (indicated in Table \ref{Speed}), and assign a value representing the
apparent speed at each angular distance of $+1$ if the velocity is
higher than the average, $-1$ if it is lower than the average, and $0$ if it
equals the average. Then we construct the distribution of such changes along
the jet using the derived viewing angle for each component
(Tables \ref{Q_Param}-\ref{G_Param}), which allows deprojection of observed
distances from the core.
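The classification and deprojection described above can be sketched as follows (a minimal illustration with our own names):

```python
import math

def speed_trend(instant_speeds, mean_speed):
    """+1 where the instantaneous speed exceeds the component average,
    -1 where it is lower, 0 where it equals the average."""
    return [0 if v == mean_speed else (1 if v > mean_speed else -1)
            for v in instant_speeds]

def deprojected_distance(r_proj, theta_view_deg):
    """Deproject an observed core distance using the derived viewing angle."""
    return r_proj / math.sin(math.radians(theta_view_deg))
```

Binning the $\pm1$ values by deprojected distance then yields the accelerating/decelerating histogram described in the text.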
The result is shown in Figure \ref{h_Vchange} where the distribution above the
$x$-axis represents accelerating components and the distribution under the $x$-axis
indicates decelerating components at corresponding deprojected distances
from the core. Figure \ref{h_Vchange} reveals that at distances larger than
$\sim$5~pc from the core an increase of apparent speed is more common.
However, there are some
jet components that undergo alternating periods of both acceleration and deceleration.
The latter could be a signature of helical motion
\citep[e.g.,][]{D00,TK04} or pinch instabilities that cause the jet
cross-section, pressure, and Lorentz factor to oscillate with distance
from the core \citep[e.g.,][]{GO97}.
The cause of the accelerations could be bending combined with selection of
objects with high Doppler factors such that the mean angle to the line of
sight of the region near the core is less than optimal for
superluminal motion. Statistically, such jets are more likely to bend
away from the line of sight, thus increasing their apparent speeds with
distance from the core. Alternatively, the acceleration could be physical,
caused by considerably higher energy density in relativistic particles than
in rest mass \citep{DM88} or magnetic acceleration \citep{VK04}.
\subsection{Forward and Reverse Shocks}
Our data reveal the coexistence of both very fast and much slower (but moving) knots
in the jets of the majority of the sources in the sample (see Fig. \ref{Ident}).
Diversity in the apparent speeds of jet features might reflect intrinsic variations
in the pattern speed of disturbances in the jet flow. In the shock wave model for interpretation
of the radio light curves and features propagating down a relativistic jet,
a disturbance such as an increase in velocity or energy flux in a jet
can create both forward and reverse shocks \citep[e.g.,][]{HAA91}. Both move away from
the central engine,
but the reverse shock has a lower velocity than does the forward shock.
In this context fast jet features can be associated with the
forward shock and slow moving knots with the reverse shock.
Analysis of the brightness of these features might reveal a prevalence
of different types of shock waves in the jets for different classes of AGNs.
For each knot in Table \ref{Speed} (except knots classified as trailing features; see \S 6.6)
we have computed parameter $F_{\rm rel}=S_{\rm max}/<S_{\rm A0}>$,
which characterizes the flux of a knot relative to the core, where $S_{\rm max}$ is
the maximum observed flux density of the knot and $<S_{\rm A0}>$ is the average flux density
of the core over epochs. Figure \ref{h_FR} shows the distributions of the derived
values of this parameter for fast ($\beta_{\rm app}> 3c$) and slow
($\beta_{\rm app}\leq 3c$) moving knots in the quasars,
BL~Lac objects, and radio galaxies. The separation into fast and slow features
is not strictly defined, and the distributions do not change significantly if the dividing
velocity is in the range from $2c$ to $4c$. The distributions of $F_{\rm rel}$
of fast knots in the quasars
and BL~Lacs are different at 99.5\% level of confidence ($f$=5) according to
the $\chi^2$ test. The brightness of $\sim$50\% of the fast knots in the quasars is
comparable to the brightness of the core ($F_{\rm rel}\gtrsim$0.5), while
the brightness of fast knots in the BL~Lac
objects never exceeds half the brightness of the core, and the distribution peaks at
$F_{\rm rel}\leq$1/4. The distributions of $F_{\rm rel}$
in the radio galaxies cannot be classified due to the small number of objects.
The distributions for slow moving knots indicate the presence of two populations:
bright ($F_{\rm rel}\geq$1) and faint ($F_{\rm rel}\leq$0.5 for the quasars
and radio galaxies and $F_{\rm rel}<$1 for the BL~Lacs).
The population of bright slow knots consists of jet features with
subluminal apparent speeds that most likely represent stationary shocks in the jets.
This population is most prominent in the quasars. The population of faint slow
knots is prominent in the BL~Lac objects, including 39\% of all identified jet
features, while in the quasars such features comprise only 7\% of detected knots.
About 70\% of faint slow knots in the BL~Lac objects
have flux within 25\% to 100\% of the flux of the core, significantly brighter
than fast knots in these objects.
If fast knots represent forward shocks and slow knots correspond to reverse
shocks, then the forward shock is stronger in the quasars and, perhaps,
the radio galaxies, while the reverse shock dominates in the jets of BL~Lac objects.
A reverse shock will be strong relative to its corresponding forward shock
when the disturbance is prolonged such that the faster flow enters the rear of the shock
structure over an extended period of time. Otherwise, only the forward shock
will be prominent and the knot will have a Lorentz factor only slightly ($\lesssim$6\%)
less than that of the forward shock front \citep[e.g.,][]{SOK04}. BL~Lac objects
might therefore have more prolonged disturbances of lower amplitude than those
in quasars. If jets possess spine-sheath structure \citep{L99}, then an
alternative explanation for the above effect could be that the power
of high-$\Gamma$ spines in BL~Lacs is lower than in quasars.
The proposal that the
nature of shock waves in quasars is different from that of BL~Lac objects has been
suggested by \citet{W94}, who interpreted the evolution of the flux and
polarization of the quasar 3C~345 as forward shocks that have greater speed
than the underlying jet. This is in contrast to the BL~Lac object OJ~287, studied by \citet{CW88},
who modeled the polarization, kinematics, and X-ray variability as reverse shocks,
which are slower than the underlying jet. Our data support the idea
that this could be a key difference between the two classes of blazars.
\subsection{Trailing Shocks}
Numerical hydrodynamical simulations \citep{A01,A03} show that the interaction
of disturbances in the jet flow with the underlying jet and/or the external medium
can play a significant role in the variability of the jet emission. In particular,
multiple conical shocks can form behind a strong shock wave propagating down the jet.
These {\it trailing} shocks appear
to be released in the wake of the primary superluminal component rather than ejected
from the core. The simulations show that the ratio between the apparent velocity of the main
and trailing components is a function of distance along the jet: the closer to the core
a trailing component first appears the slower it moves, although for a given trailing
component the simulations predict deceleration with distance from the core.
Trailing components are oblique shocks in three-dimensional models and
should possess different polarization properties than the leading shock.
\citet{G43} identified several knots in the jet of the radio galaxy
3C~120 as having the characteristics of trailing shocks. Our sequences of
high resolution VLBA images reveal a number of jet features in different sources
that have properties matching those expected for this phenomenon.
These are (see Fig. \ref{Ident}): $c1,c2$ behind knot $C1$ in the
radio galaxy 3C~111; $o2$ behind $o1$ found previously by \citet{G43} in 3C~120;
$b1, b2$ following $B1, B2$, respectively, in the quasar
3C~273; $c9$ behind $C9$ in 3C~345; $b5,b6$ following $B5,
B6$, respectively, in CTA~102; and $b3$ behind $B3$ in 3C~454.3.
Components $c2$ (3C~111), $b2$ (3C~273), and $b5$ and $b6$ (CTA~102)
emerge from bright knots at some distance from the core
(see Fig. \ref{Ident}), while the remainder are already trailing bright features
from the first epoch of our
observations. All of them have apparent speeds less than the corresponding
main component and exhibit different EVPAs than the fast feature (see Fig. \ref{Map}).
Although our observations do not allow us to check every trailing component
for deceleration, a decrease in apparent speed is
pronounced for $c1$ and $o2$ in 3C~111 and 3C~120, respectively,
and can be inferred for $b5$ in CTA~102 (see Fig. \ref{V_trail}).
For $b5$ in CTA~102 a second-order polynomial does not satisfy the $\chi^2$
test but improves goodness-of-fit to the data significantly.
We characterize each trailing component by three parameters:
$\beta_{\rm app}$ is the average apparent speed listed in Table \ref{Speed},
$\beta_{\rm app}/\beta_{\rm app}^M$ is the ratio of apparent velocities of trailing component
to corresponding main component, and $R$ is the average deprojected distance from the core
at which a trailing component is detected in our data ($R$ is
calculated using $<R>$ indicated in Table \ref{Speed} and jet parameters
given in Table \ref{Source_par}).
Figure \ref{Trail} ({\it left panel}) demonstrates that trailing components form in the wake
of bright knots having values of $\Gamma$ extending from 5 to $>$20 and
different values of $\beta_{\rm app}/\beta_{\rm app}^M$ (all of them $<$1)
are observed for similar Lorentz factors. Figure \ref{Trail} ({\it right panel})
shows that there is an increase
of the apparent speed of trailing components with
distance from the core (coefficient of correlation 0.71).
According to \citet{A01}, trailing components represent
pinch waves excited by the main disturbance, and an increase of their speed at larger distance
reflects acceleration of the expanding jet. Our data shown in Figure \ref{Trail}
imply such an acceleration of the underlying jet, which
appears not to depend strongly on the Lorentz factor of the main disturbance,
consistent with the conversion of internal energy into bulk kinetic energy that
accompanies expansion.
Figure \ref{Trail} ({\it left panel}) shows a possible inverse correlation between
$\beta_{\rm app}/\beta_{\rm app}^M$ and Lorentz factor of the main disturbance
(coefficient of correlation $-$0.40). The correlation is most likely
the result of selection effects. Detection of a trailing component close to
a leading shock with a high Lorentz factor is complicated by stretching of the
longitudinal size of the main component in the observer's frame by a factor
$\propto\Gamma$ owing to light-travel time delays. Detection of trailing components
that lag greatly behind a main disturbance with a low Lorentz factor is hampered
by the difficulty of confidently associating trailing components with the leading knot.
For this reason, many of the subluminal or quasi-stationary features detected
near the core ($A$-components) in the majority of sources in the sample
could represent a superposition of many trailing components formed behind a
number of superluminal features. This possibility is valid only if the direction
of the jet ``nozzle'' changes between ejections, since a second major
disturbance passing through a trailing shock would destroy it.
\citet{KL04} have found that there is a systematic decrease in $\beta_{\rm app}$
with increasing
wavelength, which they suggest results from sampling different parts
of the jet structure at different frequencies. Although this might be the case
for average apparent speeds derived from surveys at different wavelengths,
for individual sources with superluminal components detected at
similar distances at different frequencies, the effect
can be caused by the structure containing the leading perturbation plus slower trailing shocks
being unresolved at the longer wavelengths. Perhaps the difference in
the proper motions of the polarized and total intensity components $B8p$ and $B8$
noted in the quasar 0528+134 (see Table \ref{Speed}) is an example
of this effect, with the polarized intensity image (which reveals the leading
compact component) playing the role of the finer resolution of
shorter wavelength observations.
\subsection{Frequency of Superluminal Ejections}
Our monitoring is unique in terms of the number of blazars
observed in a regular manner at high angular resolution over 3 years.
The excellent time coverage allows us to determine the rate of superluminal ejections,
thought to be controlled by activity in the central engine.
We list in Table \ref{Source_par} the rate, $f_{\rm ej}$
(multiplied by the time dilation factor $1+z$), of superluminal ejections
over three successive years based on the results given in Table \ref{Speed}.
In Figure \ref{Eject} we plot the ejection rate versus the average jet
Lorentz factor for blazars. There is a trend suggesting a positive correlation
between $f_{\rm ej}$ and $\Gamma$ (coefficient of correlation 0.5).
The formal $t_{\zeta,\nu}$ test rejects the hypothesis that there is
no relation between $f_{\rm ej}$ and $\Gamma$ at a confidence level $\zeta$=0.05.
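The significance test quoted here can be sketched as follows; the sample size $n$ in the snippet is an illustrative placeholder, since the exact number of blazars entering the correlation is not stated in this passage.

```python
import math

# Illustrative t-test for a correlation coefficient,
# t = r * sqrt(n - 2) / sqrt(1 - r^2), with nu = n - 2 degrees of freedom.
# The sample size n below is a placeholder, not a value quoted in the text.
def t_statistic(r, n):
    """t-value for testing the null hypothesis of no correlation."""
    return r * math.sqrt(n - 2) / math.sqrt(1.0 - r * r)

r, n = 0.5, 13
t = t_statistic(r, n)
print(round(t, 2))  # compare with the critical t for confidence level 0.05
```

For $r = 0.5$ this gives $t \approx 1.9$, to be compared against the critical value $t_{\zeta,\nu}$ for $\zeta = 0.05$ and $\nu = n-2$.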
However, the trend could be an artifact of the bias toward highly Doppler
boosted objects in our sample. Nevertheless,
blazars form the largest class of identified $\gamma$-ray sources \citep{H99},
for which a connection between $\gamma$-ray events and radio jet activity
has been found \citep{J01g}. If nonthermal flares are associated with ejections
of new superluminal knots, as seems to be the case at mm wavelengths \citep{SAV02},
then we would expect $\gamma$-ray light curves to reflect the rate of ejections.
In this case, the possible correlation between
the Lorentz factor and ejection rate would imply that
the most variable $\gamma$-ray sources (in terms of major flares per year)
should possess the highest Lorentz
factors. This can be tested by $\gamma$-ray light curves
of blazars obtained by the Gamma Ray Large Area Space Telescope (GLAST,
expected to begin operation in 2007), along with VLBI monitoring of the
radio jets.
\section{Summary}
In this paper we have presented the entire data set collected
during a three-year program of monitoring AGNs using the VLBA,
and have discussed the main results
relating to the kinematics of jets. The sequences of images reveal
short timescales (for some sources shorter than 2 months) of
variability of the jet structure and even more rapid variability in
polarization. This program illustrates the importance of
intensive monitoring for understanding jet physics in
superluminal radio sources.
We have measured the apparent speed of 106 features in the inner jets
(within 4~mas of the core) of two radio galaxies, five BL Lac objects,
and eight quasars from 1998 March to 2001 April. Superluminal apparent speeds
occur in 80\% of the knots, 26\% of which show statistically significant
deviations from ballistic motion. The majority of non-ballistic
components undergo an increase of apparent speed with distance from
the core, although local decelerations are observed in some cases
alongside the general acceleration. This could be the result of physical
acceleration, or of a selection of sources whose angles to the line of
sight are $< \sin^{-1}(1/\Gamma)$ near the core and closer to this value
farther out. Many of the jets contain both very fast and much slower moving features,
which might be explained as forward and reverse shocks, respectively.
It appears that fast features are pronounced in the jets of the
quasars, while slower moving knots dominate in the BL~Lac jets.
This suggests a different nature of the main disturbances seen in the jets
of the two classes of blazars.
The properties of 11\% of the
superluminal components are consistent with the characteristics of trailing
shocks expected to form in the wake of strong disturbances in the flow.
Trailing components emerging from bright knots farther down the jet
have a faster apparent speed than those generated near the core, while a given
trailing feature most likely decelerates along the jet.
According to the numerical simulations by \citet{A01}, the trailing
components are caused by the triggering of pinch modes by the main disturbance.
The jet accelerates close to the core as internal energy is converted into
bulk kinetic energy, leading to trailing shocks being faster farther downstream.
In four sources (3C~120, OJ~287, 3C~345, and CTA~102)
the behavior of jet components appears to be affected by the
interaction with the external medium (see \S A). For each source a number
of components decelerate, change trajectory, brighten
in total and polarized flux, and undergo a rotation of the EVPA at the
same distance from the core along an edge of the jet.
This implies the existence of gas clouds
a few parsecs to $>$1~kpc (deprojected) from the central engine,
intermediate between the locations of the very dense clouds of the broad line
regions and more rarefied clouds of the narrow line regions.
Using measurements from the VLBI images of parameters such as flux density,
apparent speed, and size of components, we have estimated Lorentz and Doppler
factors as well as viewing angles for superluminal knots and the opening angle
of each jet. This is a new method to define jet parameters, based on
the assumption that the decay in flux of the superluminal components is caused
by radiative losses rather than by cooling from expansion, and is subject
to light-travel delays.
We demonstrate that at high radio frequencies these assumptions are
most likely correct. The derived parameters of the jets indicate
that in our sample the quasars have the highest Doppler factors ($\delta$) and
smallest viewing ($\Theta_\circ$) and opening ($\theta$) angles, while the
two radio galaxies possess significantly lower Doppler factors, larger viewing
angles, and wider opening angles despite their ``blazar-like'' radio properties.
This implies that in the 3-dimensional parameter space
($\delta$,$\Theta_\circ$,$\theta$) the radio galaxies, BL~Lacs, and quasars in
our sample occupy different regions. The regions of
the quasars and BL~Lac objects partly overlap, while the radio galaxies
are significantly distinguished from the blazars in all three parameters.
Since our sample is not a complete
one, the major differences suggested by these results need to
be investigated further with larger, complete samples.
The inferred relationship between the half opening angle and the Lorentz factor agrees
with the expectation of gas-dynamic models that predict smaller values of $\theta$
for higher Lorentz factors and a dependence of $\theta$ on the ratio
of the external and internal jet pressure. The best
approximation to the relation is very close to $P_{\rm ext}/P_\circ\sim 1/3$.
This does not, however, exclude collimation by magnetic pinching,
which might produce a similar $\theta-\Gamma$ relation.
We have estimated the intrinsic brightness temperatures of jet components in
the quasars, BL~Lacs, and radio galaxies on parsec scales, obtaining averages of
$1.1\times 10^9$~K, $5.5\times 10^7$~K, and $3.5\times 10^9$~K, respectively.
Comparison of these values with the
equipartition brightness temperature of the optically thick part of the jets,
$T_{\rm b,int}=2-5\times 10^{10}$~K, suggests a stronger magnetic field in the
BL~Lac objects.
There is a possible positive correlation between the Lorentz factor of the jet
and ejection rate of superluminal components for the blazars. This can be
tested with the $\gamma$-ray light curves that will be measured by the
GLAST mission.
astro-ph/0502113
\section{Introduction}
\label{sec:introduction}
Measurements of the cosmic microwave background radiation \cite{cmb1}
have revealed a highly uniform energy density background with super-horizon
perturbations on the order of one part in $10^5$. In the standard
inflationary paradigm \cite{ref,infref}, these density perturbations were
created in the inflationary epoch when quantum fluctuations of the inflaton
field expanded beyond the Hubble radius and were converted into density
perturbations upon inflaton decay. However, to obtain the observed level of
density perturbations from this mechanism requires tight constraints on the
inflaton potential \cite{cmb2}.
Recently, Dvali, Gruzinov, Zaldarriaga and independently Kofman (DGZK)
proposed a new mechanism \cite{DGZ} for producing density perturbations.
A nice feature of their scenario is that the only requirements on the
inflaton potential are that it produce the required number of e-foldings
of inflation, at a scale consistent with WMAP data. The
DGZK mechanism posits the existence of some heavy particle $S$ with a mass
and decay rate that depend on the vacuum expectation value of
some light field $\chi$. Here $\chi$ is presumed to have acquired
super-horizon fluctuations during the inflationary epoch; however $\chi$
never contributes significantly to the energy density of the
universe\footnote{The scenario where $\chi$ contributes significantly toward
the energy density is called the ``curvaton'' scenario and was first proposed
in \cite{curv}.}. Nevertheless, the fluctuations in $\chi$ persist and
result in fluctuations in the mass and decay rate of $S$, so long as the
$\chi$ mass $m_\chi$ is less than the Hubble rate $H$ at the time at which
fluctuations are transferred to radiation. In the DGZK mechanism the
field $S$ comes to dominate the energy density of the universe and decays
into radiation while $m_\chi < H$. Fluctuations in the mass and decay rate
of $S$ result in fluctuations in the duration of $S$ energy domination, which
in turn lead to adiabatic density perturbations since the energy of a massive
$S$ field redshifts more slowly than that of radiation.
The DGZK mechanism has been studied extensively. For example, the evolution
of the density perturbations that result from this mechanism has been
studied in detail using gauge invariant formalisms in \cite{gaugeinv}. These
perturbations are shown to possess a highly scale invariant spectrum in
\cite{scaleinv} and are shown to contain significant non-Gaussianities in
\cite{nongauss}. The original DGZK mechanism has also been extended to apply
to preheating as studied in \cite{extensions}. For
discussions of the limitations of this mechanism see for example
\cite{limitations}.
In the original DGZK scenario \cite{DGZ} it is assumed that $S$ decouples
while being relativistic. In this paper we generalize this to apply to the
case where $S$ freezes out of equilibrium with a fluctuating annihilation rate
$\langle\sigma v\rangle$. We use the term ``freeze-out'' to refer
specifically to the scenario where $S$ decouples from thermal equilibrium
after it has become non-relativistic. In this case the number density of $S$
at a temperature $T$ after freeze-out is
\begin{eqnarray}
n_S &\simeq& \frac{T^3}{m_Sm_{\rm pl}\langle \sigma v\rangle } \,,
\label{nSfreeze-out}
\end{eqnarray}
where $m_{\rm pl}$ is the Planck mass and $m_S$ is the mass of $S$.
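As a quick consistency check (a numerical sketch in arbitrary units, not part of the original analysis), the logarithmic sensitivities of Eq.~(\ref{nSfreeze-out}) to $m_S$ and $\langle\sigma v\rangle$ at fixed $T$ can be verified directly, confirming $\delta n_S/n_S = -\delta_m - \delta_{\langle\sigma v\rangle}$ at linear order.

```python
import math

# Sketch in arbitrary units: logarithmic sensitivity of the freeze-out
# abundance n_S ~ T^3 / (m_S * m_pl * <sigma v>) to m_S and <sigma v>
# at fixed T.  All numbers are illustrative placeholders.
def n_S(T, m_S, m_pl, sv):
    return T**3 / (m_S * m_pl * sv)

def log_deriv(f, x, eps=1e-6):
    """d ln f / d ln x by central differences."""
    return (math.log(f(x * (1 + eps))) - math.log(f(x * (1 - eps)))) / (2 * eps)

T, m_pl = 1.0, 1.0
c_m  = log_deriv(lambda m: n_S(T, m, m_pl, 1.0), 1.0)   # expect -1
c_sv = log_deriv(lambda s: n_S(T, 1.0, m_pl, s), 1.0)   # expect -1
print(round(c_m, 3), round(c_sv, 3))
```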
Therefore we expect fluctuations in the mass and annihilation rate of $S$
during freeze-out to result in fluctuations in the number density of $S$. If
$S$ lives long enough to dominate the energy density of the universe and
subsequently decays, these entropy perturbations are converted into adiabatic
perturbations. These add to the ones produced by the original DGZK mechanism
and the quantum fluctuations of the inflaton.
This paper is organized as follows. In Section \ref{sec:analytical} we
describe the density perturbations produced by our generalized DGZK mechanism.
Sections \ref{sec:Smodels} and \ref{sec:chimodels} contain explicit models
for implementing our mechanism and for producing the fluctuating masses and
coupling constants, respectively. Conclusions are given in Section
\ref{sec:conclusions}. In Appendix \ref{sec:pertanalysis} an alternate
analytical description is given which allows one to track the evolution of the
perturbations, while in Appendix \ref{sec:numerical} Boltzmann equations are
derived and solved numerically to confirm the analytical arguments presented
in other sections of this paper.
\section{Analytical determination of the perturbations}
\label{sec:analytical}
Our generalized DGZK mechanism includes a heavy particle $S$ with mass
$m_S$, decay rate $\Gamma$ and annihilation cross section
$\langle\sigma v\rangle$, where $S$ decays to and interacts with radiation.
We begin by identifying several key temperature scales. The temperature at
which $S$ begins to thermalize with radiation is denoted as $T_{\rm therm}$.
We assume for simplicity that $S$ particles are produced only through
annihilation of radiation once thermalization begins below $T=T_{\rm therm}$. We also
define:
\begin{enumerate}
\item $T_{\rm f.o.}$: Temperature at which $S$ freezes out of thermal
equilibrium;
\item $T_{\rm dom}$: Temperature at which $S$ begins to dominate the energy
density of the universe;
\item $T_{\rm dec}$: Temperature at which $S$ decays.
\end{enumerate}
Since the number density of $S$ particles falls off exponentially after $S$
becomes non-relativistic, $T_{\rm f.o.}$ is typically within an order of
magnitude of $m_S$. Therefore in this paper we always take
$T_{\rm f.o.}\simeq m_S$. In terms of $m_S$, $\Gamma$ and
$\langle\sigma v \rangle$ we also find
\begin{eqnarray}
T_{\rm dom} &\simeq& \frac{1}{m_{\rm pl} \langle\sigma v\rangle} \nn\\
T_{\rm dec} &\simeq& m_{\rm pl} \, \Gamma^{2/3}\langle\sigma v\rangle ^{1/3}\,,
\label{Tdefs}
\end{eqnarray}
where we have assumed $T_{\rm dec}<T_{\rm dom}$ in the last equation. This
condition is necessary for significant density perturbations to be produced
by this mechanism. In Eq.~(\ref{Tdefs}) the cross section is to be evaluated
at the freeze-out temperature $T_{\rm f.o.}$. Note that for $S$ particles to
be produced in the first place we require $T_{\rm therm}>T_{\rm f.o.}$.
As described in \cite{DGZ}, the period of $S$ domination between
$T_{\rm dom}$ and $T_{\rm dec}$ gives rise to an enhancement of the resulting
energy density compared to a scenario where the $S$ domination is absent.
Comparing energy densities at common scale factor one finds that after $S$
decays
\begin{eqnarray}
\rho &=& \left( \frac{\rho_{\rm dom}}{\rho_{\rm dec}} \right)^{1/3} \!\!\!\!
\rho_{\rm rad}\,,
\label{rhoDGZ}
\end{eqnarray}
where
\begin{eqnarray}
\rho_{\rm dom} &\simeq& T_{\rm dom}^4\,, \qquad
\rho_{\rm dec} \,\simeq\, \frac{T_{\rm dec}^3}{m_{\rm pl}
\langle\sigma v\rangle} \,,
\label{rhodefs}
\end{eqnarray}
and $\rho_{\rm rad}$ is the energy density which would result without any
period of matter domination. As discussed in detail in Section IV, couplings
to an additional field $\chi$ can give rise to fluctuations in $m_S$,
$\Gamma$, and $\langle \sigma v\rangle $:
\begin{eqnarray}
m_S &=&\overline{m}_S \left( 1 + \delta_m \right) \nn\\
\langle\sigma v\rangle &=& \langle\overline{\sigma v}\rangle
\left( 1 + \delta_{\langle\sigma v\rangle} \right) \nn\\
\Gamma &=& \overline\Gamma\left( 1 + \delta_\Gamma\right) \,,
\label{pertdefs}
\end{eqnarray}
where the barred quantities refer to background values. According to
Eqs.~(\ref{Tdefs}-\ref{pertdefs}), these fluctuations give rise to
fluctuations in $T_{\rm dom}$ and $T_{\rm dec}$ which result in energy
density perturbations
\begin{eqnarray}
\frac{\delta\rho}{\rho} &=& -\frac{2}{3}\,\delta_\Gamma
-\frac{4}{3}\,\delta_{\langle\sigma v\rangle} \,.
\end{eqnarray}
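The coefficients $-2/3$ and $-4/3$ follow from substituting Eq.~(\ref{Tdefs}) into Eqs.~(\ref{rhoDGZ})-(\ref{rhodefs}). A short numerical sketch (arbitrary units, $m_{\rm pl}=1$; purely illustrative) confirms the logarithmic sensitivities of the enhancement factor.

```python
import math

# Sketch: substitute T_dom ~ 1/(m_pl <sigma v>) and
# T_dec ~ m_pl Gamma^(2/3) <sigma v>^(1/3) into the enhancement
# factor (rho_dom/rho_dec)^(1/3); its logarithmic sensitivities to
# Gamma and <sigma v> should be -2/3 and -4/3 respectively,
# reproducing delta rho / rho.  Units are arbitrary (m_pl = 1).
def enhancement(Gamma, sv, m_pl=1.0):
    T_dom = 1.0 / (m_pl * sv)
    T_dec = m_pl * Gamma**(2.0 / 3.0) * sv**(1.0 / 3.0)
    rho_dom = T_dom**4
    rho_dec = T_dec**3 / (m_pl * sv)
    return (rho_dom / rho_dec)**(1.0 / 3.0)

def log_deriv(f, x, eps=1e-6):
    """d ln f / d ln x by central differences."""
    return (math.log(f(x * (1 + eps))) - math.log(f(x * (1 - eps)))) / (2 * eps)

c_G  = log_deriv(lambda G: enhancement(G, 1.0), 1.0)    # expect -2/3
c_sv = log_deriv(lambda s: enhancement(1.0, s), 1.0)    # expect -4/3
print(round(c_G, 3), round(c_sv, 3))
```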
Note that although $\delta\rho/\rho$ contains no explicit dependence on
$\delta_m$, both $\delta_{\langle \sigma v\rangle }$ and $\delta_\Gamma$ are
in general functions of $\delta_m$.
Comparing the energy density at a common scale factor corresponds to choosing
a gauge where the perturbation in the scale factor vanishes, $\psi=0$. Thus
the fluctuation in the energy density computed here can be directly related
to the gauge invariant Bardeen parameter \cite{bardeen}
\begin{eqnarray}
\zeta &=& -\psi+\frac{\delta\rho}{3( \rho + p)} \,.
\label{zeta}
\end{eqnarray}
Thus we find after $S$ decays
\begin{eqnarray}
\zeta = -\frac{1}{6}\,\delta_\Gamma
-\frac{1}{3}\,\delta_{\langle\sigma v\rangle} \,.
\label{pertresults}
\end{eqnarray}
We can obtain the same result in synchronous gauge, where different
regions all have the same global time. Since $\rho \sim 1/t^2$ in both matter and radiation dominated universes, one finds that $\delta\rho=0$ on surfaces of constant time. Thus
the Bardeen parameter is
\begin{eqnarray}
\zeta &=& -\psi = \frac{\delta a}{a}\,.
\label{uniHubblezeta}
\end{eqnarray}
To obtain $\zeta$, we only need to determine
$a(t,\Gamma,\langle\sigma v\rangle,m)$ and then compare two regions at fixed
$t$, but different $\Gamma$, $\langle\sigma v\rangle$ and $m_S$.
Assuming the $S$ particles freeze-out while non-relativistic and decay after
dominating the energy density of the universe, this gives
\begin{eqnarray}
a(t) &=& \frac{a(t)}{a(t_{\rm dec})} \frac{a(t_{\rm dec})}{a(t_{\rm dom})}
\frac{a(t_{\rm dom})}{a(t_{\rm f.o.})}\frac{a(t_{\rm f.o.})}{a(t_{0})}a(t_0)
\nn\\
&=& \left(\frac{t}{t_0}\right)^{1/2}
\left(\frac{t_{\rm dec}}{t_{\rm dom}}\right)^{1/6} a(t_0) \,,
\end{eqnarray}
where $t_{\rm dec}\simeq\Gamma^{-1}$ is the time when $S$ decays,
$t_{\rm dom}\simeq m^3_{\rm pl} \langle\sigma v\rangle ^2$ is the time at
which it dominates the energy density of the universe, and $t_{\rm f.o.}$
is the time at which it freezes out. Substituting gives
\begin{eqnarray}
a(t) &=& \left(\frac{t}{t_0}\right)^{1/2} m^{-1/2}_{\rm pl}
\Gamma^{-1/6} \langle\sigma v\rangle^{-1/3} a(t_0) \,.
\end{eqnarray}
Using this result and Eq.~(\ref{uniHubblezeta}) we again obtain
Eq.~(\ref{pertresults}).
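This second route can also be checked numerically (a sketch in arbitrary units; the power-law form of $a(t)$ is taken from the expression above).

```python
import math

# Sketch of the synchronous-gauge route: with
# a ~ t^(1/2) m_pl^(-1/2) Gamma^(-1/6) <sigma v>^(-1/3) at fixed t,
# zeta = delta a / a carries coefficients -1/6 and -1/3, matching
# the result for zeta above.  Units are arbitrary placeholders.
def a_of(Gamma, sv, t=1.0, m_pl=1.0):
    return t**0.5 * m_pl**(-0.5) * Gamma**(-1.0 / 6.0) * sv**(-1.0 / 3.0)

def log_deriv(f, x, eps=1e-6):
    """d ln f / d ln x by central differences."""
    return (math.log(f(x * (1 + eps))) - math.log(f(x * (1 - eps)))) / (2 * eps)

zeta_G  = log_deriv(lambda G: a_of(G, 1.0), 1.0)   # expect -1/6
zeta_sv = log_deriv(lambda s: a_of(1.0, s), 1.0)   # expect -1/3
print(round(zeta_G, 3), round(zeta_sv, 3))
```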
The above discussion is approximate and requires that $S$ completely
dominates the energy density of the universe. Obtaining the perturbations
when $S$ does not dominate requires that we include the matter contribution
to the scale factor or energy density during radiation domination. This is
done in Appendix \ref{sec:pertanalysis} using a different formalism. In
Appendix \ref{sec:numerical} we confirm these analytic results using a
numerical calculation of the density perturbations using Boltzmann equations.
\section{Explicit Models for Coupling $S$ to Radiation}
\label{sec:Smodels}
It is important to verify that models exist which exhibit the features
discussed in the previous section. We present two models in which the
annihilation cross section is determined by renormalizable and
non-renormalizable operators, respectively.
The first model is given by the Lagrangian
\begin{eqnarray}
{\cal L} &=& \sqrt{-g} \left[ \frac{(\partial_\mu S)^2}{2}
+\frac{(\partial_\mu X)^2}{2} -\frac{m_S^2}{2} S^2 -\frac{m_X^2}{2} X^2
\right. \nn\\
& & \hspace{1.5cm} \left.
- \frac{g\,m_S}{2}\,S\,X^2 -\frac{\lambda}{4}\,S^2 X^2 \right] \,.
\label{Smodel1}
\end{eqnarray}
We assume that $X$ is in thermal equilibrium with the remaining radiation
and that $S$ particles are only produced through their coupling to $X$.
The interaction terms in the above model yield an $S$ decay rate and
cross section
\begin{eqnarray}
\Gamma &\sim& g^2m_S \,,\qquad
\langle \sigma v\rangle \,\sim\, \frac{\lambda^2}{M^2} \,,
\label{gammasigma}
\end{eqnarray}
where
\begin{eqnarray}
M &\simeq& \left\{
\begin{array}{ll}
T & \,{\rm when}\,\,\, T \,>\, m_S \\
m_S & \,{\rm when}\,\,\, T \,<\, m_S
\end{array}
\right. \,.
\label{Mdef}
\end{eqnarray}
Note that we neglect the ${\cal O}(g^4)$ contribution to the cross section.
This is justified given the limits on the coupling constants derived below.
The requirement that $T_{\rm therm} > m_S$ and that $S$ remains in thermal
equilibrium down to $T \simeq m_S$ gives the condition on the coupling
$\lambda$
\begin{eqnarray}
\lambda &>& \sqrt{\frac{m_S}{m_{\rm pl}}} \,.
\label{lambdaconstraint}
\end{eqnarray}
On the other hand the condition $T_{\rm dec} < T_{\rm dom}$ implies
\begin{eqnarray}
g^2 \lambda^4 &<& \frac{m_S^3}{m_{\rm pl}^3}\,.
\label{glambdaconstraint}
\end{eqnarray}
Thus a necessary (but not sufficient) condition on $g$ to satisfy both
Eq.~(\ref{lambdaconstraint}) and Eq.~(\ref{glambdaconstraint}) is
\begin{eqnarray}
g &<& \sqrt{\frac{m_S}{m_{\rm pl}}}\,.
\end{eqnarray}
Finally, we require that the period of $S$ domination does not disrupt
big bang nucleosynthesis (BBN). Thus the decay of $S$ must reheat the
universe to a temperature $T_{\rm rh}>T_{\rm BBN}$, where
$T_{\rm rh}\simeq \sqrt{\Gamma m_{\rm pl}}$. This gives
\begin{eqnarray}
g^2 &>& \frac{T_{\rm BBN}^2}{m_S m_{\rm pl}} \,.
\end{eqnarray}
Using $T_{\rm BBN}\simeq 10^{-21}m_{\rm pl}$, the above relations provide
the constraint $m_S\gtrsim 10^{-21}m_{\rm pl}$. Given any $m_S$ satisfying
this constraint, limits on $\lambda$ and $g$ are calculated using
Eq.~(\ref{lambdaconstraint}) and Eq.~(\ref{glambdaconstraint}).
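As a hypothetical numerical illustration (in Planck units, with $m_S$ chosen arbitrarily above the bound), the three constraints leave a nonempty window for the couplings.

```python
import math

# Hypothetical window for the couplings, in Planck units
# (m_pl = 1, T_BBN ~ 1e-21 as in the text); m_S is an arbitrary
# sample value above the 1e-21 lower bound, for illustration only.
m_S = 1e-10
T_BBN = 1e-21

lam_min = math.sqrt(m_S)             # lambda > sqrt(m_S/m_pl)
lam = 10 * lam_min                   # a sample lambda inside the allowed range
g_max = math.sqrt(m_S**3 / lam**4)   # from g^2 lambda^4 < m_S^3/m_pl^3
g_min = T_BBN / math.sqrt(m_S)       # BBN reheating: g^2 > T_BBN^2/(m_S m_pl)

print(lam_min, g_min, g_max)         # the window g_min < g < g_max is nonempty
```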
Note that in this model the $S$ particles are produced at
$T = T_{\rm therm}$ and remain in thermal equilibrium with the radiation
until they freeze out at $T \simeq m_S$. This is different from the
assumption made in \cite{DGZ}, where $S$ starts in thermal equilibrium and
decouples while still relativistic. In order to achieve this scenario, the
coupling of $S$ to radiation has to proceed via a higher dimensional
operator, or in other words via the propagation of an intermediate particle
with mass much greater than $m_S$.
This brings us to our second model. Consider a heavy fermion $\psi_S$ and
a light fermion $\psi_X$, coupled via an additional heavy scalar $\phi_H$
with mass $m_H$,
\begin{eqnarray}
{\cal L}_{\rm int} = g_S\,\bar{\psi}_S\psi_S\,\phi_H
+ g_X\,\bar{\psi}_X\psi_X \, \phi_H\,.
\end{eqnarray}
We also assume that the fermion $\psi_S$ decays to radiation with rate
$\Gamma$. The annihilation cross section is given by
\begin{eqnarray}
\langle \sigma v \rangle \sim \frac{g_S^2 g_X^2}{m_H^4} M^2\,,
\end{eqnarray}
where $M$ is defined in Eq.~(\ref{Mdef}).
In this case, thermalization occurs for temperatures bounded by
\begin{eqnarray}
m_S \left( \frac{m_H^4}{m_S^3 m_{\rm pl}} \frac{1}{g_S^2 g_X^2} \right)^{1/3}
< T < g_S^2 g_X^2 m_{\rm pl}\,.
\end{eqnarray}
The condition that $S$ be in thermal equilibrium when the temperature
reaches $T \simeq m_S$ gives
\begin{eqnarray}
g_S^2 g_X^2 > \frac{m_H^4}{m_S^3 m_{\rm pl}}\,.
\label{gSgXlimit}
\end{eqnarray}
Note that one still needs a decay rate small enough that
$\psi_S$ decays after it dominates the universe. The point of this second
example is to show that in non-renormalizable models the heavy species can
either decouple while non-relativistic or while relativistic, depending on
whether Eq.~(\ref{gSgXlimit}) is satisfied or not.
\section{Models for producing the fluctuations}
\label{sec:chimodels}
The density perturbations in the DGZK mechanism and our generalization
originate in fluctuations in a light scalar field $\chi$. In this section we
write down explicit models for couplings between $S$ and $\chi$. The reason
for doing this is that these interactions can give rise to back reactions
which can constrain the magnitude of the produced density perturbations.
Similar results hold for couplings between $\psi_S$ and $\chi$.
We find it convenient to define $\delta \chi\equiv \chi - \langle\chi\rangle$.
Note that this does not correspond to a perturbative expansion. The fluctuations in $\chi$
are created during the inflationary era with
$\delta\chi\sim H_{\rm inf}$.
Then the leading order equation of motion for $\chi$ can be split into
homogeneous and inhomogeneous parts,
\begin{eqnarray}
\langle \ddot{\chi}\rangle &=& -3H\langle\dot{\chi}\rangle-
\langle V' \rangle \,,\nn\\
\delta\ddot{\chi} &=& -3H\delta\dot{\chi}+4\dot{\phi}\langle\dot{\chi}\rangle
-\delta V'-2\phi \langle V' \rangle \,.
\label{chieom}
\end{eqnarray}
Here $\delta V' \equiv V'-\langle V'\rangle$, where $V$ is the potential of $\chi$
and the prime denotes a derivative
with respect to $\chi$. Also, $\phi$ is the perturbation to the time
component of the metric in conformal Newtonian gauge. The terms proportional
to $\phi$ enter the leading-order equation of motion for $\delta\chi$
because they multiply the homogeneous quantities $\langle\dot{\chi}\rangle$
and $\langle V' \rangle$.
To simplify the analysis, we first consider the scenario where
$\langle\chi\rangle$ is negligible. From Eqs.~(\ref{chieom}) we see this
is the case when $\langle\chi\rangle<\delta\chi$. Thus we require the
equation of motion for $\langle\chi\rangle$ to be Hubble friction dominated
for $\langle\chi\rangle<\delta\chi$. This gives the condition
\begin{eqnarray}
H^2\delta\chi &>& H^2\langle\chi\rangle \,\,>\,\, \langle V' \rangle \,.
\label{simpcond}
\end{eqnarray}
The fluctuations $\delta\chi$ persist so long as the equation of motion
for $\delta\chi$ is Hubble friction dominated. With
$\langle\chi\rangle<\delta\chi$ this translates into the condition
\begin{eqnarray}
H^2\delta\chi &>& \delta V' +2\phi\langle V'\rangle \,.
\label{chiweakmasscond}
\end{eqnarray}
Note that we can combine our simplifying condition that $\langle\chi\rangle$
be negligible, Eq.~(\ref{simpcond}), with the condition that the fluctuations
in $\delta\chi$ be Hubble friction dominated, Eq.~(\ref{chiweakmasscond}).
Adding these two conditions and dropping factors of 2 gives the single
condition
\begin{eqnarray}
H^2\delta\chi &>& V' \,.
\label{chimasscond}
\end{eqnarray}
We consider the constraints this condition imposes on models for
transferring $\chi$ fluctuations to the radiation. We first consider
the renormalizable interactions
\begin{eqnarray}
{\cal L}_{\chi} &=& \sqrt{-g}\left[ -\frac{\alpha_S}{4} S^2\chi^2
- \frac{\mu_S}{2} S^2\chi \right] \,,
\label{chifluctmodels1}
\end{eqnarray}
and neglect any couplings between $\chi$ and $X$ as they are irrelevant to
our mechanism. When $\chi$ fluctuates these interactions result in $S$ mass
fluctuations of
\begin{eqnarray}
\delta_m &=& \frac{\alpha_S\delta\chi^2}{4m_S^2}
+ \frac{\mu_S\delta\chi}{2m_S^2} \nn\\
&\sim& \sqrt{\left(\frac{\alpha_S H_{\rm inf}^2}{m_S^2}\right)^2 \!\!
+ \left(\frac{\mu_S H_{\rm inf}}{m_S^2}\right)^2}\,,
\label{mfluct}
\end{eqnarray}
where in the second line we have estimated the size of the rms fluctuation at
two widely separated co-moving points. This mass fluctuation gives rise to
fluctuations in the decay rate and the annihilation cross section of $S$
according to the mass dependence of Eqs.~(\ref{gammasigma}).
As described above, for this fluctuation to persist and for
$\langle\chi\rangle$ to remain negligible requires that
$H^2\delta\chi >V'$. Although we assume the self interaction of $\chi$
is always negligible, the interactions of ${\cal L}_\chi$ contribute to
$V$ and provide the constraint
\begin{eqnarray}
H^2\delta\chi &>& \left( \frac{\alpha_S}{2}\delta\chi
+ \frac{\mu_S}{2} \right) \langle S^2 \rangle \,,
\end{eqnarray}
where $\langle S^2 \rangle$ is evaluated in the thermal bath. This
constraint is tightest at $T=m_S$ when
$\langle S^2 \rangle/H^2\sim m_{\rm pl}^2/m_S^2$. Thus we obtain the
constraints
\begin{eqnarray}
\alpha_S &<& \frac{m_S^2}{m_{\rm pl}^2} \,,\qquad
\mu_S \,\,<\,\, \frac{m_S^2}{m_{\rm pl}^2}H_{\rm inf} \,.
\label{alphamuconstraints}
\end{eqnarray}
The constraints of Eqs.~(\ref{alphamuconstraints}) provide the same upper
bound to both terms in Eq.~(\ref{mfluct}). Thus the back reactions of
${\cal L}_\chi$ limit the level of density perturbations produced via
this mechanism to
\begin{eqnarray}
\zeta &\sim& \delta_m \,<\, \frac{H_{\rm inf}^2}{m_{\rm pl}^2}
\,\lesssim\, 10^{-8}\,,
\label{zetaconstr1}
\end{eqnarray}
where the last limit on $H_{\rm inf}/m_{\rm pl}$ is measured by the WMAP
collaboration \cite{cmb2}.
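Numerically, taking the round bound $H_{\rm inf}/m_{\rm pl} \lesssim 10^{-4}$ (an assumed representative value, not a fitted number), the cap works out as follows.

```python
# Round-number arithmetic behind the bound zeta < H_inf^2/m_pl^2.
# The value of H_inf/m_pl below is an assumed illustrative bound.
H_over_mpl = 1e-4
zeta_max = H_over_mpl**2      # cap on the mass-fluctuation channel
zeta_observed = 1e-5          # observed level of density perturbations
print(zeta_max, zeta_max < zeta_observed)
```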
The fluctuations resulting from the second interaction in ${\cal L}_\chi$
are linear in $\delta\chi$ and are therefore predominantly Gaussian in
their distribution. Since the observed level of Gaussian fluctuations
sets $\zeta\sim 10^{-5}$, this interaction cannot provide a significant
fraction of the observed density perturbations. However, the fluctuations
resulting from the first term in ${\cal L}_\chi$ are quadratic in
$\delta\chi$ and therefore non-Gaussian \cite{extensions}. Recent analysis
\cite{cmb3} limits the amplitude of non-Gaussian perturbations to about
$10^{-8}$. Thus we see our model can provide non-Gaussian perturbations
right at the limit of current observation. A lower level of
perturbations is obtained by reducing $\alpha_S$ or $\mu_S$.
As a variant on the above scenario, we next consider the non-renormalizable
couplings
\begin{eqnarray}
{\cal L}'_{\chi} &=& \sqrt{-g} \left[ -\frac{\lambda}{4}\,
\frac{\chi^2}{M_1^2}\,S^2 X^2 -\frac{\lambda}{4}\,
\frac{\chi}{M_2} S^2X^2\right] \,.
\label{chifluctmodel2}
\end{eqnarray}
When $\chi$ fluctuates these interactions
result in fluctuations in $\langle \sigma v\rangle $
\begin{eqnarray}
\delta_{\langle \sigma v\rangle } &=& \frac{2\delta \chi^2}{M_1^2 }
+ \frac{2\delta \chi}{M_2}
\,\sim\, \sqrt{ \left(\frac{ H_{\rm inf}^2}{M_1^2}\right)^2 \!\!
+ \left(\frac{ H_{\rm inf}}{M_2}\right)^2} . \quad
\label{svfluct}
\end{eqnarray}
As above, we require that Eq.~(\ref{chimasscond}) be satisfied. For the
interactions of ${\cal L}'_\chi$ this gives
\begin{eqnarray}
H^2\delta\chi &>& \frac{\lambda}{4}\left(
\frac{2\delta\chi}{M_1^2}+\frac{1}{M_2}
\right) \langle S^2X^2 \rangle \,.
\end{eqnarray}
As in the previous example, this constraint is tightest at $T=m_S$ when
$\langle S^2X^2 \rangle/H^2\sim m_{\rm pl}^2$. Therefore we find
\begin{eqnarray}
\frac{1}{M_1^2} &<& \frac{1}{\lambda}\frac{1}{m_{\rm pl}^2} \,,\qquad
\frac{1}{M_2} \,\,<\,\, \frac{1}{\lambda}\frac{H_{\rm inf}}{m_{\rm pl}^2} \,.
\label{Mconstraints}
\end{eqnarray}
Analogous to the previous example, the constraints of
Eqs.~(\ref{Mconstraints}) provide the same upper bound to both terms in
Eq.~(\ref{svfluct}). Thus the back reactions of ${\cal L}'_\chi$ limit the
level of density perturbations produced via this mechanism to
\begin{eqnarray}
\zeta &\sim& \delta_{\langle \sigma v\rangle }
\,<\, \frac{1}{\lambda}\frac{H_{\rm inf}^2}{m_{\rm pl}^2} \,.
\label{zetaconstr2}
\end{eqnarray}
This bound is significantly weaker than the bound of Eq.~(\ref{zetaconstr1})
obtained via a fluctuating $S$ mass. For example, the fluctuations resulting
from ${\cal L}'_\chi$ could form the dominant contribution to the observed
density perturbations if $\lambda$ is sufficiently small. In addition, for
a given $\zeta$ decreasing $\lambda$ allows for a lower scale of inflation.
Constraints on the smallness of $\lambda$ are discussed in Section
\ref{sec:Smodels}. Of course, a lower level of Gaussian (non-Gaussian)
perturbations is obtained by increasing $M_2$ ($M_1$).
Above we have taken $\langle \chi \rangle$ to be negligible, which
corresponds to taking $\langle \chi\rangle<\delta\chi$. Although this simplifies the presentation,
it unnecessarily strengthens the
constraints on $\mu_S$ and $M_2$. We know from Eq.~(\ref{simpcond}) that
\begin{eqnarray}
\langle \chi \rangle > \frac{\langle V' \rangle}{H^2}\,,
\end{eqnarray}
therefore keeping $\langle \chi \rangle$ small implies constraints on the potential $V$.
Referring to the second of
Eqs.~(\ref{chieom}), we see that for arbitrary $\langle\chi\rangle$ the
requirement that $\delta\chi$ remains Hubble friction dominated gives
\begin{eqnarray}
H^2\delta\chi \,>\, \dot{\phi}\langle\dot{\chi}\rangle \,,\quad
H^2\delta\chi \,>\, \delta V' \,,\quad
H^2\delta\chi \,>\, \phi \langle V' \rangle \,.\,\,
\label{chiweakmasscond2}
\end{eqnarray}
The first condition provides the constraint
$\langle\chi\rangle < \delta\chi/\phi$, with the evolution of $\phi\sim\zeta$
described in Appendix \ref{sec:pertanalysis}. It is sufficient to take
$\phi\sim 10^{-5}$, which also ensures that the homogeneous correction that
$\langle\chi\rangle$ provides to $m_S$ does not change $m_S$ by more than
order unity\footnote{In Appendix \ref{sec:pertanalysis} we find that after
freeze-out $\phi$ evolves as $\phi\sim (\rho_S/\rho)\,\zeta_{\rm f}$, where
$\zeta_{\rm f}\sim 10^{-5}$ is the final curvature perturbation. Thus if
we consider the scenario where $\chi$ fluctuations are transferred at
freeze-out and $\chi$ subsequently decays, we may take $\langle\chi\rangle$
to be constrained by $\phi^{-1}$ at freeze-out, which considerably weakens
the bounds in Eqs.~(\ref{newzeta}). However, in this case
$\langle\chi\rangle$ provides a homogeneous adjustment to $m_S$ which may
be much larger than $m_S$. This effect could then significantly alter the
constraints calculated in Section \ref{sec:Smodels}.}.
Through an analysis analogous to that above, we find the conditions of
Eqs.~(\ref{chiweakmasscond2}) constrain the level of {\em Gaussian}
fluctuations for the respective interactions of ${\cal L}_\chi$ and
${\cal L}'_\chi$ to
\begin{eqnarray}
\zeta_{\rm g} \,<\, \frac{\langle\chi\rangle}{\delta\chi}
\frac{H_{\rm inf}^2}{m_{\rm pl}^2}\,,\quad
\zeta_{\rm g} \,<\, \frac{1}{\lambda}\frac{\langle\chi\rangle}{\delta\chi}
\frac{H_{\rm inf}^2}{m_{\rm pl}^2} \,;\quad
\frac{\langle\chi\rangle}{\delta\chi} \,<\, \frac{1}{\phi} \,.
\label{newzeta}
\end{eqnarray}
The additional factor of $\langle\chi\rangle/\delta\chi$ significantly
weakens both bounds on Gaussian perturbations. This allows for greater
freedom in choosing $\mu_S$, $M_2$, $\lambda$, and/or $H_{\rm inf}$.
Non-Gaussian perturbations originate from the couplings quadratic in $\chi$. Taking these into account, the fluctuation resulting from
${\cal L}_\chi$ becomes
\begin{eqnarray}
\zeta \sim \delta_m = \frac{\alpha_S\delta\chi^2}{4m_S^2}
+ \frac{\alpha_S\langle\chi\rangle\delta\chi}{2m_S^2}
+ \frac{\mu_S\delta\chi}{2m_S^2} \,.
\end{eqnarray}
Note that the quadratic term $\sim \alpha_S$ also gives rise to a Gaussian contribution to $\zeta$. Thus, the non-Gaussian perturbations obey the relation
\begin{eqnarray}
\zeta_{\rm ng} \sim \frac{\alpha_S \, \delta \chi}{\mu_S + \alpha_S \langle \chi \rangle} \zeta_{\rm g}\,.
\label{zetangrelation}
\end{eqnarray}
Note that by taking $\alpha_S \to 0$ the non-Gaussian fluctuations can be made arbitrarily small. However, even if the $\alpha_S$ term in the potential dominates, the non-Gaussian fluctuations are always limited by
\begin{eqnarray}
\zeta_{\rm ng} < \frac{H_{\rm inf}^2}{m_{\rm pl}^2}\,,
\end{eqnarray}
as can be seen by combining Eqs.~(\ref{newzeta}) and~(\ref{zetangrelation}).
As mentioned before, WMAP sets the limit $H_{\rm inf}^2/m_{\rm pl}^2 \lesssim 10^{-8}$~\cite{cmb2}, thus the non-Gaussian fluctuations are at or below the current limits from WMAP~\cite{cmb3}. In addition, the observed Gaussian fluctuations can be produced by choosing $\delta \chi / \langle \chi \rangle$ appropriately. The second model with Lagrangian given in Eq.~(\ref{chifluctmodel2}) is less constrained since the factor of $\lambda$ weakens the constraint on $\delta \chi / \langle \chi \rangle$.
\section{Conclusions}
\label{sec:conclusions}
In \cite{DGZ} it was shown that fluctuations in the mass and the decay rate
of a heavy particle $S$, which at some point dominates the energy density
of the universe, lead to adiabatic density perturbations. In this scenario
it was assumed that the heavy particle decouples from radiation while it is
still relativistic.
In this work we have shown that if the heavy particle remains in thermal
equilibrium until it becomes non-relativistic, fluctuations in the
annihilation cross section of this particle with radiation lead to
additional sources of perturbations. We have presented two simple toy
models illustrating this effect. These additional fluctuations are generic,
unless the annihilation cross section is mediated by an additional particle
with mass exceeding $m_S$. If the $S$ particle is stable, for example if
$S$ is dark matter, then the resulting perturbations are non-adiabatic.
A simple analytical calculation determines the size of the density
perturbations from fluctuations in the mass, decay rate and annihilation
cross section. The fluctuations due to variations in the annihilation cross
section are shown to be of similar size as the ones generated from the
original DGZK mechanism. These results are checked numerically using
Boltzmann equations in conformal Newtonian gauge in
Appendix~\ref{sec:numerical}.
\begin{acknowledgments}
We would like to thank Mark Wise for collaboration at an early stage of
this work. This work was supported by the Department of Energy under the
contract DE-FG03-92ER40701.
\end{acknowledgments}
|
hep-ph/0502084
|
\section*{Abstract}
\abstracts{
We review recent important progress at the B factories and discuss the future prospects. We also comment on how we might proceed in the search for new physics.
}
\section{Introduction}
Much progress in $B$ physics has been achieved over the past few years.
Both KEK and SLAC have achieved their corresponding design luminosity
goals, and are working hard to surpass them.
The $B\rightarrow\psi K_S$ asymmetry has been discovered. The direct CP asymmetry
has been discovered in the $B\rightarrow K\pi$ decay. According to the Belle result,
the $B\rightarrow \pi\pi$ CP asymmetry shows direct CP violation.
First measurements of $\phi_2$ and $\phi_3$ have been made as well as polarization
studies of $B\rightarrow \phi K^*,~\rho\rho,~\rho K^*$.
In this note, we shall review important $B$ factory results and then discuss
the possibility of an upgrade.
\section{Selected achievements at $B$ factories}
\subsection{$\phi_1$}
\begin{figure}[b]
\centerline{
\includegraphics[height=5cm]{figure-1.eps}}
\caption{The unitarity triangle.
\label{fig:UT}}
\end{figure}
Who would have thought 5 years ago that we would now have a precision
measurement of the CP asymmetry in $B\rightarrow\psi K_S$ decay?
The first angle of the unitarity triangle shown
in Fig. \ref{fig:UT} to be measured was $\phi_1$:
\begin{eqnarray}
\sin 2\phi_1&=&+0.728\pm 0.056\pm 0.023
~~~~~~~{\rm Belle}\cite{phi1},\nonumber\\ \\
\sin 2\phi_1&=&+0.722\pm 0.040\pm 0.023
~~~~~~~{\rm BABAR}\cite{phi1b}.
\end{eqnarray}
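For orientation, a naive inverse-variance combination of these two numbers, with statistical and systematic errors added in quadrature, can be sketched as follows (this is an illustration only, not an official world average):

```python
import math

# Naive inverse-variance combination of the two sin(2*phi1) results quoted
# above, with statistical and systematic errors added in quadrature.
# Illustrative only -- not an official world average.
def combine(results):
    """results: list of (value, stat, syst) tuples."""
    w_sum, wx_sum = 0.0, 0.0
    for val, stat, syst in results:
        sigma2 = stat**2 + syst**2
        w_sum += 1.0 / sigma2
        wx_sum += val / sigma2
    return wx_sum / w_sum, 1.0 / math.sqrt(w_sum)

avg, err = combine([(0.728, 0.056, 0.023),   # Belle
                    (0.722, 0.040, 0.023)])  # BABAR
print(f"sin(2*phi1) = {avg:.3f} +/- {err:.3f}")
```

The combined error is indeed below 5\% of the central value, as stated below.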
\begin{figure}[htb]
\centerline{
\includegraphics[height=7cm]{fig2.eps}}
\caption{Belle and BABAR results on $B\rightarrow\pi\pi$ CP asymmetry. The points on the upper left side represent Belle data
and the points in the center represent BABAR data.
\label{fig:bd}}
\end{figure}
The error is now less than 5\%. While this is certainly enough to
declare the correctness of the
Kobayashi-Maskawa theory, it is not enough, if we want to use this
information to look for New Physics beyond the Standard Model.
It is worthwhile measuring it to the accuracy of 1\% as the
theoretical uncertainty in relating this asymmetry to $\phi_1$ is of
that order.
\subsection{$\phi_2$}
The next challenge is $\phi_2$, but we are not so lucky here. We have both tree and
penguin amplitudes contributing to the $B\rightarrow\pi\pi$ decay. Nevertheless,
it is of great interest to pursue the time dependent CP asymmetry:
\begin{equation}
\frac{\Gamma_{\pi^+\pi^-}(t)-\overline\Gamma_{\pi^+\pi^-}(t)}
{\Gamma_{\pi^+\pi^-}(t)+\overline\Gamma_{\pi^+\pi^-}(t)}
=A_{\pi^+\pi^-}\cos(\Delta Mt)+S_{\pi^+\pi^-}\sin(\Delta Mt),
\end{equation}
where
\begin{equation}
A_{\pi^+\pi^-}=\frac{|\bar\rho(\pi^+\pi^-)|^2-1}{|\bar\rho(\pi^+\pi^-)|^2+1}
~~~~~
S_{\pi^+\pi^-}=\Im\left(\frac{q}{p}\bar\rho(\pi^+\pi^-)\right).
\end{equation}
We can easily show that
\begin{equation}
|A_{\pi^+\pi^-}|^2+|S_{\pi^+\pi^-}|^2\le 1.
\end{equation}
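This bound can also be checked numerically. The sketch below assumes $|q/p|=1$ and uses the standard normalized asymmetries $A=(|\lambda|^2-1)/(|\lambda|^2+1)$ and $S=2\,\Im\lambda/(1+|\lambda|^2)$ with $\lambda=(q/p)\bar\rho(\pi^+\pi^-)$; the expression for $S$ above suppresses the normalization factor:

```python
import math, cmath, random

# Numerical check of |A|^2 + |S|^2 <= 1.  Assumes |q/p| = 1 and uses the
# standard normalized asymmetries with l = (q/p)*rho_bar; the expression
# for S in the text suppresses the normalization factor 2/(1+|l|^2).
random.seed(0)
for _ in range(10000):
    l = cmath.rect(random.uniform(0.0, 5.0), random.uniform(0.0, 2 * math.pi))
    A = (abs(l)**2 - 1) / (abs(l)**2 + 1)
    S = 2 * l.imag / (abs(l)**2 + 1)
    assert A**2 + S**2 <= 1 + 1e-12
print("bound holds for 10000 random amplitudes")
```

The bound is saturated, for instance, by a purely imaginary $\lambda$ of unit modulus.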
Fig.~\ref{fig:bd} shows both Belle and BABAR results\cite{piilonen}. While it is tempting to say that the
direct CP violation in $B\rightarrow\pi\pi$ (non-vanishing $A_{\pi^+\pi^-}$) has been discovered at
Belle, we feel that we should wait until their data comes within the circle.
Note that if it is established that the data point lies outside of the unit circle, it signals violation of quantum mechanics.
Both Belle and BABAR observe the $B\rightarrow\pi^0\pi^0$ decay:
\begin{eqnarray}
Br(B\rightarrow\pi^0\pi^0)&=&(1.17\pm 0.32\pm 0.10)\times 10^{-6}
~~~~~~~{\rm BABAR}\cite{pi0b},\nonumber\\ \\
Br(B\rightarrow\pi^0\pi^0)&=&(2.32\pm 0.48\pm 0.22)\times 10^{-6}
~~~~~~~{\rm Belle}\cite{pi0}.
\end{eqnarray}
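The two measurements differ somewhat; a naive comparison, treating the quoted errors as Gaussian and uncorrelated (a simplifying assumption), gives:

```python
import math

# Naive comparison of the two BR(B -> pi0 pi0) measurements above,
# treating the errors as Gaussian and uncorrelated (units of 1e-6).
babar, e_babar = 1.17, math.hypot(0.32, 0.10)
belle, e_belle = 2.32, math.hypot(0.48, 0.22)
nsig = (belle - babar) / math.hypot(e_babar, e_belle)
print(f"Belle-BABAR difference: {nsig:.1f} sigma")
```

so the two results are compatible at roughly the $2\sigma$ level.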
This is very encouraging, as an isospin analysis can now be done. This may be a
place where $B$ factories continue to have the edge even after the LHC turns on.
Certainly, Super $B$ luminosity should be defined as the luminosity which gives a 1\% measurement of $\phi_2$ using the isospin analysis.
\subsection{$\phi_3$}
The next challenge is $\phi_3$. One of the most promising ways is to
make use of the fact that we cannot tell whether the intermediate state
is $D^0K^\pm$ or $\bar D^0K^\pm$ when we observe $D,\bar D\rightarrow K_S\pi\pi$ decay products in the final state:
\begin{eqnarray}
B^\pm&\rightarrow& D^0K^\pm\rightarrow K_S\pi\pi K^\pm,\nonumber\\ \\
B^\pm&\rightarrow& \bar D^0K^\pm\rightarrow K_S\pi\pi K^\pm.
\end{eqnarray}
Then amplitudes for these decays interfere, generating CP violation. This method was first suggested in Ref.\cite{bigisanda}.
First results have been obtained:
\begin{eqnarray}
\phi_3&=&(77^{+17}_{-19}(stat)\pm 13(syst)\pm
11(model))^\circ~~~~~~~{\rm Belle}\cite{DK},\nonumber\\ \\
\phi_3&=&(88\pm 41(stat)\pm 19(syst)\pm 10(model))^\circ~~~~~~{\rm BABAR}\cite{DKb}.
\end{eqnarray}
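A naive combination of these two results can again be sketched numerically; the statistical, systematic, and model errors are added in quadrature, and Belle's asymmetric statistical error is symmetrized to $\pm 18^\circ$ (an illustration only):

```python
import math

# Naive combination of the two phi_3 results quoted above (stat, syst
# and model errors in quadrature; Belle's asymmetric statistical error
# is symmetrized to +/-18 degrees).  Illustrative only.
def sigma(*errs):
    return math.sqrt(sum(e * e for e in errs))

belle = (77.0, sigma(18.0, 13.0, 11.0))
babar = (88.0, sigma(41.0, 19.0, 10.0))
w = [1.0 / s**2 for _, s in (belle, babar)]
avg = (belle[0] * w[0] + babar[0] * w[1]) / (w[0] + w[1])
err = 1.0 / math.sqrt(w[0] + w[1])
print(f"phi_3 = ({avg:.0f} +/- {err:.0f}) degrees")
```

The current precision, roughly $20^\circ$, makes clear how far we are from a 1\% determination.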
Future progress in this method seems very promising.
We are getting into an era where we are starting to get results on the angles of the unitarity triangle.
We should compute the
required luminosity for the $B$ factory upgrade based on a 1\% determination of
$\phi_2$ and $\phi_3$.
\subsection{Direct CP asymmetries in $K\pi$ }
A large direct CP asymmetry in $B\rightarrow K\pi$ decay
has been predicted
in the PQCD method, and it has now been observed:
\begin{equation}
\frac{\,\mbox{BR}(\overline B\rightarrow K^-\pi^+)-\,\mbox{BR}(B\rightarrow K^+\pi^-)}
{\,\mbox{BR}(\overline B\rightarrow K^-\pi^+)+\,\mbox{BR}(B\rightarrow K^+\pi^-)}
=-0.113\pm 0.019.
\end{equation}
An asymmetry of similar size has been predicted
in $B^\pm\rightarrow K^\pm\pi^0$, but the actual
measurement gives:
\begin{equation}
\frac{\,\mbox{BR}(B^-\rightarrow K^-\pi^0)-\,\mbox{BR}(B^+\rightarrow K^+\pi^0)}
{\,\mbox{BR}(B^-\rightarrow K^-\pi^0)+\,\mbox{BR}(B^+\rightarrow K^+\pi^0)}
=0.04\pm 0.04.
\end{equation}
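A quick estimate of how inconsistent these two asymmetries are, treating the quoted errors as Gaussian and uncorrelated (a simplifying assumption):

```python
import math

# How different are the two K pi asymmetries quoted above?  A naive
# comparison, treating the errors as Gaussian and uncorrelated.
a_charged, e_charged = -0.113, 0.019   # K-+ pi+-
a_neutral, e_neutral = 0.04, 0.04      # K+- pi0
diff = a_neutral - a_charged
nsig = diff / math.hypot(e_charged, e_neutral)
print(f"difference = {diff:.3f}, about {nsig:.1f} sigma")
```

With these inputs the difference is at about the $3.5\sigma$ level.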
Theoretically, the fact that these asymmetries must be equal follows
rather generally if the color suppressed amplitudes
and electroweak penguin diagrams are small.
The measurements show that these amplitudes are not negligible and that they play an important role. If these amplitudes are important they may
also modify the $B\rightarrow\pi^0\pi^0$ decay rate. Details of
this type of analysis have been presented by Yoshikawa.
\section{New Physics searches}
\subsection{$B\rightarrow\phi K_S$}
In the Standard Model (SM), the amplitudes for $B\rightarrow\psi K_S$ and $B\rightarrow\phi K_S$
have equal phases. So, we expect $S_{\phi K_S}=S_{\psi K_S}=\sin(2\phi_1)$.
But, Belle obtained\cite{sakai}:
\begin{eqnarray}
S(\phi K^0)&=&+0.06\pm 0.33 \pm 0.09,\nonumber\\ \\
A(\phi K^0)&=&+0.08\pm 0.22 \pm 0.09.
\label{S}
\end{eqnarray}
Note that the Belle result for $S(\phi K^0)$ is dramatically different
from the previous result, $S(\phi K^0)=-0.96\pm 0.5^{+0.09}_{-0.06}$\cite{HFAG}. The change arose because
their new measurement with the new vertex detector yielded $S(\phi K^0)=+0.78\pm 0.45$; averaging all the data, they obtained the value shown in (\ref{S}). While the data taken with the new vertex detector
yield roughly $\sim \sin(2\phi_1)$, as
expected from the SM, and a Monte Carlo study shows
that the probability of such a sign flip-flop is about
4.5\%, it is nevertheless mind
boggling.
The result of Eq. (\ref{S}) is off from the
SM prediction by about $2.2 \sigma$. One of the authors (AIS) is reminded of what
Professor Wong-Young Lee told him once when he was a postdoc at Columbia.
He said, ``A $3\sigma$ effect goes away half of the time!'' So, we would wait
until there is more convincing data before we tell ourselves that New Physics
has been discovered.
But, depending on the confidence of the experimentalists,
this discrepancy should be a major motivation for building the
$B$ factory upgrade.
\begin{figure}[t]
\vspace{-3cm}
\centerline{
\includegraphics[height=10cm]{figure-2.eps}}
\caption{Suppose there is NP which contributes to $M_{12}$. The experimental measurement of the $\psi K_S$ asymmetry gives $\phi$ as shown in (A). Let us
entertain an extreme situation where $\phi_3$ is negative. Then the unitarity triangle is located
below the horizontal axis as shown here. Figure (B) shows the relationship (see Eq. (9)) between the vector representing the observed asymmetry, $e^{2i\phi}$,
the vector representing NP, $R_{NP}e^{2i\theta}$, and the vector representing the SM
contribution, $R_{SM}e^{2i\phi_1}$.
\label{fig:RNP}}
\end{figure}
\subsection{Dilepton asymmetries in $B\overline B\rightarrow l^\pm l^\pm +anything$}
If there is New Physics (NP), it might first show up in $\Delta M$.
Obviously, when we search for NP, the SM contribution is
the background. Since $\Delta M$ is of the second order in the weak
interaction, it may
be easier to observe NP contributions here.
We define\cite{xing}:
\begin{equation}
M_{12}=M_{12}^{SM}+M_{12}^{NP}\equiv\frac{\Delta M}{2}
\left(R_{SM}e^{2i\phi_1}+R_{NP}e^{2i\theta}\right)
=\frac{\Delta M}{2}e^{2i\phi},
\end{equation}
where $M_{12}^{NP}$ is the NP contribution to $M_{12}$.
The dilepton CP asymmetry is given by\cite{bigisanda2}:
\begin{eqnarray}
A_{SL} \equiv \frac{N^{++}-N^{--}}{N^{++}+N^{--}}
&=&\Im\frac{\Gamma_{12}}{M_{12}}\nonumber\\ \\
&=&r~\Im\left(\frac{V_{ub}V^*_{ud}+V_{cb}V^*_{cd}}{V_{tb}V_{td}^*+\frac{R_{NP}}{R_{SM}}|V_{tb}V_{td}^*|e^{2i\theta}}\right)^2+{\cal O}\left(r\frac{m_c^2}{m_b^2}\right),
\end{eqnarray}
where $r={\cal O}(10^{-3})$ is computed in the SM.
If $R_{NP}$ is not present, the unitarity constraint
of the KM matrix forces the leading term to vanish and the asymmetry is
${\cal O}\left(r\frac{m_c^2}{m_b^2}\right)$.
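The size of this residual SM contribution can be estimated in one line; the quark masses below are typical illustrative values, not taken from the text:

```python
# Order-of-magnitude estimate of the SM dilepton asymmetry,
# A_SL ~ r * (m_c/m_b)^2 with r = O(10^-3).  The quark masses are
# typical illustrative values, not taken from the text.
r = 1e-3
m_c, m_b = 1.3, 4.2   # GeV (assumed)
a_sl_sm = r * (m_c / m_b)**2
print(f"A_SL(SM) ~ {a_sl_sm:.0e}")
```

consistent with the $O(10^{-4})$ SM size quoted below.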
The actual computation of $\frac{\Gamma_{12}}{M_{12}}$ may be tricky as it may receive substantial contributions from long distance effects. Here we assume that contributions from intermediate states with $\alpha \beta$ ($\alpha,\beta=u~{\rm or}~c$) quarks appropriately average the long distance effects and give a sufficiently good approximation.
The fraction $\frac{\Gamma_{12}}{M_{12}}$ has been computed including the next to leading order QCD corrections\cite{BBLN}.
Write the contribution to $\frac{\Gamma_{12}}{M_{12}}$ from the box diagram
whose internal lines are $(\alpha,\beta)$ quarks as
\begin{equation}
F_{12}^{\alpha\beta}(V_{\alpha b}V_{\alpha d}^*)(V_{\beta b}V_{\beta d}^*).
\end{equation}
Then the result is given as\footnote{The actual expression for $F_{12}^{\alpha\beta}$ is given in Ref.\cite{BBLN}.}:
\begin{eqnarray}
\frac{\Gamma_{12}}{M_{12}}
&=& \frac{(V_{tb}V_{td}^*)^2}{M_{12}^{SM}}
\left[-F^{cc}_{12}+2(F^{uc}_{12}-F^{cc}_{12})
\frac{V_{ub}V_{ud}^*}{V_{tb}V_{td}^*}
+(2F^{uc}_{12}-F^{cc}_{12}-F^{uu}_{12})
\frac{(V_{ub}V_{ud}^*)^2}{(V_{tb}V_{td}^*)^2}\right].
\label{G12/M12}
\end{eqnarray}
The dilepton CP asymmetry is written as a function of $\phi_1$ as follows:
\begin{eqnarray}
A_{SL}
&=& \Im\left\{ \frac{\Gamma_{12}}{M_{12}^{SM}} \right\} R_{SM} \cos 2(\phi - \phi_1)
- \Re\left\{ \frac{\Gamma_{12}}{M_{12}^{SM}} \right\} R_{SM} \sin 2(\phi - \phi_1)
\label{asl}
\end{eqnarray}
The KM factors in $\Gamma_{12}/M_{12}^{SM}$ and $R_{SM}$ can also be
written as functions of $\phi_1$. In the SM, $\phi_1$ is the
same as the angle $\phi$ measured by the CP asymmetry in
$B\rightarrow \psi K_S$, so that only the
first term in Eq.~(\ref{asl}) contributes; it comes from the imaginary part of
the second and third terms in Eq.~(\ref{G12/M12}), which vanishes in the limit $m_u=m_c$.
The SM contribution is roughly $10^{-4}$.
The presence of $R_{NP}$ spoils the cancellation and
the second term in Eq.~(\ref{asl}) becomes non-vanishing. In this case, the CP asymmetry may become as large as a few \%.
If this asymmetry is measured to be much larger than
${\cal O}(10^{-4})$, it implies the presence of NP.
The best limit on this asymmetry is given by Belle\cite{dilepton}:
\begin{equation}
\frac{N^{++}-N^{--}}{N^{++}+N^{--}}=(-0.13\pm 0.60 \pm 0.56)\%.
\end{equation}
It is interesting to note that
$M_{12}^{NP}$ does not have to be complex. The presence of $R_{NP}$,
which means there may be a difference between $\phi_1 $ and $\phi $,
spoils the cancellation of the KM phase, leading to the asymmetry.
\begin{figure}[h]
\centerline{
\rotatebox{0}{\includegraphics[height=5cm]{ASL-PHI3-2Figs.ps}}}
\caption{Left: the allowed region of the dilepton CP asymmetry $A_{SL}$
as a function of $\phi_1$ in the SM, taking into account the constraint
$\phi_3 = 77^\circ \pm 25^\circ$\cite{DK}. Right: the corresponding
allowed region for $\phi_3$. The dotted lines show the bounds from the
experimental data on $\phi_3$, which carry a four-fold ambiguity. The
regions delimited by the thick (thin) lines on the left correspond to the
bounds shown by the thick (thin) dotted lines on the right. The dotted
line in the left figure shows the experimental bound on $A_{SL}$ from
Belle.}\label{fig:ASL}
\end{figure}
In Fig.~\ref{fig:RNP}(A) we show an example of how the $\rho-\eta$ plot gets modified by a non-vanishing $R_{NP}$. The
CP asymmetry in $B\rightarrow\psi K_S$ determines $\phi$.
For an illustration, let us consider the remote
possibility that $\phi_3$ turns out to be negative.
Then we have the situation depicted in this figure. Fig.~\ref{fig:RNP}(B) gives the required
$R_{NP}e^{i\theta}$.
In Fig.~\ref{fig:ASL}, the allowed region for $A_{SL}$ is shown as a
function of $\phi_1$ under the constraint on $\phi_3$. The measurement of
$\phi_3$ carries a four-fold ambiguity, and the experimental bounds from
$\phi_3= 77^\circ \pm 25^\circ$\cite{DK}, including this ambiguity, are
plotted as dotted lines in the right figure. Taking the constraint on
$\phi_3$ into account, $A_{SL}$ is plotted as a function of $\phi_1$ in the
left figure; the regions delimited by the thick (thin) lines there follow
from the constraints on $\phi_3$ shown by the thick (thin) dashed lines in
the right figure. These figures show that combining the constraints from
$A_{SL}$ and $\phi_3$ can reduce the parameter space for NP, and that a
more accurate measurement will help to resolve the ambiguity in $\phi_3$.
Further improvement of the
upper limit is strongly encouraged.
\subsection{Lepton number violation}
We now know that there is neutrino mixing: lepton flavor number is violated.
This may show up in $\tau\rightarrow e\gamma$, $\tau\rightarrow \mu\gamma$, $\tau\rightarrow 3\mu$,
$\tau\rightarrow 3e$, $\tau\rightarrow e\mu\mu$, etc. Belle has already obtained the following 90\% CL limits\cite{leptonv}:
\begin{eqnarray}
Br(\tau\rightarrow \mu\gamma)&<&3.1\times 10^{-7},\nonumber\\ \\
Br(\tau\rightarrow e\gamma)&<&3.8\times 10^{-7}.
\end{eqnarray}
It is not so unrealistic to expect that these lepton number violating
processes are actually observed in the near future.
It has been customary to study quark physics and lepton physics separately. Since we have found that lepton number is not conserved, it is perhaps advantageous
to study the quark system and the lepton system
in a unified manner. Searching for lepton number violation
in $B$ decays,
such as $B\rightarrow\tau\mu$ and $B\rightarrow 3\mu$, is a good example of this unification.
\begin{table}[h]
\caption{Examples of lepton number violating decays. Lepton number violation may very well show up in $B$ decays.} \vspace{1mm}
\small
\begin{center}
\begin{tabular}{|c|c|}
\hline
{\rm Quark~ physics}&$B\rightarrow\tau\mu,~ B\rightarrow 3\mu,$~ etc.\\
\hline
{\rm Lepton ~physics}&$\mu\rightarrow e\gamma,~ \tau\rightarrow 3\mu,$~ etc. \\
\hline
\end{tabular}
\vspace{-1mm}
\end{center}
\end{table}
\section*{Conclusion}
Much exciting flavor physics with $B$ and $\tau$ decays remains to be uncovered. We hope that Belle and BABAR come to an agreement on the $A_{\pi\pi}$ and $S_{\pi\pi}$ measurements. This should be followed by first results on the
isospin analysis for $B\rightarrow\pi\pi$ decays. A theoretical understanding of the CP asymmetry for $B^\pm\rightarrow K^\pm\pi^0$ decay must be achieved. It is likely that the CP
asymmetry for $B\rightarrow\phi K_S$ will show new physics
at a level below 5\%, as opposed to the 50--100\%
level. The dilepton asymmetry will put nontrivial constraints on new physics in the near future. Lepton number violation may be around the corner.
\section*{Acknowledgments}
We acknowledge support from the Japan Society for the Promotion of Science,
Japan-US collaboration program, and a grant from Ministry of Education, Culture, Sports,
Science and Technology of Japan. The work of T.Y. was supported by 21st Century COE Program of Nagoya University.
\bigskip
|
gr-qc/0502109
|
\section{Introduction}
Causality plays an important role in the brane-world
scenario. Null geodesics passing through the extra
dimension may connect points in the lower dimension which are
causally disconnected in terms of the null geodesics confined to
the brane \cite{Chung}. Thus instead of inflation, such null
geodesics could be used to solve the cosmological horizon problem.
The condition for the existence of such ``causality violating'' null
geodesics, which pass through the $AdS_5$ bulk, is merely a
deviation from a pure tension-like stress-energy tensor, i.e.
provided that the energy density $\rho_b$ and the isotropic
pressure $p_b$ of the brane satisfy \cite{Ishi01}
\[
\rho_{b}+p_{b}>0,
\]
then the extrinsic curvature bends the brane concave towards the
bulk, allowing the existence of such null geodesics.
Initially most authors considered these ``shortcuts'' through the
bulk for a static cosmological brane (brane based approach). They
made use of the most general metric (see \cite{BinLang,BinDef}) of the form
\begin{equation*}
ds^{2}=-n^{2}\left( t ,y\right) dt ^{2}+a^{2}\left( t ,y\right)
\gamma _{\mu \nu }dx^{\mu }dx^{\nu }+b^{2}\left( t ,y\right)
dy^{2}.
\end{equation*}
Null bulk geodesic equations of motion have been computed for static and
cosmological single-brane models (see \cite{Youm01,Davis,Abdalla02}). It was found that
although there is effective acausality in some cases, there is no \textit{a priori} solution to the horizon
problem.
A more realistic scenario was found by considering a dynamical
brane (bulk based approach). It has been shown that the
motion of a domain wall in the bulk can be interpreted as cosmological
expansion or contraction on the wall (see \cite{Kraus}). A generalization of Birkhoff's
theorem can be used to show that a fixed wall embedded in a non-static spacetime
is strictly equivalent to a moving wall embedded in a
static spacetime, thus establishing in full generality
the equivalence between the two approaches (see \cite{Bowcock}). Solutions have also been
found for higher dimensional dynamic domain walls that couple to bulk matter \cite{Chamblin}.
Causality can now be investigated in a different way. The
trajectories of the null geodesics and the brane's equations of
motion in the bulk are first obtained. These are then plotted on
the same set of axes, with both curves having the same initial
conditions. If the curves intersect again, then it can be said
that two points are causally connected through the bulk. The
brane's equations of motion for a FRW brane in an $AdS_5$ bulk
(considering only the tension of the brane and not the matter
on it) were obtained in \cite{Kraus}. Shortcuts through the bulk
could thus be investigated from the dynamical or bulk based
approach (see e.g.\cite{Caldwell,Abdalla05}). A FRW brane embedded
in a $AdS_5$ was used and it was found that although the distance
covered by a graviton passing through the bulk is longer than the
corresponding distance on the brane, it is not enough to solve the
horizon problem \cite{Caldwell}. Shortcuts have also been
investigated in various other models, such as six dimensional
brane worlds \cite{Abdalla02b}, $(D-2)$-branes (domain walls)
\cite{Abdalla03b}, and nonlinear dynamical universes
\cite{Melgar04}.
In none of the references above (nor in any other reference to our knowledge)
has the condition of escape been determined for a graviton to
actually leave the brane. So far most authors have assumed that
gravitons escape freely into the bulk. If this were the case, why
then don't all the gravitons escape into the bulk? This is the
main question which we will try to answer in this paper. We will
attempt to determine, firstly, whether null geodesics return to the
brane after they have escaped and, secondly, the condition for a
graviton to escape from the brane.
\section{Null geodesic equations}
We start with the 5-dimensional anti-de Sitter Schwarzschild
metric
\begin{equation}
ds^{2}=-f_{k}(R)dT^{2}+f_{k}(R)^{-1}dR^{2}+R^{2}d\Sigma _{k}^{2}
\label{AdSS5 metric general}
\end{equation}
where $T$ is the global bulk time, $R$ is the bulk dimension and $d\Sigma
_{k}$ is the metric of a maximally symmetric three-dimensional space
($k=0$ for a flat space, $k=1$ for a three-sphere and $k=-1$ for a
hyperbolic three-space). The function $f_{k}(R)$ is given by
\begin{equation}
f_{k}(R)=k+\frac{R^{2}}{\ell^{2}}-\frac{\mu}{R^2} \label{function
fk}
\end{equation}
where $\ell\equiv \sqrt{\frac{-6}{\Lambda}}$ is the anti-de Sitter
radius of curvature, $\Lambda$ is the negative bulk cosmological
constant (i.e. $\Lambda<0$) and $\mu$ is the five dimensional
Schwarzschild-like mass. Since the third term in the equation above scales like radiation, $\mu$
is sometimes called the dark radiation or Weyl parameter. The solutions are anti-de Sitter when
$\mu=0$ and when $\mu\neq 0$ we have a bulk black hole with a
horizon at
\begin{equation}
R_h^2=\frac{\ell^2}{2}\left(-k+\sqrt{k^2+\frac{4\mu}{\ell^2}}\right).
\label{horizon}
\end{equation}
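As a cross-check, one can verify numerically that this expression is a root of $f_k(R)$ for each curvature; the values of $\ell$ and $\mu$ below are arbitrary illustrative choices:

```python
import math

# Check that R_h from Eq. (horizon) is a root of f_k(R) in
# Eq. (function fk) for each curvature k.  l and mu are arbitrary
# illustrative values.
l, mu = 1.0, 0.5

def f(R, k):
    return k + R**2 / l**2 - mu / R**2

for k in (-1, 0, 1):
    Rh = math.sqrt(l**2 / 2 * (-k + math.sqrt(k**2 + 4 * mu / l**2)))
    assert abs(f(Rh, k)) < 1e-9
print("R_h is a root of f_k for k = -1, 0, +1")
```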
A spherical coordinate system $(r,\theta ,\phi )$ centered at a
point A can now be introduced on the brane, so that any path can be
described by a radial geodesic \cite{Caldwell}. Since the geodesics are radial we may ignore
the angular variables $\theta $ and $\phi $, so that we are left
with a three dimensional problem with the resulting metric being
given by
\begin{equation}
ds^{2}=-f_{k}(R)dT^{2}+f_{k}(R)^{-1}dR^{2}+R^{2}dr^{2}.
\label{3d adss5 metric}
\end{equation}
The geodesics may be easily obtained by using the symmetries to
obtain the Killing vectors, namely $\left(\frac{\partial}{\partial
T}\right)^a$ and $\left(\frac{\partial}{\partial r}\right)^a$ (see
\cite{Caldwell} and \cite{Misner}). Then the vector tangent to
the geodesics, $p^a$, is given by
\begin{equation}
p_T=-f_{k}(R)\frac{dT}{d\lambda }=-E\quad (\mbox{a constant}) \label{pT}
\end{equation}
and
\begin{equation}
p_r=R^{2}\frac{dr}{d\lambda }=P\quad (\mbox{a constant}) \label{pr}
\end{equation}
where $\lambda$ is an affine parameter that relates each of the
three parameters $(R,T,r)$ to one another. Both $E$ and $P$ are
positive constants which are similarly defined as the ``energy at
infinity" $E$ and ``angular momentum" $L$ in standard general
relativity (see \cite{Misner}).
For a null geodesic we furthermore have
\begin{equation}
\left( \frac{dR}{d\lambda }\right)
^{2}=E^{2}-P^{2}\frac{f_{k}(R)}{R^{2}} \label{dR/dlam}
\end{equation}
Equations (\ref{pT}), (\ref{pr}) and (\ref{dR/dlam}) may be rewritten in the standard GR form of \cite{Misner}
by setting $\tilde{\lambda}=P\lambda$ so that the
full set of geodesic equations are
\begin{eqnarray}
\frac{dT}{d\tilde{\lambda}}&=& \frac{E}{P f_k(R)}, \label{geodesic
T} \\
\frac{dr}{d\tilde{\lambda}}&=& \frac{1}{R^{2}}
\label{geodesic r}, \\
\left(\frac{dR}{d\tilde{\lambda}}\right) ^2&=&
\frac{E^2}{P^2}-\frac{f_k(R)}{R^{2}}. \label{geodesic R null}
\end{eqnarray}
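One can verify numerically that the tangent vector defined by these three equations is indeed null with respect to the metric (\ref{3d adss5 metric}); the parameter values below are illustrative:

```python
# Numerical check that the tangent vector defined by the three geodesic
# equations above is null in the metric of Eq. (3d adss5 metric).
# Parameter values are illustrative.
l, mu, k = 1.0, 0.3, 1
E_over_P = 2.0

def f(R):
    return k + R**2 / l**2 - mu / R**2

for R in (0.9, 1.5, 3.0):
    dT = E_over_P / f(R)                      # dT/dlambda~
    dr = 1.0 / R**2                           # dr/dlambda~
    dR_sq = E_over_P**2 - f(R) / R**2         # (dR/dlambda~)^2
    ds2 = -f(R) * dT**2 + dR_sq / f(R) + R**2 * dr**2
    assert abs(ds2) < 1e-9
print("ds^2 = 0 along the geodesic tangent")
```

The cancellation is exact: $-E^2/(P^2 f) + E^2/(P^2 f) - 1/R^2 + 1/R^2 = 0$.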
\section{Brane motion}
We start by introducing the proper time $t$ on the brane, which is
defined by \cite{Caldwell} as
\begin{equation}
dt^2=f(R_b)dT^2-f(R_b)^{-1}dR_b^2 \label{dT/dt 2}
\end{equation}
so that
\begin{equation}
\frac{dT}{dt}=\frac{\sqrt{f(R_b)+\dot{R}_b^2}}{f(R_b)}
\label{dT/dt}
\end{equation}
where $\dot{R}_b=\frac{dR_b}{dt}$. $R_b (t)\equiv a(t)$ is the
usual cosmological scale factor in the brane-world which can be
obtained from the modified Friedmann equation which is determined
from the junction conditions.
The Israel junction conditions \cite{Israel} relate the
discontinuity in the second fundamental form (extrinsic curvature)
through the brane and the energy momentum tensor in the brane. If
$K^+_{ij}$ and $K^-_{ij}$ are the extrinsic curvatures on
the two sides of the brane, then the junction conditions for a $Z_2$ symmetric brane are \cite{Ishi01}
\begin{equation}
K^+_{ij}=-K^-_{ij}=-\frac{1}{2}\kappa^2_{5}\left(S_{ij}-\frac{1}{3}h_{ij}
S\right). \label{junction cond brane adss}
\end{equation}
The extrinsic curvature of the hypersurface can be computed using
the unit norm vector (see e.g. \cite{Abdalla05,Brax03a,Brax03}).
If we assume an isotropic distribution of matter on the brane,
then the energy momentum tensor $S_{ij}$ in the brane is given by
\begin{equation}
S_{ij}=\rho_b u_i u_j +p_b\left(h_{ij}+u_i u_j\right),\label{Sij
isotropic}
\end{equation}
where $\rho_b$ is the energy density and $p_b$ is the isotropic
pressure of the brane.
From the junction conditions above one obtains two equations governing the evolution of the brane (see \cite{Kraus,Bowcock,BinLang}):
\begin{equation}
\frac{d\rho_b}{dt}=-3\frac{\dot{R}_b}{R_b}\left(\rho_b+p_b\right),
\label{jc isomat 2}
\end{equation}
and
\begin{equation}
\frac{\dot{R}^2_b}{R_b^2}+\frac{f(R_b)}{R_b^2}=\frac{\kappa^4_{5}
\rho_b^2}{36}. \label{jc isomat 1}
\end{equation}
We require an equation of state that relates the matter
density $\rho$ and pressure $p$ on the brane. We follow convention
by setting
\begin{equation*}
\rho_b=\rho +\sigma, \ \ \ \ p_b=p-\sigma,
\end{equation*}
where $\sigma$ is the tension of the wall. The equation of state
may then be written
\begin{equation*}
p=\left(\gamma -1\right)\rho.
\end{equation*}
Substitution then yields the conservation equations as
\begin{equation}
\frac{d\rho}{dt}=-3\frac{\dot{R}_b}{R_b}\left(\rho+p\right),
\label{cons eqn brane}
\end{equation}
with the solution
\begin{equation*}
\rho=\rho_0\left(\frac{R_0}{R_b}\right)^{3\gamma} \label{sol cons
eqn}
\end{equation*}
where the subscript $0$ denotes the value at some given time. Similarly we may
obtain the modified Friedmann equation of \cite{BinDef} in the form
\begin{equation}
\dot{R}_b^2+\left(1-\left(\frac{\rho+\sigma}{\sigma_c}\right)^2\right)\frac{R_b^2}{\ell^2}-\frac{\mu}{R_b^2}=-k,
\label{matter FRW eqn}
\end{equation}
where $\sigma_c\equiv3/(4\pi G\ell^2)=6/(\kappa^2_{5} \ell)$ is
the critical tension for fine tuning. The cosmological constant in
the brane then has the form
\begin{equation*}
\frac{\Lambda_4}{3}=\frac{\kappa^4_{5}\sigma^2}{36}-\frac{1}{\ell^2}=\frac{1}{\ell^2}\left(\frac{\sigma^2}{\sigma_c^2}-1\right).
\end{equation*}
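As a consistency check, the sketch below verifies numerically that Eq.~(\ref{jc isomat 1}) and the modified Friedmann equation (\ref{matter FRW eqn}) vanish on the same solutions, and that the fine-tuned tension $\sigma=\sigma_c$ gives $\Lambda_4=0$; all parameter values are illustrative:

```python
# Consistency check: Eq. (jc isomat 1) and the modified Friedmann
# equation (matter FRW eqn) vanish on the same solutions, and the
# fine-tuned tension sigma = sigma_c gives Lambda_4 = 0.  All values
# are illustrative; kappa_5 is fixed by sigma_c = 6/(kappa_5^2 l).
l, mu, k = 1.0, 0.2, 1
sigma_c = 3.0
kappa5_sq = 6.0 / (sigma_c * l)
rho, sigma = 0.4, sigma_c            # fine-tuned tension

R, Rdot = 2.0, 0.7                   # a sample point (R_b, dR_b/dt)
f = k + R**2 / l**2 - mu / R**2
lhs1 = Rdot**2 / R**2 + f / R**2 - kappa5_sq**2 * (rho + sigma)**2 / 36
lhs2 = Rdot**2 + (1 - ((rho + sigma) / sigma_c)**2) * R**2 / l**2 - mu / R**2 + k
assert abs(lhs2 - R**2 * lhs1) < 1e-12   # same equation up to a factor R^2

Lambda4_over_3 = (sigma**2 / sigma_c**2 - 1) / l**2
assert abs(Lambda4_over_3) < 1e-15       # Lambda_4 = 0 when sigma = sigma_c
```

The two forms agree because $\kappa_5^4/36 = 1/(\sigma_c^2\ell^2)$ by the definition of $\sigma_c$.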
\section{Graviton emission and capture}
\subsection{Potential diagrams}
In the case of a domain wall there is no matter on the wall, so we do not expect the escaping gravitons to be attracted back to the brane. A more realistic scenario is a brane with isotropically distributed matter. Analytically computing trajectories of the moving brane with matter is difficult, so we make use of potential diagrams to study whether gravitational signals will return to the brane. In classical GR, potential diagrams are useful for the
qualitative analysis of orbits of particles and photons in the Schwarzschild geometry. Potential diagrams have been used before to investigate Lorentz violations in an $AdSS_5$ bulk background \cite{Csaki}. Here we extend that analysis to brane motion and graviton motion in $AdS_5$ and $AdSS_5$.
We start by rewriting equation (\ref{geodesic R null}) as
\begin{equation}
\left(\frac{dR}{d\tilde{\lambda}}\right) ^2+B^2(R)=b^2,
\label{geodesic R potential}
\end{equation}
where
\begin{equation}
B^2(R)=\frac{f_k(R)}{R^{2}} \label{effective poten}
\end{equation}
defines a kind of ``effective potential of a graviton'', and
\begin{equation}
b^2=\frac{E^2}{P^2} \label{impact param}
\end{equation}
defines the ``impact parameter''.
The potential $B(R)$ has the following useful
interpretation: for a graviton to reach the point $R$, it must have
an impact parameter $b$ greater than or equal to $B(R)$, i.e.
\begin{equation}
b\geq B(R). \label{cond. acces}
\end{equation}
This is referred to as the \textit{condition of accessibility} \cite{Misner}.
\vspace{5mm}
\noindent \underline{$AdS_5$ ($\mu=0$)}: The effective potentials (\ref{effective poten}) for
the various values of $k$ are
\begin{eqnarray}
B^2(R)&=& \frac{1}{\ell^{2}}+\frac{1}{R^{2}}, \ \ \ \ k=+1\\
B^2(R)&=&\frac{1}{\ell^{2}}, \ \ \ \ \ \ \ \ \ \ \ \ k=0 \\
B^2(R)&=&\frac{1}{\ell^{2}}-\frac{1}{R^{2}}, \ \ \ \ k=-1
\end{eqnarray}
and are plotted in Figure 1. Clearly none of the potential curves
have turning points, i.e.\ if a graviton escapes from the brane
(domain wall or brane with matter) into an anti-de
Sitter bulk, it will not return to the brane. For $k=-1$ there is a
horizon at $R=\ell$.
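This monotonicity is elementary but easy to verify numerically. The following sketch (not part of the original analysis; the value $\ell=1$ is an assumption chosen purely for illustration) checks that none of the three $AdS_5$ potentials has an interior extremum.

```python
# Numerical sanity check (illustrative; ell = 1 is an assumed value): the
# AdS_5 effective potentials B^2(R) = 1/ell^2 + k/R^2 (mu = 0) are
# monotone in R for each k, so none of them has a turning point.
ell = 1.0  # AdS curvature radius (assumption for illustration)

def B2(R, k, mu=0.0):
    # B^2(R) = f_k(R)/R^2 with f_k(R) = k + R^2/ell^2 - mu/R^2
    return 1.0 / ell**2 + k / R**2 - mu / R**4

# sample outside the k = -1 horizon at R = ell
Rs = [1.5 + 0.05 * i for i in range(1000)]
for k in (+1, 0, -1):
    diffs = [B2(b, k) - B2(a, k) for a, b in zip(Rs, Rs[1:])]
    # a turning point would make the increments change sign; they never do
    assert not (any(d > 1e-15 for d in diffs) and any(d < -1e-15 for d in diffs))
```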
\begin{figure}[tbp]
\begin{center}
\epsfig{file=potAds.eps, scale=0.9}
\caption{Potential diagram for $AdS_5$ ($\mu=0$) bulk geometry.}
\end{center} \label{Poten Diagram mu=0}
\end{figure}
\noindent \underline{$AdSS_5$ ($\mu\neq0$)}: The effective potentials (\ref{effective
poten}) for the various values of $k$ are
\begin{eqnarray}
B^2(R)&=& \frac{1}{\ell^{2}}+\frac{1}{R^{2}}-\frac{\mu}{R^4}, \ \ \ \ k=+1\\
B^2(R)&=&\frac{1}{\ell^{2}}-\frac{\mu}{R^4}, \ \ \ \ \ \ \ \ \ \ \ \ k=0 \\
B^2(R)&=&\frac{1}{\ell^{2}}-\frac{1}{R^{2}}-\frac{\mu}{R^4}, \ \ \
\ k=-1
\end{eqnarray}
The case $k=+1$ is plotted in Figure 2. Both the
$k=0$ and $k=-1$ cases look similar to the $k=-1$, $\mu=0$ case (see Figure 1), i.e.\ they have
no turning points. The horizons for all values of $k$ are given by
(\ref{horizon}).
\begin{figure}[tbp]
\begin{center}
\epsfig{file=potsch1.eps, scale=0.9}
\caption{Potential diagram for $\mu\neq 0$ and $k=+1$.}
\end{center} \label{Poten Diagram black k=+1}
\end{figure}
\subsection{Condition of return}
The turning point in Figure 2 is at
\begin{equation}
R_t=\left(2\mu\right)^\frac{1}{2} \label{Rt}
\end{equation}
which implies a maximum effective potential of
\begin{equation*}
B^2(R_t)=\frac{1}{\ell^{2}}+\frac{1}{4\mu}.
\end{equation*}
Combining this with the condition of accessibility (\ref{cond.
acces}), we can summarize the qualitative features of the
trajectories of gravitons emitted in the $AdSS_5$ bulk with $k=+1$ (see \cite{Misner}) as follows:
When a graviton is emitted near the horizon $R_h$ (i.e. at some point $R_{em}$, $R_h<R_{em}<R_t$), it will:
\begin{enumerate}
\item escape to infinity when
\begin{equation}
b>\sqrt{\frac{1}{\ell^2}+\frac{1}{4\mu}};
\end{equation}
\item or return when
\begin{equation}
0<b<\sqrt{\frac{1}{\ell^2}+\frac{1}{4\mu}}. \label{cond.ret}
\end{equation}
\end{enumerate}
Equation (\ref{cond.ret}) in effect gives us the \textit{condition of return}
for a graviton emitted near the bulk horizon $R_h$. The lower limit arises from the fact that both
$E$ and $P$ are positive constants, and hence $b>0$. Clearly we can only expect returning
gravitons at early times, when $R_{em}<R_t$ and the effective potential is increasing.
When a graviton is emitted at $R_{em}>R_t$ it will escape to infinity (since $B(R)$ is decreasing there).
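The turning-point algebra above can be checked numerically. In the sketch below (not from the paper; the values $\ell=1$, $\mu=0.5$ are assumptions chosen for illustration) we verify that the $k=+1$ potential peaks at $R_t=\sqrt{2\mu}$ with height $1/\ell^2+1/(4\mu)$.

```python
# Check (illustrative values): for k = +1, mu > 0 the potential
# B^2(R) = 1/ell^2 + 1/R^2 - mu/R^4 has its maximum at R_t = sqrt(2 mu)
# with height B^2(R_t) = 1/ell^2 + 1/(4 mu).
import math

ell, mu = 1.0, 0.5  # assumed parameter values

def B2(R):
    return 1.0 / ell**2 + 1.0 / R**2 - mu / R**4

def dB2(R):  # derivative d(B^2)/dR = -2/R^3 + 4 mu/R^5
    return -2.0 / R**3 + 4.0 * mu / R**5

R_t = math.sqrt(2.0 * mu)
assert abs(dB2(R_t)) < 1e-12                                     # critical point
assert abs(B2(R_t) - (1.0 / ell**2 + 1.0 / (4.0 * mu))) < 1e-12  # height
assert B2(R_t) > B2(0.9 * R_t) and B2(R_t) > B2(1.1 * R_t)       # a maximum
```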
\subsection{Conditions of escape}
For a gravitational signal to escape from the brane, it must move
faster than the brane with respect to an observer in the bulk.
Thus the relative velocity of the gravitational signal $v_g$ must
be greater than the relative velocity of the brane $v_b$ at the point of emission $R_{em}$. Therefore, we now seek to obtain the relative velocities of the
gravitational signal and the brane.
We first obtain the relative velocity of the gravitational signal
by dividing equation (\ref{geodesic T}) into equation
(\ref{geodesic R null}), and then taking the square root to find
\begin{equation}
v_g=\frac{dR_g}{dT}=f_k(R_g)\sqrt{1-\frac{P^2 f_k(R_g)}{E^2 R_g^2}}=f_k(R_g)\sqrt{1-\frac{B^2(R_g)}{b^2}}.
\label{vg}
\end{equation}
This equation, in which we made use of (\ref{effective poten}) and (\ref{impact param}), holds provided the condition of
accessibility (\ref{cond. acces}) is satisfied. The velocity vanishes when
$b=B(R_g)$, i.e.\ at a turning point of the gravitational signal.
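This behaviour of $v_g$ can be illustrated numerically. The sketch below (illustrative only; the values $\ell=1$, $\mu=0.5$, $k=+1$ are assumptions) checks that the velocity vanishes exactly at $b=B(R_g)$.

```python
# Illustrative check of the graviton velocity formula
# v_g = f_k(R) * sqrt(1 - B^2(R)/b^2): the velocity vanishes exactly at
# b = B(R), the turning point. All parameter values are assumptions.
import math

ell, mu, k = 1.0, 0.5, +1  # assumed values

def f(R):
    return k + R**2 / ell**2 - mu / R**2

def B(R):
    return math.sqrt(f(R)) / R  # effective potential B(R) = sqrt(f_k(R))/R

def v_g(R, b):
    return f(R) * math.sqrt(max(0.0, 1.0 - (B(R) / b) ** 2))

R = 3.0
assert v_g(R, B(R)) == 0.0        # b = B(R): turning point
assert v_g(R, 2.0 * B(R)) > 0.0   # b > B(R): the point R is accessible
```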
Next we find the relative velocity of the brane by substituting
equation (\ref{dT/dt 2}) for $dt$ into (\ref{jc isomat 1}),
\begin{equation}
dR_b^2=\left[\left(\frac{\rho+\sigma}{\sigma_c}\right)^2\frac{R_b^2}{\ell^2}-f_k(R_b)\right]
\left[f_k(R_b)dT^2-f_k^{-1}(R_b)dR_b^2\right],
\end{equation}
which on simplification yields
\begin{equation}
v_b=\frac{dR_b}{dT}=f_k(R_b)\sqrt{1-\frac{\ell^2f_k(R_b)}{\left(\frac{\rho+\sigma}{\sigma_c}\right)^2
R_b^2}}. \label{vb}
\end{equation}
The equation for the relative velocity of the brane will only hold if
\begin{equation}
\hat{\beta}=\frac{1}{\ell}\left(\frac{\rho+\sigma}{\sigma_c}\right)\geq B(R_b).
\end{equation}
In the limit $R\rightarrow 0$ this implies $\mu\geq 0$, and in the limit $R\rightarrow \infty$
we must have $\sigma\geq \sigma_c$. We are therefore only considering positive dark radiation.
The ratio of the two relative velocities is then given by
\begin{equation}
\frac{v_g}{v_b}=\frac{f_k(R_g)}{f_k(R_b)}\cdot\sqrt{\frac{1-\frac{P^2 f_k(R_g)}{E^2
R_g^2}}{1-\frac{\ell^2f_k(R_b)}{\left(\frac{\rho+\sigma}{\sigma_c}\right)^2
R_b^2}}}
\end{equation}
We seek the conditions of escape, and therefore we only
consider the position of the brane and the energy density at the
time of emission, i.e. $R_b=R_g=R_{em}$ and $\rho=\rho_{em}$ respectively.
The graviton has the same position as the brane at the bulk time of emission. A gravitational
signal can escape if and only if $v_g>v_b$, that is when
\begin{equation*}
1-\frac{P^2 f_k(R_{em})}{E^2
R_{em}^2}>1-\frac{\ell^2f_k(R_{em})}{\left(\frac{\rho_{em}+\sigma}{\sigma_c}\right)^2
R_{em}^2},
\end{equation*}
which after simplification reduces to
\begin{equation}
\frac{E}{P}>\frac{1}{\ell}\left(\frac{\rho_{em}+\sigma}{\sigma_c}\right) \label{cond escape}
\end{equation}
or
\begin{equation}
b>\hat{\beta}_{em}.
\end{equation}
This condition can be considered as the \textit{condition of escape}, that is the condition
which needs to be satisfied for a graviton to escape from the brane.
When $v_g=v_b$ we have the condition
\begin{equation}
\frac{E}{P}=\frac{1}{\ell}\left(\frac{\rho_{em}+\sigma}{\sigma_c}\right)
\end{equation}
which implies that the graviton is stuck on the brane and
therefore appears to be moving with the brane at the same relative
velocity. In the case $v_g<v_b$,
the gravitational signal lags behind the brane, and escape is clearly not possible.
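The trichotomy $v_g>v_b$, $v_g=v_b$, $v_g<v_b$ at the emission point can be verified numerically. In the sketch below (not from the paper) all parameter values are assumptions chosen so that $\rho_{em}+\sigma>\sigma_c$.

```python
# Numerical sketch of the escape condition: at the emission point the
# graviton outruns the brane (v_g > v_b) precisely when its impact
# parameter b = E/P exceeds beta_hat = (rho_em + sigma)/(ell * sigma_c).
# All parameter values below are assumptions chosen for illustration.
import math

ell, mu, k = 1.0, 0.5, +1
sigma_c = 1.0
rho_em, sigma = 0.3, 1.2       # rho_em + sigma > sigma_c

def f(R):
    return k + R**2 / ell**2 - mu / R**2

def v_g(R, b):                 # graviton velocity, eq. (vg)
    B2 = f(R) / R**2
    return f(R) * math.sqrt(1.0 - B2 / b**2)

def v_b(R):                    # brane velocity, eq. (vb)
    beta_hat = (rho_em + sigma) / (ell * sigma_c)
    return f(R) * math.sqrt(1.0 - f(R) / (beta_hat * R) ** 2)

R_em = 3.0
beta_hat = (rho_em + sigma) / (ell * sigma_c)
assert v_g(R_em, 1.1 * beta_hat) > v_b(R_em)          # b > beta_hat: escape
assert v_g(R_em, 0.9 * beta_hat) < v_b(R_em)          # b < beta_hat: no escape
assert abs(v_g(R_em, beta_hat) - v_b(R_em)) < 1e-12   # b = beta_hat: comoving
```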
\section{Summary and discussion}
The only case which shows the possibility of a graviton being emitted
and returning to the brane is $k=+1$ with $\mu\neq 0$. The
potential diagram for this case has a classical potential form
(see \cite{Misner} and Figure 2), which means that a graviton emitted from the brane at $R_{em}$
(where $R_h<R_{em}<R_t$) will return if condition (\ref{cond.ret}) is satisfied. Comparing the
velocities of the graviton and the brane, we were able to determine that a graviton can escape
from the brane if condition (\ref{cond escape}) is satisfied.
The condition for a graviton to escape from the brane at some early time (when $R_h<R_{em}<R_t$) and return,
is then given by
\begin{equation}
\frac{1}{\ell}\left(\frac{\rho_{em}+\sigma}{\sigma_c}\right)<b<\sqrt{\frac{1}{\ell^2}+\frac{1}{4\mu}},
\end{equation}
where $\mu>0$ and $\sigma\geq\sigma_c$. This condition can also be expressed in terms of the 5-dimensional
fundamental mass scale $M_5$, by using the
relationship $\kappa_5^2=8\pi G\ell=M_5^{-3}$, namely
\begin{equation}
\frac{\rho_{em}+\sigma}{6M^3_5}<b<\sqrt{\frac{1}{\ell^2}+\frac{1}{4\mu}}.
\end{equation}
Since the impact parameter has no physical significance, we rather use the conditions above to find the
relationship between parameters that do have physical significance. We find that the conditions of escape
and return impose the following restrictions on the dark radiation $\mu$ (Schwarzschild-like mass)
\begin{equation}
\mu<\frac{\ell^2}{4}\left[\left(\frac{\rho_{em}+\sigma}{\sigma_c}\right)^2-1\right]^{-1},
\end{equation}
for all $\rho_{em}+\sigma>\sigma_c$. This in turn tells us that we may only have returning gravitons before the
brane reaches the point $R_t$, where using (\ref{Rt}) we have the condition
\begin{equation}
R_t<\frac{\ell}{\sqrt{2}}\left[\left(\frac{\rho_{em}+\sigma}{\sigma_c}\right)^2-1\right]^{-\frac{1}{2}}.
\end{equation}
We can realistically only expect emitted gravitons to return to
the brane at very early times ($\rho_{em}+\sigma\gg\sigma_c$), when
$R_{em}\rightarrow R_h$ and thus $B(R_{em})\rightarrow 0$. At late
times ($\rho \rightarrow 0$) we are essentially looking at a
domain wall moving in an $AdS_5$ space, where gravitons do not
return to the brane (see appendix). In the limit
$\rho_{em}+\sigma\gg\sigma_c$ the restrictions above reduce to
\begin{equation}
\mu<\frac{1}{(\rho_{em}+\sigma)^2}
\end{equation}
and hence
\begin{equation}
R_t<\frac{1}{\rho_{em}+\sigma}.
\end{equation}
The upper limit for the dark radiation $\mu$ obtained here still requires
further investigation, especially in the light of the nucleosynthesis limits \cite{Ichiki}.
We see that there exists a small window of opportunity, before the brane reaches $R_t$,
for gravitons to be emitted from the brane and then return to it. After the brane reaches $R_t$,
no emitted graviton ever returns to the brane. An alternative might be to look at a bulk with
some sort of field (dilatonic, etc.) which forces the graviton's path back to the brane once it has gone beyond $R_t$.
Another possibility is to have a black hole on the brane that will give us
the required curvature. Black holes on the brane have been
considered by a number of authors (see \cite{ChamHawk,Dadhich,Emparan,Chamblin01,Pravda}),
most notably in the context of causality by \cite{Chamblin01b}.
Another issue that needs investigation is that of conservation of
energy for a brane that is radiating these gravitons. Clearly the
form (\ref{jc isomat 2}) does not take into consideration the
gravitons that are emitted. Radiating braneworlds have been
considered in \cite{Langlois} and \cite{Leeper}, which might show
more promising results. The mechanisms that could produce
gravitons with sufficient energy to leave the brane have also not
yet been explored properly.
In conclusion, there are two important aspects of escaping gravitational signals.
In \S 4 we clearly saw that the condition of return does not
depend on the type of brane that we choose, but rather on the type
of bulk (i.e.\ on its 5-D spacetime). We also discovered that
the condition of escape depends on the type of brane we choose and
is independent of the bulk spacetime. We note that no matter
which values of $\mu$ ($AdSS_5$) and charge $Q$ (Reissner-Nordstr\"{o}m) we choose,
the escape condition stays the same. The condition of escape is also independent of the curvature parameter $k$.
We have thus shown that certain conditions first need to be
satisfied before one may assume that a signal can
escape and then return to the brane. Two important conditions must hold:
firstly, a condition under which a
graviton can escape from the brane (or be captured by the brane),
and secondly, the conditions which allow it to return to the brane.
Furthermore,
these two conditions together restrict gravitons to escaping and returning to
the brane before the brane reaches the point $R_t$ in the bulk.
If all of these conditions are not satisfied, one cannot expect to have
causally connected points through the bulk.
\ack
This research was supported by the National Research Foundation (South Africa).
J.L. also acknowledges the postgraduate assistantship from the University of South Africa (UNISA).
\section{Introduction}
\subsection{Symplectic capacities}
A {\it symplectic capacity} is a function $c$ that assigns numbers $c(X,\omega)\in [0,\infty]$ to symplectic manifolds $(X,\omega)$ of a certain dimension $2n$. Symplectic capacities are required to be monotonic under symplectic embeddings and behave linearly with respect to scalings of the symplectic form. More precisely, one requires:
\begin{itemize}
\item (Monotonicity) If $(X,\omega)$ symplectically embeds into $(X',\omega')$, then $c(X,\omega)\leq c(X',\omega')$.
\item (Conformality) For every $r>0$, we have $c(X,r\omega)=rc(X,\omega)$.
\end{itemize}
We will be mainly concerned with symplectic capacities of domains in Euclidean space ${\mathbb{R}}^{2n}={\mathbb{C}}^n$ equipped with the standard symplectic form
\begin{equation*}
\omega_0 \coloneqq \sum\limits_{j=1}^n dx_j\wedge dy_j.
\end{equation*}
We define the ball $B(a)$ and cylinder $Z(a)$ of symplectic width $a>0$ to be the sets
\begin{equation*}
B(a) \coloneqq \left\{z\in{\mathbb{C}}^n\mid \pi |z|^2\leq a\right\}\qquad \text{and} \qquad Z(a) \coloneqq \left\{z\in{\mathbb{C}}^n \mid \pi |z_1|^2\leq a\right\}.
\end{equation*}
A symplectic capacity is called {\it normalized} if it satisfies
\begin{itemize}
\item (Normalization) $c(B(\pi))= c(Z(\pi)) = \pi$.
\end{itemize}
Two examples of normalized symplectic capacities which are easy to define are the {\it Gromov width} $\operatorname{c_G}$ and the {\it cylindrical capacity} $\operatorname{c_Z}$. We use the notation $A \overset{s}{\hookrightarrow} B$ to indicate that there exists a {\it symplectic embedding} of $A$ into $B$, i.e. a smooth embedding preserving the symplectic structure. Gromov width and cylindrical capacity are given by
\begin{equation*}
\operatorname{c_G}(X)\coloneqq \sup \{a \mid B(a) \overset{s}{\hookrightarrow} X\} \qquad \text{and}\qquad
\operatorname{c_Z}(X)\coloneqq \inf \{a \mid X \overset{s}{\hookrightarrow} Z(a) \}.
\end{equation*}
Note, however, that it is highly non-trivial to show that $\operatorname{c_G}$ and $\operatorname{c_Z}$ are indeed normalized capacities. In fact, this is equivalent to the celebrated Gromov non-squeezing theorem \cite{Gr85}. There is a whole collection of symplectic capacities whose definition involves Hamiltonian dynamics. Examples of normalized capacities arising this way are the Hofer-Zehnder capacity $\operatorname{c_{HZ}}$ introduced in \cite{HZ90} and the Viterbo capacity $\operatorname{c_{SH}}$ defined in \cite{Vit99} using symplectic homology. Other capacities come in families parametrized by positive integers. Examples are the Ekeland-Hofer capacities $\cEH{k}$ defined in \cite{EH89} and \cite{EH90} and the equivariant capacities $\cCH{k}$ constructed by Gutt and Hutchings in \cite{GH18} from $S^1$-equivariant symplectic homology. The first capacities $\cEH{1}$ and $\cCH{1}$ in these families are normalized. In dimension $4$, there exists a sequence of capacities $\cECH{k}$ defined by Hutchings in \cite{Hu11} using embedded contact homology. Again, the first capacity $\cECH{1}$ is normalized. For more information on symplectic capacities, we refer the reader to Cieliebak-Hofer-Latschev-Schlenk \cite{CHLS07} and the references therein.\\
Recall that a contact form on an odd dimensional manifold is a nowhere vanishing $1$-form $\alpha$ such that the restriction of $d\alpha$ to the hyperplane field $\xi\coloneqq\ker\alpha$ is non-degenerate. A contact form $\alpha$ induces a Reeb vector field $R=R_\alpha$ characterized by the equations
\begin{equation*}
\iota_{R}\alpha = 1\qquad \text{and}\qquad \iota_R d\alpha = 0.
\end{equation*}
Studying the dynamical properties of Reeb flows, such as the existence of periodic orbits, is a topic of great interest in symplectic geometry. Contact forms naturally arise on the boundaries of convex or, more generally, star-shaped domains $X\subset{\mathbb{R}}^{2n}$. We equip ${\mathbb{R}}^{2n}$ with the radial Liouville vector field $Z_0$ and the associated Liouville $1$-form $\lambda_0$
\begin{equation}
\label{eq:liouville_vector_field_and_form}
Z_0 = \sum\limits_{j=1}^n (x_j\partial_{x_j} + y_j\partial_{y_j}) = \frac{1}{2}r\partial_r\qquad\text{and}\qquad \lambda_0 = \frac{1}{2}\sum\limits_{j=1}^n (x_jdy_j-y_jdx_j).
\end{equation}
They are related to the symplectic form $\omega_0$ via $d\lambda_0 = \omega_0$ and $\iota_{Z_0}\omega_0=\lambda_0$. Consider a closed, connected hypersurface $Y\subset{\mathbb{R}}^{2n}$. The restriction of $\lambda_0$ to $Y$ is a contact form if and only if the Liouville vector field $Z_0$ is transverse to $Y$. If $Y$ has this property, we call it a star-shaped hypersurface and the domain bounded by $Y$ a star-shaped domain. Note that all star-shaped hypersurfaces are contactomorphic to the sphere $S^{2n-1}$ equipped with its standard contact structure. Moreover, any contact form on $S^{2n-1}$ defining the standard contact structure is strictly contactomorphic to the restriction of $\lambda_0$ to some star-shaped hypersurface. Thus studying star-shaped hypersurfaces is equivalent to studying contact forms on $S^{2n-1}$ defining the standard contact structure.\\
It was proved by Rabinowitz in \cite{Rab78} that there exists a periodic Reeb orbit on the boundary of any star-shaped domain $X\subset{\mathbb{R}}^{2n}$. If $\gamma$ is a periodic orbit on $\partial X$, we define its action $\mathcal{A}(\gamma)$ to be
\begin{equation*}
\mathcal{A}(\gamma)\coloneqq\int_\gamma\lambda_0.
\end{equation*}
The capacities $\operatorname{c_{HZ}}$, $\operatorname{c_{SH}}$, $\cEH{k}$ and $\cCH{k}$ have the following important property: Their value on a star-shaped domain $X\subset{\mathbb{R}}^{2n}$ is equal to the action $\mathcal{A}(\gamma)$ of some (possibly multiply covered) periodic orbit $\gamma$ on $\partial X$. The capacities $\cECH{k}$ have a similar property. Their values can be represented as the sum of the actions of finitely many periodic orbits.
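For example (a standard computation included for illustration), on the round sphere $S^3=\partial B(\pi)$ the Reeb vector field of $\lambda_0|_{S^3}$ is $R=2\sum_{j=1}^2(x_j\partial_{y_j}-y_j\partial_{x_j})$, which generates the Hopf flow $\varphi_t(z)=e^{2it}z$. Every orbit is closed with minimal period $\pi$, and since $\lambda_0(R)=1$, each simple orbit $\gamma$ has action
\begin{equation*}
\mathcal{A}(\gamma)=\int_\gamma\lambda_0=\int_0^\pi \lambda_0(R)\,dt=\pi,
\end{equation*}
in accordance with the normalization $c(B(\pi))=\pi$.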
\subsection{Viterbo's conjecture}
In \cite{Vit00}, Viterbo stated the following fascinating conjecture concerning normalized symplectic capacities.
\begin{conjecture}[Viterbo conjecture]
\label{conjecture:weak_viterbo_conjecture}
Let $X\subset{\mathbb{R}}^{2n}$ be a convex domain. Then any normalized symplectic capacity $c$ satisfies the inequality
\begin{equation}
\label{eq:weak_viterbo_conjecture}
c(X) \leq (n!\operatorname{vol}(X))^{\frac{1}{n}}.
\end{equation}
\end{conjecture}
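Observe that the ball saturates inequality \eqref{eq:weak_viterbo_conjecture} (a standard observation, included for illustration): $B(a)$ is the Euclidean ball of radius $r$ with $\pi r^2=a$, so its volume is $\operatorname{vol}(B(a))=\pi^n r^{2n}/n!=a^n/n!$ and hence
\begin{equation*}
(n!\operatorname{vol}(B(a)))^{\frac{1}{n}} = a,
\end{equation*}
which equals $c(B(a))$ for every normalized capacity $c$, since normalization and conformality give $c(B(a))=a$.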
Note that inequality \eqref{eq:weak_viterbo_conjecture} holds for the Gromov width $\operatorname{c_G}$. This is an easy consequence of the fact that symplectomorphisms are volume preserving. For all other normalized capacities introduced above, Conjecture \ref{conjecture:weak_viterbo_conjecture} is open. There is a stronger version of Viterbo's conjecture.
\begin{conjecture}[Strong Viterbo conjecture]
\label{conjecture:strong_viterbo_conjecture}
Let $X\subset {\mathbb{R}}^{2n}$ be a convex domain. Then all normalized symplectic capacities agree on $X$.
\end{conjecture}
The strong Viterbo conjecture together with the above observation that Conjecture \ref{conjecture:weak_viterbo_conjecture} holds for $\operatorname{c_G}$ immediately implies that Conjecture \ref{conjecture:weak_viterbo_conjecture} is true for all normalized symplectic capacities. It is an easy consequence of the definitions that any normalized symplectic capacity $c$ satisfies $\operatorname{c_G}\leq c\leq \operatorname{c_Z}$. Thus Conjecture \ref{conjecture:strong_viterbo_conjecture} is equivalent to saying that Gromov width $\operatorname{c_G}$ and cylindrical capacity $\operatorname{c_Z}$ agree on convex domains. The convexity assumption in Viterbo's conjectures is essential. Even within the class of star-shaped domains there exist domains $X$ with arbitrarily small volume such that the cylindrical capacity satisfies $\operatorname{c_Z}(X)\geq 1$ (see Hermann's paper \cite{Her98}). We refer to Gutt-Hutchings-Ramos \cite{GHR20} for a recent account on Viterbo's conjectures.
\subsection{Main results}
It follows from work of Ekeland, Hofer, Zehnder, Abbondandolo, Kang and Irie that the normalized capacities $\operatorname{c_{HZ}}$, $\operatorname{c_{SH}}$, $\cEH{1}$ and $\cCH{1}$ agree on all convex domains $X\subset{\mathbb{R}}^{2n}$. We refer to Theorem 1.12 in \cite{GHR20} for a summary. If $X$ has smooth boundary, these capacities agree with $\operatorname{A_{min}}(X)$, the minimal action $\mathcal{A}(\gamma)$ of a periodic orbit $\gamma$ on $\partial X$, i.e.
\begin{equation}
\label{eq:amin_is_capacity}
\operatorname{A_{min}}(X)=\operatorname{c_{HZ}}(X)=\operatorname{c_{SH}}(X)=\cEH{1}(X)=\cCH{1}(X).
\end{equation}
Abbondandolo-Bramham-Hryniewicz-Salom\~{a}o \cite{ABHS18} proved that for all domains $X\subset{\mathbb{R}}^{4}$ whose boundary $\partial X$ is smooth and sufficiently close to the unit sphere with respect to the $C^3$-topology, the minimal action $\operatorname{A_{min}}(X)$ satisfies inequality \eqref{eq:weak_viterbo_conjecture}. This was extended to arbitrary dimension by Abbondandolo-Benedetti in \cite{AB20}. In particular, this implies that the capacities $\operatorname{c_{HZ}}$, $\operatorname{c_{SH}}$, $\cEH{1}$ and $\cCH{1}$ satisfy Conjecture \ref{conjecture:weak_viterbo_conjecture} in a $C^3$-neighbourhood of the round ball. Our first result strengthens this in the $4$-dimensional case. We prove the strong Viterbo conjecture (Conjecture \ref{conjecture:strong_viterbo_conjecture}) near the ball.
\begin{theorem}
\label{theorem:strong_viterbo_near_round_ball}
Let $X\subset{\mathbb{R}}^4$ be a convex domain. If $\partial X$ is sufficiently close to the unit sphere $S^3\subset{\mathbb{R}}^4$ with respect to the $C^3$-topology, then all normalized symplectic capacities agree on $X$.
\end{theorem}
The methods we use to obtain this result are similar to those in \cite{ABHS18}. In particular, the main tool in our proof is the theory of global surfaces of section. Let $\alpha$ be a contact form on a closed $3$-manifold $Y$. We call an embedded surface (with boundary) $\Sigma\subset Y$ a {\it global surface of section} for the Reeb flow if the boundary $\partial \Sigma$ is embedded and consists of closed, simple Reeb orbits, the Reeb vector field $R$ is transverse to the interior $\operatorname{int} (\Sigma)$, and every trajectory not contained in $\partial\Sigma$ meets $\operatorname{int} (\Sigma)$ infinitely often forward and backward in time. Surfaces of section are an extremely useful tool in three-dimensional Reeb dynamics. For example, they have been used to show that every (non-degenerate) Reeb flow on a closed contact $3$-manifold must have either two or infinitely many periodic orbits (see \cite{HWZ98}, \cite{CHP19} and \cite{CDR20}). In this paper, we will be concerned with disk-like global surfaces of section, i.e. the case that $\Sigma$ is diffeomorphic to the $2$-dimensional closed disk. This implies that the underlying contact manifold must be the $3$-sphere $S^3$ with its unique tight contact structure. For more details on surfaces of section we refer to section \ref{subsection:disk_like_global_surfaces_of_section}. We state our second main result.
\begin{theorem}
\label{theorem:area_of_surface_of_section_bounds_cylindrical_capacity}
Let $X\subset{\mathbb{R}}^4$ be a star-shaped domain. Let $\Sigma\subset\partial X$ be a $\partial$-strong (see Definition \ref{def:non_degenerate_surface_of_section}) disk-like global surface of section of the natural Reeb flow on $\partial X$ of symplectic area
\begin{equation*}
a\coloneqq \int_\Sigma \omega_0 = \mathcal{A}(\partial\Sigma).
\end{equation*}
Then there exists a symplectic embedding $X\overset{s}{\hookrightarrow} Z(a)$. In particular, we have $\operatorname{c_Z}(X)\leq a$.
\end{theorem}
The boundary of a general star-shaped domain need not possess a disk-like global surface of section (see van Koert's paper \cite{VK20}). In this case, our result is vacuous. Theorem \ref{theorem:area_of_surface_of_section_bounds_cylindrical_capacity} is particularly useful when applied to the important class of {\it dynamically convex} domains. The notion of dynamical convexity was first introduced by Hofer-Wysocki-Zehnder in their groundbreaking paper \cite{HWZ98}. Since then, dynamical convexity has played a significant role in numerous papers on Reeb dynamics (see e.g. \cite{Hry14}, \cite{AM17}, \cite{AM17b}, \cite{GG20}, \cite{Zho20}, \cite{HN16}, just to name a few). We recall the definition of dynamical convexity from \cite{HWZ98}.
\begin{definition}[{\cite[Definition 1.2]{HWZ98}}]
\label{definition:dynamical_convexity}
A contact form $\alpha$ on $S^3$ defining the unique tight contact structure is called {\it dynamically convex} if every periodic Reeb orbit $\gamma$ of $\alpha$ has Conley-Zehnder index $\on{CZ}(\gamma)$ at least $3$. A star-shaped domain $X\subset{\mathbb{R}}^4$ is called dynamically convex if the restriction of the standard Liouville $1$-form $\lambda_0$ (see equation \eqref{eq:liouville_vector_field_and_form}) to the boundary $\partial X$ is dynamically convex.
\end{definition}
\begin{remark}
The Conley-Zehnder index $\on{CZ}(\gamma)$ of a periodic Reeb orbit depends on the choice (up to homotopy) of a symplectic trivialization of the contact structure along the orbit. On $S^3$ every contact structure admits a unique global trivialization up to homotopy. This is the trivialization used in Definition \ref{definition:dynamical_convexity}. Let us also point out that usually the Conley-Zehnder index is only defined for non-degenerate orbits. If $\gamma$ is degenerate, then $\on{CZ}(\gamma)$ in Definition \ref{definition:dynamical_convexity} refers to the lower semicontinuous extension of the Conley-Zehnder index.
\end{remark}
It is proved in \cite{HWZ98} that every convex domain $X\subset{\mathbb{R}}^4$ whose boundary has positive definite second fundamental form is dynamically convex. The main result of \cite{HWZ98} is that every dynamically convex Reeb flow on $S^3$ admits a disk-like global surface of section. This was later refined by Hryniewicz-Salom\~{a}o. In \cite{HS11}, they characterize precisely which simple closed orbits of a non-degenerate tight contact form on $S^3$ bound a disk-like global surface of section. Their characterization takes a particularly simple form in the case of dynamically convex contact forms. In this case, a simple orbit $\gamma$ is the boundary of a disk-like global surface of section if and only if $\gamma$ is unknotted and the self-linking number $\operatorname{sl}(\gamma)$ of $\gamma$ viewed as a transverse knot is equal to $-1$. It is proved by Hryniewicz in \cite{Hry14} that this statement continues to hold without any non-degeneracy assumption on the contact form.
\begin{definition}
A simple closed orbit of a tight Reeb flow on $S^3$ is called a {\it Hopf orbit} if it is unknotted and has self-linking number equal to $-1$.
\end{definition}
Using this terminology we summarize:
\begin{theorem}[Hryniewicz-Salom\~{a}o]
\label{theorem:hopf_iff_boundary_of_disk_gss}
Let $\alpha$ be a dynamically convex contact form on $S^3$ (not necessarily non-degenerate) and let $\gamma$ be any simple periodic Reeb orbit of $\alpha$. Then $\gamma$ bounds a disk-like global surface of section for the Reeb flow of $\alpha$ if and only if $\gamma$ is a Hopf orbit.
\end{theorem}
\begin{remark}
\label{remark:hopf_iff_boundary_of_disk_gss}
It is proved by Florio-Hryniewicz \cite[Proposition 2.8]{FH21} that every Hopf orbit of a dynamically convex Reeb flow on $S^3$ in fact bounds a disk-like global surface of section which is $\partial$-strong in the sense of Definition \ref{def:non_degenerate_surface_of_section}. Thus Theorem \ref{theorem:area_of_surface_of_section_bounds_cylindrical_capacity} is indeed applicable to these surfaces of section.
\end{remark}
While Hopf orbits need not bound disk-like global surfaces of section for general star-shaped domains, it is shown by Hofer-Wysocki-Zehnder \cite{HWZ96} that Hopf orbits always exist. Given a star-shaped domain $X\subset{\mathbb{R}}^4$, let us therefore define
\begin{equation}
\label{eq:def_of_ahopf}
\operatorname{A_{Hopf}}(X) \coloneqq \inf \left\{ \mathcal{A}(\gamma) \mid \gamma \enspace \text{is a Hopf orbit on} \enspace \partial X \right\} \in (0,\infty).
\end{equation}
Although not explicitly stated in this form, the proof of the existence result of Hopf orbits in \cite{HWZ96} shows slightly more.
\begin{theorem}[Hofer-Wysocki-Zehnder]
\label{theorem:ahopf_bounded_by_cylindrical_capacity}
Let $X\subset{\mathbb{R}}^4$ be any star-shaped domain. Then
\begin{equation*}
\operatorname{A_{Hopf}}(X)\leq \operatorname{c_Z}(X).
\end{equation*}
\end{theorem}
The following result due to Hryniewicz-Hutchings-Ramos \cite{HHR21} says that the infimum in the definition of $\operatorname{A_{Hopf}}$ (equation \eqref{eq:def_of_ahopf}) is always attained in the dynamically convex case.
\begin{prop}[Hryniewicz-Hutchings-Ramos]
\label{prop:ahopf_attained_dyn_cvx}
Let $X\subset{\mathbb{R}}^4$ be a dynamically convex, star-shaped domain. Then there exists a Hopf orbit $\gamma$ on $\partial X$ such that $\mathcal{A}(\gamma)=\operatorname{A_{Hopf}}(X)$.
\end{prop}
As an immediate consequence of the above results we obtain:
\begin{theorem}
\label{theorem:a_hopf_equals_cylindrical_capacity_for_convex_domains}
Let $X\subset{\mathbb{R}}^4$ be a dynamically convex domain and let $a>0$. Then there exists a symplectic embedding $X\overset{s}{\hookrightarrow} Z(a)$ if and only if $a\geq \operatorname{A_{Hopf}}(X)$. In particular, we have
\begin{equation*}
\operatorname{c_Z}(X) = \operatorname{A_{Hopf}}(X).
\end{equation*}
\end{theorem}
\begin{proof}
If $a<\operatorname{A_{Hopf}}(X)$, then $X$ cannot be symplectically embedded into $Z(a)$ by Theorem \ref{theorem:ahopf_bounded_by_cylindrical_capacity}. By Proposition \ref{prop:ahopf_attained_dyn_cvx} we may choose a Hopf orbit $\gamma$ with action equal to $\operatorname{A_{Hopf}}(X)$. It follows from Theorem \ref{theorem:hopf_iff_boundary_of_disk_gss} and Remark \ref{remark:hopf_iff_boundary_of_disk_gss} that $\gamma$ bounds a $\partial$-strong disk-like global surface of section $\Sigma$. Thus Theorem \ref{theorem:area_of_surface_of_section_bounds_cylindrical_capacity} implies that $X$ symplectically embeds into $Z(\operatorname{A_{Hopf}}(X))$. Thus $X$ also embeds into $Z(a)$ for $a\geq \operatorname{A_{Hopf}}(X)$.
\end{proof}
Hryniewicz-Hutchings-Ramos show in \cite{HHR21} that $\operatorname{A_{Hopf}}(X)$ in fact agrees with the first embedded contact homology capacity $\cECH{1}(X)$ for all dynamically convex domains $X\subset{\mathbb{R}}^4$. We obtain the following corollary.
\begin{cor}
For all dynamically convex domains $X\subset{\mathbb{R}}^4$ we have
\begin{equation*}
\operatorname{c_Z}(X) = \cECH{1}(X).
\end{equation*}
\end{cor}
It is unknown whether a Reeb orbit of minimal action on the boundary of a dynamically convex domain $X\subset{\mathbb{R}}^4$ is a Hopf orbit, i.e. whether we have $\operatorname{A_{min}}(X) = \operatorname{A_{Hopf}}(X)$. Note that the strong Viterbo conjecture would imply $\operatorname{A_{min}}(X)=\operatorname{A_{Hopf}}(X)$ for all convex domains $X$. Equality of $\operatorname{A_{min}}$ and $\operatorname{A_{Hopf}}$ was proved by Hainz \cite{Hai07} (see also \cite{HH11}) under certain curvature assumptions.
\begin{theorem}[Hainz]
\label{theorem:unknottedness_of_index_tree_orbits}
Let $X\subset{\mathbb{R}}^4$ be a strictly convex domain. Assume that the principal curvatures $a\geq b\geq c$ of the boundary $\partial X$ satisfy the pointwise pinching condition $a\leq b+c$. Then any periodic Reeb orbit $\gamma$ on $\partial X$ of Conley-Zehnder index $3$ is a Hopf orbit.
\end{theorem}
It follows from Ekeland's book \cite{Ek90} (see in particular Theorem 3 and Proposition 9 in chapter V) that for convex domains $X$ with strictly positively curved boundary a Reeb orbit of minimal action has Conley-Zehnder index $3$. Thus we have $\operatorname{A_{min}}(X)=\operatorname{A_{Hopf}}(X)$ if $X$ satisfies the curvature assumptions in Theorem \ref{theorem:unknottedness_of_index_tree_orbits}. Together with Theorem \ref{theorem:a_hopf_equals_cylindrical_capacity_for_convex_domains} this implies that all normalized symplectic capacities which are bounded from below by $\operatorname{A_{min}}$ agree in this situation.
\begin{cor}
Let $X\subset{\mathbb{R}}^4$ be a convex domain with strictly positively curved boundary satisfying the curvature assumptions in Theorem \ref{theorem:unknottedness_of_index_tree_orbits}. Then
\begin{equation}
\label{eq:capacity_equality_curvature_assumption}
\operatorname{A_{min}}(X)=\operatorname{A_{Hopf}}(X)=\operatorname{c_{HZ}}(X)=\operatorname{c_{SH}}(X)=\cEH{1}(X)=\cCH{1}(X)=\cECH{1}(X)=c_Z(X).
\end{equation}
\end{cor}
We are led to ask the following question:
\begin{question}
\label{question:cz_3_implies_hopf}
Let $X\subset{\mathbb{R}}^4$ be a convex domain. Is it true that any closed Reeb orbit on $\partial X$ of Conley-Zehnder index $3$ must be a Hopf orbit?
\end{question}
An affirmative answer to Question \ref{question:cz_3_implies_hopf} would imply that equation \eqref{eq:capacity_equality_curvature_assumption} continues to hold for all convex domains in ${\mathbb{R}}^4$.
\subsection{Overview of the proofs}
Let us explain the main ideas. Consider the unit disk ${\mathbb{D}}\subset{\mathbb{C}}$ equipped with the standard symplectic form $\omega_0 = dx\wedge dy$. Let
\begin{equation*}
H:{\mathbb{R}}/{\mathbb{Z}} \times {\mathbb{D}}\rightarrow{\mathbb{R}}
\end{equation*}
be a $1$-periodic Hamiltonian vanishing on the boundary $\partial{\mathbb{D}}$. Consider the {\it time-energy extended phase space}
\begin{equation*}
\widetilde{{\mathbb{D}}} \coloneqq {\mathbb{R}}_s \times ({\mathbb{R}}/{\mathbb{Z}})_t \times {\mathbb{D}}
\end{equation*}
equipped with the symplectic form
\begin{equation*}
\widetilde{\omega_0} = ds \wedge dt + \omega_0.
\end{equation*}
Let
\begin{equation*}
\Gamma(H)\coloneqq \{(H(t,z),t,z)\in\widetilde{{\mathbb{D}}}\mid (t,z)\in{\mathbb{R}}/{\mathbb{Z}}\times{\mathbb{D}}\}
\end{equation*}
be the graph of $H$. This is a hypersurface in $\widetilde{{\mathbb{D}}}$. Hence the symplectic form $\widetilde{\omega_0}$ induces a characteristic foliation on $\Gamma(H)$. It is an easy computation (see Lemma \ref{lem:characteristic_foliation_on_graph}) that the vector field
\begin{equation*}
R\coloneqq X_{H_t}(z) + \partial_t + \partial_tH(t,z)\cdot \partial_s
\end{equation*}
is tangent to the characteristic foliation on $\Gamma(H)$. Observe that the projection of the flow of $R$ to the disk ${\mathbb{D}}$ agrees with the Hamiltonian flow $\phi_H^t$ on ${\mathbb{D}}$ induced by $H$. In particular, we see that the image of the map
\begin{equation*}
f:{\mathbb{D}}\rightarrow\Gamma(H)\qquad z\mapsto (H(0,z),0,z)
\end{equation*}
is a disk-like surface of section of the flow on $\Gamma(H)$ and that the first return map is given by $\phi_H^1$.\\
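For the reader's convenience, here is the short computation behind the tangency claim (a sketch of Lemma \ref{lem:characteristic_foliation_on_graph}, using the convention $\iota_{X_{H_t}}\omega_0 = dH_t$ from \S\ref{subsection:area_preserving_maps_of_the_disk}). We have
\begin{equation*}
\iota_R\widetilde{\omega_0} = \iota_R(ds\wedge dt) + \iota_{X_{H_t}}\omega_0 = \partial_tH\,dt - ds + dH_t.
\end{equation*}
Along $\Gamma(H)$ we have $s=H(t,z)$, so the restriction of $ds$ to $T\Gamma(H)$ equals $\partial_tH\,dt + dH_t$. Hence $\iota_R\widetilde{\omega_0}$ vanishes on $T\Gamma(H)$, i.e. $R$ spans the characteristic foliation.\\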
\noindent{\bf Main construction.} Assume that the Hamiltonian $H$ is strictly positive in the interior $\operatorname{int}({\mathbb{D}})$ of the disk and vanishes on the boundary $\partial{\mathbb{D}}$. We abbreviate
\begin{equation*}
\widetilde{{\mathbb{D}}}_+\coloneqq {\mathbb{R}}_{\geq 0}\times{\mathbb{R}}/{\mathbb{Z}}\times{\mathbb{D}}\qquad\text{and}\qquad\widetilde{{\mathbb{D}}}_0\coloneqq \{0\}\times{\mathbb{R}}/{\mathbb{Z}}\times{\mathbb{D}}.
\end{equation*}
Consider the map
\begin{equation*}
\Phi: \widetilde{{\mathbb{D}}}_+ \rightarrow {\mathbb{C}}^2
\qquad \Phi(s,t,z) \coloneqq \left(z\enspace,\enspace\sqrt{\frac{s}{\pi}}\cdot e^{2\pi i t}\right).
\end{equation*}
Note that the image of $\Phi$ is precisely the cylinder $Z(\pi)$. We observe that $\Phi$ restricts to a symplectomorphism
\begin{equation*}
\Phi : (\widetilde{{\mathbb{D}}}_+\setminus \widetilde{{\mathbb{D}}}_0,\widetilde{\omega_0}) \rightarrow (Z(\pi)\setminus ({\mathbb{D}}\times\{0\}),\omega_0).
\end{equation*}
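This can be checked directly in polar coordinates: writing the second component of $\Phi$ as $re^{i\theta}$ with $r^2 = s/\pi$ and $\theta = 2\pi t$, the standard area form on the second ${\mathbb{C}}$-factor pulls back as
\begin{equation*}
r\,dr\wedge d\theta = \tfrac{1}{2}\,d(r^2)\wedge d\theta = \tfrac{1}{2\pi}\,ds\wedge 2\pi\,dt = ds\wedge dt,
\end{equation*}
so that $\Phi^*\omega_0 = \omega_0 + ds\wedge dt = \widetilde{\omega_0}$ away from $\widetilde{{\mathbb{D}}}_0$, where the polar coordinates degenerate.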
The image $\Phi(\Gamma(H))\subset{\mathbb{C}}^2$ is a smooth hypersurface away from the circle $\partial{\mathbb{D}}\times\{0\}$. Under suitable assumptions on the boundary behaviour of $H$, it is smooth everywhere. In order to keep the introduction simple, let us ignore this issue for now. Let $A(H)$ denote the domain bounded by $\Phi(\Gamma(H))$. Since $\Phi$ restricts to a symplectomorphism on $\widetilde{{\mathbb{D}}}_+\setminus\widetilde{{\mathbb{D}}}_0$, it maps the characteristic foliation on $\Gamma(H)$ to the characteristic foliation on $\partial A(H)$. Thus $\Phi\circ f$ parametrizes a disk-like surface of section of the characteristic foliation on $\partial A(H)$. The first return map is given by $\phi_H^1$. Note that $\partial A(H)$ need not be star-shaped or even of contact type.\\
\noindent{\bf Embeddings into the cylinder.} Now suppose that $X\subset{\mathbb{C}}^2$ is a star-shaped domain and that the boundary $\partial X$ admits a disk-like surface of section $\Sigma\subset\partial X$. After scaling, we can always assume that the symplectic area of $\Sigma$ is equal to $\pi$. Suppose that $g:{\mathbb{D}}\rightarrow\Sigma$ is a parametrization of $\Sigma$ such that $g^*\omega_0=\omega_0$. Here $\omega_0$ denotes the standard symplectic form on both ${\mathbb{C}}^2$ and ${\mathbb{C}}$. Let $\phi\in\operatorname{Ham}({\mathbb{D}},\omega_0)$ be the first return map (see equation \eqref{eq:first_return_map}). In section \ref{subsection:disk_like_global_surfaces_of_section}, we explain how to lift $\phi$ to an element $\widetilde{\phi}\in\widetilde{\operatorname{Ham}}({\mathbb{D}},\omega_0)$ of the universal cover. Such a lift depends on a choice of trivialization which, roughly speaking, is an identification of $\partial X\setminus\partial\Sigma$ with the solid torus. In section \ref{subsection:disk_like_global_surfaces_of_section} we classify isotopy classes of such trivializations via an integer-valued function called degree. Let $\widetilde{\phi}$ denote the lift of $\phi$ with respect to a trivialization of degree $0$. Suppose that $\widetilde{\phi}$ can be generated by a $1$-periodic Hamiltonian $H$ which vanishes on the boundary and is strictly positive in the interior. The above discussion shows that $\partial X$ and $\partial A(H)$ admit disk-like surfaces of section whose first return maps agree. In fact, one can show more: The lifts of the first return maps to $\widetilde{\operatorname{Ham}}({\mathbb{D}},\omega_0)$ (with respect to trivializations of degree $0$) agree as well. We use a well-known result of Gromov and McDuff (Theorem \ref{theorem:gromov_mcduff_theorem}) to show that this in fact implies that the domains $X$ and $A(H)$ must be symplectomorphic. By definition, $A(H)$ is contained in the cylinder $Z(\pi)$. 
Therefore, we obtain an embedding of $X$ into the cylinder $Z(\pi)$. We make these arguments precise in Theorem \ref{theorem:embedding_result}. Unfortunately, we do not know whether the lift of the first return map of a disk-like surface of section with respect to a trivialization of degree $0$ can always be generated by a Hamiltonian which vanishes on the boundary and is strictly positive in the interior. In order to resolve this issue, let us observe that if $H$ and $G$ are Hamiltonians vanishing on the boundary $\partial{\mathbb{D}}$ and strictly positive in the interior $\operatorname{int}({\mathbb{D}})$, then the inequality $H\leq G$ implies the inclusion $A(H)\subset A(G)$. Roughly speaking, this says that we can increase the Hamiltonian generating the first return map by making the domain bigger. More precisely, in Proposition \ref{prop:modify_hypersurface_such_that_return_map_is_generated_by_positive_hamiltonian} we prove that any star-shaped domain with a disk-like surface of section in its boundary can be symplectically embedded into a bigger star-shaped domain whose boundary admits a disk-like surface of section of the same area and with the property that the degree $0$ lift of the first return map can be generated by a positive Hamiltonian. Theorem \ref{theorem:area_of_surface_of_section_bounds_cylindrical_capacity} is an easy consequence of Theorem \ref{theorem:embedding_result} and Proposition \ref{prop:modify_hypersurface_such_that_return_map_is_generated_by_positive_hamiltonian}.\\
\noindent{\bf Ball embeddings.} Suppose that the degree $0$ lift of the first return map of the surface of section $\Sigma\subset\partial X$ can be generated by a Hamiltonian $H$ which satisfies the inequality
\begin{equation}
\label{eq:lower_bound_hamiltonian_in_introduction}
H(t,z)\geq \pi(1-|z|^2).
\end{equation}
Then the domain $A(H)$ is squeezed between the ball $B(\pi)$ and the cylinder $Z(\pi)$, i.e.
\begin{equation*}
B(\pi)\subset A(H)\subset Z(\pi).
\end{equation*}
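To see the first inclusion, note that the graph of the comparison Hamiltonian $K(t,z)\coloneqq \pi(1-|z|^2)$ appearing on the right-hand side of \eqref{eq:lower_bound_hamiltonian_in_introduction} is mapped by $\Phi$ onto the unit sphere: a point $(z,w)\in\Phi(\Gamma(K))$ satisfies
\begin{equation*}
|w|^2 = \frac{K(t,z)}{\pi} = 1-|z|^2, \qquad\text{i.e.}\qquad |z|^2+|w|^2=1.
\end{equation*}
Thus $A(K) = B(\pi)$, and the inequality $H\geq K$ yields $B(\pi)=A(K)\subset A(H)$.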
Since $X$ is symplectomorphic to $A(H)$, this implies that $\operatorname{c_G}(X)=\operatorname{c_Z}(X)$. The strategy of the proof of Theorem \ref{theorem:strong_viterbo_near_round_ball} is to show that if $\partial X$ is sufficiently close to the round sphere, then the shortest Reeb orbit on $\partial X$ must bound a disk-like surface of section with the property that the degree $0$ lift of the first return map can be generated by a Hamiltonian satisfying \eqref{eq:lower_bound_hamiltonian_in_introduction}. This is the subject of section \ref{section:a_positivity_criterion_for_hamiltonian_diffeomorphisms} and makes use of generalized generating functions as introduced in \cite{ABHS18}. Let us sketch the main ideas in a special case. We assume that $g:{\mathbb{D}}\rightarrow\Sigma\subset \partial X$ parametrizes a surface of section whose boundary orbit $\partial\Sigma$ has minimal action among all closed Reeb orbits on $\partial X$. Moreover, assume that $g^*\omega_0=\omega_0$. Let $\widetilde{\phi}\in\widetilde{\operatorname{Ham}}({\mathbb{D}},\omega_0)$ be the degree $0$ lift of the first return map $\phi$ to the universal cover. The periodic orbits on $\partial X$ different from the boundary orbit $\partial\Sigma$ correspond to the periodic points of $\phi$. As explained in section \ref{subsection:area_preserving_maps_of_the_disk}, any fixed point $p$ of $\phi$ has a well-defined action $\sigma_{\widetilde{\phi}}(p)$ depending on the lift $\widetilde{\phi}$ to the universal cover. Lifts with respect to a trivialization of degree $0$ have the property that $\sigma_{\widetilde{\phi}}(p)$ is equal to the action of the corresponding closed Reeb orbit on $\partial X$ (see Lemma \ref{lem:action_equals_first_return_time}). Since $\partial\Sigma$ is assumed to have minimal action, this implies that
\begin{equation}
\label{eq:action_inequality_in_overview_of_proofs}
\sigma_{\widetilde{\phi}}(p)\geq \mathcal{A}(\partial\Sigma)=\pi
\end{equation}
for all fixed points $p$ of $\phi$. If $\partial X$ is the unit sphere $S^3$, then the degree $0$ lift of the first return map $\widetilde{\phi}$ is equal to the counter-clockwise rotation by angle $2\pi$. Let us denote this rotation by $\widetilde{\rho}\in\widetilde{\operatorname{Ham}}({\mathbb{D}},\omega_0)$. If $\partial X$ is sufficiently close to $S^3$ with respect to the $C^3$-topology, then $\widetilde{\phi}$ is $C^1$-close to $\widetilde{\rho}$. This is proved in \cite{ABHS18} and explained in section \ref{section:from_reeb_flows_to_disk_like_surfaces_of_section_and_approximation_results}. In order to simplify the discussion, let us assume that $\widetilde{\phi}$ is actually equal to $\widetilde{\rho}$ in a small neighbourhood of the boundary $\partial{\mathbb{D}}$. Therefore, we can regard
\begin{equation*}
\psi\coloneqq \widetilde{\rho}^{-1}\circ\widetilde{\phi}
\end{equation*}
as an element of $\operatorname{Ham}_c({\mathbb{D}},\omega_0)$, the group of Hamiltonian diffeomorphisms compactly supported in the interior $\operatorname{int}({\mathbb{D}})$. It is $C^1$-close to the identity and it follows from \eqref{eq:action_inequality_in_overview_of_proofs} that the action $\sigma_\psi(p)$ is non-negative for all fixed points $p$. The following result is a special case of Corollary \ref{cor:positivity_criterion_for_diffeomorphisms_close_to_the_identity} in section \ref{section:a_positivity_criterion_for_hamiltonian_diffeomorphisms}.
\begin{prop}[Special case of Corollary \ref{cor:positivity_criterion_for_diffeomorphisms_close_to_the_identity}]
\label{prop:special_case_of_corollary_positivity_criterion}
Let $\psi\in\operatorname{Ham}_c({\mathbb{D}},\omega_0)$ be a Hamiltonian diffeomorphism compactly supported in the interior $\operatorname{int}({\mathbb{D}})$. Suppose that all fixed points of $\psi$ have non-negative action and that $\psi$ is close to the identity with respect to the $C^1$-topology. Then $\psi$ can be generated by a non-negative Hamiltonian $H$ with support contained in $\operatorname{int}({\mathbb{D}})$.
\end{prop}
We apply Proposition \ref{prop:special_case_of_corollary_positivity_criterion} to the Hamiltonian diffeomorphism $\psi=\widetilde{\rho}^{-1}\circ\widetilde{\phi}$. Let $G$ denote the resulting Hamiltonian. We may assume that $G_t$ vanishes for time $t$ close to $0$ or $1$. Let us define the Hamiltonian $K$ by the formula
\begin{equation*}
K(t,z)\coloneqq \pi(1-|z|^2).
\end{equation*}
This Hamiltonian generates the rotation $\widetilde{\rho}$. Now set
\begin{equation*}
H_t\coloneqq (K\#G)_t\coloneqq K_t + G_t\circ(\phi_K^t)^{-1}.
\end{equation*}
This defines a $1$-periodic Hamiltonian. Its time-$1$ flow represents $\widetilde{\phi}$. Since $G$ is non-negative, $H$ satisfies inequality \eqref{eq:lower_bound_hamiltonian_in_introduction}. As explained above, this implies that $B(\pi)\subset A(H)\subset Z(\pi)$ and hence $\operatorname{c_G}(X)=\operatorname{c_Z}(X)$.\\
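For completeness, let us verify that $K$ generates $\widetilde{\rho}$ and that $H$ dominates $K$. In polar coordinates $z=re^{i\theta}$ we have $\omega_0 = r\,dr\wedge d\theta$ and $dK = -2\pi r\,dr$, so the defining identity $\iota_{X_K}\omega_0 = dK$ gives
\begin{equation*}
X_K = 2\pi\,\partial_\theta.
\end{equation*}
The flow $\phi_K^t$ is therefore the counter-clockwise rotation by angle $2\pi t$, and the arc $(\phi_K^t)_{t\in[0,1]}$ represents $\widetilde{\rho}$. Moreover, since $G\geq 0$, we obtain pointwise
\begin{equation*}
H_t = K_t + G_t\circ(\phi_K^t)^{-1} \geq K_t = \pi(1-|z|^2),
\end{equation*}
which is precisely inequality \eqref{eq:lower_bound_hamiltonian_in_introduction}.\\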
\noindent{\bf Existence of non-negative Hamiltonians.} Let us sketch the proof of Proposition \ref{prop:special_case_of_corollary_positivity_criterion}. It follows the same basic idea as the proof of Corollary \ref{cor:positivity_criterion_for_diffeomorphisms_close_to_the_identity}. The advantage of our simplified setting here is that we can work with standard generating functions (see e.g. chapter 9 in \cite{MS17}) and do not have to appeal to the generalized ones from \cite{ABHS18}. Let $(X,Y)$ denote the components of $\psi$. There exists a unique generating function $W:{\mathbb{D}}\rightarrow{\mathbb{R}}$, compactly supported in $\operatorname{int}({\mathbb{D}})$, such that
\begin{equation*}
\begin{cases}
X-x = \enspace\partial_2W(X,y)\\
Y-y = -\partial_1W(X,y)
\end{cases}
\end{equation*}
The fixed points of $\psi$ are precisely the critical points of $W$. Moreover, the action of a fixed point is equal to the value of $W$ at the fixed point. Since all fixed points are assumed to have non-negative action, this implies that $W$ takes non-negative values at all its critical points. Since $W$ is compactly supported in $\operatorname{int}({\mathbb{D}})$, its minimum is either zero or attained at an interior critical point; hence $W$ is non-negative. For $t\in [0,1]$, let us define the generating function $W_t\coloneqq t\cdot W$. Let $\psi_t$ denote the compactly supported symplectomorphism generated by $W_t$. This defines an arc in $\operatorname{Ham}_c({\mathbb{D}},\omega_0)$ from the identity to $\psi$. Let $H$ be the unique compactly supported Hamiltonian generating the arc $\psi_t$. Our goal is to show that $H$ is non-negative. A direct computation shows that $H_0$, the Hamiltonian $H$ at time $0$, is equal to $W$ and in particular non-negative. The Hamiltonian $H$ need not be autonomous. However, the following is true. For every fixed $t\in[0,1]$, the set of critical points of $H_t$ is equal to the set of critical points of $W$. Moreover, $W$ and $H_t$ agree on this set. Hence $H_t$ takes non-negative values on its critical points. By the same argument as for $W$, the Hamiltonian $H$ must be non-negative.\\
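To illustrate the first step of this argument, here is the computation showing $H_0 = W$. Differentiating the generating function relations for $W_t = t\cdot W$ at $t=0$, where $\psi_0=\operatorname{id}$ and hence $X=x$ and $Y=y$, gives
\begin{equation*}
\frac{d}{dt}\bigg|_{t=0}\psi_t(x,y) = \big(\partial_2W(x,y),\,-\partial_1W(x,y)\big).
\end{equation*}
On the other hand, with the convention $\iota_{X_{H_0}}\omega_0 = dH_0$ the Hamiltonian vector field is $X_{H_0} = (\partial_2H_0,\,-\partial_1H_0)$. Comparing the two expressions shows that $H_0$ and $W$ differ by a constant; since both are compactly supported in $\operatorname{int}({\mathbb{D}})$, they agree.\\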
\noindent{\bf Approximation results.} In general, the first return map of a disk-like surface of section need not be equal to the identity in any neighbourhood of $\partial{\mathbb{D}}$. Nevertheless, it will be convenient to assume that the Reeb flow in a small neighbourhood of the boundary orbit $\partial\Sigma$ has a specific simple form. More precisely, we want to assume that the local first return map of a small disk transverse to the orbit $\partial\Sigma$ is smoothly conjugate to a rotation. The main purpose of section \ref{section:from_reeb_flows_to_disk_like_surfaces_of_section_and_approximation_results} is to prove that we may approximate a given contact form with contact forms having this property. This is slightly subtle because we need to keep track of a certain number of higher order derivatives of the Reeb vector field in order to be able to apply the results from section \ref{section:a_positivity_criterion_for_hamiltonian_diffeomorphisms} to the first return map.\\
\noindent{\bf Organization.} The rest of the paper is structured as follows:\\
\indent In \S\ref{section:preliminaries} we review some preliminary material on area preserving disk maps (\S\ref{subsection:area_preserving_maps_of_the_disk}) and global surfaces of section (\S\ref{subsection:disk_like_global_surfaces_of_section}).\\
\indent The main results of section \ref{section:from_disk_like_surfaces_of_section_to_symplectic_embeddings}, namely the embedding result Theorem \ref{theorem:embedding_result} and Proposition \ref{prop:modify_hypersurface_such_that_return_map_is_generated_by_positive_hamiltonian} on modifications of star-shaped domains, are stated in \S\ref{subsection:embedding_results}. The construction of the domain $A(H)$ is explained in \S\ref{subsection:main_construction}. Proofs are given in \S\ref{subsection:proof_of_embedding_result} and \S\ref{subsection:proof_of_modification_result}. Note that the reader only interested in Theorems \ref{theorem:area_of_surface_of_section_bounds_cylindrical_capacity} and \ref{theorem:a_hopf_equals_cylindrical_capacity_for_convex_domains} on the cylindrical embedding capacity and not in the local version of the strong Viterbo conjecture (Theorem \ref{theorem:strong_viterbo_near_round_ball}) may skip \S\ref{section:a_positivity_criterion_for_hamiltonian_diffeomorphisms} and \S\ref{section:from_reeb_flows_to_disk_like_surfaces_of_section_and_approximation_results} and directly move on to \S\ref{section:proofs_of_the_main_results}, where we prove our main results.\\
\indent The main results of \S\ref{section:a_positivity_criterion_for_hamiltonian_diffeomorphisms} are Theorem \ref{theorem:positivity_criterion_for_radially_monotone_diffeomorphisms} and Corollary \ref{cor:positivity_criterion_for_diffeomorphisms_close_to_the_identity} guaranteeing the existence of non-negative Hamiltonians generating certain Hamiltonian diffeomorphisms. They are stated in \S\ref{subsection:statement_of_the_positivity_criterion}. In \S\ref{subsection:lifts_to_the_strip} and \S\ref{subsection:generalized_generating_functions} we review material from \cite{ABHS18} on generalized generating functions. The only result that is not also explicitly explained in \cite{ABHS18} is Proposition \ref{prop:correspondence_symplectomorphisms_generating_functions}. The proofs of the main results of \S\ref{section:a_positivity_criterion_for_hamiltonian_diffeomorphisms} are given in \S\ref{subsection:proof_of_the_positivity_criterion_for_radially_monotone_diffeomorphisms} and \S\ref{subsection:proof_of_the_positivity_criterion_for_diffeomorphisms_close_to_the_identity}.\\
\indent \S\ref{section:from_reeb_flows_to_disk_like_surfaces_of_section_and_approximation_results} is slightly technical in nature. The main result that is needed outside of this section is Proposition \ref{prop:contact_forms_in_C_3_neighbourhood_may_be_approx_by_forms_satisfying_criterion} on certain approximations of contact forms.\\
\indent In \S\ref{section:proofs_of_the_main_results} we give proofs of the main results of our paper.\\
\noindent{\bf Acknowledgments.} We are deeply indebted to Umberto Hryniewicz, whose talk on \cite{HHR21} inspired this paper. We also thank Michael Hutchings for his suggestion to prove the strong Viterbo conjecture near the round ball and Julian Chaidez for countless stimulating discussions.
\section{Preliminaries}
\label{section:preliminaries}
\subsection{Area preserving maps of the disk}
\label{subsection:area_preserving_maps_of_the_disk}
In this section, we recall some basic concepts and results concerning area preserving diffeomorphisms of the disk. Most of the material is taken from Abbondandolo-Bramham-Hryniewicz-Salom\~{a}o \cite[sections 2.1 and 2.2]{ABHS18}. Let $\omega$ be a smooth $2$-form on the closed unit disk ${\mathbb{D}}\subset {\mathbb{C}}$. We assume that $\omega$ is positive in the interior $\operatorname{int}({\mathbb{D}})$. On the boundary, $\omega$ is allowed to vanish. We let $\operatorname{Diff}^+({\mathbb{D}})$ denote the group of orientation preserving diffeomorphisms of ${\mathbb{D}}$. Let
\begin{equation*}
\pi:\widetilde{\operatorname{Diff}}({\mathbb{D}})\rightarrow \operatorname{Diff}^+({\mathbb{D}})\qquad \widetilde{\phi}\mapsto\phi
\end{equation*}
be the universal cover. We define $\operatorname{Diff}({\mathbb{D}},\omega)\subset\operatorname{Diff}^+({\mathbb{D}})$ to be the subgroup of all diffeomorphisms preserving $\omega$. Let $\widetilde{\operatorname{Diff}}({\mathbb{D}},\omega)$ denote the preimage of $\operatorname{Diff}({\mathbb{D}},\omega)$ under the universal covering map $\pi$. If $\omega$ is nowhere vanishing on the boundary $\partial{\mathbb{D}}$, then this agrees with the actual universal cover of $\operatorname{Diff}({\mathbb{D}},\omega)$. However, in general it need not agree with the universal cover (see Remark 2.1 in \cite{ABHS18}). Elements $\widetilde{\phi}\in\widetilde{\operatorname{Diff}}({\mathbb{D}},\omega)$ can be represented by arcs $(\phi_t)_{t\in [0,1]}$ in $\operatorname{Diff}^+({\mathbb{D}})$ which start at the identity and end at $\phi_1=\pi(\widetilde{\phi})\in\operatorname{Diff}({\mathbb{D}},\omega)$. Two such arcs are equivalent in $\widetilde{\operatorname{Diff}}({\mathbb{D}},\omega)$ if they are isotopic in $\operatorname{Diff}^+({\mathbb{D}})$ with fixed end points.\\
Consider a primitive $\lambda$ of $\omega$ and an element $\widetilde{\phi} = [(\phi_t)_{t\in [0,1]}]\in\widetilde{\operatorname{Diff}}({\mathbb{D}},\omega)$. Then there exists a unique smooth function $\sigma_{\widetilde{\phi},\lambda}\in C^\infty({\mathbb{D}},{\mathbb{R}})$ such that
\begin{equation}
\label{eq:action_on_disk_first_defining_identity}
\phi^*\lambda - \lambda = d\sigma_{\widetilde{\phi},\lambda}
\end{equation}
and
\begin{equation}
\label{eq:action_on_disk_second_defining_identity}
\sigma_{\widetilde{\phi},\lambda}(z) = \int_{\{t\mapsto\phi_t(z)\}}\lambda
\end{equation}
for all $z\in\partial{\mathbb{D}}$ (see \cite[section 2.1]{ABHS18}). We call $\sigma_{\widetilde{\phi},\lambda}$ the {\it action} of $\widetilde{\phi}$ with respect to $\lambda$. We recall the following basic result \cite[Lemma 2.2]{ABHS18}.
\begin{lem}
\label{lem:basic_properties_of_action}
Let $\widetilde{\phi},\widetilde{\psi}\in \widetilde{\operatorname{Diff}}({\mathbb{D}},\omega)$. Let $\lambda$ be a primitive of $\omega$ and let $u$ be a smooth real-valued function on ${\mathbb{D}}$. Then:
\begin{enumerate}
\item $\sigma_{\widetilde{\phi},\lambda+du} = \sigma_{\widetilde{\phi},\lambda} + u\circ\phi - u$
\item $\sigma_{\widetilde{\psi}\circ\widetilde{\phi},\lambda} = \sigma_{\widetilde{\psi},\lambda}\circ\phi + \sigma_{\widetilde{\phi},\lambda}$
\item $\sigma_{\widetilde{\phi}^{-1},\lambda} = -\sigma_{\widetilde{\phi},\lambda}\circ\phi^{-1}$
\end{enumerate}
\end{lem}
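For instance, item (3) follows from item (2): applying the latter to the pair $(\widetilde{\phi}^{-1},\widetilde{\phi})$ and using $\sigma_{\widetilde{\operatorname{id}},\lambda}=0$ gives
\begin{equation*}
0 = \sigma_{\widetilde{\phi}^{-1}\circ\widetilde{\phi},\lambda} = \sigma_{\widetilde{\phi}^{-1},\lambda}\circ\phi + \sigma_{\widetilde{\phi},\lambda},
\end{equation*}
and composing with $\phi^{-1}$ yields the stated identity.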
In particular, item (1) in Lemma \ref{lem:basic_properties_of_action} implies that the value $\sigma_{\widetilde{\phi},\lambda}(p)$ at a fixed point $p$ of $\phi$ is independent of the choice of primitive $\lambda$ and we will occasionally denote this value by $\sigma_{\widetilde{\phi}}(p)$.\\
The {\it Calabi invariant} $\operatorname{Cal}(\widetilde{\phi})$ is defined to be the integral
\begin{equation*}
\operatorname{Cal}(\widetilde{\phi}) = \int_{\mathbb{D}} \sigma_{\widetilde{\phi},\lambda}\cdot \omega.
\end{equation*}
It follows from item (1) in Lemma \ref{lem:basic_properties_of_action} that this is independent of the choice of primitive $\lambda$ and from item (2) that
\begin{equation*}
\operatorname{Cal}:\widetilde{\operatorname{Diff}}({\mathbb{D}},\omega)\rightarrow{\mathbb{R}}
\end{equation*}
is a group homomorphism.\\
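Indeed, for the latter claim, item (2) of Lemma \ref{lem:basic_properties_of_action} together with the fact that $\phi$ preserves $\omega$ gives
\begin{equation*}
\operatorname{Cal}(\widetilde{\psi}\circ\widetilde{\phi}) = \int_{\mathbb{D}}\big(\sigma_{\widetilde{\psi},\lambda}\circ\phi\big)\,\omega + \int_{\mathbb{D}}\sigma_{\widetilde{\phi},\lambda}\,\omega = \operatorname{Cal}(\widetilde{\psi}) + \operatorname{Cal}(\widetilde{\phi}),
\end{equation*}
where the first integral is computed via the substitution $\phi$, using $\phi^*\omega=\omega$.\\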
Let $(\phi_t)_{t\in [0,1]}$ be an arc in $\operatorname{Diff}({\mathbb{D}},\omega)$. Let $X_t$ be the vector field generating $\phi_t$. Since $\phi_t$ preserves $\omega$, the interior product $\iota_{X_t}\omega$ is a closed $1$-form. Since ${\mathbb{D}}$ is simply connected, there exists a smooth function $H_t$ on ${\mathbb{D}}$, unique up to addition of a constant, such that $dH_t=\iota_{X_t}\omega$. The vector field $X_t$ is tangent to the boundary $\partial{\mathbb{D}}$. This implies that $dH_t$ vanishes on tangent vectors of $\partial{\mathbb{D}}$. Thus $H_t$ is constant on the boundary. We will always use the normalization $H_t|_{\partial{\mathbb{D}}}=0$. This uniquely specifies $H_t$. Conversely, if we are given a family of smooth functions $H_t$ vanishing on the boundary, there exists a unique vector field $X_{H_t}$ in the interior $\operatorname{int}({\mathbb{D}})$ satisfying $\iota_{X_{H_t}}\omega = dH_t$. If $\omega$ does not vanish on $\partial{\mathbb{D}}$, then $X_{H_t}$ smoothly extends to a vector field on the closed disk which is tangent to the boundary. Note that this is not necessarily true if $\omega$ vanishes on the boundary. So while every arc in $\operatorname{Diff}({\mathbb{D}},\omega)$ is generated by a family of Hamiltonians vanishing on the boundary $\partial {\mathbb{D}}$, not every family of such Hamiltonians generates an arc in $\operatorname{Diff}({\mathbb{D}},\omega)$. The following result \cite[Proposition 2.6]{ABHS18} expresses the action of $\widetilde{\phi}= [(\phi_t)_{t\in [0,1]}]$ in terms of the Hamiltonian $H_t$.
\begin{lem}
\label{lem:action_in_terms_of_hamiltonian}
Suppose that $(\phi_t)_{t\in [0,1]}$ is an arc in $\operatorname{Diff}({\mathbb{D}},\omega)$ generated by a family of Hamiltonians $H_t$ vanishing on the boundary $\partial {\mathbb{D}}$. Let $\widetilde{\phi}\in \widetilde{\operatorname{Diff}}({\mathbb{D}},\omega)$ be the element represented by the arc $(\phi_t)_{t\in [0,1]}$. Then
\begin{equation*}
\sigma_{\widetilde{\phi},\lambda}(z) = \int_{\{t\mapsto\phi_t(z)\}}\lambda + \int_0^1 H_t(\phi_t(z))dt
\end{equation*}
for all $z\in {\mathbb{D}}$.
\end{lem}
\subsection{Global surfaces of section}
\label{subsection:disk_like_global_surfaces_of_section}
Let $Y^3$ be a closed oriented $3$-manifold equipped with a nowhere vanishing vector field $R$. Let $\phi^t$ denote the flow generated by $R$. Let $\Sigma\subset Y$ be an embedded compact surface, possibly with boundary. We call $\Sigma$ a {\it global surface of section} for the flow $\phi^t$ if the boundary $\partial\Sigma$ consists of simple periodic orbits of $\phi^t$, the vector field $R$ is transverse to $\operatorname{int}(\Sigma)$ and every trajectory of $\phi^t$ which is not contained in $\partial\Sigma$ meets $\operatorname{int}(\Sigma)$ infinitely often forward and backward in time. We will always orient surfaces of section such that $R$ is positively transverse to $\Sigma$, i.e. the orientation of $R$ followed by the orientation of $\Sigma$ agrees with the orientation of $Y$. Consider a boundary orbit $\gamma$ of $\Sigma$. We call $\gamma$ {\it positive} if the orientation of $\gamma$ given by $R$ agrees with the boundary orientation of $\Sigma$ and {\it negative} otherwise. We define the {\it first return time} and {\it first return map} by
\begin{equation}
\label{eq:first_return_time}
\sigma:\operatorname{int}(\Sigma)\rightarrow{\mathbb{R}}_{>0}\qquad \sigma(p)\coloneqq \inf\{t>0\mid \phi^t(p)\in \Sigma\}
\end{equation}
and
\begin{equation}
\label{eq:first_return_map}
\phi:\operatorname{int}(\Sigma)\rightarrow \operatorname{int}(\Sigma)\qquad \phi(p)\coloneqq \phi^{\sigma(p)}(p).
\end{equation}
Studying the dynamics of the flow $\phi^t$ is equivalent to studying the discrete dynamics of the diffeomorphism $\phi$. Let $\Sigma'$ be a second global surface of section with the same boundary orbits as $\Sigma$, i.e. $\partial\Sigma'=\partial\Sigma$. Then the respective first return maps $\phi$ and $\phi'$ are smoothly conjugate. To see this, we define a transfer map $\psi:\operatorname{int}(\Sigma)\rightarrow\operatorname{int}(\Sigma')$ as follows. Let $z_0\in \operatorname{int}(\Sigma)$ and let $\tau(z_0)$ denote a real number such that $\phi^{\tau(z_0)}(z_0)\in\operatorname{int}(\Sigma')$. Then there exists a unique smooth extension of $\tau$ to a real-valued function on $\operatorname{int}(\Sigma)$ such that $\phi^{\tau(z)}(z)\in\operatorname{int}(\Sigma')$ for all $z\in\operatorname{int}(\Sigma)$. We define $\psi(z)\coloneqq \phi^{\tau(z)}(z)$. This is a diffeomorphism. The first return maps of $\Sigma$ and $\Sigma'$ are related via $\phi = \psi^{-1}\circ\phi'\circ\psi$.\\
In general, the first return time $\sigma$ and map $\phi$ need not smoothly extend to the boundary $\partial\Sigma$. In order to describe the boundary behaviour, we recall a blow-up construction due to Fried \cite{Fr82}. Our exposition follows Florio-Hryniewicz \cite{FH21}. We define the vector bundle $\xi\coloneqq TY/\langle R\rangle$ on $Y$, where $\langle R\rangle$ is the subbundle of $TY$ spanned by $R$. Moreover, we define the circle bundle ${\mathbb{P}}_+\xi\coloneqq (\xi\setminus 0)/{\mathbb{R}}_+$. The linearization of $\phi^t$ induces a lift $d\phi^t$ of the flow $\phi^t$ to $\xi$. This lift $d\phi^t$ descends to the bundle ${\mathbb{P}}_+\xi$. Now consider a simple closed orbit $\gamma$ of $\phi^t$. Then the torus ${\mathbb{T}}_\gamma\coloneqq {\mathbb{P}}_+\xi|_\gamma$ is invariant under the projective linearized flow $d\phi^t$. As a set, the blow-up of $Y$ at $\gamma$ is equal to the disjoint union $\overline{Y}\coloneqq (Y\setminus\gamma) \sqcup {\mathbb{T}}_\gamma$. It carries the structure of a compact smooth manifold with boundary ${\mathbb{T}}_\gamma$. The natural projection $\pi:\overline{Y}\rightarrow Y$ is smooth. The pullback of the restriction of the vector field $R$ to $Y\setminus\gamma$ is a smooth vector field $\overline{R}$ on the interior of $\overline{Y}$. It smoothly extends to all of $\overline{Y}$ (see e.g. \cite[Lemma A.1]{FH21}). The resulting flow $\overline{\phi}^t$ on $\overline{Y}$ lifts the flow $\phi^t$ and its restriction to the boundary ${\mathbb{T}}_\gamma$ agrees with the projective linearized flow $d\phi^t$. Consider a surface of section $\Sigma\subset Y$. Let $\overline{Y}$ be the simultaneous blow-up of $Y$ at all the boundary orbits of $\Sigma$. The surface $\Sigma$ lifts to a properly embedded surface $\overline{\Sigma}\subset\overline{Y}$ with boundary $\partial\overline{\Sigma}$ contained in $\partial\overline{Y}$. We recall the following definition from \cite{FH21}.
\begin{definition}
\label{def:non_degenerate_surface_of_section}
The global surface of section $\Sigma$ is called {\it $\partial$-strong} if the lifted surface $\overline{\Sigma}\subset \overline{Y}$ is a global surface of section for the lifted flow $\overline{\phi}^t$, i.e. if $\overline{R}$ is transverse to $\overline{\Sigma}$ and all trajectories of $\overline{\phi}^t$ meet $\overline{\Sigma}$ infinitely often forward and backward in time.
\end{definition}
Since $\Sigma$ is a surface of section, $\overline{R}$ is clearly transverse to $\overline{\Sigma}$ in the interior of $\overline{Y}$. Moreover, all trajectories in the interior meet $\overline{\Sigma}$ forward and backward in time. Thus the condition for being $\partial$-strong is equivalent to requiring that $\overline{\Sigma}\cap{\mathbb{T}}_\gamma$ is a surface of section for the projective linearized flow $d\phi^t$ on ${\mathbb{T}}_\gamma$ for all boundary orbits $\gamma$ of $\Sigma$.
\begin{lem}
Suppose that $\Sigma\subset Y$ is a $\partial$-strong global surface of section. Then the first return time extends to a smooth function $\sigma:\Sigma\rightarrow {\mathbb{R}}_{>0}$ and the first return map extends to a diffeomorphism $\phi:\Sigma\rightarrow\Sigma$. If $\Sigma'$ is a second $\partial$-strong global surface of section with the same boundary orbits, then any transfer map extends to a diffeomorphism $\psi:\Sigma\rightarrow\Sigma'$.
\end{lem}
\begin{proof}
We simply observe that $\Sigma$ being $\partial$-strong implies that the first return time and map of the lifted surface $\overline{\Sigma}$ are also defined on the boundary $\partial\overline{\Sigma}$ and smooth. The same argument applies to a transfer map $\psi$.
\end{proof}
In this paper we will be mainly concerned with {\it disk-like global surfaces of section}, i.e. the case that $\Sigma$ is diffeomorphic to the closed unit disk ${\mathbb{D}}$. The manifold $Y$ is then necessarily diffeomorphic to $S^3$. Suppose that $\Sigma$ is a $\partial$-strong disk-like global surface of section. It will be useful to lift the first return map $\phi:\Sigma\rightarrow\Sigma$ to an element $\widetilde{\phi}\in\widetilde{\operatorname{Diff}}(\Sigma)$ of the universal cover of the space $\operatorname{Diff}^+(\Sigma)$ of orientation preserving diffeomorphisms of $\Sigma$. Such a lift depends on a choice of trivialization. Let $\pi:\overline{Y}\rightarrow Y$ be the blow-up of $Y$ at the boundary orbit of $\Sigma$. A {\it trivialization} of $\overline{Y}$ is a diffeomorphism $\tau:{\mathbb{R}}/{\mathbb{Z}}\times \Sigma\rightarrow \overline{Y}$ such that the composition
\begin{equation*}
\Sigma\cong 0\times\Sigma\subset{\mathbb{R}}/{\mathbb{Z}}\times \Sigma \overset{\tau}{\rightarrow} \overline{Y} \overset{\pi}{\rightarrow} Y
\end{equation*}
is simply the inclusion of $\Sigma$. Moreover, we require that $\iota_{\tau^*\overline{R}}dt>0$, where $t$ denotes the coordinate on ${\mathbb{R}}/{\mathbb{Z}}$. Since $\operatorname{Diff}^+(\Sigma)$ is connected, the space of trivializations is non-empty. Let $\mathcal{T}$ denote the set of isotopy classes of trivializations of $\overline{Y}$. It is an affine space over $\pi_1(\operatorname{Diff}^+(\Sigma))\cong{\mathbb{Z}}$. We exhibit an explicit bijection $\on{deg}:\mathcal{T}\rightarrow{\mathbb{Z}}$ as follows. Let $\tau$ be a trivialization and $p\in\partial\Sigma$ a point in the boundary. Then the degree $d$ of the map
\begin{equation*}
S^1\cong {\mathbb{R}}/{\mathbb{Z}} \rightarrow \partial\Sigma\cong S^1\qquad t\mapsto \pi(\tau(t,p))
\end{equation*}
is independent of the choice of $p$ and only depends on the isotopy class of $\tau$. Here $\partial\Sigma$ is oriented as the boundary of $\Sigma$. We define the {\it degree} of $\tau$ to be $\on{deg}(\tau)\coloneqq d$. Given a trivialization $\tau$, there is a natural lift $\widetilde{\phi}$ of $\phi$ to $\widetilde{\operatorname{Diff}}(\Sigma)$ constructed as follows. Let $X$ denote the unique (positive) rescaling of the pullback vector field $\tau^*\overline{R}$ on ${\mathbb{R}}/{\mathbb{Z}}\times \Sigma$ such that $\iota_Xdt=1$. The flow of $X$ yields an arc in $\operatorname{Diff}^+(\Sigma)$ from the identity to $\phi$. Clearly, the element $\widetilde{\phi}\in\widetilde{\operatorname{Diff}}(\Sigma)$ represented by this arc only depends on the isotopy class of $\tau$. Let us explain the dependence of the lift on the choice of trivialization. Consider integers $d$ and $e$ and let $\widetilde{\phi}_d$ and $\widetilde{\phi}_e$ denote the lifts of $\phi$ with respect to trivializations of degrees $d$ and $e$, respectively. Let $\widetilde{\rho}\in\widetilde{\operatorname{Diff}}(\Sigma)$ be one full positive rotation of $\Sigma$. Then the lifts $\widetilde{\phi}_d$ and $\widetilde{\phi}_e$ are related by the identity
\begin{equation}
\label{eq:relationship_lift_trivialization}
\widetilde{\rho}^{e-d}\circ \widetilde{\phi}_e = \widetilde{\phi}_d.
\end{equation}
Let us now specialize our discussion of global surfaces of section to Reeb flows. Let $\alpha$ be a contact form on $Y$ and let $R$ be the induced Reeb vector field. We abbreviate $\omega\coloneqq d\alpha|_\Sigma$. This is a closed $2$-form on $\Sigma$. It vanishes on the boundary $\partial\Sigma$ and is a positive area form in the interior $\operatorname{int}(\Sigma)$. Note that by Stokes' theorem $\Sigma$ must possess at least one positive boundary orbit. In particular, if $\Sigma$ is a disk, then its boundary orbit must be positive. Let $\lambda$ denote the restriction of $\alpha$ to $\Sigma$. This defines a primitive of $\omega$. The first return time $\sigma$ and map $\phi$ satisfy the identity
\begin{equation}
\label{eq:first_return_time_identity}
\phi^*\lambda - \lambda = d\sigma.
\end{equation}
This implies that $\phi$ preserves the area form $\omega$. Similarly, one can show that a transfer map $\psi$ between two global surfaces of section $\Sigma$ and $\Sigma'$ with the same boundary orbits is area preserving.
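In detail, since $\omega = d\lambda$, applying the exterior derivative to \eqref{eq:first_return_time_identity} yields
\begin{equation*}
\phi^*\omega = d(\phi^*\lambda) = d(\lambda + d\sigma) = d\lambda = \omega.
\end{equation*}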
\begin{lem}
\label{lem:action_equals_first_return_time}
Suppose that $\Sigma$ is a $\partial$-strong disk-like global surface of section and let $\widetilde{\phi}\in\widetilde{\operatorname{Diff}}(\Sigma,\omega)$ denote the lift of the first return map $\phi$ with respect to a trivialization of degree $0$. Then the action $\sigma_{\widetilde{\phi},\lambda}$ agrees with the first return time $\sigma$.
\end{lem}
\begin{proof}
We need to check that the first return time $\sigma$ satisfies \eqref{eq:action_on_disk_first_defining_identity} and \eqref{eq:action_on_disk_second_defining_identity}. The first identity is true by \eqref{eq:first_return_time_identity}. Let $(\phi_t)_{t\in [0,1]}$ be any arc in $\operatorname{Diff}^+(\Sigma)$ representing $\widetilde{\phi}$. Let $z\in\partial\Sigma$ be a point in the boundary and let $\gamma:[0,1]\rightarrow\partial\Sigma$ be the path defined by $\gamma(t)\coloneqq \phi_t(z)$. Let $\delta:[0,\sigma(z)]\rightarrow\overline{Y}$ be the trajectory of $\overline{\phi}^t$ starting at $z\in\partial \overline{\Sigma}\cong\partial\Sigma$. We can express $\sigma_{\widetilde{\phi},\lambda}(z)$ and $\sigma(z)$ as
\begin{equation*}
\sigma_{\widetilde{\phi},\lambda}(z) = \int_\gamma\lambda\qquad\text{and}\qquad \sigma(z) = \int_\delta \pi^*\alpha = \int_{\tau^{-1}\circ\delta} \tau^*\pi^*\alpha.
\end{equation*}
In order to see that these two numbers agree, we regard $\gamma$ as a path in ${\mathbb{R}}/{\mathbb{Z}}\times\Sigma$ via the inclusion $\Sigma\cong 0\times\Sigma\subset {\mathbb{R}}/{\mathbb{Z}}\times\Sigma$ and form the concatenation $\epsilon\coloneqq (\tau^{-1}\circ\delta) \# \overline{\gamma}$. This defines a loop in ${\mathbb{R}}/{\mathbb{Z}}\times\partial\Sigma$ which is homotopic to the loop ${\mathbb{R}}/{\mathbb{Z}}\times z$. The restriction $\beta\coloneqq (\tau^*\pi^*\alpha)|_{{\mathbb{R}}/{\mathbb{Z}}\times\partial\Sigma}$ is the pullback of the restriction of $\alpha$ to $\partial\Sigma$. Hence $\beta$ is a closed $1$-form. Since $\tau$ has degree $0$, the loop ${\mathbb{R}}/{\mathbb{Z}}\times z$ is mapped to a contractible loop in $\partial\Sigma$ by $\pi\circ\tau$. Thus the integral of $\beta$ over the loop ${\mathbb{R}}/{\mathbb{Z}}\times z$ vanishes. Since $\epsilon$ is homotopic to ${\mathbb{R}}/{\mathbb{Z}}\times z$, we obtain
\begin{equation*}
0 = \int_\epsilon \beta = \int_{\tau^{-1}\circ\delta} \tau^*\pi^*\alpha - \int_{\gamma} \lambda
\end{equation*}
where we have used that the restriction of $\beta$ to $0\times\partial\Sigma$ is equal to $\lambda$. This concludes the proof.
\end{proof}
\section{From disk-like surfaces of section to symplectic embeddings}
\label{section:from_disk_like_surfaces_of_section_to_symplectic_embeddings}
\subsection{Embedding results}
\label{subsection:embedding_results}
The following theorem says, roughly speaking, that if the boundary of a star-shaped domain $X\subset{\mathbb{R}}^4$ admits a disk-like global surface of section of symplectic area $a$ such that the lift of the first return map with respect to a trivialization of degree $0$ can be generated by a positive Hamiltonian, then the domain $X$ can be symplectically embedded into the cylinder $Z(a)$. The second part of the theorem states that if the lift of the first return map with respect to a trivialization of degree $1$ can still be generated by a positive Hamiltonian, then the ball $B(a)$ embeds into $X$.
\begin{theorem}
\label{theorem:embedding_result}
Let $X\subset{\mathbb{R}}^4$ be a star-shaped domain. Let $\Sigma\subset\partial X$ be a $\partial$-strong disk-like global surface of section of the natural Reeb flow on $\partial X$. Assume that the local first return map of a small disk transverse to the boundary orbit $\partial\Sigma$ is smoothly conjugated to a rotation. Set $\omega\coloneqq \omega_0|_{\Sigma}$ and let
\begin{equation*}
a\coloneqq \int_\Sigma \omega
\end{equation*}
be the symplectic area of the surface of section.
\begin{enumerate}
\item Let $\widetilde{\phi}_0\in \widetilde{\operatorname{Diff}}(\Sigma,\omega)$ be the lift of the first return map with respect to a trivialization of degree $0$. Suppose that there exists a Hamiltonian $H:{\mathbb{R}}/{\mathbb{Z}}\times\Sigma\rightarrow{\mathbb{R}}$ with the following properties:
\begin{enumerate}
\item $H$ is strictly positive in the interior $\operatorname{int}(\Sigma)$ and vanishes on the boundary $\partial\Sigma$.
\item $H$ is autonomous in some neighbourhood of $\partial\Sigma$.
\item The Hamiltonian vector field $X_{H_t}$ defined by $\iota_{X_{H_t}}\omega = dH_t$ in the interior $\operatorname{int}(\Sigma)$ smoothly extends to the closed disk $\Sigma$ and is tangent to $\partial \Sigma$.
\item The arc $(\phi_H^t)_{t\in [0,1]}$ represents $\widetilde{\phi}_0$.
\end{enumerate}
\noindent Then $X\overset{s}{\hookrightarrow} Z(a)$.
\item Let $\widetilde{\phi}_1\in \widetilde{\operatorname{Diff}}(\Sigma,\omega)$ be the lift of the first return map with respect to a trivialization of degree $1$. Assume that there exists a Hamiltonian $G:{\mathbb{R}}/{\mathbb{Z}}\times\Sigma\rightarrow{\mathbb{R}}$ satisfying properties (a)-(c) above such that the arc $(\phi_G^t)_{t\in [0,1]}$ represents $\widetilde{\phi}_1$. Then $B(a)\overset{s}{\hookrightarrow}X\overset{s}{\hookrightarrow} Z(a)$.
\end{enumerate}
\end{theorem}
Given a star-shaped domain $X$ and a disk-like surface of section $\Sigma\subset\partial X$, we do not know whether it is always possible to generate the lift $\widetilde{\phi}_0$ by a Hamiltonian which vanishes on the boundary $\partial\Sigma$ and is positive in the interior. The following proposition says that we may always symplectically embed $X$ into a larger domain satisfying the hypotheses of Theorem \ref{theorem:embedding_result}.
\begin{prop}
\label{prop:modify_hypersurface_such_that_return_map_is_generated_by_positive_hamiltonian}
Let $X\subset{\mathbb{R}}^4$ be a star-shaped domain. Let $\Sigma\subset\partial X$ be a $\partial$-strong disk-like surface of section of the natural Reeb flow on $\partial X$. Then there exist a star-shaped domain $X'$ and a $\partial$-strong disk-like surface of section $\Sigma'$ of the natural Reeb flow on the boundary $\partial X'$ such that $\Sigma$ and $\Sigma'$ have the same symplectic areas, $X$ symplectically embeds into $X'$ and the tuple $(X',\Sigma')$ satisfies all hypotheses of the first assertion of Theorem \ref{theorem:embedding_result}.
\end{prop}
\subsection{Main construction}
\label{subsection:main_construction}
Given a $1$-periodic Hamiltonian
\begin{equation*}
H:{\mathbb{R}}/{\mathbb{Z}}\times{\mathbb{D}}\rightarrow{\mathbb{R}}
\end{equation*}
which is positive in the interior $\operatorname{int}({\mathbb{D}})$ and vanishes on the boundary $\partial{\mathbb{D}}$, we construct a domain
\begin{equation*}
A(a,H)\subset{\mathbb{C}}^2,
\end{equation*}
where $a>0$ denotes an area parameter. We show that the characteristic foliation on the boundary $\partial A(a,H)$ possesses a disk-like surface of section with first return map given by $\phi_H^1$.
\begin{lem}
\label{lem:characteristic_foliation_on_graph}
Let $(M,\omega)$ be a symplectic manifold. Let $\widetilde{M}\coloneqq {\mathbb{R}}_s\times ({\mathbb{R}}/{\mathbb{Z}})_t \times M$ denote time-energy extended phase space equipped with the symplectic form $\widetilde{\omega}\coloneqq ds\wedge dt + \omega$. Consider a periodic Hamiltonian
\begin{equation*}
H:{\mathbb{R}}/{\mathbb{Z}}\times M\rightarrow{\mathbb{R}}
\end{equation*}
and let
\begin{equation*}
\Gamma(H)\coloneqq \{(H(t,p),t,p)\mid (t,p)\in{\mathbb{R}}/{\mathbb{Z}}\times M\} \subset\widetilde{M}
\end{equation*}
denote its graph inside time-energy extended phase space $\widetilde{M}$. Then the characteristic foliation on $\Gamma(H)$ induced by the symplectic form $\widetilde{\omega}$ is spanned by the vector field
\begin{equation*}
X_{H_t}(p) + \partial_t + \partial_tH(t,p)\cdot \partial_s.
\end{equation*}
\end{lem}
\begin{proof}
We define the autonomous Hamiltonian $\widetilde{H}$ on time-energy extended phase space by
\begin{equation*}
\widetilde{H}: \widetilde{M} \rightarrow{\mathbb{R}} \qquad \widetilde{H}(s,t,p)\coloneqq H(t,p)-s.
\end{equation*}
The graph $\Gamma(H)$ is given by the regular level set $\widetilde{H}^{-1}(0)$. Thus the characteristic foliation on $\Gamma(H)$ is spanned by the restriction of the Hamiltonian vector field $X_{\widetilde{H}}$ to $\Gamma(H)$. We compute
\begin{equation*}
d\widetilde{H}(s,t,p) = d H_t(p) + \partial_t H(t,p)\cdot dt - ds = \iota_{X_{H_t} + \partial_t + \partial_tH\cdot \partial_s}(ds\wedge dt+\omega)
\end{equation*}
and conclude that
\begin{equation*}
X_{\widetilde{H}} = X_{H_t} + \partial_t + \partial_tH \cdot \partial_s.
\end{equation*}
\end{proof}
\begin{construction}
\label{construction:spun_hypersurface}
Consider ${\mathbb{C}}$ equipped with the standard symplectic form $\omega_0 = dx\wedge dy$. Let
\begin{equation*}
\widetilde{{\mathbb{C}}}\coloneqq {\mathbb{R}}_s\times({\mathbb{R}}/{\mathbb{Z}})_t\times{\mathbb{C}} \qquad \widetilde{\omega_0}\coloneqq ds\wedge dt+\omega_0
\end{equation*}
denote time-energy extended phase space and abbreviate
\begin{equation*}
\widetilde{{\mathbb{C}}}_+ \coloneqq {\mathbb{R}}_{\geq 0}\times{\mathbb{R}}/{\mathbb{Z}}\times{\mathbb{C}} \qquad\text{and}\qquad \widetilde{{\mathbb{C}}}_0 \coloneqq \{0\}\times{\mathbb{R}}/{\mathbb{Z}}\times{\mathbb{C}}.
\end{equation*}
Consider the map
\begin{equation*}
\Phi : \widetilde{{\mathbb{C}}}_+\rightarrow {\mathbb{C}}^2 \qquad \Phi(s,t,z)\coloneqq \left(z\enspace,\enspace\sqrt{\frac{s}{\pi}}\cdot e^{2\pi i t}\right).
\end{equation*}
$\Phi$ restricts to a diffeomorphism between $\widetilde{{\mathbb{C}}}_+\setminus\widetilde{{\mathbb{C}}}_0$ and ${\mathbb{C}}^2 \setminus ({\mathbb{C}}\times 0)$. Moreover $\Phi^*\omega_0 = \widetilde{\omega_0}$. For $a>0$, let $B^2(a)\subset{\mathbb{C}}$ denote the closed $2$-dimensional disk of area $a$. Let
\begin{equation*}
H:{\mathbb{R}}/{\mathbb{Z}}\times B^2(a) \rightarrow{\mathbb{R}}
\end{equation*}
be a smooth function. Assume that:
\begin{enumerate}
\item $H$ is strictly positive in the interior $\operatorname{int}(B^2(a))$.
\item There exists a constant $C>0$ such that in some neighbourhood of $\partial B^2(a)$ the function $H$ is given by
\begin{equation}
\label{eq:special_form_of_H_near_boundary}
H(t,z) = C\cdot (a-\pi|z|^2).
\end{equation}
\end{enumerate}
Let
\begin{equation*}
\Gamma_-(H)\coloneqq \{(s,t,z)\in \widetilde{{\mathbb{C}}}_+ \mid z \in B^2(a) \enspace \text{and}\enspace 0\leq s\leq H(t,z)\}
\end{equation*}
denote the subgraph of $H$. We define the subset $A(a,H)\subset {\mathbb{C}}^2$ by
\begin{equation*}
A(a,H) \coloneqq \Phi(\Gamma_-(H)).
\end{equation*}
\end{construction}
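For completeness, let us verify the identity $\Phi^*\omega_0 = \widetilde{\omega_0}$ used above. Writing the second coordinate of $\Phi$ in polar form $\rho e^{2\pi i t}$ with $\rho = \sqrt{s/\pi}$, the area form of the second ${\mathbb{C}}$-factor pulls back to
\begin{equation*}
\rho\, d\rho\wedge d(2\pi t) = \frac{1}{2}\,d(\rho^2)\wedge 2\pi\, dt = \frac{1}{2}\,d\Big(\frac{s}{\pi}\Big)\wedge 2\pi\, dt = ds\wedge dt,
\end{equation*}
while the first factor contributes $\omega_0$ on ${\mathbb{C}}$. Hence $\Phi^*\omega_0 = ds\wedge dt + \omega_0 = \widetilde{\omega_0}$.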
\begin{lem}[Basic properties]
\label{lem:basic_properties_of_spun_hypersurfaces}
The set $A(a,H)\subset{\mathbb{C}}^2$ defined in Construction \ref{construction:spun_hypersurface} satisfies the following basic properties:
\begin{enumerate}
\item $A(a,H)$ has smooth boundary and is diffeomorphic to the closed ball $D^4$.
\item $A(a,H)\subset Z(a)$.
\item If $H(t,z)\geq a-\pi|z|^2$, then $B(a)\subset A(a,H)$.
\item The map
\begin{equation*}
f:B^2(a)\rightarrow \partial A(a,H) \quad z\mapsto \Phi(H(0,z),0,z)
\end{equation*}
is a parametrization of a disk-like surface of section of the characteristic foliation on $\partial A(a,H)$. We have $f^*\omega_0 = \omega_0$. Consider the lift $\widetilde{\phi}_0$ of the first return map with respect to a trivialization of degree $0$. We regard $\widetilde{\phi}_0$ as an element of $\widetilde{\operatorname{Diff}}(B^2(a),\omega_0)$ via the parametrization $f$. It is represented by $(\phi_H^t)_{t\in [0,1]}$.
\end{enumerate}
\end{lem}
\begin{remark}
The parametrization $f$ in item (4) of Lemma \ref{lem:basic_properties_of_spun_hypersurfaces} is only smooth in the interior $\operatorname{int}({\mathbb{D}})$. At the boundary $\partial{\mathbb{D}}$, the radial derivative $\partial_rf$ blows up. Since $H$ has the special form \eqref{eq:special_form_of_H_near_boundary} and $\phi_H^t$ is a rotation near the boundary, this does not cause problems.
\end{remark}
\begin{proof}
Clearly, the boundary $\partial A(a,H)$ is smooth away from the circle $\partial B^2(a)\times\{0\}$. Near this circle, the Hamiltonian $H$ has the special form \eqref{eq:special_form_of_H_near_boundary}. Thus $\partial A(a,H)$ can be described by the equation
\begin{equation*}
C|z_1|^2 + |z_2|^2 = \frac{Ca}{\pi}
\end{equation*}
near $\partial B^2(a)\times\{0\}$. The solution set of this equation is the boundary of an ellipsoid and in particular smooth. Let $G$ denote the Hamiltonian which is given by formula \eqref{eq:special_form_of_H_near_boundary} on the entire disk $B^2(a)$. The set $A(a,G)$ is an ellipsoid and in particular diffeomorphic to the closed ball $D^4$. Clearly there exists a diffeomorphism
\begin{equation*}
\psi:\Gamma_-(H)\rightarrow\Gamma_-(G)
\end{equation*}
between the subgraphs of $H$ and $G$. In fact, we can choose $\psi$ to agree with the identity map on all points $(s,t,z)\in \Gamma_-(H)$ such that $z$ is close to the boundary $\partial B^2(a)$ or $s$ is close to $0$. If $\psi$ has these properties, then $\Phi\circ\psi\circ\Phi^{-1}$ defines a diffeomorphism between $A(a,H)$ and $A(a,G)$. Thus $A(a,H)$ is diffeomorphic to $D^4$. By construction, $A(a,H)$ is contained in the cylinder $Z(a)$. If $H(t,z) = a-\pi|z|^2$, then $A(a,H)$ is the $4$-dimensional ball $B(a)$ of area $a$. Since $H_1\leq H_2$ implies $A(a,H_1)\subset A(a,H_2)$ for any two admissible Hamiltonians, assertion (3) is an immediate consequence. In order to prove assertion (4), let us first observe that Lemma \ref{lem:characteristic_foliation_on_graph} implies that the map
\begin{equation*}
g:B^2(a)\rightarrow \Gamma(H)\subset \widetilde{{\mathbb{C}}}_+ \qquad g(z)\coloneqq (H(0,z),0,z)
\end{equation*}
parametrizes a disk-like surface of section of the characteristic foliation on the graph $\Gamma(H)$. The first return map of this surface of section is given by $\phi_H^1$. The symplectomorphism $\Phi$ in Construction \ref{construction:spun_hypersurface} maps the characteristic foliation on the graph $\Gamma(H)$ to the characteristic foliation on $\partial A(a,H)$. Thus the first return map of the surface of section parametrized by $f$ is equal to $\phi_H^1$ as well. In order to show that $(\phi_H^t)_{t\in [0,1]}$ represents the correct lift, simply observe that the composition of the trivialization
\begin{equation*}
\tau:{\mathbb{R}}/{\mathbb{Z}}\times B^2(a)\rightarrow\Gamma(H)\qquad \tau(t,z)\coloneqq (H(t,z),t,z)
\end{equation*}
of $\Gamma(H)$ with $\Phi$ yields a trivialization of $\partial A(a,H)$ of degree $0$: for $p\in\partial B^2(a)$ we have $H(t,p)=0$, so the loop $t\mapsto \Phi(\tau(t,p)) = \Phi(0,t,p) = (p,0)$ is constant and hence has degree $0$. This proves assertion (4).
\end{proof}
\subsection{Proof of Theorem \ref{theorem:embedding_result}}
\label{subsection:proof_of_embedding_result}
Throughout this section, we fix the setup of Theorem \ref{theorem:embedding_result}. We let $X\subset ({\mathbb{R}}^4,\omega_0)$ denote a star-shaped domain and $\Sigma\subset\partial X$ a $\partial$-strong disk-like global surface of section of the natural Reeb flow on $\partial X$ induced by the restriction of the standard Liouville $1$-form $\lambda_0$ defined in \eqref{eq:liouville_vector_field_and_form}. We assume that the local first return map of a small disk transverse to the boundary orbit $\partial\Sigma$ is smoothly conjugated to a rotation. Let $a>0$ denote the symplectic area of $\Sigma$.\\
Our strategy, roughly speaking, is to show that if the degree $0$ lift $\widetilde{\phi}_0$ of the first return map can be generated by a Hamiltonian $H$ which is positive in the interior $\operatorname{int}(\Sigma)$ and vanishes on the boundary $\partial \Sigma$, then $X$ is symplectomorphic to the domain $A(a,H)$ constructed in section \ref{subsection:main_construction}. We construct a symplectomorphism between $X$ and $A(a,H)$ in two steps. In Proposition \ref{prop:spun_hypersurface_isomorphic_starshaped_hypersurface} we show that there exists a diffeomorphism $\psi:\partial A(a,H)\rightarrow\partial X$ which pulls back $\omega_0|_{\partial X}$ to $\omega_0|_{\partial A(a,H)}$. Then we use a result of Gromov and McDuff (Theorem \ref{theorem:gromov_mcduff_theorem}) to extend $\psi$ to a symplectomorphism between $A(a,H)$ and $X$. This is done in Corollary \ref{cor:corollary_of_gromov_mcduff}.\\
We begin with the following auxiliary lemma on the existence of a convenient parametrization of a tubular neighbourhood of the boundary orbit.
\begin{lem}
\label{lem:neighbourhood_theorem_boundary_orbit_conj_to_rot}
Let $\epsilon>0$ be sufficiently small and let ${\mathbb{D}}_\epsilon\subset{\mathbb{C}}$ denote the disk of radius $\epsilon$. There exist a $\partial$-strong disk-like global surface of section $\Sigma'\subset \partial X$ with the same boundary orbit as $\Sigma$ and a parametrization $F:{\mathbb{R}}/{\mathbb{Z}}\times{\mathbb{D}}_\epsilon\rightarrow \partial X$ of a tubular neighbourhood of $\partial\Sigma'$ such that the following is true.
\begin{equation}
\label{eq:neighbourhood_theorem_boundary_orbit_conj_to_rot_a}
F^{-1}(\Sigma') = \{(t,r e^{i\theta})\in {\mathbb{R}}/{\mathbb{Z}} \times {\mathbb{D}}_\epsilon \mid 0\leq r\leq \epsilon\enspace\text{and}\enspace \theta=0 \}
\end{equation}
and
\begin{equation}
\label{eq:neighbourhood_theorem_boundary_orbit_conj_to_rot_b}
F^*\lambda_0 = \frac{1}{2}r^2\cdot d\theta + (a-\pi b r^2)\cdot dt
\end{equation}
where $b$ is a positive real number.
\end{lem}
\begin{proof}
Consider a small disk $D$ transverse to $\partial\Sigma$ whose local first return map is smoothly conjugated to a rotation. We may choose a parametrization $f:{\mathbb{D}}_\epsilon\rightarrow D$ such that the local first return map, regarded as a diffeomorphism of ${\mathbb{D}}_\epsilon$ via $f$, is a rotation of ${\mathbb{D}}_\epsilon$. By an equivariant version of Moser's argument, after modifying the parametrization $f$ we may in addition assume that $f^*\omega_0=\omega_0$ where $\omega_0$ denotes the standard symplectic form on both ${\mathbb{R}}^4$ and ${\mathbb{D}}_\epsilon$. The primitives $f^*\lambda_0$ and $\lambda\coloneqq\frac{1}{2}r^2d\theta$ of the area form $\omega_0$ on ${\mathbb{D}}_\epsilon$ differ by an exact $1$-form, i.e. $\lambda = f^*\lambda_0 + d\alpha$ for a smooth function $\alpha$ on ${\mathbb{D}}_\epsilon$. We may normalize $\alpha$ such that $\alpha(0)=0$. We define
\begin{equation*}
f':{\mathbb{D}}_\epsilon\rightarrow \partial X\quad f'(z)\coloneqq \phi^{\alpha(z)}(f(z))
\end{equation*}
where $\phi^t$ denotes the Reeb flow on $\partial X$. This parametrizes a small disk $D'$ transverse to $\partial\Sigma$. A direct computation shows that $f'^*\lambda_0 = f^*\lambda_0+d\alpha = \lambda$ and the local first return map is still a rotation of ${\mathbb{D}}_\epsilon$. Let $\rho$ denote this rotation. Since $\rho^*\lambda=\lambda$, it follows from \eqref{eq:first_return_time_identity} that the first return time of $D'$ is constant and equal to $a$, the action of the orbit $\partial\Sigma$. Let us define the immersion
\begin{equation*}
F:{\mathbb{R}}\times{\mathbb{D}}_\epsilon\rightarrow\partial X \quad F(t,z)\coloneqq \phi^t(f'(z)).
\end{equation*}
We have $F^*\lambda_0 = dt+\lambda$ and $F$ is invariant under the diffeomorphism $\psi$ of ${\mathbb{R}}\times{\mathbb{D}}_\epsilon$ defined by $\psi(t,z) \coloneqq (t-a,\rho(z))$. Thus $F$ descends to a strict contactomorphism between the quotient $({\mathbb{R}}\times{\mathbb{D}}_\epsilon)/\sim$ of $({\mathbb{R}}\times{\mathbb{D}}_\epsilon,dt+\lambda)$ by the action of $\psi$ and a tubular neighbourhood of $\partial\Sigma$ in $\partial X$. It is a direct computation to check that we may choose a diffeomorphism $\tau:{\mathbb{R}}/{\mathbb{Z}}\times{\mathbb{D}}_\epsilon\cong ({\mathbb{R}}\times{\mathbb{D}}_\epsilon)/\sim$ such that the contact form $dt+\lambda$ on $({\mathbb{R}}\times{\mathbb{D}}_\epsilon)/\sim$ pulls back to a contact form on ${\mathbb{R}}/{\mathbb{Z}}\times{\mathbb{D}}_\epsilon$ of the form \eqref{eq:neighbourhood_theorem_boundary_orbit_conj_to_rot_b} for some real number $b$. By slight abuse of notation, let $F:{\mathbb{R}}/{\mathbb{Z}}\times{\mathbb{D}}_\epsilon\rightarrow\partial X$ denote the resulting parametrization of a tubular neighbourhood of $\partial \Sigma$. For appropriate choice of diffeomorphism $\tau$, the preimage $F^{-1}(\Sigma)$ is non-winding, i.e. isotopic to the annulus \eqref{eq:neighbourhood_theorem_boundary_orbit_conj_to_rot_a} in ${\mathbb{R}}/{\mathbb{Z}}\times{\mathbb{D}}_\epsilon$. In fact, since $\Sigma$ is a $\partial$-strong surface of section, it is easy to see that we may replace $\Sigma$ by an isotopic disk-like global surface of section $\Sigma'$ with the same boundary orbit such that $F^{-1}(\Sigma')$ is equal to the annulus \eqref{eq:neighbourhood_theorem_boundary_orbit_conj_to_rot_a}. It remains to show that the constant $b$ must be positive. This is a consequence of the fact that the boundary orbit $\partial\Sigma'$ is positive, i.e. the boundary orientation of $\Sigma'$ agrees with the orientation induced by the Reeb vector field.
Since the Reeb vector field of \eqref{eq:neighbourhood_theorem_boundary_orbit_conj_to_rot_b} is simply given by $\frac{1}{a}(\partial_t+2\pi b\partial_\theta)$, this means that $b$ must be positive.
\end{proof}
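For completeness, we verify the formula for the Reeb vector field used at the end of the proof. The contact form \eqref{eq:neighbourhood_theorem_boundary_orbit_conj_to_rot_b} is $\alpha = \frac{1}{2}r^2\, d\theta + (a-\pi b r^2)\, dt$ with $d\alpha = r\,dr\wedge d\theta - 2\pi b\, r\, dr\wedge dt$. For $R\coloneqq \frac{1}{a}(\partial_t + 2\pi b\,\partial_\theta)$ we compute
\begin{equation*}
\iota_R d\alpha = \frac{1}{a}\big(2\pi b\, r\, dr - 2\pi b\, r\, dr\big) = 0 \qquad\text{and}\qquad \alpha(R) = \frac{1}{a}\big(\pi b r^2 + a - \pi b r^2\big) = 1,
\end{equation*}
so $R$ is indeed the Reeb vector field.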
The surface of section $\Sigma'$ constructed in Lemma \ref{lem:neighbourhood_theorem_boundary_orbit_conj_to_rot} still satisfies the assumptions of Theorem \ref{theorem:embedding_result} because we can simply use a transfer map $\psi:\Sigma\rightarrow\Sigma'$ to transport the positive Hamiltonians $H$ and $G$ generating the lifts $\widetilde{\phi}_0$ and $\widetilde{\phi}_1$ of the first return map of $\Sigma$ to obtain positive Hamiltonians on $\Sigma'$. Thus we may replace $\Sigma$ by $\Sigma'$ and assume in addition that there exists a parametrization $F$ of a tubular neighbourhood of $\partial\Sigma$ satisfying \eqref{eq:neighbourhood_theorem_boundary_orbit_conj_to_rot_a} and \eqref{eq:neighbourhood_theorem_boundary_orbit_conj_to_rot_b}.\\
Our next step is to construct a special parametrization $f:B^2(a)\rightarrow\Sigma$ such that $f^*\omega = \omega_0$ where $\omega_0$ denotes the standard symplectic form on $B^2(a)$. For $re^{i\theta}$ near the boundary $\partial B^2(a)$ we define
\begin{equation*}
f (r e^{i\theta}) = F\left(\frac{\theta}{2\pi} \enspace,\enspace \sqrt{\frac{1}{b\pi}(a-\pi r^2)}\right).
\end{equation*}
A direct computation involving \eqref{eq:neighbourhood_theorem_boundary_orbit_conj_to_rot_b} shows that this pulls back $\omega$ to $\omega_0$. Since both $(B^2(a),\omega_0)$ and $(\Sigma,\omega)$ have area $a$, we can use a Moser type argument to extend $f$ to an area preserving map $f:(B^2(a),\omega_0)\rightarrow (\Sigma,\omega)$.\\
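Concretely, under $f$ the angular coordinate of the ${\mathbb{D}}_\epsilon$-factor is constant by \eqref{eq:neighbourhood_theorem_boundary_orbit_conj_to_rot_a}, so \eqref{eq:neighbourhood_theorem_boundary_orbit_conj_to_rot_b} pulls back to
\begin{equation*}
f^*\lambda_0 = \Big(a - \pi b\cdot\frac{a-\pi r^2}{b\pi}\Big)\,\frac{d\theta}{2\pi} = \frac{1}{2}r^2\, d\theta
\end{equation*}
near $\partial B^2(a)$, whence $f^*\omega = d(f^*\lambda_0) = r\,dr\wedge d\theta = \omega_0$.\\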
Consider the degree $0$ lift $\widetilde{\phi}_0$ of the first return map of $\Sigma$. Via the parametrization $f$ we can regard $\widetilde{\phi}_0$ as an element of $\widetilde{\operatorname{Diff}}(B^2(a),\omega_0)$. It follows from \eqref{eq:neighbourhood_theorem_boundary_orbit_conj_to_rot_b} and a short computation that $\widetilde{\phi}_0$ is a rotation by angle $2\pi/b$ near the boundary $\partial B^2(a)$. By the assumptions in the first assertion of Theorem \ref{theorem:embedding_result}, there exists a Hamiltonian $H:{\mathbb{R}}/{\mathbb{Z}}\times B^2(a)\rightarrow{\mathbb{R}}$ which vanishes on the boundary, is positive in the interior, is autonomous near the boundary and generates $\widetilde{\phi}_0$. We argue that we may in addition assume that
\begin{equation}
\label{eq:boundary_behaviour_of_ham}
H(t,z) = \frac{1}{b}(a-\pi |z|^2)\qquad \text{for}\enspace z\enspace\text{sufficiently close to}\enspace \partial B^2(a).
\end{equation}
Since $H$ is autonomous near the boundary, it is invariant under $\phi_H^1$, which is a rotation by $2\pi/b$. If $b$ is irrational, then this implies that $H$ is invariant under arbitrary rotations and it is not hard to see that $H$ must in fact be given by \eqref{eq:boundary_behaviour_of_ham}. If $b$ is rational, then $H$ need not be invariant under arbitrary rotations near the boundary. We show that we may replace $H$ by a Hamiltonian which is rotation invariant. There exists a symplectomorphism $g$ of $(B^2(a),\omega_0)$ supported in a small neighbourhood of $\partial B^2(a)$ which commutes with the rotation by angle $2\pi/b$ such that the level sets of $H\circ g$ near the boundary are circles centred at the centre of $B^2(a)$. The time-$1$ map of $H\circ g$ is given by $g^{-1}\circ \phi_H^1 \circ g$. Away from a neighbourhood of $\partial B^2(a)$ this agrees with $\phi_H^1$ because $g$ is equal to the identity. Near the boundary, $\phi_H^1$ is a rotation by angle $2\pi/b$ and thus commutes with $g$. Hence $g^{-1}\circ \phi_H^1\circ g$ agrees with $\phi_H^1$ on all of $B^2(a)$. Now simply replace $H$ by $H\circ g$. Then $H$ is rotation invariant near the boundary and again it follows that it must be given by \eqref{eq:boundary_behaviour_of_ham}.
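Let us also spell out why a rotation invariant autonomous Hamiltonian must be of the form \eqref{eq:boundary_behaviour_of_ham}. Near the boundary, write $H = h(\pi|z|^2)$ for a smooth function $h$ with $h(a)=0$. With respect to $\omega_0 = r\,dr\wedge d\theta$, the defining identity $\iota_{X_H}\omega_0 = dH$ yields
\begin{equation*}
\iota_{X_H}(r\, dr\wedge d\theta) = 2\pi r\, h'(\pi r^2)\, dr \qquad\Longrightarrow\qquad X_H = -2\pi h'(\pi r^2)\,\partial_\theta,
\end{equation*}
so the time-$1$ map rotates each circle $\{|z|=r\}$ by the angle $-2\pi h'(\pi r^2)$. Matching this with the rotation by $2\pi/b$ forces $h'=-1/b$, and together with $h(a)=0$ this gives $H(t,z) = \frac{1}{b}(a-\pi|z|^2)$.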
\begin{prop}
\label{prop:spun_hypersurface_isomorphic_starshaped_hypersurface}
There exists a diffeomorphism $\psi : \partial A(a,H)\rightarrow \partial X$ such that $\psi^*(\omega_0|_{\partial X}) = \omega_0|_{\partial A(a,H)}$.
\end{prop}
\begin{proof}
After possibly shrinking $\epsilon$, we may define
\begin{equation*}
F':{\mathbb{R}}/{\mathbb{Z}}\times {\mathbb{D}}_\epsilon \rightarrow \partial A(a,H)\quad F'(t,z)\coloneqq
\left( \sqrt{\frac{a}{\pi}- b |z|^2}\cdot e^{2\pi i t}\enspace , \enspace z \right).
\end{equation*}
It follows from \eqref{eq:boundary_behaviour_of_ham} and the definition of $A(a,H)$ that the image of $F'$ is contained in $\partial A(a,H)$ for $\epsilon$ sufficiently small. Moreover, we define
\begin{equation*}
f':B^2(a)\rightarrow \partial A(a,H)\quad f'(z)\coloneqq \left(z,\sqrt{\frac{H(0,z)}{\pi}}\right).
\end{equation*}
A direct computation shows that the following two diffeomorphisms between submanifolds of $\partial A(a,H)$ and $\partial X$ agree on their overlap and pull back $\omega_0|_{\partial X}$ to $\omega_0|_{\partial A(a,H)}$:
\begin{equation*}
F\circ F'^{-1} : \operatorname{im}(F')\rightarrow \operatorname{im}(F)
\end{equation*}
and
\begin{equation*}
f\circ f'^{-1} : \operatorname{im}(f')\rightarrow \operatorname{im}(f)
\end{equation*}
On $\operatorname{im}(F')\cup \operatorname{im}(f')$ we may therefore define $\psi$ to agree with $F\circ F'^{-1}$ and $f\circ f'^{-1}$, respectively. Next, we explain how to extend $\psi$ to a diffeomorphism between $\partial A(a,H)$ and $\partial X$. We pull back the Reeb vector field $R$ on $\partial X$ via $F\circ F'^{-1}$ to obtain a vector field $R'$ on $\operatorname{im}(F')$ which is tangent to the characteristic foliation. We smoothly extend $R'$ to a vector field on $\partial A(a,H)$, still denoted by $R'$, which is everywhere tangent to the characteristic foliation. The embedding $f'$ parametrizes a surface of section of the flow generated by $R'$. By Lemma \ref{lem:basic_properties_of_spun_hypersurfaces}, the lift of the first return map with respect to a trivialization of degree $0$ is represented by $(\phi_H^t)_{t\in [0,1]}$, which also represents the degree $0$ lift of the first return map of the surface of section of $\partial X$ parametrized by $f$. After replacing $R'$ by a positive scaling $\chi\cdot R'$ for a suitable smooth function $\chi:\partial A(a,H)\rightarrow {\mathbb{R}}_{>0}$, we may assume that the first return times of the two surfaces of section $f$ and $f'$ agree as well. This allows us to extend $\psi$ to a diffeomorphism $\psi:\partial A(a,H)\rightarrow \partial X$ by requiring that $\psi$ intertwines the flows on $\partial A(a,H)$ and $\partial X$ generated by $R'$ and $R$, respectively. Set $\omega\coloneqq \psi^*(\omega_0|_{\partial X})$. We need to show that $\omega = \omega_0|_{\partial A(a,H)}$. By construction of $\psi$, the pull-back of the characteristic foliation on $\partial X$ via $\psi$ is equal to the characteristic foliation on $\partial A(a,H)$. Thus $\omega_0$ and $\omega$ induce the same characteristic foliations on $\partial A(a,H)$. Moreover, the restrictions of $\omega_0$ and $\omega$ to $\operatorname{im}(F')$ and $\operatorname{im}(f')$ agree. Cartan's formula implies
\begin{equation*}
\mathcal{L}_{R'}\omega_0 = \iota_{R'}d\omega_0 + d\iota_{R'}\omega_0 = 0
\end{equation*}
where we use that $\omega_0$ is closed and $R'$ is contained in its kernel. Similarly, $\mathcal{L}_{R'}\omega=0$. Let $p\in\partial A(a,H)\setminus \operatorname{im}(F')$ and let $v,w\in T_p\partial A(a,H)$. Our goal is to show that $\omega_0(v,w)=\omega(v,w)$. The trajectory of $R'$ through $p$ intersects the surface of section $\operatorname{im}(f')$ after finite time. Let $\widetilde{p}$ be the first intersection point. We transport $v$ and $w$ via the flow of $R'$ to obtain vectors $\widetilde{v},\widetilde{w}\in T_{\widetilde{p}}\partial A(a,H)$. Since $\mathcal{L}_{R'}\omega_0$ and $\mathcal{L}_{R'}\omega$ vanish, we have $\omega_0(v,w) = \omega_0(\widetilde{v},\widetilde{w})$ and $\omega(v,w) = \omega(\widetilde{v},\widetilde{w})$. After replacing $(p,v,w)$ by $(\widetilde{p},\widetilde{v},\widetilde{w})$, we can therefore assume w.l.o.g. that $p$ is contained in the surface of section $\operatorname{im}(f')$. In addition we may assume that $v$ and $w$ are tangent to $\operatorname{im}(f')$. Indeed, replacing $v$ and $w$ by their projections onto $T_p\operatorname{im}(f')$ along the characteristic foliation does not change $\omega_0(v,w)$ and $\omega(v,w)$ because the kernels of $\omega_0$ and $\omega$ are tangent to the characteristic foliation. Now we simply use that the restrictions $\omega_0|_{\operatorname{im}(f')}$ and $\omega|_{\operatorname{im}(f')}$ agree by construction of $\psi$.
\end{proof}
We recall the following well-known theorem due to Gromov and McDuff (see Theorem 9.4.2 in \cite{MS12}).
\begin{theorem}[Gromov-McDuff]
\label{theorem:gromov_mcduff_theorem}
Let $(M,\omega)$ be a connected symplectic $4$-manifold and $K\subset M$ be a compact subset such that the following holds.
\begin{enumerate}
\item There is no symplectically embedded $2$-sphere $S\subset M$ with self-intersection number $S\cdot S=-1$.
\item There exists a symplectomorphism $\psi:{\mathbb{R}}^4\setminus V\rightarrow M\setminus K$, where $V\subset{\mathbb{R}}^4$ is a star-shaped compact set.
\end{enumerate}
Then $(M,\omega)$ is symplectomorphic to $({\mathbb{R}}^4,\omega_0)$. Moreover, for every open neighbourhood $U\subset M$ of $K$, the symplectomorphism can be chosen equal to $\psi^{-1}$ on $M\setminus U$.
\end{theorem}
\begin{cor}
\label{cor:corollary_of_gromov_mcduff}
For $j\in\{1,2\}$, let $A_j\subset{\mathbb{R}}^4$ be a compact submanifold diffeomorphic to the closed disk $D^4$. Assume that there exists a diffeomorphism $\psi:\partial A_1\rightarrow\partial A_2$ such that $\psi^*(\omega_0|_{\partial A_2}) = \omega_0|_{\partial A_1}$. Then the boundary $\partial A_1$ is of contact type if and only if $\partial A_2$ is of contact type. In this case, $\psi$ extends to a symplectomorphism
\begin{equation}
\psi: (A_1,\omega_0|_{A_1})\rightarrow (A_2,\omega_0|_{A_2}).
\end{equation}
\end{cor}
\begin{proof}
By the uniqueness part of the coisotropic neighbourhood theorem in \cite{Got82}, there exist open neighbourhoods $U_j$ of $\partial A_j$ such that $\psi$ extends to a symplectomorphism
\begin{equation*}
\psi: (U_1,\omega_0|_{U_1})\rightarrow (U_2,\omega_0|_{U_2}).
\end{equation*}
Being of contact type is a property that only depends on a small neighbourhood of a hypersurface. Thus $\partial A_1$ is of contact type if and only if $\partial A_2$ is. Suppose now that this is the case. After possibly shrinking $U_1$, we can find a Liouville vector field $Z_1$ defined on $U_1$ and transverse to $\partial A_1$. Let $\lambda_1$ denote the associated Liouville $1$-form defined by $\lambda_1 = \iota_{Z_1}\omega_0$. Let $Z_2$ and $\lambda_2$ denote the push-forwards via $\psi$. For $j\in\{1,2\}$ let $(\widehat{A_j},\omega_j)$ be the symplectic completion of $(A_j,\omega_0|_{A_j})$ obtained by attaching a cylindrical end using the Liouville vector field $Z_j$. Let $\widehat{U_j}$ denote the union of $U_j$ with the cylindrical end attached to $A_j$. Clearly, $\psi$ extends to a symplectomorphism
\begin{equation*}
\psi: (\widehat{U_1},\omega_1)\rightarrow (\widehat{U_2},\omega_2).
\end{equation*}
The contact manifold $(\partial A_1,\ker \lambda_1)$ is fillable. Hence it follows from Eliashberg's paper \cite{Eli91} that it is contactomorphic to $S^3$ equipped with the standard tight contact structure. Thus we can find a star-shaped domain $V\subset{\mathbb{R}}^4$ and a strict contactomorphism from $(\partial V,\lambda_0)$ to $(\partial A_1,\lambda_1)$, where $\lambda_0$ denotes the standard Liouville $1$-form on ${\mathbb{R}}^4$. There exists an open neighbourhood $U_0$ of ${\mathbb{R}}^4\setminus V$ such that this strict contactomorphism extends to a symplectomorphism
\begin{equation*}
\phi: (U_0,\omega_0)\rightarrow (\widehat{U_1},\omega_1).
\end{equation*}
By Theorem \ref{theorem:gromov_mcduff_theorem}, there exists a symplectomorphism
\begin{equation*}
\phi_1:({\mathbb{R}}^4,\omega_0)\rightarrow (\widehat{A_1},\omega_1)
\end{equation*}
which agrees with $\phi$ on the complement of $V$. Similarly, applying Theorem \ref{theorem:gromov_mcduff_theorem} to the composition
\begin{equation*}
\psi\circ\phi:(U_0,\omega_0)\rightarrow (\widehat{U_2},\omega_2)
\end{equation*}
we obtain a symplectomorphism
\begin{equation*}
\phi_2:({\mathbb{R}}^4,\omega_0)\rightarrow (\widehat{A_2},\omega_2)
\end{equation*}
agreeing with $\psi\circ\phi$ on the complement of $V$. The composition $\phi_2\circ\phi_1^{-1}$ restricts to a symplectomorphism $(A_1,\omega_0)\rightarrow (A_2,\omega_0)$ extending the given diffeomorphism $\psi:\partial A_1\rightarrow \partial A_2$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{theorem:embedding_result}]
We prove the first assertion. By Proposition \ref{prop:spun_hypersurface_isomorphic_starshaped_hypersurface} and Corollary \ref{cor:corollary_of_gromov_mcduff}, $X$ is symplectomorphic to $A(a,H)$. By the second item of Lemma \ref{lem:basic_properties_of_spun_hypersurfaces} we have $A(a,H)\subset Z(a)$. Thus $X\overset{s}{\hookrightarrow} Z(a)$.\\
We prove the second assertion. We define the Hamiltonian
\begin{equation*}
K:{\mathbb{R}}/{\mathbb{Z}}\times B^2(a) \rightarrow {\mathbb{R}}\quad K(t,z)\coloneqq a-\pi|z|^2.
\end{equation*}
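As a sanity check (using the sign convention $\iota_{X_K}\omega_0 = dK$ for the Hamiltonian vector field; the opposite convention produces the rotation in the opposite direction), write $z = x+iy$, so that
\begin{equation*}
dK = -2\pi(x\,dx + y\,dy), \qquad X_K = 2\pi(x\partial_y - y\partial_x),
\end{equation*}
and the flow is $\phi_K^t(z) = e^{2\pi i t}z$. In particular, the time-$1$ map of $K$ is indeed the full positive rotation by angle $2\pi$.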
This Hamiltonian generates $\widetilde{\rho}$, the full positive rotation of $B^2(a)$ by angle $2\pi$. Our goal is to show that we may assume that the Hamiltonian $H:{\mathbb{R}}/{\mathbb{Z}}\times B^2(a)\rightarrow{\mathbb{R}}$ generating $\widetilde{\phi}_0$ satisfies in addition $K\leq H$. Then it follows from Proposition \ref{prop:spun_hypersurface_isomorphic_starshaped_hypersurface}, Corollary \ref{cor:corollary_of_gromov_mcduff} and the third item in Lemma \ref{lem:basic_properties_of_spun_hypersurfaces} that $B(a){\hookrightarrow} X{\hookrightarrow} Z(a)$. We regard the lift $\widetilde{\phi}_1$ as an element of $\widetilde{\operatorname{Diff}}(B^2(a),\omega_0)$ via the parametrization $f$. It follows from \eqref{eq:relationship_lift_trivialization} that $\widetilde{\phi}_1 = \widetilde{\rho}^{-1}\circ \widetilde{\phi}_0$. In particular, $\widetilde{\phi}_1$ is a rotation by angle $2\pi(1/b-1)$ near the boundary. By assumption, there exists a Hamiltonian $G:{\mathbb{R}}/{\mathbb{Z}}\times B^2(a)\rightarrow{\mathbb{R}}$ generating $\widetilde{\phi}_1$ which is positive in the interior, vanishes on the boundary and is autonomous near the boundary. It follows as in the case of the Hamiltonian $H$ generating $\widetilde{\phi}_0$ that we can assume that $G$ is given by
\begin{equation*}
G(t,z) = \left(\frac{1}{b}-1\right)\cdot (a-\pi |z|^2)
\end{equation*}
near the boundary. Define
\begin{equation*}
H_t\coloneqq (K\#G)_t = K_t + G_t\circ (\phi_K^t)^{-1}.
\end{equation*}
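That the concatenation $K\#G$ generates the composition of the two flows is the standard computation: since each map $\phi_K^t$ is symplectic, it satisfies $(\phi_K^t)_* X_{G_t} = X_{G_t\circ (\phi_K^t)^{-1}}$, and hence
\begin{equation*}
\frac{d}{dt}\left(\phi_K^t\circ\phi_G^t\right) = \left(X_{K_t} + X_{G_t\circ (\phi_K^t)^{-1}}\right)\circ \phi_K^t\circ\phi_G^t,
\end{equation*}
so that $(\phi_{K\#G}^t)_{t\in [0,1]} = (\phi_K^t\circ\phi_G^t)_{t\in [0,1]}$ represents $\widetilde{\rho}\circ\widetilde{\phi}_1 = \widetilde{\phi}_0$.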
This defines a $1$-periodic Hamiltonian generating $\widetilde{\phi}_0$ which has the special form \eqref{eq:boundary_behaviour_of_ham} near $\partial B^2(a)$ and is bounded below by $K_t(z)=a-\pi|z|^2$.
\end{proof}
\subsection{Proof of Proposition \ref{prop:modify_hypersurface_such_that_return_map_is_generated_by_positive_hamiltonian}}
\label{subsection:proof_of_modification_result}
We begin with some auxiliary lemmas. The first lemma concerns {\it positive paths} in the linear symplectic group (see Lalonde-McDuff \cite{LaMD97}). Let $\on{Sp}(2n)$ denote the group of all linear symplectomorphisms of ${\mathbb{R}}^{2n}$. Moreover, let $J_0$ denote the matrix representing the standard complex structure on ${\mathbb{R}}^{2n}\cong{\mathbb{C}}^n$. We recall that, for every path $S:[0,1]\rightarrow{\mathbb{R}}^{2n\times 2n}$ of symmetric matrices, the solution to the initial value problem
\begin{equation*}
\dot{\Phi}(t) = J_0 S(t)\Phi(t)\qquad \text{and}\qquad \Phi(0)=\on{id}
\end{equation*}
is an arc $\Phi$ in $\on{Sp}(2n)$ starting at the identity. Conversely, every such arc $\Phi$ arises this way for a unique path $S$ of symmetric matrices. A path $\Phi$ in $\on{Sp}(2n)$ is called {\it positive} if $S(t)$ is positive definite for all $t$. Lemma \ref{lem:positive_paths_rotation_number} below characterizes the elements $\widetilde{\Phi}$ of the universal cover $\widetilde{\on{Sp}}(2)$ which can be represented by positive arcs. This characterization is given in terms of the rotation number
\begin{equation*}
\rho : \widetilde{\on{Sp}}(2)\rightarrow{\mathbb{R}}.
\end{equation*}
We give two equivalent definitions of $\rho$. The first one involves eigenvalues of symplectic matrices. We begin by defining a function $\overline{\rho}:\on{Sp}(2) \rightarrow{\mathbb{R}}/{\mathbb{Z}}$. The complex eigenvalues of a matrix $A\in\on{Sp}(2)$ are either given by $\lambda,\lambda^{-1}$ for $\lambda\in{\mathbb{R}}\setminus\{0\}$ or by $e^{\pm 2\pi i\theta}$ for $\theta\in (0,1/2)$. In the former case, we define
\begin{equation*}
\overline{\rho}(A)\coloneqq \begin{cases} 0 & \text{if}\enspace \lambda>0 \\
1/2 & \text{if}\enspace\lambda<0 \end{cases}
\end{equation*}
In the latter case, we fix an arbitrary vector $v\in{\mathbb{R}}^2\setminus\{0\}$ and define
\begin{equation*}
\overline{\rho}(A)\coloneqq \begin{cases} \theta & \text{if}\enspace \langle J_0v,Av\rangle>0 \\
-\theta & \text{if}\enspace \langle J_0v,Av \rangle <0\end{cases}
\end{equation*}
The rotation number $\rho$ is defined to be the unique lift of $\overline{\rho}$ to the universal cover $\widetilde{\on{Sp}}(2)$ satisfying $\rho(\on{id})=0$.\\
For our second definition of $\rho$, we fix $v\in{\mathbb{R}}^2\setminus\{0\}$ and define $\overline{\rho}_v:\on{Sp}(2)\rightarrow{\mathbb{R}}/{\mathbb{Z}}$ to be the auxiliary function characterized by
\begin{equation*}
Av \in {\mathbb{R}}_{>0}\cdot e^{2\pi i \overline{\rho}_v(A)}\cdot v
\end{equation*}
We let $\rho_v:\widetilde{\on{Sp}}(2)\rightarrow{\mathbb{R}}$ denote the unique lift of $\overline{\rho}_v$ to the universal cover satisfying $\rho_v(\on{id})=0$ and define
\begin{equation*}
\rho(\widetilde{\Phi})\coloneqq \lim_{n\rightarrow\infty} \frac{\rho_v(\widetilde{\Phi}^n)}{n}.
\end{equation*}
We refer to \cite[appendix A]{CH21} for a proof that these two definitions of $\rho$ coincide.
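As a quick consistency check, consider the element $\widetilde{R}_\tau\in\widetilde{\on{Sp}}(2)$ represented by the arc $(e^{it\tau})_{t\in [0,1]}$ in $\on{U}(1)\subset\on{Sp}(2)$, where $\tau>0$. Since $\widetilde{R}_\tau^{\,n}=\widetilde{R}_{n\tau}$ rotates every $v\in{\mathbb{R}}^2\setminus\{0\}$ by the total angle $n\tau$, we have $\rho_v(\widetilde{R}_\tau^{\,n}) = \frac{n\tau}{2\pi}$, so the second definition yields $\rho(\widetilde{R}_\tau)=\frac{\tau}{2\pi}$. This matches the first definition: for $e^{i\tau}\neq\pm\on{id}$ the eigenvalues of $e^{i\tau}$ are $e^{\pm i\tau}$ and $\langle J_0v, e^{i\tau}v\rangle = |v|^2\sin\tau$, so $\overline{\rho}(e^{i\tau})=\frac{\tau}{2\pi}$ in ${\mathbb{R}}/{\mathbb{Z}}$, and lifting with $\rho(\on{id})=0$ again gives $\rho(\widetilde{R}_\tau)=\frac{\tau}{2\pi}$.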
\begin{lem}
\label{lem:positive_paths_rotation_number}
Let $\widetilde{\Phi}\in\widetilde{\on{Sp}}(2)$. Then $\widetilde{\Phi}$ can be represented by a positive arc in $\on{Sp}(2)$ if and only if the rotation number $\rho(\widetilde{\Phi})$ is strictly positive.
\end{lem}
\begin{proof}
Suppose that $\widetilde{\Phi}$ is represented by a positive arc $(\Phi(t))_{t\in [0,1]}$ in $\on{Sp}(2)$ starting at the identity. Let $S(t)$ denote the path of positive definite matrices generating $\Phi(t)$. We may choose $\epsilon>0$ such that $\langle z,S(t)z \rangle\geq\epsilon$ for all $t\in [0,1]$ and all unit vectors $z\in {\mathbb{R}}^2$. Fix $v\in{\mathbb{R}}^2\setminus\{0\}$. A direct computation shows that
\begin{equation*}
\frac{d}{dt}\overline{\rho}_v(\Phi(t)) = \frac{1}{2\pi}\cdot\frac{\langle \Phi(t)v,S(t)\Phi(t)v \rangle}{|\Phi(t)v|^2} \geq \frac{\epsilon}{2\pi}.
\end{equation*}
It is immediate from our second definition of $\rho$ that this implies that $\rho(\widetilde{\Phi})\geq \frac{\epsilon}{2\pi}>0$.\\
Conversely, suppose that $\rho(\widetilde{\Phi})>0$. Our goal is to construct a positive arc in $\on{Sp}(2)$ representing $\widetilde{\Phi}$. We claim that we may reduce ourselves to the case $\rho(\widetilde{\Phi})\in (0,1)$. Indeed, for $\tau\in{\mathbb{R}}$, let $\widetilde{R}_\tau\in\widetilde{\on{Sp}}(2)$ denote the element represented by the arc $(e^{it\tau})_{t\in [0,1]}$ in $\on{U}(1)\subset\on{Sp}(2)$. Consider the function $\tau\mapsto\rho((\widetilde{R}_\tau)^{-1}\circ \widetilde{\Phi})$. This function is continuous and it is clear from our second definition of $\rho$ that it is decreasing and that it diverges to $-\infty$ as $\tau\rightarrow +\infty$. Thus we may pick $\tau>0$ such that $\rho((\widetilde{R}_\tau)^{-1}\circ \widetilde{\Phi})\in (0,1)$. Since $\widetilde{R}_\tau$ is represented by a positive arc by definition, it suffices to show that the same is true for $(\widetilde{R}_\tau)^{-1}\circ \widetilde{\Phi}$. Hence, after replacing $\widetilde{\Phi}$ by $(\widetilde{R}_\tau)^{-1}\circ \widetilde{\Phi}$, we may assume w.l.o.g. that $\rho(\widetilde{\Phi})\in (0,1)$.\\
Let $\Phi\in\on{Sp}(2)$ denote the projection of $\widetilde{\Phi}$ to $\on{Sp}(2)$. Since $\rho(\widetilde{\Phi})\notin{\mathbb{Z}}$, the spectrum of $\Phi$ does not contain positive real numbers. Thus Theorem 1.2 in \cite{LaMD97} implies that there exists a positive arc $(\Phi(t))_{t\in [0,1]}$ in $\on{Sp}(2)$ starting at the identity and ending at $\Phi(1)=\Phi$ such that $\Phi(t)$ does not have any positive real eigenvalues for any $t>0$. Let $[(\Phi(t))_t]$ denote the element of the universal cover represented by the arc $(\Phi(t))_t$. Our goal is to show that $\widetilde{\Phi}=[(\Phi(t))_t]$. Since the projections of these two elements to $\on{Sp}(2)$ agree, it is enough to show that the rotation numbers $\rho(\widetilde{\Phi})$ and $\rho([(\Phi(t))_t])$ coincide. These rotation numbers must agree in ${\mathbb{R}}/{\mathbb{Z}}$, so it is actually enough to show that $\rho([(\Phi(t))_t])\in (0,1)$. Positivity of the arc $(\Phi(t))_t$ implies that $\rho([(\Phi(t))_t])>0$. This follows from the implication of Lemma \ref{lem:positive_paths_rotation_number} already proved above. Since the spectrum of $\Phi(t)$ does not contain positive real numbers for any $t>0$, we have $\overline{\rho}(\Phi(t))\neq 0$ for all $t>0$. We deduce that $\rho([(\Phi(t))_t])<1$. This concludes our proof that $\widetilde{\Phi}$ can be represented by a positive arc.
\end{proof}
\begin{lem}
\label{lem:neighbourhood_theorem_boundary_orbit}
Let $X\subset{\mathbb{R}}^4$ be a star-shaped domain and let $\Sigma\subset\partial X$ be a $\partial$-strong disk-like global surface of section. Let $\epsilon>0$ be sufficiently small and let ${\mathbb{D}}_\epsilon\subset{\mathbb{C}}$ denote the disk of radius $\epsilon$. Then there exist a $\partial$-strong disk-like global surface of section $\Sigma'$ with the same boundary orbit as $\Sigma$ and a parametrization $F:{\mathbb{R}}/{\mathbb{Z}}\times{\mathbb{D}}_\epsilon\rightarrow \partial X$ of a tubular neighbourhood of $\Sigma$ such that the following is true:
\begin{equation}
\label{eq:neighbourhood_theorem_boundary_orbit_a}
F^{-1}(\Sigma') = \{(t,re^{i\theta})\in {\mathbb{R}}/{\mathbb{Z}}\times{\mathbb{D}}_\epsilon \mid 0\leq r\leq\epsilon\enspace\text{and}\enspace \theta=0\}
\end{equation}
and
\begin{equation}
\label{eq:neighbourhood_theorem_boundary_orbit_b}
F^*\lambda_0 = \frac{1}{2}r^2d\theta + H dt
\end{equation}
where $H:{\mathbb{R}}/{\mathbb{Z}}\times{\mathbb{D}}_\epsilon\rightarrow{\mathbb{R}}$ is a Hamiltonian such that $H_t(0)=\int_{\partial\Sigma} \lambda_0$ and the differential $dH_t(0)$ vanishes. Moreover, the Hessian $\nabla^2H_t(0)$ is negative definite.
\end{lem}
\begin{proof}
Let $a\coloneqq\int_{\partial\Sigma}\lambda_0$ be the action of the orbit $\partial\Sigma$. Let $\xi$ denote the contact structure on $\partial X$. Let $\tau:\xi|_{\partial\Sigma}\cong{\mathbb{R}}^2$ be a symplectic trivialization of $\xi|_{\partial\Sigma}$ with the property that $\Sigma$ does not wind with respect to $\tau$. Via the trivialization $\tau$, the linearized Reeb flow $d\phi^t$ along $\partial\Sigma$ induces an arc $\Phi:[0,a]\rightarrow\on{Sp}(2)$ representing an element $\widetilde{\Phi}$ of the universal cover $\widetilde{\on{Sp}}(2)$. Since $\partial \Sigma$ is a positive boundary orbit of $\Sigma$, the linearized Reeb flow along $\partial\Sigma$ winds non-negatively with respect to the surface of section $\Sigma$, i.e. $\rho(\widetilde{\Phi})\geq 0$. The fact that $\Sigma$ is $\partial$-strong actually implies that $\rho(\widetilde{\Phi})$ is strictly positive. Hence it follows from Lemma \ref{lem:positive_paths_rotation_number} that $\widetilde{\Phi}$ is represented by a positive arc. We may therefore choose a loop $(S(t))_{t\in{\mathbb{R}}/a{\mathbb{Z}}}$ of symmetric positive definite matrices generating an arc $\Psi:[0,a]\rightarrow\on{Sp}(2)$ which represents $\widetilde{\Phi}$. After replacing $\tau$ by an isotopic trivialization, we can assume that the arc $\Psi$ is induced by the linearized Reeb flow. Using the trivialization $\tau$, we can choose a parametrization $F:{\mathbb{R}}/{\mathbb{Z}}\times{\mathbb{D}}_\epsilon\rightarrow\partial X$ of a tubular neighbourhood of $\partial\Sigma$ such that
\begin{enumerate}
\item The pullback $F^*\lambda_0$ is given by $a\cdot dt$ on the circle ${\mathbb{R}}/{\mathbb{Z}}\times 0$.
\item The pullback $F^*\omega_0$ agrees with $\omega_0$ on the circle ${\mathbb{R}}/{\mathbb{Z}}\times 0$. Here $\omega_0$ denotes the standard symplectic form on ${\mathbb{R}}^4$ and also the $2$-form on ${\mathbb{R}}/{\mathbb{Z}}\times{\mathbb{D}}_\epsilon$ whose restriction to fibres $t\times{\mathbb{D}}_\epsilon$ agrees with the standard symplectic form on ${\mathbb{D}}_\epsilon$ and which vanishes on the vector field $\partial_t$.
\item The linearized Reeb flow of $F^*\lambda_0$ along the orbit ${\mathbb{R}}/{\mathbb{Z}}\times 0$ is given by the arc $\Psi$.
\end{enumerate}
The remaining argument proceeds exactly as the proof of Lemma 5.2 in \cite{HM15}. The result is a modification of the parametrization $F$ such that properties (1)-(3) above still hold and such that $F^*\lambda_0$ is of the form \eqref{eq:neighbourhood_theorem_boundary_orbit_b} for some Hamiltonian $H$ which satisfies $H_t(0)=a$ and $dH_t(0)=0$. It follows from the fact that the linearized flow $\Psi$ is generated by symmetric positive definite matrices and our sign conventions that $\nabla^2H_t(0)$ is negative definite. Finally, since $\Sigma$ does not wind with respect to the parametrization $F$, we can achieve \eqref{eq:neighbourhood_theorem_boundary_orbit_a} by isotoping $\Sigma$ and possibly shrinking the tubular neighbourhood.
\end{proof}
\begin{lem}
\label{lem:embedding_time_engergy_extended_phase_space_into_symplectization}
Let $D$ be a closed $2$-dimensional disk. Let $\lambda$ be a Liouville $1$-form on $D$, i.e. $\omega\coloneqq d\lambda$ is a symplectic form and the Liouville vector field $W$ characterized by $\iota_W\omega = \lambda$ is transverse to $\partial D$. Let $I\subset{\mathbb{R}}$ be a closed interval and endow $I\times D$ with the contact form $dt+\lambda$. Here $t$ denotes the coordinate on $I$. Set $M\coloneqq {\mathbb{R}}_s\times I\times D$. We can regard $M$ as the symplectization of $I\times D$ and equip it with the symplectic form $\omega_M\coloneqq d(e^s(dt+\lambda))$. We can also regard $M$ as time-energy extended phase space of $D$ and endow it with the symplectic form $\widetilde{\omega}\coloneqq ds\wedge dt+\omega$. We abbreviate $M_+\coloneqq {\mathbb{R}}_{\geq 0}\times I\times D$ and $M_0\coloneqq 0\times I\times D$. There exists a symplectic embedding
\begin{equation*}
G: (M_+,\widetilde{\omega})\rightarrow (M_+,\omega_M)
\end{equation*}
which restricts to the identity on $M_0$.
\end{lem}
\begin{proof}
We assume w.l.o.g. that $0$ is contained in the interior of $I$. We define the vector field $Y$ on $M$ by
\begin{equation*}
Y\coloneqq \partial_s - W - t\cdot \partial_t.
\end{equation*}
Since $W$ is outward pointing at $\partial D$ and $t\cdot\partial_t$ is outward pointing at $\partial I$, the flow of $Y$ is defined for all positive times. We define $G$ to be the embedding which is uniquely determined by requiring $G(0,t,z)= (0,t,z)$ for all $(t,z)\in I\times D$ and $G^*Y = \partial_s$. Let us check that
\begin{equation*}
G^* \omega_M = \widetilde{\omega}.
\end{equation*}
We compute
\begin{IEEEeqnarray}{rCl}
\mathcal{L}_Y \omega_M & = & \mathcal{L}_Y d(e^s(\lambda + dt)) \nonumber\\
& = & d ((\mathcal{L}_Y e^s)(\lambda + dt) + e^s\mathcal{L}_Y(\lambda + dt)) \nonumber\\
& = & d (e^s(\lambda + dt - d\iota_W\lambda - \iota_W d\lambda - d \iota_{t\partial_t}dt)) \nonumber\\
& = & 0.\nonumber
\end{IEEEeqnarray}
Clearly $\mathcal{L}_{\partial_s}\widetilde{\omega}=0$ as well. Thus it suffices to check that $G^*\omega_M$ and $\widetilde{\omega}$ agree on the set $M_0$. This is equivalent to showing that the pullbacks to $M_0$ of $G^*\omega_M$ and $\widetilde{\omega}$ and of $\iota_{\partial_s}G^*\omega_M$ and $\iota_{\partial_s}\widetilde{\omega}$ agree. Since the restriction of $G$ to $M_0$ is the identity, the pullback of $G^*\omega_M$ to $M_0$ is equal to $\omega$. This agrees with the pullback of $\widetilde{\omega}$. We compute
\begin{IEEEeqnarray}{rCl}
\iota_{\partial_s}G^*\omega_M & = & \iota_{\partial_s}G^* d(e^s(\lambda + dt))\nonumber\\
& = & G^* \iota_Y e^s(\omega + ds\wedge\lambda + ds\wedge dt) \nonumber\\
& = & G^* e^s (-\lambda + \lambda + dt + tds) \nonumber\\
& = & G^* e^s (dt + tds).\nonumber
\end{IEEEeqnarray}
The pullback of this form to $M_0$ is simply $dt$. This agrees with the pullback of $\iota_{\partial_s}\widetilde{\omega}$. This establishes $G^*\omega_M=\widetilde{\omega}$.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop:modify_hypersurface_such_that_return_map_is_generated_by_positive_hamiltonian}]
Our first step is to construct a star-shaped domain $X'\subset{\mathbb{R}}^4$ which contains $X$, agrees with $X$ outside a small neighbourhood of $\partial\Sigma$ and has a $\partial$-strong disk-like surface of section $\Sigma'\subset\partial X'$ of the same symplectic area as $\Sigma$ such that the local first return map of a small disk transverse to the boundary orbit $\partial \Sigma'$ is smoothly conjugated to a rotation. We apply Lemma \ref{lem:neighbourhood_theorem_boundary_orbit}. After replacing $\Sigma$ by a disk-like global surface of section with the same boundary orbit, we may choose a parametrization $F:{\mathbb{R}}/{\mathbb{Z}}\times {\mathbb{D}}_\epsilon \rightarrow \partial X$ of a tubular neighbourhood of $\partial\Sigma$ satisfying \eqref{eq:neighbourhood_theorem_boundary_orbit_a} and \eqref{eq:neighbourhood_theorem_boundary_orbit_b}. Consider time-energy extended phase space ${\mathbb{R}}_s\times ({\mathbb{R}}/{\mathbb{Z}})_t\times {\mathbb{D}}_\epsilon$ equipped with the symplectic form $\widetilde{\omega_0}\coloneqq \omega_0 + ds\wedge dt$ where $\omega_0$ denotes the standard symplectic form on ${\mathbb{D}}_\epsilon$. The pullback of $\widetilde{\omega_0}$ via the parametrization
\begin{equation*}
G:{\mathbb{R}}/{\mathbb{Z}}\times {\mathbb{D}}_\epsilon \rightarrow {\mathbb{R}}\times{\mathbb{R}}/{\mathbb{Z}}\times{\mathbb{D}}_\epsilon \quad G(t,z)\coloneqq (H_t(z),t,z)
\end{equation*}
of the graph $\Gamma(H)$ is given by $\omega_0 + dH\wedge dt$. It follows from \eqref{eq:neighbourhood_theorem_boundary_orbit_b} that this agrees with $F^*\omega_0$ where, by slight abuse of notation, $\omega_0$ also denotes the standard symplectic form on ${\mathbb{R}}^4$. We set $\psi\coloneqq F\circ G^{-1}$. This defines a diffeomorphism from $\Gamma(H)$ to $\operatorname{im}(F)$ satisfying $\psi^*\omega_0 = \widetilde{\omega_0}|_{\Gamma(H)}$. By the coisotropic neighbourhood theorem in \cite{Got82}, we may extend $\psi$ to a symplectomorphism defined on some open neighbourhood $U$ of the graph $\Gamma(H)$. Note that the push-forward of the vector field $\partial_s$ via $\psi$ is transverse to $\partial X$ and outward pointing. Let $H'$ be a $C^1$-small perturbation of $H$ supported in a small neighbourhood of ${\mathbb{R}}/{\mathbb{Z}}\times 0$ with the following properties:
\begin{enumerate}
\item $H'\geq H$
\item Near ${\mathbb{R}}/{\mathbb{Z}}\times 0$ the Hamiltonian $H'$ is given by
\begin{equation}
\label{eq:modified_H_in_prop_modify_hypersurface_such_that_return_map_is_generated_by_positive_hamiltonian}
H'_t(z) = H_t(0) - b |z|^2
\end{equation}
for some positive constant $b$.
\end{enumerate}
This is possible because the Hessian $\nabla^2H_t(0)$ is negative definite. We define $X'$ to be the star-shaped domain which agrees with $X$ outside $\operatorname{im}(\psi)$ and which satisfies
\begin{equation*}
\partial X'\cap \operatorname{im}(\psi) = \psi(\Gamma(H')).
\end{equation*}
The inequality $H'\geq H$ implies that $X$ is contained in $X'$. By \eqref{eq:modified_H_in_prop_modify_hypersurface_such_that_return_map_is_generated_by_positive_hamiltonian}, the orbit $\partial\Sigma$ on $\partial X$ is also an orbit on $\partial X'$. It follows from our construction of $X'$ that there exists an embedding $F':{\mathbb{R}}/{\mathbb{Z}}\times {\mathbb{D}}_\epsilon \rightarrow\partial X'$ which agrees with $F$ near ${\mathbb{R}}/{\mathbb{Z}}\times \partial{\mathbb{D}}_\epsilon$ such that
\begin{equation*}
F'^*\omega_0 = \omega_0 + dH'\wedge dt.
\end{equation*}
We define $\Sigma'\subset\partial X'$ to agree with $\Sigma$ outside $\operatorname{im}(F')$ and to be given by
\begin{equation*}
\{F'(t,r)\mid 0\leq r\leq \epsilon\enspace \text{and}\enspace t\in{\mathbb{R}}/{\mathbb{Z}}\}
\end{equation*}
inside $\operatorname{im}(F')$. This clearly defines a disk-like surface of section of the Reeb flow on $\partial X'$. Its symplectic area agrees with the symplectic area of $\Sigma$ because $\partial\Sigma'=\partial\Sigma$. It follows from \eqref{eq:modified_H_in_prop_modify_hypersurface_such_that_return_map_is_generated_by_positive_hamiltonian} that the local return map of the orbit $\partial\Sigma'$ is smoothly conjugated to a rotation.\\
Let us replace $(X,\Sigma)$ by $(X',\Sigma')$. We may choose a smooth parametrization $f:{\mathbb{D}}\rightarrow\Sigma$ such that near the boundary $\partial{\mathbb{D}}$ the pullback $\omega\coloneqq f^*\omega_0$ is rotation invariant and the first return map $\phi$ is a rotation. Let $\widetilde{\phi}_0$ denote the lift of $\phi$ with respect to a degree $0$ trivialization. Note that by positivity of the constant $b$ in \eqref{eq:modified_H_in_prop_modify_hypersurface_such_that_return_map_is_generated_by_positive_hamiltonian}, the rotation angle of $\widetilde{\phi}_0$ near $\partial{\mathbb{D}}$ must be strictly positive. Thus $\widetilde{\phi}_0$ can be generated by a Hamiltonian $H$ which vanishes on the boundary and is autonomous and strictly positive in some neighbourhood of $\partial{\mathbb{D}}$. We do not know whether we can choose $H$ to be strictly positive everywhere in the interior. Our strategy is to construct a domain $X'$ which contains $X$ and agrees with $X$ outside an open neighbourhood of $\Sigma$ such that the degree $0$ lift of the first return map of the Reeb flow on $\partial X'$ can be generated by a Hamiltonian $H'$ which agrees with $H$ near $\partial{\mathbb{D}}$ and is strictly positive in the interior. Let $K:{\mathbb{R}}/{\mathbb{Z}}\times{\mathbb{D}}\rightarrow{\mathbb{R}}$ be a non-negative Hamiltonian which is compactly supported in the interior $\operatorname{int}({\mathbb{R}}/{\mathbb{Z}}\times{\mathbb{D}})$. The composition $\widetilde{\phi}_0\circ (\phi_K^t)_{t\in [0,1]}\in \widetilde{\operatorname{Diff}}({\mathbb{D}},\omega)$ is generated by the Hamiltonian
\begin{equation}
\label{eq:composite_hamiltonian_in_prop_modify_hypersurface_such_that_return_map_is_generated_by_positive_hamiltonian}
(H\#K)_t\coloneqq H_t + K_t\circ (\phi_H^t)^{-1}.
\end{equation}
If $K$ is sufficiently large, then this Hamiltonian is strictly positive. The Hamiltonian $H\#K$ need not be $1$-periodic. This can be remedied as follows: First note that we can assume that $H_t$ is strictly positive in the interior for $t$ in some open neighbourhood of $0\in{\mathbb{R}}/{\mathbb{Z}}$. Then it suffices to consider $K$ with the property that $K_t$ vanishes for $t$ near $0$. In this situation $H\#K$ smoothly extends to a $1$-periodic Hamiltonian which is strictly positive in the interior and autonomous and rotation invariant near the boundary. Our goal is to construct $X'$ such that the degree $0$ lift of the first return map is given by $\widetilde{\phi}_0\circ (\phi_K^t)_{t\in [0,1]}$.\\
Let $D\subset\operatorname{int}({\mathbb{D}})$ be a closed disk centred at $0$ and containing the support of $K$. After possibly increasing the radius of $D$, we can assume that $\lambda\coloneqq f^*\lambda_0$ is a Liouville $1$-form on $D$ whose associated Liouville vector field is transverse to $\partial D$. Let $\epsilon>0$ be sufficiently small and let
\begin{equation*}
F:[0,\epsilon]\times D\rightarrow \partial X
\end{equation*}
be the unique embedding such that $F(0,z) = f(z)$ and such that the pullback of the Reeb vector field $R$ on $\partial X$ via $F$ is given by $\partial_t$ where $t$ is the coordinate on $[0,\epsilon]$. This implies that $F^*\lambda_0 = \lambda + dt$. Let $M\coloneqq {\mathbb{R}}\times [0,\epsilon]\times D$ be the symplectization of $([0,\epsilon]\times D,\lambda + dt)$ equipped with the symplectic form $\omega_M\coloneqq d(e^s(\lambda + dt))$. Using the radial Liouville vector field $Z_0$ on ${\mathbb{R}}^4$, we can extend $F$ to a symplectic embedding $F:(M,\omega_M)\rightarrow ({\mathbb{R}}^4,\omega_0)$ mapping $\partial_s$ to $Z_0$. Our modification of $X$ will be supported inside the image of $F$.\\
We apply Lemma \ref{lem:embedding_time_engergy_extended_phase_space_into_symplectization} and obtain a symplectic embedding $G:(M_+,\widetilde{\omega})\rightarrow (M_+,\omega_M)$. Let us reparametrize $K_t$ such that it is compactly supported in the time interval $(0,\epsilon)$ and still generates the same time-$1$-flow. Let $\Gamma_-(K)$ denote the subgraph of $K$, i.e. the set
\begin{equation*}
\Gamma_-(K) = \{(s,t,z)\in M_+\mid s\leq K_t(z)\}.
\end{equation*}
We define $X'$ to be the union
\begin{equation*}
X'\coloneqq X \cup F(G(\Gamma_-(K))).
\end{equation*}
Note that $\Sigma$ is a disk-like global surface of section of the characteristic foliation on $\partial X'$ and it follows from Lemma \ref{lem:characteristic_foliation_on_graph} that the degree $0$ lift of the first return map is given by $\widetilde{\phi'}_0=\widetilde{\phi}_0\circ (\phi_K^t)_{t\in [0,1]}$. By our construction of $K$, the lift $\widetilde{\phi'}_0$ can be generated by a Hamiltonian satisfying the hypotheses in Theorem \ref{theorem:embedding_result}.\\
The domain $X'$ might not be star-shaped. We argue that $X'$ must be symplectomorphic to a star-shaped domain for appropriate choice of Hamiltonian $K$. Our strategy is to define a contact form $\beta$ on $\partial X'$ such that $d\beta = \omega_0|_{X'}$. This contact form must be tight and there exists a star-shaped domain $X''\subset{\mathbb{R}}^4$ such that $(\partial X',\beta)$ is strictly contactomorphic to $(\partial X'',\lambda_0|_{\partial X''})$. Corollary \ref{cor:corollary_of_gromov_mcduff} then implies that $X''$ is symplectomorphic to $X'$. On the complement of the image of $F:M\rightarrow{\mathbb{R}}^4$ we simply define $\beta\coloneqq \lambda_0|_{\partial X'}$. We parametrize the intersection $\on{im}(F)\cap\partial X'$ via
\begin{equation*}
F':[0,\epsilon]\times D \rightarrow \on{im}(F)\cap\partial X'\quad (t,z)\mapsto F(G(K_t(z),t,z)).
\end{equation*}
A direct computation shows that $F'^*\omega_0 = \omega + dK_t\wedge dt$. Moreover, we have $F'^*\lambda_0 = dt+\lambda$ near the boundary of $[0,\epsilon]\times D$. Thus we may extend $\beta$ to a smooth $1$-form on all of $\partial X'$ by requiring that $F'^*\beta = dt+\lambda + K_t dt$. The resulting $1$-form clearly satisfies $d\beta = \omega_0|_{\partial X'}$. It remains to check that $\beta$ is indeed a contact form on $\on{im}(F)\cap\partial X'$ for an appropriate choice of $K$. This amounts to showing that $F'^* (\beta\wedge d\beta)$ is a positive volume form on $[0,\epsilon]\times D$. We compute
\begin{equation*}
F'^* (\beta\wedge d\beta) = dt\wedge ((1+K_t)\omega + \lambda\wedge dK_t).
\end{equation*}
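In more detail, using $F'^*\beta = (1+K_t)\,dt+\lambda$ and $F'^*d\beta = F'^*\omega_0 = \omega + dK_t\wedge dt$, we have
\begin{align*}
F'^*(\beta\wedge d\beta) &= \big((1+K_t)\,dt+\lambda\big)\wedge\big(\omega + dK_t\wedge dt\big)\\
&= (1+K_t)\,dt\wedge\omega + \lambda\wedge dK_t\wedge dt\\
&= dt\wedge\big((1+K_t)\,\omega + \lambda\wedge dK_t\big),
\end{align*}
where the terms $(1+K_t)\,dt\wedge dK_t\wedge dt$ and $\lambda\wedge\omega$ vanish, the former because $dt$ appears twice and the latter because it is a $3$-form on the $2$-dimensional disk $D$.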
Thus it suffices to construct $K$ such that $(1+K_t)\omega + \lambda\wedge dK_t$ is a positive area form on $D$. The first term clearly is a positive area form since $K_t\geq 0$. Let $W$ denote the Liouville vector field on $D$ induced by $\lambda$. We can guarantee that the second term is non-negative by choosing $K_t$ to be constant outside a small neighbourhood of $\partial D$ and requiring that $dK_t(W)\leq 0$ inside that small neighbourhood. Clearly we still have the freedom to make \eqref{eq:composite_hamiltonian_in_prop_modify_hypersurface_such_that_return_map_is_generated_by_positive_hamiltonian} positive.
\end{proof}
\section{A positivity criterion for Hamiltonian diffeomorphisms}
\label{section:a_positivity_criterion_for_hamiltonian_diffeomorphisms}
The results of this section are inspired by the fixed point theorem stated in \cite[Theorem 3]{ABHS18}. In fact, our proofs rely on the generalized generating functions introduced in \cite[sections 2.3 to 2.6]{ABHS18}.
\subsection{Statement of the positivity criterion}
\label{subsection:statement_of_the_positivity_criterion}
The generalized generating function framework we use applies to area-preserving diffeomorphisms of the disk ${\mathbb{D}}$ which are {\it radially monotone} in the sense of the following definition.
\begin{definition}
A diffeomorphism $\phi\in\operatorname{Diff}^+({\mathbb{D}})$ is called {\it radially monotone} if it fixes the center $0$ and if the image of the radial foliation of ${\mathbb{D}}$ under $\phi$ is transverse to the foliation of ${\mathbb{D}}$ by circles centred at $0$.
\end{definition}
We state the main result of this section.
\begin{theorem}
\label{theorem:positivity_criterion_for_radially_monotone_diffeomorphisms}
Let $\omega$ be a smooth $2$-form on ${\mathbb{D}}$ which is positive in the interior $\operatorname{int}({\mathbb{D}})$. Moreover, assume that $\omega$ is rotation invariant near the origin and the boundary $\partial{\mathbb{D}}$. Let $\widetilde{\phi}\in\widetilde{\operatorname{Diff}}({\mathbb{D}},\omega)$ and set $\phi\coloneqq \pi(\widetilde{\phi})$. Assume that:
\begin{enumerate}
\item $\phi$ fixes the origin and is radially monotone.
\item The restriction of $\phi$ to a small disk centred at the origin is a rotation.
\item The restriction of $\phi$ to a small annular neighbourhood of $\partial{\mathbb{D}}$ is a rotation.
\item The action $\sigma_{\widetilde{\phi}}(p)$ is positive for all fixed points $p$ of $\phi$.
\end{enumerate}
Then there exists a Hamiltonian $H:{\mathbb{R}}/{\mathbb{Z}} \times {\mathbb{D}} \rightarrow {\mathbb{R}}$ with the following properties:
\begin{enumerate}
\item $H$ is strictly positive in the interior $\operatorname{int}({\mathbb{D}})$ and vanishes on the boundary $\partial{\mathbb{D}}$.
\item $H$ is autonomous and rotation invariant in some neighbourhood of the origin and $\partial{\mathbb{D}}$.
\item The Hamiltonian vector field $X_{H_t}$ defined by $\iota_{X_{H_t}}\omega = dH_t$ in the interior $\operatorname{int}({\mathbb{D}})$ smoothly extends to the closed disk ${\mathbb{D}}$ and is tangent to $\partial {\mathbb{D}}$.
\item The arc $(\phi_H^t)_{t\in [0,1]}$ represents $\widetilde{\phi}$.
\end{enumerate}
\end{theorem}
In order to prove Theorem \ref{theorem:strong_viterbo_near_round_ball}, we need to apply Theorem \ref{theorem:positivity_criterion_for_radially_monotone_diffeomorphisms} to area-preserving diffeomorphisms of the disk which are $C^1$-close to the identity. Such diffeomorphisms need not be radially monotone. However, they are smoothly conjugated to radially monotone diffeomorphisms (see \cite[Proposition 2.24]{ABHS18}). We use this observation to deduce the following corollary of Theorem \ref{theorem:positivity_criterion_for_radially_monotone_diffeomorphisms}.
\begin{cor}
\label{cor:positivity_criterion_for_diffeomorphisms_close_to_the_identity}
Let $\omega$ be a smooth $2$-form on ${\mathbb{D}}$ which is positive in the interior $\operatorname{int}({\mathbb{D}})$. Let $\widetilde{\phi}\in\widetilde{\operatorname{Diff}}({\mathbb{D}},\omega)$ and set $\phi\coloneqq \pi(\widetilde{\phi})$. Assume that:
\begin{enumerate}
\item $\widetilde{\phi}$ is $C^1$-close to the identity $\operatorname{id}_{\mathbb{D}}$.
\item $\phi$ is smoothly conjugated to a rotation in some neighbourhood of the boundary $\partial{\mathbb{D}}$.
\item The action $\sigma_{\widetilde{\phi}}(p)$ is positive for all fixed points $p$ of $\phi$.
\end{enumerate}
Then there exists a Hamiltonian $H:{\mathbb{R}}/{\mathbb{Z}} \times {\mathbb{D}}\rightarrow{\mathbb{R}}$ with the following properties:
\begin{enumerate}
\item $H$ is strictly positive in the interior $\operatorname{int}({\mathbb{D}})$ and vanishes on the boundary $\partial{\mathbb{D}}$.
\item $H$ is autonomous in some neighbourhood of $\partial{\mathbb{D}}$.
\item The Hamiltonian vector field $X_{H_t}$ defined by $\iota_{X_{H_t}}\omega = dH_t$ in the interior $\operatorname{int}({\mathbb{D}})$ smoothly extends to the closed disk ${\mathbb{D}}$ and is tangent to $\partial {\mathbb{D}}$.
\item The arc $(\phi_H^t)_{t\in [0,1]}$ represents $\widetilde{\phi}$.
\end{enumerate}
\end{cor}
\subsection{Lifts to the strip}
\label{subsection:lifts_to_the_strip}
In order to represent area-preserving diffeomorphisms of the disk ${\mathbb{D}}$ by generalized generating functions, it will be convenient to lift them to the strip $S\coloneqq [0,1]\times{\mathbb{R}}$. This is carefully explained in \cite[section 2.3]{ABHS18}. Here we summarize the relevant material. Consider the map
\begin{equation*}
p:S\rightarrow{\mathbb{D}}\quad (r,\theta)\mapsto r\cdot e^{i\theta}.
\end{equation*}
The restriction of $p$ to $(0,1]\times{\mathbb{R}}$ is a covering map to ${\mathbb{D}}\setminus\{0\}$. The translation
\begin{equation*}
T:S\rightarrow S\quad (r,\theta)\mapsto (r,\theta+2\pi)
\end{equation*}
generates the group of deck transformations. Any orientation preserving diffeomorphism $\phi\in\operatorname{Diff}^+({\mathbb{D}})$ fixing the origin lifts to a diffeomorphism $\Phi\in\operatorname{Diff}^+(S)$ (see \cite[Lemma 2.10]{ABHS18}). The lift $\Phi$ commutes with $T$, i.e.
\begin{equation*}
T\circ\Phi=\Phi\circ T.
\end{equation*}
Any two lifts are related by composition with a deck transformation. A lift $\widetilde{\phi}\in\widetilde{\operatorname{Diff}}({\mathbb{D}})$ of $\phi$ to the universal cover uniquely specifies a lift $\Phi\in\operatorname{Diff}^+(S)$ as follows: Represent $\widetilde{\phi}$ by a smooth arc $(\phi_t)_{t\in [0,1]}$ in $\operatorname{Diff}^+({\mathbb{D}})$ starting at the identity. This arc uniquely lifts to an arc $(\Phi_t)_{t\in [0,1]}$ in $\operatorname{Diff}^+(S)$ starting at the identity. Now simply set $\Phi\coloneqq \Phi_1$.\\
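For example, the rotation $\phi(z) = e^{i\alpha}z$ admits the lifts
\begin{equation*}
\Phi_k(r,\theta) = (r,\theta+\alpha+2\pi k),\qquad k\in{\mathbb{Z}},
\end{equation*}
any two of which differ by a power of the deck transformation $T$. The lift specified by the arc $(\phi_t)_{t\in[0,1]}$, $\phi_t(z)=e^{i\alpha t}z$, is $\Phi_0$.\\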
If the diffeomorphism $\phi\in\operatorname{Diff}^+({\mathbb{D}})$ is radially monotone, then any lift $\Phi$ is monotone in the sense of the following definition.
\begin{definition}
\label{definition:monotone_diffeomorphism_of_strip}
Let $\Phi\in\operatorname{Diff}^+(S)$ be a diffeomorphism and denote the components of $\Phi$ by $(R,\Theta)$. We call $\Phi$ {\it monotone} if $\partial_1R(r,\theta)>0$ for all $(r,\theta)\in S$.
\end{definition}
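For instance, any twist
\begin{equation*}
\Phi(r,\theta) = (r,\theta+g(r))
\end{equation*}
for a smooth function $g:[0,1]\rightarrow{\mathbb{R}}$ is monotone, since its first component $R(r,\theta)=r$ satisfies $\partial_1R = 1 > 0$. Note that such a $\Phi$ also commutes with $T$ and preserves the boundary components of $S$.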
The following characterization of monotonicity will be useful in later sections.
\begin{lem}
\label{lem:monotonicity_equivalent_to_diagonal_projection_being_diffeomorphism}
Let $\Phi\in\operatorname{Diff}^+(S)$ be a diffeomorphism preserving the boundary components of $S$. Let $\Gamma(\Phi)\subset S\times S$ denote the graph of $\Phi$. Then $\Phi$ is monotone if and only if
\begin{equation*}
\pi: \Gamma(\Phi)\subset S\times S \rightarrow S\quad (r,\theta,R,\Theta)\mapsto (R,\theta)
\end{equation*}
is a diffeomorphism.
\end{lem}
\begin{proof}
Let $\Phi(r,\theta) = (R(r,\theta),\Theta(r,\theta))$ denote the components of $\Phi$. The map
\begin{equation*}
(\operatorname{id}_S,\Phi):S\rightarrow \Gamma(\Phi)\quad (r,\theta)\mapsto (r,\theta,R(r,\theta),\Theta(r,\theta))
\end{equation*}
is a parametrization of $\Gamma(\Phi)$. Clearly, $\pi$ is a diffeomorphism if and only if the composition
\begin{equation*}
\pi\circ (\operatorname{id}_S,\Phi):S\rightarrow S\quad (r,\theta)\mapsto (R(r,\theta),\theta)
\end{equation*}
is a diffeomorphism. This is the case if and only if
\begin{equation}
\label{eq:R_in_proof_of_monotonicity_criterion}
R(\cdot,\theta):[0,1]\rightarrow [0,1]
\end{equation}
is a diffeomorphism for every fixed $\theta\in {\mathbb{R}}$. By assumption $\Phi$ preserves the boundary components of $S$. Thus $R(0,\theta) = 0$ and $R(1,\theta)=1$ for every $\theta$. Hence \eqref{eq:R_in_proof_of_monotonicity_criterion} is a diffeomorphism if and only if $\partial_1R>0$, i.e. $\Phi$ is monotone.
\end{proof}
Now suppose that ${\mathbb{D}}$ is equipped with a $2$-form $\omega$ which is positive in the interior $\operatorname{int}({\mathbb{D}})$. Then
\begin{equation*}
\Omega\coloneqq p^*\omega
\end{equation*}
is a $2$-form on $S$ which is positive in the interior and invariant under the translation $T$, i.e.
\begin{equation*}
T^*\Omega = \Omega.
\end{equation*}
If $\phi\in\operatorname{Diff}({\mathbb{D}},\omega)$ fixes the origin and preserves $\omega$, then any lift $\Phi$ preserves $\Omega$, i.e. $\Phi\in\operatorname{Diff}(S,\Omega)$.
\subsection{Generalized generating functions on the strip}
\label{subsection:generalized_generating_functions}
Throughout this section, let $\Omega$ be a $2$-form on the strip $S$ which is positive in the interior $\operatorname{int}(S)$ and preserved by $T$. Moreover, let $\Phi\in\operatorname{Diff}(S,\Omega)$ be a diffeomorphism which preserves $\Omega$ and the boundary components of $S$ and which commutes with $T$. We equip the product $S\times S$ with the closed $2$-form $(-\Omega)\oplus\Omega$. The restriction of this $2$-form to the interior of $S\times S$ is a symplectic form. Consider a primitive $1$-form $\alpha$ of $(-\Omega)\oplus\Omega$ whose restriction to the diagonal $\Delta\subset S\times S$ vanishes. The graph $\Gamma(\Phi)$ is a Lagrangian submanifold of $(S\times S,(-\Omega)\oplus\Omega)$, i.e. the restriction of $(-\Omega)\oplus\Omega$ to $\Gamma(\Phi)$ vanishes. Therefore, the restriction of the primitive $\alpha$ to $\Gamma(\Phi)$ is closed. Since $S$ is simply connected, it is also exact, i.e. it can be written as
\begin{equation}
\label{eq:defining_equation_generalized_generating_function}
\alpha|_{\Gamma(\Phi)} = d W_{\Phi,\alpha}
\end{equation}
for a function $W_{\Phi,\alpha}:\Gamma(\Phi)\rightarrow{\mathbb{R}}$, which is unique up to addition of a constant. We call $W_{\Phi,\alpha}$ a {\it generalized generating function} for $\Phi$ with respect to $\alpha$ (cf. \cite[Definition 9.3.8]{MS17}). In Lemma \ref{lem:normalization_generalized_generating_function} below, we specify a preferred normalization of $W_{\Phi,\alpha}$ which we will use throughout this paper. For $z\in \{1\}\times {\mathbb{R}}\subset \partial S$, let $\gamma_z$ denote the path
\begin{equation*}
\gamma_z : [0,1]\rightarrow \partial S\times \partial S\qquad \gamma_z(t)\coloneqq (z,(1-t)z+t\Phi(z)).
\end{equation*}
\begin{lem}
\label{lem:normalization_generalized_generating_function}
There exists a unique smooth function $W_{\Phi,\alpha}:\Gamma(\Phi)\rightarrow{\mathbb{R}}$ satisfying
\begin{equation}
\label{eq:characterizing_equation_generalized_generating_function_general_primitive}
d W_{\Phi,\alpha} = \alpha|_{\Gamma(\Phi)}
\end{equation}
and
\begin{equation}
\label{eq:normalization_equation_generalized_generating_function_general_primitive}
W_{\Phi,\alpha}(z,\Phi(z)) = \int_{\gamma_z}\alpha\qquad\text{for all}\enspace z\in\{1\}\times {\mathbb{R}}.
\end{equation}
\end{lem}
\begin{proof}
Set $z_0\coloneqq (1,0)$. Clearly, there exists a unique function $W_{\Phi,\alpha}$ satisfying \eqref{eq:characterizing_equation_generalized_generating_function_general_primitive} such that \eqref{eq:normalization_equation_generalized_generating_function_general_primitive} holds for $z_0$. We need to check that this function $W_{\Phi,\alpha}$ satisfies \eqref{eq:normalization_equation_generalized_generating_function_general_primitive} for all $z\in \{1\}\times {\mathbb{R}}$. Fix $z\in\{1\}\times{\mathbb{R}}$. We define two paths $\delta$ and $\epsilon$ by
\begin{equation*}
\delta : [0,1] \rightarrow \partial S \times \partial S \qquad \delta(t) = ((1-t)z_0+tz, \Phi((1-t)z_0+tz))
\end{equation*}
and
\begin{equation*}
\epsilon : [0,1] \rightarrow \partial S \times \partial S \qquad \epsilon(t) = ((1-t)z_0+tz,(1-t)z_0+tz).
\end{equation*}
The path $\delta$ is contained in $\Gamma(\Phi)$ and goes from $(z_0,\Phi(z_0))$ to $(z,\Phi(z))$. The path $\epsilon$ connects $(z_0,z_0)$ to $(z,z)$ and is contained in $\Delta$. The concatenations $\gamma_{z_0} \# \delta$ and $\epsilon \# \gamma_z$ are homotopic inside $\partial S\times\partial S$ with fixed end points. Since the restriction of $(-\Omega)\oplus\Omega$ to $\partial S\times \partial S$ vanishes, the restriction of $\alpha$ to this subspace is closed. Thus
\begin{equation*}
\int_{\gamma_{z_0} \# \delta} \alpha = \int_{\epsilon \# \gamma_z} \alpha.
\end{equation*}
Using \eqref{eq:characterizing_equation_generalized_generating_function_general_primitive} and the fact that $W_{\Phi,\alpha}$ satisfies \eqref{eq:normalization_equation_generalized_generating_function_general_primitive} for $z_0$, the left hand side evaluates to
\begin{equation*}
\int_{\gamma_{z_0} \# \delta} \alpha = \int_{\gamma_{z_0}}\alpha + \int_{\delta}\alpha = W_{\Phi,\alpha}(z_0,\Phi(z_0)) + W_{\Phi,\alpha}(z,\Phi(z)) - W_{\Phi,\alpha}(z_0,\Phi(z_0)) = W_{\Phi,\alpha}(z,\Phi(z)).
\end{equation*}
Since the restriction of $\alpha$ to the diagonal $\Delta$ vanishes, the right hand side is given by
\begin{equation*}
\int_{\epsilon \# \gamma_z} \alpha = \int_{\epsilon}\alpha + \int_{\gamma_z}\alpha =\int_{\gamma_z}\alpha.
\end{equation*}
This concludes our proof that \eqref{eq:normalization_equation_generalized_generating_function_general_primitive} holds for all $z\in\{1\}\times{\mathbb{R}}$.
\end{proof}
Let $\beta$ be a second primitive $1$-form of $(-\Omega)\oplus\Omega$ whose restriction to $\Delta$ vanishes. Then the difference $\alpha-\beta$ is exact, i.e. there exists a smooth function $u$ on $S\times S$ such that $\alpha-\beta = du$. Since the restrictions of $\alpha$ and $\beta$ to the diagonal $\Delta$ vanish, the function $u$ must be constant on $\Delta$. Let us normalize $u$ such that $u|_\Delta=0$. The following lemma relates the generalized generating functions of $\Phi$ with respect to $\alpha$ and $\beta$.
\begin{lem}
\label{lem:generalized_generating_functions_change_of_primitive}
Let $W_{\Phi,\alpha}$ and $W_{\Phi,\beta}$ be the generalized generating functions of $\Phi$ with respect to $\alpha$ and $\beta$. Then
\begin{equation*}
W_{\Phi,\alpha} = W_{\Phi,\beta} + u|_{\Gamma(\Phi)}.
\end{equation*}
In particular, since $u$ vanishes on the diagonal $\Delta$, the value of a generalized generating function at a fixed point of $\Phi$ is independent of the choice of primitive $1$-form.
\end{lem}
\begin{proof}
We set $W\coloneqq W_{\Phi,\beta} + u|_{\Gamma(\Phi)}$. We need to check that this function satisfies \eqref{eq:characterizing_equation_generalized_generating_function_general_primitive} and \eqref{eq:normalization_equation_generalized_generating_function_general_primitive}. In order to show \eqref{eq:characterizing_equation_generalized_generating_function_general_primitive}, we compute
\begin{equation*}
dW = dW_{\Phi,\beta} + d u|_{\Gamma(\Phi)} = \beta|_{\Gamma(\Phi)} + (\alpha-\beta)|_{\Gamma(\Phi)} = \alpha|_{\Gamma(\Phi)}.
\end{equation*}
Let $z\in\{1\}\times{\mathbb{R}}$. We have
\begin{equation*}
W(z,\Phi(z)) = W_{\Phi,\beta}(z,\Phi(z)) + u(z,\Phi(z)) = \int_{\gamma_z}\beta + \int_{\gamma_z} du = \int_{\gamma_z}\alpha.
\end{equation*}
Here the second equality uses that $u$ vanishes on the diagonal $\Delta$. This shows \eqref{eq:normalization_equation_generalized_generating_function_general_primitive}.
\end{proof}
There are two primitives of $(-\Omega)\oplus\Omega$ whose associated generalized generating functions are of particular importance to our discussion. The first such primitive is given by $(-\Lambda)\oplus\Lambda$ where $\Lambda$ is a primitive of the area form $\Omega$ on $S$. It will be useful to regard the associated generalized generating function $W_{\Phi,(-\Lambda)\oplus\Lambda}$ as a function on $S$ via the parametrization $(\operatorname{id}_S,\Phi):S\rightarrow\Gamma(\Phi)$ of the graph $\Gamma(\Phi)$. We define
\begin{equation*}
\Sigma_{\Phi,\Lambda} \coloneqq W_{\Phi,(-\Lambda)\oplus\Lambda} \circ (\operatorname{id}_S,\Phi)
\end{equation*}
and call it the {\it action} of $\Phi$ with respect to $\Lambda$. The characterizing equations \eqref{eq:characterizing_equation_generalized_generating_function_general_primitive} and \eqref{eq:normalization_equation_generalized_generating_function_general_primitive} for the generalized generating function $W_{\Phi,(-\Lambda)\oplus\Lambda}$ can be expressed in terms of the action $\Sigma_{\Phi,\Lambda}$ as
\begin{equation}
\label{eq:characterizing_equation_action_on_strip}
\Phi^*\Lambda - \Lambda = d\Sigma_{\Phi,\Lambda}
\end{equation}
and
\begin{equation}
\label{eq:normalization_equation_action_on_strip}
\Sigma_{\Phi,\Lambda}(1,\theta) = \int_{\delta_\theta}\Lambda
\qquad \text{for all}\enspace \theta\in{\mathbb{R}}
\end{equation}
where $\delta_\theta$ denotes the path
\begin{equation*}
\delta_\theta : [0,1]\rightarrow \partial S\quad \delta_\theta(t)\coloneqq (1-t)\cdot(1,\theta)+t\cdot \Phi(1,\theta).
\end{equation*}
The following basic properties of the action $\Sigma_{\Phi,\Lambda}$ will be useful later on.
\begin{lem}
\label{lem:action_on_strip_basic_properties}
\begin{enumerate}
\item Let $(\Phi_t)_{t\in [0,1]}$ be an arc in $\operatorname{Diff}(S,\Omega)$ starting at the identity. Let
\begin{equation*}
H:[0,1] \times S \rightarrow {\mathbb{R}}
\end{equation*}
be a Hamiltonian generating this arc. If we normalize $H$ by $H_t(1,\theta)=0$, then the action $\Sigma_{\Phi,\Lambda}$ may be computed via
\begin{equation*}
\Sigma_{\Phi,\Lambda}(z) = \int_{\{t\mapsto\Phi_t(z)\}}\Lambda + \int_0^1 H_t(\Phi_t(z))dt.
\end{equation*}
\item Suppose that $\Lambda=p^*\lambda$ is the pull-back of a primitive $\lambda$ of $\omega$ and that $\Phi$ is the lift of a diffeomorphism $\widetilde{\phi}\in\widetilde{\operatorname{Diff}}({\mathbb{D}},\omega)$ fixing the origin. Then $\Sigma_{\Phi,\Lambda} = \sigma_{\widetilde{\phi},\lambda}\circ p$.
\end{enumerate}
\end{lem}
\begin{proof}
Statement (1) is the analog of \cite[Proposition 2.6]{ABHS18}, which deals with area-preserving diffeomorphisms of the disk ${\mathbb{D}}$. The proof given in \cite{ABHS18} carries over to the case of the strip almost verbatim and we will not repeat it here.\\
We prove (2). We compute
\begin{equation*}
d(\sigma_{\widetilde{\phi},\lambda} \circ p) = p^* d\sigma_{\widetilde{\phi},\lambda} = p^* (\phi^*\lambda - \lambda) = \Phi^*p^*\lambda - p^*\lambda = \Phi^*\Lambda - \Lambda.
\end{equation*}
Here the third equality uses the identity $p\circ\Phi = \phi\circ p$. Let $(\phi_t)_{t\in [0,1]}$ be an arc in $\operatorname{Diff}^+({\mathbb{D}})$ representing $\widetilde{\phi}$. Then
\begin{equation*}
\sigma_{\widetilde{\phi},\lambda}\circ p(1,\theta) = \int_{t\mapsto\phi_t(p(1,\theta))} \lambda = \int_{p\circ\delta_\theta} \lambda = \int_{\delta_\theta} p^*\lambda = \int_{\delta_\theta} \Lambda = \Sigma_{\Phi,\Lambda}(1,\theta).
\end{equation*}
Here the second equality uses that the restriction of $\lambda$ to $\partial{\mathbb{D}}$ is closed and that $t\mapsto\phi_t(p(1,\theta))$ and $p\circ\delta_\theta$ are homotopic in $\partial{\mathbb{D}}$ with fixed end points. This shows that $\sigma_{\widetilde{\phi},\lambda} \circ p$ satisfies \eqref{eq:characterizing_equation_action_on_strip} and \eqref{eq:normalization_equation_action_on_strip}. Thus $\sigma_{\widetilde{\phi},\lambda} \circ p = \Sigma_{\Phi,\Lambda}$.
\end{proof}
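As an illustration of statement (1), let $\omega$ be the standard area form on ${\mathbb{D}}$, so that $\Omega = p^*\omega = r\,dr\wedge d\theta$, and take the primitive $\Lambda = \tfrac{r^2}{2}\,d\theta$. The rotation arc $(\Phi_t)_{t\in [0,1]}$ given by $\Phi_t(r,\theta) = (r,\theta+\alpha t)$ is generated by the autonomous Hamiltonian $H(r,\theta) = \tfrac{\alpha}{2}(1-r^2)$, which satisfies the normalization $H(1,\theta)=0$. The formula in statement (1) yields
\begin{equation*}
\Sigma_{\Phi_1,\Lambda}(r,\theta) = \int_{\{t\mapsto\Phi_t(r,\theta)\}}\Lambda + \int_0^1 H(\Phi_t(r,\theta))\,dt = \frac{\alpha r^2}{2} + \frac{\alpha}{2}(1-r^2) = \frac{\alpha}{2},
\end{equation*}
so the action of the lifted rotation is constant.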
Let us define a second special primitive of $(-\Omega)\oplus\Omega$. We write
\begin{equation*}
\Omega = F(r,\theta)\cdot dr\wedge d\theta
\end{equation*}
where $F$ is a smooth function on $S$ which is positive in the interior and invariant under $T$. Next, we define functions $A$ and $B$ on $S$ by
\begin{equation*}
A(r,\theta)\coloneqq \int_0^rF(s,\theta)ds \quad\quad \text{and}\quad\quad B(r,\theta) \coloneqq \int_0^\theta F(r,\vartheta) d\vartheta.
\end{equation*}
We let $(r,\theta,R,\Theta)$ denote coordinates on $S\times S$ and define
\begin{equation*}
\Xi \coloneqq (A(R,\theta)-A(r,\theta))\cdot d\theta + (B(R,\theta)-B(R,\Theta))\cdot dR.
\end{equation*}
A direct computation shows that $d\Xi=(-\Omega)\oplus\Omega$ and that the restriction of $\Xi$ to the diagonal $\Delta\subset S\times S$ vanishes. The resulting generalized generating function $W_{\Phi,\Xi}$ is particularly useful if the diffeomorphism $\Phi\in\operatorname{Diff}(S,\Omega)$ is monotone in the sense of Definition \ref{definition:monotone_diffeomorphism_of_strip}. From now on, let us assume that this is the case. Consider the projection
\begin{equation}
\label{eq:diagonal_projection_for_generating_function}
\pi_\Delta: \Gamma(\Phi)\rightarrow S\quad (r,\theta,R,\Theta)\mapsto (R,\theta).
\end{equation}
By Lemma \ref{lem:monotonicity_equivalent_to_diagonal_projection_being_diffeomorphism}, $\pi_\Delta$ is a diffeomorphism. It will be convenient to view $W_{\Phi,\Xi}$ as a function on $S$ via the diffeomorphism $\pi_\Delta$. We abbreviate
\begin{equation*}
W\coloneqq W_{\Phi,\Xi}\circ\pi_\Delta^{-1}.
\end{equation*}
Equation \eqref{eq:characterizing_equation_generalized_generating_function_general_primitive} can be rewritten in terms of $W$ as
\begin{equation}
\label{eq:defining_equation_generating_function}
\begin{cases}
\partial_1W(R,\theta) = B(R,\theta)-B(R,\Theta)\\
\partial_2W(R,\theta) = A(R,\theta) - A(r,\theta)
\end{cases}
\qquad \text{for all}\enspace (r,\theta,R,\Theta)\in\Gamma(\Phi).
\end{equation}
The normalization \eqref{eq:normalization_equation_generalized_generating_function_general_primitive} simply becomes
\begin{equation}
\label{eq:normalization_generating_function}
W|_{\{1\}\times {\mathbb{R}}} = 0
\end{equation}
because the restriction of the primitive $\Xi$ to $(\{1\}\times{\mathbb{R}})\times (\{1\}\times{\mathbb{R}})$ vanishes. We summarize the relevant properties of $W$ in the following proposition (see Proposition 2.15, Lemma 2.16 and Proposition 2.17 in \cite{ABHS18}).
\begin{prop}
\label{prop:generalized_generating_functions}
Suppose that $\Phi\in\operatorname{Diff}(S,\Omega)$ is monotone and commutes with $T$. Then there exists a unique generating function $W:S\rightarrow{\mathbb{R}}$ satisfying equations \eqref{eq:defining_equation_generating_function} and the normalization \eqref{eq:normalization_generating_function}. The function $W$ is invariant under $T$ and is constant on the boundary components of $S$. The interior critical points of $W$ are precisely the interior fixed points of $\Phi$. We have $W(p)=\Sigma_{\Phi,\Lambda}(p)$ for all fixed points $p$ of $\Phi$ and any primitive $\Lambda$ of $\Omega$. If the restriction of $\Lambda$ to $\{0\}\times{\mathbb{R}}$ vanishes, then $W$ agrees with $\Sigma_{\Phi,\Lambda}$ on $\{0\}\times{\mathbb{R}}$.
\end{prop}
\begin{proof}
Existence and uniqueness of $W$ follow from Proposition 2.15 in \cite{ABHS18}. Moreover, this proposition asserts that $W$ is invariant under $T$ and constant on the boundary components of $S$ and that the interior critical points of $W$ are precisely the interior fixed points of $\Phi$. The remaining assertions are proved in \cite[Lemma 2.16 and Proposition 2.17]{ABHS18} in the special case that $\Phi$ is the lift of a diffeomorphism $\widetilde{\phi}\in\widetilde{\operatorname{Diff}}({\mathbb{D}},\omega)$ and $\Lambda=p^*\lambda$ for a primitive $\lambda$ of $\omega$. Since the statements in Proposition \ref{prop:generalized_generating_functions} are slightly more general, we provide independent proofs. It is a direct consequence of Lemma \ref{lem:generalized_generating_functions_change_of_primitive} that $W(p) = \Sigma_{\Phi,\Lambda}(p)$ for all fixed points $p$ of $\Phi$. Suppose that the restriction of $\Lambda$ to $\{0\}\times{\mathbb{R}}$ vanishes. This implies that the restriction of $(-\Lambda)\oplus\Lambda$ to $(\{0\}\times{\mathbb{R}})^2$ vanishes. Similarly, the restriction of the primitive $\Xi$ to this subspace vanishes. It is a direct consequence of \eqref{eq:characterizing_equation_generalized_generating_function_general_primitive} that the generating functions $W_{\Phi,\Xi}$ and $W_{\Phi,(-\Lambda)\oplus\Lambda}$ are both constant on $\Gamma(\Phi)\cap (\{0\}\times{\mathbb{R}})^2$. Let $u$ be the unique smooth function on $S\times S$ whose restriction to the diagonal $\Delta$ vanishes and which satisfies $\Xi = (-\Lambda)\oplus\Lambda + du$. Since both $\Xi$ and $(-\Lambda)\oplus\Lambda$ restrict to zero on $(\{0\}\times{\mathbb{R}})^2$, the function $u$ vanishes on this set. By Lemma \ref{lem:generalized_generating_functions_change_of_primitive}, this implies that $W_{\Phi,\Xi}$ and $W_{\Phi,(-\Lambda)\oplus\Lambda}$ agree on $\Gamma(\Phi)\cap (\{0\}\times{\mathbb{R}})^2$. 
We conclude that $W$ and $\Sigma_{\Phi,\Lambda}$ agree and are constant on $\{0\}\times{\mathbb{R}}$.
\end{proof}
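As a simple illustration, suppose $\Omega = dr\wedge d\theta$, i.e.\ $F\equiv 1$, so that $A(r,\theta) = r$ and $B(r,\theta) = \theta$, and let $\Phi$ be the monotone twist $\Phi(r,\theta) = (r,\theta+g(r))$ for a smooth function $g:[0,1]\rightarrow{\mathbb{R}}$. On the graph $\Gamma(\Phi)$ we have $r=R$ and $\Theta = \theta+g(R)$, so equations \eqref{eq:defining_equation_generating_function} reduce to $\partial_1W(R,\theta) = -g(R)$ and $\partial_2W(R,\theta) = 0$. Together with the normalization \eqref{eq:normalization_generating_function} this gives
\begin{equation*}
W(r,\theta) = \int_r^1 g(s)\,ds.
\end{equation*}
In accordance with Proposition \ref{prop:generalized_generating_functions}, this function is invariant under $T$, constant on the boundary components of $S$, and its interior critical points are precisely the points with $g(r)=0$, i.e.\ the interior fixed points of $\Phi$.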
In order to avoid technicalities involving the behaviour of $\Phi$ and $W$ near $\partial S$, let us now assume in addition that $\Omega$ is translation invariant in some small neighbourhood of $\partial S$. In other words, $F(r,\theta)$ does not depend on $\theta$ for $r$ sufficiently close to $0$ or $1$. Moreover, we will restrict our attention to diffeomorphisms $\Phi$ whose restrictions to neighbourhoods of the two boundary components of $S$ are translations. More precisely, we will assume that there exist constants $\theta_0$ and $\theta_1$ such that for $j\in\{0,1\}$
\begin{equation}
\label{eq:translation_condition_on_diffeomorphisms}
\Phi(r,\theta)=(r,\theta+\theta_j) \quad\quad \text{if }r\text{ is sufficiently close to }j.
\end{equation}
Since the following result is not explicitly stated in \cite{ABHS18}, we provide a proof.
\begin{prop}
\label{prop:correspondence_symplectomorphisms_generating_functions}
There exists a bijective correspondence between the set of all diffeomorphisms $\Phi\in\operatorname{Diff}(S,\Omega)$ which are monotone, commute with $T$ and satisfy \eqref{eq:translation_condition_on_diffeomorphisms} and the set of all smooth functions $W:S\rightarrow{\mathbb{R}}$ satisfying
\begin{enumerate}
\item\label{item:G_solvable} $0<A(r,\theta)-\partial_2W(r,\theta)<A(1,\theta)$ for all $(r,\theta)\in\operatorname{int}(S)$
\item\label{item:G_derivative_invertible} $\partial_{12}W(r,\theta)<F(r,\theta)$ for all $(r,\theta)\in\operatorname{int}(S)$
\item\label{item:G_periodic} $W\circ T=W$
\item\label{item:G_boundary_behaviour} There exist constants $c_1$, $c_2$ and $c_3$ such that
\begin{equation*}
\begin{cases}
W(r,\theta) = c_1 + c_2\cdot \int_0^rF(s,\theta)ds & \quad\text{if } r \text{ is sufficiently close to 0}\\
W(r,\theta) = c_3\cdot \int_r^1F(s,\theta)ds & \quad\text{if } r \text{ is sufficiently close to 1}.
\end{cases}
\end{equation*}
\end{enumerate}
$\Phi$ and $W$ correspond to each other under this bijection if and only if equations \eqref{eq:defining_equation_generating_function} hold.
\end{prop}
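As a quick consistency check, take $F\equiv 1$, so that $A(r,\theta)=r$, and consider the monotone twist $\Phi(r,\theta)=(r,\theta+g(r))$, where $g:[0,1]\rightarrow{\mathbb{R}}$ is constant near the two boundary components of $S$; equations \eqref{eq:defining_equation_generating_function} and \eqref{eq:normalization_generating_function} then yield the generating function
\begin{equation*}
W(r,\theta)=\int_r^1g(s)\,ds.
\end{equation*}
Property \eqref{item:G_solvable} reads $0<r<1$, property \eqref{item:G_derivative_invertible} reads $0<1$, property \eqref{item:G_periodic} holds since $W$ is independent of $\theta$, and property \eqref{item:G_boundary_behaviour} holds with $c_1=\int_0^1g(s)\,ds$, $c_2=-g(0)$ and $c_3=g(1)$.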
\begin{proof}
Let $\Phi\in\operatorname{Diff}(S,\Omega)$ be a diffeomorphism which is monotone, commutes with $T$ and satisfies \eqref{eq:translation_condition_on_diffeomorphisms}. Let $W$ be the associated generating function satisfying \eqref{eq:defining_equation_generating_function} and the normalization \eqref{eq:normalization_generating_function}. We verify that $W$ satisfies properties (1)-(4). Property (3) is actually a consequence of Proposition \ref{prop:generalized_generating_functions}. It follows from \eqref{eq:translation_condition_on_diffeomorphisms} that any point $(r,\theta,R,\Theta)\in\Gamma(\Phi)$ sufficiently close to the boundary satisfies $r=R$. Thus equation \eqref{eq:defining_equation_generating_function} implies that
\begin{equation*}
\partial_2 W(R,\theta) = A(R,\theta)-A(r,\theta) = 0
\end{equation*}
near $\partial S$. Hence $W$ is independent of $\theta$ in a neighbourhood of the boundary. Near $\partial S$ we also have
\begin{equation*}
B(r,\theta) = \theta\cdot F(r).
\end{equation*}
Here we use that $F(r,\theta)=F(r)$ does not depend on $\theta$ near $\partial S$. Thus \eqref{eq:defining_equation_generating_function} yields
\begin{equation*}
\partial_1W(R,\theta) = B(R,\theta)-B(R,\Theta) = (\theta-\Theta)\cdot F(R).
\end{equation*}
Using \eqref{eq:translation_condition_on_diffeomorphisms} we obtain
\begin{equation*}
\partial_1W(R,\theta) = -\theta_j\cdot F(R)
\end{equation*}
for $R$ close to $j\in\{0,1\}$. Property (4) is an immediate consequence. Next we check property (1). Rearranging the second equation in \eqref{eq:defining_equation_generating_function} gives
\begin{equation*}
A(R,\theta)-\partial_2W(R,\theta) = A(r,\theta).
\end{equation*}
Now we simply use that $A(0,\theta)=0$ and that $A(r,\theta)$ is strictly increasing in $r$ for fixed $\theta$. It remains to verify (2). By Lemma \ref{lem:monotonicity_equivalent_to_diagonal_projection_being_diffeomorphism}, monotonicity of $\Phi$ implies that the projection $\pi_\Delta$ defined in equation \eqref{eq:diagonal_projection_for_generating_function} is a diffeomorphism. Let us now parametrize $\Gamma(\Phi)$ by the inverse of this diffeomorphism
\begin{equation*}
\pi_\Delta^{-1}: S\rightarrow \Gamma(\Phi)\subset S\times S \quad (R,\theta)\mapsto
\left(\begin{matrix}
r(R,\theta)\\
\theta\\
R\\
\Theta(R,\theta)
\end{matrix}\right).
\end{equation*}
For $j\in\{1,2\}$ let $\pi_j:S\times S\rightarrow S$ be the projection onto the $j$-th factor. The restriction of $\pi_j$ to $\Gamma(\Phi)$ is a diffeomorphism onto $S$. This shows that
\begin{equation*}
(R,\theta)\mapsto (r(R,\theta),\theta)\quad\quad\text{and}\quad\quad (R,\theta)\mapsto (R,\Theta(R,\theta))
\end{equation*}
are both diffeomorphisms of $S$. In fact, these diffeomorphisms are orientation preserving. Thus the linearizations
\begin{equation*}
\left(\begin{matrix}
\partial_1r(R,\theta) & \partial_2r(R,\theta)\\
0 & 1
\end{matrix}\right)
\quad\quad\text{and}\quad\quad
\left(\begin{matrix}
1 & 0\\
\partial_1\Theta(R,\theta) & \partial_2\Theta(R,\theta)
\end{matrix}\right)
\end{equation*}
both have positive determinant. This implies $\partial_1r(R,\theta)>0$ and $\partial_2\Theta(R,\theta)>0$. Let us view both sides of the first equation in \eqref{eq:defining_equation_generating_function} as functions of $R$ and $\theta$. Then differentiating with respect to $\theta$ yields
\begin{equation*}
\partial_{12}W(R,\theta) = F(R,\theta) - F(R,\Theta(R,\theta))\cdot\partial_2\Theta(R,\theta).
\end{equation*}
In the interior of $S$, both $F(R,\Theta)$ and $\partial_2\Theta$ are strictly positive. Thus
\begin{equation*}
\partial_{12}W(R,\theta) < F(R,\theta)
\end{equation*}
proving (2).\\
Let us now prove the converse direction of Proposition \ref{prop:correspondence_symplectomorphisms_generating_functions}. We start with a generating function $W$ satisfying properties (1)-(4). Let us first show that we may solve equations \eqref{eq:defining_equation_generating_function} for $r$ and $\Theta$ and obtain a smooth map
\begin{equation*}
\operatorname{int}(S)\ni (R,\theta)\mapsto (r(R,\theta),\Theta(R,\theta)) \in \operatorname{int}(S).
\end{equation*}
The function $B(R,\cdot)$ is a diffeomorphism of ${\mathbb{R}}$ for fixed $R\in (0,1)$. Thus we may apply the implicit function theorem and solve the first equation in \eqref{eq:defining_equation_generating_function} for $\Theta(R,\theta)$ in the interior $\operatorname{int}(S)$. For fixed $\theta$, the function $A(\cdot,\theta)$ is a diffeomorphism from $(0,1)$ onto $(0,A(1,\theta))$. By property (1), $\partial_2W(R,\theta)$ is contained in the image of $A(R,\theta)-A(\cdot,\theta)$. Again we invoke the implicit function theorem to solve the second equation in \eqref{eq:defining_equation_generating_function} for $r(R,\theta)$. The map
\begin{equation}
\label{eq:parametrization_iota_of_graph_of_phi}
\iota:\operatorname{int}(S)\rightarrow S\times S\quad (R,\theta)\mapsto \left(\begin{matrix}
r(R,\theta)\\
\theta\\
R\\
\Theta(R,\theta)
\end{matrix}\right)
\end{equation}
parametrizes a smooth submanifold of $S\times S$. We show that this submanifold is in fact the graph of a diffeomorphism of $\operatorname{int}(S)$. Differentiating the first equation in \eqref{eq:defining_equation_generating_function} with respect to $\theta$ and the second equation with respect to $R$ yields:
\begin{equation*}
\begin{cases}
\partial_{12}W(R,\theta) = F(R,\theta) - F(R,\Theta(R,\theta))\cdot \partial_2\Theta(R,\theta)\\
\partial_{12}W(R,\theta) = F(R,\theta) - F(r(R,\theta),\theta)\cdot \partial_1r(R,\theta)
\end{cases}
\end{equation*}
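To extract positivity from these identities (a small step left implicit here): property (2) states that $\partial_{12}W<F$ in the interior, so the first identity gives
\begin{equation*}
F(R,\Theta(R,\theta))\cdot\partial_2\Theta(R,\theta) = F(R,\theta)-\partial_{12}W(R,\theta) > 0,
\end{equation*}
and analogously the second identity gives $F(r(R,\theta),\theta)\cdot\partial_1r(R,\theta)>0$. Since $F$ is strictly positive in the interior, the factors $\partial_2\Theta$ and $\partial_1r$ must be positive as well.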
Using property (2), we conclude that $\partial_1r>0$ and $\partial_2\Theta>0$. Hence the composition $\pi_j\circ\iota$ is a diffeomorphism onto its image for $j\in\{1,2\}$. We show that this image actually is all of $\operatorname{int}(S)$. By property (4), we have $\partial_2W=0$ near $\partial S$. Thus the second equation in \eqref{eq:defining_equation_generating_function} implies that $r(R,\theta)=R$ for $R$ near $0$ or $1$. This shows that $r(\cdot,\theta)$ is a diffeomorphism of $(0,1)$ for fixed $\theta$. Therefore the image of $\pi_1\circ\iota$ is $\operatorname{int}(S)$. The function $\partial_1W$ is bounded. For fixed $R\in (0,1)$, the function $B(R,\cdot)$ is an orientation preserving diffeomorphism of ${\mathbb{R}}$. Thus the first equation in \eqref{eq:defining_equation_generating_function} implies that $\lim_{\theta\rightarrow\pm\infty}\Theta(R,\theta)=\pm\infty$. Hence the image of $\pi_2\circ\iota$ is $\operatorname{int}(S)$. This shows that the image of $\iota$ is the graph of a diffeomorphism $\Phi\in\operatorname{Diff}(\operatorname{int}(S))$. Equations \eqref{eq:defining_equation_generating_function} imply that the pull-back of $\Xi$ via $\iota$ is closed. Therefore $\Phi$ preserves $\Omega$. Property (4) implies that in a neighbourhood of $\partial S$ equations \eqref{eq:defining_equation_generating_function} become:
\begin{equation*}
\begin{cases}
c\cdot F(R) = (\theta-\Theta)\cdot F(R)\\
0 = A(R,\theta)-A(r,\theta)
\end{cases}
\end{equation*}
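Indeed, these boundary equations can be solved explicitly (using that $F(R)>0$ for $R$ in the interior and that $A(\cdot,\theta)$ is injective):
\begin{equation*}
c\cdot F(R) = (\theta-\Theta)\cdot F(R)\;\Longrightarrow\;\Theta=\theta-c,
\quad\quad
0 = A(R,\theta)-A(r,\theta)\;\Longrightarrow\; r=R.
\end{equation*}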
We conclude that $\Phi$ satisfies \eqref{eq:translation_condition_on_diffeomorphisms} near $\partial S$. Therefore $\Phi$ smoothly extends to the closed strip and we have $\Phi\in\operatorname{Diff}(S,\Omega)$. In order to check that $\Phi$ commutes with $T$, first note that $A(R,\theta)$ is invariant under $T$ and that $B(R,\theta+2\pi)=B(R,\theta)+B(R,2\pi)$. In combination with invariance of $W$ under $T$, this implies that $r(R,\theta)$ is invariant under $T$ and that $\Theta(R,\theta+2\pi)=\Theta(R,\theta)+2\pi$. Hence the image of $\iota$ is invariant under the diagonal action of $T$ on $S\times S$, which implies that $\Phi$ commutes with $T$. The composition $\pi_\Delta\circ\iota$ is equal to $\operatorname{id}_S$. Thus $\pi_\Delta$ is a diffeomorphism, which is equivalent to monotonicity of $\Phi$ by Lemma \ref{lem:monotonicity_equivalent_to_diagonal_projection_being_diffeomorphism}.
\end{proof}
\subsection{Proof of the positivity criterion for radially monotone diffeomorphisms}
\label{subsection:proof_of_the_positivity_criterion_for_radially_monotone_diffeomorphisms}
\begin{proof}[Proof of Theorem \ref{theorem:positivity_criterion_for_radially_monotone_diffeomorphisms}]
Let $\Phi\in\operatorname{Diff}(S,\Omega)$ be the lift of $\widetilde{\phi}$ to the strip. $\Phi$ is monotone and commutes with $T$. Since $\phi$ is a rotation near the origin and the boundary, $\Phi$ is a translation near the two boundary components of $S$, i.e. $\Phi$ satisfies \eqref{eq:translation_condition_on_diffeomorphisms}. It follows from rotation invariance of $\omega$ near $0$ and $\partial{\mathbb{D}}$ that $\Omega$ is translation invariant near $\partial S$. Let $W$ be the unique generating function satisfying properties (1)-(4) in Proposition \ref{prop:correspondence_symplectomorphisms_generating_functions}. For $t\in [0,1]$, we define $W_t\coloneqq t\cdot W$. The functions $W_t$ satisfy conditions (1)-(4) for all $t$. Indeed, the set of functions satisfying conditions (1)-(4) is convex and both $W$ and the zero function satisfy these conditions. Proposition \ref{prop:correspondence_symplectomorphisms_generating_functions} therefore yields an isotopy $\Phi_t\in\operatorname{Diff}(S,\Omega)$ starting at the identity and ending at $\Phi$. Let $H:[0,1]\times S\rightarrow{\mathbb{R}}$ be the unique Hamiltonian generating $\Phi_t$ which is normalized by $H_t(1,\theta)=0$. Since $\Phi_t$ commutes with $T$ for all $t$, the Hamiltonian $H_t$ is invariant under $T$. Near the boundary components of $S$, the isotopy $\Phi_t$ is a translation at constant speed. Hence $H$ is autonomous and translation invariant near the boundary.\\
Our goal is to show that the restriction of $H$ to the complement of the boundary component $\{1\}\times{\mathbb{R}}$ is strictly positive. Our strategy is the following: If we can show that $H_t$ is strictly positive on all its interior critical points and on the boundary component $\{0\}\times{\mathbb{R}}$, then it follows that $H_t$ must be strictly positive on the complement of $\{1\}\times{\mathbb{R}}$. We begin by showing that for every $t$, the set of interior critical points of $H_t$ is equal to the set of interior critical points of $W$. In other words, if the velocity $\partial_t\Phi_t(z)$ vanishes for some $t\in [0,1]$, then $z$ must be a critical point of $W$ and is fixed by the entire isotopy $\Phi_t$. Let $(R_t,\Theta_t)$ denote the components of $\Phi_t$. The defining equations for the generating function $W_t$ read:
\begin{equation*}
\begin{cases}
\partial_1W_t(R_t,\theta) = B(R_t,\theta)-B(R_t,\Theta_t)\\
\partial_2W_t(R_t,\theta) = A(R_t,\theta) - A(r,\theta)
\end{cases}
\end{equation*}
Differentiating with respect to $t$ yields:
\begin{equation*}
\begin{cases}
\partial_1 W(R_t,\theta) + \partial_{11}W_t(R_t,\theta)\cdot\partial_tR_t = \partial_1B(R_t,\theta)\cdot \partial_tR_t-\partial_1B(R_t,\Theta_t)\cdot \partial_tR_t-\partial_2B(R_t,\Theta_t)\cdot \partial_t\Theta_t \\
\partial_2W(R_t,\theta) + \partial_{12}W_t(R_t,\theta)\cdot\partial_tR_t = \partial_1A(R_t,\theta)\cdot\partial_tR_t
\end{cases}
\end{equation*}
If $z=(r,\theta)$ is a point satisfying $\partial_t\Phi_t(z)=0$ for some $t$, then these equations yield:
\begin{equation}
\label{eq:time_derivative_of_generating_equations_at_stationary_point}
\begin{cases}
\partial_1 W(R_t(r,\theta),\theta)=0\\
\partial_2 W(R_t(r,\theta),\theta)=0
\end{cases}
\end{equation}
This implies that $z$ is a critical point of $W$. Indeed, if $t=0$, then $R_t(r,\theta)=r$ and \eqref{eq:time_derivative_of_generating_equations_at_stationary_point} says that $(r,\theta)$ is a critical point of $W$. If $t>0$, then \eqref{eq:time_derivative_of_generating_equations_at_stationary_point} implies that $(R_t(r,\theta),\theta)$ is a critical point of $W_t$. Hence $z$ is fixed by $\Phi_t$ and therefore a critical point of $W_t$. Since $t>0$, this implies that $z$ is a critical point of $W$. Hence we have verified that every interior critical point of $H_t$ is a critical point of $W$. Conversely, suppose that $z$ is a critical point of $W$. Then $z$ is a critical point of $W_t$ for all $t$. Thus $z$ is fixed by the isotopy $\Phi_t$ and hence a critical point of $H_t$.\\
Let $z$ be an interior critical point of $H_t$. By the above discussion, $z$ is a fixed point of $\Phi$. We show that $H_t(z)=\Sigma_{\Phi}(z)$ for all $t$. This implies that $H_t(z)>0$. Indeed, all fixed points of $\phi$ are assumed to have strictly positive action and the same is true for $\Phi$ by item (3) in Lemma \ref{lem:action_on_strip_basic_properties}. Let $\lambda$ be a primitive of $\omega$ and let $\Lambda$ denote its pull-back to $S$. For every $\tau\in [0,1]$, we can compute the action $\Sigma_{\Phi_\tau}(z)$ via item (2) in Lemma \ref{lem:action_on_strip_basic_properties}:
\begin{equation*}
\Sigma_{\Phi_\tau}(z) = \Sigma_{\Phi_\tau,\Lambda}(z) = \int_{\{[0,\tau]\ni t\mapsto\Phi_t(z)\}}\Lambda + \int_0^\tau H_t(\Phi_t(z))dt = \int_0^\tau H_t(z)dt.
\end{equation*}
Here the last equality uses that $z$ is fixed by the isotopy $\Phi_t$. By Proposition \ref{prop:generalized_generating_functions}, the action $\Sigma_{\Phi_\tau}(z)$ agrees with $W_\tau(z)$. We obtain
\begin{equation*}
\tau\cdot W(z) = \int_0^\tau H_t(z)dt.
\end{equation*}
Differentiating with respect to $\tau$ yields $H_\tau(z) = W(z) = \Sigma_{\Phi}(z)>0$.\\
Next we show that $H_t$ is positive on the boundary component $\{0\}\times{\mathbb{R}}$. Since $\Lambda$ is given by the pull-back $p^*\lambda$ and $p$ maps the entire boundary component $\{0\}\times {\mathbb{R}}$ to the origin $0$ of ${\mathbb{D}}$, the restriction of $\Lambda$ to $\{0\}\times {\mathbb{R}}$ vanishes. Thus item (2) in Lemma \ref{lem:action_on_strip_basic_properties} yields
\begin{equation*}
\Sigma_{\Phi_\tau,\Lambda}(z) = \int_0^\tau H_t(\Phi_t(z))dt = \int_0^\tau H_t(z) dt
\end{equation*}
for all $z\in \{0\}\times{\mathbb{R}}$. Here the second equality uses that $H_t$ is translation invariant near the boundary and in particular constant on $\{0\}\times{\mathbb{R}}$. By Proposition \ref{prop:generalized_generating_functions}, the action $\Sigma_{\Phi_\tau,\Lambda}$ agrees with $W_\tau$ on $\{0\}\times {\mathbb{R}}$. Thus
\begin{equation*}
\tau\cdot W(z) = \int_0^\tau H_t(z)dt
\end{equation*}
for all $z\in\{0\}\times{\mathbb{R}}$. Differentiating with respect to $\tau$ yields $H_\tau(z) = W(z) = \Sigma_{\Phi,\Lambda}(z)$. By item (3) in Lemma \ref{lem:action_on_strip_basic_properties}, the action $\Sigma_{\Phi,\Lambda}(z)$ is equal to the action $\sigma_{\widetilde{\phi}}(0)>0$. Hence $H_\tau$ is strictly positive on $\{0\}\times{\mathbb{R}}$ for all $\tau$. This completes the proof that $H_t$ is strictly positive on the complement of $\{1\}\times{\mathbb{R}}$.\\
The Hamiltonian $H$ is invariant under $T$ and constant on $\{0\}\times{\mathbb{R}}$. Thus it descends to a continuous function $H:[0,1]\times {\mathbb{D}}\rightarrow{\mathbb{R}}$ which is smooth away from the origin. The Hamiltonian flow of $H$, which is defined on the complement of the origin, is a rotation in some small neighbourhood of the origin. Thus the flow extends to a smooth flow on the entire disk ${\mathbb{D}}$. This implies that $H$ is actually smooth everywhere. Clearly $H$ satisfies properties (1)-(4) in Theorem \ref{theorem:positivity_criterion_for_radially_monotone_diffeomorphisms}.\\
There is one detail remaining: We actually want the Hamiltonian $H$ to be $1$-periodic in time. Here is how to fix this. Let $\eta:{\mathbb{R}}\rightarrow [0,1]$ be a smooth cut-off function which vanishes in an open neighbourhood of $(-\infty,0]$ and is equal to $1$ in an open neighbourhood of $[1,\infty)$. For $\epsilon>0$ we define
\begin{equation*}
\eta^\epsilon(t)\coloneqq \eta\left(\frac{t-1+\epsilon}{\epsilon}\right).
\end{equation*}
The function $\eta^\epsilon$ vanishes in a neighbourhood of $(-\infty,1-\epsilon]$ and is equal to $1$ in a neighbourhood of $[1,\infty)$. Now define
\begin{equation*}
G^\epsilon:[0,1]\times {\mathbb{D}}\rightarrow{\mathbb{R}}\quad G^\epsilon(t,z) \coloneqq (1-\eta^\epsilon(t))\cdot H(t,z) + \eta^\epsilon(t)\cdot H(0,z).
\end{equation*}
This actually extends to a smooth $1$-periodic Hamiltonian $G^\epsilon:{\mathbb{R}}/{\mathbb{Z}}\times {\mathbb{D}}\rightarrow{\mathbb{R}}$. In an open neighbourhood of $0$ and $\partial{\mathbb{D}}$, the Hamiltonian $G^\epsilon_t$ agrees with $H_t$ for all $t\in [0,1]$ because $H$ is autonomous in this region. If $t\in [0,1-\epsilon]$, then $G^\epsilon_t$ agrees with $H_t$ on the entire disk ${\mathbb{D}}$. Moreover, $G^\epsilon$ is strictly positive in the interior of ${\mathbb{D}}$. The time-$1$-map $\phi_{G^\epsilon}^1$ agrees with $\phi$ in a neighbourhood of $0$ and $\partial{\mathbb{D}}$, but it need not agree with $\phi$ on the entire disk. As $\epsilon$ approaches $0$, the diffeomorphism $(\phi_{G^\epsilon}^1)^{-1}\circ\phi$, which is compactly supported in the complement of $0$ and $\partial{\mathbb{D}}$, converges to the identity in the $C^1$-topology. Using Lemma \ref{lem:small_symplectomorphism_generated_by_small_hamiltonian}, we can therefore find a Hamiltonian $K^\epsilon:[0,1]\times {\mathbb{D}}\rightarrow{\mathbb{R}}$, compactly supported in the complement of $0$ and $\partial{\mathbb{D}}$ and vanishing for $t$ close to $0$ or $1$, such that $\phi_{K^\epsilon}^1 = (\phi_{G^\epsilon}^1)^{-1}\circ\widetilde{\phi}$ and such that $\|X_{K^\epsilon}\|_{C^0}$ converges to $0$ as $\epsilon$ approaches $0$. Now define
\begin{equation*}
H^\epsilon_t\coloneqq (G^\epsilon \# K^\epsilon)_t = G^\epsilon_t + K^\epsilon_t\circ (\phi_{G^\epsilon}^t)^{-1}
\end{equation*}
for $t\in [0,1]$. This extends to a smooth $1$-periodic Hamiltonian. It agrees with $H$ in a neighbourhood of $0$ and $\partial{\mathbb{D}}$ and its time-$1$-map $\phi_{H^\epsilon}^1$ is equal to $\widetilde{\phi}$. For $\epsilon>0$ sufficiently small, $H^\epsilon$ is strictly positive in the interior $\operatorname{int}({\mathbb{D}})$. Thus we have constructed a $1$-periodic Hamiltonian satisfying properties (1)-(4).
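For completeness, the claim about the time-$1$-map follows from the standard composition rule for Hamiltonian flows, $\phi_{G\# K}^t=\phi_G^t\circ\phi_K^t$:
\begin{equation*}
\phi_{H^\epsilon}^1 = \phi_{G^\epsilon}^1\circ\phi_{K^\epsilon}^1 = \phi_{G^\epsilon}^1\circ(\phi_{G^\epsilon}^1)^{-1}\circ\widetilde{\phi} = \widetilde{\phi}.
\end{equation*}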
\end{proof}
\subsection{Proof of the positivity criterion for diffeomorphisms close to the identity}
\label{subsection:proof_of_the_positivity_criterion_for_diffeomorphisms_close_to_the_identity}
The goal of this section is to deduce Corollary \ref{cor:positivity_criterion_for_diffeomorphisms_close_to_the_identity} from Theorem \ref{theorem:positivity_criterion_for_radially_monotone_diffeomorphisms}. The key ingredient is the following lemma.
\begin{lem}
\label{lem:moving_fixed_point_to_origin}
Let $\phi:{\mathbb{D}}\rightarrow{\mathbb{D}}$ be a diffeomorphism. Assume that:
\begin{enumerate}
\item $\phi$ is sufficiently $C^1$-close to the identity $\operatorname{id}_{\mathbb{D}}$.
\item $\phi$ is smoothly conjugated to a rotation near $\partial{\mathbb{D}}$.
\item There exists a fixed point $p\in\operatorname{int}({\mathbb{D}})$ such that $\phi$ is smoothly conjugated to a rotation in a neighbourhood of $p$.
\end{enumerate}
Then there exists a diffeomorphism $\psi:{\mathbb{D}}\rightarrow{\mathbb{D}}$ such that:
\begin{enumerate}
\item $\psi(0)=p$
\item $\psi^{-1}\circ\phi\circ\psi$ is radially monotone.
\item $\psi^{-1}\circ\phi\circ\psi$ is a rotation near $0$ and $\partial{\mathbb{D}}$.
\end{enumerate}
\end{lem}
\begin{proof}
We proceed in four steps.\\
{\bf Step 1:} Let $\psi:{\mathbb{D}}\rightarrow{\mathbb{D}}$ be a M\"{o}bius transformation such that $\psi(0)=p$. By Proposition 2.24 in \cite{ABHS18}, the diffeomorphism $\psi^{-1}\circ\phi\circ \psi$ fixes the origin and is radially monotone. After replacing $\phi$ by $\psi^{-1}\circ\phi\circ \psi$, we can therefore assume that $\phi$ is radially monotone and smoothly conjugated to rotations near $0$ and $\partial{\mathbb{D}}$. Note, however, that we can no longer guarantee that $\phi$ is $C^1$-close to the identity.\\
{\bf Step 2:} We show that we may further reduce to the case that $\phi$ agrees with the linearization $d\phi(0)$ in an entire open neighbourhood of $0$. We choose a $\phi$-invariant neighbourhood $U$ of $0$ and an orientation preserving diffeomorphism $f:({\mathbb{D}},0)\rightarrow (U,0)$ such that $\rho\coloneqq f^{-1}\circ\phi\circ f$ is a rotation around the center of ${\mathbb{D}}$. We may approximate $f$ with respect to the $C^1$-topology by a diffeomorphism $\widetilde{f}:({\mathbb{D}},0)\rightarrow (U,0)$ which agrees with its linearization $d\widetilde{f}(0)$ near $0$ and with $f$ outside some arbitrarily small neighbourhood of $0$. We set $\psi\coloneqq f\circ\widetilde{f}^{-1}$. This defines a compactly supported diffeomorphism of $U$. We may smoothly extend it to a diffeomorphism of ${\mathbb{D}}$ by setting $\psi$ to be equal to the identity outside $U$. The resulting diffeomorphism $\psi:({\mathbb{D}},0)\rightarrow ({\mathbb{D}},0)$ is $C^1$-close to the identity and supported in a small neighbourhood of $0$. Near $0$ we have
\begin{equation*}
\psi^{-1}\circ\phi\circ\psi = (f\circ\widetilde{f}^{-1})^{-1}\circ\phi\circ (f\circ\widetilde{f}^{-1}) =
\widetilde{f}\circ (f^{-1}\circ\phi\circ f)\circ \widetilde{f}^{-1} =
d\widetilde{f}(0)\circ \rho \circ d\widetilde{f}(0)^{-1}.
\end{equation*}
This shows that $\psi^{-1}\circ\phi\circ\psi$ is linear in some neighbourhood of $0$. Since $\psi$ is $C^1$-close to the identity and fixes the origin $0$, radial monotonicity is preserved by conjugation by $\psi$. We can therefore replace $\phi$ by $\psi^{-1}\circ\phi\circ\psi$ and assume in addition that $\phi$ agrees with $d\phi(0)$ near $0$.\\
{\bf Step 3:} Since $\phi$ agrees with its linearization near $0$ after performing Step 2, we may choose a linear orientation preserving diffeomorphism $g:({\mathbb{D}},0)\rightarrow (U,0)$ onto a $\phi$-invariant neighbourhood of $0$ such that $\rho\coloneqq g^{-1}\circ \phi\circ g$ is a rotation of ${\mathbb{D}}$. We construct a (non-linear) diffeomorphism $\widetilde{g}:({\mathbb{D}},0)\rightarrow (U,0)$ satisfying properties (1)-(4) below. Let $(R,\Theta)$ and $(\widetilde{R},\widetilde{\Theta})$ denote the components of $g$ and $\widetilde{g}$ in polar coordinates, respectively.
\begin{enumerate}
\item $\widetilde{g}(r,\theta)$ agrees with $g(r,\theta)$ for $r> \frac{3}{4}$.
\item $\widetilde{\Theta}(r,\theta)$ agrees with $\Theta(r,\theta)$ for $r>\frac{1}{2}$.
\item There exists a constant $C>0$ such that $\widetilde{R}(r,\theta) = C\cdot r$ for $r<\frac{1}{2}$.
\item $\widetilde{\Theta}(r,\theta) = \theta$ for $r< \frac{1}{4}$.
\end{enumerate}
Let us first choose $C>0$ such that the ball $B_C(0)$ is contained in the image $g(B_{\frac{3}{4}}(0))$. We may choose a smooth function $\widetilde{R}(r,\theta)$ which agrees with $C\cdot r$ for $r<\frac{1}{2}$ and with $R(r,\theta)$ for $r>\frac{3}{4}$ and which satisfies $\partial_1\widetilde{R}(r,\theta)>0$. Since $g$ is linear, the function $\Theta(r,\theta)$ is actually independent of $r$ and we denote it by $\Theta(\theta)$. The function $\Theta(\theta)$ is an orientation preserving diffeomorphism of the circle ${\mathbb{R}}/2\pi{\mathbb{Z}}$. Hence there exists a smooth isotopy from $\Theta$ to the identity. Using such an isotopy, we may define a function $\widetilde{\Theta}(r,\theta)$ such that $\widetilde{\Theta}(r,\theta)=\Theta(\theta)$ for $r>\frac{1}{2}$ and $\widetilde{\Theta}(r,\theta) = \theta$ for $r<\frac{1}{4}$. It is immediate from the construction that $\widetilde{g} = (\widetilde{R},\widetilde{\Theta})$ is a diffeomorphism satisfying properties (1)-(4) above. As before, we define a diffeomorphism $\psi:({\mathbb{D}},0)\rightarrow ({\mathbb{D}},0)$ which is compactly supported inside $U$ and agrees with $g\circ\widetilde{g}^{-1}$ inside $U$. We have
\begin{equation*}
\psi^{-1}\circ\phi\circ\psi = (g\circ\widetilde{g}^{-1})^{-1}\circ\phi\circ(g\circ\widetilde{g}^{-1}) = \widetilde{g}\circ (g^{-1}\circ\phi\circ g)\circ \widetilde{g}^{-1} = \widetilde{g}\circ \rho\circ \widetilde{g}^{-1}
\end{equation*}
inside $U$. Since $\widetilde{g}$ is simply multiplication by $C$ near $0$, this is an actual rotation near $0$. We need to check that $\psi^{-1}\circ\phi\circ\psi$ is radially monotone. Since this diffeomorphism agrees with $\phi$ outside $U$, we only need to check radial monotonicity of the restriction to $U$, which is given by $\widetilde{g}\circ \rho\circ \widetilde{g}^{-1}$ by the above computation. For $r>\frac{1}{2}$, the function $\widetilde{\Theta}(r,\theta)$ agrees with $\Theta(r,\theta)$ and is therefore independent of $r$. This implies that the restriction of $\widetilde{g}$ to ${\mathbb{D}}\setminus B_{\frac{1}{2}}(0)$ preserves the foliations by radial rays. Since $\rho$ is a rotation, the same is true for the restriction of $\widetilde{g}\circ \rho\circ \widetilde{g}^{-1}$ to $\widetilde{g}({\mathbb{D}}\setminus B_{\frac{1}{2}}(0))$. This implies radial monotonicity of $\psi^{-1}\circ\phi\circ\psi$ on the set $\widetilde{g}({\mathbb{D}}\setminus B_{\frac{1}{2}}(0))$. For $r<\frac{1}{2}$, the function $\widetilde{R}$ has the special form $\widetilde{R}(r,\theta)=C\cdot r$. Thus the restriction of $\widetilde{g}$ to $B_{\frac{1}{2}}(0)$ preserves the foliation by circles centered at the origin. Since $\rho$ is a rotation, the same continues to hold for the restriction of $\widetilde{g}\circ \rho\circ \widetilde{g}^{-1}$ to $\widetilde{g}(B_{\frac{1}{2}}(0))$. Radial monotonicity of $\psi^{-1}\circ\phi\circ\psi$ on the set $\widetilde{g}(B_{\frac{1}{2}}(0))$ is a direct consequence. After replacing $\phi$ by $\psi^{-1}\circ\phi\circ\psi$, we can hence assume in addition that $\phi$ is an actual rotation near the origin.\\
{\bf Step 4:} It remains to perform a similar construction to turn $\phi$ into a rotation near $\partial{\mathbb{D}}$ while preserving radial monotonicity. Let $\dot{{\mathbb{D}}}\coloneqq {\mathbb{D}}\setminus\{0\}$ denote the punctured disk. Let $V$ be a $\phi$-invariant neighbourhood of $\partial{\mathbb{D}}$ and $f=(R,\Theta):\dot{{\mathbb{D}}}\rightarrow V$ an orientation preserving diffeomorphism such that $\rho\coloneqq f^{-1}\circ\phi\circ f$ is a rotation of $\dot{{\mathbb{D}}}$ around $0$. Our goal is to define a diffeomorphism $\widetilde{f}=(\widetilde{R},\widetilde{\Theta}):\dot{{\mathbb{D}}}\rightarrow V$ agreeing with $f$ in a small neighbourhood of $0$ such that $\widetilde{f}\circ\rho\circ\widetilde{f}^{-1}$ is radially monotone on $V$ and an actual rotation near $\partial{\mathbb{D}}$. Once we have such $\widetilde{f}$, we may set $\psi\coloneqq f\circ\widetilde{f}^{-1}$ and extend to a diffeomorphism of ${\mathbb{D}}$ by setting $\psi$ to be equal to the identity outside $V$. We have
\begin{equation*}
\psi^{-1}\circ\phi\circ\psi = (f\circ\widetilde{f}^{-1})^{-1}\circ\phi\circ(f\circ\widetilde{f}^{-1}) = \widetilde{f}\circ (f^{-1}\circ\phi\circ f)\circ\widetilde{f}^{-1} = \widetilde{f}\circ \rho\circ\widetilde{f}^{-1}
\end{equation*}
which implies that $\psi^{-1}\circ\phi\circ\psi$ is radially monotone and a rotation near $\partial {\mathbb{D}}$. Here is how we construct $\widetilde{f}$. After shrinking $V$ if necessary, we may assume that the image of $f(r,\cdot)$ is $C^1$-close to $\partial{\mathbb{D}}$ for all $r$. In particular, $\Theta(r,\cdot)$ is a diffeomorphism of ${\mathbb{R}}/2\pi{\mathbb{Z}}$ for fixed $r$. Moreover, we can assume that $\partial_1 R(r,\theta)>0$. Let $\eta(r)$ be a smoothing of the function $\min(r,\frac{1}{5})$. Assume that both $\eta(r)$ and $\eta'(r)$ are monotonic and that $\eta(r)$ agrees with $\min(r,\frac{1}{5})$ outside $(\frac{1}{5}-\epsilon,\frac{1}{5}+\epsilon)$ for some small $\epsilon>0$. For $r<\frac{3}{5}$ we set $\widetilde{\Theta}(r,\theta)\coloneqq \Theta(\eta(r),\theta)$. For $r> \frac{4}{5}$ we set $\widetilde{\Theta}(r,\theta)\coloneqq \theta$.
Choose a smooth isotopy from $\Theta(\frac{1}{5},\cdot)$ to $\operatorname{id}_{{\mathbb{R}}/2\pi{\mathbb{Z}}}$ and use it to define $\widetilde{\Theta}$ in the interval $\frac{3}{5}<r<\frac{4}{5}$. We set $\widetilde{R}(r,\theta)$ to be equal to $R(r,\theta)$ for $r<\frac{2}{5}$. We extend $\widetilde{R}$ in such a way that $\partial_1\widetilde{R}(r,\theta)>0$. Moreover, we require that there exists $C>0$ such that $\widetilde{R}(r,\theta) = 1+C\cdot(r-1)$ for $r>\frac{3}{5}$. This finishes the construction of $\widetilde{f}$. Clearly, $\widetilde{f}\circ\rho\circ\widetilde{f}^{-1}$ is an actual rotation near $\partial{\mathbb{D}}$. We need to check radial monotonicity. On the set $\widetilde{f}(\{r>\frac{3}{5}\})$, radial monotonicity follows from the special form $\widetilde{R}(r,\theta) = 1+C\cdot(r-1)$. Inside $\widetilde{f}(\{\frac{1}{5}+\epsilon<r<\frac{3}{5}\})$, radial monotonicity follows from the fact that $\widetilde{\Theta}(r,\theta)$ is independent of $r$. For $r<\frac{1}{5}-\epsilon$ the diffeomorphisms $\widetilde{f}$ and $f$ agree, which implies radial monotonicity in $\widetilde{f}(\{r<\frac{1}{5}-\epsilon\})$. It remains to verify radial monotonicity inside $\widetilde{f}(\{\frac{1}{5}-\epsilon<r<\frac{1}{5}+\epsilon\})$. A direct computation shows that for $r<\frac{2}{5}$
\begin{equation}
\label{eq:formula_radial_derivative}
dr\left(\partial_1(f\circ\rho\circ f^{-1})(f(r,\theta))\right) =
\frac{1}{\det df(r,\theta)} \cdot dR(\rho(r,\theta))
\left(\begin{matrix}
\partial_2\Theta(r,\theta) \\
-\partial_1\Theta(r,\theta)
\end{matrix}\right)
\end{equation}
and
\begin{equation}
\label{eq:formula_radial_derivative_tilde}
dr\left(\partial_1(\widetilde{f}\circ\rho\circ\widetilde{f}^{-1})(\widetilde{f}(r,\theta))\right) =
\frac{1}{\det d\widetilde{f}(r,\theta)} \cdot dR(\rho(r,\theta))
\left(\begin{matrix}
\partial_2\Theta(\eta(r),\theta) \\
-\partial_1\Theta(\eta(r),\theta)\cdot\eta'(r)
\end{matrix}\right).
\end{equation}
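These formulas follow from a chain-rule computation which we sketch here (using that, in polar coordinates, $\rho$ has the form $(r,\theta)\mapsto (r,\theta+\mathrm{const})$, so that $d\rho=\operatorname{id}$):
\begin{equation*}
d(f\circ\rho\circ f^{-1})(f(r,\theta)) = df(\rho(r,\theta))\circ df(r,\theta)^{-1},
\quad\quad
df(r,\theta)^{-1} = \frac{1}{\det df(r,\theta)}
\left(\begin{matrix}
\partial_2\Theta(r,\theta) & -\partial_2R(r,\theta)\\
-\partial_1\Theta(r,\theta) & \partial_1R(r,\theta)
\end{matrix}\right).
\end{equation*}
Applying this to the first coordinate vector and using $dr\circ df = dR$ yields \eqref{eq:formula_radial_derivative}. The same computation with $\widetilde{f}$ in place of $f$, together with $\widetilde{\Theta}(r,\theta)=\Theta(\eta(r),\theta)$ and $\widetilde{R}=R$ for $r<\frac{2}{5}$, yields \eqref{eq:formula_radial_derivative_tilde}.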
Note that radial monotonicity of $f\circ\rho\circ f^{-1}$ is precisely saying that the left hand side of equation \eqref{eq:formula_radial_derivative} is positive. Thus \eqref{eq:formula_radial_derivative} implies that
\begin{equation*}
dR(\rho(r,\theta))
\left(\begin{matrix}
\partial_2\Theta(r,\theta) \\
-\partial_1\Theta(r,\theta)
\end{matrix}\right) >0.
\end{equation*}
By choosing $\epsilon>0$ sufficiently small, we can guarantee that
\begin{equation*}
dR(\rho(r,\theta))
\left(\begin{matrix}
\partial_2\Theta(\eta(r),\theta) \\
-\partial_1\Theta(\eta(r),\theta)
\end{matrix}\right) >0
\end{equation*}
whenever $r\in (\frac{1}{5}-\epsilon,\frac{1}{5}+\epsilon)$. Using the fact that $\partial_1R>0$ and $\partial_2\Theta>0$, we see that
\begin{equation*}
dR(\rho(r,\theta))
\left(\begin{matrix}
\partial_2\Theta(\eta(r),\theta) \\
0
\end{matrix}\right) >0.
\end{equation*}
Therefore, the linear functional $dR(\rho(r,\theta))$ is positive on any convex combination of the vectors $(\partial_2\Theta(\eta(r),\theta), -\partial_1\Theta(\eta(r),\theta))$ and $(\partial_2\Theta(\eta(r),\theta),0)$. Since $\eta'(r)\in [0,1]$, the vector appearing in \eqref{eq:formula_radial_derivative_tilde} is precisely such a convex combination, with weights $\eta'(r)$ and $1-\eta'(r)$. We hence deduce that
\begin{equation*}
dR(\rho(r,\theta))
\left(\begin{matrix}
\partial_2\Theta(\eta(r),\theta) \\
-\partial_1\Theta(\eta(r),\theta)\cdot\eta'(r)
\end{matrix}\right)>0.
\end{equation*}
Together with \eqref{eq:formula_radial_derivative_tilde} this implies radial monotonicity of $\widetilde{f}\circ\rho\circ\widetilde{f}^{-1}$ in the annulus $\widetilde{f}(\{\frac{1}{5}-\epsilon<r<\frac{1}{5}+\epsilon\})$.
\end{proof}
\begin{proof}[Proof of Corollary \ref{cor:positivity_criterion_for_diffeomorphisms_close_to_the_identity}]
Let us first prove the corollary under the additional assumption that $\phi$ possesses a fixed point $p$ such that $\phi$ is smoothly conjugated to a rotation in some neighbourhood of $p$. In this situation we may apply Lemma \ref{lem:moving_fixed_point_to_origin}. Let $\psi$ be the resulting diffeomorphism of ${\mathbb{D}}$. The $2$-form $\psi^*\omega$ and the diffeomorphism $\psi^{-1}\circ\widetilde{\phi}\circ\psi\in\widetilde{\operatorname{Diff}}({\mathbb{D}},\psi^*\omega)$ satisfy all assumptions in Theorem \ref{theorem:positivity_criterion_for_radially_monotone_diffeomorphisms}, except for possibly rotation invariance of $\psi^*\omega$ near $0$ and $\partial{\mathbb{D}}$. Using an equivariant version of Moser's argument, we may construct a diffeomorphism of ${\mathbb{D}}$ which is supported near $0$ and $\partial {\mathbb{D}}$, commutes with $\psi^{-1}\circ \phi\circ\psi$ and pulls back $\psi^*\omega$ to a $2$-form which is rotation invariant near $0$ and $\partial{\mathbb{D}}$. After replacing $\psi$ by its composition with this diffeomorphism, we may assume w.l.o.g. that $\psi^*\omega$ is rotation invariant near the origin and the boundary and apply Theorem \ref{theorem:positivity_criterion_for_radially_monotone_diffeomorphisms} to the tuple $(\psi^*\omega,\psi^{-1}\circ\widetilde{\phi}\circ\psi)$. Let $H$ denote the resulting Hamiltonian. Then the Hamiltonian $H\circ\psi^{-1}$ satisfies all assertions of Corollary \ref{cor:positivity_criterion_for_diffeomorphisms_close_to_the_identity}.\\
Let us now consider the general case. We claim that $\phi$ must possess an interior fixed point which is either degenerate or elliptic. This clearly is the case if $\phi$ is equal to the identity near the boundary $\partial{\mathbb{D}}$. So assume that $\phi$ is not equal to the identity near the boundary. Since $\phi$ is conjugated to a rotation near the boundary, this implies that there are no fixed points at all near the boundary. Let us assume that all fixed points of $\phi$ are non-degenerate. Then there are only finitely many fixed points and their signed count equals the Euler characteristic of ${\mathbb{D}}$, which is equal to $1$. The Lefschetz sign of positive hyperbolic fixed points is $-1$ and the Lefschetz sign of negative hyperbolic and elliptic fixed points is $1$. Thus there must exist a fixed point which is negative hyperbolic or elliptic. Since $\widetilde{\phi}$ is assumed to be $C^1$-close to the identity, there are no negative hyperbolic fixed points, which implies that there must exist an elliptic one. This concludes the proof that there must be a degenerate or elliptic interior fixed point. We choose such a fixed point $p$. By Lemma \ref{lem:perturbing_symplectomorphism_near_elliptic_fixed_point} there exists a Hamiltonian $G$, supported in an arbitrarily small neighbourhood of $p$ and with arbitrarily small $C^2$-norm $\|X_G\|_{C^2}$, such that $\phi'\coloneqq \phi\circ \phi_G^1$ is smoothly conjugated to a rotation in a neighbourhood of $p$. We define the lift $\widetilde{\phi'}$ of $\phi'$ by $\widetilde{\phi'}\coloneqq \widetilde{\phi}\circ\phi_G$ where $\phi_G\in\widetilde{\operatorname{Diff}}({\mathbb{D}},\omega)$ is represented by the arc $(\phi_G^t)_{t\in [0,1]}$. After shrinking $\|X_G\|_{C^2}$ if necessary, we can assume that the action $\sigma_{\widetilde{\phi'}}$ is positive on all fixed points of $\phi'$.
Thus Corollary \ref{cor:positivity_criterion_for_diffeomorphisms_close_to_the_identity} holds for $\widetilde{\phi'}$ by the above discussion. Let $H'$ be a Hamiltonian generating $\widetilde{\phi'}$ and satisfying all assertions in Corollary \ref{cor:positivity_criterion_for_diffeomorphisms_close_to_the_identity}. In fact, it follows from assertion (2) in Theorem \ref{theorem:positivity_criterion_for_radially_monotone_diffeomorphisms} that $H'$ can be chosen to satisfy $dH'_t(p)=0$ and $H'_t(p) = \sigma_{\widetilde{\phi'}}(p)$ for all $t$. The action $\sigma_{\widetilde{\phi'}}(p)$ is close to $\sigma_{\widetilde{\phi}}(p)>0$. Thus we can bound $H'_t(p)$ from below by a positive constant which can be chosen uniformly over all sufficiently small $G$. After reparametrizing the Hamiltonian flow of $G$, we can assume w.l.o.g. that $G$ vanishes for $t$ near $0$ and $1$. We define $H$ by
\begin{equation}
\label{eq:proof_of_positivity_criterion_close_to_identity_def_of_ham}
H_t\coloneqq (H'\#\overline{G})_t = H'_t - G_t\circ\phi_G^t\circ (\phi_{H'}^t)^{-1}
\end{equation}
for $t\in [0,1]$. This smoothly extends to a $1$-periodic Hamiltonian and generates $\widetilde{\phi}$. Since $H$ agrees with $H'$ outside a small neighbourhood of $p$, it satisfies assertions (2) and (3) of Corollary \ref{cor:positivity_criterion_for_diffeomorphisms_close_to_the_identity}. If we choose $G$ with sufficiently small norm $\|X_G\|_{C^2}$, we can also guarantee that $H$ is strictly positive in the interior of ${\mathbb{D}}$. This follows from \eqref{eq:proof_of_positivity_criterion_close_to_identity_def_of_ham} and the fact that we have a strictly positive lower bound on $H'_t(p)$ independent of $G$. Thus $H$ has all desired properties.
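For completeness, the fact that $H$ generates $\widetilde{\phi}$ is the standard composition computation: the inverse Hamiltonian $\overline{G}_t=-G_t\circ\phi_G^t$ generates the inverse flow, so
\begin{equation*}
\phi_{H}^t = \phi_{H'}^t\circ(\phi_{G}^t)^{-1}
\quad\quad\text{and hence}\quad\quad
\phi_H^1 = \widetilde{\phi'}\circ\phi_G^{-1} = \widetilde{\phi}\circ\phi_G\circ\phi_G^{-1} = \widetilde{\phi}.
\end{equation*}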
\end{proof}
\section{From Reeb flows to disk-like surfaces of section and approximation results}
\label{section:from_reeb_flows_to_disk_like_surfaces_of_section_and_approximation_results}
Let $\alpha_0$ denote the restriction of the standard Liouville $1$-form $\lambda_0$ on ${\mathbb{R}}^4$ defined in \eqref{eq:liouville_vector_field_and_form} to the unit sphere $S^3$. We will refer to $\alpha_0$ as the standard contact form. The goal of this section is to prove the following result.
\begin{prop}
\label{prop:contact_forms_in_C_3_neighbourhood_may_be_approx_by_forms_satisfying_criterion}
Every tight contact form $\alpha$ on $S^3$ which is sufficiently $C^3$-close to the standard contact form $\alpha_0$ can be $C^2$-approximated by contact forms $\alpha'$ with the following properties: There exists a unique Reeb orbit $\gamma$ of minimal action. The local first return map of a small disk transversely intersecting $\gamma$ is smoothly conjugated to an irrational rotation. There exists a smooth embedding $f:{\mathbb{D}}\rightarrow S^3$ parametrizing a $\partial$-strong disk-like global surface of section with boundary orbit $\gamma$ such that the $2$-form $\omega\coloneqq f^*d\alpha'$, the first return map $\phi\in \operatorname{Diff}({\mathbb{D}},\omega)$ and the lift $\widetilde{\phi}_1\in\widetilde{\operatorname{Diff}}({\mathbb{D}},\omega)$ of $\phi$ with respect to a trivialization of degree $1$ satisfy assertions (1)-(3) below.
\begin{enumerate}
\item $\widetilde{\phi}_1$ is $C^1$-close to the identity $\operatorname{id}_{\mathbb{D}}$.
\item $\phi$ is smoothly conjugated to an irrational rotation in some neighbourhood of the boundary $\partial{\mathbb{D}}$.
\item All fixed points $p$ of $\phi$ have positive action $\sigma_{\widetilde{\phi}_1}(p)$.
\end{enumerate}
\end{prop}
\subsection{Approximation results}
\label{subsection:approximation_results}
The next result says that we may perturb the contact form near an elliptic orbit such that the local first return map of the perturbed Reeb flow is smoothly conjugated to a rotation. We can choose the perturbation such that both the contact form and Reeb vector field are close to the original contact form and Reeb vector field with respect to the $C^2$-topology.
\begin{prop}
\label{prop:making_local_return_map_of_elliptic_orbit_a_rotation}
Let $\alpha$ be a contact form on a $3$-manifold $Y$. Let $\gamma$ be an elliptic, simple, closed Reeb orbit of $\alpha$. For every open neighbourhood $V$ of $\gamma$ and every $\epsilon>0$ there exists a contact form $\alpha'$ on $Y$ such that:
\begin{enumerate}
\item $\alpha'$ agrees with $\alpha$ outside $V$.
\item $\|\alpha'-\alpha\|_{C^2} < \epsilon$
\item $\|R_{\alpha'}-R_\alpha\|_{C^2} < \epsilon$
\item Up to reparametrization, $\gamma$ is a simple closed Reeb orbit of $\alpha'$. The local first return map of a small disk transversely intersecting $\gamma$ is smoothly conjugated to an irrational rotation.
\end{enumerate}
\end{prop}
Our proof of Proposition \ref{prop:making_local_return_map_of_elliptic_orbit_a_rotation} requires some preparation. Consider ${\mathbb{R}}^2$ equipped with the standard symplectic form $\omega_0$. Any compactly supported symplectomorphism $\phi$ which is sufficiently $C^1$-close to the identity can be represented by a unique compactly supported generating function $W$ (see chapter 9 in \cite{MS17}). If $(X,Y)$ denote the components of $\phi$, the defining equations for the generating function are:
\begin{equation*}
\begin{cases}
X-x = \enspace\partial_2 W(X,y)\\
Y-y = -\partial_1 W(X,y)
\end{cases}
\end{equation*}
Conversely, any compactly supported function $W$ which is sufficiently $C^2$-close to $0$ uniquely determines a compactly supported symplectomorphism $\phi$.
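As a concrete numerical illustration (ours, not taken from \cite{MS17}), one can solve the defining equations for a small, explicitly given $W$ and check that the resulting map is area-preserving. The function $W$ below is an arbitrary illustrative choice and is not compactly supported, so the check is purely local:

```python
import math

# Illustrative generating function W(X, y) = EPS*sin(X)*sin(y) (arbitrary choice;
# not compactly supported, so this is only a local sanity check).
EPS = 0.05

def W1(X, y):  # partial derivative of W with respect to its first argument
    return EPS * math.cos(X) * math.sin(y)

def W2(X, y):  # partial derivative of W with respect to its second argument
    return EPS * math.sin(X) * math.cos(y)

def phi(x, y):
    # Solve the implicit equation X - x = W2(X, y) by fixed-point iteration
    # (a contraction for small EPS), then read off Y from Y - y = -W1(X, y).
    X = x
    for _ in range(60):
        X = x + W2(X, y)
    return X, y - W1(X, y)

def jac_det(x, y, h=1e-5):
    # Central-difference Jacobian determinant; equals 1 for an area-preserving map.
    Xp, Yp = phi(x + h, y); Xm, Ym = phi(x - h, y)
    Xq, Yq = phi(x, y + h); Xr, Yr = phi(x, y - h)
    return ((Xp - Xm) * (Yq - Yr) - (Xq - Xr) * (Yp - Ym)) / (4 * h * h)

print(abs(jac_det(0.3, -0.7) - 1.0) < 1e-6)  # True
```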
\begin{lem}
\label{lem:continuity_of_correspondence_between_symplectomorphisms_and_generating_functions}
Let $k\geq 0$. We equip the space of compactly supported diffeomorphisms with the $C^k$-topology and the space of compactly supported generating functions with the topology induced by the norm $\|\nabla\cdot\|_{C^k}$. The correspondence between symplectomorphisms and generating functions is continuous in both directions with respect to these topologies.
\end{lem}
\begin{proof}
If $\phi$ is a symplectomorphism with corresponding generating function $W$, then the graph $\Gamma(\phi)$ of $\phi$ inside ${\mathbb{R}}^2\times{\mathbb{R}}^2$ can be parametrized via
\begin{equation*}
\iota : {\mathbb{R}}^2\ni (X,y)\mapsto
\left(\begin{matrix}
X-\partial_2W(X,y)\\
y\\
X\\
y-\partial_1W(X,y)
\end{matrix}\right)\in\Gamma(\phi)\subset{\mathbb{R}}^2\times{\mathbb{R}}^2.
\end{equation*}
For $j\in\{1,2\}$ let $\pi_j:{\mathbb{R}}^2\times{\mathbb{R}}^2\rightarrow{\mathbb{R}}^2$ denote the projection onto the $j$-th factor. Then $\phi$ can be written as
\begin{equation*}
\phi = (\pi_2\circ\iota)\circ (\pi_1\circ\iota)^{-1}.
\end{equation*}
Clearly, the assignment $W\mapsto \pi_j\circ\iota$ is continuous with respect to the topologies on the spaces of functions and diffeomorphisms specified above. Since composition and taking inverses of diffeomorphisms are continuous operations with respect to the $C^k$-topology, this shows that the symplectomorphism $\phi$ depends continuously on $W$. Conversely, given $\phi$ we define the diffeomorphism $\psi(x,y)\coloneqq (X(x,y),y)$. The assignment $\phi\mapsto \psi$ is continuous with respect to the $C^k$-topology. Define the function
\begin{equation*}
V(x,y) \coloneqq (y - Y(x,y), X(x,y)-x).
\end{equation*}
Then $\nabla W$ is given by the composition $\nabla W = V\circ\psi^{-1}$. Both $V$ and $\psi^{-1}$ depend continuously on $\phi$ with respect to the $C^k$-topology. Hence the same is true for $\nabla W$.
\end{proof}
\begin{lem}
\label{lem:small_symplectomorphism_generated_by_small_hamiltonian}
There exist a $C^1$-open neighbourhood $\mathcal{U} \subset \operatorname{Symp}_c({\mathbb{R}}^2,\omega_0)$ of the identity and a map
\begin{equation*}
\mathcal{U}\rightarrow C^\infty_c([0,1]\times {\mathbb{R}}^2)\qquad \phi\mapsto H_\phi
\end{equation*}
such that:
\begin{enumerate}
\item $H_\phi$ generates $\phi$.
\item $H_{\operatorname{id}} = 0$
\item For every integer $k\geq 1$ the following is true: Equip $\mathcal{U}$ with the $C^k$-topology and $C^\infty_c([0,1]\times {\mathbb{R}}^2)$
with the topology induced by the norm $H\mapsto \|X_H\|_{C^{k-1}}$ where we view $X_H$ as an element of $C^\infty_c([0,1] \times {\mathbb{R}}^2,{\mathbb{R}}^2)$. The map $\phi\mapsto H_\phi$ is continuous with respect to these topologies.
\end{enumerate}
\end{lem}
\begin{proof}
Let $\phi$ be a compactly supported symplectomorphism sufficiently $C^1$-close to the identity and let $W$ be the associated generating function. For $t\in [0,1]$, let $\phi_t$ be the symplectomorphism associated to the generating function $t\cdot W$. Let $H_\phi:[0,1]\times {\mathbb{R}}^2\rightarrow{\mathbb{R}}$ be the unique compactly supported Hamiltonian generating the flow $(\phi_t)_{t\in [0,1]}$. This yields a map $\phi\mapsto H_\phi$ defined on a $C^1$-open neighbourhood of the identity. Clearly, assertions (1) and (2) hold. It remains to check continuity. Let $k\geq 1$. Consider the parametrization
\begin{equation*}
\iota_t : {\mathbb{R}}^2\ni (X,y)\mapsto
\left(\begin{matrix}
X-t\cdot \partial_2W(X,y)\\
y\\
X\\
y- t\cdot \partial_1W(X,y)
\end{matrix}\right)\in\Gamma(\phi_t)\subset{\mathbb{R}}^2\times{\mathbb{R}}^2
\end{equation*}
of $\Gamma(\phi_t)$. We have
\begin{equation*}
\phi_t = (\pi_2\circ\iota_t)\circ (\pi_1\circ\iota_t)^{-1}
\end{equation*}
where $\pi_j:{\mathbb{R}}^2\times{\mathbb{R}}^2\rightarrow{\mathbb{R}}^2$ denotes the projection onto the $j$-th factor. Clearly, the map
\begin{equation*}
(C^\infty_c({\mathbb{R}}^2),\|\nabla \cdot\|_{C^k})\enspace\ni\enspace W \mapsto \pi_j\circ\iota_t \enspace\in\enspace (C^\infty([0,1]\times {\mathbb{R}}^2,{\mathbb{R}}^2),\|\cdot\|_{C^k})
\end{equation*}
is continuous. Since composition and inversion of diffeomorphisms are continuous operations with respect to the $C^k$-topology, this implies that
\begin{equation}
\label{eq:continuity_of_generating_function_mapsto_flow}
(C^\infty_c({\mathbb{R}}^2),\|\nabla \cdot\|_{C^k})\enspace\ni\enspace W \mapsto \phi_t \enspace\in\enspace (C^\infty([0,1] \times {\mathbb{R}}^2,{\mathbb{R}}^2),\|\cdot\|_{C^k})
\end{equation}
is continuous. We have $X_{H_\phi} = (\partial_t\phi_t)\circ\phi_t^{-1}$. Combining Lemma \ref{lem:continuity_of_correspondence_between_symplectomorphisms_and_generating_functions} with continuity of \eqref{eq:continuity_of_generating_function_mapsto_flow}, we obtain that $\phi\mapsto X_{H_\phi}$ is continuous with respect to the $C^k$-topology on symplectomorphisms and the $C^{k-1}$-topology on vector fields.
\end{proof}
\begin{lem}
\label{lem:perturbing_symplectomorphism_near_elliptic_fixed_point}
Let $\omega$ be an area form and $\phi$ a symplectomorphism defined near the origin of ${\mathbb{R}}^2$. Assume that $0$ is a fixed point of $\phi$ and that it is either elliptic or degenerate. Then there exists a Hamiltonian $H\in C^\infty_c([0,1]\times {\mathbb{R}}^2,{\mathbb{R}})$ supported inside an arbitrarily small open neighbourhood of $0$ and with arbitrarily small norm $\|X_H\|_{C^2}$ such that $0$ is a fixed point of $\phi\circ \phi_H^1$ and such that $\phi\circ \phi_H^1$ is smoothly conjugated to an irrational rotation in some neighbourhood of $0$.
\end{lem}
\begin{proof}
After a change of coordinates, we can assume w.l.o.g. that $\omega=\omega_0$. Since $0$ is elliptic or degenerate as a fixed point of $\phi$, we may choose a $C^\infty$-small Hamiltonian $H$ supported in a small neighbourhood of $0$ such that $0$ is an elliptic fixed point of $\phi\circ\phi_H^1$ with rotation number an irrational multiple of $2\pi$ and such that $\phi\circ\phi_H^1$ is real analytic in some open neighbourhood of $0$. It follows from \cite[Chapter 23, p. 172-173]{MoSi95} that there exists a symplectomorphism $\psi$ defined in an open neighbourhood of $0$ and fixing $0$ such that
\begin{equation*}
\psi^{-1}\circ\phi\circ\phi_H^1\circ\psi (x,y) =
\left(\begin{matrix}
\cos(\theta(x,y)) & -\sin(\theta(x,y)) \\
\sin(\theta(x,y)) & \cos(\theta(x,y))
\end{matrix}\right)\cdot
\left(\begin{matrix}
x\\y
\end{matrix}\right) + O_4(x,y)
\end{equation*}
where $\theta(x,y)=\theta_0 + \theta_1(x^2+y^2)$ for real constants $\theta_0$ and $\theta_1$ and $O_4(x,y)$ is a real analytic map vanishing up to order $3$ at the origin. Since $d(\phi\circ\phi_H^1)(0)$ is conjugated to an irrational rotation, the constant $\theta_0$ is an irrational multiple of $2\pi$. There exists a symplectomorphism $\xi$ arbitrarily $C^3$-close to the identity and supported in an arbitrarily small neighbourhood of $0$ such that
\begin{equation*}
\psi^{-1}\circ\phi\circ\phi_H^1\circ\psi \circ \xi (x,y) =
\left(\begin{matrix}
\cos(\theta(x,y)) & -\sin(\theta(x,y)) \\
\sin(\theta(x,y)) & \cos(\theta(x,y))
\end{matrix}\right)\cdot
\left(\begin{matrix}
x\\y
\end{matrix}\right)
\end{equation*}
near $0$. Lemma \ref{lem:small_symplectomorphism_generated_by_small_hamiltonian} yields a Hamiltonian $G$ supported in a small neighbourhood of $0$ such that $\phi_G^1 = \xi$ and such that $\|X_G\|_{C^2}$ is controlled by $\|\xi-\operatorname{id}\|_{C^3}$. We may choose an autonomous Hamiltonian $K$ such that $\|K\|_{C^3}$ is arbitrarily small, $K$ is supported in an arbitrarily small neighbourhood of $0$ and $K(x,y) = \frac{\theta_1}{4}(x^2+y^2)^2$ near $0$. The time-$1$-map $\phi_K^1$ is given by
\begin{equation*}
\phi_K^1(x,y) = \left(\begin{matrix}
\cos(-\theta_1(x^2+y^2)) & -\sin(-\theta_1(x^2+y^2)) \\
\sin(-\theta_1(x^2+y^2)) & \cos(-\theta_1(x^2+y^2))
\end{matrix}\right)\cdot
\left(\begin{matrix}
x\\y
\end{matrix}\right)
\end{equation*}
in a neighbourhood of $0$. Thus we have
\begin{equation*}
\psi^{-1}\circ\phi\circ\phi_H^1\circ\psi \circ \phi_G^1\circ \phi_K^1 (x,y)=
\left(\begin{matrix}
\cos\theta_0 & -\sin\theta_0 \\
\sin\theta_0 & \cos\theta_0
\end{matrix}\right)\cdot
\left(\begin{matrix}
x\\y
\end{matrix}\right).
\end{equation*}
Hence
\begin{equation*}
\phi\circ\phi_H^1\circ\psi \circ \phi_G^1\circ \phi_K^1\circ\psi^{-1} = \phi\circ \phi_H^1\circ \phi_{G\circ\psi^{-1}}^1\circ \phi_{K\circ\psi^{-1}}^1
\end{equation*}
is smoothly conjugated to an irrational rotation. Let $\eta:[0,1]\rightarrow [0,1]$ be a smooth function such that $\eta(t)=0$ for $t$ near $0$ and $\eta(t)=1$ for $t$ near $1$ and $\eta'\geq 0$. Define the Hamiltonian $F$ by
\begin{equation*}
F:[0,1] \times {\mathbb{R}}^2\rightarrow{\mathbb{R}}\qquad F(t,z)\coloneqq
\begin{cases}
3\cdot\eta'(3t)\cdot K(\psi^{-1}(z)) & \text{for}\enspace 0\leq t\leq \frac{1}{3} \\
3\cdot\eta'(3t-1)\cdot G(\eta(3t-1),\psi^{-1}(z)) & \text{for}\enspace \frac{1}{3}\leq t\leq \frac{2}{3} \\
3\cdot\eta'(3t-2)\cdot H(\eta(3t-2),z) & \text{for}\enspace \frac{2}{3}\leq t\leq 1.
\end{cases}
\end{equation*}
$F$ is compactly supported, vanishes for $t$ near $0$ and $1$ and generates the symplectomorphism $\phi_H^1\circ \phi_{G\circ\psi^{-1}}^1\circ \phi_{K\circ\psi^{-1}}^1$. Thus $\phi\circ\phi_F^1$ is smoothly conjugated to an irrational rotation. By shrinking the supports of $H$, $\xi$ and $K$ and the norms $\|H\|_{C^\infty}$, $\|\xi-\operatorname{id}\|_{C^3}$ and $\|K\|_{C^3}$, we can make the support of $F$ and the norm $\|X_F\|_{C^2}$ arbitrarily small.
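The key cancellation used above, namely that composing the radial twist by $\theta_0+\theta_1 r^2$ with the time-one map of $K$ (a rotation by $-\theta_1 r^2$) yields the rigid rotation by $\theta_0$, can be checked numerically. The following sketch, with arbitrary illustrative constants, is ours and not part of the proof:

```python
import math

# Numerical check of the cancellation: a radial twist by theta0 + theta1*r^2,
# composed with a rotation by -theta1*r^2 (the time-one map of K), gives the
# rigid rotation by theta0, since both factors preserve r^2.
THETA0, THETA1 = 1.0, 0.7  # arbitrary illustrative constants

def rot(x, y, a):
    c, s = math.cos(a), math.sin(a)
    return c * x - s * y, s * x + c * y

def twist(x, y):
    # normal form without the O_4 remainder: rotation by theta(x, y)
    return rot(x, y, THETA0 + THETA1 * (x * x + y * y))

def phi_K(x, y):
    # time-one map of K = (theta1/4)*(x^2+y^2)^2: rotation by -theta1*r^2
    return rot(x, y, -THETA1 * (x * x + y * y))

x, y = 0.4, -0.2
u, v = twist(*phi_K(x, y))
ru, rv = rot(x, y, THETA0)
print(max(abs(u - ru), abs(v - rv)) < 1e-12)  # True
```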
\end{proof}
\begin{lem}
\label{lem:lift_modified_return_map_to_contact_form}
Let $\alpha$ be a contact form on a $3$-manifold $Y$ and let $\gamma$ be a simple closed Reeb orbit. Let $p$ be a point on $\gamma$ and let $D$ be a small disk intersecting $\gamma$ transversely in $p$. We denote $\omega\coloneqq d\alpha|_D$. Let $\phi:(U,\omega)\rightarrow (D,\omega)$ be the local first-return-map of the Reeb flow, defined in some open neighbourhood $U\subset D$. There exist a $C^1$-open neighbourhood $\mathcal{U}$ of zero inside $C^\infty_c([0,1]\times U,{\mathbb{R}})$ and a map
\begin{equation*}
\mathcal{U}\rightarrow \Omega^1(Y)\qquad H \mapsto \alpha_H
\end{equation*}
such that:
\begin{enumerate}
\item $\alpha_H$ is a contact form and agrees with $\alpha$ outside a small neighbourhood of $\gamma$.
\item The local first return map of the Reeb flow of $\alpha_H$ is given by $\phi\circ \phi_H^1$.
\item $\alpha_H = \alpha$ for $H = 0$.
\item For every integer $k\geq 0$ the following is true: Equip $\mathcal{U}$ with the topology induced by the norm $\|X_H\|_{C^k}$. Equip $\Omega^1(Y)$ and $\operatorname{Vect}(Y)$ with the $C^k$-topologies. Then the maps $H\mapsto \alpha_H$ and $H\mapsto R_{\alpha_H}$ are continuous with respect to these topologies.
\end{enumerate}
\end{lem}
\begin{proof}
Denote $\lambda\coloneqq \alpha|_{D}$. For $T>0$ sufficiently small, there exists an embedding
\begin{equation*}
F:[0,T]\times D\rightarrow Y
\end{equation*}
such that the restriction of $F$ to $\{0\}\times D$ is the inclusion of $D$ and such that $F^*\alpha= dt + \lambda$. Let $\eta:[0,T]\rightarrow [0,1]$ be a smooth function which is equal to $0$ near $0$, equal to $1$ near $T$ and satisfies $\eta'\geq 0$. Given a Hamiltonian $H\in\mathcal{U}$, we define $H'$ by
\begin{equation*}
H'(t,z)\coloneqq \eta'(t)\cdot H(\eta(t),z).
\end{equation*}
$H'$ vanishes for $t$ near $0$ and $T$ and its time-$T$-map agrees with the time-$1$-map of $H$. The map $H\mapsto H'$ is continuous with respect to the topology induced by the norm $\|X_H\|_{C^k}$. We define $\alpha_H$ by
\begin{equation}
\label{eq:lift_modified_return_map_to_contact_form}
\alpha_H\coloneqq (1+H')dt + \lambda
\end{equation}
in the coordinate chart $F$. We extend $\alpha_H$ to all of $Y$ by setting it equal to $\alpha$ outside $\operatorname{im}(F)$. If $H$ is sufficiently $C^1$-small, then $\alpha_H$ is a contact form. The Reeb vector field inside the coordinate chart $F$ is given by
\begin{equation}
\label{eq:lift_modified_return_map_to_contact_form_reeb_vector_field}
R_{\alpha_H} = \frac{1}{1 + H' + \lambda(X_{H'})}\cdot (\partial_t + X_{H'}).
\end{equation}
This is positively proportional to $\partial_t + X_{H'}$. Thus the local first return map of the disk $D$ induced by the Reeb flow of $\alpha_H$ is given by $\phi\circ\phi_H^1$. It is immediate from formulas \eqref{eq:lift_modified_return_map_to_contact_form} and \eqref{eq:lift_modified_return_map_to_contact_form_reeb_vector_field} that $\alpha_H$ and $R_{\alpha_H}$ depend continuously on $H$ with respect to the topologies specified in assertion (4).
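As a consistency check, \eqref{eq:lift_modified_return_map_to_contact_form_reeb_vector_field} can be verified directly. Writing $d_zH'$ for the differential of $H'$ in the disk directions and using the convention $\iota_{X_{H'}}\omega = d_zH'$, we compute
\begin{equation*}
d\alpha_H = d_zH'\wedge dt + \omega, \qquad \iota_{\partial_t + X_{H'}}\, d\alpha_H = -d_zH' + \iota_{X_{H'}}\omega = 0,
\end{equation*}
where we used $d_zH'(X_{H'}) = \omega(X_{H'},X_{H'}) = 0$ and $dt(X_{H'})=0$. Moreover, $\alpha_H(\partial_t + X_{H'}) = 1 + H' + \lambda(X_{H'})$, which yields the normalizing factor in \eqref{eq:lift_modified_return_map_to_contact_form_reeb_vector_field}.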
\end{proof}
We are finally ready to prove Proposition \ref{prop:making_local_return_map_of_elliptic_orbit_a_rotation}.
\begin{proof}[Proof of Proposition \ref{prop:making_local_return_map_of_elliptic_orbit_a_rotation}]
Let $D$ be a small disk transversely intersecting $\gamma$ in a point $p$. Denote $\omega\coloneqq d\alpha|_D$. For a sufficiently small open neighbourhood $p\in U\subset D$ we have a well-defined local first return map $\phi:(U,\omega)\rightarrow (D,\omega)$. The map $\phi$ has an elliptic fixed point at $p$. By Lemma \ref{lem:perturbing_symplectomorphism_near_elliptic_fixed_point}, there exists a Hamiltonian $H:[0,1]\times U\rightarrow{\mathbb{R}}$, supported in an arbitrarily small neighbourhood of $p$ and with arbitrarily small norm $\|X_H\|_{C^2}$, such that $\phi\circ\phi_H^1$ is smoothly conjugated to an irrational rotation in some neighbourhood of $p$. Lemma \ref{lem:lift_modified_return_map_to_contact_form} yields a contact form $\alpha_H$ on $Y$ which agrees with $\alpha$ outside a small neighbourhood of $\gamma$ such that the local first return map of the Reeb flow of $\alpha_H$ is given by $\phi\circ\phi_H^1$. By assertion (4) in Lemma \ref{lem:lift_modified_return_map_to_contact_form}, we can make $\|\alpha_H-\alpha\|_{C^2}$ and $\|R_{\alpha_H}-R_\alpha\|_{C^2}$ arbitrarily small by shrinking $\|X_H\|_{C^2}$.
\end{proof}
\subsection{Proof of Proposition \ref{prop:contact_forms_in_C_3_neighbourhood_may_be_approx_by_forms_satisfying_criterion}}
\label{subsection:proof_of_proposition_on_contact_forms_in_c_3_neighbourhood}
This section is devoted to the proof of Proposition \ref{prop:contact_forms_in_C_3_neighbourhood_may_be_approx_by_forms_satisfying_criterion}.
\begin{lem}
\label{lem:viterbo_near_standard_contact_form_if_local_return_map_is_rotation}
There exists $\epsilon>0$ with the following property: Let $\alpha$ be a tight contact form on $S^3$ satisfying the following conditions:
\begin{enumerate}
\item $\|R_\alpha-R_{\alpha_0}\|_{C^2}<\epsilon$
\item $R_\alpha = c\cdot R_{\alpha_0}$ on the great circle $\Gamma\coloneqq \{(z_1,0)\mid |z_1|=1\}$ for some constant $c>0$.
\item $\Gamma$ is the unique shortest Reeb orbit of $\alpha$.
\item The local return map of a small disk transversely intersecting $\Gamma$ is smoothly conjugated to an irrational rotation.
\end{enumerate}
Then there exists a smooth embedding $f:{\mathbb{D}}\rightarrow S^3$ parametrizing a $\partial$-strong disk-like surface of section with boundary orbit $\Gamma$ such that the $2$-form $\omega\coloneqq f^*d\alpha$ and the lift of the first return map $\widetilde{\phi}_1\in\widetilde{\operatorname{Diff}}({\mathbb{D}},\omega)$ with respect to a trivialization of degree $1$ satisfy assertions (1)-(3) in Proposition \ref{prop:contact_forms_in_C_3_neighbourhood_may_be_approx_by_forms_satisfying_criterion}.
\end{lem}
\begin{proof}
Our proof is based on Proposition 3.6 in \cite{ABHS18}. We define
\begin{equation*}
f:{\mathbb{R}}/{\mathbb{Z}}\times{\mathbb{D}}\rightarrow S^3 \quad f(t,re^{i\theta})\coloneqq \left(\sin\left(\frac{\pi}{2}r\right)e^{i(\theta+2\pi t)},\cos\left(\frac{\pi}{2}r\right)e^{2\pi it}\right).
\end{equation*}
Up to replacing ${\mathbb{R}}/\pi{\mathbb{Z}}$ by ${\mathbb{R}}/{\mathbb{Z}}$, this agrees with the map $f$ defined in \cite{ABHS18}. By assertion (iii) in \cite[Proposition 3.6]{ABHS18}, the pull-back of the Reeb vector field $R_\alpha$ via $f|_{{\mathbb{R}}/{\mathbb{Z}}\times\operatorname{int}({\mathbb{D}})}$ extends to a smooth vector field $R$ on the closed solid torus ${\mathbb{R}}/{\mathbb{Z}}\times{\mathbb{D}}$. Moreover, the $C^1$-norm $\|R-\partial_t\|_{C^1}$ is controlled by the $C^2$-norm $\|R_\alpha - R_{\alpha_0}\|_{C^2}$. This shows that if $R_\alpha$ is sufficiently $C^2$-close to $R_{\alpha_0}$, then $R$ is positively transverse to the fibres $t\times{\mathbb{D}}$ of the solid torus. In particular, the restriction of $f$ to $0\times{\mathbb{D}}$ parametrizes a $\partial$-strong disk-like global surface of section of the Reeb flow of $\alpha$. Let $z$ be a point in the boundary $\partial{\mathbb{D}}$. The map
\begin{equation*}
{\mathbb{R}}/{\mathbb{Z}}\rightarrow \Gamma \quad t \mapsto f(t,z)
\end{equation*}
has degree $1$. Thus the flow of $R$ on ${\mathbb{R}}/{\mathbb{Z}} \times {\mathbb{D}}$ induces the lift $\widetilde{\phi}_1$ of the first return map of $f|_{0\times {\mathbb{D}}}$ with respect to a trivialization of degree $1$. We see that the $C^1$-distance between $\widetilde{\phi}_1$ and the identity $\operatorname{id}_{\mathbb{D}}$ is controlled by the $C^2$-distance between $R_\alpha$ and $R_{\alpha_0}$, which yields assertion (1) of Proposition \ref{prop:contact_forms_in_C_3_neighbourhood_may_be_approx_by_forms_satisfying_criterion}. The hypothesis that the local first return map of the orbit $\Gamma$ is smoothly conjugated to an irrational rotation implies that the global first return map $\phi$ is smoothly conjugated to an irrational rotation near the boundary. Let $p$ be a fixed point of $\phi$ corresponding to a closed Reeb orbit $\gamma$ of $\alpha$. By assumption, $\Gamma$ is the unique shortest Reeb orbit of $\alpha$. Thus $\int_\gamma\alpha > \int_\Gamma\alpha$. Let $\widetilde{\phi}_0$ denote the lift of $\phi$ with respect to a trivialization of degree $0$. It follows from Lemma \ref{lem:action_equals_first_return_time} that $\int_\gamma\alpha = \sigma_{\widetilde{\phi}_0}(p)$. The actions of $\widetilde{\phi}_0$ and $\widetilde{\phi}_1$ are related via $\sigma_{\widetilde{\phi}_0}(p)=\sigma_{\widetilde{\phi}_1}(p)+\int_\Gamma\alpha$. Thus we can conclude that $\sigma_{\widetilde{\phi}_1}(p)$ is positive.
\end{proof}
\begin{lem}
\label{lem:move_orbit_to_great_circle}
For every $\epsilon >0$ there exists $\delta >0$ such that for all tight contact forms $\alpha$ on $S^3$ which satisfy $\| R_\alpha - R_{\alpha_0}\|_{C^2}< \delta$ and for all simple closed Reeb orbits $\gamma$ of action less than $\frac{3}{2}\cdot\pi$ there exists a diffeomorphism $\psi$ of $S^3$ such that:
\begin{enumerate}
\item $R_{\psi^*\alpha} = c\cdot R_{\alpha_0}$ on the great circle $\Gamma \coloneqq \{(z_1,0)\mid |z_1|=1\}$ for some constant $c>0$.
\item $\psi(\Gamma) = \operatorname{im} (\gamma)$
\item $\|R_{\psi^*\alpha} - R_{\alpha_0}\|_{C^2} < \epsilon$
\end{enumerate}
\end{lem}
\begin{proof}
This is immediate from Proposition 3.10 in \cite{ABHS18}.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop:contact_forms_in_C_3_neighbourhood_may_be_approx_by_forms_satisfying_criterion}]
Choose $\epsilon >0$ as in Lemma \ref{lem:viterbo_near_standard_contact_form_if_local_return_map_is_rotation}. Choose corresponding $\delta>0$ as in Lemma \ref{lem:move_orbit_to_great_circle}. Let $\mathcal{U}$ be a small $C^3$-open neighbourhood of $\alpha_0$ such that all $\alpha\in\mathcal{U}$ satisfy $\|R_\alpha-R_{\alpha_0}\|_{C^2}<\delta$. We also demand that $(S^3,\alpha)$ is strictly contactomorphic to the boundary of a strictly positively curved domain. We prove that the Proposition holds for all $\alpha\in\mathcal{U}$. It suffices to consider $C^\infty$-generic $\alpha$. In particular we can assume that all periodic orbits are non-degenerate and that there exists a unique orbit $\gamma$ of minimal action. Since $(S^3,\alpha)$ is strictly contactomorphic to the boundary of a strictly positively curved domain, it follows from \cite{Ek90} (see in particular Theorem 3 and Proposition 9 in chapter V) that $\gamma$ must have Conley-Zehnder index $3$ with respect to a global trivialization of the contact structure. This implies that $\gamma$ must be elliptic or negative hyperbolic. By shrinking $\mathcal{U}$ we can guarantee the linearized return map of $\gamma$ to be arbitrarily close to the identity. Hence we can guarantee that $\gamma$ is elliptic. We apply Proposition \ref{prop:making_local_return_map_of_elliptic_orbit_a_rotation}. This yields a contact form $\alpha'$ approximating $\alpha$ in the $C^2$-topology and agreeing with $\alpha$ outside a small neighbourhood of $\gamma$ such that the local return map of $\gamma$ generated by the Reeb flow of $\alpha'$ is smoothly conjugated to an irrational rotation. We can also demand $\|R_{\alpha'} - R_{\alpha}\|_{C^2}$ to be arbitrarily small. In particular we can guarantee that $\|R_{\alpha'} - R_{\alpha_0}\|_{C^2}<\delta$. We apply Lemma \ref{lem:move_orbit_to_great_circle} to the contact form $\alpha'$ and the Reeb orbit $\gamma$ of minimal action. 
Let $\psi$ be a diffeomorphism of $S^3$ satisfying properties (1)-(3) in Lemma \ref{lem:move_orbit_to_great_circle}. Then the contact form $\psi^*\alpha'$ satisfies all assumptions in Lemma \ref{lem:viterbo_near_standard_contact_form_if_local_return_map_is_rotation}. Hence there exists a smooth embedding $f:{\mathbb{D}}\rightarrow S^3$ parametrizing a $\partial$-strong disk-like surface of section of the Reeb flow of $\psi^*\alpha'$ with boundary orbit $\Gamma$ such that the $2$-form $\omega\coloneqq f^*d(\psi^*\alpha')$ and the lift $\widetilde{\phi}_1\in\widetilde{\operatorname{Diff}}({\mathbb{D}},\omega)$ of the first return map satisfy assertions (1)-(3) in Proposition \ref{prop:contact_forms_in_C_3_neighbourhood_may_be_approx_by_forms_satisfying_criterion}. Since $\psi^*\alpha'$ and $\alpha'$ are strictly contactomorphic, the same is true for $\alpha'$.
\end{proof}
\section{Proofs of the main results}
\label{section:proofs_of_the_main_results}
\begin{proof}[Proof of Theorem \ref{theorem:strong_viterbo_near_round_ball}]
Let $X\subset{\mathbb{R}}^4$ be a convex domain such that $\partial X$ is $C^3$-close to the unit sphere $S^3$. Let $g:S^3\rightarrow{\mathbb{R}}_{>0}$ be the unique function such that
\begin{equation*}
\partial X = \{\sqrt{g(x)}\cdot x\mid x\in S^3\}.
\end{equation*}
The function $g$ is $C^3$-close to the constant function $1$. The pull-back of the contact form $\lambda_0|_{\partial X}$ via the radial map
\begin{equation*}
S^3\rightarrow\partial X \qquad x\mapsto \sqrt{g(x)}\cdot x
\end{equation*}
is given by $\alpha \coloneqq g\cdot \alpha_0$ and is $C^3$-close to $\alpha_0$. Let $\alpha'$ be a contact form which is $C^2$-close to $\alpha$ and satisfies all assertions of Proposition \ref{prop:contact_forms_in_C_3_neighbourhood_may_be_approx_by_forms_satisfying_criterion}. We claim that there exists a star-shaped domain $X'$ whose boundary $\partial X'$ is $C^1$-close to $\partial X$ and strictly contactomorphic to $(S^3,\alpha')$. Indeed, arguing as in the proof of Proposition 3.11 in \cite{ABHS18}, we conclude that there exists a $C^1$-open neighbourhood $\mathcal{U}\subset \Omega^1(S^3)$ of $\alpha$ and a map
\begin{equation*}
\mathcal{U} \rightarrow C^\infty(S^3) \quad \beta \mapsto g_\beta
\end{equation*}
which is continuous with respect to the $C^{k+1}$-topology on $\mathcal{U}$ and the $C^k$-topology on $C^\infty(S^3)$, maps $\alpha$ to the constant function $1$ and has the property that every $\beta\in\mathcal{U}$ is a contact form strictly contactomorphic to $g_\beta\cdot \alpha$. Since $\alpha'$ is $C^2$-close to $\alpha$, the function $g_{\alpha'}$ is $C^1$-close to the constant function $1$. We define $X'$ to be the star-shaped domain with boundary
\begin{equation*}
\partial X' = \{ \sqrt{g_{\alpha'}(x)\cdot g(x)}\cdot x\mid x\in S^3\}.
\end{equation*}
$\partial X'$ is $C^1$-close to $\partial X$. The pull-back to $S^3$ of the contact form $\lambda_0|_{\partial X'}$ via the radial projection is given by $g_{\alpha'}\cdot g\cdot \alpha_0 = g_{\alpha'}\cdot \alpha$. This is strictly contactomorphic to $\alpha'$. We claim that $\operatorname{c_G}(X') = \operatorname{c_Z}(X')$. Let $f:{\mathbb{D}}\rightarrow\partial X'$ be a surface of section satisfying the assertions of Proposition \ref{prop:contact_forms_in_C_3_neighbourhood_may_be_approx_by_forms_satisfying_criterion}. This means that we may apply Corollary \ref{cor:positivity_criterion_for_diffeomorphisms_close_to_the_identity} to $\widetilde{\phi}_1$, the lift of the first return map with respect to a trivialization of degree $1$. Let $H:{\mathbb{R}}/{\mathbb{Z}}\times{\mathbb{D}}\rightarrow{\mathbb{R}}$ denote a Hamiltonian satisfying all assertions of Corollary \ref{cor:positivity_criterion_for_diffeomorphisms_close_to_the_identity}. This Hamiltonian satisfies all hypotheses for the second part of Theorem \ref{theorem:embedding_result}. We conclude that $B(a)\overset{s}{\hookrightarrow}X'\overset{s}{\hookrightarrow} Z(a)$ where $a>0$ is the symplectic area of the surface of section. In particular, this implies that $\operatorname{c_G}(X')=\operatorname{c_Z}(X')$. We can make the $C^1$-distance between $\partial X$ and $\partial X'$ arbitrarily small by letting the $C^2$-distance between $\alpha$ and $\alpha'$ go to zero. This shows that $X$ may be approximated in the $C^1$-topology by star-shaped domains $X'$ whose Gromov width and cylindrical embedding capacity agree. It is an easy consequence of the monotonicity and conformality of symplectic capacities that any symplectic capacity is continuous on the space of all star-shaped domains with respect to the $C^0$-topology. Therefore $\operatorname{c_G}(X)=\operatorname{c_Z}(X)$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{theorem:area_of_surface_of_section_bounds_cylindrical_capacity}]
We apply Proposition \ref{prop:modify_hypersurface_such_that_return_map_is_generated_by_positive_hamiltonian}. This yields a star-shaped domain $X'$ such that $X\overset{s}{\hookrightarrow} X'$ and a $\partial$-strong disk-like global surface of section $\Sigma'\subset\partial X'$ such that $\Sigma'$ has the same symplectic area as $\Sigma$ and such that $(X',\Sigma')$ satisfies all hypotheses in the first part of Theorem \ref{theorem:embedding_result}. Let $a>0$ denote the symplectic area of $\Sigma$. By Theorem \ref{theorem:embedding_result}, there exist symplectic embeddings $X\overset{s}{\hookrightarrow} X'\overset{s}{\hookrightarrow} Z(a)$. In particular, we have $\operatorname{c_Z}(X)\leq a$.
\end{proof}
\bibliographystyle{plain}
\section{Introduction}
Output tracking in dynamical systems -- arising in robotics, flight control, economics, biology, and cyber-physical systems -- is the practice of designing decision makers that ensure that a system's output tracks a given signal \cite{devasia1996nonlinear,martin1996different}.
Well-known existing methods for nonlinear output regulation and tracking include control techniques based on nonlinear inversions \cite{Isidori90},
high-gain observers \cite{Khalil98}, and the framework of
model predictive control (MPC) \cite{allgower2012nonlinear,Rawlings17}. Recently, a new approach has been proposed, based on
the Newton-Raphson flow for solving algebraic equations \cite{Wardi17}.
Subsequently, it has been tested on various applications, including control of an inverted pendulum and position control of platoons of mobile robotic vehicles \cite{Wardi18,shivam2018tracking}. While perhaps not as general as the aforementioned established techniques, it holds out the promise of efficient computation and large domains of stability.
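To convey the idea, consider a scalar system whose output is an instantaneous nonlinearity $y=g(u)$; the Newton-Raphson flow steers $u$ so that $y$ tracks a reference $r(t)$. The following is our own schematic sketch, not one of the controllers of \cite{Wardi17,Wardi18}, and $g$ is an arbitrary hypothetical output map:

```python
import math

# Schematic scalar illustration of the Newton-Raphson flow (our own sketch):
# for an output map y = g(u), evolve the input by du/dt = k*(r(t) - g(u))/g'(u),
# integrated here with forward Euler. g is an arbitrary hypothetical nonlinearity.
def g(u):
    return u + 0.25 * math.sin(u)

def dg(u):
    return 1.0 + 0.25 * math.cos(u)   # bounded away from zero

def track(k=100.0, dt=1e-3, T=5.0):
    u, t = 0.0, 0.0
    while t < T:
        r = math.sin(t)                     # reference signal to be tracked
        u += dt * k * (r - g(u)) / dg(u)    # Newton-Raphson flow step
        t += dt
    return abs(math.sin(t) - g(u))          # final tracking error

print(track() < 0.05)  # True: the output tracks the reference up to a small lag
```

The error dynamics $\dot e \approx \dot r - k e$ show that the residual tracking error scales like $|\dot r|/k$, so larger gains $k$ yield tighter tracking.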
The successful deployment of complex control systems in real-world applications increasingly depends on their ability to operate in highly unstructured -- even adversarial -- settings, where \textit{a priori} knowledge of the evolution of the environment is impossible to acquire. Moreover, due to the increasing interconnection between the physical and the cyber domains, control systems become more intertwined with human operators, rendering model-based solutions fragile or unpredictable. To that end, methods that augment low-level control techniques with intelligent decision-making mechanisms have been extensively investigated in \cite{saridis1983intelligent}.
Machine learning \cite{haykin2009neural,vrabie2013optimal} offers a suitable framework that allows control systems to autonomously adapt by leveraging data gathered from their environment. To enable data-driven solutions for autonomy, learning algorithms use artificial neural networks (NNs): classes of functions that, owing to properties stemming from their neurobiological analogy, offer adaptive data representations and prediction based on external observations.
NNs have been used extensively in control applications \cite{narendra1990identification}, both in open-loop and closed-loop fashion. In closed-loop applications, NNs have been utilized as dynamics approximators, or in the framework of reinforcement learning, in enabling online solution of the Hamilton-Jacobi-Bellman equation \cite{vamvoudakis2010online}. However, the applicability of NNs in open-loop control objectives is broader, due to their ability to operate as classifiers, or as nonlinear function approximators \cite{bishop1995neural}.
The authors of \cite{narendra1990identification} introduced NN structures for system identification as well as adaptive control. Extending the identification capabilities of learning algorithms, the authors of \cite{bhasin2013novel} introduce a robustification term that guarantees asymptotic estimation of the state and the state derivative. Furthermore, reinforcement learning has received increasing attention since the development of methods that solve optimal control problems for continuous time control systems online without the knowledge of the dynamics \cite{vamvoudakis2017q}.
Prediction has been at the forefront of research conducted on machine learning. Learning-based attack prediction was employed both in \cite{weber2009data} and \cite{alpcan2010network} in the context of cyber-security, and \cite{pesch1995synthesis} utilized NNs to solve a pursuit-evasion game by constructing both the evader's and the pursuer's strategies offline using pre-computed trajectories. Recently, the authors of this paper applied an NN to online model construction in a control application \cite{Kanellopoulos19}.
This paper applies an NN technique to the pursuit-evasion problem
investigated in \cite{quintero2016robust}, which is more challenging than the problem addressed in \cite{Kanellopoulos19}. The strategies of both pursuers and evader are based on respective games. In Ref. \cite{quintero2016robust},
the pursuers know the game of the evader ahead of time, and an MPC technique is used to determine their trajectories. In this paper the pursuers do not have an a-priori knowledge of the evader's game or its structure, and they employ an NN
in real time to identify its input-output mapping. We use our tracking-control technique \cite{Wardi17} rather than MPC, and obtain results similar to those of \cite{quintero2016robust}. Furthermore, the input to the system has a smaller dimension than its output,
and hence the control is underactuated. We demonstrate a way of overcoming this limitation, which may have a broad scope in applications.
The rest of the paper is structured as follows.
Section II describes our proposed control technique
and some preliminary results on NN, and it formulates the pursuers-evader problem. Section III describes results on model-based and learning-based strategies. Simulation results are presented in Section IV. Finally, Section V concludes the paper and discusses directions for future research.
\section{Preliminaries and Problem Formulation}
\subsection{Tracking Control Technique}\label{sec:tracking}
This subsection recounts results published in our previous work in which prediction-based output tracking was used for fully-actuated systems \cite{Wardi17, Wardi18,shivam2018tracking}.
Consider a system as shown in Figure~\ref{control_system} with $r(t)\in \mathbb{R}^m$, $y(t)\in \mathbb{R}^m$,
$u(t)\in \mathbb{R}^m$, and $e(t):=r(t)-y(t)$. The objective of the controller is
to ensure that
\begin{equation}\label{track_error_eq}
\lim_{t\rightarrow\infty}||r(t)-y(t)||<\varepsilon,
\end{equation}
for a given (small) $\varepsilon\in\Real^+$.
\begin{figure}
\centering
\includegraphics[width=2.8in]{System.pdf}
\caption{{ Basic control system scheme.}}\label{control_system}
\end{figure}
To illustrate the basic idea underlying the controller, let us first assume that (i) the plant subsystem is a memoryless nonlinearity of the form
\begin{equation}\label{eq:output}
y(t)=g(u(t)),
\end{equation}
for a continuously-differentiable function $g:\mathbb{R}^m\rightarrow \mathbb{R}^m$,
and (ii) the target reference $\{r(t):t\in[0,\infty)\}$ is a constant, $r(t)\equiv r$ for a given $r\in \mathbb{R}^m$.\footnote{Henceforth, for a generic signal $\{x(t),~t\in[0,\infty)\}$, we will use the notation $\{x(t)\}$ to distinguish the signal from its value $x(t)$ at a particular time $t$.} These assumptions will be relaxed later.
In this case, the tracking controller is defined by the following
equation,
\begin{equation}\label{eq:controller}
\dot{u}(t)=\Big(\frac{\partial g}{\partial u}(u(t))\Big)^{-1}\big(r-y(t)\big),
\end{equation}
assuming that the Jacobian matrix $\frac{\partial g}{\partial u}(u(t))$ is nonsingular at every point $u(t)$ computed by the controller via \eqref{eq:controller}. Observe that \eqref{eq:controller} defines the Newton-Raphson flow for solving the algebraic equation $r-g(u)=0$, and hence (see \cite{Wardi17,Wardi18})
the controller converges in the sense that
$\lim_{t\rightarrow\infty}\big(r(t)-y(t)\big)=0$.
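For concreteness, the Newton-Raphson flow \eqref{eq:controller} can be integrated with a forward-Euler scheme. The Python sketch below uses an illustrative memoryless plant $g$ of our own choosing (not a system from this paper); its Jacobian is diagonally dominant and hence nonsingular everywhere, so the error $r-g(u(t))$ decays exponentially along the flow.

```python
import numpy as np

# Forward-Euler discretization of the Newton-Raphson flow
# udot = (dg/du)^{-1} (r - g(u)) for a toy memoryless plant g
# (illustrative choice; its Jacobian is nonsingular everywhere).
def g(u):
    return np.array([u[0] + 0.1 * np.tanh(u[1]),
                     u[1] + 0.1 * np.tanh(u[0])])

def jac_g(u):
    return np.array([[1.0, 0.1 / np.cosh(u[1]) ** 2],
                     [0.1 / np.cosh(u[0]) ** 2, 1.0]])

r = np.array([1.0, 0.5])        # constant reference
u = np.zeros(2)
dt = 0.01
for _ in range(5000):
    u += dt * np.linalg.solve(jac_g(u), r - g(u))
# along the flow the error r - g(u(t)) decays like e^{-t},
# so g(u) is now equal to r up to numerical precision
```

Since $\frac{d}{dt}\big(r-g(u(t))\big)=-\big(r-g(u(t))\big)$ along the flow, the error contracts at a rate independent of the conditioning of $g$, which is the property the controller exploits.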
Next, suppose that the reference target is time-dependent, while keeping the assumption that the plant is a memoryless nonlinearity. Suppose that
$\{r(t)\}$ is bounded, continuous, piecewise-continuously differentiable, and $\{\dot{r}(t)\}$ is bounded.
Define
\begin{equation}\label{eq:rdot}
\eta:=\limsup_{t\rightarrow\infty}||\dot{r}(t)||,
\end{equation}
then (see \cite{Wardi18}), with the controller defined by \eqref{eq:controller}, we have that
\begin{equation}\label{track_error_inf}
\limsup_{t\rightarrow\infty}||r(t)-y(t)||\leq\eta.
\end{equation}
Note that Eqs. \eqref{eq:output} and \eqref{eq:controller} together define the closed-loop system. Observe that the plant-equation \eqref{eq:output} is an algebraic equation while the controller equation \eqref{eq:controller}
is a differential equation, hence the closed-loop system represents a dynamical system. Its stability, in the sense that $\{y(t)\}$ is bounded
whenever $\{r(t)\}$ and $\{\dot{r}(t)\}$ are bounded, is guaranteed by \eqref{track_error_inf} as long as the control
trajectory $\{u(t)\}$ does not pass through a point $u(t)$ where the Jacobian matrix $\frac{\partial g}{\partial u}(u(t))$ is singular.
Finally, let us dispense with the assumption that the plant subsystem is a memoryless nonlinearity.
Instead, suppose that it is a dynamical system modeled by the following two equations,
\begin{align}
\dot{x}(t)&=f(x(t),u(t)),~~x(0):=x_{0}\label{eq:state_eqn}\\
y(t)&=h(x(t)), \label{eq:general_output}
\end{align}
where the state variable $x(t)$ is in $\mathbb{R}^n$, and the functions $f:\mathbb{R}^n\times \mathbb{R}^m\rightarrow \mathbb{R}^n$ and $h:\mathbb{R}^n\rightarrow \mathbb{R}^m$ satisfy the following
assumption.
\begin{assumption}
(i). The function $f:\mathbb{R}^n\times \mathbb{R}^m\rightarrow \mathbb{R}^n$ is continuously differentiable, and for every compact set $\Gamma\subset \mathbb{R}^m$
there exists $K\in\Real^+$ such that, for every $x\in \mathbb{R}^n$ and $u\in\Gamma$,
$||f(x,u)||\leq K\big(||x||+1\big)$.
(ii). The function $h:\mathbb{R}^n\rightarrow \mathbb{R}^m$ is continuously differentiable. \frqed
\end{assumption}
This assumption ensures that whenever the control signal $\{u(t)\}$ is bounded and continuous, the state equation \eqref{eq:state_eqn} has a unique solution $x(t)$ on the interval $t\in[0,\infty)$.
In this setting, $y(t)$ is no longer a function of $u(t)$, but rather of
$x(t)$ which is a function of $\{u(\tau):\tau<t\}$. Therefore \eqref{eq:output} is no longer valid, and hence the controller cannot be defined by \eqref{eq:controller}. To get around this conundrum we pull the feedback not from the output $y(t)$ but from a predicted value thereof. Specifically, fix the look-ahead time $T\in\Real^+$, and suppose that at time $t$ the system computes a prediction of $y(t+T)$, denoted by $\tilde{y}(t+T)$. Suppose also that $\tilde{y}(t+T)$ is a function of $(x(t),u(t))$, hence can be written as
$\tilde{y}(t+T)=g(x(t),u(t))$,
where the function $g:\mathbb{R}^n\times \mathbb{R}^m\rightarrow \mathbb{R}^m$ is continuously differentiable.
Now the feedback law is defined by the following equation,
\begin{equation}\label{eq:control_full}
\dot{u}(t)=\Big(\frac{\partial g}{\partial u}(x(t),u(t))\Big)^{-1}\big(r(t+T)-g(x(t),u(t))\big).
\end{equation}
The state equation \eqref{eq:state_eqn} and control equation \eqref{eq:control_full} together define the closed-loop system. This system can be viewed as an $(n+m)$-dimensional dynamical system with the state variable $(x(t)^{\textrm{T}},u(t)^{\textrm{T}})^{\textrm{T}}\in \mathbb{R}^{n+m}$ and input $r(t)\in \mathbb{R}^m$. We are concerned with a variant of Bounded-Input-Bounded-State (BIBS) stability whereby if $\{r(t)\}$ and $\{\dot{r}(t)\}$ are
bounded, then $\{x(t)\}$ is bounded as well. Such stability can no longer be taken for granted, as it could when the plant is a memoryless nonlinearity.
We remark that a larger $T$ means larger prediction errors, and these translate into larger asymptotic tracking errors. On the other hand, an analysis of various second-order systems in \cite{Wardi17} reveals that they all were unstable if $T$ was too small, and stable if $T$ was large enough.
Thus, the requirement of a small prediction error can stand in contradiction with the stability requirement. This issue was resolved by speeding up the controller in the following manner.
Consider $\alpha>1$, and modify \eqref{eq:control_full} by multiplying its right hand side by $\alpha$, resulting in the following control equation:
\begin{equation*}
\dot{u}(t)=\alpha\Big(\frac{\partial g}{\partial u}(x(t),u(t))\Big)^{-1}\big(r(t+T)-g(x(t),u(t))\big).
\end{equation*}
It was verified in \cite{Wardi17,Wardi18,shivam2018tracking}, that regardless of the value of $T\in\Real^+$, a large-enough $\alpha$ stabilizes the closed-loop system.\footnote{This statement seems to have a broad scope, and does not require the plant to be a minimum-phase system.} Furthermore, if the closed-loop system is stable
then the following bound holds,
\begin{equation}\label{eq:error_alpha}
\limsup_{t\rightarrow\infty}||r(t)-\tilde{y}(t)||\leq\frac{\eta}{\alpha},
\end{equation}
where $\eta$ is defined by \eqref{eq:rdot}. Thus, a large gain $\alpha$ can stabilize the closed-loop system and reduce the asymptotic tracking error.
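To illustrate the interplay of the look-ahead time $T$ and the speedup gain $\alpha$, the sketch below applies this controller to a toy scalar plant $\dot{x}=-x+u$ (our own choice, not a system from this paper), for which the constant-input prediction has the closed form $g(x,u)=xe^{-T}+u(1-e^{-T})$.

```python
import numpy as np

# Speed-up controller udot = alpha * (dg/du)^{-1} (r(t+T) - g(x,u)) on the
# toy plant xdot = -x + u.  Holding u constant over [t, t+T] gives the
# closed-form predictor g(x,u) = x e^{-T} + u (1 - e^{-T}).
# T, alpha, and the sinusoidal reference are illustrative choices.
T, alpha, dt = 0.5, 10.0, 1e-3
r = lambda t: np.sin(0.5 * t)            # reference, |rdot| <= eta = 0.5

x, u, t = 0.0, 0.0, 0.0
errs = []
while t < 30.0:
    g = x * np.exp(-T) + u * (1.0 - np.exp(-T))       # predicted y(t+T)
    u += dt * alpha * (r(t + T) - g) / (1.0 - np.exp(-T))
    x += dt * (-x + u)                                # plant step
    t += dt
    if t > 20.0:                                      # past the transient
        errs.append(abs(r(t) - x))
# the asymptotic tracking error stays well below eta
```

In this linear example the closed loop is stable for the chosen $(T,\alpha)$, and the asymptotic tracking error is an order of magnitude below $\eta=0.5$, consistent with the $\eta/\alpha$-type bound.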
\subsection{Problem Formulation}
To broaden the application scope of the control algorithm, we explore underactuated systems such as fixed-wing aircraft, which are widely used in aerospace engineering.
The behavior of a fixed-wing aircraft at constant elevation can be approximated by a planar Dubins vehicle with $3$ states \cite{lavalle2006planning}, evolving for all $t\geq0$ as
\begin{align*}
\dot{z}_1^p(t)&=V^{p}\cos\theta^p(t)\text{, }\\
\dot{z}_2^p(t)&=V^{p}\sin\theta^p(t) \text{, }\\
\dot{\theta}^p(t)&=u(t),
\end{align*}
where $( z^p_1(t),z^p_2(t))^{\textrm{T}}$ denotes the planar position of the vehicle, $\theta^p(t)$ its heading, and $u(t)$ its angular velocity, constrained as $\norm{u} \leq u_{\textrm{max}}$. The input saturation enforces a minimum turning radius equal to $V^{p}/u_{\textrm{max}}$. To test the efficacy of the controller for the underactuated system, henceforth referred to as the pursuer, it is tasked with tracking an evading vehicle, modeled as a single integrator with the following dynamics:
\begin{equation*}\frac{\textrm{d}}{\textrm{d}t}
\begin{bmatrix}
z_1^\textrm{e}(t)\\[0.21em]
z_2^\textrm{e}(t)
\end{bmatrix} =
\begin{bmatrix}
V^{\textrm{e}}\cos\theta^{\textrm{e}}\\[0.2em]
V^{\textrm{e}}\sin\theta^{\textrm{e}}\\[0.2em]
\end{bmatrix},
\end{equation*}
where $(z_1^e(t),z_2^e(t))^{\top}$ denotes the planar position of the evader and $V^e$ is its speed.
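The pursuer and evader dynamics above can be propagated with a simple Euler scheme, as in the sketch below; the speeds, evader heading, saturation level, and step size are illustrative placeholders.

```python
import numpy as np

# One Euler step of the Dubins pursuer and the single-integrator evader.
# Vp, Ve, theta_e, u_max, and dt are illustrative values; the clip enforces
# the input saturation |u| <= u_max (minimum turning radius Vp / u_max).
def step(zp, theta, ze, u, Vp=2.0, Ve=1.0, theta_e=0.3,
         u_max=np.pi / 2, dt=0.01):
    u = np.clip(u, -u_max, u_max)
    zp = zp + dt * Vp * np.array([np.cos(theta), np.sin(theta)])
    theta = theta + dt * u
    ze = ze + dt * Ve * np.array([np.cos(theta_e), np.sin(theta_e)])
    return zp, theta, ze

zp, th, ze = np.zeros(2), 0.0, np.array([5.0, 0.0])
for _ in range(1000):                 # 10 s of simulated time
    zp, th, ze = step(zp, th, ze, u=0.2)
```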
We consider two cases: one where the evader is agnostic to the pursuer and follows a known trajectory, and the other where the evader is adversarial in nature and its trajectory is not known to the pursuer.
The next section will provide two solutions for the problem of estimating the evader's trajectory based, respectively, on a model-based approach and a learning-based approach.
\section{Predictive Framework}
\subsection{Model-Based Pursuit Evasion}
The considered system is underactuated because the pursuer's position, $(z_{1}^p(t),z_{2}^p(t))^{\top}$, is two-dimensional while it is controlled by a one-dimensional variable, $u(t)$. This raises a problem since the application of the proposed tracking technique requires the control variable and the system's output to have the same dimension. To get around this difficulty, we define
a suitable function $F:\mathbb{R}^2\rightarrow \Real^+$ and set $g(x(t),u(t)):=\int_t^{t+T}F(\tilde{y}^p(\tau)-\tilde{y}^e(\tau))\textrm{d}\tau$, where
$\tilde{y}^p(\tau)$ and $\tilde{y}^e(\tau)$ are the predicted positions of the pursuer and the evader at time $\tau$; we then apply the Newton-Raphson flow to the equation $g(x(t),u(t))=0$. The modified controller becomes
\begin{equation}\label{u_dot}
\dot{u}(t)=-\alpha\Big(\frac{\partial g}{\partial u}(x(t),u(t))\Big)^{-1}\big(g(x(t),u(t))\big) ,\ t\geq0.
\end{equation}
Since $g(x,u)$ is a scalar, the modified algorithm works similarly to the base case.
Assume general nonlinear system dynamics as in \eqref{eq:state_eqn}, with the output described by \eqref{eq:general_output}. The predicted state trajectory is computed by holding the input at a constant value over the prediction horizon; it is given by the following differential equation:
\begin{equation}\label{predicted_state}
\dot{\xi}(\tau)=f(\xi(\tau),u(t)), ~\tau\in [t,t+T],
\end{equation}
with the initial condition $\xi(t)=x(t)$, as shown in \cite{Wardi17}. The predicted output at $\tau$ is $\tilde{y}^p(\tau)=h(\xi(\tau))$. Furthermore, by taking the partial derivative of \eqref{predicted_state} with respect to $u(t)$, we obtain
\begin{equation}\label{predicted_derivative}
\frac{\textrm{d}}{\textrm{d}\tau}\frac{\partial \xi}{\partial u}(\tau)=\frac{\partial f}{\partial \xi}(\xi(\tau),u(t))\frac{\partial \xi}{\partial u}(\tau)+\frac{\partial f}{\partial u}(\xi(\tau),u(t)),
\end{equation}
with the initial condition $\frac{\partial \xi}{\partial u}(t)=0$. The above is a differential equation in $\frac{\partial \xi}{\partial u}(\tau)$, $\tau \in [t,t+T]$, and \eqref{predicted_state} and \eqref{predicted_derivative} can be solved numerically.
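Equations \eqref{predicted_state} and \eqref{predicted_derivative} can be integrated jointly, e.g., with forward Euler. The sketch below does so for the Dubins dynamics of Section II, where $\partial f/\partial\xi$ and $\partial f/\partial u$ are available in closed form; the speed, horizon, and step size are our own choices.

```python
import numpy as np

# Joint Euler integration of the prediction ODE and its sensitivity ODE
# over [t, t+T] for the Dubins dynamics xi = (z1, z2, theta),
# f(xi, u) = (V cos(theta), V sin(theta), u).  V, T, dt are placeholders.
def predict(x, u, V=2.0, T=0.5, dt=1e-3):
    xi = np.array(x, dtype=float)          # xi(t) = x(t)
    dxi_du = np.zeros(3)                   # (d xi / d u)(t) = 0
    for _ in range(int(round(T / dt))):
        f = np.array([V * np.cos(xi[2]), V * np.sin(xi[2]), u])
        df_dxi = np.array([[0.0, 0.0, -V * np.sin(xi[2])],
                           [0.0, 0.0,  V * np.cos(xi[2])],
                           [0.0, 0.0,  0.0]])
        df_du = np.array([0.0, 0.0, 1.0])
        xi = xi + dt * f
        dxi_du = dxi_du + dt * (df_dxi @ dxi_du + df_du)
    return xi, dxi_du          # predicted state and its input sensitivity

xi, sens = predict(x=[0.0, 0.0, 0.0], u=0.0)
```

For $u=0$ and zero initial heading, the prediction is a straight segment of length $VT$, and the computed sensitivities approach their analytic values $\partial z_2/\partial u = VT^2/2$ and $\partial\theta/\partial u = T$.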
Finally, the values of $g(x,u)$ and $\frac{\partial g}{\partial u}(x,u)$ can be substituted in \eqref{u_dot} to get the control law.
In the next section, results are presented for an agnostic as well as an adversarial pursuer-evader system. However, as mentioned above, in the adversarial problem formulation the trajectory of the evader is not known in advance, a difficulty that can be overcome in two ways.
In the first approach, the pursuer(s) use game theory to predict the approximate direction of evasion. As mentioned in \cite{isaacs1999differential}, in the case of single pursuer, the evader's optimal strategy is to move along the line joining the evader and pursuer's position, if the pursuer is far enough. When the distance between the pursuer and the evader reduces to the turning radius of the pursuer, the evader switches strategies and enters into the non-holonomic constraint region of the pursuer. This can be represented as follows:
\begin{equation}\label{evasion}
\theta_E= \begin{cases}
\arctan\bigg({\frac{\vphantom{z_{2_p}}z_2^e(t)-z_2^p(t)}{z_1^e(t)-\vphantom{z_1^{p^p}(t)}z_1^{p}(t)}}\bigg), & d > R_P, \\ \\
\arctan\bigg({\frac{\vphantom{z_{2_p}}z_2^e(t)-z_2^p(t)}{z_1^e(t)-\vphantom{z_1^{p^p}(t)}z_1^{p}(t)}}\bigg) \pm \pi/2, & d\leq R_P.
\end{cases}
\end{equation}
Here $\theta_E$ is the expected evasion angle of the evader and $d$ is the distance between the pursuer and the evader.
If there are multiple pursuers, it is assumed that the evader follows the same strategy by considering only the closest pursuer. Notably, this will not provide the pursuers with a correct prediction of the evader's motion, as they do not know about the goal-seeking behavior mentioned above. However, it gives a good enough approximation of the evader's motion that the algorithm can be used for tracking.
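A minimal implementation of the evasion rule \eqref{evasion} is sketched below. We use the two-argument arctangent so that the bearing is correct in all quadrants, and we pick the $+\pi/2$ branch arbitrarily when the evader is within the pursuer's turning radius $R_P$.

```python
import numpy as np

# Expected evasion angle: flee along the line of sight when far from the
# pursuer, and cut into its non-holonomic constraint region when close.
# The +pi/2 branch of the close-range case is chosen arbitrarily here.
def evasion_angle(zp, ze, R_p):
    bearing = np.arctan2(ze[1] - zp[1], ze[0] - zp[0])   # pursuer -> evader
    d = np.hypot(ze[0] - zp[0], ze[1] - zp[1])
    return bearing if d > R_p else bearing + np.pi / 2

theta_far = evasion_angle(zp=(0.0, 0.0), ze=(3.0, 4.0), R_p=1.0)   # d = 5
theta_near = evasion_angle(zp=(0.0, 0.0), ze=(0.3, 0.4), R_p=1.0)  # d = 0.5
```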
The second approach involves learning the evader's behavior over time using an NN. The pursuers take their positions and the position of the evader as inputs, and the NN, after training, gives the estimated evasion direction as its output.
To showcase the efficacy of our method, we consider a pursuit evasion problem involving multiple pursuing agents. Such problems are typically formulated as zero-sum differential games \cite{isaacs1999differential}. Due to the difficulty of solving the underlying Hamilton-Jacobi-Isaacs (HJI) equations \cite{basar1999dynamic} of this problem, we shall utilize the method described in Section \ref{sec:tracking} to approximate the desired behavior.
Furthermore, we show that augmenting the controller with learning structures in order to tackle the pursuit evasion problem without explicit knowledge of the evader's behavior is straightforward.
In order to formulate the pursuit evasion problem, we define a global state space system consisting of the dynamics of the pursuers and the evader. For ease of exposition, the analysis will focus on the $2$-pursuer, $1$-evader problem, since extending the results to multiple pursuers is straightforward.
The global state dynamics become,
\begin{equation}\label{eq:nonlin_dynam}
\frac{\textrm{d}}{\textrm{d}t}
\begin{bmatrix}
z_1^{\textrm{p}_1}(t)\\[0.21em]
z_2^{\textrm{p}_1}(t)\\[0.21em]
\theta^{\textrm{p}_1}(t)\\[0.21em]
z_1^{\textrm{p}_2}(t)\\[0.21em]
z_2^{\textrm{p}_2}(t)\\[0.21em]
\theta^{\textrm{p}_2}(t)\\[0.21em]
z_1^\textrm{e}(t)\\[0.21em]
z_2^\textrm{e}(t)
\end{bmatrix} =
\begin{bmatrix}
V^{\textrm{p}_1}\cos\theta^{\textrm{p}_1}\\[0.2em]
V^{\textrm{p}_1}\sin\theta^{\textrm{p}_1}\\[0.2em]
u^\textrm{p}_1\\[0.2em]
V^{\textrm{p}_2}\cos\theta^{\textrm{p}_2}\\[0.2em]
V^{\textrm{p}_2}\sin\theta^{\textrm{p}_2}\\[0.2em]
u^\textrm{p}_2\\[0.2em]
V^{\textrm{e}}\cos\theta^{\textrm{e}}\\[0.2em]
V^{\textrm{e}}\sin\theta^{\textrm{e}}\\[0.2em]
\end{bmatrix},
\end{equation}
where the superscripts indicate the autonomous agent. For compactness, we denote the global state vector by $x(t)\in \mathbb{R}^8$, the pursuers' control vector by $u(t) \in \mathbb{R}^2$, and by $f$ the nonlinear mapping described by the right-hand side of \eqref{eq:nonlin_dynam}. Thus, given the initial states of the agents $x_0\in\mathbb{R}^8$, the evolution of the pursuit evasion game is described by
$\dot{x}(t) = f(x(t),u,u_\textrm{e})\text{, }x(0)=x_0\text{, }t\geq 0$.
Subsequently, this zero-sum game can be described as a minimax optimization problem through the cost index,
\begin{align}\label{eq:cost}
J(x,u,u_\textrm{e}) &= \int_{0}^{\infty} e^{-\gamma t}L(x)\textrm{d}t \nonumber\\&:= \int_0^\infty e^{-\gamma t}\bigg(\beta_1(d^2_1+d_2^2)+\beta_2\frac{d^2_1d^2_2}{d^2_1+d^2_2}\bigg)\textrm{d}t,
\end{align}
where $d_i=\sqrt{(z_1^i-z_1^\textrm{e})^2+(z_2^i-z_2^\textrm{e})^2}$, $i\in\lbrace \textrm{p}_1,\textrm{p}_2\rbrace$, is the distance between the $i$-th pursuer and the evader, $\beta_1,\ \beta_2\in\Real^+$ are user-defined constants, and $\gamma\in\Real^+$ is a discount factor. The first term ensures that the pursuers remain close to the evader, while the second term encourages cooperation between the agents. The cost decreases exponentially to ensure that the integral is finite in the absence of equilibrium points.
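For concreteness, the running cost $L(x)$ in \eqref{eq:cost} can be evaluated as in the sketch below; the positions and the values of $\beta_1,\beta_2$ are placeholders.

```python
import numpy as np

# Running cost L(x): the first term penalizes the pursuers' total squared
# distance to the evader; the second (coupling) term encourages the
# pursuers to cooperate.  beta1, beta2 are user-defined placeholders.
def running_cost(zp1, zp2, ze, beta1=1.0, beta2=1.0):
    d1 = np.hypot(zp1[0] - ze[0], zp1[1] - ze[1])
    d2 = np.hypot(zp2[0] - ze[0], zp2[1] - ze[1])
    return (beta1 * (d1**2 + d2**2)
            + beta2 * (d1**2 * d2**2) / (d1**2 + d2**2))

L = running_cost((3.0, 0.0), (0.0, 4.0), (0.0, 0.0))   # d1 = 3, d2 = 4
```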
Let $V:\mathbb{R}^8\rightarrow\mathbb{R}$ be a smooth function quantifying the value of the game when specific policies are followed starting from the state $x(t)$.
Then, we can define the corresponding Hamiltonian of the game as,
\begin{equation}\label{eq:ham}
H\big(x,u,u_e,\frac{\partial V}{\partial x}\big) = L(x) + \frac{\partial V}{\partial x}^{\textrm{T}}f(x,u,u_e) + \gamma V.
\end{equation}
The optimal feedback policies $u^\star(x)$, $u^\star_e(x)$ of this game are known to constitute a saddle point \cite{basar1999dynamic} such that,
\begin{align}
u^\star(x) = \arg\min_u H(x,u,u_e), \label{eq:opt_purs}\\
u_e^\star(x) = \arg\max_{u_e} H(x,u,u_e). \label{eq:opt_evade}
\end{align}
Under the optimal policies \eqref{eq:opt_purs},\eqref{eq:opt_evade}, the HJI equation is satisfied,
\begin{equation}\label{eq:HJI}
H\big(x,u^\star,u_e^\star,\frac{\partial V}{\partial x}^\star\big) = 0.
\end{equation}
Evaluating the optimal pursuit policies yields the singular optimal solutions described by
$V_{\theta_{p1}}u_1 = V_{\theta_{p2}}u_2 =0$,
where $V_{x_i}$ is the partial derivative of the value function with respect to the state $x_i$, calculated by solving \eqref{eq:HJI}.
To obviate the need for the bang-bang control derived from \eqref{eq:opt_purs} and \eqref{eq:opt_evade}, we shall employ the predictive tracking technique described in Section \ref{sec:tracking} to derive approximate, easy-to-implement feedback controllers for the pursuing autonomous agents. Furthermore, by augmenting the predictive controller with learning mechanisms, the approximate controllers will need no explicit knowledge of $u_e^\star(x)$, the evader's policy.
The following theorem presents bounds on the optimality loss induced by the use of the look-ahead controller approximation.
\begin{theorem}
Let the pursuit evasion game evolve according to the dynamics given by \eqref{eq:nonlin_dynam}, where the evader is optimal with respect to \eqref{eq:cost} and the pursuers utilize the learning-based predictive tracking strategy given by \eqref{u_dot}. Then, the tracking error of the pursuers and the optimality loss due to the use of the predictive controller are bounded if $\exists \bar{\Delta}\in\Real^+$ such that $\Delta(x(t),\hat{u}(t),\hat{u}_\textrm{e}(t)) \leq \bar{\Delta},~\forall t\geq0$,
where $
\Delta(x,\hat{u},\hat{u}_\textrm{e}) = V_{x_\textrm{e}}v_\textrm{e}(\cos\hat{u}_\textrm{e}-\cos u^\star_\textrm{e})+V_{y_\textrm{e}}v_\textrm{e}(\sin \hat{u}_\textrm{e}-\sin u^\star_\textrm{e}) + V_{\theta_\textrm{p}1}(u_1^\star-\hat{u}_1)+V_{\theta_\textrm{p}2}(u_2^\star-\hat{u}_2),
$
with $V_{\xi}$ denoting the partial derivative of the game value with respect to the state component $\xi(t)$.
\end{theorem}
Proof: Consider the Hamiltonian function when the approximate controller, denoted $\hat{u}(t)$, and the NN-based prediction of the evader's policy, $\hat{u}_\textrm{e}(t)$, are used,
\begin{align}\label{eq:subopt_hamiltonian}
H(x,\hat{u},\hat{u}_\textrm{e}) = L(x) + \big(\frac{\partial V}{\partial x}\big)^\textrm{T} f(x,\hat{u},\hat{u}_\textrm{e}) +\gamma V.
\end{align}
Taking into account the nonlinear dynamics of the system \eqref{eq:nonlin_dynam}, one can rewrite \eqref{eq:subopt_hamiltonian} in terms of the optimal Hamiltonian as $H(x,\hat{u},\hat{u}_\textrm{e}) = H(x,u^\star,u^\star_\textrm{e}) + \Delta(x,\hat{u},\hat{u}_\textrm{e})$,
where $H(x,u^\star,u^\star_\textrm{e})=0$ is the HJI equation that is obtained after substituting \eqref{eq:opt_purs} and \eqref{eq:opt_evade} in \eqref{eq:ham}.
Now, take the orbital derivative of the value function along the trajectories using the approximate controllers as,
$
\dot{V} = \big(\frac{\partial V}{\partial x}\big)^\textrm{T} f(x,\hat{u},\hat{u}_\textrm{e}).
$
Substituting \eqref{eq:subopt_hamiltonian} yields
$
\dot{V} = -L(x) - \gamma V + \Delta(x,\hat{u},\hat{u}_\textrm{e}).
$
Thus, since $L(x)> 0$, $\forall x \in \mathbb{R}^8\setminus \lbrace 0 \rbrace$,
\begin{align*}
\dot{V} < -\gamma V + \Delta(x,\hat{u},\hat{u}_\textrm{e}) \Rightarrow\dot{V} < -\gamma V + \bar{\Delta}.
\end{align*}
Hence, for $V\geq\bar{\Delta}/\gamma$ we have $\dot{V}\leq0$. Thus $\lbrace x\in \mathbb{R}^8~|~ V(x)\leq \bar{\Delta}/\gamma \rbrace$ is a forward invariant set, which implies that the tracking error and the optimality loss over any finite horizon are bounded.
\frQED
\begin{remark}
Note that we do not use optimal control or MPC to solve the pursuit evasion problem. Instead, the controller is governed by \eqref{u_dot}, which is simple to implement and has low computational complexity.
\frqed
\end{remark}
\subsection{Deep Learning-Based Pursuit Evasion}
A deep NN, consisting of $L > 2$ hidden layers, describes a nonlinear mapping between its input space $\mathbb{R}^n$ and output space $\mathbb{R}^p$.
Each layer receives the output of the previous layer as an input and, subsequently, feeds its own output to the next layer. Each layer's output consists of the weighted sum of its input alongside a bias term, filtered through an application-specific activation function \cite{haykin2009neural}.
Specifically, let $\mathbb{R}^{n_l}$ be the input space of a specific layer, and $\mathbb{R}^{p_l}$ the corresponding output space. Then the layer's output is,
\begin{equation*}
Y_i(x) = \sigma\bigg(\sum_{j=1}^{n_l} v_{ij}X_j + v_{i0}\bigg)\text{, } i = 1,2,\dots,p_l,
\end{equation*}
where $X^\prime = \begin{bmatrix}X_1 & \dots & X_{n_l} \end{bmatrix}^\textrm{T} \in \mathbb{R}^{n_l}$ is the input vector, gathered from training data or from the outputs of previous layers, $v_{ij}\in \mathbb{R}$, $j=1,\dots,n_l$, are the layer's weights, $v_{i0} \in \mathbb{R}$ is the bias term, and $\sigma: \mathbb{R} \rightarrow \mathbb{R}$ is the layer's activation function, applied to the weighted sum.
We note that it is typical to write the output of a layer compactly, with a slight abuse of notation, as
\begin{equation}\label{eq:NN}
Y = \sigma(W^\textrm{T} \sigma^\prime(X)),
\end{equation}
where $Y = \begin{bmatrix} Y_1 &\dots & Y_{p_l} \end{bmatrix} \in \mathbb{R}^{p_l}$, $W = \begin{bmatrix} v_{ij} \end{bmatrix}\in \mathbb{R}^{(n_l+1)\times p_l}$ and $\sigma^\prime:\mathbb{R}^{n_l^\prime} \rightarrow \mathbb{R}^{n_l}$ is the activation function of the previous layer, taking as input the vector $X=\begin{bmatrix}{X^\prime}^{\textrm{T}}& 1 \end{bmatrix}^{\textrm{T}}$ .
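A single layer in the compact form \eqref{eq:NN} can be sketched as follows, with the bias folded into the input as a trailing $1$; the layer width, the random weights, and the $\tanh$ activation are illustrative choices.

```python
import numpy as np

# One NN layer: Y = sigma(W^T X) with X = [X'^T, 1]^T, so the last row of
# W holds the bias terms v_{i0}.  tanh is an illustrative activation.
def layer(X_prime, W, act=np.tanh):
    X = np.append(X_prime, 1.0)          # append the bias input
    return act(W.T @ X)                  # p_l outputs

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 2))          # (n_l + 1) x p_l, here n_l = 3
Y = layer(np.array([0.5, -1.0, 2.0]), W)
```

Stacking such layers, with each layer's output feeding the next layer's $X^\prime$, reproduces the deep architecture described above.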
It is known \cite{lewis1998neural} that two-layer NNs possess the universal approximation property, according to which any smooth function can be approximated arbitrarily closely by an NN with two or more layers.
Let $\mathbb{S}\subset \mathbb{R}^n$ be a simply connected compact set and consider the nonlinear function $\kappa : \mathbb{S} \rightarrow \mathbb{R}^p$. Given any $\epsilon_b > 0$, there exists an NN structure such that,
\begin{equation*}
\kappa(x) = \sigma\big(W^\textrm{T} \sigma^\prime(x)\big) + \epsilon\text{, } \forall x \in \mathbb{S},
\end{equation*}
where $\|\epsilon\| \leq \epsilon_b$. We note that, typically, the activation function of the output layer $\sigma(\cdot)$ is taken to be linear.
Determining the weight matrix $W$ of a network is a central concern of machine learning. In this work, we employ the gradient-descent-based backpropagation algorithm.
Given a collection of $N_d$ training data, stored in the tuples $\lbrace x_k,\kappa_k \rbrace_{k}$, where $x_k \in \mathbb{R}^n$ and $\kappa_k \in \mathbb{R}^p$, $\forall k =1,\dots,N_d$, we denote the output errors as
$
r_k = \kappa(x_k) - \kappa_k.
$
Then, the update equation for the weights at each optimization iteration $t_k$ is given by,
\begin{align}\label{eq:NN_tune}
w_{ij}(t_k+1) = w_{ij}(t_k) - \eta \frac{\partial (r_k^\textrm{T}r_k)}{\partial w_{ij}}, ~\forall t_k \in \mathbb{N},
\end{align}
where $\eta\in\Real^+$ denotes the learning rate. We note that the update index $t_k$ need not correspond to the sample index $k$, since different update schedules leverage the gathered data in different ways \cite{lewis1998neural}.
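The update rule \eqref{eq:NN_tune} is sketched below for a single linear output layer, for which the gradient $\partial(r_k^{\textrm{T}}r_k)/\partial W=2x_kr_k^{\textrm{T}}$ has a simple closed form. The synthetic linear target and the learning rate are our own choices; recovering the generating weights illustrates the update itself, not the deep network used later in the paper.

```python
import numpy as np

# Gradient-descent weight update w_ij <- w_ij - eta * d(r^T r)/dw_ij for a
# linear output layer kappa_hat(x) = W^T x (bias folded into x).  The
# training data come from a synthetic linear map W_true.
rng = np.random.default_rng(1)
W_true = rng.standard_normal((3, 2))
X = rng.standard_normal((200, 3))        # inputs (bias column included)
K = X @ W_true                           # targets kappa_k

W, eta = np.zeros((3, 2)), 0.02
for _ in range(100):                     # repeated sweeps over the data
    for x, kappa in zip(X, K):
        r = W.T @ x - kappa              # output error r_k
        W -= eta * 2.0 * np.outer(x, r)  # exact gradient for this layer
err = np.linalg.norm(W - W_true)         # driven essentially to zero
```

For deeper networks the same update is applied layer by layer, with the gradients obtained by backpropagating the output error through the activation functions.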
It can be seen that in order for the proposed method to compute the pursuers' control inputs, an accurate prediction of the future state of the evader is required. However, this presupposes that the pursuers themselves have access to the evader's future decisions; an assumption that is, in most cases, invalid.
Thus, we augment the pursuers' controllers with a NN structure, that learns to predict the actions of the evader, based on past recorded data.
Initially, we assume that the evader's strategy is computed by a feedback algorithm, given her relative position to the pursuers. This way, the unknown function we wish to approximate is $f:\mathbb{R}^{2N} \rightarrow \mathbb{R}^2$, with,
$
u^e = f(\delta z_1^{{p}_1}, \delta z_2^{p_1} ,\dots, \delta z_1^{p_N},\delta z_2^{p_N}), $
where, $(\delta z_1^{p_i},\delta z_2^{p_i})$ denote the distance of pursuer $i$ to the evader in the X and Y axes, respectively.
In order to train the network, we let the pursuers gather data regarding the fleet's position with respect to the evader, as well as her behavior over a predefined time window $T_l > 0$.
\begin{remark}
Increasing the time window $T_l$ will allow the pursuers to gather more training data for the predictive network. However, this will not only increase the computational complexity of the learning procedure, but will make the pursuers more inert to sudden changes in the evader's behavior. Simulation results corroborate our choice of training parameters. \frqed
\end{remark}
Subsequently, we denote by $\hat{u}^e(x)$, the current prediction function for the evader's strategy, i.e., $\hat{u}^e(x) = \sigma\big(\hat{W}^\textrm{T}\hat{\sigma}^\prime(\chi)\big)$,
where $\chi = \begin{bmatrix} \delta z_1^{p_1}& \delta z_2^{p_1} &\dots& \delta z_1^{p_N} & \delta z_2^{p_N} \end{bmatrix}^{\textrm{T}} \in \mathbb{R}^{2N}$, $\hat{W}$ denotes the current weight estimate of the NN's output layer, and $\hat{\sigma}^\prime(\cdot)$ is the current estimate of the hidden layers, parametrized by appropriate hidden weights.
\begin{remark}
While the learning algorithm for the evader's behavior operates throughout the duration of the pursuit, thus making the approximation weights time-varying, we suppress their explicit dependence on time since the process is open-loop, in the sense that the system learns in batches rather than in a continuous fashion. \frqed
\end{remark}
\begin{algorithm}[b]
\caption{Deep Learning-Based and Predictive Pursuit Evasion}
\textbf{Inputs:} ${X_{P_i}(t)}$, $\forall i\in\lbrace 1,\dots,N\rbrace$, $X_E(t)$, and evasion strategy approximation weights $W$.\\
\textbf{Output:} ${u_{P_i}(t)}$, $\forall i\in\lbrace 1,\dots,N\rbrace$.
\begin{algorithmic}[1]
\State Compute $(\delta x_i,\delta y_i)$, $i\in\lbrace1,\dots,N\rbrace$.
\State Predict evader's future behavior via \eqref{eq:NN}.
\State Train NN as in \eqref{eq:NN_tune}.
\State Predict evader's future state as $\tilde{X}_E(t+T)=X_E(t)+[V_E\cos{\theta_E} ~~V_E\sin{\theta_E}]^{\textrm{T}}T$.
\State Propagate pursuer dynamics to get $\tilde{X}_P(t+T)$.
\State Compute the current Newton flow parameters using \eqref{g_multi_pursuer}.
\State Compute the control dynamics $\dot{u}_{P_i}(t)$ from \eqref{eq:controller}.
\State Propagate actual system evolution using \eqref{eq:nonlin_dynam}.
\State Append current distances $(\delta x_i,\delta y_i)$ to a stack of previous observations.
\State Update evader prediction network through \eqref{eq:NN_tune}.
\end{algorithmic}
\end{algorithm}
\section{Simulation Results}
This section presents results for the problems briefly described in the previous section. First, the agnostic evader case is considered followed by the adversarial case. For the second case, single and multiple pursuer systems are considered separately. The controller is implemented on a Dubins vehicle. For the purpose of tracking, we define the system output to be $y^i=\begin{bmatrix}z_1^i&z_2^i\end{bmatrix}^\textrm{T}$, $i \in \lbrace p_1,p_2,e\rbrace$.
\vspace{-1mm}
\subsection{Single Pursuer - Agnostic Target}
In this subsection, the controller is tested on a Dubins vehicle with the task of pursuing an agnostic target moving along a known trajectory. Since the vehicle has a constant speed and an input saturation is enforced, it has an inherent minimum turning radius. For this simulation, we set $V^p=2$~m/s and the input saturation is first set to $\frac{\pi}{2}$~rad/s and then to $2{\pi}$~rad/s. The evader moves along two semicircular curves with a constant speed which is less than $V^p$.
As a consequence, when the pursuer catches up to the evader, it overshoots and has to go around a full circle to resume tracking. Naturally, a lower turning radius translates to better tracking, as the vehicle can make ``tighter'' turns. This can be seen by comparing the vehicle trajectories in Figure~\ref{trajectory_R} with those in Figure~\ref{trajectory_r}: for the same evader trajectory, the tracking performance is far better in the second case. Once the pursuer catches up to the target, the maximum tracking error is approximately $4$ meters in the first case and only $1$ meter in the second, as shown in Figures~\ref{error_R} and \ref{error_r}. This is consistent with the fact that the ratio of the turning radii is $4:1$.
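The $4:1$ error ratio follows directly from the minimum turning radius $V^p/u_{\textrm{max}}$; a two-line check with the values used in this subsection:

```python
import math

# Minimum turning radius of the Dubins pursuer: R = V / u_max.
V_p = 2.0                                     # pursuer speed from this subsection
radii = [V_p / u_max for u_max in (math.pi / 2, 2 * math.pi)]
ratio = radii[0] / radii[1]
# radii are 4/pi ~ 1.273 m and 1/pi ~ 0.318 m, so the ratio is exactly 4,
# matching the ~4 m vs ~1 m peak tracking errors reported above.
```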
\begin{figure}[!ht]
\vspace{6pt}
\begin{center}
\includegraphics[width=0.9\linewidth]{trajectory_large_r.pdf}
\vspace{-8pt}
\caption{ Agnostic evader with a large turning radius.}\label{trajectory_R}
\end{center}
\begin{center}
\includegraphics[width=0.9\linewidth]{distance_large_r.pdf}
\vspace{-8pt}
\captionof{figure}{{Evolution of an agnostic evader tracking error with a large turning radius.}}\label{error_R}
\end{center}
\begin{center}
\includegraphics[width=0.9\linewidth]{trajectory_small_r.pdf}
\vspace{-8pt}
\captionof{figure}{{Agnostic evader with a small turning radius.}}\label{trajectory_r}
\end{center}
\begin{center}
\includegraphics[width=0.9\linewidth]{distance_small_r.pdf}
\vspace{-8pt}
\captionof{figure}{{ Evolution of the agnostic evader tracking error with a small turning radius.}}\label{error_r}
\end{center}
\end{figure}
\subsection{Single Pursuer - Adversarial Evader}
The pursuer is again modelled as a Dubins vehicle, while the evader is modelled as a single integrator with a maximum speed less than that of the pursuer. Hence, while the pursuer is faster, the evader is more agile and can instantly change its direction of motion. In this and subsequent cases, the evader is adversarial in nature and uses a game-theoretic strategy to choose its evasion direction.
Let $y^p(t)$ and $y^e(t)$ be the position vectors of the pursuer and the evader, respectively, at time $t$. First, the pursuer estimates the optimal evasion direction based on the relative position of the evader and itself at time $t$ using \eqref{evasion}. Assuming this direction of evasion to be fixed over the prediction window from $t$ to $t+T$ gives the predicted position of the evader at all time instants in this interval, denoted $\tilde{y}^e(\tau), \tau \in [t, t+T]$. Next, the pursuer estimates its own predicted position if its input is kept constant, denoted $\tilde{y}^p(\tau),\tau \in [t, t+T]$. Finally, $g(t)$ is set to $||\tilde{y}^e(t+T)-\tilde{y}^p(t+T)||^2$, and the value of $\frac{\partial g}{\partial u}(x(t),u(t))$ ($x(t)$ being the ensemble vector of the states of the pursuer and the evader) is used in the input differential equation \eqref{u_dot}.
Figure~\ref{trajectory_pursuer} shows the trajectories of the pursuer and the evader, with the goal for the evader set to the point $(150,60)$. It can be observed that the evader moves towards the goal while the pursuer is far away, and starts evasive maneuvers when the pursuer gets close, by entering its non-holonomic region. Figure~\ref{error_pursuer} displays the tracking error, defined as the distance between the pursuer and the evader, which is almost periodic. This is because the evader's maneuvers force the pursuer to circle back. The peak tracking error after the pursuer catches up is slightly more than twice the turning radius, as expected.
\begin{figure}[!ht]
\vspace{6pt}
\begin{center}
\includegraphics[width=0.9\linewidth]{trajectory_1_pursuer.pdf}
\vspace{-8pt}
\caption{ Trajectories for a single pursuer-evader system.}\label{trajectory_pursuer}
\end{center}
\begin{center}
\includegraphics[width=0.9\linewidth]{distance_1_pursuer.pdf}
\vspace{-8pt}
\captionof{figure}{Evolution of the tracking error for a single pursuer-evader system.}\label{error_pursuer}
\end{center}
\end{figure}
\vspace{-1mm}
\subsection{Multiple Pursuers - Adversarial Evader}
While the previous subsection had only one pursuer, this simulation considers the case of two pursuers and a single evader. Having multiple pursuers means there must be cooperation between them in order to optimally utilize resources. Thus, a pursuer can no longer make decisions based solely on the position of the evader relative to itself; the positions of the remaining pursuers must also be factored in. Accordingly, we redefine the expression for $g(x,u)$ to include these parameters, as shown below for the case of two pursuers. Let $d_{\textrm{p}}$ be the distance between the two pursuers, and let
{\small\begin{align} \label{g_multi_pursuer}
g(x(t),u(t)) := \int_{t}^{t+T} &\bigg\{\beta _1 (d^2_1(\tau)+d^2_2(\tau)) + \beta _2 \frac{d^2_1(\tau)d^2_2(\tau)}{d^2_1(\tau)+d^2_2(\tau)} \nonumber \\ & \quad + \beta _3 e^{-\gamma d_\textrm{p}(\tau)}\bigg\}\textrm{d}\tau,\ \forall t\geq0.
\end{align}}
The first term ensures that the pursuers remain close to the evader, while the second term encourages cooperation between agents. The last term is added to repel pursuers apart if they come close to each other, as having multiple pursuers in close vicinity of each other is sub-optimal.
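The integrand of \eqref{g_multi_pursuer} can be written out directly; the weight values below are placeholders, since the paper leaves $\beta_i$ and $\gamma$ as design choices:

```python
import numpy as np

def multi_pursuer_cost_rate(d1, d2, dp, beta=(1.0, 1.0, 1.0), gamma=1.0):
    """Integrand of g in the two-pursuer objective (weights are placeholders)."""
    b1, b2, b3 = beta
    capture = b1 * (d1**2 + d2**2)                  # stay close to the evader
    coop = b2 * (d1**2 * d2**2) / (d1**2 + d2**2)   # small if either pursuer is close
    repel = b3 * np.exp(-gamma * dp)                # keep the pursuers apart
    return capture + coop + repel
```

Note that the second, harmonic-style term becomes small whenever either pursuer is close to the evader, which is what rewards the division of labor seen in the trajectories.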
Figure~\ref{trajectory_2pursuers} shows the trajectories of the pursuers and the evader when the goal for the evader is set to the point $(15,-1)$. In this case, the pursuers close in on the evader and trap it away from its goal due to their cooperative behavior. The evader is forced to continuously perform evasive maneuvers as the other pursuer closes in when the first has to make a turn. This can be seen more clearly in the tracking error plot given in Figure~\ref{error_2pursuer}. After catching up with the evader, it can be seen that when one pursuer is at its maximum distance, the other is at its minimum.
The results show good coordination between the pursuers and a low tracking error, and are qualitatively comparable to \cite{quintero2016robust}.
Lastly, we present the results under learning-based prediction. Figure~\ref{fig_nn_trajectory} shows the resulting trajectories, and Figure~\ref{fig_nn_dist} presents a comparative result of the tracking error of the model-based algorithm vis-\`a-vis the NN-based control. Figure~\ref{fig_nn_cost} showcases the performance of the proposed algorithm in terms of the game-theoretic cost metric. From these figures, it can be seen that the NN structure offers fast predictive capabilities to the controller; hence the overall performance is comparable to that of the model-based control.
\begin{figure}[!ht]
\vspace{6pt}
\begin{center}
\includegraphics[width=0.9\linewidth]{trajectory_2_pursuers.pdf}
\vspace{-8pt}
\caption{ Trajectories for the two pursuer-single evader system.}\label{trajectory_2pursuers}
\end{center}
\begin{center}
\includegraphics[width=0.9\linewidth]{distance_2_pursuers.pdf}
\vspace{-8pt}
\caption{Evolution of the tracking error for the two pursuer-single evader system.}\label{error_2pursuer}
\end{center}
\end{figure}
\begin{figure}[!ht]
\vspace{6pt}
\begin{center}
\includegraphics[width=0.9\linewidth]{trajectory_NN.pdf}
\vspace{-8pt}
\caption{Trajectories for two pursuers-single evader system with learning.}\label{fig_nn_trajectory}
\end{center}
\begin{center}
\includegraphics[width=0.9\linewidth]{distance_difference_1.pdf}
\vspace{-8pt}
\caption{ Evolution of the tracking error for the systems with and without learning.}\label{fig_nn_dist}
\end{center}
\begin{center}
\includegraphics[width=0.9\linewidth]{cost_difference_2.pdf}
\vspace{-8pt}
\captionof{figure}{{Total cost for the system with and without learning. }}\label{fig_nn_cost}
\end{center}
\end{figure}
\section{Conclusion and Future Work}
This work extends the framework of prediction-based nonlinear tracking in the context of pursuit evasion games.
We present results for vehicle pursuit of agnostic targets, modeled as moving along known trajectories, as well as adversarial target tracking, where the evader evolves according to game-theoretic principles. Furthermore, to obviate the need for explicit knowledge of the evader's strategy, we employ learning algorithms alongside the predictive controller.
The overall algorithm is shown to produce comparable results to those in the literature, while it precludes the need for solving an optimal control problem.
Future work will focus on developing robustness guarantees that will allow for more realistic scenarios, where noise and external disturbances are taken into consideration.
\balance
\bibliographystyle{IEEEtran}
\section{Introduction}
Output tracking in dynamical systems, such as robots, flight control systems, and economic, biological, and cyber-physical systems, is the practice of designing decision makers which ensure that a system's output tracks a given signal \cite{devasia1996nonlinear,martin1996different}.
Well-known existing methods for nonlinear output regulation and tracking include control techniques based on nonlinear inversions \cite{Isidori90},
high-gain observers \cite{Khalil98}, and the framework of
model predictive control (MPC) \cite{allgower2012nonlinear,Rawlings17}. Recently a new approach has been proposed, based on
the Newton-Raphson flow for solving algebraic equations \cite{Wardi17}.
Subsequently, it has been tested on various applications, including control of an inverted pendulum and position control of platoons of mobile robotic vehicles \cite{Wardi18,shivam2018tracking}. While perhaps not as general as the aforementioned established techniques, it holds promise of efficient computation and large domains of stability.
The successful deployment of complex control systems in real-world applications increasingly depends on their ability to operate in highly unstructured -- even adversarial -- settings, where \textit{a priori} knowledge of the evolution of the environment is impossible to acquire. Moreover, due to the increasing interconnection between the physical and the cyber domains, control systems become more intertwined with human operators, which renders model-based solutions fragile in the face of unpredictable behavior. To this end, methods that augment low-level control techniques with intelligent decision-making mechanisms have been extensively investigated \cite{saridis1983intelligent}.
Machine learning \cite{haykin2009neural,vrabie2013optimal} offers a suitable framework that allows control systems to autonomously adapt by leveraging data gathered from their environment. To enable data-driven solutions for autonomy, learning algorithms use artificial neural networks (NNs): classes of functions that, owing to properties stemming from their neurobiological analogy, offer adaptive data representations and prediction based on external observations.
NNs have been used extensively in control applications \cite{narendra1990identification}, both in open-loop and closed-loop fashion. In closed-loop applications, NNs have been utilized as dynamics approximators, or in the framework of reinforcement learning, in enabling online solution of the Hamilton-Jacobi-Bellman equation \cite{vamvoudakis2010online}. However, the applicability of NNs in open-loop control objectives is broader, due to their ability to operate as classifiers, or as nonlinear function approximators \cite{bishop1995neural}.
The authors of \cite{narendra1990identification} introduced NN structures for system identification as well as adaptive control. Extending the identification capabilities of learning algorithms, the authors of \cite{bhasin2013novel} introduce a robustification term that guarantees asymptotic estimation of the state and the state derivative. Furthermore, reinforcement learning has received increasing attention since the development of methods that solve optimal control problems for continuous time control systems online without the knowledge of the dynamics \cite{vamvoudakis2017q}.
Prediction has been at the forefront of research on machine learning. Learning-based attack prediction was employed in both \cite{weber2009data} and \cite{alpcan2010network} in the context of cyber-security, and \cite{pesch1995synthesis} utilized NNs to solve a pursuit evasion game by constructing both the evader's and the pursuer's strategies offline using pre-computed trajectories. Recently, the authors of this paper applied NNs to online model construction in a control application \cite{Kanellopoulos19}.
This paper applies an NN technique to the pursuit-evasion problem
investigated in \cite{quintero2016robust}, which is more challenging than the problem addressed in \cite{Kanellopoulos19}. The strategies of both pursuers and evader are based on respective games. In Ref. \cite{quintero2016robust},
the pursuers know the game of the evader ahead of time, and an MPC technique is used to determine their trajectories. In this paper the pursuers do not have an a-priori knowledge of the evader's game or its structure, and they employ an NN
in real time to identify its input-output mapping. We use our tracking-control technique \cite{Wardi17} rather than MPC, and obtain results similar to those of \cite{quintero2016robust}. Furthermore, the input to the system has a smaller dimension than its output,
and hence the control is underactuated. We demonstrate a way of overcoming this limitation, which may have a broad scope in applications.
The rest of the paper is structured as follows.
Section II describes our proposed control technique
and some preliminary results on NN, and it formulates the pursuers-evader problem. Section III describes results on model-based and learning-based strategies. Simulation results are presented in Section IV. Finally, Section V concludes the paper and discusses directions for future research.
\section{Preliminaries and Problem Formulation}
\subsection{Tracking Control Technique}\label{sec:tracking}
This subsection recounts results published in our previous work in which prediction-based output tracking was used for fully-actuated systems \cite{Wardi17, Wardi18,shivam2018tracking}.
Consider a system as shown in Figure~\ref{control_system} with $r(t)\in \mathbb{R}^m$, $y(t)\in \mathbb{R}^m$,
$u(t)\in \mathbb{R}^m$, and $e(t):=r(t)-y(t)$. The objective of the controller is
to ensure that
\begin{equation}\label{track_error_eq}
\lim_{t\rightarrow\infty}||r(t)-y(t)||<\varepsilon,
\end{equation}
for a given (small) $\varepsilon\in\Real^+$.
\begin{figure}
\centering
\includegraphics[width=2.8in]{System.pdf}
\caption{{ Basic control system scheme.}}\label{control_system}
\end{figure}
To illustrate the basic idea underlying the controller, let us first assume that (i) the plant subsystem is a memoryless nonlinearity of the form
\begin{equation}\label{eq:output}
y(t)=g(u(t)),
\end{equation}
for a continuously-differentiable function $g:\mathbb{R}^m\rightarrow \mathbb{R}^m$,
and (ii) the target reference $\{r(t):t\in[0,\infty)\}$ is a constant, $r(t)\equiv r$ for a given $r\in \mathbb{R}^m$.\footnote {Henceforth we will use the notation $\{x(t)\}$ for a generic signal $\{x(t),~t\in[0,\infty)\}$, to distinguish it from its value $x(t)$ at a particular time $t$.} These assumptions will be relaxed later.
In this case, the tracking controller is defined by the following
equation,
\begin{equation}\label{eq:controller}
\dot{u}(t)=\Big(\frac{\partial g}{\partial u}(u(t))\Big)^{-1}\big(r-y(t)\big),
\end{equation}
assuming that the Jacobian matrix $\frac{\partial g}{\partial u}(u(t))$ is nonsingular at every point $u(t)$ computed by the controller via \eqref{eq:controller}. Observe that \eqref{eq:controller} defines the Newton-Raphson flow for solving the algebraic equation $r-g(u)=0$, and hence (see \cite{Wardi17,Wardi18})
the controller converges in the sense that
$\lim_{t\rightarrow\infty}\big(r(t)-y(t)\big)=0$.
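This convergence is easy to check numerically for a scalar example; the monotone plant $g(u)=u^3+u$, the reference value, and the Euler step size below are illustrative choices of ours, not quantities from the paper:

```python
# Newton-Raphson flow u_dot = (dg/du)^{-1} (r - g(u)) for a scalar
# memoryless plant y = g(u) = u**3 + u, whose derivative is positive
# everywhere, so the Jacobian is never singular.
g = lambda u: u**3 + u
dg = lambda u: 3*u**2 + 1
r = 10.0                       # constant reference

u = 0.0
for _ in range(2000):          # forward-Euler integration of the flow
    u += 0.01 * (r - g(u)) / dg(u)
# y = g(u) converges to r; here the root is u = 2, since 2**3 + 2 = 10.
```

In the error variable $e=r-g(u)$ the flow reads $\dot e=-e$, which is why the tracking error decays exponentially regardless of the plant's nonlinearity.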
Next, suppose that the reference target is time-dependent, while keeping the assumption that the plant is a memoryless nonlinearity. Suppose that
$\{r(t)\}$ is bounded, continuous, piecewise-continuously differentiable, and $\{\dot{r}(t)\}$ is bounded.
Define
\begin{equation}\label{eq:rdot}
\eta:=\limsup_{t\rightarrow\infty}||\dot{r}(t)||,
\end{equation}
then (see \cite{Wardi18}), with the controller defined by \eqref{eq:controller}, we have that
\begin{equation}\label{track_error_inf}
\lim_{t\rightarrow\infty}||r(t)-y(t)||\leq\eta.
\end{equation}
Note that Eqs. \eqref{eq:output} and \eqref{eq:controller} together define the closed-loop system. Observe that the plant-equation \eqref{eq:output} is an algebraic equation while the controller equation \eqref{eq:controller}
is a differential equation, hence the closed-loop system represents a dynamical system. Its stability, in the sense that $\{y(t)\}$ is bounded
whenever $\{r(t)\}$ and $\{\dot{r}(t)\}$ are bounded, is guaranteed by \eqref{track_error_inf} as long as the control
trajectory $\{u(t)\}$ does not pass through a point $u(t)$ where the Jacobian matrix $\frac{\partial g}{\partial u}(u(t))$ is singular.
Finally, let us dispense with the assumption that the plant subsystem is a memoryless nonlinearity.
Instead, suppose that it is a dynamical system modeled by the following two equations,
\begin{align}
\dot{x}(t)&=f(x(t),u(t)),~~x(0):=x_{0}\label{eq:state_eqn}\\
y(t)&=h(x(t)), \label{eq:general_output}
\end{align}
where the state variable $x(t)$ is in $\mathbb{R}^n$, and the functions $f:\mathbb{R}^n\times \mathbb{R}^m\rightarrow \mathbb{R}^n$ and $h:\mathbb{R}^n\rightarrow \mathbb{R}^m$ satisfy the following
assumption.
\begin{assumption}
(i). The function $f:\mathbb{R}^n\times \mathbb{R}^m\rightarrow \mathbb{R}^n$ is continuously differentiable, and for every compact set $\Gamma\subset \mathbb{R}^m$
there exists $K\in\Real^+$ such that, for every $x\in \mathbb{R}^n$ and $u\in\Gamma$,
$||f(x,u)||\leq K\big(||x||+1\big)$.
(ii). The function $h:\mathbb{R}^n\rightarrow \mathbb{R}^m$ is continuously differentiable. \frqed
\end{assumption}
This assumption ensures that whenever the control signal $\{u(t)\}$ is bounded and continuous, the state equation \eqref{eq:state_eqn} has a unique solution $x(t)$ on the interval $t\in[0,\infty)$.
In this setting, $y(t)$ is no longer a function of $u(t)$, but rather of
$x(t)$ which is a function of $\{u(\tau):\tau<t\}$. Therefore \eqref{eq:output} is no longer valid, and hence the controller cannot be defined by \eqref{eq:controller}. To get around this conundrum we pull the feedback not from the output $y(t)$ but from a predicted value thereof. Specifically, fix the look-ahead time $T\in\Real^+$, and suppose that at time $t$ the system computes a prediction of $y(t+T)$, denoted by $\tilde{y}(t+T)$. Suppose also that $\tilde{y}(t+T)$ is a function of $(x(t),u(t))$, hence can be written as
$\tilde{y}(t+T)=g(x(t),u(t))$,
where the function $g:\mathbb{R}^n\times \mathbb{R}^m\rightarrow \mathbb{R}^m$ is continuously differentiable.
Now the feedback law is defined by the following equation,
\begin{equation}\label{eq:control_full}
\dot{u}(t)=\Big(\frac{\partial g}{\partial u}(x(t),u(t))\Big)^{-1}\big(r(t+T)-g(x(t),u(t))\big).
\end{equation}
The state equation \eqref{eq:state_eqn} and control equation \eqref{eq:control_full} together define the closed-loop system. This system can be viewed as an $(n+m)$-dimensional dynamical system with the state variable $(x(t)^{\textrm{T}},u(t)^{\textrm{T}})^{\textrm{T}}\in \mathbb{R}^{n+m}$ and input $r(t)\in \mathbb{R}^m$. We are concerned with a variant of Bounded-Input-Bounded-State (BIBS) stability whereby if $\{r(t)\}$ and $\{\dot{r}(t)\}$ are
bounded, $\{x(t)\}$ is bounded as well. Such stability can no longer be taken for granted, as it is in the case where the plant is a memoryless nonlinearity.
We remark that a larger $T$ means larger prediction errors, which translate into larger asymptotic tracking errors. On the other hand, an analysis of various second-order systems in \cite{Wardi17} reveals that they all were unstable when $T$ was too small, and stable when $T$ was large enough.
Thus, the requirement of a small prediction error can stand in contradiction with the stability requirement. This issue is resolved by speeding up the controller in the following manner.
Consider $\alpha>1$, and modify \eqref{eq:control_full} by multiplying its right hand side by $\alpha$, resulting in the following control equation:
\begin{equation*}
\dot{u}(t)=\alpha\Big(\frac{\partial g}{\partial u}(x(t),u(t))\Big)^{-1}\big(r(t+T)-g(x(t),u(t))\big).
\end{equation*}
It was verified in \cite{Wardi17,Wardi18,shivam2018tracking} that, regardless of the value of $T\in\Real^+$, a large-enough $\alpha$ stabilizes the closed-loop system.\footnote{This statement seems to have a broad scope, and does not require the plant to be a minimum-phase system.} Furthermore, if the closed-loop system is stable
then the following bound holds,
\begin{equation}\label{eq:error_alpha}
\limsup_{t\rightarrow\infty}||r(t)-\tilde{y}(t)||\leq\frac{\eta}{\alpha},
\end{equation}
where $\eta$ is defined by \eqref{eq:rdot}. Thus, a large gain $\alpha$ can stabilize the closed-loop system and reduce the asymptotic tracking error.
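The $\eta/\alpha$ scaling of \eqref{eq:error_alpha} can be seen in a minimal scalar example of our own construction: a memoryless plant $y=u$ tracking the ramp $r(t)=t$, for which $\eta=1$ (the look-ahead plays no role for a memoryless plant, so it is dropped here):

```python
def ramp_tracking_error(alpha, dt=0.001, t_end=30.0):
    # Memoryless plant y = g(u) = u (Jacobian = 1), ramp reference r(t) = t,
    # so eta = 1 and the sped-up flow is u_dot = alpha * (r(t) - u).
    u, t = 0.0, 0.0
    while t < t_end:
        u += dt * alpha * (t - u)
        t += dt
    return abs(t - u)          # asymptotic tracking error, predicted ~ eta/alpha

e5, e50 = ramp_tracking_error(5.0), ramp_tracking_error(50.0)
# e5 ~ 1/5 and e50 ~ 1/50: a tenfold increase in gain cuts the error tenfold.
```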
\subsection{Problem Formulation}
In an attempt to broaden the application scope of the control algorithm, we explore underactuated systems such as fixed-wing aircraft, which are widely used in aerospace engineering.
The behavior of a fixed-wing aircraft at constant elevation can be approximated by a planar Dubins vehicle with three states \cite{lavalle2006planning}: $\forall t\geq0$,
\begin{align*}
\dot{z}_1^p(t)&=V^{p}\cos\theta^p(t)\text{, }\\
\dot{z}_2^p(t)&=V^{p}\sin\theta^p(t) \text{, }\\
\dot{\theta}^p(t)&=u(t),
\end{align*}
where $( z^p_1(t),z^p_2(t))^{\textrm{T}}$ denotes the planar position of the vehicle, $\theta^p(t)$ its heading, and $u(t)$ its angular velocity, constrained as $\norm{u} \leq u_{\textrm{max}}$. The input saturation enforces a minimum turning radius equal to $V^p/u_{\textrm{max}}$. To test the efficacy of the controller, the underactuated system, henceforth referred to as the pursuer, is tasked with tracking an evading vehicle, modeled as a single integrator with the following dynamics:
\begin{equation*}\frac{\textrm{d}}{\textrm{d}t}
\begin{bmatrix}
z_1^\textrm{e}(t)\\[0.21em]
z_2^\textrm{e}(t)
\end{bmatrix} =
\begin{bmatrix}
V^{\textrm{e}}\cos\theta^{\textrm{e}}\\[0.2em]
V^{\textrm{e}}\sin\theta^{\textrm{e}}\\[0.2em]
\end{bmatrix},
\end{equation*}
where $(z_1^e(t),z_2^e(t))^{\top}$ denote the planar position of the evader, and $V^e$ is its speed.
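The effect of the input saturation is easy to visualize by driving the Dubins model at full turn rate, using the speed and saturation values from the simulation section (the integration step is our own choice):

```python
import numpy as np

Vp, u_max, dt = 2.0, np.pi/2, 1e-3    # speed and saturation from Section IV
R = Vp / u_max                        # minimum turning radius (= 4/pi m)

# Saturate the input: the Dubins vehicle traces its tightest possible circle.
x = np.zeros(3)                       # state (z1, z2, theta)
n = int(round((2*np.pi/u_max) / dt))  # steps for one full revolution
pts = np.empty((n, 2))
for k in range(n):
    x = x + dt*np.array([Vp*np.cos(x[2]), Vp*np.sin(x[2]), u_max])
    pts[k] = x[:2]

d_max = np.linalg.norm(pts, axis=1).max()
# d_max ~ 2R: the farthest point of the circle is one diameter from the start,
# and the trajectory closes back on its starting point.
```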
We consider two cases: one where the evader is agnostic to the pursuer and follows a known trajectory, and another where the evader is adversarial in nature and its trajectory is not known to the pursuer.
The next section will provide two solutions for the problem of estimating the evader's trajectory based, respectively, on a model-based approach and a learning-based approach.
\section{Predictive Framework}
\subsection{Model-Based Pursuit Evasion}
The considered system is underactuated because the pursuer's position, $(z_{1}^p(t),z_{2}^p(t))^{\top}$, is two-dimensional while it is controlled by a one-dimensional variable, $u(t)$. This raises a problem, since the application of the proposed tracking technique requires the control variable and the system's output to have the same dimension. To get around this difficulty, we define
a suitable function $F:\mathbb{R}^2\rightarrow \Real^+$ and set $g(x(t),u(t)):=\int_t^{t+T}F(\tilde{y}^p(\tau)-\tilde{y}^e(\tau))\textrm{d}\tau$, where
$\tilde{y}^p(\tau)$ and $\tilde{y}^e(\tau)$ are the predicted positions of the pursuer and the evader at time $\tau$; we then apply the Newton-Raphson flow to the equation $g(x(t),u(t))=0$. The modified controller becomes
\begin{equation}\label{u_dot}
\dot{u}(t)=-\alpha\Big(\frac{\partial g}{\partial u}(x(t),u(t))\Big)^{-1}\big(g(x(t),u(t))\big) ,\ t\geq0.
\end{equation}
Since $g(x,u)$ is a scalar, the modified algorithm works similarly to the base case.
Assume general nonlinear system dynamics as in \eqref{eq:state_eqn} with output described in \eqref{eq:general_output}. The predicted state trajectory is computed by holding the input to a constant value over the prediction horizon, given by the following differential equation:
\begin{equation}\label{predicted_state}
\dot{\xi}(\tau)=f(\xi(\tau),u(t)), ~\tau\in [t,t+T],
\end{equation}
with the initial condition $\xi(t)=x(t)$, as shown in \cite{Wardi17}. The predicted output at $\tau$ is $\tilde{y}^p(\tau)=h(\xi(\tau))$. Furthermore, by taking the partial derivative of \eqref{predicted_state} with respect to $u(t)$, we obtain
\begin{equation}\label{predicted_derivative}
\frac{\textrm{d}}{\textrm{d}\tau}\frac{\partial \xi}{\partial u}(\tau)=\frac{\partial f}{\partial \xi}(\xi(\tau),u(t))\frac{\partial \xi}{\partial u}(\tau)+\frac{\partial f}{\partial u}(\xi(\tau),u(t)),
\end{equation}
with the initial condition ${\frac{\partial \xi}{\partial u}}(t)=0$. The above is a differential equation in ${\frac{\partial \xi}{\partial u}}(\tau)$, $\tau \in [t,t+T]$, and \eqref{predicted_state} and \eqref{predicted_derivative} can be solved numerically.
Finally, the values of $g(x,u)$ and $\frac{\partial g}{\partial u}(x,u)$ can be substituted in \eqref{u_dot} to get the control law.
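The co-integration of the predicted state and its input sensitivity can be sketched for the Dubins pursuer; the forward-Euler discretization, step count, and parameter values here are our own illustrative choices:

```python
import numpy as np

def predict_with_sensitivity(x, u, Vp=2.0, T=0.5, n=50):
    """Co-integrate the predicted state xi and its input sensitivity
    d xi / d u for the Dubins pursuer with forward Euler."""
    xi = np.array(x, dtype=float)          # predicted state, xi(t) = x(t)
    S = np.zeros(3)                        # sensitivity, initialized to 0
    h = T / n
    for _ in range(n):
        c, s = np.cos(xi[2]), np.sin(xi[2])
        f = np.array([Vp*c, Vp*s, u])
        # (df/dxi) S: only the theta-column of df/dxi is nonzero;
        # df/du = (0, 0, 1) since u enters only the heading equation.
        dfdxi_S = np.array([-Vp*s*S[2], Vp*c*S[2], 0.0])
        dfdu = np.array([0.0, 0.0, 1.0])
        xi = xi + h*f                      # predicted-state equation
        S = S + h*(dfdxi_S + dfdu)         # sensitivity equation
    return xi, S
```

A finite difference over the same discretization reproduces $S$, which is a convenient sanity check when implementing the sensitivity equation.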
In the next section, results are presented for agnostic as well as adversarial pursuer-evader systems. However, as mentioned above, in the adversarial problem formulation the trajectory of the evader is not known in advance; this can be overcome in two ways.
In the first approach, the pursuer(s) use game theory to predict the approximate direction of evasion. As mentioned in \cite{isaacs1999differential}, in the case of a single pursuer, the evader's optimal strategy is to move along the line joining the evader's and the pursuer's positions, provided the pursuer is far enough away. When the distance between the pursuer and the evader reduces to the turning radius of the pursuer, the evader switches strategies and enters the non-holonomic constraint region of the pursuer. This can be represented as follows:
\begin{equation}\label{evasion}
\theta_E= \begin{cases}
\arctan\bigg({\frac{\vphantom{z_{2_p}}z_2^e(t)-z_2^p(t)}{z_1^e(t)-\vphantom{z_1^{p^p}(t)}z_1^{p}(t)}}\bigg), & d > R_P, \\ \\
\arctan\bigg({\frac{\vphantom{z_{2_p}}z_2^e(t)-z_2^p(t)}{z_1^e(t)-\vphantom{z_1^{p^p}(t)}z_1^{p}(t)}}\bigg) \pm \pi/2, & d\leq R_P.
\end{cases}
\end{equation}
Here $\theta_E$ is the expected evasion angle of the evader and $d$ is the distance between the pursuer and the evader.
If there are multiple pursuers, it is assumed that the evader follows the same strategy by considering only the closest pursuer. Note that this will not provide the pursuers with a correct prediction of the evader's motion, as they do not know about the goal-seeking behavior mentioned above. However, it gives a good enough approximation of the evader's motion that the algorithm can be used for tracking.
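The switching rule of \eqref{evasion} amounts to a few lines of code; we use the quadrant-correct `arctan2` in place of the arctan of a ratio, and the sign of the $\pm\pi/2$ offset, left open in \eqref{evasion}, is fixed arbitrarily here:

```python
import numpy as np

def evasion_heading(z_e, z_p, R_p):
    """Expected evasion angle theta_E: flee along the line of sight when far,
    cut into the pursuer's non-holonomic region when within one turning
    radius R_p (the left/right choice of the pi/2 offset is arbitrary)."""
    dx, dy = z_e[0] - z_p[0], z_e[1] - z_p[1]
    los = np.arctan2(dy, dx)              # line-of-sight angle, quadrant-correct
    d = np.hypot(dx, dy)
    return los if d > R_p else los + np.pi/2
```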
The second approach involves learning the evader's behavior over time using an NN. The pursuers take their own positions and the position of the evader as inputs, and, after training, the NN outputs the estimated evasion direction.
To showcase the efficacy of our method, we consider a pursuit evasion problem involving multiple pursuing agents. Such problems are typically formulated as zero-sum differential games \cite{isaacs1999differential}. Due to the difficulty of solving the underlying Hamilton-Jacobi-Isaacs (HJI) equations \cite{basar1999dynamic} of this problem, we shall utilize the method described in Section \ref{sec:tracking} to approximate the desired behavior.
Furthermore, we show that augmenting the controller with learning structures in order to tackle the pursuit evasion problem without explicit knowledge of the evader's behavior is straightforward.
In order to formulate the pursuit evasion problem, we define a global state space system consisting of the dynamics of the pursuers and the evader. For ease of exposition, the analysis will focus on the $2$-pursuer, $1$-evader problem, since extending the results to multiple pursuers is straightforward.
The global state dynamics become,
\begin{equation}\label{eq:nonlin_dynam}
\frac{\textrm{d}}{\textrm{d}t}
\begin{bmatrix}
z_1^{\textrm{p}_1}(t)\\[0.21em]
z_2^{\textrm{p}_1}(t)\\[0.21em]
\theta^{\textrm{p}_1}(t)\\[0.21em]
z_1^{\textrm{p}_2}(t)\\[0.21em]
z_2^{\textrm{p}_2}(t)\\[0.21em]
\theta^{\textrm{p}_2}(t)\\[0.21em]
z_1^\textrm{e}(t)\\[0.21em]
z_2^\textrm{e}(t)
\end{bmatrix} =
\begin{bmatrix}
V^{\textrm{p}_1}\cos\theta^{\textrm{p}_1}\\[0.2em]
V^{\textrm{p}_1}\sin\theta^{\textrm{p}_1}\\[0.2em]
u^\textrm{p}_1\\[0.2em]
V^{\textrm{p}_2}\cos\theta^{\textrm{p}_2}\\[0.2em]
V^{\textrm{p}_2}\sin\theta^{\textrm{p}_2}\\[0.2em]
u^\textrm{p}_2\\[0.2em]
V^{\textrm{e}}\cos\theta^{\textrm{e}}\\[0.2em]
V^{\textrm{e}}\sin\theta^{\textrm{e}}\\[0.2em]
\end{bmatrix},
\end{equation}
where the superscripts indicate the autonomous agent. For compactness, we denote the global state vector by $x(t)\in \mathbb{R}^8$, the pursuers' control vector by $u(t) \in \mathbb{R}^2$, and the nonlinear mapping described by the right-hand side of \eqref{eq:nonlin_dynam} by $f$. Thus, given the initial states of the agents $x_0\in\mathbb{R}^8$, the evolution of the pursuit evasion game is described by
$\dot{x}(t) = f(x(t),u,u_\textrm{e})\text{, }x(0)=x_0\text{, }t\geq 0$.
Subsequently, this zero-sum game can be described as a minimax optimization problem through the cost index,
\begin{align}\label{eq:cost}
J(x,u,u_\textrm{e}) &= \int_{0}^{\infty} e^{-\gamma t}L(x)\textrm{d}t \nonumber\\&:= \int_0^\infty e^{-\gamma t}\bigg(\beta_1(d^2_1+d_2^2)+\beta_2\frac{d^2_1d^2_2}{d^2_1+d^2_2}\bigg)\textrm{d}t,
\end{align}
where $d_i=\sqrt{(z_1^i-z_1^\textrm{e})^2+(z_2^i-z_2^\textrm{e})^2}$, $i\in\lbrace \textrm{p}_1,\textrm{p}_2\rbrace$, is the distance between the $i$-th pursuer and the evader, $\beta_1,\ \beta_2\in\Real^+$ are user-defined constants, and $\gamma\in\Real^+$ is a discount factor. The first term ensures that the pursuers remain close to the evader, while the second term encourages cooperation between the agents. The cost decreases exponentially to ensure that the integral has a finite value in the absence of equilibrium points.
Let $V:\mathbb{R}^8\rightarrow\mathbb{R}$ be a smooth function quantifying the value of the game when specific policies are followed starting from state $x(t)$.
Then, we can define the corresponding Hamiltonian of the game as,
\begin{equation}\label{eq:ham}
H\big(x,u,u_e,\frac{\partial V}{\partial x}\big) = L(x) + \frac{\partial V}{\partial x}^{\textrm{T}}f(x,u,u_e) + \gamma V.
\end{equation}
The optimal feedback policies $u^\star(x)$, $u^\star_e(x)$ of this game are known to constitute a saddle point \cite{basar1999dynamic} such that,
\begin{align}
u^\star(x) = \arg\min_u H(x,u,u_e), \label{eq:opt_purs}\\
u_e^\star(x) = \arg\max_{u_e} H(x,u,u_e). \label{eq:opt_evade}
\end{align}
Under the optimal policies \eqref{eq:opt_purs},\eqref{eq:opt_evade}, the HJI equation is satisfied,
\begin{equation}\label{eq:HJI}
H\big(x,u^\star,u_e^\star,\frac{\partial V}{\partial x}^\star\big) = 0.
\end{equation}
Evaluating the optimal pursuit policies yields the singular optimal solutions described by
$V_{\theta_{p1}}u_1 = V_{\theta_{p2}}u_2 =0$,
where $V_{x_i}$ is the partial derivative of the value function with respect to the state $x_i$, calculated by solving \eqref{eq:HJI}.
To obviate the need for the bang-bang control derived from \eqref{eq:opt_purs} and \eqref{eq:opt_evade}, we shall employ the predictive tracking technique described in Section \ref{sec:tracking} to derive approximate, easy-to-implement feedback controllers for the pursuing autonomous agents. Furthermore, by augmenting the predictive controller with learning mechanisms, the approximate controllers will need no explicit knowledge of $u_e^\star(x)$, the evader's policy.
The following theorem presents bounds on the optimality loss induced by the use of the look-ahead controller approximation.
\begin{theorem}
Let the pursuit evasion game evolve according to the dynamics given by \eqref{eq:nonlin_dynam}, where the evader is optimal with respect to \eqref{eq:cost} and the pursuers utilize the learning-based predictive tracking strategy given by \eqref{u_dot}. Then, the tracking error of the pursuers and the optimality loss due to the use of the predictive controller are bounded if $\exists \bar{\Delta}\in\Real^+$ such that $\Delta(x(t),\hat{u}(t),\hat{u}_\textrm{e}(t)) \leq \bar{\Delta},~\forall t\geq0$,
where $
\Delta(x,\hat{u},\hat{u}_\textrm{e}) = V_{x_\textrm{e}}v_\textrm{e}(\cos\hat{u}_\textrm{e}-\cos u^\star_\textrm{e})+V_{y_\textrm{e}}v_\textrm{e}(\sin \hat{u}_\textrm{e}-\sin u^\star_\textrm{e}) + V_{\theta_\textrm{p}1}(u_1^\star-\hat{u}_1)+V_{\theta_\textrm{p}2}(u_2^\star-\hat{u}_2),
$
with $V_{\xi}$ denoting the partial derivative of the game value with respect to the state component $\xi(t)$.
\end{theorem}
Proof: Consider the Hamiltonian function when the approximate controller, denoted $\hat{u}(t)$, and the NN-based prediction of the evader's policy, $\hat{u}_\textrm{e}(t)$, are used,
\begin{align}\label{eq:subopt_hamiltonian}
H(x,\hat{u},\hat{u}_\textrm{e}) = L(x) + \big(\frac{\partial V}{\partial x}\big)^\textrm{T} f(x,\hat{u},\hat{u}_\textrm{e}) +\gamma V.
\end{align}
Taking into account the nonlinear dynamics of the system \eqref{eq:nonlin_dynam}, one can rewrite \eqref{eq:subopt_hamiltonian} in terms of the optimal Hamiltonian as $H(x,\hat{u},\hat{u}_\textrm{e}) = H(x,u^\star,u^\star_\textrm{e}) + \Delta(x,\hat{u},\hat{u}_\textrm{e})$,
where $H(x,u^\star,u^\star_\textrm{e})=0$ is the HJI equation that is obtained after substituting \eqref{eq:opt_purs} and \eqref{eq:opt_evade} in \eqref{eq:ham}.
Now, take the orbital derivative of the value function along the trajectories using the approximate controllers as,
$
\dot{V} = \big(\frac{\partial V}{\partial x}\big)^\textrm{T} f(x,\hat{u},\hat{u}_\textrm{e}).
$
Substituting \eqref{eq:subopt_hamiltonian} yields
$
\dot{V} = -L(x) - \gamma V + \Delta(x,\hat{u},\hat{u}_\textrm{e}).
$
Thus, since $L(x)> 0$, $\forall x \in \mathbb{R}^8\setminus \lbrace 0 \rbrace$,
\begin{align*}
\dot{V} < -\gamma V + \Delta(x,\hat{u},\hat{u}_\textrm{e}) \Rightarrow\dot{V} < -\gamma V + \bar{\Delta}.
\end{align*}
Hence, for $V\geq\bar{\Delta}/\gamma$, we have $\dot{V}\leq0$. Thus $\lbrace x\in \mathbb{R}^8~|~ V(x)\leq \bar{\Delta}/\gamma \rbrace$ is a forward invariant set, which implies that the tracking error and the optimality loss over any finite horizon are bounded.
\frQED
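The invariant-set argument also yields an explicit transient bound; integrating the differential inequality $\dot{V} \leq -\gamma V + \bar{\Delta}$ via the comparison lemma gives

```latex
\begin{equation*}
V(x(t)) \;\leq\; e^{-\gamma t}\,V(x(0)) + \frac{\bar{\Delta}}{\gamma}\left(1-e^{-\gamma t}\right), \quad \forall t\geq 0,
\end{equation*}
```

so the value along the closed-loop trajectories decays exponentially toward the residual set $\lbrace x\in\mathbb{R}^8~|~V(x)\leq \bar{\Delta}/\gamma\rbrace$.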
\begin{remark}
Note that we do not use optimal control or MPC to solve the pursuit evasion problem. Instead, the controller is governed by \eqref{u_dot}, which is simple to implement and has low computational complexity.
\frqed
\end{remark}
\subsection{Deep Learning-Based Pursuit Evasion}
A deep NN, consisting of $L > 2$ hidden layers, describes a nonlinear mapping between its input space $\mathbb{R}^n$ and output space $\mathbb{R}^p$.
Each layer receives the output of the previous layer as an input and, subsequently, feeds its own output to the next layer. Each layer's output consists of the weighted sum of its input alongside a bias term, filtered through an application-specific activation function \cite{haykin2009neural}.
In particular, let $\mathbb{R}^{n_l}$ be the input space of a given layer, and $\mathbb{R}^{p_l}$ the corresponding output space. Then the layer's output is,
\begin{equation*}
Y_i(x) = \sigma\bigg(\sum_{j=1}^{n_l} v_{ij}X_j + v_{i0}\bigg)\text{, } i = 1,2,\dots,p_l,
\end{equation*}
where $X^\prime = \begin{bmatrix}X_1 & \dots & X_{n_l} \end{bmatrix}^\textrm{T} \in \mathbb{R}^{n_l}$ is the input vector, gathered from training data or from the output of previous layers, $v_{ij}\in \mathbb{R}$ are the layer's weights, $v_{i0} \in \mathbb{R}$ is the bias term, and $\sigma: \mathbb{R} \rightarrow \mathbb{R}$ is the layer's activation function.
We note that it is typical to write the output of a layer compactly, with a slight abuse of notation, as,
\begin{equation}\label{eq:NN}
Y = \sigma(W^\textrm{T} \sigma^\prime(X)),
\end{equation}
where $Y = \begin{bmatrix} Y_1 &\dots & Y_{p_l} \end{bmatrix} \in \mathbb{R}^{p_l}$, $W = \begin{bmatrix} v_{ij} \end{bmatrix}\in \mathbb{R}^{(n_l+1)\times p_l}$ and $\sigma^\prime:\mathbb{R}^{n_l^\prime} \rightarrow \mathbb{R}^{n_l}$ is the activation function of the previous layer, taking as input the vector $X=\begin{bmatrix}{X^\prime}^{\textrm{T}}& 1 \end{bmatrix}^{\textrm{T}}$ .
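A minimal sketch of the layer computation \eqref{eq:NN}, assuming a NumPy implementation and a $\tanh$ activation (both illustrative choices, not prescribed by the text):

```python
import numpy as np

def layer_forward(W, x_prev, sigma=np.tanh):
    """One NN layer: Y = sigma(W^T [X'; 1]), as in the compact form above.

    W      : (n_l + 1, p_l) weight matrix; the last row holds the biases v_{i0}
    x_prev : (n_l,) input vector X', from data or from the previous layer
    """
    X = np.append(x_prev, 1.0)      # augment the input with the bias entry
    return sigma(W.T @ X)           # weighted sum per neuron, then activation

# tiny example: 2 inputs, 3 neurons (weights are arbitrary)
W = np.array([[0.5, -0.2, 0.1],
              [0.3,  0.8, -0.4],
              [0.0,  0.1,  0.2]])  # last row = biases
y = layer_forward(W, np.array([1.0, 2.0]))
```

Stacking such calls, feeding each layer's output into the next, realizes the deep network described above.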
It is known \cite{lewis1998neural} that two-layer NNs possess the universal approximation property, according to which any smooth function can be approximated arbitrarily closely by an NN of two or more layers.
Let $\mathbb{S}\subset \mathbb{R}^n$ be a simply connected compact set and consider the nonlinear function $\kappa : \mathbb{S} \rightarrow \mathbb{R}^p$. Given any $\epsilon_b > 0$, there exists an NN structure such that,
\begin{equation*}
\kappa(x) = \sigma\big(W^\textrm{T} \sigma^\prime(x)\big) + \epsilon\text{, } \forall x \in \mathbb{S},
\end{equation*}
where $\|\epsilon\| \leq \epsilon_b$. We note that, typically, the activation function of the output layer $\sigma(\cdot)$ is taken to be linear.
Determining the weight matrix $W$ of a network is the central task of machine learning. In this work, we employ the gradient-descent-based backpropagation algorithm.
Given a collection of $N_d$ training data points, stored as tuples $\lbrace x_k,\kappa_k \rbrace_{k}$, where $x_k \in \mathbb{R}^n$, $\kappa_k \in \mathbb{R}^p$, $\forall k =1,\dots,N_d$, we denote the output errors as
$
r_k = \kappa(x_k) - \kappa_k.
$
Then, the update equation for the weights at each optimization iteration $t_k$ is given by,
\begin{align}\label{eq:NN_tune}
w_{ij}(t_k+1) = w_{ij}(t_k) - \eta \frac{\partial (r_k^\textrm{T}r_k)}{\partial w_{ij}}, ~\forall t_k \in \mathbb{N},
\end{align}
where $\eta\in\Real^+$ denotes the learning rate. We note that the update index $t_k$ need not correspond to the sample index $k$, since different update schedules leverage the gathered data in different ways \cite{lewis1998neural}.
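As an illustration of the update rule \eqref{eq:NN_tune}, the sketch below applies it to a single linear output layer, where the squared-residual gradient has the closed form $\partial(r^\textrm{T}r)/\partial W = 2xr^\textrm{T}$; the network size, data, and learning rate are hypothetical choices, not from the text.

```python
import numpy as np

def sgd_step(W, x, target, eta=0.05):
    """One gradient-descent update of the weights, eq. (NN_tune), for a
    linear output layer y = W^T x with residual r = y - target."""
    r = W.T @ x - target
    return W - eta * 2.0 * np.outer(x, r)   # d(r^T r)/dW = 2 x r^T

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 2))             # arbitrary initial weights
x = np.array([0.5, -1.0, 2.0])
target = np.array([1.0, 0.0])
for _ in range(200):                        # repeated updates shrink the residual
    W = sgd_step(W, x, target)
err = np.linalg.norm(W.T @ x - target)
```

For this single sample the residual contracts by a factor $(1-2\eta\|x\|^2)$ per step, so the iteration converges for a sufficiently small learning rate.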
It can be seen that in order for the proposed method to compute the pursuers' control inputs, an accurate prediction of the future state of the evader is required. However, this presupposes that the pursuers themselves have access to the evader's future decisions; an assumption that is, in most cases, invalid.
Thus, we augment the pursuers' controllers with a NN structure, that learns to predict the actions of the evader, based on past recorded data.
Initially, we assume that the evader's strategy is computed by a feedback algorithm, given her relative position to the pursuers. This way, the unknown function we wish to approximate is $f:\mathbb{R}^{2N} \rightarrow \mathbb{R}^2$, with,
$
u^e = f(\delta z_1^{{p}_1}, \delta z_2^{p_1} ,\dots, \delta z_1^{p_N},\delta z_2^{p_N}), $
where, $(\delta z_1^{p_i},\delta z_2^{p_i})$ denote the distance of pursuer $i$ to the evader in the X and Y axes, respectively.
In order to train the network, we let the pursuers gather data regarding the fleet's position with respect to the evader, as well as her behavior over a predefined time window $T_l > 0$.
\begin{remark}
Increasing the time window $T_l$ will allow the pursuers to gather more training data for the predictive network. However, this will not only increase the computational complexity of the learning procedure, but will also make the pursuers less responsive to sudden changes in the evader's behavior. Simulation results corroborate our choice of training parameters. \frqed
\end{remark}
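The finite time window $T_l$ can be realized as a fixed-capacity buffer from which training batches are drawn; a minimal sketch, where the class name and `capacity` (the window length divided by the sampling period) are hypothetical:

```python
from collections import deque

class EvaderDataWindow:
    """Sliding window of (relative positions, evader action) training pairs.

    Illustrative only: old samples are discarded automatically once the
    window is full, mimicking the finite horizon T_l."""
    def __init__(self, capacity):
        self.buf = deque(maxlen=capacity)

    def record(self, rel_positions, evader_action):
        self.buf.append((tuple(rel_positions), evader_action))

    def training_set(self):
        xs = [s for s, _ in self.buf]
        ys = [a for _, a in self.buf]
        return xs, ys

win = EvaderDataWindow(capacity=3)
for k in range(5):                      # five samples, window keeps the last 3
    win.record([k, -k], 0.1 * k)
xs, ys = win.training_set()
```

The trade-off noted in the remark above appears here directly: a larger `capacity` yields more data per batch but retains stale observations longer.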
Subsequently, we denote by $\hat{u}^e(x)$, the current prediction function for the evader's strategy, i.e., $\hat{u}^e(x) = \sigma\big(\hat{W}^\textrm{T}\hat{\sigma}^\prime(\chi)\big)$,
where $\chi = \begin{bmatrix} \delta x_1& \delta y_1 &\dots& \delta x_N & \delta y_N \end{bmatrix}^{\textrm{T}} \in \mathbb{R}^{2N}$, $\hat{W}$ denotes the current weight estimate of the NN's output layer, and $\hat{\sigma}^\prime(\cdot)$ is the current estimate of the hidden layers, parametrized by appropriate hidden weights.
\begin{remark}
While the learning algorithm for the evader's behavior operates throughout the duration of the pursuit, thus making the approximation weights time-varying, we suppress their explicit dependence on time since the process is open-loop, in the sense that the system learns in batches rather than in a continuous fashion. \frqed
\end{remark}
\begin{algorithm}[b]
\caption{Deep Learning-Based and Predictive Pursuit Evasion}
\textbf{Inputs:} ${X_{P_i}(t)}$, $\forall i\in\lbrace 1,\dots,N\rbrace$, $X_E(t)$ and evasion strategy approximation weights $W$.\\
\textbf{Output:} ${u_{P_i}(t)}$, $\forall i\in\lbrace 1,\dots,N\rbrace$.
\begin{algorithmic}[1]
\State Compute $(\delta x_i,\delta y_i)$, $i\in\lbrace1,\dots,N\rbrace$.
\State Predict evader's future behavior via \eqref{eq:NN}.
\State Train NN as in \eqref{eq:NN_tune}.
\State Predict evader's future state as $\tilde{X}_E(t+T)=X_E(t)+[V_E\cos{\theta_E} ~~V_E\sin{\theta_E}]^{\textrm{T}}T$.
\State Propagate pursuer dynamics to get $\tilde{X}_P(t+T)$.
\State Compute current Newton flow parameters using \eqref{g_multi_pursuer}.
\State Compute control dynamics $\dot{u}_{P_i}(t)$ from \eqref{eq:controller}.
\State Propagate actual system evolution using \eqref{eq:nonlin_dynam}.
\State Append current distances $(\delta x_i,\delta y_i)$ to a stack of previous observations.
\State Update evader prediction network through \eqref{eq:NN_tune}.
\end{algorithmic}
\end{algorithm}
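Step 4 of the algorithm, the constant-heading propagation of the evader, amounts to a one-line prediction; a minimal sketch (the numerical values are illustrative):

```python
import math

def predict_evader(x, y, theta, v, T):
    """Constant-heading prediction, as in step 4 of the algorithm:
    propagate the evader for T seconds along its current velocity vector."""
    return x + v * math.cos(theta) * T, y + v * math.sin(theta) * T

# evader at (1, 2), heading due "north" at 2 m/s, predicted 0.5 s ahead
xe, ye = predict_evader(1.0, 2.0, math.pi / 2, 2.0, 0.5)
```

This frozen-heading assumption is what the learned prediction $\hat{u}^e$ refines: the NN supplies the heading used in the propagation instead of the last observed one.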
\section{Simulation Results}
This section presents simulation results for the problems briefly described in the previous sections. First, the agnostic evader case is considered, followed by the adversarial case. For the second case, single and multiple pursuer systems are considered separately. The controller is implemented on a Dubins vehicle. For the purpose of tracking, we define the system output to be $y^i=\begin{bmatrix}z_1^i&z_2^i\end{bmatrix}^\textrm{T}$, $i \in \lbrace p_1,p_2,e\rbrace$.
\vspace{-1mm}
\subsection{Single Pursuer - Agnostic Target}
In this subsection, the controller is tested on a Dubins vehicle with the task of pursuing an agnostic target moving along a known trajectory. Since the vehicle has a constant speed and an input saturation is enforced, it has an inherent minimum turning radius. For this simulation, we set $V^p=2$~m/s and the input saturation is first set to $\frac{\pi}{2}$~rad/s and then to $2{\pi}$~rad/s. The evader moves along two semicircular curves with a constant speed which is less than $V^p$.
As a consequence, when the pursuer catches up to the evader, it overshoots and has to go around a full circle to again start tracking. Naturally, a lower turning radius translates to better tracking, as the vehicle can make ``tighter'' turns. This can be seen when comparing the trajectories of the vehicle in Figure~\ref{trajectory_R} with Figure~\ref{trajectory_r}. For the same trajectory of the evader, the tracking performance is far better in the second case. Once the pursuer catches up to the target, the maximum tracking error in the first case is approximately $4$ meters and only $1$ meter in the second case, shown in Figures~\ref{error_R} and \ref{error_r}. This is consistent with the fact that the ratio of the turning radii is $4:1$.
\begin{figure}[!ht]
\vspace{6pt}
\begin{center}
\includegraphics[width=0.9\linewidth]{trajectory_large_r.pdf}
\vspace{-8pt}
\caption{ Agnostic evader with a large turning radius.}\label{trajectory_R}
\end{center}
\begin{center}
\includegraphics[width=0.9\linewidth]{distance_large_r.pdf}
\vspace{-8pt}
\captionof{figure}{{Evolution of an agnostic evader tracking error with a large turning radius.}}\label{error_R}
\end{center}
\begin{center}
\includegraphics[width=0.9\linewidth]{trajectory_small_r.pdf}
\vspace{-8pt}
\captionof{figure}{{Agnostic evader with a small turning radius.}}\label{trajectory_r}
\end{center}
\begin{center}
\includegraphics[width=0.9\linewidth]{distance_small_r.pdf}
\vspace{-8pt}
\captionof{figure}{{ Evolution of the agnostic evader tracking error with a small turning radius.}}\label{error_r}
\end{center}
\end{figure}
\subsection{Single Pursuer - Adversarial Evader}
The pursuer is again modelled as a Dubins vehicle, while the evader is modelled as a single integrator with a maximum velocity less than the speed of the pursuer. Hence, while the pursuer is faster, the evader is more agile, and can instantly change its direction of motion. In this and subsequent cases, the evader is considered adversarial in nature and uses game theory to choose its evasion direction.
Let $y^p(t)$ and $y^e(t)$ be the position vector of the pursuer and evader respectively at time $t$. First, the pursuer makes an estimate of the optimal evasion direction based on the relative position of the evader and itself at time $t$ using \eqref{evasion}. Assuming this direction of evasion to be fixed over the prediction window from $t$ to $t+T$ gives the predicted position of the evader at all time instances in this interval, denoted as $\tilde{y}^e(\tau), \tau \in [t, t+T]$. Next, the pursuer estimates its own predicted position if its input is kept constant, called $\tilde{y}^p(\tau),\tau \in [t, t+T]$. Finally, $g(t)$ is set as $||\tilde{y}^e(t+T)-\tilde{y}^p(t+T)||^2$ and the value of $\frac{\partial g}{\partial u}(x(t),u(t))$ ($x(t)$ being the ensemble vector of the states of the pursuer and the evader) is used to compute the input differential equation \eqref{u_dot}.
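The prediction-based construction of $g$ described above can be sketched as follows; the Dubins/integrator dynamics, the forward-Euler discretization, and all parameter values are illustrative assumptions, not the paper's implementation.

```python
import math

def g_terminal(px, py, ptheta, vp, u, ex, ey, ue, ve, T, dt=0.01):
    """Propagate the pursuer (Dubins vehicle, constant input u) and the
    evader (integrator, frozen heading ue) over [t, t+T], then return the
    terminal squared distance g = ||y_e(t+T) - y_p(t+T)||^2."""
    for _ in range(round(T / dt)):
        px += vp * math.cos(ptheta) * dt    # pursuer position update
        py += vp * math.sin(ptheta) * dt
        ptheta += u * dt                    # heading driven by the input
        ex += ve * math.cos(ue) * dt        # evader drifts along ue
        ey += ve * math.sin(ue) * dt
    return (ex - px) ** 2 + (ey - py) ** 2

# heading straight at a stationary evader beats turning away
g_straight = g_terminal(0, 0, 0.0, 2.0, 0.0, 10, 0, 0.0, 0.0, 1.0)
g_turning  = g_terminal(0, 0, 0.0, 2.0, 1.5, 10, 0, 0.0, 0.0, 1.0)
```

The derivative $\frac{\partial g}{\partial u}$ used in \eqref{u_dot} can then be obtained, e.g., by finite differences over such rollouts.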
Figure~\ref{trajectory_pursuer} shows the trajectories of the pursuer and the evader, with the goal for the evader set to the point $(150,60)$. It can be observed that the evader moves towards the goal while the pursuer is far away and starts evasive maneuvers when the pursuer gets close, by entering its non-holonomic region. Figure~\ref{error_pursuer} displays the tracking error, defined as the distance between the pursuer and the evader, which is almost periodic. This is because the evader's maneuvers force the pursuer to circle back. The peak tracking error after the pursuer catches up is slightly more than twice the turning radius, as expected.
\begin{figure}[!ht]
\vspace{6pt}
\begin{center}
\includegraphics[width=0.9\linewidth]{trajectory_1_pursuer.pdf}
\vspace{-8pt}
\caption{ Trajectories for a single pursuer-evader system.}\label{trajectory_pursuer}
\end{center}
\begin{center}
\includegraphics[width=0.9\linewidth]{distance_1_pursuer.pdf}
\vspace{-8pt}
\captionof{figure}{Evolution of the tracking error for a single pursuer-evader system.}\label{error_pursuer}
\end{center}
\end{figure}
\vspace{-1mm}
\subsection{Multiple Pursuers - Adversarial Evader}
While the previous section had only one pursuer, this simulation considers the case of two pursuers and a single evader. Having multiple pursuers means there must be cooperation between them in order to optimally utilize resources. A pursuer can no longer make decisions based solely on the position of the evader relative to itself; the positions of the other pursuers must also be factored in. We therefore redefine the expression for $g(x,u)$ to include these terms, as shown below for the case of two pursuers. Let $d_{\textrm{p}}$ be the distance between the two pursuers, and let
{\small\begin{align} \label{g_multi_pursuer}
g(x(t),u(t)) := \int_{t}^{t+T} &\bigg\{\beta _1 (d^2_1(\tau)+d^2_2(\tau)) + \beta _2 \frac{d^2_1(\tau)d^2_2(\tau)}{d^2_1(\tau)+d^2_2(\tau)} \nonumber \\ & \quad + \beta _3 e^{-\gamma d_\textrm{p}(\tau)}\bigg\}\textrm{d}\tau,\ \forall t\geq0.
\end{align}}
The first term ensures that the pursuers remain close to the evader, while the second term encourages cooperation between agents. The last term is added to repel pursuers apart if they come close to each other, as having multiple pursuers in close vicinity of each other is sub-optimal.
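A minimal sketch of the integrand of \eqref{g_multi_pursuer}; the weights $\beta_i$, $\gamma$, and the sample distances are illustrative values, not tuned parameters from the paper.

```python
import math

def stage_cost(d1, d2, dp, beta1=1.0, beta2=1.0, beta3=1.0, gamma=1.0):
    """Integrand of the cooperative objective: a proximity term, a
    harmonic-mean-style cooperation term, and an exponential penalty
    that repels the pursuers from each other."""
    proximity   = beta1 * (d1**2 + d2**2)
    cooperation = beta2 * (d1**2 * d2**2) / (d1**2 + d2**2)
    repulsion   = beta3 * math.exp(-gamma * dp)
    return proximity + cooperation + repulsion

c_far_apart = stage_cost(d1=3.0, d2=4.0, dp=5.0)   # pursuers well separated
c_colliding = stage_cost(d1=3.0, d2=4.0, dp=0.1)   # pursuers nearly on top of each other
```

Note that the cooperation term is dominated by the smaller of $d_1,d_2$, so it rewards at least one pursuer being close, while the last term grows as $d_\textrm{p}\to0$, discouraging clustering.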
Figure~\ref{trajectory_2pursuers} shows the trajectories of the pursuers and the evader when the goal for the evader is set to the point $(15,-1)$. In this case, the pursuers close in on the evader and trap it away from its goal due to their cooperative behavior. The evader is forced to continuously perform evasive maneuvers as the other pursuer closes in when the first has to make a turn. This can be seen more clearly in the tracking error plot given in Figure~\ref{error_2pursuer}. After catching up with the evader, it can be seen that when one pursuer is at its maximum distance, the other is at its minimum.
The results show good coordination between the pursuers and low tracking error, and are qualitatively comparable to \cite{quintero2016robust}.
Lastly, we present the results under the learning-based prediction. In Figure~\ref{fig_nn_dist}, we present a comparative result of the tracking error of the model-based algorithm vis-\`a-vis the NN-based control. Figure~\ref{fig_nn_cost} showcases the quality of the performance of the proposed algorithm based on the game theoretic cost metric. From these figures, it can be seen that the NN structure offers fast predictive capabilities to the controller; hence, the overall performance is comparable to that of the model-based control.
\begin{figure}[!ht]
\vspace{6pt}
\begin{center}
\includegraphics[width=0.9\linewidth]{trajectory_2_pursuers.pdf}
\vspace{-8pt}
\caption{ Trajectories for the two pursuer-single evader system.}\label{trajectory_2pursuers}
\end{center}
\begin{center}
\includegraphics[width=0.9\linewidth]{distance_2_pursuers.pdf}
\vspace{-8pt}
\caption{Evolution of the tracking error for the two pursuer-single evader system.}\label{error_2pursuer}
\end{center}
\end{figure}
\begin{figure}[!ht]
\vspace{6pt}
\begin{center}
\includegraphics[width=0.9\linewidth]{trajectory_NN.pdf}
\vspace{-8pt}
\caption{Trajectories for two pursuers-single evader system with learning.}\label{fig_nn_trajectory}
\end{center}
\begin{center}
\includegraphics[width=0.9\linewidth]{distance_difference_1.pdf}
\vspace{-8pt}
\caption{ Evolution of the tracking error for the systems with and without learning.}\label{fig_nn_dist}
\end{center}
\begin{center}
\includegraphics[width=0.9\linewidth]{cost_difference_2.pdf}
\vspace{-8pt}
\captionof{figure}{{Total cost for the system with and without learning. }}\label{fig_nn_cost}
\end{center}
\end{figure}
\section{Conclusion and Future Work}
This work extends the framework of prediction-based nonlinear tracking in the context of pursuit evasion games.
We present results for vehicle pursuit of agnostic targets, modeled as moving along known trajectories, as well as adversarial target tracking, where the evader evolves according to game-theoretic principles. Furthermore, to obviate the need for explicit knowledge of the evader's strategy, we employ learning algorithms alongside the predictive controller.
The overall algorithm is shown to produce comparable results to those in the literature, while it precludes the need for solving an optimal control problem.
Future work will focus on developing robustness guarantees that will allow for more realistic scenarios, where noise and external disturbances are taken into consideration.
\balance
\bibliographystyle{IEEEtran}
\section{Yukawa Phases in the Type-III 2HDM}
\paragraph{}
In a type-III 2HDM, complex phases in the Yukawa couplings cannot be rotated
away completely by absorbing the phases in the right-handed quark fields. To see
why this is true, consider the mass term for the top quark coming from
Eq.~\eqref{yukawas} after electroweak symmetry breaking
\begin{equation}
\mathcal L_{\text{mass}}^{\text{top Yuk.}} = \frac{v}{\sqrt{2}}\left(h_t s_\beta + h'_t c_\beta\right)t t_c + \text{h.c.}
\end{equation}
In order to make the top-quark mass real, the phase of the combination $\left(h_t
s_\beta + h'_t c_\beta\right)$ must be absorbed into the charge-conjugate quark
field. However, this phase will reappear in the Yukawa couplings and will
not be cancelled by the phase of $h_t$ or $h'_t$. This means that, for example,
the top Yukawa term proportional to $h_t$ after the rotation now takes the form
\begin{equation}
\mathcal L^{\text{top Yuk.}} = e^{-i\phi}h_t \epsilon_{ij} \Phi_2^i t_c Q^j + \text{h.c.},
\end{equation}
where $\phi\equiv\arg(h_t s_\beta + h'_t c_\beta)$. These phases must be taken
into account when calculating the self-energy and tadpole corrections to the
Higgs mass matrix. However, we find that the effect is numerically negligible, as
the phase $\phi$ is small.
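The absorption of the phase can be made explicit. Writing $h_t s_\beta + h'_t c_\beta = |h_t s_\beta + h'_t c_\beta|\,e^{i\phi}$, the redefinition $t_c \to e^{-i\phi} t_c$ renders the mass term real,

```latex
\begin{equation*}
\frac{v}{\sqrt{2}}\left(h_t s_\beta + h'_t c_\beta\right) t\, t_c
\;\to\;
\frac{v}{\sqrt{2}}\left|h_t s_\beta + h'_t c_\beta\right| t\, t_c,
\end{equation*}
```

while each individual Yukawa coupling picks up the residual factor $e^{-i\phi}$, as in the term proportional to $h_t$ displayed above.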
\section{Tadpole Contributions}\label{tadpole}
In the following, we list the combinations of the tadpole contributions of the Higgs boson fields in the gauge basis $T_{\phi_1}$, $T_{\phi_2}$ and $T_{a_1}$ as they appear in the renormalized tadpole matrix $\hat{T}$ in Eqs.~\eqref{eq:det} and \eqref{mass_mat} and in the corresponding unrenormalized tadpole matrix $T$,
\begin{align}
T_{11} &= \frac{c_{\beta} \Big[\left(c_{\beta}^2+2 s_{\beta}^2\right) T_{\phi_1}-c_{\beta} s_{\beta} T_{\phi_2}\Big]}{v},\\
T_{12} &= \frac{c_{\beta}^3 T_{\phi_2}+s_{\beta}^3 T_{\phi_1}}{v} = T_{21},\\
T_{13} &= 0 = T_{31},\\
T_{14} &= -\frac{T_{a_1}}{s_{\beta} v} = T_{41},\\
T_{22} &= \frac{s_{\beta} \Big[\left(2 c_{\beta}^2 +s_{\beta}^2 \right)T_{\phi_2}-c_{\beta} s_{\beta} T_{\phi_1}\Big]}{v},\\
T_{23} &= \frac{T_{a_1}}{s_{\beta} v} =T_{32},\\
T_{24} &= 0 = T_{42} ,\\
T_{33} &= T_{11},\\
T_{34} &= T_{12} = T_{43},\\
T_{44} &= T_{22}.
\end{align}
\section{Conversion of the Vacuum Expectation Value}\label{se:vevconversion}
In Sect.~\ref{se:results}, the conversion of the vacuum expectation value is
explained briefly. In this appendix, for completeness, we list the explicit
conversion formulas.
The finite part of the counterterm to the on-shell vev
$\delta{v}^2_{\text{OS}-\mathrm{finite}}$ is given as
\begin{align}\label{eq:vevOSCT}
\delta{v}^2_{\text{OS}-\mathrm{finite}} = v^2_{\text{OS}}\left[\left(1-\frac{c_W^2}{s_W^2}\right)\frac{\delta M_W^2}{M_W^2} + \frac{c_W^2}{s_W^2}\frac{\delta M_Z^2}{M_Z^2} -\frac{\delta e^2}{e^2}\right]_{\text{finite}},
\end{align}
where $c_W^2 = 1 - s_W^2$. The W- and Z-boson mass counterterms are chosen on-shell
via the one-loop pole-mass definition, while the counterterm for the electric
charge is fixed via the electron-positron-photon vertex in the Thomson limit;
they thus read
\begin{align}
\delta M_V^2 = \text{Re} \Sigma_{VV}^\text{T}(M_V^2) \quad \text{with} \quad V = W,Z,\\
\delta e^2 = 2 e^2 \left[ \frac{1}{2} \left.\frac{\partial \text{Re}\Sigma_{\gamma\gamma} (k^2)}{\partial k^2}\right|_{k^2 = 0} - \frac{s_W}{c_W} \frac{\Sigma^{\text{T}}_{\gamma Z}(0)}{M_Z^2}\right],
\end{align}
where $\Sigma_{VV}^\text{T}$ is the transverse part of the W- or Z-boson
self-energy at one-loop order, respectively. The one-loop photon self-energy is
denoted by $\Sigma_{\gamma\gamma}$ and the transverse part of the one-loop
photon-Z-boson mixing by $\Sigma^{\text{T}}_{\gamma Z}$.
For $\Delta r$~\cite{Sirlin:1980prd}, we employed the one-loop result
\begin{align}\nonumber
\Delta r &= \frac{\delta e^2}{e^2} - \left(1 - \frac{c_W^2}{s_W^2}\right) \frac{\delta M_W^2}{M_W^2} - \frac{c_W^2}{s_W^2}\frac{\delta M_Z^2}{M_Z^2} + \frac{2 \Sigma^{\text{T}}_{\gamma Z}(0)}{c_W s_W M_Z^2} + \frac{\Sigma^{\text{T}}_{WW}(0)}{M_W^2} \\&\quad + \frac{e^2}{32 \pi^2 s_W^4}\left[12 s_W^2 +( 7 - 4 s_W^2) \ln\left(\frac{M_W^2}{M_Z^2}\right) \right]
\end{align}
so that the complete one-loop conversion can be written as
\begin{align}
v_{\overline{\text{MS}}}^2 = v_{G_F}^2\left\{1 + \frac{2 \Sigma^{\text{T}}_{\gamma Z}(0)}{c_W s_W M_Z^2} + \frac{\Sigma^{\text{T}}_{WW}(0)}{M_W^2} + \frac{e^2}{32 \pi^2 s_W^4}\left[12 s_W^2 +( 7 - 4 s_W^2) \ln\left(\frac{M_W^2}{M_Z^2}\right) \right]\right\}.
\end{align}
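To get a feeling for the size of the conversion, the closed-form logarithmic piece of $\Delta r$ above can be evaluated numerically. The sketch below uses assumed reference inputs (PDG-like boson masses, on-shell $s_W^2 = 1 - M_W^2/M_Z^2$, Thomson-limit $\alpha \approx 1/137.036$) that are not taken from this paper.

```python
import math

# Illustrative evaluation of e^2/(32 pi^2 s_W^4) * [12 s_W^2 + (7 - 4 s_W^2) ln(M_W^2/M_Z^2)].
# All numerical inputs below are assumed reference values, not from the text.
M_W, M_Z = 80.379, 91.1876            # GeV
sw2 = 1.0 - M_W**2 / M_Z**2           # on-shell weak mixing angle
e2 = 4.0 * math.pi / 137.036          # e^2 = 4*pi*alpha, Thomson limit
log_term = e2 / (32.0 * math.pi**2 * sw2**2) * (
    12.0 * sw2 + (7.0 - 4.0 * sw2) * math.log(M_W**2 / M_Z**2)
)
```

With these inputs the term comes out at the half-percent level, i.e., a small but non-negligible shift between the two vev definitions.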
\section{The Effective Low-energy Theory}
\label{se:efftheory}
\paragraph{}
The resulting low-energy theory is a Two-Higgs-Doublet Model (2HDM) with the following Higgs
potential $V$
\begin{align}\label{THDM_potential}
V &= m_{11}^2\Phi_1^{\dagger}\Phi_1 + m_{22}^2\Phi_2^{\dagger}\Phi_2 - [m_{12}^2\Phi_1^{\dagger}\Phi_2 +
\textrm{h.c.}]\\\nonumber & + \frac{1}{2}\lambda_1(\Phi_1^{\dagger}\Phi_1)^2 +
\frac{1}{2}\lambda_2(\Phi_2^{\dagger}\Phi_2)^2 + \lambda_3(\Phi_1^{\dagger}\Phi_1)(\Phi_2^{\dagger}\Phi_2) +
\lambda_4(\Phi_1^{\dagger}\Phi_2)(\Phi_2^{\dagger}\Phi_1)\\\nonumber & +
\big\{\frac{1}{2}\lambda_5(\Phi_1^{\dagger}\Phi_2)^2 + [\lambda_6(\Phi_1^{\dagger}\Phi_1) +
\lambda_7(\Phi_2^{\dagger}\Phi_2)]\Phi_1^{\dagger}\Phi_2 + \textrm{h.c.}\big\}.
\end{align}
Here, the mass parameters $m_{11}^2$ and $m_{22}^2$ are real, $m_{12}^2$ is complex, the quartic couplings
$\lambda_{1...4}$ are real, and $\lambda_5$, $\lambda_6$, and $\lambda_7$ are in general complex. The two Higgs doublets
$\Phi_1$ and $\Phi_2$, both having hypercharge $Y=1$, can be decomposed into
\begin{align}
\Phi_1 = \begin{pmatrix} \phi^+_1 \\ \frac{1}{\sqrt{2}} (v_1 + \phi_1 + \text{i} a_1) \end{pmatrix}, \quad \Phi_2 =
\begin{pmatrix} \phi^+_2 \\ \frac{1}{\sqrt{2}} (v_2 + \phi_2 + \text{i} a_2) \end{pmatrix}
\end{align}
where $v_1$ and $v_2$ are the vacuum expectation values, $\phi^+_1$, $\phi^+_2$ two complex Higgs fields, and $\phi_1$,
$\phi_2$, $a_1$, $a_2$ the neutral Higgs fields.
In the matching conditions, we take into account loop-induced couplings of the ``wrong'' Higgs doublet to the
corresponding quarks, which renders the 2HDM a type III instead of the tree-level type II version, where one Higgs
doublet couples only to the up-type quarks and the other Higgs doublet couples to the down-type quarks and the charged
leptons. The Yukawa Lagrangian for the third generation is accordingly
\begin{align}\label{yukawas}
\mathcal L_{\text{Yukawa}} = h_t' \epsilon_{ij} \Phi_1^i t_c Q^j + h_t \epsilon_{ij} \Phi_2^i t_c Q^j - h_b
\delta_{ij}\Phi_1^{*i} b_c Q^j - h_b'\delta_{ij}\Phi_2^{*i} b_c Q^j + \text{h.c.}\,.
\end{align}
Here, we follow the SUSY conventions and write all fields as left-handed fields. $Q$ is the quark doublet, $\epsilon_{12}
= 1$, and $t_c$ and $b_c$ are the left-handed top- and bottom-quark charge-conjugate fields, respectively. The Yukawa
couplings $h_t$ and $h_b$ are the top- and bottom-Yukawa couplings also present in the type II 2HDM case, while $h_t'$
and $h_b'$ denote the couplings to the ``wrong'' Higgs doublet, which only exist in the type III case. We neglect
contributions from the Yukawa couplings of the first two generations as well as the $h_{\tau}$ and $h'_{\tau}$ Yukawa
couplings.
The tree-level mass matrices, parameterized in terms of charged Higgs boson mass $M_{H^\pm}$, have the entries
\begin{align}\nonumber
\mathcal{M}^{2}_{11} &= v^2 \left(c_{\beta}^2 \lambda_1
+\frac{1}{2} s_{\beta}^2 \left[\lambda_4+\operatorname{Re}(\lambda_5)\right] + 2 c_{\beta}s_{\beta} \operatorname{Re}(\lambda_6)\right)+ s_{\beta}^2 M_{H^\pm}^2, \\\nonumber
\mathcal{M}^{2}_{12} &= v^2 \left(c_{\beta} s_{\beta}
\lambda_3 + \frac{1}{2} c_{\beta} s_{\beta}
\left[\lambda_4+\operatorname{Re}(\lambda_5)\right] +c_{\beta}^2 \operatorname{Re}(\lambda_6) + s_{\beta}^2 \operatorname{Re}(\lambda_7)\right)
-c_{\beta} s_{\beta}M_{H^\pm}^2,\\\nonumber
\mathcal{M}^{2}_{22} &= v^2 \left(s_{\beta}^2 \lambda_2 + \frac{1}{2} c_{\beta}^2 \left[\lambda_4+\operatorname{Re}(\lambda_5)\right]+2 c_{\beta}s_{\beta}\operatorname{Re}(\lambda_7) \right)
+c_{\beta}^2 M_{H^\pm}^2, \\\nonumber
\mathcal{M}^{2}_{33} &= \frac{1}{2} s_{\beta}^2 \{v^2 \left[\lambda_4-\operatorname{Re}(\lambda_5)\right]+2
M_{H^\pm}^2\},
\\\nonumber
\mathcal{M}^{2}_{34} &= -\frac{1}{2} c_{\beta} s_{\beta} \{v^2 \left[\lambda_4-\operatorname{Re}(\lambda_5)\right]+2
M_{H^\pm}^2\},\\\nonumber
\mathcal{M}^{2}_{44} &= \frac{1}{2} c_{\beta}^2 \{v^2 \left[\lambda_4-\operatorname{Re}(\lambda_5)\right]+2
M_{H^\pm}^2\},\\\nonumber
\mathcal{M}^{2}_{13} &= \frac{1}{2} s_{\beta} v^2 \left[s_{\beta} \operatorname{Im}(\lambda_5) + 2 c_{\beta} \operatorname{Im}(\lambda_6)\right] = - \frac{1}{\tan \beta}\mathcal{M}^{2}_{14}, \\
\mathcal{M}^{2}_{23} &= \frac{1}{2} s_{\beta} v^2 \left[c_{\beta} \operatorname{Im}(\lambda_5)+2 s_{\beta} \operatorname{Im}(\lambda_7)\right] =- \frac{1}{\tan \beta}\mathcal{M}^{2}_{24}, \label{eq:2HDMmassmatrixneutral}\\\nonumber
\mathcal{M}^{2}_{+_{11}} &= s_{\beta}^2 M_{H^\pm}^2, \\\nonumber
\mathcal{M}^{2}_{+_{12}} &= -c_{\beta} s_{\beta}M_{H^\pm}^2 = \mathcal{M}^{2}_{+_{21}},\\%\nonumber
\mathcal{M}^{2}_{+_{22}} &= c_{\beta}^2 M_{H^\pm}^2 \label{eq:2HDMmassmatrixcharged}
\end{align}
\begin{equation}\nonumber
\text{with \quad} v^2\equiv v_1^2+v_2^2,
\end{equation}
where $\mathcal{M}^2$ is the neutral Higgs mass matrix in the $(\phi_1, \phi_2,
a_1, a_2)$ basis and $\mathcal{M}^{2}_{+}$ is the charged Higgs mass matrix in
the $(\phi_1^+, \phi_2^+)$ basis.
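The charged-sector entries listed above assemble into a rank-one matrix, which can be checked directly; the sketch below diagonalizes it for illustrative parameter values ($M_{H^\pm}=500$ GeV, $\tan\beta=10$) that are not from the text.

```python
import math
import numpy as np

def charged_mass_matrix(m_hpm, beta):
    """Tree-level charged Higgs mass matrix in the (phi_1^+, phi_2^+) basis,
    built from the entries M^2_{+_{11}}, M^2_{+_{12}}, M^2_{+_{22}} above."""
    sb, cb = math.sin(beta), math.cos(beta)
    return m_hpm**2 * np.array([[sb**2,   -cb * sb],
                                [-cb * sb, cb**2]])

M2 = charged_mass_matrix(m_hpm=500.0, beta=math.atan(10.0))  # tan(beta) = 10
evals = np.linalg.eigvalsh(M2)        # eigenvalues in ascending order
```

The zero eigenvalue corresponds to the charged Goldstone boson, while the nonzero one reproduces $M_{H^\pm}^2$, confirming the rank-one structure.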
\section{Introduction}
\label{se:intro}
\paragraph{}
Supersymmetric extensions of the Standard Model (SM) can help overcome shortcomings of the SM, providing for example dark matter
candidates and additional sources of CP violation. While these models are theoretically attractive, none of the
predicted supersymmetric (SUSY) particles has been found thus far. In order to guide experimentalists in their
continued hunt for these yet-undiscovered states, and to ascertain the continued theoretical viability of these theories,
precise theoretical predictions of experimentally observable quantities are needed.
The discovery of the Higgs boson in 2012 \cite{Aad:2012tfa, Chatrchyan:2012xdj} has provided theorists with a whole new
set of results against which these theories may be tested. The best measured property of the Higgs boson is its mass
$m_h$ with a value of
\begin{align}
m_h = 125.09 \pm 0.24 \text{ GeV} \quad \text{\cite{Aad:2015zhl}}.
\end{align}
This mass is a free parameter in the SM, and must be determined from experiment. In supersymmetric extensions,
however, the mass of the discovered Higgs boson can be predicted using the additional parameters of the theory.
Demanding that the theoretical prediction match the experimentally-measured value for each point in the parameter space
constrains the theory. Since the experimental result for the Higgs mass is very precise, it is necessary to obtain a
theoretical prediction that is of similar accuracy in order to fully exploit the information of the experiment and
to yield stringent constraints. This entails incorporating quantum corrections in the Higgs mass calculations.
The necessity of including quantum corrections of higher order in the Higgs-mass prediction has been recognized for a
long time. In the context of the Minimal Supersymmetric Standard Model (MSSM), radiative corrections have been shown to be important to lift the mass of the lightest Higgs boson above the
tree-level upper limit, which is given by the Z-boson mass \cite{Haber:1990aw, Ellis:1990nz, Okada:1990vk,
Ellis:1991zd}. A lot of work has been performed to improve the theoretical prediction: At fixed order, one-loop
\cite{Chankowski:1991md, Brignole:1992uf, Chankowski:1992er, Dabelstein:1994hb, Pierce:1996zz, Frank:2006yh}, two-loop
\cite{Heinemeyer:1998jw, Heinemeyer:1998kz, Zhang:1998bm, Heinemeyer:1998np, Espinosa:1999zm, Espinosa:2000df, Degrassi:2001yf,
Brignole:2001jy, Brignole:2002bz, Martin:2002iu, Martin:2002wn, Dedes:2002dy, Dedes:2003km, Martin:2004kr, Allanach:2004rh,
Heinemeyer:2004xw, Martin:2005eg, Heinemeyer:2007aq, Borowka:2014wla, Degrassi:2014pfa, Hollik:2014wea, Hollik:2014bua,
Hollik:2015ema, Borowka:2015ura, Goodsell:2016udb, Passehr:2017ufr, Borowka:2018anu} as well as three-loop corrections
\cite{Martin:2007pg, Harlander:2008ju, Kant:2010tf, Harlander:2017kuc, Stockinger:2018oxe, R.:2019ply, R.:2019irs} to the Higgs-boson masses have been
calculated. As it stands, the theoretical uncertainty of the lightest Higgs-boson mass has been estimated to be of the order of 3 GeV
~\cite{Degrassi:2002fi}, an estimate which is still applied in phenomenological studies. While the assessment of the
theoretical uncertainty is still an ongoing discussion (see Refs.~\cite{Vega:2015fna, Bahl:2017aev, Allanach:2018fif}),
the general consensus is that it is challenging to reduce the uncertainty below 1 GeV.
The method by which one includes these quantum corrections is dependent on the mass of the SUSY particles. The
aforementioned fixed-order approach is particularly useful for SUSY particles with masses up to the TeV scale where the
mass of the SUSY partners of the top quark is most important. Different arguments coming from, for example, the
naturalness or grand-unification perspective motivated early searches for SUSY particles on the TeV scale. However,
since there is still no sign of low-energy SUSY at the LHC,
scenarios with heavy SUSY particles attract more interest. In these scenarios, however, the fixed-order calculations provide a
less accurate result due to the emergence of large logarithms of ratios of masses of SUSY particles and the energy scale
at which the calculation is performed. These large logarithms spoil the fixed-order perturbation series. In order to
take these large logarithms into account, another approach has been applied: the effective-field-theory (EFT) approach with
the following assumptions. At a high-energy scale, the full MSSM governs the interaction behaviour. The effects of the
heavy SUSY particles are encoded into the couplings of a viable low-energy theory, such as the SM or the
Two-Higgs-Doublet Model (2HDM), via matching at the matching scale, i.e., the couplings are calculated in both
theories up to fixed order and set equal. The resulting couplings are then evolved down to the low-energy scale at which
the observables are investigated with the help of the corresponding renormalization group equations (RGE). Solving the
renormalization group equations leads to a resummation of the large logarithms. Assuming the SM as low-energy theory,
leading logarithms (LL) have been resummed in Refs.~\cite{Barbieri:1990ja, Okada:1990gg} and next-to-leading logarithms
(NLL) in Refs.~\cite{Kodaira:1993yt, Hempfling:1993qq, Casas:1994us, Haber:1996fp} taking into account top-Yukawa
corrections. Allowing all Higgs bosons to be light, LL corrections have been calculated in Refs.~\cite{Espinosa:1991fc,
Sasaki:1991qu, Chankowski:1992ek, Haber:1993an}. Analytical expressions for the Higgs-boson mass have been obtained
taking LL contributions into account up to two-loop order~\cite{Carena:1995bx}. A hierarchical stop-mass spectrum with
one stop-quark mass much heavier than the other has been considered in Ref.~\cite{Espinosa:2001mm} and the resulting
two-loop LL and NLL corrections of the Higgs-boson mass have been calculated. This approach has also been used to obtain
one- and two-loop LL corrections to the Higgs-boson mass spectrum in a CP-violating scenario \cite{Pilaftsis:1999qt, Carena:2000yi}. In
Ref. \cite{Draper:2013oza}, this approach has been performed taking into account an intermediate step with light
electroweak fermionic SUSY partners and the SM as low-energy theory. The results have been improved further allowing
also for light Higgs bosons in Refs.~\cite{Lee:2015uza, Bagnaschi:2015pwa}\footnote{Matching conditions of the MSSM to the 2HDM are also discussed in Ref.~\cite{Gorbahn:2009pp}.}. The calculations of Refs.~\cite{Draper:2013oza, Lee:2015uza} have been implemented into the tool
MhEFT. A further tool, SUSYHD~\cite{Vega:2015fna}, includes threshold corrections at higher
order. The threshold correction to the quartic Higgs coupling, assuming the SM as low-energy theory, has been calculated
taking into account two-loop QCD~\cite{Bagnaschi:2014rsa}, top Yukawa~\cite{Bagnaschi:2017xid}, both in the gaugeless limit, and most recently the full QCD corrections~\cite{Bagnaschi:2019esc}. Effects
of higher-dimensional operators have been studied in Refs.~\cite{Bagnaschi:2017xid, Wells:2017vla} where
Ref.~\cite{Wells:2017vla} exploits a different method to obtain the one-loop threshold corrections. Recently, the
prediction of the Higgs-boson mass has been improved by resumming logarithms at next-to-next-to-next-to-leading-logarithmic
order ($\text{N}^3$LL)~\cite{Harlander:2018yhj} in the case that the SM is the low-energy theory. Furthermore, a combination
of both approaches, the fixed-order and the RGE approach, has been performed, in particular to improve the intermediate
regime with particles not very heavy but too heavy for the existing fixed-order results~\cite{Hahn:2013ria,
Bahl:2016brp, Bahl:2018jom}. These results are implemented in FeynHiggs~\cite{Frank:2006yh, Heinemeyer:1998np,
Degrassi:2002fi, Hahn:2013ria, Bahl:2016brp, Heinemeyer:1998yj, Bahl:2018qog}. Similarly,
FlexibleSUSY~\cite{Athron:2014yba, Athron:2017fvs} as well as a version of Sarah/SPheno~\cite{Staub:2009bi,
Staub:2010jh, Staub:2012pb, Staub:2013tta, Porod:2003um, Porod:2011nf} have implemented such a
combination~\cite{Athron:2016fuq, Staub:2017jnp}.
The particular theory we wish to explore in this work is the
MSSM with complex parameters in which the SUSY particles are heavy while the Higgs bosons may remain comparatively light. In addition to studying the mass of
the lightest neutral Higgs boson in this scenario, we analyze the size of this boson's CP-odd component, which is
induced by quantum corrections in the presence of complex parameters of the high-energy theory. We also look into the
mixing and masses of the heavy Higgs bosons. For this, we assume a type-III complex 2HDM as the low-energy effective
theory, where both Higgs doublets can couple to up- as well as down-type fermions and use different approaches to
connect to the Standard Model.
The CP-violating MSSM has been studied before. The scan of the phenomenological MSSM in Ref.~\cite{Arbey:2014msa}
did not find a measurable size of the CP-odd component of the lightest Higgs boson, while the study presented in
Ref.~\cite{Li:2015yla} found more promising scenarios that could be measurable at least at future
runs of the LHC. In Ref.~\cite{Carena:2015uoe} a scenario with heavy superpartners has been explored
using complex MSSM parameters to include CP-violating effects, taking into account the finding of Ref.~\cite{Lee:2015uza}.
In this paper, we further improve the results for scenarios of an MSSM with complex parameters and heavy superpartners: We exploit two-loop RGEs
allowing for complex parameters and a one-loop matching of the MSSM to the 2HDM type III, leading to a result in which the NLL are resummed. We start by setting our conventions for the MSSM in Section \ref{se:MSSM}
and continue to describe the considered low-energy theory in Section \ref{se:efftheory}. In Section \ref{se:matching} we
discuss the exploited matching procedure and in Section \ref{se:mass}, we present the details of the determination of
the Higgs-boson masses as well as their mixing. The numerical results are discussed in Section~\ref{se:results}, and,
finally, we conclude in Section \ref{se:conclusion}.
\section{Calculating the Higgs-mass Spectrum and the Mixing}\label{mass_calculation}
\label{se:mass}
\paragraph{}
The Higgs masses are determined completely once all the parameters of the MSSM
at the scale $M_s$ are given. These are the necessary boundary conditions for
solving the RGEs and obtaining the values for the relevant couplings at the
scale where the masses are calculated. However, not all the relevant input
parameters are given at the same scale. The soft parameters of the MSSM are
given as user-defined input at the scale $M_s$ (except for $\tan\beta$, which is
defined at the scale $M_{H^+}$ and hence as a 2HDM parameter), while all SM
couplings relevant to the calculation are fixed at the electroweak scale.
There are different ways to approach this mixed-scale boundary-value issue. The
``bottom up'' approach starts with the low-energy scale values from the SM and
evolves the parameters up to the high-energy scale, taking matching effects into
account on the way up, with initial guesses for parameters
such as the $\lambda_i$. In an iterative procedure, evolving the
parameters up and down, the complete set of parameters at a single energy scale
is found. With the ``top down'' approach, which has been exploited also in
Refs.~\cite{Bahl:2016brp, Bahl:2018jom}, one guesses initial values for the high
scale MSSM parameters and evolves all the couplings down to $M_t$. Here, the
couplings calculated from the EFT procedure are compared to the experimentally
fixed values, and the high scale parameters are adjusted to minimize the
differences using a numerical algorithm. This way, evolving
parameters up to the high scale can be avoided. We adopt the ``top down''
approach, employing an upwards evolution of the parameters just for the initial
guess. The process is sketched in Fig.~\ref{fig:process} and the single steps
are described in the following:
\begin{enumerate}
\item First, initial values for the high scale MSSM couplings have to be found. To obtain these,
\begin{enumerate}
\item we start at the scale $M_t$ (with $M_t$ being the top \textit{pole
mass}) where the SM couplings are fixed, guess a value for the SM quartic
Higgs coupling of $\lambda_{\text{SM}} = 0.25$, and evolve the SM couplings up to
the intermediate scale $M_{H^+}$ using SM RGEs obtained from {\tt
Sarah}~\cite{Staub:2012pb, Staub:2013tta}.
\item At the scale $M_{H^+}$, it is assumed that all the 2HDM quartic couplings $\lambda_1$,\dots, $\lambda_7$, ``wrong''
Yukawa couplings $h_t'$ and $h_b'$, and the phases of the Yukawa couplings $\varphi_{h_t}$, $\varphi_{h_t'}$, $\varphi_{h_b}$, and $\varphi_{h_b'}$ are zero, and the 2HDM Yukawa couplings $h_t$ and $h_b$ are calculated accordingly via the tree-level matching of the Yukawa couplings,
\begin{equation}\label{eq:top}
h_t^{\mathrm{2HDM}} = \frac{1}{\sin\beta}\, y_t^{\mathrm{SM}},
\end{equation}
\begin{equation}\label{eq:bottom}
h_b^{\mathrm{2HDM}} = \frac{1}{\cos\beta}\, y_b^{\mathrm{SM}}.
\end{equation}
Then, the 2HDM couplings are evolved up to the scale $M_s$ using the full two-loop 2HDM RGEs
including complex phases. For the gauge, Yukawa, and quartic couplings, we have calculated these by implementing the general prescription first developed in
Refs.~\cite{Vaughn:1983,Vaughn:1984,Vaughn:1985,Luo:2002ti} and expanded upon in Ref.~\cite{Schienbein:2018fsw} to account for
kinetic mixing of scalar fields in the presence of multiple Higgs doublets. For the running vevs, we use the formulae
from Refs.~\cite{Sperling:2013eva,Sperling:2013xqa}. We have checked our results against the findings of Ref.~\cite{Oredsson:2018yho}
and find agreement for all couplings.
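As an aside, the numerical solution of such RGE systems proceeds by standard ODE integration in $t=\ln\mu$. A minimal, illustrative sketch (not our actual implementation) using a fourth-order Runge--Kutta step on the one-loop SM running of the strong coupling, $\beta(g_s)=-7\,g_s^3/(16\pi^2)$:

```python
import math

def beta_gs(gs):
    # One-loop SM beta function of the strong coupling,
    # dg_s/dt = -7 g_s^3 / (16 pi^2) with t = ln(mu);
    # illustrative stand-in for the full coupled two-loop 2HDM system.
    return -7.0 * gs**3 / (16.0 * math.pi**2)

def run_coupling(g0, mu_start, mu_end, steps=1000):
    """Evolve a single coupling from mu_start to mu_end with RK4 in t = ln(mu)."""
    dt = (math.log(mu_end) - math.log(mu_start)) / steps
    g = g0
    for _ in range(steps):
        k1 = beta_gs(g)
        k2 = beta_gs(g + 0.5 * dt * k1)
        k3 = beta_gs(g + 0.5 * dt * k2)
        k4 = beta_gs(g + dt * k3)
        g += dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return g

# Evolving g_s upwards from M_t ~ 173 GeV to 1 TeV: asymptotic freedom
gs_high = run_coupling(1.16, 173.0, 1000.0)
```

In the full calculation, the same kind of integrator acts on the complete set of coupled two-loop beta functions.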
\item As our first guess, the values of the 2HDM
gauge and Yukawa couplings emerging from the previous step are taken to determine the initial values of the MSSM gauge and Yukawa couplings,
\begin{align}
c^{\text{MSSM}} = c^{\text{2HDM}} \quad \text{with} \quad c = g_y, g, g_s, h_t, h_b.
\end{align}
It should be noted that
in the MSSM, the ``wrong'' Yukawa couplings are purely loop-induced and that the
Yukawa phases can be absorbed into the fields. Hence, we only have the real parameters
$h_t^\mathrm{MSSM}$ and $h_b^\mathrm{MSSM}$.
\end{enumerate}
\item Now, the MSSM parameters are given by the gauge and Yukawa couplings obtained in step~1 (or adapted in the minimization procedure) and the soft SUSY breaking parameters $A_t$, $A_b$, $\varphi_{M_3}$ as well as the parameter $\mu$ defined as input at the scale $M_s$ used in the following steps:
\begin{figure}
\centering
\begin{tikzpicture}
\node (a) at (-2,0) {\footnotesize{$M_t$}};
\node (b) at (-2,2) {\footnotesize{$M_{H^+}$}};
\node (c) at (-2,4) {\footnotesize{$M_s$}};
\node (A) at (0,0) [draw,thick] {\footnotesize{Start at $M_t$}};
\node (B) at (0,2) [draw,thick,align=center] {\footnotesize{$\lambda_i = h^\prime_i=0,$} \\
                                              \footnotesize{$\varphi=0$ at $M_{H^+}$}};
\node (C) at (0,4) [draw,thick,align=left] {\footnotesize{First guesses for} \\ \footnotesize{couplings at $M_s$}};
\node at (0,5.8) {Step 1};
\node at (4,5.8) {Step 2};
\node at (8,5.8) {Step 3};
\draw (A) [->] -- (B);
\draw (B) [->] -- (C);
\node (D2) at (4,4.9)
[align=center]{\footnotesize{Minimization procedure} \\ \footnotesize{$\Rightarrow$ High-scale couplings:}};
\node (D) at (4,4)[align=center]{\footnotesize{Matching MSSM to 2HDM}};
\node (Ee) at (4, 2)[align=center]{\footnotesize{Matching 2HDM to SM}};
\node (Ff) at (4, 0)[align=center]{\footnotesize{Comparison:}\\\footnotesize{SM vs experimental values}};
\draw[thick] (1.95,-0.5) rectangle (6.05,5.5);
\node (E) at (8.,2) [draw,thick,align=left]{\footnotesize{Calculate Mass} \\ \footnotesize{Matrix at $M_{H^+}$}};
\node (F) at (8,0) [draw,thick,align=center]{\footnotesize{Run to $m_t$,} \\ \footnotesize{Calculate $m_t\;\mathrm{\&}\;m_h$}};
\draw (D) [->] -- (Ee);
\draw (Ee) [->] -- (Ff);
\draw (C) [->] -- (D);
\draw (D) [-] -- (8,4);
\draw (8,4)[->] -- (E);
\draw (E) [->] -- (F);
\end{tikzpicture}
\caption{Pictorial description of the mass calculation.}
\label{fig:process}
\end{figure}
\begin{enumerate}
\item With the MSSM parameters, the 2HDM
couplings are calculated using the matching conditions given in Sect.~\ref{se:matching}. These MSSM threshold corrections give the non-vanishing values for the 2HDM quartic couplings $\lambda_1, \dots, \lambda_7$,
the ``wrong'' Yukawa couplings $h_t^{\prime\mathrm{2HDM}}$, $h_b^{\prime\mathrm{2HDM}}$, and the Yukawa phases of the 2HDM $\varphi_{h_t}$, $\varphi_{h_t'}$, $\varphi_{h_b}$, and $\varphi_{h_b'}$. The couplings are then run down to the scale $M_{H^+}$.
\item \label{SM_THDM_matching} Then, the 2HDM
is matched to the SM. The tree-level matching conditions for the SM Yukawa couplings $y_t$ and $y_b$ to the 2HDM ones $h_t$, $h_b$, $h_t'$ and $h_b'$ are
\begin{align}\nonumber
&\sqrt{|h_t|^2\sin^2\beta + 2|h_t||h_t^\prime|\cos\beta\sin\beta\cos(\varphi_{h_t} - \varphi_{h_t^\prime}) +
|h_t^\prime|^2\cos^2\beta} = y_t\\\label{eq:quark_mass_match}
&\sqrt{|h_b|^2\cos^2\beta + 2|h_b||h_b^\prime|\cos\beta\sin\beta\cos(\varphi_{h_b} - \varphi_{h_b^\prime}) +
|h_b^\prime|^2\sin^2\beta} = y_b
\end{align}
where $\varphi_\chi$ is the phase of the coupling $\chi$. The quartic Higgs coupling in the SM
$\lambda^{\mathrm{SM}}$ can be calculated at tree-level via
\begin{equation}\label{eq:lamSM2HDM}
\lambda_{\text{SM}} = c_\beta^4 \lambda_1 + 4 c_\beta^3 s_\beta \operatorname{Re}(\lambda_6) +
2 c_\beta^2 s_\beta^2\left[\lambda_3 + \lambda_4 + \operatorname{Re}(\lambda_5)\right] + 4 c_\beta s_\beta^3 \operatorname{Re}(\lambda_7) + s_\beta^4
\lambda_2.
\end{equation}
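As a sanity check of Eq.~\eqref{eq:lamSM2HDM} -- a small numerical sketch, not part of the actual calculation -- inserting the tree-level MSSM matching conditions of Sect.~\ref{se:matching} reproduces the well-known tree-level MSSM relation $\lambda_{\text{SM}}=\tfrac14(g^2+g_y^2)\cos^2 2\beta$:

```python
import math

def lambda_sm(l1, l2, l3, l4, l5, l6, l7, beta):
    """Tree-level matching of the SM quartic coupling to the 2HDM couplings;
    l5, l6, l7 may be complex."""
    c, s = math.cos(beta), math.sin(beta)
    return (c**4 * l1 + 4.0 * c**3 * s * complex(l6).real
            + 2.0 * c**2 * s**2 * (l3 + l4 + complex(l5).real)
            + 4.0 * c * s**3 * complex(l7).real + s**4 * l2)

# Tree-level MSSM matching values of the quartic couplings (illustrative inputs)
g, gy, beta = 0.65, 0.36, math.atan(10.0)     # tan(beta) = 10
l1 = l2 = (g**2 + gy**2) / 4.0
l3 = (g**2 - gy**2) / 4.0
l4 = -g**2 / 2.0
l5 = l6 = l7 = 0.0j
# Expected: the familiar tree-level MSSM result (g^2 + g_y^2) cos^2(2 beta) / 4
expected = (g**2 + gy**2) / 4.0 * math.cos(2.0 * beta)**2
```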
The one-loop threshold correction to $\lambda_{\text{SM}}$ is obtained by integrating out the heavy Higgs bosons. In the real case
where all phases are set to zero, the answer is known in closed form \cite{Bahl:2018jom}. In the complex case, on the
other hand, the calculation is complicated by the $4\times4$ neutral mixing and mass matrices. We have
evaluated the full threshold corrections numerically for the complex case, with the caveat that the result then includes contributions of order
$\mathcal O(v/M_{H^+})$ that are ignored elsewhere in the calculation. Comparing the results for the Higgs masses using only the tree-level matching, the one-loop threshold of Ref.~\cite{Bahl:2018jom}, and the full one-loop threshold including $\mathcal O(v/M_{H^+})$ terms leads to very small differences, so we can neglect the one-loop threshold entirely. Similarly, the one-loop 2HDM threshold corrections to the SM Yukawa couplings are numerically negligible.
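To illustrate Eq.~\eqref{eq:quark_mass_match}, a short numerical sketch (with invented input values): for a vanishing ``wrong'' coupling $h_t'$, the condition reduces to the inverse of the tree-level relation in Eq.~\eqref{eq:top}, $y_t=|h_t|\sin\beta$, and a small $h_t'$ shifts $y_t$ only mildly:

```python
import cmath, math

def y_top(ht, htp, beta):
    """SM top Yukawa in terms of the complex 2HDM couplings ht and htp,
    following the matching condition for y_t."""
    sb, cb = math.sin(beta), math.cos(beta)
    cross = math.cos(cmath.phase(ht) - cmath.phase(htp)) if htp else 0.0
    return math.sqrt(abs(ht)**2 * sb**2
                     + 2.0 * abs(ht) * abs(htp) * cb * sb * cross
                     + abs(htp)**2 * cb**2)

beta = math.atan(4.0)                            # illustrative tan(beta) = 4
ht = 0.95 * cmath.exp(0.3j)                      # invented complex 2HDM top Yukawa
yt = y_top(ht, 0.01 * cmath.exp(-0.2j), beta)    # small "wrong" coupling
```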
\item \label{SM_evolution} In the next step, the SM couplings are evolved from the scale $M_{H^+}$ down to $M_t$ and checked against the
experimental values for\footnote{Including a check of the value of the vev does not change the result within our numerical accuracy.} $g_y$, $g$, $g_s$, $y_t$, $y_b$. We repeat this procedure,
adjusting the high scale MSSM couplings each time to minimize the differences between the SM couplings at the low-scale and the experimental values until
good agreement is found.
\end{enumerate}
\item \label{masscalculation} Via the minimization procedure in step~2, we obtain a final set of values for all MSSM high scale parameters. These are
evolved down one last time to $M_{H^+}$. At this stage, all
the low-scale 2HDM parameters necessary for computing the Higgs masses are determined, and one could in principle
calculate the eigenvalues of the loop-corrected mass matrix at the scale
$M_{H^+}$ and determine the pole masses. However, this would lead to terms
containing potentially large logarithms $\ln(M_{H^+}/m_t)$, which are
additionally enhanced by factors of the large top-Yukawa coupling. These terms
originate from the one-loop corrections in the conversion of the
$\overline{\text{MS}}$ mass to the pole mass. Therefore, we considered three
conceptually different methods to calculate the Higgs-boson masses: In the
case that the charged Higgs boson is sufficiently light, the 2HDM can be used as
the low-energy theory (options (a) and (b) below). If the charged Higgs boson is
heavy, then the SM is the appropriate low-energy theory and a matching procedure for the 2HDM and the SM is performed at the scale $M_{H^+}$ (option (c)). Finally, we apply an approximation that interpolates between both results (option (d)). In the following, we list the options and include some details about the calculation:
\begin{enumerate}
\item \label{2HDMatMHp} The parameters are taken at the scale $\mu_{\text{ren}} = M_{H^+}$ and the on-shell Higgs masses are calculated via the zeros of the determinant
\begin{align}\label{eq:det}
\det\left[p^2 - \mathcal M^2 (\mu_{\text{ren}}) + \hat{\Sigma}(\mu_{\text{ren}}, p^2) - \hat{T}\right] = 0
\end{align}
expanded up to one-loop order
where $\hat{\Sigma}(\mu_{\text{ren}}, p^2)$ denotes the top and bottom Yukawa contributions to the self-energy matrix in
the $\overline{\text{MS}}$ renormalization scheme at momentum $p^2$. To ensure the proper minimum
of the effective potential, tadpole contributions $\hat{T}$ originating from top and
bottom loops have to be taken into account. The entries of the matrix $\hat{T}$ are given in appendix~\ref{tadpole} in terms of tadpole contributions in the interaction basis. The mass matrix $\mathcal M^2
(\mu_{\text{ren}})$ has the form of the tree-level mass matrix given in
Eq.~\eqref{eq:2HDMmassmatrixneutral} with the parameters evaluated at the scale
$\mu_{\text{ren}} = M_{H^+}$, where the charged Higgs mass is given in the
$\overline{\text{MS}}$ scheme.
\item \label{2HDMatmt} In this option, the low-energy theory is still the 2HDM, however, the
parameters are evolved down to the \textit{running} top-quark mass $m_t$
calculated in terms of 2HDM parameters, and Eq.~\eqref{eq:det} is evaluated at
the scale $\mu_{\text{ren}} = m_t$ where the $\overline{\text{MS}}$ mass of the charged Higgs boson $M_{H^+}$ is interpreted as given at the scale\footnote{Within the calculation, it is consistent to use $M_{H^+}(m_t)$ instead of $M_{H^+}(M_{H^+})$ as an input---$M_{H^+}(M_{H^+})$ is chosen in step~\ref{2HDMatMHp} as input. However, when comparing both approaches of step~\ref{2HDMatMHp} and of step~\ref{2HDMatmt}, one has to be careful with the interpretation of the results. We find that the relative difference between $M_{H^+}(m_t)$ and $M_{H^+}(M_{H^+})$ is at the per-mille level in the parameter region where the 2HDM calculation is applicable and we ignore this difference.}
$m_t$. Using this scale choice, the logarithms
$\ln(\mu_{\text{ren}}/m_t)$ in the self-energies vanish, since we evaluate the
self-energies using the running top and bottom masses. This method is only valid
as long as $M_{H^+}$ is not much larger than $m_t$ since the 2HDM RGEs are not
the correct RGEs for evolving the couplings below the scale $M_{H^+}.$
\item
In this method, the SM is decoupled completely from the 2HDM and treated as
the low-energy theory. This method applies when $M_{H^+}\gg m_t$. In this case,
the heavy Higgs bosons of the 2HDM are decoupled by matching the 2HDM to the SM
in the same way as step \ref{SM_THDM_matching}, and the SM couplings are
evolved down to $m_t$. In this case, however, $m_t$ is calculated in terms of SM
parameters only. The $\overline{\text{MS}}$ mass of the lightest Higgs boson is
taken to be $v^2\lambda_{\mathrm{SM}}(m_t)$, which is converted to the pole mass
via
\begin{align}\label{eq:polemass}
p^2 - v^2\lambda_{\mathrm{SM}}(m_t) + \hat{\Sigma}^\text{SM} - \hat{T}^\text{SM} = 0
\end{align}
where $ \hat{\Sigma}^\text{SM}$ and $\hat{T}^\text{SM}$ are the self-energy and
tadpole contributions of the SM-like Higgs boson of $\mathcal O(\alpha_t)$ and $\mathcal O(\alpha_b)$ with $\alpha_{\{t,b\}} = y_{\{t,b\}}^2/(4\pi)$ evaluated in the
$\overline{\text{MS}}$ scheme. In this option, since the heavy 2HDM Higgs
bosons are decoupled, all information about the heavy Higgs bosons at the
scale $m_t$ is encoded in the size of the couplings and their masses can be
estimated to be of order $M_{H^+}$. However, at the scale $M_{H^+}$, one can
still obtain direct information about the heavy Higgs bosons.
\item Finally, in this option, an approximation is exploited that allows one to
resum large logarithms in the scenario of $M_{H^+}\gg m_t$ while still retaining
information about the heavy Higgs bosons and the mixing between them and the
lightest Higgs boson. In order to do so, firstly, we only consider top-Yukawa
effects. We do not attempt to resum logarithms proportional to other couplings
in the 2HDM. This
approximation is good in the low-$\tan\beta$ regime where $h_b^{\mathrm{2HDM}}$
is small, which is also the most phenomenologically relevant regime, especially
for low values of the mass of the charged Higgs boson. The second assumption is
that for the resummation of these logs, our 2HDM can be considered a classic
CP-conserving type-II 2HDM where only one Higgs doublet couples to the top quarks.
This is because the logarithms we wish to resum arise from one-loop corrections,
and CP-violating and ``wrong-type" Yukawa couplings are already loop-suppressed,
so any effect these couplings may have will be suppressed by an extra loop
factor.
With this regime in mind, we wish to incorporate the effect of running from
$M_{H^+}$ to $m_t$ (with $m_t$ evaluated with parameters of the 2HDM) into the full 2HDM neutral mass matrix at the scale
$M_{H^+}$. To begin, we evaluate $\lambda_{\text{SM}}$ and
$v^{\overline{\mathrm{MS}}}$ at $M_{H^+}$ and $m_t$ according to
steps~\ref{SM_THDM_matching} and \ref{SM_evolution}, but we take $y_b = 0$ into
account when running the SM couplings down to $m_t$. Then, at the scale
$M_{H^+}$, we rotate into the so-called ``Higgs Basis'' \cite{Branco:1999fs,
Gunion:2002zf, Haber:2015pua}, defined by
\begin{gather}\label{eq:higgs_basis}
H_1 = c_{\beta}\Phi_1 + s_{\beta}\Phi_2, \qquad H_2 = c_{\beta}\Phi_2 - s_{\beta}\Phi_1,\\\nonumber
H_1 = \begin{pmatrix} h_1^+\\\frac{1}{\sqrt{2}}(v+h_1+ib_1)\end{pmatrix},\qquad H_2 = \begin{pmatrix} h_2^+\\\frac{1}{\sqrt{2}}(h_2
+ib_2)\end{pmatrix}\\\nonumber
s_\beta\equiv \sin\beta, \qquad c_\beta\equiv\cos\beta,\qquad \tan\beta \equiv\frac{v_2}{v_1},\qquad v^2\equiv v_1^2 + v_2^2,
\end{gather}
where $ h_j^+$, $h_j$, and $b_j$ with $j =1, 2$ are the charged, CP-even, and
CP-odd Higgs fields in the Higgs basis, respectively. In this
basis, only the Higgs doublet $H_1$ gets a vev $v$ and can therefore be identified
with the SM Higgs doublet. The mass matrix in the Higgs basis can be obtained
via $\mathcal M^\text{Higgs} = \mathcal U \mathcal M^2 \mathcal U^\dagger$, where
$\mathcal M^2$ is given in Eq.~\eqref{eq:2HDMmassmatrixneutral} and
\begin{align}\label{eq:Umatrix}
\mathcal U = \begin{pmatrix} U &0 \\ 0 &U\end{pmatrix} \quad \text{with} \quad U = \begin{pmatrix} c_\beta &s_\beta \\ -s_\beta & c_\beta \end{pmatrix},
\end{align}
and the (1,1) component of $\mathcal M^\text{Higgs}$ can be identified with
$(v^{\mathrm{SM}})^2\lambda^{\mathrm{SM}}$, leading to the threshold condition given
in Eq.~\eqref{eq:lamSM2HDM}. The submatrix of $\mathcal M^\text{Higgs}$ given by
the second and third row and column describes the heavy Higgs bosons.
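A quick numerical check of the Higgs-basis rotation in Eqs.~\eqref{eq:higgs_basis} and \eqref{eq:Umatrix} (an illustrative sketch with an arbitrary $\tan\beta$): with $v_1=v\cos\beta$ and $v_2=v\sin\beta$, the rotated doublet $H_1$ carries the full vev $v$ while $H_2$ carries none:

```python
import math

def higgs_basis_vevs(v, beta):
    """Rotate the doublet vevs (v1, v2) into the Higgs basis with the 2x2 block
    U = [[c, s], [-s, c]] of the transformation matrix; returns the vevs of H1, H2."""
    cb, sb = math.cos(beta), math.sin(beta)
    v1, v2 = v * cb, v * sb
    return cb * v1 + sb * v2, -sb * v1 + cb * v2

v, beta = 246.0, math.atan(7.0)   # illustrative tan(beta) = 7
vH1, vH2 = higgs_basis_vevs(v, beta)
```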
Before continuing, we note two relevant facts. First, in the decoupling limit
\cite{Gunion:2002zf}\footnote{Originally, the decoupling limit was formulated
for $M_A \gg v$ where $M_A$ is the mass of the CP-odd Higgs boson in the CP
conserving 2HDM.} where $M_{H^+}\gg v$, the mixing angle $\alpha$ (which
diagonalizes the CP-even neutral mass matrix in the CP-conserving 2HDM) can be
approximated by $\beta-\frac{\pi}{2}$, and the Higgs basis is, up to a sign,
the mass basis,
\begin{equation}
\begin{pmatrix} h\\H\end{pmatrix} = \begin{pmatrix} -s_{\alpha} & c_{\alpha} \\c_{\alpha} & s_{\alpha}\end{pmatrix}
\begin{pmatrix} \phi_1 \\ \phi_2 \end{pmatrix} \xrightarrow{M_{H^+}\gg v}
\begin{pmatrix} c_{\beta} & s_{\beta}\\ s_{\beta} & - c_{\beta} \end{pmatrix}
\begin{pmatrix}\phi_1\\\phi_2\end{pmatrix}
\end{equation}
which, according to Eq.~\eqref{eq:higgs_basis}, shows that
\begin{equation}
h=h_1\qquad H=-h_2.
\end{equation}
Second, since at tree-level in the gauge-eigenstate basis only the 2HDM doublet
$\Phi_2$ couples to top quarks according to our above assumptions, one can
approximate the self-energy corrections to the CP-even Higgs bosons by
\begin{align}\nonumber\label{eq:approx}
\Sigma^{\mathrm{tops}}_{h_1h_1} = \Sigma^{\mathrm{tops}}_{hh} &= s_\beta^2\Sigma^{\mathrm{tops}}_{\phi_2\phi_2}\\\nonumber
\Sigma^{\mathrm{tops}}_{h_1h_2} = -\Sigma^{\mathrm{tops}}_{hH} &= c_\beta s_\beta
\Sigma^{\mathrm{tops}}_{\phi_2\phi_2} = \frac{1}{\tan\beta}\Sigma^{\mathrm{tops}}_{hh} \\
\Sigma^{\mathrm{tops}}_{h_2h_2} = \Sigma^{\mathrm{tops}}_{HH} &= c_\beta^2 \Sigma^{\mathrm{tops}}_{\phi_2\phi_2} =
\frac{1}{\tan^2\beta}\Sigma^{\mathrm{tops}}_{hh}.
\end{align}
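The relations in Eq.~\eqref{eq:approx} follow from a single rotation: under the above assumption only $\Phi_2$ couples to the top quark, so the top contribution to the CP-even self-energy matrix in the gauge basis has a single non-vanishing entry, and conjugating with the $2\times2$ block $U$ of Eq.~\eqref{eq:Umatrix} gives

```latex
\Sigma^{\mathrm{tops}}_{\mathrm{Higgs}} =
  U \begin{pmatrix} 0 & 0 \\ 0 & \Sigma^{\mathrm{tops}}_{\phi_2\phi_2} \end{pmatrix} U^\dagger
  = \Sigma^{\mathrm{tops}}_{\phi_2\phi_2}
    \begin{pmatrix} s_\beta^2 & s_\beta c_\beta \\ s_\beta c_\beta & c_\beta^2 \end{pmatrix},
```

whose entries reproduce $\Sigma^{\mathrm{tops}}_{h_1h_1}$, $\Sigma^{\mathrm{tops}}_{h_1h_2}$, and $\Sigma^{\mathrm{tops}}_{h_2h_2}$ term by term.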
Now we consider $v^2(m_t)\lambda^{\mathrm{SM}}(m_t)$ to be the leading-log
resummation of the one-loop-leading-log contribution coming from the
$\Sigma^{\mathrm{tops}}_{h_1h_1}$ self-energy correction. Therefore, using Eq.~\eqref{eq:approx}, we
incorporate this into the Higgs basis matrix as follows
\begin{equation}
\mathcal M^{\mathrm{Higgs}}_{\text{approx}} = \mathcal M^{\mathrm{Higgs}} + \begin{pmatrix}\gamma^{\mathrm{resum}} &
\frac{1}{\tan\beta}\gamma^{\mathrm{resum}} & 0 & 0\\ \frac{1}{\tan\beta}\gamma^{\mathrm{resum}}
&\frac{1}{\tan^2\beta}\gamma^{\mathrm{resum}}& 0 &0 \\ 0 & 0 & 0& 0 \\ 0 &0&0 &
0\end{pmatrix},
\end{equation}
where $\gamma^{\mathrm{resum}}\equiv v^2(m_t)\lambda^{\mathrm{SM}}(m_t) - v^2(M_{H^+})\lambda^{\mathrm{SM}}(M_{H^+}).$
Then, the eigenvalues of the matrix
\begin{equation}\label{mass_mat}
\mathcal M^{\mathrm{Higgs}}_{\mathrm{approx}} - \hat{\Sigma}(p^2,\mu) + \hat{T}
\end{equation}
are calculated, where $\hat{\Sigma}$ is the $4\times4$ matrix of one-loop
self-energies of the neutral Higgs bosons in the gauge basis and $\hat{T}$ is
the matrix of one-loop neutral tadpole corrections in gauge basis given in
appendix \ref{tadpole}, both in the $\overline{\mathrm{MS}}$ scheme and using the parameters at $m_t$.
To calculate these eigenvalues, it is necessary to choose the renormalization
scale $\mu$ and the external momentum $p^2$. The renormalization scale is set to
$m_t$, and the external momenta are chosen to be the tree-level masses. We
choose to calculate the eigenvalues one at a time, with the mass matrix
diagonalized once for each tree-level mass. For example, the lightest neutral
Higgs corresponds to the lightest eigenvalue of the loop-corrected mass matrix
evaluated with the external momentum set to the tree-level neutral Higgs mass.
In order to proceed along the same lines as in the pure 2HDM case, see \ref{2HDMatMHp}, the matrix $\mathcal M^{\mathrm{Higgs}}_{\text{approx}}$ can be rotated back to the interaction eigenstates using the transformation matrix \eqref{eq:Umatrix}. This results in $\mathcal M^2$ with an additional contribution to the (2,2) element,
\begin{align}\label{eq:Msquaredresum}
(\mathcal M^2_{22})^{\mathrm{resum}} = \mathcal M^2_{22} + \frac{1}{\sin^2 \beta} \gamma^{\mathrm{resum}}.
\end{align}
Using this new matrix $(\mathcal M^2)^{\mathrm{resum}}$ and replacing $\mathcal M^2$ by $(\mathcal M^2)^{\mathrm{resum}}$ in Eq.~\eqref{eq:det}, one can calculate the Higgs masses by finding the zeros of the resulting equation up to one-loop order. We find that both approaches lead to nearly the same result.
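This equivalence can be checked numerically; a minimal sketch (with an invented value for $\gamma^{\mathrm{resum}}$, not part of the actual calculation) verifying that rotating the Higgs-basis shift back with the $2\times2$ block $U$ of Eq.~\eqref{eq:Umatrix} populates only the (2,2) element, as in Eq.~\eqref{eq:Msquaredresum}:

```python
import math

def rotate_back(delta_higgs, beta):
    """Compute U^T . delta_higgs . U for the 2x2 CP-even block,
    with U = [[c, s], [-s, c]]."""
    c, s = math.cos(beta), math.sin(beta)
    U = [[c, s], [-s, c]]
    # (U^T D U)_{ij} = sum_{k,l} U_{ki} D_{kl} U_{lj}
    return [[sum(U[k][i] * delta_higgs[k][l] * U[l][j]
                 for k in range(2) for l in range(2))
             for j in range(2)] for i in range(2)]

beta, gamma = math.atan(3.0), 1500.0   # illustrative tan(beta) = 3, invented gamma_resum
t = math.tan(beta)
delta_higgs = [[gamma, gamma / t], [gamma / t, gamma / t**2]]
delta_int = rotate_back(delta_higgs, beta)
# Only the (2,2) entry survives and equals gamma / sin^2(beta)
```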
\end{enumerate}
For options (b) to (d), an evaluation at the scale of the
running top-quark mass $m_t (m_t)$ is performed. In these cases, the running
top-quark mass is calculated iteratively.
\end{enumerate}
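The ``top down'' boundary-value iteration of the steps above can be illustrated with a toy model (a sketch under simplifying assumptions: a single coupling, an invented one-loop-style beta function, and a secant iteration in place of the multi-dimensional numerical minimizer):

```python
import math

def run_down(lam_high, t_span, b=0.02):
    """Toy one-loop-style RGE d(lam)/dt = b lam^2, integrated analytically
    downwards over t_span = ln(M_s / M_t); stand-in for the full RGE system."""
    return lam_high / (1.0 + b * lam_high * t_span)

def top_down_fit(target_low, t_span, guess=0.25, tol=1e-10):
    """Secant iteration on the high-scale coupling until running it down
    reproduces the experimentally fixed low-scale value."""
    x0, x1 = guess, 1.1 * guess
    f0 = run_down(x0, t_span) - target_low
    for _ in range(100):
        f1 = run_down(x1, t_span) - target_low
        if abs(f1) < tol:
            break
        x0, f0, x1 = x1, f1, x1 - f1 * (x1 - x0) / (f1 - f0)
    return x1

# Illustrative numbers: fit lam at M_s = 2 TeV to a target value 0.129 at M_t
t_span = math.log(2000.0 / 173.0)
lam_ms = top_down_fit(0.129, t_span)
```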
\section{Matching the MSSM to the 2HDM}
\label{se:matching}
\paragraph{}
The complex MSSM Higgs potential of Eq.~\eqref{higgs_potential} is matched to the general type-III 2HDM given above at the
one-loop level at the scale $M_{\text{s}}$. The doublets $H_u$ and $H_d$ from Eq.~\eqref{higgs_potential} are related to those
of Eq.~\eqref{THDM_potential} by
\begin{equation}
\Phi_1\equiv-i\sigma_2H_d^*, \qquad \Phi_2 \equiv H_u.
\end{equation}
At tree level, the matching conditions for the quartic Higgs couplings are
\begin{align}\nonumber
\lambda_1 &= \frac{1}{4}(g^2+g_y^2),\\\nonumber
\lambda_2 &= \frac{1}{4}(g^2+g_y^2),\\\nonumber
\lambda_3 &= \frac{1}{4}(g^2-g_y^2),\\\nonumber
\lambda_4 &= -\frac{1}{2}g^2,\\
\lambda_5 &= \lambda_6 = \lambda_7 = 0.
\end{align}
For the Yukawa couplings, one obtains
\begin{equation}
h_t^{\mathrm{2HDM}} = h_t^{\mathrm{MSSM}},\qquad h_b^{\mathrm{2HDM}} = h_b^{\mathrm{MSSM}},\qquad h_t'^{\mathrm{2HDM}}= h_b'^{\mathrm{2HDM}}=0.
\end{equation}
Here we are assuming that we know all the parameters of the MSSM and match the MSSM to the 2HDM. This means all
couplings given below are assumed to be MSSM couplings unless we explicitly state otherwise, and we drop the superscript ${}^{\mathrm{MSSM}}$.
The one-loop threshold corrections are calculated under three assumptions. Firstly, all supersymmetric soft-breaking
mass parameters are assumed to share the common mass scale $M_{\text{s}}$, in particular $M_{\text{s}}= M_{L_Q} =
M_{R_U} = M_{R_D} = M_3$. Secondly, all Yukawa couplings are assumed to vanish except the top and bottom Yukawa couplings,
and only contributions proportional to powers of these Yukawa couplings or the strong gauge coupling are included. This
amounts to only including third-generation squarks in loops when deriving the thresholds and neglecting terms of
$\mathcal O(g^a g_y^b)$ with $a+b = 4$. Lastly, all loop functions are evaluated in the limit of zero external momenta.
The diagrams were evaluated using \texttt{FeynArts} and \texttt{FormCalc}~\cite{Hahn:2000kx, Hahn:1998yk}, and the
loop functions are evaluated using \texttt{ANT}~\cite{Angel:2013hla}.
The results given here agree with the complex results given in Ref.~\cite{Carena:2015uoe} up to gauge contributions to the
couplings $\lambda_6$ and $\lambda_7$, which, however, are given in Refs.~\cite{Haber:1993an, Bahl:2018jom}
for the real case.
\begin{figure}
\centering
\begin{subfigure}[]{.4\textwidth}
\centering \includegraphics[width=0.7\textwidth]{feynman_diagrams/box.pdf}
\caption{Box diagram with squarks}
\label{fig:boxes}
\end{subfigure}\qquad
\begin{subfigure}[]{.4\textwidth}
\centering\includegraphics[width=0.7\textwidth]{feynman_diagrams/triangle.pdf}
\caption{Triangle diagram with squarks}
\label{fig:triangles}
\end{subfigure}
\caption{Two sample diagrams contributing to the one-loop threshold of the quartic Higgs couplings. }
\label{fig:diagrams}
\end{figure}
Box diagrams like those of Fig.~\ref{fig:boxes} lead to the following corrections to the quartic couplings:
\begin{align}
\Delta\lambda_{1}^{(4)} &= -\frac{\kappa}{2 } \Big\{|\hat{A}_b|^4 h_b^4+h_t^4 |\hat{\mu}|^4\Big\},\\
\Delta\lambda_{2}^{(4)} &= -\frac{\kappa}{2 } \Big\{|\hat{A}_t|^4 h_t^4+h_b^4 |\hat{\mu}|^4\Big\},\\\nonumber
\Delta\lambda_{3}^{(4)} &= \frac{\kappa}{2 } \Big\{-|\hat{A}_b|^2 |\hat{A}_t|^2 h_b^2 h_t^2-|\hat{A}_b|^2 h_b^4
|\hat{\mu}|^2-|\hat{A}_t|^2 h_t^4|\hat{\mu}|^2-h_b^2 h_t^2 |\hat{\mu}|^4 \\&\quad+(\hat{A}_b \hat{A}_t^* +
\hat{A}_b^* \hat{A}_t) h_b^2 h_t^2 |\hat{\mu}|^2\Big\},\\\nonumber
\Delta\lambda_{4}^{(4)} &= \frac{\kappa}{2 } \Big\{|\hat{A}_b|^2 |\hat{A}_t|^2 h_b^2 h_t^2-|\hat{A}_b|^2 h_b^4
|\hat{\mu}|^2-|\hat{A}_t|^2 h_t^4|\hat{\mu}|^2+h_b^2 h_t^2 |\hat{\mu}|^4 \\&\quad- (\hat{A}_b \hat{A}_t^* +
\hat{A}_b^* \hat{A}_t) h_b^2 h_t^2 |\hat{\mu}|^2 \Big\},\\
\Delta\lambda_{5}^{(4)} &=
-\frac{\hat{\mu}^2}{2 }\kappa\Big\{\hat{A}_b^2 h_b^4
+\hat{A}_t^2 h_t^4\Big\},\\
\Delta\lambda_{6}^{(4)} &= \frac{\hat{\mu}}{2 } \kappa\Big\{|\hat{A}_b|^2 \hat{A}_b h_b^4
+ \hat{A}_t h_t^4 |\hat{\mu}|^2\Big\},\\
\Delta\lambda_{7}^{(4)} &= \frac{\hat{\mu}}{2 } \kappa\Big\{|\hat{A}_t|^2 \hat{A}_t h_t^4
+\hat{A}_b h_b^4 |\hat{\mu}|^2\Big\},\\
\kappa&\equiv\frac{1}{16\pi^2}.
\end{align}
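As a numerical illustration (not part of the original calculation; the function and variable names below are ours), the box-diagram thresholds above can be encoded directly:

```python
import math

# One-loop prefactor kappa = 1/(16 pi^2), as defined above
KAPPA = 1.0 / (16.0 * math.pi**2)

def delta_lambda_box(ht, hb, At_hat, Ab_hat, mu_hat):
    """Box-diagram thresholds Delta lambda_1^(4) .. Delta lambda_7^(4).

    The hatted arguments are the complex parameters normalized to M_s."""
    aAt2, aAb2, amu2 = abs(At_hat)**2, abs(Ab_hat)**2, abs(mu_hat)**2
    # (Ab At* + Ab* At) = 2 Re(Ab At*)
    re2 = 2.0 * (Ab_hat * At_hat.conjugate()).real
    dl1 = -KAPPA / 2 * (aAb2**2 * hb**4 + ht**4 * amu2**2)
    dl2 = -KAPPA / 2 * (aAt2**2 * ht**4 + hb**4 * amu2**2)
    dl3 = KAPPA / 2 * (-aAb2 * aAt2 * hb**2 * ht**2 - aAb2 * hb**4 * amu2
                       - aAt2 * ht**4 * amu2 - hb**2 * ht**2 * amu2**2
                       + re2 * hb**2 * ht**2 * amu2)
    dl4 = KAPPA / 2 * (aAb2 * aAt2 * hb**2 * ht**2 - aAb2 * hb**4 * amu2
                       - aAt2 * ht**4 * amu2 + hb**2 * ht**2 * amu2**2
                       - re2 * hb**2 * ht**2 * amu2)
    dl5 = -mu_hat**2 / 2 * KAPPA * (Ab_hat**2 * hb**4 + At_hat**2 * ht**4)
    dl6 = mu_hat / 2 * KAPPA * (aAb2 * Ab_hat * hb**4 + At_hat * ht**4 * amu2)
    dl7 = mu_hat / 2 * KAPPA * (aAt2 * At_hat * ht**4 + Ab_hat * hb**4 * amu2)
    return dl1, dl2, dl3, dl4, dl5, dl6, dl7
```

A convenient cross-check is the sum rule $\Delta\lambda_3^{(4)}+\Delta\lambda_4^{(4)} = -\kappa\,|\hat{\mu}|^2\big(|\hat{A}_b|^2 h_b^4+|\hat{A}_t|^2 h_t^4\big)$, which follows term by term from the expressions above.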
All hatted parameters, here and in the remainder of the paper, are normalized to the scale $M_s$.
The triangle diagrams like those of Fig.~\ref{fig:triangles} give
\begin{align}
\Delta\lambda_{1}^{(3)} &= \frac{3}{4 }\kappa \Big\{-|\hat{A}_b|^2 h_b^2 \Big(g^2+g_y^2-8h_b^2\Big)+\Big(g^2+g_y^2\Big)
h_t^2 |\hat{\mu}|^2\Big\},\\
\Delta\lambda_{2}^{(3)} &= \frac{3}{4 }\kappa \Big\{-|\hat{A}_t|^2 h_t^2 \Big(g^2+g_y^2-8h_t^2\Big)+\Big(g^2+g_y^2\Big)
h_b^2 |\hat{\mu}|^2\Big\},\\\nonumber
\Delta\lambda_{3}^{(3)} &=
-\frac{3}{8}\kappa \Big\{h_t^2|\hat{A}_t|^2 (g^2-g_y^2 -4 h_b^2)
+h_b^2 |\hat{A}_b|^2 \Big(g^2-g_y^2-4 h_t^2\Big)\\\nonumber &\quad-h_b^2 |\hat{\mu}|^2\left[g^2-g_y^2+4 (h_b^2 -h_t^2)\right]
-h_t^2 |\hat{\mu}|^2\left[g^2-g_y^2 + 4( h_t^2 - h_b^2)\right] \\
&\quad - 4 h_b^2 h_t^2(\hat{A}_b \hat{A}_t^* + \hat{A}_b^* \hat{A}_t) \Big\},\\\nonumber
\Delta\lambda_{4}^{(3)} &= \frac{3}{4 }\kappa \Big\{h_t^2 |\hat{A}_t|^2 \left(g^2 - 2 h_b^2\right) +h_b^2 |\hat{A}_b|^2 \left(g^2- 2 h_t^2\right)
\\&\quad - |\hat{\mu}|^2(h_t^2+h_b^2)\left[g^2- 2 (h_b^2 +h_t^2)\right] -2 h_b^2 h_t^2 (\hat{A}_b \hat{A}_t^* + \hat{A}_b^*
\hat{A}_t) \Big\},\\
\Delta\lambda_{5}^{(3)} &= 0,\\
\Delta\lambda_{6}^{(3)} &= \frac{3 \hat{\mu}}{8 }\kappa \Big\{\hat{A}_b h_b^2 \Big(g^2+g_y^2-8 h_b^2\Big)
- \hat{A}_t h_t^2 \Big(g^2+g_y^2\Big)\Big\},\\
\Delta\lambda_{7}^{(3)} &= -\frac{3 \hat{\mu}}{8 }\kappa \Big\{\hat{A}_b \Big(g^2+g_y^2\Big) h_b^2
-\hat{A}_t h_t^2 \Big(g^2+g_y^2-8 h_t^2\Big)\Big\}.
\end{align}
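As a quick consistency check (our own, with illustrative Python; not part of the paper's calculation): under the exchange $t \leftrightarrow b$, i.e.\ $h_t \leftrightarrow h_b$ and $\hat{A}_t \leftrightarrow \hat{A}_b$, the expression for $\Delta\lambda_6^{(3)}$ must map onto that for $\Delta\lambda_7^{(3)}$:

```python
import math

KAPPA = 1.0 / (16.0 * math.pi**2)  # one-loop prefactor

def dlam6_tri(ht, hb, g, gy, At_hat, Ab_hat, mu_hat):
    # Delta lambda_6^(3) as given above
    return 3 * mu_hat / 8 * KAPPA * (Ab_hat * hb**2 * (g**2 + gy**2 - 8 * hb**2)
                                     - At_hat * ht**2 * (g**2 + gy**2))

def dlam7_tri(ht, hb, g, gy, At_hat, Ab_hat, mu_hat):
    # Delta lambda_7^(3) as given above
    return -3 * mu_hat / 8 * KAPPA * (Ab_hat * (g**2 + gy**2) * hb**2
                                      - At_hat * ht**2 * (g**2 + gy**2 - 8 * ht**2))
```

Evaluating `dlam6_tri` with the $t$ and $b$ inputs swapped reproduces `dlam7_tri`, as it must.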
There are also contributions coming from the redefinition of the Higgs doublets. Squark loops induce
mixing between the scalar fields, which must be accounted for in order to preserve canonically normalized kinetic terms
for the scalar fields in the Lagrangian. This is done by redefining the Higgs doublet fields in the following manner
\begin{equation}\label{doublet_redefine}
\begin{pmatrix}\Phi_1 \\ \Phi_2 \end{pmatrix} \rightarrow \begin{pmatrix}\Phi_1 \\ \Phi_2 \end{pmatrix} -
\frac{1}{2}\begin{pmatrix}
\Delta Z_{\Phi_1 \Phi_1} & \Delta Z_{\Phi_1 \Phi_2} \\
\Delta Z_{\Phi_2 \Phi_1} & \Delta Z_{\Phi_2 \Phi_2}
\end{pmatrix}\begin{pmatrix}\Phi_1 \\ \Phi_2 \end{pmatrix}.
\end{equation}
$SU(2)$ invariance ensures that the corrections can be applied to the complete Higgs doublets and not only to the component fields.
The expressions for the wave-function-correction factors $\Delta Z_{\Phi_i \Phi_j}$ can be derived from the finite parts of the derivatives of the self energies in the electroweak interaction basis, $\Sigma'_{\phi_i \phi_j}$ with $\phi_{i,j} \in \{\phi_1, \phi_2, a_1, a_2\}$, corresponding to $\overline{\mathrm{MS}}$-renormalized self energies,
\begin{align}
\Delta Z_{\Phi_1 \Phi_1} &= \frac{1}{2}\left(\Sigma'_{\phi_1 \phi_1} + \Sigma'_{a_1 a_1} \right)= \kappa\frac{h_t^2 |\hat{\mu}|^2 + h_b^2 |\hat{A}_b|^2}{2}, \\
\Delta Z_{\Phi_1 \Phi_2} &= \frac{1}{2}\left( \Sigma'_{\phi_1 \phi_2} + \text{i} \Sigma'_{\phi_1 a_2} - \text{i} \Sigma'_{\phi_2 a_1} + \Sigma'_{a_1 a_2} \right) =-\kappa\frac{\hat{\mu} (h_t^2 \hat{A}_t +
h_b^2 \hat{A}_b)}{2 }, \\
\Delta Z_{\Phi_2 \Phi_1} &= \Delta Z_{\Phi_1 \Phi_2}^*, \\
\Delta Z_{\Phi_2 \Phi_2} &= \frac{1}{2}\left( \Sigma'_{\phi_2 \phi_2} + \Sigma'_{a_2 a_2} \right)= \kappa\frac{h_t^2 |\hat{A}_t|^2 + h_b^2 |\hat{\mu}|^2}{2}.
\end{align}
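For completeness, a small Python sketch (our own naming, purely illustrative) that assembles the wave-function-correction matrix from the expressions above; by construction the diagonal entries are real and $\Delta Z_{\Phi_2 \Phi_1} = \Delta Z_{\Phi_1 \Phi_2}^*$:

```python
import math

KAPPA = 1.0 / (16.0 * math.pi**2)  # one-loop prefactor

def delta_Z(ht, hb, At_hat, Ab_hat, mu_hat):
    """2x2 matrix of wave-function corrections Delta Z_{Phi_i Phi_j}
    induced by squark loops (hatted inputs normalized to M_s)."""
    z11 = KAPPA * (ht**2 * abs(mu_hat)**2 + hb**2 * abs(Ab_hat)**2) / 2
    z12 = -KAPPA * mu_hat * (ht**2 * At_hat + hb**2 * Ab_hat) / 2
    z22 = KAPPA * (ht**2 * abs(At_hat)**2 + hb**2 * abs(mu_hat)**2) / 2
    return [[z11, z12], [z12.conjugate(), z22]]
```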
In Ref.~\cite{Bahl:2018jom}, it was shown for the CP-even Higgs boson fields that this choice for the wave-function-correction factors together with an appropriate choice of correction of the mixing angle leads to the physical fields being the same in the MSSM and the 2HDM at the matching scale as required. The field redefinitions lead to
the following threshold corrections
\begin{align}
&\Delta\lambda_{1}^{(2)} = -\frac{g^2+g_y^2}{4 }
\kappa\Big(h_b^2 |\hat{A}_b|^2
+ h_t^2 |\hat{\mu}|^2\Big),
\\
&\Delta\lambda_{2}^{(2)} = -\frac{g^2+g_y^2}{4 }
\kappa\Big(h_t^2 |\hat{A}_t|^2
+ h_b^2|\hat{\mu}|^2\Big),
\\
&\Delta\lambda_{3}^{(2)} = -\frac{g^2-g_y^2}{8 }
\kappa\Big( h_t^2 \left(|\hat{A}_t|^2 +|\hat{\mu}|^2\right)
+ h_b^2 \left(|\hat{A}_b|^2+|\hat{\mu}|^2\right) \Big),
\\
&\Delta\lambda_{4}^{(2)} = \frac{g^2\kappa}{4}\Big(h_b^2(|\hat{A}_b|^2 + |\hat{\mu}|^2) + h_t^2(|\hat{A}_t|^2 +
|\hat{\mu}|^2)\Big),\\
&\Delta\lambda_{5}^{(2)} = \Delta\lambda_{6}^{(2)} = \Delta\lambda_{7}^{(2)} = 0.
\end{align}
We do not include two-loop corrections to the thresholds, since they are still unknown for complex parameters and MSSM parameters given in the $\overline{\text{DR}}$ scheme\footnote{In Ref.~\cite{Carena:2015uoe}, two-loop threshold corrections of $\mathcal O(h_t^4 g_s^2)$ are given for complex parameters but in the $\overline{\text{MS}}$ scheme.}.
Finally, the Yukawa couplings receive one-loop corrections, resulting in the following 2HDM Yukawa couplings at the matching scale (including the tree-level contribution):
\begin{equation} \label{eq:htthres}
h_t^{\text{2HDM}} = h_t\left\{1 -
\kappa\left[\frac{4}{3}g_s^2\left(\hat{A}_t\hat{M}^*_3
- 1\right) +
\frac{1}{4}\left(h_t^2 |\hat{A}_t|^2 + h_b^2 |\hat{\mu}|^2\right)\right]\right\},
\end{equation}
\begin{equation}\label{eq:hbthres}
h_b^{\text{2HDM}} = h_b\left\{1 -
\kappa\left[\frac{4}{3}g_s^2\left(\hat{A}_b\hat{M}^*_3
- 1\right) +
\frac{1}{4}\left(h_b^2 |\hat{A}_b|^2 + h_t^2 |\hat{\mu}|^2\right)\right]\right\},
\end{equation}
\begin{equation}
h_t^{'\text{2HDM}} = \kappa h_t\left\{\frac{4}{3}g_s^2\hat{\mu}^*\hat{M}^*_3 + \frac{1}{4}
\left(h_b^2 \hat{A}_b^* \hat{\mu}^* + h_t^2 \hat{A}_t^*\hat{\mu}^*\right) \right\},
\end{equation}
\begin{equation}
h_b^{'\text{2HDM}} = \kappa h_b\left\{\frac{4}{3}g_s^2\hat{\mu}^*\hat{M}^*_3 + \frac{1}{4}
\left(h_b^2 \hat{A}_b^* \hat{\mu}^* + h_t^2 \hat{A}_t^*\hat{\mu}^*\right) \right\}.
\end{equation}
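The matched top Yukawa coupling of Eq.~\eqref{eq:htthres} can be sketched numerically as follows (illustrative Python with our own names; the relation $\hat{M}_3 = \mathrm{e}^{\mathrm{i}\varphi_{M_3}}$ holds because $|M_3| = M_s$):

```python
import math

KAPPA = 1.0 / (16.0 * math.pi**2)  # one-loop prefactor

def ht_2hdm(ht, hb, gs, At_hat, mu_hat, phi_M3):
    """One-loop-matched 2HDM top Yukawa coupling h_t^2HDM.

    Since |M_3| = M_s, the hatted gluino mass is a pure phase factor."""
    M3_hat = complex(math.cos(phi_M3), math.sin(phi_M3))
    return ht * (1 - KAPPA * (4.0 / 3.0 * gs**2 * (At_hat * M3_hat.conjugate() - 1)
                              + 0.25 * (ht**2 * abs(At_hat)**2
                                        + hb**2 * abs(mu_hat)**2)))
```

For $\hat{A}_t = \hat{M}_3 = 1$ the gluino term vanishes and the correction reduces to a small real downward shift.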
Two remarks are in order. First, since the absolute value of the gluino mass parameter is $|M_3| = M_s$, $\hat{M}_3$ is
just a phase factor, $\hat{M}_3= \text{e}^{\text{i} \varphi_{M_3}}$, where $\varphi_{M_3}$ is the phase of the gluino
mass parameter. Second, we do not include any $\tan \beta$ resummation in the MSSM; hence, for large $\tan \beta$, our results are expected to become unreliable, and we restrict our numerical discussion to $\tan \beta \leq 20$.
We do not calculate threshold corrections to parameters such as $\tan \beta$, since they do not enter the MSSM
threshold corrections and, hence, are only needed as 2HDM parameters.
\section{The Minimal Supersymmetric Standard Model} \label{se:MSSM}
\paragraph{}
As mentioned in Section \ref{se:intro}, we consider a scenario in which all the superpartner particles are much heavier
than the SM particles and the Higgs bosons. Hence, the full MSSM is active at a high-energy scale and the effects of the
superpartners enter into the low-energy theory via threshold effects. The aim of this section is to set up the notation
needed for the calculation of the threshold corrections.
The superpotential of the MSSM is given as
\begin{align}\label{superpotential}
W_{\text{MSSM}} = - \epsilon_{ij}\left[h_u^{\text{MSSM}} \check{H}_u^i \check{Q}^j \check{U}^c - h_d^{\text{MSSM}}
\check{H}_d^i \check{Q}^j \check{D}^c -h_{e}^{\text{MSSM}}\check{H}_d^i \check{L}^j \check{E}^c + \mu
\check{H}_d^i\check{H}_u^j\right],
\end{align}
where all fields denoted with a $\check{}$ are left-chiral superfields. $\check{Q}$ and $\check{L}$ denote the quark and lepton
superfield doublets, $\check{U}$, $\check{D}$ and $\check{E}$ denote the up-type quark, down-type quark, and lepton
charge-conjugate superfield singlets, the two Higgs-doublet superfields are denoted by $\check{H}_u$ and $ \check{H}_d$, and $\epsilon_{12}
= 1$. The
corresponding Yukawa couplings are $h_u^{\text{MSSM}}$, $h_d^{\text{MSSM}}$, and $h_{e}^{\text{MSSM}}$, which are
complex $3\times3$ matrices in general. However, in our calculation, we will neglect the first two generations, and the
matrices collapse to the top-, bottom-, and tau-Yukawa couplings $h_t^{\text{MSSM}}$, $h_b^{\text{MSSM}}$, and
$h_{\tau}^{\text{MSSM}}$, which can be chosen to be real \cite{Kobayashi:1973fv}. The strength of the mixing of the two Higgs doublets
is described by the complex parameter $\mu$.
We do not explicitly give the vector part of the Lagrangian with the kinetic and interaction terms for the gauge bosons,
but refer the reader to e.g. Refs.~\cite{Martin:1997ns, Drees:1996ca}.
Since supersymmetry cannot be exact, it is explicitly broken in the MSSM by soft-SUSY breaking terms:
\begin{align}\label{soft_breaking}\nonumber
\mathcal L_{\text{MSSM}}^{\text{soft}} &= - m_{H_d}^2 |H_d|^2 - m_{H_u}^2 |H_u|^2 \\\nonumber
&- M_{L_Q}^2 |\tilde{Q}|^2 - M_{R_U}^2 |\tilde{u}_R|^2 - M_{R_D}^2 |\tilde{d}_R|^2
- M_{L_L}^2 |\tilde{L}|^2 - M_{R_E}^2 |\tilde{e}_R|^2 \\& \quad \nonumber
+ \epsilon_{ij} (m_{H_dH_u}^2 H_d^i H_u^j + h_u^{\text{MSSM}} A_u H_u^i \tilde{Q}^j \tilde{u}^*_R
- h_d^{\text{MSSM}} A_d H_d^i \tilde{Q}^j \tilde{d}^*_R \\& \qquad \qquad \nonumber
- h_e^{\text{MSSM}} A_e H_d^i \tilde{L}^j \tilde{e}^*_R + \mathrm{h.c.})\\& \quad
- \frac{1}{2} (M_1 \tilde{B} \tilde{B} + M_2 \tilde{W}_i \tilde{W}_i + M_3
\tilde{G} \tilde{G} + \mathrm{h.c.})\;,
\end{align}
where $\tilde{Q}$, $\tilde{L}$, $\tilde{u}_R$, $\tilde{d}_R$, $\tilde{e}_R$, $H_d$ and $H_u$ denote the scalar components of
the corresponding superfields. The $\tilde{}$ indicates a superpartner field. The gaugino fields corresponding to
$U(1)$, $SU(2)$, and $SU(3)$ are denoted by $\tilde{B}$, $\tilde{W}$, and $\tilde{G}$, respectively. We assume colour
indices to be implicit. The gaugino soft breaking parameters $M_1$, $M_2$, and $M_3$ as well as the Higgs mixing
parameter $m_{H_dH_u}^2$ are complex numbers, while the soft Higgs mass breaking parameters $m_{H_d}^2$, $m_{H_u}^2$
are real. In general, the sfermion mass parameters $M_{L_Q}^2$, $M_{R_U}^2$, $M_{R_D}^2$, $M_{L_L}^2$, $M_{R_E}^2$ are
$3\times3$ Hermitian matrices, but reduce to real parameters when generation mixing is ignored. Finally, the trilinear
couplings $A_u$, $A_d$, and $A_e$ are general $3 \times 3$ complex matrices, but reduce to complex numbers if they are
assumed to be proportional to the SM Yukawa matrices, as is done in this paper.
In this paper, we are concerned with the Higgs sector of the MSSM. Equations~\eqref{superpotential} and
\eqref{soft_breaking} give rise to a Higgs potential of the form
\begin{multline}\label{higgs_potential}
V_H = \frac{1}{8}(g^2+g_y^2)(|H_d|^2-|H_u|^2)^2 + \frac{1}{2}g^2|H_d^\dagger H_u|^2+|\mu|^2(|H_d|^2+|H_u|^2) \\
+ m_{H_d}^2|H_d|^2 + m_{H_u}^2|H_u|^2 - m_{H_dH_u}^2(\epsilon_{ab}H^a_d H^b_u + \mathrm{h.c.})
\end{multline}
for the two Higgs doublets $H_u$ and $H_d$ of hypercharge $+1$ and $-1$, respectively. The $SU(2)$ and $U(1)$ gauge couplings are denoted by $g$ and $g_y$, respectively.
Finally, the squark mass matrices are given by
\begin{align}\label{Sfermionmassenmatrix}
\mathcal M_{\tilde{q}} &= \begin{pmatrix}
M_{L_Q}^2 + m_q^2 + M_Z^2 c_{2 \beta} (T_q^3 - Q_q \sin^2 \theta_W) &
m_q X_q^* \\[.2em]
m_q X_q &
M_{R_F}^2 + m_q^2 +M_Z^2 c_{2 \beta} Q_q \sin^2 \theta_W
\end{pmatrix}
\end{align}
with
\begin{align}\label{kappa}
X_q &= A_q - \mu^*\kappa~, \quad \kappa = \{\cot\beta, \tan\beta\} \text{ and } F = \{U, D\}
\quad {\rm for} \quad q = \{t, b\}~.
\end{align}
Here, we introduce the gauge-boson mass $M_Z$, the electroweak mixing angle $\theta_W$, the quark masses $m_q$, as well
as $\beta$, which is defined via the ratio of the Higgs vacuum expectation values of the MSSM, $\tan \beta \equiv
v_u/v_d$. The charge and the third component of the isospin of the squarks are denoted by $Q_q$ and $T_q^3$,
respectively.
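As an illustration of Eq.~\eqref{Sfermionmassenmatrix} for the stop sector ($q = t$: $T_t^3 = 1/2$, $Q_t = 2/3$, $X_t = A_t - \mu^*\cot\beta$, $F = U$), the mass eigenvalues of the Hermitian $2\times2$ matrix follow in closed form; the Python sketch below uses our own names and illustrative inputs:

```python
import math

def stop_masses_sq(MLQ2, MRU2, mt, MZ2, c2b, s2w, At, mu, tanb):
    """Eigenvalues of the 2x2 stop mass-squared matrix for q = t
    (T3 = 1/2, Q = 2/3); masses in GeV, mass-squared parameters in GeV^2."""
    Xt = At - mu.conjugate() / tanb              # kappa = cot(beta) for the stop
    m11 = MLQ2 + mt**2 + MZ2 * c2b * (0.5 - (2.0 / 3.0) * s2w)
    m22 = MRU2 + mt**2 + MZ2 * c2b * (2.0 / 3.0) * s2w
    off = mt * abs(Xt)                           # |m_t X_t|
    avg = 0.5 * (m11 + m22)
    rad = math.hypot(0.5 * (m11 - m22), off)
    return avg - rad, avg + rad                  # lighter, heavier eigenvalue
```

For degenerate soft masses and vanishing $D$-terms, the splitting of the two eigenvalues is $2\,m_t |X_t|$.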
\section{Conclusion}\label{se:conclusion}
\paragraph{}
In this paper, we have explored the Higgs sector of the CP-violating MSSM in a
mass scenario with heavy SUSY particles and light Higgs bosons using effective
field theory techniques. We matched the complex MSSM to a type-III 2HDM (where
both Higgs doublets couple to both the top and bottom quarks), and calculated
the complex threshold corrections to the 2HDM at one-loop level and the RGEs for
the 2HDM with complex parameters at two-loop level. Using these matching
conditions and evolving the parameters down to a low scale with these RGEs, we
resum contributions at NLL order. We explored the effect of including the
complex phases of the quartic couplings in the RGEs and found that in particular
the absolute values of $\lambda_5$ to $\lambda_7$ change substantially compared
to a scenario where only the absolute values (and signs) but not the phases are
included in the RGEs.
In order to calculate the pole masses of the Higgs bosons, we exploited
different methods:
\begin{enumerate}
\item[(a)] Calculation of the pole mass with the 2HDM with parameters at the scale of the mass of the charged Higgs boson $M_{H^+}$.
\item[(b)] Calculation of the pole mass with the 2HDM with parameters at the scale of the running top quark mass.
\item[(c)] Matching to the SM at the scale $M_{H^+}$ and calculation of the pole mass of the Higgs boson at the scale of the running top quark mass within the SM.
\item[(d)] Exploiting an approximation which resums the most important logarithms of
the form $\ln(\frac{M_{H^+}}{m_t})$ but still uses the full 2HDM for the calculation of the pole masses.
\end{enumerate}
The approximation (d) agrees well with the pure 2HDM result for small values of
$M_{H^+}$ and approaches the result where the SM is used as the low-scale effective
theory for larger $M_{H^+}$. Therefore, it can be used as an interpolation between
the two regimes.
We investigated several different scenarios to discuss the phase dependence of
both the masses of the Higgs bosons and the CP-violating components of the Higgs bosons.
All the masses of the Higgs bosons show a sizeable dependence on the common
phase $\varphi_A$ of $A_t$ and $A_b$, which can be of the order of several GeV
for low $\tan \beta$, in particular for the mass of the lightest Higgs boson.
The heaviest Higgs boson shows the least sensitivity to $\varphi_A$. The
dependence of the Higgs masses on the gluino phase is much weaker and of the
order of one GeV.
Additionally, we found that the size of the CP-odd component of the two heavy
Higgs bosons shows a negligible dependence on the mass of the charged Higgs
bosons, and that the heavy Higgs bosons interchange their CP-oddness when
varying $\varphi_A$. For the light Higgs boson, the size of the CP-odd component
decreases quickly with larger values of $M_{H^+}$ as has been discussed before
in e.g.\ Ref.~\cite{Li:2015yla}. Even though we find that the CP-odd admixture
in the considered scenario for $M_{H^+} = 500$ GeV is just below the expected
experimental reach according to the discussion in Ref.~\cite{Berge:2015nua}, it
is likely that this particular scenario is excluded by the experimental results
for the Higgs signal rates as well as by measurements of electric dipole moments.
In order to find out whether there is a viable scenario with heavy SUSY partners
that leads to an observable size of a CP-odd component, further work is needed.
\subsection*{Acknowledgements}
We would like to thank Henning Bahl, Thomas Hahn, Sven Heinemeyer, Esben M{\o}lgaard,
Pietro Slavich, Florian Staub, Dominik St\"ockinger, and Georg Weiglein
for helpful discussions and Joel Oredsson for providing his RGEs for comparison.
Our work is partially funded by the Danish National Research Foundation, grant
number DNRF90. Part of this work was supported by a STSM Grant from COST Action
CA16201 PARTICLEFACE.
\section{Numerical Results}
\label{se:results}
\subsection{Choice of input parameters}\label{sse:inputs}
\paragraph{}
The list of relevant input parameters for the calculation is
\begin{equation}
\{\underbrace{y_t,\;y_b,\;g_y,\;g_3,\;g_2,\;v,}_{\text{at }
M_t}\;\underbrace{\tan\beta(M_{H^+}),\;M_{H^+},}_{\text{at }
M_{H^+}}\;\underbrace{A_t,\;A_b,\;\varphi_{M_3},\;\mu}_{\text{at } M_s}\}.
\end{equation}
As mentioned in Sect.~\ref{se:mass}, these parameters are fixed at different
scales. The soft-breaking parameters of the MSSM and the Higgs mixing parameter
$\mu$ are defined at the scale $M_s$, $\tan\beta$ at the scale $M_{H^+}$, and
the SM input parameters are fixed at the low-scale $M_t$ by current experimental
results. The relevant SM observables needed to define the SM couplings, taken
from \cite{Lee:2015uza, Buttazzo:2013uya}, are
\begin{equation}\label{SM_inputs}
\begin{gathered}
\alpha_s(M_Z) = 0.1184,\qquad M_t = 173.34\;\mathrm{GeV}, \qquad M_W = 80.384\;\mathrm{GeV},\\ M_Z = 91.1876
\;\mathrm{GeV},\qquad m_b(m_b) = 4.18\;\mathrm{GeV}, \qquad v^2_{G_F} \equiv \frac{1}{\sqrt{2}G_F} = (246.21971
\;\mathrm{GeV})^2,
\end{gathered}
\end{equation}
with $G_F$ being the Fermi constant. The values for
$y_b,\;y_t,\;g_y,\;g_2,\;g_3$ are extracted from these observables in
\cite{Lee:2015uza, Buttazzo:2013uya} and given below as running parameters at
the scale $M_t$
\begin{equation}
\begin{gathered}\label{eq:SMcouplvalues}
g_3 = 1.1666,\qquad g_2 = 0.64779, \qquad g_y = \sqrt{\frac{3}{5}} g_1 = 0.35830, \\
y_t = 0.94018,\qquad y_b = 0.0156,
\end{gathered}
\end{equation}
where in the conversion the value of the SM Higgs pole mass $M_h^{\text{SM}} =
125.15$~GeV was used according to Ref.~\cite{Buttazzo:2013uya}. One must also
determine the running vev $v_{\,\overline{\mathrm{MS}}}$ from the vev $v_{G_F}$,
which is fixed experimentally by the Fermi constant, as measured via
the muon lifetime. We derive the
$\overline{\mathrm{MS}}$ vev from the on-shell vev using
\begin{equation}
v^2_{\,\overline{\mathrm{MS}}} = v^2_{\text{OS}} + \delta{v}^2_{\text{OS-finite}},
\end{equation}
defining the on-shell vev by
\begin{equation}
v^2_{\text{OS}} \equiv \frac{4 M^2_W s^2_W}{e^2} = v^2_{G_F}(1+\Delta r)
\end{equation}
with the counterterm $\delta{v}^2_{\text{OS-finite}}$ given in
Eq.~\eqref{eq:vevOSCT} in the appendix. Here, $s^2_W = 1-\frac{M_W^2}{M_Z^2}$
denotes the sine squared of the weak mixing angle and $\Delta r$ parameterizes
the one-loop radiative corrections to the muon decay in the Fermi model
\cite{Sirlin:1980prd}, the process by which $v_{G_F}$ is defined. The different
formulas needed for the conversion are collected in
App.~\ref{se:vevconversion}. The value for $v_{\,\overline{\mathrm{MS}}}$ at the
scale of the top pole mass $M_t$ is then
\begin{align}\label{eq:SMvevvalue}
v_{\,\overline{\mathrm{MS}}}(M_t) = 247.3897 \text{ GeV}
\end{align}
employing again the Higgs-boson mass $M_h^{\text{SM}} = 125.15$~GeV. In the
numerical evaluation of the masses of the Higgs bosons and mixings, we use the
numbers given in Eqs.~\eqref{eq:SMcouplvalues} and \eqref{eq:SMvevvalue} as
input values for the SM parameters\footnote{The conversion from the SM input
values given in Eq.~\eqref{SM_inputs} to the parameters in
Eqs.~\eqref{eq:SMcouplvalues} and \eqref{eq:SMvevvalue} involves the
Higgs-boson mass so that, since we calculate the mass of the Higgs boson, a more
sophisticated approach would be an iteration where the conversion is
recalculated depending on the obtained result for the Higgs-boson mass. Since, in
a physically viable scenario, the SM-like Higgs-boson mass should be about 125
GeV, we consider the ``one time'' conversion as sufficient.}.
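As a numerical cross-check of the inputs above (our own sketch; the Fermi-constant value $G_F = 1.1663787\times 10^{-5}\;\mathrm{GeV}^{-2}$ is the PDG value and is not quoted in the text), one can reproduce $v_{G_F}$ and the on-shell $s_W^2$:

```python
import math

# PDG value of the Fermi constant (assumed input, GeV^-2)
G_F = 1.1663787e-5

# v_GF^2 = 1/(sqrt(2) G_F), cf. Eq. (SM_inputs)
v_GF = (math.sqrt(2.0) * G_F) ** -0.5

# On-shell sine squared of the weak mixing angle, s_W^2 = 1 - M_W^2/M_Z^2
M_W, M_Z = 80.384, 91.1876
s2_W = 1.0 - M_W**2 / M_Z**2
```

This reproduces $v_{G_F} \approx 246.22$ GeV, in agreement with the value quoted above.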
For the remaining input parameters $M_{H^+}$, $\tan \beta$, $A_t$, $A_b$, $M_s$,
$\mu$, no measured values exist, but the experimental searches and measurements
constrain the viable parameter space. The exclusion bounds from searches for
additional Higgs bosons \cite{Sirunyan:2018zut, Aaboud:2017sjh} constrain in
particular the region of high $\tan \beta$ and light ``heavy'' Higgs bosons.
Taking these results together with further studies of different parameter
scenarios \cite{Bahl:2018zmf, Bahl:2019ago}, it is clear that scenarios with
$\tan\beta>10$ and $M_{H^+},M_A <500$ GeV are strongly disfavoured by LHC data.
Flavour observables support these constraints, as discussed in
Ref.~\cite{Arbey:2017gmh} for different types of the 2HDM. In our numerical
analysis, we therefore favour values for the mass of the charged Higgs boson of
$M_{H^+} \geq 500$ GeV and $\tan\beta = 5$. To show specific features of the
results of our calculation, we will however partly take into account scenarios
that do not fulfill these constraints.
Regarding the non-vanishing phases, it should be noted that in the MSSM, some of
the parameter phases can be eliminated by symmetry transformations. Hence, only
certain combinations of phases are physical, i.e.\ can change the value of a
physical observable. Important constraints on these phase combinations come from
electric-dipole-moment (EDM) measurements; see
e.g.~Refs.~\cite{Berger:2015eba,Abe:2018qlw,Cesarotti:2018huy} for recent
studies of the constraints on the MSSM phases due to EDMs. Since, in our
calculation the phases of the $U(1)$ and $SU(2)$ gaugino-mass parameters $M_1$
and $M_2$ do not enter, we can assume that they are chosen such that the effect
on the EDMs is minimized. Furthermore, larger masses of the SUSY particles tend
to relax the constraints coming from the EDMs.
It should be noted that, in this paper, we refrain from explicit checks whether
a certain parameter point is viable, since we focus on specific features of the
results. In particular, using our results at the low scale to study the
constraints from the EDMs for the MSSM phases at the high scale will be
interesting but is left for future work.
Our default scenario is
\begin{align}\nonumber
A_t &= A_b \equiv A; \quad |A| = |\mu| = 3M_s; \quad M_{H^+} = 500 \text{ GeV}; \quad \tan \beta = 5; \\
\varphi_{A_t} &= \varphi_{A_b} = 2.1 \approx 0.67\pi \approx 120^\circ; \quad \varphi_\mu = 0; \quad \varphi_{M_3} = 0,
\end{align}
where $M_s$ is varied. The choice $|A| = |\mu| = 3M_s$ leads to large threshold
corrections to the 2HDM couplings and maximizes the amount of CP-violation
introduced into the theory. This way we can give an estimate of the largest
effects that can occur. Similarly, we observe that $\varphi_{A_t} =
\varphi_{A_b} = 2.1 \approx 0.67 \pi = 120^\circ$ maximizes roughly the size of the CP-odd
component of the lightest neutral Higgs boson, see
Sect.~\ref{sse:CPodd_component}. We will however deviate from this default
scenario in order to study the different characteristics and state that
explicitly.
\subsection{Influence of the Running of Complex Parameters}
\paragraph{}
In this work, we exploit the two-loop RGEs for the 2HDM with each Higgs doublet
coupling to up- as well as down-type fermions including all phases, see
Sect.~\ref{se:mass} for the details of our calculation of the RGEs. The first
numerical results exemplify the effect of including this phase dependence versus
the ``real RGE'' approximation, where the phase dependence is taken into account
only via the threshold effects and the phases are assumed to be unaffected by
the running.
In Fig.~\ref{complex_real_rges}, we show the effect of including these phases on
the running of the 2HDM quartic couplings. The values of the couplings are
plotted against the scale $M_s$, demonstrating how this dependence changes when
the running of both the real and the imaginary part of the couplings is taken
into account. The couplings shown are those that either enter the final
evolution at the scale $M_s$ or result from the final evolution down to the
scale of the charged Higgs mass in step~\ref{masscalculation}. The red lines
represent the values obtained with RGEs taking phases into account while the
blue ones represent values obtained using only real parameters, determining the
sign via the argument of the corresponding parameter at the scale $M_s$. The
dashed lines are the parameters at the scale $M_s$, while the solid lines are
those at $M_{H^+}$. One can clearly see a dependence on whether the running of
the phases is taken into account or not. It changes the resulting absolute value
of the couplings $\lambda_5$, $\lambda_6$ and $\lambda_7$ as well as the phases
themselves. The dependence of the running on the phases is relatively small;
only the phase of $\lambda_7$ shows a change of up to a couple of degrees.
Hence, the overall dependence of the phases determined using the RGEs for the
complex case on the value of $M_s$ is also relatively small. Comparing the
phases at $M_s$ (dashed lines) with the ones obtained using the real RGEs
(blue), one can double-check that indeed the phases do not change when
exploiting the real RGEs. The absolute values of $\lambda_5$ and $\lambda_6$
change more when the complex RGEs are applied compared to a result with only
real RGEs. The opposite is true for the absolute value for $\lambda_7$, which
changes less if the complex RGEs are applied.
Since the phase values at the low scale enter the prediction of the
EDMs, they can be relevant for checking the exclusion of high-scale CP-violating
MSSM scenarios due to EDM measurements.
\begin{figure}
\centering
\begin{subfigure}[]{0.3\textwidth}
\includegraphics[width=\textwidth]{plots/lambdas/lam1.pdf}
\caption{$\lambda_1$}
\label{complex_real_lam1}
\end{subfigure}
%
\begin{subfigure}[]{0.3\textwidth}
\includegraphics[width=\textwidth]{plots/lambdas/lam2.pdf}
\caption{$\lambda_2$}
\label{complex_real_lam2}
\end{subfigure}
\begin{subfigure}[]{0.3\textwidth}
\includegraphics[width=\textwidth]{plots/lambdas/lam3.pdf}
\caption{$\lambda_3$}
\label{complex_real_lam3}
\end{subfigure}
\begin{subfigure}[]{0.3\textwidth}
\includegraphics[width=\textwidth]{plots/lambdas/lam4.pdf}
\caption{$\lambda_4$}
\label{complex_real_lam4}
\end{subfigure}
%
\begin{subfigure}[]{0.3\textwidth}
\includegraphics[width=\textwidth]{plots/lambdas/lam5Abs.pdf}
\caption{$|\lambda_5|$}
\label{complex_real_lam5real}
\end{subfigure}
\begin{subfigure}[]{0.3\textwidth}
\includegraphics[width=\textwidth]{plots/lambdas/Arglam5.pdf}
\caption{$\arg(\lambda_5)$}
\label{complex_real_lam5im}
\end{subfigure}
\begin{subfigure}[]{0.3\textwidth}
\includegraphics[width=\textwidth]{plots/lambdas/lam6Abs.pdf}
\caption{$|\lambda_6|$}
\label{complex_real_lam6real}
\end{subfigure}
\begin{subfigure}[]{0.3\textwidth}
\includegraphics[width=\textwidth]{plots/lambdas/Arglam6.pdf}
\caption{$\arg(\lambda_6)$}
\label{complex_real_lam6im}
\end{subfigure}
\begin{subfigure}[]{0.3\textwidth}
\includegraphics[width=\textwidth]{plots/lambdas/lam7Abs.pdf}
\caption{$|\lambda_7|$}
\label{complex_real_lam7real}
\end{subfigure}
\begin{subfigure}[]{0.3\textwidth}
\includegraphics[width=\textwidth]{plots/lambdas/Arglam7.pdf}
\caption{$\arg(\lambda_7)$}
\label{complex_real_lam7im}
\end{subfigure}
\caption{The quartic couplings'
dependence on $M_s$ for the default scenario: $\tan\beta=5$,
$\varphi_{A}=2.1 \approx 120^\circ$, $\varphi_{\mu} = \varphi_{M_3}=0$,
$|\mu|=|A|=3M_s$, $M_{H^+} = 500$ GeV. The red curves result from
employing the complex RGEs, the blue ones from the real RGEs. Dashed lines
are the couplings at the high scale $M_s$, and solid lines are couplings at
the scale $M_{H^+}.$
}
\label{complex_real_rges}
\end{figure}
\subsection{Comparison of Methods for Computing Masses}
\label{sse:comparison}
\paragraph{}
In Sect.~\ref{mass_calculation}, the calculation procedure was explained, and
in step \ref{masscalculation} of this procedure we discussed several possibilities for
calculating the mass of the Higgs boson $m_h$ at the low-energy scale: a)
exploiting the 2HDM at the scale $M_{H^+}$, b) employing
the 2HDM at the scale $m_{t}$, c) matching the 2HDM to the SM and using the SM
at the scale $m_t$, or d) approximating the effects of the matching to the SM
and running down to scale $m_t$. Using the first two approaches, one keeps
information about all the Higgs masses and their mixings with one another. This
is very important, as we are also interested in the size of the CP-odd component
of the lightest Higgs boson. However, if the scale $M_{H^+} \gg v \sim m_t$, we
again encounter large logarithms. The masses of the heavy Higgs bosons will not
pose a problem: the corresponding logarithms are not enhanced by large prefactors
in the same way as those appearing in the calculation of the light Higgs mass,
and the relative shifts due to the large tree-level masses are smaller, so we can
trust the perturbative results for these masses without an additional resummation
of logarithms. For the lightest Higgs boson, on the other hand, these large
logarithms are important.
\begin{figure}
\centering
\begin{subfigure}[]{.4\textwidth}
\includegraphics[width=\textwidth]{plots/Lightest_higgs/Mass_method_compare/tanb5.pdf}
\caption{$\tan\beta=5$}
\end{subfigure}
\begin{subfigure}[]{.4\textwidth}
\includegraphics[width=\textwidth]{plots/Lightest_higgs/Mass_method_compare/tanb10.pdf}
\caption{$\tan\beta=10$}
\end{subfigure}\vspace{5mm}
\begin{subfigure}[]{.7\textwidth}
\includegraphics[width=\textwidth]{plots/Lightest_higgs/Mass_method_compare/tanb20.pdf}
\caption{$\tan\beta=20$}
\end{subfigure}
\caption{The mass of the light Higgs boson as a function of the
scale $M_{H^+}$ for the scenario $|A|=|\mu|=3M_s$, $M_s = 3$ TeV, $\varphi_A
= \varphi_\mu = \varphi_{M_3} = 0$, $\tan \beta = 5, 10, 20$ employing
different approximations. The curves labeled ``resummed'' present the result
where the logarithms of the form $\ln(M_{H^+}/M_t)$ have been resummed by
matching to the SM, but the Higgs mass is still calculated using the mass matrix
of the 2HDM. The curves labeled ``SM'' are calculated by decoupling
completely from the 2HDM and calculating a purely SM mass. Those labeled
``2HDM'' are calculated treating the 2HDM as the low-energy
theory. For the ``tree-level'' curves, one-loop corrections have not been
included in the calculation of the pole mass.}
\label{method_compare}
\end{figure}
Figure~\ref{method_compare} shows the result of calculating the lightest
Higgs-boson mass using the different approaches for real parameters. The
calculation of the mass of the lightest Higgs boson using the 2HDM at the scale
$M_{H^+}$ and $m_t$ is denoted with ``$m_h(M_{H^+})$ 2HDM (a)'' and ``$m_h(m_t)$
2HDM (b)'', respectively. The result where the SM is the low-energy theory is
called ``$m_h$ SM pole (c)''. A variant of this result is ``$m_h$ SM pole tree
level'' where the self-energy contribution $\hat{\Sigma}^\text{SM}$ as well as
the one-loop tadpole contribution $\hat{T}^{\text{SM}}$ in Eq.~\eqref{eq:polemass}
are neglected. Finally, the results labeled ``$m_h$ resummed (d)'' correspond to
the approximation where parts of the logarithms $\ln\left(M_{H^+}/m_t\right)$
are resummed. The result ``$m_h$ resummed tree level'' neglects the self-energy
and the tadpole contributions in Eq.~\eqref{mass_mat}.
For large $M_{H^+}$, the ``$m_h$ SM pole (c)'' result is expected to be the
most accurate one, while the result obtained by using the
2HDM as the EFT should be the best result when $M_{H^+}\sim v \sim m_t$.
The two 2HDM results agree for low $M_{H^+}$ but start to deviate quickly
with rising $M_{H^+}$. In the self-energy contribution in Eq.~\eqref{eq:det},
logarithms $\ln\left(M_{H^+}/m_t\right)$ arise if the self energy is
evaluated at the scale $M_{H^+}$. Enhanced by large top Yukawa couplings, this
result quickly deviates from the other 2HDM result, in which these logarithms vanish
in the self energy due to the scale choice and are instead taken into account via the
running of the parameters. Therefore, it is preferable to calculate the Higgs
mass $m_h$ at the scale $m_t$. If $M_{H^+}$ is large, however, $M_{H^+} \gg m_t$,
the heavy Higgs bosons have to be decoupled from the running of the parameters
and the SM is the correct low-energy theory. For $M_{H^+} = 1000$ GeV, the
deviation of the ``$m_h$ SM pole (c)'' result from the ``$m_h(m_t)$ 2HDM (b)''
result is roughly 500 MeV for all values of $\tan \beta$.
That means that, for $M_{H^+} < 1000$ GeV, the 2HDM can still be used as a
reasonable low-energy theory, but with increasing theoretical uncertainty for
increasing values of $M_{H^+}$. The difference between the ``$m_h$ SM pole (c)''
and ``$m_h(m_t)$ 2HDM (b)'' results first decreases and then starts growing. For
$\tan \beta = 10$ and $\tan \beta = 20$, the increase sets in already at lower
values of $M_{H^+}$.
For
small $M_{H^+}$, the mixing of the Higgs bosons becomes relevant, which
leads to a decrease of the Higgs-boson mass in the 2HDM results. The ``$m_h$ SM
pole (c)'' result does not take the mixing of the Higgs bosons into account due
to the decoupling of the heavy Higgs bosons. The ``$m_h$ resummed (d)'' result
follows nicely the 2HDM result for low values and the SM result for large
values of $M_{H^+}$. Hence, it interpolates well between the two options.
Therefore, we have chosen this as the default option.
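The effect of resumming the logarithms $\ln\left(M_{H^+}/m_t\right)$ can be illustrated with a toy renormalization-group sketch (the coefficient $b$ and starting value below are arbitrary illustrative numbers, not 2HDM quantities): solving a one-loop RGE resums powers of the logarithm that a fixed-order evaluation keeps only to first order.

```python
import math

# Toy sketch only: 'b' and 'lam0' are arbitrary, not 2HDM parameters.
# Solving d(lam)/d(ln mu) = b*lam^2 exactly resums (b*lam0*ln(M/mt))^n
# to all orders, while the fixed-order result keeps only the first log.
def run_coupling(lam0, b, t):
    # exact solution of the toy RGE, with t = ln(M/mt)
    return lam0 / (1.0 - b * lam0 * t)

def fixed_order(lam0, b, t):
    # one-loop fixed-order approximation
    return lam0 * (1.0 + b * lam0 * t)

b, lam0 = 0.1, 0.5
for M_over_mt in (2.0, 10.0, 100.0):
    t = math.log(M_over_mt)
    print(M_over_mt, run_coupling(lam0, b, t), fixed_order(lam0, b, t))
```

The two expressions agree for $M \sim m_t$ and drift apart as the logarithm grows, mirroring the behaviour of the results (a)--(d) discussed above.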
In addition, the effect of the one-loop self energies in the calculation of the
pole masses can be read off when comparing the ``$m_h$ resummed (d)'' and
``$m_h$ resummed tree level'' result (or similarly the corresponding SM pole mass
results). The top and bottom quark contributions that we take into account lead
to a rise of the result by approximately 0.6 GeV.
\subsection{The Mass of the Light Higgs Boson}
\paragraph{}
In this section, we discuss the dependence of the mass of the lightest Higgs boson on the
MSSM input parameters $\mu$, $A_t=A_b=A$, $\tan\beta,$ and $\varphi_{M_3}$.
\begin{figure}[t]
\centering
\begin{subfigure}[]{.4\textwidth}
\includegraphics[width=\textwidth]{plots/Lightest_higgs/mh_vs_phiA_A_Mu_2MS.pdf}
\caption{$|A| = |\mu| = 2M_s$}
\end{subfigure}
\begin{subfigure}[]{.4\textwidth}
\includegraphics[width=\textwidth]{plots/Lightest_higgs/mh_vs_phiA_A_Mu_3MS.pdf}
\caption{$|A| = |\mu| = 3M_s$}
\end{subfigure}\vspace*{0.3cm}
\begin{subfigure}[]{.4\textwidth}
\includegraphics[width=\textwidth]{plots/Lightest_higgs/mh_vs_A_nophases.pdf}
\caption{$\varphi_A = 0$, $|\mu|=M_s$}
\end{subfigure}
\begin{subfigure}[]{.4\textwidth}
\includegraphics[width=\textwidth]{plots/Lightest_higgs/mh_vs_A_phiA_2pt1.pdf}
\caption{$\varphi_A = 2.1 \approx 120^\circ$, $|\mu|=M_s$}
\end{subfigure}
\caption{The upper row presents the mass of the lightest Higgs boson depending on the phase
$\varphi_A$ while the lower row shows the dependence on the absolute value
of $A$ for a scenario with $M_{H^+}=500$ GeV, $M_s = 5$ TeV,
$\varphi_{M_3}=0$, and $\tan \beta = 5, 10, 15, 20$.}
\label{fig:A_dependence}
\end{figure}
The first row of Fig.~\ref{fig:A_dependence} shows the dependence on the common
phase $\varphi_A$. Firstly, the plots demonstrate that the sensitivity of
the mass of the lightest Higgs boson to the phase is highly dependent on
$\tan\beta$, as the mass fluctuates more with $\varphi_A$ for low values of
$\tan\beta$. For $\tan \beta = 5$, varying the phase leads to a change
of the Higgs-boson mass of almost 20 GeV in the quite extreme case of
$|A|/M_s =3$, while for $\tan \beta = 20$ the change is only about 5 GeV for otherwise
the same parameters. Secondly, it can be seen that the quantitative features are
strongly dependent on the ratio
$r_{\mathrm{inputs}}\equiv|A_t|/M_s=|A_b|/M_s=|\mu|/M_s$. The qualitative dependence
on the phase is in fact opposite for the two cases $r_{\mathrm{inputs}} =2$ and
$r_{\mathrm{inputs}} =3$: at $\varphi_A = 180^\circ$ the mass has a minimum for
$r_{\mathrm{inputs}} =3$ but is near its maximum for
$r_{\mathrm{inputs}} = 2$. The different dependence on the phase $\varphi_A$
with respect to the ratio $|A|/M_s$ can also be read off the second row of
Fig.~\ref{fig:A_dependence} where the lightest Higgs mass is plotted against the
magnitude of the common trilinear $|A|$ with $\mu$ fixed to $\mu = M_s$. Here,
it is seen that the mass peaks for $|A| \approx 12.5 \text{ TeV } = 2.5 M_s $ for $\varphi_A
= 0$ (left plot), and increasing $\tan\beta$ shifts the peak slightly to lower
values of $|A|/M_s$. For $\varphi_A = 2.1 \approx 120^\circ$, the peak is shifted to a lower
value $|A|/M_s\approx 2.2$ while increasing $\tan \beta$ leads to a shift to
slightly higher values in this case. Comparing the left and right plots of the lower row
of Fig.~\ref{fig:A_dependence}, one can read off for which $|A|/M_s$ the
values of the Higgs-boson mass are smaller for $\varphi_A = 0$ than for
$\varphi_A = 180^\circ$ and vice versa, which results in the different maxima in the
upper row of Fig.~\ref{fig:A_dependence}.
The effect of changing $|\mu|$ can be seen by comparing the upper row of
Fig.~\ref{fig:A_dependence} with the lower row: $|\mu|$ changes from
$|\mu| = 3 M_s$ in the top right figure to $|\mu|=M_s$ in the bottom row. Comparing the
values of $m_h$ for $|A| = 10, 15$ TeV and $\varphi_A = 0^\circ$ in the shown
example, this change leads to a rise of the Higgs-boson mass by roughly 1 GeV. For $\varphi_A = 2.1 \approx
120^\circ$, the same change leads to a shift of up to 6 GeV in the example
scenarios, which is reached for $\tan \beta = 5$.
Figure \ref{Phi_M3} shows the dependence on $\varphi_{M_3}$. The overall
dependence on $\varphi_{M_3}$ is not very large and varying $\varphi_{M_3}$
leads to changes of the order of 1 GeV. The largest shift, of 1.3 GeV in the
considered scenarios, is found for $\tan \beta = 5$ and similarly for $\tan
\beta = 20$. For $\tan\beta = 10, 15$, the changes are slightly smaller.
\begin{figure}
\centering
\includegraphics[width=.5\textwidth]{plots/Lightest_higgs/mh_vs_phiM3_A_Mu_3MS.pdf}
\caption{Dependence of the mass of the lightest Higgs boson on the phase
$\varphi_{M_3}$ for a scenario with $M_{H^+}=500$ GeV, $\varphi_{A} = 0$, $M_s = 5$ TeV, and $|A|=|\mu|=3M_s$.}
\label{Phi_M3}
\end{figure}
\subsection{A CP-odd Admixture to the Light Higgs boson}\label{sse:CPodd_component}
\paragraph{}
\begin{figure}
\centering
\begin{subfigure}[t]{.4\textwidth}
\includegraphics[width=\textwidth]{plots/CPvio/percent_CPodd_h1.pdf}
\caption{The solid curves are calculated using the tree-level mixing matrix,
and the dashed curves are calculated using the one-loop mixing matrix at
zero-external momentum ($p^2 = 0$), both calculated at $\mu=M_{H^+}$}
\end{subfigure}
\hspace{.5cm}
\begin{subfigure}[t]{.4\textwidth}
\includegraphics[width=\textwidth]{plots/CPvio/percent_CPodd_h1_tree_compare.pdf}
\caption{The solid curves are calculated using the tree-level mass matrix
without the resummed log contribution, and the dashed curves include the log
resummation, both at $\mu = M_{H^+}$}
\end{subfigure}
\caption{The other parameters of the scenario are $\tan\beta=5$, $M_s=30$ TeV, $\varphi_\mu = \varphi_{M_3} = 0$, and $|A| = |\mu| = 3M_s.$ }
\label{CPodd_mh_phiA}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.7\textwidth]{plots/Lightest_higgs/CPodd_tanb_MHplus.pdf}
\caption{The dependence of the size of the CP-odd component on $M_{H^+}$ and $\tan \beta$ is shown for a scenario with $M_s=30$ TeV, $\varphi_{M_3} = 0,$ $\varphi_{A} = 2.1 \approx 120^\circ$, and $|A| = |\mu| = 3M_s$. The
color coding gives the percentage of the lightest Higgs boson that is CP-odd.}
\label{CPodd_tanb_MHplus}
\end{figure}
The detection of a CP-odd component in the SM-like Higgs boson would be a sign
of new physics, and it is therefore interesting to explore the size of such a
component generated by the CP-violating phases in the complex MSSM. While we
leave a detailed analysis of the current experimental sensitivity to this CP-odd
component for future work, in this section we aim to give a qualitative picture
of the size of this component and its dependence on the relevant parameters.
We calculate the CP-odd component using the tree-level mixing matrix
$\mathcal{M}^2$ of Eq.~\eqref{eq:2HDMmassmatrixneutral}, inserting the parameter
values at $M_{H^+}$ obtained after the final iteration. First, we separate out
the Goldstone field via the rotation $D\mathcal{M}^2D^{\mathrm{T}}$, where
\begin{equation}
D= \begin{pmatrix} I_2 & 0 \\ 0 & \begin{matrix} -\sin\beta &
\cos\beta \\ \cos\beta & \sin\beta \end{matrix}\end{pmatrix}
\end{equation}
with $I_2$ being the two-dimensional identity matrix.
This rotates the mass matrix into the
basis of the fields $\phi_1, \phi_2, a, G$, where $\phi_1,\phi_2$ are pure CP-even
fields, $a\equiv-a_1\sin\beta
+ a_2\cos\beta$ is pure CP-odd, and $G$ is the Goldstone boson. In this
basis, the mass matrix is block-diagonal, with a $3\times3$ block for the fields
$\phi_1,\phi_2,a$ and a $1\times1$ block for the Goldstone boson, which does not mix with any of
the other fields.
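The Goldstone separation can be checked numerically on a toy CP-odd mass matrix (a sketch with an arbitrary $\beta$ and mass scale; the actual $\mathcal{M}^2$ comes from Eq.~\eqref{eq:2HDMmassmatrixneutral}): the lower $2\times2$ block of $D$ rotates $(a_1,a_2)$ into $(a,G)$ and isolates the massless Goldstone direction.

```python
import numpy as np

# Sketch with illustrative numbers: beta and the 500 GeV scale are
# arbitrary, and M2 below is a toy CP-odd mass matrix with one massless
# Goldstone direction, not the full 2HDM mass matrix.
beta = 0.38
s, c = np.sin(beta), np.cos(beta)

# lower 2x2 block of D from the text, rotating (a1, a2) -> (a, G)
D2 = np.array([[-s, c],
               [ c, s]])
assert np.allclose(D2 @ D2.T, np.eye(2))    # D is orthogonal

mA2 = 500.0**2
n = np.array([-s, c])                        # massive direction a
M2 = mA2 * np.outer(n, n)                    # toy CP-odd mass matrix

rotated = D2 @ M2 @ D2.T
print(np.round(rotated, 6))                  # diag(mA2, 0): G decouples
assert np.allclose(rotated, np.diag([mA2, 0.0]))
```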
The rotation matrix $P$ that diagonalizes the $3\times3$ matrix relates the
physical fields, $h$, $H_2$, $H_3$ to the interaction fields after electroweak symmetry breaking, $\phi_1$, $\phi_2$, $a$, by
\begin{equation} \label{eq:mixing}
\begin{pmatrix} h \\ H_2 \\ H_3 \end{pmatrix} = P\begin{pmatrix} \phi_1 \\ \phi_2 \\ a \end{pmatrix}.
\end{equation}
The third column of $P$ gives the size of the CP-odd component of each of the
physical fields. We choose to plot the CP-odd percentage rather than the
mixing-matrix component itself, i.e.\ $P_{i3}^2\cdot 100$ for $i = 1,\ldots,3$.
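As a sketch of this bookkeeping (with a hypothetical mixing matrix built from arbitrary small angles, not the actual $P$ obtained from the $3\times3$ block), the CP-odd percentages are simply the squared entries of the third column of an orthogonal matrix:

```python
import numpy as np

# Hypothetical P: two small rotations mimicking a mostly CP-even h with a
# small CP-odd admixture; the actual P diagonalizes the (phi1, phi2, a) block.
def rotation(i, j, theta, n=3):
    R = np.eye(n)
    R[i, i] = R[j, j] = np.cos(theta)
    R[i, j] = np.sin(theta)
    R[j, i] = -np.sin(theta)
    return R

P = rotation(0, 2, 0.07) @ rotation(0, 1, 0.3)

cp_odd_percent = P[:, 2]**2 * 100   # P_{i3}^2 * 100 for h, H2, H3
print(np.round(cp_odd_percent, 3))

assert np.allclose(P @ P.T, np.eye(3))           # P is orthogonal
assert abs(cp_odd_percent.sum() - 100.0) < 1e-9  # percentages sum to 100
```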
In Fig.~\ref{CPodd_mh_phiA}, we show how the CP-odd component depends on
$\varphi_A$ and $M_{H^+}$. The solid lines in Fig.~\ref{CPodd_mh_phiA} (a) and
(b) are the same and depict the result of the procedure just described. It is
seen that this component is maximized for values of the phases from $110^\circ$ to $120^\circ$,
justifying our claim from Sect.~\ref{sse:inputs}. In addition, the results
confirm that the CP-odd component very quickly drops to vanishingly small values
as $M_{H^+}$ increases. This agrees with previous findings, see e.g.\ Ref.~\cite{Li:2015yla}. Since data
disfavours a charged Higgs boson with small mass, it also disfavours a sizeable
CP-odd component of the lightest Higgs boson in the MSSM.
Figure~\ref{CPodd_mh_phiA} (a) also shows the difference between the one-loop
corrected mixing matrix evaluated at zero external momentum ($p^2=0$) using the
mass matrix given in Eq.~\eqref{mass_mat} and the tree-level case, both with parameters at the scale $M_{H^+}$, which is
evidently numerically small. The largest effect can be seen for $M_{H^+} =
200$~GeV in the peak region where the inclusion of one-loop corrections leads
to an increase of the squared CP-odd component of about 0.2 percentage points.
If not otherwise stated, we use the tree-level mixing in the following.
In Fig.~\ref{CPodd_mh_phiA} (b), we compare the approach described above (solid
line) with the approach where instead of $\mathcal M^2$ the mass matrix
$(\mathcal M^2)^{\text{resum}}$, see Eq.~\eqref{eq:Msquaredresum}, is used to
calculate the CP-odd admixture (dashed line). The maximal difference is again
about 0.2 percentage points and decreases with larger values of $M_{H^+}$. Not
shown are one-loop effects in the resummed approach as well as results when
parameters are evaluated at the scale $m_t$. The corresponding results would lie
approximately between the dashed lines in Fig.~\ref{CPodd_mh_phiA} (a) and (b).
Figure \ref{CPodd_tanb_MHplus} shows the percentage of the lightest Higgs boson
that is CP-odd (shaded contours) and the mass of lightest Higgs boson (dashed
line contours) in the $(M_{H^+},\tan\beta)$ plane. As before, the CP-odd
component is calculated according to the approach described above using
$\mathcal M^2$ and parameters evaluated at the scale $M_{H^+}$ without including
loop effects. Again, the CP-odd component drops rapidly with increasing
$M_{H^+}$, and one sees that it is only weakly dependent on $\tan\beta$. In
light of the discussion in Sect.~\ref{sse:inputs} and the mass of the observed
Higgs boson being $\sim 125$ GeV, the results of this section suggest that a
large CP-odd component of the lightest Higgs boson is strongly disfavoured by
experimental data. Already for $M_{H^+} = 260$~GeV in this scenario the CP-odd
component drops below 0.5\%.
According to
Ref.~\cite{Berge:2015nua}, an angle $\varphi_\tau$ of about $4^\circ$ can be
reached with the high-luminosity run of the LHC. The angle $\varphi_\tau$ is
defined via the effective Lagrangian given in four-component Dirac notation
\begin{align}
\mathcal{L}^{\tau-\text{Yukawa}}_{\text{eff}} = - \frac{m_\tau}{v}
\kappa_\tau\left(\cos\varphi_\tau \bar{\tau} \tau +\sin\varphi_\tau \bar{\tau}
\text{i} \gamma_5 \tau\right) h
\end{align}
where $\kappa_\tau$ denotes the
change of the absolute strength of Yukawa coupling with respect to the SM while
$\varphi_\tau$ governs the amount of CP-violation. To connect to the notation of
\eqref{yukawas}, we write this in two-component notation as
\begin{equation}
\label{eq:effective_tau_yukawa}
\mathcal{L}^{\tau-\text{Yukawa}}_{\text{eff}} =
-\frac{m_\tau}{v}\kappa_\tau\left(\cos\varphi_\tau \tau\tau_c -
\text{i}\sin\varphi_\tau \tau\tau_c \right)h + \mathrm{h.c.}
\end{equation}
In the case of the 2HDM discussed here, assuming that the coupling of the second
Higgs doublet to the tau leptons is negligible, $\tan \varphi_\tau =
-s_\beta P_{13}/P_{11}\approx - P_{13}/P_{11}$ for scenarios with
$t_\beta\gtrapprox 5$. A CP-odd component of $P_{13}^2\cdot 100 = 0.5\%$ as discussed
above will lead to $\varphi_\tau\gtrapprox4^\circ$, since $P_{11}
\leq 1$. Going back to the scenarios discussed in Fig.~\ref{CPodd_mh_phiA}, the
maximal value of $\varphi_\tau$ is larger than $20^\circ$ for $M_{H^+} =
200$~GeV and larger than $3^\circ$ but below $4^\circ$ for $M_{H^+} = 500$~GeV.
Hence, these values suggest that, if the MSSM is the final answer, it might still be
possible, albeit certainly difficult, to determine a CP-odd component experimentally with further
improvements of the measurements. However, we have neither checked whether the Higgs signal rates are in the
experimentally allowed regime---this is particularly relevant if $\varphi_\tau$
is enhanced by a smaller $P_{11}$, as is actually the case for the scenario in
Fig.~\ref{CPodd_mh_phiA} for $M_{H^+} = 500$~GeV---nor taken into account any
constraints from the electric dipole moments. Thus, to find out whether there is
still a viable region of parameter space with a measurable CP-odd admixture of
the SM-like Higgs boson in the MSSM with heavy superpartners, further
investigations are required.
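The quoted numbers can be checked directly from $\tan\varphi_\tau \approx -P_{13}/P_{11}$; the following is a quick numeric sketch of the estimate above.

```python
import math

# Check of the estimate above: a CP-odd fraction P13^2*100 = 0.5% with
# P11 <= 1 gives |phi_tau| of at least about 4 degrees.
P13 = math.sqrt(0.005)            # 0.5% CP-odd component
for P11 in (1.0, 0.9, 0.8):       # smaller P11 only enhances the angle
    phi_tau = math.degrees(math.atan(P13 / P11))
    print(P11, round(phi_tau, 2))

assert math.degrees(math.atan(P13 / 1.0)) > 4.0
```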
\subsection{Masses and Mixings of the Heavy Higgs Bosons}
\paragraph{}
\begin{figure}
\centering
\includegraphics[width=.5\textwidth]{plots/HeavyHiggs/vs_argA.pdf}
\caption{The dependence of the mass of the second lightest and the heaviest Higgs boson on the phase $\varphi_A$ is shown for the parameter scenario $M_s= 30$ TeV, $M_{H^+}=500$ GeV, $|A|=|\mu| = 3M_s$, $\varphi_{M_3}
= \varphi_\mu = 0$. The solid lines correspond to the mass of the second lightest Higgs boson, $m_{H_2}$, and the dashed lines
to the mass of the heaviest Higgs boson, $m_{H_3}$.}
\label{heavyhiggs_vs_argA}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[]{.4\textwidth}
\includegraphics[width=\textwidth]{plots/CPvio/percent_CPodd_h2.pdf}
\end{subfigure}
\begin{subfigure}[]{.4\textwidth}
\includegraphics[width=\textwidth]{plots/CPvio/percent_CPodd_h3.pdf}
\end{subfigure}
\caption{The dependence of the CP-odd component in percent for the next-to-lightest Higgs boson, $P_{23}^2\cdot 100$, in the left plot and of the heaviest Higgs boson, $P_{33}^2\cdot100$, in the right plot on the phase $\varphi_A$ for the scenario $\tan\beta=5$, $M_s=30$ TeV, $\varphi_\mu = \varphi_{M_3} = 0$, and
$|A| = |\mu| = 3M_s.$ The solid curves are calculated using the tree-level
mixing matrix, and the dashed curves are calculated using the one-loop
mixing matrix at zero-external momentum ($p^2 = 0$).}
\label{mixing_phiA}
\end{figure}
In this section we explore the masses and mixings of the heavy Higgs bosons
$H_2$ and $H_3$. The masses are calculated according to
Sect.~\ref{mass_calculation} where, in step~\ref{masscalculation} of the calculation procedure,
option (d) is exploited where large logarithms $\ln(M_{H^+}/m_t)$ are resummed. In Fig.~\ref{heavyhiggs_vs_argA}, the masses of the two
heavy neutral Higgs bosons $m_{H_2}$ and $m_{H_3}$ are shown in dependence of
$\varphi_A$ with solid and dashed lines, respectively. Similarly to the mass
of the light Higgs boson, the phase dependence is larger for low $\tan \beta$,
with a change of the mass of the next-to-lightest neutral Higgs boson of
roughly 3.5 GeV and of the heaviest neutral Higgs boson of 1.5 GeV for $\tan \beta
= 5$. This phase dependence is gradually washed out for increasing $\tan\beta$.
In Fig.~\ref{mixing_phiA}, the CP-odd percentage is shown with
$P_{23}^2$ and $P_{33}^2$ defined in Eq.~\eqref{eq:mixing} and evaluated at
tree level with the 2HDM parameters at the scale $M_{H^+}$. Figure~\ref{mixing_phiA}
demonstrates that the mixing of the two heavy neutral Higgs bosons is very
nearly independent of $M_{H^+}$. Only for $M_{H^+}= 200$ GeV can slight deviations
from the other results be observed. For $M_{H^+}= 200$ GeV, the CP-odd component
is also shared with the lightest neutral Higgs boson, while for larger $M_{H^+}$
the CP-odd component is mostly shared between the next-to-lightest and the heaviest
Higgs boson. One can also read off of Fig.~\ref{mixing_phiA} that, for real
values of $A$, i.e.\ for $\varphi_A = 0, 180, 360^\circ$, the heavy Higgs boson
$H_3$ is CP-odd and $H_2$ is CP-even. However, for $\varphi_A
\approx 90, 270^\circ$, the next-to-lightest neutral Higgs boson is mostly
CP-odd. That means both Higgs bosons oscillate between $\sim 100\%$ CP-odd
and CP-even with the phase $\varphi_A$.
In addition, it is shown in Fig.~\ref{mixing_phiA} that the loop effects are
negligible for the mixing of the heavy Higgs bosons. Not shown are differences
between the different methods of evaluating the CP-mixing, i.e.\ whether the
parameters are evaluated at $M_{H^+}$ or at $m_t$, and whether $\mathcal M^2$ or
$(\mathcal M^2)^{\text{resum}}$ is used. The differences are on the same order
as the difference shown between the tree-level and the one-loop results, in
other words negligible.
\section{Introduction}
It is a well-known fact that hyperbolic dynamics has been one of the most successful theories in dynamical systems. It was soon realized, however, that there is a natural way to relax hyperbolicity, called partial hyperbolicity, which allows the tangent bundle to split into invariant subbundles $TM=E^s\oplus E^c\oplus E^u,$ such that the behavior of vectors in $E^s, E^u$ is similar to the hyperbolic case, but vectors in $E^c$ may be neutral for the action of the tangent map. This notion arose in a natural way in the context of time-one maps of Anosov flows, frame flows and group extensions, and it was possible to show the existence of open sets of partially hyperbolic diffeomorphisms which are not hyperbolic. See \cite{BP}, \cite{Sh}, \cite{M}, \cite{BD}, \cite{BV} for examples of these systems and \cite{HP}, \cite{PS} for an overview.
However, until the results in \cite{CP}, partially hyperbolic systems were unknown in the context of geodesic flows induced by Riemannian metrics. In fact, in \cite{CP} it was proved that
{\em for some compact locally symmetric space $(M,g)$ whose sectional curvature takes values in the whole interval $[-4a^2,-a^2]$, there is a metric $g^*$ in $M$ such that its geodesic flow is partially hyperbolic but not Anosov.}
On the other hand, since the works of J. Lewowicz, quadratic forms have been a powerful tool to characterize expansive dynamics in general and hyperbolic ones in particular (see \cite{L1, L4}); moreover, this approach has been extended to the context of geodesic flows (see for instance \cite{L}, \cite{P}, \cite{R1} and \cite{R2}) and billiards (see \cite{Mar}, \cite{MPS}). These techniques, using previous results by Potapov, have been extended and generalized in \cite{W2}.
In the present paper, we explore the use of quadratic forms in the particular context of partially hyperbolic geodesic flows. Moreover, we revisit the examples of partially hyperbolic geodesic flows provided in \cite{CP} using quadratic forms, and we relate partial hyperbolicity to the curvature tensor.
In the second section, we give some definitions and the criterion we are going to apply to obtain partially hyperbolic examples.
In the third section, we prove the main theorem with the help of the criterion and derive a corollary, which is going to be useful to prove that some examples are partially hyperbolic.
In the fourth and last section we present some examples of geodesic flows which are partially hyperbolic.
\section{Definitions}
Before giving the results, we need to introduce a few definitions.
A partially hyperbolic flow $\phi_t : M \to M$ on the manifold $M$ generated by the vector field $X: M \to TM$ is a flow such that its quotient bundle $TM / \langle X \rangle$ (assuming that $X$ has no singularities) has an invariant splitting $TM / \langle X \rangle = E^s \oplus E^c \oplus E^u$ into nontrivial subbundles with the following properties:
$$ d\phi_t(x) (E^s(x)) = E^s(\phi_t(x)), d\phi_t(x) (E^c(x)) = E^c(\phi_t(x)), d\phi_t(x) (E^u(x)) = E^u(\phi_t(x)), $$
$$ || d\phi_t(x)|_{E^s} || \leq C \exp(t \lambda), \qquad || d\phi_{-t}(x)|_{E^u} || \leq C \exp(t \lambda), $$ $$C^{-1} \exp(t \mu) \leq || d\phi_t(x)|_{E^c} || \leq C \exp(- t \mu),$$
\noindent for all $t \geq 0$, where $\lambda < \mu < 0 < C$.
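As a minimal sketch of this definition (toy rates, $C = 1$), a diagonal linear flow with one contracting, one neutral, and one expanding direction satisfies the three inequalities:

```python
import numpy as np

# Toy check: d/dt x = A x with the coordinate lines playing E^s, E^c, E^u.
# With lambda = -2, mu = -0.5 (so lambda < mu < 0) and C = 1, the chosen
# rates satisfy the inequalities of the definition for t >= 0.
lam, mu = -2.0, -0.5
A = np.diag([-2.5, 0.3 * mu, 2.5])           # stable / center / unstable rates

for t in (0.5, 1.0, 3.0):
    growth = np.exp(np.diag(A) * t)           # |d phi_t| on each line
    assert growth[0] <= np.exp(lam * t)                    # E^s contracts
    assert np.exp(mu * t) <= growth[1] <= np.exp(-mu * t)  # E^c is neutral
    assert np.exp(-np.diag(A)[2] * t) <= np.exp(lam * t)   # |d phi_{-t}| on E^u
print("rates satisfy the partially hyperbolic inequalities")
```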
Let $(M,g)$ be a Riemannian manifold, $p: TM \to M$ its tangent bundle, and $\phi_t: SM \to SM$ its geodesic flow. The geodesic flow is always a Reeb flow \cite{P}, i.e., given a $(2n+1)$-dimensional manifold $N$ and a one-form $\tau$ such that $\tau \wedge d\tau^n$ is a volume form, the Reeb vector field $Y$ is the vector field such that $i_Y \tau = 1$ and $i_Y d\tau = 0$, and its flow is a Reeb flow. The kernel of $\tau$ is called the contact structure of the contact manifold $(N,\tau)$. It is always invariant under the flow and transversal to the Reeb vector field \cite{P}.
The double tangent bundle $TTM$ is isomorphic to the vector bundle $\mathcal{E} \to TM$, $\mathcal{E} = \pi^*TM \oplus \pi^*TM$, with fiber $\mathcal{E}_v = T_{\pi(v)}M \oplus T_{\pi(v)}M$. We define the isomorphism as $$\mathcal{I}: TTM \to \mathcal{E} : Z \mapsto \left((\pi \circ V)'(0), \frac{DV}{dt}(0)\right),$$ where $V$ is a curve on $TM$ such that $V'(0) = Z$. The Sasaki metric on the double tangent bundle is the pull-back of the metric $\widetilde{g}$ on $\mathcal{E}$: $\widetilde{g}_v((\eta_1,\varsigma_1),(\eta_2,\varsigma_2)) = g_{\pi(v)}(\eta_1,\eta_2) + g_{\pi(v)}(\varsigma_1,\varsigma_2)$, where $(\eta_1,\varsigma_1),(\eta_2,\varsigma_2) \in T_{\pi(v)}M \oplus T_{\pi(v)}M$. The contact structure $\xi(SM)$ of the geodesic flow is identified with the vector bundle $\mathcal{E'} \to SM$, $\mathcal{E'}_v \subset \mathcal{E}_v$ for all $v \in SM$, whose fiber at $v$ is $v^{\bot} \oplus v^{\bot}$, where $v^{\bot} = \{ w \in T_{\pi(v)}M : g(w,v) = 0 \}$. Under the identification of $TTM$ and $\mathcal{E}$, the derivative of the geodesic flow is $d_v\phi_t(\eta,\varsigma) = (J(t),J'(t))$, where $J$ is the Jacobi field along the geodesic with initial conditions $(J(0),J'(0))=(\eta,\varsigma)$, satisfying $J''(t) + R(\phi_t(v),J(t))\phi_t(v) = 0$ \cite{Ba1},\cite{P}.
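The Jacobi-field description of $d_v\phi_t$ can be sketched numerically: for constant curvature $K=-1$ the Jacobi equation reduces to $J'' = J$, and a simple explicit Euler integrator (a toy scheme used only for this illustration) reproduces the exponential solution.

```python
import math

# Integrate the scalar Jacobi equation J'' = -K*J; for K = -1 with
# J(0) = J'(0) = 1 the exact solution is J(t) = e^t.
def jacobi(K, J0, Jp0, T, n=100000):
    h = T / n
    J, Jp = J0, Jp0
    for _ in range(n):
        J, Jp = J + h * Jp, Jp + h * (-K * J)   # explicit Euler step
    return J

T = 2.0
J = jacobi(-1.0, 1.0, 1.0, T)
print(J, math.exp(T))
assert abs(J - math.exp(T)) < 1e-2
```

The exponential growth of $J$ along the stable/unstable directions is exactly what the quadratic-form criterion below detects.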
Let $\pi: \xi(SM) \to SM$ be the contact structure of the geodesic flow of $(M,g)$. Let $Q: \xi(SM) \to \mathbb{R}$ be a nondegenerate quadratic form of constant signature $(l,m)$. Let $\mathcal{C}_+(x,v) = \{ \eta \in \xi(x,v) : Q_{(x,v)}(\eta) > 0 \}$ be its positive cone, $\mathcal{C}_-(x,v) = \{ \eta \in \xi(x,v): Q_{(x,v)}(\eta) < 0 \}$ its negative cone, and $\mathcal{C}_0(x,v) = \{ \eta \in \xi(x,v): Q_{(x,v)}(\eta) = 0 \}$ their common boundary. The criterion is the following:
\begin{lemma}
If $$\frac{d}{dt} \mathcal{Q}(\eta,\varsigma) > 0$$ for all $(\eta,\varsigma) \in \mathcal{C}_0(x,v)$, $(x,v) \in SM$, then the flow $\phi_t$ is strictly $Q$-separated. This criterion, together with the reversibility of the geodesic flow, implies that the flow has a partially hyperbolic splitting $\xi = E^s \oplus E^c \oplus E^u$, $\dim(E^{\sigma}) = l$, $\sigma=s,u$.
\end{lemma}
\begin{proof}
See the proof in \cite{W1}.
\end{proof}
\section{Main results}
In this section we state the main theorem and a corollary which is going to be useful to prove that some geodesic flows are partially hyperbolic.
Let $(M^n,g)$ be an $n$-dimensional Riemannian manifold and $\nabla$ its Levi-Civita connection. Let $R_x: T_xM \times T_xM \times T_xM \to T_xM$ be its curvature tensor. Let $v \in T_xM$; then $R_x(v,\cdot)v: T_xM \to T_xM$ is a symmetric linear operator. We can restrict it to $v^\bot$, since $R(v,v)v = 0$. So $R(v,\cdot)v : v^\bot \to v^\bot$ is a symmetric linear operator, and we can diagonalize it: there are eigenvalues $\lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_{n-1}$ and eigenvectors $v_1,v_2,\ldots,v_{n-1}$ such that $R(v,v_k)v = \lambda_k v_k$.
Suppose there is an integer $r$, $1 < r < n-2$, such that $\lambda_{r}(v) > \lambda_{r+1}(v)$ for each $v \in TM$. Then we are able to define $A(v) = \mathbb{R}v_1 \oplus \ldots \oplus \mathbb{R}v_r$ and $B(v) = \mathbb{R}v_{r+1} \oplus \ldots \oplus \mathbb{R}v_{n-1}$. It is easy to see that $A(v) \oplus B(v) = v^\bot$. Let $Gr(r,TM)$ be the Grassmannian bundle of $r$-dimensional subbundles of $TM$. Then $A,B : TM \to Gr(r,TM)$. Also $A(cv) = A(v)$ and $B(cv) = B(v)$ for all $c \in \mathbb{R}$, $c \neq 0$, so we may consider $A,B : SM \to Gr(r,TM)$, where $SM$ is the unit tangent bundle of $M$.
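A numerical sketch of this splitting (with a randomly generated symmetric operator standing in for $R(v,\cdot)v$ and an artificial spectral gap): diagonalize, sort the eigenvalues in decreasing order, and split $v^\bot$ at the gap.

```python
import numpy as np

# Toy symmetric operator on a 4-dimensional v-perp, with eigenvalues chosen
# to have a gap between lambda_2 and lambda_3 (so r = 2 here).
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))    # random orthogonal basis
eigvals = np.array([-1.0, -1.2, -3.9, -4.1])
R_op = Q @ np.diag(eigvals) @ Q.T

w, V = np.linalg.eigh(R_op)                      # ascending eigenvalues
order = np.argsort(w)[::-1]                      # descending, as in the text
w, V = w[order], V[:, order]

r = 2
A_v = V[:, :r]       # A(v) = span(v_1, ..., v_r)
B_v = V[:, r:]       # B(v) = span(v_{r+1}, ..., v_{n-1})
assert w[r - 1] > w[r]                 # the spectral gap defining the split
assert np.allclose(A_v.T @ B_v, 0.0)   # A(v) and B(v) are orthogonal
print(np.round(w, 3))
```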
Let $P_{A(v)} : T_{p(v)}M \to A(v)$ be the orthogonal projection onto $A(v)$. Let $A' = P_{A(v)}' = \frac{d}{dt}\big|_{t=0} P_{A(\phi_t(v))}$, and let $\eta_A$ denote $P_A \eta$. Let $K_A$ be the restriction $R(v,\cdot)v : A(v) \to A(v)$ and $K_B$ the restriction $R(v,\cdot)v : B(v) \to B(v)$.
The linearization of the geodesic flow gives the system of equations $$\eta' = \varsigma , \qquad \varsigma' = - R(v,\eta)v,$$ for $(\eta,\varsigma) \in T_xM \oplus T_xM \cong T_vTM$.
\begin{theorem}
\label{t.maintheo}
Let $(M,g)$ be a Riemannian manifold. Suppose $A: SM \to Gr(r,TM): v \mapsto A(v) \subset T_{p(v)}M$ is a continuous function (smooth along geodesics). Let the quadratic form be $\mathcal{Q}^c(\eta,\varsigma) = g(\eta_A,\varsigma_A) - c^2g(\eta_B,\eta_B) - g(\varsigma_B,\varsigma_B)$, where $c$ is a positive real number. If
$\frac{d}{dt} \mathcal{Q}^c(\eta,\varsigma) = \widetilde{g}(S^c(\eta,\varsigma),(\eta,\varsigma))$ is positive, with the matrix $$S^c = \begin{bmatrix} -K_A & 0 & c^2A' & \frac{1}{2} A' \\ 0 & Id & \frac{1}{2} A' & A' \\ c^2A' & \frac{1}{2} A' & 0 & -c^2Id + K_B \\ \frac{1}{2} A' & A' & -c^2Id + K_B & 0 \end{bmatrix},$$ for all $(\eta,\varsigma) \in \mathcal{C}_+(x,v)$, $(x,v) \in SM$, then the geodesic flow of the Riemannian manifold $(M,g)$ is partially hyperbolic with $\dim(E^{\sigma}) = r$, $\sigma = s,u$.
\end{theorem}
\begin{proof}
We have to calculate the following derivative:
\begin{eqnarray*}
\frac{d}{dt} \mathcal{Q}^c(\eta,\varsigma) & = & \frac{d}{dt} (g(\eta_A,\varsigma_A) - c^2 g(\eta_B,\eta_B) - g(\varsigma_B,\varsigma_B)) \\ & = & g(\varsigma_A,\varsigma_A) - g(R(v,\eta)v,\eta_A) + g(\eta_{A'},\varsigma_A) + g(\eta_A,\varsigma_{A'}) \\ & - & 2c^2 g(\eta_B,\varsigma_B) + 2 g(R(v,\eta)v,\varsigma_B) - 2c^2 g(\eta_{B'},\eta_B) - 2 g(\varsigma_{B'},\varsigma_B).
\end{eqnarray*}
Since $P_A$ is the orthogonal projection onto $A$ and $P_B$ the orthogonal projection onto $B$, we have $P_A (P_A)' = (P_A)' P_B$ and $P_B (P_A)' = (P_A)' P_A$. This implies that
\begin{eqnarray*}
\frac{d}{dt} \mathcal{Q}^c(\eta,\varsigma) & = & g(\varsigma_A,\varsigma_A) - g(R(v,\eta)v,\eta_A) + g((P_A)' \eta_B,\varsigma_A) + g(\eta_A,(P_A)' \varsigma_B) \\ & - & 2 c^2 g(\eta_B,\varsigma_B) + 2 g(R(v,\eta)v,\varsigma_B) + 2 c^2 g((P_A)' \eta_{A},\eta_B) + 2 g((P_A)' \varsigma_{A},\varsigma_B).
\end{eqnarray*}
\end{proof}
\noindent Now suppose we are able to define two functions, $\alpha,\beta: SM \to \mathbb{R}_+$ such that
\begin{itemize}
\item[i.] $-\alpha(v)^2 > \max \{\lambda_i\}_{i=1}^r$,
\item[ii.] $-\alpha(v)^2 < - \beta(v)^2 < \lambda_i$ if $i=r+1,\ldots,n-1$,
\item[iii.] there is a constant $e \in \mathbb{R}_+$ such that $\beta(v) < e < \alpha(v)$ for all $v \in SM$,
\end{itemize}
\noindent then we are able to fix $c \in \mathbb{R}_+$ as $c := e$.
\begin{corollary}
\label{c.maintheo}
Under the hypotheses of Theorem \ref{t.maintheo} and the hypotheses stated above, there is an $\epsilon: SM \to \mathbb{R}_+$, depending on the curvature tensor $R$ and the real numbers $c$ and $d$, such that if $\| A'(v) \| < \epsilon(v)$ then the geodesic flow of the Riemannian manifold $(M,g)$ is partially hyperbolic with $\dim(E^{\sigma}) = r$, $\sigma = s,u$.
\end{corollary}
\begin{proof}
The proof is straightforward from Theorem \ref{t.maintheo}, once one notices that if $A' = 0$ then $\frac{d}{dt} \mathcal{Q}^e = \widetilde{g}(S^e \cdot,\cdot)$ where $$S^e = \begin{bmatrix} -K_A & 0 & 0 & 0 \\ 0 & Id & 0 & 0 \\ 0 & 0 & 0 & -e^2Id + K_B \\ 0 & 0 & -e^2Id + K_B & 0 \end{bmatrix},$$ so for $(\eta,\varsigma) \in C_+$ we have $\frac{d}{dt} \mathcal{Q}^e \geq g(\varsigma_A,\varsigma_A) + \alpha^2 g(\eta_A,\eta_A) - 2(e^2 + \beta^2) g(\eta_B,\varsigma_B)$ and $g(\eta_A,\varsigma_A) \geq e^2g(\eta_B,\eta_B) + g(\varsigma_B,\varsigma_B) \geq 2e g(\eta_B,\varsigma_B)$. Then,
$$ \frac{d}{dt} \mathcal{Q}^e \geq g(\varsigma_A - \alpha \eta_A,\varsigma_A - \alpha \eta_A) + 2 \alpha g(\eta_A,\varsigma_A) - 2(e^2 + \beta^2) g(\eta_B,\varsigma_B)$$ $$\geq g(\varsigma_A - \alpha \eta_A,\varsigma_A - \alpha \eta_A) + \left(2 \alpha - e - \frac{\beta^2}{e}\right) g(\eta_A,\varsigma_A) > 0$$ for $(\eta,\varsigma) \in \mathcal{C}_+$, since $2 \alpha - e - \frac{\beta^2}{e} > 0$ when $e \in (\beta,\alpha)$.
\end{proof}
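The final inequality in the proof can be checked numerically; the following is a quick sketch over a few sample values of $\alpha$, $\beta$ and $e$.

```python
# For any 0 < beta < e < alpha, the quantity 2*alpha - e - beta^2/e used
# at the end of the proof is strictly positive.
def margin(alpha, beta, e):
    return 2 * alpha - e - beta**2 / e

for alpha, beta in ((2.0, 1.0), (5.0, 0.1), (1.5, 1.4)):
    for s in (0.01, 0.5, 0.99):
        e = beta + s * (alpha - beta)    # e ranges over (beta, alpha)
        assert margin(alpha, beta, e) > 0
print("2*alpha - e - beta^2/e > 0 whenever beta < e < alpha")
```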
\begin{remark}
Looking at the statements in the corollary, we see that at the moment we have to ask for the existence of a gap between the $r$ biggest eigenvalues of $R(v,\cdot)v$ and the other eigenvalues (second hypothesis), together with a non-oscillation hypothesis for this gap, i.e., there has to be a constant $e$ in the gap which does not depend on $v \in SM$. A good question would be whether partial hyperbolicity still holds when there is no such constant.
\end{remark}
\section{Examples}
In this section we show some examples. The first example, in Subsection \ref{s.negative}, is a Riemannian manifold of negative curvature; in this case the criterion coincides with the criterion for hyperbolicity of the geodesic flow. In Subsection \ref{s.phsymmetric} the example satisfies the criterion for partial hyperbolicity; it is also a hyperbolic example. In Subsection \ref{s.nphsymmetric} the example does not satisfy the criterion and is not partially hyperbolic. In Subsection \ref{s.nonanosov} we show the last example, for which the criterion is satisfied outside a small set of vectors in the unit tangent bundle. The last example is not Anosov but is partially hyperbolic \cite{CP}.
\subsection{Negatively curved manifolds}
\label{s.negative}
In the negatively curved case, the theorem is trivial. In this case, $A(v) = (\mathbb{R}v)^{\bot}$, and there is no need for a function $\beta$. Suppose $K \leq - \alpha^2$ for a positive real number $\alpha$. Since $(P_A)' = 0$ in this case, the criterion trivially holds:
$$ \frac{d}{dt} g(\eta,\varsigma) = g(\varsigma,\varsigma) - R(v,\eta,v,\eta) \geq g(\varsigma,\varsigma) + \alpha^2g(\eta,\eta) > 0.$$
\noindent for any $(\eta,\varsigma) \in C_+(x,v)$, $(x,v) \in SM$.
\subsection{Locally symmetric manifolds}
\label{s.lsm}
In this section we look at the case of a compact locally symmetric manifold $(M,g)$ of noncompact type. In a previous work we showed that if the rank is one then the geodesic flow is partially hyperbolic, while if the rank is at least two then it is not.
\begin{definition}
A simply connected Riemannian manifold is called symmetric if for every $x \in M$ there is an isometry $\sigma_x: M \to M$ such that
$$\sigma_x(x) = x, d\sigma_x(x) = - id_{T_xM}.$$
\noindent The property of being symmetric is equivalent to:
\begin{itemize}
\item $\nabla R \equiv 0$,
\item if $X(t)$, $Y(t)$ and $Z(t)$ are parallel vector fields along $\gamma(t)$, then $R(X(t),Y(t))Z(t)$ is also a parallel vector field along $\gamma(t)$.
\end{itemize}
\end{definition}
Each locally symmetric space $N$ is the quotient of a simply connected symmetric space $M$ by a group $\Gamma$ acting on $M$ discretely, without fixed points, and isometrically, so that $N = M / \Gamma.$
\subsubsection{Locally symmetric manifolds of noncompact type and of rank one}
\label{s.phsymmetric}
Locally symmetric spaces with nonconstant negative curvature have the following parallel subspaces of $(\mathbb{R} v)^{\bot}$:
\begin{eqnarray}
\label{d.AB}
A(x,v) & := & \{ w \in T_xM : K(v,w) = -4 a^2 \}, \\ B(x,v) & := & \{ w \in T_xM : K(v,w) = - a^2 \},
\end{eqnarray}
\noindent where $a \in \mathbb{R}$.
The curvature tensor of a locally symmetric manifold of noncompact type and rank one satisfies $$R(v,\eta)v = - 4 a^2 \eta_A - a^2 \eta_B,$$ where $v \in SM$ and $\eta = \eta_A + \eta_B$ is the decomposition of $\eta$ with respect to (\ref{d.AB}).
Partial hyperbolicity follows from:
\begin{eqnarray*} \frac{d}{dt}(g(\eta_A,\varsigma_A) - e^2 g(\eta_B,\eta_B) - g(\varsigma_B,\varsigma_B)) & = & g(\varsigma_A,\varsigma_A) - g(R(v,\eta)v,\eta_A) \\ - 2e^2 g(\eta_B,\varsigma_B) + 2 g(R(v,\eta)v,\varsigma_B) & = &
4a^2 g(\eta_A,\eta_A) + g(\varsigma_A,\varsigma_A) \\ - 2e^2g(\eta_B,\varsigma_B) - 2 a^2 g(\eta_B,\varsigma_B) & > & 4a^2 g(\eta_A,\eta_A) + g(\varsigma_A,\varsigma_A) \\ - (e + \frac{a^2}{e}) g(\eta_A,\varsigma_A) & > & 0
\end{eqnarray*}
\noindent if $e \in (a,2a)$, since in that case $e + \frac{a^2}{e} < 4a$.
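The bound invoked here is elementary: the function $f(e) = e + \frac{a^2}{e}$ satisfies $f'(e) = 1 - \frac{a^2}{e^2} > 0$ for $e > a$, so $f$ is increasing on $(a,2a)$ and
$$ e + \frac{a^2}{e} < f(2a) = 2a + \frac{a^2}{2a} = \frac{5a}{2} < 4a. $$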
\subsubsection{Locally symmetric manifolds of noncompact type and of rank at least two}
\label{s.nphsymmetric}
\begin{definition}
Let $\mathfrak{g}$ be the algebra of Killing fields on the symmetric space $M$, $p \in M$. Define
$$\mathfrak{k} := \{ X \in \mathfrak{g} : X(p) = 0 \},$$
$$\mathfrak{p} := \{ X \in \mathfrak{g} : \nabla X (p) = 0 \}.$$
\noindent For these subspaces of $\mathfrak{g}$, $\mathfrak{k} \oplus \mathfrak{p} = \mathfrak{g}$ and $\mathfrak{k} \cap \mathfrak{p} = \{ 0 \}$, and $T_pM$ identifies with $\mathfrak{p}$.
\end{definition}
\begin{definition}
Given $p \in M$, we define the involution $\phi_p: G \to G$, $g \mapsto \sigma_p \circ g \circ \sigma_p$, where $G$ is the isometry group of $M$. Differentiating, we obtain $\theta_p := d \phi_p : \mathfrak{g} \to \mathfrak{g}$. Since $\theta_p^2 = id$ and $\theta_p$ preserves the Lie bracket, these subspaces of $\mathfrak{g}$ satisfy:
\begin{itemize}
\item[i.] $\theta_{p|\mathfrak{k}} = id$,
\item[ii.] $\theta_{p|\mathfrak{p}} = - id$,
\item[iii.] $[\mathfrak{k},\mathfrak{k}] \subset \mathfrak{k}$, $[\mathfrak{p},\mathfrak{p}] \subset \mathfrak{k}$, $[\mathfrak{k},\mathfrak{p}] \subset \mathfrak{p}$,
\end{itemize}
\end{definition}
Fix a maximal Abelian subspace $\mathfrak{a} \subset \mathfrak{p}$. Let $\Lambda$ denote the set of roots determined by $\mathfrak{a}$, and write $$\mathfrak{g} = \mathfrak{g}_0 + \sum_{\alpha \in \Lambda} \mathfrak{g}_{\alpha},$$
\noindent where $\mathfrak{g}_{\alpha} = \{ w \in \mathfrak{g} : (ad X) w = \alpha(X) w \textrm{ for all } X \in \mathfrak{a} \}$ and each root $\alpha: \mathfrak{a} \to \mathbb{R}$ is a linear functional.
Define a corresponding decomposition for each $\alpha \in \Lambda$, $\mathfrak{k}_{\alpha} = (id + \theta) \mathfrak{g}_{\alpha}$ and $\mathfrak{p}_{\alpha} = (id - \theta) \mathfrak{g}_{\alpha}$. Then:
\begin{itemize}
\item[i.] $id + \theta: \mathfrak{g}_{\alpha} \to \mathfrak{k}_{\alpha}$ and $id - \theta: \mathfrak{g}_{\alpha} \to \mathfrak{p}_{\alpha}$ are isomorphisms,
\item[ii.] $\mathfrak{p}_{\alpha} = \mathfrak{p}_{- \alpha}$, $\mathfrak{k}_{\alpha} = \mathfrak{k}_{- \alpha}$, and $\mathfrak{p}_{\alpha} \oplus \mathfrak{k}_{\alpha} = \mathfrak{g}_{\alpha} \oplus \mathfrak{g}_{- \alpha}$,
\item[iii.] $\mathfrak{p} = \mathfrak{a} + \sum_{\alpha \in \Lambda} \mathfrak{p}_{\alpha}$, $\mathfrak{k} = \mathfrak{k}_0 + \sum_{\alpha \in \Lambda} \mathfrak{k}_{\alpha}$, where $\mathfrak{k}_0 = \mathfrak{g}_0 \cap \mathfrak{k}$.
\end{itemize}
For $X \in \mathfrak{a}$, along the geodesic $\gamma$ in $M$ with initial conditions $\gamma(0) = p$, $\gamma'(0) = X$, the Jacobi fields are linear combinations of fields of the form
$$\cosh(|\alpha(X)|t)v_j(t) \textrm{ and } \sinh(|\alpha(X)|t)v_j(t),$$
\noindent where $v_j(t)$ is a parallel vector field along $\gamma$.
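These expressions follow from the Jacobi equation. With the convention $J'' + R(J,\gamma')\gamma' = 0$, the Jacobi operator of a symmetric space of noncompact type acts on $w \in \mathfrak{p}_{\alpha}$ as $R(w,X)X = -\alpha(X)^2 w$, so writing $J(t) = f(t)\, v_j(t)$ with $v_j$ parallel along $\gamma$ yields
$$ f'' = \alpha(X)^2 f, $$
\noindent whose solution space is spanned by $\cosh(|\alpha(X)|t)$ and $\sinh(|\alpha(X)|t)$.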
\begin{proposition}
If $(M,g)$ is a locally symmetric manifold of noncompact type and rank bigger than one, then there is no continuous function $$A: SM \to Gr(r,TM): v \to A(v) \subset T_{p(v)}M$$ satisfying the hypotheses of Theorem \ref{t.maintheo}.
\end{proposition}
To see this, in the case of rank bigger than one, fix $r < \dim M$ and pick $v \in T_xM$ such that $A(v) = \oplus_{i=1}^r \mathfrak{p}_{\alpha_i}$, with $|\alpha_1| > |\alpha_2| > \ldots > |\alpha_r|$, and such that if $\beta \neq \alpha_i$ for all $i = 1,\ldots,r$, then $\beta(v) < \alpha_i(v)$ for all $i = 1,\ldots,r$.
Now pick $(x,v')$ such that $\alpha_1(v') = 0$. Then $A(v') = \oplus_{j=1}^r \mathfrak{p}_{\beta_j}$, for some $\beta_j \in \Lambda$, $j=1,\ldots,r$, with $|\beta_1| > |\beta_2| > \ldots > |\beta_{r}|$. Notice that $\alpha_1(v') = 0$ implies $\beta_j \neq \alpha_1$ for all $j=1,\ldots,r$. There is no way to pass from one decomposition to the other continuously, so there is no way to define a continuous $A$ as in the statement of the theorem.
\begin{remark}
In the case of locally symmetric manifolds of rank bigger than one, none of the three hypotheses stated prior to Theorem \ref{t.maintheo} is satisfied. It would be interesting to look for examples that fail only one of these hypotheses.
\end{remark}
\subsection{Non Anosov example}
\label{s.nonanosov}
Let $(M,g)$ be a Riemannian manifold, $\nabla$ its Levi-Civita connection, and $R$ its curvature tensor. Let $(M,g_1=e^{\alpha}g)$ be the Riemannian manifold with a metric in the conformal class of $g$, $\nabla^1$ its Levi-Civita connection, and $R^1$ its curvature tensor. Then
$$\nabla^1_X Y = \nabla_X Y + \frac{1}{2} g(\nabla \alpha, X) Y + \frac{1}{2} g(\nabla \alpha, Y) X - \frac{1}{2} g(X,Y) \nabla \alpha,$$
\noindent and if $\alpha$ is $C^2$-close to zero, then
$$R^1(X,Z,X,W) \approx R(X,Z,X,W) - \frac{1}{2} g(X,X) g(\nabla_Z \nabla \alpha,W) - \frac{1}{2} g(Z,W) g(\nabla_X \nabla \alpha,X).$$
If $(M,g)$ is the locally symmetric space with curvature in $[-4a^2,-a^2]$, the idea is to pick a closed geodesic $\gamma$ without self-intersections and take a tubular neighborhood around $\gamma$. Define an orthogonal coordinate system $x_0,x_1,\ldots,x_{n-1}$ in the tubular neighborhood such that along $\gamma$, $\gamma' = \partial_{x_0}$, $K(\gamma',\partial_{x_i}) = -4a^2$ for $i=1,\ldots,r$ and $K(\gamma',\partial_{x_i})= - a^2$ for $i=r+1,\ldots,n-1$. Then we are able to define an $\alpha: M \to \mathbb{R}$ such that for $g_1 = e^{\alpha}g$, $K(\gamma',\partial_{x_i}) = -4a^2$ for $i=1,\ldots,r$ and $K(\gamma',\partial_{x_i})= 0$ for $i=r+1,\ldots,n-1$.
Define $A$ as in Subsection \ref{s.lsm}. Then, for the same quadratic form of Theorem \ref{t.maintheo}, the criterion holds on a set $\mathcal{PH} \subset SM$. Let $\mathcal{T}:= SM - \mathcal{PH}$. Any orbit that crosses $\mathcal{T}$ spends there as little time as we want (the time depends on the size of the tubular neighborhood). So, with a bit more work, we can show that its geodesic flow is partially hyperbolic (see details in \cite{CP}).
\section{Implementation}
Given the focus of our study on monitoring and comprehension, our
implementation of staged animation of online dynamic graphs
precludes any user interaction with the system.
This is an artificial constraint as most visualization or visual
analytic systems for monitoring networks would have some control for
overview, filtering and playback.
Yet, it was necessary for our controlled study to exclude other
variables that might influence the perception of the results from each
staging strategy.
Aside from the staging strategies themselves, our implementation thus focused on only one other aspect: graph layout.
A crucial aspect of dynamic graph layouts is how each time step is laid
out.
In offline dynamic graph layouts, a trade-off is made between a layout
globally optimized over all time steps and one locally optimized for
each time step.
Since online dynamic graph layouts do not possess knowledge of all the
time steps ahead of time, they focus on reducing the amount of movement
nodes experience between consecutive time points.
We utilize the incremental layout method by Crnovrsanin et
al.~\cite{Crnovrsanin2015}.
Their qualitative study demonstrated improved results over two other state-of-the-art methods: Aging~\cite{Gorochowski2012} and Pinning~\cite{Frishman2008}.
Their methods consist of two parts, an initial placement and layout
algorithm followed by a refinement approach.
The refinement allows nodes with high energy to move until they reach a
low energy state.
Due to how staged animation works and for the study, we had to make a few
changes to how their layout method operates.
In their implementation, refinement is run between time steps while the
system waits for more data.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{Figures/animation_order_singlecol}
\caption{
Illustration of how the staged animation appears to the
participants in the study, inspired by
GraphDiaries~\cite{Bach2014graphdiaries}.
At the start of the deletion stage, all entities (nodes/edges) to be
deleted flash orange, and then disappear over the next 0.5 seconds.
Labels of deleted nodes remain briefly for the participants to
register what nodes have disappeared.
The remaining nodes move (1.2 sec) to their new computed positions.
Entities that are added flash blue briefly and the color fades away
(0.5 sec).
}
\label{fig:animation_order}
\end{figure}
The benefit of this approach is that it allows a gradual change of the
network and helps to maintain the stability of the network over time.
The incremental layout method that we use~\cite{Crnovrsanin2015} has two options: either refine the layout gradually between the time steps, or perform refinement and lay out the graph without showing the intermediate steps.
Unfortunately, in our implementation, we cannot run refinement between
time steps because data arrives constantly during the study.
Therefore, we run refinement right after the initial placement of new
nodes and edges, followed by the layout algorithm but before staged
animation is run.
This groups the movement from both the addition and deletions as well as
high energy nodes shifting to a low energy state.
Another change is the addition of a central force on all nodes to keep
them closer to each other.
Without this central force, disconnected components would continue to
move away over time.
A side benefit is that this allows us to fit all nodes within the
screen, making it easier to conduct the study.
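The central force can be sketched as follows; this is a hypothetical Python illustration of the idea (function and parameter names are our own, not those of the actual implementation):

```python
def apply_central_force(positions, strength=0.25, center=(0.0, 0.0)):
    """Nudge every node toward a fixed center point.

    Without such a term, disconnected components in a force-directed
    layout keep drifting apart over time; with it, the layout stays
    bounded and fits on screen.
    """
    cx, cy = center
    return {
        node: (x + strength * (cx - x), y + strength * (cy - y))
        for node, (x, y) in positions.items()
    }

# Two nodes in distant "components"; both are pulled toward the origin.
pos = {"a": (100.0, 0.0), "b": (-80.0, 60.0)}
pos = apply_central_force(pos)
```

In a full simulation this displacement would be added once per iteration, on top of the usual repulsive and attractive forces.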
The actual disappearance, movement, and appearance of the entities in
each stage follows GraphDiaries~\cite{Bach2014graphdiaries} closely (see
Fig.~\ref{fig:animation_order}).
\section{User Study}
Our goal through the user study is to determine the suitability of each
staging strategy to a number of tasks broadly
based on task taxonomies for network visualizations with temporal
components~\cite{Andrienko2006exploratory, Ahn2013task,
Kerracher2015task}.
We broadly split our tasks into \textit{monitoring} and
\textit{comprehension} tasks, or elementary and synoptic tasks
respectively, according to the Andrienko Task Format
(ATF)~\cite{Andrienko2006exploratory}.
We designed a within-subjects study where each participant was exposed
to multiple monitoring and comprehension tasks using all three animation
strategies.
We supplement this study with a follow-up qualitative study with two experts in network visualization (see \autoref{sec:follow_up}).
We use a think-aloud protocol to evaluate their comprehension of a set of online dynamic network animation videos across the three animation strategies.
\subsection{Participants}
We recruited 21 participants (9 female, 12 male) between 18 and 35 years
of age.
All participants were university students, with 12 Ph.D students, 6
masters students, and 3 undergraduate students.
Eighteen students were computer science majors, the remaining
three majored in Aerospace engineering, Telecommunication, and
Information Technology, respectively.
Most of the participants (13 students) reported being highly familiar
with information visualization.
Five were moderately familiar with visualization, while three had little
to no familiarity with the subject.
Eight of the 21 participants reported using node-link diagrams regularly
in their work, an equal number reported having used them at least once,
while the remaining 5 had little or no knowledge of node-link diagrams.
\subsection{Apparatus}
Of the 21 participants, 16 used a MacBook Pro laptop with a 2.7 GHz
Intel Core i5 processor and 8 GB RAM, connected to a 30 in display
(2560$\times$1600 resolution).
Due to logistical constraints, five participants participated remotely,
and used different monitors (all 15-inch laptop screens).
All animations were shown to participants as video clips with no
playback controls.
The questionnaire including video playback was administered on a Chrome
browser.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Figures/task_timelines.pdf}
\vspace*{-8mm}
\caption{
Chart showing the rate at which the events occur over time in the raw data (left), \edit{and the number of nodes at the start and end of the animation for each task (right)}.
The duration over which these events are displayed in the animations will depend on the animation strategy used.
}
\vspace*{-2mm}
\label{fig:eventsvsTime}
\end{figure}
\subsection{Tasks \& Dataset}
\label{sec:tasks_dataset}
In order to generate the animations illustrating the three staging
strategies, we used the MIT Reality Mining
Dataset~\cite{Eagle2006reality}.
The dataset contains activity records of 100 individuals at MIT over the span of the 2004-2005 academic year and includes datasets recording proximity, location, communication, and other activity.
In this study, we draw from subsets of this data concerning the activities labelled \emph{Call} and \emph{Proximity}.
The \emph{Call} dataset contains temporal data of individuals placing calls to others, while the \emph{Proximity} dataset contains temporal data of individuals moving in and out of each other's Bluetooth ranges.
For our tasks, we choose temporal data segments from different sections of the larger dataset.
This allows us to choose segments with variations in the rate at which events occurred over time (see \autoref{fig:eventsvsTime}).
In addition, the two types of networks present in the data set (call and proximity) are structurally different from each other.
As mentioned earlier, tasks were split into monitoring and comprehension
tasks.
Monitoring tasks were timed.
The question was asked before the video was shown to the participants, and they needed to respond \textit{during} the video playback as soon as they spotted the answer to their question.
\edit{Participant response times were noted relative to when the event occurred, in addition to when the event was shown in the animation (see \autoref{fig:monitoring_results} for details).}
Comprehension tasks were multiple-choice, and required participants to
observe a video and then answer one or more questions related to the
video.
\edit{We used node labels for tasks that required paying attention to specific nodes, and kept the nodes unlabeled for tasks that were more general, e.g.\ paying attention to clusters or the overall graph.}
\autoref{tab:task_categorization} describes the tasks used in the study.
\edit{A third comprehension task was initially included in the study but subsequently dropped from the analysis due to the fact that the event-based and hybrid staging resulted in identical ``batches'' of animations, with only the node positioning being different. This would have created an unintended confound.}
Under Koussoulakou and Kraak's classification of spatio-temporal tasks~\cite{Koussoulakou1992spatia}, the monitoring tasks can be categorized as \emph{elementary}
under space and \emph{intermediate} under time, as participants are typically tracking one or two nodes over a given duration, looking for a specific behavior.
The comprehension tasks in this study as well as the follow-up think-aloud study fall under \emph{``overall level''} under both space and time in the same classification, as participants are asked to report on the overall behavior of a node, group of nodes, or the entire network over a time duration.
In terms of tasks specific to network visualization, we use as reference Ahn et al.'s taxonomy~\cite{Ahn2013task}, where temporal features are broadly classified into \emph{individual temporal features} that are typically event-related, \emph{shape of changes} that concern event collections such as growth/contraction, stability, etc., and \emph{rate of changes} that involve the measurement of speed or time.
Since ours is an exercise in perception and not measurement, our tasks do not fall under the \emph{rate of changes} category.
Using this classification, our monitoring tasks can be categorized under \emph{individual temporal features}, while the comprehension tasks and the think-aloud tasks in the follow-up study can be categorized under \emph{shape of changes}.
Participants were given training questions for each type of task: one
question for monitoring, and two for comprehension.
\begin{table}[h]
\small
\centering
\begin{tabular}{l c p{2.1in} }
\toprule
\textbf{Task Type} & \textbf{ID} & \textbf{Task Description} \\
\midrule
\multirow{3}{*}{Monitoring}
& T1 & Track clusters and respond as soon as they merge. \\
& T2 & Track graph and respond as soon as a particular named
entity (node) appears. \\
& T3 & Track graph and respond as soon as two named entities
(nodes) are linked. \\
\midrule
\multirow{7}{*}{~\newline ~\newline Comprehension}
& T4 & Entity pointed out before the video, and after the video
playback, asked what happened to it over the course of the
animation.\\
& T5 & Cluster pointed out before the video, and after the video
playback, asked what happened to it over the course of the
animation. \\
\bottomrule
\end{tabular}
\vspace{1mm}
\caption{Task categorization and description for each staging
strategy}
\label{tab:task_categorization}
\vspace{-3mm}
\end{table}
\edit{In real-world applications such as our security networks example, the individuals who monitor the networks are intimately familiar with said network.
Based on their knowledge of prior attacks, they can judge which nodes are vulnerable and need attention.
While it would be difficult to (a) find a suitable number of network security experts and (b) set up the data to suit their prior knowledge of similar networks, we were able to \textit{simulate} this prior knowledge by asking participants to pay attention to certain nodes.}
\edit{
In addition, real-world scenarios would likely have additional embellishments such as highlighting on recently-changed portions of the network visualization.
However, we decided to use such highlighting sparingly for a number of reasons.
Given our focus on understanding which staging strategies best use participants' capabilities to discern changes in networks, we use highlighting as shown in Fig.~\ref{fig:animation_order} to draw attention to node creation and removal.
Other forms of highlighting that draw attention to certain categories of behavior often do so for the user to step back in time and review what happened to the highlighted nodes.
This would make sense in a long-term case study of an actual network being monitored, but not in our controlled study.}
Since most participants were not expected to be familiar with dynamic
graphs and node-link diagrams, the questions were mostly phrased in the
context of a social network.
For instance, T3 was worded as \textit{``You will be shown a friendship
network with users shown as nodes and relationships between them shown
as edges. Look out for Ryan and Emily, and click on the button as soon
as they become friends.''}
This being a full-factorial within-subjects design, tasks \edit{T1--T5} were
repeated for each condition, i.e. each animation staging strategy.
We used the same dataset for each condition, with labels changed and
graphs rotated/mirrored for each condition.
While the graphs were all identical (except for rotation/mirroring) at
the start of the animation, the different staging strategies would
result in changes in layout between the three staging conditions.
While this potentially introduces an additional variable (layout) into
our study, it is unavoidable within the scope of this work.
All tasks were performed in sequence \edit{(T1--T5)} for each condition.
We counterbalanced the condition order using a Latin Square design to mitigate learning effects.
\subsection{Procedure}
Individual participants were first given a basic background of the
study, but were not described the specific animation techniques, so as
to not bias them.
Instead, they were told that data would be shown to them in three
different forms, and that they would be asked to perform a set of tasks
for each form of data presentation.
All questions and tasks were presented in the form of an online survey,
with the video showing animations embedded in the survey.
To ensure participants see the video only once, playback controls were
disabled.
Due to the animation strategies used, the video durations varied for
each condition.
Table~\ref{tab:video_durations} shows video duration for each task and condition.
\begin{table}[h]
\small
\centering
\begin{tabular}{c c c c }
\toprule
\multirow{2}{*}{\textbf{Task ID}} &
\multicolumn{3}{c}{\textbf{Video duration (sec)}}\\
\cmidrule{2-4}
&
\textbf{Event-based} &
\textbf{Hybrid } &
\textbf{Time-based} \\
\midrule
T1 & ~34 & 34 & 16 \\
T2 & ~75 & 65 & 23 \\
T3 & 108 & 96 & 15 \\
T4 & ~69 & ~65 & ~28 \\
T5 & ~92 & ~75 & ~37 \\
\bottomrule
\end{tabular}
\vspace{1mm}
\caption{
Video durations for each task and staging strategy.
Note that the data shown is the same for each task; the varying
durations are a result of the staging strategies.
Note that for T1--T3 (monitoring), participants are not required to
watch the entire video.
}
\label{tab:video_durations}
\vspace*{-5mm}
\end{table}
In the case of monitoring tasks, the video was accompanied by a button
labelled \textit{``Click as soon as you find (the answer)''}, and the
time elapsed between the start of video playback and the button click
was recorded.
Note that for monitoring tasks, participants do not have to watch the
entire video.
Participants were presented with all 5 tasks in the same order for each
condition (animated staging strategy).
Our study thus involved 21 participants $\times$ 3 staging strategies
$\times$ 5 tasks, resulting in a total of 315 trials.
At the end of each condition, participants also filled out a NASA Task
Load Index (TLX) response sheet~\cite{Hart1988development}.
Note that we collected the NASA TLX data once per condition rather than
once per task to avoid survey fatigue.
A typical session lasted 45 minutes.
\subsection{Hypotheses}
Based on our design considerations and the requirements outlined in
Section~\ref{sec:design_considerations}, we formulated the following
hypotheses:
\vspace{-2mm}
\begin{enumerate}[itemsep=-0.5mm]
\item[\textbf{H1}] Participant response to the monitoring tasks will
\edit{be affected by the volume of data. They will} be quicker for event-based staging conditions than the remaining two as the reduced visual complexity would help them spot the event as soon as it happens.
For the same reason, we posit that hybrid staging will prompt
quicker responses than time-based staging.
\edit{Response time is a stand-in for the ability to perceive an event that occurred.}
\item[\textbf{H2a}] Participant responses to comprehension tasks will
show more errors in time-based staging conditions than the remaining
two due to the increased visual complexity to which time-based
staging is susceptible given a high rate of data influx.
\item[\textbf{H2b}] Participant responses to comprehension tasks will
show more errors in event-based staging conditions than in hybrid
staging conditions due to the greater time period for which
participants need to track and remember events.
\item[\textbf{H3a}] Participants will report lower levels of
performance and higher levels of frustration in time-based staging
conditions when compared to the remaining two.
\item[\textbf{H3b}] Participants will report higher levels of mental,
physical, and temporal load and effort in time-based staging
strategies when compared to the remaining two.
\end{enumerate}
\vspace{-3mm}
\section{Limitations and Future Work}
Through our study we learned that regardless of monitoring or
comprehension tasks, animation staging strategies that prioritize
comprehension do better for participant response times, accuracy, and
comfort.
Yet, the differences between the staging strategies are slightly blurred
for tasks that are less complex or require less monitoring time.
In addition, our hybrid strategy was a simple combination of the
parameters used in the time-based and event-based strategies.
\edit{How event- and time-based parameters are combined for the hybrid strategy---and the user's awareness of the strategy---could impact monitoring tasks. In our study, participants were not informed of the delays between the actual event times and when they were shown in the animations (black bars in \autoref{fig:monitoring_results}). Their perception of their response time to the event was thus different from the actual response time to the event.
In addition, we used constant time/event thresholds for each animation strategy for our study.
We plan to explore the design space of adaptive strategies discussed in the previous section, along with indicators for animation lags in the future.
In addition, we plan to explore using these staging strategies to capture network ``states'' that will then be visualized as static, small multiples visualizations to be used for post-event analysis.
Lastly, we plan to incorporate the hybrid staging strategy into a visual analytic system for analyzing online dynamic networks to examine its applicability in real-world scenarios.
}
\section{Design Considerations}\label{sec:design_considerations}
Analyzing changes to dynamic networks in real time involves two main
kinds of tasks: \textit{monitoring}, where the analyst needs to
become aware of a particular kind of change as soon as it occurs, and
\textit{comprehension} where the analyst needs to understand what
changes have occurred to a graph over a period of time.
For real-time analysis, it is safe to assume that this period of time is
relatively short.
From the related research, it is clear that there are several tradeoffs
that need to be made in order for the viewer to keep track of the
changes.
The balance lies between keeping the analyst aware of changes to the
network as soon as they occur, yet not overwhelming them with too
many updates to the network.
We posit that the design of staged animated representations of
change in online dynamic graphs needs to balance three aspects:
\vspace{-2mm}
\begin{enumerate}[itemsep=-0.5mm]
\item[R1] Timeliness: The animated representation of the event should
occur as close as possible to the actual time of the event.
\item[R2] Mental Map preservation: The animated representation should
occur in a way that allows the viewer to track the changes to the
graph, thus preserving their mental map of the graph.
\item[R3] Minimize Transition Time: The animated transition should be
short enough to make effective use of the viewer's short-term memory
in recalling the changes in the graph.
\end{enumerate}
\vspace{-2mm}
For the purpose of this paper, we focus on changes to the network that
are triggered by new data, rather than by user interactions.
We use Chevalier et al.'s definitions of transition and
animation~\cite{Chevalier2014not} to define the design space of staged
transitions.
They define \textit{transition} as a pair of visual states (initial and
final).
An \textit{animation} is a series of images that provides the impression
of perceptual continuity between the initial state of the transition and
the final state.
We also define, for the purpose of our study, an \textit{event} as a
change in the network. An event can thus be (a) the appearance of a new
node, (b) the appearance of a new edge, (c) the disappearance of an
existing node, (d) the disappearance of an existing edge, and (e) a
combination of (a) and (b) or of (c) and (d).
We define a \textit{stage} of the animation as a representation of one
or more events collected based on a triggering parameter.
Since events occur over time, we consider both time
intervals and event count as triggering parameters for these stages.
Finally, we define \textit{animation time} as the duration over which
the animation plays out on the interface.
\begin{figure*}[t]
\includegraphics[width=\textwidth]{Figures/staging_strategies}
\vspace{-2em}
\caption{Staging strategies shown for a sample dynamic network from an
arbitrary start state (top left) to an arbitrary end state (bottom
left).
Nodes/edges that are created between these two states are marked in
blue in the end state, while nodes/edges that are deleted are marked
in orange.
Each creation/deletion event is numbered based on their order of
occurrence.
The three animation staging strategies studied are shown in the
middle: \textbf{(a)} time-based, \textbf{(b)} event-based, and
\textbf{(c)} hybrid.
Each animation stage, composed of deletion, movement, and creation
sub-stages is shown for each staging strategy relative to the event
timeline.
}
\label{fig:staging_strategies}
\end{figure*}
We base our animation design itself on Bach et
al.~\cite{Bach2014graphdiaries}, who
report that while deletion of nodes/edges and addition of nodes/edges
may both trigger positional changes to the network, pre-computing these
and ordering them as deletion-movement-addition works best to preserve
the viewer's mental map.
Our animation time $T_{an}$ will thus also be split into a time $t_d$
over which entities are deleted, time $t_m$ over which they are moved,
and a time $t_a$ over which new entities are added, in that order (see
bottom right of Fig.~\ref{fig:staging_strategies}).
In addition, we include a ``pause time'' ($t_p$) after each animation to
perceptually distinguish one stage from another.
Without a pause, the user would perceive a continuous series of
animations instead of the intended stages.
Thus, the total animation time $T_{an} = t_d + t_m + t_a + t_p$ \edit{as seen in Fig.~\ref{fig:staging_strategies}.}
Based on this, and on the requirements above, we examine three
strategies of staged animations.
For our study, we set $t_d$ = 450ms, $t_m$ = 600ms, $t_a$ = 450ms, and $t_p$ = 500ms, for a total animation time $T_{an}$ of 2 seconds.
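The resulting sub-stage schedule can be sketched as follows (a hypothetical Python illustration of the timing described above, in milliseconds; not the study's code):

```python
def stage_schedule(t_d=450, t_m=600, t_a=450, t_p=500):
    """Return (sub-stage, start_ms, end_ms) tuples for one animation
    stage, ordered deletion -> movement -> addition -> pause, plus the
    total stage duration T_an."""
    schedule, t = [], 0
    for name, duration in (("delete", t_d), ("move", t_m),
                           ("add", t_a), ("pause", t_p)):
        schedule.append((name, t, t + duration))
        t += duration
    return schedule, t

# With the study's values, one full stage spans 2000 ms.
schedule, T_an = stage_schedule()
```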
\textbf{Time-based staging:}
As the name implies, this staging strategy uses a specified set of time
intervals $t_i \ge t_a$ over which events are recorded.
Events are separated into deletions and additions, and the new positions
of the nodes in the dynamic graph are computed.
At the end of this time interval, the animation renders deletions,
movement to new positions, and additions in sequence.
This process is repeated for successive time intervals $t_i$.
One of the main advantages is that regardless of the data influx, the
animation will lag behind the actual events by a maximum of $t_i + t_a$,
satisfying the \textit{timeliness} requirement (R1).
However, this strategy does not place an upper limit on the number of
events per animation stage.
For instance, Fig.~\ref{fig:staging_strategies}~(a) shows three stages of
animation, with 4, 6, and 3 events respectively if represented using
time-based staging.
For high data influx, the number of events per stage can grow to an
overwhelming proportion.
Since there is no way of predicting the data influx for online dynamic
network, the effectiveness of this staging strategy hinges on the
data influx rate and the choice of $t_i$.
The time-based staging strategy is thus more suitable for
monitoring than for comprehension tasks, provided the animations are not overwhelming.
For the purpose of our study, we use 2 seconds as the value of $t_i$.
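As an illustration, the time-based batching described above can be sketched in a few lines (a simplified model, not the study implementation; the function name and event encoding are our own):

```python
# Illustrative sketch of time-based staging: events are grouped into
# fixed windows of length t_i; the batch size per window is unbounded.

def time_based_stages(events, t_i):
    """events: (timestamp, kind) pairs sorted by time; yields one
    batch per interval [k*t_i, (k+1)*t_i)."""
    stages, batch, window_end = [], [], t_i
    for ts, kind in events:
        while ts >= window_end:        # close windows until the event fits
            stages.append(batch)
            batch, window_end = [], window_end + t_i
        batch.append((ts, kind))
    stages.append(batch)
    return stages

# A burst of events lands in a single, possibly overwhelming stage:
stages = time_based_stages([(0.1, "add"), (0.4, "add"), (2.5, "del")], 2.0)
# stages -> [[(0.1, 'add'), (0.4, 'add')], [(2.5, 'del')]]
```

Note that nothing bounds `len(batch)`: the animation lag stays below $t_i + t_a$, but the number of events per stage grows with the influx rate.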
\textbf{Event-based staging:}
Given that human visual perception is limited in the number of
independently-moving objects it can track, it is reasonable to explore an
animation staging strategy that counts events as they occur and deploys
them once a threshold of $N$ events has been reached.
The separation of events and the subsequent animation follows the same
process as time-based staging, but there are a few differences in the
staging of the animation.
The number of events shown per stage is limited, which helps with
comprehension and mental map preservation (R2).
However, even when the data influx rate is high, only $N$ events can be
shown for every time period $T_{an}$, the animation time.
If the number of events that occur over $T_{an}$ is greater than $N$, it
results in events ``piling up'' to be animated, resulting in
an increasing time lag between the event and its animation.
On the other hand, when the data influx rate is very low, such as when the time taken for $N$ events to occur is much higher than $T_{an}$, those events
are not shown at all until the event threshold is reached.
Fig.~\ref{fig:staging_strategies}~(b) illustrates this issue.
While three events have occurred after Stage 2, they are not shown to the viewer as the event threshold is $N=5$ in this example.
Both these conditions illustrate how an event-based staging strategy \textit{does not} satisfy the timeliness requirement (R1).
Event-based staging strategies are thus more suitable for comprehension
tasks rather than those involving monitoring.
In our study, we use $N=5$ for comprehension tasks and $N=3$ for monitoring tasks.
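The event-count trigger and its failure mode at low influx can be sketched similarly (again a simplified illustration with our own names):

```python
# Illustrative sketch of event-based staging: a batch is flushed only
# once N events have accumulated; leftover events wait indefinitely,
# which is the timeliness problem (R1) noted above.

def event_based_stages(events, n):
    stages, batch = [], []
    for e in events:
        batch.append(e)
        if len(batch) == n:
            stages.append(batch)
            batch = []
    return stages, batch          # 'batch' holds events not yet shown

# Eight events with N = 5: one full stage is animated, while three
# events remain pending -- the situation illustrated after Stage 2.
stages, pending = event_based_stages(list(range(8)), n=5)
```

Each animated stage now contains at most $N$ events, but the pending list grows without bound when events arrive faster than one batch per $T_{an}$.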
\textbf{Hybrid staging:}
In order to address the shortcomings of time-based and event-based
staging strategies, we introduce a combination of the two: a
\textit{hybrid} strategy that uses both event and time thresholds to
trigger the next stage of the animation.
Thus the animation triggers at a specified event count $N$ or at a
specified time interval $t_i$, whichever occurs earlier.
For high data influx, the staging is based on an event-based trigger,
prioritizing comprehension (R2) over timeliness.
For low data influx, the staging is based on a time-based trigger,
prioritizing timeliness (R1).
Comprehension is not compromised in this case as the data influx rate is
low.
\edit{Traditional event- and time-based animations use uniform timings, even when there are no addition or deletion events.
This uniformity helps reduce mental load on the user as they can anticipate events.
However, when no events occur, this can contribute to the animation ``lag''.
Since our goal with the hybrid staging was to reduce lag without compromising comprehension, we use a variable animation time based on the kinds of events recorded for each stage.}
Thus, if there are no deletion events in a stage, the animation time
reduces to $T_{an} = t_m + t_a + t_p$.
Alternately if there are no addition events, the animation time reduces
to $T_{an} = t_d + t_m + t_p$.
There can also be cases where the time threshold $t_i$ is reached with
no events occurring, and this can trigger a convergence of the graph
involving only movement.
In this case, the animation time is simply $T_{an} = t_m + t_p$.
This reduces the transition time overall, making more effective use of
the viewer's short-term memory (R3).
The hybrid staging strategy thus combines the advantages of both
time-based and event-based strategies, \textit{except} in the case of
high-throughput data, where its timeliness is equivalent to that of
the event-based strategy.
However, the introduction of variable animation time reduces unnecessary
time delays even in this situation.
For our study, we maintain the same event and time thresholds for the hybrid staging as we do for event- and time-based staging.
We implement all three approaches for our user study in order to compare
these three strategies for both monitoring and comprehension tasks in
online dynamic networks.
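The hybrid trigger and the variable animation time described above can be sketched as follows (a minimal illustration using the study's timing parameters; the helper names and event encoding are our own assumptions):

```python
# Illustrative sketch of hybrid staging: trigger on event count N or
# time threshold t_i, whichever comes first, and shorten the animation
# when a stage lacks deletions and/or additions.
# Timings from the study: t_d=0.45, t_m=0.60, t_a=0.45, t_p=0.50 (s).

T_D, T_M, T_A, T_P = 0.45, 0.60, 0.45, 0.50

def should_trigger(batch, elapsed, n, t_i):
    """True when either the event or the time threshold is reached."""
    return len(batch) >= n or elapsed >= t_i

def animation_time(batch):
    """T_an for a stage of (timestamp, kind) events, kind in
    {'add', 'del'}; the movement and pause phases always run."""
    kinds = {kind for _, kind in batch}
    t = T_M + T_P
    if "del" in kinds:
        t += T_D
    if "add" in kinds:
        t += T_A
    return t

assert should_trigger([1, 2, 3], elapsed=0.4, n=3, t_i=2.0)  # event trigger
assert should_trigger([], elapsed=2.0, n=3, t_i=2.0)         # time trigger
# Addition-only stage: ~1.55 s; empty stage (movement only): ~1.10 s.
```

Dropping the unused deletion or addition phase is what shortens the transitions and makes better use of short-term memory (R3).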
\section{Discussion}
We will first summarize the results from our analyses before explaining
and generalizing them.
Of our proposed hypotheses, we find:
\vspace{-2mm}
\begin{itemize}[label=$\bullet$,leftmargin=1.5em]
\setlength{\topsep}{-0.08in}
\setlength{\itemsep}{-0.04in}
\item Event-based staging showed significantly shorter
response times compared to time-based staging, but not compared to
hybrid staging for two of the three monitoring tasks.
Hybrid staging showed shorter response times
than time-based staging for one monitoring task (\textbf{partially
confirming H1}).
\item Participant responses to comprehension tasks showed no
significant differences for tasks T4 and T5.
These results \textbf{reject H2a} and \textbf{H2b}.
\item Participant responses to the NASA TLX showed no significant
difference in participant perception of performance, but a
significantly higher level of frustration in time-based staging
compared to the remaining conditions, and in hybrid staging
compared to event-based staging (\textbf{partially confirming
H3a}).
\item Participants also reported significantly higher mental,
physical and temporal loads and effort on time-based staging
compared to the remaining two (\textbf{confirming H3b}).
They also reported higher physical load and effort in event-based
staging compared to hybrid staging.
\end{itemize}
\subsection{Explaining the Results}
We had posited that timeliness (R1), mental map preservation (R2), and
minimization of transition time (R3) were the driving requirements in
staging animations in online dynamic networks.
Our hypotheses were derived from these requirements, and while they were partially confirmed for monitoring tasks, they were rejected for the comprehension tasks.
In this section, we examine the instances where the hypotheses failed and why they failed.
\subsubsection{Monitoring Tasks}
The three monitoring tasks were each designed to involve more complex
monitoring than the previous one: fewer changes to the graph in T1, and
progressively more numerous and complex changes over longer time
durations in T2 and T3 (see Table~\ref{tab:video_durations}).
In addition, T1 was an abstract task
with unlabeled nodes, while T2 and T3 involved graphs with nodes labeled
as first names of people.
Specifically, T2 simply required participants to look for the
appearance of one (named) node, while T3 required them to look for two
named nodes, track them, and respond when they connect.
It is very likely that the differences between staging conditions were
less significant for easier and ``familiar'' tasks (such as T2) while
they were more pronounced for more complex tasks (T3).
Differences between the hybrid and event-based strategies
do not emerge even for the complex monitoring task (T3); the
explanation perhaps lies in the comprehension task results.
\subsubsection{Comprehension Tasks}
The argument of complexity can be made to explain the results of the
comprehension tasks as well.
Of the first two comprehension tasks, T4 appears to be too complex with
an almost equal distribution of right and wrong answers regardless of
the condition.
T5 appears to be too straightforward, with most participants answering
correctly regardless of condition.
T4 gives participants the label of a node to track, and requires them to
first seek out the node once the video starts (the node appears
\textit{after} the start of the video), and keep track of it,
remembering the changes in its degree.
Participants performed poorly regardless of the condition likely because
of their failure to identify the node in time, missing early changes to
the node's degree.
T5 demands the least from the participant's perception and
memory as the cluster is labeled and does not go through very complex
changes (it grows and splits).
While the follow-up study with expert participants was not quantitatively evaluated, it appears to validate our reasoning: participants could only make general observations about the network in time-based animations, such as changes in network size and cluster stability.
Participants observed that event-based animations sometimes had little or no changes occur over certain periods, while their impression of hybrid animations fell somewhere between the two, skewing towards event-based animation.
This observation is supported by the scalability simulation discussed in \autoref{sec:scalability}.
Given the limitation posed by the number of perceivable events~\cite{Pylyshyn1988tracking}, the hybrid strategy cannot be more ``timely'' than the event-based strategy for high event occurrence rates.
Other strategies that use Gestalt principles of completeness and common fate need to be adopted to group related events together so that they are perceived as one event.
This can theoretically improve the users' perception of multiple events, though it may come at the risk of reduced perception of anomalous activity.
\subsubsection{Participant Experience}
Responses on the NASA TLX scale were as predicted, except for ``performance''; the mixed responses there may reflect the fact that participants were not told whether their answers were correct.
\subsection{Generalizing the Results}
Overall, event-based tasks scored well on participant preference as they
reduced the load on participant perception (R2).
On the other hand, Fig.~\ref{fig:monitoring_results} clearly shows that
time-based strategies are best for timeliness (R1), which is achieved without compromising comprehension in the case of
low data influx.
The hybrid approach tries to bridge this gap between timeliness and
comprehension by providing both event-based and time-based thresholds.
An adaptive hybrid strategy that combines shorter time thresholds with a
higher event threshold or vice versa has the potential---with judicious
threshold choices---to provide the regularity of updates and the
timeliness of time-based transitions for low data influx, and the ease
of comprehension for high data influx.
Even in the case of a monitoring task, we see from the study that
comprehension is to be prioritized over timeliness.
\edit{
It is worth noting that the general approach of binning---used for any staged animation---comes at the expense of information loss within the bins.
This includes event order within the bin, and entire events themselves---such as nodes appearing and disappearing within the binned intervals.
These limitations can be mitigated by coupling additional views and metrics that track behaviors within a bin window and notify the user when such instances occur.
The effectiveness of these staging strategies alone will vary with the volume and rate of the incoming data.
Higher rates of data influx may call for an adaptive staging strategy that responds to the \textit{complexity} of the changes.
For instance, a large change can still be simple if the change is of one kind, e.g.\ a cluster of nodes being added to the network.
Another approach would be to combine such an adaptive staging (for higher data influx rates) with time-based staging (for lower rates), to reduce the animation lag.
At any rate, animation alone is not sufficient to monitor and comprehend online dynamic networks.
Instead, it might work to the dashboard designer's advantage to prioritize comprehension when it comes to animation staging, and provide supporting views for monitoring tasks.}
\section{Related Work}
A comprehensive review of dynamic network visualization techniques is
provided by Beck et al.~\cite{Beck2014}, who categorize the techniques
into \textit{animation} or ``time-to-time mapping'', \textit{timeline}
or ``time-to-space mapping'', and hybrids of both approaches.
In this section, we will briefly cover all three techniques with a
greater emphasis on time-to-time mapping given the scope of our paper.
We will also examine differences between offline and online dynamic
networks and the challenges presented by the latter, and go over some
basic principles of animation that inform our exploration of the design
space we consider for our study.
\subsection{Static Representations of Dynamic Networks}
\label{sec:dynamic}
The challenge in visualizing dynamic networks lies in accurately
depicting the changes that occur to the network over time, while also
ensuring that salient changes are perceptually more apparent to the
user.
Timeline-based approaches achieve this by providing an overview of
changes that occur between time steps.
For instance, TimeArcTrees, a technique introduced by Greilich et
al.~\cite{Greilich2009} vertically aligns graph nodes across time steps,
facilitating comparison between two states based on position of the
nodes and the organization of edges between nodes.
While graph scale is somewhat addressed by collapsing subgraphs into
parent nodes, comparison between two graphs still requires a one-to-one
comparison between nodes.
Burch et al.~\cite{Burch2011} address this issue in TimeArcTrees by
introducing parallel edge splatting: each time-state of the graph is
first mapped to a 1D vertical layout, with edge overplotting and
weighting shown as a heatmap between parallel lines.
Reda et al.~\cite{Reda2011} address issues of scale by focusing on
communities: each entity in the network is plotted as a polyline
extending from left to right, with vertical movements of the polyline
corresponding to its membership in the communities that exist within the
time period of interest.
Polylines that stay in the same community over time are bundled together
to form a band, further addressing issues of scale.
Vehlow et al.~\cite{Vehlow2015} improve on this work by using Gestalt
principles to show continuity between communities as they merge or split
over time, and use color to depict similarities between communities
over time.
Though timeline-based representations actively focus on the issue of
scale, they are restricted to a finite screen space in which to show all
salient states of the graph.
\edit{Previous works~\cite{simonetto2017drawing, SimonettoEB} have used a model for dynamic graphs that is not based on time slices: the DynNoSlice force-directed algorithm draws an offline dynamic graph directly in a space-time cube (2D+time).}
Static representations of \edit{dynamic graphs}
have been shown to work better than dynamic representations for analysis\edit{~\cite{Archambault2011, Farrugia11, brandesunrolling, simonetto2017drawing, SimonettoEB}} but such representations are more suitable for offline dynamic graphs, where the states of the network are known in advance.
\edit{Real-time data would produce a larger number of static states, as each event can be considered a new state unless the states are periodically summarized or binned.}
\subsection{Animated Representations of Dynamic Networks}
In contrast, animation-based techniques---typically based on node-link
diagrams---offer a more intuitive visualization of dynamic networks.
Force-directed layout generation is one of the most common approaches
for visualizing networks.
The method produces aesthetically-appealing layouts for static networks,
but is also useful in visualizing dynamic networks as changes to the
network---modeled as particles entering/leaving the system and coupling
with/decoupling from other particles in the system---can result in
smooth animations.
However, force-directed approaches are not without fault: minor changes
to the graph, such as the linking of two disconnected components, can
have a large impact on the overall layout.
Because position is used to encode entity relations, a user can form and
maintain an abstract interpretation of the network's structure, called a
``mental map''~\cite{Diehl2002}.
In their review of dynamic graph visualizations, Beck et
al.~\cite{Beck2014} identify the main goal of animation-based techniques
to be the preservation of this mental map, typically by keeping the
position of nodes in node-link diagrams stable over time, thus
minimizing the visual difference between the layouts of the
network across different time slices.
\subsubsection{Animated Transitions in Offline Dynamic Networks}
Most strategies for computing transitions between dynamic network states
have been developed for ``offline'' dynamic graphs, whose structure over
time is known at the time of visualization~\cite{Beck2014}.
Computing strategies thus use this ability to ``look ahead'' and
anticipate changes that inform the current layout.
For instance, Diehl and G{\"o}rg~\cite{Diehl2002} introduced a metric
called ``mental distance'' to indicate the similarity between two
layouts, and built a metagraph from the time sequence to help preserve
the mental map.
They also introduced a \textit{Foresighted Layout with Tolerance} (FLT)
approach for force-directed graphs that trades layout quality of
individual graphs for overall graph stability to preserve the mental
map.
G{\"o}rg et al.~\cite{Gorg2004} extended FLT to hierarchical and
orthogonal layouts and developed adjustment strategies for each.
Initial approaches to lay out dynamic graphs used the GRIP
algorithm~\cite{Gajer2000grip} for its speed in laying out large graphs.
For instance,
Collberg et al.~\cite{Collberg2003} added time-slice information to the
GRIP algorithm to compute distances between corresponding nodes from
consecutive time slices of the same graph.
A similar approach was used by Erten et al.~\cite{Erten2003} who
modified GRIP to preserve the mental map by minimizing positional
changes between nodes in one timeslice to the corresponding nodes in
subsequent timeslices.
Brandes et al.~\cite{Brandes2006} argue for the suitability of spectral
layouts to visualize dynamic networks, as layout changes scale
proportionally to changes in the graph.
They demonstrate their argument using spectral methods to animate
small-world network models over time.
More recently, GraphDiaries~\cite{Bach2014graphdiaries} combines the
advantages of timeline and animated approaches to represent dynamic
graphs.
It uses small multiples to provide an overview of the network
states over time, and uses staged animation transition to show the user
what changes between two given states.
When visualizing large dynamic networks, it becomes infeasible to track
individual node positions, and approaches focus instead on tracking
clusters.
Kumar and Garland~\cite{Kumar2006}, for instance, propose a
stratification technique to visualize hierarchies in large graphs, and
extend their approach to dynamic networks by tracking the changes in the
clustering metric over time and minimizing changes in node positions. \edit{A similar technique of clustering nodes that share a common motion from initial layout to the target layout is mentioned in~\cite{FriedrichGraphinMotion}}.
Sallaberry et al.~\cite{Sallaberry2012} use their previous rapid
graph layout approach~\cite{Muelder2008} to compute clusters in a given
dynamic network at each time step, and use supporting views to visualize
how clusters evolve over time.
\subsubsection{Animated Transitions in Online Dynamic Networks}
The main challenge for online dynamic networks is that visualization
strategies can only take into account the past states of the network as
the future states are unknown.
Unknown future states of a network can result from the data itself, with
the addition or removal of new entities and/or relationships.
They can also result from user interactions, such as filtering or
reorganizing data.
In both cases, the objective remains the same---preserving the user's
mental map of the network---while the challenges are different.
Most prior work on visualizing online dynamic networks addresses changes
due to user interactions rather than changes due to streaming data.
Early approaches to address the inherent unpredictability of these
networks used random field modeling where models of the layout and its
stability were used by a stochastic estimator that computes layouts in
terms of random fields~\cite{Brandes1997}, but ignored mental maps when
computing the layouts.
Brandes et al.~\cite{Brandes2006}, whose spectral layout is discussed in
Section~\ref{sec:dynamic}, argue that their approach is applicable to
online dynamic networks: since the graphs in consecutive time steps are
similar, their spectral layouts do not vary significantly.
However, there has been some work done on online graphs that involve
``streaming'' data.
In early work, North~\cite{North1996} proposed a heuristic
called DynaDAG for directed acyclic graphs to view incremental updates
to graph layouts based on a combination of operational primitives.
Lee et al.~\cite{Lee2006} use simulated annealing to address the
challenge of preserving the mental map while addressing changes in the
network from streaming data, but at the cost of speed as the
approach redraws the entire layout at each time step.
Frishman and Tal~\cite{Frishman2004} introduce ``spacer vertices'' that
minimize the movement of graph clusters between successive layouts as a
technique to preserve the mental map between updates.
In a later work~\cite{Frishman2008}, they address the same problem by
proposing an algorithm that uses a \textit{pinning weight} to identify
computationally-intensive parts of the network, i.e. areas of the
network that change over time, and computes their layouts using the GPU.
Gorochowski et al.~\cite{Gorochowski2012}
suggest a similar approach: an ``age-directed layout'' technique that both colors and
adjusts the degree of a node's movement based on its ``age''.
Nodes that show fewer updates to their connections over time are
considered ``older'' and move less, whereas ``younger'' nodes are
subject to greater movement.
Hayashi et al.~\cite{Hayashi2013} use two techniques to manage changes
to the graph: an automatic edge resizing technique reduces variations in
edge lengths over time, while a sorted sequential barycenter merging
technique updates the graph with new nodes based on their connectivity
to the existing nodes.
Both techniques reduce variation in the graph to preserve the mental map.
In more recent work, Crnovrsanin et al.~\cite{Crnovrsanin2015} use an
incremental algorithm based on $FM^3$~\cite{Godiyal2008}, which uses
GPU acceleration for fast computation.
The incremental algorithm, which enables fast computation between
updates while preserving the mental map, is followed by a refinement
technique that uses an energy-minimization strategy to reduce edge
crossings and lengths for an aesthetic layout.
In our study of staging strategies, we use this incremental
algorithm to preserve the mental map and refine the graph layout between
animation stages.
\subsection{Staged Animation and Perception}
In the prior sections we looked at static and animated
representations of dynamic networks.
While static representations can provide an excellent overview,
animation provides a metaphor for transition (in the form of movement)
consistent with our mental model of the physical world.
In addition, animations can convey not only transitions, but also
causality~\cite{Scholl2000perceptual} (e.g. move nodes apart after deleting an
edge between them).
Animated transitions are widely used in human-computer interaction (HCI)
for providing fluid interactions and in information visualizations to
maintain continuity between different states of
visualizations~\cite{Elmqvist2011fluid}.
While layout strategies optimize the use of space---specifically, the
use of displacement---to preserve mental maps, animations can be said to
use \textit{time} for the same purpose.
However, animation needs to be used with care.
When animations show complex interactions of moving parts, the viewer's perception of transitions may be inaccurate~\cite{Tversky2002animation}.
Moreover, there are perceptual limits to the number of
independently-moving objects our visual attention can simultaneously
track~\cite{Pylyshyn1988tracking}.
This ability is further impacted by the speed of the animation; faster animations reduce the number of objects we can track
simultaneously, while also reducing the tracking
accuracy~\cite{Feria2013speed}.
Strategies to minimize the number of ``objects'' that the user needs to
track typically discretize objects over time and/or space.
Trajectory bundling, for instance, uses discretization over space,
``grouping'' individual objects and moving them together, thus
taking advantage of the Gestalt principle of common fate to improve
the viewer's ability to track more points~\cite{Du2015trajectory}.
On the other hand, staged animation~\cite{Heer2007animated} uses the
discretization-over-time approach, where animated transitions are presented in stages over successive time intervals.
While users consistently prefer staged animations over simultaneous
ones, staging is not always recommended.
Staging can increase the time span between events, rendering the user's short-term memory unreliable~\cite{Bach2014graphdiaries}.
When it comes to dynamic graph visualization, there is a limit to the
amount of trajectory bundling that can be done while preserving the
mental map of the graph, making staged transition a promising strategy.
GraphDiaries~\cite{Bach2014graphdiaries} is, to our
knowledge, the only work that examines strategies for staging animations in dynamic graph visualization.
It splits the staging into element removal, transformation, and addition, in that order.
This order is shown to be optimal in reducing ambiguity while also
minimizing transition time.
However, GraphDiaries is designed for \textit{offline} dynamic graphs,
allowing for the choice of staging strategies ahead of time to reduce the
user's perceptual load.
\edit{Wang et al.~\cite{nonuniwangetall} investigate a non-uniform time slicing approach by adapting histogram equalization to create time slices of equal visual complexity, thus conveying more important details when there is a sudden influx of edges.
However, in the absence of user studies, it remains unclear which graph analysis tasks benefit from non-uniform versus uniform time slicing.}
In this paper, we explore animation strategies for \textit{online}
dynamic graphs, wherein we adapt staging order and timing from
GraphDiaries but study the effects of time-based and event-based
animations, as well as a hybrid of the two approaches that we introduce.
\section{Results}
We split our results into monitoring (hypothesis H1),
comprehension (H2a \& H2b), and participant experience (H3a \& H3b), and
report the results in detail under each.
\begin{figure}[tb]
\centering
\includegraphics[width=0.8\columnwidth]{/Figures/monitoring_results}
\vspace{-3mm}
\caption{
Differences between \edit{the time at which an event occurs}
(\edit{i.e.\ the \textit{y=0} line}),
the time at which the event is shown in the animation (black horizontal
lines), and participant response times \edit{(colored markers with error bars)} for monitoring tasks (T1--T3).
Error bars represent 95\% CI.
}
\label{fig:monitoring_results}
\end{figure}
\subsection{Monitoring Tasks}
\begin{figure}[t]
\centering
\includegraphics[width=.8\columnwidth]{/Figures/comprehension_results}
\vspace{-3mm}
\caption{
Distribution of correct and incorrect answers for the comprehension
tasks (T4 \& T5), categorized by animation staging strategy.
}
\label{fig:comprehension_results}
\vspace{-3mm}
\end{figure}
Fig.~\ref{fig:monitoring_results} shows participant response distribution
for monitoring tasks (T1--T3).
The figure shows two kinds of delays: (1) the delay in participant
response when compared to the actual event (the colored markers in the figure)
and (2) the delay between an event occurring and it being shown in the
video (the horizontal \edit{black} lines in the figure).
We separate the two delays.
We measure the difference between the time at which the participant
is shown the event and their corresponding response as the ``participant
response time''.
We use this as a measure of how easily the participant was able to see
the event when it was shown to occur (hypothesis H1).
\edit{This also addresses cases where participants respond \textit{before} the event occurs, which in monitoring tasks is an error at the same level or worse than a \textit{delayed} response.
We observe in the raw data that this happened only for three participants in time-based staging.}
We analyzed the participant response times using a
repeated-measures analysis of variance (RM-ANOVA) and found a
significant effect of animation staging strategy on the time difference
between participant response time and the time at which the event was
shown on the video for tasks T1 ($F(2, 40) = 4.04, p<0.05$)
and T3 ($F(2, 40) = 99.43, p<0.001$).
No significant difference in response times was found between the
conditions for task T2.
This partially confirms hypothesis H1.
A post-hoc Tukey HSD test showed significant pairwise differences
between the time-based and event-based staging conditions ($p<0.05$) for
task T1, and between the time-based condition and both the event-based
and hybrid conditions ($p<0.001$) for task T3 (see \autoref{tab:monitoring_tukey_results}).
\begin{table}[tb]
\small
\centering
\begin{tabular}{c l c c c c c}
\toprule
\multirow{2}{*}{Task} & \multirow{2}{*}{Condition} &
\multicolumn{2}{c}{Response Diff. (Video)} &
\multicolumn{3}{c}{Tukey HSD Significance} \\
\cmidrule(lr){3-4}
\cmidrule(lr){5-7}
& & Mean (sec) & S.D. (sec) & Event & Hybrid & Time \\
\midrule
& Event & 0.233 & 1.580 & -- & & \textbf{*} \\
T1 & Hybrid & 0.753 & 1.158 & & -- & \\
& Time & 1.454 & 1.606 & \textbf{*} & & -- \\
\cmidrule(lr){1-7}
& Event & 3.230 & 1.522 & -- & & \\
T2 & Hybrid & 3.132 & 7.589 & & -- & \\
& Time & 4.892 & 4.046 & & & -- \\
\cmidrule(lr){1-7}
& Event & 0.464 & 0.788 & -- & & \textbf{**} \\
T3 & Hybrid & 1.213 & 1.164 & & -- & \textbf{**} \\
& Time & 6.722 & 2.294 & \textbf{**} & \textbf{**} & -- \\
\midrule
& & & \multicolumn{2}{c}{\textbf{*} : $p<0.05$} &
\multicolumn{2}{c}{\textbf{**} : $p<0.001$} \\
\bottomrule
\end{tabular}
\vspace{1mm}
\caption{Results of monitoring tasks, with pairwise significant
differences between conditions.}
\label{tab:monitoring_tukey_results}
\vspace{-2mm}
\end{table}
\subsection{Comprehension Tasks}
\autoref{fig:comprehension_results} shows the distribution of correct
and incorrect answers for all the comprehension tasks, categorized by
the staging conditions.
We analyzed the distribution of correct and incorrect responses for each
question using Cochran's Q-Test.
Overall, we found no significant difference in response correctness
between conditions for tasks T4 and T5 (rejecting H2a and H2b).
\subsection{Participant Experience}
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{/Figures/tlx_results_singlecol}
\vspace*{-6mm}
\caption{
Distribution of participant responses showing the mental, physical, and temporal
demands of the tasks, along with their perception of performance,
effort, and frustration while doing the tasks.
Scores were self-reported on the 21-point NASA TLX
scale and converted to a 7-point scale for comprehension.
}
\label{fig:tlx_results}
\end{figure}
The distribution of participant responses on the NASA TLX
for each staging condition is shown in Fig.~\ref{fig:tlx_results}.
We performed Friedman's test on each scale separately, and found
significant differences between the staging conditions for
participant responses on
mental demand ($\chi^2=15.89, p<0.001$),
physical demand ($\chi^2=8.79, p<0.05$),
temporal demand ($\chi^2=9.66, p<0.01$),
effort ($\chi^2=5.05, p<0.001$), and
frustration ($\chi^2=11.14, p<0.01$).
No significant difference was found
for participant responses on performance.
A post-hoc Conover test revealed pairwise significant
differences (see Table~\ref{tab:tlx_friedman_results}).
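For reference, Friedman's statistic reported above can be computed from the raw subjects-by-conditions score matrix as follows (an illustrative pure-Python sketch, not the authors' analysis code; mid-ranks are used for ties, and the tie-correction term is omitted for brevity):

```python
# Illustrative sketch: Friedman's chi-squared statistic for a
# within-subjects design with one score per condition per subject.

def friedman_chi2(scores):
    """scores: list of per-subject lists, one score per condition."""
    n, k = len(scores), len(scores[0])
    rank_sums = [0.0] * k
    for row in scores:
        order = sorted(range(k), key=lambda j: row[j])
        ranks = [0.0] * k
        i = 0
        while i < k:                  # assign mid-ranks to tied values
            j = i
            while j + 1 < k and row[order[j + 1]] == row[order[i]]:
                j += 1
            mid = (i + j) / 2 + 1
            for m in range(i, j + 1):
                ranks[order[m]] = mid
            i = j + 1
        for j in range(k):
            rank_sums[j] += ranks[j]
    return (12.0 / (n * k * (k + 1))) * sum(r * r for r in rank_sums) \
        - 3.0 * n * (k + 1)

# Ten subjects ranking three conditions identically gives chi^2 = 20.
assert abs(friedman_chi2([[1, 2, 3]] * 10) - 20.0) < 1e-9
```

The statistic is then compared against a $\chi^2$ distribution with $k-1$ degrees of freedom (here $k=3$ conditions).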
\begin{table}[tb]
\small
\centering
\vspace{-5mm}
\begin{tabular}{l l r r r r}
\toprule
\multirow{2}{*}{Scale} &
\multirow{2}{*}{Condition} &
Score &
\multicolumn{3}{c}{Conover Test Significance} \\
\cmidrule(lr){4-6}
& & (median) & Event & Hybrid & Time \\
\midrule
& Event & 9 & -- & & \textbf{**} \\
Mental & Hybrid & 10 & & -- & \textbf{**} \\
& Time & 15 & \textbf{**} & \textbf{**} & -- \\
\cmidrule(lr){1-6}
& Event & 3 & -- & \textbf{*} & \textbf{**}\\
Physical & Hybrid & 3 & \textbf{*} & -- & \textbf{**}\\
& Time & 4 & \textbf{**} & \textbf{**} & -- \\
\cmidrule(lr){1-6}
& Event & 9 & -- & & \textbf{**} \\
Temporal & Hybrid & 7 & & -- & \textbf{**} \\
& Time & 11 & \textbf{**} & \textbf{**} & -- \\
\cmidrule(lr){1-6}
& Event & 10 & -- & \textbf{**} & \textbf{**}\\
Effort & Hybrid & 10 & \textbf{**} & -- & \textbf{**}\\
& Time & 13 & \textbf{**} & \textbf{**} & -- \\
\cmidrule(lr){1-6}
& Event & 5 & -- & \textbf{**} & \textbf{**}\\
Frustration & Hybrid & 6 & \textbf{**} & -- & \textbf{**}\\
& Time & 11 & \textbf{**} & \textbf{**} & -- \\
\midrule
& & \multicolumn{2}{c}{\textbf{*} : $p<0.05$} &
\multicolumn{2}{c}{\textbf{**} : $p<0.01$} \\
\bottomrule
\end{tabular}
\vspace{1mm}
\caption{Participant responses on the NASA Task Load Index for each
condition with pairwise significant differences marked.}
\label{tab:tlx_friedman_results}
\vspace{-7mm}
\end{table}
From the table, we see that participants were significantly more
frustrated using time-based staging than the other two,
though participant self-report of performance showed no
significant differences (partially confirming H3a).
On the other hand, mental load, physical load, temporal load, and effort
were all significantly higher for time-based staging than for the
remaining two (confirming H3b).
In fact, with the exception of mental and temporal load, there are
significant pairwise differences between the hybrid and event-based
strategies as well; hybrid shows a lower median temporal load than
event-based, but fares worse on the other measures.
\section{Scenario: Computer Security Monitoring}
\label{sec:use_case}
Tools that allow users to examine data in real time or after the fact help the user identify behaviors and patterns of interest.
Once the behavior is characterized unambiguously, algorithms can be devised to automatically recognize it, but in mission-critical applications, it always helps to have a visual display of data so that new or unexpected patterns have a chance of being recognized.
For example, CCTV footage is most often examined in detail after an event occurs, usually prompted by a live observation of something of interest (e.g., a robbery or altercation).
On the other hand, security personnel also monitor CCTV footage in real time, looking for indicators of threatening events.
These ``indicators'' cannot be coded into an automated recognition system; detection depends on the experience and perceptiveness of the security personnel monitoring the footage.
What determines \emph{when} the data needs to be observed is the importance of immediacy.
If an event of interest requires an immediate response from the viewer, and if the viewer has an understanding of the kind of behavior to anticipate, they can proactively observe the data.
A good example
is computer security monitoring, as seen in the four cyber security data sources\footnote{\url{https://csr.lanl.gov/data/cyber1/}} within the Los Alamos National Laboratory's (LANL) corporate, internal computer network.
One data category of interest concerns network flow events.
These events indicate network connections between computers and contain relevant information such as time of connection, duration of connection, amount of information moved, and the protocol used.
The authors of the network flow event dataset point out that ``compromise events'' are often intentionally created by authorized attackers to test network security~\cite{akent-2015}.
In their dataset of events spanning 58 days, 749 such compromise events occur, and are recorded with information such as \emph{domain}, \emph{source computer}, and \emph{destination computer}.
They observe that models are yet to be created to automatically identify some of the compromise events that are recorded in this dataset.
Currently, there exist some forms of attacks and malicious behavior in this dataset, the indicators of which have not yet been successfully validated or correlated.
This further underscores the need for compromise events to first be observed and understood.
Depending on the sensitivity of the data, this type of dataset might need live monitoring in addition to other means of tracking such events.
\section{Follow-Up Study with Experts}
\label{sec:follow_up}
The participants in the prior study did not have much expertise in network visualization, though they had varying degrees of familiarity with the subject.
To follow up our findings with observations from expert participants, we conducted a qualitative study of the three animation strategies with domain experts in network theory and visualization.
The participants (P1 and P2) were graduate Ph.D.\ students with over 5 years of experience in network analysis and visualization, who design and implement new network visualization techniques.
We followed a think-aloud protocol where each participant was shown an animation of a dynamic network, and asked to narrate aloud what they thought was happening throughout the animation.
At the end of each animation, they were asked follow-up questions on what they had just observed, and about the state of the network in general, and observations they had made in particular.
The video was played only once (during the think-aloud component), and participants were asked not to play back the video.
The study was administered remotely via web conference, with the two participants using MacBook Pro laptops with resolutions of 2560$\times$1600 and 1920$\times$1080, respectively.
The study involved 3 different animation clips for each animation strategy.
One clip used the MIT proximity dataset~\cite{Kent2015comprehensive} mentioned in \autoref{sec:tasks_dataset},
while the remaining two clips used the LANL dataset mentioned in the introduction.
For the MIT dataset, participants were asked to observe and track individuals that moved through the network the most, and individuals who had high centrality, i.e.\ those who connected two or more clusters in the network.
For the LANL dataset, participants were asked to keep track of computers that connected to multiple other computers, and those that switched connections between different computers frequently.
Participants viewed training clips to familiarize themselves with the kind of data they would observe and its context.
They were shown video snippets describing the behaviors in the network we were trying to identify.
For example, we showed the participants video snippets of what a stable network would look like.
As in the previous study, the same data was animated using the three different strategies and shown to the participants.
Learning effects were minimized by (a) ordering the clips so that the same data was not shown in successive clips, and (b) changing the node identifiers between animation strategies.
Finally, the order in which the videos were shown was varied between participants.
Our observations are described below.
\textbf{Time-Based Animation.}
Participants' responses to time-based animation clips seemed rushed.
They were unable to follow sudden changes that occurred in the network, and found it difficult to track individual node connections.
To catch up to the rapid changes in the network, the participants went from using specific network terminologies to describe changes to general phrases such as ``\textit{I see a lot of changes in the network}'' (P1), and ``\textit{changes happening everywhere}'' (P2).
However, we noticed that both participants provided similar descriptions of the overall network evolution, especially attributes such as changes in network size and stability of clusters.
\textbf{Event-Based Animation.}
Participants found certain sections of the network to be quite dense, stating that they seemed like a ``hairball'', which made it difficult to identify which nodes were connected to which others.
The participants were able to identify the overall network evolution trend which included descriptions of stability and cluster changes.
There were sections of the video where both participants reported seeing ``\textit{not many changes}''.
\textbf{Hybrid Animation.}
Participants were able to identify, with little effort, the stable and unstable sections of the network, the nodes that connected different parts of the network together, and the overall network evolution trend.
Their description of the network evolution was less rushed.
The participants mentioned that the dense sections of the network were ``\textit{hard to follow and explain}''. However, they were still able to explain individual connectivity within the dense network, which was not the case in the event- and time-based conditions.
\section{Scalability}
\label{sec:scalability}
\begin{figure}[t]
\centering
\includegraphics[width=0.95\linewidth]{/Figures/TEH_1000.pdf}
\vspace{-1em}
\caption{
Results of simulating individual animation strategies for varying event occurrence rates and time intervals.
}
\vspace{-4mm}
\label{fig:scalability}
\end{figure}
We explore how scalability affects each of the animation strategies by varying the volume of data (i.e., number of events) and the time interval in which they occur.
Testing user perception of a wide range of event counts and intervals in a user study would be impractical due to the large number of parameter combinations.
Instead, we ran a simulation of each animation strategy for one minute.
For the time-based animation strategy, all events are shown in near real time.
The limiting factor is the number of events a user can perceive at a given time.
Therefore, we output the number of events shown per animation cycle.
As previously stated, studies have shown that individuals can successfully track up to five simultaneously moving objects~\cite{Pylyshyn1988tracking}.
Based on this constraint, we limit the number of events that can be shown at one time to 5 for event-based and hybrid strategies.
This constraint will affect the delay in seconds between the time an event occurs and the time that it is shown in the animation.
For the event-based strategy, we also examine the ``offset''---the time taken for 5 events to accumulate before being animated---which depends on the data influx rate.
We vary the volume of data (i.e., the number of events) from 1 to 10 events at a time, and vary the time interval over which these chunks of events occur from 8 seconds down to 0.001s.
The results of our simulation are shown in Figure \ref{fig:scalability}.
The dashed line at the 2 second mark represents the point when the rate at which events occur is greater than the rate at which they are shown in the animation.
The number of events that occur is mapped to the y-axis, the time taken for each interval is mapped to the x-axis, and color is mapped to either the time delay (for event-based and hybrid animations) or number of events displayed per animation cycle (for time-based animations).
We see a staircase effect for the time-based strategy results (\autoref{fig:scalability}).
Intervals longer than 2 seconds follow a predictable pattern of being dependent on the number of events
in that interval as there is enough time for the animation cycle to complete before new events occur.
With an increase in the number of events occurring per interval, the ability to perceive the events quickly declines as the events exceed 5 per interval.
The simulation results for event-based and hybrid strategies in \autoref{fig:scalability} show a similar staircase effect that extends beyond the animation cycle of 2 seconds.
Since at most 5 events can be shown in any animation cycle, multiple cycles are needed to handle a high volume of data.
In either case, events pile up to the point that the animation cycles cannot keep up.
For instance, 10 events occurring in every two-second interval will occupy
the next two animation cycles, during which 20 more events accumulate.
The event-based strategy shows an offset at the bottom when fewer than five events occur over a standard animation cycle.
That cycle will not be triggered until five events have accumulated, thus increasing the time delay between the first event occurring and it being shown.
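The pile-up and offset described above can be reproduced with a toy simulation. This is an illustrative reconstruction under the stated assumptions (5-event cap, 2-second animation cycle), not the simulator used to produce the figure:

```python
CYCLE = 2.0  # animation cycle length in seconds
CAP = 5      # events animated per cycle (object-tracking limit)

def simulate(events_per_interval, interval, duration=60.0):
    """Worst-case delay (s) between an event occurring and being shown,
    for the event-based strategy."""
    queue = []            # arrival times of events not yet animated
    max_delay = 0.0
    t_event, t_cycle = 0.0, 0.0
    while t_cycle < duration:
        # enqueue every burst that has occurred by the start of this cycle
        while t_event <= t_cycle:
            queue.extend([t_event] * events_per_interval)
            t_event += interval
        # an event-based cycle fires only once CAP events have accumulated
        if len(queue) >= CAP:
            shown, queue = queue[:CAP], queue[CAP:]
            max_delay = max(max_delay, t_cycle + CYCLE - shown[0])
        t_cycle += CYCLE
    return max_delay

# 10 events per 2 s exceeds the 5-events-per-cycle budget, so delays grow
print(simulate(5, 2.0), simulate(10, 2.0))
```

With 5 events per 2-second interval the delay stays constant, while 10 events per interval produces an unbounded backlog, matching the staircase behaviour described above.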
This simulation confirms our notion that the hybrid strategy eliminates the inordinate delays that we see in event-based animations with low event rates, and works in a manner similar to event-based strategy at higher event rates.
It also indicates that the hybrid strategy's timeliness is no better than that of the event-based strategy at high event rates.
\vspace{-2mm}
\section{Introduction}
\label{sec:intro}
Medical studies using real-world data (RWD) may benefit from exploiting clinical reports, a rich albeit unstructured part of electronic health records (EHR) collected during care episodes. These reports may contain relevant information that is scarce in structured EHR: by some estimates, up to 80\% of entities found in clinical reports are absent from other media \cite{raghavan_how_2014}.
In this context and given the scale of data to analyse, natural language processing (NLP) methods are needed to extract meaningful medical information from this unstructured medium, and help address challenges such as automatic detection of adverse drug reaction, clinical trial eligibility or identification of temporal associations \cite{kreimeyer_natural_2017}.
Initially bound to purely rule-based methods, the NLP field has been shifting towards machine learning (ML) algorithms that can detect patterns automatically. The most recent techniques rely on a first processing stage to represent free-text data into machine-readable input using models known as word embeddings, whose goal is to provide a vector representation that conveys as much semantic and syntactic information as possible.
Methods such as GloVe \cite{pennington_glove_2014}, Word2Vec \cite{mikolov_efficient_2013} or fastText \cite{bojanowski_enriching_2017} learn meaningful static representations for words, while newer embedding algorithms like ELMo \cite{peters_deep_2018} and XLNet \cite{yang_xlnet_2020} incorporate contextual information into the embeddings. Introduced by Devlin et al., the Bidirectional Encoder Representations from Transformers (BERT) model \cite{devlin_bert_2019} efficiently produces rich context-dependent representations for words and consistently demonstrates state-of-the-art performance in most NLP applications. In French, FlauBERT \cite{le_flaubert_2020} and CamemBERT \cite{martin_camembert_2020} are trained on general-purpose French-language documents crawled from the Internet.
Using transfer-learning, such pre-trained models can serve as a basis for a variety of NLP tasks. In the context of a clinical data warehouse (CDW), an ecosystem of researchers and clinicians may rely on a shared pre-trained language model, and fine-tune it on their specific tasks.
Previous work has shown that using specialty language for training BERT-based models can widely increase performances \cite{lee_biobert_2019,alsentzer_publicly_2019}: specialty languages and clinical reports in particular follow a distinct syntax and vocabulary, such that training a model to learn these specificities can represent an advantage. Moreover, Martin et al \cite{martin_camembert_2020} have determined that a model trained on a carefully selected subcorpus could achieve comparable results despite using less than 10\% of the original training data.
In this work, we leverage the CDW of the Greater Paris University Hospitals (Entrepôt des Données de Santé, EDS) to determine whether there is a significant advantage to using a word embedding model specifically trained on French clinical reports for clinical NLP tasks, and address the following questions:
\begin{enumerate}
\item Is there an advantage to retraining from scratch, as opposed to fine-tuning an existing model, given the excess computational toll and environmental footprint?
\item How many training steps and examples are necessary to learn useful knowledge about the speciality language?
\end{enumerate}
\section{Methods}
\label{sec:methods}
This study followed the RECORD reporting guideline \cite{benchimol_reporting_2015}; the checklist is available in the appendix \ref{app:reproducibility/RECORD}.
\subsection{Dataset}
\label{sec:methods/datasets}
The EDS contains data collected in the EHR of 39 hospitals from the greater Paris area and relative to 11M patients, including 80M clinical text reports.
The training corpus for this work consists of clinical reports gathered between August 2017 and July 2021. Documents are pseudonymised \cite{paris_desidentification_2019} to preserve privacy, by replacing directly identifying entities with fake entities.
Reports containing fewer than 20 characters were removed, and the corpus was resampled to limit the influence of over-represented report types (e.g. prescriptions, consultation or imaging reports).
We pre-processed selected documents with \texttt{EDS-NLP} \cite{dura_eds-nlp_2022} and \texttt{spaCy} \cite{honnibal_spacy_2020} by removing textual pollution, such as administrative information shared by a large proportion of the reports, which could skew the distribution seen by the model (see appendix \ref{app:dataset} for details). Although clinical reports may contain other forms of duplicate information \cite{digan_evaluating_nodate}, we remained conservative and did not push the pre-processing further.
This study was authorised by the EDS institutional review board (IRB 00011591, project CSE-19-20). The EDS is approved by the French national data protection agency (CNIL, decision 1980120).
\subsection{Models}
\label{sec:methods/models}
We used the architecture of CamemBERT-base for all the experiments and compared two training strategies: fine-tuning or retraining it “from scratch”. In what follows, we focus on two models which we compare to the freely-accessible CamemBERT-base model:
\begin{enumerate}
\item \textit{EDS-fine-tuned}, a version fine-tuned on our clinical documents but using the original weights as the initialisation.
\item \textit{EDS-from-scratch}, a version trained from the ground up. This approach lets us retrain a domain-specific tokenizer.
\end{enumerate}
Since most reports go over the 512-token limit imposed by the BERT architecture, we decided to segment documents into 128-token-long sequences.
\subsection{Training}
\label{sec:methods/training}
The dataset was split into training (19.6M documents) and validation subsets (1M documents). We pre-trained seven models:
\textbf{EDS-from-scratch} was initialised with random weights. We followed CamemBERT’s training procedure, and ran the experiment for twelve full epochs, totalling more training steps to compensate for the smaller batch size.
\textbf{EDS-fine-tuned} used CamemBERT-base as its initialisation point, and was trained for one epoch on the full dataset. To estimate the impact of the number of training samples, we also trained five additional versions using 100K, 300K, 1M, 3M and 10M documents, sampled uniformly from the training dataset described earlier (\textit{EDS-fine-tuned} itself uses the full 21M). Every model used in this comparison was trained with the same number of steps, corresponding to one full epoch on 21M documents.
We relied on the transformers library by \texttt{HuggingFace} \cite{wolf_transformers_2020}, \texttt{Pytorch} \cite{noauthor_pytorchpytorch_2022} and \texttt{Pytorch-Lightning} \cite{falcon_pytorch_2019} for our code base.
\subsection{Validation}
\label{sec:methods/validation}
\subsubsection{Intrinsic validation}
\label{sec:methods/validation/intrinsic}
We validated our models using their perplexity measured on a held out validation set, and investigated the influence of the tokenization step. We compared the distribution of tokenized sequence lengths to evaluate whether the tokenizer had learnt some useful information about the clinical vocabulary.
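Perplexity is the exponential of the average per-token negative log-likelihood; a minimal sketch follows, using a made-up uniform model as a sanity check rather than our validation data:

```python
import math

def perplexity(token_nlls):
    """Perplexity from per-token negative log-likelihoods (natural log)."""
    return math.exp(sum(token_nlls) / len(token_nlls))

# Sanity check: a uniform model over a 32k-token vocabulary assigns
# NLL = ln(32000) to every token, so its perplexity equals the
# vocabulary size.
uniform_nlls = [math.log(32000)] * 10
print(round(perplexity(uniform_nlls)))  # 32000
```

A language model that has learnt the clinical register should yield a lower average NLL, and hence a lower perplexity, on held-out clinical reports.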
\subsubsection{Extrinsic validation}
\label{sec:methods/validation/extrinsic}
We validated our models on two named entity recognition (NER) tasks, see appendix \ref{app:tasks} for detail:
\begin{itemize}
\item APMed \cite{neuraz_natural_2020,jouffroy_hybrid_2021}: a corpus for extracting drug related information in clinical reports in French.
\item QUAERO \cite{neveol_quaero_nodate}, a compilation of two French corpora annotated to ten types of clinical entities:
\begin{itemize}
\item EMEA includes long texts containing information on marketed drugs from the European Medicines Agency;
\item MEDLINE regroups titles of research articles.
\end{itemize}
\end{itemize}
Every task was framed as a token classification problem, using IOB2 notation. We used the same architecture for every test, and fine-tuned the full model on each downstream task. We added a classification head consisting of:
\begin{itemize}
\item A fully-connected hidden layer with ReLU activation;
\item A fully-connected output layer.
\end{itemize}
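The head above can be sketched in NumPy; the dimensions and the five-tag IOB2 scheme are illustrative assumptions, not the exact configuration used:

```python
import numpy as np

rng = np.random.default_rng(0)

def classification_head(hidden_states, W1, b1, W2, b2):
    """Token-classification head: a fully-connected hidden layer with
    ReLU activation, followed by a fully-connected output layer that
    emits one logit per IOB2 tag for every token."""
    h = np.maximum(hidden_states @ W1 + b1, 0.0)  # hidden layer + ReLU
    return h @ W2 + b2                            # per-token tag logits

# Illustrative dimensions: BERT-base hidden size, a 256-unit head, and a
# hypothetical 5-tag IOB2 scheme (O, B-DRUG, I-DRUG, B-DOSE, I-DOSE)
d_model, d_hidden, n_tags = 768, 256, 5
W1 = rng.normal(scale=0.02, size=(d_model, d_hidden)); b1 = np.zeros(d_hidden)
W2 = rng.normal(scale=0.02, size=(d_hidden, n_tags)); b2 = np.zeros(n_tags)

tokens = rng.normal(size=(128, d_model))  # one 128-token sequence of embeddings
logits = classification_head(tokens, W1, b1, W2, b2)
print(logits.shape)  # (128, 5)
```

In training, these logits would be fed to a cross-entropy loss over the IOB2 tags, with gradients flowing back through the full BERT encoder.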
Experiments were reproduced ten times with different random initialisations, to obtain a confidence interval around the results. We used seqeval \cite{nakayama_seqeval_2018} to compute the micro-averaged F1-score, and we evaluated the statistical significance using a Wilcoxon signed-rank test. All tests were 2-sided and p-values were considered statistically significant when lower than 0.05.
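The significance test can be sketched with \texttt{scipy}; the F1-scores below are fabricated placeholders for the ten paired runs, not our measurements:

```python
from scipy.stats import wilcoxon

# Hypothetical micro-averaged F1-scores from ten paired runs per model
# (placeholder values only, not the study's results)
f1_finetuned = [0.901, 0.905, 0.899, 0.903, 0.904,
                0.900, 0.902, 0.906, 0.898, 0.902]
f1_baseline  = [0.864, 0.870, 0.861, 0.868, 0.867,
                0.865, 0.866, 0.872, 0.860, 0.869]

# Two-sided Wilcoxon signed-rank test on the paired differences
stat, p = wilcoxon(f1_finetuned, f1_baseline)
print(p < 0.05)
```

Pairing the runs by random seed makes the signed-rank test appropriate here, since the two models are evaluated under identical initial conditions.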
\section{Results}
\label{sec:results}
\subsection{Training}
\label{sec:results/training}
Training \textit{EDS-from-scratch} on 21M reports for 12 epochs took 25 days on 8 Tesla V100 GPUs. Each version of \textit{EDS-fine-tuned} was trained for 2 days on the same setup. Total carbon emissions were estimated using the Machine Learning Impact calculator \cite{lacoste_quantifying_2019} at 10 kgCO2eq for each version of \textit{EDS-fine-tuned} and 110 kgCO2eq for \textit{EDS-from-scratch}.
\subsection{Intrinsic validation}
\label{sec:results/validation-intrinsic}
The median number of tokens needed to represent one document was 1724 for CamemBERT’s original tokenizer, and 1500 using our EDS-specific tokenizer.
The models’ loss on unseen data was still decreasing at the end of training (see appendix \ref{app:training}).
\subsection{Extrinsic validation}
\label{sec:results/validation-extrinsic}
\subsubsection{Comparison with CamemBERT-base}
\label{sec:results/validation-extrinsic/comparison}
We compared the transfer-learning capabilities of our models with CamemBERT-base, and recapitulated the results in Table \ref{tab:perf-extrinsic}.
\begin{table}[h]
\caption{Performance of our models on multiple extrinsic tasks compared to CamemBERT-base. Results are formatted as mean (±std).\\ *: significantly different from CamemBERT-base ($p<0.05$)}
\label{tab:perf-extrinsic}
\begin{adjustwidth}{-.5in}{-.5in}
\centering
\begin{tabular}{ ccccc }
\toprule
\multirow{2}{*}{\textbf{Model}} & \multirow{2}{*}{\textbf{APMed (F1-score)}} & \multicolumn{3}{ c }{\textbf{QUAERO (F1-score)}} \\
& & \textbf{EMEA} & \textbf{MEDLINE} & \textbf{Total} \\
\midrule
\textit{EDS-fine-tuned} &
.902 (±0.003)* &
.729 (±0.008) * &
.597 (±0.007) * &
\textbf{.655 (±0.007)} \\
\textit{EDS-from-scratch} &
\textbf{.908 (±0.005)} * &
.693 (±0.012) * &
\textbf{.601 (±0.01)} * &
.642 (±0.007) * \\
CamemBERT-base &
.866 (±0.007) &
\textbf{.737 (±0.006)} &
.584 (±0.004) &
.651 (±0.004) \\
Best QUAERO \cite{neveol_clinical_nodate} model &
&
.749 &
.698 & \\
\bottomrule \\
\end{tabular}
\end{adjustwidth}
\end{table}
The results on APMed, the EDS-specific dataset, show a statistically significant improvement when using re-trained language models (maximum $p = 4 \cdot 10^{-3}$). However, the difference between \textit{EDS-fine-tuned} and \textit{EDS-from-scratch} is not significant ($p = 0.1$).
On QUAERO, \textit{EDS-fine-tuned} performed better than \textit{EDS-from-scratch} overall ($p = 4 \cdot 10^{-3}$).
\subsubsection{Impact of the number of training steps and training examples}
\label{sec:results/validation-extrinsic/impact}
Figure \ref{fig:training-steps} investigates the impact of the number of training steps performed on \textit{EDS-from-scratch} on its performance on the APMed dataset.
\begin{figure}[ht]
\centering
\includegraphics[width=.7\textwidth]{training-steps}
\caption{Impact of the number of training steps for \textit{EDS-from-scratch} on the F1-score for the APMed task (logarithmic scale on the horizontal axis). CamemBERT-base is presented as reference (not re-trained). Each model goes through 5.6M documents and emits roughly 4 kgCO2eq every 100k steps.}
\label{fig:training-steps}
\end{figure}
Moreover, Table \ref{tab:num-train-examples} shows the impact of the number of examples when fine-tuning CamemBERT-base.
\begin{table}[!ht]
\centering
\caption{Performance on APMed with respect to the number of the training examples}
\label{tab:num-train-examples}
\begin{tabular}{cc}
\toprule
\textbf{Training examples} & \textbf{APMed (F1-score)} \\
\midrule
100K & .900 (±0.004) \\
300K & .900 (±0.006) \\
1M & .904 (±0.004) \\
3M & .904 (±0.003) \\
10M & .901 (±0.004) \\
21M (\textit{EDS-fine-tuned}) & .902 (±0.003) \\
\bottomrule \\
\end{tabular}
\end{table}
\section{Discussion}
\label{sec:discussion}
In this work, we investigated the impact of pre-training a BERT-based language model on clinical reports by comparing performance on two downstream medical NER tasks.
Our results on the APMed corpus confirm previous literature findings that pre-training on speciality language leads to a statistically significant performance improvement.
We also evaluated our models outside the EDS context to check for non-regression, namely on the QUAERO corpora, and observed that \textit{EDS-fine-tuned} fared comparably to CamemBERT-base on this non-clinical dataset. However, BERT-based methods achieved much lower performance than QUAERO's winning rule-based systems on the MEDLINE subcorpus. We posit that the fine-tuned model retains sufficient general language knowledge to maintain relatively high performance on a non-clinical task, and that the short length of MEDLINE's examples hinders contextual methods, although further validation is needed.
What is more, even on the EDS-specific APMed task, we found no significant advantage to using a model pre-trained from scratch compared to fine-tuning a generalist model. Although the new tokenizer is better able to capture the medical vocabulary (the median token sequence length on EDS reports drops by about 13\% when using the EDS-specific tokenizer), the overall performance on EDS-specific NER tasks is similar (Table \ref{tab:perf-extrinsic}). The impact in terms of inference time is to be evaluated in future work.
Moreover, our study on the impact of the number of training steps and training examples suggests that relatively few samples are required to reach good test performance. Indeed, our model fine-tuned on 100K samples was able to reach performances similar to models trained on the full 21M-report dataset, at a fraction of the computational and environmental toll. This finding opens up a world of possibilities for smaller-scale CDW, which can adapt general-language models to their distribution at a relatively low cost.
\section{Conclusion}
\label{sec:conclusion}
In this work, we propose EDS-CamemBERT, a language modelling neural network adapted to the context of French-speaking clinical data warehouses. We show that fine-tuning state-of-the-art language models on clinical reports improves performance on downstream speciality tasks. We demonstrate that in this setting, training a model from scratch offers little advantage over fine-tuning a general-language model, despite providing a better-tuned tokenizer. Finally, we provide evidence that relatively few samples are needed to achieve a statistically significant performance improvement.
\section*{Acknowledgement}
\label{sec:acknowledgement}
We thank the Greater Paris University Hospitals CDW for its support and the realisation of data management and data curation tasks.
\section*{Conflicts of Interest statement}
\label{sec:conflicts-of-interest}
None declared.
\section*{Data and code sharing}
\label{sec:data-code-sharing}
Access to the Clinical Data Warehouse's raw data can be granted following the process described on its website: eds.aphp.fr. A prior validation of the access by the local IRB is required. In the case of non-APHP researchers, the signature of a collaboration contract is also mandatory.
The source code used for building the dataset and training the models is freely available on APHP’s Github account, distributed under a 3-Clause BSD licence. It is documented, versioned and citable through Zenodo.
\section*{Funding}
\label{sec:funding}
This study has been supported by grants from the APHP Foundation.
\bibliographystyle{unsrtnat}
\section{Introduction}
Machine learning (ML) methods are rapidly gaining traction and importance in virtually all applications of information technology. The success and pervasive use of machine learning techniques is fuelled by a combination of increasing availability of data and readily accessible, cheap and powerful computational resources. While applications to very large amounts of data remain computationally challenging, computational resources are no longer a limiting factor in the general case\footnote{An exception might be that of Deep Learning \citep{Bengio}.}.
Given the number and diversity of ML algorithms, system developers and users face the long-known issue of {\em algorithm selection}, aimed at selecting the algorithm with the best expected performance on the considered use case. Performance indicators are most often related to prediction quality (expected fraction or cost of errors), regardless of running time.
However, there are increasingly many use cases subject to different performance measures.
For example, when running code on portable devices, particularly mobile phones,
conserving battery charge is still an important objective.
This holds to an even larger extent for wearable devices, such as smartwatches, which additionally are much more limited in computational power, and for most of the computing technology involved in the Internet of Things (IoT).
For these application contexts, in addition to limitations on battery capacity and computational power, data transmission is typically limited or costly.
While it can be expected that all of these limitations will become less stringent as technology continues to progress, they will remain, at least in part, relevant in the near and perhaps not-so-near future.
Another consideration that has the potential to impose fundamental constraints on the use of ML algorithms is privacy, related to \eg personal health, activities or habits. Not everyone feels comfortable sharing
fine-grained data of this nature with a service provider in order to receive better advice or recommendations. In many cases, obtaining slightly less accurate feedback based on substantially less
sensitive personal data can be appealing. Along this line, a trade-off is
offered by processing private data on-board (using a portable or wearable device such as a smartwatch) and offering ML services without transmitting sensitive data at all, thereby addressing privacy concerns and reducing the need for data transmission. Similar considerations apply to smart home applications, embedded sensing and control systems, and many IoT scenarios.
These considerations motivate the concept of \emph{frugal machine learning},
which emphasises the cost associated with the use of data and computational resources.
Frugality, i.e., the idea of working with limited resources, comes in different flavours:
\begin{itemize}
\item Input frugality emphasises the cost associated with the data, specifically with the acquisition of the training data, the exploitation of the descriptive features, or both. Frugal inputs may involve fewer training data or fewer features than required for the best prediction quality achievable in a non-frugal setting.
Input frugality can be motivated by resource constraints and by privacy constraints.
\item Learning process frugality emphasises the cost associated with the learning process, specifically the computational and memory resources. Frugal learning might produce a model with lower prediction quality than achievable in a non-frugal setting, but do so much more efficiently. Learning process frugality is primarily motivated by resource constraints, including limited computational power and limited battery capacity.
\item Model frugality emphasises the cost associated with storing or using a machine learning model, such as a classifier or regression model. For supervised learning, frugal models may require less memory and produce predictions with less computational effort than required for optimal prediction quality.
Model frugality is primarily motivated by resource constraints such as limited memory or limited processing capabilities.
\end{itemize}
This paper is motivated by the new ML settings induced by on-board wearable devices such as smartwatches, and similarly limited devices, facing severe restrictions in terms of i) acquisition and transmission of data, due to cost or privacy issues; ii) learning and decision making, due to energy issues.
We focus on frugal supervised learning, and specifically classification. Our goal is to empirically investigate the trade-off between predictive accuracy and the computational cost involved in learning models and using them, for a wide range of ML algorithms and datasets. The motivating application is that of a smartwatch used as a stand-alone device for activity recognition, using supervised classification to recognise various types of activities, such as running, walking, bicycling, driving, riding a bus, weightlifting, resting or sleeping, from biometric and other sensor input.
The remainder of this article is structured as follows.
Section~2 provides a formal background and briefly discusses related work.
In Section~3, we motivate and introduce a quantitative measure for frugality, which we then use to study a broad range of classification algorithms and benchmarks from the well-known OpenML platform for reproducible machine learning research \citep{vanschoren2014openml}.
Section~4 is devoted to the experimental setting. Section~5 reports on the results of our empirical study. A striking (though not unexpected) result is how strongly algorithm rankings depend on the desired trade-off between predictive accuracy, computational learning cost, and decision making cost.
The paper concludes with the lessons learned regarding the trade-off between
predictive performance and frugality, and discusses our perspectives for further work.
\section{Related work}
\label{sec:related}
Frugality has been considered in machine learning with regard to four different goals.
The first three concern the memory, computation, and oracle resources required to perform learning and build a hypothesis. The fourth concerns the computational resources required to make a decision based on a learned hypothesis.
To the best of our knowledge, frugality-oriented research has been devoted to supervised learning, the longest-established ML task; supervised learning is the only task considered in the remainder of this article.
\subsection{Notations}
\label{subsec:notations}
Let $\cal E$ denote the training set, made of $n$ pairs $(\mbox{\bf x}_i,y_i)$, with instance $\mbox{\bf x}_i$ in the instance space $\cal X$ (in the vast majority of cases a propositional space, e.g. ${\rm I\hspace{-0.50ex}R}^d$; otherwise a relational space, e.g. a logic program), and label $y_i$ in the label space $\cal Y$, either binary $\{-1,1\}$, multi-class $\{0,1,\ldots,K\}$, or real-valued. The characteristics of instances (the coordinates, in the ${\rm I\hspace{-0.50ex}R}^d$ case) are referred to as {\em features}.
Let $\cal H$ denote the hypothesis space, mapping $\cal X$ onto $\cal Y$.
The learned hypothesis $h$ is usually built by solving an optimization problem involving a data fitting term ${\cal F}(h,{\cal E})$, measuring how closely $h(\mbox{\bf x}_i)$ fits $y_i$
for $1 \le i \le n$, and a regularization or penalization term ${\cal R}(h)$, meant to address overfitting\footnote{Overfitting characterizes the discrepancy between the performance of $h$ on the training set $\cal E$ used to build it, and its performance on further data, say another independent set ${\cal E}'$ that has not been used to build $h$. The learning goal is to optimize the expectation of ${\cal F}(h,{\cal E}')$, referred to as the generalization error of $h$.
}, and/or enforcing the compliance of $h$ with prior knowledge or additional requirements:
\[ h = \arg\min_{h \in {\cal H}} \{ {\cal F}(h,{\cal E}) + {\cal R}(h) \} \]
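As a concrete instance of this objective, the sketch below fits a linear hypothesis by gradient descent on a squared-error fit term ${\cal F}$ plus an $L_2$ penalty ${\cal R}$ (i.e., ridge regression). This is an illustrative toy solver, not a method used in the paper.

```python
import numpy as np

def fit_regularized(X, y, lam=0.1, lr=0.1, steps=5000):
    """Minimize F(h, E) + R(h) = ||Xw - y||^2 / n + lam * ||w||^2
    over linear hypotheses h(x) = <w, x>, by plain gradient descent."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ w - y) / n + 2.0 * lam * w
        w -= lr * grad
    return w
```

For this quadratic objective the solution can be checked against the closed-form ridge estimate, which solves $(X^\top X/n + \lambda I)\,w = X^\top y/n$.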
As argued by \cite{BottouB07}, all dimensions of frugality, ranging from example and model selection to the computational cost of learning, are relevant to the quality of the learned models.
\subsection{Frugality through compression or approximation}
\label{subsec:decision_making}
Early work related to frugal learning compressed the
training set $\cal E$ itself. Formally, the goal was to map the training set, viewed as a (partial) decision table, onto a resource-bounded device.
This approach is currently blossoming in the field of binary decision diagrams, pioneered by \cite{Minato}. Note that binary decision diagrams can be directly implemented into Field Programmable Gate Arrays (FPGAs) \citep{FPGA}.
More remotely related is the work on approximate knowledge bases \citep{Marquis,Darwiche}. Here the hypothesis takes the form of a large or complex knowledge base $KB$, whose real-time exploitation may be severely limited. The approach builds a (significantly smaller) lower and upper bound of the $KB$, respectively denoted $KB_L$ and $KB_U$, used to approximate the $KB$ decision: an instance $\mbox{\bf x}$ is held true if $KB_L(\mbox{\bf x})$ is true, false if $KB_U(\mbox{\bf x})$ is false, and unknown otherwise.
\subsection{When frugality helps learning}
An essential concern in machine learning is to avoid overfitting, with hypothesis frugality as a consequence: the more frugal the hypothesis, the less likely it is to overfit the data. In the early days, an empirical strategy was to detect and remove the least active components of the hypothesis, e.g. cancelling out the weights with the lowest absolute values in a neural net, a heuristic referred to as {\em Optimal Brain Damage} \citep{OBD}.
Later on, with the advent of structural risk minimization \citep{Vapnik92,Vapnik95}, a regularization term was added to the data fitting objective to reduce the variance, and hence
the richness, of the learned hypothesis.
Some regularization terms directly support frugality through feature selection, reducing the number of features involved in the hypothesis.
For instance, regularization terms $R(h)$ based on the $L_1$ norm of $h$ (as opposed to $L_2$ norm) typically contribute to minimizing the number of features involved in hypothesis $h$. This can be graphically understood as shown in Fig. \ref{fig:L1-L2}: the isolines of the data fitting term ${\cal F}(h,{\cal E})$ are more likely to cut the $L_1$ ball at a corner (which can be expressed in fewer features), than for an $L_2$ ball.
\begin{figure}
\centering
\includegraphics[scale=0.38]{L1-L2.jpeg}
\caption{Isolines of the data fitting term: feature selection with $L_1$ (left) and $L_2$ (right) regularization}
\label{fig:L1-L2}
\end{figure}
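The sparsifying effect of the $L_1$ penalty can also be illustrated numerically. The toy sketch below (not from the paper) minimizes a squared-error fit term plus an $L_1$ penalty via ISTA, i.e., a gradient step on the fit term followed by soft-thresholding; on data generated from two relevant features, the remaining coefficients are driven exactly to zero.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of the L1 norm: shrink towards zero by t."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, y, lam=0.2, lr=0.05, steps=3000):
    """L1-regularized least squares via ISTA: a gradient step on
    ||Xw - y||^2 / n, then soft-thresholding with threshold lr * lam."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        w = soft_threshold(w - lr * 2.0 * X.T @ (X @ w - y) / n, lr * lam)
    return w
```

Because the soft-thresholding step maps small coefficients to exactly zero, the recovered weight vector is sparse, in contrast with $L_2$ regularization, which only shrinks weights.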
A vast literature is devoted to feature selection (FS). The three main approaches are: \textit{scoring or filtering approaches}, which independently rank features using some score function and select the top-ranked ones; \textit{wrapper approaches}, which tackle the full combinatorial optimization problem of finding the subset of features that optimizes (an estimate of) the generalization error; and \textit{embedded approaches}, which combine learning and feature selection through prior regularization or posterior pruning \citep{Tibshirani,RFE}.
Another ML field related to frugality is online learning, where examples are considered one at a time and used to update the current hypothesis (see \cite{Online14} and references therein). The key issue is the possible drift of the sample distribution, and the subsequent adaptation of the hypothesis. The trade-off here concerns the number of examples used by the learner: distribution changes should be detected as early as possible (and outdated examples discarded), while the learned hypothesis must be robust against example noise. Moreover, several online algorithms for high-speed data streams only use a subsample of the available examples in order to be trained faster \citep{Domingos:2000:MHD:347090.347107}.
\subsection{Learning with frugal oracle resources}
\label{subsec:learning_with_frugal}
ML performance classically improves, all else being equal, as the number of training examples increases. The main requirement concerns the number of {\em labelled} examples: in many applications, unlabelled examples can be gathered with virtually no limit, e.g. on the Web or by observing the phenomenon under study. The shortage is in {\em labelled} examples, either because labelling requires expertise, or because it requires additional expensive procedures (e.g., medical tests, destructive experiments).
Active learning is concerned with the selection of the most informative samples. These samples are labelled by the oracle or teacher, and the learning process goes on in an online fashion. Quite a few criteria have been designed for active learning; typically, the samples that are the most uncertain according to the current hypotheses bring the most information. In an ideal case, selecting the most informative examples can yield an exponential increase in learning speed (see \cite{Dasgupta11,ShalevShwartz13} and references therein); in practice, however, the most informative examples may also be the most difficult ones for the oracle to label, slowing down the process.
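A common instantiation of this idea is uncertainty sampling: given the current model's class probabilities on the unlabelled pool, query the examples with the highest predictive entropy. The following minimal sketch is illustrative (not a method from the paper):

```python
import numpy as np

def most_uncertain(probs, k):
    """Return the indices of the k unlabelled samples with the highest
    predictive entropy, a common uncertainty criterion in active learning.

    probs: (n_samples, n_classes) array of class probabilities."""
    p = np.clip(probs, 1e-12, 1.0)
    entropy = -np.sum(p * np.log(p), axis=1)
    return np.argsort(entropy)[::-1][:k]
```

The selected samples would then be sent to the oracle for labelling, and the model updated before the next query round.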
\subsection{Decision making and frugality}
Some ML attempts toward frugal decision making consider a two-step process. In a first step, a possibly computationally expensive model is learned; in a second step, this model is simplified in diverse manners while preserving its predictive accuracy to the best possible extent.
In \citet{RFE}, for instance, the features of a linear classifier with low absolute weight values are considered poorly relevant (or redundant with others) and are removed, possibly in an iterative process. \cite{Busa-FeketeBK12} consider the (usually large) set of hypotheses learned along a boosting process. Traditionally, these hypotheses vote to deliver the final decision. It is possible, however, to prune the set of hypotheses in an example-dependent manner, converting the ensemble into a Directed Acyclic Graph.
A most striking example is due to \cite{Caruana}, aimed at compressing deep neural nets \citep{Bengio}.
Let us consider a (large) ensemble of (very large and deep) neural networks, referred to as the teacher, learned from some training set $\cal E$. This ensemble is used to label a much larger set of samples ${\cal E}'$. \cite{Caruana} show that, using this new training set (made of the samples in ${\cal E}'$ and the associated teacher labels), a {\em shallow} neural net with the same performance as the teacher can be trained, with orders-of-magnitude gains in the number of weights (though the training process itself cannot be considered frugal).
\section{How to empirically assess frugality}
\label{sec:how_to_empirically}
As shown in Section \ref{sec:related}, several methods and strategies can be considered to perform machine learning on a resource-limited device. To find out exactly how \textit{frugal} each of these is, and which ones to select for applications, we need to empirically assess their performance. This can be done by running the machine learning techniques on the device and measuring their predictive performance as well as their resource usage, e.g. CPU cycles or runtime, RAM usage, or battery consumption.
Analyzing all of these factors independently is very useful, but can also be quite cumbersome. One can find that method A is very accurate but slow, and method B is less accurate but fast: which method is most frugal? Moreover, these factors may actually interact with each other. For instance, on smartwatches, sensors often produce data on a best-effort basis depending on the available CPU cycles and battery charge. If either of these is low, they can produce less frequent data, ultimately harming the predictive performance of machine learning applications that depend on that sensor information. Hence, an algorithm that is very accurate (but CPU-hungry) on a smartphone may be significantly less accurate when running on a smartwatch with more limited resources.
Hence, it is useful to define a novel, multi-objective evaluation measure, i.e. \textit{frugality}, that combines both predictive performance and resource consumption, and allows us to easily compare different techniques. Ideally, it also includes a parameter that allows us to adjust the need for frugality by the device, e.g., depending on whether it is a powerful server, a smartphone, or small chips in wearable devices.
To make this more concrete, consider the problem of selecting a classification algorithm for use on a wearable device. One could consider relatively simple algorithms, such as
Decision Trees \citep{quinlan2014c4}, which can be constructed quickly and without substantial memory allocation. On the other hand, Decision Trees are often outperformed by more sophisticated but slower algorithms such as Support Vector Machines \citep{vladimir1995nature} or Neural Networks \citep{Schmidhuber201585}. Consequently, there is a clear trade-off between predictive performance (e.g., accuracy) and resource usage (CPU and RAM).
\subsection{Existing multi-objective measures}
There are many different ways to combine multiple measures into a single one. In the context of algorithm selection, \citet{brazdil2003ranking} proposed the Adjusted Ratio of Ratios (ARR). This metric, however, has the drawback that the constructed function is not monotonic, and thus hard to interpret \citep{abdulrahman2014measures}. To address this problem, \citet{abdulrahman2014measures} proposed the A3R measure, defined as follows:
\begin{equation}
\mathit{A3R}_{a_\mathit{ref},a_j}^{d_i} =
\frac{ \frac{\mathit{SR}_{a_j}^{d_i}}
{\mathit{SR}_{a_\mathit{ref}}^{d_i}}}
{ \sqrt[N]{T_{a_j}^{d_i} / T_{a_\mathit{ref}}^{d_i}}}
\end{equation}
Here, $\mathit{SR}_{a_j}^{d_i}$ and $\mathit{SR}_{a_\mathit{ref}}^{d_i}$ represent the \emph{success rates} (e.g. predictive accuracy) of algorithms $a_j$ and $a_\mathit{ref}$ on data set $d_i$, where $a_\mathit{ref}$ represents a given \emph{reference algorithm} for pairwise comparison. Similarly, $T_{a_j}^{d_i}$ and $T_{a_\mathit{ref}}^{d_i}$ represent the run times of the algorithms, in seconds. To trade off the importance of time, A3R includes the $N^{th}$ root parameter: the higher the value for $N$, the smaller the influence of runtime.
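To make the definition concrete, A3R can be computed directly from its formula; the helper below and the numbers in the test are purely illustrative.

```python
def a3r(sr_j, sr_ref, t_j, t_ref, n):
    """A3R of algorithm a_j against reference a_ref on one data set:
    the success-rate ratio divided by the N-th root of the runtime ratio.

    sr_j, sr_ref : success rates (e.g. predictive accuracy)
    t_j, t_ref   : runtimes in seconds
    n            : the N-th root parameter (higher n -> runtime matters less)"""
    return (sr_j / sr_ref) / (t_j / t_ref) ** (1.0 / n)
```

As $N$ grows, the denominator tends to 1 and A3R approaches the plain success-rate ratio; for small $N$, a fast algorithm is rewarded strongly.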
A simplified, non-pairwise version of A3R introduced by~\citet{Rijn2015} assumes that both the success rate of the reference algorithm $\mathit{SR}_{a_\mathit{ref}}^{d_i}$ and the corresponding time $T_{a_\mathit{ref}}^{d_i}$ have a fixed value, set to 1. This version, called $A3R'$, is defined as follows:
\begin{equation}
\mathit{A3R'}_{a_j}^{d_i} =
\frac{ \mathit{SR}_{a_j}^{d_i}
}
{ \sqrt[N]{T_{a_j}^{d_i}}}
\end{equation}
This is a useful measure of frugality, but it can be hard to choose a sensible value for $N$, given that it ranges between 1 and infinity. Moreover, the measure approaches infinity for very small runtimes, which is not very intuitive.
\subsection{Frugality score}
In our definition of frugality, we would like a score function that is bounded within a fixed range, e.g. $[-1,1]$. We still want a trade-off parameter between predictive performance and runtime, but ideally one that can be adjusted between 0 and 1. These constraints led us to the following definition:
\begin{equation} \label{eq:frugal_score}
Frug_{a_j}^{d_i} = P_{a_j}^{d_i} - \frac{w}{1 + \frac{1}{R_{a_j}^{d_i}}}
\end{equation}
where $P_{a_j}^{d_i}$ is a measure of the predictive performance of algorithm $a_j$ on data set $d_i$ that is to be maximized, $w$ is a trade-off coefficient that defines the importance (or \textit{weight}) of frugality (i.e., how scarce resources are), and $R_{a_j}^{d_i}$ is the resource consumption of algorithm $a_j$ on data set $d_i$.
In this paper, we will mainly use the Area under the ROC curve \citep{bradley1997use}, or AUC, as the measure for $P_{a_j}^{d_i}$, because it is fairly robust against imbalanced classes. For multiclass problems, we use a multiclass AUC \citep{hand2001simple} that uses a one-versus-all approach for every class value. For $R_{a_j}^{d_i}$, we will mainly use CPU time (in milliseconds, and non-zero), or more precisely the sum of the training time and prediction time, since both have to be performed on the wearable device, and CPU usage affects battery life most. Hence, we will be using $Frug_{a_j}^{d_i} = AUC - \frac{w}{1 + \frac{1}{T_{train}+T_{test}}}$ in our experiments.
Note, however, that these can easily be substituted by other measures, depending on the application. For instance, if memory consumption is an issue, one could use a measure similar to RAM-Hours \citep{Bifet:2013c}, which is simply the product of the RAM usage (in GB) and the CPU time (in hours).
In this formula, $w$ defines the demand for frugality. The range for $w$ can vary from 0 to infinity, but in practice we will use values between 0 and 1. If $w$ is high, e.g. $w = 1$, our frugality score will favor algorithms that run fast with reasonable accuracy, useful for wearable devices. Values higher than 1 indicate that predictive performance is less important. When $w$ is low, e.g. 0.1, then priority is given to algorithms with better performance, even if they are slower. This is useful for smartphones or more powerful devices. If the value of $w$ is set to 0, then algorithms will be ranked only by AUC without taking into account the required training and testing time, as may be the case in cloud computing.
The derivation of $\frac{w}{1 + \frac{1}{R}}$ is as follows: we want to scale resource consumption between 0 and 1, and hence transform it using a sigmoid function scaled to that range, $\frac{1}{1 + \exp(-R)}$. However, since runtimes often grow exponentially, we first take a logarithm, yielding $\frac{1}{1 + \exp(-\log(R))}$, which is equivalent to the expression in the frugality definition above. Using AUC and a value of $w$ between 0 and 1, the frugality score is a value between -1 and 1.
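The algebraic equivalence $1/(1+e^{-\log R}) = 1/(1+1/R)$ and the boundedness of the score are easy to check numerically; the helper functions below are our own illustration, not part of the study's code.

```python
import math

def frugality(p, r, w):
    """Frugality score: performance P minus the w-weighted, squashed
    resource consumption R, i.e. P - w / (1 + 1/R)."""
    return p - w / (1.0 + 1.0 / r)

def sigmoid_of_log(r):
    """Sigmoid applied to log-resources; algebraically equal to 1/(1 + 1/r)."""
    return 1.0 / (1.0 + math.exp(-math.log(r)))
```

With $P \in [0,1]$ (e.g. AUC) and $w \in [0,1]$, the squashing term lies in $(0,1)$, so the score indeed stays within $[-1,1]$, and slower algorithms score lower, all else being equal.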
\begin{figure}[t]
\centering
\includegraphics[scale=0.38]{linearPlots_example}
\caption{Illustrative frugality curves for Decision Trees (J48), Random Forest, and LibSVM (SVM with RBF kernel).}
\label{fig:frugal_score_example}
\end{figure}
This definition of frugality can be used to create \textit{frugality curves} that analyse the frugality of different algorithms as a function of $w$, as shown in Figure \ref{fig:frugal_score_example}. Three algorithms were selected to show how the frugality score changes as we put more emphasis on resource constraints. Note that these results are calculated based on the performance and runtime of each algorithm averaged over a wide range of data sets. In other words, these lines are the average of all lines obtained for the same algorithm over all data sets.
Frugality curves also show how algorithm rankings change as frugality becomes more important: the values for $w=0$ represent the ranking of all algorithms based on AUC alone. As we increase the value for $w$, we can see how this ranking changes. Hence, when computational resources are not an issue you would consider algorithms following the ranking for $w=0$, while on a smartwatch you would consider their ranking for a higher $w$ value.
Considering the algorithms in Fig. \ref{fig:frugal_score_example}, when $w$ is equal to zero, Random Forests perform best overall, followed by Decision Trees and LibSVM (using an RBF kernel).\footnote{Note, however, that we are using default parameter settings here. In future work, we will optimize algorithms on each data set individually.} When frugality becomes more important (higher $w$), the ranking of algorithms changes. The curve crossings are especially useful, as they indicate the point where one would replace one algorithm with another as resources become more scarce. Indeed, while Random Forests initially perform best, they are overtaken by Decision Trees around $w=3$. LibSVM never overtakes the other algorithms, and drops below zero around $w=0.75$.
A much larger set of algorithms will be analysed in Section \ref{subsec:perf_evaluat}.
\section{Experimental setup}
\label{sec:exper_set}
To perform a detailed analysis of many learning algorithms in terms of frugality on a wide range of data sets, we downloaded around 53,000 evaluations (both AUC and runtime) of 103 algorithms on 517 data sets from OpenML\footnote{http://www.openml.org}, a collaborative platform for reproducible machine learning research \citep{vanschoren2014openml}. The size of these data sets varies from 10 to 98,528 observations. The algorithms all originate from the WEKA library \citep{hall2009weka}. A full list of algorithms and parameter settings is available in Appendix \ref{appendix:algorithms_studied}. Note that a few algorithms do not have evaluations on all data sets, typically due to execution errors or timeouts. If an algorithm had more than 10 missing results, it was removed from further processing. The names of the removed algorithms and the number of missing values per algorithm are given in Appendix \ref{appendix:missing_values}.
The remaining missing values were imputed using the Iterative SVD method \citep{fuentes2006using} from the R package SpatioTemporal\footnote{https://cran.r-project.org/web/packages/SpatioTemporal/index.html}. This method estimates missing values by computing a singular value decomposition (SVD) of the matrix and reconstructing it from the first $n$ components. The reason for using this method, rather than taking the mean performance of each algorithm over all data sets, is that every data set is different, and hence AUC and runtime are not normally distributed across data sets. Iterative SVD produces more reliable values, since it takes the values of the other algorithms on these data sets into account.
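The idea behind iterative SVD imputation can be sketched as follows. This is a simplified numpy re-implementation for illustration only; the actual study used the SpatioTemporal R package.

```python
import numpy as np

def svd_impute(M, rank=1, iters=200):
    """Impute NaN entries by alternating a rank-k SVD reconstruction
    with re-imposing the observed entries (simplified iterative SVD).

    Missing entries are initialized with their column means."""
    mask = np.isnan(M)
    X = np.where(mask, np.nanmean(M, axis=0, keepdims=True), M)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        approx = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        X = np.where(mask, approx, M)  # keep observed values fixed
    return X
```

On a matrix that is exactly low-rank, this alternation recovers the missing entries essentially exactly, which is the intuition for why it outperforms mean imputation when algorithm performances are strongly correlated across data sets.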
In terms of computational resources, most algorithms were run on 8-core machines operating at 2GHz, 12 GB memory, and Fedora 14 64-bit as operating system. OpenML also keeps system benchmarks for each system using the SciMark benchmarking tool\footnote{SciMark: \url{http://math.nist.gov/scimark2/}} for future reference. In addition, we ported a selection of WEKA algorithms to run on an Android-based smartwatch. We used an LG URBANE smartwatch with Qualcomm Snapdragon 400 CPU, 512 MB RAM and 410 mAh li-Ion battery.
For the exploratory analysis, R version 3.2.2 was used. The R ecosystem provides numerous packages for analysing, validating, and visualizing results. The `OpenML' package was used for downloading prior machine learning experiments and uploading new results.
\section{Empirical Analysis}
\label{sec:exper_res}
We used different techniques to study machine learning algorithm performance through a `frugal lens'. The goal is to understand the trade-offs between predictive performance and resource usage for a wide range of algorithms, and to select the algorithms that are most useful for a given scenario. Ultimately, our aim is to identify a very small number of algorithms that can subsequently be implemented on wearable devices for further analysis and use.
In this section, we will perform various experiments. First, since the frugality of an algorithm may depend on properties of the data set, e.g. an algorithm may be equally accurate but much slower on high-dimensional data, it behooves us to assess the frugality of algorithms on different types of data sets separately. In Section~\ref{subsec:an_data_set}, we therefore first cluster the data sets based on the performance of the algorithms trained on them. Second, we will do a Pareto Front analysis of our machine learning algorithms in terms of predictive performance and the CPU time required for running them in Section~\ref{subsec:pareto_front}, to determine which algorithms are most interesting to study further.
Next, in Section~\ref{subsec:hierar_clust_algorithms}, we generate hierarchical clusterings of data sets to visualize the performance of these algorithms on all data sets for different levels of frugality. Finally, in Section~\ref{subsec:perf_evaluat}, we present frugality plots to analyse a subset of representative algorithms and data sets.
\subsection{Analysis of data sets}
\label{subsec:an_data_set}
Analysing a broad variety of data sets is important for obtaining general insight into the performance of algorithms. If one wants to identify the most appropriate algorithms for different frugality levels, it is necessary to build a study that comprises diverse data collected for various tasks. OpenML, a platform for reproducible machine learning research, provides the opportunity to obtain such data with minimal effort. One can create collections of data sets based on different data set properties. In this study, we focused on obtaining a very diverse set of classification data sets in terms of the number of examples, dimensions, and classes. The complete list of data sets, algorithms, and experiments can be found online\footnote{\url{http://www.openml.org/s/1}}. Altogether, 517 data sets were selected for this study. For each data set, OpenML provides around 100 data characteristics (meta-features). In this paper, we use a subset of them, presented in Table \ref{tab:attributes}.
\begin{table}[!htb]
\centering
\caption{Attributes of data sets and their descriptions}
\label{tab:attributes}
\begin{tabular}{|p{3cm}|p{8cm}|}
\hline
\textbf{Name of parameter} & \textbf{Description}
\\ \hline
NumAttributes & The number of attributes in a data set.
\\ \hline
ClassEntropy & Entropy of the class attribute. This parameter defines the amount of information required to identify the class of an instance. If entropy is low, then the data set has a skewed class distribution.
\\ \hline
MaxNominalAtt-DistinctValues & The maximum number of distinct values found in the categorical attributes.
\\ \hline
MajorityClassSize & The number of instances (examples) in a data set labeled with the majority (the most frequent) class.
\\ \hline
DecisionStumpAUC & The AUC performance of a 1-level decision tree (Decision Stump) trained on the data set. Also known as a landmarker or probing feature.
\\ \hline
\end{tabular}
\end{table}
\subsubsection{Separating data sets in clusters}
\label{subsubsec:sep_dat_set}
First, we will cluster the data sets based on the performance of the algorithms trained on them, motivated by the question whether the algorithms behave similarly on certain groups of data sets. Once we know this, we can study the frugality of algorithms on specific types (clusters) of data sets.
To do this, a first question that we want to answer is whether there exists some level of structure in the collection of data sets, or whether they are randomly distributed. The Hopkins statistic \citep{banerjee2004validating} can be used to analyse how much structure there is in our meta-data (the space of algorithm performance data). This statistic generates randomly and uniformly distributed points and measures how well these points fit into the existing data. When distances between randomly generated and existing points are approximately the same, then the value of the index lies around 0.5. The further the value deviates from 0.5, the more structure exists in the data. On our data, we measure a Hopkins statistic of 0.032, which strongly suggests that the meta-data for our data sets is not randomly distributed.
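A simplified sketch of the Hopkins statistic is given below. This is our own illustrative implementation; we use the convention in which strongly clustered data yields values near 0, matching the value of 0.032 reported above.

```python
import numpy as np

def hopkins(X, m=10, seed=0):
    """Hopkins statistic: compare nearest-neighbour distances of m uniform
    probe points (u) with those of m sampled data points (w).
    Values near 0.5 suggest no structure; under this convention,
    values near 0 suggest clustered data."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    probes = rng.uniform(X.min(axis=0), X.max(axis=0), size=(m, d))
    samples = X[rng.choice(n, size=m, replace=False)]

    def nn_dists(P, exclude_self):
        D = np.linalg.norm(P[:, None, :] - X[None, :, :], axis=2)
        if exclude_self:
            D[D == 0.0] = np.inf  # ignore each sample's distance to itself
        return D.min(axis=1)

    u = nn_dists(probes, exclude_self=False).sum()
    w = nn_dists(samples, exclude_self=True).sum()
    return w / (u + w)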
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.41]{silhouette}
\caption{Number of clusters based on Silhouette method.}
\label{fig:silhouette}
\end{figure}
Having determined that our meta-data has structure, and therefore has clusters,
the next step is to identify the expected number of clusters. This can be done with the help of the Silhouette method \citep{rousseeuw1987silhouettes}, shown in Formula~\ref{eq:silhouette}, which assesses quantitatively how well a given set of data points can be separated into a specific number of clusters.
Value $a(i)$ in Formula~\ref{eq:silhouette} is the average dissimilarity of point $i$ to all other points within the same cluster, and $b(i)$ is the average dissimilarity to the nearest cluster that point $i$ does not belong to.
\begin{equation} \label{eq:silhouette}
s(i) = \frac{b(i) - a(i)}{\max \{a(i), b(i)\}}
\end{equation}
Figure~\ref{fig:silhouette} presents the Silhouette values for different numbers of clusters, showing that the highest value of $s(i)$ was achieved when the number of clusters was equal to 2. Therefore, we will focus on clustering the data sets using kMeans clustering \citep{hartigan1979algorithm} with 2 clusters. We also used hierarchical clustering, which will be discussed in Section~\ref{subsec:hierar_clust_algorithms}.
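The Silhouette-based choice of the number of clusters can be illustrated with a small self-contained sketch. The plain Lloyd's-algorithm kMeans and the synthetic two-blob data below are stand-ins, not our actual experimental pipeline:

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Plain Lloyd's algorithm, as a minimal stand-in for kMeans."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        labels = np.linalg.norm(X[:, None] - C[None], axis=2).argmin(axis=1)
        C = np.array([X[labels == j].mean(axis=0) if (labels == j).any() else C[j]
                      for j in range(k)])
    return labels

def mean_silhouette(X, labels):
    """Average of s(i) = (b(i) - a(i)) / max(a(i), b(i)) over all points."""
    D = np.linalg.norm(X[:, None] - X[None], axis=2)
    n, scores = len(X), []
    for i in range(n):
        own = (labels == labels[i]) & (np.arange(n) != i)
        if not own.any():            # singleton cluster: convention s(i) = 0
            scores.append(0.0)
            continue
        a = D[i, own].mean()
        b = min(D[i, labels == c].mean() for c in set(labels) if c != labels[i])
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.3, (60, 3)), rng.normal(4, 0.3, (60, 3))])
sil = {k: mean_silhouette(X, kmeans(X, k)) for k in (2, 3, 4)}
best_k = max(sil, key=sil.get)       # two well-separated blobs: k = 2 should win
```

On data with two well-separated groups, the average silhouette peaks at $k=2$, mirroring the behaviour seen in Figure~\ref{fig:silhouette}.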
\subsubsection{Visualizing clusters}
\label{subsubsec:visual_clust}
Based on the previous analysis, we can now build and visualize a clustering of our data sets using kMeans with $k=2$. Popular algorithms for visualizing high-dimensional data are Principal Component Analysis \citep{jolliffe2002principal} (PCA) and t-SNE \citep{van2008visualizing}. These algorithms are based on different approaches for constructing a visualization and provide different perspectives on the clustering.
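As an illustration of the PCA-based view, the following numpy sketch projects synthetic clustered data onto its first two principal components (for t-SNE one would typically use an off-the-shelf implementation such as scikit-learn's, which we do not reproduce here):

```python
import numpy as np

def pca_2d(X):
    """Project rows of X onto the first two principal components via an SVD
    of the centred data matrix; also return explained-variance ratios."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T, (s ** 2) / (s ** 2).sum()

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.5, (80, 6)), rng.normal(3, 0.5, (40, 6))])
scores, evr = pca_2d(X)
# With two well-separated groups, the first component captures the split:
gap = abs(scores[:80, 0].mean() - scores[80:, 0].mean())
# scores[:, 0] and scores[:, 1] are the x/y coordinates of a scatter plot
# like the one in the PCA figure
```

The two synthetic groups separate cleanly along the leading component, which is exactly the kind of separation visible in the PCA visualization of our clusters.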
\begin{figure}[p]
\centering
\includegraphics[scale=0.42]{pca_first}
\caption{Visualization based on first two principal components from PCA.}
\label{fig:pca}
\includegraphics[scale=0.42]{tsne}
\caption{The result of t-SNE visualization.}
\label{fig:tsne}
\end{figure}
Figure~\ref{fig:pca} visualizes the data set clusters using the first two principal components of PCA. It shows a larger, dense cluster (black dots), and a second, more sparse cluster (red dots). Both clusters can be cleanly separated.
Figure~\ref{fig:tsne} visualizes the same clusters using t-SNE. We again find that the large cluster (red) and the smaller cluster (blue) can be cleanly separated. Moreover, here we show the OpenML IDs of the data sets. For instance, details on data set 33 can be found at \url{www.openml.org/d/33}.
The results from Figure~\ref{fig:pca} and Figure~\ref{fig:tsne} demonstrate that our data sets can indeed be cleanly separated into two clusters. Note, however, that results obtained with t-SNE can be prone to noise. Therefore, we verify the robustness of the t-SNE results by adding a noise feature to our meta-data.
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.29]{tsne_noise}
\caption{Visualization with t-SNE algorithm for data sets clustering with an additional noise feature.}
\label{fig:tsne_noise}
\end{figure}
As can be seen from Figure~\ref{fig:tsne_noise}, t-SNE takes into account the random feature while constructing a visualization, but the border between clusters remains stable even with increasing levels of noise. Including noise results in clusters that are more scattered but still cleanly separated from each other. From this, we conclude that the number of clusters was chosen correctly and that the clustering remains stable even in the presence of noise.
\subsubsection{Data set properties of the clusters}
\label{subsubsec:propert_clust}
Having clustered the data sets, the next stage of analysis is to study the properties of the data sets within these clusters. This can be done in numerous ways, but in this study, we focus on the data set characteristics discussed previously, in Table~\ref{tab:attributes}. We show the mean and median values of these characteristics in Table~\ref{t:mean_clusters} and Table \ref{t:median_clusters}, respectively.
Considering both mean and median values is important, since in some cases, characteristics of our data sets have extreme values that can distort the mean while not being reflected by the median.
\begin{table}[!htb]
\centering
\caption{Mean values per cluster}
\label{t:mean_clusters}
\begin{tabular}{|p{5cm}|p{2cm}|p{2cm}|}
\hline
Name of parameter & Cluster 1 & Cluster 2 \\ \hline
NumAttributes & 30.457 & 1625.26 \\ \hline
ClassEntropy & 0.885 & 2.771 \\ \hline
DecisionStumpAUC & 0.723 & 0.73 \\ \hline
MaxNominalAttDistinctValues & 36.88 & 460.14 \\ \hline
MajorityClassSize & 1790.292 & 993.5 \\ \hline
\end{tabular}
\end{table}
\begin{table}[!htb]
\centering
\caption{Median values per cluster}
\label{t:median_clusters}
\begin{tabular}{|p{5cm}|p{2cm}|p{2cm}|}
\hline
Name of parameter & Cluster 1 & Cluster 2 \\ \hline
NumAttributes & 11 & 63.5 \\ \hline
ClassEntropy & 0.983 & 2.799 \\ \hline
DecisionStumpAUC & 0.719 & 0.721 \\ \hline
MaxNominalAttDistinctValues & -1 & -1 \\ \hline
MajorityClassSize & 203 & 200 \\ \hline
\end{tabular}
\end{table}
As shown in Table~\ref{t:mean_clusters}, Cluster~2 consists of large data sets with more than 1600 attributes, while the average length of the feature vectors in Cluster~1 is around 30. Since class entropy tends to be related to the number of features, it is not surprising that Cluster~2 also shows higher levels of class entropy.
Significant differences can be observed between the mean and median number of attributes (and class entropy).
The performance of a Decision Stump classifier is similar for both clusters, but slightly higher for Cluster~2. Not only is the average number of features lower in Cluster~1, but its nominal features also have fewer distinct values than those in Cluster~2.
However, the median values for both clusters are once again quite close to each other. Finally, the majority class is almost two times larger in Cluster~2 than in Cluster~1. Based on the small difference in the median values seen in Table~\ref{t:median_clusters}, we conclude that this difference is caused by outliers.
\subsubsection{Selecting representative data sets for further analysis}
\label{subsubsec:selec_dat_set}
Next, we select ten representative data sets for further analysis from each of our two clusters using medoids. Taking into account the difference in size between the two clusters, we pick nine data sets from the larger Cluster~1 and one set from Cluster~2 (proportional to cluster size).
Medoids are central objects in a cluster and can be computed via Partitioning Around Medoids (PAM) analysis.
PAM analysis \citep{kaufman1990partitioning} can help to identify the central elements in subclusters of our two clusters.
For Cluster~1, this results in the following choice of data sets:
`980\_optdigits', `844\_breastTumor', `751\_fri\_c4\_1000\_10', `831\_autoMpg', `1038\_gina\_agnostic', `457\_prnn\_cushings', `1119\_adult-census', `9\_autos', and `454\_analcatdata\_halloffame'.
The single data set determined to be most representative of Cluster~2 is `20\_mfeat-pixel'. Details of these data sets can be found on the OpenML website, and the numbers represent OpenML IDs.
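The medoid selection can be sketched as follows. Note that this computes cluster medoids directly (the points minimising the total distance to the other points of their cluster) rather than running the full PAM swap procedure, and that the data are synthetic:

```python
import numpy as np

def medoids(X, labels, per_cluster):
    """For each cluster c, return the per_cluster[c] points minimising the
    total distance to the other points of that cluster (the medoid criterion;
    full PAM additionally refines such choices with swap steps)."""
    chosen = {}
    for c, n_c in per_cluster.items():
        idx = np.flatnonzero(labels == c)
        D = np.linalg.norm(X[idx][:, None] - X[idx][None], axis=2)
        order = np.argsort(D.sum(axis=1))   # most central points first
        chosen[c] = idx[order[:n_c]]
    return chosen

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0.0, 1.0, (90, 4)), rng.normal(6.0, 1.0, (10, 4))])
labels = np.array([0] * 90 + [1] * 10)
# Mirror the proportional pick above: nine representatives from the large
# cluster, one from the small cluster
reps = medoids(X, labels, {0: 9, 1: 1})
```

The selected points are, by construction, the most central members of each cluster, which is the property that makes medoids good representatives.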
\subsection{Pareto Front analysis for algorithms}
\label{subsec:pareto_front}
Quantitative comparisons of classification procedures are typically based on Accuracy, FScore \citep{beitzel2006understanding} or AUC, and developers as well as users of such procedures typically focus on maximizing one of these measures.
Our frugality score is based on AUC and running time, and it is therefore natural and necessary to study the trade-off between the objectives of maximizing AUC and minimizing running time.
To this end, we investigated the respective Pareto fronts \citep{tomoiagua2013pareto}, where each point on the front corresponds to a classification procedure whose performance in terms of AUC and running time is not dominated by any other classifier considered in our study.
In our plots of Pareto fronts, an ideal classifier would be represented by a point in the lower left corner of the plot, corresponding to maximum prediction accuracy (AUC) at minimum cost (running time).
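Extracting a Pareto front from (running time, error) pairs is straightforward. Below is a minimal sketch with toy numbers (not taken from our experiments), where error is $1-\text{AUC}$ so that both objectives are minimised:

```python
import numpy as np

def pareto_front(runtime, error):
    """Indices of non-dominated points when minimising both runtime and
    error (here error = 1 - AUC). A point is dominated if some other point
    is at least as good in both objectives and strictly better in one."""
    pts = np.column_stack([runtime, error])
    front = []
    for i, p in enumerate(pts):
        dominated = any((q <= p).all() and (q < p).any()
                        for j, q in enumerate(pts) if j != i)
        if not dominated:
            front.append(i)
    return front

# Toy (runtime in seconds, 1 - AUC) pairs, not actual experimental values
runtime = np.array([0.001, 0.05, 2.0, 300.0, 1.0])
error = np.array([0.13, 0.10, 0.04, 0.01, 0.20])
front = pareto_front(runtime, error)   # point 4 is dominated by point 0
```

Point 4 is slower *and* less accurate than point 0 and therefore drops off the front; the remaining points each offer a distinct time-accuracy trade-off.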
\begin{figure}[t]
\centering
\includegraphics[scale=0.5]{722_pol}
\caption{Individual Pareto front for the 722\_pol data set. The Y-axis shows milliseconds on a $\log_{10}$ scale.}
\label{fig:pareto_ind}
\end{figure}
We constructed and studied the Pareto fronts for each data set individually. As an example, the Pareto front for the \verb|722_pol| data set is shown in Figure~\ref{fig:pareto_ind}. The best result for this data set in terms of AUC is achieved by the Rotation Forest algorithm with the number of iterations set to 160. However, it takes more than $10^5$ milliseconds to train and run the corresponding model. In contrast, the HyperPipes classifier completes the task almost instantly but with an AUC of about 0.87 (still significantly better than random guessing). The performance difference between these two algorithms is approximately 0.12 in terms of AUC and five orders of magnitude in terms of combined training and running time. All other algorithms show either lower computing time and lower AUC compared to Rotation Forest (I160) or better AUC and higher computing times than HyperPipes.\footnote{ZeroR is a base-line procedure that predicts the majority class and cannot be considered a meaningful classifier.}
Thus, based on given constraints on running time or requirements for AUC, the most suitable classifiers can be easily identified from the Pareto front.
Next, because the performances of our classifiers differ considerably between data sets, we clustered our data sets (using kMeans with $k = 2$) as discussed in Section~\ref{subsec:an_data_set} and averaged the CPU time and AUC per algorithm over each cluster. This gives us an `average' picture of performance, for which we can again study the Pareto front. The averaged results are presented in Figures~\ref{fig:pareto_cluster_first} and \ref{fig:pareto_cluster_second}.
\begin{figure}[t]
\centering
\includegraphics[scale=0.4]{cluster_Pareto_1}
\caption{Pareto front computed for Cluster 1, with 466 data sets and 15 algorithms on the Pareto front.}
\label{fig:pareto_cluster_first}
\end{figure}
From the Pareto front shown in Figure~\ref{fig:pareto_cluster_first}, we can see that Rotation Forest, Random Forest, and Boosted Stump classifiers are the best choices in the absence of strong constraints on running time. Otherwise, CPU time can be traded for AUC, and the A1DE algorithm may become an attractive choice. A1DE is an ensemble of 1-dependence classifiers, resulting in Naive Bayes-like models with weaker feature independence assumptions \citep{Webb2005}.
If running time is severely limited, DecisionTree or HyperPipes classifiers become the methods of choice. Similar observations can be made for the second cluster, with some notable differences. As seen in Figure~\ref{fig:pareto_cluster_second}, a Rotation Forest with 40 iterations achieves the highest AUC, while for Cluster~1, a Rotation Forest with 160 iterations performed best in terms of AUC.
As shown in Section~\ref{subsubsec:propert_clust}, Cluster~2 contains data sets with more attributes. This could be a reason why Rotation Forest I160 is slow in this environment. While HyperPipes once again turns out to be fastest, Na\"{i}ve Bayes classifiers are now an attractive choice for intermediate trade-offs between the two performance objectives.
\begin{figure}[t]
\centering
\includegraphics[scale=0.4]{cluster_Pareto_2}
\caption{Pareto front computed for Cluster 2, with 50 data sets and 14 algorithms on the Pareto front.}
\label{fig:pareto_cluster_second}
\end{figure}
\subsection{Hierarchical clustering for algorithms}
\label{subsec:hierar_clust_algorithms}
Next, we present the frugality scores for the algorithms in the form of heat maps, with parameter $w$ set to values of 0.1, 0.5, and 1, reflecting a low, moderate, or high cost of computational resources, respectively.
\subsubsection{Transition to a low-dimensional feature space}
\label{subsec:trans_low_dim}
As a first step towards producing heat maps, a hierarchical clustering of data sets is performed on the full table of frugality results. This creates a logical ordering of data sets, meaning that similar data sets are close to each other in the heat map. Because the original space created by the 103 algorithms is very high-dimensional, we first transform it to a lower-dimensional space by performing a singular value decomposition (SVD) and discarding the lowest singular values. The SVD transforms the original matrix of frugality scores (with $w=0$) into a product of three matrices $U D V^t$, where $U$ represents data sets, $V^t$ describes algorithms, and $D$ shows the importance of each (latent) dimension. The number of latent features can be chosen, and our experiments show that 5 latent features already explain about 91 percent of the variance in our data. We therefore perform an SVD with 5 latent features to obtain a matrix $U$ with 516 rows and 5 columns. Next, we construct a hierarchical clustering over the data sets in $U$. The resulting dendrogram and 5-dimensional matrix of latent features are shown in Figure~\ref{fig:data_sets_svd}.
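The SVD-based dimensionality reduction can be sketched as follows; the low-rank synthetic matrix below is a stand-in for the actual $516 \times 103$ table of frugality scores:

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic stand-in for the 516 x 103 table of frugality scores:
# a rank-5 signal plus small noise, so 5 latent features dominate
F = rng.normal(size=(516, 5)) @ rng.normal(size=(5, 103)) \
    + 0.05 * rng.normal(size=(516, 103))

U, s, Vt = np.linalg.svd(F, full_matrices=False)
explained = np.cumsum(s ** 2) / np.sum(s ** 2)   # cumulative variance ratios
k = 5
U_latent = U[:, :k] * s[:k]   # 516 x 5 embedding of the data sets
# A hierarchical clustering would now be run on the rows of U_latent
```

Scaling the columns of $U$ by the singular values, as done here, is one common choice of embedding; by construction of the synthetic example, the first 5 latent features explain almost all of the variance.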
\begin{figure}[t]
\centering
\includegraphics[scale=0.09]{datasets_svd}
\caption{Dendrogram for data sets in a low-dimensional space.}
\label{fig:data_sets_svd}
\end{figure}
\subsubsection{Heat maps for the preserved order of rows and columns}
\label{subsubsec:hierar_clust}
We can now construct heat maps in which the order of the data sets is directly derived from this dendrogram, to ensure that similar data sets are shown in spatial proximity to each other. The order of algorithms in our heat maps is also fixed and corresponds to the order in which they appear on the Pareto front built on all the data sets (combining Clusters 1 and 2): the leftmost algorithm has the lowest AUC and fastest running time, while the rightmost algorithm has the highest AUC and requires the largest amount of CPU time.
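The row reordering by dendrogram leaf order can be sketched with SciPy's hierarchical-clustering utilities; the matrix below is a synthetic stand-in for the frugality table:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, leaves_list

rng = np.random.default_rng(6)
# Synthetic stand-in frugality matrix: rows = data sets, columns = algorithms,
# with two groups of rows that behave differently
M = np.vstack([rng.normal(0.8, 0.05, (30, 12)), rng.normal(0.3, 0.05, (20, 12))])
rng.shuffle(M)                                      # destroy the initial ordering

row_order = leaves_list(linkage(M, method="ward"))  # dendrogram leaf order
M_ordered = M[row_order]                            # rows ready for a heat map
# After reordering, similar rows sit next to each other: the row means jump
# only once, at the boundary between the two groups
jumps = np.abs(np.diff(M_ordered.mean(axis=1)))
```

`M_ordered` would then be passed to a heat-map plotting routine; the single large jump in adjacent row means confirms that the leaf order groups similar rows together.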
\setcounter{figure}{11}
\begin{figure}[p]
\subfloat[Frugality scores on all datasets for w=0.1]{\includegraphics[height=9.5cm]{togetherMx_s01} } \,
\subfloat[Frugality scores on all datasets for w=0.5]{\includegraphics[height=9.5cm]{togetherMx_s05} } \,
\end{figure}
\begin{figure}[t]
\ContinuedFloat
\subfloat[Frugality scores on all datasets for w=1.0]{\includegraphics[height=9.5cm]{togetherMx_s10} } \,
\caption{Hierarchical clustering results for increasing levels of frugality.}
\label{fig:allmaps}
\end{figure}
The frugality scores shown in the resulting heat maps, shown in Figure~\ref{fig:allmaps}, are coded such that high scores appear red and lower scores are represented by lighter colours.
We show heat maps for different values of the frugality parameter $w$.
As can be seen in Figure~\ref{fig:allmaps}, as $w$ increases, frugality scores decrease.
The extent to which this happens differs between algorithms, and some algorithms suffer more than others from an increase in the cost of computation (compare, for instance, Random SubSpace with 10 iterations \emph{vs} Real AdaBoost with 10 iterations). In general, the high-performing algorithms on the right degrade fastest, although some algorithms, such as Logistic Boosting with 10 trees, survive quite well, at least on some data sets. Some of the fast but less accurate algorithms on the left, such as `HyperPipes' and `RandomTree', maintain their scores quite well over a large number of data sets.
\subsection{Performance evaluation}
\label{subsec:perf_evaluat}
Finally, we can study the frugality of our selected algorithms on the subset of representative data sets. We average the frugality scores over the 10 data sets identified in Section \ref{subsubsec:selec_dat_set}, and vary the frugality level $w$ from 0.0, meaning that there is no penalty for time at all, to 1.0, meaning that the demand for frugality is very strong (i.e., algorithms should be able to work fast while still showing decent performance).
\begin{figure}[!htb]
\includegraphics[scale=0.45]{linearPlots_11}
\caption{Results for selected algorithms and data sets that are medoids for clusters. The numbers in the legend are the OpenML IDs.}
\label{fig:lines}
\end{figure}
The resulting frugality plots are shown in Figure \ref{fig:lines}. These can be used to find which algorithms should be considered when dealing with increasingly resource-scarce scenarios. A first lesson is that there are many similarly performing algorithms when CPU time is not an issue. However, many algorithms quickly fall down the ranking if frugality comes into play, especially large ensembles. In general, the ranking of frugal algorithms seems to slowly flip, with the best algorithms becoming the worst and vice versa. Looking at the results for LogitBoost (Logistic Boosting with, in this case, Decision Stumps), we can observe that the version with 20 iterations is quickly overtaken by the version with 10 iterations.
The most frugal algorithm for $w=0$ is a large Rotation Forest with 160 trees, but from $w=0.1$ it is replaced by Logistic Boosting (first with 20 and then with 10 iterations). In turn, these are overtaken by A1DE (a Naive Bayes-like learner) around $w=0.2$, which stays the most frugal learning algorithm right up to $w=0.7$, when it is replaced by Dagging. The latter is an ensemble learner that splits the data into stratified chunks and gives each to a base classifier (here a Decision Stump). Another algorithm that performs quite well (in second or third place for most of the range) is BayesNet, a Bayesian Network learner. At extreme levels of frugality, much higher than $w=1$, HyperPipes ultimately takes the crown, but by then frugality has dropped below 0.
Overall, a general lesson would be that A1DE, boosted stumps with few iterations, and Bayesian Networks are good choices for frugal learning.
\section{Future work}
\label{sec:future}
In future work, we aim to evaluate the most promising algorithms (in terms of frugality) on a smartwatch, to solve real-world tasks such as Human Activity Recognition (HAR), and study the performance and battery consumption of these algorithms. Smartwatches have plenty of sensors such as an accelerometer, gyroscope, gravity sensor, linear acceleration sensor, rotation vector, step detector, air pressure sensor, magnetometer, and heart rate monitor. The last one is especially important since most of the HAR research performed with consumer wearable devices is based on this data. The set of activities that we want to classify based on this data are: walking, walking downstairs, walking upstairs, sitting, standing, and laying down. These activities are also used for activity recognition on smartphones \citep{anguita2013public}.
Based on earlier analysis, interesting candidates seem to be Na\"{i}ve Bayes (based on the A1DE results), HyperPipes, and Random Forest. The latter is not very frugal but is included as a baseline. We have already ported these algorithms to an Android Wear device and have started to collect data. Figure \ref{fig:battery}, for instance, shows the change in battery level when performing activity recognition with the HyperPipes, Na\"{i}ve Bayes, and Random Forest algorithms. While running the HAR task, we sample the battery charge twice per minute and keep the application running for 50 minutes. We repeat this experiment for every algorithm, fully recharging the device before every run. The watch was placed on the wrist, and the application was run in ambient mode\footnote{http://developer.android.com/training/wearables/apps/always-on.html} all the time.
\begin{figure}[t]
\centering
\includegraphics[scale=0.4]{battery_consumption}
\caption{Battery drain for HyperPipes, Na\"{i}ve Bayes, and Random Forest}
\label{fig:battery}
\end{figure}
Random Forests lead to the fastest battery drain, corroborating our earlier findings. HyperPipes drains the battery more slowly, while the results for Na\"{i}ve Bayes seem to fluctuate. Future work will pursue larger, more definitive experiments.
There are several other avenues for future work. Most importantly, we aim to study \textit{frugal stream mining}, studying the frugality of stream mining algorithms that are able to handle concept drift. We also aim to extend our analysis to study the impact of algorithm hyperparameters, as well as the impact of preprocessing techniques such as feature selection.
\section{Conclusions}
\label{sec:conclusion}
With the rapid rise of wearable devices and the Internet of Things, many new machine learning applications will pervade our everyday lives. These applications may not always permit streaming large amounts of data over a network, and even if they do, privacy issues may come into play. Indeed,
while wearable sensors will enable us to improve our quality of life, we are not all comfortable sharing significant amounts of our location, movement and biometric data on a regular basis. Hence, we can expect an increasing demand for machine learning to be performed on wearable and embedded devices themselves.
We presented a new framework for the analysis of machine learning algorithms in terms of their \textit{frugality}, i.e., of how proficient they are at delivering accurate predictions when working with (possibly severely) limited resources. We introduced a novel evaluation measure, the frugality score, which trades off predictive accuracy for resource consumption and can be adjusted to the resources available to a learning algorithm, depending, for instance, on whether it is to be run on a smartphone or a smartwatch.
In an extensive empirical analysis, evaluating 103 learning algorithms on 517 classification data sets, we discovered that our collection of data sets can be divided into two clearly separated clusters, on which the algorithms perform quite differently. Using a Pareto front analysis, we found which algorithms are the most interesting candidates for frugal learning, and we visualized their frugality scores on all data sets for increasingly resource-scarce scenarios. Finally, we introduced \textit{frugality curves}, which allow one to quickly discover how different algorithms perform relative to each other, and found that algorithms that perform best without resource limitations are often the least efficient, and vice versa. In particular, we found that while large ensembles perform best when given ample resources, they are quickly overtaken by Naive Bayes-like algorithms and boosted decision stumps with few iterations.
In future work, we aim to extend this analysis by porting these learning algorithms to smartwatches and measuring their actual performance on wearable devices. Preliminary results do corroborate our earlier findings.
The concept of frugality offers an interesting and useful new way to analyse learning algorithm performance. By studying their performance through a `frugal lens', we can discover many new interesting properties and leverage this knowledge to field machine learning algorithms in pervasive low-power scenarios, and stimulate research into novel, more frugal machine learning algorithms.
\bibliographystyle{plainnat}
\section{Applications} \label{sec:application}
We now illustrate the benefits of the Karhunen-Loève expansion on functionals for the pricing of exotic derivatives.
We slightly change notation and write $W$ for the coordinate process. The path $X$ now represents the stock price, where for simplicity we employ the Black-Scholes model with zero interest rate. That is, $\Q$ is the Wiener measure and $x_t = x_0 \calE_t(\sigma W)$, where $x_0>0$ is fixed and $\calE$ denotes the stochastic exponential.
Notice, however, that our method applies to any dynamics of the underlying, possibly multidimensional or involving jumps. As we shall see below, what matters is whether the functional in the payoff, say, $f$, generates square-integrable paths $Y=f(X)$ for the covariance kernel to be well-defined.\footnote{Further work would include a treatment of exotic payoffs depending on \textit{several} functionals, although the latter can be usually combined.
Take for instance \textit{range options} that entail the difference between the running maximum $\overline{f}(X_t) =\max_{0\le s \le t}x_s$ and minimum $\underline{f}(X_t) =\min_{0\le s \le t}x_s$. Then one simply sets $f = \overline{f} - \underline{f}$. }
Let $\calM \subseteq \R$, $\calT \subseteq [0,T]$ be a finite set of option parameters and maturities, respectively.
We seek to approximate the price surface
$p: \calM \times \calT \to \R$, $p(m,\tau) = \E^{\Q}[ \varphi_m(X_\tau)]$, where the payoffs $\varphi_m(X_{\tau}) = (h_m \circ f)(X_{\tau})$ depend on a parameter $m \in \calM$. For example, a call option on $Y=f(X)$ is obtained with $h^{\text{Call}}_m(y) := (y-m x_0)^{+}$, and $m$ is the \textit{moneyness} of the option.
The standard Monte Carlo approach (MC) consists of simulating the underlying path on a partition $\Pi_{N} = \{0=t_0 < t_1 < ... < t_N=T \}$ that contains $\calT$ and computing the price as $p^{N,J}(m,\tau) = \frac{1}{J}\sum_{j=1}^J \varphi_m(X^{N,j}_\tau)$, $J\in \N$.
In contrast, the \textit{Karhunen-Loève Monte Carlo method} (KLMC) samples $Y=f(X)$ directly and computes the price surface using the representation $p(m,\tau) = \E^{\Q}[ h_m(y_\tau)]$.
We now describe the method in more depth.
\subsection{The KLMC Algorithm}
We assume for simplicity that $Y$ has zero mean, otherwise minor changes must be made for the KL expansion; see \cref{rem:center}.
First, we simulate trajectories
$Y^j=(f(X_{t_n}^j))_{t_n \in \Pi_{N_{\text{off}}}}$, $j=1,...,J_{\text{off}}$ with $J_{\text{off}}, N_{\text{off}} \in \N$, and compute the eigenfunctions of $\kappa^{N_{\text{off}}}_Y$ as in $\eqref{eq:eigendecomp}$. Next, for $k=1,...,K$, we estimate the sample quantile function $\Phi_k^{-1}:[0,1]\to \R$ of $\xi_k = (Y,F_k)$ by employing method N\textsuperscript{o} $\! 7$ in \cite{HyndmanFan}.
The coefficients $(\xi_k )$ can thereafter be simulated using inverse transform sampling \cite{Devroye}.
Notice that these steps can be done in an offline phase so $(\Phi_k^{-1})$ can be reused for other options contingent upon the functional $f$; see \Cref{sec:numResultX}.
In the online phase, we simulate $J \in \N$ transformed paths using inverse transform sampling for $\xi_k$.
Finally, the price surface is calculated using Monte Carlo. The procedure is summarized in \Cref{alg:klmc}.
It should be noted that although the coefficients $(\xi_k)$ are orthogonal in $L^2(\Q)$, they may well be \textit{dependent} when $Y$ is non-Gaussian. While the marginals of $(\xi_k)_{k\le K}$ are fitted properly,
the dependence is omitted, as generating dependent random vectors with an unknown joint distribution is highly non-trivial. Nevertheless, this simplification does not induce a bias in the obtained prices, as we shall see in the numerical experiments.
\begin{algorithm}[H]
\caption{(KLMC) }\label{alg:klmc}
\begin{itemize}
\vspace{-2mm}
\item \textbf{Offline}: Given $f$, $K$, $J_{\text{off}}$, $N_{\text{off}}$
\begin{enumerate}
\setlength \itemsep{0.2ex}
\vspace{-2mm}
\item Simulate trajectories $Y^j=(f(X_{t_n}^j))_{t_n \in \Pi_{N_{\text{off}}}}$, $j=1,...,J_{\text{off}}$
\item Compute $\kappa^{N_{\text{off}}}_Y$ (closed-form or from the sample $(Y^j)$)
\item Solve the eigenvalue problem $\eqref{eq:eigendecomp}$ to obtain $(\lambda^{\frakF}_k,F_k)$
\item Using $(Y^j)$, estimate the quantile functions $\Phi_k^{-1}$, $k\le K$
\end{enumerate}
\vspace{-2mm}
\item \textbf{Online}: Given $J, \ \calM, \ \calT$
\begin{enumerate}
\setlength \itemsep{0.2ex}
\vspace{-2mm}
\item Simulate $\xi^j_k = \Phi^{-1}_k(u_k^j)$, $(u_k^j) \overset{i.i.d.}{\sim} U(0,1)$, $j \le J$, $k\le K$
\item Compute $y_\tau^{K,\frakF,j} =\sum_{k=1}^K \xi^j_k F_k(\tau)$, $\tau \in \calT$, $j\le J$
\item Estimate the price surface $p^{K,\frakF,J}(m,\tau) := \frac{1}{J}\sum_{j=1}^{J} h_m(y_\tau^{K,\frakF,j}).$
\end{enumerate}
\end{itemize}
\vspace{-3mm}
\end{algorithm}
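A compact sketch of the offline/online split is given below. For simplicity, we take $f$ to be the running time-average of a plain Brownian motion (so that everything is Gaussian and an analytic benchmark exists), and we replace quantile method no.~7 of Hyndman and Fan by linear interpolation of the sorted sample; this illustrates the structure of \Cref{alg:klmc}, not the exact implementation used in our experiments:

```python
import numpy as np

rng = np.random.default_rng(7)
T, N_off, J_off, K, J = 1.0, 250, 4000, 10, 20000
dt = T / N_off
t = dt * np.arange(1, N_off + 1)

# ---- Offline phase ----
W = rng.normal(0.0, np.sqrt(dt), size=(J_off, N_off)).cumsum(axis=1)
Y_raw = W.cumsum(axis=1) * dt / t          # functional f: running time-average of W
mu = Y_raw.mean(axis=0)
Y = Y_raw - mu                             # centre (the KL expansion assumes zero mean)

C = (Y.T @ Y) / J_off                      # sample covariance kernel on the grid
lam, F = np.linalg.eigh(C)
lam, F = lam[::-1][:K], F[:, ::-1][:, :K]  # leading K eigenpairs
xi = Y @ F                                 # sample KL coefficients, shape (J_off, K)
xi_sorted = np.sort(xi, axis=0)            # tabulated empirical quantile functions

# ---- Online phase: inverse-transform sampling, coefficients drawn independently ----
u = rng.random(size=(J, K)) * (J_off - 1)
lo, frac = u.astype(int), u - u.astype(int)
hi = np.minimum(lo + 1, J_off - 1)
cols = np.arange(K)
xi_new = xi_sorted[lo, cols] * (1 - frac) + xi_sorted[hi, cols] * frac

y_T = mu[-1] + xi_new @ F[-1]              # simulated functional values at maturity T
price = np.maximum(y_T, 0.0).mean()        # E[(time-average of W on [0,T])^+]
# Analytic benchmark: the average of W over [0,T] is N(0, T/3)
benchmark = np.sqrt(T / 3.0) / np.sqrt(2.0 * np.pi)
```

In this Gaussian toy example the KL coefficients are genuinely independent, so sampling them independently introduces no bias and the KLMC price matches the analytic value; for non-Gaussian functionals the dependence is omitted, as discussed above.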
\subsection{Numerical Results} \label{sec:numResultPrice}
First, we build the price surface for Asian and lookback call options, i.e. by choosing $h_m = h^{\text{Call}}_m$ and the running maximum and time average as underlying functional, respectively. Of course, the put option price surface can be retrieved thanks to put-call parity. We also consider Up \& Out digital options, that is $f(X_t)=\max_{0\le s \le t}x_s$ and $h^{\text{UO}}_m(y):=\mathds{1}_{\{y \ \le \ m x_0\}}$. The parameter $m\ge 1$ thus represents the barrier of the option relative to the spot price.
We can therefore reuse the quantile functions computed for the lookback call options.
The parameters are $(x_0,\sigma,N_{\text{off}},J_{\text{off}},J) = (100, 0.2, 10^3, 2^{17}, 2^{19})$,
$T=1$ year and $\calT = \{\frac{1}{52},\frac{2}{52},\ldots,1\}$ (weekly maturities).
The moneyness and barrier levels are respectively $\calM^{\text{Call}} = \{0.75,0.80,\ldots,1.25\}$ and $\calM^{\text{UO}} = \{1.05,1.10,\ldots,1.50\}$.
We assess accuracy in the mean square sense, namely by computing
$$
\text{MSE} = \frac{1}{|\calM|\ |\calT|}\sum_{(m,\tau) \in \calM \times \calT} |p^{\text{(B)}}(m,\tau) - \hat{p}(m,\tau)|^2.
$$ The function $\hat{p}$ is the approximated price and $p^{\text{(B)}}$ a benchmark obtained using a standard Monte Carlo with $40 \cdot |\calT| = 2080$ time steps and the same number of simulations.
\interfootnotelinepenalty=10000
\cref{tab:results} displays the MSE, runtime (online phase), and number of variates per simulated path ($K$ and $N$ for the KLMC and MC methods, respectively).\footnote{The experiments have been made on a personal computer; see \href{https://github.com/valentintissot/KLMC.git}{https://github.com/valentintissot/KLMC} for an implementation. The offline phase takes about 10 seconds per functional.} Notice that we increase $K$ and $N$ for the lookback call and Up \& Out digital options, as the running maximum has a slower rate of $L^2(\Q\otimes dt)$ convergence, as seen in \Cref{fig:L2Error} for the Brownian case. The KLMC method consistently yields a lower MSE and runtime. For the Asian call option, note that for the KLMC method the number of variates per path is smaller than the number of maturity points, which could not be achieved with the MC method.
\vspace{-1mm}
\begin{table}[H]
\centering
\caption{Mean squared errors and runtime (seconds)}
\vspace{-3mm}
\begin{tabular}{ccccccc}
\hline & \multicolumn{3}{c}{KLMC} & \multicolumn{3}{c}{MC} \\ \hline
Option & K & MSE & Time & N & MSE & Time \\
\hline
Asian Call & $40$ & 1.50e-04 & 2.26 & $ 52$ & 3.10e-04 & 2.40\\
Lookback Call & $100$ & 1.15e-02 & 4.47& $4\cdot 52$ & 1.92e-01 & 7.05 \\
Up \& Out Digital & $100$ & 1.80e-04 & 4.40 & $4\cdot 52$ & 1.90e-04 & 6.88\\ \hline
\end{tabular}
\label{tab:results}
\end{table}
\section*{Conclusion}
This paper sheds further light on the approximation of path functionals. After a thorough review of Hilbert projections and a connection with the path signature, we show the power of the Karhunen-Loève expansion to parsimoniously simulate path-dependent payoffs.
Further work would include the use of copulas in the KLMC algorithm to capture the dependence between the $L^2([0,T])$ coefficients of the Karhunen-Loève expansion and a performance comparison with signature-based methods.
\subsubsection{Covariance Kernel of Running Maximum}
\begin{proposition}
The covariance kernel is given by
$$\kappa_Y(s,t)= \frac{s}{2} + \frac{\sqrt{s(t-s)}- 2\sqrt{st} + t \,\textnormal{arcsin}(\sqrt{s/t})}{\pi}, \qquad 0 \le s \le t.$$
\end{proposition}
\begin{proof}
Let $\tau_t$ be the last time the maximum of $X$ has been reached in $[0,t]$ (the ex-post optimal time to sell if $X$ is seen as a stock process). Note that $\tau_t$ is also the last visit of $Y-X$ to the origin, i.e.
$$\tau_t = \sup \calN_t^{Y-X}, \quad \calN_t^{W} = \{s \in [0,t] \, | \, w_s = 0 \}.$$
Now since $y_t - x_t \overset{d}{=} |x_t|$ and
$\calN_t^{|X|} = \calN_t^{X}$, \rr{then $ \tau_t \overset{d}{=} \sup \calN_t^{X}$}. The distribution of the latter is known, and finally gives
$$\Q(\tau_t\le s) = \frac{2}{\pi} \textnormal{arcsin}(\sqrt{s/t}).$$
Idea: Fix $s\le t$:
\begin{enumerate}
\item
\begin{align*}
\E[y_s y_t] &= \frac{ \E[y_t^2] + \E[y_s^2] - \E[(y_t - y_s)^2] }{2}
\end{align*}
and use $\E[(y_t - y_s)^2]=\E[(y_t - y_s)^2\mathds{1}_{\{\tau_t \le s\}}]$
\item
\begin{align*}
\E[y_s y_t] &= \E[y_s y_t\mathds{1}_{\{\tau_t \le s\}}] + \E[y_s y_t\mathds{1}_{\{\tau_t > s\}}]\\
\E[y_s y_t\mathds{1}_{\{\tau_t \le s\}}] &= \E[y^2_s\mathds{1}_{\{\tau_t \le s\}}] = \E[y^2_t\mathds{1}_{\{\tau_t \le s\}}]\\
\E[y_s y_t\mathds{1}_{\{\tau_t > s\}}] &= \E[y_s y_{s,t}\mathds{1}_{\{\tau_t > s\}}] = \E[y_s (x_s + \tilde{y}_{t-s})\mathds{1}_{\{\tau_t > s\}}]
\end{align*}
\item Local time approach: $y_t = x_0 + \frac{1}{2}l_t^{Y-X}(0)$, so that ($x_0=0$)
$$y_t = \frac{1}{2}l_s^{Y-X}(0) + \frac{1}{2}l_{s,t}^{Y-X}(0) \overset{\text{Markov}}{=} y_s + x_s + \frac{1}{2}l_{t-s}^{\tilde{Y}-\tilde{X}}(0). $$
Thus,
\begin{align*}
\E[y_s y_t] &= \E[y_s^2] + \E[y_s x_s] + \frac{1}{2} \E[y_s] \E[l_{t-s}^{\tilde{Y}-\tilde{X}}(0)]\\
&= s + \E[y_s x_s] + 2\frac{\sqrt{s(t-s)}}{\pi}
\end{align*}
Note that
$$s = \E[|x_s|^2] = \E[(y_s - x_s)^2] = \E[y_s^2] + \E[x_s^2] - 2 \E[y_s x_s] \, \Longrightarrow \E[y_s x_s] = \frac{s}{2}.$$
Hence, $\E[y_s y_t] = \frac{3s}{2} + 2\frac{\sqrt{s(t-s)}}{\pi}$.
\end{enumerate}
\end{proof}
\section{Projection of Functionals}
\label{sec:funcApprox}
In \Cref{sec:pathApprox}, we unveiled two ways to approximate exotic payoffs $\varphi = h\circ f$, namely by projecting the original path $X$ or $Y = f(X)$ directly. If $\pi^{K,\frakF}$ denotes as before the projection map onto a Hilbert space $\calH$, we can therefore write $\varphi^{K,\frakF} := h\circ f^{K,\frakF}$, where either $f^{K,\frakF} = f \circ \pi^{K,\frakF}$ (functional of projected path)
or $f^{K,\frakF} = \pi^{K,\frakF} \circ f$ (projected functional).
We shall see in \Cref{ssec: numResult} that the former is suboptimal.
While this is not so problematic for functionals capturing global features of a path, local path characteristics (e.g., the running maximum) will typically be poorly estimated. Indeed, projecting a path first erases most of its microstructure.
We thus favor the second option ($f^{K,\frakF} = \pi^{K,\frakF} \circ f$), which consists of replacing $X$ by $Y$ in $\eqref{eq:proj}$. Let us now focus on $\calH = L^2([0,T])$ and demonstrate how to compute the Karhunen-Loève basis of $Y$.
\input{Functional_Approx/Functional_Projection}
\input{Functional_Approx/Numerics}
\subsection{Karhunen-Loève Expansion of Functionals}
Assume that $Y \in L^2([0,T]) \cap \Lambda_T$ has zero mean (otherwise, see \cref{rem:center}). \cref{thm:KL} suggests to set $\frakF$ equal to the eigenfunctions of $\kappa_Y(s,t) = (y_s,y_t)_{L^2(\Q)}$. Optimality comes, however, at the cost of computing $\frakF$ explicitly. We proceed as follows: take a regular partition $\Pi_N = \{t_n = n\, \delta t\, |\, n=0,\ldots,N\}$, $ \delta t =\frac{T}{N}$, and compute the kernel matrix $\kappa^{N}_Y = (\kappa_Y(t_n,t_m))_{0 \le n,m \le N}$. When $\kappa_Y$ does not admit a closed-form expression, $\kappa^{N}_Y$ is replaced by the sample covariance matrix obtained from simulated paths of $Y$. The eigenfunctions thus become eigenvectors and solve the system\footnote{In $\eqref{eq:eigendecomp}$, $\sum"$ means that the first and last summands are halved, i.e. the trapezoidal rule is used to compute $(\kappa_Y(\cdot,t), F_k)$. Another approach, known as Nyström's method \cite{reinhardt}, consists of employing a Gaussian quadrature scheme instead.
However, for large $N$, we have not observed any improvement
and thus favor the more convenient discretization in $\eqref{eq:eigendecomp}$.}
\vspace{-1mm}
\begin{equation}\label{eq:eigendecomp}
\sum_{t_n\in \Pi_N}\!\!" \, \kappa^{N}_{Y}(t_n,t_m) F_{k}(t_n) \delta t = \lambda^{\frakF}_{k}\, F_{k}(t_m), \quad t_m\in \Pi_N, \quad k=0,...,N.
\end{equation}
\vspace{-3mm}
This is a simple eigenvalue problem so all pairs $(F_k,\lambda_k^{\frakF})$ can be computed in one go. Let us proceed with two examples where $T=1$ and $\Q=$ Wiener measure throughout.
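Before turning to the examples, a minimal numerical sketch of the discretized Fredholm equation above: the trapezoidal weights are absorbed symmetrically so that a standard symmetric eigensolver applies. The Brownian kernel $\kappa_X(s,t)=s\wedge t$, whose first eigenvalue is $4/\pi^2$ (see below), serves as an illustration; the grid size is arbitrary.

```python
import numpy as np

# Solve the discretized Fredholm equation with trapezoidal weights, here for
# the Brownian kernel kappa(s,t) = min(s,t) (grid size illustrative).
T, N = 1.0, 500
t = np.linspace(0.0, T, N + 1)
K_mat = np.minimum.outer(t, t)                      # kernel matrix kappa(t_n, t_m)

w = np.full(N + 1, T / N); w[0] = w[-1] = T / (2 * N)   # halved end weights
sw = np.sqrt(w)

# Symmetrize W^{1/2} K W^{1/2} so a standard symmetric eigensolver applies.
lam, G = np.linalg.eigh(sw[:, None] * K_mat * sw[None, :])
lam, G = lam[::-1], G[:, ::-1]                      # sort eigenvalues decreasingly
F = G / sw[:, None]                                 # eigenvectors of the weighted problem
```

All pairs $(F_k,\lambda_k^{\frakF})$ indeed come out of the single call to the eigensolver.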
\begin{example} \label{ex:timeIntAvg}
Consider the time integral and average of a Brownian path,
$y_t =
f(X_t) = \int_0^t x_s\, ds, \ \bar{y}_{t}=
\bar{f}(X_t) = \frac{1}{t}f(X_t). $
These are clearly centered processes and their covariance kernels can be found explicitly. Starting with $Y$,
$$\kappa_{Y}(s,t) = \left(\int_0^s x_r dr,\int_0^t x_u du \right)_{L^2(\Q)} \overset{\textnormal{Fubini}}{=} \int_0^s\int_0^t \kappa_X(r,u) dr du.$$
A straightforward calculation gives, for $s\le t$,
$\kappa_{Y}(s,t) = \frac{s^2 t}{2} - \frac{s^3}{6}$
and
$\kappa_{\overline{Y}}(s,t) = \frac{s}{2} -\frac{s^2}{6t}$, where $\overline{Y} = \overline{f}(X)$.
We display in \Cref{fig:AvgK,fig:IntK} the covariance kernels (top) and first eigenfunctions (bottom) of $\bar{f}$ and $f$, respectively.
The dashed lines in the bottom panels are the eigenfunctions of the original (Brownian) path. Note the wider range of the eigenfunctions $F_1,F_2$ for $\bar{f}(X)$ compared to the integrated path for small $t$. This might come from the greater fluctuations of the time average at inception.
\vspace{-3mm}
\end{example}
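The double-integral representation of $\kappa_Y$ can be checked numerically. The sketch below applies a two-dimensional trapezoidal rule to $\kappa_X(r,u)=r\wedge u$ and compares with the closed form; the grid size and the evaluation point are arbitrary.

```python
import numpy as np

# Trapezoidal check of kappa_Y(s,t) = s^2 t / 2 - s^3 / 6 for s <= t.
def kappa_Y_num(s, t, n=2_000):
    r, u = np.linspace(0.0, s, n + 1), np.linspace(0.0, t, n + 1)
    wr = np.full(n + 1, s / n); wr[0] = wr[-1] = s / (2 * n)
    wu = np.full(n + 1, t / n); wu[0] = wu[-1] = t / (2 * n)
    return wr @ np.minimum.outer(r, u) @ wu       # double integral of min(r, u)

s, t = 0.3, 0.7
closed_form = s**2 * t / 2 - s**3 / 6
```

Dividing by $st$ recovers the kernel of the time average, $\frac{s}{2}-\frac{s^2}{6t}$.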
\begin{figure}[H]
\caption{Covariance kernels (top) and eigenfunctions (bottom).}
\vspace{-2mm}
\begin{subfigure}[b]{0.325\textwidth}
\centering
\caption{Time average}
\includegraphics[height=1.4in,width=1.85in]{Figures/KLAverageKernel.png}
\label{fig:AvgK}
\end{subfigure}
\begin{subfigure}[b]{0.325\textwidth}
\centering
\caption{Time integral}
\includegraphics[height=1.4in,width=1.85in]{Figures/KLIntegralKernel.png}
\label{fig:IntK}
\end{subfigure}
\begin{subfigure}[b]{0.325\textwidth}
\centering
\caption{Running maximum}
\includegraphics[height=1.4in,width=1.85in]{Figures/KLMaximumKernel.png}
\label{fig:MaxK}
\end{subfigure}
\vspace{2mm}
\begin{subfigure}[b]{0.325\textwidth}
\centering
\includegraphics[height=1.4in,width=1.85in]{Figures/KLAsian.png}
\label{fig:Avg}
\end{subfigure}
\begin{subfigure}[b]{0.325\textwidth}
\centering
\includegraphics[height=1.4in,width=1.85in]{Figures/KLIntegral.png}
\label{fig:Int}
\end{subfigure}
\begin{subfigure}[b]{0.325\textwidth}
\centering
\includegraphics[height=1.4in,width=1.85in]{Figures/KLLookback.png}
\label{fig:Lookback}
\end{subfigure}
\vspace{1mm}
\begin{center}
\footnotesize{
\textit{Solid lines: transformed
path. Dashed lines: Brownian motion.}}
\end{center}
\vspace{-3mm}
\end{figure}
\begin{example} Consider the running maximum functional $y_t=
f(X_t) = \max_{0 \le s \le t} x_s$. \Cref{fig:max3D} provides an illustration in the $(t,X,Y)$ plane. The mean function is in this case non-zero and, using, e.g., the reflection principle, given by $\E^{\Q}[y_t] = \sqrt{\frac{2}{\pi} t}$. The covariance kernel admits an explicit yet complicated expression \cite{Benichou},
$$\kappa_Y(s,t) = \frac{s}{2} + \frac{\sqrt{s(t-s)}-2\sqrt{st} + t \arcsin(\sqrt{s/t})}{\pi}, \quad s \le t.$$
\Cref{fig:MaxK} displays the covariance kernel (top) and first eigenfunctions (bottom). The latter turn out to be quite close to the eigenfunctions of Brownian motion.
\end{example}
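A quick numerical check of this kernel, with illustrative sample sizes: at $s=t$ it must reduce to $\mathrm{Var}(y_t)=(1-\frac{2}{\pi})t$, and a crude Monte Carlo estimate of the covariance of the discretized running maximum should be close to the stated formula.

```python
import numpy as np

def kappa_max(s, t):
    # stated kernel of the running maximum of Brownian motion, s <= t
    return s / 2 + (np.sqrt(s * (t - s)) - 2 * np.sqrt(s * t)
                    + t * np.arcsin(np.sqrt(s / t))) / np.pi

# At s = t the kernel must equal Var(y_t) = (1 - 2/pi) t.
var_exact = (1 - 2 / np.pi) * 1.0

# Crude Monte Carlo estimate of Cov(y_{1/2}, y_1) on a discrete grid.
rng = np.random.default_rng(1)
n_paths, n_steps = 5_000, 1_000
dW = rng.standard_normal((n_paths, n_steps)) / np.sqrt(n_steps)
Y = np.maximum.accumulate(np.cumsum(dW, axis=1), axis=1)
ys, yt = Y[:, n_steps // 2 - 1], Y[:, -1]
cov_mc = np.mean(ys * yt) - np.mean(ys) * np.mean(yt)
```

The discrete maximum slightly underestimates the continuous one, so only rough agreement should be expected at this grid size.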
\begin{figure}[H]
\centering
\caption{Running maximum functional for two trajectories.}
\vspace{-2mm}
\includegraphics[scale =0.2]{Figures/MaxDouble.JPG}
%
\label{fig:max3D}
\end{figure}
\subsection{Numerical Results} \label{ssec: numResult}
Let us compare the $L^2(\Q \otimes \, dt)$ error
$\lVert Y^{K,\frakF} - Y \rVert^2_{*} $ in the Brownian case for the two avenues discussed at the beginning of this section. When $Y^{K,\frakF} = (f \circ \pi^{K,\frakF})(X) $, the error is calculated using Monte Carlo simulations. As in \Cref{sec:numResultX}, we choose $T=1$, $N=10^4$ and $K\in \{1,\ldots,128\}$.
\Cref{fig:L2Error} displays the results for the running maximum, integral and average functionals. We also add the Brownian motion itself, corresponding to the identity functional $f(X)=X$. We observe a clear improvement when projecting the transformed path. Moreover, it comes as no surprise that smooth functionals (integral, average) exhibit a faster rate of convergence than the running maximum, which is highly sensitive to the local behaviour of a path.
\begin{figure}[H]
\centering
\caption{$L^2(\Q \otimes dt)$ approximation errors. }
\vspace{-2mm}
\includegraphics[scale =0.38]{Figures/Error_nSim5000_N1000.pdf}
\label{fig:L2Error}
\\
\vspace{1mm}
\footnotesize{
\textit{ Dashed lines: functionals of projected paths. \\Solid lines: projected functionals.}}
\end{figure}
\subsection{Discussion: Projection of Functionals using the Signature} \label{sec:sigFunc}
Another approximation of $Y=f(X)$ can be obtained by combining signature functionals in a linear fashion. For instance, one can consider all the words of length less than $\bar{K} \in \N$, giving the approximation
\begin{equation}\label{eq:sigPayoff}
y^{K,\calS}_t := \sum_{l(\alpha)\, \le \, \bar{K}} \xi^f_{\alpha} \calS_{\alpha}(X_t),
\end{equation}
\vspace{-3mm}
with $K = |\{\alpha \, | \, l(\alpha)\le \bar{K}\}|=2^{\bar{K}+1}-1.$
The coefficients $\xi^f_{\alpha}$ may depend on $X_0$ only and can be calculated by either regressing $Y$ against the signature elements or using a Taylor formula for functionals \cite{LittererOberhauser} when $f$ is smooth (in the Dupire sense) and $X$ is a diffusion.
In the literature, $\eqref{eq:sigPayoff}$ is referred to as a \textit{polynomial functional} \cite{LittererOberhauser} or, in finance, a \textit{signature payoff} \cite{Szpruch,LyonsNum}.
The appeal of such projection comes from the fact that polynomial functionals are dense in the space of continuous functionals restricted to paths of bounded variation; see Theorem 5 in \cite{LittererOberhauser}.
On the other hand, despite the existence of packages to calculate the signature of discrete time paths (e.g., \texttt{iisignature} and \texttt{esig} in Python), the projection in $\eqref{eq:sigPayoff}$ is still challenging from a computational perspective.
Indeed, contrary to the reconstruction in \Cref{sec:sigLegendre}, the signature functionals have to be known at \textit{every} intermediate time. Also, as there is a priori no recipe to select words up to a given length, we must retain all of them so the number of elements doubles every time a layer is added.
\section{Introduction}
The pricing of exotic
options remains a difficult task in quantitative finance. The main challenge is to find an adequate trade-off between pricing accuracy and fast computation. Efficient techniques such as finite differences \cite{Schwartz} or the fast Fourier transform \cite{Carr} are in general not applicable to path-dependent payoffs. Practitioners are often forced to turn to standard Monte Carlo methods, despite their slow convergence.
Therefore, researchers have come up with novel ideas over the years to tackle this issue.
Recently, some authors have employed deep learning to price vanilla and exotic options in a non-parametric manner \cite{Hull} while others
showed the benefits of the path signature \cite{Szpruch,LyonsNum} to project exotic payoffs in a nonlinear way.
In this paper, we move away from prevailing machine learning methods and bring a classical tool back into play: the Karhunen-Loève (KL) expansion \cite{Karhunen, Loeve}. The latter provides an orthogonal decomposition of stochastic processes that is optimal in the $L^2(\Q \otimes dt)$ sense. The theory has been applied to Gaussian processes \cite{Ghanem,Solin}, functional quantization \cite{Pages}, and recently to the Brownian Bridge in a weighted Hilbert space \cite{Foster}.
In this paper, the KL expansion takes on a newfound importance when it is applied to the projection of path functionals. We propose a simple simulation-based procedure, which we call the Karhunen-Loève Monte Carlo (KLMC) algorithm, to compute the price of exotic options.\footnote{See \href{https://github.com/valentintissot/KLMC.git}{https://github.com/valentintissot/KLMC} for an implementation.}
The superiority of KL-based Monte Carlo methods compared to the ordinary one was shown numerically in \cite{Acworth} for the Brownian case. Our goal is here to further support these findings and
extend this approach. We also discuss alternative methods employing the path signature as basis of the space of functionals.
The remainder of this paper is structured as follows. In \Cref{sec:pathApprox}, we gather standard results from approximation theory and draw a
parallel between Hilbert projections and the increasingly popular path signature.
\Cref{sec:funcApprox} is devoted to the approximation of functionals, where two routes are contrasted as well as a short discussion on the use of the signature for this task.
We apply the developed tools and finally present the KLMC algorithm with accompanying numerical evidence in \Cref{sec:application}.
\section{Path Approximation}
\label{sec:pathApprox}
For fixed horizon $T>0$, let $\Lambda_t = \calC([0,t],\R)$ and $\Lambda := \bigcup_{t\in[0,T]}\Lambda_t$. For $X \in \Lambda_t$ and $s \le t$, $X_s$ denotes the trajectory up to time $s$, while $x_s= X(s)$ is the value at time $s$.
We equip $\Lambda$ with a $\sigma-$algebra $\calF$, filtration $\F$ and probability measure $\Q$ to form a stochastic basis $(\Lambda,\calF,\F, \Q)$.
The goal of this paper is to price exotic options with payoff of the form $\varphi = h\circ f$, where $f:\Lambda \to \R$ is a \textit{functional} and $h$ a real function.
For instance, an Asian call option is obtained with $f(X_t) = \frac{1}{t}\int_{0}^t x_s ds$ and $h(y) = (y-K)^+$. If $\Q$ is a risk-neutral measure and assuming zero interest rate, then
$$p = \E^{\Q}[\varphi(X_\tau)],$$ is the value of the option with payoff $\varphi$ and maturity $\tau \in [0,T]$.
To compute $p$ using Monte Carlo, we typically simulate an approximated version of $X \in \Lambda_\tau$, e.g. using time discretization. Alternatively, we can approximate the \textit{transformed path},
$$Y = f(X), \quad y_t = f(X_t), \quad t\in [0,\tau],$$ and write $p = \E^{\Q}[h(y_\tau)]$. We favor the second option, as shown throughout the paper and in the numerical experiments.
A natural way to approximate $X$ (or $Y$) is to project it onto a Hilbert space. For simplicity, we focus on paths defined on the whole interval $[0,T]$, so working on $\Lambda_T$ is enough. Also, $f$ is assumed to preserve continuity, so $f(X)\in \Lambda_T$ as well. We now present the theory for the original path, although the same would hold for the transformed one; see \cref{sec:funcApprox}.
Let $\calH $ be a separable Hilbert space with inner product $(\cdot,\cdot)_{\calH}$. Then any $X \in \Lambda_T \cap \, \calH $ admits the representation
\begin{equation}\label{eq:proj}
x_t = \sum_{k} \xi_k F_k(t), \quad \xi_k = (X,F_k)_{\calH}, \quad t\in [0,T],
\end{equation}
\vspace{-4mm}
where $\mathfrak{F} := (F_k)$ is an orthonormal basis (ONB) of $\calH$.\footnote{
The enumeration of $\mathfrak{F}$ will depend on its construction and
common notations. For instance, $\mathfrak{F}$ may or may not include an initial element $F_0$. For the sake of fairness, however, we always compare projections involving the same number of basis functions.} An approximation of $X$ is obtained by truncating the series in $\eqref{eq:proj}$, that is
$x^{K,\frakF}_t = \sum_{k\, \le\, K} \xi_k F_k(t).$
Each pair $(K,\frakF)$ thus induces a
projection map $\pi^{K,\frakF}:\calH \to \calH$ given by $\pi^{K,\frakF}(X) = X^{K,\frakF}$. Although paths are assumed to be one-dimensional for simplicity, the present framework is easily generalized. Indeed, if $x_t = (x^1_t,\ldots,x^d_t) \in \R^d$, it suffices to project each component separately, i.e. $x_t^{i,K,\frakF^i} =\sum_{k\, \le\, K} \xi^i_k F^i_k(t)$ with $\xi^i_k = (X^i,F^i_k)_{\calH}$ and $(\frakF^i)_{i=1}^{d}$ ONB's of $\calH$.
\subsection{Karhunen-Loève Expansion}\label{ssec:KL}
Let $\calH$ be the Lebesgue space $L^2([0,T])$ of square-integrable functions, where we write
$(\cdot,\cdot) = (\cdot,\cdot)_{L^2([0,T])} $ for brevity.
Among the myriad of bases available, which one should be picked? The answer will depend upon the optimality criterion. One possibility is to minimize the squared $L^2(\Q \otimes \, dt)-$norm (denoted by $\lVert \cdot \rVert_{*}$) of the difference between a path and its $K-$order truncation, i.e.
$$\epsilon^{K,\frakF} := \lVert X - X^{K,\frakF}\rVert^2_{*} = \E^{\Q} \int_0^T |
x_t - x^{K,\frakF}_t|^2 dt,$$
for $X \in \Lambda_T \cap L^2([0,T])$.
Thanks to the orthogonality of $\frakF$, we have
\begin{equation}\label{eq:err}
\epsilon^{K,\frakF} = \sum_{k,l \, > \, K}(\xi_k,\xi_l)_{L^2(\mathbb{Q})} \;(F_k,F_l)_{L^2([0,T])} = \sum_{k \, > \, K} \lambda^{\frakF}_k , \quad \lambda^{\frakF}_k := \lVert \xi_k\rVert^2_{L^2(\Q)},
\end{equation}
where it is assumed that $\lambda_k^{\frakF} \ge \lambda_l^{\frakF}\; \; \forall \ k < l $ without loss of generality.
As the mapping
$\frakF \mapsto \sum_{k} \lambda^{\frakF}_k $ is constant and equal to the total variance $\lVert X\rVert^2_{L^2(\Q \otimes \, dt)}$, the projection error is solely determined by the speed of decay of $(\lambda^{\frakF }_k)$. Inversely, the optimal basis will maximize the cumulative sum of variance $\sum_{k \, \le \, K} \lambda^{\frakF}_k$.
This leads us to the \textit{Karhunen-Loève expansion} \cite{Karhunen,Loeve}, the continuous analogue of Principal Component Analysis.
In what follows, assume $\E^{\Q}[x_t]=0$ $\forall \, t \in [0,T]$ and define the covariance kernel $\kappa_X(s,t) = (x_s, x_t)_{L^2(\Q)}.$
As $\kappa_X$ is symmetric, continuous and non-negative definite, Mercer's representation theorem \cite{Mercer} ensures the existence of an ONB $\frakF=(F_k)$ of $L^2([0,T])$ and scalars $\lambda_1^{\frakF} \ge \lambda_2^{\frakF} \ge \ldots \ge 0$ such that
\begin{equation}\label{eq:mercer}
\kappa_X(s,t)= \sum_{k=1}^{\infty} \lambda^{\frakF}_k F_k(s) F_k(t).
\end{equation}
Then $\frakF$ is the \textit{Karhunen-Loève (KL) basis} associated to $X$ under $\Q$.
From $\eqref{eq:mercer}$, it is immediate that $F_k$ solves the Fredholm integral equation
$$(\kappa_X(t,\cdot),F_k) = \lambda_k^{\frakF}\, F_k(t), \quad t\in [0,T].$$ Accordingly,
$\frakF$ and $(\lambda_k^{\frakF})$ are termed \textit{eigenfunctions} and \textit{eigenvalues} of $\kappa_X$, respectively.
Observe that the squared $L^2(\Q)$ norm of the KL coefficient $\xi_k=(X,F_k)$ is precisely $\lambda_k^{\frakF}$, whence comes the notation in $\eqref{eq:err}$.
The next result reflects the relevance of the KL expansion; see \cite[Theorem 2.1.2.]{Ghanem} for a proof.
\begin{theorem}
\label{thm:KL}
The Karhunen-Loève basis
is the unique ONB of $L^2([0,T])$ minimizing $\epsilon^{K,\frakF}$ for every truncation level $K\ge 1$.
\end{theorem}
\begin{remark}\label{rem:center}
For non-centered trajectories, it suffices to characterize the Karhunen-Loève basis of $x_t-\E^{\Q}[x_t]$. The projected path is then obtained by adding the mean function back to the expansion.
\end{remark}
\begin{example} \label{ex:KLBM} Let $T=1$ and $\Q$ be the Wiener measure, i.e. the coordinate process $X$ is Brownian motion on $[0,1]$.
The covariance kernel is $\kappa_X(s,t) = s \wedge t$, leading to the eigenfunctions $F_k(t) = \sqrt{2} \sin((k-1/2)\pi t)$ and eigenvalues $\lambda_k^{\frakF} = \frac{1}{\pi^2(k-1/2)^2}$, $k \ge 1.$
The projection error is approximately equal to
\begin{align*}
\epsilon^{K,\frakF}
= \frac{1}{\pi^2}\sum_{k\,>\,K} \frac{1}{(k-1/2)^2}
\approx \frac{1}{\pi^2} \int_K^{\infty} \frac{dk}{(k-1/2)^2} =
\frac{1}{\pi^2(K-1/2)}.
\end{align*}
It is easily seen that $\xi_k = (X,F_k) \sim \calN(0,\lambda_k^{\frakF})$ so that $\xi_k, \xi_l$ are independent for $k\ne l$. Therefore, "smooth" Brownian motions can be simulated by setting
$x^{K,\frakF}_t = \sum_{k =1}^K \sqrt{\lambda_k^{\frakF}}\, z_k \, F_k(t)$ with $(z_k)_{k=1}^K \overset{\textnormal{i.i.d.}}{\sim} \calN(0,1)$, $K\ge 1.$
\end{example}
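The last display translates directly into code; a minimal sketch, with illustrative $K$, grid and sample sizes:

```python
import numpy as np

# Simulate "smooth" Brownian motions from the truncated KL expansion
# (K, grid and sample sizes illustrative).
rng = np.random.default_rng(2)
T, K, n_paths, N = 1.0, 64, 20_000, 200

t = np.linspace(0.0, T, N + 1)
k = np.arange(1, K + 1)
lam = 1.0 / (np.pi**2 * (k - 0.5) ** 2)                    # eigenvalues
F = np.sqrt(2.0) * np.sin((k[:, None] - 0.5) * np.pi * t)  # eigenfunctions

Z = rng.standard_normal((n_paths, K))
X = (Z * np.sqrt(lam)) @ F                  # K-term paths on the grid

# Since F_k(T)^2 = 2, Var(x_T^{K,F}) = sum of 2 lam_k, which tends to T.
var_trunc = 2.0 * lam.sum()
```

With $K=64$ the truncated terminal variance is already within a fraction of a percent of $T=1$, in line with the $\frac{1}{\pi^2(K-1/2)}$ error estimate.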
\subsection{Lévy-Ciesielski Construction}\label{ssec:LC}
Another Hilbert space is the \textit{Cameron–Martin space},
$\calR = \{F \in \Lambda_T \, | \, dF \ll dt, \, \dot{F} \in L^2([0,T]) \} $
where $\dot{F}$ denotes the time derivative of $F$. The inner product is $(F,G)_{\mathcal{R}} = (\dot{F}, \dot{G})$, from which
$(F_k)$ is an ONB of $\calR \; \Longleftrightarrow \; (\dot{F}_k)$ is an ONB of $L^2([0,T])$
is immediate.
If $X^{K,\frakF}$ is a projected path with respect to an ONB $\frakF$ of $\calR$, then taking derivative gives
$$\dot{x}_t^{K,\frakF} = \sum_{k \, \le \, K} (\dot{X},\dot{F}_k)\dot{F}_k(t) = \sum_{k \, \le \, K} (X,F_k)_{\calR}\, \dot{F}_k(t). $$
We gather that the projection of a path onto $\calR$ corresponds to an $L^2([0,T])$ projection of its (possibly generalized) derivative
followed by a time integration.
When $\Q$ is the Wiener measure, this procedure is often called the \textit{Lévy-Ciesielski construction}.
With regards to accuracy, we recall the expression for the $L^2(\Q \otimes \, dt)-$error,
$$\epsilon^{K,\frakF} = \lVert X - X^{K,\frakF}\rVert^2_{*} = \sum_{k,l \, > \, K}(\xi_k,\xi_l)_{L^2(\mathbb{Q})} \;(F_k,F_l)_{L^2([0,T])}.\vspace{-2mm}$$
As orthogonal functions in $\calR$ need not be orthogonal in $L^2([0,T])$, we cannot in general get rid of the double sum above.
However, if $\Q$ is the Wiener measure and $\dot{X}$ the white noise process, then Fubini's theorem gives
$$(\xi_k,\xi_l)_{L^2(\mathbb{Q})} = \int_{[0,T]^2} \underbrace{\E^{\Q}[\dot{x}_s \dot{x}_t]}_{=\, \delta(t-s)}\dot{F}_k(s) \dot{F}_l(t) ds dt = \int_0^T \dot{F}_k(t) \dot{F}_l(t) dt = (F_k,F_l)_{\calR}= \delta_{kl} .$$
Thus,
$\epsilon^{K,\frakF} = \sum_{k\, >\, K} \lVert F_k \rVert^2 $.
The optimal Cameron-Martin basis would therefore have the fastest decay of its squared norms $(\lVert F_k \rVert^2)$, assuming the latter are sorted in non-increasing order.
We illustrate the Lévy-Ciesielski construction with two examples, taking $T=1$.
\begin{example}\label{ex:BBC}
A standard method to prove the existence of Brownian motion is the \textit{Brownian bridge construction}. In short, it consists of a random superposition of triangular functions (the Schauder functions) obtained by integrating the Haar basis on $[0,1]$,
$$\dot{F}_{k,l}(t)=2^{k/2} \, \psi \left(2^{k}t-l\right),\quad 0 \le l < 2^k,\quad t\in [0,1],$$
with the wavelet $\psi= (-1)^{ \mathds{1}_{[1/2, 1)}}$, $\textnormal{supp}(\psi) = [0,1]$. The Schauder and Haar functions are illustrated on the left side of \Cref{fig:CMS}. It is easily seen that $\dot{F}_{k,l}$ as well as $F_{k,l}$ have support
$[l/2^k,(l+1)/2^k]$, the $l-$th subinterval of the dyadic partition $\Pi_{k} = \{l/2^k\,|\, 0 \le l \le 2^k\}$.
The construction is incremental:
First, the terminal value of the path is simulated. Then, for each subinterval of $\Pi_k$, $k\ge 0$, a random value for the mid-point is generated and thereafter connected to the endpoints in a linear fashion (using $F_{k+1,\cdot}$).
The restriction of $X$ to $\Pi_k$ will therefore remain the same when finer characteristics of the path are added.
When considering all functions up to the $\bar{K}-$th dyadic partition, the total number of basis functions employed is $K =
2^{\bar{K}+1}-1$.
For Brownian motion, the approximation error is known \cite{Brown} and equal to
$\epsilon^{K,\frakF} = \frac{1}{6K}.$
\end{example}
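The incremental construction can be sketched with the equivalent midpoint recursion: simulate the terminal value, then fill each dyadic midpoint with the bridge mean plus independent noise of variance a quarter of the interval length. The number of levels and sample size below are illustrative.

```python
import numpy as np

# Midpoint (Levy) construction on dyadic points; parameters illustrative.
rng = np.random.default_rng(3)
K_bar, n_paths = 6, 20_000
n = 2**K_bar                                  # subintervals at the finest level

X = np.zeros((n_paths, n + 1))
X[:, -1] = rng.standard_normal(n_paths)       # simulate the terminal value first

for k in range(K_bar):
    step = n // 2**k                          # spacing of the current dyadic grid
    half = step // 2
    left, right = X[:, 0:n:step], X[:, step::step]
    mid_var = (step / n) / 4.0                # bridge variance at the midpoint
    X[:, half::step] = (left + right) / 2 \
        + np.sqrt(mid_var) * rng.standard_normal(left.shape)
```

On the dyadic grid the marginals are exact in law, consistently with the restriction property mentioned above; only between grid points is the path merely piecewise linear.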
\begin{example}\label{ex:cosine}
Let $(\dot{F})$ be the \textit{cosine Fourier ONB}, i.e.
$\dot{F}_k(t) = \sqrt{2}\cos(\pi k t)$, $t \in [0,1]$. The anti-derivatives $F_k(t) = \sqrt{2}\,\frac{\sin(\pi k t)}{\pi k}$ turn out to correspond, up to a factor, to the Karhunen-Lo\`eve basis of the \textit{Brownian bridge}. Indeed, recalling that $\kappa_X(s,t) = s\wedge t - st$ if $X$ is a Brownian bridge, we have for the ONB $\, \tilde{\frakF} = (\tilde{F}_k) = (\pi k \, F_k)$,
\begin{align*}
(\kappa_X(\cdot,t),\tilde{F}_k)
= \sqrt{2}\left[(1-t)\int_0^t s\, \sin(\pi k s) ds + t \int_t^1 (1-s)\sin(\pi k s) ds\right]
= \sqrt{2}\, \frac{\sin(\pi k t)}{\pi^2 k^2},
\end{align*}
using integration by parts in the last equality.
The eigenvalues are therefore $(\lambda_k^{\tilde{\frakF}}) = (\frac{1}{\pi^2 k^2})$.
The first elements of $\tilde{\frakF}$ and the Fourier cosine ONB are displayed on the right charts of \Cref{fig:CMS}.
Following the same argument as in \cref{ex:KLBM}, the (minimal) projection error onto $K$ basis functions is approximately equal to $\frac{1}{\pi^2 K}$. This is smaller than for Brownian motion (see \Cref{ex:KLBM}), as a little more is known about a Brownian bridge: $\Q-$almost all trajectories return to the origin.
\end{example}
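These eigenpairs can be confirmed by feeding the bridge kernel to the discretized Fredholm equation $\eqref{eq:eigendecomp}$; the grid size below is arbitrary.

```python
import numpy as np

# Discretized Fredholm check of the Brownian bridge eigenvalues 1/(pi^2 k^2).
T, N = 1.0, 500
t = np.linspace(0.0, T, N + 1)
K_mat = np.minimum.outer(t, t) - np.outer(t, t)      # bridge kernel min(s,t) - st

w = np.full(N + 1, T / N); w[0] = w[-1] = T / (2 * N)  # trapezoidal weights
sw = np.sqrt(w)
lam = np.linalg.eigvalsh(sw[:, None] * K_mat * sw[None, :])[::-1]

lam_exact = 1.0 / (np.pi**2 * np.arange(1, 6) ** 2)  # first five eigenvalues
```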
\subsection{Connection with the Wiener-Itô Chaos Expansion}
Let $(\Omega,\calF,\F,\Q)$ be a filtered probability space where
$\F=(\calF_t)_{t\in [0,T]}$ is the natural filtration of a standard Brownian motion $W$ on $[0,T]$. As $d=1$, \Cref{rmk:Strat} gives
\begin{equation}\label{eq:sigW}
\calS(W_t)= \left(1,w_t,\frac{w^2_t}{2},\ldots,\frac{w^n_t}{n!},\ldots\right).
\end{equation}
Hence the signature is here path-independent.
We now recall the Wiener-Itô chaos expansion.
\begin{theorem}
Any $\calF_T-$measurable random variable $\xi$ admits the representation
$$\xi = \E^{\Q}[\xi] + \sum_{n=1}^\infty J_n(W_T;\phi_n), $$
where $\phi_n$ are symmetric, square integrable functions and
\begin{equation}\label{eq:chaos}
J_n(W_T;\phi_n) =\int_{\Delta_{n,T}} \, \phi_n\, dw^{\otimes n}_{t} = \int_{\Delta_{n,T}} \, \phi_n(t_1,\ldots, t_n)dw_{t_1} \ldots dw_{t_n}.
\end{equation}
Note that the above iterated integral is in the Itô sense.
\end{theorem}
\begin{remark}
The Wiener-Itô chaos expansion is often expressed (see, e.g., \citealp{DiNunno}) as
$$\xi = \E^{\Q}[\xi] + \sum_{n=1}^\infty n!\, J_n(W_T;f_n), $$
where $f_n$ are again symmetric, square integrable functions. In fact, $f_n = \frac{\phi_n}{n!}$. It is for our purposes, however, more convenient to stick to the expansion given in the previous theorem.
\end{remark}
\begin{corollary}
For any $\F-$adapted process $S$, there exists $(\phi_{n,t})_{(n,t)\,\in\, \N \times [0,T]}$ such that
$$s_t = \E^{\Q}[s_t] + \sum_{n=1}^\infty J_n(W_T;\phi_{n,t}), $$
with $J_n$ as in $\eqref{eq:chaos}$.
\end{corollary}
\begin{example}
Consider the Black-Scholes model (with zero interest rate) $$s_t= e^{\,\sigma w_t - \frac{\sigma^2}{2}t},\; t \in [0,T], \;\;\sigma>0.$$ Using the Taylor expansion of a Gaussian kernel, it can be shown that
$$s_t = \sum_{n=0}^\infty \frac{(\sigma\sqrt{t})^n}{n!}h_n\left(\frac{w_t}{\sqrt{t}}\right),$$
where $h_n(\cdot)$ is the $n-$th (probabilist's) Hermite polynomial. Moreover, if $$\phi_{n,t}(t_1,\ldots,t_n)= g_t^{\otimes n}(t_1,\ldots,t_n),\quad(= g_t(t_1)\cdots g_t(t_n))$$ for some $g_t\in L^2([0,T])$, then
$$ J_n(W_T;\phi_{n,t})= \frac{\lVert g_t\rVert^n}{n!}h_n\left(\frac{(g_t \bullet w)_t}{\lVert g_t\rVert}\right), \qquad (\lVert \cdot \rVert = \lVert \cdot \rVert_{L^2([0,T])}).$$
Hence the Wiener-Itô chaos expansion of $s_t$ is obtained using $g_t= \sigma \cdot \mathds{1}_{[0,\,t]}$. In particular for $t=T$, $(t_1,\ldots,t_n)\in \Delta_{n,T}$, we have
$$\phi_{n,T}(t_1,\ldots,t_n)= \sigma^n\prod_{i=1}^n \mathds{1}_{[0,\,T]}(t_i)= \sigma^n.$$
On the other hand, equation $\eqref{eq:sigW}$ yields the signature decomposition
\begin{align*}
s_T = e^{-\frac{\sigma^2}{2}T}\sum_{n=0}^\infty \frac{(\sigma \, w_T)^n}{n!}= \sum_{n=0}^\infty e^{-\frac{\sigma^2}{2}T} \sigma^n \underbrace{\frac{ w_T^n}{n!}}_{=\, J^{\circ}_n(W_T)} =
\langle \varphi,\calS(W_T) \rangle,
\end{align*}
with $\varphi_n= e^{-\frac{\sigma^2}{2}T}\sigma^n$. Hence the constant function $\phi_{n,T}=\sigma^n$ differs from $\varphi_n$ only by a factor $e^{-\frac{\sigma^2}{2}T}$.
To sum up, we have found the analogy (writing $J_n(W_T)=J_n(W_T;\phi_n)$ if $\phi_n\equiv 1$),
\begin{align*}
s_T = \underbrace{\sum_{n=0}^\infty e^{-\frac{\sigma^2}{2}T} \sigma^n J^{\circ}_n(W_T)}_{\text{signature decomposition}}\; = \underbrace{\sum_{n=0}^\infty \sigma^n J_n(W_T)}_{\text{Wiener-Itô chaos expansion}}.
\end{align*}
\end{example}
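The Hermite expansion above can be verified numerically at a single point, using the probabilists' three-term recurrence $h_{n+1}(x) = x\,h_n(x) - n\,h_{n-1}(x)$; the values of $\sigma$, $t$ and $w_t$ below are arbitrary.

```python
import numpy as np

# Pointwise check of s_t = sum_n (sigma sqrt(t))^n / n! * h_n(w_t / sqrt(t)).
sigma, t, w = 0.3, 1.0, 0.5                    # arbitrary test values
u, x = sigma * np.sqrt(t), w / np.sqrt(t)

h_prev, h = 1.0, x                             # h_0 and h_1
total, coef = h_prev + u * h, u                # terms n = 0, 1; coef = u^1 / 1!
for n in range(1, 30):
    h_prev, h = h, x * h - n * h_prev          # Hermite recurrence: h_{n+1}
    coef *= u / (n + 1)                        # u^{n+1} / (n+1)!
    total += coef * h

s_t = np.exp(sigma * w - 0.5 * sigma**2 * t)   # Black-Scholes value
```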
\begin{remark}
Consider a (possibly path-dependent) option that pays $G(W_T)$ at maturity.
If $G$ is a signature payoff, i.e. $G = \langle \varphi, \calS \rangle$ for some $\varphi\in \R^{\N}$, then the price of the option can be expressed as (assuming zero interest rate),
$$\E^{\Q}[G(W_T)]= \langle \varphi,\E^{\Q}[\calS(W_T)] \rangle.$$
Thus the signature is an efficient option pricing tool, provided that $\E^{\Q}[\calS(W_T)]$ can be computed "easily". On the other hand, the Wiener-Itô chaos expansion does not seem to help in this regard (namely, the valuation of options). Indeed, if
$$G(W_T)= \E^{\Q}[G(W_T)] + \sum_{n=1}^\infty J_n(W_T;\phi_n), $$
then taking expectations on both sides yields a trivial equality, since the iterated integrals $J_n(W_T;\phi_n)$ have zero expectation. Nevertheless, Malliavin calculus is of course much more relevant for hedging problems.
\end{remark}
\begin{example}
(Asian forward, Bachelier). Note that
\begin{align*}
\xi :&= \frac{1}{T}\int_0^T s_t dt -K\\ &= r-K +\, \frac{\sigma}{T}\int_0^T w_t dt\\
&= \E[\xi]+\, \frac{\sigma}{T}\int_0^T (T-t) dw_t\\
&= \E[\xi]+\, J_1(W_T;\phi_1),
\end{align*}
with $\phi_1(t)= \sigma(1-\frac{t}{T})$. Note that $\xi = r -K + \int_0^T \Delta_t\, ds_t$, with the hedging strategy $\Delta_t=1-\frac{t}{T}$ (linear asset liquidation).
\end{example}
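The integration-by-parts step can be checked pathwise on a discrete grid (sizes illustrative): the two Riemann-sum representations of the time average coincide up to $O(\delta t)$, and both have variance close to $T/3$.

```python
import numpy as np

# Discrete pathwise check that (1/T) int_0^T w_t dt = (1/T) int_0^T (T-t) dw_t.
rng = np.random.default_rng(4)
T, n_steps, n_paths = 1.0, 1_000, 5_000
dt = T / n_steps
t_left = np.arange(n_steps) * dt               # left endpoints of the grid

dW = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
W = np.cumsum(dW, axis=1)

avg_riemann = W[:, :-1].sum(axis=1) * dt / T   # left Riemann sum of the average
avg_ito = ((T - t_left) * dW).sum(axis=1) / T  # discrete stochastic integral
```

Discrete summation by parts shows the two quantities differ only by $\delta t \, w_T / T$, which vanishes as the grid is refined.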
\subsubsection{Connection with Legendre Polynomials}
An important property of the signature is that it uniquely characterizes a path, up to a certain equivalence relation called tree-like equivalence (\citealp{Hambly}). In short, two paths having the same signature differ at most by a \textit{tree-like path}, a specific type of loop (a curve whose endpoints coincide).
Hence augmenting a path with time, which is strictly increasing, prevents the appearance of loops and in turn ensures the injectivity of the signature map.
This gives hope to reconstruct the (unique) path associated to a given signature. Such a reconstruction was proposed for instance by \cite{Geng}, who came up with a geometric procedure using polygonal approximations for multi-dimensional Brownian paths.
We here propose a simple algorithm, in connection with our discussion on Hilbert projections.\footnote{I thank Bruno Dupire for suggesting this interesting parallel.}
For ease of presentation, assume $x_0=0$ and $T=1$. We start off with a useful identity.
\begin{proposition}\label{prop:100}
For the words $\alpha^{(k)} :=1\underbrace{0\ldots0}_{ k+1}\,$, $k \ge 0$, we have
\begin{equation}\label{eq:100}
\calS_{\alpha^{(k)}}(X_{t}) = \int_0^{t} x_s \frac{(t -s)^{k}}{k!}ds,\quad \forall \, t \in [0,1].
\end{equation}
\end{proposition}
\begin{proof}
Fix $t\in [0,1]$. First, notice that $\calS_{\alpha^{(0)}}(X_t)=\calS_{10}(X_t) = \int_0^t x_s\, ds$, which is $\eqref{eq:100}$. Now, by induction on $k\ge 1$,
\begin{align*}
\calS_{\alpha^{(k)}}(X_t) = \int_0^t \calS_{\alpha^{(k-1)}}(X_u)du
= \int_0^t \int_0^u x_s \frac{(u -s)^{k-1}}{(k-1)!}ds\, du
= \int_0^t x_s\int_s^t \frac{(u -s)^{k-1}}{(k-1)!}du\, ds
= \int_0^t x_s \frac{(t -s)^{k}}{k!} ds.
\end{align*}
\end{proof}
If $\overleftarrow{X}$ denotes the \textit{time reversed path}, i.e. $\overleftarrow{x_t} = x_{1-t}$, then
$$\calS_{\alpha^{(k)}}(\overleftarrow{X_1}) = \int_0^1 \overleftarrow{x_t}\frac{(1 -t)^{k}}{k!}dt = \int_0^1 x_t \frac{t^{k}}{k!}dt = \frac{1}{k!}(x,m_k), $$
with the monomials $(m_k) = (t^k)$.
As mentioned in \cite{Hambly}, the signature of the time reversed path corresponds to the inverse of $\calS(X)$ in the extended tensor algebra $\calT((\R^d))= \bigoplus_{n=0}^\infty (\R^d)^{\otimes n}$. Moreover, $\calS(\overleftarrow{X_1})$ can be retrieved from $\calS(X_1)$ by solving a system of equations for words of increasing lengths. This goes, however, beyond the scope of this work.
Alternatively, we observe that
\begin{align*}
\calS_{\alpha^{(k)}}(\overleftarrow{X_1})
&= (-1)^{k}\int_0^1 x_t \frac{((1-t)-1)^{k}}{k!}dt \\
&= (-1)^{k} \sum_{j=0}^k {k \choose j} \int_0^1 x_t \frac{(1-t)^{j}(-1)^{k-j}}{k!}dt \\
&= \sum_{j=0}^{k} \calS_{\alpha^{(j)}}(X_1) \frac{(-1)^{j}}{(k-j)!}.
\end{align*}
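This binomial identity can be checked by quadrature on any test path; the smooth path $x_t=\sin(3t)$ below is an arbitrary choice, and both sides use the same trapezoidal rule.

```python
import numpy as np
from math import factorial

# Quadrature check of the binomial identity on a smooth test path (arbitrary).
t = np.linspace(0.0, 1.0, 20_001)
x = np.sin(3.0 * t)
w = np.full(t.size, t[1]); w[0] = w[-1] = t[1] / 2    # trapezoidal weights

def S(j):
    # S_{alpha^(j)}(X_1) = int_0^1 x_s (1 - s)^j / j! ds
    return float(w @ (x * (1.0 - t) ** j / factorial(j)))

def S_rev(k):
    # S_{alpha^(k)} of the time-reversed path = int_0^1 x_s s^k / k! ds
    return float(w @ (x * t**k / factorial(k)))

k = 4
lhs = S_rev(k)
rhs = sum(S(j) * (-1) ** j / factorial(k - j) for j in range(k + 1))
```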
\begin{figure}[H]
\centering
\caption{Projected paths with $K=8\,$ basis elements. }
\includegraphics[scale = 0.45
]{Figures/ProjPath.pdf}
\label{fig:projPath}
\end{figure}
To fall within the context of orthonormal projection, we transform the monomials into the (unique) polynomial ONB of $L^2([0,1])$, constructed as follows.
Let $(p_k)$ be the Legendre polynomials (\citealp{Szego}), forming a basis of $L^2([-1,1])$.
Then define the \textit{shifted Legendre polynomials} simply as $q_k = p_k \circ \tau_{[-1,1]}$, where
$\tau_{[-1,1]}(t) = 2t-1, \, t\in [0,1]. $
The first elements read $$q_0(t) = 1, \quad q_1(t) = 2t-1, \quad q_2(t) = 6t^2 - 6t + 1.$$
It is easily seen that the standardized polynomials $\frakF = (F_k)$, $F_k := \frac{q_k}{\lVert q_k\rVert}$ form an ONB of $L^2([0,1])$.
Next, we can write $F_k(t)= \sum\limits_{j\le k} a_{kj}t^j$, with coefficients $a_{kj}$ obtained for instance from \textit{Rodrigues' formula} (\citealp[Section 4.3]{Szego}). As usual, $X^{K,\frakF} = \sum\limits_{k\le K} \xi_k F_k$, where the Fourier coefficients become
$$\xi_k = (X,F_k) = \sum_{j\le k} a_{kj}\, (X,m_j) = \sum_{j \le k} b_{k,j}\, \calS_{\alpha^{(j)}}(\overleftarrow{X_1}), \quad b_{k,j} = j!\, a_{k,j}.$$
Therefore, having signature elements up to order $K \ge 1$ yields
\begin{align*}
x^{K,\frakF}_t
= \sum_{k\le K} \xi_k F_k(t)
= \sum_{j \le K} \calS_{\alpha^{(j)}}(\overleftarrow{X_1}) G_j(t), \qquad G_j(t) = \sum\limits_{j\le k \le K} b_{k,j}\, F_k(t).
\end{align*}
Altogether, we have seen that the signature elements $(\calS_{\alpha^{(j)}})$ generate the $L^2$ products of the path with the monomials, and in turn with the Legendre polynomials, from which a projection of the path is available.
The reconstruction algorithm is summarized below.
\vspace{3mm}
\begin{mybox}{black}{\textbf{Path Reconstruction Algorithm}}
\begin{enumerate}
\setlength \itemsep{1ex}
\item \textbf{Input:} (i) Signature elements $\calS_{\alpha^{(j)}}(X_1)$, $\; 0\le j \le K$.
\hspace{9mm} (ii) Time grid $0 = t_0 < \ldots < t_N =1$.
\item \textbf{Offline:} For $0 \le j \le K, \,\; 0\le n \le N$, calculate $G_j(t_n)$.
\item \textbf{Online}: For $0\le j \le K$, compute $\calS_{\alpha^{(j)}}(\overleftarrow{X_1})$.
\item \textbf{Output}: For $0\le n \le N$, return $x^{K,\frakF}_{t_n} = \sum_{j\le K} \calS_{\alpha^{(j)}}(\overleftarrow{X_1}) G_j(t_n)$.
\end{enumerate}
\end{mybox}
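The algorithm above can be sketched in a few lines. In the sketch below (Python), the grid size $N$, the truncation level $K$ and the smooth test path are all illustrative choices; the reversed-path signature elements are obtained through their identity with the monomial moments, and the shifted Legendre coefficients come from Rodrigues' formula.

```python
import numpy as np
from math import comb, factorial

# Illustrative sketch of the Path Reconstruction Algorithm (x_0 = 0, T = 1).
N, K = 4_000, 12
t = np.linspace(0.0, 1.0, N + 1)
x = np.sin(2 * np.pi * t) + t ** 2          # arbitrary smooth test path

def trap(y):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

# Input: S_{alpha^(j)}(reversed X at time 1), obtained here via the identity
# with the monomial moments, S_rev[j] = (X, m_j) / j!.
S_rev = [trap(x * t ** j) / factorial(j) for j in range(K + 1)]

# Shifted Legendre ONB: F_k(t) = sqrt(2k+1) sum_j a_{k,j} t^j, with
# a_{k,j} = (-1)^(k+j) C(k,j) C(k+j,j) from Rodrigues' formula.
def a(k, j):
    return (-1) ** (k + j) * comb(k, j) * comb(k + j, j)

def F(k):
    return np.sqrt(2 * k + 1) * sum(a(k, j) * t ** j for j in range(k + 1))

# Fourier coefficients xi_k = sum_{j<=k} b_{k,j} S_rev[j], b_{k,j} = sqrt(2k+1) j! a_{k,j}.
xi = [sum(np.sqrt(2 * k + 1) * factorial(j) * a(k, j) * S_rev[j]
          for j in range(k + 1)) for k in range(K + 1)]

# Output: projected path x^{K,F} = sum_k xi_k F_k.
x_rec = sum(xi[k] * F(k) for k in range(K + 1))
err = np.sqrt(trap((x - x_rec) ** 2))       # L^2([0,1]) reconstruction error
assert err < 1e-2
```

The offline/online split of the box corresponds to precomputing $G_j = \sum_{k\ge j} b_{k,j} F_k$; the sketch folds this into the final sum for brevity.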
\subsection{Numerical Results}\label{sec:numResultX}
We concentrate our experiments on Brownian trajectories.
First, we illustrate
the path approximations seen earlier (Karhunen-Lo\`eve, L\'evy-Ciesielski, Signature).
Figure \ref{fig:projPath} displays the projections using $K=8$ basis elements. We naturally notice similarities between the Karhunen-Lo\`eve transform and the L\'evy-Ciesielski construction with Fourier cosines, both
obtained by superposing trigonometric functions.
\begin{figure}[H]
\centering
\caption{Projected paths with $K=8\,$ basis elements. }
\vspace{-2mm}
\includegraphics[scale = 0.43
]{Figures/ProjPath.pdf}
\label{fig:projPath}
\end{figure}
Let us gauge the accuracy of the above approximations for Brownian trajectories, in terms of (i)
$\epsilon^{K,\frakF}$ and (ii) variance explained
$ \vartheta^{K,\frakF} := \frac{\lVert X^{K,\frakF} \rVert^2_{*}}{\left \lVert X \right \rVert^2_{*}}.$
To compute (i), (ii) and the coefficients $(X,F_k)_{\calH}$, we discretize the interval $[0,1]$ using a regular partition made of $N = 10^4$ subintervals.
\Cref{fig:Error_VarExp} displays the evolution of $\epsilon^{K,\frakF}$, $\vartheta^{K,\frakF}$ for $K\in \{1,\ldots,128\}$. The Karhunen-Lo\`eve expansion clearly dominates the other projections, although it is asymptotically equivalent to the L\'evy-Ciesielski construction with the Fourier cosine basis. Besides, the $L^2(\mathbb{Q} \, \otimes \, dt)$ convergence of the Brownian bridge construction (L\'evy-Ciesielski with the Haar basis) is non-monotonic: a bump appears until a full cycle of the dyadic partition is completed.
Lastly, the slopes in the log-log convergence plot (left chart of \Cref{fig:Error_VarExp})
are roughly equal to $-1$. Put differently, the squared approximation error is of order $\calO(\frac{1}{K})$, confirming our findings from the above examples.
\begin{figure}[H]
\centering
\caption{$L^2(\mathbb{Q} \, \otimes \, dt)$ error and variance explained.}
\vspace{-2mm}
\includegraphics[scale = 0.46]{Figures/Err_VarExp.pdf}
\label{fig:Error_VarExp}
\end{figure}
\subsection{$L^2(\Q \otimes dt)$ projection using truncated signature.}
Given $(\calS_{\alpha})_{|\alpha|_0\, \le \, K}$, solve
$$\min_{\frakF } \left \lVert X^{K,\frakF} - X \right \rVert_{L^2(\Q \otimes dt)}, \quad x^{K,\frakF}_t = \sum_{|\alpha|_0\, \le \, K} \calS_{\alpha} F_\alpha(t),$$
for a family of deterministic functions $\frakF = (F_{\alpha})$.
\begin{example}
(Legendre Polynomials) Set $F_{\alpha^{(j)}} = G_j$ (see previous part) and $F_{\alpha} = 0$ otherwise.
\end{example}
\subsection{Nonlinear combination}
Take $T =1$, $x_0 = 0$ $\Q$-a.s. and $K=3$. Relevant quantities up to order $3$ (see figure on next page) are
\begin{itemize}
\item \textbf{Order $0$}: $\calS_{\emptyset} = 1$. (\textbf{Constant})
\item \textbf{Order $1$}: $\calS_{(1)} = x_1 =: x$. (\textbf{Terminal value})
\item \textbf{Order $2$}: $\calS_{(1,0)} = \int_0^1 x_t dt =: \bar{x}$. (\textbf{Average})
\item \textbf{Order $3$}: $ \int_0^1 (x_t - \bar{x})^2 dt =: \bar{\bar{x}}$. (\textbf{Variation}), $ \int_0^1 (x_t - \bar{x}) t dt =: \underbar{\bar{x}}$ (\textbf{Chronology}).
\end{itemize}
Note that $\bar{\bar{x}}$ and $\underbar{\bar{x}}$ are functions of signature elements up to order $3$, since
\begin{align*}
\bar{\bar{x}} &= \int_0^1 x_t^2 dt - \bar{x}^2 = 2 \calS_{(1,1,0)} - \bb{\calS_{(1,0)}^2}, \\
\underbar{\bar{x}} &= \underbrace{\calS_{(1,0)} -\calS_{(1,0,0)}}_{= \, \int_0^1 x_t\, t dt } - \underbrace{\frac{1}{2}\calS_{(1,0)}}_{= \, \int_0^1 \bar{x}\, t dt } = \frac{1}{2}\calS_{(1,0)} - \calS_{(1,0,0)}.
\end{align*}
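These identities survive discretization exactly, by linearity of the trapezoidal rule; a short numerical check (the test path and grid are arbitrary choices):

```python
import numpy as np

# Check the order-3 identities for the variation and chronology statistics.
N = 10_000
t = np.linspace(0.0, 1.0, N + 1)
x = np.sin(2 * np.pi * t) + t ** 3          # arbitrary path with x_0 = 0, T = 1

def trap(y):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

S10  = trap(x)                    # S_(1,0)   = int_0^1 x_t dt
S100 = trap(x * (1.0 - t))        # S_(1,0,0) = int_0^1 x_s (1-s) ds
S110 = 0.5 * trap(x ** 2)         # S_(1,1,0) = (1/2) int_0^1 x_t^2 dt

xbar = S10
variation  = trap((x - xbar) ** 2)
chronology = trap((x - xbar) * t)

assert abs(variation  - (2 * S110 - S10 ** 2)) < 1e-8
assert abs(chronology - (0.5 * S10 - S100))    < 1e-8
```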
Now look at a nonlinear combination, i.e.
$$ x^{K,\frakF}_t = \sum_{|\alpha|_0\, \le \, K} g_{\alpha}(\calS_{\alpha}) F_\alpha(t)$$
(e.g. take \bb{$g_{(1,0)}(s) = s^2$ for $\bar{\bar{x}}$}), such that
$$\{\alpha \, | \, F_{\alpha} \ne 0\} \subseteq \{\emptyset, (1), (1,0), (1,1,0), (1,0,0)\}.$$
\newpage
\begin{figure}[H]
\centering
\includegraphics[scale=0.3]{Figures/Path_Construction_Sig.jpg}
\end{figure}
\newpage
\subsection{Signature and Legendre Polynomials}\label{sec:sigLegendre}
An alternative characterization of a path is available through the so-called \textit{signature} \cite{Lyons}. Roughly speaking, the signature extracts from a path an infinite-dimensional skeleton, where each ``bone'' contains inherent information.
We start off with a few definitions.
A \textit{word} is a sequence $\alpha = \alpha_1 ... \alpha_k$ of letters from the alphabet $\{0,1\}$. The length of $\alpha$ is denoted by $l(\alpha)$.
Moreover, we augment a path $X \in \Lambda$ with the time itself $t \mapsto t$ and write
$x^0_{t} = t$, $x^1_{t} = x_t$. The words $0,1$ are therefore identified with the time $t$ and path $x$, respectively.
\begin{definition}
The \textit{signature} is a collection of functionals $\calS= \{\calS_{\alpha}: \Lambda \to \R\}$ given by
\begin{equation}\label{eq:sig}
\calS_{\emptyset} \equiv 1, \qquad \calS_{\alpha}(X_t) =
\int_{0}^{t} \int_{0}^{t_k} \cdots \int_{0}^{t_2} \circ \, dx^{\alpha_1}_{t_1} \cdots \circ dx^{\alpha_k}_{t_k}, \quad l(\alpha)=k \ge 1,
\end{equation}
where $\circ$ indicates Stratonovich integration.\footnote{The integrals in $\eqref{eq:sig}$ are in the Lebesgue-Stieltjes (respectively It\^o) sense when the integrator (respectively integrand) is of bounded variation. In both cases, the symbol $\circ$ can be omitted.\\[-0.5em]}
\end{definition}
When referring to a specific path $X$, we shall call the sequence $(\calS_{\alpha}(X))$ the \textit{signature of $X$}. This is usually how the signature is defined; see \cite{Lyons}.
If $x_t\in \R^d$ with $d\ge 2$, then the alphabet becomes $\{0,1,\ldots,d\}$ and the signature is obtained analogously.
The first signature functionals read
\begin{align*}
\calS_{0}(X_t) &= \int_0^t d t_1 = t, &&\calS_{1}(X_t) = \int_0^t \circ \,d x_{t_1} = x_t - x_0,\\
\calS_{00}(X_t) &= \int_0^t \int_0^{t_2} d t_1 d t_2 =\frac{t^2}{2}, &&\calS_{11}(X_t) = \int_0^t \int_0^{t_2}\circ \, d x_{t_1} \circ d x_{t_2}=\frac{(x_t-x_0)^2}{2},\\
\calS_{10}(X_t) &= \int_0^t \int_0^{t_2} dx_{t_1} d t_2 =\int_0^t (x_{s} - x_0) ds, &&\calS_{01}(X_t) = \int_0^t \int_0^{t_2} d t_1 d x_{t_2} =\int_0^t s \,dx_s.
\end{align*}
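These closed forms are easy to confirm numerically on a bounded-variation test path (an illustrative sketch; the path $x_t = t^2 + t$ and grid size are arbitrary choices, and for such paths the Stratonovich circle can be dropped):

```python
import numpy as np

# Numerical check of the first signature terms on the test path x_t = t^2 + t.
N = 10_000
t = np.linspace(0.0, 1.0, N + 1)
x = t ** 2 + t

def trap(y):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

dx  = np.diff(x)
mid = 0.5 * (t[1:] + t[:-1])             # midpoint rule for int s dx_s

S_00 = 0.5 * t[-1] ** 2                  # t^2 / 2
S_10 = trap(x - x[0])                    # int_0^1 (x_s - x_0) ds  = 5/6 here
S_01 = float(np.sum(mid * dx))           # int_0^1 s dx_s          = 7/6 here
S_11 = 0.5 * (x[-1] - x[0]) ** 2         # (x_1 - x_0)^2 / 2       = 2   here

assert abs(S_10 - 5 / 6) < 1e-6
assert abs(S_01 - 7 / 6) < 1e-6
assert abs(S_10 + S_01 - t[-1] * (x[-1] - x[0])) < 1e-6  # integration by parts
```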
Keeping track of the passage of time is crucial, as the signature would otherwise barely carry information about the path. Indeed, notice that
$\calS_{\alpha}(X_t) = \frac{(x_t-x_0)^{k}}{k!}$ for $\alpha = 1...1$, $l(\alpha)=k$
(as seen above for $k=1,2$); thus only the increment $x_t-x_0$ is recoverable from the alphabet $\{1\}$ alone.
A property of the signature is that it uniquely characterizes a path, up to an equivalence relation: two paths having the same signature differ at most by a \textit{tree-like path} \cite{Hambly}, a specific type of loop.
Hence, extending a path with time
not only enriches the signature but also
ensures injectivity, as $t\mapsto x^0_t =t$ is increasing.
This gives hope of reconstructing the (unique) path associated with a signature sequence. This was investigated
in \cite{Geng}, where the author shows a
geometric reconstruction using polygonal approximations for Brownian paths.
We propose a simple algorithm, in connection with our discussion on Hilbert projections.\footnote{I thank Bruno Dupire for suggesting this interesting parallel.}
For ease of presentation, assume $x_0=0$ and $T=1$. We first make the following observation.
\begin{lemma}\label{lem:Legendre}
Let $\overleftarrow{X}$ denote the \textit{time reversed path}, i.e. $\overleftarrow{\ x_t} = x_{1-t}$ and
introduce the words $\alpha^{(k)} :=10\ldots0\,$, $l(\alpha^{(k)})=k+2$, $k \ge 0$.
Then $\calS_{\alpha^{(k)}} (\overleftarrow{X}_{\! 1}) = \frac{1}{k!}(X, m_k) $ where $m_k(t)=t^k$.
\end{lemma}
\begin{proof} First, observe that
\begin{equation}\label{eq:100bis}
\calS_{\alpha^{(k)}}(X_{t}) = \int_0^{t} x_s \frac{(t -s)^{k}}{k!}ds,\quad \forall \, t \in [0,1].
\end{equation}
Indeed, for fixed $t\in [0,1]$ and $k=0$, we have $\calS_{\alpha^{(0)}}(X_t)=\calS_{10}(X_t) = \int_0^t x_sds$, which is \eqref{eq:100bis}. Now by induction on $k\ge 1$, using Fubini's theorem,
\begin{align*}
\calS_{\alpha^{(k)}}(X_t) = \int_0^t \calS_{\alpha^{(k-1)}}(X_u)du
= \int_0^t \int_0^u x_s \frac{(u -s)^{k-1}}{(k-1)!}ds\, du
= \int_0^t x_s \frac{(t -s)^{k}}{k!} ds.
\end{align*}
Now taking $t=1$ and $\overleftarrow{X}$ instead of $X$, we get
$
\calS_{\alpha^{(k)}}(\overleftarrow{X}_{\! 1}) = \int_0^1 \overleftarrow{\ x_t}\frac{(1 -t)^{k}}{k!}dt = \frac{1}{k!}(x,m_k).
$
\end{proof}
Essentially, \Cref{lem:Legendre} states that the knowledge of $(\calS_{\alpha^{(k)}}(\overleftarrow{X}_{\! 1}))_{k\ge 0}$ is equivalent to the knowledge of the inner products of the path with the monomials. Since also
\begin{align*}
\calS_{\alpha^{(k)}}(\overleftarrow{X}_{\! 1})
&= (-1)^{k}\int_0^1 x_t \frac{((1-t)-1)^{k}}{k!}dt
= \sum_{j=0}^{k} \calS_{\alpha^{(j)}}(X_1) \frac{(-1)^{j}}{(k-j)!},
\end{align*}
the coefficients $(X,m_k)_{k\ge 0}$ can be retrieved from $(\calS_{\alpha^{(k)}}(X_1))_{k\ge 0}$ as well.
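Since the relation above is a polynomial identity in $t$, it also holds exactly (up to round-off) for discretized paths; a quick numerical check, with an arbitrary smooth test path:

```python
import numpy as np
from math import factorial

# Check the time-reversal relation between forward and reversed signature
# elements, S_rev_k = sum_{j<=k} S_fwd_j (-1)^j / (k-j)!.
N = 10_000
t = np.linspace(0.0, 1.0, N + 1)
x = np.exp(t) - 1.0                         # arbitrary smooth path, x_0 = 0

def trap(y):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

for k in range(8):
    S_rev = trap(x * t ** k) / factorial(k)            # S_{alpha^(k)} of reversed X at 1
    S_fwd = [trap(x * (1.0 - t) ** j) / factorial(j)   # S_{alpha^(j)}(X_1)
             for j in range(k + 1)]
    rhs = sum(S_fwd[j] * (-1) ** j / factorial(k - j) for j in range(k + 1))
    assert abs(S_rev - rhs) < 1e-12
```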
To fall within the context of orthonormal projection, we transform the monomials into the (unique) polynomial ONB of $L^2([0,1])$.
Let $(p_k)$ be the Legendre polynomials \cite{Szego}, forming an orthogonal basis of $L^2([-1,1])$. Then consider the \textit{shifted Legendre polynomials}, $q_k(t) = p_k(2t-1),$ $t\in [0,1].$
We write
$$q_k(t)= \sum_{j\le k} a_{k,j}t^j, \quad a_{k,j} = (-1)^{k+j} {k \choose j} {k + j \choose j},$$ with coefficients derived for instance from \textit{Rodrigues' formula} \cite[Section 4.3]{Szego}. The standardization $F_k := \frac{q_k}{\lVert q_k\rVert} = \sqrt{2k+1}q_k$ makes $\frakF = (F_k)$ an ONB of $L^2([0,1])$. This leads us to the following result.
\begin{proposition}
If $b_{k,j} := \sqrt{2k+1} j!\, a_{k,j}$ and $G_j(t) := \sum_{k= j}^K b_{k,j}\, F_k(t)$, then
\begin{equation}\label{eq:sigLeg}
x^{K,\frakF}_t
= \sum_{k\le K} \xi_k F_k(t)
= \sum_{j \le K} \calS_{\alpha^{(j)}}(\overleftarrow{X_1}) G_j(t).
\end{equation}
\end{proposition}
\begin{proof}
First, $(X,F_k) = \sqrt{2k+1}\sum_{j\le k} a_{k,j}\, (X,m_j)$ with $m_j(t) = t^j$. Using \cref{lem:Legendre}, we obtain
$$\xi_k = \sum_{j \le k} b_{k,j} \calS_{\alpha^{(j)}}(\overleftarrow{X_1}).$$ Thus $ X^{K,\frakF} = \sum_{j \le K} \calS_{\alpha^{(j)}}(\overleftarrow{X_1}) \ G_j$
with $G_j$ as in the statement.
\end{proof}
In summary, the signature elements $(\calS_{\alpha^{(k)}})_{k\le K}$ generate the $L^2([0,1])$ products of the path with the monomials, and in turn with the Legendre polynomials, from which the projected path $X^{K,\frakF}$ becomes available.
We can therefore retrieve $X$ by letting $K\to \infty$. Note that this reconstruction works for multidimensional paths as well, as we can apply the procedure to each component $i=1,\ldots,d$ with the words
$\alpha^{(i,k)} :=i 0\ldots0$, $l(\alpha^{(i,k)}) = k+2$, $k \ge 0$.
We finish this section by computing the projection error of $\eqref{eq:sigLeg}$ when $X$ is Brownian motion. Recalling that $\kappa_X(s,t) = s \wedge t$, a simple calculation gives
$$\lambda_k^{\frakF} = \E^{\Q}[\xi_k^2] = 2 \int_{0}^1 \int_{0}^t s F_k(s)dsF_k(t)dt = 2 (2k+1)\sum_{i,j \le k} \frac{a_{k,j}\ a_{k,i}}{(j+2)(i+j+3)}.
$$
The first values are given by $(\lambda^{\frakF}_k)_{k=0}^3 = (\frac{1}{3},\frac{1}{10}, \frac{1}{42}, \frac{1}{90})$ from which we conjecture that $\lambda^{\frakF}_k = \frac{1}{(2k+3)(4k-2)}$ for all $k\ge 1$. This is supported by the fact that $\lambda^{\frakF}_k$ must be rational numbers as $a_{k,j}\in \Z$ for all $k,j$ and
$$\sum_{k=0}^K \lambda^{\frakF}_k = \frac{1}{3} + \frac{1}{8} \sum_{k=1}^K \left( \frac{1}{2k-1}-\frac{1}{2k+3} \right) = \frac{1}{2} - \frac{K+1}{(2K+3)(4K+2)} \xrightarrow{K \uparrow \infty} \frac{1}{2},$$ coinciding with the total variance of Brownian motion on $[0,1]$.
Thus, the approximation error reads $\epsilon^{K, \frakF} =\frac{K+1}{(2K+3)(4K+2)} \sim \frac{1}{8K}$, which is of course larger than for the Karhunen-Lo\`eve basis but smaller than for the Brownian bridge construction (\cref{ex:BBC}). Note that polynomial ONBs may well be optimal if the approximation criterion is modified. In \cite{Foster}, the authors show that in the weighted Hilbert space $L^2([0,1],\mu)$, $\mu(dt) = \frac{dt}{t(1-t)}$, the Karhunen-Lo\`eve basis of the Brownian bridge is formed by the anti-derivatives of the Legendre polynomials. Although the construction is different, it is also curiously related to the signature elements $(\calS_{\alpha^{(k)}})$; see Theorems 2.3 and 2.4 in \cite{Foster}.
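The double sum for $\lambda^{\frakF}_k$ can be evaluated in exact rational arithmetic, which confirms the conjectured closed form for the first values of $k$ (a sketch; the tested range is an arbitrary choice):

```python
from fractions import Fraction
from math import comb

# lambda_k = 2(2k+1) * sum_{i,j<=k} a_{k,j} a_{k,i} / ((j+2)(i+j+3)),
# with a_{k,j} = (-1)^(k+j) C(k,j) C(k+j,j) the shifted Legendre coefficients.
def lam(k):
    a = [(-1) ** (k + j) * comb(k, j) * comb(k + j, j) for j in range(k + 1)]
    s = Fraction(0)
    for i in range(k + 1):
        for j in range(k + 1):
            s += Fraction(a[j] * a[i], (j + 2) * (i + j + 3))
    return 2 * (2 * k + 1) * s

assert lam(0) == Fraction(1, 3)
for k in range(1, 9):
    # conjectured closed form: 1 / ((2k+3)(4k-2))
    assert lam(k) == Fraction(1, (2 * k + 3) * (4 * k - 2))
```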
\begin{remark}
Note that the approximation in $\eqref{eq:sigLeg}$ may be improved by adding other elements of the signature, especially those that are \textit{nonlinear} in $X$, e.g. $\calS_{110}(X_t) = \frac{1}{2}\int_0^t x^2_s ds $. We postpone this discussion to \Cref{sec:sigFunc} when projecting running functionals.
\end{remark}
\subsection{PBSDEs (?)}
This seems closely related to (F)BSDEs: if $Y=f(X)$, then
$$y_t = y_T - \int_t^T \varphi(s, Y_s, Z_s)ds - \int_t^T z_s \circ dx_s, $$
with $z_t = \Delta_x f(X_t)$, $\varphi(t,X_t) = \Delta_t f(X_t)$, or if
$$dx_t = \mu(X_t)dt + \sigma(X_t) \circ dw_t, $$
then
$$y_t = y_T - \int_t^T \varphi(s,Y_s,Z_s)ds - \int_t^T z_s \circ dw_s, $$
with $z_t = \sigma(X_t) \Delta_x f(X_t) $,\, $\varphi(t,X_t,Z_t) = \frac{\mu(X_t)}{\sigma(X_t)} z_t + \Delta_t f(X_t)$ if $0 \notin \sigma(\Lambda) $ \textbf{(FBSDE)}.
\bb{See Peng and Wang}
\subsection{Losses}
\subsubsection{$L^2$ projection }
Define for $t\in [0,T]$
$$ l_t(f) = \lVert f(X_t) - g(X_T) \rVert_{L^2(\Q)}^2, $$
whose minimizer is the conditional expectation $f(X_t) = \E^{\Q}[g(X_T) \, |\, X_t]$, i.e. the price of an option with (path-dependent) payoff $g$. We indeed have $f(X_t) \overset{t\uparrow T}{\longrightarrow} g(X_T)$. Then $f$ also minimizes the total distance
$$l(f) = \int_0^T l_t(f)dt = \lVert f(X) - g(X_T) \rVert_{L^2(\Q\, \otimes \,dt)}^2. $$
Why penalize large deviations from $g(X_T)$ if the terminal constraint is already in place?
\subsubsection{Time Variation}
$$l(f) = \E^{\Q}[(\calV f)(X_T)],$$
with the (pathwise) time variation
$$(\calV f)(X_T) = \int_0^T (f(X_t) - \bar{f})^2 dt, \quad \bar{f} = \frac{1}{T} \int_0^T f(X_t)dt. $$
\subsection{Intrinsic Functional}
\begin{definition}
The intrinsic functional associated to $g:\Lambda_T \to \R$ is defined as $$\iota(X_t) := g(X_{t,T-t}),\quad t\in [0,T].$$
\end{definition}
\bb{Use an operator: $\calI: \R^{\Lambda_T} \to \R^{\Lambda}$, $\calI g =f $.}
Derivatives of $\iota\,$:
\begin{align*}
\Delta_x \iota(X_t) &= \lim_{h \to 0} \frac{ \iota(X^h_t)- \iota(X_t)}{h}
= \lim_{h \to 0} \frac{g((X^h)_{t,T-t})- g(X_{t,T-t})}{h}, \quad \text{(bump at $t$)}\\\\
\Delta_t \iota(X_t) &= \lim_{\delta t \downarrow 0} \frac{ \iota(X_{t,\delta t})- \iota(X_t)}{\delta t}
= \lim_{\delta t \downarrow 0} \frac{g(X_{t,\delta t + (T-t - \delta t)})- g(X_{t,T-t})}{\delta t} = 0.
\end{align*}
The space derivative of $\iota$ is somewhat similar to the Malliavin derivative of $g$, as the whole future path is shocked at time $t$.
\begin{figure}[H]
\centering
\includegraphics[scale = 0.2]{Figures/IntrinsicFuncDerivative.jpg}
\end{figure}
Now decompose any $f\in \Gamma_g^K$ as
$$f = \iota + h, \quad h\in \Gamma_{\boldsymbol{0}}.$$
When is it optimal to choose $h \equiv 0$?
\begin{example}
Let $g(X_T)=\frac{1}{T}\int_0^T x_s ds$ and $f(X_t) = \frac{1}{t}\int_0^t x_s ds$. The intrinsic functional is
$$\iota(X_t) =\frac{1}{T}\int_0^T x_{s\wedge t} ds = \lambda_t \underbrace{\frac{1}{t}\int_0^t x_s ds}_{\text{average over $[0,t]$}} + (1-\lambda_t)\underbrace{x_t}_{\mathclap{\text{average over $[t,T]$}}}\,, \quad \lambda_t = \frac{t}{T}.$$
Therefore,
$$f(X_t) = \iota(X_t) + \frac{T-t}{t}(\iota(X_t)-x_t) = \iota(X_t) + h(X_t), $$
with $h(X_t) = \left(\frac{1}{t} - \frac{1}{T}\right) \int_0^t (x_s-x_t) ds \overset{t\uparrow T}{\longrightarrow} 0$.
\begin{figure}[H]
\centering
\includegraphics[scale = 0.2]{Figures/IntrinsicFuncAsian.jpg}
\end{figure}
\end{example}
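The decomposition $f = \iota + h$ in the Asian example can be verified numerically; the sketch below (with an arbitrary test path and evaluation time) checks it at the discrete level.

```python
import numpy as np

# Check f = iota + h for the running-average (Asian) example.
T, N = 1.0, 10_000
s = np.linspace(0.0, T, N + 1)
x = np.sin(3 * s) + s                    # arbitrary smooth path with x_0 = 0

def trap(y, u):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(u)))

i = N // 3                               # evaluation time t = T/3 (arbitrary)
t = s[i]
f    = trap(x[:i + 1], s[:i + 1]) / t                 # running average over [0, t]
lam  = t / T
iota = lam * f + (1 - lam) * x[i]                     # (1/T) int_0^T x_{u ^ t} du
h    = (1 / t - 1 / T) * trap(x[:i + 1] - x[i], s[:i + 1])

assert abs(f - (iota + h)) < 1e-12
```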
\begin{example}
Let $g(X_T)=x^2_T$ and consider its Bachelier price with volatility $\sigma$, i.e.
$$f_{\sigma}(X_t) = \E^{\Q_{\sigma}}[ x_T^2\, | \, X_t] = \underbrace{x_t^2}_{= \, \iota(X_t)} + \underbrace{\sigma^2(T-t)}_{=\,h_{\sigma}(X_t)}.$$
Hence $f_{\sigma} \overset{\sigma \downarrow 0}{\longrightarrow} \iota$.
\end{example}
\begin{example}
Let $g(X_T)=\langle X \rangle_T$, where $X$ is a L\'evy process, and let $f$ be the conditional expectation. This gives
$$f(X_t) = \E^{\Q}[ \langle X \rangle_T\, | \, X_t] = \underbrace{\langle X \rangle_t}_{= \, \iota(X_t)} + \underbrace{\E^{\Q}[\langle X \rangle_{T-t}]}_{=\,h(X_t)}.$$
\end{example}
The time variation reads
$$\calV f = \calV \iota + \calV h + 2\, (\iota - \bar{\iota},h - \bar{h})_{L^2([0,T])}.$$
\subsection{Applications}
\begin{itemize}
\item Hedging: $f(X_t) = \int_0^t \varphi(X_s) dx_s$?
\item American option: Requires $g$ to be defined on $\Lambda$, in which case we simply take the intrinsic value $g(X_t)$. Replace the continuation value by $$\E^{\Q} [g(X_{t,\tau-t})] = \int_t^T g(X_{t,s-t}) \Q(\tau \in ds)?$$
\end{itemize}
\section{\blue{Terminal functional and Path-Dependent BSDEs (PBSDE) (relevant?)}}
\input{TerminalFunctional/TermFunc}
\input{TerminalFunctional/PBSDE}
\section{Introduction}
It is evident from observations that most if not all elliptical galaxies and many disk galaxies host a supermassive black hole (SMBH) at their centre (see \citealt{kormendy.ho.2013, graham.2016} for reviews). By extracting energy from gas accretion, those SMBHs are the power engines of active galactic nuclei (AGN). Theoretical studies of galaxy formation have shown that AGN feedback plays an essential role in shaping many properties of massive galaxies (e.g. \citealt{springel.etal.2005, booth.schaye.2009, choi.etal.2015, weinberger.etal.2017}). Furthermore, it is also widely considered the most plausible mechanism for star formation quenching in massive galaxies (see \citealt{man.belli.2019} and references therein).
From an observational point of view, one of the most promising avenues to study the effects of AGN feedback is galactic hot atmospheres (or coronae, see \citealt{tumlinson.etal.2017} and \citealt{werner.etal.2019} for reviews). The hot, diffuse, soft-X-ray-emitting gas that permeates the inter-stellar medium (ISM, within a few kpc from a galaxy center) or in many cases extends well beyond the stellar distribution to the circumgalactic medium (CGM, up to hundreds of kpc in galactocentric distance) encodes important information about galaxy formation: it reflects the complex interplay of various heating/cooling processes such as gravitational heating via virial shocks, radiative cooling, feedback from stellar activity (e.g. supernovae explosions) and AGN. Unlike the hot intra-cluster medium, which extends to larger scales, the galactic hot atmospheres are closer to the sites of star formation and supermassive black hole activity, and therefore their thermal properties are anticipated to be more sensitive to non-gravitational processes, including BH-driven feedback.
This is supported by recent X-ray observations of massive early-type galaxies (ETGs) with the {\it Chandra} X-ray observatory (e.g. \citealt{goulding.etal.2016,babyk.etal.2018,lakhchaura.etal.2018}), which show that the X-ray scaling relations, e.g. the X-ray luminosity-temperature ($L_{\rm X}-T_{\rm X}$) relation, are steeper than the self-similar predictions based only on gravitational heating. Those studies suggest that, while the hot gas properties of massive galaxies are primarily determined by the gravitational potential, they are also affected significantly by AGN feedback.
The emerging consensus picture is that the hot atmospheres are stabilized by mechanical (also called radio mode) feedback driven by SMBHs at low accretion rates (e.g. \citealt{nulsen.etal.2009, randall.etal.2011, randall.etal.2015, hlavacek-larrondo.etal.2015}). For example, \cite{nulsen.etal.2009} studied a sample of 24 elliptical galaxies obtained from the {\it Chandra} archive and found that the jet power that is determined based on the X-ray cavities exceeds the luminosity of the cooling atmosphere (see Fig. 1 in their work). This result indicates that mechanical AGN feedback in the form of cavities may be sufficient to offset the energy lost due to the radiative cooling of the atmosphere.
Low-mass galaxies are generally expected to host fainter X-ray atmospheres, because their potential wells are shallower. However, observational studies of hot gas atmospheres in systems down to below the mass of the Milky Way (e.g. \citealt{strickland.etal.2004,tullmann.etal.2006, yamasaki.etal.2009,mineo.etal.2012, bogdan.etal.2013, bogdan.etal.2015,bogdan.etal.2017,li.wang.2013, li.etal.2017}) show that many spiral, late-type galaxies (LTGs) host detectable luminous X-ray atmospheres, with $L_{\rm X}\sim10^{40}$ erg~s$^{-1}$. In this low-mass regime, in addition to AGN feedback, the hot gas content is also expected to be influenced by stellar feedback (e.g. \citealt{crain.etal.2010,dave.etal.2011, dave.etal.2012, vandevoort.etal.2016,christensen.etal.2016, sokolowska.etal.2018}). Therefore, X-ray observations of these lower-mass systems potentially probe processes connected to their star formation status.
Cosmological simulations that include AGN feedback, e.g. \cite{McCarthy.etal.2010} (OWLS), \cite{lebrun.etal.2014} (cosmo-OWLS), \cite{planelles.etal.2014}, \cite{choi.etal.2015}, \cite{ liang.etal.2016}, \cite{henden.etal.2018} (FABLE), \cite{dave.etal.2019} (SIMBA), reproduce the hot gas properties in better agreement with observations than simulations that do not consider SMBH feedback. For example, \cite{choi.etal.2015} show that, without the inclusion of AGN feedback, simulations overestimate the X-ray luminosity of the hot atmospheres by more than 2 orders of magnitude compared to observations. More importantly, by comparing simulations with various treatments of AGN feedback (e.g. thermal versus mechanical), they point out that, in their implementation, the mechanical feedback is the responsible channel for reproducing the observed X-ray luminosities. However, their work is based on zoom-in simulations of a relatively small sample of 20 simulated galaxies, with exclusive focus on the high-mass end: $M_*>8.8\times10^{10}M_\odot$.
In this paper, we aim to explore the hot galactic atmospheres using a large sample of simulated galaxies taken from the IllustrisTNG project (TNG; \citealt{nelson.etal.2018,naiman.etal.2018, marinacci.etal.2018,pillepich.etal.2018a,springel.etal.2018, pillepich.etal.2019, nelson.etal.2019}). In particular, in this work we use the TNG100 and TNG50 flagship runs (see Section~\ref{sec:sims} for a detailed description): these cover simulated volumes of $\sim(110\ {\rm Mpc})^3$ and $\sim(50\ {\rm Mpc})^3$, respectively, comparable to the volumes probed by current X-ray observations in the local Universe, and they have a numerical mass resolution good enough for us to confidently study systems down to the scale of $M_*\gtrsim$ a few $10^{9}M_\odot$.
By construction, the TNG simulations are based on a galaxy formation model whose unconstrained choices have been adopted to reproduce observed stellar properties, e.g. the galaxy stellar mass function at $z=0$ (see \citealt{pillepich.etal.2018}). However, other outcomes, such as the temperature and metallicity of the hot gaseous atmospheres, are predictions of the simulation that can be readily compared with observations. For this task, we employ a dataset of $\sim160$ nearby galaxies that have {\it Chandra} and {\it XMM-Newton} X-ray observations in the literature (\citealt{mineo.etal.2012,li.wang.2013,li.etal.2017,goulding.etal.2016,babyk.etal.2018,lakhchaura.etal.2019}). In particular, in this paper we use the TNG simulations to get insights into the role that SMBH feedback can have 1) on shaping the X-ray properties of the gaseous atmospheres in galaxies across more than 2 orders of magnitude in stellar mass and 2) on the relationship between star formation quenching and gas content.
Within the TNG framework, earlier works by \cite{weinberger.etal.2017, weinberger.etal.2018,nelson.etal.2018, terrazas.etal.2019, davies.etal.2019} find a close connection between the suppression of star-formation rate in massive galaxies and the BH feedback in kinetic mode, thereby suggesting the crucial role played by the latter in establishing the quenched population in the TNG simulations. Here, we explore the connection between BH feedback, gas content and star formation activity by characterizing the hot atmospheres of star-forming and quenched galaxy populations in TNG100 and TNG50 at $z=0$, after having compared the X-ray properties of the hot atmospheres of simulated ETGs with observed ones. For this purpose, we perform mock {\it Chandra} X-ray observations of the simulated galaxies to mimic the typical observation procedure applied to the observed samples selected for the comparison. Importantly, throughout the paper, the X-ray signals we are interested in are produced by the diffuse, hot, metal-enriched gas in both simulations and observations: therefore, by construction, these signals do not account for the contribution from black holes, supernova remnants or binary stars, which are otherwise a non-negligible contribution to the X-ray emission from the interstellar medium of galaxies.
The paper is arranged as follows. We first describe in Section \ref{sec:2} the observed and simulated galaxy samples and the analysis of the mock X-ray observations. Section \ref{sec:3} is dedicated to the comparison between the simulated and observed ETG samples. We first describe the way we select analog quiescent galaxies from the simulations and perform a detailed comparison between the simulated and observed X-ray relations. Next, in Section \ref{sec:predictions}, we inspect the dependence of the X-ray luminosity on the galaxy properties, in particular galaxy star formation rate, for both simulations and observations. In Section \ref{sec:5}, we carry out a theoretical investigation on the origin of the difference in $L_{\rm X}$ in connection with the star formation rate and with the SMBH feedback. Finally, we conclude in Section \ref{sec:6}.
\section{Methodology and galaxy samples}
\label{sec:2}
\begin{table*}
\caption{\label{tb1}
Summary of the observed datasets used in our study to contrast observational findings to the outcomes of the TNG100 and TNG50 simulations.}
\begin{center}
\resizebox{0.99\textwidth}{!}{
\begin{tabular}{|cccccccc|}
\hline
Dataset & Number of galaxies & Type & Distance & Aperture & Main Instrument & Exposure & Fitting Model \\
\hline
MASSIVE (\citealt{goulding.etal.2016}) & 33 & Early-type & $<108$ Mpc & $R_{\rm e}$ & {\it Chandra} ACIS-S & 2-300 ks & APEC \\
$\rm{ATLAS}^{3D}$ (\citealt{goulding.etal.2016}) & 41 & Early-type & $<42$ Mpc & $R_{\rm e}$ & {\it Chandra} ACIS-S & $>10$ ks & APEC \\
\cite{lakhchaura.etal.2019} & 24 & Early-type & $<100$ Mpc & $R_{\rm e}$ & {\it Chandra} ACIS-S & 2-175 ks & APEC (version 3.0.7) \\
\cite{babyk.etal.2018} & 42 & Early-type & $\siml150$ Mpc & $5R_{\rm e}$ & {\it Chandra} ACIS-S & $>10$ ks & APEC (version 3.0.7) \\
\cite{mineo.etal.2012} & 20 & Late-type & $<40$ Mpc & $D_{25}$ & {\it Chandra} ACIS-S & $\geq$ 15 ks & MEKAL \\
\cite{li.wang.2013} & 39 & (29) Late/(10) early-type & $\siml30$ Mpc & $D_{25}$ & {\it Chandra} ACIS-S & $\simg10$ ks & MEKAL/VMEKAL \\
\cite{li.etal.2017} & 6 & Late-type & $<100$ Mpc & $30-100$ kpc & {\it XMM-Newton} EPIC & 45-123 ks & APEC \\
\hline
\end{tabular}}
\end{center}
\end{table*}
\subsection{The X-ray observational samples of reference}
\label{sec:obs}
In this paper, we compare the output of the TNG simulations (see Section~\ref{sec:sims}) to results from observations. In particular, we collect a number of galaxy datasets with available X-ray measurements. Their basic properties are summarized in Table \ref{tb1}, including e.g. subsets of the MASSIVE and $\textrm{ATLAS}^{\rm 3D}$ samples by \citealt{goulding.etal.2016}.
To validate the TNG model in terms of gaseous atmospheres, we collect samples of massive early-type galaxies from observations. In particular, we employ the sample compiled by \cite{goulding.etal.2016}, which consists of 74 ETGs obtained from the MASSIVE and $\textrm{ATLAS}^{\rm 3D}$ surveys with available \textit{Chandra} X-ray observations. As the selection of these is based on well-defined optically-based criteria and their X-ray data are analyzed in a homogeneous way, we choose the \cite{goulding.etal.2016} sample as a reference throughout the paper. Below we briefly describe the selection as well as the X-ray analysis of the compiled MASSIVE and ${\rm ATLAS^{3D}}$ samples.
Unlike in simulations, observations rarely come along with dynamical mass measurements, thus one has to rely on mass proxies. One such proxy, widely used in observational studies, is the K-band absolute magnitude ($M_{\rm K}$), as it is considered to be closely linked to stellar mass (e.g. \citealt{cappellari.etal.2011,ma.etal.2014}). For all the observational datasets used in this paper, the K-band magnitude is collected from the Two Micron All Sky Survey (2MASS) database for extended sources (\citealt{skrutskie.etal.2006})\footnote{https://irsa.ipac.caltech.edu/applications/2MASS/PubGalPS/}. The total K-band magnitude is computed from the total K-band luminosity of the galaxy derived from a combination of the measured inner surface brightness profile and an extrapolated profile at larger radii obtained by fitting the inner one to a single Sersic profile (\citealt{jarrett.etal.2003}).
The MASSIVE survey targets the most massive ETGs in the local Universe within a distance of $d\lesssim108$ Mpc and with an absolute K-band magnitude $M_{\rm K}<-25.3$ ($M_*\simg 10^{11.5}M_\odot$) resulting in a volume-limited sample of 118 galaxies (see \citealt{ma.etal.2014} for an overview). About $1/4$ of the original sample, i.e. 33 galaxies, have X-ray observations in the {\it Chandra} archive. The $\textrm{ATLAS}^{\rm 3D}$ survey, on the other hand, is dedicated to lower-mass ($M_{\rm K}<-21.5$ or $M_*\simg6\times10^9M_\odot$) galaxies, within a smaller distance of $d\lesssim42$ Mpc \citep[see][ for an overview]{cappellari.etal.2011}. 41 galaxies, out of the original sample of 260 nearby ETGs, have X-ray data obtained by {\it Chandra}. For both MASSIVE and $\textrm{ATLAS}^{\rm 3D}$, the early-type nature of the galaxies is established by selecting objects based on their morphology, i.e. only ellipticals and lenticulars (S0) are selected. In practice, this has been achieved by excluding galaxies with spiral arms upon visual inspection of their stellar-light images. X-ray spectra are extracted within a circular region of the half-light radius ($R_{\rm e}$) to allow direct measurements of X-ray quantities of the hot gas hosted by the galaxies. A model of a single-temperature plasma in collisional ionisation equilibrium (APEC, \citealt{smith.etal.2001}) is used to describe the X-ray emission of the hot inter-stellar medium. The spectral fitting for the temperature is limited to the energy range $[0.3-7]$ keV, while the X-ray luminosity is computed in the $[0.3-5]$ keV range.
Though starting with volume-limited and magnitude-selected samples, the X-ray subsets of MASSIVE and $\textrm{ATLAS}^{\rm 3D}$ are not complete. To enlarge the X-ray sample size, we therefore also consider 24 ETGs from \cite{lakhchaura.etal.2019} (out of an original sample of 47 nearby ETGs), with available {\it Chandra} observations and K-band measurements, which do not overlap with the MASSIVE+$\textrm{ATLAS}^{\rm 3D}$ sample. In addition, in order to constrain the X-ray relations across larger galactic apertures, we employ 87 ETGs obtained from \cite{babyk.etal.2018}, of which 45 systems overlap with the MASSIVE+$\textrm{ATLAS}^{\rm 3D}$+\cite{lakhchaura.etal.2019} sample. The X-ray properties of these galaxies were measured by {\it Chandra} within a radius of $5R_{\rm e}$.
Finally, in order to investigate the X-ray properties of the hot gas in low-mass, star-forming galaxies and to compare our findings from TNG with existing observations, we also collect a sample of nearby late-type galaxies from \cite{mineo.etal.2012, li.wang.2013,li.etal.2017}. \cite{mineo.etal.2012} report an X-ray study of 20 late-type (spiral and irregular), star-forming galaxies at $d\lesssim40$~Mpc observed by {\it Chandra}, covering a broad range in star formation rates ($\sim0.1-17M_\odot\rm{yr}^{-1}$) and stellar masses ($\sim3\times10^{8}-6\times10^{10}M_\odot$). We also include a sample of 39 highly-inclined ($i\gtrsim60^{\circ}$), mostly late-type galaxies (see Table \ref{tb1}) at $d\lesssim30$~Mpc observed by {\it Chandra} and analysed by \cite{li.wang.2013}; this sample covers about 2 orders of magnitude in stellar mass ($\sim10^{9}-10^{11}M_\odot$). Furthermore, we include 6 massive ($M_*\gtrsim10^{11}M_\odot$) spiral galaxies at $d\lesssim100$~Mpc observed by {\it XMM-Newton} \citep{li.etal.2017}. For all these late-type galaxies, point-source contamination, which is essential to account for in X-ray studies of the hot gas of star-forming galaxies, is addressed extensively in the original analyses.
\subsection{The IllustrisTNG simulations}
\label{sec:sims}
The simulated galaxies used in this study are obtained from the IllustrisTNG\footnote{http://www.tng-project.org} project, a set of cosmological magneto-hydrodynamical simulations (\citealt{nelson.etal.2018,naiman.etal.2018, marinacci.etal.2018,pillepich.etal.2018a,springel.etal.2018}).
These are performed with the {\sc arepo} code (\citealt{springel.2010}), include a wide range of astrophysical processes, and are run with cosmological parameters consistent with results from Planck observations \citep{planck.2016}: matter density $\Omega_{\rm m}= 0.3089$, baryon density $\Omega_{\rm b}=0.0486$, dark energy density $\Omega_\Lambda=0.6911$, Hubble constant $H_0=67.74\ \rm{km\ s}^{-1}\rm{Mpc}^{-1}$, power spectrum normalization characterized by $\sigma_8=0.8159$, and primordial spectral index $n_{\rm s}=0.9667$.
The TNG model of galaxy formation \citep{weinberger.etal.2017, pillepich.etal.2018} is based on the original Illustris model (\citealt{vogelsberger.etal.2013,torrey.etal.2014}) and includes primordial and metal-line radiative cooling, prescriptions for star formation and evolution, supernovae feedback, metal enrichment, and supermassive black hole growth and feedback.
The TNG suite includes runs with various volumes and resolutions. For this work, we use two simulated samples of galaxies at $z=0$ extracted from the TNG100 and TNG50 flagship runs. TNG100 covers a cosmological comoving volume of $(110.7\ {\rm Mpc})^3$ with a baryon mass resolution of $m_{\rm baryon}=1.4\times10^{6}M_\odot$. The recently-completed TNG50 (\citealt{pillepich.etal.2019, nelson.etal.2019}) offers the best resolution amongst the TNG simulations, with 16 times better mass resolution than TNG100 over a volume of $(51.7\ {\rm Mpc})^3$. Moreover, within the star-forming regions of TNG50 galaxies, the spatial resolution of the gas cells lies in the 70-140 pc range, with stellar and dark matter softening below 300 pc \citep[see Table 1 and Figure 1 of][for more detail]{pillepich.etal.2019}. We combine the two datasets in a complementary way, as TNG100 is optimal for probing the most massive galaxies, while TNG50 is necessary when studying low-mass galaxies.
\subsubsection{SMBH growth and their feedback in the TNG model}
Of particular importance for this study is the implementation of BH physics, which we hence summarize here \citep[see][ for more detail]{weinberger.etal.2017,weinberger.etal.2018}.
A SMBH with a mass of $1.18\times10^{6}M_\odot$ is seeded in any Friend-of-Friend (FoF) halo identified on the fly with mass larger than $7.38\times 10^{10}M_\odot$ that does not yet contain a BH. Thereafter, the SMBH can grow by accretion of gas via an Eddington-limited Bondi model (see equations 1-3 in \citealt{weinberger.etal.2018}) or by merging with other SMBHs following the merging of their host galaxies.
For the modelling of BH feedback, the TNG model employs a two-mode scenario in which SMBHs release feedback energy into the surrounding environment either in the form of thermal energy (thermal mode) or kinetic energy (kinetic mode). The total amount of injected energy depends on the accretion rate onto the SMBH (see equations 7-9, as well as the corresponding numerical values of the efficiency parameters, in \citealt{weinberger.etal.2017}). The division into the two modes is controlled by the BH accretion rate: the thermal mode operates when SMBHs accrete at high rates, while the kinetic mode is switched on when the Eddington ratio drops below the threshold:
\begin{equation}
\chi={\rm min}\bigg[0.002\bigg(\frac{M_{\rm BH}}{10^8M_\odot}\bigg)^2,0.1\bigg], \label{eqn1}
\end{equation}
where $M_{\rm BH}$ is the SMBH mass. The numerical values in equation (\ref{eqn1}) are determined so that the TNG simulated galaxies show realistic properties in terms of their stellar component, such as the stellar mass function and spatial extent of the stellar population at $z=0$.
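The mode switch controlled by this threshold can be sketched in a few lines (a minimal sketch; function names and interface are illustrative and not part of the TNG codebase, with the quadratic mass scaling of \citealt{weinberger.etal.2017}):

```python
def eddington_threshold(m_bh):
    """Eddington-ratio threshold chi of equation (1), below which a TNG
    SMBH switches to kinetic feedback; m_bh is the BH mass in Msun."""
    return min(0.002 * (m_bh / 1e8) ** 2, 0.1)


def feedback_mode(edd_ratio, m_bh):
    """Thermal mode at high accretion rates, kinetic mode below threshold."""
    return "kinetic" if edd_ratio < eddington_threshold(m_bh) else "thermal"
```

With these numerical values, low-mass SMBHs have a very small threshold and thus remain almost always in the thermal mode, while SMBHs above a few $10^8M_\odot$ can switch to kinetic feedback at Eddington ratios up to the cap of $0.1$.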
\subsection{Measurement of observables from simulated galaxies}
In this Section we describe the definitions as well as the procedures used to compute observable quantities from simulated data.
\subsubsection{Mock X-ray analysis and Intrinsic X-ray properties}
\label{sec:mocks}
For comparison with X-ray observations, we carry out mock X-ray analyses of simulated galaxies to obtain values for the X-ray luminosities and the gas temperatures that closely mimic those determined from actual observations. Here we are after the X-ray signal produced by the diffuse, hot, metal-enriched gas within and around galaxies and we deliberately neglect the X-ray contribution from point-like sources such as black holes, supernova remnants or X-ray binaries -- which in fact are not modeled explicitly in the simulations.
In practice, the analysis involves two steps: i) generating mock spectra for a collection of gas cells within a region of interest for each simulated galaxy, and ii) fitting the integrated mock spectra to obtain X-ray quantities (such as the X-ray gas temperature, $T_{\rm X}$, and the X-ray luminosity, $L_{\rm X}$) on a galaxy-by-galaxy basis.
For any given object, the gas cells are selected from a cylindrical region that is randomly oriented with respect to the galaxy structure (in our case, along the z-axis of the simulation box) and is centered at the galaxy position, i.e. the location of its most gravitationally bound resolution element as determined by the {\sc subfind} halo finder. To account for projection effects, the cylinder height is set to $10\times R_{\rm e}$, where $R_{\rm e}$ is the effective radius (or half-light radius, see the definition below), and we measure the X-ray signals within projected circles of radius $1$ or $5\times R_{\rm e}$. Only non-star-forming gas cells are used, though we have verified that including star-forming gas cells increases the total X-ray emission by an insignificant amount ($\lesssim 1/1000$ per galaxy), because the X-ray emission of low-temperature gas cells ($T<10^{10.5}$ K) is negligible. We discuss the contribution from the subgrid hot components of the star-forming gas cells in Appendix \ref{sec:appC}. Moreover, only gas cells that are gravitationally bound to the galaxy of interest, as identified by the {\sc subfind} algorithm, are considered. For central galaxies, our mock X-ray signals therefore automatically excise the contribution from e.g. satellite galaxies, as typically done in observations. For satellite galaxies orbiting in more massive groups and clusters, our mock X-ray measurements naturally exclude the contribution from the background ICM. For central galaxies, however, no additional contribution is subtracted in our mocks: we do not model and excise a possibly distinct contribution from the ICM, as is sometimes done in observations.
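The aperture definition above reduces to a simple geometric mask over gas-cell coordinates. The following is a minimal sketch under the stated assumptions (positions already centered on the galaxy, cylinder axis along z; array names are illustrative):

```python
import numpy as np


def cylinder_mask(pos, r_e, r_aperture):
    """Select gas cells inside a cylinder of projected radius `r_aperture`
    (e.g. 1*R_e or 5*R_e) and total height 10*R_e along the z-axis.

    pos : (N, 3) array of gas-cell coordinates relative to the galaxy center.
    r_e : the galaxy's half-light radius, in the same units as `pos`.
    """
    r_proj = np.hypot(pos[:, 0], pos[:, 1])     # projected (x, y) radius
    in_height = np.abs(pos[:, 2]) <= 5.0 * r_e  # |z| within half the height
    return (r_proj <= r_aperture) & in_height
```

The returned boolean mask would then be combined with the {\sc subfind} membership and the non-star-forming requirement before summing spectra or emissivities.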
Note that within this framework, the X-ray contribution from outflowing gas may not be fully accounted for. On the one hand, by construction, wind particles, i.e. those launched within the TNG non-local stellar feedback scheme (see \citealt{pillepich.etal.2018} for a detailed description of the stellar feedback model), cannot contribute at all to the X-ray signal measured here; however, wind particles are spawned from star-forming regions within galaxies, i.e. from cold gas that should not contribute to the X-ray signal in any case. On the other hand, high-velocity gas outflows may be missed by the {\sc subfind} algorithm, as their fast-moving nature may render them not gravitationally bound to their host halo: such cells may span a wide range of temperatures and densities and may in principle contribute to the X-ray signals. To quantify the relative contribution of these high-velocity gas components, we compare the X-ray emission from {\sc subfind}-selected gas cells to that from cells identified by the Friend-of-Friend algorithm. This test is suitable for central galaxies only: at the high-mass end ($M_{\rm K}<-24$) we find no significant difference between X-ray luminosities (within $R_{\rm e}$) measured from {\sc subfind}- and FoF-selected gas cells, while at the low-mass end the difference is more noticeable, in particular for star-forming galaxies, for which the FoF-selected X-ray emission is on average a factor of $\sim2$ higher than the {\sc subfind}-selected value. Nonetheless, this difference is significantly smaller than the intrinsic scatter of the X-ray luminosity of central galaxies in the same mass range\footnote{The $84^{th}$ percentile value of the X-ray luminosity ($<R_{\rm e}$) for central galaxies with $M_{\rm K}>-24$ is a factor of $\sim12$ higher than the median value.}.
On a different note, for satellite galaxies, this test shows that the FoF-cell-based measurements would be heavily contaminated by the signal from the gas of the central galaxy of the same host halo, thereby supporting our choice to adopt the gravitationally-bound material as the source of the X-ray signals for both centrals and satellites.
\begin{figure*}
\centering
\includegraphics[width=0.99\textwidth]{plots/fig1.pdf}
\caption{Examples of mock X-ray spectra and best-fitting curves with one- (1T) and two-temperature (2T) APEC models for a high-mass (left) and a low-mass (right) galaxy taken from the TNG50 simulation at $z=0$ (see text in Section \ref{sec:mocks} for a detailed description of the mock X-ray analysis).}
\label{fig:1}
\end{figure*}
For each gas cell, a mock spectrum is generated based on its gas density, temperature, and metallicity, assuming a single-temperature APEC model (version 3.0.9) plus Galactic absorption (``wabs(apec)''), using the \texttt{fakeit} procedure implemented in the XSPEC\footnote{https://heasarc.gsfc.nasa.gov/xanadu/xspec/} (\citealt{smith.etal.2001}) package. The mock spectrum is created for an exposure time of $100$ ks, consistent with the typical depth of actual observations (see Tab. \ref{tb1}); no background is applied, and statistical errors are assumed to be Poissonian, based on the photon counts. We use a column density $n_{\rm{H}}=10^{20}\rm{cm}^{-2}$. For added realism, the simulated spectra are convolved with the response files of {\it Chandra}\footnote{We use response files for the default pointing of the 20th-cycle {\it Chandra} ACIS-S detector.}, assuming an energy resolution of $150$ eV. The final spectrum is obtained by adding up all spectra created from the gas cells belonging to the given region of interest. The mock spectra are then fitted assuming either a single-temperature APEC model (1T, ``wabs(apec)'') or a two-temperature model (2T, ``wabs(apec+apec)''), using the counts in the $[0.3-7]$ keV range and fixing the galaxy metallicity to its emission-weighted value\footnote{$Z_{\rm ew}=\frac{\sum_{\rm i}\epsilon_{\rm i}\times Z_{\rm i}}{\sum_{\rm i}\epsilon_{\rm i}}$, where $Z_{\rm i}$ is the metallicity of the $i^{th}$ gas cell and $\epsilon_{\rm i}$ is the X-ray emission computed as described in equation (\ref{eqn2}).} (see Appendix \ref{sec:appB} for a detailed inspection). We adopt the solar abundance values provided by \cite{anders.grevesse.1989}. The fit thus returns best-fitting values and associated uncertainties for all parameters of the fitting model, namely the gas temperature(s) and the normalisation (which is proportional to the gas density squared).
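Schematically, the statistical part of the \texttt{fakeit} step amounts to drawing a Poisson realization of the model spectrum for the chosen exposure. The sketch below illustrates only this noise step (it does not call XSPEC; \texttt{model\_rate}, an array of expected count rates per channel, is a hypothetical input):

```python
import numpy as np


def mock_counts(model_rate, exposure=100e3, seed=0):
    """Draw a Poisson realization of a model count-rate spectrum,
    mimicking the statistical noise of a 100 ks exposure.

    model_rate : expected counts per second in each spectral channel
                 (in practice, the APEC model folded through the response).
    """
    rng = np.random.default_rng(seed)
    expected = np.asarray(model_rate) * exposure  # expected counts per channel
    counts = rng.poisson(expected)                # Poisson-distributed counts
    errors = np.sqrt(np.maximum(counts, 1))       # simple Poisson errors
    return counts, errors
```

The actual pipeline additionally folds the model through the {\it Chandra} ACIS-S response before this step, which the sketch omits.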
The X-ray luminosity is derived from the best-fit plasma model (APEC) for each simulated galaxy.
To illustrate the mock X-ray analysis procedure, in Fig. \ref{fig:1} we show two examples of mock X-ray spectra and their fits for present-day TNG50 galaxies at the high- and the low-mass end. In general, a 1T model is sufficient to fit the X-ray spectra of the hot gas in galaxies across the considered mass range, except in high-mass systems, where a 2T model is often required to improve the fit. These are the most massive galaxies at the centers of groups or clusters, with non-negligible temperature gradients that make a 1T model inadequate. Nonetheless, we have verified that, on average, the X-ray relations obtained with 1T or 2T models differ by an insignificant amount compared to their own intrinsic scatters. Therefore, for the rest of this paper, we opt to show results obtained from fitting the mock spectra with 1T models only.
To examine how reliable the mock X-ray analysis is, we compare its results with theoretical quantities that can be measured directly from the properties of each gas cell, i.e. its temperature, density, and metallicity.
The {\it intrinsic} total X-ray luminosity of a simulated galaxy is obtained by summing the X-ray emission from all the gas cells within a region of interest:
\begin{equation}
L_{\rm X, intrinsic}=\sum_{\rm i} \epsilon_{\rm i}, \label{eqn2}
\end{equation}
where $\epsilon_{\rm i}$ is the X-ray gas emission in the $[0.3-5]$ keV band computed for the $i^{\rm th}$ gas cell assuming a 1T APEC model.
Average gas temperatures can be measured with two different weightings, mass-weighted ($T_{\rm mw}$) and emission-weighted ($T_{\rm ew}$), according to the formula:
\begin{equation}
T_{\rm mw, ew}=\frac{\sum_{\rm i} w_{\rm i}\times T_{\rm i}}{\sum_{\rm i} w_{\rm i}}, \label{eqn3}
\end{equation}
where $T_{\rm i}$ is the $i^{\rm th}$ gas cell temperature, and $w_{\rm i}=m_{\rm gas,i}$ (gas mass) for the case of $T_{\rm mw}$ and $w_{\rm i}=\epsilon_{\rm i}$ (X-ray emission) for the case of $T_{\rm ew}$.
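Equations (\ref{eqn2}) and (\ref{eqn3}) amount to simple sums over gas cells; a minimal sketch (in practice, the per-cell emissivities would come from evaluating the APEC model at each cell's density, temperature, and metallicity):

```python
import numpy as np


def intrinsic_lx(eps):
    """Equation (2): total intrinsic X-ray luminosity, i.e. the sum of the
    per-cell [0.3-5] keV emissivities eps."""
    return np.sum(eps)


def weighted_temperature(temps, weights):
    """Equation (3): weighted average gas temperature. Use gas masses as
    weights for T_mw, and per-cell emissivities for T_ew."""
    weights = np.asarray(weights, dtype=float)
    return np.sum(weights * temps) / np.sum(weights)
```

Since the X-ray emissivity scales roughly with the gas density squared, $T_{\rm ew}$ is weighted toward the densest, brightest gas, which makes it a natural theoretical counterpart of the spectroscopic $T_{\rm X}$.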
As we explicitly show in Appendix~\ref{sec:appA}, it is not possible to obtain mock X-ray measurements for all simulated galaxies in our mass-limited samples. Especially at the low-mass end, galaxies may produce zero or very few photons received by {\it Chandra} in a 100~ks exposure. This may occur because of the limited numerical resolution or for genuinely physical reasons, e.g. the gas of the considered galaxies may be too cold to emit photons in the energy range of interest. With very few photons, an X-ray temperature cannot be obtained from spectral fitting. To avoid galaxies that do not produce a sufficient number of photons, which in turn may result in poor or failed fits and unreliable mock X-ray measurements, we flag systems with $T_{\rm X}$ lying beyond $3\sigma$ off the average $T_{\rm X}-T_{\rm ew}$ relation: these are excluded from the analysis. All results throughout the paper therefore include only TNG galaxies with reliable mock X-ray measurements of both $T_{\rm X}$ and $L_{\rm X}$: these are labeled ``X-ray detected''. Approximately, the ``X-ray detected'' sample consists of galaxies with $L_{\rm X}(<R_{\rm e})\gtrsim5\times10^{37}$ erg~s$^{-1}$. A more detailed discussion of this selection and of the comparison between mock X-ray quantities and theoretically computed ones is given in Appendix~\ref{sec:appA}.
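The quality cut on the $T_{\rm X}-T_{\rm ew}$ relation can be sketched as a simple clipping step (illustrative only; in practice the reference relation and its scatter are derived as described in Appendix~\ref{sec:appA}):

```python
import numpy as np


def flag_outliers(log_tx, log_tx_pred, sigma):
    """Flag galaxies whose fitted log T_X deviates by more than 3*sigma
    from the value predicted by the mean T_X-T_ew relation.

    log_tx      : fitted X-ray temperatures (log10).
    log_tx_pred : temperatures predicted from T_ew via the mean relation.
    sigma       : scatter of the relation, in the same (dex) units.
    """
    residual = np.abs(np.asarray(log_tx) - np.asarray(log_tx_pred))
    return residual > 3.0 * sigma  # True = unreliable fit, to be excluded
```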
\begin{figure*}
\centering
\includegraphics[width=0.99\textwidth]{plots/fig2.pdf}
\caption{The stellar mass-magnitude relation for TNG100 ({\it left}) and TNG50 ({\it right}) galaxies at $z=0$. The main panels show the scatter plots of stellar mass and magnitude, while the sub-panels show the histograms of the two quantities. The grey data points denote simulated galaxies above a minimum stellar mass (dashed lines), chosen to ensure that stellar properties are sufficiently resolved. In red, we show galaxies with reliable X-ray measurements. All results in this paper are based on the TNG100 and TNG50 galaxies denoted here by red points, i.e. objects that pass the mock X-ray analysis presented in Section \ref{sec:mocks} and Appendix~\ref{sec:appA}. The solid lines represent the best-fitting $M_*-M_{\rm K}$ relations given in equations (\ref{eqn4}) and (\ref{eqn5}).}
\label{fig:2}
\end{figure*}
\subsubsection{Other galaxy properties}
\label{sec:props}
The hot gas properties of galaxies are expected to be causally related to their stellar and black hole activity. Therefore, besides computing X-ray quantities as described above, we also use other measurements to characterize TNG galaxies, specifically in relation to their stellar content and SMBH properties.
\begin{itemize}
\item {\it Half-mass ($r_{1/2}$) and half-light ($R_{\rm e}$) radii.} The former is the physical radius within which half of the total stellar mass of the galaxy is contained. The half-light radius, also called the effective radius, is the radius within which half of the stellar light of the galaxy is contained. In both cases, all gravitationally-bound stellar particles are considered for the size measurements, instead of e.g. accounting only for the light down to an effective surface brightness limit. In this work, in order to characterise the extent of hot atmospheres, we use the 2D circularized projected half-light radii computed in the K-band (\citealt{genel.etal.2017}): these do not account for the effects of dust.
\item {\it Stellar mass} ($M_*$) is the mass in the stellar component measured within twice the half-mass radius (i.e. $<2r_{1/2}$). We use this quantity for reference only, and not for comparisons to observations.
\item {\it K-band absolute magnitude} ($M_{\rm K}$) is computed from the total K-band luminosity of the stellar particles that lie within $2\times r_{1/2}$. Note that no dust attenuation is modelled, so this quantity is meant to be compared with the extinction-corrected $M_{\rm K}$ from observational data. In fact, as noted in Section~\ref{sec:obs}, observationally-derived values are obtained by integrating a galaxy's light via extrapolation with a single Sersic profile: in the following Sections we comment on the degree to which this mismatch of operational definitions impacts our simulation-observation comparison.
\item {\it Galaxy color ($u-r$)} is obtained from integrated stellar light measured within 30 kpc in SDSS-u and SDSS-r bands and accounting for the effects of dust (see \citealt{nelson.etal.2018} for detailed discussion). We use $u-r$ colors to separate the simulated samples into two classes:
\begin{itemize}
\item blue: $u-r \leq 2.1$;
\item red: $u-r > 2.1$.
\end{itemize}
\item {\it Stellar morphology} refers to parameters that describe the 3D shape of the stellar distribution. Following \cite{pillepich.etal.2019} (see also \citealt{chua.etal.2019}), we use stellar axis ratios to characterize the stellar distribution, e.g. to distinguish disky from spheroidal or elongated galaxies.
\item {\it Specific star formation rate ($\rm{sSFR}$)} is defined as the ratio of the instantaneous star-formation rate to stellar mass, both within twice the stellar half mass radius: $\rm{SFR}(<2r_{1/2})/M_{*}(<2r_{1/2})$.
\item {\it Star formation activity flags} are used to specify the star formation status of a galaxy. We employ the operational definitions taken from \cite{pillepich.etal.2019} to classify simulated galaxies based on their instantaneous star-formation rate (SFR) and the logarithmic distance with respect to the star-forming main sequence at the corresponding stellar mass ($\Delta\log_{10}({\rm SFR})$). More specifically, the following flags are used in this work:
\begin{itemize}
\item star-forming: $\Delta\log_{10}({\rm SFR})>-0.5$.
\item quenched: $\Delta\log_{10}({\rm SFR})\leq-1.0$.
\end{itemize}
\item {\it BH feedback-to-binding energy ratio} is defined as $E_{\rm kin}/E_{\rm bin}\equiv\int \dot{E}_{\rm kin}dt/E_{\rm bin}$, where the numerator is the accumulated kinetic feedback released by the central super-massive black hole, and the denominator is the gravitational potential energy computed for all the gas cells within $2\times r_{1/2}$ (see also \citealt{terrazas.etal.2019} for the use of this quantity).
\end{itemize}
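The color and star-formation flags above reduce to a few threshold cuts; a minimal sketch with illustrative names (\texttt{delta\_log\_sfr} is the logarithmic distance from the star-forming main sequence, $\Delta\log_{10}({\rm SFR})$):

```python
def classify_galaxy(u_minus_r, delta_log_sfr):
    """Apply the color and star-formation activity cuts defined in the text.

    Returns a (color, sf_flag) tuple of strings.
    """
    color = "red" if u_minus_r > 2.1 else "blue"
    if delta_log_sfr > -0.5:
        sf_flag = "star-forming"
    elif delta_log_sfr <= -1.0:
        sf_flag = "quenched"
    else:
        sf_flag = "intermediate"  # neither flag applies (green-valley-like)
    return color, sf_flag
```

Note that the two star-formation flags intentionally leave a gap between $-1.0$ and $-0.5$ dex, so some galaxies carry neither label.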
\begin{figure*}
\centering
\includegraphics[width=0.99\textwidth]{plots/fig3.pdf}
\caption{X-ray flux maps for a random selection of TNG50 galaxies at $z=0$, from $\sim10^{12}M_\odot$ ({\it top left}) to $\sim10^{9.6}M_\odot$ ({\it bottom right}) in stellar mass. Each map has a size of $10R_{\rm e}\times10R_{\rm e}$ and the signal is integrated through a depth of $10R_{\rm e}$. Dashed circles mark the regions within $R_{\rm e}$, the K-band half-light radius. The maps are created assuming an angular-diameter distance of $D_{\rm A}\approx43.7$ Mpc ($z\sim0.01$).}
\label{fig:3}
\end{figure*}
\subsection{The TNG100 and TNG50 simulated galaxies and their hot atmospheres}
\label{sec:tng}
Starting from {\sc subfind} haloes \citep{springel.etal.2001}, we select mass-limited samples of $z=0$ galaxies from TNG100 ($M_*>3\times10^9M_\odot$) and TNG50 ($M_*>10^8M_\odot$), so that in both simulations all galaxies are resolved with at least a few thousand stellar particles. Note that TNG50 has about 16 times better mass resolution than TNG100, albeit sampling a smaller volume: in the following, by simultaneously studying the hot atmospheres of both TNG100 and TNG50, we provide an estimate of how numerical resolution affects our quantitative results.
Throughout the paper, we consider both central and satellite galaxies, with no distinction. We verify that including satellites does not change significantly any of our qualitative or quantitative conclusions. Moreover, unless otherwise explicitly stated, we consider all galaxy types, independently of morphology, color or star formation state.
These mass-selected samples comprise $11233$ and $7540$ galaxies for TNG100 and TNG50, respectively, and are shown as gray dots in the $M_*-M_{\rm K}$ diagram of Fig. \ref{fig:2}. However, as mentioned in Section~\ref{sec:mocks} and described in Appendix~\ref{sec:appA}, not all galaxies produce enough photons in the $[0.3-7.0]~\rm{keV}$ energy band in a 100~ks exposure with {\it Chandra} to ensure a reliable spectral fit. In Fig. \ref{fig:2}, red data points indicate TNG galaxies with available and reliable mock X-ray properties (labeled ``X-ray detected''): in practice, we find that simulated galaxies with stellar masses of a few $10^9\ M_\odot$ and above ($M_{\rm K}\siml-21$) start to host X-ray emitting gas (with $L_{\rm X}\gtrsim 5\times10^{37}$ erg~s$^{-1}$). The final sample of TNG100 (TNG50) X-ray-detected galaxies consists of 3523 (736) objects, i.e. about 31 per cent (10 per cent) of the original mass-selected sample\footnote{The fractional difference between TNG100 and TNG50 is due to the different adopted minimum stellar mass cut: if we restricted the TNG50 sample to galaxies more massive than $M_*>3\times10^9M_\odot$, as for TNG100, more than 40 per cent would have well-defined X-ray measurements.}. Of the X-ray detected samples, about $67\%$ ($73\%$) are quenched galaxies and about $21\%$ ($15\%$) are star-forming galaxies for TNG100 (TNG50), based on the star formation activity flags defined in Section~\ref{sec:props}.
To validate the use of the K-band magnitude ($M_{\rm K}$) as a mass proxy, we examine the $M_*-M_{\rm K}$ relation for our sample of simulated X-ray bright galaxies and verify that their K-band magnitude does indeed correlate strongly with the stellar mass following the relations:
\begin{equation}
\log_{10}M_*=10.2-0.47\times(M_{\rm K}+23),\ \ \rm{(TNG100)} \label{eqn4}
\end{equation}
\begin{equation}
\log_{10}M_*=10.2-0.43\times(M _{\rm K}+23),\ \ \rm{(TNG50)} \label{eqn5}
\end{equation}
with an intrinsic scatter of $\sim0.1$ dex. We note that, while the slopes are consistent, the normalisation of the TNG relations is about $0.3$ dex smaller than that of the corresponding relation used for the MASSIVE sample (see equation 2 in \citealt{ma.etal.2014}). A possible reason for the offset lies in the different ways the stellar mass is measured in simulations and observations. The observational estimate is based on the work of \cite{cappellari.etal.2013}, who approximated the stellar mass as $M_*\approx2M_{1/2}$ (see equation 28 in their paper), where $M_{1/2}$ is the total mass measured within the half-light radius from dynamical modeling. We verify in the TNG simulations that the same approximation would overestimate the stellar mass by $\sim0.3-0.6$ dex at $M_{\rm K}\sim -23$. This further explains why we prefer to use a mass proxy such as $M_{\rm K}$ for comparisons with observations, rather than the stellar mass directly.
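For reference, equations (\ref{eqn4}) and (\ref{eqn5}) can be applied directly to convert a K-band magnitude into a stellar mass estimate (a minimal sketch, valid to within the quoted $\sim0.1$ dex intrinsic scatter):

```python
def log_mstar_from_mk(m_k, simulation="TNG100"):
    """Best-fitting M_* - M_K relations of equations (4) and (5).

    m_k : K-band absolute magnitude.
    Returns log10(M_*/Msun); intrinsic scatter is ~0.1 dex.
    """
    slope = {"TNG100": -0.47, "TNG50": -0.43}[simulation]
    return 10.2 + slope * (m_k + 23.0)
```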
Finally, in Fig.~\ref{fig:3}, we show the X-ray maps of a selection of TNG50 simulated hot atmospheres, from high (top left) to low masses (bottom right). We notice a marked diversity in the X-ray morphology across the sample. At the high-mass end ($M_*\gtrsim5\times10^{11}M_\odot$), the hot atmospheres appear relatively smooth, volume-filling, and they extend far beyond the stellar distribution ($\gtrsim 5R_{\rm e}$). This result is expected, as massive galaxies reside in massive haloes that maintain a stable accretion shock at the virial radius, which heats the accreted gas to the virial temperature (e.g. \citealt{birnboim.dekel.2003}). In addition to gravitational heating, previous studies of the TNG simulations show that in massive systems feedback powered by gas accretion onto the central SMBH is the dominant extra heating channel (\citealt{weinberger.etal.2018}), which can disperse the gas from the central regions \citep{terrazas.etal.2019}, heat it up (Zinger et al. in prep), and drive high-speed galactic outflows (\citealt{nelson.etal.2019}).
Moving toward lower masses, the hot atmospheres become less extended and less volume-filling, with the X-ray emission appearing more concentrated in the central regions ($\siml R_{\rm e}$).
In the TNG simulations, for galaxies with stellar mass below $10^{10}~M_\odot$, besides gravitational heating, stellar feedback is the main channel of extra energy (\citealt{weinberger.etal.2018}). In the middle row, for example, the hot atmospheres exhibit bipolar features, with the X-ray emitting gas extending beyond the galactic disk: these are indeed star-forming, disky galaxies, and their cold, gaseous, star-forming disks appear as black ``edge-on'' regions.
\section{Comparison between TNG and Observed Early-Type Galaxies}
\label{sec:3}
ETGs have been the main focus of past X-ray observations because they have been considered on average massive enough to host hot atmospheres that emit abundantly in the X-ray band. Previous theoretical studies (e.g. \citealt{McCarthy.etal.2010,lebrun.etal.2014,planelles.etal.2014,choi.etal.2015,liang.etal.2016,henden.etal.2018,dave.etal.2019}) showed that the hot gas content in those massive galaxies is particularly susceptible to SMBH feedback, thereby making their X-ray observations an ideal avenue to constrain models of SMBH feedback.
However, in order to make meaningful and quantitative comparisons, it is critical to define a sample of simulated galaxies that properly represents the observed sample of ETGs selected for the comparison.
In this Section, we first describe the selection of a sample of ETG-like galaxies from TNG based on various optical properties and then compare X-ray relations, i.e. $L_{\rm X}-M_{\rm K}$ and $T_{\rm X}-M_{\rm K}$, between the selected simulated and observed datasets.
\begin{figure*}
\centering
\includegraphics[width=0.99\textwidth]{plots/fig4.pdf}
\caption{Stellar-light properties of the simulated and observed samples of early-type galaxies at $z\simeq0$. In these panels, we only show those TNG100 (TNG50) galaxies with $M_*>3\times10^9M_\odot$ ($>10^8M_\odot$) that provide enough X-ray photons for our mock {\it Chandra} observations to measure reliable temperatures (see Section~\ref{sec:mocks} for detail). {\it Upper left:} the color-magnitude diagram for non-disky galaxies in TNG100 and TNG50 (grey and brown circles, respectively) is shown along with the corresponding observed samples of $\textrm{ATLAS}^{\rm 3D}$ and MASSIVE galaxies. The contours, computed for the TNG100 data, mark concentration levels of $90$, $60$, $30$, and $10$ per cent of the maxima of the galaxy distribution density. Observed galaxies without a $u-r$ measurement are assigned a $u-r$ value of zero. The dashed line marks the threshold of $u-r=2.1$ above which TNG galaxies are selected for the comparison with these observational datasets. {\it Upper right:} the histograms of magnitude ($M_{\rm K}$, top) and color ($u-r$, bottom) are shown for the selected simulated samples (grey and brown histograms) as well as for the observed sample. {\it Lower row:} comparison on the $R_{\rm e}-M_{\rm K}$ plane between the TNG galaxies selected as ``ETG-like'' and the observed datasets. The shaded area represents the $1\sigma$ envelope about the median relation for the TNG100 sample.}
\label{fig:4}
\end{figure*}
\subsection{Matching the simulated to the observed samples of ETGs}
\label{sec:matching}
Properly accounting for, and thus reproducing in the simulations, the selection of the observed datasets is challenging: for example, the MASSIVE and $\textrm{ATLAS}^{\rm 3D}$ early-type galaxies are selected based on morphological criteria, i.e. they are ellipticals and S0s. Yet, such a selection is not easily reproducible, because it is based on visual inspection, and more quantitative criteria may not return the same galaxy sample.
We attempt to select TNG galaxies that resemble the reference $\textrm{ATLAS}^{\rm 3D}$ and MASSIVE samples of ETGs from \cite{goulding.etal.2016} by imposing the following criteria:
\begin{itemize}
\item K-band absolute magnitude ($M_{\rm K}$): we apply the magnitude cut $M_{\rm K}<-21.5$ to match the $\textrm{ATLAS}^{\rm 3D}$ low-mass threshold.
\item Morphology: we select only non-disky simulated galaxies to mimic the observed sample of ellipticals and S0s. For this task, we apply the same criteria for the stellar axis ratios used in \cite{pillepich.etal.2019}, namely we consider as non-disky all those galaxies that do {\it not} satisfy the following properties: $q>0.66$ and $s<0.33$.
\item Stellar color: we take red TNG galaxies, i.e. with $u-r>2.1$.
\end{itemize}
These add to the requirements of having a minimum galaxy stellar mass and of being X-ray luminous (see Sections~\ref{sec:mocks} and \ref{sec:tng}).
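For concreteness, the cuts above, together with the magnitude threshold, amount to a simple boolean selection. The following sketch is purely illustrative (the function and variable names are our own and not part of the TNG analysis pipeline); it encodes the non-disky criterion as the negation of the disky condition $q>0.66$ and $s<0.33$:

```python
import numpy as np

def etg_like_mask(m_k, u_r, q, s):
    """Illustrative sketch of the ETG-like selection (names hypothetical).

    m_k  : K-band absolute magnitude
    u_r  : u-r color
    q, s : intermediate-to-major and minor-to-major stellar axis ratios
    """
    disky = (q > 0.66) & (s < 0.33)                 # disk-like stellar shape
    return (m_k < -21.5) & ~disky & (u_r > 2.1)     # bright, non-disky, red

# Example: a red, round, bright galaxy passes; a blue disk does not.
mask = etg_like_mask(np.array([-23.0, -23.0]),
                     np.array([2.5, 1.8]),
                     np.array([0.5, 0.9]),
                     np.array([0.5, 0.2]))
```

The stellar-mass and X-ray-detection requirements would be applied as additional boolean conditions in the same fashion.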
The demographics of the simulated ETG-like galaxies are presented in the upper row of Fig.~\ref{fig:4} (grey and brown symbols for TNG100 and TNG50, respectively) in comparison to the compiled early-type sample of \cite{goulding.etal.2016}. There we show the color-magnitude (left) and the magnitude/color histograms (right).
It is apparent from the color-magnitude diagram that the magnitude and morphological criteria alone are not adequate to disentangle quenched from star-forming galaxies in simulations. Both TNG100 and TNG50 non-disky galaxies display a bimodal distribution in the $(u-r)-M_{\rm K}$ diagram. We therefore apply a color cut to the simulated objects, $u-r>2.1$, which matches the minimum value of the observed sample\footnote{The color data of the observed sample are collected from the NASA-Sloan Atlas database: www.nsatlas.org (see e.g. \citealt{blanton.moustakas.2009}).}.
In the right panels, we inspect the magnitude and color distributions. Grey and brown histograms represent TNG100 and TNG50 ETG-like galaxies, i.e. non-disky and red objects, respectively. A couple of points are worth emphasising in view of the comparison of hot gas properties below.
i) The magnitude distributions of the selected TNG100 and TNG50 samples are broadly similar to the observed ones, except at the bright end, and the observed distributions are somewhat flatter overall. In particular, in TNG100, the brightest simulated systems appear to be $\sim1.5$ magnitudes brighter than the brightest observed ones. It is important to emphasize that it is difficult to replicate the exact $M_{\rm K}$ measurement performed for the observed data: in fact, we do not attempt that here, as the simulation magnitudes account for the stellar light from within twice the stellar half mass radius while the observed ones are obtained by extrapolating a single Sersic profile. On the observational side, there have been concerns (e.g. \citealt{lauer.etal.2007,ma.etal.2014}) that the relatively shallow photometry (the $1\sigma$ surface brightness limit is $20\ {\rm mag\ arcsec^{-2}}$) provided by the 2MASS survey might bias low the measured K-band magnitudes: as the radial range used for fitting the light profile is too small to obtain an accurate Sersic index, the total stellar luminosity could be underestimated, especially for massive extended galaxies. On the simulation side, we have already compared TNG100 galaxy stellar mass functions at $z=0$ to observational results \citep{pillepich.etal.2018a}. We verify that when limiting the radius within which the simulated $M_{\rm K}$ is computed to a smaller value, e.g. $r=30$ kpc instead of $2\times r_{1/2}$\footnote{For $M_{\rm K}<-26$, the typical value of the half-mass radius is $r_{1/2}\sim30$ kpc (see also the bottom plot of Fig. \ref{fig:4} for the values of the half-light radius, i.e. $R_{\rm e}$).}, the discrepancy in $M_{\rm K}$ between the brightest galaxies in TNG100 and observations is reduced to $\sim0.8$ magnitudes.
ii) For the colors, while the simulated distributions in the red region centre around $u-r=2.5$, the observed sample is slightly shifted to a higher value of $\sim2.6$. Moreover, a couple of MASSIVE systems are significantly redder than any simulated ones ($u-r\gtrsim3$). Finally, the simulated non-disky galaxies extend to much lower values of the $u-r$ distribution. \citet{nelson.etal.2018} have demonstrated that, across morphological types, the TNG100 galaxy population is in striking quantitative agreement with the SDSS $g-r$ and $u-r$ vs. mass distributions, apart from a small discrepancy of $u-r\sim 0.1$ at the highest mass end. In light of this, it is nevertheless clear that the X-ray MASSIVE ETGs represent a highly biased sample of red galaxies. This could be due to the fact that the MASSIVE survey probes a volume about four times larger than the simulated volume in TNG100.
Before comparing X-ray quantities between simulations and observations, another stellar quantity that needs to be compared is the effective radius, $R_{\rm e}$, as it is used in X-ray observations to mark the size of the hot atmospheres (e.g. \citealt{goulding.etal.2016,babyk.etal.2018, lakhchaura.etal.2019}). This is of particular importance for massive galaxies, as their hot atmospheres can extend well beyond the stellar distribution and well into the intra-group/cluster medium. For this purpose, we show in the lower row of Fig. \ref{fig:4} the effective radius-magnitude relations for the simulated and observed samples. In agreement with the findings of \citet{genel.etal.2017}, who compared TNG100 optical sizes to a number of observational results, simulated and observed datasets of X-ray luminous ETGs occupy similar regions in the radius-magnitude space. Marginalizing over selection biases and possible mismatches in the ways the effective radii are operationally measured, this result assures that the properties of the X-ray emitting gas are measured across spatial regions that are consistent within a factor of 1-1.5 between TNG and the observed samples. In fact, the effects of numerical resolution are the reason why TNG100 galaxies have somewhat larger sizes at fixed magnitude than TNG50 ones, by a factor of up to 1.5 or so \citep[see also][]{pillepich.etal.2019}.
\subsection{TNG and observed X-ray relations for ETGs}
\label{sec:comparing}
\begin{figure*}
\centering
\includegraphics[width=0.99\textwidth]{plots/fig5.pdf}
\caption{Comparison between simulated and observed ETG X-ray relations measured within $R_{\rm e}$ ({\it left}) and $5R_{\rm e}$ ({\it right}) radius, at $z\simeq0$. The TNG100 and TNG50 selected samples (grey and brown circles) comprise ETG-like galaxies, i.e. non-disky, magnitude-selected ($M_{\rm K}<-21.5$), and color-selected ($u-r>2.1$) galaxies in addition to a selection in stellar mass and X-ray measurements (see text in Section \ref{sec:matching}). The shaded area represents the $1\sigma$ uncertainty about the median relation for the TNG100 sample.}
\label{fig:5}
\end{figure*}
In Fig. \ref{fig:5}, we present the comparison between the simulated and the observed $T_{\rm X}-M_{\rm K}$ and $L_{\rm X}-M_{\rm K}$ relations for ETGs measured within $R_{\rm e}$ (left) and $5R_{\rm e}$ (right). Overall, TNG and the available observations show similar trends of X-ray properties over the considered range of magnitudes: in fact, they all occupy similar regions in the X-ray-magnitude parameter space, returning a non-trivial validation of the TNG model and its underlying models for the feedback processes.
At the bright end of the $M_{\rm K}$ distribution ($M_{\rm K}<-24$), namely for the most massive galaxies ($M_*>5\times10^{10}M_\odot$), both the TNG and the observed X-ray properties (in logarithmic scale) can be reasonably described by a linear function of the magnitude. On the other hand, at the faint end ($M_{\rm K}>-24$), the X-ray temperatures (luminosities) flatten (upturn) for lower-mass galaxies and exhibit larger scatter. The upward tail in the $L_{\rm X}-M_{\rm K}$ relation occurs below $M_{\rm K}\sim-24$, which represents the transition scale between star-forming and quenched galaxies. As we will explicitly demonstrate and discuss later, this reflects the significant variations in the X-ray luminosities of the two galaxy populations.
To quantify the linear dependence on $M_{\rm K}$ at the bright end, we describe and fit the X-ray properties with the following formula:
\begin{equation}
\log_{10} (F/F_0) = \alpha+\beta\times (M_{\rm K}+26), \label{eqn6}
\end{equation}
where $F$ represents either $T_{\rm X}$ or $L_{\rm X}$, and $F_0$ is the pivot value. The parameters $\alpha$ and $\beta$ are the best-fitting normalization and slope, respectively. Using the fitting package {\sc linmix\_err} \citep{kelly.2007}, we fit equation (\ref{eqn6}) to both the TNG100 simulated results, which cover a sufficiently large range of $M_{\rm K}$ for systems with $M_{\rm K}<-24$, and to the observed data: the best-fitting parameters are reported in Table \ref{tb2}.
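As a sanity check on the numbers in Table \ref{tb2}, equation (\ref{eqn6}) can be evaluated directly. The sketch below is our own arithmetic in plain Python (the actual fits were performed with the Bayesian {\sc linmix\_err} package; the function name is hypothetical):

```python
def xray_property(m_k, alpha, beta, f0=1.0):
    """Evaluate equation (6): F = F0 * 10**(alpha + beta * (M_K + 26))."""
    return f0 * 10.0 ** (alpha + beta * (m_k + 26.0))

# TNG100 best-fitting parameters within R_e (Table 2):
t_x = xray_property(-26.0, alpha=-0.03, beta=-0.27)           # keV
l_x = xray_property(-26.0, alpha=0.77, beta=-1.10, f0=1e40)   # erg/s
# At the pivot magnitude M_K = -26 the relation reduces to F = F0 * 10**alpha,
# i.e. T_X ~ 0.93 keV and L_X ~ 6e40 erg/s.
```

Evaluating at the pivot magnitude isolates the normalization $\alpha$, which is why the quoted uncertainties on $\alpha$ directly describe the scatter-free normalization of each relation.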
\begin{table*}
\caption{\label{tb2}
Best-fitting parameters of equation (\ref{eqn6}) for ETG-like TNG100 galaxies ($M_*>3\times10^9M_\odot$, non-disky, $M_{\rm K}<-24$, $u-r>2.1$, and X-ray detected) and the observed samples at $z\simeq0$. We provide the observational fit for reference only, as it is performed on a combination of data points with heterogeneous selections and measurements. For the TNG fits, note that the X-ray properties are determined via Chandra-like mock observations with 100 ks of exposure time and including the telescope responses. Here TNG galaxy magnitudes account for all the K-band light from within twice the stellar half mass radius.}
\begin{center}
\resizebox{0.95\textwidth}{!}{
\begin{tabular}{cc|cccc|ccc}
\hline
Relation & $F_0$ & $\alpha$ & $\beta$ & scatter &$\mid$& $\alpha$ & $\beta$ & scatter \\
\hline
Within $R_{\rm e}$ & & & TNG100 & &$\mid$& & Observations & \\
& & & & &$\mid$& &($\rm{ATLAS^{3D}}$+MASSIVE+Lakhchaura+19) & \\
$T_{\rm X}-M_{\rm K}$ & 1 keV & $-0.03\pm0.02$ &$-0.27\pm0.01$ & $0.21\pm0.01$ &$\mid$& $-0.01\pm0.03$ & $-0.20\pm0.03$ & $0.15\pm0.02$ \\
$L_{\rm X}^{[0.3-5]\rm{keV}}-M_{\rm K}$ & $10^{40}$ erg/s & $0.77\pm0.05$ &$-1.10\pm0.04$ & $0.56\pm0.02$ &$\mid$& $1.54\pm0.11$ & $-1.10\pm0.12$ & $0.60\pm0.06$ \\
\hline
Within $5R_{\rm e}$ & & & TNG100 & &$\mid$& & Observations (Babyk+18) & \\
$T_{\rm X}-M_{\rm K}$ & 1 keV & $0.02\pm0.01$ &$-0.29\pm0.01$ & $0.15\pm0.01$ &$\mid$& $0.02\pm0.03$ & $-0.18\pm0.03$ & $0.16\pm0.02$ \\
$L_{\rm X}^{[0.5-6]\rm{keV}}-M_{\rm K}$ & $10^{40}$ erg/s & $1.54\pm0.05$ &$-1.20\pm0.04$ & $0.61\pm0.02$ &$\mid$& $1.51\pm0.13$ & $-0.83\pm0.15$ & $0.72\pm0.07$\\
\hline
\end{tabular}}
\end{center}
\end{table*}
The comparison between TNG100 and TNG50 allows us to assess the effects of numerical resolution and sampling. It is already known that within the TNG model improved resolution implies smaller sizes (see previous Section) and larger galaxy stellar masses and luminosities \citep[up to factors of 1.2-1.4 in galaxy stellar mass at $z=0$ in these mass ranges, e.g.][]{pillepich.etal.2018}. Nevertheless, overall TNG100 and TNG50 galaxies occupy similar regions in the parameter spaces, with two noticeable differences. At the highest-mass end, TNG50 X-ray temperatures appear biased low compared to TNG100: this is probably due to the absence of a large number of galaxies living at the center of massive clusters, as the TNG50 volume is almost 10 times smaller than TNG100 and the most massive TNG50 haloes have masses of $10^{14}M_\odot$. Moreover, on average, the TNG50 X-ray temperatures and luminosities within $5R_{\rm e}$ at fixed magnitude are smaller by up to a factor of a few: this could be due to the effects of resolution on the nominal apertures for the measurements (smaller in TNG50 than in TNG100), or due to the resulting slightly different thermodynamical properties of the gas, or a combination of both.
Within these uncertainties, the simulated $T_{\rm{X}}-M_{\rm K}$ relations are in good agreement with the observed relations at both radii. There is no significant variation in the gas temperature relations between the two radii, except for a somewhat smaller simulated scatter for larger apertures: 0.21 vs. 0.15 dex for TNG100. For $M_{\rm K}<-24$, the TNG100 simulated $T_{\rm X}-M_{\rm K}$ relation is well approximated by a linear function as described in equation (\ref{eqn6}) with slope $\beta\sim-0.3$\footnote{We note that $M_{\rm K}$ is negative hence a negative slope means a positive correlation between the two quantities.}. The observed slope is slightly shallower ($\beta\sim-0.2$). As $M_{\rm K}$ is closely related to galaxy stellar mass (as shown in Section~\ref{sec:tng}), these findings imply that the gas temperature, which reflects the thermal energy of the gas, is primarily determined by a galaxy's potential (see also \citealt{goulding.etal.2016}). At the faint end of the $M_{\rm K}$ distribution, on the other hand, $T_{\rm X}$ starts to stabilize around 0.3 keV, albeit with larger scatter, consistent with the idea that lower-mass galaxies are more sensitive to non-gravitational heating processes such as stellar and AGN feedback. Even though the energy range for fitting is limited to the $[0.3-7.0]$ keV band, where {\it Chandra} is sensitive and reasonably well calibrated, it is still possible to detect cool systems with atmospheric effective temperatures down to $T_{\rm X}\sim0.1$ keV, as such gas produces X-ray line emission of OVII and OVIII in the {\it Chandra} band. These lines are strong and their ratios provide a good ``thermometer'' even if we only detect a relatively small number of photons.
For the $L_{\rm X}-M_{\rm K}$ relation, the TNG100 and TNG50 simulations reproduce reasonably well the observed trends at both radii, even though there is a slight offset in normalization (at $<R_{\rm e}$) depending on exactly which simulation and observational datasets are considered. For instance, the median value of the simulated $L_{\rm X}(<R_{\rm e})$, at $M_{\rm K}=-26$, is lower than the observed median value by a factor of $\sim5$. Yet, given the large scatter in both simulated and observed samples, the discrepancy is less than $1\sigma$. At larger radii, $<5R_{\rm e}$, the ratio between observed (\citealt{babyk.etal.2018}) and TNG100 $L_{\rm X}$ median values is less than a factor of 2 and they are fully statistically consistent. For TNG100, the X-ray luminosity measured within $5R_{\rm e}$ is increased by a factor of $\sim5$ on average with little dependence on magnitude with respect to the one measured within $R_{\rm e}$.
Taken at face value, given that the simulations and all the observational datasets appear to be consistent for the $T_{\rm X}-M_{\rm K}$ relations, the difference in X-ray luminosity could be an indication of a lack of hot gas within the central regions of simulated galaxies compared to observations. For instance, given that $L_{\rm X}\propto f_{\rm g}^2$, where $f_{\rm g}\equiv M_{\rm gas}/M_{\rm tot}$ is the hot gas fraction, an offset in the X-ray luminosity by a factor of $\sim5$ translates into an offset in the hot gas fraction by a factor of $\sim2$. This might in turn suggest that black hole feedback in the most massive galaxies is too strong, blowing out too much gas from the central regions. In fact, the mismatch could be more simply due to a difference (actual or of measurement) in the K-band luminosity towards the highest-mass end. In practice, the small offset might be of observational origin. As discussed in Section~\ref{sec:props}, the observed $M_{\rm K}$ could be underestimated due to the relatively shallow photometry of the 2MASS survey. For illustration, if we use $M_{\rm K}(<30\ {\rm kpc})$ instead of $M_{\rm K}(<2\times r_{1/2})$, the $L_{\rm X}$ offset between simulations and observations is reduced to a factor of $\sim4$. Finally, we note that the offset in $L_{\rm X}$ is mainly caused by the high-mass end galaxies of the MASSIVE sample, which is obtained from a $\sim4$ times larger volume compared to the simulated volume (TNG100), and it could be biased toward the most X-ray luminous galaxies in the nearby Universe.
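The factor-of-$\sim2$ conversion quoted above follows directly from $L_{\rm X}\propto f_{\rm g}^2$; a one-line check of our own arithmetic:

```python
import math

lx_offset = 5.0                    # offset in X-ray luminosity, L_X
fg_offset = math.sqrt(lx_offset)   # since L_X ∝ f_g^2, f_g scales as sqrt(L_X)
# fg_offset ≈ 2.24, i.e. a factor of ~2 in the hot gas fraction
```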
Another point worth noting is that the scatter in $L_{\rm X}$ at a given magnitude is remarkably larger, about 3 times, than the temperature scatter in both simulated and observed samples. This suggests that for massive galaxies at the same stellar mass, their X-ray luminosity may depend significantly on other galaxy properties that affect or correlate with the amount of hot gas, such as galaxy kinematics (e.g. fast versus slow rotators, see \citealt{sarzi.etal.2013}). Further investigation into the scatter of the X-ray luminosity, though intriguing, is beyond the scope of the current work and should be addressed in a detailed future study.
\begin{figure*}
\centering
\includegraphics[width=0.89\textwidth]{plots/fig6a.pdf}
\includegraphics[width=0.89\textwidth]{plots/fig6b.pdf}
\caption{The dependence on galaxy type of the X-ray luminosity from the diffuse, hot, and metal-enriched gas. {\it Top row:} the left panel shows the TNG $L_{\rm X}-M_{\rm K}$ relation within $R_{\rm e}$ in which the data points are color-coded according to their $u-r$ information with respect to the threshold value that defines ETGs as presented in Sections~\ref{sec:props} and \ref{sec:matching}. The contours locate the blue cloud and red sequence loci of the TNG100 simulation. In the right panel, the relation between $L_{\rm X}$ and $u-r$ is shown for TNG100 systems with $-25<M_{\rm K}<-23$ at two different radii: $R_{\rm e}$ and $5R_{\rm e}$. The contours specify the loci of the blue cloud and the red sequence for the case where $L_{\rm X}$ is measured within $R_{\rm e}$. {\it Bottom row:} X-ray luminosity as a function of $M_{\rm K}$ magnitude is shown for TNG simulated galaxies in comparison with X-ray observations. The TNG100 and TNG50 data are represented by color-filled and dashed-line contours, respectively, color-coded according to their $u-r$ colors. The blue and red solid lines represent the best-fit relations, as reported in equations (\ref{eqn7}) and (\ref{eqn8}), for the observed late- and early-type galaxies, respectively.}
\label{fig:6}
\end{figure*}
\section{The dependence of X-ray properties on galaxy type}
\label{sec:predictions}
After verifying that the TNG simulations realistically reproduce the observed X-ray properties of the hot atmospheres in massive ETGs, we now use the TNG simulations to get insights on the X-ray properties of low-mass and star-forming galaxies. Of particular interest is how the X-ray properties depend on galaxy type. This follows up on previous observational studies (e.g. \citealt{mineo.etal.2012, li.wang.2013b, li.etal.2017}), which reveal a tight correlation between galaxies' gaseous X-ray emission and their star-formation rates, suggesting a close connection between the hot diffuse gas content and the galaxies' star-formation state. This result is tentatively supported by numerical studies (e.g. \citealt{crain.etal.2010,davies.etal.2019b}): for instance, \cite{davies.etal.2019b} show, using the EAGLE simulation, that the star-formation rates of simulated galaxies are significantly positively correlated with their gas content, which is in turn manifested in observable quantities such as the X-ray luminosity or the thermal Sunyaev-Zel'dovich flux integrated throughout the halo. \citet{crain.etal.2010} found similar correlations in earlier numerical simulations of galaxies.
\subsection{The $L_{\rm X}-M_{\rm K}$ relations for star-forming and quenched galaxies}
We first examine the simulated $L_{\rm X}-M_{\rm K}$ relation in connection to the star formation activity of the galaxies using their color, i.e. $u-r$, as a proxy for their star formation state.
The top left panel in Fig. \ref{fig:6} shows the $L_{\rm X}-M_{\rm K}$ relation in TNG100 and TNG50 in which the data points are color-coded according to their $u-r$ values. We apply the same color cut as presented in Section~\ref{sec:props}, i.e. $u-r>2.1$, to define quenched or red galaxies (colored in red), and $u-r\leq2.1$ to select the star-forming galaxies (colored in blue). The number fraction of quenched (star-forming) galaxies is about $20\%$ ($80\%$) and $18\%$ ($82\%$) for TNG100 and TNG50, respectively. Density contours are drawn to indicate the most populated regions of the two populations in TNG100. A clear pattern emerges in both TNG100 and TNG50, in which the quenched galaxies occupy the bright end ($M_{\rm K}<-24$) but are clustered around a relatively low $L_{\rm X}$ ($L_{\rm X}\sim10^{39}$ erg~s$^{-1}$), while the star-forming galaxies populate the faint end ($M_{\rm K}>-24$) and their $L_{\rm X}$ is centered around a higher value ($L_{\rm X}\sim10^{40}$ erg~s$^{-1}$). Critically, at the magnitude range where the star-forming and quenched galaxies overlap ($M_{\rm K}\sim-24$), the TNG simulations predict a clear X-ray luminosity separation between the two populations, with the star-forming galaxies being X-ray {\it brighter} than the quenched systems. The separation is more pronounced in TNG100 than in TNG50.
To better quantify how the X-ray luminosity depends on the star-formation state, we select simulated galaxies in TNG100 whose $M_{\rm K}$ falls in the range where the star-forming and quenched galaxies overlap, i.e. $-25<M_{\rm K}<-23$ within which the quenched (star-forming) fraction is about $30\%$ ($70\%$), and plot their median values of $L_{\rm X}$ as well as the corresponding $1\sigma$ scatter as a function of $u-r$, as shown in the top right panel in Fig. \ref{fig:6}. We show the results for $L_{\rm X}$ measured within two apertures, $R_{\rm e}$ and $5R_{\rm e}$, to explore how this effect depends on radius. At both apertures, the X-ray luminosity decreases steeply with color, falling by more than one order of magnitude from the blue to the red end. At the red end, the luminosity measured at an intermediate radius, i.e. within $5R_{\rm e}$, is systematically larger than the one measured within $R_{\rm e}$. This result suggests that in the range of K-band magnitudes where the two populations overlap, the difference in $L_{\rm X}$ between the star-forming and quenched galaxies is more prominent when measured in the central regions than at larger radii.
The dichotomy in X-ray luminosity between star-forming and quenched galaxies is intriguing since it is an observationally testable prediction.
In addition to the observations of early-type galaxies presented in Section~\ref{sec:comparing}, we also compare our results with {\it Chandra} and {\it XMM-Newton} observations of lower-mass late-type galaxies taken from \cite{mineo.etal.2012}, \cite{li.wang.2013}, and \cite{li.etal.2017}.
The final observed sample consists of 163 galaxies (108 early-type, 55 late-type) spanning a range of over 7 magnitudes ($-26<M_{\rm K}<-19$).
For the comparison with X-ray observations of disk galaxies, it is worth mentioning a caveat regarding the difference between simulations and observations in the volume over which the hot atmospheres are studied. Unlike the case of massive elliptical/lenticular galaxies, the X-ray observations of disk galaxies (e.g. \citealt{mineo.etal.2012, li.wang.2013}) are mainly taken from a box-shaped volume whose size is characterized by $D25$, defined as the B-band projected diameter of the ellipse major axis at the isophotal level of $25\ \rm{mag}\ \rm{arcsec}^{-2}$, and by $r25$, the ratio of the major to minor axes of the ellipse (see, e.g., Fig. 4 in \citealt{li.wang.2013} for an illustration). Instead, in this work we compute the simulated X-ray quantities within a cylindrical volume characterized by the effective radius for both star-forming and quenched galaxies, as described in Section~\ref{sec:mocks}. The observed values of the major and minor axes of the D25 ellipse, taken from \cite{li.wang.2013}, fall within the range of the simulated $R_{\rm e}$ distribution (comparison not shown here). Moreover, we verify that for low-mass systems the simulated X-ray measurements within $R_{\rm e}$ capture the bulk of the total galactic X-ray emission: e.g. for galaxies with $M_{\rm K}>-24$, $L_{\rm X}(<R_{\rm e})$ contributes more than $80\%$ of the total luminosity, $L_{\rm X}(<5R_{\rm e})$. This justifies the use of the simulated measurements within $R_{\rm e}$ for the comparison with the late-type observations.
In the bottom panel of Fig. \ref{fig:6}, we show the X-ray luminosity-magnitude relation for both observed late- and early-type galaxies (datapoints in shades of blue and red, respectively) overplotted to TNG simulated data represented by contours that locate the loci of star-forming (blue) and quenched (red) galaxies in TNG100 (color-filled) and TNG50 (dashed-line contours).
Neglecting the inhomogeneity of the datasets collected here, the observed $L_{\rm X}-M_{\rm K}$ relation can be approximately described by a broken linear function of magnitude, in which early-type and late-type galaxies follow two distinct linear relations:
\begin{equation}
\resizebox{0.49\textwidth}{!}{
${\rm Early-type:}\ \log_{10}\big(\frac{L_{\rm X}}{10^{40}{\rm erg/s}}\big)=(-21.5\pm1.3)+(-0.88\pm0.05)\times M_{\rm K}, \label{eqn7}$
}
\end{equation}
\begin{equation}
\resizebox{0.49\textwidth}{!}{
${\rm Late-type:}\ \log_{10}\big(\frac{L_{\rm X}}{10^{40}{\rm erg/s}}\big)=(-6.6\pm0.9)+(-0.25\pm0.04)\times M_{\rm K}, \label{eqn8}$
}
\end{equation}
with intrinsic scatters of $0.66$ dex and $0.48$ dex for early-types and late-types, respectively. The early-type relation is significantly steeper, with a slope about three times larger than the late-type one. Interestingly, at the junction between the two populations, $M_{\rm K}\sim[-24, -23]$, late- and early-type galaxies are segregated into high- and low-$L_{\rm X}$ regions, respectively, on the $L_{\rm X}-M_{\rm K}$ plane. This segregation causes the scatter in the X-ray luminosity to be remarkably large, with data points spread over more than two orders of magnitude in $L_{\rm X}$.
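As a quick consistency check (our own arithmetic, not part of the fits), the two best-fit relations of equations (\ref{eqn7}) and (\ref{eqn8}) can be evaluated to locate where they cross; the crossing magnitude indeed falls inside the quoted overlap range:

```python
# log10(L_X / 1e40 erg/s) as a function of M_K, equations (7) and (8):
def log_lx_early(m_k):
    return -21.5 - 0.88 * m_k

def log_lx_late(m_k):
    return -6.6 - 0.25 * m_k

# Magnitude at which the two best-fit lines intersect:
# -21.5 - 0.88*M = -6.6 - 0.25*M  =>  M = (-21.5 + 6.6) / (0.88 - 0.25)
m_cross = (-21.5 + 6.6) / (0.88 - 0.25)   # ~ -23.7, within M_K ~ [-24, -23]
```

Brighter than $m_{\rm cross}$ the steeper early-type relation predicts higher $L_{\rm X}$, while fainter than $m_{\rm cross}$ the late-type relation lies above, matching the segregation described in the text.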
\begin{figure*}
\centering
\includegraphics[width=0.99\textwidth]{plots/fig7.pdf}
\caption{The hot gas properties with respect to the star formation activity in TNG100 at $z=0$. Clock-wise, the plots are shown for $L_{\rm X}-M_{\rm K}$, $f_{\rm g}-M_{\rm K}$ ($f_{\rm g}$ is normalised to the baryon fraction $\Omega_{\rm b}/\Omega_{\rm m}$ and it is the mass of all gas, not just the hot gas, divided by the total mass), $T_{\rm mw}-M_{\rm K}$, and $T_{\rm X}-M_{\rm K}$ relations. Note the difference in the plotted temperature range in the y-axis of the two temperature plots. The data points are color-coded according to the relative difference between their specific star formation rate ($\rm{sSFR}$) and the running median of the $\rm{sSFR}-M_{\rm K}$ relation at their magnitude. The vertical dotted line marks roughly the transition point, $M_{\rm K}=-24$, between star-forming and quenched galaxies. The contours specify the locations of the two populations, star-forming (blue) and quenched (red), defined according to the star-formation flags described in Section~\ref{sec:props}.}
\label{fig:7}
\end{figure*}
Qualitatively, the tentatively observed segregation of X-ray luminosity between late- and early-type galaxies is consistent with the predictions of the TNG simulations. Nonetheless, the qualitative agreement between observations and simulations at this stage should not be overinterpreted, for two reasons: i) the observed sample of late-type galaxies is statistically limited and not well defined in terms of the range in near-infrared magnitude and the probed volume; and ii) observationally it is challenging to isolate the truly diffuse X-ray emission of the hot ISM due to contamination by unresolved emission of compact sources (e.g. high-mass X-ray binaries), whose contribution can only be estimated in a statistical and model-dependent way (see e.g. \citealt{mineo.etal.2012} for a thorough study). Another point worth discussing here is that in the magnitude range $-24<M_{\rm K}<-22$ we see a number of remarkably luminous late-type galaxies ($\sim16\%$) with $L_{\rm X}\gtrsim10^{41}$ erg~s$^{-1}$ in TNG100, which are as bright as the most massive giant ellipticals. Such highly X-ray luminous atmospheres in late-type galaxies have so far not been observed in the local Universe, which may imply that the TNG simulations overpredict the gas-phase X-ray emission for a fraction of star-forming galaxies. This in turn may point to the inefficiency of the implemented feedback models in removing gas from the central regions, resulting in a number of overly gas-rich galaxies at the low-mass end. Alternatively, the overpredicted X-ray luminosity could result from the gas being over-heated by both AGN and stellar feedback in galaxies in the transition range (see the discussion at the end of Section \ref{sec:5}). More detailed analyses of those highly luminous star-forming galaxies, as well as of the connection with the associated feedback processes, are beyond the scope of the current study but would provide essential diagnostics for the TNG model.
Moreover, the discrepancy between TNG and observations could be partly due to the fact that no sensitive, high-spatial-resolution, all-sky X-ray survey of star-forming galaxies exists yet, and that TNG100 encompasses a volume about $5$ times larger than the one probed by the current late-type observations. The upcoming all-sky survey with {\it eROSITA} \citep{merloni.etal.2012} will provide excellent X-ray catalogs of nearby galaxies with which to examine this prediction of the TNG model.
\section{The origin of the Luminosity Diversity}
\label{sec:5}
The diversity in $L_{\rm X}$ presented in the previous Section suggests that the X-ray emission of hot atmospheres is closely linked to the star formation state of the host galaxies. In turn, at least within the TNG galaxy formation model, this is closely connected to the stellar and SMBH feedback processes. To further investigate the origin of the X-ray luminosity difference in star-forming and quenched galaxies, in this Section we inspect the thermal properties as well as the gas content in the two sets of systems and their connection with the SMBH driven feedback. For the sake of brevity, we only employ the TNG100 sample for this theoretical study, though the following results are also qualitatively applicable to the TNG50 sample.
\subsection{Connection with Gas content}
In Fig. \ref{fig:7} we show the TNG100 relations between the X-ray luminosity, gas mass fraction ($f_{\rm g}$), X-ray temperature ($T_{\rm X}$) and mass-weighted temperature ($T_{\rm mw}$) measured within $R_{\rm e}$ as a function of the K-band magnitude, in which the data points are color-coded based on their specific star formation rate (${\rm sSFR}$) with respect to the median of the ${\rm sSFR}-M_{\rm K}$ relation at the same magnitude. In this way, we can isolate the intrinsic correlation between X-ray properties or gas content and sSFR from their correlation with the K-band magnitude that traces the stellar mass.
On top of that, contours are used to indicate the parameter space occupied by star-forming (blue) and quenched (red) galaxies, which are flagged depending on their relative position with respect to the main sequence (see Section~\ref{sec:props} for a detailed definition). Analogous plots for the gas density and metallicity can be found in Appendix \ref{sec:appB}.
As expected, the two populations of quenched and star-forming galaxies, when flagged according to their instantaneous star formation rate, display a similar separation on the $L_{\rm X}-M_{\rm K}$ plane as in the case of the color-based study presented in Section~\ref{sec:predictions}. However, when color-coded by $\rm{sSFR}$ at a given magnitude, there is no clear segregation between galaxies above and below the median $\rm{sSFR}$ value across the considered magnitude range except at $M_{\rm K}\sim-24$. At this magnitude (stellar mass), highly star-forming galaxies are much more X-ray luminous than their counterparts with low star-formation rates. Importantly, the gas mass fraction is also found to have the largest scatter at this magnitude, where galaxies below the median $\rm{sSFR}$ are gas depleted by more than an order of magnitude, compared to those above the median.
The temperature relations, $T_{\rm X}-M_{\rm K}$ and $T_{\rm mw}-M_{\rm K}$, display different patterns for star-forming versus quenched galaxies. The former shows no clear division between the two populations even at $M_{\rm K}\sim-24$, where star-forming and quenched galaxies have similar X-ray temperatures ($T_{\rm X}\sim0.2-0.3$ keV). On the other hand, a clear separation is found in the $T_{\rm mw}-M_{\rm K}$ plane, where the quenched galaxies are about an order of magnitude hotter than their star-forming counterparts. The different patterns between the two temperature estimators can be explained by the fact that the X-ray temperatures are mainly determined by the temperatures of the gas cells that emit efficiently in the X-ray band (i.e. $[0.3-5]$ keV)\footnote{In fact, $T_{\rm X}$ is a close estimator of the emission-weighted temperature ($T_{\rm ew}$). See Appendix~\ref{sec:appA} for a detailed comparison of temperature estimators.}, and are therefore likely biased high, especially for low-mass systems, compared to the mass-weighted temperature, which represents the average thermal energy of all the considered gas cells.
\begin{figure*}
\centering
\includegraphics[width=0.79\textwidth]{plots/fig8.pdf}
\caption{Main panel: the effects of SMBH kinetic feedback on X-ray luminosity across the explored magnitude range for TNG100. The data points are color-coded according to the relative difference between their value of the feedback-to-binding energy ratio ($E_{\rm kin}/E_{\rm bin}$, see the text for definition) and the running median of the $E_{\rm kin}/E_{\rm bin}-M_{\rm K}$ relation at their magnitude. This energy ratio reflects the capacity of SMBH feedback to push gas out of the central region of the galaxy. Its imprint becomes clearly visible at $M_{\rm{K}}\sim-24$ ($M_*\sim10^{10.7}M_\odot$, dotted line), where the SMBH feedback becomes sufficiently powerful to push out large amounts of hot gas, thereby significantly reducing the X-ray luminosity. The contours represent the two populations of star-forming and quenched galaxies, identical to those in the top-left panel of Fig. \ref{fig:7}. For completeness, we also show galaxies with $E_{\rm kin}=0$ as empty circles, namely those galaxies in which SMBH feedback is still in the thermal mode. Sub panel: the $L_{\rm X}-M_{\rm BH}$ relation for TNG100 galaxies with magnitude in the transition range ($[-25, -23]$) is represented by the median relation and the $1\sigma$ envelope. The contours locate the loci of star-forming (blue) and quenched (red) populations, which are separated at $M_{\rm BH}\sim10^{8.1}M_\odot$ (denoted by the vertical dotted line).}
\label{fig:8}
\end{figure*}
In summary, the results shown in Fig. \ref{fig:7} reveal that the quenched galaxies in TNG are on average hotter but poorer in gas than the star-forming systems. We further note that the quenched population is also slightly more metal-poor than the star-forming one (see Appendix~\ref{sec:appB}). Though the contribution from metal line emission dominates the total gaseous X-ray emission of galaxies across the whole considered range of magnitude, the lower metallicity is not the main reason why quenched galaxies exhibit significantly less X-ray luminosity than the star-forming population: we demonstrate and discuss this in some detail in Appendix \ref{sec:appB}. All these results indicate that gas depletion is primarily responsible for the lower X-ray luminosity of quenched galaxies. In other words, quenched galaxies have lower values of $L_{\rm X}$ because they contain less (albeit hotter) gas than star-forming galaxies. This finding explains the previously noted connection between the diversity in $L_{\rm X}$ and galaxy types and is consistent with previous results (e.g. \citealt{nelson.etal.2018, nelson.etal.2018b,terrazas.etal.2019, davies.etal.2019}) which indicate that gas removal is the primary cause of star formation quenching in TNG galaxies, in addition to gas heating (Zinger et al. in prep.).
\subsection{Connection to black hole feedback}
The results found in the previous Section provide an important connection between the gas content, which is closely linked to the quenching mechanism, and the diversity in $L_{\rm X}$. Since the latter can be verified observationally, it offers a critical test for the quenching mechanism in the TNG simulations.
Previous studies of BH feedback in TNG, e.g. \cite{weinberger.etal.2017}, \cite{weinberger.etal.2018}, \cite{nelson.etal.2018}, and \cite{terrazas.etal.2019}, suggest that the kinetic mode of SMBH feedback may play an important role in quenching star formation. The feedback not only heats the gas, but can also lift gas to higher altitudes or strip the galaxy of its star-forming material. Here, we aim to explore the imprint of the SMBH kinetic feedback on the X-ray luminosity of the hot atmospheres. For this task, we consider galaxies that host at least one supermassive black hole at their center and that have already switched to the kinetic mode (i.e. $\int \dot{E}_{\rm kin}dt>0$). Following \cite{terrazas.etal.2019}, we compute the ratio of the accumulated SMBH kinetic feedback to the gas binding energy, $E_{\rm kin}/E_{\rm bin}$, as described in Section~\ref{sec:props}.
In the main panel of Fig. \ref{fig:8}, we show the $L_{\rm X}-M_{\rm K}$ relation for TNG100 galaxies, color-coding the data points according to the relative difference between their energy ratio $E_{\rm kin}/E_{\rm bin}$ and the running median of the $E_{\rm kin}/E_{\rm bin}-M_{\rm K}$ relation at their magnitude. For completeness, the galaxies in which SMBH feedback is still in the thermal mode ($E_{\rm kin}=0$) are also presented (empty circles). In addition, we specify the loci of star-forming and quenched galaxies via contours as done in the top-left panel of Fig. \ref{fig:7}.
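The color-coding described above amounts to computing, for each galaxy, the offset of its energy ratio from a running median taken in magnitude bins. A minimal sketch of such a calculation (illustrative only; the bin width and function name are our assumptions, not taken from the paper's pipeline):

```python
import numpy as np

def running_median_offset(x, y, bin_width=0.5):
    """Offset of each y from the running median of y taken in bins of x.

    Sketch for color-coding points by their distance from the median
    relation; the 0.5 mag bin width is an assumed, illustrative choice.
    """
    edges = np.arange(x.min(), x.max() + bin_width, bin_width)
    # assign each point to a bin (clipped so edge cases stay in range)
    idx = np.clip(np.digitize(x, edges) - 1, 0, len(edges) - 2)
    medians = np.array([np.median(y[idx == k]) if np.any(idx == k) else np.nan
                        for k in range(len(edges) - 1)])
    return y - medians[idx]
```

In practice one would apply this with `x` the K-band magnitudes and `y` the logarithm of $E_{\rm kin}/E_{\rm bin}$, then map the returned offsets onto a color scale.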
In general, the energy ratio does not show much scatter, except in the range of $M_{\rm K}\sim[-23, -25]$, where the ratio varies by up to three orders of magnitude. As shown in previous studies (e.g. \citealt{terrazas.etal.2019}), for low-mass systems ($M_*<10^{10.7}M_\odot$), the gas binding energy appears larger than the accumulated BH kinetic feedback. As galaxies increase their mass to $M_*\sim10^{10.7}M_\odot$ ($M_{\rm K}\sim-24$), at which scale the mass of the central SMBH reaches the critical value $M_{\rm BH}\simeq10^8M_\odot$ as described in equation (\ref{eqn1}), the integrated kinetic feedback energy ($E_{\rm kin}\simeq10^{59}$ erg) starts to dominate the galactic gravitational potential, causing the energy ratio to increase by over three orders of magnitude.
We note that at $M_{\rm K}\sim-24$, where the energy ratio exhibits the largest scatter, we see a clear separation in X-ray luminosity between galaxies above and below the median value of $E_{\rm kin}/E_{\rm bin}$. Galaxies above the median value, namely systems where the kinetic feedback dominates the binding energy, are significantly fainter in X-rays than the galaxies that lie below the median. This result clearly suggests a causal relationship between the SMBH-driven feedback activity and the diversity in $L_{\rm X}$: the SMBH kinetic feedback lifts appreciable amounts of gas from the central potential of massive galaxies, lowering their X-ray luminosity and quenching their star formation. It also explains the origin of the mass scale, $M_*\sim10^{10.7}M_\odot$ ($M_{\rm K}\sim-24$), at which the $L_{\rm X}$ diversity occurs, as it corresponds to the scale where SMBHs start to effectively switch from the thermal to the kinetic mode of feedback \citep[see][ for a discussion]{weinberger.etal.2017, terrazas.etal.2019, davies.etal.2019}.
In the TNG model the SMBH feedback activity is closely connected to the black hole mass; to illustrate this point, we show in the sub panel of Fig.~\ref{fig:8} the relation between X-ray luminosity and BH mass for a subsample of TNG100 galaxies that lie in the transition range, i.e. $-25<M_{\rm K}<-23$. As shown in the plot, the two populations separate at $M_{\rm BH}\sim10^{8.2-8.3}M_\odot$, with a remarkable drop in the X-ray luminosity from star-forming to quenched galaxies: the latter are about an order of magnitude less luminous than the former. The sharp division between the two populations at $M_{\rm BH}\sim 10^{8.2-8.3}M_\odot$, which results from an ensemble of choices in the model in addition to the Eddington ratio threshold for switching SMBH feedback from thermal to kinetic mode as described in equation (\ref{eqn1}), has been challenged by some observational data (see \citealt{terrazas.etal.2019} for a detailed discussion), which prefer a broader scatter in $M_{\rm BH}$ in the transition range than what emerges from the TNG simulations.
Our finding is in line with recent theoretical studies based on the EAGLE simulations \citep{davies.etal.2019b, davies.etal.2019}: according to the EAGLE galaxy formation model, a strong negative correlation is found between the scatter in the gas content -- which can be probed via X-ray and Sunyaev-Zel'dovich (SZ) observations (see Figure 4 in \citealt{davies.etal.2019b}) -- and the scatter in the SMBH mass at fixed halo mass. This result is in turn linked to the ability of the SMBHs to expel gas via feedback, which in the EAGLE simulations also plays an essential role in quenching star formation in central galaxies. Despite the differences in the SMBH feedback model implemented in EAGLE compared to the TNG model (see \citealt{davies.etal.2019} for a full discussion), the results agree on the crucial role played by SMBH feedback in establishing the population of quenched galaxies via gas ejection, in addition to gas heating.
Finally, SMBH kinetic feedback might not be solely responsible for the observed large separation between the quenched and star-forming galaxies in $L_{\rm X}$. This separation could be further amplified by SMBH thermal and stellar feedback, which could boost the atmospheric X-ray luminosity in star-forming galaxies. For low-mass galaxies ($M_*\lesssim10^{9.5}M_\odot$) in TNG, the SMBH thermal feedback at high-accretion rates is negligible as the injected thermal energy is quickly lost in the star-forming gas phase (see Figure 1 in \citealt{weinberger.etal.2018}). It becomes an important heating channel in the mass range $M_*\sim[10^{10}-10^{10.5}]M_\odot$, which is close to the transition range ($M_{\rm K}\sim-24$), whereby it contributes significantly to the gas phase X-ray emission of star-forming galaxies. In addition, stellar feedback, though being a sub-dominant energy channel at the considered mass range, still releases non-negligible feedback energy and it is expected to both return an appreciable amount of material into the interstellar medium and heat the gas above its virial temperature.
\section{Summary and conclusions}
\label{sec:6}
In this paper we have investigated the properties of galactic hot atmospheres using a large sample of simulated galaxies taken from the IllustrisTNG cosmological simulations. Specifically, we used galaxies from the TNG100 and TNG50 runs with stellar masses (K-band magnitudes) spanning the $10^{8-12.5} M_\odot$ ($[-17,-28]$) range at $z=0$. We have carried out mock X-ray analyses of the diffuse, hot and metal-enriched gas of the simulated objects as if they were observed with {\it Chandra} and then compared the simulated X-ray scaling relations, such as $T_{\rm X}-M_{\rm K}$ and $L_{\rm X}-M_{\rm K}$, to those obtained from a collection of X-ray observations of nearby galaxies, including for example $\textrm{ATLAS}^{\rm 3D}$ and MASSIVE early-type galaxies with X-ray measurements. We thus used the simulations to gain critical insights into the diversity of the hot atmospheres and the connection between the kinetic SMBH feedback -- which provides a mechanism for quenching the star formation in the TNG simulations -- and the X-ray properties of the gaseous atmospheres of these galaxies.
The main results of our study can be summarized as follows:
\begin{enumerate}
\item Most TNG galaxies with a stellar mass above a few $10^{9}M_\odot$ host X-ray emitting atmospheres that can easily be detected by {\it Chandra} with a 100 ks exposure (Fig.~\ref{fig:2}). The X-ray morphology of such hot atmospheres can be diverse, with more massive systems hosting more extended and more volume-filling gas than lower-mass objects, and with star-forming galaxies exhibiting biconical features of hot gas extending beyond their cold, star-forming gaseous disks (Fig.~\ref{fig:3}).
\item After selecting early-type-like galaxies similar to those in available observational datasets, taken e.g. from the $\textrm{ATLAS}^{\rm 3D}$ and MASSIVE surveys, we show that TNG returns $T_{\rm X}-M_{\rm K}$ and $L_{\rm X}-M_{\rm K}$ relations that are consistent with observations (Fig. \ref{fig:5}). This consistency constitutes a non-trivial validation of the TNG simulations and of their underlying models for stellar and black hole feedback, which are responsible for rearranging and heating the gas within and around galaxies.
\item According to the IllustrisTNG simulations, star-forming and quiescent galaxies exhibit markedly distinct X-ray luminosity vs. K-band magnitude relations. In particular, the TNG simulations predict a clear X-ray luminosity separation between star-forming and quiescent galaxies at $M_{\rm K}\sim -24$, corresponding to $M_*\sim10^{10.7}M_\odot$, with star-forming galaxies being X-ray {\it brighter} than their quenched counterparts, by up to two orders of magnitude (Fig. \ref{fig:6}). The difference is more prominent within the central regions ($<R_{\rm e}$) than at larger radii ($5R_{\rm e}$) and it is qualitatively broadly consistent with currently available X-ray data of late and early-type galaxies in the local Universe.
\item On average, the quenched galaxies in IllustrisTNG host gas atmospheres that are hotter but contain significantly less gas than the star-forming galaxies at the same magnitude (Fig. \ref{fig:7}). This indicates that, most likely, the $L_{\rm X}$ diversity between the two populations is driven primarily by gas depletion within quenched galaxies. In other words, quenched galaxies have lower values of $L_{\rm X}$ because they contain less, albeit hotter, gas than star-forming galaxies. This finding is consistent with previous results indicating that gas removal and heating are the primary causes of star formation quenching, at least in TNG.
\item As for the star-formation quenching itself, we show that, according to the TNG simulations, the X-ray luminosity of galactic atmospheres correlates with BH activity and, in particular, the X-ray luminosity dichotomy between star-forming and quiescent galaxies occurs at the same mass scale where the energy injected via SMBH kinetic feedback significantly exceeds the gravitational binding energy of the gas within galaxies (Fig. \ref{fig:8}). This result suggests a direct causal relationship between the SMBH feedback, the physical state of galactic atmospheres, and star-formation.
\end{enumerate}
The $L_{\rm X}$ dichotomy found in our work has been indirectly addressed in some previous numerical studies (e.g. \citealt{croton.etal.2006,lebrun.etal.2014,choi.etal.2015}) though the discussion in those studies was more about the effects of different SMBH feedback models, e.g. thermal versus kinetic feedback or thermal feedback with various treatments. For instance, \cite{choi.etal.2015} showed that galaxies simulated with thermal feedback are more star-forming (i.e. bluer) and exhibit higher X-ray luminosities than those simulated with mechanical kinetic feedback. However, none of the previous studies addressed the $L_{\rm X}$ diversity problem using a self-consistent model which explains the distinction between the star-forming and quiescent galaxies, as we do in the current study.
To conclude, in this paper we have uncovered an observationally testable, quantitative prediction from the IllustrisTNG simulations. State-of-the-art cosmological simulations of galaxy formation, such as IllustrisTNG (and also EAGLE), support a scenario whereby the quenching of star formation in massive galaxies is caused directly by gas removal from the central regions of galaxies and heating by SMBH feedback. The upcoming all-sky survey with {\it eROSITA} will provide the necessary data to perform robust tests of the $L_{\rm X}$ dichotomy between the hot atmospheres of star-forming and quenched galaxies predicted here, and hence to further probe the quenching mechanism in the Universe.
\section*{ACKNOWLEDGEMENTS}
We would like to thank the reviewer Benjamin Oppenheimer for constructive comments and suggestions that helped improve the paper. This work was supported by the Lend\"ulet LP2016-11 grant awarded by the Hungarian Academy of Sciences. The authors would like to thank Elad Zinger for useful conversations. The primary TNG simulations were realized with computing time granted by the Gauss Centre for Supercomputing (GCS): TNG50 under GCS Large-Scale Project GCS-DWAR (2016; PIs Nelson/Pillepich) and TNG100 under GCS-ILLU (2014; PI Springel) on the GCS share of the supercomputer Hazel Hen at the High Performance Computing Center Stuttgart (HLRS). GCS is the alliance of the three national supercomputing centres HLRS (Universit{\"a}t Stuttgart), JSC (Forschungszentrum J{\"u}lich), and LRZ (Bayerische Akademie der Wissenschaften), funded by the German Federal Ministry of Education and Research (BMBF) and the German State Ministries for Research of Baden-W{\"u}rttemberg (MWK), Bayern (StMWFK) and Nordrhein-Westfalen (MIWF). Post-processing analyses for this paper were carried out on the Draco and Cobra supercomputers at the Max Planck Computing and Data Facility (MPCDF).
This research has made use of the NASA/IPAC Infrared Science Archive, which is funded by the National Aeronautics and Space Administration and operated by the California Institute of Technology.
\bibliographystyle{mnbst}
cond-mat/9701011
\section{Introduction}
A system is said to exhibit damage spreading (DS) if the ``distance'' between
two of its replicas, that evolve under the same thermal noise but from
slightly different initial conditions, increases with time.
Even though DS was first introduced in the
context of biologically motivated dynamical systems
\cite{Kauffman}, it has evolved into
an important tool in physics. It is used in equilibrium \cite{z}
for accurately measuring dynamic exponents, and also out of equilibrium,
to study the influence of initial conditions on the temporal evolution
of various systems. In particular, one hoped that DS
could be used to identify ``phases'' of {\it chaotic}
behavior in systems with no intrinsic dynamics,
such as Ising ferromagnets~\cite{Creutz,Stanley} and
spin-glasses\cite{DerWeis}. Such hopes were dampened when it
was realized that different algorithmic implementations of the
same physical system's dynamics
(such as Glauber versus heat bath or Metropolis Monte Carlo)
can have different DS properties \cite{Mariz,JandArc}.
This implies that DS is not an intrinsic
property of a system\cite{GrJSP79},
since two equally legitimate algorithms yield
contradictory results. This problem was addressed
recently in \cite{PreviousPaper}, where we realized that one {\it can}
define ``phases'' on the basis of their DS properties
in an algorithm-independent manner.
To do this one must, however, consider simultaneously the
{\it entire set} $\cal A$
of possible algorithms (dynamic procedures) that are consistent with the
physics of the model studied (such as detailed balance, interaction
range and symmetries). Every system
must belong to one of three possible DS phases, depending
on whether damage spreads for all, none or a part of the members of the set
$\cal A$.
Once we have been led to consider a large family of algorithms, it
was natural to revisit an old question, such as the
possibility for DS in the one-dimensional (1-$d$) Ising
ferromagnet. In this case all conventional dynamic procedures agree
that damage does not spread. We show here that once the family of dynamic
procedures is extended in the spirit explained above, {\it a DS transition is
possible in the 1-$d$ Ising model.} Having found such a DS transition, it
is again natural to investigate to which universality class it belongs. So far
this issue could be addressed only for the 2-$d$ case; since it
is much easier to obtain high-quality numerical data in 1-$d$,
we were able to test carefully a conjecture of Grassberger
\cite{GrJSP79}, according to which
the generic universality class of damage spreading transitions
is directed percolation (DP). This indeed is correct, but we discovered that
if the dynamics that is being used has certain symmetries,
{\it the DS transition is not in the DP class}. Interestingly this is the
case for Glauber dynamics of the $H=0$ Ising model,
for which the DS transition is non-DP.
We start by reviewing briefly~\cite{Mariz,JandArc}
the conventional algorithms - Glauber, heat bath (HB) and Metropolis -
and show that they form a
particular subset of some
general set of legitimate rules $\cal A$. All members of $\cal A$
satisfy detailed balance
with respect to the same Hamiltonian; hence all these rules
generate the same equilibrium
ensemble as the conventional algorithms and
are equally legitimate to
mimic the temporal evolution of an Ising system in contact
with a thermal reservoir.
Next, we introduce two ``new'' dynamic rules, which
constitute just another subset
of $\cal A$, and show that for these two rules
a DS transition {\it does} occur in the
1-$d$ Ising model. Moreover, as we show in the example of the second rule,
an additional $Z_2$ symmetry of the DS order parameter leads to a transition
that is not in the DP universality class.
\section{Previous work, with conventional algorithms}
Denote the site which is being updated by $i$
and the set of its neighbors by $j$. The energy at time $t$ is given by
\begin{equation}
\frac{{\cal H}}{k_BT} = -\sum_ih_i(t)\sigma_i(t),
\hspace{5mm}
h_i(t) = \sum_j K_{ij} \sigma_j(t), \nonumber
\end{equation}
where $K_{ij}=J/k_BT$ and
$\sigma_i(t)=\pm 1$.
Define a transition probability $p_i(t)$:
\begin{equation}
p_i(t)= \frac{e^{h_i(t)}}{e^{h_i(t)}+e^{-h_i(t)}}.
\label{eq:pi}
\end{equation}
The update rules of HB, Glauber and Metropolis dynamics
are expressed in terms of random numbers
$z=z_i(t)$, selected with equal probability from the interval $[0,1]$.
The rule for {\it standard HB} is
\begin{equation}
\label{StandardHB}
\sigma_i(t+1)={\rm sign}[p_i(t)-z].
\end{equation}
A different dynamic process is obtained by
generating at each site {\it two independent} random numbers,
$z_+$ and $z_-$, and using the first if $\sigma_i(t)=+1$ and
the second when $\sigma_i(t)=-1$. The rules of this {\it uncorrelated HB}
dynamics may be written as
\begin{equation}
\label{UncorrHB}
\sigma_i(t+1)=
\left\{ \begin{array}{ll}
{\rm sign}[p_i(t)-z_+] & \mbox{if $ \sigma_i(t)=+1$} \\
{\rm sign}[p_i(t)-z_-] & \mbox{if $ \sigma_i(t)=-1$}
\end{array} \right. .
\end{equation}
{\it Glauber} dynamics uses only one random number per site:
\begin{equation}
\label{Glauber}
\sigma_i(t+1)=
\left\{ \begin{array}{ll}
+{\rm sign}[p_i(t)-z] & \mbox{if $ \sigma_i(t)=+1$} \\
-{\rm sign}[1-p_i(t)-z] & \mbox{if $ \sigma_i(t)=-1$}
\end{array} \right. .
\end{equation}
This rule can be expressed in the form of (\ref{UncorrHB}) but with the
two random numbers completely {\em anticorrelated}, i.e., $z_++z_-=1$.
Finally the rules for {\it Metropolis} dynamics read
\begin{equation}
\label{Metropolis}
\sigma_i(t+1)=
\left\{ \begin{array}{ll}
+{\rm sign}[p^+_i(t)-z] & \mbox{if $ \sigma_i(t)=+1$} \\
-{\rm sign}[p^-_i(t)-z] & \mbox{if $ \sigma_i(t)=-1$}
\end{array} \right. ,
\end{equation}
where $p^\pm_i(t)=\min(1,e^{\mp 2 h_i(t)} )$.
It is easy to show that given $\sigma_{i-1}(t),\sigma_i(t),\sigma_{i+1}(t)$,
the probability to get $\sigma_i(t+1)= +1$
is the same for standard HB, uncorrelated HB, and Glauber
dynamics\footnote{For Metropolis dynamics the transition probabilities
are different}.
Hence, by observing the temporal evolution of a {\it single}
Ising system, one cannot tell by which of these
methods was its trajectory in configuration
space generated.
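This equivalence at the single-replica level is easy to check numerically: by eqs. (\ref{StandardHB})--(\ref{Glauber}), all three rules set $\sigma_i(t+1)=+1$ with the same probability $p_i$. A short sketch (our illustration; function names and parameters are ours, not from the original work):

```python
import numpy as np

rng = np.random.default_rng(0)

def p_of(h):
    # transition probability p_i = e^h / (e^h + e^{-h})
    return np.exp(h) / (np.exp(h) + np.exp(-h))

def heat_bath(sigma, h, z):
    # standard HB: same random number regardless of the current spin
    return np.sign(p_of(h) - z)

def uncorrelated_hb(sigma, h, z_plus, z_minus):
    # uncorrelated HB: independent random numbers for the two spin states
    z = np.where(sigma == 1, z_plus, z_minus)
    return np.sign(p_of(h) - z)

def glauber(sigma, h, z):
    # Glauber: completely anticorrelated random numbers (z+ + z- = 1)
    return np.where(sigma == 1,
                    np.sign(p_of(h) - z),
                    -np.sign(1 - p_of(h) - z))

# empirical check: all three rules yield sigma' = +1 with probability p_i
h, n = 0.7, 200_000
sigma = rng.choice([-1, 1], size=n)
z1, z2 = rng.random(n), rng.random(n)
for rule_freq in [(heat_bath(sigma, h, z1) == 1).mean(),
                  (uncorrelated_hb(sigma, h, z1, z2) == 1).mean(),
                  (glauber(sigma, h, z1) == 1).mean()]:
    assert abs(rule_freq - p_of(h)) < 0.01
```

The one-point statistics agree; only joint (two-replica) statistics, governed by the two-point correlations of Table I, distinguish the rules.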
\begin{table}
\small
\begin{tabular}{|c||c|c|c|c|c|}
correlation & Glauber & usual & uncorr. & dynamics & dynamics \\
function & dyn. & HB & HB & of eq.(11) & of eq.(18)\\
\hline \hline
$\langle r_{---}\,r_{--+} \rangle $&
$1-\kappa$ & $1-\kappa$ & $1-\kappa$ & $\kappa-1$ & $\lambda(1-\kappa)$
\\
$\langle r_{---}\,r_{-+-} \rangle $&
$2 \kappa -1$ & $1$ & $\kappa^2$ & $1$ & $2 \kappa -1$ \\
$\langle r_{---}\,r_{-++} \rangle $&
$\kappa-1$ & $1-\kappa$ & $0$ & $\kappa-1$ & $\lambda(\kappa-1)$ \\
$\langle r_{---}\,r_{+-+} \rangle $&
$1-2\kappa$ & $1-2\kappa$ & $1-2\kappa$ & $1-2\kappa$ & $1-2\kappa$\\
$\langle r_{---}\,r_{+++} \rangle $&
$-1$ & $1-2\kappa$ & $-\kappa^2$ & $1-2\kappa$ & $-1$ \\
$\langle r_{--+}\,r_{-+-} \rangle $&
$\kappa-1$ & $1-\kappa$ & $0$ & $\kappa-1$ & $\lambda(\kappa-1)$ \\
$\langle r_{--+}\,r_{-++} \rangle $&
$-1$ & $1$ & $0$ & $1$ & $-1$ \\
$\langle r_{--+}\,r_{+--} \rangle $&
$1$ & $1$ & $1$ & $1$ & $1$\\
$\langle r_{--+}\,r_{+-+} \rangle $&
$1-\kappa$ & $1-\kappa$ & $1-\kappa$ & $\kappa-1$ & $\lambda(1-\kappa)$
\\
$\langle r_{--+}\,r_{++-} \rangle $&
$-1$ & $1$ & $0$ & $1$ & $-1$ \\
$\langle r_{-+-}\,r_{+-+} \rangle $&
$-1$ & $1-2\kappa$ & $-\kappa^2$ & $1-2\kappa$ & $-1$ \\
\end{tabular}
\caption{Two-point correlations in the one-dimensional Ising model
for various dynamic rules.
We used the notation $\kappa = \tanh \frac{2J}{k_BT}$.}
\label{TableGlauberHB}
\end{table}
The difference between these dynamics may become
evident only when we observe the evolution of two replicas, i.e.,
study damage spreading!
Indeed Stanley et al \cite{Stanley} and also Mariz et al \cite{Mariz}
found, using Glauber dynamics, that damage spreads
for the 2-$d$ Ising model for $T>T_c$; similarly
for Metropolis dynamics \cite{Mariz}. More recently Grassberger
\cite{GrassJPA} claimed that the DS transition occurs
slightly below $T_c$ for Glauber dynamics, a result
also observed in the corresponding
mean-field theory \cite{Vojta}.
On the other hand, damage does not
spread at any temperature with standard HB
for either the 2-$d$ \cite{Mariz} or the
3-$d$ Ising model \cite{DerWeis}.
The 3-$d$ model did exhibit DS for
$T>T^*$ with $T^*<T_c$ when Metropolis \cite{Costa} and Glauber
\cite{GrassJPA,LaCaer} dynamics were used.
In the 1-$d$ Ising model with HB, Glauber or Metropolis dynamics
no damage spreading has been observed.
\section{General class of dynamic procedures for the Ising model}
The dynamic rules considered here for the $1$-$d$ Ising model consist
of local updates, where a random variable $r=\pm1$ is
assigned to the spin $\sigma_i$:
\begin{equation}
\sigma_i(t+1) := r_{\sigma_{i-1}(t),\sigma_{i}(t),\sigma_{i+1}(t)}\,.
\end{equation}
This random variable is generated in some probabilistic procedure
using one or several random numbers. As in the conventional algorithms
discussed above, we allow
the random variable to depend only on the values taken at time $t$ by
the updated spin itself and the spins with which it interacts
(i.e., its nearest
neighbors). The set of all one-point
functions $\langle r_{\sigma_{i-1},\sigma_{i},\sigma_{i+1}} \rangle$
determines the transfer matrix of a {\it single} system.
Here $\langle \ldots \rangle$ denotes the average over many
independent realizations of random numbers.
The simultaneous evolution (and, hence, DS) of {\it two replicas}
$\{ \sigma \}$ and $\{ \sigma' \}$ is, however, governed by a joint
transfer matrix of the two systems which, in turn,
is completely determined by the two-point functions
$\langle r_{\sigma_{i-1},\sigma_{i},\sigma_{i+1}}
r_{\sigma'_{i-1},\sigma'_{i},\sigma'_{i+1}} \rangle$.
In general, $n$-point functions determine the joint
transfer matrix of $n$ replicas. An important requirement is that
all correlation functions have to
be invariant under the symmetries of the model~\cite{PreviousPaper}.
For a homogeneous Ising chain
in zero field these symmetries are invariance under reflection
\begin{eqnarray}
\langle r_{\sigma_{i-1},\sigma_i,\sigma_{i+1}} \rangle &=&
\langle r_{\sigma_{i+1},\sigma_i,\sigma_{i-1}} \rangle \,,\\
\langle r_{\sigma_{i-1},\sigma_i,\sigma_{i+1}}
r^\prime_{\sigma^\prime_{i-1},\sigma^\prime_i,
\sigma^\prime_{i+1}} \rangle &=&
\langle r_{\sigma_{i+1},\sigma_i,\sigma_{i-1}}
r^\prime_{\sigma^\prime_{i+1},\sigma^\prime_i,
\sigma^\prime_{i-1}} \rangle \nonumber \,,
\end{eqnarray}
and global inversion of all spins ($Z_2$ symmetry):
\begin{eqnarray}
&&\langle r_{\sigma_{i-1},\sigma_i,\sigma_{i+1}} \rangle =
-\langle r_{-\sigma_{i-1},-\sigma_i,-\sigma_{i+1}}
\rangle \nonumber \,, \\
&&\langle r_{\sigma_{i-1},\sigma_i,\sigma_{i+1}}
r^\prime_{\sigma^\prime_{i-1},\sigma^\prime_i,
\sigma^\prime_{i+1}} \rangle =\\
&& \hspace{1cm} \langle r_{-\sigma_{i-1},-\sigma_i,
-\sigma_{i+1}} r^\prime_{-\sigma^\prime_{i-1},
-\sigma^\prime_i,-\sigma^\prime_{i+1}} \rangle \nonumber \,.
\end{eqnarray}
For both HB and for Glauber dynamics the one-point functions are given by
\begin{equation}
\label{OnePointCorrelations}
\langle r_{\sigma_{i-1},\sigma_{i},\sigma_{i+1}} \rangle =
2 p_i-1.
\end{equation}
The corresponding transfer matrices for single systems
are, hence, identical.
On the other hand the two-point functions for HB and Glauber
dynamics are different so that damage
evolves differently (see Table I).
Still, damage does not spread in $1$-$d$ for any
of these algorithms at any temperature.
\section{Dynamic rule for which damage does spread in 1-d}
Consider the following dynamics for the $1$-$d$ Ising model:
\begin{equation}
\label{NeighborDynamics}
r_{\sigma_{i-1},\sigma_i,\sigma_{i+1}} =
\left\{
\begin{array}{ll}
+{\rm sign}(p_i-z) & \mbox{if} \ \sigma_{i-1} = \sigma_{i+1} \\
-{\rm sign}(1-p_i-z) & \mbox{if} \ \sigma_{i-1} \neq \sigma_{i+1}
\end{array}
\right.
\end{equation}
As can be checked easily, this dynamical rule yields the same one-point
correlations as in eq. (\ref{OnePointCorrelations}). Therefore,
the evolution of a single replica using this rule cannot be
distinguished from that of Glauber or HB dynamics. However, the
two-point correlations (and therewith damage spreading properties)
are different (see Table I).
Unlike Glauber and HB, this dynamics does exhibit
a damage spreading transition in $1$-$d$. This can be seen as follows.
At $T=\infty$ eq. (\ref{NeighborDynamics}) reduces to
\begin{equation}
r_{\sigma_{i-1},\sigma_i,\sigma_{i+1}} =
\sigma_{i-1}\sigma_{i+1}\,{\rm sign}(\frac12-z)\,,
\end{equation}
which implies
that the local damage
$\Delta_i(t)=1-\delta_{\sigma_i(t),\sigma^\prime_i(t)}$
evolves deterministically:
\begin{equation}
\label{InfiniteTemperature}
\Delta_i(t+1) =
\left\{
\begin{array}{ll}
0 & \mbox{if} \ \Delta_{i-1}(t) = \Delta_{i+1}(t) \\
1 & \mbox{if} \ \Delta_{i-1}(t) \neq \Delta_{i+1}(t)
\end{array}
\right.
\end{equation}
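Eq. (\ref{InfiniteTemperature}) is simply the XOR of the two neighboring damage variables (elementary cellular automaton rule 90), under which a single damaged site spreads ballistically. A brief sketch (parallel updates and periodic boundaries are assumed here for illustration):

```python
import numpy as np

# Deterministic T = infinity damage dynamics of eq. (14):
# Delta_i(t+1) = Delta_{i-1}(t) XOR Delta_{i+1}(t).
def step(damage):
    # XOR of left and right neighbors, periodic boundaries
    return np.roll(damage, 1) ^ np.roll(damage, -1)

L, T = 201, 50                 # chain length and number of time steps
damage = np.zeros(L, dtype=int)
damage[L // 2] = 1             # single damaged site at the center
widths = []
for t in range(T):
    damage = step(damage)
    active = np.nonzero(damage)[0]
    widths.append(int(active.max() - active.min()))
# the damage cloud spreads ballistically: after t steps it spans 2*t sites
```

The damaged region grows linearly in time (the familiar Sierpinski pattern of rule 90), confirming that damage spreads at $T=\infty$.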
Since this is exactly the update rule of a
Domany-Kinzel model \cite{DomanyKinzel}
in the active phase (with $p_1=1$ and $p_2=0$), we conclude that
for $T=\infty$ damage spreads. On the other hand, for $T=0$
eq. (\ref{NeighborDynamics}) reduces to
\begin{equation}
\label{ZeroTemperature}
r_{\sigma_{i-1},\sigma_i,\sigma_{i+1}} =
\left\{
\begin{array}{ll}
\sigma_{i-1} & \mbox{if} \ \sigma_{i-1}=\sigma_{i+1} \\
{\rm sign}(z-\frac12) & \mbox{if} \ \sigma_{i-1} \neq \sigma_{i+1}
\end{array}
\right. \, .
\end{equation}
In this case damage evolves probabilistically and cannot be viewed as an
independent process. One can, however, show that the expectation value
to get damage at site $i$, averaged over many realizations of random numbers
satisfies the inequality
$\langle \Delta_i(t+1)\rangle \leq \frac12
\langle \Delta_{i-1}(t)+\Delta_{i+1}(t) \rangle$, that is
$\langle \Delta(t+1) \rangle \leq \langle \Delta(t) \rangle$. This
means that for $T=0$ damage does not spread. In fact, simulating the
spreading process one observes a DS transition at finite
temperature.
A typical temporal evolution near the transition is
shown in Fig. \ref{FigureOne}.
In order to determine the critical exponents
that characterize the DS transition, we perform dynamic
Monte-Carlo simulations \cite{DynamicMC}. Two replicas
are started from
\begin{figure}
\epsfxsize=85mm
\epsffile[60 600 540 770]{fig1.ps}
\caption{ Temporal evolution of damage in the 1-$d$ Ising model of size $200$
with the dynamics of eq.~(11) near the DS transition $J/k_BT^*$=0.2305.
Each configuration is represented by a row of pixels and time goes downwards.
The two replicas are started from identical initial conditions. At an
early time, a damage of 5 sites is inserted in the center. }
\label{FigureOne}
\end{figure}
\noindent
\begin{figure}
\epsfxsize=90mm
\epsffile[70 430 520 770]{fig2.ps}
\caption{Numerical results for the 1-$d$ Ising model
with {\bf A:}~the dynamics of eq.~(11) and {\bf B:}
the dynamics of eq.~(18). The measured quantities
are explained in the text.}
\label{FigureTwo}
\end{figure}
\noindent
identical random initial conditions, where one
damaged site is inserted at the center. Both replicas then evolve
according to the dynamic rules of the system using the same set of
random numbers. In order to minimize finite-size
effects, we simulate a large system of $5000$ sites with periodic boundary
conditions. For various temperatures we perform $10^6$ independent runs
up to $1500$ time steps. However, in many runs damage heals very soon
so that the run can be stopped earlier. As usual in this type of
simulation, we measure the survival probability $P(t)$, the number
of damaged sites $\Delta(t)$, and the mean-square spreading of damage from
the center $R^2(t)$ averaged over the active runs. At the DS transition,
these quantities are expected to scale algebraically in the large time
limit:
\begin{equation}
\label{ScalingQuantities}
P(t) \sim t^{-\delta} \,, \hspace{10mm}
\Delta(t) \sim t^\eta \,, \hspace{10mm}
R^2(t) \sim t^z\,.
\end{equation}
The critical exponents $\delta,\eta,z$
are related to the density exponent $\beta$
and the scaling exponents $\nu_\perp$, $\nu_{||}$ by
$\delta = \beta/\nu_{||}$, $z = 2 \nu_\perp /\nu_{||}$
and obey the hyperscaling relation $4\delta+2\eta = dz$.
At criticality, the quantities in (\ref{ScalingQuantities})
appear as straight lines in double-logarithmic plots.
Off criticality, these lines are curved. Using
this criterion we estimate the critical temperature for the
DS transition by $J/k_BT^*=0.2305(5)$. The exponents $\delta$, $\eta$,
and $z$ are measured at criticality while the density exponent $\beta$
is determined off criticality by measuring the stationary Hamming distance
$\Delta(T) \sim (T-T^*)^\beta$ in the spreading phase.
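The slope extraction behind such double-logarithmic plots can be sketched as follows (a minimal illustration on synthetic power-law data; the assumed value of $\delta$ is the DP value quoted below, used here only to generate test data):

```python
import numpy as np

# Synthetic critical survival probability P(t) ~ t^(-delta).
delta_true = 0.15947
t = np.logspace(1, 3, 50)          # times from 10 to 1000
P = t ** (-delta_true)

# At criticality log P is linear in log t; the exponent is minus the slope.
slope = np.polyfit(np.log(t), np.log(P), 1)[0]
delta_est = -slope
```

Off criticality the log-log data curve, and the fitted slope drifts with the fitting window, which is the criterion used in the text to locate $T^*$.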
The results of our simulations are
shown in Fig. \ref{FigureTwo}. From the slopes in the double
logarithmic plots we obtain the estimates
$\delta=0.165(5)$, $\eta=0.315(10)$, $z=1.29(3)$,
and $\beta=0.26(2)$ which are in fair agreement with the
known~\cite{JensenPRL96} exponents for directed percolation
$\delta=0.15947(3)$, $\eta=0.31368(4)$, $z=1.26523(4)$, and $\beta=0.27649(4)$.
We therefore conclude that in agreement with
Grassberger's conjecture \cite{GrJSP79}, the DS transition belongs to
the DP universality class.
This is very plausible; as far as the damage variable is concerned
there is a {\it single absorbing state} (of no damage at all) and the
transition is from a phase in which the system ends up in this state
to one in which it does not, just as is the case for DP.
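The quoted DP exponents can also be checked against the hyperscaling relation $4\delta+2\eta = dz$ stated above; a quick numerical verification in $d=1$:

```python
# Directed-percolation exponents in d = 1 (Jensen 1996), as quoted above.
delta, eta, z, d = 0.15947, 0.31368, 1.26523, 1

lhs = 4 * delta + 2 * eta   # hyperscaling, left-hand side
rhs = d * z                 # hyperscaling, right-hand side
# The relation holds within the quoted numerical accuracy.
assert abs(lhs - rhs) < 1e-4
```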
\section{Damage spreading transition with non-DP exponents}
Different critical properties are expected
\cite{GrassbergerAB,Menyhard,MonDim,BARW,TwoAbsStates,Cardy}
for rules with two
distinct absorbing states (of the damage variables!) related by symmetry.
It is important to note that the $Z_2$ symmetry of the Ising system
does not suffice: inverting all spins in {\it both} replicas does not
change the damage variable
(the Hamming distance between the two configurations). Therefore,
we are looking for dynamic rules which (a) have two types of
absorbing states, one with no damage and the
other with full damage, and which (b) assign the two completely
symmetric roles. One can see that both (a) and (b)
hold for rules that satisfy the condition
\begin{equation}
\label{Z2Symmetry}
r_{\sigma_{i-1},\sigma_i,\sigma_{i+1}} =
-r_{-\sigma_{i-1},-\sigma_i,-\sigma_{i+1}}\,.
\end{equation}
The immediate consequence of this condition is that if a configuration
$\{ \sigma (t) \}$ evolves in one time step into $\{ \sigma (t+1) \}$,
then the spin-reversed configuration $\{ -\sigma (t) \}$ will evolve into
precisely $\{ -\sigma (t+1) \}$. Imagine now simultaneous evolution of
two replicas with initial states $\{ \sigma \}$ and $\{ \sigma' \}$,
giving rise
to a damage field $\{ \Delta \}$. Reversal of the initial state on {\it one}
of the replicas will give sign-reversed spin states on this replica and hence
the damage field $\{ -\Delta \}$ will evolve.
Thus, for rules
that satisfy condition (\ref{Z2Symmetry}), the {\it damage variable} has a $Z_2$ symmetry.
A particular consequence of this symmetry is that if two initial states
are exact sign-reversals of one another, this relation will persist at all subsequent
times. Therefore, inasmuch as $\Delta =0$ (no damage) is an absorbing state,
so is the situation of full damage, $\Delta = 1$. For systems with such
$Z_2$ symmetry we expect the DS transition (if it exists) to exhibit non-DP
behavior.
\begin{figure}
\epsfxsize=85mm
\epsffile[60 580 530 770]{fig3.ps}
\caption{$Z_2$-symmetric damage spreading transition.
Two replicas with 200 sites are started from
identical random initial conditions. At an early time 5 damaged sites
are introduced in the center. For fixed
temperature $J/k_BT=0.25$ a typical temporal evolution of damage is shown for
(a) Glauber dynamics $\lambda=1$, (b) near the transition $\lambda^*=0.82$
and (c) in the spreading regime $\lambda=0$. Because of the symmetry,
islands of damaged sites can heal only at the edges.}
\label{FigureThree}
\end{figure}
\noindent
It is quite remarkable to note that Glauber dynamics satisfies
eq. (\ref{Z2Symmetry})! The $Z_2$-symmetry of damage
in the 1-$d$ Glauber model is illustrated in Fig. \ref{FigureThree}a.
One can see that compact islands of damaged sites are formed
because damage does not heal spontaneously inside such islands
but only at the edges. However, as mentioned earlier, there is no DS
transition in the 1-$d$ Glauber model.
Consider now a different dynamic rule:
\begin{equation}
\label{PCSpreadDynamics}
r_{\sigma_{i-1},\sigma_i,\sigma_{i+1}} =
\left\{
\begin{array}{ll}
+{\rm sign}(p_i-z) & \mbox{if} \ \sigma_{i-1}\sigma_{i}\sigma_{i+1} = 1 \\
-{\rm sign}(1-p_i-z) & \mbox{if} \ \sigma_{i-1}\sigma_{i}\sigma_{i+1} =-1
\end{array}
\right. \,.
\end{equation}
For this rule, which also
satisfies eq. (\ref{Z2Symmetry}),
we observe
in simulations that damage always spreads
(see Fig. \ref{FigureThree}c).
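That rule (\ref{PCSpreadDynamics}) satisfies condition (\ref{Z2Symmetry}) can be verified by direct enumeration. The sketch below assumes the heat-bath form $p_i = [1+\exp\{-2(J/k_BT)(\sigma_{i-1}+\sigma_{i+1})\}]^{-1}$ for the flip probability (defined earlier in the paper; the specific form is an assumption on our part), under which flipping both neighbors sends $p_i \to 1-p_i$:

```python
import math
from itertools import product

K = 0.25  # J / (k_B T); the value is only for illustration

def p_heat_bath(left, right):
    # assumed heat-bath probability for spin +1 given the two neighbors
    return 1.0 / (1.0 + math.exp(-2.0 * K * (left + right)))

def sign(x):
    return 1 if x > 0 else -1

def r_pc(left, center, right, z):
    """The rule of eq. (PCSpreadDynamics), branching on the triple product."""
    p = p_heat_bath(left, right)
    if left * center * right == 1:
        return sign(p - z)
    return -sign(1.0 - p - z)

# Condition (Z2Symmetry): r(s) == -r(-s) for every configuration and random z.
for left, center, right in product([-1, 1], repeat=3):
    for z in [0.05, 0.3, 0.6, 0.95]:
        assert r_pc(left, center, right, z) == -r_pc(-left, -center, -right, z)
```

The check works because flipping all three spins flips the triple product, switching the branch, while $p_i \to 1-p_i$ restores the same random-number threshold with opposite sign.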
In order to generate a $Z_2$-symmetric
DS transition in $1$-$d$, we use a rule that
interpolates between this and Glauber.
This can be done by introducing a second parameter
$0 \leq \lambda \leq 1$
and `switching' between Glauber dynamics and rule (\ref{PCSpreadDynamics})
as follows: in each update an additional random number $\tilde{z}$
is generated. If $\tilde{z} \geq \lambda$, rule (\ref{PCSpreadDynamics})
is applied, otherwise Glauber dynamics is used.
This mixed dynamics can be expressed as
\begin{equation}
\label{MixDynamics}
r_{\sigma_{i-1},\sigma_i,\sigma_{i+1}} =
\left\{
\begin{array}{ll}
+{\rm sign}(p_i-z) & \mbox{if} \ y = 1 \\
-{\rm sign}(1-p_i-z) & \mbox{if} \ y =-1 \\
\end{array}
\right. \,,
\end{equation}
where $y=\frac12 \sigma_i[(1+\sigma_{i-1}\sigma_{i+1})+
(1-\sigma_{i-1}\sigma_{i+1}) {\rm sign}(\lambda-\tilde{z})]$.
Again this rule leads to the one-point correlations of
eq.~(\ref{OnePointCorrelations}), i.e., the temporal evolution of a single
replica is the same as in Glauber and HB dynamics. However,
varying $\lambda$ (at fixed $T$) we find a critical value $\lambda^*$
where a DS transition occurs.
A typical temporal evolution of damage near the transition is
shown in Fig.~\ref{FigureThree}b.
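The switching variable $y$ indeed interpolates as stated: for $\tilde z < \lambda$ one has ${\rm sign}(\lambda-\tilde z)=+1$ and $y=\sigma_i$ (the Glauber branch), while for $\tilde z \geq \lambda$ one has $y=\sigma_{i-1}\sigma_i\sigma_{i+1}$, as in rule (\ref{PCSpreadDynamics}). A quick enumeration check (function name is ours):

```python
from itertools import product

def y_switch(left, center, right, s):
    """Switching variable y of eq. (MixDynamics);
    s = sign(lambda - z_tilde) is +1 or -1."""
    q = left * right
    return 0.5 * center * ((1 + q) + (1 - q) * s)

for left, center, right in product([-1, 1], repeat=3):
    # z_tilde < lambda: Glauber branch, y = sigma_i
    assert y_switch(left, center, right, +1) == center
    # z_tilde >= lambda: y = triple product, as in the PC-spreading rule
    assert y_switch(left, center, right, -1) == left * center * right
```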
Since `damage' and `no damage' play a symmetric role,
the Hamming distance $\Delta$ (the density of damaged sites) cannot
be used as an order parameter. Instead one has to use the
density of {\em kinks} $N$ (domain walls) between damaged and healed
domains. By definition, the number of kinks is conserved modulo two,
which establishes a parity conservation law. As can be seen in
Fig. \ref{FigureThree}, two processes compete with each other:
kinks annihilate mutually $(2X \rightarrow 0)$ and already existing
kinks branch into an odd number of kinks ($X \rightarrow 3X,
5X, \ldots$). Both processes resemble a branching annihilating
walk with an even number of offspring.
This branching process has a continuous
phase transition that belongs to the so-called
parity-conserving (PC) universality class. Phase transitions of this
type have been observed in a variety of models, including certain
probabilistic cellular automata \cite{GrassbergerAB}, nonequilibrium
kinetic Ising models with combined zero- and infinite-temperature
dynamics \cite{Menyhard}, interacting monomer-dimer models \cite{MonDim},
branching-annihilating random walks \cite{BARW} and certain lattice
models with two absorbing states \cite{TwoAbsStates}. In all these models
the symmetry appears either as a parity conservation law or as an explicit
$Z_2$-symmetry among different absorbing phases. A field theory
describing PC transitions is currently being developed in \cite{Cardy}.
The PC universality class is characterized by the exponents $\delta=0.285(5)$,
$\eta=0.00(1)$, $z=1.15(1)$, and $\beta=0.92(2)$. In fact, repeating
the numerical simulations described above for $J/k_BT=0.25$ and
$\lambda^*=0.82(1)$ (see Fig. \ref{FigureTwo}), we obtain the estimates
$\delta=0.295(10)$,
$\eta=0.01(2)$, $z=1.17(3)$, and $\beta=0.86(5)$, which are in
fair agreement with the known values.
We therefore conclude that the DS transition observed for the
dynamics of eq. (\ref{MixDynamics}) belongs to the PC universality class.
Furthermore, our findings imply that the DS transitions observed
\cite{GrassJPA} for
the 2-$d$ Ising model with Glauber dynamics should also exhibit PC
exponents (remember: $d=2$) in zero field, and {\it cross over} to
(2-$d$) DP values when a field is switched on.
\vspace{3mm}
We thank D. Stauffer for sharing with us his knowledge of the DS
literature and for encouragement. This work was supported by
The Minerva Foundation and by the Germany-Israel Science Foundation (GIF).
\section{Introduction}
\label{sec:intro}
Scientists routinely make causal generalizations in their research. This perplexing scientific and philosophical problem demands the resolution of many challenging issues \citep{Shadishbook} when the causal effect is possibly heterogeneous or depends on subject characteristics. Such generalizability \citep{Cole2010,tipton2013improving,buchanan2018generalizing} is also known as external validity \citep{Rothwell2005} or transportability \citep{pearl2014external,bareinboim2016causal,Rudolph2017} in the literature. In some recent papers \citep{dahabreh2020extending,degtiar2021review}, the terms ``generalizability'' and ``transportability'' bear different meanings. Our paper focuses on the setting where the source population is external to the target population.
For average treatment effect (ATE) estimation, properly planned and conducted randomized trials are internally valid but not necessarily generalizable in the presence of heterogeneous treatment effects. In other words, the unbiased estimate of the ATE from a randomized trial may not equal the ATE of a target population if trial participants do not adequately represent the target population with respect to the distribution of effect modifiers \citep{dahabreh2020extending}.
While exploring such heterogeneity is itself of great interest, this article focuses on generalizing aggregate causal quantities such as the ATE.
In the past decade, an active area of such causal generalization research is to bridge findings from a randomized trial to a target population \citep{Cole2010,tipton2013improving,Rudolph2017,buchanan2018generalizing,dahabreh2020extending}. Most of these methods rely on modeling the trial participation probability, which quantifies the similarity between trial participants and patients in the target population. The estimated probability is used in the subsequent analysis for reweighting \citep{Cole2010,buchanan2018generalizing} or post-stratification \citep{Cole2010}. Some existing methods also incorporate outcome modeling to improve estimation efficiency, such as the targeted maximum likelihood estimators \citep{Rudolph2017} and augmented inverse probability weighted estimators \citep{dahabreh2020extending,yang2020doubly}.
Although these approaches effectively adjust for the compositional difference between the source participants and the target population and reduce estimation bias, an essential premise is overlap, which requires that every individual in the target population have matched source participants with similar characteristics. When there is insufficient overlap, reweighting-based adjustment usually introduces large variability into the estimation results, and outcome modeling approaches rely heavily on extrapolation and are thus also unstable.
We consider this same research problem of causal generalization from a source population to a target population, although we assume that treatments may have been given to the source population in a possibly non-randomized fashion as in typical observational studies. The corresponding outcomes and subject characteristics or covariates have been observed in the source population. On the other hand, only subject characteristics have been collected in the target population. Our goal is also to characterize the overlap between the two populations on the basis of observed characteristics so that the generalization is most stable. \citet{stuart2011use} proposed to assess the similarity between a cohort of trial participants and a target population using the difference in the mean participation probability; \citet{tipton2014generalizable} adopted a similar strategy, but replaced the difference in mean with a distributional difference. However, these works did not quantify how the proposed metrics relate to the causal effect estimation, or provide remedies to cope with insufficient overlap. Further, their methods are limited to the scenarios where the source population data is from randomized trials.
The issue of limited overlap has been studied by many researchers in the context of observational studies for a single population, where estimation relies on sufficient overlap between the treatment arms. The ``overlap'' is then defined in terms of propensity score or the probability of treatment assignment. A common approach is to restrict the population of interest to a subset that has sufficient overlap. \citet{dehejia1999causal} and \citet{lopez2017estimation} discarded individuals with very small or large propensity scores. \citet{crump2009dealing} justified this approach from a semiparametric efficiency perspective and suggested rule-of-thumb propensity score thresholds of 0.1 and 0.9 to trim the population. \citet{crump2009dealing}'s approach was then extended to multiple treatment cases \citep{yang2016propensity}. Another popular approach is to find a weighted ATE that is least affected by limited overlap \citep{li2018balancing}.
All the above methods dealing with limited overlap boil down to defining a new estimand for aggregate causal effect estimation, either by subsetting or reweighting the study population, in a data-dependent manner. An immediate question is the interpretation of the resulting estimand. This prompted \cite{Rosenbaum.jcgs.2011} to introduce the concept of marginal subjects as those having some probability of receiving the treatment (i.e., with sufficient overlap). The estimand is then the aggregate causal effect for the subpopulation of marginal subjects. To enhance the interpretability of such a subpopulation, \cite{traskin2011defining} developed a tree approach.
This paper focuses on generalizing causal estimands from a source population to a target population with potentially limited covariate overlap. We address limited overlap in terms of participation probability and propensity score simultaneously by characterizing their impact on estimation precision based on the semiparametric efficiency framework \citep{tsiatis2007semiparametric}. A key quantity, which will be referred to as the \textit{generalizability score}, is then introduced as a yardstick to evaluate and select subpopulations of the target population for causal generalization. Selection based on the score results in optimal efficiency of causal generalization among all subsets that cover the same proportion of the target population. A plot of the generalizability score also enables evaluating the sensitivity of the estimation efficiency to different proportions of the target population and therefore facilitates practical choices for generalization. A simplified version of the score avoids using any outcome information from the source population, and thus can prevent introducing deliberate biases associated with inadvertent access to such information \citep{crump2006moving,Rubinsim2007,rubin2008}. Both simulation studies and real data analysis demonstrate convincing results for such selection. Because our selection of the subset can be done without accessing outcome data, the logic of existing approaches to deal with the definition of the resulting estimands \citep{crump2009dealing,traskin2011defining,Rosenbaum.jcgs.2011} is applicable to our paper.
In Section \ref{sec:prelim}, we introduce the framework and underlying assumptions, followed by a more detailed discussion on the impact of limited overlap. In Section \ref{sec:method}, we present the major methodological results, where we derive a semiparametric efficiency bound for the estimation task. The efficiency bound naturally gives rise to the notion of the generalizability score. We illustrate the proposed approach through simulation studies and a real data example in Section \ref{sec:simu} and \ref{sec:data}. We conclude the paper with a discussion in Section \ref{sec:disc}.
\section{Preliminaries}
\label{sec:prelim}
\subsection{Notations and assumptions}
\label{sec:framework}
Suppose we observe two samples from two distinct populations $\mathcal{S}$ (source) and $\mathcal{T}$ (target). The source sample is from a randomized trial or an observational study and for subject $i$ we observe covariate $X_i \in \mathcal{X}$, treatment assignment $A_i\in\{0, 1\}$, and a corresponding outcome $Y_i$. For individuals in the target sample, we only have their covariate information $X_i$. Let $S_i$ be the population indicator such that $S_i=1$ for $i\in {\cal S}$ and $S_i=0$ for $i\in {\cal T}$. Therefore our observed data consist of $\{ (X_i, A_i, Y_i, S_i)\!:\, S_i=1 \}$ and $\{ (X_i, S_i)\!:\, S_i=0 \}$. The total sample size is $n$.
We use the potential outcome framework \citep{rosenbaum1983central,imbens_rubin_2015} to formulate the causal problem. Under the Stable Unit Treatment Value Assumption (SUTVA), which posits no interference between different individuals and no hidden variation of treatments, each individual $i$ has two potential outcomes $Y_i(0)$ and $Y_i(1)$, the values of the outcome that would be observed if $i$ were to receive control or treatment, respectively. Then the observed outcome in the source sample is $Y_i = Y_i(A_i)$. We associate each observation with a ``full'' random variate $(X_i , S_i , A_i , Y_i(0), Y_i(1))$, which across $i$ are assumed to be i.i.d. draws from a joint distribution of $(X, S, A, Y(0), Y(1))$. All the probabilities and expected values below are taken with respect to this distribution.
We assume that the treatment assignment mechanism in the source sample is determined by a propensity score $\pi(x) = \prob (A = 1 \mid X = x, S=1)$ \citep{rosenbaum1983central}. If the source sample is from a randomized trial, then $\pi(x)$ is known; in general, it needs to be estimated. We further denote $\rho(x) = \prob(S=1 \mid X=x)$ and refer to this as the participation probability \citep{dahabreh2020extending}.
To identify the causal effect for the target population, we will work with the following four assumptions in addition to the SUTVA. The first two, adopted from \citet{rosenbaum1983central}, are for identification of aggregate causal effects on the source population. Assumptions 3 and 4, adopted from \citet{Rudolph2017} and \citet{Dahabreh2019}, link the source population $\mathcal{S}$ to the target population $\mathcal{T}$ and enable us to generalize the aggregate causal effects.
\begin{assumption}[Unconfoundedness of treatment assignment]
\label{assp:unconf}
In the source population, $(Y(0), Y(1))$ are conditionally independent of $A$ given $X$:
$(Y(0), Y(1)) \independent A \mid X, S=1$.
\end{assumption}
\begin{assumption}[Overlap of propensity score]
\label{assp:ovlptrt}
The propensity score of the source population is bounded away from 0 and 1: for some $c > 0$, $c \le \pi(X) \le 1 - c$ almost surely.
\end{assumption}
\begin{assumption}[Causal exchangability between populations]
\label{assp:exchange}
The source population and the target population have the same conditional average treatment effect (CATE): $\E \{Y(1) - Y(0) \mid X, S=0\} = \E \{Y(1) - Y(0) \mid X, S=1\}$ almost surely.
\end{assumption}
\begin{assumption}[Overlap of participation probability]
\label{assp:ovlppop}
Conditional on the covariates, the participation probability is bounded away from 0: $\rho(X) > c$ almost surely for some $c > 0$.
\end{assumption}
Assumptions \ref{assp:ovlptrt} and \ref{assp:ovlppop} require the propensity score to be bounded away from 0 and 1 and the participation probability to be bounded away from 0. This is to ensure overlap in the covariate distributions. However, if either of them only holds for a very small value of $c$, the overlap might not be sufficient to guarantee stable estimation. We will elucidate this issue in the next section, and the focus of this paper is to deal with it. Note that the propensity score and the participation probability could depend on different sets of covariates. In particular, the propensity score only depends on the confounders, and the participation probability only depends on the effect modifiers \citep{stuart2011use}. However, making such a distinction would require strong prior knowledge about the mechanisms of treatment assignment and study participation \citep{vanderweele2012confounding}. Therefore, for conciseness of presentation, we use a unified symbol $X$ to denote all the covariates, not dividing them into different sets.
We denote the conditional mean and variance of the potential outcomes in the source population by
$\mu_{a, \mathcal{S}}(x) = \E\{Y(a)\mid X=x, S=1\},
\sigma^2_{a, \mathcal{S}}(x) = \var\{ Y(a)\mid X=x, S=1\}.$
One can also define $\mu_{a, \mathcal{T}}(x)$ and $\sigma^2_{a, \mathcal{T}}(x)$ similarly. The CATE for the source population is $\theta_\mathcal{S}(x) =\mu_{1, \mathcal{S}}(x) - \mu_{0, \mathcal{S}}(x)$ and for the target population is $\theta_\mathcal{T}(x) =\mu_{1, \mathcal{T}}(x) - \mu_{0, \mathcal{T}}(x)$. Assumption \ref{assp:exchange} states that there is a common CATE $\theta(x)$:
\begin{equation*}
\theta(x) =\theta_\mathcal{S}(x) =\theta_\mathcal{T}(x).
\end{equation*}
The ATEs for the source and target populations are $$\theta_\mathcal{S} = \E\big\{\theta(X) \mid S = 1 \big\}, \quad \theta_\mathcal{T} = \E\{ \theta(X) \mid S = 0\}.$$
Proof of identifiability for our estimand $\theta_{\cal T}$ under the aforementioned assumptions is provided in the Supplementary Materials.
Note that even though $\theta_\mathcal{S}(x) =\theta_\mathcal{T}(x)$ under Assumption \ref{assp:exchange}, we generally have $\theta_\mathcal{S} \not=\theta_\mathcal{T}$ as the distributions of $X$ may differ between $\mathcal{S}$ and $\mathcal{T}$ unless $\rho(x)$ is a constant.
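A toy calculation makes this point concrete (all numbers below are ours, purely for illustration): with a binary covariate, a common CATE, and a covariate-dependent participation probability, the two ATEs differ.

```python
# Binary covariate X with P(X=0) = P(X=1) = 0.5, participation
# probabilities rho(0) = 0.8, rho(1) = 0.2, and common CATE
# theta(0) = 1, theta(1) = 3 (illustrative values only).
pX = {0: 0.5, 1: 0.5}
rho = {0: 0.8, 1: 0.2}
theta = {0: 1.0, 1: 3.0}

pS1 = sum(pX[x] * rho[x] for x in pX)                       # P(S = 1) = 0.5
theta_S = sum(pX[x] * rho[x] * theta[x] for x in pX) / pS1  # source ATE
theta_T = sum(pX[x] * (1 - rho[x]) * theta[x] for x in pX) / (1 - pS1)  # target ATE
# theta_S is about 1.4 while theta_T is about 2.6, despite the identical CATE:
# the source over-represents the X = 0 stratum, the target the X = 1 stratum.
```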
\subsection{Impact of limited overlap}
\label{sec:impact_ovlp}
We first recap the impact of limited overlap on estimating $\theta_\mathcal{S}$ in observational studies. Under the SUTVA and Assumptions \ref{assp:unconf} and \ref{assp:ovlptrt}, many estimators have been developed to consistently estimate $\theta_\mathcal{S}$ from the observed data, such as the inverse probability weighted (IPW) estimators \citep{lunceford2004stratification} or outcome regression estimators \citep{hahn1998role}. The overlap between the treatment arms plays a crucial role in the stable estimation of $\theta_\mathcal{S}$ as it ensures that there are comparable subjects in different treatment arms. For example, the IPW estimators proceed by reweighting each observation with the inverse of the probability of getting the assigned treatment, so they might put extremely large weights on a few observations in the presence of limited overlap as the propensity score approaches 0 or 1 for some covariate values. Consequently, the IPW estimators will be unduly influenced by these extreme weights. Similarly, the outcome regression estimators will also suffer from high instability in such scenarios because they essentially rely on imputing the potential outcomes, which might be unreliable in regions with limited overlap. A common approach to alleviate this issue is to restrict attention to areas of data with sufficient overlap \citep{dehejia1999causal,crump2009dealing}.
The problem of insufficient overlap is exacerbated for the estimation of $\theta_\mathcal{T}$. In order to generalize the source sample information to estimate $\theta_\mathcal{T}$, we need to address both the covariate overlap between the treatment arms in the source population, and the covariate overlap between the two populations. That is, stable estimation of $\theta_\mathcal{T}$ relies on finding similar source samples (from both arms) for each individual in the target sample.
To see this, let us consider a H\'ajek-type IPW estimator, which is a natural reweighting method to adjust for compositional difference \citep{Cole2010,dahabreh2020extending}:
\begin{equation}
\label{eq:Hajek}
\hat{\theta}_{\mathcal{T}}^{(IPW)} =
\frac{\sum_{i=1}^n \hat{w}_{i1} S_i A_i Y_i}
{\sum_{i=1}^n \hat{w}_{i1} S_i A_i}
-
\frac{\sum_{i=1}^n \hat{w}_{i0} S_i (1-A_i) Y_i}
{\sum_{i=1}^n \hat{w}_{i0} S_i (1-A_i)}
,\end{equation}
where
\begin{equation}
\label{eq:weights}
\hat{w}_{i1} = \frac{1-\hat{\rho}(X_i)}{\hat{\rho}(X_i)\hat{\pi}(X_i)}
\text{ and }
\hat{w}_{i0} = \frac{1-\hat{\rho}(X_i)}{ \hat{\rho}(X_i) \{1 - \hat{\pi}(X_i)\}}
,\end{equation}
and $\hat{\rho}(x)$, $\hat{\pi}(x)$ denote the estimated participation probability and propensity score models. We can see that extreme weights can result from small values of $\hat{\pi}(X_i)$, $1-\hat{\pi}(X_i)$ or $\hat{\rho}(X_i)$, which would occur when there is lack of overlap.
As a result, $\hat{\theta}_{\mathcal{T}}^{(IPW)}$ will be dominated by a few observations with large weights and thus is highly sensitive to random errors in the observed outcomes. Other estimators based on probability weights, such as matching and stratification \citep{tipton2013improving}, also face the same issue.
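A minimal sketch of the H\'ajek-type estimator \eqref{eq:Hajek}--\eqref{eq:weights} (function and argument names are ours; the fitted values $\hat\rho(X_i)$ and $\hat\pi(X_i)$ are assumed to be supplied as arrays):

```python
import numpy as np

def hajek_ipw(A, Y, S, rho_hat, pi_hat):
    """Hajek-type IPW estimator of the target-population ATE.
    A, Y, S are arrays over the pooled sample; rho_hat and pi_hat are
    fitted participation probabilities and propensity scores at X_i."""
    w1 = (1 - rho_hat) / (rho_hat * pi_hat)
    w0 = (1 - rho_hat) / (rho_hat * (1 - pi_hat))
    treated = S * A
    control = S * (1 - A)
    return (np.sum(w1 * treated * Y) / np.sum(w1 * treated)
            - np.sum(w0 * control * Y) / np.sum(w0 * control))
```

With constant $\hat\rho$ and $\hat\pi$ the weights cancel and the estimator reduces to a difference of source-sample arm means; when $\hat\rho(X_i)$, $\hat\pi(X_i)$, or $1-\hat\pi(X_i)$ approaches 0 for some units, individual weights blow up, which is exactly the instability discussed above.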
Although not explicitly involving probability weights, methods based on outcome regression (OR) \citep{dahabreh2020extending}:
\begin{equation}
\label{eq:outcomeRegression}
\hat{\theta}_{\mathcal{T}}^{(OR)} =
\frac{\sum_{i=1}^n (1 - S_i)\{\hat{\mu}_{1,\mathcal{S}}(X_i) - \hat{\mu}_{0,\mathcal{S}}(X_i)\}}
{\sum_{i=1}^n (1 - S_i)}
\end{equation}
are also vulnerable to the lack of overlap. Here $\hat{\mu}_{a, \mathcal{S}}(x), a\in\{0,1\}$ denote the outcome models fitted on the source sample. These methods essentially impute the potential outcomes for all the individuals in the target sample, thus requiring extrapolation on the covariate region with limited overlap. If $\hat{\mu}_{a, \mathcal{S}}(x)$'s are estimated with parametric models, such extrapolation is highly dependent on correct model specification, as we will further illustrate in Section \ref{sec:simu}. On the other hand, if $\hat{\mu}_{a, \mathcal{S}}(x)$'s are estimated with non-parametric models, such extrapolation is done based on a few observations in the source sample and thus is unreliable.
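A corresponding sketch of the outcome-regression estimator \eqref{eq:outcomeRegression} (names are ours; the fitted functions $\hat\mu_{a,\mathcal{S}}$ are assumed to have been evaluated at every $X_i$ beforehand):

```python
import numpy as np

def outcome_regression_ate(S, mu1_hat, mu0_hat):
    """Outcome-regression estimator of the target-population ATE:
    average the imputed effect mu1_hat - mu0_hat over target units."""
    target = (S == 0)
    return np.mean(mu1_hat[target] - mu0_hat[target])
```

The averaging is over target units only, so the estimate depends entirely on how well the source-fitted models extrapolate to the target covariate region, hence the sensitivity to limited overlap noted above.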
Therefore, \cite{dahabreh2020extending} duly recommended examining the distribution of the estimated participation probabilities, or equivalently $\hat{\rho}(x)$, when the source sample is from a randomized trial with $\pi(x)=\pi$. However, when the source sample is from an observational study, it is less clear how to summarize covariate overlap in terms of both $\rho(x)$ and $\pi(x)$. Furthermore, it is unknown how other aspects of the potential outcomes in the source population would affect the estimation of $\theta_\mathcal{T}$.
\section{Methodology}
\label{sec:method}
\subsection{Efficiency bound}
\label{sec:method-bound}
We consider restricting the estimation to a subpopulation of the target population. The subpopulation consists of individuals whose covariate values belong to a subset $B \subset \mathcal{X}$. Then we denote the ATE of the subpopulation as
\begin{equation}
\label{eq:theta_B_T}
\theta_{B, \mathcal{T}} = \E\{\theta(X)\mid X\in B, S = 0\}.
\end{equation}
In particular, $\theta_\mathcal{T} = \theta_{\mathcal{X}, \mathcal{T}}$. In what follows, we characterize the impact of overlap on the estimation precision of the aggregate causal effect $\theta_{B, \mathcal{T}}$ for any $B$. From our characterization, we propose a generalizability score to quantify the dissimilarity between an individual and the source population. For a fixed subset size, we can use the generalizability score to select subsets that are optimal for estimation efficiency.
To characterize the impact of overlap on the estimation of $\theta_{B, \mathcal{T}}$ without being tied to any specific estimator, our method is developed upon the semiparametric efficiency theory \citep{tsiatis2007semiparametric}. We focus on regular and asymptotically linear (RAL) estimators. The definition of RAL estimators can be found in \citet[Section 3.1]{tsiatis2007semiparametric}. Most commonly-used estimators are RAL, including all the estimators we will use in Sections \ref{sec:simu} and \ref{sec:data}. Let $\hat{\theta}_{B, \mathcal{T}}$ be an RAL estimator for $\theta_{B, \mathcal{T}}$, and we can write
\begin{equation}
\label{eq:decomp}
\hat{\theta}_{B, \mathcal{T}} - \theta_{B, \mathcal{T}} = (\hat{\theta}_{B, \mathcal{T}} - \tilde{\theta}_{B, \mathcal{T}}) + (\tilde{\theta}_{B, \mathcal{T}} - \theta_{B, \mathcal{T}}).
\end{equation}
Here $\tilde{\theta}_{B, \mathcal{T}}$ is the sample version of \eqref{eq:theta_B_T}:
\begin{equation*}
\tilde{\theta}_{B, \mathcal{T}} =
\frac
{\sum_{i=1}^{n} (1-S_i)\ind_{B}(X_i)\theta(X_i)}
{\sum_{i=1}^{n} (1-S_i)\ind_{B}(X_i) }
,\end{equation*}
where $\ind_{B}(X_i) = 1$ if $X_i \in B$ and 0 otherwise. The two terms on the right-hand side of \eqref{eq:decomp} are generally uncorrelated. Since the second term is the difference between a sample average of $\theta(X)$ and a population average, its uncertainty lies entirely in the sampling variability of $X$ and the treatment effect heterogeneity. Hence, the impact of insufficient overlap is fully captured by the first term. By focusing on the asymptotic variance of $\hat{\theta}_{B, \mathcal{T}} - \tilde{\theta}_{B, \mathcal{T}}$ instead of that of $\hat{\theta}_{B, \mathcal{T}} - \theta_{B, \mathcal{T}}$, we can better target our efforts on minimizing the impact of limited overlap. In contrast, taking the second term into consideration may result in a subset with little heterogeneity in order to minimize variance. Therefore, we state our theorem below on the variance bound of $\hat{\theta}_{B, \mathcal{T}} - \tilde{\theta}_{B, \mathcal{T}}$.
\begin{theorem}
\label{thm:eifB}
Suppose Assumptions \ref{assp:unconf}-\ref{assp:ovlppop} hold. Then for any RAL estimator $\hat{\theta}_{B, \mathcal{T}}$, the asymptotic variance of $\sqrt{n}(\hat{\theta}_{B, \mathcal{T}}-\tilde{\theta}_{B, \mathcal{T}})$ is at least
\begin{equation}
\label{eq:varB}
\begin{aligned}
V(B)
&=
\frac{1}{\left(\E [\{1-\rho(X)\}\ind_{B}(X)]\right)^2}
\E\left[
\frac{\{1-\rho(X)\}^2\ind_{B}(X)}{\rho(X)} \left\{
\frac{\sigma^2_{1, \mathcal{S}}(X)}{\pi(X)} + \frac{\sigma^2_{0, \mathcal{S}}(X)}{1-\pi(X)}
\right\}
\right]
.\end{aligned}
\end{equation}
\end{theorem}
\medskip
This asymptotic variance bound holds regardless of whether the propensity score $\pi(x)$ is known or not. The participation probability $\rho(x)$ is taken as unknown as this is typically the case in practice. The proof of Theorem \ref{thm:eifB} is relegated to the Supplementary Materials. There we show that this bound is achieved by the semiparametric efficient estimator for $\theta_{B, \mathcal{T}}$.
\subsection{Generalizability score and generalization subset selection}
\label{sec:method-subsample}
Based on the expression of $V(B)$, we introduce a {\em generalizability score}
\begin{equation}
\label{eq:kappa}
\kappa(x) = \frac{1-\rho(x)}{\rho(x)} \left\{
\frac{\sigma^2_{1, \mathcal{S}}(x)}{\pi(x)} + \frac{\sigma^2_{0, \mathcal{S}}(x)}{1-\pi(x)}
\right\}
\end{equation}
so that \eqref{eq:varB} can be expressed as
\begin{equation}
\label{eq:varB2}
V(B) = \frac{\E\left[ \kappa(X) \mid X \in B, S=0 \right]}{\prob(X \in B, S=0)}
.\end{equation}
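As a rough numerical sketch of \eqref{eq:kappa} and the plug-in form of \eqref{eq:varB2} (all function and variable names are hypothetical; a combined sample with source indicator $S$ is assumed, and $\sigma^2_{1,\mathcal{S}}(x)=\sigma^2_{0,\mathcal{S}}(x)=1$ by default):

```python
import numpy as np

def generalizability_score(rho, pi, sigma2_1=1.0, sigma2_0=1.0):
    """kappa(x): variance inflation from limited overlap, combining the
    participation probability rho(x) and the propensity score pi(x)."""
    rho, pi = np.asarray(rho, float), np.asarray(pi, float)
    return (1.0 - rho) / rho * (sigma2_1 / pi + sigma2_0 / (1.0 - pi))

def V_plugin(kappa, S, in_B):
    """Plug-in variance bound: mean of kappa over target units in B,
    divided by the empirical probability P(X in B, S = 0)."""
    kappa, S, in_B = (np.asarray(v) for v in (kappa, S, in_B))
    target_B = in_B.astype(bool) & (S == 0)
    return kappa[target_B].mean() / target_B.mean()

# kappa is smallest when overlap is good (rho away from 0, pi away from 0 and 1)
print(generalizability_score(0.5, 0.5))    # -> 4.0
print(generalizability_score(0.05, 0.9))   # far larger: poor overlap
```

The score grows without bound as $\rho(x)\to 0$ or $\pi(x)\to\{0,1\}$, matching the discussion of Figure \ref{fig:kappa} below.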
The following theorem explores the relationship between $\kappa(x)$ and the set function $V(\cdot)$.
\begin{theorem}
\label{thm:optB}
Suppose Assumptions \ref{assp:unconf}-\ref{assp:ovlppop} hold. For any $\gamma > 0$, define
\begin{equation}
\label{eq:B_gamma}
B_\gamma = \left\{
x\in \mathcal{X}
:
\kappa(x)
\le \gamma
\right\}
.\end{equation}
Then:
\begin{itemize}
\item[(a)] For any $B \subset \mathcal{X}$ that satisfies $\prob(X \in B \mid S=0) = \prob(X \in B_\gamma \mid S=0)$, we have
\begin{equation*}
V(B_\gamma) \le V(B)
.\end{equation*}
\item[(b)] The optimal subset that minimizes $V(B)$ is $B^* = B_{\gamma^*}$, where $\gamma^* > 0$ satisfies
\begin{equation}
\label{eq:gamma_star}
\gamma^* = 2 \E\left[ \kappa(X) \mid \kappa(X) \le \gamma^*, S=0 \right]
.\end{equation}
\end{itemize}
\end{theorem}
\medskip
The proof of Theorem \ref{thm:optB} is relegated to the Supplementary Materials. Part (a) of the theorem indicates that $B_\gamma$ achieves the optimal efficiency bound among all the subsets that cover the same proportion of the target population. Since $B_\gamma$ is defined through $\kappa(x)$, this result suggests that we can use $\kappa(x)$ as a yardstick to rank and select subjects in the target population for generalization. The function $\kappa(x)$ effectively combines the participation probability $\rho(x)$ and the propensity score $\pi(x)$, unifying the two aspects of overlap into one single numerical value.
Besides $\rho(x)$ and $\pi(x)$, $\kappa(x)$ also contains $\sigma^2_{1, \mathcal{S}}(x)$ and $\sigma^2_{0, \mathcal{S}}(x)$, which are the conditional variances of the potential outcomes in the source population. In theory, both $\sigma^2_{1, \mathcal{S}}(x)$ and $\sigma^2_{0, \mathcal{S}}(x)$ can be estimated from the source sample. However, in practice, conditional quantities are hard to estimate precisely. One can either use smoothing techniques to roughly estimate these quantities or assume homoscedasticity and set $\sigma^2_{1, \mathcal{S}}(x) = \sigma^2_{0, \mathcal{S}}(x) =\sigma^2$ for some $\sigma^2$ \citep{crump2009dealing,li2018balancing,kallus2020more}. The exact value of $\sigma^2$ is not relevant to the use of $\kappa(x)$ because $\sigma^2$ is a constant multiplier and does not affect the relative scale of $\kappa(x)$. Therefore, under the assumption of homoscedasticity, we can set $\sigma^2_{1, \mathcal{S}}(x) = \sigma^2_{0, \mathcal{S}}(x) =1$ in $\kappa(x)$. Then $\kappa(x)$ is completely determined by $\pi(x)$ and $\rho(x)$, and we do not need to use any observed outcome information in the source population when selecting the subpopulation for generalization. This eliminates the chance of introducing deliberate biases \citep{crump2006moving}. As highlighted in \citet{Rubinsim2007, rubin2008}, not peeking at the outcome information at the design phase is critical for assuring the objectivity of treatment effect estimation. For the rest of this article, we will set $\sigma^2_{1, \mathcal{S}}(x)= \sigma^2_{0, \mathcal{S}}(x) = 1$.
Figure \ref{fig:kappa} plots the generalizability score as a function of the participation probability $\rho(x)$ and the propensity score $\pi(x)$. Since $\kappa(x)$ changes rapidly near the margins of the plot region and has an unbounded range, we rescale it onto $[0,1]$ with a monotonic transformation, $f(\kappa) = \kappa / (16+\kappa)$, to help visualize it, where the choice of 16 is rather arbitrary. A high generalizability score is due to either a small $\rho(x)$, which suggests lack of overlap between the populations, or an extreme $\pi(x)$ close to 0 or 1, which indicates limited overlap between the treatment arms in the source population. So for individuals with a high generalizability score, it is harder to find close comparisons in the treated or control group in the source sample. That is why they are less suitable to be selected into the target subpopulation, as indicated by Theorem \ref{thm:optB}.
Choosing the size of $B_\gamma$, which is determined by the cut-off value $\gamma$, is also an important issue. On one hand, choosing a smaller cut-off will produce a subset with better overlap. On the other hand, a small cut-off will not only lead to a small target subsample, but may also have an adverse effect on the estimation precision, since individuals in the source sample are also excluded from estimation if they are outside the subset. Part (b) of Theorem \ref{thm:optB} provides a characterization of the optimal cut-off value that minimizes $V(B)$. The optimality condition \eqref{eq:gamma_star} is similar to a first-order optimality condition in common optimization problems. It does not guarantee a unique solution, though in our experience the solution is usually unique. However, it reveals an interesting characteristic of the optimal subset: the highest value of $\kappa(x)$ in the subset is no greater than twice its average, which suggests $\kappa(x)$ has a relatively even distribution on the subset. Therefore, any $\gamma$ that satisfies
\begin{equation}
\label{eq:gamma_star_hat}
\gamma \le 2 \E \{\kappa(X) \mid \kappa(X)\le\gamma, S=0\}
\end{equation}
is a reasonable choice in the sense that the corresponding $B_\gamma$ does not contain individuals with extremely high $\kappa(x)$ as compared to the average level within the subset. Since the estimand $\theta_{B_\gamma, \mathcal{T}}$ is generally closer to $\theta_\mathcal{T}$ with a higher $\gamma$, in practice we can choose the cut-off value ${\gamma}^*$ to be the largest $\gamma$ that satisfies \eqref{eq:gamma_star_hat}. This is similar to the suggestion in \citet{crump2009dealing}. In particular, when $\kappa(X_i) \le 2 {\E}\{\kappa(X) \mid S=0\}$ for all $i \in \mathcal{T}$, the optimal target subsample would be the whole target sample. This subpopulation selection method will be used in our simulation studies and data analysis.
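The rule of taking the largest $\gamma$ satisfying \eqref{eq:gamma_star_hat} can be implemented by scanning the sorted target-sample scores; a minimal sketch with hypothetical inputs:

```python
import numpy as np

def select_gamma(kappa_target):
    """Largest observed score gamma satisfying
    gamma <= 2 * mean(kappa | kappa <= gamma) over the target-sample scores."""
    ks = np.sort(np.asarray(kappa_target, float))
    running_mean = np.cumsum(ks) / np.arange(1, len(ks) + 1)
    ok = ks <= 2 * running_mean      # check the condition at each candidate
    return ks[ok].max()

scores = [1.0, 1.2, 1.5, 2.0, 9.0]
print(select_gamma(scores))   # -> 2.0: the extreme score 9.0 is trimmed
```

When no score exceeds twice the overall target-sample average, the rule keeps the whole target sample, consistent with the remark above.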
An alternative practical way to choose a desirable cut-off is to compute $V({B}_\gamma)$ for some prespecified $\gamma$ values, such as some quantiles of $\{\kappa(X_i): i \in \mathcal{T}\}$, and then choose the largest $\gamma$ value that produces an acceptable $V({B}_\gamma)$. This method is useful when one wishes to cover as much of the target population as possible when generalizing aggregate causal effects, without greatly inflating the variance. This idea will be illustrated in Section \ref{sec:data}.
\section{Simulation Study}
\label{sec:simu}
\subsection{Setup}
\label{sec:simu-setup}
We adopt a rejection sampling procedure to generate covariates in both the source and target samples. In particular, we generate $X_i = (X_{i1}, \dots, X_{i5})$ from a standard multivariate normal distribution $ N(\mathbf{0}_5, \mathbf{I}_5)$. Then we accept $X_i$ with probability $\tilde{\rho}(X_i)$ for the source sample (until the sample size reaches 600) and $1 - \tilde{\rho}(X_i)$ for the target sample (until the sample size reaches 800), where $\tilde{\rho}(X_i)$ is specified as one of the following four models
\begin{itemize}
\item[(P1)] $\tilde{\rho}(x) = \textnormal{logistic}(0.4 x_1 + 0.4 x_2 + 0.4 x_3)$,
\item[(P2)] $\tilde{\rho}(x) = \textnormal{logistic}(0.8 x_1 + 0.8 x_2 + 0.8 x_3)$,
\item[(P3)] $\tilde{\rho}(x) = \textnormal{logistic}(0.4 x_1 + 0.3 x_2^3 + 0.2 x_3^2)$,
\item[(P4)] $\tilde{\rho}(x) = \textnormal{logistic}(0.8 x_1 + 0.6 x_2^3 + 0.4 x_3^2)$,
\end{itemize}
where $\textnormal{logistic}(z) = 1 / (1 + e^{-z})$. Under this sampling mechanism we have the density of the source covariates $p_s(x) \propto p_{\textnormal{normal}}(x)\tilde{\rho}(x)$ and of the target covariates $p_t(x) \propto p_{\textnormal{normal}}(x)\{1-\tilde{\rho}(x)\}$, where $p_{\textnormal{normal}}(.)$ is the density of a standard normal distribution. Thus the true participation probability model is $\rho(x) = \textnormal{logistic}(c + \textnormal{logit}(\tilde{\rho}(x)))$ for some constant $c$, where $\textnormal{logit}(z) = \log(z/(1-z))$ is the inverse of the logistic function. Figure \ref{fig:simu_rho_density} plots the distributions of the participation probability for the source and target populations under each of these settings. As can be seen, in (P1) and (P3) the source and target samples have relatively good overlap, whereas in (P2) and (P4) they have relatively bad overlap.
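The rejection sampling step can be sketched as follows (model (P1) shown; the other models swap in a different $\tilde{\rho}$; the function and variable names are ours, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

def rho_tilde(x):                       # model (P1)
    return logistic(0.4 * (x[0] + x[1] + x[2]))

def rejection_sample(n, keep_prob):
    """Draw X ~ N(0, I_5) and accept with probability keep_prob(X)."""
    kept = []
    while len(kept) < n:
        x = rng.standard_normal(5)
        if rng.random() < keep_prob(x):
            kept.append(x)
    return np.array(kept)

source = rejection_sample(600, rho_tilde)                   # S = 1
target = rejection_sample(800, lambda x: 1 - rho_tilde(x))  # S = 0
```

Accepting with probability $\tilde{\rho}$ (resp. $1-\tilde{\rho}$) tilts the normal density toward the source (resp. target) covariate distribution, as stated above.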
In the source sample, we set $\pi(x) = \textnormal{logistic}(0.3 x_1 - 0.3 x_3)$ and simulate the treatment assignments by $A_i \sim \textnormal{Bernoulli}(\pi(X_i))$. The observed outcomes are generated as $Y_i = (1 - A_i) \mu_{0, \mathcal{S}}(X_i) + A_i \mu_{1, \mathcal{S}}(X_i) + N(0, 1)$. Both linear and nonlinear potential outcome models ($a=0, 1$) are considered:
\begin{itemize}
\item[(O1)] $\mu_{a, \mathcal{S}}(x) = x_1 + a(0.5x_1 + x_2 + 1)$,
\item[(O2)] $\mu_{a, \mathcal{S}}(x) = x_1 - x_4 + (a - 0.5) \{0.4 (x_1 - 0.5)^2 + 0.5 x_2^2\}$.
\end{itemize}
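A sketch of the treatment and outcome generation under model (O1), with a freshly drawn covariate matrix (names hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

def simulate_source(X):
    """A ~ Bernoulli(pi(X)); Y from the (O1) means plus N(0, 1) noise."""
    pi = logistic(0.3 * X[:, 0] - 0.3 * X[:, 2])
    A = rng.binomial(1, pi)
    mu0 = X[:, 0]                                   # mu_{0,S}(x) = x1
    mu1 = X[:, 0] + 0.5 * X[:, 0] + X[:, 1] + 1.0   # mu_{1,S}(x) = x1 + (0.5 x1 + x2 + 1)
    Y = (1 - A) * mu0 + A * mu1 + rng.standard_normal(len(X))
    return A, Y

X = rng.standard_normal((600, 5))
A, Y = simulate_source(X)
```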
To carry out the estimation of $\theta_{B, \mathcal{T}}$ we consider estimators of the following form, which can be found in \citet{dahabreh2020extending}:
\begin{equation}
\label{eq:estimatorform}
\begin{aligned}
\hat{\theta}_{B, \mathcal{T}}
&=
\frac{\sum_{i:X_i\in B} w_{i1} S_i A_i (Y_i - u_{i1})}
{\sum_{i:X_i\in B} w_{i1} S_i A_i }
-
\frac{\sum_{i:X_i\in B} w_{i0} S_i (1-A_i) (Y_i - u_{i0})}
{\sum_{i:X_i\in B} w_{i0} S_i (1-A_i)}
\\&\quad+
\frac{\sum_{i:X_i\in B} (1-S_i)(u_{i1} - u_{i0})}
{\sum_{i:X_i\in B} (1-S_i)}
.\end{aligned}
\end{equation}
When $w_{ia} = \hat{w}_{ia}$ as defined in \eqref{eq:weights} and $u_{ia} = 0$, $\hat{\theta}_{B, \mathcal{T}}$ corresponds to the IPW estimator \eqref{eq:Hajek}, but is restricted on the subset $B$. When $w_{ia} = 0$ (with the first two terms of \eqref{eq:estimatorform} set as 0) and $u_{ia} = \hat{\mu}_{a,\mathcal{S}}(X_i)$, it corresponds to the OR estimator \eqref{eq:outcomeRegression}. Moreover, when $w_{ia} = \hat{w}_{ia}$ and $u_{ia} = \hat{\mu}_{a,\mathcal{S}}(X_i)$, this gives rise to the augmented IPW (AIPW) estimator, which can yield consistent estimates under correct specification of either the propensity score and participation probability models or of the outcome regression models.
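A direct transcription of \eqref{eq:estimatorform} as a sketch (the weights $w_{ia}$ and fitted means $u_{ia}$ are taken as inputs; how they are estimated follows the text, and setting a term's weights to zero drops that term as described above):

```python
import numpy as np

def _ratio(num, den):
    # convention: a term with zero weight mass is dropped (set to 0)
    return num / den if den != 0 else 0.0

def theta_hat_B(Y, A, S, in_B, w1, w0, u1, u0):
    """Estimator of the form (eq:estimatorform), restricted to units with X in B.
    u = 0 recovers the IPW form, w = 0 the OR form, and both together the AIPW form."""
    m = np.asarray(in_B, bool)
    Y, A, S = (np.asarray(v, float)[m] for v in (Y, A, S))
    w1, w0, u1, u0 = (np.asarray(v, float)[m] for v in (w1, w0, u1, u0))
    t1 = _ratio(np.sum(w1 * S * A * (Y - u1)), np.sum(w1 * S * A))
    t0 = _ratio(np.sum(w0 * S * (1 - A) * (Y - u0)), np.sum(w0 * S * (1 - A)))
    tB = _ratio(np.sum((1 - S) * (u1 - u0)), np.sum(1 - S))
    return t1 - t0 + tB

# toy IPW example: one treated and one control source unit, one target unit
print(theta_hat_B(Y=[2.0, 0.0, 0.0], A=[1, 0, 0], S=[1, 1, 0],
                  in_B=[1, 1, 1], w1=[1, 1, 1], w0=[1, 1, 1],
                  u1=[0, 0, 0], u0=[0, 0, 0]))   # -> 2.0
```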
In our implementation, both $\pi(x)$ and $\rho(x)$ are estimated with logistic regression models and the outcome models are estimated with simple linear regression models. The true propensity score model is correctly specified under all scenarios. In the Supplementary Materials we include additional results under misspecified propensity score models. The participation probability model is correctly specified under (P1) and (P2) but misspecified under (P3) and (P4), and the outcome models are only correctly specified under (O1).
The probability estimates $\hat{\pi}(x)$ and $\hat{\rho}(x)$ from the full sample are used to compute the simplified version of the generalizability score, denoted by $\hat{\kappa}(x)$, which is based on \eqref{eq:kappa} but with $\sigma^2_{a, \mathcal{S}}(x)$ set to 1 for $a \in \{0, 1\}$. Then $\hat{\kappa}(x)$ is used to construct $\widehat{B}^* = \{x \in \mathcal{X}: \hat{\kappa}(x) \le \hat{\gamma}^*\}$, where $\hat{\gamma}^*$ is the largest $\gamma$ that satisfies
$$
\gamma \le 2 \hat{\E} \{\hat{\kappa}(X) \mid \hat{\kappa}(X)\le\gamma, S=0\}
$$
according to \eqref{eq:gamma_star_hat}, where $\hat{\E}(\cdot)$ denotes the empirical mean. Once $\widehat{B}^*$ is selected, all the probability and outcome models are re-estimated using the subsample within $\widehat{B}^*$ before computing $\hat{\theta}_{\widehat{B}^*, \mathcal{T}}$, as advocated by \citet{crump2009dealing,li2019addressing}.
\subsection{Results}
\label{sec:simu-result}
We study the estimation precision improvement from subpopulation selection by contrasting the estimation results for the full target population to those for the subpopulation in $\widehat{B}^*$, which are measured in terms of bias, root mean square error (RMSE), and the average width and coverage rate of 95\% confidence interval (CI). Note that the subpopulation results are with respect to $\theta_{\widehat{B}^*, \mathcal{T}}$, the ATE on the selected subpopulation.
To construct the 95\% CIs for each estimator, in each simulation run we estimate the standard error using the bootstrap \citep{efron1994introduction}, and then set the CI as the range within 1.96 standard error of the corresponding point estimate.
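The bootstrap interval construction described above can be sketched generically (the estimator and data here are placeholders, not the estimators of the simulation):

```python
import numpy as np

def bootstrap_ci(estimate_fn, data, n_boot=1000, seed=0):
    """Nonparametric bootstrap SE and 95% Wald-type CI for a scalar estimator."""
    rng = np.random.default_rng(seed)
    n = len(data)
    reps = [estimate_fn(data[rng.integers(0, n, n)]) for _ in range(n_boot)]
    se = float(np.std(reps, ddof=1))
    point = float(estimate_fn(data))
    return point, se, (point - 1.96 * se, point + 1.96 * se)

data = np.random.default_rng(2).exponential(size=200)   # placeholder sample
point, se, ci = bootstrap_ci(np.mean, data)
```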
Table \ref{tab:simu} summarizes the results based on 1000 repetitions. We also report the average proportion of the target sample that is kept in the subset $\widehat{B}^*$ under each scenario. We can see that for all three estimators, restricting the estimation to $\widehat{B}^*$ results in smaller RMSE and narrower CIs, especially when the overlap is insufficient ((P2) and (P4)). The IPW estimator gains efficiency because the estimation weights $w_{ia}$ are more stable and less likely to have extreme values over $\widehat{B}^*$; the OR estimator improves because the outcome models have a better fit on the target sample when restricted to the region $\widehat{B}^*$ with good overlap; the AIPW estimator benefits from both aspects. When there is model misspecification, the bootstrap CIs are more likely to achieve the nominal coverage rate when the analysis is restricted to the subpopulation. Under (O1), we observe less improvement for the OR and AIPW estimators; this is because when a parametric outcome model is correctly specified, the issue of lacking overlap is largely mitigated by model extrapolation. Additional simulation results studying the impact of an incorrectly specified propensity score model and of using a larger cut-off value can be found in Section S.4 of the Supplementary Materials. Under these settings, we also observe substantial precision improvement from restricting the estimation to a subpopulation.
\section{Coordinated-Transitional Care (C-TraC) Program}
\label{sec:data}
In this section, we illustrate the proposed approach by evaluating the treatment effect of C-TraC Program versus the standard care on 30-day rehospitalization \citep{gilmore2014development}. The C-TraC Program is a telephone-based, protocol-driven intervention designed to support and empower patients to properly manage their post-discharge care. The population of interest for this program is mainly patients who are 65 years or older (Medicare patients) and who have been hospitalized. Due to both patient factors such as limited cognition or living alone and system factors such as lack of adequate transitional care and lack of patient education, such a population has a high tendency to be readmitted to a hospital within 30 days of being discharged.
The observational study consists of patients who met the inclusion criteria and were discharged from the UW-Hospital between January 2013 and April 2018. We split this data set in the middle, treating patients who were discharged prior to 2016 as the source sample and the rest as the target sample. In this way, we can also compare our estimated target sample causal quantities with the actual observed outcomes in the target sample.
Among the source sample, 206 patients participated in the C-TraC program, among which 42 were readmitted to the hospital within 30 days, and 507 patients did not participate in the program, among which 109 were rehospitalized. Program participation and rehospitalization information is also available for the target sample, but will be held out from the estimation and used as a benchmark to evaluate the estimation accuracy.
With input from our collaborators, we use ten covariates measured at the study entry date. They include two risk scores: the well-known LACE index score for the risk of readmission or death within thirty days of discharge \citep{vanWalraven551}, and the Hendrich II Fall Risk score for identifying patients at high risk of falls \citep{HENDRICH20039}. The LACE score is based on four features of an inpatient hospital episode: length of stay (LoS), admission type, comorbidities, and the number of accident and emergency (A\&E) visits made by a patient in the 6 months prior to their initial admission. The Hendrich score is based on seven risk factors. Besides these two scores, the other variables include age, disease status (diabetes, cancer history, respiratory symptoms, and malnutrition), prescription medication for malnutrition, and lab test markers for malnutrition/liver/kidney functions (ALB) and liver disease (ALT).
We estimate the propensity score $\pi(x)$ by applying logistic regression on the source sample, and estimate the participation probability $\rho(x)$ by applying logistic regression on the combined sample.
To assess the fit of the probability estimates, we check the covariate balances by performing (unweighted and weighted) one-way ANOVA on each of the covariates across three groups: treated, control and target sample. The results are summarized in the 2nd to 5th columns of Table \ref{tab:covariate_Bal}, where ``weighted'' means inverse probability weights are used in fitting the ANOVA. After weighting, the balance of all the covariates across the three groups is largely improved. This suggests that the logistic models provide a reasonable fit for $\pi(x)$ and $\rho(x)$.
We compute the generalizability score, denoted by $\hat{\kappa}(x)$, using the fitted participation probability and propensity score models and substituting $\sigma^2_{a, \mathcal{S}}(x)$ with 1. We show the distribution of the generalizability score among the target sample in Figure \ref{fig:data_kappa}. The distribution has a long tail on the right, which suggests insufficient overlap for these individuals. The vertical dashed line represents $\hat{\gamma}^*$, the largest value of $\gamma$ that satisfies \eqref{eq:gamma_star_hat}, which is used to construct the subset $\widehat{B}^* = \{x \in \mathcal{X}: \hat{\kappa}(x) \le \hat{\gamma}^*\}$. About 64.1\% of the target sample and 84.3\% of the source sample are included in this subset. The last two columns of Table \ref{tab:covariate_Bal} report the covariate imbalance for the subsample after weighting adjustment, confirming that the logistic models remain a good fit for $\pi(x)$ and $\rho(x)$.
We estimate the ATE and its standard error on the whole covariate region $\mathcal{X}$ and on the selected subset $\widehat{B}^*$, respectively. The estimators are the same as those given by \eqref{eq:estimatorform}. Since we are dealing with binary outcomes, for the OR and AIPW approaches, outcome regression models are fitted using logistic regression. Standard error estimates are obtained from 2000 bootstrap replications. These estimates with the standard errors are displayed in rows 1 and 3 of Table \ref{tab:data_estimate}. To benchmark the accuracy of the estimates, we also conduct standard ATE estimation using the treatment assignment and outcome information from the target sample, and display these estimates in rows 2 and 4 of the table. Such standard estimates would be closer to the true values, but are infeasible in the generalization setting. For any given method, focusing on $\widehat{B}^*$ (rows 3 and 4) leads to much closer estimates to the infeasible standard estimates, compared to the results on the whole covariate region (rows 1 and 2).
To further investigate the impact of using other cut-off values in the subpopulation selection process, we repeat the aforementioned procedure of estimating the treatment effect and standard error for subsets of the form $\widehat{B}_\gamma = \{x \in \mathcal{X}: \hat{\kappa}(x) \le \gamma\}$, with $\gamma$ being the 10th, 15th, \dots, 100th percentiles of $\{\hat{\kappa}(X_i): i \in \mathcal{T}\}$. Accordingly, the subsamples under these cut-offs cover 10\%, 15\%, \dots, 100\% of the target sample, respectively. The left panel of Figure \ref{fig:var_thr} plots the bootstrap standard error estimates against the proportion of the target sample covered. The vertical dashed line corresponds to $\hat{\gamma}^*$. As we can see, the curves for the three estimators follow a very similar pattern: as the size of the subpopulation grows, the standard errors first decrease as the sample size increases, but then bounce back as the impact of limited overlap dominates. All the curves attain the minimum at around the proportion corresponding to $\hat{\gamma}^*$. It is worth noting that all the curves are relatively flat around the minimum. Hence, if we select a cut-off value slightly higher than $\hat{\gamma}^*$, we can obtain a larger subpopulation without heavily sacrificing the estimation precision. This might be preferable in practice because with a larger subpopulation the corresponding estimand is generally closer to the ATE on the entire target population.
For comparison, we also evaluate $V(\widehat{B}_\gamma)$ by plugging $\hat{\kappa}(x)$ into \eqref{eq:varB2}. This is done for all $\gamma \in \{\hat{\kappa}(X_i): i \in \mathcal{T}\}$, each cut-off corresponding to a subsample of a different size. On the right panel of Figure \ref{fig:var_thr} we plot $V(\widehat{B}_\gamma)^{\nicefrac{1}{2}}$ against the proportion of the target sample that $\widehat{B}_\gamma$ contains. Although $V(\widehat{B}_\gamma)$ is based on the efficiency bound rather than the actual variance of the given estimators and is computed with the homoscedasticity simplification, its value follows a very similar trend to the standard error curves on the left panel. The computation of $V(\widehat{B}_\gamma)$ is much more efficient than bootstrap resampling and, more importantly, does not require using any outcome information. Therefore, we can also use $V(\widehat{B}_\gamma)$ to guide our selection of $\gamma$.
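Evaluating the plug-in bound over a grid of quantile cut-offs, in the spirit of the right panel of Figure \ref{fig:var_thr}, can be sketched as follows (hypothetical inputs; $S=0$ marks target units):

```python
import numpy as np

def V_curve(kappa, S, quantiles):
    """Plug-in V(B_gamma) of (eq:varB2), with gamma set at the given
    quantiles of the target-sample generalizability scores."""
    kappa, S = np.asarray(kappa, float), np.asarray(S, int)
    kappa_target = kappa[S == 0]
    values = []
    for q in quantiles:
        gamma = np.quantile(kappa_target, q)
        in_B = kappa <= gamma
        mean_k = kappa_target[kappa_target <= gamma].mean()  # E[kappa | X in B, S=0]
        p = np.mean(in_B & (S == 0))                          # P(X in B, S=0)
        values.append(mean_k / p)
    return np.array(values)
```

One would then pick the largest $\gamma$ whose $V(\widehat{B}_\gamma)$ remains acceptable, as described above.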
Lastly, the variance bound $V(B)$ in \eqref{eq:varB2} is derived regardless of how $B$ is selected, and is a metric of the impact of overlap for any subset. Our subset selection has focused on the sublevel sets of the generalizability score $\kappa(x)$, which could be complex if the functional form of $\kappa(x)$ is complex. If one wishes to construct a more regular subset that is easier to describe, one can apply the tree building procedure in \citet{traskin2011defining} using $V(B)$ as guidance. Specifically, suppose we have constructed a subset $\widehat{B}$ by trimming on $\kappa(x)$, and we label the units as $1$ if they are within $\widehat{B}$ and $0$ otherwise; then we can build a classification tree to approximate these labels. Let $\widetilde{B}$ be the subset given by the classification tree. The ratio $ V(\widetilde{B}) / V(\widehat{B}) $ can be used as guidance for tuning the complexity of the tree. The resulting $\widetilde{B}$ would be easier to interpret as it may depend on only a few covariates. In the Supplementary Materials we provide an illustration of this tree building procedure applied to the C-TraC Program study.
\section{Discussion}
\label{sec:disc}
In this paper, we systematically study the impact of covariate overlap on generalizing ATE estimation to a target population. In order to deal with the issue of insufficient overlap that one often encounters in real applications, we propose to limit our attention to subpopulations of the target population. The resulting ATE estimand might differ from the ATE on the entire target population, but allows for a much more stable and reliable estimation. We quantify the impact of overlap based on the semiparametric efficiency theory, and derive a generalizability score to guide the selection of subpopulations. The generalizability score summarizes the variance inflation due to insufficient overlap in both the propensity score and the participation probability simultaneously. To reduce the estimation variance due to insufficient overlap, individuals with a large generalizability score are less suitable to be included in the subpopulation. We also empirically demonstrate that one could assume homoscedasticity of the potential outcomes and set the conditional variance terms as constant in the generalizability score. In this way, we can select the subpopulation without peeking at the observed outcomes from the source population, thus avoiding deliberate biases. The simulation and real data analysis results demonstrate substantial precision improvement by utilizing the generalizability score.
When applying our method, practitioners should be aware that the resulting ATE estimation is with respect to the subpopulation selected, which is defined by trimming individuals with anomalously high generalizability scores. We have characterized the optimal cut-off value $\gamma^*$ that minimizes the impact of limited overlap; however, it might be desirable in practice to use a cut-off moderately higher than $\gamma^*$ to cover a larger proportion of the target population. As demonstrated in Section \ref{sec:data}, this may not severely compromise the estimation precision. A graphical approach (based on the right panel of Figure \ref{fig:var_thr}) might be a practical alternative method to select the cut-off.
Throughout, we have focused on the scenarios where the source sample is external to the target population ($\mathcal{S} \not \subset \mathcal{T}$), which is also known as non-nested design.
In the setting of nested design ($\mathcal{S} \subset \mathcal{T}$), for example, generalizing the result from a randomized trial to all trial-eligible individuals \citep{Cole2010},
our proposed approach can be easily adapted to address similar overlap issues. In this case, the efficiency bound would become
$$
\E \left[
\ind_{B}(X) \left\{
\frac{\sigma^2_{1, \mathcal{S}}(X)}{\rho(X)\pi(X)} + \frac{\sigma^2_{0, \mathcal{S}}(X)}{\rho(X)\{1 - \pi(X)\}}
\right\}
\right] \bigg / \prob(X \in B)^2,
$$
and we can similarly define the generalizability score as $\tilde{\kappa}(x) = \sigma^2_{1, \mathcal{S}}(x)/\{\rho(x)\pi(x)\} + \sigma^2_{0, \mathcal{S}}(x)/[\rho(x)\{1 - \pi(x)\}]$.
\section{Supplementary Materials}
The code for this paper is available at \url{https://github.com/DRuiCHEN/genScore}.
\section*{Acknowledgments}
We thank the reviewers, the associate editor and co-editors for their helpful comments that greatly improved this paper.
Research reported in this work was funded through a Patient-Centered Outcomes Research
Institute (PCORI) Award (ME-2018C2-13180). The views in this work are solely the responsibility
of the authors and do not necessarily represent the views of the Patient-Centered
Outcomes Research Institute (PCORI), its Board of Governors or Methodology Committee.\\
{\it Conflict of Interest}: None declared.
\bibliographystyle{biorefs}
\section{Introduction}
Intersection graphs of geometrical objects in the plane are among the most studied graph classes and have applications in various domains such as for instance biology, statistics, psychology and computing (see \cite{McMc}). We define the \textit{intersection graph} $G$ of a family $\mathcal{F}$ of non-empty sets as the graph whose vertices correspond to the elements of $\mathcal{F}$, and two vertices are adjacent in $G$ if and only if the corresponding elements in $\mathcal{F}$ have a non-empty intersection.
Golumbic et al. introduced in \cite{Golumbic} the class of \textit{edge intersection graphs of paths on a grid} (\textit{EPG graphs}), i.e. graphs for which there exists a collection of nontrivial paths on a rectangular grid in one-to-one correspondence with their vertex set, such that two vertices are adjacent if and only if the corresponding paths share at least one edge of the grid, and showed that every graph is in fact an EPG graph. A natural restriction which was thereupon considered, suggests to limit the number of \textit{bends} (i.e. 90-degree turns at a grid-point) that a path may have; for $k \geq 0$, the class \textit{$B_k$-EPG} consists of those EPG graphs admitting a representation in which each path has at most $k$ bends.
Since their introduction, $B_k$-EPG graphs have been extensively studied from several points of view (see for instance \cite{NCA,Ries,biedl,cohen1,francis,Golumbic,heldt1,heldt2,pergel,BRies}). One major interest is the so-called \textit{bend number}; for a graph class $\mathcal{G}$, the \textit{bend number} of $\mathcal{G}$ is the minimum integer $k\geq 0$ such that every graph $G\in \mathcal{G}$ is a $B_k$-EPG graph. The problem of determining the bend number of graph classes has been widely investigated (see for instance \cite{biedl,francis,Golumbic,heldt1} for planar graphs, Halin graphs, line graphs, outerplanar graphs).
Since $B_0$-EPG graphs are equivalent to the well-studied class of interval graphs, a particular attention has been paid to $B_1$-EPG graphs. The authors in \cite{heldt2} showed that recognising $B_1$-EPG graphs is an NP-complete problem, a result which was further extended to $B_2$-EPG graphs in \cite{pergel}. Therefore, special graph classes were considered. For instance, the authors in \cite{Ries} provided characterisations of some subclasses of chordal graphs which are $B_1$-EPG by families of minimal forbidden induced subgraphs; in \cite{cohen1}, the authors presented a characterisation of cographs that are $B_1$-EPG and provided a linear time recognition algorithm.
In this paper, we are interested in a subclass of circular arc graphs (CA for short), namely \textit{proper circular arc graphs}. In \cite{NCA}, the authors showed that CA graphs are $B_3$-EPG and further proved that normal circular arc graphs have bend number equal to 2, a result from which we can easily deduce that the bend number of proper circular arc graphs is 2 (see Section \ref{sec:prelim}). They also considered additional constraints on the EPG representations by demanding that the union of the paths lies on the boundary of a rectangle of the grid (\textit{EPR graphs}). Similarly to EPG graphs, they defined for $k \geq 0$ the class \textit{$B_k$-EPR} and proved that not all circular arc graphs are $B_3$-EPR (it is easily seen that CA = $B_4$-EPR = EPR). With the intent of pursuing the work done in \cite{NCA}, we here provide a characterisation of proper circular arc graphs that are $B_1$-EPG by a family of minimal forbidden induced subgraphs (see Section \ref{sec:proper}) which is a first step towards characterising the minimal graphs in (CA $\cap$ $B_2$-EPG) $\backslash$ (CA $\cap$ $B_1$-EPG). We conclude Section \ref{sec:proper} by noting that a characterisation by a family of minimal forbidden induced subgraphs of proper circular arc graphs which are $B_1$-EPR easily follows from \cite{NCA} and \cite{PHCA}.
\section{Preliminaries}
\label{sec:prelim}
Throughout this paper, all considered graphs are connected, finite and simple. For all graph theoretical terms and notations not defined here, we refer the reader to \cite{Bondy}.
Let $G=(V,E)$ be an undirected graph with vertex set $V$ and edge set $E$. A \textit{clique} (resp. \textit{independent set}) is a subset of vertices that are pairwise adjacent (resp. nonadjacent). If $X_1$ and $X_2$ are two disjoint subsets of vertices, we say that $X_1$ \textit{is complete to} (resp. \textit{is anti-complete to}) $X_2$, which we denote by $X_1 - X_2$ (resp. $X_1 \cdots X_2$), if every vertex in $X_1$ is adjacent (resp. nonadjacent) to every vertex in $X_2$. A \textit{dominating set} $D$ in $G$ is a subset of vertices such that every vertex not in $D$ is adjacent to at least one vertex in $D$.
We denote by $C_n$, $n\geq 3$, the \textit{chordless cycle} on $n$ vertices and by $K_n$, $n \geq 1$, the \textit{complete graph} on $n$ vertices. A \textit{k-wheel}, $k \geq 3$, denoted by $W_k$, is a chordless cycle on $k$ vertices with an additional vertex, referred to as the \textit{center} of the wheel, adjacent to every vertex of the cycle. The \textit{3-sun}, denoted by $S_3$, consists of an independent set $S=\{s_0,s_1,s_2\}$ and a clique $K=\{k_0,k_1,k_2\}$ such that $s_i$ is adjacent to $k_i$ and $k_{i+1}$, $i=0,1,2$, where indices are taken modulo 3. Given a graph $G$ and an integer $k \geq 0$, the \textit{power graph} $G^k$ of $G$ has the same vertex set as $G$ with two vertices being adjacent in $G^k$ if and only if their distance (i.e. the length of a shortest path between the two vertices) in $G$ is at most $k$.
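The 3-sun, for instance, is small enough to write down explicitly; a short Python sketch (the representation and names are ours, chosen for illustration):

```python
def three_sun():
    """Adjacency sets of S_3: clique K = {k0,k1,k2}, independent set
    S = {s0,s1,s2}, with s_i adjacent to k_i and k_{i+1} (indices mod 3)."""
    adj = {v: set() for v in ("k0", "k1", "k2", "s0", "s1", "s2")}
    def add_edge(u, v):
        adj[u].add(v)
        adj[v].add(u)
    for i in range(3):
        add_edge(f"k{i}", f"k{(i + 1) % 3}")    # clique edges
        add_edge(f"s{i}", f"k{i}")              # s_i -- k_i
        add_edge(f"s{i}", f"k{(i + 1) % 3}")    # s_i -- k_{i+1}
    return adj

adj = three_sun()
# each s_i has degree 2 and {s0, s1, s2} is an independent set
```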
If $G=(V,E)$ is a graph and $X \subseteq V$ is a subset of vertices, we denote by $G\backslash X$ the graph obtained from $G$ by deleting all vertices in $X$. Equivalently, $G\backslash X$ is the \textit{subgraph of $G$ induced by $V\backslash X$}, denoted by $G[V\backslash X]$. If $X$ consists of a single vertex, say $X=\{x\}$, we simply write $G\backslash x$. The \textit{complement graph} of $G$ is the graph $\overline{G}$ having the same vertex set as $G$ with two vertices being adjacent in $\overline{G}$ if and only if they are nonadjacent in $G$. The \emph{disjoint union of $G_1$ and $G_2$} is denoted by $G_1 \cup G_2$.
Let $\mathcal H$ be a collection of graphs. For $H\in\mathcal H$, we say that $G$ \emph{contains no induced $H$} if $G$ contains no induced subgraph isomorphic to $H$. A graph is \emph{$\mathcal H$-free} if it contains no induced subgraph isomorphic to some graph belonging to $\mathcal H$.
Recall that an \textit{interval graph} is an intersection graph of intervals on the real line. A graph is said to be \textit{chordal} if it does not contain any chordless cycle of length at least four as an induced subgraph. An independent set of three vertices such that each pair is joined by a path that avoids the neighborhood of the third is called an \textit{asteroidal triple}. The following is a well-known characterisation of interval graphs.
\begin{theorem}[\cite{Lekker}]
\label{theo:interval}
A graph is an interval graph if and only if it is chordal and contains no asteroidal triple.
\end{theorem}
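As a quick sanity check of Theorem \ref{theo:interval} (a sketch assuming networkx; \texttt{interval\_intersection\_graph} is our own helper name), the intersection graph of a family of open real intervals is chordal and contains no asteroidal triple, whereas $C_4$ already fails chordality:

```python
import networkx as nx
from itertools import combinations

def interval_intersection_graph(intervals):
    # vertices are interval indices; an edge means the two open intervals overlap
    G = nx.Graph()
    G.add_nodes_from(range(len(intervals)))
    for (i, (a, b)), (j, (c, d)) in combinations(enumerate(intervals), 2):
        if max(a, c) < min(b, d):
            G.add_edge(i, j)
    return G

G = interval_intersection_graph([(0, 2), (1, 3), (2.5, 5), (4, 6)])
assert nx.is_chordal(G) and nx.is_at_free(G)   # interval => chordal and AT-free
assert not nx.is_chordal(nx.cycle_graph(4))    # C_4: chordless cycle of length 4
```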
A \textit{circular arc graph} (\textit{CA graph}) is an intersection graph of open arcs on a circle, i.e. a graph $G=(V,E)$ is a circular arc graph if one can associate an open arc on a circle with each vertex such that two vertices are adjacent if and only if their corresponding arcs intersect. If $\mathcal{C}$ denotes the corresponding circle and $\mathcal{A}$ the corresponding set of arcs, then $\mathcal{R} = (\mathcal{C}, \mathcal{A})$ is called a \textit{circular arc representation} of $G$. A circular arc graph having a circular arc representation where no two arcs cover the circle is called a \textit{normal circular arc graph} (\textit{NCA graph}). A circular arc graph having a circular arc representation where no arc properly contains another is called a \textit{proper circular arc graph} (\textit{PCA graph}). It is well known that every PCA graph admits a representation which is simultaneously proper and normal (see \cite{Tucker}); in particular, every PCA graph is a NCA graph. The following theorem provides a minimal forbidden induced subgraph characterisation for PCA graphs (see Fig. \ref{Fig:PCA}).
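The intersection model just defined can be sketched as follows (angles in degrees; \texttt{circular\_arc\_graph} is a hypothetical helper, not from the cited works): four arcs forming a 4-cycle together with one long arc meeting all of them yield the 4-wheel $W_4$.

```python
import networkx as nx
from itertools import combinations

def inside(p, arc):
    s, e = arc
    return s < p < e if s < e else p > s or p < e   # (s, e) may wrap past 0

def circular_arc_graph(arcs):
    # open arcs intersect iff one arc's start lies inside the other (or both
    # start at the same point); vertices are arc indices
    G = nx.Graph()
    G.add_nodes_from(range(len(arcs)))
    for (i, a), (j, b) in combinations(enumerate(arcs), 2):
        if inside(b[0], a) or inside(a[0], b) or a[0] == b[0]:
            G.add_edge(i, j)
    return G

# A 4-cycle of arcs plus one long arc intersecting all four: this is W_4.
arcs = [(0, 100), (90, 190), (180, 280), (270, 10), (5, 355)]
W4 = circular_arc_graph(arcs)
assert W4.number_of_edges() == 8 and W4.degree(4) == 4
```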
\begin{theorem}[\cite{PCA}]
\label{PCA}
A graph is a PCA graph if and only if it is $\{G_i, C_{n+4} \cup K_1, \overline{C_{2n + 3} \cup K_1}, \overline{C_{2n+6}}, 1 \leq i \leq 6, n \geq 0\}$-free.
\end{theorem}
A graph $G$ is a \textit{Helly circular arc graph} (\textit{HCA graph}) if it has a circular arc representation in which any subset of pairwise intersecting arcs has a common point on the circle. A graph that admits a circular arc representation which is simultaneously normal and Helly, i.e. no three arcs or less cover the circle, is called a \textit{normal Helly circular arc graph} (\textit{NHCA graph}). Similarly, one can define the class of \textit{proper Helly circular arc graphs} (\textit{PHCA graphs}) corresponding to those graphs that admit a circular arc representation in which no three arcs cover the circle and no arc properly contains another. It was shown in \cite{sNHCA} that a PCA graph is PHCA if it admits a proper circular arc representation in which no two or three arcs cover the circle; in particular, every PHCA graph is a NHCA graph.
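The covering conditions in the definitions of NCA, NHCA and PHCA representations can be tested by brute force. The following sketch (our own helper names, angles in degrees) checks whether a set of open arcs covers the circle, using the fact that it suffices to test the arc endpoints and the midpoints of the gaps between consecutive endpoints:

```python
def inside(p, arc):
    s, e = arc
    p = p % 360
    return s < p < e if s < e else p > s or p < e   # (s, e) may wrap past 0

def covers_circle(arcs):
    pts = sorted({x for arc in arcs for x in arc})
    # candidate uncovered points: endpoints, plus midpoints between them
    candidates = pts + [(a + b) / 2 for a, b in zip(pts, pts[1:])]
    candidates.append((pts[-1] + pts[0] + 360) / 2)   # gap wrapping past 0
    return all(any(inside(p, arc) for arc in arcs) for p in candidates)

# Three arcs covering the circle...
assert covers_circle([(0, 130), (120, 250), (240, 10)])
# ...but any two of them alone leave a gap.
assert not covers_circle([(0, 130), (120, 250)])
```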
\tikzset{
circ/.style = {circle,draw,fill,inner sep=1pt},
invisible/.style = {circle,draw=none,inner sep=0pt,font=\tiny}
}
\begin{center}
\begin{figure}
\centering
\captionsetup[subfigure]{labelformat=empty}
\begin{minipage}[b]{0.3\textwidth}
\begin{subfigure}[b]{\linewidth}
\centering
\begin{tikzpicture}[node distance=0.75cm]
\node[circ] (c) at (0,0) {};
\node[circ] (b) at (1.5,0) {};
\node[circ] (f) at (0.75,1.3) {};
\node[circ] (g) [right of=f] {};
\draw[-] (c) -- (b) node[circ,midway] (a) {};
\draw[-] (c) -- (f) node[circ,midway] (d) {};
\draw[-] (b) -- (f) node[circ,midway] (e) {};
\draw[-] (a) -- (d)
(d) -- (e)
(e) -- (a);
\end{tikzpicture}
\caption{$G_1$}
\end{subfigure}
\vspace*{5mm}
\begin{subfigure}[b]{\linewidth}
\centering
\begin{tikzpicture}[node distance=0.75cm]
\node[circ] (a) {};
\node[circ] (b) [below left of=a] {};
\node[circ] (c) [below right of=a] {};
\node[circ] (d) [above right of=a] {};
\node[circ] (e) [above left of=a] {};
\node[circ] (f) [right of=a,xshift=0.25cm] {};
\node[circ] (g) [right of=f,xshift=-0.25cm] {};
\draw (a) edge[-] (b)
(a) edge[-] (c)
(a) edge[-] (d)
(a) edge[-] (e)
(a) edge[-] (f)
(b) edge[-] (c)
(b) edge[-] (e)
(c) edge[-] (d)
(c) edge[-] (f)
(c) edge[-] (g)
(d) edge[-] (e)
(d) edge[-] (f)
(d) edge[-] (g)
(f) edge[-] (g);
\end{tikzpicture}
\caption{$G_4$}
\end{subfigure}
\end{minipage}
\begin{minipage}[b]{0.3\textwidth}
\begin{subfigure}[b]{\linewidth}
\centering
\begin{tikzpicture}[node distance=0.75cm]
\node[circ] (a) {};
\node[circ] (b) [below left of=a] {};
\node[circ] (c) [below right of=a] {};
\node[circ] (d) [above right of=a] {};
\node[circ] (e) [above left of=a] {};
\node[circ] (f) [left of=a,xshift=-0.25cm] {};
\node[circ] (g) [right of=a,xshift=0.25cm] {};
\draw (a) edge[-] (b)
(a) edge[-] (c)
(a) edge[-] (d)
(a) edge[-] (e)
(b) edge[-] (c)
(b) edge[-] (e)
(b) edge[-] (f)
(c) edge[-] (d)
(c) edge[-] (g)
(d) edge[-] (e)
(d) edge[-] (g)
(e) edge[-] (f);
\end{tikzpicture}
\caption{$G_2$}
\end{subfigure}
\vspace*{5mm}
\begin{subfigure}[b]{\linewidth}
\centering
\begin{tikzpicture}[node distance=0.75cm]
\node[circ] (a) {};
\node[circ] (b) [below left of=a] {};
\node[circ] (c) [below right of=a] {};
\node[circ] (d) [above right of=a] {};
\node[circ] (e) [above left of=a] {};
\node[circ] (f) [left of=a,xshift=-0.25cm] {};
\node[circ] (g) [right of=a,xshift=0.25cm] {};
\draw (a) edge[-] (b)
(a) edge[-] (c)
(a) edge[-] (d)
(a) edge[-] (e)
(a) edge[-] (g)
(b) edge[-] (c)
(b) edge[-] (e)
(b) edge[-] (f)
(c) edge[-] (d)
(c) edge[-] (g)
(d) edge[-] (e)
(d) edge[-] (g)
(e) edge[-] (f);
\end{tikzpicture}
\caption{$G_5$}
\end{subfigure}
\end{minipage}
\begin{minipage}[b]{0.3\textwidth}
\begin{subfigure}[b]{\linewidth}
\centering
\begin{tikzpicture}[node distance=0.75cm]
\node[circ] (a) {};
\node[circ] (b) [below of=a,yshift=0.5cm] {};
\node[circ] (c) [above right of=a,xshift=-0.35cm,yshift=-0.35cm] {};
\node[circ] (d) [above left of=a,xshift=0.35cm,yshift=-0.35cm] {};
\node[circ] (e) [below left of=b,xshift=-0.35cm,yshift=0.25cm] {};
\node[circ] (f) [below right of=b,xshift=0.35cm,yshift=0.25cm] {};
\node[circ] (g) [above of=a,yshift=0.25cm] {};
\draw (a) edge[-] (b)
(a) edge[-] (c)
(a) edge[-] (d)
(b) edge[-] (c)
(b) edge[-] (d)
(b) edge[-] (e)
(b) edge[-] (f)
(c) edge[-] (d)
(c) edge[-] (f)
(c) edge[-] (g)
(d) edge[-] (e)
(d) edge[-] (g)
(e) edge[-] (f)
(e) edge[-] (g)
(f) edge[-] (g);
\end{tikzpicture}
\caption{$G_3$}
\end{subfigure}
\vspace*{5mm}
\begin{subfigure}[b]{\linewidth}
\centering
\begin{tikzpicture}[node distance=0.75cm]
\node[invisible] (fake) {};
\node[circ] (a) [below left of=fake,xshift=0.25cm,yshift=0.25cm] {};
\node[circ] (b) [above of=fake,yshift=-0.6cm] {};
\node[circ] (c) [below right of=fake,xshift=-0.25cm,yshift=0.25cm] {};
\node[circ] (d) [below right of=c,xshift=-0.3cm,yshift=0.3cm] {};
\node[circ] (e) [below left of=a,xshift=0.3cm,yshift=0.3cm] {};
\node[circ] (f) [above of=b,yshift=-0.4cm] {};
\draw (a) edge[-] (b)
(a) edge[-] (c)
(a) edge[-] (e)
(b) edge[-] (c)
(b) edge[-] (f)
(c) edge[-] (d);
\end{tikzpicture}
\caption{$G_6$}
\end{subfigure}
\end{minipage}
\caption{Minimal forbidden induced subgraphs for PCA graphs.}
\label{Fig:PCA}
\end{figure}
\end{center}
Consider a rectangular grid $\mathcal{G}$ where the horizontal lines are referred to as \textit{rows} and the vertical lines as \textit{columns}. The grid-point lying on column $x$ and row $y$ is referred to as $(x,y)$. If $\mathcal{P}$ is a collection of nontrivial simple paths on the grid, the \textit{edge intersection graph $G$ of $\mathcal{P}$} is the graph whose vertex set is in one-to-one correspondence with $\mathcal{P}$, two vertices being adjacent if and only if the corresponding paths share at least one grid-edge. The path representing some vertex $v$ will be denoted by $\mathcal{P}_v$. Then $(\mathcal{G},\mathcal{P})$ is referred to as an \textit{EPG representation} of $G$, and as a \textit{$k$-bend EPG representation} of $G$, $k \geq 0$, if every path of $\mathcal{P}$ has at most $k$ bends (i.e. 90 degree turns at a grid-point). The class of graphs admitting a $k$-bend EPG representation is called \textit{$B_k$-EPG}.
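The edge intersection model just defined can be sketched as follows (assuming networkx; \texttt{epg\_graph} and \texttt{bends} are our own helper names). It illustrates in particular that sharing a grid-point alone does not create an edge:

```python
import networkx as nx
from itertools import combinations

def grid_edges(path):
    # a path is a sequence of grid-points; its grid-edges are unordered
    # pairs of consecutive points
    return {frozenset(e) for e in zip(path, path[1:])}

def bends(path):
    # a bend is a 90-degree turn: consecutive unit segments change direction
    dirs = [(x2 - x1, y2 - y1) for (x1, y1), (x2, y2) in zip(path, path[1:])]
    return sum(d1 != d2 for d1, d2 in zip(dirs, dirs[1:]))

def epg_graph(paths):
    G = nx.Graph()
    G.add_nodes_from(paths)
    for u, v in combinations(paths, 2):
        if grid_edges(paths[u]) & grid_edges(paths[v]):
            G.add_edge(u, v)
    return G

# 'a' and 'b' share the grid-edge (0,0)-(0,1); 'c' touches them only at the
# grid-point (0,1), which does NOT make it adjacent to them.
paths = {'a': [(0, 0), (0, 1), (1, 1)],
         'b': [(0, 0), (0, 1), (-1, 1)],
         'c': [(0, 1), (0, 2)]}
G = epg_graph(paths)
assert G.has_edge('a', 'b') and not G.has_edge('a', 'c')
assert all(bends(p) <= 1 for p in paths.values())   # a 1-bend representation
```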
A graph $G$ is said to be an \textit{edge intersection graph of paths on a rectangle} (\textit{EPR graph}) if there exists a set of paths $\mathcal{P}$ on a rectangle $\mathcal{R}$ of the grid in one-to-one correspondance with the vertex set of $G$, where two vertices are adjacent in $G$ if and only if their corresponding paths share at least one grid-edge; $(\mathcal{G},\mathcal{R},\mathcal{P})$ is then referred to as an \textit{EPR representation} of $G$. For $k \geq 0$, we denote by \textit{$B_k$-EPR} the class of graphs for which there exists an EPR representation where every path has at most $k$ bends.
The authors in \cite{NCA} proved that NCA graphs have bend number 2 and presented an infinite family of NCA graphs, namely $\{C_{4k-1}^k, k \geq 2\}$, which are not $B_1$-EPG. Since every $C_{4k-1}^k$, $k \geq 2$, is in fact a PCA graph, we deduce the following corollary from the fact that PCA $\subset$ NCA.
\begin{corollary}
PCA graphs have bend number 2.
\end{corollary}
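The family $\{C_{4k-1}^k, k \geq 2\}$ mentioned above is easy to generate explicitly; a minimal sketch assuming networkx (\texttt{C\_power} is our own helper name):

```python
import networkx as nx

def C_power(k):
    """Sketch of C_{4k-1}^k: the k-th power of the cycle on 4k - 1 vertices."""
    n = 4 * k - 1
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for u in range(n):
        for d in range(1, k + 1):            # circular distance at most k
            G.add_edge(u, (u + d) % n)
    return G

# C_7^2, the smallest member of the family (k = 2): 7 vertices, each of degree 4.
G = C_power(2)
assert G.number_of_nodes() == 7 and all(G.degree(v) == 4 for v in G)
```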
\section{Proper circular arc $B_1$-EPG graphs}
\label{sec:proper}
As we have seen in Section \ref{sec:prelim}, the bend number of proper circular arc graphs is 2. In this section, we provide a characterisation, by a family of minimal forbidden induced subgraphs (see Fig. \ref{PCAB1}), for PCA graphs which are $B_1$-EPG.
\tikzset{
circ/.style = {circle,draw,fill,inner sep=1pt},
nonedge/.style={decorate,decoration={snake,amplitude=.3mm,segment length=1mm},draw},
clique/.style = {circle,draw,inner sep=1pt,font=\tiny},
invisible/.style = {circle,draw=none,inner sep=0pt,font=\tiny}
}
\begin{center}
\begin{figure}[h]
\captionsetup[subfigure]{labelformat=empty}
\begin{minipage}[b]{0.24\textwidth}
\begin{subfigure}[b]{\linewidth}
\centering
\begin{tikzpicture}[node distance=0.75cm]
\node[circ] (a) {};
\node[circ] (b) [below left of=a] {};
\node[circ] (c) [below right of=a] {};
\node[circ] (d) [above right of=a] {};
\node[circ] (e) [above left of=a] {};
\node[circ] (f) [above right of=d] {};
\node[circle,draw=none,inner sep=1pt] (fake) [below left of=b] {};
\draw (a) edge[-] (b)
(a) edge[-] (c)
(a) edge[-] (d)
(a) edge[-] (e)
(b) edge[-] (c)
(b) edge[-] (e)
(c) edge[-] (d)
(c) edge[-,bend right=10] (f)
(d) edge[-] (f)
(d) edge[-] (e)
(e) edge[-,bend left=10] (f);
\end{tikzpicture}
\caption{$H_1$}
\end{subfigure}
\vspace*{5mm}
\begin{subfigure}[b]{\linewidth}
\centering
\begin{tikzpicture}[node distance=0.75cm]
\node[circ] (a) {};
\node[circ] (b) [below left of=a] {};
\node[circ] (c) [below right of=a] {};
\node[circ] (d) [above right of=a] {};
\node[circ] (e) [above left of=a] {};
\node[circ] (f) [above right of=d] {};
\node[circ] (g) [above left of=b] {};
\draw (a) edge[-] (b)
(a) edge[-] (c)
(a) edge[-] (d)
(a) edge[-] (e)
(a) edge[-,bend right=20] (f)
(a) edge[nonedge] (g)
(b) edge[-] (c)
(b) edge[-] (e)
(b) edge[-] (g)
(c) edge[-] (d)
(c) edge[-,bend right=10] (f)
(d) edge[-] (f)
(d) edge[-] (e)
(e) edge[-,bend left=10] (f)
(e) edge[-] (g)
(f) edge[-] (g);
\end{tikzpicture}
\caption{$H_5$}
\end{subfigure}
\end{minipage}
\begin{minipage}[b]{0.24\textwidth}
\begin{subfigure}[b]{\linewidth}
\centering
\begin{tikzpicture}[node distance=0.75cm]
\node[circ] (a) {};
\node[circ] (b) [below left of=a] {};
\node[circ] (c) [below right of=a] {};
\node[circ] (d) [above right of=a] {};
\node[circ] (e) [above left of=a] {};
\node[circ] (f) [above right of=d] {};
\node[circ] (g) [below right of=c] {};
\draw (a) edge[-] (b)
(a) edge[-] (c)
(a) edge[-] (d)
(a) edge[-] (e)
(a) edge[-,bend left=20] (f)
(a) edge[-,bend right=20] (g)
(b) edge[-] (c)
(b) edge[-] (e)
(b) edge[-,bend right=10] (g)
(c) edge[-] (d)
(c) edge[-,bend right=10] (f)
(c) edge[-] (g)
(d) edge[-] (f)
(d) edge[-] (e)
(d) edge[-,bend left=10] (g)
(e) edge[-,bend left=10] (f);
\end{tikzpicture}
\caption{$H_2$}
\end{subfigure}
\vspace*{5mm}
\begin{subfigure}[b]{\linewidth}
\centering
\begin{tikzpicture}[node distance=0.75cm]
\begin{pgfinterruptboundingbox}
\node[circle,draw=none] (a) {};
\end{pgfinterruptboundingbox}
\node[circ] (a') [left of=a,xshift=0.5cm] {};
\node[circ] (a'') [right of=a,xshift=-0.5cm] {};
\node[circ] (b) [below left of=a] {};
\node[circ] (c) [below right of=a] {};
\node[circ] (d) [above right of=a] {};
\node[circ] (e) [above left of=a] {};
\node[circ] (f) [above left of=d] {};
\node[circ] (g) [above left of=b] {};
\draw (a') edge[-] (b)
(a') edge[-] (c)
(a') edge[-] (d)
(a') edge[-] (e)
(a') edge[-] (g)
(a') edge[-] (a'')
(a'') edge[-] (b)
(a'') edge[-] (c)
(a'') edge[-] (d)
(a'') edge[-] (e)
(a'') edge[-] (f)
(b) edge[-] (c)
(b) edge[-] (e)
(b) edge[-] (g)
(c) edge[-] (d)
(d) edge[-] (f)
(d) edge[-] (e)
(e) edge[-] (f)
(e) edge[-] (g);
\end{tikzpicture}
\caption{$H_6$}
\end{subfigure}
\end{minipage}
\begin{minipage}[b]{0.24\textwidth}
\begin{subfigure}[b]{\linewidth}
\centering
\begin{tikzpicture}[node distance=0.75cm]
\node[circ] (a) {};
\node[circ] (a') [left of=a,xshift=0.5cm] {};
\node[circ] (a'') [right of=a, xshift=-0.5cm] {};
\node[circ] (b) [below left of=a] {};
\node[circ] (c) [below right of=a] {};
\node[circ] (d) [above right of=a] {};
\node[circ] (e) [above left of=a] {};
\node[circle,draw=none,inner sep=1pt] (fake) [below right of=b]{};
\draw (a) edge[-] (b)
(a) edge[-] (c)
(a) edge[-] (d)
(a) edge[-] (e)
(a) edge[-] (a')
(a) edge[-] (a'')
(a') edge[-] (b)
(a') edge[-] (c)
(a') edge[-] (d)
(a') edge[-] (e)
(a'') edge[-] (b)
(a'') edge[-] (c)
(a'') edge[-] (d)
(a'') edge[-] (e)
(b) edge[-] (c)
(b) edge[-] (e)
(c) edge[-] (d)
(d) edge[-] (e);
\end{tikzpicture}
\caption{$H_3$}
\end{subfigure}
\vspace*{5mm}
\begin{subfigure}[b]{\linewidth}
\centering
\begin{tikzpicture}[node distance=0.75cm]
\node[circ] (a) {};
\node[circ] (b) [below left of=a] {};
\node[circ] (c) [below right of=a] {};
\node[circ] (d) [above right of=a] {};
\node[circ] (e) [above left of=a] {};
\node[circ] (f) [above right of=d,yshift=-0.25cm,xshift=-0.25cm] {};
\node[circ] (g) [below left of=b,yshift=0.25cm,xshift=0.25cm] {};
\draw (a) edge[-] (b)
(a) edge[-] (c)
(a) edge[-] (d)
(a) edge[-] (e)
(a) edge[-,bend left=20] (f)
(a) edge[-,bend left=20] (g)
(b) edge[-] (c)
(b) edge[-] (e)
(b) edge[-] (g)
(c) edge[-] (d)
(c) edge[-,bend right=10] (f)
(c) edge[-,bend left=10] (g)
(d) edge[-] (f)
(d) edge[-] (e)
(e) edge[-,bend left=10] (f)
(e) edge[-,bend right=10] (g)
(f) edge[-,bend right] (g);
\end{tikzpicture}
\caption{$H_7$}
\end{subfigure}
\end{minipage}
\begin{minipage}[b]{0.24\textwidth}
\begin{subfigure}[b]{\linewidth}
\centering
\begin{tikzpicture}[node distance=0.75cm]
\begin{pgfinterruptboundingbox}
\node[circle,draw=none] (a) {};
\end{pgfinterruptboundingbox}
\node[circ] (a') [above of=a,yshift=-0.5cm] {};
\node[circ] (a'') [below of=a, yshift=0.5cm] {};
\node[circ] (b) [below left of=a] {};
\node[circ] (c) [below right of=a] {};
\node[circ] (d) [above right of=a] {};
\node[circ] (e) [above left of=a] {};
\node[circ] (f) [below right of=d] {};
\node[circle,draw=none,inner sep=1pt] (fake) [below left of=b]{};
\draw (a') edge[-] (b)
(a') edge[-] (c)
(a') edge[-] (d)
(a') edge[-] (e)
(a') edge[-] (f)
(a'') edge[-] (b)
(a'') edge[-] (c)
(a'') edge[-] (d)
(a'') edge[-] (e)
(a'') edge[-] (f)
(b) edge[-] (c)
(b) edge[-] (e)
(c) edge[-] (d)
(c) edge[-] (f)
(d) edge[-] (f)
(d) edge[-] (e);
\end{tikzpicture}
\caption{$H_4$}
\end{subfigure}
\vspace*{5mm}
\begin{subfigure}[b]{\linewidth}
\centering
\begin{tikzpicture}[node distance=0.75cm]
\node[circ] (a) {};
\node[circ] (b) [below left of=a] {};
\node[circ] (c) [below right of=a] {};
\node[circ] (d) [above right of=a] {};
\node[circ] (e) [above left of=a] {};
\node[circ] (f) [above left of=d] {};
\node[circ] (g) [below right of=d] {};
\draw (a) edge[-] (b)
(a) edge[-] (c)
(a) edge[-] (d)
(a) edge[-] (e)
(a) edge[-] (g)
(b) edge[-] (c)
(b) edge[-] (e)
(c) edge[-] (d)
(c) edge[-] (g)
(d) edge[-] (f)
(d) edge[-] (e)
(d) edge[-] (g)
(e) edge[-] (f)
(f) edge[-,bend left] (g);
\end{tikzpicture}
\caption{$H_8$}
\end{subfigure}
\end{minipage}
\caption{PCA graphs which are minimally non $B_1$-EPG (a serpentine line connecting two vertices indicates that these two vertices may or may not be adjacent).}
\label{PCAB1}
\end{figure}
\end{center}
\begin{theorem}
\label{thmPCA}
Let $G$ be a PCA graph. Then $G$ is $B_1$-EPG if and only if $G$ is $\{H_i, C_{4k-1}^k, 1 \leq i \leq 8, k \geq 2\}$-free.
\end{theorem}
\begin{proof}
\textit{\underline{Necessary condition}.} Let us show that for all $1 \leq i \leq 8$, $H_i$ is not $B_1$-EPG. Observe that all these graphs contain an induced 4-wheel; we denote by $a$, $b$, $c$ and $d$ the four vertices of the 4-cycle of the 4-wheel, and by $e$ (one of) its center(s). As shown in \cite{Ries}, this 4-cycle can only be represented by either a true pie or a false pie, as $\mathcal{P}_e$ must intersect all four corresponding paths of the cycle (see Fig. \ref{b1}).
Assume henceforth that the 4-wheel is represented by a true pie using column $x$ and row $y$ of the grid (a similar reasoning applies if it is represented by a false pie). Then, $\mathcal{P}_e$ lies either on column $x$ or row $y$ and strictly contains the grid-point $(x,y)$, since it must intersect every path of the 4-cycle (see Fig. \ref{tworep}). Now, two such paths share a grid-edge if and only if they both lie on column $x$ or both lie on row $y$; since $H_3$ contains two nonadjacent centers which should both be adjacent to a third one, $H_3$ cannot be $B_1$-EPG.
If a vertex $v$ is adjacent to three vertices of the 4-cycle, say $a$, $b$ and $c$, its associated path must contain column $x$ and/or row $y$, since it has to intersect $\mathcal{P}_a$, $\mathcal{P}_b$ and $\mathcal{P}_c$. However, $\mathcal{P}_v$ cannot lie entirely on column $x$ or row $y$ as it would otherwise intersect $\mathcal{P}_d$. Hence, it lies on both, similarly to $\mathcal{P}_b$; but then $\mathcal{P}_v$ necessarily intersects $\mathcal{P}_e$, which implies that $H_1$ is not $B_1$-EPG (see Fig. \ref{three}).
If a second vertex $w$ is adjacent to three vertices of the 4-cycle, we distinguish two cases. Either those vertices are $b$, $c$ and $d$ (the case where they are $d$, $a$ and $b$ is symmetric); then, as shown previously, $\mathcal{P}_w$ lies on column $x$ and row $y$ similarly to $\mathcal{P}_c$, and therefore intersects $\mathcal{P}_v$. Or they are $c$, $d$ and $a$, in which case $\mathcal{P}_w$ lies on column $x$ and row $y$ similarly to $\mathcal{P}_d$, and consequently shares only the grid-point $(x,y)$ with $\mathcal{P}_v$. Hence, neither $H_2$ nor $H_7$ is $B_1$-EPG.
If $z$ is a vertex adjacent to exactly two consecutive vertices of the 4-cycle, say $a$ and $d$, then $\mathcal{P}_z$ uses row $y$, where $\mathcal{P}_a$ and $\mathcal{P}_d$ intersect, without strictly containing the grid-point $(x,y)$, as it would otherwise intersect the paths of other vertices of the 4-cycle (note that $\mathcal{P}_z$ may then only intersect the paths of centers of the 4-wheel lying on row $y$). Hence, $H_4$ and $H_5$ are not $B_1$-EPG (see Fig. \ref{two}).
Finally, since the path of a vertex $t$ adjacent to only $a$ and $b$ would use column $x$, where $\mathcal{P}_a$ and $\mathcal{P}_b$ intersect, without strictly containing the grid-point $(x,y)$, $H_6$ and $H_8$ cannot be $B_1$-EPG. We conclude this part of the proof by noticing that $C_{4k-1}^k \not \in B_1$-EPG for $k \geq 2$, as shown in \cite{NCA}.
\begin{center}
\begin{figure}[h]
\begin{minipage}[b]{0.5\textwidth}
\centering
\begin{subfigure}[b]{\linewidth}
\begin{minipage}[b]{.45\textwidth}
\centering
\begin{tikzpicture}
\coordinate (o) at (-0.75,0);
\coordinate (e) at (0.75,0);
\coordinate (n) at (0,0.75);
\coordinate (s) at (0,-0.75);
\draw[thick] ($(o) + (0,0.05)$) -- (-0.05,0.05) -- ($(n) + (-0.05,0)$);
\draw[thick] ($(o) + (0,-0.05)$) -- (-0.05,-0.05) -- ($(s) + (-0.05,0)$);
\draw[thick] ($(e) + (0,0.05)$) -- (0.05,0.05) -- ($(n) + (0.05,0)$);
\draw[thick] ($(e) + (0,-0.05)$) -- (0.05,-0.05) -- ($(s) + (0.05,0)$);
\end{tikzpicture}
\end{minipage}
\begin{minipage}[b]{0.45\textwidth}
\centering
\begin{tikzpicture}
\coordinate (o) at (-0.75,0);
\coordinate (e) at (0.75,0);
\coordinate (n) at (0,0.75);
\coordinate (s) at (0,-0.75);
\draw[thick] ($(o) + (0,0.1)$) -- (-0.1,0.1) -- ($(n) + (-0.1,0)$);
\draw[thick] (o) -- (e);
\draw[thick] (n) -- (s);
\draw[thick] ($(e) + (0,-0.1)$) -- (0.1,-0.1) -- ($(s) + (0.1,0)$);
\end{tikzpicture}
\end{minipage}
\caption{A true pie (left) and a false pie (right).}
\label{b1}
\end{subfigure}
\vspace*{5mm}
\begin{subfigure}[b]{\linewidth}
\centering
\begin{minipage}[b]{.45\textwidth}
\centering
\begin{tikzpicture}[scale=.8]
\coordinate (o) at (-1,0);
\coordinate (e) at (1,0);
\coordinate (n) at (0,1);
\coordinate (s) at (0,-1);
\draw[thick] (o) -- ($(e)+ (0.2,0)$) node[pos=0.99,above] {\tiny $\mathcal{P}_e$};
\draw[thick] ($(o) + (0,0.1)$) -- (-0.05,0.1) -- ($(n) + (-0.05,0)$) node[pos=0.99,left] {\tiny $\mathcal{P}_a$};
\draw[thick] ($(o) + (0,-0.1)$) -- (-0.05,-0.1) -- ($(s) + (-0.05,0)$) node[pos=0.99,left] {\tiny $\mathcal{P}_d$};
\draw[thick] ($(e) + (0,0.1)$) -- (0.05,0.1) -- ($(n) + (0.05,0)$) node[pos=0.99,right] {\tiny $\mathcal{P}_b$};
\draw[thick] ($(e) + (0,-0.1)$) -- (0.05,-0.1) -- ($(s) + (0.05,0)$) node[pos=0.99,right] {\tiny $\mathcal{P}_c$};
\draw[thick] ($(o) + (0,0.2)$) -- ($(o) + (0.6,0.2)$) node[midway,label=above:{\tiny $\mathcal{P}_z$}] {};
\end{tikzpicture}
\end{minipage}
\begin{minipage}[b]{.45\textwidth}
\centering
\begin{tikzpicture}[scale=.8]
\coordinate (o) at (-1,0);
\coordinate (e) at (1,0);
\coordinate (n) at (0,1);
\coordinate (s) at (0,-1);
\draw[thick] ($(o) + (.55,0)$) -- ($(e)+ (0.2,0)$) node[pos=0.99,above] {\tiny $\mathcal{P}_e$};
\draw[thick] ($(o) + (0,0.1)$) -- (-0.05,0.1) -- ($(n) + (-0.05,0)$) node[pos=0.99,left] {\tiny $\mathcal{P}_a$};
\draw[thick] ($(o) + (0,-0.1)$) -- (-0.05,-0.1) -- ($(s) + (-0.05,0)$) node[pos=0.99,left] {\tiny $\mathcal{P}_d$};
\draw[thick] ($(e) + (0,0.1)$) -- (0.05,0.1) -- ($(n) + (0.05,0)$) node[pos=0.99,right] {\tiny $\mathcal{P}_b$};
\draw[thick] ($(e) + (0,-0.1)$) -- (0.05,-0.1) -- ($(s) + (0.05,0)$) node[pos=0.99,right] {\tiny $\mathcal{P}_c$};
\draw[thick] ($(o) + (0,0.2)$) -- ($(o) + (0.45,0.2)$) node[midway,label=above:{\tiny $\mathcal{P}_z$}] {};
\end{tikzpicture}
\end{minipage}
\caption{Vertex $z$ is adjacent to $a$, $d$ and a center (left) or not (right).}
\label{two}
\end{subfigure}
\end{minipage}
\begin{minipage}[b]{.5\textwidth}
\centering
\begin{subfigure}[b]{\linewidth}
\begin{minipage}[b]{0.45\textwidth}
\centering
\begin{tikzpicture}[scale=.8]
\coordinate (o) at (-1,0);
\coordinate (e) at (1,0);
\coordinate (n) at (0,1);
\coordinate (s) at (0,-1);
\draw[thick] (o) -- ($(e)+ (0.2,0)$) node[pos=0.99,above] {\tiny $\mathcal{P}_e$};
\draw[thick] ($(o) + (0,0.1)$) -- (-0.05,0.1) -- ($(n) + (-0.05,0)$) node[pos=0.99,left] {\tiny $\mathcal{P}_a$};
\draw[thick] ($(o) + (0,-0.1)$) -- (-0.05,-0.1) -- ($(s) + (-0.05,0)$) node[pos=0.99,left] {\tiny $\mathcal{P}_d$};
\draw[thick] ($(e) + (0,0.1)$) -- (0.05,0.1) -- ($(n) + (0.05,0)$) node[pos=0.99,right] {\tiny $\mathcal{P}_b$};
\draw[thick] ($(e) + (0,-0.1)$) -- (0.05,-0.1) -- ($(s) + (0.05,0)$) node[pos=0.99,right] {\tiny $\mathcal{P}_c$};
\end{tikzpicture}
\end{minipage}
\begin{minipage}[b]{0.45\textwidth}
\centering
\begin{tikzpicture}[scale=.8]
\coordinate (o) at (-1,0);
\coordinate (e) at (1,0);
\coordinate (n) at (0,1);
\coordinate (s) at (0,-1);
\draw[thick] (s) -- ($(n)+ (0,0.2)$) node[pos=0.99,above] {\tiny $\mathcal{P}_e$};
\draw[thick] ($(o) + (0,0.05)$) -- (-0.1,0.05) -- ($(n) + (-0.1,0)$) node[pos=0.99,left] {\tiny $\mathcal{P}_a$};
\draw[thick] ($(o) + (0,-0.05)$) -- (-0.1,-0.05) -- ($(s) + (-0.1,0)$) node[pos=0.99,left] {\tiny $\mathcal{P}_d$};
\draw[thick] ($(e) + (0,0.05)$) -- (0.1,0.05) -- ($(n) + (0.1,0)$) node[pos=0.99,right] {\tiny $\mathcal{P}_b$};
\draw[thick] ($(e) + (0,-0.05)$) -- (0.1,-0.05) -- ($(s) + (0.1,0)$) node[pos=0.99,right] {\tiny $\mathcal{P}_c$};
\end{tikzpicture}
\end{minipage}
\caption{Representations of $W_4$ with a true pie.}
\label{tworep}
\end{subfigure}
\vspace*{5mm}
\begin{subfigure}[b]{\linewidth}
\begin{minipage}[b]{0.45\textwidth}
\centering
\begin{tikzpicture}[scale=.8]
\coordinate (o) at (-1,0);
\coordinate (e) at (1,0);
\coordinate (n) at (0,1);
\coordinate (s) at (0,-1);
\draw[thick] (o) -- ($(e)+ (0.2,0)$) node[pos=0.99,above] {\tiny $\mathcal{P}_e$};
\draw[thick] ($(o) + (0,0.1)$) -- (-0.05,0.1) -- ($(n) + (-0.05,0)$) node[pos=0.99,left] {\tiny $\mathcal{P}_a$};
\draw[thick] ($(o) + (0,-0.1)$) -- (-0.05,-0.1) -- ($(s) + (-0.05,0)$) node[pos=0.99,left] {\tiny $\mathcal{P}_d$};
\draw[thick] ($(e) + (0,0.1)$) -- (0.05,0.1) -- ($(n) + (0.05,0)$) node[pos=0.99,right] {\tiny $\mathcal{P}_b$};
\draw[thick] ($(e) + (0,-0.1)$) -- (0.05,-0.1) -- ($(s) + (0.05,0)$) node[pos=0.99,right] {\tiny $\mathcal{P}_c$};
\draw[thick] ($(n) + (0.15,-0.2)$) -- (0.15, 0.2) -- ($(e) + (-0.2,0.2)$) node[pos=0.99,above] {\tiny $\mathcal{P}_v$};
\end{tikzpicture}
\end{minipage}
\begin{minipage}[b]{0.45\textwidth}
\centering
\begin{tikzpicture}[scale=.8]
\coordinate (o) at (-1,0);
\coordinate (e) at (1,0);
\coordinate (n) at (0,1);
\coordinate (s) at (0,-1);
\draw[thick] (s) -- ($(n)+ (0,0.2)$) node[pos=0.99,above] {\tiny $\mathcal{P}_e$};
\draw[thick] ($(o) + (0,0.05)$) -- (-0.1,0.05) -- ($(n) + (-0.1,0)$) node[pos=0.99,left] {\tiny $\mathcal{P}_a$};
\draw[thick] ($(o) + (0,-0.05)$) -- (-0.1,-0.05) -- ($(s) + (-0.1,0)$) node[pos=0.99,left] {\tiny $\mathcal{P}_d$};
\draw[thick] ($(e) + (0,0.05)$) -- (0.1,0.05) -- ($(n) + (0.1,0)$) node[pos=0.99,right] {\tiny $\mathcal{P}_b$};
\draw[thick] ($(e) + (0,-0.05)$) -- (0.1,-0.05) -- ($(s) + (0.1,0)$) node[pos=0.99,right] {\tiny $\mathcal{P}_c$};
\draw[thick] ($(n) + (0.2,-0.2)$) -- (0.2,0.15) -- ($(e) + (0,0.15)$) node[pos=0.99,above] {\tiny $\mathcal{P}_v$};
\end{tikzpicture}
\end{minipage}
\caption{Vertex $v$ is adjacent to $a$, $b$ and $c$.}
\label{three}
\end{subfigure}
\end{minipage}
\caption{$B_1$-EPG representations.}
\end{figure}
\end{center}
\textit{\underline{Sufficient condition}.} Let $G=(V,E)$ be a PCA graph which is $\{H_i, C_{4k-1}^k, 1 \leq i \leq 8, k \geq 2\}$-free. Consider a normal proper representation $\mathcal{R} = (\mathcal{C}, \mathcal{A})$ of $G$, where $\mathcal{C}$ is a circle and $\mathcal{A}$ is a set of open arcs of $\mathcal{C}$ in one-to-one correspondence with the vertices of $G$ (notice that such a representation exists due to \cite{Tucker}). Before turning to the proof, let us first make the following observation.
\begin{observation}
\label{obs1}
Any set of arcs in $\mathcal{A}$ covering the circle $\mathcal{C}$ corresponds to a dominating set in $G$. In particular, if $G$ contains a 4-wheel as an induced subgraph, then $G$ has a dominating triangle.
\end{observation}
\textit{Proof.} It is clear that such a set of arcs corresponds to a dominating set in $G$. If $G$ contains a 4-wheel, then the arcs corresponding to the 4-cycle $C$ of the 4-wheel cover the circle $\mathcal{C}$. But then the arc representing the center of the 4-wheel together with two arcs corresponding to two vertices of $C$ must also cover $\mathcal{C}$, i.e. $G$ has a dominating triangle. $\diamond$\\
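Observation \ref{obs1} can be checked on the 4-wheel itself (a small sketch assuming networkx):

```python
import networkx as nx

# In the 4-wheel, the 4-cycle is a dominating set, and the center together
# with one edge of the cycle forms a dominating triangle.
W4 = nx.wheel_graph(5)                 # node 0 is the center, 1..4 the 4-cycle
assert nx.is_dominating_set(W4, [1, 2, 3, 4])
triangle = [0, 1, 2]                   # center plus the cycle edge 1-2
assert W4.subgraph(triangle).number_of_edges() == 3   # it is indeed a triangle
assert nx.is_dominating_set(W4, triangle)
```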
If we assume that no three arcs in $\mathcal{A}$ cover $\mathcal{C}$, then $G$ is PHCA and the result follows from the fact that PHCA $\cap \{C_{4k -1}^k\}_{k \geq 2}$-free $\subset$ NHCA $\cap \{C_{4k -1}^k\}_{k \geq 2}$-free $=B_1$-EPR (see \cite{NCA}). Hence, we may assume that there exist three arcs in $\mathcal{A}$ covering $\mathcal{C}$. If $G$ contains a 4-wheel, let $C = \{x_1,x_2,x_3\}$ denote the dominating triangle following from Observation \ref{obs1}, with $x_1$ being the center of the 4-wheel. Otherwise, let $C = \{x_1, x_2, x_3\}$ be any triangle whose corresponding arcs cover $\mathcal{C}$. In both cases, each vertex is adjacent to at least two vertices of $C$. Indeed, if $G$ contains a $4$-wheel, then this follows from the fact that $G$ is claw-free (see Theorem \ref{PCA}); if $G$ contains no $4$-wheel, this follows from the fact that $\mathcal{R}$ is a proper representation. For $j \in \{1,2,3\}$, let $\mathcal{A}_{j,j+1} = \{x \in V ~|~ xx_{j-1} \not \in E\}$, where indices are taken modulo 3, be the subset of vertices adjacent to only $x_j$ and $x_{j+1}$. Note that each $\mathcal{A}_{j,j+1}$ is a clique as $G$ would otherwise contain an induced claw, namely $x_{j+1}, x_{j-1}, x, x'$ for any two $x, x' \in \mathcal{A}_{j,j+1}$ such that $xx' \not \in E$. Similarly, consider the subset of vertices $\mathcal{A}_c = \{x \in V ~|~ \forall j \in \{1,2,3\}, xx_j \in E\}$ adjacent to all three vertices of $C$. We now distinguish cases depending on whether $G$ contains a 4-wheel as an induced subgraph or not.
\begin{case}[\textit{$G$ contains an induced 4-wheel}]
According to the above, $\mathcal{A}_{1,2}$ and $\mathcal{A}_{1,3}$ are not anti-complete. Thus, there exist $x_4 \in \mathcal{A}_{1,3}$ and $x_5 \in \mathcal{A}_{1,2}$ such that $x_4x_5 \in E$, which together with $x_2$ and $x_3$ form the 4-cycle $C'$ of the 4-wheel. Since $C'$ is dominating and $G$ has no induced claw, each remaining vertex of $G$ is adjacent to at least two vertices of $C'$. Consider accordingly the subset of vertices $\mathcal{A}_{3,4}$ (resp. $\mathcal{A}_{4,5}$, $\mathcal{A}_{5,2}$) adjacent to only $x_3$ and $x_4$ (resp. $x_4$ and $x_5$, $x_5$ and $x_2$), the subset of vertices $\mathcal{A}_2$ (resp. $\mathcal{A}_3$; $\mathcal{A}_4$; $\mathcal{A}_5$) adjacent to only $x_5$, $x_2$ and $x_3$ (resp. $x_2$, $x_3$ and $x_4$; $x_3$, $x_4$ and $x_5$; $x_4$, $x_5$ and $x_2$) and the subset of vertices $\mathcal{A}_{c'} = \{ x \in V ~|~ \forall i \in \{2,3,4,5\}, xx_i \in E\}$ adjacent to all vertices of $C'$ (note that $x_1 \in \mathcal{A}_{c'}$). Since $G$ contains no induced claw, each $\mathcal{A}_{j,j+1}$ is a clique, as well as each $\mathcal{A}_j$. Furthermore, since $G$ contains no induced:
\begin{itemize}
\item[$\bullet$] $H_1$, each $\mathcal{A}_j$ is complete to $\mathcal{A}_{c'}$, for $j=2,3,4,5$;
\item[$\bullet$] $H_2$, we have $\mathcal{A}_2 - \mathcal{A}_3 - \mathcal{A}_4 - \mathcal{A}_5 - \mathcal{A}_2$;
\item[$\bullet$] $H_7$, we have $\mathcal{A}_2 \cdots \mathcal{A}_4$ and $\mathcal{A}_3 \cdots \mathcal{A}_5$;
\item[$\bullet$] $H_5$, we have $\mathcal{A}_{j,k} \cdots \mathcal{A}_i$ for all $ i \neq j,k$ with $(j,k) \in \{(2,3), (3,4), (4,5), (5,2)\}$;
\item[$\bullet$] $H_8$ and the 5-wheel (by Theorem \ref{PCA}), we have $\mathcal{A}_{2,3} \cdots \mathcal{A}_{3,4} \cdots \mathcal{A}_{4,5} \cdots \mathcal{A}_{5,2} \cdots \mathcal{A}_{2,3}$;
\item[$\bullet$] $\overline{C_6}$ (by Theorem \ref{PCA}), we have $\mathcal{A}_{2,3} \cdots \mathcal{A}_{4,5}$ and $\mathcal{A}_{3,4} \cdots \mathcal{A}_{5,2}$;
\item[$\bullet$] claw (by Theorem \ref{PCA}), we have $\mathcal{A}_j - \mathcal{A}_{j,k} - \mathcal{A}_{k}$ for $(j,k) \in \{(2,3), (3,4), (4,5), (5,2)\}$.
\end{itemize}
Now, if we assume that both $\mathcal{A}_{2,3}$ and $\mathcal{A}_{4,5}$ are nonempty, then both are complete to $\mathcal{A}_{c'}$ as $G$ would otherwise contain either $G_2$ or $G_5$ as an induced subgraph (see Fig. \ref{Fig:PCA}). But then $\mathcal{A}_{c'}$ is a clique since $G$ does not contain $H_4$ as an induced subgraph. Consequently, $\mathcal{A}_{3,4}$ and $\mathcal{A}_{5,2}$ cannot both be nonempty; if it were indeed the case, both $\mathcal{A}_{3,4}$ and $\mathcal{A}_{5,2}$ would also be complete to $\mathcal{A}_{c'}$, and $G$ would then contain an induced claw, namely $x_1,v,w$ and $z$, with $v\in \mathcal{A}_{2,3}$, $w\in \mathcal{A}_{3,4}$ and $z\in \mathcal{A}_{4,5}$. Hence, we may assume, without loss of generality, that $\mathcal{A}_{5,2} = \emptyset$ (the same reasoning applies if we assume that $\mathcal{A}_{3,4} = \emptyset$). But then $\mathcal{A}_{c'}$ and $\mathcal{A}_{3,4}$ must be anti-complete as $G$ would otherwise contain an induced claw (the same as before), and $G$ is consequently $B_1$-EPG (see Fig. \ref{cas1}).
\begin{center}
\begin{figure}[h]
\begin{minipage}[b]{0.5\textwidth}
\begin{subfigure}[b]{\linewidth}
\centering
\begin{tikzpicture}[node distance=1cm]
\node[clique] (Ac') {$\mathcal{A}_{c'}$};
\node[circ,label={[label distance=.05cm]180: \tiny $x_2$}] (a2) [below left of=Ac'] {};
\node[circ,label={[label distance=.05cm]0:\tiny $x_3$}] (a3) [below right of=Ac'] {};
\node[circ,label={[label distance=.05cm]0:\tiny $x_4$}] (a4) [above right of=Ac'] {};
\node[circ,label={[label distance=.05cm]180:\tiny $x_5$}] (a5) [above left of=Ac'] {};
\begin{pgfinterruptboundingbox}
\node[clique] (A2) [below left of=a2] {$\mathcal{A}_2$};
\node[clique] (A23) [below right of=a2] {$\mathcal{A}_{2,3}$};
\node[clique] (A3) [below right of=a3] {$\mathcal{A}_3$};
\node[clique] (A34) [above right of=a3] {$\mathcal{A}_{3,4}$};
\node[clique] (A4) [above right of=a4] {$\mathcal{A}_4$};
\node[clique] (A45) [above left of=a4] {$\mathcal{A}_{4,5}$};
\node[clique] (A5) [above left of=a5] {$\mathcal{A}_5$};
\end{pgfinterruptboundingbox}
\draw (Ac') edge[-] (a2)
(Ac') edge[-] (a3)
(Ac') edge[-] (a4)
(Ac') edge[-] (a5)
(Ac') edge[-] (A23)
(Ac') edge[-] (A45)
(Ac') edge[-,bend right=30] (A2)
(Ac') edge[-,bend left=30] (A3)
(Ac') edge[-,bend left=30] (A4)
(Ac') edge[-,bend right=30] (A5)
(a2) edge[-] (a3)
(a2) edge[-] (a5)
(a2) edge[-,bend left=10] (A5)
(a2) edge[-] (A2)
(a2) edge[-] (A23)
(a2) edge[-,bend right=10] (A3)
(a3) edge[-] (a4)
(a3) edge[-,bend left=10] (A2)
(a3) edge[-] (A23)
(a3) edge[-] (A3)
(a3) edge[-] (A34)
(a3) edge[-,bend right=10] (A4)
(a4) edge[-] (a5)
(a4) edge[-,bend left=10] (A3)
(a4) edge[-] (A34)
(a4) edge[-] (A4)
(a4) edge[-] (A45)
(a4) edge[-,bend right=10] (A5)
(a5) edge[-,bend left=10] (A4)
(a5) edge[-] (A45)
(a5) edge[-] (A5)
(a5) edge[-,bend right=10] (A2)
(A2) edge[-,bend left=25] (A5)
(A2) edge[-] (A23)
(A2) edge[-,bend right=25] (A3)
(A3) edge[-] (A23)
(A3) edge[-] (A34)
(A3) edge[-,bend right=25] (A4)
(A4) edge[-] (A34)
(A4) edge[-] (A45)
(A4) edge[-,bend right=25] (A5)
(A5) edge[-] (A45);
\end{tikzpicture}
\caption{General structure of $G$.}
\end{subfigure}
\end{minipage}
\begin{minipage}[b]{0.5\textwidth}
\begin{subfigure}[b]{\linewidth}
\centering
\begin{tikzpicture}
\coordinate (s) at (0,-.5);
\coordinate (e) at (2,0);
\coordinate (n) at (0,.8);
\coordinate (o) at (-2,0);
\coordinate (c) at (0,0);
\node[invisible] (fake) at (0,-1) {};
\node[invisible] (Ac') at ($(e) + (0.6,0)$) {$\mathcal{A}_{c'}$};
\node[invisible] (A23) at ($(e) + (0.9,0.15)$) {$\mathcal{A}_{2,3}$};
\node[invisible] (a3) at ($(e) + (0.3,0.3)$) {$\mathcal{P}_{x_3}$};
\node[invisible] (A3) at ($(e) + (-.08,0.45)$) {$\mathcal{A}_3$};
\node[invisible] (a2) at ($(e) + (0.3,-0.15)$) {$\mathcal{P}_{x_2}$};
\node[invisible] (A2) at ($(e) + (0,-0.3)$) {$\mathcal{A}_2$};
\node[invisible] (A45) at ($(o) + (-0.6,0.15)$) {$\mathcal{A}_{4,5}$};
\node[invisible] (a4) at ($(o) + (-0.3,0.3)$) {$\mathcal{P}_{x_4}$};
\node[invisible] (A4) at ($(o) + (0,0.45)$) {$\mathcal{A}_4$};
\node[invisible] (a5) at ($(o) + (-0.3,-0.15)$) {$\mathcal{P}_{x_5}$};
\node[invisible] (A5) at ($(o) + (0,-0.3)$) {$\mathcal{A}_5$};
\node[invisible,label=right:\tiny $\mathcal{A}_{3,4}$](A34) at ($(n) + (0,0.3)$) {};
\draw[very thick] (o) -- (Ac');
\draw[thick] (A45) -- ($(o) + (1,0.15)$);
\draw[thick] (A23) -- ($(e) + (-1,0.15)$);
\draw[thick] (A34) -- ($(n) + (0,-.5)$);
\draw (a4) -- ($(o) + (1.85,0.3)$) -- ($(n) + (-0.15,0)$);
\draw (a5) -- ($(o) + (1.85,-0.15)$) -- ($(s) + (-0.15,0)$);
\draw ($(s) + (0.15,0)$) -- ($(s) + (0.15,.35)$) -- (a2);
\draw (a3) -- ($(e) + (-1.85,0.3)$) -- ($(n) + (0.15,0)$);
\draw[thick] (A4) -- ($(o) + (1.7,0.45)$) -- ($(n) + (-0.3,0)$);
\draw[thick] (A5) -- ($(o) + (1.7,-0.3)$) -- ($(s) + (-0.3,0)$);
\draw[thick] ($(s) + (0.3,0)$) -- ($(s) + (0.3,.2)$) -- (A2);
\draw[thick] (A3) -- ($(e) + (-1.7,0.45)$) -- ($(n) + (0.3,0)$);
\end{tikzpicture}
\caption{A $B_1$-EPG representation of $G$.}
\end{subfigure}
\end{minipage}
\caption{$G$ contains $W_4$ and both $\mathcal{A}_{2,3}$ and $\mathcal{A}_{4,5}$ are nonempty.}
\label{cas1}
\end{figure}
\end{center}
Now, assume, without loss of generality, that $\mathcal{A}_{2,3} = \mathcal{A}_{5,2} = \emptyset$ and $\mathcal{A}_{3,4}, \mathcal{A}_{4,5} \neq\emptyset$. We know from the above that $\mathcal{A}_{3,4}$ is anti-complete to $\mathcal{A}_{4,5}$. Also, for all $x \in \mathcal{A}_{c'}$, $x$ must be either complete to $\mathcal{A}_{3,4}$ and anti-complete to $\mathcal{A}_{4,5}$ or, conversely, anti-complete to $\mathcal{A}_{3,4}$ and complete to $\mathcal{A}_{4,5}$, as $G$ would otherwise contain an induced claw; indeed, if there exist $x' \in \mathcal{A}_{3,4}$ and $x'' \in \mathcal{A}_{4,5}$ such that $x$ is nonadjacent (resp. adjacent) to both $x'$ and $x''$, then $x_4,x',x''$ and $x$ (resp. $x,x',x''$ and $x_2$) induce a claw. Thus, we can partition $\mathcal{A}_{c'}$ into subsets $\mathcal{A}_{c'}^i = \{x \in \mathcal{A}_{c'} ~|~ \forall x' \in \mathcal{A}_{i+2,i+3}, xx' \in E\}$ for $i = 1,2$, and, since $G$ is $H_4$-free, both of these subsets are cliques. Assuming $\mathcal{A}_{c'}^1$ and $\mathcal{A}_{c'}^2$ are nonempty, there cannot exist $x \in \mathcal{A}_{c'}^1$ and $x' \in \mathcal{A}_{c'}^2$ such that $xx' \not \in E$ since $G$ would otherwise contain an induced $G_2$ (see Fig. \ref{Fig:PCA}); indeed, $x_3, x, x_5, x'$ and $x_2$ would form a 4-wheel, with one vertex of $\mathcal{A}_{3,4}$ adjacent to only $x_3$ and $x$, and one vertex of $\mathcal{A}_{4,5}$ adjacent to only $x_5$ and $x'$. Hence, $\mathcal{A}_{c'}^1 \cup \mathcal{A}_{c'}^2$ is a clique; but then, $G$ contains $H_6$ as an induced subgraph. Thus, we have to assume that exactly one of $\mathcal{A}_{c'}^1$ and $\mathcal{A}_{c'}^2$ is empty, and $G$ is then $B_1$-EPG as an induced subgraph of the previous case.
Suppose now that only one of the $\mathcal{A}_{j,j+1}$ is nonempty, for instance $\mathcal{A}_{3,4}$. If $x \in \mathcal{A}_{c'}$ is adjacent to some vertex $x' \in \mathcal{A}_{3,4}$, then $x$ is complete to $\mathcal{A}_{3,4}$ as $G$ would otherwise contain an induced $G_4$ (see Fig. \ref{Fig:PCA}). We can therefore partition $\mathcal{A}_{c'}$ into two subsets $\mathcal{A}_{c'}^a = \{x \in \mathcal{A}_{c'} ~|~ \forall x' \in \mathcal{A}_{3,4}, xx' \in E\}$ and $\mathcal{A}_{c'}^{na} = \{x \in \mathcal{A}_{c'} ~|~ \forall x' \in \mathcal{A}_{3,4}, xx' \not \in E\}$. Since $G$ does not contain an induced $H_4$, $\mathcal{A}_{c'}^a$ must be a clique; and, since $G$ does not contain an induced claw, $\mathcal{A}_{c'}^{na}$ must also be a clique. If one of $\mathcal{A}_{c'}^a$ and $\mathcal{A}_{c'}^{na}$ is empty, then $G$ is $B_1$-EPG as an induced subgraph of the first case. If both are nonempty, then $\mathcal{A}_{c'}^a \cup \mathcal{A}_{c'}^{na}$ is a clique, as $G$ would otherwise contain an induced $G_3$ (see Fig. \ref{Fig:PCA}), and $G$ is consequently $B_1$-EPG (see Fig. \ref{cas2}).
\begin{center}
\begin{figure}[h]
\begin{minipage}[b]{0.5\textwidth}
\begin{subfigure}[b]{\linewidth}
\centering
\begin{tikzpicture}[node distance=1cm]
\node[invisible] (c) {};
\node[circ,red] (Ac'a) [right of=c,xshift=-.7cm] {};
\node[circ,blue] (Ac'na) [left of=c,xshift=.7cm] {};
\node[circ,label={[label distance=.05cm]180: \tiny $x_2$}] (a2) [below left of=c] {};
\node[circ,label={[label distance=.05cm]0: \tiny $x_3$}] (a3) [below right of=c] {};
\node[circ,label={[label distance=.05cm]0: \tiny $x_4$}] (a4) [above right of=c] {};
\node[circ,label={[label distance=.05cm]180: \tiny $x_5$}] (a5) [above left of=c] {};
\begin{pgfinterruptboundingbox}
\node[clique] (A2) [below left of=a2] {$\mathcal{A}_2$};
\node[clique] (A3) [below right of=a3] {$\mathcal{A}_3$};
\node[clique] (A34) [above right of=a3] {$\mathcal{A}_{3,4}$};
\node[clique] (A4) [above right of=a4] {$\mathcal{A}_4$};
\node[clique] (A5) [above left of=a5] {$\mathcal{A}_5$};
\end{pgfinterruptboundingbox}
\draw (Ac'a) edge[-] (a2)
(Ac'a) edge[-] (a3)
(Ac'a) edge[-] (a4)
(Ac'a) edge[-] (a5)
(Ac'a) edge[-,bend left=30] (A2)
(Ac'a) edge[-,bend left=30] (A3)
(Ac'a) edge[-,bend right=30] (A4)
(Ac'a) edge[-,bend right=30] (A5)
(Ac'a) edge[-] (Ac'na)
(Ac'a) edge[-] (A34)
(Ac'na) edge[-] (a2)
(Ac'na) edge[-] (a3)
(Ac'na) edge[-] (a4)
(Ac'na) edge[-] (a5)
(Ac'na) edge[-,bend right=30] (A2)
(Ac'na) edge[-,bend right=30] (A3)
(Ac'na) edge[-,bend left=30] (A4)
(Ac'na) edge[-,bend left=30] (A5)
(a2) edge[-] (a3)
(a2) edge[-] (a5)
(a2) edge[-,bend left=10] (A5)
(a2) edge[-] (A2)
(a2) edge[-,bend right=10] (A3)
(a3) edge[-] (a4)
(a3) edge[-,bend left=10] (A2)
(a3) edge[-] (A3)
(a3) edge[-] (A34)
(a3) edge[-,bend right=10] (A4)
(a4) edge[-] (a5)
(a4) edge[-,bend left=10] (A3)
(a4) edge[-] (A34)
(a4) edge[-] (A4)
(a4) edge[-,bend right=10] (A5)
(a5) edge[-,bend left=10] (A4)
(a5) edge[-] (A5)
(a5) edge[-,bend right=10] (A2)
(A2) edge[-,bend left=25] (A5)
(A2) edge[-,bend right=25] (A3)
(A3) edge[-] (A34)
(A3) edge[-,bend right=25] (A4)
(A4) edge[-] (A34)
(A4) edge[-,bend right=25] (A5);
\end{tikzpicture}
\caption{General structure of $G$ (in red: $\mathcal{A}_{c'}^a$; in blue: $\mathcal{A}_{c'}^{na}$).}
\end{subfigure}
\end{minipage}
\begin{minipage}[b]{0.5\textwidth}
\begin{subfigure}[b]{\linewidth}
\centering
\begin{tikzpicture}[scale=1.1]
\coordinate (s) at (0,-1);
\coordinate (e) at (2,0);
\coordinate (n) at (0,1);
\coordinate (o) at (-2,0);
\coordinate (c) at (0,0);
\node[invisible] (fake) at (0,-2) {};
\node[invisible,label=below:{\tiny $\mathcal{A}_{c'}^a$}] (Ac'a) at ($(o) + (-0.6,0)$) {};
\node[invisible,label=above:{\tiny $\mathcal{A}_{c'}^{na}$}] (Ac'na) at ($(o) + (-0.6,0.15)$) {};
\node[invisible] (A34) at ($(e) + (0.6,0.15)$) {$\mathcal{A}_{3,4}$};
\node[invisible] (a4) at ($(e) + (0.22,0.3)$) {$\mathcal{P}_{x_4}$};
\node[invisible] (A4) at ($(e) + (-.15,0.45)$) {$\mathcal{A}_4$};
\node[invisible] (a3) at ($(e) + (0.3,-0.15)$) {$\mathcal{P}_{x_3}$};
\node[invisible] (A3) at ($(e) + (0,-0.3)$) {$\mathcal{A}_3$};
\node[invisible] (a5) at ($(o) + (-0.2,0.3)$) {$\mathcal{P}_{x_5}$};
\node[invisible] (A5) at ($(o) + (0.1,0.45)$) {$\mathcal{A}_5$};
\node[invisible] (a2) at ($(o) + (-0.2,-0.15)$) {$\mathcal{P}_{x_2}$};
\node[invisible] (A2) at ($(o) + (0.1,-0.3)$) {$\mathcal{A}_2$};
\draw[very thick] ($(Ac'a) + (-0.1,0)$) -- (e);
\draw[thick] ($(Ac'na) + (-0.1,0)$) -- ($(e) + (-1.5,0.15)$);
\draw[thick] (A34) -- ($(e) + (-1,0.15)$);
\draw (a5) -- ($(o) + (1.95,0.3)$) -- ($(n) + (-0.05,0)$);
\draw (a2) -- ($(o) + (1.95,-0.15)$) -- ($(s) + (-0.05,0)$);
\draw ($(s) + (0.05,0)$) -- ($(s) + (0.05,0.85)$) -- (a3);
\draw (a4) -- ($(e) + (-1.95,0.3)$) -- ($(n) + (0.05,0)$);
\draw[thick] (A5) -- ($(o) + (1.8,0.45)$) -- ($(n) + (-0.2,0)$);
\draw[thick] (A2) -- ($(o) + (1.8,-0.3)$) -- ($(s) + (-0.2,0)$);
\draw[thick] ($(s) + (0.2,0)$) -- ($(s) + (0.2,0.7)$) -- (A3);
\draw[thick] (A4) -- ($(e) + (-1.8,0.45)$) -- ($(n) + (0.2,0)$);
\end{tikzpicture}
\caption{A $B_1$-EPG representation of $G$.}
\end{subfigure}
\end{minipage}
\caption{$G$ contains $W_4$ and only $\mathcal{A}_{3,4}$ is nonempty.}
\label{cas2}
\end{figure}
\end{center}
Finally, if we assume that every $\mathcal{A}_{j,j+1}$ is empty, since $G$ does not contain $H_3$ or a claw as an induced subgraph, we can partition $\mathcal{A}_{c'}$ into two cliques, $\mathcal{A}_{c'}^1$ and $\mathcal{A}_{c'}^2$, which are anti-complete, and again $G$ is $B_1$-EPG (see Fig. \ref{cas3}).
\end{case}
\begin{center}
\begin{figure}[h]
\begin{minipage}[b]{0.5\textwidth}
\begin{subfigure}[b]{\linewidth}
\centering
\begin{tikzpicture}[node distance=1cm]
\node[invisible] (c) {};
\node[circ,red] (Ac'1) [right of=c,xshift=-.7cm] {};
\node[circ,blue] (Ac'2) [left of=c,xshift=.7cm] {};
\node[circ,label={[label distance=.05cm]180: \tiny $x_2$}] (a2) [below left of=c] {};
\node[circ,label={[label distance=.05cm]0: \tiny $x_3$}] (a3) [below right of=c] {};
\node[circ,label={[label distance=.05cm]0: \tiny $x_4$}] (a4) [above right of=c] {};
\node[circ,label={[label distance=.05cm]180: \tiny $x_5$}] (a5) [above left of=c] {};
\begin{pgfinterruptboundingbox}
\node[clique] (A2) [below left of=a2] {$\mathcal{A}_2$};
\node[clique] (A3) [below right of=a3] {$\mathcal{A}_3$};
\node[clique] (A4) [above right of=a4] {$\mathcal{A}_4$};
\node[clique] (A5) [above left of=a5] {$\mathcal{A}_5$};
\end{pgfinterruptboundingbox}
\draw (Ac'1) edge[-] (a2)
(Ac'1) edge[-] (a3)
(Ac'1) edge[-] (a4)
(Ac'1) edge[-] (a5)
(Ac'1) edge[-,bend left=30] (A2)
(Ac'1) edge[-,bend left=30] (A3)
(Ac'1) edge[-,bend right=30] (A4)
(Ac'1) edge[-,bend right=30] (A5)
(Ac'2) edge[-] (a2)
(Ac'2) edge[-] (a3)
(Ac'2) edge[-] (a4)
(Ac'2) edge[-] (a5)
(Ac'2) edge[-,bend right=30] (A2)
(Ac'2) edge[-,bend right=30] (A3)
(Ac'2) edge[-,bend left=30] (A4)
(Ac'2) edge[-,bend left=30] (A5)
(a2) edge[-] (a3)
(a2) edge[-] (a5)
(a2) edge[-,bend left=10] (A5)
(a2) edge[-] (A2)
(a2) edge[-,bend right=10] (A3)
(a3) edge[-] (a4)
(a3) edge[-,bend left=10] (A2)
(a3) edge[-] (A3)
(a3) edge[-,bend right=10] (A4)
(a4) edge[-] (a5)
(a4) edge[-,bend left=10] (A3)
(a4) edge[-] (A4)
(a4) edge[-,bend right=10] (A5)
(a5) edge[-,bend left=10] (A4)
(a5) edge[-] (A5)
(a5) edge[-,bend right=10] (A2)
(A2) edge[-,bend left=25] (A5)
(A2) edge[-,bend right=25] (A3)
(A3) edge[-,bend right=25] (A4)
(A4) edge[-,bend right=25] (A5);
\end{tikzpicture}
\caption{General structure of $G$ (in red: $\mathcal{A}_{c'}^1$; in blue: $\mathcal{A}_{c'}^2$).}
\end{subfigure}
\end{minipage}
\begin{minipage}[b]{0.5\textwidth}
\begin{subfigure}[b]{\linewidth}
\centering
\begin{tikzpicture}
\coordinate (s) at (0,-.5);
\coordinate (e) at (2,0);
\coordinate (n) at (0,.5);
\coordinate (o) at (-2,0);
\coordinate (c) at (0,0);
\node[invisible] (fake) at (0,-1) {};
\node[invisible] (Ac'1) at ($(e) + (0.7,0)$) {$\mathcal{A}_{c'}^1$};
\node[invisible] (a3) at ($(e) + (0.3,0.15)$) {$\mathcal{P}_{x_3}$};
\node[invisible] (A3) at ($(e) + (-.06,0.3)$) {$\mathcal{A}_3$};
\node[invisible] (a2) at ($(e) + (0.3,-0.15)$) {$\mathcal{P}_{x_2}$};
\node[invisible] (A2) at ($(e) + (0,-0.3)$) {$\mathcal{A}_2$};
\node[invisible] (a4) at ($(o) + (-0.3,0.15)$) {$\mathcal{P}_{x_4}$};
\node[invisible] (A4) at ($(o) + (0,0.3)$) {$\mathcal{A}_4$};
\node[invisible] (a5) at ($(o) + (-0.3,-0.15)$) {$\mathcal{P}_{x_5}$};
\node[invisible] (A5) at ($(o) + (0,-0.3)$) {$\mathcal{A}_5$};
\node[invisible,label=right:\tiny $\mathcal{A}_{c'}^2$] (Ac'2) at ($(n) + (0,0.3)$) {};
\draw[very thick] (o) -- (Ac'1);
\draw[very thick] (s) -- (Ac'2);
\draw (a4) -- ($(o) + (1.85,0.15)$) -- ($(n) + (-0.15,0)$);
\draw (a5) -- ($(o) + (1.85,-0.15)$) -- ($(s) + (-0.15,0)$);
\draw ($(s) + (0.15,0)$) -- ($(s) + (0.15,.35)$) -- (a2);
\draw (a3) -- ($(e) + (-1.85,0.15)$) -- ($(n) + (0.15,0)$);
\draw[thick] (A4) -- ($(o) + (1.7,0.3)$) -- ($(n) + (-0.3,0)$);
\draw[thick] (A5) -- ($(o) + (1.7,-0.3)$) -- ($(s) + (-0.3,0)$);
\draw[thick] ($(s) + (0.3,0)$) -- ($(s) + (0.3,.2)$) -- (A2);
\draw[thick] (A3) -- ($(e) + (-1.7,0.3)$) -- ($(n) + (0.3,0)$);
\end{tikzpicture}
\caption{A $B_1$-EPG representation of $G$.}
\end{subfigure}
\end{minipage}
\caption{$G$ contains $W_4$ and each $\mathcal{A}_{j,j+1}$ is empty.}
\label{cas3}
\end{figure}
\end{center}
\begin{case}[\textit{$G$ contains no induced 4-wheel}]
Assume henceforth that $\mathcal{A}_{1,2}, \mathcal{A}_{2,3}$ and $\mathcal{A}_{3,1}$ are pairwise anti-complete. If all three subsets are nonempty, then for all $x \in \mathcal{A}_c$, there must exist $j \in \{1,2,3\}$ such that $x$ is complete to both $\mathcal{A}_{j-1,j}$ and $\mathcal{A}_{j,j+1}$, and anti-complete to $\mathcal{A}_{j+1,j+2}$ (otherwise $G$ would contain an induced claw). Hence, we can partition $\mathcal{A}_c$ into three subsets $\mathcal{A}_c^j = \{x \in \mathcal{A}_c ~|~ \forall x' \in \mathcal{A}_{j-1,j} \cup \mathcal{A}_{j,j+1}, xx' \in E \text{ and } \forall x' \in \mathcal{A}_{j+1,j+2}, xx' \not \in E\}$ ($j \in \{1,2,3\}$) which must be cliques since $G$ does not contain an induced claw. But then either $\mathcal{A}_c$ is a clique, in which case $G$ is $B_1$-EPG (see Fig. \ref{cas4}), or there exist $x \in \mathcal{A}_c^j$ and $x' \in \mathcal{A}_c^{j+1}$ such that $xx' \not \in E$, and $G$ contains a 4-wheel with $x,x'',x',x_{j+2}$ (for some $x'' \in \mathcal{A}_{j,j+1}$) as its 4-cycle and $x_j$ as its center, which is contrary to our assumption.
\begin{center}
\begin{figure}[h]
\begin{minipage}[b]{0.5\textwidth}
\begin{subfigure}[b]{\linewidth}
\centering
\begin{tikzpicture}[node distance=.8cm]
\node[clique] (Ac3) {$\mathcal{A}_{c}^3$};
\node[clique] (Ac2) [below right of=Ac3] {$\mathcal{A}_{c}^2$};
\node[clique] (Ac1) [below left of=Ac3] {$\mathcal{A}_{c}^1$};
\node[circ,label=left:{\tiny $x_1$}] (a1) [below left of=Ac1,xshift=-0.5cm] {};
\node[circ,label=right:{\tiny $x_2$}] (a2) [below right of=Ac2,xshift=0.5cm] {};
\node[circ,label=above:{\tiny $x_3$}] (a3) [above of=Ac3] {};
\node[clique] (A12) [below right of=Ac1,yshift=-1cm] {$\mathcal{A}_{1,2}$};
\node[clique] (A23) [above right of=Ac3,xshift=1cm,yshift=0.5cm] {$\mathcal{A}_{2,3}$};
\node[clique] (A31) [above left of=Ac3,xshift=-1cm,yshift=0.5cm] {$\mathcal{A}_{3,1}$};
\draw (Ac1) edge[-] (a1)
(Ac1) edge[-] (a2)
(Ac1) edge[-] (a3)
(Ac1) edge[-] (Ac2)
(Ac1) edge[-] (Ac3)
(Ac1) edge[-] (A31)
(Ac1) edge[-] (A12)
(Ac2) edge[-] (a1)
(Ac2) edge[-] (a2)
(Ac2) edge[-] (a3)
(Ac2) edge[-] (Ac3)
(Ac2) edge[-] (A23)
(Ac2) edge[-] (A12)
(Ac3) edge[-,bend right=15] (a1)
(Ac3) edge[-,bend left=15] (a2)
(Ac3) edge[-] (a3)
(Ac3) edge[-] (A23)
(Ac3) edge[-] (A31)
(a1) edge[-] (a2)
(a1) edge[-] (a3)
(a1) edge[-] (A31)
(a1) edge[-] (A12)
(a2) edge[-] (a3)
(a2) edge[-] (A12)
(a2) edge[-] (A23)
(a3) edge[-] (A23)
(a3) edge[-] (A31);
\end{tikzpicture}
\caption{General structure of $G$.}
\end{subfigure}
\end{minipage}
\begin{minipage}[b]{0.5\textwidth}
\begin{subfigure}[b]{\linewidth}
\centering
\begin{tikzpicture}
\coordinate (e) at (2,0);
\coordinate (n) at (0,2);
\coordinate (o) at (-2,0);
\node[invisible] (fake) at (0,-1.5) {};
\node[invisible] (x1) at ($(e) + (0.6,0)$) {$\mathcal{P}_{x_1}$};
\node[invisible] (Ac1) at ($(o) + (-0.6,-0.15)$) {$\mathcal{A}_c^1$};
\node[invisible] (A12) at ($(e) + (0.9,0.15)$) {$\mathcal{A}_{1,2}$};
\node[invisible] (x2) at ($(e) + (0.35,0.3)$) {$\mathcal{P}_{x_2}$};
\node[invisible] (Ac2) at ($(e) + (-.1,0.44)$) {$\mathcal{A}_c^2$};
\node[invisible] (A31) at ($(o) + (-0.6,0.15)$) {$\mathcal{A}_{3,1}$};
\node[invisible] (x3) at ($(o) + (-0.3,0.3)$) {$\mathcal{P}_{x_3}$};
\node[invisible] (Ac3) at ($(o) + (0,0.45)$) {$\mathcal{A}_c^3$};
\node[invisible,label=right:\tiny $\mathcal{A}_{2,3}$] (A23) at ($(n) + (0,0.3)$) {};
\draw[very thick] (o) -- (x1);
\draw[thick] (Ac1) -- ($(e) + (0,-0.15)$);
\draw[thick] (A31) -- ($(o) + (1,0.15)$);
\draw[thick] (A12) -- ($(e) + (-1,0.15)$);
\draw[thick] (A23) -- ($(n) + (0,-1)$);
\draw (x3) -- ($(o) + (1.85,0.3)$) -- ($(n) + (-0.15,0)$);
\draw (x2) -- ($(e) + (-1.85,0.3)$) -- ($(n) + (0.15,0)$);
\draw[thick] (Ac3) -- ($(o) + (1.7,0.45)$) -- ($(n) + (-0.3,0)$);
\draw[thick] (Ac2) -- ($(e) + (-1.7,0.45)$) -- ($(n) + (0.3,0)$);
\end{tikzpicture}
\caption{A $B_1$-EPG representation of $G$.}
\end{subfigure}
\end{minipage}
\caption{$G$ does not contain $W_4$, all three $\mathcal{A}_{j,j+1}$ are nonempty and $\mathcal{A}_c$ is a clique.}
\label{cas4}
\end{figure}
\end{center}
\end{case}
If we now assume that at least one of the subsets $\mathcal{A}_{j,j+1}$ is empty, then at least one vertex $x_i$ of $C$ is adjacent to every vertex of $G$ and $G$ is consequently an interval graph. Indeed, assume that $G\backslash x_i$ contains an induced cycle $C' = y_1, \cdots, y_l$ with $l>3$. Then, together with $x_i$, it would form an $l$-wheel. But since $G$ is proper, it cannot contain a $k$-wheel with $k > 4$. Hence $l = 4$ and $G$ would contain a 4-wheel which is contrary to our assumption. Therefore, $G$ has no induced cycle of length larger than 3, i.e. $G$ is chordal. Furthermore, if there existed three pairwise nonadjacent vertices in $G\backslash x_i$, then together with $x_i$ they would induce a claw. Hence, $G\backslash x_i$ contains no asteroidal triple, i.e. $G$ is an interval graph (see Theorem \ref{theo:interval}), and therefore $B_1$-EPG.
\end{proof}
From the characterisation given in \cite{NCA} of $B_1$-EPR graphs, i.e. intersection graphs of paths with at most one bend on a rectangle of a grid, we deduce the following characterisation, by a family of minimal forbidden induced subgraphs, of the PCA graphs which are $B_1$-EPR. It is easily seen that the class of circular arc graphs is exactly the class $B_4$-EPR. The authors of \cite{NCA} further proved that NCA graphs have bend number 2 with respect to EPR representations; hence, since PCA $\subset$ NCA, PCA graphs also have bend number at most 2 with respect to EPR representations.
\begin{corollary}
Let $G$ be a PCA graph. Then $G$ is $B_1$-EPR if and only if $G$ is $\{W_4, S_3, C_{4k-1}^k, k \geq 2\}$-free.
\end{corollary}
\begin{proof}
It was shown in \cite{NCA} that for $k \geq 2$, $C_{4k-1}^k \not \in B_1$-EPG; a fortiori, $C_{4k-1}^k \not \in B_1$-EPR. We also know from \cite{Ries} that $W_4 \not \in B_1$-EPR and from \cite{Golumbic} that $S_3 \not \in B_1$-EPR.
Conversely, in \cite{PHCA} the authors proved that PCA $\cap$ $\{W_4,S_3\}$-free $=$ PHCA. The result then follows from the fact that PHCA $\cap$ $\{C_{4k-1}^k, k \geq 2\}$-free $\subset$ NHCA $\cap$ $\{C_{4k-1}^k, k \geq 2\}$-free $=$ $B_1$-EPR (see~\cite{NCA}).
\end{proof}
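For concreteness, the obstruction family $C_{4k-1}^k$ can be generated programmatically. The following Python sketch assumes the standard definition of the $k$-th power of a cycle (vertices $0,\dots,n-1$, with $i$ and $j$ adjacent whenever their circular distance is at most $k$) and only checks the elementary fact that every vertex has degree $2k$; it does not, of course, test $B_1$-EPR membership.

```python
def cycle_power(n, k):
    """Adjacency sets of C_n^k: i ~ j iff the circular distance of i, j is <= k."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k + 1):
            j = (i + d) % n
            adj[i].add(j)
            adj[j].add(i)
    return adj

# The forbidden graphs C_{4k-1}^k for k = 2, 3, 4:
for k in (2, 3, 4):
    n = 4 * k - 1
    g = cycle_power(n, k)
    # Each vertex is adjacent to the k nearest vertices on either side.
    assert all(len(g[i]) == 2 * k for i in range(n))
```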
\section{Conclusion}
\label{sec:conclusion}
In this paper, we present characterisations by (infinite) families of minimal forbidden induced subgraphs for $B_1$-EPG $\cap$ PCA and $B_1$-EPR $\cap$ PCA. This is a first step towards finding a characterisation of the minimal graphs in (CA $\cap ~B_2$-EPG) $\backslash$ (CA $\cap ~B_1$-EPG), a question left open in \cite{NCA}.
\section*{Acknowledgments}
This research was carried out when Dr M.P. Mazzoleni was visiting the University of Fribourg. The support of this institution is gratefully acknowledged.
\section{Approach}
This section presents the approach in detail.
Before going through the four components introduced in \Cref{sec:overview}, we define the query language to specify what kind of code changes to search for.
\subsection{Query Language}
\label{sec:queryingLanguage}
To search for specific kinds of code changes, DiffSearch{} accepts queries that describe the code before and after the change.
Our goal is to provide a query language that developers can learn with minimal effort and that supports all constructs of the target programming language.
To this end, the query language is an extension of the target programming language, i.e., it includes all rules of the target language, along with additional constructs useful for queries.
\renewcommand{\syntleft}{\normalfont\itshape}
\renewcommand{\syntright}{}
\begin{figure}
\setlength{\grammarparsep}{.2em}
\setlength{\grammarindent}{8em}
\small
\begin{grammar}
<Query> ::= <Snippet> $\rightarrow$ <Snippet>
<Snippet> ::= <Stmt>* | <Expression>
| _
<Stmt> ::= $\langle$...$\rangle$
| (Target language rules)
<Expression> ::= EXPR
| EXPR$\langle$<Number>$\rangle$
| $\langle$...$\rangle$ | (Target language rules)
<AssignOperator> ::= OP
| OP$\langle$<Number>$\rangle$
| (Target language rules)
<BinaryOperator> ::= binOP
| binOP$\langle$<Number>$\rangle$
| (Target language rules)
<UnaryOperator> ::= unOP
| unOP$\langle$<Number>$\rangle$
| (Target language rules)
<Identifier> ::= ID
| ID$\langle$<Number>$\rangle$
| (Target language rules)
<Literal> ::= LT
| LT$\langle$<Number>$\rangle$
| (Target language rules)
\end{grammar}
\caption{Simplified grammar of queries. Non-terminals are in \textit{italics}.}
\label{fig:grammar}
\end{figure}
\Cref{fig:grammar} shows the grammar of our query language.
A query consists of two sequences of statements, which describe the old and new code, respectively.
The syntax for statements is inherited from the target programming language and not shown in the grammar.
Instead of a regular code snippet, a query may contain an underscore to indicate the absence of any code, which is useful to describe code changes that insert or remove code.
The grammar extends the target language by adding placeholders for specific syntactic entities, namely expressions, operators, identifiers, and literals.
For each such entity, a query can either use an unnamed placeholder to state that any such entity may occur, e.g., \code{EXPR} for any expression, or refer repeatedly to one specific entity with a named placeholder, e.g., using \code{EXPR<1>} and \code{EXPR<2>}.
Named placeholders are bound to the same entity across the entire query, e.g., to say that the same expression \code{EXPR<1>} must appear on both sides. We also introduce the wildcard \code{<...>}, which matches any statement or expression.
\begin{table}
\caption{Examples of Java changes and matching queries.}
\label{tab:grammarExamples}
\setlength{\tabcolsep}{5pt}
\small
\begin{tabular}{@{}ll@{}}
\toprule
Code change&DiffSearch query \\
\midrule
\begin{minipage}{8em}
\small
\begin{Verbatim}
- evt.trig();
\end{Verbatim}
\end{minipage}
&
\begin{minipage}[t]{8em}
\small
\begin{Verbatim}
ID.ID();
\end{Verbatim}
\end{minipage}
\hspace{-.3em}$\rightarrow$\hspace{.3em}
\begin{minipage}[t]{8em}
\begin{Verbatim}
_
\end{Verbatim}
\end{minipage}
\\
\midrule
\begin{minipage}[t]{6.5em}
\small
\begin{Verbatim}
- if (x > 0)
- y = 1;
+ if (x < 0)
+ y = 0;
\end{Verbatim}
\end{minipage}
&
\begin{minipage}[t]{8em}
\small
\begin{Verbatim}
if (EXPR)
ID OP LT;
\end{Verbatim}
\end{minipage}
\hspace{-.3em}$\rightarrow$\hspace{.3em}
\begin{minipage}[t]{8em}
\small
\begin{Verbatim}
if (EXPR)
ID OP LT;
\end{Verbatim}
\end{minipage}
\\
\midrule
\begin{minipage}[t]{6.5em}
\small
\begin{Verbatim}
- run(k);
- now(k);
+ runNow(k);
\end{Verbatim}
\end{minipage}
&
\begin{minipage}[t]{8em}
\small
\begin{Verbatim}
run(EXPR<0>);
now(EXPR<0>);
\end{Verbatim}
\end{minipage}
\hspace{-.3em}$\rightarrow$\hspace{.3em}
\begin{minipage}[t]{9em}
\small
\begin{Verbatim}
runNow(EXPR<0>);
\end{Verbatim}
\end{minipage}
\\
\bottomrule
\end{tabular}
\end{table}
To illustrate the query language, \Cref{tab:grammarExamples} gives a few examples of code changes and a corresponding query that matches the code change.
The first two examples use unnamed placeholders, e.g., to match arbitrary identifiers.
The third example uses a named placeholder:
The \code{EXPR<0>} in both the old and new part of the query means that this expression, here \code{k}, remains the same despite the code change, which replaces two calls with one.
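The binding semantics of named placeholders can be illustrated with a small sketch. The following Python snippet is a token-level toy matcher, not DiffSearch's actual matching mechanism (which, as the following sections describe, is tree- and feature-based): unnamed placeholders match any single token, while named placeholders must bind to the same token everywhere they occur.

```python
import re

PLACEHOLDER = re.compile(r"^(ID|EXPR|LT|OP|binOP|unOP)(?:<(\d+)>)?$")

def match_tokens(query, code):
    """Token-level sketch: unnamed placeholders match any single token;
    named placeholders (e.g. EXPR<0>) must bind consistently."""
    if len(query) != len(code):
        return None
    bindings = {}
    for q, c in zip(query, code):
        m = PLACEHOLDER.match(q)
        if m is None:                  # literal token: must match exactly
            if q != c:
                return None
        elif m.group(2) is not None:   # named placeholder: consistent binding
            key = (m.group(1), m.group(2))
            if bindings.setdefault(key, c) != c:
                return None
        # unnamed placeholder: matches any single token
    return bindings

# Third example from the table: EXPR<0> binds to "k" in both calls.
old_query = ["run", "(", "EXPR<0>", ")", ";", "now", "(", "EXPR<0>", ")", ";"]
old_code  = ["run", "(", "k", ")", ";", "now", "(", "k", ")", ";"]
assert match_tokens(old_query, old_code) == {("EXPR", "0"): "k"}
# An inconsistent binding is rejected:
assert match_tokens(old_query,
                    ["run", "(", "k", ")", ";", "now", "(", "j", ")", ";"]) is None
```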
\subsection{Tree-based Representation of Code Changes and Queries}
\label{sec:trees}
One goal of DiffSearch{} is to be mostly language-agnostic, making it easy to apply the approach to different programming languages.
Our current version supports Java, JavaScript, and Python.
To this end, the approach represents code changes and queries using a parse tree, i.e., a representation that is straightforward to obtain for any programming language.
The benefit of parse trees is that they abstract away some details, such as irrelevant whitespace, yet provide an accurate representation of code changes.
To represent a set of commits in a version history as pairs of trees, DiffSearch{} first splits each commit into hunks, which results in a set of code changes (\Cref{def:code change}).
The approach then parses the old and new code of each hunk, using the grammar of the programming language, into a single tree that represents the code change.
Likewise, to represent a query, DiffSearch{} parses the query into a parse tree using our extension of the grammar (\Cref{fig:grammar}).
For example, \Cref{fig:trees} shows the parse trees of a change and a query.
The change on the left corresponds to Code change~2 from \Cref{sec:overview}, which swaps \code{x} and \code{y} of a call to \code{isValidPoint}.
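The first preprocessing step above, splitting each commit into hunks, follows the standard unified-diff format. The following Python sketch is a hypothetical helper illustrating this step, not DiffSearch's actual implementation: hunks start at \code{@@} headers, lines prefixed with \code{-} belong to the old code, lines prefixed with \code{+} to the new code, and context lines to both.

```python
def split_hunks(diff_lines):
    """Split a unified diff into (old_lines, new_lines) pairs, one per hunk."""
    hunks = []
    old, new = None, None
    for line in diff_lines:
        if line.startswith("@@"):
            if old is not None:
                hunks.append((old, new))
            old, new = [], []
        elif old is None:
            continue                 # file header lines before the first hunk
        elif line.startswith("-"):
            old.append(line[1:])
        elif line.startswith("+"):
            new.append(line[1:])
        else:                        # context line: part of both versions
            old.append(line[1:])
            new.append(line[1:])
    if old is not None:
        hunks.append((old, new))
    return hunks

diff = [
    "--- a/Point.java",
    "+++ b/Point.java",
    "@@ -1,2 +1,2 @@",
    "-isValidPoint(x, y);",
    "+isValidPoint(y, x);",
    " return p;",
]
assert split_hunks(diff) == [
    (["isValidPoint(x, y);", "return p;"],
     ["isValidPoint(y, x);", "return p;"]),
]
```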
\begin{figure*}
\centering
\includegraphics[width=.9\linewidth]{images/parseTrees}
\caption{Parse tree representations of Code change~2 (left) and the query from \Cref{sec:overview} (right). Only some of all considered features are highlighted for illustration.}
\label{fig:trees}
\end{figure*}
An interesting challenge in parsing code changes and queries is that code snippets are often syntactically incomplete.
For example, the code changes in \Cref{sec:overview} open a block with \code{\{} but do not close it with \code{\}}, because the line with the closing curly brace was not changed.
DiffSearch{} addresses this challenge by relaxing the grammar of the target language so that it accepts individual code lines even when they are syntactically incomplete.
For example, we relax the grammar to allow for unmatched parentheses and partial expressions.
An alternative to parse trees are abstract syntax trees (ASTs).
We build on parse trees instead because ASTs abstract away many syntactic details that may be relevant in queries.
For example, consider the following code change that adds parentheses to make a complex expression easier to read:
\begin{tabular}{ll}
&
\begin{minipage}[t]{10em}
\begin{Verbatim}[fontsize=\small]
flag = alive || x && y;
\end{Verbatim}
\end{minipage}
\\
$\rightarrow$
&
\begin{minipage}[t]{6em}
\begin{Verbatim}[fontsize=\small]
flag = alive || (x && y);
\end{Verbatim}
\end{minipage}
\end{tabular}
\vspace{.3em}
Because the added parentheses preserve the semantics of the expression, they are abstracted away in a typical AST, i.e., the old and new code have the same AST.
As a result, an AST-based representation could neither represent this change nor a query to search for it.
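The same effect is easy to observe with Python's built-in \code{ast} module, used here only to illustrate the point (Python's \code{or}/\code{and} playing the role of \code{||}/\code{\&\&}): semantically redundant parentheses leave no trace in the AST, so the old and new versions become indistinguishable.

```python
import ast

# A typical AST discards redundant parentheses:
old = ast.parse("flag = alive or x and y")
new = ast.parse("flag = alive or (x and y)")
assert ast.dump(old) == ast.dump(new)

# A parse tree, in contrast, keeps the parenthesis tokens,
# so a search for this change remains possible.
```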
\subsection{Extracting Features}
\label{sec:features}
Based on the tree representation of code changes and queries, the feature extraction component of DiffSearch{} represents each tree as a set of features.
The goal of this step is to enable quickly searching through hundreds of thousands of code changes.
By projecting both code changes and queries into the same feature space, we enable the approach to compare them efficiently.
An alternative would be to pairwise compare each code change with a given query~\cite{Fluri2007,Lawall2016}.
However, such a pairwise comparison would require computation time linear in the number of code changes, which would negatively affect scalability.
DiffSearch{} uses two kinds of features.
The first kind, \emph{node features}, encodes the presence of a node in the parse tree.
For the example in \Cref{fig:trees}, the dotted, blue lines show three of the extracted node features.
The second kind, \emph{parse tree triangles}, encodes the presence of a specific subtree.
Each parse tree triangle is a tree that consists of a node and all its descendants up to some configurable depth.
We use a depth of one as a default, i.e., a triangle contains a node and its immediate child nodes.
For the example in \Cref{fig:trees}, the dashed, red lines highlight two of the extracted triangles.
The triangle at the top encodes the fact that there is an if statement, while the other triangle encodes the fact that the code contains an expression list with exactly two expressions.
The two kinds of features complement each other because node features encode information about individual nodes, including identifiers and operators, whereas parse tree triangles represent how nodes are structured.
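As an illustration, both kinds of features can be read off a toy parse tree represented as nested tuples. This is a simplified sketch of the idea, not the exact feature encoding used by DiffSearch.

```python
def node_features(tree):
    """Labels of all nodes in a tree of nested (label, *children) tuples."""
    label, *children = tree
    feats = {label}
    for child in children:
        feats |= node_features(child)
    return feats

def triangle_features(tree):
    """Depth-1 'triangles': a node label together with its children's labels."""
    label, *children = tree
    feats = set()
    if children:
        feats.add((label, tuple(c[0] for c in children)))
    for child in children:
        feats |= triangle_features(child)
    return feats

# Toy parse tree for: if (expr) ...
tree = ("if_stmt", ("if",), ("paren_expr", ("(",), ("expr",), (")",)))
assert "expr" in node_features(tree)
assert ("if_stmt", ("if", "paren_expr")) in triangle_features(tree)
```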
For each code change or query, the approach extracts a separate set of features for the old and the new code.
With this separation, the features encode whether specific code elements are added or removed in a code change.
The feature sets for code changes and queries are constructed in the same way, except that DiffSearch{} removes node features for placeholder nodes, e.g., \code{ID} or \code{EXPR}, from the query.
The rationale is that we want the features of a query to be a subset of the features of a matching code change, but placeholder nodes never appear in code changes.
Different code changes and queries yield different numbers of features.
To efficiently compare a given query against arbitrary code changes, DiffSearch{} represents all features of a code change or query as a fixed-size feature vector.
The feature vector is a binary vector of length $l_{\mathit{n}} + l'_{\mathit{n}} + l_{\mathit{tri}} + l'_{\mathit{tri}}=l$, where $l_{\mathit{n}}$ and $l'_{\mathit{n}}$ are the number of bits to represent the node features of the old and new code, respectively, and likewise for $l_{\mathit{tri}}$ and $l'_{\mathit{tri}}$ for the parse tree triangle features.
We use $l=1,000$ by default, dividing it equally among the four components, which strikes a balance between representing a diverse set of features and efficiency during indexing and retrieval.
Section~\ref{sec:eval parameters} evaluates different sizes for the feature vector length.
\begin{algorithm}[tb]
\begin{algorithmic}[1]
\Require{Set $F$ of features, target size $l_{\mathit{target}}$}
\Ensure{Feature vector $v$}
\State $v \leftarrow$ vector of $l_{\mathit{target}}$ zeros
\ForAll{$f \in F$}
\State $h \leftarrow \mathit{hash}(f)$
\State $v[h \bmod{} l_{\mathit{target}}] \leftarrow 1$\label{line:mod1}
\EndFor
\State \textbf{return} $v$
\end{algorithmic}
\caption{Represent features as fixed-size vector.}\label{alg:featureVector}
\end{algorithm}
\Cref{alg:featureVector} summarizes how DiffSearch{} maps a set $F$ of features into a fixed-size vector $v$.
The algorithm computes a hash function over the string representations of individual nodes in a feature, sums up the hash values into a value $h$, and sets the $h$-th index of the feature vector to one.
To ensure that the index is within the bounds of $v$, line~\ref{line:mod1} performs a modulo operation.
For each code change or query, the algorithm is invoked four times to map each of the four feature sets into a fixed-size vector.
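\Cref{alg:featureVector} can be sketched in a few lines of Python; the concrete hash function is an assumption, as the paper does not prescribe one.

```python
import hashlib

def feature_vector(features, l_target):
    """Map a set of string-serialized features into a fixed-size binary
    vector, as in Algorithm 1. Features that hash to the same index
    simply share a bit."""
    v = [0] * l_target
    for f in features:
        h = int(hashlib.md5(f.encode()).hexdigest(), 16)
        v[h % l_target] = 1  # modulo keeps the index within bounds
    return v

v = feature_vector({"if_stmt", "call", "arg"}, 16)
# len(v) == 16; at most three bits are set (fewer if hashes collide)
```

DiffSearch{} invokes this mapping four times per code change or query (node and triangle features of the old and new code) and concatenates the results into the vector of length $l$.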
\subsection{Indexing and Retrieving Code Changes}
\label{sec:indexingRetrieval}
To prepare for responding to queries, DiffSearch{} runs an offline phase that indexes the given set of code changes.
The indexing and retrieval components of the approach build on FAISS, which is prior work on efficiently searching for similar vectors across a large set of vectors~\cite{johnson2019billion}.
In the first step of the offline phase, DiffSearch{} parses all code changes and stores the parse trees on disk.
In the second step, DiffSearch{} generates the feature vectors of the code changes using the corresponding parse trees.
Given the set $V_{\mathit{changes}}$ of feature vectors of all code changes, the approach computes an index into these vectors.
After the offline indexing phase, DiffSearch{} accepts queries.
For a given query, the approach computes a feature vector $v_{\mathit{query}}$ (\Cref{sec:features}), and then uses the index to efficiently retrieve the most similar feature vectors of code changes.
FAISS allows for efficiently answering approximate nearest neighbor queries, without comparing the query against each vector in $V_{\mathit{changes}}$.
The nearest neighbors are based on the L2 (Euclidean) distance.
To ensure that the presence of matching features is weighted higher than the absence of features, we multiply $v_{\mathit{query}}$ by a constant factor $\frac{l}{2}+1$ before running the nearest neighbor query.
To illustrate this decision, consider an example with three feature vectors: a query $v_Q=(0,0,1)$, a potential match $v_P=(1,1,1)$ with the third feature in common, and a mismatch $v_M=(0,0,0)$.
Naively computing the Euclidean distances yields $d(v_Q,v_P)= \sqrt{2}$ and $d(v_Q,v_M) = \sqrt{1}$, i.e., the mismatch would be closer to the query than the potential match.
Instead, after multiplying $v_Q$ by the constant factor $\frac{3}{2}+1$, we have $d(v_Q,v_P)= \sqrt{4.25}$ and $d(v_Q,v_M) = \sqrt{6.25}$, i.e., the potential match is now closer to the query than the mismatch is.
The approach retrieves the $k$ most similar code changes for a given query.
We use $k=5,000$ by default, and Section~\ref{sec:eval parameters} evaluates other values.
The retrieved candidate code changes are ranked based on their distance to the query, and we use this ranking to sort the final search results shown to a user.
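The effect of scaling the query vector can be reproduced with the three-vector example above. The sketch below uses squared L2 distances, which preserve the nearest-neighbor ordering; the toy three-dimensional vectors are taken from the example.

```python
def sq_l2(a, b):
    """Squared Euclidean distance (monotone in the L2 distance)."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

l = 3
v_query = [0, 0, 1]  # v_Q
v_match = [1, 1, 1]  # v_P: shares the query's only feature
v_miss  = [0, 0, 0]  # v_M: shares no feature

# Naively, the mismatch is closer to the query than the potential match:
assert sq_l2(v_query, v_miss) < sq_l2(v_query, v_match)

# After scaling the query by l/2 + 1 = 2.5, shared features dominate:
scale = l / 2 + 1
v_scaled = [scale * x for x in v_query]
assert sq_l2(v_scaled, v_match) == 4.25  # potential match is now closer
assert sq_l2(v_scaled, v_miss) == 6.25
```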
\subsection{Matching of Candidate Search Results}
\label{sec:pruning}
Given the $k$ candidate code changes retrieved for a given query as described in \Cref{sec:indexingRetrieval}, DiffSearch{} could return all of them to the user.
However, the feature-based search does not guarantee precision, i.e., that all the retrieved code changes indeed match the query.
One reason is that the features capture only local information, but do not encode the entire parse tree in a lossless way.
Another reason is that the features do not encode the semantics of named placeholders, i.e., they cannot ensure that placeholders are expanded consistently across the old and new code.
To guarantee that all code changes returned in response to a query precisely match the query, the matching component of DiffSearch{} takes the candidate search results obtained via the feature-based retrieval and checks for each candidate whether it indeed matches the query.
Intuitively, a code change matches a query if the placeholders and wildcards in the query can be expanded in a way that yields code identical to the code change or some subset of the code change.
More formally, we define this idea as follows:
\begin{definition}[Match]
\label{def:match}
Given a code change $c \rightarrow c'$ and a query $q \rightarrow q'$, let $t_c, t_{c'}, t_q, t_{q'}$ be the corresponding parse trees.
The code change matches the query if
\begin{itemize}
\item $t_q$ can be expanded into some subtree of $t_c$ and
\item $t_{q'}$ can be expanded into some subtree of $t_{c'}$
\end{itemize}
so that all of the following conditions hold:
\begin{itemize}
\item Each placeholder is expanded into a subtree of the corresponding syntactic entity.
\item All occurrences of a named placeholder are consistently mapped to identical subtrees.
\item Each wildcard is expanded to an arbitrary, possibly empty subtree.
\end{itemize}
\end{definition}
For example, consider the query and code change in \Cref{fig:trees} again.
They match because the tree on the right can be expanded into the tree on the left.
The expansion maps the named placeholders \code{ID<1>} to \code{isValidPoint}, \code{EXPR<1>} to the subtree that represents \code{x}, and \code{EXPR<2>} to the subtree that represents \code{y}.
Moreover, the wildcards in the query are both mapped to the empty tree.
As an example of a code change that does not match this query, consider Code change~1 from \Cref{sec:overview} again.
The parse tree of the query cannot be expanded into the parse tree of that code change because there is no way of expanding the query tree while consistently mapping \code{EXPR<1>} and \code{EXPR<2>} to the three method arguments \code{a-1}, \code{b}, and \code{c}.
To check whether a candidate code change indeed matches the given query, DiffSearch{} compares the parse tree of the query with the parse tree of the code change in a top-down, left-to-right manner.
The basic idea is to search for a mapping of nodes in the query tree to nodes in the parse tree that consistently maps named placeholders to identical subtrees.
On top of this basic idea, the matching algorithm faces two interesting challenges.
We illustrate the challenges with a query that searches for code changes where two call statements get replaced by an assignment of a literal to an identifier. The following example shows the query on the left and a matching code change on the right:
\vspace{.3em}
\begin{tabular}{@{}lcr@{}}
\begin{minipage}{3em}
\begin{Verbatim}[fontsize=\small]
ID();
<...>
ID();
\end{Verbatim}
\end{minipage}
&
$\rightarrow$
&
\begin{minipage}{6em}
\begin{Verbatim}[fontsize=\small]
ID = LT;
\end{Verbatim}
\end{minipage}
\hspace{2em}
\begin{minipage}{3em}
\begin{Verbatim}[fontsize=\small]
foo();
bar();
baz();
\end{Verbatim}
\end{minipage}
$~\rightarrow$
\begin{minipage}{6em}
\begin{Verbatim}[fontsize=\small]
x = 5;
foo();
y = 7;
\end{Verbatim}
\end{minipage}
\end{tabular}
\vspace{.3em}
The first challenge arises because queries are allowed to match parts of a change, which is useful for finding relevant changes surrounded by other, irrelevant changed code.
While useful, this property of queries also implies that the query may match at multiple places within a given code change.
In the above example, the \code{ID = LT;} part of the query may match both \code{x = 5;} and \code{y = 7;}.
The second challenge arises because queries may contain wildcards, which are useful for leaving parts of a query unspecified.
Wildcards also cause a single query to possibly match in multiple ways.
For the above example, the wildcard could be between the calls of \code{foo} and \code{bar}, between the calls of \code{bar} and \code{baz}, or it could match the call of \code{bar()}.
Because of these two challenges, matching must consider different ways of mapping a query onto a code change, which results in a search space of possible matches that must be explored.
\begin{algorithm}[tb]
\caption{Check if a code change matches a query.}
\label{alg:match}
\small
\begin{algorithmic}[1]
\Require{Code change $c \rightarrow c'$ and query $q \rightarrow q'$}
\Ensure{True if they match, False otherwise.}
\State $t_c, t_{c'} \leftarrow \mathit{parse}(c \rightarrow c')$
\State $t_q, t_{q'} \leftarrow \mathit{parse}(q \rightarrow q')$
\State $N_{\mathit{toMatch}} \leftarrow (\mathit{allNodes}(t_q) \cup \mathit{allNodes}(t_{q'})) \setminus \mathit{wildcards}$
\State $W \leftarrow \mathit{candidateMappings}(t_c, t_{c'}, t_q, t_{q'})$
\While{$W$ is not empty}
\State $M \leftarrow$ Take a mapping from $W$
\State $n_q \leftarrow \mathit{nextUnmatchedNode}(M, t_q, t_{q'})$
\State $n_{pq} \leftarrow$ Parent of $n_q$
\State $n_{pc} \leftarrow$ Look up $n_{pq}$ in $M$
\For{$c$ \mbox{\textbf{in}} all not yet matched children of $n_{pc}$}
\If{$\mathit{canAddToMap}(M, c, n_q)$}
\State $M' \leftarrow$ Copy of $M$ with $n_q \mapsto c$
\If{$N_{\mathit{toMatch}} \setminus \mathit{keys}(M') = \emptyset$\\
\hspace{5.1em} \mbox{\textbf{and}} $\mathit{isValid}(M', t_c, t_{c'}, t_q, t_{q'})$}
\State \textbf{return} true \label{line:isMatch}
\Else
\State Add $M'$ to $W$
\EndIf
\EndIf
\EndFor
\EndWhile
\end{algorithmic}
\end{algorithm}
DiffSearch{} addresses these challenges in \Cref{alg:match}, which checks whether a given query and code change match.
The algorithm starts by parsing the code change into trees $t_c$ and $t_{c'}$, which represent the old and new part of the change, and likewise for the query.
The core of the algorithm is a worklist-based search through possible mappings between nodes in the parse tree of the query and nodes in the parse tree of the code change.
These mappings are represented as a map $M$ from nodes in the query trees to nodes in the code change trees.
Each mapping $M$ in the worklist $W$ represents a possible way of matching the query against the code change.
To determine whether all nodes in the query have been successfully mapped, the algorithm maintains a set $N_{\mathit{toMatch}}$ of all the nodes in the query that must be matched.
The algorithm explores mappings in $W$ until it either finds a mapping that covers all nodes in $N_{\mathit{toMatch}}$, or until it has unsuccessfully explored all mappings in $W$.
\Cref{alg:match} relies on several helper functions.
One of them, $\mathit{candidateMappings}$, computes the starting points for the algorithm by returning all possible mappings of the roots of $t_q$ and $t_{q'}$ to nodes in the code change trees.
The $\mathit{nextUnmatchedNode}$ function performs a top-down, left-to-right pass through the query trees to find a node that is not yet in the current map $M$.
The $\mathit{canAddToMap}$ function checks if adding a mapping $n_q \mapsto c$ is consistent with an already existing map $M$.
Specifically, it checks that $n_q$ is not yet among the keys of $M$, that $c$ is not yet among the values of $M$, and that the two nodes are either identical non-placeholder nodes or that $n_q$ is a placeholder that can be consistently mapped to $c$ as specified in \Cref{def:match}.
Finally, the helper function $\mathit{isValid}$ checks whether a mapping $M$ that covers all to-be-matched nodes ignores nodes in the change tree only when there is a corresponding wildcard in the query tree. The algorithm postpones this check to $\mathit{isValid}$ to reduce the total number of mappings to explore.
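The consistency requirement that $\mathit{canAddToMap}$ enforces for named placeholders can be sketched as follows; the flat binding table is a simplification of the mapping $M$, and subtrees are represented as plain strings for brevity.

```python
def bind(bindings, placeholder, subtree):
    """Named placeholders (e.g., EXPR<1>) must expand to identical
    subtrees at every occurrence; otherwise the mapping is rejected."""
    if placeholder in bindings:
        return bindings[placeholder] == subtree
    bindings[placeholder] = subtree  # first occurrence: record the binding
    return True

b = {}
assert bind(b, "EXPR<1>", "x")      # first occurrence: bind EXPR<1> to x
assert bind(b, "EXPR<2>", "y")
assert bind(b, "EXPR<1>", "x")      # consistent re-use succeeds
assert not bind(b, "EXPR<1>", "y")  # inconsistent expansion fails
```

This check is exactly why Code change~1 from \Cref{sec:overview} is rejected: no consistent binding of \code{EXPR<1>} and \code{EXPR<2>} covers the three call arguments.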
Matching a single code change against a query might cause the algorithm to explore many different mappings, and DiffSearch{} typically invokes \Cref{alg:match} not only once but for tens or hundreds of candidate search results.
To ensure that the approach responds to queries quickly enough for interactive usage, we optimize \Cref{alg:match} by pruning code changes that certainly cannot match a given query.
To this end, the approach checks if all leaf nodes in the parse tree of a query occur at least once in the parse tree of the code change.
For example, consider the following query, which searches for changes in the right-hand side of assignments to a variable \code{myVar}:\footnote{Because the \scode{myVar =}\hspace{.5em} part of the code remains the same, the query expresses that the literal captured by the unnamed placeholder \scode{LT} is changing.}
\vspace{.2em}
\hspace{2em}
\begin{tabular}{rlcr}
&
\begin{minipage}[t]{6em}
\begin{Verbatim}[fontsize=\small]
myVar = LT;
\end{Verbatim}
\end{minipage}
&
$\rightarrow$
&
\begin{minipage}[t]{6em}
\begin{Verbatim}[fontsize=\small]
myVar = LT;
\end{Verbatim}
\end{minipage}
\end{tabular}
\vspace{.3em}
If a code change does not include any token \code{myVar}, then the optimization immediately decides that the code change cannot match the query and skips \Cref{alg:match}.
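The pruning step can be sketched as a simple token-subset test; representing leaves as plain token strings is a simplification of comparing parse tree leaves.

```python
def may_match(query_leaves, change_tokens):
    """Cheap pre-filter: a code change can only match if every concrete
    leaf of the query (placeholders and wildcards excluded) occurs in
    the change. Only then is the full matching algorithm invoked."""
    return set(query_leaves) <= set(change_tokens)

# The query "myVar = LT;" has the concrete leaves "myVar", "=", and ";".
leaves = ["myVar", "=", ";"]
assert may_match(leaves, ["myVar", "=", "5", ";"])  # candidate kept
assert not may_match(leaves, ["x", "=", "5", ";"])  # pruned: no myVar
```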
\section{Conclusion}
We present a scalable and precise search engine for code changes.
Given a query that describes code before and after a change, the approach retrieves within seconds relevant examples from a corpus of a million code changes.
Our query language extends the underlying programming language with wildcards and placeholders, providing an intuitive way of formulating queries to search for code changes.
Key to the scalability of DiffSearch{} is to encode both queries and code changes into a common feature space, enabling efficient retrieval of candidate search results.
Matching these candidates against the query guarantees that every returned search result indeed fits the query.
The approach is mostly language-agnostic, and we empirically evaluate it on Java, JavaScript, and Python.
DiffSearch{} answers most queries in less than a second, even when searching through large datasets.
The recall ranges between 80.7\% and 90.4\%, depending on the target language, and can be further increased at the expense of response time.
We also show that users find relevant code changes more effectively with DiffSearch{} than with a regular expression-based search.
Finally, as an example of how the approach could help researchers, we use it to gather a dataset of 74,903 code changes that match recurring bug fix patterns.
We envision DiffSearch{} to serve as a tool useful to both practitioners and researchers, and to provide a basis for future work on searching for code changes.
\section*{Acknowledgment}
This work was supported by the European Research Council (ERC, grant
agreement 851895), and by the German Research Foundation within the
ConcSys and DeMoCo projects.
\section{Implementation}
\label{sec:implementation}
We implement the DiffSearch{} idea into a practical search engine that supports multiple programming languages, currently Java, JavaScript, and Python.
To gather raw code changes, the implementation uses \code{git log -p}.
For each change, a parse tree is created with ANTLR4\footnote{\url{https://www.antlr.org/}}, using the grammar of the target programming language, modified to support queries and to allow for syntactically incomplete code fragments (\Cref{sec:queryingLanguage}).
The indexing and retrieval components build on the FAISS library~\cite{johnson2019billion}, which supports efficient vector similarity queries for up to billions of vectors.
Once changes are indexed, the search engine is a server that responds to queries via one of two publicly available interfaces: a web interface for interactive usage and a web service for larger-scale usage, e.g., to create a dataset of changes.
\IEEEraisesectionheading{\section{Introduction}\label{sec:intro}}
\IEEEPARstart{H}{undreds} of thousands of code changes are stored in the version histories of code repositories.
To benefit from this immense source of knowledge, practitioners and researchers often want to search for specific kinds of code changes.
For example, developers may want to search through their own repositories to find again a code change performed in the past, or search for commits that introduce a specific kind of problem.
Developers may also want to search through changes in repositories by others, e.g., to understand how code gets migrated from one API to another, or to retrieve examples of common refactorings for educational purposes.
A question on Stack Overflow on how to systematically search through code changes\footnote{\url{https://stackoverflow.com/questions/2928584/how-to-grep-search-committed-code-in-the-git-history}} has received over half a million views, showing that practitioners are interested in finding changes from the past.
Besides practitioners, researchers also commonly search for specific kinds of code changes.
For example, a researcher evaluating a bug finding tool~\cite{ase2018-study} or a program repair tool~\cite{cacm2019-program-repair,DBLP:conf/icse/TanYYMR17,motwani2020quality} may be interested in examples of specific kinds of bug fixes.
Likewise, researchers working on machine learning models that predict when and where to apply specific code changes require examples of such changes as training data~\cite{oopsla2019}.
Finally, researchers systematically study when and how developers perform specific kinds of changes to increase our understanding of development practices~\cite{negara2014mining,Rak-amnouykit2020,Nguyen2019,ase2020}.
Unfortunately, there currently is no efficient and effective technique for systematically searching large version histories for specific kinds of changes.
The solutions proposed in the above Stack Overflow post are all based on matching regular expressions against raw diffs.
However, searching for anything beyond the most simple change patterns with a regular expression is cumbersome and likely to result in irrelevant code changes.
Another existing technique is GitHub's commit search, which allows for searching through commit messages and meta-information, such as developer names and project names.
Nevertheless, commit search does not support searching for specific code transformations.
Finally, previous research proposes techniques that linearly scan version histories for specific patterns~\cite{Fluri2007,kawrykow2011non,pan2009toward,Lawall2016}.
However, due to their linear design, these techniques do not scale well to searching through hundreds of thousands of changes in a short time.
This paper presents DiffSearch{}, a scalable and precise search engine for code changes.
DiffSearch{} is enabled by three key contributions.
First, we design a query language that is intuitive to use and easy to adapt to different programming languages.
The query language extends the target programming language with wildcards and placeholders that abstract specific syntactic categories, e.g., expressions.
Second, to ensure scalability, the approach is split into an indexing part, which maps code changes into a feature space, and a retrieval part, which matches a given query in the feature space. We design features specifically for code changes, extracting information that helps match queries against a large corpus of changes.
Finally, to ensure precision, i.e., that a found code change indeed fits the given query, a crucial part of the approach is to match candidate code changes against the given query.
We present an efficient algorithm that checks if a query can be expanded into a code change.
DiffSearch{} is designed in a mostly language-agnostic way, making it easy to apply the approach to different languages.
In particular, we restrict ourselves to a very lightweight static analysis of code changes.
The query language and parts of the search algorithm build upon the context-free grammar of the target programming language.
As a proof-of-concept, DiffSearch{} currently supports three widely used languages: Java, JavaScript, and Python.
Our approach relates to work on searching for code, which retrieves code snippets that match keywords~\cite{sourcerer,Gu2018}, test cases~\cite{reissCodeSearch}, or partial code snippets~\cite{Luan2019,kim2018facoy}.
While code search engines often have a design similar to ours, i.e., based on indexing and retrieval, they consider only a single snapshot of code, but no code changes.
Other related work synthesizes an edit program from one or more code changes~\cite{Fluri2007,Falleri2014,Rolim2017,Gao2020,Erdweg2021} and infers recurring code change patterns~\cite{DBLP:conf/pldi/PaletovTRV18,Nguyen2019}.
Starting from concrete changes, these approaches yield abstractions of them.
Our work addresses the inverse problem: given a query that describes a set of code changes, find concrete examples that match the query.
Finally, our work relates to clone detection~\cite{kamiya2002ccfinder,Li2006,jiang2007deckard,roy2008nicad,sajnani2016sourcerercc}, as DiffSearch{} searches for code changes that resemble a query.
Our work differs from clone detection by considering code changes (and not individual snippets of code), by focusing on guaranteed matches instead of similar code, and by responding to queries quickly enough for interactive use.
We evaluate the effectiveness and scalability of DiffSearch{} with one million code changes in each Java, Python, and JavaScript.
We find that the approach responds to queries within a few seconds, scaling well to large sets of code changes.
The search has a mean recall of 80.7\% for Java, 89.6\% for Python, and 90.4\% for JavaScript, which can be increased even further in exchange for a slight increase in response time.
A user study shows that DiffSearch{} enables users to effectively retrieve code changes, clearly outperforming a regular expression-based search through raw diffs.
As a case study to show the usefulness of DiffSearch{} for researchers, we apply the approach to gather a dataset of 74,903 bug fixes.
In summary, this paper contributes the following:
\begin{itemize}
\item A \emph{query language} that extends the target programming language with placeholders and wildcards, making it easy to adapt the approach to different languages.
\item A technique for searching code changes that ensures \emph{scalability} through approximate, indexing-based retrieval, and that ensures \emph{precision} via exact matching.
\item Empirical evidence that the approach effectively finds thousands of relevant code changes, scales well to more than a million changes from different projects, and successfully helps users to answer a diverse set of queries.
\end{itemize}
\smallskip
\noindent
The implementation\footnote{\url{https://github.com/sola-st/DiffSearch}} and a web interface\footnote{\url{http://diffsearch.software-lab.org}} of DiffSearch are publicly available.
\section{Example and Overview}\label{sec:overview}
\subsection{Motivating Example}\label{sec:motivation}
To illustrate the problem and how DiffSearch{} addresses it, consider the following example query.
The query searches for code changes that swap the arguments passed to a call that is immediately used in a conditional.
Such a query could be used to find fixes of swapped argument bugs~\cite{oopsla2017}.
{
\renewcommand{\arraystretch}{0.85}
\begin{tabular}{rlc}
&
\begin{minipage}[t]{13.5em}
\begin{Verbatim}[fontsize=\small]
if(ID<1>(EXPR<1>, EXPR<2>)){
<...>
\end{Verbatim}
\end{minipage}\\
$\rightarrow$ &
\begin{minipage}[t]{15em}
\begin{Verbatim}[fontsize=\small]
if(ID<1>(EXPR<2>, EXPR<1>)){
<...>
\end{Verbatim}
\end{minipage}
\end{tabular}
}
\vspace{.3em}
Our query language is an extension of the target programming language, Java in the example, and adds placeholders for some syntactic categories.
For example, the \code{ID<1>} placeholder will match any identifier, and the \code{EXPR<1>} placeholder matches any expression.
Instead of such placeholders, queries can also include concrete identifiers and literals, e.g., to search for specific API changes.
As the set of code changes to search through, suppose we have the following three examples, of which only the second matches the query:
\vspace{.3em}
\noindent \emph{Code change 1:}\\
\hspace*{-2.2em}
\begin{tabular}{rlcr}
&
\begin{minipage}[t]{9.5em}
\begin{Verbatim}[fontsize=\small]
if(check(a - 1, b)){
\end{Verbatim}
\end{minipage}
&
\hspace{1em}$\rightarrow$
&
\begin{minipage}[t]{3em}
\begin{Verbatim}[fontsize=\small]
if(check(a - 1, c)){
\end{Verbatim}
\end{minipage}
\end{tabular}
\vspace{.3em}
\vspace{.3em}
\noindent \emph{Code change 2:}\\
\hspace*{-2.2em}
\begin{tabular}{rlcr}
&
\begin{minipage}[t]{11.3em}
\begin{Verbatim}[fontsize=\small]
if(isValidPoint(x, y)){
\end{Verbatim}
\end{minipage}
&
\hspace{.7em}$\rightarrow$\hspace{-.4em}
&
\begin{minipage}[t]{2em}
\begin{Verbatim}[fontsize=\small]
if(isValidPoint(y, x)){
\end{Verbatim}
\end{minipage}
\end{tabular}
\vspace{.3em}
\vspace{.3em}
\noindent \emph{Code change 3:}\\
\hspace*{-2.2em}
\begin{tabular}{rlcr}
&
\begin{minipage}[t]{10.5em}
\begin{Verbatim}[fontsize=\small]
while(var > k - 1){
sum += count(var);
\end{Verbatim}
\end{minipage}
&
$\rightarrow$\hspace{-.7em}
&
\begin{minipage}[t]{6em}
\begin{Verbatim}[fontsize=\small]
while(var > k){
sum += 2 * count(var);
\end{Verbatim}
\end{minipage}
\end{tabular}
\subsection{Problem Statement}
An important design decision is the granularity of code changes to consider.
The options range from changes of individual lines, which would limit the approach to very simple code changes, to entire commits, which may span multiple files and several dozen lines~\cite{DBLP:conf/iwpc/AlaliKM08}, and often contain multiple entangled logical changes~\cite{DBLP:conf/icse/KawrykowR11,DBLP:conf/icse/BarnettBBL15,DBLP:journals/ese/HerzigJZ16,Partachi2020a}.
We opt for a middle ground between these two extremes and consider code changes at the level of ``hunks'', i.e., consecutive lines that are added, modified, or removed together.
\begin{definition}[Code change]
\label{def:code change}
A code change $c \rightarrow c'$ consists of two pieces of code, which each consists of a sequence $[l_1,..,l_m]$ of consecutive lines of code extracted from a file in the target language.
\end{definition}
\begin{definition}[Query]
\label{def:query}
A query $q \rightarrow q'$ consists of two patterns, which each are a sequence $[l_1,..,l_m]$ of lines of code in an extension of the target programming language.
The language extension adds wildcards, a special ``empty'' symbol, and placeholders for specific syntactic categories, e.g., to match an arbitrary expression or identifier.
\end{definition}
Given these two ingredients, the problem we address is:
\begin{definition}[Search for code changes]
\label{def:search}
Given a set $C$ of code changes and a query $q \rightarrow q'$, find a set $M \subseteq C$ of code changes such that each $(c \rightarrow c') \in M$ matches $q \rightarrow q'$.
We say that a code change $c \rightarrow c'$ matches a query $q \rightarrow q'$ if there exists an expansion of the placeholders and wildcards in $q \rightarrow q'$ that leads to $c \rightarrow c'$.
\end{definition}
By ensuring that, for any retrieved code change, the query can be expanded to the code change, DiffSearch{} guarantees that every result of a search precisely matches the query.
\subsection{Main Idea of the Approach}
\label{sec:overview overview}
\begin{figure}
\centering
\includegraphics[width=0.95\linewidth]{images/overview}
\caption{Overview of the approach.}
\label{fig:overview}
\end{figure}
DiffSearch{} consists of four components that are used in an offline and an online phase as illustrated in \Cref{fig:overview}.
In the offline phase, the approach analyzes and indexes a large set of code changes.
The \emph{Parsing \& Feature extraction} component of the approach parses and abstracts concrete code changes and queries into a set of features, mapping both into a common feature space.
For our example query in Section~\ref{sec:motivation}, the features encode, e.g., that a call expression appearing within the condition of an if statement is changed and that the changed call has two arguments.
To enable quickly searching through hundreds of thousands of code changes, the \emph{Indexing} component of DiffSearch{} indexes the given feature vectors~\cite{johnson2019billion} once before accepting queries.
In the online phase, the input is a query that describes the kind of code changes to find.
Based on the pre-computed index and the feature vector of a given query, the \emph{Retrieval} component retrieves those code changes that are most similar to the query.
For our motivating example, this yields Code change~1 and Code change~2 because both change the arguments passed to a call.
The similarity-based retrieval does not guarantee precision, i.e., that each candidate code change indeed matches the query.
The \emph{Matching \& Ranking} component of DiffSearch{} removes any candidates that do not match the query by checking whether the placeholders and wildcards in the query can be expanded into concrete code in a way that yields the candidate code change.
For our example, matching will eliminate Code change~1, as it does not swap arguments, and eventually returns Code change~2 as a search result to the user.
\section{Evaluation}
Our evaluation focuses on five research questions:
\begin{itemize}
\item RQ1: What is the recall of DiffSearch{}? (Section~\ref{sec:eval recall})
\item RQ2: How efficient and scalable is DiffSearch{}? (Section~\ref{sec:evalScalability})
\item RQ3: Does DiffSearch{} enable users to find relevant code changes more effectively than a regular expression-based search through raw diffs? (Section~\ref{sec:eval user study})
\item RQ4: Is DiffSearch{} useful for finding examples of recurring bug fix patterns? (Section~\ref{sec:eval bug patterns})
\item RQ5: How do parameters of the approach influence the results? (Section~\ref{sec:eval parameters})
\end{itemize}
For each of RQ1, RQ2, and RQ5, we present results for all three currently supported target languages: Java, JavaScript, and Python.
For each language, we gather at least one million code changes from repositories that are among the top 100 of their language based on GitHub stars.
For RQ3 and RQ4, we focus on Java as the target language because RQ3 is based on a user study and because RQ4 builds on a Java dataset created by prior work~\cite{Karampatsis2019a}.
The experiments are performed on a server with 48 Intel Xeon CPU cores clocked at 2.2GHz, 250GB of RAM, running Ubuntu~18.04.
\subsection{RQ1: Recall}
\label{sec:eval recall}
While the precision of DiffSearch{}'s results is guaranteed by design (Section~\ref{sec:pruning}),
the approach may miss code changes due to its feature-based search, which ensures scalability but may fail to include an expected code change into the candidate matches.
Additionally, DiffSearch{} only considers $k$ candidate changes, so it can find at most $k$ results even though queries could have more than $k$ matching code changes.
To establish a ground truth, we randomly sample code changes $c \rightarrow c'$ from all indexed Java, Python, and JavaScript code changes and formulate a corresponding query $q \rightarrow q'$ using the following four strategies.
The \emph{as-is} strategy simply copies $c$ into $q$ and $c'$ into $q'$.
The \emph{less-placeholders} strategy replaces some of the identifiers, operators, and literals with corresponding placeholders or wildcards.
The \emph{more-placeholders} strategy, similarly, replaces the majority of the identifiers, operators, and literals.
Finally, the \emph{generalized} strategy replaces most or all of the identifiers, operators, and literals.
For each strategy and each programming language, we randomly sample 20 code changes and construct a query for each of them.
We then compare each query against all 1,001,797 Java, 1,007,543 JavaScript, and 1,016,619 Python code changes using the matching component of DiffSearch{}.
While significantly slower than the feature-based search that DiffSearch{} uses otherwise, this approach allows us to determine the set of all code changes expected to be found for a query, because Algorithm~\ref{alg:match} precisely computes whether a code change matches a query.
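Given this ground truth, recall is the fraction of all expected matches that the approach actually returns, pooled over all queries. The following sketch illustrates the computation on hypothetical query results (toy data for illustration, not our actual implementation):

```python
# Sketch: computing recall w.r.t. a ground truth of matching changes.
# Ground truth: for each query, the set of change IDs that truly match
# (obtained by exhaustively running the matcher over all changes).
ground_truth = {
    "q1": {1, 2, 3, 4},
    "q2": {5, 6},
}
# Retrieved: the (at most k) matching changes the search returns.
retrieved = {
    "q1": {1, 2, 4},
    "q2": {5, 6},
}

def recall(gt, found):
    """Fraction of all ground-truth changes found, pooled over queries."""
    total = sum(len(s) for s in gt.values())
    hits = sum(len(gt[q] & found.get(q, set())) for q in gt)
    return hits / total

print(f"{recall(ground_truth, retrieved):.1%}")  # 5 of 6 changes -> 83.3%
```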
\begin{table}[t]
\centering
\caption{Recall of DiffSearch{} across 80 queries per language.}
\label{tab:Recall}
\setlength{\tabcolsep}{17pt}
\begin{tabular}{@{}lrrr@{}}
\toprule
Queries & Java & Python & JavaScript \\ \midrule
As-is & 90.6\% & 100.0\% & 100.0\% \\
Less-placeholders & 83.5\% & 99.9\% & 99.8\% \\
More-placeholders & 74.2\% & 96.7\% & 95.8\% \\
Generalized & 76.7\% & 74.9\% & 66.1\% \\ \midrule
\textbf{Total} & \textbf{80.7\%} & \textbf{89.6\%} & \textbf{90.4\%} \\ \bottomrule
\end{tabular}
\end{table}
Table~\ref{tab:Recall} shows the recall of DiffSearch{} w.r.t.\ the ground truth, i.e., the percentage of all ground truth code changes that the approach finds.
On average across the 80 queries per programming language, DiffSearch{} has a recall of 80.7\% for Java, 89.6\% for Python, and 90.4\% for JavaScript.
More specific queries tend to lead to a higher recall.
The reason is that the parse tree of a more generalized query shares fewer features with a matching code change, e.g., because a complex subtree is folded into an \code{EXPR} node.
The slightly higher recall for Python and JavaScript can be explained by two observations.
First, code changes in Java tend to be slightly larger, resulting in parse trees with more nodes, which reduces the chance of finding a suitable candidate change.
Second, across the 80 queries, there are 236,836 ground truth code changes for Java, but only 69,626 and 59,789 for Python and JavaScript, respectively, making finding all ground truth code changes in Java a harder problem.
We discuss in Section~\ref{sec:eval parameters} that the recall can be increased even further by retrieving more candidate matches, at the expense of a slightly increased response time.
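The intuition behind the lower recall of generalized queries can be illustrated with a simplified, hypothetical feature extraction that hashes (parent, child) node pairs of a parse tree into a fixed-length vector; the actual features (Section~\ref{sec:features}) differ, but the effect is the same:

```python
import zlib

# Sketch: hashed bag-of-features over (parent, child) parse-tree node pairs.
# Folding a concrete subtree into a generic EXPR placeholder removes its
# specific pairs, so a generalized query shares fewer non-zero entries
# with a matching concrete change.
L = 16  # toy vector length (DiffSearch's default l is 1,000)

def featurize(pairs, l=L):
    vec = [0] * l
    for p in pairs:
        vec[zlib.crc32(repr(p).encode()) % l] += 1  # deterministic hash
    return vec

def shared(u, v):
    return sum(min(a, b) for a, b in zip(u, v))

change = [("call", "foo"), ("call", "args"), ("args", "x")]
concrete_query = list(change)        # as-is query: same pairs as the change
general_query = [("call", "EXPR")]   # subtree folded into a placeholder

print(shared(featurize(change), featurize(concrete_query)))  # -> 3
print(shared(featurize(change), featurize(general_query)))   # fewer shared
```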
\subsection{RQ2: Efficiency and Scalability}
\label{sec:evalScalability}
A major goal of this work is to enable quickly searching through hundreds of thousands of code changes.
The following evaluates how the number of code changes to search through influences the efficiency of queries, i.e., how well DiffSearch{} scales to large amounts of changes.
As queries to run, we use the 80 queries described in \Cref{sec:eval recall}.
For each query, we measure how long DiffSearch{} takes to retrieve code changes from ten increasingly large datasets, ranging from 10,000 to 1,000,000 code changes.
\begin{figure*}
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{.32\linewidth}
\includegraphics[width=\linewidth]{images/JavaScalability}
\caption{DiffSearch~(Java).}
\end{subfigure}\hspace{0.5em}
\begin{subfigure}[t]{.32\linewidth}
\includegraphics[width=\linewidth]{images/PythonScalability}
\caption{DiffSearch~(Python).}
\end{subfigure}\hspace{0.5em}
\begin{subfigure}[t]{.32\linewidth}
\includegraphics[width=\linewidth]{images/JavaScriptScalability}
\caption{DiffSearch~(JavaScript).}
\end{subfigure}
\vspace{1em}
\begin{subfigure}[t]{.32\linewidth}
\includegraphics[width=\linewidth]{images/JavaScalabilityslow}
\caption{DiffSearch~without indexing\\(Java).}
\end{subfigure}
\hspace{0.5em}
\begin{subfigure}[t]{.32\linewidth}
\includegraphics[width=\linewidth]{images/PythonScalabilityslow}
\centering
\caption{DiffSearch~without indexing\\(Python).}
\end{subfigure}\hspace{0.5em}
\begin{subfigure}[t]{.32\linewidth}
\includegraphics[width=\linewidth]{images/JavaScriptScalabilityslow}
\caption{DiffSearch~without indexing\\(JavaScript).}
\end{subfigure}
\caption{Response time across differently sized datasets (average and 95\% confidence interval). Top: Full DiffSearch{}. Bottom: DiffSearch{} without indexing.}
\label{fig:scalability}
\end{figure*}
The top row of \Cref{fig:scalability} shows the results for the full DiffSearch{} approach.
Answering a query typically takes between 0.5 and 2 seconds.
Moreover, the response time remains constant when searching through more code changes.
The reasons are (i) that FAISS~\cite{johnson2019billion} provides constant-time retrieval in the vector space, and (ii) that the time for matching candidate changes against the query is proportional to the constant number $k$ of candidate changes.
Comparing the three programming languages, we find that they yield similar performance results, because most parts of our implementation are language-agnostic.
We conclude that DiffSearch{} scales well to hundreds of thousands of changes and remains efficient enough for interactive use.
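The retrieval interface can be sketched as a brute-force nearest-neighbor search over feature vectors (for illustration only; DiffSearch{} uses FAISS precisely to avoid this linear scan):

```python
# Sketch: retrieve the k most similar feature vectors for a query vector.
# FAISS answers this without a linear scan; this brute-force version only
# illustrates the interface: query vector in, k candidate change IDs out.
def top_k(query, index, k):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))  # squared L2 distance
    ranked = sorted(index, key=lambda cid: dist(query, index[cid]))
    return ranked[:k]

index = {  # change ID -> feature vector (toy data)
    "c1": [1, 0, 2],
    "c2": [1, 0, 1],
    "c3": [0, 5, 0],
}
print(top_k([1, 0, 1], index, k=2))  # -> ['c2', 'c1']
```

The $k$ returned candidates are then checked exactly by the matching component, which is why the response time depends on $k$ rather than on the total number of indexed changes.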
The bottom row of \Cref{fig:scalability} shows the same experiment when removing the indexing and retrieval steps of DiffSearch{} (note: different y-axis).
Instead, the approach linearly goes through all code changes and compares them against a given query using the matching component only.
Answering a query takes up to 41 seconds on average, showing that the feature-based indexing is essential to ensure DiffSearch{}'s scalability.
Even though scalability is most relevant for the online part of DiffSearch{}, we also measure how long the offline part takes.
In total, analyzing a million code changes to extract feature vectors and indexing these vectors takes up to five hours.
As this is a one-time effort that does not influence the response time, we consider it acceptable in practice.
\subsection{RQ3: User Study}
\label{sec:eval user study}
\subsubsection{Study Setup}
We perform a user study to measure whether DiffSearch{} enables users to effectively retrieve code changes within a given time budget, and to compare our approach with a regular expression-based baseline.
To this end, we provide natural language descriptions of kinds of code changes and ask each user to find up to ten matching code changes per description within two minutes.
We choose this time limit based on empirical results on code search sessions, which are reported to have a median length of 89 seconds~\cite{Sadowski2015}.
We then ask the users how many satisfying code changes they could find.
Each user works on each kind of query with both DiffSearch{} and the baseline tool, alternating which tool to use first.
\emph{Queries.}
The descriptions of the queries (Table~\ref{tab:user study}) are designed with two criteria in mind.
First, they cover different syntactic categories of changes, including additions (\#3, \#4, \#7), modifications (\#6), and removals (\#10) of statements; changes within existing statements (\#1, \#2, \#5, \#9); and changes that surround an existing statement with a new statement (\#8).
Second, the queries cover a diverse range of reasons for changing code, including code improvements to increase robustness (\#4, \#7, \#8), code cleanup (\#10), changes of functionality (\#6, \#9), bug fixes (\#1, \#2, \#5), and uses of a new API (\#3).
\emph{Baseline.}
Because DiffSearch{} is the first search engine specifically designed for code changes, there is no established tool to compare against.
Instead, we use a regular expression-based approach suggested in the Stack Overflow question cited in Section~\ref{sec:intro} as a baseline, which we call REGEX.
Regular expressions are well known and widely used for general search tasks.
Naively applying regular expressions to the git history of many projects, as suggested on Stack Overflow, leads to unacceptably high response times (tens or even hundreds of seconds, depending on the query).
Instead, we preprocess the output of \emph{git log} by removing information unrelated to the task, such as commit messages and file names, which reduces the size of the file and makes the response time acceptable.
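The following sketch shows a simplified version of this preprocessing, which keeps only the added and removed lines of the diffs (the exact filtering used in the study may differ):

```python
# Sketch: keep only +/- diff lines from git log output, dropping commit
# metadata, commit messages, and file headers, to speed up regex search.
def strip_metadata(git_log):
    kept = []
    for line in git_log.splitlines():
        if line.startswith(("+", "-")) and not line.startswith(("+++", "---")):
            kept.append(line)
    return "\n".join(kept)

raw = """commit abc123
Author: Jane <jane@example.com>
    Fix NPE
--- a/Foo.java
+++ b/Foo.java
-    return null;
+    return new Foo();"""
print(strip_metadata(raw))
```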
\emph{Participants and setup.}
We recruit ten participants with solid knowledge about regular expressions, consisting of seven PhD students, two senior undergraduate students, and one senior developer.
The participants do not overlap with the authors of this paper.
The users access DiffSearch{} through a web interface that resembles a standard search engine, but has two text input fields, for the old and new code, respectively.\footnote{The web interface is available to reviewers, see end of Section~\ref{sec:intro}.}
For REGEX, participants use a terminal and their favorite tool to search with regular expressions, e.g., \emph{grep}.
We provide about 750 words of instructions to the participants, which explain the task, the query language of DiffSearch{}, and how to search through raw diffs using REGEX.
\subsubsection{Quantitative Results}
\begin{table*}
\centering
\caption{Query descriptions for user study and summary of search results.}
\label{tab:user study}
\setlength{\tabcolsep}{1.6pt}
\begin{tabular}{@{}rp{25em}c@{\hspace{0.5\tabcolsep}}c@{\hspace{0.5\tabcolsep}}c@{\hspace{5\tabcolsep}}c@{\hspace{0.5\tabcolsep}}c@{\hspace{0.5\tabcolsep}}c@{\hspace{5\tabcolsep}}c@{\hspace{0.5\tabcolsep}}c@{\hspace{0.5\tabcolsep}}c@{\hspace{5\tabcolsep}}c@{\hspace{0.5\tabcolsep}}c@{\hspace{0.5\tabcolsep}}c@{\hspace{5\tabcolsep}}c@{\hspace{0.5\tabcolsep}}c@{\hspace{0.5\tabcolsep}}c@{\hspace{5\tabcolsep}}c@{\hspace{0.5\tabcolsep}}c@{\hspace{0.5\tabcolsep}}c@{\hspace{5\tabcolsep}}c@{\hspace{0.5\tabcolsep}}c@{\hspace{0.5\tabcolsep}}c@{\hspace{5\tabcolsep}}c@{\hspace{0.5\tabcolsep}}c@{\hspace{0.5\tabcolsep}}c@{\hspace{5\tabcolsep}}c@{\hspace{0.5\tabcolsep}}c@{\hspace{0.5\tabcolsep}}c@{\hspace{5\tabcolsep}}c@{\hspace{0.5\tabcolsep}}c@{\hspace{0.5\tabcolsep}}c@{\hspace{5\tabcolsep}}|c@{\hspace{0.5\tabcolsep}}c@{\hspace{0.5\tabcolsep}}c@{}}
\toprule
\textbf{Id} & \textbf{Query description} & \multicolumn{10}{c}{\textbf{DiffSearch \textbar{} REGEX}} \\
\cmidrule{3-13}
& & \multicolumn{3}{c}{User 1} & \multicolumn{3}{c}{User 2} & \multicolumn{3}{c}{User 3} & \multicolumn{3}{c}{User 4} & \multicolumn{3}{c}{User 5} & \multicolumn{3}{c}{User 6} & \multicolumn{3}{c}{User 7} & \multicolumn{3}{c}{User 8} & \multicolumn{3}{c}{User 9} & \multicolumn{3}{c}{User 10} & \multicolumn{3}{c}{Total} \\
\midrule
1 & Find changes in which a return statement that returns a literal changes to returning the result of a method call. & 10&\textbar{}&~0 & 10 & \textbar{} & ~0 & ~0& \textbar{} & 10 & ~0 & \textbar{}& ~0 & 10& \textbar{}& ~0 & ~7& \textbar{}& ~0 & 10 & \textbar{} & ~0 & ~7 & \textbar{}& ~0 & ~7& \textbar{}& ~0 & ~7 & \textbar{}& ~0 & ~68 & \textbar{}& ~10 \\
2 & Find changes where the developer swaps the arguments of a method call. & ~0 & \textbar{} & ~0 & ~0 & \textbar{} & ~0 & 10 & \textbar{} & ~0 & 10 & \textbar{}& ~0 & 10& \textbar{}& ~0 & ~0& \textbar{}& ~0 & 10 & \textbar{}& ~0 & 10 & \textbar{}& ~0 & 10 & \textbar{}& ~0 & 10 & \textbar{}& ~0 & ~70& \textbar{}& ~~0 \\
3 & Find changes that add an import of a class in the form ``import somePkg.SomeClass''. & 10 & \textbar{}& ~0 & ~0 & \textbar{} & 10 & 10& \textbar{}& 10 & 10& \textbar{}& 10 & ~0& \textbar{}& 10 & 10& \textbar{}& 10 & 10& \textbar{}& 10 & 10& \textbar{}& ~0 & 10& \textbar{}& ~0 & 10 & \textbar{}& ~0 & ~80& \textbar{}& ~60 \\
4 & Find changes that add a call to close some resource, e.g., a stream or file reader. & ~0 & \textbar{}& ~0 & 10 & \textbar{} & 10 & 10& \textbar{}& 10 & 10& \textbar{}& ~0 & 10& \textbar{}& 10 & ~0& \textbar{} & 10 & 10 & \textbar{}& 10 & 10& \textbar{}& ~0 & 10& \textbar{}& ~0 & 10& \textbar{}& 10 & ~80& \textbar{}& ~60 \\
5 & Find changes where the condition of an if statement with a body changes from ``== null'' to ``!= null''. & ~4 & \textbar{}& ~0 & 10 & \textbar{} & ~0 & ~4 & \textbar{}& ~5 & ~0& \textbar{}& ~0 & ~7& \textbar{}& ~0 & ~0& \textbar{}& ~2 & ~4& \textbar{}& ~0 & ~4& \textbar{}& ~0 & ~0& \textbar{}& ~0 & ~5& \textbar{}& ~0 & ~38& \textbar{}& ~~7 \\
6 & Find changes that remove a method call with one argument. & 10& \textbar{}& ~0 & 10 & \textbar{} & ~1 & 10& \textbar{}& 10 & 10 &\textbar{}& ~0 & 10 &\textbar{}& ~0 & 10 & \textbar{} & 10 & 10 &\textbar{}& 10 & 10 &\textbar{}& ~0 & 10& \textbar{}& ~0 & 10 & \textbar{}& ~0 & 100& \textbar{}& ~31 \\
7 & Find changes that insert an assertion using Java's ``assert'' keyword. & 10& \textbar{}& ~0 & 10& \textbar{}& 10 & ~0& \textbar{}& 10 & ~0& \textbar{}& ~2 & 10& \textbar{}& 10 & ~0& \textbar{}& 10 & 10 &\textbar{}& 10 & 10& \textbar{}& ~0 & 10 &\textbar{}& ~0 & 10& \textbar{}& 10 & ~70& \textbar{}& ~62 \\
8 & Find changes in which a code snippet is surrounded with a try/catch block. & ~0 & \textbar{}& ~0 & ~0 & \textbar{} & ~0 & ~0& \textbar{}& ~0 & ~0& \textbar{}& ~0 & ~0& \textbar{}& ~0 & ~10& \textbar{}& ~0 & ~4& \textbar{}& 10 & 10& \textbar{}& ~0 & ~0& \textbar{}& ~0 & ~1 &\textbar{}& ~0 & ~25& \textbar{}& ~~0 \\
9 & Find changes where the condition of a while loop is changed. & 10 & \textbar{}& ~0 & 10 & \textbar{} & 10 & 10& \textbar{}& ~2 & 10& \textbar{}& ~0 & 10& \textbar{}& ~0 & ~0 & \textbar{}& ~1 & 10& \textbar{}& ~0 & ~0& \textbar{}& ~0 & 10 & \textbar{}& ~0 & 10 & \textbar{}& ~0 & ~80& \textbar{}& ~13 \\
10 & Find changes that remove a call to System.out.println(...). & 10 & \textbar{} & ~0 & 10 & \textbar{} & 10 & 10& \textbar{} &10 & 10& \textbar{}& ~0 & 10& \textbar{}& 10 & 10& \textbar{}& 10 & 10& \textbar{}& 10 & 10& \textbar{}& ~0 & 10 &\textbar{}& ~0 & 10 &\textbar{}& 10 & 100& \textbar{}& ~60 \\
\midrule
& Total & 64 &\textbar{}& ~0 & 70 &\textbar{}& 51 & 64 &\textbar{}& 67 & 60& \textbar{}& 12 & 77 &\textbar{}& 40 & 47& \textbar{}& 53 & 88& \textbar{}& 50 & 81 &\textbar{}& ~0 & 77& \textbar{}& ~0 & 83 &\textbar{} &30 & 711& \textbar{}& 303 \\
\bottomrule
\end{tabular}
\end{table*}
\Cref{tab:user study} shows the number of search results obtained using DiffSearch{} and REGEX.
Across the entire study, the participants find 711 code changes with DiffSearch{} but only 303 with REGEX.
Inspecting individual queries shows that, while some are harder than others, at least one user finds ten code changes for each query.
With DiffSearch{}, users retrieve at least one code change for 77.0\% of all queries, whereas with REGEX, they do so for only 35.0\% of all queries.
Likewise, users find the desired number of ten code changes for 65.0\% of all queries with DiffSearch{}, but for only 29.0\% with REGEX.
Overall, we conclude that DiffSearch{} enables users to effectively find code changes, and that the approach clearly outperforms the REGEX-based baseline.
\subsubsection{Qualitative Results}
While DiffSearch{} clearly outperforms REGEX for all ten queries, there are some user-query pairs where REGEX yields more results than DiffSearch{}.
Analyzing these cases shows two main reasons.
First, some users were effective with regular expressions by searching for simple code changes that only add or only remove a single line of code.
For example, for query \#3, some users simply searched for ``+ import (.*)''.
Second, some users formulated regular expression queries that are more general than the natural language description we provide and then manually filtered the results to find the ten relevant code changes.
For example, for query \#5, a user searched for ``if((.*?))\{'' and then manually checked for conditions that involve \code{null}.
Some queries are easy to satisfy with DiffSearch{}: for query \#6, all users obtain enough results with queries like ``ID(EXPR); $\rightarrow$ \_'', and for query \#10, all users formulate a query like ``System.out.println(EXPR); $\rightarrow$ \_'', together yielding 100 satisfying results, which underlines how easy it is to query DiffSearch{}.
The user study also shows how quickly users learn to use DiffSearch{}.
For example, Users 2 and 5 find zero code changes with DiffSearch{} for query \#3, but ten code changes for query \#4, i.e., users pick up the query syntax through experience.
For instance, User 2 initially formulates syntactically invalid queries for query \#3, such as ``\_ $\rightarrow$ import LT().LT()'' and ``\_ $\rightarrow$ import LT$<$...$>$.LT$<$...$>$'', but after a few attempts understands the syntax and performs better on the following queries.
These examples illustrate that DiffSearch{} is particularly useful when searching for non-trivial code changes and to avoid false positive results.
We also asked for informal feedback about both tools, to better understand their strengths and weaknesses.
Users report three reasons for preferring DiffSearch{} over REGEX.
First, they find the DiffSearch{} query language easier to use than regular expression syntax, because it builds upon the underlying programming language.
In particular, some users state that within the two minutes they were able to formulate a DiffSearch{} query, but not a working regular expression, especially for complex queries, such as multi-line code changes.
Second, REGEX was often much slower than DiffSearch{} because it linearly searches through all code changes.
This inefficiency, especially for more complex regular expressions, caused some users to not find any relevant code changes in the given time.
Finally, some users mention that REGEX syntax is not precise enough to formulate effective queries, leading to many false positives.
\subsection{RQ4: Searching for Bug Fixes}
\label{sec:eval bug patterns}
As a case study for using DiffSearch{}, we apply it to search for instances of bug fix patterns, which could help, e.g., to establish a dataset for evaluating bug detection tools~\cite{ase2018-study}, automated program repair tools~\cite{oopsla2019}, or for training a learning-based bug detection tool~\cite{oopsla2018-DeepBugs}.
We build on a set of 16 patterns defined by prior work~\cite{Karampatsis2019a}, of which we use twelve (\Cref{tab:EffectivenessQueries}).
The remaining four bug fix patterns are all about single-token changes, e.g., changing a numeric literal or changing a modifier, which currently cannot be expressed with our query language.
For the twelve supported patterns, we formulate queries based on the descriptions of the patterns and then search for them with DiffSearch{}.
We use two different datasets for this case study.
First, a set of around 10,000 code changes, called \emph{SStuBs commits}, which contains all those commits where the prior work~\cite{Karampatsis2019a} found instances of the bug fix patterns through custom-built analysis scripts; we refer to these previously found instances as \emph{SStuBs}.
Second, a set of around 1,000,000 code changes, called \emph{Large}, sampled from all the repositories analyzed in the prior work.
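As a toy illustration of one of these patterns (``Same caller, swap args''), the following string-based check tests whether a change swaps the two arguments of a call while keeping the callee; this is a hypothetical simplification, since DiffSearch{} queries match parse trees rather than strings:

```python
import re

# Toy check for the "same caller, swap args" bug fix pattern:
# f(a, b) -> f(b, a) with an unchanged callee name.
CALL = re.compile(r"(\w+)\(\s*(\w+)\s*,\s*(\w+)\s*\)")

def is_arg_swap(old, new):
    m_old, m_new = CALL.fullmatch(old.strip()), CALL.fullmatch(new.strip())
    if not (m_old and m_new):
        return False
    return (m_old.group(1) == m_new.group(1)  # same callee
            and (m_old.group(2), m_old.group(3))
                == (m_new.group(3), m_new.group(2)))  # arguments swapped

print(is_arg_swap("min(a, b)", "min(b, a)"))  # -> True
print(is_arg_swap("min(a, b)", "max(b, a)"))  # -> False (callee changed)
```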
\begin{table}
\caption{Effectiveness of DiffSearch{} in finding instances of bug fix patterns~\cite{Karampatsis2019a}.}
\label{tab:EffectivenessQueries}
\setlength{\tabcolsep}{3pt}
\centering
\begin{tabular}{@{}rlrrr|r@{}}
\toprule
& \multirow{2}{*}{Description} & \multicolumn{3}{c}{SStuBs commits (10k)} & \multicolumn{1}{c}{Large (1M)} \\ \cmidrule(l){3-6}
& & \multicolumn{1}{c}{SStuBs} & \multicolumn{1}{c}{DiffSearch} & \multicolumn{1}{c|}{Both} & \multicolumn{1}{c}{DiffSearch} \\ \midrule
1 & Change only caller & 132 & 1,880 & 121 & 5,974 \\
2 & Change binary operator & 211 & 347 & 131 & 2,979 \\
3 & More specific if & 130 & 592 & 116 & 5,660 \\
4 & Less specific if & 166 & 592 & 150 & 5,387 \\
5 & Wrong function name & 1,141 & 1,439 & 935 & 8,109 \\
6 & Same caller, more args & 557 & 2,108 & 432 & 11,207 \\
7 & Same caller, less args & 110 & 2,123 & 75 & 10,798 \\
8 & Same caller, swap args & 98 & 2,285 & 89 & 9,042 \\
9 & Change unary operator & 126 & 134 & 70 & 6,081 \\
10 & Change binary operand & 91 & 347 & 73 & 2,136 \\
11 & Add throws exception & 60 & 1,834 & 34 & 3,848 \\
12 & Delete throws exception & 45 & 2,278 & 44 & 3,682 \\ \midrule
& Total & 2,867 & 15,959 & 2,270 & 74,903 \\ \bottomrule
\end{tabular}
\end{table}
\Cref{tab:EffectivenessQueries} shows for each bug fix pattern how many code changes the different approaches find.
DiffSearch{} returns a total of 15,959 code changes for the first dataset and 74,903 for the second dataset.
Computing the intersection with the results retrieved by SStuBs, DiffSearch{} finds 79.2\% of their changes, a result consistent with the Java recall computed in RQ1.
Moreover, DiffSearch{} finds many more matching code changes, increasing the dataset from 2,867 to 15,959 examples of bug fixes.
The reason is that our queries are more general than the custom analysis scripts in SStuBs and include, e.g., also code changes that perform other changes besides the specific bug fix.
The number of code changes found by DiffSearch{} is higher than the number of commits (10k) because a single commit may match multiple patterns.
For example, a change that swaps two arguments and modifies a function name will appear in patterns~5 and~8.
Overall, DiffSearch{} is effective at finding various examples of bug fix patterns, showing the usefulness of the approach for creating large-scale datasets.
\subsection{RQ5: Impact of Parameters}
\label{sec:eval parameters}
We perform a sensitivity analysis for the two main parameters of DiffSearch{}:
the length $l$ of feature vectors (Section~\ref{sec:features}), and the number $k$ of candidate matches retrieved via the feature vectors (Section~\ref{sec:indexingRetrieval}).
We select a set of values from 1,000 to 20,000 for $k$ and from 500 to 4,000 for $l$, i.e., values below and above the defaults, and then measure their impact on the time to answer queries, the recall, and the size of the index.
\begin{table}[t]
\caption{Impact of length $l$ of feature vectors and number $k$ of candidates (default configuration is bold).}
\label{tab:parameters}
\setlength{\tabcolsep}{9pt}
\centering
\begin{tabular}{@{}rrrrrrr@{}}
\toprule
$k$ & $l$ & \multicolumn{3}{@{}c@{}}{Response time (s)} & Recall & Size of \\
\cmidrule{3-5}
&& min & avg & max & (\%) & index (GB) \\
\midrule
\multicolumn{7}{@{}l@{}}{\emph{Java:}} \\
\midrule
1,000 & 1,000 & 1.5 & 1.9 & 3.5 & 71.8 & 4.0 \\
\textbf{5,000} & \textbf{1,000} & \textbf{1.5} & \textbf{2.2} & \textbf{9.0} & \textbf{80.7} & \textbf{4.0} \\
10,000 & 1,000 & 1.7 & 2.5 & 9.4 & 84.9 & 4.0 \\
20,000 & 1,000 & 1.8 & 3.1 & 17.7 & 87.3 & 4.0 \\
5,000 & 500 & 0.8 & 1.3 & 8.1 & 79.3 & 2.0 \\
5,000 & 2,000 & 3.0 & 4.2 & 9.9 & 80.6 & 8.0 \\
5,000 & 4,000 & 5.8 & 7.4 & 15.3 & 78.1 & 16.0 \\
\midrule
\multicolumn{7}{@{}l@{}}{\emph{Python:}} \\
\midrule
1,000 & 1,000 & 3.0 & 4.1 & 5.5 & 81.9 & 4.1 \\
\textbf{5,000} & \textbf{1,000} & \textbf{1.8} & \textbf{2.4} & \textbf{3.5} & \textbf{89.8} & \textbf{4.1} \\
10,000 & 1,000 & 3.5 & 5.0 & 8.9 & 91.6 & 4.1 \\
20,000 & 1,000 & 4.1 & 6.0 & 12.4 & 93.7 & 4.1 \\
5,000 & 500 & 1.0 & 1.6 & 3.1 & 86.6 & 2.0 \\
5,000 & 2,000 & 2.7 & 4.9 & 40.8 & 89.8 & 8.1\\
5,000 & 4,000 & 6.1 & 7.9 & 13.1 & 83.4 & 16.3 \\
\midrule
\multicolumn{7}{@{}l@{}}{\emph{JavaScript:}} \\
\midrule
1,000 & 1,000 & 1.2 & 1.9 & 2.8 & 85.4 & 4.0 \\
\textbf{5,000} & \textbf{1,000} & \textbf{1.3} & \textbf{2.0} & \textbf{2.8} & \textbf{90.4} & \textbf{4.0} \\
10,000 & 1,000 & 1.4 & 2.3 & 3.3 & 94.0 & 4.0 \\
20,000 & 1,000 & 1.8 & 2.9 & 5.7 & 95.6 & 4.0 \\
5,000 & 500 & 0.7 & 1.2 & 2.1 & 90.3 & 2.0 \\
5,000 & 2,000 & 3.1 & 4.5 & 5.4 & 92.5 & 8.0 \\
5,000 & 4,000 & 5.1 & 9.2 & 12.8 & 88.6 & 16.1 \\
\bottomrule
\end{tabular}
\end{table}
Table~\ref{tab:parameters} shows the results.
We find that retrieving more candidate code changes, i.e., a higher $k$, slightly increases the response time.
The reason is that matching more code changes against the query increases the time taken by the matching phase.
On the positive side, increasing $k$ increases the recall, reaching 87.3\% for Java, 93.7\% for Python, and 95.6\% for JavaScript when $k$=20,000, while still providing an acceptable average response time.
A larger $l$ increases the time to answer a query because longer feature vectors slow down the nearest neighbor search.
Likewise, a larger $l$ also increases the size of the index.
Since increasing $l$ beyond our default does not significantly increase recall, we use $l$=1,000 as the default to have a manageable index size and a reasonable response time.
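The monotone effect of $k$ can be seen in a toy example: since retrieving more candidates can only add matches, recall is non-decreasing in $k$ (hypothetical data for illustration):

```python
# Sketch: recall as a function of k. Retrieving more candidates can only
# add matches, so recall is non-decreasing in k, at the cost of spending
# more time in the matching phase.
ranked_candidates = ["c7", "c2", "c9", "c1", "c4"]  # by feature similarity
ground_truth = {"c2", "c4"}  # changes that truly match the query

def recall_at(k):
    return len(set(ranked_candidates[:k]) & ground_truth) / len(ground_truth)

print([recall_at(k) for k in (1, 2, 3, 5)])  # -> [0.0, 0.5, 0.5, 1.0]
```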
\section{Related Work}
\emph{Code Search.}
Code search engines allow users to find code snippets based on method signatures~\cite{reissCodeSearch}, existing code examples~\cite{kim2018facoy,Luan2019,Premtoon2020}, or natural language queries~\cite{Gu2018,Sachdev2018,cambronero2019deep}.
Sourcerer provides an infrastructure that combines several of the above ideas~\cite{sourcerer}.
Early work by Paul et al.~\cite{Paul1994} proposes a mechanism similar to the placeholders in our query language.
The most important difference between these approaches and DiffSearch{} is that we search for changes of code, not for code snippets within a single snapshot of code.
Another difference is that DiffSearch{} guarantees that all search results match the given query, whereas the existing techniques, with the exception of \cite{Premtoon2020}, are aimed at similarity only.
Prequel has a goal similar to DiffSearch{}, and matches patches against user-provided rules that the code before and after a patch must comply with~\cite{Lawall2016}.
The approaches differ in two aspects.
First, Prequel's rules are based on the semantic patch language of Coccinelle~\cite{Lawall2018} and may include executable code, i.e., queries are Turing-complete.
In contrast, our queries are purely declarative and build on the underlying programming language.
Second, Prequel performs a regular expression-based pre-filtering for each query, followed by a linear search through all commits.
As a result, answering a query may take minutes or, if the pre-filtering is not effective, even longer~\cite{Lawall2016}.
In contrast, DiffSearch{} avoids a linear search via feature-based retrieval, and hence, responds to queries across hundreds of thousands of code changes within seconds.
Several ideas to improve the user's interaction with a code search engine have been proposed, such as refining search results based on user's feedback about the quality of results~\cite{martie2017understanding,sivaraman2019active}.
Other work resolves vocabulary mismatches between queries and code~\cite{sirres2018augmenting}.
Future work could adopt similar ideas to searching for code changes.
\emph{Code Changes as Edit Scripts.}
To reason about code changes, several techniques derive edit scripts on ASTs~\cite{Fluri2007,Hashimoto2008,Falleri2014,Erdweg2021}, providing an abstract description of the change that can then be applied elsewhere~\cite{Meng2011}.
Lase generalizes from multiple code changes into a single edit script~\cite{Meng2013}.
Future work could explore using an edit script-based representation of code changes to search for code changes.
An advantage of our parse tree-based feature extraction is that it does not require aligning the old and new code, allowing us to featurize hundreds of thousands of code changes in reasonable time.
\emph{Mining Code Changes.}
Work on mining code repositories and learning from code changes shows development histories to be a rich source of implicitly stored knowledge.
For example, existing approaches leverage version histories to
extract repetitive code changes~\cite{negara2014mining,Nguyen2019,nguyen2013study},
predict code changes~\cite{Tufano2019},
predict bugs~\cite{Livshits2005,Kim2008}, or to
learn about API usages~\cite{nguyen2016api,Paletov2018}.
Mining approaches typically consider all code changes in a project's version history or filter changes using simple patterns, e.g., keywords in commit messages.
In contrast, DiffSearch{} allows for identifying code changes that match a specific query.
\emph{Learning from Code Changes.}
Large sets of code changes enable learning-based techniques.
One line of work learns from specific kinds of changes, e.g., fixes of particular bug patterns, how to apply this kind of change to other code for automated program repair~\cite{Rolim2017,Rolim2018,oopsla2019}.
Another line of work ranks potential program repairs based on their similarity to common code change patterns~\cite{Le2016}.
DiffSearch{} could help gather datasets of changes for these approaches to learn from, e.g., based on queries for bug fixing patterns.
\emph{Other Analyses of Code Changes.}
There are various other analyses of code changes, of which we discuss only a subset here.
Hashimoto et al.\ propose a technique for reducing a diff to the essence of a bug~\cite{Hashimoto2018}.
Nielsen et al.~\cite{nielsen2021semantic} use JavaScript code change templates to fix code broken due to library evolution.
Another approach automatically documents code changes with a natural language description~\cite{Buse2010}.
SCC~\cite{giger2011comparing} and DeepJIT~\cite{Hoang2019} are predictive models that estimate how likely a code change is to introduce a bug.
A related problem is to find the bug-inducing code change for a given bug report~\cite{Wen2016,Wu2018}.
DiffBase~\cite{Wu2021} encodes facts about different versions of a program to facilitate multi-version program analyses.
CodeShovel~\cite{Grund2021} tracks a method from its creation to its current state throughout a version history.
All these approaches relate to our work by also reasoning about code changes, but they aim for different goals than DiffSearch{}.
\emph{Clone Detection.}
DiffSearch{} relates to code clone detectors~\cite{kamiya2002ccfinder,Li2006,jiang2007deckard,roy2008nicad,sajnani2016sourcerercc}, as answering a query resembles finding clones of the query.
Clone detectors are typically evaluated on a single snapshot of a code base, and they may take several minutes or even hours to terminate~\cite{sajnani2016sourcerercc}.
In principle, one could use an off-the-shelf code clone detector to search for specific kinds of code changes, where the old and parts of the query must be clones of the old and new parts of a change.
However, this approach would search for clones among all code changes for each query, which may not be fast enough for an interactive search engine.
Another difference is that DiffSearch{} guarantees to yield code changes that match a query, whereas clone detectors are interested in similar but not necessarily exactly matching code.
Some clone detectors summarize code in ways related to our feature extraction.
For example, Deckard~\cite{jiang2007deckard} computes characteristic vectors of parse trees and SourcererCC~\cite{sajnani2016sourcerercc} indexes large amounts of code into a bag-of-tokens representation.
Integrating such ideas into the feature-based retrieval in DiffSearch{} could further improve recall.
\section{Threats to Validity}
\paragraph{Internal Validity}
Several factors may influence our results.
First, to establish a ground truth for recall we use all code changes that match according to the algorithm in Section~\ref{sec:pruning}.
While designed to be sound, bugs in the implementation of the algorithm might cause mistakes in the ground truth.
We mitigate this threat through extensive automated and manual validation of the implementation.
Second, the user study may be subject to mistakes and biases by the participants.
We try to mitigate this threat by giving the same task to ten participants with different backgrounds and levels of experience.
Finally, the time limit of 120 seconds imposed in the user study might be longer than real users are willing to invest, or not long enough for some users to formulate adequate queries.
We choose this time limit based on empirical results on code search sessions, which are reported to have a median length of 89 seconds and an average length of 210 seconds~\cite{Sadowski2015}.
\paragraph{External Validity}
Several factors may influence the generalizability of our results.
First, the queries we use may not be representative for queries actual users would run.
As DiffSearch{} is the first search engine of its kind, we cannot base our experimental design on queries observed elsewhere.
We mitigate this threat by selecting queries that cover different syntactic categories of changes and different reasons for changing the code, and by building the ground truth for evaluating recall from real-world code changes.
Second, other programming languages than those we evaluate on may create additional challenges.
We reduce this risk by implementing DiffSearch{} for three languages and by keeping most parts of the approach language-agnostic, except for adapting the grammar of the language into a query language.
\section{Approach}
This section presents the approach in detail.
Before going through the four components introduced in \Cref{sec:overview}, we define the query language used to specify what kind of code changes to search for.
\subsection{Query Language}
\label{sec:queryingLanguage}
To search for specific kinds of code changes, DiffSearch{} accepts queries that describe the code before and after the change.
Our goal is to provide a query language that developers can learn with minimal effort and that supports all constructs of the target programming language.
To this end, the query language is an extension of the target programming language, i.e., it includes all rules of the target language plus additional features useful for queries.
\renewcommand{\syntleft}{\normalfont\itshape}
\renewcommand{\syntright}{}
\begin{figure}
\setlength{\grammarparsep}{.2em}
\setlength{\grammarindent}{8em}
\small
\begin{grammar}
<Query> ::= <Snippet> $\rightarrow$ <Snippet>
<Snippet> ::= <Stmt>* | <Expression>
| _
<Stmt> ::= $\langle$...$\rangle$
| (Target language rules)
<Expression> ::= EXPR
| EXPR$\langle$<Number>$\rangle$
| $\langle$...$\rangle$ | (Target language rules)
<AssignOperator> ::= OP
| OP$\langle$<Number>$\rangle$
| (Target language rules)
<BinaryOperator> ::= binOP
| binOP$\langle$<Number>$\rangle$
| (Target language rules)
<UnaryOperator> ::= unOP
| unOP$\langle$<Number>$\rangle$
| (Target language rules)
<Identifier> ::= ID
| ID$\langle$<Number>$\rangle$
| (Target language rules)
<Literal> ::= LT
| LT$\langle$<Number>$\rangle$
| (Target language rules)
\end{grammar}
\caption{Simplified grammar of queries. Non-terminals are in \textit{italics}.}
\label{fig:grammar}
\end{figure}
\Cref{fig:grammar} shows the grammar of our query language.
A query consists of two sequences of statements, which describe the old and new code, respectively.
The syntax for statements is inherited from the target programming language and not shown in the grammar.
Instead of a regular code snippet, a query may contain an underscore to indicate the absence of any code, which is useful to describe code changes that insert or remove code.
The grammar extends the target language by adding placeholders for specific syntactic entities, namely expressions, operators, identifiers, and literals.
For each such entity, a query can either describe with an unnamed placeholder that there should be any such entity, e.g., \code{EXPR} for any expression, or repeatedly refer to a specific entity with a named placeholder, e.g., using \code{EXPR<1>} and \code{EXPR<2>}.
Named placeholders will be bound to the same entity across the entire query, e.g., to say that the same expression \code{EXPR<1>} must appear on both sides. We also introduce the wildcard \code{<...>} that matches any statement or expression.
\begin{table}
\caption{Examples of Java changes and matching queries.}
\label{tab:grammarExamples}
\setlength{\tabcolsep}{5pt}
\small
\begin{tabular}{@{}ll@{}}
\toprule
Code change&DiffSearch query \\
\midrule
\begin{minipage}{8em}
\small
\begin{Verbatim}
- evt.trig();
\end{Verbatim}
\end{minipage}
&
\begin{minipage}[t]{8em}
\small
\begin{Verbatim}
ID.ID();
\end{Verbatim}
\end{minipage}
\hspace{-.3em}$\rightarrow$\hspace{.3em}
\begin{minipage}[t]{8em}
\begin{Verbatim}
_
\end{Verbatim}
\end{minipage}
\\
\midrule
\begin{minipage}[t]{6.5em}
\small
\begin{Verbatim}
- if (x > 0)
- y = 1;
+ if (x < 0)
+ y = 0;
\end{Verbatim}
\end{minipage}
&
\begin{minipage}[t]{8em}
\small
\begin{Verbatim}
if (EXPR)
ID OP LT;
\end{Verbatim}
\end{minipage}
\hspace{-.3em}$\rightarrow$\hspace{.3em}
\begin{minipage}[t]{8em}
\small
\begin{Verbatim}
if (EXPR)
ID OP LT;
\end{Verbatim}
\end{minipage}
\\
\midrule
\begin{minipage}[t]{6.5em}
\small
\begin{Verbatim}
- run(k);
- now(k);
+ runNow(k);
\end{Verbatim}
\end{minipage}
&
\begin{minipage}[t]{8em}
\small
\begin{Verbatim}
run(EXPR<0>);
now(EXPR<0>);
\end{Verbatim}
\end{minipage}
\hspace{-.3em}$\rightarrow$\hspace{.3em}
\begin{minipage}[t]{9em}
\small
\begin{Verbatim}
runNow(EXPR<0>);
\end{Verbatim}
\end{minipage}
\\
\bottomrule
\end{tabular}
\end{table}
To illustrate the query language, \Cref{tab:grammarExamples} gives a few examples of code changes and a corresponding query that matches the code change.
The first two examples use unnamed placeholders, e.g., to match arbitrary identifiers.
The third example uses a named placeholder:
The \code{EXPR<0>} in both the old and new part of the query means that this expression, here \code{k}, remains the same despite the code change, which replaces two calls with one.
\subsection{Tree-based Representation of Code Changes and Queries}
\label{sec:trees}
One goal of DiffSearch{} is to be mostly language-agnostic, making it easy to apply the approach to different programming languages.
Our current version supports Java, JavaScript, and Python.
To this end, the approach represents code changes and queries using a parse tree, i.e., a representation that is straightforward to obtain for any programming language.
The benefit of parse trees is that they abstract away some details, such as irrelevant whitespace, yet provide an accurate representation of code changes.
To represent a set of commits in a version history as pairs of trees, DiffSearch{} first splits each commit into hunks, which results in a set of code changes (\Cref{def:code change}).
The approach then parses the old and new code of a hunk using the programming language grammar into a single tree that represents the code change.
Likewise, to represent a query, DiffSearch{} parses the query into a parse tree using our extension of the grammar (\Cref{fig:grammar}).
For example, \Cref{fig:trees} shows the parse trees of a change and a query.
The change on the left corresponds to Code change~2 from \Cref{sec:overview}, which swaps \code{x} and \code{y} of a call to \code{isValidPoint}.
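To make the hunk-splitting step concrete, the following Python sketch splits a unified diff into hunks and recovers the old and new code of each. This is an illustrative simplification, not the actual DiffSearch implementation; in particular, treating context lines as part of both sides is an assumption of this sketch.

```python
def split_hunks(diff_text):
    """Split a unified diff into hunks; return (old_code, new_code) pairs."""
    hunks, old, new = [], [], []

    def flush():
        if old or new:
            hunks.append(("\n".join(old), "\n".join(new)))

    for line in diff_text.splitlines():
        if line.startswith(("+++", "---")):  # file headers, not hunk content
            continue
        if line.startswith("@@"):            # hunk header starts a new hunk
            flush()
            old, new = [], []
        elif line.startswith("-"):
            old.append(line[1:])
        elif line.startswith("+"):
            new.append(line[1:])
        elif line.startswith(" "):           # context lines belong to both sides
            old.append(line[1:])
            new.append(line[1:])
    flush()
    return hunks
```

For example, a hunk that replaces `a();` by `b();` next to an unchanged line `c;` yields the pair `("a();\nc;", "b();\nc;")`, which is then parsed into the tree representation.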
\begin{figure*}
\centering
\includegraphics[width=.9\linewidth]{images/parseTrees}
\caption{Parse tree representations of Code change~2 (left) and the query from \Cref{sec:overview} (right). Only some of all considered features are highlighted for illustration.}
\label{fig:trees}
\end{figure*}
An interesting challenge in parsing code changes and queries is syntactically incomplete code snippets.
For example, the code changes in \Cref{sec:overview} open a block with \code{\{} but do not close it with \code{\}}, because the line with the closing curly brace was not changed.
DiffSearch{} addresses this challenge by relaxing the grammar of the target language so that it accepts individual code lines even when they are syntactically incomplete.
For example, we relax the grammar to allow for unmatched parentheses and partial expressions.
An alternative to parse trees are abstract syntax trees (ASTs).
We build on parse trees instead because ASTs abstract away many syntactic details that may be relevant in queries.
For example, consider the following code change that adds parentheses to make a complex expression easier to read:
\begin{tabular}{ll}
&
\begin{minipage}[t]{10em}
\begin{Verbatim}[fontsize=\small]
flag = alive || x && y;
\end{Verbatim}
\end{minipage}
\\
$\rightarrow$
&
\begin{minipage}[t]{6em}
\begin{Verbatim}[fontsize=\small]
flag = alive || (x && y);
\end{Verbatim}
\end{minipage}
\end{tabular}
\vspace{.3em}
Because the added parentheses preserve the semantics of the expression, they are abstracted away in a typical AST, i.e., the old and new code have the same AST.
As a result, an AST-based representation could neither represent this change nor a query to search for it.
\subsection{Extracting Features}
\label{sec:features}
Based on the tree representation of code changes and queries, the feature extraction component of DiffSearch{} represents each tree as a set of features.
The goal of this step is to enable quickly searching through hundreds of thousands of code changes.
By projecting both code changes and queries into the same feature space, we enable the approach to compare them efficiently.
An alternative would be to pairwise compare each code change with a given query~\cite{Fluri2007,Lawall2016}.
However, such a pairwise comparison requires computation time linear in the number of code changes, which would negatively affect scalability.
DiffSearch{} uses two kinds of features.
The first kind, \emph{node features}, encodes the presence of a node in the parse tree.
For the example in \Cref{fig:trees}, the dotted, blue lines show three of the extracted node features.
The second kind, \emph{parse tree triangles}, encodes the presence of a specific subtree.
Each parse tree triangle is a tree that consists of a node and all its descendants up to some configurable depth.
We use a depth of one as a default, i.e., a triangle contains a node and its immediate child nodes.
For the example in \Cref{fig:trees}, the dashed, red lines highlight two of the extracted triangles.
The triangle at the top encodes the fact that there is an if statement, while the other triangle encodes the fact that the code contains an expression list with exactly two expressions.
The two kinds of features complement each other because node features encode information about individual nodes, including identifiers and operators, whereas parse tree triangles represent how nodes are structured.
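The two kinds of features can be illustrated with a small Python sketch that operates on parse trees encoded as nested `(label, child, ...)` tuples. The tree encoding and the feature tuples are illustrative choices for this sketch, not DiffSearch's internal representation.

```python
def node_features(tree):
    """Collect the label of every node in a (label, children...) tuple tree."""
    label, children = tree[0], tree[1:]
    feats = {("node", label)}
    for child in children:
        feats |= node_features(child)
    return feats

def triangle_features(tree):
    """Collect depth-1 triangles: a node label plus its children's labels."""
    label, children = tree[0], tree[1:]
    feats = set()
    if children:
        feats.add(("tri", label, tuple(c[0] for c in children)))
    for child in children:
        feats |= triangle_features(child)
    return feats

# A toy parse tree for a change like `if (cond) { f(); }`
t = ("if_stmt", ("expr", ("id",)), ("block", ("call", ("id",))))
```

On this toy tree, the node features record, e.g., that a call occurs, while the triangle `("tri", "if_stmt", ("expr", "block"))` records that an if statement has an expression and a block as children.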
For each code change or query, the approach extracts a separate set of features for the old and the new code.
With this separation, the features encode whether specific code elements are added or removed in a code change.
The feature sets for code changes and queries are constructed in the same way, except that DiffSearch{} removes node features for placeholder nodes, e.g., \code{ID} or \code{EXPR}, from the query.
The rationale is that we want the features of a query to be a subset of the features of a matching code change, but placeholder nodes never appear in code changes.
Different code changes and queries yield different numbers of features.
To efficiently compare a given query against arbitrary code changes, DiffSearch{} represents all features of a code change or query as a fixed-size feature vector.
The feature vector is a binary vector of length $l_{\mathit{n}} + l'_{\mathit{n}} + l_{\mathit{tri}} + l'_{\mathit{tri}}=l$, where $l_{\mathit{n}}$ and $l'_{\mathit{n}}$ are the number of bits to represent the node features of the old and new code, respectively, and likewise for $l_{\mathit{tri}}$ and $l'_{\mathit{tri}}$ for the parse tree triangle features.
We use $l=1,000$ by default, dividing it equally among the four components, which strikes a balance between representing a diverse set of features and efficiency during indexing and retrieval.
Section~\ref{sec:eval parameters} evaluates different sizes for the feature vector length.
\begin{algorithm}[tb]
\begin{algorithmic}[1]
\Require{Set $F$ of features, target size $l_{\mathit{target}}$}
\Ensure{Feature vector $v$}
\State $v \leftarrow$ vector of $l_{\mathit{target}}$ zeros
\ForAll{$f \in F$}
\State $h \leftarrow \mathit{hash}(f)$
\State $v[h \bmod{} l_{\mathit{target}}] \leftarrow 1$\label{line:mod1}
\EndFor
\State \textbf{return} $v$
\end{algorithmic}
\caption{Represent features as fixed-size vector.}\label{alg:featureVector}
\end{algorithm}
\Cref{alg:featureVector} summarizes how DiffSearch{} maps a set $F$ of features into a fixed-size vector $v$.
The algorithm computes a hash function over the string representations of individual nodes in a feature, sums up the hash values into a value $h$, and sets the $h$-th index of the feature vector to one.
To ensure that the index is within the bounds of $v$, line~\ref{line:mod1} performs a modulo operation.
For each code change or query, the algorithm is invoked four times to map each of the four feature sets into a fixed-size vector.
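\Cref{alg:featureVector} can be sketched in a few lines of Python. The use of MD5 as a stable hash function is an assumption for illustration (the approach does not prescribe a specific hash; Python's built-in \code{hash} is avoided here only because it is randomized across runs).

```python
import hashlib

def feature_vector(features, l_target):
    """Algorithm 1: hash each feature into a fixed-size binary vector."""
    v = [0] * l_target
    for f in features:
        # Stable hash of the feature's string representation
        h = int(hashlib.md5(repr(f).encode()).hexdigest(), 16)
        v[h % l_target] = 1  # modulo keeps the index within bounds of v
    return v
```

Note that the desired subset property carries over to the vectors: if a query's feature set is a subset of a change's feature set, every bit set in the query vector is also set in the change vector (hash collisions can only add bits on the change side).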
\subsection{Indexing and Retrieving Code Changes}
\label{sec:indexingRetrieval}
To prepare for responding to queries, DiffSearch{} runs an offline phase that indexes the given set of code changes.
The indexing and retrieval components of the approach build on FAISS, which is prior work on efficiently searching for similar vectors across a large set of vectors~\cite{johnson2019billion}.
In the first step of the offline phase, DiffSearch{} parses all code changes and stores the parse trees on disk.
In the second step, DiffSearch{} generates the feature vectors of the code changes using the corresponding parse trees.
Given the set $V_{\mathit{changes}}$ of feature vectors of all code changes, the approach computes an index into these vectors.
After the offline indexing phase, DiffSearch{} accepts queries.
For a given query, the approach computes a feature vector $v_{\mathit{query}}$ (\Cref{sec:features}), and then uses the index to efficiently retrieve the most similar feature vectors of code changes.
FAISS allows for efficiently answering approximate nearest neighbor queries, without comparing the query against each vector in $V_{\mathit{changes}}$.
The nearest neighbors are based on the L2 (Euclidean) distance.
To ensure that the presence of matching features is weighted higher than the absence of features, we multiply $v_{\mathit{query}}$ by a constant factor $\frac{l}{2}+1$ before running the nearest neighbor query.
To illustrate this decision, consider an example with three feature vectors: a query $v_Q=(0,0,1)$, a potential match $v_P=(1,1,1)$ with the third feature in common, and a mismatch $v_M=(0,0,0)$.
Naively computing the Euclidean distances yields $d(v_Q,v_P)= \sqrt{2}$ and $d(v_Q,v_M) = \sqrt{1}$, i.e., the mismatch would be closer to the query than the potential match.
Instead, after multiplying $v_Q$ by the constant factor $\frac{3}{2}+1$, we have $d(v_Q,v_P)= \sqrt{4.25}$ and $d(v_Q,v_M) = \sqrt{6.25}$, i.e., the potential match is now closer to the query than the mismatch is.
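The effect of this scaling can be checked numerically; the following few lines reproduce the worked example above.

```python
import math

def l2(a, b):
    """Euclidean (L2) distance between two equal-length vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

vQ, vP, vM = [0, 0, 1], [1, 1, 1], [0, 0, 0]

# Without scaling, the mismatch looks closer to the query than the match.
assert l2(vQ, vM) < l2(vQ, vP)

# Scaling the query by l/2 + 1 = 2.5 flips the ordering.
s = len(vQ) / 2 + 1
vQs = [s * x for x in vQ]
assert l2(vQs, vP) < l2(vQs, vM)
assert abs(l2(vQs, vP) - math.sqrt(4.25)) < 1e-9
assert abs(l2(vQs, vM) - math.sqrt(6.25)) < 1e-9
```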
The approach retrieves the $k$ most similar code changes for a given query.
We use $k=5,000$ by default, and Section~\ref{sec:eval parameters} evaluates other values.
The retrieved candidate code changes are ranked based on their distance to the query, and we use this ranking to sort the final search results shown to a user.
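Conceptually, retrieval thus reduces to a $k$-nearest-neighbor query over the indexed feature vectors. The following brute-force Python sketch stands in for FAISS, which answers the same query approximately and far more efficiently; the function name and interface are illustrative.

```python
import math

def top_k(query_vec, change_vecs, k):
    """Rank indexed change vectors by L2 distance to the scaled query vector."""
    s = len(query_vec) / 2 + 1  # weight present features higher, as above
    q = [s * x for x in query_vec]

    def dist(v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(q, v)))

    ranked = sorted(range(len(change_vecs)), key=lambda i: dist(change_vecs[i]))
    return ranked[:k]
```

For instance, for the query vector $(0,0,1)$ and indexed vectors $(0,0,0)$, $(1,1,1)$, $(0,0,1)$, the sketch ranks the exact match first and the all-zero mismatch last, mirroring the ranking used to sort the final search results.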
\subsection{Matching of Candidate Search Results}
\label{sec:pruning}
Given the $k$ candidate code changes retrieved for a given query as described in \Cref{sec:indexingRetrieval}, DiffSearch{} could return all of them to the user.
However, the feature-based search does not guarantee precision, i.e., that all the retrieved code changes indeed match the query.
One reason is that the features capture only local information, but do not encode the entire parse tree in a lossless way.
Another reason is that the features do not encode the semantics of named placeholders, i.e., they cannot ensure that placeholders are expanded consistently across the old and new code.
To guarantee that all code changes returned in response to a query precisely match the query, the matching component of DiffSearch{} takes the candidate search results obtained via the feature-based retrieval and checks for each candidate whether it indeed matches the query.
Intuitively, a code change matches a query if the placeholders and wildcards in the query can be expanded in a way that yields code identical to the code change or some subset of the code change.
More formally, we define this idea as follows:
\begin{definition}[Match]
\label{def:match}
Given a code change $c \rightarrow c'$ and a query $q \rightarrow q'$, let $t_c, t_{c'}, t_q, t_{q'}$ be the corresponding parse trees.
The code change matches the query if
\begin{itemize}
\item $t_q$ can be expanded into some subtree of $t_c$ and
\item $t_{q'}$ can be expanded into some subtree of $t_{c'}$
\end{itemize}
so that all of the following conditions hold:
\begin{itemize}
\item Each placeholder is expanded into a subtree of the corresponding syntactic entity.
\item All occurrences of a named placeholder are consistently mapped to identical subtrees.
\item Each wildcard is expanded to an arbitrary, possibly empty subtree.
\end{itemize}
\end{definition}
For example, consider the query and code change in \Cref{fig:trees} again.
They match because the tree on the right can be expanded into the tree on the left.
The expansion maps the named placeholders \code{ID<1>} to \code{isValidPoint}, \code{EXPR<1>} to the subtree that represents \code{x}, and \code{EXPR<2>} to the subtree that represents \code{y}.
Moreover the wildcards in the query are both mapped to the empty tree.
As an example of a code change that does not match this query, consider Code change~1 from \Cref{sec:overview} again.
The parse tree of the query cannot be expanded into the parse tree of that code change because there is no way of expanding the query tree while consistently mapping \code{EXPR<1>} and \code{EXPR<2>} to the three method arguments \code{a-1}, \code{b}, and \code{c}.
To check whether a candidate code change indeed matches the given query, DiffSearch{} compares the parse tree of the query with the parse tree of the code change in a top-down, left-to-right manner.
The basic idea is to search for a mapping of nodes in the query tree to nodes in the parse tree that consistently maps named placeholders to identical subtrees.
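The consistency requirement for named placeholders can be illustrated with a deliberately simplified sketch that matches flat token sequences instead of trees. The placeholder-prefix test below is a crude stand-in for the grammar-based check of \Cref{def:match}, and wildcards are omitted; both are assumptions of this sketch.

```python
def match_tokens(query, code, bindings=None):
    """Check if query tokens match code tokens one-to-one, binding named
    placeholders (e.g., 'EXPR<1>') consistently across the whole query."""
    bindings = dict(bindings or {})
    if not query:
        return not code
    if not code:
        return False
    q, c = query[0], code[0]
    # Simplification: any token with one of these prefixes is a placeholder.
    if q.startswith(("EXPR", "ID", "LT", "OP")):
        if "<" in q:  # named placeholder: must bind consistently
            if q in bindings and bindings[q] != c:
                return False
            bindings[q] = c
        return match_tokens(query[1:], code[1:], bindings)
    return q == c and match_tokens(query[1:], code[1:], bindings)
```

On the running example, the query tokens `ID<1> ( EXPR<1> , EXPR<2> )` match `isValidPoint ( y , x )`, whereas a query `EXPR<1> == EXPR<1>` rejects `a == b` because the named placeholder would have to bind to two different tokens.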
On top of this basic idea, the matching algorithm faces two interesting challenges.
We illustrate the challenges with the following query, which searches for code changes where two call statements get replaced by an assignment of a literal to an identifier. The following example shows the query on the left and a matching code change on the right:
\vspace{.3em}
\begin{tabular}{@{}lcr@{}}
\begin{minipage}{3em}
\begin{Verbatim}[fontsize=\small]
ID();
<...>
ID();
\end{Verbatim}
\end{minipage}
&
$\rightarrow$
&
\begin{minipage}{6em}
\begin{Verbatim}[fontsize=\small]
ID = LT;
\end{Verbatim}
\end{minipage}
\hspace{2em}
\begin{minipage}{3em}
\begin{Verbatim}[fontsize=\small]
foo();
bar();
baz();
\end{Verbatim}
\end{minipage}
$~\rightarrow$
\begin{minipage}{6em}
\begin{Verbatim}[fontsize=\small]
x = 5;
foo();
y = 7;
\end{Verbatim}
\end{minipage}
\end{tabular}
\vspace{.3em}
The first challenge arises because queries are allowed to match parts of a change, which is useful for finding relevant changes surrounded by other, irrelevant changed code.
While useful, this property of queries also implies that the query may match at multiple places within a given code change.
In the above example, the \code{ID = LT;} part of the query may match both \code{x = 5;} and \code{y = 7;}.
The second challenge arises because queries may contain wildcards, which are useful to leave parts of a query unspecified.
Wildcards also cause a single query to possibly match in multiple ways.
For the above example, the wildcard could be between the calls of \code{foo} and \code{bar}, between the calls of \code{bar} and \code{baz}, or it could match the call of \code{bar}.
Because of these two challenges, matching must consider different ways of mapping a query onto a code change, which results in a search space of possible matches that must be explored.
\begin{algorithm}[tb]
\caption{Check if a code change matches a query.}
\label{alg:match}
\small
\begin{algorithmic}[1]
\Require{Code change $c \rightarrow c'$ and query $q \rightarrow q'$}
\Ensure{True if they match, False otherwise.}
\State $t_c, t_{c'} \leftarrow \mathit{parse}(c \rightarrow c')$
\State $t_q, t_{q'} \leftarrow \mathit{parse}(q \rightarrow q')$
\State $N_{\mathit{toMatch}} \leftarrow (\mathit{allNodes}(q) \cup \mathit{allNodes}(q')) \setminus \mathit{wildcards}$
\State $W \leftarrow \mathit{candidateMappings}(t_c, t_{c'}, t_q, t_{q'})$
\While{$W$ is not empty}
\State $M \leftarrow$ Take a mapping from $W$
\State $n_q \leftarrow \mathit{nextUnmatchedNode}(M, t_q, t_{q'})$
\State $n_{pq} \leftarrow$ Parent of $n_q$
\State $n_{pc} \leftarrow$ Look up $n_{pq}$ in $M$
\For{$c$ \mbox{\textbf{in}} all not yet matched children of $n_{pc}$}
\If{$\mathit{canAddToMap}(M, c, n_q)$}
\State $M' \leftarrow$ Copy of $M$ with $n_q \mapsto c$
\If{$N_{\mathit{toMatch}} \setminus \mathit{keys}(M') = \emptyset$\\
\hspace{5.1em} \mbox{\textbf{and}} $\mathit{isValid}(M', t_c, t_{c'}, t_q, t_{q'})$}
\State \textbf{return} true \label{line:isMatch}
\Else
\State Add $M'$ to $W$
\EndIf
\EndIf
\EndFor
\EndWhile
\State \textbf{return} false
\end{algorithmic}
\end{algorithm}
DiffSearch{} addresses these challenges in \Cref{alg:match}, which checks whether a given query and code change match.
The algorithm starts by parsing the code change into trees $t_c$ and $t_{c'}$, which represent the old and new part of the change, and likewise for the query.
The core of the algorithm is a worklist-based search through possible mappings between nodes in the parse tree of the query and nodes in the parse tree of the code change.
These mappings are represented as a map $M$ from nodes in the query trees to nodes in the code change trees.
Each mapping $M$ in the worklist $W$ represents a possible way of matching the query against the code change.
To determine whether all nodes in the query have been successfully mapped, the algorithm maintains a set $N_{\mathit{toMatch}}$ of all the nodes in the query that must be matched.
The algorithm explores mappings in $W$ until it either finds a mapping that covers all nodes in $N_{\mathit{toMatch}}$, or until it has unsuccessfully explored all mappings in $W$.
\Cref{alg:match} relies on several helper functions.
One of them, $\mathit{candidateMappings}$, computes the starting points for the algorithm by returning all possible mappings of the roots of $t_q$ and $t_{q'}$ to nodes in the code change trees.
The $\mathit{nextUnmatchedNode}$ function performs a top-down, left-to-right pass through the query trees to find a node that is not yet in the current map $M$.
The $\mathit{canAddToMap}$ function checks if adding a mapping $n_q \mapsto c$ is consistent with an already existing map $M$.
Specifically, it checks that $n_q$ is not yet among the keys of $M$, that $c$ is not yet among the values of $M$, and that the two nodes are either identical non-placeholder nodes or that $n_q$ is a placeholder that can be consistently mapped to $c$ as specified in \Cref{def:match}.
Finally, the helper function $\mathit{isValid}$ checks whether a mapping $M$ that covers all to-be-matched nodes ignores nodes in the change tree only when there is a corresponding wildcard in the query tree. The algorithm postpones this check to $\mathit{isValid}$ to reduce the total number of mappings to explore.
Matching a single code change against a query might cause the algorithm to explore many different mappings, and DiffSearch{} typically invokes \Cref{alg:match} not only once but for tens or hundreds of candidate search results.
To ensure that the approach responds to queries quickly enough for interactive usage, we optimize \Cref{alg:match} by pruning code changes that certainly cannot match a given query.
To this end, the approach checks if all leaf nodes in the parse tree of a query occur at least once in the parse tree of the code change.
For example, consider the following query, which searches for changes in the right-hand side of assignments to a variable \code{myVar}:\footnote{Because the \scode{myVar =}\hspace{.5em} part of the code remains the same, the query expresses that the literal captured by the unnamed placeholder \scode{LT} is changing.}
\vspace{.2em}
\hspace{2em}
\begin{tabular}{rlcr}
&
\begin{minipage}[t]{6em}
\begin{Verbatim}[fontsize=\small]
myVar = LT;
\end{Verbatim}
\end{minipage}
&
$\rightarrow$
&
\begin{minipage}[t]{6em}
\begin{Verbatim}[fontsize=\small]
myVar = LT;
\end{Verbatim}
\end{minipage}
\end{tabular}
\vspace{.3em}
If a code change does not include any token \code{myVar}, then the optimization immediately decides that the code change cannot match the query and skips \Cref{alg:match}.
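This pruning check can be sketched as follows; the list of placeholder prefixes is an illustrative assumption of the sketch, and real identifiers that happen to share a prefix (e.g., \code{IDLE}) would need the grammar-based classification used by DiffSearch.

```python
def might_match(query_leaves, change_tokens):
    """Pruning check: every concrete (non-placeholder) leaf of the query must
    occur somewhere in the code change; otherwise Algorithm 2 can be skipped."""
    placeholders = ("EXPR", "ID", "LT", "OP", "binOP", "unOP", "<...>", "_")
    concrete = [t for t in query_leaves if not t.startswith(placeholders)]
    tokens = set(change_tokens)
    return all(t in tokens for t in concrete)
```

For the \code{myVar} query above, a change whose tokens never include \code{myVar} is pruned immediately, while a change containing \code{myVar = 5;} still proceeds to the full matching algorithm.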
\section{Conclusion}
We present a scalable and precise search engine for code changes.
Given a query that describes code before and after a change, the approach retrieves within seconds relevant examples from a corpus of a million code changes.
Our query language extends the underlying programming language with wildcards and placeholders, providing an intuitive way of formulating queries to search for code changes.
Key to the scalability of DiffSearch{} is to encode both queries and code changes into a common feature space, enabling efficient retrieval of candidate search results.
Matching these candidates against the query guarantees that every returned search result indeed fits the query.
The approach is mostly language-agnostic, and we empirically evaluate it on Java, JavaScript, and Python.
DiffSearch{} answers most queries in less than a second, even when searching through large datasets.
The recall ranges between 80.7\% and 90.4\%, depending on the target language, and can be further increased at the expense of response time.
We also show that users find relevant code changes more effectively with DiffSearch{} than with a regular expression-based search.
Finally, as an example of how the approach could help researchers, we use it to gather a dataset of 74,903 code changes that match recurring bug fix patterns.
We envision DiffSearch{} to serve as a tool useful to both practitioners and researchers, and to provide a basis for future work on searching for code changes.
\section*{Acknowledgment}
This work was supported by the European Research Council (ERC, grant
agreement 851895), and by the German Research Foundation within the
ConcSys and DeMoCo projects.
\section{Implementation}
\label{sec:implementation}
We implement the DiffSearch{} idea into a practical search engine that supports multiple programming languages, currently Java, JavaScript, and Python.
To gather raw code changes, the implementation uses \code{git log -p}.
For each change, a parse tree is created with ANTLR4\footnote{\url{https://www.antlr.org/}}, using the grammar of the target programming language, modified to support queries and to allow for syntactically incomplete code fragments (\Cref{sec:queryingLanguage}).
The indexing and retrieval components build on the FAISS library~\cite{johnson2019billion}, which supports efficient vector similarity queries for up to billions of vectors.
Once changes are indexed, the search engine is a server that responds to queries via one of two publicly available interfaces: a web interface for interactive usage and a web service for larger-scale usage, e.g., to create a dataset of changes.
\section{Introduction}\label{sec:intro}}
\IEEEPARstart{H}{undreds} of thousands of code changes are stored in the version histories of code repositories.
To benefit from this immense source of knowledge, practitioners and researchers often want to search for specific kinds of code changes.
For example, developers may want to search through their own repositories to find again a code change performed in the past, or search for commits that introduce a specific kind of problem.
Developers may also want to search through changes in repositories by others, e.g., to understand how code gets migrated from one API to another, or to retrieve examples of common refactorings for educational purposes.
A question on Stack Overflow on how to systematically search through code changes\footnote{\url{https://stackoverflow.com/questions/2928584/how-to-grep-search-committed-code-in-the-git-history}} has received over half a million views, showing that practitioners are interested in finding changes from the past.
Besides practitioners, researchers also commonly search for specific kinds of code changes.
For example, a researcher evaluating a bug finding tool~\cite{ase2018-study} or a program repair tool~\cite{cacm2019-program-repair,DBLP:conf/icse/TanYYMR17,motwani2020quality} may be interested in examples of specific kinds of bug fixes.
Likewise, researchers working on machine learning models that predict when and where to apply specific code changes require examples of such changes as training data~\cite{oopsla2019}.
Finally, researchers systematically study when and how developers perform specific kinds of changes to increase our understanding of development practices~\cite{negara2014mining,Rak-amnouykit2020,Nguyen2019,ase2020}.
Unfortunately, there currently is no efficient and effective technique for systematically searching large version histories for specific kinds of changes.
The solutions proposed in the above Stack Overflow post are all based on matching regular expressions against raw diffs.
However, searching for anything beyond the most simple change patterns with a regular expression is cumbersome and likely to result in irrelevant code changes.
Another existing technique is GitHub's commit search, which allows for searching through commit messages and meta-information, such as developer names and project names.
Nevertheless, commit search does not support searching for specific code transformations.
Finally, previous research proposes techniques that linearly scan version histories for specific patterns~\cite{Fluri2007,kawrykow2011non,pan2009toward,Lawall2016}.
However, due to their linear design, these techniques do not scale well to searching through hundreds of thousands of changes in a short time.
This paper presents DiffSearch{}, a scalable and precise search engine for code changes.
DiffSearch{} is enabled by three key contributions.
First, we design a query language that is intuitive to use and easy to adapt to different programming languages.
The query language extends the target programming language with wildcards and placeholders that abstract specific syntactic categories, e.g., expressions.
Second, to ensure scalability, the approach is split into an indexing part, which maps code changes into a feature space, and a retrieval part, which matches a given query in the same feature space. We design features specific to code changes that extract from the source code the information needed to match queries against changes.
Finally, to ensure precision, i.e., that a found code change indeed fits the given query, a crucial part of the approach is to match candidate code changes against the given query.
We present an efficient algorithm that checks if a query can be expanded into a code change.
DiffSearch{} is designed in a mostly language-agnostic way, making it easy to apply the approach to different languages.
In particular, we restrict ourselves to a very lightweight static analysis of code changes.
The query language and parts of the search algorithm build upon the context-free grammar of the target programming language.
As a proof-of-concept, DiffSearch{} currently supports three widely used languages: Java, JavaScript, and Python.
Our approach relates to work on searching for code, which retrieves code snippets that match keywords~\cite{sourcerer,Gu2018}, test cases~\cite{reissCodeSearch}, or partial code snippets~\cite{Luan2019,kim2018facoy}.
While code search engines often have a design similar to ours, i.e., based on indexing and retrieval, they consider only a single snapshot of code, but no code changes.
Other related work synthesizes an edit program from one or more code changes~\cite{Fluri2007,Falleri2014,Rolim2017,Gao2020,Erdweg2021} and infers recurring code change patterns~\cite{DBLP:conf/pldi/PaletovTRV18,Nguyen2019}.
Starting from concrete changes, these approaches yield abstractions of them.
Our work addresses the inverse problem: given a query that describes a set of code changes, find concrete examples that match the query.
Finally, our work relates to clone detection~\cite{kamiya2002ccfinder,Li2006,jiang2007deckard,roy2008nicad,sajnani2016sourcerercc}, as DiffSearch{} searches for code changes that resemble a query.
Our work differs from clone detection by considering code changes (and not individual snippets of code), by focusing on guaranteed matches instead of similar code, and by responding to queries quickly enough for interactive use.
We evaluate the effectiveness and scalability of DiffSearch{} with one million code changes in each of Java, Python, and JavaScript.
We find that the approach responds to queries within a few seconds, scaling well to large sets of code changes.
The search has a mean recall of 80.7\% for Java, 89.6\% for Python, and 90.4\% for JavaScript, which can be increased even further in exchange for a slight increase in response time.
A user study shows that DiffSearch{} enables users to effectively retrieve code changes, clearly outperforming a regular expression-based search through raw diffs.
As a case study to show the usefulness of DiffSearch{} for researchers, we apply the approach to gather a dataset of 74,903 bug fixes.
In summary, this paper contributes the following:
\begin{itemize}
\item A \emph{query language} that extends the target programming language with placeholders and wildcards, making it easy to adapt the approach to different languages.
\item A technique for searching code changes that ensures \emph{scalability} through approximate, indexing-based retrieval, and that ensures \emph{precision} via exact matching.
\item Empirical evidence that the approach effectively finds thousands of relevant code changes, scales well to more than a million changes from different projects, and successfully helps users to answer a diverse set of queries.
\end{itemize}
\smallskip
\noindent
The implementation\footnote{\url{https://github.com/sola-st/DiffSearch}} and a web interface\footnote{\url{http://diffsearch.software-lab.org}} of DiffSearch are publicly available.
\section{Example and Overview}\label{sec:overview}
\subsection{Motivating Example}\label{sec:motivation}
To illustrate the problem and how DiffSearch{} addresses it, consider the following example query.
The query searches for code changes that swap the arguments passed to a call that is immediately used in a conditional.
Such a query could be used to find fixes of swapped argument bugs~\cite{oopsla2017}.
{
\renewcommand{\arraystretch}{0.85}
\begin{tabular}{rlc}
&
\begin{minipage}[t]{13.5em}
\begin{Verbatim}[fontsize=\small]
if(ID<1>(EXPR<1>, EXPR<2>)){
<...>
\end{Verbatim}
\end{minipage}\\
$\rightarrow$ &
\begin{minipage}[t]{15em}
\begin{Verbatim}[fontsize=\small]
if(ID<1>(EXPR<2>, EXPR<1>)){
<...>
\end{Verbatim}
\end{minipage}
\end{tabular}
}
\vspace{.3em}
Our query language is an extension of the target programming language, Java in the example, and adds placeholders for some syntactic categories.
For example, the \code{ID<1>} placeholder will match any identifier, and the \code{EXPR<1>} placeholder matches any expression.
Instead of such placeholders, queries can also include concrete identifiers and literals, e.g., to search for specific API changes.
As the set of code changes to search through, suppose we have the following three examples, of which only the second matches the query:
\vspace{.3em}
\noindent \emph{Code change 1:}\\
\hspace*{-2.2em}
\begin{tabular}{rlcr}
&
\begin{minipage}[t]{9.5em}
\begin{Verbatim}[fontsize=\small]
if(check(a - 1, b)){
\end{Verbatim}
\end{minipage}
&
\hspace{1em}$\rightarrow$
&
\begin{minipage}[t]{3em}
\begin{Verbatim}[fontsize=\small]
if(check(a - 1, c)){
\end{Verbatim}
\end{minipage}
\end{tabular}
\vspace{.3em}
\vspace{.3em}
\noindent \emph{Code change 2:}\\
\hspace*{-2.2em}
\begin{tabular}{rlcr}
&
\begin{minipage}[t]{11.3em}
\begin{Verbatim}[fontsize=\small]
if(isValidPoint(x, y)){
\end{Verbatim}
\end{minipage}
&
\hspace{.7em}$\rightarrow$\hspace{-.4em}
&
\begin{minipage}[t]{2em}
\begin{Verbatim}[fontsize=\small]
if(isValidPoint(y, x)){
\end{Verbatim}
\end{minipage}
\end{tabular}
\vspace{.3em}
\vspace{.3em}
\noindent \emph{Code change 3:}\\
\hspace*{-2.2em}
\begin{tabular}{rlcr}
&
\begin{minipage}[t]{10.5em}
\begin{Verbatim}[fontsize=\small]
while(var > k - 1){
sum += count(var);
\end{Verbatim}
\end{minipage}
&
$\rightarrow$\hspace{-.7em}
&
\begin{minipage}[t]{6em}
\begin{Verbatim}[fontsize=\small]
while(var > k){
sum += 2 * count(var);
\end{Verbatim}
\end{minipage}
\end{tabular}
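To build intuition for why only Code change~2 matches, the following sketch translates a single query line into a regular expression, where \code{ID<i>} matches an identifier and \code{EXPR<i>} a (simplified, comma- and parenthesis-free) argument expression, and equal indices must bind to the same text. This is a deliberate simplification for illustration only: DiffSearch{} matches parse trees, not raw text.

```python
import re

def compile_query_line(line):
    """Compile one query line into a regex. ID<i> matches an identifier,
    EXPR<i> a simplified (comma/paren-free) expression; repeated indices
    must match the same text via named backreferences."""
    out, seen = [], set()
    for text, kind, idx in re.findall(r"(.*?)(?:(ID|EXPR)<(\d+)>|$)", line):
        out.append(re.escape(text))  # literal query text
        if not kind:
            continue
        name = f"{kind}{idx}"
        if name in seen:
            out.append(f"(?P={name})")  # same index: same text
        else:
            seen.add(name)
            pat = r"[A-Za-z_]\w*" if kind == "ID" else r"[^,()]+"
            out.append(f"(?P<{name}>{pat})")
    return re.compile("".join(out) + "$")

def change_matches(q_old, q_new, old_line, new_line):
    """A change matches if both sides match and all placeholders with the
    same index bind to the same concrete text across both sides."""
    mo = compile_query_line(q_old).match(old_line)
    mn = compile_query_line(q_new).match(new_line)
    return bool(mo and mn and mo.groupdict() == mn.groupdict())

q = ("if(ID<1>(EXPR<1>, EXPR<2>)){", "if(ID<1>(EXPR<2>, EXPR<1>)){")
print(change_matches(*q, "if(isValidPoint(x, y)){", "if(isValidPoint(y, x)){"))  # True
print(change_matches(*q, "if(check(a - 1, b)){", "if(check(a - 1, c)){"))        # False
```

Code change~1 fails because \code{EXPR<1>} would have to bind to \code{a - 1} on the old side but to \code{c} on the new side.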
\subsection{Problem Statement}
An important design decision is the granularity of code changes to consider.
The options range from changes of individual lines, which would limit the approach to very simple code changes, to entire commits, which may span multiple files, several dozens of lines~\cite{DBLP:conf/iwpc/AlaliKM08}, often containing multiple entangled logical changes~\cite{DBLP:conf/icse/KawrykowR11,DBLP:conf/icse/BarnettBBL15,DBLP:journals/ese/HerzigJZ16,Partachi2020a}.
We opt for a middle ground between these two extremes and consider code changes at the level of ``hunks'', i.e., consecutive lines that are added, modified, or removed together.
\begin{definition}[Code change]
\label{def:code change}
A code change $c \rightarrow c'$ consists of two pieces of code, each of which is a sequence $[l_1,..,l_m]$ of consecutive lines of code extracted from a file in the target language.
\end{definition}
\begin{definition}[Query]
\label{def:query}
A query $q \rightarrow q'$ consists of two patterns, each of which is a sequence $[l_1,..,l_m]$ of lines of code in an extension of the target programming language.
The language extension adds wildcards, a special ``empty'' symbol, and placeholders for specific syntactic categories, e.g., to match an arbitrary expression or identifier.
\end{definition}
Given these two ingredients, the problem we address is:
\begin{definition}[Search for code changes]
\label{def:search}
Given a set $C$ of code changes and a query $q \rightarrow q'$, find a set $M \subseteq C$ of code changes such that each $(c \rightarrow c') \in M$ matches $q \rightarrow q'$.
We say that a code change $c \rightarrow c'$ matches a query $q \rightarrow q'$ if there exists an expansion of the placeholders and wildcards in $q \rightarrow q'$ that leads to $c \rightarrow c'$.
\end{definition}
By ensuring that, for any retrieved code change, the query can be expanded to the code change, DiffSearch{} guarantees that every result of a search precisely matches the query.
\subsection{Main Idea of the Approach}
\label{sec:overview overview}
\begin{figure}
\centering
\includegraphics[width=0.95\linewidth]{images/overview}
\caption{Overview of the approach.}
\label{fig:overview}
\end{figure}
DiffSearch{} consists of four components that are used in an offline and an online phase as illustrated in \Cref{fig:overview}.
In the offline phase, the approach analyzes and indexes a large set of code changes.
The \emph{Parsing \& Feature extraction} component of the approach parses and abstracts concrete code changes and queries into a set of features, mapping both into a common feature space.
For our example query in Section~\ref{sec:motivation}, the features encode, e.g., that a call expression appearing within the condition of an if statement is changed and that the changed call has two arguments.
To enable quickly searching through hundreds of thousands of code changes, the \emph{Indexing} component of DiffSearch{} indexes the given feature vectors~\cite{johnson2019billion} once before accepting queries.
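As a rough illustration of the feature-extraction idea (a hypothetical sketch, not DiffSearch{}'s actual feature set), a code change can be mapped into a fixed-length count vector by hashing token bigrams of its old and new side into separate buckets:

```python
import hashlib
import re

def features(old_lines, new_lines, dim=16):
    """Map a code change into a fixed-length count vector by hashing
    token bigrams of its old and new side (illustrative only)."""
    vec = [0] * dim
    for side, lines in (("old", old_lines), ("new", new_lines)):
        tokens = re.findall(r"\w+|[^\w\s]", " ".join(lines))
        for a, b in zip(tokens, tokens[1:]):
            # prefix with the side so old/new bigrams land in distinct buckets
            h = hashlib.md5(f"{side}:{a} {b}".encode()).hexdigest()
            vec[int(h, 16) % dim] += 1
    return vec

change = (["if(isValidPoint(x, y)){"], ["if(isValidPoint(y, x)){"])
print(features(*change))
```

Feature vectors of this shape can then be indexed once and compared against the feature vector of an incoming query.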
In the online phase, the input is a query that describes the kind of code changes to find.
Based on the pre-computed index and the feature vector of a given query, the \emph{Retrieval} component retrieves those code changes that are most similar to the query.
For our motivating example, this yields Code change~1 and Code change~2 because both change the arguments passed to a call.
The similarity-based retrieval does not guarantee precision, i.e., that each candidate code change indeed matches the query.
The \emph{Matching \& Ranking} component of DiffSearch{} removes any candidates that do not match the query by checking whether the placeholders and wildcards in the query can be expanded into concrete code in a way that yields the candidate code change.
For our example, matching will eliminate Code change~1, as it does not swap arguments, and eventually returns Code change~2 as a search result to the user.
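The interplay of approximate retrieval and exact matching can be sketched as follows. The feature vectors, the distance function, and the matcher below are toy stand-ins chosen so that the example mirrors the motivating scenario; they are not DiffSearch{}'s actual components.

```python
import math

def search(index, query_vec, matches_query, k=2):
    """Retrieve the k changes whose feature vectors are closest to the
    query vector (approximate step), then keep only exact matches."""
    def dist(v, w):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(v, w)))
    candidates = sorted(index, key=lambda e: dist(e[0], query_vec))[:k]
    return [change for _, change in candidates if matches_query(change)]

# Toy index: hand-made vectors standing in for real feature vectors.
index = [
    ((1.0, 0.0), "if(check(a - 1, b)){ -> if(check(a - 1, c)){"),
    ((0.9, 0.1), "if(isValidPoint(x, y)){ -> if(isValidPoint(y, x)){"),
    ((0.0, 1.0), "while(var > k - 1){ -> while(var > k){"),
]
# Retrieval returns the two call-argument changes; exact matching
# (here: a toy predicate for swapped arguments) eliminates the first.
results = search(index, (1.0, 0.0), lambda c: "y, x" in c)
print(results)
```

As in the motivating example, the while-loop change is never retrieved, and the non-swapping change is retrieved but filtered out during matching.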
\section{Evaluation}
Our evaluation focuses on five research questions:
\begin{itemize}
\item RQ1: What is the recall of DiffSearch{}? (Section~\ref{sec:eval recall})
\item RQ2: How efficient and scalable is DiffSearch{}? (Section~\ref{sec:evalScalability})
\item RQ3: Does DiffSearch{} enable users to find relevant code changes more effectively than a regular expression-based search through raw diffs? (Section~\ref{sec:eval user study})
\item RQ4: Is DiffSearch{} useful for finding examples of recurring bug fix patterns? (Section~\ref{sec:eval bug patterns})
\item RQ5: How do parameters of the approach influence the results? (Section~\ref{sec:eval parameters})
\end{itemize}
For each of RQ1, RQ2, and RQ5, we present results for all three currently supported target languages: Java, JavaScript, and Python.
For each language, we gather at least one million code changes from repositories that are among the top 100 of their language based on GitHub stars.
For RQ3 and RQ4, we focus on Java as the target language because RQ3 is based on a user study and because RQ4 builds on a Java dataset created by prior work~\cite{Karampatsis2019a}.
The experiments are performed on a server with 48 Intel Xeon CPU cores clocked at 2.2GHz, 250GB of RAM, running Ubuntu~18.04.
\subsection{RQ1: Recall}
\label{sec:eval recall}
While the precision of DiffSearch{}'s results is guaranteed by design (Section~\ref{sec:pruning}),
the approach may miss code changes due to its feature-based search, which ensures scalability but may fail to include an expected code change into the candidate matches.
Additionally, DiffSearch{} only considers $k$ candidate changes, so it can find at most $k$ results even though queries could have more than $k$ matching code changes.
To establish a ground truth, we randomly sample code changes $c \rightarrow c'$ from all indexed Java, Python, and JavaScript code changes and formulate a corresponding query $q \rightarrow q'$ using the following four strategies.
The \emph{as-is} strategy simply copies $c$ into $q$ and $c'$ into $q'$.
The \emph{less-placeholders} strategy replaces some of the identifiers, operators, and literals with corresponding placeholders or wildcards.
The \emph{more-placeholders} strategy, similarly, replaces the majority of the identifiers, operators, and literals.
Finally, the \emph{generalized} strategy replaces most or all of the identifiers, operators, and literals.
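The placeholder-replacement strategies can be mimicked by a small sketch that rewrites a fraction of the identifiers and literals in a code line into fresh placeholders. The placeholder names, the keyword list, and the replacement rate are illustrative assumptions, not the exact procedure used in the evaluation.

```python
import random
import re

KEYWORDS = {"if", "while", "for", "return"}  # illustrative subset

def generalize(line, rate, rng):
    """Replace a fraction `rate` of identifiers/literals in a code line
    with fresh placeholders (sketch of the query-generation strategies)."""
    counter = [0]
    def repl(m):
        tok = m.group(0)
        if tok in KEYWORDS or rng.random() >= rate:
            return tok
        counter[0] += 1
        kind = "LIT" if tok[0].isdigit() else "ID"
        return f"{kind}<{counter[0]}>"
    return re.sub(r"[A-Za-z_]\w*|\d+", repl, line)

rng = random.Random(0)
line = "if(isValidPoint(x, y)){"
print(generalize(line, 0.0, rng))  # as-is strategy: query equals the change
print(generalize(line, 1.0, rng))  # fully generalized query
```

A rate of 0 corresponds to the \emph{as-is} strategy, while increasing rates approximate the \emph{less-placeholders}, \emph{more-placeholders}, and \emph{generalized} strategies.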
For each strategy and each programming language, we randomly sample 20 code changes and construct a query for each of them.
We then compare each query against all 1,001,797 Java, 1,007,543 JavaScript, and 1,016,619 Python code changes using the matching component of DiffSearch{}.
While significantly slower than the feature-supported search that DiffSearch{} uses otherwise, this approach allows us to determine the set of all code changes expected to be found for a query, because Algorithm~\ref{alg:match} precisely computes whether a code change matches a query.
\begin{table}[t]
\centering
\caption{Recall of DiffSearch{} across 80 queries per language.}
\label{tab:Recall}
\setlength{\tabcolsep}{17pt}
\begin{tabular}{@{}lrrr@{}}
\toprule
Queries & Java & Python & JavaScript \\ \midrule
As-is & 90.6\% & 100.0\% & 100.0\% \\
Less-placeholders & 83.5\% & 99.9\% & 99.8\% \\
More-placeholders & 74.2\% & 96.7\% & 95.8\% \\
Generalized & 76.7\% & 74.9\% & 66.1\% \\ \midrule
\textbf{Total} & \textbf{80.7\%} & \textbf{89.6\%} & \textbf{90.4\%} \\ \bottomrule
\end{tabular}
\end{table}
Table~\ref{tab:Recall} shows the recall of DiffSearch{} w.r.t.\ the ground truth, i.e., the percentage of all ground truth code changes that the approach finds.
On average across the 80 queries per programming language, DiffSearch{} has a recall of 80.7\% for Java, 89.6\% for Python, and 90.4\% for JavaScript.
More specific queries tend to lead to a higher recall.
The reason is that the parse tree of a more generalized query shares fewer features with a matching code change, e.g., because a complex subtree is folded into an \code{EXPR} node.
The slightly higher recall for Python and JavaScript can be explained by two observations.
First, code changes in Java tend to be slightly larger, causing more nodes on the parse trees, which reduces the chance to find a suitable candidate change.
Second, across the 80 queries, there are 236,836 ground truth code changes for Java, but only 69,626 and 59,789 for Python and JavaScript, respectively, making finding all ground truth code changes in Java a harder problem.
We discuss in Section~\ref{sec:eval parameters} that the recall can be increased even further by retrieving more candidate matches, at the expense of a slightly increased response time.
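For clarity, recall here means the fraction of the exhaustively computed ground-truth changes that the search returns, e.g.:

```python
def recall(retrieved, ground_truth):
    """Fraction of expected code changes that the search returns."""
    return len(set(retrieved) & set(ground_truth)) / len(set(ground_truth))

# Finding four of five expected code changes yields 80% recall.
print(recall({"c1", "c2", "c3", "c5"}, {"c1", "c2", "c3", "c4", "c5"}))
```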
\subsection{RQ2: Efficiency and Scalability}
\label{sec:evalScalability}
A major goal of this work is to enable quickly searching through hundreds of thousands of code changes.
The following evaluates how the number of code changes to search through influences the efficiency of queries, i.e., how well DiffSearch{} scales to large amounts of changes.
As queries to run, we use the 80 queries described in \Cref{sec:eval recall}.
For each query, we measure how long DiffSearch{} takes to retrieve code changes from ten increasingly large datasets, ranging from 10,000 to 1,000,000 code changes.
\begin{figure*}
\captionsetup[subfigure]{justification=centering}
\centering
\begin{subfigure}[t]{.32\linewidth}
\includegraphics[width=\linewidth]{images/JavaScalability}
\caption{DiffSearch~(Java).}
\end{subfigure}\hspace{0.5em}
\begin{subfigure}[t]{.32\linewidth}
\includegraphics[width=\linewidth]{images/PythonScalability}
\caption{DiffSearch~(Python).}
\end{subfigure}\hspace{0.5em}
\begin{subfigure}[t]{.32\linewidth}
\includegraphics[width=\linewidth]{images/JavaScriptScalability}
\caption{DiffSearch~(JavaScript).}
\end{subfigure}
\vspace{1em}
\begin{subfigure}[t]{.32\linewidth}
\includegraphics[width=\linewidth]{images/JavaScalabilityslow}
\caption{DiffSearch~without indexing\\(Java).}
\end{subfigure}
\hspace{0.5em}
\begin{subfigure}[t]{.32\linewidth}
\includegraphics[width=\linewidth]{images/PythonScalabilityslow}
\centering
\caption{DiffSearch~without indexing\\(Python).}
\end{subfigure}\hspace{0.5em}
\begin{subfigure}[t]{.32\linewidth}
\includegraphics[width=\linewidth]{images/JavaScriptScalabilityslow}
\caption{DiffSearch~without indexing\\(JavaScript).}
\end{subfigure}
\caption{Response time across differently sized datasets (average and 95\% confidence interval). Top: Full DiffSearch{}. Bottom: DiffSearch{} without indexing.}
\label{fig:scalability}
\end{figure*}
The top row of \Cref{fig:scalability} shows the results for the full DiffSearch{} approach.
Answering a query typically takes between 0.5 and 2 seconds.
Moreover, the response time remains constant when searching through more code changes.
The reasons are (i) that FAISS~\cite{johnson2019billion} provides constant-time retrieval in the vector space, and (ii) that the time for matching candidate changes against the query is proportional to the constant number $k$ of candidate changes.
Comparing the three programming languages, we find that they yield similar performance results, because most parts of our implementation are language-agnostic.
We conclude that DiffSearch{} scales well to hundreds of thousands of changes and remains efficient enough for interactive use.
The bottom row of \Cref{fig:scalability} shows the same experiment when removing the indexing and retrieval steps of DiffSearch{} (note: different y-axis).
Instead, the approach linearly goes through all code changes and compares them against a given query using the matching component only.
Answering a query takes up to 41 seconds on average, showing that the feature-based indexing is essential to ensure DiffSearch{}'s scalability.
Even though scalability is most relevant for the online part of DiffSearch{}, we also measure how long the offline part takes.
In total, analyzing a million code changes to extract feature vectors and indexing these vectors takes up to five hours.
As this is a one-time effort that does not influence the response time, we consider it acceptable in practice.
\subsection{RQ3: User Study}
\label{sec:eval user study}
\subsubsection{Study Setup}
We perform a user study to measure whether DiffSearch{} enables users to effectively retrieve code changes within a given time budget, and to compare our approach with a regular expression-based baseline.
To this end, we provide natural language descriptions of kinds of code changes and ask each user to find up to ten matching code changes per description within two minutes.
We choose this time limit based on empirical results on code search sessions, which are reported to have a median length of 89 seconds~\cite{Sadowski2015}.
We then ask the users how many satisfying code changes they could find.
Each user works on each kind of query with both DiffSearch{} and the baseline tool, alternating which tool to use first.
\emph{Queries.}
The descriptions of the queries (Table~\ref{tab:user study}) are designed with two criteria in mind.
First, they cover different syntactic categories of changes, including additions (\#3, \#4, \#7), modifications (\#6), and removals (\#10) of statements; changes within existing statements (\#1, \#2, \#5, \#9); and changes that surround an existing statement with a new statement (\#8).
Second, the queries cover a diverse range of reasons for changing code, including code improvements to increase robustness (\#4, \#7, \#8), code cleanup (\#10), changes of functionality (\#6, \#9), bug fixes (\#1, \#2, \#5), and uses of a new API (\#3).
\emph{Baseline.}
Because DiffSearch{} is the first search engine specifically designed for code changes, there is no established tool to compare against.
Instead, we use a regular expression-based approach suggested in the Stack Overflow question cited in Section~\ref{sec:intro} as a baseline, which we call REGEX.
Regular expressions are well known and widely used for general search tasks.
Naively applying regular expressions to the git history of many projects, as suggested on Stack Overflow, leads to unacceptably high response times (tens or even hundreds of seconds, depending on the query).
Instead, we preprocess the output of \emph{git log} by removing information unrelated to the task, such as commit messages and file names, which reduces the size of the searched file and makes the response time acceptable.
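A minimal sketch of such preprocessing, assuming \emph{git log -p} output and keeping only the added and removed lines of each hunk, looks as follows (the exact filtering rules used in the study may differ):

```python
def strip_metadata(git_log_output):
    """Keep only added/removed hunk lines from `git log -p` output,
    dropping commit headers, commit messages, and file-name headers."""
    keep = []
    for line in git_log_output.splitlines():
        if line.startswith(("--- ", "+++ ")):
            continue  # file-name headers, not code
        if line.startswith(("+", "-")):
            keep.append(line)
    return "\n".join(keep)

log = """commit 1234abcd
Author: Jane <jane@example.com>
Date:   Mon Jan 1 2024

    Fix swapped arguments

diff --git a/Point.java b/Point.java
index 83db48f..f735c2d 100644
--- a/Point.java
+++ b/Point.java
@@ -1 +1 @@
-if(isValidPoint(x, y)){
+if(isValidPoint(y, x)){
"""
print(strip_metadata(log))
```

Only the two changed code lines survive, which is the text the REGEX participants then search through.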
\emph{Participants and setup.}
We recruit ten participants with solid knowledge about regular expressions, consisting of seven PhD students, two senior undergraduate students, and one senior developer.
The participants do not overlap with the authors of this paper.
The users access DiffSearch{} through a web interface that resembles a standard search engine, but has two text input fields, for the old and new code, respectively.\footnote{The web interface is available to reviewers, see end of Section~\ref{sec:intro}.}
For REGEX, participants use a terminal and their favorite tool to search with regular expressions, e.g., \emph{grep}.
We provide about 750 words of instructions to the participants, which explain the task, the query language of DiffSearch{}, and how to search through raw diffs using REGEX.
\subsubsection{Quantitative Results}
\begin{table*}
\centering
\caption{Query descriptions for user study and summary of search results.}
\label{tab:user study}
\setlength{\tabcolsep}{1.6pt}
\begin{tabular}{@{}rp{25em}c@{\hspace{0.5\tabcolsep}}c@{\hspace{0.5\tabcolsep}}c@{\hspace{5\tabcolsep}}c@{\hspace{0.5\tabcolsep}}c@{\hspace{0.5\tabcolsep}}c@{\hspace{5\tabcolsep}}c@{\hspace{0.5\tabcolsep}}c@{\hspace{0.5\tabcolsep}}c@{\hspace{5\tabcolsep}}c@{\hspace{0.5\tabcolsep}}c@{\hspace{0.5\tabcolsep}}c@{\hspace{5\tabcolsep}}c@{\hspace{0.5\tabcolsep}}c@{\hspace{0.5\tabcolsep}}c@{\hspace{5\tabcolsep}}c@{\hspace{0.5\tabcolsep}}c@{\hspace{0.5\tabcolsep}}c@{\hspace{5\tabcolsep}}c@{\hspace{0.5\tabcolsep}}c@{\hspace{0.5\tabcolsep}}c@{\hspace{5\tabcolsep}}c@{\hspace{0.5\tabcolsep}}c@{\hspace{0.5\tabcolsep}}c@{\hspace{5\tabcolsep}}c@{\hspace{0.5\tabcolsep}}c@{\hspace{0.5\tabcolsep}}c@{\hspace{5\tabcolsep}}c@{\hspace{0.5\tabcolsep}}c@{\hspace{0.5\tabcolsep}}c@{\hspace{5\tabcolsep}}|c@{\hspace{0.5\tabcolsep}}c@{\hspace{0.5\tabcolsep}}c@{}}
\toprule
\textbf{Id} & \textbf{Query description} & \multicolumn{10}{c}{\textbf{DiffSearch \textbar{} REGEX}} \\
\cmidrule{3-13}
& & \multicolumn{3}{c}{User 1} & \multicolumn{3}{c}{User 2} & \multicolumn{3}{c}{User 3} & \multicolumn{3}{c}{User 4} & \multicolumn{3}{c}{User 5} & \multicolumn{3}{c}{User 6} & \multicolumn{3}{c}{User 7} & \multicolumn{3}{c}{User 8} & \multicolumn{3}{c}{User 9} & \multicolumn{3}{c}{User 10} & \multicolumn{3}{c}{Total} \\
\midrule
1 & Find changes in which a return statement that returns a literal changes to returning the result of a method call. & 10&\textbar{}&~0 & 10 & \textbar{} & ~0 & ~0& \textbar{} & 10 & ~0 & \textbar{}& ~0 & 10& \textbar{}& ~0 & ~7& \textbar{}& ~0 & 10 & \textbar{} & ~0 & ~7 & \textbar{}& ~0 & ~7& \textbar{}& ~0 & ~7 & \textbar{}& ~0 & ~68 & \textbar{}& ~10 \\
2 & Find changes where the developer swaps the arguments of a method call. & ~0 & \textbar{} & ~0 & ~0 & \textbar{} & ~0 & 10 & \textbar{} & ~0 & 10 & \textbar{}& ~0 & 10& \textbar{}& ~0 & ~0& \textbar{}& ~0 & 10 & \textbar{}& ~0 & 10 & \textbar{}& ~0 & 10 & \textbar{}& ~0 & 10 & \textbar{}& ~0 & ~70& \textbar{}& ~~0 \\
3 & Find changes that add an import of a class in the form ``import somePkg.SomeClass''. & 10 & \textbar{}& ~0 & ~0 & \textbar{} & 10 & 10& \textbar{}& 10 & 10& \textbar{}& 10 & ~0& \textbar{}& 10 & 10& \textbar{}& 10 & 10& \textbar{}& 10 & 10& \textbar{}& ~0 & 10& \textbar{}& ~0 & 10 & \textbar{}& ~0 & ~80& \textbar{}& ~60 \\
4 & Find changes that add a call to close some resource, e.g., a stream or file reader. & ~0 & \textbar{}& ~0 & 10 & \textbar{} & 10 & 10& \textbar{}& 10 & 10& \textbar{}& ~0 & 10& \textbar{}& 10 & ~0& \textbar{} & 10 & 10 & \textbar{}& 10 & 10& \textbar{}& ~0 & 10& \textbar{}& ~0 & 10& \textbar{}& 10 & ~80& \textbar{}& ~60 \\
5 & Find changes where the condition of an if statement with a body changes from ``== null'' to ``!= null''. & ~4 & \textbar{}& ~0 & 10 & \textbar{} & ~0 & ~4 & \textbar{}& ~5 & ~0& \textbar{}& ~0 & ~7& \textbar{}& ~0 & ~0& \textbar{}& ~2 & ~4& \textbar{}& ~0 & ~4& \textbar{}& ~0 & ~0& \textbar{}& ~0 & ~5& \textbar{}& ~0 & ~38& \textbar{}& ~~7 \\
6 & Find changes that remove a method call with one argument. & 10& \textbar{}& ~0 & 10 & \textbar{} & ~1 & 10& \textbar{}& 10 & 10 &\textbar{}& ~0 & 10 &\textbar{}& ~0 & 10 & \textbar{} & 10 & 10 &\textbar{}& 10 & 10 &\textbar{}& ~0 & 10& \textbar{}& ~0 & 10 & \textbar{}& ~0 & 100& \textbar{}& ~31 \\
7 & Find changes that insert an assertion using Java's ``assert'' keyword. & 10& \textbar{}& ~0 & 10& \textbar{}& 10 & ~0& \textbar{}& 10 & ~0& \textbar{}& ~2 & 10& \textbar{}& 10 & ~0& \textbar{}& 10 & 10 &\textbar{}& 10 & 10& \textbar{}& ~0 & 10 &\textbar{}& ~0 & 10& \textbar{}& 10 & ~70& \textbar{}& ~62 \\
8 & Find changes in which a code snippet is surrounded with a try/catch block. & ~0 & \textbar{}& ~0 & ~0 & \textbar{} & ~0 & ~0& \textbar{}& ~0 & ~0& \textbar{}& ~0 & ~0& \textbar{}& ~0 & ~10& \textbar{}& ~0 & ~4& \textbar{}& 10 & 10& \textbar{}& ~0 & ~0& \textbar{}& ~0 & ~1 &\textbar{}& ~0 & ~25& \textbar{}& ~~0 \\
9 & Find changes where the condition of a while loop is changed. & 10 & \textbar{}& ~0 & 10 & \textbar{} & 10 & 10& \textbar{}& ~2 & 10& \textbar{}& ~0 & 10& \textbar{}& ~0 & ~0 & \textbar{}& ~1 & 10& \textbar{}& ~0 & ~0& \textbar{}& ~0 & 10 & \textbar{}& ~0 & 10 & \textbar{}& ~0 & ~80& \textbar{}& ~13 \\
10 & Find changes that remove a call to System.out.println(...). & 10 & \textbar{} & ~0 & 10 & \textbar{} & 10 & 10& \textbar{} &10 & 10& \textbar{}& ~0 & 10& \textbar{}& 10 & 10& \textbar{}& 10 & 10& \textbar{}& 10 & 10& \textbar{}& ~0 & 10 &\textbar{}& ~0 & 10 &\textbar{}& 10 & 100& \textbar{}& ~60 \\
\midrule
& Total & 64 &\textbar{}& ~0 & 70 &\textbar{}& 51 & 64 &\textbar{}& 67 & 60& \textbar{}& 12 & 77 &\textbar{}& 40 & 47& \textbar{}& 53 & 88& \textbar{}& 50 & 81 &\textbar{}& ~0 & 77& \textbar{}& ~0 & 83 &\textbar{} &30 & 711& \textbar{}& 303 \\
\bottomrule
\end{tabular}
\end{table*}
\Cref{tab:user study} shows the number of search results obtained using DiffSearch{} and REGEX.
Across the entire study, the participants find 711 code changes with DiffSearch{} but only 303 with REGEX.
Inspecting individual queries shows that, while some are harder than others, at least one user finds ten code changes for each query.
For 77.0\% of all queries, users retrieve at least one code change with DiffSearch{}, whereas with REGEX, this is the case for only 35.0\% of all queries.
For 65.0\% of all queries, users find the desired number of ten code changes with DiffSearch{}, but only for 29.0\% with REGEX.
Overall, we conclude that DiffSearch{} enables users to effectively find code changes, and that the approach clearly outperforms the REGEX-based baseline.
\subsubsection{Qualitative Results}
While DiffSearch{} clearly outperforms REGEX for all ten queries, there are some user-query pairs where REGEX yields more results than DiffSearch{}.
Analyzing these cases shows two main reasons.
First, some users were effective with regular expressions by searching for simple code changes that only add or only remove a single line of code.
For example, for query \#3, some users simply searched for ``+ import (.*)''.
Second, some users formulated regular expression queries that are more general than the natural language description we provide and then manually filtered the results to find the ten relevant code changes.
For example, for query \#5, a user searched for ``if((.*?)){'' and then manually checked for conditions that involve \code{null}.
Beyond these cases, the study also shows queries for which DiffSearch{} easily yields enough results: all users get enough results for query \#6 with queries like ``ID(EXPR); $\rightarrow$ \_'', underlining how easy it is to query DiffSearch{}. Similarly, for query \#10, all users use a query like ``System.out.println(EXPR); $\rightarrow$ \_'' and together obtain 100 satisfying results.
The user study also shows how quickly users learn to use DiffSearch{}.
For example, Users 2 and 5 find no code changes with DiffSearch{} for query \#3, but find ten code changes each for query \#4, i.e., users pick up the DiffSearch{} query syntax with experience.
For query \#3, User 2 first tried syntactically invalid queries such as ``\_ $\rightarrow$ import LT().LT()'' and ``\_ $\rightarrow$ import LT$<$...$>$.LT$<$...$>$'', but after a few attempts understood the query language and performed better on the following queries.
These examples illustrate that DiffSearch{} is particularly useful when searching for non-trivial code changes and to avoid false positive results.
We also asked for informal feedback about both tools, to better understand their strengths and weaknesses.
Users report three reasons for preferring DiffSearch{} over REGEX.
First, they find the DiffSearch{} query language easier to use than regular expression syntax, because it builds upon the underlying programming language.
In particular, some users noted that within two minutes they were able to type a DiffSearch{} query, but not a working regular expression, especially for complex queries such as multi-line code changes.
Second, REGEX often was much slower than DiffSearch{} because it linearly searches through all code changes.
This inefficiency, especially for more complex regular expressions, caused some users to not find any relevant code changes in the given time.
Finally, some users mention that REGEX syntax is not precise enough to formulate effective queries, leading to many false positives.
\subsection{RQ4: Searching for Bug Fixes}
\label{sec:eval bug patterns}
As a case study for using DiffSearch{}, we apply it to search for instances of bug fix patterns, which could help, e.g., to establish a dataset for evaluating bug detection tools~\cite{ase2018-study}, automated program repair tools~\cite{oopsla2019}, or for training a learning-based bug detection tool~\cite{oopsla2018-DeepBugs}.
We build on a set of 16 patterns defined by prior work~\cite{Karampatsis2019a}, of which we use twelve (\Cref{tab:EffectivenessQueries}).
The remaining four bug fix patterns are all about single-token changes, e.g., changing a numeric literal or changing a modifier, which currently cannot be expressed with our query language.
For the twelve supported patterns, we formulate queries based on the descriptions of the patterns and then search for them with DiffSearch{}.
We use two different datasets for this case study.
First, a set of around 10,000 code changes, called \emph{SStuBs commits}, which contains all those commits where the prior work~\cite{Karampatsis2019a} found instances of the bug fix patterns, called \emph{SStuBs}, through custom-built analysis scripts.
Second, a set of around 1,000,000 code changes, called \emph{Large}, sampled from all the repositories analyzed in the prior work.
\begin{table}
\caption{Effectiveness of DiffSearch{} in finding instances of bug fix patterns~\cite{Karampatsis2019a}.}
\label{tab:EffectivenessQueries}
\setlength{\tabcolsep}{3pt}
\centering
\begin{tabular}{@{}rlrrr|r@{}}
\toprule
& \multirow{2}{*}{Description} & \multicolumn{3}{c}{SStuBs commits (10k)} & \multicolumn{1}{c}{Large (1M)} \\ \cmidrule(l){3-6}
& & \multicolumn{1}{c}{SStuBs} & \multicolumn{1}{c}{DiffSearch} & \multicolumn{1}{c|}{Both} & \multicolumn{1}{c}{DiffSearch} \\ \midrule
1 & Change only caller & 132 & 1,880 & 121 & 5,974 \\
2 & Change binary operator & 211 & 347 & 131 & 2,979 \\
3 & More specific if & 130 & 592 & 116 & 5,660 \\
4 & Less specific if & 166 & 592 & 150 & 5,387 \\
5 & Wrong function name & 1,141 & 1,439 & 935 & 8,109 \\
6 & Same caller, more args & 557 & 2,108 & 432 & 11,207 \\
7 & Same caller, less args & 110 & 2,123 & 75 & 10,798 \\
8 & Same caller, swap args & 98 & 2,285 & 89 & 9,042 \\
9 & Change unary operator & 126 & 134 & 70 & 6,081 \\
10 & Change binary operand & 91 & 347 & 73 & 2,136 \\
11 & Add throws exception & 60 & 1,834 & 34 & 3,848 \\
12 & Delete throws exception & 45 & 2,278 & 44 & 3,682 \\ \midrule
& Total & 2,867 & 15,959 & 2,270 & 74,903 \\ \bottomrule
\end{tabular}
\end{table}
\Cref{tab:EffectivenessQueries} shows for each bug fix pattern how many code changes the different approaches find.
DiffSearch{} returns a total of 15,959 code changes for the first dataset and 74,903 for the second dataset.
Computing the intersection with the results retrieved by SStuBs, DiffSearch{} finds 79.2\% of their changes, a result consistent with the Java recall computed in RQ1.
Moreover, DiffSearch{} finds many more matching code changes, increasing the dataset from 2,867 to 15,959 examples of bug fixes.
The reason is that our queries are more general than the custom analysis scripts in SStuBs and include, e.g., also code changes that perform other changes besides the specific bug fix.
The number of code changes found by DiffSearch{} is higher than the number of commits (10k) because a single commit may match multiple patterns.
For example, a change that swaps two arguments and modifies a function name will appear in patterns~5 and~8.
Overall, DiffSearch{} is effective at finding various examples of bug fix patterns, showing the usefulness of the approach for creating large-scale datasets.
\subsection{RQ5: Impact of Parameters}
\label{sec:eval parameters}
We perform a sensitivity analysis for the two main parameters of DiffSearch{}:
the length $l$ of feature vectors (Section~\ref{sec:features}), and the number $k$ of candidate matches retrieved via the feature vectors (Section~\ref{sec:indexingRetrieval}).
We select a set of values from 1,000 to 20,000 for $k$ and from 500 to 4,000 for $l$, i.e., values below and above the defaults, and then measure their impact on the time to answer queries, the recall, and the size of the index.
\begin{table}[t]
\caption{Impact of length $l$ of feature vectors and number $k$ of candidates (default configuration is bold).}
\label{tab:parameters}
\setlength{\tabcolsep}{9pt}
\centering
\begin{tabular}{@{}rrrrrrr@{}}
\toprule
$k$ & $l$ & \multicolumn{3}{@{}c@{}}{Response time (s)} & Recall & Size of \\
\cmidrule{3-5}
&& min & avg & max & (\%) & index (GB) \\
\midrule
\multicolumn{7}{@{}l@{}}{\emph{Java:}} \\
\midrule
1,000 & 1,000 & 1.5 & 1.9 & 3.5 & 71.8 & 4.0 \\
\textbf{5,000} & \textbf{1,000} & \textbf{1.5} & \textbf{2.2} & \textbf{9.0} & \textbf{80.7} & \textbf{4.0} \\
10,000 & 1,000 & 1.7 & 2.5 & 9.4 & 84.9 & 4.0 \\
20,000 & 1,000 & 1.8 & 3.1 & 17.7 & 87.3 & 4.0 \\
5,000 & 500 & 0.8 & 1.3 & 8.1 & 79.3 & 2.0 \\
5,000 & 2,000 & 3.0 & 4.2 & 9.9 & 80.6 & 8.0 \\
5,000 & 4,000 & 5.8 & 7.4 & 15.3 & 78.1 & 16.0 \\
\midrule
\multicolumn{7}{@{}l@{}}{\emph{Python:}} \\
\midrule
1,000 & 1,000 & 3.0 & 4.1 & 5.5 & 81.9 & 4.1 \\
\textbf{5,000} & \textbf{1,000} & \textbf{1.8} & \textbf{2.4} & \textbf{3.5} & \textbf{89.8} & \textbf{4.1} \\
10,000 & 1,000 & 3.5 & 5.0 & 8.9 & 91.6 & 4.1 \\
20,000 & 1,000 & 4.1 & 6.0 & 12.4 & 93.7 & 4.1 \\
5,000 & 500 & 1.0 & 1.6 & 3.1 & 86.6 & 2.0 \\
5,000 & 2,000 & 2.7 & 4.9 & 40.8 & 89.8 & 8.1\\
5,000 & 4,000 & 6.1 & 7.9 & 13.1 & 83.4 & 16.3 \\
\midrule
\multicolumn{7}{@{}l@{}}{\emph{JavaScript:}} \\
\midrule
1,000 & 1,000 & 1.2 & 1.9 & 2.8 & 85.4 & 4.0 \\
\textbf{5,000} & \textbf{1,000} & \textbf{1.3} & \textbf{2.0} & \textbf{2.8} & \textbf{90.4} & \textbf{4.0} \\
10,000 & 1,000 & 1.4 & 2.3 & 3.3 & 94.0 & 4.0 \\
20,000 & 1,000 & 1.8 & 2.9 & 5.7 & 95.6 & 4.0 \\
5,000 & 500 & 0.7 & 1.2 & 2.1 & 90.3 & 2.0 \\
5,000 & 2,000 & 3.1 & 4.5 & 5.4 & 92.5 & 8.0 \\
5,000 & 4,000 & 5.1 & 9.2 & 12.8 & 88.6 & 16.1 \\
\bottomrule
\end{tabular}
\end{table}
Table~\ref{tab:parameters} shows the results.
We find that retrieving more candidate code changes, i.e., a higher $k$, slightly increases the response time.
The reason is that matching more code changes against the query increases the time taken by the matching phase.
On the positive side, increasing $k$ increases the recall, reaching 87.3\% for Java, 93.7\% for Python, and 95.6\% for JavaScript when $k$=20,000, while still providing an acceptable average response time.
Increasing the parameter $l$ increases the time to answer a query because a larger feature vector slows down the nearest neighbor search.
Likewise, a larger $l$ also increases the size of the index.
Since increasing $l$ beyond our default does not significantly increase recall, we use $l$=1,000 as the default to have a manageable index size and a reasonable response time.
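To illustrate how $l$ and $k$ interact, the following Python sketch mimics the two-stage pipeline in miniature; all names and the hashing scheme are illustrative assumptions, not the actual DiffSearch implementation.

```python
# Illustrative sketch (not the actual DiffSearch code): hashed bag-of-tokens
# feature vectors of length l, plus retrieval of the k nearest candidates.

def featurize(tokens, l):
    """Hash the tokens of a code change into a fixed-length count vector."""
    vec = [0] * l
    for t in tokens:
        vec[hash(t) % l] += 1
    return vec

def retrieve(query_vec, index, k):
    """Return the k indexed entries closest to the query (L1 distance)."""
    def dist(entry):
        return sum(abs(a - b) for a, b in zip(query_vec, entry[1]))
    return sorted(index, key=dist)[:k]

# Toy index of three code changes (old tokens -> new tokens, flattened).
changes = {
    "fix1": ["if", "x", "==", "null", "->", "if", "x", "!=", "null"],
    "fix2": ["foo", "(", "a", ")", "->", "bar", "(", "a", ")"],
    "fix3": ["x", "+", "1", "->", "x", "-", "1"],
}
l, k = 64, 2
index = [(name, featurize(toks, l)) for name, toks in changes.items()]
query = featurize(changes["fix1"], l)   # query identical to fix1
candidates = retrieve(query, index, k)
```

A larger $l$ yields longer vectors (a larger index and slower distance computations), while a larger $k$ passes more candidates to the exact matching phase, trading response time for recall, consistent with the trends in Table~\ref{tab:parameters}.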
\section{Related Work}
\emph{Code Search.}
Code search engines allow users to find code snippets based on method signatures~\cite{reissCodeSearch}, existing code examples~\cite{kim2018facoy,Luan2019,Premtoon2020}, or natural language queries~\cite{Gu2018,Sachdev2018,cambronero2019deep}.
Sourcerer provides an infrastructure that combines several of the above ideas~\cite{sourcerer}.
Early work by Paul et al.~\cite{Paul1994} proposes a mechanism similar to the placeholders in our query language.
The most important difference between these approaches and DiffSearch{} is that we search for changes of code, not for code snippets within a single snapshot of code.
Another difference is that DiffSearch{} guarantees that all search results match the given query, whereas the existing techniques, with the exception of \cite{Premtoon2020}, are aimed at similarity only.
Prequel has a goal similar to DiffSearch{}, and matches patches against user-provided rules that the code before and after a patch must comply with~\cite{Lawall2016}.
The approaches differ in two aspects.
First, Prequel's rules are based on the semantic patch language of Coccinelle~\cite{Lawall2018} and may include executable code, e.g., queries are Turing-complete.
In contrast, our queries are purely declarative and build on the underlying programming language.
Second, Prequel performs a regular expression-based pre-filtering for each query, followed by a linear search through all commits.
As a result, answering a query may take minutes or, if the pre-filtering is not effective, even longer~\cite{Lawall2016}.
In contrast, DiffSearch{} avoids a linear search via feature-based retrieval, and hence, responds to queries across hundreds of thousands of code changes within seconds.
Several ideas to improve the user's interaction with a code search engine have been proposed, such as refining search results based on user's feedback about the quality of results~\cite{martie2017understanding,sivaraman2019active}.
Other work resolves vocabulary mismatches between queries and code~\cite{sirres2018augmenting}.
Future work could adopt similar ideas to searching for code changes.
\emph{Code Changes as Edit Scripts.}
To reason about code changes, several techniques derive edit scripts on ASTs~\cite{Fluri2007,Hashimoto2008,Falleri2014,Erdweg2021}, providing an abstract description of the change that can then be applied elsewhere~\cite{Meng2011}.
Lase generalizes from multiple code changes into a single edit script~\cite{Meng2013}.
Future work could explore using an edit script-based representation of code changes to search for code changes.
An advantage of our parse tree-based feature extraction is that it does not require aligning the old and new code, allowing us to featurize hundreds of thousands of code changes in reasonable time.
\emph{Mining Code Changes.}
Work on mining code repositories and learning from code changes shows development histories to be a rich source of implicitly stored knowledge.
For example, existing approaches leverage version histories to
extract repetitive code changes~\cite{negara2014mining,Nguyen2019,nguyen2013study},
predict code changes~\cite{Tufano2019},
predict bugs~\cite{Livshits2005,Kim2008}, or to
learn about API usages~\cite{nguyen2016api,Paletov2018}.
Mining approaches typically consider all code changes in a project's version history or filter changes using simple patterns, e.g., keywords in commit messages.
In contrast, DiffSearch{} allows for identifying code changes that match a specific query.
\emph{Learning from Code Changes.}
Large sets of code changes enable learning-based techniques.
One line of work learns from specific kinds of changes, e.g., fixes of particular bug patterns, how to apply this kind of change to other code for automated program repair~\cite{Rolim2017,Rolim2018,oopsla2019}.
Another line of work ranks potential program repairs based on their similarity to common code change patterns~\cite{Le2016}.
DiffSearch{} could help gather datasets of changes for these approaches to learn from, e.g., based on queries for bug fixing patterns.
\emph{Other Analyses of Code Changes.}
There are various other analyses of code changes, of which we discuss only a subset here.
Hashimoto et al.\ propose a technique for reducing a diff to the essence of a bug~\cite{Hashimoto2018}.
Nielsen et al.~\cite{nielsen2021semantic} use JavaScript code change templates to fix code broken due to library evolution.
Another approach automatically documents code changes with a natural language description~\cite{Buse2010}.
SCC~\cite{giger2011comparing} and DeepJIT~\cite{Hoang2019} are predictive models that estimate how likely a code change is to introduce a bug.
A related problem is to find the bug-inducing code change for a given bug report~\cite{Wen2016,Wu2018}.
DiffBase~\cite{Wu2021} encodes facts about different versions of a program to facilitate multi-version program analyses.
CodeShovel~\cite{Grund2021} tracks a method from its creation to its current state throughout a version history.
All these approaches relate to our work by also reasoning about code changes, but they aim for different goals than DiffSearch{}.
\emph{Clone Detection.}
DiffSearch{} relates to code clone detectors~\cite{kamiya2002ccfinder,Li2006,jiang2007deckard,roy2008nicad,sajnani2016sourcerercc}, as answering a query resembles finding clones of the query.
Clone detectors are typically evaluated on a single snapshot of a code base, and they may take several minutes or even hours to terminate~\cite{sajnani2016sourcerercc}.
In principle, one could use an off-the-shelf code clone detector to search for specific kinds of code changes, where the old and new parts of the query must be clones of the old and new parts of a change.
However, this approach would search for clones among all code changes for each query, which may not be fast enough for an interactive search engine.
Another difference is that DiffSearch{} guarantees to yield code changes that match a query, whereas clone detectors are interested in similar but not necessarily exactly matching code.
Some clone detectors summarize code in ways related to our feature extraction.
For example, Deckard~\cite{jiang2007deckard} computes characteristic vectors of parse trees and SourcererCC~\cite{sajnani2016sourcerercc} indexes large amounts of code into a bag-of-tokens representation.
Integrating such ideas into the feature-based retrieval in DiffSearch{} could further improve recall.
\section{Threats to Validity}
\paragraph{Internal Validity}
Several factors may influence our results.
First, to establish a ground truth for recall we use all code changes that match according to the algorithm in Section~\ref{sec:pruning}.
While designed to be sound, bugs in the implementation of the algorithm might cause mistakes in the ground truth.
We mitigate this threat through extensive automated and manual validation of the implementation.
Second, the user study may be subject to mistakes and biases by the participants.
We try to mitigate this threat by giving the same task to nine participants with different backgrounds and levels of experience.
Finally, the time limit of 120 seconds imposed in the user study might be longer than real users are willing to invest, or not long enough for some users to formulate adequate queries.
We choose this time limit based on empirical results on code search sessions, which are reported to have a median length of 89 seconds and an average length of 210 seconds~\cite{Sadowski2015}.
\paragraph{External Validity}
Several factors may influence the generalizability of our results.
First, the queries we use may not be representative of queries actual users would run.
As DiffSearch{} is the first search engine of its kind, we cannot base our experimental design on queries observed elsewhere.
We mitigate this threat by selecting queries that cover different syntactic categories of changes and different reasons for changing the code, and by building the ground truth for evaluating recall from real-world code changes.
Second, other programming languages than those we evaluate on may create additional challenges.
We reduce this risk by implementing DiffSearch{} for three languages and by keeping most parts of the approach language-agnostic, except for adapting the grammar of the language into a query language.
\section{Introduction}
\subsubsection{Reprogramming the quantum random oracle. }
We reconsider the recent work of Don, Fehr, Majenz and Schaffner~\cite{DFMS19} on the quantum random oracle model (QROM). On a technical level, they showed how to reprogram the QROM adaptively at {\em one} input. More precisely, for any oracle quantum algorithm ${\cal A}^H$, making $q$ calls to a random oracle $H$ and outputting a pair $(x,z)$ so that some predicate $V(x,H(x),z)$ is satisfied, they showed existence of a ``simulator'' $\cal S$ that mimics the random oracle, extracts $x$ from ${\cal A}^H$ by measuring one of the oracle queries to $H$, and then reprograms $H(x)$ to a given value $\Theta$ so that $z$ output by ${\cal A}^H$ now satisfies $V(x,\Theta,z)$, except with a multiplicative $O(q^2)$ loss in probability (plus a negligible additive loss). We emphasize that the challenging aspect of this problem is that ${\cal A}^H$'s queries to $H$ may be in quantum superposition, and thus measuring such a query disturbs the state and thus the behavior of ${\cal A}^H$. Still, Don et al. managed to control this disturbance sufficiently. In independent work and using very different techniques, Liu and Zhandry~\cite{LZ19} showed a similar kind of result, but with a $O(q^9)$ loss.
As an immediate application of this technique, it is then concluded that the Fiat-Shamir transformation of a $\Sigma$-protocol is as secure (in the QROM) as the original $\Sigma$-protocol (in the standard model), up to a $O(q^2)$ loss, i.e., any of the typically considered security notions is preserved under the Fiat-Shamir transformation, even in the quantum setting. In combination with prior work on simulating signature queries \cite{Unruh2017,Kiltz2017}, security (in the QROM) of Fiat-Shamir signatures that arise from ordinary $\Sigma$-protocols then follows as a corollary.
Given important examples of {\em multi-round} public-coin interactive proofs, used in, e.g., MQDSS \cite{MQDSS} and for Bulletproofs~\cite{Bulletproofs}%
\footnote{The security of the original Bulletproofs protocol relies on the hardness of discrete-log; however, work in progress considers post-quantum secure versions~\cite{PQBP-talk}.},
a natural question that arises is whether these techniques and results extend to the reprogrammability of the QROM at {\em multiple} inputs and the security of the Fiat-Shamir transformation (in the QROM) of {\em multi-round} public-coin interactive proofs. Another question is whether the $O(q^2)$ loss (for the original $\Sigma$-protocols) is optimal, or whether one might hope for a linear loss as in the classical case.
In this work, we provide answers to both these natural questions\,---\,and more.
\subsubsection{A technical hurdle for generalizing \cite{DFMS19} to multi-round Fiat-Shamir. }
To start with, we observe that the naive approach of applying the original result of \cite{DFMS19} inductively so as to reprogram multiple inputs one by one does not work.
This is due to a subtle technical issue that has to do with the precise statement of the original result. In more detail, the statement involves an additive error term $\varepsilon_x \geq 0$ that depends on the particular choice of the point $x$, which is (adaptively) chosen to be the input on which the random oracle (RO) is reprogrammed. The guarantee provided by \cite{DFMS19} is that this error term stays negligible even {\em when summed over} all $x$'s, i.e., $\sum_x \varepsilon_x = negl$. The formulation of the result for individual $x$'s with control over $\sum_x \varepsilon_x$ is important for the later applications to the Fiat-Shamir transformation.
However, when applying the result twice in a row, with the goal being to reprogram the RO at two inputs $x_1,x_2$, then we end up with two error terms $\varepsilon_{x_1}$ and $\varepsilon^{x_1}_{x_2}$ (with the second one depending on $x_1$), where the first one stays negligible when summed over $x_1$ and the second one stays negligible when summed over $x_2$ (for any $x_1$); but it is unclear that the sum $\varepsilon_{x_1,x_2} := \varepsilon_{x_1} + \varepsilon^{x_1}_{x_2}$ stays negligible when summed over $x_1$ {\em and} $x_2$, which is what we would need to get the corresponding generalized statement.
\subsubsection{Our results }
As a first contribution, we revise the {\em original} result from \cite{DFMS19} of reprogramming the QROM at one input by showing an {\em improved} version that has {\em no} additive error term, but only the original multiplicative $O(q^2)$ loss. For typical direct cryptographic applications, this improvement makes no big quantitative difference due to the error term being negligible, but: (1) it makes the statement cleaner and easier to formulate, (2) somewhat surprisingly, the proof is simpler than that of the original result in \cite{DFMS19}, and (3) most importantly, it removes the technical hurdle to extend to multiple inputs. Indeed, we then get the desired multi-input reprogrammability result by means of a not too difficult, though somewhat tedious, induction argument.
Building on our multi-input reprogrammability result above, our next goal then is to show the security of the Fiat-Shamir transformation (in the QROM) of multi-round public-coin interactive proofs. In contrast to the original result in \cite{DFMS19} for the Fiat-Shamir transformation of $\Sigma$-protocols, some additional work is needed here to deal with the order of the messages extracted from the Fiat-Shamir adversary. Thus, as a stepping stone, we consider and analyze a variant of the above multi-input reprogrammability result, which enforces the right order of the extracted messages. As a simple corollary of this, we then obtain the desired security of multi-round Fiat-Shamir. Here, the multiplicative loss becomes $O(q^{2n})$ for a $(2n+1)$-round public-coin interactive proof with constant $n$.
In the context of digital signatures, the original motivation for the Fiat-Shamir transformation, we extend previous results by Unruh \cite{Unruh2017} and Don et al. \cite{DFMS19} to show that Fiat-Shamir signature schemes based on a multi-round, honest-verifier zero knowledge public-coin interactive quantum proof of knowledge have standard signature security (existential unforgeability under chosen message attacks, UF-CMA) in the QROM. Assuming the additional collision-resistance-like property of computationally unique responses, they are even strongly unforgeable. We go on to apply this result to the signature scheme MQDSS \cite{MQDSS}, a candidate in the ongoing NIST standardization process for post-quantum cryptographic schemes \cite{NIST}, providing its first QROM proof.
Another application of our multi-round Fiat-Shamir result would for instance be to Bulletproofs~\cite{Bulletproofs}.
As a second application of our multi-input reprogrammability result, we show security (in the QROM) of the non-interactive OR-proof introduced by Liu, Wei and Wong \cite{LWW04}, further analyzed by Fischlin, Harasser and Janson \cite{FHJ20}. While the well-known (interactive) OR-proof by Cramer, Damg\r{a}rd and Schoenmakers \cite{CDS94} is a $\Sigma$-protocol and thus the results from \cite{DFMS19} apply, the inherently non-interactive OR-proof by Liu et al. does {\em not} follow this blueprint of being obtained as the Fiat-Shamir transformation of a $\Sigma$-protocol (though in some sense it is ``close'' to being of this form).
We show here how the $2$-input version of our multi-input reprogrammability result implies security of this OR-proof in the QROM.
Our last contribution is a lower bound that shows that the multiplicative $O(q^2)$ loss in the security argument of the Fiat-Shamir transformation of $\Sigma$-protocols is tight (up to a factor $4$). Thus, the $O(q^2)$ loss is unavoidable in general. Furthermore, we extend this lower bound to the Fiat-Shamir transformation of multi-round interactive proofs as considered in this work, and we show that also here the obtained loss $O(q^{2n})$ is in general optimal, up to a constant that depends on $n$ only.
\subsubsection{Related work}
Before the recently obtained reduction \cite{DFMS19,LZ19} was available, the Fiat-Shamir transform in the QROM was studied in a number of works \cite{Unruh2017,Dagdelen,Kiltz2017}, where weaker security properties were shown. In addition, Unruh developed an alternative transform \cite{Unruh2015} that provided QROM security at the expense of an increased proof size. The Unruh transform was later generalized to apply to 5-round public-coin interactive proof systems \cite{SOFIA}.
\section{Notation}
Up to some modifications, we follow closely the notation used in~\cite{DFMS19}.
We consider a (purified) oracle quantum algorithm $\cal A$ that makes $q$ queries to an {\em oracle}, i.e., an unspecified function $H: {\cal X} \to {\cal Y}$ with finite non-empty sets ${\cal X},{\cal Y}$.
Formally, $\cal A$ is described by a sequence of unitaries $A_1,\ldots,A_q$ and an initial state $\ket{\phi_0}$.%
\footnote{Alternatively, we may regard $\ket{\phi_0}$ as an additional input given to $\cal A$. }
For technical reasons that will become clear later, we actually allow (some of) the $A_i$'s to be a {\em projection} followed by a unitary (or vice versa). One can think of such a projection as a measurement performed by the algorithm, with the algorithm aborting except in case of a particular measurement outcome.
For any concrete choice of $H: {\cal X} \to {\cal Y}$, the algorithm $\cal A$ computes the state
$$
\ket{\phi_q^H} := {\cal A}^H \ket{\phi_0} := A_q\mathcal{O}^H \cdots A_1\mathcal{O}^H \ket{\phi_0} \, ,
$$
where $\mathcal{O}^H$ is the unitary defined by $\mathcal{O}^H : \ket{c}\ket{x}\ket{y} \mapsto \ket{c}\ket{x}\ket{y \oplus c \!\cdot\! H(x)}$ for any triple $c \in \{0,1\}$, $x \in \cal X$ and $y \in \cal Y$, with $\mathcal{O}^H$ acting on appropriate registers.
We emphasize that we allow {\em controlled} queries to $H$. Per se, this gives the algorithm more power, and thus will make our result only stronger, but it is easy to see that such controlled queries to the standard quantum oracle for a function can always be simulated by means of ordinary queries, at the price of one additional query.\footnote{Allowing controlled queries to the random oracle is also the more natural model compared to restricting to plain access to the unitary. After all, the motivation for the QROM is that in the real world, an attacker can implement the modeled hash function on their quantum computer, so they can definitely implement the controlled version as well.}
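As a concrete illustration, the action of $\mathcal{O}^H$ on computational basis states can be simulated classically; the toy function $H$ below is a hypothetical example, and the check confirms that the map is an involution, hence a permutation of the basis states (the classical shadow of $\mathcal{O}^H$ being unitary).

```python
# Toy classical check (basis states only) that the controlled oracle
# O^H : |c>|x>|y> -> |c>|x>|y XOR c*H(x)>  permutes the computational basis.
# H is a hypothetical function on 2-bit strings, represented as a dict;
# XOR acts bitwise on the integer encoding of y.

H = {0: 3, 1: 0, 2: 2, 3: 1}

def oracle(c, x, y):
    """Image of the basis state (c, x, y) under the controlled oracle."""
    return (c, x, y ^ (H[x] if c == 1 else 0))

# Applying the oracle twice returns every basis state to itself, i.e. the
# map is an involution and in particular a bijection on basis states.
states = [(c, x, y) for c in (0, 1) for x in range(4) for y in range(4)]
images = [oracle(*oracle(*s)) for s in states]
```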
The final state ${\cal A}^H \ket{\phi_0}$ is considered to be a state over registers $\ensuremath{\textit{\textsf{X}}} = \ensuremath{\textit{\textsf{X}}}_1\ldots\ensuremath{\textit{\textsf{X}}}_n$, $\ensuremath{\textit{\textsf{Z}}}$ and $\ensuremath{\textit{\textsf{E}}}$.
Following~\cite{DFMS19}, we introduce the following notation. For $0 \leq i,j \leq q$ we set
$$
\mathcal{A}_{i\rightarrow j}^H := A_{j}\mathcal{O}^H \cdots A_{i+1}\mathcal{O}^H \, ,
$$
where, by convention, $\mathcal{A}_{i\rightarrow j}^H$ is set to $\mathbb{1}$ if $j \leq i$. Furthermore, we let
$$
\ket{\phi_i^H} := \big(\mathcal{A}_{0\rightarrow i}^H\big)\ket{\phi_0}
$$
be the state of $\cal A$ after the $i$-th step but right before the $(i+1)$-st query, which is consistent with $\ket{\phi_q^H}$ above.
For a given function $H: {\cal X} \to {\cal Y}$ and for fixed $x \in {\cal X}$ and $\Theta \in {\cal Y}$, we define the {\em reprogrammed} function $H\!*\!\Theta x: {\cal X} \to {\cal Y}$ that coincides with $H$ on ${\cal X} \setminus \{x\}$ but maps $x$ to $\Theta$. With this notation at hand, we can then write
$$
\big(\mathcal{A}_{i\rightarrow q}^{H*\Theta x}\big) \, \big(\mathcal{A}_{0\rightarrow i}^{H}\big) \, \ket{\phi_0} = \big(\mathcal{A}_{i\rightarrow q}^{H*\Theta x}\big)\ket{\phi_i^H}
$$
for an execution of $\cal A$ where the oracle is reprogrammed at a given point $x$ after the $i$-th query. We stress that $(\mathcal{A}_{i\rightarrow q}^{H*\Theta x}) (\mathcal{A}_{0\rightarrow i}^{H})$ can again be considered to be an oracle quantum algorithm $\cal B$, which depends on $\Theta \in {\cal Y}$, that makes $q$ queries to (the unprogrammed) function $H$. Indeed, the (controlled) queries to the reprogrammed oracle ${H*\Theta x}$ can be simulated by means of controlled queries to $H$ (using one additional ``work qubit'').%
\footnote{Here it is crucial that we allow {\em controlled} queries to $H$. }
Exploiting that, in addition to unitaries, we allow projections as elementary operations, we can also understand $(\mathcal{A}_{i\rightarrow q}^{H*\Theta x}) X (\mathcal{A}_{0\rightarrow i}^{H})$ to be an oracle quantum algorithm again that makes oracle queries to $H$, where $X$ is the projection $X = \proj{x}$, acting on the corresponding query register to the oracle.
More generally, for any ${\mathbf x} = (x_1,\ldots,x_n)\in {\cal X}^n$ \emph{without duplicate entries}, i.e., $x_i \neq x_j$ for $i \neq j$, and for any ${{\mathbf \Theta}}\in {\cal Y}^n$, we define
\begin{align*}
&H*{\mathbf \Theta \mathbf x} = H*{\Theta_1 x_1}*\cdots*{\Theta_n x_n} : \, {\cal X} \to {\cal Y}\\
&x \mapsto
\begin{cases}
\Theta_i &\text{if $x = x_i$ for some $i \in \{1,\ldots,n\}$} \\
H(x) &\text{otherwise}.
\end{cases}
\end{align*}
This will then allow us to consider $(\mathcal{A}_{i_2\rightarrow q}^{H*\Theta_1 x_1 * \Theta_2 x_2}) X_2 (\mathcal{A}_{i_1\rightarrow i_2}^{H*\Theta_1 x_1}) X_1 (\mathcal{A}_{0\rightarrow i_1}^{H})$ as an oracle quantum algorithm with oracle queries to $H$, etc.
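Classically, reprogramming at multiple points is just patching a lookup table; the following minimal Python sketch (with $H$ as a dict, purely for illustration) captures the definition of $H*{\mathbf \Theta}{\mathbf x}$, including the requirement that $\mathbf x$ contain no duplicate entries.

```python
def reprogram(H, xs, thetas):
    """Return H * Theta x: agrees with H everywhere except that each x_i
    is mapped to Theta_i. Assumes xs has no duplicate entries, matching
    the requirement in the text."""
    assert len(set(xs)) == len(xs), "duplicate entries in xs"
    patched = dict(H)  # H itself is left untouched
    for x, theta in zip(xs, thetas):
        patched[x] = theta
    return patched

H = {"a": 0, "b": 1, "c": 2}
H2 = reprogram(H, ["a", "c"], [7, 9])  # reprogram at x1="a", x2="c"
```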
Eventually, we are interested in the probability that after the execution of the original algorithm ${\cal A}^H$, and upon measuring register $\ensuremath{\textit{\textsf{X}}}$ in the computational basis to obtain ${\mathbf x} = (x_1,\ldots,x_n) \in {\cal X}^n$, the state of register $\ensuremath{\textit{\textsf{Z}}}$ is of a certain form dependent on ${\mathbf x}$ and $H({\mathbf x}) = (H(x_1),\ldots,H(x_n))$.
Such a requirement (for a fixed ${\mathbf x}$) is captured by a projection
$$
G_{\bf x}^{H} = \proj{{\mathbf x}} \otimes \Pi_{{\mathbf x},H({\mathbf x})} \, ,
$$
where $\{\Pi_{{\mathbf x},{\mathbf \Theta}}\}_{{\mathbf x},{\mathbf \Theta}}$ is a family of projections with ${\mathbf x} \in {\cal X}^n$ and ${\mathbf \Theta} \in {\cal Y}^n$, and with the understanding that $\proj{{\mathbf x}}$ acts on $\ensuremath{\textit{\textsf{X}}}$ and $\Pi_{{\mathbf x},H({\mathbf x})}$ on register $\ensuremath{\textit{\textsf{Z}}}$. We refer to such a family of projections as a \emph{quantum predicate}.
We use $G_{\mathbf x}^{\mathbf \Theta}$ as a short hand for $G_{\mathbf x}^{H*{\mathbf \Theta}{\mathbf x}}$, and we write $G_x^H$ and $G_x^{\Theta}$ with $x \in \cal X$ and $\Theta \in \cal Y$ for the case $n = 1$.
For an arbitrary but fixed ${\bf x}_\circ \in {\cal X}^n$, we are then interested in the probability
$$
\Pr\bigl[\,{\bf x}\!=\! {\bf x}_\circ \wedge V({\bf x},H({\bf x}),z) : ({\bf x},z) \leftarrow {\cal A}^H \,\bigr]\, = \bigl\|G_{{\bf x}_\circ}^H \ket{\phi_q^H}\bigr\|_2^2 \, ,
$$
where the left-hand side is our notation for this probability; here we understand ${\cal A}^H$ to be an algorithm that outputs the measured $\bf x$ together with the quantum state $z$ in register $\ensuremath{\textit{\textsf{Z}}}$, and $V$ to be the quantum predicate specified by the projections $\Pi_{{\mathbf x},{\mathbf\Theta}}$. Correspondingly, $\Pr\bigl[ x\!=\! x_\circ \wedge V(x,H(x),z) : (x,z) \leftarrow {\cal A}^H \bigr]\, = \|G_{x_\circ}^H \ket{\phi_q^H}\|_2^2$ for the $n=1$ case.
\section{An improved single-input reprogramming result}\label{secmainresult}
For the case $n=1$, Don et al.~\cite{DFMS19} show the existence of a black-box {\em simulator} $\cal S$ such that for any oracle quantum algorithm $\cal A$ as considered above with oracle access to a {\em uniformly random} $H$, it holds that
\begin{align}\begin{split}
\Pr_\Theta\bigl[&x\!=\! x_\circ \wedge V(x,\Theta,z) : (x,z) \leftarrow \langle{\cal S}^{\cal A} , \Theta\rangle\bigr]
\\ &
\geq \frac{1}{2(q\!+\!1)(2q\!+\!3)} \Pr_H\bigl[x\!=\! x_\circ \wedge V(x,H(x),z) : (x,z) \leftarrow {\cal A}^{H} \bigr] - \varepsilon_{x_\circ} \, ,
\end{split}\label{eq:old}\end{align}%
for any $x_\circ \in \cal X$, where the $\varepsilon_{x_\circ}$'s are non-negative and their sum over $x_\circ \in \cal X$ is bounded by $1/(2q|{\cal Y}|)$, i.e., negligible whenever $|{\cal Y}|$ is superpolynomial. The notation $(x,z) \leftarrow \langle{\cal S}^{\cal A} , \Theta\rangle$ is to be understood in that in a first stage ${\cal S}^{\cal A}$ outputs $x$, and then on input $\Theta$ it outputs $z$.
At the core, Equation \eqref{eq:old} follows from Lemma~1 of~\cite{DFMS19} which shows that
\begin{align}\begin{split}
\E_{\Theta,i,b}&\left[\big\|(\proj{x} \otimes \Pi_{x,\Theta}) \big(\mathcal{A}_{i+b\rightarrow q}^{H*\Theta x}\big)\big(\mathcal{A}_{i\rightarrow i+b}^{H}\big)X\ket{\phi_i^H}\big\|_2^2\right] \\
&\geq \frac{\E_{\Theta}\Bigl[\big\|(\proj{x} \otimes \Pi_{x,\Theta})\ket{\phi_q^{H*\Theta x}}\big\|_2^2\Bigr]}{2(q+1)(2q+3) } - \frac{\big\|X\ket{\phi_q^H}\big\|_2^2}{2(q+1)|{\cal Y}| } \, ,
\end{split}
\label{eq:oldtechn}
\end{align}
and from which the construction of $\cal S$ can be extracted. The bound \eqref{eq:old} on the ``success probability'' of $\cal S$ then follows from the observation that $\cal S$ can simulate the calls to $H$ and to $H\!*\!\Theta x$ by means of a $2(q\!+\!1)$-wise independent hash function, and that $H$ and $H\!*\!\Theta x$ are indistinguishable for random $H$ and $\Theta$.
In this section we show an improved variant of Equation \eqref{eq:old}, which avoids the additive error term $\varepsilon_{x_\circ}$. While having negligible quantitative effect in typical situations, it makes the statement simpler. In addition, as explained in the introduction, it circumvents a technical issue one encounters when trying to extend to the multi-input case. Furthermore, our improved version comes with a simpler proof.%
\footnote{We thank Dominique Unruh for the idea that it might be possible to avoid the additive error term, and for proposing an argument for achieving that, which inspired us to find the simpler argument we eventually used. }
The approach is to avoid the additive error term in Equation \eqref{eq:oldtechn}. We achieve this by slightly tweaking the simulator $\cal S$. From the technical perspective, while on the left hand side of Equation \eqref{eq:oldtechn} the expectation is over a random $i \in \{0,\ldots,q\}$, selecting one of the $q+1$ queries of $\cal A$ at random (where the $\ensuremath{\textit{\textsf{X}}}$ register of the output state is considered to be a final query), and a random $b \in \{0,1\}$, our new version has syntactically the same left hand side, but with the expectation over a random pair $(i,b) \in (\{0,\ldots,q\!\shortminus\!1\}\times \{0,1\})\cup \{(q,0)\}$ instead. This allows us to absorb the additive error term into the success probability of the simulator. Furthermore, it holds for any {\em fixed} choice of $\Theta$ (and not only on average for a random choice).
\begin{lem}
\label{lem:mainresult}
Let $\cal A$ be a $q$-query oracle quantum algorithm. Then, for any function $H: {\cal X}\rightarrow {\cal Y}$, any $x \in \cal X$ and $\Theta \in \cal Y$, and any projection $\Pi_{x,\Theta}$, it holds that
\begin{align*}
\E_{i,b}\left[\big\|(\proj{x} \otimes \Pi_{x,\Theta})\big(\mathcal{A}_{i+b\rightarrow q}^{H*\Theta x}\big)\big(\mathcal{A}_{i\rightarrow i+b}^{H}\big)X\ket{\phi_i^H}\big\|_2^2\right] &\!\geq\! \frac{\big\|(\proj{x} \otimes \Pi_{x,\Theta}) \ket{\phi_q^{H*\Theta x}}\big\|_2^2}{(2q+1)^2 } \, ,
\end{align*}
where the expectation is over uniform $(i,b) \in (\{0,\ldots,q\!\shortminus\!1\}\times \{0,1\})\cup \{(q,0)\} $.%
\end{lem}
This new version of Equation \eqref{eq:oldtechn} translates to a simulator $\mathcal{S}$ that works by running $\mathcal{A}$, but with the following modifications. First, one of the $q+1$ queries of $\mathcal{A}$ (also counting the final output in register~$\ensuremath{\textit{\textsf{X}}}$) is measured, and the measurement outcome $x$ is output by (the first stage of) $\mathcal{S}$. We emphasize that the crucial difference to \cite{DFMS19} is that each of the $q$ actual queries is picked with probability \smash{$\frac{2}{2q+1}$}, while the final output is picked with probability \smash{$\frac{1}{2q+1}$}.
Then, very much as in \cite{DFMS19}, this very query of $\cal A$ is answered either using the original $H$ {\em or} using the reprogrammed oracle $H\!*\!\Theta x$, with the choice being made at random\footnote{If it is the final output that is measured then there is nothing left to reprogram, so no choice has to be made. }, while all the remaining queries of $\cal A$ are answered using oracle $H\!*\!\Theta x$. Finally, (the second stage of) $\cal S$ outputs whatever $\mathcal{A}$ outputs.
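To make the sampling concrete, the following sketch (a toy model of ours, not code from the paper) draws the pair $(i,b)$ from the distribution used by the simulator and exhibits the stated marginal probabilities:

```python
import random
from fractions import Fraction

def sample_measurement_point(q, rng=random):
    """Draw (i, b) uniformly from ({0,...,q-1} x {0,1}) u {(q, 0)}.

    i selects which of the q+1 queries is measured (i = q means the
    final output register X); for an actual query (i < q), b = 1 means
    the measured query is still answered with the original oracle H,
    while b = 0 means it is already answered with the reprogrammed one.
    """
    pairs = [(i, b) for i in range(q) for b in (0, 1)] + [(q, 0)]
    return rng.choice(pairs)

def measurement_marginals(q):
    """Marginal probability that query i is the one being measured."""
    total = 2 * q + 1
    marginals = {i: Fraction(2, total) for i in range(q)}
    marginals[q] = Fraction(1, total)  # final output: probability 1/(2q+1)
    return marginals
```

Each actual query is thus picked with probability $2/(2q\!+\!1)$ and the final output with probability $1/(2q\!+\!1)$, matching the description above.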
In line with Theorem~1 in~\cite{DFMS19}, i.e.~Equation \eqref{eq:old} above, we obtain the following result from Lemma~\ref{lem:mainresult}.
\begin{thm}[Measure-and-reprogram, single input]\label{thm:main}
Let ${\cal X}$ and ${\cal Y}$ be finite non-empty sets.
There exists a black-box two-stage quantum algorithm $\cal S$ with the following property.
Let $\cal A$ be an arbitrary oracle quantum algorithm that makes $q$ queries to a uniformly random $H: {\cal X}\rightarrow {\cal Y}$ and that outputs some $x \in {\cal X}$ and a (possibly quantum) output~$z$. Then, the two-stage algorithm ${\cal S}^{\cal A}$ outputs some $x \in {\cal X}$ in the first stage and, upon a random $\Theta \in {\cal Y}$ as input to the second stage, a (possibly quantum) output~$z$, so that for any \mbox{$x_\circ\in {\cal X}$} and any (possibly quantum) predicate $V$:
\begin{align*}
\Pr_\Theta\bigr[x\!=\! x_\circ& \wedge V(x,\Theta,z) : (x,z) \leftarrow \langle{\cal S}^{\cal A} , \Theta\rangle\bigl]
\switch{\;}{\\ &}
\geq \frac{1}{(2q+1)^2} \Pr_H\bigl[x\!=\! x_\circ \wedge V(x,H(x),z) : (x,z) \leftarrow {\cal A}^{H} \bigr] \, .
\end{align*}%
Furthermore, $\cal S$ runs in time polynomial in $q$, $\log|\mathcal X|$ and $\log|\mathcal Y|$.
\end{thm}
%
The proof of Lemma~\ref{lem:mainresult} closely follows the proof of Equation \eqref{eq:old} in~\cite{DFMS19}, but the streamlined statement and simulator allow us to cut some corners.
\begin{proof}[of Lemma~\ref{lem:mainresult}]
For any $0\leq i \leq q$, inserting a resolution of the identity and exploiting that
$$
\big(\mathcal{A}_{i+1\rightarrow q}^{H*\Theta x}\big)\big(\mathcal{A}_{i\rightarrow i+1}^H\big)\big(\mathbb{1}-X\big)\ket{\phi_i^H} = \big(\mathcal{A}_{i\rightarrow q}^{H*\Theta x}\big)\big(\mathbb{1}-X\big)\ket{\phi_i^H} \, ,
$$
we can write
\switch{
\begin{align*}
\big(\mathcal{A}_{i+1\rightarrow q}^{H*\Theta x}\big)\ket{\phi_{i+1}^H}
&= \big(\mathcal{A}_{i+1\rightarrow q}^{H*\Theta x}\big)\big(\mathcal{A}_{i\rightarrow i+1}^H\big)\big(\mathbb{1}-X\big)\ket{\phi_i^H} \hspace{-8ex}& +\: \big(\mathcal{A}_{i+1\rightarrow q}^{H*\Theta x}\big)\big(\mathcal{A}_{i\rightarrow i+1}^H\big)X\ket{\phi_i^H} \notag\\
&= \big(\mathcal{A}_{i\rightarrow q}^{H*\Theta x}\big)\big(\mathbb{1}-X\big)\ket{\phi_i^H} \hspace{-8ex}\!\!&\!\! +\: \big(\mathcal{A}_{i+1\rightarrow q}^{H*\Theta x}\big)\big(\mathcal{A}_{i\rightarrow i+1}^H\big)X\ket{\phi_i^H}\\
&= \big(\mathcal{A}_{i\rightarrow q}^{H*\Theta x}\big)\ket{\phi_i^H} - \big(\mathcal{A}_{i\rightarrow q}^{H*\Theta x}\big)X\ket{\phi_i^H} \hspace{-8ex}\!\!&\!\! +\: \big(\mathcal{A}_{i+1\rightarrow q}^{H*\Theta x}\big)\big(\mathcal{A}_{i\rightarrow i+1}^H\big)X\ket{\phi_i^H}
\end{align*}
}{
\begin{align*}
&\big(\mathcal{A}_{i+1\rightarrow q}^{H*\Theta x}\big)\ket{\phi_{i+1}^H} &\\
&\hspace{30pt}= \big(\mathcal{A}_{i+1\rightarrow q}^{H*\Theta x}\big)\big(\mathcal{A}_{i\rightarrow i+1}^H\big)\big(\mathbb{1}-X\big)\ket{\phi_i^H} \!\!&\!\! +\: \big(\mathcal{A}_{i+1\rightarrow q}^{H*\Theta x}\big)\big(\mathcal{A}_{i\rightarrow i+1}^H\big)X\ket{\phi_i^H} \notag\\
&\hspace{30pt}= \big(\mathcal{A}_{i\rightarrow q}^{H*\Theta x}\big)\big(\mathbb{1}-X\big)\ket{\phi_i^H} \!\!&\!\! +\: \big(\mathcal{A}_{i+1\rightarrow q}^{H*\Theta x}\big)\big(\mathcal{A}_{i\rightarrow i+1}^H\big)X\ket{\phi_i^H}\\
&\hspace{30pt}= \big(\mathcal{A}_{i\rightarrow q}^{H*\Theta x}\big)\ket{\phi_i^H} - \big(\mathcal{A}_{i\rightarrow q}^{H*\Theta x}\big)X\ket{\phi_i^H} \!\!&\!\! +\: \big(\mathcal{A}_{i+1\rightarrow q}^{H*\Theta x}\big)\big(\mathcal{A}_{i\rightarrow i+1}^H\big)X\ket{\phi_i^H}
\end{align*}
}
Rearranging terms, applying $G_{x}^\Theta = (\proj{x} \otimes \Pi_{x,\Theta})$ and using the triangle inequality, we can thus bound
\begin{align*}
\big\| G_{x}^\Theta \big(\mathcal{A}_{i\rightarrow q}^{H*\Theta x}\big) \ket{\phi_i^H} \big\|_2
\leq \big\| G_{x}^\Theta & \big(\mathcal{A}_{i+1\rightarrow q}^{H*\Theta x}\big)\ket{\phi_{i+1}^H}\big\|_2 \\& + \big\| G_{x}^\Theta \big(\mathcal{A}_{i\rightarrow q}^{H*\Theta x}\big)X\ket{\phi_i^H}\big\|_2\\ & \qquad + \big\| G_{x}^\Theta \big(\mathcal{A}_{i+1\rightarrow q}^{H*\Theta x}\big)\big(\mathcal{A}_{i\rightarrow i+1}^H\big)X\ket{\phi_i^H}\big\|_2 \, .
\end{align*}
Summing up the respective sides of the inequality over $i=0,\ldots,q-1$, we get
\begin{equation*}
\big\| G_x^\Theta\ket{\phi_{q}^{H*\Theta x}}\big\|_2 \:\leq\: \big\| G_x^\Theta\ket{\phi_{q}^H}\big\|_2 + \!\!\!\sum_{\substack{0\leq i < q \\ b\in \{0,1\}}}\!\!\! \big\| G_x^\Theta \big(\mathcal{A}_{i+b\rightarrow q}^{H*\Theta x}\big)\big(\mathcal{A}_{i\rightarrow i+b}^H\big)X\ket{\phi_i^H}\big\|_2 \, .\label{eqnsquarthis}
\end{equation*}
By squaring both sides, dividing by $2q+1$ (i.e., the number of terms on the right hand side), and using Jensen's inequality on the right hand side, we obtain
$$
\frac{\big\| G_x^\Theta\ket{\phi_{q}^{H*\Theta x}}\big\|_2^2}{2q+1} \leq \big\| G_x^\Theta\ket{\phi_{q}^H}\big\|_2^2 + \!\!\!\sum_{\substack{0\leq i < q \\ b\in \{0,1\}}}\!\!\!\big\| G_x^\Theta \big(\mathcal{A}_{i+b\rightarrow q}^{H*\Theta x}\big)\big(\mathcal{A}_{i\rightarrow i+b}^H\big)X\ket{\phi_i^H}\big\|_2^2
$$
and thus, noting that we can write $\big\| G_x^\Theta\ket{\phi_{q}^H}\big\|_2^2$ as
$$\big\|G_x^\Theta \big(\mathcal{A}_{i+b\rightarrow q}^{H*\Theta x}\big)\big(\mathcal{A}_{i\rightarrow i+b}^H\big)X\ket{\phi_i^H}\big\|_2^2$$
with $i=q$ and $b=0$,
\begin{equation*}\label{eq:intermediate0}
\frac{\big\| G_x^\Theta\ket{\phi_{q}^{H*\Theta x}}\big\|_2^2}{(2q+1)^2}\;\leq\; \E_{i,b}\left[\big\| G_x^\Theta \big(\mathcal{A}_{i+b\rightarrow q}^{H*\Theta x}\big)\big(\mathcal{A}_{i\rightarrow i+b}^H\big)X\ket{\phi_i^H}\big\|_2^2\right] \, .
\end{equation*}
\qed\end{proof}
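The final step of the proof uses only the Cauchy--Schwarz/Jensen bound $(\sum_{k=1}^m a_k)^2 \le m \sum_{k=1}^m a_k^2$ with $m = 2q+1$ terms; a quick numeric sanity check (an illustration of ours, with helper names of our choosing):

```python
import random

def jensen_bound_holds(values):
    """Check (sum a_k)^2 <= m * sum a_k^2 for a list of m reals."""
    m = len(values)
    lhs = sum(values) ** 2
    rhs = m * sum(v * v for v in values)
    return lhs <= rhs + 1e-12  # small slack for float rounding

rng = random.Random(0)
for q in (1, 3, 10):
    # stand-ins for the 2q+1 norms on the right hand side
    norms = [rng.random() for _ in range(2 * q + 1)]
    assert jensen_bound_holds(norms)
```

Equality holds exactly when all $2q+1$ terms are equal, which is why the loss factor $(2q+1)^2$ cannot be avoided by this argument alone.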
For completeness, let us spell out how Theorem~8 of~\cite{DFMS19} on the generic security of the Fiat-Shamir transformation (in the QROM) can now be re-phrased, avoiding the negligible error term present in \cite{DFMS19}. We refer to~\cite{DFMS19} or to our later Section~\ref{sec:mFS} for the details on the Fiat-Shamir transformation.
\begin{thm}\label{thm:FS}
There exists a black-box polynomial-time two-stage quantum algorithm $\cal S$ such that for any adaptive Fiat-Shamir adversary $\cal A$, making $q$ queries to a uniformly random function $H$ with appropriate domain and range, and for any $x_\circ \in {\cal X}$:
\begin{align*}
\Pr\bigr[x\!=\! x_\circ \wedge v = accept& :(x,v) \leftarrow \langle{\cal S}^{\cal A} , {\cal V}\rangle\bigl]
\switch{\;}{\\ &}
\geq \frac{1}{(2q+1)^2} \Pr_H\bigr[x\!=\! x_\circ \wedge V^H_{FS}(x,\pi) : (x,\pi) \leftarrow {\cal A}^H \bigl] \, .
\end{align*}
\end{thm}
%
\section{Multi-input reprogrammability}\label{subsec:tech-n}
In this section, we extend our (improved) results on adaptively reprogramming the quantum random oracle at {\em one} point $x \in {\cal X}$ to {\em multiple} points $x_1,\ldots,x_n \in {\cal X}$. This in turn will allow us to extend the results on the security of the Fiat-Shamir transformation to {\em multi-round} protocols. We point out again that the improvement of Lemma \ref{lem:mainresult} over Lemma 1 in~\cite{DFMS19} plays a crucial role here, in that it circumvents the trouble with the negligible error term that occurs when trying to extend the result from~\cite{DFMS19} to the setting considered here.
The starting point is the following generalized version of the problem considered in Section~\ref{secmainresult}. We assume an oracle quantum algorithm ${\cal A}^H$ that makes $q$ queries to a random oracle $H: {\cal X}\rightarrow {\cal Y}$ and then produces an output of the form $(x_1,\ldots,x_n,z)$, where $z$ may be quantum, such that a certain (quantum) predicate $V(x_1, H(x_1),\ldots,x_n,H(x_n),z)$ is satisfied with some probability.
The goal then is to turn such an ${\cal A}^H$ into a multi-stage quantum algorithm ${\cal S}$ (the {\em simulator}) that, stage by stage, outputs the $x_i$'s and takes corresponding $\Theta_i$'s as input, and eventually outputs a (possibly quantum) $z$ with the property that $V(x_1, \Theta_1,\ldots,x_n,\Theta_n,z)$ is satisfied with similar probability.
\subsection{The general case}
Naively, one might hope for an ${\cal S}$ that outputs $x_1$ in the first stage (obtained by measuring one of the queries of ${\cal A}^H$), and then on input $\Theta_1$ proceeds by outputting $x_2$ in the second stage (obtained by measuring one of the subsequent queries of ${\cal A}^H$), etc. However, since ${\cal A}^H$ may query the hashes of $x_1,\ldots,x_n$ in an arbitrary order, we cannot hope for this to work.
Therefore, we have to allow $\cal S$ to produce $x_1,\ldots,x_n$ in an arbitrary order as well.%
\footnote{Looking ahead, in Section~\ref{sec:EnforceOrder} we will force ${\cal A}^H$ to query, and thus $\cal S$ to extract, $x_1,\ldots,x_n$ in the {\em right} order by requiring $x_2$ to contain $H(x_1)$ as a substring, $x_3$ to contain $H(x_2)$ as a substring, etc. This will be important for the multi-round Fiat-Shamir application. }
Formally, we consider $\cal S$ with the following syntactic behavior: in the first stage it outputs a permutation $\pi$ together with $x_{\pi(1)}$ and takes as input $\Theta_{\pi(1)}$, and then for every subsequent stage $1 < i \leq n$ it outputs $x_{\pi(i)}$ and takes as input $\Theta_{\pi(i)}$; eventually, in the final stage (labeled by $n+1$) it outputs $z$. In line with earlier notation, but taking this additional complication into account, we denote such an execution of $\cal S$ as $(\pi,\pi({\mathbf x}),z) \leftarrow \langle{\cal S}^{\cal A}, \pi({\mathbf \Theta})\rangle$.
A final issue is that if $x_i = x_j$ then $H(x_i) = H(x_j)$ as well, whereas $\Theta_i$ and $\Theta_j$ may well be different. Thus, we can only expect $\cal S$ to work well when $x_1,\ldots,x_n$ has no duplicates.
For us to be able to mathematically reason about the simulator described above, we introduce some additional notation.
Using $r_1=(i_1,b_1)$, we write the basic simulator from Lemma~\ref{lem:mainresult} as
$$
\mathcal{S}^{H,{\cal A}}_{\Theta_1,x_1,r_1} :=
\mathcal{S}^{H,{\cal A},\Theta_1,x_1,r_1} := \big(\mathcal{A}_{i_1+b_1\rightarrow q}^{H*\Theta_1x_1}\big)\big(\mathcal{A}_{i_1\rightarrow i_1+b_1}^{H}\big)X_1\big(\mathcal{A}_{0\rightarrow i_1}^{H}\big) \, .
$$
This can be recursively extended by applying it to ${\cal A}^H$ now being $\mathcal{S}^{H,{\cal A}}_{\Theta_1,x_1,r_1}$ so as to obtain
$$
\mathcal{S}^{H,{\cal A}}_{\Theta_{1,2},x_{1,2},r_{1,2}} := \big(\mathcal{S}_{i_2+b_2\rightarrow q}^{H*\Theta_2x_2,{\cal A},\Theta_1,x_1,r_1}\big)\big(\mathcal{S}_{i_2\rightarrow i_2+b_2}^{H,{\cal A},\Theta_1,x_1,r_1}\big)X_2\big(\mathcal{S}_{0\rightarrow i_2}^{H,{\cal A},\Theta_1,x_1,r_1}\big).
$$
In general, we can consider the following operator, which simulates $\mathcal{A}$ and performs $n$ measurements:
$$
\mathcal{S}_{{\mathbf \Theta},\mathbf x,{\mathbf r}}^{H,{\cal A}} := \big(\mathcal{S}_{i_n+b_n\rightarrow q}^{H*\Theta_nx_n,{\cal A},{\overline{\mathbf \Theta}},{\overline{\mathbf x}},\overline{\mathbf r}}\big)\big(\mathcal{S}_{i_n\rightarrow i_n+b_n}^{H,{\cal A},{\overline{\mathbf \Theta}},{\overline{\mathbf x}},\overline{\mathbf r}}\big)X_n\big(\mathcal{S}_{0\rightarrow i_n}^{H,{\cal A},{\overline{\mathbf \Theta}},{\overline{\mathbf x}},\overline{\mathbf r}}\big) \, ,
$$
where, for arbitrary but fixed $n$ and ${\bf\Theta} = (\Theta_1,\ldots,\Theta_n) \in {\cal Y}^n$, the notation $\overline{\mathbf \Theta}$ is understood as $\overline{\mathbf \Theta} = (\Theta_1,\ldots,\Theta_{n-1}) \in {\cal Y}^{n-1}$, and correspondingly for $\mathbf x$ etc.
Finally, when considering {\em fixed} ${\mathbf \Theta} \in {\cal Y}^n$ and ${\mathbf x} \in {\cal X}^n$, we write
$$
S_{\mathbf{r}}^H({\cal A}) := \mathcal{S}_{{\mathbf \Theta},\mathbf x,{\mathbf r}}^{H,{\cal A}} \, .
$$
At the core of our multi-round result will be the following technical lemma, which generalizes Lemma \ref{lem:mainresult}.
\begin{lem}\label{lem:mainresultmulti}
Let $\cal A$ be a $q$-query oracle quantum algorithm. Then, for any function $H: {\cal X}\rightarrow {\cal Y}$, any ${\mathbf x} \in {\cal X}^n$ and ${\mathbf \Theta}\in {\cal Y}^n$, and any projection $\Pi_{{\mathbf x,\mathbf \Theta}}$, it holds that
\begin{align*}
&\frac{\big\|\big(\proj{\mathbf x}\otimes\Pi_{{\mathbf x, \mathbf \Theta}}\big){\cal A}^{H*{\mathbf \Theta \mathbf x}}\ket{\phi_0}\big\|_2^2}{(2q+1)^{2n}}\leq \E_{\mathbf r}\left[\big\|\big(\proj{\mathbf x}_A\otimes\Pi_{{\mathbf x, \mathbf\Theta}}\big)\mathcal{S}_{\mathbf r}^H({\cal A})\ket{\phi_0}\big\|_2^2\right].
\end{align*}
\end{lem}
\begin{proof}
The proof is by induction on $n$, where the base case is given by Lemma~\ref{lem:mainresult}.
For the induction step we first apply the base case, substituting $x_n$ for $x_1$, $\Theta_n$ for $\Theta_1$, $r_n$ for $r_1$, $H\!*\!{\overline{\mathbf \Theta}\overline{\mathbf x}}$ for $H$, and $\hat{\Pi}_{x_n,\Theta_n}$ for $\Pi_{x_1,\Theta_1}$, where
$$\hat{\Pi}_{x_n,\Theta_n} = \proj{x_1}\otimes\ldots\otimes\proj{x_{n\shortminus 1}}\otimes\Pi_{\mathbf x,\mathbf \Theta}
$$
to obtain
\begin{align*}
&\frac{\big\|\big(\proj{x_n}\otimes \hat{\Pi}_{x_n,\Theta_n}\big)\mathcal{A}^{\left(H*{\overline{\mathbf \Theta}\overline{\mathbf x}}\right)*\Theta_nx_n}\ket{\phi_0}\big\|_2^2}{(2q+1)^2}\\
&\quad\ \ \leq \E_{r_n}\left[\big\|\big(\proj{x_n}_A\otimes \hat{\Pi}_{x_n,\Theta_n}\big)\mathcal{S}_{r_n}^{H*{\overline{\mathbf \Theta}\overline{\mathbf x}}}({\cal A})\ket{\phi_0}\big\|_2^2\right]
\end{align*}
which we can write as
\begin{align}\label{eqn:exprn}
\frac{\big\|\big(\proj{\mathbf x}\otimes\Pi_{{\mathbf x,\mathbf \Theta}}\big)\mathcal{A}^{H*{\mathbf \Theta \mathbf x}}\ket{\phi_0}\big\|_2^2}{(2q+1)^{2n}}&\leq \frac{\E_{r_n}\left[\big\|\big(\proj{\mathbf x}\otimes\Pi_{\mathbf x,\mathbf \Theta}\big)\mathcal{S}_{r_n}^{H*{\overline{\mathbf \Theta}\overline{\mathbf x}}}({\cal A})\ket{\phi_0}\big\|_2^2\right]}{(2q+1)^{2(n\shortminus 1)}}
\end{align}
after dividing both sides by $(2q+1)^{2(n\shortminus 1)}$ and swapping registers appropriately (to ensure that the register containing $x_n$ comes after the others).
Now fix $r_n$. We define
$$
\hat{\Pi}_{{\overline{\mathbf x},\overline{\mathbf \Theta}}} := \proj{x_n}\otimes\Pi_{\mathbf x,\mathbf \Theta}
$$
and apply the induction hypothesis for $n\!-\!1$, substituting ${\cal S}_{r_n}^{H*\overline{\mathbf \Theta} \overline{\mathbf x}}({\cal A})$ for ${\cal A}^{H*{\overline{\mathbf \Theta} \overline{\mathbf x}}}$, and $\hat{\Pi}_{{\overline{\mathbf x}},\overline{\mathbf \Theta}}$ for $\Pi_{{\overline{\mathbf x}},\overline{\mathbf \Theta}}$, in order to derive
\begin{align*}
\frac{\big\|\big(\proj{\mathbf x}\otimes\Pi_{\mathbf x,\mathbf \Theta}\big)\mathcal{S}_{r_n}^{H*{\overline{\mathbf \Theta}\overline{\mathbf x}}}({\cal A})\ket{\phi_0}\big\|_2^2}{(2q+1)^{2(n\shortminus 1)}} &= \frac{\big\|\big(\proj{\overline{\mathbf{x}}}\otimes\hat{\Pi}_{{\overline{\mathbf x}},\overline{\mathbf \Theta}}\big)\mathcal{S}_{r_n}^{H*{\overline{\mathbf \Theta}\overline{\mathbf x}}}({\cal A})\ket{\phi_0}\big\|_2^2}{(2q+1)^{2(n\shortminus 1)}} \\
&\leq \E_{\overline{\mathbf r}}\left[\big\|\big(\proj{\overline{\mathbf{x}}}\otimes\hat{\Pi}_{{\overline{\mathbf x}},\overline{\mathbf \Theta}}\big)\mathcal{S}_{\overline{\mathbf r}}^{H}({\cal S}_{r_n}({\cal A}))\ket{\phi_0}\big\|_2^2\right] \\
&=\E_{\overline{\mathbf{r}}}\left[\big\|\big(\proj{\mathbf x}\otimes\Pi_{{\mathbf x,\mathbf \Theta}}\big)\mathcal{S}_{\mathbf r}^{H}({\cal A})\ket{\phi_0}\big\|_2^2\right].
\end{align*}
Since this inequality holds for any fixed $r_n$, it also holds in expectation over $r_n$. Substituting it into Equation~\eqref{eqn:exprn} yields the statement of the lemma.
\qed\end{proof}
\begin{rem}\label{rem:disjointslots}
In case of ${\bf x} = (x_1,\ldots,x_n) \in {\cal X}^n$ {\em without duplicate entries}, it follows from the resulting mutual orthogonality of the projections $X_j$ and the definition of $\mathcal{S}_{\mathbf r}^H({\cal A})$ that the following holds. The term in the expectation $\E_{\mathbf r}$ in the inequality of Lemma~\ref{lem:mainresultmulti} vanishes for any ${\bf r} = ({\bf i},{\bf b})$ for which there exist two distinct coordinates $j \neq k$ with $i_j = i_k$. As such, we may well understand this expectation to be over ${\bf r} = ({\bf i},{\bf b})$ for which $i_j \neq i_k$ whenever $j \neq k$; this only increases the expectation.%
\footnote{One might try to exploit this actual improvement in the bound; however, for typical choices of parameters, with $n$ a small constant and $q$ large, this is insignificant. }
In other words, we may assume that random {\em distinct} queries are measured in order to extract $x_1,\ldots,x_n$.
\end{rem}
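For intuition on how little is lost by conditioning on distinct indices, one can count the tuples directly; the sketch below (our own simplification, treating the $n$ indices as uniform and independent over $\{0,\ldots,q\}$) also recovers the permutation used later in the proof of the multi-input theorem:

```python
from fractions import Fraction
from math import prod

def distinct_fraction(q, n):
    """Fraction of index tuples (i_1,...,i_n), each i_j in {0,...,q},
    whose entries are pairwise distinct, under uniform independent
    sampling (a simplification of the simulator's actual distribution)."""
    m = q + 1
    return Fraction(prod(m - j for j in range(n)), m ** n)

def extraction_order(indices):
    """The permutation pi with i_{pi(1)} < ... < i_{pi(n)} (0-indexed):
    the order in which the x_j are extracted from the measured queries."""
    return sorted(range(len(indices)), key=lambda j: indices[j])
```

For $q = 100$ and $n = 3$ the distinct fraction already exceeds $0.97$, so for $n$ a small constant and $q$ large the restriction to distinct queries is indeed insignificant.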
\begin{thm}[Measure-and-reprogram, multiple inputs]\label{thm:multiplemar}
Let $n$ be a positive integer, and let ${\cal X},{\cal Y}$ be finite non-empty sets.
There exists a black-box polynomial-time $(n\!+\!1)$-stage quantum algorithm $\cal S$ with the syntax as outlined at the start of this section, satisfying the following property.
Let $\cal A$ be an arbitrary oracle quantum algorithm that makes $q$ queries to a uniformly random $H: {\cal X}\rightarrow {\cal Y}$ and that outputs a tuple ${\mathbf x} \in {\cal X}^n$ and a (possibly quantum) output~$z$. Then,
for any $\mathbf x^\circ\in {\cal X}^n$ \emph{without duplicate entries} and for any predicate $V$:
\begin{align*}
\Pr_{{{\mathbf \Theta}}}\bigr[{\mathbf x}\!=\! \mathbf x^\circ& \wedge V({\mathbf x},{{\mathbf \Theta}},z) : (\pi,\pi({\mathbf x}),z) \leftarrow \langle{\cal S}^{\cal A} , \pi({\mathbf \Theta})\rangle\bigl]
\\ &
\geq \frac{1}{(2q+1)^{2n}} \Pr_H\bigl[{\mathbf x}\!=\! \mathbf x^\circ \wedge V({\mathbf x},H(\mathbf{x}),z) : ({\mathbf x},z) \leftarrow {\cal A}^{H} \bigr] \, .
\end{align*}
\end{thm}
\begin{proof}
We consider the inequality of Lemma~\ref{lem:mainresultmulti} with the expectation over $\bf r$ understood as in Remark~\ref{rem:disjointslots}.
Additionally taking the expectation over $H$ and ${\mathbf \Theta}$ on both sides, we obtain
\begin{align*}
&\E_{H,\mathbf{\Theta}}\left[\frac{\big\|\big(\proj{\mathbf x}\otimes\Pi_{{\mathbf x, \mathbf \Theta}}\big){\cal A}^{H*{\mathbf \Theta \mathbf x}}\ket{\phi_0}\big\|_2^2}{(2q+1)^{2n}}\right]\leq \E_{H,\mathbf{\Theta},\mathbf r}\left[\big\|\big(\proj{\mathbf x}\otimes\Pi_{{\mathbf x, \mathbf\Theta}}\big)\mathcal{S}_{\mathbf r}^H({\cal A})\ket{\phi_0}\big\|_2^2\right]
\end{align*}
and note that this is equivalent to
\begin{align*}
&\E_{H}\left[\frac{\big\|\big(\proj{\mathbf x}\otimes\Pi_{{\mathbf x, H(\mathbf{x})}}\big){\cal A}^{H}\ket{\phi_0}\big\|_2^2}{(2q+1)^{2n}}\right]\leq \E_{H,\mathbf{\Theta},\mathbf r}\left[\big\|\big(\proj{\mathbf x}\otimes\Pi_{{\mathbf x, \mathbf\Theta}}\big)\mathcal{S}_{\mathbf r}^H({\cal A})\ket{\phi_0}\big\|_2^2\right].
\end{align*}
This holds since all values $\Theta_j$ and $H(x_j)$ have the same distribution. The term $\mathcal{S}_{\mathbf r}^H({\cal A})\ket{\phi_0} = \mathcal{S}_{{\mathbf \Theta},\mathbf x,{\mathbf r}}^{H,{\cal A}}\ket{\phi_0}$ corresponds to the output of the simulator that uses oracle access to $H$ to run ${\cal A}$ on the initial state $\ket{\phi_0}$, measuring query $i_j$ (with outcome $x_j$) and reprogramming the oracle at $x_j$ to $\Theta_j$ from the $(i_j+b_j)$-th query onwards, where $(i_j,b_j)=r_j$.
Next, we note that the value of the right hand side does not change \cite{Zhandry2012a} when instead of giving ${\cal S}$ oracle access to $H$, we let it choose a random instance from a family of $2q$-wise\footnote{It is easy to see that the result of \cite{Zhandry2012a} also holds for controlled-query algorithms. Alternatively, the $q$ controlled queries can be simulated using $q+1$ plain queries, and a $2(q+1)$-wise independent function can be used.} independent hash functions to simulate ${\cal A}$ on.
The choice of ${\mathbf r}$ uniquely determines the permutation $\pi$ with the property \smash{$i_{\pi(1)} < \cdots < i_{\pi(n)}$}; by definition of $\mathcal{S}_{{\mathbf \Theta},\mathbf x,{\mathbf r}}^{H,{\cal A}}$, the values ${\mathbf x} = (x_1,\ldots, x_n)$ are then extracted from the adversary's queries in the order \mbox{$\pi(\mathbf x) = (x_{\pi(1)},\ldots, x_{\pi(n)})$}.
Since ${\cal S}$ chooses this $\mathbf{r}$ itself, we can assume that it includes $\pi$ in its output. Likewise, the simulator takes as input to every stage\,---\,from the second to the \mbox{$(n\!+\!1)$-st}\,---\,a fresh random value, in the order given by $\pi(\mathbf{\Theta})$. However, by definition of $\Pi_{{\mathbf x, \mathbf \Theta}}$ the final output of the simulator satisfies the predicate $V$ with respect to the given order (without $\pi$), i.e. such that $V(\mathbf x,{\mathbf \Theta},z) = 1$, as is the claim of the theorem.
\qed\end{proof}
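The $2q$-wise independent family invoked in the proof can be instantiated, as is standard, by evaluating a uniformly random polynomial of degree $<k$ over a prime field; a sketch with illustrative parameters (the construction is classical folklore, not code from the paper):

```python
import random

def sample_kwise_hash(k, p, rng=random):
    """Return a member of a k-wise independent hash family over GF(p):
    evaluation of a uniformly random polynomial of degree < k.
    For a q-query simulation one would take k = 2q, or k = 2(q+1)
    as suggested in the footnote."""
    coeffs = [rng.randrange(p) for _ in range(k)]  # c_0, ..., c_{k-1}
    def h(x):
        acc = 0
        for c in reversed(coeffs):  # Horner's rule modulo p
            acc = (acc * x + c) % p
        return acc
    return h
```

On any $k$ distinct inputs the outputs of a random member of this family are uniform and independent, which is all that is needed to simulate a random oracle for a $k/2$-query algorithm.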
\subsection{The time-ordered case}\label{sec:EnforceOrder}
In some applications, like the multi-round version of the Fiat-Shamir transformation, we need the simulator to extract the messages in the right order. This can be achieved by replacing the hash {\em list} $H({\bf x}) = \big(H(x_1),\ldots,H(x_n)\big)$, consisting of individual hashes, by a hash {\em chain}, where subsequent hashes depend on previous hashes. Intuitively, this forces $\cal A$ to query the oracle in the given order.
Formally, considering a function $H: ({\cal X}_0 \cup {\cal Y}) \times {\cal X}\rightarrow {\cal Y}$ and given a tuple ${\mathbf x} = (x_0,x_1,\ldots,x_n)$ in ${\cal X}_0 \times {\cal X}^n$, we define the {\em hash chain} $\mathbf{h}^{H,\mathbf{x}} = \big(h_1^{H,\mathbf{x}},\ldots,h_n^{H,\mathbf{x}}\big)$ given by
$$
h_1^{H,\mathbf{x}}:=H(x_0,x_1)
\qquad\text{and}\qquad
h_i^{H,\mathbf{x}} := H\big(h_{i-1}^{H,\mathbf{x}},x_i\big)
$$
for $2\leq i\leq n$.
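Instantiated with a concrete hash function (SHA-256 as a stand-in for the random oracle, and a length-prefixed encoding of our choosing to keep inputs unambiguous), the chain looks as follows:

```python
import hashlib

def H(*parts):
    """Toy random-oracle stand-in based on SHA-256 (an assumption for
    illustration only; the paper's H is a uniformly random function)."""
    h = hashlib.sha256()
    for part in parts:
        data = part if isinstance(part, bytes) else str(part).encode()
        h.update(len(data).to_bytes(4, "big") + data)  # length-prefixed
    return h.digest()

def hash_chain(x):
    """h_1 = H(x_0, x_1) and h_i = H(h_{i-1}, x_i) for 2 <= i <= n."""
    prev, chain = x[0], []
    for xi in x[1:]:
        prev = H(prev, xi)
        chain.append(prev)
    return chain
```

Since each link depends on the previous one, an adversary cannot know $h_i$ before having queried (essentially) $h_{i-1}$, which is exactly the ordering the next theorem exploits.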
\begin{thm}[Measure-and-reprogram, enforced extraction order]\label{thm:enforced}
Let $n$ be a positive integer, and let ${\cal X}_0,{\cal X}$ and ${\cal Y}$ be finite non-empty sets.
There exists a black-box polynomial-time $(n\!+\!1)$-stage quantum algorithm ${\cal S}$, satisfying the following property.
Let $\cal A$ be an arbitrary oracle quantum algorithm that makes $q$ queries to a uniformly random $H: ({\cal X}_0\cup {\cal Y}) \times {\cal X}\rightarrow {\cal Y}$ and that outputs a tuple ${\mathbf x} = (x_0,x_1,\ldots,x_n) \in \left({\cal X}_0\times{\cal X}^n\right)$ and a (possibly quantum) output~$z$.
Then, for any $\mathbf{x}^\circ\in ({\cal X}_0\times {\cal X}^n)$ without duplicate entries and for any predicate $V$:
\begin{align*}
&\Pr_{{{\mathbf \Theta}}}\bigr[\mathbf{x}\!=\! \mathbf{x}^\circ \wedge V({\mathbf x},{{\mathbf \Theta}},z) : ({\mathbf x},z) \leftarrow \langle{{\cal S}^A} , {\mathbf \Theta}\rangle\bigl]
\\ &
\geq \frac{n!}{(2(q+n)+1)^{2n}} \Pr_H\bigl[\mathbf{x}\!=\! \mathbf{x}^\circ\wedge V({\mathbf x},\mathbf{h}^{H,\mathbf{x}},z) : ({\mathbf x},z) \leftarrow {\cal A}^{H} \bigr]-\epsilon_{\mathbf x^\circ}\, ,
\end{align*}
where the error terms $\epsilon_{\mathbf x^\circ} \geq 0$ satisfy $\sum_{\mathbf x^\circ}\epsilon_{\mathbf x^\circ} = \frac{n!}{|{\cal Y}|}$.
\end{thm}
\begin{rem}\label{rem:decrease-additive-error}
The additive error term $n!/|{\cal Y}|$ stems from the fact that the extraction in the right order fails if $\cal A$ succeeds in guessing one (or more) of the hashes in the hash chain. The claimed term can be improved to $(n-1)^2/|{\cal Y}| + n!/|{\cal Y}|^2$ by a more fine-grained analysis that distinguishes permutations $\pi \neq \mathrm{id}$ according to how many elements they bring ``out of order''. In any case, the error can be made arbitrarily small by extending the range $\cal Y$ of $H$ used for computing the hash chain.
\end{rem}
\begin{proof}
First, we note that $V({\mathbf x},\mathbf{h}^{H,\mathbf{x}},z)= V'(\mathbf{v},H(\mathbf{v}),z)$ for ${\bf v} = (v_1,\ldots,v_n)$ given by $v_1 = (x_0,x_1)$ and $v_i = \big(h_{i-1}^{H,\mathbf{x}},x_i\big) = \big(H(v_{i-1}),x_i\big)$ for $i\geq 2$, and $V'(\mathbf{v},\mathbf{h},z) := \big[\,V(\mathbf{x},\mathbf{h},z) \,\wedge\, h'_{i} \!=\! h_{i-1} \forall i\geq 2 \,\big]$ for any $\bf v$ of the form $v_1 = (x_0,x_1)$ and $v_i = \big(h'_i,x_i\big)$ for $i\geq 2$. Next, at the cost of $n$ additional queries, we can extend $\cal A$ to an algorithm ${\cal A}_+$ that actually outputs $(\mathbf v,z)$, since ${\cal A}_+$ can easily obtain the $H(v_i)$'s by making $n$ queries to $H$. These observations together give
\begin{align*}
\Pr_H\bigl[\mathbf{x}\!=\! \mathbf x^\circ& \wedge V({\mathbf x},\mathbf{h}^{H,\mathbf{x}},z) : ({\mathbf x},z) \leftarrow {\cal A}^{H} \bigr]
=
\Pr_H\bigl[\mathbf x\!=\! \mathbf x^\circ \wedge V'(\mathbf{v},H(\mathbf{v}),z) : ({\mathbf v},z) \leftarrow {\cal A}_+^{H} \bigr] \, .
\end{align*}
Let $\mathbf v^\circ = (v_1^\circ,\ldots,v_n^\circ)$ with $v_i^\circ := (h^\circ_i,x^\circ_i)$, where $h_1^\circ = x^\circ_0$ and $h_i^\circ \in {\cal Y}$ is arbitrary but fixed for $i \geq 2$. Let $\mathbf\Theta$ be uniformly random in ${\cal Y}^n$. An application of Theorem \ref{thm:multiplemar} yields a simulator $\hat{\cal S}$ with
\begin{align*}
\Pr_{{{\mathbf \Theta}}}\bigr[{\mathbf v}\!=\! \mathbf v^\circ& \wedge V'({\mathbf v},{{\mathbf \Theta}},z) : (\pi,\pi({\mathbf v}),z) \leftarrow \langle{\hat{\cal S}}^{\cal A_+} , \pi({\mathbf \Theta})\rangle\bigl]
\\ &
\geq \frac{1}{(2(q+n)+1)^{2n}} \Pr_H\bigl[{\mathbf v}\!=\! \mathbf v^\circ \wedge V'({\mathbf v},H(\mathbf{v}),z) : ({\mathbf v},z) \leftarrow {\cal A}_+^{H} \bigr] \, .
\end{align*}
Summing both sides of the inequality over $h_i^\circ$ for $i\geq 2$ yields
\begin{align}
\begin{split}
\Pr_{{{\mathbf \Theta}}}\bigr[{\mathbf x}\!=\! \mathbf x^\circ& \wedge V'({\mathbf v},{{\mathbf \Theta}},z) : (\pi,\pi({\mathbf v}),z) \leftarrow \langle{\hat{\cal S}}^{\cal A_+} , \pi({\mathbf \Theta})\rangle\bigl]
\\ &
\geq \frac{1}{(2(q+n)+1)^{2n}} \Pr_H\bigl[{\mathbf x}\!=\! \mathbf x^\circ \wedge V'({\mathbf v},H(\mathbf{v}),z) : ({\mathbf v},z) \leftarrow {\cal A}_+^{H} \bigr]
\\ &
= \frac{1}{(2(q+n)+1)^{2n}} \Pr_H\bigl[\mathbf{x}\!=\! \mathbf x^\circ \wedge V({\mathbf x},\mathbf{h}^{H,\mathbf{x}},z) : ({\mathbf x},z) \leftarrow {\cal A}^{H} \bigr] \, .\end{split}\label{eq:plainapplication}
\end{align}
Recalling its construction, the simulator ${\hat{\cal S}}^{\cal A_+}$ begins by sampling a uniformly random permutation $\pi$, so we can write
\begin{align}
\begin{split}
\Pr_{{{\mathbf \Theta}}}\bigr[&{\mathbf x}\!=\! \mathbf x^\circ \wedge V'({\mathbf v},{{\mathbf \Theta}},z) : (\pi,\pi({\mathbf v}),z) \leftarrow \langle{\hat{\cal S}}^{\cal A_+} , \pi({\mathbf \Theta})\rangle\bigl]
\\ &
=\frac 1 {n!}\sum_{\sigma\in S_n}\Pr_{{\mathbf \Theta}}\bigr[{\mathbf x}\!=\! \mathbf x^\circ \wedge V'({\mathbf v},{{\mathbf \Theta}},z): (\pi,\pi(\mathbf v),z) \leftarrow \langle{\hat{\cal S}}^{\cal A_+} , {\pi(\mathbf\Theta)}\rangle\big|\pi=\sigma\bigl] \, .
\end{split}
\label{eq:permsum}
\end{align}
By definition, the predicate $V'({\mathbf v},\mathbf{\Theta},z)$ (with $\mathbf{v}$ of the form explained above) is false whenever there exists an $i\geq 2$ such that $h_i\neq \Theta_{i-1}$. Now suppose that $\pi\neq\mathrm{id}$; then there must be some $j$ such that $\pi(j)<\pi(j-1)$. This implies that the first $\pi(j)$ stages of $\hat{\cal S}^{\cal A_+}$, which together (in the $\pi(j)$-th stage) produce $v_j=(h_j,x_j)$, are independent of $\Theta_{j-1}$, since $\Theta_{j-1}$ is given as input only at the {\em later} stage $\pi(j-1)$. We thus have the following, taking it as understood, here and in the sequel, that the random variables $\pi,\mathbf{v},\mathbf{\Theta}$ and $z$ are as in~(\ref{eq:permsum}).
\begin{align*}
\Pr\bigl[{\mathbf x}\!=\! \mathbf x^\circ \wedge V'({\mathbf v},{{\mathbf \Theta}},z)\big|\pi\neq\mathrm{id}\bigr]
&\le \Pr\bigl[{\mathbf x}\!=\! \mathbf x^\circ \wedge h_j=\Theta_{j-1}|\pi\neq\mathrm{id}\bigr]
=\frac{\Pr\bigl[{\mathbf x}\!=\! \mathbf x^\circ|\pi\neq\mathrm{id}\bigr]}{|{\cal Y}|} \, .
\end{align*}
Using Equation \eqref{eq:permsum}, we can bound
\begin{align*}
\frac 1 {n!}\sum_{\sigma\in S_n}\Pr\bigr[{\mathbf x}\!=\! \mathbf x^\circ \wedge V'({\mathbf v},{{\mathbf \Theta}},z)\big|\pi\!=\!\sigma\bigl]
&\leq \frac 1 {n!}\Pr\bigr[{\mathbf x}\!=\! \mathbf x^\circ \wedge V'({\mathbf v},{{\mathbf \Theta}},z)\big|\pi\!=\!\mathrm{id}\bigl]
+\frac{\Pr\bigl[{\mathbf x}\!=\! \mathbf x^\circ|\pi\!\neq\!\mathrm{id}\bigr]}{|{\cal Y}|} \, .
\end{align*}
We note that by definition of $V'$,
\begin{align*}
\Pr\bigr[{\mathbf x}\!=\! \mathbf x^\circ \wedge V({\mathbf x},{{\mathbf \Theta}},z)\big|\pi=\mathrm{id}\bigl] &\geq
\Pr\bigr[{\mathbf x}\!=\! \mathbf x^\circ \wedge V'({\mathbf v},{{\mathbf \Theta}},z)\big|\pi=\mathrm{id}\bigl]\ .
\end{align*}
Furthermore, we may define a new simulator ${\cal S}$ which takes oracle access to $\cal A$ and turns it into $\cal A_+$, and always chooses $\pi=\mathrm{id}$ instead of a random permutation. Where $\hat{\cal S}$ would output $(\mathbf v,z)$, ${\cal S}$ ignores the $\mathbf{h}$-part of $\mathbf{v}$ and simply outputs $(\mathbf x,z)$. We then have
\begin{align*}
\Pr_{{{\mathbf \Theta}}}\bigr[&\mathbf{x}\!=\! \mathbf{x}^\circ \wedge V({\mathbf x},{{\mathbf \Theta}},z) : ({\mathbf x},z) \leftarrow \langle{{\cal S}^A} , {\mathbf \Theta}\rangle\bigl]
\\ &
\geq \frac{n!}{(2(q+n)+1)^{2n}} \Pr_H\bigl[\mathbf{x}\!=\! \mathbf x^\circ \wedge V({\mathbf x},\mathbf{h}^{H,\mathbf{x}},z) : ({\mathbf x},z) \leftarrow {\cal A}^{H} \bigr] -\epsilon_{\mathbf x^\circ}\, ,
\end{align*}
with $\epsilon_{\mathbf x^\circ}$ given by $\epsilon_{\mathbf x^\circ}:= n!\cdot\Pr_{\mathbf \Theta}\bigl[{\mathbf x}= \mathbf x^\circ|\pi\neq\mathrm{id}\bigr]/|{\cal Y}|$.
\qed\end{proof}
\section{The multi-round Fiat-Shamir transformation}\label{sec:mFS}
A straightforward generalization of the Fiat-Shamir transformation applies to arbitrary (i.e., multi-round) public-coin interactive proof systems (PCIPs). Here we prove security of this multi-round Fiat-Shamir transformation in the QROM.
\subsection{Public coin interactive proofs and multi-round Fiat-Shamir}
We begin by defining PCIPs, mainly to fix notation, and the corresponding multi-round Fiat-Shamir transformation.
\begin{defi}[Public coin interactive proof system (PCIP)]
A $(2n\!+\!1)$-round public coin interactive proof system (PCIP) $\mathsf{\Pi} = ({\cal P}, {\cal V})$ for a
language $\mathcal{L}$ is a $(2n\!+\!1)$-round two-party interactive protocol of the following form, where $\cal C$ is a finite non-empty set and $V$ is a predicate:
\begin{empheq}[box=\widefbox]{align*}
&\underline{\text{Prover } {\cal P}(x)}&&&&\underline{\text{Verifier } {\cal V}(x)}\\
&&&\overset{a_1}{\longrightarrow}&\\
&&&\overset{c_1}{\longleftarrow}&&c_1\overset{\,\$}{\leftarrow} {\cal C} \\
&&&\quad\vdots&&\\
&&&\overset{a_n}{\longrightarrow}&\\
&&&\overset{c_n}{\longleftarrow}&&c_n\overset{\,\$}{\leftarrow} {\cal C} \\
&&&\overset{z}{\longrightarrow}&&\textup{Accept iff } V(x,a_1,c_1,...,a_n,c_n,z) = 1
\end{empheq}\switch{\vspace{-1.5ex}}{}
\end{defi}
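The interaction pattern of the definition can be sketched as a small driver (names and structure are ours; `prover` is a stateful callable returning the next message):

```python
import secrets

def run_pcip(x, prover, V, challenge_space, n):
    """Execute a (2n+1)-round public-coin protocol: n rounds of
    (prover message a_i, uniform challenge c_i), a final message z,
    and acceptance iff V(x, a_1, c_1, ..., a_n, c_n, z) holds."""
    transcript, c = [], None
    for _ in range(n):
        a = prover(c)                        # a_i may depend on past challenges
        c = secrets.choice(challenge_space)  # public coin: c_i <-$- C
        transcript += [a, c]
    z = prover(c)                            # final prover message z
    return V(x, *transcript, z)
```

The only structural property used later is that the verifier's messages are fresh uniform challenges, which is what allows them to be replaced by hash values.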
\begin{rem}
If the language $\mathcal L$ is defined by means of an (efficiently verifiable) witness relation $R \subseteq {\cal X} \times {\cal W}$, then the prover typically gets a witness $w$ for $x$ as an additional input. We then also say that $\mathsf \Pi$ is a PCIP {\em for the relation $R$}.
In case of a $(2n\!+\!1)$-round PCIP $\mathsf{\Pi}$ for a witness relation $R$ that is {\em hard on average}, meaning that there exists an instance generator $\ensuremath{\mathsf{Gen}}\xspace$ such that $(w,x) \leftarrow \ensuremath{\mathsf{Gen}}\xspace$ satisfies $(w,x) \in R$ while it is computationally hard to find a witness $w$ with $(w,x) \in R$ given $x$ alone, $\mathsf{\Pi}$ is also called an {\em identification scheme}.
\end{rem}
Just as in the ordinary Fiat-Shamir transformation, the interaction used to enforce the time order between the prover committing to the message $a_i$ and receiving the challenge $c_i$ can be replaced by a hash function. In addition, we can include the previous challenge (i.e., the previous hash value) in the hash input determining the next challenge, in order to enforce the ordering of the $n$ pairs $(a_i, c_i)$ according to increasing $i$. We thus obtain the following non-interactive proof system.
\begin{defi}[Fiat-Shamir transformation for general PCIP (mFS)]\label{def:mFS}\ \\
Given a $(2n\!+\!1)$-round PCIP $\mathsf{\Pi} = ({\cal P}, {\cal V})$ for a
language $\mathcal{L}$ and a hash function $H$ with appropriate domain, and with range equal to $\cal C$, we define the non-interactive proof system $\mathsf{FS[\Pi]}= ({\cal P}^H_{FS}, {\cal V}^H_{FS})$ as follows. The prover ${\cal P}^H_{FS}$ outputs
\begin{align*}
(x,a_1,...,a_n,z)&\leftarrow {\cal P}^H_{FS}
\end{align*}
where $z$ and $a_i$ for $i=1,...,n$ are computed using $\cal P$, and the challenges are computed as
\begin{align*}
c_1&=H(0,x,a_1)\text{ and} \\
c_i&=H(i-1,c_{i-1},a_i) \text{ for } i=2,...,n \, .
\end{align*}
The verifier ${\cal V}^H_{FS}$ outputs `accept' iff $V(x,a_1,c_1,...,a_n,c_n,z) = 1$ for $c_1=H(0,x,a_1)$ and $c_i=H(i-1,c_{i-1},a_i)$, $i=2,...,n$; this is denoted by $V^H_{FS}(x,a_1,...,a_n,z) = 1$.
\end{defi}
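For concreteness, the challenge derivation of the transformation just defined can be sketched in Python as follows. This is an illustrative sketch only: SHA-256 stands in for the random oracle $H$, the challenge space $\cal C$ becomes the set of 32-byte strings, and the encoding choices and function names are ours, not part of the scheme specification.

```python
import hashlib

def H(*parts):
    # Stand-in for the random oracle H; length-prefixing each part makes
    # the byte encoding of the input tuple injective.
    h = hashlib.sha256()
    for p in parts:
        b = p if isinstance(p, bytes) else str(p).encode()
        h.update(len(b).to_bytes(4, "big") + b)
    return h.digest()

def fs_challenges(x, messages):
    """c_1 = H(0, x, a_1) and c_i = H(i-1, c_{i-1}, a_i) for i = 2, ..., n."""
    cs = []
    for i, a_i in enumerate(messages, start=1):
        cs.append(H(0, x, a_i) if i == 1 else H(i - 1, cs[-1], a_i))
    return cs
```

Because each $c_i$ hashes the previous challenge, modifying any earlier message $a_j$ changes all subsequent challenges; this chaining is how the hash function replaces the interaction in enforcing the time order.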
\begin{rem}\label{rem:prefixes}
The challenge number $i$ (minus 1) is included in the hash input to ensure that the challenges are generated using distinct inputs to $H$ with probability 1. This is to enable us to apply Theorem \ref{thm:enforced}, which only holds for duplicate-free lists of hash inputs. In fact, any additional strings can be included in the argument when computing $c_i$ using $H$, without influencing the security properties of the non-interactive proof system in a detrimental way. In the literature one sometimes sees that the entire previous transcript is hashed (in which case the counter number $i$ may then be omitted).
\end{rem}
\subsection{General security of multi-round Fiat-Shamir in the QROM}
When constructing a reduction for mFS, the reduction participates as a prover in the underlying PCIP and is hence provided with random challenges only one at a time. We thus need the special simulator from Theorem \ref{thm:enforced}, which always outputs the corresponding messages in the right order. The success of this simulator rests on the very essence of the Fiat-Shamir transformation, namely that the intractability of the hash function takes over the role of the interaction in enforcing a time order on the transcript of the PCIP.
The security of the multi-round Fiat-Shamir transformation follows as a simple corollary of Theorem \ref{thm:enforced}.
\begin{cor}\label{cor:mFS}
There exists a black-box quantum polynomial-time $(n\!+\!1)$-stage quantum algorithm $\cal S$ such that for any adaptive adversary $\cal A$ against the multi-round Fiat-Shamir transformed version $\mathsf{FS[\Pi]}$ of a $(2n\!+\!1)$-round PCIP $\mathsf\Pi$, making $q$ queries to a uniformly random function $H$ with appropriate domain and range equal to $\cal C$, and for any $x^\circ\in {\cal X}$:
\begin{align*}
\Pr\bigl[&x = x^\circ\wedge v = accept :(x,v) \leftarrow \langle{\cal S}^{\cal A} , {\cal V}\rangle\bigr]
\switch{\;}{\\ &}
\geq \frac{n!}{(2q+n+1)^{2n}} \Pr_H\bigl[x = x^\circ \wedge V^H_{FS}(x,\pi)=1 : (x,\pi) \leftarrow {\cal A}^H \bigl] -\epsilon_{x^\circ}\, ,
\end{align*}
where the additive error terms $\epsilon_{x^\circ}$ satisfy $\sum_{x^\circ} \epsilon_{x^\circ} = \frac{n!}{|{\cal C}|}$.
\end{cor}
\begin{proof}
We may simply set $\mathbf{x^\circ} = (x^\circ,(0,a_1),\ldots,(n-1,a_n))$ for arbitrary $a_1,\ldots,a_n$, apply Theorem \ref{thm:enforced}, and then sum over all choices of $a_1,\ldots,a_n$ to obtain the claimed inequality. Note that the round indices ensure that every such $\mathbf{x^\circ}$ is duplicate-free, satisfying the corresponding requirement of Theorem \ref{thm:enforced}.
\end{proof}
Note that the additive error terms reflect the fact that the random oracle only \emph{approximately} succeeds in enforcing the original time order in the transcript of the PCIP. However, this error can be made arbitrarily small, as discussed below.
\begin{rem}\label{rem:extendC}
There exist PCIPs with soundness error much smaller than $1/|\mathcal C|$. As an example, consider the sequential repetition of a \textSigma-protocol\xspace with special soundness. Here, the soundness error is $1/|\mathcal C|^n$. In this case, the term proportional to $1/|\mathcal C|$ renders the bound from the above theorem trivial. Note, however, that (i) this situation is extremely artificial, as there is absolutely no reason to repeat sequentially instead of in parallel, and (ii) the additive error term can be made arbitrarily small by considering a variant $\mathsf\Pi'$ of $\mathsf\Pi$ where the random challenges are enlarged with a certain number of bits that are ignored otherwise, see Remark \ref{rem:decrease-additive-error}.
In fact, we suspect that the observation from (i) is true in a much broader sense: if a PCIP still has negligible soundness error when the adversary is allowed to learn one of the challenges $c_i$ before sending the corresponding commitment-type message $a_i$, it seems that the number of rounds can be reduced, and the loss in soundness error can be recovered by parallel repetition.
\end{rem}
As for the case of the Fiat-Shamir transformation for \textSigma-protocols\xspace, the general reduction implies that security properties that protect against dishonest provers carry over from the interactive to the non-interactive proof system. For a definition of the properties considered in the following theorem, see, e.g., \cite{DFMS19}. The quantum proof-of-knowledge property was introduced in \cite{Unruh2012}.
\begin{cor}[Preservation of Soundness/PoK]\label{cor:PresSoundPoK}
Let $\mathsf{\Pi}$ be a constant-round PCIP that has (statistical/computational) soundness, and/or the (statistical/computational) quantum proof-of-knowledge-property, respectively.
Then, in the QROM, $\mathsf{FS[\Pi]}$ has (statistical/computational) soundness, and/or the (statistical/computational) quantum proof-of-knowledge-property, too.
\end{cor}
\begin{proof}
Corollary \ref{cor:mFS} turns any dishonest prover ${\cal A}_{\mathsf{FS[\Pi]}}$ for $\mathsf{FS[\Pi]}$ with success probability $\epsilon$ into a dishonest prover ${\cal A}_{\mathsf{\Pi}}$ for $\mathsf{\Pi}$ with success probability $\frac{n!}{(2q+n+1)^{2n}}\,\epsilon$, up to the additive term $\frac{n!}{|{\cal C}|}$, where $2n+1$ is the number of rounds in $\mathsf{\Pi}$. Since $n$ is constant and $q$ is polynomial in the security parameter, the success probabilities of the respective provers are polynomially related. The claimed implications now follow using the same arguments as in Corollaries 13 and 16 in \cite{DFMS19}.
\qed\end{proof}
\section{Tightness of the reductions}
Here, we show tightness of our results. We start by proving tightness of Theorems \ref{thm:main} and \ref{thm:FS} (up to essentially a factor $4$). This implies that an $O(q^2)$-loss is unavoidable in general. Indeed, the following result shows that for a large and natural class of \textSigma-protocols\xspace $\mathsf{\Sigma}$, there exists an attack against $\mathsf{FS[\Sigma]}$ that succeeds with a probability $q^2$ times larger than the best attack against $\mathsf{\Sigma}$. The attack is based on an application of Grover's quantum algorithm for unstructured search.
To our surprise, we could not find in the literature an analysis of Grover's algorithm in the regime we require. Grover search has been analyzed in the case of an unknown number of solutions \cite{BBGT98}, but the focus of that work is on the expected number of queries required to find a solution, while we analyze the probability with which the Grover search algorithm succeeds after a \emph{fixed but arbitrary} number of queries.
\begin{thm}\label{thm:qsqauredboost}
Let ${\cal L}$ be a language, and let $\mathsf{\Sigma}$ be a \textSigma-protocol\xspace for ${\cal L}$ with challenge set ${\cal C}$, special soundness and perfect honest-verifier zero-knowledge. Furthermore, we assume that the triples $(a,c,z)$ produced by the simulator ${\cal S}_{\mathrm{ZK}}(x)$ are always accepted by the verifier even for instances $x \not\in \cal L$, and that $a$ has min-entropy $\gamma$.%
\footnote{These additional assumptions on the simulator could be avoided, but they simplify the proof. Furthermore, for typical \textSigma-protocols\xspace they are satisfied. In particular, the simulated transcripts for hard instances are accepted by the verifier with high probability, as otherwise the two polynomial-time algorithms could be used to solve the hard instances, a contradiction. } Then for any $q$ such that $(q^2+1)\cdot e^2\cdot (5q)^6 < |{\cal C}|$ and $2^\gamma/(5q)^3 > 2$, there exists a $q$-query dishonest prover that succeeds with probability $q^2/|{\cal C}|$ in producing a valid $\mathsf{FS[\Sigma]}$-proof for an instance $x \not\in \cal L$.
\end{thm}
The idea of the attack against $\mathsf{FS[\Sigma]}$ is quite simple. For a \textSigma-protocol\xspace that is {\em special} honest-verifier zero-knowledge, meaning that the simulation works by first sampling the challenge $c$ and the response $z$ and then computing a fitting first message $a$ as a function $a(c,z)$, one simply does a Grover search to find a pair $(c,z)$ for which $H\bigl(x,a(c,z)\bigr) = c$. For a typical $H$, this gives a quadratic improvement over the classical search, which, for a random $H$, succeeds with probability $q/|{\cal C}|$ (due to the special soundness). A subtle issue is that, for some (unlikely) choices of $H$, there are actually {\em many} pairs $(c,z)$ for which $H\bigl(x,a(c,z)\bigr) = c$, in which case the Grover search ``overshoots''. In the formal proof below, this is dealt with by controlling the probability of $H$ having this (unlikely) property. Furthermore, the proof avoids assuming the {\em special} honest-verifier zero-knowledge property by doing the Grover search over the randomness of the simulator instead, which requires some additional caution.
\begin{rem}\label{rem:AttackExt}
It is not hard to see that Theorem~\ref{thm:qsqauredboost} still holds in the following two variations of the statement. (1) $H(x,a)$ is random and independent for different choices of $a$, but is {\em not} necessarily independent for different choices of $x$. (2) The \textSigma-protocol\xspace $\mathsf\Sigma$ is replaced by ${\mathsf\Sigma}'$, which has its challenge enlarged with a certain number of bits that are ignored otherwise, in line with Remark~\ref{rem:extendC}, and $\mathsf{FS[\Sigma']}$ then uses an $H$ with a correspondingly enlarged range.%
\footnote{While (1) follows by inspecting the proof, (2) holds more generically: the dishonest prover attacking $\mathsf{FS[\Sigma']}$ simply runs the prover attacking $\mathsf{FS[\Sigma]}$ but enlarges the output register of the hash queries, with the corresponding state being set to be the fully mixed state in each query, and then dismisses these additional qubits again.}
\end{rem}
\begin{proof}
Let ${\cal S}_{\mathrm{ZK}}$ be the zero-knowledge simulator given by the perfect honest-verifier zero-knowledge property of $\mathsf{\Sigma}$. Consider an adversary $\mathcal{A}_{FS}$ against $\mathsf{FS[\Sigma]}$ that works as follows for an arbitrary instance $x\notin \mathcal{L}$:
\begin{itemize}
\item Define the function $f^H: R\rightarrow \{0,1\}$ (where $R$ is the set of random coins for ${\cal S}_{\mathrm{ZK}}$) as
$$
f^H(\rho) = \begin{cases}
1&\text{for }{\cal S}_{\mathrm{ZK}}(x;\rho)\rightarrow (a,c,z) \wedge H(x||a) = c
\\0&\text{otherwise}.
\end{cases}
$$
\item Use Grover's algorithm for $q$ steps to try to find $\rho$ such that $f^H(\rho) = 1$.
\item Run ${\cal S}_{\mathrm{ZK}}(x;\rho) \rightarrow (a,c,z)$ and output $(x,a||z)$.
\end{itemize}
Let $p_1^H$ be the fraction of random coins from $R$ that map to $1$ under $f^H$. Note that by the special soundness of $\mathsf\Sigma$, in any accepting triple $a$ determines $c$, and we thus have $\E_H[p_1^H] = \frac{1}{|\cal C|}$. By the standard analysis of Grover's algorithm, after $q$ iterations (requiring $q$ queries to $H$) the probability $p_2^H$ of finding such an input is $\sin^2((2q+1)\Theta^H)$, where $0\leq \Theta^H \leq \pi/2$ is such that $\sin^2(\Theta^H) = p_1^H$. Now, as long as $\Theta^H$ is not too large to begin with (i.e., as long as the Grover search does not `overshoot'), $p_2^H$ is approximately a factor $q^2$ larger than $p_1^H$. Our goal is to show that the improvement is at least $q^2$ also on average over $H$. To this end, we define $H_{\text{bad}} := \{H : p_1^H > \sin^2(\frac{\pi}{6q+3})\}$ and $H_{\text{good}}$ as its complement. Then,
\begin{align*}
\E_H[p_2^H] &= (1-\alpha)\cdot\E_{H}\left[p_2^H | H\in H_\text{good}\right] + \alpha\cdot \E_{H}\left[p_2^H|H\in H_{\text{bad}}\right]\\
&\geq (1-\alpha)\cdot\E_{H}\left[p_2^H|H\in H_\text{good}\right]
\end{align*}
where $\alpha=\Pr_H[H\in H_{\text{bad}}]$ and $1-\alpha = \Pr_H[H\in H_{\text{good}}]$.
We first bound $\E_{H}\left[p_2^H \mid H\in H_\text{good}\right]$. Let $H\in H_{\text{good}}$. We have $(2q+1)\Theta^H \leq \frac{\pi}{3}$. Since $\frac{\text{d}}{\text{d}\Theta}\sin(\Theta) = \cos(\Theta)\geq 1/2$ for $\Theta\in [0,\frac{\pi}{3}]$, and $\Theta \geq \sin(\Theta)$, it follows that
$$
\sin((2q+1)\cdot\Theta^H) \qquad\geq\qquad \sin(\Theta^H) + \frac{2q\cdot\Theta^H}{2}\qquad \geq\qquad (q+1)\cdot\sin(\Theta^H).
$$
Using $\sin(\Theta)\geq 0$ for $\Theta\in [0,\frac{\pi}{3}]$, we obtain
$$
p_2^H = \sin^2((2q+1)\cdot\Theta^H) \geq (q+1)^2\cdot\sin^2(\Theta^H) = (q+1)^2\cdot p_1^H.
$$
Therefore,
\begin{align}\label{eqn:p2}
\begin{split}
\E_H[p_2^H] \qquad & \geq \qquad\E_{H}\left[p_2^H | H\in H_\text{good}\right]\cdot \Pr_H[H\in H_{\text{good}}]\\
&\geq\qquad(q+1)^2\cdot\E_{H}\left[p_1^H | H\in H_\text{good}\right]\cdot \Pr_H[H\in H_{\text{good}}]\\
&\geq \qquad(q+1)^2\cdot\left(\E_H[p_1^H] - \Pr_H[H\in H_{\text{bad}}] \right).
\end{split}
\end{align}
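The amplification inequality established above, $\sin((2q+1)\Theta)\geq (q+1)\sin(\Theta)$ whenever $(2q+1)\Theta\leq\pi/3$, can be spot-checked numerically. The following sketch (not part of the proof; grid sizes and parameter values are arbitrary choices of ours) evaluates it on a grid of $q$ and $\Theta$:

```python
import math

# Numeric spot-check of the inequality
#   sin((2q+1)*theta) >= (q+1)*sin(theta)  whenever  (2q+1)*theta <= pi/3,
# i.e. the amplification used for H in H_good (no "overshooting").
def amplification_holds(q, theta):
    assert (2 * q + 1) * theta <= math.pi / 3
    return math.sin((2 * q + 1) * theta) >= (q + 1) * math.sin(theta)

checks = []
for q in (1, 2, 5, 10, 100):
    theta_max = (math.pi / 3) / (2 * q + 1)
    # stay strictly below theta_max to avoid floating-point boundary issues
    checks += [amplification_holds(q, k * theta_max / 50) for k in range(1, 50)]
print(all(checks))  # True
```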
Next, we bound $\alpha = \Pr_H[H\in H_{\text{bad}}] = \Pr_H[p_1^H > \sin^2(\frac{\pi}{6q+3})]$. Note that for $p_1^H$ to be large, it must be that for many first messages $a$, the hash value $H(x\|a)$ is the unique challenge $c$ for which there exists an accepting response. For a random $H$ this is unlikely to happen. Formally, we argue as follows, eventually using the Chernoff bound.
We first define the following equivalence relation:
$$
\rho \sim \rho' \;\text{ iff }\; {\cal S}_{\mathrm{ZK}}(x;\rho) = (a,c,z) \,\wedge\, {\cal S}_{\mathrm{ZK}}(x;\rho') = (a,c',z') \;\text{ for some } a,c,z,c',z', \quad\text{for }\rho,\rho'\in R.
$$
$R/_{\!\sim}$ then denotes the set of equivalence classes $[\rho] = \{\rho' \in R \,|\,\rho \sim \rho'\}$.
By the special soundness property and the assumptions on ${\cal S}_{\mathrm{ZK}}$, we have that $a$ determines $c$ in any accepting triple (remember that $x\notin {\cal L}$), and therefore $f^H$ is constant on the elements of any given equivalence class.
Thus, $f^H$ is well-defined as a function $f^H: R/_{\!\sim} \rightarrow \{0,1\}$.
For two distinct equivalence classes $[\rho]\neq [\rho']$, we have
$$
\Pr_H[f^H([\rho]) = 1 \wedge f^H([\rho']) = 1] = \Pr_H[f^H([\rho]) = 1]\cdot \Pr_H[f^H([\rho']) = 1] \, ,
$$
since $H(x||a)$ is chosen independently for different $a$. Finally, taking $X^H := \sum_{[\rho]} f^H([\rho])$ we have
\begin{align*}
p_1^H& = \Pr_\rho[f^H(\rho)=1] = \frac{\sum_{\rho} f^H(\rho)}{|R|}\\
& = \frac{\sum_{[\rho]} \left(f^H([\rho])\cdot |[\rho]|\right)}{|R|} \leq \frac{|[\rho_{\max}]|\cdot\sum_{[\rho]} f^H([\rho])}{|R|} \leq X^H \cdot 2^{-\gamma}
\end{align*}
where $[\rho_{\max}]$ is the equivalence class $[\rho]$ that maximizes $|[\rho]|$, and where the last inequality uses that $a$ has min-entropy $\gamma$, so that $|[\rho_{\max}]| \leq |R|\cdot 2^{-\gamma}$. It follows that
\begin{align*}
\alpha& = \Pr_H[p_1^H > \sin^2\left(\frac{\pi}{6q+3}\right)] \\
&\leq\Pr_H\left[X^H > \sin^2\left(\frac{\pi}{6q+3}\right)\cdot 2^\gamma\right]\leq \Pr_H\left[X^H > \frac{2^\gamma}{|{\cal C}|} + \frac{2^\gamma}{(5q)^3}\right]
\end{align*}
where we used $\sin^2(x)> x^3$ for $0\leq x \leq 0.80$ and $\frac{\pi}{6q+3} > \frac{1}{5q} + \sqrt[3]{\frac{1}{|{\cal C}|}}$ for $|{\cal C}| > (5q)^3$ in the last inequality.
By definition of $f^H$, for any $[\rho]$ we have $\Pr_H\left[f^H([\rho])=1\right]=\frac{1}{|{\cal C}|}$, hence
\begin{align*}
\E_{H}\left[X^H\right]= \sum_{[\rho]}\E_H[f^H([\rho])]=\sum_{[\rho]}\Pr_H[f^H([\rho]) =1] =\frac{|R/_{\!\sim}|}{|{\cal C}|}\geq \frac{2^\gamma}{|{\cal C}|}.
\end{align*}
We use the following Chernoff bound:
\begin{align*}
\Pr_H\left[X^H > (1+\delta)\cdot \E_{H}\left[X^H\right] \right] &< \left(\frac{e^{\delta}}{(1+\delta)^{1+\delta}}\right)^{\E_{H}\left[X^H\right]} < \left(\frac{e^{1+\delta}}{\delta^{1+\delta}}\right)^{\E_{H}\left[X^H\right]}\\
& = \left(\frac{e}{\delta}\right)^{\E_{H}\left[X^H\right]\cdot(1+\delta)}.
\end{align*}
Setting $\delta:=\frac{|{\cal C}|}{(5q)^3}$ and combining this with the inequalities derived above leads to
\begin{align*}
\alpha \leq
\left(\frac{e\cdot (5q)^3}{|{\cal C}|}\right)^{\frac{2^\gamma}{|{\cal C}|} + \frac{2^\gamma}{(5q)^3}}
< \frac{e^2\cdot (5q)^6}{|{\cal C}|^2}< \frac{1}{|{\cal C}|\cdot (q^2+1)}
\end{align*}
where we used $\frac{2^\gamma}{(5q)^3} > 2$ in the second-to-last inequality, and $|{\cal C}| > (q^2+1)\cdot e^2\cdot (5q)^6$ in the last one.
Plugging this bound into Equation \ref{eqn:p2}, and using $(q+1)^2 \geq q^2+1$ together with $\E_H[p_1^H] = \frac{1}{|{\cal C}|}$, we get
$$
\E_{H}[p_2^H] \geq(q^2+1)\cdot \left(\frac{1}{|{\cal C}|}-\frac{1}{|{\cal C}|\cdot(q^2+1)}\right) = \frac{q^2+1}{|{\cal C}|} - \frac{1}{|{\cal C}|} = \frac{q^2}{|{\cal C}|}.
$$
Thus, the success probability of our adversary $\mathcal{A}_{FS}$ after making $q$ queries to $H$ is at least $\frac{q^2}{|{\cal C}|}$.
\qed\end{proof}
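The chain of inequalities bounding $\alpha$ in the proof above can be sanity-checked numerically. The following sketch picks illustrative values of $q$, $\gamma$ and $|{\cal C}|$ (our choices, not taken from the paper) that satisfy the theorem's conditions, and verifies each step of the chain:

```python
import math

# Sanity check of the inequality chain bounding alpha = Pr[H in H_bad],
# for illustrative parameters q, gamma, C := |challenge space| chosen
# by us to satisfy the conditions of the theorem.
q, gamma = 2, 12
C = 2 ** 26
assert C > (q ** 2 + 1) * math.e ** 2 * (5 * q) ** 6   # condition on |C|
assert 2 ** gamma / (5 * q) ** 3 > 2                   # condition on gamma

base = math.e * (5 * q) ** 3 / C                       # e/delta for delta = |C|/(5q)^3
exponent = 2 ** gamma / C + 2 ** gamma / (5 * q) ** 3  # exponent in the Chernoff bound
chernoff_bound = base ** exponent
middle_bound = math.e ** 2 * (5 * q) ** 6 / C ** 2     # = base^2; base < 1, exponent > 2
final_bound = 1 / (C * (q ** 2 + 1))
print(chernoff_bound < middle_bound < final_bound)     # True
```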
The tightness of Corollary \ref{cor:mFS} follows from the above tightness result for the case of \textSigma-protocols\xspace in a fairly straightforward manner.
\begin{thm}\label{thm:qtothe2nboost}
For every positive integer $n$, there exists a $(2n\!+\!1)$-round PCIP $\mathsf{\Pi}$ with soundness error $\epsilon$ and challenge space $\mathcal C$ such that $|{\cal C}| \geq 1/\epsilon$, and such that there exists a $q$-query dishonest prover $\cal A$ against $\mathsf{FS(\Pi)}$ with success probability $n^{-2n}q^{2n}\epsilon$.
\end{thm}
Before proving the theorem, we show how it implies the tightness of Corollary \ref{cor:mFS}.
\begin{cor}
The security loss in the bound in Corollary \ref{cor:mFS} is optimal, up to a multiplicative factor that depends on $n$ only.
\end{cor}
\begin{proof}
Let ${\mathsf\Pi}$ be a PCIP as shown to exist in Theorem \ref{thm:qtothe2nboost}. Let $\epsilon_{\mathsf \Pi}$ and $\epsilon_{\mathsf{FS(\Pi)}}(q)$ be the soundness error of $\mathsf \Pi$ and that of its Fiat-Shamir transformation against $q$-query adversaries, respectively. By Theorem \ref{thm:qtothe2nboost},
\begin{equation}
\epsilon_{\mathsf{FS(\Pi)}}(q)\ge n^{-2n}q^{2n}\epsilon_{\mathsf \Pi}.
\end{equation}
Corollary \ref{cor:mFS}, on the other hand, yields
\begin{align}
\epsilon_{\mathsf \Pi}&\ge \frac{n!}{(2q+n+1)^{2n}}\epsilon_{\mathsf{FS(\Pi)}}(q)-\frac{n!}{|\mathcal C|}\\
&\ge \frac{n!}{(2q+n+1)^{2n}}\epsilon_{\mathsf{FS(\Pi)}}(q)-n!\epsilon_{\mathsf \Pi},
\end{align}
where we used the condition on the challenge space size from Theorem \ref{thm:qtothe2nboost} in the last line. Rearranging terms we obtain
\begin{align}
\epsilon_{\mathsf{FS(\Pi)}}(q)&\le (2q+n+1)^{2n}\left(1+\frac 1{n!}\right)\epsilon_{\mathsf{\Pi}}\\
&\le 2(n+3)^{2n}q^{2n}\epsilon_{\mathsf{\Pi}},
\end{align}
where we have used $1\le q$ (so that $2q+n+1 \le (n+3)q$) and $1+\frac 1{n!}\le 2$ in the last line. In summary, we have constants $c_1=n^{-2n}$ and $c_2= 2(n+3)^{2n}$ such that
\begin{equation}
c_1 q^{2n}\epsilon_{\mathsf \Pi}\le \epsilon_{\mathsf{FS(\Pi)}}(q)\le c_2 q^{2n}\epsilon_{\mathsf \Pi}.
\end{equation}
\qed\end{proof}
\begin{proof} [of Theorem \ref{thm:qtothe2nboost}]
Let $\hat{\mathsf \Sigma}$ be a \textSigma-protocol\xspace for a language $\cal L$ fulfilling the requirements of Theorem~\ref{thm:qsqauredboost}, and let its challenge space be denoted by $\hat{ \mathcal C}$. Given an arbitrary positive integer $n$, we define a $(2n\!+\!1)$-round PCIP $\mathsf\Pi$ for the same language $\cal L$ by means of $n$ sequential independent executions of $\hat{\mathsf \Sigma}$. Concretely, the $2n+1$ messages of $\mathsf \Pi$ are given in terms of the messages $\hat a_i, \hat c_i$ and $\hat z_i$ of the $i$-th repetition of $\hat{\mathsf \Sigma}$ as
\begin{align*}
a_1&=\hat a_1\\
c_i&=(\hat c_i, r_i)\ \mathrm{for}\ i=1,...,n\\
a_i&=(\hat a_i, \hat z_{i-1})\ \mathrm{for}\ i=2,...,n, \ \mathrm{and}\\
z&=\hat z_{n},
\end{align*}
where $r_i$ is an independent random string of arbitrary (but fixed) length, which is ignored otherwise (in line with Remark~\ref{rem:extendC}).
The purpose of $r_i$ is to make the challenge space $\cal C$ of $\mathsf\Pi$ arbitrarily large, as required.
The verification procedure of $\mathsf \Pi$ simply checks if all the triples $(\hat a_i, \hat c_i, \hat z_i)$ are accepted by $\hat{\mathsf \Sigma}$.
By the special soundness property of $\hat{\mathsf \Sigma}$, the soundness error of this PCIP is $\epsilon=|\hat{\cal C}|^{-n}$.
Using Theorem \ref{thm:qsqauredboost}, we can attack the Fiat-Shamir transformation of $\hat{\mathsf \Sigma}$ repeatedly to devise an attack against $\mathsf{FS(\Pi)}$: first use Theorem~\ref{thm:qsqauredboost} to find $\hat a_1$ and $\hat z_1$, then use it again to find $\hat a_2$ and $\hat z_2$, etc., with the property that, together with the correctly computed challenges, these form valid triples for an instance $x \not\in \cal L$. In each invocation of Theorem~\ref{thm:qsqauredboost} we use a $q'$-query attack, which succeeds with probability $q'^2/|\hat{\cal C}|$. Thus, using in total $q = n q'$ queries, we succeed in breaking $\mathsf{FS(\Pi)}$ with probability $q'^{2n}/|\hat{\cal C}|^n = n^{-2n}q^{2n}\epsilon$, as claimed.
There are two issues we neglected in the above argument. First, we actually employ Theorem~\ref{thm:qsqauredboost} for attacking a {\em variant} of $\hat{\mathsf \Sigma}$ that has its challenge enlarged (and thus is not special sound); and, second, the challenge $c_i$ is computed as
$$
c_i = H(i-1,...,H(1,H(0,x,\hat a_1),\hat a_2),...,\hat a_i) \, ,
$$
which is {\em not} a uniformly random function of $x$ and $\hat a_i$ (but only of $\hat a_i$). However, by Remark~\ref{rem:AttackExt}, the attack from Theorem \ref{thm:qsqauredboost} still applies.
\qed\end{proof}
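The probability bookkeeping in the proof above can be checked with exact rational arithmetic. The following sketch (with an arbitrary stand-in value for $|\hat{\cal C}|$) verifies that $n$ rounds with per-round success probability $q'^2/|\hat{\cal C}|$ compose to $n^{-2n}q^{2n}\epsilon$ when $q = nq'$ and $\epsilon = |\hat{\cal C}|^{-n}$:

```python
from fractions import Fraction

# Bookkeeping check: n invocations of the q'-query attack, each succeeding
# with probability q'^2/C_hat, compose to n^(-2n) * q^(2n) * eps for
# q = n*q' and eps = C_hat^(-n).  C_hat = 257 is an arbitrary stand-in.
def attack_success(n, q_prime, c_hat):
    return Fraction(q_prime ** 2, c_hat) ** n

def claimed(n, q, c_hat):
    eps = Fraction(1, c_hat ** n)
    return Fraction(q, n) ** (2 * n) * eps

for n in (1, 2, 3):
    for q_prime in (1, 4, 7):
        assert attack_success(n, q_prime, 257) == claimed(n, n * q_prime, 257)
print("bookkeeping checks out")
```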
\section{Applications}
\subsection{Digital signature schemes from multi-round Fiat-Shamir}\label{sec:sig}
One of the prime applications of the Fiat-Shamir transformation is the construction of digital signature schemes from interactive identification schemes. In this context, multi-round variants have also been used. An example where a QROM reduction is especially desirable is MQDSS \cite{MQDSS}, a candidate digital signature scheme in the ongoing NIST standardization process for post-quantum cryptographic schemes \cite{NIST}. This digital signature scheme is constructed by applying the multi-round Fiat-Shamir transformation to the 5-round identification scheme by Sakumoto, Shirai, and Hiwatari \cite{SSH} based on the hardness of solving systems of multivariate quadratic equations.
In this section, we present a generic construction of a digital signature scheme based on multi-round FS, and give a proof sketch of its strong unforgeability under chosen message attacks. We refrain from giving a full, self-contained proof here so as not to distract from our main technical result and its implications. Many, though not all, parts of the argument are very similar to the ones made elsewhere for the 3-round case.
The following construction is a straightforward generalization of the original construction of Fiat and Shamir.
\begin{defi}[Fiat-Shamir signatures from a general PCIP]\label{def:mFS-sig}
Given a $(2n\!+\!1)$-round public-coin identification scheme $\mathsf{\Pi} = ({\ensuremath{\mathsf{Gen}}\xspace},{\cal P}, {\cal V})$ for a witness
relation $R$
and a
hash function $H$ with appropriate domain and range equal to $\cal C$, we define the digital signature scheme $\mathsf{Sig[\Pi]}= (\ensuremath{\mathsf{Gen}}\xspace, \ensuremath{\mathsf{Sign}}\xspace, \ensuremath{\mathsf{Verify}}\xspace)$ as follows. The key generation algorithm $\ensuremath{\mathsf{Gen}}\xspace$ is just the one from $\Pi$. The signing algorithm $\ensuremath{\mathsf{Sign}}\xspace$, on input a secret key $sk$ and a message $m$, outputs
\begin{align*}
\sigma=(a_1,...,a_n,z)&\leftarrow \ensuremath{\mathsf{Sign}}\xspace_{sk}(m)
\end{align*}
where $z$ and $a_i$ for $i=1,...,n$ are computed using ${\cal P}(pk)$, and the challenges are computed as
\begin{align*}
c_1&=H(0, pk,m,a_1)\text{ and} \\
c_i&=H(i-1,c_{i-1},a_i) \text{ for } i=2,...,n \, .
\end{align*}
The verification algorithm $\ensuremath{\mathsf{Verify}}\xspace$, on input a public key $pk$, a message $m$ and a signature $\sigma=(a_1,...,a_n,z)$, computes the $c_i$ as specified above and outputs `accept' iff ${\cal V}_{pk}(a_1,c_1,...,a_n,c_n,z) = 1$, denoted by $\ensuremath{\mathsf{Verify}}\xspace_{pk}(m,\sigma) = 1$.
\\~\\
We note that the above definition is equivalent to the following, alternative formulation: Let\linebreak $\ensuremath{\mathsf{Sign}}\xspace_{sk}(m)$ produce $\sigma$ by running $P_{FS}^H(x||m)$, and let $\mathsf{Verify}(m,\sigma)$ be equal to the outcome of\linebreak $V_{FS}^H(x||m)$, where $(P_{FS}^H,V_{FS}^H) = \mathsf{FS[\Pi^*]}$ and $\mathsf{\Pi^*} = ({\cal P^*,\cal V^*})$ is the identification scheme obtained from $\mathsf{\Pi}$ by setting ${\cal P^*}(x||m) = {\cal P}(x)$ and ${\cal V^*}(x||m) = {\cal V}(x)$ for any $m$. This alternative formulation will be convenient in the proof of Theorem \ref{thm:(s)UFCMA}.
\end{defi}
\begin{rem}
As in the case of the plain multi-round Fiat-Shamir transformation, one can include arbitrary additional strings in the argument when computing the challenges $c_i$. Examples where this is done include the MQDSS signature scheme \cite{MQDSS}, where the message $m$ and the first commitment $a_1$ are also included in the argument for computing the second challenge, and Bulletproofs, where the challenges are computed by hashing the entire transcript up to that point \cite{Bulletproofs}.
\end{rem}
As an identification scheme is an interactive honest-verifier zero-knowledge proof of knowledge of a secret key, the above signature scheme is a non-interactive zero-knowledge proof of knowledge of a secret key according to Corollary \ref{cor:mFS}. For a digital signature scheme, however, the stronger security notion of (strong) unforgeability under chosen message attacks ((s)UF-CMA) is required.
In the following, we give a proof sketch for the fact that the above signature scheme is (s)UF-CMA secure. This fact follows immediately once we have convinced ourselves that a certain result by Unruh about the Fiat-Shamir transformation holds for the multi-round case as well: for the Fiat-Shamir transformation of \textSigma-protocols\xspace, extractability implies a stronger notion of extractability that enables a proof of (s)UF-CMA security \cite{Unruh2017}. Here, we just patch the parts of the proof from \cite{Unruh2017} that make use of the fact that the underlying PCIP has only three rounds.
For the following we need the notion of a PCIP having computationally unique responses.
\begin{defi}[Computationally unique responses - PCIP]
A $(2n\!+\!1)$-\switch{\linebreak}{}round PCIP
$\mathsf{\Pi} = (\cal P, \cal V)$ is said to have {\em computationally unique responses} if, given a partial transcript\switch{}{\linebreak} $(x,a_1,c_1,\ldots, a_i,c_i)$, it is computationally hard to find two accepting conversations that both extend the partial transcript but differ in (at least) $a_{i+1}$ (here we consider $z$ to be equal to $a_{n+1}$). That is, for conversations $con_j=x,a_1,c_1,\ldots, a_i,c_i,a_{i+1}^{(j)},c_{i+1}^{(j)},\ldots,a^{(j)}_n,c^{(j)}_n,z^{(j)}$, $j=1,2$, with $a^{(1)}_{i+1}\neq a^{(2)}_{i+1}$, the probability
$$
\Pr\left[{{\cal V}(con_1) = 1 \wedge {\cal V}(con_2) = 1}: (con_1,con_2)\leftarrow {\cal A}\right]
$$
is negligible for every computationally bounded (quantum) adversary ${\cal A}$.
\end{defi}
Equipped with this definition, we can state the main result of this section.
\begin{thm}[(s)UF-CMA of multi-round FS signatures]\label{thm:(s)UFCMA}
Let $\mathsf{\Pi}$ be a \switch{\linebreak}{}PCIP for some hard relation $R$ which is a quantum proof of knowledge, satisfies completeness and HVZK, and has unpredictable commitments\footnote{We take unpredictable commitments for PCIPs to be exactly the same as for \textSigma-protocols\xspace, with the first message playing the role of the commitment.} as well as a superpolynomially large challenge space. Then $\mathsf{Sig[\Pi]}$ is existentially unforgeable under chosen message attack (UF-CMA).
If $\mathsf{\Pi}$ in addition has computationally unique responses, $\mathsf{Sig[\Pi]}$ is {\em strongly} existentially unforgeable under chosen message attack (sUF-CMA).
\end{thm}
In \cite{Unruh2017} (Theorems 24 and 25, respectively), it is proven that an extractable FS proof system (of an HVZK \textSigma-protocol\xspace, and of an HVZK \textSigma-protocol\xspace with computationally unique responses, respectively) satisfies the stronger notion of {\em (strong) simulation-sound extractability}. In addition, it is shown that such an FS proof system gives rise to an (s)UF-CMA signature scheme if the underlying relation is hard. Corollary \ref{cor:PresSoundPoK} implies that $\mathsf{FS[\Pi^*]}$ is indeed extractable if $\mathsf\Pi$ is extractable. Below, we rely on the proof in \cite{Unruh2017} to argue simulation-sound extractability, only pointing out a particular difference for the multi-round case.
\begin{proof}[sketch]
Since $\mathsf{\Pi}$ is a quantum proof of knowledge, so is $\mathsf{\Pi^*}$. By Corollary \ref{cor:PresSoundPoK}, $\mathsf{FS[\Pi^*]}$ is a quantum proof of knowledge (extractable), and by Theorem 20 in \cite{Unruh2017} (which easily generalizes to the multi-round setting), completeness, unpredictable commitments\footnote{This property is required to have sufficient entropy on the inputs to the oracle that are reprogrammed by the zero-knowledge simulator ${\cal S}_{ZK}$. While ${\cal S}_{ZK}$ may reprogram the oracle on inputs $(i-1,c_{i-1},a_i)$ for $i>1$, it is enough to require the first message $a_1$ to have sufficient entropy, since with $c_{i-1}$, these later inputs all include a uniformly random element from the superpolynomially large challenge space.} and HVZK of $\mathsf{\Pi^*}$ together imply ZK for $\mathsf{FS[\Pi^*]}$. For the proof that $\mathsf{FS[\Pi^*]}$ is also simulation-sound extractable, we refer to the proof of Theorem 24 in \cite{Unruh2017}, noting only that in the hop from Game 1 to Game 2 we have to adjust the argument as follows: Let ${\cal S}_{ZK}$ be the zero-knowledge simulator that runs the HVZK simulator from $\mathsf{\Pi}^*$ and reprograms the oracle as necessary. We write $H_f$ for the oracle $H$ after it has been reprogrammed by ${\cal S}_{ZK}$, at the end of the run of ${\cal A}$. We have to show that $V_{FS}^{H_f}(x,a_1,\ldots,a_n,z) = 1$ implies $V_{FS}^{H}(x,a_1,\ldots,a_n,z) = 1$, where $(x,a_1,\ldots,a_n,z)$ is the final output of ${\cal A}$. Suppose the implication does not hold. Then either (i) $\ H_f(0,x,a_1)\neq H(0,x,a_1)$ or (ii) $ H_f(i-1,c_{i-1},a_i)\neq H(i-1,c'_{i-1},a_i)$ for some $i$, where $c_{i-1}$ is the $(i\!-\!1)$-st challenge as recomputed by $V_{FS}^{H_f}$ and $c'_{i-1}$ is the one computed by $V_{FS}^H$. In case (i) holds, ${\cal A}$ has queried $x$ and the corresponding forged proof that was output by ${\cal S}_{ZK}$ starts with $a_1$. 
In case (ii), assume that $H_f(j-1,c_{j-1},a_j)= H(j-1,c_{j-1},a_j)$ for all $j < i$, so that $c_{i-1} = c'_{i-1}$. Then,
$$
H_f(i-1,...,H(1,H(0,x, a_1), a_2),...,a_i) \neq H(i-1,...,H(1,H(0,x, a_1), a_2),...,a_i)
$$
which means that ${\cal A}$ either queried $x$ and the corresponding forged proof that was output by ${\cal S}_{ZK}$ starts with $a_1$, or else ${\cal A}$ has queried some $x'$ such that
\begin{align*}
H(i-2,\ldots,H(&1,H(0,x', a'_1),a'_2),\ldots a'_{i-1}) \\
&= H(i-2,\ldots,H(1,H(0,x, a_1),a_2),\ldots, a_{i-1})
\end{align*}
and $a_i = a'_i$, where $(a'_1,\ldots,a'_i)$ is part of the ${\cal S}_{ZK}$ proof resulting from the query $x'$. By the fact that $H$ is a random oracle, it is infeasible for $\mathcal{A}$ to find such an $x'$.
In the context of weak simulation-sound extractability, the fact that ${\cal A}$ has queried $x$ is enough to derive a contradiction. For the strong variant, we now have that ${\cal S}_{ZK}$ has output $(x,a_1,a'_2,\ldots,a'_n,$\switch{}{\linebreak} $z')$ such that $${\cal V}(x,a_1,H_f(0,x,a_1),a'_2,c'_2\ldots,a'_n,c'_n,z') = 1$$ and ${\cal A}$ has output $(x,a_1,a_2,\ldots,a_n,z)$ such that $${\cal V}(x,a_1,H_f(0,x,a_1), a_2,c_2,\ldots,a_n,c_n,z) =1$$ (and ${\cal A}$ knows both since it interacted with ${\cal S}_{ZK}$). By the computationally unique responses property of $\mathsf{\Pi}$, it must be that $a_2 = a_2'$. But then it follows that $$c_2 = H_f(1,H_f(0,x,a_1), a_2) = H_f(1,H_f(0,x,a_1), a'_2) = c'_2$$ (remember that both proofs are accepting with respect to $H_f$) which in turn implies that $a_3 = a'_3$, etc. Thus, we obtain that ${\cal A}$ has output a proof that was produced by ${\cal S}_{ZK}$, yielding a contradiction. We conclude that $$V_{FS}^{H_f}(x,a_1,\ldots,a_n,z) = 1\text{ implies }V_{FS}^{H}(x,a_1,\ldots,a_n,z) = 1$$ except with negligible probability.
In the rest of the proof of Theorems 24 and 25 in \cite{Unruh2017}, no properties specific to a three-round scheme are used, and so the results extend to the PCIP context, that is, $\mathsf{FS[\Pi^*]}$ is (strongly) simulation-sound extractable. Now applying Theorem 31 from \cite{Unruh2017}, we obtain that $\mathsf{Sig[\Pi]}$ is (s)UF-CMA.
\qed\end{proof}
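To make the chained challenge derivation used by $V_{FS}$ in the proof above concrete, here is a small illustrative sketch (our own notation, with SHA-256 standing in for the random oracle $H$): the first challenge is $c_1 = H(0,x,a_1)$ and each later challenge is $c_i = H(i-1,c_{i-1},a_i)$, so a change in any prover message $a_j$ propagates to all subsequent challenges.

```python
import hashlib

def challenge(i, prev, a):
    """Derive the i-th challenge c_i = H(i-1, prev, a_i), where prev is the
    statement x for i = 1 and the previous challenge c_{i-1} for i > 1."""
    h = hashlib.sha256()
    for part in (str(i - 1), prev, a):
        h.update(str(part).encode())
        h.update(b"|")          # unambiguous separation of the hash inputs
    return h.hexdigest()

def fs_challenges(x, messages):
    """Compute the chained challenges c_1, ..., c_n for prover messages a_1..a_n."""
    cs, prev = [], x
    for i, a in enumerate(messages, start=1):
        c = challenge(i, prev, a)
        cs.append(c)
        prev = c
    return cs
```

Note that tampering with an early message invalidates every later challenge, which is exactly the structure the game hop above exploits.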
Together with the fact that commit-and-open PCIPs can easily be made quantum extractable in the right sense by using standard hash-based commitments based on a collapsing hash function, we obtain the security of the MQDSS signature scheme. Recall that the standard hash-based commitment scheme works as follows. On input $s$, the commitment algorithm samples a random opening string $u$ and outputs it together with the commitment $c=H(s,u)$. Opening works by recomputing the hash and comparing it with $c$. Note that, while this commitment scheme is collapse-binding \cite{Unruh2016}, we need the stronger property of collapsingness of the function defined by the commitment algorithm that, on input a string and some randomness, outputs a commitment (collapse-binding only requires collapsingness with respect to the committed string, not the opening information).
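A minimal sketch of this standard hash-based commitment scheme (SHA-256 as a stand-in for the collapsing hash function $H$; the 32-byte opening length is an illustrative choice of ours, not taken from the scheme specification):

```python
import hashlib, secrets

def commit(s: bytes):
    """Commit to s: sample a random opening string u and output c = H(s, u)."""
    u = secrets.token_bytes(32)                # opening information (fixed length,
                                               # so the concatenation is unambiguous)
    c = hashlib.sha256(s + u).digest()         # the commitment
    return c, u

def open_commitment(c: bytes, s: bytes, u: bytes) -> bool:
    """Verify an opening by recomputing the hash and comparing it with c."""
    return hashlib.sha256(s + u).digest() == c
```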
\begin{cor}[sUF-CMA of MQDSS]
Let $\mathsf\Pi_{\mathrm{SSH}}$ be the 5-round identification scheme from \cite{SSH} repeated in parallel a suitable number of times and instantiated with the standard hash-based commitment scheme using a collapsing hash function. Then the Fiat-Shamir signature scheme constructed from $\mathsf\Pi_{\mathrm{SSH}}$ is sUF-CMA.
\end{cor}
\begin{proof}[sketch]
In $\mathsf\Pi_{\mathrm{SSH}}$, the honest prover's first message consists of two commitments, and the second and final messages contain functions of the strings committed to in the first message. This structure, together with the computational binding property (implied by the collapse binding property) of the commitments, immediately implies that $\mathsf\Pi_{\mathrm{SSH}}$ has computationally unique responses. According to Corollary \ref{cor:SSH-PoK} in the appendix, $\mathsf\Pi_{\mathrm{SSH}}$ is a quantum proof of knowledge. It also has HVZK according to \cite{SSH}. Finally, the first message of $\mathsf\Pi_{\mathrm{SSH}}$ is clearly unpredictable. An application of Theorem \ref{thm:(s)UFCMA} finishes the proof.
\qed\end{proof}
\subsection{Sequential Or Proofs}
A second application of our multi-input version of the measure-and-reprogram result is to the OR-proof as introduced by Liu, Wei and Wong \cite{LWW04} and further analyzed by Fischlin, Harasser and Janson \cite{FHJ}. This is an alternative (non-interactive) proof for proving existence/knowledge of (at least) one of two witnesses without revealing which one, compared to the well known technique by Cramer, Damg\r{a}rd and Schoenmakers \cite{CDS94}.
Formally, given two $\Sigma$-protocols $\mathsf{\Sigma}_0$, and $\mathsf{\Sigma}_1$, for languages ${\cal L}_0$, and ${\cal L}_1$, respectively, \cite{LWW04} proposes as a non-interactive proof for the OR-language ${\cal L}_{\vee} = \{ (x_0,x_1) \,:\, x_0 \!\in\! {\cal L}_0 \vee x_1 \!\in\! {\cal L}_1\}$ a quadruple $\pi_{\vee} = (a_0,a_1,z_0,z_1)$ such that
$$
V_{\vee}^H(x_0,x_1,\pi_{\vee})\! :=\! \bigl[V_0\bigl(x_0,a_0,H(1,x_0,x_1,a_1),z_0\bigr) \wedge V_1\bigl(x_1,a_1,H(0,x_0,x_1,a_0),z_1\bigr)\bigr]
$$
is satisfied. Fischlin et al. call this construction {\em sequential OR proof}.
We emphasize that the two challenges $c_0$ and $c_1$ are computed ``over cross'', i.e., the challenge $c_0$ for the execution of $\mathsf{\Sigma}_0$ is computed by hashing $a_1$, and vice versa. It is straightforward to verify that if $\mathsf{\Sigma}_0$ and $\mathsf{\Sigma}_1$ are special honest-verifier zero-knowledge, meaning that for any challenge $c$ and response $z$ one can efficiently compute a first message $a$ such that $(a,c,z)$ is accepted, then it is sufficient to be able to succeed in {\em one} of the two {\em interactive} protocols $\mathsf{\Sigma}_0$ and $\mathsf{\Sigma}_1$ in order to honestly produce such an OR-proof $\pi_{\vee}$. Thus, depending on the context, it is sufficient that one instance is in the corresponding language, or that the prover knows one of the two witnesses, to produce $\pi_{\vee}$.
Indeed, if, say, $x_0 \in {\cal L}_0$ (and a witness $w_0$ is available), then $\pi_{\vee}$ can be produced as follows. Prepare $a_0$ according to $\mathsf{\Sigma}_0$, compute $c_1 := H(0,x_0,x_1,a_0)$ and simulate $z_1$ and $a_1$ using the special honest-verifier zero-knowledge property of $\mathsf{\Sigma}_1$ so that $V_1(x_1,a_1,c_1,z_1)$ is satisfied, and then compute the response $z_0$ for the challenge $c_0 := H(1,x_0,x_1,a_1)$ according to $\mathsf{\Sigma}_0$.
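As an illustration (our own, not from \cite{LWW04}), the following sketch instantiates the sequential OR proof with Schnorr's $\Sigma$-protocol for knowledge of a discrete logarithm, over a toy-sized group. It follows exactly the prover strategy just described: the side with the known witness is run honestly, while the other side is simulated via special honest-verifier zero-knowledge.

```python
import hashlib, secrets

# Toy group: the order-q subgroup of Z_p^*, with p = 2q + 1 (illustrative sizes only).
q = 1019
p = 2 * q + 1          # 2039, prime
g = 4                  # generates the subgroup of order q

def H(tag, x0, x1, a):
    """Random-oracle stand-in; tags 0 and 1 implement the 'over cross' hashing."""
    data = f"{tag}|{x0}|{x1}|{a}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove_or(x0, x1, b, w):
    """Sequential OR proof, given a witness w for x_b (i.e. x_b = g^w mod p)."""
    xs = [x0, x1]
    r = secrets.randbelow(q)
    a_b = pow(g, r, p)                          # honest first message for side b
    c_other = H(b, x0, x1, a_b)                 # challenge for the *other* side
    z_other = secrets.randbelow(q)              # simulate the other side (special HVZK):
    a_other = pow(g, z_other, p) * pow(xs[1 - b], -c_other, p) % p
    c_b = H(1 - b, x0, x1, a_other)             # own challenge, hashed from a_other
    z_b = (r + c_b * w) % q                     # honest Schnorr response
    a = [None, None]; z = [None, None]
    a[b], a[1 - b], z[b], z[1 - b] = a_b, a_other, z_b, z_other
    return a[0], a[1], z[0], z[1]

def verify_or(x0, x1, proof):
    a0, a1, z0, z1 = proof
    c0 = H(1, x0, x1, a1)                       # challenges recomputed over cross
    c1 = H(0, x0, x1, a0)
    ok0 = pow(g, z0, p) == a0 * pow(x0, c0, p) % p
    ok1 = pow(g, z1, p) == a1 * pow(x1, c1, p) % p
    return ok0 and ok1
```

A verifier cannot tell which of the two sides was simulated, since both transcripts are accepting and both challenges are hash outputs.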
On the other hand, intuitively one expects that one of the two instances must be true in order to be able to successfully produce a proof. Indeed, \cite{LWW04} shows security of the sequential OR in the (classical) ROM. \cite{FHJ} go a step further and show security in the (classical) {\em non-programmable} ROM. Here we show that our multi-input version of the measure-and-reprogram result (as a matter of fact the 2-input version) implies security in the QROM.
\begin{thm}
There exists a black-box quantum polynomial-time interactive algorithm $\hat{\cal P}$, which first outputs a bit $b$ and two instances $x_0,x_1$, and in a second stage acts as an interactive prover that runs $\mathsf{\Sigma}_b$ on instance $x_b$, such that for any adversary $\cal A$ making $q$ queries to a uniformly random function $H$ and for any $x_0^\circ,x_1^\circ$:
\begin{align*}
&\Pr\bigr[x_0 = x_0^\circ \,\wedge\, x_1 = x_1^\circ \,\wedge\, v_b = accept :(b,x_0,x_1,v_b) \leftarrow \langle\hat{\cal P}^{\cal A} , {\cal V}_b\rangle\bigl]
\\ &
\geq \frac{1}{(2q+1)^4} \Pr_H\bigr[x_0 = x_0^\circ \,\wedge\, x_1 = x_1^\circ \,\wedge\, V^H_{\vee}(x_0,x_1,\pi_{\vee}) : (x_0,x_1,\pi_{\vee}) \leftarrow {\cal A}^H \bigl] \, .
\end{align*}
\end{thm}
As explained above, the execution $(b,x_0,x_1,v_b) \leftarrow \langle\hat{\cal P}^{\cal A} , {\cal V}_b\rangle$ should be understood as $\hat{\cal P}^{\cal A}$ first outputting $x_0,x_1$ and $b$, and then engaging with ${\cal V}_b$ to execute $\mathsf{\Sigma}_b$ on instance $x_b$. Thus, the statement ensures that if ${\cal A}^H$ succeeds in producing a convincing proof $\pi_{\vee}$, then $\hat{\cal P}^{\cal A}$ succeeds in convincingly running $\mathsf{\Sigma}_0$ {\em or} $\mathsf{\Sigma}_1$ (with similar success probability), where it is up to $\hat{\cal P}^{\cal A}$ to choose which one it wants to do.
Of course, the statement translates to the {\em static} setting where the two instances $x_0$ and $x_1$ are {\em fixed} and not produced by the dishonest prover.
\begin{proof}
The algorithm ${\cal A}$ fits well into the statement of Theorem \ref{thm:multiplemar} with the two extractable inputs $\tilde x_0 = (0,x_0,x_1,a_0)$ and $\tilde x_1 = (1,x_0,x_1,a_1)$. Thus, we can consider the 3-stage algorithm ${\cal S}$ ensured by Theorem \ref{thm:multiplemar}, which behaves as follows with at least the probability given by the right hand side of the claimed inequality. In the first stage, it outputs a permutation on the set $\{0,1\}$, which we represent by a bit $b \in \{0,1\}$ with $b=0$ corresponding to the identity permutation, as well as $\tilde x_b = (b,x_0,x_1,a_b)$.
On input a random $\Theta_b = c_{1-b}$ (``locally'' chosen by $\hat{\cal P}$), ${\cal S}$ then outputs $\tilde x_{1-b} = (1-b,x_0,x_1,a_{1-b})$. Finally, on input a random $\Theta_{1-b} = c_b$ (provided by ${\cal V}_b$ as the challenge upon the first message $a_b$), ${\cal S}$ outputs $z_0,z_1$ so that $V_{\vee}$ is satisfied with the challenges $c_b$ and $c_{1-b}$, and thus in particular $V_b\bigl(x_b,a_b,c_b,z_b\bigr)$ is satisfied.
This directly shows the existence of $\hat{\cal P}$ as claimed.
\qed\end{proof}
\section{Acknowledgement}
We thank Dominique Unruh for hinting towards the possibility of the improved Theorem 2 (compared to [DFMS19]), see also Footnote 8, and
Andreas H\"ulsing
for helpful discussions.
CM was funded by an NWO VENI grant (Project No. VI.Veni.192.159). SF was partly supported by the EU Horizon 2020 Research and Innovation Program Grant 780701 (PROMETHEUS). JD was funded by
ERC-ADG project 740972 (ALGSTRONGCRYPTO).
\bibliographystyle{alpha}
\section{Introduction}
The solar atmosphere is known to be strongly magnetised and dynamic in nature \cite{Vaiana73,Schrijver99}. The fine structure of this structured plasma environment consists of a wide range of distinct magnetic features like coronal loops, open flux tubes, prominences, etc. These structures play an important role in solar dynamics since they can support various types of MHD waves and oscillations which are thought to contribute to the solution of the long-standing problem of solar coronal heating (for some recent reviews see, \opencite{Klimchuk06}; \opencite{Erdelyi08a}; \opencite{Taroyan08}; \opencite{Taroyan09}). Observations indicate that waves and oscillations are ubiquitous in the solar atmosphere (\opencite{Wang03}; \opencite{Tomczyk07}; \opencite{Pontieu07b}; \opencite{Okamoto07}; \opencite{ErdelyiTaroyan08}; \opencite{Van08}; \opencite{Jess09}. For recent reviews see, \opencite{Nakariakov05}; \opencite{Banerjee07}; \opencite{Moortel09}; \opencite{Zaqarashvili09}; \opencite{Mathioudakis11}). Many of the above reports highlight that the detected waves or oscillations are seen to be strongly damped. Commonly suggested mechanisms for damping are resonant absorption \cite{Ruderman,Goossens02,Aschwanden03} and thermal conduction (\opencite{Ofman02}; \opencite{Moortel03}, \citeyear{Moortel04a}; \opencite{Moortel04b}; \opencite{Mendoza04}; \opencite{Erdelyi08}), which cause the damping of fast kink and (slow or acoustic) longitudinal waves, respectively. More recently, Morton and Erd\'{e}lyi (\citeyear{Morton09b}, \citeyear{Morton09c}) argued that radiation of the background plasma may be a dominant damping mechanism for coronal oscillations.
In recent years, coronal waves have been used as a diagnostic tool to infer otherwise unmeasurable or hard-to-measure plasma parameters, e.g. the magnetic field \cite{Nakariakov01,ErdelyiTaroyan08} and the scale height \cite{Verth08}. Such a technique is known as magneto-seismology and was suggested by \inlinecite{Uchida70}; \inlinecite{Zaitsev83} and \inlinecite{Roberts84} in the context of coronal diagnostics. The concept was proposed to be applied to any magnetic structure of the Sun by \inlinecite{Erdelyi06} and labelled solar magneto-seismology. The models used for deriving the plasma parameters are all static in these earlier works, i.e. the background (or equilibrium) plasma is not dynamic, and background motion with cooling/heating is more often than not neglected. However, there are numerous observations which show that the corona, and in particular coronal loops, are highly dynamic in nature, exhibiting flows, cooling and heating of the local plasma over a wide range of timescales. For accurate values of plasma parameters to be obtained from coronal magneto-seismology, this dynamic nature of the plasma needs to be incorporated into the models and the importance of a dynamic background must be assessed.
One feature which should be taken into account when studying the nature of coronal oscillations is the temperature evolution of the coronal plasma. There has been a plethora of observations which show a number of different temperature evolution scenarios (e.g. \opencite{Winebarger03}; \opencite{Nagata}; \opencite{Lopez07}). \inlinecite{Nagata} have observed that there are at least two categories of temperature evolution. Hot loops, which are heated to temperatures $T>2.5$ MK, are seen by the x-ray imagers, e.g., Yohkoh's soft x-ray telescope (SXT) and Hinode's x-ray telescope (XRT). The hot loops are short lived and are seen to undergo relatively fast cooling down to EUV temperatures, appearing in EUV imagers, e.g., SOHO/EIT. Another category is cool loops, which are observed with temperatures $0.4-1.3$ MK in EUV images and have relatively long lifetimes. The temperature changes on a slow scale when compared to the lifetime of perturbations, or even that of the loop itself, and the loops exhibit limited dynamic behaviour. The physical process of the plasma cooling depends upon the loop temperature (cool or hot). It has been found that radiation is the dominant mechanism for the cooling of the EUV loops ($T<2.0$ MK), whereas thermal conduction is the cooling mechanism for loops in the region of $T>2.0$ MK. The majority of observed coronal loops that exhibit oscillations are reported to have a temperature decrease of exponential form, with cooling times of $500-2000$ s \cite{Aschwanden08,Ugarte09}.
Propagating plasma disturbances were detected by SOHO/UVCS along coronal plumes (\opencite{Ofman97}, \citeyear{Ofman99}, \citeyear{Ofman001a}). \inlinecite{Deforest98} observed similar compressive disturbances in polar plumes by using SOHO/EIT. These observed compressive disturbances are interpreted as propagating, slow magneto-acoustic waves, where the suggested damping mechanism for the waves was compressive viscosity (\opencite{Ofman99}, \citeyear{Ofman002b}). This problem was further investigated by \inlinecite{Mendoza04} who considered a gravitational stratification in addition to dissipative processes of thermal conduction, viscosity and radiation. They found that oscillations are even more efficiently damped in stratified plasmas. Similar intensity disturbances were observed in coronal loops by TRACE \cite{Nightingale99,Schrijver99,Moortel00,McEwan06} and EIT/SOHO \cite{Berghmans99}. \inlinecite{Nakariakov00} have also identified these disturbances as slow magneto-acoustic waves and they found that the wave evolution is affected by dissipation and gravitational stratification. Moreover, \inlinecite{Wang03} and \inlinecite{Taroyan07} have observed ($T>6$~MK) hot loop oscillations by SUMER and SXT. They suggested that these oscillations are standing longitudinal (acoustic) waves and are triggered by a footpoint microflare. It was reported that after an initial temperature increase the loop is observed to be cooling. Recently, \inlinecite{ErdelyiTaroyan08} and \inlinecite{Wang09} have detected 5 minutes quasi-periodic oscillations in transition region and additional five coronal lines by Hinode/EIS at the footpoint of a coronal loop. These oscillations were interpreted as slow magneto-acoustic waves propagating upward from the transition region into the corona. Further, it was found that the amplitude of oscillations decreases with increasing height. 
It is suggested that the source of these oscillations are the leakage of the photospheric {\it p}-modes through the chromosphere and transition region into the corona, i.e. a similar mechanism that is put forward to explain spicule formation \cite{Pontieu04,Pontieu06} and transition region moss oscillations \cite{Pontieu03}.
A theoretical study of how the slow mode under solar coronal conditions can be extracted from the MHD equations has been done by, e.g. \inlinecite{Roberts06}. The magnetic field was assumed straight in the vertical direction in a gravitationally stratified medium. This theory has been applied to the slow waves in coronal loops observed by SUMER and TRACE. \inlinecite{Luna11} generalised these studies to concurrently occurring magnetic and density stratification, i.e. taking into account the strongly expanding nature of loops that support MHD waves.
The hot loops are thought to be heated by impulsive heating events, e.g. flares, micro-flares, nano-flares. The origin of these energisation mechanisms could be either waves or reconnection (\opencite{Antolin08a}, \citeyear{Antolin08b}, \citeyear{Antolin09}). There have been a number of simulations investigating the evolution of coronal loops after heating events (see, e.g. \opencite{Jakimiec92}; \opencite{Cargill94}; \opencite{Markus09}; \opencite{Taroyan09}, \citeyear{Taroyan10}; \opencite{Taroyanetal10}). Impulsive heating events only last for a relatively short duration and if the plasma reaches a hot enough temperature by the end of the heating, thermal conduction dominates the cooling of the plasma. From calculations by e.g. \inlinecite{Cargill94}, the decrease in temperature due to thermal conduction takes an almost exponential form.
The effect of thermal conduction on MHD waves has been extensively studied. However, all the previous models have assumed the background plasma is static, i.e. there is no change in the equilibrium plasma quantities over time. \inlinecite{Ibanez93} studied the propagation of thermal and magnetosonic waves in optically thin plasma. They found that thermal waves are always damped, whereas the magnetosonic waves are damped in the temperature range $10^4$ K $\leq T<10^8$ K due to the thermal conduction mechanism. Moreover, \inlinecite{Ofmanwang} and \inlinecite{Mendoza04} have used a 1D nonlinear dissipative MHD model to study the Doppler shift oscillations of hot coronal loops observed by SUMER. The oscillations were interpreted as slow magnetosonic waves. Compressive viscosity was found to be less significant in damping loop oscillations compared to the thermal conduction mechanism. The propagation of MHD slow waves in a static 1D isothermal medium has also been studied by \inlinecite{Moortel03}, where thermal conduction and compressive viscosity are considered as damping mechanisms for perturbations. They found that `enhanced' thermal conduction is the dominant damping mechanism of slow waves in coronal loops. Further to this, \inlinecite{Moortel04a} investigated the behaviour of slow MHD waves in a stratified and diverging static atmosphere. The introduction of stratification reduced the effectiveness of thermal conduction on the damping of the slow wave. This result is in disagreement with those of \inlinecite{Mendoza04}, who found an opposite effect for slow standing waves. Further, a decay of oscillation amplitude due to optically thin radiation and area divergence is reported, although this is only found to change the rate of damping by a few percent. Overall, it is found in many earlier studies that thermal conduction is an essential cause of the damping of loop oscillations in static equilibria when compared to the other mechanisms under coronal conditions (i.e. 
$T=10^6$ K). However, there is an important point to highlight: although thermal conduction seems to be an efficient dissipative mechanism among the various damping mechanisms considered, it cannot efficiently model the {\it observed} decay of amplitudes when coronal parameters are substituted into the classical formulae of thermal conduction obtained by \inlinecite{Braginskii65} and \inlinecite{Chapman58}, etc. This means that modelling efforts need to consider alternatives. We provide one here.
The damping of oscillations due to a cooling {\it background} coronal plasma is a novel and physically natural idea. \inlinecite{Morton09b} found that the cooling of the background plasma could lead to the damping of even fast kink (i.e. largely incompressible) oscillations. In addition to this, they noticed that the ratio of the fundamental mode to the first harmonic decreases as the loop cools. Furthermore, \inlinecite{Morton09c} found that the damping due to the cooling of the plasma could account for the observed damping in a number of examples of observed transverse oscillations. At the time of writing, this number was almost all the known examples available in the literature (\opencite{Aschwanden99}, \citeyear{Aschwanden02}; \opencite{Nakariakov99}). In a recent theoretical work by \inlinecite{Morton09d}, the effect of a radiatively cooling background plasma state on the propagation of magneto-acoustic waves in a uniformly magnetized plasma was investigated. Although the approximation of unbounded uniformity of the plasma may seem a severe simplification, this step was necessary in order to give insight into the underlying physics. Radiation was assumed to be the mechanism for the cooling of the plasma, and the plasma cooled exponentially in time, which captures well the key features of the observational data. As a result, they found that the slow and fast modes are damped due to cooling. The radiative cooling was shown to damp the slow mode by up to $60\%$ within characteristic lifetimes. This effect is much stronger than the contribution predicted by \inlinecite{Moortel04a}, and there was no need for an additional, less realistic assumption about the coefficient of thermal conduction.
In this paper, we study further the propagation of longitudinal MHD waves in a uniform magnetized plasma under the influence of a cooling background state. The cooling of the plasma is assumed, here, to be dominated by thermal conduction, so is applicable to oscillations in hot loops. Solutions to the background plasma equations show the cooling profile of the plasma is exponential in time in agreement with observational reports. The dispersion relation which describes the slow modes and their properties is derived. An analytic expression for the time-dependent amplitude of the waves is also derived. The results show that thermal conduction has a dominant effect on the slow modes where their amplitude is strongly damped in plasmas with a dynamic background state. The rate of damping is quantified in the model and is found to be dependent upon the amount of stratification in the plasma and the initial temperature of the background. The damping is also shown to be weakly dependent upon position along the slowly varying equilibrium.
\section{Governing equations}
Consider a homogeneous magnetised plasma where the background temperature is changing as a function of time due to thermal conduction and density is constant in time. The magnetic field is uniform and in the $z$ direction, i.e. $\mathbf{B}_0=B_0\bf{\hat{z}}$.
The governing MHD equations for the background plasma take the following form
\begin{eqnarray}
\frac{\partial{\rho}}{\partial{t}}+\nabla.(\rho\mathbf{v}) =0,\label{eq:cont}\\
\rho\frac{\partial{\mathbf{v}}}{\partial{t}}+\rho(\mathbf{v}.\nabla)\mathbf{v}=-\nabla{p}
+\frac{1}{\mu_0}(\nabla\times\mathbf{B})\times\mathbf{B}+\rho g,\\
\frac{R}{\tilde\mu}\frac{\rho^\gamma}{(\gamma-1)}\left[\frac{\partial{}}{\partial{t}}\frac{T}{\rho^{\gamma-1}}
+(\mathbf{v}.\nabla)\frac{T}{\rho^{\gamma-1}}\right]=\kappa\nabla^2{T},\\
\frac{\partial{\mathbf{B}}}{\partial{t}}=\nabla\times(\mathbf{v}\times\mathbf{B}),\\
{p}=\frac{R}{\tilde\mu}\rho{T},\label{eq:gas-law}
\end{eqnarray}
where $\mathbf{v}$ is the flow velocity, $\mathbf{B}$ is the magnetic field, $g$ is the gravity, $\mu_0$ is the magnetic permeability of free space, $\gamma$ is the ratio of specific heats, $R$ is the gas constant, $\tilde{\mu}$ is the mean molecular weight, $T$ is the temperature, $\kappa\nabla^2{T}$ is the thermal conduction term, and $\rho$ and $p$ are the plasma density and pressure, respectively.
The medium is assumed to be cooling due to thermal conduction with no temporal change in density, and it is also assumed that there is no background flow (i.e. $\mathbf{v}_0=0$), so Eqs. ($\ref{eq:cont}$)--($\ref{eq:gas-law}$) reduce to
\begin{eqnarray}
\mathbf{v}_{0}=0 ,\;\frac{\partial{\rho}_{0}}{\partial{t}}=0,\\ \nabla{p}_{0}=\rho_0g,\label{background_pressure}\\
\rho_{0}=\rho_{0}(z),\\
{p}_{0}=\frac{R}{\tilde\mu}\rho_{0}{T}_{0}, \label{eq: background-gas low} \\
\frac{{R}\,\rho_{0}}{\tilde\mu(\gamma-1)}\frac{\partial{T}_{0}}{\partial{t}}=\kappa\nabla^2T_{0}, \label{eq1}\
\end{eqnarray}
where the (0) index denotes background quantity.
Since Eq. ($\ref{background_pressure}$) can be written as
\begin{equation}
\frac{1}{p_0}dp_0=\frac{l}{H}d\tilde{z},\label{scale_height}
\end{equation}
where $z=l\tilde{z}$ (here $l$ is some characteristic length scale) and $H=p_0/\rho_0g$, and we are interested in a small characteristic scale length which is much smaller than the scale height, $H$, then we arrive at
\begin{equation}
\nabla{p}_{0}\approx0,
\end{equation}
which is representative of a weakly stratified atmosphere, i.e. $l/H\ll1$.
The solution to Eq. ($\ref{eq1}$) by separation of variables, and using Eq. ($\ref{eq: background-gas low}$) gives the temperature as
\begin{equation}
T_{0}(z,t)=T_i\exp{\left(\frac{-(\gamma-1)\lambda\tilde\mu\kappa{t}}{R\rho_i}\right)}[{-c}_{2}z^2+c_{3}z+c_4],
\end{equation}
and the density as
\begin{equation}
\rho_0(z)=\frac{\rho_i}{{-c}_{2}z^2+c_{3}z+c_4}, \label{back_den}
\end{equation}
which is physically valid when $c_2, c_3\ll1$ and $c_3>c_2$, where $\lambda$ is the separation constant, $c_2=\lambda/2$, $c_3$ and $c_4$ are constants, $T_i$ is the initial temperature at $z=0$ and $t=0$, and $\rho_i$ is the density at $z=0$.
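As a quick numerical sanity check (our own, with illustrative nondimensional constants, not the coronal values used later), one can verify by finite differences that this separated solution indeed satisfies the background conduction equation, Eq. ($\ref{eq1}$):

```python
import math

# Illustrative, nondimensional constants (chosen so that c2, c3 << 1 and c3 > c2)
gamma, R, mu, kappa = 5/3, 1.0, 1.0, 1.0
rho_i, T_i, lam = 1.0, 1.0, 0.01
c2, c3, c4 = lam/2, 0.02, 1.0      # c2 = lambda/2, as in the separated solution

f = lambda z: -c2*z**2 + c3*z + c4                 # spatial part of the solution
T0 = lambda z, t: T_i*math.exp(-(gamma - 1)*lam*mu*kappa*t/(R*rho_i))*f(z)
rho0 = lambda z: rho_i/f(z)                        # background density profile

def residual(z, t, h=1e-3):
    """LHS - RHS of R*rho0/(mu*(gamma-1)) * dT0/dt = kappa * d^2 T0/dz^2,
    evaluated by central finite differences; should vanish up to roundoff."""
    dTdt = (T0(z, t + h) - T0(z, t - h))/(2*h)
    d2Tdz2 = (T0(z + h, t) - 2*T0(z, t) + T0(z - h, t))/h**2
    return R*rho0(z)/(mu*(gamma - 1))*dTdt - kappa*d2Tdz2
```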
Note that the background temperature decreases exponentially with time. It has been shown for radiative cooling loops that an exponential profile provides a good fit for the observed cooling (\opencite{Aschwanden08}; \opencite{Morton09b}, \citeyear{Morton09c}). To the best of our knowledge, there are no such observations for hot loops cooling due to thermal conduction, that is why $\partial\rho_0/\partial t=0$ here.
The form of the density profile in Eq. ($\ref{back_den}$) is consistent with a weakly stratified atmosphere. For a gravitationally stratified atmosphere the density profile would take the form
\begin{equation}
\rho_0=\rho_i\exp\left(-\frac{z}{H}\right).\label{grav-stratif}
\end{equation}
The hydrostatic scale height $H$ is given by $H=47T\left[{\frac{\textmd{Mm}}{\textmd{MK}}}\right]$, hence for hot loops $H$ is in the range $94-282$ Mm.
In the case of weak stratification, i.e. $l/H\ll1$, the density can be approximated as
\begin{equation}
\rho_0\approx\rho_i\left(1-\frac{z}{H}+\frac{1}{2}\left(\frac{z}{H}\right)^2\right).\label{weak-stratif}
\end{equation}
Eq. ($\ref{back_den}$) can be written as
$$
\rho_0=\frac{\rho_i}{1+y}\approx\rho_i(1-y)\simeq\rho_i\left[1-(-c_2z^2+c_3z+c_4-1)\right],
$$
under the assumption that $y$ is small. Comparing with Eq. ($\ref{weak-stratif}$), the values for the constants are $c_4=1$, $c_3=\frac{1}{H}$ and $c_2=\frac{\lambda}{2}=\frac{1}{2}\left(\frac{1}{H}\right)^2$, and hence $y$ is indeed small.
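A quick numerical comparison (ours; $H$ taken for a $T=2$ MK loop via the relation $H = 47\,T$ Mm/MK quoted above) confirms that, with these matched constants, the profile of Eq. ($\ref{back_den}$) stays within a couple of percent of the exponential profile of Eq. ($\ref{grav-stratif}$) for heights $z \ll H$:

```python
import math

H = 47 * 2.0                       # scale height in Mm for T = 2 MK (H = 47 T Mm/MK)
rho_i = 1.0
c4, c3, c2 = 1.0, 1/H, 0.5/H**2    # matched constants: c4 = 1, c3 = 1/H, c2 = 1/(2H^2)

for z in (1.0, 5.0, 10.0):         # heights in Mm, all with z/H << 1
    exact = rho_i*math.exp(-z/H)                    # gravitationally stratified profile
    series = rho_i*(1 - z/H + 0.5*(z/H)**2)         # weak-stratification expansion
    model = rho_i/(-c2*z**2 + c3*z + c4)            # profile of Eq. (back_den)
    assert abs(series - exact) < 1e-3*exact         # quadratic expansion is very close
    assert abs(model - exact) < 0.02*exact          # Eq. (back_den) agrees to O((z/H)^2)
```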
Perturbing the background equations, the linearized MHD equations can be found by writing all variables in the form
$$
f(z,t)=f_0(z,t)+f_1(z,t),
$$
and neglecting all terms containing squared perturbed variables. Here the subscript (0) represents the equilibrium quantities and the subscript (1) indicates the perturbed quantities.
Since thermal conduction is known to have a strong effect on slow modes, we will concentrate our analysis on the properties of slow modes. It has been shown by a number of authors, e.g. \inlinecite{Roberts06} and \inlinecite{Luna11}, that the slow modes can be isolated by assuming $v_{1x} = v_{1y} = 0$. Therefore the linear dissipative MHD equations for the slow modes in the presence of thermal conduction reduce to a 1-D system given by
\begin{eqnarray}
\frac{\partial\rho_{1}}{\partial{t}}+\rho_{0}\frac{\partial{v_{1}}}{\partial z}+v_{1}\frac{\partial\rho_{0}}{\partial{z}}=0,\label{Eq:dimless-cont}\\
\rho_{0}\frac{\partial{v}_{1}}{\partial{t}}=-\frac{\partial{p}_{1}}{\partial z},\label{Eq:motion}\\
\frac{R}{\tilde\mu}\left[\frac{\rho_{1}}{\gamma-1}\frac{\partial{T}_{0}}{\partial{t}}+\frac{\rho_{0}}{\gamma-1}\frac{\partial{T}_{1}}{\partial{t}}+\rho_{0}T_{0}\frac{\partial{v}_{1}}{\partial z}\
-\frac{T_0}{\gamma-1}{v}_{1}\frac{\partial\rho_{0}}{\partial z}\right]=\kappa\nabla^2T_{1},\\
\frac{\partial\mathbf{B}_{1}}{\partial{t}}=\nabla\times(\mathbf{v}_{1}\times\mathbf{B}_{0})=B_{0}\frac{\partial\mathbf{v}_{1}}{\partial{z}}-\mathbf{B}_{0}\left(\frac{\partial{v}_{1}}{\partial z}\right),\\
{p}_{1}=\frac{R}{\tilde{\mu}}\left[\rho_{0}T_{1}+\rho_{1}T_{0}\right],\\
\nabla\cdot\mathbf{B}_{1}=0.\label{Eq:magnetic-field}
\end{eqnarray}
Here $v_1\equiv v_{1z}$.
Dimensionless variables can now be introduced to simplify the equations, where the following dimensionless quantities are suggested:
\begin{eqnarray}
\tilde\rho_{1}=\frac{\rho_{1}}{\rho_{i}},\,\tilde\rho_{0}=\frac{\rho_{0}}{\rho_{i}},\;\tilde{p}_{1}=\frac{p_{1}}{p_{i}},\ \tilde{T}_{1}=\frac{T_{1}}{T_{i}},\,\tilde{T}_{0}=\frac{T_{0}}{T_{i}},\,
\mathbf{\tilde{v}}_{1}=\frac{\mathbf{v}_{1}}{c_{si}},\,\\\nonumber c_{si}=\frac{l}{\tau},\,
c^2_{si}=\frac{\gamma{p}_{i}}{\rho_{i}},\,\tilde{t}=\frac{t}{\tau},\,\tilde{z}=\frac{z}{l},\,\tilde\lambda=\left(\frac{l}{H}\right)^2\!\!,\,
\tilde{c_2}=lc_2,\,\tilde{c_3}=lc_3.
\end{eqnarray}
Here $c_{si}$ is the initial sound speed, $l$ is the wavelength of the oscillations and $\tau$ is the sound travel time of a wavelength.
Thus the energy equation in terms of dimensionless variables, dropping the tildes, becomes
\begin{eqnarray}
\rho_{1}\frac{\partial{T}_{0}}{\partial{t}}+\rho_{0}\frac{\partial{T}_{1}}{\partial{t}}+(\gamma-1)\rho_{0}{T}_{0}\nabla.\mathbf{v}_{1}\
-T_{0}v_{1}\frac{\partial\rho_{0}}{\partial{z}}=d\frac{\partial^2T_{1}}{\partial{z}^2}, \
\;\textmd{where}\quad d=\frac{(\gamma-1)\kappa\,\rho_i{T_i}}{\gamma\,{p}_i^2\,\tau}.\label{Eq:dimless-energy1}
\end{eqnarray}
Using coronal values (see, e.g., \opencite{Moortel03}), we find $d$ is a small quantity, where the standard coronal values of the variables
\begin{equation}
\left\{
\begin{array}{ll}
T_0=1-6\;\textmd{MK},\\
\rho_0=1.67\times10^{-12}\; \textmd{kg m}^{-3},\\
\kappa=10^{-11}\,T_0^{5/2}\; \textmd{W m}^{-1}\; \textmd{deg}^{-1},\\
\tilde{\mu}=0.6,\\
R=8.3\times10^3\;\textmd{m}^2\;\textmd{s}^{-2}\;\textmd{deg}^{-1},\\
\gamma=5/3,\\
\tau=300\; \textmd{s},\\
\end{array}
\right.
\end{equation}
give a value of $d=0.04$ for $T=1$ MK and $d=0.61$ for $T=6$ MK.
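The quoted values of $d$ are easily reproduced from these numbers (our own check; the background pressure is obtained from the ideal gas law, $p_i = R\rho_i T_i/\tilde\mu$):

```python
gamma, R, mu = 5/3, 8.3e3, 0.6     # ratio of specific heats, gas constant, mean molecular weight
rho_i, tau = 1.67e-12, 300.0       # density (kg m^-3) and sound travel time (s)

def d_param(T_i):
    kappa = 1e-11 * T_i**2.5       # thermal conductivity, W m^-1 deg^-1
    p_i = R * rho_i * T_i / mu     # background pressure from the ideal gas law
    return (gamma - 1) * kappa * rho_i * T_i / (gamma * p_i**2 * tau)

# d is approximately 0.04 at T = 1 MK and 0.61 at T = 6 MK, matching the text
assert abs(d_param(1e6) - 0.04) < 0.005
assert abs(d_param(6e6) - 0.61) < 0.01
```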
From the linearized ideal gas law equation
\begin{equation}
{p}_{1}=\rho_{0}{T}_{1}+\rho_{1}{T}_{0},\label{eq2}\
\end{equation}
and the continuity equation
\begin{equation}
\frac{\partial\rho_{1}}{\partial{t}}+\rho_{0}\frac{\partial{v}_{1}}{\partial z}+v_{1}\frac{\partial\rho_{0}}{\partial{z}}=0,\label{eq:dimless-cont}
\end{equation}
the energy equation takes the following form
\begin{equation}
\frac{\partial{p}_{1}}{\partial{t}}+\gamma\rho_{0}{T}_{0}\frac{\partial{v}_{1}}{\partial{z}}
=d\frac{\partial^2T_{1}}{\partial{z}^2}.\label{eq:dimless-energy}
\end{equation}
Differentiating Eq. ($\ref{eq:dimless-energy}$) with respect to $z$ and using Eq. ($\ref{Eq:motion}$), we arrive at
\begin{equation}
\frac{\partial^2{v}_{1}}{\partial{t}^2}-T_{0}\frac{\partial^2{v}_{1}}{\partial{z}^2}\
=-\frac{d}{\gamma\rho_{0}}\frac{\partial^3{T}_{1}}{\partial{z}^3}.\label{Eq:modified-dimless-energy}
\end{equation}
To solve the energy equation, we start from the continuity equation and use the gas law to find an equation in terms of the velocity variable.
Substituting Eq. ($\ref{eq2}$) into Eq. ($\ref{eq:dimless-cont}$), we obtain
\begin{equation}
\frac{1}{T_{0}}\frac{\partial{p}_{1}}{\partial{t}}-\frac{p_{1}}{T_{0}^2}\frac{\partial{T}_{0}}{\partial{t}}\
-\frac{\rho_{0}}{T_{0}}\frac{\partial{T}_{1}}{\partial{t}}+\frac{\rho_{0}T_{1}}{T_{0}^2}\frac{\partial{T}_{0}}{\partial{t}}\
+\rho_{0}\frac{\partial{v}_{1}}{\partial{z}}+{v}_{1}\frac{\partial\rho_{0}}{\partial{z}}=0.\label{eq3}\\
\end{equation}
The dimensionless background temperature is
$$
T_0(z,\,t)=\frac{\exp(-\lambda{d}{t})}{\rho_0(z)},
$$
so
\begin{equation}
\frac{\partial{T}_{0}}{\partial{t}}=\delta{T}_{0},\;\ \textmd{where}\quad\delta=-\lambda{d}.\label{eq4}\
\end{equation}
Then, after substituting ($\ref{eq4}$) and multiplying by ${T}_{0}$, Eq. ($\ref{eq3}$) becomes
\begin{equation}
\frac{\partial{p}_{1}}{\partial{t}}-\delta{p}_{1}-\rho_{0}\frac{\partial{T}_{1}}{\partial{t}}+\delta\rho_{0}T_{1}\
+\rho_{0}T_{0}\frac{\partial{v}_{1}}{\partial{z}}+T_{0}{v}_{1}\frac{\partial\rho_{0}}{\partial{z}}=0.\label{Eq:cont-gas_law}
\end{equation}
Substituting this equation into Eq. $(\ref{eq:dimless-energy})$, we obtain the following equation
\begin{equation}
(\gamma-1)\rho_0T_0\frac{\partial^2v_1}{\partial t \partial z}-\delta\rho_0T_0\frac{\partial v_1}{\partial z}-\frac{\partial}{\partial t}(T_0v_1)\frac{\partial\rho_0}{\partial z}=d\frac{\partial^3T_1}{\partial t\partial z^2}-\rho_0\frac{\partial^2T_1}{\partial t^2}+\delta\rho_0\frac{\partial T_1}{\partial t},
\end{equation}
which represents another relation between the perturbed temperature and perturbed velocity in addition to Eq. ($\ref{Eq:modified-dimless-energy}$). Now, we aim to reach the governing equation for the velocity perturbation. Differentiating Eq. $(\ref{Eq:cont-gas_law})$ once with respect to $t$ and three times with respect to $z$, and using Eqs. ($\ref{eq:dimless-energy}$) and ($\ref{Eq:modified-dimless-energy}$) with some cumbersome algebraic operations, we arrive at
\begin{eqnarray}
\gamma\frac{\rho_{0}}{d}\frac{\partial^4{v}_{1}}{\partial{t}^4}-\gamma\left[\frac{\rho_{0}\delta}{d}+\frac{3}{\rho_0^2}
\left(\frac{\partial\rho_0}{\partial{z}}\right)^2-\frac{2}{\rho_0}\frac{\partial^2\rho_{0}}{\partial{z}^2}
-\frac{1}{\rho_{0}}\frac{\partial\rho_0}{\partial{z}}\frac{\partial}{\partial{z}}+\frac{\partial^2}{\partial{z}^2}\right]
\frac{\partial^3{v}_{1}}{\partial{t}^3}\nonumber\\
+\,\gamma\left[\frac{3\delta}{\rho_0^2}\left(\frac{\partial\rho_0}{\partial{z}}\right)^2-\frac{2\delta}{\rho_0}
\frac{\partial^2\rho_{0}}{\partial{z}^2}-\frac{\delta}{\rho_{0}}\frac{\partial\rho_0}{\partial{z}}\frac{\partial}{\partial{z}}
+(\delta-\frac{\rho_{0}T_{0}}{d})\frac{\partial^2}{\partial{z}^2}\right]\frac{\partial^2{v}_{1}}{\partial{t}^2}\nonumber\\
-\left[\frac{\gamma\delta\rho_{0}{T}_{0}}{d}\frac{\partial^2{}}{\partial{z}^2}-2\frac{\partial{T}_{0}}{\partial{z}}
\frac{\partial^3}{\partial{z}^3}-T_{0}\frac{\partial^4}{\partial{z}^4}\right]\frac{\partial{v}_{1}}{\partial{t}}
+\left[2\delta\frac{\partial{T}_{0}}{\partial{z}}\frac{\partial^3}{\partial{z}^3}
+\delta{T}_{0}\frac{\partial^4{}}{\partial{z}^4}\right]{v}_{1}=0.\label{eq5}
\end{eqnarray}
To simplify Eq. ($\ref{eq5}$), since $d$ and $\delta$ are small parameters, we neglect all terms of order $d\delta$ and of higher degree in $d$ or $\delta$. Then, after multiplying Eq. ($\ref{eq5}$) by $d$, we obtain the governing equation
\begin{eqnarray}
\gamma\rho_{0}\frac{\partial^4{v}_{1}}{\partial{t}^4}-\gamma\rho_{0}T_{0}\frac{\partial^2}{\partial{z}^2}\frac{\partial^2{v}_{1}}{\partial{t}^2}=
\gamma\left[\rho_{0}\delta+\frac{3d}{\rho_0^2}
\left(\frac{\partial\rho_0}{\partial{z}}\right)^2-\frac{2d}{\rho_0}\frac{\partial^2\rho_{0}}{\partial{z}^2}\right.\nonumber \\
\left.-\frac{d}{\rho_{0}}
\frac{\partial\rho_0}{\partial{z}}\frac{\partial}{\partial{z}}+d\frac{\partial^2}{\partial{z}^2}\right]
\frac{\partial^3{v}_{1}}{\partial{t}^3}
+\left[\gamma\delta\rho_{0}{T}_{0}\frac{\partial^2}{\partial{z}^2}-2d\frac{\partial{T}_0}{\partial{z}}
\frac{\partial^3}{\partial{z}^3}-dT_{0}\frac{\partial^4{}}{\partial{z}^4}\right]\frac{\partial{v}_{1}}{\partial{t}}.
\end{eqnarray}
The background quantities are slowly varying when compared to the perturbed quantities, so eliminating the small terms gives
\begin{equation}
\frac{\partial}{\partial t}\left[\frac{\partial^2{v}_{1}}{\partial{t}^2}-T_{0}\frac{\partial^2v_1}{\partial{z}^2}\right]=
\delta\left[\frac{\partial^2v_1}{\partial{t}^2}+T_0\frac{\partial^2v_1}{\partial{z}^2}\right]+\frac{d}{\rho_0}\frac{\partial^2}{\partial z^2}\left[\frac{\partial^2{v}_{1}}{\partial{t}^2}-\frac{T_0}{\gamma}\frac{\partial^2v_1}{\partial{z}^2}\right].\label{eneq}
\end{equation}
In the case of no thermal conduction, Eq. ($\ref{eneq}$) reduces to the wave equation describing the (longitudinal) acoustic mode propagating in a flux tube,
\begin{equation}
\frac{\partial^2v_1}{\partial t^2}-T_0\frac{\partial^2 v_1}{\partial z^2}=0,\label{wave}
\end{equation}
while in the case of an unstratified atmosphere, i.e. no change in the background plasma quantities, Eq. ($\ref{eneq}$) reduces to the model governing equation found by \inlinecite{Moortel03},
\begin{equation}
\frac{\partial}{\partial t}\left[\frac{\partial^2{v}_{1}}{\partial{t}^2}-T_{0}\frac{\partial^2v_1}{\partial{z}^2}\right]=d\,\frac{\partial^2}{\partial z^2}\left[\frac{\partial^2{v}_{1}}{\partial{t}^2}-\frac{1}{\gamma}\frac{\partial^2v_1}{\partial{z}^2}\right].
\end{equation}
Eq. ($\ref{eneq}$) can now be checked. Assuming the background is constant in time and space, i.e. static and unstratified, one can Fourier-analyse all perturbations as $\exp({i}(\omega{t}-{k}{z}))$. Hence, Eq. ($\ref{eneq}$) reduces to the dispersion relation
\begin{equation}
\omega^3-id\,\omega^2{k}^2-\omega{k}^2+i\frac{d}{\gamma}{k}^4=0,
\end{equation}
where $\omega$ is the frequency and $k$ is the wavenumber. This dispersion relation was first obtained by \inlinecite{Field65}. This provides confidence that Eq. ($\ref{eneq}$) is consistent with earlier studies (see, e.g. \opencite{Field65} and \opencite{Moortel03}).
Note that, in the limit ${d}\longrightarrow0$, i.e. no thermal conduction, Eq. ($\ref{eneq}$) reduces to Eq. ($\ref{wave}$), i.e. $\omega^2=k^2$, so that $\omega=\pm{k}$, which is, as expected, the undamped (longitudinal) acoustic mode.
\section{Analytical solutions}
We now seek an analytic solution to the governing Eq. ($\ref{eneq}$). Unlike previous models which include thermal conduction, there is now a time dependence due to the temporally changing background temperature. This means we cannot Fourier-analyse in time. Instead, since the RHS of Eq. ($\ref{eneq}$) has derivatives multiplied by a small factor, $d$, the WKB approximation (see, e.g. \opencite{Bender}) can be applied to find an approximate solution. The WKB approximation is more accurate for smaller values of $d$; however, it can still provide an accurate solution even for $d\approx1$. Since the variables in Eq. ($\ref{eneq}$) depend on time $t$ and space $z$, we introduce the `slow' variable $t_1=dt$ and the `local' variable $\zeta=dz$ to solve this equation. The slow timescale means, physically, that the conductive cooling timescale is longer than the period of the oscillations. The `local' lengthscale means that any significant changes in quantities in the $z$ direction occur on length scales longer than the wavelength of the oscillation. In terms of the new scaled variables, the WKB approximation takes the velocity to have the form
\begin{equation}
v_1(\zeta,\,t_1)=Q(\zeta,\,t_1)\exp\left(\frac{i}{d}\Theta(\zeta,\,t_1)\right),\label{wkb}
\end{equation}
where $Q(\zeta,\,t_1)$ and $\Theta(\zeta,\,t_1)$ are functions to be calculated.
Substituting Eq. ($\ref{wkb}$) into Eq. ($\ref{eneq}$) and taking the largest-order terms in $d$, which are of order $d^{-4}$, we obtain
\begin{equation}
\left(\frac{\partial\Theta}{\partial{t}_1}\right)^2-c_s^2(z,\,t)\left(\frac{\partial\Theta}{\partial\zeta}\right)^2=0,
\qquad c_s(z,\,t)=\sqrt{T_0}.\label{freqeq}
\end{equation}
If we assume $\partial\Theta/\partial{t}_1=\omega(z,\,t)$ and $\partial\Theta/\partial\zeta=k(z,\,t)$, where $\omega(z,\,t)$ is the frequency and $k(z,\,t)$ is the wavenumber, then Eq. ($\ref{freqeq}$) is in fact a temporally and spatially dependent dispersion relation for the longitudinal (acoustic) mode.
The next largest order terms in $d$ (of order $d^{-3}$) give the equation for the amplitude,
\begin{equation}
-2\gamma\rho_0c_s\,\frac{\partial{Q}}{\partial{t}_1}\frac{\partial\Theta}{\partial\zeta}
+\frac{\gamma}{2}\lambda\rho_0c_s\,Q\frac{\partial\Theta}{\partial\zeta}
-\gamma\rho_0c_s\, \frac{\partial{c}_s}{\partial\zeta}Q\frac{\partial\Theta}{\partial\zeta}-(\gamma-1)c_s\,Q
\left(\frac{\partial\Theta}{\partial\zeta}\right)^3 +2\gamma\rho_0c_s^2\frac{\partial{Q}}{\partial\zeta}\frac{\partial\Theta}{\partial\zeta}=0,
\end{equation}
which, after some algebra, has the form
\begin{equation}
\frac{-1}{c_s}\,\frac{\partial{Q}}{\partial{t}_1}+\frac{\partial{Q}}{\partial\zeta}
+\left[\frac{\lambda}{4c_s}-\frac{1}{2c_s}\, \frac{\partial{c}_s}{\partial\zeta}-\frac{(\gamma-1)}{2\gamma\rho_0c_s}
\left(\frac{\partial\Theta}{\partial\zeta}\right)^2\right]Q=0. \label{ampliteq}
\end{equation}
Eqs. ($\ref{freqeq}$) and ($\ref{ampliteq}$) will be solved using the method of characteristics, so we need to derive boundary conditions at $z=0$.
To achieve this we study a thin layer around $z=0$, where we may assume the spatial gradients of both $T_0$ and $\rho_0$ are very small, so that they can be considered constant in space in this region. This enables us to Fourier-analyse the perturbed variables in space, i.e. to take them $\sim\! \exp(ikz)$. Eq. ($\ref{eneq}$) then reduces to
\begin{equation}
\gamma\rho_0\frac{d^4v_1}{d{t}^4}+\gamma[\rho_0\lambda d+dk^2]\frac{d^3v_1}{d{t}^3}
+\gamma\rho_0T_0k^2\frac{d^2v_1}{d{t}^2}-[\gamma\lambda{d}\rho_0T_0k^2-dT_0k^4]\frac{d{v_1}}
{d{t}}=0.\label{eq:Ene-charact}
\end{equation}
Next, we apply the WKB method to this equation by assuming that the perturbation in terms of the variable $t_1$, where $t_1=dt$, has the form
\begin{equation}
v_1(t_1)=Q_1(t_1)\exp\left(\frac{i}{d}\Theta_1(t_1)\right).
\end{equation}
Substituting into Eq. ($\ref{eq:Ene-charact}$), the highest order equation in $d$ gives
\begin{equation}
\frac{d\Theta_1}{d{t}_1}=c_s(t_1)k.\label{Eq:Bound-con1}
\end{equation}
Eq. ($\ref{Eq:Bound-con1}$) has the solution
\begin{equation}
\Theta_1(t_1)=\frac{2k}{\lambda}[1-c_s(t_1)].\label{Bound-con1}
\end{equation}
The next order equation in $d$ is
\begin{equation}
\frac{d{Q_1}}{d{t}_1}-\left(\frac{\lambda}{4}-\frac{(\gamma-1)}{\gamma}\frac{k^2}{2}\right)Q_1=0,
\end{equation}
which has the solution
\begin{equation}
Q_1(t_1)=\exp\left[\left(\frac{\lambda}{4}-\frac{(\gamma-1)}{\gamma}\frac{k^2}{2}\right)t_1\right],\label{Bound-con2}
\end{equation}
where $k$ is the wavenumber at $z=0$.
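These boundary-condition solutions are straightforward to verify numerically. The sketch below is our own check, not the paper's code; it assumes the normalisation $\rho_0(0)=1$, so that $c_s(t_1)=\exp(-\lambda t_1/2)$ at $z=0$, and confirms by central finite differences that Eqs. ($\ref{Bound-con1}$) and ($\ref{Bound-con2}$) satisfy their respective first-order equations.

```python
import math

lam, k, gamma = 0.1, 1.0, 5.0 / 3.0   # sample values; lam is the paper's lambda

def c_s(t1):                # sound speed at z = 0, assuming rho_0(0) = 1
    return math.exp(-lam * t1 / 2.0)

def theta1(t1):             # phase at z = 0, Eq. (Bound-con1)
    return (2.0 * k / lam) * (1.0 - c_s(t1))

def q1(t1):                 # amplitude at z = 0, Eq. (Bound-con2)
    return math.exp((lam / 4.0 - (gamma - 1.0) / gamma * k**2 / 2.0) * t1)

# Central differences should reproduce d(theta1)/dt1 = c_s k and
# d(q1)/dt1 = (lam/4 - (gamma - 1) k^2 / (2 gamma)) q1.
h, t1 = 1e-6, 0.7
print((theta1(t1 + h) - theta1(t1 - h)) / (2 * h), c_s(t1) * k)
```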
We now have the necessary approximate boundary conditions at $z=0$ to proceed with the solution of Eqs. ($\ref{freqeq}$) and ($\ref{ampliteq}$) by the method of characteristics. We introduce the variables $r$ and $s$ to parameterise these equations. The characteristic equations of Eq. ($\ref{freqeq}$) are given by
$$
\frac{\partial{t_1}}{\partial{s}}=\frac{-1}{c_s},\qquad \frac{\partial\zeta}{\partial{s}}=1, \qquad \frac{\partial\Theta}{\partial{s}}=0,
$$
with boundary conditions for the characteristics curves
$$
t_1(r,0)=r, \qquad \zeta(r,0)=0,\qquad \Theta(r,0)=\Theta_1(r),
$$
where $\Theta_1(r)$ is given by ($\ref{Bound-con1}$) and $(r,0,\Theta_1)$ is a point on the curve $(t_1(r,s),\zeta(r,s),\Theta(r,s))$, which lies on the surface $\{(t_1,\zeta,\Theta(\zeta,t_1))\}$ representing the graph of the function $\Theta(\zeta,t_1)$.
The solutions of the characteristic equations are
\begin{equation}
s(\zeta,t_1)=\zeta,\label{Eq:parameter-s}
\end{equation}
\begin{equation}
r(\zeta,t_1)=\frac{-2}{\lambda}\,\ln\left[\exp(\frac{-\lambda{t_1}}{2})-\frac{\lambda{d}}{2\sqrt{c_2}}\,
\arctan\left(\frac{c_3\sqrt{F(\zeta)}+2c_2(\zeta-\frac{c_3d}{2c_2})}{2\sqrt{c_2}\sqrt{F(\zeta)}-c_3\sqrt{c_2}(\zeta
-\frac{c_3d}{2c_2})}\right)\right],\label{Eq:parameter-r}
\end{equation}
\begin{equation}
\Theta(\zeta,t_1)=\frac{-2k}{\lambda}\left(\exp(\frac{-\lambda{t_1}}{2})-\frac{\lambda{d}}{2\sqrt{c_2}}\,
\arctan\left(\frac{c_3\sqrt{F(\zeta)}+2c_2(\zeta-\frac{c_3d}{2c_2})}{2\sqrt{c_2}\sqrt{F(\zeta)}-c_3\sqrt{c_2}(\zeta
-\frac{c_3d}{2c_2})}\right)-1\right).\label{Eq:frequency}
\end{equation}
Similarly, the characteristic equations of Eq. ($\ref{ampliteq}$) are
$$
\frac{\partial{t_1}}{\partial{s}}=\frac{-1}{c_s},\qquad \frac{\partial\zeta}{\partial{s}}=1, \qquad \frac{\partial Q}{\partial{s}}=\left(\frac{\lambda}{4c_s}-\frac{1}{2c_s}\, \frac{\partial{c}_s}{\partial\zeta}-\frac{(\gamma-1)}{2\gamma\rho_0c_s}
\left(\frac{\partial\Theta}{\partial\zeta}\right)^2\right)Q,
$$
with boundary conditions
$$
t_1(r,0)=r, \qquad \zeta(r,0)=0,\qquad Q(r,0)=Q_1(r),
$$
where $Q_1(r)$ is given by ($\ref{Bound-con2}$) and $(r,0,Q_1)$ is a point on the curve $(t_1(r,s),\zeta(r,s),Q(r,s))$, which lies on the surface $\{(t_1,\zeta,Q(\zeta,t_1))\}$ representing the graph of the function $Q(\zeta,t_1)$.
The solutions of the characteristic equations are Eqs. ($\ref{Eq:parameter-s}$), ($\ref{Eq:parameter-r}$) and
\begin{eqnarray}
Q(\zeta,t_1)=\exp\left[\left(\frac{\lambda}{4}-\frac{\gamma-1}{\gamma}\frac{k^2}{2}\right)\left\{r+\frac{d}{\sqrt{c_2}}
\exp(\frac{\lambda{t_1}}{2})\arctan\left(\frac{c_3\sqrt{F(\zeta)}+2c_2(\zeta-\frac{c_3d}{2c_2})}{2\sqrt{c_2}\sqrt{F(\zeta)}-c_3\sqrt{c_2}(\zeta
-\frac{c_3d}{2c_2})}\right)\right\}\right.\nonumber \\ \label{Eq:amplitude}
\left.+\,\frac{1}{4}\ln\left(\frac{d^2}{F(\zeta)}\right)\right],
\end{eqnarray}
where
\begin{equation}
F(\zeta)=-c_2\zeta^2+c_3d\zeta+d^2=\frac{d^2}{\rho_0(\zeta)}.\label{Eq:density-d_parameter}
\end{equation}
Taking the Taylor expansion of the amplitude and ignoring the small terms, under the assumption of a slowly varying background coronal plasma and a weakly stratified atmosphere, we arrive at
\begin{equation}
Q(\zeta,t_1)=1-\left(\frac{\gamma-1}{\gamma}\frac{k^2}{2}-\frac{\lambda}{4}\right)t_1-\frac{1}{4}\sqrt\lambda\;\zeta.
\end{equation}
Eq. ($\ref{wkb}$), combined with Eqs. ($\ref{Eq:parameter-r}$)--($\ref{Eq:density-d_parameter}$), provides a full solution to Eq. ($\ref{eneq}$). All other perturbed quantities can now be calculated using Eqs. ($\ref{eq:dimless-cont}$)--($\ref{Eq:magnetic-field}$). These equations are rather complicated, so we examine a limiting case to reveal some properties of the amplitude. Assume that $\lambda=0$ (unstratified atmosphere); then the system of equations ($\ref{Eq:parameter-r}$), ($\ref{Eq:amplitude}$) and ($\ref{Eq:density-d_parameter}$) reduces to
\begin{equation}
Q(\zeta,t_1)=\exp\left[\left(-\frac{(\gamma-1)}{\gamma}\frac{k^2}{2}\right) t_1\right].
\end{equation}
This limit is in good agreement with its counterpart in \inlinecite{Moortel03}. The amplitude of the wave is damped, and the damping depends on the values of $d$ and $k$.
\section{Numerical calculations}
It has been shown in \inlinecite{Morton09d} that the WKB estimates provide good approximations to the frequency and amplitude variations in time and space when the plasma is cooling due to radiation. We see no reason why this should be any different here, where thermal dissipation in the form of thermal conduction is considered. Numerical calculations are now used to evaluate how the analytic solutions of the MHD waves behave in a system with a variable background.
The amplitude of the slow wave is computed using characteristic coronal values. In Figure \ref{Amp-damping} we plot the amplitude of the longitudinal (acoustic) wave for different values of $\lambda$ (the separation constant, related to the squared reciprocal of the scale height) and of the thermal ratio, $d$, at $z=0$. It is found that the amplitude decays rapidly in time for all values of $\lambda$ and $d$, over typical observed timescales of oscillations. In particular, Figure \ref{Amp-damping1a} shows that the oscillation amplitude declines dramatically for a coronal temperature of $10^6$ K. Figure \ref{Amp-damping1b} illustrates a faster decay of the wave amplitude in a region of temperature $3\times10^6$ K, i.e. for even hotter loops. The decay of the amplitude is enhanced even further for very hot (SXT/XRT) loops, i.e. $T=6\times10^6$ K, as evidenced by Figure \ref{Amp-damping1c}. In each plot shown in Figure \ref{Amp-damping}, it is also clearly seen that increasing the stratification of the plasma, i.e. increasing $\lambda$, decreases the rate of the damping. This is consistent with \inlinecite{Moortel04a}, who showed that adding stratification decreased the rate of damping of the slow mode due to thermal conduction. The new and important result here is that the rate of damping is much stronger than that predicted by \inlinecite{Moortel04a}. Further, there is no need here to artificially amplify the heat conduction, as opposed to \inlinecite{Moortel04a}. A note of caution is, however, needed when interpreting these results. The temperature of the plasma drops rapidly due to the cooling. As $T$ approaches 2 MK, radiation will start to become the dominant damping mechanism, so once $T$ reaches 2 MK the damping of the longitudinal mode may no longer look as shown here.
A full numerical simulation incorporating the variable background and the variable value of $d$ will need to be undertaken; this is well beyond the scope of the present study and may require a full numerical approach even for an initial insight, because the mathematical modelling is far too complex (e.g. generating background flows, etc.).
In Figure \ref{Amp-damping_k} we show how the magnitude of the thermal conduction coefficient, $\kappa$, affects the rate of damping of the longitudinal (acoustic) mode. Altering the value of $\kappa$ by an order of magnitude leads to large changes in the rate of damping, i.e. the amplitude of the longitudinal (acoustic) wave decreases rapidly with an increasing value of $\kappa$.
Next, Figure \ref{Amp-damping_z} shows how the decay of the amplitude varies for different values of $z=(0.0, 0.1, 0.5)$. Two values of the thermal ratio, $d=(0.04, 0.22)$, are taken to illustrate the influence of thermal conduction on the cool and hot corona, $T$=(1 MK, 3 MK), with the characteristic value $\lambda=0.1$. This plot shows that the strength of the damping due to thermal conduction will be, for all intents and purposes, the same along a loop.
\begin{figure}[h]
\vspace{-1 cm}
\centering
\hspace{-0.8 cm}
\subfloat{\label{Amp-damping1a}\includegraphics[height=0.4\textheight, width=0.375\textwidth]{Amplitude23.eps}}
\hspace{-0.9 cm}
\subfloat{\label{Amp-damping1b}\includegraphics[height=0.4\textheight, width=0.375\textwidth]{Amplitude24.eps}}
\hspace{-0.9 cm}
\subfloat{\label{Amp-damping1c}\includegraphics[height=0.4\textheight, width=0.375\textwidth]{Amplitude21.eps}}
\hspace{-0.8cm}
\vspace{-1 cm}
\caption{The graphs show the amplitude of oscillations with different values of $\lambda$ $(0.1, 0.5, 0.7)$ characterising stratification and specific value of $d$, i.e. the value of thermal ratio, at $z=0$. (a) $d=0.04$ ($T=1$ MK), (b) $d=0.22$ ($T=3$ MK), (c) $d=0.61$ ($T=6$ MK).}\label{Amp-damping}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[height=0.5\textheight,width=0.5\textwidth]{Amplitude27.eps}
\vspace{-1 cm}
\caption{The graph shows the amplitude of oscillations with different values of the thermal conduction coefficient, $\kappa=(10^{-10},10^{-11},10^{-12})$ at $z=0$ and $\lambda=0.1$ where $T=3$ MK.}\label{Amp-damping_k}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[height=0.5\textheight,width=0.6\textwidth]{Amplitude251.eps}
\vspace{-1 cm}
\caption{The graph shows the amplitude of oscillations as function of time at variable positions along the coronal magnetic field lines, e.g. $z=(0.0, 0.1, 0.5)$ and $\lambda=0.1$, and with different values of $d=(0.04, 0.22)$.}\label{Amp-damping_z}
\end{figure}
\section{Discussion and Conclusion}
The influence of a variable background on the longitudinal (field-aligned) MHD wave propagating in a homogeneous, magnetised plasma in a weakly stratified atmosphere has been investigated. The background plasma is assumed to be cooling as the magneto-acoustic wave propagates. Thermal conduction is the dominant mechanism causing the plasma cooling and the change of pressure as a function of time. The magnetic field was assumed to be constant and directed along the $z$ direction. A governing equation is derived under the assumption that the background plasma is cooling on a time scale greater than or comparable to the characteristic period of the perturbations. A time- and space-dependent dispersion relation that describes the propagation of the longitudinal magneto-acoustic mode is obtained, and an analytic equation for the time-dependent amplitude is also derived. The effect of the background cooling by thermal conduction on the amplitude of hot loop oscillations was then studied.
The governing equation in terms of time and space is solved using WKB theory. Leading- and first-order equations for the dispersion relation and wave amplitude, respectively, are obtained and solved analytically. An approximate solution representing the properties of the field-aligned acoustic wave is found with the aid of the method of characteristics. Numerical evaluations illustrate the behaviour of the analytic solutions. A comparison of wave behaviour for a range of initial temperatures is presented.
Enhanced by the cooling background plasma, the amplitude of longitudinal (acoustic) waves was found to decay rapidly under the influence of thermal conduction. The rate of damping of the oscillations was found to depend on the initial temperature of the plasma and the amount of stratification. It was previously shown that thermal conduction leads to the damping of slow mode oscillations. For example, the amplitude of slow modes in \inlinecite{Moortel03} was found to undergo damping, whereas in this study the amplitude was found to experience much stronger damping. It should be noted that \inlinecite{Morton09d} reported that the damping of the slow mode is also strong when the background plasma cools by another dissipation mechanism, namely radiation, with the wave amplitude suffering a rapid decrease. Consequently, we conclude that magneto-acoustic oscillations of the hot corona can undergo strong damping because of cooling by thermal conduction, while radiation is the dominant damping mechanism for cool coronal oscillations.
Overall, the presented results on the damping of coronal oscillations imply that both radiation and thermal conduction should be taken into account, even for the background plasma, because of their efficiency in dominating the properties and lifetimes of MHD oscillations in the cool and hot corona, respectively. The main point is that these dissipative processes introduce a dynamic plasma background. Such a background is often reported in observations. The MHD wave theory of dynamic plasmas is in its infancy, and we argue that taking a dynamic plasma background into account is essential for an adequate modelling of solar coronal processes. The temporal evolution of the background plasma has an extremely important effect on the properties of waves and needs to be incorporated into future models. This is especially necessary when deriving plasma parameters by magneto-seismological methods, e.g. in applications to the solar corona.
\medskip
\begin{Ack}
The authors would like to thank Prof M.S. Ruderman for useful discussion. R.E. acknowledges M. K\'{e}ray for patient encouragement. The authors are also grateful to NSF, Hungary (OTKA, Ref. No. K67746), Science and Technology Facilities Council (STFC), UK and Ministry of Higher Education, Oman for the financial support.
\end{Ack}
\bibliographystyle{spr-mp-sola}
\section{Introduction and discussion}
\setcounter{equation}{0}
The recent revival of interest in metastable supersymmetry breaking in
quantum field theory is largely due to the work of Intriligator,
Seiberg and Shih~\cite{Intriligator:2006dd} (ISS). This work presents
a mechanism to naturally circumvent some of the problems afflicting
other models of dynamical supersymmetry breaking
(DSB)~\cite{Shadmi:1999jy, Nelson:1993nf, Intriligator:2007py,
Intriligator:2007cp}. A natural question that was posed immediately
after~\cite{Intriligator:2006dd} is whether metastable vacua also
exist in string realizations of supersymmetric field theories.
For type IIA brane--engineering models of supersymmetric field
theories, the answer to this question is negative~\cite{Bena:2006rg}.
Indeed, these models are constructed using D4 branes ending on
codimension--two defects inside NS5
branes~\cite{Ooguri:2006bg,Franco:2006ht,Bena:2006rg}, which source
NS5 worldvolume fields that grow logarithmically at infinity. In
supersymmetric vacua this logarithmic growth encodes the running of
the gauge theory coupling constant with the
energy~\cite{Witten:1997sc, Hori:1997ab, Brandhuber:1997iy,
Giveon:1998sr}, but these logarithmic modes are different in the
candidate metastable brane configuration and in the supersymmetric
one. This implies that the candidate metastable brane configuration
and the supersymmetric one differ by an infinite amount, and hence
cannot decay into each other. Hence, the type IIA brane construction
does not describe a metastable vacuum of a supersymmetric theory, but
instead a nonsupersymmetric vacuum of a nonsupersymmetric theory.
Another arena where one might try to find string theory realizations of
metastable vacua is given by IIB holographic duals of certain supersymmetric gauge theories.
The best--known example in this class was proposed by Kachru, Pearson
and Verlinde~\cite{Kachru:2002gs, DeWolfe:2004qx}, who argued that a background with
anti--D3 branes at the bottom of the Klebanov--Strassler warped deformed
conifold~\cite{Klebanov:2000hb} is dual to a metastable vacuum of the
dual supersymmetric gauge theory. Since the Klebanov--Strassler
solution has positive D3 brane charge dissolved in flux, the anti--D3
branes can annihilate against this charge (this annihilation happens
via the polarization of the anti--D3 branes into an NS5 brane~\cite{Myers:1999ps,Polchinski:2000uf}), and this bulk process
is argued to correspond to the decay of the metastable vacuum to the
supersymmetric one in the dual field theory.
Another proposal for a metastable vacuum obtained by putting
anti--branes at the bottom of a smooth warped throat with positive
brane charge dissolved in flux has recently been made by Klebanov and
Pufu~\cite{Klebanov:2010qs}, who argued that probe anti--M2 branes at
the tip of a supersymmetric warped M--theory background with
transverse Stenzel space~\cite{Stenzel:1993} give rise to a
long--lived metastable vacuum. The supersymmetric solution, first
found by Cveti\v c, Gibbons, L\"u and Pope (CGLP)
in~\cite{Cvetic:2000db} has M2 charge dissolved in fluxes and a large
$S^4$ in the infrared. The anti--branes can annihilate against the
charge dissolved in fluxes by polarizing into M5 branes~\cite{Bena:2000zb}
wrapping three--spheres inside the $S^4$.
The probe brane analyses described above, while indicative that a
metastable vacuum might exist, are however not enough to establish
this. One possible issue which can cause the backreacted solution to
differ significantly from the probe analysis is the presence of
non--normalizable modes. If the anti--branes indeed source such modes,
then the candidate metastable configuration is not dual to a
non--supersymmetric vacuum of a supersymmetric theory, but to a
non--supersymmetric vacuum of a non--supersymmetric theory, and the
supersymmetry breaking is not dynamical but explicit. The existence
of non--normalizable modes is not visible in the probe approximation
(much like the existence of type IIA log--growing modes was not
visible in $g_s=0$ brane
constructions~\cite{Ooguri:2006bg,Franco:2006ht}), but only upon
calculating the backreaction of the probe branes -- a not too easy
task.
In~\cite{Bena:2009xk} two of the authors and M.~Gra\~ na found the
possible first--order backreacted solution sourced by a stack of
anti--D3 branes smeared on the large $S^3$ at the bottom of the
Klebanov--Strassler (KS) solution, and found two very interesting
features: first, of the 14 physical modes describing $SU(2) \times
SU(2) \times Z_2$--invariant perturbations of the warped deformed
conifold, only one mode enters in the expression of the force that a
probe D3 brane feels in this background. Hence, since anti--D3 branes
attract probe branes, if the perturbed solution is to have any chance
to describe backreacted anti--D3 branes, this mode must be
present\footnote{The asymptotic behavior of the force matches the one
argued for in~\cite{Kachru:2003sx}, and the existence of this mode
was first intuited in~\cite{DeWolfe:2008zy} which set out to study
the UV asymptotics of the perturbations corresponding to anti--D3
branes in the KT background\cite{Klebanov:2000nc}.}. The second
feature of this solution is that if the force mode is present, the
infrared\footnote{An IR analysis of some of the non--supersymmetric
isometry--preserving perturbations of the Klebanov--Strassler
background can also be found in~\cite{McGuirk:2009xx}.} must contain a certain singularity,
which has {\it finite} action\footnote{This was first observed by I. Klebanov.}. Note that
having a finite action does not automatically make a singularity
acceptable -- negative--mass Schwarzschild is an obvious
counterexample~\cite{Horowitz:1995ta}. As discussed
in~\cite{Bena:2009xk}, if this singularity is unphysical, then the
solution sourced by the anti--D3 branes cannot be thought of as a
small perturbation of the KS solution, and therefore does not describe
a metastable vacuum of the dual theory. If this singularity is
physical, the first--order solution does describe anti--D3 branes at
the bottom of the KS solution, and work is in progress to determine
the features of this solution, and whether or not the perturbative
anti--D3 brane solution describes metastable vacua of the dual
theory.
The purpose of this paper is to calculate the first--order
backreaction of the other proposed metastable configuration with
anti--branes in a background with charge dissolved in fluxes: the
anti--M2 branes in the Stenzel--CGLP solution~\cite{Cvetic:2000db}. In
order to do this we smear the anti--M2 branes on the large $S^4$ at
the bottom of the Stenzel--CGLP solution, and solve for all possible
deformations of this background that preserve its $\text{SO}(5)$
symmetry. We consider an ansatz for these deformations; the space of
deformations is parameterized by 6 functions of one variable
satisfying second--order differential equations. However, when
perturbing around a supersymmetric solution, Borokhov and
Gubser~\cite{Borokhov:2002fm} have observed that these second--order
equations factorize into first--order ones, that are much easier to
solve. Nevertheless, in order to apply the Borokhov--Gubser method,
one needs to find the superpotential underlying the supersymmetric
solution, which for the warped fluxed Stenzel--CGLP solution was not
known until now. The first result of this paper, presented in Section 2, is to
find this superpotential\footnote{This is the equivalent of the
Papadopoulos--Tseytlin superpotential for the KS
solution~\cite{Papadopoulos:2000gj, Cassani:2010na, Bena:2010pr}.},
and derive two sets of first--order equations governing the space of
deformations.
We then show in Section 3 that the force felt by a probe M2 brane in
the most general perturbed background depends on only one of the
``conjugate--momentum'' functions that appear when solving the
first--order system, and hence on only one of the 10 constants
parameterizing the deformations around the supersymmetric solution. We
then solve in Section 4 the two sets of first--order differential
equations. Amazingly enough, the solutions for the first set of
equations (for the conjugate--momentum functions) can be found
explicitly in terms of incomplete elliptic integrals (a huge
improvement on the situation in~\cite{Bena:2009xk}). We also find the
homogeneous solutions to the other equations and give implicitly the
full solution to the system in terms of integrals. We also provide the
explicit UV and IR expansions of the full space of deformations, and
find which deformations correspond to normalizable modes and which
deformations correspond to non--normalizable modes.
In Section 5 we then use the machinery we developed to recover the
perturbative expansion of the known solution sourced by BPS M2 branes
smeared on the $S^4$ at the tip of the Stenzel--CGLP
solution~\cite{Cvetic:2000db}, and analyze the infrared of the
possible solution sourced by anti--M2 branes. After removing some
obviously unphysical divergences and demanding that in the
first--order backreacted solution a probe M2 brane feels a nonzero
force, we find that the only backreacted solution that can correspond
to anti--M2 branes must have an infrared singularity, coming from a
four--form field strength with two or three legs on the three--sphere
that is shrinking to zero size at the tip of the Stenzel space.
Hence, the first--order backreacted solution for the anti--M2 branes
has the same two key features as the anti--D3 branes in KS: the force
felt by a probe M2 brane in this background depends only on one of the
10 physical perturbation modes around this solution, and the solution
where the force--carrying mode is turned on must have an infrared
singularity coming from a divergent energy in the M--theory four--form
field strength. Nevertheless, unlike in the ``anti--D3 in KS''
solution, the action of this infrared singularity also diverges.
Again, if this singularity is physical, our first--order backreacted
solution describes anti--M2 branes in the CGLP background, and, to our
knowledge, would be the first backreacted supergravity solution dual
to metastable susy--breaking in 2+1 dimensions since the work of
Maldacena and N\u{a}stase~\cite{Maldacena:2001pb}. This may be of interest both in the
programme of using the AdS/CFT correspondence to describe
strongly-interacting condensed--matter systems, and
also in view of the relevance of three--dimensional QFT's at strong
coupling to a recent holographic model of four--dimensional
cosmology~\cite{McFadden:2010na}. On the other hand, if the
singularity is not physical then the backreaction of the anti--M2
branes cannot be taken into account perturbatively; this indicates
that the only solution with proper anti--M2 brane boundary conditions
in the infrared is the solution for anti--M2 branes in a CGLP
background with anti--M2 brane charge dissolved in flux, and hence the
anti--M2 branes flip the sign of the M2 brane charge dissolved in flux.
Given the similarity of the results of the ``anti--D3 in KS'' and of
the ``anti--M2 in CGLP'' analyses and the drastically--different
calculations leading to them, it is rather natural to expect that the
underlying physics of the two setups is the same: either both
singularities are physical, which indicates that anti--branes in
backgrounds with charge dissolved in fluxes give rise to metastable
vacua, or they are both unphysical, which supports the idea that
anti--branes in such backgrounds cannot be treated as a perturbation of the original
solution, and may flip the sign of the charge dissolved
in flux. Furthermore, our analysis suggests that one cannot use the
finiteness of the action as a criterion for accepting a singularity.
This would allow the anti--D3 singularity and exclude the anti--M2
one, which would be rather peculiar, given the striking resemblance of
the two systems.
There are a few possible explanations for the singularities we
encounter in the anti--M2 and anti--D3 solutions. One is that these singularities are accompanied by
stronger, physical singularities, coming from the smeared anti--M2 or anti--D3
sources, and one can hope that whatever mechanism renders the stronger
singularities physical may cure the subleading ones as well. Another
explanation is that the subleading singularities are a result of
smearing the antibranes. This is a difficult argument to support with
calculational evidence, as the unsmeared solution is a formidable
problem even for BPS branes in Stenzel spaces \cite{Krishnan:2008gx,
Pufu:2010ie}. Furthermore, a naive comparison of the anti--M2 and
anti--D3 solutions indicates that the stronger the physical singularity
associated with the brane sources is, the stronger the subleading
singularity will be. Hence, it is likely that unsmearing will make
things worse, not better. Note also that one cannot link the divergent
four--form field strength with the M5 branes into which the anti--M2
branes at the tip of the Stenzel--CGLP solution polarize -- they have
incompatible orientations.
It is also interesting to remember that when one attempts to build
string realisations of four-dimensional metastable vacua, either via
brane constructions~\cite{Bena:2006rg} or via
AdS-CFT~\cite{Bena:2009xk}, the non--normalizable modes one encounters are
log--growing modes, which one could in hindsight have expected
from the generic running of coupling constants of four--dimensional
gauge theories with the energy.
For anti--M2 branes there is no such link. There exist both AdS/CFT
duals of metastable vacua of 2+1 dimensional gauge
theories~\cite{Maldacena:2001pb}, as well as brane--engineering
constructions of such metastable vacua (using D3 branes ending on
codimension--three defects inside NS5 branes)~\cite{Giveon:2009bv}.
The nonexistence of an anti--M2 metastable vacuum could only be seen
in supergravity, and comes from the way the fields of the anti--M2
brane interact with the magnetic fields that give rise to the charge
dissolved in fluxes. This may indicate there is a problem with trying
to construct metastable vacua in string theory by putting antibranes
in backgrounds with charge dissolved in fluxes. In an upcoming
paper~\cite{Giecold:D2} we will also argue that anti--D2 branes in
backgrounds with D2 brane charge dissolved in
fluxes~\cite{Cvetic:2001ma}, that one of us investigated
in~\cite{Giecold:2009wj}, have similar problems.
\section{Perturbations around a supersymmetric solution}
\setcounter{equation}{0}
We are interested in the backreaction of a set of anti--M2 branes
spread on a four--sphere at the bottom of the warped Stenzel
geometry~\cite{Stenzel:1993} with nontrivial fluxes. Smearing the
anti--M2's is necessary in order for the perturbed solution to have
the same $\text{SO}(5)$ global symmetry as the supersymmetric solution
of Cveti\v c, Gibbons, L\"u and Pope (CGLP)~\cite{Cvetic:2000db}. The
perturbed metric and flux coefficients are then functions of only one
radial variable, and generically satisfy $n$ second--order differential
equations.
However, when perturbing around a supersymmetric solution governed by
a superpotential, Borokhov and Gubser~\cite{Borokhov:2002fm} have
observed that these $n$ second--order equations factorize into $n$
first--order equations for certain momenta and $n$ first--order
equations for the metric and flux coefficients, and that furthermore
the $n$ equations for the momenta do not contain the metric and flux
coefficients, and hence can be solved independently. This technique
has been used in several related works~\cite{Bena:2009xk,
Borokhov:2002fm, Kuperstein:2003yt} and we consider this to be the
technique of choice for deformation problems that depend on just one
coordinate.
\subsection{The first--order Borokhov--Gubser formalism}
While the following summary can be found by now in several sources, we include it here for completeness.
When the equations of motion governing the fields $\phi^a$ of a certain supersymmetric solution come from the reduction to a one--dimensional Lagrangian
\begin{eqnarray}
\mathcal{L} = - \frac{1}{2} G_{ab} \frac{d \phi^a}{ d \tau} \frac{d \phi^b}{ d \tau} - V(\phi) \label{Lag1}
\end{eqnarray}
whose potential $V(\phi)$ comes from a superpotential,
\begin{eqnarray}
V(\phi) = \frac{1}{8}\, G^{ab}\, \frac{\partial W}{ \partial \phi^a}\, \frac{\partial W}{ \partial \phi^b} \, , \label{VWW}
\end{eqnarray}
the Lagrangian can be rewritten as
\begin{eqnarray}
\mathcal{L} = - \frac{1}{2} G_{ab} \left( \frac{d \phi^a}{ d \tau} -\frac{1}{2} G^{ac} \frac{\partial W}{ \partial \phi^c} \right) \left( \frac{d \phi^b}{ d \tau} -\frac{1}{2} G^{bd} \frac{\partial W}{ \partial \phi^d} \right) - \frac{1}{2}\, \frac{d W}{d \tau} \, ,
\end{eqnarray}
and the supersymmetric solutions satisfy
\begin{eqnarray}
\frac{d\phi^a}{d\tau}-\frac{1}{2} G^{ab} \frac{\partial W}{\partial \phi^b} =0\label{gradflow} \, .
\end{eqnarray}
We now want to find a perturbation in the fields $\phi^a$ around their supersymmetric background value $\phi^a_0$
\begin{eqnarray}
\label{split}
\phi^a=\phi^a_0+\phi^a_1(X) + {\cal O}(X^2)\ ,
\end{eqnarray}
where $X$ represents the set of perturbation parameters in which $\phi^a_1$ is linear.
The deviation from the gradient flow equations for the perturbation $\phi_1^a$ is measured
by the conjugate momenta $\xi_a$
\bea
\xi_a &\equiv& G_{ab}(\phi_0) \left(\frac{d\phi_1^b}{d\tau} -M^b_{\ d} (\phi_0) \phi_1^d \right) \ ,\label{xidef} \\
M^b{}_d&\equiv&\frac{1}{2} \frac{\partial}{\partial \phi^d} \left( G^{bc} \frac{\partial W}{\partial \phi^c} \right) \ .
\eea
The $\xi_a$ are linear in the expansion parameters $X$, hence they are of the same order as the $\phi_1^a$. When all the $\xi_a$ vanish the deformation is supersymmetric.
The main point of this construction is that the second--order equations of motion governing the perturbations reduce to a set of first--order linear equations for $(\xi_a,\phi^a)$:
\bea
\frac{d\xi_a}{d\tau} + \xi_b M^b{}_a(\phi_0) &=& 0 \, , \label{xieq} \\
\frac{d\phi_1^a}{d\tau} - M^a{}_b(\phi_0) \phi_1^b &=& G^{ab} \xi_b \label{phieq} \, .
\eea
Note that equation~\eqref{phieq} is just a rephrasing of the definition of the $\xi_a$ in~\eqref{xidef}, while~\eqref{xieq} implies the equations of motion. Since one considers these perturbations in a metric ansatz in which the reparametrization invariance of the radial variable is fixed, in addition to these equations one must enforce the zero--energy condition
\be \label{ze}
\xi_a \frac{d \phi_0^{a}}{d r} = 0 \, .
\ee
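As a consistency check on this formalism, note that setting all the $\xi_a$ to zero in~\eqref{phieq} gives
\be
\frac{d\phi_1^a}{d\tau} = M^a{}_b(\phi_0)\, \phi_1^b = \frac{1}{2}\, \frac{\partial}{\partial \phi^b}\left( G^{ac}\, \frac{\partial W}{\partial \phi^c} \right)\bigg|_{\phi_0} \phi_1^b \, ,
\ee
which is precisely the linearization of the gradient--flow equations~\eqref{gradflow} around $\phi_0$, in agreement with the statement above that the deformation is supersymmetric when all the $\xi_a$ vanish.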
\subsection{The perturbation ansatz}
Using the analysis of the CGLP solution in~\cite{Klebanov:2010qs},
one can easily see that the ansatz for the $\text{SO}(5)$--invariant
eleven--dimensional supergravity solution we are looking for is
\bea\label{ansatz metric}
ds^2 & =& e^{-2z(r)} dx_{\mu} dx^{\mu} + e^{z(r)} \left[ e^{2\, \gamma(r)} \, dr^2 + e^{2 \, \alpha(r)} \sigma_{i}^2 + e^{2\, \beta(r)}\tsig_{i}^2 + e^{2\, \gamma(r)} \nu^2 \right] \nonumber\\
& =& e^{-2z(r)} dx_{\mu} dx^{\mu} + d\tau^2 + a(\tau)^2 \sigma_{i}^2 + b(\tau)^2\tsig_{i}^2 + c(\tau)^2 \nu^2 \, , \\
G_4 &=& d K(\tau) \wedge dx^0 \wedge dx^1 \wedge dx^2 + m\, F_4 \ ,
\eea
where $F_4 = d A_3$ and
\bea
A_3 &=& f(\tau)\,\tsig_1 \wedge\tsig_2 \wedge\tsig_3 + h(\tau)\, \epsilon^{ijk}\, \sigma_i \wedge \sigma_j \wedge \tsig_k \\
\Rightarrow F_4 &=& \dot f \, d\tau \wedge \tilde{\sigma}_1 \wedge \tilde{\sigma}_2 \wedge \tilde{\sigma}_3 + \dot h \,\epsilon^{ijk} \, d\tau \wedge \sigma_i \wedge \sigma_j \wedge \tilde{\sigma}_k \nonumber\\
&& + \frac{1}{2}\, (4 h - f)\, \epsilon^{ijk}\, \nu \wedge \sigma_i \wedge \tilde{\sigma}_j \wedge \tilde{\sigma}_k - 6\, h\,\nu \wedge \sigma_1 \wedge \sigma_2 \wedge \sigma_3 \, . \label{F4 ansatz}
\eea
Our notation for the one--forms on the Stenzel space is by now standard~\cite{Klebanov:2010qs}, in the sense that with the definitions
\be
\sigma_i = L_{1i} \, ,\ \ \ \
\tsig_i = L_{2i} \, ,\ \ \ \
\nu = L_{12} \, ,
\ee
they satisfy
\bea\label{d sigma nu L}
d\sigma_i &=& \nu \wedge\tsig_i + L_{ij}\wedge \sigma_j\,,\\
d\tilde{\sigma}_i &=& -\nu \wedge \sigma_i + L_{ij}\wedge\tsig_j\,,\\
d\nu &=& -\sigma_i \wedge \tilde{\sigma_i}\,,\\
dL_{ij} &=& L_{ik} \wedge L_{kj} - \sigma_i \wedge \sigma_j -\tsig_i \wedge\tsig_j\,.
\eea
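As a quick check, these structure equations are consistent with $d^2 = 0$. For instance,
\be
d^2 \nu = - d\sigma_i \wedge \tsig_i + \sigma_i \wedge d\tsig_i = - L_{ij} \wedge \left( \sigma_j \wedge \tsig_i + \sigma_i \wedge \tsig_j \right) = 0 \, ,
\ee
where the terms involving $\nu$ cancel pairwise and the last expression vanishes because the $L_{ij}$ are antisymmetric while the bracket is symmetric in $(i,j)$.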
Integrating one particular component of the equation of motion for the flux
\be
d*G_4 = \half G_4\w G_4
\ee
gives
\be
\label{H tilde min}
K^{\prime} = 6\, m^2\, \Bslb h\, (f - 2\, h) - \frac{1}{54} \Bsrb\, e^{-3(\alpha + \beta) - 6 z} \, ,
\ee
where we have chosen the integration constant such that the BPS solution~\cite{Cvetic:2000db}
is regular, i.e.~there are no explicit source M2 branes. We refer to Appendix A for more details.
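One can verify this statement directly from the supersymmetric profiles~\eqref{0th solt} below: at the tip $r=0$ one has $f_0(0) = -2/3^{3/2}$ and $h_0(0) = -1/(2 \cdot 3^{3/2})$, so that
\be
h_0\, (f_0 - 2\, h_0)\, \Big|_{r=0} = \left( - \frac{1}{2 \cdot 3^{3/2}} \right) \left( - \frac{1}{3^{3/2}} \right) = \frac{1}{54} \, .
\ee
In fact $h_0\, (f_0 - 2\, h_0) - \frac{1}{54} = - \frac{r^4}{54} + {\cal O}(r^6)$, which overcomes the $r^{-3}$ divergence of $e^{-3(\alpha_0+\beta_0)}$, so that $K^{\prime}$ vanishes at the tip and there is indeed no explicit M2 source there.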
Performing a standard dimensional reduction on this ansatz down to one dimension, we obtain the following
Lagrangian
\be
L=(T_{gr}+T_{mat}) - (V_{gr}+V_{mat})
\ee
with the gravitational and matter sectors given by
\bea
T_{gr} &=& 3\, e^{3\,\left(\alpha + \, \beta \right)}\, \left[ \alpha^{\prime 2} + \beta^{\prime 2} - \frac{3}{4}\, z^{\prime 2} + 3 \alpha^{\prime} \beta^{\prime} + \alpha^{\prime} \gamma^{\prime} + \beta^{\prime} \gamma^{\prime} \right] \, , \label{T gr def} \\
V_{gr} &=& \frac{3}{4}\, e^{\alpha + \beta}\, \Bslb e^{4 \alpha} + e^{4 \beta} + e^{4 \gamma} - 2\, e^{2 \alpha + 2 \beta} - 6\, e^{2 \alpha + 2 \gamma }\Bsrb \label{V gr def}
\eea
and
\bea
T_{mat} &=& - \frac{m^2}{4}\, e^{3\, \alpha + \beta - 3\, z} \left( f^{\prime 2}\, e^{-4\beta} + 12\, h^{\prime 2}\, e^{-4\alpha} \right)\, , \label{Tmat} \\
V_{mat} &=&\, 3\, m^2\, e^{\alpha + 3\, \beta - 3\, z}\, \left[3\, h^2\,e^{-4\alpha} + \frac{1}{4}\,(4h-f)^2\, e^{-4\beta} \right] \nonumber\\
&& + 9\, m^4\,e^{- 3\, \left( \alpha + \beta + 2\, z\right)}\, \left[ h\, ( f - 2\, h ) - \frac{1}{54}\right]^2 .\label{Vmat}
\eea
The superpotential is given by
\be
W = - 3 \, e^{2 \alpha + 2 \beta} \, (e^{2\alpha} + e^{2\beta} + e^{2\gamma}) - 6\, m^2\, e^{-3 z}\, \left[ h\, (f - 2\, h) - \frac{1}{54} \right].
\ee
It is worth noting that equation~\eq{VWW} only defines the superpotential up to one overall sign, which can then be absorbed in \eq{xieq} and \eq{phieq} by changing the sign of the radial variable and of the $\xi_a$. However, with the wisdom of hindsight, we choose a radial variable such that the fields decay at plus infinity rather than at minus infinity, thus simultaneously fixing the sign of the superpotential.
\subsection{The supersymmetric background}
Here we summarize the expressions that the fields in our ansatz take when
specialized to the zeroth--order CGLP solution~\cite{Cvetic:2000db}
around which we endeavor to study supersymmetric and
non--supersymmetric perturbations.
We should note that the CGLP solution with transverse Stenzel geometry is
to the warped M--theory solution with transverse Stiefel
space~\cite{Ceresole:1999zg} what the IIB Klebanov--Strassler
solution~\cite{Klebanov:2000hb} and the deformed
conifold~\cite{Candelas:1989js} are to the Klebanov--Tseytlin
solution~\cite{Klebanov:2000nc} and the singular conifold. The Stenzel
space is a higher--dimensional generalization of the deformed
conifold. A useful summary of many details of the supergravity solution
can be found in \cite{Martelli:2009ga} and proposals for the dual
field theory can be found in \cite{Martelli:2009ga,Jafferis:2009th}.
The supersymmetric solution around which we will perturb was found in~\cite{Cvetic:2000db}. It can be summarized in our ansatz by
\bea\label{0th solt}
e^{2\, \alpha_0} &=& \frac{1}{3}\, \left( 2 + \cosh(2\, r) \right)^{1/4}\, \cosh(r) \, , \\
e^{2\, \beta_0} &=& \frac{1}{3}\, \left( 2 + \cosh(2\, r) \right)^{1/4}\, \sinh(r) \tanh(r) \, ,\\
e^{2\, \gamma_0} &=& \left( 2 + \cosh(2\, r)\right)^{-3/4}\, \cosh^3(r) \, , \\
f_0 &=& \frac{1}{3^{3/2}} \frac{\left( 1 - 3 \cosh^2 (r) \right)}{\cosh^3(r)} \, ,\\
h_0 &=& - \frac{1}{3^{3/2}\, 2} \frac{1}{\cosh(r)} \, ,\\
e^{3z_0(y)} &=&2^{5/2}\, 3\, m^2 \int^{\infty}_{y} \frac{du}{\left( u^4 - 1\right)^{5/2}} \, ,
\eea
where
\begin{eqnarray}\label{r to y}
y^4 \equiv 2 + \cosh(2\, r) \, .
\end{eqnarray}
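For later convenience we record how the radial measure transforms under this change of coordinate: since $\cosh(2\, r) = y^4 - 2$, one has $\sinh^2(2\, r) = (y^4 - 1)(y^4 - 3)$ and therefore
\be
dr = \frac{2\, y^3}{\sqrt{(y^4 - 1)(y^4 - 3)}}\, dy \, ,
\ee
with the tip $r = 0$ corresponding to $y = 3^{1/4}$ and the large--$r$ region to $y \ra \infty$.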
With this change of coordinate we can write
\bea
e^{3\, z_0} & =&\sqrt{2}\, m^2 \frac{y\, \left(7 - 5\, y^4 \right)}{\left( y^4 - 1\right)^{3/2}} + 5\, \sqrt{2}\, m^2 F\left(\text{arcsin}\left(\frac{1}{y}\right) \mid - 1\right)\, ,
\eea
where the incomplete elliptic integral of the first kind is
\begin{eqnarray}\label{F integral}
F(\phi \mid q) = \int_0^{\phi} \left( 1 - q\, \sin(\theta)^2 \right)^{-1/2} d\theta
\end{eqnarray}
and we have fixed the integration constant (denoted $c_0$ in~\cite{Cvetic:2000db}) by requiring $e^{3z_0}\ra 0$ as $r \ra \infty$.
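As a simple consistency check of this expression, one can use $\frac{d}{dy}\, F\left(\text{arcsin}\left(\frac{1}{y}\right) \mid -1 \right) = - \left( y^4 - 1 \right)^{-1/2}$ to compute
\be
\frac{d}{dy}\, e^{3 z_0} = \sqrt{2}\, m^2\, \frac{5\, y^8 - 10\, y^4 - 7 - 5 \left( y^4 - 1 \right)^2}{\left( y^4 - 1 \right)^{5/2}} = - \frac{2^{5/2}\, 3\, m^2}{\left( y^4 - 1 \right)^{5/2}} \, ,
\ee
which indeed reproduces (minus) the integrand of the integral representation of $e^{3 z_0}$ given in~\eqref{0th solt}.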
\subsection{Explicit equations}
We now write out explicitly the two sets of equations \eq{xieq} and \eq{phieq}. In both cases a particular field redefinition simplifies
things substantially.
\subsubsection{$\xi_a$ equations}
The $\xi_a$ equations \eqref{xieq} simplify in the basis
\begin{eqnarray}\label{New xi basis}
\tilde{\xi}_a = \left(\xi_1 + \xi_2 + \xi_3, \xi_1 - \xi_2 + 3\, \xi_3, \xi_1 + \xi_2 - 3\, \xi_3,\xi_4, \xi_5, \xi_6 \right)\, .
\end{eqnarray}
In the order in which we solve them, the equations are
\bea
\tilde{\xi}_4^{\prime} &=& 6\, m^2\, e^{-3 (\alpha_0+\beta_0+z_0)}\, \left((f_0 - 2\, h_0)\, h_0 -\frac{1}{54}\right)\, \tilde{\xi}_4 \, ,\label{xi4} \\
\tilde{\xi}_1^{\prime} &=& 12\, m^2\, e^{-3 (\alpha_0+\beta_0+z_0)}\, \left( ( f_0 - 2\, h_0)h_0 -\frac{1}{54} \right)\, \tilde{\xi}_4 \, ,\label{xi1} \\
\tilde{\xi}_5^{\prime} &=& \frac{1}{2}\, e^{\alpha_0-\beta_0}\, \tilde{\xi}_6 - 2\, m^2\, h_0\, e^{-3 (\alpha_0+\beta_0+z_0)}\,\tilde{\xi}_4 \label{xi5} , \\
\tilde{\xi}_6^{\prime} &=& 6\, e^{- 3(\alpha_0 - \beta_0)}\, \tilde{\xi}_5 - 2\, e^{\alpha_0 - \beta_0}\, \tilde{\xi}_6 - 2\, m^2\, e^{-3 (\alpha_0 + \beta_0 + z_0)}\, (f_0 - 4\, h_0)\, \tilde{\xi}_4 \, ,\label{xi6} \\
\tilde{\xi}_3^{\prime} &=& \frac{2}{9}\, e^{-3 (\alpha_0+\beta_0+z_0)}\, \left[18\, e^{2 (\alpha_0+\beta_0+\gamma_0)+3 z_0}\, \tilde{\xi}_3 + m^2\, \left( 54\, h_0\, (f_0 - 2\, h_0) - 1\right)\,\tilde{\xi}_4 \right] \, \non\\&& \label{xi3} \\
\tilde{\xi}_2^{\prime} &= & \frac{1}{2}\, e^{-3 \alpha_0 - \beta_0}\, \Big[ 2\, e^{2 (\alpha_0 + \beta_0)} \tilde{\xi}_2 - 6\, e^{2 (\alpha_0+\gamma_0)} \tilde{\xi}_3 - 72\, h_0\, e^{4 \beta_0}\, \tilde{\xi}_5 \nonumber\\
&&+ e^{4 \alpha_0}\, \left( - 3\, \tilde{\xi}_1 + 2\, \tilde{\xi}_2 + 3\, \tilde{\xi}_3 + 2\, ( f_0\, - 4\, h_0)\, \tilde{\xi}_6 \right) \Big]\, , \label{xi2}
\eea
where we remind the reader that a prime denotes a derivative with respect to $r$, not $y$~\eqref{r to y}.
\subsubsection{$\phi^{a}$ equations} \label{sec:phieq}
The $\phi_a$ equations benefit from a field redefinition as well,
\bea\label{phi redef}
\phi^{a} &=& \left( \alpha, \beta, \gamma, z, f, h \right) \, , \\
\tilde{\phi}_{a} &=& (\phi_1 - \phi_2, \phi_1 + \phi_2 - 2\, \phi_3, \phi_3, \phi_4, \phi_5, \phi_6)\,
\eea
and we find
\bea
\tilde{\phi}_1^{\prime} &=& \frac{1}{12}\, e^{-3 (\alpha_0+\beta_0)}\, \left[ - 3\, \tilde{\xi}_1 + 4\, \tilde{\xi}_2 + 3\, \left(\tilde{\xi}_3 - 4\, e^{2 (\alpha_0+\beta_0)}\, \left(e^{2 \alpha_0} + e^{2 \beta_0} \right) \tilde{\phi}_1 \right) \right] \, ,\label{phi1} \\
\tilde{\phi}_2^{\prime} &=& \frac{1}{12}\, e^{-3 (\alpha_0 + \beta_0)}\, \left[ - 3\, \tilde{\xi}_1 + 7\, \tilde{\xi}_3 + 12\, e^{2 (\alpha_0 + \beta_0)}\, \left(3\, \left( e^{2 \beta_0} - e^{2 \alpha_0} \right)\, \tilde{\phi}_1 - 4\, e^{2 \gamma_0}\, \tilde{\phi}_2 \right) \right] \, ,\non \\&& \label{phi2}\\
\tilde{\phi}_{3}^{\prime} &=& \frac{1}{12}\, e^{-3 (\alpha_0 + \beta_0)}\, \left[ \tilde{\xi}_1 - 3\,\left( \tilde{\xi}_3 + 6\, e^{2 (\alpha_0+\beta_0)}\, \left( \left( e^{2 \beta_0} - e^{2 \alpha_0} \right)\, \tilde{\phi}_1 - e^{2 \gamma_0}\, \tilde{\phi}_2 \right) \right) \right] \, , \non \\&& \label{phi3}\\
\tilde{\phi}_5^{\prime} &=& \frac{2}{m^2}\, e^{-3 (\alpha_0 - \beta_0)}\, \left[ e^{3 z_0}\, \tilde{\xi}_5 + 3\, m^2\, (3\, h_0\, \tilde{\phi}_1 - \tilde{\phi}_6 ) \right]\, ,\label{phi5} \\
\tilde{\phi}_6^{\prime} &=& \frac{1}{6\, m^2}\, e^{\alpha_0 - \beta_0}\, \left[ e^{3 z_0}\, \tilde{\xi}_6 - 3\, m^2\, (f_0\, \tilde{\phi}_1 - 4\, h_0\, \tilde{\phi}_1 + \tilde{\phi}_5 - 4\, \tilde{\phi}_6 ) \right] \, ,\label{phi6} \\
\tilde{\phi}_4^{\prime}& = &\, \frac{1}{9}\, e^{-3 (\alpha_0 + \beta_0 + z_0)}\, \Big[ 2\, e^{3 z_0}\, \tilde{\xi}_4 + m^2\, \Big( \left[ 1 - 54\, h_0\, (f_0 - 2\, h_0) \right] \tilde{\phi}_4 + 18\, f_0\, \tilde{\phi}_6\nonumber\\
&& + \tilde{\phi}_2+ 2\, \tilde{\phi}_3 + 18\, h_0\, \left[ \tilde{\phi}_5 - 4\, \tilde{\phi}_6 - 3\, ( f_0 - 2\, h_0 )\, (\tilde{\phi}_2 + 2\, \tilde{\phi}_3 ) \right] \Big) \Big] \, . \label{phi4}
\eea
\section{The force on a probe M2}
\setcounter{equation}{0}
Before solving the above equations, we compute the force on a probe
M2--brane in the perturbed solution space. As was found in the
analogous IIB scenario~\cite{Bena:2009xk}, the force turns out to
benefit from remarkable cancellations and is ultimately quite simple.
The membrane action for a probe M2 brane (which, by an abuse of
notation, we refer to as the DBI action) is
\begin{align}\label{V DBI}
V^{DBI} & = \sqrt{-\, g_{00}\, g_{11}\, g_{22}} \, ,\nonumber\\
& = e^{-3 z} \, ,
\end{align}
and, in the first--order approximation, its derivative with respect to $r$ is
\begin{align}\label{F DBI}
F^{DBI} = - \frac{dV_0^{DBI}}{dr} + 3\, e^{-3 z_0} \left( \tilde{\phi}_4^{\prime} - 3\, z_0^{\prime}\, \tilde{\phi}_4 \right) \, .
\end{align}
We next consider the derivative of the WZ action with respect to $r$, which gives the force exerted on the M2--brane by the $G^{(4)}$ field:
\begin{align}\label{F WZ}
F^{WZ} & = - \frac{dV^{WZ}}{dr} \, , \nonumber\\
& = G^{(4)}_{012r}\, , \nonumber\\
& = - 6\, m^2\, \left[ h\, (f - 2\, h) - \frac{1}{54} \right]\, e^{-3(\alpha + \beta) - 6 z}\, .
\end{align}
The zeroth--order and first--order WZ forces thus are
\begin{eqnarray}
F_0^{WZ} = - 6\, m^2\, \left[ h_0\, (f_0 - 2\, h_0) - \frac{1}{54} \right]\, e^{-3(\alpha_0 + \beta_0) - 6 z_0}\,
\end{eqnarray}
and
\begin{align}
F_1^{WZ} =&\, - 6\, m^2\, \Big[ h_0\, (\tilde{\phi}_5 - 2\, \tilde{\phi}_6 ) + \tilde{\phi}_6 \, (f_0 - 2\, h_0) \nonumber\\ & - 3\, (\tilde{\phi}_2 + 2\, \tilde{\phi}_3 + 2\, \tilde{\phi}_4)\, \Blp h_0\, (f_0 - 2\, h_0) - \frac{1}{54} \Brp \Big]\, e^{- 3 (\alpha_0 + \beta_0) - 6 z_0}\, .
\end{align}
Combining these two contributions to the force, we see that the zeroth--order terms cancel, as expected. Then, using the explicit $\phi^a$ equations from Section~\ref{sec:phieq}, we find the beautiful result
\bea\label{F probe}
F & =& F_1^{DBI} + F_1^{WZ} \nonumber\\
& =& \frac{2}{3}\, e^{-3\,(\alpha_0 + \beta_0 + z_0)(r)}\, \tilde{\xi}_4(r) \, .
\eea
At this point it is worthwhile to preemptively trumpet the result \eqref{xi4 integral 2} from Section \ref{sec:solutions} where the exact solution for the mode $\txi_4$ is found:
\bea
F& = & \frac{2}{3}\, e^{-3\,(\alpha_0 + \beta_0)(r)}\, Z_0\, X_4 \nonumber\\
& =& \frac{18\, Z_0\, X_4}{\left( 2 + \cosh2 r \right)^{3/4}\, \sinh^3r} \, ,
\eea
where $Z_0$ is some numerical factor which we found convenient not to absorb into the $X_4$ integration constant,
\begin{eqnarray}\label{Zzero}
Z_0 \equiv e^{-3 z_0(0)} \, .
\end{eqnarray}
So, the UV expansion of the force felt by a probe M2 brane in the
first--order perturbed solution is always
\begin{eqnarray}\label{force UV} F_r
\sim X_4\, e^{-9r/2} + {\cal O}(e^{-17 r/2}) \, .
\end{eqnarray}
In terms of $\rho$, the ``standard'' radial
coordinate\footnote{Related to $r$ via $\cosh(2\, r) \sim
\rho^{8/3}$.}, this force comes from a potential proportional to
$\rho^{-6}$, which agrees with a straightforward extension of the brane--antibrane force
analysis of~\cite{Kachru:2003sx} to this system. This will be further
discussed in a forthcoming publication~\cite{wip}.
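The coefficient in~\eqref{force UV} can be made fully explicit from the exact expression for $F$ above: at large $r$ one has $2 + \cosh(2\, r) \simeq \frac{1}{2}\, e^{2 r}$ and $\sinh^3 (r) \simeq \frac{1}{8}\, e^{3 r}$, so that
\be
F \simeq \frac{18\, Z_0\, X_4}{\left( \frac{1}{2}\, e^{2 r} \right)^{3/4} \frac{1}{8}\, e^{3 r}} = 144 \cdot 2^{3/4}\, Z_0\, X_4\, e^{-9 r/2} \, ,
\ee
and since $e^{r} \sim \rho^{4/3}$ this falls off as $\rho^{-6}$, consistent with the brane--antibrane potential quoted above.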
\section{The space of solutions} \label{sec:solutions}
\setcounter{equation}{0}
In this section we find the generic solution to the
system~\eqref{xi4}--\eqref{phi4}. This solution space has twelve
integration constants, of which ten are physical. We have managed to
solve the $\txi_a$ equations exactly whereas for the $\phi_a$
equations we have resorted to solving them in the IR and UV limits.
\subsection{Analytic solutions for the $\tilde{\xi}$'s}
The first equation~\eqref{xi4} is solved by
\begin{eqnarray}\label{xi4 integral}
\tilde{\xi}_4 = X_{4}\, \text{exp}\left( 6\, m^2\, \int_{0}^{r} dr^{\prime}\, e^{-3 (\alpha_0+\beta_0+z_0)}\, \left[(f_0 - 2\, h_0)\, h_0 -\frac{1}{54}\right] \right) \, ,
\end{eqnarray}
which appears to be a double integral. However, introducing the standard notation $H_0=e^{3z_0}$ for the warp factor and noting that
\be
\frac{d H_0}{dr} = - 2^3 \cdot 3\, m^2\, \frac{e^{2 \gamma_0}}{\sinh^3 (2\, r)}\, \tanh^4 (r) \, ,
\ee
we actually find
\begin{align}\label{xi4 integral 2}
\tilde{\xi}_4 & = X_{4}\, \text{exp}\left( \int_{0}^{r}\, dr^{\prime}\, \frac{1}{H_0} \frac{dH_0}{dr'} \right) \, , \nonumber\\
& = X_{4}\, e^{ 3 (z_{0}(r)-z_0(0) ) } \, .
\end{align}
It immediately follows that
\begin{eqnarray}\label{xi1 general solt}
\tilde{\xi}_1 = X_1 + 2\, X_4\, e^{ 3 (z_{0}(r)-z_0(0) )} \, .
\end{eqnarray}
We find it convenient not to include $e^{-3 z_0(0)}$ in the integration constant $X_4$, and will use the notation
\begin{eqnarray}
Z_0 \equiv e^{-3 z_0(0)} \, .
\end{eqnarray}
We were also able to find exact analytic expressions for $\tilde{\xi}_3$ and $\tilde{\xi}_{5,6}$, in terms of $y^4 \equiv 2 + \cosh(2\, r)$:
\begin{align}\label{xi3 exact}
\tilde{\xi}_3 =&\, y^4\, \left(y^4-3\right)^2\, X_3 - \frac{m^2\, Z_0\, X_4}{18\, \sqrt{2}}\, \frac{y\, \left( y^4 - 3\right)}{\left( y^4 - 1 \right)^{3/2}}\, \Bigg[- 96 + 599\, y^4 - 550\, y^8 + 119\, y^{12} \nonumber\\ & - y^{3}\, \sqrt{y^4-1}\, \left(3 - 4\, y^4 + y^8 \right)\, \bigg(163\, F\left(\text{arcsin}\left(\frac{1}{y}\right) \mid -1 \right) \nonumber\\ & + 22\, \left[ \Pi \left( -\sqrt{3} ; -\text{arcsin}\left(\frac{1}{y}\right) \mid -1\right) + \Pi \left(\sqrt{3};-\text{arcsin}\left(\frac{1}{y}\right) \mid -1 \right) \right] \bigg) \Bigg] \, , \nonumber\\
\end{align}
where $F(\phi \mid q)$ is given in \eq{F integral} and $\Pi(n ; \phi \mid m) $ is an incomplete elliptic integral of the third kind
\be
\Pi(n ; \phi \mid m) = \int_{0}^{\phi} \frac{d \theta}{\left(1 - n\, \sin \left( \theta \right)^2 \right)\, \sqrt{1 - m\, \sin \left( \theta \right)^2}} \, .
\ee
The expressions for $\tilde{\xi}_{5,6}$ are as follows:
\begin{align}\label{xi5 exact}
& \tilde{\xi}_5 = \frac{1}{4\, \sqrt{2}\, \left(y^4 - 3\right)\, \sqrt{y^4 - 1}}\, \Bigg[ \sqrt{6}\, Z_0\, X_4\, m^2\, y\, \left(13 - 11\, y^4 \right)\, \sqrt{y^4 - 1} \nonumber\\ & + 4\, \left[ \left(y^4 - 1\right)^2\, X_5 +\left(y^4 - 3\right)\, \left(1+y^4\right)\, X_6 \right] \nonumber\\ & + \sqrt{6}\, Z_0\, m^2\, X_4\, \bigg[ \left(19 + 7 y^4 \left(y^4 - 2\right) \right)\, F \left( \text{arcsin}\left( \frac{1}{y} \right) \mid -1 \right) \nonumber\\ & - 2\, \left(y^4 - 3\right)\, \left(1+y^4\right)\, \left( \Pi \left( -\sqrt{3}; -\text{arcsin}\left(\frac{1}{y}\right) \mid -1 \right) + \Pi \left( \sqrt{3};-\text{arcsin}\left(\frac{1}{y}\right) \mid -1 \right) \right) \bigg] \Bigg] \, ,\nonumber\\
\end{align}
\begin{align}\label{xi6 exact}
& \tilde{\xi}_6 = \frac{\sqrt{2}}{\left(y^4 - 3\right)\, \left(y^4 - 1\right)^{3/2}}\, \Bigg[ \left(y^4 - 7\right)\, \left(y^4 - 1\right)^2\, \bigg[ X_5 + \sqrt{\frac{3}{2}}\, Z_0\, m^2\, X_4\, \Big( \frac{7\, y - 5\, y^5}{\left(y^4 - 1\right)^{3/2}} \nonumber\\ & + 5\, F \left( \text{arcsin}\left( \frac{1}{y} \right) \mid -1 \right) \Big) \bigg] + \frac{1}{4}\, \left(y^4 - 3\right)^2\, \Bigg[ -\sqrt{6}\, Z_0\, m^2\, X_4\, y\, \sqrt{y^4 - 1} \nonumber\\ & + 4\, \left(y^4 - 3\right)\, X_6 - \sqrt{6}\, Z_0\, m^2\, X_4\, \left(y^4 - 3\right)\, \Bigg(3\, F\left(\text{arcsin}\left( \frac{1}{y} \right) \mid-1 \right) \nonumber\\ & + 2\, \left( \Pi \left(-\sqrt{3}; -\text{arcsin}\left( \frac{1}{y}\right) \mid -1 \right) + \Pi \left( \sqrt{3}; -\text{arcsin}\left( \frac{1}{y} \right) \mid -1 \right) \right)\Bigg) \Bigg] \Bigg] \, .\nonumber\\
\end{align}
Lastly, $\tilde{\xi}_2$ is given by the zero--energy condition \eq{ze} but its explicit form does not appear to be too enlightening.
In Appendix B we provide the IR and UV series expansions of the above solutions for $\tilde{\xi}^{i}$.
\subsection{Solving the $\phi^{i}$ equations}
\subsubsection{The space of solutions}
We now solve the system of equations for $\phi^i$~\eqref{phi1}--\eqref{phi6} using the Lagrange method of variation of parameters.
Equation~\eqref{phi1} is solved by
\begin{eqnarray}\label{phi1 X4 0}
\tilde{\phi}_1 = \frac{\tilde{\lambda}^1(r)}{\sinh(2\, r)}\, ,
\end{eqnarray}
with
\begin{eqnarray}\label{lambda1 X4 0}
\tilde{\lambda}^1 = \frac{9}{2}\, \int\, \frac{\cosh(r)}{\sinh(r)^2\, \left( 2 + \cosh(2\, r) \right)^{3/4}}\, \left[ - 3\, \tilde{\xi}_1 + 4\, \tilde{\xi}_2 + 3\, \tilde{\xi}_3 \right] + Y_1^{IR} \, .
\end{eqnarray}
The $\tilde{\xi}_a$ entering this expression are given in Section 4.1 above, and $\sinh(2\, r)^{-1}$ is the homogeneous solution to the $\tilde{\phi}_1$ equation.
The same Lagrange method is used for $\tilde{\phi}_2$, which is given by
\begin{eqnarray}\label{phi2 X4 0}
\tilde{\phi}_2 = \frac{\tilde{\lambda}^2(r)}{\sinh(r)^4\, \left( 2 + \cosh(2\, r) \right)} \, ,
\end{eqnarray}
where
\begin{align}\label{lambda2 X4 0}
\tilde{\lambda}^2 =&\, \frac{9}{4}\, \int \, \sinh(r)\, \left( 2 + \cosh(2\, r)\right)^{1/4}\, \left[ - 3\, \tilde{\xi}_1 + 7\, \tilde{\xi}_3 - \frac{4}{3}\, \frac{\sinh(r)^2}{\cosh(r)}\, \left( 2 + \cosh(2\, r) \right)^{3/4}\, \tilde{\phi}_1 \right] \nonumber\\ & \, \, \, + Y_2^{IR} \, .
\end{align}
From this, we obtain an integral expression for $\tilde{\phi}_3$:
\begin{eqnarray}\label{phi3 X4 0}
\tilde{\phi}_3 = \frac{9}{4}\, \int \, \frac{\left[ \tilde{\xi}_1 - 3\, \tilde{\xi}_3 + \frac{2}{3}\, \frac{\sinh(r)^2}{\cosh(r)}\, \left( 2 + \cosh(2\, r) \right)^{3/4}\, \tilde{\phi}_1 + 2\, \frac{\sinh(r)^2\, \cosh(r)^3}{\left( 2 + \cosh(2\, r) \right)^{1/4}}\, \tilde{\phi}_2 \right]}{\sinh(r)^3\, \left( 2 + \cosh(2\, r) \right)^{3/4}} + Y_3^{IR} \, . \nonumber\\
\end{eqnarray}
The flux perturbations $\left( \tilde{\phi}_5, \tilde{\phi}_6 \right)$, i.e.~the modes of $\left( f, h \right)$, are given by
\begin{equation}\label{phi5,6 X4 0}
\left(
\begin{array}{ccc}
\tilde{\phi}_5 \\
\tilde{\phi}_6 \\
\end{array}
\right) = \left(
\begin{array}{ccc}
\cosh(r)^3\, \tanh(r)^6 \, & \cosh(r)^3\, \left[ 2 - 3\, \tanh(r)^2 \right] \\
\frac{1}{2}\, \left[ \text{sech}(r) - \cosh(r)^3 \right] \, & \frac{1}{2}\, \cosh(r)^3 \\
\end{array}
\right)\, \left(
\begin{array}{cc}
\tilde{\lambda}_5 \\
\tilde{\lambda}_6 \\
\end{array}
\right) \, ,
\end{equation}
where the derivatives of $\tilde{\lambda}_5 $ and $\tilde{\lambda}_6 $ are given by
\begin{equation}\label{lambda5,6 X4 0}
\left(
\begin{array}{ccc}
\tilde{\lambda}_5^{\prime} \\
\tilde{\lambda}_6^{\prime} \\
\end{array}
\right) = \left(
\begin{array}{ccc}
\frac{1}{4}\, \cosh(r)\, \text{coth}(r)^2 \, & \frac{1}{2}\, \left[ \cosh(r) - 2 \text{coth}(r)\, \text{csch}(r) \right] \\
\frac{1}{8}\, \left[ 3 + \cosh(2\, r) \right]\, \text{sech}(r) \, & \frac{1}{2}\, \sinh(r)\, \tanh(r)^3 \\
\end{array}
\right)\, \left(
\begin{array}{cc}
b_5 \\
b_6 \\
\end{array}
\right) \, ,
\end{equation}
and $b_5, b_6$ are the right--hand sides of~\eqref{phi5} and~\eqref{phi6} respectively. The $2 \times 2$ matrix appearing in~\eqref{lambda5,6 X4 0} is the inverse of the matrix of homogeneous solutions written in~\eqref{phi5,6 X4 0}. We will call $Y_5$ and $Y_6$ the constants arising from integrating~\eqref{lambda5,6 X4 0}, even though each of the two functions $\tilde{\phi}_5$ and $\tilde{\phi}_6$ depends on both of them.
Finally, relying on the same method, the equation for $\tilde{\phi}_4$ is solved by
\begin{eqnarray}\label{phi4 X4 0}
\tilde{\phi}_4 = e^{-3 z_0(r)}\, \tilde{\lambda}_4 \, , \qquad \tilde{\lambda}_4 = \int\, e^{3 z_0(r)}\, b_4(r) + Y_4^{IR} \, ,
\end{eqnarray}
where $b_4(r)$ is the right--hand side of~\eqref{phi4} (setting $\tilde{\phi}_4$ to zero).
\subsubsection{IR behavior}
We now give the IR expansions of the $\phi^{i}$'s. We only write the divergent and constant terms, since terms that vanish in the IR do not provide any constraint on our solution space. $Z_0$ is defined in~\eqref{Zzero}.
The $X_{i}$ integration constants are those appearing in the exact solutions for the $\tilde{\xi}_i$'s,~\eqref{xi4 integral 2}--\eqref{xi6 exact}:
\begin{align}
\tilde{\phi}_1 =&\, - \frac{1}{r^2}\, \left[ \frac{27\, X_1 + 30\, X_4 - 16\, \sqrt{3}\, X_5 }{4\, 3^{3/4}}\right] +\frac{1}{2\, r}\, Y_1^{IR} \nonumber\\ & + \left[ \frac{ 189\, X_1 + \left( 498 - 198\, 3^{1/4}\, Z_0\, m^2 \right)\, X_4 + 80\, \sqrt{3}\, X_5 }{12\, 3^{3/4}} \right] + {\cal O}(r) \, , \nonumber\\
\end{align}
\begin{align}
\tilde{\phi}_2 =& \, \frac{Y_2^{IR}}{3\, r^4} + \frac{1}{r^2}\, \left[ \frac{9}{4}\, 3^{1/4}\, X_1 + \frac{3}{2}\, 3^{1/4}\, X_4 - 2\, \sqrt{3}\, 3^{1/4}\, X_5 - \frac{4}{9}\, Y_2^{IR} \right] - \frac{1}{2\, r}\, Y_1^{IR} \nonumber\\ & - \left[ 6\, 3^{1/4}\, X_1 + \frac{23}{2}\, 3^{1/4}\, X_4 - 6\, \sqrt{3}\, Z_0\, m^2\, X_4 - \frac{1}{3^{1/4}}\, X_5 - \frac{41}{135}\, Y_2^{IR} \right] \nonumber\\ & + {\cal O}(r) \, , \nonumber\\
\end{align}
\begin{align}
\tilde{\phi}_3 =&\, -\frac{Y_2^{IR}}{8\, r^4} - \frac{1}{r^2}\, \left[ \frac{9\, 3^{1/4}\, X_1 - 12\, 3^{3/4}\, X_5 - 4\, Y_2^{IR} }{24} \right] \nonumber\\ & + \left[ Y_3^{IR} + \frac{3^{1/4}}{8}\, \left(-18\, 3^{1/4}\, Z_0\, m^2\, X_4 + 21\, X_1 + 48\, X_4 + 4\, \sqrt{3}\, X_5 \right)\, \text{log}(r) \right] \nonumber\\ & + {\cal O}(r) \, , \nonumber\\
\end{align}
\begin{align}
\tilde{\phi}_4 =&\, - \frac{1}{r^2}\, \left[ \frac{ 18\, X_4 - 4\, \sqrt{3}\, X_5 + Z_0\, m^2\, \left( Y_2^{IR} - 24\, \sqrt{3}\, Y_6^{IR} \right) }{8\, 3^{3/4}} \right] \nonumber\\ & - \bigg[ \frac{1}{4}\, \left( Z_0\, m^2\, \left(\frac{3\, \sqrt{3}}{2}\, X_4 - X_5 \right) - 4\, Z_0\, Y_4^{IR} \right) \nonumber\\ & + \frac{1}{48}\, Z_0^2\, m^4\, \left( \sqrt{3}\, Y_2^{IR} - 72\, Y_6^{IR} \right) + \bigg[ \frac{3}{2}\, 3^{1/4}\, X_4 - \frac{X_5}{3^{1/4}} \nonumber\\ & + \frac{1}{36}\, Z_0\, m^2\, \Big(81\, \sqrt{3}\, X_1 + 78\, \sqrt{3}\, X_4 - 168\, X_5 + 11\, 3^{1/4}\, Y_2^{IR} - 72\, 3^{3/4}\, Y_6^{IR} \Big) \bigg]\, \text{log}(r) \bigg] \nonumber\\ & + {\cal O}(r) \, ,\nonumber\\
\end{align}
\begin{align}
\tphi_5 = 2 Y_6^{IR} + \left[ {9 \over 8}\, 3^{3/4}\, X_1 + {3 \over 4}\, 3^{3/4}\, X_4 - 2\, 3^{1/4}\, X_5 + {1 \over 2\, Z_0\, m^2}\, \left( X_5 + {\sqrt{3} \over 2}\, X_4 \right) \right]\, r^2 + {\cal O}(r^3) ,
\end{align}
\begin{align}
\tilde{\phi}_6 = &\, \frac{1}{r^2}\, \frac{X_5 + \frac{\sqrt{3}}{2}\, X_4}{6\, Z_0\, m^2} \nonumber\\ & + \left[ \frac{3^{3/4}}{16}\, X_1 - \frac{1}{18}\, \frac{X_5 + \frac{\sqrt{3}}{2}\, X_4}{Z_0\, m^2} - \frac{7}{72}\, 3^{3/4}\, X_4 - \frac{5}{18}\, 3^{1/4}\, X_5 + \frac{1}{2}\, Y_6^{IR} \right] + {\cal O}(r) \, .
\end{align}
Note that in the $\tphi_5$ expansion we have also displayed the term of order $r^2$ -- this term will be relevant for the singularity analysis in Section 6.
\subsubsection{UV behavior}
We provide the UV asymptotics for all six $\tilde{\phi}_i$'s, keeping terms which decay no faster than $e^{-13 r/2}$. However, as shown in Table 1 below, a few modes have a leading UV behavior which is even more convergent than this.
\begin{align}
\tilde{\phi}_1 = &\, \frac{18}{2^{1/4}}\, X_3\, e^{-r/2} + 2\, Y_1^{UV}\, e^{-2 r} - 4\, 2^{3/4}\, \left[ \frac{27}{2}\, X_1 - 27\, X_3 + 8\, \sqrt{3}\, \left( X_5 + X_6 \right) \right]\, e^{-5 r/2} \nonumber\\ & - \left[ \frac{1089}{10\, 2^{1/4}}\, X_3 - \frac{128}{5}\, 2^{3/4}\, \sqrt{3}\, \left( X_5 + X_6 \right) \right]\, e^{-9 r / 2} + 2\, Y_1^{UV}\, e^{-6 r} \nonumber\\ & + {\cal O}(e^{-13 r / 2}) \, ,
\end{align}
\begin{align}
\tilde{\phi}_2 =&\, \frac{21}{5\, 2^{1/4}}\, X_3\, e^{3 r / 2} - \frac{17523}{140\, 2^{1/4}}\, e^{-5 r/2} X_3 - 12\, Y_1^{UV}\, e^{-4 r} \nonumber\\ & + 4\, 2^{3/4}\, \left[ 99\, X_1 - \frac{1719}{10}\, X_3 + 64\, \sqrt{3}\, \left( X_5 + X_6 \right) \right] \, e^{-9 r / 2} + 32\, Y_2^{UV} e^{-6 r} \nonumber\\ & + {\cal O}(e^{-13 r / 2}) \, ,
\end{align}
\begin{align}
\tilde{\phi}_{3} =&\, - \frac{27}{10\, 2^{1/4}}\, X_3\, e^{3 r / 2} + Y_3^{UV} + \frac{9693}{280\, 2^{1/4}}\, X_3\, e^{-5 r/2} + \frac{15}{4}\, Y_1^{UV}\, e^{-4 r} \nonumber\\ & - 2^{3/4}\, \left[ 130\, X_1 - \frac{1113}{5}\, X_3 + \frac{256}{\sqrt{3}}\, \left( X_5 + X_6 \right) \right]\, e^{-9 r/2} - 12\, Y_2^{UV}\, e^{-6 r} \nonumber\\ & + {\cal O}(e^{-13 r/2}) \, ,\nonumber\\
\end{align}
\begin{align}
\tilde{\phi}_{4} =&\, \frac{3}{16\, 2^{3/4}}\, \frac{Y_4^{UV}}{m^2}\, e^{9 r/2} + \frac{27}{26\, 2^{3/4}}\,\frac{Y_4^{UV}}{m^2}\, e^{5 r/2} + \frac{9}{5\, 2^{1/4}}\, X_3\, e^{3 r/2} + \frac{350271}{183872\, 2^{3/4}}\, \frac{Y_4^{UV}}{m^2}\, e^{r/2} \nonumber\\ & - 2\, \left[ Y_3^{UV} + \sqrt{3}\, \left( Y_5^{UV} - Y_6^{UV} \right) \right] + \frac{216}{325}\, 2^{3/4}\, X_3\, e^{-r/2} + \frac{484605}{298792\, 2^{3/4}}\,\frac{Y_4^{UV}}{m^2}\, e^{-3r/2} \nonumber\\ & + \frac{144}{13}\, \sqrt{3}\, Y_6^{UV}\, e^{-2 r} +\frac{3985953003}{14077700\, 2^{1/4}}\, X_3\, e^{-5 r/2} + \frac{7978373883}{21130570240\, 2^{3/4}}\, \frac{Y_4^{UV}}{m^2}\, e^{-7 r/2} \nonumber\\ & + \left[ \frac{273}{34}\, Y_1^{UV} + \frac{78912\, \sqrt{3}}{2873}\, Y_6^{UV} \right]\, e^{-4 r} \nonumber\\ & - 2^{3/4}\, \left[ 4\, \frac{229}{5}\, X_1 - \frac{1707341851}{2691325}\, X_3 + 4\, \frac{256}{3\, \sqrt{3}} \left( X_5 + X_6 \right) \right]\, e^{-9 r/2} \nonumber\\ & +\frac{473729599251}{995778122560\, 2^{3/4}}\, \frac{Y_4^{UV}}{m^2}\, e^{-11r / 2} + {\cal O}(e^{-6 r}) \, ,\nonumber\\
\end{align}
\begin{align}
\tilde{\phi}_{5} =&\, \frac{1}{8}\, \left(Y_5^{UV} - Y_6^{UV} \right)\, e^{3 r} - \frac{9}{8}\, \left( Y_5^{UV} - Y_6^{UV} \right)\, e^{r} + \frac{1}{8}\, \left( 39\, Y_5^{UV} + 9\, Y_6^{UV} \right)\, e^{-r} \nonumber\\ & + 19\, \frac{4\, 2^{3/4}}{\sqrt{3}}\, X_3\, e^{-3r/2} + \left[ \frac{14}{3\, \sqrt{3}}\, Y_1^{UV} - \frac{1}{8}\, \left(111\, Y_5^{UV} + Y_6^{UV} \right) \right]\, e^{-3 r} \nonumber\\ & - 4\, 2^{3/4}\, \left[ 2\, \frac{279}{65}\, \sqrt{3}\, X_1 + \frac{147}{65}\, \sqrt{3}\, X_3 + 2\, \frac{308}{39}\, \left( X_5 + X_6 \right) \right]\, e^{-7 r/2} \nonumber\\ & + 10\, \left[ - \frac{2}{\sqrt{3}}\, Y_1^{UV} + 3\, Y_5^{UV} \right] e^{-5 r} \nonumber\\ & + \frac{56}{1105}\, 2^{3/4}\, \left[ 3071\, \sqrt{3}\, X_1 - \frac{166409\, \sqrt{3}}{56}\, X_3 + \frac{18716}{3}\, \left( X_5 + X_6 \right) \right]\, e^{-11 r/2} \nonumber\\ & + {\cal O}(e^{-13r/2}) \, ,\nonumber\\
\end{align}
\begin{align}
\tilde{\phi}_{6} =&\, - \frac{1}{16}\, \left( Y_5^{UV} - Y_6^{UV} \right)\, e^{3 r} - \frac{3}{16}\, \left( Y_5^{UV} - Y_6^{UV} \right)\, e^{r} + \frac{1}{16}\, \left( 13\, Y_5^{UV} + 3\, Y_6^{UV} \right)\, e^{-r} \nonumber\\ & + \frac{10}{\sqrt{3}}\, 2^{3/4} X_3\, e^{-3r/2} + \left[ \frac{1}{3\, \sqrt{3}}\, Y_1^{UV} - \frac{1}{16}\, \left( 17\, Y_5^{UV} - Y_6^{UV} \right) \right]\, e^{-3 r} \nonumber\\ & - 4\, 2^{3/4}\,\left[ \frac{33}{65}\, \sqrt{3}\, X_1 + \frac{9\, \sqrt{3}}{130}\, X_3 + \frac{116}{117}\, \left( X_5 + X_6 \right)\right]\, e^{-7 r/2} \nonumber\\ & - \left[ \frac{2}{3\, \sqrt{3}}\, Y_1^{UV} - Y_5^{UV} \right]\, e^{-5 r} \nonumber\\ & + \frac{4}{1105\, \sqrt{3}}\, 2^{3/4}\, \left[ 3713\, X_1 - \frac{30221}{8}\, X_3 + 2932\, \sqrt{3}\, \left( X_5 + X_6 \right) \right]\, e^{-11 r/2} \nonumber\\ & + {\cal O}(e^{-13 r/2}) \, . \nonumber\\
\end{align}
To understand the holographic physics of the $\tilde{\phi}^{i}$ modes, we tabulate the leading UV behavior coming from each mode. To each local operator ${\cal O}_i$ of conformal dimension $\Delta$ in the field theory, the holographic dictionary associates two modes in the dual $AdS$ space, one normalizable and one non--normalizable~\cite{Banks:1998dd, Balasubramanian:1998de}. These two supergravity modes are dual respectively to the vacuum expectation value (VEV) $\langle 0 | {\cal O}_i | 0 \rangle$ and the deformation of the action $\delta S \sim \int d^{d}x\, {\cal O}_i$:
\begin{align}
\text{normalizable modes} \sim \rho_{AdS}^{-\Delta} \, &\leftrightarrow \, \text{field theory VEV's} \, , \nonumber\\
\text{non--normalizable modes} \sim \rho_{AdS}^{\Delta-3} \, &\leftrightarrow \, \text{field theory deformations of the action} \, . \nonumber
\end{align}
Here we refer to the standard $AdS$ radial coordinate $\rho_{AdS}$, to be distinguished from the radial coordinate on the cone, $\rho$. In the UV, we have $\rho \sim e^{3 r/4}$ and $\rho_{AdS} \sim \rho^2 / m^{1/3}$, where the factor of $m^{1/3}$ follows the conventions of~\cite{Klebanov:2010qs}.
In Table 1 we have summarized which integration constants correspond
to normalizable and non--normalizable modes. As stated in a previous
section, the $X_i$ are integration constants for the $\xi_i$ modes and
break supersymmetry, while the $Y_i$ are integration constants for the
modes $\phi^i$. It is very interesting to note that in all cases a
normalizable/non--normalizable pair consists of one BPS mode and one
non--BPS mode.
As already mentioned, the mode $\tilde{\xi}_4$, whose integration
constant is $X_4$ and which is the only mode responsible for the force
felt by a probe M2--brane in the first--order perturbation to the CGLP
background~\cite{Cvetic:2000db}, is the most convergent mode in the
UV. This cannot be seen from the expansions we have provided, but it
is apparent at higher order in the asymptotics that we have computed.
\begin{table}[h]\label{UVmodetable}
\onelinecaptionsfalse
\begin{center}
\begin{tabular}{|c|c|c|} \hline
dim $\Delta$ & non--norm/norm & int. constant \\ \hline
6 & $\rho_{AdS}^{3}/\rho_{AdS}^{-6}$ & $Y_{4}^{UV}/X_{4}$ \\\hline
5 & $\rho_{AdS}^{2}/\rho_{AdS}^{-5}$ & $Y_{5}^{UV}-Y_{6}^{UV}/X_{5}-X_{6}$ \\\hline
4 & $\rho_{AdS}/\rho_{AdS}^{-4}$ & $X_{3}/Y_{2}^{UV}$ \\\hline
3 & $\rho_{AdS}^0/\rho_{AdS}^{-3}$ & $Y_{3}/X_{2}$ \\\hline
7/3 & $\rho_{AdS}^{-2/3}/\rho_{AdS}^{-7/3}$ & $Y_{5}^{UV}+Y_{6}^{UV}/X_{5}+X_{6}$ \\\hline
5/3 & $\rho_{AdS}^{-4/3}/\rho_{AdS}^{-5/3}$ & $Y_{1}^{UV}/X_{1}$ \\\hline
\end{tabular}
\end{center}
\captionstyle{center}\caption{The UV behavior of the twelve $SO(5)$--invariant modes in the \protect\\ deformation space of the CGLP solution. As discussed below, only ten \protect\\ of these modes are physical, and the mode of dim. 3 is a gauge artifact.}
\end{table}
Taking into account a rescaling which removes $Y_3$ and the zero--energy
condition which eliminates $X_2$, we are left with a total of ten
integration constants, or five modes. The absence of a physical mode behaving
as $\rho_{AdS}^0$ is related to the quantization of the
level of the Chern--Simons matter theory. This is unlike in
four--dimensional gauge theories, where we expect a dimension--four
operator corresponding to the dilaton. Note also that we see explicitly the
dimension $\Delta = 7/3$ operator discussed in~\cite{Klebanov:2010qs}. We have been somewhat glib in
writing $X_5-X_6$ or $Y_5+Y_6$. The numerical factors in the
combination of those integration constants are actually different, but
can be rescaled to the shorthand notation we use.
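As a small arithmetic cross--check of Table 1 (our addition), each row should obey the $\rho_{AdS}^{\Delta-3}/\rho_{AdS}^{-\Delta}$ rule quoted above, and $\rho_{AdS}\sim e^{3r/2}$ then maps each power to one of the UV exponentials:

```python
from fractions import Fraction as F

# rows of Table 1: (Delta, non-normalizable power, normalizable power)
rows = [(F(6), F(3), F(-6)), (F(5), F(2), F(-5)), (F(4), F(1), F(-4)),
        (F(3), F(0), F(-3)), (F(7, 3), F(-2, 3), F(-7, 3)),
        (F(5, 3), F(-4, 3), F(-5, 3))]

for Delta, p_def, p_vev in rows:
    assert p_def == Delta - 3   # deformation mode ~ rho_AdS^(Delta - 3)
    assert p_vev == -Delta      # VEV mode ~ rho_AdS^(-Delta)
    # with rho_AdS ~ e^{3r/2}, a power rho_AdS^p is the UV behavior e^{3pr/2};
    # e.g. Delta = 4 gives the e^{3r/2} growth of the X_3 terms above
print("Table 1 exponents consistent")
```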
\section{Boundary conditions for M2 branes}
\setcounter{equation}{0}
Within the space of solutions derived in Section 4, we now proceed to find the modes which arise from the backreaction of a set of anti--M2 branes smeared on the finite--sized $S^4$ at the tip of the Stenzel-CGLP solution ($r=0$). To describe them, it is necessary to carefully impose the correct infrared boundary conditions.
The gravity solution for a stack of localized M2--branes in flat space has a warp
factor $H(\rho)=1+Q/\rho^6$ and as $\rho \rightarrow 0$ the full solution is smooth due
to the infinite throat. However when these branes are smeared in
$n$--dimensions, the warp factor scales as $\rho^{-6+n}$ as $\rho \rightarrow 0$
since it is now the solution to a wave equation in dimension
$d = 8 - n$. This is the IR boundary condition that we will impose on the
solution.
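The $\rho^{-6+n}$ scaling is a one--line check: the radial Laplacian of $\rho^{p}$ in $d$ flat dimensions is $p\,(p+d-2)\,\rho^{p-2}$, which vanishes for $p = n-6$ and $d = 8-n$. In Python (ours):

```python
# Warp factor for M2 branes smeared over n of the 8 transverse dimensions:
# the harmonic function solves a radial Laplace equation in d = 8 - n dimensions,
# where the Laplacian of rho^p is p (p + d - 2) rho^(p - 2).
for n in range(0, 5):
    d = 8 - n        # effective transverse dimension
    p = n - 6        # claimed IR scaling rho^(-6 + n)
    assert p * (p + d - 2) == 0
print("smeared-brane scaling is harmonic")
```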
We must furthermore impose appropriate boundary conditions on the various fluxes.
This is rather simple for M2 branes in flat space, where
the energy from $G^{(4)}$ is the same as that from the curvature. In
the presence of other types of flux, the IR boundary conditions are more intricate. When the background is on--shell, the contributions
to the stress tensor from all types of flux taken together cancel the
energy from the curvature: this is the basic content of Einstein's
equation, but it is too weak a criterion to signal the presence of
M2 branes. Instead, the right set of boundary conditions for M2 branes should enforce that the dominant contribution to the stress--energy tensor comes from the $G^{(4)}$ flux.
\subsection{BPS M2 branes}
The M2 brane charge varies with the radial coordinate $r$ of a section of the Stenzel space~\cite{Stenzel:1993}:
\begin{align}
\mathcal{Q}_{M2}(r) & = \frac{1}{(2\, \pi\, \ell_p)^6}\, \int_{\mathcal{M}_{7}} \star G_4 \, , \nonumber\\
& = - \frac{6\, m^2\, \text{Vol}\left( V_{5,2} \right)}{\left( 2\, \pi\, \ell_p \right)^6}\, \left( h_0(r)\, \left( f_0(r) - 2\, h_0(r) \right) - \frac{1}{54} \right) \, ,
\end{align}
with $\ell_p$ the Planck length in eleven dimensions and $\mathcal{M}_7$ a constant--$r$ section of the transverse Stenzel space, and $\text{Vol}\left( V_{5,2} \right) = \frac{27\, \pi^4}{128}$~\cite{Bergman:2001qi}.
The number of units of $G_4$ flux through the $S^4$ is
\begin{align}
q(r) & = \frac{1}{\left( 2\, \pi\, \ell_p\, \right)^3}\, \int_{S^4} G_4 \, , \nonumber\\
& = - \frac{16\, \pi^2\, m}{\left( 2\, \pi\, \ell_p \right)^3}\, h_0(r) \, .
\end{align}
In the smooth solution their IR values ($r \rightarrow 0$) are
\begin{eqnarray}\label{Q M2 M5 IR}
\mathcal{Q}_{M2}^{\ IR} = 0 \, , \qquad q^{IR} = \frac{1}{\left( 2\, \pi\, \ell_p \right)^3}\, \frac{8\, \pi^2 m}{3^{3/2}} \, ,
\end{eqnarray}
reflecting the fact that all M2 charge is dissolved in fluxes. One can obtain a BPS solution in which smeared M2 branes are added at the tip of the Stenzel space~\cite{Stenzel:1993} simply by shifting $\star G_4$ in such a way that $f - 4 h$ does not change\footnote{This combination multiplies a four-form field strength with one leg along $\nu$, one along $\sigma^i$ and two legs along two of the $\tilde{\sigma}^j$ directions which shrink in the IR ($e^{2 \beta_0} \sim r^2$)}. Under shifts of $f \rightarrow f + 2\, N$ and $h \rightarrow h + \frac{N}{2}$, the IR M2 brane charge changes to
\begin{eqnarray}
{\cal Q}_{M2} \rightarrow {\cal Q}_{M2} + \Delta {\cal Q}_{M2} \, ,
\end{eqnarray}
where we define
\begin{eqnarray}\label{Delta Q M2}
\Delta {\cal Q}_{M2} = - \frac{6\, m^2\, \text{Vol}\left( V_{5,2} \right)}{\left( 2\, \pi\, \ell_p \right)^6}\, \left( \frac{1}{2}\, N^2 - \frac{2}{3^{3/2}}\, N \right) \, ,
\end{eqnarray}
whereas the variation in the units of flux through the $S^4$ amounts to $\frac{8\, \pi^2\, m\, N}{\left( 2\, \pi\, \ell_p \right)^3}$.
This introduces in the IR a $- \Delta {\cal Q}_{M2} / r^2$ singularity in the warp factor
\begin{eqnarray}
H_0(r) = 162\, m^2\, \int^{r} \frac{h_0\, \left(f_0 - 2 h_0\right) - \frac{1}{54}}{\sinh(r')^3\, \left( 2 + \cosh(2\, r')\right)^{3/4}}\, dr' \, .
\end{eqnarray}
This singularity is to be expected as we have smeared BPS M2 branes (whose harmonic function diverges as $1/r^6$ near the sources) on the $S^4$ of the transverse space.
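The shift algebra used above, namely the invariance of $f - 4h$ and the quadratic dependence of the charge shift on $N$, can be checked directly. A minimal numerical sketch (ours, with generic values of $f$, $h$, $N$):

```python
# shift of the fluxes: f -> f + 2N, h -> h + N/2
def shifted(f, h, N):
    return f + 2 * N, h + 0.5 * N

for (f, h, N) in [(0.4, -0.1, 3.0), (-1.2, 0.7, 1.5), (2.0, 2.0, -4.0)]:
    fs, hs = shifted(f, h, N)
    # the combination f - 4h (legs along the shrinking directions) is invariant
    assert abs((fs - 4 * hs) - (f - 4 * h)) < 1e-12
    # the charge combination h (f - 2h) shifts by a quadratic polynomial in N
    shift = hs * (fs - 2 * hs) - h * (f - 2 * h)
    assert abs(shift - (0.5 * N**2 + N * h + 0.5 * N * (f - 2 * h))) < 1e-12
print("flux shift checked")
```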
It is interesting to see how this BPS solution arises in the first--order expansion around the BPS CGLP background~\cite{Cvetic:2000db} in the context of our perturbation apparatus.
Given that the $\xi^i$ modes are associated to supersymmetry breaking, all the $X_i$ must be set to zero:
\begin{eqnarray}
X_i = 0 \, .
\end{eqnarray}
Since all the $\tilde{\xi}^i$ are zero,
\begin{eqnarray}\label{Y1UVIR}
Y_1^{IR} = Y_1^{UV} \, .
\end{eqnarray}
In the IR and the UV, $e^{z_0 + 2 \alpha_0}$, $e^{z_0 + 2 \beta_0}$ and $e^{z_0 + 2 \gamma_0}$ do not blow up but reach constant or vanishing values instead. So we impose
\begin{eqnarray}\label{Y12 = 0 BPS}
Y_1^{IR} = 0 \, , \qquad Y_2^{IR} = 0 \, , \qquad Y_4^{UV} = 0 \, .
\end{eqnarray}
As a result of~\eqref{Y12 = 0 BPS} and~\eqref{Y1UVIR}, the mode $\tilde{\phi}_1$ is identically zero. This yields $Y_2^{IR} = Y_2^{UV}$, $Y_3^{IR} = Y_3^{UV}$.
Since BPS M2 branes do not change the geometry of the Stenzel space but only the warp factor (much like BPS D3 branes also only change the warp factor and not the transverse geometry \cite{Grana:2000jj}) we expect the first--order perturbation to $e^{z + 2 \beta}$ to vanish both in the UV and in the IR, and thus
\begin{eqnarray}\label{Y3 Y4 Y6 Y5}
2\, Y_3 + e^{-3 z_0(0)}\, Y_4^{IR} + \frac{3}{2}\, m^4\, e^{-6 z_0(0)}\, Y_6^{IR} = 0 \, , \qquad Y_5^{UV} = Y_6^{UV} \, .
\end{eqnarray}
The constant $Y_4^{IR}$ is in turn determined by $Y_4^{UV}$.
Furthermore, the fields $\tilde{\phi}_5$, $\tilde{\phi}_6$ now obey the corresponding homogeneous equations and the solution is found by replacing $\tilde{\lambda}_{5,6}$ by $Y_{5,6}$.
The mode $\tilde{\phi}_4$ corresponds to the first--order perturbation of the warp factor. We allow a $1/r^2$ IR divergence, which means that $Y_6^{IR}$ does not necessarily vanish. We will see in a moment that this mode is related to the number $\Delta {\cal Q}_{M2}$ of added M2 branes. But first, we note that this does not give rise to a singularity that would be associated with $\tilde{\phi}_5 - 4\, \tilde{\phi}_6$, the perturbation to the term in $F_4$~\eqref{F4 ansatz} with legs on $\nu \wedge \sigma_i \wedge\tsig_j \wedge\tsig_k$. Indeed, the conditions we have imposed render this term harmless and independent of $Y_6^{IR}$: $\tilde{\phi}_5 - 4\, \tilde{\phi}_6 = 2\, Y_6 - 2\, Y_6 + {\cal O}(r) = {\cal O}(r)$.
Given that $Y_4^{IR}$ first shows up in the $\mathcal{O}(r^0)$ part of the IR expansion of $\tilde{\phi}_4$ there is no restriction on it. Moreover, $Y_5$ does not arise in any of the divergent or constant pieces in the $\tilde{\phi}^i$ IR expansions, but requiring no exponentially divergent terms in the UV imposes $Y_5 = Y_6$, in agreement with~\eqref{Y3 Y4 Y6 Y5}.
As a result, the perturbation corresponding to adding $\Delta {\cal Q}_{M2}$ M2 branes at the tip is obtained by just setting $Y_5 = Y_6 \sim - \Delta {\cal Q}_{M2}$. This perturbation causes the warp factor to diverge in the infrared as $- \Delta {\cal Q}_{M2} / r^2$ while all the other $\phi^i$ change by sub--leading terms apart from $\phi^5$ and $\phi^6$ which shift by some $N$ related to $\Delta {\cal Q}_{M2}$ through~\eqref{Delta Q M2}.
The UV expansion of the new warp factor is
\begin{align}
H & = e^{3 z_0}\, \left(1 + 3\, \tilde{\phi}_4 \right) \, , \nonumber\\
& = \frac{16}{3}\, 2^{3/4}\, m^2\, e^{-9r/2} \left( 1 - 6\, Y_3 \right) + {\cal O}(e^{-13 r/2}) \, , \nonumber\\
& = \frac{16}{3}\, 2^{3/4}\, m^2\, e^{-9r/2} \left( 1 + 3\, e^{-3z_0(0)}\, Y_4^{IR} + \frac{9}{2}\, m^4\, e^{-6 z_0(0)}\, Y_6 \right) + {\cal O}(e^{-13 r/2}) \, ,
\end{align}
where in the last line we used~\eqref{Y3 Y4 Y6 Y5}, and one can see that $Y_6$ multiplies a $1/\rho^6$ term, as expected from the exact solution.
\section{Constructing the anti--M2 brane solution}
\setcounter{equation}{0}
In order to construct a first--order backreacted solution sourced by anti--M2 branes at the tip of the CGLP solution, the first necessary condition is that the force a probe M2 brane feels be nonzero, which implies:
\begin{eqnarray}
X_4 \neq 0 \, .
\end{eqnarray}
Furthermore, since the infrared is that of a smooth solution perturbed with smeared anti--M2 branes, we require that no field other than those sourced by these anti--M2 branes have a divergent energy density in the infrared.
Requiring no $1 \over r^2$ or stronger divergences in $\tilde{\phi}_1$, $\tilde{\phi}_2$, $\tilde{\phi}_3$ and $\tilde{\phi}_6$ immediately implies:
\begin{align}
& X_5 = - \frac{\sqrt{3}}{2}\, X_4 \,, \nonumber\\
& Y_2^{IR} = 0 \, , \label{IR-reg}\\
& X_1 = - 2 \, X_4 \, . \nonumber
\end{align}
Forbidding any ${1 \over r}$ divergence in $\tilde{\phi}_{1,2}$ results in
\be
Y_1^{IR} = 0 \, .
\ee
The divergence in $\tilde{\phi}_4$ is now
\be
\tilde{\phi}_4 = 3^{1/4}\, {\sqrt{3}\, Z_0\, m^2\, Y_6^{IR} - X_4 \over r^2} + {\cal O}(r^0)
\ee
and this is the proper divergence for the warp factor of anti--M2 branes spread on the $S^4$ in the infrared. The energy density that one can associate with this physical divergence is
\be
\rho(E) \sim \left( {d \tilde{\phi}_4 \over d r} \right)^2 \sim {1 \over r^6} \, .
\ee
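The $1/r^2$ coefficient of $\tilde{\phi}_4$ quoted above can be cross--checked against the $1/r^2$ term of the general IR expansion of $\tilde{\phi}_4$ in Section 4, after imposing~\eqref{IR-reg}. A numerical sketch (ours):

```python
from math import sqrt

def coeff_general(X4, X5, Y2, Y6, Z0m2):
    # 1/r^2 coefficient in the general IR expansion of phi_4 (Z0m2 = Z_0 m^2)
    return -(18 * X4 - 4 * sqrt(3) * X5 + Z0m2 * (Y2 - 24 * sqrt(3) * Y6)) \
           / (8 * 3**0.75)

for (X4, Y6, Z0m2) in [(1.0, 0.3, 0.7), (-2.0, 1.1, 2.5)]:
    # impose the IR regularity conditions X5 = -sqrt(3)/2 X4 and Y2^IR = 0
    got = coeff_general(X4, -sqrt(3) / 2 * X4, 0.0, Y6, Z0m2)
    # claimed anti-M2 divergence: 3^{1/4} (sqrt(3) Z_0 m^2 Y_6^IR - X_4)
    want = 3**0.25 * (sqrt(3) * Z0m2 * Y6 - X4)
    assert abs(got - want) < 1e-12
print("phi_4 divergence coefficient checked")
```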
Another more subtle divergence in the infrared comes from the M--theory four--form field strength, which is
\bea
G_4 &=& d K(\tau) \wedge dx^0 \wedge dx^1 \wedge dx^2 + m\, F_4 \ ,
\eea
where, from~\eqref{F4 ansatz},
\bea
F_4 &=& \dot f \, d\tau \wedge \tilde{\sigma}_1 \wedge \tilde{\sigma}_2 \wedge \tilde{\sigma}_3 + \dot h \,\epsilon^{ijk} \, d\tau \wedge \sigma_i \wedge \sigma_j \wedge \tilde{\sigma}_k \nonumber\\
&& + \frac{1}{2}\, (4 h - f)\, \epsilon^{ijk}\, \nu \wedge \sigma_i \wedge \tilde{\sigma}_j \wedge \tilde{\sigma}_k - 6\, h\,\nu \wedge \sigma_1 \wedge \sigma_2 \wedge \sigma_3 \ ~.
\eea
The unperturbed metric in the IR is regular and is given by
\begin{eqnarray}\label{IR metric}
ds^2 = Z_0^{2/3} \, ds_4^2 + \frac{1}{3^{3/4}} \, Z_0^{-1/3}\, \left[ dr^2 + \nu^2 + \sigma_i^2 + r^2\, \tilde{\sigma}_i^2 \right] \, ,
\end{eqnarray}
with the constant $Z_0$ given in~\eqref{Zzero}. The vanishing metric components $g_{\tsig\tsig}$ lead to a
divergent energy density from the four--form field strength components:
\bea
F_{\nu \sigma \tilde{\sigma} \tilde{\sigma}}\, F_{\nu \sigma \tilde{\sigma} \tilde{\sigma}}\, g^{\nu \nu}\, g^{\sigma \sigma}\, g^{\tilde{\sigma} \tilde{\sigma}}\, g^{\tilde{\sigma} \tilde{\sigma}} &=& \frac{ 9\, \sqrt{3}\, Z_0^{4/3}\, X_4^2}{r^4} +{\cal O}(r^{-2}) \, , \\
F_{r \tsig \tilde{\sigma} \tilde{\sigma}}\, F_{r \tsig \tilde{\sigma} \tilde{\sigma}}\, g^{r r}\, g^{\tsig \tsig}\, g^{\tilde{\sigma} \tilde{\sigma}}\, g^{\tilde{\sigma} \tilde{\sigma}} &=& \frac{81\, \sqrt{3}\, Z_0^{4/3}\, X_4^2}{r^4}+{\cal O}(r^{-2}) \, .
\eea
Unlike the analogous computations in IIB \cite{Bena:2009xk}, when integrating these energy densities the factor of $\sqrt{-G}\sim r^{3}$ is not strong enough to render the action finite. Hence, this singularity has both a divergent energy density and a divergent action.
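To exhibit the divergence of the action explicitly, one can integrate the $1/r^4$ energy density against the IR measure; near the tip the three factors $r^2\, \tilde{\sigma}_i^2$ in~\eqref{IR metric} contribute $r^3$ to $\sqrt{-G}$, so the action integrand behaves as $1/r$. A numerical sketch (ours):

```python
import math

def action_piece(eps, n=200000):
    # midpoint-rule integral of sqrt(-G) * energy density ~ r^3 * r^(-4) = 1/r
    # between a cutoff eps and r = 1
    h = (1.0 - eps) / n
    return sum(h / (eps + (i + 0.5) * h) for i in range(n))

# the integral grows like -log(eps): a logarithmic divergence of the action
for eps in (1e-2, 1e-3):
    assert abs(action_piece(eps) + math.log(eps)) < 1e-3
print("log-divergent action")
```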
As discussed in the Introduction, if this singularity is physical then the perturbative solution we find corresponds to the first--order backreaction of a set of anti--M2 branes in the Stenzel-CGLP background. If this singularity is not physical, then our analysis indicates that anti--M2 branes cannot be treated as a perturbation of this background, and hints towards the fact that antibranes in backgrounds with positive brane charge dissolved in fluxes do not give rise to metastable vacua.
\subsection*{Acknowledgments}
We would like to thank Mariana Gra\~na and Chris Herzog for interesting discussions.
This work is supported in part by a Contrat de Formation par la Recherche of CEA/Saclay, the DSM CEA/Saclay, the grants ANR--07--CEXC--006 and ANR--08--JCJC--0001--0, and by the ERC Starting Independent Researcher Grant 240210 -- String--QCD--BH.
\newpage
\begin{appendix}
\section{Subtleties in Section 2}
To justify our choice of integration constant in~\eqref{H tilde min}, we derive the expression for the non--dynamical scalar $K^{\prime}_0$ in two different ways. First of all, we use the expression~\eqref{H tilde min} for $K^{\prime}$ that arises from its algebraic equation of motion. Inserting the zeroth--order expressions~\eqref{0th solt} of the fields appearing in this expression, we find
\begin{eqnarray}\label{H tilde ooth}
K_0^{\prime} = - 3\, m^2\, \frac{\sinh(r)}{\cosh(r)^4}\, \frac{e^{-6 z_0(r)}}{\left( 2 + \cosh(2\, r)\right)^{3/4}}\, .
\end{eqnarray}
On the other hand, let us check that this agrees with the expression obtained from the condition that the zeroth--order CGLP $K^{\prime}$ has to satisfy,
\be
K_0^{\prime} = e^{- 6 z_0(r)}\, \frac{dH_0}{dr} \, ,
\ee
with $H_0$ solving
\begin{eqnarray}\label{H0 eom}
\nabla_8^2\, H_0 = - \frac{1}{2}\, m^2\, \left| F_4 \right|^2 \, .
\end{eqnarray}
This reduces to
\begin{align}\label{H0 eom 2}
\frac{dH_0}{dr} = 3\, 2^3\, m^2 \frac{e^{2 \gamma_0}}{\sinh(2\, r)^3} \left( \ell - \tanh(r)^4 \right)\,
\end{align}
and one must set $\ell = 0$ in order for the solution to be regular.
As a result,
\begin{eqnarray}\label{K0 2nd}
K_0^{\prime} = - 3\, m^2\, \frac{\sinh(r)}{\cosh(r)^4}\, \frac{e^{-6 z_0(r)}}{\left( 2 + \cosh(2\, r)\right)^{3/4}}\, ,
\end{eqnarray}
in agreement with the expression for $K_0^{\prime}$ found above from the equation of motion for this non--dynamical field, determined in terms of $f_0$ and $h_0$~\eqref{0th solt}.
\section{Behavior of $\tilde{\xi}$}
We collect here the infrared and ultraviolet asymptotic expansions of the exact solutions for $\tilde{\xi}^{i}$ which we have derived in Section 4.1.
\subsection{IR behavior of $\tilde{\xi}$}
The IR behavior of the $\tilde{\xi}_a$'s is the following:
\begin{eqnarray}
\tilde{\xi}_1^{IR} = X_1 + 2\, X_4 \left[ 1 - \frac{3^{1/4}}{2}\, m^2\, e^{-3 z_0(0)}\, r^2 \right] + {\cal O}(r^4) \, ,\nonumber
\end{eqnarray}
\begin{align}
& \tilde{\xi}_2^{IR} = \left[ \frac{3}{2}\, X_1 - \frac{4}{3\, \sqrt{3}}\, X_5 + \frac{7}{3}\, X_4 \right] + \Big[ \frac{3}{2}\, X_1 + \frac{8}{3\, \sqrt{3}}\, X_5 \nonumber\\ & + \frac{1}{3}\, X_4\, \left( 13 - 10\, 3^{1/4}\, e^{-3 z_0(0)}\, m^2 \right) \Big] r^2 + {\cal O}(r^4)\, , \nonumber
\end{align}
\begin{eqnarray}
\tilde{\xi}_3^{IR} = 3^{1/4}\, e^{-3 z_0(0)}\, m^2\, X_4\, r^2 + {\cal O}(r^4) \, ,
\end{eqnarray}
\begin{eqnarray}
\tilde{\xi}_4^{IR} = X_4 \left[ 1 - \frac{3^{1/4}}{2}\, m^2\, e^{-3 z_0(0)}\, r^2 \right] + {\cal O}(r^4) \ , \nonumber
\end{eqnarray}
\begin{align}
\tilde{\xi}_5^{IR} = &\, \frac{1}{r^2}\, \left[ X_5 + X_4\, \left( \frac{\sqrt{3}}{2} - \frac{3^{3/4}}{2}\, e^{-3 z_0(0)}\, m^2 \right) \right] \nonumber\\ & + \Bigg[ \frac{1}{6}\, \left( 7\, X_5 + 12\, X_6 \right) + X_4\, \bigg[ \frac{17}{20\, \sqrt{3}} - \frac{97}{12}\, 3^{3/4}\, e^{-3 z_0(0)}\, m^2 \nonumber\\ & - \sqrt{6}\, e^{-3 z_0(0)}\, m^2\, \Pi \left(-\sqrt{3};-\text{arcsin}\left(\frac{1}{3^{1/4}}\right) \mid -1 \right)\bigg] - 3^{3/4}\, e^{-3 z_0(0)}\, m^2\, X_4\, \text{log}(r) \Bigg] \nonumber\\ & + \Bigg[ \frac{53}{120}\, X_5 + \frac{1}{48}\, X_4\, \left(\frac{53}{5}\, \sqrt{3} + \frac{47}{5}\, 3^{3/4}\, e^{-3 z_0(0)}\, m^2 \right) \Bigg] r^2 + {\cal O}(r^4) \, , \nonumber
\end{align}
\begin{align}
\tilde{\xi}_6^{IR} = &\, - \frac{2}{r^2}\, \left[ 2\, X_5 + \sqrt{3}\, X_4 \right] + \left[ \frac{4}{3}\, X_5 + X_4\, \left( \frac{2}{\sqrt{3}} + 3^{3/4}\, e^{-3 z_0(0)}\, m^2 \right) \right] \nonumber\\ & + \left[ \frac{37}{30}\, X_5 + X_4\, \left( \frac{37}{20\, \sqrt{3}} - 2\, 3^{3/4}\, e^{-3 z_0(0)}\, m^2 \right) \right] r^2 + {\cal O}(r^4) \, . \nonumber
\end{align}
\newpage
\subsection{UV behavior of $\tilde{\xi}$}
The UV behavior of the $\tilde{\xi}_a$'s is as follows:
\begin{eqnarray}
\tilde{\xi}_1^{UV} = X_1 + \frac{32}{3}\, 2^{3/4}\, m^2\, X_4\, e^{-3 z_0(0)}\, e^{-\frac{9}{2} r} + {\cal O}(e^{-13 r/2})\, , \nonumber
\end{eqnarray}
\begin{align}
\tilde{\xi}_2^{UV} = &\, -\frac{3}{32}\, X_3\, e^{6 r} + \frac{3}{16}\, X_3\, e^{4 r} + \left[ \frac{3}{8}\, X_1 + \frac{3}{32}\, X_3 +\frac{2}{3\, \sqrt{3}} \left( X_5 + X_6 \right) \right]\, e^{2 r} \nonumber\\ & + \left[ \frac{3}{4}\, X_1 - \frac{3}{8}\, X_3 -\frac{8}{3\, \sqrt{3}}\,\left( X_5 + X_6 \right) \right] \nonumber\\ & + \left[ \frac{3}{8}\, X_1 + \frac{3}{32}\, X_3 + \frac{2}{3\, \sqrt{3}}\, \left( X_5 + X_6 \right) \right]\, e^{-2 r} \nonumber\\ & + \left[ \frac{3}{16}\, X_3 + \frac{64}{3\, \sqrt{3}}\, X_6 \right]\, e^{-4 r} + \frac{32}{7}\, 2^{3/4}\, e^{-3 z_0(0)}\, m^2\, X_4\, e^{-9r/2} \nonumber\\ & - \left[ \frac{3}{32}\, X_3 + \frac{256}{3\, \sqrt{3}}\, X_6 \right]\, e^{-6 r} + {\cal O}(e^{-13 r/2}) \, , \nonumber
\end{align}
\begin{align}
\tilde{\xi}_3^{UV} = &\, \frac{1}{8}\, X_3\, e^{6 r} - \frac{9}{8}\, X_3\, e^{2 r} + 2\, X_3 - \frac{9}{8}\, X_3\, e^{-2 r} \nonumber\\ & + \frac{32}{7}\, 2^{3/4}\, e^{-3 z_0(0)}\, m^2\, X_4\, e^{-9r/2} + \frac{1}{8}\, X_3\, e^{-6 r} + {\cal O}(e^{-13 r/2})\, , \nonumber
\end{align}
\begin{eqnarray}
\tilde{\xi}_4^{UV} = \frac{16}{3}\, 2^{3/4}\, m^2\, X_4\, e^{-3 z_0(0)}\, e^{-\frac{9}{2} r} + {\cal O}(e^{-13 r/2})\, ,
\end{eqnarray}
\begin{align}
\tilde{\xi}_5^{UV} =&\, \frac{1}{2}\, \left( X_5 + X_6 \right)\, e^{r} + \frac{5}{2}\, \left( X_5 + X_6 \right)\, e^{-r} + 2\, \left( 3\, X_5 - X_6 \right)\, e^{-3 r} \nonumber\\ & + 2\, \left( 5\, X_5 + X_6 \right)\, e^{-5 r} - \frac{96}{13}\, 2^{3/4}\, \sqrt{3}\, e^{-3 z_0(0)}\, m^2\, X_4\, e^{-11r/2} + {\cal O}(e^{-13r/2}) \, , \nonumber
\end{align}
\begin{align}
\tilde{\xi}_6^{UV} =&\, \left( X_5 + X_6 \right)\, e^{r} - 7\, \left( X_5 + X_6 \right)\, e^{-r} - 24\, \left( X_5 - X_6 \right)\, e^{-3 r} \nonumber\\ & - 8\, \left( 5\, X_5 + 7\, X_6 \right)\, e^{-5 r} - \frac{192}{13}\, 2^{3/4}\, \sqrt{3}\, m^2\, X_4\, e^{-3 z_0(0)}\, e^{-11r/2} + {\cal O}(e^{-13 r/2}) \, . \nonumber
\end{align}
\end{appendix}
\section{Running of $Z_q$}
\subsection{Perturbative running}
In Landau gauge $Z_q$ has a vanishing anomalous dimension at leading order,
i.e. its running starts at $O(\alpha^2)$.
The perturbative running has been computed up to four
loops; see~\cite{Chetyrkin:1999pq} and references therein. The needed formulae
are also accessible on the web site indicated in ref.~\cite{Chetyrkin:1999pq}.
\subsection{Wilson expansion and non-perturbative running}
To handle non-perturbative corrections we use the Operator Product
Expansion~\cite{Wilson69} and its application to estimating
power-suppressed non-perturbative corrections via vacuum expectation
values~\cite{SVZ}.
In Landau gauge there exists only one dimension-2 operator
allowed to have a vacuum expectation value: $A^2 \equiv
\sum_{\mu=1}^{4}\, \sum_{a=1}^{8} A_\mu^a A^{a\mu}$.
The Wilson coefficient of this operator has been computed
to leading logarithm in~\cite{Boucaud:2005rm}
and extensively for all propagators
up to $O(\alpha^4)$ in~\cite{Chetyrkin:2009kh}.
\subsubsection{$\VEV{A^2}$ tree level Wilson coefficients for $Z_q$}
To give a flavor of the computation, let us sketch the tree-level calculation
of this Wilson coefficient.
\begin{figure}[h]
\begin{center}
\includegraphics[width=50mm]{figs/diagram_prop.jpg}
\end{center}
\end{figure}
Consider the above diagram, describing a quark propagating in a constant
background gauge field. The red bubble represents the
interaction of the quark with this background field:
$i g {\lambda_a}/2\, A \!\!\!/^a$.
The Feynman rules are then applied as usual. Neglecting the quark mass,
this gives
$$
\frac{-ip \!\!\!/}{p^2}\left(\sum_{\mu=1, a=1}^{\mu=4, a=8}\sum_{\mu'=1, a'=1}^{\mu'=4, a'=8}i
g \frac{\lambda_a}2 A^{a}_{\mu}\gamma^{\mu} \frac{-ip \!\!\!/}{p^2}
i g \frac{\lambda_{a'}}2 A^{a'}_{\mu'} \gamma^{\mu'}\delta_{aa'}\delta_{\mu\mu'}\right)\frac{-ip \!\!\!/}{p^2}$$
\begin{eqnarray}
= -\frac {g^2} {12} \frac {\VEV{A^2}}{p^2} \times \frac{-ip \!\!\!/}{p^2}
\end{eqnarray}
where $\langle \, \, \rangle$ represents the vacuum expectation value,
$\sum \lambda_a^2/4 = C_F =4/3$ (proportional to the identity matrix in color
space), the sum over $\mu$ gives a
factor 4, and
\begin{eqnarray}\label{form}\langle(A_\mu^a)^2\rangle=\langle A^2/32\rangle, \qquad
\langle (A\cdot \hat p)^2 \rangle =
\langle A^2/4\rangle\end{eqnarray}
from the homogeneity of the vacuum for rotations in space-time and
color space.
For $Z_q$ defined by~\eq{Zqdef}, we get at tree level the following
non-perturbative contribution due to $\VEV{A^2}$:
\begin{eqnarray}
\delta Z_q = \frac {g^2} {12} \frac {\VEV{A^2}}{p^2}
\end{eqnarray}
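The color factor $\sum_a \lambda_a^2/4 = C_F\, \mathbb{1} = 4/3\, \mathbb{1}$ used above is easy to verify explicitly from the Gell-Mann matrices. A small numerical check (ours), in Python with NumPy:

```python
import numpy as np

# the eight Gell-Mann matrices
lam = np.zeros((8, 3, 3), dtype=complex)
lam[0][0, 1] = lam[0][1, 0] = 1
lam[1][0, 1] = -1j; lam[1][1, 0] = 1j
lam[2][0, 0] = 1; lam[2][1, 1] = -1
lam[3][0, 2] = lam[3][2, 0] = 1
lam[4][0, 2] = -1j; lam[4][2, 0] = 1j
lam[5][1, 2] = lam[5][2, 1] = 1
lam[6][1, 2] = -1j; lam[6][2, 1] = 1j
lam[7] = np.diag([1, 1, -2]) / np.sqrt(3)

# sum_a (lambda_a / 2)^2 = C_F * identity, with C_F = 4/3
casimir = sum(la @ la for la in lam) / 4
assert np.allclose(casimir, (4.0 / 3.0) * np.eye(3))
print("C_F = 4/3 confirmed")
```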
\subsubsection{The Wilson coefficients at $O(\alpha^4)$}
The Wilson coefficient of $\VEV{A^2}$ for the quark propagator has been computed
up to $O(\alpha^4)$ in~\cite{Chetyrkin:2009kh} in the $\overline{\rm MS}$ scheme.
Our lattice data are naturally expressed in the RI'-MOM scheme. Some work is
needed to derive the analytic formula which allows a fit of our lattice data;
this is done in appendix~\ref{appendix2}.
\section{Lattice results and hypercubic corrections}
\label{sec:Zqhyp}
The hypercubic artefacts generate in the raw lattice data, $Z_q^{\mathrm
{latt}}$, the so-called ``half-fishbone'' structure~\cite{Boucaud:2003dx} shown
in fig~\ref{fig:fishbone}. In this figure all the points labelled as explained
in \eq{pmu} and \eq{cubic} are plotted. The color code shows the value of the
ratio $p^{[4]}/(p^2)^2$, which lies between 0.25 and 1: the closer it is to 1,
the less ``democratic'' (or the more ``tyrannic'') the point. We see, as expected, that the tyrannic points
are more affected by the artefacts. We also see that the spread of
$Z_q^{\mathrm {latt}}(a^2\,p^2, a^4p^{[4]}, a^6 p^{[6]}, ap_4, a^2\Lambda_{\rm
QCD}^2)$ at a given $p^2$ can be as large as 0.07, i.e. about 10\%. Taking a
naive average, without a correct treatment of this artefact, would leave a
systematic upward shift of about 5\%. In section \ref{NPhyp} we have developed
a non-perturbative method to cure this artefact (this method comes in two
variants, the SWF and the OWF).
There exist two other methods.
The oldest one is the ``democratic selection''. It amounts to keeping only,
say, the points with $\mathrm{ratio} \le 0.3$ in fig~\ref{fig:fishbone}. One
sees that it discards a lot of data while remaining far from the
``egalitarian'' result (the lowest curve in fig~\ref{fig:fishbone}), which we
consider better. We will thus not consider this democratic selection further.
The second method to correct hypercubic artefacts uses a perturbative
calculation and is detailed in the next section.
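To make the ``democracy'' criterion concrete, here is a minimal Python sketch (with a hypothetical lattice extent $L$; the ratio is in fact $L$-independent) computing $\mathrm{ratio}=p^{[4]}/(p^2)^2$ for a few integer momenta $n_\mu$, with $p_\mu = 2\pi n_\mu/L$:

```python
import numpy as np

def democracy_ratio(n, L=32):
    """ratio = p^[4]/(p^2)^2 for p_mu = 2*pi*n_mu/L (L is a
    hypothetical lattice extent; the ratio is L-independent)."""
    p = 2 * np.pi * np.asarray(n, dtype=float) / L
    return np.sum(p**4) / np.sum(p**2) ** 2

print(round(democracy_ratio((1, 1, 1, 1)), 3))  # 0.25: fully "democratic"
print(round(democracy_ratio((2, 1, 1, 0)), 3))  # 0.5
print(round(democracy_ratio((2, 0, 0, 0)), 3))  # 1.0: "tyrannic"

# the democratic selection would keep only orbits with ratio <= 0.3
orbits = [(1, 1, 1, 1), (2, 1, 1, 0), (2, 0, 0, 0)]
kept = [n for n in orbits if democracy_ratio(n) <= 0.3]
assert kept == [(1, 1, 1, 1)]
```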
\begin{figure}[h]
\begin{center}
\includegraphics[width=10cm]{figs/fish_39_one_w.pdf}
\end{center}
\caption{\small This plot shows the raw data for $\beta=3.9$, $Z_q^{\mathrm
{latt}}(a^2\,p^2, a^4p^{[4]}, a^6 p^{[6]}, ap_4, a^2\Lambda_{\rm QCD}^2)$ in
\eq{eq:p4expan}, in terms of $a^2 p^2$ in the horizontal axis.
The ``half-fishbone structure'' due to hypercubic artefacts is
clearly seen. There is one point for every cubic (3-D) orbit. The color code
classifies the data according to their degree of ``democracy'' measured by
$\mathrm{ratio}=p^{[4]}/(p^2)^2 $. The lowest plot corresponds to the
non-perturbatively hypercubic corrected data (or ``egalitarian result'') resulting
from the one-window fit $Z_q^{\mathrm
{hyp\_corrected}}(a^2p^2,a^2\Lambda_{\rm QCD}^2)$ in \eq{eq:owfexpan}.
The data correspond to $\beta=3.9, a\mu=0.004$,
but the same features appear for all $\beta$'s. }
\label{fig:fishbone}
\end{figure}
\subsection{Perturbative correction}
\label{sec:pert}
\begin{figure}[hbt]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=9.7cm]{figs/Zq_pert_boosted_ptilde.pdf}
&
\hspace*{-1.75cm}
\includegraphics[width=9.7cm]{figs/pert.pdf}
\end{tabular}
\end{center}
\caption{\small L.h.s: data of fig.~\ref{fig:fishbone}
corrected by the perturbative subtraction, formula of~\eq{eq:subpertild},
section~\ref{sec:subpertild}.
R.h.s: same exercise with the prescription \eq{eq:subpert}
with \eq{eq:feli}. The color code is the same as
in fig~\ref{fig:fishbone} and $\mathrm{ratio}=p^{[4]}/(p^2)^2 $.
The horizontal axis is $a^2 p^2$ for both.}
\label{fig:pert}
\end{figure}
The perturbative method~\cite{Constantinou:2009tr} consists in computing the propagator at one loop in lattice
perturbation theory~\cite{Capitani:2002mp} and then, assuming that the lattice-spacing
artefacts are reliably described by the $O(g^2 a^2)$ terms thus obtained,
subtracting them from the lattice data. This method has been applied to quark
bilinear operators in~\cite{Constantinou:2010gr}. For comparison we have
applied this method here, following the prescription described in section 3.2.2
of~\cite{Constantinou:2010gr}, as well as a variant of it.
Eq.~(24) in~\cite{Constantinou:2009tr} may be written in Landau gauge as
\begin{eqnarray}\label{eq:pert}
Z_q^{\mathrm{pert}}(a^2p^2)&=&Z_q^{\mathrm{tree}}(a^2p^2)
+ \tilde g^2\big(b_{q1} +
c_{q1} \; a^2p^2 + c_{q2}\; a^2p^2\log(a^2p^2) + \nonumber \\
&&c_{q3} \frac{a^2 p^{[4]}}{p^2} + c_{q4} \frac{a^2 p^{[4]}}{p^2} \log(a^2p^2)\big) \ ,
\end{eqnarray}
where, following~\cite{Constantinou:2010gr}, $\tilde g ^2 = g_{\mathrm
{boosted}}^2/(12\pi^2)$, with $g_{\mathrm
{boosted}}^2 = g_{\mathrm {bare}}^2/\langle \mathrm{plaquette} \rangle$.
The coefficients are defined using the notations
of eq. (24) in~\cite{Constantinou:2009tr}:
\begin{eqnarray}\label{eq:feli}
\begin{array}{ll}
\displaystyle
c_{q1} = \epsilon^{(2,4)} \ , & \displaystyle c_{q2} = \frac{59}{240}+ \frac {c_1}{2} + \frac
{C_2}{60} \ , \nonumber \\
\displaystyle c_{q3} = \epsilon^{(2,1)} - \frac{3}{80} -
\frac{C_2}{10} \ , & \displaystyle \rule[0cm]{0cm}{0.9cm} c_{q4} = \frac{101}{120} -\frac{11}{30}\,C_2 \ .
\end{array}
\end{eqnarray}
\subsubsection{Prescription with $\tilde p_\mu$}
\label{sec:subpertild}
Using the prescription of eq.~(35) in~\cite{Constantinou:2010gr}, for every
cubic orbit we define the subtracted quantity as:
\begin{eqnarray}\label{eq:subpertild}
Z_q^{\mathrm {pert\_tilde}}(a^2\,{p}^2, a^4p^{[4]}, a^6 p^{[6]}, ap_4, a^2\Lambda_{\rm
QCD}^2) \ = \ Z_q^{\mathrm {latt}}(a^2\,{p}^2, a^4p^{[4]}, a^6 p^{[6]}, ap_4, a^2\Lambda_{\rm
QCD}^2) \nonumber \\
- \ \tilde g^2\left( c_{q1} \; a^2\tilde p^2 + c_{q2}\; a^2\tilde p^2\log(a^2\tilde p^2) +
c_{q3} \frac{a^2 \tilde p^{[4]}}{\tilde p^2} + c_{q4}
\frac{a^2\tilde p^{[4]}}{\tilde p^2} \log(a^2\tilde p^2) \right)
\end{eqnarray}
The result of this subtraction is plotted in the l.h.s of fig.~\ref{fig:pert}.
The half-fishbone structure is still clearly visible
for $a^2p^2 > 1.6$. We therefore try another prescription.
\subsubsection{Prescription with $p_\mu$}
\label{sec:subpert}
The trace of \eq{Zq}, which introduces $\tilde p_\mu$, had to be applied
in eq.~(24) of~\cite{Constantinou:2009tr} to obtain \eq{eq:pert}.
We now expand in $p_\mu$ before performing the trace and then keep the
$O(g^2 a^2)$ terms for subtraction. This gives, using
again~\cite{Constantinou:2009tr},
\begin{eqnarray}\label{eq:subpert}
Z_q^{\mathrm {pert\_notilde}}(a^2\,{p}^2, a^4p^{[4]}, a^6 p^{[6]},
ap_4, a^2\Lambda_{\rm
QCD}^2) \ = \ Z_q^{\mathrm {latt}}(a^2\,{p}^2, a^4p^{[4]}, a^6 p^{[6]}, ap_4, a^2\Lambda_{\rm
QCD}^2)\nonumber \\
- \ \tilde g^2 \left(
c_{q1} \; a^2 p^2 + c_{q2}\; a^2 p^2\log(a^2 p^2) +
c_{q3}' \frac{a^2 p^{[4]}}{ p^2} + c_{q4}
\frac{a^2 p^{[4]}}{ p^2} \log(a^2 p^2) \right) \ ,
\end{eqnarray}
where
\begin{eqnarray}\label{eq:feli2}
\quad c_{q3}' \ = \ \frac {\epsilon^{(0,1)}} 6 + \epsilon^{(2,1)} - \frac{3}{80} -
\frac{C_2}{10} \ = \ c_{q3} + \frac {\epsilon^{(0,1)}} 6 \ .
\end{eqnarray}
This result is plotted in the r.h.s. of fig.~\ref{fig:pert}. With this variant the half-fishbone is
significantly reduced, but the dominant $O(4)$ artefact is overcorrected: a linear behaviour in
$a^2p^2$ is clearly visible, larger than with the first prescription.
\subsubsection{Lessons about the perturbative method}
We see that the two prescriptions start
differing significantly at $a^2p^2 \simeq 1$. This is not surprising, since
higher order terms become significant there: for example, $a^4 p^{[4]}$ is also of
order 1 for tyrannic points, while $a^2 p^2 - a^2 \tilde p^2 \simeq 0.3$.
The perturbative method goes in the right direction, but it is impossible to
know a priori its quality without performing the tests we propose here. The
method contains several ambiguities: what to take for the coupling constant?
Should one use $p_\mu$ or $\tilde p_\mu$? Contrary to the non-perturbative method, it
provides both the hypercubic corrections and the $O(a^2p^2)$ ones. Conceptually
the perturbative method is very useful as it exhibits qualitative features which
may guide the use of the non-perturbative one: for example, it justifies the
smoothness of the variation of the derivative $R$~\eq{eq:deriv} as a function of
$a^2p^2$, as well as that of the slope in $a^2p^2$. Finally, we shall see that it
gives results similar to the non-perturbative ones.
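For concreteness, the structure of the subtractions \eq{eq:subpertild} and \eq{eq:subpert} can be sketched as follows in Python; the coefficients and kinematic inputs are placeholders to be supplied (for instance from \eq{eq:feli}), not values from this work:

```python
import numpy as np

def pert_subtract(zq_latt, a2p2, a2p4_over_p2, g2_boosted, c1, c2, c3, c4):
    """O(g^2 a^2) one-loop subtraction with the structure of the
    subtracted quantities above; c1..c4 are the perturbative
    coefficients (placeholders), g2_boosted = g_bare^2/<plaquette>."""
    gt2 = g2_boosted / (12 * np.pi**2)
    log = np.log(a2p2)
    return zq_latt - gt2 * (c1 * a2p2 + c2 * a2p2 * log
                            + c3 * a2p4_over_p2 + c4 * a2p4_over_p2 * log)

# with all coefficients set to zero the data are returned unchanged
assert pert_subtract(0.8, 1.0, 0.5, 2.0, 0, 0, 0, 0) == 0.8
```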
\subsection{Non perturbative hypercubic correction}
\subsubsection{Sliding window fit versus one window fit}
We now apply the non-perturbative correction. We only
expand in $p^{[4]}$, since the higher order terms turn out to be
negligible in our momentum range. In section~\ref{NPhyp} we have presented two types of fits, similar
in spirit: the sliding window fit (SWF) described in section~\ref{sec:slid},
which amounts to using \eq{eq:p4expan} combined with \eq{eq:deriv}, and the one
window fit (OWF) described in section~\ref{sec:one}, which amounts to using
\eq{eq:ow}. In the l.h.s of \fig{fig:NP_subtracted} we show, for
$\beta=3.9$, the comparison of the hypercubic corrected data after applying the
OWF and the SWF. The difference does not appear to be large, which is rather
encouraging; the OWF gives a slightly smoother result. For this value of $\beta$ the chi-squared
is not good (see table~\ref{tab:p4slopes}), but remember that the fit uses
only two hypercubic parameters. The chi-squared for the other $\beta$'s are better.
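A minimal sketch of the OWF, assuming only the fit form of \eq{eq:owfexpan} (the actual analysis is more involved), is a single linear least-squares solve with one $Z_q$ unknown per $p^2$ value plus the two global hypercubic parameters:

```python
import numpy as np

def one_window_fit(p2, p4, zq, a):
    """One-window fit sketch: solve simultaneously for one Z_q value
    per distinct p^2 and the two global hypercubic slopes in
    Z_q^latt = Z_q^hyp(p^2) + c_a2p4 a^2 p^[4]/p^2 + c_a4p4 a^4 p^[4]."""
    vals, idx = np.unique(np.round(p2, 8), return_inverse=True)
    n, m = len(vals), len(zq)
    A = np.zeros((m, n + 2))
    A[np.arange(m), idx] = 1.0      # one Z_q^hyp unknown per p^2
    A[:, n] = a**2 * p4 / p2        # c_a2p4 column
    A[:, n + 1] = a**4 * p4         # c_a4p4 column
    x, *_ = np.linalg.lstsq(A, zq, rcond=None)
    return vals, x[:n], x[n], x[n + 1]

# synthetic closure test: recover known slopes (a = 1 for simplicity)
rng = np.random.default_rng(0)
p2 = np.repeat([1.0, 2.0, 3.0], 4)
p4 = rng.uniform(0.3, 1.0, 12) * p2**2
zq = 0.7 + 0.01 * p2 + 0.05 * p4 / p2 - 0.01 * p4
vals, zhyp, c24, c44 = one_window_fit(p2, p4, zq, a=1.0)
assert np.isclose(c24, 0.05) and np.isclose(c44, -0.01)
```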
\begin{figure}[hbt]
\begin{center}\begin{tabular}{cc}
\includegraphics[width=97mm]{figs/Zq_NPHC_39_a2p2.pdf}
&
\hspace*{-1.75cm}
\includegraphics[width=97mm]{figs/NP_subtracted_39.pdf}
\end{tabular}\end{center}
\caption{\small On the l.h.s we compare for $\beta=3.9, a\mu=0.004$
the non-perturbatively corrected $Z_q$ from the sliding-window fit (SWF),
section~\ref{sec:slid} and from the one-window one (OWF), section~\ref{sec:one}.
To better exhibit the running, we have also
subtracted the $O(a^2p^2)$ artefact which will be computed below. On
the r.h.s we show, using the OWF, the non-perturbatively subtracted
data~\eq{eq:NPsub}.
There is one point for every cubic (3-D) orbit. The
color code and the definition of the parameter ratio are the same as in fig.~\ref{fig:fishbone}.
The black line
corresponds to the OWF non-perturbatively corrected result:
$Z_q^{\mathrm {hyp\_corrected}}(a^2p^2,a^2\Lambda_{\rm QCD}^2)$ of
\eq{eq:owfexpan}. The $O(a^2p^2)$ has also been subtracted.
The horizontal axis is $a^2 p^2$.}
\label{fig:NP_subtracted}
\end{figure}
\subsubsection{Half-fishbone reduction test}
We also need to apply the half-fishbone reduction test as in the
perturbative case, i.e. to subtract from the raw data of every cubic orbit the
hypercubic correction. We present the OWF result. From \eq{eq:owfexpan} the
subtraction amounts to
\begin{eqnarray}\label{eq:NPsub}
Z_q^{\mathrm {non-pert\_OWF}}(a^2\,{p}^2, a^4p^{[4]}, a^6 p^{[6]},
ap_4, a^2\Lambda_{\rm
QCD}^2) &=& Z_q^{\mathrm {latt}}(a^2\,{p}^2, a^4p^{[4]}, a^6 p^{[6]}, ap_4,
a^2\Lambda_{\rm
QCD}^2)\nonumber \\ &-&
c_{a2p4} \, a^2 \frac{p^{[4]}}{p^2} - c_{a4p4}\,a^4\,p^{[4]}
\end{eqnarray}
The result is shown in the r.h.s of \fig{fig:NP_subtracted}, one point per cubic
orbit. The non-perturbatively corrected
$Z_q^{\mathrm {hyp\_corrected}}(a^2p^2,a^2\Lambda_{\rm QCD}^2)$ of
\eq{eq:owfexpan} is represented by the black line in the r.h.s of
\fig{fig:NP_subtracted}. It lies well in the middle of the subtracted points, as
expected.
The half-fishbones have been strongly reduced. A
remainder of these artefacts is due to the less democratic, or
``tyrannic'', points, which have only one non-vanishing component, or one
large component and a very small one. These points are less numerous than the plot
suggests: their orbits are small, which explains the larger error bars.
We have checked that these tyrannic points indeed have a small impact on the
hypercubic corrected result.
\subsubsection{The slopes in $p^{[4]}$}
\begin{figure}[hbt]
\begin{center}
\includegraphics[width=100mm]{figs/slope.pdf}
\end{center}
\caption{\small The three lattice slopes $R$ defined in \eq{eq:deriv} for the three lattice
spacings and the perturbative one
$R_{\mathrm{pert}}=c_{q3}'+ c_{q4}\log(a^2p^2)$ for $\beta$ = 3.9,
in terms of $a^2 p^2$ in the horizontal axis.}
\label{fig:slopes}
\end{figure}
The sliding window fit determines for every window a value of the slope
$R(a^2p^2)$~\eq{eq:deriv}, i.e. the derivative $\partial Z_q^{\mathrm
{latt}}/\partial (a^2p^{[4]}/p^2)$. This allows for a study of the shape of the
function $R$. In \fig{fig:slopes} we plot this slope $R$ defined in
\eq{eq:deriv} for the three values of $\beta$. We also plot the equivalent
slope using the perturbative formula with the $p_\mu$ prescription for
$\beta=3.9$, section~\ref{sec:subpert}: $R_{\mathrm{pert}}$ = $c_{q3}' + c_{q4}\log(a^2p^2)$,
$c_{q3}'$ defined in \eq{eq:feli2} and $c_{q4}$ in \eq{eq:feli}. We see that
this perturbative slope is in fair agreement with the non-perturbative one,
explaining the good elimination of half-fishbones in the r.h.s. in
\fig{fig:pert}.
The three non-perturbative curves in \fig{fig:slopes} appear to be
affine (a constant minus a linear term) over a rather large momentum interval. This is what is expressed in
\eq{eq:ow}, from which we have deduced the one-window fit: a fit over the full
range $[0.5,3.5]$ with only two hypercubic parameters~\footnote{Of course there
are additionally as many hypercubic-insensitive parameters as there are
values of $p^2$ in the range, which are simply the values of
$Z_q^{\mathrm {hyp\_corrected}}(a^2p^2,a^2\Lambda_{\rm QCD}^2)$.}.
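The SWF slope extraction can be sketched as follows (a toy version with synthetic, noiseless data; the window placement and width are illustrative choices, not those of the analysis):

```python
import numpy as np

def sliding_window_slopes(a2p2, x, zq, centers, width=0.5):
    """Sliding-window fit sketch: in each window in a^2 p^2, fit
    Z_q^latt = const + R * x with x = a^2 p^[4]/p^2, returning the
    slope R(a^2 p^2), i.e. dZ_q/d(a^2 p^[4]/p^2)."""
    R = []
    for c in centers:
        m = np.abs(a2p2 - c) < width / 2
        slope, _ = np.polyfit(x[m], zq[m], 1)   # linear fit per window
        R.append(slope)
    return np.array(R)

# synthetic check with an affine slope R = 0.05 - 0.01 * a^2p^2
a2p2 = np.repeat([1.0, 2.0, 3.0], 5)
x = np.tile(np.linspace(0.1, 0.5, 5), 3)
zq = 0.70 + (0.05 - 0.01 * a2p2) * x
R = sliding_window_slopes(a2p2, x, zq, centers=[1.0, 2.0, 3.0])
assert np.allclose(R, [0.04, 0.03, 0.02])
```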
\begin{figure}[hbt]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=8.5cm]{figs/comp_gA2.png} &
\includegraphics[width=8.5cm]{figs/comp_a2p2.png}
\end{tabular}
\end{center}
\caption{\small We plot the values of the fitted slopes $c_{a2p2}$ (r.h.s) and the
condensate $g^2\, \VEV{A^2}$ (l.h.s) as extracted from the $1/p^2$ contribution
to the fit, see table~\ref{tab:p2andA2}, \ref{tab:p2andA2_sliding} and
\ref{tab:merged}. In the
left plot we show the results from the OWF and from the SWF. It can be seen that
the values obtained with the $O(\alpha^4)$ Wilson coefficient of $\VEV{A^2}$ computed by Chetyrkin and
Maier~\cite{Chetyrkin:2009kh} (indicated by the ``CM'' initials)
are about 20\% below the tree level results.
We show the value obtained from the merged results of the three lattice
spacings, table~\ref{tab:merged}. Finally, for the sake of comparison we show the result
from the strong coupling constant of~\cite{Blossier:2010ky}. The horizontal axis
is $a^2$ in fm$^2$.}
\label{fig:p2andA2}
\end{figure}
\begin{figure}[hbt]
\begin{center}
\includegraphics[width=100mm]{figs/comp_Z.png}
\end{center}
\caption{\small We show the value of $Z_q^{\mathrm{pert}}$ defined in \eq{eq:zqfit} for all
three lattice spacings as a function of the bare coupling constant
$g^2= 6/\beta$.}
\label{fig:g2Z0}
\end{figure}
\begin{table}[h]
\centering
\begin{tabular}{||c|c|c|c|c|c|c||}
\hline
\hline
$\beta$ & $a^2$ fm$^2$ & $c_{a2p4}$ & $c_{a4p4}$ & $c_{a2p4}/g^2$ &
$c_{a4p4}/g^2$ & $\chi^2$/d.o.f
\\ \hline
$3.9$ & 0.00689 & 0.067(4) &-0.0149(10)&0.044(3)&-0.0097(7)& 4.1 \\ \hline
$4.05$ &0.00456& 0.065(3)& -0.0144(5) & 0.044(2)&-0.0097(3)&0.53\\ \hline
$4.2$ &0.00303& 0.055(11)& -0.0124(4)&0.039(8) &-0.0089(3)&0.98 \\ \hline
\end{tabular}
\caption{\small Results for the slope in $a^2 \,p^{[4]}/p^2$ and $a^4 \,p^{[4]}$
and the same divided by $g^2$ in the one window fit.}
\label{tab:p4slopes}
\end{table}
The fitted values of $c_{a2p4}$ and $c_{a4p4}$ from the one window fit are given in
table~\ref{tab:p4slopes}, together with the same divided by $g^2$, since
perturbation theory predicts, at least for $c_{a2p4}$, a behaviour $\propto g^2$.
Before dividing by $g^2$ a small scaling violation is apparent, which
corresponds to the non-overlap of the curves at different $\beta$'s in \fig{fig:slopes}.
Table~\ref{tab:p4slopes} shows that dividing by $g^2$ improves
the scaling significantly. The $\chi^2$ in
table~\ref{tab:p4slopes} is not good for $\beta=3.9$, apparently due to some
structure at the lower end of the plot.
\section{Running including $\VEV{A^2}$ corrections from OPE}
\label{sec:running}
\begin{table}[h]
\centering
\begin{tabular}{||c|c|c|c|c|c||}
\hline
\hline
$\beta$ & $a^2$ fm$^2$ & $Z_q^{\mathrm{pert}}$ & $c_{a2p2}$ &
$g^2 \VEV{A^2}_{\mathrm tree }$ & $g^2 \VEV{A^2}_{\mathrm
CM}$
\\ \hline
$3.9$ & 0.00689 & 0.726(5) &0.0201(13)& 3.20(38) & 2.62(31) \\ \hline
$4.05$ &0.00456& 0.742(5) &0.0200(15)& 3.09(65) &2.57(54)\\ \hline
$4.2$ &0.00303& 0.760(3) & 0.0194(8) &3.23(55) & 2.74(47) \\ \hline
average & & & 0.0201(3) &3.18(28) & 2.64(23) \\ \hline
\end{tabular}
\caption{\small Results for $Z_q^{\mathrm{pert}}$(10 GeV) and
$c_{a2p2}$~\eq{eq:zqfit} from the one-window-fit, and the estimated $g^2
\VEV{A^2}$ vev from the $1/p^2$ term, both at tree level and with the
Chetyrkin-Maier~\cite{Chetyrkin:2009kh} (CM) Wilson coefficient. Notice that
$Z_q^{\mathrm{pert}}$ and $c_{a2p2}$ from these two fits are very close.}
\label{tab:p2andA2}
\end{table}
\begin{table}[h]
\centering
\begin{tabular}{||c|c|c|c|c|c||}
\hline
\hline
$\beta$ & $a^2$ fm$^2$ & $Z_q^{\mathrm{pert}}$
&$c_{a2p2}$
& $g^2 \VEV{A^2}_{\mathrm tree}$ &
$g^2 \VEV{A^2}_{\mathrm CM}$
\\ \hline
$3.9$ & 0.00689 & 0.741(3)&0.0161(9) &2.07(37)& 1.70(31)\\ \hline
$4.05$ &0.00456& 0.753(5) &0.0168(14) &2.13(52)& 1.78(43)\\ \hline
$4.2$ &0.00303& 0.771(3) &0.0164(9) &1.59(60)& 1.36(51) \\ \hline
average & & & 0.0165(6) &1.99(26)(27) & 1.65(22)(27) \\ \hline
\end{tabular}
\caption{\small The same as in table~\ref{tab:p2andA2} using the data
from the sliding-windows-fit to hypercubic corrections.}
\label{tab:p2andA2_sliding}
\end{table}
In this section we check the running of $Z_q$.
For this purpose we shall use both
the formula \eq{eq:app_final2} derived in appendix~\ref{appendix2}, to which
we add a term for the lattice artefact $\propto a^2p^2$ not yet subtracted:
\begin{eqnarray}\label{eq:zqfit}
Z_q^{\mathrm {hyp\_corrected}}(a^2p^2)
&=& Z_q^{\rm pert\; RI'}(\mu'^2) \,
c_{0 Z_q}^{\rm RI'}\left(\frac{p^ 2}{\mu'^2},\alpha(\mu')\right) \nonumber \\
&\times& \left( 1 \ + \,
\frac{c_{2 Z_q}^{\mathrm{\overline{\rm MS}}}\left(\frac{p^ 2}{\mu^2},\alpha(\mu)\right)}
{c_{0 Z_q}^{\rm RI'}\left(\frac{p^ 2}{\mu^2},\alpha(\mu)\right)} \
\frac{c_{2 Z_q}^{\rm RI'}\left(\frac{p^ 2}{\mu^2},\alpha(\mu)\right)}
{c_{2 Z_q}^{\mathrm{\overline{\rm MS}}}\left(\frac{p^ 2}{\mu^2},\alpha(\mu)\right)} \
\frac{\ \langle A^2\rangle_{R,\mu^2}}{32 \ p^2} \right) \nonumber \\
&+& c_{a2p2}\; a^2\,p^2 \ ,
\end{eqnarray}
and a formula including only an OPE correction with a tree-level Wilson coefficient,
\begin{eqnarray}\label{eq:zqfit-tl}
Z_q^{\mathrm {hyp\_corrected}}(a^2p^2) \ = \
Z_q^{\rm pert\; RI'}(\mu'^2) \,
c_{0 Z_q}^{\rm RI'}\left(\frac{p^ 2}{\mu'^2},\alpha(\mu')\right)
\ \left( 1 \ + \frac{c_{1overp2}} {p^2} \right) +
c_{a2p2}\; a^2\,p^2 \ ,
\end{eqnarray}
where $c_{1overp2}=g^2\langle A^2 \rangle/12$. We use $\mu'=\mu = 10$ GeV as the renormalisation scale;
$c_{0Z_q}^{\rm RI'}(p^2,\mu^2)$ is
computed from the four loop perturbative running of
$Z_q$~\cite{Chetyrkin:1999pq}:
\begin{eqnarray}\label{eq:C0def}
c_{0Z_q}^{\rm RI'}(p^2,\mu^2) \equiv
\frac{Z_q^{\mathrm{pert\; RI'}}(p^2, g^2_{\mathrm{bare}})}
{Z_q^{\mathrm{pert\; RI'}}(\mu^2, g^2_{\mathrm{bare}})} \ ;
\end{eqnarray}
$c_{2Z_q}^\mathrm{\overline{\rm MS}}(p^2,\mu^2)$
is the three loop Wilson coefficient of $\VEV{A^2}$ in the expansion of
$Z_q$~\cite{Chetyrkin:2009kh} and the ratio
\begin{eqnarray}
\frac{c_{2 Z_q}^{\rm RI'}\left(\frac{p^ 2}{\mu^2},\alpha(\mu)\right)}
{c_{2 Z_q}^{\mathrm{\overline{\rm MS}}}\left(\frac{p^ 2}{\mu^2},\alpha(\mu)\right)} \ = \
\frac{1 - 0.1317 \ \alpha^2(\mu) - 0.5155 \ \alpha^3(\mu) } {1 - 0.1317 \ \alpha^2(p) - 0.5155 \ \alpha^3(p) }
\end{eqnarray}
was obtained in appendix A. We express the lattice spacing
(cut-off) dependence as a dependence on $g^2_{\mathrm{bare}}$.
$Z_q^{\mathrm{pert\; RI'}}(\mu^2, g^2_{\mathrm{bare}})$
is the perturbative contribution to $Z_q$ at the scale $\mu$ in the RI'-MOM
scheme.
In other words,
\begin{eqnarray}\label{eq:defrimom}
Z_q^{\mathrm {RI'}}(p^2,g^2_{\mathrm{bare}})&=&
Z_q^{\mathrm{pert}\; RI'}(p^2,
g^2_{\mathrm{bare}}) \nonumber \\
&\times& \left( 1 \ + \,
\frac{c_{2 Z_q}^{\mathrm{\overline{\rm MS}}}\left(\frac{p^ 2}{\mu^2},\alpha(\mu)\right)}
{c_{0 Z_q}^{\rm RI'}\left(\frac{p^ 2}{\mu^2},\alpha(\mu)\right)} \
\frac{c_{2 Z_q}^{\rm RI'}\left(\frac{p^ 2}{\mu^2},\alpha(\mu)\right)}
{c_{2 Z_q}^{\mathrm{\overline{\rm MS}}}\left(\frac{p^ 2}{\mu^2},\alpha(\mu)\right)} \
\frac{\ \langle A^2\rangle_{R,\mu^2}}{32 \ p^2} \right)
\end{eqnarray}
\hfill\break
From now on $Z_q^{\mathrm{pert}}$ will refer to $Z_q^{\mathrm{pert}\;
RI'}$.
We fit three parameters: $Z_q^{\mathrm{pert}}$, $c_{a2p2}$ and alternatively
$c_{1overp2}$ (which amounts to a tree level treatment of the $c_{2Z_q}$
coefficient) or the vev $g^2\VEV{A^2}$. In order to estimate the systematic
errors we will treat in parallel the one window fit and the sliding window
one. The results are reported in tables~\ref{tab:p2andA2}
and~\ref{tab:p2andA2_sliding}, \fig{fig:p2andA2} and \fig{fig:g2Z0}. The
coefficient $c_{a2p2}$ obviously refers to an $O(4)$-invariant lattice spacing
artefact which is not detected by our non-perturbative hypercubic correction
method. We see in the right plot of fig.~\ref{fig:p2andA2}, as well as in the
tables, that this coefficient scales very well when expressed in lattice units,
as it should. The coefficient of $1/p^2$, if it is related to a vev
$\VEV{A^2}$, should instead scale in physical units. We see in the left plot of
fig.~\ref{fig:p2andA2} that a constant value is rather well verified, although
with large errors. The results presented in tables~\ref{tab:p2andA2} and
\ref{tab:p2andA2_sliding} show that the estimates of $g^2\VEV{A^2}$ from OPE expressions
with the Wilson coefficient from the Chetyrkin-Maier (CM) three-loop expression
are systematically about 20\% below the ones from the tree level one.
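Since \eq{eq:zqfit-tl} is linear in the combinations $Z_q^{\rm pert}$, $Z_q^{\rm pert}\,c_{1overp2}$ and $c_{a2p2}$ once the running factor is fixed, the three-parameter fit reduces to a linear least-squares problem. A toy Python sketch, with the running factor $c_0$ set to 1 and an assumed $1/a^2 = 25\ \mathrm{GeV}^2$ purely for illustration:

```python
import numpy as np

def fit_zqfit_tl(a2p2, zq, ainv2=25.0):
    """Toy three-parameter fit of the tree-level-Wilson-coefficient form
    Z_q = Z0 (1 + c_1overp2/p^2) + c_a2p2 a^2p^2, with c0 = 1 and an
    assumed 1/a^2 = 25 GeV^2. Linear in (Z0, Z0*c_1overp2, c_a2p2),
    so one least-squares solve suffices."""
    p2 = a2p2 * ainv2
    A = np.column_stack([np.ones_like(p2), 1.0 / p2, a2p2])
    (z0, b, c_a2p2), *_ = np.linalg.lstsq(A, zq, rcond=None)
    return z0, b / z0, c_a2p2     # c_1overp2 = b / Z0

# synthetic closure test
a2p2 = np.linspace(0.5, 3.5, 40)
zq = 0.73 * (1 + 0.9 / (a2p2 * 25.0)) + 0.02 * a2p2
z0, c1p2, ca2p2 = fit_zqfit_tl(a2p2, zq)
assert np.allclose([z0, c1p2, ca2p2], [0.73, 0.9, 0.02])
```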
\subsection{Analysis from the non-perturbative hypercubic corrections}
\subsubsection{Comparison of the running from the OWF and the SWF}
From tables \ref{tab:p2andA2} and~\ref{tab:p2andA2_sliding} we see
that $g^2\,\VEV{A^2}$ is systematically larger for the OWF than for the SWF.
At first sight this seems surprising, since the OWF and SWF hypercubic corrected
data are very similar, see \fig{fig:NP_subtracted}. One reason is the
correlation between $c_{a2p2}$ and $g^2\,\VEV{A^2}$:
$c_{a2p2}$ is also systematically larger for the OWF
than for the SWF. This correlation is understandable, as the $a^2p^2$ term increases
with $p^2$ while the $1/p^2$ one decreases. This is compensated by a $Z_q^{\mathrm{pert}}$
smaller for the OWF than for the SWF. We will consider these differences as a systematic
uncertainty in our fits and include them in the errors.
\subsubsection{Dependence on the fitting range}
\begin{figure}[hbt]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=85mm]{figs/plot_all_range.png}
\includegraphics[width=85mm]{figs/comp_range.png}
\end{tabular}
\end{center}
\caption{\small In the r.h.s plot we show how $g^2\VEV{A^2}$, fitted with the CM
Wilson coefficient, depends on the upper
range of our fits, always starting at $a^2p^2=0.5$ for $\beta=3.9$. The points
correspond to $a^2p^2 < 2.0, 2.5, 3.0, 3.5$. The r.h.s plot also shows the
$a^2p^2$ artefact slope. We find again a positive correlation between both series
of data. The l.h.s plot illustrates this, showing for the same data how the
fitting function depends on the upper bound of the range. }
\label{fig:range}
\end{figure}
\begin{table}[h]
\centering
\begin{tabular}{||c|c|c|c|c||}
\hline
\hline
Upper bound & $Z_q^{\mathrm{pert}}$ & $c_{a2p2}$ &
$g^2 \VEV{A^2}_{\mathrm tree }$ &
$g^2 \VEV{A^2}_{\mathrm CM }$
\\ \hline
2.0 & 0.754(6) &0.0089(21)& 1.58(39) &1.28(32) \\ \hline
2.5 & 0.745(6) & 0.0130(18)& 2.05(37) & 1.67(30) \\ \hline
3.0 & 0.733(5) & 0.0175(15) & 2.73(36) & 2.22(30) \\ \hline
3.5 & 0.726(5) & 0.0201(13) & 3.20(38) & 2.62(31) \\ \hline
\end{tabular}
\caption{\small $\beta=3.9$: results for the $Z_q^{\mathrm{pert}}$ ($\mu$= 10\, GeV)
and $c_{a2p2}$~\eq{eq:zqfit} from the one-window hypercubic corrected
data and the estimated $g^2 \VEV{A^2}$ vev from the $1/p^2$, plotted as a
function of the upper bound of the fitting range (in GeV).}
\label{tab:range}
\end{table}
An additional test is to look at the effect of the fitting range. The
results are shown in table~\ref{tab:range} and \fig{fig:range}. One sees again a
correlation between the $c_{a2p2}$ slope and $g^2\,\VEV{A^2}$: both decrease when
the fitting range shortens, while $Z_q^{\mathrm{pert}}$ correspondingly increases. The l.h.s of
\fig{fig:range} explains how this happens: when the range is shorter, the error
bars allow for a less curved fit. But the fit never reaches a value such that
$g^2\,\VEV{A^2}$ disappears. The shortest window, $0.5 <a^2p^2 < 2.0$, gives
the smallest value of $g^2\,\VEV{A^2}$, but it is still 4 sigmas away from 0.
\subsection{Analysis from the perturbative hypercubic corrections}
It is then useful to check if similar results are obtained after
a perturbative correction to the hypercubic artefacts has been applied.
\begin{table}[h]
\centering
\begin{tabular}{||c|c|c|c|c||}
\hline
\hline
prescription & $Z_q^{\mathrm{pert}}$ & $c_{a2p2}$ &
$g^2 \VEV{A^2}_{\mathrm tree }$ &
$g^2 \VEV{A^2}_{\mathrm CM }$
\\ \hline
$\tilde p_\mu$ & 0.712(11) & -0.0026(23)& 2.98(1.49) & 2.53(1.23) \\ \hline
$p_\mu$ & 0.745(3) & -0.0061(8)& 1.76(35)& 1.45(29) \\ \hline
\end{tabular}
\caption{\small For $\beta=3.9$, results for the $Z_q^{\mathrm{pert}}(\mu^2,
g^2_{\mathrm{bare}})$
($\mu$= 10\, GeV) and $c_{a2p2}$~\eq{eq:zqfit}
and $g^2 \VEV{A^2}$ from the lattice data after a perturbative
hypercubic correction. $g^2 \VEV{A^2}$ is estimated at tree level from
the $1/p^2$ contribution.}
\label{tab:pert_cond}
\end{table}
We have used two prescriptions to apply the perturbative corrections. With the
data obtained from the $\tilde p_\mu$ prescription,
section~\ref{sec:subpertild}, we perform an average on all the cubic orbits of
every $p^2$ after a democratic selection $p^{[4]}/(p^2)^2 < 0.3$. This leaves us
with rather few points and results in rather large statistical errors.
We then perform the same running fit as on the non-perturbatively hypercubic
corrected results: we fit with one perturbative running contribution, one
$1/p^2$ contribution and one $\propto a^2p^2$ artefact.
With the data from the $p_\mu$ prescription, section~\ref{sec:subpert}, we
perform the same fit over an average on all the cubic orbits of every $p^2$
without any democratic selection, since the hypercubic artefacts have already
been efficiently reduced. The results are in table~\ref{tab:pert_cond}.
A first remark is that the $c_{a2p2}$ coefficients are compatible with zero,
indicating that the perturbative correction has efficiently eliminated
this artefact.
The coefficient of the $1/p^2$ non-perturbative contribution is found to differ
from zero, in the same ballpark as the results from the non-perturbative
hypercubic correction in tables~\ref{tab:p2andA2} and~\ref{tab:p2andA2_sliding}.
The $\tilde p_\mu$ prescription has too large errors to be conclusive,
but the $p_\mu$ one is five sigmas away from zero, very similar to
the results in table~\ref{tab:p2andA2_sliding}.
The value of $Z_q^{\mathrm{pert}}$ for the $\tilde p_\mu$
prescription is rather low but compatible within less than two sigmas with the
result for $\beta=3.9$ in table~\ref{tab:p2andA2_sliding}.
\subsection{Running of $Z_q^{\rm pert}$}
It is interesting to consider the dependence of $Z_q^{\mathrm{pert}}$ on
$g^2$. This is plotted in fig.~\ref{fig:g2Z0} both for the OWF and
the SWF. It is strikingly linear, especially for the OWF. Indeed,
from eq.~(24) of~\cite{Constantinou:2009tr}, perturbation theory gives a linear
dependence with a slope $\simeq -0.19$. This comes from the coefficient $b_{q1}$
in \eq{eq:pert}, which is multiplied by $g^2$. In our case we find from the OWF
\begin{eqnarray}\label{eq:zqone}
Z_q^{\rm pert}((10\, \mathrm {GeV})^2,g^2_{\mathrm {bare}})&=&
0.737(3) - 0.313 (6) \,(g^2_{\mathrm {bare}}-1.5) \ , \nonumber \\
Z_q^{\rm pert}((2\, \mathrm {GeV})^2,g^2_{\mathrm {bare}})&=& 0.766(3) -
0.324(6) \,(g^2_{\mathrm {bare}}-1.5) \ ;
\end{eqnarray}
and from the SWF
\begin{eqnarray}\label{eq:zqmany}
Z_q^{\rm pert}((10 \mathrm {GeV})^2,g^2_{\mathrm {bare}}) &=&
0.751(2)(7) - 0.273(6)\left(^{0.002}_{-0.038}\right) \,(g^2_{\mathrm {bare}}-1.5) \ ,
\nonumber \\
Z_q^{\rm pert}((2\, \mathrm {GeV})^2,g^2_{\mathrm {bare}}) &=&
0.780(3)(7) - 0.284(6) \,\left(^{0.002}_{-0.040}\right)(g^2_{\mathrm {bare}}-1.5) \ .
\end{eqnarray}
We see that the coefficients of $g^2$,
\begin{eqnarray}
\frac {\partial Z_q^{\mathrm{pert}}((2\, \mathrm {GeV})^2,g^2_{\mathrm {bare}})}
{\partial g^2} = \left\{
\begin{array}{lr}
-0.329(6) & \mathrm{OWF} \\
-0.287(26) & \mathrm{SWF}
\end{array}
\right. \ ,
\end{eqnarray}
are significantly larger in magnitude than the perturbative expectation, $-0.19$. But the linear
behaviour predicted by perturbation theory is well verified, especially
for the OWF.
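As a quick cross-check, the linear fit can be reproduced from the central values of table~\ref{tab:p2andA2} alone (an unweighted fit, which is why the numbers differ slightly from \eq{eq:zqone}):

```python
import numpy as np

# Z_q^pert(10 GeV) central values from the one-window fit (table p2andA2)
g2 = 6.0 / np.array([3.9, 4.05, 4.2])     # bare couplings
zq = np.array([0.726, 0.742, 0.760])
slope, z15 = np.polyfit(g2 - 1.5, zq, 1)  # Zq ~ z15 + slope*(g2 - 1.5)
print(round(z15, 3), round(slope, 3))     # about 0.737 and -0.309
```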
\begin{figure}[hbt]
\begin{center}\begin{tabular}{cc}
\includegraphics[width=85mm]{figs/merged.png}&
\includegraphics[width=85mm]{figs/flat.png}
\end{tabular} \end{center}
\caption{\small The merged plot with the OWF results at $\beta=4.05$ and
$\beta=4.2$ rescaled to $\beta=3.9$ using the ratios of
$Z_q^{\mathrm{pert}}$ given in table~\ref{tab:p2andA2}. The l.h.s shows the
data corrected for all lattice artefacts. The r.h.s shows the same data
further corrected by the perturbative running factor up to 10 GeV. The
horizontal axis is $p^2$ in GeV$^2$. The black line on the l.h.s corresponds to
the global fit with perturbative running and the CM (three loop) Wilson
coefficient for the $1/p^2$ term. The black line on the r.h.s corresponds only
to the $1/p^2$ term times the three loop Wilson coefficient, added to $Z_q^{\mathrm
{pert}}((10\, \mathrm {GeV})^2, 6/3.9)=0.726$.}
\label{fig:all_betas}
\end{figure}
\subsection{Merging the three lattice spacings}
From \eq{eq:zqfit} and \eq{eq:defrimom} it is clear that
\begin{eqnarray}\label{eq:artefree}
Z_q^{\mathrm {RI'}}(p^2,g^2_{\mathrm{bare}})=
Z_q^{\mathrm {hyp\_corrected}}(a^2p^2) - c_{a2p2} a^2p^2 \ .
\end{eqnarray}
In this section we use the one window fit, section~\ref{sec:one}, and
the momentum $p^2$ is now expressed in physical units. For the coefficient
$c_{a2p2}$ we use the values in table~\ref{tab:p2andA2}:
$c_{a2p2}=0.0174$. The three $Z_q^{\mathrm {RI'}}$ for the three
$\beta$'s do not match due to the running of
$Z_q$ as a function of the lattice spacing.
To make them match it turns out that it is enough to take into account
the ratios of $Z_q^{\mathrm{pert}}$'s given in table~\ref{tab:p2andA2}.
We plot on the l.h.s of fig.~\ref{fig:all_betas} the three sets of data where
the $\beta=4.05, 4.2$ ones have been rescaled to the $\beta=3.9$
scale.
We see a rather good overlap. There is however a
flattening at the right end of every $\beta$'s data, which stays within one sigma
of the other $\beta$'s.
We understand it as a residual failure of the hypercubic artefact treatment.
On the r.h.s of fig.~\ref{fig:all_betas} we plot the same numbers
corrected for the perturbative running by the multiplicative factor
$0.726/Z_q^{\rm pert}(p^2, 6/3.9)$, where 0.726 is taken from
table~\ref{tab:merged}. The black line is just the non-perturbative
contribution added to 0.726.
The comparison of both plots in \fig{fig:all_betas} is enlightening.
We see that the non-perturbative term contributes about one half of the change
between the smallest momenta and the largest ones. Both the perturbative running
and the non-perturbative contribution are convex, which makes it
difficult to disentangle them. But we also see that the
perturbative running cannot account for the full variation of the data.
The best-fit parameters resulting from this merged analysis can be found
in table~\ref{tab:merged}, where we use $g^2_{\mathrm{bare}}=6.0/3.9$ since we have rescaled
all the data to the $\beta=3.9$ one.
The values in table~\ref{tab:merged} for $g^2(\mu^2)
\VEV{A^2}_{\mu^2\;\mathrm{merged}}$
turn out to be rather central in the set of values of
tables~\ref{tab:p2andA2} and~\ref{tab:p2andA2_sliding}.
The value for $Z_q^{\mathrm{pert}}(10 \,{\mathrm GeV}, 6.0/3.9)$ is in very good agreement
with \eq{eq:zqone}: $Z_q^{\mathrm{pert}}(10 \,{\mathrm GeV}, 6.0/3.9)=0.725(3)$.
\begin{table}[h]
\centering
\begin{tabular}{||c|c|c||}
\hline
\hline
$Z_q^{\mathrm{pert}}$ &
$g^2 \VEV{A^2}_{\mathrm tree }$ &
$g^2 \VEV{A^2}_{\mathrm CM }$
\\ \hline
0.726(2) & 3.13(43) &2.55(36) \\ \hline
\end{tabular}
\caption{\small Merged data from the three $\beta$'s: results for $Z_q^{\mathrm{pert}}$
($\mu=10$\,GeV) rescaled to $\beta=3.9$, from the one-window hypercubic-corrected
data (OWF), using the tree-level and the three-loop
formula, \eq{eq:app_final2}.}
\label{tab:merged}
\end{table}
\subsection{Summarizing}
Many of our results for $g^2 \VEV{A^2}$ are shown in the l.h.s of
\fig{fig:p2andA2}.
We did not plot the range-dependent data, so as not to overload the figure, but
they fall within the range covered by the data shown in~\fig{fig:p2andA2}.
To summarise our results and estimate the systematic uncertainty
we consider the set of values in
tables~\ref{tab:p2andA2}, \ref{tab:p2andA2_sliding},\ref{tab:range} and
\ref{tab:merged}. We make a separate average for the tree-level data and
the $O(\alpha^4)$ (CM) ones, since the comparison with estimates from other
quantities, such as the coupling constant, needs to be performed in the
same scheme, expansion, order and scale. The scheme is $\overline{\rm MS}$ and the precise
implementation is detailed in the appendix. We computed an average of all the
above-listed data, weighted by their inverse squared errors; the inverse squared
statistical error of the average is the sum of the inverse squared errors. The
systematic error is taken so as to incorporate all central values within the
error bars. This is rather conservative.
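The averaging procedure just described is standard inverse-variance weighting; a minimal sketch (with illustrative numbers, not our data):

```python
import math

def weighted_average(values, errors):
    """Inverse-variance weighted mean: each value is weighted by 1/err^2;
    the inverse squared statistical error of the mean is the sum of the
    inverse squared individual errors."""
    weights = [1.0 / e ** 2 for e in errors]
    wsum = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / wsum
    return mean, 1.0 / math.sqrt(wsum)
```

The systematic error is then enlarged by hand, as stated above, until all central values are covered.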
For $Z_q^{\rm pert}$ we average \eq{eq:zqone} and \eq{eq:zqmany}
with a similar method. We get:
\begin{eqnarray}\label{eq:final}
g^2(\mu^2) \VEV{A^2}_{\mu^2\; tree} &=& 2.45(14)\left(^{+0.78}_{- 0.87}\right)
\;\mathrm {GeV}^2 \quad \mu=10\,
\mathrm{GeV} \ ,
\nonumber \\
g^2(\mu^2) \VEV{A^2}_{\mu^2\; CM} &=& 2.01(11)\left(^{+0.61}_{- 0.73}\right)
\;\mathrm {GeV}^2 \quad \mu=10\,
\mathrm{GeV} \ ,
\nonumber \\
Z_q^{\rm pert}((10\,{\mathrm {GeV}})^2,g^2_{\mathrm {bare}})
&=& 0.744(2)(7) - 0.311(6)\left(^{+0.002}_{- 0.038}\right)
\,(g^2_{\mathrm {bare}}-1.5) \ ,
\nonumber \\
Z_q^{\rm pert}((2\,{\mathrm {GeV}})^2,g^2_{\mathrm {bare}})
&=& 0.773(3)(7) - 0.323(6)\left(^{+0.002}_{- 0.040}\right)
\,(g^2_{\mathrm {bare}}-1.5) \ ;
\end{eqnarray}
where the first error is statistical and the second the systematic one.
The values of $Z_q^{\mathrm {RI'}}(p^2,g^2_{\mathrm{bare}})$ may then
be derived from \eq{eq:defrimom}:
\begin{eqnarray}\label{eq:finalbis}
Z_q^{\mathrm {RI'}}((2\,{\mathrm {GeV}})^2,g^2_{\mathrm {bare}})
&=& 0.805(14) - 0.336(6) \left(^{+0.002}_{- 0.042}\right)
\,(g^2_{\mathrm {bare}}-1.5) \ . \qquad {\rm [CM]}
\end{eqnarray}
The results obtained from the perturbative hypercubic correction,
table~\ref{tab:pert_cond}, have not been used in the final estimate (because
we did not compute them for all $\beta$'s), but they fall
within the bounds at less than one sigma.
Finally, two lines in the l.h.s. plot of fig.~\ref{fig:p2andA2} show the
results for $g^2 \VEV{A^2}$ obtained from those in tab.~3 of ref.~\cite{Blossier:2010ky}:
\begin{eqnarray}\label{eq:fromalpha}
g^2 \VEV{A^2}_{10 \;{\mathrm {GeV}}} =
\left\{ \begin{array}{lr}
4.1 \pm 1.5 \ {\rm GeV}^2 & {\rm leading~log}\\
2.5 \pm 0.9 \ {\rm GeV}^2 & {\cal O}\left(\alpha^4\right) \\
\end{array}
\right. \ .
\end{eqnarray}
These values in \eq{eq:fromalpha} come from a totally different quantity: they
have been extracted from the running of the ghost-gluon coupling constant.
The results of ref.~\cite{Blossier:2010ky} were obtained by applying an OPE formula
including a Wilson coefficient approximated at the leading logarithm and at the
order ${\cal O}\left(\alpha^4\right)$, but expanded in terms of $\alpha_T$. Then,
in order to be properly compared with the results of this work, the values
of \eq{eq:fromalpha} incorporate the correction from re-expanding the OPE formula
in terms of the running coupling in $\overline{\rm MS}$. The lattice spacing applied
in ref.~\cite{Blossier:2010ky} to set a physical scale, $a(3.9)=0.0801$ fm, was also
slightly smaller than the one used in this work (see, for instance,
tab.~\ref{setup}), and this has also been taken into account in
obtaining \eq{eq:fromalpha}.
As a matter of fact, \eq{eq:fromalpha} exhibits a slower convergence of the perturbative series of
the Wilson coefficient than in the present paper. From tables~\ref{tab:p2andA2},
\ref{tab:p2andA2_sliding}, \ref{tab:range} and \ref{tab:merged} we see that the
$O(\alpha^4)$ estimate is about 20 \% below the tree level while
in table 3 of~\cite{Blossier:2010ky} it is about 45 \% below the leading
logarithm one. The two estimates agree rather well within
the present accuracy.
\subsection{Conversion to $\overline{\mathrm {MS}}$}
The conversion to $\overline{\mathrm {MS}}$ can also be performed
from $Z_q^{\mathrm{pert}}$ or $Z_q^{\mathrm {RI'}}$ which contains
a non-perturbative contribution. Usually, in the literature, the values
are assumed to be perturbative.
The conversion of $Z_q^{\mathrm{pert}}$ into $\overline{\mathrm {MS}}$
will use the standard perturbative conversion formulae ~\cite{Chetyrkin:1999pq}.
We get
\begin{eqnarray}
Z_q^{\overline{\mathrm {MS}}\,\mathrm{pert}}( (2\,{\mathrm {GeV}})^2,
g^2_{\mathrm{bare}})/
Z_q^{\mathrm{pert}}((2\,{\mathrm {GeV}})^2, g^2_{\mathrm{bare}}) = 0.97 \ ,
\nonumber \\
Z_q^{\overline{\mathrm {MS}}\,\mathrm{pert}}((2\,{\mathrm {GeV}})^2,
g^2_{\mathrm {bare}}) = 0.750(3)(7) - 0.313(20)
\,(g^2_{\mathrm {bare}}-1.5) \ .
\label{eq:pertMS}
\end{eqnarray}
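As a quick arithmetic cross-check (not part of the analysis itself), multiplying the RI'-MOM central values of \eq{eq:final} at 2 GeV by the conversion ratio 0.97 reproduces the central values of \eq{eq:pertMS} to the quoted precision:

```python
ratio = 0.97                           # Z_q^{MSbar, pert} / Z_q^{pert} at 2 GeV
intercept_ri, slope_ri = 0.773, 0.323  # RI'-MOM values at 2 GeV, eq. (final)

intercept_ms = ratio * intercept_ri    # close to the quoted 0.750
slope_ms = ratio * slope_ri            # close to the quoted 0.313
```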
The central value of this result lies systematically about 2\% below the
results of~\cite{Constantinou:2010gr}. This can presumably be interpreted as
a small systematic shift due to our subtraction of the non-perturbative
contribution when obtaining $Z_q^{\rm pert}$ in the RI'-MOM scheme and
converting it to $Z_q^{\mathrm{\overline{\rm MS}}\, \mathrm{pert}}$ at 2 GeV.
From \eq{eq:app_final2} it is easy to see how to include the
$g^2 \VEV{A^2}$ non perturbative contribution. Up to now we have applied
the result of the appendix using RI'-MOM for the perturbative part and
$\overline{\mathrm {MS}}$ for the ratio in the corrective parenthesis.
Had we wished to use the $\overline{\mathrm {MS}}$ scheme for the perturbative
contribution, the main inconvenience would be that the OPE
contribution for $Z_q$ defined in the $\mathrm{\overline{\rm MS}}$ scheme is not known. However, solely for
the purpose of roughly estimating the non-perturbative correction, we can assume the
OPE corrective parenthesis to remain the same and then get
\begin{eqnarray}\label{eq:npMS}
Z_q^{\overline{\mathrm {MS}}\,\mathrm{non-perturbative}}((2\,{\mathrm {GeV}})^2,
g^2_{\mathrm {bare}}) = 0.781(6)(21) - 0.326(21) \,(g^2_{\mathrm {bare}}-1.5) \ .
\end{eqnarray}
Notice that the non-perturbative contribution is about 4\% at 2 GeV.
Nevertheless, the results of~\cite{Constantinou:2010gr} have been
obtained at momenta larger than 2 GeV. Although their estimates and
our $Z_q^{\rm RI'}$ may agree with each other, a subtraction of
the non-perturbative contribution in obtaining $Z_q^{\rm pert}$,
probably still required in the fitting window of~\cite{Constantinou:2010gr}
but not applied there, could explain the small discrepancy of about 2\%
discussed above. The discrepancy anyhow only affects the conversion
of $Z_q^{\rm RI'}$ to $Z_q^{\mathrm{\overline{\rm MS}}\, \mathrm{pert}}$ at 2 GeV.
In Table 6 of~\cite{Constantinou:2010gr} one finds results for
$Z_q^{\overline{\mathrm {MS}}}$ obtained at 2 GeV by applying the standard perturbative conversion
formulae~\cite{Chetyrkin:1999pq}.
Indeed, they turn out to be in between our results
in \eq{eq:pertMS} and \eq{eq:npMS}. Quoting in order \eq{eq:pertMS}, ref.~\cite{Constantinou:2010gr}
and \eq{eq:npMS} we get: 0.738, 0.751 and 0.769 for $\beta=3.9$ and 0.755, 0.780 and 0.786
for $\beta=4.05$.
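The numbers just quoted can be reproduced, up to rounding of the central values, by evaluating \eq{eq:pertMS} and \eq{eq:npMS} at the bare couplings $g^2_{\mathrm{bare}}=6/\beta$; a minimal sketch (central values only, errors dropped):

```python
def zq_linear(intercept, slope, g2_bare):
    """Linear parametrisation Z_q = intercept - slope * (g2_bare - 1.5)
    used throughout this section."""
    return intercept - slope * (g2_bare - 1.5)

def zq_pert_msbar(g2_bare):   # eq. (pertMS)
    return zq_linear(0.750, 0.313, g2_bare)

def zq_np_msbar(g2_bare):     # eq. (npMS)
    return zq_linear(0.781, 0.326, g2_bare)
```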
\subsection{Comparison of different estimates for the gluon condensate}
Let us now compare the present estimate of $g^2 \VEV{A^2}$ to previous ones
at $N_f=2$ and $N_f=0$, all taken at the renormalisation scale of 10 GeV and,
when needed, transformed to the very precise renormalization scheme for
the OPE expansion defined in the appendix \ref{appendix2}.
In~\cite{Boucaud:2005rm} a quenched study of $Z_q$ using Wilson-Clover and
overlap fermions ended with values of $\VEV{A^2}_{\rm MOM}$ in
the range 2.67--3.2 GeV$^2$ with typical errors
of 0.3 GeV$^2$. Notice that this computation was performed only up to leading
logarithm for the Wilson coefficient and that the choice was to expand the perturbative
series in terms of the coupling renormalized in the MOM scheme (this is why we use for
the VEV the label MOM). Then, we can apply the expressions derived in the appendix
\ref{appendix2} to obtain the estimates for $g^2 \VEV{A^2}$ in the above-mentioned
renormalization scheme appearing in table~\ref{tab:global_comp}.
However, it is advocated in~\cite{Boucaud:2005rm} that
the $1/p^2$ contribution increases by only 10\% when going from MOM to $\mathrm{\overline{\rm MS}}$.
On the other hand, we have seen that between tree level and three loops a
decrease of 20\% was observed. In~\cite{Boucaud:2005rm} an artefact
$\propto a/p^2$ was observed. We do not see it in the present analysis
since the scaling of $g^2 \VEV{A^2}$ as a function of the lattice spacing
indicates no visible $1/p^2$ contribution dependent on $a$.
In~\cite{Boucaud:2008gn} a summary was given of different
estimates of $g^2 \VEV{A^2}$ from gluonic quantities at $N_f=0$:
$\alpha_s$ from the three-gluon vertex with equal momenta on the three legs (symmetric) and
from the three-gluon vertex with one vanishing momentum (asymmetric), the ratio between the
ghost and gluon propagators, and $\alpha_s$ from the ghost and gluon propagators
using Taylor's theorem. The estimates involving the gluon and ghost propagators
agree fairly well, the latter being the most accurate. It gives
$g_T^2 \VEV{A^2} = 5.1^{+0.7}_{-1.1}$ GeV$^2$, although the applied OPE formula was obtained by
expanding the involved perturbative series in terms of $\alpha_T$. After the
appropriate transformation, one obtains the result shown in table~\ref{tab:global_comp}.
We also quote in the table the estimate of $g^2 \VEV{A^2}$ from the symmetric
three-gluon vertex, which is more precise than the one coming from the asymmetric vertex,
and which appeared to be much higher than the estimate from $\alpha_T$ and compatible with that from
the quark propagator. In the case of the three-gluon estimates, no
$O(\alpha^4)$ Wilson coefficient is available to help us go beyond the leading-logarithm
approximation. However, whether one compares the leading-logarithm estimates or the
ones approximated at order $O(\alpha^4)$, a clear discrepancy (by a factor of about two)
appears between the estimates from the ghost and gluon propagators and those from the vertices or
the quark propagator. This could imply that some systematic uncertainty is not completely
under control. One might, for instance, guess that $1/p^4$ contributions could be invoked to
reduce that discrepancy. For this to happen, the $1/p^4$ contribution would have to be negative,
tending to increase the estimate of $g^2 \VEV{A^2}$, for the OPE formula of $\alpha_T$; while it
would have to be positive, reducing $g^2 \VEV{A^2}$, for the quark propagator. Indeed, although no
stable fit including $1/p^4$ contributions can be performed, the sign seems to be the right one
for $\alpha_T$ in \cite{Blossier:2010ky}; the right sign of the contribution to
$Z_q$ is also found in ref.~\cite{Boucaud:2005rm}.
\begin{table}[h]
\centering
\begin{tabular}{|c|}
\hline
\rule[0cm]{4cm}{0cm} measurement (GeV$^2$)
\\
\begin{tabular}{|c|c|c|c|c|}
\hline
$N_f$ & order $g^2 \VEV{A^2}$ & $Z_q$ & $\alpha_T$ & 3-gluon \\
\hline
0 & LL & 9.4(3) & 5.2(1.1) & 10(3) \\
\cline{2-5}
& $O(\alpha^4)$ & 9.0(3) & 3.7(8) & \\
\hline
2 & LL & 2.7(4) & 4.1(1.5) & \\
\cline{2-5}
& $O(\alpha^4)$ & 2.55(36) & 2.5(9)& \\
\hline
\end{tabular}
\rule[-1.3cm]{0cm}{1.3cm}
\\
\hline
\end{tabular}
\caption{\small Comparison of estimates of $g^2 \VEV{A^2}$ from different quantities
at $N_f=0$ and $N_f=2$. All are taken at the scale $\mu=10$\,GeV.
LL means leading logarithm for the Wilson coefficient; $O(\alpha^4)$ refers to
the Chetyrkin and Maier computation. }
\label{tab:global_comp}
\end{table}
In~\cite{Blossier:2010ky} the strong coupling constant was computed along
similar lines to what is done here, on the same set of ETMC gauge
configurations with $N_f=2$. The necessity of a non-perturbative
$\propto 1/p^2$ contribution was also found and the resulting
condensate, $g^2 \VEV{A^2}_{10 \;{\mathrm GeV}} = 2.3(8)$ GeV$^2$, obtained through an OPE formula
approximated at the $O(\alpha^4)$-order and
expanded in $\alpha_T$, can be properly transformed~\footnote{We have taken into account
the different lattice spacing in~\cite{Blossier:2010ky}.} to give the value of
table~\ref{tab:global_comp}, also quoted in \eq{eq:fromalpha}, which agrees
strikingly well with the result
of tab.~\ref{tab:merged} (the one we also quote in tab.~\ref{tab:global_comp})
or that of \eq{eq:final}. The value obtained through a leading-logarithm-approximated
formula is also displayed in tab.~\ref{tab:global_comp}.
In~\cite{Martinelli:1996pk} Martinelli and Sachrajda proposed a criterion to validate
the use of the operator expansion, which we apply in this paper. They concluded that one should
compare the highest order of the perturbative expansion with the non-perturbative
contribution and check that the former is small
compared to the latter. We have compared the highest order of the perturbative expansion
of $Z_q$ with the $1/p^2$ contribution and find that the ratio ranges between 1/10 and 1/3
depending on the momentum. This is a good indication of the validity of our use of the
operator expansion. Had we used the perturbative expansion only up to $O(\alpha)$, the
criterion would not have been fulfilled.
Furthermore, all the estimates in table~\ref{tab:global_comp} show a clear tendency
for $g^2 \VEV{A^2}$ to decrease from $N_f=0$ to $N_f=2$. This might
support an interpretation of $g^2 \VEV{A^2}$ as originating from
instantons~\cite{Boucaud:2002nc}, since the instanton density should decrease
with light dynamical quark masses.
\section{Conclusion}
We have carefully studied the twisted-mass quark propagators produced on
the ETMC set of $N_f=2$ gauge configurations. Our goal was to concentrate on two
major issues: the correction for lattice spacing artefacts, particularly the
hypercubic ones, and the presence of a sizeable non perturbative contribution of the
$A^2$ operator. The latter is expected to be sizeable because it was seen in the
quenched case~\cite{Boucaud:2005rm}, it was seen in the unquenched study of the
strong coupling constant~\cite{Blossier:2010ky} and since the Wilson
coefficient of $g^2 \VEV{A^2}$ is not small in $Z_q$~\cite{Chetyrkin:2009kh}.
This is an important issue since, from our estimates, it gives a
$\sim 4 \%$ contribution at 2 GeV. A reliable estimate of this non-perturbative
correction needs a large enough fitting range, which allows one to distinguish
a $1/p^2$ contribution from perturbative logarithms. But the fitting window
is bounded from below by infrared effects and from above by lattice spacing artefacts.
We thus need to improve our control of the dominant lattice spacing artefacts, which are of
two types: hypercubic ones and those $\propto a^2p^2$.
Concerning the hypercubic artefacts, we have summarized the non-perturbative
correcting method~\cite{Becirevic:1999uc,deSoto:2007ht} which we compared
systematically with the perturbative results of~\cite{Constantinou:2009tr}.
$Z_q$ has very large hypercubic artefacts which display, as a function of $p^2$,
a ``half-fishbone'' structure very far from a smooth curve (see \fig{fig:fishbone}).
We check carefully how these fishbones are ``swallowed'' by the corrective
methods. It is worthwhile to emphasize that the ``democratic'' method, prescribing
for instance a cut on $p^{[4]}/p^2$ to drastically reduce the number of allowed
hypercubic orbits, is not good enough to eliminate the fishbones and to leave us with a
smooth curve for $Z_q$.
The perturbative method to correct hypercubic artefacts leaves
some options open: what to take for the coupling constant, and
whether to use $p_\mu$ or $\tilde p_\mu=a^{-1} \sin(a p_\mu)$.
We first tried to stick to the prescription of~\cite{Constantinou:2009tr}
and use the boosted coupling constant. This
reduces the hypercubic artefacts only up to $a^2p^2 \simeq 1.6$
(see \fig{fig:pert}, l.h.s.).
Guided by the test on the fishbone reduction we then
propose a prescription based on the same perturbative formulae but using
$p_\mu$. For $Z_q$ this reduces the hypercubic artefacts up to
$a^2p^2 \simeq 3.5$ which has been our upper bound in this work
(see \fig{fig:pert}, r.h.s).
We also test the non-perturbative method to correct hypercubic artefacts,
using two prescriptions: the first uses a sliding window and the second a
single fitting window over the full momentum range.
We find that the hypercubic artefacts are sufficiently well described and cured
by two terms: $\propto a^2 p^{[4]}/p^2$ and $\propto a^4 p^{[4]}$. We fit the
coefficients of these quantities and check their scaling with $\beta$.
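The two-term correction just described can be sketched as below; $p^{[4]}=\sum_\mu p_\mu^4$, and the coefficients $c_2$, $c_4$ stand for the fitted values (the numbers used in any test are placeholders, not the fitted ones):

```python
def p4_invariant(p):
    """Hypercubic invariant p^[4] = sum_mu p_mu^4 (lattice units)."""
    return sum(pm ** 4 for pm in p)

def hypercubic_correct(z_lat, p, a, c2, c4):
    """Subtract the two fitted hypercubic terms,
    c2 * a^2 p^[4]/p^2 and c4 * a^4 p^[4], from the raw Z_q."""
    p2 = sum(pm ** 2 for pm in p)
    p4 = p4_invariant(p)
    return z_lat - c2 * a ** 2 * p4 / p2 - c4 * a ** 4 * p4
```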
From the resulting hypercubic corrected function $Z_q(a^2p^2,a^2\Lambda_{\rm
QCD}^2)$ we perform fits which incorporate the perturbative running, a non
perturbative $1/p^2$ term (presumably related to $g^2 \VEV{A^2}$), and a
hypercubic insensitive lattice spacing artefact proportional to $a^2p^2$. The
fits are good. The $a^2p^2$ term scales almost perfectly in lattice units, as
expected. The $g^2 \VEV{A^2}$ term scales rather well in physical units as
expected. The accuracy on $g^2 \VEV{A^2}$ is reduced by some correlations in the
fits: we see a correlation between the method used to correct hypercubic
artefacts and the estimated value of $g^2 \VEV{A^2}$. We also see a correlation
between the fitting range and the resulting $g^2 \VEV{A^2}$. But all values
of $g^2 \VEV{A^2}$ fall into the same ballpark and none of these fits
can be done without such a positive contribution. To estimate the systematic
uncertainty we have considered a large panel of fitting methods. All results
lie at more than four sigma from zero, except at $\beta=4.2$ with the sliding
window, where they are only 2.5 sigma above zero. Comparing the fitted $\VEV{A^2}$
using the tree-level Wilson coefficient and that using the three-loop one,
we find that the latter is about 20\% below the former.
The perturbative contribution to $Z_q$, $Z_q^{\mathrm{pert}}$, has a linear
dependence on the bare lattice coupling (see \fig{fig:g2Z0} and \eq{eq:final}),
as expected from perturbation theory, but with a larger coefficient than predicted,
even when the boosted coupling constant is used in perturbation theory.
We also merge all three $\beta$'s after having subtracted the $a^2p^2$ term
and rescaled the $\beta=4.05, 4.2$ data to $\beta=3.9$ using the ratios of
$Z_q^{\mathrm{pert}}(\mu)$. The overlap of the three data sets is rather good.
The need for a non-perturbative contribution is also visible there.
Both the perturbative and the non-perturbative contributions decrease with the
momentum and are convex, which makes their separation difficult. Roughly speaking,
they share the decrease between 4 and 40 GeV$^2$ in equal parts.
We have converted our results for $Z_q^{\mathrm {RI'}}$ and its perturbative
part $Z_q^{\mathrm{pert}}(\mu)$ into the $\mathrm{\overline{\rm MS}}$ scheme.
Combining all the results we find, using the three loop Wilson coefficient:
\begin{eqnarray}\label{eq:resultA2}
g^2(\mu^2) \VEV{A^2}_{\mu^2\; CM} &=& 2.01(11)\left(^{+0.61}_{- 0.73}\right)
\;\mathrm {GeV}^2 \quad \mu=10\, \mathrm {GeV} \ ,
\nonumber \\
Z_q^{\rm pert}((2\,{\mathrm {GeV}})^2,g^2_{\mathrm {bare}})
&=& 0.773(3)(7) - 0.323(6)\left(^{+0.002}_{- 0.040}\right)
\,(g^2_{\mathrm {bare}}-1.5) \ ,
\nonumber \\
Z_q^{\mathrm {RI'}}((2\,{\mathrm {GeV}})^2,g^2_{\mathrm {bare}})
&=& 0.805(14) - 0.323(6) \left(^{+0.002}_{- 0.040}\right)
\,(g^2_{\mathrm {bare}}-1.5) \ ,
\nonumber \\
Z_q^{\overline{\mathrm {MS}}\,\mathrm{pert}}((2\,{\mathrm {GeV}})^2,
g^2_{\mathrm {bare}}) &=& 0.750(3)(7) - 0.313(20)
\,(g^2_{\mathrm {bare}}-1.5) \ ,
\nonumber \\
Z_q^{\overline{\mathrm {MS}}\,\mathrm{non-perturbative}}((2\,{\mathrm {GeV}})^2,
g^2_{\mathrm {bare}}) &=& 0.781(6)(21) - 0.313(20) \,(g^2_{\mathrm {bare}}-1.5) \ ;
\end{eqnarray}
where the systematic error is estimated from the scattering of the results
in tables~\ref{tab:p2andA2}, \ref{tab:p2andA2_sliding}, \ref{tab:range} and
\ref{tab:merged}. We use the lattice spacings listed in table~\ref{setup}.
Furthermore, table~\ref{tab:global_comp} also shows a nice agreement between the condensates
at $N_f=2$, although some systematics appear not to be fully under control at $N_f=0$.
This supports the interpretation of the $\propto 1/p^2$ contribution as being
due to a condensate of the only dimension-two operator in Landau gauge: $A^2$.
Another confirmation comes from the validity of the Martinelli-Sachrajda
criterion~\cite{Martinelli:1996pk}. The accuracy on $g^2 \VEV{A^2}$ is however limited by several correlations
in the fits. Further and more accurate checks of the consistency of $g^2
\VEV{A^2}$ from other renormalisation constants will be very welcome.
\section{#1}
\renewcommand{\thesection}{\Alph{section}}}
\numberwithin{equation}{section}
\title{Renormalisation of quark propagators from
twisted-mass lattice QCD at $N_f$=2}
\author{B. Blossier$^a$, Ph.~Boucaud$^a$, M.
Brinet$^b$, F. De Soto$^c$, Z. Liu$^{d,e}$, V.~Morenas$^f$\\ O.
P\`ene$^a$, K. Petrov$^a$, J.~Rodr\'iguez-Quintero$^g$ }
\date{}
\begin{document}
\maketitle
\begin{figure}[h]
\begin{center}
\includegraphics[width=50mm]{figs/ETMC_rund.pdf}
\end{center}
\end{figure}
\begin{center}
$^a$ Laboratoire de Physique Th\'eorique\footnote{Unit\'e Mixte de Recherche 8627 du Centre National de
la Recherche Scientifique},\\
CNRS et Universit\'e Paris-Sud XI, B\^atiment 210, 91405 Orsay Cedex,
France\\
$^b$ Laboratoire de Physique Subatomique et de Cosmologie, CNRS/IN2P3/UJF, \\
53, avenue des Martyrs, 38026 Grenoble, France\\
$^c$ Dpto. Sistemas F\'isicos, Qu\'imicos y Naturales,
Univ. Pablo de Olavide, 41013 Sevilla, Spain\\
$^d$ DAMTP, University of Cambridge,
Wilberforce Road, Cambridge CB3 0WA, United Kingdom\\
$^e$ Institute of High Energy Physics,
Chinese Academy of Science, Beijing 100049, China \\
$^f$ Laboratoire de Physique Corpusculaire, Universit\'e Blaise Pascal, CNRS/IN2P3 \\
63177 Aubi\`ere Cedex, France\\
$^g$ Dpto. F\'isica Aplicada, Fac. Ciencias Experimentales,
Univ. de Huelva, 21071 Huelva, Spain
\end{center}
\newpage
\begin{abstract}
We present results concerning the non-perturbative evaluation of
the renormalisation constant for the quark field, $Z_q$, from lattice
simulations with twisted mass quarks and three values of the lattice spacing.
We use the RI'-MOM scheme. $Z_q$ has very large lattice spacing artefacts;
it is considered here as a test bed for elaborating accurate
methods which will be used for other renormalisation constants.
We recall and develop the non-perturbative
correction methods and propose tools to test the quality of the correction.
These tests are also applied to the perturbative correction method. We check
that the lattice spacing artefacts scale indeed as $a^2p^2$.
We then study the running of $Z_q$ with particular attention to the
non-perturbative effects, presumably dominated by the dimension-two gluon
condensate $\VEV{A^2}$ in Landau gauge. We show indeed that this effect is
present, and not small. We check its scaling in physical units confirming that
it is a continuum effect. It gives a $\sim 4 \%$ contribution at 2 GeV.
Different variants are used in order to test the reliability of our result
and estimate the systematic uncertainties.
Finally combining all our results and using the known Wilson
coefficient of $\VEV{A^2}$ we find $g^2(\mu^2) \VEV{A^2}_{\mu^2\; CM} =
2.01(11)\left(^{+0.61}_{- 0.73}\right) \;\mathrm {GeV}^2$ at $\mu=10\,
\mathrm{GeV}$, in fair agreement within uncertainties
with the value independently extracted from the strong coupling constant. We
convert the non-perturbative part of $Z_q$ from RI'-MOM to
$\mathrm{\overline{\rm MS}}$. Our result for the quark field renormalisation constant in the
$\overline{\mathrm {MS}}$ scheme is $Z_q^{\overline{\mathrm {MS}}\,\mathrm{pert}}((2\,{\mathrm {GeV}})^2,
g^2_{\mathrm {bare}}) = 0.750(3)(7) - 0.313(20)
\,(g^2_{\mathrm {bare}}-1.5)$ for the perturbative contribution and
$Z_q^{\overline{\mathrm {MS}}\,\mathrm{non-perturbative}}((2\,{\mathrm {GeV}})^2,
g^2_{\mathrm {bare}}) = 0.781(6)(21) - 0.313(20) \,(g^2_{\mathrm {bare}}-1.5)$
when the non-perturbative contribution is included.
\end{abstract}
\begin{flushleft}
DAMTP-2010-88\\
LPT-Orsay 10-81\\
UHU-FT/10-39 \\
LPSC-10122 \\
PCCF-1007
\end{flushleft}
\tableofcontents
\input{my-diagrams.tex}
\input{introduction}
\input{Continuum_running_Zq}
\input{lattice_Zq}
\input{Zq}
\section*{Acknowledgements}
We thank Alain Le Yaouanc, Vittorio Lubicz and Gian-Carlo Rossi
for a critical reading of this manuscript and the very useful subsequent discussions.
This work was granted access to the HPC resources of CINES and IDRIS
under the allocation 2010-052271 made by GENCI.
J. R-Q is indebted to the Spanish MICINN for support through
the research project FPA2009-10773 and
to the ``Junta de Andalucia'' through P07FQM02962.
Z. Liu thanks the UK Science and Technology Facilities Council
(STFC) for financial support.
\input{appendix-Pp}
\include{biblio}
\end{document}
\section{Appendix: The Wilson coefficients at $O(\alpha^4)$}
\label{appendix2}
The purpose of this appendix is to describe briefly the OPE analysis of the quark propagator
renormalization constant defined in \eq{Zqdef} that leads us to
\eq{eq:zqfit}, where the four-loop
results of ref.~\cite{Chetyrkin:2009kh} are exploited to derive the Wilson coefficients with
the appropriate renormalization prescription. This OPE analysis
is analogous to the one performed in refs.~\cite{Boucaud:2001st,Boucaud:2000nd,Boucaud:2005xn,Boucaud:2008gn}.
The starting point is the OPE of the inverse of the quark propagator:
\begin{eqnarray}
S^{-1}(p^2,\mu^2) \ = \ Z^{-1}_q(\mu^2) \ S^{-1}_{\rm bare}(p^2) &= &
\left(S^{\rm pert}\right)^{-1}(p^2,\mu^2) \ + \ i p \!\!\!/
\frac{c_{2 Z_q}\left(\frac{p^ 2}{\mu^2},\alpha(\mu)\right)}{p^2}
\frac{\langle A^2\rangle_{R,\mu^2}}{4 (N_c^2-1)} \delta_{ab}
\nonumber \\
&=&
\frac{\left(S^{\rm pert}_{\rm bare}\right)^{-1}(p^2)}{ Z^{\rm pert}_q(\mu^2)}
\ + \ i p \!\!\!/
\frac{c_{2 Z_q}\left(\frac{p^ 2}{\mu^2},\alpha(\mu)\right)}{p^2}
\frac{\langle A^2\rangle_{R,\mu^2}}{4 (N_c^2-1)} \delta_{ab}
\ , \nonumber \\
\label{ap-prop}
\end{eqnarray}
where only the leading term in $p \!\!\!/$ is kept, the quark mass
being assumed to be negligible or to vanish. The cut-off
regularization dependence is omitted for the bare quantities but that on the
renormalization momentum, $\mu$, is explicitly written for the renormalized
ones. In the RI'-MOM scheme we define $Z^{\rm RI'}_q$ such that $(S_{\rm
bare})^{-1}(p^2)= i p \!\!\!/ \delta_{ab} \ Z^{\rm RI'}_q(p^2)$ and $Z^{\rm
pert\; RI'}_q$ such that $(S^{\rm pert}_{\rm bare})^{-1}(p^2)= i p \!\!\!/
\delta_{ab} \ Z^{\rm pert \; RI'}_q(p^2)$. Then, with the renormalization momentum
$\mu^2$ taken to lie in the perturbative regime, one can apply \eq{Zqdef} and
obtain
\begin{eqnarray}
\frac{Z_q^{RI'}(p^2)}{ Z^{\rm pert}_q(\mu^2)}
&=&
\frac{Z_q^{\rm pert\; RI'}(p^2)}{ Z^{\rm pert}_q(\mu^2)}
\ + \
c_{2 Z_q}\left(\frac{p^ 2}{\mu^2},\alpha(\mu)\right) \
\frac{\langle A^2\rangle_{R,\mu^2}}{4 (N_c^2-1) p^2}
\nonumber \\
&=&
c_{0 Z_q}\left(\frac{p^ 2}{\mu^2},\alpha(\mu)\right) \ + \
c_{2 Z_q}\left(\frac{p^ 2}{\mu^2},\alpha(\mu)\right) \
\frac{\langle A^2\rangle_{R,\mu^2}}{4 (N_c^2-1) p^2} \ .
\label{ap-prop2}
\end{eqnarray}
This implies a definition of $c_{0 Z_q}$;
$c_{2 Z_q}\left(p^2/\mu^2,\alpha(\mu)\right)$ is the Wilson coefficient of $g^2 \VEV{A^2}$.
Although not yet specifying the renormalisation scheme of $Z^{\rm pert}_q(\mu^2)$,
we know that
\begin{eqnarray}
c_{0 Z_q}\left(1,\alpha(\mu)\right) &=& 1 + O(\alpha^2) \ ,
\end{eqnarray}
while $c_{2 Z_q}$ is known up to $O(\alpha^4)$ in the $\mathrm{\overline{\rm MS}}$
scheme~\cite{Chetyrkin:2009kh} and, in particular,
$c_{2 Z_q}^{\mathrm{\overline{\rm MS}}}\left(1,\alpha(\mu)\right)$ is given in eq.~(18) of that paper
using $q^2=\mu^2$. Let us however keep in mind that
\begin{eqnarray}\label{trentedeux}
c_{2 Z_q}\left(1,\alpha(\mu)\right) &=& \frac{32 \pi} 3 \alpha(\mu)
\;\left(1+{\cal O}(\alpha(\mu))\right) = \frac
{8\,g^2(\mu)}{3}\;\left(1+{\cal O}(\alpha(\mu))\right) \ .
\end{eqnarray}
Now, with the help of the appropriate renormalization constants, one can also
write \eq{ap-prop2} in terms of bare quantities:
\begin{eqnarray}\label{ap-bar}
Z_q^{RI'}(p^2) &=& Z^{\rm pert }_q(\mu^2) \
c_{0 Z_q}\left(\frac{p^2}{\mu^2},\alpha(\mu)\right)
\nonumber \\
&+& Z^{\rm pert}_q(\mu^2) Z_{A^2}^{-1}(\mu^2,\Lambda^2)
c_{2 Z_q} \left(\frac{p^2}{\mu^2},\alpha(\mu)\right) \
\frac{\langle A^2 \rangle}{4 (N_c^2-1) p^2} \ ,
\end{eqnarray}
where $A^2_R=Z^{-1}_{A^2} A^2$.
Then, as the $\mu$-dependence of the l.h.s. and r.h.s. of \eq{ap-bar} should
match for any $p$, one can take the logarithmic derivative with
respect to $\mu$ and the infinite cut-off limit, term by term, on the r.h.s.,
and obtain:
\begin{eqnarray}\label{ap-diffeqs}
\gamma_q(\alpha(\mu)) +
\left\{ \frac{\partial}{\partial\log\mu^2}
+ \beta(\alpha(\mu))\frac{\partial}{\partial \alpha}\right\} \
\ln c_{0 Z_q}\left(\frac{q^2}{\mu^2},\alpha(\mu)\right) &=& 0
\nonumber \\
-\gamma_{A^2}(\alpha(\mu)) + \gamma_q(\alpha(\mu))
+ \left\{ \frac{\partial}{\partial\log\mu^2}
+ \beta(\alpha(\mu))\frac{\partial}{\partial \alpha}\right\} \
\ln c_{2 Z_q}\left(\frac{q^2}{\mu^2},\alpha(\mu)\right) &=& 0 \ ,
\end{eqnarray}
where the $\beta$-function, chosen to be in $\mathrm{\overline{\rm MS}}$, is defined as
\begin{eqnarray}\label{beta-f}
\beta(\alpha(\mu)) \ = \frac{d}{d\log\mu^2} \alpha(\mu) \ = \ - 4 \pi \ \sum_{i=0} \beta_i \left( \frac{\alpha(\mu)}{4 \pi}\right)^{i+2}
\end{eqnarray}
and where $\gamma_q(\alpha(\mu))$ and $\gamma_{A^2}(\alpha(\mu))$ are the anomalous
dimensions for the fermion propagator and local operator $A^2$, respectively,
which are formally defined as
\begin{eqnarray}
\gamma_{X}(\alpha(\mu)) \ = \ \frac{d}{d\log\mu^2} \log{Z_{X}} \ = -
\sum_{i=0} \gamma_i^{X} \ \left( \frac{\alpha(\mu)}{4\pi} \right)^{i+1} \ ,
\end{eqnarray}
where $X$ stands for $q$ or $A^2$.
The scheme for the anomalous dimension of $Z_{A^2}$ is imposed
through the renormalization of the local operator $A^2$, as was done
in ref.~\cite{Boucaud:2001st} to obtain its leading-logarithm contribution, and it is
only known in the $\mathrm{\overline{\rm MS}}$ scheme at order ${\cal O}(\alpha^4)$~\cite{Gracey:2002yt}.
That is therefore the only possible choice of scheme for $\gamma_{A^2}$
in eqs.~(\ref{ap-diffeqs}). Concerning $Z_q^{\rm pert}$, its scheme is determined by the
renormalization prescription for the non-perturbative propagator on the left-hand side
of \eq{ap-prop}. Both $\mathrm{\overline{\rm MS}}$ and RI'-MOM are possible. As we aim to obtain
a non-perturbative formula to be confronted with the lattice estimate of the RI'-MOM
quark renormalization constant, it is convenient to prescribe the RI'-MOM scheme
for $Z_q^{\rm pert}$ as well. Thus, eqs.~(\ref{ap-diffeqs}) must be rewritten as
\begin{eqnarray}\label{ap-diffeqs2}
\gamma_q^{\rm RI'}(\alpha(\mu)) +
\left\{ \frac{\partial}{\partial\log\mu^2}
+ \beta(\alpha(\mu))\frac{\partial}{\partial \alpha}\right\} \
\ln c_{0 Z_q}^{\rm RI'}\left(\frac{q^2}{\mu^2},\alpha(\mu)\right) &=& 0
\nonumber \\
-\gamma_{A^2}^{\mathrm{\overline{\rm MS}}}(\alpha(\mu)) + \gamma_q^{\rm RI'}(\alpha(\mu))
+ \left\{ \frac{\partial}{\partial\log\mu^2}
+ \beta(\alpha(\mu))\frac{\partial}{\partial \alpha}\right\} \
\ln c_{2 Z_q}^{\mathrm{W\overline{\rm MS}}}\left(\frac{q^2}{\mu^2},\alpha(\mu)\right) &=& 0 \ ,
\end{eqnarray}
where
\begin{eqnarray}\label{eq:defc0}
c_{0 Z_q}^{\rm RI'} \left(\frac{p^ 2}{\mu^2},\alpha(\mu)\right)\equiv
\frac{Z_q^{\rm pert\; RI'}(p^2)}{Z^{\rm pert\; RI'}_q(\mu^2)} \
\end{eqnarray}
and $c_{2 Z_q}^{\mathrm{W\overline{\rm MS}}}$ is in the $\mathrm{W\overline{\rm MS}}$ scheme\footnote{We define this scheme by imposing that the local operator of the Wilson expansion is renormalized in $\mathrm{\overline{\rm MS}}$, while the expanded operator (the quark propagator, in our case) is
in a MOM scheme. We call this ``Wilson $\mathrm{\overline{\rm MS}}$''.} explicitly defined by the second equation of (\ref{ap-diffeqs2}),
after the RI'-MOM prescription for $Z_q^{\rm pert}$, that of $\overline{\rm MS}$ for $A^2$ and by the choice of a boundary condition,
$c_{2 Z_q}^{\mathrm{W\overline{\rm MS}}}(1,\alpha(q))$. Then, from \eq{ap-prop2} one obtains
\begin{eqnarray}\label{ap-fin0}
Z_q^{\rm RI'}\left(p^2\right) = Z^{\rm pert \; RI'}_q(\mu^2) \
c_{0 Z_q}^{\rm RI'} \left(\frac{p^ 2}{\mu^2},\alpha(\mu)\right) \
\left( 1+
\frac {c_{2 Z_q}^{\mathrm{W\overline{\rm MS}}} \left(\frac{p^2}{\mu^2},\alpha(\mu)\right)}
{ c_{0 Z_q}^{\rm RI'} \left(\frac{p^ 2}{\mu^2},\alpha(\mu)\right)} \
\frac{\langle A^2 \rangle_{R,\mu^2}}{4 (N_c^2-1) p^2}\right) \ ,
\end{eqnarray}
and, in practice, both eqs.~(\ref{ap-diffeqs2}) can
be combined to give the following differential equation,
\begin{eqnarray}\label{ap-fin}
\left\{ -\gamma^{\mathrm{\overline{\rm MS}}}_{A^2}(\alpha(\mu)) + \frac{\partial}{\partial\log\mu^2}
+ \beta(\alpha(\mu))\frac{\partial}{\partial \alpha}\right\} \
\frac{c_{2 Z_q}^{\mathrm{W\overline{\rm MS}}}\left(\frac{p^2}{\mu^2},\alpha(\mu)\right)}
{c_{0 Z_q}^{\rm RI'}\left(\frac{p^2}{\mu^2},\alpha(\mu)\right)} \ = \ 0 \ ,
\end{eqnarray}
that can be solved to provide us with the ratio of Wilson coefficients, $c_{2 Z_q}/c_{0 Z_q}$,
required to implement \eq{ap-fin0}. To allow for the best comparison
with the results from the analysis performed in ref.~\cite{Boucaud:2008gn}, we applied
\begin{eqnarray}\label{boundary}
c_{2 Z_q}^{\mathrm{W\overline{\rm MS}}}\left(1,\alpha(p)\right) \ \equiv \ c_{2 Z_q}^{\mathrm{\overline{\rm MS}}}\left(1,\alpha(p)\right) \ ,
\end{eqnarray}
where $c_{2 Z_q}^{\mathrm{\overline{\rm MS}}}\left(1,\alpha(\mu)\right)$ is taken from
eq.~(18) of ref.~\cite{Chetyrkin:2009kh} using $q^2=\mu^2$,
as a boundary condition which is equivalent to the one applied in the analysis of
ref.~\cite{Boucaud:2008gn}.
On the other hand, if we take $Z_q^{\rm pert}$ to be renormalized in $\mathrm{\overline{\rm MS}}$, the
equations in (\ref{ap-diffeqs}) read
\begin{eqnarray}\label{ap-diffeqs3}
\gamma_q^{\mathrm{\overline{\rm MS}}}(\alpha(\mu)) +
\left\{ \frac{\partial}{\partial\log\mu^2}
+ \beta(\alpha(\mu))\frac{\partial}{\partial \alpha}\right\} \
\ln c_{0 Z_q}^{\mathrm{\overline{\rm MS}}}\left(\frac{q^2}{\mu^2},\alpha(\mu)\right) &=& 0
\nonumber \\
-\gamma_{A^2}^{\mathrm{\overline{\rm MS}}}(\alpha(\mu)) + \gamma_q^{\mathrm{\overline{\rm MS}}}(\alpha(\mu))
+ \left\{ \frac{\partial}{\partial\log\mu^2}
+ \beta(\alpha(\mu))\frac{\partial}{\partial \alpha}\right\} \
\ln c_{2 Z_q}^{\mathrm{\overline{\rm MS}}}\left(\frac{q^2}{\mu^2},\alpha(\mu)\right) &=& 0 \ ,
\end{eqnarray}
where $c_{2 Z_q}^{\mathrm{\overline{\rm MS}}}$ is the Wilson coefficient computed in ref.~\cite{Chetyrkin:2009kh},
provided that the boundary condition, $c_{2 Z_q}^{\mathrm{\overline{\rm MS}}}\left(1,\alpha(\mu)\right)$, is taken again
from eq.~(18) of the same paper using $q^2=\mu^2$.
Then, we can again combine Eqs.~(\ref{ap-diffeqs3}) to obtain for $c_2^\mathrm{\overline{\rm MS}}/c_0^\mathrm{\overline{\rm MS}}$ the same
equation \eq{ap-fin} that, with the same boundary condition, leads to:
\begin{eqnarray}
\frac{c_{2 Z_q}^{\mathrm{W\overline{\rm MS}}}\left(\frac{p^2}{\mu^2},\alpha(\mu)\right)}
{c_{0 Z_q}^{\rm RI'}\left(\frac{p^2}{\mu^2},\alpha(\mu)\right)}
\ = \
\frac{c_{2 Z_q}^{\mathrm{\overline{\rm MS}}}\left(\frac{p^2}{\mu^2},\alpha(\mu)\right)}
{c_{0 Z_q}^{\mathrm{\overline{\rm MS}}}\left(\frac{p^2}{\mu^2},\alpha(\mu)\right)} \ .
\end{eqnarray}
On the other hand, we can also combine the second equation of (\ref{ap-diffeqs2}) with the second one of
\eq{ap-diffeqs3} and obtain
\begin{eqnarray}\label{ap-fin-alt}
\left\{ \gamma_q^{\rm RI'}(\alpha(\mu)) -\gamma_q^{\mathrm{\overline{\rm MS}}}(\alpha(\mu)) + \frac{\partial}{\partial\log\mu^2}
+ \beta(\alpha(\mu))\frac{\partial}{\partial \alpha}\right\} \
\frac{c_{2 Z_q}^{\mathrm{W\overline{\rm MS}}}\left(\frac{p^2}{\mu^2},\alpha(\mu)\right)}
{c_{2 Z_q}^{\mathrm{\overline{\rm MS}}}\left(\frac{p^2}{\mu^2},\alpha(\mu)\right)} \ = \ 0 \ ,
\end{eqnarray}
that, according to \eq{boundary}, can be solved with the boundary condition
$c_{2 Z_q}^{\mathrm{W\overline{\rm MS}}}(1,\alpha(p))/c_{2 Z_q}^{\mathrm{\overline{\rm MS}}}(1,\alpha(p)) \equiv 1$ and leaves us with a relation between the
$\mathrm{W\overline{\rm MS}}$ and $\mathrm{\overline{\rm MS}}$ Wilson coefficients which allows \eq{ap-fin0} to be re-written as
\begin{eqnarray}\label{eq:app_final1}
Z_q^{\rm RI'}\left(p^2\right) &=& Z^{\rm pert \; RI'}_q(\mu^2) \
c_{0 Z_q}^{\rm RI'} \left(\frac{p^ 2}{\mu^2},\alpha(\mu)\right) \
\nonumber \\
&\times&
\left( 1+
\frac {c_{2 Z_q}^{\mathrm{\overline{\rm MS}}} \left(\frac{p^2}{\mu^2},\alpha(\mu)\right)}
{ c_{0 Z_q}^{\rm RI'} \left(\frac{p^ 2}{\mu^2},\alpha(\mu)\right)} \
\frac {c_{2 Z_q}^{\mathrm{W\overline{\rm MS}}} \left(\frac{p^2}{\mu^2},\alpha(\mu)\right)}
{ c_{2 Z_q}^{\mathrm{\overline{\rm MS}}} \left(\frac{p^ 2}{\mu^2},\alpha(\mu)\right)} \
\frac{\langle A^2 \rangle_{\mathrm{\overline{\rm MS}},\mu^2}}{4 (N_c^2-1) p^2}\right) \ .
\end{eqnarray}
where, furthermore, $c_{2 Z_q}^{\mathrm{\overline{\rm MS}}}$ is to be taken from ref.~\cite{Chetyrkin:2009kh} and
$c_{0 Z_q}^{\rm RI'}$ from ref.~\cite{Chetyrkin:1999pq}. Thus, we can use either \eq{ap-fin0} with
the solution of \eq{ap-diffeqs2}, or \eq{eq:app_final1} with that of \eq{ap-fin-alt}, to compare
with the lattice estimates of $Z_q^{\rm RI'}$. Both expressions are equivalent. In the first case,
one can proceed as was done in ref.~\cite{Blossier:2010ky} to solve \eq{ap-diffeqs2}.
To illustrate this first method, let us recall that \eq{ap-diffeqs2}
can be solved at the leading logarithm by applying the following ansatz,
\begin{eqnarray}\label{ap-ansatz1}
\frac{\displaystyle c_{2 Z_q}^{\mathrm{W\overline{\rm MS}}}\left(\frac{p^2}{\mu^2},\alpha(\mu)\right)}
{\displaystyle c_{0 Z_q}^{\rm RI'}\left(\frac{p^2}{\mu^2},\alpha(\mu)\right)}
\ = \
\frac{32 \pi}{3} \alpha(p)
\
\left( \frac{\alpha(\mu)}{\alpha(p)}\right)^a \
\left( \rule[0cm]{0cm}{0.5cm} \ 1 + {\cal O}\left( \alpha \right) \ \right)
\ ,
\end{eqnarray}
where we apply \eq{trentedeux}, and the exponent $a$, required to satisfy \eq{ap-diffeqs2}, is
\begin{eqnarray}
a \ = \ \frac{\gamma_0^{A^2}}{\beta_0} \ = \frac{105-8 N_f}{132-8 N_f} \ .
\end{eqnarray}
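This exponent can be checked with exact rational arithmetic (a trivial sketch; the identity $132 - 8N_f = 12\beta_0$ with $\beta_0 = 11 - \tfrac{2}{3}N_f$ is the standard convention assumed here):

```python
from fractions import Fraction

def a_exp(nf):
    # a = gamma_0^{A^2} / beta_0 = (105 - 8 Nf) / (132 - 8 Nf)
    return Fraction(105 - 8 * nf, 132 - 8 * nf)

# the denominator is 12 * beta_0 with beta_0 = 11 - 2 Nf / 3 (assumed convention)
for nf in range(7):
    assert 132 - 8 * nf == 12 * Fraction(33 - 2 * nf, 3)
```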
In the second case, to solve \eq{ap-fin-alt}, a similar ansatz extended to three-loops order
can be applied,
\begin{eqnarray}\label{ap-ansatz2}
\frac{\displaystyle c_{2 Z_q}^{\mathrm{W\overline{\rm MS}}}\left(\frac{p^2}{\mu^2},\alpha(\mu)\right)}
{\displaystyle c_{2 Z_q}^{\mathrm{\overline{\rm MS}}}\left(\frac{p^2}{\mu^2},\alpha(\mu)\right)}
\ = \
\left( \frac{\alpha(\mu)}{\alpha(p)}\right)^b \
\left( \frac{\displaystyle 1+\sum_i r_i \ \left( \frac{\alpha(\mu)}{4\pi} \right)^i}
{\displaystyle 1+\sum_i r_i \ \left( \frac{\alpha(p)}{4\pi} \right)^i} \right)
\ ,
\end{eqnarray}
where we use \eq{boundary} for the boundary condition. Then, by requiring that the ansatz
\eq{ap-ansatz2} verifies \eq{ap-fin-alt},
the coefficients $b$ and $r_i$'s will be obtained in terms of those for the
fermion propagator $\mathrm{\overline{\rm MS}}$ and RI'-MOM anomalous dimensions and for the $\mathrm{\overline{\rm MS}}$
$\beta$-function.
However, in this case,
\begin{eqnarray}
b \ = \ \frac{\gamma_0^{q \mathrm{\overline{\rm MS}}}-\gamma_0^{q {\rm RI'}}}{\beta_0} \ = \ 0 \ ,
\end{eqnarray}
because the first-loop coefficient of the anomalous dimension is scheme independent
(in the particular case of the Landau gauge, this first-loop coefficient is moreover
zero in any scheme~\cite{Chetyrkin:1999pq}). Furthermore, as can be seen in appendix C
of ref.~\cite{Chetyrkin:1999pq}, one is also left with $\gamma_1^{q \mathrm{\overline{\rm MS}}} \equiv \gamma_1^{q {\rm RI'}}$
in Landau gauge. Then,
\begin{eqnarray}\label{r1}
r_1 =\frac{\gamma_1^{q \mathrm{\overline{\rm MS}}}-\gamma_1^{q {\rm RI'}}}{\beta_0} \ = 0 \ ,
\end{eqnarray}
and the Wilson coefficients for $\mathrm{\overline{\rm MS}}$ and RI'-MOM will thus differ only at
order ${\cal O}\left(\alpha^2\right)$, with the non-zero $r_i$ coefficients
to be applied in \eq{ap-ansatz2} given by
\begin{eqnarray}\label{ris}
r_2 &=& \frac{ \gamma_2^{q \mathrm{\overline{\rm MS}}}-\gamma_2^{q {\rm RI'}} }{2 \beta_0} \ =
- 25.4642 + 2.3333 \ N_f \ , \\
r_3 &=& \frac{\gamma_3^{q \mathrm{\overline{\rm MS}}}-\gamma_3^{q {\rm RI'}}}{3\beta_0} -
\beta_1 \frac{\gamma_2^{q \mathrm{\overline{\rm MS}}}-\gamma_2^{q {\rm RI'}}}{3\beta_0^2} \ =
\ -1489.9796 + 246.4424 \ N_f - 6.4609 \ N_f^2 \ ;
\nonumber
\end{eqnarray}
where the three- and four-loop coefficients of the fermion propagator anomalous
dimension in $\mathrm{\overline{\rm MS}}$ and RI'-MOM have again been obtained from ref.~\cite{Chetyrkin:1999pq}.
This leads, using Eqs.~(\ref{eq:app_final1},\ref{ap-ansatz2}--\ref{ris}) with
$N_c^2-1=8$ and $N_f=2$, to our final formula for
the artefact-free lattice determination of $Z_q$:
\begin{eqnarray}\label{eq:app_final2}
Z_q^{\rm Latt\; artefree}(p^2,\beta)
&=& Z_q^{\rm pert\; RI'}(\mu'^2) \,
c_{0 Z_q}^{\rm RI'}\left(\frac{p^ 2}{\mu'^2},\alpha(\mu')\right) \\
&\times& \left( 1 \ + \,
\frac{c_{2 Z_q}^{\mathrm{\overline{\rm MS}}}\left(\frac{p^ 2}{\mu^2},\alpha(\mu)\right)}
{c_{0 Z_q}^{\rm RI'}\left(\frac{p^ 2}{\mu^2},\alpha(\mu)\right)} \
\frac{1 - 0.1317 \ \alpha^2(\mu) - 0.5155 \ \alpha^3(\mu) } {1 -
0.1317 \ \alpha^2(p) - 0.5155 \ \alpha^3(p) } \
\frac{\ \langle A^2\rangle_{\mathrm{\overline{\rm MS}},\mu^2}}{32 \ p^2} \right) \nonumber
\end{eqnarray}
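The numerical coefficients $-0.1317$ and $-0.5155$ appearing in \eq{eq:app_final2} follow from the $r_i$'s of \eq{ris} at $N_f=2$, since $r_i\,(\alpha/4\pi)^i = \left(r_i/(4\pi)^i\right)\alpha^i$. A minimal sketch checking this, using only the numbers quoted above:

```python
import math

Nf = 2
# r_2 and r_3 from eq. (ris), evaluated at Nf = 2
r2 = -25.4642 + 2.3333 * Nf
r3 = -1489.9796 + 246.4424 * Nf - 6.4609 * Nf**2

# coefficients multiplying alpha^2 and alpha^3 once (alpha/4pi)^i is expanded
c2 = r2 / (4 * math.pi) ** 2
c3 = r3 / (4 * math.pi) ** 3
```

These reproduce the $-0.1317\,\alpha^2$ and $-0.5155\,\alpha^3$ terms of \eq{eq:app_final2} to the quoted precision.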
%
In this last equation, we exploited the fact that the expression in parentheses in
Eqs.~(\ref{eq:app_final1},\ref{eq:app_final2}) does not vary with the renormalization
momentum for the local operator $A^2$, as can be inferred
from~\eq{ap-fin}. Thus, once a given momentum, $\mu'^2$, is
fixed for the renormalization of the fermion propagator in the l.h.s. of
\eq{ap-prop}, the one appearing in $Z_q^{\rm pert \ RI'}$ in front of the expression
in parentheses, one is still left with the freedom of choosing a renormalization
momentum, $\mu^2$, which does not need to be the same, for the local
operator $A^2$ inside the parentheses.
In \eq{eq:app_final2} the coefficients $c_{0 Z_q}^{RI'}$ and $c_{2 Z_q}^{\mathrm{\overline{\rm MS}}}$ are known from
perturbation theory: the former can be obtained from ref.~\cite{Chetyrkin:1999pq} and the
latter from ref.~\cite{Chetyrkin:2009kh}. Two parameters are to be fitted:
$Z_q^{\rm pert\; RI'}(\mu'^2)$ and the non-perturbative condensate
$g^2(\mu) \ \langle A^2\rangle_{R,\mu^2}$. It is important to underline
that the condensate {\it is defined via the OPE, i.e. from Eqs.~(\ref{eq:app_final1},\ref{eq:app_final2})}.
Its precise definition depends on the renormalisation scheme, the renormalisation scale, as well as the order in
perturbation theory at which the coefficients $c_{0 Z_q}$ and $c_{2 Z_q}$
are used. In \eq{eq:app_final1} and \eq{eq:app_final2} the renormalisation scheme
for $g^2(\mu) \ \langle A^2\rangle_{R,\mu^2}$ is $\mathrm{\overline{\rm MS}}$ and the
scale is $\mu$ (10 GeV in our calculations). The coupling we use for the perturbative expansions
of these coefficients, $c_{0 Z_q}$ and $c_{2 Z_q}$, is also chosen to be the $\mathrm{\overline{\rm MS}}$ one.
These choices are kept all along the present paper. If we now wish to compare
$g^2(\mu) \ \langle A^2\rangle_{R,\mu^2}$ from the present calculation to
that from another calculation, for example from the strong coupling
constant~\cite{Blossier:2010ky}, {\it we must as far as possible use the same
precise definition in both cases}. However, its dependence on the scheme and
on the order in perturbation theory is not so important: as seen in
section~\ref{sec:running} other systematic uncertainties are larger.
\section{Introduction}
Computing matrix elements in lattice Quantum ChromoDynamics (LQCD) often requires
the computation of renormalisation constants. Indeed, even if the lattice
computation contains only $O(a^2)$ lattice artefacts, the bare quantities differ
from the continuum ones by $O(g^2) \simeq O(1/\log(a^2))$, which is
unacceptable. Renormalisation restores the $O(a^2)$ accuracy. It has also long
been known that these renormalisation constants need to be computed
non-perturbatively, using LQCD techniques.
Several non-perturbative methods have been proposed. Let us here concentrate on
those based on the MOM scheme. They start from the computation of Green
functions of quarks, gluons and ghosts at large enough momenta in a fixed
gauge, usually the Landau gauge. This gives the renormalisation constant
$Z(\mu)$ at many values of the scale $\mu$. Assuming that our goal is to deliver
the renormalisation constant in the $\overline{\rm MS}$ scheme at, say, 2 GeV (a typical
phenomenological scale), one must then use results from perturbative QCD to
convert MOM into $\overline{\rm MS}$ and run to 2 GeV. The running of $Z_{\mathrm MOM}(\mu)$
is a very powerful testing tool: indeed perturbative QCD is only useful if we
are in the perturbative regime, i.e. at large enough momenta. The only way to
check whether this is the case is to compare lattice data with the perturbative
running. In this framework, it turns out that this is not always the case.
Deviations from perturbative running can be analysed via Wilson operator
expansion and the Shifman-Vainshtein-Zakharov (SVZ) sum rules.
It turns out that the dominant non-perturbative correction in
Landau gauge is due to the non-vanishing vacuum expectation value of the
only dimension-two operator,
$A^2 \equiv A^a_\mu A^{a\mu}$~\cite{Lavelle:1992yh}, and that it is not
small~\cite{Boucaud:2001st,Boucaud:2000nd,Boucaud:2005rm,Boucaud:2005xn,
Boucaud:2008gn,Blossier:2010ky,Megias:2007pq}.
It is thus necessary to carefully look for the possibility of such a
contribution, which appears in the OPE, as a $1/p^2$ contribution up to
logarithmic corrections. The coefficient of this $1/p^2$ contribution is
equal to the vacuum expectation value $g^2\VEV{A^2}$ times a Wilson
coefficient that has to be computed in perturbation theory; this has been done up
to three loops for propagators~\cite{Chetyrkin:2009kh}. To argue that a
measured $1/p^2$ contribution is a continuum power correction and not a
lattice artefact, we must check that the $1/p^2$ term in the fit scales with
lattice spacing when expressed {\it in physical units}. To further argue that
this is indeed due to $\VEV{A^2}$ we must compare the resulting $\VEV{A^2}$
from different quantities and thus check the universality of the condensates,
which underlies the SVZ technology. The theory of the Wilson
operator expansion is then constraining: since there exists only one dimension-two
operator, provided that it is renormalized with the same prescription for
all these quantities, all the different estimates of $\VEV{A^2}$ should
coincide within errors, up to $1/p^4$ corrections, for a given value of $N_f$
and of the dynamical masses. Of course, its extraction needs the coefficients of the
Wilson expansion, which are computable in perturbation theory.
To test this universality of the extracted $\VEV{A^2}$ is one of the goals of
our program of analysing many different quark and gluon quantities obtained from lattice gauge
configurations produced by the European twisted mass collaboration (ETMC).
We have also applied a criterion proposed in~\cite{Martinelli:1996pk} to validate
the way we use the operator expansion.
This paper makes one of the first steps in such a program.
It is also worthwhile to mention that several authors
have elaborated further on the relation between this gauge-dependent gluon
condensate, obtained in the Landau gauge, and possible $1/p^2$-terms in gauge
invariant quantities, and thus on the phenomenological implications,
mainly in connection with confinement scenarios, of such
a dimension-two condensate~\cite{Gubarev:2000nz}.
All this can only be done once the lattice artefacts are eliminated or at
least under control. The $O(a^2)$ artefacts can be quite large since we
consider large momenta, while finite volume artefacts are minor. There are
two main types of $O(a^2)$ artefacts: $O(a^2p^2)$ artefacts which respect the
continuum $O(4)$ rotation symmetry, and hypercubic artefacts which respect
the $H_4$ hypercubic symmetry group but not $O(4)$. The latter are effects of
the hypercubic symmetry of the lattice action. We will identify the
$O(a^2p^2)$ artefacts non-perturbatively by doing a fit of the running
$Z(\mu)$ which will include the perturbative running, the $\VEV{A^2}$ power
correction and a term proportional to $a^2p^2$. Notice that, while the
perturbative and $\VEV{A^2}$ running contributions must approximately scale
in physical units, the artefacts must scale in lattice units. This is an
additional check we shall perform.
Concerning the elimination of hypercubic artefacts, which is better done before
the above-mentioned running fit, several methods have been proposed in the
literature: the democratic selection, the perturbative correction and the
non-perturbative ``egalitarian'' one (``egalitarian'' because all the points are used on the same footing in this approach).
We will discuss this in some detail later
and perform extensive comparisons. In particular we will use a new quality
test which consists in examining to what extent the ``half-fishbone''
structure, which raw lattice results for $Z_q$ always exhibit and which
is a dramatic illustration of hypercubic artefacts, is corrected by each
method.
Although all the issues raised here concern all the renormalisation constants
as well as the QCD coupling constant, we will concentrate in the following
on $Z_q$, which renormalises the quark field
\begin{eqnarray}
q_R = Z_q^{1/2} q_B
\end{eqnarray}
where $q_B$ ($q_R$) is the bare (renormalised) quark field.
In the RI'-MOM scheme $Z_q$ is defined by
\begin{eqnarray}\label{Zqdef}
Z_q(\mu^2=p^2) = \frac {-i} {12\, p^2} {\mathrm Tr}[S_{bare}^{-1}(p) \;p \!\!\!/]
\end{eqnarray}
where $S_{\mathrm{bare}}(p)$ is the bare quark propagator.
Our goal is to compute this constant from LQCD with twisted-mass Wilson quarks.
In~\cite{Boucaud:2003dx,Boucaud:2005rm} a study~\footnote{In~\cite{Boucaud:2005rm}
$Z_q$ was denoted $Z_\psi$} of $Z_q$ was performed from LQCD in the case
$N_f=0$ using both the overlap and Wilson clover fermions.
In~\cite{Boucaud:2003dx} the exceptional size of hypercubic artefacts was
stressed and a non-perturbative elimination of hypercubic artefacts was
performed along the same principles as those used here.
In~\cite{Boucaud:2005rm} the Wilson coefficient of the
$\VEV{A^2}$ was computed up to the leading logarithm approximation and applied to
estimate the condensate from the LQCD data. The outcome was that a
significant non-perturbative contribution from $\VEV{A^2}$ was needed to
account for the results. Notice that we do not expect the values of $\VEV{A^2}$ for
$N_f=0$ and $N_f=2$ to be close, or even similar.
Summarising the above discussion, we concentrate here on $Z_q$ because we
consider it as a kind of benchmark, for the following reasons:
\begin{itemize}
\item It has especially large hypercubic artefacts and is thus a good
test bed for a correct treatment of these.
\item It has a vanishing anomalous dimension at leading order in the Landau
gauge: its perturbative running is thus soft.
\item The Wilson coefficient of the $\VEV{A^2}$ condensate is rather
large~\cite{Boucaud:2005rm,Chetyrkin:2009kh}, which is an incentive to
look carefully for non-perturbative contributions.
\end{itemize}
In this paper, in order to test thoroughly the reliability of our results,
we will compare many fits:
perturbative/non-perturbative hypercubic correction, one-window/sliding-windows
non-perturbative hypercubic correction, effect of the total fitting range,
fitting separately every $\beta$ and global fit. As a consequence we will
proceed as follows:
\begin{itemize}
\item We will recall some general formulae concerning the perturbative
and non-perturbative running in the continuum
\item describe our lattice setting
and our non-perturbative ``egalitarian'' method to eliminate hypercubic
artefacts;
\item present the results concerning the {\it perturbative} correction
method for hypercubic artefacts and show the quality checks;
\item present the results concerning the {\it non-perturbative} method
to correct for hypercubic artefacts and show the quality checks, propose
two types of fits, the sliding windows fit (SWF) and the one-window fit (OWF).
\item We will perform the running fit on the output of all the previously mentioned
hypercubic corrected data, compare the results for the $g^2 \VEV{A^2}$
for all these fits, check the scaling of $g^2 \VEV{A^2}$;
\item check the scaling of the $\propto a^2p^2$ artefacts;
\item check the lattice spacing dependence of $Z_q^{\mathrm {pert}}$, the
perturbative contribution to $Z_q$, $\propto g^2$;
\item study the range of variation of our results
for $g^2 \VEV{A^2}$ from the egalitarian method with one/sliding window(s),
with varying fitting ranges, and the perturbative method with two
realisations. We extract from there the systematic uncertainty.
\item We will join in one plot the three $\beta$'s and perform the fit of the
running,
\item compare the resulting $g^2 \VEV{A^2}$ with the
one extracted from the strong coupling constant and with quenched
estimates, and test our procedure according to the Martinelli--Sachrajda
criterion~\cite{Martinelli:1996pk}.
\item conclude.
\end{itemize}
\section{The lattice computations}
\label{sec:lat}
The results presented here are based on the gauge field
configurations generated by the European Twisted Mass Collaboration
(ETMC) with the tree-level improved Symanzik gauge action~\cite{Weisz:1982zw}
and the twisted mass fermionic action~\cite{Frezzotti:2000nk} at
maximal twist.
\subsection{The lattice action}
A very detailed discussion about the twisted mass and tree-level improved
Symanzik gauge actions, and about the way they are implemented by ETMC, can be
found in
refs.~\cite{Boucaud:2007uk,Boucaud:2008xu,Urbach:2007rt,Dimopoulos:2008sy}.
Here, for the sake of completeness, we will present a brief reminder of the
twisted action and the run parameters for the gauge configurations that will be
exploited in the present work (see tab.~\ref{setup}).
The Wilson twisted mass fermionic lattice action for two flavours of mass
degenerate quarks reads (in the so-called twisted
basis~\cite{Frezzotti:2000nk,Frezzotti:2003ni})
\begin{align}
\label{eq:Sf}
\begin{split}
S_\mathrm{tm}^{\rm F} = &\, a^4\sum_x\Bigl\{
\bar\chi_x\left[D_{\rm W}+ m_0 + i\gamma_5\tau_3\mu_q
\right]\chi_x\Bigr\}\, , \\
& D_{\rm W} = \frac{1}{2}\gamma_\mu\left(\nabla_\mu+\nabla_\mu^{*}\right)
-\frac{ar}{2}\nabla_\mu\nabla_\mu^{*} \, ,
\end{split}
\end{align}
where $m_0$ is the bare untwisted quark mass and $\mu_q$ the bare twisted quark
mass, $\tau_3$ is the third Pauli matrix acting in flavour space and $r$ is
the Wilson parameter, which is set to $r=1$ in the simulations. The twisted
Dirac operator is defined as
\begin{eqnarray}\label{eq:DIRtw}
D_{\rm tw} \equiv D_{\rm W} + m_0 + i\gamma_5\tau_3\mu_q \ .
\end{eqnarray}
The operators
$\nabla_\mu$ and $\nabla_\mu^{*}$ stand for the gauge covariant nearest
neighbour forward and backward lattice derivatives:
\begin{eqnarray}\label{def}
\nabla_\mu(x,y) &\equiv&\left[ \delta_{y,x+\hat \mu}\,U_\mu(x) - \delta_{x,y}\right] \ , \nonumber \\
\nabla^\star_\mu(x,y)&=&\left[\delta_{x,y} -
\delta_{y,x-\hat \mu}\,U^\dagger_\mu(x-\hat \mu)\right] \ , \nonumber \\
D_\mu \equiv \frac 12\left [\nabla_\mu(x,y)+\nabla^\star_\mu(x,y)\right]&=&\frac 12 \left[\delta_{y,x+\hat \mu}\,U_\mu(x) -
\delta_{y,x-\hat \mu}\,U^\dagger_\mu(x-\hat \mu)\right] \ ;
\end{eqnarray}
defining the operator $D_\mu$ as the discretized covariant derivative.
The bare quark mass $m_0$
is related as usual to the so-called hopping parameter $\kappa$, by
$\kappa=1/(8+2am_0)$. Twisted mass fermions are said to be at {\em maximal
twist} if the bare untwisted mass is tuned to its critical value,
$m_\mathrm{crit}$. This is in practice done by setting the so-called untwisted
PCAC mass to zero.
In the gauge sector the tree-level Symanzik improved
gauge action (tlSym)~\cite{Weisz:1982zw} is applied. Besides the
plaquette term $U^{1\times1}_{x,\mu,\nu}$, this action also includes rectangular $(1\times2)$ Wilson loops
$U^{1\times2}_{x,\mu,\nu}$. It reads
\begin{eqnarray}
\label{eq:Sg}
S_g = \frac{\beta}{3}\sum_x\Biggl( b_0\sum_{\substack{
\mu,\nu=1\\1\leq\mu<\nu}}^4\{1-{\rm Re}\,{\rm Tr}(U^{1\times1}_{x,\mu,\nu})\}\Bigr.
\Bigl.+
b_1\sum_{\substack{\mu,\nu=1\\\mu\neq\nu}}^4\{1
-{\rm Re}\,{\rm Tr}(U^{1\times2}_{x,\mu,\nu})\}\Biggr)\, ,
\end{eqnarray}
where $\beta \equiv 6 / g_0^2$, $g_0$ being the bare lattice coupling, and $b_1$ is
set to $-1/12$ (with $b_0=1-8b_1$ as dictated by the requirement of
continuum-limit normalization). Note that at $b_1=0$ this action becomes the
usual Wilson plaquette gauge action. The run parameters for $\beta$ and $\mu_q$
of the gauge configurations that will be exploited in the following can be
found in tab.~\ref{setup}.
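The normalization stated above can be verified with exact rational arithmetic (a minimal sketch using only the values $b_1=-1/12$ and $b_0=1-8b_1$ quoted in the text):

```python
from fractions import Fraction

b1 = Fraction(-1, 12)      # tlSym value
b0 = 1 - 8 * b1            # continuum-limit normalization requires b0 + 8 b1 = 1

# the usual Wilson plaquette action is recovered at b1 = 0
b0_wilson = 1 - 8 * Fraction(0)
```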
\begin{table}[ht]
\centering
\begin{tabular}{||c|c|c|c|c|c|c||}
\hline
\hline
$\beta$ & $a$ fm & $a^{-1}$ GeV &$a \mu_q$ & Volume & \# confs
\\ \hline
$3.9$ & 0.083&2.373&
0.004
&
$24^3\times48$ &
$100$
\\ \hline
$4.05$ & 0.0675&2.897&
0.006 &
$24^3\times48$
&
$100$
\\ \hline
$4.2$ & 0.055&3.58
& 0.002 &
$24^3\times48$
&
$100$
\\ \hline
\hline
\end{tabular}
\caption{Run parameters of the data from the ETMC collaboration exploited in the
present study of $Z_q$. The second column lists the lattice spacings used in this
study; it is easy to convert our results to other lattice-spacing determinations.}
\label{setup}
\end{table}
\subsection{The computation of the quark propagator}
Computing the renormalisation constants for the quark propagator and the
operators containing quark fields requires first computing the gauge-fixed
2-point quark Green functions on the lattice. We exploited ETMC
gauge configurations~\cite{Baron:2009wt} obtained for $\beta=3.9$, $\beta=4.05$ and $\beta=4.2$.
After checking that $Z_q$ depends only weakly on the dynamical and valence
quark masses, we decided to use only one mass for every $\beta$ (see
table~\ref{setup}). The lattice gauge configurations are transformed to Landau
gauge by minimising the following functional of the SU(3) matrices, $U_\mu(x)$,
\begin{eqnarray}
F_U[g] = \mbox{\rm Re}\left[ \sum_x \sum_\mu \hbox{Tr}\left(1-\frac{1}{N}g(x)U_\mu(x)g^\dagger(x+\mu) \right) \right] \ ,
\end{eqnarray}
with respect to the gauge transform $g$, by applying a combination of the
overrelaxation algorithm and Fourier acceleration~\footnote{We stop when
$|\partial_\mu A_\mu|^2 <10^{-11}$ and when the spatial integral of $A_0$ is
constant in time to better than $10^{-6}$.}.
We compute quark propagators with a local source taken at a random point $x_0$
on the lattice, in order to reduce the correlation between successive
configurations:
\begin{eqnarray}
S(y,x_0)^{a,\alpha;b_0,\beta_0}_{j} = D_{\rm tw}^{-1}(y,x)^{a,\alpha;b,\beta;i,j}
so^{b,\beta}_{j}(x,x_0) \ , \ \qquad so^{b,\beta}_{j}(x,x_0) = \delta_{x,x_0} \delta_{b,b_0}
\delta_{\beta,\beta_0} \ ;
\end{eqnarray}
where the equation is solved for every
$b_0=1,\ldots,3$ and $\beta_0=1,\ldots,4$, and $j=u,d$ labels the isospin.
We then perform the Fourier transform, which yields a $12\times12$ complex matrix
\begin{eqnarray}\label{Sp}
S_i(p) \equiv \sum_{y} e^{-i p (y-x_0)} \,S_i(y,x_0) \ .
\end{eqnarray}
This is the Fourier transform of the quark incoming to the source (the arrow
pointing towards the source). The Fourier transform of the quark outgoing from
the source is
\begin{eqnarray}
S^{\dagger 5}_{i}(p) = \gamma_5 S^{\dagger}_{\bar i}(p) \gamma_5 \ ,
\end{eqnarray}
where $\bar u\equiv d; \bar d\equiv u$.
From~\eq{Zqdef} the lattice quark renormalisation constant $Z_q$ is
given by
\begin{eqnarray}\label{Zq}
Z_q(p)\equiv \frac {-i} {12\,\tilde p^2} \;<{ \rm Tr} [S^{-1}(p) \,\tilde p \!\!\!/]> \ ,
\end{eqnarray}
where $<...>$ denotes the average over the chosen ensemble of thermalised
configurations and $\tilde{p}_\mu = \frac{1}{a} \sin ap_\mu$. The reason for using
$\tilde{p}_\mu$ instead of $p_\mu$ is to get $Z_q=1$ for a free fermion,
or in other words, to eliminate hypercubic artefacts at tree level.
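The tree-level statement can be checked explicitly: for a free massless fermion with inverse propagator $i\,\tilde{p}\!\!\!/$ (the sign convention is an assumption here, and the chiral representation of the Euclidean gamma matrices below is one possible choice), \eq{Zq} yields exactly $Z_q=1$:

```python
import numpy as np

# Euclidean gamma matrices, chiral representation: {g[mu], g[nu]} = 2 delta_{mu,nu}
sig = [np.array([[0, 1], [1, 0]]),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]])]
zero2, id2 = np.zeros((2, 2)), np.eye(2)
g = [np.block([[zero2, -1j * s], [1j * s, zero2]]) for s in sig]
g.append(np.block([[zero2, id2], [id2, zero2]]))

a = 0.1                                  # lattice spacing, illustrative
p = np.array([0.7, -0.3, 1.1, 0.5])      # some momentum, illustrative
pt = np.sin(a * p) / a                   # lattice momentum p~_mu = sin(a p_mu)/a
slash = sum(pt[mu] * g[mu] for mu in range(4))

# free massless inverse propagator i p~-slash, promoted to colour x Dirac (12x12)
Sinv = np.kron(np.eye(3), 1j * slash)
Zq = (-1j / (12 * pt @ pt)) * np.trace(Sinv @ np.kron(np.eye(3), slash))
```

The trace over the $12\times12$ colour $\times$ Dirac space gives $12\,i\,\tilde{p}^2$, so the prefactor $-i/(12\,\tilde{p}^2)$ returns exactly 1 for any momentum.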
\subsection{The method of non-perturbative hypercubic $H(4)$ correction}
\label{NPhyp}
The lattice estimates of the quark field renormalisation constant
and the vertex functions lead to dimensionless quantities that, because of
general dimensional arguments, depend on the strong interaction scale
$\Lambda_{\rm QCD}$ and on the lattice momentum $a\,{p}_\mu$.
We have computed the Fourier transforms for the following momenta:
\begin{eqnarray}\label{pmu}
p_i = \frac{2\pi n_i}{N_L a} \qquad n_i=-N_L/4,\cdots,N_L/4 \ , \ \qquad p_4= \frac{\pi (2
n_4 + 1)}{N_T a}\qquad n_4=-N_T/4,\cdots,N_T/4 \ ; \nonumber \\
\end{eqnarray}
where $p_i$, $i=1,\ldots,3$, are the spatial momenta and $p_4$ the time-like one. The
antiperiodic boundary condition in the time direction explains the $\pi (2
n_4 + 1)$ factor.
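In lattice units, the two grids of \eq{pmu} can be sketched as follows ($N_L=24$, $N_T=48$ as in tab.~\ref{setup}); note that the antiperiodic temporal momenta never vanish:

```python
import math

NL, NT = 24, 48   # spatial and temporal extents of the exploited lattices

# momenta in lattice units a*p: periodic BC in space, antiperiodic BC in time
ap_space = [2 * math.pi * n / NL for n in range(-NL // 4, NL // 4 + 1)]
ap_time  = [math.pi * (2 * n + 1) / NT for n in range(-NT // 4, NT // 4 + 1)]
```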
The lattice action, \eq{eq:Sf} and \eq{eq:Sg}, is invariant under the
hypercubic group $H(4)$. However the boundary conditions and the difference
between the spatial size $N_L$ and the time-like one $N_T=2 \,N_L$ generate
finite volume corrections to the hypercubic symmetry. Only the cubic symmetry
is exact. We define cubic invariant quantities and compute their average over
the cubic group. We thus have a set of measurements for every orbit of the cubic
group, labelled by
\begin{eqnarray}\label{cubic}
\left( \sum_{i=1}^{3} p_i^m, \;\; p_4 \right) \ ,
\end{eqnarray}
where $m=2,4,6$.
A first kind of artefacts that can be systematically
cured~\cite{Becirevic:1999uc,Boucaud:2003dx,deSoto:2007ht} is that due to the breaking of
the rotational symmetry of the Euclidean space-time when using a hypercubic
lattice, where this symmetry is restricted to the discrete hypercubic $H(4)$
isometry group. However, as already mentioned, we also have finite volume
effects which break $H(4)$. We therefore need to adapt the
method. One idea could be to generalise it to a cubic symmetry. This turns out not
to be practical because there are too few cubic symmetric orbits for a given $\vec{p}^{\;2}$.
We choose another approach motivated by the fact that the lattice action is
indeed $H(4)$ symmetric and that finite volume effects are expected to be
small at large momenta
compared to finite lattice spacing artefacts. We therefore use a slight
variation of the method
described in~\cite{Becirevic:1999uc,deSoto:2007ht} {\it : we apply it to the
cubic orbits of~\eq{cubic}}, keeping track of $p_4$ which is not an $H(4)$
symmetric quantity.
Defining the $H(4)$ invariants
\begin{eqnarray}
p^{[4]}=\sum_{\mu=1}^{4} p_\mu^4 \ , \qquad p^{[6]}=\sum_{\mu=1}^{4} p_\mu^6 \ ,
\qquad p^{[8]}=\sum_{\mu=1}^{4} p_\mu^8 \ ;
\end{eqnarray}
we note that every cubic orbit~\eq{cubic} has a well-defined set of
values for these $H(4)$ invariants, although several cubic orbits may share the
same ones. We neglect $p^{[8]}$, which plays no role on small
lattices.
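The bookkeeping of cubic orbits and their $H(4)$ invariants can be sketched in a few lines (a minimal Python illustration; the lattice sizes, the mode-number range and all function names are our own illustrative choices, not taken from the analysis code used here):

```python
import math
from collections import defaultdict

N_L, N_T = 24, 48          # illustrative lattice sizes with N_T = 2 N_L

def momentum(n):
    """Physical momentum (units of 1/a) for mode numbers n = (n1, n2, n3, n4):
    periodic in space, antiperiodic in time."""
    return [2*math.pi*ni/N_L for ni in n[:3]] + [math.pi*(2*n[3] + 1)/N_T]

def cubic_label(p):
    # (sum_i p_i^2, sum_i p_i^4, sum_i p_i^6, p_4): invariant under the cubic group
    return tuple(round(sum(x**m for x in p[:3]), 10) for m in (2, 4, 6)) \
           + (round(p[3], 10),)

def h4_invariants(p):
    # p^2, p^[4], p^[6]: invariant under the full hypercubic group H(4)
    return tuple(round(sum(x**m for x in p), 10) for m in (2, 4, 6))

# group a small set of momenta into cubic orbits
orbits = defaultdict(list)
for n1 in range(3):
    for n2 in range(3):
        for n3 in range(3):
            for n4 in range(3):
                p = momentum((n1, n2, n3, n4))
                orbits[cubic_label(p)].append(p)

# every cubic orbit carries a single, well-defined set of H(4) invariants,
# while distinct cubic orbits may share the same set
h4_of_orbit = {lab: {h4_invariants(p) for p in ps} for lab, ps in orbits.items()}
```

The single-valuedness of `h4_of_orbit` simply reflects the fact that the three spatial power sums determine the multiset of spatial components, and $p_4$ is fixed within a cubic orbit.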
We can thus define the quantity $Z_q(a p_\mu)$ averaged over
the cubic orbits as
\begin{eqnarray}\label{Q246}
Z_q^{\mathrm {latt}}(a^2\,p^2, a^4p^{[4]}, a^6 p^{[6]},
ap_4, a^2\Lambda_{\rm QCD}^2) \ .
\end{eqnarray}
We expect the hypercubic effects to be $O(a^2)$ lattice artefacts and therefore
to be expandable in powers of $a^2$. This would of course trivially be the
case if $a^2\,p^2 \ll 1$, since then, for example, $\epsilon=a^2 p^{[4]}/p^2 \le
a^2\,p^2 \ll 1$ (we deliberately consider this quantity, which will turn out to be
dominant), and a Taylor expansion of \eq{Q246} would ensure that the artefacts are
$O(a^2)$. However, aiming at measuring $Z_q$ at large momentum, we go up to
$a^2\,p^2 \sim 3 - 4$. We will assume, and then check, that $Z_q^{\mathrm
{latt}}$ in \eq{Q246} can be Taylor-expanded around $p^{[4]}=0$ even for
$\epsilon$ significantly larger than 1: %
\begin{eqnarray}\label{eq:p4expan}
Z_q^{\mathrm {latt}}(a^2\,{p}^2, a^4p^{[4]}, a^6 p^{[6]}, ap_4, a^2\Lambda_{\rm
QCD}^2) &=& Z_q^{\mathrm {hyp\_corrected}}(a^2p^2, ap_4, a^2\Lambda_{\rm QCD}^2)
\nonumber \\
&+&
R(a^2p^2,a^2\Lambda_{\rm QCD}^2) \, a^2 \frac{p^{[4]}}{p^2} \ + \
\cdots
\end{eqnarray}
where
\begin{eqnarray}\label{eq:deriv}
\left.R(a^2p^2,a^2\Lambda_{\rm QCD}^2) = \frac{dZ_q^{\mathrm
{latt}}\left(a^2p^2 ,0,0,0,a^2\Lambda_{\rm
QCD}^2\right)}{d\epsilon}\right|_{\epsilon=0}.
\end{eqnarray}
Terms proportional to $p^{[6]}$, $\left(p^{[4]}\right)^2$, etc., as well as
terms breaking $H(4)$, can of course be added to the formula in the same way.
However, we have found that our data are not accurate enough
to allow fitting them, and that using only \eq{eq:p4expan} and \eq{eq:deriv} already
gives satisfactory fits.
We now describe how we fit the functions appearing on the r.h.s.\ of
\eq{eq:p4expan}.
\subsubsection{The sliding window fit (SWF)}
\label{sec:slid}
We consider all values of $a^2p^2$ in a range $a^2p_{\mathrm{min\_in}}^2 \le
a^2p^2 \le a^2p_{\mathrm{max\_in}}^2$, each of which contains a set of cubic
orbits. We choose an integer width $w$ (we will use $w=10$ in numerical
applications) and define a
window as the set of $2w + 1$ values of $a^2p^2$ around a central value
$a^2p^2_{\mathrm {center}}$
($w$ contiguous values below $a^2p^2_{\mathrm {center}}$ and as many above).
There are as many
windows as there are values of $a^2p^2_{\mathrm {center}}$ for which the whole
window lies in the range $[a^2p_{\mathrm{min\_in}}^2,a^2p_{\mathrm{max\_in}}^2]$;
this defines the
range of interest $a^2p_{\mathrm{min\_out}}^2 \le a^2p^2_{\mathrm {center}} \le
a^2p_{\mathrm{max\_out}}^2$.
For every window we use in the fit all cubic orbits corresponding to the values
of $a^2p^2$ in the window. We fit, according to \eq{eq:p4expan}, $2\, w+2$
parameters: the $2\, w+1$ values of $ Z_q^{\mathrm
{hyp\_corrected}}(a^2p^2,a^2\Lambda_{\rm QCD}^2)$ within the window, and one
common value of $R(a^2p^2_{\mathrm {center}},a^2\Lambda_{\rm QCD}^2)$. The
dependence on these parameters is linear, so the fit amounts to inverting a
matrix. Note that each value of $a^2p^2$ lies within $2\, w+1$ different
windows, so $ Z_q^{\mathrm
{hyp\_corrected}}(a^2p^2,a^2\Lambda_{\rm QCD}^2)$ is fitted $2\, w+1$ times;
we keep as the final result only the fit in which $a^2p^2$ is the center of the
window. At the end of this procedure, for every $a^2p^2_{\mathrm {center}}$ in the range
$a^2p_{\mathrm{min\_out}}^2 \le a^2p^2_{\mathrm {center}} \le
a^2p_{\mathrm{max\_out}}^2$ we have, as expected, a fitted value for both
functions on the r.h.s.\ of \eq{eq:p4expan}.
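Since the fit is linear in its $2w+2$ parameters, each window reduces to a small least-squares problem. The sketch below illustrates this (the data layout, the synthetic numbers and the function name are our own assumptions, not the actual analysis code):

```python
import numpy as np

def sliding_window_fit(data, w=10):
    """Sliding-window fit of  Z^latt = Z_hyp(a^2 p^2) + R * eps,  with
    eps = a^2 p^[4]/p^2.  `data[j]` is the list of (eps, Z) pairs for the
    cubic orbits sharing the j-th value of a^2 p^2 (hypothetical layout).
    Returns {center index: (Z_hyp at the center, R)} -- only the
    window-center value of Z_hyp is kept, as described in the text."""
    out = {}
    for c in range(w, len(data) - w):
        rows, rhs = [], []
        for k, j in enumerate(range(c - w, c + w + 1)):
            for eps, z in data[j]:
                row = np.zeros(2*w + 2)
                row[k] = 1.0            # selects Z_hyp for this a^2 p^2 value
                row[-1] = eps           # one slope R common to the window
                rows.append(row)
                rhs.append(z)
        sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
        out[c] = (sol[w], sol[-1])      # 2w+2 parameters; keep center and R
    return out
```

On synthetic data generated exactly from the model, the fit recovers the input $Z_q^{\mathrm{hyp\_corrected}}$ values and the slope $R$ to machine precision.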
We can then study the function $R(a^2p^2,a^2\Lambda_{\rm QCD}^2)$.
As will be reported later (see for instance \fig{fig:slopes}) we find that
a reasonable approximation for $R$ is
\begin{eqnarray}\label{eq:ow}
R(a^2p^2,a^2\Lambda_{\rm QCD}^2)= c_{a2p4} + c_{a4p4} \, a^2p^2 \ .
\end{eqnarray}
This observation motivates the one-window fit.
\subsubsection{The one window fit (OWF)}
\label{sec:one}
We tune $w$ such that only one (or at worst two) windows are included in the
range $[a^2p_{\mathrm{min\_in}}^2,a^2p_{\mathrm{max\_in}}^2]$. We then perform
the fit for that window according to the equation
\begin{eqnarray}\label{eq:owfexpan}
Z_q^{\mathrm {latt}}(a^2\,{p}^2, a^4p^{[4]}, a^6 p^{[6]}, ap_4, a^2\Lambda_{\rm
QCD}^2) &=& Z_q^{\mathrm {hyp\_corrected}}(a^2p^2,a^2\Lambda_{\rm QCD}^2) +
c_{a2p4} \, a^2 \frac{p^{[4]}}{p^2} \nonumber \\
&+& c_{a4p4} \ a^4 p^{[4]} \ .
\end{eqnarray}
This fit has $2 w +3$ parameters: the values of
$Z_q^{\mathrm {hyp\_corrected}}(a^2p^2,a^2\Lambda_{\rm QCD}^2)$ for all
$a^2p^2$ in the window, i.e.\ in the range
$[a^2p_{\mathrm{min\_in}}^2,a^2p_{\mathrm{max\_in}}^2]$ (if the number of values
in the range is even, one value is dropped), together with the coefficients
$c_{a2p4}$ and $c_{a4p4}$.
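The one-window fit is again a single linear least-squares problem, now with two global artefact coefficients instead of one slope per window. A minimal sketch (data layout, names and synthetic numbers are our own assumptions):

```python
import numpy as np

def one_window_fit(data):
    """One-window fit of
    Z^latt = Z_hyp(a^2 p^2) + c_a2p4 * a^2 p^[4]/p^2 + c_a4p4 * a^4 p^[4]
    over the whole momentum range.  `data[j]` lists (a2p4_over_p2, a4p4, Z)
    for the cubic orbits at the j-th a^2 p^2 value (hypothetical layout).
    Returns (array of Z_hyp over the bins, c_a2p4, c_a4p4)."""
    nb = len(data)
    rows, rhs = [], []
    for j, orbit_list in enumerate(data):
        for e1, e2, z in orbit_list:
            row = np.zeros(nb + 2)
            row[j] = 1.0      # Z_hyp for this a^2 p^2
            row[-2] = e1      # coefficient of c_a2p4
            row[-1] = e2      # coefficient of c_a4p4
            rows.append(row)
            rhs.append(z)
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return sol[:nb], sol[-2], sol[-1]
```

The two slope columns are distinguishable because $a^4 p^{[4]} = (a^2 p^{[4]}/p^2)\,a^2p^2$, and $a^2p^2$ varies across the window.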
\subsection{Other lattice artefacts}
There are ultraviolet artefacts which are functions of $a^2\,p^2$ only; they are
insensitive to hypercubic biases and are not corrected by the above-mentioned
method. They will be corrected simply by including a term linear in $a^2p^2$ in
the final fit and checking that its coefficient scales correctly between the
different lattice spacings.
To take into account the space-time anisotropy, which is a finite volume
artefact, we can check the dependence of $Z_q$ on an anisotropic quantity such
as $p_4^2-\vec{p}^{\,2}/3$. We did not see any sizeable effect of this
parameter.
Finite volume artefacts are also studied, as usual, by comparing runs at
different volumes. We expect a small effect at large momenta, and our checks
confirmed this, as does the analysis of~\cite{Blossier:2010ky}. We will not
consider this artefact any further.
\section{INTRODUCTION}
\label{sect:intro}
The realization of the supersolid form of matter, in which
superfluid and crystalline order co-exist~\cite{andreev,legget},
in ultra-cold bosonic atoms in optical lattices is at the
forefront of research. After the claim by Kim {\it et al.}~\cite{chan}
of observing the supersolid phase in solid $^4$He, research on
this exotic phase of matter has advanced
substantially. The successful observation of the superfluid (SF) to
Mott insulator (MI) transition in ultra-cold bosonic atoms in
$3D$~\cite{bloch} and subsequently in $2D$~\cite{spielman} and
$1D$~\cite{stoferle} has established ultra-cold systems as an
ideal tool for understanding condensed matter phenomena. In order to achieve the
supersolid form of
matter, characterized by the co-existence of superfluid
and crystalline order, it is essential for the system to have long
range interactions. The remarkable experimental realization of a BEC
of Cr atoms~\cite{pfau}, which have a fairly large dipole moment, has
raised expectations of observing the supersolid phase in
optical lattice experiments.
In recent years there has been substantial theoretical evidence for
the supersolid phase in various lattice
geometries~\cite{batrouniprl,mishrass,
kedar,arun,sengupta,wessel,scarola}. However, the experimental
search for the supersolid in ultra-cold atomic systems in optical
lattices still remains a challenge. The real experimental situation
differs from the homogeneous system usually considered in
theoretical calculations: in experiments, the translational symmetry
of the lattice is broken by the presence of an external harmonic
trap potential (magnetic or optical) and various quantum phases
co-exist
~\cite{wessel1,svistunov,bergkvist,pollet,smita,mitra,spiel,nandini,suna,batrounimi}.
Hence, it is essential to understand the signatures of the
supersolid phase in the presence of such a trap.
In this paper we consider a system of ultra-cold bosonic
atoms with long-range interactions in a one-dimensional
optical lattice with harmonic confinement. The Hamiltonian for
this kind of system is the extended Bose-Hubbard
model,
\begin{eqnarray}
H &=&-t\sum_{<i,j>}(a_{i}^{\dagger}a_{j}+\mathrm{H.c.})
+\frac{U}{2}\sum_{i} n_{i}(n_{i}-1)\nonumber\\
& &\mbox{} +V\sum_{<i,j>}n_in_j+V_{\text{T}}\sum_i\,r_i^2n_i.
\label{eq:ham}
\end{eqnarray}
Here, $t$ is the hopping amplitude between nearest-neighbor
sites $\langle i,j \rangle$, $a_i^{\dagger}(a_i)$ is the bosonic creation
(annihilation) operator obeying the commutation relation
$[a_i,a_j^\dagger] = \delta_{i,j}$, and $n_i = a_i^{\dagger}a_i$ is
the number operator. $U$ and $V$ are the on-site and the nearest-neighbor
interactions, respectively. $V_{\text{T}}$ is the strength of
the external trap potential and $r_i$ is the distance from the trap
center. We rescale all energies in units of the hopping amplitude, setting
$t= 1$, which makes the Hamiltonian and other quantities dimensionless.
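On very small chains the model can be checked by brute-force exact diagonalisation (a toy Python sketch, not the FS-DMRG used in this work; the sizes, the local truncation and the function name are our own illustrative choices):

```python
import itertools
import numpy as np

def ebh_ground_state(L, N, nmax=2, t=1.0, U=10.0, V=3.0, VT=0.0):
    """Ground state of the extended Bose-Hubbard chain (open boundaries)
    in the fixed-N sector, truncated to nmax bosons per site."""
    basis = [s for s in itertools.product(range(nmax + 1), repeat=L)
             if sum(s) == N]
    index = {s: k for k, s in enumerate(basis)}
    r = [i - (L - 1)/2 for i in range(L)]        # distance from the trap centre
    H = np.zeros((len(basis), len(basis)))
    for k, s in enumerate(basis):
        # diagonal part: on-site U, nearest-neighbour V, harmonic trap
        H[k, k] = sum(U/2*n*(n - 1) + VT*r[i]**2*n for i, n in enumerate(s))
        H[k, k] += V*sum(s[i]*s[i + 1] for i in range(L - 1))
        # hopping  -t (a_i^dag a_{i+1} + H.c.)
        for i in range(L - 1):
            if s[i] < nmax and s[i + 1] > 0:
                sp = list(s); sp[i] += 1; sp[i + 1] -= 1
                amp = -t*np.sqrt((s[i] + 1)*s[i + 1])
                kp = index[tuple(sp)]
                H[kp, k] += amp
                H[k, kp] += amp
    w, v = np.linalg.eigh(H)
    return w[0], v[:, 0], basis
```

Dense diagonalisation is of course limited to a handful of sites; it is only meant as a cross-check of the Hamiltonian matrix elements.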
The homogeneous version of this model (i.e., without the external
trap) has been studied earlier using several techniques in one
dimension~\cite{mishrass,pai,whiteprb,batrouniprl,kashurnikov,
batrouni95,niyaz,kuhner,iskin}. The prediction of an accurate phase
diagram using the quantum Monte Carlo method~\cite{batrouniprl} and
DMRG~\cite{mishrass} has revealed the physical conditions required
to stabilize a supersolid phase. It has been shown that the
supersolid phase is obtained when:
\begin{enumerate}
\item The total density of the system is incommensurate with the lattice.
\item The on-site ($U$) and the nearest-neighbor ($V$) interactions are
fairly large compared to the hopping amplitude ($t$).
\item The condition $U<2V$ is satisfied.
\end{enumerate}
For a given set of interaction parameters, a homogeneous system exhibits a
uniform phase determined by the global chemical potential.
The phase diagram of the model in Eq.~\ref{eq:ham} in the homogeneous
limit, i.e., $V_{\text{T}}=0$, exhibiting the different possible phases
including the supersolid phase, is shown in
Fig.~\ref{fig:fig1}~\cite{mishrass}.
\begin{figure}[ht]
\centering
\includegraphics[width = 3.4in, angle = 0, clip = true]
{fig1.ps}
\caption{(Color online) The phase diagram of the model in
Eq.~\ref{eq:ham}
in the homogeneous limit, i.e., $V_{\text{T}}=0$, for $U= 10.0$, in the
$\mu$--$V$ plane.}
\label{fig:fig1}
\begin{center}
\includegraphics[angle = 0, width = 3.4in, clip = true]
{fig2.ps}
\end{center}
\caption{(Color online) Homogeneous phase diagram for the
model in Eq.~\ref{eq:ham} showing canonical trajectories. For a given
value of $V$, the presence of an external trap allows all phases
that fall on the vertical line joining the canonical trajectory to
the $V$-axis, for example the lines AB and CD in the figure.}
\label{fig:fig1a}
\end{figure}
In the presence of an external trap, the role of the local chemical
potential becomes important, as demonstrated in our earlier work
on the Bose-Hubbard model~\cite{suna}. Since the local chemical
potential varies from the center of the trap to the edges, the system
exhibits different phases simultaneously. An earlier DMRG study of
the model given in Eq.~\ref{eq:ham} could not confirm the presence of
the supersolid phase in the system~\cite{urba}. A recent study of
this model in two dimensions using mean-field theory predicts that
noise correlations could be a valid signature for separating the
supersolid phase from the other ground state phases~\cite{scarola}. In this
paper we revisit the extended Bose-Hubbard model with the external harmonic
trap potential and search for experimental signatures of the different
ground state phases, in particular the supersolid phase.
The remaining part of the paper is organized as follows. In
Sec.~\ref{sect:meth}, we discuss the method of our calculation using the
finite size density matrix renormalization group (FS-DMRG)
technique. The results along with discussions are presented in
Sec.~\ref{sect:res_disc} with experimental signatures for the
different ground state phases in Sec.~\ref{sect:exp_sig} and we present
our conclusions in Sec.~\ref{sect:concl}.
\section{METHOD OF CALCULATION}
\label{sect:meth}
To obtain the ground state of model~(\ref{eq:ham}) for the system of
$N$ bosons on a lattice of length $L$, we use the FS-DMRG method
with open boundary conditions~\cite{white,schollwock}. This method
has been widely used to study the Bose-Hubbard
model~\cite{pai,kuhner,whiteprb,schollwock,urba}. We have considered
six bosonic states per site and the weights of the states neglected
in the density matrix formed for the left or the right blocks are
less than $10^{-6}$~\cite{pai}. In order to improve the convergence
of the results, the finite-size sweeping procedure as given
in~\cite{white,pai} has been used for every length. Using the ground
state wave function $|\psi_{LN} \rangle$ and energy $E_L(N)$, we
calculate the following physical quantities and use them to identify
the different phases.
The on-site local number density $\langle n_i \rangle$, defined as,
\begin{equation}
\langle n_i \rangle= \langle\psi_{LN}|n_i|\psi_{LN} \rangle,
\label{eq:ni}
\end{equation} gives the local density distribution.
The fluctuation in the local number density, $\kappa_i$, which is finite in
the SF phase, is calculated using the relation
\begin{equation}
\kappa_i =\langle{n_i^2}\rangle -{\langle{n_i}\rangle}^2,
\label{eq:kappa}
\end{equation}
and finally the existence of the CDW order is confirmed by
calculating the structure factor:
\begin{equation}
S(k)=\frac{1}{L^2}\sum_{i,j}e^{i\,k\,(i-j)}\langle{n_in_j}\rangle.
\label{eq:sk}
\end{equation}
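Given a ground-state vector in an occupation-number basis (for instance from a small exact diagonalisation), the three diagnostics above are straightforward to evaluate, since all of them are diagonal in that basis (an illustrative sketch; the function name and data layout are our own):

```python
import numpy as np

def observables(psi, basis):
    """Local density <n_i>, on-site fluctuation kappa_i = <n_i^2> - <n_i>^2,
    and structure factor S(k) = (1/L^2) sum_{ij} e^{ik(i-j)} <n_i n_j>,
    for a state expanded in an occupation-number basis (each basis element
    is a tuple of site occupations)."""
    L = len(basis[0])
    prob = np.abs(np.asarray(psi))**2
    occ = np.array(basis, dtype=float)            # shape (dim, L)
    n = prob @ occ                                # <n_i>
    kappa = prob @ occ**2 - n**2                  # <n_i^2> - <n_i>^2
    C = (occ * prob[:, None]).T @ occ             # correlation matrix <n_i n_j>
    ks = 2*np.pi*np.arange(L)/L
    S = np.empty(L)
    for m, k in enumerate(ks):
        phase = np.exp(1j*k*np.arange(L))
        S[m] = np.real(phase @ C @ phase.conj())/L**2
    return n, kappa, S
```

A period-2 density pattern produces a peak of $S(k)$ at $k=\pi$, which is the CDW signal used below.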
In our calculations, we have considered a system of length $L=140$ and varied
$N$ from $30$ to $140$. In our previous work on the homogeneous
extended Bose-Hubbard model~\cite{mishrass}, we had considered a
fixed value of the on-site interaction, $U=10$, and varied the nearest-neighbor
interaction strength $V$ from $0$ to $10$. We choose the
same range of parameters here as well, since the homogeneous phase
diagram for this range, as shown in Fig.~\ref{fig:fig1}, exhibits most of
the interesting phases of this model. The strength of the external
confining trap potential is fixed at $V_{\text{T}}=0.008$.
\section{Results and Discussion}
\label{sect:res_disc} We begin with a summary of the phase diagram
for the homogeneous extended Bose-Hubbard model, which has been
studied recently~\cite{batrouniprl,mishrass} for a wide range of
densities and interaction parameters, namely the on-site
interaction $U$ and the nearest-neighbor interaction $V$. The phase
diagram for a typical value of the on-site interaction, say $U=10$, is
shown in Fig.~\ref{fig:fig1}~\cite{mishrass}. The phase diagram consists of
gapped as well as gapless phases. The gapless phases include the superfluid
phase, the supersolid phase where superfluidity and charge
density wave order co-exist, and the solitonic phases. The gapped
phases are (i) the Mott insulator phase with $\rho=1$ for $V < V_C
\sim 5.4$, (ii) the charge density wave phase
CDW-II (where every other site is doubly occupied, i.e.,
$|2~0~2~0~\cdots\rangle$) with average density $\rho=1$ for $V> V_C\sim
5.4$, and (iii) the CDW-I phase (alternate sites are singly occupied, i.e., the boson
density varies as $|1~0~1~0~\cdots\rangle$) with average density
$\rho=1/2$ for $V > V_C \sim 3.0$. The gap vanishes
when doping above or below these gapped phases.
For example, doping below half-filling ($\rho=1/2$) gives rise to
solitons that break the CDW-I order. This phase extends over a small
range of densities below the CDW-I and eventually goes over to the
superfluid phase when the density is decreased further. The
behavior of the system when doping above half-filling is, however, different:
for small $V$ we get similar solitonic phases, but for larger $V$ a
supersolid phase stabilizes. The supersolid phase forms again when
doping above and below the CDW-II phase. In fact, there exists a range
of densities, $0.5<\rho<1$ and $\rho>1$, for $V
> U/2$ where the supersolid phase is the stable ground state of
model~(\ref{eq:ham}), as shown in Fig.~\ref{fig:fig1}.
Let us now introduce the harmonic trap potential. Earlier studies of the
one-dimensional Bose-Hubbard model in the presence of a trap have
demonstrated the co-existence of the superfluid and the Mott
insulator phases~\cite{suna,batrounimi}. The Mott insulator is
characterized by the formation of a plateau in the local number
density $\langle n_i \rangle$ as a function of the distance $r_i$ from the
center of the trap and is incompressible, while the superfluid phase
is characterized by large local number density fluctuations and is
compressible. The nearest-neighbor interaction brings about
charge density wave order in the system through the interplay between
the $U$ and $V$ terms in the Hamiltonian. Figures~\ref{fig:fig2}
and~\ref{fig:fig2a} show the density profile, i.e., the variation of
the local density $\langle n_i\rangle$ as a function of the distance from
the trap center $r_i$. We obtain the density profile for two sets of
parameters: (i) for the number of bosons fixed at $N=80$, but
different values of $V$ (Fig.~\ref{fig:fig2}) and (ii) fixed nearest
neighbor interaction, $V=8$, but different values of $N$
(Fig.~\ref{fig:fig2a}). The following three features are clearly
seen: (i) the local density $\langle n_i\rangle$ is maximum at the center of
the trap, (ii) the density falls off with increasing $r_i$, and
(iii) the density profile exhibits plateaus and oscillations.
In order to understand these features and identify various phases from the
density profile, we define the local chemical potential at
the site $i$ at a distance $r_i$ from the center of the trap as,
\begin{equation}
\mu_i = \mu_0 - V_{\text{T}} r_i^2.
\label{eq:local_mu}
\end{equation}
Here $\mu_0=E_L(N+1)-E_L(N)$ is the chemical potential of the
system. For the homogeneous system, $\mu_i=\mu_0$ for all $i$.
In the presence of a trap, however, the local chemical
potential attains its maximum value $\mu_0$ at the center of
the trap and decreases radially outward according to Eq.~\ref{eq:local_mu}. It is
instructive to plot the density profile as a function of $\mu_i$ instead of
$r_i$, as in Fig.~\ref{fig:ni_mui_3}.
It may be noted from Fig.~\ref{fig:ni_mui_3} that the density of bosons at
any site $i$ is controlled by the value of the local chemical potential
$\mu_i$. So a decrease in $\mu_i$ results in a decrease in $\langle n_i
\rangle$, with the maximum at the center of the trap as observed in
Figs.~\ref{fig:fig2} and~\ref{fig:fig2a}.
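The resulting shell structure can be read off directly from Eq.~\ref{eq:local_mu}: each site can be labelled by the homogeneous phase its local chemical potential falls into. A toy sketch (the lobe boundaries below are made-up illustrative numbers, not values from the actual phase diagram):

```python
def local_mu(mu0, VT, L):
    """mu_i = mu0 - VT r_i^2 across an open chain of length L,
    with r_i measured from the trap centre."""
    centre = (L - 1)/2
    return [mu0 - VT*(i - centre)**2 for i in range(L)]

def shell_structure(mu_profile, lobes):
    """Label every site with the homogeneous phase its local mu falls in.
    `lobes` is a list of (mu_min, mu_max, name) for the gapped lobes read
    off a homogeneous phase diagram; everything outside a lobe is taken
    to be gapless ('SF')."""
    labels = []
    for mu in mu_profile:
        name = 'SF'
        for lo, hi, nm in lobes:
            if lo <= mu <= hi:
                name = nm
        labels.append(name)
    return labels
```

With $\mu_0$ above a gapped lobe, this reproduces the familiar wedding-cake pattern: a compressible core, an incompressible shell, and a compressible edge.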
\begin{figure}[ht]
\centering
\includegraphics[width = 3.4in, angle = 0, clip = true]
{fig3.ps}
\caption{The local density $\langle n_i \rangle$ as a function of
the distance from the center of the trap $r_i$
for $N = 80$, $U=10$, $V_{\text{T}}=0.008$, but for different values of $V$.
}
\label{fig:fig2}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width = 3.4in, angle = 0, clip = true]
{fig4.ps}
\caption{The local density $\langle n_i \rangle$ as a function of the distance from
the center of the trap $r_i$
for fixed $V = 8$, $U=10$, $V_{\text{T}}=0.008$, but for different values of
$N$.
}
\label{fig:fig2a}
\end{figure}
The density of bosons plays a crucial role in determining
the ground state of the model in Eq.~\ref{eq:ham}. The gapped
phases are possible only when the density is commensurate. The
homogeneous system with a given value of $U$ and $V$ and a uniform
local chemical potential $\mu_0$ represents one point in the phase
diagram. However, for the system with a trap potential, the density
varies across the lattice due to the variation of the local chemical
potential and therefore different phases co-exist. In order to
understand this feature of co-existence of the different ground
state phases and the role played by the local chemical potential, we
first study the path traced in the phase diagram by $\mu_0$
as we change the interaction parameter $V$, keeping the number of
bosons $N$, the trap potential $V_{\text{T}}$ and the on-site interaction $U$
fixed. This path is referred to as the \emph{canonical
trajectory}~\cite{batrounimi}, since $N$ is held fixed.
Fig.~\ref{fig:fig1a} shows several canonical trajectories (for
different values of $N$) in the homogeneous phase diagram. It may be
noted that $\mu_0$ is the local chemical potential at the center of
the trap, so a canonical trajectory traces the
phase present at the trap center as $V$ is varied for fixed $N$. For
example when $N=30$, the canonical trajectory and hence the phase at
the center of the trap goes from the superfluid to CDW-I as we
increase $V$. The position of the canonical trajectory in the phase
diagram can be shifted by changing the number of bosons $N$. When
the number of bosons is increased, say to $N=40$, $\mu_0$ increases
and the position of the canonical trajectory in the phase diagram is
shifted upward. As a result the center of the trap, say for $V=0$,
which was in the SF phase for $N=30$, is now in the Mott insulator
phase. Following the canonical trajectory for $N=40$, the trap
center goes from MI to SF and then to a supersolid phase for
increasing $V$. Thus the position of the canonical trajectory for a
given $N$ and $V$ in the phase diagram represents the phase at the
center of the trap.
Moving away from the center of the trap, the local chemical
potential decreases as in Eq.~(\ref{eq:local_mu}) and the
variation of $\mu_i$ is represented in the phase diagram by a line
drawn vertically downwards from the canonical trajectory to the
horizontal axis. The local chemical potential values across the
lattice fall on this line, which passes through different ground state
phases. Therefore, the local chemical potential (and thus the local density) at
different sites favors the co-existence of different phases in the presence of
a trap. It is useful to re-plot the density profile given in
Fig.~\ref{fig:fig2} as a function of $\mu_i$, using
Eq.~\ref{eq:local_mu}, instead of $r_i$, as in
Fig.~\ref{fig:ni_mui_3}. We also calculate and plot, in the same
figure, the average local number density defined as
\begin{equation}
\bar{n}_i= \langle (2n_i+n_{i+1}+n_{i-1}) \rangle/4.
\label{eq:nbar}
\end{equation}
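The role of the average in Eq.~\ref{eq:nbar} is to filter out the period-2 CDW oscillation, which it cancels exactly (a one-line sketch; the function name is our own):

```python
def averaged_density(n):
    """nbar_i = (2 n_i + n_{i+1} + n_{i-1})/4 for the interior sites:
    a perfect period-2 oscillation averages to the mean density, so
    plateaus of the underlying density profile become visible."""
    return [(2*n[i] + n[i + 1] + n[i - 1])/4 for i in range(1, len(n) - 1)]
```

A perfect CDW-I pattern $|1~0~1~0~\cdots\rangle$ is mapped to a flat $\bar{n}_i = 1/2$, while a Mott plateau is left unchanged.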
For $N = 80$ and $V = 2.0$, $\mu_0$ falls in the superfluid phase
above the $\rho=1$ Mott lobe (point $A$ on the
canonical trajectory corresponding to $N= 80$ in
Fig.~\ref{fig:fig1a}). This means that the center of the trap has
$\langle n_i \rangle > 1$. Moving away from the trap center, $\mu_i$
decreases along the line $AB$ and there are regions where $\mu_i$
falls inside the MI lobe. From Fig.~\ref{fig:ni_mui_3}, we see that
for these values of $\mu_i$, $\langle n_i \rangle=1$. Similarly, as we move
towards the edge, the value of $\mu_i$ decreases further, such that the system
is once again in a superfluid phase on the lower side of the Mott lobe. So the
system for $N=80$, $V=2$ has a superfluid core
flanked by a MI phase and finally ending with a superfluid edge. The density
profile (top panel of Fig.~\ref{fig:fig2} and
Fig.~\ref{fig:ni_mui_3}) correlates with this result. In addition,
there are oscillations in $\langle n_i \rangle$ in the superfluid shoulders
near $\bar{n}_i=1/2$. The reason for these oscillations is the
following. For $V=2$, the system is close to the CDW-I lobe (see
Fig.~\ref{fig:fig1}). In the thermodynamic limit, the CDW-I order can stabilize
only for $V> V_C\sim 3.0$. However, the finite size of the system allows a
CDW-I phase to exist for lower values of $V$, here $V=2$, although it
vanishes in the thermodynamic limit. Finite-size effects are characterized by
oscillations in the local density $\langle n_i \rangle$. These oscillations
stabilize into the CDW-I phase at higher values of $V$. For example, for $V=7$
and $N=80$, $\mu_0$ (point $C$ in Fig.~\ref{fig:fig1a}) falls inside the CDW-II
lobe, yielding a CDW-II phase at the center. Moving towards
the edges, the CDW-II phase is flanked by a supersolid phase,
CDW-I, and finally a superfluid shoulder, as can also be inferred from the density
profiles shown in Fig.~\ref{fig:fig2} and Fig.~\ref{fig:ni_mui_3}. In
fact, these conclusions can be further fortified by comparing the variation
of the average $ \bar{n}_i$ as a function of $\mu_i$ with the density of the
corresponding homogeneous system as in Fig.~\ref{fig:ni_mui_3a}. The
agreement is striking, leading to the conclusion that for a given set of
parameters, the phase of a system with an external trap
is represented by a line starting from the canonical trajectory to the
horizontal axis while the phase of the homogeneous system is represented by a
point in the phase diagram. This immediately shows that while the homogeneous
system can have a unique phase, the phases tend to co-exist for an
inhomogeneous system.
In addition to scanning along the phase diagram at fixed values of
$N$ and varying $V$, it is equally possible to fix the nearest
neighbor interaction $V$ and move along the phase diagram by varying
$N$ and therefore the chemical potential $\mu_0$. The canonical
trajectory in the phase diagram moves upwards (downwards) by
increasing (decreasing) the total number of bosons and as a result
the local chemical potential at the center of the trap $\mu_0$
changes, giving rise to different phases at the center. This is
demonstrated in Fig.~\ref{fig:fig2a} for fixed $V=8$ for different
values of $N$. For $N=30$, the position of $\mu_0$ is inside the
CDW-I lobe (see Fig.~\ref{fig:fig1a}) and, as discussed above, the
corresponding system has a CDW-I core flanked by a superfluid edge,
as seen in the density profile (top panel of Fig.~\ref{fig:fig2a}). An
interesting situation occurs for
$N = 40$, where the trap center is expected to be in the elusive
supersolid phase, as seen in Fig.~\ref{fig:fig1a} and is
characterized by density fluctuations between $1.0 \le \langle n_i \rangle
\le 1.5$, that is, the system has a CDW order at incommensurate
densities~\cite{mishrass}. As a result, the system now will have a
supersolid core, followed by a CDW-I and a superfluid phase moving
outward from the trap center (top panel of
Fig.~\ref{fig:fig2a}). Further increase in $N$ leads to the inclusion
of a CDW-II phase in the system in addition to the supersolid, the
CDW-I and the superfluid phases.
\begin{figure}[ht]
\begin{center}
\includegraphics[angle = 0, width = 3.4in, clip = true]{fig5.ps}
\end{center}
\caption{(Color online) Local number density $\langle n_i \rangle$ and the average
density $\bar{n}_i= \langle 2n_i+n_{i+1}+n_{i-1} \rangle/4$ as functions of the
local chemical potential for different values of $V$ at
fixed $N = 80$.}
\label{fig:ni_mui_3}
\end{figure} \begin{figure}[ht]
\centering
\includegraphics[width = 2in, angle = 0, clip = true]
{fig6.ps}
\caption{(Color online) Average local number density $\bar{n}_i$ for the system
with a trap and $\langle n_i \rangle$ for a homogeneous system, as functions of the
local chemical potential $\mu_i$ for $V=2$ and $7$.}
\label{fig:ni_mui_3a}
\end{figure} \begin{figure}[ht]
\begin{center}
\includegraphics[angle = 0, width = 3.4in, clip = true]{fig7.ps}
\end{center}
\caption{(Color online) Number density per site and its fluctuation, which serves as a measure of
the local compressibility and can hence be used to pick out the compressible and the incompressible phases that coexist in the presence of a harmonic trap.}
\label{fig:fig4}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[angle = 0, width = 3.4in, clip = true]{fig8.ps}
\end{center}
\caption{(Color on-line) Picking out the different phases from the density profile for $N = 80$ and $V = 8.0$.}
\label{fig:fig5}
\end{figure}
The next issue we address here is a scheme to pick out the various
phases using local properties of the system. We will follow the
discussions in~\cite{suna} and use local compressibility or
equivalently the fluctuations in the number density per lattice site,
$\kappa_i$, given in Eq.~\ref{eq:kappa}, as a tool to distinguish
between the gapped and the gapless phases. It is known that the
number fluctuation is large in the superfluid phase while it is
minimal for the MI and the CDW phases. Fig.~\ref{fig:fig4} shows the
variation of $\kappa_i$ across the lattice. For small values of $V$,
$\kappa_i$ is large at the center and at the edges of the trap, indicating that
these regions are in the superfluid phase, while the plateaus represent the
Mott insulator phase. Further, we note
that these plateaus (minima) occur exactly over the values of $r_i$ where the
average local density $\bar{n}_i$ exhibits a plateau at integer densities.
Therefore, one can pick out the incompressible phases using the
density profile and its local fluctuation $\kappa_i$ and identify
them using the phase diagram and the canonical trajectories.
As an example, Fig.~\ref{fig:fig5} shows the different phases for $N=80$
and $V=8.0$. In the next section, we will discuss the
experimental signatures for the various phases that have been
isolated in the presence of a harmonic trap using global properties
of the system.
\section{Experimental Signatures}
\label{sect:exp_sig}
The presence of a harmonic trap in the optical lattice leads to the
co-existence of the superfluid, the Mott insulator, the charge density wave and
the supersolid phases as seen in the previous sections. As a result, extracting
the signature of a particular phase in the presence of other phases becomes a
theoretically important exercise in order to make connections with experiments.
In the following we analyze possible global signatures of the
various ground state phases that can be experimentally confirmed.
It is now possible in experiments to record the spatial distribution
of atoms in the lattice with different filling
factors~\cite{bloch06,gemelke,sherson,bakr}.
Similar experiments in one-dimensional
optical lattices can yield density profiles from which the ground state phases
can be mapped. Another way to obtain direct
information about the Mott plateaus (shells in 3D) is through the
atomic clock shift experiment~\cite{campbel}. By using density-dependent
transition frequency shifts, sites with different
occupations can be spectroscopically distinguished, thus giving
information about the number of sites corresponding to a given boson density
$\rho$, defined as $N(\rho)$. As a first step, we look for the
signatures of the solid phases (MI and CDW) in an atomic clock shift
experiment.
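The observable $N(\rho)$ amounts to a histogram of the local density over the lattice sites (an illustrative sketch; the bin edges and the function name are our own choices):

```python
import numpy as np

def n_of_rho(densities, edges):
    """Number of lattice sites whose local density falls in each density
    bin: a proxy for the spectroscopic signal N(rho).  Gapped regions
    produce plateaus in the density profile and hence peaks in N(rho)
    at commensurate densities."""
    counts, _ = np.histogram(densities, bins=edges)
    return counts
```

A density profile containing a wide plateau thus shows up as a tall peak in the corresponding density bin.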
\begin{figure}[ht]
\begin{center}
\includegraphics[angle = 0, width = 3.4in, clip = true]{fig9.ps}
\end{center}
\caption{$N(\rho)$ versus $\rho$ for $N = 80$ and different values of the
nearest neighbor interaction $V$. The
presence of incompressible phases can be distinguished by the formation of a peak at commensurate densities.}
\label{fig:fig6}
\begin{center}
\includegraphics[angle = 0, width = 3.4in, clip = true]{fig10.ps}
\end{center}
\caption{$N(\rho)$ versus $\rho$ for $V = 8.0$ as the number of bosons is varied. Incompressible phases can be
picked out by the formation of peaks at commensurate densities.}
\label{fig:fig6a}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[angle = 0, width = 3.4in, clip = true]{fig11.ps}
\end{center}
\caption{Structure factor as a function of $q$ for $N = 40$ and different
values of $V$.}
\label{fig:fig7}
\begin{center}
\includegraphics[angle = 0, width = 3.4in, clip = true]{fig12.ps}
\end{center}
\caption{Structure factor as a function of $q$ for $V = 8.0$ and different
values of $N$.}
\label{fig:fig7a}
\end{figure}
In Figs.~\ref{fig:fig6} and~\ref{fig:fig6a} we plot $N(\rho)$ as a
function of $\rho$ for different values of $V$ at fixed $N=80$ and for
different values of $N$ at fixed $V=8$, respectively. The
density profiles
corresponding to these parameter values are
given in the Figs.~\ref{fig:fig2} and~\ref{fig:fig2a} respectively.
The presence of the incompressible phases, that is, the MI, the CDW-I
and II in the system can be inferred from the \emph{formation of a
peak} in $N(\rho)$ at commensurate densities. For example, the existence
of a Mott plateau in the density profile for $V$ ranging between $0$ and $5$
(see Fig.~\ref{fig:fig2}) correlates with a peak in $N(\rho)$ at
$\rho=1$. Similarly, peaks in $N(\rho)$ at $\rho=2$ correlate
with the formation of CDW-II phases in the density profile. Similar
conclusions can be drawn from Fig.~\ref{fig:fig6a}. Comparing with the
density profile in Fig.~\ref{fig:fig2a}, we can conclude that the formation of
peaks in $N(\rho)$ at integer densities correlates with the existence of
the solid phases, i.e., MI or CDW.
In order to distinguish between the two solid phases, i.e., the CDW and the MI
phase, we calculate the structure factor, as defined in
Eq.~\ref{eq:sk}. Fig.~\ref{fig:fig7} shows the structure factor in
momentum space as $V$ is varied for $N = 40$, while
Fig.~\ref{fig:fig7a} has fixed $V = 8.0$ for different $N$ values.
From the phase diagram (see the canonical trajectory in
Fig.~\ref{fig:fig1a}) for $N = 40$ the phases at low $V$ values are
the superfluid and the Mott insulator. However, for higher
values of $V$, a CDW-I phase is possible. The CDW oscillations in the
density profile translate into the formation of a peak at $q = \pi$
in the structure factor. As $V$ increases, this peak at $q = \pi$
grows in magnitude reaching its maximum value when the trap center
exhibits the CDW crystalline structure. However, this crystalline structure is
possible for either a CDW or a SS phase. For example, for $N = 40$ and $V = 8.0$, the
center of the trap is in the supersolid phase that has the CDW-like crystalline
structure and is compressible like a superfluid. Hence the next step is to
distinguish between
the CDW ordered phases that could be either compressible (SS phase) or
incompressible (CDW phase itself). While this can be established locally
with the behavior of compressibility as a function of the distance
from the trap center, a global signature that can be used to check
for the presence of a SS phase in the trap is the momentum distribution
$n(q)$~\cite{scarola-demler-das}.
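The role of $S(q=\pi)$ in separating CDW order from a uniform Mott profile can be illustrated with a minimal numerical sketch. Since Eq.~\ref{eq:sk} lies outside this excerpt, the conventional form $S(q) = \frac{1}{L}\sum_{k,l} e^{iq(k-l)} \langle n_k n_l \rangle$ is assumed here, with $\langle n_k n_l \rangle$ replaced by the classical product $n_k n_l$:

```python
import numpy as np

def structure_factor(density, qs):
    # S(q) = (1/L) sum_{k,l} exp(i q (k - l)) <n_k n_l>, with <n_k n_l>
    # approximated by the classical product n_k * n_l for illustration.
    L = len(density)
    amplitudes = np.exp(1j * np.outer(qs, np.arange(L))) @ density
    return np.abs(amplitudes) ** 2 / L

L = 40
cdw = np.tile([2.0, 0.0], L // 2)    # CDW-II-like pattern ...2020...
mott = np.ones(L)                     # uniform Mott-like pattern ...1111...
s_cdw = structure_factor(cdw, np.array([np.pi]))[0]    # extensive peak
s_mott = structure_factor(mott, np.array([np.pi]))[0]  # vanishes
```

For the alternating pattern $S(\pi)$ scales with the system size, while for the uniform pattern it vanishes, which is the diagnostic used in Figs.~\ref{fig:fig7} and~\ref{fig:fig7a}.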
\begin{figure}[ht]
\begin{center}
\includegraphics[angle = 0, width = 3.4in, clip = true]{fig13.ps}
\end{center}
\caption{(Color online) Momentum distribution as a function of $q$ for $N = 40$ and different values of $V$.}
\label{fig:fig8}
\begin{center}
\includegraphics[angle = 0, width = 3.4in, clip = true]{fig14.ps}
\end{center}
\caption{(Color online) Momentum distribution as a function of $q$ for $V = 8.0$ and different values of $N$.}
\label{fig:fig8a}
\end{figure}
In experiments, the bosons in the optical lattice are allowed to expand and the
interference pattern in the density is recorded. The density distribution is
mirrored in the momentum distribution defined as,
\begin{equation}
n(q) = \displaystyle\frac{1}{L} \sum_{k, l = 1}^L \langle
a_k^{\dagger} a_l \rangle \exp(i q (k - l))
\label{eqn:mom_dist1}
\end{equation}
which then provides global information about the various phases present in
the system. Figures~\ref{fig:fig8} and~\ref{fig:fig8a} show the momentum
distribution, respectively, for various values of $V$ with fixed $N = 40$ and for various values of $N$ with fixed $V=8.0$.
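The momentum distribution of Eq.~(\ref{eqn:mom_dist1}) can be evaluated directly from a single-particle density matrix $\langle a_k^{\dagger} a_l \rangle$. The two inputs below (a diagonal, Mott-like matrix and a fully coherent, condensate-like one) are idealized limits used only to illustrate the flat versus peaked behavior:

```python
import numpy as np

def momentum_distribution(spdm, qs):
    """n(q) = (1/L) sum_{k,l} <a_k^dagger a_l> exp(i q (k - l)),
    for a given single-particle density matrix `spdm` (idealized input)."""
    L = spdm.shape[0]
    sites = np.arange(L)
    diff = sites[:, None] - sites[None, :]
    return np.array([np.real(np.sum(spdm * np.exp(1j * q * diff))) / L
                     for q in qs])

# A Mott-like state has no off-diagonal coherence, <a_k^† a_l> = delta_kl,
# giving a flat n(q); a condensate-like spdm of all ones peaks at q = 0.
L = 16
flat = momentum_distribution(np.eye(L), np.array([0.0, np.pi]))
peaked = momentum_distribution(np.ones((L, L)), np.array([0.0, np.pi]))
```

Real DMRG correlation matrices interpolate between these two limits, which is why the trapped-system $n(q)$ carries contributions from all coexisting phases at once.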
We see that in addition to the peaks at $q = 0$ and $q
= \pm 2 \pi$, there are peaks around $q = \pi$. In order to understand the
reason for this peak at $q = \pi$, let us look at the momentum distribution for
the homogeneous system as in Fig.~\ref{fig:homo_compare}. We choose four densities to demonstrate the features of
the peak in $n(q)$ at $q = \pi$. From the phase
diagram we see that for $V = 8.0$ the homogeneous system with $\rho=0.42$ is in the superfluid phase, $\rho=1/2$ and $1$
are, respectively, in CDW-I and CDW-II phases and for $\rho=0.67$, the system is in the
supersolid phase. From Fig.~\ref{fig:homo_compare} we note that the presence of
a supersolid order in the system is accompanied by a peak in the momentum
distribution at $q = \pi$, which is absent in the SF, CDW-I and CDW-II phases. The structure factor for the same set of densities,
given in Fig.~\ref{fig:homo_compare1}, shows a peak at $q=\pi$ when the system is in the CDW-I, CDW-II and SS phases. This confirms that the peak in $n(q)$ at $q=\pi$ is a clear signature of the supersolid phase.
Therefore, for trapped systems where phases co-exist, a
peak in the momentum distribution at $q = \pi$ signals the presence of
a supersolid phase somewhere in the trap, with the peak height being maximum when
the supersolid occupies the center of the trap.
We summarize below the signatures of the different ground state phases for the
inhomogeneous extended Bose-Hubbard model:
\begin{figure}[ht]
\begin{center}
\includegraphics[angle = 0, width = 3.4in, clip =
true]{fig15.ps}
\end{center}
\caption{(Color online) Momentum distribution for the homogeneous case. Note that
when the
system is in the supersolid phase, a peak at $q = \pi$ develops.}
\label{fig:homo_compare}
\begin{center}
\includegraphics[angle = 0, width = 3.4in, clip =
true]{fig16.ps}
\end{center}
\caption{(Color online) Structure factor for the homogeneous case. Note that
when the
system is in the supersolid phase, a peak at $q = \pi$ develops.}
\label{fig:homo_compare1}
\end{figure}
\begin{itemize}
\item MI phase:
\begin{itemize}
\item Peaks in $N(\rho)$ at integer densities.
\item No peak in the momentum distribution $n(q)$ at $q = \pi$.
\item No peak in the structure factor $S(q)$ at $q = \pi$.
\end{itemize}
\item CDW phase:
\begin{itemize}
\item Peaks in $N(\rho)$ at integer densities.
\item No peak in the momentum distribution $n(q)$ at $q = \pi$.
\item Peak in the structure factor $S(q)$ at $q = \pi$.
\end{itemize}
\item Supersolid phase:
\begin{itemize}
\item No peaks in $N(\rho)$ at integer densities.
\item Peak in the momentum distribution $n(q)$ at $q = \pi$.
\item Peak in the structure factor $S(q)$ at $q = \pi$.
\end{itemize}
\end{itemize}
\section{Conclusion}
\label{sect:concl} In conclusion, we have studied a system of
dipolar ultra-cold bosonic atoms in the framework of the extended
Bose-Hubbard model in the presence of an external harmonic trap. Using
the finite-size density matrix renormalization group (FS-DMRG) method we
have demonstrated the simultaneous existence of different phases in the
system. We identify the signatures of the different phases by calculating
observable quantities such as the on-site number density,
the number fluctuation, the structure factor and the momentum
distribution. We also document global signatures of the ground state
phases that can be observed experimentally.
\section{Acknowledgement} R.~V.~P. acknowledges financial support from CSIR and
DST, India.
% arXiv:1011.2064
\section{Why Care About the S-type AGB stars?}
Earlier theoretical results suggest that the S-type AGB stars have a C/O-ratio$\approx$1 \citep{scalross76}. This has been interpreted as the S-stars being representative of a brief transitional phase as M-type stars, through the dredge-up of internally synthesized carbon, evolve into carbon stars. As such, their study could potentially give important clues to a number of unsolved issues regarding the mass-loss mechanism(s) and the chemical evolution as stars evolve along the AGB. In an atmosphere where the chemistry is in equilibrium and the amount of carbon is approximately equal to the amount of oxygen, almost all of the carbon and oxygen will be bound in CO molecules and little will be left to form other carbon- or oxygen-bearing compounds. This complicates the formation of dust in these stars as either free carbon or oxygen in the form of silicon oxide is normally needed to form dust. This does not necessarily imply that dust cannot be formed around these stars, merely that it is more complicated than previously assumed.
The S-type stars are classified by the presence and dominance of ZrO-bands in their optical spectra. Intrinsic S-type stars (as opposed to extrinsic S-type stars which owe their chemical peculiarities to mass-transfer across a binary system) also show Tc in their spectra. Recent models have shown that this classification might represent a larger range in C/O-ratio, possibly going as low as 0.5 (see Van Eck et al., this volume). The stars in our sample are found in the {\em General Catalogue of Galactic S stars}, the IRAS {\em Point Source Catalogue}, and the {\em Guide Star Catalogue}. They are all intrinsic and previously detected in CO. This might introduce a bias toward higher mass-loss-rate stars, and we will miss any very low mass-loss-rate stars; however, this selection was made with the aim of also detecting line emission from other molecules besides CO. The sample is most likely representative of mass-losing S-type stars and complete (or close to complete) out to 600\,pc \citep{ramsetal09}.
The goal of this investigation is to determine reliable mass-loss rates and circumstellar molecular abundances for chemically important molecules in a consistent way so that the results can be compared to previous results on M-type and carbon stars, in the hope that this will lead to a better understanding of the evolutionary status of the S-type stars and the chemical evolution along the AGB.
\section{Observations and modeling}
The 40 sample stars are observed and detected in several lower-frequency rotational transitions of CO ($J=1\rightarrow0$ to $J=3\rightarrow2$; 40 detected sources), SiO ($J=2\rightarrow1$ to $J=8\rightarrow7$; 26 detected sources), and HCN ($J=1\rightarrow0$ to $J=4\rightarrow3$; 18 detected sources) using the Onsala, IRAM, APEX and JCMT telescopes \citep[see][for details on the observations]{ramsetal09,schoetal10}. We have also searched for line emission from SiS, CS, and H$_{2}$CO, but with little or no success.
The data is then modeled using a detailed, non-local, non-LTE, Monte-Carlo radiative transfer code described in e.g. \citet{schoolof01}. The circumstellar envelope is assumed to be spherically symmetric and formed by a constant mass-loss rate. It is also assumed to have a constant expansion velocity. The thermal balance is solved self-consistently. The mass-loss rates ($\dot{M}$) and the kinetic gas temperature distributions ($T(r)$) are determined by reproducing the CO line emission. These parameters are then used as input together with the fractional abundance (relative to H$_{2}$) of the respective molecules in order to reproduce the SiO and HCN line emission (see Figs~1 and 2). The abundance distribution is described by a Gaussian function, and in the modeling both the molecular abundance at the inner boundary ($f_{0}$) and the extent of the emitting region ($r_{\rm{e}}$) are constrained. For all models $\chi^2$-minimization is used to find the best fit.
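The Gaussian abundance law and the $\chi^2$ ranking of models can be sketched as follows. The functional form $f(r) = f_0\,e^{-(r/r_{\rm e})^2}$, with $r_{\rm e}$ the e-folding radius, is the standard parameterization assumed here; the $\chi^2$ helper is illustrative only:

```python
import math

def abundance(r, f0, r_e):
    # Gaussian abundance law f(r) = f0 * exp(-(r / r_e)^2);
    # r_e is the e-folding radius of the distribution.
    return f0 * math.exp(-(r / r_e) ** 2)

def chi2(model, obs, sigma):
    # chi^2 statistic used to rank (f0, r_e) grid points against
    # observed line intensities (illustrative only).
    return sum(((m - o) / s) ** 2 for m, o, s in zip(model, obs, sigma))
```

By construction, $f$ equals $f_0$ at the inner boundary and has dropped by a factor $e$ at $r = r_{\rm e}$, which is what the fitted $r_{\rm e}$ values quoted in the figure captions refer to.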
\begin{figure}[hb]
\begin{flushleft}
\includegraphics[width=3.2cm]{ramstedt_fig1.eps}
\includegraphics[width=3.2cm]{ramstedt_fig2.eps}
\includegraphics[width=3.2cm]{ramstedt_fig3.eps}
\includegraphics[width=3.2cm]{ramstedt_fig4.eps}
\caption{Example of spectra (histogram) of several transitions of SiO from the S-type star W~Aql. The spectra are overlaid by the results from the best-fit model (solid line) for this source, assuming an initial fractional abundance of $f_{0}=3\times10^{-6}$ and an e-folding radius of the Gaussian SiO abundance distribution of $r_{\rm{e}}=6.5\times10^{15}$. }
\label{sio}
\end{flushleft}
\end{figure}
\section{Results and comparison with previous results for M-type and carbon stars}
\subsection{CO}
The median mass-loss rate found for the S-type stars is 2.7$\times$10$^{-7}$\,M$_{\odot}$\,yr$^{-1}$, and the distribution is spread over about two orders of magnitude. The median gas expansion velocity is 8.0\,km\,s$^{-1}$, ranging from 3 to 21\,km\,s$^{-1}$. The mass-loss rate distribution looks very similar regardless of chemical type (Fig.~\ref{dist}). So does the distribution of expansion velocities; however, the carbon stars seem to have slightly higher expansion velocities. The correlation between $\dot{M}$ and pulsational period, and expansion velocity and pulsational period, also appears to be similar regardless of the chemical type. All these results point to the mass loss being driven by the same mechanism or mechanisms in all three chemical types. For further details on the CO results, see \citet{ramsetal06,ramsetal09}.
\subsection{SiO}
The SiO fractional abundances for the S-type stars span three orders of magnitude, and the median value, 6$\times$10$^{-6}$, is almost one order of magnitude larger than what would be expected in chemical equilibrium \citep{cher06}. From a non-equilibrium model, including the effects of shocks in the circumstellar gas, the SiO abundance would be expected to be a few times 10$^{-5}$. This is more in agreement with what we find in our models. A comparison of the results for all three chemical types shows that the abundance distributions are very similar (see Fig.~\ref{dist}) and can vary by up to two orders of magnitude for a specific density. We interpret the large spread in abundances as indicative of the SiO chemistry being a consequence of the effects of shocks in the stellar atmospheres, as a shock chemistry would be very sensitive to other specific parameters of the star, like shock velocity for instance. For the M-type and carbon stars, previous results show a clear decrease in the SiO abundance with the circumstellar wind density, and this has been interpreted as an effect of SiO adsorption onto dust grains in a high-density wind. There is some indication of the same effect in the S-type stars, however, very few high-density S-type stars have been observed. For further details on the SiO results, see \citet{ramsetal09}.
\begin{figure}[t]
\begin{flushleft}
\includegraphics[width=4cm]{ramstedt_fig5.eps}
\includegraphics[width=4cm]{ramstedt_fig6.eps}
\includegraphics[width=4cm]{ramstedt_fig7.eps}
\includegraphics[width=4cm]{ramstedt_fig8.eps}
\includegraphics[width=4.cm]{ramstedt_fig9.eps}
\includegraphics[width=4cm]{ramstedt_fig10.eps}
\caption{Example of spectra (histogram) of several transitions of HCN from the S-type star W~Aql. The HHT spectra are from \citet{biegetal00}. The spectra are overlaid by the results from the best-fit model (solid line) for this source, assuming an initial fractional abundance of $f_{0}=5\times10^{-7}$ and an e-folding radius of the Gaussian HCN abundance distribution of $r_{\rm{e}}=6\times10^{16}$. }
\label{hcn}
\end{flushleft}
\end{figure}
\subsection{HCN}
The HCN abundances show a different picture. Here the three types are clearly different. The carbon stars have a median abundance of 2.5$\times$10$^{-5}$, while the M-type stars have a median abundance of 1.2$\times$10$^{-7}$ and both distributions are quite narrow (see Fig.~\ref{dist}). The S-type stars, on the other hand, are spread out between the other two types, and have a median abundance of 7.0$\times$10$^{-7}$. This is more in line with what would be expected in equilibrium chemistry where the HCN abundance would be very dependent on the C/O-ratio. In this scenario, the S-type stars would either be more 'M-type-like', with a lower C/O-ratio and HCN abundance, or more 'C-type-like', with a higher C/O-ratio and HCN abundance. The estimated HCN abundances do differ from what is found in equilibrium chemistry models, especially for the M-type sources where the equilibrium abundance would be expected to be $\sim$10$^{-11}$. This indicates that there might be some non-equilibrium processes influencing the chemistry, however not as much as in the models by \citet{cher06} where the HCN abundance is found to be independent of chemical type. For further details on the HCN results, see \citet{schoetal10}.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=4cm]{ramstedt_fig11.eps}
\includegraphics[width=4cm]{ramstedt_fig12.eps}
\includegraphics[width=4cm]{ramstedt_fig13.eps}
\includegraphics[width=4cm]{ramstedt_fig14.eps}
\includegraphics[width=4cm]{ramstedt_fig15.eps}
\includegraphics[width=4cm]{ramstedt_fig16.eps}
\caption{The upper panels show histograms of the mass-loss rate distributions, the SiO abundance distributions, and the HCN abundance distributions of the three samples compared in our work. The solid line represents the S-type stars \citep{ramsetal09}, the dotted-dashed line represents the M-type stars \citep{olofetal02, delgetal03}, and the dashed line represents the carbon stars \citep{schoolof01}. The lower panels show $\dot{M}$ versus $v_{\rm{e}}$ (left), the SiO abundance versus wind density (middle), and the HCN abundance versus wind density (right). S-type stars \citep{ramsetal09, schoetal10} are shown as dots, M-type stars \citep{delgetal03,schoetal10} as squares, and carbon stars \citep{schoetal06,schoetal10} as triangles. }
\label{dist}
\end{center}
\end{figure}
\section{Conclusions}
We have modeled circumstellar molecular line emission from CO, SiO, and HCN for 40 S-type AGB stars. We have compared the results to previous results for M-type and carbon stars, and arrive at the following conclusions:
\begin{itemize}
\item{We see no indications that the mass-loss process is different in the S-type stars compared to that of the M-type or carbon stars.}
\item{Circumstellar SiO abundances are similar in all three chemical types and the results are indicative of shock chemistry and grain adsorption.}
\item{Circumstellar HCN abundances are clearly sensitive to the spectral/chemical type of the star and more in line (than the estimated SiO abundances) with results from models assuming thermal equilibrium. Furthermore, our results for HCN clearly show that the S-type stars in our sample are chemically different from the M-type stars.}
\end{itemize}
Despite the limitations due to the selection criteria of the compared samples, we believe that our conclusions apply to AGB stars in general.
\vspace{1cm}
\acknowledgements The authors acknowledge support from the Swedish Research Council. SR acknowledges support by the Deutsche Forschungsgemeinschaft (DFG) through the Emmy Noether Research grant VL 61/3-1.
% arXiv:1011.2809
\section{Introduction}
In wireless communications, reflection and diffraction of the
transmitted radio signal results in the superimposition of multiple
complex-scaled and delayed copies of the signal at the receiver. This
type of channel is commonly referred to as a \emph{multipath
channel}. In some instances, the multiple copies add constructively,
and in others destructively resulting in \emph{multipath fading}. When
the coherence bandwidth of the channel is smaller than the bandwidth
of the radio signal then the fading is termed \emph{frequency
selective}~\cite{Proakis:2000}. We assume the reader is familiar
with standard wide-sense stationary uncorrelated scattering models,
for an overview see e.g.~\cite{Goldsmith:2005book}.
Orthogonal frequency division multiplexing (OFDM) is a transmission
strategy specifically designed to combat frequency selective channels
with relatively low receiver complexity
\cite{Peled:1980p964,Cimini:1985p665,Vannee:1999book}. In OFDM, the
signal bandwidth is divided into several non-overlapping (hence
orthogonal) narrowband subcarriers where the width of each subchannel
is chosen such that it is approximately frequency non-selective. Thus
only a single tap equaliser per subchannel is required to compensate
for the multipath fading. Together with the use of the fast Fourier
transform (FFT), this results in a low complexity way to handle
frequency-selective channels. As such, OFDM is now the basis of many
current and emerging wireless communications standards,
see~\cite{Vannee:2006p445,HieDen10} for an overview. Many of these standards
are targeted for outdoor mobile applications,
e.g. 802.11p~\cite{IEEE80211p}. Mobility causes the multipath channel
(and hence frequency selectivity) to change with time. If the mobility
is fast enough compared to the symbol rate, then the channel impulse
response may vary significantly within an OFDM packet. Extensive field
trials have shown that this is indeed the case for the transmission of
802.11 OFDM signals in vehicular
environments~\cite{AleHal10}. Time-varying multipath channels such as
these are commonly termed
\emph{doubly-dispersive}~\cite{Kozek:1998p1579,Liu:2004p2583,Taubock:2007isit}.
In general, a realization of the time-varying impulse response of a
doubly selective multipath channel can be modeled in continuous time as
\begin{equation*}
c(t,\tau) = \sum_{p=1}^P a_p(t) \delta(\tau-\tau_p)
\end{equation*}
where $c(t,\tau)$ is the response at delay $\tau$ to an impulse at
time $t$, where $\delta(\tau)$ denotes the Dirac-delta function. The
$a_p(t)$ are the time-varying complex amplitudes (magnitude and phase)
of tap $p$, with delay $\tau_p$. The number of resolvable multipath
components is $P$. The $a_p(t)$ may aggregate many more unresolvable
multipath components, typically resulting in Ricean or Rayleigh
statistics for these parameters.
Note that for sufficiently short time durations, mobility-induced
Doppler shifts manifest as linear variations of the phase of $a_p(t)$
with time. In this paper, we consider the special case where the OFDM
packet duration is short enough such that we can model the channel as
\begin{equation} \label{eq:cir}
c(t,\tau) = \sum_{p=1}^{P} a_p e^{-j 2 \pi \nu_p t} \delta(\tau - \tau_p),
\end{equation}
where $a_p$, $\tau_p$ and $\nu_p$ respectively denote the complex
gain, delay and Doppler frequency (relative to the nominal carrier
frequency) of tap $p$. These parameters are all assumed to be
\emph{constant} over the duration of an OFDM packet. In a physical
sense, this implies that changes in the relative distance and velocity
between the transmitter, receiver and scatterers are negligible over
the duration of an OFDM packet. This model is consistent with the
geometric-stochastic model presented in~\cite{KarTuf09} for short
observation windows, and has been validated experimentally in~\cite{AleHal10}.
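A discrete sketch of the model \eqref{eq:cir} in the OFDM frequency domain: each tap contributes a two-dimensional complex exponential in the OFDM symbol index and subcarrier index. The phase conventions below are simplified and illustrative, not the exact expressions derived later in the paper:

```python
import numpy as np

def channel_matrix(a, tau, nu, L, K, T, T_d):
    """Frequency-domain channel H[l, k] as a superposition of 2-D
    complex exponentials; one term per multipath tap.
    Phase offsets are simplified relative to the paper's derivation."""
    l = np.arange(L)[:, None]            # OFDM symbol index (Doppler axis)
    k = np.arange(K)[None, :] - K // 2   # centered subcarrier index (delay axis)
    H = np.zeros((L, K), dtype=complex)
    for a_p, tau_p, nu_p in zip(a, tau, nu):
        H += (a_p * np.exp(-2j * np.pi * nu_p * l * T_d)
                  * np.exp(-2j * np.pi * k * tau_p / T))
    return H

# Sanity check: a single static tap (tau = nu = 0) gives a flat channel.
H = channel_matrix([1.0], [0.0], [0.0], L=4, K=8, T=1.0e-3, T_d=1.25e-3)
```

Each tap's delay sets the frequency of the exponential along the subcarrier axis, and its Doppler sets the frequency along the symbol axis, which is what makes joint delay/Doppler estimation a 2-D frequency estimation problem.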
In this paper, we concentrate on joint estimation of $a_p$, $\tau_p$
and $\nu_p$ of the multipath components assuming perfect knowledge of
the transmitted OFDM symbols. This is a practical assumption, e.g. a
transmitted training/pilot signal, or the receiver is able to decode
the signal without error (via a forward error correction code).
Estimation of these parameters is useful in a number of areas: channel
sounding and characterisation; channel prediction; reducing channel
state information for feedback in adaptive communications; and
radar. Estimation of these parameters in the OFDM setting has been
studied previously by a number of researchers from both the
communications and radar fields. Channel estimation via an
approximate maximum likelihood parameter search algorithm was proposed
by Thomas \emph{et~al.~} \cite{Thomas:2003p74}. Their iterative algorithm was
based on an approximation of the maximum likelihood function, where
the multipath gain values are substituted with their least-squares
estimates. In the radar community, estimation of delay/Doppler is
vital for determination of target range and velocity. Berger
\emph{et~al.~}~\cite{Berger:2008p2384} studied the problem of extracting the
target range/velocity information from the OFDM signal in a passive
multi-static radar system~\cite{chern:2007} using digital audio/video
broadcasted signals as illuminators of opportunity. They set up the
problem as a sparse estimation problem to use recent results from
compressed sensing~\cite{Candes:2008:SPM}. In particular, they employ
the orthogonal matching pursuit
algorithm~\cite{Needell:2008arxiv,Needell:2008asilomar}, which is
an iterative algorithm that successively removes previously estimated
multipath components from the received signal to estimate new
components. Note that Taub\"{o}ck~\emph{et~al.~}
\cite{Taubock:2008,Taubock:2010p255} also consider compressed sensing
to estimate the OFDM channel coefficients. However, their interest is
not in the estimation of delays/Doppler, but in the frequency/time
channel coefficients.
In this paper we begin with a continuous-time model of the transmitted
OFDM signal and derive the received matched filtered signal from first
principles. Assuming the delay-spread of the channel does not exceed
the cyclic-prefix and the pass-band of the receive/transmit filters
exceed the signal bandwidth (with negligible pass-band ripple), we
show that the resulting frequency domain channel coefficients can be
represented as the superimposition of two-dimensional (2-D) complex
sinusoids, where each 2-D frequency is proportional to the delay and
Doppler of each multipath component. Similar observations have been
made by Wong and Evans~\cite{Wong:2008:pp1601-1615,Wong:2005:p2259}
although without detailed justification. Under a similar setting they
consider estimation using only OFDM pilot symbols and propose channel
prediction algorithms based on the estimation of channel parameters
via a rotational invariance technique. Using this method, they
reformulate the problem as a one-dimensional estimation problem.
Parameter estimation of 2-D sinusoids in a general setting has been
studied extensively many years prior to the work of Wong and
Evans~\cite{Wong:2008:pp1601-1615,Wong:2005:p2259}. Estimation
methods and the Cram\'{e}r-Rao lower bound (CRLB) for the single 2-D
sinusoid case were investigated by Chien~\cite{Chien:1981thesis}. Kay
and Nekovei~\cite{Kay:1990p1807} proposed a low complexity estimator
that operates on the phase of the noisy 2-D sample data. For the
estimation of the superposition of multiple 2-D sinusoids: Bresler and
Macovski employ a 2-D version of Prony's
method~\cite{Bresler:1986p1081}; Rao \emph{et
al.}~\cite{Rao:1994p1795} use a similar polynomial rooting approach;
and recently Kliger and Francos~\cite{Kliger:2005p2563} consider
maximum-likelihood estimation with a maximum a-posteriori (MAP) model
order selection rule for the case where the number of sinusoids is
unknown.
In this paper we concentrate purely on the estimation of the complex
amplitude, delay and Doppler of each multipath tap, assuming the
number of taps is known. Our results can be straightforwardly extended
to the case where the number of taps is unknown using well-known
\emph{model-order selection} methods~\cite{Stoica:2004p36}. The
maximum-likelihood approach requires the solution to a
multi-dimensional nonlinear least-squares estimation
problem~\cite{Pereyra:1967p27} and hence has complexity that is
prohibitive in
practice~\cite{Bresler:1986p1081,Rao:1994p1795,Kliger:2005p2563}. We
propose a low-complexity algorithm based on a two-stage process:
first, an initial estimation, followed by a refinement procedure. In
the same spirit
as~\cite{Thomas:2003p74,Kliger:2005p2563,Berger:2008p2384} the initial
estimation algorithm is based on successive cancellation, whereby
multipath components are subtracted from the original signal after
they are detected. In each iteration, the delay/Doppler is estimated
using periodogram search~\cite{Chien:1981thesis,Kay:1990p1807} via a
2-D bisection algorithm. The multipath complex amplitudes are then
obtained via standard linear least-squares
estimation~\cite{Boyd:2004book}. Moreover, we show that this secondary
problem can be written in terms of the \emph{ambiguity
function}~\cite{Skolnik:2002book}. Once initial estimates have been
obtained, we then propose an iterative refinement algorithm based on
parallel cancellation. Each iteration of the refinement involves
subtracting all multipath components from the received signal except
the component of interest, which is re-estimated using the 2-D
bisection algorithm. This refinement process yields significant
improvements over the standard successive cancellation approach. We
show via Monte-Carlo simulations that this refinement algorithm
achieves performance very close to the CRLB for single 2-D sinusoid
estimation.
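The coarse stage of the delay/Doppler search can be sketched as a zero-padded 2-D periodogram peak search, standing in for the 2-D bisection refinement described above. The signal below is a synthetic single-component example with assumed normalized frequencies, not the paper's exact setup:

```python
import numpy as np

rng = np.random.default_rng(0)
L, K = 16, 32
f1, f2 = 0.1875, 0.25            # true normalized 2-D frequencies (assumed)
l, k = np.meshgrid(np.arange(L), np.arange(K), indexing="ij")
H = np.exp(2j * np.pi * (f1 * l + f2 * k))          # single 2-D sinusoid
H = H + 0.05 * (rng.standard_normal((L, K))
                + 1j * rng.standard_normal((L, K)))  # additive noise

# Zero-padded 2-D periodogram; the argmax bin gives the coarse estimate,
# which a bisection/refinement stage would then polish.
pad = 8
P = np.abs(np.fft.fft2(H, s=(pad * L, pad * K))) ** 2
i, j = np.unravel_index(np.argmax(P), P.shape)
f1_hat, f2_hat = i / (pad * L), j / (pad * K)
```

The padded grid bounds the coarse error to one bin in each dimension; the refinement stage then narrows the search interval around this bin rather than densifying the FFT further.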
The remainder of our paper is organised as follows. In
Section~\ref{sec:system_model} we state the system model and derive
from first principles the received match filtered frequency domain
OFDM symbols. In Section~\ref{sec:ambiguity} we derive the transmit
signal ambiguity function. Then in Section~\ref{sec:estimator} we
present our proposed estimation algorithm and enhanced refinement
process. Simulation results are presented in
Section~\ref{sec:performance}. Finally, concluding remarks are given
in Section~\ref{sec:conclusion}.
\section{System Model} \label{sec:system_model}
Consider a $K$ subcarrier OFDM system, where packets of length $L$
OFDM symbols are transmitted. Let $\mat{X} \in \mathbb{C}^{L \times K}$
denote a packet of complex OFDM symbols. Thus $X_{l,k}$, the $l,k$th
element of $\mat{X}$, denotes the $l$th symbol transmitted on
subcarrier $k$, for $l=1,\ldots,L$ and $k = 1,\ldots,K$. In practical
OFDM systems, a certain number of \emph{null} subcarriers are employed to
simplify receiver design~\cite{Vannee:1999book}. To incorporate this
feature, we let $\mathcal{K}$ denote the set of null subcarrier
indices. Thus, $X_{l,k} = 0$ for all $k \in \mathcal{K}$ and $l =
1,\ldots,L$. For all other subcarriers, i.e. $k \notin \mathcal{K}$,
we assume $X_{l,k} \in \mathcal{X}$, where $\mathcal{X} \subset \mathbb{C}$
is an arbitrary complex constellation. These symbols are drawn
randomly, independently and uniformly from $\mathcal{X}$, which is
normalised to have unit average energy. Thus $\mathbb{E} [|X_{l,k}|^2] = 1$
for $k \notin \mathcal{K}$ and $\mathbb{E} [X_{l,k} X_{n,m}^*] = 0$ for any
$n \neq l$ or $m \neq k$. The receiver is assumed to have complete
knowledge of the transmitted symbols $X_{l,k}$, e.g. a pilot/training
signal, or from the feedback of error-free decoder decisions.
Let $x(t) = \sum_{l=1}^{L} x_l(t)$ denote the complex baseband
continuous-time transmitted OFDM signal, where
\begin{equation}
x_l(t) = \frac{1}{\sqrt{KL}}\sum_{k=1}^{K} X_{l,k} e^{j 2 \pi (k-1-\lfloor K/2 \rfloor) (t - T_{\rm cp})/T} w(t - (l-1) T_d), \label{eq:tx_ofdm_sym}
\end{equation}
is the $l$th OFDM symbol, $T_d$ is the OFDM symbol duration (seconds),
$1/T$ is the subcarrier spacing (Hz), $T_{\rm cp} = T_d - T$
is the cyclic prefix duration (seconds), and $w(t)$ is a windowing
function such that
\begin{equation} \label{eq:window_fun}
w(t) =
\begin{cases}
\tilde{w}(t) & 0 \leq t \leq T_d \\
0 & \text{otherwise},
\end{cases}
\end{equation}
and $\int_{0}^{T_d} |w(t)|^2 \, dt = 1$. %
A simple choice of windowing function is $\tilde{w}(t) =
1/\sqrt{T_d}$. Note that the assumption $w(t) = 0$ for $t \notin
(0,T_d)$ is not necessarily required, but we assume this for
simplicity. In practice, \eqref{eq:tx_ofdm_sym} is implemented in the
discrete time domain via the inverse discrete Fourier transform
(IDFT)~\cite{Vannee:1999book}.
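In discrete time, the modulator of \eqref{eq:tx_ofdm_sym} reduces to an IDFT plus cyclic-prefix insertion, and the receiver's matched filter to a DFT after discarding the prefix. A minimal sketch over an ideal (noiseless, distortion-free) link, with windowing and normalization simplified:

```python
import numpy as np

K, Ncp = 64, 16
rng = np.random.default_rng(1)
X = np.exp(2j * np.pi * rng.integers(0, 4, K) / 4)  # one row of QPSK symbols

# Transmitter: IDFT (ifftshift realizes the k - 1 - floor(K/2) subcarrier
# centering) followed by cyclic-prefix insertion.
x = np.fft.ifft(np.fft.ifftshift(X))
tx = np.concatenate([x[-Ncp:], x])

# Receiver over an ideal channel: drop the prefix and apply the DFT.
X_hat = np.fft.fftshift(np.fft.fft(tx[Ncp:]))
```

Over an ideal channel the demodulated symbols reproduce the transmitted ones exactly; with the multipath channel of \eqref{eq:cir}, each subcarrier would instead be scaled by the corresponding frequency-domain channel coefficient.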
We assume transmit and receive filter impulse responses $g_{\rm T}(t)$
and $g_{\rm R}(t)$ respectively, and let $g(t) \triangleq
\int_{-\infty}^{ \infty} g_{\rm T}(u) g_{\rm R}(t-u) \, du $ denote
the combined transmit/receive filter response. Thus,
using~\eqref{eq:cir}, we write the overall channel response as
\begin{equation}
h(t,\tau) = \int_{-\infty}^{\infty} g(u) c(t,\tau-u) \, du = \sum_{p=1}^{P} a_p e^{-j 2 \pi \nu_p t} g(\tau - \tau_p). \label{eq:chan_response}
\end{equation}
Application of the overall channel response~\eqref{eq:chan_response}
to~\eqref{eq:tx_ofdm_sym}, plus additive noise, yields the received
continuous-time baseband signal,
\begin{equation}
y(t) = \sum_{l'} \int_{-\infty}^{\infty} x_{l'}(t-\tau) h(t,\tau) \, d \tau + z(t),
\end{equation}
where $z(t) = \int_{-\infty}^{\infty} \tilde{z}(t-u) g_{\rm R} (u) \,
du$, and $\tilde{z}(t)$ is an additive white Gaussian noise (AWGN)
process. Assuming perfect OFDM symbol synchronism, the receiver
discards the cyclic prefix and performs matched filtering against the
transmitted subcarrier sinusoids, i.e.
\begin{equation}
Y_{l,k} = \frac{1}{\sqrt{KL}}\int_{T_{\rm cp} + (l-1) T_d}^{lT_d} y(t) w^*(t - (l-1)T_d) e^{-j 2 \pi (k-1-\lfloor K/2 \rfloor ) (t-T_{\rm cp})/T} \, dt, \label{eq:matched_filt_out}
\end{equation}
for $k = 1,\ldots,K$ and $l = 1,\ldots,L$. Note that in practice,
$Y_{l,k}$ is obtained via the discrete Fourier transform
(DFT)~\cite{Vannee:1999book}. We assume the pass-bands of the filters
$g_{\rm T}$ and $g_{\rm R}$ exceed the signal bandwidth. In addition,
we assume $\max_p \tau_p < T_{\rm cp}$ and $\max_p |\nu_p| < 1/T$, so
that inter-symbol interference (ISI) and inter-carrier interference
(ICI) can be considered negligible. Under these assumptions, in
Appendix~\ref{app:rx_matched_filt} we show that the matched filtered
output~\eqref{eq:matched_filt_out} can be written in matrix form
\begin{equation}
\mat{Y} = \mat{H} \odot \mat{X} + \mat{Z}, \label{eq:chan_model}
\end{equation}
where $\odot$ denotes the element-wise (Hadamard) product, $\mat{Y}
\in \mathbb{C}^{L \times K}$ is the received matrix of filtered noisy OFDM
symbols, $\mat{Z} \in \mathbb{C}^{ L \times K}$ is a matrix of independent
identically distributed (i.i.d.) zero mean complex Gaussian random
variables with variance $\sigma^2$, and $\mat{H} \in \mathbb{C}^{L \times K}$
are the frequency domain channel coefficients,
\begin{equation} \label{eq:channel_coeffs}
H_{l,k} = \sum_{p=1}^{P} a_p e^{-j 2 \pi \nu_p T_d (l-1)} e^{-j 2 \pi (k-1-\lfloor K/2 \rfloor) \tau_p/T}.
\end{equation}
In relation to~\eqref{eq:chan_model}, we define the signal-to-noise
ratio (SNR) as $\mathsf{snr} \triangleq \mathbb{E} \left[ \| \mat{X} \|^2
\right] / (L \sigma^2) = ({K-|\mathcal{K}|})/{\sigma^2}$, where $\|
\cdot \|$ denotes the Frobenius norm~\cite{Horn:1985book}.
From inspection of~\eqref{eq:channel_coeffs}, we see that $\mat{H}$ is
simply a superposition of 2-dimensional (2-D) complex exponential
signals. We may also express $\mat{H}$ as the matrix product
\begin{equation}
\mat{H} = \mat{\Psi}(\mat{\nu}) \mathrm{diag}(\mat{a}) \mat{\Phi}^{\dagger}(\mat{\tau}),
\end{equation}
where $\mathrm{diag}(\mat{a})$ denotes a $P \times P$ diagonal matrix with diagonal
entries $\mat{a} = (a_1,\ldots,a_P)$, and
\begin{align}
\Psi_{l,p} (\mat{\nu}) &= \psi_l(\nu_p) \triangleq e^{-j 2 \pi (l-1) \nu_p T_d } \\
\Phi_{k,p} (\mat{\tau}) &= \phi_k(\tau_p) \triangleq e^{j 2 \pi (k-1-\lfloor K/2 \rfloor) \tau_p/T },
\end{align}
for $p = 1,\ldots,P$, $l=1,\ldots,L$ and $k = 1,\ldots,K$. As we shall
see later, the separation of the parameters in this matrix form will
simplify the development of our estimation algorithms.
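As a quick numerical sanity check of this factorisation (a sketch; the dimensions and tap parameters below are arbitrary illustrative values, not taken from this paper), a direct evaluation of~\eqref{eq:channel_coeffs} agrees with the product $\mat{\Psi}(\mat{\nu})\,\mathrm{diag}(\mat{a})\,\mat{\Phi}^{\dagger}(\mat{\tau})$:

```python
import numpy as np

rng = np.random.default_rng(0)
K, L, P = 8, 4, 3                        # toy dimensions (illustrative only)
T, Td = 6.4e-6, 8e-6                     # 1/T subcarrier spacing, symbol duration
a   = rng.standard_normal(P) + 1j*rng.standard_normal(P)   # complex tap gains
tau = rng.uniform(0, 200e-9, P)          # delays within the cyclic prefix
nu  = rng.uniform(-500, 500, P)          # Doppler shifts (Hz)

l = np.arange(L)[:, None]                # l - 1
k = np.arange(K)[None, :] - K//2         # k - 1 - floor(K/2)

# Direct evaluation of H_{l,k} in eq. (channel_coeffs)
H_direct = sum(a[p]*np.exp(-2j*np.pi*nu[p]*Td*l)*np.exp(-2j*np.pi*k*tau[p]/T)
               for p in range(P))

# Factored form H = Psi(nu) diag(a) Phi(tau)^dagger
Psi = np.exp(-2j*np.pi*np.outer(np.arange(L), nu)*Td)               # L x P
Phi = np.exp(2j*np.pi*np.outer(np.arange(K) - K//2, tau)/T)         # K x P
H_factored = Psi @ np.diag(a) @ Phi.conj().T

assert np.allclose(H_direct, H_factored)
```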
In the analysis that is to follow, we will make use of the vectorised
version of~\eqref{eq:chan_model}. Let $\mat{y} = \mathrm{vec}(\mat{Y}) =
(Y_{1,1}, \ldots, Y_{L,1}, Y_{1,2}, \ldots, Y_{L,2}, \ldots, Y_{1,K},
\ldots, Y_{L,K})'$, and $\mat{z} = \mathrm{vec}(\mat{Z})$ then
\begin{equation}
\mat{y} = \mat{\Omega}(\mat{\tau},\mat{\nu},\mat{X}) \mat{a} + \mat{z}, \label{eq:vec_model}
\end{equation}
where the $KL \times P$ matrix $\mat{\Omega}$ is a function of
$\mat{\tau}$, $\mat{\nu}$, and $\mat{X}$ as follows,
\begin{equation} \label{eq:omega_mat}
\mat{\Omega}(\mat{\tau},\mat{\nu},\mat{X}) = \left(
\begin{matrix}
X_{1,1} \psi_{1}(\nu_1) \phi^*_1(\tau_1) & X_{1,1} \psi_{1}(\nu_2) \phi^*_1(\tau_2) & \ldots & X_{1,1} \psi_{1}(\nu_P) \phi^*_1(\tau_P) \\
\vdots & \vdots & & \vdots \\
X_{L,1} \psi_{L}(\nu_1) \phi^*_1(\tau_1) & X_{L,1} \psi_{L}(\nu_2) \phi^*_1(\tau_2) & \ldots & X_{L,1} \psi_{L}(\nu_P) \phi^*_1(\tau_P) \\
X_{1,2} \psi_{1}(\nu_1) \phi^*_2(\tau_1) & X_{1,2} \psi_{1}(\nu_2) \phi^*_2(\tau_2) & \ldots & X_{1,2} \psi_{1}(\nu_P) \phi^*_2(\tau_P) \\
\vdots & \vdots & & \vdots \\
X_{L,2} \psi_{L}(\nu_1) \phi^*_2(\tau_1) & X_{L,2} \psi_{L}(\nu_2) \phi^*_2(\tau_2) & \ldots & X_{L,2} \psi_{L}(\nu_P) \phi^*_2(\tau_P) \\
\vdots & \vdots & & \vdots \\
X_{1,K} \psi_{1}(\nu_1) \phi^*_K(\tau_1) & X_{1,K} \psi_{1}(\nu_2) \phi^*_K(\tau_2) & \ldots & X_{1,K} \psi_{1}(\nu_P) \phi^*_K(\tau_P) \\
\vdots & \vdots & & \vdots \\
X_{L,K} \psi_{L}(\nu_1) \phi^*_K(\tau_1) & X_{L,K} \psi_{L}(\nu_2) \phi^*_K(\tau_2) & \ldots & X_{L,K} \psi_{L}(\nu_P) \phi^*_K(\tau_P)
\end{matrix}
\right),
\end{equation}
where $(\cdot)^*$ denotes the complex conjugate.
\section{Ambiguity Function} \label{sec:ambiguity}
The ambiguity function of the transmitted signal $x(t)$ is defined as
the inner product of the signal with a delayed, frequency shifted
version of itself~\cite{Skolnik:2002book}
\begin{equation} \label{eq:ambig_def}
A_x(\tau,\nu) = \int_{-\infty}^{\infty} x(t) x^*(t-\tau) e^{-j 2 \pi \nu t} \, d t.
\end{equation}
In the context of OFDM communications the ambiguity function has been
used often as a tool for pulse design and
optimisation~\cite{Kozek:1998p1579,Liu:2004p2583}. In radar systems,
the ambiguity function plays an important role in determining target
range and velocity resolution~\cite{Skolnik:2002book}. In this section
we derive the ambiguity function of the transmitted OFDM signal and
highlight important characteristics that will affect the delay/Doppler
estimation problem. Moreover, as we shall see later, parts of the
estimation problem can be written succinctly in terms of the ambiguity
function. In this direction, substitution of~\eqref{eq:tx_ofdm_sym}
into~\eqref{eq:ambig_def} yields the following result.
\begin{theorem}[OFDM Ambiguity Function]
For a general windowing function $w(t)$, let $A_w(\tau,\nu)$ denote
its ambiguity function. The ambiguity function of the OFDM
signal~\eqref{eq:tx_ofdm_sym} is,
\begin{align}
A_x(\tau,\nu) = \frac{e^{-j \pi K \tau/T}}{KL} \sum_{l,k,l',k'} &X_{l,k} X^*_{l',k'} e^{-j 2 \pi (k-k')T_{\rm cp}/T} e^{j 2 \pi (k'-1) \tau/T} e^{-j 2 \pi T_d (l-1) \left(\nu - \frac{k-k'}{T}\right)} \notag \\
& \times A_{w}(\tau + (l'-l)T_d, \nu + (k'-k)/T). \label{eq:ambig_fun}
\end{align}
\end{theorem}
Due to the quadruple summation, numerical evaluation
of~\eqref{eq:ambig_fun} is computationally demanding. However, under
some common practical design assumptions we can make further
simplifications. Firstly, the windowing function is usually designed
such that $A_w(\tau,\nu) \approx 0$ for $|\tau| > T_{d}$. Window
functions of the form~\eqref{eq:window_fun} will have this
property. Thus the terms with $l' \neq l$ in the summation
of~\eqref{eq:ambig_fun} are approximately zero. Secondly, although
$A_w(\tau,\nu)$ cannot be considered negligible for $|\nu| >
(k'-k)/T$, since $X_{l,k}$ are i.i.d. with zero mean and unit variance
by assumption (except for the null subcarriers, which have zero
power), the summation of terms over $k \neq k'$ will approach zero for
large $K$ and/or $L$. In addition, since we are primarily concerned
with delay and Doppler in the region $ 0 \leq \tau \leq T_{\rm cp}$
and $|\nu| \ll 1/T$, where there is negligible variation in
$A_w(\tau,\nu)$, we may assume $A_w(\tau,\nu) \approx A_w(0,0)$, which
only introduces a constant phase offset, since $|A_w(0,0)|^2 =
1$. Hence we ignore the complex scaling effects of $A_w(\tau,\nu)$ and
approximate~\eqref{eq:ambig_fun} as
\begin{align} \label{eq:ambig_fun_approx}
A_x(\tau,\nu) \approx \tilde{A}_x(\tau,\nu) \triangleq \frac{1}{KL} \sum_{k,l} |X_{l,k}|^2 e^{-j 2 \pi (l-1) \nu T_d} e^{j 2 \pi (k-1-\lfloor K/2 \rfloor) \tau/T}.
\end{align}
In light of~\eqref{eq:ambig_fun}, note
that~\eqref{eq:ambig_fun_approx} is also the ambiguity function when
ISI and ICI can be considered negligible.
For phase shift keying (PSK) modulation with no null subcarriers,
i.e. $|X_{l,k}| = 1$ and $\mathcal{K} = \emptyset$, the geometric
summation formula shows that the ambiguity
function~\eqref{eq:ambig_fun_approx} further simplifies to
\begin{equation}
\tilde{A}_x(\tau,\nu) \approx e^{-j \pi K \tau/T} \mathrm{sinc} \left( \pi K \frac{\tau}{T} \right) \mathrm{sinc} \left( \pi L \nu T_d \right),
\label{eq:ofdm_ambig_psk}
\end{equation}
where $\mathrm{sinc}(x) = \sin(x)/x$. Note that~\eqref{eq:ofdm_ambig_psk} is
also equal to the expectation of~\eqref{eq:ambig_fun} over the
transmitted symbols $X_{l,k}$, i.e. it is the expected ambiguity
function, $\mathbb{E} \left[ A_x(\tau,\nu) \right]$, for an arbitrary signal
constellation $\mathcal{X}$ with zero mean and unit variance. In many
OFDM standards, subcarrier $k = \lfloor
K/2 \rfloor +1$ is a null subcarrier. For these special
cases~\eqref{eq:ofdm_ambig_psk} becomes,
\begin{equation}
\tilde{A}_x(\tau,\nu) \approx e^{-j \pi K \tau/T} \cos \left[ \pi (K/2 + 1)(\tau/T) \right] \mathrm{sinc} ( \pi \tau K/(2T)) \mathrm{sinc} (\pi \nu T_d L). \label{eq:ofdm_ambig_psk_zero_dc}
\end{equation}
From the above expressions~\eqref{eq:ofdm_ambig_psk}
and~\eqref{eq:ofdm_ambig_psk_zero_dc}, we see that the $\mathrm{sinc}$ terms
introduce sidelobes in the ambiguity function. Interestingly, the
sidelobes are de-coupled in time and frequency. As an example, the
ambiguity function of the 802.11a standard~\cite{ieee802.11a,OhaPet99} is plotted
in Fig.~\ref{fig:ambig}, i.e. $K=53$ subcarriers, with a null
subcarrier at $k = \lfloor K/2 \rfloor + 1$, $T_d = 8$ $\mu$sec and $T
= 6.4$ $\mu$sec.
\begin{figure}[htbp]
\centering
\subfigure[$K=52$, $L=90$]{\includegraphics[width=0.47\columnwidth]{ambig_K52_L90_zero_dc} \label{fig:ambig_K52_L90}}
\subfigure[$K=52, L=242$]{\includegraphics[width=0.47\columnwidth]{ambig_K52_L242_zero_dc} \label{fig:ambig_K52_L242}}
\caption{Magnitude contour plot of the ambiguity
function~\eqref{eq:ofdm_ambig_psk_zero_dc} of an 802.11a OFDM
system with PSK modulation, $T = 6.4$ $\mu$sec, $T_d = 8$
$\mu$sec}
\label{fig:ambig}
\end{figure}
From Fig.~\ref{fig:ambig}, as predicted
by~\eqref{eq:ofdm_ambig_psk_zero_dc}, we see that increasing $L$ improves
the Doppler resolution, but not the delay resolution, which is
dependent only on $K$ and the subcarrier spacing ($1/T$).
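These resolution properties can be verified numerically. The following sketch (assumed PSK with no null subcarriers; $K$, $T$, $T_d$ and the two values of $L$ mirror the figure) evaluates~\eqref{eq:ambig_fun_approx} directly and confirms a unit peak at the origin, a first delay null at $\tau = T/K$ that is independent of $L$, and a first Doppler null at $\nu = 1/(LT_d)$ that shrinks as $L$ grows:

```python
import numpy as np

def ambig_approx(tau, nu, K, L, T, Td):
    """Eq. (ambig_fun_approx) with |X_{l,k}| = 1 and no null subcarriers."""
    l = np.arange(L)
    kprime = np.arange(K) - K//2
    return (np.exp(-2j*np.pi*l*nu*Td).sum()
            * np.exp(2j*np.pi*kprime*tau/T).sum()) / (K*L)

K, T, Td = 52, 6.4e-6, 8e-6
for L in (90, 242):
    assert np.isclose(abs(ambig_approx(0, 0, K, L, T, Td)), 1.0)   # unit peak
    assert abs(ambig_approx(T/K, 0, K, L, T, Td)) < 1e-10          # delay null at T/K
    assert abs(ambig_approx(0, 1/(L*Td), K, L, T, Td)) < 1e-10     # Doppler null at 1/(L*Td)
```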
\section{Multipath Parameter Estimation} \label{sec:estimator}
Our primary objective is to estimate $\mat{a} = (a_1,\ldots,a_P)$,
$\mat{\tau} = (\tau_1,\ldots,\tau_P) $ and $\mat{\nu} =
(\nu_1,\ldots,\nu_P)$ in~\eqref{eq:cir} from the received noisy
symbols $\mat{Y}$~\eqref{eq:matched_filt_out} given perfect knowledge
of $\mat{X}$. In this section, without loss of generality, for brevity
of notation, we assume the OFDM system has no null subcarriers,
i.e. $\mathcal{K} = \emptyset$. Using~\eqref{eq:vec_model}, the
maximum likelihood (ML) approach is to solve the following
\begin{equation}
(\mat{\hat{a}}, \mat{\hat{\tau}}, \mat{\hat{\nu}}) = \arg \min_{\mat{a},\mat{\tau},\mat{\nu}} \| \mat{y} - \mat{\Omega}(\mat{\tau},\mat{\nu},\mat{X}) \mat{a} \|^2, \label{eq:ml_problem}
\end{equation}
which is a non-linear least squares minimisation problem. The
computational complexity can be reduced by replacing $\mat{a}$ with
its least squares estimate. That is, for a given $\mat{\tau}$ and
$\mat{\nu}$ the ML estimate of $\mat{a}$ is a linear least squares
minimisation problem, which has solution~\cite{Boyd:2004book}
\begin{equation}
\mat{\hat{a}} = \left( \mat{\Omega}^{\dagger}\mat{\Omega} \right)^{-1} \mat{\Omega}^{\dagger} \mat{y}, \label{eq:lse_a}
\end{equation}
where we have dropped the dependence on $\mat{X}$, $\mat{\tau}$ and
$\mat{\nu}$ for brevity of notation. Hence
substituting~\eqref{eq:lse_a} for $\mat{a}$ in~\eqref{eq:ml_problem}
results in the reduced problem
\begin{equation}
(\mat{\hat{\tau}}, \mat{\hat{\nu}}) = \arg \max_{\mat{\tau},\mat{\nu}} \; \mat{y}^{\dagger} \mat{\Omega} (\mat{\Omega}^{\dagger} \mat{\Omega})^{-1} \mat{\Omega}^{\dagger} \mat{y}. \label{eq:reduced_ml_problem2}
\end{equation}
It is known that problems~\eqref{eq:reduced_ml_problem2}
and~\eqref{eq:ml_problem} are equivalent,
i.e.~\eqref{eq:reduced_ml_problem2} followed by~\eqref{eq:lse_a} is
also the ML
solution~\cite{Golub:1973p413,Bresler:1986p1081}. Unfortunately,~\eqref{eq:reduced_ml_problem2}
is in general multimodal, rendering the multidimensional search for a
global extremum computationally prohibitive.
Before presenting our reduced-complexity suboptimal solution, let us
first make some useful observations
about~\eqref{eq:reduced_ml_problem2}. Let $\mat{R} =
\mat{\Omega}^{\dagger} \mat{\Omega}$ and $\mat{w} =
\mat{\Omega}^{\dagger} \mat{y}$. From~\eqref{eq:omega_mat}, it is
straightforward to show,
\begin{align}
R_{ij} &= KL \tilde{A}_x(\tau_i-\tau_j, \nu_j-\nu_i) \label{eq:mat_R} \\
w_i &= \mat{\psi}^{\dagger}(\nu_i) \left( \mat{Y} \odot \mat{X}^* \right) \mat{\phi}(\tau_i), \label{eq:vec_w}
\end{align}
for $i,j = 1,\ldots,P$, where $\mat{\psi}(\nu_i)$ and
$\mat{\phi}(\tau_i)$ denote column $i$ of the matrices
$\mat{\Psi}$ and $\mat{\Phi}$ respectively, and $\mat{A}^*$ denotes
the element-wise conjugate of the matrix $\mat{A}$. Thus, rather than
computing $\mat{R} = \mat{\Omega}^{\dagger} \mat{\Omega}$ directly via
standard matrix operations (requiring on the order of $KLP(P+1)/2$
complex multiply-accumulate operations), to reduce complexity,
$\mat{R}$ can be evaluated using the ambiguity function via a look-up
table. Moreover, for the special case of PSK
modulation,~\eqref{eq:ofdm_ambig_psk} implies we only require the
evaluation of a $\mathrm{sinc}(x)$ function.
For the special case of $P=1$, the ML
solution~\eqref{eq:reduced_ml_problem2} becomes
\begin{align}
(\hat{\tau}_1, \hat{\nu}_1) &= \arg \max_{\tau,\nu} \; \left| \mat{\psi}^{\dagger}(\nu) \left( \mat{Y} \odot \mat{X}^* \right) \mat{\phi}(\tau) \right|^2, \label{eq:ml_single_tap}
\end{align}
after which the corresponding complex gain ML estimates can be
determined using~\eqref{eq:lse_a},
\begin{equation}
\hat{a}_1 = \frac{1}{KL} \mat{\psi}^{\dagger}(\hat{\nu}_1) \left( \mat{Y} \odot \mat{X}^* \right) \mat{\phi}(\hat{\tau}_1).
\end{equation}
We see that the solution to~\eqref{eq:reduced_ml_problem2} corresponds
to the maximum absolute value of the 2-D
periodogram~\cite{Kay:1988book}. Moreover, the CRLB for the estimation
of a single tap multipath channel can be written
as~\cite{Chien:1981thesis}
\begin{align}
\mathrm{var} [\hat{\nu}_1 T_d] \geq \frac{1}{4 \pi^2} \frac{6}{KL(L^2 - 1)} \frac{\sigma^2}{|a_1|^2}, \;\;\;\;\;\;
\mathrm{var} \left[ \hat{\tau}_1/T \right] \geq \frac{1}{4 \pi^2}\frac{6}{KL(K^2 - 1)} \frac{\sigma^2}{|a_1|^2}. \label{eq:crlb_single_tap}
\end{align}
Note that Kay and Nekovei~\cite{Kay:1990p1807} proposed a low
complexity weighted phase averager estimator as an alternative to
solving~\eqref{eq:ml_single_tap}.
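To make the single-tap estimator concrete, here is a small sketch (Python/NumPy; the grid spacing, symbol parameters and tap values are illustrative choices of ours, the truth is placed on the grid, and the channel is left noiseless for clarity) that evaluates the 2-D periodogram of~\eqref{eq:ml_single_tap} and recovers the delay, Doppler and complex gain:

```python
import numpy as np

rng = np.random.default_rng(1)
K, L = 52, 64
T, Td = 6.4e-6, 8e-6
a1, tau1, nu1 = 0.8*np.exp(1j*0.3), 50e-9, 200.0   # single-tap ground truth

l = np.arange(L)[:, None]; kprime = np.arange(K)[None, :] - K//2
X = np.exp(2j*np.pi*rng.integers(0, 4, (L, K))/4)  # known QPSK symbols
Y = a1*np.exp(-2j*np.pi*nu1*Td*l)*np.exp(-2j*np.pi*kprime*tau1/T)*X  # noiseless

# 2-D periodogram over a delay/Doppler grid
taus = np.linspace(0, 200e-9, 101)
nus  = np.linspace(-500, 500, 101)
Psi = np.exp(-2j*np.pi*np.outer(np.arange(L), nus)*Td)            # L x |nus|
Phi = np.exp(2j*np.pi*np.outer(np.arange(K) - K//2, taus)/T)      # K x |taus|
Ups = Psi.conj().T @ (Y*X.conj()) @ Phi
i, j = np.unravel_index(np.argmax(np.abs(Ups)**2), Ups.shape)
tau_hat, nu_hat = taus[j], nus[i]
a1_hat = Ups[i, j]/(K*L)                                          # gain estimate

assert np.isclose(tau_hat, tau1) and np.isclose(nu_hat, nu1)
assert np.isclose(a1_hat, a1)
```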
If we were to use~\eqref{eq:ml_single_tap} when multiple taps are
present ($P>1$), then
\begin{equation}
\mat{\psi}^{\dagger}(\nu) \left( \mat{Y} \odot \mat{X}^* \right) \mat{\phi}(\tau) = KL \sum_p a_p \tilde{A}_x(\tau - \tau_p, \nu_p-\nu) + \mat{\psi}^{\dagger}(\nu) \left[ \mat{Z} \odot \mat{X}^*\right] \mat{\phi}(\tau), \notag
\end{equation}
which is the superposition of complex-scaled, delay and frequency
shifted ambiguity functions, plus an additive Gaussian noise term. We
see that detection and estimation of a particular tap will be
significantly affected by the main lobe and sidelobes from the
ambiguity functions of the remaining taps. This motivates a successive
cancellation approach whereby the signal contribution in $\mat{Y}$
induced by a multipath tap is removed after it is detected, thus
allowing subsequent taps to be detected and estimated. Successive
cancellation algorithms have found widespread use in a number of
communication scenarios requiring the recovery of multiple
superimposed signals. In particular, interference cancellation
(successive and parallel forms) is the basis of practical
low-complexity multi-user decoding algorithms, which attain close to
single-user bit error rate performance~\cite{Schlegel:2006book}. In
our case, the superimposed signals are not signals from multiple
users, but time/frequency shifted versions of the same signal. However
the same principle can still be applied, and as we will see later,
achieves performance close to the CRLB of a single tap channel
(provided the taps are sufficiently separated in either delay or
Doppler). In this direction, the first algorithm we propose is based
on successive cancellation and is employed to find an initial estimate
of the delay, Doppler and complex gain of each tap. The second
algorithm we propose is based on parallel cancellation and is employed
to refine the initial estimates. Integral to both of these algorithms
is a search for the largest absolute value of a 2-D
periodogram~\cite{Kay:1988book}, and we propose a low-complexity 2-D
bisection algorithm for doing this. A detailed description of each of
these algorithms is given as follows.
\subsection{Initial Estimation}
Algorithm~\ref{alg:init_sc} describes our proposed initial successive
cancellation procedure. First we initialise the residual error matrix
$\mat{E}^{(1)}$ equal to the received noisy OFDM symbols $\mat{Y}$. At
iteration $p=1,2,\dots,P$: we find $\hat{\tau}_p$ and $\hat{\nu}_p$
that correspond to the maximum absolute value squared of the 2-D
periodogram of $\mat{E}^{(p)}$; construct the $p \times p$ matrix
\begin{equation*}\mat{R}^{(p)} =
\begin{pmatrix}
R_{11} & R_{12} &\dots & R_{1p} \\
\vdots & \ddots & & \\
R_{p1} & R_{p2} &\dots & R_{pp}
\end{pmatrix}
\end{equation*}
and length $p$ vector $\mat{w}^{(p)}=(w_1,w_2,\dots,w_p)$ by substituting
the delay and Doppler estimates
$\hat{\mat{\tau}}^{(p)}=(\hat{\tau}_1,\ldots, \hat{\tau}_p)$ and
$\hat{\mat{\nu}}^{(p)}=(\hat{\nu}_1, \ldots, \hat{\nu}_p)$
into~\eqref{eq:mat_R} and~\eqref{eq:vec_w}; re-estimate the length $p$
complex gain vector $\hat{\mat{a}}^{(p)} = (\hat{a}_1 \ldots,
\hat{a}_p) = (\mat{R}^{(p)})^{-1} \mat{w}^{(p)}$; and finally subtract
the signal contributions of all $p$ estimated multipath components
from $\mat{Y}$, which becomes the residual error matrix for the next
iteration.
\begin{algoendfloat}
\caption{Initial estimation via successive cancellation.}
\label{alg:init_sc}
\begin{algorithmic}[1]
\STATE $\mat{E}^{(1)} = \mat{Y}$ \label{alg:init_sc:init}
\FOR{$p = 1,\ldots,P$}
\STATE $(\hat{\tau}_p, \hat{\nu}_p) = \arg \max_{\tau,\nu} \; \left| \mat{\psi}^{\dagger}(\nu) \left( \mat{E}^{(p)} \odot \mat{X}^* \right) \mat{\phi}(\tau) \right|^2 $ \label{alg:init_sc:line:periodogram}
\STATE Construct $\mat{R}^{(p)}$ and $\mat{w}^{(p)}$ using \eqref{eq:mat_R} and \eqref{eq:vec_w} with $\hat{\tau}_1,\ldots,\hat{\tau}_p$ and $\hat{\nu}_1,\ldots, \hat{\nu}_p$.
\STATE $\mat{\hat{a}}^{(p)} = (\mat{R}^{(p)})^{-1} \mat{w}^{(p)} $
\STATE $\mat{E}^{(p+1)} = \mat{Y} - \left[ \mat{\Psi}(\mat{\hat{\nu}}^{(p)}) \mathrm{diag}(\mat{\hat{a}}^{(p)}) \mat{\Phi}^{\dagger}(\mat{\hat{\tau}}^{(p)}) \right] \odot \mat{X} $ \label{alg:init_sc:line:subtract}
\ENDFOR
\end{algorithmic}
\end{algoendfloat}
Typically Algorithm~\ref{alg:init_sc} will estimate the multipath
starting from the strongest to the weakest tap, i.e. $|\hat{a}_1| >
|\hat{a}_2| > \ldots > |\hat{a}_P |$. Thus, for the case when $P$ is
unknown, an obvious exit criterion is to stop once $|\hat{a}_p| <
\gamma$, where $\gamma$ is a threshold that determines the minimum tap
energy. Alternatively, the algorithm can be modified to incorporate a
model order selection rule~\cite{Stoica:2004p36}.
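A compact sketch of Algorithm~\ref{alg:init_sc} is given below (Python/NumPy). For transparency it forms the columns of $\mat{\Omega}$ explicitly rather than using the ambiguity-function look-up of~\eqref{eq:mat_R}, and it replaces the peak search of line~\ref{alg:init_sc:line:periodogram} with a simple grid evaluation; the two-tap channel and all numerical values are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
K, L = 52, 256
T, Td = 6.4e-6, 8e-6
a   = np.array([1.0, 0.3*np.exp(1j*1.0)])        # tap gains, strongest first
tau = np.array([20e-9, 140e-9])                  # delays (s)
nu  = np.array([-300.0, 250.0])                  # Doppler shifts (Hz)
P = len(a)

l = np.arange(L)[:, None]; kprime = np.arange(K)[None, :] - K//2
X = np.exp(2j*np.pi*rng.integers(0, 4, (L, K))/4)          # known QPSK symbols
H = sum(a[p]*np.exp(-2j*np.pi*nu[p]*Td*l)*np.exp(-2j*np.pi*kprime*tau[p]/T)
        for p in range(P))
Y = H*X + 0.01*(rng.standard_normal((L, K)) + 1j*rng.standard_normal((L, K)))

# Search grid (the true taps lie on it)
taus = np.linspace(0, 200e-9, 101); nus = np.linspace(-500, 500, 101)
PsiG = np.exp(-2j*np.pi*np.outer(np.arange(L), nus)*Td)
PhiG = np.exp(2j*np.pi*np.outer(np.arange(K) - K//2, taus)/T)

def omega_col(t, f):
    """One column of Omega: vec(X odot psi(f) phi(t)^dagger), column-major."""
    return (X*np.outer(np.exp(-2j*np.pi*np.arange(L)*f*Td),
                       np.exp(-2j*np.pi*(np.arange(K) - K//2)*t/T))).ravel('F')

E, Omega = Y.copy(), np.zeros((L*K, 0), complex)
tau_hat, nu_hat = [], []
for p in range(P):
    # Peak of the 2-D periodogram of the current residual
    Ups = np.abs(PsiG.conj().T @ (E*X.conj()) @ PhiG)**2
    i, j = np.unravel_index(np.argmax(Ups), Ups.shape)
    tau_hat.append(taus[j]); nu_hat.append(nus[i])
    # Joint least-squares re-estimate of all gains found so far
    Omega = np.column_stack([Omega, omega_col(taus[j], nus[i])])
    a_hat = np.linalg.solve(Omega.conj().T @ Omega, Omega.conj().T @ Y.ravel('F'))
    # Subtract all estimated components to form the next residual
    E = Y - (Omega @ a_hat).reshape((L, K), order='F')
```

With the taps well separated, the strongest tap is detected first and the least-squares step re-estimates all gains jointly at each iteration, as in the algorithm.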
Note that two simple modifications can be made to
Algorithm~\ref{alg:init_sc} to further reduce complexity. Firstly, in
the main loop, rather than subtracting all multipath contributions of
the previously estimated components from the original signal $\mat{Y}$
to obtain the residual error $\mat{E}^{(p)}$, simply subtract the
contribution of the current estimate from the residual error of the
previous iteration $\mat{E}^{(p-1)}$,
i.e. line~\ref{alg:init_sc:line:subtract} can be replaced with
$\mat{E}^{(p+1)} = \mat{E}^{(p)} - \left[ \hat{a}^{(p)}_{p}
\mat{\psi}(\hat{\nu}_{p}) \mat{\phi}^{\dagger}(\hat{\tau}_p) \right]
\odot \mat{X} $. Secondly, rather than operating on $\mat{Y}$, one
could apply the algorithm on the zero-forcing estimate of $\mat{H}$,
i.e. $\hat{H}_{l,k} = Y_{l,k} X^{*}_{l,k}/ |X_{l,k}|^2$. Thus, in
Algorithm~\ref{alg:init_sc}, one simply replaces $\mat{Y}$ with
$\mat{\hat{H}}$ and the Hadamard product with $\mat{X}$ in
lines~\ref{alg:init_sc:line:periodogram}
and~\ref{alg:init_sc:line:subtract} is no longer required. To
summarize, we can make the following complexity-reducing modifications to
Algorithm~\ref{alg:init_sc}. Line~\ref{alg:init_sc:init}:
$\mat{E}^{(1)} = \hat{\mat{H}}$,
Line~\ref{alg:init_sc:line:periodogram}: $(\hat{\tau}_p, \hat{\nu}_p)
= \arg \max_{\tau,\nu} \; \left| \mat{\psi}^{\dagger}(\nu)
\mat{E}^{(p)} \mat{\phi}(\tau) \right|^2 $, and
Line~\ref{alg:init_sc:line:subtract}: $\mat{E}^{(p+1)} = \hat{\mat{H}}
- \left[ \mat{\Psi}(\mat{\hat{\nu}}^{(p)}) \mathrm{diag}(\mat{\hat{a}}^{(p)})
\mat{\Phi}^{\dagger}(\mat{\hat{\tau}}^{(p)}) \right]$.
\subsection{Estimation Refinement}
It is quite reasonable to rely solely on Algorithm~\ref{alg:init_sc}
to estimate the delay/Doppler. Indeed similar approaches have been
employed in~\cite{Thomas:2003p74,Kliger:2005p2563,Berger:2008p2384},
but without any detailed comparison to theoretical bounds. We find
that the performance of Algorithm~\ref{alg:init_sc} is hampered by
interference from undetected taps, which as we will see later,
introduces a floor in the root mean squared (RMS) error
performance. Therefore we propose a refinement process based on
parallel cancellation whereby, for each iteration, all multipath
components are removed except for the component of interest, which is
subsequently re-estimated. This refinement procedure is described in
detail in Algorithm~\ref{alg:refine}, where $\mat{\hat{\tau}}^{(i)} =
(\hat{\tau}_1^{(i)}, \ldots, \hat{\tau}_P^{(i)}$),
$\mat{\hat{\nu}}^{(i)} = (\hat{\nu}_1^{(i)}, \ldots,
\hat{\nu}_P^{(i)})$ and $\mat{\hat{a}}^{(i)} =
(\hat{a}_1^{(i)},\ldots,\hat{a}_P^{(i)})$ denote the refined estimates
after the $i$'th iteration, and $\mat{\hat{\tau}}^{(0)} =
\mat{\hat{\tau}}$, $\mat{\hat{\nu}}^{(0)} = \mat{\hat{\nu}}$ and
$\mat{\hat{a}}^{(0)} = \mat{\hat{a}}$ are the initial estimates
obtained from Algorithm~\ref{alg:init_sc}. In addition, we let
$\mat{\hat{\tau}}^{(i)}_{\bar{p}}$, $\mat{\hat{\nu}}^{(i)}_{\bar{p}}$
and $\mat{\hat{a}}^{(i)}_{\bar{p}}$ denote the refined
estimates at step $i$ with element $p$ omitted.
\begin{algoendfloat}
\caption{Estimate refinement algorithm.}
\label{alg:refine}
\begin{algorithmic}[1]
\STATE $\mat{\hat{\tau}}^{(0)} = \mat{\hat{\tau}}$,$\mat{\hat{\nu}}^{(0)} = \mat{\hat{\nu}}$ and $\mat{\hat{a}}^{(0)} = \mat{\hat{a}}$
\FOR{$i = 1,\ldots,N$}
\FOR{$p=1,\ldots,P$}
\STATE $\mat{E} = \mat{Y} - \left[ \mat{\Psi}(\mat{\hat{\nu}}^{(i-1)}_{\bar{p}}) \mathrm{diag}(\mat{\hat{a}}^{(i-1)}_{\bar{p}}) \mat{\Phi}^{\dagger}(\mat{\hat{\tau}}^{(i-1)}_{\bar{p}}) \right] \odot \mat{X} $ \label{alg:refine:line:subtract}
\STATE $(\hat{\tau}^{(i)}_p, \hat{\nu}^{(i)}_p) = \arg \max_{\tau,\nu} \; \left| \mat{\psi}^{\dagger}(\nu) \left( \mat{E} \odot \mat{X}^* \right) \mat{\phi}(\tau) \right|^2 $ \label{alg:refine:line:periodogram}
\ENDFOR
\STATE $\mat{\hat{a}}^{(i)} = \mat{R}^{-1}(\mat{\hat{\tau}}^{(i)},\mat{\hat{\nu}}^{(i)}) \mat{w}(\mat{\hat{\tau}}^{(i)},\mat{\hat{\nu}}^{(i)})$
\ENDFOR
\end{algorithmic}
\end{algoendfloat}
Note that rather than refining for a fixed number of iterations,
Algorithm~\ref{alg:refine} can easily be modified to incorporate
an early stopping criterion, e.g. by checking the improvement in the
residual error $\| \mat{E}\|^2$. As previously described for Algorithm~\ref{alg:init_sc}, one could
apply Algorithm~\ref{alg:refine} to the zero-forcing estimate
$\mat{\hat{H}}$, i.e. replace $\mat{Y}$ with $\mat{\hat{H}}$ and
remove the Hadamard product with $\mat{X}$ in
lines~\ref{alg:refine:line:subtract}
and~\ref{alg:refine:line:periodogram}.
\subsection{2-D Bisection Algorithm}
As mentioned earlier, the maximisation step in line
\ref{alg:init_sc:line:periodogram} of Algorithm \ref{alg:init_sc} and
line \ref{alg:refine:line:periodogram} of Algorithm~\ref{alg:refine}
can be solved by finding the maximum absolute value of the 2-D
periodogram~\cite{Kay:1988book}. To perform this operation, we propose
a 2-D bisection approach described as follows. First we assume $ \tau_p
\in (\tau_{\rm min}, \tau_{\rm max})$ and $ \nu_p \in(\nu_{\rm min},
\nu_{\rm max})$ for all $p = 1,\ldots,P$, i.e. the delay/Doppler of
each tap is constrained to lie within predefined intervals. Let
$(\tau^{(i)}_{\rm min}, \tau^{(i)}_{\rm max})$ and $ (\nu^{(i)}_{\rm
min}, \nu^{(i)}_{\rm max})$ denote the search interval at iteration
$i$, and $\mat{\tilde{\tau}}^{(i)}$ and $\mat{\tilde{\nu}}^{(i)}$
denote linearly spaced vectors within these intervals, i.e.
\begin{equation}
\tilde{\tau}_{m}^{(i)} = \tau_{\rm min}^{(i)} + (m-1)\Delta \tau^{(i)} \;\;\;\;\;
\tilde{\nu}_{n}^{(i)} = \nu_{\rm min}^{(i)} + (n-1)\Delta \nu^{(i)}, \label{eq:lin_space}
\end{equation}
for $m = 1,\ldots,M$ and $n = 1,\ldots,N$, where $\Delta \tau^{(i)} =
(\tau_{\rm max}^{(i)} - \tau_{\rm min}^{(i)})/M$ and $\Delta \nu^{(i)}
= (\nu_{\rm max}^{(i)} - \nu_{\rm min}^{(i)})/N$ denote the bin
spacing at the $i$'th iteration. For each iteration of the bisection
algorithm, we find the indices corresponding to the largest peak of $
\mat{\Psi}^{\dagger}(\mat{\tilde{\nu}}^{(i)}) \left[ \mat{Y} \odot
\mat{X}^* \right] \mat{\Phi}( \mat{\tilde{\tau}}^{(i)})$. For the
next iteration, the search interval is then bisected or reduced to a
smaller 2-D region, i.e. $\left( 2 \beta \Delta \tau^{(i)}, 2 \beta
\Delta \nu^{(i)} \right)$, centered at the previous delay/Doppler
indices (typically $\beta \geq 1/2$). A detailed description of the
procedure is given in Algorithm~\ref{alg:bisection}. Note that for
ease of exposition, the bisection process completes after a fixed
number of iterations $N_{\rm bisect}$. The algorithm can easily be
modified to employ an early stopping criterion, e.g. exit the main
loop when $\Delta \tau^{(i)} < \epsilon_{\tau}$ and $\Delta \nu^{(i)}
< \epsilon_{\nu}$ to ensure a certain level of delay/Doppler
resolution.
\begin{algoendfloat}
\caption{2-D Bisection Algorithm.}
\label{alg:bisection}
\begin{algorithmic}[1]
\STATE Initialise $ (\tau_{\rm min}^{(0)}, \tau_{\rm max}^{(0)}) =
(\tau_{\rm min}, \tau_{\rm max})$ and $(\nu_{\rm min}^{(0)},
\nu_{\rm max}^{(0)}) = (\nu_{\rm min}, \nu_{\rm max})$.
\FOR{ $i = 1,\ldots, N_{\rm bisect}$}
\STATE $\Delta \tau^{(i)} = (\tau_{\rm max}^{(i-1)} - \tau_{\rm min}^{(i-1)})/M$, $\Delta \nu^{(i)}= (\nu_{\rm max}^{(i-1)} - \nu_{\rm min}^{(i-1)})/N$
\STATE Construct $\mat{\tilde{\tau}}^{(i)}$ and $\mat{\tilde{\nu}}^{(i)}$ using \eqref{eq:lin_space}.
\STATE $\mat{\Upsilon}^{(i)} = \mat{\Psi}^{\dagger}(\mat{\tilde{\nu}}^{(i)}) \left[ \mat{Y} \odot \mat{X}^* \right] \mat{\Phi}( \mat{\tilde{\tau}}^{(i)})$
\STATE $ (\hat{n}, \hat{m} ) = \arg \max_{n,m} |\Upsilon^{(i)}_{n,m}|^2$
\STATE $\tau^{(i)}_{\rm max} = \tilde{\tau}^{(i)}_{\hat{m}} + \beta \Delta \tau^{(i)}$, $\tau^{(i)}_{\rm min} = \tilde{\tau}^{(i)}_{\hat{m}} - \beta \Delta \tau^{(i)}$
\STATE $\nu^{(i)}_{\rm max} = \tilde{\nu}^{(i)}_{\hat{n}} + \beta \Delta \nu^{(i)}$, $\nu^{(i)}_{\rm min} = \tilde{\nu}^{(i)}_{\hat{n}} - \beta \Delta \nu^{(i)}$
\ENDFOR
\STATE $\hat{\tau} = \tilde{\tau}^{(N_{\rm bisect})}_{\hat{m}}$, $\hat{\nu} = \tilde{\nu}^{(N_{\rm bisect})}_{\hat{n}}$.
\end{algorithmic}
\end{algoendfloat}
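As a sketch of Algorithm~\ref{alg:bisection} (Python/NumPy; a noiseless single off-grid tap with illustrative parameters of our own, $M = N = 8$ and $\beta = 1$, so each iteration shrinks both search intervals by the factor $2\beta/M = 1/4$):

```python
import numpy as np

rng = np.random.default_rng(3)
K, L = 52, 128
T, Td = 6.4e-6, 8e-6
a1, tau1, nu1 = 1.0, 87.3e-9, 133.7              # off-grid single-tap truth

l = np.arange(L)[:, None]; kprime = np.arange(K)[None, :] - K//2
X = np.exp(2j*np.pi*rng.integers(0, 4, (L, K))/4)
Y = a1*np.exp(-2j*np.pi*nu1*Td*l)*np.exp(-2j*np.pi*kprime*tau1/T)*X  # noiseless

def bisect_2d(Y, X, tmin, tmax, fmin, fmax, M=8, N=8, beta=1.0, n_iter=12):
    """2-D bisection search for the periodogram peak (Algorithm 3)."""
    for _ in range(n_iter):
        dt, df = (tmax - tmin)/M, (fmax - fmin)/N
        taus = tmin + np.arange(M)*dt                # linearly spaced bins
        nus  = fmin + np.arange(N)*df
        Psi = np.exp(-2j*np.pi*np.outer(np.arange(L), nus)*Td)
        Phi = np.exp(2j*np.pi*np.outer(np.arange(K) - K//2, taus)/T)
        Ups = Psi.conj().T @ (Y*X.conj()) @ Phi
        n_hat, m_hat = np.unravel_index(np.argmax(np.abs(Ups)**2), Ups.shape)
        # Shrink the search region around the current peak
        tmin, tmax = taus[m_hat] - beta*dt, taus[m_hat] + beta*dt
        fmin, fmax = nus[n_hat] - beta*df, nus[n_hat] + beta*df
    return taus[m_hat], nus[n_hat]

tau_hat, nu_hat = bisect_2d(Y, X, 0.0, 200e-9, -500.0, 500.0)
```

With $N_{\rm bisect} = 12$ iterations the final bin widths fall below $10^{-13}$ s and $10^{-4}$ Hz, far finer than a uniform grid of equivalent resolution would allow.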
\section{Performance Evaluation} \label{sec:performance}
Performance evaluation is complicated by the fact that there are
infinitely many possible multipath channel realisations and many OFDM
system design configurations, all of which can have a significant
effect on the estimator's performance. To limit the scope of our
analysis, we focus on OFDM systems with similar specifications to the
IEEE 802.11p standard (as described in
Section~\ref{sec:ambiguity}). In addition, we concentrate
on multipath channels typical of outdoor mobile vehicular
environments~\cite{Alexander:2007p108}, i.e. delay spreads not
exceeding $200$ nsec and Doppler differentials not exceeding $1000$
Hz. For example, at a carrier frequency of $5.9$ GHz, this corresponds
to a maximum excess delay of $60$ m and velocity
differentials of $51$ m/s or $183$ km/hr.
Ultimately, we would like to investigate the estimator's performance
for as many different multipath channel configurations as
possible. However, we find that the performance is significantly
affected by the location of the multipath taps in the 2-D
delay/Doppler space. When two or more taps are too close to each
other, there is a high probability that Algorithm~\ref{alg:init_sc} will
detect them as a single tap.\footnote{In a physical sense, if these closely
spaced taps are the result of first order reflections it may imply
they are reflections from the same object.} The minimum separation
distance is essentially the delay/Doppler resolution of the estimator,
which is determined by the main lobe of the ambiguity function and
hence by the subcarrier spacing and the duration of the
OFDM packet (as evidenced in~\eqref{eq:ofdm_ambig_psk}). When the
components are sufficiently separated, the estimator's performance is
dominated by AWGN and hence the CRLB~\eqref{eq:crlb_single_tap}.
To separate the above mentioned effects, we conducted Monte Carlo
simulations whereby for each trial a random set of multipath taps is
generated. Whilst these taps are drawn randomly, they are not i.i.d.,
and instead are drawn to ensure a minimum separation in delay and
Doppler. This is achieved by continually drawing a vector of $P$
delays from an i.i.d. uniform distribution on the interval $(\tau_{\rm
min}, \tau_{\rm max})$ until the minimum pairwise distance between
the delays is greater than a specified $\Delta \tau$. The delays are
then sorted in ascending order. The Doppler offsets are generated in a
similar fashion on the interval $(\nu_{\rm min}, \nu_{\rm max})$, but
with no sorting. Note that $\Delta \tau \leq (\tau_{\max} - \tau_{\rm
min})/P$ and similarly $\Delta \nu \leq (\nu_{\max} - \nu_{\rm
min})/P$. Whilst we fix the power delay profile, for each trial, the
phase of each tap is generated randomly according to a uniform
distribution over the interval $(0,2\pi)$. Once the multipath taps are
generated, the frequency domain channel coefficients are generated
using~\eqref{eq:channel_coeffs} and the received noisy symbols are
generated using~\eqref{eq:chan_model}, where, without loss of
generality, we assume $X_{l,k} = 1$. It is important to note how the
error statistics were calculated. For each trial, RMS error statistics
were only collected when all taps are detected, i.e. each tap is
closest (in Euclidean distance) to a single estimate. Events when this
does not occur are counted as missed detections, but are not included in
the RMS error statistics. This allows us to separate out error events
caused by missed detections due to the transmit ambiguity function.
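The minimum-separation draw described above is a simple rejection sampler; a minimal sketch (the function name and seed are our own):

```python
import numpy as np

def draw_separated(rng, P, lo, hi, min_sep):
    """Redraw P i.i.d. uniform values until every pairwise gap exceeds min_sep."""
    while True:
        v = rng.uniform(lo, hi, P)
        if np.diff(np.sort(v)).min() > min_sep:
            return v

rng = np.random.default_rng(4)
P = 3
taus = np.sort(draw_separated(rng, P, 0.0, 200e-9, 66.67e-9))  # delays, sorted
nus  = draw_separated(rng, P, -500.0, 500.0, 333.33)           # Dopplers, unsorted
```

The conditions $\Delta \tau \leq (\tau_{\max} - \tau_{\rm min})/P$ and $\Delta \nu \leq (\nu_{\max} - \nu_{\rm min})/P$ keep the acceptance probability, and hence the expected number of redraws, reasonable.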
In our simulations we considered a $P=3$ tap multipath channel, with
power delay profile $|a_1|^2 = 0$, $|a_2|^2 = -10$ and $|a_3|^2 = -20$
dB, $(\tau_{\rm min}, \tau_{\rm max}) = (0,200)$ nsec, $(\nu_{\rm
min}, \nu_{\rm max}) = (-500,500)$ Hz, minimum delay separation of
$\Delta \tau = 66.67$ nsec and minimum Doppler separation of $\Delta
\nu = 333.33$ Hz. Error statistics were collected from $10^4$
trials.
Fig.~\ref{fig:three_tap_miss_det} shows the miss detection
probability for $L = 128, 256$ and $512$ OFDM packet lengths. We see
that when $L = 128$, the miss detection probability is greater than
$10$ percent. As $L$ increases, the main lobe of the ambiguity function
shrinks in the Doppler domain, improving the resolution of the
estimator and hence reducing the miss detection probability. When $L =
512$, no miss detections were recorded for an SNR greater than $5$ dB.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\textwidth]{miss_detection_prob}
\caption{Probability of missed detection for a $P=3$ tap
multipath channel.}
\label{fig:three_tap_miss_det}
\end{figure}
Fig.~\ref{fig:three_tap_rms} shows the RMS estimation error results
(recalling that this is restricted to instances where missed detection
does not occur). The square marked curves show the RMS error when no
refinement is performed, i.e. only Algorithm~\ref{alg:init_sc} is
employed. In this case a floor in the RMS error performance is
observed (caused by undetected multipath components in the successive
cancellation process). When refinement is employed, as shown by the
circle marked curves, the error floor is significantly
reduced. Moreover, as $L$ increases the floor does not occur until
very high SNRs and the RMS error performance is primarily dominated by
the CRLB~\eqref{eq:crlb_single_tap}, which is shown by the dashed
curves. Thus, with a sufficiently long packet length,
Algorithms~\ref{alg:init_sc} and \ref{alg:refine} deliver single-tap
performance, i.e. are able to accurately cancel the contributions of
``interfering'' taps.
It is interesting to translate the estimator performance into
range/velocity resolution. Considering $L=512$ and SNR $20$ dB, the
3-standard-deviation values for the $-20$ dB tap are $9$ ns and $15$
Hz. This corresponds to a range resolution of $2.7$ m and a relative
velocity resolution (at $5.9$ GHz) of $0.77$ m/s ($2.7$ km/h). This
clearly demonstrates the capability to accurately resolve quite
challenging multipath channels.
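The arithmetic behind these figures can be checked directly (using $c \approx 3\times 10^8$ m/s; one-way range $= c\,\tau$ and relative velocity $= c\,\nu/f_c$, so the quoted values agree up to rounding):

```python
c = 3.0e8        # speed of light (m/s)
fc = 5.9e9       # carrier frequency (Hz)

sigma3_tau = 9e-9   # 3-standard-deviation delay error (s)
sigma3_nu = 15.0    # 3-standard-deviation Doppler error (Hz)

range_res = c * sigma3_tau        # metres: 2.7 m
vel_res = c * sigma3_nu / fc      # metres/second: ~0.76 m/s
vel_res_kmh = 3.6 * vel_res       # km/h: ~2.7 km/h
```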
\begin{figure}[htbp]
\centering
\subfigure[$L=128$]{\includegraphics[width=0.45\columnwidth]{L128_K52_P3_060810_125937_tau} \label{fig:tau_L128_P3}}
\subfigure[$L=128$]{\includegraphics[width=0.45\columnwidth]{L128_K52_P3_060810_125937_nu} \label{fig:nu_L128_P3}}
\subfigure[$L=256$]{\includegraphics[width=0.45\columnwidth]{L256_K52_P3_060810_125936_tau} \label{fig:tau_L256_P3}}
\subfigure[$L=256$]{\includegraphics[width=0.45\columnwidth]{L256_K52_P3_060810_125936_nu} \label{fig:nu_L256_P3}}
\subfigure[$L=512$]{\includegraphics[width=0.45\columnwidth]{L512_K52_P3_060810_130002_tau} \label{fig:tau_512_P3}}
\subfigure[$L=512$]{\includegraphics[width=0.45\columnwidth]{L512_K52_P3_060810_130002_nu} \label{fig:nu_512_P3}}
\caption{Three-tap estimation root-mean-square (RMS) error. Dashed lines show the CRLB~\eqref{eq:crlb_single_tap} (single-tap estimation). Solid lines with squares show the RMS error performance of Algorithm~\ref{alg:init_sc} only. Solid lines with circles show the RMS error performance after refinement with Algorithm~\ref{alg:refine} ($20$ iterations).}
\label{fig:three_tap_rms}
\end{figure}
\section{Conclusion} \label{sec:conclusion}
In this paper we examined amplitude, delay and Doppler estimation of
the multipath channel taps from OFDM signal transmission in a doubly
selective mobile environment. Under certain practical system design
and mobile channel assumptions, we showed that the frequency domain
channel coefficients for an entire OFDM packet can be written as the
superposition of 2-D complex sinusoids. The two angular frequencies of
each sinusoid are proportional to the delay and the Doppler of a particular
multipath tap.
ML estimation of the delay/Doppler requires non-linear least squares
minimisation, which is computationally infeasible for practical
implementation. We therefore proposed a low complexity suboptimal
estimation method, based on successive cancellation, whereby multipath
components are removed once they are detected. The complexity
reduction results from a simplification of the channel model, where
time variations manifest only as Doppler frequency offsets for each
tap. For a single-tap channel, this method is maximum likelihood. The
performance of this successive cancellation approach can be degraded by
interference from taps that are not detected until later
iterations. To remedy this, we proposed a refinement algorithm based
on parallel cancellation, i.e. all estimated multipath components are
subtracted except the component of interest, which is subsequently
re-estimated.
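The successive-cancellation and parallel-cancellation structure summarised above can be illustrated on an idealised, noise-free, on-grid version of the 2-D sinusoid model. Everything below (grid sizes, bin indices, the FFT-based single-tap search) is our toy construction to show the control flow, not the paper's algorithms:

```python
import numpy as np

L, K = 64, 32                       # toy grid: OFDM symbols x subcarriers
l = np.arange(L)[:, None]
k = np.arange(K)[None, :]

def sinusoid(m, n):
    """2-D complex sinusoid at delay bin m and Doppler bin n."""
    return np.exp(-2j * np.pi * (k * m / K + l * n / L))

def estimate_single_tap(Y):
    """Strongest single sinusoid via a 2-D DFT (stand-in for single-tap ML)."""
    C = np.fft.ifft2(Y)             # C[n, m] = correlation with sinusoid(m, n) / (L*K)
    n, m = np.unravel_index(np.argmax(np.abs(C)), C.shape)
    return C[n, m], (int(m), int(n))

def successive_cancellation(Y, P):
    """Estimate the strongest tap, subtract it, repeat P times."""
    est, residual = [], Y
    for _ in range(P):
        a, (m, n) = estimate_single_tap(residual)
        est.append((a, m, n))
        residual = residual - a * sinusoid(m, n)
    return est

def refine(Y, est, n_iter=3):
    """Parallel cancellation: subtract all estimated components except
    the one of interest, then re-estimate it."""
    est = list(est)
    for _ in range(n_iter):
        for p in range(len(est)):
            others = sum(a * sinusoid(m, n)
                         for q, (a, m, n) in enumerate(est) if q != p)
            a, (m, n) = estimate_single_tap(Y - others)
            est[p] = (a, m, n)
    return est

# Two on-grid taps; with orthogonal sinusoids both are recovered exactly.
truth = [(1.0 + 0.0j, 3, 5), (0.3 * np.exp(0.7j), 7, 12)]
Y = sum(a * sinusoid(m, n) for a, m, n in truth)
est = refine(Y, successive_cancellation(Y, P=2))
```

With off-grid parameters and noise, the coarse search would be refined locally; this sketch only shows the cancellation structure.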
The performance of our estimator was shown to be dominated by two
effects: separation of the multipath taps in the delay/Doppler plane;
and noise. When two or more taps are close together in the 2-D
delay/Doppler space, the estimator may detect them as a single tap,
resulting in missed detections and significantly degrading the RMS
error of the other detected taps. When the multipath taps are
sufficiently separated in delay/Doppler, the estimator performance is
dominated by noise and hence the RMS error of the refined estimates
is very close to the CRLB of a single 2-D sinusoid in additive white
Gaussian noise. We believe the missed detections are caused by the
transmit ambiguity function: broadness of the main lobe affects the
delay/Doppler resolution; and sidelobes of components that have not
been sufficiently subtracted can mask weaker taps. However, a detailed
analytic investigation of these effects is beyond the scope of this
paper and the subject of future work.
Note that although our results assume delay spreads less than the
cyclic prefix, our proposed estimator still works well without this
restriction. Multipath taps with delay exceeding the cyclic prefix
will introduce inter-symbol interference. The estimator views this
interference as extra noise on the received symbols. Thus, as long as
AWGN dominates, this extra interference will have negligible effect on
performance.
\appendices
\section{Derivation of Receiver Matched Filter Output} \label{app:rx_matched_filt}
For clarity we repeat the transmitted signal,
\begin{equation}
x(t) = \sum_{l'} x_{l'}(t) = \frac{1}{\sqrt{KL}} \sum_{l',k'} X_{l',k'} w(t - (l'-1)T_d) e^{j 2 \pi (k' - 1 - \lfloor K/2 \rfloor)(t - T_{\rm cp})/T}. \label{eq:tx_sig}
\end{equation}
Application of the channel response~\eqref{eq:chan_response} to the
transmitted signal~\eqref{eq:tx_sig} yields
\begin{align}
y(t) &= x(t) * h(t,\tau) + z(t) = \int_{-\infty}^{\infty} x(t - \tau) h(t,\tau) \, d \tau \notag \\
&= \frac{1}{\sqrt{KL}} \sum_{l',k',p}a_p X_{l',k'} \int_{-\infty}^{\infty} g(\tau-\tau_p) w(t - \tau - (l'-1)T_d) e^{-j 2 \pi \nu_p t} e^{j 2 \pi (k'-1 -\lfloor K/2 \rfloor)(t - \tau - T_{\rm cp})/T} \, d\tau + z(t) \notag \\
&= \frac{1}{\sqrt{KL}} \sum_{l',k',p}a_p X_{l',k'}e^{-j 2 \pi (k'-1-\lfloor K/2 \rfloor)T_{\rm cp}/T} e^{-j 2 \pi \left( \nu_p - \frac{k'-1}{T} + \frac{K}{2T}\right)t} s_{l',k'}(t,\tau_p) + z(t), \label{eq:rx_sig_generic}
\end{align}
where the integral
\begin{equation}
s_{l',k'}(t,\tau_p) = \int_{-\infty}^{\infty} g(\tau-\tau_p) e^{-j 2 \pi (k' - 1 -\lfloor K/2 \rfloor)\tau/T} w(t - \tau - (l'-1)T_d) \, d\tau, \label{eq:phi}
\end{equation}
is simply the convolution of a time/frequency translated filter
response $g(t)$ and time shifted window function $w(t)$. Using the
appropriate properties of Fourier transforms, the Fourier transform
of~\eqref{eq:phi} can be written as
\begin{equation}
S_{l',k'}(f,\tau_p) = e^{-j 2 \pi \tau_p \left[ \frac{k'-1}{T} - \frac{K}{2T} + f\right]} e^{-j 2 \pi f (l'-1) T_d} G \left(f+\frac{k'-1}{T} - \frac{K}{2T}\right) W (f),
\end{equation}
where $G(f)$ and $W(f)$ denote the Fourier transforms of $g(t)$ and
$w(t)$ respectively. In practical OFDM systems the passband bandwidth
of $G(f)$ is typically greater than $K/T$ and the bandwidth of $W(f)$
is typically less than $1/T$; e.g. for the simple case $\tilde{w}(t) =
\frac{1}{\sqrt{T_d}}$, we have $W(f) = \sqrt{T_d}\, e^{-j \pi f T_d} \mathrm{sinc}
(\pi f T_d)$. Moreover, in many OFDM standards the outer subcarriers
are null subcarriers. Hence, assuming negligible passband ripple,
$G \left(f+\frac{k'-1}{T} - \frac{K}{2T}\right) \approx 1$ for
$k'=1,\ldots,K$ and $|f| < 1/(2T)$. Therefore,
\begin{equation}
S_{l',k'}(f,\tau_p) \approx e^{-j 2 \pi \tau_p \left[ \frac{k'-1}{T} - \frac{K}{2T} + f\right]} e^{-j 2 \pi f (l'-1) T_d} W (f).
\end{equation}
Thus, taking the inverse Fourier transform yields
\begin{align}
s_{l',k'}(t,\tau_p) & \approx e^{-j 2 \pi \tau_p \left[ \frac{k'-1}{T} - \frac{K}{2T}\right]} \int_{-\infty}^{\infty} W (f) e^{j 2 \pi f \left[ t - \tau_p - (l'-1) T_d\right]} \, df \notag \\
&= e^{-j 2 \pi \tau_p \left[ \frac{k'-1}{T} - \frac{K}{2T}\right]} w(t - \tau_p - (l'-1) T_d). \label{eq:phi_approx}
\end{align}
Substituting~\eqref{eq:phi_approx} into~\eqref{eq:rx_sig_generic} gives,
\begin{align}
y(t) &= \frac{1}{\sqrt{KL}} \sum_{l',k',p}a_p X_{l',k'}e^{-j 2 \pi (k'-1-\lfloor K/2 \rfloor)T_{\rm cp}/T} e^{-j 2 \pi \left( \nu_p - \frac{k'-1}{T} + \frac{K}{2T}\right)t} \notag \\
& \;\;\;\;\; \times e^{-j 2 \pi \tau_p \left(\frac{k'-1}{T} - \frac{K}{2T}\right)} w(t - \tau_p - (l'-1) T_d) + z(t). \label{eq:rx_sig}
\end{align}
The receiver now applies a matched filter to the transmitted sinusoids
(excluding the cyclic prefix), i.e.
\begin{align}
Y_{l,k} &= \frac{1}{\sqrt{KL}}\int_{T_{\rm cp} + (l-1) T_d}^{lT_d} y(t) w^*(t - (l-1)T_d) e^{-j 2 \pi (k-1-\lfloor K/2 \rfloor) (t-T_{\rm cp})/T} \, dt \notag \\
&= \frac{1}{KL} \sum_{l',k',p} a_p X_{l',k'} e^{-j 2 \pi \tau_p \left(\frac{k'-1}{T} - \frac{K}{2T}\right)} e^{-j 2 \pi (k'-k)T_{\rm cp}/T} \notag \\
& \;\;\;\;\; \times \int_{T_{\rm cp} + (l-1)T_d}^{lT_d} w(t - \tau_p - (l'-1) T_d) w^*( t - (l-1)T_d) e^{-j 2 \pi \left( \nu_p + \frac{k-k'}{T} \right) t} \, dt + Z_{l,k} \notag \\
&= \frac{1}{KL} \sum_{l',k',p} a_p X_{l',k'} e^{-j 2 \pi \tau_p \left(\frac{k'-1}{T} - \frac{K}{2T}\right)} e^{-j 2 \pi (k'-k)T_{\rm cp}/T} e^{-j 2 \pi \left( \nu_p + \frac{k-k'}{T} \right) (l-1)T_d} \notag \\
& \;\;\;\;\; \times \hat{A}_w\left( \tau_p + (l'-l) T_d, \nu_p + \frac{k-k'}{T} \right) + Z_{l,k}, \label{eq:ofdm_rx_mf}
\end{align}
where
\begin{equation}
\hat{A}_w(\tau,\nu) = \int_{T_{\rm cp}}^{T_d} w(t - \tau) w^*(t) e^{-j 2 \pi \nu t} \, d t,
\end{equation}
which resembles the ambiguity function of $w(t)$.\footnote{The
function $\hat{A}_w(\tau,\nu)$ is not quite the ambiguity function
of $w(t)$ because of the limits of integration.} In practical OFDM
systems, usually $\max_{p} \tau_p < T_{\rm cp}$ and $\max_p \nu_p \ll
1/T$ and the windowing function is usually designed such that \\
$\hat{A}_w\left(\tau_p + (l'-l)T_d, \nu_p + \frac{k-k'}{T} \right)
\approx 0$ for $k \neq k'$ or $l \neq l'$.
Hence we may write,
\begin{align}
Y_{l,k} &= \frac{1}{KL} \sum_{p} a_p X_{l,k} e^{-j 2 \pi \tau_p \left(\frac{k-1}{T} - \frac{K}{2T}\right)} e^{-j 2 \pi \nu_p T_d (l-1)} \hat{A}_w ( \tau_p,\nu_p ) + Z_{l,k} \notag \\
&= \frac{1}{KL} \sum_{p} \tilde{a}_p X_{l,k} e^{-j 2 \pi (k-1) \tau_p/T } e^{-j 2 \pi (l-1)\nu_p T_d } + Z_{l,k},
\end{align}
where $\tilde{a}_p = a_p e^{j \pi K \tau_p/T}
\hat{A}_w(\tau_p,\nu_p)$. With a slight abuse of notation, for brevity (and
without loss of generality) we replace $\tilde{a}_p$ with $a_p$ for the
remainder of the paper. Defining $H_{l,k}$ according to \eqref{eq:channel_coeffs}, we obtain~\eqref{eq:chan_model}.
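As a numerical sanity check of the truncated ambiguity function $\hat{A}_w$ for a rectangular window (the timing values below are hypothetical, not from the paper): for any delay $0 \le \tau \le T_{\rm cp}$, the window product is constant on the integration interval, so $\hat{A}_w(\tau,\nu)$ is independent of $\tau$ and equals $\frac{1}{T_d}\int_{T_{\rm cp}}^{T_d} e^{-j 2 \pi \nu t}\, dt$. This is consistent with taps inside the cyclic prefix seeing a common gain:

```python
import numpy as np

# Hypothetical OFDM timing: useful duration T, cyclic prefix T_cp.
T, T_cp = 3.2e-6, 0.8e-6
T_d = T + T_cp

def w(t):
    """Rectangular window w(t) = 1/sqrt(T_d) on [0, T_d]."""
    return np.where((t >= 0) & (t <= T_d), 1.0 / np.sqrt(T_d), 0.0)

def A_hat(tau, nu, n=100_001):
    """Trapezoidal quadrature of A_hat(tau, nu) over [T_cp, T_d]."""
    t = np.linspace(T_cp, T_d, n)
    f = w(t - tau) * np.conj(w(t)) * np.exp(-2j * np.pi * nu * t)
    dt = t[1] - t[0]
    return 0.5 * dt * (f[:-1] + f[1:]).sum()

nu = 300.0
vals = [A_hat(tau, nu) for tau in (0.0, 0.2e-6, 0.6e-6)]  # delays within the CP
closed = (np.exp(-2j * np.pi * nu * T_cp)
          - np.exp(-2j * np.pi * nu * T_d)) / (2j * np.pi * nu * T_d)
```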
\input{continuous.bbl}
\end{document}
2210.16729
\section{Introduction}
The Lie superalgebra $\mathfrak{osp}_{1|2n}$ is a finite-dimensional simple Lie superalgebra whose Dynkin diagram is the same as that of type $B_n$, except that the unique short simple root is replaced by a non-isotropic odd simple root. The Lie superalgebra $\mathfrak{osp}_{1|2n}$ is not a Lie algebra but has properties similar to those of simple Lie algebras. For example, the category of finite-dimensional $\mathfrak{osp}_{1|2n}$-modules is semisimple and we have the Harish-Chandra isomorphism $Z(\mathfrak{osp}_{1|2n}) \simeq \mathbb{C}[\mathfrak{h}]^W$, where $Z(\mathfrak{g})$ denotes the center of the universal enveloping algebra $U(\mathfrak{g})$, $\mathfrak{h}$ is a Cartan subalgebra of $\mathfrak{osp}_{1|2n}$ and $W$ is the Weyl group. However, $\mathfrak{osp}_{1|2n}$ does not satisfy the Duflo theorem \cite{Duflo77}, which says that, for a simple Lie algebra $\mathfrak{g}$, the annihilator of a Verma module in $U(\mathfrak{g})$ is generated by its intersection with the center $Z(\mathfrak{g})$. This problem was discovered by Musson \cite{Musson97} and solved by Gorelik and Lantzmann \cite{GL} by using an extension algebra $\widetilde{Z}(\mathfrak{osp}_{1|2n})$ of $Z(\mathfrak{osp}_{1|2n})$. More precisely, Gorelik and Lantzmann proved that the annihilator of a Verma module in $U(\mathfrak{osp}_{1|2n})$ is generated by its intersection with $\widetilde{Z}(\mathfrak{osp}_{1|2n})$. The associative algebra $\widetilde{Z}(\mathfrak{osp}_{1|2n})$ is called the ghost center of $\mathfrak{osp}_{1|2n}$ in \cite{Gorelik01}.
For a Lie superalgebra $\mathfrak{g}$ with $\mathfrak{g}_{\bar{1}} \neq 0$, the ghost center $\widetilde{Z}(\mathfrak{g})$ was introduced by Gorelik in \cite{Gorelik01} as the direct sum $Z(\mathfrak{g}) \oplus \mathcal{A}(\mathfrak{g})$, where $\mathcal{A}(\mathfrak{g})$ is the anticenter defined by $\mathcal{A}(\mathfrak{g}) = \{a \in U(\mathfrak{g}) \mid ua-(-1)^{p(u)(p(a)+\bar{1})}au = 0\ \mathrm{for}\ \mathrm{all}\ u \in \mathfrak{g}\}$. If $\mathfrak{g}$ is a finite-dimensional simple basic classical Lie superalgebra, it is known that $\widetilde{Z}(\mathfrak{g})$ coincides with the center of $U(\mathfrak{g})_{\bar{0}}$ and thus is a purely even subalgebra of $U(\mathfrak{g})$. Moreover, if $\mathfrak{g}=\mathfrak{osp}_{1|2n}$, there exists $T \in U(\mathfrak{g})_{\bar{0}}$ such that $\mathcal{A}(\mathfrak{osp}_{1|2n}) = Z(\mathfrak{osp}_{1|2n}) T$ by \cite{ABF, Musson97, GL}. The element $T$ is called the Casimir ghost \cite{ABF} since $T^2 \in Z(\mathfrak{osp}_{1|2n})$. In the case $\mathfrak{g}=\mathfrak{osp}_{1|2}$, it was also shown in \cite{Pinczon} that $T = 4Q - 4C +\frac{1}{2}$ satisfies $T^2 = 4C + \frac{1}{4}\in Z(\mathfrak{osp}_{1|2})$, where $C$ is the Casimir element of $U(\mathfrak{osp}_{1|2})$ and $Q$ is that of $U(\mathfrak{sl}_2)$.
The finite $\mathcal{W}$-algebra $U(\mathfrak{g}, f)$ is an associative superalgebra over $\mathbb{C}$ defined from a simple basic classical Lie superalgebra $\mathfrak{g}$ and an even nilpotent element $f$ of $\mathfrak{g}$ \cite{Premet02, Premet07, Kostant, Lynch, BT, RaSo, GG}. If $\mathfrak{g}$ is a simple Lie algebra and $f$ is a principal nilpotent element $f_\mathrm{prin}$, the corresponding finite $\mathcal{W}$-algebra $U(\mathfrak{g}, f_\mathrm{prin})$ is isomorphic to the center $Z(\mathfrak{g})$ of $U(\mathfrak{g})$ by Kostant \cite{Kostant}.
The $\mathcal{W}$-algebra $\mathcal{W}^k(\mathfrak{g}, f)$ is a vertex superalgebra defined by the Drinfeld-Sokolov reduction associated to $\mathfrak{g}$, $f$ and a level $k \in \mathbb{C}$ \cite{FF92, KRW}. In general, (Ramond-twisted) simple modules of a $\frac{1}{2}\mathbb{Z}$-graded vertex superalgebra $V$ with a Hamiltonian operator $H$ are classified by an associated superalgebra called the ($H$-twisted) Zhu algebra of $V$. De Sole and Kac showed that the $H$-twisted Zhu algebra of $\mathcal{W}^k(\mathfrak{g}, f)$ is isomorphic to the finite $\mathcal{W}$-algebra $U(\mathfrak{g}, f)$. In particular, there exists a one-to-one correspondence between simple modules of $U(\mathfrak{g}, f)$ and simple positive-energy Ramond-twisted modules of $\mathcal{W}^k(\mathfrak{g}, f)$. If $f = f_\mathrm{prin}$, the corresponding $\mathcal{W}$-algebra is called the principal $\mathcal{W}$-algebra of $\mathfrak{g}$, which we denote by $\mathcal{W}^k(\mathfrak{g}) = \mathcal{W}^k(\mathfrak{g}, f_\mathrm{prin})$.
\begin{thmA}[Theorem \ref{thm:ghostZ_finiteW}]
$U(\mathfrak{osp}_{1|2n}, f_\mathrm{prin})$ is isomorphic to $\widetilde{Z}(\mathfrak{osp}_{1|2n})$ as associative algebras.
\end{thmA}
The finite $\mathcal{W}$-algebra $U(\mathfrak{osp}_{1|2n}, f_\mathrm{prin})$ associated to $\mathfrak{osp}_{1|2n}$ and its principal nilpotent element $f_\mathrm{prin}$ is an associative superalgebra with a non-trivial odd part, while the ghost center $\widetilde{Z}(\mathfrak{osp}_{1|2n})$ is purely even. Nevertheless, we prove an isomorphism between them. Through the isomorphism in Theorem A, $\widetilde{Z}(\mathfrak{osp}_{1|2n})$ inherits a $\mathbb{Z}_2$-grading from that of $U(\mathfrak{osp}_{1|2n}, f_\mathrm{prin})$, such that the even part of $\widetilde{Z}(\mathfrak{osp}_{1|2n})$ is $Z(\mathfrak{osp}_{1|2n})$ and the odd part is $\mathcal{A}(\mathfrak{osp}_{1|2n})$.
To prove Theorem A, we use the Miura map $\mu$, its injectivity, and its relationship with the Harish-Chandra homomorphism of $\mathfrak{osp}_{1|2n}$. See Section \ref{sec:finite-W} for the definition of $\mu$. The map $\mu$ was originally introduced in \cite{Lynch}. The injectivity of $\mu$ was previously known only in the non-super case, but has recently been proved in the super case by \cite{Nakatsuka}. As a corollary of Theorem A, it follows that simple positive-energy Ramond-twisted modules of the principal $\mathcal{W}$-algebra $\mathcal{W}^k(\mathfrak{osp}_{1|2n})$ are classified by simple modules of the ghost center of $\mathfrak{osp}_{1|2n}$. See also Corollary \ref{cor:Zhu_twisted}. We remark that our definition of $U(\mathfrak{osp}_{1|2n}, f_\mathrm{prin})$ differs from those in some of the literature \cite{Poletaeva13, PS, ZS}. See Remark \ref{rem:def-W}.
\smallskip
The paper is organized as follows. In Sect.\ref{sec:Gamma/Z}, we introduce $H$-twisted Zhu algebras. In Sect.\ref{sec:W-alg}, we recall the definitions of $\mathcal{W}$-algebras $\mathcal{W}^k(\mathfrak{g}, f)$. In Sect.\ref{sec:finite-W}, we introduce two definitions $U(\mathfrak{g}, f)_{I}$ and $U(\mathfrak{g}, f)_{I\hspace{-.2em}I}$ of finite $\mathcal{W}$-algebras and show the equivalence of the definitions, that is, $U(\mathfrak{g}, f)_{I} \simeq U(\mathfrak{g}, f)_{I\hspace{-.2em}I}$. The proof is similar to \cite{D3HK}. In Sect.\ref{sec:prin-W}, we recall the principal $\mathcal{W}$-algebras $\mathcal{W}^k(\mathfrak{osp}_{1|2n})$ of $\mathfrak{osp}_{1|2n}$. In Sect.\ref{sec:Zhu_prinW}, we prove Theorem A.
\vspace{3mm}
{\it Acknowledgments}\quad The author wishes to thank Thomas Creutzig, Tomoyuki Arakawa, Hiroshi Yamauchi and Maria Gorelik for valuable comments and suggestions. Some part of this work was done while he was visiting
Instituto de Matem\'{a}tica Pura e Aplicada, Brazil in March and April 2022 and the Centre de Recherches Math\'{e}matiques, Universit\'{e} de Montr\'{e}al, Canada in October 2022. He is grateful to those institutes for their hospitality. He is supported by World Premier International Research Center Initiative (WPI), MEXT, Japan and JSPS KAKENHI Grant Number JP21K20317.
\section{$H$-twisted Zhu algebras}\label{sec:Gamma/Z}
Let $V$ be a vertex superalgebra. Denote by $|0\rangle$ the vacuum vector, by $\partial$ the translation operator, by $p(A)$ the parity of $A \in V$ and by $Y(A, z) = A(z) = \sum_{n \in \mathbb{Z}}A_{(n)}z^{-n-1}$ the field on $V$ corresponding to $A \in V$. Let
\begin{align*}
[A_\lambda B] = \sum_{n=0}^\infty \frac{\lambda^n}{n!}A_{(n)}B \in \mathbb{C}[\lambda]\otimes V
\end{align*}
be the $\lambda$-bracket of $A$ and $B$ for $A, B \in V$. A Hamiltonian operator $H$ on $V$ is a semisimple operator on $V$ satisfying that $[H, Y(A, z)] = z\partial_zY(A, z) + Y(H(A), z)$ for all $A \in V$. The eigenvalue of $H$ is called the conformal weight. If $V$ is conformal and $L(z) = \sum_{n \in \mathbb{Z}}L_nz^{-n-2}$ is the field corresponding to the conformal vector of $V$, we may choose $H = L_0$ as the Hamiltonian operator.
Suppose that $V$ is a $\frac{1}{2}\mathbb{Z}$-graded vertex superalgebra with respect to a Hamiltonian operator $H$. Denote by $\Delta_A$ the conformal weight of $A \in V$. Define the $*$-product and $\circ$-product of $V$ by
\begin{align*}
A * B =\sum_{j=0}^\infty\binom{\Delta_A}{j}A_{(j-1)}B,\quad
A \circ B =\sum_{j=0}^\infty\binom{\Delta_A}{j}A_{(j-2)}B,\quad
A, B \in V.
\end{align*}
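For instance, if $\Delta_A = 2$ (e.g. $A$ a conformal vector), the binomial coefficients vanish for $j \geq 3$ and the two products reduce to

```latex
A * B = A_{(-1)}B + 2A_{(0)}B + A_{(1)}B,
\qquad
A \circ B = A_{(-2)}B + 2A_{(-1)}B + A_{(0)}B.
```

For non-integer conformal weights the binomial series does not terminate in this way, but the sums are still finite since $A_{(j)}B = 0$ for $j \gg 0$.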
Then the quotient space
\begin{align*}
\operatorname{Zhu}_H V = V/V\circ V
\end{align*}
has a structure of associative superalgebra with respect to the product induced from $*$, and is called the $H$-twisted Zhu algebra of $V$. Here $V \circ V = \operatorname{Span}_\mathbb{C}\{A \circ B \mid A, B \in V\}$. The vacuum vector $|0\rangle$ defines a unit of $\operatorname{Zhu}_H V$. A superspace $M$ is called a Ramond-twisted $V$-module if $M$ is equipped with a parity-preserving linear map
\begin{align*}
Y_M \colon M \ni A \rightarrow Y_M(A, z) = \sum_{n \in \mathbb{Z} + \Delta_A}A^M_{(n)}z^{-n-1} \in (\operatorname{End} M)[\![z^{\frac{1}{2}}, z^{-\frac{1}{2}}]\!]
\end{align*}
such that (1) for each $C \in M$, $A^M_{(n)}C = 0$ if $n \gg 0$, (2) $Y_M(|0\rangle, z) = \operatorname{id}_M$ and (3) for any $A, B \in V$, $C \in M$, $n \in \mathbb{Z}$, $m \in \mathbb{Z} + \Delta_A$ and $\ell \in \mathbb{Z} + \Delta_B$,
\begin{align*}
&\sum_{j=0}^\infty(-1)^j\binom{n}{j}\left(
A^M_{(m+n-j)}B^M_{(\ell+j)} - (-1)^{p(A)p(B)}B^M_{(\ell+n-j)}A^M_{(m+j)}
\right)C\\
&= \sum_{j=0}^\infty \binom{m}{j}\left(A_{(n+j)}B
\right)^M_{(m+\ell-j)}C.
\end{align*}
Hence a Ramond-twisted module is a twisted module of $V$ for the automorphism $\mathrm{e}^{2\pi i H}$. In particular, $M$ is just a $V$-module if $V$ is $\mathbb{Z}$-graded. Define $A^M_n$ by $Y_M(A, z) = \sum_{n \in \mathbb{Z}}A^M_n z^{-n-\Delta_A}$ for $A \in V$. A Ramond-twisted $V$-module $M$ is called positive-energy if $M$ has an $\mathbb{R}$-grading $M = \bigoplus_{j \in \mathbb{R}}M_j$ with $M_0 \neq 0$ such that $A^M_n M_j \subset M_{j+n}$ for all $A \in V$, $n \in \mathbb{Z}$ and $j \in \mathbb{R}$. Then $M_0$ is called the top space. By \cite[Lemma 2.22]{DK}, the linear map $V \ni A \mapsto A^M_0|_{M_0} \in \operatorname{End} M_0$ induces a homomorphism $\operatorname{Zhu}_H V \rightarrow \operatorname{End} M_0$. Thus we have a functor $M \mapsto M_0$ from the category of positive-energy Ramond-twisted $V$-modules to the category of $\mathbb{Z}_2$-graded $\operatorname{Zhu}_H V$-modules. By \cite[Theorem 2.30]{DK}, this functor establishes a bijection (up to isomorphisms) between simple positive-energy Ramond-twisted $V$-modules and simple $\mathbb{Z}_2$-graded $\operatorname{Zhu}_H V$-modules.
\section{$\mathcal{W}$-algebras}\label{sec:W-alg}
Let $\mathfrak{g}$ be a finite-dimensional simple Lie superalgebra with the normalized even supersymmetric invariant bilinear form $(\cdot|\cdot)$ and $f$ be a nilpotent element in the even part of $\mathfrak{g}$. Then there exists a $\frac{1}{2}\mathbb{Z}$-grading on $\mathfrak{g}$ that is good for $f$. See \cite{KRW} for the definitions of good gradings and \cite{EK, Hoyt} for the classifications. Let $\mathfrak{g}_j$ be the homogeneous subspace of $\mathfrak{g}$ with degree $j$. The good grading $\mathfrak{g}=\bigoplus_{j\in\frac{1}{2}\mathbb{Z}}\mathfrak{g}_j$ for $f$ on $\mathfrak{g}$ satisfies the following properties:
\begin{enumerate}
\item $[\mathfrak{g}_i, \mathfrak{g}_j]\subset\mathfrak{g}_{i+j}$,
\item $f\in\mathfrak{g}_{-1}$,
\item $\operatorname{ad}(f)\colon\mathfrak{g}_j\rightarrow\mathfrak{g}_{j-1}$ is injective for $j\geq\frac{1}{2}$ and surjective for $j\leq\frac{1}{2}$,
\item $(\mathfrak{g}_i|\mathfrak{g}_j)=0$ if $i+j\neq 0$,
\item $\dim\mathfrak{g}^f=\dim\mathfrak{g}_0+\dim\mathfrak{g}_{\frac{1}{2}}$, where $\mathfrak{g}^f$ is the centralizer of $f$ in $\mathfrak{g}$.
\end{enumerate}
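A standard example (stated here for orientation, not taken from this paper): for $\mathfrak{g} = \mathfrak{sl}_2$ with basis $\{e, h, f\}$ and $f$ the principal nilpotent element, the grading by $\frac{1}{2}\operatorname{ad}(h)$-eigenvalues

```latex
\mathfrak{sl}_2 \;=\; \mathfrak{g}_{-1} \oplus \mathfrak{g}_{0} \oplus \mathfrak{g}_{1}
\;=\; \mathbb{C}f \oplus \mathbb{C}h \oplus \mathbb{C}e
```

is good for $f$: $\operatorname{ad}(f)$ maps $e \mapsto -h$ (injective on $\mathfrak{g}_1$) and $h \mapsto 2f$ (surjective onto $\mathfrak{g}_{-1}$), and $\dim \mathfrak{g}^f = 1 = \dim\mathfrak{g}_0 + \dim\mathfrak{g}_{\frac{1}{2}}$.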
Then we can choose a set of simple roots $\Pi$ of $\mathfrak{g}$ for a Cartan subalgebra $\mathfrak{h}\subset\mathfrak{g}_0$ such that all positive root vectors lie in $\mathfrak{g}_{\geq0}$. Set $\Delta_j = \{ \alpha \in \Delta \mid \mathfrak{g}_\alpha \subset \mathfrak{g}_j\}$ and $\Pi_j = \Pi\cap\Delta_j$ for $j \in \frac{1}{2}\mathbb{Z}$. We have $\Pi=\Pi_0\sqcup\Pi_{\frac{1}{2}}\sqcup\Pi_1$. Let $\chi\colon\mathfrak{g}\rightarrow\mathbb{C}$ be the linear map defined by $\chi(u)=(f|u)$. Since $\operatorname{ad}(f)\colon\mathfrak{g}_{\frac{1}{2}}\rightarrow\mathfrak{g}_{-\frac{1}{2}}$ is an isomorphism of vector spaces, the super skew-symmetric bilinear form $\mathfrak{g}_{\frac{1}{2}}\times\mathfrak{g}_{\frac{1}{2}}\ni(u, v)\mapsto\chi([u, v])\in\mathbb{C}$ is non-degenerate. For each $\alpha \in \Delta$, we fix a root vector $u_\alpha$ and denote by $p(\alpha)$ its parity.
Let $V^k(\mathfrak{g})$ be the affine vertex superalgebra associated to $\mathfrak{g}$ at level $k \in \mathbb{C}$, which is generated by $u(z)$ ($u \in \mathfrak{g}$) whose parity is the same as $u$, satisfying that
\begin{align*}
[u_\lambda v] = [u, v] + k(u|v)\lambda,\quad
u, v \in \mathfrak{g}.
\end{align*}
Let $F(\mathfrak{g}_{\frac{1}{2}})$ be the neutral vertex superalgebra associated to $\mathfrak{g}_{\frac{1}{2}}$, which is strongly generated by $\phi_{\alpha}(z)$ ($\alpha \in \Delta_{\frac{1}{2}}$) whose parity is equal to $p(\alpha)$, satisfying that
\begin{align*}
[{\phi_\alpha}_\lambda \phi_\beta] = \chi([u_\alpha, u_\beta]),\quad
\alpha, \beta \in \Delta_{\frac{1}{2}}.
\end{align*}
Let $F^\mathrm{ch}(\mathfrak{g}_{>0})$ be the charged fermion vertex superalgebra associated to $\mathfrak{g}_{>0}$, which is strongly generated by $\varphi_\alpha(z), \varphi^*_\alpha(z)$ ($\alpha \in \Delta_{>0}$) whose parities are equal to $p(\alpha) + \bar{1}$, satisfying that
\begin{align*}
[{\varphi_\alpha}_\lambda \varphi^*_\beta] = \delta_{\alpha, \beta},\quad
[{\varphi_\alpha}_\lambda \varphi_\beta] = [{\varphi^*_\alpha}_\lambda \varphi^*_\beta] = 0,\quad
\alpha, \beta \in \Delta_{>0}.
\end{align*}
Let $C^k(\mathfrak{g},f) = V^k(\mathfrak{g}) \otimes F(\mathfrak{g}_{\frac{1}{2}}) \otimes F^\mathrm{ch}(\mathfrak{g}_{>0})$ and $d$ be an odd element in $C^k(\mathfrak{g},f)$ defined by
\begin{align*}
d =& \sum_{\alpha \in \Delta_{>0}}(-1)^{p(\alpha)}u_\alpha\varphi^*_\alpha - \frac{1}{2}\sum_{\alpha, \beta, \gamma \in \Delta_{>0}}(-1)^{p(\alpha)p(\gamma)}c_{\alpha, \beta}^\gamma\NO{\varphi_\gamma\varphi^*_\alpha\varphi^*_\beta}\\
&+\sum_{\alpha \in \Delta_{\frac{1}{2}}}\phi_\alpha\varphi^*_\alpha + \sum_{\alpha \in \Delta_{>0}}\chi(u_\alpha)\varphi^*_\alpha.
\end{align*}
Then $(C^k(\mathfrak{g},f), d_{(0)})$ defines a cochain complex with respect to the charged degree: $\operatorname{charge}\varphi_\alpha = -\operatorname{charge}\varphi^*_\alpha = 1$ ($\alpha \in \Delta_{>0}$) and $\operatorname{charge}A=0$ for all $A \in V^k(\mathfrak{g})\otimes F(\mathfrak{g}_{\frac{1}{2}})$. The (affine) $\mathcal{W}$-algebra $\mathcal{W}^k(\mathfrak{g}, f)$ associated to $\mathfrak{g}$, $f$ at level $k$ is defined by
\begin{align*}
\mathcal{W}^k(\mathfrak{g}, f) = H(C^k(\mathfrak{g},f), d_{(0)}).
\end{align*}
Let $C^k(\mathfrak{g},f)_+$ be a subcomplex generated by $\phi_\alpha(z)$ ($\alpha \in \Delta_{\frac{1}{2}}$), $\varphi^*_\alpha(z)$ ($\alpha \in \Delta_{>0}$) and
\begin{align*}
J^u(z) = u(z) + \sum_{\alpha, \beta \in \Delta_{>0}}c_{\beta, u}^\alpha\NO{\varphi^*_\beta(z)\varphi_\alpha(z)},\quad
u \in \mathfrak{g}_{\leq 0}.
\end{align*}
Then we have \cite{KW04}
\begin{align*}
\mathcal{W}^k(\mathfrak{g}, f) = H(C^k(\mathfrak{g},f), d_{(0)}) = H^0(C^k(\mathfrak{g},f)_+, d_{(0)}).
\end{align*}
Thus, $\mathcal{W}^k(\mathfrak{g}, f)$ is a vertex subalgebra of $C^k(\mathfrak{g},f)_+$. Using the fact that
\begin{align*}
&[{J^u}_\lambda J^v] = J^{[u, v]} + \tau(u|v)\lambda,\quad
u, v \in \mathfrak{g}_{\leq0}\\
&\tau(u|v) = k(u|v) + \frac{1}{2}\kappa_{\mathfrak{g}}(u|v) - \frac{1}{2}\kappa_{\mathfrak{g}_0}(u|v),\quad
u, v \in \mathfrak{g}_{\leq0},
\end{align*}
where $\kappa_{\mathfrak{g}}$ and $\kappa_{\mathfrak{g}_0}$ denote the Killing forms of $\mathfrak{g}$ and $\mathfrak{g}_0$ respectively, it follows that the vertex superalgebra generated by $J^u(z)$ $(u \in \mathfrak{g}_{\leq0})$ is isomorphic to the affine vertex superalgebra associated to $\mathfrak{g}_{\leq0}$ and $\tau$, which we denote by $V^{\tau}(\mathfrak{g}_{\leq0})$. Therefore the homogeneous subspace of $C^k(\mathfrak{g},f)_+$ with charged degree $0$ is isomorphic to $V^{\tau}(\mathfrak{g}_{\leq0}) \otimes F(\mathfrak{g}_{\frac{1}{2}})$. The projection $\mathfrak{g}_{\leq 0} \twoheadrightarrow \mathfrak{g}_0$ induces a surjective vertex superalgebra homomorphism $V^{\tau}(\mathfrak{g}_{\leq0}) \otimes F(\mathfrak{g}_{\frac{1}{2}}) \twoheadrightarrow V^{\tau}(\mathfrak{g}_{0}) \otimes F(\mathfrak{g}_{\frac{1}{2}})$ so that we have
\begin{align*}
\Upsilon \colon \mathcal{W}^k(\mathfrak{g}, f) \rightarrow V^{\tau}(\mathfrak{g}_{0}) \otimes F(\mathfrak{g}_{\frac{1}{2}})
\end{align*}
by the restriction. The map $\Upsilon$ is called the Miura map and is injective thanks to \cite{Frenkel, Arakawa17, Nakatsuka}.
\section{Finite $\mathcal{W}$-algebras}\label{sec:finite-W}
We recall the definition of the finite $\mathcal{W}$-algebra $U(\mathfrak{g}, f)$, following \cite{D3HK}. We introduce two definitions in \eqref{eq: finiteW-def1} and \eqref{eq: finiteW-def2}, denoted by $U(\mathfrak{g}, f)_I$ and $U(\mathfrak{g}, f)_{I\hspace{-.2em}I}$ respectively, and prove the isomorphism $U(\mathfrak{g}, f)_I \simeq U(\mathfrak{g}, f)_{I\hspace{-.2em}I}$ in Theorem \ref{thm:D3HK}.
\smallskip
Let $\Phi$ be an associative $\mathbb{C}$-superalgebra generated by $\Phi_{\alpha}$ $(\alpha \in \Delta_{\frac{1}{2}})$, where $\Phi_\alpha$ has the same parity as $u_\alpha$, satisfying that
\begin{align*}
[\Phi_{\alpha}, \Phi_{\beta}] = \chi([u_\alpha, u_\beta]),\quad
\alpha, \beta \in \Delta_{\frac{1}{2}}.
\end{align*}
Here $[A,B]$ denotes $AB - (-1)^{p(A)\,p(B)}BA$. We extend the definition of $\Phi_\alpha$ for all $\alpha\in \Delta_{>0}$ by $\Phi_{\alpha} = 0$ for $\alpha \in \Delta_{\geq 1}$. Let $\Lambda(\mathfrak{g}_{>0})$ be the Clifford superalgebra associated to $\mathfrak{g}_{>0}$, which is an associative $\mathbb{C}$-superalgebra generated by $\psi_\alpha, \psi^*_\alpha$ $(\alpha \in \Delta_{>0})$ with the opposite parity to that of $u_\alpha$, satisfying that
\begin{align*}
[\psi_\alpha, \psi^*_\beta] = \delta_{\alpha, \beta},\quad
[\psi_\alpha, \psi_\beta] = [\psi^*_\alpha, \psi^*_\beta] = 0,\quad
\alpha, \beta \in \Delta_{>0}.
\end{align*}
The Clifford superalgebra $\Lambda(\mathfrak{g}_{>0})$ has the charged degree defined by $\deg(\psi_\alpha) = 1 = -\deg(\psi^*_\alpha)$ for all $\alpha \in \Delta_{>0}$. Set
\begin{align*}
&C_I = U(\mathfrak{g})\otimes\Phi\otimes\Lambda(\mathfrak{g}_{>0}),\quad
d_I=\operatorname{ad}(Q),\\
&Q = \sum_{\alpha\in\Delta_{>0}}(-1)^{p(\alpha)}X_\alpha\psi_\alpha-\frac{1}{2}\sum_{\alpha,\beta,\gamma\in\Delta_{>0}}(-1)^{p(\alpha)p(\gamma)}c_{\alpha,\beta}^\gamma\psi_\gamma\psi^*_\alpha\ \psi^*_\beta,\\
&X_\alpha = u_\alpha + (-1)^{p(\alpha)}(\Phi_{\alpha} + \chi(u_\alpha)),\quad
\alpha \in \Delta_{>0},
\end{align*}
where $c_{\alpha, \beta}^\gamma$ is the structure constant defined by $[u_\alpha, u_\beta] = \sum_{\gamma \in \Delta_{>0}} c_{\alpha, \beta}^\gamma u_\gamma$. Then a pair $(C_I, d_I)$ forms a cochain complex with respect to the charged degree on $\Lambda(\mathfrak{g}_{>0})$ and the cohomology
\begin{align}\label{eq: finiteW-def1}
U(\mathfrak{g}, f)_{I} = H^\bullet(C_I, d_I)
\end{align}
has a structure of an associative $\mathbb{C}$-superalgebra inherited from that of $C_I$. Let
\begin{align*}
j^u = u + \sum_{\alpha, \beta \in \Delta_{>0}}c_{\beta, u}^\alpha\psi^*_\beta\ \psi_\alpha,\quad
u \in \mathfrak{g}.
\end{align*}
Then
\begin{align*}
\operatorname{ad}(Q) \cdot \psi_{\alpha} = j^{u_\alpha} + (-1)^{p(\alpha)}(\Phi_{\alpha} + \chi(u_\alpha)) = X_\alpha + \sum_{\beta, \gamma \in \Delta_{>0}}c_{\beta, u_\alpha}^{\gamma}\psi^*_\beta\ \psi_\gamma,\quad
\alpha \in \Delta_{>0}.
\end{align*}
Let $C_-$ be the subalgebra of $C_I$ generated by $\psi_\alpha$, $\operatorname{ad}(Q) \cdot \psi_{\alpha}$ $(\alpha \in \Delta_{>0})$ and $C_+$ be the subalgebra of $C_I$ generated by $j^u$ $(u \in \mathfrak{g}_{\leq0})$, $\Phi_\alpha$ $(\alpha \in \Delta_{\frac{1}{2}})$ and $\psi^*_\alpha$ $(\alpha \in \Delta_{>0})$. Then $(C_\pm, d_I)$ form subcomplexes and $C_I \simeq C_- \otimes C_+$ as vector superspaces. Since $H(C_-, d_I)=\mathbb{C}$, we have
\begin{align*}
H(C_I, d_I) \simeq\ H(C_-, d_I) \otimes H(C_+, d_I) = H(C_+, d_I).
\end{align*}
Using the same argument as in \cite{KW04}, it follows that $H^n(C_+, d_I)=0$ for $n\neq0$. Therefore $U(\mathfrak{g}, f)_{I}$ is a subalgebra of $C^0_+$, which is generated by $j^u$ $(u \in \mathfrak{g}_{\leq0})$ and $\Phi_\alpha$ $(\alpha \in \Delta_{\frac{1}{2}})$. Since $[j^u, j^v] = j^{[u, v]}$ for $u, v \in \mathfrak{g}_{\leq0}$, there exists an isomorphism $C^0_+ \simeq U(\mathfrak{g}_{\leq0}) \otimes \Phi$ as associative $\mathbb{C}$-superalgebras. The projection $\mathfrak{g}_{\leq 0} \twoheadrightarrow \mathfrak{g}_0$ induces an associative $\mathbb{C}$-superalgebra surjective homomorphism $U(\mathfrak{g}_{\leq 0}) \otimes \Phi \twoheadrightarrow U(\mathfrak{g}_{0}) \otimes \Phi$ so that we have
\begin{align*}
\mu \colon U(\mathfrak{g}, f)_{I} \rightarrow U(\mathfrak{g}_{0}) \otimes \Phi
\end{align*}
by the restriction. The map $\mu$ is called the Miura map for finite $\mathcal{W}$-algebras and is injective by \cite{Lynch, Genra20, Nakatsuka}. Let $\mathbb{C}_{-\chi}$ be the one-dimensional $\mathfrak{g}_{\geq1}$-module defined by $\mathfrak{g}_{\geq1} \ni u \mapsto -\chi(u) \in \mathbb{C}$ and $M_{I\hspace{-.2em}I}$ be the induced left $\mathfrak{g}$-module
\begin{align*}
M_{I\hspace{-.2em}I} = \operatorname{Ind}^\mathfrak{g}_{\mathfrak{g}_{\geq1}}\mathbb{C}_{-\chi} = U(\mathfrak{g})\underset{U(\mathfrak{g}_{\geq1})}{\otimes}\mathbb{C}_{-\chi} \simeq U(\mathfrak{g})/I_{-\chi},
\end{align*}
where $I_{-\chi}$ is the left ideal of $U(\mathfrak{g})$ generated by $u + \chi(u)$ for all $u \in \mathfrak{g}_{\geq1}$. Then $M_{I\hspace{-.2em}I}$ has the structure of an $\operatorname{ad}(\mathfrak{g}_{>0})$-module inherited from that of $U(\mathfrak{g})$. Set the $\operatorname{ad}(\mathfrak{g}_{>0})$-invariant subspace
\begin{align}\label{eq: finiteW-def2}
U(\mathfrak{g}, f)_{I\hspace{-.2em}I} = (M_{I\hspace{-.2em}I})^{\operatorname{ad}(\mathfrak{g}_{>0})}.
\end{align}
Then $U(\mathfrak{g}, f)_{I\hspace{-.2em}I}$ also has a structure of an associative $\mathbb{C}$-superalgebra inherited from that of $U(\mathfrak{g})$. We may also define $U(\mathfrak{g}, f)_{I\hspace{-.2em}I}$ as the Chevalley cohomology $H(\mathfrak{g}_{>0}, M_{I\hspace{-.2em}I})$ of the left $\mathfrak{g}_{>0}$-module $M_{I\hspace{-.2em}I}$:
\begin{lemma}[{\cite{GG, Nakatsuka}}]
\begin{align*}
H(\mathfrak{g}_{>0}, M_{I\hspace{-.2em}I}) = H^0(\mathfrak{g}_{>0}, M_{I\hspace{-.2em}I}) = (M_{I\hspace{-.2em}I})^{\operatorname{ad}(\mathfrak{g}_{>0})}.
\end{align*}
\begin{proof}
Though the assertion is proved in \cite{GG} for Lie algebras $\mathfrak{g}$, the same proof together with \cite[Corollary 2.6]{Nakatsuka} applies.
\end{proof}
\end{lemma}
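For orientation, consider the simplest purely even case $\mathfrak{g} = \mathfrak{sl}_2$ with standard basis $e, h, f$ and the Dynkin grading (an illustration of the definitions above, not taken from the text, with the invariant form normalized so that $\chi(e) = (f|e) = 1$). Then the second definition reads
\begin{align*}
U(\mathfrak{sl}_2, f)_{I\hspace{-.2em}I} = \left(U(\mathfrak{sl}_2)/U(\mathfrak{sl}_2)(e+1)\right)^{\operatorname{ad}(\mathbb{C}e)} \simeq Z(\mathfrak{sl}_2) = \mathbb{C}[\Omega],
\end{align*}
a polynomial algebra in the Casimir element $\Omega$, recovering Kostant's classical description of the principal finite $\mathcal{W}$-algebra in the Lie algebra case.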
\begin{theorem}[{\cite[Theorem A.6]{D3HK}}]\label{thm:D3HK}
There exists an isomorphism $U(\mathfrak{g}, f)_I \simeq U(\mathfrak{g}, f)_{I\hspace{-.2em}I}$ as associative $\mathbb{C}$-superalgebras.
\begin{proof}
Though the assertion is proved in \cite{D3HK} for Lie algebras $\mathfrak{g}$, the same proof applies as follows. Let $C_{I\hspace{-.2em}I} = \Lambda(\mathfrak{g}_{>0})_c \otimes M_{I\hspace{-.2em}I}$ be the Chevalley cohomology complex of the left $\mathfrak{g}_{>0}$-module $M_{I\hspace{-.2em}I}$, where $\Lambda(\mathfrak{g}_{>0})_c$ is the subalgebra of $\Lambda(\mathfrak{g}_{>0})$ generated by $\psi^*_\alpha$ for all $\alpha \in \Delta_{>0}$, and $d_{I\hspace{-.2em}I}$ be the derivation of the cochain complex $C_{I\hspace{-.2em}I}$. Let $U(\mathfrak{g}_{>0})_{-\chi} = U(\mathfrak{g}_{>0})\otimes\mathbb{C}_{-\chi}$ be a left $\mathfrak{g}_{\geq1}$-module defined by the diagonal action, where $U(\mathfrak{g}_{>0})$ is considered as a left $\mathfrak{g}_{\geq1}$-module by the left multiplication, and $M_{I\hspace{-.2em}I\hspace{-.2em}I}$ be the induced left $\mathfrak{g}$-module
\begin{align*}
M_{I\hspace{-.2em}I\hspace{-.2em}I}
= \operatorname{Ind}^\mathfrak{g}_{\mathfrak{g}_{\geq1}}U(\mathfrak{g}_{>0})_{-\chi}
= U(\mathfrak{g})\underset{U(\mathfrak{g}_{\geq1})}{\otimes}U(\mathfrak{g}_{>0})_{-\chi}.
\end{align*}
Let $\mathbb{C}_\chi$ be the one-dimensional $\mathfrak{g}_{\geq1}$-module defined by $\mathfrak{g}_{\geq1} \ni u \mapsto \chi(u) \in \mathbb{C}$ and $U(\mathfrak{g})_\chi = U(\mathfrak{g}) \otimes \mathbb{C}_\chi$ be a right $\mathfrak{g}_{\geq1}$-module defined by the diagonal action, where $U(\mathfrak{g})$ is considered as a right $\mathfrak{g}_{\geq1}$-module by the right multiplication. Then we have
\begin{align*}
M_{I\hspace{-.2em}I\hspace{-.2em}I} \simeq U(\mathfrak{g})_\chi\underset{U(\mathfrak{g}_{\geq1})}{\otimes}U(\mathfrak{g}_{>0})
\end{align*}
so that $M_{I\hspace{-.2em}I\hspace{-.2em}I}$ is a left $\mathfrak{g}$-, right $\mathfrak{g}_{>0}$-bimodule. Note that there is an isomorphism $\Lambda(\mathfrak{g}_{>0}) \simeq \Lambda(\mathfrak{g}_{>0})_h \otimes \Lambda(\mathfrak{g}_{>0})_c$ of vector superspaces, where $\Lambda(\mathfrak{g}_{>0})_h$ is the subalgebra of $\Lambda(\mathfrak{g}_{>0})$ generated by $\psi_\alpha$ for all $\alpha \in \Delta_{>0}$. Let $d_h$ be the derivation of the Chevalley homology complex $M_{I\hspace{-.2em}I\hspace{-.2em}I}\otimes\Lambda(\mathfrak{g}_{>0})_h$ of the right $\mathfrak{g}_{>0}$-module $M_{I\hspace{-.2em}I\hspace{-.2em}I}$. Then $M_{I\hspace{-.2em}I\hspace{-.2em}I}\otimes\Lambda(\mathfrak{g}_{>0})_h$ is clearly a left $\mathfrak{g}_{>0}$-module with respect to the adjoint $\mathfrak{g}_{>0}$-action. Now, let $d_c$ be the derivation of the Chevalley cohomology complex $\Lambda(\mathfrak{g}_{>0})_c \otimes M_{I\hspace{-.2em}I\hspace{-.2em}I}\otimes\Lambda(\mathfrak{g}_{>0})_h$ of the left $\mathfrak{g}_{>0}$-module $M_{I\hspace{-.2em}I\hspace{-.2em}I}\otimes\Lambda(\mathfrak{g}_{>0})_h$. Then, as in \cite{D3HK}, we get a new cochain complex $(C_{I\hspace{-.2em}I\hspace{-.2em}I}, d_{I\hspace{-.2em}I\hspace{-.2em}I})$ defined by
\begin{align*}
C_{I\hspace{-.2em}I\hspace{-.2em}I} = \Lambda(\mathfrak{g}_{>0})_c \otimes M_{I\hspace{-.2em}I\hspace{-.2em}I}\otimes\Lambda(\mathfrak{g}_{>0})_h,\quad
d_{I\hspace{-.2em}I\hspace{-.2em}I} = d_c +(-1)^{\delta-1}\otimes d_h,
\end{align*}
where $\delta$ denotes the parity of the $\Lambda(\mathfrak{g}_{>0})_c$-component of an element. Then it is easy to check that the following linear map
\begin{multline*}
i_{I\hspace{-.2em}I\hspace{-.2em}I \rightarrow I} \colon C_{I\hspace{-.2em}I\hspace{-.2em}I} \ni\ \psi^*_{\beta_1}\cdots \psi^*_{\beta_i} \otimes (v_1 \cdots v_s \underset{U(\mathfrak{g}_{\geq1})}{\otimes}u_{\alpha_1} \cdots u_{\alpha_t}) \otimes \psi_{\gamma_1}\cdots \psi_{\gamma_j}\\
\mapsto\ \psi^*_{\beta_1}\cdots \psi^*_{\beta_i} \cdot v_1 \cdots v_s \cdot X_{\alpha_1} \cdots X_{\alpha_t} \cdot \psi_{\gamma_1}\cdots \psi_{\gamma_j} \in C_I
\end{multline*}
with $v_1, \ldots, v_s \in \mathfrak{g}$, $\alpha_1,\ldots,\alpha_t, \beta_1, \ldots,\beta_i, \gamma_1, \ldots, \gamma_j \in \Delta_{>0}$ is well-defined and induces an isomorphism of complexes $(C_{I\hspace{-.2em}I\hspace{-.2em}I}, d_{I\hspace{-.2em}I\hspace{-.2em}I}) \rightarrow (C_I, d_I )$ since $i_{I\hspace{-.2em}I\hspace{-.2em}I \rightarrow I} \circ d_{I\hspace{-.2em}I\hspace{-.2em}I} = d_I \circ i_{I\hspace{-.2em}I\hspace{-.2em}I \rightarrow I}$. Now
\begin{align*}
H_n(C_{I\hspace{-.2em}I\hspace{-.2em}I}, d_h)
&= \Lambda(\mathfrak{g}_{>0})_c \otimes H_n\left(M_{I\hspace{-.2em}I\hspace{-.2em}I}\otimes\Lambda(\mathfrak{g}_{>0})_h, d_h\right)\\
&= \Lambda(\mathfrak{g}_{>0})_c \otimes U(\mathfrak{g})_\chi\underset{U(\mathfrak{g}_{\geq1})}{\otimes} H_n(\mathfrak{g}_{>0}, U(\mathfrak{g}_{>0}))\\
&= \delta_{n, 0}\ \Lambda(\mathfrak{g}_{>0})_c \otimes U(\mathfrak{g})_\chi\underset{U(\mathfrak{g}_{\geq1})}{\otimes}\mathbb{C}
\simeq \delta_{n,0}\ C_{I\hspace{-.2em}I}.
\end{align*}
Thus, since $d_c$ and $(-1)^{\delta-1}\otimes d_h$ commute, we have
\begin{align*}
H(C_{I\hspace{-.2em}I\hspace{-.2em}I}, d_{I\hspace{-.2em}I\hspace{-.2em}I})
\simeq H(H(C_{I\hspace{-.2em}I\hspace{-.2em}I},d_h), d_c)
\simeq H(C_{I\hspace{-.2em}I}, d_{I\hspace{-.2em}I}).
\end{align*}
The above argument together with the isomorphism $i_{I\hspace{-.2em}I\hspace{-.2em}I \rightarrow I}$ of complexes shows that $(C_I, d_I )$ and $(C_{I\hspace{-.2em}I}, d_{I\hspace{-.2em}I})$ are quasi-isomorphic via the following quasi-isomorphism
\begin{multline}\label{eq:D3HK-quasi}
i_{I \rightarrow I\hspace{-.2em}I}\colon C_I \ni\ \psi^*_{\beta_1}\cdots \psi^*_{\beta_i} \cdot v_1 \cdots v_s \cdot X_{\alpha_1} \cdots X_{\alpha_t} \cdot \psi_{\gamma_1}\cdots \psi_{\gamma_j}\\
\mapsto\ \delta_{t,0}\delta_{j,0}\ \psi^*_{\beta_1}\cdots \psi^*_{\beta_i} \cdot v_1 \cdots v_s \in C_{I\hspace{-.2em}I},
\end{multline}
which preserves the associative superalgebra structures on the cohomologies.
\end{proof}
\end{theorem}
\begin{definition}
The finite $\mathcal{W}$-algebra $U(\mathfrak{g}, f)$ associated to $\mathfrak{g}, f$ is defined to be the superalgebra $U(\mathfrak{g}, f)_{I}$, which is isomorphic to $U(\mathfrak{g}, f)_{I\hspace{-.2em}I}$ due to Theorem \ref{thm:D3HK}.
\end{definition}
\begin{remark}\label{rem:def-W}
The Poisson superalgebra analogue of Theorem \ref{thm:D3HK} has been studied in \cite{Suh16}. Remark also that our definition of the finite $\mathcal{W}$-algebra $U(\mathfrak{g}, f)$ is not necessarily equivalent to the definitions in parts of the literature \cite{Poletaeva13, PS, ZS}. In fact, in the case that $\mathfrak{g}=\mathfrak{osp}_{1|2n}$ and $f = f_\mathrm{prin}$ is its principal nilpotent element, we have $\dim\mathfrak{g}_{\frac{1}{2}} = \dim\mathfrak{g}_{\frac{1}{2}, \bar{1}} = 1$ and thus $\mathfrak{g}_{\geq1} \subsetneq \mathfrak{g}_{>0}$. Then $U(\mathfrak{g}, f) \simeq U(\mathfrak{g}, f)_{I\hspace{-.2em}I} = (U(\mathfrak{g})/I_{-\chi})^{\operatorname{ad}(\mathfrak{g}_{>0})}$ is a proper subalgebra of $(U(\mathfrak{g})/I_{-\chi})^{\operatorname{ad}(\mathfrak{g}_{\geq 1})} = \operatorname{End}_{U(\mathfrak{g})}U(\mathfrak{g})/I_{-\chi}$.
\end{remark}
The vertex superalgebra $C^k(\mathfrak{g}, f)$ has a conformal vector $\omega$ if $k\neq-h^\vee$, which defines the conformal weights on $C^k(\mathfrak{g}, f)$ by $L_0$, where $\omega(z) = \sum_{n \in \mathbb{Z}}L_n z^{-n-2}$. See \cite{KRW} for the details. Then $H=L_0$ defines a Hamiltonian operator on $C^k(\mathfrak{g}, f)$, the vertex subalgebra $C^k(\mathfrak{g},f)_+$ and the corresponding $\mathcal{W}$-algebra $\mathcal{W}^k(\mathfrak{g}, f)$. Moreover, the Hamiltonian operator $L_0$ is well-defined for all $k \in \mathbb{C}$. Recall that $\operatorname{Zhu}_H V$ is the $H$-twisted Zhu algebra of $V$, see Section \ref{sec:Gamma/Z}. Let $x \in \mathfrak{h}$ be such that $[x ,u] = j u$ for $u \in \mathfrak{g}_j$. Then by \cite{Arakawa17,DK},
\begin{align}\label{eq:Zhu-explicit}
\operatorname{Zhu}_H C^k(\mathfrak{g},f)_+ \simeq C_+,\quad
J^u \mapsto j^u + \tau(x|u),\
\phi_\alpha \mapsto \Phi_\alpha,\
\varphi_\alpha^* \mapsto \psi^*_\alpha
\end{align}
for $u \in \mathfrak{g}_{\leq 0}$, $\alpha \in \Delta_{>0}$ and $\operatorname{Zhu}_H H^0(C^k(\mathfrak{g},f)_+,d_{(0)}) \simeq H^0(C_+, d_I)$ so that
\begin{align}\label{eq:tw-Zhu}
\operatorname{Zhu}_H \mathcal{W}^k(\mathfrak{g}, f) \simeq U(\mathfrak{g}, f).
\end{align}
Let $V_1, V_2$ be any $\frac{1}{2}\mathbb{Z}_{\geq0}$-graded vertex superalgebras with the Hamiltonian operators and $g \colon V_1\rightarrow V_2$ any vertex superalgebra homomorphism preserving the conformal weights. Since $g(V_1\circ V_1)=g(V_1)\circ g(V_1)\subset V_2\circ V_2$, the map $g$ induces an algebra homomorphism
\begin{align*}
\operatorname{Zhu}_H (g)\colon\operatorname{Zhu}_H V_1\rightarrow\operatorname{Zhu}_H V_2.
\end{align*}
Applying this to $g = \Upsilon$, we get
\begin{align*}
\operatorname{Zhu}_H(\Upsilon) = \mu
\end{align*}
by construction.
\section{Principal $\mathcal{W}$-algebras of $\mathfrak{osp}_{1|2n}$}\label{sec:prin-W}
Consider the case that
\begin{align*}
\mathfrak{g} = \mathfrak{osp}_{1|2n}
= \left\{ u =
\left(
\begin{array}{c|cc}
0 & {}^t y & -{}^t x \\
\hline %
x & a & b \\
y & c & -{}^t a
\end{array}
\right) \in \mathfrak{gl}_{1|2n}
\mathrel{} \middle| \mathrel{}
\begin{array}{l}
a, b, c \in \operatorname{Mat}_\mathbb{C}(n \times n),\\
x, y \in \operatorname{Mat}_\mathbb{C}(n \times 1),\\
b = {}^t b,\ c = {}^t c
\end{array}
\right\},
\end{align*}
where ${}^t A$ denotes the transpose of $A$. Let $\{e_{i, j}\}_{i, j \in I}$ be the standard basis of $\mathfrak{gl}_{1|2n}$ with the index set $I = \{ 0, 1, \ldots, n, -1, \ldots, -n\}$ and $h_i = e_{i, i} - e_{-i, -i}$ $(i = 1, \ldots, n)$. Then $\mathfrak{h} = \operatorname{Span}_\mathbb{C}\{h_i\}_{i=1}^n$ is a Cartan subalgebra of $\mathfrak{osp}_{1|2n}$. Define $\epsilon_i \in \mathfrak{h}^*$ by $\epsilon_i(h_j) = \delta_{i, j}$. Then $\Delta_+ = \{\epsilon_i, 2\epsilon_i\}_{i=1}^n \sqcup \{\epsilon_i-\epsilon_j,\epsilon_i+\epsilon_j\}_{1 \leq i<j \leq n}$ forms a set of positive roots with simple roots $\Pi = \{ \alpha_i\}_{i=1}^n$, $\alpha_i = \epsilon_i - \epsilon_{i+1}$ $(i=1,\ldots,n-1)$ and $\alpha_n = \epsilon_n$, and $\epsilon_1, \ldots, \epsilon_n$ are the (non-isotropic) odd roots in $\Delta_+$. Set $\Delta_- = -\Delta_+$ and $(u|v) = -\operatorname{str}(uv)$ for $u, v \in \mathfrak{osp}_{1|2n}$. We may identify $\mathfrak{h}^*$ with $\mathfrak{h}$ through $\nu \colon \mathfrak{h}^* \ni \lambda \mapsto \nu(\lambda) \in \mathfrak{h}$ defined by $\lambda(h) = (h|\nu(\lambda))$ for $h \in \mathfrak{h}$, which induces a non-degenerate bilinear form on $\mathfrak{h}^*$ by $(\lambda|\mu) = (\nu(\lambda)|\nu(\mu))$ so that $(\epsilon_i|\epsilon_j) = \delta_{i, j}/2$. Then $h_i$ corresponds to $2\epsilon_i = 2 \sum_{j=i}^n \alpha_j$ by $\nu$. We have
\begin{align*}
(\alpha_i|\alpha_i) = 1,\quad
(\alpha_i|\alpha_{i+1}) = -\frac{1}{2},\quad
i = 1, \ldots, n-1;\quad
(\alpha_n|\alpha_n) = \frac{1}{2}.
\end{align*}
Note that the dual Coxeter number of $\mathfrak{osp}_{1|2n}$ is equal to $n+\frac{1}{2}$. Let
\begin{align*}
f_\mathrm{prin} = \sum_{i=1}^{n-1} u_{-\alpha_i} + u_{-2\alpha_n}
\end{align*}
be a principal nilpotent element in the even part of $\mathfrak{osp}_{1|2n}$, where $u_\alpha$ denotes some root vector for $\alpha \in \Delta$. Then there exists a unique good grading on $\mathfrak{osp}_{1|2n}$ such that $\Pi_1 = \{\alpha_i\}_{i=1}^{n-1}$ and $\Pi_{\frac{1}{2}} = \{\alpha_n\}$. Thus
\begin{align*}
\mathfrak{g}_0 = \mathfrak{h},\
\mathfrak{g}_{>0} = \mathfrak{n} := \bigoplus_{\alpha \in \Delta_+}\mathfrak{g}_\alpha,\
\mathfrak{g}_{<0} = \mathfrak{n}_- := \bigoplus_{\alpha \in \Delta_-}\mathfrak{g}_\alpha.
\end{align*}
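For example, when $n = 1$ (a small illustrative case), there are no even simple roots, so that
\begin{align*}
f_\mathrm{prin} = u_{-2\alpha_1},\quad
\mathfrak{g}_{\frac{1}{2}} = \mathfrak{g}_{\alpha_1},\quad
\mathfrak{g}_1 = \mathfrak{g}_{2\alpha_1},\quad
\mathfrak{g}_0 = \mathfrak{h} = \mathbb{C}h_1,
\end{align*}
and the odd one-dimensional space $\mathfrak{g}_{\frac{1}{2}}$ is exactly the source of the discrepancy $\mathfrak{g}_{\geq1} \subsetneq \mathfrak{g}_{>0}$ noted in Remark \ref{rem:def-W}.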
Let
\begin{align*}
\mathcal{W}^k(\mathfrak{osp}_{1|2n}) := \mathcal{W}^k(\mathfrak{osp}_{1|2n}, f_\mathrm{prin})
\end{align*}
be the principal $\mathcal{W}$-algebra of $\mathfrak{osp}_{1|2n}$ at level $k$. The Miura map for $\mathcal{W}^k(\mathfrak{osp}_{1|2n})$ is
\begin{align*}
\Upsilon\colon \mathcal{W}^k(\mathfrak{osp}_{1|2n}) \rightarrow \pi \otimes F,
\end{align*}
where $\pi$ is the Heisenberg vertex algebra generated by even fields $\alpha_i(z)$ $(i = 1, \ldots, n)$ satisfying that
\begin{align*}
[{\alpha_i}_\lambda \alpha_j] = \left(k+n+\frac{1}{2}\right)(\alpha_i|\alpha_j)\lambda,\quad
i, j = 1,\ldots,n
\end{align*}
and $F$ is the free fermion vertex superalgebra generated by an odd field $\phi(z)$ satisfying that
\begin{align*}
[\phi_\lambda\phi] = 1.
\end{align*}
By \cite[Theorem 6.4]{Genra17}, $\mathcal{W}^k(\mathfrak{osp}_{1|2n})$ is strongly generated by an odd element $G$ and even elements $W_2, W_4, \ldots, W_{2n}$ of conformal weights $n + \frac{1}{2}$ and $2, 4, \ldots, 2n$, respectively, such that
\begin{equation}\label{eq:G_W_formula}
\begin{array}{ll}
&\displaystyle \Upsilon(G)(z) =\ \NO{(2(k+n)\partial + h_1(z)) \cdots (2(k+n)\partial + h_n(z))\phi(z)},\\[2mm]
&\displaystyle \Upsilon(W_{2i})(z) \equiv \sum_{1\leq j_1<\cdots<j_{i}\leq n}\NO{h_{j_1}^2(z)\cdots h_{j_{i}}^2(z)}\quad\left(\operatorname{mod}\ C_2(\pi\otimes F)\right),\\[5mm]
&\displaystyle C_2(\pi\otimes F)=\{ A_{(-2)}B\mid A, B\in\pi\otimes F\}.
\end{array}
\end{equation}
and
\begin{align}\label{eq:G-W-relation}
[G_\lambda G] = W_{2n} + \sum_{i=1}^{n-1}\gamma_i\left(\frac{\lambda^{2i-1}}{(2i-1)!}W_{2n-2i+1}+\frac{\lambda^{2i}}{(2i)!}W_{2n-2i}\right) + \gamma_n \frac{\lambda^{2n}}{(2n)!}
\end{align}
for some $W_{2j+1} \in \mathcal{W}^k(\mathfrak{osp}_{1|2n})$, where
\begin{align*}
h_i(z) = 2\sum_{j=i}^n \alpha_j(z),\quad
\gamma_i = (-1)^i\prod_{j=1}^i\left( 2(2j-1)(k+n)-1 \right)\left( 4j(k+n)+1 \right),
\end{align*}
which satisfy that
\begin{align*}
[{h_i}_\lambda h_j] = (2k+2n+1)\delta_{i, j}\lambda,\quad
i, j=1,\ldots, n.
\end{align*}
If $k + n + \frac{1}{2} \neq 0$,
\begin{align*}
L = \frac{W_2}{2(2k+2n+1)}
\end{align*}
is a unique conformal vector of $\mathcal{W}^k(\mathfrak{osp}_{1|2n})$ with the central charge
\begin{align*}
c(k) = -\frac{(2n+1)(2(2n-1)(k+n)-1)(4n(k+n)+1)}{2(2k+2n+1)}.
\end{align*}
\section{Zhu algebras of $\mathcal{W}^k(\mathfrak{osp}_{1|2n})$}\label{sec:Zhu_prinW}
By \eqref{eq:tw-Zhu}, we have an isomorphism
\begin{align*}
\iota_1 \colon \operatorname{Zhu}_H \mathcal{W}^k(\mathfrak{osp}_{1|2n}) \xrightarrow{\simeq} U(\mathfrak{osp}_{1|2n}, f_\mathrm{prin}).
\end{align*}
Then $\iota_1$ is induced by \eqref{eq:Zhu-explicit}:
\begin{align*}
&\operatorname{Zhu}_H C^k(\mathfrak{osp}_{1|2n},f_\mathrm{prin}) \xrightarrow{\simeq} C_+,\\
&J^u \mapsto j^u + (2k+2n+1)(\rho_{\mathfrak{osp}}|u),\
\phi_\alpha \mapsto \Phi_\alpha,\
\varphi^*_\alpha \mapsto \psi^*_\alpha,
\end{align*}
where
\begin{align*}
\rho_{\mathfrak{osp}} = \frac{1}{2}\sum_{\alpha \in \Delta_+}(-1)^{p(\alpha)}\alpha.
\end{align*}
Let $\mathbb{C}[\mathfrak{h}^*] = U(\mathfrak{h})$ and set an isomorphism
\begin{align*}
&\iota_2 \colon \operatorname{Zhu}_H \pi\otimes \operatorname{Zhu}_H F \xrightarrow{\simeq} \mathbb{C}[\mathfrak{h}^*]\otimes\Phi,\\
&h_i \mapsto h_i + (2n-2i+1)\left(k+n+\frac{1}{2}\right),\quad
\phi_{\alpha_n} \mapsto \Phi_{\alpha_n}.
\end{align*}
Then we have a commutative diagram of Miura maps
\begin{equation*}
\SelectTips{cm}{}
\xymatrix@W15pt@H11pt@R12pt@C30pt{
\operatorname{Zhu}_H \mathcal{W}^k(\mathfrak{osp}_{1|2n})\ar[r]^-{\operatorname{Zhu}_H (\Upsilon)}\ar[d]_-{\iota_1}&\operatorname{Zhu}_H \pi\otimes \operatorname{Zhu}_H F\ar[d]^{\iota_2}\\
U(\mathfrak{osp}_{1|2n}, f_\mathrm{prin})\ar[r]^-{\mu}&\mathbb{C}[\mathfrak{h}^*]\otimes\Phi.
}
\end{equation*}
By \cite{DK}, $\operatorname{Zhu}_H\mathcal{W}^k(\mathfrak{osp}_{1|2n})$ has a PBW basis generated by $G, W_2, W_4, \ldots, W_{2n}$. By abuse of notation, we shall denote the generators of $U(\mathfrak{osp}_{1|2n}, f_\mathrm{prin})$ corresponding to $G, W_2, W_4, \ldots, W_{2n}$ under $\iota_1$ by the same symbols.
\begin{lemma}\label{lem:mu(G)}
$\mu(G) = \left(h_1 + \rho_{\mathfrak{osp}}(h_1)\right)\left(h_2 + \rho_{\mathfrak{osp}}(h_2)\right) \cdots \left(h_n + \rho_{\mathfrak{osp}}(h_n)\right)\otimes\Phi_{\alpha_n}$.
\begin{proof}
We have
\begin{align*}
\Upsilon(G) =&\ \NO{(2(k+n)\partial + h_1) \cdots (2(k+n)\partial + h_n)\phi}\\
\equiv& (-(2n-1)(k+n) + h_1) * (-(2n-3)(k+n) + h_2)*\\
& \cdots * (-(k+n) + h_n)* \phi
\quad\left(\operatorname{mod}\ \mathcal{W}^k(\mathfrak{osp}_{1|2n}) \circ \mathcal{W}^k(\mathfrak{osp}_{1|2n}) \right).
\end{align*}
Thus
\begin{align*}
\mu(G)=& \iota_2\Bigl((-(2n-1)(k+n) + h_1) * (-(2n-3)(k+n) + h_2) *\\
&\quad\cdots * (-(k+n) + h_n)* \phi\Bigr)\\
=&\left(h_1 + n-1 + \frac{1}{2}\right)\left(h_2 + n-2+\frac{1}{2}\right) \cdots \left(h_n + \frac{1}{2}\right)\otimes\Phi_{\alpha_n}.
\end{align*}
Therefore the assertion follows from the fact that $\rho_{\mathfrak{osp}}(h_i) = n-i+\frac{1}{2}$.
\end{proof}
\end{lemma}
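The $\rho_{\mathfrak{osp}}$-shift in Lemma \ref{lem:mu(G)} can be sanity-checked numerically. The following Python sketch (the helper names are ours, and $\lambda$ is encoded by the tuple of values $\lambda(h_1), \ldots, \lambda(h_n)$) verifies that applying $\sigma$, i.e.\ $\lambda \mapsto \lambda - \rho_{\mathfrak{osp}}$, to $\prod_i (h_i + \rho_{\mathfrak{osp}}(h_i))$ returns $h_1 \cdots h_n$:

```python
import random

n = 3  # any rank works; rho_osp(h_i) = n - i + 1/2 as in the text
rho = [n - i + 0.5 for i in range(1, n + 1)]

def mu_G(lam):
    # the polynomial prod_i (h_i + rho(h_i)) evaluated at lambda
    p = 1.0
    for li, ri in zip(lam, rho):
        p *= li + ri
    return p

def sigma_mu_G(lam):
    # (sigma f)(lambda) = f(lambda - rho)
    return mu_G([li - ri for li, ri in zip(lam, rho)])

def prod_h(lam):
    # h_1 ... h_n evaluated at lambda
    p = 1.0
    for li in lam:
        p *= li
    return p

lam = [random.uniform(-5.0, 5.0) for _ in range(n)]
assert abs(sigma_mu_G(lam) - prod_h(lam)) < 1e-9
```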
For a basic classical Lie superalgebra $\mathfrak{g}$ such that $\mathfrak{g}_{\bar{1}} \neq 0$, denote by
\begin{align*}
&Z(\mathfrak{g}) = \{ z \in U(\mathfrak{g}) \mid u z - (-1)^{p(u)p(z)}z u = 0\ \mathrm{for}\ \mathrm{all}\ u \in \mathfrak{g}\},\\
&\mathcal{A}(\mathfrak{g}) = \{ a \in U(\mathfrak{g}) \mid u a - (-1)^{p(u)(p(a)+\bar{1})}a u = 0\ \mathrm{for}\ \mathrm{all}\ u \in \mathfrak{g}\},\\
&\widetilde{Z}(\mathfrak{g}) = Z(\mathfrak{g}) \oplus \mathcal{A}(\mathfrak{g}),
\end{align*}
called the center, the anticenter and the ghost center of $U(\mathfrak{g})$, respectively, following \cite{Gorelik01}. Then the ghost center $\widetilde{Z}(\mathfrak{g})$ coincides with the center of $U(\mathfrak{g})_{\bar{0}}$ by \cite[Corollary 4.4.4]{Gorelik01}. In the case $\mathfrak{g}=\mathfrak{osp}_{1|2n}$, there exists $T \in U(\mathfrak{g})_{\bar{0}}$ \cite{ABF, Musson97, GL} such that
\begin{align*}
\mathcal{A}(\mathfrak{osp}_{1|2n}) = Z(\mathfrak{osp}_{1|2n}) T,\quad
(\sigma \circ \eta)(T) = h_1h_2\cdots h_n,
\end{align*}
where
\begin{align*}
\eta \colon U(\mathfrak{osp}_{1|2n}) \twoheadrightarrow U(\mathfrak{h}) = \mathbb{C}[\mathfrak{h}^*]
\end{align*}
is the projection induced by the decomposition $U(\mathfrak{osp}_{1|2n}) \simeq \mathfrak{n}_- U(\mathfrak{osp}_{1|2n}) \oplus U(\mathfrak{h}) \oplus U(\mathfrak{osp}_{1|2n}) \mathfrak{n}$ and $\sigma$ is an isomorphism defined by
\begin{align*}
\sigma \colon \mathbb{C}[\mathfrak{h}^*] \rightarrow \mathbb{C}[\mathfrak{h}^*],\quad
f \mapsto (\sigma(f) \colon \lambda \mapsto f(\lambda-\rho_{\mathfrak{osp}})).
\end{align*}
The element $T$ is called the Casimir ghost \cite{ABF} since $T^2 \in Z(\mathfrak{osp}_{1|2n})$ with $(\sigma \circ \eta)(T^2) = h_1^2\cdots h_n^2$, and is studied for general $\mathfrak{g}$ in \cite{Gorelik01}. It is well-known \cite{Kac84, Gorelik04} that the restriction of $\sigma \circ \eta$ to $Z(\mathfrak{osp}_{1|2n})$, called the Harish-Chandra homomorphism of $\mathfrak{osp}_{1|2n}$, is injective and maps onto $\mathbb{C}[\mathfrak{h}^*]^W$, where $W$ is the Weyl group of $\mathfrak{sp}_{2n}$. Recall that
\begin{align*}
U(\mathfrak{osp}_{1|2n}, f_\mathrm{prin}) \simeq U(\mathfrak{osp}_{1|2n}, f_\mathrm{prin})_{I\hspace{-.2em}I} = (U(\mathfrak{osp}_{1|2n})/I_{-\chi})^{\operatorname{ad}\mathfrak{n}},
\end{align*}
where $I_{-\chi}$ is the left ideal of $U(\mathfrak{osp}_{1|2n})$ generated by $u_\alpha + (f_\mathrm{prin}|u_\alpha)$ for all $\alpha \in \Delta_+\setminus\{\alpha_n\}$. Define the projections $q_1, q_2$ by
\begin{align*}
&q_1 \colon U(\mathfrak{osp}_{1|2n}) \twoheadrightarrow U(\mathfrak{osp}_{1|2n})/I_{-\chi},\\
&q_2 \colon U(\mathfrak{osp}_{1|2n})/I_{-\chi} \simeq \mathfrak{n}_-U(\mathfrak{osp}_{1|2n})/I_{-\chi} \oplus U(\mathfrak{h}) \oplus U(\mathfrak{h})u_{\alpha_n} \twoheadrightarrow U(\mathfrak{h}) \oplus U(\mathfrak{h})u_{\alpha_n}
\end{align*}
and a linear map $q_3$ by
\begin{align*}
q_3 \colon U(\mathfrak{h}) \oplus U(\mathfrak{h})u_{\alpha_n} \rightarrow \mathbb{C}[\mathfrak{h}^*] \otimes \Phi,\quad
(f_1,\mathrel{} f_2 \cdot u_{\alpha_n}) \mapsto f_1\otimes1+ f_2\otimes\Phi_{\alpha_n}.
\end{align*}
Then, using the quasi-isomorphism $i_{I \rightarrow I\hspace{-.2em}I}$ in \eqref{eq:D3HK-quasi}, the Miura map $\mu$ can be identified with the restriction of the composition map $q_3 \circ q_2$ to $U(\mathfrak{osp}_{1|2n}, f_\mathrm{prin})_{I\hspace{-.2em}I}$ since $u_{\alpha_n} = X_{\alpha_n} + \Phi_{\alpha_n}$.
\begin{lemma}\label{lem:p(Tu)}
$q_1(Tu_{\alpha_n})$ is the element of $U(\mathfrak{osp}_{1|2n}, f_\mathrm{prin})_{I\hspace{-.2em}I}$ corresponding to $G$.
\begin{proof}
First of all, we show that $q_1(Tu_{\alpha_n}) \in U(\mathfrak{osp}_{1|2n}, f_\mathrm{prin})_{I\hspace{-.2em}I}$. It is enough to show that $[u_\alpha, Tu_{\alpha_n}] \equiv 0$ $(\operatorname{mod}\ I_{-\chi})$ for all $\alpha \in \Delta_+$. Let $\Delta_{+, \bar{i}} = \{ \alpha \in \Delta_+ \mid p(u_\alpha) = \bar{i}\}$. Since $[u_\alpha, T]=0$ for $\alpha \in \Delta_{+, \bar{0}}$, we have
\begin{align*}
[u_\alpha, Tu_{\alpha_n}] = T[u_\alpha, u_{\alpha_n}] \equiv 0\quad
(\operatorname{mod}\ I_{-\chi}),\quad
\alpha \in \Delta_{+, \bar{0}}.
\end{align*}
Next, for $\alpha \in \Delta_{+, \bar{1}}\setminus\{\alpha_n\}$, since $u_\alpha T + Tu_\alpha = 0$, we also have
\begin{align*}
[u_\alpha, Tu_{\alpha_n}] = -T[u_\alpha, u_{\alpha_n}] + 2Tu_{\alpha_n}u_\alpha\equiv 0\quad
(\operatorname{mod}\ I_{-\chi}),\quad
\alpha \in \Delta_{+, \bar{1}}\setminus\{\alpha_n\}.
\end{align*}
Finally, in case that $\alpha = \alpha_n$,
\begin{align*}
[u_{\alpha_n}, Tu_{\alpha_n}] = (u_{\alpha_n}T + Tu_{\alpha_n})u_{\alpha_n} = 0.
\end{align*}
Therefore, $q_1(Tu_{\alpha_n})$ belongs to $U(\mathfrak{osp}_{1|2n}, f_\mathrm{prin})_{I\hspace{-.2em}I}$. Now $\mu = q_3 \circ q_2|_{U(\mathfrak{osp}_{1|2n}, f_\mathrm{prin})_{I\hspace{-.2em}I}}$ and by definition,
\begin{align*}
((\sigma \otimes 1) \circ \mu )(q_1(Tu_{\alpha_n})) &= ((\sigma \otimes 1) \circ q_3 \circ q_2 \circ q_1)(Tu_{\alpha_n})\\
&= (\sigma \circ \eta)(T) \otimes \Phi_{\alpha_n} = h_1\cdots h_n \otimes\Phi_{\alpha_n}.
\end{align*}
By Lemma \ref{lem:mu(G)}, $((\sigma \otimes 1) \circ \mu )(G) = h_1\cdots h_n \otimes\Phi_{\alpha_n}$. Since $(\sigma \otimes 1) \circ \mu$ is injective, we have $q_1(Tu_{\alpha_n}) = G$.
\end{proof}
\end{lemma}
\begin{theorem}\label{thm:Uosp=Z}
$U(\mathfrak{osp}_{1|2n}, f_\mathrm{prin})_{\bar{0}} \simeq Z(\mathfrak{osp}_{1|2n})$.
\begin{proof}
Since $U(\mathfrak{osp}_{1|2n}, f_\mathrm{prin})$ has a PBW basis generated by $G$, $W_2$, $W_4, \ldots, W_{2n}$ and $G$ is the unique odd generator, $U(\mathfrak{osp}_{1|2n}, f_\mathrm{prin})_{\bar{0}}$ has a PBW basis generated by $W_2$, $W_4, \ldots, W_{2n}$. Now $\Phi$ is the superalgebra generated by $\Phi_{\alpha_n}$ with the relation $2 \Phi_{\alpha_n}^2 = \chi(u_{\alpha_n}, u_{\alpha_n})$. Thus $\mu$ maps $U(\mathfrak{osp}_{1|2n}, f_\mathrm{prin})_{\bar{0}}$ to $\mathbb{C}[\mathfrak{h}^*]$. By \eqref{eq:G_W_formula}, $\mu(W_{2i})$ for $i = 1, \ldots, n$ are algebraically independent in $\mathbb{C}[\mathfrak{h}^*]$ of degree $2i$ (but not necessarily homogeneous). Now, by definition, $q_2 \circ q_1 =\eta$ on $Z(\mathfrak{osp}_{1|2n})$. Hence $q_2 \circ q_1 |_{Z(\mathfrak{osp}_{1|2n})}$ is injective. In particular, $q_1 |_{Z(\mathfrak{osp}_{1|2n})}$ is injective. Clearly, $q_1(Z(\mathfrak{osp}_{1|2n}))$ is $\operatorname{ad}\mathfrak{n}$-invariant. Thus, $U(\mathfrak{osp}_{1|2n}, f_\mathrm{prin})\simeq U(\mathfrak{osp}_{1|2n}, f_\mathrm{prin})_{I\hspace{-.2em}I} $ contains $Z(\mathfrak{osp}_{1|2n})$ through $q_1$. Moreover
\begin{align*}
\mu(Z(\mathfrak{osp}_{1|2n}))
= (q_3 \circ q_2 \circ q_1 )(Z(\mathfrak{osp}_{1|2n}))
= \eta(Z(\mathfrak{osp}_{1|2n}))
= \sigma^{-1}(\mathbb{C}[\mathfrak{h}^*]^W).
\end{align*}
Since $\mathbb{C} [ \mathfrak{h}^*]^W$ is the algebra of symmetric polynomials in $h_1^2, \ldots, h_n^2$, $\mu(Z(\mathfrak{osp}_{1|2n}))$ must contain all $\mu(W_{2i})$ for $i = 1, \ldots, n$. Therefore
\begin{align*}
U(\mathfrak{osp}_{1|2n}, f_\mathrm{prin})_{\bar{0}} \simeq Z(\mathfrak{osp}_{1|2n}).
\end{align*}
This completes the proof.
\end{proof}
\end{theorem}
\begin{corollary}\label{cor:evenZhu=Z}
$\left(\operatorname{Zhu}_H \mathcal{W}^k(\mathfrak{osp}_{1|2n})\right)_{\bar{0}} \simeq Z(\mathfrak{osp}_{1|2n})$.
\begin{proof}
The assertion is immediate from Theorem \ref{thm:Uosp=Z} and the fact that $\operatorname{Zhu}_H \mathcal{W}^k(\mathfrak{osp}_{1|2n})$ $\simeq$ $U(\mathfrak{osp}_{1|2n}, f_\mathrm{prin})$.
\end{proof}
\end{corollary}
Consider a linear isomorphism
\begin{align*}
\xi \colon \widetilde{Z}(\mathfrak{osp}_{1|2n}) = Z(\mathfrak{osp}_{1|2n}) \oplus \mathcal{A}(\mathfrak{osp}_{1|2n}) \xrightarrow{\simeq} Z(\mathfrak{osp}_{1|2n}) \oplus \mathcal{A}(\mathfrak{osp}_{1|2n})u_{\alpha_n}
\end{align*}
defined by $\xi(z, a) = (z, a\,u_{\alpha_n})$. Then by Lemma \ref{lem:p(Tu)} and the fact that $\mathcal{A}(\mathfrak{osp}_{1|2n}) = Z(\mathfrak{osp}_{1|2n}) T$, we have $(q_1 \circ \xi)(\widetilde{Z}(\mathfrak{osp}_{1|2n})) \subset U(\mathfrak{osp}_{1|2n}, f_\mathrm{prin})_{I\hspace{-.2em}I}$.
\begin{theorem}\label{thm:ghostZ_finiteW}
The map $q_1 \circ \xi \colon \widetilde{Z}(\mathfrak{osp}_{1|2n}) \rightarrow U(\mathfrak{osp}_{1|2n}, f_\mathrm{prin})$ is an isomorphism of associative algebras.
\begin{proof}
By definition and Lemma \ref{lem:p(Tu)}, $(q_3 \circ q_2 \circ q_1 \circ \xi)(zT) = (q_3 \circ q_2 \circ q_1)(zTu_{\alpha_n}) = \eta(z)G$ for all $z \in Z(\mathfrak{osp}_{1|2n})$. Thus,
$q_3 \circ q_2 \circ q_1 \circ \xi|_{\mathcal{A}(\mathfrak{osp}_{1|2n})}$ is injective. In particular, $q_1 \circ \xi|_{\mathcal{A}(\mathfrak{osp}_{1|2n})}$ is injective. Using the fact that $U(\mathfrak{osp}_{1|2n}, f_\mathrm{prin})$ has a PBW basis generated by $G$, $W_2$, $W_4, \ldots, W_{2n}$ and Theorem \ref{thm:Uosp=Z}, it follows that $q_1 \circ \xi$ is a linear isomorphism. Now, we may suppose that $\chi(u_{\alpha_n}, u_{\alpha_n}) = 2$. Then $\Phi_{\alpha_n}^2 = 1$ so that $\mu(T^2) = \sigma^{-1}(h_1^2\cdots h_n^2) = \mu(G^2)$. Therefore $q_1 \circ \xi$ defines an isomorphism of associative algebras. This completes the proof.
\end{proof}
\end{theorem}
Let $L(\lambda)$ be the simple highest weight $\mathfrak{osp}_{1|2n}$-module with highest weight $\lambda$. Then there exists $\chi_\lambda \colon Z(\mathfrak{osp}_{1|2n}) \rightarrow \mathbb{C}$ such that $z$ acts as $\chi_\lambda(z)$ on $L(\lambda)$ for all $z \in Z(\mathfrak{osp}_{1|2n})$. The map $\chi_\lambda$ is called a central character of $\mathfrak{osp}_{1|2n}$ and is induced by $\eta$ and the one-dimensional $\mathbb{C}[\mathfrak{h}^*]$-module $\mathbb{C}_\lambda$ defined by $f \mapsto f(\lambda)$. Using the Harish-Chandra homomorphism, it follows that $\chi_{\lambda_1} = \chi_{\lambda_2}$ if and only if $\lambda_2 = w(\lambda_1 + \rho_{\mathfrak{osp}}) - \rho_{\mathfrak{osp}}$ for some $w \in W$. Let
\begin{align*}
D = \{ \lambda \in \mathfrak{h}^* \mid \prod_{\alpha \in \Delta_{\bar{1}}}(\lambda+\rho_{\mathfrak{osp}}|\alpha) = 0\}.
\end{align*}
We write $\chi_\lambda \in D$ if $\lambda \in D$. Since $w(\Delta_{\bar{1}}) \subset \Delta_{\bar{1}}$ for all $w \in W$, we have $\lambda \in D \Rightarrow w(\lambda + \rho_{\mathfrak{osp}}) - \rho_{\mathfrak{osp}}\in D$ for any $w \in W$, so that the condition $\chi_\lambda \in D$ is well-defined.
From now on, we will identify $\widetilde{Z}(\mathfrak{osp}_{1|2n})$ with $U(\mathfrak{osp}_{1|2n}, f_\mathrm{prin})$ by Theorem \ref{thm:ghostZ_finiteW}. Then $\widetilde{Z}(\mathfrak{osp}_{1|2n})$ is a superalgebra such that $\widetilde{Z}(\mathfrak{osp}_{1|2n})_{\bar{1}} = \mathcal{A}(\mathfrak{osp}_{1|2n})$. Let $E$ be a finite-dimensional $\mathbb{Z}_2$-graded simple $\widetilde{Z}(\mathfrak{osp}_{1|2n})$-module. Then $Z(\mathfrak{osp}_{1|2n})$ acts on $E$ as $\chi_\lambda$ for some $\lambda \in \mathfrak{h}^*$. For a non-zero parity-homogeneous element $v \in E$, $Tv$ has the opposite parity to $v$ and $T^2 v = \chi_\lambda(T^2)v$. Recall that the set $\{h_1, \ldots, h_n\}$ is identified with $2\Delta_{+, \bar{1}}$ by $\mathfrak{h} \simeq \mathfrak{h}^*$. Then, using the fact that $\eta(T^2) = \sigma^{-1}(h_1^2\cdots h_n^2)$, it follows that
\begin{align*}
\chi_\lambda(T^2) = \prod_{i=1}^n\left((\lambda+\rho_{\mathfrak{osp}})(h_i)\right)^2 = \prod_{\alpha \in \Delta_{+, \bar{1}}}(\lambda + \rho_{\mathfrak{osp}}|2\alpha)^2.
\end{align*}
Hence $\chi_\lambda(T^2) = 0$ if and only if $\chi_\lambda \in D$. Since $E$ is simple, $E = \mathbb{C} v$ if $\chi_\lambda \in D$ and $E = \mathbb{C} v \oplus \mathbb{C} Tv$ if $\chi_\lambda \notin D$, which we denote by $E_{\chi_\lambda}$. Here we identify $E_{\chi_\lambda}$ with the parity change of $E_{\chi_\lambda}$ if $\chi_\lambda(T^2) = 0$. Therefore we obtain the following results:
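For instance, for $n = 1$ we have $\rho_{\mathfrak{osp}}(h_1) = \frac{1}{2}$, so
\begin{align*}
\chi_\lambda(T^2) = \left(\lambda(h_1) + \frac{1}{2}\right)^2,
\end{align*}
and the simple module $E_{\chi_\lambda}$ is one-dimensional precisely when $\lambda(h_1) = -\frac{1}{2}$, that is, when $\chi_\lambda \in D$; otherwise $E_{\chi_\lambda} = \mathbb{C} v \oplus \mathbb{C} Tv$ is two-dimensional.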
\begin{proposition}\label{prop:rep_finiteW}
A finite-dimensional $\mathbb{Z}_2$-graded simple $U(\mathfrak{osp}_{1|2n}, f_\mathrm{prin})$-module is isomorphic to $E_{\chi_\lambda}$ for some $\lambda \in \mathfrak{h}^*$. In particular, there exists a one-to-one correspondence between isomorphism classes (up to parity change) of finite-dimensional $\mathbb{Z}_2$-graded simple $U(\mathfrak{osp}_{1|2n}, f_\mathrm{prin})$-modules and central characters of $\mathfrak{osp}_{1|2n}$.
\end{proposition}
\begin{corollary}\label{cor:Zhu_twisted}
There exists a bijective correspondence between central characters of $\mathfrak{osp}_{1|2n}$ and isomorphism classes (up to the parity change) of simple positive-energy Ramond-twisted $\mathcal{W}^k(\mathfrak{osp}_{1|2n})$-modules with finite-dimensional top spaces.
\begin{proof}
The assertion is immediate from $\operatorname{Zhu}_H \mathcal{W}^k(\mathfrak{osp}_{1|2n}) \simeq U(\mathfrak{osp}_{1|2n}, f_\mathrm{prin})$, Proposition \ref{prop:rep_finiteW} and \cite[Theorem 2.30]{DK}.
\end{proof}
\end{corollary}
Corollary \ref{cor:Zhu_twisted} implies that the dimension of the top space $E_{\chi_\lambda}$ of a simple positive-energy Ramond-twisted $\mathcal{W}^k(\mathfrak{osp}_{1|2n})$-module is equal to $2$ if and only if $(\lambda+\rho_{\mathfrak{osp}}|\alpha) \neq 0$ for all $\alpha \in \Delta_{\bar{1}}$. We remark that this condition is equivalent to the condition that the annihilator of the Verma module $M(\lambda)$ is generated by its intersection with the center $Z(\mathfrak{osp}_{1|2n})$ by \cite{GL99}.
\section{Introduction}
\subsection{Optimal transport problems}
Consider $N\ge 2$ Polish probability spaces $(X^1,\mu^1),\ldots,(X^N,\mu^N)$ and let $X=\prod_{i=1}^NX^i$. Let $c:X\to [0,\infty]$ and consider the \emph{multi-marginal optimal transport} (MOT) problems
\[
\min_{\gamma \in \Pi(\mu^1,\ldots,\mu^N)}C[\gamma]:= \min_{\gamma \in \Pi(\mu^1,\ldots,\mu^N)}\int_X c d\gamma,\tag{P}
\]
and
\[
\min_{\gamma \in \Pi(\mu^1,\ldots,\mu^N)}C_\infty [\gamma]:= \min_{\gamma \in \Pi(\mu^1,\ldots,\mu^N)} \gamma-\mathop\mathrm{ess\,sup\,}_{(x^1,\ldots,x^N)\in X} c,\tag{P$_\infty$}
\]
in the set
\[\Pi(\mu^1,\ldots,\mu^N):=\{\gamma\in\mathcal{P}(X)~|~\pi^i(\gamma)=\mu^i\text{ for all }i=1,\ldots,N\},\]
that is, in the set of \emph{couplings} or \emph{transport plans} between the $N$ marginals $\mu^1,\ldots,\mu^N$.
We refer to the second problem as {\it the $\sup$ case.} The first of these problems has been widely studied in the literature of the last thirty years. The second, although also old, gained popularity only more recently thanks to the applications of optimal transportation in machine learning (see, for example, \cite{peyre2019computational}).
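As a purely illustrative aside (not part of the formal development), the two costs $C$ and $C_\infty$ can be compared on a toy discrete instance. The following Python sketch brute-forces both problems for $N=3$ uniform marginals on three atoms; for uniform marginals with equal atom weights the extreme couplings are supported on graphs of permutations (Birkhoff--von Neumann), so enumerating permutation pairs suffices. The cost $c$ is an arbitrary choice of ours.

```python
import itertools

# Toy MOT instance: N = 3 marginals, each uniform on k atoms of the real line.
# For uniform marginals with equal atom weights, the extreme couplings are
# supported on graphs of permutations (Birkhoff-von Neumann), so brute force
# over permutation pairs finds the minima of both (P) and (P_infty).
k = 3
X1 = X2 = X3 = [0.0, 1.0, 2.0]

def c(x, y, z):
    # an arbitrary continuous cost, chosen for illustration only
    return (x - y) ** 2 + (y - z) ** 2

def costs(s2, s3):
    # gamma = (1/k) sum_i delta_(X1[i], X2[s2[i]], X3[s3[i]])
    vals = [c(X1[i], X2[s2[i]], X3[s3[i]]) for i in range(k)]
    return sum(vals) / k, max(vals)  # C[gamma], C_infty[gamma]

perms = list(itertools.permutations(range(k)))
best_C = min(costs(s2, s3)[0] for s2 in perms for s3 in perms)
best_Cinf = min(costs(s2, s3)[1] for s2 in perms for s3 in perms)
# here the identity coupling is optimal for both costs, with value 0
```

In this symmetric example the identity coupling is simultaneously optimal for $C$ and $C_\infty$; in general the two problems have different minimizers.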
This paper is concerned with an optimality condition for the problems above, introduced in the next subsection.
In particular we will study the sufficiency of such an optimality condition. We will give a new, simpler and, in our opinion, easier-to-understand proof of some known results, and we will show that this new approach allows us to extend the sufficiency results to a wider setting.
\subsection{$c$-cyclical monotonicity, $\infty$-$c$-cyclical monotonicity, and the main theorem}
In the multi-marginal context, $c$-cyclical monotonicity takes the following form.
\begin{definition}\label{cm}
We say that a set $\Gamma\subset\prod_{i=1}^NX^i$ is $c$-cyclically monotone (CM), if for every $k$-tuple of points $(x^{1,i},\ldots, x^{N,i})_{i=1}^k$ and every $(N-1)$-tuple of permutations $(\sigma^2,\ldots,\sigma^N)$ of the set $\{1,\ldots, k\}$ we have
\[\sum_{i=1}^kc(x^{1,i} ,x^{2,i},\ldots,x^{N,i})\le \sum_{i=1}^kc(x^{1,i},x^{2,\sigma^2(i)},\ldots,x^{N, \sigma^N(i)})\,.\]
We also say that $\gamma\in \Pi(\mu^1,\ldots,\mu^N)$ is $c$-cyclically monotone if it is concentrated on a $c$-cyclically monotone set.
\end{definition}
\begin{definition}\label{icm}
We say that a set $\Gamma\subset\prod_{i=1}^NX^i$ is infinitely $c$-cyclically monotone (ICM), if for every $k$-tuple of points $(x^{1,i},\ldots, x^{N,i})_{i=1}^k$ and every $(N-1)$-tuple of permutations $(\sigma^2,\ldots,\sigma^N)$ of the set $\{1,\ldots, k\}$ we have
\[\max\{c(x^{1,i} ,x^{2,i},\ldots,x^{N,i})~|~i\in \{1,\ldots,k\}\}\le \max\{c(x^{1,i},x^{2,\sigma^2(i)},\ldots,x^{N, \sigma^N(i)})~|~i\in \{1,\ldots,k\}\}\,.\]
We also say that a coupling $\gamma\in \Pi(\mu^1,\ldots,\mu^N)$ is infinitely cyclically monotone if it is concentrated on an ICM set.
\end{definition}
We will use the expression $c$-cyclically monotone for both conditions above.
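Both definitions can be checked directly on a finite candidate support. The sketch below (our own illustration, with hypothetical inputs) tests the (CM) inequality with `agg=sum` and the (ICM) inequality with `agg=max` by enumerating all $(N-1)$-tuples of permutations; this is exponential in $k$ and only meant to make the definitions concrete.

```python
import itertools

def is_monotone(points, c, agg=sum):
    # points: k tuples (x^1,...,x^N) spanning a candidate set Gamma.
    # agg=sum checks Definition (CM); agg=max checks Definition (ICM):
    # the identity assignment must not be beaten by any (N-1)-tuple
    # of permutations acting on the coordinates 2,...,N.
    k, N = len(points), len(points[0])
    base = agg(c(*p) for p in points)
    for ps in itertools.product(itertools.permutations(range(k)), repeat=N - 1):
        rearranged = agg(
            c(points[i][0], *[points[ps[j - 1][i]][j] for j in range(1, N)])
            for i in range(k))
        if base > rearranged + 1e-12:
            return False
    return True

quad = lambda x, y: (x - y) ** 2
# the 'monotone' support {(0,0),(1,1)} passes; the 'crossing' one {(0,1),(1,0)}
# fails both tests, since swapping the second coordinates lowers sum and max
```

For $N=2$ and the quadratic cost this recovers the classical picture: monotone supports are cyclically monotone, crossing ones are not.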
The main theorem of this paper is the following
\begin{theorem}\label{icmoptimal}
For $i=1,\dots,N$, let $\mu^i \in \mathcal P(X^i)$ have compact support, and let $c:X \to \mathbb{R}\cup\{+\infty\}$ be continuous. If $\gamma\in\Pi(\mu^1,\ldots,\mu^N)$ is an ICM plan for $c$, then $\gamma$ is optimal for (P$_\infty$).
\end{theorem}
Along the way we will give a new proof of the following known result due, in a more general setting, to Griessler \cite{griessler2018}.
\begin{theorem}\label{ccmoptimal}
For $i=1,\dots,N$, let $\mu^i \in \mathcal P(X^i)$ have compact support, and let $c:X \to \mathbb{R}$ be continuous. If $\gamma\in\Pi(\mu^1,\ldots,\mu^N)$ is a CM plan for $c$, then $\gamma$ is optimal for (P).
\end{theorem}
We now discuss an important characterization of $c$-cyclically monotone transport plans. To this aim we define
\begin{definition}\label{finop}
Let $\gamma$ be a positive and finite Borel measure on $X$. We say that $\gamma$ is finitely optimal if all its finitely-supported submeasures are optimal with respect to their marginals. Here by submeasure we mean any probability measure $\alpha$ satisfying $\mathop{\rm supp}\nolimits(\alpha)\subset \mathop{\rm supp}\nolimits(\gamma)$.
\end{definition}
\begin{proposition}\label{icmfinitelyoptimal}
If $\gamma\in\Pi(\mu^1,\ldots, \mu^N)$ is CM or ICM, then it is finitely optimal for the problem $(P)$ or $(P_\infty)$, respectively.
\end{proposition}
\begin{lemma}\label{permutations}
Let $\alpha=\sum_{i=1}^lm_i\delta_{( x^{1,i},\ldots, x^{N,i})}$ and $\overline\alpha=\sum_{i=1}^{\overline l}\overline{m}_i\delta_{(\overline{x}^{1,i},\ldots,\overline{x}^{N,i})}$ be two discrete measures with positive, integer coefficients and the same marginals. Let us denote by $\tilde l=m_1 + \dots +m_l$ the number of rows of the following table
$$
\left. \begin{array}{lll}
x^{1,1} & \dots & x^{N,1} \\
\vdots & & \vdots \\
x^{1,1} & \dots & x^{N,1}
\end{array}
\right \rbrace
\begin{array}{l}
\\
m_1 \mbox{ times}\\
\\
\end{array}
$$
$$
\vdots
$$
$$
$$
\left.\begin{array}{lll}
x^{1,l} & \dots & x^{N,l} \\
\vdots & & \vdots \\
x^{1,l} & \dots & x^{N,l}
\end{array}
\right \rbrace
\begin{array}{l}
\\
m_l \mbox{ times}\\
\\
\end{array}
$$
where the first $m_1$ rows are equal among themselves, the following $m_2$ rows are equal among themselves and so on.
Let $\overline A$ be the analogous table associated to $\overline \alpha$. Then $\overline A$ has $\tilde l$ rows and there exist $(N-1)$ permutations $\sigma^2,\ldots,\sigma^N$ of the set $\{1,\ldots,\tilde l\}$ such that
$\overline{A}$ is equal to
$$
\begin{array}{lll}
x^{1,1} & \dots & x^{N,\sigma^N(1)} \\
\vdots & & \vdots \\
x^{1,1} & \dots & x^{N,\sigma^N (m_1)}\\
x^{1,2} & \dots & x^{N, \sigma^N(m_1 +1)}\\
\vdots & & \vdots \\
x^{1,l} & \dots & x^{N,\sigma^N(m_1+\dots+m_{l-1}+1)}\\
\vdots & & \vdots \\
x^{1,l} & \dots & x^{N, \sigma^N (\tilde l)}
\end{array}
$$
\end{lemma}
\begin{proof}
For each $k\in\{1,\ldots, N\}$, the $k$-th marginal of $\alpha$ is given by the sum of the Dirac masses centered on the points of the $k$-th column of the table $A$, counted with multiplicity. Analogously, the $k$-th marginal of $\overline \alpha$ is given by the sum of the Dirac masses centered on the points of the $k$-th column of the table $\overline A$, counted with multiplicity. Since the marginals of $\alpha$ and $\overline \alpha$ are the same, each point $x^{k,i}$ appearing in the $k$-th marginal must appear in both tables the same number of times, proving the existence of the permutations $\sigma^2,\ldots,\sigma^N$ as required. This also implies that $\overline A$ has $\tilde l$ rows.
\end{proof}
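The row-table bookkeeping of the lemma is elementary to implement. In the sketch below (our own illustration, with invented atoms and weights) each integer-weighted discrete measure is expanded into its table of rows, and a permutation is produced column by column by matching equal entries; the only assumption is that the two columns carry the same multiset of points, which is exactly the equality of marginals.

```python
from collections import Counter

def expand(measure):
    # measure: list of (integer weight m_i, point tuple); returns the row table
    rows = []
    for m, pt in measure:
        rows.extend([pt] * m)
    return rows

def column_permutation(colA, colB):
    # returns sigma with colB[r] == colA[sigma[r]], assuming equal multisets
    pool = {}
    for idx, v in enumerate(colA):
        pool.setdefault(v, []).append(idx)
    return [pool[v].pop() for v in colB]

alpha = [(2, ('a', 'u')), (1, ('b', 'v'))]                  # weights m_1=2, m_2=1
beta = [(1, ('a', 'v')), (1, ('a', 'u')), (1, ('b', 'u'))]  # same marginals
A, B = expand(alpha), expand(beta)                          # tables with 3 rows
# sanity check: each column of A and B carries the same multiset of points
for col in range(2):
    assert Counter(r[col] for r in A) == Counter(r[col] for r in B)
sigma2 = column_permutation([r[1] for r in A], [r[1] for r in B])
```

Here `sigma2` plays the role of the permutation $\sigma^2$ rearranging the second column of $A$ into that of $\overline A$.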
\begin{proof}[Proof of Proposition \ref{icmfinitelyoptimal}] We fix a finitely-supported submeasure $\alpha=\sum_{i=1}^l a_i\delta_{X^i}$ of $\gamma$. We need to show that $\alpha$ is an optimal coupling of its marginals. To do this, we fix another coupling, $\overline\alpha= \sum_{i=1}^{\overline l} \overline a _i\delta_{\overline X^i}$, with the same marginals as $\alpha$. We have to show that
\begin{equation}\label{eq:costclaim}
\tilde C[\alpha]\le \tilde C[\overline \alpha],
\end{equation}
where $\tilde C$ is any of the two costs under consideration.
Let us first assume that the discrete measures $\alpha$ and $\overline \alpha$ have rational coefficients. We consider the measures $M\alpha$ and $M \overline \alpha$, where $M$ is the product of the denominators of the coefficients of $\alpha$ and $\overline \alpha$. They are discrete measures having positive, integer coefficients and the same marginals, so we can apply Lemma \ref{permutations} to find permutations $\sigma^2,\ldots,\sigma^N$ such that $M \alpha$ and $M \overline \alpha$ have representations $A$ and $\overline A$, respectively. If $\tilde C=C$ we have, using the $c$-cyclical monotonicity of $\gamma$ (recall that $\mathop{\rm supp}\nolimits\alpha\subset\mathop{\rm supp}\nolimits\gamma$),
\[MC[\alpha]=\sum_{i=1}^{\tilde l} c(x^{1,i},\ldots, x^{N,i})\le \sum_{i=1}^{\tilde l} c(x^{1,i},x^{2,\sigma^2(i)},\ldots, x^{N, \sigma^N(i)})=MC[\overline \alpha],\]
proving the optimality of $\alpha$.
If $\tilde C=C_\infty$, the conclusion is immediate:
\[C_\infty[\alpha]=\max_{1\le i\le \tilde l}c(x^{1,i},\ldots, x^{N,i})\le \max_{1\le i\le \tilde l}c(x^{1,i},x^{2,\sigma^2(i)},\ldots, x^{N,\sigma^N(i)})= C_\infty[\overline \alpha].\]
Now, assume that $\alpha$ and $\overline \alpha$ have real (not necessarily rational) coefficients,
\[\alpha:= \sum_{i=1}^l a_i \delta_{X^i}, \ \overline \alpha= \sum_{i=1}^{\overline l} \overline a_i \delta_{\overline X ^i}.
\]
We show that for all $\varepsilon >0$ there exist two discrete measures
\[\beta:= \sum_{i=1}^l q_i \delta_{X^i} \ \mbox{and} \ \ \overline \beta= \sum_{i=1}^{\overline l} \overline q_i \delta_{\overline X ^i},
\]
with the same marginals, $q_i, \overline q_i \in \mathbb{Q}$ and
\[|a_i-q_i| <\varepsilon, \ |\overline a_i-\overline q_i| <\varepsilon.
\]
The fact that $\alpha$ and $\overline\alpha$, concentrated on $X^1,\dots,X^l$ and $\overline X^1, \dots, \overline X^{\overline l}$ respectively, have the same marginals is equivalent to the fact that the vector ${\bf \underline a}:=(a_1, \dots, a_l, \overline a_1, \dots, \overline a_{\overline l})$ is a solution of
\[\mathcal A {\bf \underline a}=0,\]
where $\mathcal A$ is a matrix with coefficients $1, 0, -1$.
Indeed, if we write, for example, the equality between the first two marginals we obtain
\[ \sum_{i=1}^l a_i \delta_{x^{1,i}}= \sum_{i=1}^{\overline l} \overline a_i \delta_{\overline x ^{1,i}}.
\]
Some of the points $\overline x^{1,i}$ must then coincide with, for example, $x^{1,1}$, and this gives, for suitable sets of indices $I$ and $J$,
\[\sum_{i\in I} a_i = \sum_{j\in J} \overline a_j.\]
Since the matrix $\mathcal A$ has integer coefficients,
\[\overline{\operatorname{Ker}_\mathbb{Q} \mathcal A} = \operatorname{Ker}_\mathbb{R} \mathcal A,\]
and this allows us to choose $\beta$ and $\overline \beta$ as required.
Since $|C[\alpha]-C[\beta]|$ and $|C[\overline \alpha]-C[\overline \beta]|$ can be made arbitrarily small, while $C_\infty [\alpha]=C_\infty [\beta]$ and $C_\infty[ \overline \alpha]=C_\infty[\overline \beta]$ (the supports coincide), we conclude.
\end{proof}
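The density of the rational kernel used at the end of the proof can be made concrete in a minimal case. The sketch below is an assumption-laden toy of ours: the matrix $\mathcal A=[1,-1]$ (encoding one marginal constraint) and the tolerance are invented for illustration; it approximates a real solution of $\mathcal A\mathbf{\underline a}=0$ by an exact rational one.

```python
from fractions import Fraction
import math

# Ker_R A for A = [1, -1] is the line a_1 = a_2; rational points are dense in it.
a = (math.sqrt(2) / 2, math.sqrt(2) / 2)      # a real solution of A a = 0
q1 = Fraction(a[0]).limit_denominator(10 ** 9)
q = (q1, q1)                                  # still solves A q = 0, exactly
err = max(abs(float(q[0]) - a[0]), abs(float(q[1]) - a[1]))
```

The point is that the rational vector satisfies the integer linear constraints exactly, while being as close as desired to the real one.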
\section{Preliminary results}
\subsection{Lower semi-continuity, compactness and existence of minimizers}
Existence for the optimal transport problems above is usually obtained by the direct method of the Calculus of Variations. Here we briefly recall the tools which we did not find elsewhere or which will be used substantially in our proofs.
A useful convergence on the set of transport plans is the tight convergence.
\begin{definition} Let $X$ be a metric space and let $\gamma_n, \gamma \in \mathcal P (X)$. We say that $\gamma_n$ converges tightly to $\gamma$ if for all $\phi \in C_b (X)$
\[ \int \phi d\gamma_n \to \int \phi d \gamma.\]
The tight convergence will be denoted by $\stackrel{*}{\rightharpoonup}$.
\end{definition}
\begin{definition} Let $\Pi$ be a set of Borel probability measures on a metric space $X$. We say that $\Pi$ is tight (or uniformly tight) if for all $\varepsilon >0$ there exists $K_\varepsilon \subset X$ compact such that
\[\gamma (K_\varepsilon)> 1-\varepsilon \ \mbox{or, equivalently,} \ \gamma (X\setminus K_\varepsilon)\leq\varepsilon \]
for all $\gamma \in \Pi$.
\end{definition}
\begin{theorem}[Prokhorov] Let $X$ be a complete and separable metric space (Polish space). Then $\Pi \subset \mathcal P (X)$ is tight if and only if it is pre-compact with respect to the tight convergence.
\end{theorem}
\begin{remark}\hfill
\begin{enumerate}
\item Under the tight convergence, the evaluation $\gamma\mapsto\gamma(A)$ is lower semi-continuous for open sets $A$ and upper semi-continuous for closed sets $A$;
\item If $X$ is complete and separable, then every singleton $\Pi=\{\gamma\}$ is tight.
\end{enumerate}
\end{remark}
The following compactness theorem will be used in this paper.
\begin{theorem} For $i=1, \dots, N$, let $X^i$ be a Polish space. Let $X= X^1 \times \dots \times X^N$. Let $\mathcal M^i \subset \mathcal P (X^i)$ be tight for all $i$. Then the set
\[\Pi=\{\gamma \in \mathcal P (X)\ | \ \pi^i_\sharp \gamma \in \mathcal M^i \}\]
is tight.
\end{theorem}
\begin{proof} Let $\varepsilon >0$. By the tightness of $\mathcal M^i$ we can fix a compact set $K^i \subset X^i$ such that for all $\mu^i \in \mathcal M^i$
\[\mu^i (X^i \setminus K^i)< \frac\varepsilon N .\]
Let $K= K^1 \times \dots \times K^N$ and let $\gamma \in \Pi$. Since all the marginals satisfy $\pi^i_\sharp \gamma \in \mathcal M ^i$, and since
\[ X\setminus K \subset ((X^1\setminus K^1 )\times \prod_{k=2}^NX^k)\cup (X^1\times (X^2\setminus K^2 )\times\prod_{k=3}^NX^k)\cup\cdots\cup (\prod_{k=1}^{N-1}X^k\times(X^N\setminus K^N)),
\]
one gets
\[
\gamma(X\setminus K)\leq \varepsilon.
\]
\end{proof}
\begin{corollary}\label{precompatto}
By Prokhorov's theorem a set $\Pi\subset \mathcal P(X)$ as in the theorem above is pre-compact for the tight convergence. This is, in particular, true if $\mathcal M^i=\{\mu^i\}$.
\end{corollary}
If $c$ is lower semi-continuous, then also the functionals $C$ and $C_\infty$ are lower semi-continuous with respect to the tight convergence of measures. The lower semi-continuity of $C$ is a standard result of optimal transport theory (see, for example, \cite{villani} or \cite{kellerer1984} for the multi-marginal case). The next lemma proves the lower semi-continuity of $C_\infty$.
\begin{lemma}
If the function $c:X\to\mathbb{R}\cup\{+\infty\}$ is lower semi-continuous, then also the functional $C_\infty$ is lower semi-continuous.
\end{lemma}
\begin{proof}
First we note that, thanks to the lower semi-continuity of $c$, its $\gamma$-essential supremum can be written as
\[\gamma-\mathop\mathrm{ess\,sup\,} c=\sup \{c(x^1,\ldots,x^N)~| ~(x^1,\ldots,x^N)\in \mathop{\rm supp}\nolimits \gamma\}.\]
Fix $\gamma\in\Pi(\mu^1,\ldots,\mu^N)$ and let $(\gamma^n)_n$ be a sequence converging tightly to $\gamma$. For every vector $v\in\mathop{\rm supp}\nolimits\gamma$ there exists a sequence $v^n=(x^{1,n},\ldots,x^{N,n})\in\prod_{i=1}^NX^i$ such that $v^n\in\mathop{\rm supp}\nolimits\gamma^n$ for all $n$ and $v^n\to v$. Moreover
\[\liminf_{n\to\infty}C_\infty[\gamma^n]\ge \liminf_{n\to\infty}c(v^n)\ge c(v)\,.\]
Since the above inequality holds for all $v\in\mathop{\rm supp}\nolimits\gamma$ and for all sequences converging to $v$, it also holds for the $\gamma$-essential supremum, and the claim follows.
\end{proof}
The use of compactness and semi-continuity theorems above gives the existence of optimal transport plans for both problems considered here.
\subsection{$\Gamma$-convergence}
A crucial tool that we will use in this paper is $\Gamma$-convergence.
All the details can be found, for instance, in Braides's book \cite{braides2002gamma} or in the classical book by
Dal Maso \cite{dal1993introduction}. In what follows, $(X,d)$ is a metric space or a topological space equipped with a convergence.
\begin{definition} Let $(F_n)_n$ be a sequence of functions $X \to \overline{\mathbb{R}}$. We say that $(F_n)_n$ $\Gamma$-converges to
$F$ if for any $x \in X$ we have
\begin{itemize}
\item for any sequence $(x^n)_n$ of $X$ converging to $x$
$$ \liminf\limits_n F_n(x^n) \geq F(x) \qquad \text{($\Gamma$-liminf inequality);}$$
\item there exists a sequence $(x^n)_n$ converging to $x$ and such that
$$ \limsup\limits_n F_n(x^n) \leq F(x) \qquad \text{($\Gamma$-limsup inequality).} $$
\end{itemize}
\end{definition}
This definition is actually equivalent to the following equalities for any $x \in X$:
\[
F(x) = \inf\left\{ \liminf\limits_n F_n(x^n) : x^n \to x \right\} = \inf\left\{ \limsup\limits_n F_n(x^n) : x^n \to x \right\}
\]
The function $x \mapsto \inf\left\{ \liminf\limits_n F_n(x^n) : x^n \to x \right\}$ is called $\Gamma$-liminf of the sequence $(F_n)_n$ and the other one its $\Gamma$-limsup. A useful result is the following (which for instance implies that a constant sequence of functions does not $\Gamma$-converge to itself in general).
\begin{proposition}\label{lsc}
The $\Gamma$-liminf and the $\Gamma$-limsup of a sequence of functions $(F_n)_n$ are both lower semi-continuous on $X$.
\end{proposition}
The main interest of $\Gamma$-convergence resides in its consequences in terms of convergence of minima.
\begin{theorem}\label{convminima}
Let $(F_n)_n$ be a sequence of functions $X \to \overline{\mathbb{R}}$ and assume that $F_n$ $\Gamma$-converges to $F$. Assume moreover that there exists a compact and non-empty subset $K$ of $X$ such that
$$ \forall n\in \mathbb{N}, \; \inf_X F_n = \inf_K F_n $$
(we say that $(F_n)_n$ is equi-mildly coercive on $X$). Then $F$ admits a minimum on $X$ and the sequence $(\inf_X F_n)_n$ converges to $\min F$. Moreover, if $(x_n)_n$ is a sequence of $X$ such that
$$ \lim_n F_n(x_n) = \lim_n (\inf_X F_n) $$
and if $(x_{\phi(n)})_n$ is a subsequence of $(x_n)_n$ having a limit $x$, then $x$ is a minimum point of $F$, i.e.\ $F(x) = \min_X F$.
\end{theorem}
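A classical example (see, e.g., Braides's book) is $F_n(x)=\sin(nx)$, whose $\Gamma$-limit is the constant $-1$ even though $F_n(0)=0$ for every $n$. The sketch below is a crude numerical proxy of ours: the $\liminf$ over $n$ is replaced by a minimum over a finite window of indices, and the infimum over converging sequences by a minimum over a grid in a shrinking ball; it is an illustration, not a proof.

```python
import math

def gamma_liminf_proxy(F, x, deltas=(0.5, 0.1, 0.02), ns=range(50, 200)):
    # numerical stand-in for sup_delta liminf_n inf_{|y-x|<=delta} F_n(y):
    # for each delta, minimise over a grid in [x-delta, x+delta] and over
    # a finite window of indices n (a proxy, not a rigorous liminf)
    out = []
    for d in deltas:
        grid = [x - d + 2 * d * j / 400 for j in range(401)]
        out.append(min(min(F(n, y) for y in grid) for n in ns))
    return out

vals = gamma_liminf_proxy(lambda n, y: math.sin(n * y), 0.0)
# each value is close to -1, although F_n(0) = 0 for every n
```

This is the same phenomenon behind Proposition \ref{lsc}: the $\Gamma$-limit sees the oscillation's lower envelope, not the pointwise values.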
\section{Discretisation of transport plans (Dyadic-type decomposition in Polish spaces)}\label{dyaddecomp}
Let $\gamma$ be a Borel probability measure on $X=(X^1,d_1)\times \cdots\times (X^N,d_N)$ with marginals $\mu^1,\ldots,\mu^N$.
The space $X$ will be equipped with the $\sup$ metric
\[
d(w,z)=\max_{1\le i\le N}d_i(w^i,z^i).
\]
Let $\varepsilon_n=\frac 1n$. Since $\{\mu^i\}_{i=1}^N$ are Borel probability measures, they are inner regular. Hence for all $n$ there exist compact sets $K^{1,n}\subset\mathop{\rm supp}\nolimits\mu^1, K^{2,n}\subset\mathop{\rm supp}\nolimits\mu^2,\ldots, K^{N,n}\subset\mathop{\rm supp}\nolimits\mu^N$ such that
\begin{equation}\label{frominside}
\mu^k(X^k\setminus K^{k,n})<\tfrac{\varepsilon_n}{N},
\end{equation}
for all $k=1,\ldots,N.$
We may assume that, for all $k$ and $n$, $K^{k,n}\subset K^{k,{n+1}}$. \\
We denote $K^{n}:=\prod_{k=1}^N K^{k,n}$. Since
\[ X\setminus K^n \subset ((X^1\setminus K^{1,n})\times \prod_{k=2}^NX^k)\cup (X^1\times (X^2\setminus K^{2,n} )\times\prod_{k=3}^NX^k)\cup\cdots\cup (\prod_{k=1}^{N-1}X^k\times(X^N\setminus K^{N,n})),
\]
one gets
\[
\gamma(X\setminus K^n)\leq \varepsilon_n.
\]
The cost $c$ is uniformly continuous on each $K^{n}$, and for all $n$ we can fix $\delta_n\in (0,\varepsilon_n)$ such that the sequence ($\delta_n$) is decreasing in $n$ and
\[
|c(u)-c(z)|<\varepsilon_n ~~~\text{ for all }u,z\in K^n\text{ for which } d(u,z)<\delta_n.
\]
Next we fix, for all $n$, finite Borel partitions for the sets $K^{1,n},\ldots,K^{N,n}$. We denote these by $\{\tilde B_i^{k,n}\}_{i=1}^{\tilde m^{k,n}}$, $k=1,\ldots,N$, and we choose them in such a way that for all $n\in\mathbb{N}$ and $k\in\{1,\ldots,N\}$
\[
diam(\tilde B_i^{k,n})<\tfrac12\delta_n,
\]
for all $i\in\{1,\ldots,\tilde m^{k,n}\}$.
We form a new, possibly finer, partition $\{B_i^{k,n}\}_{i=1}^{m^{k,n}}$ of each $K^{k,n}$ by intersecting (when the intersection is nonempty) each element $\tilde B_i^{k,n}$ successively with the set $K^{k,1}$, then with $K^{k,2}$, and so on, up to the set $K^{k,n-1}$. In this way, for every $j \in \{1, \dots, n\}$, the intersection $B_i^{k,n} \cap K^{k,j}$ is either empty or the whole $B_i^{k,n}$.
The products
\[
\mathcal{Q}^n=\{ B_{i_1}^{1,n}\times B_{i_2}^{2,n}\times\cdots\times B_{i_N}^{N,n},~i_k\in\{1,\ldots, m^{k ,n} \}\text{ for all }k=1,\ldots,N\}
\]
form a partition of the set $K^n$ with
\[diam(B_i^{k,n})<\tfrac12\delta_n,
\]
for all $i\in\{1,\ldots,m^{k,n}\}.$
We denote
\[I^n=\{(i_1,\ldots,i_N)~| ~\gamma(B_{i_1}^{1,n}\times B_{i_2}^{2,n}\times\cdots\times B_{i_N}^{N,n})>0\},\]
and for all ${\bf i}:=(i_1,\ldots,i_N)\in I^n$ we use the notation $Q_\mathbf{i}^n:= B_{i_1}^{1,n}\times\cdots \times B_{i_N}^{N,n}$. We fix points $z^n _{\bf i}=z_{i_1,\ldots,i_N}^n\in\prod_{k=1}^NB_{i_k}^{k,n}\cap\mathop{\rm supp}\nolimits\gamma$ (i.e. $z^n _\mathbf{i}\in Q^n_\mathbf{i}\cap \mathop{\rm supp}\nolimits \gamma$).
We define
\[
\tilde\alpha^n=\sum_{(i_1,\ldots,i_N)\in I^n}\gamma(B_{i_1}^{1,n}\times\cdots \times B_{i_N}^{N,n})\delta_{z_{i_1,\ldots,i_N}^n}~~~\text{and}~~~\alpha^n=\frac{1}{\gamma(K^n)}\tilde\alpha^n;
\]
since $\tilde\alpha^n(X)=\gamma(K^n)$, the measures $\alpha^n$ are probability measures.
To each multi-index $\mathbf i=(i_1,\ldots,i_N)$ and thus to each point $z_\mathbf i^n$ correspond $N$ points
\[x_\mathbf{i}^{1,n}\in B_{i_1}^{1,n},\ldots,~x_\mathbf{i}^{N,n}\in B_{i_N}^{N,n},\]
which are the ``coordinates'' of $z_\mathbf i ^n$ in the spaces $X^k$.
The marginals of $\alpha^n$ are finite sums of Dirac measures centered at these points.
We denote these marginals by $\mu^{1,n},\ldots, \mu^{N,n}$.
More precisely, they can be described as
\begin{equation}\label{marginal}
\mu^{k,n}=\frac{1}{\gamma(K^n)}\sum_{i=1}^{m^{k,n}} \sum_{\substack{\mathbf{i} \in I^n\\ i_k=i}} \gamma(Q_\mathbf{i}^n)\, \delta_{x_\mathbf{i}^{k,n}}.
\end{equation}
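For a concrete two-marginal instance of this construction, the sketch below replaces $\gamma$ by an empirical sample on $[0,1)^2$ and the sets $B_i^{k,n}$ by dyadic intervals (all of this is our simplification, not the construction in full generality): each charged cube contributes one Dirac mass, at a representative point, with the cube's mass, and the first marginal of the discrete measure reproduces, interval by interval, the mass of the sampled first marginal.

```python
import random
from collections import defaultdict

random.seed(0)
# gamma replaced by M samples of a dependent pair on [0,1)^2 (N = 2);
# the cubes Q_i^n are products of dyadic intervals of side 1/2^n
M, n = 2000, 3
h = 2 ** n
samples = []
for _ in range(M):
    x = random.random()
    samples.append((x, min(x + 0.1 * random.random(), 0.999)))

mass = defaultdict(float)   # gamma(Q_i^n), estimated by empirical frequency
rep = {}                    # one representative z_i^n per charged cube
for (x, y) in samples:
    i = (int(x * h), int(y * h))
    mass[i] += 1.0 / M
    rep.setdefault(i, (x, y))
alpha_n = [(mass[i], rep[i]) for i in mass]   # the discrete measure alpha^n

# first marginal of alpha^n versus the empirical first marginal, per interval
marg1, emp1 = defaultdict(float), defaultdict(float)
for w, (x, _) in alpha_n:
    marg1[int(x * h)] += w
for (x, _) in samples:
    emp1[int(x * h)] += 1.0 / M
total = sum(w for w, _ in alpha_n)
```

Since the sample is supported on a compact set, this also illustrates the simplification of Remark \ref{discretegamma2}: no exhaustion $K^n$ is needed and $\alpha^n$ is already a probability measure.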
\begin{proposition}\label{discretegamma} $\alpha^n\stackrel{*}{\rightharpoonup}\gamma$.
\end{proposition}
\begin{proof}
Let $\varepsilon>0$ and $\varphi\in C_b(X)$.
We have to find $n_0\in\mathbb{N}$ such that
\begin{equation}\label{claim}
|\int_X \varphi d \gamma- \int_X\varphi d \alpha^n | <\varepsilon, ~~~\text{ for all }n\ge n_0.
\end{equation}
Let $M>0$ be such that
\[
|\varphi(z)| \le M~~~\text{for all }z\in X.\]
We fix $\bar n\in\mathbb{N}$ such that
\[
\gamma(X\setminus K^n)<\min\left\{\frac{1}{2},\frac{\varepsilon}{5M}\right\},~~~\text{for all }n\ge \bar n
\]
Since $\varphi\in C_b(X)$, it is uniformly continuous on the compact set $K^{\bar n}$: there exists $\delta>0$ such that
\[
|\varphi(z)-\varphi(v)|<\frac{\varepsilon}{5}~~~\text{ for all }z,v\in K^{\bar n}\text{ such that }d(z,v)<\delta.
\]
Moreover, the decomposition $\mathcal{Q}^n$ has been constructed so that there exists $n_0 \geq \bar{n}$ such that for all $k \in \{1, \dots, N\}$ and $n \geq n_0$
\[
diam(B_i^{k,n})<\delta~~~\text{for all }i\in\{1,\ldots,m^{k,n}\}.
\]
We start from
\begin{equation}\label{total}
|\int_X\varphi \d\gamma-\int_X\varphi \d\alpha^n | \le | \int_{K^{\bar n}}\varphi \d\gamma-\int_{K^{\bar n}}\varphi \d\alpha^n |+ |\int_{X\setminus K^{\bar n}}\varphi \d\gamma-\int_{X\setminus K^{\bar n}}\varphi \d\alpha^n|
\end{equation}
and we evaluate separately the two terms on the RHS.
For all $n\ge n_0$ the first term can be estimated as follows: (we recall that, by construction, $K^{\bar n}\subset K^n$)
\begin{align} \label{long}
&| \int_{K^{\bar n}}\varphi\d\gamma-\int_{K^{\bar n}}\varphi\d\alpha^n|
=|\int_{K^{\bar n}}\varphi\d\gamma-\frac{1}{\gamma(K^n)}\int_{K^{\bar n}}\varphi\d\tilde\alpha^n|\nonumber\\
&\stackrel{a)}{\le}|\int_{ K^{\bar n}}\varphi\d\gamma-\int_{K^{\bar n}}\varphi\d\tilde\alpha^n| +\frac{\gamma(X\setminus K^n)}{1-\gamma(X\setminus K^n)}\int_{K^{\bar n}}|\varphi|\d\tilde\alpha^n\nonumber\\
&<|\int_{K^{\bar n}}\varphi\d\gamma-\int_{K^{\bar n}}\varphi\d\tilde\alpha^n| +M\cdot 2\cdot\frac{\varepsilon}{5M}\nonumber\\
&<|\int_{ K^{\bar n}}\varphi\d\gamma-\int_{K^{\bar n}}\varphi\d\tilde\alpha^n| +\frac{2\varepsilon}{5}.
\end{align}
Above in $a)$ we have written
\[
\frac{1}{\gamma(K^n)}=\frac{1}{1-\gamma(X\setminus K^n)}=1+\frac{\gamma(X\setminus K^n)}{1-\gamma(X\setminus K^n)}
\]
and then estimated $\gamma(X\setminus K^n)$ in the numerator from above by $\frac{\varepsilon}{5M}$ and the denominator $1-\gamma(X\setminus K^n)$ from below by $\frac 12$.
By construction, since $n\ge \bar n$, there exists a subset $\bar I^n\subset I^n$ such that
\[
K^{\bar n}=\bigcup_{(i_1,\ldots, i_N)\in \bar I^n}B_{i_1}^{1,n}\times\cdots\times B_{i_N}^{N,n}.
\]
So we write
\[
\int_{K^{\bar n}}\varphi\d\gamma-\int_{K^{\bar n}}\varphi\d\tilde\alpha^n=\sum_{\mathbf{i}\in \bar I^n}\left(\int_{(B_{i_1}^{1,n}\times\cdots\times B_{i_N}^{N,n})}\varphi\d\gamma-\int_{(B_{i_1}^{1,n}\times\cdots\times B_{i_N}^{N,n})}\varphi\d\tilde\alpha^n\right).
\]
We simplify the notations for the next few lines and, for all $\mathbf{i}\in \bar I^n$, we denote by $Q:=B_{i_1}^{1,n}\times\cdots\times B_{i_N}^{N,n}$ the corresponding cube and by $u_0=z_{i_1,\ldots,i_N}\in Q$ the point at which $\tilde \alpha ^n$ is concentrated.
Then for each 'cube' $Q$
\begin{align*}
&|\int_{Q}\varphi(u)\,\d\gamma -\int_{Q}\varphi(u)\,\d\tilde\alpha^n|=|\int_{Q}\varphi(u)\,\d\gamma-\varphi(u_0)\gamma(Q)|\\
&\le\int_{Q}|\varphi(u)-\varphi(u_0)|\,\d\gamma \leq \gamma(Q)\cdot\frac{\varepsilon}{5},
\end{align*}
where in the last step we have used the uniform continuity of $\varphi$ on $K^{\bar n}$.
Summing the estimate above over all cubes $Q=B_{i_1}^{1,n}\times\cdots\times B_{i_N}^{N,n}$, $\mathbf{i}\in \bar I^n$, gives
\[
|\int_{K^{\bar n}} \varphi \d\gamma - \int_{K^{\bar n}} \varphi \d\tilde\alpha^n |< \gamma(K^{\bar n})\cdot\frac{\varepsilon}{5} \leq \frac{\varepsilon}{5}.
\]
Combining this estimate with \eqref{long} gives us the estimate
\begin{equation}\label{int1}
|\int_{K^{\bar n}}\varphi\d\gamma-\int_{K^{\bar n}}\varphi\d\alpha^n|<\frac15 \varepsilon+\frac{2\varepsilon}{5}=\frac35 \varepsilon.
\end{equation}
Finally, we estimate the `tail' term in \eqref{total}. Using the set $\bar I^n$ defined above one gets
\begin{align*}
\alpha^n(X\setminus K^{\bar n})&=1-\frac{1}{\gamma(K^n)}\sum_{(i_1,\ldots,i_N)\in \bar I^{n}}\gamma(B_{i_1}^{1,n}\times\cdots\times B_{i_N}^{N, n})\\
&=1-\frac{\gamma(K^{\bar n})}{\gamma(K^n)}\le 1-\gamma(K^{\bar n})<\frac{\varepsilon}{5M}.
\end{align*}
Using this we get
\begin{align}
&|\int_{X\setminus K^{\bar n}}\varphi\d\gamma-\int_{X\setminus K^{\bar n}}\varphi\d\alpha^n|\le \int_{X\setminus K^{\bar n}}|\varphi|\d\gamma+\int_{X\setminus K^{\bar n}}|\varphi|\d\alpha^n\nonumber\\
&<M\frac{\varepsilon}{5M}+M\frac{\varepsilon}{5M}=\frac25\varepsilon.\label{int2}
\end{align}
Together estimates (\ref{int1}) and (\ref{int2}) prove the claim (\ref{claim}).
\end{proof}
\begin{remark}\label{discretegamma2}
If $\mathop{\rm supp}\nolimits \mu^k$ is compact for $k=1, \dots, N$ then the dependence on $n$ of $K^n$ is not needed anymore since one can take $K^n\equiv K:= \mathop{\rm supp}\nolimits \mu^1 \times \dots \times \mathop{\rm supp}\nolimits \mu^N$.
This also simplifies the analytic expressions of $\alpha^n$ and their marginal measures.
\end{remark}
In line with the previous Remark we prove the following:
\begin{proposition}\label{discrecompact}
If $\mathop{\rm supp}\nolimits \mu^k$ is compact for $k=1, \dots, N$ then for all $k,n$ and all $i$
\[
\mu^{k,n} (B_i^{k,n})= \mu^k (B_i^{k,n}).
\]
\end{proposition}
\begin{proof}
We prove the formula for the first marginal; the other marginals are analogous. Note that, since the supports are compact, $\gamma(K^n)=\gamma(K)=1$ by Remark \ref{discretegamma2}, so $\alpha^n=\tilde\alpha^n$.
\begin{align*}
\alpha^n ( B_i ^{1,n}\times\prod_{k=2}^N X^k ) &= \sum_{\stackrel{{\bf \underline{i}} \in I^n}{i=i_1}} \gamma (Q_{\bf \underline{i}} ^n) \delta_{z_{{\bf \underline{i}}}^n} (B_i ^{1,n}\times\prod_{k=2}^N X^k)\\
&=\sum_{\stackrel{{\bf \underline{i}} \in I^n}{i=i_1}} \gamma (Q_{\bf \underline{i}} ^n)=\gamma( B_i ^{1,n}\times\prod_{k=2}^N X^k)=\mu^1(B_i^{1,n}).
\end{align*}
\end{proof}
\section{Variational approximations and conclusions}
In this section we prove the discrete approximations of the functionals that will be used in the optimality proofs.
Given a transport plan $\gamma$, we have introduced, in the previous section, the dyadic approximation $\{\alpha^n\}_{n \in \mathbb{N}}$ of $\gamma$.
\subsection{The $\sup$ case.}\label{sup}
We define the functionals $\mathcal{F}_n,\mathcal{F}:\mathcal{P}(X)\to \mathbb{R} \cup\{+\infty\}$ by
\[
\mathcal{F}_n(\beta)=
\begin{cases}
C_\infty[\beta]&\text{ if }\beta\in\Pi(\mu^{1,n},\ldots,\mu^{N,n}),\\
+\infty&\text{ otherwise; }
\end{cases}
\]
and
\[
\mathcal{F}(\beta)=
\begin{cases}
C_\infty[\beta]&\text{ if }\beta\in\Pi(\mu^1,\ldots,\mu^N),\\
+\infty&\text{ otherwise.}
\end{cases}
\]
For the rest of this subsection we assume that $c$ is continuous and that $\mu^i$ has compact support for $i=1,\dots,N.$
We prove the following
\begin{proposition}\label{gammaconv} The functionals $\mathcal{F}_n $ are equi-coercive and
\begin{equation}\label{eq:gammaclaim}
\mathcal{F}_n \stackrel{\Gamma}{\rightarrow}\mathcal{F}.
\end{equation}
\end{proposition}
\begin{proof}
Let $\beta\in\mathcal{P}(X)$. We recall that we need to prove the following:
\begin{equation*}
\forall (\beta^n)_n\stackrel{*}{\rightharpoonup}\beta \text{ in }\mathcal{P}(X), \
\liminf_{n\to\infty}\mathcal{F}_n(\beta^n)\ge \mathcal{F}(\beta).\tag{I}
\end{equation*}
\begin{equation*}
\exists (\beta^n)_n \stackrel{*}{\rightharpoonup} \beta \text{ in } \mathcal{P}(X)
\text{ s.t. } \limsup_{n\to\infty}\mathcal{F}_n(\beta^n)\le \mathcal{F}(\beta).\tag{II}
\end{equation*}
If $\mathcal F[\beta]<+\infty$, the $\Gamma$-$\liminf$ inequality (Condition (I)) follows from the lower semi-continuity of the functional $C_\infty$.
If $\mathcal{F}[\beta]=+\infty$, then either $\beta\notin\Pi(\mu^1,\ldots,\mu^N)$ or $C_\infty (\beta)=+\infty$. In the first case, since $\beta^n \stackrel{*}{\rightharpoonup} \beta$ and
$\mu^{i,n}\stackrel{*}{\rightharpoonup} \mu^i$ for $i=1, \dots, N$, there exists $n_0\in\mathbb{N}$ such that $\beta^n\notin \Pi(\mu^{1,n},\ldots,\mu^{N,n})$ for all $n\ge n_0$. Hence
$\mathcal{F}_n[\beta^n]=+\infty$ for all $n\ge n_0$. If $C_\infty (\beta)=+\infty$, fix $M>0$; since $c$ is continuous, there exist ${\bf x} \in \mathop{\rm spt}\nolimits \beta$ and $r>0$ such that $B({\bf x}, r) \subset \{c>M\}$.
Since the evaluation on open sets is lower semi-continuous with respect to the tight convergence we have that, for $n$ big enough, $\beta^n (B({\bf x}, r))>0$, so that
$C_\infty (\beta^n)>M$, and since $M$ is arbitrary we conclude.
For the $\Gamma$-$\limsup$ inequality (Condition (II)), if $\mathcal{F}[\beta]=+\infty$, then any sequence with the right marginals and tightly converging to $\beta$ will do. Therefore, we may assume that the measure $\beta$ satisfies $\beta\in\Pi(\mu^1,\ldots,\mu^N)$ and $C_\infty[\beta]<+\infty$.
To build the approximants, we use the Borel partitions $\{B_i^{k,n}\}_{i=1}^{m^{k,n}}$ and the discrete measures introduced in Section \ref{dyaddecomp}.
For all $n$, given a multi-index ${\bf \underline{i}}=(i_1,\ldots,i_N)$ we use, again, the "cube"
\[
Q_{\bf \underline{i}}^n:=B_{i_1}^{1,n}\times\cdots\times B_{i_N}^{N,n} \]
and set
\[J^n:=\{{\bf \underline{i}} \ | \ \beta (Q^n_{\bf \underline{i}}) >0\}.\]
We then define the measures
\[
\beta^n=\sum_{{\bf \underline{i}} \in J^n} \beta(Q_{\bf \underline{i}}^n) \frac{\mu^{1,n} \restr{B^{1,n}_{i_1}} }{\mu^1 (B^{1,n}_{i_1})}\otimes \dots \otimes \frac{\mu^{N,n} \restr{B^{N,n}_{i_N}}}{\mu^N (B^{N,n}_{i_N})}.
\]
We show that $\beta^n$ has marginals $\mu^{1,n},\ldots,\mu^{N,n}$. For all Borel sets $A\subset X^1$ we have
\begin{align}
\beta^n\left(A\times\prod_{k=2}^N X_k\right)&=\sum_{\mathbf{j}\in J^n}\beta (Q_\mathbf{j}^{n})\frac{\mu^{1,n}_{|B_{j_1}^{1,n}}(A)}{\mu^{1,n}(B_{j_1}^{1,n})}\nonumber \\
&=\sum_{j_1\in\pi^1(J^n)}\frac{\mu^{1,n}_{|B_{j_1}^{1,n}}(A)}{\mu^{1}(B_{j_1}^{1,n})}\sum_{\{(j_2,\ldots,j_N)~|~\mathbf{j}\in J^n\}}\beta(Q_\mathbf{j}^n)\nonumber\\
&=\sum_{j_1\in\pi^1(J^n)}\frac{\mu^{1,n}_{|B_{j_1}^{1,n}}(A)}{\mu^{1}(B_{j_1}^{1,n})}\mu^1(B_{j_1}^{1,n})\nonumber \\
&=\sum_{j_1\in\pi^1(J^n)}\mu^{1,n}_{|B_{j_1}^{1,n}}(A)=\mu^{1,n}(A).
\end{align}
where we have used Proposition \ref{discrecompact} and the fact that $\beta\in\Pi(\mu^1,\ldots,\mu^N)$. The computation is analogous for the other marginals.
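The block-product structure of $\beta^n$ can be verified exactly on a finite example. In the sketch below (atoms, blocks, and weights are all invented, and $N=2$) each atom is tagged by the block it belongs to, and exact rational arithmetic checks that the measure assembled from normalized restrictions of the marginals has the prescribed first marginal.

```python
from fractions import Fraction as Fr

# atoms of mu^1 and mu^2, tagged by the block ('a','b', resp. 'c','d') they lie in
mu1 = {('a', 0): Fr(1, 4), ('a', 1): Fr(1, 4), ('b', 0): Fr(1, 2)}
mu2 = {('c', 0): Fr(1, 2), ('d', 0): Fr(1, 4), ('d', 1): Fr(1, 4)}
# a coupling beta of (mu1, mu2)
beta = {(('a', 0), ('c', 0)): Fr(1, 4), (('a', 1), ('d', 0)): Fr(1, 4),
        (('b', 0), ('c', 0)): Fr(1, 4), (('b', 0), ('d', 1)): Fr(1, 4)}

block = lambda atom: atom[0]
bmass = lambda mu, b: sum(w for a, w in mu.items() if block(a) == b)

betaQ = {}                  # beta(Q) for the charged cubes Q = B x B'
for (x, y), w in beta.items():
    q = (block(x), block(y))
    betaQ[q] = betaQ.get(q, Fr(0)) + w

beta_n = {}                 # sum_Q beta(Q) * (mu1|_B / mu1(B)) x (mu2|_B' / mu2(B'))
for (bx, by), w in betaQ.items():
    for ax, wx in mu1.items():
        for ay, wy in mu2.items():
            if block(ax) == bx and block(ay) == by:
                beta_n[(ax, ay)] = (beta_n.get((ax, ay), Fr(0))
                                    + w * wx * wy / (bmass(mu1, bx) * bmass(mu2, by)))

marg1 = {}                  # first marginal of beta_n, computed exactly
for (ax, _), w in beta_n.items():
    marg1[ax] = marg1.get(ax, Fr(0)) + w
```

The exact equality `marg1 == mu1` mirrors the chain of equalities above: summing the normalized restriction over a block reproduces the blockwise mass of $\beta$, hence the full marginal.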
The sequence $(\beta^n)$ converges tightly to $\beta$, as can be seen in a manner analogous to the convergence of the sequence $(\alpha^n)$ to $\gamma$. It remains to prove that the sequence satisfies the $\Gamma$-$\limsup$ inequality.
We fix $\varepsilon>0$. It suffices to show that
\[\limsup_{n\to\infty}C_\infty[\beta^n]\le C_\infty[\beta]+\varepsilon.\]
Since for all $n$ the support of $\beta^n$ is a finite set, we can fix $u^n\in\mathop{\rm supp}\nolimits\beta^n$ such that $C_\infty[\beta^n]=c(u^n)$.
Moreover for all $n$ there exists $z^n\in\mathop{\rm supp}\nolimits\beta$ such that $d(u^n,z^n)\le\tfrac12\delta_n$.
Now for all $n$ large enough to satisfy $\varepsilon_n<\varepsilon$ we have
\[C_\infty[\beta^n]=c(u^n)\le c(z^n)+\varepsilon_n\le C_\infty[\beta]+\varepsilon_n<C_\infty[\beta]+\varepsilon\]
and we are done.
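The discretization used above can be illustrated in a fully discrete toy model: a sketch with synthetic data, two marginals, and dyadic blocks of size $b$, in which the fine-grid measures play the role of both $\mu^k$ and $\mu^{k,n}$, so that the block-product construction of $\beta^n$ and the marginal computation can be checked numerically.

```python
import numpy as np

rng = np.random.default_rng(0)
m, b = 8, 2                          # fine grid size and block size (b divides m)

# an arbitrary coupling beta on an m x m grid; its marginals define mu1, mu2
beta = rng.random((m, m))
beta /= beta.sum()
mu1, mu2 = beta.sum(axis=1), beta.sum(axis=0)

blk = np.arange(m) // b              # coarse block index of each fine point
nb = m // b

# beta_n spreads each block mass beta(Q_ij) as the product of the
# normalized restrictions of the marginals, as in the displayed formula
beta_n = np.zeros_like(beta)
for i in range(nb):
    for j in range(nb):
        I, J = blk == i, blk == j
        mass = beta[np.ix_(I, J)].sum()          # beta(Q_ij)
        if mass > 0:
            p = mu1[I] / mu1[I].sum()
            q = mu2[J] / mu2[J].sum()
            beta_n[np.ix_(I, J)] = mass * np.outer(p, q)

# the marginals are preserved, as in the computation above
assert np.allclose(beta_n.sum(axis=1), mu1)
assert np.allclose(beta_n.sum(axis=0), mu2)
```

The same bookkeeping extends to $N$ marginals by replacing the outer product with an $N$-fold tensor product over the blocks of each partition.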
By Corollary \ref{precompatto}, the set $\Pi(\mu^1,\ldots,\mu^N)\cup\bigcup_n \Pi(\mu^{1,n},\ldots,\mu^{N,n})$ is compact, and therefore the equi-coercivity follows.
\end{proof}
\subsection{The integral case.}\label{integral}
We define the functionals $\mathcal{G}_n,\mathcal{G}:\mathcal{P}(X)\to \mathbb{R} \cup\{+\infty\}$ by
\[
\mathcal{G}_n(\beta)=
\begin{cases}
C[\beta]&\text{ if }\beta\in\Pi(\mu^{1,n},\ldots,\mu^{N,n}),\\
+\infty&\text{ otherwise; }
\end{cases}
\]
and
\[
\mathcal{G}(\beta)=
\begin{cases}
C[\beta]&\text{ if }\beta\in\Pi(\mu^1,\ldots,\mu^N),\\
+\infty&\text{ otherwise.}
\end{cases}
\]
For this integral case we assume that the measures $\mu^1,\ldots,\mu^N$ have compact supports and that the cost function $c:X\to\mathbb{R}$ is continuous.
We prove the following:
\begin{proposition}\label{gammaconvint} The functionals $\mathcal{G}_n $ are equi-coercive and
\begin{equation}\label{eq:gammaclaimint}
\mathcal{G}_n \stackrel{\Gamma}{\rightarrow}\mathcal{G}.
\end{equation}
\end{proposition}
\begin{proof}
The proof is analogous to that of Proposition \ref{gammaconv}. The only substantial difference is in the proof of the $\Gamma$-$\limsup$ inequality in the case where the measure $\beta$ belongs to the set $\Pi(\mu^1,\ldots,\mu^N)$. We have to find a sequence $(\beta^n)$, weakly${}^\ast$-converging to $\beta$ and satisfying Condition (II). Let $(\beta^n)$ be the discretization defined in the proof of Proposition \ref{gammaconv}. Since the supports of the measures $\mu^1,\ldots,\mu^N$ are compact, the set $K:=\mathop{\rm spt}\nolimits\mu^1\times\cdots\times \mathop{\rm spt}\nolimits\mu^N$ is compact as well. Note that for all $n\in\mathbb{N}$ we have $\mathop{\rm spt}\nolimits\beta^n\subset K$. We set
$T=\max_{z\in K}c(z)$. Now the function $c_T:=\min\{c,T\}$ is continuous and bounded on $X$, and by the weak${}^\ast$-convergence
\[\mathcal{G}_n(\beta^n)=\int_X c\,d\beta^n=\int_X c_T\,d\beta^n\to\int_X c_T\,d\beta=\int_X c\,d\beta=\mathcal{G}(\beta),\]
from which the $\Gamma$-$\limsup$ inequality follows.
\end{proof}
\subsection{Proof of the main theorems and a counterexample.}
\begin{proof}[Proof of Theorem \ref{icmoptimal}]
By Proposition \ref{discretegamma} and Remark \ref{discretegamma2} we can find a sequence $(\alpha^n)_n$ with finite supports such that $\mathop{\rm spt}\nolimits \alpha^n \subset \mathop{\rm spt}\nolimits \gamma$ and $\alpha^n \stackrel{*}{\rightharpoonup} \gamma$.
We define the functionals $\mathcal{F}$ and $\mathcal{F}_n$ of Subsection \ref{sup} using the marginals of $\gamma$ and $\alpha^n$.
The plan $\gamma$ is ICM, therefore by Proposition \ref{icmfinitelyoptimal} it is finitely optimal. This means that each plan $\alpha^n$ is optimal between its marginals and thus a minimiser of the functional $\mathcal{F}_n$.
The $\Gamma$-convergence and equi-coercivity established in Proposition \ref{gammaconv} imply, by Theorem \ref{convminima}, that the minimisers of the functionals $\mathcal{F}_n$ converge, up to subsequences, to a minimiser of $\mathcal F$. Therefore, since $\alpha^n\stackrel{*}{\rightharpoonup}\gamma$, the plan $\gamma$ is optimal for the problem ($P_\infty$).
\end{proof}
\begin{proof}[Proof of Theorem \ref{ccmoptimal}]
The proof is the same as that of Theorem \ref{icmoptimal}. The $\Gamma$-convergence is now given by Proposition \ref{gammaconvint}.
\end{proof}
In \cite{Ambrosio}, Ambrosio and Pratelli showed that for the problem ($P$) the continuity of the cost function $c:X\times X\to[0,\infty]$ is necessary to guarantee the optimality of a cyclically monotone transport plan. The next example, a slightly modified version of theirs, shows that the continuity of the cost is required also for the problem ($P_\infty$), even when the cost assumes only finite values.
\begin{example}
Let us consider the two-marginal $L^\infty$-optimal transportation problem with marginals $\mu=\nu=\mathcal{L}|_{[0,1]}$ and the cost function
\[c(x,y)=
\begin{cases}
1&\text{ if }x=y\\
2&\text{ otherwise}
\end{cases}.\]
\end{example}
We fix an irrational number $\alpha$. We set $T_1=\mathrm{Id}_{[0,1]}$ and $T_2:[0,1]\to[0,1]$, $T_2(x)=x+\alpha\pmod 1$. Now $T_1$ is an optimal transportation map for the problem ($P_\infty$) with $C_\infty[T_1]=1$. Since $C_\infty[T_2]=2$, the map $T_2$ cannot be optimal. However, it is ICM.
Indeed, if $T_2$ were not ICM, there would exist a minimal $K\in\mathbb{N}$ and pairs $\{(x_i,y_i)\}_{i=1}^K$, all belonging to the support of the plan induced by $T_2$, such that
\begin{equation*}
\max_{1\le i\le K}c(x_i,y_i)>\max_{1\le i\le K}c(x_{i+1},y_i),
\end{equation*}
with the convention $x_{K+1}=x_1$.
By the definition of the map $T_2$ we have $y_i=x_i+\alpha\pmod 1$ for all $i$.
Given the form of $c$, the only way this inequality can hold is as $2>1$, that is, with $\max_{1\le i\le K}c(x_{i+1},y_i)=1$. The right-hand side then forces $x_{i+1}=y_i=x_i+\alpha\pmod 1$ for all $i$. Summing over $i$ and recalling that $x_{K+1}=x_1$, we get $x_1=x_1+K\alpha\pmod 1$, contradicting the irrationality of $\alpha$.
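This cycle argument can be checked numerically. The following sketch (plain Python) samples random cycles from the support of the plan induced by $T_2$ and verifies the ICM inequality, and then checks that orbits of $T_2$ never close, which is the mechanism behind the contradiction.

```python
import math
import random

alpha = math.sqrt(2) % 1                  # an irrational shift

def c(x, y):
    # the discontinuous cost of the example
    return 1 if math.isclose(x, y, abs_tol=1e-12) else 2

def T2(x):
    return (x + alpha) % 1

# ICM inequality max_i c(x_i, y_i) <= max_i c(x_{i+1}, y_i) on random cycles
# drawn from the support of the plan induced by T2
random.seed(0)
for K in range(2, 6):
    for _ in range(500):
        xs = [random.random() for _ in range(K)]
        ys = [T2(x) for x in xs]
        lhs = max(c(x, y) for x, y in zip(xs, ys))
        rhs = max(c(xs[(i + 1) % K], ys[i]) for i in range(K))
        assert lhs <= rhs                 # no cycle improves on T2

# the right-hand side can only equal 1 along a closed orbit of T2,
# which the irrationality of alpha forbids:
x = 0.1
for K in range(1, 50):
    x = T2(x)
    assert not math.isclose(x, 0.1, abs_tol=1e-9)
```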
\bibliographystyle{plain}
\section{ANDREEV SPIN QUBIT}
We implement the ASQ in a quantum dot Josephson junction formed in a hybrid InAs/Al semiconducting-superconducting nanowire, see Fig.~\ref{fig:intro}(a). The quantum dot is electrostatically defined by three gate electrodes under an uncovered InAs section of the nanowire and tunnel-coupled to the superconducting segments \cite{Bargerbos2022}. In the presence of a magnetic field, the ASQ can be described by the effective Hamiltonian \cite{Bargerbos2022b, Padurariu2010}
\begin{equation} \label{eq:H_ASQ}
H_{\rm s} = E_0\cos\left(\phi\right) - E_{\rm SO}\, \vec{\sigma} \cdot \vec{n}\,\sin\left(\phi\right) +\frac{1}{2} \vec{E}_{\rm Z} \cdot\vec{\sigma} ,
\end{equation}
where $\phi$ is the phase difference across the junction, $\vec{\sigma}$ is the spin operator, $\vec{n}$ is a unit vector along the zero-field spin-polarization direction, set by the SOI, and $\vec{E}_{\rm Z}$ is a Zeeman field arising in the presence of an external magnetic field.
$E_0$ denotes the effective Josephson energy of the quantum dot junction, common to both spin states. We note that the term proportional to $E_0$ has a minimum at $\phi=\pi$, originating from the odd occupancy of the quantum dot junction~\cite{Bargerbos2022b}. In turn, $E_{\rm SO}$ denotes the spin-dependent contribution to the energy of the junction. This spin-dependent potential energy originates from electron co-tunneling accompanied by a spin flip, and it is finite only if SOI is present and multiple levels of the quantum dot are involved in the co-tunneling sequence \cite{Padurariu2010}. The difference between the energies of the $\ket{\uparrow}$ and $\ket{\downarrow}$ eigenstates of Eq.~\eqref{eq:H_ASQ} determines the ASQ qubit frequency $f_{\rm s}=(E_\uparrow - E_\downarrow)/h$, as depicted in Fig.~\ref{fig:intro}(b).
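As a concrete illustration of Eq.~\eqref{eq:H_ASQ}, the sketch below diagonalizes the $2\times 2$ Hamiltonian numerically and checks the qubit splitting against the closed form $hf_{\rm s}=2\,|\vec{E}_{\rm Z}/2-E_{\rm SO}\sin(\phi)\,\vec{n}|$. The Zeeman field (here aligned with $\vec{n}$) is a hypothetical choice; $E_0$ and $E_{\rm SO}$ follow the values quoted later in the text.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

# parameters in frequency units (h = 1)
E0, Eso = 0.211, 0.305            # GHz
n = np.array([0.0, 0.0, 1.0])     # spin-orbit direction
Ez = np.array([0.0, 0.0, 2.9])    # hypothetical Zeeman field, aligned with n

def H_asq(phi):
    sn = n[0]*sx + n[1]*sy + n[2]*sz
    sZ = Ez[0]*sx + Ez[1]*sy + Ez[2]*sz
    return E0*np.cos(phi)*I2 - Eso*np.sin(phi)*sn + 0.5*sZ

phis = np.linspace(0, 2*np.pi, 201)
fs = np.array([np.diff(np.linalg.eigvalsh(H_asq(p)))[0] for p in phis])

# analytic check of the splitting
fa = np.array([2*np.linalg.norm(Ez/2 - Eso*np.sin(p)*n) for p in phis])
assert np.allclose(fs, fa)
```

With the field along $\vec{n}$, the splitting is minimal at $\phi=\pi/2$, where the spin-orbit term partially cancels the Zeeman term.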
For readout and control, we embed the ASQ into a superconducting transmon circuit, as illustrated in Fig.~\ref{fig:intro}(c). The transmon circuit consists of a capacitor, with charging energy $E_{\rm c}$, shunting a superconducting quantum interference device (SQUID) formed by the parallel combination of a gate-tunable Josephson junction with Josephson energy $E_{\rm J}$ and the quantum dot Josephson junction hosting the ASQ. We operate in the regime $E_{\rm J}/\sqrt{E_0^2+E_{\rm SO}^2} > 20$, so that the phase difference $\phi$ across the quantum dot Josephson junction can be controlled by the magnetic flux through the SQUID loop $\Phi_{\rm ext}=\phi_{\rm ext}\Phi_0/(2\pi)$, where $\Phi_0 = h/2e$ is the magnetic flux quantum. Due to the presence of the $E_{\rm SO}$ term, the transmon frequency $f_{\rm t}$ becomes spin-dependent, so that the two spin states can be distinguished by probing a readout resonator capacitively coupled to the transmon, using standard circuit QED techniques~\cite{Bargerbos2022b}.
Finally, the spin-flipping qubit transition can be directly driven, while maintaining the transmon in its ground state, by applying a microwave tone on the central quantum dot gate~\cite{Metzger2021, Bargerbos2022b, Wesdorp2022}. Such microwave drive allows for all-electrical manipulation through EDSR \cite{Nowack2007, NadjPerge2010}. For further details about the device implementation and setup, see the Supplementary Information \cite{Supplement}.
\begin{figure}[t!]
\centering
\includegraphics[scale=1.0]{Fig_1_v1.pdf}
\caption{(a) Schematic depiction of an Andreev spin qubit in a hybrid superconductor-semiconductor nanowire. The qubit is formed in a gate-defined quantum dot with an odd number of electrons and is coupled to superconducting leads. (b) Eigenenergies of the qubit levels as a function of the phase difference $\phi$, as described by the effective model of Eq.~\eqref{eq:H_ASQ}. The frequency of the qubit spin-flip transition $\ket{\downarrow} \leftrightarrow \ket{\uparrow}$ is denoted by $f_{\rm s}$. In this panel the component of the Zeeman energy parallel to the zero-field spin-polarization direction is $E_{\rm Z}^\parallel = \SI{2.9}{GHz}$. (c) Circuit model of the Andreev spin qubit embedded in a transmon circuit. The spin state is manipulated by a microwave drive, at frequency $f_{\rm drive}$, applied to the central gate electrode. The transmon island, with charging energy $E_{\rm c}$, is connected to ground by a SQUID formed by the parallel combination of the ASQ and a reference Josephson junction. Here, $\phi$ denotes the superconducting phase difference across the quantum dot junction, while $\Phi_\textrm{ext}$ is the externally applied magnetic flux through the SQUID loop. (d) Transmission through the readout circuit \cite{Supplement} as a function of the external flux and the applied drive frequency, measured at a magnetic field $B_{\rm {z}}=\SI{17}{mT}$ parallel to the nanowire. (e) Extracted qubit frequency $f_{\rm s}$ versus $B_z$ (markers), measured at $\phi_{\rm ext} = 3\pi/2$. The data is fitted with a linear dependence (solid line), resulting in an effective Landé $g$-factor of $g^* = 12.7 \pm 0.2$. Horizontal dashed lines denote, from bottom to top, the first transmon frequency, the readout resonator frequency and the second transmon frequency.
}
\label{fig:intro}
\end{figure}
Following the gate-tuning strategy described in Ref.~\cite{Bargerbos2022}, we prepare the quantum dot junction in a regime where it is occupied by an odd number of electrons, with $E_0/h = \SI{211}{MHz}$ and $E_{\rm SO}/h = \SI{305}{MHz}$ \cite{Supplement}. In this regime, the qubit states $\ket{\uparrow}$ and $\ket{\downarrow}$ are the lowest energy levels of the system, and the qubit subspace is separated from higher lying states by a frequency gap of at least \SI{20}{GHz} \cite{Supplement}.
After fixing the gate voltages of the quantum dot, we investigate the tunability of the spin-flip transition $f_{\rm s}$ by applying a microwave tone at frequency $f_{\rm drive}$ and performing dispersive readout of the transmon qubit. As shown in Fig.~\ref{fig:intro}(d), we can finely control $f_{\rm s}$ by applying a magnetic flux through the SQUID loop, although the visibility of the measurement signal is reduced around $\phi_{\rm ext}=0,\pi$, where the spin-dependent transmon frequencies are degenerate \cite{Bargerbos2022b}.
By applying an external magnetic field $B_z$ of up to \SI{65}{mT} along the nanowire, the qubit frequency can be varied from \SI{250}{MHz} to \SI{12}{GHz}, see Fig.~\ref{fig:intro}(e). The magnetic field direction is chosen to maximize the magnetic field compatibility of the Al shell of the nanowire and is generally not aligned with the spin-orbit direction $\vec{n}$ \cite{Han2022, Bargerbos2022b}.
\section{QUBIT COHERENCE}
To perform coherent manipulation of the spin states we fix $B_z$~=~\SI{65}{mT} and $\phi_{\rm ext}$~=~$3\pi/2$, setting $f_{\rm s} = \SI{11.5}{GHz}$, where the residual population of the excited state is suppressed to less than 5~\% \cite{Supplement}, facilitating qubit manipulation and readout. We observe Rabi oscillations between the qubit states $\ket{\uparrow}$ and $\ket{\downarrow}$ by applying a Gaussian microwave pulse with a carrier frequency at the spin-flip transition frequency $f_{\rm drive} = f_{\rm s}$, see Fig.~\ref{fig:Rabi}. Here, the Gaussian pulses are truncated so that the total pulse length is 2.5 times the Gaussian full width at half maximum (FWHM). As shown in Fig.~\ref{fig:Rabi}(a), we resolve up to 10 oscillations by varying the amplitude and duration of the pulse envelope.
The population transfer between the spin states, as measured by the dispersive readout scheme with a readout time of \SI{2}{\micro \second}, follows the expected time-dependence of a standard Rabi oscillation, as shown in Fig.~\ref{fig:Rabi}(b), from which we extract the Rabi frequency for each pulse amplitude. For a fixed Rabi frequency, we calibrate the FWHM needed for $\pi$ and $\pi/2$ pulses for single qubit manipulation.
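A minimal sketch of how a Rabi frequency can be extracted from such a trace: the synthetic damped oscillation below stands in for the measured population transfer (the frequency and decay time are made-up values), and the Rabi frequency is recovered from the FFT peak.

```python
import numpy as np

# synthetic damped Rabi trace standing in for the measured data
t = np.linspace(0, 100, 400)          # delay axis in ns
f_rabi, t_d = 0.2, 27.0               # 0.2 /ns = 200 MHz; decay time in ns
y = np.cos(2 * np.pi * f_rabi * t) * np.exp(-t / t_d)

# estimate the Rabi frequency from the peak of the FFT magnitude
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
spectrum = np.abs(np.fft.rfft(y - y.mean()))
peak = freqs[np.argmax(spectrum[1:]) + 1]     # skip the DC bin

assert abs(peak - f_rabi) < 0.01
```

In practice a fit of a damped cosine to the time trace, as in Fig.~\ref{fig:Rabi}(b), gives both the Rabi frequency and the decay time at once; the FFT estimate is a quick cross-check.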
\begin{figure}[t]
\centering
\includegraphics[scale=1.0]{Fig_Rabi_v2.pdf} \caption{Coherent manipulation of the Andreev spin qubit for $f_{\rm s} = \SI{11.5}{GHz}$ at $B_z = \SI{65}{mT}$. (a) Rabi oscillations for a range of Gaussian pulses characterized by their amplitude $\rm A$ at the waveform generator output and their full width at half maximum (FWHM), see pulse sequence. As also indicated in the pulse sequence, the Rabi pulse is immediately followed by a readout (RO) pulse (red, not to scale). (b) Rabi oscillation corresponding to $A=\SI{0.1}{V}$, fit with $a \cos{\left(\Omega_R t\right)} \exp{\left(-t/t_{\rm d}\right)}$ (solid line). The fit yields a decay time $t_{\rm d}=\SI{27}{ns}$. (c) Extracted Rabi frequencies versus pulse amplitude, fit with a linear function (solid line).}
\label{fig:Rabi}
\end{figure}
As expected for a two-level system, the Rabi frequency is linear over a wide range of pulse amplitudes and only starts to deviate from this linear dependence for strong drive amplitudes, see Fig.~\ref{fig:Rabi}(c). This deviation is due to saturation of the maximum power provided by the room-temperature electronics. We measure Rabi frequencies larger than \SI{200}{MHz}, exceeding the largest Rabi frequencies achieved in spin-orbit qubits (SOQs) \cite{vandenBerg2013} and more than an order of magnitude faster than previous results for the ASQ \cite{Hays2021}. The Rabi frequency approaches the anharmonicity of typical transmon qubits, with no indications of higher-order levels being driven. The two-level nature of the ASQ thus intrinsically supports faster single-qubit gates than standard transmon qubits \cite{Werninghaus2021}.
Next, we characterize the lifetime of the ASQ by applying a $\pi$ pulse and reading out the qubit state after a delay time $\tau$. We obtain an exponential decay with a characteristic time $T_1$~=~24.4~$\pm$~\SI{0.5}{\micro s} at $B_z$~=~\SI{65}{mT}, see Fig.~\ref{fig:coherence}(a). As a function of magnetic field, $T_1$ varies between 10 and \SI{40}{\micro s} for qubit frequencies above the transmon frequency. We conjecture that the observed lifetime is limited by Purcell-like decay from coupling to the transmon, given the short transmon lifetime of around \SI{250}{ns}. For $B_z$ closer to zero, $T_1$ drops down to around \SI{1}{\micro s} \cite{Supplement}. This low lifetime is in contrast to the near-zero-field lifetimes found in previous ASQ experiments \cite{Hays2020, Hays2021}, which were in the range of $\SIrange{10}{50}{\micro s}$. The cause of this discrepancy is unknown, but a potential reason is an enhanced resonant exchange with the nuclear spins~\cite{Stockill2016} due to stronger strain in the InAs nanowire, which may differ for different nanowires depending on the exact growth conditions.
To characterize the coherence time of the qubit, we apply two $\pi/2$~pulses separated by a delay time, after which we read out the qubit state. From this experiment we extract a Ramsey coherence time of $T_{\rm 2R}$~=~11~$\pm$~\SI{1}{ns}, see Fig.~\ref{fig:coherence}(b), much smaller than $T_1$ and thus indicative of strong dephasing. Dephasing that originates from noise that is slow compared to the spin dynamics can be partially cancelled using a Hahn-echo sequence \cite{Hahn1950}, which introduces a $\pi$ pulse halfway between the two $\pi/2$ pulses. This echo sequence increases the measured coherence time by more than a factor of three, to $T_{\rm 2E}$~=~37~$\pm$~\SI{4}{ns}, see Fig.~\ref{fig:coherence}(c).
\begin{figure}[t]
\centering
\includegraphics[scale=1.0]{Fig_coherence_alt.pdf} \caption{Coherence of the Andreev spin qubit at the same setpoint as Fig. \ref{fig:Rabi}. (a) Qubit lifetime, (b) Ramsey, (c) Hahn-echo and (d) CP experiments. Solid lines indicate fits to the data. For (b-d) oscillations are introduced into the decay by adding a phase proportional to the delay time for the final $\pi/2$-pulse. The data of (a-c) is obtained using a $\pi$-pulse ($\pi/2$-pulse) of $\rm{FWHM}=\SI{8}{ns}$ ($\SI{4}{ns}$), while for (d) this is \SI{4}{ns} (\SI{2}{ns}). For (a-c) we plot the normalized population inversion, where each sub-panel is individually normalized to the resulting fit. }
\label{fig:coherence}
\end{figure}
The coherence time of the qubit can be further enhanced by using dynamical-decoupling pulse sequences, which serve to filter out faster environmental fluctuations. We apply Carr–Purcell (CP) sequences \cite{Carr1954, Barthel2010, Bylander2011}, interleaving a varying number of equidistant $\pi$ pulses, $n_\pi$, in between two $\pi/2$ pulses. As $n_\pi$ increases, higher-frequency noise is cancelled out, extending the decoherence times. We reach $T_2$ times exceeding \SI{90}{ns} for $n_\pi=7$, at which stage we are most likely limited by decoherence during the $\pi$ pulses, see Fig.~\ref{fig:coherence}(d) \cite{Supplement}. We subsequently fit the $n_\pi$ dependence of $T_2$ with a power law $T_2(n_\pi) \propto n_{\pi}^\gamma$. Assuming a noise power spectral density of the form $1/f^{\beta}$, we expect the relation $\beta = \gamma/(1-\gamma)$ \cite{Cywinski2008, Bylander2011, Medford2012}. The observed scaling with $\gamma = 0.47 \pm 0.1$ therefore suggests that the decoherence is governed by noise with a $1/f$ spectral density in the frequency range 25 to 100~MHz.
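The power-law fit and the resulting noise exponent can be sketched as follows; the $T_2(n_\pi)$ values below are illustrative placeholders, not the measured data.

```python
import numpy as np

# illustrative CP data: number of pi pulses and coherence times in ns (made up)
n_pi = np.array([1, 2, 3, 5, 7])
T2 = np.array([37.0, 50.0, 60.0, 78.0, 92.0])

# fit T2(n) = A * n**gamma, i.e. a straight line on log-log axes
gamma, logA = np.polyfit(np.log(n_pi), np.log(T2), 1)

# exponent of the assumed 1/f**beta noise power spectral density
beta = gamma / (1 - gamma)
```

For these placeholder values the fit gives $\gamma \approx 0.47$ and hence $\beta \approx 0.9$, i.e. approximately $1/f$ noise, mirroring the scaling quoted above.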
There are several potential sources of dephasing that are compatible with a $1/f$ noise spectral density, such as flux noise through the SQUID loop and charge noise \cite{Schreier2008, Braumueller2020}. We exclude the former, as we do not observe an increase of coherence times at the flux sweet spots \cite{Supplement}. Similarly, no consistent trend is observed when varying the gate voltages, nor when increasing the magnetic field strength. The latter indicates that charge noise is likely not the dominant contributor to dephasing, given that EDSR becomes more effective at coupling charge noise to the qubit at elevated fields. Additionally, based on the evolution of the Rabi decay time with increasing pulse amplitudes \cite{Malinowski2017}, the size of the charge fluctuations required to cause the observed amount of dephasing is estimated to be \SI{0.25}{mV}, significantly larger than what is expected to originate from the gate lines \cite{Supplement}. However, charge fluctuations originating elsewhere, such as in the dielectric material on the device, could still contribute to the dephasing. Given that the sensitivity to fluctuations in environmental offset charge on the transmon island is suppressed by the large $E_{\rm J}/E_{\rm c}> 30 $ ratio, it is furthermore unlikely that the ASQ dephasing originates from offset-charge-dependent fluctuations of the transmon qubit frequency \cite{Koch2007}.
Another potential source of dephasing originates from the dynamics of the spinful nuclei in the nanowire, which may couple to the ASQ as a result of the hyperfine interaction. It has previously been shown that these dynamics can lead to longitudinal Overhauser field fluctuations with a $1/f$ spectral density \cite{Malinowski2017b}. Moreover, this effect is expected to be particularly strong in InAs due to the large nuclear spin of indium ($I = 9/2$) and should not be strongly affected by magnetic field in the $B_z$ range investigated here, which is not enough to polarize the nuclear spins. Corroborated by the fact that the extracted $T_{\rm 2R}$ and $T_{\rm 2E}$ times are strikingly similar to those found for the weak-link InAs ASQ \cite{Hays2021}, the InAs SOQ \cite{NadjPerge2010} and the InSb SOQ \cite{vandenBerg2013}, we conjecture that the nuclear environment provides a significant contribution to the decoherence of the ASQ.
\section{ASQ-TRANSMON COUPLING}
One of the main characteristics of the ASQ is the intrinsic coupling between the spin degree of freedom and the supercurrent across the quantum dot Josephson junction. So far, we have exploited this coupling only for readout of the qubit state using circuit QED techniques. We now demonstrate coherent coupling of the ASQ to the transmon qubit.
\begin{figure}[h!]
\centering
\includegraphics[scale=1.0]{Fig_qubit-qubit_V3.pdf} \caption{Coherent ASQ-transmon coupling. (a) Frequency diagram of the joint ASQ-transmon circuit of Fig.~\ref{fig:intro}(c) at large detuning between ASQ and transmon qubit energy levels. In addition to the two spin-conserving transmon transitions (solid red and blue) and the transmon-conserving spin qubit transition (solid yellow), two additional transitions involving both qubits can take place in the presence of coherent coupling between them (dashed and dotted black). (b) Two-tone spectroscopy of the joint two-qubit system at $B_z = 0$. In addition to the two spin-dependent branches of the transmon qubit frequency \cite{Bargerbos2022b}, two additional transitions appear. Overlaid are transition frequencies obtained from the model of Eq.~\eqref{eq:H_total}. (c) Frequency diagram of the joint ASQ-transmon circuit at the degeneracy point of $\ket{e,\downarrow}$ and $\ket{g,\uparrow}$. In the presence of coherent coupling, the two qubits hybridize into states with a frequency splitting of $2J$. Green arrows denote the transitions from ground to the two hybridized states. (d) Two-tone spectroscopy versus external flux at $B_z=\SI{28}{mT}$, where $f_{\rm s} \approx f_{\rm t}$. This results in avoided crossings between the two qubit frequencies. Overlaid are the transition frequencies obtained from the model of Eq.~\eqref{eq:H_total}. Their colors denote the expectation value of the spin degree of freedom of the excited state and go from $\ket{\downarrow}$ (blue) for the transmon transition to $\ket{\uparrow}$ (yellow) for the spin-flip transition. $f_{\rm t, drive}$ denotes the frequency of the second tone, sent through the readout resonator.}
\label{fig:coupling}
\end{figure}
A first signature of a coherent coupling is the presence of transitions that involve both qubits, in addition to the single-qubit transitions, see Fig.~\ref{fig:coupling}(a). At zero applied magnetic field, we spectroscopically detect two such transitions at $f_{\rm t}+f_{\rm s}$ and $f_{\rm t}-f_{\rm s}$, where $f_{\rm t}$ is the transmon frequency, see Fig.~\ref{fig:coupling}(b). We classify them based on a fit with the joint Hamiltonian of the total ASQ-transmon circuit of Fig.~\ref{fig:intro}(c), given by
\begin{equation}\label{eq:H_total}
H_{\rm tot} = -4 E_{\rm c} \partial_\phi^2 - E_{\rm J} \cos{ ( \phi-\phi_{\rm ext} )} + H_{\rm s}(\phi).
\end{equation}
We identify the additional observed resonances as the double-excitation $\ket{g, \downarrow} \leftrightarrow \ket{e, \uparrow}$ and the $\ket{g, \uparrow} \leftrightarrow \ket{e, \downarrow}$ SWAP transitions, where $\ket{g}$ and $\ket{e}$ denote the ground and first excited transmon states, respectively. These transitions could be used to generate entanglement and implement two-qubit gates between the two different qubit platforms, provided the transitions can be driven at a rate faster than the decoherence rates of either qubit.
Additionally, one of the hallmarks of strong coherent coupling is the appearance of an avoided level crossing when both qubit frequencies are made equal, $f_{\rm t} \approx f_{\rm s}$. In this case the $\ket{e,\downarrow}$ and $\ket{g,\uparrow}$ states are expected to hybridize into superposition states with a frequency splitting of $2J$, see Fig.~\ref{fig:coupling}(c). At $B_z$~=~\SI{28}{mT} this splitting can be readily observed in the experiment. By varying the external flux $\phi_{\rm ext}$~such that the ASQ frequency $f_{\rm s}$ crosses the transmon frequency $f_{\rm t}$, we find avoided crossings with a minimum frequency splitting $2J/(2\pi)= 2\times 52$~MHz, as shown in Fig.~\ref{fig:coupling}(d). As $J$ is four times larger than the decoherence rate of the ASQ, $1/T_{\rm 2R} \approx 14\times 2\pi\,$MHz, and one order of magnitude larger than the decoherence rate of the transmon, $\approx1.2\times 2\pi\,$MHz, the coupling between the two qubits falls into the strong coupling regime. This result establishes the first realization of a direct strong coupling between a spin qubit and a superconducting qubit, in contrast to the results of Ref.~\cite{Landig2019}, where a high-impedance bus resonator was required to mediate the coupling between spin and transmon qubit through virtual photons.
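The quoted minimum splitting follows from a minimal two-level avoided-crossing model for the hybridizing $\ket{e,\downarrow}$ and $\ket{g,\uparrow}$ states; a sketch with the quoted $J/(2\pi)=52$~MHz, in units of GHz with $h=1$:

```python
import numpy as np

J = 0.052          # coupling in GHz, matching J/(2*pi) = 52 MHz quoted above

def splitting(detuning):
    # two-level Hamiltonian of |e,down> and |g,up> at a given detuning
    H = np.array([[+detuning / 2, J],
                  [J, -detuning / 2]])
    e = np.linalg.eigvalsh(H)
    return e[1] - e[0]     # gap sqrt(detuning**2 + 4*J**2)

# the splitting is minimal, and equal to 2J, exactly on resonance
assert np.isclose(splitting(0.0), 2 * J)
assert splitting(0.1) > splitting(0.0)
```

Sweeping the detuning (here played by the flux-controlled difference $f_{\rm s}-f_{\rm t}$) traces out the hyperbolic branches visible in Fig.~\ref{fig:coupling}(d).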
Analytical estimates predict that the coupling $J \propto E_{\rm SO} \phi_{\rm zpf} \sin(\theta)$, where $\phi_{\rm zpf}$ is the magnitude of zero-point fluctuation of the transmon phase, and $\theta$ is the angle between the Zeeman field and the spin-orbit direction $\vec{n}$ \cite{Supplement}. This expression suggests that by choosing a resonance with a larger $E_{\rm SO}$ \cite{Bargerbos2022b} and by aligning the magnetic field perpendicular to the spin-orbit direction, coupling rates of hundreds of MHz can be achieved, which would enable rapid two-qubit gates between the transmon and the ASQ and potentially allow for the study of light-matter interactions in the ultrastrong coupling regime \cite{FornDiaz2019, Scarlino2022}.
\section{TOWARDS NEW PLATFORMS AND MULTIPLE ASQS}
We have implemented an Andreev spin qubit, where the spin degree of freedom of a quasi-particle in a quantum dot with superconducting leads encodes the qubit state. The qubit subspace is stabilized by the charging energy of the quantum dot and direct microwave driving of the transitions is possible without the requirement of auxiliary levels. The qubit coherence was found to be comparable to previous results for qubits implemented in InAs or InSb nanowires \cite{Hays2021, NadjPerge2010, vandenBerg2013}. Our results therefore suggest that the nuclear environment contributes strongly to the ASQ decoherence, although the contribution of charge noise can not be fully neglected. This limitation motivates future investigation of alternative material platforms for ASQs, such as superconductor-proximitized nuclear-spin-free semiconductors \cite{Leon2021}, e.g. isotopically purified germanium \cite{Hendrickx2018, Scappucci2021, Tosato2022}.
We furthermore observed direct strong coherent coupling between the ASQ and a transmon qubit. Such strong coupling showcases the advantage of the intrinsic spin-supercurrent coupling, allowing the ASQ to be readily integrated into a circuit QED architecture. Our results open avenues towards multi-qubit devices: we propose to leverage the fact that transmon qubits can be readily coupled together using capacitive coupling, useful for mediating interactions between multiple ASQs. Furthermore, our results are a crucial step towards the coupling of distant Andreev spin qubits through bus resonators or a shared inductance \cite{Padurariu2010}, as well as short-distance coupling through wavefunction overlap \cite{Spethmann2022}.
\begin{acknowledgments}
We acknowledge fruitful discussion with Menno Veldhorst, Maximillian Russ, Filip Malinowski, Valla Fatemi, and Yuli Nazarov. We further thank Peter Krogstrup for guidance in the material growth. This research was inspired by prior work by co-author J.J.W. where the spin-flip transition in an InAs/Al nanowire weak-link was directly observed in spectroscopy under the application of a magnetic field~\cite{Wesdorp2022}. This research is co-funded by the allowance for Top consortia for Knowledge and Innovation (TKI’s) from the Dutch Ministry of Economic Affairs, research project {\it Scalable circuits of Majorana qubits with topological protection} (i39, SCMQ) with project number 14SCMQ02, from the Dutch Research Council (NWO), and the Microsoft Quantum initiative. R. \v{Z}. acknowledges the support of the Slovenian Research agency (ARRS) under P1-0416 and J1-3008. R. A. acknowledges support from the Spanish Ministry of Science and Innovation through Grant PGC2018-097018-B-I00 and from the CSIC Research Platform on Quantum Technologies PTI-001. B.v.H. and C.K.A. acknowledge support from the Dutch Research Council (NWO).
\end{acknowledgments}
\section*{Data avalability}
The data and analysis code that support the findings of this study will be made available in 4TU.ResearchData before final publication.\\
\section*{Author contributions}
A.B., M.P.V., and A.K. conceived the experiment.
Y.L. developed and provided the nanowire materials.
A.B., M.P.V., L.S., L.G. and J.J.W. prepared the experimental setup and data acquisition tools.
L.S. deposited the nanowires.
A.B. and M.P.V. designed and fabricated the device, performed the measurements and analysed the data, with continuous feedback from L.S., L.G., J.J.W., B.v.H., A.K. and C.K.A.
R.A., B.v.H. and R.Z. provided theory support during and after the measurements.
A.B., M.P.V. and B.v.H. wrote the code to compute the circuit energy levels and extract experimental parameters.
L.P.K., R.A., B.v.H., A.K. and C.K.A. supervised the work.
A.B., M.P.V., and C.K.A. wrote the manuscript with feedback from all authors.
\section{Modeling of joint ASQ-transmon system} \label{Ss:ASQ-transmon}
\subsection{Numerical diagonalization}
\label{Ss:diagonalization}
In order to obtain the transition frequencies of the joint ASQ-transmon system, we combine the Hamiltonian of the ASQ [Eq.~(1) in the main text] with the Hamiltonian of the transmon, as indicated in Eq.~(2) of the main text. This combined Hamiltonian is numerically diagonalized in the phase basis following the procedure in Refs.~\cite{Bargerbos2020, Kringhoj2020b}. This results in the transmon and ASQ energy levels $E_n$, as well as the associated transition frequencies $f_{nm} = \left(E_m - E_n\right)/h$. These frequencies are used in Figs.~1 and 4 to fit the spectroscopy measurements.
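A minimal sketch of such a phase-basis diagonalization, with illustrative parameters rather than the device values and with the spin-orbit direction taken along $\sigma_x$, might look like:

```python
import numpy as np

# phase grid on [0, 2*pi) with periodic boundary conditions
N = 101
phi = np.linspace(0, 2 * np.pi, N, endpoint=False)
d = phi[1] - phi[0]

# periodic finite-difference second derivative
D2 = -2 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
D2[0, -1] = D2[-1, 0] = 1.0
D2 /= d**2

# illustrative circuit parameters in GHz (h = 1), not the device values
Ec, Ej = 0.3, 12.0
E0, Eso = 0.2, 0.3
Ezpar, Ezperp = 2.9, 0.5       # Zeeman components along/perpendicular to n
phi_ext = 3 * np.pi / 2

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
I2 = np.eye(2)

H = (np.kron(-4 * Ec * D2 - Ej * np.diag(np.cos(phi - phi_ext))
             + E0 * np.diag(np.cos(phi)), I2)
     - np.kron(Eso * np.diag(np.sin(phi)), sx)           # spin-orbit term
     + 0.5 * np.kron(np.eye(N), Ezpar * sx + Ezperp * sz))  # Zeeman term

E = np.linalg.eigvalsh(H)
f01 = E[1] - E[0]     # lowest transition of the joint ASQ-transmon system
```

Differences of the resulting eigenvalues give the transmon and spin-flip transition frequencies as functions of $\phi_{\rm ext}$; a charge-basis diagonalization is an equivalent alternative.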
\subsection{Estimate of qubit-qubit coupling strength}
As demonstrated in Fig.~4, we observe avoided crossings between the transmon and the ASQ transitions, which is indicative of strong coherent coupling. In this section we derive how the coupling strength depends on the model parameters.
We start by combining Eqs.~(1) and (2) of the main text into the effective Hamiltonian
\begin{equation}
H_{\rm tot} = H_{\rm tmon} + H_{\rm Z} + H_{\rm coupling},
\end{equation}
with the individual terms given as
\begin{align}
H_{\rm tmon} &= -4 E_{\rm c} \partial_\phi^2 - E_{\rm J} \cos{ ( \phi )} - E_0 \cos{\left(\phi-\phi_{\rm ext}\right)}, \\
H_{\rm Z} &= \frac{1}{2} \begin{pmatrix}
E_{\rm Z}^{\rm \perp} & E_{\rm Z}^{\rm \parallel} \\
E_{\rm Z}^{\rm \parallel} & - E_{\rm Z}^{\rm \perp}
\end{pmatrix} = \frac{|\vec{E}_{\rm Z}|}{2} \begin{pmatrix}
\sin{(\theta)} & \cos{(\theta)} \\
\cos{(\theta)} & - \sin{(\theta)}
\end{pmatrix} \\
H_{\rm coupling} &= - E_{\rm SO} \sin(\phi-\phi_{\rm ext}) \sigma_x .
\end{align}
Here, $\sigma_x$ is the $x$ Pauli matrix and $\theta$ is the angle between the Zeeman field $\vec{E}_{\rm Z}$ and the spin-orbit direction, such that $E_{\rm Z}^\parallel = |\vec{E}_{\rm Z}| \cos{\theta}$ and $E_{\rm Z}^\perp = |\vec{E}_{\rm Z}| \sin{\theta}$. Next, we write the coupling term $H_{\rm coupling}$ in the eigenbasis of $H_{\rm Z}$, which is given by the states $\ket{v_1} = \left( \cos{(\theta/2)}, \sin{(\theta/2)} \right)$ and $\ket{v_2} = \left( -\sin{(\theta/2)}, \cos{(\theta/2)} \right)$. We identify that
\begin{align}\bra{v_1} \sigma_x \ket{v_2} = \bra{v_2} \sigma_x \ket{v_1} = \cos{\theta}
\end{align}
and
\begin{align}\bra{v_1} \sigma_x \ket{v_1} = - \bra{v_2} \sigma_x \ket{v_2} = \sin{\theta},
\end{align}
such that $\sigma_x$ becomes $\cos{(\theta)} \sigma_{\overline{x}} + \sin{(\theta)} \sigma_{\overline{z}}$ in the $\{ \ket{v_1}, \ket{v_2} \}$ spin basis.
We rewrite $H_{\rm coupling}$ and expand to first order in $\phi$, valid in the transmon limit $E_{\rm J}\gg E_{\rm c}$, where $\langle \phi \rangle \ll 1$, which results in
\begin{align}
H_{\rm coupling} &\,= E_{\rm SO}\left[\cos{\left(\phi\right)}\sin{\left(\phi_{\rm ext}\right)} - \cos{\left(\phi_{\rm ext}\right)} \sin{\left(\phi\right)}\right] \sigma_x \\
&\,\approx E_{\rm SO} \left[ \sin{(\phi_{\rm ext})} - \phi \cos{(\phi_{\rm ext})} \right] \sigma_x.
\end{align}
Therefore, in the spin eigenbasis, we obtain
\begin{equation}
H_{\rm coupling} \approx E_{\rm SO} \left[ \sin{(\phi_{\rm ext})} - \phi \cos{(\phi_{\rm ext})} \right] \left[\cos{(\theta)} \sigma_{\overline{x}} + \sin{(\theta)} \sigma_{\overline{z}}\right]. \label{eq:Hcoupling}
\end{equation}
This term of the Hamiltonian couples the ASQ to the transmon via the phase operator $\phi$ of the transmon and is, thus, reminiscent of a dipole coupling. In the transmon regime, we can express the operator $\phi$ in terms of the zero point fluctuations of the phase, $\phi_{\rm zpf}$, and the bosonic creation and annihilation transmon operators, $c^\dagger$ and $c$ respectively: $\phi = \phi_{\rm zpf} \left(c^\dagger + c \right)$. Inserting this operator into Eq.~\eqref{eq:Hcoupling}, we obtain
\begin{align}
H_{\rm coupling} &\,\approx \left[ E_{\rm SO} \sin{(\phi_{\rm ext})} -E_{\rm SO} \phi_{\rm zpf} \left(c^\dagger + c \right) \cos{(\phi_{\rm ext})} \right] \left[\cos{(\theta)} \sigma_{\overline{x}} + \sin{(\theta)} \sigma_{\overline{z}} \right] \\
&\,= E_{\rm SO} \sin{(\phi_{\rm ext})} \left[\cos{(\theta)} \sigma_{\overline{x}} + \sin{(\theta)} \sigma_{\overline{z}} \right]+ \hbar J_{\overline{x}}\left(c^\dagger + c \right) \sigma_{\overline{x}} + \hbar J_{\overline{z}}\left(c^\dagger + c \right) \sigma_{\overline{z}}.
\end{align}
In this expression, we identify a transverse coupling of strength $\hbar J_{\overline{x}} = E_{\rm SO} \cos{(\phi_{\rm ext})} \phi_{\rm zpf} \cos{(\theta)}$ and a longitudinal coupling of strength $\hbar J_{\overline{z}} = E_{\rm SO} \cos{(\phi_{\rm ext})} \phi_{\rm zpf} \sin{(\theta)}$.
From fitting the spectroscopy data we find a charging energy of $E_{\rm c}/h =$~\SI{284}{MHz} and a Josephson energy of $E_{\rm J}/h =$~\SI{13.1}{GHz}, which results in $\phi_{\rm zpf} = [2E_{\rm c}/E_{\rm J, eff}(\phi_{\rm ext})]^{1/4} \le 0.46$, where
\begin{equation}
E_{\rm J, eff}(\phi_{\rm ext})=(E_{\rm J}+E_0)\sqrt{\cos^2(\phi_{\rm ext}) + \left(\frac{E_{\rm J}-E_0}{E_{\rm J}+E_0}\right)^2\sin^2(\phi_{\rm ext})}.
\end{equation}
For $E_{\rm SO}/h=$~\SI{309}{MHz}, this results in a transverse coupling of up to $J_{\overline{x}}/(2\pi) =$~\SI{145}{MHz} when $\phi_{\rm ext}$~=~0 and the magnetic field is applied perpendicular to the spin-orbit direction. In the fit of Fig.~4 we instead find an avoided crossing of $2J_{\overline{x}}/(2\pi)=2\times 52$~MHz, corresponding to a Zeeman field at an angle of $\theta=$~\SI{35.6}{\degree} with respect to the spin-orbit direction.
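The quoted coupling scale can be cross-checked with a two-line estimate. The sketch below assumes that the quantum dot Josephson energy $E_0$ (not quoted explicitly here) can be neglected, so that $E_{\rm J, eff}(0)\approx E_{\rm J}$, and sets the trigonometric factors to one:

```python
# Consistency check of phi_zpf and the maximal transverse coupling.
# Assumptions: E_0 neglected (E_J,eff ~ E_J), cos(phi_ext) = cos(theta) = 1.
E_c, E_J, E_SO = 0.284, 13.1, 0.309    # all E/h, in GHz

phi_zpf = (2 * E_c / E_J) ** 0.25      # zero-point phase fluctuations, ~0.46
J_x_max = E_SO * phi_zpf               # J_x/(2*pi) in GHz, ~0.14 (quoted: 0.145)
```

The small difference from the quoted \SI{145}{MHz} is consistent with the neglected $E_0$ contribution to $E_{\rm J, eff}$.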
\newpage
\section{Device and experimental setup}
\label{Sss:device-setup}
\subsection{Device overview}
Fig.~\ref{fig:device} shows an overview of the device including the different elements forming the superconducting circuit used for readout and control of the qubits. The device under investigation in this work is the same as the one used in Ref.~\cite{Bargerbos2022b}, where further details about its physical implementation and fabrication can be found.
\begin{figure}[h!]
\center
\includegraphics[scale=1.0]{device_coherence_compressed.pdf}
\caption{{ \bf Device overview.} (a) Diagram of the microwave circuit. A coplanar waveguide transmission line with an input capacitor (green center conductor) is capacitively coupled to a grounded LC resonator. The resonator consists of an island (yellow) capacitively and inductively (pink) shunted to ground (blue). The resonator is in turn capacitively coupled to a transmon island (red), which is shunted to ground capacitively as well as via two parallel Josephson junctions.
(b) Chip containing four nearly identical devices coupled to the same transmission line, which has a capacitor at its input port, enlarged in the inset.
(c) False-colored optical microscope image of the device showing the qubit island, the resonator island, the resonator inductor, the transmission line, the electrostatic gates and ground.
(d) False-colored scanning electron micrograph (SEM) of a device comparable to that measured, showing the InAs/Al nanowire into which the junctions are defined. The $B_y$ component of the magnetic field is used to tune $\Phi_{\rm ext}$ \cite{Wesdorp2022}. $B_z$ is the magnetic field component parallel to the nanowire.
(e) False-colored SEM of a nearly identical device, showing the junction in which the quantum dot is gate defined. The three bottom gates have a width and spacing of \SI{40}{nm}, although this is obfuscated by the dielectric layer placed on top.
}
\label{fig:device}
\end{figure}
\subsection{Cryogenic and room temperature measurement setup}
The device was measured in a Triton dilution refrigerator with a base temperature of $\approx$~\SI{20}{mK}. Details of the wiring at room and cryogenic temperatures are shown in Fig.~\ref{fig:cryogenic_setup}. The setup contains an input radio-frequency (RF) line, an output RF line, an extra RF line for the spin-flip drive tone and multiple direct current (DC) lines, used to tune the electrostatic gate voltages. The DC gate lines are filtered at base temperature with multiple low-pass filters connected in series.
The input and drive RF lines contain attenuators and low-pass filters at different temperature stages, as indicated. In turn, the output RF line contains amplifiers at different temperature stages: a travelling wave parametric amplifier (TWPA) at the mixing chamber plate ($\approx$~\SI{20}{mK}), a high-electron-mobility transistor (HEMT) amplifier at the \SI{4}{K} stage, and an additional amplifier at room temperature.
A three-axis vector magnet (x-axis not shown) is thermally anchored to the \SI{4}{K} temperature stage, with the device under study mounted at its center. The three magnet coils are controlled with Yokogawa GS610 current sources.
At room temperature, a vector network analyzer (VNA) is connected to the input and output RF lines for spectroscopy at frequency $f_{\rm r}$. On the input line, this signal is then combined with the IQ-modulated transmon drive tone at frequency $f_{\rm t, drive}$. A separate IQ-modulated tone at $f_{\rm r}$, only used for time-domain measurements, is also combined onto this line. The IQ-modulated spin-flip drive tone at frequency $f_{\rm drive}$ is sent through the drive line. For time-domain measurements the output signal is additionally split off into a separate branch and down-converted to be measured with a Quantum Machines OPX.
\begin{figure}[h!]
\center
\includegraphics[scale=0.58]{setup-OPX.pdf}
\caption{Measurement setup at cryogenic and room temperatures.}
\label{fig:cryogenic_setup}
\end{figure}
\subsection{Basic characterization and tune up}\label{Sss:tuneup}
The basic characterization and tune-up of the device proceeds as detailed in Ref.~\cite{Bargerbos2022}, while the specific tune-up of the quantum dot resonance investigated in this device is detailed in the supplement of Ref.~\cite{Bargerbos2022b}, where it is labeled as resonance A. A brief summary is as follows: We first characterize the gate dependence of the reference junction with the dot fully closed, and fix $V_{\rm J}$~such that $E_{\rm J} \gg \sqrt{E_0^2+E_{\rm SO}^2}$, to ensure that the phase drop set by $\phi_{\rm ext}$~occurs mostly at the quantum dot junction. Furthermore, we choose $E_{\rm J}$~such that the transmon frequency $f_{\rm t}$ is close to the readout resonator frequency $\approx 6.11$~GHz to obtain a large dispersive shift for two-tone spectroscopy and qubit readout. For the results shown in this work, we used $V_{\rm J}$~=~\SI{3860}{mV}. We then investigate the gate dependence of the quantum dot junction with the reference junction fully closed, determining the pinchoff voltages of the three quantum dot gates. Next, we open the reference junction to its gate set-point and explore the quantum dot junction gate space at both $\phi_{\rm ext}$~$=0$ and $\phi_{\rm ext}$~$=\pi$ to identify regions that show a $\pi$-shift in phase. For a given $\pi$-shifted region, we measure the explicit $\phi_{\rm ext}$~dependence of the transmon to identify a resonance with a spin splitting comparable to the spin-independent Josephson energy. Finally, we choose a gate set-point in the selected resonance. For the results shown here, the setpoint chosen for the three quantum dot gates was $V_{\rm L}$~=~\SI{363}{mV}, $V_{\rm C}$~=~\SI{1000}{mV} and $V_{\rm R}$~=~\SI{81}{mV}, which corresponds to $V_{\rm T, A}$~=~\SI{-423.6}{mV}, $V_{\rm P, A}$~=~\SI{909.5}{mV} in the rotated gate frame shown in Fig.~S7 of Ref.~\cite{Bargerbos2022b}.
\newpage
\section{Extended dataset}
\subsection{Extended two-tone spectroscopy data}
\label{Ss:extended2tone}
Fig.~\ref{fig:20GHz} shows extended two-tone spectroscopy measurements at the setpoint of main text Fig.~4(b), performed over a range of \SI{20}{GHz}. It reveals several additional transition frequencies: panels (a) and (b) contain the higher-lying transmon transitions $f_{03}$ and $f_{02}$, respectively, while panel (c) shows five different transitions. These are the four transitions also shown in Fig.~4(b) and, above that, the resonator transition frequency. Panel (d) exhibits two low-frequency transitions: the bright top transition is the direct spin-flip transition with the transmon in its ground state, while the dark lower transition results from the direct spin-flip transition with the transmon in its excited state. The latter transition is visible as a result of a residual excited state population of the transmon. No other auxiliary transitions are found between 0 and \SI{20}{GHz}, nor does any transition develop for magnetic fields up to \SI{65}{mT}. We further note that the measurement of panel (d) requires a large drive power (31~dBm more than for the measurement shown in Fig.~1 of the main text), and that the visibility is reduced compared to panel (c), which is expected since the matrix element for EDSR driving is suppressed in the absence of an external magnetic field.
\begin{figure}[h!]
\center
\includegraphics[scale=1]{Fig_S_20GHz.pdf}
\caption{Normalized two-tone spectroscopy measurement of the transition spectrum versus external flux. Input power at the top of the spin-flip drive line is \SI{-36}{dBm} for (a-b), \SI{-46}{dBm} for (c) and \SI{-6}{dBm} for (d).}
\label{fig:20GHz}
\end{figure}
\subsection{Single shot assignment fidelity}
The time-domain measurements in the main text are obtained by averaging over many shots. We now estimate the assignment fidelity of ASQ readout at the setpoint used for the coherence measurements in the main text ($B_z=$~\SI{65}{mT} and $\phi_{\rm ext}$~=~$3\pi/2$). To do so, we measure the IQ quadrature response of the readout resonator for the qubit prepared in the ground state [Fig.~\ref{fig:fidelity}(a)] and for the qubit prepared in the excited state [Fig.~\ref{fig:fidelity}(b)], after applying an 8-ns $\pi$-pulse. In both cases we read out for \SI{500}{ns}, more than 40 times shorter than $T_1$, and wait for $5 T_1$ between different measurements to let the qubit decay back to its ground state. We find an assignment fidelity of $F = 1 - (P(\downarrow|\uparrow) + P(\uparrow|\downarrow))/2=$~80\% [Fig.~\ref{fig:fidelity}(d)], where $P(a|b)$ denotes the probability of measuring the qubit in state $a$ after preparing it in state $b$. The fidelity is predominantly set by assignment errors for the excited state, limited by decoherence during the excitation, as the $\pi$-pulse duration is comparable to $T_2$. Longer readout times therefore do not significantly improve the assignment fidelity. Shorter $\pi$-pulses, however, would likely improve the fidelity, although this was not attempted on the current device.
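The thresholding step behind this fidelity estimate can be sketched as follows. This is a toy model, not the measured data: the cloud separation, widths and the 20\% preparation-error fraction are illustrative only (the measured values give $F=80\%$).

```python
import numpy as np

# Toy model of thresholded single-shot readout: Gaussian I-quadrature clouds
# for the two prepared states; a fraction of "excited" shots has decayed
# during the pi-pulse and therefore appears in the ground-state cloud.
rng = np.random.default_rng(0)
n = 100_000
I_ground = rng.normal(0.0, 1.0, n)                  # qubit prepared in |down>
decayed = rng.random(n) < 0.20                      # decay during the pi-pulse
I_excited = np.where(decayed, rng.normal(0.0, 1.0, n), rng.normal(4.0, 1.0, n))

threshold = 2.0                                     # midpoint between the clouds
P_up_given_down = np.mean(I_ground > threshold)     # P(up|down)
P_down_given_up = np.mean(I_excited <= threshold)   # P(down|up)
F = 1 - (P_down_given_up + P_up_given_down) / 2
```

As in the experiment, $F$ is dominated by the excited-state error, which a longer readout cannot remove.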
\begin{figure}[h!]
\center
\includegraphics[scale=1]{Fig_S_fidelity.pdf}
\caption{Single shot assignment fidelity. (a) Histogram in the complex plane of $3 \times 10^5$ sequential shots separated by \SI{200}{\micro s} and integrated for \SI{500}{ns} in the absence of an excitation pulse. (b) Same as (a) in the presence of a $\pi$-pulse with a FWHM of \SI{8}{ns} preceding each shot.
(c) Histograms of the I-quadrature response of the preceding panels. Green and orange colors correspond to panels (a) and (b), respectively. (d) Extracted single-shot fidelities based on the threshold indicated in (c) with a gray dashed line. }
\label{fig:fidelity}
\end{figure}
\subsection{Parity lifetime}
\label{Ss:parity}
One of the advantages of using a quantum dot junction over a semiconducting weak link is that the charging energy of the quantum dot allows us to select an operational setpoint for which the doublet states are the lowest energy states of the system \cite{Padurariu2010, Bargerbos2022}. Therefore, the charging energy is expected to protect against qubit leakage via quasiparticle escape or recombination, which would take the junction outside of the computational space of the qubit. To confirm this protection, we measure the quasiparticle poisoning times of the junction around the gate setpoint used in the main text.
As shown in Fig.~\ref{fig:QPP}(a), two resonances are visible as the central quantum dot gate $V_{\rm C}$~is varied around its setpoint $V_{\rm C}$~=~\SI{1000}{mV}, at $\phi_{\rm ext}$~=~0. Following the methods of Ref.~\cite{Bargerbos2022}, we identify the outer two $V_{\rm C}$~regions as having a singlet ground state (spin-zero) and the central region as having a doublet ground state (spin-$1/2$). For each gate point, we subsequently monitor the transmon circuit in real time and determine the switching time of the quantum dot junction parity. $T_{\rm s}$ and $T_{\rm d}$ denote the characteristic times for which the quantum dot maintains a singlet or doublet occupation, respectively. The extracted times are shown in Fig.~\ref{fig:QPP}(b). Note that this measurement is performed at $\phi_{\rm ext}$~=~0, where the $\ket{\uparrow}$ and $\ket{\downarrow}$ states result in equal transmon frequencies, thus becoming indistinguishable using our readout scheme \cite{Bargerbos2022b}. The spin-flip times $T_{\rm spin}$ are therefore not resolved here, as opposed to the experiments of Ref.~\cite{Hays2020}.
\begin{figure}[h!]
\center
\includegraphics[scale=1]{Fig_QPP_v1.pdf}
\caption{Gate dependence of parity lifetimes.
(a) $V_{\rm C}$~dependence of $|S_{21}|$ at $\phi_{\rm ext}$~=~0.
(b) $V_{\rm C}$~dependence of the extracted lifetimes. Markers indicate the mean while error bars indicate the maximum and minimum values of 10 consecutive time traces.
(c) In yellow, 1D histogram of a continuously measured \SI{17}{s}-long time trace integrated in time bins of $t_{\rm int}$~=~\SI{4.3}{\micro s}, at $V_{\rm C}$~=~\SI{996}{mV}. In black, best fit to a double Gaussian shape.
(d) Same as (c) but at $V_{\rm C}$~=~\SI{1000}{mV}.
}
\label{fig:QPP}
\end{figure}
We observe that, for the outer two regions, where the ground state is the spin-0 state, the doublet switching time $T_{\rm d}$ ranges from a few $\upmu$s to hundreds of $\upmu$s, but is always much shorter than the singlet switching time $T_{\rm s}$. Close to the singlet-doublet ground state transition, both times become similar and of the order of \SI{1}{ms}, which can be seen in Fig.~\ref{fig:QPP}(c) for $V_{\rm C}$~=~\SI{996}{mV}, where the histogram of a continuous time trace, integrated in time bins of $t_{\rm int}$~=~\SI{4.3}{\micro \second}, shows two Gaussians with equal amplitudes. In the central region, where the doublet states are the lowest energy states, the situation is reversed and, away from the singlet-doublet transition, $T_{\rm d}$ is consistently above \SI{1}{ms}. The imbalance between average singlet and doublet occupation is shown in Fig.~\ref{fig:QPP}(d) for the setpoint used in the main text, $V_{\rm C}$~=~\SI{1000}{mV}. In this case we measure $T_{\rm s}=$~\SI{59}{\micro s} and $T_{\rm d}=$~\SI{2.8}{ms}. The latter is much larger than that of weak-link junctions, typically found to be in the range 10-500~$\upmu$s~\cite{Janvier2015, Hays2018, Hays2020, Hays2021, Wesdorp2021}, and thus demonstrates the advantage of using a quantum dot junction. In particular, for the weak-link ASQ \cite{Hays2021} the authors measured a parity lifetime $T_{\rm parity} =$~\SI{22}{\micro s} and a spin-flip time $T_{\rm spin} =$~\SI{17}{\micro s}, such that the parity lifetime was a relevant limitation to the qubit $T_1$. In contrast, we find that $T_{\rm d}\gg T_1$ such that the lifetime of the ASQ studied in this work is not limited by parity switches.
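The extraction of switching times from a monitored time trace can be illustrated with a toy telegraph signal using the quoted setpoint scales $T_{\rm s}=$~\SI{59}{\micro s} and $T_{\rm d}=$~\SI{2.8}{ms}, sampled in $t_{\rm int}=$~\SI{4.3}{\micro s} bins. This mirrors the logic of the real-time parity monitoring; the actual analysis may differ in detail.

```python
import numpy as np

# Simulate a two-state telegraph trace, then re-extract the mean dwell times
# by run-length analysis of the thresholded (here: already discrete) signal.
rng = np.random.default_rng(1)
t_int, T_s, T_d = 4.3e-6, 59e-6, 2.8e-3
n_bins = 1_000_000
p_sd, p_ds = t_int / T_s, t_int / T_d   # per-bin switching probabilities
u = rng.random(n_bins)
state = np.zeros(n_bins, dtype=int)     # 0 = singlet, 1 = doublet
for i in range(1, n_bins):
    p = p_sd if state[i - 1] == 0 else p_ds
    state[i] = state[i - 1] ^ (u[i] < p)

edges = np.flatnonzero(np.diff(state)) + 1      # bin indices where a new run starts
runs = np.diff(np.r_[0, edges, n_bins])         # run lengths in bins
run_states = state[np.r_[0, edges]]             # state of each run
T_s_est = t_int * runs[run_states == 0].mean()  # recovers ~59 us
T_d_est = t_int * runs[run_states == 1].mean()  # recovers ~2.8 ms
```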
\subsection{Excited state population}
Similar to what is found in previous works investigating the doublet states of SNS junctions \cite{Hays2020, Hays2021, Wesdorp2022}, we observe that both $\ket{\uparrow}$ and $\ket{\downarrow}$ of the quantum dot junction are occupied at $B_z=$~\SI{0}{mT}, even in the absence of a drive. As such, we observe simultaneously both of the transmon branches corresponding to each spin state [see main text Fig.~4(b)]. We hypothesize that this residual excited state population is the result of excitations of either thermal or non-equilibrium origin, as the maximum zero-field ASQ transition frequency $f_{\rm s}\approx$~\SI{600}{MHz} corresponds to an effective temperature scale of $T_{\rm eff}\approx$~\SI{30}{\milli K}, below the typical electron temperatures found in transport and transmon \cite{Jin2015} experiments, \SIrange{35}{100}{\milli K}.
To investigate the residual population further, we monitor the transmon circuit in real time, now at $\phi_{\rm ext}=3\pi/2$ so that we are maximally sensitive to changes in the spin state. At $B_z=$~\SI{0}{mT}, the IQ histogram of $2.5\times10^5$ sequential measurements confirms the presence of two populated states, as shown in Fig.~\ref{fig:thermal}(a). From a double Gaussian fit, we extract a ratio of state occupations of $P({\uparrow})/P({\downarrow})=$~0.7. Upon increasing the qubit frequency $f_{\rm s}$ with the magnetic field $B_z$, we find that the excited state population is strongly reduced, in line with expectation [Fig.~\ref{fig:thermal}(d)]. However, the ASQ frequency first crosses the transmon and then the resonator frequencies between 20 and \SI{30}{mT}, preventing the measurement of the spin-state occupancy over a range of frequencies. Measuring again at $B_z=$~\SI{65}{mT}, where $f_{\rm s}=$~\SI{11.53}{GHz}, we find at most 4\% remaining excited state population, see Fig.~\ref{fig:thermal}(b). Here, the remaining excited state population is expected to be predominantly due to assignment errors, similar to those found in Fig.~\ref{fig:fidelity}(a).
To extract the effective temperature of the ASQ, we subsequently fit the frequency dependence of the ratio of populations to a Boltzmann distribution, $P({\uparrow})/P({\downarrow})= \exp \left(-hf_{\rm s}/(k_{\rm B}T_{\rm eff}) \right)$, where $h$ and $k_{\rm B}$ are the Planck and Boltzmann constants, respectively. This leads to reasonable agreement with the data, resulting in an effective temperature of $T_{\rm eff}=100$~$\pm$~\SI{8}{mK} [see Fig.~\ref{fig:thermal}(d)].
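A single-point estimate already gives the right scale; the sketch below uses only the zero-field ratio quoted above, whereas the \SI{100}{mK} value in the text comes from a fit over the full frequency dependence.

```python
from math import log

h = 6.62607015e-34    # Planck constant, J s
k_B = 1.380649e-23    # Boltzmann constant, J/K

# Invert the Boltzmann ratio P(up)/P(down) = exp(-h*f_s/(k_B*T_eff))
# for the single zero-field data point quoted in this section.
f_s = 0.6e9           # Hz, zero-field ASQ frequency
ratio = 0.7           # P(up)/P(down) at B_z = 0
T_eff = -h * f_s / (k_B * log(ratio))   # in K, ~0.08 K
```

The resulting $\approx$~\SI{80}{mK} is consistent with the fitted $T_{\rm eff}=100\pm$~\SI{8}{mK} at the single-point level.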
\begin{figure}[h!]
\center
\includegraphics[scale=1]{Fig_S_thermal.pdf}
\caption{
Excited state population of the spin states.
(a) Histogram in the complex plane of $2.5\times10^5$ sequential shots, integrated for \SI{500}{ns} in the absence of an excitation pulse. Measured at $B_z = \SI{0}{mT}$ and $f_{\rm s}=$~\SI{0.6}{GHz}.
(b) Same as (a) at $B_z = \SI{65}{mT}$ and $f_{\rm s}=$~\SI{11.5}{GHz}.
(c) Histograms of the I-quadrature response of the preceding panels (a in orange, b in green).
(d) Extracted excited state population versus spin qubit frequency $f_{\rm s}$, as tuned with the magnetic field $B_z$. Data (markers) are fit with a Boltzmann equation (see text) resulting in an effective temperature of \SI{100}{mK}.
}
\label{fig:thermal}
\end{figure}
\subsection{CP data}
\label{Sss:CP}
\begin{figure}[h!]
\center
\includegraphics[scale=1]{Fig_S_CPMG.pdf}
\caption{Extended CP experiment data. Solid lines indicate fits to the data (see text). All data is normalized to the visibility of a preceding Rabi oscillation measurement, and the data is obtained using a $\pi$-pulse ($\pi/2$-pulse) with a FWHM of \SI{4}{ns} (\SI{2}{ns}). The oscillations are introduced into the decay by adding a phase proportional to the delay time for the final $\pi/2$-pulse.}
\label{fig:CP}
\end{figure}
In this section we provide further data for the CP measurements shown in Fig.~3(d) in the main text. As discussed, the CP sequence is constructed as follows: for each $n_\pi$, we apply a $\pi/2$-pulse, followed by $n_\pi$ equidistant $\pi$-pulses and a final $\pi/2$-pulse. All pulses are composed of a Gaussian envelope and have a FWHM of \SI{2}{ns} and \SI{4}{ns} for the $\pi/2$- and $\pi$-pulses, respectively. The separation between the centers of consecutive $\pi$-pulses is $\tau/n_\pi$ and the separation between a $\pi/2$ pulse and its nearest $\pi$ pulse is $\tau/(2n_\pi)$, resulting in a total delay time $\tau$ between the center of the two $\pi/2$ pulses. Fig.~\ref{fig:CP} shows CP measurements for $n_\pi$ values ranging from $2$ to $7$, accompanied by a fit to the expression
\begin{equation}\label{eq:CP}
a \cos{\left(\tau \Omega -\phi\right)} \exp{\left(-\left(\tau/T_{2}\right)^{d+1} \right)} + c + e\tau,
\end{equation}
from which we extract the $T_2(n_\pi)$ values reported in Fig.~3(d). Note that the maximum waveform generator output power puts a limit on the minimum delay time $\tau$ for which the sequence can be generated, as the Gaussian pulses overlap for short delay times compared to the pulse width. This results in the absence of data for short $\tau$ in Fig.~\ref{fig:CP}.
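The pulse timing of the sequence can be made explicit with a small helper (hypothetical, not from the paper's code), returning the pulse-center times for a total delay $\tau$ between the two $\pi/2$-pulses:

```python
import numpy as np

def cp_pulse_centers(tau, n_pi):
    """Pulse-center times of the CP sequence described above."""
    t_pi2 = np.array([0.0, tau])                            # pi/2-pulse centers
    t_pi = tau / (2 * n_pi) + np.arange(n_pi) * tau / n_pi  # equidistant pi-pulses
    return t_pi2, t_pi

# Example: n_pi = 2 and tau = 1 places the pi-pulses at tau/4 and 3*tau/4,
# i.e. tau/(2*n_pi) from each pi/2-pulse and tau/n_pi apart.
```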
\subsection{Transmon qubit coherence} \label{Sss:transmon}
We characterize the transmon performance at the flux and gate bias point used in the main text using standard time-domain techniques, see Fig.~\ref{fig:coherence_sup}.
\begin{figure}[H]
\centering
\includegraphics[scale=1.0]{Fig_coherence_transmon.pdf} \caption{Coherence of the transmon qubit at $B_z=$~\SI{65}{mT}. (a) Rabi oscillations, (b) qubit lifetime, (c) Ramsey and (d) Hahn-echo experiments. Solid lines indicate fits to the data. For (c-d) oscillations are introduced into the decay by adding a phase proportional to the delay time for the final $\pi/2$-pulse. We plot the normalized population inversion, where each sub-panel is individually normalized to the resulting fit.}
\label{fig:coherence_sup}
\end{figure}
\subsection{ASQ coherence versus control parameters} \label{Sss:coherence-parameters}
In this section we provide additional data showing the dependence of the ASQ lifetime and coherence times on different control parameters. They are extracted by fitting their respective time evolutions using the same expressions employed in Fig.~3 of the main text:
\begin{align}
T_1: &\, \hspace{0.5 cm} a\exp{\left(-t/T_{1}\right)} + c \\
T_{\rm 2R}: &\, \hspace{0.5 cm} a \cos{\left(t \Omega -\phi\right)} \exp{\left(-\left(t / T_{\rm 2 R}\right)^{d+1}\right)} + c \\
T_{\rm 2E}: &\, \hspace{0.5 cm} a \cos{\left(t \Omega -\phi\right)} \exp{\left(-\left(t / T_{\rm 2 E}\right)^{d+1}\right)} + c + et
\end{align}
Here, $a$, $c$, $d$, $e$, $\phi$, $\Omega$, $T_1$, $T_{\rm 2R}$ and $T_{\rm 2E}$ are fit parameters. For $T_{\rm 2R}$ and $T_{\rm 2E}$, $\Omega$ accounts for the combination of detuning and the oscillations introduced by adding a phase proportional to the delay time for the final $\pi/2$-pulse.
\subsubsection{ASQ lifetime versus magnetic field}
\begin{figure}[h!]
\center
\includegraphics[scale=1]{Fig_S_field_T1.pdf}
\caption{(a) Spin qubit frequency, $f_{\rm s}$, as a function of magnetic field, $B_z$. (b) Spin qubit lifetime, $T_1$ as a function of magnetic field. Dashed lines in (a-b) indicate the magnetic fields at which $f_{\rm s}$ crosses the first three transmon frequencies $f_{0j}$ and the resonator frequency [c.f. Fig.~\ref{fig:20GHz}]. (c) Representative qubit lifetime measurements, fit with an exponential decay.}
\label{fig:T1_field}
\end{figure}
We start by investigating the evolution of the ASQ lifetime $T_1$ versus magnetic fields between 0 and \SI{65}{mT}. As shown in Fig.~\ref{fig:T1_field}(b), the qubit lifetime varies strongly, from around \SI{1}{\micro s} close to zero magnetic field up to \SI{40}{\micro s} at intermediate fields, before once more decreasing to approximately \SI{20}{\micro s}. For intermediate magnetic fields between \SI{15}{mT} and \SI{35}{mT}, the measurement of the qubit lifetime is hindered by the proximity to the transmon and resonator transition frequencies. In this region it is not possible to drive the ASQ independently as, due to the capacitance between the gate drive line and the transmon island, the transmon qubit is also excited. This simultaneous driving of both qubits makes it difficult to distinguish the response of each qubit.
The strong reduction of $T_1$ at low fields is potentially due to resonant exchange with the nuclear spins in InAs \cite{Stockill2016}; given the large $g$-factor of the ASQ, this process only takes place at low magnetic fields. This is supported by the observation that at elevated magnetic fields, in the range \SIrange{45}{50}{mT}, the ASQ lifetime exceeds \SI{40}{\micro s}. At even higher fields we observe a drop of the lifetime to around \SI{20}{\micro s}. As discussed in the main text, we conjecture that the ASQ lifetime found in these regimes is limited by Purcell-like decay from coupling to the transmon, given the short transmon lifetime of around \SI{250}{ns} [Fig.~\ref{fig:coherence_sup}(b)].
To support the assertion that the reduction in the ASQ lifetime for qubit frequencies in the proximity of the transmon transitions is due to Purcell-like decay, we investigate whether the transmon lifetime is enhanced by proximity to the ASQ. Fig.~\ref{fig:trasmonT1} shows the transmon lifetime $T_1^{\rm t}$ for three different detunings between transmon and ASQ. When the qubits are detuned from each other, we measure $T_1^{\rm t} \approx$~\SI{250}{ns}. However, when the transmon is resonant with the ASQ, its lifetime is enhanced by almost a factor of two, reaching 470~$\pm$~\SI{5}{ns}. This is consistent with hybridization of the two qubits, given that $T_1^{\rm s} \gg T_1^{\rm t}$, and supports that the lifetime of the ASQ can be decreased by the vicinity of the transmon modes. These findings furthermore complement the observations discussed surrounding Fig.~4 of the main text, serving as an additional signature of coherent coupling.
\begin{figure}[h!]
\center
\includegraphics[scale=1]{Fig_S_tmon_T1.pdf}
\caption{Transmon qubit lifetime $T_1^{\rm t}$ as a function of detuning $\Delta_f = f_{\rm{t}}-f_{\rm s}$ from the spin qubit frequency as tuned with the magnetic field. Detunings $\Delta_f = $~3.1, 0 and \SI{-6.6}{GHz} correspond to $B_z =$~9, 28 and \SI{65}{mT}, respectively.
}
\label{fig:trasmonT1}
\end{figure}
\subsubsection{Independence of ASQ coherence on gate voltages, magnetic field and flux}
We investigate the effect of different sources of noise by measuring the dependence of the $T_{2 \rm R}$ and $T_{2 \rm E}$ coherence times on gate voltage, magnetic field, and flux.
The $B_z$ dependence of the coherence times is shown in Fig.~\ref{fig:coherence_flux}(a); we do not observe a measurable dependence over the $B_z$ range investigated. Charge noise is therefore likely not the dominant contribution to qubit dephasing: if it were, an increase in $B_z$ would increase the effectiveness of EDSR at coupling charge noise to the qubit, which would result in a reduction of the coherence times. In contrast, this $B_z$-independence of the coherence times is compatible with nuclear magnetic noise being a strong contribution to qubit dephasing; due to the small magnetic moment of the nuclear spins, a magnetic field of \SI{65}{mT} does not yet lead to a significant nuclear splitting. As a result, we do not reach the regime of strong nuclear spin polarization, such that the precession of the nuclear bath in the external field still leads to a significant Overhauser field over the range of fields explored. Additionally, the Overhauser field could have a field-independent component originating from the quadrupolar coupling of the nuclei to electric field gradients induced by strain in the nanowire \cite{Krogstrup2015, Stockill2016}. A more complete understanding of the system will require further investigation.
Next, we consider the dependence of coherence times on the external flux $\phi_{\rm ext}$. As shown in Fig.~\ref{fig:coherence_flux}(b), we again do not find a pronounced dependence of the coherence times. In particular, we do not observe an increase of the $T_2$ times near the sweet spots at $\phi_{\rm ext}$~$=\pm \pi/2$. From this we conclude that flux noise does not strongly contribute to dephasing.
\begin{figure}[h!]
\center
\includegraphics[scale=1]{Fig_S_field_coherence.pdf}
\caption{Dependence of the spin qubit coherence on the external magnetic field (a) and the external flux (b). The dashed lines in (a) indicate the magnetic fields at which $f_{\rm s}$ crosses the transmon transition frequencies $f_{01}$ and $f_{02}$ as well as the resonator frequency.}
\label{fig:coherence_flux}
\end{figure}
\begin{figure}[h!]
\center
\includegraphics[scale=1]{Fig_S_gate_coherence.pdf}
\caption{Dependence of the spin qubit coherence on the three quantum dot gates. (a-c) Spin qubit frequency versus gate voltage. (d-f) Ramsey and Hahn echo $T_2$ times versus gate voltage.}
\label{fig:coherence_gate}
\end{figure}
Finally, we investigate the dependence of coherence times on the voltages applied to the three gate electrodes situated underneath the quantum dot junction [see Fig.~\ref{fig:device}(e)]. As shown in Fig.~\ref{fig:coherence_gate}, we do not find a clear correlation between $T_{2 \rm R}$ or $T_{2 \rm E}$ and the slope of the qubit frequency versus any of the three gate voltages. This indicates that voltage noise also does not provide a large contribution to the dephasing rate. However, although we measure $T_2$ in the vicinity of the available sweet spots of the individual gate electrodes, we did not find a simultaneous sweet spot for all three quantum dot gates, so the effect of voltage noise cannot be entirely ruled out. Further investigation of the qubit's susceptibility to voltage and magnetic noise, based on the Rabi decay times, is discussed in the next section, Sec.~\ref{Sss:noise}.
\subsection{Estimating the amplitude of charge and magnetic noise fluctuations}
\label{Sss:noise}
A method for estimating upper bounds on the amplitude of fluctuations originating from different noise sources is provided in Ref.~\cite{Malinowski2017}, where the authors study the relation between the Rabi frequency, $f_{\rm R} = \Omega_{\rm R}/2\pi$, and the Rabi decay time, $T_{\rm R}$. These quantities,
respectively shown in Figs.~\ref{fig:TRfR}(a) and (b), can be extracted from a fit of the Rabi signal to the expression $a \cos{\left(\Omega_{\rm R} t\right)} \exp{\left(-t/T_{\rm R}\right)} + c$, where $t$ denotes the full width at half maximum of the applied Gaussian pulse, see Fig.~2 in the main text. We fit the extracted decay times to the model of Ref.~\cite{Malinowski2017}
\begin{equation}\label{eq:TrfR}
\left( \frac{1}{T_{\rm R}} \right)^2 = \frac{\sigma_f^4}{4f_{\rm R}^2} + C^2 f_{\rm R}^2,
\end{equation}
where $\sigma_f$ is the standard deviation of the fluctuations of the qubit frequency $f_{\rm s}$ due to noise in the control and model parameters and $C$ is a measure of the noise of the drive field. We fit the data up to the point where the Rabi frequency ceases to depend linearly on the pulse amplitude $A$, indicated with grey markers in Fig.~\ref{fig:TRfR}, and extract $\sigma_f=$~\SI{39.7}{MHz} and $C=0.25$.
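As a concrete check of Eq.~\ref{eq:TrfR}, the following sketch (in Python, with illustrative numbers rather than the measured dataset) shows that two noise-free $(f_{\rm R}, T_{\rm R})$ samples already determine $\sigma_f$ and $C$, since the model is linear in $\sigma_f^4/4$ and $C^2$:

```python
import math

# Sketch of the Rabi-decay model of Eq. (TrfR) [Malinowski2017]:
#   (1/T_R)^2 = sigma_f^4 / (4 f_R^2) + C^2 f_R^2.
# Units: frequencies in MHz, times in microseconds (1 us^-1 = 1 MHz),
# so both summands carry units of MHz^2. The numbers below are the
# values quoted in the text, used here on synthetic points only.

def rabi_decay_time(f_R, sigma_f, C):
    """Predicted Rabi decay time T_R at Rabi frequency f_R."""
    inv_TR_sq = sigma_f**4 / (4.0 * f_R**2) + C**2 * f_R**2
    return 1.0 / math.sqrt(inv_TR_sq)

def fit_two_points(pt1, pt2):
    """Recover (sigma_f, C) from two noise-free (f_R, T_R) samples.
    Writing y = 1/T_R^2 and x = f_R^2, the model y = a/x + b*x is
    linear in a = sigma_f^4/4 and b = C^2, so two points suffice."""
    (f1, T1), (f2, T2) = pt1, pt2
    x1, y1 = f1**2, 1.0 / T1**2
    x2, y2 = f2**2, 1.0 / T2**2
    det = x2 / x1 - x1 / x2
    a = (y1 * x2 - y2 * x1) / det
    b = (y2 / x1 - y1 / x2) / det
    return (4.0 * a) ** 0.25, math.sqrt(b)

sigma_f, C = 39.7, 0.25   # MHz, dimensionless (values quoted above)
pts = [(f, rabi_decay_time(f, sigma_f, C)) for f in (30.0, 120.0)]
print(fit_two_points(*pts))   # ~ (39.7, 0.25)
```

In practice the parameters are extracted from a least-squares fit to many noisy points, but the linear structure explains why the two summands separate cleanly in Fig.~\ref{fig:TRfR}(c).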
\begin{figure}[b!]
\center
\includegraphics[scale=1]{Fig_Rabi_TR_fR.pdf}
\caption{
(a) Rabi frequency versus pulse amplitude (markers) and fit to a linear dependence (line). Same data as in Fig.~2.
(b) Rabi decay time versus pulse amplitude (markers). Grey markers denote the points for which the data deviates from the linear dependence in (a).
(c) Rabi decay time versus Rabi frequency (markers) and the result of a fit of the black markers to Eq.~\ref{eq:TrfR} (continuous line). The dotted lines show the individual contributions of the two summands in Eq.~\ref{eq:TrfR}.
(d) Rabi quality factor $Q=T_{\rm R} f_{\rm R}$ versus Rabi frequency (markers) and result of the same fit as in (c) (line).
}
\label{fig:TRfR}
\end{figure}
If we assume that the dominant contribution to $\sigma_f$ originates from noise in just one control parameter, we can obtain upper bounds on the noise amplitude for various types of noise. Since the coherence time is largely independent of the external flux [Fig.~\ref{fig:coherence_flux}], we focus on two possible origins of decoherence: voltage noise and nuclear magnetic noise. We first determine the susceptibility of the qubit frequency to each of the external parameters ($V_{\rm L}$, $V_{\rm C}$, $V_{\rm R}$, $B_\parallel$ and $B_\perp$) at the ASQ operational setpoint, defined as the partial derivative of the qubit frequency with respect to that parameter. From two-tone spectroscopy measurements, we find susceptibilities to the left, central and right quantum dot gates of $S_{\rm L} \approx$~\SI{0.16}{GHz/mV}, $S_{\rm C} \approx$~\SI{0.07}{GHz/mV} and $S_{\rm R} \approx$~\SI{0.08}{GHz/mV}, respectively, and susceptibilities to the parallel and perpendicular magnetic fields of $S_{\parallel} \approx$~\SI{0.18}{GHz/mT} and $S_\perp \approx$~\SI{0.05}{GHz/mT}, respectively.
We start by evaluating the contribution of voltage noise on the DC lines. Considering noise from the gate with the highest susceptibility, we obtain an upper bound of $\sigma_{\rm L} < \sigma_f / S_{\rm L}=$~\SI{0.25}{mV} for the standard deviation of the gate voltage fluctuations. While this agrees with the gate noise observed in Ref.~\cite{Hays2021}, where the estimated standard deviation of the gate voltage fluctuations was $\sigma_V=$~\SI{0.24}{mV}, we do not expect fluctuations of this magnitude to be present in our system. Previous experiments performed in the same experimental setup [Fig.~\ref{fig:cryogenic_setup}] observed gate stability below \SI{60}{\micro eV} for similar device geometries \cite{Bargerbos2020}. Furthermore, the DC lines used to control the gate electrodes are strongly filtered with a sequence of \SI{9}{kHz} RC filters, \SI{80}{MHz} to \SI{5}{GHz} $\pi$ filters and, finally, custom-made copper powder filters, all mounted at the mixing chamber stage, as well as an additional set of \SI{80}{MHz} $\pi$ filters on the printed circuit board. The left and right gates additionally have first-order LC filters on-chip, with an expected cutoff frequency of \SI{200}{MHz}. We therefore suspect that the dominant contribution to $\sigma_f$ does not arise from gate voltage fluctuations on the DC lines. However, charge fluctuations on the device, unrelated to the gate control, could still limit the coherence time.
Alternatively, the gate voltage noise could originate from the RF drive line connected to the central gate electrode. This would result in an upper bound to gate voltage noise of $\sigma_{\rm C} < \sigma_f / S_{\rm C} =$~\SI{0.57}{mV}, which corresponds to an effective power of \SI{-53}{dBm} at the sample. Given the \SI{-55}{dB} attenuation of the drive line [Fig.~\ref{fig:cryogenic_setup}], this would correspond to a noise power of \SI{2}{dBm} at the fridge input, which we consider implausible. Furthermore, the RF line is connected via both a DC block and a bias tee, providing strong high-pass filtering.
Next, we consider the contribution of nuclear magnetic noise. We estimate upper bounds to the longitudinal and transverse magnetic fluctuations of $\sigma_\parallel < \sigma_f / S_{\rm \parallel} =$~\SI{0.22}{mT} and $\sigma_\perp < \sigma_f / S_{\rm \perp} =$~\SI{0.80}{mT}, respectively. These estimates are comparable to the values obtained for InAs and InSb spin-orbit qubits in previous works: $\sigma_{B} =$~\SI{0.66}{mT} \cite{NadjPerge2010} and $\sigma_{B} =$~\SI{0.16}{mT} \cite{vandenBerg2013}, respectively. Nuclear magnetic noise is therefore a plausible dominating contribution to the dephasing observed in the ASQ. However, we emphasize that these calculations are only an estimate and that further investigation is needed to discern between the different possible causes of dephasing.
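The upper bounds quoted in this section follow directly from $\sigma_\lambda < \sigma_f / S_\lambda$. A minimal sketch reproducing them (the \SI{50}{\ohm} line impedance used for the dBm conversion is an assumption for illustration):

```python
import math

# Upper bounds on single-parameter noise amplitudes: sigma_f / S for
# each control parameter, using the susceptibilities quoted in the text.
sigma_f = 39.7e6  # Hz, fitted std. dev. of qubit-frequency fluctuations

susceptibilities = {            # |df_s / d(lambda)| at the setpoint
    "V_L":    0.16e9 / 1e-3,    # Hz per V  (0.16 GHz/mV)
    "V_C":    0.07e9 / 1e-3,    # Hz per V
    "V_R":    0.08e9 / 1e-3,    # Hz per V
    "B_par":  0.18e9 / 1e-3,    # Hz per T  (0.18 GHz/mT)
    "B_perp": 0.05e9 / 1e-3,    # Hz per T
}

bounds = {k: sigma_f / S for k, S in susceptibilities.items()}
print(f"sigma_L    < {bounds['V_L'] * 1e3:.2f} mV")    # 0.25 mV
print(f"sigma_C    < {bounds['V_C'] * 1e3:.2f} mV")    # 0.57 mV
print(f"sigma_par  < {bounds['B_par'] * 1e3:.2f} mT")  # 0.22 mT
print(f"sigma_perp < {bounds['B_perp'] * 1e3:.2f} mT") # 0.79 mT

# Equivalent drive-line noise power, assuming a 50 Ohm line:
P_drive = bounds["V_C"] ** 2 / 50.0   # W; ~ -52 dBm at the sample,
# consistent (to within rounding) with the -53 dBm quoted above.
print(f"drive noise power ~ {10 * math.log10(P_drive / 1e-3):.0f} dBm")
```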
\subsection{Virtual-photon-mediated ASQ–resonator coupling}
\label{Ss:resonator}
In this section we provide additional data showing coherent coupling between the readout resonator and the Andreev spin qubit. As shown in Fig.~\ref{fig:qubit-res}, we observe avoided crossings between the ASQ and resonator transitions when they are on resonance, at $B_z=$~\SI{36.5}{mT}. This coherent coupling is of note, as the ASQ and readout resonator are not directly coupled. However, both are directly and strongly coupled to the transmon qubit, detuned by \SI{900}{MHz} in this case, which mediates a strong virtual coupling. This effect is analogous to the work of Ref.~\cite{Landig2019} where, instead, a resonator mediated a virtual coupling between a transmon and a spin qubit.
\begin{figure}[h!]
\center
\includegraphics[scale=1]{Fig_S_qubit_res.pdf}
\caption{Single-tone spectroscopy of the readout resonator versus the magnetic field in the chip plane and perpendicular to the nanowire, $B_y$, for $B_z$ = \SI{36.5}{mT}.}
\label{fig:qubit-res}
\end{figure}
\section{Introduction}
\label{Sec:Intro}
X-ray observations using space telescopes have revealed that the Universe is full of high-temperature phenomena reaching 10 to 100 million degrees, which nobody had imagined before the advent of X-ray Astronomy. The X-ray band is capable of probing extreme conditions of the Universe such as the proximity of black holes or the surface of neutron stars, as well as observing exclusively the emission from high-temperature gas and selectively the emission from accelerated electrons. In recent years, \emph{Chandra}, XMM-\emph{Newton}, \emph{Suzaku}\ and other X-ray missions have made great advances in X-ray Astronomy. We have obtained knowledge which revolutionized our understanding of the high energy Universe and learned that phenomena observed in the X-ray band are deeply connected to those observed in other wavelengths from radio to $\gamma$-rays.
X-rays of synchrotron origin are of special interest because they are generally produced in extreme cosmic accelerators, which can accelerate particles up to and above $10^{12}$\,eV energies. X-rays carry information not only about the directly accelerated electrons, but also about hadrons, through the synchrotron radiation of secondary $e^{\pm}$ pairs produced at interactions of accelerated protons and nuclei with ambient gas and radiation fields. The \emph{ASCA}\ X-ray satellite and the first generation TeV telescopes demonstrated the close relationship between X-rays and TeV $\gamma$-rays from objects such as ``high frequency peaked'' blazars and young supernova remnants \cite{Ref:Mrk421_TT,Ref:RXJ1713_HESS}. Since then, tremendous achievements demonstrating the link between these two frequency bands have been made with the present X-ray satellites and the second generation TeV telescopes including \emph{H.E.S.S.}, \emph{MAGIC}, and \emph{VERITAS}. Hard X-ray observations are becoming increasingly important, since this is the energy band where non-thermal emission can overwhelm thermal X-ray emission in sources like galaxy clusters and supernova remnants. The hard X-ray emission, if detected, traces the sites of particle acceleration in such objects, and gives important information about the particle acceleration mechanisms involved. As discussed below, the hard X-ray telescopes onboard \emph{ASTRO-H}\ and \emph{NuSTAR}, with sensitivity two orders of magnitude better than the present missions, are capable of solving various scientific questions to understand the non-thermal Universe.
In order to study turbulence, magnetic fields, and relativistic particles in various astrophysical systems, and to draw a more complete picture of the high energy Universe, observations by a spectrometer with an extremely high resolution, capable of measuring the bulk plasma velocities and/or turbulence with a resolution corresponding to a speed of $\sim 100$\,km\,s$^{-1}$, are desirable. In galaxy clusters, X-ray hot gas is trapped in a gravitational potential well, and shocks and/or turbulence are produced as smaller substructures with their own hot gas halos fall into and merge with the dominant cluster. Large scale shocks can also be produced as gas from the intracluster medium falls into the gravitational potential of a cluster. The bulk motions and turbulence are in turn responsible for acceleration of particles to very high energies, which is manifested via non-thermal emission processes, best studied with sensitive hard X-ray and $\gamma$-ray measurements.
Understanding the non-thermal phenomena in the Universe is one of the key goals of modern astrophysics. The origin of galactic and extragalactic cosmic rays and their roles in the history of the Universe still remain unsolved. In this paper, we will discuss contributions by future X-ray missions which are under development in conjunction with possible synergy with the next-generation TeV $\gamma$-ray observatory, the Cherenkov Telescope Array (\emph{CTA}).
\section{Future X-ray Missions}
\label{Sec:Missions}
A number of new X-ray missions which are expected to revolutionize the current understanding of the high energy Universe are being developed and planned. In the next decade, \emph{ASTROSAT}\ \cite{Ref:ASTROSAT}, \emph{NuSTAR}\ \cite{Ref:NuSTAR}, \emph{e-ROSITA}\ \cite{Ref:eROSITA}, \emph{ASTRO-H}\ \cite{Ref:ASTRO-H} and \emph{GEMS}\ \cite{Ref:GEMS} will be realized. Among them, the 6th Japanese X-ray satellite \emph{ASTRO-H}, to be launched in 2014, is the next major international X-ray mission which will be operated as an observatory. Much larger missions, such as \emph{Athena}\ \cite{Ref:Athena} and \emph{LOFT}\ \cite{Ref:LOFT}, have been proposed for the 2020's.
\begin{figure}[t]
\centerline{\includegraphics[scale=0.32,angle=90]{MW-sensitivity-CTA1012.ps}}
\caption{Differential sensitivities of different X-ray and $\gamma$-ray instruments for an isolated point source.
Lines for the \emph{Chandra}/ACIS-S, the \emph{Suzaku}/HXD (PIN and GSO), the \emph{INTEGRAL}/IBIS (from the 2009 IBIS Observer's Manual), and the \emph{ASTRO-H}/HXI,SGD are the $3\sigma$ sensitivity curves for 100 ks exposures. A spectral bin with $\Delta E/E = 1$ is assumed for \emph{Chandra}\ and $\Delta E/E = 0.5$ for the other instruments.
Note that the XMM-\emph{Newton}\ instruments have a slightly better sensitivity than \emph{Chandra}\ for 100 ks, while \emph{SWIFT}/BAT is characterized by almost the same sensitivity limit as IBIS/\emph{ISGRI}\ within the range from 15\,keV up to $\sim 300$\,keV.
The sensitivities of the COMPTEL and EGRET instruments correspond to the all-lifetime all-sky survey of \emph{CGRO}. The curve denoting \emph{Fermi}-LAT\ is the pre-launch sensitivity evaluated for the $5 \sigma$ detection limit at high Galactic latitudes with 1/4-decade ranges of energy in a one-year dataset \cite{Atwood2009}. The curves depicting the \emph{MAGIC}\ Stereo system \cite{Carmona2011} and \emph{H.E.S.S.}\ are given for $5\sigma$ detection with $>10$ excess photons after 50\,h exposure. The simulated \emph{CTA}\ configuration C sensitivity curve for 50\,h exposure at a zenith angle of $20$\,deg is taken from \cite{CTA}. The red dashed line denotes the differential energy flux corresponding to the mCrab unit in various energy ranges as adopted in the literature.}
\label{Fig:Sensitivity}
\end{figure}
\emph{ASTROSAT}\ is a multi-wavelength astronomy mission carrying four X-ray instruments, which will be placed in a 650-km, near-equatorial orbit. It will provide data mainly in the area of X-ray timing and broadband spectroscopy covering the energy range $0.3-150$\,keV, with emphasis on hard X-rays. Diffuse UV studies can also be carried out with an onboard UV telescope.
\emph{NuSTAR}\ and \emph{ASTRO-H}\ will carry the first focusing hard X-ray telescopes with graded multilayer reflecting surfaces that operate in an energy range of $5-80$\, keV.
Imaging and especially focusing instruments have two tremendous advantages. Firstly, the volume of the focal plane detector can be made much smaller than for non-focusing instruments, thereby reducing the absolute background level, since the background flux generally scales with the size of the detector. Secondly, the residual background, often time-variable, can be measured simultaneously with the source, and can be reliably subtracted.
\begin{figure}
\centerline{\includegraphics[scale=0.27,clip,angle=270]{nustar.ps}}
\caption{The IBIS/\emph{ISGRI}\ significance mosaic of the Galactic Center region in the $20-40$\,keV energy range \cite{Ref:INTEGRAL_GC} is shown in the top panel. The bottom panel shows the expected performance of hard X-ray observations of the same region with \emph{NuSTAR}. The simulation does not include molecular clouds but focuses on the population of X-ray binaries, with a depth of 12\,ks per pixel, or 6\,$\mu$Crab (courtesy of F.A. Harrison).}
\label{Fig:NuSTAR}
\end{figure}
As shown in Figure\,\ref{Fig:Sensitivity}, the sensitivity to be achieved by \emph{ASTRO-H}\ (and similarly \emph{NuSTAR}) is about two orders of magnitude better than that of previous collimated or coded-mask instruments operating in this energy band (Figure\,\ref{Fig:NuSTAR}). This will bring a breakthrough in our understanding of hard X-ray spectra of celestial sources in general. With this sensitivity, $30-50$\,\% of the hard X-ray Cosmic Background would be resolved. This will enable us to track the evolution of active galaxies with accretion flows which are heavily obscured, in order to accurately assess their contribution to the Cosmic X-ray Background over cosmic time. In addition, simultaneous observations of blazar-type active galaxies with \emph{Fermi}-LAT\ and the TeV $\gamma$-ray telescopes are of vital importance to study particle acceleration in relativistic jets (see \S\,\ref{Sec:AGN}).
In addition to the hard X-ray telescopes, \emph{ASTRO-H}\ will carry two Soft X-ray Telescopes, one with a micro-calorimeter spectrometer array with excellent energy resolution (Soft X-ray Spectrometer; SXS), and the other with a large area CCD in their respective focal planes (Figure\,\ref{Fig:ASTRO-H}). The spectroscopic capability of X-ray micro-calorimeters is unique in X-ray astronomy, since no other spectrometers can achieve high energy resolution, high quantum efficiency, and spectroscopy for spatially extended sources at the same time. Imaging spectroscopy with an energy resolution $< 7$\,eV by the SXS of extended sources can reveal line broadening and Doppler shifts due to turbulent or bulk velocities of the X-ray emitting plasma. This capability enables the determination of the level of turbulent pressure support in clusters, supernova ejecta dispersal patterns, the structure of active galactic and starburst winds, and the spatially dependent abundance pattern in clusters and elliptical galaxies. The SXS can also measure the optical depths of resonance absorption lines, from which the degree and spatial extent of turbulence can be inferred.
\begin{figure}
\centerline{\includegraphics[scale=0.5,clip]{ASTRO-H2.eps}}
\caption{Schematic view of the \emph{ASTRO-H}\ satellite. The total mass at launch will be $\sim 2700$\,kg. \emph{ASTRO-H}\ will be launched into a circular orbit with altitude $500-600$\,km, and inclination $\sim 31$\,degrees \cite{Ref:ASTRO-H}.}
\label{Fig:ASTRO-H}
\end{figure}
In combination with a high throughput X-ray telescope, the SXS improves on the \emph{Chandra}\ and XMM-\emph{Newton}\ grating spectrometers in two important ways. At $E> 2$\,keV, the SXS is both more sensitive and has higher resolution (Figure\,\ref{Fig:XRS}), especially in the Fe K band where the SXS has 10 times the collecting area and much better energy resolution, giving a net improvement in sensitivity by a factor of 30 over \emph{Chandra}. In addition, the SXS uniquely performs high-resolution spectroscopy of extended sources. In contrast to a grating, the spectral resolution of the calorimeter is unaffected by the source's angular size because it is non-dispersive.
In order to extend the energy coverage to the soft $\gamma$-ray region up to 600\,keV, the Soft Gamma-ray Detector (SGD) will be implemented as a non-focusing detector onboard \emph{ASTRO-H}. The SGD measures soft $\gamma$-rays via reconstruction of the Compton scattering in the Compton camera, covering an energy range of $40-600$\,keV with a sensitivity at $300$\,keV more than 10 times better than that of the \emph{Suzaku}\ Hard X-ray Detector. The SGD is capable of measuring the polarization of celestial sources brighter than a few times 1/100 of the Crab Nebula and polarized above $\sim 10\,\%$. This capability is expected to yield polarization measurements in several celestial objects, providing new insights into properties of soft $\gamma$-ray emission processes.
The Gravity and Extreme Magnetism Small Explorer (\emph{GEMS}) is an astrophysical observatory dedicated to X-ray polarimetry ($2-10$\,keV) and is being developed for launch in 2014. \emph{GEMS}\ will perform the first sensitive X-ray polarization survey of several classes of X-ray emitting sources characterized by strong gravitational or magnetic fields. It has been recognized for a long time that X-ray polarization measurements can provide unique diagnosis of the strong fields near compact objects. The prime scientific objectives of \emph{GEMS}\ are to determine the effects of the spin of black holes, the configurations of the magnetic fields of magnetars, and the structure of the supernova shocks which accelerate cosmic rays. In the cases of both stellar black holes and supermassive black holes, sensitivity to $1\,\%$ polarization is needed to make diagnostic measurements of the net polarizations predicted for probable accretion disk and corona models. \emph{GEMS}\ can reach this goal for several Seyfert galaxies and quasars and measure the polarizations of representatives of a variety of other classes of X-ray sources, such as rotation-powered and accretion-powered pulsars.
\begin{figure}
\centerline{\includegraphics[scale=0.575,clip]{sxs_a.ps}}
\centerline{\includegraphics[scale=0.575,clip]{sxs_b.ps}}
\caption{{\bf (a)} Effective areas of high-resolution X-ray spectroscopy missions as functions of X-ray energy. The curve for the \emph{ASTRO-H}\ SXS is the present best estimate for a point source. The two crosses show the mission requirements. The XMM-\emph{Newton}\ RGS effective area is a sum of first order of the two instruments (RGS-1 and RGS-2). The effective areas of LETG, MEG and HEG onboard \emph{Chandra}\ are sums of first order dispersions in $\pm$ directions. {\bf (b)} Resolving power of the \emph{ASTRO-H}\ SXS as a function of X-ray energy for the two cases, 4\,eV resolution (goal) and 7\,eV (requirement). The resolving power of high resolution instruments onboard \emph{Chandra}\ and XMM-\emph{Newton}\ and typical resolving power of X-ray CCD cameras are also shown for comparison \cite{Ref:SXS}. }
\label{Fig:XRS}
\end{figure}
\emph{e-ROSITA}\ will be the primary instrument onboard the Russian ``Spectrum-Roentgen-Gamma'' (SRG) satellite which will be launched in 2013 and placed in an L2 orbit \cite{Ref:eROSITA}. The \emph{e-ROSITA}\ mission will perform the first imaging all-sky survey in the medium energy X-ray range up to 10\,keV with an unprecedented spectral and angular resolution. The \emph{e-ROSITA}\ sensitivity during the four-year all-sky survey will be approximately 30 times that of \emph{ROSAT}. In the all-sky survey, the typical flux limit will be $\sim 10 ^{-14}$\,erg\,cm$^{-2}$\,s$^{-1}$ and $\sim 3 \times 10^{-13}$\,erg\,cm$^{-2}$\,s$^{-1}$ in the $0.5-2$\,keV and $2-10$\,keV energy bands, respectively. At these fluxes the X-ray sky is dominated by active galaxies and clusters, which can be separated with an angular resolution of $25-30$\,arcsec. The proposed survey will identify 50,000--100,000 clusters, depending on the capabilities in disentangling moderately-low extended sources from active galaxies. Concerning the number of active galaxies, the $\log N - \log S$ measurement in moderately wide field surveys, like XMM-COSMOS, can be used to predict detections of $(3-10) \times 10^{6}$ sources, up to $z \sim 7-8$, depending on the detection threshold.
\section{Supernova Remnants}
\label{Sec:SNRs}
\subsection{X-ray Study of Supernova Remnants}
X-ray imaging and spectroscopic observations play an important role in understanding TeV $\gamma$-ray emission from supernova remnants (SNRs). Generally, SNRs are studied with X-ray instruments in the following contexts: (i) SNRs are believed to be the primary sources of galactic cosmic rays (CRs) up to the knee (at $\sim 3\times 10^{15}$\,eV in the all-particle CR spectrum) and possibly beyond it; (ii) SNRs are the best sites to study particle acceleration (particularly, diffusive shock acceleration, DSA) processes, which should have wide applications in astrophysics; (iii) supernovae are important sources of chemical elements in the Universe; X-ray line spectroscopy of SNRs can probe nucleosynthesis taking place in the interior of stars and during supernova explosions, being complementary to a late phase optical spectroscopy of supernovae; (iv) SNRs are major sources of kinetic energy and turbulence of interstellar gas, thereby affecting the gas structures of our Galaxy and also star formation. The synergies between X-ray and TeV observations of SNRs primarily lie in the areas of particle acceleration and the origin of Galactic CRs.
In young SNRs, both nonthermal and thermal X-ray emissions can be observed. Synchrotron radiation by TeV electrons accelerated at strong shock fronts, first identified with the \emph{ASCA}\ satellite \cite{Koyama95}, is currently the only established channel of nonthermal X-radiation in SNRs. Observations of synchrotron-emitting X-ray filaments provide key information about particle acceleration and magnetic field amplification processes (see \S\,\ref{sec:MFA} and \S\,\ref{sec:Bykov}). For instance, a recent deep \emph{Chandra}\ map of Tycho's SNR has revealed an interesting spatial feature, ``stripes'', of synchrotron X-ray emission (see Figure\,\ref{fig:Eriksen}) which may be taken as signatures of magnetic field amplification and associated acceleration of CR protons and nuclei up to $\sim 10^{15}$\,eV \cite{Eriksen11}.
\begin{figure}
\begin{center}
\includegraphics*[scale=0.4]{Eriksen_Tycho.eps}
\end{center}
\caption{Deep \emph{Chandra}\ $4-6$\,keV image of Tycho's SNR \cite{Eriksen11}.
Bright features are due to synchrotron radiation produced by multi-TeV electrons.}
\label{fig:Eriksen}
\end{figure}
Thermal components include the line and continuum (bremsstrahlung) emissions from shock-heated interstellar/circumstellar medium and from the hot ejecta heated by reverse shocks. Dissipation at a shock occurs through wave-particle interactions
and shock heating is inevitably connected with shock acceleration. As discussed below, X-ray diagnostics of shocked plasma in SNRs aid in understanding TeV $\gamma$-ray emission. Line emission comes from collisionally excited ions, dominated by alpha elements like O, Ne, Mg, Si, and S, and iron-peak elements like Fe and Ni.
Shocked plasmas in young SNRs generally do not
reach collisional ionization equilibrium (CIE) and they are under-ionized.
Reaching CIE requires an ionization age of $\tau \equiv nt \sim 10^{12}$\,s\,cm$^{-3}$.
Collisional heating of electrons by ions is also a slow process and consequently
electron-ion temperature equilibrium is not reached.
The degree of electron-proton temperature equilibration at the shock front
is determined by collisionless heating via collective plasma processes, which
are not well understood. The analysis of Balmer-dominated
optical spectra of partially ionized shocks indicates a temperature ratio of
$T_e/T_p = 0.05 - 0.1$ for
a high shock speed ($v > 1000$\,km\,s$^{-1}$) \cite{Ref:Balmer-dominated}.
On the other hand, ion temperatures at fully ionized shocks (dominant for young SNRs)
have never been well determined.
The excellent resolution of the \emph{ASTRO-H}\ SXS offers an opportunity to measure the temperature of shocked iron ions in young SNRs,
a real breakthrough in understanding the physics of shock heating.
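The under-ionization argument above can be made quantitative: with an ionization age $\tau = nt$, the time to reach CIE at density $n$ is $t = \tau/n$. A short sketch (taking $n$ as the electron density; the densities are illustrative values, not measurements):

```python
# Time to reach collisional ionization equilibrium, t = tau / n, with
# tau ~ 1e12 s cm^-3 as quoted in the text. For typical post-shock
# densities this far exceeds the age of a young remnant, so the plasma
# remains under-ionized.
YEAR = 3.156e7  # seconds per year

def time_to_cie(n_e, tau=1e12):
    """Time (yr) to reach CIE at electron density n_e (cm^-3)."""
    return tau / n_e / YEAR

for n_e in (0.1, 1.0, 10.0):
    print(f"n_e = {n_e:5.1f} cm^-3 -> t_CIE ~ {time_to_cie(n_e):9.0f} yr")
# Tycho's SNR, for example, is only ~450 yr old, far shorter than t_CIE
# unless the density is very high.
```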
Figure~\ref{fig:Tycho} presents simulated spectra of SXS observations of the central portion of Tycho's SNR. The spectra (black points) assume
two plasma blobs, one receding from and one approaching us at $\pm 4000\ \rm km\ s^{-1}$ \cite{Ref:Hayato}, with the same
parameters, an iron temperature of $kT_{\rm Fe} = 3$ MeV (mass-proportional heating),
an electron temperature of $kT_e = 5$ keV, and an ionization parameter
of $\tau = 0.9\times 10^{10}$\,s\,cm$^{-3}$. For reference, a simulated spectrum with no thermal Doppler broadening is also shown (green points).
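A back-of-envelope sketch of the spectral scales behind this simulation (the 6.45\,keV Fe K$\alpha$ centroid is a representative choice for under-ionized ejecta, not a fitted value):

```python
import math

# Bulk Doppler shift of the two ejecta blobs and thermal Doppler width
# of the Fe K line, using the parameters quoted in the text.
C_KM_S = 2.998e5               # speed of light, km/s
M_FE_KEV = 55.845 * 931494.0   # iron rest-mass energy, keV

E0 = 6.45e3      # eV, Fe K-alpha centroid (representative, under-ionized Fe)
v_blob = 4000.0  # km/s, blob line-of-sight speed
kT_Fe = 3.0e3    # keV, iron temperature (mass-proportional heating)

bulk_shift = E0 * v_blob / C_KM_S                 # eV, per blob
thermal_sigma = E0 * math.sqrt(kT_Fe / M_FE_KEV)  # eV, Gaussian sigma
print(f"bulk shift per blob : +/-{bulk_shift:.0f} eV")          # ~86 eV
print(f"thermal width (FWHM): {2.355 * thermal_sigma:.0f} eV")  # ~115 eV
# Both scales far exceed the ~7 eV SXS resolution, so the SXS can
# separate the red- and blue-shifted blobs and measure kT_Fe directly.
```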
\begin{figure}
\begin{center}
\includegraphics*[scale=0.4]{Tycho_fake_SXS.eps}
\end{center}
\caption{
Simulated \emph{ASTRO-H}\ SXS spectrum (black points) around the iron K-shell complex of Tycho's SNR for an exposure of 100\,ks
(taken from \emph{ASTRO-H}\ Quick Reference \texttt{http://\-astro-h.isas.jaxa.jp/\-researchers/\-news/\-2010/\-1119\_e.html}).
For reference, a simulated spectrum with no thermal Doppler broadening is also shown (green points).
Iron K lines from a blob that is receding are red-shaded, while
lines from an approaching blob are blue-shaded. }
\label{fig:Tycho}
\end{figure}
Only thermal X-ray emission has been observed in evolved SNRs (say, $>10^4$\,yr), usually from low temperature ($kT_e \sim 0.1$\,keV) CIE plasmas. Recently, X-ray emission from overionized (recombining) plasma \cite{Kawasaki02} has been observed in Mixed Morphology SNRs (MMSNRs) with the \emph{Suzaku}\ satellite \cite{Yamaguchi09}.
MMSNRs are mostly strong GeV $\gamma$-ray emitters \cite{Uchi11}. X-ray observations can be used to infer the dynamical evolution of such SNRs. Conducting sensitive searches for nonthermal bremsstrahlung in the hard X-ray band by \emph{NuSTAR}\ and \emph{ASTRO-H}\ HXI will complement $\gamma$-ray measurements.
\subsection{Synergy between X-ray and Gamma-ray Observations of SNRs}
A supernova origin of CRs has long been a matter of active research since it was advocated by Baade and Zwicky in the 1930's \cite{BaadeZwicky34}. The current sophisticated paradigm is that diffusive shock acceleration (DSA) at collisionless shock waves of SNRs is responsible for the production of Galactic CRs up to the knee energy or even beyond \cite{MD01}, transferring $\sim 10\,\%$ of the explosion kinetic energy into the form of CR energy \cite{Hillas_Review05}. DSA is widely regarded as the standard mechanism for producing relativistic particles at collisionless shocks in various astrophysical objects.
$\gamma$-ray observations of SNRs provide the most straightforward way of addressing the SNR paradigm for the origin of CRs through the measurement of $\pi^0$-decay $\gamma$-rays \cite{DAV94}. \emph{H.E.S.S.}\ observations of TeV $\gamma$-ray emission from SNR RX\,J1713.7$-$3946 \cite{HESS_1713,HESS_1713_2} have revealed a TeV $\gamma$-ray morphology closely matching the synchrotron X-ray map (see Figure\,\ref{fig:RXJ1713}), providing a firm example of TeV $\gamma$-ray emission from an SNR shell. SNRs constitute one of the most populated classes of TeV sources in our Galaxy \cite{2009_TeVREview}. Given that the galactic CRs are energetically dominated by protons and that DSA models usually presume a very high efficiency of proton acceleration, finding evidence for the $\pi^0$-decay $\gamma$-rays is indispensable, but it has remained tantalizingly difficult mainly because radiation processes involving relativistic electrons (the so-called \emph{leptonic} components) could explain the $\gamma$-ray emission as well.
X-ray observations play an important role in constraining the origin of TeV $\gamma$-ray emission from shell-type SNRs.
A synchrotron X-ray spectrum is tightly coupled to the IC $\gamma$-ray spectrum that is produced by the same population of accelerated electrons. Moreover, X-ray measurements provide information about the hydrodynamic structure (e.g., shock speed, gas density) and the magnetic field strength in a remnant, which are crucial to disentangle $\gamma$-ray emission mechanisms.
\subsection{Magnetic Field Amplification in Young SNRs}
\label{sec:MFA}
High angular resolution observations of young SNRs with \emph{Chandra}\ suggest that strong shocks may be able to amplify the interstellar magnetic field by large factors. The narrow widths of synchrotron X-ray filaments \cite{Bamba05} could be due to rapid synchrotron cooling in the postshock flow \cite{VL03}. The magnetic field strength inferred from the X-ray filaments is typically $\sim 0.1$\,mG \cite{Voelk05}. An alternative explanation for the narrowness of the filaments is a fast magnetic field damping behind a shock \cite{Pohl05}. This scenario also requires similarly strong magnetic fields. Evidence for the amplified magnetic field comes also from the year-scale time variability of synchrotron X-ray filaments (Figure\,\ref{fig:RXJ1713}) \cite{Uchi07}. If the variability timescale represents the synchrotron cooling time, the magnetic field strength can be estimated to be as large as $\sim 1$\,mG. On the other hand, if the variability is due to intermittent turbulent magnetic fields \cite{Bykov09} (see \S\,\ref{sec:Bykov}), the time variability can be reconciled with a weaker magnetic field ($\sim 0.1$\,mG).
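The field estimate from the variability timescale can be sketched as follows (cgs units; the pitch angle is fixed at $90^\circ$ and order-unity spectral factors are dropped, so this is an order-of-magnitude statement only, not the analysis of the cited works):

```python
import math

# Synchrotron cooling time of electrons whose characteristic photon
# energy is eps_keV, in a magnetic field B_gauss (cgs units).
E_CHARGE = 4.803e-10   # esu
M_E = 9.109e-28        # g
C_LIGHT = 2.998e10     # cm/s
SIGMA_T = 6.652e-25    # cm^2
H_PLANCK = 6.626e-27   # erg s
KEV = 1.602e-9         # erg
YEAR = 3.156e7         # s

def cooling_time_yr(B_gauss, eps_keV):
    """Synchrotron loss time (yr) of the electrons radiating at eps_keV."""
    # Lorentz factor from eps = h * 3 gamma^2 e B / (4 pi m_e c)
    gamma = math.sqrt(4 * math.pi * M_E * C_LIGHT * eps_keV * KEV
                      / (3 * E_CHARGE * B_gauss * H_PLANCK))
    # t = E / (dE/dt), with dE/dt = (4/3) sigma_T c gamma^2 B^2 / (8 pi)
    return 6 * math.pi * M_E * C_LIGHT / (SIGMA_T * gamma * B_gauss**2) / YEAR

print(f"B = 0.1 mG: t ~ {cooling_time_yr(1e-4, 1.0):6.0f} yr")
print(f"B = 1   mG: t ~ {cooling_time_yr(1e-3, 1.0):6.1f} yr")
# Year-scale variability of keV filaments thus points to B ~ 1 mG if it
# traces synchrotron cooling; note the t proportional to B^(-3/2) scaling.
```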
\begin{figure}
\begin{center}
\includegraphics*[scale=0.3]{RXJ1713_Chandra.eps}
\end{center}
\caption{\emph{Chandra}\ images of SNR RX\,J1713.7$-$3946 \cite{Uchi07}
in an energy interval of 1--2.5 keV (panels {\bf a} and {\bf b}) or 3.5--6 keV (panel {\bf c}). In panel {\bf a}, the \emph{H.E.S.S.}\ contours ($> 0.7$ TeV) are overlaid on the \emph{Chandra}\ map.
A sequence of X-ray observations in July 2000, July 2005 and May 2006 for a small box depicted in panel {\bf a} are shown in panels {\bf b} and {\bf c},
which demonstrates time variability of synchrotron X-ray emission.}
\label{fig:RXJ1713}
\end{figure}
Theoretically it has been proposed that a turbulent magnetic field can be significantly amplified by a CR current driven instability (and other instabilities) in the shock precursor ahead of a shock front \cite{Bell04}. Magnetic field amplification (MFA) is now considered to be the key element in non-linear DSA theory \cite{Bykov11_MFA_Review}. Modeling of a synchrotron component depends critically on MFA. Consequently, any attempts to disentangle $\gamma$-ray emission mechanisms depend on our understanding of MFA, which in turn manifests in the synchrotron X-ray data.
\subsection{Synchrotron X-ray Emission as a Probe of Magnetic Turbulence}
\label{sec:Bykov}
During the DSA process, energetic particles have to be efficiently scattered by magnetic fluctuations in the shock vicinity in order to be accelerated to very high energies. In addition to the strength of the magnetic field, the power spectrum of magnetic turbulence is a key parameter of DSA theory.
The turbulent magnetic fields, amplified possibly by CR current driven instabilities, can be imprinted in synchrotron X-ray images \cite{Bykov09}. The synchrotron emissivity depends strongly on the local magnetic field as $B^{(\Gamma +1)/2}$ (where $\Gamma$ denotes the effective photon index). Therefore, localized non-steady magnetic field concentrations contribute significantly to the synchrotron X-ray emission by the highest energy electrons in the cut-off region of the electron distribution \cite{Bykov09}. Strong fluctuations of the magnetic fields result in an intermittent, twinkling appearance of synchrotron X-ray images even if the electron distribution is steady. This may explain the variable filamentary and clumpy structures in the synchrotron X-ray map of SNR RX\,J1713.7$-$3946 (Figure\,\ref{fig:RXJ1713}). Indeed, \emph{Suzaku}\ broadband X-ray observations have shown that the X-ray emission is formed in the cut-off region of the electron distribution \cite{Tanaka08}.
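As a rough numerical illustration of this intermittency effect (the filling factor and field contrast below are illustrative assumptions, not values from the cited work): for $\Gamma = 3$ the emissivity scales as $j \propto B^{2}$, so clumps filling only a fraction $f \simeq 0.01$ of the volume but carrying a tenfold stronger field contribute
\[
\frac{f\,(10B)^{2}}{f\,(10B)^{2} + (1-f)\,B^{2}} \simeq 0.5
\]
of the total synchrotron flux, i.e., about half the emission comes from $1\%$ of the volume.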
Since the synchrotron filaments are expected to be more variable at higher photon energies in this scenario, sensitive hard X-ray imaging with \emph{NuSTAR}\ and \emph{ASTRO-H}\ is particularly interesting. Hard X-ray observations will allow the study of the power spectra of magnetic fluctuations and the acceleration mechanisms of CRs.
The important aspect of measuring $\gamma$-ray spectra with CTA will be to gauge the maximum energy
of CR particles, particularly that of protons which, unlike electrons, do not suffer from radiative losses in SNRs. Theoretically,
the maximum proton energy is expected to be controlled mainly by MHD waves in the shock precursor.
As discussed above, X-ray observations provide information about such turbulent waves.
The synchrotron X-ray stripes seen in Tycho's SNR (Figure\,\ref{fig:Eriksen}) are also intriguing in this regard;
the pattern of the stripes may reflect a turbulent wave spectrum \cite{Bykov11}.
In the CTA era, the synergy between X-ray and TeV $\gamma$-ray observations of SNRs will become even greater.
\section{Gamma-ray Binaries}
\subsection{Gamma-ray Loud X-ray Binaries}
Detections of TeV $\gamma$-ray emission modulated with the orbital period in a few high-mass X-ray binaries have revealed the existence of $\gamma$-ray loud X-ray binaries, so-called gamma-ray binaries. Examples include LS\,5039 \cite{HESS06_LS5039}, PSR\,B1259$-$63 \cite{HESS05_PSRB1259}, and LS\,I$+61^\circ 303$ \cite{albert08}, which have been well studied in the X-ray and TeV bands. Recent GeV $\gamma$-ray observations of the three binaries \cite{LAT09_LS5039,LAT09_LSI,LAT11_PSRB1259} with the \emph{Fermi}-LAT\ demonstrate that they are also luminous GeV sources; the GeV $\gamma$-ray energy flux exceeds both the X-ray and TeV $\gamma$-ray energy fluxes at least at some orbital phases. Gamma-ray binaries can serve as an excellent laboratory for the study of extreme particle acceleration in a periodically changing environment in the vicinity of a massive star.
Two major competing models of nonthermal emission have been discussed in the literature. The first one attributes the high-energy phenomena to the interactions of a young rotation-powered pulsar with the wind (or disk) of a companion star. Collisions between the pulsar's relativistic wind and the stellar wind lead to the formation of a {\it compactified pulsar wind nebula} (CPWN), a scaled-down version of pulsar wind nebulae \cite{TA97}.
Another model invokes {\it a microquasar}; a relativistic jet ejected by an accreting compact object accounts for the $\gamma$-ray loudness in this case \cite{Bosch08}.
The emission from the PSR\,B1259$-$63 binary is powered by a young non-accreting pulsar and this is a clear example of the CPWN system.
Cygnus X-3 has been known as a microquasar and the GeV $\gamma$-rays detected by the \emph{Fermi}-LAT\ \cite{LAT09_CygX3} and \emph{AGILE}\ \cite{AGILE09_CygX3} can be ascribed to the emission from a relativistic jet.
However, classifying a given system is difficult in most cases (e.g., LS\,5039).
In what follows, we discuss two TeV-$\gamma$-ray-emitting binaries, PSR\,B1259$-$63 and LS\,5039.
\subsection{PSR\,B1259$-$63}
PSR\,B1259$-$63 is a young radio pulsar (spin period 48\,ms) orbiting a fast-rotating O-type star LS\,2883 \cite{Negu11} in a highly eccentric 3.4\,yr orbit. The spindown power of the pulsar is $\dot{E}_{\rm p} \simeq 8\times 10^{35}$\,erg\,s$^{-1}$. Figure\,\ref{fig:LC_PSRB1259} shows the orbital lightcurves of the PSR\,B1259$-$63 system in the X-ray, GeV/TeV $\gamma$-ray, and radio bands \cite{LAT11_PSRB1259}.
\begin{figure}
\begin{center}
\includegraphics*[scale=0.425]{LC_PSRB1259_Abdo.eps}
\end{center}
\caption{Lightcurves of the PSR\,B1259$-$63 system in the (a) TeV (\emph{H.E.S.S.}), (b) GeV (\emph{Fermi}-LAT), (c) X-ray and (d) radio (2.4 GHz) bands \cite{LAT11_PSRB1259}. X-ray fluxes are in units of $\rm erg\ cm^{-2}\ s^{-1}$.}
\label{fig:LC_PSRB1259}
\end{figure}
The X-ray peaks of pre- and post-periastron are thought to arise from the interactions between the pulsar wind and the equatorial disk of the optical star. The narrow-band X-ray spectrum of the system is characterized by a power law of highly variable photon index $\Gamma \simeq 1.2 \mbox{--} 2.0$ without any detectable line emission \cite{Chern09,Uchi09}. A spectral break around $\varepsilon_{\rm br} \sim 5$\,keV was found by \emph{Suzaku}\ during the pulsar's transit of the disk, which provides an important constraint on the models \cite{Uchi09}.
The electrons and positrons are presumably accelerated at an inner shock front of the pulsar wind and adiabatically expand in the relativistic flow of the pulsar cavity. Synchrotron radiation by the accelerated electrons offers a reasonable explanation for the observed X-ray emission. The TeV $\gamma$-ray emission can be understood in terms of the anisotropic IC scattering on the intense stellar photons of the same population of electrons and positrons that produce the X-ray emission \cite{Khan07}. Future sensitive TeV observations with \emph{CTA}\ will allow detailed investigations of the X-ray and TeV connection.
Recent \emph{Fermi}-LAT\ observations have detected a remarkable GeV flare from PSR\,B1259$-$63 about 10 days after the second X-ray peak \cite{LAT11_PSRB1259}. The GeV luminosity reaches a sizable fraction of the pulsar's spindown power, implying a very efficient ($\sim 100\%$) conversion of the kinetic energy of the wind into $\gamma$-radiation. An extrapolation of the X-ray spectrum smoothly connects with the flare spectrum, suggesting that the GeV $\gamma$-ray emission might be a tail of the synchrotron spectrum. This requires extremely fast acceleration of electrons and positrons, as in the case of the Crab Nebula's GeV flare. Alternatively, the GeV flare may be explained by Comptonization of the cold pulsar wind with a wind Lorentz factor of $\gamma_{\rm w} \sim 10^4$ \cite{Khan11}.
\subsection{LS\,5039}
LS\,5039 is a high-mass X-ray binary with extended radio emission, comprised of a massive O-type star and a compact object (either neutron star or black hole). A periodic TeV $\gamma$-ray signal modulated with an orbital period of 3.906 days has been detected by \emph{H.E.S.S.}\ \cite{HESS06_LS5039}, as shown in Figure\,\ref{fig:LC_LS5039}.
\begin{figure}
\begin{center}
\includegraphics*[scale=0.64]{LC_LS5039_vertical.eps}
\end{center}
\caption{Lightcurves at X-ray (\emph{Suzaku}\ XIS), hard X-ray (\emph{Suzaku}\ HXD), TeV (\emph{H.E.S.S.}), and GeV (\emph{Fermi}-LAT) bands of LS\,5039 as a function of orbital phase \cite{Takahashi09,LAT09_LS5039}.}
\label{fig:LC_LS5039}
\end{figure}
\emph{Suzaku}\ observations, which continuously covered more than one orbital period, showed strong modulation of the X-ray emission at the orbital period \cite{Takahashi09}. In Figure\,\ref{fig:LC_LS5039}, the lightcurves obtained with the \emph{Suzaku}\ XIS and HXD are compared with the $\gamma$-ray modulated curves. The X-ray spectrum measured up to 70\,keV can be described by a simple power law with a phase-dependent photon index of $\Gamma = 1.45 \mbox{--} 1.61$. A remarkably stable X-ray modulation over a long time span of $\sim 10$\,yr was revealed through a comparison with the measurements made by previous missions \cite{Kisisita}. This finding favors the CPWN nature of LS\,5039.
The X-ray emission is likely due to synchrotron radiation, while IC scattering on stellar photons by the same population of relativistic electrons is a viable mechanism of TeV $\gamma$-ray production. The intense stellar light has dual effects; it largely enhances anisotropic IC emission but also introduces $\gamma\gamma$ opacity due to pair production, which can be used to constrain the emission site \cite{ST08}.
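The opacity constraint can be sketched from the pair-production threshold condition,
\[
\varepsilon_{\gamma}\, \varepsilon_{\star} \gtrsim (m_{e} c^{2})^{2}
\quad \Longrightarrow \quad
\varepsilon_{\star} \gtrsim 0.26 \left( \frac{\varepsilon_{\gamma}}{1\,{\rm TeV}} \right)^{-1}\,{\rm eV} ,
\]
so TeV photons are absorbed on the abundant eV-range photons of the hot O star, whereas GeV photons would require much rarer $\gtrsim 100$\,eV target photons (an order-of-magnitude estimate, ignoring the angular dependence of the threshold).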
While the orbital modulation of TeV $\gamma$-ray emission is affected by $\gamma\gamma$ absorption and anisotropic IC, X-rays are free of these effects so their modulation contains fundamental information about the system itself.
For example, the X-ray lightcurve suggests the importance of adiabatic losses.
To simultaneously explain the X-ray and TeV data, one needs to invoke an extremely efficient and rapid acceleration process, capable of accelerating electrons to 10\,TeV on a timescale of seconds.
Unlike TeV $\gamma$-rays, GeV photons are almost unaffected by $\gamma\gamma$ absorption, allowing us to probe particle acceleration in the direct vicinity of a massive star. LS\,5039 has been detected by the \emph{Fermi}-LAT\ \cite{LAT09_LS5039}. Figure\,\ref{fig:LC_LS5039} shows the LAT light curve folded with the orbital period. The LAT flux peaks near periastron, which is naturally expected in the IC model. Spectral modeling suggests the presence of a second population of $e^{\pm}$ accelerated possibly in the shocked stellar wind \cite{Bednarek11}.
Future X-ray polarimetry with \emph{GEMS}\ may be able to confirm the synchrotron origin of the X-ray emission.
Also, thermal X-ray emission from the shocked stellar wind, which constrains the properties of the pulsar and stellar winds, could be detectable with the \emph{ASTRO-H}\ SXS.
Finally, thanks to \emph{CTA}, ``phase-resolved'' $\gamma$-ray spectra will be obtained in the TeV range, which would allow for an identification of the $\gamma\gamma$ absorption feature.
\section{Active Galaxies}
\label{Sec:AGN}
The phenomenon of Active Galactic Nuclei (AGN) is related to accreting supermassive black holes (SMBHs) hosted by massive galaxies. The integrated radiative output of the accreting matter in AGN dominates the extragalactic background light in the X-ray band \cite{XRB}, while the non-thermal emission of the plasma outflowing from AGN in the form of relativistic jets is widely believed to provide the bulk, or at least a substantial fraction of the extragalactic background photons at $\gamma$-ray energies \cite{GRB}. A multiwavelength approach is required for a proper understanding of AGN physics, with the X-ray and $\gamma$-ray bands being particularly important regimes to explore.
\subsection{Blazars}
\label{Sec:blazars}
Some AGN produce jets, i.e., collimated streams of magnetized plasma outflowing with relativistic bulk velocities from the immediate vicinities of SMBHs, and carrying huge amounts of energy far beyond the host galaxies \cite{JETS}. Jetted AGN observed at small viewing angles with respect to the jet axis ($< 10$\,deg) are called blazars \cite{unification}. A relatively diverse blazar family includes low-power sources of the BL Lacertae type (BL Lacs) and powerful flat-spectrum radio quasars (FSRQs).
What is common to all blazars is that their broad-band spectra are dominated by the non-thermal jet emission produced at parsec and sub-parsec distances from the active centers. This emission, strongly boosted in the observer frame due to the relativistic bulk velocities of the emitting plasma, extends from radio up to very high energy $\gamma$-ray frequencies. It is in either the X-ray or the $\gamma$-ray regime that most of the radiatively dissipated power is released. The other crucial characteristic of blazar emission is its variability, with timescales ranging from minutes to years and decades, and flux changes from a few percent up to a few orders of magnitude. All of these findings point toward a highly non-stationary character of AGN jets, and a very efficient and extremely rapid acceleration of the jet particles to ultrarelativistic energies, typically ascribed to Fermi-type processes at relativistic shocks.
About 100 blazars, mostly of the FSRQ type, have been associated with $\gamma$-ray sources detected by EGRET onboard \emph{CGRO}\ at GeV photon energies \cite{EGRET}. Several BL Lacs have also been detected in the TeV range by the previous generation of Cherenkov telescopes, starting with Mrk\,421 \cite{Punch1992}, albeit with little overlap with the EGRET catalog. After the first two years of the \emph{Fermi}-LAT\ operation, roughly 1,000 blazars have been identified as GeV emitters, with an almost equal split between BL Lacs and FSRQs \cite{2LAC}. In addition, the first detections of TeV flares from FSRQs were recently reported \cite{3C279,PKS1222}. Still, the overwhelming majority of the TeV blazars --- a population which is nowadays growing quite rapidly thanks to the development and successful operation of the modern Cherenkov telescopes --- are BL Lacs, mostly ones selected from X-ray surveys (see, e.g., \cite{Holder}).
\begin{figure}[t]
\begin{center}
\includegraphics[scale = 0.925]{Blazars.eps}
\end{center}
\caption{Two examples of different types of blazars detected at $\gamma$-ray frequencies: 3C\,454.3 (red circles) and Mrk\,421 (black and gray squares). The broad-band spectrum of Mrk\,421 averaged over the observations taken during the 2009 multifrequency campaign and corresponding to the lower/average level of the source activity, is denoted by black squares \cite{LAT-Mrk421}. Gray squares illustrate the X-ray and TeV variability of Mrk\,421 during the 2008 higher active state \cite{MAGIC-Mrk421}. The quasi-simultaneous 2008 data for 3C\,454.3 denoted by red filled circles are taken from \cite{LAT-3C454}. The simultaneous observations of 3C\,454.3 during its flaring state (2009 Dec 3), plotted as red open circles, are analyzed and discussed by \cite{Bonnoli2011}. The thick cyan curves represent the simulated continuum sensitivities of the \emph{ASTRO-H}\ HXI and SGD instruments for point sources and 100\,ks exposures, as well as the \emph{CTA}\ sensitivity for 50\,h exposure at a zenith angle of $20$\,deg with the candidate configuration C \cite{CTA}.}
\label{fig-blazars}
\end{figure}
Figure\,\ref{fig-blazars} presents the broad-band spectral energy distributions (SEDs) of particularly bright examples of the main two types of blazars, namely of Mrk\,421, which is a famous BL Lac object, and 3C\,454.3, which is an archetypal FSRQ. The main feature to notice in the figure is that the non-thermal continua consist of two highly variable spectral components; this is a generic property of blazar spectra. The low-energy component, peaking in the $\nu - \nu F_{\nu}$ representation in infrared for 3C\,454.3, and in X-rays for Mrk\,421, is established to be due to the synchrotron emission of ultrarelativistic electrons. The high-energy component, peaking in the $\gamma$-ray regime in all cases, is most successfully modeled in terms of the inverse-Compton (IC) emission of the same population of electrons \cite{Maraschi1992,Sikora1994}.
An alternative interpretation of the high-energy blazar component, dealing with the interactions of ultrarelativistic protons with background electromagnetic fields (photomeson production, proton synchrotron emission, and the related cascades of the secondary particles) remains however a formal possibility \cite{Mannheim1993,Aharonian2000}. One of the main scientific objectives of the modern Cherenkov telescopes and \emph{Fermi}-LAT\ is in fact to distinguish between the two scenarios by providing conclusive evidence for the leptonic or hadronic origin of the detected $\gamma$-rays.
Interestingly, crucial pieces of information in the debate on the origin of the high-energy emission of blazars are gathered by means of X-ray observations. In the case of BL Lacs, the X-ray domain probes the highest-energy electrons ($E_e \geq 1$\,TeV) which, in the framework of the leptonic scenario, also produce the TeV photons via the synchrotron self-Compton process \cite{Tavecchio1998}. The correlated variability in the X-ray and TeV bands established for many BL Lacs, and Mrk\,421 in particular \cite{Takahashi2000,Fossati2008,MAGIC-Mrk421}, therefore provides strong support for the IC origin of the observed $\gamma$-rays (see Figure\,\ref{fig-fossati}). Yet the exact correlation patterns emerging from detailed analysis have at the same time revealed a picture which is much more complex than that expected in the ``standard'' models, which assume a single homogeneous emission zone and a simplified prescription for the shock acceleration of the radiating particles.
Possibly the most surprising of all such results is related to the BL Lac object PKS\,2155$-$304, for which the Cherenkov telescope \emph{H.E.S.S.}\ detected order-of-magnitude flares at TeV energies with doubling timescales as short as 200\,s, accompanied by only a modest flux enhancement at X-ray photon energies \cite{PKS2155a}. This discovery has led to many questions regarding the structure of the blazar emission zone and the particle acceleration processes involved \cite{Begelman2008}.
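The severity of this result can be quantified by a simple light-crossing argument: causality limits the size of the emitting region to
\[
R \lesssim \frac{c\, t_{\rm var}\, \delta}{1+z} \simeq 5 \times 10^{13}
\left( \frac{t_{\rm var}}{200\,{\rm s}} \right)
\left( \frac{\delta}{10} \right)\,{\rm cm}
\]
for the redshift $z = 0.116$ of PKS\,2155$-$304 (the Doppler factor $\delta = 10$ is an illustrative assumption). This is comparable to or smaller than the gravitational radius of a $\sim 10^{9}\,M_{\odot}$ black hole, $r_{\rm g} \simeq 1.5 \times 10^{14}$\,cm, which illustrates the difficulty faced by one-zone models.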
\begin{figure}
\begin{center}
\includegraphics[scale = 0.3,angle = 270]{fossati.ps}
\end{center}
\caption{Simultaneous optical ($V$ band, bottom), X-ray ($2-10$\,keV, middle) and TeV $\gamma$-ray (top) light curves for Mrk\,421 for the March 18-25, 2001 period (from \cite{Fossati2008}).}
\label{fig-fossati}
\end{figure}
To understand the multiwavelength correlations as well as the extremely short variability timescales characterizing BL Lacs, more data and higher-quality data are needed, and these can be gathered only by means of truly simultaneous, truly multiwavelength and long-term campaigns which are focused not exclusively on flaring activity of the targets, but also on their low-flux-level quiescent states. Such campaigns were hardly possible in the past due to the limited sensitivity of the previously available instruments. Recently, in the \emph{Fermi} era, the situation has improved \cite{LAT-Mrk421,LAT-Mrk501}, but still not many BL Lacs are bright enough (especially during their quiescent states) for the currently operating high-energy instruments to extract series of high-quality spectra in short multiple exposures.
The participation of X-ray telescopes providing detailed spectral information in future monitoring programs involving \emph{CTA}\ is crucial because, as stated above, the X-ray domain carries information regarding the electrons directly involved in shaping the $\gamma$-ray properties of BL Lacs. Instruments such as HXI and SGD onboard \emph{ASTRO-H}\ are expected to be particularly relevant, since these will enable us, for the very first time, to track the spectral evolution of several bright BL Lacs above 10\,keV photon energies in $\sim 100$\,ks (or shorter) exposures. An exciting and unique possibility is the detection of the polarization in hard X-rays by SGD during particularly strong flaring states of the brightest BL Lacs, like that of Mrk\,501 in 1997 when the synchrotron continuum of the source extended up to the $\gtrsim 100$\,keV photon energy range \cite{Pian1998}.
A precise characterization of the X-ray spectra of TeV-emitting BL Lacs is important also for another reason: the highest-energy $\gamma$-ray photons emitted by cosmologically distant sources suffer from absorption by the extragalactic background light (EBL) in the IR--to--UV band due to the photon-photon annihilation process \cite{Gould1966}. The observed TeV blazar fluxes therefore have to be de-absorbed --- with the correct number density of the EBL photons for a given redshift of a target --- before attempting any modeling. But the exact spectral distribution of the EBL, which is shaped by the cosmological evolution of galaxies, is not known precisely \cite{Hauser2001}. One of the main science goals of \emph{CTA}\ is, in fact, to invert the problem and to constrain the evolution of the EBL through a precise characterization of blazar spectra, enabling reliable identification and analysis of the EBL-related absorption features in the TeV range. Without good-quality, simultaneous broad-band X-ray data, on the other hand, disentangling the intrinsic curvature and the absorption features in the observed $\gamma$-ray spectra of blazars relies heavily on assumptions regarding the energy distribution of the emitting electrons \cite{HESS-EBL}. Any robust determination of the intrinsic TeV properties of distant BL Lacs, and hence any precise determination of the EBL level, requires simultaneous high-quality X-ray observations.
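Formally, the de-absorption step amounts to correcting the observed flux as
\[
F_{\rm int}(E) = F_{\rm obs}(E)\, e^{\tau_{\gamma\gamma}(E,z)} ,
\]
where $\tau_{\gamma\gamma}(E,z)$ is the optical depth for pair production on the EBL. Since $\tau_{\gamma\gamma}$ rises steeply with energy, even modest uncertainties in the EBL photon density are exponentially amplified in the reconstructed intrinsic TeV spectrum.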
In the case of FSRQs the situation is different than in the case of BL Lacs, since here the X-ray observations probe instead the low-energy side of the high-energy emission component (see Figure\,\ref{fig-blazars}). This high-energy component in the spectra of FSRQs dominates energetically over the synchrotron one, reaching apparent $\gamma$-ray luminosities as large as $\sim 10^{49}$\,erg\,s$^{-1}$ during the flaring states \cite{Bonnoli2011}. FSRQs display in addition very flat X-ray continua, in many cases characterized by photon indices $\Gamma_{\rm X} < 1.5$ within a broad energy range from below keV up to hundreds of keV \cite{Sikora2009}.
In the framework of leptonic models, the low-energy tail of the high-energy emission component of FSRQs is produced via the IC process involving the lowest-energy electrons, down to the mildly relativistic regime \cite{Ghisellini2009}. A large population of such mildly relativistic leptons, outnumbering the ultrarelativistic electrons and carrying the bulk of the total jet kinetic power, should however manifest itself as a distinct steep-spectrum component in soft X-rays. The fact that the X-ray continua of FSRQs are flat and remain so down to the lowest X-ray frequencies therefore has important implications for the jet energetics: as demonstrated by several authors (e.g., \cite{Sikora2000,Celotti2008}), the lack of any pronounced soft X-ray excess in the spectra of FSRQs excludes in particular the case of particle-dominated purely leptonic jets, implying either a significant proton content \emph{or} Poynting flux-dominated outflows.
One caveat is that the above conclusion is based on possibly oversimplified emission models, which have recently been questioned to some extent by the aforementioned detections of short TeV flares from several FSRQs (e.g., \cite{Tavecchio2011}). More extensive $\gamma$-ray monitoring of FSRQs with continuous coverage at X-ray photon energies, involving soft and hard X-ray instruments like the SXS, HXI and SGD onboard \emph{ASTRO-H}, is therefore needed for a robust determination of the jet energetics in these systems. We also note that the hard X-ray regime is particularly well suited for studying high-redshift FSRQs, which are of interest for understanding the cosmological evolution of jetted AGN \cite{Volonteri2011}.
\subsection{Radio Galaxies}
\label{Sec:RGs}
Radio galaxies (RGs), with their relativistic jets oriented at intermediate and large viewing angles with respect to the line of sight, are believed to constitute the parent population of blazar sources \cite{unification}. As a result of the larger inclinations, the observed non-thermal emission produced within the innermost parts of the jets in RGs is not amplified by relativistic beaming as dramatically as in the case of blazars, and hence different emission components, which are hardly observable in blazar spectra, may become prominent. For RGs oriented at particularly large inclinations, the radiative output of the unresolved jets may even be strongly de-beamed in the observer frame, so that the bulk of the observed emission may originate at larger distances from the nuclei, where the relativistic outflows have decelerated substantially and beaming effects are less severe.
Before the launch of \emph{Fermi}-LAT\ only one radio galaxy, Cen\,A, had been firmly established as a source of MeV--GeV photons, by \emph{CGRO}\ \cite{CGRO-CenA}. At higher photon energies, a long exposure by the Cherenkov telescope \emph{HEGRA}\ allowed for the tentative detection of another object, M\,87 \cite{HEGRA-M87}. Both sources are low-power but particularly nearby systems. These two cases, when compared with the roughly 100 blazars detected by EGRET and the previous Cherenkov telescopes, imply that RGs are relatively weak $\gamma$-ray emitters. Yet they are not ``$\gamma$-ray silent''. Indeed, after two years of the \emph{Fermi}-LAT\ operation and with the new generation of Cherenkov telescopes in hand, the sample of RGs detected in $\gamma$-rays has increased to about 10 objects in the GeV range \cite{LAT-MAGN,2LAC}, and four objects at TeV photon energies \cite{VHE-CenA,VHE-IC310}. \emph{CTA}\ will hopefully further enlarge the population of non-blazar TeV-emitting AGN.
We also mention here that recently \emph{Fermi}-LAT\ has resolved giant (Mpc-scale) lobes surrounding the Cen\,A radio galaxy at GeV energies \cite{LAT-CenA-lobes}, proving in this way that $\gamma$-rays are being efficiently generated there, despite the advanced age and relaxed nature of the structure (see Figure\,\ref{fig-CenA-lobes}, and \cite{Takeuchi2012} for the case of the giant radio galaxy NGC\,6251). Probing the Cen\,A lobes at TeV and X-ray photon energies awaits future observations.
\begin{figure}
\begin{center}
\includegraphics[scale = 1.0]{CenA-lobes.eps}
\end{center}
\caption{Adaptively smoothed \emph{Fermi}-LAT\ $\gamma$-ray ($>200$\,MeV) counts map centered on the Cen\,A radio galaxy (from \cite{LAT-CenA-lobes}), showing emission from the extended lobes in the system.}
\label{fig-CenA-lobes}
\end{figure}
Increasing the sample of ``$\gamma$-ray loud'' RGs is important for several reasons. First, modeling of such sources provides an independent check of the blazar models being developed, since, as noted above, RGs should be considered blazars observed at larger viewing angles. Second, as also already emphasized, $\gamma$-ray observations of RGs may reveal some ``exotic'', or at least non-standard, processes possibly related to the production of high-energy photons and particles within active nuclei \cite{Neronov2007}, large-scale jets \cite{Stawarz03} and extended lobes \cite{Hardcastle-CenA}. Third, increasing the sample of $\gamma$-ray RGs will enable us to understand the contribution of nearby non-blazar AGN to the extragalactic $\gamma$-ray background \cite{Inoue2011}. And fourth, as discussed below in more detail, studying RGs at $\gamma$-ray frequencies may shed some light on the still poorly understood jet-launching mechanisms in AGN. All of these research directions require a multiwavelength approach, but the relevance of joint X-ray observations is particularly obvious in the last case.
As in the case of blazars, the X-ray data gathered for RGs offer, in principle, constraints on the highest-energy and lowest-energy segments of the population of radiating non-thermal electrons (see \S\,\ref{Sec:blazars}). But unlike in blazar sources, the observed X-ray emission of RGs receives a significant, and sometimes even dominant, contribution from the emission of the accretion disks and disk coronae \cite{Antonucci2011}. This constitutes an opportunity to investigate the jet-disk coupling, and hence the jet-launching processes, by means of joint X-ray and radio or $\gamma$-ray observations of RGs.
\begin{figure}[t]
\begin{center}
\includegraphics[scale = 0.925]{RGs.eps}
\end{center}
\caption{Two examples of different types of radio galaxies detected at $\gamma$-ray frequencies: Cen\,A (black circles) and 3C\,120 (red squares). The compiled historical data corresponding to the unresolved core of a low-power but nearby galaxy Cen\,A, including more recent LAT and \emph{H.E.S.S.}\ detections, are taken from \cite{LAT-CenA-core}. The archival non-simultaneous data for the unresolved core of a high-power galaxy 3C\,120, including reanalyzed LAT fluxes (following \cite{kataoka-BLRGs}), are represented by red squares. The sensitivity curves for different X-ray and $\gamma$-ray instruments are the same as in Figure\,\ref{fig-blazars}.}
\label{fig-RGs}
\end{figure}
In Figure\,\ref{fig-RGs} we plot the broad-band SEDs of two RGs detected at $\gamma$-ray frequencies: the low-power system Cen\,A and the high-power galaxy 3C\,120. In the case of Cen\,A, the multiwavelength spectrum of the unresolved core seems to be dominated by the blazar-type emission of a misaligned jet; in particular, the high-energy emission component, extending up to the TeV range, seemingly resembles the high-energy IC peak of blazar sources. The most recent ``standard'' blazar-type modeling of this component, although quite successful up to GeV frequencies, can hardly accommodate the observed TeV fluxes \cite{LAT-CenA-core}. Hence either a contribution from some other ``exotic'' processes at the highest photon energies, or a modification of the blazar modeling, is implied. To make the situation more complex, there is an ongoing debate on whether the X-ray continuum of the Cen\,A core is indeed entirely due to the jet rather than the accretion flow, and on the contribution of the extended structures in the source to the observed TeV emission.
The observations of nearby RGs with future X-ray telescopes such as \emph{GEMS}, \emph{NuSTAR}\ and \emph{ASTRO-H}\ will enable the jet and the disk contributions to the observed X-ray emission of the systems to be disentangled by obtaining high-quality spectra with unprecedented energy resolution up to tens and hundreds of keV photon energies. Such a rich and previously hardly available dataset, in addition to providing important diagnostics regarding the accretion process in AGN, will also allow us to understand the origin of $\gamma$-ray emission of RGs.
3C\,120 constitutes a particularly interesting case in this respect. Here the entire X-ray emission was argued to be produced by the accreting matter, with the possible exception of the soft X-ray band \cite{Kataoka2007}. Variable GeV emission of the source, on the other hand, tentatively detected by LAT, seems to be related to the pc-scale jet \cite{kataoka-BLRGs}. Importantly, the long-term monitoring of 3C\,120 at X-ray and radio frequencies has revealed a nontrivial connection between the two bands, with the dips in the X-ray emission followed by ejections of bright superluminal knots along the radio outflow \cite{Marscher2002}. In this way a direct observational link between the accretion and jet launching processes has been established for the very first time in the case of an AGN. The proposed interpretation involved X-ray dips due to the disappearance of the inner parts of the accretion disk leading to the ejection of the excess matter along the jet axis, analogous to what is observed in the Galactic jet sources. The possibility that the related phenomenon may be also observed at $\gamma$-ray frequencies, with the $\gamma$-ray flares following the dips in the accretion-related X-ray emission, awaits the operation of \emph{CTA}\ and future X-ray missions.
\section{Clusters of Galaxies}
\label{Sec:Clusters}
Merging processes leading to the formation of clusters of galaxies release huge amounts of gravitational energy ($\gtrsim 10^{64}$\,erg) on timescales of the order of $\sim$\,Gyr \cite{Sarazin1986}. While much of this energy is contained in thermal plasma with temperatures $kT \lesssim 10$\,keV emitting X-ray photons via the bremsstrahlung process, part of it may be channelled to accelerate a small fraction of particles from the thermal pool to ultrarelativistic energies, and to form in this way an energetically relevant population of cosmic rays (CRs) within the intracluster medium (ICM). In addition to the thermal and non-thermal baryonic particles, a large amount of dark matter (DM) is believed to be present in massive clusters of galaxies.
Both DM and CRs are supposed to give observable signatures at $\gamma$-ray frequencies, due to DM annihilation or decay processes, and due to interactions of hadronic CRs with the ambient gas \cite{Blasi2007,Jeltema2009}. There is an ongoing search for such signatures using currently available $\gamma$-ray instruments. Importantly, however, the production of $\gamma$-rays by DM and CRs should be accompanied by the production of secondary $e^{\pm}$ pairs. The presence of such secondary leptons should then result in observational signatures at lower frequencies, and in particular in the radio and X-ray domains.
Non-thermal activity in the ICM manifests clearly in the phenomenon of giant ($\sim$\,Mpc-scale) radio halos. These are roughly spherical and low-surface brightness structures centered at the position of the peaks in galaxy distributions, which are found in about $10\%$ of the systems \cite{Ferrari2008}. It was long speculated whether the synchrotron-emitting electrons populating giant halos are in fact secondary particles resulting from hadronic CR interactions \cite{Dennison1980}, or even DM annihilation/decay processes. Even though this possibility still cannot be excluded, a number of arguments and considerations were presented against such a scenario \cite{Magic-clusters}. Instead, radio-emitting electrons forming giant radio halos are now most widely believed to be accelerated directly from the thermal pool of the ICM by magnetic turbulence induced by merger processes at relatively early stages of the cluster lifetime \cite{Petrosian2001}.
\begin{figure}[t]
\includegraphics[scale = 0.925]{Coma.eps}
\caption{High- to very high-energy spectrum of the Coma cluster of galaxies (Abell 1656). The thick green curve illustrates the thermal emission of the cluster gas ($k T \simeq 8.3$\,keV, $L_{\rm 0.1-2.4\,keV} \simeq 3 \times 10^{44}$\,erg\,s$^{-1}$). The hard X-ray ($20-80$\,keV) upper limit denoted by the dark green arrow is taken from \cite{Wik2011}. \emph{Fermi}-LAT\ upper limits within the $0.2-1$, $1-10$, and $10-100$\,GeV photon energy ranges corresponding to a point source, to a King profile, and to the two-dimensional Gaussian with the $68\%$ contamination radius of $0.8$\,deg centered at the position of the cluster are denoted by black, dark blue, and blue arrows, respectively \cite{LAT-clusters}. Upper limits at $1$, $5$, and $10$\,TeV photon energies for the source region of interest with the radius of 0, $0.2$ and $0.4$\,deg, as reported by the \emph{H.E.S.S.}\ Collaboration \cite{HESS-Coma}, are denoted by red, magenta, and pink arrows, respectively. The sensitivities for \emph{ASTRO-H}\ and \emph{CTA}\ (thick cyan curves) are the same as in Figure\,\ref{fig-blazars}. The black and gray solid curves represent the predictions regarding the non-thermal emission of primary and secondary electrons accelerated by the magnetic turbulence within the Coma cluster for the central magnetic field intensity $5$\,$\mu$G and $2$\,$\mu$G, respectively (from \citep{Brunetti2011}). The black and gray dashed curves illustrate two exemplary models for the DM-induced emission in the Coma cluster for the intermediate neutralino mass of $60$\,GeV (from \citep{Colafrancesco2011}). The thick black line corresponds to the hadronic model of the radio halo in the Coma cluster by \citep{Pinzke2010}, normalized to the minimum flux prediction of \cite{Pfrommer2008} for the spectral index $2.1$.
}
\label{fig-Coma}
\end{figure}
It was recognized early on that the same electrons which produce synchrotron photons at radio frequencies should also lead to the production of higher-energy emission via IC upscattering of the cosmic microwave background radiation \cite{Rephaeli1979}. This additional emission component could then be probed at hard X-ray photon energies as an excess over the thermal (free-free) emission of the hot intracluster gas. Looking for such an excess using different X-ray instruments has resulted in contradicting claims in the past \cite{Rephaeli2008}. The most recent studies using \emph{Suzaku}\ and \emph{SWIFT}\ satellites indicate however that there is no power-law excess within the $10-100$\,keV photon energy range down to the level of a few $\times 10^{-12}$\,erg\,cm$^{-2}$\,s$^{-1}$ for a number of the brightest systems, with the exception of the peculiar Bullet cluster \cite{Ajello2009,Wik2011}. These upper limits, when combined with the radio data, translate into lower limits for the magnetic field intensity within the ICM $> 0.3$\,$\mu$G \cite{Rephaeli2008,Wik2011}.
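The logic behind such field estimates can be sketched quantitatively: for a single electron population, the ratio of synchrotron to IC luminosity equals the ratio of magnetic to CMB photon energy densities, $L_{\rm syn}/L_{\rm IC} = U_B/U_{\rm CMB}$, so a measured radio flux combined with an IC hard X-ray upper limit gives a lower limit on $B$. The following is a minimal order-of-magnitude sketch of this standard argument; the flux values are hypothetical placeholders (not the measured Coma numbers), and a real analysis matches the emitting electron energies band by band.

```python
import math

# Energy density of the CMB at z = 0: u = a * T^4
A_RAD = 7.5657e-15          # radiation constant [erg cm^-3 K^-4]
T_CMB = 2.725               # CMB temperature [K]
u_cmb = A_RAD * T_CMB**4    # ~4.2e-13 erg cm^-3

def b_lower_limit(f_radio, f_xray_upper):
    """Lower limit on B [G] from L_syn / L_IC = U_B / U_CMB.

    f_radio      -- synchrotron (radio halo) flux [erg cm^-2 s^-1]
    f_xray_upper -- upper limit on the IC hard X-ray flux [erg cm^-2 s^-1]
    Both inputs are illustrative placeholders.
    """
    u_b_min = u_cmb * f_radio / f_xray_upper   # U_B > U_CMB * F_syn / F_IC
    return math.sqrt(8.0 * math.pi * u_b_min)  # B = sqrt(8 pi U_B)

# Hypothetical fluxes: radio 1e-13, hard X-ray upper limit 3e-12 (cgs)
print(b_lower_limit(1e-13, 3e-12))  # ~6e-7 G, i.e. sub-microgauss
```

With these placeholder inputs the limit comes out at the sub-microgauss scale, the same order as the $>0.3$\,$\mu$G limits quoted above; a tighter X-ray upper limit strengthens (raises) the lower limit on $B$.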
Searching for the observable signatures of the hadronic CR and DM populations in galaxy clusters is therefore a multiwavelength effort, even though it predominantly involves $\gamma$-ray instruments. As such, it will continue vigorously in the future with CTA. At present, despite extensive investigation, only upper limits for the $\gamma$-ray emission of clusters of galaxies have been provided both in the GeV \cite{Reimer2003,LAT-clusters} and in the TeV ranges (e.g., \cite{HESS-Coma,Magic-clusters}), and these are typically at the level of $\sim 10^{-12}$\,erg\,cm$^{-2}$\,s$^{-1}$ (see Figure\,\ref{fig-Coma} for the case of the Coma cluster). Such limits provide constraints on the DM models, and also limit the contribution of the hadronic CRs to the cluster pressure down to the level of a few percent, at most.
\begin{figure*}
\begin{center}
\includegraphics[scale = 0.335, angle = 270]{A426_center_FeHeK.ps}
\includegraphics[scale = 0.3, angle = 270]{a426_center_sxs_sxi_hxi.ps}
\end{center}
\caption{Simulated spectra for 100\,ks \emph{ASTRO-H}\ observations of the Perseus Cluster. {\bf (left)}
SXS spectra around the iron K line complex.
Line profiles assuming $\sigma=0$, 100 and 200\,$\rm km\ s^{-1}$ turbulence.
{\bf (right)} SXS (black), SXI (red), and HXI (blue) spectra
for hot plasma with three different temperatures of 0.6, 2.6 and 6.1\,keV ($r < 2'$; from \cite{Ref:ASTRO-H}).}
\label{fig-perseus}
\end{figure*}
A variety of choices regarding the candidates for the DM particles results, however, in different expectations regarding the high-energy emission of clusters related to the DM decay and annihilation processes \cite{Colafrancesco2011,Pinzke2011}. A variety of viable CR acceleration processes also plays a role. For example, the most widely considered acceleration mechanism for hadronic CRs in galaxy clusters is related to the 1st-order Fermi process operating at the fronts of large-scale shocks formed during the cluster mergers \cite{Bykov2000,Miniati2001}. But shock-produced CRs (both primary and secondary particles) may also be efficiently re-accelerated within the ICM via stochastic interaction with magnetic turbulence \citep{Brunetti2011}. Injection of ultrarelativistic particles into the ICM may also be related to the activity of AGN located in cluster centers \citep{Timokhin2004,Fujita2007}. The insufficient knowledge regarding the kinematics of the cluster gas and the structure of the cluster magnetic fields affects the model predictions regarding the spatial distribution and the energy spectra of the accelerated CRs, and hence the non-thermal emission of the ICM (see Figure\,\ref{fig-Coma}).
Future X-ray observations, which will not only allow for more robust constraints on the hard X-ray emission of clusters but will also provide detailed insight into the kinematics of the ICM, and hence into the various energy dissipation processes involved, are therefore of primary interest. Missions such as \emph{ASTRO-H}\ can accomplish the task by means of spectrometric observations probing bulk plasma velocities and/or turbulence at a resolution corresponding to a speed of a few $\times 100$\,km/s, \emph{together} with an arcmin imaging system in the hard X-ray band with a sensitivity orders of magnitude better than previous missions (see Figure\,\ref{fig-perseus}).
\section{Conclusions}
During the next decade, X-ray catalogues including about 200,000 clusters located out to high redshifts, together with about 3 million AGN, will be available thanks to the operation of \emph{e-ROSITA}. Around the same time, \emph{NuSTAR}\ and \emph{ASTRO-H}, carrying high-resolution hard X-ray mirrors with unprecedented performance, will open a new chapter in the studies of high-energy radiation of astrophysical sources in the hardly explored regime from $10$\,keV up to several hundreds of keV photon energies. A soft $\gamma$-ray survey at photon energies below 1\,MeV, with a sensitivity improved by 1--2 orders of magnitude with respect to previous surveys, will become possible thanks to the SGD instruments onboard \emph{ASTRO-H}. The collected data will enable us to monitor highly variable synchrotron emission of the highest-energy electrons in blazars and Galactic binaries, to track the evolution of supermassive black holes which are heavily obscured, and in general to probe with unprecedented accuracy the accretion process in different types of AGN. The new X-ray instruments will also uniquely allow for mapping of the spatial extent of the hard X-ray emission in diffuse non-thermal structures, thus tracing the sites of particle acceleration in clusters of galaxies and SNRs. In parallel, imaging spectroscopy with the energy resolution $< 5-7$\,eV brought by the micro-calorimeter onboard \emph{ASTRO-H}\ will reveal line broadening and Doppler shifts due to turbulent or bulk velocities in such extended systems. \emph{GEMS}\ will perform the first sensitive X-ray polarization survey of several classes of X-ray emitting sources characterized by strong gravitational or magnetic fields.
All these breakthroughs in studying high-energy phenomena in the Universe are expected to happen at the time of the operation of the Cherenkov Telescope Array. The synergy between the TeV observations with \emph{CTA}\ and the X-ray observations with the future missions discussed here can hardly be overemphasized: parallel investigations in both regimes are indeed highly complementary and indispensable for understanding the complex physics of the Galactic cosmic-ray accelerators, active galaxies, and clusters of galaxies. This synergy primarily concerns constraining particle acceleration processes, accretion in black hole systems, and the kinematics of the background plasma in various astrophysical sources of high-energy emission. The cosmological context should not be forgotten either, since future X-ray missions together with \emph{CTA}\ are expected to enable significant progress in understanding the origin of the high-energy cosmic background radiation, as well as the nature of dark matter particles through the study of clusters of galaxies.
\bibliographystyle{elsarticle-num}
\section{Results}
\label{Res}
\subsection{2D $\rightarrow$ 1D transitions}
\label{sec:2D1D}
We first investigate the simplest few-particle two-dimensional
systems, undergoing $\rm 2D\rightarrow1D$ structural transitions. In
the case of $\alpha=1$, the confinement potential is symmetric and
particles form ordered states that were previously observed experimentally and
modelled theoretically \cite{Kong, Saint-Jean}. Systems with $N=3,\, 4,\, 7$ particles
form only one stable configuration (the ground state), while clusters with
$N=5,\,6$ particles in symmetric traps support both a ground state and one metastable
configuration. The various states can be represented by listing the
occupation numbers of the different shells --- the ground state of the 5-particle
system is therefore the configuration $(0,\,5)$ and the metastable state is
$(1,\,4)$ (for the arrangements of particles see Figure \ref{fig1}). As
the anisotropy parameter $\alpha$ departs from unity,
metastable states can become ground states, some states can
disappear and new ones appear. In all investigated cases, however,
there is only one stationary configuration near the dimensional
transition --- a zigzag-shaped pattern, which soon becomes a 1D linear chain
of particles at $\alpha>\ac$.
In the simplest symmetric case of $N=3$, the particles form an
equilateral triangle in the $(xy)$ plane. As the value of $\alpha$
increases, the triangular configuration is gradually deformed until
the transition occurs at $\alpha=\ac \approx 1.55$, as shown in
Figure \ref{fig1} for $\kappa=0$. In fact, the dimensional transition in the
three-particle system can be modelled analytically, which gives the value
$\ac=\sqrt{12/5}$ (see Appendix \ref{appendix}); numerical simulation reproduces exactly this value.
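The quoted value $\ac=\sqrt{12/5}$ can also be verified with a few lines of linear-stability analysis. Assuming dimensionless units in which the energy is $\sum_i \frac12 (x_i^2+\alpha^2 y_i^2) + \sum_{i<j} 1/r_{ij}$ (our normalization, chosen to be consistent with the quoted $\ac$; not necessarily the authors' implementation), the linear chain loses stability to the zigzag mode when $\alpha^2$ drops below the largest eigenvalue of the transverse Hessian of the interaction energy:

```python
import numpy as np

# Equilibrium chain x-positions for N = 3: (-d, 0, d), with d^3 = 5/4
# following from the force balance d = 1/d^2 + 1/(2d)^2.
d = (5.0 / 4.0) ** (1.0 / 3.0)
x = np.array([-d, 0.0, d])
N = len(x)

# Transverse (y) curvature of the Coulomb energy about the chain:
# each pair separated by r acts as a negative spring -1/r^3 between
# the y displacements of its two particles.
K = np.zeros((N, N))
for i in range(N):
    for j in range(N):
        if i != j:
            k = 1.0 / abs(x[i] - x[j]) ** 3
            K[i, i] += k
            K[i, j] -= k

# The chain is stable while alpha^2 exceeds every eigenvalue of K;
# the zigzag instability sets in at alpha_c = sqrt(max eigenvalue).
alpha_c = np.sqrt(np.linalg.eigvalsh(K).max())
print(alpha_c, np.sqrt(12.0 / 5.0))  # both ~1.549
```

The unstable eigenvector is the zigzag mode $(1,\,-2,\,1)$, in agreement with the pattern described in the text.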
\begin{figure}[ht]
\centering
\includegraphics[width=\figa]{1}
\caption{Value of the order parameter $\op$ as a function of the
anisotropy $\alpha$ for 2D clusters with $N=3\text{--} 5$ particles. Insets
show typical arrangements of particles in various stages of
compression. Note how quickly the metastable
state $(1,\,4)$ disappears (top left corner).}
\label{fig1}
\end{figure}
The structural transition is slightly more intriguing in the case of
$N=4$ particles. As can be deduced from the evolution of $\op$ in Figure \ref{fig1},
there is a discontinuity in the first derivative of the order parameter
${\rm d} \op/{\rm d}\alpha$ at the anisotropy value
$\alpha \approx 1.69$. As it turns out, the compression of the
four-particle cluster proceeds in two stages. In the first, slow stage, the four particles form a
rhombus-shaped structure, with the particles located exactly on the
$x$ or $y$ axis. Later, a stage of rapid compression takes over,
with two particles departing from the line $x=0$ and forming a
zigzag-shaped pattern. Two typical rhombus- and zigzag-shaped configurations are presented
in the insets of Figure \ref{fig1}. As shown in a previous
study, the transition between these two
stages is followed by a specific oscillation of the heat capacity
\cite{Rancova}. An analogous scenario applies to the
other clusters with an even number of particles. The dimensional transition is observed at
$\ac \approx 2.04$, where $\op$ suddenly drops to zero.
Two competing configurations are first observed in the case of $N=5$
particles, namely the states $(0,\, 5)$ and $(1,\,4)$. As depicted in
Figure \ref{fig1}, the metastable state
$(1,\,4)$ exists only in a narrow window of anisotropy, $1<\alpha<1.05$. The
pentagonal ground state, on the other hand, undergoes a continuous
structural transformation, forms a zigzag-shaped cluster and finally
becomes linear at $\ac\approx2.50$.
Structural transitions become more complex for
systems with $N \geq 6$. Six
particles in a symmetric confinement can form two stable states. As
$\alpha$ increases, the metastable $(0,\, 6)$ state vanishes at
$\alpha=1.05$, only to reappear later
and become the new ground state. The former ground state $(1,\, 5)$ then
disappears completely near $\alpha=1.22$. Six- and eight-particle
clusters both feature the same discontinuity in ${\rm d} \op/{\rm d}\alpha$
as the four-particle system discussed above.
We have already seen in Figure \ref{fig1} that the critical value of
the parameter $\ac$ increases with $N$ when the inter-particle
interaction is of the Coulomb type. As might be expected, $\ac$ also
grows as the Yukawa screening parameter $\kappa$ is
increased, which is shown in Figure \ref{fig2} for $N=3 \text{--} 6$. The
critical value of the anisotropy increases rapidly for
$\kappa<1.5$ and almost saturates for high values of screening, i.e.
$\kappa>4.0$.
The lines in Figure \ref{fig2} represent boundaries between different
phases of the clusters. The structures are two-dimensional below the line
and form linear configurations above it. Although we devote most
of the present work to the transitions induced by deformations
of the confinement well, structural changes can actually be caused by variations
in any of the three parameters $\alpha$, $\kappa$ or $N$. As the figure shows, a two-dimensional
cluster can become linear without any change in $\alpha$, for
example when the value of $\kappa$ is
diminished, or when a particle is removed from the system.
\begin{figure}[ht]
\centering
\includegraphics[width=\figa]{2}
\caption{Phase diagram for two-dimensional Yukawa clusters of $N$
particles. Below the corresponding line, the cluster of $N$ particles
is two-dimensional; above it, the cluster is one-dimensional. Dimensional
transitions can thus be induced by variations in any of the three parameters
$\alpha$, $\kappa$ or $N$.}
\label{fig2}
\end{figure}
We further examine the power-law behavior of the order parameter $\op$ near
its critical point $\ac$ in more detail. The power law is easily identified by
plotting the logarithm of the order parameter, $\lg \left( \op \right)$, as a
function of $\lg(\ac-\alpha)$. The function turns out to be linear for small
values of $(\ac-\alpha)$. This observation confirms that in the vicinity of
the transition point, the order parameter $\op$ demonstrates a power-law
behavior, which is a typical property of second-order phase transitions:
\begin{equation}
\op \propto (\ac -\alpha)^\gamma.
\end{equation}
We determine the values of the exponent $\gamma$ near the critical point by
analyzing the slope of the above-discussed log-log plot. Namely, we take the
numerical derivative of the function
$\lg \left( \op \right)=f \left( \lg \left(\ac-\alpha \right)\right)$.
Calculated exactly at the critical point, this derivative yields the exact
`theoretical' value of the critical exponent. However, in an experimental or
numerical investigation the precise location of the critical point may not
be known. Thus, by calculating the numerical derivative a bit away from the
critical point we are able to mimic the uncertainty and errors present in a
realistic experimental situation. It turns out that in all cases $\gamma=1/2$
as long as $\alpha$ is close to its critical value $\ac$ (Figure \ref{fig3}).
However, the local value of the exponent (determined as the numerical derivative)
is very sensitive to the deviation of the anisotropy parameter from $\ac$.
Figure \ref{fig3} shows the dependence of the power-law exponent
$\gamma$ on the deviation of $\alpha$ from its critical value for a
2D cluster with $N=3$ particles and four different values of the screening
length. We see that $\gamma$ departs from the value of $1/2$
significantly when the deviation from $\ac$ reaches the third decimal place, and
attains its minimum near the first decimal place. Furthermore, the exponent
$\gamma$ attains significantly lower values far from $\ac$ in systems with
stronger screening. Other than that, there are no qualitative
differences in the critical behavior of systems with different values of $\kappa$. The
exponent $\gamma$ of the other systems with $N>3$ behaves similarly to
the case of three particles presented here.
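This fitting procedure can be reproduced with a short numerical experiment on the $N=3$ Coulomb cluster: minimize the energy for $\alpha$ slightly below $\ac=\sqrt{12/5}$ and estimate $\gamma$ as the slope of $\lg\op$ versus $\lg(\ac-\alpha)$, with $\op$ taken as the r.m.s. $y$ coordinate as in the text. The sketch assumes dimensionless units with energy $\sum_i \frac12(x_i^2+\alpha^2 y_i^2)+\sum_{i<j} 1/r_{ij}$ (our convention, not necessarily the authors' implementation):

```python
import numpy as np
from scipy.optimize import minimize

ALPHA_C = np.sqrt(12.0 / 5.0)   # analytic critical anisotropy, N = 3, kappa = 0

def energy(p, alpha):
    r = p.reshape(3, 2)                                  # rows: (x_i, y_i)
    e = 0.5 * np.sum(r[:, 0] ** 2 + alpha ** 2 * r[:, 1] ** 2)
    for i in range(3):
        for j in range(i + 1, 3):
            e += 1.0 / np.linalg.norm(r[i] - r[j])       # Coulomb repulsion
    return e

def grad(p, alpha):
    r = p.reshape(3, 2)
    g = np.column_stack([r[:, 0], alpha ** 2 * r[:, 1]])  # confinement part
    for i in range(3):
        for j in range(i + 1, 3):
            dv = r[i] - r[j]
            f = dv / np.linalg.norm(dv) ** 3
            g[i] -= f
            g[j] += f
    return g.ravel()

def order_parameter(alpha):
    d = (5.0 / 4.0) ** (1.0 / 3.0)                       # chain spacing, d^3 = 5/4
    # start from the chain plus a small zigzag kick so the minimizer
    # rolls off the (unstable) chain saddle onto the 2D branch
    p0 = np.array([-d, 1e-3, 0.0, -2e-3, d, 1e-3])
    res = minimize(energy, p0, args=(alpha,), jac=grad,
                   method="BFGS", options={"gtol": 1e-12})
    y = res.x.reshape(3, 2)[:, 1]
    return np.sqrt(np.mean(y ** 2))

devs = np.array([1e-4, 1e-3])
ops = np.array([order_parameter(ALPHA_C - dv) for dv in devs])
gamma = (np.diff(np.log(ops)) / np.diff(np.log(devs)))[0]
print(gamma)  # ~0.5 close to the transition
```

Evaluating the slope further from $\ac$ (larger deviations) is precisely where the local exponent starts to drift away from $1/2$, as discussed above.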
\begin{figure}[ht]
\centering
\includegraphics[width=\figa]{3}
\caption{Critical exponent of the transition
observed in 2D three-particle systems as a function of deviation of
$\alpha$ from its critical value $\ac$ for different strengths of
screening.}
\label{fig3}
\end{figure}
A thorough analysis of transitions in a planar cluster of five particles was
reported by Sheridan in \cite{Sheridan}. The value of the critical parameter given there is
$\ac=2.96$ and the critical exponent is said to be
$\gamma=0.39 \ne 1/2$, while we find the critical anisotropy parameter to
be $\ac=3.01$. According to our calculations, the value
$\alpha=2.96$ corresponds to the exponent $\gamma=0.37$, which is
close to the value reported by Sheridan et al. The reason for the differences
between the results published in \cite{Sheridan} and ours therefore
almost undeniably lies in the extreme sensitivity of $\op$ to the
value of the anisotropy parameter and the distance from its critical value. Thus, the accuracy of the results
presented in \cite{Sheridan} is probably not sufficient.
As demonstrated in \cite{Sheridan} and shown in
Figure \ref{fig2}, continuous transitions
might also be induced by variations in the screening strength
$\kappa$. Keeping $N$ and $\alpha$ constant, we gradually change the
value of $\kappa$ while tracking the changes that the parameter
$\op$ undergoes. The dimensional transition takes place when the value of $\op$
suddenly drops to zero, at which point the critical value of $\kappa$
is obtained (see the inset of Figure \ref{figKY}). It turns out that
$\op$ exhibits the same power-law behavior
near the transition, i.e. $\op \propto (\kappa
-\kappa_c)^\beta$. Figure \ref{figKY} shows the dependence of the critical exponent $\beta$
on the logarithm of the distance from the critical value $\kappa_c$ for
three systems. In contrast to the results presented in \cite{Sheridan}, we
see again that in all cases $\beta=1/2$ close to the transition point.
Moving away from the critical point, however, the exponent
$\beta$ departs from the value of $1/2$ significantly.
\begin{figure}[ht]
\centering
\includegraphics[width=\figa]{4}
\caption{Critical exponent $\beta$ as a function of the deviation of
$\kappa$ from its critical value $\kappa_{\rm c}$ for three 2D
clusters. The inset shows the evolution of the order parameter $\op$
close to the dimensional transition.}
\label{figKY}
\end{figure}
\subsection{ 3D $\rightarrow$ 2D transitions}
We further investigate structural transitions in three-dimensional
Yukawa clusters with $N=4$ to $N=8$ particles and integer values of the screening
parameter up to $\kappa=3$. Increased values of the parameter $\alpha$
turn the initially spherical structure into an oblate one;
eventually, after the anisotropy parameter reaches its critical value
$\ac$, a dimensional phase
transition takes place and the familiar two-dimensional clusters are
formed. In the three-dimensional transformations of five- and six-particle
clusters, two
different final states are possible, as opposed to the zigzag
transitions in 2D, where only one linear configuration can be formed. Therefore, in 3D
$\to$ 2D transitions there is a distinct value of $\ac$
for each final configuration, and we are concerned with the properties
of the phase transitions of a particular stable state.
As the
confinement potential well is squeezed in the $y$ direction, it is convenient to
label small clusters according to the arrangement of particles in
the projection onto the $(xz)$ plane, in a manner similar to the state labeling
by shell
occupation numbers in two dimensions. Moreover, particles in three-dimensional
anisotropic traps frequently organize themselves within layers
parallel to the $(xz)$ plane. For the sake of clarity and an unambiguous
definition of the configurations, we will also use
a list of particle numbers in distinct layers, enclosed within
curly brackets.
The simplest system undergoing a non-trivial $\rm 3D \rightarrow 2D$
transition is the cluster composed of four particles. Not surprisingly, four particles in a
symmetric three-dimensional trap form a regular tetrahedron, and there
is only one possible square-shaped $(0,\, 4)$ state in two dimensions. Figure
\ref{fig4} shows the dependence of the order parameter $\op$ and the potential
energy $E$ of the system on the anisotropy parameter $\alpha$. We see
that $\op$ changes continuously and that the transition is remarkably similar to the one in the 2D case of
$N=3$ particles. The potential energy gradually increases as the
potential trap is flattened, until a two-dimensional structure is formed at $\ac \approx
1.22$. As demonstrated in Appendix \ref{appendix}, this
symmetric transition can be modelled analytically; the critical
value turns out to be $\ac={({4\sqrt{2}}/(1+2\sqrt{2}) )}^{1/2}$ --- exactly the
same as determined in our numerical modelling. Naturally, the value of $\ac$ is
sensitive to the range of the inter-particle Yukawa potential. As
Figure \ref{fig_3D-ak} shows, the critical value of the
anisotropy parameter increases rapidly with the strength of
screening for $\kappa<2$ and significantly more slowly after that,
reminiscent of the transitions from two- to one-dimensional configurations.
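The four-particle result can be checked in the same spirit as the 2D zigzag case: relax the planar square numerically, then find the $\alpha$ at which the confinement curvature $\alpha^2$ no longer stabilizes the softest out-of-plane mode. Dimensionless units with energy $\sum_i \frac12(x_i^2+\alpha^2 y_i^2 + z_i^2)+\sum_{i<j}1/r_{ij}$ are assumed (our convention, chosen to be consistent with the quoted $\ac$):

```python
import numpy as np
from scipy.optimize import minimize

N = 4

def planar_energy(p):
    r = p.reshape(N, 2)                  # (x, z) coordinates in the plane y = 0
    e = 0.5 * np.sum(r ** 2)             # isotropic in-plane confinement
    for i in range(N):
        for j in range(i + 1, N):
            e += 1.0 / np.linalg.norm(r[i] - r[j])
    return e

# Relax four particles to the square ground state in the (xz) plane
p0 = np.array([[1.0, 0.1], [-0.1, 1.0], [-1.0, -0.1], [0.1, -1.0]]).ravel()
sol = minimize(planar_energy, p0, method="BFGS", options={"gtol": 1e-12})
r = sol.x.reshape(N, 2)

# Out-of-plane (y) Hessian of the interaction about the planar state:
# each pair separated by r acts as a negative spring -1/r^3 on the
# y displacements of its two particles.
K = np.zeros((N, N))
for i in range(N):
    for j in range(N):
        if i != j:
            k = 1.0 / np.linalg.norm(r[i] - r[j]) ** 3
            K[i, i] += k
            K[i, j] -= k

# Buckling out of the plane sets in when alpha^2 = max eigenvalue of K
alpha_c = np.sqrt(np.linalg.eigvalsh(K).max())
analytic = np.sqrt(4.0 * np.sqrt(2.0) / (1.0 + 2.0 * np.sqrt(2.0)))
print(alpha_c, analytic)  # both ~1.216
```

The softest mode is the alternating $(+,-,+,-)$ pattern around the square, i.e. the buckling that restores the tetrahedron as $\alpha$ decreases through $\ac$.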
\begin{figure}[ht]
\centering
\includegraphics[width=\figa]{5}
\caption{Order parameter $\op$ and potential energy $E$ of a four-particle
3D Coulomb cluster in the asymmetric potential trap with anisotropy
parameter $\alpha$.}
\label{fig4}
\end{figure}
As already pointed out, there are two competing stable
states observed in the two-dimensional system with
$N=5$ particles. Five particles in a spherically symmetric 3D confinement potential,
however, can form only one stable configuration. A slightly increased
anisotropy leads to the formation of a three-layer structure,
with the arrangement of particles within these layers being $\left
\{2,\,2,\,1\right\}$. As the parameter $\alpha$ increases above the
value
$\alpha=1.05$, two layers merge to form a square, thus transforming the
configuration into a pyramidal structure $\{ 4,\,1\}$ with
projection $(1,\,4)_{xz}$. This
structural transition from the three-layered cluster to the pyramidal
configuration is signified
by the discontinuity in the derivative ${\rm d}\op/{\rm d} \alpha$
(Figure \ref{fig6}).
As demonstrated in Figure \ref{fig6} for a pure Coulomb interaction, a
second stable state appears when the anisotropy parameter reaches the
value $\alpha_0 \approx 1.29$. This new pentagonal state undergoes an asymmetric
dimensional transition
and is soon transformed into the new ground state $(0,\,5)$. The metastable
configuration $(1,\,4)_{xz}$, on the other hand, becomes two-dimensional only at
$\ac \approx 1.60$, through the so-called ``pyramidal'' transition mechanism.
Both the point of appearance of the second state in the
five-particle system, $\alpha_0$, and
its critical value $\ac$ depend on the type of the interaction potential
and its screening parameter $\kappa$. As shown in
Figure \ref{fig_3D-ak}, both parameters grow with the strength of screening.
The distance between $\alpha_0$ and $\ac$, however, rapidly
diminishes. As the screening reaches the value $\kappa \approx 4.5$,
the two lines merge and
the new stable state appears already in its two-dimensional pentagonal form $(0,\,5)$.
\begin{figure}[ht]
\centering
\includegraphics[width=\figa]{6}
\caption{Order parameter $\op$ and potential energy $E$ of a $5$-particle
3D Coulomb cluster in the asymmetric potential trap with anisotropy
parameter $\alpha$. Here and later the insets show projections of
configurations to $(xz)$ plane.}
\label{fig6}
\end{figure}
A pyramidal configuration might be described as a planar base,
composed of $n=4\text{--} 6$ particles lying parallel to the $(xz)$ plane,
and a single particle located right above the center of the base, that
is, the configuration $\{N-1,\,1\}$,
$(1,\,N-1)_{xz}$. A pyramidal structural transition takes place when
the apex of the polyhedron is pushed into the base, which thus becomes a
two-dimensional configuration with only one particle in the center. The
typical behavior of the order parameter $\op$ during such
transitions was already discussed and is presented in Figures \ref{fig6} and
\ref{fig7}. As a matter of fact, dimensional transitions of the
pyramidal type can be modelled analytically and exact values of the
critical parameters $\ac$ can be found, see Appendix \ref{appendix}.
Even more stable configurations are observed in clusters with $N=6$
particles, as Figure \ref{fig7} shows for the Coulomb inter-particle
potential. The evolution of the system
starts with a single stable state in the symmetric 3D trap --- the octahedral
configuration (full line in Figure \ref{fig7}). As the parameter
$\alpha$ increases, this bipyramid is deformed
by pushing two of its particles, lying exactly on the $y$-axis,
towards each other, thus lowering the height and forming a
configuration $\{1,\,4,\,1\}$ with
projection $(1,\,4)_{xz}$. $\op$ decreases slowly until the said two particles
start to depart from the $y$-axis near $\alpha \approx 1.46$, at which
point a phase of rapid deformation begins. Unfortunately, right
after this happens, the stable state disappears. The same scenario of bipyramidal
deformation also applies to larger clusters, e.g. $N=7,\;8,\;9$,
and has a specific, well-recognizable shape of its $\op=f(\alpha)$
curve, with segments of slow and rapid changes.
\begin{figure}[ht]
\centering
\includegraphics[width=\figa]{7}
\caption{Order parameter $\op$ and potential energy $E$ of the six-particle
3D Coulomb cluster in the asymmetric potential trap with anisotropy
parameter $\alpha$.}
\label{fig7}
\end{figure}
A new metastable state emerges near $\alpha \approx 1.06$: the
particles lie on the six vertices of two
parallel equilateral triangles, centered precisely on the $y$-axis and rotated
by $\pi/3$ with respect to each other, that is, the state $\{3,\,3\}$ with
projection $(0,\,6)_{xz}$. These triangular layers are pushed
towards each other by deformations of the confinement; however, they
never become a
truly two-dimensional configuration. Instead, as demonstrated in
Figure \ref{fig7}, the configuration ceases to exist at $\alpha \approx 1.52$,
where the r.m.s. value of the $y$ coordinate is still $\op\approx 0.05>0$.
Right before
the disappearance, however, a new, similar, purely two-dimensional state shows
up. The new planar configuration is
composed of six particles lying on the vertices of two
triangles of slightly different sizes (see Figure
\ref{fig7}). Therefore, in a brief range of
$\alpha$ values these two states exist simultaneously, and there is no
continuous transition between them. Finally, a pyramidal
configuration $\{5,\,1\}$ appears near $\alpha
\approx 1.19$ and undergoes the usual pyramidal dimensional transition at $\ac \approx
1.59$, the value predicted by our analytical model (Appendix
\ref{appendix}).
\begin{figure}[ht]
\centering
\includegraphics[width=\figa]{8}
\caption{Critical value of the anisotropy parameter $\alpha_c$ as a
function of $\kappa$ for the three-dimensional configuration with $N=4$
particles and for the two
states of the $N=5$ cluster. The line marked $\alpha_0$
corresponds to the appearance of the metastable state in the five-particle
cluster.}
\label{fig_3D-ak}
\end{figure}
Close to the critical point of the continuous transitions from three- to
two-dimensional systems, a
power-law behavior of the order
parameter is detected once again, i.e. $\op \propto (\ac
-\alpha)^\gamma$. In the same manner as
in the 2D case, Figure \ref{fig5} shows the dependence of the power-law exponent
$\gamma$ on the logarithm of $\ac-\alpha$. It turns out that in the close vicinity of the transition
point, the critical exponent $\gamma=1/2$ does not depend on the
screening strength $\kappa$. Deviations from this value occur when the departure of
$\alpha$ from its critical value reaches the third decimal place. Just as in the
two-dimensional case, the value of the critical exponent is lower for systems with
stronger inter-particle potential screening, and drops as low as
$\gamma \approx 0.35$ for $\kappa=3$. Essentially the same behavior
of the exponent $\gamma$ is observed in larger three-dimensional systems
where dimensional transitions take place, be they
pyramidal transitions or transformations of any other type.
\begin{figure}[ht]
\centering
\includegraphics[width=\figa]{9}
\caption{Critical exponent $\gamma$ of the dimensional transition
observed in 3D four-particle systems, as a function of the deviation of
$\alpha$ from its critical value $\ac$, for different strengths of
screening.}
\label{fig5}
\end{figure}
As the number of particles grows, more and more stable states emerge
and, as a consequence, the $\op=f(\alpha)$ graphs become convoluted and
difficult to study. An illustrative example is given in the
inset of
Figure \ref{fig10}, where the behavior of the order parameter in the
stable states of a
20-particle
Coulomb system is presented.
We can still discern a few distinct continuous phase transitions in the
vicinity of $\alpha \approx 2.26$; however, the different branches become
hardly distinguishable
at lower values of $\alpha$. In very large systems, the
values of $\op$ for all metastable states lie virtually on the same
line, as illustrated in Figure \ref{fig10} for a 100-particle
cluster with $\kappa=0$.
\begin{figure}[ht]
\centering
\includegraphics[width=\figa]{10}
\caption{Order parameter $\op$ of large Coulomb systems with
$N=100$ and $N=20$ (inset)
particles as a function of the anisotropy $\alpha$.}
\label{fig10}
\end{figure}
It is worth discussing the structural evolution of Yukawa
clusters confined by traps
with a prolate equipotential surface. In our model, this regime is reached by
lowering the anisotropy parameter $\alpha$ towards
zero. In this way, elongated clusters with low potential energies
and high values of $\op$ are formed. Consequently, in order to study this type of
structural transition, a new order parameter must be
defined. We choose the root mean square distance from the $y$-axis:
\begin{equation}
\ro = \left(\frac{1}{N} \sum_{i=1}^N
\left ( x_i^2+z_i^2 \right ) \right )^{1/2}.
\end{equation}
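For concreteness, this order parameter can be evaluated directly from particle coordinates; the following minimal sketch assumes positions stored as rows of $(x, y, z)$ and uses a hypothetical four-particle configuration:

```python
import numpy as np

# Minimal sketch of the order parameter rho_0: the root mean square
# distance of the particles from the y-axis.
def rho0(positions):
    p = np.asarray(positions, dtype=float)
    return np.sqrt(np.mean(p[:, 0] ** 2 + p[:, 2] ** 2))

# Four hypothetical particles: two lie on the y-axis and contribute
# nothing; two sit at distance 1 from it (x^2 + z^2 = 1 each).
pts = [(1.0, 0.0, 0.0), (0.0, 5.0, 0.0), (0.0, -5.0, 0.0), (-1.0, 0.0, 0.0)]
print(rho0(pts))   # -> sqrt(0.5) ~ 0.7071
```

An elongated (quasi-one-dimensional) cluster thus has small $\ro$, while a three-dimensional one has large $\ro$.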
As Figure \ref{fig11a} shows for $N=3 \text{--} 6$, the dependence of the order parameter $\ro$
on the anisotropy $\alpha$ is not smooth and in some cases features discontinuities. We
conclude that, in fact, there is no direct transition
from three- to one-dimensional configurations. Instead, the system is first
transformed into an elongated 2D zigzag pattern, and only later does the $\rm 2D
\rightarrow 1D$ structural transition take place. With that in mind,
it is no surprise that the values of the critical parameter $\ac$
found by lowering $\alpha$ are the exact
inverses of those determined in subsection \ref{sec:2D1D}.
\begin{figure}[ht]
\centering
\includegraphics[width=\figa]{11}
\caption{Order parameter $\ro$ as a function of the
anisotropy $\alpha<1$ for Coulomb clusters with $N=3\text{--} 6$ particles.}
\label{fig11a}
\end{figure}
Large three-dimensional clusters in prolate traps become
one-dimensional through a mechanism that appears to be universal for
all values of $N$ used in our modelling. At first, the system is squeezed and elongated
until the particles arrange themselves into a double helix. As $\alpha$ is
lowered further, the number of helical turns decreases, until the helix
unwinds and the cluster becomes a two-dimensional zigzag configuration. The 2D system then
undergoes the usual zigzag transition with power-law behavior
near the critical point.
\section{Conclusion}
\label{Sum}
Confined Yukawa clusters are among the physical systems where simple
interparticle interactions lead to the emergence of complicated patterns and
spontaneous ordering. In this article, we have presented our numerical
and analytical studies of two- and three-dimensional clusters confined by
asymmetric parabolic traps.
We confirm that dimensional transitions from oblate three- to
two-dimensional systems, as well as from planar to linear
configurations, can be induced by changes
in the anisotropy of the confinement $\alpha$ and the screening
strength $\kappa$. On the other hand,
there are no direct transitions from three- to one-dimensional systems
in prolate harmonic traps; two-stage
transformations take place instead.
The critical value of the anisotropy parameter generally grows with the screening strength
$\kappa$. The growth is steepest for small values of $\kappa$ and
almost saturates for large ones.
In the close vicinity of a dimensional phase transition, the order parameter $\op$
exhibits a power-law dependence on the control parameter, be it
$\alpha$ or $\kappa$.
In all cases studied here, the critical exponent is found to be
universal and equal to $1/2$, which is consistent with the general theory of
second-order phase transitions. However, the value of the power-law exponent turns
out to be very sensitive to the deviation of the control parameter from
its critical value. Far from the critical point, the exponent attains lower values in
systems with stronger screening and a shorter range of the inter-particle
interaction.
% ---- arXiv:1902.02388 ----
\section{Introduction}
In this paper, we study the following composite optimization problem
\begin{equation}
\min_{x\in\mathbb{R}^d}\left\{ F(x) \overset{\text{def}}{=}f(x)+h(x)\right\}, \label{eq: prob}
\end{equation}
where $f(x)$ is twice differentiable and convex, and $h(x)$ is simple, nonsmooth and convex. To solve \eqref{eq: prob}, the most popular methods are first-order algorithms such as the Proximal Gradient Method (PGM) and the Accelerated Proximal Gradient Method (APGM) \cite{nesterov2007gradient}. Let $x^* \overset{\text{def}}{=}{\rm argmin}_{x\in\mathbb{R}^d} F(x)$. In order to find an $\epsilon$-accurate solution $x$ such that $F(x)-F(x^*)\le \epsilon$, PGM needs $\mathcal{O}(\epsilon^{-1})$ iterations and APGM needs $\mathcal{O}(\epsilon^{-1/2})$ iterations, where $\mathcal{O}(\epsilon^{-1/2})$ is the optimal rate for first-order methods. However, if we use the second-order information of $f(x)$, then from \cite{nesterov2006cubic,nesterov2008accelerating,nunes2018accelerated} we know that the Proximal Cubic regularized Newton Method (PCNM), with the iterative procedure
\begin{equation}
x_{t+1}\overset{\text{def}}{=}{\rm argmin}_{x\in\mathbb{R}^d}\Big\{ f(x_t)+\langle \nabla f(x_t), x-x_t\rangle+\frac{1}{2}\langle \nabla^2 f(x_t)(x-x_t), x-x_t\rangle+\frac{\eta}{6}\|x-x_t\|^3+ h(x)\Big\}, \label{eq:cnm}
\end{equation}
needs $\mathcal{O}(\epsilon^{-1/2})$ iterations to find an $\epsilon$-accurate solution, where $\nabla f(x_t)$ denotes the gradient at $x_t$, $\nabla^2 f(x_t)$ denotes the Hessian matrix at $x_t$ and $\eta$ is a parameter to be determined. Meanwhile, the accelerated PCNM (APCNM) needs only $\mathcal{O}(\epsilon^{-1/3})$ iterations. Both iteration complexity results are better than those of PGM and APGM respectively.
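As a concrete illustration (not part of the original analysis), a single PCNM step can be sketched numerically for a toy quadratic $f$ with $h\equiv 0$. The matrix, step size, and the plain gradient-descent inner solver below are all illustrative assumptions, not the subsolver proposed later in the paper:

```python
import numpy as np

# One PCNM step (h = 0) on a toy quadratic f(x) = 0.5 x^T A x,
# whose gradient is A x and Hessian is A. In terms of s = x - x_t,
# the cubic model's gradient is g + H s + (eta/2)||s|| s; here we
# minimize the model by crude gradient descent (illustrative only).
A = np.diag([1.0, 10.0])
f = lambda x: 0.5 * x @ A @ x
x_t = np.array([3.0, -2.0])
g, H, eta = A @ x_t, A, 1.0

s = np.zeros(2)
for _ in range(2000):  # crude inner solver for the cubic subproblem
    s -= 0.05 * (g + H @ s + 0.5 * eta * np.linalg.norm(s) * s)
x_next = x_t + s

print(f(x_next) < f(x_t))   # -> True: the cubic step decreases f
```

The expensive part is exactly this inner minimization, which motivates the inexact subsolvers discussed next.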
Although the iteration complexities of PCNM and APCNM are superior, unlike PGM and APGM they are seldom used in large-scale optimization, where the problem size (i.e., the number of data samples $n$ and the dimension $d$) is often large. This is because when the problem size is large, computing the exact gradient and Hessian is very expensive; meanwhile, solving the subproblem \eqref{eq:cnm} exactly requires matrix factorization or inversion, which scales poorly with the problem size.
As a result, although the iteration complexities are better, as the problem size becomes large the overall complexity of PCNM and APCNM is no longer competitive with that of PGM and APGM respectively. In fact,
it is commonly believed that second-order methods such as PCNM and APCNM are only suitable for small-scale problems, and that first-order methods such as PGM and APGM are superior in the large-scale setting.
In this paper, we consider inexact variants of PCNM and APCNM and show the following result from a theoretical viewpoint: the overall complexity of second-order methods can be as competitive as that of first-order algorithms in the large-scale setting, or even in the online setting where data arrives endlessly. In fact, in the strongly convex setting, the overall complexity of APCNM has a better dependence on the strong convexity constant than the state-of-the-art stochastic gradient descent (SGD) algorithms.
We obtain these competitive overall complexity results by using inexact gradients and Hessians and by finding an approximate solution of the subproblem \eqref{eq:cnm} with properly decreased errors. The proposed results imply that \emph{the order of the information we use may not be the factor that determines the scalability of an algorithm, provided that we tune the errors in the corresponding subproblem (such as \eqref{eq:cnm}) properly}.
In Section \ref{sec:result}, we review the related work and give the main results from four aspects:
\begin{itemize}
\item the research progress of the inexact variants of Cubic regularized Newton Method (CNM);
\item the overall complexity in the online setting;
\item the overall complexity in the finite-sum setting;
\item the proposed efficient subsolver for \eqref{eq:cnm}.
\end{itemize}
Then in Section \ref{sec:cubic}, we propose the Inexact Proximal Cubic regularized Newton Method (IPCNM) and give its theoretical analysis; in Section \ref{sec:acc-cubic}, we propose the Accelerated Inexact Proximal Cubic regularized Newton Method (AIPCNM) and give its theoretical analysis; in Section \ref{sec:cubic-svrg}, we propose the efficient Cubic Proximal Stochastic Variance Reduced Gradient (Cubic-Prox-SVRG) method and show that it converges to a neighborhood of the optimal solution at a superlinear rate.
\section{Related Work and Main Results}\label{sec:result}
Before continuing, we first fix the notation and problem setting.
Let $I$ be the identity matrix, with size determined by the context.
Let $x^*$ be a minimizer of $F(x)$; we say $x$ is an $\epsilon$-accurate solution if it satisfies $F(x)-F(x^*)\le \epsilon$.
Throughout this paper, we use $\|\cdot\|$ to denote the Euclidean norm of a vector or the spectral norm of a matrix. Denote $\nabla F(x)\overset{\text{def}}{=}\nabla f(x)+h^{\prime}(x)$, where $h^{\prime}(x)\in \partial h(x)$.
We use the big-$O$ notation $\mathcal{O}(\cdot)$ to denote computational complexity and $\tilde{\mathcal{O}}(\cdot)$ to denote complexity results that hide polylogarithmic terms.
\begin{definition}
For a function $F:\mathbb{R}^d\rightarrow \mathbb{R}$, $F(x)$ is $\sigma_2$-strongly convex if $\forall x, y \in\mathbb{R}^d$, $$F(y)\ge F(x)+\langle\nabla F(x), y-x\rangle+ \frac{\sigma_2}{2}\|y-x\|^2;$$
if $\sigma_2=0$, then $F(x)$ is only convex.
\end{definition}
\begin{definition}
For a function $f:\mathbb{R}^d\rightarrow \mathbb{R}$, $f(x)$ has $L_2$-Lipschitz gradients if $\;\forall x, y \in\mathbb{R}^d$, $$\|\nabla f(x) - \nabla f(y)\|\le L_2 \|x-y\|;$$ $f(x)$ has $L_3$-Lipschitz Hessians if $\forall x, y \in\mathbb{R}^d$, $$\|\nabla^2 f(x) - \nabla^2 f(y)\|\le L_3 \|x-y\|.$$
\end{definition}
When the nonsmooth term $h(x)$ exists, $f(x)$ has $L_3$-Lipschitz Hessians and $F(x)$ is convex, by \cite{nunes2018accelerated}, APCNM can find an $\epsilon$-accurate solution in $\O(\epsilon^{-1/3})$ iterations. Meanwhile, by trivially extending the result for the smooth setting \cite{nesterov2006cubic}, we know that PCNM can find an $\epsilon$-accurate solution of \eqref{eq: prob} in $\O(\epsilon^{-1/2})$ iterations. Apart from \cite{nunes2018accelerated}, existing research on inexact variants of CNM mainly focuses on the smooth setting, where the nonsmooth term $h(x)$ does not exist.
In the nonconvex setting,
in order to reduce the high computational cost of optimizing the subproblem \eqref{eq:cnm} while maintaining the convergence rate of the exact case, \cite{cartis2011adaptive,cartis2011adaptive2,kohler2017sub} considered a subsampling strategy to obtain an inexact gradient and Hessian, together with a termination condition for optimizing \eqref{eq:cnm}; however, the subsampling conditions depend on quantities from subsequent iterations and are thus not directly implementable, and the termination condition is specific to the Lanczos method \cite{carmon2018analysis}. \cite{zhou2018stochastic} used a variance reduction strategy to reduce the complexity of computing the gradient and Hessian, but the complexity of updating the variance-reduced gradient and Hessian is $O(d^2)$, so the SVRC method of \cite{zhou2018stochastic} is only suitable for problems with small dimension $d$. \cite{tripuraneni2018stochastic} made considerable progress: their stochastic cubic regularization method needs $\tilde{\mathcal{O}}(\epsilon^{-3.5})$ stochastic gradient and stochastic Hessian-vector product evaluations to find an approximate local minimum for general smooth, nonconvex functions, which matches the best known result; however, they did not analyze the convex setting.
\begin{table*}[ht]
\caption{Comparison of inexact cubic regularized Newton methods in the convex setting}
\label{tb:1}
\vskip 0.15in
\begin{center}
\begin{small}
\begin{tabular}{ccccccccr}
\toprule
Cubic & Same & Inexact & Inexact & Inexact & Nonsmooth \\
Method & Rate? & Hessian? & Gradient? & Subsolver? & Regularizer? \\
\midrule
\cite{cartis2012evaluation} & {\large $\times$} & \Checkmark & {\large $\times$} & \Checkmark & {\large $\times$} \\
\cite{ghadimi2017second} & {\large $\times$} & \Checkmark & {\large $\times$} & {\large $\times$} &{\large $\times$} \\
\cite{chen2018adaptive} & \Checkmark & \Checkmark & {\large $\times$} & \Checkmark & {\large $\times$} \\
\textbf{This Paper} &\Checkmark & \Checkmark & \Checkmark & \Checkmark & \Checkmark \\
\bottomrule
\end{tabular}
\end{small}
\end{center}
\vskip -0.1in
\end{table*}
Compared with finding an $\epsilon$-second-order stationary point in the nonconvex setting, the goal in the convex setting of finding an $\epsilon$-accurate solution in terms of the objective function introduces extra difficulty. In this paper, we propose the inexact PCNM (IPCNM) and accelerated IPCNM (AIPCNM) as inexact variants of PCNM and APCNM respectively.
Table \ref{tb:1} summarizes research on inexact variants of CNM in the convex setting. As shown in Table \ref{tb:1}, only the algorithms in \cite{chen2018adaptive} and this paper maintain the same convergence rate as the exact case; all of these works consider an inexact Hessian, while only this paper also uses an inexact gradient; \cite{cartis2011adaptive,chen2018adaptive} and this paper use an inexact subsolver, while only the termination condition for the subsolver in this paper is not specific to the Lanczos method; finally, only the results in this paper are applicable to the case where the nonsmooth regularizer $h(x)$ exists.
In the following discussion, the cubic regularized second-order approximation function that we optimize in each iteration is defined as
\begin{eqnarray}
\tilde{f}_{\eta}(x;y)&\overset{\text{def}}{=}&f(y) +\langle g, x-y\rangle + \frac{1}{2}\langle H(x-y), x-y\rangle+\frac{\eta}{6}\|x-y\|^3 + h(x),\label{eq:subprob}
\end{eqnarray}
where $g$ is an inexact gradient at $y$, $H\succeq 0$ is an inexact Hessian at $y$, and $\eta>0$ is a parameter to be determined. We then make Assumption \ref{ass:appro}.
\begin{assumption}\label{ass:appro}
Assume $\tilde{x}\overset{\rm def}{=}{\rm argmin}_{x\in\mathbb{R}^d}\tilde{f}_{\eta}(x;y)$.
The subsolver we use can find an $\epsilon$-accurate solution $z$ such that $\tilde{f}_{\eta}(z;y) - \tilde{f}_{\eta}(\tilde{x};y)\le \epsilon$ at a cost of at most $\O(\text{\rm cost}(Hv)\log\frac{1}{\epsilon})$, where $\text{\rm cost}(Hv)$ denotes the cost of a Hessian-vector product.
\end{assumption}
From the convergence analysis of \cite{carmon2018analysis}, it is known that the Lanczos method satisfies Assumption \ref{ass:appro} when the nonsmooth term $h(x)$ does not exist. In Section \ref{sec:cubic-svrg}, we propose the Cubic-Prox-SVRG method and show that it converges to a sufficiently small neighborhood of $\tilde{x}$ at a superlinear rate.
\subsection{Overall complexity in the online stochastic setting}
In the online stochastic setting, where data arrives sequentially and endlessly, $f(x)$ can be written as an expectation of the stochastic function $f(x;\xi)$, and the problem \eqref{eq: prob} becomes
\begin{eqnarray}
\min_{x\in\mathbb{R}^d}\left\{ F(x) \overset{\text{def}}{=}f(x)+h(x)\overset{\text{def}}{=}\mathbb E_{\xi\sim \mathcal{D}}[f(x;\xi)]+h(x)\right\}, \label{eq: online-prob}
\end{eqnarray}
where $\xi$ is a random variable sampled from an underlying distribution $\mathcal{D}$. Meanwhile, we make two general assumptions \ref{ass:gradient-Hessian} and \ref{ass:g-H}.
\begin{assumption}\label{ass:gradient-Hessian}
$\nabla f(x;\xi)$ satisfies $\forall x\in \mathbb{R}^d$, $$\mathbb E[\nabla f(x;\xi)] = \nabla f(x),\quad\quad \mathbb E[\|\nabla f(x;\xi)-\nabla f(x)\|^2]\le \tau_1^2,\quad\quad\|\nabla f(x;\xi)-\nabla f(x)\|\le \gamma_1$$ almost surely; meanwhile,
$\nabla^2 f(x;\xi)$ satisfies $\forall x\in \mathbb{R}^d$, $$\mathbb E[\nabla^2 f(x;\xi)] = \nabla^2 f(x),\quad\quad \| \mathbb E[(\nabla^2 f(x;\xi)-\nabla^2 f(x))^2]\|\le \tau_2^2,\quad\quad\|\nabla^2 f(x;\xi)-\nabla^2 f(x)\|\le \gamma_2$$ almost surely; moreover, the cost of the stochastic Hessian-vector product $\nabla^2 f(x;\xi)v$ is not higher than that of the inner product $\langle \nabla f(x;\xi), v\rangle$.
\end{assumption}
\begin{assumption}\label{ass:g-H}
In the $t$-th iteration of IPCNM, we set $$g_t\overset{\rm def}{=}\frac{1}{\hat{n}_{t1}}\sum_{i=1}^{\hat{n}_{t1}} \nabla f(x_t;\xi_i),\quad\quad H_t\overset{\rm def}{=}\frac{1}{\hat{n}_{t2}}\sum_{i=1}^{\hat{n}_{t2}} \nabla^2 f(x_t;\xi_i),$$
where $\hat{n}_{t1}, \hat{n}_{t2}$ are the numbers of stochastic gradient samples and stochastic Hessian samples to be determined. Meanwhile,
in the $t$-th iteration of AIPCNM, we set $$g_t\overset{\rm def}{=}\frac{1}{\bar{n}_{t1}}\sum_{i=1}^{\bar{n}_{t1}} \nabla f(y_t;\xi_i), \quad\quad H_t\overset{\rm def}{=}\frac{1}{\bar{n}_{t2}}\sum_{i=1}^{\bar{n}_{t2}} \nabla^2 f(y_t;\xi_i)+\mu_t I,$$
where $\bar{n}_{t1}, \bar{n}_{t2}$ are the numbers of stochastic gradient samples and stochastic Hessian samples to be determined, and $\mu_t>0$ is a parameter to be determined.
\end{assumption}
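Subsampled estimators of this kind can be sketched as follows for a toy finite-sum least-squares objective; the problem, sizes, and helper name are hypothetical and only illustrate the averaging in Assumption \ref{ass:g-H}:

```python
import numpy as np

# Illustrative subsampled gradient/Hessian estimators for the toy
# finite sum f(x) = (1/n) sum_i 0.5 (a_i^T x - b_i)^2, where
# grad_i = a_i (a_i^T x - b_i) and hess_i = a_i a_i^T.
rng = np.random.default_rng(0)
n, d = 200, 3
Amat, b = rng.standard_normal((n, d)), rng.standard_normal(n)
x = rng.standard_normal(d)

def subsampled(idx):
    Ai, bi = Amat[idx], b[idx]
    grad = Ai.T @ (Ai @ x - bi) / len(idx)
    hess = Ai.T @ Ai / len(idx)
    return grad, hess

g_full, H_full = subsampled(np.arange(n))   # batch size n: exact values
g_half, H_half = subsampled(rng.choice(n, 100, replace=False))

# With all n samples the estimator coincides with the exact gradient;
# smaller batches give unbiased but noisy estimates.
print(np.linalg.norm(g_half - g_full) > 0.0)   # -> True (generically)
```

The sample sizes $\hat{n}_{t1}, \hat{n}_{t2}$ control the estimation error, which is why they must grow as the required accuracy increases.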
By Assumption \ref{ass:g-H}, from the $0$-th iteration to the $t$-th iteration, in IPCNM the number of stochastic gradient samples is $\sum_{i=1}^t \hat{n}_{i1}$ and the number of stochastic Hessian samples is $\sum_{i=1}^t \hat{n}_{i2}$; in AIPCNM, they are $\sum_{i=1}^t \bar{n}_{i1}$ and $\sum_{i=1}^t \bar{n}_{i2}$
respectively. Based on Assumption \ref{ass:appro}, in IPCNM the total cost of calling the subsolver is
at most $\O(\sum_{i=0}^{t} \hat{n}_{i2}\log\frac{1}{\epsilon_i})$ stochastic Hessian-vector products, where $\epsilon_i$ is the accuracy we need to attain when solving the subproblem ${\rm argmin}_{x\in\mathbb{R}^d} \tilde{f}_{\eta}(x;y)$ in the $i$-th iteration. Correspondingly, in AIPCNM, the total cost of calling the subsolver is $\O(\sum_{i=0}^{t} \bar{n}_{i2}\log\frac{1}{\epsilon_i})$ stochastic Hessian-vector products. By Assumption \ref{ass:gradient-Hessian}, the cost of a stochastic Hessian-vector product is not higher than that of a stochastic gradient evaluation. We therefore measure the overall complexity by the number of equivalent stochastic gradient evaluations.
Table \ref{tb:2} gives the overall complexity of representative algorithms in the online stochastic setting. For simplicity, in Table \ref{tb:2} we neglect poly-logarithmic factors and use the $\tilde{\mathcal{O}}$ notation.
The existing algorithms in this setting are mainly first-order algorithms \cite{shalev2012online}, which can be divided into methods that process one sample or a fixed mini-batch of samples in each iteration \cite{duchi2010composite,xiao2010dual}, and methods that use an increasing sample size in each iteration \cite{byrd2012sample,friedlander2012hybrid,schmidt2011convergence}. If we do not consider poly-logarithmic factors, COMID \cite{duchi2010composite} obtains the optimal convergence rate (i.e., regret) $\tilde{\mathcal{O}}(\epsilon^{-2})$ in the convex setting and $\tilde{\mathcal{O}}(\epsilon^{-1})$ in the $\sigma_2$-strongly convex setting.
However, as shown in Table \ref{tb:2}, the Inexact Proximal Gradient Method (IPGM) and Accelerated IPGM (AIPGM) \cite{schmidt2011convergence}{\footnote{Although \cite{schmidt2011convergence} does not give the overall complexity in the online stochastic setting, we can derive the complexity results in Table \ref{tb:2} based on the same analysis as in this paper.}}, which belong to the methods with an increasing sample size, cannot obtain the optimal rate in the convex setting. The proposed methods IPCNM and AIPCNM also belong to the methods with an increasing sample size, but use second-order information.
Table \ref{tb:2} shows that IPCNM and AIPCNM have better overall complexity than IPGM and AIPGM respectively in the convex setting. AIPCNM obtains the optimal rate in both the convex and strongly convex settings. In particular, AIPCNM has a better dependence on the strong convexity constant $\sigma_2$ than COMID.
\begin{table*}[th]
\caption{Overall complexity of algorithms in the online stochastic setting}
\label{tb:2}
\vskip 0.15in
\begin{center}
\begin{small}
\begin{tabular}{c|ccccr}
\toprule
Setting & Method & $\#$-equivalent stochastic gradient evaluations \\
\midrule
Convex&COMID \cite{duchi2010composite} & $\tilde{\mathcal{O}}(\epsilon^{-2})$ \\
&IPGM \cite{schmidt2011convergence} & $\tilde{\mathcal{O}}(\epsilon^{-3}) $ \\
&AIPGM \cite{schmidt2011convergence} & $\tilde{\mathcal{O}}(\epsilon^{-5/2})$ \\
&IPCNM (\textbf{this paper}) & $\tilde{\mathcal{O}}(\epsilon^{-5/2})$ \\
&AIPCNM (\textbf{this paper}) & $\tilde{\mathcal{O}}(\epsilon^{-2})$ \\
\midrule
Strongly &COMID \cite{duchi2010composite} & $\tilde{\mathcal{O}}(\sigma_2^{-1}\epsilon^{-1})$ \\
Convex&IPGM \cite{schmidt2011convergence} & $\tilde{\mathcal{O}}(\sigma_2^{-1}\epsilon^{-1})$ \\
&AIPGM \cite{schmidt2011convergence} & $\tilde{\mathcal{O}}(\sigma_2^{-3/2}\epsilon^{-1})$ \\
&IPCNM (\textbf{this paper}) & $\tilde{\mathcal{O}}(\sigma_2^{-5/6}\epsilon^{-4/3})$ \\
&AIPCNM (\textbf{this paper}) & $\tilde{\mathcal{O}}(\sigma_2^{-2/3}\epsilon^{-1})$ \\
\bottomrule
\end{tabular}
\end{small}
\end{center}
\vskip -0.1in
\end{table*}
\subsection{Overall complexity in the finite-sum setting}
In the finite-sum setting, where $f(x)$ has a finite-sum structure, the problem \eqref{eq: prob} can be written as
\begin{eqnarray}
\min_{x\in\mathbb{R}^d}\left\{ F(x) \overset{\text{def}}{=}f(x)+h(x)\overset{\text{def}}{=}\frac{1}{n}\sum_{i=1}^n f_i(x)+h(x)\right\}. \label{eq: finite-prob}
\end{eqnarray}
In the finite-sum setting, we also assume that Assumptions \ref{ass:gradient-Hessian} and \ref{ass:g-H} hold.
Because the exact gradient or Hessian can be obtained once the number of samples reaches $n$, in the finite-sum setting the number of samples for the gradient and Hessian is at most $n$.
To solve \eqref{eq: finite-prob}, the state-of-the-art algorithms are based on the well-known variance reduction technique \cite{johnson2013accelerating,xiao2014proximal,allen2017katyusha}. In Table \ref{tb:3}, we use SVRG and Katyusha as the representative non-accelerated and accelerated variance-reduced methods respectively.
As shown in Table \ref{tb:3}, an important advantage of IPCNM and AIPCNM is that they do not need to pass over all the data if we only want a low-accuracy solution. Meanwhile, in the convex setting, the AIPCNM method has a faster rate $\tilde{\mathcal{O}}(\epsilon^{-1/3})$, and can therefore obtain a high-accuracy solution faster than the optimal gradient method Katyusha. In the strongly convex setting, the AIPCNM method has a better dependence on the strong convexity constant $\sigma_2$.
\begin{table*}[th]
\caption{Comparison of representative algorithms in the finite-sum setting}
\label{tb:3}
\vskip 0.15in
\begin{center}
\begin{small}
\begin{tabular}{c|ccccr}
\toprule
Setting & Method & $\#$-equivalent stochastic gradient evaluations \\
\midrule
Convex &SVRG \cite{johnson2013accelerating} & $\tilde{\mathcal{O}}(n\epsilon^{-1}) $ \\
&Katyusha \cite{allen2017katyusha} & $\tilde{\mathcal{O}}(n\epsilon^{-1/2})$ \\
&IPCNM (\textbf{this paper}) & $\min\{\tilde{\mathcal{O}}(\epsilon^{-5/2}),\tilde{\mathcal{O}}({n\epsilon^{-1/2}})\} $ \\
&AIPCNM (\textbf{this paper}) & $\min\{\tilde{\mathcal{O}}(\epsilon^{-2}), \tilde{\mathcal{O}}(n\epsilon^{-1/3})\}$ \\
\midrule
Strongly &SVRG \cite{johnson2013accelerating} & $\tilde{\mathcal{O}}(n+n\sigma_2^{-1})$ \\
Convex&Katyusha \cite{allen2017katyusha} & $\tilde{\mathcal{O}}(n+n^{1/2}\sigma_2^{-1/2})$ \\
&IPCNM (\textbf{this paper}) & $\min\left\{\tilde{\mathcal{O}}(\sigma_2^{-5/6}\epsilon^{-4/3}), \tilde{\mathcal{O}}(\sigma_2^{-1/2}n)\right\}$ \\
&AIPCNM (\textbf{this paper}) & $\min\left\{\tilde{\mathcal{O}}(\sigma_2^{-2/3}\epsilon^{-1}),\tilde{\mathcal{O}}(\sigma_2^{-1/3}n)\right\} $ \\
\bottomrule
\end{tabular}
\end{small}
\end{center}
\vskip -0.1in
\end{table*}
\subsection{Efficient subsolver for the cubic regularized second-order subproblem}
In Section \ref{sec:cubic-svrg}, we propose the Cubic-Prox-SVRG method to solve the subproblem $\min_{x\in\mathbb{R}^d}\tilde{f}_{\eta}(x;y)$ by exploiting the finite-sum structure of the inexact Hessian and the uniform convexity of the cubic regularizer $\frac{1}{3}\|\cdot\|^3$. Because it converges to a sufficiently small neighborhood of the optimal solution at a superlinear rate (``sufficiently'' meaning that the approximate solution satisfies the requirements of IPCNM and AIPCNM), in the convex setting it is a good alternative to the well-known Lanczos method, which only has a linear rate \cite{carmon2018analysis}.
\section{The inexact proximal cubic regularized Newton method}\label{sec:cubic}
\begin{algorithm}[htb]
\caption{Inexact proximal cubic regularized Newton method (IPCNM)}
\label{alg:icnm}
\begin{algorithmic}[1]
\STATE Input: $x_0\in\mathbb{R}^d$ and $\eta = 3L_3$
\FOR{t=0,1,2,...}
\STATE Compute an inexact gradient $g_t$ and an inexact Hessian $H_t$ on $x_t$
\STATE Compute an inexact solution $x_{t+1}$ of the subproblem $\min_{x\in\mathbb{R}^d}\tilde{f}_{\eta}(x;x_t)$
\ENDFOR
\end{algorithmic}
\end{algorithm}
In this section, we propose IPCNM in Algorithm \ref{alg:icnm}. For Algorithm \ref{alg:icnm}, we have Lemma \ref{lem:basic-f} which is key to the convergence result.
\begin{lemma}\label{lem:basic-f}
Let $\{x_t\}_{t\ge0}$ be generated by Algorithm \ref{alg:icnm} and $x_{t+1}^* = {\rm argmin}_{x\in \mathbb{R}^d}\tilde{f}_{\eta}(x; x_t)$.
Define the approximation error
\begin{equation}
E_t \overset{\rm def}{=} \frac{4}{3L_3^2}\|\nabla^2 f(x_t) - H_t\|^3
+ \frac{4}{3}\left(\frac{2}{L_3}\right)^{1/2}\|\nabla f(x_t) - g_t\|^{3/2} + (\tilde{f}_{\eta}(x_{t+1}; x_t) -\tilde{f}_{\eta}(x_{t+1}^*; x_t)).\label{eq:E-t}
\end{equation}
Then, for all $0\le \alpha_t\le 1$, if $F(x)$ is convex and $f(x)$ has $L_3$-Lipschitz Hessians, it follows that for $t\ge1$
\begin{equation}
F(x_{t})\le F(\alpha_{t-1}x^* + (1-\alpha_{t-1})x_{t-1})+L_3\alpha_{t-1}^3\|x_{t-1}-x^*\|^3+E_{t-1}. \label{eq:meta}
\end{equation}
\end{lemma}
In Lemma \ref{lem:basic-f}, $E_t$ is the approximation error induced by the inexact gradient $g_t$, the inexact Hessian $H_t$ and the inexact solution $x_{t+1}$ of the subproblem \eqref{eq:subprob}.
To continue, we make the general assumption that the Euclidean distance between the iterates $\{x_t\}_{t\ge0}$ and $x^*$ is bounded by a constant.
\begin{assumption}\label{ass:x}
Let $\{x_t\}_{t\ge0}$ be generated by Alg. \ref{alg:icnm}. Then, there exists $D>0$ such that $\forall t\ge0$, $\|x_t-x^*\|\le D$.
\end{assumption}
Then, based on Lemma \ref{lem:basic-f} and Assumption \ref{ass:x}, Theorems \ref{thm:nonstrong} and \ref{thm:strong} give the convergence results in the convex and strongly convex settings respectively.
\begin{theorem}[The convex setting]\label{thm:nonstrong}
Suppose that Assumption \ref{ass:x} holds and $E_t$ is defined as in Lemma \ref{lem:basic-f}. Then for the convex function $F(x)$ in \eqref{eq: prob}, it follows that for $t\ge 1$,
\begin{eqnarray}
F(x_{t}) - F(x^*)\le\frac{27L_3D^3}{(t+1)(t+2)} + \frac{1}{t(t+1)(t+2)} \sum_{i=1}^{t} i(i+1)(i+2)E_i.\nonumber\label{eq:nonstrong-2}
\end{eqnarray}
\end{theorem}
By Theorem \ref{thm:nonstrong}, if $E_i=\O\left(\frac{1}{(i+2)^3}\right)$, then IPCNM can converge to an $\O(1/t^2)$-accurate solution after the $t$-th iteration.
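This error schedule can be checked numerically: with $E_i = 1/(i+2)^3$ (a hypothetical instance of the $\O$ bound, with constant 1), the weighted error term in the bound above scales as $1/t^2$, so $t^2$ times it stays bounded:

```python
# Sanity check (illustrative): with E_i = 1/(i+2)^3, the term
#   (1/(t(t+1)(t+2))) * sum_{i=1}^t i(i+1)(i+2) E_i
# from the convex-setting bound is O(1/t^2); each summand
# i(i+1)/(i+2)^2 is below 1, so the sum is below t.
def error_term(t):
    s = sum(i * (i + 1) * (i + 2) / (i + 2) ** 3 for i in range(1, t + 1))
    return s / (t * (t + 1) * (t + 2))

vals = [t * t * error_term(t) for t in (10, 100, 1000)]
print(all(v < 1.0 for v in vals))   # -> True: bounded, i.e. O(1/t^2)
```

Indeed, since each summand $i(i+1)/(i+2)^2 < 1$, one has $t^2 \cdot \text{term} = t\sum_i (\cdot)/((t+1)(t+2)) < t^2/((t+1)(t+2)) < 1$ for all $t$.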
\begin{theorem}[The strongly convex setting]\label{thm:strong}
Suppose that Assumption \ref{ass:x} holds and $E_t$ is defined as in Lemma \ref{lem:basic-f}. Assume that for $t\ge 0$,
\begin{equation}
\alpha = \min\left\{\frac{1}{3}, \sqrt{\frac{\sigma_2}{3L_3D}}\right\}.
\end{equation}
Then for the $\sigma_2$-strongly convex function $F(x)$, it follows that for $t\ge 1$,
\begin{eqnarray}
F(x_{t}) - F(x^*)\le (1-\alpha)^t\left(F(x_{0}) - F(x^*)\right) +(1-\alpha)^t \sum_{i=0}^{t-1}\frac{E_i}{(1-\alpha)^i}.\nonumber
\end{eqnarray}
\end{theorem}
By Theorem \ref{thm:strong}, if $E_i=\O\left(\frac{(1-\alpha)^i}{t}\right)$, then IPCNM can converge to an $\O((1-\alpha)^t)$-accurate solution after the $t$-th iteration. In Theorem \ref{thm:superlinear}, we also show that IPCNM has a local superlinear rate.
\begin{theorem}[Local superlinear convergence rate]\label{thm:superlinear}
Define $\omega \overset{\rm def}{=} \frac{1}{L_3^2}\left(\frac{\sigma_2}{2}\right)^3$.
Assume that $t_0$ is the minimal integer such that $F(x_{t_0}) - F(x^*)\le \frac{2}{3}\omega$. Then for $t\ge t_0$, by setting $$E_t\le\frac{\omega}{2}\left(2/3\right)^{(3/2)^{t-t_0+1}},$$ we have for $t\ge t_0$,
\begin{eqnarray}
F(x_{t})-F(x^*)\le \omega(2/3)^{(3/2)^{t-t_0}}.\nonumber
\end{eqnarray}
\end{theorem}
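To illustrate what the rate in Theorem \ref{thm:superlinear} means, the sequence $e_k = \omega(2/3)^{(3/2)^k}$ (with $\omega = 1$ for illustration) not only decreases, but its successive ratios themselves shrink, which is the signature of superlinear convergence:

```python
# Illustrative check of the superlinear schedule: for
# e_k = (2/3)**((3/2)**k), superlinear convergence means the
# contraction ratio e_{k+1}/e_k itself tends to zero.
errs = [(2.0 / 3.0) ** (1.5 ** k) for k in range(11)]
ratios = [errs[k + 1] / errs[k] for k in range(10)]

print(all(r2 < r1 for r1, r2 in zip(ratios, ratios[1:])))   # -> True
```

By contrast, a linearly convergent sequence such as $(1-\alpha)^t$ in Theorem \ref{thm:strong} has a constant contraction ratio.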
Finally, we give Corollaries \ref{thm:online} and \ref{thm:finite}, which state the overall complexity in the online stochastic setting and the finite-sum setting respectively.
\begin{corollary}[The online stochastic setting]\label{thm:online}
Suppose that Assumptions \ref{ass:gradient-Hessian} and \ref{ass:g-H} hold. If $F(x)$ is convex, then with probability $1-\delta$, IPCNM can find an $\epsilon$-accurate solution in at most $$\tilde{\mathcal{O}}(\epsilon^{-5/2}) \text{ {\rm equivalent stochastic gradient evaluations.}} $$
If $F(x)$ is $\sigma_2$-strongly convex, IPCNM can find an $\epsilon$-accurate solution in at most $$\tilde{\mathcal{O}}(\sigma_2^{-5/6}\epsilon^{-4/3}) \text{ {\rm equivalent stochastic gradient evaluations.}}$$
\end{corollary}
\begin{corollary}[The finite-sum setting]\label{thm:finite}
Suppose that Assumptions \ref{ass:gradient-Hessian} and \ref{ass:g-H} hold. If $F(x)$ is convex, then with probability $1-\delta$, IPCNM can find an $\epsilon$-accurate solution in at most $$\min\{\tilde{\mathcal{O}}(\epsilon^{-5/2}),\tilde{\mathcal{O}}({n\epsilon^{-1/2}})\}
\text{ {\rm equivalent stochastic gradient evaluations.}}$$
If $F(x)$ is $\sigma_2$-strongly convex, IPCNM can find an $\epsilon$-accurate solution in at most $$\min\left\{\tilde{\mathcal{O}}(\sigma_2^{-5/6}\epsilon^{-4/3}), \tilde{\mathcal{O}}(\sigma_2^{-1/2}n)\right\} \text{ {\rm equivalent stochastic gradient evaluations.}}$$
\end{corollary}
\section{The accelerated inexact proximal cubic regularized Newton method}\label{sec:acc-cubic}
\begin{algorithm} [H]
\caption{Accelerated inexact proximal cubic regularized Newton method (AIPCNM)}
\label{alg:AICNM}
\begin{algorithmic}[1]
\STATE Input:
$x_0 \in \mathbb{R}^d$, $\eta=4L_3, C_1>0, C_2>0$, a sequence $\{A_t\}_{t\ge 0}$
\STATE Set $v_0=x_0$
\STATE Set $ \psi_0(x) = \frac{C_1}{2}\|x-x_0\|^2+ \frac{C_2}{3}\|x-x_0\|^3 $
\FOR{$t=0,1,2,\ldots$}
\STATE Set $a_t = A_{t+1}-A_t$
\STATE Obtain an inexact gradient $g_t$ and an inexact Hessian $H_t$, where $H_t$ satisfies Assumption \ref{ass:H}
\STATE Set $y_t = (1 - \alpha_t) x_t + \alpha_t v_t$, where $\alpha_t=\frac{a_t}{A_t+a_t}$
\STATE Find an approximate solution $x_{t+1}$ of $\min_{x\in\mathbb{R}^d}\tilde{f}_{\eta}(x;y_t)$, where $\tilde{f}_{\eta}(x;y)$ is defined in \eqref{eq:subprob}
\STATE Obtain $g_{t+1}^{\prime}$ that satisfies Assumption~\ref{ass:g-prime}
\STATE Find $v_{t+1}={\rm argmin}_{x \in \mathbb{R}^d}\psi_{t+1}(x)$, where
\begin{align}
\psi_{t+1}(x)=&\psi_t(x)+a_t\big(f(x_{t+1})+\langle g_{t+1}^{\prime}, x-x_{t+1} \rangle+h(x)\big). \label{eq:psi}
\end{align}
\ENDFOR
\end{algorithmic}
\end{algorithm}
In Algorithm \ref{alg:AICNM}, we propose the AIPCNM method. To ensure convergence, we make Assumptions \ref{ass:H} and \ref{ass:g-prime}.
\begin{assumption}\label{ass:H}
Let $\{y_t\}_{t \ge 0}$ be generated by Algorithm~\ref{alg:AICNM}. Then we have
\begin{eqnarray}
\frac{\mu_t}{2}I\preceq H_t - \nabla^2 f(y_t)\preceq \mu_t I,
\end{eqnarray}
where $\{\mu_t\}_{t\ge 0}$ is a positive sequence.
\end{assumption}
\begin{assumption}\label{ass:g-prime}
$g_{t+1}^{\prime}$ is an unbiased estimate of $\nabla f(x_{t+1})$, i.e., $\mathbb E[g_{t+1}^{\prime}] = \nabla f(x_{t+1})$.
\end{assumption}
Then by extending \citep[Lemma 2.1]{nunes2018accelerated}, we obtain Lemma \ref{eq:acc-key}, which is the key lemma to extend the conclusion of the exact APCNM to the inexact case.
\begin{lemma}\label{eq:acc-key}
Let $\{x_t\},\{y_t\}$ be generated by Alg. \ref{alg:AICNM}. Denote $$q_{t+1} \overset{\rm def}{=} g_t-\nabla f(y_t)+\nabla F(x_{t+1}) - \nabla \tilde{f}_{\eta}(x_{t+1}; y_t),$$ then we have
\begin{eqnarray*}
q_{t+1}^T(y_t-x_{t+1})
&\ge&\min\left\{ \frac{\|q_{t+1}\|^2}{3\mu_t} , \sqrt{\frac{\|q_{t+1}\|^3}{4L_3+2\eta}} \right\}+\frac{\mu_t}{4}\|y_t-x_{t+1}\|^2.
\end{eqnarray*}
\end{lemma}
In Lemma \ref{eq:acc-key}, we define $q_{t+1}$ as a proxy for the $\nabla F(x_{t+1})$ of the exact case. Then, based on Lemma \ref{eq:acc-key}, we prove Theorem \ref{thm:prime-result}.
\begin{theorem}\label{thm:prime-result}
Assume that $\|v_{t+1}-v_t\|\le R$ for all $t\ge0$, and that
the constants $C_1>0, C_2>0$ in Algorithm \ref{alg:AICNM} satisfy
\begin{eqnarray*}
C_1&\ge&\max_{t\ge0}\left\{\frac{9\mu_ta_t^2}{2A_{t+1}}-\frac{2}{3}A_t\sigma_2\right\}\\
C_2 &\ge&\max_{t\ge0}\left\{ \frac{32a_t^3L_3}{3A_{t+1}^2}-\frac{A_t\sigma_2}{R}\right\}.
\end{eqnarray*}
If the sequences $\{x_t\}$, $\{v_t\}$ are generated by Algorithm \ref{alg:AICNM}, then for all $t>0$, we have
\begin{eqnarray}
\mathbb E[F(x_t) - F(x^*)]\le \frac{1}{A_t}\left( \frac{C_1}{2}\|x_0-x^*\|^2+ \frac{C_2}{3}\|x_0-x^*\|^3\right) + \frac{1}{A_t} \sum_{i=1}^t G_i,
\end{eqnarray}
where
\begin{eqnarray}
G_i &=& \left(\frac{A_{i+1}}{\mu_i}+\frac{9a_i^2}{2(3C_1+2A_i\sigma_2)}\right) \|g_i-\nabla f(y_i) - \nabla \tilde{f}_{\eta}(x_{i+1}; y_i)\|^2\nonumber\\
&&+ \frac{9a_i^2}{2(3C_1+2A_i\sigma_2)}\| g_{i+1}^{\prime}- \nabla f(x_{i+1})\|^2,
\end{eqnarray}
and the expectation is taken on all the history of the randomness of $g_{i+1}^{\prime}$ from $i=0$ to $t-1$.
\end{theorem}
In Theorem \ref{thm:prime-result}, $\mu_i$ bounds the error of $H_i$, $\|g_i-\nabla f(y_i) - \nabla \tilde{f}_{\eta}(x_{i+1}; y_i)\|$ bounds the error of the inexact gradient $g_i$ and of the inexact solution\footnote{$\|\nabla \tilde{f}_{\eta}(x_{i+1}; y_i)\|$ can be used as a measure of the accuracy of the subsolver; it can be bounded via $\tilde{f}_{\eta}(x_{i+1}; y_i) - \tilde{f}_{\eta}(\tilde{x}; y_i)$ by the Lipschitz-gradient property.} $x_{i+1}$, and $\| g_{i+1}^{\prime}- \nabla f(x_{i+1})\|$ bounds the error of $g_{i+1}^{\prime}$.
\begin{theorem}[The convex case]\label{thm:acc-convex}
Assume that $\|x_0-x^*\|\le D$.
If we set
\begin{equation}
\begin{cases}
&\forall 0\le i, A_{i}=\frac{i(i+1)(i+2)}{6},\mu_i = \frac{L_3D}{i+2} \\
&C_1 = 7L_3D, C_2 = 48L_3 \\
&\|g_i-\nabla f(y_i) - \nabla \tilde{f}_{\eta}(x_{i+1}; y_i)\|\le \frac{L_3D^2}{\sqrt{2t}(i+2)^2}\\
&\| g_{i+1}^{\prime}- \nabla f(x_{i+1})\| \le \frac{2L_3D^2}{\sqrt{t}(i+2)^2},\\
\end{cases}\label{eq:cond11}
\end{equation}
then if $F(x)$ is convex, we have
\begin{eqnarray*}
\mathbb E[F(x_t) - F(x^*)]&\le& \frac{129L_3D^3}{t(t+1)(t+2)}, \label{eq:cond12}
\end{eqnarray*}
where the expectation is taken on all the history of the randomness of $g_{i+1}^{\prime}$ from $i=0$ to $t-1$.
\end{theorem}
\begin{proof}
By using the setting in \eqref{eq:cond11} and Theorem \ref{thm:prime-result}, we can obtain \eqref{eq:cond12} directly.
\end{proof}
\begin{theorem}[The strongly convex case]\label{thm:acc-strong-convex}
If $F(x)$ is $\sigma_2$-strongly convex, by setting
\begin{eqnarray}
\rho=\min\left\{1, \frac{3^{1/3}}{2}\left(\frac{\sigma_2}{L_3R}\right)^{1/3}\right\},
\end{eqnarray}
and
\begin{equation}
\begin{cases}
&A_0 =0 \\
&\forall i\ge 1, A_{i} = (1+\rho)^i, \mu_i =\mu_0= \frac{32(L_3R)^{2/3}\sigma_2^{1/3}}{27\cdot 3^{2/3}}\\
&C_1=\frac{9\mu_0(1+\rho)}{2}, C_2=\frac{32(1+\rho)L_3}{3} \\
&\|g_i-\nabla f(y_i) - \nabla \tilde{f}_{\eta}(x_{i+1}; y_i)\|\le \left(\frac{1}{\mu_0}+\frac{9\rho^2}{4\sigma_2}\right)^{-1/2}(1+\rho)^{-i/2+1}t^{-1/2}\\
&\| g_{i+1}^{\prime}- \nabla f(x_{i+1})\| \le \frac{2\sigma_2^{1/2}}{3\rho} (1+\rho)^{-i/2+1}t^{-1/2}\\
\end{cases}\label{eq:cond21}
\end{equation}
then we have for $t\ge 1$,
\begin{eqnarray}
\mathbb E[F(x_t) - F(x^*)]&\le& (1+\rho)^{-(t+1)}\left( \frac{9\mu_0}{4}\|x_0-x^*\|^2 + \frac{32L_3}{9}\|x_0-x^*\|^3+2\right),\label{eq:cond22}
\end{eqnarray}
where the expectation is taken on all the history of the randomness of $g_{i+1}^{\prime}$ from $i=0$ to $t-1$.
\end{theorem}
\begin{proof}
By using the setting in \eqref{eq:cond21} and Theorem \ref{thm:prime-result}, we can obtain \eqref{eq:cond22} directly.
\end{proof}
Finally, we give Corollaries \ref{thm:acc-online} and \ref{thm:acc-finite} to show the overall complexity in the online stochastic setting.
\begin{corollary}[The online stochastic setting]\label{thm:acc-online}
Suppose that Assumptions \ref{ass:gradient-Hessian} and \ref{ass:g-H} hold. If $F(x)$ is convex, then with probability $1-\delta$, AIPCNM can find an $\epsilon$-accurate solution in at most $$\tilde{\mathcal{O}}(\epsilon^{-2}) \text{ \rm{equivalent stochastic gradient iterations}}. $$
If $F(x)$ is $\sigma_2$-strongly convex, AIPCNM can find an $\epsilon$-accurate solution in at most $$\tilde{\mathcal{O}}(\sigma_2^{-2/3}\epsilon^{-1}) \text{ \rm{equivalent stochastic gradient iterations}}.$$
\end{corollary}
\begin{corollary}[The finite-sum setting]\label{thm:acc-finite}
Suppose that Assumptions \ref{ass:gradient-Hessian} and \ref{ass:g-H} hold. If $F(x)$ is convex, then with probability $1-\delta$, AIPCNM can find an $\epsilon$-accurate solution in at most $$\min\{\tilde{\mathcal{O}}(\epsilon^{-2}), \tilde{\mathcal{O}}(n\epsilon^{-1/3})\} \text{ \rm{equivalent stochastic gradient iterations}}.$$
If $F(x)$ is $\sigma_2$-strongly convex, AIPCNM can find an $\epsilon$-accurate solution in at most $$\min\left\{\tilde{\mathcal{O}}(\sigma_2^{-2/3}\epsilon^{-1}),\tilde{\mathcal{O}}(\sigma_2^{-1/3}n)\right\} \text{ \rm{equivalent stochastic gradient iterations}}.$$
\end{corollary}
\section{The Proximal SVRG with Cubic Regularization}\label{sec:cubic-svrg}
In this section, we propose an efficient algorithm, called the Cubic Proximal Stochastic Variance Reduced Gradient method (Cubic-Prox-SVRG), in Algorithm \ref{alg:cubic} to solve the cubic-regularized second-order subproblem $\min_{x\in\mathbb{R}^d}\tilde{f}_{\eta}(x; y)$, where $\tilde{f}_{\eta}(x; y)$ is defined in \eqref{eq:subprob}. In this section we assume that the inexact Hessian $H$ is obtained by subsampling under Assumption \ref{ass:g-H}, i.e., $H\overset{\text{def}}{=}\frac{1}{n}\sum_{i=1}^n H_i$, where $n$ is the number of subsampled components, and we assume that for any $v\in\mathbb{R}^d$, the cost of computing $H_i v$ is $O(d)$.
Then we reformulate the subproblem $\min_{x\in\mathbb{R}^d}\tilde{f}_{\eta}(x, y)$ as
\begin{eqnarray}
\min_{w\in\mathbb{R}^d}P(w) \overset{\text{def}}{=} \frac{1}{n}\sum_{i=1}^n \psi_i(w) + r(w),
\end{eqnarray}
where $w \overset{\text{def}}{=} x-y, \psi_i(w) \overset{\text{def}}{=} w^TH_i w, r(w)\overset{\text{def}}{=} \frac{\eta}{3}\|w\|^3+h(w+y)$.
The Cubic-Prox-SVRG algorithm is motivated by the uniform convexity property of degree $3$,
\begin{eqnarray}
\frac{1}{3}\|w\|^3\ge \frac{1}{3}\|u\|^3+\langle \nabla \frac{1}{3}\|u\|^3, w-u\rangle + \frac{1}{6}\|w-u\|^3
\end{eqnarray}
of the cubic regularizer $\frac{1}{3}\|w\|^3$ \cite{nesterov2008accelerating}.
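The degree-$3$ uniform convexity inequality above (with its constant $\frac{1}{6}$) can be checked mechanically. The snippet below is a numerical sanity check, not a proof: it samples random pairs $w,u$ and verifies that $\frac{1}{3}\|w\|^3 \ge \frac{1}{3}\|u\|^3 + \langle \|u\|u, w-u\rangle + \frac{1}{6}\|w-u\|^3$, using that $\nabla \frac{1}{3}\|u\|^3 = \|u\|u$.

```python
import numpy as np

rng = np.random.default_rng(1)
violations = 0
for _ in range(10000):
    w, u = rng.standard_normal(4), rng.standard_normal(4)
    grad_u = np.linalg.norm(u) * u          # gradient of (1/3)||u||^3 at u
    lhs = np.linalg.norm(w) ** 3 / 3
    rhs = (np.linalg.norm(u) ** 3 / 3 + grad_u @ (w - u)
           + np.linalg.norm(w - u) ** 3 / 6)
    if lhs < rhs - 1e-10:
        violations += 1
print(violations)  # -> 0
```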
Assume $H\succeq 0$ and that $h(w+y)$ is $\sigma_2$-strongly convex $(\sigma_2\ge0)$. Then $P(w)$ is $\sigma_2$-strongly convex and $\frac{\eta}{2}$-uniformly convex of degree $3$. Denote $ w^*\overset{\text{def}}{=}{\rm argmin}_{w\in\mathbb{R}^d}P(w).$
For two points $w\in\mathbb{R}^d, w^{\prime}\in\mathbb{R}^d$, uniform convexity of degree $3$ is equivalent to $\frac{1}{2}\|w-w^{\prime}\|$-strong convexity. Therefore, third-order uniform convexity is
stronger than strong convexity when the two points are far from each other and weaker when they are close to each other. Meanwhile, it is known that if $P(w)$ is smooth and strongly convex, gradient descent methods converge at a linear rate \cite{nesterov1998introductory}. Combining the two facts, when $P(w)$ is smooth and third-order uniformly convex, we may obtain a gradient-based algorithm with a two-stage convergence rate: a superlinear rate when the iterate is far from the optimal point and a linear rate when it is close.
To verify this intuition, we propose a new algorithm, called the Cubic regularized Proximal SVRG (Cubic-Prox-SVRG), in Algorithm \ref{alg:cubic}; it is a variant of the well-known Prox-SVRG algorithm \cite{xiao2014proximal}. Compared with Prox-SVRG \cite{xiao2014proximal}, the only differences are the number of inner iterations $M_s$ and the learning rate $\tau_s$ for each outer iteration. In Theorem \ref{thm:converge}, we give the two-stage convergence rate of Cubic-Prox-SVRG.
\begin{algorithm}[!ht]
\caption{Cubic proximal stochastic variance reduced gradient}
\begin{algorithmic}[1]
\STATE Initialization: $\tilde{w}_0 = 0, m = O(n), \tau_0 = 0.1/L_2$
\STATE $Q=\{q_1, q_2,\ldots, q_k, \dots, q_n\},$ where $q_k\overset{\text{def}}{=}\frac{\|H_k\|}{\sum_{i=1}^n \|H_i\|}$;
$L_2 \overset{\text{def}}{=} \frac{1}{n}\sum_{i=1}^n \|H_{i}\|$
$\kappa_2 \overset{\text{def}}{=} \frac{L_2}{\sigma_2}, \kappa_3\overset{\text{def}}{=}\frac{L_2}{2}\left(\frac{12}{\eta}\right)^{2/3}$
$M_s\overset{\text{def}}{=} \lceil 100\min\{\kappa_2,\kappa_3\max\{m, (P(\tilde{w}_{s-1}) - P(w^*))^{-1/3}\}\}\rceil;$
$ \tau_s \overset{\text{def}}{=} \tau_0\min\{1,m^{-\frac{1}{2}}(P(\tilde{w}_{s-1}) - P(w^*))^{-1/6}\}$
\vspace{0.02in}
\FOR{$s =1, 2, 3, \ldots$}
\STATE $\tilde{w} = \tilde{w}_{s-1}$
\STATE $\tilde{\mu}=\frac{1}{n}\sum_{i=1}^n \nabla \psi_i(\tilde{w})$
\STATE $w_0 = \tilde{w}$
\FOR{$k=1, 2, 3,..., M_s$}
\STATE Pick $i_k\in \{1, ..., n\}$ randomly according to $Q$
\STATE $\tilde{\nabla}_k =( \nabla \psi_{i_k}(w_{k-1}) -\nabla \psi_{i_k}(\tilde{w}))/(q_{i_k}n)+ \tilde{\mu}$
\STATE ${w}_k =\text{prox}_{r}(w_{k-1} -\tau_s \tilde{\nabla}_k ) $
\ENDFOR
\STATE $\tilde{w}_s = \frac{1}{M_s}\sum_{k=1}^{M_s} w_k$
\ENDFOR
\end{algorithmic}\label{alg:cubic}
\end{algorithm}
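The proximal step in Algorithm \ref{alg:cubic} admits a closed form when $h\equiv 0$: the prox of $w\mapsto \tau\frac{\eta}{3}\|w\|^3$ reduces to a scalar quadratic in $\|w\|$. The Python sketch below illustrates one run on a synthetic instance; as simplifying assumptions (not the paper's exact setting) it uses uniform sampling instead of the weighted distribution $Q$, a fixed inner length $M=10n$ and fixed step $\tau=0.1/L_2$ instead of the adaptive $M_s,\tau_s$, and it folds the subproblem's linear term $g$ into the smooth components.

```python
import numpy as np

def prox_cubic(v, tau, eta):
    # prox of w -> (eta/3)||w||^3 with step tau (h = 0 assumed):
    # solve tau*eta*||w||*w + w = v, so ||w|| is the positive root of
    # tau*eta*a^2 + a - ||v|| = 0.
    nv = np.linalg.norm(v)
    if nv == 0.0:
        return v
    a = (-1.0 + np.sqrt(1.0 + 4.0 * tau * eta * nv)) / (2.0 * tau * eta)
    return v * (a / nv)

rng = np.random.default_rng(2)
dim, n, eta = 4, 20, 1.0
Hs = []
for _ in range(n):
    B = rng.standard_normal((dim, dim))
    Hs.append(B @ B.T / dim)            # PSD component Hessians
g = rng.standard_normal(dim)
H = sum(Hs) / n

def full_obj(w):
    return 0.5 * w @ H @ w + g @ w + eta / 3 * np.linalg.norm(w) ** 3

def psi_grad(i, w):                     # gradient of g^T w + (1/2) w^T H_i w
    return g + Hs[i] @ w

L2 = np.mean([np.linalg.norm(Hi, 2) for Hi in Hs])
tau = 0.1 / L2
w_tilde = np.zeros(dim)
for s in range(30):                     # outer loop
    mu = g + H @ w_tilde                # full gradient at the snapshot
    w, acc, M = w_tilde.copy(), np.zeros(dim), 10 * n
    for _ in range(M):                  # inner loop, uniform sampling
        i = rng.integers(n)
        v = psi_grad(i, w) - psi_grad(i, w_tilde) + mu
        w = prox_cubic(w - tau * v, tau, eta)
        acc += w
    w_tilde = acc / M
print(full_obj(w_tilde))
```

After a few outer iterations the objective is essentially at its minimum; each inner step costs $O(d)$ on top of one $H_i v$ product, matching the cost model above.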
\begin{theorem}\label{thm:converge}
Assume that $s_1$ is the smallest number of outer iteration that satisfies $$P(\tilde{w}_{s_1})-P(w^*)\le \frac{1}{m^3}.$$
Then it follows that
\begin{eqnarray*}
&&\!\!\!\!\!\!\!\!\!\mathbb E[P(\tilde{w}_s) - P(w^*)]\le\begin{cases}
\left(\frac{\rho}{\sqrt{m}}\right)^{6\left(1-\left(\frac{5}{6}\right)^s\right)}\left(P(\tilde{w}_0)-P(w^*)\right)^{\left(\frac{5}{6}\right)^s}, &\text{ {\rm if } } s\le s_1 \\
\rho^{s-s_1} \left(P(\tilde{w}_{s_1})-P(w^*)\right), & \text{ {\rm if } } s> s_1 \\
\end{cases}
\end{eqnarray*}
where $\rho \overset{\rm def}{=} \frac{1}{100L_2\tau_0(1-4L_2\tau_0)} +\frac{4L_2\tau_0(100\kappa_2 m+1)}{100(1-4L_2\tau_0)\kappa_2 m}<1$.
\end{theorem}
It should be noted that $\eta=\O(1), L_2=\O(1)$. Then by the definition, $\kappa_3$ is an $\O(1)$ constant.
By Theorem \ref{thm:converge}, the outer iterate $\tilde{w}_s$ converges to a neighborhood of $w^*$ at a superlinear rate until $P(\tilde{w}_{s_1})-P(w^*)\le \frac{1}{m^3}$, and at a linear rate thereafter. In the convex setting, the number of stochastic samples in IPCNM and AIPCNM is $\tilde{\mathcal{O}}(t^2)$ at the $t$-th iteration. To ensure the convergence rates in Theorems \ref{thm:nonstrong} and \ref{thm:acc-convex}, the subproblem must be solved to accuracy $\O(\frac{1}{t^3})$ and $\O(\frac{1}{t^5})$, respectively, while by Theorem \ref{thm:converge}, if $n=\O(t^2)$, then Cubic-Prox-SVRG converges to an $\O(1/t^6)$-accurate solution at a superlinear rate. In the strongly convex setting, Cubic-Prox-SVRG eventually converges at a linear rate and thus satisfies Assumption \ref{ass:appro}.
\bibliographystyle{alpha}
% arXiv:1209.5835
\section{Introduction:}
Let $X$ be a non-empty set and let $I=[0$, $1]$ be the closed unit interval in
the set $\mathbb{R}$ of real numbers. Let $A_{f}(s)=\{A_{f}^{n}\}_{n}$ and $B_{f}(s)=\{B_{f}^{n}\}_{n}$ be sequences of fuzzy sets in $X$, called fuzzy sequential
sets in $X$, and we define\newline
$i)$ $A_{f}(s)\vee B_{f}(s)=\{A_{f}^{n}\vee B_{f}^{n}\}_{n}$ (union),
\newline
$ii)$ $A_{f}(s)\wedge B_{f}(s)=\{A_{f}^{n}\wedge B_{f}^{n}\}_{n}$ (intersection), \newline
$iii)$ $A_{f}(s)\leq B_{f}(s)~$if and only if~$A_{f}^{n}\leq B_{f}^{n}~$for
all~$n\in \mathbb{N}$, $\mathbb{N}$ being the set of positive integers, \newline
$iv)$ $A_{f}(s)\leq _{w}B_{f}(s)$ if and only if there exists $n\in \mathbb{N}~$such that~$A_{f}^{n}\leq B_{f}^{n}$, \newline
$v)$ $A_{f}(s)=B_{f}(s)~$if and only if~$A_{f}^{n}=B_{f}^{n}~$for all~$n\in \mathbb{N}$, \newline
$vi)$ $A_{f}(s)(x)=\{A_{f}^{n}(x)\}_{n}$, $x\in X$, \newline
$vii)$ $A_{f}(s)(x)\geq _{M}r$ if and only if $A_{f}^{n}(x)\geq r_{n}$ for
all $n\in M$, where $r=\{r_{n}\}_{n}$ is a sequence in $I$. In particular, if
$M=\mathbb{N}$, we write $A_{f}(s)(x)\geq r$, \newline
$viii)$ $X_{f}^{l}(s)=\{X_{f}^{n}\}_{n}~$where~$l\in I~$and~$X_{f}^{n}(x)=l$, for~all~$x\in X$, $n\in \mathbb{N}$, \newline
$ix)$ $A_{f}^{c}(s)=\{1-A_{f}^{n}\}_{n}=\{(A_{f}^{n})^{c}\}_{n}$, called the
complement of $A_{f}(s)$,\newline
$x)$ a fuzzy sequential set $P_{f}(s)=\{p_{f}^{n}\}_{n}$ is called a fuzzy
sequential point if there exists $x\in X$ and a non-zero sequence
$r=\{r_{n}\}_{n}$ in $I$ such that
\begin{eqnarray*}
p_{f}^{n}(t) &=&r_{n}\text{, if }t=x\text{, } \\
&=&0\text{, if }t\in X-\{x\}\text{, for all }n\in \mathbb{N}\text{.}
\end{eqnarray*}
If $M$ is the collection of all $n\in \mathbb{N}$ such that $r_{n}\neq 0$, then we can write the above expression as
\begin{eqnarray*}
p_{f}^{n}(x) &=&r_{n}\text{, whenever }n\in M\text{, } \\
&=&0\text{,}\ \text{whenever }n\in \mathbb{N}-M\text{.}
\end{eqnarray*}
The point $x$ is called the support, $M$ is called the base, and $r$ is called
the sequential grade of membership of $x$ in the fuzzy sequential point
$P_{f}(s)$, and we write $P_{f}(s)=(p_{fx}^{M}$, $r)$. If further $M=\{n\}$,
$n\in \mathbb{N}$, then the fuzzy sequential point is called a simple fuzzy sequential point,
and it is denoted by $(p_{fx}^{n}$, $r_{n})$. A fuzzy sequential point is
called complete if its base is the set of natural numbers. A fuzzy
sequential point $P_{f}(s)=(p_{fx}^{M}$, $r)$ is said to belong to $A_{f}(s)$
if and only if $P_{f}(s)\leq A_{f}(s)$, and we write $P_{f}(s)\in A_{f}(s)$.
It is said to belong weakly to $A_{f}(s)$, symbolically $P_{f}(s)\in
_{w}A_{f}(s)$, if and only if there exists $n\in M$ such that
$p_{f}^{n}(x)\leq A_{f}^{n}(x)$. If $R\subseteq M$ and $s$ is the sequence in
$I$ equal to $r$ on $R$ and vanishing outside $R$, then the fuzzy sequential
point $P_{rf}(s)=(p_{fx}^{R}$, $s)$ is called a reduced fuzzy sequential
point of $P_{f}(s)=(p_{fx}^{M}$, $r)$. A sequence $(x$, $L)=\{A_{n}\}_{n}$
of subsets of $X$, where $A_{n}=\{x\}$ for all $n\in L$ and $A_{n}=\Phi$, the null subset of $X$, for all $n\in \mathbb{N}-L$, is called a sequential point in $X$.\newline
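The operations $i)$--$ix)$ above are componentwise $\max$/$\min$ operations on sequences of membership functions, so they are easy to make concrete. The following Python sketch (an illustration, not part of the paper) represents fuzzy sequential sets truncated to their first two indices on a two-point universe; the particular membership values are invented for the example.

```python
# Fuzzy sequential sets on X = {"a", "b"}, truncated to indices n = 1, 2:
# a list of dicts, one fuzzy set A_f^n (x -> grade in [0, 1]) per index.
X = ["a", "b"]
A = [{"a": 0.2, "b": 0.7}, {"a": 0.5, "b": 0.1}]   # A_f^1, A_f^2
B = [{"a": 0.4, "b": 0.6}, {"a": 0.5, "b": 0.3}]   # B_f^1, B_f^2

def union(A, B):         # (i)  componentwise max
    return [{x: max(An[x], Bn[x]) for x in X} for An, Bn in zip(A, B)]

def intersection(A, B):  # (ii) componentwise min
    return [{x: min(An[x], Bn[x]) for x in X} for An, Bn in zip(A, B)]

def leq(A, B):           # (iii) A <= B iff A_f^n <= B_f^n for every n
    return all(An[x] <= Bn[x] for An, Bn in zip(A, B) for x in X)

def leq_w(A, B):         # (iv) weak order: some n with A_f^n <= B_f^n
    return any(all(An[x] <= Bn[x] for x in X) for An, Bn in zip(A, B))

def complement(A):       # (ix) 1 - A_f^n, componentwise
    return [{x: 1 - An[x] for x in X} for An in A]

print(leq(A, union(A, B)), leq_w(A, B), leq(A, B))  # -> True True False
```

Here $A\leq A\vee B$ always holds, while $A\leq_{w}B$ holds via $n=2$ even though $A\leq B$ fails at $n=1$.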
\section{Definitions and Results:}
\begin{definition}
A family $\delta (s)$ of fuzzy sequential sets on a non-empty set $X$
satisfying the properties
$i)$ $X_{f}^{r}(s)\in \delta (s)~$for~all~$r\in \{0$, $1\}$,
$ii)$ $A_{f}(s)$, $B_{f}(s)\in \delta (s)\Rightarrow A_{f}(s)\wedge
B_{f}(s)\in \delta (s)$, and
$iii)$ for any family $\{A_{fj}(s)\in \delta (s)$, $j\in J\}$, $\underset{j\in J}{\vee }A_{fj}(s)\in \delta (s)$,\newline
is called a fuzzy sequential topology (FST) on $X$, and the ordered pair $(X$,
$\delta (s))$ is called a fuzzy sequential topological space (FSTS). The
members of $\delta (s)$ are called open fuzzy sequential sets in $X$. The
complement of an open fuzzy sequential set in $X$ is called a closed fuzzy
sequential set in $X$.
\end{definition}
\begin{definition}
If $\delta _{1}(s)$ and $\delta _{2}(s)$ are two FSTs on $X$ such that
$\delta _{1}(s)\subset \delta _{2}(s)$, then we say that $\delta _{2}(s)$ is
finer than $\delta _{1}(s)$, or that $\delta _{1}(s)$ is weaker than $\delta _{2}(s)$.
\end{definition}
\begin{proposition}
If $\delta $ is a fuzzy topology (FT) on $X$, then $\delta ^{\mathbb{N}}$ forms a FST on $X$.
\end{proposition}
\begin{proof}
Proof is straightforward.
\end{proof}
We may construct different FSTs on $X$ from a given FT $\delta $ on $X$;
$\delta ^{\mathbb{N}}$ is the finest of all these FSTs. Moreover, any FT $\delta $ on $X$
can be considered as a component of some FST on $X$, one of them being $\delta ^{\mathbb{N}}$; there are at least countably many FSTs on $X$ weaker than $\delta ^{\mathbb{N}}$
of which $\delta $ is a component. One of them is $\delta ^{\prime}(s)=\{A_{f}(s)=\{A_{f}^{n}\}_{n}$; $A_{f}^{n}=A$ for all $n\in \mathbb{N}$ and $A\in \delta \}$.
\begin{proposition}
If $(X$, $\delta (s))$ is a FSTS, then $(X$, $\delta _{n})$ is a fuzzy
topological space (FTS), where $\delta _{n}$=$\{A_{f}^{n}$; $A_{f}(s)$ = $\{A_{f}^{n}\}_{n}\in \delta (s)\}$, $n\in \mathbb{N}$.
\end{proposition}
\begin{proof}
Proof is omitted.
\end{proof}
\begin{definition}
$(X$, $\delta _{n})$, where $n\in \mathbb{N}$, is called the $n^{th}$ component FTS of the FSTS $(X$, $\delta (s))$.
\end{definition}
\begin{proposition}
Let $A_{f}(s)$=$\{A_{f}^{n}\}_{n}$ be an open (closed) fuzzy sequential
set in the FSTS $(X$, $\delta (s))$. Then for each $n\in \mathbb{N}$, $A_{f}^{n}$ is an open (closed) fuzzy set in $(X$, $\delta _{n})$, but the
converse is not necessarily true.
\end{proposition}
\begin{proof}
Proof of the first part is omitted. For the converse part let us take the
FSTS $(X$, $\delta (s))$, where $X$ is any non-empty set and $\delta (s)$=$\{X_{f}^{r}(s)$, $r\in I\}$. Let $\{r_{n}\}_{n}$ be a strictly increasing
sequence in $I$ and $A_{f}(s)$=$\{A_{f}^{n}\}_{n}$, where $A_{f}^{n}$=$\overline{r_{n}}$ and $\overline{r_{n}}(x)$=$r_{n}$ for all $x\in X$, $n\in \mathbb{N}$. Clearly, for each $n\in \mathbb{N}$, $A_{f}^{n}$ is an open fuzzy set in $(X$, $\delta _{n})$, but $A_{f}(s)$=$\{A_{f}^{n}\}_{n}$ is not an open fuzzy sequential set in the FSTS $(X$, $\delta (s))$.
\end{proof}
\begin{definition}
Fuzzy sequential sets $A_{f}(s)$=$\{A_{f}^{n}\}_{n}$ and $B_{f}(s)$=$\{B_{f}^{n}\}_{n}$ are called quasi-coincident, denoted by
$A_{f}(s)qB_{f}(s)$, if and only if there exists $x\in X$ such that
$A_{f}^{n}(x)>(B_{f}^{n})^{c}(x)$ whenever $A_{f}^{n}$ and $B_{f}^{n}$ both
are non-$\overline{0}$. We write $A_{f}(s)\overline{q}B_{f}(s)$ to
say that $A_{f}(s)$ and $B_{f}(s)$ are not quasi-coincident.
\end{definition}
\begin{definition}
Fuzzy sequential sets $A_{f}(s)$=$\{A_{f}^{n}\}_{n}$ and $B_{f}(s)$=$\{B_{f}^{n}\}_{n}$ are called weakly quasi-coincident, denoted by
$A_{f}(s)q_{w}B_{f}(s)$, if and only if there exists $x\in X$ such that
$A_{f}^{n}(x)>(B_{f}^{n})^{c}(x)$ for some $n\in \mathbb{N}$. We write $A_{f}(s)\overline{q}_{w}B_{f}(s)$ to mean that $A_{f}(s)$ and
$B_{f}(s)$ are not weakly quasi-coincident.
\end{definition}
\begin{definition}
A fuzzy sequential point $P_{f}(s)=(p_{fx}^{M}$, $r)$ is called
quasi-coincident with $A_{f}(s)$=$\{A_{f}^{n}\}_{n}$, denoted by
$P_{f}(s)qA_{f}(s)$, if and only if $P_{f}^{n}(x)>(A_{f}^{n})^{c}(x)$ for all
$n\in M$. If $P_{f}(s)=(p_{fx}^{M}$, $r)$ is not quasi-coincident with
$A_{f}(s)$, then we write $P_{f}(s)\overline{q}A_{f}(s)$.
\end{definition}
\begin{definition}
A fuzzy sequential point $P_{f}(s)=(p_{fx}^{M}$, $r)$ is called weakly
quasi-coincident with $A_{f}(s)$=$\{A_{f}^{n}\}_{n}$, denoted by
$P_{f}(s)q_{w}A_{f}(s)$, if and only if $P_{f}^{n}(x)>(A_{f}^{n})^{c}(x)$ for
some $n\in M$. If $P_{f}(s)=(p_{fx}^{M}$, $r)$ is not weakly
quasi-coincident with $A_{f}(s)$, then we write $P_{f}(s)\overline{q}_{w}A_{f}(s)$. If $P_{f}^{n}(x)>(A_{f}^{n})^{c}(x)$ for some $n\in
L\subseteq M$, then we say that $P_{f}(s)$ is weakly quasi-coincident with
$A_{f}(s)$ at the sequential point $(x$, $L)$.
\end{definition}
\begin{proposition}
If the fuzzy sequential sets $A_{f}(s)$=$\{A_{f}^{n}\}_{n}$ and $B_{f}(s)$=$\{B_{f}^{n}\}_{n}$ are quasi-coincident, then each pair of non-$\overline{0}$
fuzzy sets $A_{f}^{n}$ and $B_{f}^{n}$ is also so, but the converse is not
necessarily true.
\end{proposition}
\begin{proof}
Proof of the first part is omitted. For the second part let $A_{f}(s)$=$\{A_{f}^{n}\}_{n}$ and $B_{f}(s)$=$\{B_{f}^{n}\}_{n}$ be fuzzy sequential
sets on $\mathbb{R}$ where
\begin{eqnarray*}
A_{f}^{1}(x) &=&\frac{2}{3}\text{, }x\in (-\infty \text{, }0)\text{, } \\
&=&\frac{1}{3}\text{, }x\in \lbrack 0\text{, }\infty )\text{.}
\end{eqnarray*}
\begin{eqnarray*}
A_{f}^{2}(x) &=&\frac{1}{3}\text{, }x\in (-\infty \text{, }0)\text{, } \\
&=&\frac{2}{3}\text{, }x\in \lbrack 0\text{, }\infty )\text{.}
\end{eqnarray*}
\begin{equation*}
A_{f}^{n}(x)=\frac{3}{4}\text{, }x\in \mathbb{R}\text{ and }n\neq 1\text{, }2\text{.}
\end{equation*}
\begin{eqnarray*}
B_{f}^{1}(x) &=&\frac{1}{2}\text{, }x\in (-\infty \text{, }0)\text{, } \\
&=&\frac{2}{3}\text{, }x\in \lbrack 0\text{, }\infty )\text{.}
\end{eqnarray*}
\begin{eqnarray*}
B_{f}^{2}(x) &=&\frac{1}{4}\text{, }x\in (-\infty \text{, }0)\text{, } \\
&=&\frac{3}{7}\text{, }x\in \lbrack 0\text{, }\infty )\text{.}
\end{eqnarray*}
\begin{equation*}
B_{f}^{n}(x)=\frac{1}{2}\text{, }x\in \mathbb{R}\text{ and }n\neq 1\text{, }2\text{.}
\end{equation*}
Clearly $A_{f}^{n}qB_{f}^{n}$ for all $n\in \mathbb{N}$ but $A_{f}(s)\overline{q}B_{f}(s)$.
\end{proof}
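The membership values in the proof above can be checked mechanically: only the sign of $x$ matters, and all indices $n\geq 3$ behave alike, so two sample points and five indices suffice. A small Python check (an illustration added here, not part of the paper) verifies that each component pair is quasi-coincident, i.e. $A_{f}^{n}(x)+B_{f}^{n}(x)>1$ for some $x$, while no single $x$ works for all $n$ simultaneously.

```python
from fractions import Fraction as F

# Membership values of A_f^n and B_f^n from the proof; only sgn(x) matters.
def A(n, x):
    if n == 1: return F(2, 3) if x < 0 else F(1, 3)
    if n == 2: return F(1, 3) if x < 0 else F(2, 3)
    return F(3, 4)                       # n >= 3

def B(n, x):
    if n == 1: return F(1, 2) if x < 0 else F(2, 3)
    if n == 2: return F(1, 4) if x < 0 else F(3, 7)
    return F(1, 2)                       # n >= 3

xs = [-1, 1]                             # one point per sign region
ns = range(1, 6)
# each pair A_f^n, B_f^n quasi-coincident: some x with A^n(x) + B^n(x) > 1
pairwise = all(any(A(n, x) + B(n, x) > 1 for x in xs) for n in ns)
# but no single x witnesses the inequality for all n at once
joint = any(all(A(n, x) + B(n, x) > 1 for n in ns) for x in xs)
print(pairwise, joint)   # -> True False
```

For $x<0$ the index $n=2$ fails ($\frac{1}{3}+\frac{1}{4}<1$) and for $x\geq 0$ the index $n=1$ fails ($\frac{1}{3}+\frac{2}{3}=1$), exactly as the proposition requires.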
\begin{corollary}
A fuzzy sequential point $P_{f}(s)=(p_{fx}^{M}$, $r)$ is quasi-coincident
with a fuzzy sequential set $A_{f}(s)$=$\{A_{f}^{n}\}_{n}$ if and only if
$P_{f}^{n}$ and $A_{f}^{n}$ are so for each $n\in M$.
\end{corollary}
\begin{proof}
Proof is straightforward.
\end{proof}
\begin{definition}
A fuzzy sequential set $A_{f}(s)$ in the FSTS $(X$, $\delta (s))$ is called
a neighbourhood (in short, nbd) of a fuzzy sequential point $P_{f}(s)$ if and
only if there exists $B_{f}(s)\in \delta (s)$ such that $P_{f}(s)\in B_{f}(s)\leq A_{f}(s)$. A nbd $A_{f}(s)$ is called open if and only if
$A_{f}(s)\in \delta (s)$.
\end{definition}
\begin{definition}
A fuzzy sequential set $A_{f}(s)$ in the FSTS $(X$, $\delta (s))$ is called
a weak nbd of a fuzzy sequential point $P_{f}(s)$ if and only if there
exists $B_{f}(s)\in \delta (s)$ such that $P_{f}(s)\in _{w}B_{f}(s)\leq
A_{f}(s)$. A weak nbd $A_{f}(s)$ is called open if and only if $A_{f}(s)\in \delta (s)$.
\end{definition}
\begin{definition}
A fuzzy sequential set $A_{f}(s)$ in the FSTS $(X$, $\delta (s))$ is called
a $Q$-nbd of a fuzzy sequential point $P_{f}(s)$ if and only if there exists
$B_{f}(s)\in \delta (s)$ such that $P_{f}(s)qB_{f}(s)\leq A_{f}(s)$. A $Q$-nbd $A_{f}(s)$ is called open if and only if $A_{f}(s)\in \delta (s)$.
\end{definition}
\begin{definition}
A fuzzy sequential set $A_{f}(s)$ in the FSTS $(X$, $\delta (s))$ is called
a weak $Q$-nbd of a fuzzy sequential point $P_{f}(s)$ if and only if there
exists $B_{f}(s)\in \delta (s)$ such that $P_{f}(s)q_{w}B_{f}(s)\leq
A_{f}(s)$. A weak $Q$-nbd $A_{f}(s)$ is called open if and only if
$A_{f}(s)\in \delta (s)$.
\end{definition}
\begin{proposition}
$A_{f}(s)\leq _{w}(\leq )B_{f}(s)$ if and only if $A_{f}(s)$ and
$B_{f}^{c}(s)$ are not (weakly) quasi-coincident. In particular, $P_{f}(s)\in
_{w}(\in )A_{f}(s)$ if and only if $P_{f}(s)$ is not (weakly)
quasi-coincident with $A_{f}^{c}(s)$.
\end{proposition}
\begin{proof}
Proof is omitted.
\end{proof}
\begin{proposition}
Let $\{A_{fj}(s)$, $j\in J\}$ be a family of fuzzy sequential sets in $X$.
Then a fuzzy sequential point $P_{f}(s)q_{w}(\vee _{j\in J}A_{fj}(s))$ if
and only if $P_{f}(s)q_{w}A_{fj}(s)$ for some $j\in J$.
\end{proposition}
\begin{proof}
Let $P_{f}(s)q_{w}(\vee _{j\in J}A_{fj}(s))$, where $P_{f}(s)=(p_{fx}^{M}$, $r)$ and $A_{fj}(s)=\{A_{fj}^{n}\}_{n}$. \newline
This implies
\begin{equation*}
p_{f}^{k}(x)+S_{f}^{k}(x)>1\text{ for some }n=k\in M\text{, where }\vee
_{j\in J}A_{fj}(s)=\{S_{f}^{n}\}_{n}\text{.}
\end{equation*}
Therefore $S_{f}^{k}(x)=1-p_{f}^{k}(x)+\varepsilon _{k}$, where $\varepsilon _{k}>0$. $(i)$ \newline
Also $S_{f}^{k}(x)-\varepsilon _{k}<A_{fj}^{k}(x)$ for some $j\in J$. $(ii)$ \newline
From $(i)$ and $(ii)$ we have $p_{f}^{k}(x)+A_{fj}^{k}(x)>1$, that is,
$P_{f}(s)q_{w}A_{fj}(s)$ for some $j\in J$. The other implication is
straightforward.
\end{proof}
\begin{corollary}
If $P_{f}(s)qA_{fj}(s)$ for some $j\in J$, then $P_{f}(s)q(\vee _{j\in J}A_{fj}(s))$, where $\{A_{fj}(s)$, $j\in J\}$ is a family of fuzzy
sequential sets in $X$, but not conversely.
\end{corollary}
\begin{proof}
Proof of the first part is omitted. For the second part, let
$A_{fj}(s)=\{A_{fj}^{n}\}_{n}$, $j=1$, $2$ be fuzzy sequential sets in $\mathbb{R}$, where
\begin{eqnarray*}
A_{f1}^{1}(x) &=&0\text{ for all }x\in \mathbb{R}-(0\text{, }1)\text{, } \\
&=&\frac{1}{4}\text{ for all }x\in (0\text{, }1)\text{.}
\end{eqnarray*}
\begin{eqnarray*}
A_{f1}^{2}(x) &=&0\text{ for all }x\in \mathbb{R}-(\frac{1}{3}\text{, }\frac{2}{3})\text{, } \\
&=&\frac{2}{3}\text{ for all }x\in (\frac{1}{3}\text{, }\frac{2}{3})\text{.}
\end{eqnarray*}
\begin{equation*}
A_{f1}^{n}(x)=0=A_{f2}^{n}(x)\text{ for all }x\in \mathbb{R}\text{, }n\neq 1\text{, }2
\end{equation*}
\begin{eqnarray*}
A_{f2}^{1}(x) &=&0\text{ for all }x\in \mathbb{R}-(-\frac{1}{2}\text{, }1)\text{, } \\
&=&\frac{1}{3}\text{ for all }x\in (-\frac{1}{2}\text{, }1)\text{.}
\end{eqnarray*}
\begin{eqnarray*}
A_{f2}^{2}(x) &=&0\text{ for all }x\in \mathbb{R}-(-\frac{1}{2}\text{, }2)\text{, } \\
&=&\frac{1}{5}\text{ for all }x\in (-\frac{1}{2}\text{, }2)\text{.}
\end{eqnarray*}
The fuzzy sequential point $P_{f}(s)=(p_{f0.5}^{M}$, $r)$, where $M=\{1$, $2\}$, $r=\{r_{n}\}_{n}$ and $r_{1}=r_{2}=\frac{7}{10}$, is quasi-coincident
with $A_{f1}(s)\vee A_{f2}(s)$ but it is not so with any one of them.
\end{proof}
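Since the support of the fuzzy sequential point is $x=\frac{1}{2}$, the claim above reduces to comparing a few rational membership values at that point, which the following Python check (an illustration added here, not part of the paper) carries out exactly.

```python
from fractions import Fraction as F

# Values at the support x = 1/2, on the base M = {1, 2}.
r  = {1: F(7, 10), 2: F(7, 10)}   # sequential grade of P_f(s)
A1 = {1: F(1, 4),  2: F(2, 3)}    # A_{f1}^n(1/2)
A2 = {1: F(1, 3),  2: F(1, 5)}    # A_{f2}^n(1/2)
sup = {n: max(A1[n], A2[n]) for n in r}   # (A_{f1} v A_{f2})^n at x = 1/2

def q(P, A):
    # quasi-coincidence at the support: P^n(x) + A^n(x) > 1 for all n in M
    return all(P[n] + A[n] > 1 for n in P)

print(q(r, sup), q(r, A1), q(r, A2))   # -> True False False
```

Indeed $\frac{7}{10}+\frac{1}{3}>1$ and $\frac{7}{10}+\frac{2}{3}>1$ for the supremum, whereas $A_{f1}$ fails at $n=1$ ($\frac{7}{10}+\frac{1}{4}<1$) and $A_{f2}$ fails at $n=2$ ($\frac{7}{10}+\frac{1}{5}<1$).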
\begin{definition}
A subfamily $\beta $ of a FST $\delta (s)$ on $X$ is called a base for
$\delta (s)$ if and only if for every $A_{f}(s)\in \delta (s)$, there exists a
subfamily $\{B_{fj}(s)$, $j\in J\}$ of $\beta $ such that $A_{f}(s)=\vee _{j\in J}B_{fj}(s)$.
\end{definition}
\begin{definition}
A subfamily $S=\{S_{f\lambda }(s)$; $\lambda \in \Lambda \}$ of a FST
$\delta (s)$ on $X$ is called a subbase for $\delta (s)$ if and only if
$\{\wedge _{j\in J}S_{fj}(s)$; $J$ a finite subset of $\Lambda \}$ forms a base
for $\delta (s)$.
\end{definition}
\begin{theorem}
A subfamily $\beta $ of a FST $\delta (s)$ on $X$ is a base for $\delta (s)$
if and only if for each fuzzy sequential point $P_{f}(s)$ in $(X$, $\delta (s))$ and for every open weak $Q$-nbd $A_{f}(s)$ of $P_{f}(s)$, there exists
a member $B_{f}(s)\in \beta $ such that $P_{f}(s)q_{w}B_{f}(s)\leq A_{f}(s)$.
\end{theorem}
\begin{proof}
The necessary part is straightforward. To prove its sufficiency, suppose, if possible,
that $\beta $ is not a base for $\delta (s)$. Then there exists a member
$A_{f}(s)\in \delta (s)-\beta $ such that $O_{f}(s)=\vee \{B_{f}(s)\in \beta $; $B_{f}(s)<A_{f}(s)\}\neq A_{f}(s)$, and hence there is an $x\in X$ and an
$M\subset \mathbb{N}$ such that $O_{f}^{n}(x)<A_{f}^{n}(x)$ for all $n\in M$. Let
$r=\{r_{n}\}_{n}$, where $r_{n}=1-O_{f}^{n}(x)>0$ whenever $n\in M$ and
$r_{n}=0$ whenever $n\in \mathbb{N}-M$; then $A_{f}^{n}(x)+r_{n}>O_{f}^{n}(x)+r_{n}=1$ for all $n\in M$, and
$(p_{fx}^{M}$, $r)=P_{f}(s)q_{w}A_{f}(s)$. Therefore $A_{f}(s)$ is an open
weak $Q$-nbd of $P_{f}(s)$. Now $B_{f}(s)=\{B_{f}^{n}\}_{n}\in \beta $,
$B_{f}(s)\leq A_{f}(s)\Rightarrow B_{f}(s)<A_{f}(s)\Rightarrow B_{f}(s)\leq
O_{f}(s)\Rightarrow B_{f}^{n}(x)+r_{n}\leq O_{f}^{n}(x)+r_{n}=1$ for all
$n\in M\Rightarrow P_{f}(s)\overline{q_{w}}B_{f}(s)$, which is a
contradiction. Hence the proof.
\end{proof}
\begin{proposition}
If $\beta $ is a base for the FST $\delta (s)$ on $X$, then $\beta _{n}=\{B_{f}^{n}$; $B_{f}(s)=\{B_{f}^{n}\}_{n}\in \beta \}$ will form a base
for the component FT $\delta _{n}$ on $X$ for each $n\in \mathbb{N}$, but not conversely.
\end{proposition}
\begin{proof}
Proof of the first part is straightforward. For the converse part we consider
the FSTS $(\mathbb{R}$, $\delta ^{\mathbb{N}})$, where $\mathbb{R}$ is the set of real numbers and $\delta =\{\overline{r}$; $r\in \lbrack 0$, $1]\}$, $\overline{r}(x)=r$ for all $x\in \mathbb{R}$, which is a FT on $\mathbb{R}$. Clearly $\beta _{n}=\{\overline{r}$; $r\in (0$, $1)\cap Q\}$, where $Q$
is the set of rational numbers, is a base for the component FT $\delta _{n}^{\mathbb{N}}$ on $\mathbb{R}$ for each $n\in \mathbb{N}$, but $\beta (s)=\{X_{f}^{r}(s)$; $r\in (0$, $1)\cap Q\}$ is not a base for
the FST $\delta ^{\mathbb{N}}$ on $\mathbb{R}$ because $A_{f}(s)=\{A_{f}^{n}\}_{n}$, where $A_{f}^{n}=\overline{(\frac{1}{n})}$ for all $n\in \mathbb{N}$, is an open fuzzy sequential set in $(\mathbb{R}$, $\delta ^{\mathbb{N}})$ but cannot be written as a supremum of a subfamily of $\beta (s)$.
\end{proof}
\begin{definition}
Let $A_{f}(s)$ be any fuzzy sequential set in a FSTS $(X$, $\delta (s))$.
The closure $\overline{A_{f}(s)}$ and interior $\overset{o}{A_{f}}(s)$ of
$A_{f}(s)$ are defined as
\begin{equation*}
\overline{A_{f}(s)}=\wedge \{C_{f}(s)\text{; }A_{f}(s)\leq C_{f}(s)\text{, }C_{f}^{c}(s)\in \delta (s)\}\text{, }
\end{equation*}
\begin{equation*}
\overset{o}{A_{f}}(s)=\vee \{O_{f}(s)\text{; }O_{f}(s)\leq A_{f}(s)\text{, }O_{f}(s)\in \delta (s)\}\text{.}
\end{equation*}
\end{definition}
\begin{proposition}
If $\overline{A_{f}(s)}=\{\overline{A_{f}^{n}}\}_{n}$ in $(X$, $\delta (s))$, then $cl(A_{f}^{n})\leq \overline{A_{f}^{n}}$ in $(X$, $\delta _{n})$ for
each $n\in \mathbb{N}$, where $cl(A_{f}^{n})$ is the closure of $A_{f}^{n}$ in $(X$, $\delta _{n})$.
\end{proposition}
\begin{proof}
Proof is straightforward.
\end{proof}
Here we cite an example where the equality in Proposition $2.8$
does not hold. Let $X=[0$, $1]$ and $\delta (s)=\{X_{f}^{r}(s)$; $r\in \lbrack 0$, $1]\}$. If $A_{f}(s)=P_{f}(s)=(p_{f\frac{1}{3}}^{\mathbb{N}}$, $r)$, $r=\{\frac{1}{2}-\frac{1}{3n}\}_{n}$, then $\overline{A_{f}(s)}=X_{f}^{\frac{1}{2}}(s)$. Here $cl(A_{f}^{n})=\overline{(\frac{1}{2}-\frac{1}{3n})}$, whereas $\overline{A_{f}^{n}}=\overline{(\frac{1}{2})}$.
\begin{definition}
The dual of a fuzzy sequential point $P_{f}(s)=(p_{fx}^{M}$, $r)$ is the fuzzy
sequential point $P_{df}(s)=(p_{fx}^{M}$, $t)$, where $r=\{r_{n}\}_{n}$, $t=\{t_{n}\}_{n}$ and
\begin{eqnarray*}
t_{n} &=&1-r_{n}\text{ for all }n\in M\text{, } \\
&=&0\text{ for all }n\in \mathbb{N}-M\text{.}
\end{eqnarray*}
\end{definition}
\begin{theorem}
If every $Q$-nbd of a fuzzy sequential point $P_{f}(s)$ is weakly
quasi-coincident with a fuzzy sequential set $A_{f}(s)$, then $P_{f}(s)\in \overline{A_{f}(s)}$; and if $P_{f}(s)\in \overline{A_{f}(s)}$, then every weak $Q$-nbd of $P_{f}(s)$ and $A_{f}(s)$
are weakly quasi-coincident.
\end{theorem}
\begin{proof}
Let $P_{f}(s)=(p_{fx}^{M}$, $r)$. $P_{f}(s)\in \overline{A_{f}(s)}$ if for
every closed fuzzy sequential set $C_{f}(s)\geq A_{f}(s)$, $P_{f}(s)\in
C_{f}(s)$, that is, $p_{f}^{n}(x)\leq C_{f}^{n}(x)$ for all $n\in
M$. Equivalently, $P_{f}(s)\in \overline{A_{f}(s)}$ if for every open fuzzy
sequential set $B_{f}(s)=\{B_{f}^{n}\}_{n}\leq A_{f}^{c}(s)$,
$B_{f}^{n}(x)\leq 1-p_{f}^{n}(x)$ for all $n\in M$; that is, $P_{f}(s)\in
\overline{A_{f}(s)}$ if every open fuzzy sequential set
$B_{f}(s)=\{B_{f}^{n}\}_{n}$ satisfying $B_{f}^{n}(x)>1-p_{f}^{n}(x)$ for all
$n\in M$ satisfies $B_{f}(s)\nleq A_{f}^{c}(s)$, which implies the first part. Now
let $P_{f}(s)\in \overline{A_{f}(s)}$. If possible, let there exist a weak
$Q$-nbd $N_{f}(s)$ of $P_{f}(s)$ such that $N_{f}(s)\overline{q_{w}}A_{f}(s)$. Then there exists an open fuzzy sequential set $B_{f}(s)$ such that
$P_{f}(s)q_{w}B_{f}(s)\leq N_{f}(s)$. Now $N_{f}(s)\overline{q_{w}}A_{f}(s)$
and $B_{f}(s)\leq N_{f}(s)\Rightarrow B_{f}^{n}(x)+A_{f}^{n}(x)\leq 1$ for
all $x\in X$, $n\in \mathbb{N}\Rightarrow A_{f}(s)\leq B_{f}^{c}(s)\Rightarrow P_{f}(s)\in
B_{f}^{c}(s)\Rightarrow p_{f}^{n}(x)+B_{f}^{n}(x)\leq 1$ for all $n\in
\mathbb{N}$. This contradicts the fact that $P_{f}(s)q_{w}B_{f}(s)$. Hence the result
follows.
\end{proof}
\begin{corollary}
A fuzzy sequential point $P_{f}(s)\in \overline{A_{f}(s)}$ if and only if
each nbd of its dual point $P_{df}(s)$ is weakly quasi-coincident with
$A_{f}(s)$.
\end{corollary}
\begin{proof}
Proof is straightforward since $Q$ nbd of a fuzzy sequential point is
exactly the nbd of its dual point.
\end{proof}
\begin{theorem}
A fuzzy sequential point $P_{f}(s)\in \overset{o}{A}_{f}(s)$ if and only if
its dual point $P_{df}(s)\notin \overline{A_{f}^{c}(s)}$.
\end{theorem}
\begin{proof}
Let $P_{f}(s)\in \overset{o}{A}_{f}(s)$. Then there exists an open
fuzzy sequential set $B_{f}(s)$ such that $P_{f}(s)\in B_{f}(s)\leq
A_{f}(s)$, so $B_{f}(s)$ and $A_{f}^{c}(s)$ are not weakly
quasi-coincident, and hence $P_{df}(s)\notin \overline{A_{f}^{c}(s)}$.
Conversely, let $P_{df}(s)\notin \overline{A_{f}^{c}(s)}$. Then there exists
an open nbd $B_{f}(s)$ of $P_{f}(s)$ which is not weakly quasi-coincident
with $A_{f}^{c}(s)$, so $P_{f}(s)\in B_{f}(s)\leq A_{f}(s)$ and therefore
$P_{f}(s)\in \overset{o}{A}_{f}(s)$.
\end{proof}
\begin{proposition}
In a FSTS $(X, \delta (s))$, the following hold:\newline
(i) $\overline{X_{f}^{r}(s)}=X_{f}^{r}(s)$, $r\in \{0, 1\}$,
(ii) $A_{f}(s)$ is closed if and only if $\overline{A_{f}(s)}=A_{f}(s)$,
(iii) $\overline{\overline{A_{f}(s)}}=\overline{A_{f}(s)}$,
(iv) $\overline{A_{f}(s)\vee B_{f}(s)}=\overline{A_{f}(s)}\vee \overline{B_{f}(s)}$,
(v) $\overline{A_{f}(s)\wedge B_{f}(s)}\leq \overline{A_{f}(s)}\wedge \overline{B_{f}(s)}$,
(vi) $(X_{f}^{r}(s))^{o}=X_{f}^{r}(s)$, $r\in \{0, 1\}$,
(vii) $A_{f}(s)$ is open if and only if $\overset{o}{A}_{f}(s)=A_{f}(s)$,
(viii) $(\overset{o}{A}_{f}(s))^{o}=\overset{o}{A}_{f}(s)$,
(ix) $(A_{f}(s)\wedge B_{f}(s))^{o}=\overset{o}{A}_{f}(s)\wedge \overset{o}{B}_{f}(s)$,
(x) $(A_{f}(s)\vee B_{f}(s))^{o}\geq \overset{o}{A}_{f}(s)\vee \overset{o}{B}_{f}(s)$,
(xi) $\overset{o}{A}_{f}(s)=(\overline{A_{f}^{c}(s)})^{c}$,
(xii) $\overline{A_{f}(s)}=((A_{f}^{c}(s))^{o})^{c}$,
(xiii) $(\overline{A_{f}(s)})^{c}=(A_{f}^{c}(s))^{o}$,
(xiv) $\overline{(A_{f}^{c}(s))}=(\overset{o}{A}_{f}(s))^{c}$.
\end{proposition}
\begin{proof}
Proof is straightforward.
\end{proof}
\begin{definition}
A fuzzy sequential point $P_{f}(s)$ is called an adherence point of a fuzzy
sequential set $A_{f}(s)$ if and only if every weak $Q$ nbd of $P_{f}(s)$ is
weakly quasi-coincident with $A_{f}(s)$.
\end{definition}
\begin{definition}
A fuzzy sequential point $P_{f}(s)$ is called an accumulation point of a
fuzzy sequential set $A_{f}(s)$ if and only if $P_{f}(s)$ is an adherence
point of $A_{f}(s)$ and, whenever $P_{f}(s)\in A_{f}(s)$, every weak $Q$ nbd
of $P_{f}(s)$ is weakly quasi-coincident with $A_{f}(s)$ at some sequential
point having a different base or support from that of $P_{f}(s)$.
\end{definition}
\begin{proposition}
Any reduced sequential point of an accumulation point of a fuzzy sequential
set is also an accumulation point of it.
\end{proposition}
\begin{proof}
Proof is omitted.
\end{proof}
From Proposition 2.10, we see that any simple reduced sequential point
of an accumulation point of a fuzzy sequential set is also an accumulation
point of it, but the converse is not true. For let $X=\{a, b\}$ and $\delta
(s)=\{X_{f}^{r}(s), G_{f}(s)$; $r\in \{0, 1\}\}$, where
$G_{f}(s)=\{G_{f}^{n}\}_{n}$, $G_{f}^{n}(a)=\frac{1}{2}$, $G_{f}^{n}(b)=0$ for
$n\in \mathbb{N}$. Let $A_{f}(s)=\{A_{f}^{n}\}_{n}$, where $A_{f}^{n}=\overline{(\frac{2}{3})}$ for $n=1, 2$ and $A_{f}^{n}=0$ otherwise. Then the fuzzy sequential
point $P_{f}(s)=(p_{fa}^{M}, r)$, where $r=\{r_{n}\}_{n}$ with $r_{1}=r_{2}=\frac{2}{3}$ and $r_{n}=0$ otherwise, is not an accumulation point of
$A_{f}(s)$, though $(p_{fa}^{1}, \frac{2}{3})$ and $(p_{fa}^{2}, \frac{2}{3})$ both are accumulation points of $A_{f}(s)$.
\begin{definition}
The union of all accumulation points of a fuzzy sequential set $A_{f}(s)$ is
called the fuzzy derived sequential set of $A_{f}(s)$ and it is denoted by
$\overset{d}{A}_{f}(s)$.
\end{definition}
\begin{theorem}
In a FSTS $(X, \delta (s))$, $\overline{A_{f}(s)}=A_{f}(s)\vee \overset{d}{A}_{f}(s)$.
\end{theorem}
\begin{proof}
Let $\Omega =\{P_{f}(s)$; $P_{f}(s)$ is an adherence point of $A_{f}(s)\}$.
Then $\overline{A_{f}(s)}=\vee \Omega $. Now let $P_{f}(s)\in \Omega $; then
two cases may arise: $P_{f}(s)\in A_{f}(s)$ or $P_{f}(s)\notin A_{f}(s)$. If
$P_{f}(s)\notin A_{f}(s)$, then $P_{f}(s)\in \overset{d}{A}_{f}(s)$, hence
$P_{f}(s)\in A_{f}(s)\vee \overset{d}{A}_{f}(s)$. Therefore
\begin{equation*}
\overline{A_{f}(s)}=\vee \Omega \leq A_{f}(s)\vee \overset{d}{A}_{f}(s).\tag{1}
\end{equation*}
Again, $A_{f}(s)\leq \overline{A_{f}(s)}$, and since any accumulation point
$P_{f}(s)$ of $A_{f}(s)$ belongs to $\overline{A_{f}(s)}$, we have
$\overset{d}{A}_{f}(s)\leq \overline{A_{f}(s)}$. Therefore
\begin{equation*}
A_{f}(s)\vee \overset{d}{A}_{f}(s)\leq \overline{A_{f}(s)}.\tag{2}
\end{equation*}
From (1) and (2) the result follows.
\end{proof}
\begin{corollary}
A fuzzy sequential set is closed in a FSTS $(X$, $\delta (s))$ if and only
if it contains all its accumulation points.
\end{corollary}
\begin{proof}
Proof is straightforward.
\end{proof}
\begin{remark}
The fuzzy derived sequential set of a fuzzy sequential set may not be
closed, as shown by Example 2.1.
\end{remark}
\begin{example}
Let $X=\{a, b\}$ and let $\delta (s)$ be the FST having base $\beta
=\{X_{f}^{1}(s)\}\vee \{X_{f}^{0}(s)\}\vee \{P_{f}(s), G_{f}(s)\}$,
where $G_{f}^{n}(b)=1$ $\forall $ $n\in \mathbb{N}$,
$G_{f}^{n}(a)=0$ $\forall $ $n\in \mathbb{N}$,
and $P_{f}(s)=(p_{fa}^{M}, r)$, where $M=\{1, 2, 3\}$, $r_{1}=0.5$,
$r_{2}=1$, $r_{3}=0.3$, $r_{n}=0$ $\forall $ $n\neq 1, 2, 3$. Here the
fuzzy derived sequential set of $(p_{fa}^{3}, 0.3)$ is not closed.
\end{example}
\begin{proposition}
The fuzzy derived sequential set of a fuzzy sequential point equals the
union of the fuzzy derived sequential sets of all its simple reduced fuzzy
sequential points.
\end{proposition}
\begin{proof}
The proof is omitted.
\end{proof}
\begin{proposition}
If the fuzzy derived sequential set of each of the simple reduced fuzzy
sequential points of a fuzzy sequential point is closed, then the derived
sequential set of the fuzzy sequential point is closed.
\end{proposition}
\begin{proof}
Let $A_{f}(s)=(p_{fx}^{M}, r)$ be a fuzzy sequential point. Let $D_{f}(s)$
be the fuzzy derived sequential set of $A_{f}(s)$, and let $D_{nf}(s)$ be the
fuzzy derived sequential set of $A_{nf}(s)=(p_{fx}^{n}, r_{n})$, $n\in M$.
Suppose $D_{nf}(s)$ is closed for all $n\in M$. Let $P_{f}(s)$ be an
accumulation point of $D_{f}(s)$.\newline
Suppose $P_{f}(s)\notin D_{f}(s)$\newline
$\Longrightarrow P_{f}(s)$ is not an accumulation point of $(p_{fx}^{M}, r)$\newline
$\Longrightarrow \exists $ a weak Q-nbd $B_{f}(s)$ of $P_{f}(s)$ which is
not weakly quasi-coincident with $(p_{fx}^{M}, r)$\newline
$\Longrightarrow B_{f}(s)$ is not weakly quasi-coincident with $(p_{fx}^{n}, r_{n})$ $\forall $ $n\in M$\newline
$\Longrightarrow P_{f}(s)\notin D_{nf}(s)$ $\forall $ $n\in M$\newline
$\Longrightarrow P_{f}(s)$ is not an accumulation point of $D_{nf}(s)$
$\forall $ $n\in M$ (since $D_{nf}(s)$ is closed $\forall $ $n\in M$)\newline
$\Longrightarrow P_{f}(s)$ is not an accumulation point of $\vee _{n\in
M}D_{nf}(s)=D_{f}(s)$, a contradiction. Hence proved.
\end{proof}
\begin{remark}
The converse of Proposition 2.12 is not true, as shown by Example 2.2.
\end{remark}
\begin{example}
Let $X=\{a, b\}$ and let $\delta (s)$ be the FST having base $\beta
=\{X_{f}^{1}(s)\}\vee \{X_{f}^{0}(s)\}\vee \{P_{f}(s), G_{f}(s)\}$,
where $G_{f}^{n}(b)=1$ $\forall $ $n\in \mathbb{N}$,
$G_{f}^{n}(a)=0$ $\forall $ $n\in \mathbb{N}$,
and $P_{f}(s)=(p_{fa}^{M}, r)$, where $M=\{1, 2, 3\}$, $r_{1}=0.5$,
$r_{2}=1$, $r_{3}=0.3$, $r_{n}=0$ $\forall $ $n\neq 1, 2, 3$. Here the
fuzzy derived sequential set of $P_{f}(s)$ is closed but the fuzzy derived
sequential set of $(p_{fa}^{3}, 0.3)$ is not closed.
\end{example}
\begin{lemma}
Let $A_{f}(s)=(p_{fx}^{k}, r)$ be a fuzzy sequential point in a FSTS $(X,
\delta (s))$. Then:\newline
(i) For $y\neq x$, $\overline{A_{f}(s)}(y)=A_{f}^{d}(s)(y)$.\newline
(ii) If $\overline{A_{f}(s)}(x)>_{P}r$, then $\overline{A_{f}(s)}(x)=_{P}A_{f}^{d}(s)(x)$, where $P\subset M$.\newline
(iii) If $\overline{A_{f}(s)}(x)>_{M}r$, then $\overline{A_{f}(s)}(x)=A_{f}^{d}(s)(x)$.\newline
(iv) If $A_{f}^{d}(s)(x)=0=$ the sequence of real zeros, then $\overline{A_{f}(s)}(x)=r$.\newline
(v) If $A_{f}(s)$ is simple, then the converse of (iv) is true.
\end{lemma}
\begin{lemma}
Let $A_{f}(s)=(p_{fx}^{k}, r_{k})$ be a simple fuzzy sequential point in a
FSTS $(X, \delta (s))$. Then:\newline
(i) If $A_{f}^{d}(s)(x)$ is a non-zero sequence, then $\overline{A_{f}(s)}=A_{f}^{d}(s)$.\newline
(ii) If $A_{f}^{d}(s)(x)=0=$ the sequence of real zeros, then $A_{f}^{d}(s)$ is
closed iff $\exists $ an open fuzzy sequential set $B_{f}^{@}(s)$ such that
$B_{f}^{@}(s)(x)=1$ and, for $y\neq x$, $B_{f}^{@}(s)(y)=\{\overline{A_{f}(s)}\}^{c}(y)=\{A_{f}^{d}(s)\}^{c}(y)$.\newline
(iii) $A_{f}^{d}(s)(x)=0=$ the sequence of real zeros iff $\exists $ an open
fuzzy sequential set $B_{f}(s)$ such that $B_{f}(s)(x)=1-r$, where
$r=\{r_{n}\}_{n}$ and $r_{n}=0$ if $n\neq k$, $r_{n}=r_{k}$ if $n=k$.
\end{lemma}
\begin{theorem}
The fuzzy derived sequential set of each fuzzy sequential set is closed iff
the fuzzy derived sequential set of each simple fuzzy sequential point is
closed.
\end{theorem}
\begin{proof}
The necessity is obvious. Conversely, suppose $H_{f}(s)$ is a fuzzy
sequential set. We will show that $H_{f}^{d}(s)=D_{f}(s)$ is closed. Let
$P_{f}(s)=(p_{fx}^{k}, r_{k})$ be an accumulation point of $D_{f}(s)$. It
is sufficient to show that $P_{f}(s)\in D_{f}(s)$. Let $r=\{r_{n}\}_{n}$,
where $r_{n}=r_{k}$ for $n=k$ and $r_{n}=0$ $\forall $ $n\neq k$. Now
$P_{f}(s)\in \overline{D_{f}(s)}=\overline{H_{f}^{d}(s)}\leq \overline{\overline{H_{f}(s)}}=\overline{H_{f}(s)}$. Therefore $P_{f}(s)$ is an
adherence point of $H_{f}(s)$. If $P_{f}(s)\notin H_{f}(s)$, then $P_{f}(s)$
is an accumulation point of $H_{f}(s)$, that is, $P_{f}(s)\in D_{f}(s)$, and
we are done.\newline
Let us assume $P_{f}(s)\in H_{f}(s)$. Then $r\leq H_{f}(s)(x)=\rho $ (say),
and in particular $r_{k}\leq H_{f}^{k}(x)=\rho _{k}$.\newline
Now consider the simple fuzzy sequential point $A_{f}(s)=(p_{fx}^{k}, \rho
_{k})$. Let $\rho ^{\prime }=\{\rho _{n}^{\prime }\}_{n}$, where $\rho
_{k}^{\prime }=\rho _{k}$ and $\rho _{n}^{\prime }=0$ $\forall $ $n\neq k$.
There are two possibilities concerning $A_{f}^{d}(s)$.\newline
Case I. $A_{f}^{d}(s)(x)=\rho _{1}$ is a non-zero sequence. Now $\overline{A_{f}(s)}(x)\geq A_{f}(s)(x)=\rho ^{\prime }$.
By Lemma 2.1(v), $\overline{A_{f}(s)}(x)>\rho ^{\prime }$, so
$A_{f}^{d}(s)(x)=\overline{A_{f}(s)}(x)>\rho ^{\prime }$, that is,
$\rho _{1}>\rho ^{\prime }$, and in particular $\rho _{1k}>\rho
_{k}=A_{f}^{k}(x)=H_{f}^{k}(x)$.\newline
Hence the simple fuzzy sequential point $Q_{f}(s)=(p_{fx}^{k}, \rho
_{1k})\notin H_{f}(s)$. But since $Q_{f}(s)\in A_{f}^{d}(s)\leq \overline{A_{f}(s)}\leq \overline{H_{f}(s)}$, $Q_{f}(s)$ is an accumulation point of
$H_{f}(s)$, that is, $Q_{f}(s)\in D_{f}(s)$. Moreover, $r_{k}\leq \rho
_{k}<\rho _{1k}$, so $r_{k}<\rho _{1k}$ and therefore
$P_{f}(s)\in D_{f}(s)$.\newline
Case II. $A_{f}^{d}(s)(x)=0$. Let $B_{f}(s)$ be an arbitrary weak Q-nbd of
$A_{f}(s)$ and hence of $P_{f}(s)$. In view of Lemma 2.2(ii), $\exists $ an
open fuzzy sequential set $B_{f}^{@}(s)$ such that $B_{f}^{@}(s)(x)=1$ and,
for $y\neq x$, $B_{f}^{@}(s)(y)=\{\overline{A_{f}(s)}\}^{c}(y)$. Let
$C_{f}(s)=B_{f}(s)\wedge B_{f}^{@}(s)$. Then $C_{f}(s)(x)=B_{f}(s)(x)$, which
implies $C_{f}^{k}(x)=B_{f}^{k}(x)>1-r_{k}$. Thus $C_{f}(s)$ is a weak Q-nbd
of $P_{f}(s)$. Hence $C_{f}(s)$ and $D_{f}(s)$ are weakly quasi-coincident,
that is, $\exists $ a point $z$ and $n\in \mathbb{N}$ such that
$D_{f}^{n}(z)+C_{f}^{n}(z)>1$. Owing to the fact that $D_{f}(s)$
is the union of all the accumulation points of $H_{f}(s)$, $\exists $ an
accumulation point $P_{f}^{\prime }(s)=(p_{fz}^{n}, \mu _{n})$ such that
$\mu _{n}+C_{f}^{n}(z)>1$. Therefore $C_{f}(s)$ is a weak Q-nbd of
$P_{f}^{\prime }(s)$. Let $\mu =\{\mu _{n}\}_{n}$, where $\mu _{n}\neq 0$ and
$\mu _{m}=0$ $\forall $ $m\neq n$. The proof will be carried out according
to the following subcases:\newline
Subcase I. When $n=k$.\newline
(a) When $z=x$ and $\mu \leq \rho ^{\prime }$, then $P_{f}^{\prime }(s)\in
H_{f}(s)$. Since $P_{f}^{\prime }(s)$ is an accumulation point of $H_{f}(s)$,
every weak Q-nbd of $P_{f}^{\prime }(s)$ (and hence $B_{f}(s)$) and
$H_{f}(s)$ are weakly quasi-coincident at some point having a different base
or a different support than that of $P_{f}(s)$.\newline
(b) When $z=x$ and $\mu >\rho ^{\prime }$, then $P_{f}^{\prime }(s)\notin
H_{f}(s)$. From Lemma 2.2(iii), $\exists $ an open fuzzy sequential set
$B_{f}^{\prime }(s)$ such that $B_{f}^{\prime }(s)(x)=1-\rho ^{\prime }>1-\mu $.
Therefore $G_{f}(s)=C_{f}(s)\wedge B_{f}^{\prime }(s)$ is also a weak
Q-nbd of $P_{f}^{\prime }(s)$. Hence $G_{f}(s)$ and $H_{f}(s)$ are weakly
quasi-coincident. Since $G_{f}(s)(x)\leq B_{f}^{\prime }(s)(x)=1-\rho
^{\prime }$, we get $G_{f}^{k}(x)\leq B_{f}^{\prime k}(x)=1-\rho _{k}$.\newline
Thus $G_{f}(s)$ (and hence $B_{f}(s)$) and $H_{f}(s)$ are weakly
quasi-coincident at some point having a different base or a different
support than that of $P_{f}(s)$.\newline
(c) When $z\neq x$.\newline
We have $B_{f}^{@}(s)(z)=\{\overline{A_{f}(s)}\}^{c}(z)$. Also $\{\overline{A_{f}(s)}\}^{c}=\{A_{f}^{c}(s)\}^{\circ }$. Since $\{A_{f}^{c}(s)\}^{\circ
}(z)=B_{f}^{@}(s)(z)\geq C_{f}(s)(z)$, $\exists $ an open fuzzy sequential
set $B_{f}^{\prime \prime }(s)\leq A_{f}^{c}(s)$ such that $B_{f}^{\prime
\prime k}(z)\geq C_{f}^{k}(z)>1-\mu _{k}$. Therefore $G_{f}^{\prime
}(s)=B_{f}(s)\wedge B_{f}^{\prime \prime }(s)$ is also a weak Q-nbd of
$P_{f}^{\prime }(s)$ and hence is weakly quasi-coincident with $H_{f}(s)$.\newline
Since $B_{f}^{\prime \prime }(s)\leq A_{f}^{c}(s)$, we have
$B_{f}^{\prime \prime }(s)(x)\leq 1-A_{f}(s)(x)$ and so
$B_{f}^{\prime \prime k}(x)\leq 1-A_{f}^{k}(x)=1-H_{f}^{k}(x)$.\newline
Thus $G_{f}^{\prime }(s)$ (and hence $B_{f}(s)$) is weakly quasi-coincident
with $H_{f}(s)$ at some point having a different base or a different support
than that of $P_{f}(s)$.\newline
Subcase II. When $n\neq k$.\newline
(a) Suppose $z=x$. We have $B_{f}^{\prime }(s)(x)=1-\rho ^{\prime }$, so
$B_{f}^{\prime n}(x)=1>1-\mu _{n}$.\newline
So $B_{f}^{\prime }(s)$ is a weak Q-nbd of $P_{f}^{\prime }(s)$. Hence
$G_{f}(s)=C_{f}(s)\wedge B_{f}^{\prime }(s)$ is a weak Q-nbd of
$P_{f}^{\prime }(s)$ and so it is weakly quasi-coincident with $H_{f}(s)$.\newline
Now $G_{f}(s)(x)\leq B_{f}^{\prime }(s)(x)=1-\rho ^{\prime }$, so
$G_{f}^{k}(x)\leq B_{f}^{\prime k}(x)=1-\rho _{k}=1-H_{f}^{k}(x)$.\newline
So $H_{f}(s)$ and $G_{f}(s)$ are weakly quasi-coincident at some point
having a different base or a different support than that of $P_{f}(s)$.\newline
(b) When $z\neq x$, the proof is the same as in Subcase I(c).
\end{proof}
\section{Acknowledgement}
The second Author is thankful to the Council of Scientific and Industrial
Research (CSIR), New Delhi, India, for the financial assistance awarded to
her through the NET-JRF program.\newpage
\end{document}
\section{Introduction}
Causal analysis based on linear structural equation model and path analysis is widely used in sociology, economics, biology, etc.
Pearl extended the concept of total effects in path analysis to a general structural equation model and defined it as the intervention effect \cite{pearl1995causal}.
Fixing a variable $X$ at a certain value $x$ by an external operation is called an intervention, and the intervention effect is mathematically defined as the resulting causal effect on the response variable $Y$.
The intervention effect is defined based on a causal diagram, which expresses the existence or nonexistence of causal relationships between variables, and on conditional probability distributions, which express the causal relationships among variables.
However, in general, the causal diagram and the conditional probability distributions among variables are unknown, so it is necessary to estimate both from the data.
That is, the calculation of the intervention effect based on the causal diagram consists of the following steps.
\begin{enumerate}
\item Estimate a causal diagram from the data
\item Estimate the conditional probability distributions among variables from the data
\item Calculate the intervention effect
\end{enumerate}
The estimation methods of the causal diagram are roughly divided into two categories: constraint-based methods (such as the PC algorithm \cite{spirtes1991algorithm}) that estimate the structure from constraints such as conditional independence among variables, and score-based methods (such as the GES algorithm \cite{chickering2002finding}) that output the graph maximizing an approximation of the posterior probability.
Estimation of a conditional probability distribution is a general topic not limited to causal inference; widely used approaches are estimating a parameter by assuming a parametric probability distribution or estimating by a nonparametric method.
In this research, we assume parametric probability distributions for the conditional probability distributions.
Although it is known that the identifiability of causal diagrams would change by assumptions on the conditional probability distributions \cite{shimizu2006linear}, this research does not deal with that point in depth.
However, we note that the proposal in this research is applicable as long as parametric distribution is assumed for the conditional probability distribution.
Since the intervention effect is defined on the causal diagram and the conditional probability distributions, it seems natural to estimate it by the above procedure.
However, if we formulate the problem of estimating the intervention effect based on the statistical decision theory, estimating it by this procedure is not necessarily optimal.
In this study, the problem of estimating the intervention effect is formulated in the framework of the statistical decision theory for each case where the causal diagram is known and unknown, and the optimal decision function is derived under the Bayes criterion.
The remainder of the paper is organized as follows.
In Section 2, the definitions of the structural equation model, causal diagram, and intervention effect are described.
In Section 3, we formulate the problem to estimate the intervention effect as a statistical decision problem for the case where the causal diagram is known and derive the optimal decision function under the Bayes criterion.
In Section 4, we do the same thing as in Section 3 for the case where the causal diagram is unknown.
In Section 5, we evaluate the effectiveness of the proposed method by comparing the intervention effect estimated by the proposed method with that estimated by the two-stage method, that is, calculating the intervention effect after estimating the causal diagram and/or the conditional probability distributions.
Finally, we give a summary and future works in Section 6.
\section{Causal diagram and intervention effect}
Here, after describing the definition of the causal diagram, we describe the mathematical definition of the intervention effect.
\subsection{Causal diagram}
\begin{defi}
Let $G$ be a directed acyclic graph (DAG) and $V=(X_{1}, X_{2}, \ldots, X_{m})$ be a set of random variables that corresponds to the set of the vertices of $G$.
$G$ is called a causal diagram if it specifies the causal relationships among variables in the following form,
\begin{align}
X_{i}=g_{i}(\mbox{pa}(X_{i}),\epsilon_{i}),\quad i=1,\ldots,m, \label{SEM}
\end{align}
and the random variables are generated according to this causal relationship.
The equations (\ref{SEM}) are called structural equations for $X_{1}, X_{2}, \ldots, X_{m}$.
$\mbox{pa}(X_{i})\subset V$ is the set of variables that have an arrow that heads to $X_{i}$.
We assume that $\epsilon_{1}, \epsilon_{2}, \ldots, \epsilon_{m}$ are mutually independent.
\end{defi}
Let $p(x_{i}|\mbox{pa}(x_{i}))$ be the conditional probability distribution of $X_{i}$ given $\mbox{pa}(X_{i})$.
\begin{figure}[t]
\begin{center}
\includegraphics[keepaspectratio=true,width=\linewidth]{diagram.eps}
\caption{Examples of causal diagram.}
\label{fig_diagram}
\end{center}
\end{figure}
\begin{example}
If the causal diagram of the random variables $X, Y, Z$ is $G_{1}$ in Figure \ref{fig_diagram}, there are causal relationships,
\begin{align}
Z&=g_{Z}(X, \epsilon_{Z}),\\
Y&=g_{Y}(Z, \epsilon_{Y}).
\end{align}
Similarly, if the causal diagram of the random variables $X, Y, Z$ is $G_{2}$ in Figure \ref{fig_diagram}, there are causal relationships,
\begin{align}
X&=g_{X}(Z, \epsilon_{X}),\\
Y&=g_{Y}(X, Z, \epsilon_{Y}).
\end{align}
\end{example}
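Since $G_{1}$ is a DAG, data can be generated by ancestral sampling: each variable is drawn after its parents. A minimal sketch in Python, assuming linear-Gaussian mechanisms of the kind used in the later examples (the coefficients $0.8$ and $0.5$ and the marginal of $X$ are our own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_g1(n, theta_zx=0.8, theta_yz=0.5):
    """Ancestral sampling from G1 (X -> Z -> Y) with illustrative
    linear-Gaussian structural equations and standard-normal noise."""
    x = rng.normal(size=n)                    # exogenous X (our choice)
    z = theta_zx * x + rng.normal(size=n)     # Z = g_Z(X, eps_Z)
    y = theta_yz * z + rng.normal(size=n)     # Y = g_Y(Z, eps_Y)
    return x, z, y

x, z, y = sample_g1(10_000)
```

Sampling in topological order of the diagram guarantees that each $\mbox{pa}(X_{i})$ is available when $X_{i}$ is drawn.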
\subsection{Intervention effect}
In a causal diagram, an external operation that fixes the value of $X$ to a constant regardless of the values of the other variables is called an intervention, and the distribution of $Y$ after the intervention is called the intervention effect. Its mathematical definition is given as follows \cite{pearl1995causal}.
\begin{defi}
Let $V=\left\{X, Y, Z_{1}, Z_{2}, \ldots, Z_{p}\right\}$ be the set of vertices of a causal diagram $G$.
The intervention effect on $Y$ when intervening $X=x$ is defined as
\begin{align}
p(y|\mbox{do}(X=x))=\int\cdots\int \frac{p(x,y,z_{1},\ldots,z_{p})}{p(x|\mbox{pa}(x))}dz_{1}\ldots dz_{p}\label{effect_no_parameter}.
\end{align}
$\mbox{do}(X=x)$ means that $X$ is fixed to $x$ by intervention.
\end{defi}
(\ref{effect_no_parameter}) can be calculated only after the causal diagram is determined and the conditional distributions among the random variables are estimated.
Let $m$ be the variable that represents the causal diagram, and assume that the conditional probability distributions are parametric distributions specified by a parameter $\bm{\theta}_{m}$.
To clarify that the intervention effect depends on $m$ and $\bm{\theta}_{m}$, we rewrite (\ref{effect_no_parameter}) as follows.
\begin{multline}
p(y|\mbox{do}(X=x),m,\bm{\theta}_{m})=\\
\int\cdots\int \frac{p(x,y,z_{1},\ldots,z_{p}|m, \bm{\theta}_{m})}{p(x|\mbox{pa}(x),m, \bm{\theta}_{m})}dz_{1}\ldots dz_{p}.\label{effect_parameter}
\end{multline}
\begin{example}
Assume that the causal diagram $m$ of $X, Y, Z$ is $G_{1}$ in Figure \ref{fig_diagram} and the structural equations are linear, that is,
\begin{align}
Z&=\theta_{Z|X}X+\epsilon_{Z},\quad \epsilon_{Z}\sim \mathcal{N}(0, 1^{2}),\label{example_SEM_1_1}\\
Y&=\theta_{Y|Z}Z+\epsilon_{Y},\quad \epsilon_{Y}\sim\mathcal{N}(0, 1^{2}),\label{example_SEM_1_2}
\end{align}
where $\mathcal{N}(\mu, \sigma^{2})$ denotes the normal distribution with mean $\mu$ and variance $\sigma^{2}$.
Then, $\bm{\theta}_{m}=(\theta_{Z|X}, \theta_{Y|Z})$ and the intervention effect on $Y$ when intervening $X=x$ is given by
\begin{align}
p(y|\mbox{do}(X=x),m=G_{1}, \bm{\theta}_{m})=\mathcal{N}(y; \theta_{Y|Z}\theta_{Z|X}x, 1+\theta_{Y|Z}^{2}),
\end{align}
where $\mathcal{N}(\cdot; \mu, \sigma^{2})$ denotes the probability density function of $\mathcal{N}(\mu, \sigma^{2})$.
In this case, it is well known that the intervention effect equals the conditional probability distribution $p(y|x,\bm{\theta}_{m})$, and the above formula describes this in detail.
Similarly, assume that the causal diagram $m$ of $X, Y, Z$ is $G_{2}$ in Figure \ref{fig_diagram} and the structural equations are given by
\begin{align}
X&=\theta_{X|Z}Z+\epsilon_{X},\quad \epsilon_{X}\sim\mathcal{N}(0, 1^{2}),\label{example_SEM_2_1}\\
Y&=\theta_{Y|X}X+\theta_{Y|Z}Z+\epsilon_{Y},\quad \epsilon_{Y}\sim\mathcal{N}(0, 1^{2}).\label{example_SEM_2_2}
\end{align}
Then, $\bm{\theta_{m}}=(\theta_{X|Z}, \theta_{Y|X}, \theta_{Y|Z})$ and the intervention effect on $Y$ when intervening $X=x$ is given by
\begin{align}
&p(y|\mbox{do}(X=x),m=G_{2}, \bm{\theta}_{m})=\mathcal{N}(y; \tilde{\mu}, \tilde{s}^{-1}),\\
&\tilde{\mu}=\theta_{Y|X}x+\theta_{Y|Z}\mu_{Z},\\
&\tilde{s}=\frac{s_{Z}}{\theta_{Y|Z}^{2}+s_{Z}},
\end{align}
where we assumed that $Z\sim\mathcal{N}(\mu_{Z}, s_{Z}^{-1})$.
\end{example}
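The closed form for $G_{1}$ can be checked by simulating the intervention directly: under $\mbox{do}(X=x)$ the mechanism for $X$ is cut, so $Y=\theta_{Y|Z}\theta_{Z|X}x+\theta_{Y|Z}\epsilon_{Z}+\epsilon_{Y}$. A Monte Carlo sketch (the parameter values and the intervention value are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
theta_zx, theta_yz, x0 = 0.8, 0.5, 2.0

# Simulate do(X = x0) in G1: fix X at x0 and propagate through Z and Y.
n = 200_000
z = theta_zx * x0 + rng.normal(size=n)
y = theta_yz * z + rng.normal(size=n)

closed_mean = theta_yz * theta_zx * x0   # mean of N(y; th_YZ th_ZX x, 1 + th_YZ^2)
closed_var = 1.0 + theta_yz ** 2
print(y.mean(), closed_mean, y.var(), closed_var)
```

The sample mean and variance of the simulated $Y$ should agree with $\theta_{Y|Z}\theta_{Z|X}x$ and $1+\theta_{Y|Z}^{2}$ up to Monte Carlo error.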
\section{Decision theoretic approach for estimating intervention effect; causal diagram is known}
Here, we consider the case where the causal diagram $m$ is known, but $\bm{\theta}_{m}$ is unknown.
In this case, we cannot calculate (\ref{effect_parameter}) directly and we have to estimate it from the data.
Let $D^{n}=(x_{i}, y_{i}, z_{1i},\ldots, z_{pi})_{i=1,\ldots,N}$ be a sample of $X, Y, Z_{1}, \ldots, Z_{p}$ of size $N$.
A decision function $AP$ maps $D^{n}$ to an estimate $AP(D^{n})(y|x)$ of the intervention effect.
We have to define a loss function for the decision function.
In this study, the Kullback-Leibler divergence with the intervention effect is used as a loss function.
\begin{multline}
Loss(\bm{\theta}_{m}, AP(D^{n}))= \\
\int p(y|\mbox{do}(X=x),m,\bm{\theta}_{m})\ln \frac{p(y|\mbox{do}(X=x),m,\bm{\theta}_{m})}{AP(D^{n})(y|x)}dy.\label{loss}
\end{multline}
The risk function is defined as the expectation of the loss function with respect to $D^{n}$.
\begin{align}
Risk(\bm{\theta}_{m},AP)=E_{D^{n}|\bm{\theta}_{m}}\left[Loss(\bm{\theta}_{m}, AP(D^{n}))\right].
\end{align}
The risk function is a function of the parameter $\bm{\theta}_{m}$, and in general there is no decision function that minimizes the risk function uniformly for all parameters $\bm{\theta}_{m}\in\Theta_{m}$.
In this study, we assume a prior distribution $p(\bm{\theta}_{m})$ for the parameter $\bm{\theta}_{m}$ and consider the following Bayes risk function.
\begin{align}
BR(AP)=E_{\bm{\theta}_{m}}\left[Risk(\bm{\theta}_{m}, AP)\right].\label{BR}
\end{align}
Then, the following theorem holds.
\begin{theo}
\label{theorem1}
The Bayes optimal decision function that minimizes (\ref{BR}) is given by
\begin{align}
AP^{*}(D^{n})=p(y|\mbox{do}(X=x),m,D^{n}),\label{bayes_optimal_fixed_model}
\end{align}
where
\begin{multline}
p(y|\mbox{do}(X=x),m,D^{n})=\\
\int p(y|\mbox{do}(X=x),m,\bm{\theta}_{m})p(\bm{\theta}_{m}|m, D^{n})d\bm{\theta}_{m}.\label{predict_fixed_model}
\end{multline}
\end{theo}
\begin{proof}
The minimization of the Bayes risk function is reduced to the minimization of the loss function weighted by the posterior distribution \cite{berger2013statistical}.
That is,
\begin{multline}
\argmin_{AP} BR(AP)=\\
\argmin_{AP} \int Loss(\bm{\theta}_{m}, AP(D^{n}))p(\bm{\theta}_{m}|m, D^{n})d\bm{\theta}_{m}.
\end{multline}
Substituting (\ref{loss}) into the loss function and removing the terms that do not depend on $AP$, we have
\begin{align}
\argmin_{AP} BR(AP)=\argmax_{AP} \int\int p(y|\mbox{do}(X=x), m,\bm{\theta}_{m})\nonumber\\
\times p(\bm{\theta}_{m}|m,D^{n})\ln AP(D^{n})(y|x)d\bm{\theta}_{m}dy\\
=\argmax_{AP}\int p(y|\mbox{do}(X=x),m,D^{n})\ln AP(D^{n})(y|x)dy.
\end{align}
From Shannon's inequality \cite{cover2012elements},
\begin{multline}
\argmax_{AP}\int p(y|\mbox{do}(X=x),m,D^{n})\ln AP(D^{n})(y|x)dy=\\
p(y|\mbox{do}(X=x),m,D^{n}).
\end{multline}
\hfill $\Box$
\end{proof}
\begin{example}
Assume that the causal diagram $m$ for $X, Y, Z$ is $G_{1}$ in Figure \ref{fig_diagram} and the structural equations are given by (\ref{example_SEM_1_1}) and (\ref{example_SEM_1_2}).
In addition, as the prior distributions of $\theta_{Y|Z}, \theta_{Z|X}$, assume that $\theta_{Y|Z}, \theta_{Z|X}\sim\mathcal{N}(0,\alpha^{-1})$.
Then, the Bayes optimal estimator of the intervention effect is given by
\begin{align}
p(y|\mbox{do}(X=x), m=G_{1}, D^{n})=\nonumber\\
\int\int \mathcal{N}(y; \theta_{Y|Z}\theta_{Z|X}x, 1+\theta_{Y|Z}^{2})\mathcal{N}(\theta_{Y|Z}; \mu_{Y|Z}, s_{Y|Z}^{-1})\times \nonumber \\
\mathcal{N}(\theta_{Z|X}; \mu_{Z|X}, s_{Z|X}^{-1})d\theta_{Y|Z}d\theta_{Z|X},\label{predict_example_1}
\end{align}
\begin{align}
\mu_{Y|Z}&=s_{Y|Z}^{-1}\bm{z}^{T}\bm{y},\\
s_{Y|Z}&=\alpha+\bm{z}^{T}\bm{z},\\
\mu_{Z|X}&=s_{Z|X}^{-1}\bm{x}^{T}\bm{z},\\
s_{Z|X}&=\alpha+\bm{x}^{T}\bm{x},
\end{align}
where $\bm{x}=(x_{1},\ldots,x_{N})^{T}, \bm{y}=(y_{1},\ldots,y_{N})^{T}, \bm{z}=(z_{1},\ldots, z_{N})^{T}$.
Similarly, assume that the causal diagram $m$ for $X, Y, Z$ is $G_{2}$ in Figure \ref{fig_diagram} and the structural equations are given by (\ref{example_SEM_2_1}) and (\ref{example_SEM_2_2}).
In addition, as the prior distributions of $\theta_{Y|X}, \theta_{Y|Z}$, assume that $\theta_{Y|X}, \theta_{Y|Z}\sim\mathcal{N}(0, \alpha^{-1})$.
Let $\bm{\theta}_{Y|XZ}=(\theta_{Y|X},\theta_{Y|Z})$, then, the Bayes optimal estimator of the intervention effect is given by
\begin{multline}
p(y|\mbox{do}(X=x), m=G_{2}, D^{n})=\\
\int \mathcal{N}(y; \tilde{\mu}, \tilde{s}^{-1})\mathcal{N}(\bm{\theta}_{Y|XZ};\bm{\mu}_{Y|XZ}, \bm{S}_{Y|XZ}^{-1})d\bm{\theta}_{Y|XZ},\label{predict_example_2}
\end{multline}
\begin{align}
\tilde{\mu}&=\tilde{s}^{-1}\theta_{Y|X}x-\mu_{Z}\theta_{Y|Z}\\
\tilde{s}&=\frac{\alpha s_{Z}}{\alpha\theta_{Y|Z}^{2}+s_{Z}}\\
\bm{\mu}_{Y|XZ}&=\bm{S}_{Y|XZ}^{-1}\bm{X}_{\setminus \bm{y}}^{T}\bm{y},\\
\bm{S}_{Y|XZ}&=\alpha\bm{I}+\bm{X}_{\setminus \bm{y}}^{T}\bm{X}_{\setminus \bm{y}},
\end{align}
where $\mathcal{N}(\cdot; \bm{\mu}, \bm{\Sigma})$ denotes the probability density function of the multivariate normal distribution with mean vector $\bm{\mu}$ and covariance matrix $\bm{\Sigma}$ and
\begin{align}
\bm{X}_{\setminus \bm{y}}=
\begin{pmatrix}
\bm{x}^{T}\\
\bm{z}^{T}
\end{pmatrix}^{T}.
\end{align}
We note that the Bayes optimal estimators (\ref{predict_example_1}) and (\ref{predict_example_2}) cannot be calculated analytically even in the case of the linear structural equation models of these examples.
In the experiments below, we used numerical integration for the calculations.
\end{example}
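One way to carry out the numerical integration in (\ref{predict_example_1}) is to average the intervention density over posterior draws of the parameters. A Monte Carlo sketch for the $G_{1}$ case (our own, with $\alpha=1$; the data-generating step in the usage below is also illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def posterior_params(u, v, alpha=1.0):
    """Posterior N(mu, s^{-1}) of theta for v = theta*u + eps, eps ~ N(0, 1),
    with prior theta ~ N(0, alpha^{-1}), matching the updates in Example 3."""
    s = alpha + u @ u
    return (u @ v) / s, s

def bayes_intervention_g1(x, z, y, x0, y_grid, alpha=1.0, n_mc=5_000):
    """Monte Carlo version of the Bayes optimal estimator for G1:
    average N(y; th_YZ*th_ZX*x0, 1 + th_YZ^2) over independent
    posterior draws of th_ZX and th_YZ."""
    mu_zx, s_zx = posterior_params(x, z, alpha)
    mu_yz, s_yz = posterior_params(z, y, alpha)
    th_zx = rng.normal(mu_zx, s_zx ** -0.5, size=n_mc)
    th_yz = rng.normal(mu_yz, s_yz ** -0.5, size=n_mc)
    mean = th_yz * th_zx * x0                # per-draw mean of the effect
    var = 1.0 + th_yz ** 2                   # per-draw variance
    dens = np.exp(-0.5 * (y_grid[:, None] - mean) ** 2 / var) \
        / np.sqrt(2 * np.pi * var)
    return dens.mean(axis=1)                 # posterior-averaged density
```

Because each term in the average is a normal density, the output is a proper density on the grid; its numerical integral should be close to one.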
\section{Decision theoretic approach for estimating intervention effect; causal diagram is unknown}
Here, we consider the case where not only the parameter $\bm{\theta}_{m}$, but also the causal diagram $m$ is unknown.
Since $m$ is unknown, the loss function is defined for $m$ and $\bm{\theta}_{m}$.
\begin{multline}
Loss(m,\bm{\theta}_{m}, AP(D^{n}))=\\
\int p(y|\mbox{do}(X=x),m,\bm{\theta}_{m})\ln \frac{p(y|\mbox{do}(X=x),m,\bm{\theta}_{m})}{AP(D^{n})(y|x)}dy.
\end{multline}
The risk function is given by
\begin{align}
Risk(m,\bm{\theta}_{m},AP)=E_{D^{n}|\bm{\theta}_{m},m}\left[Loss(m, \bm{\theta}_{m}, AP(D^{n}))\right].
\end{align}
In this study, we consider the case where the set of candidate causal diagrams is given by $\mathcal{M}$ and we can assume the prior distribution $p(m)$ for $m\in\mathcal{M}$ and $p(\bm{\theta}_{m}|m)$ for $\bm{\theta}_{m}$ under $m$.
Then, the Bayes risk function is given by
\begin{align}
BR(AP)=E_{m}\left[E_{\bm{\theta}_{m}|m}\left[Risk(m,\bm{\theta}_{m}, AP)\right]\right].\label{BR_model}
\end{align}
In this case, the following theorem holds.
\begin{theo}
The Bayes optimal estimator that minimizes (\ref{BR_model}) is given by
\begin{align}
AP^{*}(D^{n})=p(y|\mbox{do}(X=x),D^{n}),\label{predict_mixed_model}
\end{align}
where
\begin{multline}
p(y|\mbox{do}(X=x),D^{n})=\\
\sum_{m\in\mathcal{M}}p(m|D^{n})p(y|\mbox{do}(X=x),m,D^{n}),
\end{multline}
and $p(y|\mbox{do}(X=x),m,D^{n})$ is given by (\ref{predict_fixed_model}).
\end{theo}
\begin{proof}
It is proved in the same manner as the proof of Theorem 1.\hfill $\Box$
\end{proof}
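A minimal sketch of this mixture estimator, assuming the per-model predictive densities and (log) marginal likelihoods are available (names ours):

```python
import numpy as np

def model_posterior(log_evidence, log_prior):
    """p(m | D^n) proportional to p(D^n | m) p(m), computed stably in log space."""
    log_post = np.asarray(log_evidence, float) + np.asarray(log_prior, float)
    log_post -= log_post.max()          # stabilise before exponentiating
    w = np.exp(log_post)
    return w / w.sum()

def bayes_mixture(weights, per_model_pred, y):
    """Posterior-weighted mixture: sum over m of p(m|D^n) p(y|do(X=x), m, D^n)."""
    return float(sum(w * p(y) for w, p in zip(weights, per_model_pred)))
```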
\begin{example}
Assume that the set $\mathcal{M}$ of the candidate causal diagrams is $\left\{G_{1}, G_{2}\right\}$ in Figure \ref{fig_diagram} and the structural equations under each causal diagram are given in the same way as in Examples 2 and 3.
When the prior probabilities of the models are $p(m=G_{1})$ and $p(m=G_{2})$, and the prior distributions of the parameters $\bm{\theta}_{m}$ under each model are given in the same way as in Example 3, the Bayes optimal estimator of the intervention effect is given by
\begin{align}
&p(y|\mbox{do}(X=x),D^{n})=\nonumber\\
&\quad p(m=G_{1}|D^{n})p(y|\mbox{do}(X=x),m=G_{1},D^{n})\nonumber\\
&\quad +p(m=G_{2}|D^{n})p(y|\mbox{do}(X=x),m=G_{2},D^{n}),
\end{align}
where $p(y|\mbox{do}(X=x),m=G_{1},D^{n})$ and $p(y|\mbox{do}(X=x),m=G_{2},D^{n})$ are given by (\ref{predict_example_1}) and (\ref{predict_example_2}), respectively.
\end{example}
\section{Numerical experiments}
In this section, we show the effectiveness of the proposed method through numerical simulations.
\subsection{Case 1 : causal diagram is known}
\label{experiment_1}
First, we deal with the case where the causal diagram is known.
We consider two cases: one where the true diagram is $G_{1}$ in Figure \ref{fig_diagram}, and one where the true diagram is $G_{2}$ in Figure \ref{fig_diagram}.
The structural equations are (\ref{example_SEM_1_1}) and (\ref{example_SEM_1_2}) for $G_{1}$ and (\ref{example_SEM_2_1}) and (\ref{example_SEM_2_2}) for $G_{2}$.
We assume that the probability distributions of variables corresponding to leaf nodes in each model, that is, $X$ in $G_{1}$ and $Z$ in $G_{2}$, are both $\mathcal{N}(0, 1^{2})$.
We also assume that the prior distributions of the parameters under each model, that is, $\theta_{Y|Z}, \theta_{Z|X}$ in $G_{1}$ and $\theta_{X|Z}, \theta_{Y|X}, \theta_{Y|Z}$ in $G_{2}$, are all $\mathcal{N}(0, 1^{2})$.
We consider the problem of estimating the intervention effect on $Y$ when intervening with $X=1$, given $D^{n}=(x_{i}, y_{i}, z_{i})_{i=1,\ldots,n}$ as a sample of $(X, Y, Z)$.
We compare the following three methods.
\begin{description}
\item[Method 1 (ML)] \ \\ Calculate the maximum likelihood (ML) estimator $\hat{\bm{\theta}}_{m, ML}$ by
\begin{align}
\hat{\bm{\theta}}_{m, ML}=\argmax_{\bm{\theta}_{m}}p(D^{n}|\bm{\theta}_{m}),
\end{align}
and substitute it into (\ref{effect_parameter}).
\item[Method 2 (MAP)] \ \\Calculate the maximum a posteriori (MAP) estimator $\hat{\bm{\theta}}_{m, MAP}$ by
\begin{align}
\hat{\bm{\theta}}_{m,MAP}=\argmax_{\bm{\theta}_{m}}p(\bm{\theta}_{m}|D^{n}),
\end{align}
and substitute it into (\ref{effect_parameter}).
\item[Method 3 (BAYES)] \ \\Calculate the Bayes optimal estimator (\ref{bayes_optimal_fixed_model}).
\end{description}
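As a simplified illustration of the three methods, consider a single linear-Gaussian edge $Y=\theta X+\varepsilon$ with known noise precision $\alpha$ and an $\mathcal{N}(0,1)$ prior on $\theta$; here the posterior is Gaussian, so the MAP estimator equals the posterior mean, and the Bayes optimal predictive additionally propagates parameter uncertainty (a toy sketch, not the paper's exact setup):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 1.0                                  # known noise precision
n, theta_true = 20, 0.7
x = rng.normal(size=n)
y = theta_true * x + rng.normal(scale=1.0, size=n)

theta_ml = (x @ y) / (x @ x)                 # Method 1: least squares
S = 1.0 + alpha * (x @ x)                    # posterior precision with N(0, 1) prior
theta_map = alpha * (x @ y) / S              # Method 2: posterior mean = MAP here

# Method 3: the Bayes optimal predictive for do(X = x*) is Gaussian,
# N(theta_map * x*, 1/alpha + x*^2 / S); its variance exceeds the plug-in 1/alpha
x_star = 1.0
pred_mean = theta_map * x_star
pred_var = 1.0 / alpha + x_star ** 2 / S
```

The shrinkage of the MAP estimate toward the prior mean, and the inflated predictive variance, are what distinguish Methods 2 and 3 from Method 1 at small sample sizes.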
Figure \ref{fig_result1} shows the Kullback-Leibler divergence between the true intervention effect on $Y$ when intervening $X=1$ in the model $G_{1}$ and the estimator of each method.
Figure \ref{fig_result2} shows the corresponding result for the model $G_{2}$.
In either case, the results of the three methods converge as the sample size increases.
This is because the posterior distribution of the parameters concentrates around the MAP estimator as the sample size grows, and the MAP estimator and the ML estimator also approach each other.
When the sample size is small, however, Method 2 is better than Method 1, and Method 3 is better than Method 2.
Since the models in this experiment contain very few variables, the differences among the methods are small, but we expect these differences to grow as the models become more complex.
\begin{figure}[t]
\begin{center}
\includegraphics[keepaspectratio=true,width=\linewidth]{model1.eps}
\caption{The Kullback-Leibler divergence between the true intervention effect on $Y$ when intervening $X=1$ in the model $G_{1}$ and the estimator of each method.}
\label{fig_result1}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[keepaspectratio=true,width=\linewidth]{model3.eps}
\caption{The Kullback-Leibler divergence between the true intervention effect on $Y$ when intervening $X=1$ in the model $G_{2}$ and the estimator of each method.}
\label{fig_result2}
\end{center}
\end{figure}
\subsection{Case 2 : causal diagram is unknown}
Next, we deal with the case where the causal diagram is unknown.
Let the set $\mathcal{M}$ of candidate causal models be $\left\{G_{1}, G_{2}\right\}$ in Figure \ref{fig_diagram}.
The assumptions for the structural equations, the probability distributions of the leaf variables, and the prior distributions of the parameters are the same as the previous experiment.
We also assume that $p(m=G_{1})=p(m=G_{2})=\frac{1}{2}$.
Note that $X$ and $Y$ are conditionally independent given $Z$ in the model $G_{1}$, but not in the model $G_{2}$; hence, as the sample size increases, we can identify with high probability which model generated the data.
As in the previous experiment, we consider the problem of estimating the intervention effect on $Y$ when intervening with $X=1$, given $D^{n}=(x_{i}, y_{i}, z_{i})_{i=1,\ldots,n}$ as a sample of $(X, Y, Z)$.
We compare the following two methods.
\begin{description}
\item[Method 1 (MAP)] \ \\Estimate the model by
\begin{align}
\hat{m}=\argmax_{m\in\mathcal{M}}p(m|D^{n})
\end{align}
and calculate the Bayes optimal estimator under the model $\hat{m}$,
\begin{align}
p(y|\mbox{do}(X=x),\hat{m},D^{n}).
\end{align}
\item[Method 2 (BAYES)] \ \\Calculate the Bayes optimal estimator (\ref{predict_mixed_model}).
\end{description}
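The two methods differ only in the final step: Method 1 plugs in the single most probable model, whereas Method 2 keeps the posterior-weighted mixture. A minimal sketch (names ours):

```python
import numpy as np

def map_select_predict(weights, preds, y):
    """Method 1 (MAP): predict with the single most probable model only."""
    return preds[int(np.argmax(weights))](y)

def bayes_average_predict(weights, preds, y):
    """Method 2 (BAYES): posterior-weighted mixture over all candidate models."""
    return float(sum(w * p(y) for w, p in zip(weights, preds)))
```

When one posterior weight dominates, the two predictions coincide; they differ precisely when the model posterior is spread out, i.e., at small sample sizes.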
Figure \ref{fig_result3} shows the Kullback-Leibler divergence between the true intervention effect on $Y$ when intervening $X=1$ and the estimator of each method.
The results of the two methods also converge as the sample size increases.
This is because the posterior probability of the true model approaches $1$ as the sample size grows.
When the sample size is small, however, Method 2 is better than Method 1.
Since this experiment uses only two candidate models, the two methods differ only at small sample sizes; we expect the difference to increase with the number of candidate models.
\begin{figure}[t]
\begin{center}
\includegraphics[keepaspectratio=true,width=\linewidth]{mixed_model.eps}
\caption{The Kullback-Leibler divergence between the true intervention effect on $Y$ when intervening $X=1$ and the estimator of each method. Models $G_{1}$ and $G_{2}$ appear with equal probability.}
\label{fig_result3}
\end{center}
\end{figure}
\section{Conclusion and future works}
In this study, a Bayes optimal method for estimating the intervention effect was derived by formulating the estimation problem within the framework of statistical decision theory.
In the estimation of the intervention effect, it is common to first estimate the causal diagram, then estimate the conditional probability distributions among the variables, and finally calculate the intervention effect.
From the viewpoint of Bayes decision theory, however, instead of fixing a single model and parameter value, weighting by the posterior probability of the model and the posterior distribution of the parameters is optimal.
We now describe some directions for future work.
The examples in this paper dealt with the case where the structural equations are linear.
It is necessary to derive the general form of the Bayes optimal estimator for more general models.
Furthermore, it would be meaningful to investigate how the differences among the methods observed in the experiments grow for models other than the linear structural equation model.
In this study, we did not discuss calculation methods or computational complexity.
Even if the model is known and the structural equations are linear, the Bayes optimal estimator of the intervention effect cannot be calculated analytically.
Therefore, in this paper, the estimator was computed by numerical integration.
As the model becomes more complicated, the computational cost grows, so it is necessary to construct an approximation algorithm that efficiently calculates the Bayes optimal estimator.
Also, when the model is unknown, the posterior probability of every candidate model must be calculated, which becomes computationally difficult as the number of candidate models grows.
It is thus also necessary to construct an efficient approximation algorithm for the case where the model is unknown.
\section*{Acknowledgment}
We would like to acknowledge all
members of Matsushima Lab. and Goto Lab. in Waseda Univ. for their
helpful suggestions to this work.
This research is partially supported by No. 16K00417 of Grant-in-Aid for
Scientific Research Category (C) and No. 18H03642 of Grant-in-Aid for Scientific Research Category (A), Japan Society for the Promotion
of Science.
\bibliographystyle{IEEEtran}
1901.04931
\section{Introduction}
\label{sec:Introduction}
Research on two-dimensional (2D) materials has grown enormously over the last few years \cite{Bhimanapati2015}, since the first isolation of monolayer graphene in 2004 by Novoselov \emph{et al.} \cite{Novoselov2004}. Despite graphene's flexibility, strength, and high conductivity, with promising applications in electronics and spintronics, it does not have a bandgap, severely restricting its use in optoelectronics and digital electronics \cite{Gibney2015}. Although interest in graphene is far from over, attention has recently been drawn to other 2D materials, such as hexagonal boron nitride (hBN) \cite{Zhang2017hBN}, the various -enes in groups IVA and VA (silicene, arsenene, antimonene, stanene, germanene, bismuthene, etc.) \cite{Balendhran2015,Zhang2015ArseneneAntimonene,Reis2017}, black phosphorus \cite{Balendhran2015,Carvalho2016phosphorene}, and the extensive transition-metal dichalcogenide (TMD) family \cite{Manzeli20172d}.
Naturally, different 2D materials can be combined within the same device and form diverse heterostructures (HSs). As each material has different electronic structure and properties, HSs are capable of enhancing or, better yet, creating new tailored features, which are rather weak or nonexistent in their pristine counterparts. Prominent recent examples of HSs between 2D materials include enhancement of valley splitting by magnetic proximity \cite{Zhao2017enhanced,Seyler2018}, the appearance of spatially indirect excitons \cite{Calman2018}, and superconductivity in graphene bilayers rotated by a \emph{magic angle} \cite{Cao2018unconventional}.
In particular, group VIB semiconducting TMDs \cite{Novoselov2005} have been suggested as novel components for spintronics devices \cite{Zibouche2014}. They are receiving a great deal of attention due to their unique electronic \cite{Wang2012,Xu2014NatPhys,Liu2015} and optical properties \cite{Castellanos2016}, including strong spin-orbit coupling (SOC) \cite{Zhu2011,Cheiwchanchamnangij2012,Xiao2012} and either direct or indirect bandgap depending on the number of layers \cite{Splendiani2010,Mak2010}. TMDs can be exfoliated down to a `monolayer unit', a stack of three atomic layers ($MX_{2}$), in which transition metal atoms ($M$=Mo, W) are sandwiched between two layers of chalcogen species ($X$=S, Se, Te), resulting in molybdenum disulphide (MoS$_{2}$), tungsten diselenide (WSe$_{2}$), and molybdenum ditelluride (MoTe$_{2}$), among others.
Two (or more) different TMD monolayer units can either be stacked together to form a vertical HS \cite{Geim2013}, or they can be `stitched' together to build a lateral HS \cite{Ling2016} (also called planar or in-plane). Given that different TMDs show different bandgaps, work functions, SOC and excitonic spectra, they offer a wide variety of tunable properties when combined. In 2016, Kolobov \emph{et al.} \cite{Kolobov2016} listed one theoretical and seven experimental papers in their section 13.2 on lateral HSs; just two years later, those numbers have grown to 35+ and 40+, respectively, highlighting the rapidly growing interest and activity in these novel structures.
Lateral HSs have been achieved, and the current emphasis is on improving the quality of the interfaces. Experiments on these systems include graphene-hBN \cite{Levendorf2012graphenehBN,Drost2015graphenehBN}, graphene-TMDs \cite{Ling2016}, hBN-TMDs \cite{Ling2016}, and different TMD-TMD combinations \cite{Ling2016,Huang2014NatMat,Gong2014NatMat,Duan2014NatNano,Zhang2015,Li2015,Zhang2016naturecommunications,Zhang2018strain,Sahoo2018Nature,Xie2018ParkGroup}, with many suggested applications as in-plane transistors, diodes, \emph{p-n} photodiodes and complementary metal-oxide-semiconductor (CMOS) inverters. Chemical vapor deposition (CVD) growth techniques have focused on successfully improving the lateral atomic connection between materials \cite{Huang2014NatMat,Gong2014NatMat,Duan2014NatNano,Zhang2018strain,Sahoo2018Nature,Xie2018ParkGroup}, in order to build clean, sharp and well oriented borders. This progress is clearly reflected in the description of HSs systems, which has changed from \emph{alloys} to \emph{interfaces}. Recent experiments have shown remarkable control on sharpness \cite{Sahoo2018Nature} and strain at the interface \cite{Zhang2018strain,Xie2018ParkGroup}. Moreover, atomically sharp interfaces between crystalline phases of the same TMD, 1T'-WSe$_{2}$ and 1H-WSe$_{2}$, have been studied in the search for topologically protected helical edge states \cite{Ugeda2018arxiv}.
As the quality and diverse composition of lateral HSs continue to improve, these interfaces can serve as exciting new platforms for the study of 1D physical phenomena.
As control of lateral HSs in TMDs is increasingly achieved in experiments, understanding the structural and electronic properties of the interfaces and their general behavior is important for future progress. Experiments have shown that the optoelectronic behavior has strong 1D character, and theoretical approaches have started to appear, suggesting effective applications for these novel 1D systems. Several interesting reviews on this topic \cite{Voiry2015,Lv2015,Li2016,Novoselov2016,Lin2016Review,Yuan2017Review,Zhao2018Review,Frisenda2018Review,Chen2018Review,Zeng2018Review,Hu2018Review} have focused mainly on experimental advances. In contrast, in this review we address current theoretical advances on 1D lateral interfaces between group VIB semiconducting TMDs (with a few exceptions), together with an overview of the experimental efforts to produce these interesting interfaces and associated device geometries.
The review is organized as follows: In section\ \ref{sec:Experiments} we briefly summarize experimental advances, focusing on growth and characterization of the interfacial region. In section\ \ref{sec:TheoreticalAndNumerical} we develop the main scope of this review, the theoretical and numerical descriptions of lateral HSs: We describe structural and electronic properties, and analyze proposals for using the interface as a unique stage to explore 1D physics. In section\ \ref{sec:Applications} we summarize the already available and proposed applications. In section\ \ref{sec:ProspectiveDirections} we give an outlook on the evolution of the field, and lastly, in section\ \ref{sec:SummaryFinal} we provide a concluding summary.
\section{Experiments}
\label{sec:Experiments}
Clean and sharp lateral interfaces in TMD HSs were reported in 2014 by three groups \cite{Huang2014NatMat,Gong2014NatMat,Duan2014NatNano}, improving on previous alloy growth $M^{(1)}X^{(1)}_x-M^{(2)}X^{(2)}_{1-x}$ \cite{Zhang20142DAlloys,Li2015Lateral,Wang2015SpinOrbit,Xie20152Dalloys,Yoshida2015microscopic,Zheng2015Monolayers,Duan2016Synthesis} with different metal $M^{(j)}$ and chalcogen $X^{(j)}$ combinations. Recent experiments have shown successful growth of longer interfacial sections with great strain, geometry, and/or electronic band alignment tunability \cite{Sahoo2018Nature,Xie2018ParkGroup,Zhang2018strain}.
This section is intended to serve as a brief resource to be considered in theoretical proposals for effective uses of lateral HSs between TMDs. It is divided into three subsections as follows: \ref{subsec:growth} reviews growth techniques, \ref{subsec:ExperimentalInterfaces} lists important interfacial parameters, and \ref{subsec:PhaseInterfaces} reviews advances on interfaces between different phases of the same TMD.
\subsection{Growth and characterization techniques}
\label{subsec:growth}
\begin{table*}[t]
\caption{\label{tab:table1} Atomic parameters for lateral TMD HS, including the HS interface, interfacial features (sharpness, strain, length, orientation, stitch or atomic interface), growth technique, source (Ref.), and substrate used. The strain column lists values only if characterized in the study. Length corresponds to the maximum pristine interface presented in the study. On the orientation column, ac* indicates that armchair domains are only sporadic instead of extended. Growth: while most procedures are chemical vapor deposition (CVD), additional techniques are also required, shown with symbols, $\dag$: e-beam lithography, $\ddag$: PTAS seeding, $\S$: assisted NaCl, and $\P$: self aligned. The table is in chronological order, with early work at the top.}
\begin{indented}
\item[]\begin{tabular}{@{}ccccccccc}
\br
&\multicolumn{5}{c}{Interface}&&&\\ \cline{2-6}\\
HS interface & Sharpness & Strain & Length & Orientation & Stitch & Growth & Ref. & Substrate \\
\mr
MoSe$_2$-WSe$_2$ & smooth (16 nm) & - & 30 nm & - & - & 1-step CVD & \cite{Huang2014NatMat} & SiO$_2$/Si \\
MoS$_2$-MoSe$_2$ & smooth (30 nm) & - & - & - & Se-W & 1-step CVD & \cite{Duan2014NatNano} & SiO$_2$/Si \\
WS$_2$-WSe$_2$ & smooth (40 nm)& - & - & - & & \\
MoS$_2$-WS$_2$ & sharp & - & 7 nm & zz \& ac* & S-Mo & 1-step CVD & \cite{Gong2014NatMat} & SiO$_2$/Si \\
MoS$_2$-WS$_2$ & sharp & - & 16 u.c. & zz & S-W & 2-step CVD$^\ddag$ & \cite{Zhang2015} & SiO$_2$/Si, sapphire, quartz \\
MoSe$_2$-WSe$_2$ & & & & & Se-W & & & \\
MoSe$_2$-MoS$_2$ & sharp & - & 1 nm & zz & - & - & \cite{Tizei2015Esciton} & \\
WS$_2$-MoS$_2$ & sharp & - & 10 nm & zz & S-W & 2-step CVD & \cite{Heo2015RotationMisfitFree} & SiO$_2$ \\
MoSe$_2$-MoS$_2$ & smooth (5 nm) & - & - & zz \& ac & - & 2-step CVD$^\dag$ & \cite{Mahjouri2015patterned} & SiO$_2$ \\
WSe$_2$-MoS$_2$ & sharp & 1.5\% & & & S-W & 2-step CVD & \cite{Li2015} & sapphire \\
MoSe$_2$-WSe$_2$ & sharp & - & & & S-W & 2-step CVD & \cite{Gong2015TwoStep} & SiO$_2$/Si \\
MoS$_2$-WS$_2$ & sharp & - & - & - & - & 2-step CVD & \cite{Chen2015Electronic} & SiO$_2$/Si \\
MoS$_2$-WS$_2$ & sharp & - & 6 nm & - & zz & 1-step CVD & \cite{Chen2015} & \\
MoS$_2$-WS$_2$ & sharp & - & 8 nm & zz & - & 2-step CVD & \cite{Yoo2015} & sapphire \\
MoS$_2$-graphene & smooth & - & - & - & - & 2-step CVD$^\ddag$ & \cite{Ling2016} & SiO$_2$/Si \\
MoS$_2$-WS$_2$ & & & & & & & & \\
MoS$_2$-hBN & & & & & & & & \\
WSe$_2$-MoS$_2$ & smooth (120 nm) & - & - & zz & - & 2-step CVD & \cite{Son2016Observation} & sapphire \& ITO \\
MoS$_2$-WS$_2$ & sharp & - & - & - & - & 2-step CVD & \cite{Bogaert2016Diffusion} & SiO$_2$/Si \\
WS$_2$-MoS$_2$ & sharp & - & - & - & - & 2-step CVD & \cite{Kobayashi2016} & Graphite \\
WSe$_2$-WS$_2$ & sharp & - & 4 nm & zz & S-W & 2-step CVD & \cite{Chen2016Lateral} & SiO$_2$/Si \\
WSe$_2$-WS$_2$ & sharp & - & - & zz \& ac & - & 2-step CVD$^\dag$ & \cite{Li2016Laterally} & sapphire \\
MoS$_2$-MoSe$_2$ & smooth & - & - & - & - & 2-step CVD & \cite{Chen2017InPlaneMosaic} & SiO$_2$/Si \\
WSe$_2$-MoS$_2$ & sharp & - & 5 nm & zz & - & 2-step CVD & \cite{Tsai2017SingleAtomically} & SiO$_2$/Si \\
WS$_2$-MoS$_2$ & sharp & - & 4 nm & zz & S-Mo & 1-step CVD & \cite{Shi2017cascaded} & SiO$_2$/Si \& Al$_2$O$_3$/Ag\\
& smooth & - & - & - & - & 2-step CVD$^\ddag$ & & \\
MoS$_2$-WS$_2$ & sharp (0.85 nm) & - & 6 nm & - & - & 1-step CVD$^\S$ & \cite{Wang2017NaClAssisted} & SiO$_2$/Si \\
MX$_2$ combinations & sharp & - & 6 nm & zz \& ac* & - & many-step CVD & \cite{Zhang2017Robust} & SiO$_2$/Si \\
MoS$_2$-WS$_2$ & sharp \& smooth & - & - & - & - & 1-pot CVD & \cite{Liu2017ARXIVNanoscale} & SiO$_2$ \\
MoSe$_2$-WSe$_2$, & sharp \& smooth & - & - & zz & X-Mo & 1-pot CVD & \cite{Sahoo2018Nature} & Si \\
MoS$_2$-WS$_2$ & & & & & & & & \\
WSe$_2$-MoS$_2$ & sharp & 2.2\% & 5-15 nm & zz \& ac* & Se-Mo & 2-step CVD & \cite{Zhang2018strain} & HOPG \\
& & 1.76\% & & & & & & WSe$_2$\\
WSe$_2$-MoS$_2$ & sharp & - & - & irregular & - & 2-step CVD$^\P$ & \cite{Li2018SelfAligned} & sapphire \\
WSe$_2$-WS$_2$ & sharp & 1.2\% & 160 u.c. & zz & & MOCVD & \cite{Xie2018ParkGroup} & SiO$_2$ \\
MoS$_2$-WS$_2$ & - & - & - & - & - & 2-step CVD$\dag$ & \cite{Murthy2018Intrinsic} & SiO$_2$/Si \& hBN\\
MoS$_2$-WS$_2$ & sharp & - & 3 nm & zz & - & 1-step CVD & \cite{Zhou2018Morphology} & SiO$_2$/Si \\
MoS$_2$-WS$_2$ & sharp & - & - & - & - & 1-step CVD & \cite{Wu2018SelfPowered} & SiO$_2$/Si \\
MoSe$_2$-WSe$_2$ & sharp \& smooth & - & - & - & - & 1-pot CVD & \cite{Xue2018NanoOptical} & SiO$_2$/Si \\
\br
\end{tabular}
\end{indented}
\end{table*}
Controlled synthesis of TMD HSs remains challenging due to the difficulty of growth conditions and their tunability. Furthermore, the HSs obtained with most methods are still relatively small, which restricts possible studies and applications. Different TMDs have similar thicknesses (monolayer height), so that the planar connection depends mostly on lattice constant mismatch. They also have nearly the same lattice constant if the chalcogen is the same ($\approx$0.3\% lattice mismatch) for different transition metals (e.g., MoS$_{2}$-WS$_{2}$). On the other hand, the lattice constant is very different ($\approx$4\% lattice mismatch) if only the chalcogen changes (e.g., MoS$_{2}$-MoSe$_{2}$). This large difference may lead to dislocations or wrinkles at the interface, with large built-in strains, which have important consequences on the electronic structure, as we will see later.
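The quoted mismatch figures follow directly from the monolayer lattice constants; a small sketch using approximate literature values (our assumption) for the lattice constants in \AA:

```python
# approximate monolayer lattice constants in angstroms (assumed literature values)
a = {"MoS2": 3.16, "WS2": 3.15, "MoSe2": 3.29, "WSe2": 3.28}

def mismatch_percent(m1, m2):
    """Relative lattice mismatch |a1 - a2| / a1, in percent."""
    return 100.0 * abs(a[m1] - a[m2]) / a[m1]

same_chalcogen = mismatch_percent("MoS2", "WS2")      # ~0.3% (metal swap)
diff_chalcogen = mismatch_percent("MoS2", "MoSe2")    # ~4%  (chalcogen swap)
```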
The growth of 2D TMDs HSs usually involves CVD, where vapor phase reactants are generated by thermally evaporating solid sources, usually powders. A classification scheme considers the degree of growth process manoeuvrability, where fewer changes in the growth conditions (such as sources or reactors) reduce degradation and promote cleaner and sharper interfaces. This \emph{the-fewer-steps-the-better} theme is increasingly mentioned in the literature. On the other hand, more steps allow the construction of more complex HSs, such as quantum wells and periodic HSs. A 1-step CVD process uses \emph{in situ} modulation of the vapor-phase reactants during growth, changing the chalcogen precursor just once in the middle of the growth run (see for example, Duan \emph{et al.} \cite{Duan2014NatNano}). A 2-step CVD involves the synthesis of one TMD, followed by epitaxial growth of the second one off the edges of the first growth, as reported in \cite{Heo2015RotationMisfitFree} and \cite{Gong2015TwoStep}. The advantage of this process is that it allows larger and sharper interfaces, avoiding cross-contamination. Multi-step CVD typically consists of modulating the chemical vapor source sequentially, to grow block-by-block multi-HSs \cite{Zhang2017Robust}.
CVD techniques have been used to grow heterotriangles, composed of a central TMD and an outer triangular ring of another TMD \cite{Huang2014NatMat,Gong2014NatMat,Duan2014NatNano}. Truncated triangles, hexagons, and hexagrams \cite{Zhang2015} have been also seen in experiments. These are built by changing the growth conditions, keeping the same chalcogen and changing the metal to build MoSe$_{2}$-WSe$_{2}$ \cite{Huang2014NatMat} or MoS$_{2}$-WS$_{2}$ \cite{Gong2014NatMat,Zhang2015}, or by keeping the metal and changing the chalcogen, building MoS$_{2}$-MoSe$_{2}$ or WS$_{2}$-WSe$_{2}$ \cite{Duan2014NatNano}. Other works change both species, such as WSe$_{2}$-MoS$_{2}$ \cite{Li2015}.
More complex patterned structures have also been reported \cite{Mahjouri2015patterned}. MoSe$_2$ pristine triangular flakes are coated with SiO$_2$, for subsequent sulfurization of the uncovered parts, obtaining \emph{crosswalk} patterned lateral arrays of MoSe$_2$-MoS$_2$ within the initial triangular flake. Similar approaches allow cutting triangular flakes with electron-beam lithography resulting in armchair interfaces or even irregular logos \cite{Li2016Laterally}. Large area mosaics of lateral HSs of triangular MoS$_2$ sections embedded in a monolayer MoSe$_2$ have been achieved, in a `cheetah spots' configuration \cite{Chen2017InPlaneMosaic}. Ring interfaces between MoS$_2$ and WS$_2$ have been observed as well \cite{Chen2015Electronic}.
Recent work has shown that nearly perfect interfaces can be grown in a `one-pot' synthesis process \cite{Sahoo2018Nature,Xie2018ParkGroup}. This is achieved by changing the composition of the reactive gas environment in the presence of water vapor, allowing for great control and flexibility. TMD controlled growth in the carrier gas N$_2$+H$_2$O(g) promotes growth of MoX$_2$, while Ar+H$_2$(5\%) suppresses Mo and promotes growth of WX$_2$. The approach appears versatile and scalable, as continuous planar multi-interfaces can be grown by controlled sequential edge-epitaxy. Sahoo \emph{et al.} report several MoSe$_2$-WSe$_2$ and MoS$_2$-WS$_2$ lateral HSs, with long and controllable 1D interfaces \cite{Sahoo2018Nature}. Their Se-based HSs are concentric triangles, while S-based HSs have one triangular central section with trapezoidal sections growing off the central edges. A similar one-pot process creates coherent WSe$_2$-WS$_2$ lateral HSs (also WSe$_2$-MoS$_2$-WS$_2$) \cite{Xie2018ParkGroup}. The coherence would allow one in principle to tune optical properties, strain-engineering the HS photoluminescence. The growth modulation uses metal-organic CVD (MOCVD), controlling each precursor individually and precisely, with linear dependence of transverse width vs growth time. Coherence was shown using different scanning transmission electron (STEM) microscopy techniques.
Similarly, sophisticated CVD growth techniques have allowed the characterization of strain, as discussed by Zhang \emph{et al.} \cite{Zhang2018strain}, who directly determined strain in WSe$_2$-MoS$_2$ HSs and in the coherent and sharp WSe$_2$-WS$_2$ HSs \cite{Xie2018ParkGroup}. Table \ref{tab:table1} summarizes growth techniques, substrates, and other interfacial parameters for experiments with lateral TMD HSs.
In-plane lateral HSs have also been achieved between materials with different thicknesses, such as those composed of bilayer-monolayer combinations (also called \emph{terrace} structures) of either MoSe$_{2}$ or WSe$_{2}$ \cite{Zhang2016naturecommunications}.
Other improvements on growth processes are also being considered. For example, temperature control is essential, promoting mixing at high temperatures and compositional segregation at lower temperatures, so that HSs with sharp interfaces are achieved at low growth temperatures, and alloying occurs at higher temperatures \cite{Bogaert2016Diffusion}. CVD assisted by sodium chloride (NaCl) requires lower growth temperatures, as Na precursors condense on the substrate and reduce reaction energies \cite{Wang2017NaClAssisted}. Given that the properties of 2D materials are susceptible to external environments, the encapsulation of HSs between hBN sheets has been recently obtained \cite{Murthy2018Intrinsic}, showing that both photovoltaic and hot electron generation lead to photocurrents that depend on the biasing conditions.
Control over location and size of the CVD flakes is not as well developed, although efforts are underway. Ling \emph{et al.} developed a \emph{parallel stitching} method for connecting MoS$_2$ to several materials, such as WS$_2$, graphene, and hBN \cite{Ling2016}. This consisted of sowing perylene-3,4,9,10-tetracarboxylic acid tetrapotassium salt (PTAS) molecules on the growth substrate, which serve as seeds to facilitate growth off the edges of a previously deposited 2D material, depending on the wettability of seeds and surfaces \cite{Ling2016,Shi2017cascaded}.
The interface between MoS$_2$-graphene appears more terrace-like than lateral stitching, with overlapping edges extending for 2-30 nm. No lattice distortion is seen at the interface, but atomic defects associated with MoS$_2$ edges, such as Mo-Mo bonds, and S bridge defects were found.
Also, a new scalable 2-step CVD method for lateral growth has been developed, allowing the fabrication of heteroribbons \cite{Li2015,Chen2016Lateral} with long interfaces in a non-triangular structure.
Most recently, WSe$_2$-MoS$_2$ \cite{Li2018SelfAligned} and WS$_2$-MoS$_2$ HSs \cite{Aleithan2018unpublished} have been grown starting from two distinct metallic samples. This 2-step process promotes growth from distinct patterned metal contacts in a \emph{position-selective} manner, as the interface is created where the two flakes meet, as shown in figure\ \ref{FigInterfaces}(d). This method allows control of the geometrical distribution of the interfaces, tailored by pre-growth lithographically patterned electrodes, as done for precontacted monolayer systems \cite{Khadka2017,Aleithan2018unpublished}.
One of the most common and informative characterization tools for lateral TMD HSs is the excitonic photoluminescence (PL) near the interface \cite{Huang2014NatMat,Gong2014NatMat,Duan2014NatNano,Sahoo2018Nature}. Most interestingly, Tizei \emph{et al.} \cite{Tizei2015Esciton} measured the spatial variation of excitons across a MoS$_2$-MoSe$_2$ interface with sub-wavelength resolution, using spatially resolved electron energy loss spectroscopy (EELS) with a monochromatic beam size of 1 nm. The resulting exciton maps allow measurements of optical features with nanometer-scale resolution; excitonic peaks appear broader at interfaces, probably due to interfacial roughness.
A different technique of photocurrent spectral atomic force microscopy allowed imaging of currents and photocurrents generated between a PtIr tip and the monolayer WSe$_2$-MoS$_2$ HS \cite{Son2016Observation}. Changing tip polarity and magnitude showed that the photoresponse can be switched on and off.
Second harmonic generation (SHG) and atomic-resolution STEM have also been used to characterize HS symmetries \cite{Zhang2015,Li2015,Zhou2018Morphology,Wu2018SelfPowered}. A recent study has quantitatively characterized the built-in potential at the interface by scanning Kelvin probe force microscopy (SKPFM) along with SHG at the interface \cite{Wu2018SelfPowered}. SHG measures the angle between the crystal orientation and the axis of a linearly polarized pump laser normally incident on the HS.\@ When the incident laser polarization is perpendicular (parallel) to the zigzag (armchair) direction, intensity maxima appear. This allows one to determine the growth direction, and whether the interfaces are zigzag or armchair \cite{Li2015}.
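The angular dependence exploited in these measurements can be made explicit: for a D$_{3h}$ monolayer, the SHG component polarized parallel to the pump varies as $\cos^{2}(3\theta)$, with $\theta$ measured from the armchair direction (a standard result, stated here as an assumption):

```python
import numpy as np

def shg_parallel(theta_deg, i0=1.0):
    """SHG intensity (parallel-polarized component) vs angle from the armchair axis."""
    return i0 * np.cos(3.0 * np.radians(theta_deg)) ** 2

# maxima recur every 60 degrees (armchair directions);
# zeros sit 30 degrees away (zigzag directions)
```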
Lateral MoSe$_2$-WSe$_2$ \cite{Xue2018NanoOptical} and MoS$_2$-WS$_2$ \cite{Liu2017ARXIVNanoscale} HSs have been used for mapping spatially confined carriers with nanoscale resolution around the interfaces. Near-field plasmonic tip-enhanced photoluminescence has been able to distinguish distinct crystal boundaries with high resolution, showing enhanced PL at the interfaces.
\subsection{Experimental interfacial parameters}
\label{subsec:ExperimentalInterfaces}
Lateral HSs can be grown along both the zigzag and armchair directions, see figure\ \ref{FigInterfaces}(a)-(b). Although zigzag is the most common, armchair interfaces are also often seen with atomic-resolution STEM \cite{Gong2014NatMat,Zhang2015}. PL spectroscopy can probe the clean and sharp interface, as shown in figure\ \ref{FigInterfaces}(c). The localized excitonic signal is due to the strong built-in electric field at the atomically sharp interface, originating from a type-II band alignment, as will be explained later. This built-in field leads to preferential recombination at the interface, while in bulk monolayer regions radiative recombination of excitons may be suppressed by non-radiative channels \cite{Gong2014NatMat}.
In this section we discuss experimental techniques and parameters that are important for theoretical modeling and characterization. In \ref{subsubsec:InterfacialGeometry} we describe the geometry observed in commensurate HSs, while \ref{subsubsec:InterfacialStrain} describes incommensurate HSs and how strain affects the interface. In \ref{subsubsec:BandalignmentExperiment} we present measurements of the band alignment between the two TMD semiconductors forming the HS. Lastly, \ref{subsubsec:PlasmonicsEffectsEXPERIMENT} highlights plasmonic effects observed at the interfaces.
\begin{figure*}[tbph]
\centering
\includegraphics[width=1.0\textwidth]{Fig1.pdf}
\caption{(a) Zigzag and (b) armchair interfaces between WS$_2$ and MoS$_2$, in atomic-resolution Z-contrast images with their respective ball-stick models. Scale bar is 0.5 nm \cite{Gong2014NatMat}. (c) One-pot growth of some of the largest and sharpest interfaces to date. Upper panels show the optical image of a multi-HS, and composite PL maps with the TMD and strongest excitonic single peak as indicated. Lower panels show atomic-resolution images of a MoSe$_2$-WSe$_2$ sharp zigzag interface, as indicated in the zoomed model \cite{Sahoo2018Nature}. (d) Schematics of the metal deposition and ion-gel film coating process (left panel); electroluminescence image where white dashed lines show the electrode shape and the orange dashed line is the interface (middle panel); zoomed sharp interface with atomic resolution (right panel) \cite{Li2018SelfAligned}. (a) and (b) Reprinted with permission from \cite{Gong2014NatMat}. Copyright 2014 Springer Nature, Nature Materials. (c) Reprinted with permission from \cite{Sahoo2018Nature}. Copyright 2018 Springer Nature, Nature. (d) Reprinted with permission from \cite{Li2018SelfAligned}. Copyright 2018 John Wiley and Sons, Advanced Functional Materials.}
\label{FigInterfaces}
\end{figure*}
\subsubsection{Interfacial geometry in commensurate HSs}
\label{subsubsec:InterfacialGeometry}
As previously mentioned, when the chalcogen across a HS is the same, the strain is less than 1\%, so that relaxed commensurate interfaces can be achieved [table \ref{tab:table1} summarizes results for HSs]. One of the 2014 reports \cite{Gong2014NatMat} characterizes the atomic connections between MoS$_2$ and WS$_2$, finding zigzag and armchair interfaces, as shown in figures\ \ref{FigInterfaces}(a)-(b). These were found to be sharp, with 4 unit cells of overall roughness (about 15 nm). The armchair domains were seen to have inter-diffusion over 1-3 unit cells. The longest defect-free zigzag stretches are about 7 nm long, while the armchair ones are about 2 nm, suggesting the relatively low stability of fresh armchair MoS$_2$ edges during epitaxial growth.
Sahoo \emph{et al.} \cite{Sahoo2018Nature} achieved one-pot CVD growth of either MoSe$_2$-WSe$_2$ or MoS$_2$-WS$_2$, shown in figure\ \ref{FigInterfaces}(c), among the sharpest and longest HSs obtained to date. Their MoSe$_2$-WSe$_2$ structures [concentric triangles in figure\ \ref{FigInterfaces}(c)] exhibit atomically sharp and smooth interfaces just 4 atomic lines (1 nm) or 21 atomic lines (6 nm) wide in the two different HSs. This difference is attributed to the different oxidation and reduction rates of Mo and W, as well as to the gas switching mechanism; further optimization is anticipated to lead to even sharper interfaces. The MoS$_2$-WS$_2$ trapezoids around a central triangle in figure\ \ref{FigInterfaces}(c) also show sharp interfaces and modulation of the optical bandgap. The inner MoS$_2$ shows two kinds of terminations, Mo- and S-zigzag, depending on the gas environment: chalcogen deficiency promotes the formation of Mo-zigzag edges.
Combined planar and vdW structures (terrace interfaces), where the edge of a first TMD monolayer lies on top of another, also exhibit zigzag orientations that may act as quantum wires \cite{Zhang2016naturecommunications}.
As most CVD procedures yield zigzag terminations, with only sporadic armchair domains, the best approach for obtaining armchair interfaces is perhaps e-beam lithography. Cutting a pristine TMD monolayer and then depositing another TMD achieves a `crosswalk' pattern of lateral MoSe$_2$-MoS$_2$ ribbons \cite{Mahjouri2015patterned} or a bisector strip of the second TMD in a WSe$_2$-WS$_2$ HS \cite{Li2016Laterally}, both with interfaces along the armchair direction.
\subsubsection{Interfacial strain and incommensurate HSs}
\label{subsubsec:InterfacialStrain}
When the chalcogens differ on either side of the interface (or the other side is another material altogether), strain plays a large role in the HS properties, which could be exploited for strain engineering. Although many such structures had been grown, no detailed analysis of the strain distribution had been performed \cite{Duan2014NatNano,Tizei2015Esciton,Ling2016}. Early attempts found a 1.59\% tensile strain and a 1.1\% compressive strain in a WSe$_2$-MoS$_2$ HS, as estimated from a PL energy shift rate of 45 meV per \% of strain \cite{Li2015}. Strain effects have now started to be systematically characterized and even tailored in experiments \cite{Xie2018ParkGroup,Zhang2018strain}, allowing coherent HSs. Interfacial parameters for incommensurate HSs are also listed in table\ \ref{tab:table1}.
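As a back-of-envelope illustration of this kind of estimate (a hedged sketch, not code from the cited work; the sign convention and the input shift value are our assumptions), the quoted gauge factor converts a PL peak shift directly into a strain value:

```python
# Hedged sketch: converting a PL peak shift into a strain estimate using the
# ~45 meV per 1% strain gauge factor quoted in the text. The sign convention
# (tensile strain redshifts the PL peak) is an assumption of this sketch.

GAUGE_MEV_PER_PERCENT = 45.0  # meV of PL shift per 1% strain

def strain_from_pl_shift(delta_e_mev):
    """Return the strain (in %) implied by a PL energy shift (in meV)."""
    return -delta_e_mev / GAUGE_MEV_PER_PERCENT

# A ~72 meV redshift maps onto a tensile strain of ~1.59%, the order of
# magnitude quoted in the text.
print(round(strain_from_pl_shift(-71.6), 2))
```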
Zhang \emph{et al.} \cite{Zhang2018strain} directly map the anisotropic strain tensor in WSe$_2$-MoS$_2$ using scanning tunneling microscopy/spectroscopy (STM/STS) techniques. Unlike optical techniques such as Raman and PL, STM/STS is not diffraction limited. They further use the hexagonal moir\'e pattern ($\sim1$ nm period) as a `magnifying glass' for observing changes in lattice constants, as seen in figure\ \ref{FigInterfaceStrain}(a). When a highly oriented pyrolytic graphite (HOPG) substrate is used, the magnification is found to be 3$\times$, driven by the large lattice mismatch ($>$30\%) between the TMD and the HOPG substrate and the nearly zero rotation between them. When a WSe$_2$ substrate is used instead, the magnification factor increases ($>$20$\times$), since the mismatch between the HS and the substrate is so small that essentially no moir\'e pattern is observed. The strain distribution is characterized by the 2D strain tensor parameters $\epsilon_{aa}$, $\epsilon_{bb}$, and $\epsilon_{ab}=\epsilon_{ba}$, with $a$ and $b$ defined along the zigzag and armchair directions, as shown in figure\ \ref{FigInterfaceStrain}(b). It is seen that $\epsilon_{aa}$ decays much faster than $\epsilon_{bb}$, by a 2 to 1 ratio over a 50 nm length. This difference is probably due to the presence of a free edge during growth, allowing stress relaxation normal to the edge. Analytical modelling for these $\epsilon$ components is discussed later in section \ref{subsubsec:IncommensurabilityAndStrain} below.
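The `magnifying glass' role of the moir\'e pattern follows from the textbook period formula for two aligned lattices, $L = a_1 a_2/|a_1-a_2|$. The sketch below assumes zero rotation and uses approximate lattice constants of our own choosing, so it is an order-of-magnitude illustration rather than the analysis of the cited work:

```python
# Hedged sketch: moire period of two aligned (zero-twist) lattices. A change
# of the top-layer lattice constant is amplified in the moire period by
# roughly L/a, which is the "magnification" discussed in the text.

def moire_period(a_top, a_sub):
    """Moire superlattice period for two aligned lattices (same units)."""
    return a_top * a_sub / abs(a_top - a_sub)

a_tmd, a_hopg = 3.2, 2.46  # approximate lattice constants in angstrom
L = moire_period(a_tmd, a_hopg)
print(round(L, 1))          # ~1 nm period, as quoted in the text
print(round(L / a_tmd, 1))  # ~3x magnification for the >30% TMD/HOPG mismatch
```

A small mismatch (a few \%) pushes $L/a$ above 20, consistent with the larger magnification reported on the WSe$_2$ substrate.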
Xie \emph{et al.} \cite{Xie2018ParkGroup} are able to control strain effects in coherent WSe$_2$-WS$_2$ and WSe$_2$-MoS$_2$-WS$_2$ lateral HSs, as shown in figure\ \ref{FigInterfaceStrain}(c). The interface was repeated without dislocations, matching lattice constants at the interfaces (even though they differ by $\sim$4\%) and maintaining structure and triangular symmetry, as seen in figure\ \ref{FigInterfaceStrain}(d). Since one misfit dislocation is expected every 25 unit cells on average, the 160 unit cells ($\sim$50 nm) average dislocation-free length observed is definite evidence of coherent HSs. These data agree with coarse-grained simulations (see section \ref{subsubsec:IncommensurabilityAndStrain}) which account for bonding and angle interactions. Rippled regions where the lattice constant is larger (WSe$_2$) can also be achieved by perturbing the coherent flat 2D HS with a thermal cool-down immediately after growth, as shown in figure\ \ref{FigInterfaceStrain}(e). These ripples show characteristic wavelengths of about 30 nm.
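The expected dislocation spacing follows from simple arithmetic: a fully relaxed interface accommodates a lattice mismatch $f$ with roughly one misfit dislocation every $1/|f|$ unit cells. This back-of-envelope estimate (our sketch, not the coarse-grained model of the cited work) reproduces the numbers quoted above:

```python
# Hedged estimate: average misfit-dislocation spacing at a fully relaxed
# interface, in unit cells, for a lattice mismatch f. The ~4% mismatch
# reproduces the "one dislocation every 25 unit cells" quoted above, so
# 160 dislocation-free unit cells signals a coherent (strained) interface.

def dislocation_spacing_cells(mismatch):
    """Average spacing between misfit dislocations, in unit cells."""
    return 1.0 / abs(mismatch)

print(round(dislocation_spacing_cells(0.04)))   # -> 25
print(dislocation_spacing_cells(0.04) < 160)    # coherent stretches are longer
```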
\begin{figure*}[tbph]
\centering
\includegraphics[width=1.0\textwidth]{Fig2.pdf}
\caption{(a) Left panel shows an STM image of a lateral WSe$_2$-MoS$_2$ HS on a WSe$_2$ substrate (inset), where red regions are kink spots due to adsorbates, separating straight interfacial sections. Right panels are moir\'e patterns for the strain-free and strained regions within the HS, used as \emph{magnifying glasses} for changes in the lattice constant, with magnification factors of 3 and 20, respectively \cite{Zhang2018strain}. (b) Decay of the strain tensor components as a function of distance from the interface, with experimental results for multiple regions on the WSe$_2$ (black dots), averages along representative zigzag lines (triangles), and fits (lines) \cite{Zhang2018strain}. (c) Schematic representation of a coherent WS$_2$-WSe$_2$ HS (left panel), and its growth epitaxy (right), where $a_{\parallel}$ and $a_{\perp}$ are lattice constants parallel and perpendicular to the interface \cite{Xie2018ParkGroup}. (d) SEM images of achieved coherent planar WS$_2$-WSe$_2$ HSs \cite{Xie2018ParkGroup}. (e) Thermally induced ripples in WSe$_2$ (owing to its larger lattice constant) with respect to WS$_2$ \cite{Xie2018ParkGroup}. (a) and (b) Reprinted with permission from \cite{Zhang2018strain}. Copyright 2018 Springer Nature, Nature Nanotechnology. (c)-(e) Reprinted with permission from \cite{Xie2018ParkGroup}. Copyright 2018 The American Association for the Advancement of Science, Science.}
\label{FigInterfaceStrain}
\end{figure*}
\subsubsection{Band alignment}
\label{subsubsec:BandalignmentExperiment}
A key feature of HSs is that the bandgaps and Fermi levels of the two materials are usually different, leading to polarization dipoles and even charge transfer across the HS, driven by differences in the bulk conduction and valence bands. These differences can be seen to arise from the different electronegativities and/or work functions of the materials across the HS.
A major question is the relative alignment of the conduction and valence bands across the HS. Borrowing from bulk semiconductors, one identifies three usual types of alignment: type-I, when the bandgap of one material is contained (nested) inside the bandgap of the other (also called symmetric alignment); type-II, when the conduction band minimum (CBM) of one material lies inside the gap of the other (also called staggered alignment); and type-III, when the CBM of one material is lower than the valence band maximum (VBM) of the other material (also called broken alignment). Related useful quantities to measure are the conduction and valence band offsets, CBO and VBO respectively, defined as CBM$_1-$CBM$_2 \equiv$ CBO, and VBM$_1-$VBM$_2 \equiv$ VBO.
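The three cases above can be encoded in a few lines. The sketch below is a paraphrase of these definitions (with purely illustrative band-edge energies on a common reference, not measured offsets from any cited work):

```python
# Hedged sketch: classify the band alignment of two semiconductors from their
# band edges (VBM, CBM), all in eV on a common (e.g. vacuum) reference.

def alignment_type(edges_a, edges_b):
    """Return the alignment type for two materials with edges (vbm, cbm)."""
    (vbm1, cbm1), (vbm2, cbm2) = edges_a, edges_b
    if cbm1 < cbm2:  # relabel so material 1 has the higher CBM
        vbm1, cbm1, vbm2, cbm2 = vbm2, cbm2, vbm1, cbm1
    if cbm2 < vbm1:
        return "type-III (broken)"   # gap of 2 entirely below gap of 1
    if vbm2 >= vbm1:
        return "type-I (nested)"     # gap of 2 inside gap of 1
    return "type-II (staggered)"     # both edges of 1 above those of 2

# Illustrative numbers only:
print(alignment_type((-5.9, -4.3), (-5.2, -3.6)))  # type-II (staggered)
print(alignment_type((-6.0, -3.5), (-5.5, -4.0)))  # type-I (nested)
```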
For HSs between different TMDs, band alignments have been calculated with DFT \cite{Kang2013,Gong2013,Kosmider2013,Wei2014,Guo2016,OngunOzcelik2016} (see below for details),
and measured experimentally for vertical \cite{Chiu2015,Hill2016} and lateral \cite{Gong2014NatMat,Zhang2018strain} HSs in different works. Vertical HS band alignment has been experimentally measured by STM/STS, scanning photocurrent microscopy and X-ray photoelectron spectroscopy (XPS), in MoS$_{2}$ \cite{Howell2015}, MoSe$_{2}$ \cite{Zhang2016naturecommunications} and WSe$_{2}$ \cite{Zhang2016naturecommunications} terraces, and both MoS$_{2}$-WSe$_{2}$ \cite{Chiu2015} and WS$_{2}$-MoS$_{2}$ \cite{Hill2016} vertical HSs. Although most of these experimental works consider vertical HSs, Zhang \emph{et al.} \cite{Zhang2018strain} have recently studied band alignment in WSe$_{2}$-MoS$_{2}$ lateral HSs.
\begin{figure*}[tbph]
\centering
\includegraphics[width=1.0\textwidth]{Fig3.pdf}
\caption{Experimentally resolved band alignments: (a) type-II for a vertical MoS$_{2}$-WSe$_{2}$ HS \cite{Chiu2015}, (b) type-II for a vertical MoS$_{2}$-WS$_{2}$ \cite{Hill2016}, and (c) type-I for a lateral MoS$_{2}$-WSe$_{2}$ \cite{Zhang2018strain}. In (a) and (b) the schematic setups for the $\mu$-XPS + STM/S measurements are shown, as well as the alignment diagrams. In (c), the left and right panels show STS spectra as the tip approaches the interface (see upper insets showing relative atomic positions), used to measure the VBO and CBO; the results are schematically shown in the middle panel. Here the levels for the interface states and strain-induced states (SIS) are also shown, with respect to the strain-free MoS$_2$ case (dashed green). (a) Reprinted with permission from \cite{Chiu2015}. Copyright 2015 Creative Commons Attribution 4.0, Nature Communications. (b) Reprinted with permission from \cite{Hill2016}. Copyright 2016 American Chemical Society, Nano Letters. (c) Reprinted with permission from \cite{Zhang2018strain}. Copyright 2018 Springer Nature, Nature Nanotechnology.}
\label{BandAligmentMixed}
\end{figure*}
A type-II band alignment has been inferred in vertical MoTe$_{2}$-MoS$_{2}$ HSs from SKPFM and Raman measurements, with a theoretically calculated offset of $\simeq 0.66$ eV \cite{Zhang2016ACSnano}. For terrace HSs of the same TMD at a monolayer-bilayer interface, the band alignment of MoS$_{2}$ observed by scanning photocurrent microscopy is type-II \cite{Howell2015}. Later, STS measurements in terraces of WSe$_{2}$ and MoSe$_{2}$ found type-I band alignment \cite{Zhang2016naturecommunications}, with a VBO for WSe$_{2}$ (MoSe$_{2}$) of 0.12 eV (0.43 eV) and a CBO of 0.15 eV (0.08 eV). This work also reports DFT calculations for vertical HSs of TMDs with different chalcogens. An interesting hybrid bilayer system, with a top WS$_2$ layer and a bottom WS$_2$-MoS$_2$ lateral HS, has been studied by STM/STS and appears to show type-II band alignment at the HS \cite{Kobayashi2016}.
Chiu \emph{et al.} \cite{Chiu2015} used STS and $\mu$-XPS measurements on vertical MoS$_{2}$-WSe$_{2}$ HSs, finding an HS bandgap of $1.32\pm0.12$ eV, measured from the VB K-point of WSe$_{2}$ up to the CB K-point of MoS$_{2}$, corresponding to a type-II alignment. They measure a VBO of 0.83 eV and a CBO of 0.76 eV; the quasiparticle gaps of MoS$_{2}$ ($2.15\pm0.01$ eV) and WSe$_{2}$ ($2.08\pm0.01$ eV) are also reported. The VBO value of $\approx0.8$ eV is supported by DFT calculations (GGA-PBE) \cite{Guo2016}. In other work, Hill \emph{et al.} \cite{Hill2016} studied both vertical MoS$_{2}$-WS$_{2}$ and WS$_{2}$-MoS$_{2}$ HSs by STS, and observed an HS bandgap of $1.45\pm0.06$ eV, measured from the VB K-point of WS$_{2}$ up to the CB K-point of MoS$_{2}$, again corresponding to type-II alignment. The quasiparticle gaps of MoS$_{2}$ ($2.16\pm0.04$ eV) and WS$_{2}$ ($2.38\pm0.06$ eV) in the HS setup were determined, together with the 110 meV energy difference between the Q and K points in the MoS$_{2}$ CB. The band offset findings are schematically shown in figure\ \ref{BandAligmentMixed}(a)-(b) \cite{Chiu2015,Hill2016}, and a brief summary is given in table \ref{tab:table2}.
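These numbers are internally consistent: in a type-II alignment the HS gap runs from the WSe$_2$ VBM up to the MoS$_2$ CBM, so $\Delta_{\mathrm{HS}} = \Delta_{\mathrm{MoS}_2} - \mathrm{VBO} = \Delta_{\mathrm{WSe}_2} - \mathrm{CBO}$. A quick numerical check (values copied from the measurements above):

```python
# Consistency check of the type-II offsets reported by Chiu et al.: in a
# staggered alignment, gap_HS = gap_MoS2 - VBO = gap_WSe2 - CBO.
gap_mos2, gap_wse2 = 2.15, 2.08  # eV, quasiparticle gaps
vbo, cbo = 0.83, 0.76            # eV, band offsets

print(round(gap_mos2 - vbo, 2))  # -> 1.32, the measured HS gap
print(round(gap_wse2 - cbo, 2))  # -> 1.32, same value from the CB side
```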
\begin{table*}[t]
\caption{\label{tab:table2} Experimental band alignment energy parameters. The parameters are (in order): HS, type of HS, MoS$_{2}$ quasiparticle gap $\Delta_{\textrm{MoS}_{2}}$, WS(Se)$_{2}$ quasiparticle gap $\Delta_{\textrm{WS(Se)}_{2}}$, HS gap $\Delta_{\textrm{HS}}$, valence band offset (VBO), conduction band offset (CBO), and type of alignment. All energies are in eV.}
\begin{indented}
\item[]\begin{tabular}{@{}cccccccc}
\br
HS & HS type &$\Delta_{\textrm{MoS}_{2}}$& $\Delta_{\textrm{WS(Se)}_{2}}$ & $\Delta_{\textrm{HS}}$ & VBO & CBO & band alignment\\
\mr
MoS$_{2}$-WSe$_{2}$ \cite{Chiu2015} & Vertical & 2.15$\pm$0.01 & 2.08$\pm$0.01 & 1.32 & 0.83$\pm$0.07 & 0.76$\pm$0.12 & type-II \\
MoS$_{2}$-WS$_{2}$ \cite{Hill2016} & Vertical & 2.16$\pm$0.04 & 2.38$\pm$0.06 & 1.45 & 0.71 & 0.93 & type-II \\
MoS$_{2}$-WSe$_{2}$ \cite{Zhang2018strain} & Lateral & position-dependent & position-dependent & 0.52 & -0.65$\pm$0.05 & 0.40$\pm$0.05 & type-I \\
\br
\end{tabular}
\end{indented}
\end{table*}
In lateral HSs, two works have addressed band alignment. The study by Gong \emph{et al.} \cite{Gong2014NatMat} finds that the DFT alignment in WS$_{2}$-MoS$_{2}$ is type-II, with a band offset of 0.07 eV at the VBM, and gaps of 1.59 eV and 1.55 eV for MoS$_{2}$ and WS$_{2}$, respectively. This work also calculates a built-in electric field of over $2\times10^{8}$ N/C at the zigzag interface, which may drive free electrons and holes generated in the vicinity of the interface to recombine preferentially there. More recently, Zhang \emph{et al.} \cite{Zhang2018strain} analyzed the band alignment in a lateral MoS$_{2}$-WSe$_{2}$ HS, finding that the misfit strain induces a type-II to type-I transformation. They used STS mappings of the valence and conduction bands as the tip moves across the interface, as shown in figure\ \ref{BandAligmentMixed}(c). While a vertical HS of the same materials shows type-II alignment \cite{Chiu2015} (as shown in figure\ \ref{BandAligmentMixed}(a) \cite{Chiu2015} and (b) \cite{Hill2016}), the lateral HS shows type-I alignment, as the MoS$_2$ valence band exhibits an unexpected spatial variation with respect to WSe$_2$, with CBO$\,=0.4$ eV and VBO$\,=-0.65$ eV, as shown in figure\ \ref{BandAligmentMixed}(c). The strain pushes the VBM of the MoS$_2$ up at the $\Gamma$ point, while the CBM behaves more straightforwardly. The band bending is found to start just 5 nm away from the interface on the WSe$_2$ side, while in MoS$_2$ it starts further away. The potential discontinuity, however, is observed within a window of just 1 nm. To the best of our knowledge, this is the first accurate space-resolved measurement of band alignment in a lateral TMD HS.
\subsubsection{Plasmonics}
\label{subsubsec:PlasmonicsEffectsEXPERIMENT}
Quantum plasmonic effects were recently observed in WS$_2$-MoS$_2$ \cite{Shi2017cascaded} and WSe$_2$-MoSe$_2$ \cite{Tang2018} lateral HSs, with measured photoresponses suggesting that these systems might serve as quantum nanodevices with tunable optical response.
Shi \emph{et al.} \cite{Shi2017cascaded} transferred WS$_2$-MoS$_2$ onto a Ag plasmonic plate covered with Al$_2$O$_3$, so as to transfer the excitonic energy to surface plasmon polaritons. A complex cascade of exciton/surface-plasmon-polariton/exciton conversion in lateral HSs was demonstrated from WS$_2$ to MoS$_2$, as mediated by the plasmonic substrate. The advantage of having an atomically sharp interface is that the energy transfer has a propagation length of $\sim$40 $\mu$m (2 orders of magnitude larger than in bare TMD), and the pristine interface minimizes energy loss.
The experiments by Tang \emph{et al.} \cite{Tang2018} image near-field tip-enhanced photoluminescence (TEPL) of a lateral, 150 nm wide interface (not atomically sharp). They investigate tunneling-assisted hot-electron injection (HEI) at room temperature, observing quenching and enhancement of the PL from the interfacial region due to the attenuation of the localized electromagnetic field and to hot-electron injection. TEPL allowed optical characterization of the HS, showing that the interface PL response can be controlled by varying the lateral tip position and the picometer-scale tip-sample distance. For charge tunneling distances of $\sim$20 pm, electron tunneling facilitates thermionic injection in the quantum regime.
The interface plays a critical role in the enhancement of the TEPL signal: it is the interfacial region that allows the MoSe$_2$ side to accumulate more plasmon-induced hot electrons, because the band alignment drives directional hot-electron injection at the interface. Hot electrons are transferred to the MoSe$_2$ side, and when the tip diameter is comparable to the interfacial region (20 to 0.36 nm) the injected hot electrons accumulate in MoSe$_2$, leading to PL enhancement in MoSe$_2$ and quenching on the WSe$_2$ side. A close tip-sample distance favors electron tunneling, leading to extra quenching of the WSe$_2$ PL, while the MoSe$_2$ component is still enhanced.
\subsection{Phase interfaces within the same MX$_{2}$}
\label{subsec:PhaseInterfaces}
Different crystalline phases of the same TMD can also be created, for example, by electrostatic potential differences between regions. Lateral \emph{p-n} junctions within the same TMD \cite{Baugher2014Optoelectronic,Pospischil2014Solar,Ross2014Electrically,Desai2016Mos2} have been studied with a smooth HS profile.
An early report by Eda \emph{et al.} showed the creation of coherent interfaces between the semiconducting H and metallic T phases within MoS$_2$, characterized by STEM \cite{Eda2012Coherent}. In 2014, Lin \emph{et al.} \cite{lin2014atomic} created few-atoms-wide interfaces between metallic 1T triangular islands embedded in semiconducting 2H MoS$_2$, controlling the growth of the triangular 1T regions by electron beam illumination. They observed that the atomic interface shows a dynamic evolution between the different H-T phases of MoS$_2$, involving atomic gliding of S and/or Mo planes to achieve the triangular island. Further insight into these experiments is given by DFT calculations \cite{Kretschmer2017}. More recently, Yoo \emph{et al.} \cite{Yoo2017} demonstrated lateral HSs between the MoTe$_2$ 2H and 1T' phases, by controlling the temperature of the reaction vessel and the Te flux (high flux for the 2H phase, low for 1T'). These crystals appear as circular 2H islands laterally connected to multilayer 1T' regions. SKPFM and Raman show sharp in-plane interfaces.
One fascinating aspect of the 1D 1T'-phase structures is their topological nature. An atomically sharp interface between 1T'- and 1H-WSe$_{2}$ monolayers has been synthesized \cite{Ugeda2018arxiv}, figure\ \ref{Ugeda2018arxiv}(a), to study topological properties at the 1D interface. Topologically protected helical edge states were seen at the interface, showing that such a novel quantum spin Hall insulator platform is possible \cite{Ugeda2018arxiv}. Molecular beam epitaxy (MBE) was used to grow the mixed-phase monolayer WSe$_2$, which was characterized with angle-resolved photoemission spectroscopy (ARPES) and STM/STS, jointly revealing inverted bulk bands and the existence of topological interface states within the bandgap, figure\ \ref{Ugeda2018arxiv}(c), at crystallographically well-ordered interfaces, all in agreement with first-principles calculations.
The 1D interfacial states at such an atomically sharp interface have a characteristic penetration length of only 2 nm into the bulk, as shown in figure\ \ref{Ugeda2018arxiv}(b).
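Such a penetration length is commonly extracted by fitting the spatial LDOS profile to an exponential decay. A minimal sketch, assuming a simple $e^{-x/\lambda}$ form (our assumption, not necessarily the fit used in the cited work):

```python
import math

# Hedged sketch: relative interface-state weight a distance x from the
# interface, assuming an exponential decay with the ~2 nm penetration
# length quoted in the text.

def interface_state_weight(x_nm, lam_nm=2.0):
    """Relative LDOS weight of the interface state at distance x_nm."""
    return math.exp(-abs(x_nm) / lam_nm)

print(round(interface_state_weight(2.0), 2))  # drops to 1/e at x = lambda
print(interface_state_weight(10.0) < 0.01)    # essentially bulk by ~10 nm
```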
\begin{figure}[tbph]
\centering
\includegraphics[width=0.4\textwidth]{Fig4.pdf}
\caption{1D interface in 1H-1T' WSe$_2$. (a) Model, topography, and dI/dV STS spectra at $-130$ mV. The dashed-dotted white line indicates the interface, and green dashed lines indicate the spatial extent of the interface state. (b) dI/dV along the $x$-direction. (c) Comparison between interfacial and bulk LDOS. Reprinted with permission from \cite{Ugeda2018arxiv}. Copyright 2018 Creative Commons Attribution 4.0, Nature Communications.}
\label{Ugeda2018arxiv}
\end{figure}
\section{Numerical and theoretical descriptions}
\label{sec:TheoreticalAndNumerical}
This section represents the main scope of this review, focusing on current theoretical advances in the description of lateral HSs between TMDs. While experiments have concentrated on achieving pristine and coherent interfaces with lengths now exceeding several micrometers, prospective theoretical directions and applications have also been reported.
In section\ \ref{subsec:StructuralAndElectronic} we present advances in numerical and theoretical calculations of different aspects of TMD lateral HSs, and in \ref{subsec:1DNovelPlatform} we review proposals for using these interfaces as an effective platform for unique 1D physics.
\subsection{Structural and electronic properties}
\label{subsec:StructuralAndElectronic}
DFT calculations of lateral HSs have been appearing since 2014, as the first clean interfaces were being grown by several groups. These theoretical works are mostly focused on studying the evolution of the electronic bands and bandgaps, with consideration of band alignment effects. Structural studies focus on geometrical stability and strain, utilizing relatively small unit cells for computation.
The remainder of this section is arranged as follows: first, in \ref{subsubsec:ElectronicStructure} we summarize works on electronic structure for commensurate TMD HSs, highlighting band alignment, stability, and structure. Then, in \ref{subsubsec:IncommensurabilityAndStrain} we provide an overview of how the inherent strain at the interface affects the electronic properties, especially for non-commensurate HSs.
\subsubsection{Electronic structure}
\label{subsubsec:ElectronicStructure}
\emph{Band alignment.- } Let us first review the theoretical works that analyze an important aspect of the electronic structure of TMD HSs: the band alignment (or band offset) across the junction. These offsets are important parameters in material design, as discussed in section\ \ref{subsubsec:BandalignmentExperiment}, and HS modeling requires accurate knowledge of the alignment. Unfortunately, the band offsets for monolayer materials and their lateral heterostructures are not fully settled theoretically. Depending on the materials involved, some works suggest type-I (nested) alignment, others type-II \cite{Kang2013,Kosmider2013,Wei2014}, and some even type-III \cite{OngunOzcelik2016}.
Early DFT band alignment studies mostly addressed vertical HSs \cite{Kang2013,Gong2013,Kosmider2013}. The first work to analyze lateral monolayer HSs and the role of band alignment between different TMDs was reported by Kang \emph{et al.} \cite{Kang2013}. They use the Vienna \emph{ab initio} simulation package VASP \cite{Kresse1996} with the projector augmented wave (PAW) method \cite{Blochl1994}, and either the generalized gradient approximation of Perdew-Burke-Ernzerhof (GGA-PBE) \cite{PerdewBurkeErnzerhof1996} or the hybrid Heyd-Scuseria-Ernzerhof (HSE06) \cite{Heyd2003} functional for exchange and correlation. They study several lateral TMD combinations, finding similar chemical trends regardless of the functional used. This suggests a model to establish the relative alignment of valence and conduction bands from the orbital content of the VBM and CBM, which originates from the repulsion between the cation-$d$ and anion-$p$ orbitals. They show that the MoX$_2$-WX$_2$ lateral HSs would have type-II band alignment, as shown in figure \ref{Kang2013}, where the band edges of each WX$_2$ are higher in energy than those of its same-chalcogen MoX$_2$ counterpart. This approach has become popular and remains widely used.
\begin{figure}[tbph]
\centering
\includegraphics[width=0.5\textwidth]{Fig5.pdf}
\caption{Band alignments for six TMDs as shown. Solid (dashed) lines are obtained with the PBE functional (HSE06 hybrid functional). Potential levels for water reduction (H$^+$/H$_2$) and oxidation (H$_2$O/O$_2$) are also shown. Reprinted with permission from \cite{Kang2013}. Copyright 2013 by AIP Publishing, Applied Physics Letters.}
\label{Kang2013}
\end{figure}
In general, PBE (GW$_{0}$) underestimates (overestimates) the bandgaps, and the accuracy is improved by using hybrid functionals such as HSE06. Nevertheless, the HSE06 functional overestimates the spin splitting of the valence band \cite{Amin2015,Kormanyos2015}, so that PBE and HSE06 are often used as lower and upper bound estimates for the gaps, respectively. The band structure of HSs is also very sensitive to the type of atomic stacking \cite{Kormanyos2015,Terrones2013}.
Other DFT calculations have shown that a vertical MoS$_{2}$/WS$_{2}$ HS \cite{Kosmider2013} also exhibits type-II band alignment with a direct bandgap (with the HSE functional; PBE predicts an indirect bandgap), in contrast to the pristine bilayer counterparts, both of which show indirect bandgaps. Gong \emph{et al.} studied the band alignment between several vertical TMDs \cite{Gong2013}, including the semiconducting group-IVB and metallic group-VB TMDs (IVB: Ti, Zr, and Hf; VB: V, Nb, Ta). They find that tunnel field effect transistors could be built from $p$-$n$ junctions of group-VIB $n$-type and group-IVB $p$-type HSs. Soon after these predictions, a combined theory-experiment study found that the lateral WS$_{2}$-MoS$_{2}$ HS alignment is indeed type-II: DFT calculations determined a band offset of 0.07 eV at the VBM, and gaps of 1.59 eV and 1.55 eV for MoS$_{2}$ and WS$_{2}$, respectively \cite{Gong2014NatMat}. This work also calculates a strong built-in electric field of over $2\times10^{8}$ N/C at the zigzag MoS$_{2}$-WS$_{2}$ lateral interface.
Interest in the alignment between TMDs in lateral HSs has been increasing over the years \cite{Wei2014,OngunOzcelik2016,Amin2015}. Wei \emph{et al.} confirmed the type-II alignment predicted by Kang \emph{et al.}, additionally studying lateral junctions with metallic TMDs \cite{Wei2014}. Other DFT work \cite{Amin2015} looks at vertical and lateral MoX$_{2}$-WX$_{2}$ (X=S, Se, Te) HSs, reporting structural, electronic, optical, and photocatalytic properties. We note, however, that this system is not a single interface between two slabs, but rather an in-plane arrangement of single atomic lines of different transition metals, with zigzag interfaces between them, which one can describe as a large concentration of parallel grain boundaries. They find that all these systems have a direct bandgap, with contributions from both Mo and W to the VBM and CBM.
More recently, Guo \emph{et al.} have addressed the issue of band alignment for lateral HSs of different TMDs, metallic and semiconducting \cite{Guo2016}. They used the CASTEP plane-wave pseudopotential code \cite{CASTEP2005}, with ultrasoft pseudopotentials. For the lateral MoS$_2$-WS$_2$ HS, they studied both zigzag and armchair interfaces, finding little difference in the HS projected DOS. This is attributed to the Mo-S bond being relatively non-polar, with only a 0.3e charge on each S site. Finally, a comprehensive spin-polarized DFT study of band alignment \cite{OngunOzcelik2016} (PBE and HSE06 functionals) has been carried out for well-known 2D semiconductors, including transition metal di- and trichalcogenides. This results in a useful database, the \emph{periodic table of heterostructures}, including geometries, electronic structure, and band offsets, among other properties, as shown in figure \ref{OngunOzcelik2016}. In this table, the alignment for most group-VIB TMD HSs is proposed to be type-II.
\begin{figure}[tbph]
\centering
\includegraphics[width=0.5\textwidth]{Fig6.pdf}
\caption{Band alignment of different HSs, in the so-called \emph{periodic table of heterostructures} \cite{OngunOzcelik2016}. The lower left (upper right) region of the table corresponds to results with the PBE (hybrid HSE06) functional. Two colors for the same HS indicate that the difference between alignment types is within the DFT error range, so the HS can have either type. Reprinted with permission from \cite{OngunOzcelik2016}. Copyright 2016 by the American Physical Society, Physical Review B.}
\label{OngunOzcelik2016}
\end{figure}
Recent studies have questioned the applicability of the usual definition of band alignment. Wei \emph{et al}.\ dispute whether the alignment in lateral HSs of different TMDs can be addressed by separately aligning rigid band edges, since the creation of a dipole at the interface \cite{Wei2016}, which has been seen experimentally \cite{Gong2014NatMat}, is an effect that strongly depends on the structure. The size and directionality of the dipole should account for the different TMD crystallite edges, such as zigzag and armchair, as well as terminations involving grain boundaries \cite{Kang2015} and/or defects \cite{Wei2016,Cao2017}. Such a complete description of the band alignment in lateral TMD HSs is yet to come.
\emph{Band structure of commensurate TMD HSs.- } Most numerical work on lateral TMD HSs has been done in nanoribbons (NRs). Let us first summarize what is known for pristine TMD NRs. DFT studies have shown that narrow zigzag-terminated NRs have a magnetic ground state, with metallic, half-metallic and semiconducting electronic states depending on the NR width \cite{Wei2015,Wen2016}, while larger widths tend to remain metallic. Armchair-terminated NRs are nonmagnetic and semiconducting \cite{Wei2015,Wen2016}. Zigzag magnetic properties can be enhanced by strain, while bandgaps of armchair NRs decrease with strain. In MoS$_2$, tensile (compressive) strain increases (reduces) the bond lengths, so that the bulk bandgap decreases (increases) monotonically and a direct-indirect transition occurs. Moreover, bi-axial tensile strain reduces the gap further \cite{Wei2017}.
Pristine zigzag NRs are found to exhibit magnetic ground states for small widths (fewer than 8 atomic lines) \cite{Wen2016}.
The largest magnetic moment occurs for the narrowest `ribbon' (2 atomic lines, one of each atom type), while higher MoS$_{2}$ content produces a smaller magnetic moment for any width. Pristine armchair NRs have a smaller (larger) bandgap for smaller (larger) width, so that the gap can be tuned by changing the NR width and edge termination. These authors also report a transition from indirect to direct bandgap when the width increases above 9 atomic lines.
Studies of lateral TMD HSs have been carried out for (nearly) commensurate and incommensurate junctions. The former are built out of TMDs with a different transition metal but the same chalcogen (such as MoS$_{2}$-WS$_{2}$), while the latter typically combine different chalcogens (such as MoS$_{2}$-MoTe$_{2}$). The latter will be addressed in more detail in section \ref{subsubsec:IncommensurabilityAndStrain}.
An early study by Wang \emph{et al}.\ \cite{Wang2013} tackles structural and electronic properties of commensurate MoS$_{2}$-WS$_{2}$ and incommensurate MoS$_{2}$-MoTe$_{2}$ HSs. They used Quantum ESPRESSO with PBE-GGA for exchange-correlation, and found that the MoS$_{2}$-WS$_{2}$ HS remains a semiconductor after hybridization, with a bandgap of 1.58 eV, smaller than that of either constituent. They also find that the lowest-energy superlattice consists of a MoS$_{2}$ row embedded into a WS$_{2}$ ribbon.
Larger systems were studied with VASP (GGA-PBE for electron exchange correlation) \cite{Wen2016}.
\begin{figure}[tbph]
\centering
\includegraphics[width=0.4\textwidth]{Fig7.pdf}
\caption{Band structures for (a) armchair MoS$_{2}$-WS$_{2}$, (b) zigzag MoS$_{2}$-WS$_{2}$, (c) armchair MoSe$_{2}$-WS$_{2}$, and (d) zigzag MoSe$_{2}$-WS$_{2}$. Red arrows indicate the bandgap. Reprinted with permission from \cite{Wei2015}. Copyright 2015 by the Royal Society of Chemistry, Physical Chemistry Chemical Physics.}
\label{Wei2015PCCPFIGURE2}
\end{figure}
Wei \emph{et al}.\ \cite{Wei2015SciRep} have studied the electronic properties of quantum well HSs, systems with two interfaces, such as MoS$_{2}$-WS$_{2}$-MoS$_{2}$, among others. They used PAW+VASP with GGA-PBE for exchange-correlation, and found that the electronic properties of these quantum wells can be engineered by adjusting the strain, resulting in different bandgaps and an indirect-to-direct bandgap transition as the number of unit cells in each HS changes, similar to results by Kang \emph{et al}.\ for single interfaces \cite{Kang2015}. Wei {\em et al}.\ also find type-II alignment in coherent interfaces with strong coupling, suggesting effective separation and collection of excitons as a possible application. The same group studied the interface properties in great detail, confirming that excitons should stay confined at opposite sides of the 1D interface due to the type-II band alignment \cite{Wei2015}. Typical band structures for sufficiently large HSs (width $\sim$90~\AA) are shown in figure \ref{Wei2015PCCPFIGURE2}. All HSs are found to be semiconducting with direct gaps, at the $A$-point (which lies at 2/3 of $\Gamma X$) for zigzag HSs, and at the $\Gamma$-point for armchair HSs.
In lateral HSs, no van der Waals forces keep the materials together; rather, Mo and W atoms near the interface form competing covalent bonds with the chalcogens. This can be seen for MoS$_{2}$-WS$_{2}$ in figure \ref{Wei2015PCCPFIGURE3}, for both armchair and zigzag HSs. Covalent bonding changes can be seen at the interface as electron density probabilities in (a) and (b), and as a large electron density difference in (c) and (d), boosting electrical and optical responses exactly at the 1D interface. Note that a net charge transfer does not occur for the armchair HS, where the interfacial electrical polarization cancels across the junction, while a net charge accumulation occurs for the zigzag HS.
\begin{figure}[tbph]
\centering
\includegraphics[width=0.45\textwidth]{Fig8.pdf}
\caption{For MoS$_{2}$-WS$_{2}$, (a) and (b) show the real-space charge density difference for armchair and zigzag HSs, respectively. (c) and (d) show the plane-averaged electron density for armchair and zigzag HSs. In (a) and (b), yellow (cyan) regions are electron accumulation (depletion). Insets show the connection geometries. Reprinted with permission from \cite{Wei2015}. Copyright 2015 by the Royal Society of Chemistry, Physical Chemistry Chemical Physics.}
\label{Wei2015PCCPFIGURE3}
\end{figure}
Analysis of the band-decomposed charge density of the VBM and CBM at the $\Gamma$ point of armchair HSs shows localization of states on W and Mo atoms, respectively, illustrating a true type-II HS, and suggesting monolayer-like optical absorption in these HSs and strong excitonic effects with large binding energies. For the zigzag MoS$_{2}$-WS$_{2}$ HS at the $\Gamma$ point, charge is located on opposite sides at the $S$-edges, with slight overlap at the interface.
In-plane interfacing effects are further studied, describing charge transfer across the interface, work functions of the different edges connecting at the interface, and the role of defects \cite{Wei2016}. Quantum wells show similar behavior to the single-interface case of Wei \emph{et al}.\ \cite{Wei2015}, and include
projections of the wave functions of each material. The VBM and CBM are located at opposite sides of the interface, in WS$_{2}$ and MoS$_{2}$, respectively. The difference yields the alignment offsets, which are 0.1 eV for the VBM and 0.3 eV for the CBM, different from the core-level alignment values of 0.27 eV and 0.23 eV, respectively. The 0.1 eV offset for the VBM is in agreement with experiments that measure 0.07 eV \cite{Gong2014NatMat}. The HS band structure shows a direct bandgap at the $K$-valley projection. The gaps in quantum wells are found to be lower
than in single-interface HSs, and lower than in the pristine monolayer TMDs, for different well widths.
The HS shows type-II band alignment at the interface, with binding energy $E_b=E_{\mathrm{HS}}-E_{\mathrm{MoS}_2}-E_{\mathrm{WS}_2} \simeq -18$ eV for different well widths, owing to the strong metal-S covalent bonds formed at the interface. Variations of $E_b$ for large wells are $\simeq 0.02$ eV,
suggesting that the interaction between interfaces is well screened out for the considered widths.
Most importantly, Wei {\em et al}.\ consider different hybridization geometries at the interface. It is known
that zigzag TMD terminations can be either chalcogen- or transition-metal-terminated (see figure\ \ref{AVALOS2018Fig1}), with the S-termination being more stable. Hence, zigzag interfaces can have two patterns: i) the Mo-edge with the S-edge of the W ribbon, or ii) the W-edge with the S-edge of the Mo ribbon, as shown schematically in figures\ \ref{Wei2016PCCPFIGURE45}(a)--(b).
The hybridization and charge transfer occurring at the interface create a built-in electric field that leads to a potential gradient across the interface, as seen in the charge density difference maps in figures \ref{Wei2016PCCPFIGURE45}(c)--(d).
The built-in potential is expected to change the local work functions; on the S-edge they have the same value for both TMDs, while on the M-edge they differ by only 0.08 eV, as shown schematically in figure \ref{Wei2016PCCPFIGURE45}(e). The charge transfer between MoS$_{2}$ and WS$_{2}$ is hence attributed to the larger work function at the zigzag edges with S-termination. The different zigzag interfacial connection patterns considered in \cite{Wei2016} have been seen in experiments and are listed in table\ \ref{tab:table1}.
\begin{figure}[tbph]
\centering
\includegraphics[width=0.4\textwidth]{Fig9.pdf}
\caption{For the MoS$_{2}$-WS$_{2}$ zigzag interface, (a) and (b) show two in-plane connection patterns between MoS$_{2}$ (purple) and WS$_{2}$ (green), with sulfur shown as yellow balls. In (a) the Mo-edge connects to the S-edge of the W ribbon, and in (b) the pattern is reversed. (c) and (d) show the charge density difference for (a) and (b), respectively. (e) Work function $\Phi$ for the four possible zigzag terminations at the interface, as shown. Reprinted with permission from \cite{Wei2016}. Copyright 2016 by the Royal Society of Chemistry, Physical Chemistry Chemical Physics.}
\label{Wei2016PCCPFIGURE45}
\end{figure}
As for defects, chalcogen vacancies are the most recurrent defects in monolayer TMDs. Wei \emph{et al.} \cite{Wei2016} studied S vacancies at the interface, and at the two closest S-lines on both the Mo and W sides, in MoS$_{2}$-WS$_{2}$ HSs. These defects cause localized in-gap states that evolve into overlapping bands in these short-period unit cells, and appear below the bottom of the conduction band, contributed mostly by metallic \emph{d}-orbitals. The states closest to the conduction band are contributed by the S-vacancies on the WS$_2$ side, while the ones lower in energy are on the MoS$_2$ side. When the vacancies are exactly at the interface, both the Mo and W atoms sharing the S vacancy are left unsaturated, creating a two-fold band structure with a gap, where the lower (higher) band is linked to the Mo (W) atom.
\emph{Tight-binding structure of commensurate TMD HSs.- } Other approaches have also been used to describe the electronic structure of HSs. Tight-binding approaches have been among the most common, for either commensurate \cite{Zhang2016SciRep,AvalosOvando2018Arxiv} or incommensurate HSs \cite{Choukroun2018Arxiv}. In this approach, it also becomes straightforward to model vacancies, adatoms and other local defects.
The successful 3-orbital tight-binding (3OTB) model \cite{Liu2013} allows one to build commensurate lateral HS nanoribbons with realistic sharp interfaces, such as those seen in experiments \cite{Huang2014NatMat,Sahoo2018Nature}. Different boundary geometries of edges and interfaces (either zigzag or armchair), with periodic boundary conditions (PBC) along the ribbon, can be modeled. The NR can be described by a triangular lattice of metal atoms and associated chalcogens, with only three 4\emph{d}-orbitals per metal site. This model exploits the fact that the near-gap (low-energy) level structure in TMDs is dominated by the metal 4\emph{d}-orbitals, with nearly no contribution from the chalcogen \emph{p}-orbitals \cite{Liu2013}. Other multi-orbital tight-binding models use larger basis sets \cite{Cappelluti2013,Roldan2014,Ridolfi2015,Fang2018}. These more computationally expensive but powerful formulations validate much of the results seen in the midgap range from the 3OTB approach.
The 3OTB model uses $d_{z^2}$, $d_{xy}$ and $d_{x^2-y^2}$ as basis, and for the HS is given by
\begin{equation}\label{heterolattice1}
H_{\mathrm{3OTB}} = H^{A}_{\mathrm{pristine}} + H^{B}_{\mathrm{pristine}} + H_{\mathrm{interface}},
\end{equation}
where $H^{A(B)}_{\mathrm{pristine}}$ is the Hamiltonian for each of the two TMDs, and $H_{\mathrm{interface}}$ describes the hoppings at the interface between the two TMD lattices. For TMDs with the same chalcogen atom, the lattice mismatch is less than 1\% (such as for MoS$_{2}$-WS$_{2}$ and MoSe$_{2}$-WSe$_{2}$) \cite{Huang2014NatMat,Gong2014NatMat,Sahoo2018Nature}. The resulting strain is correspondingly small, so that the interface is essentially compositional only. The tight-binding description simply connects the metal atoms across the interface. Differences in real-space lattice constants translate into slightly different monolayer Brillouin zones (BZ), although the difference is in the m\AA$^{-1}$ range and can be neglected without the need to introduce band folding. For each of the pristine TMD lattices (A and B), the 3OTB model is given by \cite{Liu2013}
\begin{equation}\label{lattice1}
H_{\mathrm{pristine}}^{\mathrm{A(B)}} = H^{\mathrm{A(B)}}_{\mathrm{o}} + H^{\mathrm{A(B)}}_{\mathrm{t}} + H^{\mathrm{A(B)}}_{\mathrm{SOC}},
\end{equation}
where $H^{\mathrm{A(B)}}_{\mathrm{o}}$ is the onsite Hamiltonian and $H^{\mathrm{A(B)}}_{\mathrm{t}}$ contains the hopping integrals. $H_{\mathrm o}$ is given by
\begin{equation}\label{lattice2}
H^{\mathrm{A(B)}}_{\mathrm{o}} = \sum_{ \textbf{l}}^{N_{sites}} \sum_{s=\uparrow,\downarrow}^{\mathrm{spin}} \sum_{\alpha,\alpha'}^{\mathrm{orbitals}} \varepsilon^{\mathrm{A(B)}}_{\alpha\alpha',s}d_{\alpha,\textbf{l},s}^{\dagger\mathrm{A(B)}}d^{\mathrm{A(B)}}_{\alpha',\textbf{l},s},
\end{equation}
where $d^{\mathrm{A(B)}}_{\alpha,\textbf{l},s}$ ($d^{\dagger\mathrm{A(B)}}_{\alpha,\textbf{l},s}$) annihilates (creates) a spin-$s$ electron in orbital $\alpha\in\left\{d_{z^2},d_{xy},d_{x^2-y^2}\right\}$ at site $\textbf{l}=l_{1}\textbf{R}_{1}+l_{2}\textbf{R}_{2}$, where \textbf{R}$_{j}$ are the lattice vectors of the triangular lattice for each material, and the onsite energies are given by $\varepsilon^{\mathrm{A(B)}}_{\alpha\alpha',s}$. For a rectangular ribbon, the total number of sites is $N_{sites}=N\times H$, as shown in figure\ \ref{AVALOS2018Fig1}. The nearest-neighbor coupling Hamiltonian is
\begin{equation}
H^{\mathrm{A(B)}}_{\mathrm{t}} = \sum_{\textbf{l,R}_j} \sum_{\alpha,\alpha',s}
t_{\alpha\alpha'}^{(\textbf{R}_{j})\mathrm{A(B)}}d_{\alpha,\textbf{l},s}^{\dagger\mathrm{A(B)}}d^{\mathrm{A(B)}}_{\alpha',\textbf{l}+\textbf{R}_{j},s}+
\mathrm{H.c.},
\end{equation}
with different hopping parameters $t_{\alpha\alpha'}^{(\textbf{R}_{j})\mathrm{A(B)}}$.
\begin{figure}[tbph]
\centering
\includegraphics[width=0.5\textwidth]{Fig10.pdf}
\caption{Heteroribbons with edges and interfaces for (a) zigzag, and (b) armchair configurations. Metals Mo and W are shown in aqua and red colors, respectively. Chalcogens S or Se are shown in dark yellow. The zigzag (or armchair) heteroribbon is finite along the vertical (horizontal) direction, while periodic boundary conditions are used in the other direction, as indicated by the triple black dots. The interface is shown as a blue dotted line. The zigzag ribbon in (a) has two different edges, the $S$-edge (outermost-atom is a chalcogen) and the $M$-edge (outermost-atom is a transition metal). Reprinted with permission from \cite{AvalosOvando2018Arxiv}. Copyright 2019 by the American Physical Society, Physical Review B.}
\label{AVALOS2018Fig1}
\end{figure}
The SOC in each material is approximated by the metal onsite contributions, $H^{\mathrm{A(B)}}_{\mathrm{SOC}}=\lambda^{\mathrm{A(B)}} L_{z}S_{z}$, where $L_{z}$ and $S_{z}$ are the $z$-components of the orbital and spin operators, respectively, and $\lambda^{\mathrm{A(B)}}$ is the SOC strength for each material. This results in on-site orbital mixings, $\varepsilon_{d_{xy}d_{x^2-y^2},\uparrow}=\varepsilon_{d_{x^2-y^2}d_{xy},\downarrow}=i\lambda^{\mathrm{A(B)}} = -\varepsilon_{d_{xy}d_{x^2-y^2},\downarrow}=-\varepsilon_{d_{x^2-y^2}d_{xy},\uparrow}$, that reproduce well the spin-split valence bands in the 2D crystal and give rise to strong spin-valley locking \cite{Liu2013}.
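To make the structure of \ref{lattice1}--\ref{lattice2} concrete, the sketch below assembles a momentum-space version of the pristine three-orbital Hamiltonian, including the onsite SOC mixing. It is only a schematic illustration: the numerical parameters are placeholders rather than the fitted values of \cite{Liu2013}, and a single hopping matrix is reused for all bond directions instead of applying the proper $C_3$ rotations.

```python
import numpy as np

# Schematic sketch of the pristine 3OTB Bloch Hamiltonian
# H_o + H_t + H_SOC in the basis {d_z2, d_xy, d_x2-y2}.
# CAUTION: all numbers are illustrative placeholders, not the fitted
# 3OTB parameters; the full model also rotates t^(R_j) for each of
# the six bond directions of the triangular lattice.
a = 1.0                                    # lattice constant (arb. units)
eps = np.diag([1.0, 2.0, 2.0])             # onsite energies
t1 = np.array([[-0.18,  0.40,  0.27],      # one NN hopping matrix (real)
               [-0.40,  0.22,  0.34],
               [ 0.27, -0.34,  0.06]])
lam = 0.08                                 # SOC strength lambda

# three of the six NN vectors; the opposite bonds enter via t1^T
angles = np.array([0.0, np.pi / 3, 2 * np.pi / 3])
Rs = a * np.c_[np.cos(angles), np.sin(angles)]

def h_k(k, s):
    """Bloch Hamiltonian at momentum k for spin s = +1 (up) / -1 (down)."""
    H = eps.astype(complex)
    # onsite SOC lambda*Lz*Sz: mixes d_xy and d_x2-y2 with opposite
    # sign for the two spin projections
    H[1, 2] += 1j * s * lam
    H[2, 1] -= 1j * s * lam
    for R in Rs:                           # hoppings to R and -R
        phase = np.exp(1j * np.dot(k, R))
        H += phase * t1 + np.conj(phase) * t1.T
    return H

E_up = np.linalg.eigvalsh(h_k(np.array([0.3, 0.0]), +1))
E_dn = np.linalg.eigvalsh(h_k(np.array([0.3, 0.0]), -1))
print("spin up bands:  ", np.round(E_up, 3))
print("spin down bands:", np.round(E_dn, 3))
```

Diagonalizing `h_k` along a $k$-path yields three spin-split bands per spin; the SOC term is what produces the valence-band spin splitting and the spin-valley locking mentioned above.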
The interface is also described by nearest neighbor hopping integrals, and needs to take into account two important issues: the band alignment (or offsets) between materials $V_{\mathrm{A-B}}$, and re-scaling of the hoppings across the interface. The band alignment is taken into account through relative shifts of the onsite terms, given by $\varepsilon^{\mathrm{B}'}_{\alpha\alpha',s}=\varepsilon^{\mathrm{B}}_{\alpha\alpha',s}+V_{\mathrm{A-B}}$. These offsets can be taken from DFT results \cite{Kang2013,Guo2016,OngunOzcelik2016}, resulting in either type-I or type-II band alignments, as described above.
The hopping integrals can be written as an arithmetic \cite{AvalosOvando2018Arxiv} or geometric average \cite{Zhang2016SciRep}, with no qualitative difference in results. With the arithmetic average, the Hamiltonian is
\begin{equation}\label{df}
H_{t}^{\mathrm{A-B}}=\sum_{\gamma,\textbf{a}_j} \sum_{s,\alpha,\alpha'}
\delta\left[t_{\alpha\alpha'}^{(\textbf{a}_{j})\mathrm{A}}+t_{\alpha\alpha'}^{(\textbf{a}_{j})\mathrm{B}}\right]d_{\alpha,\gamma,s}^{\dagger}d_{\alpha',\gamma+\textbf{a}_{j},s}
+\mathrm{H.c.},
\end{equation}
where $\gamma$ labels the atoms on both sides of the interface. The scaling factor $\delta$ describes the compositional symmetry as well as possible relaxation effects at the interface ($\delta=0.1$ is found to be consistent with experiments \cite{AvalosOvando2018Arxiv}). The geometric average, with similar results for the state localization at the interface, uses $\sqrt[]{t_{\alpha\alpha'}^{(\mathbf{a}_{j})\mathrm{A}} t_{\alpha\alpha'}^{(\mathbf{a}_{j})\mathrm{B}}}$ as the effective interface hoppings
\cite{Zhang2016SciRep}.
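As a minimal numerical illustration, the snippet below compares the arithmetic form of \ref{df} with the geometric average. The hopping matrices are hypothetical placeholders, and the sign convention adopted for the geometric average of negative hoppings is our own assumption.

```python
import numpy as np

# Hypothetical nearest-neighbor hopping matrices for TMDs A and B
# (placeholder values, not fitted parameters).
tA = np.array([[-0.18,  0.40],
               [-0.40,  0.22]])
tB = np.array([[-0.15,  0.35],
               [-0.35,  0.25]])

delta = 0.1   # interface scaling factor reported as consistent with experiment

# arithmetic prescription: delta * (tA + tB), as in the HS Hamiltonian
t_arith = delta * (tA + tB)

# geometric prescription: elementwise sqrt(tA * tB); we reattach the
# sign of tA by assumption, since the square root drops it
t_geom = np.sign(tA) * np.sqrt(np.abs(tA * tB))

print("arithmetic average:\n", np.round(t_arith, 4))
print("geometric average:\n", np.round(t_geom, 4))
```

Both prescriptions preserve the orbital structure of the pristine hoppings; they differ only in how the two materials' amplitudes are combined, which is why the resulting interfacial state localization is qualitatively the same.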
Zhang \emph{et al.} also consider hopping reconstruction at the ribbon edges, resetting the hopping integrals only for atoms on the borders, since they are connected to fewer atoms than those in the interior. The values used for these edge hopping integrals are inversely proportional to the bond lengths \cite{Zhang2016SciRep}.
The band structure of joined nanoribbons in an HS displays bands lying within the bulk gap. These midgap states are located at either the ribbon edges or at the interface of the system. For zigzag HSs, all these states cross the gap, as shown in figure\ \ref{AVALOS2018Fig3}. One can identify two interfacial midgap bands, one closer to the conduction band and another to the valence band, with weight in all three orbitals ($d_{z^2,\mathrm{s}}$, $d_{xy,\mathrm{s}}$ and $d_{x^2-y^2,\mathrm{s}}$). The hybridization across the two materials produces a gap and mixing between the interfacial branches. This gap is proportional to the hybridization parameter $\delta$, as the chalcogens of one TMD hybridize with the metal atoms of the other TMD.
The interfacial zigzag states can be described analytically by \ref{Avalos20181EffectiveHamiltonianWithPauli}
\begin{eqnarray}\label{Avalos20181EffectiveHamiltonianWithPauli}
H^{\mathrm{interface}}_{\mathrm{eff}} &=& \frac{(1-\sigma_{z})}{2} \sum_{n=0}^{\mathcal{N}}\left[t^{(n)}\cos (n k) + s_{z} \ t^{(n)}_{SO}\sin (n k)\right] + \nonumber\\
&&\frac{(1+\sigma_{z})}{2} \sum_{n=0}^{\mathcal{N}}\left[\gamma^{(n)}\cos (n k) + s_{z} \ \gamma^{(n)}_{SO}\sin (n k)\right],
\end{eqnarray}
where $\sigma_{z}$ is the Pauli matrix operating in the two-function basis $\left\{ |\phi_{c}\rangle , |\phi_{v}\rangle \right\}$, and $s_{z}$ is the corresponding spin operator. The constants are the $n$th-nearest neighbor hoppings $t^{(n)}$ ($\gamma^{(n)}$) and
spin-orbit couplings $t^{(n)}_{SO}$ ($\gamma^{(n)}_{SO}$) for the lower (upper) interfacial band in the gap, respectively. These are obtained by numerical fitting to the 3OTB results, with excellent agreement, as shown in figure \ref{AVALOS2018Fig3} \cite{AvalosOvando2018Arxiv}.
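The dispersions implied by \ref{Avalos20181EffectiveHamiltonianWithPauli} are plain Fourier series in $k$: in the $\sigma_z$ eigenbasis the two interfacial bands decouple, and each spin $s_z=\pm1$ acquires an antisymmetric $\sin(nk)$ contribution. The sketch below evaluates these series with invented placeholder coefficients, not the fitted values of \cite{AvalosOvando2018Arxiv}.

```python
import numpy as np

# Dispersion of the two interfacial bands from the effective model:
#   E_band,s(k) = sum_n [ c_n cos(n k) + s * c_n^SO sin(n k) ]
# with c = t for the lower band and c = gamma for the upper band.
# All coefficients below are hypothetical placeholders.
t      = np.array([0.90,  0.050, -0.020])   # t^(n),      n = 0, 1, 2
t_so   = np.array([0.00,  0.010,  0.005])   # t_SO^(n)
gam    = np.array([1.40, -0.040,  0.010])   # gamma^(n)
gam_so = np.array([0.00, -0.008,  0.003])   # gamma_SO^(n)

def band(k, c, c_so, s=+1):
    """Energy of one interfacial band at momentum k for spin s = +-1."""
    n = np.arange(len(c))
    return float(np.sum(c * np.cos(n * k) + s * c_so * np.sin(n * k)))

ks = np.linspace(-np.pi, np.pi, 7)
print("lower band (spin up):", np.round([band(k, t, t_so, +1) for k in ks], 3))
print("upper band (spin up):", np.round([band(k, gam, gam_so, +1) for k in ks], 3))
# the sin terms make E_s(k) != E_s(-k) for a given spin (spin-momentum
# locking), while Kramers symmetry E_up(k) = E_down(-k) still holds
```

Fitting $t^{(n)}$, $\gamma^{(n)}$ and their SOC counterparts to the numerical 3OTB interfacial bands amounts to a standard Fourier decomposition of each band over the 1D BZ.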
\begin{figure}[tbph]
\centering
\includegraphics[width=0.45\textwidth]{Fig11.pdf}\\
\caption{Fitted bands for the zigzag MoS$_2$-WS$_2$ heteroribbon: The fits of \ref{Avalos20181EffectiveHamiltonianWithPauli} are shown as dashed lines, while symbols indicate the numerical 3OTB bands. We highlight interfacial zigzag bands (blue hexagons), as well as zigzag pristine Mo (green squares) and W (red triangles) edge bands, as shown in figure \ref{AVALOS2018Fig1}. Only spin up states are shown. Reprinted with permission from \cite{AvalosOvando2018Arxiv}. Copyright 2019 by the American Physical Society, Physical Review B.}
\label{AVALOS2018Fig3}
\end{figure}
For armchair HSs, the electronic structure is fully semiconducting, with no states crossing the bulk bandgap. The type-II alignment allows easy identification of two interfacial bands in the gap, one for each material, but displaced to lower energy with respect to the corresponding pristine edge band. The interfacial gap also scales with $\delta$, as in the zigzag case, except that for small $\delta$ the gap does not close \cite{AvalosOvando2018Arxiv}.
The 3OTB model was first used to describe lateral MoS$_2$-WS$_2$ HSs by Zhang \emph{et al.} \cite{Zhang2016SciRep}, together with transport quantities, as described in section \ref{subsubsec:Transport}. They built an HS of laterally alternating MoS$_2$ and WS$_2$ slabs, and considered different hopping strengths at the edges of the ribbon to include reconstruction effects.
They find that the HS has a high-performance thermoelectric response, as the interfaces reduce the thermal conductivity.
Recently, the model was used to describe zigzag and armchair MoS$_2$-WS$_2$ and MoSe$_2$-WSe$_2$ interfaces, showing 1D confinement of states at the interface. Furthermore, it was shown that the interface can act as an unusual effective 1D host when magnetic impurities are hybridized to it \cite{AvalosOvando2018Arxiv}. Driven by the complex spin and orbital texture of the interfacial states, anisotropic and sizable non-collinear (Dzyaloshinskii-Moriya) effective exchange interactions arise between the impurities. These and other behaviors are discussed further in section\ \ref{subsec:1DNovelPlatform}.
\subsubsection{Incommensurability and strain}
\label{subsubsec:IncommensurabilityAndStrain}
The properties of commensurate TMD HSs described previously can be strongly affected when strain is present. Usually, when TMD lateral HSs with different chalcogen atoms, such as MoS$_{2}$-WSe$_{2}$ or MoS$_{2}$-MoTe$_{2}$, are formed, the lattice constant of the heavier-chalcogen system is much larger, leading to sizable intrinsic strain at the interface. This mismatch has been experimentally measured to be as large as 4\% \cite{Duan2014NatNano,Xie2018ParkGroup}, introducing strain and requiring consideration of lattice relaxation effects. In the following, we review some of these effects.
\emph{Band alignment.- }
Guo \emph{et al.} \cite{Guo2016} address band alignment in a lateral HS between TMDs with different chalcogens, MoS$_{2}$-WSe$_{2}$, which has a 3.7\% lattice mismatch. This structure has type-II band alignment, with the states of WSe$_2$ lying higher in energy than those of MoS$_2$, and the charge neutrality point lying close to midgap. The comprehensive spin-polarized DFT study by \"Oz\ifmmode \mbox{\c{c}}\else \c{c}\fi{}elik \emph{et al.} \cite{OngunOzcelik2016} confirmed that in group-VIB TMD HSs the band alignment is mostly type-II, with only a few combinations of TMDs being type-I once strain is considered.
Early studies by Wang \emph{et al.}\ \cite{Wang2013} considered structural and electronic properties of incommensurate MoS$_{2}$-MoTe$_{2}$ HSs, using Quantum ESPRESSO with PBE-GGA for exchange-correlation. This HS is interesting because of the different bandgaps of the pristine systems: MoS$_{2}$ has the largest and MoTe$_{2}$ the smallest. Due to the large difference in lattice constants (about 10\%), they consider a 10 MoS$_{2}$-9 MoTe$_{2}$ supercell. This HS shows metallic behavior, originating from atoms displaced at the interface. A similar study of MoS$_2$-WS$_2$ zigzag and armchair interfaces found a strain-driven type-II to type-I band alignment transition when tensile strain is applied to the WS$_2$ side, as well as localized in-gap states in the presence of grain boundaries \cite{Kang2015}.
The MoS$_{2}$-WSe$_{2}$-MoS$_{2}$ quantum well HSs studied by Wei \emph{et al.} \cite{Wei2015SciRep} show that their electronic properties can be engineered by adjusting the strain. This leads to different bandgaps and to an indirect-to-direct bandgap transition as the number of unit cells in each HS changes. Typical results for large HSs are shown in figure\ \ref{Wei2015PCCPFIGURE2}(c)--(d). The MoSe$_{2}$-WS$_{2}$ armchair HS is semiconducting with a direct gap at $\Gamma$, but the zigzag HS is indirect (with the VBM at $\Gamma$ and the CBM at $A$); the difference is attributed to the intrinsic electric field across the interface and to lattice mismatch effects. For zigzag MoSe$_{2}$-WS$_{2}$ HSs,
the projected band-decomposed charge density shows that both the VBM and CBM are confined to the Mo side, suggesting a type-I band alignment with a smaller gap than in pristine MoSe$_{2}$, due to the presence of the built-in dipole at the interface.
Defects such as S-vacancies at incommensurate MoS$_{2}$-WSe$_{2}$ interfaces have also been recently addressed, showing that even non-pristine interfaces display sharp electrostatic potential profile changes, as in commensurate HSs \cite{Cao2017}.
{\em Straintronics} in lateral TMD HSs has also been studied. Strain changes atomic bonding, resulting in bandgap changes and/or an indirect-direct bandgap crossover \cite{Kang2015,Wei2017,Lee2017,Mu2018MRExpress}. Electronic effects driven by strain, such as the band alignment transition under tensile strain from type-II to type-I in a MoS$_{2}$-WS$_{2}$ HS, were characterized in four HSs \cite{Wei2017},
as depicted in figure\ \ref{Wei2017PCCPFIGURE3}.
The band structure calculations find that while cases (b), (c), and (d) in figure \ref{Wei2017PCCPFIGURE3} show a direct bandgap, case (a) does not. The projections of the wave function contributions of each material do not change considerably beyond four atomic lines ($\sim$22~\AA),
signaling strongly localized interfacial behavior. The contributions of states to the VBM and CBM in cases (a) and (b) show type-I alignment, while
cases (c) and (d) show clear type-II alignment, with the VBM localized on WSe$_2$ and the CBM on MoS$_2$. The relative alignments are attributed in part to SOC effects, since the VBM at the $K$-valleys is shifted upwards, overcoming the tensile strain-induced shift of the VBM at the $\Gamma$-point. These results highlight that intrinsic strain at 1D interfaces gives rise to different electronic properties. Interestingly, systems with simultaneous strain show direct bandgaps.
\begin{figure}[tbph]
\centering
\includegraphics[width=0.45\textwidth]{Fig12.pdf}
\caption{Band structures for the HSs as shown, with strain applied to the TMD inside the square while the other remains unaffected. Tensile (compressive) strain is applied to the Se (S) based TMD. Reprinted with permission from \cite{Wei2017}. Copyright 2017 by the Royal Society of Chemistry, Physical Chemistry Chemical Physics.}
\label{Wei2017PCCPFIGURE3}
\end{figure}
Strain can also affect the solar power conversion efficiency in commensurate quantum well HSs. Lee \emph{et al.} found that type-II band alignment can be preserved with up to 12\% uniaxial strain \cite{Lee2017}. Straintronics can also manifest in more exotic HSs, such as WS$_{2}$-WSe$_{2}$-MoS$_{2}$ quantum wells \cite{Mu2018MRExpress}. The bandgap can be continuously tuned by changing the size of the central quantum well component. Lattice mismatch induces strain, direct-indirect bandgap transitions, and differences in band alignment. They used \emph{ab initio} molecular dynamics to verify the thermodynamic stability of the interfaces, finding that room temperature does not break bonds and that the hexagonal structure holds, supporting interface stability \cite{Mu2018MRExpress}. This finding is also reflected in the phonon dispersion curves, which show only branches with positive frequencies.
An electrostatic potential difference associated with the built-in electric field is seen at the interfaces. The absence of a sharp drop in the macroscopic average also indicates strong hybridization at the interfaces.
\emph{Tight binding.- }
Tight-binding models have also recently addressed the effects of incommensurability, studying WTe$_2$-MoS$_2$ and MoTe$_2$-MoS$_2$ HSs \cite{Choukroun2018Arxiv}, using an 11-orbital basis \cite{Cappelluti2013,Roldan2014,Ridolfi2015,Fang2018}. Choukroun \emph{et al.} used this approach to model tunnel field-effect transistors based on in-plane heterojunctions, and studied quantum transport with the non-equilibrium Green's function (NEGF) formalism. The original 11-orbital TB Hamiltonian doubles in size, as both TMDs must be considered in the transport simulation cell. The model uses all five metal $d$-orbitals, as well as the three $p$-orbitals for each of the chalcogen layers. It describes first-neighbor hoppings M-M, M-X, and X-X, and second-neighbor X-X hoppings, and considers strain between the different TMD lattices. The coupling Hamiltonian between the two TMD lattices is taken to be the arithmetic average of the hoppings on both sides of the interface (see equation \ref{df}),
\begin{equation}\label{Choukroun2018ArxivHopping}
T_{n+1,m}^{A/B}=\left(T_{n+1,m}^{A}+T_{n+1,m}^{B}\right)/2,
\end{equation}
where A (B) labels the TMD on each side of the interface. This is analogous to the approach in \cite{AvalosOvando2018Arxiv}.
\emph{Strain tensor.- }
Recently, the 2D strain tensor ${\epsilon}$ in lateral WSe$_2$-MoS$_2$ HSs has been characterized as \cite{Zhang2018strain}
\begin{equation}
{\epsilon} = \left[
\begin{array}{cc}
\epsilon_{aa} & \epsilon_{ab} \\
\epsilon_{ba} & \epsilon_{bb} \\
\end{array}
\right],
\end{equation}
in terms of appropriately defined directions $a$ and $b$. In a strainless case, such as an HS with the same chalcogen, the vectors $\textbf{a}$ and $\textbf{b}$ can be defined in terms of a rectangular unit cell, where $\textbf{a}$ is parallel to the zigzag interface and $\textbf{b}$ is along the perpendicular armchair direction. In the presence of shear strain on the MoS$_2$ side (smaller lattice constant), the unit cell becomes a trapezoid, with moir\'e pattern spacings $\lambda_a$ and $\lambda_b$ along the $\textbf{a}$ and $\textbf{b}$ directions given by
\begin{eqnarray}
\lambda_a&=&a'_{\mathrm{Mo}}/\delta_a,\,\,\,\,\,\,\,\,\,\,\mathrm{with}\,\,\,\,\,\delta_a=|a_\mathrm{W}-a'_\mathrm{Mo}|/a_\mathrm{W}, \label{strain1} \\
\lambda_b&=&b'_{\mathrm{Mo}}/\delta_b,\,\,\,\,\,\,\,\,\,\,\mathrm{with}\,\,\,\,\,\delta_b=|b_\mathrm{W}-b'_\mathrm{Mo}|/b_\mathrm{W}, \label{strain2}
\end{eqnarray}
where unprimed (primed) values correspond to unstrained (strained) lattices, and the $\delta$'s are the lattice mismatches. See figure\ \ref{FigInterfaceStrain}(a)--(b) for a schematic representation of these quantities. The shear angle $\beta$ of the moir\'e pattern is related to the atomic lattice shear angle $\alpha$ by
\begin{equation}\label{straintensor2}
\tan{\beta}=A_\beta\tan{\alpha},\,\,\,\,\,\,\,\,\,\,\mathrm{with}\,\,\,\,\,A_\beta=1/\delta_a.
\end{equation}
This approach allows one to relate the moir\'e pattern spacing obtained with STM to the atomic lattice spacing, allowing the former to act as a \emph{magnifying glass} with amplification factor $A_\beta$, inversely proportional to the mismatch: a tensile (compressive) strain on the Mo side (W side) reduces (increases) the mismatch and increases (reduces) the moir\'e pattern periodicity. Experimental data for the $\lambda$'s then allow the determination of $\epsilon$: for a lateral WSe$_2$-MoS$_2$ interface, it is found that $\epsilon_{aa}=1.17\%$, $\epsilon_{bb}=-0.26\%$, and $\epsilon_{ab}=\epsilon_{ba}=0.69\%$ \cite{Zhang2018strain}. These parameters could be introduced into tight-binding descriptions of atomic lattices to realistically account for strain distributions around interfaces.
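The \emph{magnifying glass} relations can be evaluated directly; in the sketch below the lattice constants are approximate literature values, chosen only for illustration.

```python
import math

# Moire magnification sketch: lattice constants (angstrom) are
# approximate literature values, used here only for illustration.
a_W  = 3.28    # WSe2 (unstrained) lattice constant
a_Mo = 3.16    # assumed MoS2 lattice constant near the interface

delta_a = abs(a_W - a_Mo) / a_W     # lattice mismatch along a
lam_a   = a_Mo / delta_a            # moire pattern spacing lambda_a
A_beta  = 1.0 / delta_a             # shear-angle amplification factor

alpha = math.radians(0.1)           # small atomic-lattice shear angle
beta  = math.atan(A_beta * math.tan(alpha))

print(f"mismatch delta_a = {delta_a:.2%}")
print(f"moire spacing lambda_a = {lam_a:.1f} angstrom")
print(f"amplification A_beta = {A_beta:.1f}")
print(f"alpha = 0.10 deg  ->  beta = {math.degrees(beta):.2f} deg")
```

With these assumed constants, a few-percent atomic mismatch is amplified into a moir\'e period of tens of \AA{}ngstr\"oms, which is what makes the strain state measurable with STM.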
\emph{Coarse-grained simulations.- }
Coherent WSe$_2$-WS$_2$ HSs have been recently grown \cite{Xie2018ParkGroup}, where the WS$_2$ (WSe$_2$) lattice constant is stretched (compressed) to achieve an integrated superlattice with almost-no-dislocations, as shown in figure\ \ref{FigInterfaceStrain}(c)--(e).
Lattice constant measurements along the directions parallel ($a_{\parallel}$) and perpendicular ($a_{\perp}$) to the interface allowed estimates of the corresponding lattice mismatches $\delta_{\parallel}=0$ and $\delta_{\perp}=1.2\%$. A coarse-grained force-field model was used to describe this system; the model needs to include nearest-neighbor bond and angular interactions to accurately reproduce the experimental results. The energy of the HS is given by the sum of harmonic bond and angular potentials,
\begin{equation}
E_{latt}=\frac{1}{2} \sum_{\rm bonds} k_b (r-r_0)^2 + \frac{1}{2} \sum_{\rm angles} k_{\theta} (\theta-\theta_0)^2.
\end{equation}
After an initial configuration of atoms is defined, following the scheme presented in figure\ \ref{XiePark}(a)-(b), $E_{latt}$ is minimized using second-order damped dynamics until convergence is achieved. The bond parameters $k_b$ and $r_0$ are obtained from DFT-calculated 2D Young's moduli for WS$_2$ ($Y_{2D}=140$ N/m) and WSe$_2$ ($Y_{2D}=116$ N/m). Although angular interactions are important, as they reflect the shear stiffness modulus, the corresponding moduli for TMDs are yet unknown; however, it is found that $k_{\theta}=20$ rad$^{2}$ yields reasonable results. The simulations including angular coupling find $\delta_{\parallel} =0$ and $\delta_{\perp}=1.3\%$, as shown in figure\ \ref{XiePark}(c), in excellent agreement with the aforementioned experimental values.
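The essence of this relaxation can be sketched with a toy model: a 1D chain of harmonic bonds with two equilibrium lengths (mimicking the two materials), relaxed by overdamped gradient descent as a stand-in for the second-order damped dynamics. All parameters below are illustrative, not the fitted $k_b$, $r_0$, $k_\theta$ of \cite{Xie2018ParkGroup}, and the angular terms are omitted.

```python
import numpy as np

# Toy coherent "WS2-WSe2" chain: n sites, harmonic bonds only.
n = 20
r0 = np.where(np.arange(n - 1) < n // 2, 1.00, 1.02)  # per-bond equilibrium length
kb = 10.0                                             # common bond force constant

# Start from a uniformly strained chain; the first atom is held fixed.
x = np.arange(n) * 1.01

def energy(x):
    d = np.diff(x)
    return 0.5 * kb * np.sum((d - r0) ** 2)

for _ in range(20000):          # overdamped relaxation until convergence
    d = np.diff(x)
    bond = kb * (d - r0)        # tension in each bond
    f = np.zeros(n)
    f[:-1] += bond              # force on the left atom of each bond
    f[1:] -= bond               # reaction on the right atom
    x[1:] += 0.01 * f[1:]
```

After relaxation each half of the chain recovers its own equilibrium bond length, the 1D analogue of the coherent, dislocation-free superlattice.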
\begin{figure}[tbph]
\centering
\includegraphics[width=0.45\textwidth]{Fig13.pdf}
\caption{Coarse-grained simulations for a WS$_2$-WSe$_2$ coherent HS. (a) Atom types (WS$_2$ in red, WSe$_2$ in blue), the four possible bonds between atoms (growth directionality matters), and angular terms. $k_b$ and $r_0$ are the bond force constant and equilibrium distance; similarly, $k_{\theta}$ and $\theta_0$ are the angular force constant and equilibrium angle. (b) Coherent HS with its growth directions shown as green arrows, labeling three possible cases of surrounding neighbors shown on the bottom. (c) Simulation results for $\delta_{\parallel}$ (orange) and $\delta_{\perp}$ (green). Reprinted with permission from \cite{Xie2018ParkGroup}. Copyright 2018 The American Association for the Advancement of Science, Science.}
\label{XiePark}
\end{figure}
\emph{Molecular dynamics simulations.- }
Strain effects have also been recently addressed with models based on classical potentials. Jiang \emph{et al.} \cite{Jiang2018misfit} studied misfit strain-induced buckling at different interfaces in TMD lateral HSs, using molecular dynamics calculations. As more experiments rapidly appear, they highlight the need for theoretical methods (other than DFT) able to address properties such as misfit strain, thermal transport, or interface sharpness in systems of larger sizes. In their work they used 50,000-atom simulations and Stillinger-Weber (SW) potentials. The calculated strain distribution along the interface shows that misfit strain can induce significant buckling in various TMDs, in patterns consistent with experiments.
The incommensurate lattices cause compressive stress in the TMD with the larger lattice constant, and a buckling instability may occur.
The SW potential is a nonlinear potential given by two- ($V_2$) and three-body ($V_3$) interaction terms as
\begin{eqnarray}
V_2(r_{ij})&=&\epsilon A(B \sigma^p r_{ij}^{-p}-\sigma^q r_{ij}^{-q})e^{[\sigma(r_{ij}-a\sigma)^{-1}]}, \label{Jiang2018misfitSWpotentialEq1} \\
V_3(\vec{r}_i,\vec{r}_j,\vec{r}_k)&=&\epsilon\lambda e^{[\lambda\sigma(r_{ij}-a\sigma)^{-1}+\lambda\sigma(r_{jk}-a\sigma)^{-1}]} \nonumber \\
&&\times (\cos\theta_{jik}-\cos\theta_0)^2, \label{Jiang2018misfitSWpotentialEq2}
\end{eqnarray}
where $r_{ij}$ is the distance between atoms $i$ and $j$, $\theta_{jik}$ is the angle between the bonds $r_{ij}$ and $r_{jk}$, $\theta_0$ is the equilibrium angle, and the parameters are naturally TMD-dependent. The structure is first relaxed, then thermalized at 4.2 K, using the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) \cite{LAMMPS}. Several HSs were studied, including MoS$_2$-WSe$_2$, MoS$_2$-WTe$_2$, MoS$_2$-MoSe$_2$, and MoS$_2$-MoTe$_2$, all exhibiting strain distributions consistent with available experiments \cite{Zhang2018strain,Xie2018ParkGroup}. The TMD with the smaller lattice constant shows only small tensile strain, and the edges of the sample not interfaced with another TMD also present compression due to bending of the interface. The TMD with the larger lattice constant, however, shows significant compressive strain at the interface, and small tensile strain at the edges. The effect is also seen in triangular lateral heterostructures. They find that both tensile and compressive strain decay exponentially as $\propto e^{-x/\xi}$, with a characteristic length $\xi \simeq 15$ \AA.
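Such a decay length can be extracted from a simulated strain profile with a simple log-linear fit. The sketch below uses synthetic data generated with the quoted $\xi \simeq 15$ \AA; the strain amplitude is an arbitrary assumption.

```python
import numpy as np

# Synthetic strain profile eps(x) = eps0 * exp(-x/xi) away from the interface.
xi_true, eps0 = 15.0, 0.012          # angstrom, dimensionless strain (assumed)
x = np.linspace(0.0, 60.0, 30)       # distance from the interface (angstrom)
eps = eps0 * np.exp(-x / xi_true)

# A linear fit of log(eps) vs x recovers the decay length as -1/slope
# and the amplitude as exp(intercept).
slope, intercept = np.polyfit(x, np.log(eps), 1)
xi_fit = -1.0 / slope
```

In practice one would fit the strain extracted from the relaxed atomic positions rather than synthetic data, but the extraction step is the same.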
\subsection{1D novel platform}
\label{subsec:1DNovelPlatform}
Experiments have shown enormous progress in achieving nearly-clean 1D interfaces between TMDs, and theoretical calculations have confirmed the remarkable stability and interesting electronic structure of lateral HSs. An increasing number of experimental and theoretical efforts have started exploring effective uses for these lateral interfaces, in areas as diverse as optics, magnetism, and transport.
Section\ \ref{subsubsec:Optics} reviews studies in optics, which have addressed excitonic effects around the interface \cite{Yang2017,Lau2018ArxivExcitons}, as well as wave guiding and spin-valley selection effects \cite{Ghadiri2018JAP}. A combined low-energy continuum description and tight-binding approach has found that the 1D HS interface exciton has a binding energy similar to that of the 2D excitons in pristine monolayer TMDs, with a somewhat larger effective radius.
This finding suggests effective optoelectronic applications involving 0D quantum-dot confinement of excitons \cite{Lau2018ArxivExcitons}, associated with the formation of new W-S chemical bonds that favor exciton recombination \cite{Yang2017}.
The interface has moreover been recently proposed to serve as 1D-like host for long-range non-collinear magnetic interactions when magnetic impurities are hybridized to the interface, finding large tunability and stable conditions for the interaction to occur \cite{AvalosOvando2018Arxiv}. This is further described in section\ \ref{subsubsec:MagneticProperties}.
Transport effects have been studied with DFT, tight binding \cite{Zhang2016SciRep,Choukroun2018Arxiv}, and effective mass approximation \cite{Mishra2018oneDimensional} approaches. A 3-orbital tight-binding model has been used to describe MoS$_2$-WS$_2$, found to exhibit efficient thermoelectric characteristics depending on the number and width of the lateral HS segments \cite{Zhang2016SciRep}. An 11-orbital TB model has been used to model MoTe$_2$-MoS$_2$, found to be a possible system for implementing high-performance tunnel effect transistors \cite{Choukroun2018Arxiv}. The effective mass approximation, which describes electrons at the K points, has been used to study transport properties, finding a one-dimensional spin-polarized channel at the interface \cite{Mishra2018oneDimensional}.
Similarly, theoretical transport studies have found that HSs can be used as gateless electron waveguides and spin valley filters/splitters \cite{Ghadiri2018JAP}.
In section\ \ref{subsubsec:PhaseInterfacesTHEORY} we review studies of atomically clean interfaces between phases of the same TMD. Finally, in section\ \ref{subsubsec:WithOtherMaterials} we briefly summarize lateral HSs proposed between group VIB semiconducting TMDs and metallic TMDs, and possible uses for them.
\subsubsection{Optical effects}
\label{subsubsec:Optics}
The study of excitons (bound states of an electron and a hole) in TMDs has been a topic of great interest from the outset of TMD monolayer studies, as exciton properties are essential for determining the optical response. Attention has focused on pristine monolayers and vertical heterostructures. In the latter, the exciton may be spatially separated, with lower binding energy than in pristine TMDs, providing long exciton lifetimes and tunability. More recently, however, lateral HS excitons have been seen in experiments \cite{Huang2014NatMat,Gong2014NatMat,Duan2014NatNano}, promising exciting properties due to the inherent 1D interface of the planar HS \cite{Wei2015,Wei2015SciRep,Mu2018MRExpress,Yang2017,Lau2018ArxivExcitons}.
Early DFT studies suggested excitonic localization on either side of the interface, based on the projected density of states of the band structure, and reflecting the associated type-II band alignment. This alignment would allow hole and electron to be located on different sides of the interface, favoring the selective formation of the exciton right at the interface \cite{Wei2015,Wei2015SciRep,Mu2018MRExpress,Yang2017}. A photoexcitation charge transfer study, using time-domain DFT along with nonadiabatic molecular dynamics, was carried out in lateral (and vertical) MoS$_{2}$-WS$_{2}$ HSs \cite{Yang2017}. They use VASP, with PBE-GGA, along with Grimme DFT-D3 for the molecular dynamics simulations. In the lateral HS case, an exciton-like state is seen to be localized at the interface due to the Coulomb interaction, with an exciton recombination rate 3 times faster than in the vertical HS. The coupled electron-hole pair at the interface enhances the electron-phonon coupling, due to the formation of new W-S chemical bonds.
Lau \emph{et al.} \cite{Lau2018ArxivExcitons} have recently theoretically studied excitonic states at the 1D armchair interface between two TMDs with type-II band alignment. They considered one interface (WSe$_{2}$-MoSe$_{2}$) and two interfaces (WSe$_{2}$-MoSe$_{2}$-WSe$_{2}$), as well as a triangular MoSe$_2$ area enclosed by WSe$_2$, i.e., a heterotriangular quantum dot (QD) with a surrounding interface. They analyzed the exciton binding energy $E_b$, effective radius $a_b$, optical dipole $D$ (related to the exciton lifetime), and intervalley coupling strength $J$, using two approaches for solving the exciton problem. They find that the exciton radius increases with band offset, becoming much larger than that of the 2D TMD exciton, while the binding energy does not decrease significantly.
The optical transition dipole decreases with band offset, up to one order of magnitude smaller than in pristine 2D TMD. Excitons in triangular QD structures show confinement of one carrier inside the QD, while the other remains close but in the second material, separated by the interface. They find this effect is tunable, with optical selection rules depending on the QD size.
The exciton is studied in the effective mass approximation with
\begin{equation}
\label{Lau2018Hamiltonian1}
H=-\frac{\hbar^2}{2m_e}\nabla^{2}_{\textbf{r}_e}-\frac{\hbar^2}{2m_h}\nabla^{2}_{\textbf{r}_h}+V_C(|\textbf{r}_e-\textbf{r}_h|)+V_I(\textbf{r}_e,\textbf{r}_h),
\end{equation}
where $m_e$ $(m_h)$ is the electron (hole) effective mass, $\textbf{r}_e$ $(\textbf{r}_h)$ its real-space position, and $V_I(\textbf{r}_e,\textbf{r}_h)$ is the interface potential defined as $V_I(\textbf{r}_e,\textbf{r}_h)=V_e(\textbf{r}_e)+V_h(\textbf{r}_h)$, with $V_e$ $(V_h)$ the electron (hole) lattice potential, which includes the band offsets at the interface. The electron-hole Coulomb interaction $V_C(|\textbf{r}_e-\textbf{r}_h|)$ is given by the Keldysh 2D potential \cite{Keldysh1979}
\begin{equation}
\label{Lau2018Coulomb2}
V_C(\textbf{r})=-\frac{e^2\pi}{2r_0}\left(H_0\left(\frac{r}{r_0}\right)-Y_0\left(\frac{r}{r_0}\right)\right),
\end{equation}
with $H_0$ the Struve function and $Y_0$ the Bessel function of the second kind.
The interface potential favors the electron and hole staying on opposite sides of the interface, while the attractive Coulomb interaction opposes that effect.
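The Keldysh potential of equation \ref{Lau2018Coulomb2} is straightforward to evaluate numerically with SciPy's Struve and Bessel functions. In the sketch below the screening length $r_0$ is an illustrative assumption (not a fitted TMD value); $e^2$ is expressed as $e^2/4\pi\epsilon_0 \approx 14.4$ eV\,\AA\ so the potential comes out in eV.

```python
import numpy as np
from scipy.special import struve, y0

def keldysh(r, r0=40.0, e2=14.4):
    """V_C(r) = -(e^2 pi / 2 r0) [H0(r/r0) - Y0(r/r0)].

    r, r0 in angstrom; e2 = e^2/(4 pi eps0) in eV*angstrom -> V_C in eV.
    """
    x = np.asarray(r, dtype=float) / r0
    return -(e2 * np.pi / (2.0 * r0)) * (struve(0, x) - y0(x))

# At large r the potential crosses over to the bare Coulomb form -e2/r,
# while at short range it diverges only logarithmically.
v_far = keldysh(1000.0)
```

The large-$r$ Coulomb tail and the softened short-range behavior are the two features that distinguish 2D screening from the bulk $1/r$ interaction.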
\begin{figure}[tbph]
\centering
\includegraphics[width=0.5\textwidth]{Fig14.pdf}
\caption{Atomically sharp armchair interface (in purple) of lateral HS between (a) WS$_{2}$-MoS$_{2}$ and (c) WS$_{2}$-MoS$_{2}$-WS$_{2}$. (b) and (d) are \emph{p-n} and \emph{p-n-p} junctions, respectively. For HSs between different TMDs, $V_0$ is the band offset, $\omega$ is the interface width (set to zero for atomically sharp interfaces), $\delta$ is the difference in band offset for electron and hole, and $L$ is central region width in (d). Reprinted with permission from \cite{Lau2018ArxivExcitons}. Copyright 2018 by the American Physical Society, Physical Review B.}
\label{Lau2018ArxivExcitonsFIGURE1}
\end{figure}
Equation \ref{Lau2018Hamiltonian1} can be rewritten in terms of the center-of-mass and electron-hole relative motion. Assuming the Born-Oppenheimer approximation to be valid, the wave function is separable,
\begin{equation}
\label{Lau2018Hamiltonian3}
\Phi(\textbf{R},\textbf{r})=\Psi(\textbf{R})\Theta(\textbf{r}),
\end{equation}
with $\textbf{R}$ the center-of-mass position, $M=m_e+m_h$ the total mass, and $\textbf{r}$ the relative coordinate with reduced mass $\mu=m_e m_h/(m_e+m_h)$. The interface potential is modeled as
\begin{eqnarray}
V_e(x_e)&=&\frac{V_0+\delta}{2}\left(1-\tanh \left(\frac{x_e}{\omega}\right)\right), \label{electronpotential} \\
V_h(x_h)&=&-\frac{V_0-\delta}{2}\left(1-\tanh \left(\frac{x_h}{\omega}\right)\right), \label{holepotential}
\end{eqnarray}
where $\omega$ is the interface width, characterizing the sharpness of the band offset $V_0$, as shown in figure\ \ref{Lau2018ArxivExcitonsFIGURE1}. The translational symmetry along $y$ allows the center-of-mass motion to be written as $\Psi(\textbf{R})=\Psi(x)e^{i p_y y}$, and the relative motion and center-of-mass equations, respectively as
\begin{equation}
\label{Lau2018Hamiltonian4}
\left[-\frac{\hbar^2}{2\mu}\nabla^{2}_{\textbf{r}}+V_C(r)+V_I(x,\textbf{r})\right]\Theta(x,\textbf{r})=E(x)\Theta(x,\textbf{r}),
\end{equation}
\begin{equation}
\label{Lau2018Hamiltonian5}
\left[-\frac{\hbar^2}{2M}\frac{\partial^2}{\partial x^2}+E(x)\right]\Psi(x)=E_g \Psi(x).
\end{equation}
In equation \ref{Lau2018Hamiltonian5}, $E_g$ is the ground-state energy of the type-II interface exciton. Equation \ref{Lau2018Hamiltonian4} is solved by either: i) real-space tight binding (for small supercells), or ii) a perturbation expansion in a hydrogen-like basis (for larger systems).
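The tanh interface potentials of equations \ref{electronpotential} and \ref{holepotential} are simple to evaluate; the sketch below checks their asymptotics (parameter values are illustrative, not those of \cite{Lau2018ArxivExcitons}).

```python
import numpy as np

def V_e(x, V0=0.5, delta=0.0, w=1.0):
    """Electron interface potential: saturates to V0+delta for x << 0."""
    return 0.5 * (V0 + delta) * (1.0 - np.tanh(x / w))

def V_h(x, V0=0.5, delta=0.0, w=1.0):
    """Hole interface potential: saturates to -(V0-delta) for x << 0."""
    return -0.5 * (V0 - delta) * (1.0 - np.tanh(x / w))

# Deep on the x<0 side the offsets saturate; on the x>0 side both vanish,
# so electron and hole prefer opposite sides of the interface.
```

The sharp-interface limit corresponds to $w \to 0$, where the tanh profile becomes a step.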
\begin{figure*}[tbph]
\centering
\includegraphics[width=1.0\textwidth]{Fig15.pdf}
\caption{Results for exciton at WS$_{2}$-MoS$_{2}$ single interface. (a) Binding energy $E_b$, (b) exciton radius $a_b$, and (c) optical dipole $D$ vs band offset $V_0$, obtained with tight-binding model (red symbols) and the continuum 2D hydrogenic basis model (blue symbols), and $\delta=0$. (d) Valley coupling $J$ vs band offset $V_0$, for symmetric (asymmetric) band offset $\delta=0$ eV ($\delta=0.5$) in blue (red) symbols. (e) and (f) are wave functions in real space for the center of mass, and relative coordinate, respectively. The interface is at $x=0$, and results are for different band offsets $V_0$. Reprinted with permission from \cite{Lau2018ArxivExcitons}. Copyright 2018 by the American Physical Society, Physical Review B.}
\label{Lau2018ArxivExcitonsFIGURE75c3}
\end{figure*}
Typical results for interface exciton binding energy $E_b$, effective radius $a_b$, and optical dipole $D$ vs band offset $V_0$ are shown in figure\ \ref{Lau2018ArxivExcitonsFIGURE75c3}(a)--(c).
The results fall into two different regimes: small ($V_0<0.1$ eV) and large ($V_0>0.4$ eV) band offsets, driven by the competition between the Coulomb interaction and the interface potential. In the small-$V_0$ regime, $E_b$ is relatively large ($\approx0.35$ eV), $a_b$ small, and $D$ large, meaning that $V_C$ dominates, leading to an exciton ground state with similar characteristics to the 2D exciton. On the other hand, in the large-$V_0$ regime, $E_b$ is relatively small ($\approx 0.2$ eV), $a_b$ large ($\approx 5$ nm), and $D$ small, as $V_I$ dominates over $V_C$, yielding long lifetimes. Note that although smaller ($E_b\approx0.2$ eV), the binding energy is still of the same order of magnitude as for the 2D excitons. Lastly, the intervalley exchange $J$ vs band offset is studied for a symmetric ($\delta=0$ eV) and an asymmetric ($\delta=0.5$ eV) band offset, as shown in figure\ \ref{Lau2018ArxivExcitonsFIGURE75c3}(d). For the symmetric case, the three-fold symmetry at $V_0=0$ does not allow valley mixing, but as $V_0$ increases $J$ reaches a maximum before decaying. For the asymmetric case, $\delta\neq0$ has already broken the symmetry and $J$ starts at its maximum, decreasing for larger band offsets. This suggests that the interface exciton has a $(|K\rangle\pm|K'\rangle)$ valley state, and that it will couple to linearly polarized light instead of circular polarization. Larger supercells treated with a continuum model lead to similar results, as shown in figure\ \ref{Lau2018ArxivExcitonsFIGURE75c3}(a)-(c) in blue symbols.
The ground-state solutions of equations \ref{Lau2018Hamiltonian4} and \ref{Lau2018Hamiltonian5}, $\Theta(x,\textbf{r})$ and $\Psi(x)$, are shown in figure\ \ref{Lau2018ArxivExcitonsFIGURE75c3}(e)-(f) for different band offsets $V_0$. The figure shows that $\Psi(x)$ is spread across the heterostructure width for small $V_0$, with 2D-like exciton behavior. This changes for large $V_0$, as the exciton center of mass is now located at the interface, indicating that electron and hole are separated on opposite sides. The contour maps in figure\ \ref{Lau2018ArxivExcitonsFIGURE75c3}(f) show that the extent of the relative-coordinate function increases for larger $V_0$. Although for small $V_0$ the exciton behaves as in a 2D pristine TMD, for $V_0>0.2$ eV the exciton size is much larger, decreasing the electron-hole overlap and enhancing the exciton lifetime.
These authors also consider a double lateral interface, as in the system WS$_{2}$-MoS$_{2}$-WS$_{2}$, shown in figure\ \ref{Lau2018ArxivExcitonsFIGURE1}(c). Type-II band alignment dictates that the electron should mostly remain in the central MoS$_{2}$ region, while the hole would be in the outer WS$_{2}$ regions, to an extent depending on the competition between the attractive Coulomb interaction, the width of the central region $L$, and the band offset $V_0$. For small $L$, the interface exciton has small binding energy due to the overlap across the central nanoribbon. However, the binding increases for larger $L$ until it saturates to the same value as in the single-interface case described before ($E_b \simeq 0.16$ eV at $a_b \simeq 5$ nm for $V_0=0.5$ eV), as shown in figure\ \ref{Lau2018ArxivExcitonsFIGURE8}(a). This suggests that in WS$_{2}$-MoS$_{2}$-WS$_{2}$ HSs the excitons will be separated at each interface for $L> 5$ nm, as seen in figure\ \ref{Lau2018ArxivExcitonsFIGURE8}(b).
\begin{figure}[tbph]
\centering
\includegraphics[width=0.5\textwidth]{Fig16.pdf}
\caption{(a) Binding energy $E_b$ vs different MoS$_{2}$ widths $L$ for WS$_2$-MoS$_2$-WS$_2$ structure. (b) Magnitude squared of the electron and hole wave functions for three different widths of MoS$_{2}$ ($L=1.3$, 3.3, 10 nm, respectively). MoS$_{2}$ region is shown in gray. (c) Binding energy $E_b$ vs band offset $V_0$, for $L=3.3$ nm. Reprinted with permission from \cite{Lau2018ArxivExcitons}. Copyright 2018 by the American Physical Society, Physical Review B.}
\label{Lau2018ArxivExcitonsFIGURE8}
\end{figure}
\begin{figure}[tbph]
\centering
\includegraphics[width=0.5\textwidth]{Fig17.pdf}
\caption{Exciton results for a triangular WS$_{2}$-MoS$_{2}$-WS$_{2}$ HS with a triple zigzag interface and a central triangular MoS$_{2}$ region enclosed by WS$_{2}$. (a) Schematics of the heterostructure, with $R$ the MoS$_{2}$ triangle side. (b) Electron and hole relative-motion spatial wave function probability distributions vs the heterostructure real-space supercell dimensions, for $R=7,10,15$ nm and $V_0=0.3$ eV. Reprinted with permission from \cite{Lau2018ArxivExcitons}. Copyright 2018 by the American Physical Society, Physical Review B.}
\label{Lau2018ArxivExcitonsFIGURE9and12}
\end{figure}
Lastly, excitons in triangular MoS$_{2}$ flakes, QDs enclosed by WS$_{2}$, are studied (see figure\ \ref{Lau2018ArxivExcitonsFIGURE9and12}).
A finite-basis representation of n-electron/m-hole states is used to find the binding energy and wave function of the interface exciton.
They find a maximum exciton binding energy for an optimal flake size $R$, due to the competition between quantum confinement and the Coulomb interaction, as shown in figure\ \ref{Lau2018ArxivExcitonsFIGURE9and12}(b). For the smallest central QD region, the electron is spread over the entire QD, but for the largest size one can see spatially separated wave functions at each interface. For sufficiently large QDs, the wave functions are spatially separated, with $E_b\approx 0.14$ eV, in agreement with the single-interface calculations. They propose a valley-dependent effective model for three-fold symmetric QD systems with overlapping states as
\begin{equation}
\label{Lau2018Hamiltonian6}
H^{\triangle\textrm{QD}}_{\textrm{eff,valley}}=
\left(\begin{array}{ccc}
E_0 & te^{i\theta_{\tau}} & te^{-i\theta_{\tau}}\\
te^{-i\theta_{\tau}} & E_0 & te^{i\theta_{\tau}}\\
te^{i\theta_{\tau}} & te^{-i\theta_{\tau}} & E_0\\
\end{array}\right),
\end{equation}
with basis $\{|\Phi\rangle,C_3|\Phi\rangle,C_3^2|\Phi\rangle\}$. The wave function of a 1D interface exciton at one edge, $|\Phi\rangle$, is transformed by the $2\pi/3$ rotation operator $C_3$. $E_0$ is the exciton binding energy, and $te^{i\theta_{\tau}}$ the transition amplitude between wave functions at different edges, with $\tau=\pm1$ the valley index for $K$ and $K'$, respectively. In the absence of valley mixing, three energy states $E_j=E_0+2t\cos(2j\pi/3-\theta)$ are found, with twofold degeneracy. Numerical results show that the transition coefficient $t$ depends on the overlap of the quasi-1D excitonic wave functions at the corners of the triangular MoS$_2$ QD, with an interesting sign change: $t<0$ for small QDs (large Coulomb interaction, large exciton overlap), and $t>0$ for large QDs (small Coulomb interaction, small overlap). One of the excitonic states is bright, and the other two are dark under circularly polarized light excitation.
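The spectrum of this three-fold symmetric (circulant) Hamiltonian can be checked by direct diagonalization for one valley; the values of $E_0$, $t$, and $\theta$ below are illustrative, not fitted parameters of \cite{Lau2018ArxivExcitons}.

```python
import numpy as np

E0, t, theta = 0.14, -0.005, 0.3   # eV, eV, rad -- illustrative values
ph = t * np.exp(1j * theta)        # t e^{i theta} for one valley (tau = +1)

# C3-symmetric circulant Hamiltonian in the basis {Phi, C3 Phi, C3^2 Phi}.
H = np.array([[E0,          ph,          np.conj(ph)],
              [np.conj(ph), E0,          ph         ],
              [ph,          np.conj(ph), E0         ]])

evals = np.sort(np.linalg.eigvalsh(H))
# Analytic circulant spectrum: E_j = E0 + 2 t cos(2 pi j / 3 - theta).
analytic = np.sort([E0 + 2*t*np.cos(2*np.pi*j/3 - theta) for j in range(3)])
```

The sign of $t$ reorders which of the three states lies lowest, which is the physical content of the small-QD/large-QD sign change described above.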
For the more general case of valley-mixing,
\begin{equation}
\label{Lau2018Hamiltonian7}
H^{\triangle\textrm{QD}}_{\textrm{eff-intervalley}}=
\left(\begin{array}{cc}
H^{\triangle\textrm{QD}}_{\textrm{eff,+1}} & H^{\triangle\textrm{QD}}_{\textrm{eff,pq}}\\
H^{\triangle\textrm{QD}}_{\textrm{eff,pq}} & H^{\triangle\textrm{QD}}_{\textrm{eff,-1}}
\end{array}\right),
\end{equation}
the intervalley mixing is reduced by symmetry to two independent matrix elements, and
Lau {\em et al.} provide estimates for them \cite{Lau2018ArxivExcitons}.
Other theoretical treatments of excitons in lateral HSs include Wei \emph{et al.} \cite{Wei2015SciRep}, who predict a coherent lattice and strong coupling at the interface with type-II alignment. They suggest this as a possible mechanism for effective separation and collection of excitons at the HS. This expectation was confirmed by a detailed interfacial study that finds excitons pinned to the HS with carriers on opposite sides of the 1D interface \cite{Wei2015}. Other DFT studies have also agreed \cite{Yang2017,Mu2018MRExpress}. On the other hand, when defects are distributed along/near the interface, resulting in non-sharp junctions, a smooth electrostatic potential profile is expected, reducing the HS exciton localization and weak overlap properties \cite{Cao2017}.
We want to call attention to quantum plasmonic effects recently observed in WSe$_2$-MoSe$_2$ lateral HSs, by Tang \emph{et al.} \cite{Tang2018}, mentioned in section \ref{subsubsec:PlasmonicsEffectsEXPERIMENT}.
These authors suggested that hot electron injection enhances the PL signal on the MoSe$_2$ side, due to an increased recombination rate, while hot electron injection in WSe$_2$ quenches its PL due to charge transfer across the interface. This competition is tunable with the tip position relative to the interface and with the tip-sample distance. The model shown schematically in figure\ \ref{Tang2018}(a) accounts for the competition between hot electron injection (HEI) and the tip-enhanced PL.
Rate equations relate the initial excited state populations $N_{X0}$ and $N_{Y0}$ with that of exciton $|X\rangle$ in MoSe$_2$, and $|Y\rangle$ in WSe$_2$ \cite{Tang2018}.
\begin{figure}[tbph!]
\centering
\includegraphics[width=0.45\textwidth]{Fig18.pdf}
\caption{(a) Schematics of the WSe$_2$-MoSe$_2$ lateral HS energy diagram, showing arrows for the processes caused by the interface: hot-electron injection (HEI, black), plasmon-induced charge transfer (green), and tip-enhanced PL (TEPL, purple). These lead to tunable quenching/enhancement of the PL (dashed arrows). $\Gamma_{\mathrm{CT}}G_{\mathrm{HEI}}$ ($\Gamma_{\mathrm{CT}}R_{\mathrm{HEI}}$) is the hot-electron injection rate (HE decay rate) for states $|X^{0}\rangle$ and $|Y^{0}\rangle$. $\Gamma_{\mathrm{CT}}$ is the tunneling between tip and sample; $\Gamma_{p}$ the local optical excitation by the tip near field; $\alpha$ ($\beta$) the exciton generation rate of MoSe$_2$ (WSe$_2$); $\gamma\Gamma_{p}$ the photoinduced charge transfer across the interface; and $\tau_X$ ($\tau_{Y}$) the exciton relaxation time in MoSe$_2$ (WSe$_2$). (b) and (c) Experimental PL intensities (dots) on either side of the interface (WSe$_2$ and MoSe$_2$, respectively) vs tip-sample distance, with the theoretical model (lines). $d \simeq 0.36$ nm signals the transition from the classical to the quantum tunneling regime. Reprinted with permission from \cite{Tang2018}. Copyright 2018 by the American Physical Society, Physical Review B.}
\label{Tang2018}
\end{figure}
The model fits well both the classical ($d>0.36$ nm) and the quantum ($d<0.36$ nm) regimes, as shown in figures\ \ref{Tang2018}(b)-(c). As the tip approaches the surface, the photoinduced charge transfer $\gamma\Gamma_{p}$ suppresses PL in WSe$_2$ and enhances it in MoSe$_2$, as seen in experiments.
\subsubsection{Magnetic interactions}
\label{subsubsec:MagneticProperties}
The tight binding models presented in sections\ \ref{subsubsec:ElectronicStructure} and \ref{subsubsec:IncommensurabilityAndStrain}, for commensurate and incommensurate lateral HSs, respectively, have one great advantage: they can reliably simulate large systems that may also include defects such as vacancies and/or adatoms. Of particular interest are the effective interactions between magnetic adatoms hybridized at or near the lateral interface of MoS$_{2}$-WS$_{2}$ and MoSe$_{2}$-WSe$_{2}$ HSs. As the interfacial states are highly localized, the HS acts as an effective 1D host for the interaction between the magnetic impurities. This Ruderman-Kittel-Kasuya-Yosida (RKKY) indirect exchange interaction between magnetic impurities is expected to be drastically different, due to the strong spin-orbit coupling of these materials and the effective 1D dimensionality of the states at the HS. The impurities are modeled by an additional term in the Hamiltonian given by
\begin{equation}\label{impurities1}
H_{\mathrm{I}}={\cal J} \sum_{i=1,2} \textbf{S}_{i}\cdot\textbf{s}_{\alpha_i}(\textbf{l}_i),
\end{equation}
which describes the local exchange coupling ${\cal J}$ between the impurity spin $\textbf{S}_{i}$ and electrons in orbital $\alpha_i$ at the location of the impurity $\textbf{l}_{i}$ at/near the interface \cite{AvalosOvando2018Arxiv}. $\textbf{s}_{\alpha_i}(\textbf{l}_i)$ is the electron spin density at the impurity location. After integration of the electronic degrees of freedom, one obtains the inter-impurity effective exchange interaction as
\begin{eqnarray}\label{jeffective1}
H_{RKKY} &=& J_{XX}\left(S_{1}^{x}S_{2}^{x}+S_{1}^{y}S_{2}^{y}\right)+J_{ZZ}S_{1}^{z}S_{2}^{z}\nonumber\\
& &+J_{DM}\left(\textbf{S}_{1}\times \textbf{S}_{2}\right)_{z},
\end{eqnarray}
where $J_{XX} = J_{YY}$ (in-plane), $J_{ZZ}$ (Ising), and $J_{DM}$ (Dzyaloshinskii-Moriya), mediated by the TMD HS host, are proportional to the static spin susceptibility tensor of the electron system \cite{RudermanKittel1954,Kasuya1956,Yosida1957,Imamura2004}. These $J$ parameters (jointly called $J_{\mathrm{eff}}$ for simplicity below) control the impurity interaction, and can be calculated by different approaches, including: i) the energy difference between triplet and singlet impurity configurations after diagonalization of the full Hamiltonian $H=H_{\rm HS} + H_{\rm I}$, and ii) second-order perturbation theory \cite{AvalosOvando2018Arxiv}.
\begin{figure}[tbph]
\centering
\includegraphics[width=0.45\textwidth]{Fig19.pdf}
\caption{RKKY interaction for impurities on a zigzag lateral HS interface vs impurity separation. The interaction is shown in units of a typical TMD-impurity hybridization magnitude ${\cal J} (=10$ meV), and scaled by the impurity separation $r^{1/2}$, normalized by the zigzag lattice constant. Full (empty) symbols indicate triplet-singlet energy difference (perturbation theory) results: magenta for Ising $J_{ZZ}$, dark green for $J_{XX}$ and orange for $J_{DM}$ terms. Reprinted with permission from \cite{AvalosOvando2018Arxiv}. Copyright 2019 by the American Physical Society, Physical Review B.}
\label{AVALOS2018Fig4a}
\end{figure}
The resulting $J_{\rm eff}$ exhibit long-range sub-1D behavior, as well as strong DM interactions, showing that the interface states indeed behave as unusual 1D hosts in the p-doped regime. This is illustrated in figure\ \ref{AVALOS2018Fig4a}, where typical RKKY interactions at a HS are shown vs the separation $r/a$ between the two magnetic impurities; the impurities are hybridized at the interface and the Fermi level of the system is assumed to be in the bulk midgap. The various interactions are seen to oscillate with a decaying envelope, as one would expect. The oscillations in the $J_{\rm eff}$ values describe how the lowest-energy impurity alignment changes between ferromagnetic ($J_{\mathrm{eff}}<0$) and antiferromagnetic ($J_{\mathrm{eff}}>0$), depending on the separation $r/a$. More importantly, the interaction is seen to decay with distance as $J_{\mathrm{eff}}\propto1/r^{1/2}$, much slower than the expected $r^{-1}$ for a simple 1D system. This suggests that the HS interface hosts a long-range interaction between impurities with rather unusual features.
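The significance of the $r^{-1/2}$ envelope is easily visualized against the conventional 1D result; the sketch below is purely illustrative ($k_F$ and the prefactors are arbitrary, not taken from \cite{AvalosOvando2018Arxiv}).

```python
import numpy as np

kF = 0.4                          # Fermi wavevector in units of 1/a (arbitrary)
r = np.arange(2.0, 80.0)          # impurity separation in lattice constants

J_hs = np.cos(2 * kF * r) / np.sqrt(r)   # sub-1D envelope reported for the HS
J_1d = np.cos(2 * kF * r) / r            # conventional 1D RKKY decay

# The envelope ratio grows as sqrt(r): the interface-mediated coupling
# dominates ever more strongly at large separations.
ratio = J_hs / J_1d
```

Both couplings oscillate between ferromagnetic and antiferromagnetic with the same $2k_F r$ period; only the decay of the envelope differs.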
Results for armchair interfaces are similar in magnitude and decay envelope but exhibit different oscillation patterns \cite{AvalosOvando2018Arxiv}.
It would be interesting to explore magnetic ensembles with long range and helical magnetic interactions in lateral HSs. Such systems are highly desirable in many different contexts in condensed matter. For example, transferring quantum information in spin chains \cite{Menzel2012} or studying the possible emergence of Majorana bound states when in proximity to superconductors \cite{Kim2014}. Lateral interfaces between TMDs could provide a new platform for future studies with unique doping or gating tunability.
\subsubsection{Transport properties}
\label{subsubsec:Transport}
Heterostructures should exhibit electronic transport features absent in their pristine counterparts. In TMDs, some of these properties include spin filtering, rectification, and enhanced energy conversion in thermoelectric transport.
Electronic transport in commensurate MoS$_2$-WS$_2$ HSs has been studied using non-equilibrium Green functions (NEGF) \cite{zhou2015RSCAdv,zhou2016PCCP,an2016JMCC}, within the Atomistix ToolKit package \cite{atomistix}. The currents through the device are calculated with the Landauer-B\"uttiker formalism, using the Fisher-Lee relation for the transmission. These authors find significant negative differential resistance (NDR) in different systems, arising from the level structure differences and offsets across the HS, which produce transmission resonances.
Zhou \emph{et al.} \cite{zhou2015RSCAdv} studied several perpendicular and parallel ribbon geometries of MoS$_2$-WS$_2$ HSs, finding that an armchair interface shows rectifying behavior, which is suppressed as the number of WS$_2$ slabs decreases. These HSs are proposed for spintronics due to their spin-filtering and NDR capabilities \cite{zhou2016PCCP}. The rectification ratio, $RR(V)=|I(V)/I(-V)|$, which characterizes the current-voltage asymmetry, is predicted to be as high as 18.3. The rectifying behavior is analyzed in terms of transmission and DOS profiles, which show higher-amplitude peaks for positive than for negative $V$, producing large $RR(V)$ values. For zigzag (magnetic) edges there is no asymmetry between negative and positive bias; NDR can still be obtained, but with smaller efficiency than in the armchair case, and it vanishes with increasing WS$_2$ content. The spin content can also be analyzed through the spin-filtering efficiency (SFE),
\begin{equation}
SFE=\frac{T_{up}(E_F)-T_{down}(E_F)}{T_{up}(E_F)+T_{down}(E_F)} \, ,
\label{SFEzhou2016PCCP}
\end{equation}
where $T_s(E_F)$ is the transmission coefficient for spin $s$ at the Fermi level. The SFE is found to reach 60\%, attributed to the larger contribution to the DOS from the \emph{p}-orbitals of S atoms on the ribbon edges. This effect is independent of the ribbon width, and shows different behavior at different edges.
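Both figures of merit are straightforward to evaluate once the currents and spin-resolved transmissions are in hand. A minimal sketch, with hypothetical $I$-$V$ and transmission values (chosen only to land near the magnitudes quoted above, not taken from the cited calculations):

```python
import numpy as np

def rectification_ratio(I_pos, I_neg):
    """RR(V) = |I(V) / I(-V)|, evaluated pointwise on a symmetric bias grid."""
    return np.abs(I_pos / I_neg)

def spin_filtering_efficiency(T_up, T_down):
    """SFE = (T_up - T_down) / (T_up + T_down) at the Fermi level."""
    return (T_up - T_down) / (T_up + T_down)

# Hypothetical currents (A) at biases +/-V; illustrative values only.
I_pos = np.array([1.0e-9, 8.0e-9, 4.0e-8, 1.1e-7])       # I(+V)
I_neg = np.array([-9.0e-10, -3.0e-9, -6.0e-9, -6.0e-9])  # I(-V)
RR = rectification_ratio(I_pos, I_neg)
print(RR.max())  # peak rectification, here ~18.3

# Hypothetical spin-resolved transmissions at E_F; illustrative only.
sfe = spin_filtering_efficiency(T_up=0.8, T_down=0.2)
print(sfe)  # 0.6, i.e. a 60% spin-filtering efficiency
```

In an actual NEGF calculation these inputs would come from the computed transmission function $T_s(E)$ and the Landauer current integral.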
A similar study \cite{an2016JMCC} finds NDR as well, with electrons propagating along the M-edge, and never along the X-edge. The transmission channels indeed appear to be contributed mostly by the metal $d$-orbitals at the Mo-edge. Local current calculations show two types of current channels: the predominant Mo-Mo hopping current, and the Mo-S-Mo hopping current (via Mo-S bonds). These results suggest that different widths and metal vacancies at the sulfur edge will have little impact on the transport features.
Motivated by nanostructured thermoelectric materials that can efficiently convert waste heat into electricity (and vice versa), and by the prospect that interfacial effects could make a thermoelectric material more efficient, Zhang \emph{et al.} \cite{Zhang2016SciRep} studied thermoelectric properties of MoS$_2$-WS$_2$ HSs. They show that this system is expected to have higher figures of merit than its pristine counterparts, as the interface reduces the lattice thermal conductivity more than it does electron transport.
\begin{figure*}[tbph]
\centering
\includegraphics[width=0.9\textwidth]{Fig20.pdf}
\caption{(a) Atomic structure of the zigzag-terminated MoS$_{2}$-WS$_{2}$ hybrid nanoribbons. The central scattering region is composed of $N_{\mathrm{Mo/W}}$ periodic Mo-W slabs, in this example $N_{\mathrm{Mo/W}}=2$, defined as $N_{\mathrm{Mo/W}}=L/L'$. Atomic hoppings are expressed by $t^{m}_{\alpha}$, where $m$ indicates the transverse location of the hopping and $\alpha$ the material it belongs to. (b) Maximum $ZT$ as a function of temperature $T$, for the hybrid nanoribbon with just one interface ($N_{\mathrm{Mo/W}}=1$), and its respective pristine counterparts. (c) Phonon transmission $T_{p}$ as a function of phonon frequency $\omega$, for $N_{\mathrm{Mo/W}}=1$. (d) Phonon local density of states for $N_{\mathrm{Mo/W}}=1$, where the interface is shown in dashed white and the colors represent the strength of phonon localization. Reprinted with permission from \cite{Zhang2016SciRep}. Copyright 2016 Creative Commons Attribution 4.0, Scientific Reports.}
\label{Zhang2016SciRep}
\end{figure*}
The thermoelectric energy conversion efficiency of zigzag-edge MoS$_2$-WS$_2$ NRs is calculated using a system of $N_{\mathrm{Mo/W}}$ dual slabs as the active scattering region, connected to metallic leads, as shown in figure\ \ref{Zhang2016SciRep}(a). The thermoelectric efficiency of the system is characterized by the figure of merit $ZT=S^2\sigma T/\kappa$, where $T$ is the system temperature, $S$ the Seebeck coefficient, $\sigma$ the electronic conductance, and $\kappa=\kappa_{e}+\kappa_{p}$ the total thermal conductance, with contributions of electrons ($\kappa_{e}$) and phonons ($\kappa_{p}$). Systems with $ZT>1$ are considered efficient energy converters. These various quantities are calculated using non-equilibrium Green functions, except for $\kappa_{p}$, which is obtained in a harmonic approximation.
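The figure of merit combines these quantities directly. A minimal sketch, with illustrative nanoribbon-scale magnitudes (SI units) chosen only to give $ZT$ of order unity; these are not the values of the cited work:

```python
def figure_of_merit(S, sigma, T, kappa_e, kappa_p):
    """ZT = S^2 * sigma * T / (kappa_e + kappa_p)."""
    return S**2 * sigma * T / (kappa_e + kappa_p)

# Illustrative magnitudes only (not fitted to the cited data):
zt = figure_of_merit(S=2.0e-4,         # Seebeck coefficient (V/K)
                     sigma=1.0e-4,     # electronic conductance (S)
                     T=300.0,          # temperature (K)
                     kappa_e=2.0e-10,  # electronic thermal conductance (W/K)
                     kappa_p=3.2e-10)  # phononic thermal conductance (W/K)
print(round(zt, 1))  # ~2.3; suppressing kappa_p at fixed S, sigma raises ZT
```

Reducing $\kappa_p$ at fixed electronic properties, as the interface does here, directly raises $ZT$.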
The highest thermoelectric efficiencies are achieved by tuning the number of MoS$_2$-WS$_2$ dual slabs, $N_{\mathrm{Mo/W}}$, i.e.\ the number of interfaces. For a single interface ($N_{\mathrm{Mo/W}}=1$), $ZT=2.3$ at room temperature, while $ZT=1.6(1.5)$ for pristine WS$_{2}$(MoS$_{2}$), as seen in figure \ref{Zhang2016SciRep}(b). Higher efficiencies are reached with more interfaces, so that for $N_{\mathrm{Mo/W}}=6$ it reaches $ZT=5.5$ at $T=600$ K or $ZT=4$ at $T=300$ K, nearly 3 times as large as in the pristine components. Still larger $N_{\mathrm{Mo/W}}$ reduces the efficiency, however.
These results are attributed to a sharp decrease of the phononic thermal conductance $\kappa_{p}$ at the interfaces, especially as the effects on $\sigma$, $S$ and $\kappa_{e}$ are small. The phonon transmissions $T_{p}$ for the hybrid and the pristine systems reveal two main reasons for the $\kappa_{p}$ drop: a decrease of the spectral range, as well as a reduction in the $T_{p}$ magnitude itself. The first is shown in figure\ \ref{Zhang2016SciRep}(c), where the phonon gaps for WS$_{2}$ (MoS$_{2}$) are 73 (24) cm$^{-1}$, while for the MoS$_2$-WS$_2$ hybrid nanoribbon it is 87 cm$^{-1}$, hence a larger gap for the phonons to overcome. Also shown in figure\ \ref{Zhang2016SciRep}(c) is the clearly smaller $T_{p}$ from 80 cm$^{-1}$ onward. The interface then acts as a potential barrier, localizing phonons and suppressing their transmission from left to right in figure\ \ref{Zhang2016SciRep}(d).
Different theoretical studies have looked at the stability and transport quantities of HSs when a variety of gas molecules is adsorbed; in MoS$_{2}$-WS$_{2}$ HS, adsorbed molecules studied include CO, H$_2$O, NH$_3$, NO, and NO$_2$ \cite{Sun2016}. The calculations are performed with VASP and PBE-GGA, while the vdW correction for molecule adsorption includes the Grimme long-range correction. Transport properties are calculated with the Landauer-B\"uttiker formula, while adsorption stability is analyzed by considering $E_{\mathrm{ads}}=E_{gas/HS}-E_{gas}-E_{HS}$, where a negative $E_{\mathrm{ads}}$ indicates adsorption being energetically favorable.
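The stability criterion can be sketched as follows; the total energies are hypothetical placeholders for DFT outputs, not values from the cited study:

```python
def adsorption_energy(E_gas_HS, E_gas, E_HS):
    """E_ads = E(gas+HS) - E(gas) - E(HS); a negative value means
    adsorption is energetically favorable."""
    return E_gas_HS - E_gas - E_HS

# Hypothetical DFT total energies (eV), placeholders only:
E_ads = adsorption_energy(E_gas_HS=-1250.45, E_gas=-14.20, E_HS=-1236.00)
print(E_ads)  # negative -> favorable adsorption
```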
NH$_3$ is reported to act as a charge donor, while all other studied molecules act as acceptors. The largest device sensitivity is found for CO and NO$_2$, due to their larger binding energy, which deeply modifies the TMD HS electronic structure.
Other approaches have also been used to calculate electronic transport properties through TMD lateral HSs. A recent study of WTe$_2$-MoS$_2$ and MoTe$_2$-MoS$_2$ HSs was reported \cite{Choukroun2018Arxiv}, using an 11-orbital tight-binding model \cite{Fang2018}. This work models tunnel field-effect transistors on lateral HSs, studying quantum transport with NEGF. Analyzing the DOS and characteristic I-V curves of several systems as the channel length and backgate voltages are changed, the authors find that the MoTe$_2$-MoS$_2$ HS can serve as an ultra-low-power-consumption device, with low sub-threshold swings and high $I_{\mathrm{on}}/I_{\mathrm{off}}$ ratios.
A different study on transport by Ghadiri \emph{et al.} \cite{Ghadiri2018JAP} considers a possible Goos-H{\"a}nchen lateral shift of valley electrons arriving at the interfacial scattering region, in lateral HSs of normal MoS$_{2}$ and `ferromagnetic' WS$_{2}$, in WS$_{2}$-MoS$_{2}$-WS$_{2}$ and MoS$_{2}$-WS$_{2}$-MoS$_{2}$ quantum well systems. The magnetic TMD is achieved by deposition on a magnetic insulator system.
The Goos-H{\"a}nchen (GH) shift occurs in optics when an incident beam of light is fully reflected off an interface and displaced laterally from the anticipated geometrical path. The shift occurs as the incident wave packet is reshaped by the interface due to each plane wave component experiencing a different phase shift. Similarly, Goos-H{\"a}nchen-like (GHL) shifts of electrons transmitted through an interface are also observed.
Such GH and GHL shifts are expected in 2D materials due to local strains, as predicted on graphene \cite{WuPRL2011}. It is natural that the inherent local strain and band alignment at the interface between WS$_{2}$ and MoS$_{2}$ would cause GH and/or GHL shifts. The ferromagnetic proximity induced on WS$_2$ should also result in strong valley-dependence for incident waves at the interface.
To obtain the transport properties of the system they used wave function matching, modifying the low-energy two-orbital representation from \cite{Xiao2012}, including substrate-induced exchange bias on WS$_{2}$, to write the effective Hamiltonian as
\begin{equation}
\label{GhadiriHamiltonian}
H_j = \left(\begin{array}{cc}
E_{jc}-h_{j}s_{z} & \tau a_{j} t_{j} k_{j} e^{-i\tau\theta_{j}} \\
\tau a_{j} t_{j} k_{j} e^{i\tau\theta_{j}} & E_{jv}+\tau s_{z}\lambda_{j} -h_{j}s_{z}
\end{array}\right) \, ,
\end{equation}
where $j$ indicates each region $j=1,2,3$ in the structure, and $k_{j}$ and $\theta_{j}$ are the $x$-component of the momentum and angle of the electron wave vector $\mathbf{k}_{j}$, respectively. The pristine material parameters include $E_{jc}$ $(E_{jv})$ the conduction (valence) band minimum (maximum), lattice constant $a_{j}$, hopping integrals $t_{j}$, induced SOC splitting at the valence band 2$\lambda_{j}$, valley index $\tau=1(-1)$ for K (K'), as well as the substrate-induced exchange $h_{j}$, and $s_{z}=+1(-1)$ is the electron spin. The energy dispersion and corresponding pseudospinors allow them to find
transmission $T_{A(B)}$ and reflection amplitudes, as well as associated GH (transmitted electrons GHL) shifts $\sigma_{re(tr),s_{z}}^{\tau}$ for different quantum well systems, shown schematically in figure \ref{Ghadiri2018JAP}. We should comment that it is not clear if the required wave function matching in this work has considered the non-hermiticity of the effective Hamiltonian for non-uniform hopping integrals. This requires consideration of different matching conditions \cite{Kolesnikov1997,Silin1998,Ratnikov2012}.
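The spin-valley structure encoded in equation (\ref{GhadiriHamiltonian}) can be inspected numerically by diagonalizing $H_j$ for one region; the parameter values below are illustrative monolayer-scale numbers, not the fitted parameters of the cited work:

```python
import numpy as np

def H_region(E_c, E_v, a, t, k, theta, lam, h, tau, s_z):
    """2x2 effective Hamiltonian of one region j; Hermitian for a
    uniform hopping t within the region."""
    off = tau * a * t * k * np.exp(-1j * tau * theta)
    return np.array([[E_c - h * s_z, off],
                     [np.conj(off), E_v + tau * s_z * lam - h * s_z]])

# Illustrative monolayer-scale parameters (energies in eV):
pars = dict(E_c=0.9, E_v=-0.9, a=0.32, t=1.1, k=0.5, theta=0.0,
            lam=0.075, h=0.05)
bands = {(tau, s_z): np.linalg.eigvalsh(H_region(tau=tau, s_z=s_z, **pars))
         for tau in (+1, -1) for s_z in (+1, -1)}
# The exchange h splits spins, while lam splits valleys in the valence band.
for (tau, s_z), E in bands.items():
    print(tau, s_z, E)
```

A non-uniform $t_j(x)$ would make the matching problem non-Hermitian, which is the caveat raised above.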
\begin{figure}[tbph!]
\centering
\includegraphics[width=0.4\textwidth]{Fig21.pdf}
\caption{Lateral quantum well HSs, (a) WS$_{2}$-MoS$_{2}$-WS$_{2}$ and (b) MoS$_{2}$-WS$_{2}$-MoS$_{2}$. The WS$_2$ regions experience proximity-induced exchange bias. The main results are indicated by arrows: the initial incident beam (green), and the $K$ (red) and $K'$ (blue) valleys. In (a) the incident beam is in the central region, and the reflected $K$ and $K'$ beams are spatially separated. In (b) the beam is incident from the left on the WS$_{2}$ region, reflecting back and transmitting spatially separated $K$ and $K'$ valleys into the MoS$_{2}$. In both cases, the incident electron comes from MoS$_{2}$, since its CBM is lower than in WS$_{2}$. Reprinted with permission from \cite{Ghadiri2018JAP}. Copyright 2018 AIP Publishing, Journal of Applied Physics.}
\label{Ghadiri2018JAP}
\end{figure}
When MoS$_{2}$ is the quantum well middle region, it can act as a waveguide due to multiple internal reflections at both interfaces, leading to effective confinement in the middle MoS$_{2}$ region. The $\sigma_{re,s_{z}}^{\tau}$ GH shifts are calculated to be of the order of the Fermi wavelength $\lambda_{F}$, leading to the spatial separation of electrons with distinct valley indexes as shown in figure\ \ref{Ghadiri2018JAP}(a). Multiple internal reflections increase $\sigma_{re,s_{z}}^{\tau}$, creating an efficient valley splitter. Additionally, a spin splitter can be achieved by selecting specific angles of incidence.
When the magnetized WS$_{2}$ is the middle region, the transport across the structure shows transmission resonances and GHL spin/valley dependent lateral shifts of the reflected and transmitted beams, as shown in figure\ \ref{Ghadiri2018JAP}(b). GHL shifts can be enhanced near transmission resonances, achieving full spin-valley beam splitter as structure and current injection parameters are tuned.
In an effective mass approximation, Mishra \emph{et al.} \cite{Mishra2018oneDimensional} have found that a 1D spin channel exists at the lateral in-plane interface between two TMDs, produced by the Rashba electric field perpendicular to the HS \cite{Sinova2015}.
Lateral metal-metal HSs with strong interfacial SOC have also been suggested to host spin currents \cite{Borge2017}. In the case of the TMD HS, the electric field is provided by the band offset at the interface. As electrons propagate parallel to the interface, they experience an effective magnetic field pointing out of plane, which polarizes them. The interface channel is modeled using an effective mass approximation for the conduction band minimum at the K point. The Hamiltonian describing longitudinal (L) and transverse (T) motion at the interface is given by
\begin{equation}
\label{MishraHamiltonian1}
H=\frac{p^2_x}{2m_L}+\frac{p^2_y}{2m_T}+\frac{\alpha}{\hbar}|E_y| p_x \sigma_z + V_{\rm conf}(x,y),
\end{equation}
which includes spin orbit coupling $\alpha$, and confining potential $V_{\rm conf}$.
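For plane waves along the interface ($p_x=\hbar k_x$, ignoring $V_{\rm conf}$), the third term of equation (\ref{MishraHamiltonian1}) yields an out-of-plane spin splitting that is odd in $k_x$. A minimal numerical sketch, in natural units with illustrative parameter values:

```python
hbar = 1.0                # natural units for this sketch
m_L, m_T = 0.5, 0.6       # illustrative effective masses
alpha_Ey = 0.1            # alpha * |E_y|, illustrative coupling strength

def dispersion(kx, ky, s_z):
    """Plane-wave limit of the interface Hamiltonian:
    E = hbar^2 kx^2/(2 m_L) + hbar^2 ky^2/(2 m_T) + s_z * alpha|E_y| * kx."""
    return (hbar**2 * kx**2 / (2 * m_L)
            + hbar**2 * ky**2 / (2 * m_T)
            + s_z * alpha_Ey * kx)

# The splitting is odd in kx: carriers moving in opposite directions
# along the interface acquire opposite out-of-plane polarizations.
split = dispersion(1.0, 0.0, +1) - dispersion(1.0, 0.0, -1)
print(split)  # 2 * alpha_Ey = 0.2
```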
The one-dimensional character of the spin generation becomes evident, as it peaks exactly at the interface.
Although the pseudo magnetic field at the interface does not produce a transverse force on the carriers, there could be a flow out of the interface by diffusion, leading the polarization to depend on the spin diffusion length. If the diffusion length is larger than or comparable to the device width, the 1D spin channel is lost.
When the device is p-doped, and sectors of the valence band are reached, there is cooperation between the interface spin polarization and the intrinsic spin orbit. The spin polarization then reaches 0.75\%, much larger than the 0.1\% seen in the conduction band.
\subsubsection{Interfaces between different phases: topological, structural and transport effects}
\label{subsubsec:PhaseInterfacesTHEORY}
A few works have addressed the coexistence between phases separated by an atomically sharp interface, such as those already available experimentally--see section\ \ref{subsec:PhaseInterfaces}.
In 2016, Olsen \cite{Olsen2016} suggested the design of lateral TMD HS across different regions within the same 1T'-MoS$_2$ monolayer. The first phase is the natural quantum spin Hall insulator of 1T'-MoS$_2$, while a second region is made into a trivial insulating phase by adsorption of different atomic species, including O, F, and Cl. DFT calculations show that a topological 1D metallic state arises at the sharp interface between these phases. The interfacial state is further tested against boundary reconstruction and disorder, and seen to persist as a single level crossing the gap. Hence this platform is suggested to study topologically protected conductivity in 1D.
A monolayer triangular island of T'-MoS$_2$ phase surrounded by H-MoS$_2$ phase has been synthesized in experiments upon electron irradiation, as mentioned in section\ \ref{subsec:PhaseInterfaces}. With this system in mind, Kretschmer \emph{et al.} \cite{Kretschmer2017} studied structural transitions and effects of strain and vacancies. They find that charge redistribution promotes phase transitions, driven by electronic excitations and formation of vacancies while a monolayer is illuminated. The interface between T' and H phases is found to be atomically sharp with some S-deficiency.
The effects of the type and crystallographic orientation of the interface between the 1T and 1H phases of MoS$_2$ have recently been studied in transport calculations, with DFT and non-equilibrium Green functions \cite{Aierken2018}. The interface geometry between phases is found to be decisive, as an armchair interface is more conductive than a zigzag one. This occurs because in the former the Mo zigzag chains lie along the transport direction. Electron doping and Mo-substitutional doping with Re or Ta atoms are suggested for stabilization of the 1T metallic phase and reduction of the Schottky barrier.
Other phase-based planar HSs have also been proposed, creating the phases by top/bottom gating potentials. Lateral field-effect transistors composed of adjacent regions of T' and H phases of MoSe$_2$ appear to have better performance and lower power consumption than conventional CMOS \cite{Marian2017}. Nevertheless, ideal contacts are assumed and no atomic-geometry interface effects are analyzed.
\subsubsection{Lateral heterostructures with other materials}
\label{subsubsec:WithOtherMaterials}
Semiconducting group VIB TMDs have also been proposed to form lateral HSs with metallic NbS$_2$ \cite{Wu2015NbS2,Liu2017NbS2,Yang2017NbS2} and NiTe$_2$ \cite{Aras2018}. Moreover, since NbS$_2$ (as well as NbSe$_2$ and TaS$_2$) exhibits superconductivity and charge density waves at low temperatures \cite{Heil2017,Xi2015NbSe2,Xi2016Ising,Sohn2018NbSe2,Barrera2018tuning},
while NiTe$_2$ has been shown to be a type-II Dirac semimetal \cite{Xu2018NiTe2}, new and interesting lateral HS properties may be expected in combination with other TMDs. We briefly summarize results in this rapidly developing area.
Armchair-terminated (zigzag interfaces) MoS$_2$-NbS$_2$ quantum wells have been predicted to exhibit semiconducting or metallic behavior, depending on which material forms the quantum well \cite{Wu2015NbS2}. Zigzag-terminated structures were seen to exhibit resonant tunneling transport \cite{Liu2017NbS2}, while armchair-terminated ones are predicted to be excellent ambipolar devices \cite{Liu2017NbS2,Yang2017NbS2}.
The bandgaps of these NRs appear insensitive to their lateral dimension \cite{Wu2015NbS2, Liu2017NbS2}.
Periodic arrays of alternating strips of zigzag-terminated MoTe$_2$-NiTe$_2$ ribbons have been modeled \cite{Aras2018}. They show metallic behavior for narrow strips, while large strips serve as metal-semiconductor junctions with tunable Schottky barriers, due to the confinement of electronic states in different TMDs.
Other examples of lateral HSs of group VIB semiconducting TMDs and metallic CrX$_2$ and VX$_2$ materials have also been suggested for solar energy and photocatalytic water-splitting applications, aided by the corresponding band alignments \cite{Wei2014,Zhao2018Designing}.
\section{Applications}
\label{sec:Applications}
Materials growth and device design continue to improve, allowing for deeper tunability and functionality in experiments, while more theoretical proposals continue to appear. One open question now is how feasible and effective are some of these proposed applications. In this section we discuss devices and applications already achieved in experiments, as well as some proposed by theoretical studies.
\subsection{Experimentally achieved}
\label{subsec:ExperimentalApplications}
Lateral TMD HSs have emerged as effective alternatives for producing in-plane \emph{p-n} junctions, critical components in electronic and optoelectronic applications.
Of particular interest is the ability for enhanced exciton trapping at interfacial regions, due to the built-in potential at the interface.
The strong PL enhancement observed at the interface would ideally be the result of enhanced recombination at the HS \cite{Gong2014NatMat,Huang2014NatMat,Duan2014NatNano}, even if in many samples it is likely assisted by exciton trapping by defects.
As discussed previously, the band alignments shown in figure\ \ref{Kang2013} help determine which material may act as \emph{n}-type or \emph{p}-type across the interface, such as WS$_2$ being \emph{n}-type and WSe$_2$ \emph{p}-type \cite{Duan2014NatNano}, or WS$_2$ serving as \emph{p}-type and MoS$_2$ as \emph{n}-type \cite{Chen2015Electronic}. This has allowed the fabrication of lateral \emph{p-n} diodes \cite{Gong2014NatMat,Duan2014NatNano,Li2015,Gong2015TwoStep,Chen2015Electronic,Chen2016Lateral,Son2016Observation,Zhang2017Robust,Wu2018SelfPowered}, as well as \emph{n-n} \cite{Mahjouri2015patterned} junctions. The \emph{p-n} diodes have been shown to serve as inverters with high voltage gain \cite{Duan2014NatNano}, large rectification ratios ($10^{5}$ \cite{Zhang2017Robust}, 10$^6$ \cite{Xie2018ParkGroup}), and high electroluminescence \cite{Xie2018ParkGroup}. Also, MoS$_2$-graphene lateral HSs have been built and tested as transistors, with good rectifying behavior and switching ratios as high as 10$^{6}$ \cite{Ling2016}.
Devices with large photoresponse and photovoltaic effects have also been studied. The solar cell efficiency of lateral WSe$_2$-MoS$_2$ HSs \cite{Tsai2017SingleAtomically} has been found to be high, with excellent power conversion efficiency under illumination, and omnidirectional light harvesting capability, not seen in vertical TMD solar cells. The \emph{cheetah spots} mosaic configurations in MoS$_2$-MoSe$_2$ have been used as photodetectors, showing enhanced performance with respect to the individual components, as band alignments appear to suppress photoexcited electron-hole recombination, leading to effective \emph{n}-doping of MoS$_2$ and \emph{p}-doping of MoSe$_2$ \cite{Chen2017InPlaneMosaic}. Self-powered photovoltaic light sensors based on MoS$_2$-WS$_2$ exhibit large spectral responsivity and detectivity coefficients \cite{Wu2018SelfPowered}.
LED designs have also been built with double HS transistors between WS$_2$ (\emph{n}-type) and WSe$_2$ (\emph{p}-type) \cite{Xie2018ParkGroup} monolayers. The luminescence in these devices originates from the interface, suggesting electrons (from WS$_2$ side) and holes (from WSe$_2$ side) recombine in its vicinity.
\subsection{Theoretically proposed}
\label{subsec:TheoreticalApplications}
Theoretical proposals highlight the powerful features of the lateral TMD HSs. Optoelectronics applications are perhaps the most developed, including excitonic solar cells, and photocatalysis, as effective separation and collection of photo-induced excitons improves efficiency. However, applications based on electronic transport and magnetic properties have also been proposed.
Excitonic effects have been studied by DFT \cite{Wei2015,Yang2017} and other calculations \cite{Lau2018ArxivExcitons}, as discussed above. The effective band bending, relative band alignment and associated barriers confine photogenerated carriers to opposite sides of the interface and suppress recombination, which yields higher solar conversion efficiency \cite{Wei2015,Wei2015SciRep,Mu2018MRExpress}. The electron-hole separation was shown to persist for up to 12\% of uniaxial strain. Moreover, strain can significantly increase the power conversion efficiency of lateral HSs. A 4\% uniaxial tensile strain could increase the efficiency of the lateral MoS$_2$-WS$_2$ (MoSe$_2$-WSe$_2$) heterostructure by about 35\% (15\%), when compared to the pristine system \cite{Lee2017}.
The use of vacancies has also been proposed. Localized arrays of S-vacancies at interfaces, along with type-II band alignment and built-in electric field could improve the energy conversion efficiency in photocatalysis, given that the ingap states caused by vacancies can activate and optimize hydrogen evolution reactions \cite{Li2016activating}. Strain effects at the interface can also be utilized, as WSe$_2$-MoS$_2$ and MoS$_2$-WSe$_2$ HSs may exhibit photon-induced Coulomb drag over the interface region \cite{Wei2017}.
Lastly, the effect of adsorption of different gas molecules near the HS region has been studied with DFT in MoS$_{2}$-WS$_{2}$ and gases such as CO, H$_2$O, NH$_3$, NO, and NO$_2$ \cite{Sun2016}. The rectification behavior and value of the passing current can be altered by adsorption, and this sensitivity promises HSs as superior gas sensors in practical applications \cite{Sun2016}.
Transport studies have suggested that the zigzag interface in MoS$_2$-WS$_2$ nanoribbons can be used as high-performance thermoelectric materials, promising applications with high values of the ZT figure of merit \cite{Zhang2016SciRep}. Moreover, WS$_2$-MoS$_2$-WS$_2$ quantum wells have been proposed as spin-valley filters and splitters without external gating \cite{Ghadiri2018JAP}. Electrons with different spins and valleys can be well separated spatially by tuning the Fermi energy and current incident angle to the interface.
Similar ideas suggest the WS$_2$-MoS$_2$ interface as an electron waveguide, useful in spintronic applications.
Enhanced magnetic exchange driven by the interface electrons has also been predicted at these 1D HSs \cite{AvalosOvando2018Arxiv}. Here the hybridization of magnetic impurities and the tunability of the Fermi level through interface midgap states can serve to implement tunable spin chain systems for information transfer or storage \cite{Menzel2012}. Interactions between magnetic impurities can be tuned by gating and/or separation.
\section{Prospective directions}
\label{sec:ProspectiveDirections}
A great deal of attention has been focused on obtaining long interfacial regions, with several groups already achieving $\mu$m lengths. Many of these are, however, not fully pristine or sharp, containing defects and a diffusive interfacial region, which in the best cases is only 4 atomic rows wide \cite{Sahoo2018Nature,Xie2018ParkGroup}. This is an excellent development compared to the diffusive/alloy interfacial regions that existed only 4 years ago and that could extend from several nm to a few $\mu$m away from the interface. Both zigzag and armchair interfaces can be obtained, the first most often with CVD processes. The cleanest and sharpest pristine zigzag interfaces are now $\sim$50 nm or longer, while armchair HSs are no more than a few nm long.
Other important experimental issues to be addressed include scalability and HS degradation, as well as selectivity in overall crystal shape and dimensions.
Protection from the environment has seen progress in promising combinations or encapsulation with materials such as graphene and hBN \cite{Ling2016,Murthy2018Intrinsic}. This should facilitate exploring the real interface features, as well as avoiding HS degradation. State-of-the-art sub-\AA\,microscopy could also be used to further explore these interfaces \cite{Winkler2018Absolute,Jiang2018ElectronPtychography}.
We have also described notable advances in strain control at interfaces providing `predictable wrinkling' and coherent crystals over large scales \cite{Xie2018ParkGroup}.
This is likely the beginning of `kirigami' efforts using TMDs \cite{kirigami}, which would bring additional optical functionalities to that field.
Theoretical efforts to accompany experimental realizations must take into account realistic effects, including atomic relaxation at the interface, dislocations, impurities, vacancies, and strain fields. Recent DFT studies have started to consider such realistic features of HSs and their effect on electronic, structural and dynamical properties. A first experimental study has tackled the change in band alignment by analyzing atomic lines at the interface \cite{Zhang2018strain}. Nevertheless, more studies are needed, although the intrinsic exposed nature of these HSs gives unprecedented access to study materials interfaces.
Multiscale models are also being used to study HS systems where DFT approaches cannot account for simulations of thousands of atoms, random distributions of atomic vacancies or impurities, and large-scale strains, among others. These studies have been able to predict features not yet seen in experiments or anticipated from DFT calculations. Further improvements within effective-mass, tight-binding, or molecular-dynamics treatments must address realistic systems, supported by analysis of experimental or DFT results. We trust that this review will serve as a starting point from which the experiment-theory-numerics interplay can be readily accessed to motivate further advances.
Doubtlessly, efforts in this field are increasing and experiments are becoming more sophisticated. We anticipate further improvements in the length and quality of the interfaces, which will be important for realistic/practical device geometries and for the implementation of HSs as novel physical environments in which to test tantalizing theoretical proposals.
\section{Summary}
\label{sec:SummaryFinal}
We have presented advances in group-VIB semiconducting TMD lateral heterostructures. These materials have been in the spotlight in the last few years, given their attractive properties at the monolayer level. Their direct bandgaps, large spin-orbit coupling, and controllable interface features make them promising for optoelectronics, spintronics, and valleytronics applications. Moreover, the combinations of materials have shown different and often enhanced response functions with respect to their pristine counterparts.
Our focus on the lateral connection between distinct TMD monolayers has highlighted the unusual one-dimensional interfaces, with many beautiful examples already seen in experiments. Theoretical studies have started to appear, led by numerical DFT as well as other more recent approaches with complementary emphasis. We have summarized ongoing trends and developments in numerical and theoretical studies, as well as experimental milestones. Available theoretical studies suggesting possible uses for these unique 1D states at the interface have also been discussed. It is clear that the interface provides an interesting platform for achieving 1D physical systems with unique features.
We look forward to more theoretical studies and experiments in this growing field. Lateral HSs provide exciting opportunities for monolayer systems with tailored properties and the ultimate tunable 2D/1D interchangeable system.
\ack We acknowledge support from NSF grant DMR 1508325.
\section*{References}
\bibliographystyle{iopart-num}
gr-qc/0609103
\section{Motivation}
General Relativity (GR) is a viable theory of gravity,
very robust in its underlying principles. It is known to consistently
describe the massless tensor graviton as a part of the metric
field. This is ensured by general covariance (GC), which
serves as the gauge symmetry eliminating the degrees of freedom
contained in the metric in excess of the massless tensor
graviton. Nevertheless, phenomenologically, the application of GR to
cosmology encounters a number of problems, chief among which are those
of dark energy (DE) and dark matter~(DM). In particular, to
solve the latter problem one usually invokes conventional or
hypothetical matter particles, remaining still in the realm of~GR.
Since the ultimate role of DM is, in essence, to participate only in
gravitational interactions, one can try to attribute this role to
the additional degrees of freedom contained in the metric, going thus
beyond~GC. With this in mind, I discuss in the given report a
self-consistent extension of GR, with explicit violation of GC down to
the residual unimodular covariance~(UC). In addition to the massless
tensor graviton, such an extension describes a massive scalar
graviton as a part of the metric field. The scalar graviton is
proposed as a source of gravitational DM, as well as of the
scale-dependent part of DE.\footnote{The report is partly based on
ref.~\cite{Pirogov1}, where more details can be found.}
\section{GC and beyond}
\paragraph{Poincar\'e group}
Let us first discuss the problem of GC violation from the point
of view of particle representations in relativistic quantum mechanics. Free
particles are described by the irreducible finite-dimensional
unitary representations of the Poincar\'e group
$ISO(1,3)$~\cite{Wigner}. The proper representations
$(m, s)$ are characterized by the mass $m$ and spin~$s$. Massless
particles, $m=0$, possess an isotropic momentum
$k_\mu$, $k\cdot k=0$.
The invariance group of the momentum (the ``little'' group)
proves to be $ISO(2)$, which is noncompact. The unitary
representations of noncompact groups are known to be
infinite-dimensional, except for the scalar representations. Thus, for
a unitary representation of the Poincar\'e group to
be finite-dimensional, the noncompact generators of the
little group (here the ``translations'' of $ISO(2)$) should act
trivially on the representation. It follows that massless
particles of spin
$s\ge 1$ should be described not by rays in a Hilbert space
but by the respective equivalence classes. This means that the theory
for spin $s\ge 1$ should possess invariance under
transformations within the proper equivalence classes, in other words,
be gauge invariant. Thus, gauge invariance is not a mere accident
but is in fact deeply rooted in the unitarity requirement of
relativistic quantum theory.
Recall that the spin-one massless particle, say, the photon, is described
by a transverse vector $\hat A_\mu(k)$, $k\cdot \hat A=0$. The
gauge transformation required for the triviality of the noncompact
generators, and thus for unitarity, is $\hat
A_\mu\to \hat A_\mu+\alpha k_\mu $, with $\alpha(k)$ a scalar. The
respective gauge group is~$U(1)$.
Due to this, one is left with the two-component photon
possessing helicities $\lambda=\pm 1$.
Likewise, the spin-two massless particle, the graviton, is described
by the transverse-traceless symmetric tensor $\hat
h_{\mu\nu}(k)$, with $k^\mu \hat h_{\mu\nu}=0$ and
$\hat h^\mu_\mu=0$~\cite{Vanderbij}.
The gauge transformations required for the triviality of the $ISO(2)$
translations prove to be
\begin{equation}\label{gt}
\hat h_{\mu\nu}\to \hat h_{\mu\nu}+ \xi_\mu k_\nu+\xi_\nu k_\mu,
\end{equation}
with $\xi_\mu(k)$ restricted by $k\cdot \xi=0$. The respective
three-parameter group corresponds precisely to UC.
Altogether, one arrives at the two-component graviton with the
helicities $\lambda=\pm 2$.
Thus, UC is necessary and sufficient for the consistent description of
the massless tensor graviton. At the same time, the massive scalar
graviton can additionally be represented by the independent scalar
$\hat h(k)$ for the time-like momentum $k_\mu$, $k\cdot k=m^2> 0$.
The little group of such a momentum being the compact $SO(3)$, the
respective gauge transformations are trivial.
One can abandon the irreducibility requirement for the representation of
the massless tensor graviton, describing the latter at $k\cdot k=0$ by
an arbitrary transverse symmetric tensor $\hat h_{\mu\nu}(k)$, $\hat
h^\mu_\mu\neq 0$. For consistency, this requires the whole gauge
group, with arbitrary $\xi_\mu$ corresponding to GC. Under these
transformations, the trace changes as $\hat h^\mu_\mu\to \hat
h^\mu_\mu+2k\cdot \xi$ and thus can be removed, leaving no scalar
graviton. It follows that GC, with $\xi_\mu$ unrestricted,
though being commonly used and sufficient to consistently describe the
massless tensor graviton, is in fact redundant.
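The component bookkeeping behind these two routes to the same two graviton helicities can be sketched as a trivial tally (an illustrative check, not part of the original derivation):

```python
# Bookkeeping for the massless graviton helicity count (illustrative).
sym = 4 * 5 // 2          # independent components of a symmetric 4x4 tensor: 10
transverse = sym - 4      # transversality k^mu h_{mu nu} = 0 removes 4 -> 6

# UC route: tracelessness plus the 3-parameter gauge freedom (k.xi = 0)
uc_count = transverse - 1 - 3
# GC route: trace kept, full 4-parameter gauge freedom (arbitrary xi_mu)
gc_count = transverse - 4

assert uc_count == gc_count == 2   # helicities lambda = +-2 in both cases
```

Either way one lands on the two physical helicities, which is why the fourth gauge parameter of GC is redundant for the tensor graviton alone.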
\paragraph{Field theory}
Let $x^\mu$, $\mu=0,\dots,3$, be the arbitrary observer's
coordinates. Let us now consider the same problem of the GC violation
in the framework of the Lorentz-invariant local field theory of the
symmetric tensor $h_{\mu\nu}(x)$. The latter is treated as a part of
the dynamical metric field $g_{\mu\nu}(x)$. The effective field theory
of the metric is to be built of the metric itself and its first
derivatives $\partial_\lambda g_{\mu\nu}$ (as well as, generally, the
higher ones). Equivalently, one can use the
Christoffel connection $\Gamma^\lambda{}_{\mu\nu}(g_{\rho\sigma})$, which is
in one-to-one correspondence with the first derivatives of the
metric. Now, $\Gamma^\lambda{}_{\mu\nu}$ is not a tensor and as such
cannot generally be used as the Lagrangian field variable. To remedy
this, introduce the new field variable
\begin{equation}
\Omega^{\lambda}{}_{\mu\nu}= \Gamma^{\lambda}{}_{\mu\nu}-
\tilde\Gamma^{\lambda}{}_{\mu\nu},
\end{equation}
with the compensating term
$\tilde\Gamma^{\lambda}{}_{\mu\nu}$ being an external nondynamical
affine connection. As the difference of the two connections,
$\Omega^{\lambda}{}_{\mu\nu}$ is the tensor and can thus serve as the
Lagrangian field variable.
Generally, $\tilde\Gamma^{\lambda}{}_{\mu\nu}$ contains forty
components. Allowing for the
four-parameter coordinate freedom to bring four components
of $\tilde\Gamma^{\lambda}{}_{\mu\nu}$ to a
canonical form, thirty-six free components still remain.
Thus, GC is completely violated. But for the field theory
of the metric to be consistent, at least a three-parameter residual
covariance is obligatory. This can be shown as follows.
Consider the linearized approximation (LA) of the metric theory by
putting $g_{\mu\nu}=\eta_{\mu\nu} + h_{\mu\nu}$,
with $h_{\mu\nu}$ being the
symmetric tensor field, $|h_{\mu\nu}|\ll 1$, and $\eta_{\mu\nu}$
being the Minkowski symbol.
Specify some coordinates $x^\mu=(x^0,x^m)$, $m=1,2,3$, and decompose
the symmetric Lorentz tensor $h_{\mu\nu}(x)$ in terms of the $SO(3)$
fields as $h_{\mu\nu}=(h_{00}, h_{m0}, h_{mn})$. The second, namely,
the three-vector component of the decomposition possesses
the wrong norm, thus violating unitarity. For unitarity to be
preserved, this ``dangerous'' component should be eliminated. This
requires, at the least, a three-parameter residual gauge symmetry.
In GR, one invokes the four-parameter gauge transformations
\begin{equation}
h_{\mu\nu}(x)\to h_{\mu\nu}(x)+\partial_\mu \xi_\nu+\partial_\nu
\xi_\mu
\end{equation}
with arbitrary $\xi_\mu(x)$ in accord with GC. Together with the
three wrong-norm components $h_{m0}$, these transformations eliminate
one more right-norm component. In the transverse gauge, $\partial^\mu
h_{\mu\nu}=0$, on the mass shell, $\partial\cdot\partial
h_{\mu\nu}=0$, accounting for the
residual gauge freedom with the harmonic
parameters, $\partial \cdot \partial\xi_\mu=0$, one arrives
explicitly at the two-component graviton. (Here one puts
$\partial\cdot \partial =\partial_\mu \partial^\mu$ and similarly for
any two vectors in what follows.) This procedure is quite reminiscent
of the electrodynamics, where the vector field $A_\mu(x)=(A_0,A_m)$
possesses one component, namely the scalar one, with the
wrong norm. To eliminate this component, the one-parameter gauge
symmetry $U(1)$ is required: $A_\mu\to A_\mu+\partial_\mu \alpha$, with
arbitrary $\alpha(x)$. In the transverse gauge, $\partial\cdot A=0$, on
the mass shell, $\partial \cdot \partial A_\mu=0$, with account for
the residual harmonic transformations, $\partial \cdot
\partial\alpha=0$, one is left explicitly with
the two-component photon.
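The parallel on-shell counting for the photon and the GR graviton can be summarized as follows (an illustrative tally, with the gauge and residual-gauge conditions counted as in the text):

```python
# On-shell component counting (illustrative).
# Photon: 4 components of A_mu; the transverse gauge fixes 1,
# the residual harmonic freedom removes 1 more.
photon = 4 - 1 - 1
# Graviton in GR: 10 components of h_{mu nu}; the transverse gauge
# fixes 4, the residual harmonic freedom removes 4 more.
graviton = 10 - 4 - 4
assert (photon, graviton) == (2, 2)
```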
To allow for some residual covariance one should
reduce the number of the free components in
$\tilde\Gamma^{\lambda}{}_{\mu\nu}$. To this end, suppose that
$\tilde\Gamma^{\lambda}{}_{\mu\nu}$ is the
Christoffel connection for an external nondynamical metric
$\tilde g_{\mu\nu}$. The latter contains generally ten free
components. Allowing for the four-parameter coordinate freedom there
are left six independent nondynamical fields. Thus, such a reduction of
the number of the fields is insufficient to
leave any residual covariance. A possible way out is to confine
oneself to the contraction $\tilde\Gamma^{\lambda}{}_{\mu\lambda}$.
Due to the relation
$\tilde\Gamma^{\lambda}{}_{\mu\lambda}=\partial_\mu \ln\sqrt{- \tilde g}$, with
$\tilde g$ being the determinant of $\tilde g_{\mu\nu}$, the theory
depends in this case on just one nondynamical field.
The respective Lagrangian field variable becomes
\begin{equation}
\Omega_\mu =
\Gamma^{\lambda}{}_{\mu\lambda}-\tilde\Gamma^{\lambda}{}_{\mu\lambda}
=\partial_\mu\ln\sqrt{g/\tilde g}
\end{equation}
In this marginal case, with the nondynamical metric entering only
through $\tilde g$, one can treat the latter just as a
scalar density of the proper weight.
One can always choose the coordinates so that $\tilde g=-1$. Under the
variation of the coordinates $\delta x^\mu= -\xi^\mu$, the scalar density
$\tilde g$ varies as $\delta \sqrt {-\tilde g}=
\partial\cdot (\sqrt{-\tilde g}\xi)$.
The residual covariance is the one which leaves the canonical value
$\tilde g=-1$ invariant, requiring $\partial\cdot\xi=0$. This is
the three-parameter UC. In this case, one more
independent component is left in the dynamical metric. Precisely this extra
component corresponds to the scalar graviton, which can
supplement the tensor graviton without violating the consistency of
the theory. Note finally that the dependence on the external
nondynamical field $\tilde g$ (more generally, on $\tilde
g_{\mu\nu}$) would tacitly imply that the metric Universe, contrary to
what is assumed in GR, is not a self-contained system and
cannot entirely be described in the internal dynamical terms.
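The transformation law $\delta\sqrt{-\tilde g}=\partial\cdot(\sqrt{-\tilde g}\,\xi)$ used above is just the Lie derivative of a weight-one scalar density; in one dimension it reduces to the product rule, which can be checked symbolically (a toy check, not from the original):

```python
import sympy as sp

x = sp.symbols('x')
s = sp.Function('s')(x)    # stands in for sqrt(-g~), a weight-one density
xi = sp.Function('xi')(x)  # stands in for the shift parameter

# Lie derivative of a weight-one density in one dimension:
# transport term plus the Jacobian (weight) term.
delta_s = xi * sp.diff(s, x) + s * sp.diff(xi, x)

# This is a total derivative, matching delta sqrt(-g~) = d(sqrt(-g~) xi),
# so the canonical value g~ = -1 is preserved exactly when d.xi = 0.
assert sp.simplify(delta_s - sp.diff(s * xi, x)) == 0
```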
\section{Scalar graviton}
\paragraph{Lagrangian}
Let us study the theory of the dynamical metric field $g_{\mu\nu}$
and the generic matter field $\phi_{\rm m}$ with the generic action
\begin{equation}\label{GCV}
I=\int\Big(L_{\rm g}(g_{\mu\nu})+ \Delta L_{\rm g}(g_{\mu\nu},\chi)
+ L_{\rm m}( \phi_{\rm m}, g_{\mu\nu})+
\Delta L_{\rm m}( \phi_{\rm m}, g_{\mu\nu}, \chi)
\Big) \sqrt{-g}\,d^4x,
\end{equation}
where
\begin{equation}
\chi=\ln\sqrt{g/\tilde g}.
\end{equation}
Here $g=\det g_{\mu\nu}$ and $\tilde g$ is a nondynamical
scalar density of the same weight as $g$. Being a function of the
ratio of the two similar scalar densities, $\chi$ is itself a
scalar and thus can serve as the Lagrangian field variable.
In the above, $L_{\rm g}$ and $\Delta L_{\rm g}$ are, respectively,
the generally covariant and the GC violating contributions of the
gravity. Likewise, $L_{\rm m}$ and $\Delta L_{\rm m}$ are the matter
Lagrangian, respectively, preserving and violating GC. All
the Lagrangians above are assumed to be the scalars.
Conventionally, take as~$L_{\rm g}$ the Einstein--Hilbert
Lagrangian with the $\Lambda$-term:
\begin{equation}\label{EH}
L_{\rm g}=- \frac{1}{2}M_{\rm P}^2 \Big(
R(g_{\mu\nu})-2 \Lambda\Big),
\end{equation}
where $R=g^{\mu\nu} R_{\mu\nu}$ is the Ricci scalar, with
$R_{\mu\nu}$ being the Ricci curvature, and $\Lambda$ is the
cosmological constant. Also,
$M_{\rm P}=(8\pi G_{\rm N})^{-1/2}$ is the Planck mass, with
$G_{\rm N}$ being the Newtonian constant. Present the scalar
graviton Lagrangian $\Delta L_{\rm g}$ as
\begin{equation}\label{Ls}
\Delta L_{\rm g}= \Delta K_{\rm g}(\partial_\mu\chi, \chi)-\Delta
V_{\rm g}(\chi),
\end{equation}
with $\Delta V_{\rm g}$ being the potential.
In the lowest order, the kinetic term $\Delta K_{\rm g}$ looks like
\begin{equation}\label{K}
\Delta K_{\rm g}=\frac{1}{2} \kappa_0^2\,\partial
\chi\cdot\partial \chi ,
\end{equation}
with $\kappa_0$ being a constant with the dimension of mass.
The proposed extension of GR is more deeply rooted in the affine
Goldstone approach to gravity~\cite{Pirogov2}.
This approach is based on two symmetries: the
global affine symmetry (AS) and GC. AS determines the theory in the
local tangent space, whereas GC ensures the matching among the
various tangent spaces. Most generally, such a theory depends
on an external nondynamical metric $\tilde g_{\mu\nu}$. This
dependence violates
GC and reveals the extra degrees of freedom contained in the dynamical
metric $g_{\mu\nu}$. Call such
an extended metric theory of gravity the ``metagravity''.
Its minimal version, as considered in the report, depends just on
$\tilde g$ and describes only the scalar graviton in addition to the
tensor one. The so-reduced theory will specifically be called the
``scalar-tensor metagravity''.\footnote{This theory is not to be confused
with the ``scalar-tensor gravity''~\cite{BD}. The latter is the
generally covariant extension of GR by means of a genuine scalar
field, which cannot completely be absorbed by the metric. Also, the
theory proposed is to be distinguished from
the ``Unimodular Relativity'' based on UC but with the dynamical
metric scale completely changed for the nondynamical one~\cite{Gal}.}
More generally, the metagravity
can encompass also the vector graviton~\cite{Pirogov3}, though in this
case the unitarity is to be violated as well.
In the Lagrangian $\Delta L_{\rm g}$ above, $\Delta K_{\rm g}$
violates only GC, with $\Delta V_{\rm g}(\chi)$ violating also AS.
The GC violating part of the matter Lagrangian, $\Delta L_{\rm m}$,
can be postulated in the simplest form as
\begin{equation}
\Delta L_{\rm m}=- f_0J_{\rm m}(\phi_{\rm m}, g_{\mu\nu})\cdot
\partial \chi,
\end{equation}
where $J_{{\rm m}\mu}$ is the
matter current and $f_0$ is a scalar. In the case when $f_0$ is a
constant, $\Delta L_{\rm m}$ above violates only GC, still preserving
AS. A possible dependence of $f_0$ on
$\chi$ would reflect the violation of AS, though
still preserving UC. Allowing for $f_0\to 0$, independently of
$\kappa_0$, the matter sector can be
made as safe in the confrontation of the theory with experiment as
desired. For this reason, $\Delta L_{\rm m}$ will be disregarded in
what follows.
\paragraph{Classical equations}
By varying the action~(\ref{GCV}) with respect to $g^{\mu\nu}$,
$\tilde g$ being fixed, one arrives at the modified gravity
equation:
\begin{equation}\label{eomg}
G_{\mu\nu} = M_{\rm P}^{-2}\Big( T^{(\rm m)}_{\mu\nu} + \Delta
T^{(\rm g)}_{\mu\nu}\Big).
\end{equation}
Here
\begin{equation}
G_{\mu\nu}=R_{\mu\nu}-\frac{1}{2}( R - 2\Lambda)g_{\mu\nu}
\end{equation}
is the usual gravity tensor, and $T^{(\rm m)}_{\mu\nu}$ is the matter
energy-momentum tensor defined by~$L_{\rm m}$. The term $\Delta
T^{(\rm g)}_{\mu\nu}$ is the scalar-graviton contribution, which
reads as follows:
\begin{eqnarray}\label{DT}
\Delta T^{(\rm g)}_{\mu\nu}&=&
\kappa_0^2 \Big(\partial_\mu\chi \partial_\nu\chi-
\frac{1}{2}\partial\chi\cdot \partial\chi g_{\mu\nu} \Big)
+\Delta V_{\rm g} g_{\mu\nu}\nonumber\\
&&+\Big(\kappa_0^2\nabla\cdot\nabla \chi+
\frac{\partial \Delta V_{\rm g}}{\partial \chi}\Big) g_{\mu\nu}.
\end{eqnarray}
Mutatis mutandis, the first line of the equation above is the
ordinary energy-momentum tensor of a scalar field. The second
line contains the effective wave operator of the field, with $\nabla_\mu$
being the covariant derivative, $\nabla_\mu \chi=\partial_\mu \chi$. This
line appears solely due to the
dependence of $\chi$ on the metric and would be absent for a genuine
scalar field. We interpret the above contributions, respectively, as
those of the gravitational DM and the scale-dependent part of DE,
caused by the scalar graviton. The latter having no
specific quantum numbers and undergoing only the gravitational
interactions, such an association is quite a natural~one.\footnote{The
above division into DM and DE is rather conventional. In particular, in
the limit $\kappa_0\to 0$, the whole contribution of the scalar
graviton looks like DE.}$^{,}$\footnote{The other kinds of DM, if any,
are to be included in the matter Lagrangian.}
The r.h.s.\ of eq.~(\ref{eomg}) is thus proportional to the total
energy-momentum of the nontensor-graviton origin, produced by the
nongravitational matter and the scalar graviton.
Due to the Bianchi identity
\begin{equation}
\nabla_\mu G^{\mu\nu}=0,
\end{equation}
the total energy-momentum is conserved:
\begin{equation}\label{cc}
\nabla_\mu (T^{(\rm m)\mu\nu} +\Delta T^{(\rm g)\mu\nu} )=0,
\end{equation}
whereas the energy-momentum of the nongravitational matter alone,
$T^{(\rm m)}_{\mu\nu}$, ceases to be conserved.
To actually solve the gravity equations one should impose the four
coordinate-fixing conditions. E.g., one can choose the canonical
coordinates where $\tilde g=-1$, supplemented by three more
independent conditions on the dynamical metric $g_{\mu\nu}$. As a
result, $g_{\mu\nu}$ generally contains
seven independent components. Having solved the equations in
the distinguished coordinates, one can recover the solution in the
arbitrary observer's coordinates. Confronting the latter
solution with experiment, one could conceivably extract
the sought~$\tilde g$.
\paragraph{Linearized approximation}
To facilitate the problem of finding $\tilde g$ one can rely on~LA.
Not knowing $\tilde g$, one guesses from some physical considerations
the background metric $\bar g_{\mu\nu}$.
Decompose the dynamical metric in LA as follows:
\begin{eqnarray}\label{WFA}
g_{\mu\nu}&=& \bar g_{\mu\nu}+h_{\mu\nu},\nonumber\\
g^{\mu\nu}&=& \bar g^{\mu\nu}-h^{\mu\nu} + {\cal O}((h_{\mu\nu})^2),
\end{eqnarray}
with $\bar g^{\mu\nu}$ being the inverse background metric. For
consistency, it is supposed that $\vert h_{\mu\nu}\vert\ll 1$.
The indices are raised and lowered with
$\bar g^{\mu\nu}$ and $\bar g_{\mu\nu}$, respectively,
so that $h^{\mu\nu}=\bar
g^{\mu\lambda}\bar g^{\nu\rho}h_{\lambda\rho}$, etc. Then one gets
\begin{equation}\label{chi}
\chi= (h_0+h)/2+{\cal O}(h^2),
\end{equation}
where $h\equiv \bar g^{\mu\nu}h_{\mu\nu}$
and $ h_0=\ln (\bar g/\tilde g)$.
The latter term is a scalar parameter-field,
not bound in general to be small.
Physically, it reflects the discrepancy between the
background scale $\sqrt {-\bar g}$, which is at our disposal, and
the nondynamical scale~$\sqrt {-\tilde g}$, which is given a priori.
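The linearized expansion $\chi=(h_0+h)/2+{\cal O}(h^2)$ rests on the identity $\ln\det(\bar g+h)-\ln\det\bar g={\rm tr}(\bar g^{-1}h)+{\cal O}(h^2)$, which is easy to verify numerically (an illustrative check with a random small perturbation, not from the original):

```python
import numpy as np

rng = np.random.default_rng(0)
gbar = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski background
h = 1e-6 * rng.standard_normal((4, 4))
h = (h + h.T) / 2                          # small symmetric perturbation
g = gbar + h

# ln(det g / det gbar) should equal tr(gbar^{-1} h) to first order in h,
# so that chi = ln sqrt(g/g~) = (h0 + h)/2 + O(h^2).
lhs = np.log(np.linalg.det(g) / np.linalg.det(gbar))
rhs = np.trace(np.linalg.inv(gbar) @ h)
assert abs(lhs - rhs) < 1e-10
```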
The GR Lagrangian in LA becomes
\begin{equation}
L_{\rm g}=
\frac{1}{8}M^2_{\rm P}\Big((\bar\nabla_\lambda h_{\mu\nu})^2
-2(\bar\nabla^\lambda h_{\lambda\mu})^2+ 2\bar\nabla^\lambda
h_{\lambda\mu} \bar\nabla^\mu h
-(\bar\nabla_\lambda h)^2\Big) +{\cal O}((h_{\mu\nu})^3),
\end{equation}
with $\bar\nabla_\mu$ being the background
covariant derivative and $\bar\nabla_\mu h=\partial_\mu h$.
The $\Lambda$-term is omitted here and in what follows.
For the respective gravity tensor, one gets
\begin{equation}
G_{\mu\nu}=-\frac{1}{2}\Big(\bar\nabla\cdot\bar\nabla h_{\mu\nu}
-\bar\nabla_\mu\bar\nabla^\lambda h_{\lambda\nu}-
\bar\nabla_\nu\bar\nabla^\lambda h_{\lambda\mu}
+\bar\nabla_\mu\bar\nabla_\nu h
\Big)
-\frac{1}{2}\Big(\bar\nabla^\lambda\bar\nabla^\rho
h_{\lambda\rho}
-\bar\nabla\cdot\bar\nabla h\Big)\bar g_{\mu\nu},
\end{equation}
independent of $h_0$. The Lagrangian above is invariant under the
gauge transformations
\begin{equation}
h_{\mu\nu}(x)\to h_{\mu\nu}(x)+\bar\nabla_\mu \xi_\nu+ \bar\nabla_\nu
\xi_\mu,
\end{equation}
with arbitrary $\xi_\mu$ corresponding to GC.
In particular, one has $h(x)\to h(x)+2\bar\nabla\cdot \xi$. By the same
token, $h$ can be removed, and thus $L_{\rm g}$, taken alone,
produces no physical manifestations of the scalar graviton.
The contribution of $\Delta L_{\rm g}$ to the gravity equations in
terms of $h_0$ and~$h$ can be read off
from eqs.~(\ref{DT}), (\ref{WFA}) and (\ref{chi}).
This contribution is invariant only under the restricted gauge
transformations with $\bar\nabla \cdot\xi=0 $ or, otherwise,
$\partial\cdot(\sqrt{-\bar g}\xi)=0 $. In the curved background, this
corresponds to the residual UC.
To solve the gravity equations one should impose on
$h_{\mu\nu}$ the three gauge-fixing conditions, thus leaving seven
independent components. Comparing the solution with observations, one
can conceivably extract thereof $h_0$ and, under the chosen $\bar g$,
the sought-for~$\tilde g$.
\paragraph{Quantization}
Assuming to have found $\tilde g$, rescale the background metric
to adjust it to the external nondynamical scale, so
that $\bar g =\tilde g$. Under this choice,
$h_0$ vanishes. The GC preserving part of the gravity Lagrangian stays
as before. The GC violating part reads
\begin{equation}
\Delta L_{\rm g}=\frac{1}{8}\Big(\kappa^2_0 (\bar\nabla_\lambda h)^2
-\mu_0^4 h^2\Big)+{\cal O}(h^4),
\end{equation}
with the potential supposed to be as follows
\begin{equation}
\Delta V_{\rm g}(h)=\frac{1}{8} \mu_0^4 h^2+{\cal O}(h^4)
\end{equation}
and $\mu_0$ being a constant with the dimension of mass.
The Lagrangian $\Delta L_{\rm g}$ possesses only the
residual UC, with $\bar\nabla\cdot \xi=0$ ensuring
$h\to h$. Normalized properly, the true field of the scalar graviton
is $\kappa_0 h/2$, with the constant $\kappa_0$ thus characterizing
the scale of the wave function. At $\kappa_0\to 0$, the wave function
formally shrinks to a point. The other
free constant, $\mu_0$, characterizes the scalar
graviton mass, $m_0=\mu_0^2/\kappa_0$.
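The mass formula $m_0=\mu_0^2/\kappa_0$ follows from canonically normalizing $\Delta L_{\rm g}$; a symbolic check, with the gradient $\partial h$ treated as an ordinary symbol (illustrative, not from the original), reads:

```python
import sympy as sp

kappa0, mu0, phi, dphi = sp.symbols('kappa0 mu0 phi dphi', positive=True)

# Substitute h = 2*phi/kappa0 (so phi = kappa0*h/2 is the normalized field);
# dphi stands in for the gradient of phi.
h = 2 * phi / kappa0
dh = 2 * dphi / kappa0
dL = sp.Rational(1, 8) * (kappa0**2 * dh**2 - mu0**4 * h**2)

# Expect the canonical form (1/2)dphi^2 - (1/2)m0^2 phi^2 with m0 = mu0^2/kappa0
m0 = mu0**2 / kappa0
assert sp.simplify(dL - (dphi**2 / 2 - m0**2 * phi**2 / 2)) == 0
```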
Finally, the gauge fixing Lagrangian in the case of UC can be
chosen similar to ref.~\cite{Buchmuller}~as
\begin{equation}
L_{\rm gf}=-\lambda(
\bar\nabla_\mu\bar\nabla^\lambda h_{\lambda\nu}-
\bar\nabla_\nu\bar\nabla^\lambda h_{\lambda\mu})^2,
\end{equation}
with $\lambda$ being the indefinite Lagrange multiplier. This
condition
fixes three components in $ h_{\mu\nu}$, the scalar $h$ remaining
untouched. The fourth independent gauge condition, which is to
be imposed in GR, is now abandoned. It is superseded by
the GC violating term. The latter looks superficially like the gauge
fixing term, but with the definite coefficients. This is the principal
difference between the two kinds of terms. In the GC limit,
$\kappa_0\to 0$ and $\mu_0\to 0$, the given quantum theory becomes
underdetermined and requires one more gauge condition. For this
reason, the GC restoration is, generally, singular.
Altogether, one should study the present theory of the field
$h_{\mu\nu}$ in the curved background. As usual, this requires
the transition to the local inertial coordinates, which can in
principle be done. To facilitate the quantization procedure, suppose
the flat Minkowski background,
$\bar g_{\mu\nu}=\eta_{\mu\nu}$, with the effect that
$\bar \nabla_\mu= \partial_\mu$. The required ghost system is found
in this case in ref.~\cite{Buchmuller}. The respective propagator can
be shown to become
\begin{equation}
D_{\mu\nu\rho\sigma}(x-x')=
\frac{1}{4}\Big( P^{(2)}_{\mu\nu\rho\sigma}(\lambda)
\frac{1}{\partial\cdot\partial}+
\frac{1}{\epsilon_0^2}P^{(0)}_{\mu\nu\rho\sigma}
\frac{1}{\partial\cdot\partial+m_0^2} \Big)i\delta^4(x-x'),
\end{equation}
where $
\epsilon_0=\kappa_0/M_{\rm P}$.
The first term in the propagator corresponds to the massless tensor
graviton. The tensor projector $P^{(2)}_{\mu\nu\rho\sigma}$,
unspecified here, corresponds to the six
components of the tensor graviton off the mass shell, as in GR.
The second term, with the scalar projector
$P^{(0)}_{\mu\nu\rho\sigma}=\partial_\mu
\partial_\nu\partial_\rho\partial_\sigma/(\partial\cdot\partial)^2$,
describes additionally the scalar graviton. Altogether, the theory
describes the seven propagating degrees of freedom
reflecting ultimately the residual three-parameter UC.
In the limit $\kappa_0\to 0$, $\mu_0$ being fixed,
one gets for the scalar part of the propagator
\begin{equation}
D^{(0)}_{\mu\nu\rho\sigma}(x-x')\simeq
\frac{1}{4\omega_0^2} P^{(0)}_{\mu\nu\rho\sigma}
i\delta^4(x-x'),
\end{equation}
with $\omega_0\equiv\epsilon_0 m_0= \mu_0^2/M_{\rm P}$ being
finite. In this limit, the theory describes the massless tensor
graviton, as in GR, plus the contact scalar interactions. The
GC restoration limit, $\kappa_0\to 0$ and $\mu_0\to 0 $, is indefinite
in accord with the necessity of adding one more gauge
condition.\footnote{Conceivably, this is the particular manifestation
of a more general singularity at $\mu_0\to 0$ but $\kappa_0$ fixed,
corresponding to the massless limit for the scalar graviton.}
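The finiteness of the $\kappa_0\to 0$ limit quoted above can be checked on the momentum-space scalar piece of the propagator, with $p^2$ standing in for $\partial\cdot\partial$ and the projector stripped off (an illustrative check, not from the original):

```python
import sympy as sp

kappa0, mu0, MP, p2 = sp.symbols('kappa0 mu0 M_P p2', positive=True)

eps0 = kappa0 / MP          # epsilon_0 = kappa_0 / M_P
m0 = mu0**2 / kappa0        # scalar graviton mass

# Scalar piece of the propagator, up to the projector P^(0)
scalar_part = 1 / eps0**2 / (p2 + m0**2)

# kappa0 -> 0 at fixed mu0: the pole disappears, leaving the contact
# term 1/omega0^2 with omega0 = eps0*m0 = mu0^2/M_P
omega0 = mu0**2 / MP
lim = sp.limit(scalar_part, kappa0, 0, '+')
assert sp.simplify(lim - 1 / omega0**2) == 0
```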
\section{Conclusion}
In conclusion, the self-consistent extension of GR, with the explicit
violation of GC down to the residual UC, is developed. Being based on the
gauge principle, though with the reduced covariance, the extension is
as consistent theoretically as GR itself. In addition to the massless
tensor graviton, the respective theory -- the scalar-tensor
metagravity -- describes the massive
scalar graviton as a part of the metric field. The scalar
graviton is a natural candidate for the gravitational DM and/or the
scale-dependent part of DE. The restoration of GR being
unattainable on the whole, the extension may not be entirely safe
vs.\ observations. Its experimental consistency needs investigation.
I am grateful to the Organizers for support and to W.~Buchm\"uller,
V.A.~Kuzmin, V.A.~Ru\-bakov, and M.I.~Vysotsky for discussions.
astro-ph/0609415
\section{Introduction}
Over the past decade enormous progress has been made toward mapping
the cosmological history of star formation in the universe ({\it
e.g.,} Madau et al. 1996; Giavalisco et al. 2004). This has mainly
been accomplished using large samples of high-redshift galaxies
selected by their rest-frame ultraviolet (UV) colors ({\it e.g.,}
Steidel et al. 1996, 2003; Dickinson et al. 2004). These surveys
indicate that the global star formation rate of the universe has been
in decline since $z\sim1-2$, and was generally constant at higher
redshift out to at least $z=6$ \citep{g04}.
This picture still contains some uncertainty resulting from several
factors. The star formation rate density at low redshift ($z=0-1$) has
been determined through different techniques ({\it e.g.,}
\ha\ luminosity) than those used for higher redshift galaxies ({\it
e.g.,} rest-frame ultraviolet luminosity). These techniques are affected
differently by extinction and radiative transfer effects, and they
fundamentally probe star formation over different time scales.
One way around this problem is to obtain rest-frame ultraviolet (UV)
measurements for a large sample of galaxies at low redshift, enabling
the measurement of star formation rates using the same techniques that
are used at higher redshift. This requires a UV telescope in space
with a large field of view, something that has not been available
until the {\it Galaxy Evolution Explorer (GALEX)} mission
\citep{cm05}. {\em GALEX}\ is obtaining UV fluxes for more than $10^7$
galaxies in the redshift range of $0<z<2$. Initial results on the UV
luminosity density show strong evolution from $z=2$ to $z=0$, with the
strongest evolution occurring in the most UV luminous galaxies
\citep{ds05,arn05}. The fraction of galaxies with $L_{1530}>0.2$L$_{*,
z=3}$ fell by a factor of 30 from $z=1$ to $z=0$ (using L$_{*,
z=3}=6\times10^{10}$ L$_{\odot}$; Steidel et al. 1999).
These UV luminous galaxies at high redshift are more commonly called
Lyman Break Galaxies (LBGs; Steidel et al. 1999). These high-redshift
galaxies are so named because they are identified by the effects of
the Lyman break on their broadband colors \citep{sh93}. LBGs are UV
bright galaxies undergoing intense star formation with low to moderate
stellar masses (log $M_*=9.5$ to $11.0$ M$_{\odot}$), and are
candidates for the precursors of present-day elliptical galaxies (see
{\it e.g.,} Giavalisco 2002). LBGs are common at $z>2$, and they are
clearly important as the sites of a significant fraction of all the
star formation in the universe. Since strong evolution has made
objects like LBGs extremely rare in the local universe, all of the
information on this important galaxy population has come from very
distant samples, which are inherently difficult to study. Thus, there
has been little detailed information available on the processes
driving the evolution of star formation in the population of LBGs.
Using local UV-bright starbursts as local analogs to LBGs has
contributed significantly toward understanding these objects
\citep{h98,m99}. However, local starbursts differ from LBGs in
important ways. Local starbursts are usually dwarf galaxies or small
(sub-kpc) regions in the nuclei of larger galaxies, while LBGs have
typical sizes of a few kpc \citep{ferg04}. Luminous local starbursts
are usually very dusty systems in which only a small fraction of the
UV light escapes, while LBGs with similar bolometric luminosities
(star formation rates) typically contain modest amounts of dust
(e.g. Reddy et al. 2006; Erb et al. 2006b). Given these differences,
it is not clear that the conditions in local starbursts or the
triggers of star formation are identical to those in LBGs, and so
there is a need for better LBG analogs in the local universe.
Since LBGs are found in part by their large UV luminosity, LBG analogs
in the local universe should also be UV luminous. The large area UV
sky surveys being carried out by {\em GALEX}\ provide an ideal data set for
finding rare UV luminous galaxies in the local universe. Heckman et
al. (2005; hereafter Paper I) described the properties of the most UV
luminous galaxies (UVLGs) in the local universe based on
cross-matching the initial {\em GALEX}\ surveys with the Sloan Digital Sky
Survey (SDSS) first data release (DR1; Abazajian et al. 2003). The
UVLGs were composed of two basic types of galaxies: Large UVLGs, which
are characterized by lower UV surface brightness and high mass, and
compact UVLGs, which have higher UV surface brightness and lower
mass. Many of the compact UVLGs have properties very similar to LBGs.
Although this sample was very illuminating, several questions
remain. The extent of the similarity between the compact UVLGs and
LBGs is a crucial question. More generally, it is not known whether
these galaxies are truly a distinct population of objects in an
earlier phase of evolution, {\it i.e.,} remnants of the epoch of
galaxy formation, or whether they are simply the high end of the UV
luminosity function. Many of these questions could be better addressed
if there were more such galaxies available for study, so we present an
analysis of a larger sample of UVLGs, based on more recent {\em GALEX}\ and
SDSS data.
\section{Data}
\subsection{Ultraviolet Data}
Since its launch in April 2003, {\em GALEX}\ has been conducting several
surveys of the UV sky. In this paper we make use of the {\em GALEX}\
All-sky Imaging Survey (AIS) and Medium-Deep Imaging Survey (MIS). The
data were taken from the first public release of {\em GALEX}\ data (GR1)
available at the Multimission Archive at Space Telescope
(MAST). Details on the {\em GALEX}\ mission and surveys are given in
\citet{cm05}.
The {\em GALEX}\ data include far-ultraviolet (FUV; $\lambda_{eff}=1528$~\AA,
$\Delta\lambda=268$~\AA) and near-ultraviolet (NUV;
$\lambda_{eff}=2271$~\AA, $\Delta\lambda=732$~\AA) images with a
circular field of view with radius $\sim38$$^{\prime}$. The spatial
resolution is $\sim5$$^{\prime\prime}$. Details of the {\em GALEX}\ satellite and data
characteristics can be found in \citet{pm05}.
The data were processed through the {\em GALEX}\ reduction pipeline at the
California Institute of Technology. The pipeline reduces the data and
automatically detects, measures, and produces catalogs of FUV and NUV
fluxes for sources in the {\em GALEX}\ images.
\subsection{Optical data}
The {\em GALEX}\ catalogs were then matched to the SDSS Third Data Release
(DR3; Abazajian et al. 2005) spectroscopic sample. The area of the
overlap region between GR1 and DR3 is about 450 square degrees
\citep{b06}. The SDSS catalog provides (among many other available
parameters) $ugriz$ magnitudes, spectroscopic redshifts, concentration
parameters, observed half-light radii and model-fit exponential scale
lengths. To be included in our final matched catalog, we required that
each source have a spectroscopic redshift in the range $0<z<0.3$, and
that the SDSS source be spectroscopically classified as a galaxy,
excluding objects classified by the SDSS pipeline as QSOs or Type I
(broad line) AGN. The resulting GR1/DR3 sample contains 25362
galaxies. Of these, 18463 have 3$\sigma$ FUV detections. The remaining
galaxies were detected in the NUV images only.
With the distances estimated from the SDSS redshift, the FUV (and NUV)
luminosity for each galaxy is known. Following Paper I, galaxies with
$L_{1530}>2\times10^{10}$~L$_{\odot}$ qualify as UV luminous
galaxies\footnote{Throughout this paper we use
$H_0=70$~km~s$^{-1}$~Mpc$^{-1}$,
$\Omega_m=0.3$, and $\Omega_{\Lambda}=0.7$.}, where $L_{1530}$ is the
luminosity at the observed wavelength of 1530~\AA. This luminosity is
$\sim5L_*$ for $z=0$ \citep{w05} and $\sim0.3L_*$ for LBGs at $z=3$
\citep{s99}. There are 235 galaxies in the GR1/DR3 sample that meet
this criterion. We then inspected the SDSS spectra of these galaxies
to eliminate broad line (Type I) AGN that were missed by the SDSS
pipeline, as well as objects with BL Lac-type spectra (UV bright but
with weak or non-existent emission lines). Type II AGN in the sample
are discussed in section 4.4. The 215 galaxies that remain are
hereafter referred to as ultraviolet luminous galaxies (UVLGs). These
galaxies span the redshift range from $z=0.053$ to $z=0.3$.
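The selection just described amounts to converting fluxes into $L_{1530}$ via the luminosity distance for the adopted cosmology and cutting at $2\times10^{10}$~L$_\odot$. A minimal sketch (the function name is ours; the paper's actual pipeline code is not given in the text):

```python
import numpy as np
from scipy.integrate import quad

# Cosmology adopted in the paper: H0 = 70 km/s/Mpc, Om = 0.3, OL = 0.7 (flat)
H0, OM, OL = 70.0, 0.3, 0.7
C_KMS = 2.998e5  # speed of light, km/s

def lum_dist_mpc(z):
    """Luminosity distance in Mpc for a flat LCDM universe."""
    E = lambda zp: np.sqrt(OM * (1 + zp)**3 + OL)
    dc, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)  # comoving distance / (c/H0)
    return (1 + z) * (C_KMS / H0) * dc

# The UVLG threshold relative to L*_{z=3} = 6e10 Lsun (Steidel et al. 1999):
L_THRESH, L_STAR_Z3 = 2e10, 6e10
assert abs(L_THRESH / L_STAR_Z3 - 1/3) < 1e-9   # i.e. ~0.3 L*_{z=3}
```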
A large number of galaxy parameters derived from the SDSS spectra are
available in the value-added catalogs produced by the SDSS
collaboration. These catalogs are available at the SDSS website at the
Max Planck Institute (http://www.mpa-garching.mpg.de/SDSS). From these
catalogs we use the emission line fluxes, widths, and derived
metallicities. For more information on the derivation of these
parameters, see \citet{k03a,k03b,k03c}, \citet{b04}, and
\citet{trem04}. These catalogs do not include metallicities for
galaxies with an AGN contribution since this can strongly affect the
line strengths, so only a subset of our sample have metallicity
determinations. In addition, some galaxies have poor line flux
measurements because the emission lines in the fiber aperture are weak
or non-existent, {\it e.g.,} in galaxies with no star formation in the
central region of the galaxy. Thus line flux measurements exist for
only a subset of our sample.
\subsection{Spectral Energy Distribution Modeling}
To gain further information about the properties of the galaxies in
our sample, we compared the observed optical and UV properties of our
sample to a library of model spectral energy distributions (SEDs),
following \citet{salim05}. This was done by first constructing the
broadband optical and UV SEDs from the SDSS and {\em GALEX}\
magnitudes. Each observed SED was then compared to an extensive
library of SEDs generated by the \citet{bc03} population synthesis
code. Each model galaxy is based on a star formation history composed
of an exponentially declining star formation rate (SFR) with
superimposed bursts of star formation, and includes the effects of
attenuation by dust (see Charlot \& Fall 2000). The library contains
$10^5$ models at each of five evenly spaced redshifts from $z=0.05$ to
$z=0.25$, and the grid of models was constructed to span the likely
range of star formation histories.
The goodness of fit for a given model to an observed SED is then
translated to a probability that the parameters for that model apply
to the galaxy. Thus the parameters of the best fitting model will have
the highest probability, and a probability distribution can be
constructed for the entire library at the appropriate redshift. From
this the median and $95$\% confidence limits on each parameter can be
determined. This was done for a list of parameters including stellar
mass and star formation rate over a range of timescales. For more
information on the SED fitting process, see \citet{salim05}. In this
paper the stellar masses and star formation rates were determined
through SED fitting.
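The probability-weighting step described above can be sketched as likelihood weighting over the model library: each model gets a weight set by its goodness of fit, and the parameter median and confidence limits come from the weighted distribution. A hypothetical illustration, assuming Gaussian errors so that the weight is proportional to $\exp(-\chi^2/2)$ (the actual scheme in Salim et al. may differ in detail):

```python
import math

def weights_from_chi2(chi2s):
    """Weight proportional to exp(-chi2/2); subtract the minimum chi2
    for numerical stability (only relative weights matter)."""
    m = min(chi2s)
    return [math.exp(-(c - m) / 2.0) for c in chi2s]

def weighted_percentiles(values, weights, qs=(0.025, 0.5, 0.975)):
    """Percentiles of a model parameter over the library, weighted by
    fit probability: walk the sorted values accumulating weight."""
    pairs = sorted(zip(values, weights))
    total = sum(w for _, w in pairs)
    out, acc, qi = [], 0.0, 0
    for v, w in pairs:
        acc += w
        while qi < len(qs) and acc >= qs[qi] * total:
            out.append(v)
            qi += 1
    while qi < len(qs):  # guard against floating-point shortfall
        out.append(pairs[-1][0])
        qi += 1
    return out
```

With well-fitting models dominating the weight, the 50th percentile recovers the best-fit parameter and the outer percentiles give the $95$\% limits.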
\section{Properties of the GR1/DR3 Galaxy Sample}
The {\em GALEX}-SDSS matched catalog provides a valuable resource for
studying the UV-optical properties of star-forming galaxies in the
local universe. In a future paper we will report on the analysis of
the entire galaxy sample. Here we concentrate on the relationship
between UVLGs and the broader galaxy population.
The galaxy sample considered here should be nearly devoid of
unobscured (type I) AGN, so the dominant source of the UV light
detected by {\em GALEX}\ is massive stars. The UV luminosity of a galaxy
therefore traces the total amount of star formation in that galaxy
over the past $10^8$ years \citep{cm05}. We also have measurements of
the sizes of these galaxies. Most of these galaxies are only
marginally resolved in the {\em GALEX}\ images, so we used the half-light
radii measured on the higher-resolution SDSS images. The SDSS $u$-band
was chosen because it is the closest in wavelength to the {\em GALEX}\
bands and therefore the most likely to reflect the true spatial extent
of star-formation. In most cases we use the scale length from the
seeing-corrected exponential model fit calculated by the SDSS pipeline
as the half-light radius. However, for well-resolved galaxies we found
that the seeing-corrected radius derived from the exponential model
fits is systematically {\it larger} than the directly observed
half-light radius. We found that this occurs for galaxies with
half-light radii larger than about 2.2$^{\prime\prime}$. We thus use
the observed $u$-band half-light radius as $r_{50,u}$ for galaxies
larger than 2.2$^{\prime\prime}$, and the seeing-corrected scale
length as $r_{50,u}$ for galaxies smaller than
2.2$^{\prime\prime}$. We can then calculate the effective surface
brightness by dividing one-half of the luminosity by the area of the
galaxy enclosed by the half-light radius ($I_{1530}=L_{1530}/(2\pi
r^2_{50,u})$).
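The radius rule and surface brightness definition just described can be written compactly; a sketch (the 2.2$^{\prime\prime}$ threshold and the formula are from the text, the function names are ours):

```python
import math

def half_light_radius_arcsec(exp_scale_corrected, observed_r50):
    """Adopt r_50,u per the rule in the text: the directly observed u-band
    half-light radius for well-resolved galaxies (larger than 2.2"),
    otherwise the seeing-corrected exponential-model value."""
    return observed_r50 if observed_r50 > 2.2 else exp_scale_corrected

def effective_surface_brightness(l_1530_lsun, r50_kpc):
    """I_1530 = L_1530 / (2 pi r_50^2): half the luminosity divided by
    the area enclosed by the half-light radius (L_sun / kpc^2)."""
    return l_1530_lsun / (2.0 * math.pi * r50_kpc ** 2)
```

For example, a galaxy at the UVLG luminosity limit with a 1~kpc half-light radius has $I_{1530}\approx3\times10^{9}$~L$_{\odot}$~kpc$^{-2}$, comfortably above the LBG-like threshold discussed below.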
Figure~\ref{sbfig} shows a normalized contour plot of the FUV
surface brightness versus FUV luminosity for the 18463 galaxies in the
GR1/DR3 sample that were detected in the FUV images. The luminosity
bins were normalized to have the same number of galaxies in each bin,
thus clarifying the dependence of surface brightness on luminosity by
removing the effects of having a smaller number of galaxies at the low
and high luminosity ends of the distribution.
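The equal-weight normalization of the luminosity bins amounts to dividing each column of the 2-D histogram by its own total, so sparsely populated luminosity bins are not suppressed in the contours. A minimal sketch:

```python
def normalize_columns(hist2d):
    """Normalize each luminosity bin (one inner list per column) to unit
    total, giving every column equal weight in the contour plot."""
    out = []
    for col in hist2d:
        s = sum(col)
        out.append([v / s for v in col] if s > 0 else list(col))
    return out
```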
The plot shows a well-defined trend of slightly increasing surface
brightness with increasing luminosity over the entire luminosity
range. However, at the high luminosities corresponding to the UVLGs
there is an anomalous population of galaxies that defy the general
trend by having a much higher surface brightness than would be
expected given their luminosity. These galaxies have
$I_{1530}\ge10^{8}$~L$_{\odot}$~kpc$^{-2}$. Only among the UVLGs are
galaxies with the highest surface brightnesses
($I_{1530}\ge10^{9}$~L$_{\odot}$~kpc$^{-2}$) relatively common.
Figure~\ref{sizefig} shows the dependence of half-light radius on
luminosity. The surface brightness parameter shown in
Figure~\ref{sbfig} depends on the half-light radius, so
Figure~\ref{sizefig} is an alternative representation of
Figure~\ref{sbfig}. Over most of the range in luminosity the radius
increases with increasing luminosity. Above
$L_{1530}>10^{10}$~L$_{\odot}$ there is a group of galaxies that do
not obey this trend in the sense that they are too small for their
luminosity. The dashed lines in Figure~\ref{sizefig} are lines of
constant $I_{1530}$, and the galaxies responsible for this deviation
from the trend have $I_{1530}>10^9$~L$_{\odot}$~kpc$^{-2}$.
These two figures show that in general galaxies with higher UV
luminosity are larger and have somewhat higher surface brightness than
their less luminous counterparts. At high luminosities, however, some
galaxies behave differently. They have small radii, but this is more
than compensated by their increased surface brightness to put them
among the most UV luminous galaxies in the sample. This is a strong
indication that these UVLGs are distinct from the general galaxy
population. By contrast, UVLGs with lower surface brightness do not
distinguish themselves from the full sample of galaxies except by
their luminosity. They appear to be the largest and therefore most
luminous normal galaxies.
Figure~\ref{massfig} shows how the UV surface brightness changes with
stellar mass, as determined from the SED model fitting
\citep{salim05}. In general the stellar mass determined in this manner
agrees to within a factor of 2 with the dynamical mass calculated as
$M_{dyn} = 3.4\sigma^2_{gas}r_{50,u}/G$, where $\sigma_{gas}$ is the
standard deviation of the gas velocity measured from the emission
lines. The coefficient 3.4 was taken from \cite{erb06a} and represents
a realistic estimate of the mass distribution for LBGs. The UV surface
brightness is relatively constant over a wide range of stellar masses,
and then slowly falls above a mass of $10^{10.5}$~M$_{\odot}$. This
implies that the more massive galaxies have correspondingly larger
sizes over which the young stellar population is distributed. The drop in
surface brightness above $10^{10.5}$~M$_{\odot}$ may be related to
the relatively abrupt transition in the galaxy population at this
mass-scale between young disk-dominated galaxies and old
bulge-dominated ones (e.g. Kauffmann et al. 2003b).
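As a sanity check on the quoted factor-of-2 agreement, the dynamical mass formula is straightforward to evaluate; a sketch using $G\approx4.3\times10^{-6}$~kpc~(km~s$^{-1}$)$^2$~M$_{\odot}^{-1}$ (a standard value in these units, not quoted in the paper):

```python
# Gravitational constant in galaxy-friendly units: kpc (km/s)^2 / M_sun.
G_GAL = 4.301e-6

def dynamical_mass_msun(sigma_gas_kms, r50_kpc, coeff=3.4):
    """M_dyn = 3.4 sigma_gas^2 r_50,u / G, with the coefficient of
    Erb et al. (2006) adopted in the text."""
    return coeff * sigma_gas_kms ** 2 * r50_kpc / G_GAL
```

A galaxy with $\sigma_{gas}=100$~km~s$^{-1}$ and $r_{50,u}=3$~kpc gives $M_{dyn}\approx2.4\times10^{10}$~M$_{\odot}$, a plausible stellar-mass scale for the sample.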
The points shown in Figure~\ref{massfig} are the locations of
individual UVLGs. Unlike the galaxy population as a whole, the UVLGs
show a clear inverse correlation between surface brightness and
mass. We have already pointed out that this fact indicates that the
more massive UVLGs owe their large luminosities to their large
mass. The less massive UVLGs have high UV surface brightnesses
indicative of intense star-formation. Figure~\ref{massfig} shows how
the UVLGs relate to the rest of the sample in terms of mass (but keep
in mind that the contours are normalized by the number of galaxies in
each mass bin). The UVLGs with $I_{1530}<10^8$~L$_{\odot}$~kpc$^{-2}$
are among the most massive star-forming galaxies in the GR1/DR3
sample, with $\log(M_*/{\rm M}_{\odot})\ge10.5$. While these are the lower
surface brightness component of the UVLG sample, they are still
somewhat offset toward higher surface brightness than the full sample
(they are not low surface brightness galaxies). In the main, their
properties appear to be similar to those of large, disk galaxies (they
are the extrema of the population).
The UVLGs with $I_{1530}>10^8$~L$_{\odot}$~kpc$^{-2}$ are generally
lower mass systems ($\log(M_*/{\rm M}_{\odot})\le10.5$). They clearly stand
out from the full sample by having much higher surface brightness than
would be expected for normal galaxies of similar mass. This is even
more obvious for galaxies with $I_{1530}>10^9$~L$_{\odot}$~kpc$^{-2}$,
which would qualify as LBGs based on their FUV surface brightness.
Based on the analysis above, the UVLG population can be thought of as
two very different types of galaxies. The high-surface brightness
systems (``compact UVLGs'') have high star formation rates per unit
area and would be called starburst galaxies, while the low surface
brightness systems (``large UVLGs'') are large spirals, with high
rates of total star formation but low rates of star formation per unit
area. There is no clear transition from one population to the other,
but a surface brightness value of
$I_{1530}=10^8$~L$_{\odot}$~kpc$^{-2}$ serves as a useful boundary.
There are intermediate cases that do not fit cleanly into either
category. Figure~\ref{massfig} shows that this surface brightness
boundary corresponds to a stellar mass of roughly
$M_*=10^{10.5}$~M$_{\odot}$ (similar to the mass scale that divides
the bimodal galaxy population as a whole -- e.g. Kauffmann et
al. 2003b). Using this criterion there are 110 large UVLGs and 105
compact UVLGs in the GR1/DR3 sample. These two diverse populations
were recognized in Paper I, but we can now place them firmly in the
context of the overall galaxy population.
Throughout the rest of this paper we will distinguish between large
and compact UVLGs. Note, however, that while the compact UVLGs have
the properties of intense starbursts, not all of them have FUV surface
brightnesses high enough to be considered typical LBGs, which
generally have $I_{1530}>10^9$~L$_{\odot}$~kpc$^{-2}$. We will
consider the compact UVLGs that meet this more stringent surface
brightness criterion as possible LBG analogs, and will refer to them
as ``supercompact UVLGs''. The GR1/DR3 sample contains 35 supercompact
UVLGs.
\section{Properties of Ultraviolet Luminous Galaxies}
Since the UVLG sample was chosen based on an ultraviolet luminosity
criterion, they are all expected to have high star formation rates. As
in Paper I, the majority of UVLGs (83\%) have concentration parameters
$C<2.6$, where $C$ is defined as $R_{90}/R_{50}$, the ratio of the
radius containing 90\% of the Petrosian $r$-band luminosity to that
containing 50\%. These low concentration parameters are indicative of
disk systems, as expected for a sample of star-forming galaxies. Yet
as was made clear in the previous section, UVLGs span a wide range of
properties. In this section we explore the properties of UVLGs. The
properties of the 215 UVLGs are listed in Table~\ref{samptab}.
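The concentration-based disk classification used above is a single ratio test; a sketch (function names are ours):

```python
def concentration(r90, r50):
    """C = R_90 / R_50, the ratio of the radii containing 90% and 50%
    of the Petrosian r-band luminosity."""
    return r90 / r50

def is_disklike(r90, r50, boundary=2.6):
    """C < 2.6 is taken in the text as indicative of a disk system."""
    return concentration(r90, r50) < boundary
```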
\subsection{Ultraviolet Surface Brightness}
Figure~\ref{lvifig} plots the FUV surface brightness of the 215 UVLGs
against the FUV luminosity. The galaxies were chosen to be luminous,
but they span a wide range in surface brightness, and there is no
correlation between luminosity and surface brightness. This implies
that the UVLGs span a similarly large range of size. This is confirmed
in Figure~\ref{lvrfig}, which plots the luminosity against the
half-light radius. UVLGs range in half-light radius from less than a
kpc to $>20$~kpc. The dotted lines in Figure~\ref{lvrfig} show a
constant surface brightness of $10^8$~L$_{\odot}$~kpc$^{-2}$ and
$10^9$~L$_{\odot}$~kpc$^{-2}$, the latter being the lower limit seen
in LBGs at $z=3$ \citep{g02}. Only a fraction of the UVLGs have
surface brightnesses that rival those of LBGs, even though they all
have LBG-like luminosities.
The FUV surface brightness is related to the star formation intensity,
{\it i.e.,} the star formation rate per unit
area. Figures~\ref{lvifig} and \ref{lvrfig} then show that only a
subset of the UVLGs are luminous because they have high star formation
intensities. The rest owe their high luminosities to their large size,
{\it i.e.,} they have modest levels of star formation intensity spread
over a large area.
This is also apparent in Figure~\ref{massfig}, which shows more clearly
the correlation between surface brightness and stellar mass noted
above. The UVLGs with low surface brightness are the most massive,
while the high surface brightness UVLGs are low-mass systems. The
typical mass and surface brightness range of LBGs is shown in the
figure \citep{shap01,pap01,g02}. Figures 3--5 illustrate the fact
that UVLGs span a continuous range of properties, {\it i.e.,} there is
no clear demarcation between the large and compact samples. The
division of the samples at a surface brightness of
$I_{1530}>10^8$~L$_{\odot}$~kpc$^{-2}$ is an arbitrary boundary.
\subsection{Star Formation and Attenuation by Dust}
Figure~\ref{frvifig} shows the FUV-$r$ color and NUV-$r$ color for the
215 UVLGs as a function of surface brightness. Both colors are
well-correlated with surface brightness, with the brightest galaxies
having the bluest color. This agrees with the idea that the UV-optical
colors are sensitive to the ratio of current to past star
formation. \citet{salim05} showed the NUV-$r$ in particular is a good
tracer of the star formation rate parameter $b$, which is the current
SFR divided by the past-average SFR. The blue color of the high
surface brightness UVLGs can be understood if they are undergoing
intense starbursts which are much more significant than the past
average rate of star formation. The FUV surface brightness appears to
be a good indicator of star formation intensity for UV-selected
galaxies. The typical colors of LBGs are also indicated in the plot
\citep{shap01,pap01,g02}.
Figure~\ref{sfrvifig} shows the FUV surface brightness versus the
specific star formation rate (star formation rate normalized by
stellar mass). The extinction-corrected star formation rates were
determined by SED model fitting. We have also calculated star
formation rates from the \ha\ luminosity in the SDSS spectra using
the recipe given in \cite{k98}; these values generally agree within a
factor of 2, which is quite good considering that the \ha\
measurements were taken through 3$^{\prime\prime}$ fibers. The
specific star formation rate relates the current to past star
formation, and the inverse of this quantity is the ``galaxy building
time,'' the time it would take to build up the current stellar mass at
the current SFR. The specific star formation rate is clearly
correlated with the FUV surface brightness, with the high surface
brightness systems generally having the highest specific SFRs and short
building times, indicating that these are starburst systems. The large
UVLGs have galaxy building times of roughly a Hubble time, as expected
for a galaxy that has been built up over the age of the universe at a
constant or slowly varying rate of star formation. Galaxy building
times of less than 1 Gyr are typical for LBGs
\citep{shap01,pap01,g02}, and the high surface brightness UVLGs
overlap this range. However some of the high surface brightness
systems have lower specific SFRs and longer building times than
typical LBGs, which suggests that they have had significant star
formation prior to the current burst. $Spitzer$ imaging of LBGs
indicates that they do not have significant populations of older stars
\citep{ba04}, suggesting that LBGs are undergoing their first major burst
of star formation. If this is the case, then these systems with longer
building times may not be true analogs for LBGs. However, there is a
significant fraction of the high-surface brightness systems that fall
in the boundaries set by the LBGs, and these may be excellent analogs.
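The \ha\ star formation rate and the galaxy building time both follow from one-line formulas: the Kennicutt (1998) calibration and the inverse specific SFR. A sketch (the conversion coefficient is the standard Kennicutt value; function names are ours):

```python
def sfr_from_halpha(l_halpha_erg_s):
    """Kennicutt (1998) recipe: SFR [M_sun/yr] = 7.9e-42 * L(H-alpha) [erg/s]."""
    return 7.9e-42 * l_halpha_erg_s

def galaxy_building_time_gyr(m_star_msun, sfr_msun_yr):
    """Inverse specific SFR: the time needed to assemble the current
    stellar mass at the current star formation rate, in Gyr."""
    return m_star_msun / sfr_msun_yr / 1e9
```

For instance, a $10^{10}$~M$_{\odot}$ galaxy forming stars at 20~M$_{\odot}$~yr$^{-1}$ has a building time of 0.5~Gyr, in the sub-Gyr range typical of LBGs.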
Figure~\ref{avifig} shows the FUV attenuation as a function of surface
brightness. The FUV attenuation was determined using the Balmer
decrement and the \cite{calzetti01} starburst attenuation law. The
compact UVLGs are in the range $A_{1530}\le2$, indicating that a
relatively large fraction ($>10\%$) of the UV light escapes. The large
UVLGs are mostly in the range from 0 to $>4$ magnitudes of
attenuation. Figure~\ref{avifig} shows that compact UVLGs have a lower
amount of attenuation on average than do large UVLGs, and the higher
surface brightness compact UVLGs have still lower average extinction
values. We note that this method of determining attenuation values
uses the SDSS fiber spectra and may be sensitive to aperture
effects. The fibers capture most of the light from the compact UVLGs,
so this should only be an issue for the large UVLGs, where we are
measuring the attenuation in the central core of the
galaxy. Figure~\ref{avifig} also shows the typical range of FUV
attenuation seen in LBGs \citep{shap01,pap01}.
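The attenuation estimate chains the Balmer decrement to the starburst attenuation law. A sketch, assuming case~B recombination (intrinsic \ha/H$\beta = 2.86$) and standard Calzetti-curve values $k({\rm H}\alpha)\approx3.33$, $k({\rm H}\beta)\approx4.60$, $k(1530\,{\rm \AA})\approx10.2$ (our numbers for that curve, not quoted in the paper):

```python
import math

# Calzetti starburst attenuation curve sampled at three wavelengths
# (assumed standard values; see lead-in).
K_HA, K_HB, K_1530 = 3.33, 4.60, 10.2

def ebv_from_balmer(f_ha, f_hb, intrinsic=2.86):
    """Color excess from the Balmer decrement: E(B-V) =
    2.5/(k_Hb - k_Ha) * log10((Ha/Hb)/2.86)."""
    return 2.5 / (K_HB - K_HA) * math.log10((f_ha / f_hb) / intrinsic)

def fuv_attenuation_mag(f_ha, f_hb):
    """A_1530 = k(1530 A) * E(B-V), in magnitudes."""
    return K_1530 * ebv_from_balmer(f_ha, f_hb)
```

An observed decrement of \ha/H$\beta=4$ then gives $A_{1530}\approx2.9$~mag, i.e. only a few percent of the FUV light escaping.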
\subsection{Metallicity}
Figure~\ref{allmetfig} shows two determinations of the mass {\it vs.}
metallicity relation for the entire GR1/DR3 sample. The left panel
shows the relation determined via the method of \cite{trem04}. This
method makes use of a grid of models, which combine the \cite{bc03}
population synthesis code with photo-ionization models of \ion{H}{2}
regions. The methodology is described in detail in
\cite{cl01}. Metallicities are constrained by fitting all the strong
emission lines in the SDSS spectra. Only 129 of the UVLGs have
metallicity determinations from the SDSS spectra, because galaxies
with an AGN contribution to their spectra were excluded. The
well-known mass-metallicity relation \citep{trem04} is apparent in
Figure~\ref{allmetfig}, and the best fit to the Tremonti et al. sample
is shown as a dotted line. The UVLGs also show a relation between
mass and metallicity, but it is offset from the general sample. At
high masses ($>10^{10.5}$ M$_{\odot}$) most of the UVLGs have
metallicities similar to those of normal galaxies. Most of these
objects are the large UVLGs, which we have argued are just the
UV-bright tail of the population of normal high-mass star forming
galaxies. Their relatively normal metallicities support this idea. At
lower masses (where the sample is primarily the compact UVLGs), the
slope of the mass-metallicity relation for the UVLGs is significantly
steeper than that of the overall galaxy population. In the mass range
$\sim10^9$ to $10^{10}$~M$_{\odot}$, the compact UVLGs have
metallicities a factor of two to three lower than normal galaxies of
the same mass.
Selection on the basis of high UV luminosity could clearly bias the
sample against dusty objects, and the dust/gas ratio will be larger
for higher metallicity. Thus, selection effects could make a UV-bright
sample have systematically lower metallicity. While such an effect may
be present, it does not explain the mass-dependence of the offset
between the mass-metallicity relation for the sample as a whole, and
that of the UVLGs.
It is very interesting to compare the mass-metallicity relation for
the UVLGs to what is found for UV-bright galaxies at higher redshifts.
The form of the mass-metallicity relation for the UVLGs is similar to
that found by \citet{ss05}, who investigated a sample of star-forming
galaxies at $z=0.7$. Their relation is shown in the left panel of
Figure~\ref{allmetfig}. They interpreted the change in the form of
the mass-metallicity relation from $z\sim0.1$ to $\sim0.7$ in terms
of the ``down-sizing'' of the galaxy population. Massive galaxies have
essentially the same metallicity at $z \sim$ 0.1 and $\sim$0.7 because
this population has already come near the end-point of its evolution
by $z\sim0.7$. The strong chemical evolution at late times seen in
the low mass galaxies is because they are still converting significant
mass from gas into stars at the present time. Applied to the compact
UVLGs, this would suggest that they are relatively unevolved compared
to typical galaxies of the same mass. Alternatively, perhaps the
compact UVLGs are UV-bright because they are experiencing a burst of
star formation triggered by the infall of metal-poor gas.
\cite{erb06a} have measured the mass-metallicity relation for LBGs
at $z\sim2$. They find that this relation has evolved considerably
compared to the local universe, with LBGs having systematically lower
metallicities than present-day galaxies by a factor of about two {\it
for all masses}. Erb et al. estimate metallicity using the $N2$
method, which uses the [\ion{N}{2}]/H$\alpha$ ratio, as calibrated by
\cite{pp04}. The resulting metallicities are known to be
systematically lower than those obtained using the \cite{trem04}
method. To fairly compare the UVLGs and LBGs, we also plot our
mass-metallicity relation based on the \cite{pp04} method (right panel
of Figure~\ref{allmetfig}). While the shape of the relation changes,
the offset of the compact UVLGs from the rest of the galaxy population
at low mass remains. The dashed line shows the relation for LBGs from
\cite{erb06a}. At low masses ($<10^{10.5}$ M$_{\odot}$) the compact
UVLGs and LBGs have similar metallicity. At higher masses the UVLGs
are more metal-rich (including the compact UVLGs). Erb et
al. interpret the relation for the LBGs in terms of the loss of metals
by supernova-driven winds that is occurring at all masses. Results by
\citet{scmb05} suggest this may also be true at redshifts as
low as $z\sim1$ to 1.5. This contrasts with the
mass-dependent loss of metals inferred for the local galaxy population
\citep{trem04}. The form of the mass-metallicity relation for the
UVLGs suggests mass-dependent metal loss. We will consider this idea
in detail in a future paper.
Figure~\ref{metfig} shows the metallicity of the ionized gas,
determined from the SDSS spectra \citep{trem04}, in UVLGs as a
function of FUV surface brightness. There is a
clear trend of increasing metallicity with decreasing surface
brightness. Given the connection between surface brightness and mass
previously discussed (see Figure~\ref{massfig}), this is likely a
reflection of the mass-metallicity correlation. The high surface
brightness galaxies have the lowest metallicity, and this metallicity
is typical of that found in LBGs \citep{shap04,erb06a}.
\subsection{Active Galactic Nuclei}
Figure~\ref{bpt} is a diagnostic diagram that uses line ratios to
differentiate between active galactic nuclei (AGN) and
star-formation-dominated systems \citep{bpt81}. The dashed line in the
figure shows the line of demarcation determined by \citet{k03a} for
SDSS galaxies, where galaxies below the line are dominated by star
formation. The UVLGs are shown in the figure as large circles, with
filled circles denoting UVLGs with
$I_{1530}>10^8$~L$_{\odot}$~kpc$^{-2}$ and crosses denoting UVLGs with
$I_{1530}>10^9$~L$_{\odot}$~kpc$^{-2}$. Of the 104 compact (and
supercompact) UVLGs in this plot, 22 are classified as AGN or
transition objects, roughly 21\%, compared to 33\% of the entire
sample in the compact UVLG mass range ($9.0 \le \log(M_*/{\rm M}_{\odot})
\le 11.0$). About 34\% of
the large UVLGs in this diagram
($I_{1530}<10^8$~L$_{\odot}$~kpc$^{-2}$) are classified as AGN (36 out
of 106), while for the entire sample in this mass range
($10.3 \le \log(M_*/{\rm M}_{\odot}) \le 11.7$) the fraction is
54\%. This difference may reflect the fact that it
can be more difficult to recognize a type II AGN in a starburst due to
the strong emission lines produced by star formation. The majority of UVLGs have the line ratios of normal
star-forming galaxies. Note that type I AGN have been removed from the
GR1/DR3 sample because the AGN would contribute significantly to the
UV luminosity.
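The star-formation/AGN demarcation of Kauffmann et al. (2003) used in this diagram has a commonly quoted closed form (our transcription of that published curve; the text above only cites it):

```python
def kauffmann_sf(log_nii_ha, log_oiii_hb):
    """Star-forming if below the Kauffmann et al. (2003) demarcation:
    log([OIII]/Hb) < 0.61 / (log([NII]/Ha) - 0.05) + 1.3,
    defined only for log([NII]/Ha) < 0.05."""
    if log_nii_ha >= 0.05:
        return False  # right of the curve's asymptote: AGN-like
    return log_oiii_hb < 0.61 / (log_nii_ha - 0.05) + 1.3
```

A typical star-forming galaxy with $\log([\ion{N}{2}]/{\rm H}\alpha)=-0.5$ and $\log([\ion{O}{3}]/{\rm H}\beta)=0$ falls below the curve, while a point at $(0.0, 1.0)$ falls on the AGN side.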
\section{Discussion}
\subsection{Large and Compact UVLGs}
The most luminous galaxies in the ultraviolet
($L_{1530}>2\times10^{10}$~L$_{\odot}$) are a diverse population
spanning a wide range of properties. Most of these properties are
correlated at some level with the FUV surface brightness, which
reflects the star formation rate per unit area. It is informative to
use the surface brightness distribution to divide the UVLGs into two
groups, ``large'' and ``compact''.
The large UVLGs at the low surface brightness end are very massive
spirals. They have stellar masses of $M_*>10^{10.5}$~M$_{\odot}$,
comparable to massive spirals in the local universe. They have
half-light radii of 5 to 30~kpc. They are at the extreme end of the UV
luminosity distribution, an indication that they have high global
star formation rates. However, this high luminosity is a result of
relatively modest star formation intensity spread out over a large
area. They seem to share many of the characteristics of normal
spirals but extend this population to high UV luminosity. By
selecting the most luminous galaxies we have found the tail of the
distribution of massive spirals.
The compact UVLGs at the high-surface brightness end are systems with
intense star formation in a relatively compact region. They are lower
mass systems ($M_*<10^{10.5}$~M$_{\odot}$) with half-light radii of 1
to 5~kpc. They have high specific star formation rates, suggesting
they may be experiencing their first major burst of star
formation. They are generally metal-poor compared to the large UVLGs.
Figures~\ref{sbfig} and \ref{sizefig} illustrate the basic differences
between large and compact UVLGs. These figures show that the large
UVLGs can be understood as the high-luminosity end of the distribution
of normal galaxies. They are extremely luminous, but this is mostly a
reflection of their large size. On the other hand, the compact UVLGs
deviate from the trends established by the full sample, in that they
are very luminous but not very large. Their high luminosity reflects
an extremely high surface brightness in a relatively small
region. This behavior sets them apart from the rest of the galaxies,
and suggests that compact UVLGs are a distinct population of galaxies.
\subsection{Local Analogs of Lyman Break Galaxies}
One of the initial goals of this investigation was to find nearby
galaxies with properties most similar to LBGs. To do this, we have
defined a surface brightness cutoff at
$I_{1530}>10^9$~L$_{\odot}$~kpc$^{-2}$, which is the lower limit of UV
surface brightness seen in typical LBGs
\citep{g02}. Table~\ref{proptab} shows that the galaxies in this
sample (``supercompact UVLGs'') share all of the characteristics of
LBGs that are considered in this paper, including UV luminosity, mass,
star formation rate, specific star formation rate, UV attenuation, and
metallicity. Like LBGs, they are compact systems undergoing intense
star formation.
The question of whether this is the first major episode of star
formation that these galaxies have undergone is crucial to determining
how similar they are to LBGs. $Spitzer$ observations of LBGs in the
rest-frame near-IR have shown that they do not contain a previously
hidden population of older, low-mass stars \citep{ba04}. The
UV-optical colors of many of the compact UVLGs suggest that this may
be the first major episode of star formation these galaxies have
experienced (see Figure~\ref{sfrvifig}). Near-infrared observations
are necessary to fully trace the population of old stars. Such
observations have been carried out with $Spitzer$, and will be
reported in a forthcoming paper. The current UV-optical data, however,
suggest that the compact UVLGs are indeed excellent local analogs to
LBGs. Further study of these remarkable objects can provide crucial
information about star formation in the early universe. Toward this
end we are currently analyzing {\it Hubble Space Telescope} data (in
addition to our $Spitzer$ data) in order to study the morphologies,
dust content, and stellar populations of compact UVLGs. The results
from these studies will be presented in future papers.
Whether or not the highest surface brightness UVLGs are indeed analogs
of LBGs, they are distinct from any previously studied population of
galaxies in the local universe. They are much more luminous in the UV
(and thus have much higher star formation rates) than blue compact
dwarf galaxies (BCDGs), and have higher UV surface brightnesses (and
thus star formation rates per unit area) than local starburst
galaxies. For example, while I~Zw~18, the prototypical BCDG, has an
FUV surface brightness of
$I_{1530}=8.2\times10^7$~L$_{\odot}$~kpc$^{-2}$, which is close to the
boundary for compact UVLGs, its FUV luminosity is only
$L_{1530}=1.7\times10^8$~L$_{\odot}$, more than 2 orders of magnitude
below the lower limit for UVLGs \citep{gdp06}.
Figure~\ref{ngs} compares the luminosity and surface brightness of
galaxies in the {\em GALEX}\ Ultraviolet Atlas of Nearby Galaxies (Gil de
Paz et al. 2006), including some nearby well-known starbursts and
BCDGs, with the normalized distribution of the entire sample. The
range in log $L_{1530}$ for 14 starbursts from the atlas (not all
shown in the figure) is from 8.2 to 10.5, and for 10 BCDGs in the
atlas the range is 7.3 to 10.3. The boundary for UVLGs is log
$L_{1530}=10.3$. In surface brightness the starbursts range from log
$I_{1530}=6.2$ to 9.1, while the BCDGs range from 6.6 to 8.7. The
Cartwheel galaxy, AM~0644-741, and UGC~06697 are three starburst
galaxies that are also local examples of large UVLGs. Figure~\ref{ngs}
shows that while some local galaxies have high FUV surface brightness,
and others have high FUV luminosity, almost none have the combination
of high luminosity and high surface brightness that would qualify as a
compact UVLG. The exception is VV~114, which is the nearest known
Lyman Break Galaxy analog \citep{grimes06}. Compact UVLGs, and
especially supercompact UVLGs, clearly have extreme properties when
compared to local galaxy populations.
\section{Conclusions}
We have used {\em GALEX}\ and the SDSS to identify and study the most UV
luminous galaxies in the local universe. Our main results are as
follows:
\begin{enumerate}
\item{
The most UV luminous galaxies in the local universe comprise a diverse
group, with properties that are well correlated with UV surface
brightness. Although there is not a sharp transition, we can use a
surface brightness boundary of $I_{1530}<10^8$~L$_{\odot}$~kpc$^{-2}$
to divide the UVLG sample into two groups: large and compact UVLGs.}
\item{
The large UVLGs are massive, metal-rich disk galaxies with UV
surface brightnesses only slightly larger than typical star forming
galaxies. They are similar to normal galaxies in most respects, but
are very luminous primarily because of their large size.}
\item{
The compact UVLGs are low-mass, relatively metal-poor systems which
often have a disturbed or interacting morphology. The high UV surface
brightness and high specific star formation rate in these compact
UVLGs indicates that intense star formation is ongoing.}
\item{
It is possible to isolate a sample of local LBG-analogs with UV surface
brightness criterion of $I_{1530}>10^9$~L$_{\odot}$~kpc$^{-2}$. The
galaxies in the resulting sample have many properties in common with
LBGs, including luminosity, mass, star formation rate, specific star
formation rate, extinction, and metallicity.}
\item{
Compact UVLGs stand out from the trends established by the full
DR3/GR1 sample in that they have much smaller sizes and higher surface
brightness than would be predicted for galaxies of their mass or
luminosity. They have metallicities that are generally lower by a
factor of two to three compared to normal galaxies of the same mass.
These properties suggest that they are a distinct population of
objects, perhaps at a different phase of evolution than the bulk of
the galaxies in the local universe.}
\item{
The high UV luminosity and implied high star formation rate of compact
UVLGs distinguishes them from any previously studied local galaxy
population, including local UV-bright starbursts and blue compact
dwarf galaxies.}
\end{enumerate}
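The surface-brightness classification summarized above amounts to two thresholds; a minimal sketch, where only the boundaries ($10^8$ and $10^9$~L$_{\odot}$~kpc$^{-2}$) come from the text and the function name and sample values are illustrative:

```python
def classify_uvlg(i_1530):
    """Classify a UV-luminous galaxy by its FUV surface brightness
    i_1530, in units of L_sun kpc^-2 (boundaries from the summary above)."""
    if i_1530 > 1e9:
        return "compact (LBG analog)"
    elif i_1530 >= 1e8:
        return "compact"
    return "large"

# e.g. classify_uvlg(5e7) -> "large"
```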
\acknowledgments
We thank the referee, Michael Strauss, for providing very helpful
comments that greatly improved the paper. {\em GALEX}\ is a NASA Small
Explorer launched in April 2003. We gratefully acknowledge NASA's
support for construction, operation, and scientific analysis for the
{\em GALEX}\ mission. Funding for the creation and distribution of the SDSS
Archive has been provided by the Alfred P. Sloan Foundation, the
Participating Institutions, the National Aeronautics and Space
Administration, the National Science Foundation, the U.S. Department
of Energy, the Japanese Monbukagakusho, and the Max Planck Society.
% arXiv: astro-ph/0609310
\section{Introduction}
Sgr B is a molecular complex consisting of Sgr B1 and B2. In particular, Sgr B2 is the most massive giant molecular cloud, with ultra-compact H\emissiontype{II} regions (UCH\emissiontype{II}) \citep{Gaume1995} and many maser sources near the cloud center \citep{Mehringer1997}. These are hints of high-mass (HM) zero-age main-sequence (ZAMS) stars or young stellar objects (YSOs). However, the extremely high absorption toward the cloud center ($N_{\rm H} \ge 10^{24}~{\rm cm}^{-2}$, $A_{\rm V}\sim$ a few hundred) prevents the detection of any stars in the optical or even in the infrared bands. \citet{Takagi2002} discovered many compact X-ray sources in the cloud center with Chandra. The X-ray fluxes and spectra indicate that these are likely HM YSOs. Since HM stars evolve very rapidly and finally undergo supernova explosions, it is reasonable to expect young SNRs near the Sgr B2 cloud. In fact, \citet{Senda2002} discovered a peculiar SNR candidate with Chandra.
Sgr B2 is also a strong 6.4~keV line source. \citet{Koyama1996} and \citet{Murakami2001} proposed that Sgr B2 is an X-ray reflection nebula (XRN) irradiated by the Galactic center (GC) source Sgr A$^{*}$. Sgr A$^*$ would then have been X-ray bright about 300 years ago, corresponding to the light travel time between Sgr B2 and Sgr A$^{*}$. In this XRN scenario, it is likely that other XRNe will be found in the molecular complex Sgr B. The Suzaku observation of the Sgr B region was intended to discover new SNRs and XRNe. This paper reports the first results of this observation.
\section{Observation and Data Processing}
\subsection{Data Collection}
The Sgr B region was observed with the XIS on 10--12 October 2005. The XIS consists of four X-ray CCD camera systems (XIS0, 1, 2, and 3) placed on the focal planes of the four X-Ray Telescopes (XRT) on board the Suzaku satellite. XIS0, 2 and 3 have front-illuminated (FI) CCDs, while XIS1 has a back-illuminated (BI) CCD. Detailed descriptions of the Suzaku satellite, the XRT and the XIS are found in \citet{Mitsuda2006}, \citet{Serlemitosos2006} and Koyama et al. (2006a), respectively.
The XIS observation was made in the normal mode. The effective exposure time after removing epochs of low Earth elevation angle (ELV $\le 5^\circ$) and passages through the South Atlantic Anomaly was about 89~ks.
\subsection{The Gain Tuning}
In a quick look at the spectrum, we found strong lines at $\sim$6.7~keV and $\sim$6.4~keV everywhere in the CCD imaging area (IA), presumably due to the largely extended Galactic center diffuse X-ray emission (GCDX). These lines are most likely the K{$\alpha$} lines of Fe\emissiontype{XXV} (6.7~keV) and Fe\emissiontype{I} (6.4~keV). Using the center energies of these two strong lines, we made a fine correction of the CTI (Charge Transfer Inefficiency) and fine gain tuning at the XIS-to-XIS and segment-to-segment levels (relative gain tuning). The absolute gain tuning was then made using the $^{55}$Fe calibration sources irradiating the CCD corners. Details of this procedure and its performance are demonstrated in Koyama et al. (2006b).
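The relative gain tuning described above can be sketched as solving for a linear (gain, offset) correction from two reference lines. The true energies are those of the Fe\emissiontype{XXV} and Fe\emissiontype{I} K$\alpha$ lines; the measured centroids below are purely illustrative, not values from this work:

```python
def linear_gain(measured, true):
    """Solve E_true = a * E_meas + b from two (measured, true) line pairs."""
    (m1, t1), (m2, t2) = zip(measured, true)
    a = (t2 - t1) / (m2 - m1)   # gain
    b = t1 - a * m1             # offset
    return a, b

# Illustrative centroids (keV) against the reference 6.70 / 6.40 keV lines.
a, b = linear_gain(measured=(6.68, 6.38), true=(6.70, 6.40))
```

Applying the correction `a * E + b` to each segment then aligns the line centroids across the four sensors.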
\subsection{The Position Tuning}
After the CTI correction and fine gain tuning, we added all the XIS data and made a composite image
of the Sgr B region in the 2--10 keV band (figure \ref{fig:suzaku2-10keVimg}). The diffuse enhancement in the northwest corresponds to
the Sgr B2 complex. In addition, we found two point sources at the southeast edge of the XIS field of view (FOV). We made radial profiles of these
two sources and determined their peak positions to be $(l,~b) = (\timeform{0D.5762},~-\timeform{0D.1736})$ and
$(\timeform{0D.6626},~-\timeform{0D.2225})$ in the nominal Suzaku coordinates. We searched the Chandra Galactic center
survey map and found possible counterparts, CXO~J174741$-$283213 and CXO~J174804.8$-$282919 \citep{Muno2003}. The Galactic coordinates of these sources
are $(l, b) = (\timeform{0D.5762}, -\timeform{0D.1796})$ and $(\timeform{0D.6625}, -\timeform{0D.2289})$. Therefore the nominal Suzaku
coordinates are systematically shifted by $(\Delta l, ~\Delta b) =
(-\timeform{0D.0001}, ~-\timeform{0D.0062})$ from the Chandra coordinates. Since the aspect solution of Chandra is accurate
to the sub-arcsecond level, we fine-tuned the Suzaku positions by shifting them by $(-\timeform{0D.0001},~-\timeform{0D.0062})$ in
the $(l,~b)$ coordinates. Hereafter we use these re-registered coordinates.
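The re-registration above is just an average of the per-source offsets; a sketch using the positions quoted in the text (Galactic degrees):

```python
# Peak positions of the two point sources (l, b) in degrees:
suzaku  = [(0.5762, -0.1736), (0.6626, -0.2225)]  # nominal Suzaku frame
chandra = [(0.5762, -0.1796), (0.6625, -0.2289)]  # Chandra counterparts

# Mean offset (Chandra minus Suzaku) = correction applied to the Suzaku frame.
dl = sum(c[0] - s[0] for s, c in zip(suzaku, chandra)) / len(suzaku)
db = sum(c[1] - s[1] for s, c in zip(suzaku, chandra)) / len(suzaku)
# (dl, db) ~ (-0.0001, -0.0062) deg
```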
\begin{figure}
\begin{center}
\FigureFile(80mm,80mm){gc_sgrb_2000-10000_img.eps}
\end{center}
\caption{The X-ray image of the Sgr B region in the 2--10 keV band. All the four XIS data are co-added. The dotted square is the XIS field of view (FOV).}
\label{fig:suzaku2-10keVimg}
\end{figure}
\vspace*{1 cm}
\section{Results and Discussions}
\subsection{The Overall Features}
The X-ray spectrum of the entire Sgr B region is given in figure \ref{fig:sgrb-overall}. The spectra of the four XISs (XIS0--XIS3) are co-added, and the night-Earth spectrum (non-X-ray background; hereafter NXBG) is subtracted. With the superior energy resolution of the XIS for diffuse sources, we can clearly resolve the 6.4~keV, 6.7~keV and 6.9~keV lines. These are K$\alpha$ lines from neutral Fe\emissiontype{I}, He-like Fe\emissiontype{XXV} and hydrogen-like Fe\emissiontype{XXVI}.
The 6.9~keV line may contain a small fraction of K$\beta$ line of Fe\emissiontype{I} (7.07 keV).
Weak lines seen above $\sim$7 keV are
K$\alpha$ of Ni\emissiontype{I} (at $\sim$7.5 keV),
K$\alpha$ of Ni\emissiontype{XXVII} + K$\beta$ of Fe\emissiontype{XXV} (at $\sim$7.8--7.9 keV) and
K$\beta$ of Fe\emissiontype{XXVI} + K$\gamma$ of Fe\emissiontype{XXV} (at $\sim$8.2--8.3 keV).
\begin{figure}
\begin{center}
\FigureFile(80mm,80mm){gc_sgrb_overall_xisall_spec.eps}
\end{center}
\caption{The X-ray spectra from the full FOV of the XIS, excluding the CCD corners irradiated by the built-in calibration sources. All the four XIS data are co-added.}
\label{fig:sgrb-overall}
\end{figure}
\subsection{Discovery of a new SNR}
\vspace*{1 cm}
\begin{figure}
\begin{center}
\FigureFile(80mm, 10mm){gc_sgrb_6700_publish.eps}
\end{center}
\caption{The 6.7~keV line map (the 6.58--6.74 keV band map) showing a bright spot at the northwest corner. The fluxes are normalized by the 6.7~keV flat-field image. The source and background regions are shown by the solid and dotted ellipses, respectively.}
\label{fig:sgrb-6700img}
\end{figure}
We made a narrow-band image at 6.7~keV (the 6.58--6.74 keV band), shown in figure \ref{fig:sgrb-6700img}. We see a clear 6.7~keV flux excess at the northwest corner.
To confirm the 6.7 keV excess, we examined the archival data of Chandra (OBSID: 944, effective exposure time 99 ks)
and XMM-Newton (OBSID: 0203930101, effective exposure time 42~ks). Since the energy resolution of the Chandra ACIS is insufficient
to separate the 6.4 and 6.7 keV lines, it is unclear from the Chandra data whether the 6.7 keV source is present. The XMM-Newton image in the 6.7 keV band, however, shows a clear elongated structure near the same position. Accordingly, the presence of the 6.7 keV line source is beyond doubt.
On the other hand, the continuum-band images (e.g. the 2--5 keV or 2--8 keV bands) show only a hint of enhancement and no clear structure. This source is therefore very peculiar, in that it is prominent only in the 6.7 keV line.
Since the dominance of the 6.7 keV line suggests that the excess is a new SNR (see also the discussion below), we designate this source Suzaku~J1747.0$-$2824.5 (G0.61+0.01) from its center position.
We made NXBG-subtracted spectra from the solid ellipse for both the three FI CCDs (XIS0, XIS2 and XIS3, co-added) and the BI CCD (XIS1). These spectra still include the cosmic X-ray background (CXB) and the GCDX. We therefore made NXBG-subtracted spectra from the dotted ellipse in figure \ref{fig:sgrb-6700img},
and subtracted this local background (CXB + GCDX) from the source spectra.
All the spectra have been corrected for the vignetting at 6.7~keV.
The results are given in figure \ref{fig:sgrb-g061001-spec} for the FIs and the BI separately. We see a pronounced peak at 6.7~keV, but no 6.9 keV line.
The 6.7~keV line shape is asymmetric, with a tail toward lower energies.
In order to verify the line structure, we derived the fluxes
of the 6.4, 6.7 and 6.9 keV lines (the K$\alpha$ lines of Fe \emissiontype{I}, \emissiontype{XXV} and \emissiontype{XXVI}) in the source
and background regions,
applying a phenomenological model (a bremsstrahlung continuum plus multiple Gaussian lines) to the raw data (no background subtraction).
The resulting 6.4, 6.7 and 6.9 keV line fluxes are 2.22, 5.17 and 0.48 for the G0.61+0.01 (source) region, and 0.61, 0.68 and 0.30 for the background region,
where the flux unit is 10$^{-6}$ photons~cm$^{-2}$~s$^{-1}$~arcmin$^{-2}$. In contrast to the 6.7 keV line, we see no large excess in the 6.9 keV line from the source region compared to the background region. Thus we confirm that G0.61+0.01 emits a
strong 6.7 keV line but a very weak 6.9 keV line.
The small excess of the 6.4 keV line produces the low-energy tail of the 6.7 keV line.
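The line-flux argument can be checked numerically by subtracting the background surface brightnesses quoted above (units of 10$^{-6}$ photons cm$^{-2}$ s$^{-1}$ arcmin$^{-2}$):

```python
# Fe-line surface brightnesses from the text (1e-6 ph cm^-2 s^-1 arcmin^-2).
source     = {"6.4": 2.22, "6.7": 5.17, "6.9": 0.48}
background = {"6.4": 0.61, "6.7": 0.68, "6.9": 0.30}

# Background-subtracted excess per line: the 6.7 keV line clearly dominates.
excess = {line: source[line] - background[line] for line in source}
# excess ~ {"6.4": 1.61, "6.7": 4.49, "6.9": 0.18}
```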
\begin{figure}
\begin{center}
\FigureFile(80mm,80mm){gc_sgrb_blob_g_spec_vpshock_fi.eps}
\FigureFile(80mm,80mm){gc_sgrb_blob_g_spec_vpshock_bi.eps}
\end{center}
\caption{Left: the X-ray spectrum of the sum of the 3 FI CCDs (XIS0, 2 and 3) for the new SNR (G0.61+0.01) with the best-fit VPSHOCK model. Right: same as the left, but for the BI CCD (XIS1).} \label{fig:sgrb-g061001-spec}
\end{figure}
\begin{table}
\caption{The best-fit parameters for G0.61+0.01 with the VPSHOCK model
plus
two emission lines}
\label{tab:g_fit}
\begin{center}
\begin{tabular}{lc}
\hline\hline
Parameter & Value \\
\hline
$N_{\rm H}$ ($10^{23}$~H~cm$^{-2}$) & $1.6_{-0.4}^{+0.7}$ \\
$kT$ (keV) & $3.2_{-0.9}^{+2.3}$ \\
$n_{\rm e}t$ ($10^{11}$ cm$^{-3}$ s) & $1.9_{-0.8}^{+4.7}$ \\
\multicolumn{2}{l}{Abundances\footnotemark[$*$]}\\
Ca & $3.5_{-2.4}^{+3.1}$ \\
Fe & $5.1_{-1.1}^{+1.2}$ \\
\multicolumn{2}{l}{Neutral iron lines\footnotemark[$a$]}\\
$I_{6.40}$ ($10^{-6}$ photons cm$^{-2}$ s$^{-1}$) & $5.1_{-2.5}^{+
2.4}$ \\
$I_{7.06}$ ($10^{-6}$ photons cm$^{-2}$ s$^{-1}$) & 0.6 \\
\multicolumn{2}{l}{Flux and Luminosity}\\
$F_{\rm 2-10}$\footnotemark[$\dagger$] ($10^{-13}$ ergs cm$^{-2}$
s$^{-1}$) & $7.5_{-2.2}^{+1.1}$ \\
$L_{\rm 2-10}$\footnotemark[$\ddagger$] ($10^{34}$ ergs s$^{-1}$)
& $1.5_{-0.2}^{+0.1}$ \\
\hline
$\chi^2$/dof & 99.0/78\\
\hline
\multicolumn{2}{@{}l@{}}{\hbox to 0pt{\parbox{80mm}{\footnotesize
Note---The uncertainties indicate the 90\% confidence limit.
\par\noindent
\footnotemark[$*$] The elements which are not listed below are
fixed at
1.0 (solar ratio).
\par\noindent
\footnotemark[$a$] The line energy of K$\alpha$ and K$\beta$ is
fixed at the theoretical value (6.40 and 7.06~keV, respectively; Kaastra and Mewe 1993) and the
intensity of K$\beta$ is fixed at 12.5\% (Kaastra and Mewe 1993) of that of K$\alpha$.
\par\noindent
\footnotemark[$\dagger$] Observed flux in the 2.0--10.0~keV band.
\par\noindent
\footnotemark[$\ddagger$] Absorption corrected luminosity in the
2.0--10.0~keV band.
}\hss}}
\end{tabular}
\end{center}
\end{table}
The FI and BI spectra were simultaneously fitted with a plane-parallel shock model (VPSHOCK in the XSPEC package) plus two Gaussian lines at 6.4~keV and 7.06~keV. These two lines represent the K$\alpha$ and K$\beta$ lines of Fe \emissiontype{I}, where the flux of the latter is fixed at 12.5\% of the former (Kaastra and Mewe 1993). The best-fit results and parameters are shown in figure \ref{fig:sgrb-g061001-spec} and table \ref{tab:g_fit}.
Although we detected the 6.4 keV line from G0.61+0.01, it is very difficult to judge whether this line is really attributable to G0.61+0.01, is spilled-over flux from the adjacent source Sgr B2, or is a fluctuation of a larger-scale structure in the 6.4~keV line. As for the last possibility, we see a large-scale 6.4~keV enhancement in the northwest compared to
the background region in the southeast (see figure \ref{fig:sgrb-6400img}). In any case, we ignore this line in the discussion of G0.61+0.01, because its 6.4~keV line flux is only 3\% of that of Sgr B2 (see tables 1 and 3).
Since the Suzaku spatial resolution is limited, there may be contamination from unresolved point sources. To check this,
we searched for point sources using the Chandra archival data (OBSID: 944, $\sim$99 ks exposure time) and found no point source in the source region. In the background region, on the other
hand, there are 48 point sources. Their total flux in the 2--10 keV band is 5$\times10^{-13}$ ergs~cm$^{-2}$~s$^{-1}$, which is only $\sim$2\% of the CXB + GCDX flux of 2.6$\times10^{-11}$ ergs~cm$^{-2}$~s$^{-1}$, and hence can be ignored in the present data analysis and discussion.
The best-fit temperature of $\sim$3~keV and the overabundance of Fe are consistent with the ejecta of an SNR and
are similar to those found in the central region of Sgr A East, a young SNR near the GC.
The high-temperature component of Sgr A East is $kT\sim$4--6 keV (Sakano et al. 2004, Park et al. 2005, Koyama et al. 2006c), and iron is overabundant by a factor of 4--5 (Maeda et al. 2002, Sakano et al. 2004, Park et al. 2005, Koyama et al. 2006c). Thus G0.61+0.01 is likely the ejecta-dominated central region of an SNR.
We note that Sgr A East has a low-temperature component of about 1 keV, while G0.61+0.01 does not. The absence of the softer plasma may be due to the large absorption.
The $N_{\rm H}$ value of $1.6\times 10^{23}$~H~cm$^{-2}$ is larger than the typical value toward the GC ($6\times 10^{22}$~H~cm$^{-2}$)
\citep{Sakano2002}. Therefore, G0.61+0.01 would be located behind or at the rim of the Sgr B2 cloud. Since G0.61+0.01 lies to the south of an expanding radio shell \citep{Oka1998}, which is probably interacting with the Sgr B2 cloud rim, we assume that the distance of G0.61+0.01 is the same as that of Sgr B2, 8.5~kpc \citep{Reid1988}. The 2--10 keV band luminosity is then estimated to be $1.5\times10^{34}$ ergs~s$^{-1}$, which is typical for the ejecta plasma of an SNR.
The size of G0.61+0.01 (the solid ellipse in figure \ref{fig:sgrb-6700img}) is
$\sim$2.2$^\prime\times$4.8$^\prime$, which corresponds to $\sim$5.5 pc~$\times$~12 pc at a distance of 8.5 kpc.
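The angular-to-physical conversion behind these numbers is the small-angle relation at the 8.5 kpc distance adopted above:

```python
import math

def arcmin_to_pc(theta_arcmin, d_pc=8.5e3):
    """Small-angle conversion: angular size (arcmin) to physical size (pc)
    at distance d_pc (default 8.5 kpc, the assumed Sgr B2 distance)."""
    return d_pc * math.radians(theta_arcmin / 60.0)

major = arcmin_to_pc(4.8)   # ~12 pc
minor = arcmin_to_pc(2.2)   # ~5.5 pc
```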
Assuming that the plasma emission comes from a uniform-density ellipsoid with 3-axis radii of 2.7 pc, 2.7 pc
and 6 pc, we estimate the physical parameters of G0.61$+$0.01 (table \ref{tab:g_physpar}).
Although the iron abundance is 3--4 times solar, the total number and mass of
protons are far larger than those of iron. We therefore assume that the electron density ($n_{\rm e}$) is equal to the proton density ($n_{\rm p}$), and that protons
carry most of the plasma mass ($m_{\rm p}$).
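Under these assumptions the mass and thermal-energy entries of table \ref{tab:g_physpar} follow directly from the emission measure $EM = n_{\rm e}^2 V$; a quick consistency check in CGS units, taking $EM$ and $n_{\rm e}$ from that table and $kT$ from table \ref{tab:g_fit}:

```python
# Recover M = n_e m_p V and E = 3 n_e kT V from EM = n_e^2 V (CGS units).
m_p   = 1.6726e-24        # proton mass [g]
m_sun = 1.989e33          # solar mass [g]
kev   = 1.602e-9          # 1 keV in erg

em, n_e, kt = 1.4e57, 0.9, 3.2 * kev   # EM [cm^-3], n_e [cm^-3], kT [erg]
vol = em / n_e**2                      # plasma volume [cm^3]
mass = n_e * m_p * vol / m_sun         # ~1.3 M_sun
energy = 3 * n_e * kt * vol            # ~2.4e49 erg
```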
Dividing the radius of the major axis (6 pc) by the sound velocity of the 3.2~keV plasma ($v=1.4\times$10$^8$ cm~s$^{-1}$), we obtain a dynamical time scale ($t_{dyn}$) of $\sim$4$\times10^{3}$ years.
If, instead, we use the ionization parameter ($n_{\rm e}t$) and the electron density ($n_{\rm e}$), the
ionization time scale ($t_{ioni}$) is estimated to be $\sim7\times10^{3}$ years.
Since the source size of $\sim$2.2$^\prime\times$4.8$^\prime$ is comparable to the half-power diameter ($\sim2^\prime$), the real size of G0.61+0.01 must be smaller.
Therefore the quoted value of $t_{dyn}\sim4\times$10$^{3}$ years should be an upper limit, while that of $t_{ioni}\sim7\times$10$^{3}$ years is
a lower limit, because $n_{\rm e}$ is inversely proportional to the square root of the plasma volume.
Thus the age of G0.61+0.01 is probably several $\times$10$^{3}$ years.
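The two age estimates above can be reproduced directly from the quoted numbers (CGS units):

```python
# Dynamical and ionization time scales of G0.61+0.01 from the quoted values.
pc, yr = 3.086e18, 3.156e7   # [cm], [s]
v_s = 1.4e8                  # sound speed of the 3.2 keV plasma [cm/s]
radius = 6 * pc              # major-axis radius
net = 1.9e11                 # ionization parameter n_e * t [cm^-3 s]
n_e = 0.9                    # electron density [cm^-3]

t_dyn = radius / v_s         # ~1.3e11 s, i.e. ~4e3 yr (upper limit)
t_ioni = net / n_e           # ~2.1e11 s, i.e. ~7e3 yr (lower limit)
```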
Another possibility is that G0.61+0.01 is part of a larger SNR. Since G0.61+0.01 is found at the edge of the XIS field, other parts of the candidate SNR may lie outside the XIS field. In this scenario, G0.61+0.01 may be part of the expanding radio shell discovered by \citet{Oka1998}. The kinetic energy of the radio shell is a few $\times10^{52}$ ergs, within the range of single or multiple supernova explosions. Thus follow-up X-ray observations covering this expanding radio shell are highly desired.
\begin{table}
\caption{The physical parameters of G0.61+0.01}
\label{tab:g_physpar}
\begin{center}
\begin{tabular}{lc}
\hline\hline
Parameter & Value \\
\hline
EM\footnotemark[$a$] (cm$^{-3}$) & $1.4\times 10^{57}$ \\
$n_{\rm e}$\footnotemark[$b$] (cm$^{-3}$) & $0.9$ \\
$M$\footnotemark[$c$] (\MO) & 1.3 \\
$E$\footnotemark[$d$] (ergs) & $2.4\times 10^{49}$ \\
$t_{\rm dyn}$\footnotemark[$e$] (s) & $1.3\times10^{11}$ \\
$t_{\rm ioni}$\footnotemark[$f$] (s) & $2.1\times10^{11}$ \\
\hline
\multicolumn{2}{@{}l@{}}{\hbox to 0pt{\parbox{60mm}{\footnotesize
Note---The plasma is assumed to be a uniform density ellipsoid with the 3-axis radii of 2.7 pc, 2.7 pc and 6 pc (see text).
\par\noindent
\footnotemark[$a$] Emission measure (EM) = $n_{\rm e} n_{\rm H}V$ = $n_{\rm e}^{2}V$,
where $n_{\rm e}$ and $n_{\rm H}$ are the electron and hydrogen density and are assumed to be equal.
\par\noindent
\footnotemark[$b$] The electron density.
\par\noindent
\footnotemark[$c$] Total mass $(M) = n_{\rm e}m_{\rm p}V$, where $m_{\rm p}$ is the proton mass and $V$ is the plasma volume.
\par\noindent
\footnotemark[$d$] Thermal energy $(E) = 3n_{\rm e}kTV$.
\par\noindent
\footnotemark[$e$] The dynamical time scale: the radius of the major axis of the plasma ellipsoid divided by the sound velocity
of the $\sim$3 keV plasma.
\par\noindent
\footnotemark[$f$] The ionization time scale: the ionization parameter (see table 1) divided by the electron density
}
\hss}}
\end{tabular}
\end{center}
\end{table}
\subsection{Discovery of a New XRN}
We made a narrow-band image at 6.4~keV (the 6.33--6.46 keV band), shown in figure \ref{fig:sgrb-6400img}. We see two bright spots in the north. One is Sgr B2, already known as a strong 6.4 keV source \citep{Koyama1996}; the other is a newly discovered source.
We again examined the same archival Chandra and XMM-Newton data as in the case of G0.61+0.01. In the Chandra data, this excess is found near
the edge of the ACIS FOV. The XMM-Newton data show a clear excess near this source.
The presence of the 6.4~keV line supports the presence of cool and dense gas clouds. We therefore designate this new source Suzaku~J1747.7$-$2821.2
(M0.74$-$0.09) from its peak position.
We made X-ray spectra of Sgr B2 and M0.74$-$0.09 from the circles shown in figure \ref{fig:sgrb-6400img};
the former is for comparison with the latter, new source. The background spectrum was made from the dotted ellipse and subtracted following the same procedure as for G0.61+0.01.
\begin{figure}
\begin{center}
\FigureFile(80mm,80mm){gc_sgrb_6400_publish.eps}
\end{center}
\caption{The 6.4~keV line map (the 6.33--6.46 keV band map) showing bright spots at Sgr B2 and M0.74-0.09.
The fluxes are normalized by the 6.4~keV flat-field image.
The sources and background regions are shown by the solid circles and dotted ellipse, respectively.}\label{fig:sgrb-6400img}
\end{figure}
The background-subtracted spectra are shown in figures \ref{fig:sgrb-B2-spec} and \ref{fig:sgrb-m074009-spec}. We simultaneously fit the FI and BI spectra with a model of an absorbed power law plus two Gaussians at 6.4 and 7.06~keV, representing the K$\alpha$ and K$\beta$ lines of Fe\emissiontype{I}. The best-fit parameters are listed in table \ref{tab:m_fit}.
This model fits the data nicely, except for an excess near the 6.7~keV line in the Sgr B2 spectra. In fact, the 6.7~keV line map (figure \ref{fig:sgrb-6700img}) shows a weak enhancement at the position of Sgr B2. One possibility is that the 6.7~keV enhancement is part of the new SNR candidate G0.61+0.01, which is located in the close vicinity of Sgr B2. The other possibility is that the 6.7~keV enhancement is due to YSOs embedded in the center of Sgr B2. In fact, the Sgr B2 region is
relatively crowded with Chandra point sources (13 sources), at least some of which are YSOs with a hint of 6.7~keV line emission \citep{Takagi2002}. The total flux (in the 2--10 keV band) of the point sources is
$\sim$10$^{-13}$~ergs~cm$^{-2}$~s$^{-1}$, which is $\sim$6\% of the Sgr B2 flux (see table 3).
\begin{figure}
\begin{center}
\FigureFile(80mm,80mm){gc_sgrb_B2_spec_fi.eps}
\FigureFile(80mm,80mm){gc_sgrb_B2_spec_bi.eps}
\end{center}
\caption{Left: the X-ray spectrum of the sum of the 3 FI CCDs (XIS0, 2 and 3) for Sgr B2 with an absorbed power-law model plus two Gaussian lines. Right: same as the left, but for the BI CCD (XIS1).}
\label{fig:sgrb-B2-spec}
\end{figure}
The Sgr B2 cloud has been studied extensively with ASCA and Chandra. \citet{Koyama1996} and \citet{Murakami2001} concluded that the 6.4~keV emission is due to fluorescence excited by strong X-rays coming from Sgr A$^{*}$, and hence named Sgr B2 an X-ray reflection nebula (XRN). In this paper, we found a clear K$\beta$ line at 7.06~keV, with a flux ratio to the K$\alpha$ line (6.4~keV) consistent with a fluorescent origin, as well as a deep Fe edge at 7.1~keV. These discoveries provide additional support for the XRN scenario of Sgr B2. Further details on the Sgr B2 results with Suzaku will be presented in a separate paper.
\begin{figure}
\begin{center}
\FigureFile(80mm,80mm){gc_sgrb_M074-009_spec_fi.eps}
\FigureFile(80mm,80mm){gc_sgrb_M074-009_spec_bi.eps}
\end{center}
\caption{Same as figure \ref{fig:sgrb-B2-spec}, but for a new source M0.74-0.09}
\label{fig:sgrb-m074009-spec}
\end{figure}
In the M0.74$-$0.09 region, Miyazaki and Tsuboi (2000) reported flux peaks of the CS (J=1--0) line emission at $(l, b) = (\timeform{0D.761}, -\timeform{0D.117})$ and $(\timeform{0D.764}, -\timeform{0D.064})$, clear evidence for the presence of a molecular cloud. The XIS spectrum of this region exhibits a strong 6.4~keV line with an equivalent width of 1.6~keV, a 7.06~keV line, and an Fe edge structure at 7.1~keV (see table 3). All these features are consistent with the K$\alpha$, K$\beta$ and K-edge of Fe\emissiontype{I}. The flux of the 7.06~keV line is about 10\% of that of the 6.4~keV line, which is also consistent with a fluorescent X-ray origin (Kaastra and Mewe 1993). We note that the background region is the same as in the case of G0.61+0.01; hence, by the same argument as in section 3.2, possible point-source contributions can be ignored.
Unlike Sgr B2, no hint of an HM YSO has been found so far, and no bright point source is seen in the Chandra image.
Therefore, the X-rays cannot be scattering and fluorescence excited by embedded YSOs. If the X-rays from Sgr B2 and M0.74$-$0.09 are both due to Thomson scattering and fluorescence excited by the same external irradiating source, such as Sgr A$^*$, then the $N_{\rm H}$ ratio between these sources should be similar to their 6.4~keV line flux ratio. The observed $N_{\rm H}$ ratio is 0.42, while the 6.4~keV line flux ratio is 0.36, in good agreement with the fluorescence scenario with a single irradiating source. Therefore the XRN scenario invoking the past activity of Sgr A$^*$, which was successfully applied to Sgr B2, may also apply to M0.74$-$0.09.
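The two ratios in this argument follow directly from the best-fit values in table \ref{tab:m_fit}:

```python
# N_H and 6.4 keV line-intensity ratios of M0.74-0.09 to Sgr B2
# (best-fit values from the spectral-fit table).
nh_ratio = 4.0 / 9.6       # N_H(M0.74-0.09) / N_H(Sgr B2)   ~0.42
flux_ratio = 5.9 / 16.5    # I_6.4(M0.74-0.09) / I_6.4(Sgr B2) ~0.36
```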
The counter-scenario to the XRN is that the 6.4~keV line emission is produced by electron collisions. Since the cross section for iron K-shell ionization peaks at electron energies of a few tens of keV \citep{Tatischeff2002}, the most probable source is low-energy electrons (LEE), as proposed for the origin of the Galactic Ridge iron K-shell emission \citep{Valinia2000}. Since electrons of a few tens of keV are
absorbed within a depth of less than $10^{22}$~H~cm$^{-2}$ \citep{Tatischeff2002},
the produced X-ray spectrum should show no large absorption edge. Our observation, however, shows a clear absorption of
(4.0--9.6)$\times 10^{23}$~H~cm$^{-2}$, far in excess of the Galactic interstellar absorption \citep{Sakano2002}. Thus the LEE origin is unlikely, unless we assume a special geometry in which the 6.4~keV source lies deep inside or behind the dense cloud.
\begin{table*}
\caption{The results of spectral fits to Sgr B2 and M0.74$-$0.09 with
a power-law plus two Gaussian model}
\label{tab:m_fit}
\begin{center}
\begin{tabular}{lcc}
\hline\hline
Parameter & Sgr B2 & M0.74$-$0.09 \\
\hline
Absorbed power-law model: & & \\
Column density $N_{\rm H}$ ($10^{23}$ cm$^{-2}$) & $9.6_{-0.8}^{+2.5}$ & $4.0_{-1.1}^{+1.4}$ \\
Photon index $\Gamma$ & $3.2_{-0.6}^{+0.9}$ & $1.4_{-0.7}^{+0.4}$ \\
Gaussian 1 (Fe\emissiontype{I} K\emissiontype{$\alpha$}): & & \\
Line energy (eV) & $6399_{-5}^{+5}$ & $6406_{-6}^{+6}$ \\
Intensity ($10^{-5}$ photons cm$^{-2}$ s$^{-1}$) & $16.5_{-0.3}^{+0.8}$ & $5.9_{-1.0}^{+1.4}$ \\
Equivalent Width (keV) & 1.13 & 1.55 \\
Gaussian 2 (Fe\emissiontype{I} K\emissiontype{$\beta$}): & & \\
Line energy (eV)\footnotemark[$a$] & 7058 & 7065 \\
Intensity ($10^{-5}$ photons cm$^{-2}$ s$^{-1}$) & $1.4_{-0.5}^{+0.5}$ & $0.6_{-0.3}^{+0.3}$ \\
Equivalent Width (keV) & 0.13 & 0.18 \\
\hline
Observed flux\footnotemark[$\dagger$] ($10^{-12}$ ergs cm$^{-2}$ s$^{-1}$) & $1.5_{-0.9}^{+0.1}$ & $1.3_{-0.8}^{+0.2}$ \\
Luminosity\footnotemark[$\ddagger$] ($10^{34}$ ergs s$^{-1}$) & $9.7_{-5.1}^{+0.1}$ & $2.6_{-0.9}^{+0.4}$ \\
$\chi^2$/dof & 154.7/89 & 54.4/66 \\
\hline
\multicolumn{3}{@{}l@{}}{\hbox to 0pt{\parbox{180mm}{\footnotesize
Note---The uncertainties indicate the 90\% confidence limit.
\par\noindent
\footnotemark[$a$] The energy gap between K$\alpha$ and K$\beta$ is fixed at the theoretical value (+659~eV)
(Kaastra and Mewe 1993).
\par\noindent
\footnotemark[$\dagger$] Observed flux in the 4.0--10.0~keV band.
\par\noindent
\footnotemark[$\ddagger$] Absorption corrected luminosity in the 4.0--10.0~keV band.
}\hss}}
\end{tabular}
\end{center}
\end{table*}
\section{Summary}
We summarize the results of the Sgr B observation as follows:
\begin{enumerate}
\item The entire Sgr B region is covered by a thin hot plasma, which is regarded as part of the GCDX.
\item The Sgr B region was mapped separately in the 6.4~keV and 6.7~keV lines.
\item We found a local excess in the 6.7 keV line, named G0.61+0.01, which is likely an ejecta-dominated SNR.
\item The 6.4~keV map shows local excesses at the giant molecular cloud Sgr B2 and at M0.74$-$0.09. Like Sgr B2, M0.74$-$0.09 is a good candidate for an XRN.
\end{enumerate}
\bigskip
The authors thank all the Suzaku team members, especially T. Takahashi, A. Senda, A. Bamba,
J. Kataoka, Y. Tsuboi, H. Uchiyama, H. Nakajima, H. Yamaguchi, and H. Mori for their comments, support, and useful information on the XIS performance.
T.I. and H.Y. are supported by the JSPS Research Fellowship for Young Scientists.
This work is supported by the Grant-in-Aid for the 21st Century COE "Center for Diversity and Universality in Physics" from the Ministry of Education, Culture, Sports, Science and Technology (MEXT) of Japan.
% arXiv: cond-mat/0609455
\section{Introduction}
It is very useful to classify quantum magnets
according to the symmetry (if any) that is
broken in the ground state. When the SU(2) symmetry
is broken, the system usually sustains some kind of long-range
magnetic order, although some more exotic examples
involving quadrupolar order have been recently discussed\cite{arikawa,laeuchli}.
When the SU(2) symmetry is not broken, a translation symmetry
may or may not be broken depending on the lattice topology.
The standard case in which no lattice symmetry is broken is
that of systems in which it is possible to build a singlet inside
the unit cell, as for instance in a ladder\cite{dagotto}. If however this is not possible,
as in all systems with half-integer spins and an odd number
of sites per unit cell, the simplest way to keep SU(2) symmetry
is to break the translation symmetry so that the new unit cell
contains an even number of sites. The typical example
is the S=1/2 $J_1-J_2$ chain, which was explicitly shown
quite some time ago by Majumdar and Ghosh\cite{majumdar} to have a two-fold
degenerate dimerized ground state when $J_2=J_1/2$.
Another possibility to keep SU(2) symmetry without breaking
any lattice symmetry has been put forward by Anderson\cite{anderson} in 1973.
Concentrating on S=1/2 magnets, he suggested that under
appropriate circumstances the ground state might be a linear
combination of valence bond (VB) states, i.e. states in which sites
are paired to form singlets. Clearly each individual state breaks
at least some of the translational symmetries of the underlying
lattice, but the translational symmetry is restored by the
superposition of such valence bond states. Such a wave
function is known as a Resonating Valence Bond (RVB) state.
To identify such a ground state in realistic models of quantum
antiferromagnets turned out to be much more difficult than
anticipated. The prediction of Fazekas and Anderson\cite{fazekas} that this
might be the case for the S=1/2 Heisenberg model on the triangular
lattice, based on estimates of the energy of ordered and valence
bond states including perturbation corrections, is not supported
by recent numerical investigations, which all point to 3-sublattice
antiferromagnetic order\cite{bernu}.
The most serious candidate still around is the S=1/2 Heisenberg
antiferromagnet on the kagome lattice. No evidence of magnetic
long-range order could be found so far, and the proliferation of
low-lying singlets observed in exact diagonalizations of finite
clusters\cite{lecheminant} can be fairly well described in the short-range RVB
subspace of valence bond states involving only nearest neighbours\cite{mambrini}.
However, several treatments based on effective Hamiltonians
have reached the conclusion that the ground state supports some
kind of valence bond order\cite{syromyatnikov,nikolic,auerbach}, and hence breaks
translational symmetry.
Unfortunately, the resulting unit cell is so large, and the singlet-singlet
gap accordingly so small, that this prediction cannot be cross-checked
by the only unbiased numerical approach available so far, namely
exact diagonalization, and the issue is likely to remain open for quite some
time.
In fact, it has only been possible so far to unambiguously identify
an RVB ground state in a very minimal description of fluctuations
in the RVB subspace that goes under the name of Quantum Dimer
Model (QDM)\cite{rokhsar}. In this approach, valence bond configurations are assumed
to form an orthogonal basis of the Hilbert space, and the effective
Hamiltonian contains kinetic terms that shift dimers around loops
and potential terms that favour or penalize specific local configurations
of dimers. For the triangular lattice, the simplest model is defined
by the Hamiltonian:
\begin{figure}[H]
\newcommand{\lb}[1]{\raisebox{-0.8ex}[0.8ex]{#1}}
\begin{center}$H = v \sum \big(\, |$
\lb{\resizebox{0.035\textwidth}{!}{
\includegraphics[height=5cm]{fig1.eps}}} $\rangle\,\langle$
\lb{\resizebox{0.035\textwidth}{!}{
\includegraphics[height=5cm]{fig1.eps}}} $| + |$
\lb{\resizebox{0.035\textwidth}{!}{
\includegraphics[height=5cm]{fig2.eps}}} $\rangle\,\langle$
\lb{\resizebox{0.035\textwidth}{!}{
\includegraphics[height=5cm]{fig2.eps}}} $|\, \big)$
$ \ - t \sum \big(\, |$
\lb{\resizebox{0.035\textwidth}{!}{
\includegraphics[height=5cm]{fig1.eps}}} $\rangle\,\langle$
\lb{\resizebox{0.035\textwidth}{!}{
\includegraphics[height=5cm]{fig2.eps}}} $| + |$
\lb{\resizebox{0.035\textwidth}{!}{
\includegraphics[height=5cm]{fig2.eps}}} $\rangle\,\langle$
\lb{\resizebox{0.035\textwidth}{!}{
\includegraphics[height=5cm]{fig1.eps}}} $|\, \big)$
\end{center}
\end{figure}
\noindent where the sum runs over all plaquettes including
the three possible orientations.
The kinetic term controlled by the amplitude $t$ changes the dimer covering
of every flippable plaquette, i.e., of every plaquette containing two dimers
facing each other, while the potential term controlled by the interaction $v$
describes a repulsion ($v>0$) or an attraction ($v<0$) between dimers
facing each other. Since a positive $v$ favors configurations without flippable plaquettes
while a negative $v$ favors configurations with the largest possible number of
flippable plaquettes, one might expect a phase transition between two phases as a function of $v/t$.
The actual
situation is far richer though. As shown by Moessner and Sondhi\cite{moessner}, who calculated the
temperature
dependence of the structure factor, there are four different
phases: {\bf i)} A staggered phase for $v/t>1$,
in which the ground-state manifold consists of all non-flippable configurations;
{\bf ii)} A columnar ordered phase for $v/t$ sufficiently negative; {\bf iii)} An ordered phase
adjacent to it, which probably consists of resonating
plaquettes forming a $12$-site unit-cell pattern\cite{ralko2};
{\bf iv)} A liquid phase with a featureless and temperature independent
structure factor. This last phase has been interpreted as a short-range
(RVB) phase in which all correlations decay exponentially at zero temperature,
an interpretation confirmed by recent Green's function Quantum Monte
Carlo simulations\cite{ralko1}, which have established the presence of topological
degeneracy in this parameter range, a clear characteristic of the RVB phase.
This result defines a new line of research in the field: Indeed, rather than investigating
directly the properties of the Heisenberg model on a given lattice, one can
try to identify models for which a VB subspace is a reasonable variational
subspace, derive an effective QDM, and study it
along the same lines as the minimal model on the triangular lattice.
In that respect, a natural candidate is the trimerized spin-1/2 Heisenberg
model on the kagome lattice. An effective model in terms of the total spin
$\vec \sigma$ and a chirality pseudo-spin $\vec \tau$ per strong triangle has
been derived\cite{subra,mila}. It is defined on the triangular lattice built by strong triangles
and can be written\cite{ferrero}:
\begin{equation}
{\cal H}_0^{{\rm eff}} = \frac{J^\prime}{9}
\sum_{\langle i,j \rangle}
{\vec\sigma}_i \cdot {\vec\sigma}_j
(1 - 4 {\vec e}_{ij} \cdot {\vec \tau}_i)
(1 - 4 {\vec e}_{ij} \cdot {\vec \tau}_j),
\end{equation}
where $J'$ is the weak coupling between the strong triangles of the
trimerized lattice, and where the vectors ${\vec e}_{ij}$ have
to be chosen among
${\vec e}_1=(1,0)$, ${\vec e}_2=(-\frac{1}{2},-\frac{\sqrt{3}}{2})$,
${\vec e}_3=(-\frac{1}{2},\frac{\sqrt{3}}{2})$
according to the pattern of Fig.~\ref{fg:triankind}.
A mean-field decoupling of spin and chirality has identified nearest-neighbour
valence bond states as the lowest solutions\cite{mila}, and, following Rokhsar and
Kivelson\cite{rokhsar}, a QDM
has been derived\cite{zhitomirsky}. Unfortunately, the properties of this model could not
be studied so far. First of all, it involves kinetic terms on longer loops than
the above-mentioned minimal model, but more importantly, it is impossible
to formulate it in such a way that all off-diagonal matrix elements are negative,
so that the Quantum Monte Carlo methods that were successful for the minimal
model cannot be used.
\begin{figure}
\begin{center}
\vspace{0.3cm}
\includegraphics[width=0.3\textwidth]{fig3.eps}
\caption{\label{fg:triankind}
Triangular lattice on which the spin-chirality Hamiltonian is defined.
The unitary vector for the bond is indicated by solid lines
(${\vec e}_\mu = {\vec e}_1$), dashed lines (${\vec e}_\mu = {\vec
e}_2$), and dotted lines (${\vec e}_\mu = {\vec e}_3$).}
\end{center}
\end{figure}
The effective spin-chirality model has another remarkable feature though:
It is formally very similar to the spin-orbital models that are used to describe
Mott insulators with orbital degeneracy. Indeed, as discussed in great detail
by Kugel and Khomskii\cite{kugel}, when the local symmetry is such that different
orbitals can be occupied in the open shell of the magnetic ions of a Mott
insulator, this extra degree of freedom can be described by a pseudo spin,
and the resulting model is roughly speaking of the same form. Given the very different
precise forms this model can take for specific systems, a general discussion
cannot be attempted here. Rather, we concentrate in the next section on
a specific example of Mott insulator with orbital degeneracy for which we believe that RVB physics might
be realized. More general comments will be given in the last two sections
of the manuscript.
\section{The spin-orbital model of LiNiO$_2$}
The Mott insulator LiNiO$_2$ and its cousin NaNiO$_2$
are isostructural and isoelectronic. The crystal structure can be envisaged
as a sequence of slabs of edge-sharing
octahedra of oxygen O$^{2-}$ ions. Metal ions sit at the centers of the
octahedra. There are two kinds of slabs: in A slabs, every octahedron
is centered by a Ni$^{3+}$ ion, whereas in the B slabs one finds either
Li$^+$ or Na$^+$ ions. A and B slabs alternate (see Fig.~\ref{ANiO2}).
The Ni ions form well-separated triangular planes.
The Ni$^{3+}$ ions are in the $S=1/2$ low-spin state, which allows for
twofold orbital degeneracy between the $d_{3z^2-r^2}$ and $d_{x^2-y^2}$
orbitals (see Fig.~\ref{ANiO2}).
\begin{figure}[ht]
\begin{center}
\includegraphics*[width=12truecm,angle=0]{fig4.eps}
\caption[ANiO2]{\label{ANiO2}
Left: ANiO$_2$ structure. Ni ions are located in the middle of the O octahedra.
Right: Local structure and degenerate orbitals: $d_{3z^2-r^2}$ (left octahedron) and $d_{x^2-y^2}$
(right octahedron)}
\end{center}
\end{figure}
Surprisingly enough, the two systems have very different properties:
NaNiO$_2$ undergoes a high temperature Jahn-Teller distortion,
followed at low temperature by the antiferromagnetic ordering of
ferromagnetic triangular planes, which makes it a standard example
of orbital ordering followed by magnetic ordering\cite{nanio2exp}.
By contrast,
no ordering could be detected in LiNiO$_2$, and some kind of
freezing seems to take place below 8 K\cite{linio2exp}. However, even the best samples of LiNiO$_2$
are non-stoichiometric, with some Li sites occupied by Ni atoms, and this has been
invoked to explain the difference. The trend upon approaching
stoichiometry is not clear though, and this striking difference between
the two systems, in particular the lack of any sign of a cooperative Jahn-Teller transition in
LiNiO$_2$, suggests looking for alternative explanations. In the
following, starting from a realistic microscopic
description of the system, we study the possibility of stabilizing an RVB
spin-orbital liquid in the absence of any disorder.
\subsection{Microscopic Model}
A fairly general description of this system is given by
a Kugel-Khomskii Hamiltonian defined in terms of Wannier functions centered
on the Ni sites by two hopping integrals $t_h$
and $t^\prime_h$, the on-site Coulomb repulsion $U$ and the Hund's coupling
$J$ which, on a given bond, takes the form\cite{vernay1}
\begin{eqnarray}
\mathcal{H}_{ij} =
-\frac{2}{U\!+\!J} \left[
2 t_h t^\prime_h {\bf T}_i {\bf T}_j
- 4 t_h t^\prime_h T^y_i T^y_j
+ (t_h-t^\prime_h)^2 ({\bf n}_{ij}^z{\bf T}_i) ({\bf n}_{ij}^z{\bf T}_j)\right.\nonumber\\
+ \frac{1}{2} \left. (t_h^2-{t^\prime_h}^2) \left( {\bf n}_{ij}^z{\bf T}_i
+ {\bf n}_{ij}^z{\bf T}_j \right) + \frac{1}{4}(t_h^2+ {t^\prime_h}^2)
\right] \mathcal{P}_{ij}^{S=0}
\nonumber\\
-\frac{2}{U -J} \left[
4 t_h t^\prime_h T^y_i T^y_j
+ \frac{1}{2}(t_h^2+ {t^\prime_h}^2)
+ \frac{1}{2} (t_h^2-{t^\prime_h}^2) \left(
{\bf n}_{ij}^z{\bf T}_i + {\bf n}_{ij}^z {\bf T}_j
\right)
\right] \mathcal{P}_{ij}^{S=0}
\nonumber\\
-\frac{2}{U\!-\!3J} \left[ - 2 t_h t^\prime_h {\bf T}_i {\bf T}_j
-
(t_h-t^\prime_h)^2 ({\bf n}_{ij}^z{\bf T}_i) ({\bf n}_{ij}^z{\bf T}_j)+
\frac{1}{4}(t_h^2+{t^\prime_h}^2)
\right] \mathcal{P}_{ij}^{S=1}
\label{eq:effham3}
\end{eqnarray}
with the usual definitions for the projectors on the singlet and triplet states
of a pair of spins:
\begin{equation}
\mathcal{P}_{ij}^{S=0} = \frac{1}{4} - {\bf S}_i{\bf S}_j
\quad\mbox{and}\quad
\mathcal{P}_{ij}^{S=1} = {\bf S}_i{\bf S}_j+\frac{3}{4}.
\end{equation}
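As a quick numerical sanity check (our addition, not part of the original derivation), one can verify that these two operators are indeed complementary projectors on the singlet and triplet sectors of a pair of spin-1/2 sites:

```python
import numpy as np

# spin-1/2 operators (hbar = 1)
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2

# S_i . S_j on the two-site (4-dimensional) Hilbert space
SS = sum(np.kron(s, s) for s in (sx, sy, sz))

P0 = 0.25 * np.eye(4) - SS   # singlet projector
P1 = SS + 0.75 * np.eye(4)   # triplet projector

assert np.allclose(P0 + P1, np.eye(4))         # completeness
assert np.allclose(P0 @ P0, P0)                # idempotent
assert np.allclose(P1 @ P1, P1)
assert np.allclose(P0 @ P1, np.zeros((4, 4)))  # mutually orthogonal
assert np.isclose(np.trace(P0).real, 1.0)      # one singlet state
assert np.isclose(np.trace(P1).real, 3.0)      # three triplet states
```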
The vectors ${\bf n}_{ij}^z$ depend on the orientation of the bonds and are given by
${\bf n}_{12}^z=(0,0,1)$,
${\bf n}_{13}^z=(\frac{\sqrt{3}}{2},0,-\frac{1}{2})$ and
${\bf n}_{23}^z=(-\frac{\sqrt{3}}{2},0,-\frac{1}{2})$ for the 3 orientations respectively.
The operators ${\bf T}_i$ are pseudo-spin operators acting on the orbitals.
For the local geometry shown in the right panel of Fig.~\ref{ANiO2}, $t_h$ and $t'_h$ correspond to the hopping between
pairs of $d_{3z^2-r^2}$ and $d_{x^2-y^2}$ respectively, the hopping between
the $d_{3z^2-r^2}$ on one site and the $d_{x^2-y^2}$ on the other being zero by symmetry.
Of course, all bonds are equivalent, but once a basis, i.e. a pair of local orbitals,
has been chosen, the Hamiltonian takes a different form on bonds with different
orientations.
Note that other forms of the microscopic Hamiltonian including explicitly O orbitals
have been used\cite{dare,mostovoy,reitsma} with somewhat different conclusions.
\subsection{Mean-field phase diagram}
Inspired by the results obtained on the trimerized kagome model\cite{mila},
spin and orbital degrees of
freedom can be decoupled in a mean-field way\cite{vernay1}.
This leads to a phase diagram in which phases can be distinguished
by the mean value of the orbital and/or spin part of the Hamiltonian
on each bond. The resulting phase diagram is remarkably rich (see
Fig.~\ref{var-dia}). Orbital ordering in the ferromagnetic phase has also been discussed
in Ref.\cite{mostovoy}.
\begin{figure}[ht]
\begin{center}
\includegraphics*[width=8truecm,angle=0]{fig5.eps}
\caption[var-dia]{\label{var-dia}{Mean-field phase diagram on a 16-site cluster
as a function of hopping
integral versus Hund's coupling. The grey phase is the ferromagnetic phase,
with the classical phase boundaries between different types of orbital ordering.}}
\end{center}
\end{figure}
The planes of NaNiO$_2$ are by now known to
be ferromagnetic, suggesting that NaNiO$_2$ is in one of the ferromagnetic phases,
while LiNiO$_2$ is expected to be in one of the antiferromagnetic phases.
The orbital and spin structure of the antiferromagnetic phases is depicted
in Fig.~\ref{phases}, except phase A, which consists of SU(4) plaquettes\cite{penc1}
and cannot be described along these lines.
\begin{figure}[ht]
\begin{center}
\includegraphics*[width=12truecm,angle=0]{fig6.eps}
\caption[var-dia]{\label{phases}{Spin and orbital structure in the singlet
phases of the mean-field phase diagram. Solid line indicates AF, dashed line
FM spin correlations.}}
\end{center}
\end{figure}
Since neither orbital nor magnetic ordering could be detected in LiNiO$_2$, and since
phases B and D have a simple orbital ordering pattern while phase E is
likely to be antiferromagnetically ordered, let us concentrate
on phases C and C'.
Both phases are characterized by strong dimer singlets
forming different regular dimer coverings of the triangular
lattice. On each dimer the orbitals are parallel, and
they correspond to $d_{3z^2-r^2}$, $d_{3x^2-r^2}$ or
$d_{3y^2-r^2}$ depending on the orientation of the bond.
Note that all these orbitals are Jahn-Teller active,
leading in all cases to two long and four
short Ni-O bonds.
One might be tempted to conclude that these phases
correspond to two types of valence bond solids with
the patterns depicted in Fig.~\ref{phases}. The mean-field
approach has a very remarkable property however: In
addition to the mean-field solutions with lowest energy shown
in Fig.~\ref{phases}, there are several other
mean-field solutions of the self-consistent equations
with energies very close to the lowest energy corresponding
to other dimer coverings of the triangular lattice\cite{BaVS3}.
In such circumstances, going beyond mean-field is likely
to couple these solutions, and the relevant model would
then be a QDM describing resonances between
these states, a point of view
favoured by exact diagonalizations of finite clusters.
So it is more appropriate to think of these
phases as a region of parameters where all dimer coverings
are relevant states for low-energy physics.
\subsection{Effective Quantum Dimer Model}
Starting from all dimer configurations mentioned in the previous section,
a QDM has been derived\cite{vernay2}. It involves a competition
between kinetic processes and dimer-dimer repulsion. A minimal
version of the model is defined by:
\begin{equation}\label{hamilt}
\begin{array}{rcl}
{ H}= \ &-&t \sum
\left(
|\unitlength=1mm
\begin{picture}(6.2,5)
\linethickness{2mm}
\put(0.9,-.7){\line(1,2){1.8}}
\put(3.8,-.7){\line(1,2){1.8}}
\end{picture}
\rangle
\langle
\unitlength=1mm
\begin{picture}(6.5,5)
\linethickness{0.3mm}
\put(3.2,2.6){\line(1,0){3.2}}
\put(0.9,-.7){\line(1,0){3.2}}
\end{picture}
|
+h.c.\right)
- \ t^\prime \sum
\left(
\left|\unitlength=1mm
\begin{picture}(7,6)
\linethickness{2mm}
\put(0.8,-1.7){\line(1,2){1.8}}
\put(6.4,1.6){\line(-1,2){1.8}}
\linethickness{0.2mm}
\put(3.8,-1.7){\line(1,0){3.6}}
\end{picture}
\right\rangle
\left\langle
\unitlength=1mm
\begin{picture}(7,6)
\linethickness{2mm}
\put(1.8,1.6){\line(1,2){1.8}}
\put(7,-1.7){\line(-1,2){1.8}}
\linethickness{0.2mm}
\put(-0.6,-1.7){\line(1,0){3.6}}
\end{picture}
\right|
+h.c.\right)\nonumber \\
&+& \ V \sum \left(
|\unitlength=1mm
\begin{picture}(6.2,5)
\linethickness{2mm}
\put(0.9,-.7){\line(1,2){1.8}}
\put(3.8,-.7){\line(1,2){1.8}}
\end{picture}
\rangle
\langle
\unitlength=1mm
\begin{picture}(6.2,5)
\linethickness{2mm}
\put(0.9,-.7){\line(1,2){1.8}}
\put(3.8,-.7){\line(1,2){1.8}}
\end{picture}|+
|
\unitlength=1mm
\begin{picture}(6.5,5)
\linethickness{0.3mm}
\put(3.2,2.6){\line(1,0){3.2}}
\put(0.9,-.7){\line(1,0){3.2}}
\end{picture}\rangle
\langle
\begin{picture}(6.5,5)
\linethickness{0.3mm}
\put(3.2,2.6){\line(1,0){3.2}}
\put(0.9,-.7){\line(1,0){3.2}}
\end{picture}
|
\right),
\end{array}
\end{equation}
where the sums run over the 4-site and 6-site loops with all possible
orientations. Although the repulsion is a higher order process, hence quite small,
and although the ratio $t^\prime/t$ is in principle fixed by the
perturbative expansion, these parameters are treated as free
to make contact with the Rokhsar-Kivelson model
on the triangular lattice. The main difference with the effective model derived
for the trimerized
kagome antiferromagnet is that the off-diagonal elements are now all {\it negative}. This
is in practice extremely important since it allows one to use Quantum Monte
Carlo simulations.
The phase diagram of the model has been derived using exact diagonalizations
of finite clusters and Green's function Quantum Monte Carlo\cite{vernay2}. As shown before,
the most convenient way to identify an RVB phase is to look for topological
degeneracy since it is at least partially lifted in all other phases. The resulting
phase diagram is shown in Fig.~\ref{fig:phasediag}. It contains a large RVB liquid phase
which connects the relevant parameter range for LiNiO$_2$ ($t'/t \simeq 2$ and $V$ small)
to the RVB liquid phase of the minimal model ($t'=0$). Translated into spin-orbital
language, this RVB phase corresponds to a spin-orbital liquid with
no symmetry breaking and no phase transition, in agreement with
the phenomenology of LiNiO$_2$.
\begin{figure}
\begin{center}
\includegraphics[width=0.40\textwidth]{fig7.eps}
\caption{\label{fig:phasediag}
(Color online) Phase diagram in the $t^\prime{-}V$ plane. A wide disordered region
extends all the way from the standard QDM ($t^\prime/t=0$ axis) to the purely
kinetic QDM ($V/t=0$ axis). The description of the symbols is given in the text.}
\end{center}
\end{figure}
\section{Discussion}
Let us now comment on how generic the mechanism proposed in the context
of the spin-orbital model of LiNiO$_2$ might be. The main ingredients to get
an RVB spin-orbital liquid are: 1) The spontaneous formation of dimers;
2) The degeneracy or
quasi-degeneracy of the energies of the wave-functions constructed out of
different dimer coverings; 3) The presence of an RVB phase in the relevant
QDM. Let us comment on these points separately.
The tendency of spin-orbital models to spontaneously form
dimers is well documented. The possibility of stabilizing dimerized
ground states due to orbital
degeneracy was first put forward by Feiner {\it et al} in the context of
a realistic 3D model\cite{feiner}. Shortly after, the presence of a dimerized
ground state has been explicitly proven for a simple minimal 1D model by Kolezhuk
and Mikeska\cite{kolezhuk}, a result generalized shortly after by Pati {\it et al}\cite{pati}
in the context of a model defined by the Hamiltonian:
\begin{equation}
H=\sum_{\langle i,j \rangle} \left[J_1 \vec S_i\cdot\vec S_j +J_2 \vec T_i\cdot\vec T_j + K(\vec S_i\cdot\vec S_j )(\vec T_i\cdot\vec T_j )\right]
\end{equation}
with $K>0$. When $J_1/K=J_2/K=3/4$, it can be rewritten, up to an additive constant, as
\begin{equation}
H=K\sum_{\langle i,j \rangle} (\vec S_i\cdot\vec S_j +3/4)(\vec T_i\cdot\vec T_j + 3/4).
\end{equation}
Each term is obviously positive, and since $\vec S_i\cdot\vec S_j +3/4$
(resp. $\vec T_i\cdot\vec T_j + 3/4$) is the projector on the spin (resp. orbital) triplet,
the two wave-functions with alternating spin and orbital singlets are zero energy
eigenstates, hence ground-states\cite{kolezhuk}. Pati {\it et al} have shown that this dimerized
phase extends to a very large portion of the phase diagram around this point.
From that point of view, the identification of dimer phases in the context of spin-orbital
models of BaVS$_3$\cite{BaVS3} and of LiNiO$_2$\cite{vernay1} is
not unexpected, and the tendency to dimerize can be considered to be a generic
trend of spin-orbital models.
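The rewriting above can be checked numerically on a single bond (a minimal sketch of our own, not from the original references): the two forms of the Hamiltonian agree up to the constant $9K/16$ per bond, and the bond term is positive semidefinite with zero modes whenever the bond carries a spin or an orbital singlet.

```python
import numpy as np

s_ops = [np.array([[0, 1], [1, 0]]) / 2,
         np.array([[0, -1j], [1j, 0]]) / 2,
         np.array([[1, 0], [0, -1]]) / 2]
I2 = np.eye(2)

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

# bond Hilbert space ordered as (spin1, orbital1, spin2, orbital2), dim 16
SS = sum(kron(a, I2, a, I2) for a in s_ops)   # S_i . S_j
TT = sum(kron(I2, a, I2, a) for a in s_ops)   # T_i . T_j
I16 = np.eye(16)

K = 1.0
H_bond = K * (SS + 0.75 * I16) @ (TT + 0.75 * I16)

# equals J1 S.S + J2 T.T + K (S.S)(T.T) + 9K/16 with J1 = J2 = 3K/4
J1 = J2 = 0.75 * K
H_expanded = J1 * SS + J2 * TT + K * (SS @ TT) + (9 * K / 16) * I16
assert np.allclose(H_bond, H_expanded)

# positive semidefinite, with exact zero-energy states
evals = np.linalg.eigvalsh(H_bond)
assert evals.min() > -1e-9
assert np.isclose(evals.min(), 0.0, atol=1e-9)
```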
What seems to be more specific to these spin-orbital models of LiNiO$_2$
and BaVS$_3$ is the quasi-degeneracy of all nearest-neighbour dimer
coverings. But in fact, this can be traced back to a rather generic feature of
spin-orbital models, namely the remarkable symmetry properties of the orbital part
of the Hamiltonian. As can be clearly seen in the spin-orbital model of LiNiO$_2$, the orbital
part does not have the same form in the three directions of the triangular lattice,
a property encoded in the $\bf n_{ij}^z$ vectors. So the Hamiltonian is only
invariant if one simultaneously performs the same rotation in real space
and in pseudo-spin space. For purely orbital models, this is known to have
remarkable consequences\cite{nussinov,ma,doucot,dorier}. For spin-orbital
models, this implies that dimers with different orientations involve different
orbital wave-functions, as can be clearly seen in phases C and C'. What
controls the energy of a given dimer configuration is then the residual dimer-dimer
interaction. It turns out that simple patterns having
all dimers parallel to each other are not naturally favoured if the anisotropy of the orbital
part is strong because it is impossible to gain energy in the other directions.
On a lattice such as the triangular lattice with a large connectivity, it is then
much more
favourable to adopt configurations where dimers are not parallel to each other.
The energy differences between such configurations are, however, not really
significant, and it is better to regard such states as a variational basis.
Finally, too little is known at that stage about RVB phases in QDM to draw
general conclusions, but it seems plausible that the presence of an RVB phase
between two valence-bond phases is the generic alternative to a first order
transition.
To summarize, the tendency toward dimerization is a rather general feature
of spin-orbital models.
When confronted with a lattice such as the triangular lattice, for which
QDM's are known by now to possess RVB phases, there
is a real chance for quantum fluctuations to stabilize an RVB ground
state.
\section{Conclusion}
We have shown that orbital degeneracy can, under special but
neither unrealistic nor fine-tuned conditions, lead to a spin-orbital
RVB ground state. Clearly, this is not the most common situation.
Indeed, orbital degeneracy usually leads to a cooperative Jahn-Teller
transition, resulting in an effective spin Hamiltonian with a symmetry
different from that of the original lattice\cite{kugel}. Nor is it the
only route to spin-orbital liquid behaviour\cite{loidl}.
However, the tendency of spin-orbital models to spontaneously dimerize
is strong enough to make this a promising route towards RVB physics.
Whether LiNiO$_2$ is the first example remains to be seen. To make progress
on this issue will require not only further theoretical work
to better understand the relevant microscopic model and its possible
connection to a QDM, but also and maybe more importantly
further experimental investigations to unambiguously identify
the orbital and magnetic properties of stoichiometric samples.
We acknowledge useful discussions with M. Ferrero and D. Ivanov.
This work was supported by the Swiss National Fund and by MaNEP.
\section*{References}
astro-ph/0609558
\section{INTRODUCTION}
\label{sec:intro}
The interstellar medium (ISM) plays a vital role in the ongoing cycle of
stellar birth and death and galactic evolution. However, the role of
interstellar matter, from how its properties are influenced by stars
to how in turn its properties influence star formation, is poorly
understood and is arguably the least understood portion of the cycle.
Warm diffuse ionized hydrogen has become recognized as a major phase
of the ISM of our Galaxy; see, for example, reviews by \citet{KH87,
Cox89, Reynolds91b, Mathis00}.
This phase consists of regions of warm (10$^{4}$ K),
low-density (10$^{-1}$ cm$^{-3}$), nearly fully ionized hydrogen that
occupy approximately 20\% of the volume within a 2 kpc thick layer
about the Galactic midplane \citep[e.g.,][]{Reynolds91a, NCT92, TC93, HRT99}.
Near the midplane, the rms density of \ion{H}{2}\ is less than 5\% that of
\ion{H}{1}. However, because of its greater scale height, the total column
density
of interstellar \ion{H}{2}\ along high Galactic latitude sight lines is
relatively large, with $N_{\rm{H\sc{II}}} \sim 1/3~N_{\rm{H\sc{I}}}$.
One kiloparsec above the midplane, warm \ion{H}{2}\ may be the dominant
state of the interstellar medium in the Milky Way. Widespread,
diffuse ionized gas is now firmly established as an important
constituent of the ISM in external galaxies as well
\citep[e.g.,][]{RKH90, HG90, Dettmar92, WB94, Ferguson+96, RD00,
CR01, MV03}.
Despite its significance, the origin and physical conditions within
the warm ionized medium (WIM) remain poorly understood. In particular,
the ubiquitous nature of this gas is difficult to explain. Of the
known sources of ionization within the Galaxy, only O stars generate
enough power to sustain the WIM \citep{Reynolds92}.
Therefore, it is generally believed that the O stars, confined
primarily to widely separated stellar associations near the Galactic
midplane, are somehow able to photoionize a significant fraction of
the ISM not only in the disk but also within the halo, 1-2 kpc above
the midplane.
However, the need to have a large fraction of the Lyman continuum photons from
O stars travel hundreds of parsecs through the disk seems to conflict
with the traditional picture of \ion{H}{1}\ permeating much of the
interstellar volume near the Galactic plane.
It has been suggested that extensive cavities in the neutral gas,
created either by ``superbubbles'' of hot gas from supernovae
\citep{Norman91}, or carved out by O star photons in low-density
regions \citep{MC93}, may extend far above the midplane \citep{DS94,
DSF00}.
Although the existence of \ion{H}{1}\ superbubbles has long been established
\citep{Heiles84}, direct observational evidence that cavities are
actually responsible for the transport of hot gas and ionizing
radiation up into the Galactic halo is very limited. One piece of
evidence for
such transport has been provided recently by the WHAM H$\alpha$\ sky survey
\citep[][also see \S6 below]{RSH01}.
The WHAM H$\alpha$\ sky survey is a velocity-resolved map of diffuse
interstellar H$\alpha$\ emission at $1\ifmmode{^{\circ}}\else $^{\circ}$\fi$ angular resolution over the entire
northern sky ($\delta > -30\ifmmode{^{\circ}}\else $^{\circ}$\fi$) within approximately $\pm100$ \ifmmode{\rm km\thinspace s^{-1}}\else km\thinspace s$^{-1}$\fi\ of
the local standard of rest (LSR) \citep{Haffner+03}. The survey maps show
H$\alpha$\ emission covering the sky, with ionized gas associated with large
scale loops, filaments, and bubbles superposed on a fainter, diffuse
background, as well as the bright classical \ion{H}{2}\ regions near the
Galactic plane. Several of the high latitude structures appear to be
associated with hot stars and OB associations; however, the diffuse
background and many features superposed upon it have no clear
association with known ionizing sources. This survey provides the
basis for studies of the physical conditions within these newly
revealed emission regions and the source of their ionization.
Even though the primary source of ionization is believed to be O stars,
the temperature and ionization conditions within the diffuse ionized gas
differ significantly from conditions within classical O star \ion{H}{2}\
regions. These conditions have been inferred using optical line-ratio
diagnostic techniques. For example, anomalously strong
[\ion{S}{2}]$~\lambda6716$/H$\alpha$\ and [\ion{N}{2}]$~\lambda6583$/H$\alpha$, and weak
[\ion{O}{3}]$~\lambda5007$/H$\alpha$\ emission line ratios (compared to the bright,
classical \ion{H}{2}\ regions) indicate a low state of excitation, with few ions
present that require ionization energies greater than 23 eV
\citep{Reynolds85, HRT99, Rand97}. This is consistent with the small
value of
\ion{He}{1}~$\lambda5876$/H$\alpha$\ near the midplane, indicating that the ionization
fraction of helium is low and suggesting that the spectrum of the diffuse
interstellar radiation field that ionizes the hydrogen is significantly
softer than that from the average Galactic O star population \citep{RT95,
TuftePhD}. In addition, the elevated [\ion{N}{2}]/H$\alpha$\ and [\ion{S}{2}]/H$\alpha$\ ratios in the
WIM suggest that this low density diffuse gas is significantly warmer
than traditional \ion{H}{2}\
regions and may require spectral processing of the stellar radiation
\citep[e.g.][]{WM04} and/or an additional heating
source beyond photoionization \citep{RHT99}. Recent observations of
[\ion{N}{2}]$~\lambda5755$/[\ion{N}{2}]$~\lambda6583$ have indeed confirmed that the WIM is about
2000 K warmer than \ion{H}{2}\ regions \citep{Reynolds+01}.
Below we present new WHAM observations of [\ion{N}{2}]$~\lambda6583$, [\ion{S}{2}]$~\lambda6716$,
[\ion{O}{3}]$~\lambda5007$, \ion{He}{1}$~\lambda5876$ and [\ion{N}{2}]$~\lambda5755$\ toward large-scale
emission structures as well as individual lines of sight, representing a
substantial increase in the number of
observations of these diagnostic emission lines in the Galaxy. This is
primarily an empirical study. The emission regions examined span a wide
range in location, environment, morphology, and scale, and we have
compared the line intensity ratios in these different environments in
order to explore the variations in physical conditions between them. A
more detailed analysis and deeper understanding of all these different
regions, the relationships between them, and the reasons for the
observed differences will require combining these observations with
photoionization
models, and is beyond the scope of this work.
We begin with an overview of the relationship between the emission
line ratios and the temperature and ionization state of the gas in
\S\ref{sec:physconds}.
Our observational techniques and data reduction procedure are discussed in
\S\ref{sec:obs}. In \S\ref{sec:hii}, we present our results for several
classical O-star \ion{H}{2}\ regions that form the basis for comparison to the
fainter H$\alpha$\ emission structures. To illustrate the general spectral
difference between the \ion{H}{2}\ regions and the WIM, we present in
\S\ref{sec:wim} spectra toward one of the \ion{H}{2}\ regions, where the diffuse
gas and the classical \ion{H}{2}\ region are along the same line of sight, but
at separate radial velocities. Two large bubble-shaped features that each
span more than 40\ifmmode{^{\circ}}\else $^{\circ}$\fi, the Orion-Eridanus bubble and the Perseus
superbubble, are discussed in \S\ref{sec:bubbles} along with
comparisons to the \ion{H}{2}\ regions and the WIM. Observations of high
latitude filamentary structures are presented in \S\ref{sec:hlfil}. A
direct measure of the temperature of ionized gas,
through observations of [\ion{N}{2}]$~\lambda5755$\ and [\ion{N}{2}]$~\lambda6583$, is discussed in
\S\ref{sec:niiblue}, followed by a summary and conclusions in \S\ref{sec:summary}.
\section{EMISSION LINE RATIOS AND PHYSICAL CONDITIONS}
\label{sec:physconds}
Observations of optical emission lines and their relative strengths are a
common diagnostic tool used to assess the physical conditions of ionized
gas. The WHAM sky survey has measured the H$\alpha$\ surface brightness,
\ifmmode{I_{\rm{H}\alpha}} \else $I_{\rm H \alpha}$\fi, which is directly proportional to the emission measure. In the
absence of extinction, this relationship is \begin{equation} EM \equiv
\int n_e^2 dl = 2.75~T_4^{0.9}~\ifmmode{I_{\rm{H}\alpha}} \else $I_{\rm H \alpha}$\fi~\rm{(cm}^{-6}~\rm{pc)}, \end{equation}
where $T_4$ is the temperature of the gas in units of 10$^4$~K, and \ifmmode{I_{\rm{H}\alpha}} \else $I_{\rm H \alpha}$\fi\
is measured in Rayleighs\footnotemark \footnotetext{1 R = $10^6$/4$\pi$
photons s$^{-1}$ cm$^{-2}$ sr$^{-1}$}
\citep{Haffner+03}.
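The conversion above is straightforward to apply; the following sketch (our illustration, with a representative WIM temperature assumed) evaluates the emission measure for a given H$\alpha$ intensity in Rayleighs:

```python
def emission_measure(I_halpha_R, T4=0.8):
    """EM = 2.75 * T4**0.9 * I_Halpha, in cm^-6 pc, neglecting extinction.

    I_halpha_R : Halpha surface brightness in Rayleighs
    T4         : gas temperature in units of 10^4 K (0.8 is an assumed,
                 representative WIM value, not from the original text)
    """
    return 2.75 * T4**0.9 * I_halpha_R

# e.g. a faint diffuse sight line of ~1 R at T = 8000 K
em = emission_measure(1.0, T4=0.8)
assert 2.0 < em < 2.5   # 2.75 * 0.8**0.9 is about 2.25
```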
In the WIM, the collisionally excited lines of [\ion{N}{2}]$~\lambda6583$ and
[\ion{S}{2}]$~\lambda6716$ are the next brightest optical lines that can be
observed with WHAM. \citet{HRT99} presented the first velocity resolved
maps of these lines in the Galaxy, toward a 40\ifmmode{^{\circ}}\else $^{\circ}$\fi$\times$30\ifmmode{^{\circ}}\else $^{\circ}$\fi\ region in
Perseus.
Radial velocity interval maps showed a strong trend in [\ion{N}{2}]/H$\alpha$\ and
[\ion{S}{2}]/H$\alpha$, in which these ratios were higher toward regions of low H$\alpha$\
emission, while [\ion{S}{2}]/[\ion{N}{2}]\ remained relatively
constant. These line ratio variations can be interpreted as variations
in the temperature and ionization state of the gas as follows. Using
the standard formulation for the strengths of collisionally excited
lines, the [\ion{N}{2}]/H$\alpha$\ intensity ratio can be parameterized as
\begin{equation} \frac{[\rm{N~\textsc{II}}]}{\rm{H}\alpha} =
1.62\times10^5~T_4^{0.4}~e^{-2.18/T_4}
\left(\frac{\rm{N}^+}{\rm{N}}\right) \left(\frac{\rm{N}}{\rm{H}}\right)
\left(\frac{\rm{H}^+}{\rm{H}}\right)^{-1}, \label{eq:niieq}
\end{equation}
\noindent where the line strengths are measured in energy units, $T_4$ is
the temperature in units of 10$^4$~K, N/H is the gas phase abundance by
number, and N$^+$/N and H$^+$/H are the ionization fractions of N and H,
respectively \citep{HRT99, Osterbrock89}. The similar first ionization
potentials of N and H (14.5 and 13.6 eV, respectively), along with N-H
charge-exchange, mean that in the WIM the ionization fraction of N$^+$/N
is expected to be similar to H$^+$/H. The fraction H$^+$/H is observed to
be near unity in the WIM \citep{Reynolds+98, Hausen+02-aj}, and the high
second ionization potential of N (29.6 eV) means that little N is likely
to be in the form of N$^{++}$.
This is supported by the weak [\ion{O}{3}]/H$\alpha$\ ratios in the WIM (see below)
and by photoionization modeling \citep[e.g.,][]{Sembach+00}, which
has shown that N$^+$/N $\approx 0.8$ over a wide range of input
spectra and ionization parameters.
As a result, N$^+$/H$^+$\ is likely to depend almost
entirely on the gas phase abundance of N/H, which we have assumed to be
the same for all the emission regions.
Using this argument, \citet{HRT99} attributed the higher [\ion{N}{2}]/H$\alpha$\
ratios in the WIM to higher temperatures of the gas, which has since
been confirmed (see \S\ref{sec:niiblue}). They found in the Perseus
spiral arm, for example, an increase in temperature from $T
\approx 7000$ K close to the Galactic plane up to $T \gtrsim 9000$ K at
$\sim$ 1 kpc from the midplane.
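The inversion of equation \ref{eq:niieq} for temperature is straightforward to carry out numerically. The following Python sketch is illustrative only (not part of the original analysis) and adopts the values H$^+$/H = 1, N$^+$/N = 0.8, and N/H = $7.5\times10^{-5}$ used below:

```python
import math

def nii_ha(t4, n_ion=0.8, n_abund=7.5e-5, h_ion=1.0):
    """[N II] 6583 / H-alpha in energy units, from equation (1)."""
    return 1.62e5 * t4**0.4 * math.exp(-2.18 / t4) * n_ion * n_abund / h_ion

def t4_from_nii_ha(ratio, lo=0.3, hi=3.0):
    """Invert equation (1) for T4 by bisection; the ratio increases
    monotonically with temperature over this range."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if nii_ha(mid) < ratio else (lo, mid)
    return 0.5 * (lo + hi)
```

Under these assumptions, a WIM-like ratio of 0.83 corresponds to $T \approx 9000$ K, and an \ion{H}{2}\ region-like ratio of 0.23 to $T \approx 6100$ K, matching the temperatures quoted later in this paper.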
The observed variations in [\ion{S}{2}]/H$\alpha$\ can
also be interpreted as a change in temperature. However, the second
ionization potential of S (23.4 eV) is just below the neutral He edge at
24.6 eV. Therefore, a significant fraction of S can be S$^{++}$, and the
ratio of [\ion{S}{2}]/H$\alpha$\ is a combination of both temperature and ionization
effects. The ratio of [\ion{S}{2}]/[\ion{N}{2}], however, is insensitive to temperature
because of the nearly identical energies required to excite the lines.
This ratio can be parameterized as
\begin{equation}
\frac{[\rm{S~\textsc{II}}]}{[\rm{N~\textsc{II}}]} = 4.62~e^{0.04/T_4}~
\left(\frac{\rm{S}^+}{\rm{S}}\right)
\left(\frac{\rm{S}}{\rm{H}}\right)
\left[\left(\frac{\rm{N}^+}{\rm{N}}\right)
\left(\frac{\rm{N}}{\rm{H}}\right)\right]^{-1},
\label{eq:siieq}
\end{equation}
\noindent with the same conventions as equation \ref{eq:niieq}
\citep{HRT99, Osterbrock89}. By assuming that N$^+$/N does not change in
the WIM, and adopting a value for the abundances of N and S, equation
\ref{eq:siieq} can be used to estimate S$^+$/S.
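This estimate amounts to solving equation \ref{eq:siieq} for S$^+$/S. A minimal Python sketch (assuming N$^+$/N = 0.8 and the abundances N/H = $7.5\times10^{-5}$ and S/H = $1.86\times10^{-5}$ adopted below):

```python
import math

def s_ion_frac(sii_nii, t4=0.8, n_ion=0.8, n_abund=7.5e-5, s_abund=1.86e-5):
    """S+/S from equation (2), given [S II]/[N II] in energy units.
    The exp(0.04/T4) factor makes the result nearly temperature independent."""
    return sii_nii * n_ion * n_abund / (4.62 * math.exp(0.04 / t4) * s_abund)
```

Because of the weak $e^{0.04/T_4}$ dependence, the inferred S$^+$/S changes by only $\sim 3\%$ between 6000 K and 10\,000 K, so a precise temperature is not required.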
A more direct method of measuring the temperature of ionized gas is
through observations of multiple emission lines from the same ion. The
extremely faint ``auroral'' line of [\ion{N}{2}]$~\lambda5755$, along with
[\ion{N}{2}]$~\lambda6584$, are two such diagnostic lines that are within the
observational capabilities of WHAM. The ratio of the emissivity of these
lines is given simply by
\begin{equation}
\frac{[\rm{N~\textsc{II}}]~\lambda5755}{[\rm{N~\textsc{II}}]~\lambda6584} = 0.192~e^{-2.5/T_4},
\label{eq:niiblueeq}
\end{equation}
\noindent \citep{Osterbrock89}. \citet{Reynolds+01} were the first to
detect this auroral line in the warm ionized medium. Along a single
sightline toward the Perseus spiral arm, \ifmmode{(\ell,b)} \else $(\ell,b)$\fi\ = (130\ifmmode{^{\circ}}\else $^{\circ}$\fi, -7.5\ifmmode{^{\circ}}\else $^{\circ}$\fi), this
ratio was found to be twice as high as observed in traditional O-star
\ion{H}{2}\ regions. They concluded that the WIM in this direction is $\approx$\
2000 K warmer than in \ion{H}{2}\ regions, and that the elevated ratios of
[\ion{N}{2}]/H$\alpha$\ and [\ion{S}{2}]/H$\alpha$\ in the WIM are in fact due, at least in part, to
higher temperatures.
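Equation \ref{eq:niiblueeq} inverts in closed form, so a measured line ratio maps directly onto a temperature. A one-function Python sketch (illustrative only):

```python
import math

def t4_from_auroral(ratio):
    """T4 from the [N II] 5755/6584 emissivity ratio of equation (3),
    ratio = 0.192 * exp(-2.5 / T4); valid for 0 < ratio < 0.192."""
    return -2.5 / math.log(ratio / 0.192)
```

Because the ratio depends exponentially on $1/T_4$, a factor of two increase in the ratio near $T_4 = 0.8$ raises the inferred temperature by roughly 2000 K, consistent with the WIM versus \ion{H}{2}\ region difference found by \citet{Reynolds+01}.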
Because the second ionization potential of oxygen is 35 eV, observations
of [\ion{O}{3}]$~\lambda5007$ can provide information about the higher ions. In
particular, in regions where [\ion{O}{3}]/H$\alpha$\ is large, the assumption above
that N$^{++}$ is small may not be valid. The ratio of strengths of [\ion{O}{3}]\
and H$\alpha$\ can be parameterized as
\begin{equation}
\frac{[\rm{O~\textsc{III}}]}{\rm{H}\alpha} = 1.74\times10^5~T_4^{0.4}~e^{-2.88/T_4}
\left(\frac{\rm{O}^{++}}{\rm{O}}\right)
\left(\frac{\rm{O}}{\rm{H}}\right)
\left(\frac{\rm{H}^+}{\rm{H}}\right)^{-1}
\label{eq:oiiieq}
\end{equation}
\citep{Osterbrock89, OGR02}.
\citet{Reynolds85} searched for [\ion{O}{3}]\ emission in the
diffuse WIM along two lines of sight in the Galactic plane, and found that
[\ion{O}{3}]/H$\alpha$\ is very low, $\approx$ 0.06. We confirm this result.
A constraint on the hardness of the radiation field
is provided by observations of \ion{He}{1}$~\lambda5876$. This recombination line
is the helium equivalent of Balmer-$\alpha$, and thus is related to the
number of He-ionizing photons with $h\nu > 24.6$ eV. The strength of
\ion{He}{1}\ relative to H$\alpha$\ is given by
\begin{equation}
\frac{\rm{He~\textsc{I}}}{\rm{H}\alpha} \simeq
0.47~T_4^{-0.14}~\left(\frac{\rm{He}^+}{\rm{He}}\right)
\left(\frac{\rm{He}}{\rm{H}}\right)
\left(\frac{\rm{H}^+}{\rm{H}}\right)^{-1}
\label{eq:heieq}
\end{equation}
\citep{RT95} and is therefore a measure of the relative flux of helium
ionizing photons to hydrogen ionizing photons.
In directions at low Galactic latitude (where \ifmmode{I_{\rm{H}\alpha}} \else $I_{\rm H \alpha}$\fi $\approx$ 10
R), \citet{RT95} and \citet{TuftePhD} found \ion{He}{1}/H$\alpha$\ significantly
below that measured for O star \ion{H}{2}\ regions. These observations implied
that the spectrum of the diffuse radiation field in the WIM, at least
along those low latitude lines of sight, is significantly softer than that
from the average Galactic O star population.
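Equation \ref{eq:heieq} likewise gives He$^+$/He directly from the observed ratio. A Python sketch (illustrative only, assuming He/H = 0.1 and H$^+$/H = 1, the values adopted below):

```python
def he_ion_frac(hei_ha, t4=0.8, he_abund=0.1, h_ion=1.0):
    """He+/He from equation (5), given He I 5876 / H-alpha in energy units."""
    return hei_ha * h_ion / (0.47 * t4**-0.14 * he_abund)
```

For the \ion{H}{2}\ region values reported in \S\ref{sec:hii} (0.011 for S276 and 0.037 for S132), this gives He$^+$/He $\approx$ 0.23 and 0.76, respectively, at $T_4 = 0.8$.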
We have extended these emission line analyses to many other directions in
order to explore the variations in conditions within the different regions
of interstellar ionized hydrogen mapped by the WHAM survey. When
estimating the physical conditions, we assume that H$^+$/H = 1,
and N$^+$/N = 0.8. When estimating electron densities from the emission measure, we assume the gas is at $T = 8000$~K and completely fills a spherical or cylindrical volume defined by its appearance on the sky.
Except where noted, we also assume interstellar gas phase abundances of N/H = $7.5\times10^{-5}$ \citep{MCS97}, S/H = $1.86\times10^{-5}$ \citep{AG89}, O/H =
3.19$\times10^{-4}$ \citep{MJC98}, and He/H = 0.1.
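The density estimate described above can be sketched as follows. Note that the numerical coefficient relating the emission measure to \ifmmode{I_{\rm{H}\alpha}} \else $I_{\rm H \alpha}$\fi\ is the standard one for gas near $10^4$ K and is an assumption of this sketch:

```python
import math

def electron_density(i_ha, path_pc, t4=0.8):
    """Mean n_e (cm^-3) for gas at temperature T4 (units of 10^4 K)
    uniformly filling a line-of-sight depth path_pc (pc).  Assumes the
    standard relation EM = 2.75 * T4**0.9 * I_Ha, with I_Ha in Rayleighs
    and EM in cm^-6 pc."""
    em = 2.75 * t4**0.9 * i_ha
    return math.sqrt(em / path_pc)
```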
\section{OBSERVATIONS}
\label{sec:obs}
All of our observations were obtained with the Wisconsin H-Alpha Mapper
(WHAM) spectrometer. WHAM was specifically designed to detect very faint
optical emission lines from the diffuse interstellar medium, and consists
of a 0.6~m siderostat coupled to a 15 cm dual-etalon Fabry-Perot system
\citep{TuftePhD, Haffner+03}. It produces a spectrum at a resolution of 12
\ifmmode{\rm km\thinspace s^{-1}}\else km\thinspace s$^{-1}$\fi\ within a 200 \ifmmode{\rm km\thinspace s^{-1}}\else km\thinspace s$^{-1}$\fi\ wide spectral window, integrated over a circular,
$1\ifmmode{^{\circ}}\else $^{\circ}$\fi$ diameter field of view. The spectrometer can be centered on any
wavelength between 4800 and 7400 \AA. WHAM is located at the Kitt Peak
National Observatory in Arizona and is completely remotely operated from
Madison, Wisconsin.
The data presented here can be separated into two categories: survey mode
observations and pointed mode observations. The large-scale maps of [\ion{N}{2}]\
and [\ion{S}{2}]\ toward the Orion-Eridanus bubble ($\sim$ 400 deg$^2$) and the
Perseus bipolar superbubble ($\sim$ 2400 deg$^2$) were taken in survey
mode, similar to the manner in which the WHAM-NSS data were obtained
\citep[see][]{Haffner+03}. For this mode the observations were divided
into contiguous `blocks', with each block consisting of up to 49
observations that cover an approximately $6\ifmmode{^{\circ}}\else $^{\circ}$\fi \times 7\ifmmode{^{\circ}}\else $^{\circ}$\fi$ area of the
sky. Each direction within a block was observed once for 60 s in each
line, with several blocks observed per night. In addition, spectra of
[\ion{N}{2}]$~\lambda6583$, [\ion{N}{2}]$~\lambda5755$, [\ion{S}{2}], \ion{He}{1}, and [\ion{O}{3}]\ were obtained toward
a number of individual sightlines in pointed mode to probe selected parts
of emission features at a high signal-to-noise ratio. Hereafter, [\ion{N}{2}]\
will refer to the $\lambda6583$
line, unless otherwise indicated. Pointed observations were made by
alternating between a given sightline toward a selected feature, an {\sc{ON}}
direction, and an accompanying, usually nearby, {\sc{OFF}} direction. Each pair
of observations was exposed for 120 s at a time (long enough to be well above the
effective $\pm$\ 4 $e^-$\ read noise of WHAM's CCD), with a total {\sc{ON}} integration time
of a few hours for the faintest lines (e.g., [\ion{N}{2}]$~\lambda5755$). The {\sc{OFF}}
directions were chosen to be as spatially close to the {\sc{ON}} direction as
possible and to provide the off-source spectrum containing the atmospheric
foreground and Galactic background emissions. All of the observations
were carried out during clear, moonless nights to avoid the
contribution of scattered solar and terrestrial lines in the spectra.
\subsection{Removal of Atmospheric Emission Lines}
The geocoronal H$\alpha$\ line, with \ifmmode{I_{\rm{H}\alpha}} \else $I_{\rm H \alpha}$\fi\ $\approx$ 5-10 R, is the strongest
terrestrial emission line contaminating the Galactic spectra. However, in
addition to H$\alpha$, all of the spectra are contaminated by much weaker
atmospheric lines, typically 5-7 with $I\approx 0.05 - 0.5$ R and FWHM
$\lesssim 10$ \ifmmode{\rm km\thinspace s^{-1}}\else km\thinspace s$^{-1}$\fi\ within the 200 \ifmmode{\rm km\thinspace s^{-1}}\else km\thinspace s$^{-1}$\fi\ spectral window. The positions
of these lines, which are largely unidentified, are fixed with respect to
a geocentric reference frame. Their strength is observed to vary with
both position in the sky and with time during the night, sometimes by up
to a factor of two. However, their relative strengths do not appear to
change by more than $\approx$ 10\% \citep{Haffner+03, Hausen+02-apj}.
To remove this faint foreground emission in the survey mode observations,
an atmospheric line template was fitted to and then subtracted from each
spectrum, in the manner described by \citet{Haffner+03}. These
templates were constructed by observing the faintest
direction in the H$\alpha$\ sky, in multiple emission lines, for an entire
night. This direction is near the Lockman Hole \citep{LJM86}, and has a
total H$\alpha$\ intensity of \ifmmode{I_{\rm{H}\alpha}} \else $I_{\rm H \alpha}$\fi\ $\lesssim$\ 0.1 R \citep{Hausen+02-apj}.
The average of all the spectra had a sufficient signal-to-noise ratio
to reveal the location, width, and relative intensities of these atmospheric lines.
An examination of the changes among individual spectra taken through the
night confirmed the terrestrial origin of the lines and provided a measure
of their overall strength. Since the relative strengths of the lines
appear to be constant, the template was multiplied by a single number, a
scaling factor, for each block of survey mode observations. The value of
this scaling factor was determined by matching the strengths of
atmospheric lines in parts of block-averaged spectra that did not show any
Galactic emission. The scaled template was then subtracted from each
spectrum in the block.
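The template scaling described above amounts to a one-parameter least-squares fit over the emission-free channels. A minimal sketch (the spectra and channel mask here are hypothetical inputs, not WHAM data structures):

```python
import numpy as np

def subtract_atm_template(spectrum, template, clean):
    """Scale an atmospheric-line template to a spectrum over the channels
    in the boolean mask `clean` (chosen to be free of Galactic emission),
    then subtract it.  Returns (corrected spectrum, scaling factor)."""
    num = np.dot(spectrum[clean], template[clean])
    den = np.dot(template[clean], template[clean])
    s = num / den
    return spectrum - s * template, s
```

In practice a single scaling factor is determined per block from the block-averaged spectra, as described above.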
In pointed mode spectra, the atmospheric lines were removed by subtracting
an appropriate {\sc{OFF}} from the {\sc{ON}} to produce a flat continuum. Past
experience with this technique has shown that the degree to which the
atmospheric emission in the {\sc{OFF}} spectrum represents the emission in the {\sc{ON}}
spectrum is usually the dominant source of uncertainty in the resulting
Galactic spectrum \citep[e.g.,][]{Madsen+01, Gallagher+03}. The large
number of short exposure time observations employed in this study,
combined with
the alternating {\sc{ON}}/{\sc{OFF}} observing technique yielded a very good subtraction
of the atmospheric lines. Some spectra were sensitive to lines as faint
as $I\approx 0.005$ R. This lower limit is set by random errors in the
baseline as well as a slightly incomplete subtraction of the atmospheric
lines.
\subsection{Intensity Calibration}
\label{sec:intcalib}
Intensity calibration involved several steps. The H$\alpha$\ spectra were
calibrated using synoptic observations of a portion of the North America
Nebula (NAN), which has an H$\alpha$\ surface brightness of \ifmmode{I_{\rm{H}\alpha}} \else $I_{\rm H \alpha}$\fi = 800 R with an
uncertainty of $\approx 10\%$ \citep{Scherb81, Haffner+03}. All of the
other emission line data were calibrated initially by determining the throughput of
the spectrometer at the different emission line wavelengths relative to
H$\alpha$. These included the quantum efficiency of the CCD, the transmission
of the narrowband (FWHM $\approx$ 20\AA) interference filters, the
transmission of the atmosphere, and a correction for the properties of the WHAM
optical train (e.g. coatings, reflections, and transmission of the mirrors and lenses). The corrections for the CCD quantum efficiency and the
interference filters were based on data provided by the manufacturers of
those systems. The transmission of the atmosphere could not be well
determined on a night-by-night basis, due to a lack of observations of a
large number of calibration targets taken at a variety of airmasses each
night. A few nights, however, were spent observing enough targets to
determine the average relative transmission of the atmosphere. We
determined an average zenith transmission that ranges from 94\% at [\ion{S}{2}]\
to 85\% at H$\beta$. These data agree well with the standard atmospheric
transmission curve at Kitt Peak that is included in the popular IRAF data
reduction package. We assumed that the relative transmission of the
atmosphere was the same for each night of observing and that the
atmosphere is plane-parallel.
The correction for the transmission of the WHAM optics was
determined empirically by using a combination of H$\alpha$\ and H$\beta$\
observations toward a part of the large \ion{H}{2}\ region surrounding Spica
($\alpha$ Vir), a nearby B1~III star. In the absence of extinction, the
photon number ratio of \ifmmode{I_{\rm{H}\alpha}} \else $I_{\rm H \alpha}$\fi\ to \ifmmode{I_{\rm{H}\beta}} \else $I_{\rm H \beta}$\fi\ of warm, low density photoionized
gas is 3.94, set by the `Case B' recombination cascade of hydrogen
\citep{Osterbrock89, HS87}. We assume that the emission from the Spica
\ion{H}{2}\ region suffers no extinction because of its proximity
\citep[$d \approx 80$ pc;][]{Hipparcos}, high Galactic latitude ($b
\approx +50\ifmmode{^{\circ}}\else $^{\circ}$\fi$), and the low interstellar hydrogen column density to the
exciting star \citep[$1.0 \times 10^{19} $ cm$^{-2}$;][]{YR76}. After
applying the CCD, interference filter, and atmospheric corrections mentioned above, we found that the H$\beta$\ spectra
needed to be multiplied by an additional factor of 1.36 for the ratio of
\ifmmode{I_{\rm{H}\alpha}} \else $I_{\rm H \alpha}$\fi/\ifmmode{I_{\rm{H}\beta}} \else $I_{\rm H \beta}$\fi\ to be equal to the expected value of 3.94. We assume that this
decrease in transmission from the red to the blue is linear with
wavelength, and interpolate this correction for the other, redder,
emission lines. As a consistency check, fully corrected H$\alpha$\ and H$\beta$\
spectra toward NAN yielded an \ifmmode{I_{\rm{H}\alpha}} \else $I_{\rm H \alpha}$\fi/\ifmmode{I_{\rm{H}\beta}} \else $I_{\rm H \beta}$\fi\ ratio of 5.1, consistent with
observations of extinction toward stars in the nebula \citep{Cambresy+02}.
The results presented in \citet{MR05} also confirm the validity of this
calibration technique. Note that nearly all the results presented here
involve comparing line ratios in one emission region with the
corresponding ratio in another region. As a result, much of the analysis
is insensitive to calibration errors between different wavelengths.
\subsection{Measurement Uncertainties}
\label{sec:errors}
The velocity calibrations of these spectra were derived from observations
of bright, narrow emission-line \ion{H}{2}\ regions, and are based on the
assumption that the emission from all of the lines from an individual
\ion{H}{2}\ region are at the same velocity with respect to the local standard
of rest (LSR). For H$\alpha$, H$\beta$, and [\ion{S}{2}], relatively bright terrestrial
emission lines within the spectra were used to confirm the calibration.
The calibrations were also checked against an empirical prediction based on
the tunes of the Fabry-Perot etalons \citep[see ][Chapter 5]{MadsenPhD}. The
resulting systematic uncertainty is estimated to be typically 2-3
\ifmmode{\rm km\thinspace s^{-1}}\else km\thinspace s$^{-1}$\fi. The uncertainty in the velocity calibration due to
random noise in the data is only a few tenths of a \ifmmode{\rm km\thinspace s^{-1}}\else km\thinspace s$^{-1}$\fi.
One of the contributions to the uncertainty in emission line strengths
comes from the random errors in measuring the level of the continuum. A
least-squares linear fit was used to estimate the continuum level and
remove it from each spectrum. However, there is scatter in the residual
due to the Poisson statistics of the detected photons as well as from the
incomplete subtraction of the atmospheric lines. This scatter introduces
an uncertainty when integrating the area under the emission line. In the
pointed mode observations, the dominant source of uncertainty was
generally $I \approx 0.01$ R, and in some cases half of that value.
The 1$\sigma$ values are listed in Tables~\ref{tab:hiibasic} through
\ref{tab:niiblue}. For some observations, the measured line strength is negative, indicating a non-detection. In this case an upper limit to the line strength, equivalent to the 1$\sigma$ uncertainty, is given in the Tables.
In the survey mode spectra, on the other hand, the large area of the sky
observed in this mode prohibited an alternating {\sc{ON}}/{\sc{OFF}} observing
technique. Also, these spectra were obtained with considerably shorter
total exposure times and often have Galactic emission present across much
of the 200 \ifmmode{\rm km\thinspace s^{-1}}\else km\thinspace s$^{-1}$\fi\ spectral window. As a result, the dominant source of
uncertainty in this mode is the removal of the atmospheric lines, which in
turn depends upon the accuracy of the atmospheric line template and the
uncertainty in its scaling factor. We conservatively estimate the
uncertainty in the scaling factor to be 30\%, based on the visual
appearance of the corrected spectra, as well as the values of the
different scaling factors used in all of the spectra. The uncertainty
varies with position within each
spectrum, because the atmospheric lines only appear in certain places in
each spectrum. The error bars that appear in the figures below for survey
mode data represent this 30\% uncertainty in the atmospheric scaling
factor.
\section{OBSERVATIONS OF CLASSICAL \ion{H}{2}\ REGIONS}
\label{sec:hii}
The warm ionized medium is thought to be ionized primarily by Lyman
continuum photons from hot stars, although the mechanism by which this
happens is largely unknown (see \S\ref{sec:intro}). Therefore, to investigate
the nature of the WIM, it is useful to compare the observed emission line
ratios in the faint diffuse emission regions with the
corresponding
ratios observed in the bright classical \ion{H}{2}\ regions immediately
surrounding hot stars. In this section we discuss the results of emission
line strengths of H$\beta$, [\ion{N}{2}], [\ion{S}{2}], [\ion{O}{3}], and \ion{He}{1}\ relative to H$\alpha$\ for a
collection of 13 O-star \ion{H}{2}\ regions, plus two regions of ionized gas
surrounding hot evolved stellar cores. Some of these \ion{H}{2}\
regions (immediately surrounding O stars in Orion OB1 and Cas OB6) are
associated with much larger extended regions of filaments and loops,
allowing us to explore the question of whether the diffuse WIM is the
superposition of such extended structures surrounding some O stars and O
associations (\S\S\ref{subsec:ori} and \ref{subsec:per}).
Tables~\ref{tab:hiibasic} and \ref{tab:hiimulti} summarize
the \ion{H}{2}\ region observations, which were taken in pointed mode. The {\sc{OFF}}s
were selected based on the H$\alpha$\ maps from the WHAM-NSS. They were chosen
to be as close to the \ion{H}{2}\ region as possible, but in regions where the
emission is diffuse and could be considered to be part of the WIM. This
selection criterion was somewhat subjective, and many {\sc{OFF}}s were tens of
degrees away from the \ion{H}{2}\ regions. However, the resulting line
intensities are mostly insensitive to the selection of {\sc{OFF}}s, because the
\ion{H}{2}\ region emission lines are much stronger than those of the background
WIM and the atmosphere.
The first column in the top part of Table~\ref{tab:hiibasic} gives the
names of the O-star \ion{H}{2}\ regions from the catalogs of
\citet{Westerhout58}, \citet{Sharpless59}, and \citet{Sivan74}. The names
and spectral types of the stars or OB associations thought to be creating
the \ion{H}{2}\ regions are listed in the second and third columns,
respectively. The identification of the ionizing sources comes from the
angular proximity of the \ion{H}{2}\ regions to stars found in the databases of
SIMBAD and the O-star catalog of \citet{Maiz+04}, and should not
necessarily be considered secure. For the OB associations, the listed
spectral type is for the hottest known member of the association. The
exciting O stars have been sorted in order of increasing stellar
temperature. The bottom two rows of the table provide information about
the two \ion{H}{2}\ regions near evolved stellar cores and are also identified
by the spectral type of the likely ionizing source. The Galactic
coordinates of each observation direction appear in columns 4 and 5. The
centroid LSR velocity of the H$\alpha$\ emission from each \ion{H}{2}\ region is
listed in column 6.
Many of the \ion{H}{2}\ regions were observed in H$\beta$\ as well as H$\alpha$\ in order
to quantify the
extinction to these sources, which lie primarily near the Galactic plane.
In photoionized gas at a temperature of $T_e = 8000$ K, the ratio of the
number of H$\alpha$\ to H$\beta$\ photons emitted is 3.94 \citep{HS87,
Osterbrock89}, which is only very weakly dependent on temperature
($\propto T^{0.07}$) and density.
Our observed values of \ifmmode{I_{\rm{H}\alpha}} \else $I_{\rm H \alpha}$\fi\ and \ifmmode{I_{\rm{H}\beta}} \else $I_{\rm H \beta}$\fi\ can then be used to estimate the
extinction to the nebulae. Assuming the dust has a total-to-selective
extinction ratio $R_V = \ifmmode{A(V)}\else $A(V)$\fi/E(B-V) = 3.1$, and the wavelength dependence
of extinction characterized by \citet{CCM89}, we obtain
\begin{equation}
\ifmmode{A(V)}\else $A(V)$\fi = 3.12\,\ln\!\left(\frac{\ifmmode{I_{\rm{H}\alpha}} \else $I_{\rm H \alpha}$\fi/\ifmmode{I_{\rm{H}\beta}} \else $I_{\rm H \beta}$\fi}{3.94}\right)
\label{eq:aveq}
\end{equation}
\citep{MR05}. These values of \ifmmode{A(V)}\else $A(V)$\fi\ are listed in column 7 of
Table~\ref{tab:hiibasic}. The uncertainties in \ifmmode{A(V)}\else $A(V)$\fi\ are a
reflection of the uncertainties in the strength of the H$\alpha$\ and H$\beta$\
lines as discussed in \S\ref{sec:errors}, with the errors propagated
according to equation \ref{eq:aveq}.
For directions in which \ifmmode{A(V)}\else $A(V)$\fi\ was determined, the intensity of the observed
lines (and its uncertainty) has been adjusted to its extinction-corrected
value, using the extinction at other wavelengths determined by
\citet{CCM89}. A few \ion{H}{2}\ region observations were not observed in H$\beta$,
and hence no value of \ifmmode{A(V)}\else $A(V)$\fi\ was derived. Because of the uncertainties in
the measurements of \ifmmode{A(V)}\else $A(V)$\fi, and their relatively low values, we have not
corrected the emission from the \ion{H}{2}\ regions that were not observed in
H$\beta$. However, the corrections to the line {\it{ratios}}
(Table~\ref{tab:hiimulti}) are not
very sensitive to moderate values of \ifmmode{A(V)}\else $A(V)$\fi, especially for [\ion{N}{2}]/H$\alpha$\ and
[\ion{S}{2}]/H$\alpha$. For a given value of \ifmmode{A(V)}\else $A(V)$\fi, the correction factors are
$e^{0.28\ifmmode{A(V)}\else $A(V)$\fi}$, $e^{0.10\ifmmode{A(V)}\else $A(V)$\fi}$, $e^{-0.003\ifmmode{A(V)}\else $A(V)$\fi}$, and $e^{-0.02\ifmmode{A(V)}\else $A(V)$\fi}$ for
[\ion{O}{3}]/H$\alpha$, \ion{He}{1}/H$\alpha$, [\ion{N}{2}]/H$\alpha$, and [\ion{S}{2}]/H$\alpha$, respectively, using the ratio
of optical depths at different wavelengths from \citet{CCM89}. Only one of
the non-corrected O-star \ion{H}{2}\ regions, S292, which is ionized by the CMa
OB1 association, lies near the Galactic plane where the extinction may be
significant. \citet{Claria74} conducted a photometric study of this star
cluster, and found considerable scatter in the visual absorption for
member stars of the OB association, with a mean value of $\ifmmode{A(V)}\else $A(V)$\fi = 0.81\pm
0.30$ mag. If the emission from S292 suffers this same average extinction
within the WHAM beam, then the extinction-corrected value for \ifmmode{I_{\rm{H}\alpha}} \else $I_{\rm H \alpha}$\fi\ is 514
R. The corrections for the line ratios are all smaller than the
uncertainties, except for [\ion{O}{3}]/H$\alpha$\ which increases from $\approx$\ 0.10 to 0.13.
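Equation \ref{eq:aveq} and the line-ratio corrections just quoted are simple to apply in practice; a Python sketch (illustrative only, with the exponents taken from the values above):

```python
import math

def a_v(i_ha, i_hb):
    """A(V) in magnitudes from the observed Ha/Hb photon ratio, equation (6)."""
    return 3.12 * math.log((i_ha / i_hb) / 3.94)

# Multiplicative corrections to observed line ratios for extinction A(V),
# using the exponents quoted in the text (from the CCM89 extinction curve).
EXPONENTS = {"OIII/Ha": 0.28, "HeI/Ha": 0.10, "NII/Ha": -0.003, "SII/Ha": -0.02}

def correct_ratio(observed, line, av):
    """Extinction-correct an observed line ratio for a given A(V)."""
    return observed * math.exp(EXPONENTS[line] * av)
```

For S292, with $\ifmmode{A(V)}\else $A(V)$\fi = 0.81$, the [\ion{O}{3}]/H$\alpha$\ correction factor is $e^{0.28 \times 0.81} \approx 1.25$, raising the ratio from 0.10 to $\approx$ 0.13 as stated above.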
The other two non-corrected O-star \ion{H}{2}\ regions, S264 and S276, are both
part of the nearby Orion OB1 association, which has a lower mean
interstellar extinction of $\ifmmode{A(V)}\else $A(V)$\fi \approx 0.15$ mag \citep{WL78}. The two
non-O-star \ion{H}{2}\ regions at the bottom of Table~\ref{tab:hiibasic} were
also not corrected, but they are at very high latitudes, $b \gtrsim
50\ifmmode{^{\circ}}\else $^{\circ}$\fi$, where the extinction is also likely to be low.
The eighth column of Table~\ref{tab:hiibasic} lists the
intensities of the H$\alpha$\ line within WHAM's $1\ifmmode{^{\circ}}\else $^{\circ}$\fi$ beam.
We note that this measurement is a lower limit for several \ion{H}{2}\
regions that do not fill the beam, as indicated in the Table.
We see that these extinction corrected H$\alpha$\ intensities vary by three
orders of magnitude, from $\approx$ 2 R to $\approx$ 3000 R, with the
brightest toward the OB association Cas OB6, and the faintest toward
the two faint evolved stellar cores.
The line strengths of [\ion{N}{2}], [\ion{S}{2}], [\ion{O}{3}], and \ion{He}{1}\ emission relative to
H$\alpha$\ (in energy units) are presented in Table~\ref{tab:hiimulti}. For
[\ion{N}{2}]/H$\alpha$, we find that the average value for the O-star \ion{H}{2}\ regions is
0.27. As will be seen in the following sections, this value is
significantly lower than what is generally observed in the WIM, where a
typical value of [\ion{N}{2}]/H$\alpha$\ is $\approx$ 0.5, but in some cases exceeds
1.0. In addition, there is no strong trend in [\ion{N}{2}]/H$\alpha$\ with spectral
type. If [\ion{N}{2}]/H$\alpha$\ is tracing the electron temperature of the gas, we
might expect to see a slight decrease in [\ion{N}{2}]/H$\alpha$\ with decreasing stellar
temperature. However, the ionizing radiation from the hottest stars may
have a significant flux above 29.6 eV (as suggested by their higher
[\ion{O}{3}]/H$\alpha$), which can ionize N$^+$ and thus
complicate the relationship between electron temperature and [\ion{N}{2}]/H$\alpha$\
(eq. \ref{eq:niieq}). For the three observations taken near the very
large \ion{H}{2}\ region ionized by the O7.5III star $\xi$ Per (S220 and Sivan
4), [\ion{N}{2}]/H$\alpha$\ varies by almost a factor of two, from 0.23 to 0.40, with
the brightest portion of the region, often referred to as the California
nebula, having the highest ratio. The highest [\ion{N}{2}]/H$\alpha$\ ratios are
near unity and are associated with the \ion{H}{2}\ regions
surrounding the hot stellar cores. These results are presented graphically
in Figure~\ref{fig:hiisummary}. The horizontal line in each panel
denotes the average value for that particular line ratio.
For [\ion{S}{2}]/H$\alpha$, we find an average value of 0.11 for the O-star \ion{H}{2}\
regions. This value is also significantly lower than what is generally
found in the WIM, consistent with previous studies.
Because of the low ionization potential of S$^+$, [\ion{S}{2}]/H$\alpha$\ is more
sensitive to ionization effects than [\ion{N}{2}]/H$\alpha$. However, we do not see
any strong trends in this ratio with spectral type of the ionizing
sources. Again, the highest [\ion{S}{2}]/H$\alpha$\ ratios are associated with the
regions that surround the hot stellar cores.
On the other hand, line ratios that include ions with higher ionization
potentials, namely, [\ion{O}{3}]/H$\alpha$\ and \ion{He}{1}/H$\alpha$, are observed to increase
with increasing photospheric temperature of the O star. This is consistent
with the gas near the hotter stars being subject to a harder incident
spectrum. We find a large scatter in [\ion{O}{3}]/H$\alpha$\ with an average value of
[\ion{O}{3}]/H$\alpha$\ of 0.18. Values for \ion{He}{1}/H$\alpha$\ show a strong trend with
spectral type, ranging from about 0.011 for S276 (O9.5 V) to 0.037 for S132
(WN6 + O6 I).
The last two rows of the Table~\ref{tab:hiimulti} summarize the
observations toward two hot, evolved stellar cores. These
lines of sight are centered on a sub-dwarf B star and a helium-rich DO hot
white dwarf. While they are not traditional \ion{H}{2}\ regions, they were
included in this study to compare the emission-line characteristics of the
faint ionized gas around these very hot but low luminosity stars with the
WIM emission. These two \ion{H}{2}\ regions were first noted by
\citet{Haffner01}, in a preliminary search of the WHAM-NSS for faint H$\alpha$\
emission near hot white dwarf and sub-dwarf stars. A more thorough search
of the H$\alpha$\ survey has revealed numerous small-scale
($\lesssim 1\ifmmode{^{\circ}}\else $^{\circ}$\fi$) H$\alpha$\
enhancements, many of which are not associated with any known ionizing
source \citep{Reynolds+05}. The detection of enhanced H$\alpha$\ emission around
the $m_V = 13.5$ star PG 1047+003 is the first detection of ionized gas
surrounding this sub-dwarf B star. The emission associated with DO star PG
1034+01 near \ifmmode{(\ell,b)} \else $(\ell,b)$\fi = (248\ifmmode{^{\circ}}\else $^{\circ}$\fi,+48\ifmmode{^{\circ}}\else $^{\circ}$\fi) has been recently explored by
\citet{Hewett+03} and \citet{RKP04}, who conclude that this region is a
high-excitation, planetary-nebula like object. The line ratios,
particularly the high values of [\ion{N}{2}]/H$\alpha$\ and [\ion{O}{3}]/H$\alpha$, toward both
of these regions are consistent with planetary nebula spectra.
We now proceed to compare the spectral characteristics of these
classical \ion{H}{2}\ regions with those of the much larger scale
emission features revealed by the WHAM H$\alpha$\ survey, including the diffuse WIM.
\section{THE WARM IONIZED MEDIUM}
\label{sec:wim}
As mentioned in \S\S\ref{sec:intro} and \ref{sec:physconds}, the spectral
characteristics of the WIM differ significantly
from the classical \ion{H}{2}\ regions. This has been discussed in detail in
earlier studies \citep[e.g.,][]{HRT99}.
As an illustration of the principal difference between WIM and \ion{H}{2}\
region spectra, we show H$\alpha$, [\ion{N}{2}], and [\ion{S}{2}]\ observations toward the O
star \ion{H}{2}\ region Sivan 2 in Figure~\ref{fig:wimspectra}. Two radial
velocity components are present along this line of sight. Velocity channel
maps from the WHAM survey show that the emission near \ifmmode{v_{\rm{LSR}}}\else $v_{\rm{LSR}}$\fi = 0 \ifmmode{\rm km\thinspace s^{-1}}\else km\thinspace s$^{-1}$\fi\ is
associated with diffuse foreground emission (i.e., the local WIM). This
emission was well separated in velocity from the \ion{H}{2}\ region emission,
allowing a direct comparison of the relative intensities of the lines from
the two different sources in the same spectrum.
The velocity of each component was determined from a least-squares fit of a sum of Gaussian profiles to the [\ion{S}{2}]\ spectrum. The [\ion{S}{2}]\ spectrum was chosen because of its narrow, well-resolved component profiles and its high signal-to-noise. The resulting component velocities are shown as vertical dashed lines in the Figure. The strength of each component was calculated from a two-component Gaussian fit to each spectrum in which the velocities for each component were fixed as determined from the [\ion{S}{2}]\ spectrum.
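With the component velocities held fixed, the amplitudes can be obtained by a linear least-squares fit. The sketch below also holds the widths fixed for simplicity; this is a simplification of the actual fitting, which solves for the widths as well:

```python
import numpy as np

def component_strengths(v, spectrum, centers, widths):
    """Amplitudes of Gaussian components at fixed centers (e.g., taken
    from the [S II] fit) and fixed widths, via linear least squares."""
    basis = np.array([np.exp(-0.5 * ((v - c) / w) ** 2)
                      for c, w in zip(centers, widths)]).T
    amps, *_ = np.linalg.lstsq(basis, spectrum, rcond=None)
    return amps
```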
These spectra clearly reveal that [\ion{N}{2}]\ $\lambda$6584/H$\alpha$, [\ion{N}{2}]\
$\lambda$5755/H$\alpha$, and [\ion{S}{2}]/H$\alpha$\ are significantly higher in the diffuse
gas compared to the \ion{H}{2}\ region, with [\ion{N}{2}]/H$\alpha$\ = 0.83 and [\ion{S}{2}]/H$\alpha$ =
0.38 in the WIM, compared to 0.23 and 0.12, respectively, for the \ion{H}{2}\
region. The [\ion{N}{2}]/H$\alpha$\ data suggest that the temperature of the diffuse gas
is $T \approx$\ 9000 K, compared to $\approx$\ 6100 K for the \ion{H}{2}\
region. The high temperature derived for the WIM relative to the \ion{H}{2}\
region, from the [\ion{N}{2}]$\lambda$6584/H$\alpha$\ ratio (\S\ref{sec:physconds}), is
confirmed by observations of the highly temperature sensitive [\ion{N}{2}]$~\lambda5755$\
emission line. These observations show relatively bright [\ion{N}{2}]$~\lambda5755$\
emission for the fainter (in H$\alpha$) WIM component, while the [\ion{N}{2}]$~\lambda5755$\ line
is not even detected in the cooler \ion{H}{2}\ region component (see also
\S\ref{sec:niiblue} below).
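The quoted temperatures follow from inverting the [\ion{N}{2}]/H$\alpha$\ ratio as a function of $T$. A minimal numerical sketch, assuming the collisional-excitation scaling of Haffner et al. (1999) with N$^+$/N $=$ 0.8 and a gas-phase abundance N/H $= 7.5\times10^{-5}$ (assumed values for illustration):

```python
import math

def nii_ha_ratio(T, n_plus_frac=0.8, n_abund=7.5e-5):
    """Approximate [N II] 6584 / Halpha intensity ratio at electron
    temperature T (K), following the collisional-excitation scaling of
    Haffner et al. (1999).  N+/N and N/H are assumed values."""
    t4 = T / 1e4
    return 1.63e5 * n_plus_frac * n_abund * t4 ** 0.426 * math.exp(-2.18 / t4)

def temperature_from_ratio(ratio, lo=4000.0, hi=15000.0):
    """Invert the monotonically increasing ratio-temperature relation
    by bisection."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if nii_ha_ratio(mid) < ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

t_wim = temperature_from_ratio(0.83)   # WIM component     -> ~9000 K
t_hii = temperature_from_ratio(0.23)   # H II region component -> ~6100 K
```

Both recovered temperatures agree with the values quoted above for the WIM and \ion{H}{2}\ region components.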
Additional differences between the \ion{H}{2}\ regions, the WIM, and other faint,
large-scale emission features in the H$\alpha$\ sky are discussed in the
following sections. For this paper, the `WIM' refers to the gas that
produces the faint, diffuse emission outside the classical \ion{H}{2}\
regions, extended bubbles and superbubbles, and high latitude
filamentary structures.
\section{\ion{H}{1}\ CAVITIES AND SUPERBUBBLES}
\label{sec:bubbles}
One of the basic questions concerning the nature of the WIM is how
ionizing photons from O stars are able to travel hundreds of parsecs from
the stars. One possibility is the existence of enormous \ion{H}{1}-free bubbles
surrounding some of the O stars. In \S\ref{subsec:ori} and
\S\ref{subsec:per}, we examine in detail the faint optical line emission
associated with two very extended bubble-like regions which have diameters
of $\sim$ 40\ifmmode{^{\circ}}\else $^{\circ}$\fi\ - 60\ifmmode{^{\circ}}\else $^{\circ}$\fi\ (up to 2 kpc in extent) and which are
ionized by luminous O associations. The line ratios are then compared with
ratios in the more diffuse WIM to examine whether the WIM could be a
superposition of such regions.
\subsection{Orion-Eridanus Bubble}
\label{subsec:ori}
One of the largest networks of interconnected H$\alpha$-emitting structures in
the WHAM sky survey appears in the constellations of Orion and Eridanus,
shown in Figure~\ref{fig:orimap}. The presence of optical and
radio-emitting filaments in this general direction has been known for many
years. \citet{RO79} carried out velocity resolved emission-line
observations of this region, and found that the filaments, loops, and
enhanced H$\alpha$\ emission are all part of an expanding shell of neutral and
ionized gas with a diameter of $\approx 280$ pc. They suggested that Lyman
continuum photons from the Ori OB1 association, located near one side of
the bubble, travel largely unimpeded through the hot ($T \sim 10^6$~K)
interior cavity and ionize the inner surface of its surrounding outer
shell. They estimated that the shell has a density near 1 cm$^{-3}$, which
is significantly higher than the density in the WIM; nevertheless, the
large extent of the cavity has produced diffuse and filamentary H$\alpha$\
covering a $40\ifmmode{^{\circ}}\else $^{\circ}$\fi \times 40\ifmmode{^{\circ}}\else $^{\circ}$\fi$ region of the sky and up to 34\ifmmode{^{\circ}}\else $^{\circ}$\fi\ from
the OB association. This picture is supported by the detection of diffuse
X-ray emission interior to the bubble walls, as well as more recent
studies of \ion{H}{1}\ in the region \citep{Burrows+93, BHB95}.
The shell is expanding at a velocity of about 20 \ifmmode{\rm km\thinspace s^{-1}}\else km\thinspace s$^{-1}$\fi, likely as a result of supernova activity; however, \citet{RO79} have shown that the contribution from
shocks to the ionization of the walls of the bubble is likely to be
negligible. They found that among the most luminous stars within the
bubble, $\delta$ Ori, an O9.5~I star, is probably responsible for most of
the ionization. It is the only hot star in the cavity that has no discrete
\ion{H}{2}\ region around it, implying that most of its ionizing radiation
travels unimpeded through the cavity. The Orion-Eridanus bubble, which is
significantly brighter in H$\alpha$\ compared to the more diffuse WIM, is thus
an excellent environment in which to study the relationship between
traditional \ion{H}{2}\ regions and the warm ionized medium. By comparing the
physical conditions within, around, and outside this bubble, we can assess
the similarities and differences between gas that is part of a large
cavity ionized by a known source and the fainter, diffuse WIM.
\subsubsection{Large-Scale Trends}
We have observed this bubble along several lines of sight in the emission
lines of H$\alpha$, [\ion{N}{2}], [\ion{S}{2}], [\ion{O}{3}], and \ion{He}{1}. Superposed on the map of
this region, presented in Figure~\ref{fig:orimap}, are the approximate
locations of the pointed
observations. The bubble is outlined approximately by a circle that goes
through directions A, 1, 2, 5, 7, and G. Direction 3 is located
\emph{outside} of the boundary and samples diffuse interstellar gas near
the Galactic plane. The numbered directions are ordered with increasing
distance from the Orion OB1 association, located near $\sigma$ Ori and its
\ion{H}{2}\ region, which is indicated on the diagram with the letter $\sigma$.
A set of seven closely spaced observations were also made that cut across
a filament near the lower edge of the bubble, and are labeled A-G.
Two directions denoted by `X' were used as {\sc{OFF}}s for the reduction of the
pointed observation spectra. The OFFs have an H$\alpha$\ intensity of $\approx$ 0.5 R, and subtracting them from the ONs allows us to isolate emission in the bubble from any background or foreground emission.
The green box in Figure~\ref{fig:orimap} indicates a region of the
bubble that also was mapped in [\ion{N}{2}]\ and [\ion{S}{2}].
Table~\ref{tab:ori} summarizes our pointed observations of this region,
with the name of each direction corresponding to the labels in
Figure~\ref{fig:orimap}. The columns in the table are similar to those
in
Tables~\ref{tab:hiibasic} and \ref{tab:hiimulti}. However, the
fourth column shows the angular distance of each direction from \ifmmode{(\ell,b)} \else $(\ell,b)$\fi =
(206.5\ifmmode{^{\circ}}\else $^{\circ}$\fi,~-18.0\ifmmode{^{\circ}}\else $^{\circ}$\fi), the H$\alpha$\ flux-weighted center of the bubble,
as determined by \citet{RO79} and roughly the center of the Orion OB 1
association. The spectrum toward direction 1 has two emission
components; the emission line strengths that appear in the table include
the total emission from both components. Also, because of the weak
dependence of the line ratios on \ifmmode{A(V)}\else $A(V)$\fi, and the low extinction to this
nearby region \citep[\ifmmode{A(V)}\else $A(V)$\fi $\sim$ 0.15 mag;][]{WL78}, no extinction
correction was applied to the data. As discussed in \S\ref{sec:obs}, {\sc{OFF}}
spectra were subtracted from each of these pointed observations, and all
of the results in the table represent `background subtracted' values.
This procedure isolates emission that is only associated with the
Orion-Eridanus bubble.
A graphical summary of the line ratios is shown in
Figure~\ref{fig:oridiag}. The panel on the left includes the
observations that are within or on the boundary of the bubble (1-7), except for direction 3. The
panel on the right shows the results for the series of observations that
cut across the outermost filament at $b \approx -50\ifmmode{^{\circ}}\else $^{\circ}$\fi$ (A-G), which
appears to be an edge-on projection of the cavity's outer shell. These
pointings, spaced about 1\ifmmode{^{\circ}}\else $^{\circ}$\fi\ apart, begin on one side of the H$\alpha$\
filament (the side toward the O association), cross the filament, and end
just outside the ionized part of the bubble (see
\S\ref{subsubsec:orisurv}). The column density of \ion{H}{1}, from
\citet{HIAtlas}, is also shown in the panel on the right (\emph{green}),
and indicates the location of the neutral portion of the shell. The name
of each observation direction appears above the panels. The data are shown
as a function of $D_\theta$, the angular distance from the center of the OB1
association, which is useful in order to search for potential changes in
the physical conditions of the gas with increasing distance from the
ionizing source. The top plot in each panel shows the H$\alpha$\ intensity
toward each direction on a logarithmic scale. The second plot from the
top shows the values of [\ion{N}{2}]/H$\alpha$\ and [\ion{S}{2}]/H$\alpha$, while the third and fourth
plots give [\ion{O}{3}]/H$\alpha$\ and \ion{He}{1}/H$\alpha$, respectively. The horizontal line in
each plot represents the average value for the ratio in \ion{H}{2}\ regions
(Table~\ref{tab:hiimulti}; Figure~\ref{fig:hiisummary}); for
[\ion{O}{3}]/H$\alpha$, the \ion{H}{2}\ region average, 0.094, is off scale.
For directions 1-7, we see that [\ion{N}{2}]/H$\alpha$\ varies between $\approx$ 0.2 and
0.3, and there is no significant correlation with $D_\theta$ out to
$25\ifmmode{^{\circ}}\else $^{\circ}$\fi$ from the association. The ratio is weakly anti-correlated with
\ifmmode{I_{\rm{H}\alpha}} \else $I_{\rm H \alpha}$\fi, with brighter regions of the bubble having lower values of [\ion{N}{2}]/H$\alpha$.
This is a common behavior seen in these bubble structures and in the WIM,
both for the Milky Way and other galaxies \citep{HRT99, Rand98, CR01}.
[\ion{N}{2}]/H$\alpha$\ is near the value of the average \ion{H}{2}\ region, and slightly
lower than in the $\sigma$ Ori \ion{H}{2}\ region, which appears to reside
inside (or perhaps on the wall of) the cavity. The variation in [\ion{S}{2}]/H$\alpha$\
($\approx$ 0.15 - 0.25) is larger than for [\ion{N}{2}]/H$\alpha$, and there is a very weak
trend in which [\ion{S}{2}]/H$\alpha$\ is higher at larger distances from the
association, where it becomes larger than the average ratio in classical \ion{H}{2}\
regions. The ratio [\ion{S}{2}]/[\ion{N}{2}]\ also increases with $D_\theta$, from 0.55 for
direction 1 to 0.95 in direction 6. As will be seen later, [\ion{N}{2}]/H$\alpha$\ and
[\ion{S}{2}]/H$\alpha$\ in the bubble are generally lower and exhibit less scatter than the ratios observed in the WIM.
The [\ion{O}{3}]/H$\alpha$\ ratio is extremely low throughout the bubble, $\lesssim
0.02$, with the exception of direction 4 (direction 3 is outside
the bubble), and is an order of magnitude below the average value of 0.18
for the \ion{H}{2}\ regions (Table~\ref{tab:hiimulti}). Similarly, the
\ion{He}{1}/H$\alpha$\ ratios are low ($\lesssim 0.015$) relative to the \ion{H}{2}\ region
average.
Based upon the discussion presented in \S\ref{sec:hii}, the [\ion{N}{2}]/H$\alpha$\ and
[\ion{S}{2}]/[\ion{N}{2}]\ data suggest that the temperature of the ionized gas is
$6000~{\rm K} \lesssim T \lesssim 6500~{\rm K}$ with $0.4 \lesssim \rm{S}^+/\rm{S}
\lesssim 0.6$. In addition, the \ion{He}{1}\ data suggest that He$^+$/He
$\lesssim 0.3$. We can quantify the
hardness of the radiation field by assuming from the above He$^+$/He
ratio that the volume of the He$^+$ zone along these lines of sight is
smaller than the volume of the H$^+$ zone by a factor of $\lesssim$\ 0.3.
The ratio of total number of He-ionizing photons $Q$(He$^0$) to H-ionizing
photons $Q$(H$^0$) along the line of sight is then proportional to this
volume ratio, specifically $Q$(He$^0$)/$Q$(H$^0$) $\lesssim$ 0.03
\citep{Osterbrock89}. This corresponds to a star with an effective
temperature $T_* \lesssim$ 35,000 K, equivalent to O8.5~I or O9.5~V or
cooler \citep{VGS96}. This is consistent with the likely ionizing source
of the shell, $\delta$\ Ori, which is an O9.5~I star, although further
analysis using recent stellar atmosphere models is needed to confirm
this scenario \citep{MSH05}. The low He$^+$/He is
consistent with the apparently low ionization state of oxygen, where
[\ion{O}{3}]/H$\alpha$\ suggests that O$^{++}$/O $\lesssim$\ 0.04 (except for
direction 4, where O$^{++}$/O $\approx$\ 0.1). Outside the bubble and
closer to the Galactic plane (direction 3), He$^+$/He $\approx 0.7$,
suggesting that the radiation field outside the shell is significantly
harder, consistent with a continuum source with $T_*$ $\gtrsim$ 40,000 K,
an O7 star or earlier.
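The step from the He$^+$/He volume constraint to the ionizing-photon ratio can be made explicit. In photoionization equilibrium, each ionizing photon balances one recombination in the corresponding zone; assuming case-B conditions, roughly equal recombination coefficients for H and He, uniform gas, and a helium abundance $n_{\rm He}/n_{\rm H} \approx 0.1$ (standard assumed values):

```latex
% Q(H^0)  \propto n_e \, n_{\mathrm{H}^+}  \, V_{\mathrm{H}^+} and
% Q(He^0) \propto n_e \, n_{\mathrm{He}^+} \, V_{\mathrm{He}^+}, so that
\[
\frac{Q(\mathrm{He}^0)}{Q(\mathrm{H}^0)} \approx
\frac{n_{\mathrm{He}}}{n_{\mathrm{H}}}\,
\frac{V_{\mathrm{He}^+}}{V_{\mathrm{H}^+}}
\lesssim 0.1 \times 0.3 = 0.03 .
\]
```

This reproduces the $Q$(He$^0$)/$Q$(H$^0$) $\lesssim$ 0.03 limit quoted above.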
\subsubsection{The \ion{H}{2}\ to \ion{H}{1}\ Transition through the Shell}
\label{subsubsec:orisurv}
The data on the right panel in Figure~\ref{fig:oridiag} show the
variation in the line ratios across the outer edge of the bubble, where
the hydrogen in the shell is making a transition from fully ionized
(on the shell's inside surface) to completely neutral. Here we see small,
but significant trends in the ratios of [\ion{N}{2}]/H$\alpha$\ and [\ion{S}{2}]/H$\alpha$\ across
this outer filament, which is a projection enhancement from an edge-on
view of the shell. The ratio
of [\ion{S}{2}]/H$\alpha$\ follows the variation in [\ion{N}{2}]/H$\alpha$\ very closely, with
[\ion{S}{2}]/[\ion{N}{2}]\ $\approx$ 1.0, except for the direction inside (A),
where [\ion{S}{2}]/[\ion{N}{2}]\ $\approx$ 0.7. Interestingly, the data show that this
filament is brightest in [\ion{N}{2}]\ and [\ion{S}{2}]\ (is highest in temperature?) at a
different location than the brightest part in H$\alpha$, with [\ion{N}{2}]\ and [\ion{S}{2}]\
both peaking $\approx 0.5\ifmmode{^{\circ}}\else $^{\circ}$\fi = 3.5 (d/400)$\ pc farther away from
the ionizing source(s). The peak brightness of \ion{H}{1}\ is even farther to the
outside, near direction G. These trends
are also shown in the line ratio maps of this region presented below. Both
[\ion{O}{3}]\ and \ion{He}{1}\ are very weak and show no clear trends across the
filament.
We note that while there are statistically significant trends in [\ion{N}{2}]/H$\alpha$\
and [\ion{S}{2}]/H$\alpha$\ across the outer edge of the shell, the magnitude of the
variations is small. As a result,
changes in the physical conditions, as suggested by the line ratios, are
not very dramatic. Potential complications introduced by the unknown
geometry and multi-phase nature of the bubble could be important when
using the observed line ratios to infer changes in the actual physical
conditions. For example, small variations in the volume-averaged
ionization fractions of both N and S along the lines of sight may
complicate our interpretation that [\ion{N}{2}]/H$\alpha$\ is tracing the temperature of
the gas. However, the constancy of [\ion{S}{2}]/[\ion{N}{2}]\ strongly indicates that the
variations in [\ion{N}{2}]/H$\alpha$\ and [\ion{S}{2}]/H$\alpha$\ are dominated by variations in
temperature (see eq. \ref{eq:siieq}). If we assume that N$^+$/N = 0.8
everywhere across the shell, then the [\ion{N}{2}]/H$\alpha$\ data suggest that the
temperature of the warm ionized gas associated with the edge of this
bubble is $T \approx 7300$ K toward direction A, falling to $\approx$\
6000 K at the inside edge of the filament, and rising back up to about
6300~K before falling back down to near 6000~K just outside the shell.
This apparent rise in temperature toward the backside of the H$\alpha$\ filament
(farther away from the ionizing source) could be due to a slight hardening
of the radiation as it penetrates into the shell and just before it is
completely absorbed where the shell becomes neutral \citep{WM04}.
Figure~\ref{fig:oriratiomap} shows a more comprehensive picture of the
[\ion{N}{2}]\ and [\ion{S}{2}]\ emission in this region, obtained in survey mode. The
figure contains maps of the H$\alpha$\ and \ion{H}{1}\ intensity, [\ion{N}{2}]/H$\alpha$,
[\ion{S}{2}]/H$\alpha$, and [\ion{S}{2}]/[\ion{N}{2}]. The maps were created by integrating
the emission lines over their entire profiles, between $\pm~50~ \ifmmode{\rm km\thinspace s^{-1}}\else km\thinspace s$^{-1}$\fi$ of
the LSR. The blue contours outline the location of strong 21 cm emission
from \citet{HIAtlas}, the outermost, neutral portion of the shell, with
the contour levels corresponding to column densities of 5.6, 6, 7, 9, and
11$\times 10^{20}$~cm$^{-2}$. The green circles represent the pointed
observations A-G discussed above. This $20\ifmmode{^{\circ}}\else $^{\circ}$\fi \times 20\ifmmode{^{\circ}}\else $^{\circ}$\fi$ view also
includes a large region outside of the shell to
higher (more negative than -52\ifmmode{^{\circ}}\else $^{\circ}$\fi) Galactic latitude, dominated by the faint,
diffuse WIM.
The trends that were found for the pointed directions that traverse the
outer edge of the bubble are also apparent in these survey mode
observations. However, the maps in Figure~\ref{fig:oriratiomap} reveal
that these trends hold across the entirety of the shell edge, and not just
for one slice through it. The H$\alpha$\ map in Figure~\ref{fig:oriratiomap} shows that the
ionized gas associated with the edge of the bubble lies a few degrees
inside of the neutral \ion{H}{1}\ edge, and decreases in intensity before the
peak in \ion{H}{1}\ emission, as was also shown in Figure~\ref{fig:oridiag}.
Also, note the ridge of enhanced [\ion{N}{2}]/H$\alpha$\ and [\ion{S}{2}]/H$\alpha$\ at $b\approx
-51\ifmmode{^{\circ}}\else $^{\circ}$\fi$ that runs parallel to, and just between, the H$\alpha$\ and 21 cm
bright parts of the outer shell.
The [\ion{N}{2}]/H$\alpha$\ map shows a striking anti-correlation between the H$\alpha$\ line
strength and the [\ion{N}{2}]/H$\alpha$\ line ratio. For example, the very faint WIM
emission outside the bubble at $b \lesssim -53\ifmmode{^{\circ}}\else $^{\circ}$\fi$ is the ``brightest''
region in the [\ion{N}{2}]/H$\alpha$\ map. In addition, several individual features
\emph{inside} of the bubble show this anti-correlation, such as the region
of depressed H$\alpha$\ emission near \ifmmode{(\ell,b)} \else $(\ell,b)$\fi = (187\ifmmode{^{\circ}}\else $^{\circ}$\fi, -47\ifmmode{^{\circ}}\else $^{\circ}$\fi) and the region of
enhanced H$\alpha$\ emission near \ifmmode{(\ell,b)} \else $(\ell,b)$\fi\ = (198\ifmmode{^{\circ}}\else $^{\circ}$\fi,~-41\ifmmode{^{\circ}}\else $^{\circ}$\fi). Interestingly, a
spatially coherent depression in [\ion{N}{2}]/H$\alpha$\ follows the shape of the outer
edge of the bubble, where the \ion{H}{1}\ is getting brighter (i.e., directions F and G). This suggests a decrease in temperature (or a decrease in N$^+$ relative to H$^+$) in the transition
region from \ion{H}{2}\ to \ion{H}{1}\ in this outer shell. The data are consistent
with this bubble being a large, hot cavity with the inside walls ionized
by hot stars within the cavity, as originally suggested by \citet{RO79}.
The [\ion{S}{2}]/H$\alpha$\ map is very similar to the [\ion{N}{2}]/H$\alpha$\ map. We see that the
diffuse background is significantly brighter in [\ion{S}{2}]/H$\alpha$\ than the rest of
the bubble. We also see a dark filament that is nearly co-spatial with
the \ion{H}{1}\ shell. This dark filament is similar to the feature in the
[\ion{N}{2}]/H$\alpha$\ map, but seems to be of a higher contrast relative to its
surroundings. The [\ion{S}{2}]/[\ion{N}{2}]\ map shows that the interior of the shell has
an elevated [\ion{S}{2}]/[\ion{N}{2}]\ line ratio (i.e., a lower ionization of sulfur)
than in the WIM at $b \lesssim -53\ifmmode{^{\circ}}\else $^{\circ}$\fi$. We note that the coherent
features in both the [\ion{N}{2}]/H$\alpha$\ and [\ion{S}{2}]/H$\alpha$\ maps are well above the
noise, although the numerical variation in the ratios is small (67\% of
the data span a range of $\approx$ 0.10 in the ratio).
The interpretation of these maps is complicated by the presence of
the background (WIM) emission. These maps are as they appear in the sky,
and no
background emission has been subtracted from them as was done for the
pointed observations. Because [\ion{N}{2}]/H$\alpha$\ and [\ion{S}{2}]/H$\alpha$\ are relatively
high in the background, the faint [\ion{N}{2}]\ and [\ion{S}{2}]\ emission inside and
near the edge of the bubble is not a representation of emission from
the bubble alone.
The average strength of the H$\alpha$, [\ion{N}{2}], and [\ion{S}{2}]\ emission in the
background is 0.99, 0.35, and 0.26 R, respectively.
Therefore, because the [\ion{N}{2}]/H$\alpha$\ and [\ion{S}{2}]/H$\alpha$\ ratios are higher in the
background than within the Orion-Eridanus bubble, the values for the
actual ratios associated with the bubble features in
Figure~\ref{fig:oriratiomap} are slightly lower than that given by the
color bars. That is, there is an even greater difference between the
ionized gas associated with the bubble and the WIM than these maps
indicate.
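The sense of this bias can be checked with a short numerical example. The background strengths (0.99 R in H$\alpha$\ and 0.35 R in [\ion{N}{2}]) are taken from the text; the sightline intensities below are hypothetical:

```python
# Background (WIM) line strengths from the OFF regions, in Rayleighs.
BG_HA, BG_NII = 0.99, 0.35      # background [N II]/Halpha ~ 0.35

def bubble_only_ratio(i_ha, i_nii, bg_ha=BG_HA, bg_nii=BG_NII):
    """[N II]/Halpha of the bubble gas alone, after removing the
    foreground/background WIM emission from both lines."""
    return (i_nii - bg_nii) / (i_ha - bg_ha)

# Hypothetical sightline through the shell:
i_ha, i_nii = 4.0, 1.1
map_ratio = i_nii / i_ha                      # ratio as it appears in the map
shell_ratio = bubble_only_ratio(i_ha, i_nii)  # ratio of the bubble emission alone
```

Because the background ratio exceeds the map ratio, the background-subtracted (bubble-only) ratio is lower than the map value, as stated above.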
Another visualization of the data in these maps is presented in
Figure~\ref{fig:oriratioplot}. Here, the ratios [\ion{N}{2}]/H$\alpha$, [\ion{S}{2}]/H$\alpha$, and [\ion{S}{2}]/[\ion{N}{2}]\
are plotted against H$\alpha$\ intensity for every direction in
Figure~\ref{fig:oriratiomap}. The total (random and systematic) uncertainties
in the data points are shown in the top two panels, with their origin and magnitude discussed in
\S\ref{sec:errors}. The uncertainty in [\ion{S}{2}]/[\ion{N}{2}]\ has been omitted for clarity.
The data have been separated by latitude, with
observations outside the bubble ($b < -53\ifmmode{^{\circ}}\else $^{\circ}$\fi$) shown in red (WIM) and
observations inside the bubble ($b > -53\ifmmode{^{\circ}}\else $^{\circ}$\fi$) shown in blue. For
comparison, the average values of these line ratios for all of the O-star \ion{H}{2}\ regions are shown as solid green lines. Note that the abscissa
is on a logarithmic scale. These plots show an increase in [\ion{N}{2}]/H$\alpha$\ and
[\ion{S}{2}]/H$\alpha$\ in parts of the map with the faintest H$\alpha$\ emission outside of
the bubble, and that [\ion{S}{2}]/[\ion{N}{2}]\ is significantly higher than in the average \ion{H}{2}\ region.
We also see that the scatter in these ratios at low \ifmmode{I_{\rm{H}\alpha}} \else $I_{\rm H \alpha}$\fi\ is
similar to the scatter at higher \ifmmode{I_{\rm{H}\alpha}} \else $I_{\rm H \alpha}$\fi. This scatter is significantly
larger than the random uncertainties in the data.
A useful diagram that separates the effects of temperature from the
effects of
ionization state is shown in Figure~\ref{fig:orirationvs}, where
[\ion{S}{2}]/H$\alpha$\ is plotted against [\ion{N}{2}]/H$\alpha$, using equations \ref{eq:niieq}
and \ref{eq:siieq} after the fashion presented in \citet{HRT99}. The
vertical dashed lines represent the expected ratio of [\ion{N}{2}]/H$\alpha$\ for
temperatures from 5000~K to 10,000~K. The solid
lines
represent the expected ratio of [\ion{S}{2}]/[\ion{N}{2}]\ for an increasing fraction of
S$^+$/S between zero and 1.0. The symbols are the same as
Figure~\ref{fig:oriratioplot},
except that the ratios for the O star \ion{H}{2}\ regions from
\S\ref{sec:hii} have also been added and are shown individually in
green.
The data suggest that on average, the ionized gas in the bubble
(\emph{blue}) has a larger fraction of S in the form of S$^+$
compared to the \ion{H}{2}\ regions (50\% vs. 25\%), and is similar to that
in the WIM (\emph{red}).
The data also suggest that
most of the gas within the bubble is at \ion{H}{2}\ region-like
temperatures (6000 K $< T <$ 7000 K), significantly lower than in the
fainter, more diffuse WIM gas outside the bubble.
\subsubsection{Comparison to the WIM}
In summary, at this point we conclude that although the low
ionization state of the warm ionized gas in the Orion-Eridanus bubble
(i.e., enhanced [\ion{S}{2}]/H$\alpha$\ and low [\ion{O}{3}]/H$\alpha$) is similar to that in the
fainter, more diffuse WIM, the temperature ([\ion{N}{2}]/H$\alpha$) appears to be
consistently lower than that in the WIM and close to the value found
for the classical \ion{H}{2}\ regions. The lower ionization state of the gas
compared to the average classical \ion{H}{2}\ region is probably the result of
the late spectral type (soft ionizing spectrum) of the primary ionizing
star and the low ionization parameter associated with the dilution of the
ionizing radiation as it travels to the distant walls of the cavity. The
decrease in ionization parameter could explain the weak rise in [\ion{S}{2}]/H$\alpha$\
with distance from Ori OB1. On the other hand, we find no evidence for the
[\ion{N}{2}]/H$\alpha$\ ratio becoming more WIM-like with increasing distance from the O
association, even at the outer edge, $33\ifmmode{^{\circ}}\else $^{\circ}$\fi$ (at least 250 pc) distant.
While there is a small increase in both [\ion{N}{2}]/H$\alpha$\ and [\ion{S}{2}]/H$\alpha$\ at the
transition from ionized to neutral gas within the outer shell, [\ion{N}{2}]/H$\alpha$\
never approaches the high values found in the more diffuse ionized gas
outside the bubble. Thus it appears that the spectral characteristics of
the WIM are not explained by the leakage of O star radiation onto cavity
walls like those of the Orion-Eridanus bubble.
\subsection{Perseus Superbubble and the Local Foreground}
\label{subsec:per}
To investigate whether the size of the bubble influences the emission line
ratios, we examine next a much larger (2000 pc $\times$ 800 pc) and
fainter superbubble ionized by the Cas OB6 association and covering much
of the constellations Perseus, Cassiopeia, and Camelopardalis. We will
refer to this enormous structure as the ``Perseus superbubble''. Compared
to the Orion-Eridanus bubble, this superbubble has nearly nine times the
linear extent, 1/10 the H$\alpha$\ surface brightness, and 1/5 -- 1/10 the gas
density in its shell (i.e., 0.1 -- 0.2 cm$^{-3}$, comparable to densities
in the WIM). Figure~\ref{fig:bowtiesurv} shows two velocity interval
maps of this region covering about 2400 deg$^2$ from the WHAM sky survey.
Most H$\alpha$\ spectra in this region have two or more components, one centered
near \ifmmode{v_{\rm{LSR}}}\else $v_{\rm{LSR}}$\fi = 0~ \ifmmode{\rm km\thinspace s^{-1}}\else km\thinspace s$^{-1}$\fi\ and one near $-50$~ \ifmmode{\rm km\thinspace s^{-1}}\else km\thinspace s$^{-1}$\fi.
The map on the left shows foreground H$\alpha$\ emission from the solar
neighborhood at $-15~ \ifmmode{\rm km\thinspace s^{-1}}\else km\thinspace s$^{-1}$\fi < \ifmmode{v_{\rm{LSR}}}\else $v_{\rm{LSR}}$\fi < +15\ \ifmmode{\rm km\thinspace s^{-1}}\else km\thinspace s$^{-1}$\fi$,
while the map on the right shows emission from the same piece of sky, but over
the velocity interval $-75\ \ifmmode{\rm km\thinspace s^{-1}}\else km\thinspace s$^{-1}$\fi < v_{\rm LSR} < -45\ \ifmmode{\rm km\thinspace s^{-1}}\else km\thinspace s$^{-1}$\fi$. The emission at
these more negative velocities is from the Perseus spiral arm, 2-2.5 kpc
distant, and is dominated by the Perseus superbubble \citep{RSH01}.
Figure~\ref{fig:bowtiespectra} shows the emission line spectra from the
pointed observations obtained toward the two sightlines at (130\ifmmode{^{\circ}}\else $^{\circ}$\fi,
-7.5\ifmmode{^{\circ}}\else $^{\circ}$\fi) and (133\ifmmode{^{\circ}}\else $^{\circ}$\fi, +18\ifmmode{^{\circ}}\else $^{\circ}$\fi), denoted by `X's in
Figure~\ref{fig:bowtiesurv}. These sightlines pass through the
bipolar loop structure near the outer boundary of the superbubble.
H$\alpha$, [\ion{N}{2}], and [\ion{S}{2}]\ spectra appear in the top plots, with H$\alpha$, [\ion{O}{3}]\ and
\ion{He}{1}\ spectra on the bottom. Note that the [\ion{O}{3}]\ and \ion{He}{1}\ spectra have
been multiplied by the indicated values to facilitate the comparison of
the relative strengths of the lines. Several fainter {\sc{OFF}} directions, used to
remove the atmospheric lines and background emission, were located at $|b|
\gtrsim 20\ifmmode{^{\circ}}\else $^{\circ}$\fi$ near the longitude of the pointed observations.
Each of the sight lines contains two or more distinct components, whose
velocities were determined by least-squares fits of a sum of Gaussian
profiles to the [\ion{S}{2}]\ spectrum in each direction. The [\ion{S}{2}]\ spectrum was
chosen because of its narrow, well-resolved component profiles and its
high signal-to-noise. The resulting component velocities are shown as
vertical dashed lines in Figure~\ref{fig:bowtiespectra}, with component
identification numbers shown above the top plot. The maps on the left and right
in Figure~\ref{fig:bowtiesurv} correspond to emission from components 1
and 3, respectively, for (130\ifmmode{^{\circ}}\else $^{\circ}$\fi, -7.5\ifmmode{^{\circ}}\else $^{\circ}$\fi), and components 1 and 2,
respectively, for (133\ifmmode{^{\circ}}\else $^{\circ}$\fi, +18\ifmmode{^{\circ}}\else $^{\circ}$\fi). Therefore, we identify component 3
toward (130\ifmmode{^{\circ}}\else $^{\circ}$\fi, -7.5\ifmmode{^{\circ}}\else $^{\circ}$\fi) and component 2 toward (133\ifmmode{^{\circ}}\else $^{\circ}$\fi, +18\ifmmode{^{\circ}}\else $^{\circ}$\fi) with the
Perseus superbubble. Component 1 in each direction is associated with
ionized gas near the outer edge of extended \ion{H}{2}\ regions surrounding
$\phi$ Per and $\alpha$ Cam, respectively. The nature of components 2 and
4 toward (130\ifmmode{^{\circ}}\else $^{\circ}$\fi, -7.5\ifmmode{^{\circ}}\else $^{\circ}$\fi) is not known.
The strength of each component was calculated from a multi-component
Gaussian fit to each spectrum in which the velocities for each component
were fixed as determined from the [\ion{S}{2}]\ spectra. A summary of the line
strengths and their ratios is presented in Table~\ref{tab:per}. The
columns of the table are the same as for Table~\ref{tab:ori}, except
that $D_\theta$ refers to the angular distance of the pointing from the
W4 \ion{H}{2}\ region (Cas OB6), the presumed source of the ionizing radiation
for the superbubble. A graphical representation of the data in
Table~\ref{tab:per} is shown in Figure~\ref{fig:bowtiediag}. The
layout of Figure~\ref{fig:bowtiediag} is the same as
Figure~\ref{fig:oridiag}, with the solid horizontal lines representing
the average line ratios of the O-star \ion{H}{2}\ regions. The H$\alpha$\ intensity
for the W4 \ion{H}{2}\ region (2800 R) is far off the scale of the plot.
H$\beta$\ observations toward (130\ifmmode{^{\circ}}\else $^{\circ}$\fi, -7.5\ifmmode{^{\circ}}\else $^{\circ}$\fi) suggest that the extinction is
generally low, with \ifmmode{A(V)}\else $A(V)$\fi\ $\approx$\ 0.1, 0.4, 0.7, and 0.9 mag for
components 1, 2, 3, and 4, respectively. This implies a maximal correction
of $\lesssim$ 30\% for the [\ion{O}{3}]\ data (for component 4), and a much
smaller correction for the other lines. No correction has been applied to
the data in Figure~\ref{fig:bowtiespectra} or Table~\ref{tab:per}.
However, the tips of the upward pointed arrows in
Figure~\ref{fig:bowtiediag}, for the [\ion{O}{3}]/H$\alpha$\ data, denote the change
in this ratio if the above extinction corrections are applied. No
extinction correction was applied to the data toward the much fainter
direction of (133\ifmmode{^{\circ}}\else $^{\circ}$\fi, +18\ifmmode{^{\circ}}\else $^{\circ}$\fi), where the uncertainty in the H$\alpha$/H$\beta$\ did
not allow for an accurate determination of \ifmmode{A(V)}\else $A(V)$\fi. However, the low
extinction for (130\ifmmode{^{\circ}}\else $^{\circ}$\fi, -7.5\ifmmode{^{\circ}}\else $^{\circ}$\fi) suggests that the correction for the
higher latitude direction is likely to be negligible.
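The size of the quoted [\ion{O}{3}]\ correction can be sketched from the differential extinction between 5007~\AA\ and H$\alpha$. The $A(\lambda)/A(V)$ values below are approximate Milky Way ($R_V = 3.1$, CCM-like) curve values assumed for illustration:

```python
# Approximate extinction-curve values A(lambda)/A(V), R_V = 3.1
# (CCM-like; assumed here for illustration):
ALAM_OVER_AV = {"Halpha": 0.82, "OIII_5007": 1.13}

def ratio_correction(av):
    """Multiplicative correction to an observed [O III]/Halpha ratio
    for a foreground extinction A(V), from the differential extinction
    between 5007 A and 6563 A."""
    dmag = (ALAM_OVER_AV["OIII_5007"] - ALAM_OVER_AV["Halpha"]) * av
    return 10.0 ** (0.4 * dmag)

# Component 4 toward (130, -7.5), with A(V) ~ 0.9 mag:
corr = ratio_correction(0.9)    # ~1.3, i.e. a <~30% upward correction
```

This reproduces the $\lesssim$ 30\% maximal correction for the [\ion{O}{3}]\ data quoted above.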
Figures~\ref{fig:bowtiespectra} and \ref{fig:bowtiediag} both show
significant variations in the relative strengths of the lines, especially
toward (130\ifmmode{^{\circ}}\else $^{\circ}$\fi, -7.5\ifmmode{^{\circ}}\else $^{\circ}$\fi). The emission component identified with the
superbubble (component 3) has spectral characteristics similar to the W4
\ion{H}{2}\ region immediately surrounding Cas OB6, that is, significantly lower
[\ion{N}{2}]/H$\alpha$\ and [\ion{S}{2}]/H$\alpha$, and higher [\ion{O}{3}]/H$\alpha$\ than the other components in
this direction. On the other hand, toward (133\ifmmode{^{\circ}}\else $^{\circ}$\fi, +18\ifmmode{^{\circ}}\else $^{\circ}$\fi) the gas associated with
the superbubble (component 2), is more WIM-like, with high
[\ion{N}{2}]/H$\alpha$\ and [\ion{S}{2}]/H$\alpha$\ and low [\ion{O}{3}]/H$\alpha$\ compared to W4. In the
following section, we explore the variations in [\ion{N}{2}]\ and [\ion{S}{2}]\ using the
line ratio maps of this region. These maps reveal subtle differences in
ionization conditions within the low velocity foreground gas, within the loops and filaments of the more distant superbubble, and between the superbubble and the WIM.
\subsubsection{Foreground Emission}
This area of the Galaxy is well suited for emission line studies of the
diffuse ionized gas because the presence of two well separated velocity
components allows one to probe two potentially different environments in
the same spectrum. At radial velocities near the LSR, there are several
large classical \ion{H}{2}\ regions, as shown in Figure~\ref{fig:bowtiesurv}
(left panel). The largest of these are Sivan 3 near (145\ifmmode{^{\circ}}\else $^{\circ}$\fi, +15\ifmmode{^{\circ}}\else $^{\circ}$\fi)
associated with the O9.5~Ia star $\alpha$\ Cam, Sivan 4 near (155\ifmmode{^{\circ}}\else $^{\circ}$\fi,
-15\ifmmode{^{\circ}}\else $^{\circ}$\fi) associated with the O7.5~III star $\xi$\ Per, and an \ion{H}{2}\ region
near (130\ifmmode{^{\circ}}\else $^{\circ}$\fi, -10\ifmmode{^{\circ}}\else $^{\circ}$\fi) associated with the B0.5+sdO star $\phi$\ Per.
Maps of \ifmmode{I_{\rm{H}\alpha}} \else $I_{\rm H \alpha}$\fi, [\ion{N}{2}]/H$\alpha$, [\ion{S}{2}]/H$\alpha$, and [\ion{N}{2}]/[\ion{S}{2}]\ for this relatively nearby gas with $|\ifmmode{v_{\rm{LSR}}}\else $v_{\rm{LSR}}$\fi| <$
15~ \ifmmode{\rm km\thinspace s^{-1}}\else km\thinspace s$^{-1}$\fi\ are shown in Figure~\ref{fig:bowtiemaplocal}. Many of the
pointed observations toward the classical, relatively bright \ion{H}{2}\ regions
reported in \S\ref{sec:hii} are in this map. The map of [\ion{N}{2}]/H$\alpha$\ reveals
that the faintest regions of H$\alpha$\ (i.e., the WIM) are ``bright" in
[\ion{N}{2}]/H$\alpha$, with [\ion{N}{2}]/H$\alpha$\ $\gtrsim$ 0.5, while most of the bright \ion{H}{2}\
regions appear dark, with [\ion{N}{2}]/H$\alpha$\ $\lesssim$ 0.3. The exception is the
15\ifmmode{^{\circ}}\else $^{\circ}$\fi\ diameter circular region near (130\ifmmode{^{\circ}}\else $^{\circ}$\fi, -10\ifmmode{^{\circ}}\else $^{\circ}$\fi). This is the $\phi$\
Per \ion{H}{2}\ region, which is ionized by a B0.5+sdO binary system
\citep{HRT99}. The [\ion{S}{2}]/H$\alpha$\ map is similar to the [\ion{N}{2}]/H$\alpha$\ map,
suggesting that the [\ion{N}{2}]/H$\alpha$\ and [\ion{S}{2}]/H$\alpha$\ line ratio variations are
dominated by temperature changes (see equations \ref{eq:niieq} and
\ref{eq:siieq}). [\ion{S}{2}]/H$\alpha$\ is high, $\gtrsim$\ 0.4, in regions of
faint H$\alpha$\ emission and low, $\lesssim$\ 0.2, toward \ion{H}{2}\ regions. The
$\phi$\ Per \ion{H}{2}\ region is elevated in [\ion{S}{2}]/H$\alpha$, relative to the
background, but not as much as it is in [\ion{N}{2}]/H$\alpha$. This is shown clearly
on the map of [\ion{S}{2}]/[\ion{N}{2}], where the gas associated with classical \ion{H}{2}\
regions is depressed in [\ion{S}{2}]/[\ion{N}{2}], especially for $\phi$ Per.
As discussed in \S\ref{sec:physconds}, features on these ratio maps
can be interpreted as changes in the physical conditions of the gas, where
[\ion{N}{2}]/H$\alpha$\ follows the temperature of the gas and [\ion{S}{2}]/[\ion{N}{2}]\ traces
S$^+$/S. These maps thus indicate that the faint, diffuse WIM is
significantly warmer and in a lower ionization state (more S is in the
form of S$^+$) than in the traditional \ion{H}{2}\ regions. The \ion{H}{2}\ region
surrounding the B star $\phi$\ Per differs from the other \ion{H}{2}\ regions in
that it has a significantly higher temperature (i.e., [\ion{N}{2}]/H$\alpha$\ ratio).
This may be due to the hard ionizing radiation from its hot sdO
companion.
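The temperature sensitivity underlying this interpretation can be sketched from the form of equation \ref{eq:niieq}; schematically, at fixed abundances and N$^+$/N (cf. \citealt{HRT99}),

```latex
% [N II]/Halpha temperature dependence at fixed N abundance and N^+/N:
\[
  \frac{I([\mathrm{N\,II}]\,\lambda6584)}{I(\mathrm{H}\alpha)}
  \;\propto\; T_4^{\,0.426}\, e^{-2.18/T_4},
  \qquad T_4 \equiv T/10^4~\mathrm{K},
\]
% so raising T from 6000 K to 9000 K increases the ratio by a factor
% of ~4, more than enough to accommodate the contrast between the
% H II regions (<~ 0.3) and the faint WIM (>~ 0.5) in these maps.
```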
These variations in line ratios are illustrated in
Figures~\ref{fig:bowtieratiolocal} and
\ref{fig:bowtieratiolocalnvs}. Figure~\ref{fig:bowtieratiolocal}
displays the [\ion{N}{2}]/H$\alpha$, [\ion{S}{2}]/H$\alpha$, and [\ion{S}{2}]/[\ion{N}{2}]\ measurements in
Figure~\ref{fig:bowtiemaplocal} as a function of H$\alpha$\ intensity. Data
points with uncertainties greater than 0.1 in the ratio are omitted.
Observations toward the large O-star \ion{H}{2}\ regions are shown in blue,
while observations within 6\ifmmode{^{\circ}}\else $^{\circ}$\fi\ of $\phi$\ Per are shown in red. The data
points with \ifmmode{I_{\rm{H}\alpha}} \else $I_{\rm H \alpha}$\fi\ $> 20$ R are all close to the $\xi$\ Per \ion{H}{2}\ region
near (160\ifmmode{^{\circ}}\else $^{\circ}$\fi, -10\ifmmode{^{\circ}}\else $^{\circ}$\fi). A few additional points, near (158\ifmmode{^{\circ}}\else $^{\circ}$\fi, 0\ifmmode{^{\circ}}\else $^{\circ}$\fi), are
associated with S216, an unusually large and old planetary nebula
\citep{Reynolds85pn}, and are shown in green. All other directions (the WIM) are
indicated by black data points. The increase in [\ion{N}{2}]/H$\alpha$\ and [\ion{S}{2}]/H$\alpha$\
with decreasing H$\alpha$\ intensity is apparent, confirming the trend found by
\citet{HRT99} for a smaller sample of this region of the sky. We also see that [\ion{S}{2}]/[\ion{N}{2}]\ in the fainter gas is elevated compared to regions of brighter emission associated with \ion{H}{2}\ regions.
The scatter in the data points is much larger than the measurement errors,
implying real variations in the temperature and ionization state of the
gas. These variations are displayed more quantitatively in
Figure~\ref{fig:bowtieratiolocalnvs}, with the same data points and
symbols as Figure~\ref{fig:bowtieratiolocal} and with the same scaling
and labels as Figure~\ref{fig:orirationvs}. The [\ion{N}{2}]/H$\alpha$\ data suggest
that most of the emission from the \ion{H}{2}\ regions is relatively cool (6000 K $<
T <$ 7000 K), and that the faintest regions of the WIM extend to
temperatures $T >$ 9000 K. This diagram also suggests that emission from
the $\phi$\ Per \ion{H}{2}\ region has a similarly high temperature as the
WIM, but with a lower S$^+$/S. The large, evolved planetary nebula
S216 (green points) has the highest temperature ($\approx$ 11,000 K)
and a low S$^+$/S ($\approx$ 0.24),
consistent with the high photospheric temperature of its ionizing star \citep{CudR85,
TN92}.
\subsubsection{Perseus Arm Emission}
These spectral characteristics of the WIM and the \ion{H}{2}\ regions in the
local gas near the LSR can now be compared to those of the Perseus
superbubble in the $-75\ \ifmmode{\rm km\thinspace s^{-1}}\else km\thinspace s$^{-1}$\fi < v_{\rm LSR} < -45\ \ifmmode{\rm km\thinspace s^{-1}}\else km\thinspace s$^{-1}$\fi$ maps.
The emission from the Perseus arm (right panel in
Figure~\ref{fig:bowtiesurv}) shows a remarkable bipolar,
closed loop structure centered near (135\ifmmode{^{\circ}}\else $^{\circ}$\fi, 0\ifmmode{^{\circ}}\else $^{\circ}$\fi) that extends almost
30\ifmmode{^{\circ}}\else $^{\circ}$\fi\ above and below the Galactic plane (i.e., $\pm 1200$ pc). The W4
star-forming region, ionized by the Cas OB6 association, is located at a
distance of $\approx$\ 2.2 kpc, and is near the center of this structure
close to the Galactic plane. Several observational and theoretical
studies have suggested that a large `chimney' has been carved out near the
Galactic plane, allowing radiation and hot gas to move out into the
Galactic halo \citep{NTD96, DTS97, BJM99, Terebey+03}. The H$\alpha$\ emission
from the large arc-shaped feature above the plane at $0\ifmmode{^{\circ}}\else $^{\circ}$\fi < b < +30\ifmmode{^{\circ}}\else $^{\circ}$\fi$
was explored by \citet{RSH01}, who found that the size and shape of this
loop is consistent with a sequence of star-forming events over a period of
$\sim$\ 10$^7$ yr that have carved out a 1 kpc-scale cavity in the ISM.
They concluded that the ionized hydrogen in the loop, the upper parts of
which are more than 1 kpc from the O stars in Cas OB6, appeared to be
produced by ionizing radiation escaping the association.
\citet{HRT99} mapped the lower part of this region ($b < -5\ifmmode{^{\circ}}\else $^{\circ}$\fi$), in [\ion{N}{2}]\
and [\ion{S}{2}]. Here we report on new survey observations of this entire region
in [\ion{N}{2}]\ and [\ion{S}{2}], as well as pointed observations in two directions along
the edge of the structure in H$\beta$, \ion{He}{1}, [\ion{O}{3}], and [\ion{N}{2}]$~\lambda5755$.
The H$\alpha$\ and
[\ion{N}{2}]\ and [\ion{S}{2}]\ line ratio maps of this radial velocity interval are shown
in Figure~\ref{fig:bowtiemapper}.
The [\ion{N}{2}]/H$\alpha$\ map reveals that regions of higher H$\alpha$\ intensity have a
lower [\ion{N}{2}]/H$\alpha$, and that this anti-correlation holds not only for the
large scale features but also for the smaller filamentary structures of
the superbubble. The faintest areas of H$\alpha$\ (e.g., the WIM at $l >
150\ifmmode{^{\circ}}\else $^{\circ}$\fi$) have the highest [\ion{N}{2}]/H$\alpha$\ ($\gtrsim$\ 0.7). Along the loops,
[\ion{N}{2}]/H$\alpha$\ is depressed relative to the regions interior and exterior to
the loops. Several smaller filaments in the region $130\ifmmode{^{\circ}}\else $^{\circ}$\fi < l < 150\ifmmode{^{\circ}}\else $^{\circ}$\fi,
-30\ifmmode{^{\circ}}\else $^{\circ}$\fi < b < -10\ifmmode{^{\circ}}\else $^{\circ}$\fi$ show the same detailed anti-correlation of [\ion{N}{2}]/H$\alpha$\
and \ifmmode{I_{\rm{H}\alpha}} \else $I_{\rm H \alpha}$\fi. As expected, the area within about 5\ifmmode{^{\circ}}\else $^{\circ}$\fi\ of the W4 \ion{H}{2}\ region
has the lowest [\ion{N}{2}]/H$\alpha$\ ratios ($\lesssim$\ 0.4). Because sightlines
toward the loops and filaments contain weak foreground and background
emission at the same velocity, [\ion{N}{2}]/H$\alpha$\ from the loops and filaments
themselves is even more depressed compared to the adjacent regions than
these maps suggest. The lower [\ion{N}{2}]/H$\alpha$\ ratios in the filamentary
structures, particularly in the brighter H$\alpha$\ regions south of the plane
where contamination by the fainter WIM along the line of sight is
negligible, provide strong evidence that these filaments are not
enhancements resulting from folds and edge projections of the ionized
outer skin of the superbubble but instead are regions of cooler
temperature. Furthermore, the generally lower [\ion{N}{2}]/H$\alpha$\ within the entire
superbubble compared to that in the more diffuse WIM at $l > 150\ifmmode{^{\circ}}\else $^{\circ}$\fi$, for
example, implies that the WIM cannot be explained entirely by the
superposition of regions like the Perseus superbubble.
In the [\ion{S}{2}]/H$\alpha$\ map the contrast between the filaments and the background
is not as high. Since the line ratio [\ion{S}{2}]/H$\alpha$\ is a combination of
temperature and ionization effects, this decrease in contrast may be due
to changes in the ionization fraction of S$^+$/S that `wash out' the
temperature effects we see in the [\ion{N}{2}]/H$\alpha$\ map. This is further evidence
that the H$\alpha$\ enhanced filaments and loops are not simply due to
geometrical projection effects of an ionized shell and that the
ionization conditions (S$^+$/S) within the superbubble differ from those
in the WIM.
Quantitative comparisons between the W4 \ion{H}{2}\ region, the superbubble, and
the diffuse WIM outside the boundary of the superbubble are shown in
Figures~\ref{fig:bowtieratioper} and \ref{fig:bowtieratiopernvs}.
Data points with uncertainties larger than 0.1 in the ratio are not shown.
Observations within 1.5\ifmmode{^{\circ}}\else $^{\circ}$\fi\ of spectroscopically confirmed members of the
Cas OB6 association are shown in green, sightlines through the
superbubble, defined as all points with $l < 150\ifmmode{^{\circ}}\else $^{\circ}$\fi$, are shown in red,
and the WIM, defined as all points with $l > 150\ifmmode{^{\circ}}\else $^{\circ}$\fi$, are in blue. In
general we see that [\ion{N}{2}]/H$\alpha$\ is higher in the diffuse
background where \ifmmode{I_{\rm{H}\alpha}} \else $I_{\rm H \alpha}$\fi\ is lower. We also see that the brighter regions near
W4 have line ratios close to that of the average \ion{H}{2}\ region, while the
superbubble tends to have [\ion{N}{2}]/H$\alpha$\ ratios closer to that of the
nearby WIM, but weighted to somewhat lower values.
We note one caveat about our analysis of the complicated component structure toward these lines of sight.
The method we used to calculate the line strengths, integrating the
profiles over fixed velocity intervals, is not ideal for exploring
the full range of physical conditions that may be present in the ionized
gas in these maps. Most of the spectra in this part of the Galaxy have two
components centered near 0 \ifmmode{\rm km\thinspace s^{-1}}\else km\thinspace s$^{-1}$\fi\ and $-50$ \ifmmode{\rm km\thinspace s^{-1}}\else km\thinspace s$^{-1}$\fi\ (LSR) with line widths of
$\approx 20-30$~ \ifmmode{\rm km\thinspace s^{-1}}\else km\thinspace s$^{-1}$\fi. However, some spectra have profiles that peak at
somewhat different velocities. This is especially true in the region near
the southern loop with $b \lesssim -10\ifmmode{^{\circ}}\else $^{\circ}$\fi$. Spectra separated by only
1\ifmmode{^{\circ}}\else $^{\circ}$\fi\ in this area show variations in the component strengths by a factor
of 2 and shifts in the velocity centroids of 10 \ifmmode{\rm km\thinspace s^{-1}}\else km\thinspace s$^{-1}$\fi\ or more. Therefore,
in some cases a map that covers a fixed radial velocity interval may
sample the entire emission component in one part of the map, but only the
wing of that component in another part of the map. This complicates the
interpretation of the line ratios. A detailed examination of the
emission, including a study of the dynamics of the gas, would require
Gaussian fits to all of the components in all of the approximately 2400
spectra, which is beyond the scope of this work. Our focus here is
limited to the general trends that appear among spatially coherent
features in these maps, which are insensitive to these kinematic
variations.
\subsubsection{The Perseus Superbubble and the WIM}
Compared to the smaller and brighter Orion-Eridanus bubble, the Perseus
superbubble is clearly more WIM-like. Compare, for example, the
distribution of blue points in Figures~\ref{fig:oriratioplot} and
\ref{fig:orirationvs} (Orion-Eridanus bubble) with the blue points in
the corresponding Figures~\ref{fig:bowtieratioper} and
\ref{fig:bowtieratiopernvs} (Perseus superbubble). Because the surface
brightness of the Perseus superbubble is not much brighter than that of
the WIM, particularly far from the Galactic plane, searching for a trend
in the line ratios with distance from Cas OB6 is problematic. However,
a comparison of the background subtracted pointed observation toward
(130\ifmmode{^{\circ}}\else $^{\circ}$\fi, $-7.5$\ifmmode{^{\circ}}\else $^{\circ}$\fi), which is 10\ifmmode{^{\circ}}\else $^{\circ}$\fi\ ($\approx$ 350 pc) from Cas OB6, with that
toward (133\ifmmode{^{\circ}}\else $^{\circ}$\fi, $+18$\ifmmode{^{\circ}}\else $^{\circ}$\fi), which is 17\ifmmode{^{\circ}}\else $^{\circ}$\fi\ ($\approx$ 700 pc) from Cas OB6, indicates
that the gas in the latter direction is more WIM-like (i.e., higher
[\ion{N}{2}]/H$\alpha$\ and [\ion{S}{2}]/H$\alpha$\ and lower [\ion{O}{3}]/H$\alpha$) than in the former (see Table~\ref{tab:per}).
Nevertheless, even the
most distant parts of the superbubble are still clearly distinguishable
from the background WIM, indicating that although this structure does
have ratios similar to the WIM in some regions of the sky (e.g., outside
the Orion-Eridanus bubble at $b < -53\ifmmode{^{\circ}}\else $^{\circ}$\fi$), there is still a significant
difference (which we interpret primarily as a temperature difference) between this superbubble and the adjacent WIM.
\section{HIGH LATITUDE FILAMENTS}
\label{sec:hlfil}
There are other regions of the sky that exhibit H$\alpha$\ enhancements with no
identified O stars or O associations as their source of ionization. Many
of these are high Galactic latitude filamentary structures that have no
visible connection to a superbubble. The emission characteristics of
four such regions are examined in detail in this section to determine
where they fit empirically with respect to the classical \ion{H}{2}\ regions, superbubbles,
and the more diffuse WIM.
\subsection{The Northern Filaments}
\label{subsec:nfil}
A remarkable, $\sim 2\ifmmode{^{\circ}}\else $^{\circ}$\fi \times 60\ifmmode{^{\circ}}\else $^{\circ}$\fi$, H$\alpha$\ filament rises vertically
more than 50\ifmmode{^{\circ}}\else $^{\circ}$\fi\ from the Galactic plane near longitude $l=225\ifmmode{^{\circ}}\else $^{\circ}$\fi$. As
shown in Figure~\ref{fig:nfilmap}, another filament, about 30\ifmmode{^{\circ}}\else $^{\circ}$\fi\ long,
traverses this longer filament at a right angle near $+37\ifmmode{^{\circ}}\else $^{\circ}$\fi$ latitude. We
refer to these as the `northern filaments' to distinguish them from a
comparably sized feature on the WHAM survey map near $l \approx 75\ifmmode{^{\circ}}\else $^{\circ}$\fi$
that extends south of the Galactic plane from $b \approx -15\ifmmode{^{\circ}}\else $^{\circ}$\fi$ to $b
\approx -55\ifmmode{^{\circ}}\else $^{\circ}$\fi$. (This `southern filament' was not included in this study
because of the complex kinematics of the gas toward and near the
filament.) \citet{HRT98} examined the region of the northern filaments and
found no correspondence of the H$\alpha$\ emission with observational tracers of
any other phases of the interstellar medium. A radial velocity gradient
along the length of the vertical filament, from $+18$ \ifmmode{\rm km\thinspace s^{-1}}\else km\thinspace s$^{-1}$\fi\ near the
midplane to $-25$ \ifmmode{\rm km\thinspace s^{-1}}\else km\thinspace s$^{-1}$\fi\ at the highest latitude, suggests that it is a
coherent strand of gas and not simply an enhancement resulting from an
increase in geometrical path through the edge of a very low surface
brightness shell. They also noted that the lower parts of the filament
are at the same longitude as the \ion{H}{2}\ region S292 surrounding the
CMa OB1 association in the Galactic plane, and that the emission from this part
of the filament appears at the same radial velocity as the \ion{H}{2}\ region.
If the long filament is at the same distance as the OB association
($\approx$\ 1 kpc), then it reaches a vertical height of $\approx$\ 1.2
kpc above the plane and has a density near 0.3 cm$^{-3}$. \citet{HRT98}
suggest that the diffuse ionized gas in these filaments is not likely to
be material that has been ejected from the star-forming region below it.
Instead, they argue that the relatively constant H$\alpha$\ surface brightness
along the filament suggests that it is ionized by ambient Lyman continuum
radiation.
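The quoted density follows from the emission measure; a minimal sketch, assuming $T_4 \approx 0.8$, a background-subtracted filament intensity of $\approx 2.4$ R, and a line-of-sight depth equal to the filament's $\sim 2\ifmmode{^{\circ}}\else $^{\circ}$\fi$ width at 1 kpc ($L \approx 35$ pc; illustrative numbers, not a formal fit):

```latex
% Emission measure from the Halpha surface brightness
% (EM = 2.75 T_4^{0.9} I_Halpha, with I_Halpha in rayleighs):
\[
  \mathrm{EM} \approx 2.75\,T_4^{0.9}\, I_{\mathrm{H}\alpha}
  \approx 2.3 \times 2.4 \approx 5~\mathrm{cm^{-6}\,pc},
\]
% and, for a uniform cylinder of depth L,
\[
  n_e \approx \sqrt{\mathrm{EM}/L} \approx \sqrt{5/35}
  \approx 0.4~\mathrm{cm^{-3}},
\]
% consistent with the ~0.3 cm^-3 estimate above; a filling factor
% below unity would raise the density within the emitting gas.
```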
We obtained spectra of [\ion{N}{2}], [\ion{S}{2}], [\ion{O}{3}], and \ion{He}{1}\ with pointed
observations along and near these two filaments. A summary of the results
appears in the top ten rows of Table~\ref{tab:nfil}. The columns are similar to those in
Table~\ref{tab:ori}, except that column 4 is the angular distance of
each observation from the S292 \ion{H}{2}\ region near the Galactic midplane.
The labels A-F in Figure~\ref{fig:nfilmap} show the
location of the observations listed in the table, and are sorted by
increasing distance from S292. The three `X' labels show the location of
{\sc{OFF}} directions used to remove atmospheric lines and emission from
the diffuse background. The removal of the background emission was
particularly important for these observations, because the background
(\ifmmode{I_{\rm{H}\alpha}} \else $I_{\rm H \alpha}$\fi $\approx 0.6$ R) is only slightly fainter than the total intensity
toward the filamentary structures ($\ifmmode{I_{\rm{H}\alpha}} \else $I_{\rm H \alpha}$\fi \approx 3$ R). The background
emission was removed from the pointed observations by choosing the closest
of the three {\sc{OFF}} directions. However, the strength of the H$\alpha$\
emission varies among the {\sc{OFF}} directions, with \ifmmode{I_{\rm{H}\alpha}} \else $I_{\rm H \alpha}$\fi\ = $0.4-0.9$ R. To
assess the impact of this variation on our results, two different
background directions were subtracted from the three observations A, B, and
E. This is reflected in the multiple entries for these observations in Table~\ref{tab:nfil}.
A graphical summary of our results is shown in
Figure~\ref{fig:nfildiag}, which is similar to
Figure~\ref{fig:oridiag}. The open symbols represent the two results
obtained by subtracting two different {\sc{OFF}} directions from the same pointed
observation and thus provide a measure of the uncertainty resulting from
the background subtraction. None of the data were corrected for
extinction. However, the upward arrows on the data points for the \ion{H}{2}\ region S292 show
the shift in these data points if a correction for a visual extinction of
$\ifmmode{A(V)}\else $A(V)$\fi = 0.81$ is applied to the \ion{H}{2}\ region (see \S\ref{sec:hii}). All
other data points have shifts smaller than the symbol size.
Observations at A, B, E, and F along the vertical filament show that the
H$\alpha$\ surface brightness of this filament is nearly constant and
independent of distance from the midplane, as first noted by
\citet{HRT98}. [\ion{N}{2}]/H$\alpha$\ also shows little systematic variation
($\approx$ 0.4 - 0.5) along the filament's length, except for a somewhat
elevated value of 0.6 toward direction B. This ratio is significantly
higher than the average (0.27) for \ion{H}{2}\ regions and similar to what is
observed in the diffuse WIM. Similarly, we find that [\ion{S}{2}]/H$\alpha$\ ($\approx
0.2 - 0.4$) is significantly higher than the \ion{H}{2}\ regions ($\approx
0.1$) and comparable to parts of the WIM. [\ion{S}{2}]/[\ion{N}{2}]\ ($\approx 0.48 -
0.84$) has significant variations, but is within the scatter of values
seen in the WIM and has no trend with distance from the plane. Directions C,
D, and E, which sample the cross filament, were found to have
spectral characteristics similar to those of the vertical filament at the
same latitude (Table~\ref{tab:nfil}).
In contrast to the other emission lines, [\ion{O}{3}]/H$\alpha$\ shows a very strong
trend with distance, with [\ion{O}{3}]/H$\alpha$\ $\approx 0.06$ near the midplane,
where its value is the same as that toward the \ion{H}{2}\ region S292, falling
to $\approx$\ 0.025 at B, 24\ifmmode{^{\circ}}\else $^{\circ}$\fi ($\approx$\ 400 pc) above the midplane,
and dropping to values at or below 0.01 for the two highest latitude
directions. This implies that O$^{++}$/O $\approx$ 0.1 at $z \approx
200$\ pc, $\approx$ 0.04 at 400 pc, and $\lesssim$~0.02 above 700 pc. This
is contrasted with the generally observed behavior of [\ion{O}{3}]\ in external
galaxies, where the [\ion{O}{3}]/H$\alpha$\ ratios are much higher ($\gtrsim$\ 0.2) and
in some cases increase with distance above the plane \citep{CR01, MV03}.
If the filament is photoionized and its temperature is not changing with
height above the plane, as the constant [\ion{N}{2}]/H$\alpha$\ suggests, then these
[\ion{O}{3}]/H$\alpha$\ data suggest that an already low flux of photons having $h\nu
\gtrsim $\ 35 eV rapidly diminishes with increasing height above the
plane. The \ion{He}{1}/H$\alpha$\ ratio for direction B, while quite uncertain, implies
that there are a significant number of He-ionizing photons ($h\nu \gtrsim
24$~eV) at $z \approx 400$\ pc, with He$^+$/He $\approx$\ 0.6 $\pm$ 0.2.
This is higher than what is seen for many \ion{H}{2}\ regions, and implies that
the ionizing radiation field at this location has a spectrum that is
similar to an O7 star or earlier.
\subsection{Observations Toward The High Latitude Arc}
\label{subsec:arc}
We also investigated a much shorter filamentary feature, a $\sim$\ 3\ifmmode{^{\circ}}\else $^{\circ}$\fi\
long arc of H$\alpha$\ emission located near \ifmmode{(\ell,b)} \else $(\ell,b)$\fi = (171\ifmmode{^{\circ}}\else $^{\circ}$\fi, +57\ifmmode{^{\circ}}\else $^{\circ}$\fi) about
$10-15$\ifmmode{^{\circ}}\else $^{\circ}$\fi\ away from the faintest directions in the sky in both H$\alpha$\ and
\ion{H}{1}\ \citep{Hausen+02-apj, LJM86}. A WHAM H$\alpha$\ sky survey map of the
region surrounding this arc is shown in Figure~\ref{fig:lockmap}. The
arc, visible in the top, $|\ifmmode{v_{\rm{LSR}}}\else $v_{\rm{LSR}}$\fi| < 15\ \ifmmode{\rm km\thinspace s^{-1}}\else km\thinspace s$^{-1}$\fi$ panel, has an H$\alpha$\
intensity of about 1 R ($\approx$ 3 cm$^{-6}$\ pc), approximately 3-4
times brighter than the surrounding H$\alpha$\ background. The location of the
pointed observations is indicated by a circle (the 1\ifmmode{^{\circ}}\else $^{\circ}$\fi\ WHAM beam is much
smaller than this circle). The location of the {\sc{OFF}} direction is denoted by
an `X'.
The H$\alpha$\ spectrum in the direction of this arc actually consists of two
emission components, one associated with the arc at \ifmmode{v_{\rm{LSR}}}\else $v_{\rm{LSR}}$\fi\ $\approx$\ 0\
\ifmmode{\rm km\thinspace s^{-1}}\else km\thinspace s$^{-1}$\fi, and another at \ifmmode{v_{\rm{LSR}}}\else $v_{\rm{LSR}}$\fi\ $\approx$\ $-70$ \ifmmode{\rm km\thinspace s^{-1}}\else km\thinspace s$^{-1}$\fi. The lower panel of
Figure~\ref{fig:lockmap} shows the same piece of sky, but for the
velocity interval $-80\ \ifmmode{\rm km\thinspace s^{-1}}\else km\thinspace s$^{-1}$\fi < \ifmmode{v_{\rm{LSR}}}\else $v_{\rm{LSR}}$\fi < -55\ \ifmmode{\rm km\thinspace s^{-1}}\else km\thinspace s$^{-1}$\fi$. Fortuitously, the line
of sight through the arc also passes through a faint portion of an
intermediate-velocity \ion{H}{1}\ cloud (IVC) known as the IV Arch located
approximately $500 - 700$ pc above the Galactic midplane \citep{Wakker01, KD96}.
The brightest \ion{H}{1}\ emission from the IV Arch is across the top of the map
near $b=+65\ifmmode{^{\circ}}\else $^{\circ}$\fi$, where its associated H$\alpha$\ emission is also relatively
bright. In H$\alpha$, the IV Arch is typically $3\ifmmode{^{\circ}}\else $^{\circ}$\fi - 5\ifmmode{^{\circ}}\else $^{\circ}$\fi$ wide and extends
more than 60\ifmmode{^{\circ}}\else $^{\circ}$\fi\ across the northern Galactic hemisphere. A
small section of the IV Arch extends down from the main section and
passes through the direction of the lower velocity high-latitude H$\alpha$\ arc.
The H$\alpha$\ intensity of the IV Arch in this direction is approximately
0.1~R, making it the faintest H$\alpha$\ emission structure in this study,
with a density of $0.1 - 0.2$ cm$^{-3}$.
We have observed emission lines of H$\alpha$, [\ion{N}{2}], [\ion{S}{2}], \ion{He}{1}, and [\ion{O}{3}]\ and
measured the line strengths for these two velocity components. A summary
of the observations appears in the bottom two rows of Table~\ref{tab:nfil}.
\subsubsection{The Arc}
The low velocity arc has [\ion{N}{2}]/H$\alpha$~=~0.72 and [\ion{S}{2}]/[\ion{N}{2}]~=~0.65, which are
comparable to some of the highest values observed in the WIM. The
detection of [\ion{N}{2}]$~\lambda5755$\ confirms that these high ratios are primarily a
result of an elevated temperature in the gas (see \S\ref{sec:niiblue}).
The relatively low [\ion{O}{3}]/H$\alpha$\ ($\approx$ 0.06) is also consistent with
the WIM. However, \ion{He}{1}/H$\alpha$\ ($\approx 0.05 \pm 0.01$) is the highest ratio among all the observations in this study, including the classical \ion{H}{2}\ regions. Together, these line ratios suggest that the arc is ionized by a low flux of hard radiation from a source local to the arc. A \ion{He}{1}/H$\alpha$\ ratio this high is not seen elsewhere in the WIM, indicating that the arc is not simply a density enhancement ionized by the diffuse interstellar radiation field; rather, the \ion{He}{1}\ data imply the presence of a hard ionizing spectrum.
A search for a potential ionizing source yielded one candidate,
the DA white dwarf WD1026+453. This star has a visual magnitude $m_V
\approx 16.1$ and is near the arc at (170.92\ifmmode{^{\circ}}\else $^{\circ}$\fi, +56.60\ifmmode{^{\circ}}\else $^{\circ}$\fi). It is the
only spectroscopically confirmed hot star that lies within the arc (in
projection) and within the field of view of the pointed observations
discussed above. Ultraviolet spectroscopy suggests that the temperature of
the star is $T_* \approx$ 35,000 K and that it is at a distance of
$\approx$ 200 pc \citep{Vennes+97}. It has also been detected in extreme
ultraviolet by ROSAT and EUVE \citep{Pye+95, Bowyer+96}. \ion{H}{1}\ emission
line maps in this region show a slight enhancement in 21 cm line emission
at the velocities of the ionized gas on the lower right side of the arc.
If we adopt the above distance, the emission measure suggests the density of the gas in the arc is approximately 0.9 cm$^{-3}$.
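As a rough illustration of that estimate (assuming the in-text conversion of $\approx 3$ cm$^{-6}$ pc per rayleigh and a line-of-sight depth comparable to the arc's $\sim 1\ifmmode{^{\circ}}\else $^{\circ}$\fi$ apparent width at 200 pc; both are assumptions, not measurements):

```latex
% EM ~ 3 cm^-6 pc for the ~1 R arc; depth L ~ 1 deg at 200 pc:
\[
  L \approx 200~\mathrm{pc} \times \tan(1\ifmmode{^{\circ}}\else $^{\circ}$\fi) \approx 3.5~\mathrm{pc},
\]
\[
  n_e \approx \sqrt{\mathrm{EM}/L} \approx \sqrt{3/3.5}
  \approx 0.9~\mathrm{cm^{-3}};
\]
% a greater depth or background subtraction of the EM would lower
% this estimate somewhat.
```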
The identification of WD1026+453 as the potential ionizing source for the
arc raises the question about the contribution from hot evolved stellar
cores to the diffuse ionizing radiation field above the Galactic disk.
The high \ion{He}{1}/H$\alpha$\ for this region suggests that a future, more
comprehensive study of \ion{He}{1}\ emission at high latitudes could provide
insights about the role of such stars. On the other hand, the rapid
decrease in [\ion{O}{3}]/H$\alpha$\ with latitude along the vertical northern filament
strongly suggests that the radiation field, at least in that part of the
Galaxy, actually softens with distance from the midplane.
\subsubsection{The IV Arch}
The detections of [\ion{N}{2}]\ and [\ion{S}{2}]\ in the $-67$ \ifmmode{\rm km\thinspace s^{-1}}\else km\thinspace s$^{-1}$\fi\ component are the
first detections of these diagnostic emission lines from the IV Arch. No
corresponding [\ion{O}{3}]\ and \ion{He}{1}\ emission were detected. The very high value
of [\ion{S}{2}]/[\ion{N}{2}]\ ($\approx 1.8$) is unique among our observations. We caution that because the H$\alpha$\
emission in this component is only $\approx$\ 0.1 R, the uncertainty in
this result is substantial. However, even at the 1$\sigma$ limits on both
[\ion{N}{2}]/H$\alpha$\ and [\ion{S}{2}]/H$\alpha$, [\ion{S}{2}]/[\ion{N}{2}]\ $\gtrsim$\ 1.0. These ratios suggest
that the temperature of the emitting gas is about 7000~K and that almost
all of the S is singly ionized, with S$^+$/S $\gtrsim 0.7$. The
abundances that we use to infer these physical conditions
(\S\ref{sec:physconds}) are supported by UV absorption lines studies
\citep{Wakker01}, which show that the IV Arch has gas-phase abundances
similar to those in the local ISM. The ionization state could be
unusually low due to the weak flux of ionizing radiation (and thus low
ionization parameter) inferred from its very low H$\alpha$\ surface brightness
(0.1 R).
In summary, the northern filaments, the high latitude arc, and the IV Arch
are all characterized by high [\ion{N}{2}]/H$\alpha$\ and [\ion{S}{2}]/H$\alpha$\ ratios that are not
unlike the ratios observed in the more diffuse WIM.
\section{OBSERVATIONS OF [\ion{N}{2}]$~\lambda5755$}
\label{sec:niiblue}
Many observations show that in faint, diffuse emission regions the ratio
[\ion{N}{2}]$~\lambda6584$/H$\alpha$\ tends to be significantly larger than in
classical \ion{H}{2}\ regions and increases with decreasing H$\alpha$\
surface brightness.
This is a general result that is commonly observed
in the Galaxy \citep[e.g.,][]{HRT99} as well as in external galaxies
\citep[e.g.,][]{CR01}. The interpretation is that ionized gas at lower
densities (specifically at $n_e \lesssim 0.1$ cm$^{-3}$) is at
higher temperatures. Furthermore, this increase in temperature may be
beyond what can be attributed to photoionization alone \citep{RHT99, WM04, ED05},
which has important implications for the role of other heating processes
that may be operating in the WIM. However, the [\ion{N}{2}]$~\lambda6584$/H$\alpha$\ line ratio is a function of several physical parameters, not just temperature.
Therefore it is important to test
this interpretation of elevated temperatures using other observational
information.
As discussed in \S\ref{sec:physconds}, the ratio
[\ion{N}{2}]$~\lambda6584$/[\ion{N}{2}]$~\lambda5755$\ provides a direct measure of the electron
temperature of the emitting gas. Here we describe our observations of
these lines toward a collection of classical O star \ion{H}{2}\ regions as well
as sightlines that sample the diffuse WIM. Table~\ref{tab:niiblue}
summarizes the observations. The first 11 rows show the data for the
\ion{H}{2}\ regions, including the five \ion{H}{2}\ regions reported earlier by
\citet{Reynolds+01}. The last 6 rows show the data obtained for
sightlines that sample the much fainter diffuse ionized medium. One WIM
sightline, (130\ifmmode{^{\circ}}\else $^{\circ}$\fi, $-7.5$\ifmmode{^{\circ}}\else $^{\circ}$\fi), was reported previously by
\citet{Reynolds+01}. For some directions, [\ion{N}{2}]$~\lambda5755$\ was not detected, and
only upper limits to the line ratio of [\ion{N}{2}]$~\lambda5755$/[\ion{N}{2}]$~\lambda6584$ are
given, as described in \S\ref{sec:errors}. The temperatures inferred from these line ratios appear in the last
two columns on the right. $T_{6584}$ is the temperature suggested by
the [\ion{N}{2}]$~\lambda6584$/H$\alpha$\ ratio; $T_{5755}$ is the temperature inferred
from the [\ion{N}{2}]$~\lambda5755$/[\ion{N}{2}]$~\lambda6584$ observations.
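The inversion from line ratio to temperature can be sketched numerically. The paper's own relations (equations \ref{eq:niieq} and \ref{eq:niiblueeq}) are not reproduced in this section, so the sketch below assumes the standard low-density nebular [\ion{N}{2}] diagnostic (Osterbrock); the numerical coefficients are therefore illustrative, not necessarily the ones used for Table~\ref{tab:niiblue}.

```python
import math

# Assumed low-density limit (n_e << 1e4 cm^-3) of the standard [N II]
# diagnostic: I(6548+6584)/I(5755) ~ 8.23 * exp(2.5e4 K / T), with
# I(6584) ~ 3/4 of the doublet sum, so
#   R = I(5755)/I(6584) ~ (1/6.17) * exp(-2.5e4 K / T).

def ratio_5755_6584(T):
    """Predicted [N II] 5755/6584 intensity ratio at temperature T (K)."""
    return math.exp(-2.5e4 / T) / 6.17

def T_from_ratio(R):
    """Electron temperature (K) inferred from an observed 5755/6584 ratio."""
    return 2.5e4 / math.log(1.0 / (6.17 * R))

# A warmer WIM shows a larger 5755/6584 ratio than a cooler H II region:
print(ratio_5755_6584(7000))   # typical H II region temperature
print(ratio_5755_6584(9000))   # WIM-like temperature
print(T_from_ratio(0.007))     # ~8000 K
```

The forward and inverse forms are exact inverses of each other, so a measured ratio maps to a unique temperature under the single-temperature assumption discussed below.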
These results are shown in Figure~\ref{fig:bluenii}, where
[\ion{N}{2}]$~\lambda5755$/[\ion{N}{2}]$~\lambda6584$ is plotted against [\ion{N}{2}]$~\lambda6584$/H$\alpha$.
Data for the \ion{H}{2}\ regions have blue symbols and the WIM red symbols.
Upper limits to the line ratios are indicated by arrows. The solid black
line is the locus of expected ratios of these lines from equations
\ref{eq:niieq} and \ref{eq:niiblueeq}. The dashed lines show the
predicted ratios for temperatures between 5000 and 10000 K. For the three
WIM sightlines in which [\ion{N}{2}]$~\lambda5755$\ is detected, both of the line ratios are
significantly higher than those for classical O star \ion{H}{2}\ regions. This
provides convincing confirmation that 1) the WIM is approximately $2000 -
3000$~K warmer than classical O star \ion{H}{2}\ regions, and 2) higher
[\ion{N}{2}]/H$\alpha$\ intensity ratios are due at least in large part to higher
temperatures.
The data points tend to lie preferentially above the expected relationship
(solid line). This could be explained if N$^+$/N in the WIM were
significantly lower than the assumed value of 0.8. A ratio of N$^+$/N = 0.6 would bring the two temperature diagnostics into average agreement with one another. However, both
photoionization modeling \citep{Sembach+00} and observations of elevated
[\ion{S}{2}]/H$\alpha$\ ratios suggest that the WIM, and in particular the WIM nitrogen,
is not highly ionized. Another reason could be that the gas phase
abundance of nitrogen in the WIM is lower than in the \ion{H}{2}\ regions by a
factor of $\gtrsim$~2, which also seems improbable. The most likely
explanation is that there is a range of temperatures in the WIM
\citep{Reynolds+01}, as is indicated by the large scatter in [\ion{N}{2}]/H$\alpha$\ (e.g., Figure~\ref{fig:alldiag}b).
The metastable level of the [\ion{N}{2}]$~\lambda5755$\ line lies higher above ground (about 4
eV) than that of the [\ion{N}{2}]$~\lambda6584$\ line (about 2 eV), which means
that [\ion{N}{2}]$~\lambda5755$\ is preferentially produced in regions of higher temperature
along the line of sight. The deviation of the points from the solid line
can be explained with an appropriate (non-unique) range of temperatures
\citep{Reynolds+01}. The conclusion that variations in [\ion{N}{2}]/H$\alpha$\ trace
variations in temperature has also been confirmed by the recent detection
and study of [\ion{O}{2}] $\lambda$3727 emission from the WIM \citep{Mierkiewicz+04}.
\section{SUMMARY AND CONCLUSIONS}
\label{sec:summary}
We have presented a large number of new observations of several optical
emission lines toward H$\alpha$-emitting features in the Galaxy that span a wide
range in surface brightness, angular scale, environment, and morphology.
We have explored the relative intensities of these emission lines to infer
the physical conditions of the emitting gas, and we have compared these
conditions with those of traditional, O star \ion{H}{2}\ regions in an attempt
to gain insight into the nature of the WIM and its relationship to hot
stars and large-scale bubbles and filaments within the interstellar
medium. We found significant variations in the temperature and ionization
state among these emission features, revealing that warm ionized gas is
heterogeneous in nature. We have strengthened the general assertion that
the WIM is warmer and less ionized than classical \ion{H}{2}\ regions and
found significant variations in temperature and ionization state
\emph{within} the diffuse WIM.
An overview of some of these observations is presented in
Figure~\ref{fig:alldiag}, which shows the diagnostic plots of [\ion{N}{2}]/H$\alpha$\
vs. [\ion{S}{2}]/H$\alpha$\ for various emission regions observed in pointed and
survey mode. The dashed vertical and solid sloped lines
represent lines of constant temperature and constant S$^+$/S, respectively, and
have the same values as in Figures~\ref{fig:orirationvs},
\ref{fig:bowtieratiolocalnvs}, and \ref{fig:bowtieratiopernvs}. The
range of physical conditions is indicated by the different
distributions and by the scatter in the data points. Different types
of emission regions occupy different areas of the
diagram, and even within a given category of emission region, variations
can be significant. The classical O star \ion{H}{2}\ regions (except for the two
regions ionized by hot stellar cores) show the least variation
(Figure~\ref{fig:alldiag}a), with temperatures between 6000~K and
7000~K and S$^+$/S $\approx 0.25$. In contrast, the faint diffuse WIM
(Fig.~\ref{fig:alldiag}b) has the most variation; that is, it occupies
the largest area, with temperatures ranging between 7000~K and 10,000~K
and S$^+$/S ranging from about 0.1 to 1. Figure~\ref{fig:alldiag}b also
shows that the mean properties of the WIM can be slightly different in
different regions of the Galaxy. For example, in the Perseus arm S$^+$/S
in the WIM has an average near 0.25, while in the more nearby gas
toward that direction, its average is near 0.6.
From this study, we make the following closing statements:
1. The temperature of diffuse ionized gas is higher in regions of
lower
emission measure. This is implied by the elevated [\ion{N}{2}]/H$\alpha$\ and
[\ion{S}{2}]/H$\alpha$\ ratios toward the relatively faint Perseus
superbubble, high latitude filaments, and diffuse background WIM,
compared to the relatively bright Orion-Eridanus bubble and the even
brighter O star \ion{H}{2}\ regions. The relation between
temperature (traced by [\ion{N}{2}]/H$\alpha$) and H$\alpha$\ intensity also holds within
the diffuse WIM itself (e.g., Figures~\ref{fig:oriratioplot},
\ref{fig:bowtieratiolocal}, and \ref{fig:bowtieratioper}).
The elevated temperatures in the WIM are confirmed by the
[\ion{N}{2}]$~\lambda5755$/[\ion{N}{2}]$~\lambda6584$ intensity ratios
(Figure~\ref{fig:bluenii}) and recent observations of [O~II]
\citep{Mierkiewicz+04}. \\
2. The ionization state in diffuse ionized gas is generally lower than that in
classical \ion{H}{2}\ regions.
Several new observations of [\ion{O}{3}]\ and \ion{He}{1}\ indicate that, in general,
the fraction of O$^{++}$/O and He$^+$/He in the WIM and in the
large bubble structures is low compared to
\ion{H}{2}\ regions, implying a lower ionization state due to a softer
ionizing
radiation field and probably a lower ionization parameter (e.g.,
Figures~\ref{fig:bowtiediag} and \ref{fig:nfildiag}).
The high ionization of sulfur (i.e., the low values of S$^{+}$/S) in the WIM of the Perseus arm (Figs. 15, 16, 21b) appears to be an exception to this trend.
The data also suggest that the diffuse low density gas close
to the Galactic plane may be more highly ionized than gas at larger
distances from the plane. In the inner Galaxy, this trend appears to
reverse \citep{MR05}. \\
3. Conditions within the WIM vary significantly. The mean
temperature and ionization state can change considerably from one
sight line to the next and even along a single sight line. Moreover,
the mean properties of the WIM change from one
region of the Galaxy to another (e.g., see Fig.~\ref{fig:alldiag}b).
Within the WIM, values of T and S$^+$/S extend to significantly
higher values than are found within classical \ion{H}{2}\ regions or even
the extended bubbles.
\\
4. High latitude filaments superposed on the faint WIM
have spectral characteristics similar to the WIM. This suggests a
close relationship between these high latitude structures and the more
diffuse background. \\
5. The Perseus superbubble provides strong evidence that a luminous
O star cluster near the midplane can produce widespread, nearly
WIM-like ionization conditions to distances of 1000 pc or more from the ionizing stars.
This superbubble has spectral
characteristics similar to those
observed in portions of the diffuse WIM (Figs.~\ref{fig:alldiag}b
and d) and quite
different from the spectral characteristics of the bright \ion{H}{2}\ region
that
immediately surrounds its source of ionization, the Cas OB6 star
cluster and other classical \ion{H}{2}\ regions (Fig.~\ref{fig:alldiag}a
and d;
Table~\ref{tab:hiimulti}). The fact that the smaller, brighter,
denser
Orion-Eridanus bubble has low [\ion{N}{2}]/H$\alpha$\ ratios, similar to those in
classical \ion{H}{2}\ regions, implies that bubble size, gas density within
the ionized shell, and/or the flux and spectrum of the radiation
escaping
O star clusters may be important in setting the conditions
within the ionized gas. For example, the presence of an extra source of
non-ionizing heat can raise the gas temperature in low density
($\lesssim 0.1$ cm$^{-3}$)
ionized gas \citep{RHT99}, which could account for the elevated, more
WIM-like temperatures in the low density Perseus superbubble, as
well as the more general relationship between temperature and
emission measure (point 1 above). \\
6. Filamentary structures within the Perseus superbubble have
physical conditions that
differ from conditions
in the fainter, more diffuse parts of the bubble. This implies
that these
filaments are discrete entities, likely regions of higher density,
and not just directions of increased
pathlength through folds or edge
projections of a shell or sheet. The slightly depressed [\ion{N}{2}]/H$\alpha$\ and
[\ion{S}{2}]/H$\alpha$\ ratios in these filaments, compared to the fainter
superbubble emission adjacent to them,
suggest that they are in fact cooler than the gas along adjacent
sightlines through the superbubble. \\
In conclusion, we believe that high spectral resolution emission line
observations at visible wavelengths open a new window on the study of
interstellar matter and processes not available through other techniques
at other wavelengths. In particular, the application of nebular line
diagnostics to the study of the warm ionized component of the interstellar
medium provides an opportunity to understand better, through both
observations and modelling, the large-scale effects of hot stars on the
ionization and morphology of the interstellar medium within the Galaxy's
disk and halo.
\section{ACKNOWLEDGEMENTS}
We gratefully acknowledge the anonymous referee for a thorough review which improved the paper.
We thank Kurt Jaehnig for his outstanding technical support in the
continuing operation of the WHAM instrument. This work was funded by
the National Science Foundation through grants AST-0204973 and
AST-0401416. GJM acknowledges additional support from the Wisconsin
Space Grant Consortium. This research has made use of the SIMBAD
database, operated at CDS, Strasbourg, France.
{\it Facilities:} \facility{WHAM ()}
\newpage
\section{\label{sec:level1}Introduction}
The heavy-fermion unconventional superconductor URu$_2$Si$_2$ undergoes an enigmatic phase transition at $T_{\rm O}$ = 17.5 K to the so-called `hidden order (HO)' phase~\cite{Palstra1985,Maple1986,Schlabitz1986}, whose order parameter still remains unsolved~\cite{Mydosh2011}. This compound has a body-centered-tetragonal (bct) ThCr$_2$Si$_2$-type crystal structure (space group No. 139, $I4/mmm$; $D_{\rm 4h}^{17}$). Recently, several experimental findings regarding a possible symmetry lowering of the electron and/or lattice system in the HO phase have been reported, including results of magnetic torque~\cite{Okazaki2011}, synchrotron x-ray~\cite{Tonegawa2014}, Raman scattering~\cite{Buhot2014}, and elastoresistance measurements~\cite{Riggs2015}. However, the proposed broken symmetries conflict with each other. Many theories have been proposed to explain the HO phase, e.g., higher multipolar order from rank 3 to 5~\cite{Kusunose2011, Haule2009, Ikeda2012, Ressouche2012, Rau2012}, hastatic order~\cite{Chandra2002}, spin inter-orbital density wave~\cite{Riseborough2012}, and dynamic antiferromagnetic moment fluctuations~\cite{Elgazzar2009}. A comprehensive interpretation that can explain all of the experimental observations is still lacking.
With high magnetic fields applied along the [001] axis at low temperatures, URu$_2$Si$_2$ undergoes three metamagnetic transitions in the range between 35 and 39 T, which are followed by a collapse of the HO phase~\cite{Kim2003}. In Fig. 1(b), we show a temperature-magnetic-field phase diagram of URu$_2$Si$_2$ for $H \parallel$ [001], which is constructed from the data of the present work and previous magnetization measurements~\cite{Scheerer2012}. First, the HO phase is suppressed at 35 T, followed by a cascade of transitions, where a spin-density wave with propagation wave vector ${\bf k}$ = (0.6, 0, 0) is established in the intermediate phase~\cite{Knafo2016}. Finally, the system enters the polarized paramagnetic (PPM) regime in the high-magnetic-field region above 40 T~\cite{Kim2003}.
URu$_2$Si$_2$ also exhibits a strong hybridization between conduction and $5f$ electrons ($c$-$f$ hybridization) below $T^* \sim$ 50 K in low magnetic fields. This $c$-$f$ hybridization is also suppressed in association with the collapse of HO under high magnetic fields above 40 T for $H \parallel$ [001]\cite{Scheerer2012}. Beyond 40 T, the electronic ground state of URu$_2$Si$_2$ changes from a delocalized to a more localized $5f$-electron regime~\cite{Scheerer2012}. Understanding the dual nature of the uranium $5f$ electrons, which are neither fully localized nor fully itinerant, will likely provide insight into the origin of the HO. A theory which fully describes both the hybridization effect and the localized electron degrees of freedom has yet to be developed. There are two approaches to overcome these issues: either starting from the itinerant electron system (strong-coupling limit) or from the localized electron system (weak-coupling limit). A constraint is that the `symmetry' of the order parameter itself must be the same in both the itinerant and localized components of the $5f$ electrons, as both play a role in developing the HO.
Ultrasonic measurement is a sensitive probe of both itinerant band instabilities, such as the band Jahn-Teller effect, and the local anisotropic charge distribution, such as that found in multipolar ordering. Therefore, the present work is aimed at obtaining better information on the dual nature of the $5f$-electron states in URu$_2$Si$_2$. Our recent investigation of the elastic constant $(C_{11}-C_{12})/2$ of URu$_2$Si$_2$ under pulsed magnetic fields strongly suggests that the hybridized electronic state possesses an orthorhombic ($x^2-y^2$) lattice instability with $\Gamma_3$(B$_{\rm 1g}$) symmetry~\cite{Yanagisawa2013}. The origin of the lattice instability is considered to be either a potential deformation due to the Jahn-Teller effect of hybridized bands or a simple crystalline electric field (CEF) effect of uranium's $5f$ electrons; however, the origin of the $\Gamma_3$(B$_{\rm 1g}$) lattice instability and its relation to the HO parameter are still open questions. In order to verify that the system does not exhibit a lattice instability in other symmetry channels, and to examine the theoretical predictions of CEF ground-state schemes at high magnetic fields, as well as the related higher-multipolar order-parameter scenarios for the HO phase, we study the elastic responses to the other symmetry-breaking strains. In the present paper, we report on the responses of $C_{44}$ with $\Gamma_5$(E$_{\rm g}$) symmetry and $C_{66}$ with $\Gamma_4$(B$_{\rm 2g}$) symmetry under high magnetic field, and compare these results with the previously reported $(C_{11}-C_{12})/2$ with $\Gamma_3$(B$_{\rm 1g}$) symmetry.
\begin{figure}[h]
\includegraphics[width=0.7\linewidth]{PRB2017_Fig01_jpg}
\caption{\label{fig:fig1}
(a) Magnetic field dependence of elastic constants $C_{11}$, $(C_{11}-C_{12})/2$, $C_{33}$, $C_{44}$, and $C_{66}$ at fixed temperatures of 22-23 K for $H \parallel$ [001]. $C_{11}$ is divided by 10 to allow a better comparison.
(b) The temperature-magnetic-field phase diagram of URu$_2$Si$_2$ for $H \parallel$ [001] is compiled from the present ultrasonic experiments and the previous results\cite{Scheerer2012}. The blue horizontal lines indicate the trajectories where the pulsed-field measurements were performed at fixed temperatures of 22.5 and 1.5 K. (c) is the same as (a) at 1.5 K. The dotted lines are visual aids.
}
\end{figure}
\begin{table*}[t]
\caption{\label{tab:table1}Symmetry, symmetrized strain and rotation, and multipole for different elastic constants.}
\begin{ruledtabular}
\begin{tabular}{llcr}
Symmetry (D$_{\rm 4h}$ group)&Strain and Rotation& Multipole&Elastic Constant\\\hline
\\
$\Gamma_1$(A$_{\rm 1g}$)&$\epsilon_{zz}=\epsilon_{\rm B}/3+\epsilon_{\rm u}/3$&-&$C_{33}=-3C_{\rm B}+4C_{\rm u}+4C_{13}$\\
$\Gamma_1$$\oplus$$\Gamma_3$(A$_{\rm 1g}$$\oplus$B$_{\rm 1g}$)&$\epsilon_{xx},\epsilon_{yy}$&-&$C_{11}=3C_{\rm B}-C_{\rm u}+(C_{11}-C_{12})/2-2C_{13}$\\
$\Gamma_3$(B$_{\rm 1g}$)&$\epsilon_{\rm v}=\epsilon_{xx}-\epsilon_{yy}$&$O_{\rm v}=\sqrt{3}(J_{x}^2-J_{y}^2)/2$&$C_{\rm v}=(C_{11}-C_{12})/2$\\
$\Gamma_4$(B$_{\rm 2g}$)&$\epsilon_{xy}$&$O_{xy}=\sqrt{3}(J_{x}J_{y}+J_{y}J_{x})/2$&$C_{66}$\\
$\Gamma_5$(E$_{\rm g}$)&$\epsilon_{yz}$&$O_{yz}=\sqrt{3}(J_{y}J_{z}+J_{z}J_{y})/2$&$C_{44}$\\
&$\epsilon_{zx}$&$O_{zx}=\sqrt{3}(J_{z}J_{x}+J_{x}J_{z})/2$&$C_{44}$\\
\hline
\\
$\Gamma_1$(A$_{\rm 1g}$)&$\epsilon_{\rm B}=\epsilon_{xx}+\epsilon_{yy}+\epsilon_{zz}$&-&$C_{\rm B}=(2C_{11}+2C_{12}+4C_{13}+C_{33})/9$\\
$\Gamma_1$(A$_{\rm 1g}$)&$\epsilon_{\rm u}=(2\epsilon_{zz}-\epsilon_{xx}-\epsilon_{yy})$&$O_{\rm u}=\sqrt{3}(2J_{z}^2-J_{x}^2-J_{y}^2)/2$&$C_{\rm u}=(C_{11}+C_{12}-4C_{13}+2C_{33})/6$\\
$\Gamma_2$(A$_{\rm 2g}$)&$\omega_{xy}$&$H_{z}^{\rm \alpha}=\sqrt{35}(J_+^4-J_-^4)/4i$&$C_{66},C_{\rm v}$\\
\end{tabular}
\end{ruledtabular}
\end{table*}
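The combinations of elastic constants quoted in Table I can be checked directly: substituting the definitions of $C_{\rm B}$ and $C_{\rm u}$ into the expressions for $C_{33}$ and $C_{11}$ must return identities. The sketch below (our consistency check, not from the paper; the numerical values are arbitrary test inputs) verifies this.

```python
# Numerical consistency check of the Table I relations: pick arbitrary
# tetragonal elastic constants and verify that the bulk (C_B) and
# Gamma_1 (C_u) combinations reproduce C_33 and C_11 as quoted.
C11, C12, C13, C33 = 255.0, 55.0, 105.0, 310.0  # GPa, arbitrary test values

CB = (2*C11 + 2*C12 + 4*C13 + C33) / 9   # bulk-modulus combination
Cu = (C11 + C12 - 4*C13 + 2*C33) / 6     # Gamma_1 (epsilon_u) combination
Cv = (C11 - C12) / 2                     # Gamma_3 (epsilon_v) mode

# Table I identities: C33 = -3 C_B + 4 C_u + 4 C13
#                     C11 =  3 C_B -   C_u + C_v - 2 C13
assert abs(C33 - (-3*CB + 4*Cu + 4*C13)) < 1e-9
assert abs(C11 - (3*CB - Cu + Cv - 2*C13)) < 1e-9
print("Table I relations are algebraically consistent")
```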
\section{\label{sec:level2}Experimental Details}
We investigated two single crystals of URu$_2$Si$_2$ grown using the Czochralski technique in a tetra-arc furnace at UC San Diego (sample \#1) and CEA Grenoble (sample \#2). Sample \#1, as grown, has dimensions of $3.8\times 1.8\times 1.2$ mm$^3$ with parallel [110] facets and a residual resistivity ratio (RRR) of $\sim$10; it was used for the $(C_{11}-C_{12})/2$, $C_{44}$, and $C_{33}$ measurements. Sample \#2, annealed in vacuum, has dimensions of $3.38\times 1.67\times 1.5$ mm$^3$ with parallel [100] facets and RRR $\sim$29; it was used for the $C_{11}$, $C_{44}$, and $C_{66}$ measurements. Note that there is no obvious sample dependence in the magnetic-field dependence of $C_{44}$ between the two samples, except for a difference in the signal-to-noise ratio.
The sample surfaces were well polished and characterized by x-ray Laue diffraction to check the characteristic symmetries of the facets. Ultrasound was generated and detected by using LiNbO$_3$ transducers with a thickness of 40-100 $\mu$m, which were fixed on the sample surfaces with room-temperature-vulcanizing (RTV) silicone or superglue. We used pulsed magnetic fields up to 68 T with pulse duration of about 150 ms at the Dresden High Magnetic Field Laboratory. The sound-velocity measurements were performed by using a conventional phase comparative method using a digital storage oscilloscope.
Ultrasound induces both a linear strain and a rotation field (similar to Raman modes; a summary for the D$_{\rm 4h}$ point group is shown in Table I) in the solid, which act as conjugate fields for the electric quadrupole or electric hexadecapole moments. These multipolar responses can be observed as a sound-velocity change and ultrasonic attenuation via the electron-phonon interaction. The sound velocity $v_{\rm ij}$ is converted to the elastic constant $C_{\rm ij}$ by using the formula $C_{\rm ij} = \rho v_{\rm ij}^2$, where $\rho$ = 10.01 g/cm$^3$ is the density of URu$_2$Si$_2$.
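The conversion from measured sound velocity to elastic constant can be sketched as follows; the velocity used here is a hypothetical example value, since the absolute velocities are not quoted in this section, and the relative-change relation is the first-order consequence of $C \propto v^2$ exploited by the phase-comparison method.

```python
# Minimal sketch of C_ij = rho * v_ij^2 with rho = 10.01 g/cm^3 (text value).
RHO = 10.01e3  # density of URu2Si2 in kg/m^3

def elastic_constant(v_ms):
    """Elastic constant in Pa (J/m^3) from a sound velocity in m/s."""
    return RHO * v_ms**2

def relative_dC(dv_over_v):
    """To first order, dC/C = 2 dv/v, the quantity tracked in a
    phase-comparison measurement."""
    return 2.0 * dv_over_v

C = elastic_constant(3.0e3)   # hypothetical v = 3 km/s
print(C)                      # ~9.0e10 J/m^3
print(relative_dC(1.0e-3))    # 2.0e-3
```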
\section{\label{sec:level3}Results}
In Fig. 1, we show the magnetic-field dependence of the elastic constants $C_{11}$/10, $(C_{11}-C_{12})/2$, $C_{33}$, $C_{44}$, and $C_{66}$ at fixed temperatures of 22-23 K [Fig. 1(a)] and 1.5 K [Fig. 1(c)] for $H \parallel$ [001], measured with ultrasonic frequencies of 75 MHz for $C_{11}$, 159.5 MHz for $(C_{11}-C_{12})/2$, 78.7 MHz for $C_{33}$, 164 MHz for $C_{44}$, and 166 MHz for $C_{66}$. At 22-23 K, the elastic constants $C_{33}$, $C_{44}$, and $C_{66}$ decrease with increasing magnetic field through the crossover region of the $c$-$f$ hybridization (below 30 T) and toward the polarized-paramagnetic region (above 45 T), while $C_{11}$ and $(C_{11}-C_{12})/2$, both related to the $\Gamma_3$-symmetry response, increase above 35 T.
The magnetic field-temperature ($H$-$T$) phase diagram is displayed in Fig. 1(b) for comparison,
where the horizontal lines mark the trajectories of the elastic-constant measurements. In Fig. 1(c), all elastic constants at 1.5 K show successive step-like anomalies through the cascade of metamagnetic transitions accompanying the destruction of the hidden order\cite{Note1}. The overall tendency to decrease or increase with field reproduces the magnetic-field dependence at 22-23 K [Fig. 1(a)]. Such a clear contrast between the decreasing and increasing tendencies of the three transverse modes in the paramagnetic phase just above $T_{\rm O} \sim$ 17.5 K supports the idea that the $\Gamma_3$-type orthorhombic lattice instability is related to a symmetry-breaking band instability that arises due to the $c$-$f$ hybridization and is probably linked to the origin of HO in this compound\cite{Yanagisawa2013}.
One may suspect a magnetostriction contribution to the sound-velocity change, since the magnetic-field dependence of the elastic constants looks very similar to the magnetization and magnetostriction changes in pulsed magnetic fields. However, on applying a magnetic field along the [001] axis of URu$_2$Si$_2$, the $c$-axis length decreases only by $\Delta L_{c}/L_{c} \sim 10^{-4}$ at 45 T and 1.5 K, and the $a$ axis expands by the same order of magnitude due to the Poisson effect~\cite{Correa2012}. In the present case, such an effect would mainly lead to enhanced softening of the longitudinal $C_{11}$ mode in the vicinity of the cascade of transitions, since $C_{11}$ includes a contribution from the bulk modulus (volume strain). Based on the modified Ehrenfest equation~\cite{Knafo2007}, the estimated contribution of the magnetostriction to the sound-velocity change is $\Delta v_{\rm ij}/v_{\rm ij} \sim 10^{-4}$, which is less than 5\% of the total velocity change of $\sim 2\times 10^{-3}$ of the transverse ultrasonic modes $C_{44}$, $C_{66}$, and $(C_{11}-C_{12})/2$. The hardening of $(C_{11}-C_{12})/2$ at the collapse of the HO phase has a tendency opposite to the magnetostriction along [100], since the latter is equivalent to $1/\sqrt{2}$ of the magnetostriction along [110]. Consequently, the $\Gamma_3$ elastic response originates from the drastic change of the transverse acoustic-phonon dispersion due to strong coupling to the $5f$ electrons.
\begin{figure*}[ht]
\includegraphics[width=0.7\linewidth]{PRB2017_Fig02_jpg}
\caption{\label{fig:fig2} Left column: Magnetic-field dependence of the elastic constants (a) $(C_{11}-C_{12})/2$, (d) $C_{66}$, and (g) $C_{44}$ for $H \parallel$ [001] of URu$_2$Si$_2$ at selected temperatures. The lower panel in each figure shows the sound-attenuation change $\Delta \alpha$ vs. $H$. These data were taken for both increasing and decreasing field. Middle column: Three-dimensional plots of the elastic constants vs. temperature and magnetic field aligned along the $c$ axis of URu$_2$Si$_2$. The bottom of the boxes shows the magnetic field-temperature phase diagram of URu$_2$Si$_2$ for $H \parallel$ [001]. Right column: Normalized elastic constants vs. temperature at various magnetic fields $H \parallel$ [001] converted from (a), (d), and (g), except for the zero-magnetic field data. Green dotted lines indicate the estimated phonon background. The panels arranged horizontally show the modes, (a)-(c) for $(C_{11}-C_{12})/2$ reprinted from Ref. [\onlinecite{Yanagisawa2013}], (d)-(f) for $C_{66}$; and (g)-(i) for $C_{44}$.}
\end{figure*}
\begin{figure*}[ht]
\includegraphics[width=0.7\linewidth]{PRB2017_Fig03_}
\caption{\label{fig:fig3}
Temperature dependence of the normalized elastic constants of (a) $\Gamma_3$: $(C_{11}-C_{12})/2$, (b) $\Gamma_4$: $C_{66}$, and (c) $\Gamma_5$: $C_{44}$ at various magnetic fields $H \parallel$ [001], where the phonon background is subtracted. Solid lines in (a) are calculated by using the band-Jahn-Teller model (see text), and the solid lines in (b) and (c) are visual aids. Calculated uniform quadrupolar susceptibilities of (d) $\Gamma_3$: $O_2^2$, (e) $\Gamma_4$: $O_{xy}$ and (f) $\Gamma_5$: $O_{yz}$ for different CEF schemes (see Table II) at 0 and 60 T.
}
\end{figure*}
\begin{figure}[t]
\includegraphics[width=0.7\linewidth]{PRB2017_Fig04_}
\caption{\label{fig:fig4}
Magnetic field dependence of the BJT fit parameters for $(C_{11}-C_{12})/2$: The gap between the two levels $E_{\rm F}$-$E_0$ (red, left axis) and $2d^2N_0$ (blue, right axis, see text for details). The dotted curves are a visual aid.
}
\end{figure}
In Figs. 2(d) and 2(g), we show the isotherms of the modes $C_{66}$ and $C_{44}$, respectively, as a function of increasing and decreasing magnetic field applied along [001]. For comparison, our previous results~\cite{Yanagisawa2013} for $(C_{11}-C_{12})/2$ are also shown in Fig. 2(a). From these data, we determined the elastic constants as a function of temperature at fixed magnetic fields, shown in Figs. 2(c), 2(f), and 2(i).
The middle column combines three-dimensional plots of the elastic constants versus temperature and magnetic field $H \parallel c$ for the three different symmetries: (b) $(C_{11}-C_{12})/2$ for $\Gamma_3$(B$_{\rm 1g}$), (e) $C_{66}$ for $\Gamma_4$(B$_{\rm 2g}$), and (h) $C_{44}$ for $\Gamma_5$(E$_{\rm g}$) of the D$_{\rm 4h}$ point-group symmetry. The bottom of each box shows the $H$-$T$ phase diagram. The blue-white-red color gradation indicates the relative stiffness of each ultrasonic mode, stiffer in blue and softer in red. In the soft-mode regions, the system may exhibit lattice instabilities of the corresponding symmetry. For example, for the $(C_{11}-C_{12})/2$ mode, the corresponding $\Gamma_3$(B$_{\rm 1g}$) lattice instability is enhanced in the low-temperature and low-magnetic-field region, where strong $c$-$f$ hybridization occurs, and suppressed at high temperatures and high magnetic fields. The $\Gamma_4$(B$_{\rm 2g}$) and $\Gamma_5$(E$_{\rm g}$) modes show the opposite tendency. Such a clear difference among the three transverse modes indicates the presence of the $\Gamma_3$(B$_{\rm 1g}$) lattice instability in the HO phase and in the strong $c$-$f$ hybridization region at low magnetic fields in URu$_2$Si$_2$.
\section{\label{sec:level4}Discussion}
\subsection{\label{sec:level4-1}Band Jahn-Teller Model:\\
(Delocalized $5f$-electron state)}
In Figs. 3(a)-3(c), the normalized elastic constants versus temperature at various magnetic fields are shown for $\Gamma_3$(B$_{\rm 1g}$): $(C_{11}-C_{12})/2$ [Fig. 3(a)], $\Gamma_4$(B$_{\rm 2g}$): $C_{66}$ [Fig. 3(b)], and $\Gamma_5$(E$_{\rm g}$): $C_{44}$ [Fig. 3(c)], with the phonon background subtracted. For simplicity, we used phenomenological fits to the elastic constants of ThRu$_2$Si$_2$, measured from 300 to 1.5 K in zero magnetic field, as the phonon background, shown as the dotted lines in Figs. 2(c), 2(f), and 2(i). A similar subtraction was also performed in our previous work~\cite{Yanagisawa2012}. First, we analyzed the softening of $(C_{11}-C_{12})/2$ by using the phenomenological theory of the band-Jahn-Teller (BJT) effect, assuming a rigid degenerate two-band state~\cite{Luethi2006}. The solid lines in Fig. 3(a) were calculated from the following equation:
\begin{equation}
\frac{(C_{11}-C_{12})}{2}=C_{\rm ph}-2d^2N_0\{1-e^{-(E_{\rm F}-E_0)/k_{\rm B}T}\}.
\end{equation}
Here, $C_{\rm ph}$ is the phonon background [as shown in Fig. 2(c)], $d$ is a deformation-potential coupling constant, $N_0$ is the density of states at the Fermi energy $E_{\rm F}$, and $E_0$ is the energy at the bottom of the conduction band. The term $2d^2N_0$ is set to be temperature independent. Figure 4 shows the magnetic-field dependence of the fit parameters $2d^2N_0$ and $E_{\rm F}-E_0$. We obtain $E_{\rm F}-E_0$ = 43 K at 0 T and $E_{\rm F}-E_0$ = 28 K at 35 T. The value $2d^2N_0 = 0.071\times 10^{10}$ J m$^{-3}$ at 0 T gradually decreases with increasing magnetic field, which is consistent with the reduction of $c$-$f$ hybridization under magnetic field, which causes a weakening of the deformation-potential coupling. The parameters obtained below 30 T are comparable to the values reported for the typical band-Jahn-Teller system LaAg$_{1-x}$In$_{x}$~\cite{Knorr1980}, where the compounds with $x$ = 0 and $x$ = 0.11 do not show a structural transition but exhibit a softening in $(C_{11}-C_{12})/2$ due to a $\Gamma_3$ lattice instability. Here, for URu$_2$Si$_2$, the obtained deformation-potential coupling energy is less than 1/5 of the value for LaAg ($x$ = 0, $2d^2N_0 = 0.375\times 10^{10}$ J m$^{-3}$), suggesting that the effect is too weak to induce a structural phase transition. Above 40 T, the gap and the fitting error bars drastically increase, which appears to be extrinsic and marks the limit of applicability of this model.
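The behavior of Eq. (1) can be illustrated numerically with the 0 T fit parameters quoted above; the phonon-background value below is a hypothetical placeholder, since $C_{\rm ph}$ is a fitted temperature-dependent curve in the actual analysis. The softening amplitude saturates at $2d^2N_0$ for $k_{\rm B}T \ll E_{\rm F}-E_0$ and vanishes at high temperature.

```python
import math

# Sketch of Eq. (1), the band-Jahn-Teller softening of (C11-C12)/2.
GAP_K = 43.0          # E_F - E0 in kelvin (fit value at 0 T, from the text)
COUPLING = 0.071e10   # 2 d^2 N0 in J/m^3 (fit value at 0 T, from the text)
C_PH = 6.5e10         # hypothetical phonon background, J/m^3 (placeholder)

def c_bjt(T):
    """(C11 - C12)/2 from the two-band BJT model at temperature T (K)."""
    return C_PH - COUPLING * (1.0 - math.exp(-GAP_K / T))

# Softening relative to the phonon background:
print(C_PH - c_bjt(1.5))     # low T: saturates near COUPLING = 2 d^2 N0
print(C_PH - c_bjt(300.0))   # high T: much smaller, C recovers toward C_ph
```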
\subsection{\label{sec:level4-2}Crystalline Electric Field Models:\\
(Localized $5f$-electron state)}
We compare the elastic responses obtained in the high-magnetic-field region with uniform quadrupolar susceptibilities calculated using the CEF schemes proposed thus far for the $5f^2$ configuration. We have considered a variety of CEF level schemes, in particular those based on the U$^{4+}$ ($5f^2$) ion and the non-Kramers $^3H_4$ ($J$=4) Hund's-rule ground-state multiplet; a non-Kramers configuration can easily reproduce the reported anisotropic magnetization along the $a$ and $c$ axes of this compound\cite{Nieuwenhuys1987}. The details of the four CEF schemes considered are listed in Table II. It should be noted that the present CEF scheme 1 has two low-lying U-$5f$ singlets, $\Gamma_1^{(1)}=\alpha(|4\rangle+|-4\rangle)-\beta|0\rangle$ and $\Gamma_2=i(|4\rangle-|-4\rangle)/\sqrt{2}$, which is identical to the level scheme in the theoretical models originally predicting A$_{\rm 2g}$-type hexadecapolar order as the order parameter of the HO state, proposed by Haule and Kotliar~\cite{Haule2009} and by Kusunose and Harima\cite{Kusunose2011}.
\begin{table*}
\caption{\label{tab:table2}Labels, CEF level scheme, active multipoles, author and references}
\begin{ruledtabular}
\begin{tabular}{llcrc}
Labels&Level Scheme (K)&Active Multipoles (Symmetry)&Authors&Ref.\\\hline
Scheme 1&$\Gamma_1^{(1)}-\Gamma_2(60)-\Gamma_3(178)-\Gamma_5^{(1)}(491)-$...&$H_{z}^{\rm \alpha} (A_{\rm 2g})$&Yanagisawa {\it et al.}&[\onlinecite{Yanagisawa2013J}]\\
Scheme 2&$\Gamma_5^{(1)}-\Gamma_1^{(1)}(404)-\Gamma_2(1076)-$...&$O_2^2 (B_{\rm 1g})$&Galatanu {\it et al.}&[\onlinecite{Galatanu2005}]\\
Scheme 3&$\Gamma_3-\Gamma_1^{(1)}(44)-\Gamma_2(112)-\Gamma_5^{(1)}(485)$...&$O_2^2 (B_{\rm 1g})$ or $T_{xyz} (B_{\rm 1u})$&Santini and Amoretti&[\onlinecite{Santini1994}]\\
Scheme 4&$\Gamma_1^{(1)}-\Gamma_5^{(2)}(140)-\Gamma_2(300)$...&$T_x^{\rm \beta} (E_{\rm u})$&Hanzawa and Watanabe&[\onlinecite{Hanzawa2005}]\\
\end{tabular}
\end{ruledtabular}
\end{table*}
The present analysis allows us to qualitatively compare the measured normalized elastic constants [Figs. 3(a)-(c)] with the calculated quadrupolar susceptibilities shown in Figs. 3(d)-(f) (Appendix A). At first glance, none of these CEF schemes successfully reproduces the experimental observations. A detailed analysis follows below:\\
(i) $(C_{11}-C_{12})/2$, $\Gamma_3$(B$_{\rm 1g}$) symmetry:\\
Only Schemes 1 and 3 reproduce the temperature and magnetic field dependence of $(C_{11}-C_{12})/2$. Scheme 2 shows a steep softening below 20 K at $H$ = 0 T and Scheme 4 shows a broad minimum at around 50 K at $H$ = 0 and 60 T, inconsistent with the experimental data at low and high magnetic fields.\\
(ii) $C_{66}$, $\Gamma_4$(B$_{\rm 2g}$) symmetry:\\
Only Scheme 3 roughly reproduces the temperature dependence of $C_{66}$ at high magnetic field. However, the softening expected at 0 T in Scheme 3 is not seen in the experimental data. Scheme 2 again shows a steep softening at $H$ = 0 below 20 K, and Schemes 1 and 4 show local minima and upturns, inconsistent with the experiment.\\
(iii) $C_{44}$, $\Gamma_5$(E$_{\rm g}$) symmetry:\\
Only Scheme 4 reproduces the softening at 60 T, but its magnetic-field dependence shows an opposite tendency (no softening in the magnetic field). All the other schemes (1-3) show neither low-temperature softening nor enhancement under magnetic fields. \\
Therefore, based on this logic, we conclude that the present experimental results cannot be fully explained by CEF schemes in the $5f^2$ configuration. Note that other CEF schemes have been tested and also resulted in poor agreement with the experimental data, for example, $\Gamma_1^{(1)}$-$\Gamma_4$(45 K)-$\Gamma_5^{(2)}$(51 K)-$\Gamma_2$(100 K) [\onlinecite{Kiss2005}], which cannot be described by a purely tetragonal CEF since this theory considers many-body effects, $\Gamma_1^{(1)}$-$\Gamma_2$(42 K)-$\Gamma_1^{(2)}$(170 K) [\onlinecite{Nieuwenhuys1987}], and $\Gamma_4$-$\Gamma_1^{(1)}$(44 K)-$\Gamma_2$(112 K) [\onlinecite{Santini1994}].
Here, we discuss the conditions for applying these CEF schemes to URu$_2$Si$_2$. As mentioned, the $5f^2$ non-Kramers multiplet is the best assumption for reproducing the anisotropy in the magnetization. Here, $J_z$ has diagonal matrix elements in the doublet states and off-diagonal elements between singlet-singlet and doublet-doublet states. On the other hand, $J_x$ and $J_y$ only have off-diagonal elements between singlet-doublet states. Thus, if the singlet and doublet states are well separated in the non-Kramers $J=4$ CEF states (as in Schemes 1 and 2), one naturally obtains magnetic anisotropy. Indeed, CEF Schemes 3 and 4, where the singlet and doublet are relatively close ($\leq$ 300 K), cannot fully reproduce the anisotropic magnetization.
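The selection rules quoted above follow directly from the $J=4$ angular-momentum matrices. A minimal numerical sketch (purely illustrative, not part of the original analysis) makes this explicit, with the $|J_z\rangle$ basis ordered from $-4$ to $+4$:

```python
import numpy as np

# Illustrative check: build the J = 4 angular-momentum matrices
# in the |J_z> basis ordered from -4 to +4.
J = 4
mz = np.arange(-J, J + 1)
Jz = np.diag(mz.astype(float))
Jp = np.zeros((2 * J + 1, 2 * J + 1))
for i, m in enumerate(mz[:-1]):
    # <m+1| J_+ |m> = sqrt(J(J+1) - m(m+1))
    Jp[i + 1, i] = np.sqrt(J * (J + 1) - m * (m + 1))
Jx = 0.5 * (Jp + Jp.T)

# J_z is diagonal, while J_x only connects states with Delta J_z = +/-1;
# it therefore has no matrix element among |+4>, |-4> and |0>, the
# components of the Gamma_1^(1) and Gamma_2 singlets of Scheme 1.
assert np.allclose(Jz, np.diag(np.diag(Jz)))
assert all(abs(i - j) == 1 for i, j in zip(*np.nonzero(Jx)))
```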
On the other hand, all CEF schemes above are inconsistent with the occurrence of softening in the $C_{44}$ mode, because the corresponding quadrupolar moments $O_{yz}$ and $O_{zx}$ have a $\Delta J = \pm1$ transition and are always accompanied by a magnetic moment $J_z$. Thus, it is difficult to find a CEF scheme which satisfies these mutually exclusive features. It is even more challenging to find a CEF scheme which balances the competing transitions of $O_{xy}$ with $\Delta J = \pm2$ against $O_{yz}$ and $O_{zx}$ with $\Delta J = \pm1$, and which also reproduces all elastic-constant softenings at high magnetic fields, where the present system is affected by neither $c$-$f$ hybridization nor PPM states. Therefore, we need to find an appropriate CEF scheme and/or consider another origin or modulation to reproduce the experimental data.
One possibility is a rotation effect~\cite{Dohm1975, Thalmeier1975}. Rotational invariance of the Hamiltonian describing a quadrupole-strain interaction will produce a finite modulation of the transverse mode under magnetic field. In the present experiments, the geometry of the $C_{44}$ mode ($k \parallel$ [100], $u \parallel H \parallel$ [001]) is the relevant case for this effect. This ultrasonic mode induces the strain field $\epsilon_{zx}$ and also the rotation $\omega_{zx}$, which couples to the magnetic torque of the total angular momentum $J$. We computed such an effect for CEF Scheme 3, which originally shows no softening in $C_{44}$, but the rotation does not reproduce it. CEF Scheme 1, on the other hand, can generate the softening in $C_{44}$ when the rotation effect is considered (not shown). To verify whether or not this modulation exists, further measurements of $C_{44}$ with different geometries, for example ($k\parallel H \parallel$ [001], $u \parallel$ [100]) and ($k \parallel H \parallel$ [100], $u \parallel$ [001]), need to be performed.
\subsection{\label{sec:level4-3}Consideration of Hexadecapolar Contribution}
In contrast to $C_{44}$ and other modes, $C_{66}$, measured with ($k \parallel$ [100], $u \parallel$ [010], and $H \parallel$ [001]), has no rotation-effect contribution. As mentioned, none of these CEF schemes could reproduce the low-temperature softening of $C_{66}$ in a high magnetic field.
A possible explanation for this softening is a higher-rank multipolar contribution, such as an electric hexadecapolar contribution to the elastic constant. As shown in Table I, the transverse ultrasonic mode $C_{66}$ and $(C_{11}-C_{12})/2$, which propagate in the $c$ plane ($k \perp$ [001]) also induce the rotation $\omega_{xy}$, which couples to the electric hexadecapole $H_{z}^{\rm \alpha}=\sqrt{35}(J_+^4-J_-^4)/4i$, with $\Gamma_2$ (A$_{\rm 2g}$) symmetry (Appendix B). This is the theoretically predicted order parameter of Scheme 1 in Table II. It should also be noted that recent inelastic x-ray scattering measurements showed that the $5f$ ground-state wave function is mainly composed of $\Gamma_1$ and/or $\Gamma_2$, which is consistent with CEF Scheme 1.~\cite{Sundermann2016}
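The operator algebra behind this coupling can likewise be checked numerically. The sketch below (illustrative only; $\alpha$ and $\beta$ are arbitrary normalised values, not fitted CEF parameters) builds $H_{z}^{\rm \alpha}$ from the $J=4$ ladder operators and verifies that it is Hermitian and connects the two singlets of Scheme 1:

```python
import numpy as np

# Illustrative check of the hexadecapole operator defined above.
J = 4
mz = np.arange(-J, J + 1)
Jp = np.zeros((2 * J + 1, 2 * J + 1))
for i, m in enumerate(mz[:-1]):
    Jp[i + 1, i] = np.sqrt(J * (J + 1) - m * (m + 1))
Jp4 = np.linalg.matrix_power(Jp, 4)
Jm4 = np.linalg.matrix_power(Jp.T, 4)

# H_z^alpha = sqrt(35)(J_+^4 - J_-^4)/(4i): Hermitian, Delta J_z = +/-4
Hza = np.sqrt(35) * (Jp4 - Jm4) / 4j
assert np.allclose(Hza, Hza.conj().T)

# It connects the two lowest singlets of Scheme 1 (the overall phase i of
# Gamma_2 is omitted, as it drops out of |<Gamma_2|H|Gamma_1>|).
ket = {m: np.eye(2 * J + 1)[m + J] for m in (-4, 0, 4)}
al, be = 0.6, np.sqrt(1 - 2 * 0.6 ** 2)   # arbitrary normalised coefficients
G1 = al * (ket[4] + ket[-4]) - be * ket[0]
G2 = (ket[4] - ket[-4]) / np.sqrt(2)
assert abs(G2 @ Hza @ G1) > 1e-6
```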
\begin{figure}[t]
\includegraphics[width=0.9\linewidth]{PRB2017_Fig05_}
\caption{\label{fig:fig5}
Calculated uniform multipolar susceptibilities, including the $\Gamma_3$ (B$_{\rm 1g}$) and $\Gamma_4$ (B$_{\rm 2g}$) quadrupole terms $O_{2}^{2}$ and $O_{xy}$, respectively, and the $\Gamma_2$ (A$_{\rm 2g}$) hexadecapole term $H_{z}^{\rm \alpha}$, obtained by using CEF Scheme 1 (see Table II): (a) temperature dependence at 0 T (open symbols) and 60 T (solid symbols), and (b) magnetic-field dependence at 0 K.
}
\end{figure}
Additionally, in recent resonant x-ray scattering measurements, no superlattice reflections or azimuthal-angle dependences that would evidence rank-2 or rank-3 multipolar order have been observed so far~\cite{Amitsuka2010}. Thus, lower-rank electric quadrupole order and magnetic octupolar order can be eliminated as candidates for the HO parameter. The remaining candidate is an electric hexadecapole order with A$_{\rm 2g}$ symmetry or a composite order corresponding to this symmetry, such as the chiral density wave order with A$_{\rm 2g} \pm $B$_{\rm 1g}$ symmetry.~\cite{Kung2015} Since the elastic response of chiral density waves is not fully understood, the following analysis is based on the $H_{z}^{\rm \alpha}$-type hexadecapolar order predicted by Kusunose {\it et al.}~\cite{Kusunose2011} with CEF Scheme 1, where the $H_{z}^{\rm \alpha}$ moment is active. Figure 5 shows the uniform hexadecapolar and quadrupolar susceptibilities as a function of temperature [Fig. 5(a)] and magnetic field [Fig. 5(b)], calculated by using CEF Scheme 1. The susceptibility of $H_{z}^{\rm \alpha}$ (A$_{\rm 2g}$) shows the opposite temperature dependence compared to $O_{xy}$(B$_{\rm 2g}$) and a similar temperature dependence to $O_2^2$(B$_{\rm 1g}$), with a relatively larger matrix element (divided by 100 in Fig. 5). Again, the response shows the opposite tendency to the increasing softening in the high-magnetic-field region. Since the rotation $\omega_{xy}$ is a unitary transformation, the hexadecapole moment will not affect the single-ion Hamiltonian at zero magnetic field and/or under a field applied along the $z$ ([001]) axis. In other words, this hexadecapole will affect the sound velocity only when a finite magnetic field in the $xy$ plane and/or an anisotropic multipolar interaction exists. 
Thus, we need to assume a large anisotropy in the coupling mechanism of the hexadecapole-lattice interaction and a two-electron Hamiltonian to reproduce the opposite elastic responses of $C_{66}$ and $(C_{11}-C_{12})/2$. A similar elastic response and characteristic ultrasonic attenuation were observed in the $C_{66}$ mode of the iron-based superconductor Ba(Fe$_{1-x}$Co$_x$)$_2$As$_2$ ($x$ = 0.1)~\cite{Kurihara2017}, where a hexadecapolar order and its instability towards the superconducting phase were predicted. However, the authors mention that the hexadecapolar contribution is estimated to be 250 times smaller than the quadrupolar contribution in this iron-based superconductor. Therefore, the hexadecapolar contribution to the present elastic constants $(C_{11}-C_{12})/2$ and $C_{66}$ of URu$_2$Si$_2$ is also expected to be minuscule, and will not reproduce the softening of $C_{66}$ in high magnetic fields, unless the hexadecapolar contribution is strongly enhanced for some unknown reason.
Using a different approach, we also checked the hexadecapolar contribution on the elastic constant $C_{66}$ in a magnetic field applied perpendicular to the $c$ axis. Figure 6 shows the magnetic-field dependence of the elastic constant $C_{66}$ for $H \parallel$ [100] and $H \parallel$ [110] of URu$_2$Si$_2$ at 4.2 and 20 K. There is no obvious difference in the data below and above $T_{\rm O}$ and for both field orientations within the present measurement accuracy.
The quadrupolar susceptibility was calculated using a mean-field approximation that assumes an $H_{z}^{\rm \alpha}$-type antiferro-hexadecapolar interaction as the HO parameter, based on the theory of Kusunose {\it et al.}~\cite{Kusunose2011}, which predicts that a very tiny difference should appear between the [100] and [110] directions in the antiferro-hexadecapole (AFH) ordered state. The uniform quadrupolar susceptibility calculated using this mean-field theory~\cite{Yanagisawa2013J} with CEF Scheme 1 is also displayed in Fig. 6. The predicted anisotropy between $H \parallel$ [100] (red line) and $H \parallel$ [110] (blue line) cannot be distinguished on the present scale of Fig. 6.
We have reported similar results for the mode $(C_{11}-C_{12})/2$ in a previous paper [\onlinecite{Yanagisawa2013J}]. Thus, as in the previous investigation, higher magnetic fields and/or improved measurement accuracy, such as using static magnetic fields, are required to ultimately rule out the existence of a hexadecapole interaction. In conclusion, a hexadecapolar order is not indicated within the present measurement accuracy under a pulsed magnetic field. The origin of the enhanced softening of $C_{66}$ for $H \parallel$ [001] at high magnetic fields remains an open question.
\begin{figure}
\includegraphics[width=0.7\linewidth]{PRB2017_Fig06_}
\caption{\label{fig:fig6}
Left axis: Magnetic field dependence of elastic constant $C_{66}$ for $H \parallel$ [100] and $H \parallel$ [110] of URu$_2$Si$_2$ at 4.2 and 20 K. Right axis: Calculated (uniform) quadrupolar susceptibility using the mean-field theory with CEF Scheme 1 as described in the text.
}
\end{figure}
\subsection{\label{sec:level4-4}Comments on the Low Possibility of Rotational Symmetry Breaking in the HO}
Finally, we comment on the recently proposed symmetry-breaking scenarios. Tonegawa {\it et al}. reported, from synchrotron x-ray measurements, that the lattice symmetry is broken from tetragonal to orthorhombic only when using a sample with a very high RRR~\cite{Tonegawa2014}. Ultrasound is a highly powerful tool to detect symmetry-breaking lattice distortions even when the lattice distortions are staggered or small. For example, the tetragonal systems DyB$_2$C$_2$~\cite{Nemoto2003} and BaFe$_2$As$_2$~\cite{Yoshizawa2012, Kurihara2017} show an $\epsilon_{xy}$-type staggered/uniform lattice distortion due to antiferro-/ferroquadrupolar order, and a clear softening towards the phase transition was observed in the related symmetric ultrasonic modes. The absence of such softening in $C_{66}$ makes an $\epsilon_{xy}$-type orthorhombic lattice distortion in the HO highly unlikely. Namely, there should be no tetragonal-to-orthorhombic (fourfold to twofold) symmetry breaking in the HO. Instead, the softening is enhanced above 37 T, where the hidden order is suppressed. It should be noted that $C_{66}$ shows a relatively large jump at $T_{\rm O}$ in the temperature dependence at 30 T for $H \parallel$ [001] [as indicated by the red arrowhead in Fig. 3(b)]. This fact may suggest the freezing of the related multipolar degrees of freedom $O_{xy}$ or $H_{z}^{\rm \alpha}$ at $T_{\rm O}$. However, these features already appear above the region of the Fermi-surface reconstruction, which has been pointed out by Shishido {\it et al}. based on Hall-effect measurements~\cite{Shishido2009}. Thus, it is not clear whether the enhancement of the elastic anomaly of $C_{66}$ at $T_{\rm O}$ in a magnetic field is related to the origin of the pure HO parameter. To determine the response of $C_{66}$ in these magnetic-field regions more precisely, further investigations, such as ultrasonic measurements under a static magnetic field around 30 T, are needed.
\section{\label{sec:level5}Summary}
We performed ultrasonic measurements on URu$_2$Si$_2$ in pulsed magnetic fields to check the elastic responses of this compound and found that the $\Gamma_3$(B$_{\rm 1g}$)-type lattice instability is dominant at low temperatures and low magnetic fields. In contrast, we observed enhancements of the elastic softening of the $\Gamma_4$(B$_{\rm 2g}$) and $\Gamma_5$(E$_{\rm g}$) symmetric modes towards low temperatures at magnetic fields above 40 T. We discussed the origin of these elastic responses based upon a D$_{\rm 4h}$ point-group symmetry analysis, starting from a local multipolar state (crystalline electric field) assuming weak hybridization, and from an itinerant scheme based on the deformation-potential coupling due to the band Jahn-Teller effect of a strongly $c$-$f$ hybridized band, which becomes weaker as the field is increased. The present analysis revealed again that the itinerant band Jahn-Teller model is more applicable and that the $c$-$f$ hybridization is important for the HO. On the other hand, the results cannot be explained by the quadrupolar susceptibility based on the crystalline-electric-field schemes in the $5f^2$ configuration proposed thus far. To conclude, this work revealed important information on the elastic response across the crossover from the delocalized to the localized electronic state of the present system. However, a comprehensive interpretation of these elastic responses is still pending, and further investigations will be required.
\begin{acknowledgments}
The present research was supported by JSPS KAKENHI Grant No. JP17K05525(C), No. JP16H04006, No. JP15H05882, No. JP15H05884, No. JP15H05885,
No. JP15H05745, No. JP15KK0146, No. JP15K21732, No. JP23740250 and No. JP23102701 and the Strategic Young Researcher Overseas Visits Program for Accelerating Brain Circulation from JSPS. Experiments performed in the U.S. were supported by US DOE, Grant No. DE-FG02-04-ER46105. Experiments performed at CEA Grenoble were supported by the ERC Starting Grant (NewHeavyFermion), and ANR (SINUS). One of the authors (T.Y.) would like to thank Professor John A. Mydosh, Professor Hiroaki Kusunose, Dr. Trevor Keiber, and Dave Landry for fruitful discussions. M.J. gratefully acknowledges financial support by the Alexander von Humboldt foundation. We also acknowledge the support of the Hochfeld-Magnetlabor Dresden at HZDR, a member of the European Magnetic Field Laboratory (EMFL).
\end{acknowledgments}
\section{Introduction}
Energy storage capabilities and the efficiency of electrochemical batteries have rapidly improved in recent times, pushed by the need to robustly deal with the ever-increasing energy demands of daily life. As we advance technologically in the search for faster-charging batteries, the idea of a quantum battery has recently become a more heavily researched topic \cite{Quach_2022_SA, Gem_2022_IBM, Jar_2016_NJP, Ali_2013_PRE, Bin_2015_NJP, Hov_2013_APS, Kon_2022_APS, Gem_2022_BAT, San_2021_APS, Lev_2018_APS, Dou_2022_APS, Crescente_2020, e23050612, doi:10.1021/acs.jpcc.9b06373, PhysRevResearch.2.023113, PhysRevB.99.205437, PhysRevB.100.115142, Cruz_2022, PhysRevLett.111.240401, PhysRevLett.122.047702, PhysRevResearch.2.023095, PhysRevA.103.033715, PhysRevA.104.043706, PhysRevA.104.L030402}. The goal underpinning the exploration of a battery made of single quantum bits, each with a single excited state, is to use quantum phenomena to engineer a greatly improved energy storage device. Some limiting factors for classical electrochemical batteries are their thermodynamic energy loss due to heat and their increasing charging times for scaled-up batteries \cite{Usdin_2015_PRX, Bha_2021_PJB, Skrzypczyk2014, https://doi.org/10.48550/arxiv.1812.10139}. Investigating how a quantum battery can deal with these issues has led to the desire to understand how quantum states might be utilised to produce a battery with minimal energy loss and how the system can be built to minimise its charging time \cite{Ferraro_2018_PRL, Hu_2021_arx, Hu_2022_IOP, Cam_2017_APS, PhysRevB.98.205423, Friis2018precisionwork, PhysRevLett.125.040601, PhysRevLett.125.236402, PhysRevResearch.4.013172, PhysRevE.99.052106, Chang_2021, PhysRevE.103.042118}.
Previous theoretical work \cite{Ferraro_2018_PRL} found that quantum batteries can display a super-charging characteristic. They found that as the number of two-level systems ($N$) in the battery increased, the speed with which the battery charged increased at a rate of $N\sqrt{N}$. This result has ignited significant interest in quantum batteries and inspired us to explore quantum batteries in the context of the Dicke model \cite{Dicke_1954_APS} and the Jaynes-Cummings-Hubbard (JCH) model \cite{Jaynes_1963}.
Functionally, a quantum battery can be thought of as an idealised two-level system inside a cavity whose mode is able to excite the two-level system. For such a system the battery can be thought of as being charged (uncharged) when the two-level system is in the excited (ground) state. Figure 1 schematically describes the two systems we will consider in this work: specifically, the JCH model [Fig. 1(a)] and the Dicke model [Fig. 1(b)], under the charging protocol shown in Fig. 1(c).
In each case we consider a scenario where we have $N$ elements in the quantum battery. The system is initialised such that the two-level systems are in the ground state. At $t=0$ the coupling between the two-level systems and the photons is quenched from $0$ to $\beta$. We will first consider the charging in the JCH model in sections \ref{sec:jch_theory} \& \ref{sec:JCHresults}. For the JCH system we find that the maximum charging power, $P_{max}$, is proportional to the number of cavities in the JCH system. Additionally, we find that the maximum charging power is proportional to the square root of the number of photons initially in each cavity and inversely proportional to the photon coupling between individual cavities. The result that the maximum charging power is proportional to the number of two-level systems in the JCH model then prompts us to revisit, in sections \ref{sec:Dicke_theory} \& \ref{sec:dicke_results}, results for the Dicke model, where we construct the Dicke Hamiltonian to ensure that the energy remains bounded in the thermodynamic limit. For such a regime we regain a scaling for $P_{max}$ proportional to the number of two-level systems in the Dicke cavity.
\begin{figure}[h]
\centering
\includegraphics[width=1.05\linewidth]{intro_fig1_pdf_2}
\caption{(a) Schematic for the JCH model. $N$ identical two-level systems each occupying their own cavity, with photons coupling between cavities with strength $\kappa$. (b) Schematic for the Dicke model. Two-level systems the same as above except that they are all in the one cavity. (c) Representation of the charging sequence of the quantum battery. Initially the photon coupling to the two-level system $\beta$ is zero, then it is quenched to a value $\beta>0$, where charging begins.}
\label{fig:jch_1}
\end{figure}
\section{JCH quantum batteries}\label{sec:jch_theory}
The JCH model can be thought of as representing an atom with a single excited state in the presence of $n$ photons inside a cavity. The two-level atomic system is coupled to the photons in the cavity via $\beta$, and the photons with frequency $\omega_c$ are coupled between the $N$ identical cavities via $\kappa$. Specifically the JCH Hamiltonian \cite{Kirton_2019_AQT} is ($\hbar =1$)
\begin{align}
H_{JCH}=&\sum_{n=1}^N \omega_ca^\dagger_na_n + \sum_{n=1}^N\omega_a\sigma^+_n\sigma^-_n
\notag+\beta \sum_{n=1}^N(a_n\sigma^+_n+a^\dagger_n\sigma^-_n)\\
&-\kappa \sum_{n=1}^N(a^\dagger_{n+1}a_n+a^\dagger_na_{n+1}) \label{eq:JCH_ham}
\end{align}
where $\omega_a$ is the energy separation between the levels of the two-level system (TLS), $a^\dagger$ and $a$ are the photonic raising and lowering operators, and $\sigma^+$ and $\sigma^-$ are the spin raising and lowering operators.
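For concreteness, the Hamiltonian above can be assembled explicitly in a truncated photon basis. The following is a minimal sketch (not the production code of this work) for $N=2$ cavities with at most $n_{max}$ photons per cavity, $\hbar=1$, and purely illustrative parameter values; it verifies that the total excitation number is conserved, so the quench redistributes excitations between light and matter without creating them:

```python
import numpy as np

# Minimal sketch of the JCH Hamiltonian above for N = 2 cavities,
# with the photon number per cavity truncated at n_max (hbar = 1).
n_max, wc, wa, beta, kappa = 2, 1.0, 1.0, 0.05, 0.02  # illustrative values
d = n_max + 1
a = np.diag(np.sqrt(np.arange(1, d)), 1)   # truncated photon annihilation op
sm = np.array([[0.0, 1.0], [0.0, 0.0]])    # sigma^- in the {|g>, |e>} basis
I_ph, I_at = np.eye(d), np.eye(2)

def site(op_ph, op_at, n):
    """Embed a (photon x atom) operator on cavity n of the 2-cavity chain."""
    local = np.kron(op_ph, op_at)
    other = np.eye(2 * d)
    return np.kron(local, other) if n == 0 else np.kron(other, local)

num = a.T @ a                              # photon number operator
H = sum(wc * site(num, I_at, n)
        + wa * site(I_ph, sm.T @ sm, n)
        + beta * (site(a, sm.T, n) + site(a.T, sm, n))
        for n in (0, 1))
H = H - kappa * (site(a.T, I_at, 0) @ site(a, I_at, 1)
                 + site(a.T, I_at, 1) @ site(a, I_at, 0))

# Total excitation (polariton) number commutes with H.
Nex = sum(site(num, I_at, n) + site(I_ph, sm.T @ sm, n) for n in (0, 1))
assert np.allclose(H, H.T)                 # Hermitian (real symmetric here)
assert np.allclose(H @ Nex, Nex @ H)
```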
Diagonalising the JCH Hamiltonian allows the time-dependent Schr\"odinger equation to be solved and the dynamics analysed. Starting with the system in the lowest-energy eigenstate, the atom-photon coupling is quenched from $\beta=0$ to $\beta>0$ at time $t=0$. In doing so, the two-level systems are taken from a parameter space where they cannot charge and instantaneously quenched to one where they are able to begin charging. In order to quantify the charging rate, we define the energy of the system as the difference between the time-varying energy and that of the initial state,
\begin{equation}
E_\beta(t)=\omega_c\{ \braket{\psi^N_\beta(t)|\hat{J}_z|\psi^N_\beta(t)}-\braket{\psi^N(0)|\hat{J}_z|\psi^N(0)} \} , \label{eq:JCH_energy}
\end{equation}
where the energy operator for the atomic spin is
\begin{equation}
\hat{J}_z = \omega_a \sum_{n=1}^N \sigma^+_n\sigma^-_n .
\end{equation}
With the time-varying energy we find the maximum charging power of the battery by maximising the average power $E_\beta(t)/t$ over time,
\begin{equation} P_{max} = {\rm max}\bigg[\frac{E_{\beta}(t) }{t}\bigg], \label{eq:pmax}\end{equation}
which is reached after a charging time $\tau$. This definition of power has been used to make a direct comparison with the existing literature \cite{Ferraro_2018_PRL}. Alternatively, the time to charge the battery to its maximum energy was explored, with both methods returning results with the same scaling factors. The two ways to analyse the power of the quantum battery are to consider how long it takes to fully charge the battery, which has a strong analogy with classical batteries, or to consider the best possible charging power and how that scales. In the rest of this paper we will use the latter definition, as in equation \eqref{eq:pmax}.
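To illustrate this definition, assume for the moment that a single resonant cell stores energy in the Rabi form $E(t)=E_0\sin^2(\Omega t)$ (an assumption made only for this sketch; the parameter values are arbitrary). The maximum of the average power $E(t)/t$ then occurs before the energy itself peaks:

```python
import numpy as np

# Sketch: maximum of E(t)/t for an assumed Rabi-form energy curve.
Omega, E0 = 0.0866, 1.0                       # illustrative values
t = np.linspace(1e-4, np.pi / Omega, 200001)  # avoid t = 0 in E(t)/t
P = E0 * np.sin(Omega * t) ** 2 / t
t_opt = t[np.argmax(P)]                       # time of maximum average power
tau = np.pi / (2 * Omega)                     # time of maximum energy
assert t_opt < tau                            # P_max is reached before full charge
# The optimum satisfies tan(x) = 2x with x = Omega * t_opt (~1.1656)
assert abs(Omega * t_opt - 1.1656) < 1e-3
```

This is one concrete way to see that the two definitions of charging performance probe different times on the same energy curve.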
The limit $\kappa=0$ represents the case where individual elements of the cavities are not coupled to each other, and there is no photon transfer between them. We will use this as the baseline by which we analyse how different parameters may change the charging rate of the battery, with express interest in whether increasing the size of the battery improves the charging power. With a system of isolated ($\kappa=0$) JCH two-level systems, the behaviour reduces to that of individual Rabi two-level systems with Hamiltonian,
\begin{align}
H^{JCH}_m =
\begin{pmatrix}
m\omega_c+\Delta & \sqrt{m}\beta \\
\sqrt{m}\beta & m\omega_c \label{eq:JCH_singleTLS}
\end{pmatrix}
\end{align}
where $\Delta=\omega_a-\omega_c$ and the average number of photons per two-level system is $m$. In this regime the JCH model can be solved analytically and has its first energy maximum at time
\begin{equation}
\tau = \frac{\pi}{2\Omega},\label{eq:Rabi_TLStau}
\end{equation}
where the Rabi frequency is
\begin{equation}
\Omega = \frac{\sqrt{\Delta^2+4m\beta^2}}{2}.
\end{equation}
It can be seen from equation \eqref{eq:Rabi_TLStau} that, when the detuning between the two-level energy separation and the photon mode energy is zero ($\Delta = 0$), the charging time scales with the number of photons according to $\tau \propto 1/\sqrt{m}$, and $E$ scales proportionally to $N$. It follows that $P_{max} \propto N$ and $P_{max}\propto\sqrt{m}$. It is therefore of interest to explore how this relationship changes when the two-level systems are able to interact. Allowing the cavities in the quantum battery to interact via photon coupling ($\kappa>0$) makes it possible to analyse how $\kappa$, $N$ and $m$ affect its charging power.
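These scalings in the isolated-cavity ($\kappa=0$) limit can be verified with a short numerical sketch (illustrative parameters, $\omega_c=\omega_a=1$ so that $\Delta=0$; not the full JCH simulation used in the next section):

```python
import numpy as np

# Analytic charging time tau = pi/(2 Omega) of the isolated-cavity limit.
def charge_time(m, beta, delta=0.0):
    omega = np.sqrt(delta ** 2 + 4 * m * beta ** 2) / 2   # Rabi frequency
    return np.pi / (2 * omega)

beta = 0.05
# Quadrupling the photon number halves the charging time: tau ~ 1/sqrt(m)
assert np.isclose(charge_time(4, beta), charge_time(1, beta) / 2)

# Cross-check by evolving the 2x2 Hamiltonian directly (omega_c = 1)
m, delta = 3, 0.0
H = np.array([[m + delta, np.sqrt(m) * beta],
              [np.sqrt(m) * beta, float(m)]])
E, V = np.linalg.eigh(H)
c = V.T @ np.array([0.0, 1.0])            # initial state: atom in ground state
best_p, tau_num = -1.0, 0.0
for ti in np.linspace(0.0, 25.0, 2501):
    psi = V @ (np.exp(-1j * E * ti) * c)
    p = abs(psi[0]) ** 2                  # excited-state population
    if p > best_p:
        best_p, tau_num = p, ti
assert abs(tau_num - charge_time(3, beta)) < 0.02
```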
\section{JCH Results} \label{sec:JCHresults}
In this paper we present results in natural units where $\hbar=1$, and for a resonant regime where the dimensionless photon mode energy and the dimensionless atomic energy separation are both 1, hence $\Delta=0$. In Fig.\ref{fig:jch_N_1}, the effect of increasing battery size is shown for different values of the photon-mode coupling parameter $\kappa$. When $\kappa=0$, the JCH model has an analytical solution, and the present simulation results overlap exactly with the analytical results obtained from the Rabi matrix of equation \eqref{eq:JCH_singleTLS}. This serves as the starting point for the comparison of the power for larger JCH systems. It can be seen in this figure that the charging power of the quantum battery for any value of $\kappa$ never exceeds that of the completely uncoupled $\kappa=0$ case. The quantum battery has the largest maximal charging power when it acts as $N$ independent single-atom batteries. With the initial state taken as the lowest-energy eigenstate, increasing $\kappa$ moves the state to higher-energy eigenstates faster, which appears to decrease the charging power of the quantum battery.
\begin{figure}[h]
\centering
\includegraphics[width=0.85\linewidth]{jch_N_fig1}
\caption{Charging power of the JCH quantum battery. The power is scaled by a factor of $1/N$ and plotted as a function of $N$. Each line corresponds to a decreasing value of $\kappa$, with the uncoupled cavities having the largest values of the scaled power. Here $\beta=0.05$, so that $\kappa$ varies from below this energy scale to larger by an order of magnitude. The average number of photons per cavity is set to $m=1$. The initial state is set as the state in which each cavity has $m$ photons and each two-level system is in the ground state. }
\label{fig:jch_N_1}
\end{figure}
The maximum charging power of the JCH quantum battery is scaled by a factor of $1/N$, and for each value of $\kappa$ the data tend towards a constant. This strongly implies that $P_{max} \propto N$, the result obtained for uncoupled ($\kappa=0$) JCH two-level systems.
\begin{figure}[h]
\centering
\includegraphics[width=0.85\linewidth]{jch_m_fig1}
\includegraphics[width=0.85\linewidth]{jch_m_fig2}
\caption{Number of photons per two-level system at $t=0$ vs the scaled power of the quantum battery for (a) 2 and (b) 3 two-level systems. The maximum power is scaled by a factor of $1/\sqrt{m}$ to highlight that it tends towards a constant value as $m$ increases, implying that the power scales with $\sqrt{m}$. Larger values of $\kappa$ show decreased power but appear to require larger values of $m$ to approach a constant value.}
\label{fig:jch_m}
\end{figure}
Delving further into the JCH quantum battery, looking for other methods of improving its power scaling, Fig.\ref{fig:jch_m} highlights the effect that increasing the average number of photons per cavity has on $P_{max}$, scaled by $\frac{1}{\sqrt{m}}$. In Fig.\ref{fig:jch_m} (upper), there are 2 cavities with a varying number of photons $m$ at $t=0$, and it can be seen that the scaled power tends towards a constant as $m$ increases, strongly implying that the power scales as
\begin{equation}
P_{max}\propto\sqrt{m}.
\end{equation}
This relationship can also be seen with 4 cavities [Fig.\ref{fig:jch_m} (lower)]. The same relationship is observed for up to 6 cavities, the limiting factor being that, as the number of cavities increases, the rapidly growing Hilbert space makes the diagonalisation of the Hamiltonian computationally intensive. As a result, there are solutions for relatively large $m$ for $N=2$, while $m$ has to be limited for $N>2$. \\
\begin{figure}[h]
\centering
\includegraphics[width=0.85\linewidth]{jch_k_fig1}
\includegraphics[width=0.85\linewidth]{jch_k_fig2}
\caption{The maximum power of a JCH quantum battery, scaled by a factor of the photon coupling $\kappa$, as a function of $\kappa$. (a) The upper plot shows the behaviour of $N=2$ two-level systems, while (b) the lower shows $N=3$. The tendency of the data towards a straight line for increasing values of $\kappa$, for both $N=2$ and $N=3$ cavities, helps illuminate that the power scales inversely with $\kappa$. Here $\kappa$ covers the regime from completely uncoupled two-level systems ($\kappa=0$) to where it is equal in strength to the photon/two-level-system coupling ($\beta=\kappa=0.05$). Finally, $\kappa$ is increased to where it is the dominant energy scale of the system ($\kappa\gg\beta$). }
\label{fig:jch_k}
\end{figure}
Another factor in the power scaling of the JCH quantum battery is the strength of the photon coupling between adjacent cavities, $\kappa$. Figure \ref{fig:jch_k} demonstrates that $\kappa$ has an inverse scaling relationship with the maximum power, by showing that the power scaled by $\kappa$ becomes a constant as $\kappa$ increases, highlighting that
\begin{equation}
P_{max}\propto\frac{1}{\kappa}.
\end{equation}
The most interesting aspect of this finding is that the closest correspondence between the JCH and Dicke models occurs for high values of $\kappa$. This is because the Dicke model treats the two-level systems as indistinguishable from one another, all sharing the same photons inside the one cavity. One would expect that for values of $\kappa$ where it is the dominant effect in the system ($\beta=0.05$ and $\kappa>0.5$), the type of super scaling seen in \cite{Ferraro_2018_PRL} would start to appear.
In this paper we consider photon coupling between nearest neighbours in a line configuration. As an aside, the hopping of photons between all other cavities in the system (hyper-hopping) was also explored, both out of mathematical interest and for closer comparison to the Dicke model, where the two-level systems are indistinguishable in location. It was found that in both the nearest-neighbour hopping and hyper-hopping systems, the scaling factors for the charging power as functions of $\kappa$, $N$ and $m$ were consistent.
It is clear that there is a disparity between the present results for the JCH quantum battery and what was found for the Dicke model in \cite{Ferraro_2018_PRL}. This motivated us to revisit the Dicke model of a quantum battery.
\section{Dicke quantum batteries}\label{sec:Dicke_theory}
We begin with the generalised Dicke Hamiltonian \cite{Kirton_2019_AQT},
\begin{align}
H_{Dicke}=& \ \ \omega_ca^{\dagger}a +\omega_a \sum_{n=1}^N \sigma^z_n \nonumber\\
&+\frac{\beta}{\sqrt{N}}\sum_{n=1}^N(a\sigma^+_n+a^{\dagger}\sigma^-_n) \nonumber\\
&+\frac{\beta'}{\sqrt{N}}\sum_{n=1}^N(a\sigma^i_n+a^{\dagger}\sigma^+_n), \label{eq:dicke_ham}
\end{align}
where $\beta$ and $\beta‘$ are the coupling between the photon mode and the atomic excitation degree of freedom for the energy conserving interactions. $\beta’$ is the coupling parameter for the interactions that do not conserve excitation number, namely the $a^\dagger\omega^+_j$ term excites the atom and also produces adds a photon to the system. The Dicke model is a special case of equation \eqref{eq:dicke_ham} where $\beta’=\beta$, while the Tavis-Cummings model is the case where $\beta’=0$. It is worthy to note that there the $\frac{1}{\sqrt{N}}$ terms in \eqref{eq:dicke_ham} are included to ensure that at the thermal limit the energy remains bounded. However, in \cite{Ferraro_2018_PRL} the factor of $\sqrt{N}$ was not included in their Hamiltonian and went on to find that the maximum power scaled according to the relation
\begin{equation}
P_{max}\propto N^{3/2}.\nonumber
\end{equation}
With a power scaling increasing faster than the size of the battery increases, significant interest has been shown in the exciting potential of super charging quantum batteries. The improved charging capabilities has been attributed to entangled states and in the results of the paper we will elucidate the relationship that, the size of the Dicke and JCH quantum batteries in terms of the number of photons and atoms, and the coupling strength, has on the charging power.
The process of setting up the matrix for the Dicke Hamiltonian is well described in \cite{Ferraro_2018_PRL}, and will be briefly re-iterated here. For a particular state and a given number of two-level systems, $N$, in the cavity there with $q$ of them in the ground state, and $n$ photons. The ground state, where there are $m=n/N=1$ photons per two-level system and $q=N$ of the two-level systems which are in the ground state, can be written
\begin{equation}
\ket{\psi^N(0)} = \ket{ N,\frac{N}{2},-\frac{N}{2}}.
\end{equation}
Using this, the elements of the matrix for the Dicke Hamiltonian can found from
\begin{align}
&\braket{n',\frac{N}{2},\frac{N}{2}-q' | H^N(t)| n,\frac{N}{2},\frac{N}{2} -q } =\nonumber \\
&\omega_c\big[(n+\frac{N}{2}-q)\delta_{n',n}\delta_{q',q}
+ \frac{\beta}{\sqrt{N}} \big\{ f^{(1)}_{n,\frac{N}{2},\frac{N}{2}-q} \delta_{n',n+1}\delta_{q',q+1} \nonumber\\
& +f^{(2)}_{n,\frac{N}{2},\frac{N}{2}-q} \delta_{n',n+1}\delta_{q',q-1}
+f^{(3)}_{n,\frac{N}{2},\frac{N}{2}-q} \delta_{n',n-1}\delta_{q',q+1}\nonumber\\
& +f^{(4)}_{n,\frac{N}{2},\frac{N}{2}-q} \delta_{n',n-1}\delta_{q',q-1} \big\} \big] \label{eq:dicke_mat}
\end{align}
with the primed perms denoting the final quantities and
\begin{align}
f^{(1)}_{k,j,m} = &\sqrt{ (k+1)[j(j+1)-m(m-1)]},\nonumber \\
f^{(2)}_{k,j,m}=&\sqrt{(k+1)[j(j+1)-m(m+1)]},\nonumber\\
f^{(3)}_{k,j,m}=&\sqrt{k[j(j+1)-m(m-1)]},\nonumber\\
f^{(4)}_{k,j,m}=&\sqrt{k[j(j+1)-m(m+1)]}.\nonumber \label{eq:dick_f_terms}
\end{align}
From the obtained Hamiltonian the time dependent energy function and the power of the Dicke quantum battery can be determined in the same way that is was in Section \ref{sec:jch_theory}, using equations \eqref{eq:JCH_energy} \& \eqref{eq:pmax}.
\section{Results: Dicke model}\label{sec:dicke_results}
\begin{figure}[h]
\centering
\includegraphics[width=0.85\linewidth]{dicke_N_fig1}
\caption{ The power scaling of a Dicke quantum battery for an increasing battery size. The number of two-level systems ranges from 2 to 20, while the photon-atom coupling parameter $\beta$ varies from low coupling to where it dominates the dynamics of the system ($0$ to $2$). Asymptotic behaviour is seen for the scaled power towards a constant value for increasing $N$. }
\label{fig:dicke_N_1}
\end{figure}
Initially importance was placed on the model being able to replicate the previous results of \cite{Ferraro_2018_PRL}, which was possible by using the Hamiltonian referenced in their paper, without the factor of $1/\sqrt{N}$. However, when using the Dicke Hamiltonian, of equation \eqref{eq:dicke_ham}, with the $1/\sqrt{N}$ term in the photon to two-level system coupling terms, the super-charging is not present. This strong agreement between the JCH and the Dicke models, confirms that while the charging power of quantum batteries does increase as the size of the battery increases, it does so by a factor of $N$, i.e.
\begin{equation}
P_{max} \propto N, \nonumber
\end{equation}
as demonstrated in Fig.\ref{fig:dicke_N_1}. Here, $\beta$ takes on the same values ($\beta = 0,0.05,0.5,2$). While the starting number of photons in the system is taken to be $N$, the Dicke model allows for behaviour that does not conserve the particle number. As a result, to compute the Dicke model limitations on the maximum number of considered photons needs to be placed. In this work we considered systems of a range of photons from $1$ to $5N$ for each data point. $5N$ was taken to be the maximum because good convergence was already found for $4N$.
\begin{figure}[h]
\centering
\includegraphics[width=0.85\linewidth]{dicke_m_fig1}
\caption{The maximum power as a function of average number of photons $m$, scaled by $\frac{1}{\sqrt{m}}$, for $N=10$ two-level systems. The legend displays the corresponding values of $\beta$ for each plot, with lines drawn between data points to help visualise the data. }
\label{fig:dicke_m}
\end{figure}
Exploring the effect that the average number of photons per two-level system has on $P_{max}$, Fig.\ref{fig:dicke_m} displays that $P_{max}/\sqrt{m}$ converges to a constant value as $m$ increased. This result is the same result which was found in section \ref{sec:JCHresults}, for the JCH battery. The strong agreement between the Dicke and JCH models for a quantum battery is interesting, because of the different ways each model allows the two-level systems to interact with the photon fields. In the Dicke model all of the two-level systems are able to interact with all of the photons at all times, because they exist within the same cavity, while in the JCH model the two-level systems can only interact with the photons in there cavity. This result implies that for large enough $m$ the power returns scale the same way whether you localise the two-level systems or not, without ever considering the strength of the entanglement of states.
As $m$ gets very large we enter into a regime currently access able by experiments, eg. \cite{Quach_2022_SA}, and when looking at a regime of $m=200$ we found the same scaling relation, implying that the charging power of the Dicke quantum battery is only limited by the number of photons input into the cavity.
\section{conclusions}
In this paper we have considered the charging quench dynamics of both the JCH and Dicke models. For the JCH model we have found that $P_{max}$ scales linearly with $N$, i.e. there is no quantum advantage in such a system. More generally we also find that as the coupling between the cavities, in the JCH model, is increased $P_{max}$ is reduced. However, there is an increase in $P_{max}$ when the number of photons, $m$, in each cavity, at $t=0$, is increased. In this case $P_{max}\propto \sqrt{m}$.
This investigation into the JCH system lead us to revisit the charging quench dynamics of the Dicke model. Starting from a form of the Dicke Hamiltonian which ensures a consistent thermodynamic limit we fiund again that $P_{max}$ scales linearly with the number of the two-level systems in the Dicke cavity. Additionally, we recovered a scaling of $P_{max}\propto \sqrt{m}$, where $m$ is the number of photons per two-level system, at $t=0$, in the Dicke cavity.
\begin{acknowledgments}
Andrew R. Hogan is supported by an Australian Government Research Training Program Scholarship and by the University of Melbourne.
\end{acknowledgments}
\section{Introduction}
Energy storage capabilities and efficiency of electrochemical batteries have rapidly improved in recent times, pushed by the need to robustly deal with the ever increasing energy demands of daily life. As we advance technologically in the search for faster charging batteries, the idea of a quantum battery has recently become a heavily researched topic \cite{Quach_2022_SA, Gem_2022_IBM, Jar_2016_NJP, Ali_2013_PRE, Bin_2015_NJP, Hov_2013_APS, Kon_2022_APS, Gem_2022_BAT, San_2021_APS, Lev_2018_APS, Dou_2022_APS, Crescente_2020, e23050612, doi:10.1021/acs.jpcc.9b06373, PhysRevResearch.2.023113, PhysRevB.99.205437, PhysRevB.100.115142, Cruz_2022, PhysRevLett.111.240401, PhysRevLett.122.047702, PhysRevResearch.2.023095, PhysRevA.103.033715, PhysRevA.104.043706, PhysRevA.104.L030402}. The goal underpinning the exploration of a battery made of single quantum bits, each with a single excited state, is to use quantum phenomena to engineer a greatly improved energy storage device. Some limiting factors for classical electrochemical batteries are their thermodynamic energy loss due to heat and their increasing charging times for scaled-up batteries \cite{Usdin_2015_PRX, Bha_2021_PJB, Skrzypczyk2014, https://doi.org/10.48550/arxiv.1812.10139}. Investigating ways that a quantum battery can deal with these issues has led to the desire to understand how quantum states might be utilised to produce a battery with minimal energy loss and how the system can be built to minimise its charging time \cite{Ferraro_2018_PRL, Hu_2021_arx, Hu_2022_IOP, Cam_2017_APS, PhysRevB.98.205423, Friis2018precisionwork, PhysRevLett.125.040601, PhysRevLett.125.236402, PhysRevResearch.4.013172, PhysRevE.99.052106, Chang_2021, PhysRevE.103.042118}.
Previous theoretical work \cite{Ferraro_2018_PRL} found that quantum batteries can display a super-charging characteristic. They found that as the number of two-level systems ($N$) in the battery increased, the speed with which the battery charged increased at a rate of $N\sqrt{N}$. This result has ignited significant interest in quantum batteries and inspired us to explore quantum batteries in the context of the Dicke model \cite{Dicke_1954_APS} and the Jaynes-Cummings-Hubbard (JCH) model \cite{Jaynes_1963}.
Functionally, a quantum battery can be thought of as an idealised two-level system inside a cavity whose mode is able to excite the two-level system. Such a battery can be thought of as charged (uncharged) when the two-level system is in the excited (ground) state. Figure \ref{fig:jch_1} schematically describes the two systems we will consider in this work: the JCH model, Fig.~\ref{fig:jch_1}(a), and the Dicke model, Fig.~\ref{fig:jch_1}(b), under the charging protocol shown in Fig.~\ref{fig:jch_1}(c).
In each case we consider a scenario where we have $N$ elements in the quantum battery. The system is initialised such that the two-level systems are in the ground state. At $t=0$ the coupling between the two-level systems and the photons is quenched from $0$ to $\beta$. We will first consider the charging in the JCH model in sections \ref{sec:jch_theory} \& \ref{sec:JCHresults}. For the JCH system we find that the maximum charging power, $P_{max}$, is proportional to the number of cavities in the JCH system. Additionally, we find that the maximum charging power is proportional to the square root of the number of photons initially in each cavity, and inversely proportional to the photon coupling between individual cavities. The result that the maximum charging power is proportional to the number of two-level systems in the JCH model then prompts us to revisit, in sections \ref{sec:Dicke_theory} \& \ref{sec:dicke_results}, results for the Dicke model, where we construct the Dicke Hamiltonian to ensure that the thermodynamic limit is bounded. For such a regime we regain a scaling for $P_{max}$ proportional to the number of two-level systems in the Dicke cavity.
\begin{figure}[h]
\centering
\includegraphics[width=1.05\linewidth]{intro_fig1_pdf_2}
\caption{(a) Schematic of the JCH model: $N$ identical two-level systems, each occupying its own cavity, with photons coupling between cavities with strength $\kappa$. (b) Schematic of the Dicke model: the two-level systems are the same as above, except that they are all in one cavity. (c) Representation of the charging sequence of the quantum battery. Initially the photon coupling to the two-level system, $\beta$, is zero; it is then quenched to a value $\beta>0$, where charging begins.}
\label{fig:jch_1}
\end{figure}
\section{JCH quantum batteries}\label{sec:jch_theory}
The JCH model can be thought of as representing an atom with a single excited state in the presence of $n$ photons inside a cavity. The two-level atomic system is coupled to the photons in the cavity via $\beta$, and the photons with frequency $\omega_c$ are coupled between the $N$ identical cavities via $\kappa$. Specifically the JCH Hamiltonian \cite{Kirton_2019_AQT} is ($\hbar =1$)
\begin{align}
H_{JCH}=&\sum_{n=1}^N \omega_ca^\dagger_na_n + \sum_{n=1}^N\omega_a\sigma^+_n\sigma^-_n
\notag+\beta \sum_{n=1}^N(a_n\sigma^+_n+a^\dagger_n\sigma^-_n)\\
&-\kappa \sum_{n=1}^N(a^\dagger_{n+1}a_n+a^\dagger_na_{n+1}) \label{eq:JCH_ham}
\end{align}
where $\omega_a$ is the energy separation between the levels of the two-level system, $a^\dagger$ and $a$ are the photonic raising and lowering operators, and $\sigma^+$ and $\sigma^-$ are the spin raising and lowering operators.
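As an illustration of how such a Hamiltonian can be assembled numerically, the following Python sketch (not from the original work; the function name \texttt{jch\_hamiltonian} and its default parameters are illustrative) builds equation \eqref{eq:JCH_ham} in a Fock basis truncated at a fixed photon number per cavity, assuming $\omega_c=\omega_a$ and open boundary conditions for the hopping sum, consistent with the line configuration considered later:

```python
import numpy as np
from functools import reduce

def jch_hamiltonian(N=2, n_ph=2, beta=0.05, kappa=0.0, omega=1.0):
    """Assemble the JCH Hamiltonian with omega_c = omega_a = omega in a
    Fock basis truncated at n_ph photons per cavity (open chain)."""
    a = np.diag(np.sqrt(np.arange(1, n_ph + 1)), 1)  # photon lowering op
    sm = np.array([[0.0, 1.0], [0.0, 0.0]])          # sigma^- in {|g>,|e>}
    site_id = np.eye((n_ph + 1) * 2)

    def site_op(op_ph, op_at, n):
        # embed (op_ph x op_at) acting on cavity n into the full chain
        ops = [site_id] * N
        ops[n] = np.kron(op_ph, op_at)
        return reduce(np.kron, ops)

    A = [site_op(a, np.eye(2), n) for n in range(N)]
    Sm = [site_op(np.eye(n_ph + 1), sm, n) for n in range(N)]
    H = np.zeros((((n_ph + 1) * 2) ** N,) * 2)
    for n in range(N):
        H += omega * (A[n].T @ A[n])        # photon energy
        H += omega * (Sm[n].T @ Sm[n])      # atomic excitation energy
        H += beta * (A[n] @ Sm[n].T + A[n].T @ Sm[n])  # JC coupling
    for n in range(N - 1):                  # nearest-neighbour hopping
        H -= kappa * (A[n + 1].T @ A[n] + A[n].T @ A[n + 1])
    return H
```

Exact diagonalisation of this matrix is what limits the accessible system sizes, since the dimension grows as $[2(n_{ph}+1)]^N$.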
Diagonalising the JCH Hamiltonian allows the time-dependent Schr\"odinger equation to be solved and the dynamics analysed. Starting with the system in the lowest energy eigenstate, the atom-photon coupling is quenched from $\beta=0$ to $\beta>0$ at time $t=0$. In doing so, the two-level systems are taken from a parameter space where they cannot charge, and instantaneously quenched to one where they are able to begin charging. In order to quantify the charging rate, we define the energy of the system as the difference between the time-varying energy and that of the initial state,
\begin{equation}
E_\beta(t)=\omega_c\{ \braket{\psi^N_\beta(t)|\hat{J}_z|\psi^N_\beta(t)}-\braket{\psi^N(0)|\hat{J}_z|\psi^N(0)} \} , \label{eq:JCH_energy}
\end{equation}
where the energy operator for the atomic spin is
\begin{equation}
\hat{J}_z = \omega_a \sum_{n=1}^N \sigma^+_n\sigma^-_n .
\end{equation}
With the time-varying energy we find the maximum charging power of the battery as the maximum of the energy delivered over the elapsed time,
\begin{equation} P_{max} = {\rm max}\bigg[\frac{E_{\beta}(t) }{t}\bigg], \label{eq:pmax}\end{equation}
which has a charging time $\tau$ to reach $P_{max}$. This definition of power has been used to make a direct comparison with existing literature \cite{Ferraro_2018_PRL}. Alternatively, the time to charge the battery to its maximum energy was explored, with both methods returning results with the same scaling factors. The two ways to analyse the power of the quantum battery are to consider how long it takes to fully charge the battery, which has a strong analogy with classical batteries, or to consider the best possible charging power and how that scales. In the rest of this paper we will use the latter definition, as in equation \eqref{eq:pmax}.
The limit $\kappa=0$ represents the case where individual elements of the cavities are not coupled to each other, and there is no photon transfer between them. We will use this as the baseline by which we analyse how different parameters may change the charging rate of the battery, with express interest in whether increasing the size of the battery improves the charging power. With a system of isolated ($\kappa=0$) JCH two-level systems, the behaviour reduces to that of individual Rabi two-level systems with Hamiltonian,
\begin{align}
H^{JCH}_m =
\begin{pmatrix}
m+\Delta & \sqrt{m}\beta \\
\sqrt{m}\beta & m\omega \label{eq:JCH_singleTLS}
\end{pmatrix}
\end{align}
where $\Delta=\omega_a-\omega_c$ and the average number of photons per two-level system is $m$. In this regime the JCH model can be solved analytically and has its first maximum energy at time
\begin{equation}
\tau = \frac{\pi}{2\Omega},\label{eq:Rabi_TLStau}
\end{equation}
where the Rabi frequency is
\begin{equation}
\Omega = \frac{\sqrt{\Delta^2+4m\beta^2}}{2}.
\end{equation}
It can be seen from equation \eqref{eq:Rabi_TLStau} that when the detuning between the two-level energy separation and the photon mode energy is zero ($\Delta = 0$), the charging time will scale with the number of photons according to $\tau \propto 1/\sqrt{m}$, while $E$ scales proportionally to $N$. It follows that $P_{max} \propto N$ and $P_{max}\propto\sqrt{m}$. It is therefore of interest to explore how this relationship changes when the two-level systems are able to interact. Allowing the cavities in the quantum battery to interact via photon coupling ($\kappa>0$) makes it possible to analyse how $\kappa$, $N$ and $m$ affect its charging power.
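This analytic limit is straightforward to verify numerically. The sketch below is illustrative (\texttt{rabi\_pmax} is a hypothetical helper, not the authors' code); it takes $\omega_a=1$ and drops the constant diagonal offset of equation \eqref{eq:JCH_singleTLS}, which does not affect the population transfer, then evolves a single cavity through the quench and extracts $P_{max}$ together with the time of the first energy maximum:

```python
import numpy as np

def rabi_pmax(m, beta=0.05):
    """P_max and first-energy-maximum time for one uncoupled (kappa = 0)
    cavity, using the on-resonance (Delta = 0) 2x2 Rabi Hamiltonian with
    the constant diagonal offset dropped; omega_a = 1."""
    g = np.sqrt(m) * beta                        # effective Rabi coupling
    H = np.array([[0.0, g], [g, 0.0]])
    evals, evecs = np.linalg.eigh(H)
    c = evecs.T @ np.array([1.0, 0.0])           # expand ground state
    ts = np.linspace(1e-6, np.pi / g, 4000)      # half a Rabi period
    psi_t = evecs @ (np.exp(-1j * np.outer(evals, ts)) * c[:, None])
    E = np.abs(psi_t[1, :]) ** 2                 # excited-state population
    return (E / ts).max(), ts[np.argmax(E)]
```

For $\beta=0.05$ and $m=1$ this returns a first-maximum time of $\approx 31.4$, matching $\tau=\pi/(2\Omega)$, and quadrupling $m$ doubles the returned $P_{max}$, consistent with $P_{max}\propto\sqrt{m}$.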
\section{JCH Results} \label{sec:JCHresults}
In this paper we present results in natural units where $\hbar=1$, in a resonant regime where the dimensionless photon mode energy and the dimensionless atomic energy separation are both 1, and hence $\Delta=0$. In Fig.~\ref{fig:jch_N_1}, the effect of increasing battery size is shown for different values of the photon mode coupling parameter $\kappa$. When $\kappa=0$, the JCH model has an analytical solution, and the present simulation results overlap exactly with the analytical results obtained from the Rabi matrix of equation \eqref{eq:JCH_singleTLS}. This serves as the starting point for the comparison of the power for larger JCH systems. It can be seen in this figure that the charging power of the quantum battery for any value of $\kappa$ never exceeds that of the completely uncoupled $\kappa=0$ case: the quantum battery has the largest maximal charging power when it acts as $N$ independent single-atom batteries. With the initial state taken as the lowest energy eigenstate, increasing $\kappa$ moves the state to higher energy eigenstates faster, which appears to decrease the charging power of the quantum battery.
\begin{figure}[h]
\centering
\includegraphics[width=0.85\linewidth]{jch_N_fig1}
\caption{Charging power of the JCH quantum battery. Power is scaled by a factor of $1/N$ and plotted as a function of $N$. Each line corresponds to a decreasing value of $\kappa$, with the uncoupled cavities having the largest values for the scaled power. Here $\beta=0.05$, so that $\kappa$ varies from below this energy scale to larger by an order of magnitude. The average number of photons per cavity is set to $m=1$. The initial state is set as the state in which each cavity has $m$ photons and is in the ground state. }
\label{fig:jch_N_1}
\end{figure}
The maximum charging power of the JCH quantum battery was scaled by a factor of $1/N$ and for each value of $\kappa$, the data tends towards a constant. This strongly implies that $P_{max} \propto N$, the result obtained for uncoupled ($\kappa=0$) JCH two-level systems.
\begin{figure}[h]
\centering
\includegraphics[width=0.85\linewidth]{jch_m_fig1}
\includegraphics[width=0.85\linewidth]{jch_m_fig2}
\caption{Number of photons per two-level system at $t=0$ vs the scaled power of the quantum battery for 2 (a) and 3 (b) two-level systems. The maximum power is scaled by a factor of $1/\sqrt{m}$ to highlight that it tends towards a constant value as $m$ increases, implying that the power scales with $\sqrt{m}$. Larger values of $\kappa$ show decreased power but appear to require larger values of $m$ for the scaled power to approach a constant value.}
\label{fig:jch_m}
\end{figure}
Delving into the JCH quantum battery further, looking for other methods of improving its power scaling, Fig.~\ref{fig:jch_m} highlights the effect that increasing the average number of photons per cavity has on $P_{max}$, scaled by $\frac{1}{\sqrt{m}}$. In Fig.~\ref{fig:jch_m} (upper), there are 2 cavities with a varying number of photons $m$ at $t=0$, and it can be seen that the scaled power tends towards a constant as $m$ increases, strongly implying that the power scales as
\begin{equation}
P_{max}\propto\sqrt{m}.
\end{equation}
This relationship can also be seen in the lower panel of Fig.~\ref{fig:jch_m}. The same relationship is observed for up to 6 cavities, with the limiting factor being that as the number of cavities increases, the size of the Hilbert space quickly makes the diagonalisation of the Hamiltonian computationally intensive. As a result, for $N=2$ there are solutions for relatively large $m$, while $m$ has to be limited for $N>2$. \\
\begin{figure}[h]
\centering
\includegraphics[width=0.85\linewidth]{jch_k_fig1}
\includegraphics[width=0.85\linewidth]{jch_k_fig2}
\caption{The maximum power of a JCH quantum battery, scaled by a factor of the photon coupling $\kappa$, as a function of $\kappa$. (a) The upper plot shows the behaviour of $N=2$ two-level systems, while (b) the lower shows $N=3$. The tendency of this data towards a straight line for increasing values of $\kappa$, for both $N=2$ and $N=3$ cavities, helps illuminate that the power scales inversely with $\kappa$. $\kappa$ covers the regime from where the two-level systems are completely uncoupled ($\kappa=0$) to where it is equal in strength to the photon/two-level system coupling ($\beta=\kappa=0.05$). Finally, $\kappa$ is increased to where it is the dominant energy scale of the system ($\kappa\gg\beta$). }
\label{fig:jch_k}
\end{figure}
Another factor in the power scaling of the JCH quantum battery is the strength of the photon coupling between adjacent cavities, $\kappa$. Figure \ref{fig:jch_k} demonstrates that $\kappa$ has an inverse scaling relationship with the maximum power, by showing that the power, scaled by $\kappa$, becomes constant as $\kappa$ increases, highlighting that
\begin{equation}
P_{max}\propto\frac{1}{\kappa}.
\end{equation}
Most interesting about this finding is that the closest relationship between the JCH and Dicke models is found for high values of $\kappa$. This is because the Dicke model treats each two-level system as indistinguishable from one another, all sharing the same photons inside the one cavity. One would therefore expect that for values of $\kappa$ where it is the dominant effect in the system ($\beta=0.05$ and $\kappa>0.5$), the type of super scaling seen in \cite{Ferraro_2018_PRL} would start to show an effect.
In this paper we consider photon coupling between nearest neighbours in a line configuration. As an aside, the hopping of photons between any other cavity (hyper-hopping) in the system was also explored, both for mathematical interest and for closer comparison to the Dicke model, where each of the two-level systems is indistinguishable in location. It was found that in both nearest-neighbour hopping and hyper-hopping systems, the scaling factors for the charging power as a function of $\kappa$, $N$ and $m$ were consistent.
It is clear that there is a disparity between the present results for the JCH quantum battery and what was found for the Dicke model in \cite{Ferraro_2018_PRL}. This motivated us to revisit the Dicke model of a quantum battery.
\section{Dicke quantum batteries}\label{sec:Dicke_theory}
We begin with the generalised Dicke Hamiltonian \cite{Kirton_2019_AQT},
\begin{align}
H_{Dicke}=& \ \ \omega_ca^{\dagger}a +\omega_a \sum_{n=1}^N \sigma^z_n \nonumber\\
&+\frac{\beta}{\sqrt{N}}\sum_{n=1}^N(a\sigma^+_n+a^{\dagger}\sigma^-_n) \nonumber\\
&+\frac{\beta'}{\sqrt{N}}\sum_{n=1}^N(a\sigma^-_n+a^{\dagger}\sigma^+_n), \label{eq:dicke_ham}
\end{align}
where $\beta$ is the coupling between the photon mode and the atomic excitation degree of freedom for the energy-conserving interactions, and $\beta'$ is the coupling parameter for the interactions that do not conserve excitation number; namely, the $a^\dagger\sigma^+_n$ term excites the atom and also adds a photon to the system. The Dicke model is a special case of equation \eqref{eq:dicke_ham} where $\beta'=\beta$, while the Tavis-Cummings model is the case where $\beta'=0$. It is worth noting that the $\frac{1}{\sqrt{N}}$ factors in \eqref{eq:dicke_ham} are included to ensure that the energy remains bounded in the thermodynamic limit. However, in \cite{Ferraro_2018_PRL} the factor of $1/\sqrt{N}$ was not included in the Hamiltonian, and the authors went on to find that the maximum power scaled according to the relation
\begin{equation}
P_{max}\propto N^{3/2}.\nonumber
\end{equation}
With a power scaling that increases faster than the size of the battery, significant interest has been shown in the exciting potential of super-charging quantum batteries. The improved charging capabilities have been attributed to entangled states, and in the results of this paper we elucidate how the size of the Dicke and JCH quantum batteries, in terms of the number of photons and atoms, together with the coupling strength, affects the charging power.
The process of setting up the matrix for the Dicke Hamiltonian is well described in \cite{Ferraro_2018_PRL}, and will be briefly re-iterated here. Consider a state with a given number of two-level systems, $N$, in the cavity, $q$ of which are in the ground state, and $n$ photons. The ground state, where there are $m=n/N=1$ photons per two-level system and $q=N$ of the two-level systems are in the ground state, can be written
\begin{equation}
\ket{\psi^N(0)} = \ket{ N,\frac{N}{2},-\frac{N}{2}}.
\end{equation}
Using this, the elements of the matrix for the Dicke Hamiltonian can be found from
\begin{align}
&\braket{n',\frac{N}{2},\frac{N}{2}-q' | H^N(t)| n,\frac{N}{2},\frac{N}{2} -q } =\nonumber \\
&\omega_c\big[(n+\frac{N}{2}-q)\delta_{n',n}\delta_{q',q}
+ \frac{\beta}{\sqrt{N}} \big\{ f^{(1)}_{n,\frac{N}{2},\frac{N}{2}-q} \delta_{n',n+1}\delta_{q',q+1} \nonumber\\
& +f^{(2)}_{n,\frac{N}{2},\frac{N}{2}-q} \delta_{n',n+1}\delta_{q',q-1}
+f^{(3)}_{n,\frac{N}{2},\frac{N}{2}-q} \delta_{n',n-1}\delta_{q',q+1}\nonumber\\
& +f^{(4)}_{n,\frac{N}{2},\frac{N}{2}-q} \delta_{n',n-1}\delta_{q',q-1} \big\} \big] \label{eq:dicke_mat}
\end{align}
with the primed terms denoting the final quantities and
\begin{align}
f^{(1)}_{k,j,m} = &\sqrt{ (k+1)[j(j+1)-m(m-1)]},\nonumber \\
f^{(2)}_{k,j,m}=&\sqrt{(k+1)[j(j+1)-m(m+1)]},\nonumber\\
f^{(3)}_{k,j,m}=&\sqrt{k[j(j+1)-m(m-1)]},\nonumber\\
f^{(4)}_{k,j,m}=&\sqrt{k[j(j+1)-m(m+1)]}.\nonumber \label{eq:dick_f_terms}
\end{align}
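A compact sketch of this construction (illustrative Python, not the authors' code; the basis ordering, the photon cutoff, and the name \texttt{dicke\_hamiltonian} are implementation choices, with $\omega_c$ set to 1 as in the resonant regime used here) fills the matrix of equation \eqref{eq:dicke_mat} using the coefficients above:

```python
import numpy as np

def dicke_hamiltonian(N, n_max, beta, omega_c=1.0):
    """Build the Dicke matrix in the |n, N/2, N/2 - q> basis, where n is
    the photon number (truncated at n_max, e.g. 5N as in the text) and q
    is the number of two-level systems in the ground state."""
    j = N / 2.0

    def f(kind, k, m):
        # the four f-coefficients defined above
        pre = (k + 1) if kind in (1, 2) else k
        base = j * (j + 1) - (m * (m - 1) if kind in (1, 3) else m * (m + 1))
        return np.sqrt(pre * base)

    states = [(n, q) for n in range(n_max + 1) for q in range(N + 1)]
    idx = {s: i for i, s in enumerate(states)}
    H = np.zeros((len(states), len(states)))
    hops = {1: (1, 1), 2: (1, -1), 3: (-1, 1), 4: (-1, -1)}  # (dn, dq)
    for (n, q), i in idx.items():
        m = j - q
        H[i, i] = omega_c * (n + j - q)      # diagonal energy
        for kind, (dn, dq) in hops.items():  # off-diagonal exchange terms
            s2 = (n + dn, q + dq)
            if s2 in idx:                    # stay inside the truncation
                H[idx[s2], i] += (beta / np.sqrt(N)) * f(kind, n, m)
    return H
```

Hermiticity of the resulting matrix follows from the pairing of $f^{(1)}$ with $f^{(4)}$ and of $f^{(2)}$ with $f^{(3)}$ across each transition.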
From the obtained Hamiltonian, the time-dependent energy function and the power of the Dicke quantum battery can be determined in the same way as in Section \ref{sec:jch_theory}, using equations \eqref{eq:JCH_energy} \& \eqref{eq:pmax}.
\section{Results: Dicke model}\label{sec:dicke_results}
\begin{figure}[h]
\centering
\includegraphics[width=0.85\linewidth]{dicke_N_fig1}
\caption{ The power scaling of a Dicke quantum battery for increasing battery size. The number of two-level systems ranges from 2 to 20, while the photon-atom coupling parameter $\beta$ varies from low coupling to where it dominates the dynamics of the system ($0$ to $2$). The scaled power shows asymptotic behaviour towards a constant value for increasing $N$. }
\label{fig:dicke_N_1}
\end{figure}
Initially, importance was placed on the model being able to replicate the previous results of \cite{Ferraro_2018_PRL}, which was possible by using the Hamiltonian referenced in their paper, without the factor of $1/\sqrt{N}$. However, when using the Dicke Hamiltonian of equation \eqref{eq:dicke_ham}, with the $1/\sqrt{N}$ term in the photon to two-level system coupling terms, the super-charging is not present. This strong agreement between the JCH and Dicke models confirms that while the charging power of quantum batteries does increase as the size of the battery increases, it does so by a factor of $N$, i.e.
\begin{equation}
P_{max} \propto N, \nonumber
\end{equation}
as demonstrated in Fig.~\ref{fig:dicke_N_1}. Here, $\beta$ takes on the values $\beta = 0, 0.05, 0.5, 2$. While the starting number of photons in the system is taken to be $N$, the Dicke model allows for behaviour that does not conserve the particle number. As a result, to compute the Dicke model a limit needs to be placed on the maximum number of photons considered. In this work we considered photon numbers ranging from $1$ to $5N$ for each data point; $5N$ was taken as the maximum because good convergence was already found at $4N$.
\begin{figure}[h]
\centering
\includegraphics[width=0.85\linewidth]{dicke_m_fig1}
\caption{The maximum power as a function of average number of photons $m$, scaled by $\frac{1}{\sqrt{m}}$, for $N=10$ two-level systems. The legend displays the corresponding values of $\beta$ for each plot, with lines drawn between data points to help visualise the data. }
\label{fig:dicke_m}
\end{figure}
Exploring the effect that the average number of photons per two-level system has on $P_{max}$, Fig.~\ref{fig:dicke_m} displays that $P_{max}/\sqrt{m}$ converges to a constant value as $m$ increases. This is the same result found in section \ref{sec:JCHresults} for the JCH battery. The strong agreement between the Dicke and JCH models for a quantum battery is interesting because of the different ways each model allows the two-level systems to interact with the photon fields. In the Dicke model all of the two-level systems are able to interact with all of the photons at all times, because they exist within the same cavity, while in the JCH model the two-level systems can only interact with the photons in their cavity. This result implies that for large enough $m$ the power scales the same way whether or not the two-level systems are localised, without ever considering the strength of the entanglement of states.
As $m$ gets very large we enter a regime currently accessible by experiments, e.g. \cite{Quach_2022_SA}, and when looking at a regime of $m=200$ we found the same scaling relation, implying that the charging power of the Dicke quantum battery is limited only by the number of photons input into the cavity.
\section{Conclusions}
In this paper we have considered the charging quench dynamics of both the JCH and Dicke models. For the JCH model we have found that $P_{max}$ scales linearly with $N$; i.e., there is no quantum advantage in such a system. More generally, we also find that as the coupling between the cavities in the JCH model is increased, $P_{max}$ is reduced. However, there is an increase in $P_{max}$ when the number of photons, $m$, in each cavity at $t=0$ is increased; in this case $P_{max}\propto \sqrt{m}$.
This investigation into the JCH system led us to revisit the charging quench dynamics of the Dicke model. Starting from a form of the Dicke Hamiltonian which ensures a consistent thermodynamic limit, we find again that $P_{max}$ scales linearly with the number of two-level systems in the Dicke cavity. Additionally, we recovered a scaling of $P_{max}\propto \sqrt{m}$, where $m$ is the number of photons per two-level system at $t=0$ in the Dicke cavity.
\begin{acknowledgments}
Andrew R. Hogan is supported by an Australian Government Research Training Program Scholarship and by the University of Melbourne.
\end{acknowledgments}
\section{Introduction}
Chromium nitride is a material which combines practical and industrial relevance as a component in protective coatings \cite{Vetter1995, Reiter2005} with fascinating fundamental physical phenomena. The latter include a phase transition with a magnetically driven lattice distortion\cite{Fillipetti-PRL-85-5166} between an antiferromagnetic orthorhombic low temperature phase and a paramagnetic cubic high temperature phase\cite{Corliss1960-PhysRev-117-929}. The importance of strong electron correlations as well as the necessity to model the paramagnetic state using finite disordered local moments have been recently shown\cite{Herwadkar2009-PRB-79-035125,Alling2010-PRB-82-184430}. Important issues, such as the impact of the phase transition on the compressibility of the material\cite{Rivadulla2009-Nat.Mater-8-947, Alling2010-NatMater-9-283} as well as on the electrical conductivity\cite{ Bhobe2010-PRL-104-236404, Zhang2011_hopping} are still subjects of an intense discussion.
The core problem of obtaining a complete understanding of these phenomena and properties on the most fundamental level of physics arises from the difficulty of simulating the paramagnetic high-temperature phase from first principles. In this work we first discuss the methodologies that have been used in theoretical treatments of paramagnetism. Then we present a practical scheme for calculating thermodynamic properties, in particular the equation of state, of a paramagnetic material at elevated temperature merging \emph{ab initio} molecular dynamics (MD) and the disordered local moments model (DLM). This DLM-MD technique is then applied to investigate the influence of temperature and pressure on the compressibility of CrN. We show that the change of the bulk modulus of CrN upon the pressure induced phase transition is minimal, strengthening conclusions from earlier static calculations~\cite{Alling2010-NatMater-9-283, Alling2010-PRB-82-184430} which questioned its reported collapse~\cite{Rivadulla2009-Nat.Mater-8-947}.
\section{Modeling the paramagnetic state}
\subsection{Background}
A theory that describes the finite-temperature aspects of itinerant electron magnets has to take into account the existence of local magnetic moments present above the magnetic transition temperature, the Curie temperature $T_C$ or the N\'eel temperature $T_N$ for a ferromagnetic or an antiferromagnetic material, respectively~\cite{Moriya1985}.
At the same time, the majority of methods used for \emph{ab initio} electronic structure calculations nowadays are based on density functional theory (DFT) in the local (local spin density, LSDA) or semi-local (generalized gradient, GGA) approximations. While these are known to give an accurate description of the ground state properties of magnetic systems~\cite{James1999},
their straightforward generalization to finite temperatures leads to quantitative, as well as qualitative, errors~\cite{Gyorffy1985-JPhysF-15-1337}. Indeed, the $T_C$ of transition metals is overestimated by a factor of five, and there are no moments and no Curie-Weiss law above $T_C$.
A solution to this problem should in principle be sought in the physics of strongly correlated electron systems. In particular, the dynamical mean-field theory (DMFT)~\cite{Georges1996}, combined with LDA band structure calculations has been used with success for simulations of finite-temperature magnetism in Fe and Ni~\cite{Lichtenstein2001}. However, its application to the study of structural phase transition in Fe~\cite{Leonov2011} had to neglect a contribution from the lattice dynamics, because of prohibitively high computational cost and difficulties in calculating forces between atoms~\cite{Kotliar2006}.
At the same time, it is realized that LSDA calculations at zero temperature can provide valuable information for the description of the finite temperature magnetism. One way of doing this is to extract magnetic interactions in the form of exchange constants for a classical Heisenberg model~\cite{Liechtenstein1987} or magnetic ``forces'' (the first variation of the total energy for a differential rotation of a local moment)~\cite{Antropov1995, Antropov1996} from DFT calculations and to use them in statistical mechanics~\cite{Rosengaard1997, Ruban2004, Ruban2007, Alling2009, Alling2010TiCrN} or in spin-dynamics~\cite{Skubic2008, Hellsvik2008} simulations of magnetic properties at elevated temperatures. Another useful approach is given by the so-called Disordered Local Moment model, introduced by Hubbard~\cite{Hubbard1979, Hubbard1979B, Hubbard1981} and Hasegawa~\cite{Hasegawa1979, Hasegawa1980}, and combined with the LSDA-DFT by Gyorffy \emph{et al.}~\cite{Gyorffy1985-JPhysF-15-1337}. Within the DLM picture, the local magnetic moments exist in the paramagnetic state above the magnetic transition temperature, but are fully disordered. The magnetically disordered state can be described as a pseudo-alloy of equal amounts of atoms with spin up and spin down orientations of their magnetic moments, and its electronic structure and the total energy can be calculated within the conventional alloy theory using the coherent potential approximation (CPA)~\cite{Gyorffy1985-JPhysF-15-1337} or the supercell technique~\cite{Alling2010-PRB-82-184430}. The methodology is highly successful in applications to many materials systems, ranging from steels~\cite{Olsson2003, Olsson2006} to actinides~\cite{Niklasson2003}, and its generalization for the case of partial magnetic disorder can be used in simulations of structural phase transitions in a vicinity of magnetic $T_C$~\cite{Ruban:2008p094436, Ekholm2010}. 
Still, to the best of our knowledge, all applications of the DLM model have so far neglected the effect of lattice vibrations.
On the other hand, the importance of lattice dynamics for an accurate description of thermodynamic properties of materials is well recognized by now~\cite{Walle2002, Gillan2006}. In particular, its consideration can be essential for a treatment of lattice stabilities~\cite{Mikhailushkin2007, Asker2008, Ozolins2009, Lavrentiev2010}, heat capacities~\cite{Kormann2008, Grabowski2009}, and equations of state~\cite{Isaev2011} of solids. The importance of lattice vibrations should be fully recognized in the paramagnetic state of magnetic materials, as the latter occurs only at elevated temperatures. However, the simultaneous treatment of the magnetic disorder, inherent to the paramagnetic state, and lattice vibrations represents a truly challenging task.
State-of-the-art treatments of lattice vibrations are based either on (quasi-)harmonic calculations of the phonon dispersion relations or on molecular dynamics simulations~\cite{Martin2004}. For magnetically ordered materials these techniques can be applied straightforwardly. However, in the presence of magnetic excitations this will not work. In particular, in the paramagnetic state at high temperatures the relevant magnetic excitations are associated with spin-flips. Their characteristic time scale can be estimated by the spin decoherence time $t_{dc}$. Spin-dynamics simulations of the spin autocorrelation function in bcc Fe above $T_C$~\cite{Hellsvik2008} show that $t_{dc}$ is of the order of 20-50 fs. In materials with lower $T_C$ it should be larger by approximately a corresponding factor, because both $T_C$ and the velocity of the propagation of the local moments are related to the strength of the exchange interactions. At the same time a typical MD run should be carried out for at least 3-5 ps, as dictated by the inverse of the Debye frequency ($\sim10^{-12}$ s). This means that magnetic configurations should change often during the MD run. Simultaneously, a typical MD time step is of the order of 1 fs, which is still much smaller than $t_{dc}$. Thus, the magnetic degree of freedom is slow on the time scale relevant for the determination of the temporal evolution of a particular atomic configuration, but fast on the time scale relevant for a proper exploration of the phase space of atomic configurations. Therefore, the adiabatic decoupling between magnetic and vibrational degrees of freedom cannot be applied, and they should be treated within one single framework. Similar arguments can be used to question the validity of lattice dynamics studies of paramagnetic materials based on the quasi-harmonic approximation.
Perhaps the most consistent approach to the analysis of spin-lattice interactions at finite temperature would be to apply a combination of molecular dynamics with \emph{ab initio} spin dynamics~\cite{Antropov1996} or with DMFT. However, at present such calculations are hardly feasible in practice.
Within our approach we describe the paramagnetic state of a system within the disordered local moment picture. In this approach, local moments exist at each magnetic site of a system (in our case, at Cr sites in CrN) and are commonly thought to fluctuate fairly independently. Thoughtful discussions of the DLM model can be found in Refs.~\cite{Gyorffy1985-JPhysF-15-1337,Hubbard1979, Hubbard1981, Hasegawa1979, Hasegawa1980}. The status of the DLM approach in many-body lattice models, like the Hubbard or s-f exchange ``Kondo lattice'' model, is discussed in Ref.~\cite{Niklasson2003}, where it is argued that though in a complete theory the charge and spin fields are dynamically fluctuating both in space and time, a ``static'' DLM approximation, where one neglects the dynamics of the fluctuations, captures an important part of the correlations. In the DLM a correlated system is described in terms of a pseudo-alloy of spin up and spin down components. Combined with the coherent potential approximation (CPA)~\cite{Gyorffy1985-JPhysF-15-1337} it becomes equivalent to the ``Hubbard III'' approximation~\cite{Hubbard1964} for the original many-body problem, and it is used with success in many applications.
However, the CPA is applicable for the description of a substitutionally disordered system with atoms at the sites of an ideal underlying crystal lattice~\cite{Ruban2008REV}, and therefore cannot be used for the treatment of lattice dynamics at finite temperatures. In a previous work\cite{Alling2010-PRB-82-184430}, we took one step towards the simultaneous modeling of magnetic and vibrational finite temperature effects by suggesting two alternative supercell implementations of the DLM calculations. In the first, specific collinear distributions of up and down magnetic moments, arranged to minimize the spin correlation functions, were used, in line with the special quasirandom structure (SQS)\cite{Zunger1990-PRL-65-353} methodology. In the second, a magnetic sampling method (MSM) was proposed. In the MSM, the energies of a number of randomly generated magnetic distributions were calculated and their running average was taken as the potential energy of the paramagnetic sample. In Ref.~\cite{Alling2010-PRB-82-184430} it was shown that for CrN the MSM calculations are converged already for 40 different magnetic distributions, and that the two approaches, the SQS and the MSM, give almost identical results.
The SQS approach makes use of the fact that in a static picture with all atoms fixed on ideal lattice points, the description of a spatial disorder between the local moment orientations is a good approximation to model the energetics of the combined space and time fluctuations of magnetic moments in a real paramagnet. Unfortunately, if the vibrations of atoms are to be included, one needs to go beyond the fixed magnetic state described by the SQS. The reason is that if a magnetic state is fixed in time, one would see artificial static displacements of atoms off their lattice sites due to forces between the atoms with different orientations of their local moments and with different local magnetic environments. In the CrN case those are likely to be quite large due to the magnetic stress discussed in Ref.~\cite{Fillipetti-PRL-85-5166}. In a real paramagnet, due to the time fluctuations of the local moments, these effects should be at least partially averaged out and suppressed, depending on the time scales of the spin fluctuations and atomic motions.
The MSM could in principle be used to obtain the adiabatic approximation where the magnetic fluctuations are considered to be instantaneous on the time scales of atomic motions. This approximation would be obtained if the forces acting on each atom were averaged over a sufficient number of different magnetic samples during each time step of a molecular dynamics simulation. The obvious drawback of this approach is that a large number of calculations needs to be run in parallel, leading to an increase, by a factor of 40 in our case, in computational demands. Furthermore, as stated above, it is not at all clear that this adiabatic approximation is motivated in any system. However, the MSM gives us a very good starting point for the implementation of the DLM picture in an MD framework.
\subsection{Disordered local moments molecular dynamics}
In this work we introduce a method for molecular dynamics simulations of paramagnetic materials within the traditional \emph{ab-initio} MD framework. Starting from the DLM idea of a spatial disorder of local moments, we also change the magnetic state periodically and in a stochastic manner during our MD simulation. In this way we deal with a magnetic state that shows order neither on the length scale of our supercell nor on the time scale of our simulation. We make the approximation that the magnetic state of the system is completely randomly rearranged with a time step given by the spin flip time ($\Delta t_{sf}$), with the constraint that the net magnetization of the system should be zero.
Hence, to simulate a paramagnetic system with a spin flip time $\Delta t_{sf}$, we initialize our calculations by setting up a supercell where collinear local moments are randomly oriented and the total moment of the supercell is zero, and run collinear spin-polarized MD with time step $\Delta t_{MD}$ for the number of steps corresponding to the spin flip time, that is, for $\Delta t_{sf}/\Delta t_{MD}$ time steps. Thereafter the spin state is randomized again, while the lattice positions and velocities are unchanged, and the simulation run continues.
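The update cycle described above can be sketched as a short driver loop. This is a minimal illustration only: the force evaluation of the actual method (a collinear spin-polarized DFT step) is left as a placeholder, and all function names are ours, not taken from any published code.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_dlm_state(n_cr):
    """Collinear spins +/-1 with zero net magnetization (n_cr must be even)."""
    spins = np.array([+1] * (n_cr // 2) + [-1] * (n_cr // 2))
    rng.shuffle(spins)
    return spins

def dlm_md(n_cr=32, dt_md=1.0, dt_sf=10.0, n_steps=50):
    """Toy DLM-MD driver: one MD step per iteration; the magnetic state is
    re-randomized every dt_sf / dt_md steps, leaving positions and
    velocities untouched at the flip."""
    flip_every = int(round(dt_sf / dt_md))
    spins = random_dlm_state(n_cr)
    history = []
    for step in range(n_steps):
        if step > 0 and step % flip_every == 0:
            spins = random_dlm_state(n_cr)  # instantaneous spin rearrangement
        # ... a spin-polarized DFT force call and MD integration go here ...
        history.append(spins.copy())
    return history

states = dlm_md()
assert all(s.sum() == 0 for s in states)  # zero net magnetization throughout
```

Between flips the spin configuration is held fixed, which is the temporarily-broken-ergodicity picture discussed below.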
Here it is worth pointing out that besides the treatment of the many-body effects important for the description of the paramagnetic state at the DLM-LSDA level, or, as will be discussed below for the present case, the DLM-LSDA+U level, we introduce several additional approximations. In particular, we neglect effects due to non-collinear orientations of the local magnetic moments. This, however, is justified for the paramagnetic state well above the magnetic transition temperature~\cite{Gyorffy1985-JPhysF-15-1337}. Note also that the magnitudes of the local magnetic moments are allowed to vary as dictated by the self-consistent solution of the electronic structure problem at each step of the MD simulation. At the same time, we substitute the true spin dynamics with instantaneous modifications of the sample magnetic structure with time steps $\Delta t_{sf}$. Here we follow Ref.~\cite{Gyorffy1985-JPhysF-15-1337} and make use of the physical picture that the simulated system, although ergodic, does not cover its phase space uniformly in time. In the DLM model one assumes that it gets stuck for long times, of the order of $\Delta t_{sf}$, near points characterized by a finite moment at every site pointing in more or less random directions, and then moves rapidly (in our case instantly) to another similar point. The states of temporarily broken ergodicity are characterized by classical unit vectors, $\bf e_i$, assigned to each site $i$ and giving the direction of the magnetization averaged over the spatial extent of the $i$-th site in the supercell and the time $\Delta t_{sf}$. The motion of temporarily broken ergodicity is mainly characterized by changes in the orientational configuration of the moments.
Note that in Ref.~\cite{Gyorffy1985-JPhysF-15-1337} the magnetic degree of freedom was related to an inverse spin-wave frequency $t_{sw} \sim 1/\omega_{sw} \sim$ 100 fs, which represents the dominating magnetic excitation at low temperatures. However, in the paramagnetic state at high temperatures the relevant magnetic excitations are associated with spin-flips rather than with spin waves. Thus the relevant time scale is better characterized by the spin decoherence time $t_{dc}$ than by the inverse spin-wave frequency. As we pointed out above, $t_{dc}$ was estimated to be of the order of 20-50 fs in bcc Fe above $T_C$~\cite{Hellsvik2008}. For CrN, with a $T_N$ around room temperature and probably weaker exchange interactions, we expect that $t_{dc}$ could be somewhat larger.
However, our procedure makes it possible to model a paramagnetic system for any particular time scale of the spin dynamics. In fact one can span the whole range between the two adiabatic approximations: from the frozen magnetic structure to magnetic configurations that rearrange instantaneously on the time scales of each atomic motion during the MD run. Of course, the appropriate value of this parameter needs to be found with real spin dynamics calculations or taken from experiments. In this paper, we study a range of different spin flip times and their consequences for the obtained structural and thermodynamic properties of CrN.
\subsection{Calculational details}
All our first-principles calculations in this work are performed using the projector augmented wave method~\cite{Blochl:1994p1407} as implemented in the Vienna Ab-Initio Simulation Package (VASP)~\cite{Kresse:1993p4005,Kresse:1996p4007,Kresse:1996p4006}. The electronic exchange-correlation effects are modeled using the local density approximation with a Hubbard Coulomb term (LDA+U)~\cite{Anisimov1991-PRB-44-943} using the double-counting correction scheme suggested by Dudarev \emph{et al.}~\cite{Dudarev1998-PRB-57-1505}. The value of the effective $U$ ($U^{eff}=U-J$), applied only to the Cr $3d$-orbitals, is taken as 3 eV, found to be suitable in comparison with several experimentally measured structural and electronic properties in Ref.~\cite{Alling2010-PRB-82-184430}.
Our simulation box, both in the simulation of the cubic and orthorhombic phases, contains 32 Cr and 32 N atoms arranged in a supercell of $2\times2\times2$ conventional cubic unit cells. In the orthorhombic case, the primitive vectors of the supercell are tilted and scaled in line with the results of a structural optimization of this low temperature antiferromagnetic phase.
The plane wave energy cut-off was set to 400 eV. We used a Monkhorst-Pack scheme~\cite{Monkhorst1976-PRB-13-5188} for sampling of the Brillouin zone using a grid of $2\times2\times2$ $k$-points. To check the accuracy of the potential energies and pressures, a selection of configurations was chosen out of the MD simulation run and recalculated with a higher accuracy. The error arising from the $k$-point sampling is relatively constant, with a shift of about 35 meV and a standard deviation of less than 2 meV. Hence the relative potential energies that are calculated have a high accuracy. The pressures also have a small constant shift of about 0.2 GPa, with a standard deviation of less than 0.1 GPa, when the electronic structure calculations are converged with respect to the $k$-point mesh.
In order to control the temperature of the simulation, avoid artificial energy drift, and minimize the influence of the particular choice of initial magnetic and lattice configurations, we use a Nos\'e thermostat\cite{Nose1991-PTP-103-1}. The values of the bulk modulus, $K_0$, have been determined by fitting our calculated pressure and volume data to the Birch-Murnaghan equation of state\cite{Birch1947-PhysRev-71-809,Murnaghan1944-PNAS-30-244}:
\begin{equation}
\label{eqn:bme}
P=3K_{0} f_{E} (1+2f_{E})^{5/2}\left(1+\frac{3}{2}(K_0^{'}-4)f_{E}\right)
\end{equation}
\noindent where $K_{0}$ is the bulk modulus and $K_0^{'}$ is the derivative of $K_{0}$ with respect to pressure.
The Eulerian strain, $f_E$, is defined as
$f_E=\frac{1}{2}\left[(V_0/V)^{2/3}-1\right]$,
where $V$ and $V_0$ are the volume and the equilibrium volume, respectively.
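As a numerical sanity check of this equation of state (a sketch only; the function names are ours), one can verify that the pressure vanishes at $V=V_0$ and that the numerical derivative $-V\,dP/dV$ at $V_0$ recovers $K_0$:

```python
def birch_murnaghan_pressure(v, v0, k0, k0_prime):
    """Third-order Birch-Murnaghan P(V) in the Eulerian-strain form."""
    f_e = 0.5 * ((v0 / v) ** (2.0 / 3.0) - 1.0)
    return (3.0 * k0 * f_e * (1.0 + 2.0 * f_e) ** 2.5
            * (1.0 + 1.5 * (k0_prime - 4.0) * f_e))

def bulk_modulus_numeric(v0, k0, k0_prime, h=1e-6):
    """K = -V dP/dV evaluated at V = V0 by a central difference."""
    dp = (birch_murnaghan_pressure(v0 + h, v0, k0, k0_prime)
          - birch_murnaghan_pressure(v0 - h, v0, k0, k0_prime)) / (2.0 * h)
    return -v0 * dp

# At the equilibrium volume the pressure is zero and -V dP/dV equals K0.
assert birch_murnaghan_pressure(1.0, 1.0, 290.0, 4.0) == 0.0
assert abs(bulk_modulus_numeric(1.0, 290.0, 4.0) - 290.0) < 1e-3
```

In practice $K_0$, $K_0'$, and $V_0$ are obtained by least-squares fitting this form to the calculated $(P, V)$ data.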
\section{Application to C\lowercase{r}N}
\subsection{The potential energy}
From the MD calculations we extract the potential energies of CrN. As can be seen in Fig.~\ref{fig:Epot}, the potential energy of the system is stable and well converged.
To investigate the influence of the spin flip time, the potential energy of CrN is calculated for several $\Delta t_{sf}$. The results are shown in Fig.~\ref{fig:Epot}.
In Fig.~\ref{fig:Eshift} these potential energies are collected and shown relative to the potential energy of the calculations with the shortest $\Delta t_{sf}$, 5 fs. There is a clear shift in potential energy of about 10 meV from the simulations with the shortest spin flip time of 5 fs to the longest of 100 fs. This can be compared with the total energy reduction of 15 meV due to static relaxations that we get by using the SQS approach, treating the magnetic state as frozen in time. We note that for the lower values of $\Delta t_{sf}$, corresponding to fast spin decoherence, there is a plateau where the potential energy is only weakly influenced by $\Delta t_{sf}$. However, between spin flip times of 15 to 50 fs there is a considerable change in potential energy. Of course, the energy scale should be material specific. We suggest, as a quick test of the importance of this effect, a calculation of the relaxation energies of a paramagnetic system using the SQS approach~\cite{Alling2010-PRB-82-184430} with a fixed magnetic state through the relaxation. The obtained relaxation energy should correspond to an upper limit on the dependence of the potential energy on $\Delta t_{sf}$.
\begin{figure}[htb]
\centering
\includegraphics[scale=1]{epot}
\caption{ Potential energy of cubic paramagnetic CrN as a function of simulation time
calculated at 300 K using the DLM-MD method. Shown are the results obtained with spin flip times of 10 fs and 100 fs, as well as with a static magnetic state. Results for conventional AIMD simulations
carried out for CrN in the orthorhombic antiferromagnetic ground state are also shown for comparison.
The potential energy is stable and well converged as can be seen
by the included running averages.}
\label{fig:Epot}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[scale=1]{eshift}
\caption{ Potential energy shift calculated for paramagnetic CrN as a function of the spin flip time $\Delta t_{sf}$.
The shortest spin flip time of 5 fs is taken as reference.}
\label{fig:Eshift}
\end{figure}
\subsection{Pair distances}
\begin{figure}[htb]
\centering
\includegraphics[scale=1]{pairs}
\caption{ Histogram of the Cr - Cr nearest neighbor distances for Cr atoms with parallel (solid line) and antiparallel (dashed line) orientations of local magnetic moments obtained from DLM-MD simulation for the cubic paramagnetic phase at 300 K.
a) $\Delta t_{sf}=$10 fs.
b) $\Delta t_{sf}=$100 fs.
c) Static magnetic state.
d) The orthorhombic antiferromagnetic phase of CrN calculated with conventional AIMD for comparison.
}
\label{fig:Pairdist}
\end{figure}
In order to analyze the difference between the proposed DLM-MD simulations and magneto-static MD in more detail, an investigation of the local environment of the different atoms is carried out, in particular of the Cr - Cr metal nearest neighbor distances. In Fig.~\ref{fig:Pairdist}, histograms of all the Cr - Cr nearest neighbor distances are shown. These are also separated into $\uparrow\uparrow,\downarrow\downarrow$ and $\uparrow\downarrow,\downarrow\uparrow$ pairs. Hence we can see the effect of the magnetic state on the distribution of pair distances. In Fig.~\ref{fig:Pairdist}a the $\Delta t_{sf}$ is very short, 10 fs; hence the atoms do not have time to adjust their positions to the current orientation of the local magnetic moments, and we do not see any difference in distances between the $\uparrow\uparrow,\downarrow\downarrow$ and the $\uparrow\downarrow,\downarrow\uparrow$ pairs. In Fig.~\ref{fig:Pairdist}b, the spin flip time is increased to 100 fs, and now the atoms have had sufficient time to move towards the energetically preferential positions. Consequently, a shift in pair distances between the $\uparrow\uparrow,\downarrow\downarrow$ and the $\uparrow\downarrow,\downarrow\uparrow$ pairs is evident. Fig.~\ref{fig:Pairdist}c is obtained with the same orientation of the local moments during the whole MD run. Here we also see a splitting of the pair distances between the $\uparrow\uparrow,\downarrow\downarrow$ and the $\uparrow\downarrow,\downarrow\uparrow$ pairs, which is of the same order as in the previous case. Hence, 100 fs between the re-arrangements of the magnetic configurations is long enough for the atomic nuclei to adjust their positions in the supercell considerably to the given magnetic configuration.
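The analysis described here amounts to binning Cr - Cr pair distances by the relative orientation of the two local moments. A minimal sketch (our own illustrative code; it uses direct distances and ignores the periodic boundary conditions that a production analysis must handle):

```python
import numpy as np

def split_pair_distances(positions, spins, cutoff):
    """Collect pair distances below `cutoff`, split into parallel
    (up-up / down-down) and antiparallel (up-down) moment pairs."""
    parallel, antiparallel = [], []
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            d = float(np.linalg.norm(positions[i] - positions[j]))
            if d < cutoff:
                (parallel if spins[i] == spins[j] else antiparallel).append(d)
    return parallel, antiparallel
```

Histograms of the two lists then correspond to the solid (parallel) and dashed (antiparallel) curves of Fig.~\ref{fig:Pairdist}.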
In the last figure, Fig.~\ref{fig:Pairdist}d, the pair distances are shown for the low temperature antiferromagnetic orthorhombic ground state for comparison. Here the $\uparrow\uparrow,\downarrow\downarrow$ and $\uparrow\downarrow,\downarrow\uparrow$ pairs of magnetic moments are arranged in an ordered way, see e.g. Fig. 4 in Ref.~\cite{Alling2010-PRB-82-184430}, that allows for maximal relaxation of atomic coordinates in combination with a structural relaxation of the unit cell, giving rise to a large separation between the two different kinds of pairs.
The possibility of statistical correlations between the atomic distances and the orientations of atomic moments also in a dynamically changing paramagnetic phase is indeed an intriguing thought experiment. Although we cannot rule out its existence from principal considerations, to the best of our knowledge it has never been reported in experiments. However, we note that our two main approximations in the present DLM-MD, the usage of collinear moments and the temporarily broken ergodicity of the DLM approach, are likely to introduce inaccuracies that exaggerate these local spin-lattice correlations when a slow spin dynamics is modelled. Therefore, when the method suggested here is used, a smaller value of $\Delta t_{sf}$, corresponding to the absence of differences in distances between atoms with parallel and antiparallel local moments, as in Fig.~\ref{fig:Pairdist}a, should be recommended. The presence of an energy plateau in Fig.~\ref{fig:Eshift} seems to indicate that this is a reasonable approach.
\subsection{Bulk modulus of C\lowercase{r}N}
\begin{figure}[htb]
\centering
\includegraphics[scale=1]{eos}
\caption{
Volume as a function of pressure for the cubic and orthorhombic phases from MD simulations at 300 K.
The equation of state for the cubic phase is calculated using a spin flip time of 10 fs. The calculated volumes
are normalized with the calculated equilibrium volume of the cubic phase,
and the experimental points\cite{Rivadulla2009-Nat.Mater-8-947} with the measured equilibrium volume of the cubic phase.
The inset shows the dependence of the calculated equation of state on the spin flip time.}
\label{fig:EOS}
\end{figure}
Our main goal with this work is to study the equations of state, and in particular the bulk modulus, of paramagnetic CrN, which have recently been discussed in the literature~\cite{Rivadulla2009-Nat.Mater-8-947, Alling2010-NatMater-9-283}. Using our DLM-MD approach we are able to calculate the volume as a function of temperature and pressure for both the paramagnetic cubic and the antiferromagnetic orthorhombic phases. In the former case we also investigate whether the value of the spin flip time parameter has an impact on the equation of state. Thus we are able to investigate how both the dynamical change of magnetic configurations in the paramagnetic state and the lattice vibrations, neglected in previous theoretical works but of course present in the experiments, affect the compressibility. Figure~\ref{fig:EOS} shows the calculated volume as a function of pressure for the two phases at 300 K and compares them to the experimental measurements by Rivadulla~\emph{et al.}~\cite{Rivadulla2009-Nat.Mater-8-947}. One sees very good agreement between the theoretical and experimental equations of state. In particular, the relative shift in volumes between the two phases is reproduced within the measured error bars. The calculated slope of the orthorhombic phase agrees well with the measured values for this phase, where the measurement is done over a large pressure range. The inset in Fig.~\ref{fig:EOS} shows the influence of the spin flip time on the volume versus pressure curves in paramagnetic cubic CrN. A change in $\Delta t_{sf}$ introduces a small shift of the volumes, but does not influence the slope of the curves.
Our results confirm the possibility of a pressure induced phase transition from the cubic paramagnetic to the orthorhombic antiferromagnetic phase due to the slightly smaller volume of the latter, in line with previous investigations. Importantly, as can be seen in Table \ref{tbl:K0} and from the slopes of the curves in Fig.~\ref{fig:EOS}, the bulk modulus is found to be very similar for the two phases. This is the case at 300 K, at 1000 K, and in the static 0 K calculations. The calculations of the orthorhombic low temperature phase at 1000 K are of course not of relevance for any comparison with experiments, but are included to show with certainty that temperature induced vibrations do not influence the \emph{difference} in bulk modulus between the phases. At T=300 K and P=0 GPa we find $K_{0}^{para}=290$~GPa while $K_{0}^{AFM}=286$~GPa. This gives an insignificant difference of 4 GPa, far from the collapse of 25\% or 85 GPa suggested in Ref.~\cite{Rivadulla2009-Nat.Mater-8-947} to follow the transition from the cubic to the orthorhombic structure. A variation of the time between the rearrangements of the magnetic configurations does not influence the value of the bulk modulus in any appreciable way. Thus, explicit consideration of temperature induced magnetic fluctuations and lattice vibrations does not change the main conclusions from previous works~\cite{Alling2010-NatMater-9-283, Alling2010-PRB-82-184430}: There is no theoretical support for a collapse of the bulk modulus of CrN upon the pressure induced phase transition.
\begin{table}
\caption{\label{tbl:K0} Calculated bulk modulus, $K_{0}$ (in GPa), of CrN in the orthorhombic antiferromagnetic and the
cubic paramagnetic phases obtained at ambient pressure and at 0, 300, and 1000 K.
}
\begin{ruledtabular}
\begin{tabular}{lccc}
\text{Structure} & \text{Static 0K} & \text{MD 300 K} & \text{MD 1000 K} \\
\hline
\text{Orthorhombic AFM} & 290 & 286 & 261 \\
\text{Cubic Paramagnetic} & 299 & 290 & 269
\end{tabular}
\end{ruledtabular}
\end{table}
\section{Summary}
We present a method for calculation of thermodynamic properties of magnetic materials in their high temperature paramagnetic state.
We use \emph{ab-initio} molecular dynamics and simulate the paramagnetic state with disordered non-vanishing local magnetic moments. Random configurations of the local moments in the simulation cell are switched at predetermined time intervals. Hence we can capture the influence of the dynamically disordered magnetic state on the lattice dynamics as it develops during the simulation. We apply this method to CrN, which is known to have a strong interaction between the magnetic state and the lattice. We find that there is a connection between how often the local moments are allowed to flip and the calculated potential energy. If the spin flip time is short, $\sim10$ fs, the lattice does not have time to respond, but if the spin flip time is increased to about 100 fs, the atomic positions start to show clear relaxation effects. We apply this disordered local moments molecular dynamics method to the calculation of the equation of state of paramagnetic cubic CrN and compare with calculations for the orthorhombic antiferromagnetic phase and with experiments. In particular, we calculate the debated bulk modulus and find that there is only a very small difference, and definitely no collapse, in $K_0$ between the orthorhombic antiferromagnetic phase and the cubic paramagnetic phase.
\begin{acknowledgments}
We gratefully acknowledge financial support from the Swedish Foundation for Strategic Research (SSF) Program in Materials Science for Nanoscale Surface Engineering, MS$^2$E, the Swedish Research Council (VR), and the G\"oran Gustafsson foundation for research in Natural Sciences and Medicine. The simulations were carried out using supercomputer resources provided by the Swedish National Infrastructure for Computing (SNIC).
\end{acknowledgments}
|
1801.05791
|
\section{Introduction \& Main Results} Kac \cite{FKT} introduced a Markov model for the behaviour of a dilute gas, corresponding to the spatially homogeneous Boltzmann equation. We consider an ensemble of $N$ indistinguishable particles, with velocities $v_1(t), ..., v_N(t) \in \mathbb{R}^d$ at time $t\ge 0$, which are encoded in the empirical velocity distribution \begin{equation} \mu^N_t=N^{-1}\sum_{i=1}^N \delta_{v_i(t)}. \end{equation} Throughout, unless specified otherwise, we consider only the following example of a Kac process, known as the \emph{hard spheres} kernel, which is one of the two main examples of physical interest. The dynamics are as follows: \begin{enumerate}\item For every (unordered) pair of particles with velocities $v, v_\star \in \text{supp}(\mu^N_t)$, the particles collide at a rate $2|v-v_\star|/N$.
\item When two particles collide, take an independent random variable $\Sigma$, distributed uniformly on $S^{d-1}$. The particles then separate in direction $\Sigma.$
\item The velocities change to $v'(v, v_\star, \Sigma)$ and $v'_\star(v,v_\star, \Sigma)$, given by conservation of energy and momentum as \begin{equation}\label{eq: PCV} v'(v, v_\star, \Sigma)=\frac{v+v_\star+\Sigma|v-v_\star|}{2}; \hspace{0.5cm} v_\star'(v, v_\star, \Sigma)=\frac{v+v_\star-\Sigma|v-v_\star|}{2} \end{equation} The measure changes to \begin{equation} \label{eq: change of measure at collision} \mu \mapsto \mu^{N, v, v_\star, \Sigma} = \mu+\frac{1}{N}(\delta_{v'}+\delta_{v'_\star}-\delta_{v}-\delta_{v_\star}). \end{equation} \end{enumerate} More formally, we consider the space $\mathcal{S}$ of Borel measures on $\mathbb{R}^d$, satisfying \begin{equation} \langle 1, \mu \rangle =1; \hspace{0.5cm} \langle v, \mu \rangle =0; \hspace{0.5cm} \langle |v|^2, \mu \rangle = 1\end{equation} where we have adopted the notational conventions that angle brackets $\langle, \rangle$ denote integration against a measure, and $v$ denotes the identity function on $\mathbb{R}^d$. $\mathcal{S}$ is called the \emph{Boltzmann Sphere}, and consists of those measures with normalised mass, momentum and energy. We write $\mathcal{S}^k$ for the subspace of $\mathcal{S}$ where the $k^\text{th}$ moment $\langle |v|^k, \mu\rangle$ is finite, and define the following family of weights: \begin{equation}\Lambda_k(\mu):=\langle (1+|v|^2)^\frac{k}{2}, \mu\rangle. \end{equation} This leads to a natural family of subspaces: \begin{equation} \label{eq: definition of SKA} \mathcal{S}^k_a := \{\mu \in \mathcal{S}: \Lambda_k(\mu)\leq a\}. \end{equation} For shorthand, we will often write $\Lambda_k(\mu, \nu):=\max(\Lambda_k(\mu), \Lambda_k(\nu))$. \medskip \\
Let $\mathcal{S}_N$ be the subset of $\mathcal{S}$ consisting of normalised empirical measures on $N$ points; we will typically write $\mu^N$ for a generic element of $\mathcal{S}_N$. Formally, the Kac process is the Markov process on $\mathcal{S}_N$ with kernel \begin{equation} \label{eq: definition of script Q} \mathcal{Q}_N(\mu^N)(A)= N \int_{\mathbb{R}^d \times \mathbb{R}^d \times S^{d-1}} 1(\mu^{N, v, v_\star, \sigma} \in A) |v-v_\star| \mu^N(dv)\mu^N(dv_\star) d\sigma.\end{equation} Note that, since the map $\mu^N \mapsto \mu^{N,v,v_\star, \sigma}$ preserves particle number, momentum, and kinetic energy, $\mathcal{Q}_N(\mu^N)$ is supported on $\mathcal{S}_N$ whenever $\mu^N \in \mathcal{S}_N$. We write $(\mu^N_t)_{t \geq 0}$ for a Kac process on $N$ particles. Observe that the rates are bounded by $2N$, and so for any initial datum $\mu^N_0$, the law of a Kac process started from $\mu^N_0$ exists, and is unique, and the process is almost surely non-explosive.
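The collision rule above is easy to check numerically. The following Python sketch (illustrative only, not part of the paper) implements the post-collisional velocities of (\ref{eq: PCV}) with $\Sigma$ uniform on $S^{d-1}$, and verifies that each collision conserves momentum and kinetic energy, which is why $\mathcal{Q}_N$ preserves $\mathcal{S}_N$:

```python
import numpy as np

def collide(v, v_star, rng):
    """One hard-spheres collision: sample Sigma uniform on S^{d-1} and
    return v' = (v + v* + Sigma|v - v*|)/2, v*' = (v + v* - Sigma|v - v*|)/2."""
    d = len(v)
    sigma = rng.standard_normal(d)
    sigma /= np.linalg.norm(sigma)        # uniform direction on the unit sphere
    speed = np.linalg.norm(v - v_star)    # relative speed |v - v_star|
    v_new = (v + v_star + sigma * speed) / 2
    v_star_new = (v + v_star - sigma * speed) / 2
    return v_new, v_star_new

rng = np.random.default_rng(0)
v, w = rng.standard_normal(3), rng.standard_normal(3)
vp, wp = collide(v, w, rng)
# momentum and kinetic energy are conserved up to floating-point error
assert np.allclose(v + w, vp + wp)
assert np.isclose(v @ v + w @ w, vp @ vp + wp @ wp)
```

Note that the relative speed $|v'-v'_\star| = |v-v_\star|$ is also conserved, so the collision rate of a given pair is unchanged by its own collisions.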
\paragraph{Measure Solutions to the Boltzmann Equation}Following many previous works, \cite{L&M, M+M, ACE}, we study measure-valued solutions to the Boltzmann equation. We define the Boltzmann collision operator $Q(\mu, \nu)$ for measures $\mu, \nu \in \mathcal{S}$ as \begin{multline} \label{eq: defn of Q} Q(\mu, \nu)=\int_{\mathbb{R}^d\times \mathbb{R}^d\times S^{d-1}} \left\{\delta_{v'}+\delta_{v_\star'}-\delta_v-\delta_{v_\star}\right\}|v-v_\star|d\sigma \mu(dv)\nu(dv_\star). \end{multline}
For brevity, we will denote $Q(\mu, \mu)$ by $Q(\mu)$. We say that a family $(\mu_t)_{t\geq 0}$ of measures in $\mathcal{S}$ satisfies the \emph{Boltzmann equation} if, for any bounded measurable $f$ of compact support, \begin{equation} \tag{BE}\label{BE} \forall t \geq 0 \hspace{1cm} \langle f, \mu_t \rangle =\langle f, \mu_0 \rangle +\int_0^t \langle f, Q(\mu_s)\rangle ds. \end{equation} The Boltzmann equation is known to have a unique fixed point $\gamma \in \mathcal{S}$, which is given by the Maxwellian, or Gaussian, density: \begin{equation}
\gamma(dv)=\frac{e^{-\frac{d}{2}|v|^2}}{(2\pi d^{-1})^{d/2}}dv.
\end{equation}
\paragraph{Measuring Convergence to the Boltzmann Equation}To discuss the convergence of Kac's process to the Boltzmann equation, we will work with the following \emph{Wasserstein metric} on $\mathcal{S}$. Consider the Sobolev space of test functions \begin{equation} X=W^{1, \infty}(\mathbb{R}^d)=\{\text{Bounded, Lipschitz functions } f:\mathbb{R}^d \rightarrow \mathbb{R} \}; \end{equation}
\begin{equation} \|f\|_X:=\max\left(\sup_{v}|f|(v), \hspace{0.1cm} \sup_{v\neq w} \frac{|f(v)-f(w)|}{|v-w|}\right). \end{equation} We write $B_X$ for the unit ball of $X$; that is, those functions which are $1$-bounded and $1$-Lipschitz. Given a function $f$ on $\mathbb{R}^d$, we write $\hat{f}$ for the function \begin{equation} \hat{f}(v)=\frac{f(v)}{1+|v|^2}. \end{equation} We write $\mathcal{A}$ for the space of weighted-Lipschitz functions:\begin{equation} \label{eq: defn of script A}\mathcal{A}:=\left\{f: \mathbb{R}^d \rightarrow \mathbb{R}: \hat{f}\in X, \|\hat{f}\|_X\leq 1\right\}.\end{equation} We will also write \begin{equation} \label{eq: defn of script A 0}\mathcal{A}_0=\left\{f: \mathbb{R}^d \rightarrow \mathbb{R}: \hat{f}\in L^\infty(\mathbb{R}^d), \|\hat{f}\|_\infty\leq 1\right\}.\end{equation} The weighted Wasserstein metric $W$ is given by the duality: \begin{equation}\label{eq: definition of W} W(\mu, \nu):= \sup_{f\in\mathcal{A}}|\langle f, \mu-\nu\rangle|. \end{equation}
We make the following remark on alternative possible choices of metric. Our metric $W$ is closely related to the $p$-Wasserstein metrics $W_p$ on the subspaces $\mathcal{S}^p$, given by \begin{equation}\label{eq: definition of Wp} W_p(\mu, \nu)=\inf\left\{\int_{\mathbb{R}^d} |v-w|^p\pi(dv,dw): \hspace{0.2cm} \pi\text{ is a coupling of } \mu \text{ and } \nu\right\}.\end{equation} In the special case $p=1$, the metric $W_1$ is known as the Monge-Kantorovich-Wasserstein (MKW) metric, and can alternatively be given by \begin{equation} W_1(\mu, \nu)=W\left(\frac{\mu}{1+|v|^2}, \frac{\nu}{1+|v|^2}\right).\end{equation} It is straightforward to check that, on the space $\mathcal{S}$, the metrics $W, W_1, W_2$ all induce the same topology, and that for some absolute constant $C$, we have the bound $W_1 \le CW$ on $\mathcal{S}$. Moreover, on the subspaces $\mathcal{S}^k_a$ defined in (\ref{eq: definition of SKA}), with $k>2$, we can find explicit bounds $W \le C W_1^\alpha$, with $\alpha\in(0,1)$. \medskip \\We now state the motivating result of \cite{ACE} on the convergence of the Kac process to the Boltzmann equation:
\begin{proposition}\label{thrm: bad convergence theorem} \cite[Theorem 10.1]{ACE} Let $k>2$. We say that a family $(\mu_t)_{t\ge 0}$ is locally $\mathcal{S}^k$-bounded if $\sup_{s\leq t} \hspace{0.1cm} \Lambda_k(\mu_s) <\infty $ for any $t \ge 0$. \\ \\ For any $\mu_0 \in \mathcal{S}^k$, there is a unique locally $\mathcal{S}^k$-bounded solution to the Boltzmann equation (\ref{BE}), starting from $\mu_0$; we write this solution as $(\phi_t(\mu_0))_{t\geq 0}$. \medskip\\ Moreover, for any $\epsilon>0$, $t_\text{fin}<\infty$, $\lambda<\infty$, there exist constants $C(\epsilon, \lambda, k, t_\text{fin})<\infty$ and $\alpha(d,k)>0$ such that, whenever $(\mu^N_t)_{t\geq 0}$ is a Kac process on $N\geq 1$ particles, with $\Lambda_k(\mu^N_0) \leq \lambda, \Lambda_k(\mu_0) \leq \lambda$, we have \begin{equation} \mathbb{P}\left(\sup_{t\leq t_\text{fin}} W(\mu^N_t, \phi_t(\mu_0))>C(W(\mu^N_0, \mu_0)+N^{-\alpha})\right)< \epsilon.\end{equation} For $d\geq 3$ and $k>8$, we can take $\alpha = \frac{1}{d}$. \end{proposition}
While the convergence of the Kac process to the Boltzmann equation is a well-known and extensively studied topic, it is most often studied through the propagation of chaos, discussed below, in contrast to the \emph{pathwise} style of estimate above, which we seek to emulate. We note that the existence of solutions is known \cite{L&M} in the case $k=2$, but that nothing is known about the convergence of the Kac process in this case.\\
From existence and uniqueness, we can consider the Boltzmann equation as describing a non-linear semigroup of flow operators $(\phi_t)_{t\geq 0}$ on $\cup_{k>2} \mathcal{S}^k$.
To prove Proposition \ref{thrm: bad convergence theorem}, Norris \cite{ACE} introduces a family of random linear operators $E_{st}$, and develops a representation formula in terms of these operators, which will be reviewed in Sections \ref{sec: continuity of BE}, \ref{sec: LMR}. Crucial to the proof are estimates for the operator norms of $E_{st}$, which are obtained by Gr\"onwall-style estimates. As a result, the constant $C$ depends badly on the terminal time $t_\text{fin}$, with \textit{a priori} exponential growth. Our work was inspired by the observation that strong \emph{stability estimates} for the non-linear semigroup $(\phi_t)$, proven by Mischler and Mouhot \cite{M+M}, allow us to avoid using Gr\"onwall-style estimates, and hence obtain estimates with better long-time properties.
\paragraph{Chaoticity} We will also discuss the notion of chaoticity, which is the usual framework used to analyse the convergence of the Kac process to the Boltzmann equation. In this context, it is natural to preserve the labels on the particles, and to consider the \emph{labelled Kac process} $\mathcal{V}^N_t=(v_1(t),...,v_N(t))$, taking values in the labelled Boltzmann Sphere \begin{equation} \mathbb{S}^{N}=\left\{(v_1, ..., v_N) \in (\mathbb{R}^d)^N: \hspace{0.2cm}\sum_{i=1}^N v_i=0, \hspace{0.2cm}\sum_{i=1}^N |v_i|^2 = N\right\}. \end{equation} We may recover $\mathcal{S}_N$ by taking empirical measures: \begin{equation} \theta_N: \mathbb{S}^N \rightarrow \mathcal{S}_N; \hspace{1cm} (v_1, ..., v_N) \mapsto \frac{1}{N} \sum_{i=1}^N \delta_{v_i}.\end{equation} Moreover, if $\mathcal{V}^N_t$ is a labelled Kac process, then $\mu^N_t=\theta_N(\mathcal{V}^N_t)$ is an unlabelled Kac process. We write $\mathcal{LV}^N_t$ for the law of $(v_1(t),...,v_N(t))$ on $\mathbb{S}^N$. We will measure chaoticity using the following (unweighted) Wasserstein metrics on probability measures on $(\mathbb{R}^d)^l$ for all $l\ge 1$, defined in a similar way to (\ref{eq: definition of W}): \begin{equation}\label{eq: definition of script W} \mathcal{W}_{1,l}\left(\mathcal{L}, \mathcal{L}'\right)=\sup\left\{\int_{(\mathbb{R}^d)^l} f(V)\hspace{0.1cm}(\mathcal{L}(dV)-\mathcal{L}'(dV)) \right\}\end{equation} where the supremum is over all functions $f$ of the form $f=f_1\otimes f_2 \otimes...\otimes f_l$, with each $f_i$ a bounded and Lipschitz test function, $f_i\in B_X$, and the subscript $l$ recalls the relevant dimension. We now recall the following definition from \cite{FKT}:
\begin{definition*}[Finite Dimensional Chaos] For each $N$, let $\mathcal{L}^N$ be a law on $\mathbb{S}^N$, which is symmetric under permutations of the indices. We say that $(\mathcal{L}^N)_{N\ge 2}$ is $\mu$-chaotic if, for all $l \ge 1$, we have \begin{equation} \label{eq: POC}\mathcal{W}_{1,l}\left(\Pi_l[\mathcal{L}^N], \mu^{\otimes l}\right) \rightarrow 0\end{equation} where $\Pi_l$ denotes the marginal distribution on the first $l$ factors. \end{definition*} A stronger notion, put forward by Mischler and Mouhot \cite{M+M}, is that of \emph{infinite-dimensional chaos}, which allows the number of marginals $l$ to vary with $N$:\begin{equation} \label{eq: IDPOC} \max_{1\le l\le N}\left[\frac{1}{l} \mathcal{W}_{1,l}\left(\Pi_l[\mathcal{L}^N], \mu^{\otimes l}\right)\right] \rightarrow 0.\end{equation}
Kac proposed the following \emph{propagation of chaos} property. Let $(\mathcal{V}^N_t)_{t\ge 0}$ be a labelled Kac process, such that the initial distribution $\mathcal{L}\mathcal{V}^N_0$ is $\mu_0$-chaotic. Then, for all times $t\ge 0$, the law $\mathcal{LV}^N_t$ will be $\phi_t(\mu_0)$-chaotic, where $\phi_t(\mu_0)$ is the solution to the Boltzmann equation starting at $\mu_0$. This is the original sense in which Kac proposed to study the convergence of his model to the Boltzmann equation, and has been extensively studied; key previous results in this direction will be discussed in our literature review.
\subsection{Main Results} We now state the main results of the paper, concerning the long-time nature of the convergence to the Boltzmann flow. Our first theorem controls the deviation from the Boltzmann flow at a single, deterministic time $t\geq 0$, which we refer to as a \emph{pointwise} estimate. Moreover, this estimate is \emph{uniform in time}.
\begin{thm} \label{thrm: PW convergence} Let $0<\epsilon<\frac{1}{d}$ and let $a\ge 1$. For sufficiently large $k$, depending on $\epsilon, d$, let $(\mu^N_t)_{t\geq 0}$ be a Kac process in dimension $d\geq 3$, and let $\mu_0 \in \mathcal{S}^k$, satisfying the moment bounds \begin{equation} \Lambda_k(\mu^N_0) \leq a; \hspace{1cm} \Lambda_k(\mu_0)\le a. \end{equation} Then for some $C=C(\epsilon, d, k)< \infty$ and $\zeta=\zeta(d)>0$, we have the uniform bound \begin{equation} \sup_{t\geq 0} \hspace{0.1cm} \left\|W\left(\mu^N_t, \phi_t\left(\mu_0\right)\right)\right\|_{L^2(\mathbb{P})} \leq C a \hspace{0.1cm} \left( N^{\epsilon-1/d} +W\left(\mu^N_0, \mu_0\right)^\zeta\right).\end{equation} This generalises, by conditioning, to the case where the initial data $\mu^N_0$ is random, provided that $\mathbb{E}\Lambda_k(\mu^N_0)\le a$. \end{thm} This result is, to the best of our knowledge, new, although an equivalent result is known for Maxwell molecules \cite{CF 2018}. We will see, in Theorem \ref{corr: PW convergence as POC}, that estimates of this form imply the propagation of chaos for hard spheres, in the sense of (\ref{eq: POC}-\ref{eq: IDPOC}), with better rates than found in \cite{M+M} for the hard spheres process. \medskip \\
Our second main theorem controls, in $L^p(\mathbb{P})$, the maximum deviation from the Boltzmann flow up to a time $t_\text{fin}$, in analogy with Proposition \ref{thrm: bad convergence theorem}. We refer to this as a \emph{pathwise, local uniform in time} estimate.
\begin{thm} \label{thrm: Main Local Uniform Estimate} Let $0<\epsilon<\frac{1}{2d}$, $a\ge 1$ and $p\geq 2$. For sufficiently large $k\geq 0$, depending on $\epsilon, d$, let $(\mu^N_t)_{t\geq 0}$ be a Kac process on $N\geq 2$ particles and let $\mu_0 \in \mathcal{S}^k$, with initial moments \begin{equation} \Lambda_{kp}(\mu^N_0)\le a^p ;\hspace{1cm}\Lambda_k(\mu_0)\le a. \end{equation} For some $\alpha=\alpha(\epsilon, d, p)>0$ and $C=C(\epsilon, d, p, k)< \infty$ and $\zeta=\zeta(d)>0$, we can estimate, for all $t_\text{fin}\ge 0$, \begin{equation} \left\|\hspace{0.1cm}\sup_{t\leq t_\text{fin}} \hspace{0.1cm} W\left(\mu^N_t, \phi_t(\mu_0)\right) \hspace{0.1cm}\right\|_{L^p(\mathbb{P})} \leq Ca\left( (1+t_\text{fin})^{1/p}\hspace{0.1cm} N^{-\alpha} + W(\mu^N_0, \mu_0)^\zeta\right).\end{equation} The exponent $\alpha$ is given explicitly by \begin{equation} \alpha = \frac{p'}{2d}-\epsilon \end{equation} where $1<p'\le 2$ is the H\"{o}lder conjugate to $p$. \end{thm} At the end of this section, we will discuss related results, and how they may be compared to this estimate. \medskip\\
An unfortunate feature of these approximation theorems is the dependence on the unknown, and potentially large, moment index $k$; a trivial reformulation which avoids this is to ask instead for an exponential moment bound $\langle e^{z|v|},\mu^N_0\rangle \le b$, for some $z>0$. We will also prove the following variant of the theorems above which allows us to use any moment estimate higher than the second. \begin{thm}[Convergence with few moment estimates]\label{thm: low moment regime} Let $k>2$ and $a\ge 1$. Let $(\mu^N_t)$ be an $N$-particle Kac process, and $\mu_0$ in $\mathcal{S}$ with initial moment estimates \begin{equation} \Lambda_k(\mu^N_0) \le a;\hspace{1cm} \Lambda_k(\mu_0) \le a. \end{equation} There exists $\epsilon=\epsilon(d,k)>0$ and a constant $C=C(d,k)$ such that \begin{equation} \label{eq: pw convergence with few moments}\sup_{t\ge 0} \left\|W\left(\mu^N_t, \phi_t(\mu_0)\right)\right\|_{L^1(\mathbb{P})} \le Ca(N^{-\epsilon}+W(\mu^N_0,\mu_0)^\epsilon).\end{equation} For a local uniform estimate, if $p\ge 2$, then there exists a constant $C=C(d,k,p)$ and $\epsilon=\epsilon(d,k,p)>0$ such that, for all $t_\mathrm{fin}<\infty$, \begin{equation} \left\|\sup_{t\le t_\mathrm{fin}}W\left(\mu^N_t, \phi_t(\mu_0)\right)\right\|_{L^1(\mathbb{P})} \le Ca((1+t_\mathrm{fin})^{1/p}N^{-\epsilon}+W(\mu^N_0,\mu_0)^\epsilon).\end{equation} \end{thm} In the course of proving this result, we will see that the higher moment conditions are only required to obtain the optimal rates on a very short time interval $[0, u_N]$ and, in particular, we can obtain very good time-dependence without higher moment estimates. \medskip \\
We also study the long-time behaviour of the Kac Process. We cannot extend Theorem \ref{thrm: Main Local Uniform Estimate} to control the maximum deviations over all times $t\geq 0$, due to the following recurrence features of the Kac process.
\begin{thm}\label{thrm: No Uniform Estimate} There exists a universal constant $C>0$ such that, for every $N$, for every $k> 2$ and $a>1$, there exists a Kac process $(\mu^N_t)_{t\geq 0}$ with initial moment $\Lambda_{k}(\mu^N_0)\le a$ but, almost surely, \begin{equation} \label{eq: conclusion of 1.5} \limsup_{t\rightarrow \infty} \hspace{0.1cm} W\left(\mu^N_t, \phi_t(\mu^N_0)\right) \geq 1-\frac{C}{\sqrt{N}}.\end{equation} Hence we cannot omit the factor of $(1+t_\text{fin})^{1/p}$ in Theorem \ref{thrm: Main Local Uniform Estimate}. \end{thm} In keeping with the terminology above, we say that there is no \emph{pathwise, uniform in time} estimate. In the course of proving Theorem \ref{thrm: No Uniform Estimate}, we will show that the long-time deviation (\ref{eq: conclusion of 1.5}) is typical for the Kac process. We will show that the Kac process returns, infinitely often, to `highly ordered' subsets of $\mathcal{S}_N$, which are far from the Boltzmann flow. However, we make the following remark on the times necessary for such deviations to occur. \begin{corollary}\label{corr: variation 3} Define \begin{equation} T_{N, \epsilon}=\inf\left\{t\geq 0: W(\mu^N_t, \phi_t(\mu_0))>\epsilon \right\}.\end{equation} Let $(\mu^N_t)$ be a family of Kac processes with an initial exponential moment bound: $\langle e^{z|v|}, \mu^N_0\rangle \leq b$, for some $z>0$ and $b>0$. Let $\mu_0\in\mathcal{S}$ satisfy $\langle e^{z|v|}, \mu_0\rangle \le b$, and suppose that $W(\mu^N_0, \mu_0)\rightarrow 0$ in probability. \medskip\\ Let $t_{N, \epsilon, \delta}$ be the quantile constants of $T_{N, \epsilon}$ under $\mathbb{P}$; that is, \begin{equation} \mathbb{P}(T_{N, \epsilon} \leq t_{N, \epsilon, \delta})\geq \delta. \end{equation} Then, for fixed $\epsilon, \delta >0$, $t_{N, \epsilon, \delta}\rightarrow \infty$, faster than any power of $N$. \end{corollary} This follows as an immediate consequence of Theorem \ref{thrm: Main Local Uniform Estimate}. 
Taken together with Theorem \ref{thrm: No Uniform Estimate}, we see that macroscopic deviations occur, but typically at times growing faster than any power of $N$.
In the course of proving Theorems \ref{thrm: PW convergence}, \ref{thrm: Main Local Uniform Estimate}, we will establish the following continuity estimate for the Boltzmann flow $\phi_t$ measured in the Wasserstein distance $W$, which may be of independent interest. \begin{thm} \label{thrm: W-W continuity of phit} There exist constants $k, C, w$ depending only on $d$ such that, whenever $a\ge 1$ and $\mu, \nu \in \mathcal{S}^k_a$, we have the estimate \begin{equation} W\left(\phi_t(\mu), \phi_t(\nu)\right) \le Ce^{wt}a W(\mu, \nu). \end{equation} Moreover, for all $k>2$, there exist constants $C=C(k,d)$ and $\zeta=\zeta(k,d)>0$ such that, whenever $\mu, \nu \in \mathcal{S}^k_a$, we have the estimate \begin{equation} \label{eq: good continuity estimate} \sup_{t\ge 0} \hspace{0.1cm}W\left(\phi_t(\mu),\phi_t(\nu)\right) \le C a W(\mu, \nu)^\zeta.\end{equation} \end{thm} In the second part of the theorem, and in Theorems \ref{thrm: PW convergence}, \ref{thrm: Main Local Uniform Estimate} above, the exponent $\zeta$ can be taken to be $\lambda_0/({\lambda_0+2w})$ by making $k$ large enough, where $w$ is as in the first part of the theorem, and $\lambda_0=\lambda_0(d)>0$ is the spectral gap of the linearised Boltzmann operator. While it may be possible to obtain better continuity results, with $\zeta$ close to 1, we will not explore this here.\medskip \\
Due to a result of Sznitman \cite{Sznitman Chaos}, the property of chaoticity is equivalent to convergence of the empirical measures in expected Wasserstein distance $W$. Therefore, as mentioned before, the theorems displayed above are closely related to the propagation of chaos for the hard-spheres Kac process, proven in \cite{M+M}. We now give a chaoticity result which may be derived from the previous theorems. \begin{thm}[Theorems \ref{thrm: PW convergence}, \ref{thm: low moment regime} as a Chaos Estimate] \label{corr: PW convergence as POC} We can view Theorems \ref{thrm: PW convergence}, \ref{thm: low moment regime} as \emph{Propagation of Chaos} and \emph{Conditional Propagation of Chaos}, as follows. \medskip \\ We denote by $\mathcal{P}_t^N(\mathcal{V}^N, \cdot)$ the transition probabilities of the $N$-particle labelled Kac process, started at $\mathcal{V}^N\in\mathbb{S}^N$. We form the symmetrised version, which we denote by $\mathcal{P}^N_t(\mu^N, \cdot)$, given by \begin{equation} \mathcal{P}^N_t(\mu^N, A)=\frac{1}{\#\theta_N^{-1}(\mu^N)}\hspace{0.1cm}\sum_{\mathcal{V}^N \in \theta_N^{-1}(\mu^N)}\mathcal{P}^N_t(\mathcal{V}^N, A). \end{equation} Let $k>2$ and $a\ge 1$, and suppose $\mu^N_0 \in \mathcal{S}_N$ satisfies a moment bound $\Lambda_k(\mu^N_0)\le a$. Then we can estimate \begin{equation} \label{eq: CPOC} \sup_{t\ge 0}\hspace{0.1cm}\max_{1\le l\le N} \hspace{0.1cm} \frac{\mathcal{W}_{1,l}\left(\Pi_l[\mathcal{P}^N_t(\mu^N_0, \cdot)], \phi_t(\mu^N_0)^{\otimes l}\right)}{l} \le C\hspace{0.1cm}a \hspace{0.1cm} N^{-\beta} \end{equation} for some constants $C=C(d,k)<\infty$; $\beta=\beta(d,k)>0$. This has the following consequences:
\begin{enumerate}[label=\roman{*}).]
\item \emph{(Chaotic case)} Let $k, a$ be as above, and suppose $\mu_0\in \mathcal{S}$ satisfies $\Lambda_k(\mu_0)\le a.$ \medskip \\ Construct initial data $\mathcal{V}^N_0=(v_1(0),...,v_N(0))$ as follows. Let $u_1, ..., u_N$ be an independent and identically distributed sample from $\mu_0$. Define \begin{equation} \overline{u}_N=\frac{1}{N}\sum_{i=1}^N u_i; \hspace{1cm}s_N=\frac{1}{N}\sum_{i=1}^N|u_i-\overline{u}_N|^2\end{equation} and set \begin{equation} v_i(0)=s_N^{-1/2}(u_i-\overline{u}_N),\hspace{0.3cm} i=1,2,...,N;\hspace{0.6cm} \mathcal{V}^N_0=(v_1(0),...,v_N(0)). \end{equation} Let $\mathcal{V}^N_t$ be a labelled Kac process starting from $\mathcal{V}^N_0$. Then there exist constants $C=C(d,k)<\infty$; $\beta=\beta(d,k)>0$ such that \begin{equation} \sup_{t\ge 0}\hspace{0.1cm}\max_{1\le l\le N} \hspace{0.1cm} \frac{\mathcal{W}_{1,l}\left(\Pi_l[\mathcal{LV}^N_t], \phi_t(\mu_0)^{\otimes l}\right)}{l} \le C\hspace{0.1cm}N^{-\beta}. \end{equation}
\item \emph{(General Case)} Let $a, k$ be as above, and suppose that $(\mathcal{V}^N_t)_{t\ge 0}$ are labelled Kac processes such that the empirical measures $\mu^N_0$ satisfy \begin{equation} \mathbb{E}\Lambda_k(\mu^N_0) \le a.\end{equation} Then we have the estimate \begin{equation} \sup_{t\ge 0}\hspace{0.1cm}\max_{1\le l\le N} \hspace{0.1cm} \frac{\mathcal{W}_{1,l}\left(\Pi_l[\mathcal{LV}^N_t], \mathcal{L}^l_t\right)}{l} \le C\hspace{0.1cm}a\hspace{0.1cm}N^{-\beta} \end{equation} for $C$ and $\beta$ as in the main statement, and where $\mathcal{L}^l_t$ is the probability measure given by \begin{equation} \label{eq: defn of ll} \mathcal{L}^l_t=\mathbb{E}\left[\phi_t(\mu^N_0)^{\otimes l}\right]. \end{equation} \end{enumerate} \end{thm}
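The construction of chaotic initial data in point (i.) is easily implemented: centring and rescaling an i.i.d. sample places the configuration exactly on the labelled Boltzmann sphere $\mathbb{S}^N$. A minimal Python sketch, illustrative only:

```python
import numpy as np

def chaotic_initial_data(u):
    """Centre and rescale an i.i.d. sample u (shape (N, d)) from mu_0, setting
    v_i = s_N^{-1/2} (u_i - u_bar), so that sum_i v_i = 0 and sum_i |v_i|^2 = N."""
    u_bar = u.mean(axis=0)                      # empirical mean
    s = ((u - u_bar)**2).sum(axis=1).mean()     # s_N = N^{-1} sum |u_i - u_bar|^2
    return (u - u_bar) / np.sqrt(s)

rng = np.random.default_rng(3)
V0 = chaotic_initial_data(rng.standard_normal((100, 3)))
assert np.allclose(V0.sum(axis=0), 0.0)             # zero total momentum
assert np.isclose((V0**2).sum(axis=1).mean(), 1.0)  # total energy N
```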
\begin{remark} \begin{enumerate}[label=\roman{*}).]
\item Roughly, (\ref{eq: CPOC}) says that, conditional on the observation of the empirical data $\mu^N_0$ at time $0$, the law $\mathcal{L}\mathcal{V}^N_t$ is quantitatively $\phi_t(\mu^N_0)$-chaotic. This may be viewed as propagation of chaos, with the heuristic that `conditional on $\mu^N_0, \mathcal{V}^N_0$ is $\mu^N_0$-chaotic'. We term this \emph{conditional propagation of chaos}. In this spirit, we may view the main estimate (\ref{eq: CPOC}) and point (ii.) as a \emph{quenched} and \emph{annealed} pair.
\item The polynomial result obtained here improves on the previously known result \cite[Theorem 6.2]{M+M} for the hard spheres chaos. This improvement is due to the continuity estimate (\ref{eq: good continuity estimate}), which improves on the corresponding estimate in \cite[Equations 6.39, 6.42]{M+M}; we could derive the chaoticity estimate (\ref{eq: CPOC}) by using the estimate (\ref{eq: good continuity estimate}) in the arguments of \cite[Section 6]{M+M}, at the cost of potentially requiring a stronger initial moment control. We will recall the relevant arguments for completeness, and this will be discussed in the literature review.
\item This construction of chaotic initial data in point (i.) is due to \cite[Proposition 9.2]{ACE}, which may be thought of as `as close to perfect independence as possible'.
\item We will show that the main point can be deduced from Theorem \ref{thrm: PW convergence} or \ref{thm: low moment regime}. However, we will see in Section \ref{sec: proof of POC} that deriving either of these from this result appears to be no less technical than the main proof presented in Section \ref{sec: proof of pw}. \end{enumerate} \end{remark} In our arguments, we will frequently encounter numerical constants which are ultimately absorbed into the constants $C$ whose dependence is specified in the relevant theorem. To ease notation, we will denote inequality, up to such a constant, by $\lesssim$.
\subsection{Plan of the paper} Our programme will be as follows: \begin{enumerate} [label=\text{\roman*.}]
\item In the remainder of this section, we will present a review of known results in the study of the Kac process and similar models. We will then discuss several aspects of our results, and how they may be interpreted.
\item For later convenience, we discuss some classical moment estimates for the Kac process and the Boltzmann equation. These allow us to stochastically control the weights $\Lambda_k$ in appropriate $L^p$ spaces.
\item We cite the analytical \emph{regularity and stability estimates} from Mischler and Mouhot, \cite{M+M}. The stability estimates, in particular, are crucial to obtaining the good time-dependence in Theorems \ref{thrm: PW convergence}, \ref{thrm: Main Local Uniform Estimate}.
\item As a first application of the stability estimates, we analyse the continuity of the Boltzmann flow $\phi_t$ on subsets $\mathcal{S}^k_a$, with respect to the metric $W$, and uniformly in time. This is the content of Theorem \ref{thrm: W-W continuity of phit}, and allows us to reduce Theorems \ref{thrm: PW convergence}, \ref{thrm: Main Local Uniform Estimate} to the special case $\mu_0=\mu^N_0$.
\item We use ideas of infinite-dimensional differential calculus, developed by \cite{M+M}, to prove an \emph{interpolation decomposition} of the difference $\mu^N_t - \phi_t(\mu^N_0)$. This is the key identity used for the proofs of Theorems \ref{thrm: PW convergence}, \ref{thrm: Main Local Uniform Estimate}, as all of the terms appearing in our formula can be controlled by the stability estimates.
\item We then turn to the proof of Theorem \ref{thrm: PW convergence}. The main technical aspect is the control of a family of martingales $(M^{N,f}_t)_{f\in \mathcal{A}}$, uniformly in $f$. This is obtained using a quantitative compactness argument similar to that in \cite{ACE}.
\item For a local uniform analysis, we first adapt the ideas of Theorem \ref{thrm: PW convergence} to a local uniform setting, to state a local uniform martingale estimate, and deduce a preliminary, weak version of Theorem \ref{thrm: Main Local Uniform Estimate} with worse dependence in $t_\text{fin}$. We then use the stability estimates to `bootstrap' to the improved estimate of Theorem \ref{thrm: Main Local Uniform Estimate}, and finally return to prove the local martingale estimate.
\item We next prove Theorem \ref{thm: low moment regime}. The strategy here is to use a localised form of the main argument from \cite{ACE} to control behaviour on a very short time interval $[0, u_N]$, and use the previous results, together with the \emph{moment production} property recalled in Section \ref{sec:moment estimates}, to control behaviour at times larger than $u_N$.
\item We prove Theorem \ref{thrm: No Uniform Estimate}, based on relaxation to equilibrium.
\item Finally, we prove the chaoticity result, Theorem \ref{corr: PW convergence as POC}. This proof follows a similar pattern to the proof in \cite{M+M}, using our estimates. \end{enumerate}
\subsection{Literature Review} We will now briefly discuss related works, to which our results may be compared.
\paragraph{1. Probabilistic Techniques for the Kac Process and Boltzmann Equation} The probabilistic, \emph{pathwise} approach to the Kac process was pioneered by Tanaka \cite{Tanaka 78,Tanaka 02}, who constructed a Markov process describing the velocity of a `typical' particle in the Kac process with Maxwell molecules, and whose law at time $t$ is the solution to the associated Boltzmann equation. This was generalised by Fournier and M\'el\'eard \cite{FM} to include the cases without cutoff, and for non-Maxwellian molecules. A similar idea was used by Rousset \cite{Rousset} to prove convergence to equilibrium as $t\rightarrow \infty$. \medskip \\ Our main convergence results may be compared to the motivating work of Norris \cite{ACE}, of which the main result is recalled in Proposition \ref{thrm: bad convergence theorem} above. Theorem \ref{thrm: Main Local Uniform Estimate} improves on Proposition \ref{thrm: bad convergence theorem} in two notable ways. Firstly, we have much better asymptotic behaviour in the time-horizon $t_\text{fin}$, which was the original motivation for our work. Secondly, we control the deviation in the stronger sense of $L^p$, rather than in probability; this arises as a result of using moment estimates within the framework of a `growth control', rather than excluding events of small probability where the moments are large. We also remark that the analysis of the martingale term in Sections \ref{sec: proof of pw}, \ref{sec: proof of LU} is simplified from the equivalent analysis in \cite[Theorem 1.1]{ACE} by our `interpolation decomposition', Formula \ref{form:newdecomposition}, which removes anticipating behaviour.
\paragraph{2. Propagation of Chaos for the Kac Process} The problem of propagation of chaos for the Kac Process and Boltzmann equation has been extensively studied. The earliest results in this direction are due to McKean \cite{McKean 67}, Gr\"unbaum \cite{Gruenbaum} and Sznitman \cite{Sznitman BE}, and prove the qualitative statement (\ref{eq: POC}) for the cases of the hard spheres kernel considered here, or for the related case of Maxwell molecules. Recent work has produced quantitative estimates: Mischler and Mouhot \cite{M+M} showed propagation of infinite-dimensional chaos (\ref{eq: IDPOC}) for both hard spheres and Maxwell molecules. The estimates are uniform in time, with a quantitative estimate going as $(\log N)^{-r}$ for the hard spheres case. As remarked above, our estimates (Theorems \ref{thrm: PW convergence}, \ref{thm: low moment regime}, \ref{corr: PW convergence as POC}) improve this rate; this improvement is due to the improvement of Theorem \ref{thrm: W-W continuity of phit} over the corresponding estimate in \cite{M+M}, and this will be discussed further below. More recently, \cite{CF 2018} proved a chaoticity estimate for Maxwell molecules in $d=3$, measured in the $L^2(\mathbb{P})$ norm of the Wasserstein$_2$ distance (\ref{eq: definition of Wp}), and with an almost optimal rate $N^{\epsilon-1/3}$, which is almost completely analogous to Theorem \ref{thrm: PW convergence}.
\paragraph{3. Propagation of Chaos for Related Models} We also mention other models in kinetic theory where chaoticity has been studied. Malrieu \cite{Malrieu} studied a McKean-Vlasov model related to granular media equations, and deduced chaoticity for the associated particle system. The main estimate here is uniform in time, similar in nature to Theorem \ref{thrm: PW convergence}. Similarly, Bolley, Guillin and Malrieu \cite{BGM} have proven propagation of chaos for a particle system associated to a Vlasov-Fokker-Planck equation, through a pointwise convergence result. Most recently, Durmus et al. \cite{Durmus} have proved a uniform in time chaoticity estimate based on a coupling approach, for the case with a confinement potential. Both of these models are amenable to the general framework of \cite{M+M}, and propagation of chaos for them has been proven using the same techniques in a companion paper \cite{M+MCompanion}. \medskip \\We may also compare Theorem \ref{thrm: Main Local Uniform Estimate} to a result of Bolley, Guillin and Villani \cite[Theorem 2.9]{BGV}, which proves exponential concentration of the maximum $\sup_{t\le t_\text{fin}} W(\mu^N_t, \phi_t(\mu))$ about $0$, for McKean-Vlasov dynamics. This improves upon the rates $\mathcal{O}(N^{-\infty})$ which would be obtained using Theorem \ref{thrm: Main Local Uniform Estimate}, but does not produce an explicit $L^p(\mathbb{P})$ bound. More recently, Holding \cite{Holding} proved a result similar to Theorem \ref{thrm: Main Local Uniform Estimate} for McKean-Vlasov systems interacting through a H\"older continuous force, in order to deduce propagation of chaos. However, neither of these results tracks the dependence on the terminal time $t_\text{fin}$, and so may exhibit much worse time dependence than our result.
To the best of our knowledge, no local uniform estimate for the McKean-Vlasov system exists which seeks to optimise time dependence in the spirit of Theorem \ref{thrm: Main Local Uniform Estimate}; the applicability of our methods to this system will be considered in the discussion section below. \medskip \\ The notion of chaoticity has also been studied in more abstract settings. Sznitman \cite{Sznitman Chaos} has studied equivalent conditions for a family of measures to be chaotic, and Gottlieb \cite{Gottlieb} has produced a necessary and sufficient condition for families of Markov chains to propagate chaos.
\paragraph{4. Relaxation to Equilibrium of the Kac Process} Kac \cite{FKT} proposed to relate the asymptotic behaviour of the Boltzmann flow $\phi_t(\mu_0)$ to the asymptotic relaxation to equilibrium of the particle system, and conjectured the existence of a spectral gap for the master equation. This has been extensively studied, and Kac's conjecture on the spectral gap answered in the positive \cite{Carlen 00,Janvresse,Carlen 03,Malsen}. However, this does not entirely settle Kac's question on convergence to equilibrium: for chaotic initial data, it still requires times of order $\mathcal{O}(N)$ to show relaxation to equilibrium. Carlen et al. also considered in a later paper \cite{Carlen 08} the more intricate notion of convergence \emph{in relative entropy}, which somewhat avoids this problem. Mischler and Mouhot \cite{M+M} answered Kac's question, proving relaxation to equilibrium in Wasserstein distance, uniformly in $N$, for the cases of hard spheres and Maxwell molecules. \medskip \\ We remark that our philosophy is similar to Kac's proposal. Rather than investigating the long-time behaviour of the \emph{law} $\mathcal{LV}^N_t$ of the Kac process, our results use the asymptotics of the Boltzmann equation to partially understand the asymptotics of \emph{realisations} of the Kac process. Moreover, Theorem \ref{thrm: No Uniform Estimate} shows that this cannot be extended to a complete understanding of the full, long-time asymptotics in this sense.
\subsection{Discussion of Our Results}\label{sec: discussion of our results}
In this subsection, we will discuss the interpretation of our results, especially in view of the framework of chaoticity set out above.
\paragraph{1. Theorems \ref{thrm: PW convergence}, \ref{thrm: Main Local Uniform Estimate} as a pathwise interpretation of the Boltzmann Equation} The main philosophy of our approach follows \cite{ACE}, in considering the Kac process as a Markov chain, and adapting techniques \cite{D&N,PDF} from the general theory of scaling limits of Markov processes. \medskip \\ It is instructive to compare this to the case of a particle system evolving under Vlasov dynamics. In this case, we write $\mu^{N,\text{Vl}}_t$ for the $N$-particle empirical measure, evolving under (nonrandom) Hamiltonian dynamics; Dobrushin \cite{Dobrushin} showed that $\mu^{N, \text{Vl}}_t$ is a weak measure solution to the associated mean field PDE, the Vlasov equation. For the case of Kac dynamics, we may interpret Theorems \ref{thrm: PW convergence}, \ref{thrm: Main Local Uniform Estimate} as saying that \begin{equation} \forall t\ge 0\hspace{1cm} \mu^N_t=\phi_t(\mu^N_0)+\mathcal{N}^N_t\end{equation} where $\mathcal{N}^N_t$ is a stochastic noise term, which is small in an appropriate sense. This is a general phenomenon in the `fluid limit' scaling of Markov processes \cite{D&N,PDF,ACE}. In this sense, we may interpret the Boltzmann equation in a \emph{pathwise} sense; we stress that this interpretation of the Boltzmann equation does \emph{not} require any chaoticity assumptions on the initial data.
\paragraph{2. Theorem \ref{thrm: PW convergence} as Propagation of Chaos} It is natural, and instructive, to compare our chaoticity result Theorem \ref{corr: PW convergence as POC} and our techniques to those of \cite{M+M}, on whose work we build. \medskip \\ In Theorem \ref{corr: PW convergence as POC}, we have improved the rate of chaoticity, from $(\log N)^{-r}$ to a polynomial estimate $N^{-\alpha}$. In proving this result, we will compare our estimates to the estimates of the three error terms $\mathcal{T}_1$, $\mathcal{T}_2$, $\mathcal{T}_3$ in the abstract result \cite[Theorem 3.1]{M+M}:\begin{enumerate}[label=\roman{*}).] \item The first term $\mathcal{T}_1$ is a purely combinatorial term which may be controlled by general, elementary arguments. \item The second error term $\mathcal{T}_2$ may be controlled by $\mathbb{E}W(\phi_t(\mu^N_0), \mu^N_t)$, which is a special case of Theorems \ref{thrm: PW convergence}, \ref{thm: low moment regime} with $\mu_0=\mu^N_0$. \item The third error $\mathcal{T}_3$ depends on the continuity of the Boltzmann flow $\phi_t$ in Wasserstein distance, which is controlled by the H\"older estimates Theorem \ref{thrm: W-W continuity of phit}. \end{enumerate} As mentioned above, the improvement over \cite[Theorem 6.2]{M+M} is due to the improved control on $\mathcal{T}_3$, using the estimate (\ref{eq: good continuity estimate}). The controls on $\mathcal{T}_1, \mathcal{T}_2$ are similar to those in \cite{M+M}, and the claimed result (\ref{eq: CPOC}) follows by using our estimates (\ref{eq: pointwise bound on martingale term}, \ref{eq: good continuity estimate}) in the arguments of \cite[Section 6]{M+M}. In order to give a self-contained proof, we will recall the relevant arguments in Section \ref{sec: proof of POC}. \medskip\\ We also remark that we use each of the assumptions (\textbf{A1}-\textbf{5}) from \cite{M+M} in our analysis: \begin{enumerate}[label=\roman{*}).] 
\item Assumption (\textbf{A1}) corresponds to the moment bounds, which follow from the discussion in Proposition \ref{thrm:momentinequalities}.
\item Assumptions (\textbf{A2}i) and (\textbf{A5}) concern the continuity of the Boltzmann flow $\phi_t$, which is addressed in Theorem \ref{thrm: W-W continuity of phit}. Assumption (\textbf{A2}ii) concerns the continuity of the collision operator $Q$, which is discussed in Section \ref{sec: Regularity and Stability Estimates}.
\item Assumption (\textbf{A3}) is the convergence of the generators. A special case of this is the content of Lemma \ref{lemma:DAP}, which is used to prove our `interpolation decomposition' Formula \ref{form:newdecomposition}.
\item Assumption (\textbf{A4}) is the differential stability of the Boltzmann flow $\phi_t$, recalled in Proposition \ref{thrm: stability for BE}, which is crucial to obtaining estimates with good long-time properties.\end{enumerate} We will also see that, in order to recover Theorem \ref{thrm: PW convergence} from either of the chaoticity results (Theorem \ref{corr: PW convergence as POC} or \cite[Theorem 6.2]{M+M}), we would need to move a supremum over test functions $f$ \emph{inside an expectation}, which corresponds to one of the most technical steps in our proof (Lemmas \ref{thrm: pointwise martingale control}, \ref{thrm: local uniform martingale control}). Moreover, this technique cannot generalise to produce a pathwise, local uniform convergence result analogous to Theorem \ref{thrm: Main Local Uniform Estimate} or Proposition \ref{thrm: bad convergence theorem}.
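Loosely speaking, the gap can be seen as follows; we write, informally, $W(\mu,\nu)$ as a supremum of $|\langle f, \mu-\nu\rangle|$ over the test class $\mathcal{A}$. A chaoticity estimate controls $\mathbb{E}\left|\langle f, \mu^N_t-\phi_t(\mu^N_0)\rangle\right|$ for each fixed test function $f$, and hence only the left-hand side of \begin{equation} \sup_{f\in\mathcal{A}}\mathbb{E}\left|\left\langle f, \mu^N_t-\phi_t(\mu^N_0)\right\rangle\right| \le \mathbb{E}\sup_{f\in\mathcal{A}}\left|\left\langle f, \mu^N_t-\phi_t(\mu^N_0)\right\rangle\right|, \end{equation} whereas Theorems \ref{thrm: PW convergence}, \ref{thrm: Main Local Uniform Estimate} control the right-hand side, in which the supremum appears inside the expectation. Since the inequality may be strict, the pathwise statements do not follow formally from chaoticity.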
\paragraph{3. Theorems \ref{thrm: PW convergence}, \ref{thrm: Main Local Uniform Estimate} without chaoticity} We also remark that neither of the approximation results Theorems \ref{thrm: PW convergence}, \ref{thrm: Main Local Uniform Estimate} requires special preparation of the initial data, beyond a moment estimate; in particular, both are valid even if the initial data $\mathcal{V}^N_0$ are not chaotic. We will now give an explicit example of a distribution where this chaoticity property fails. \begin{example}[Non-chaotic initial data]\label{ex: nonchaotic initial data} Assume that $N$ is a multiple of $2^d$. Choose $\Sigma \in S^{d-1}$ uniformly at random, and let $P_1, P_2,...,P_{2^d}$ be the $2^d$ points obtained from $\Sigma$ by all reflections in coordinate axes. Let $\mathcal{V}^N_0$ be obtained by assigning velocity $P_i$ to $\frac{N}{2^d}$ particles, for each $i=1,...,2^d$, in such a way that the resulting law $\mathcal{LV}^N_0$ is symmetric. Then each marginal distribution is the uniform distribution $\text{Uniform}(S^{d-1}) \in \mathcal{S}$, but there exists a constant $\delta>0$, uniform in $N$, such that \begin{equation} W\left(\mu^N_0, \text{Uniform}(S^{d-1})\right) \ge \delta >0\end{equation} almost surely, where $\mu^N_0$ is the empirical measure of $\mathcal{V}^N_0$. In particular, by Sznitman's characterisation, $\mathcal{V}^N_0$ is not $\text{Uniform}(S^{d-1})$-chaotic. \end{example}
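We sketch why the uniform lower bound holds; the argument is elementary, assuming only that $W$ dominates a bounded-Lipschitz type distance on measures. The empirical measure $\mu^N_0$ is supported on the $2^d$ points $P_1,...,P_{2^d}$, and since $2^d$ geodesic caps of sufficiently small radius cannot cover $S^{d-1}$, there exist $r=r(d)>0$ and $x\in S^{d-1}$ with $|x-P_i|\ge r$ for every $i$. The $1$-Lipschitz function $f(v)=(r-|v-x|)_+$ then vanishes on the support of $\mu^N_0$, while $\langle f, \text{Uniform}(S^{d-1})\rangle \ge c(d)>0$, since the uniform measure assigns mass depending only on $d$ to the cap of radius $r/2$ about $x$. Hence \begin{equation} W\left(\mu^N_0, \text{Uniform}(S^{d-1})\right) \gtrsim \left\langle f, \text{Uniform}(S^{d-1})-\mu^N_0\right\rangle \ge c(d) \end{equation} almost surely, uniformly in $N$.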
In cases such as this, we may still understand the Boltzmann equation as `nearly' holding pathwise, in the sense of point 1. Alternatively, we may view the result Theorem \ref{thrm: PW convergence}, and its consequence in Theorem \ref{corr: PW convergence as POC}, as a chaoticity estimate for $\mathcal{V}^N_t$ about $\phi_t(\mu^N_0)$, \emph{conditional on the initial measure $\mu^N_0$}.
\paragraph{4. Theorem \ref{thrm: No Uniform Estimate} in view of the $H$-Theorem}
As commented after the statement of Theorem \ref{thrm: No Uniform Estimate}, the key idea of the proof of Theorem \ref{thrm: No Uniform Estimate} is that the Kac process $\mu^N_t$ will, infinitely often, return to `highly ordered' subsets of the state space $\mathcal{S}_N$. However, this appears to contradict a na\"ive statement of Boltzmann's celebrated $H$-Theorem \cite{Boltzmann H Thrm}, that \emph{``entropy increases"}. Indeed, this is highly reminiscent of Zermelo's objection, based on Poincar\'e recurrence of deterministic dynamical systems \cite{Zermelo H Thrm}. \medskip \\ However, our results are compatible with the $H$-Theorem, which is rigorously established in \cite{M+M}. This apparent paradox arises because the $H$-functional, representing the negative of entropy, is a \emph{statistical}, and not \emph{pathwise}, concept; that is, $H_t$ depends on the data $\mathcal{V}^N_t$ through the law $\mathcal{LV}^N_t$, rather than being a random variable depending directly on a particular observation $\mathcal{V}^N_t(\omega)$. In particular, for our case, the time $T_N$ of reaching the `ordered state' is a large, random time, and observing a particular realisation $T_N(\omega)=t$ tells us very little about the general behaviour $\mathcal{LV}^N_t$, and so about the entropy at time $t$.
\paragraph{5. Sharpness of our Results} We will now discuss how sharp the main results (Theorems \ref{thrm: PW convergence}, \ref{thrm: Main Local Uniform Estimate}) are, with regard to the dependence on $N$, and on the terminal time $t_\text{fin}$ in the case of Theorem \ref{thrm: Main Local Uniform Estimate}. \\
\subparagraph{5a. $N$-dependence} It is instructive to first consider the `optimal' case of independent particles, for which the empirical measure converges in Wasserstein distance at rate $N^{-1/d}$. More precisely, for $d\ge 3$, let $\mu \in \mathcal{S}^k_a$ for $k\ge \frac{3d}{d-1}$, and let $\mu^N$ be an empirical measure for $N$ independent draws from $\mu$. Then, for some $C=C(a, k, d)$, we have\begin{equation} \left\|W(\mu^N, \mu)\right\|_{L^2(\mathbb{P})} \le C N^{-1/d}. \end{equation} This is shown in \cite[Proposition 9.3]{ACE}. Moreover, this rate is optimal: if $\mu$ is absolutely continuous with respect to the underlying Lebesgue measure, then the optimal approximation in the $W$ metric is of the order $N^{-1/d}$, for $d\ge 3$. Results of Talagrand (\cite{T1,T2}, and discussion in \cite{Wasserstein}) suggest that this may also be true for higher $L^p$ norms, at least for the simple case of the uniform distribution on $(-1, 1]^d$. \medskip \\ In view of this, we see that the exponent for the pointwise bound is \emph{almost sharp}, in the sense that we obtain exponents $\epsilon-\frac{1}{d}$ which are arbitrarily close to the optimal exponent $-\frac{1}{d}$, but cannot obtain the optimal exponent itself. This appears to be a consequence of using a particular estimate (\ref{eq: stability for BE 1}) from \cite{M+M}, which is `almost Lipschitz' in a similar sense. For the local uniform estimate Theorem \ref{thrm: Main Local Uniform Estimate}, we obtain the exponent $-\alpha$, where $\alpha$ is given by \begin{equation} \alpha=-\epsilon+\frac{p'}{2d}; \hspace{1cm} \frac{1}{p}+\frac{1}{p'}=1.\end{equation} In the special case $p=2$, this produces the almost sharp exponent as discussed above. However, for $p>2$, the exponents are bounded away from $-\frac{1}{d}$, and so do not appear to be sharp. \\
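For concreteness, we may evaluate this formula at the two extremes. At $p=2$ we have $p'=2$, so that \begin{equation} \alpha = \frac{1}{d}-\epsilon \end{equation} and we recover the almost sharp exponent, while as $p\rightarrow \infty$ we have $p'\downarrow 1$, and \begin{equation} \alpha \rightarrow \frac{1}{2d}-\epsilon, \end{equation} which is bounded away from the optimal $\frac{1}{d}$, but remains bounded away from $0$ for $\epsilon$ small.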
\subparagraph{5b. Time Dependence} In light of Theorem \ref{thrm: No Uniform Estimate}, we see that we cannot exclude the factor $(1+t_\text{fin})^{1/p}$ in Theorem \ref{thrm: Main Local Uniform Estimate}. Hence, this time dependence is sharp \emph{among power laws}. However, we do not know what the \emph{true} sharpest time-dependence is. Similar techniques to those of Graversen and Peskir \cite{GP} may be able to provide a sharper bound; we do not explore this here.\medskip \\ We remark that Theorem \ref{thrm: Main Local Uniform Estimate} interpolates between almost optimal $N$ dependence at $p=2$, and almost optimal $t_\text{fin}$ dependence as $p\rightarrow \infty$. Moreover, by taking $p\rightarrow \infty$, we sacrifice optimal dependence in $N$, but the exponent $\alpha(d,p)$ is bounded away from $0$, and so we have good convergence, on any polynomial time scale. This is the content of Corollary \ref{corr: variation 3}.
\paragraph{6. Further Applicability of our Methods in Kinetic Theory} Finally, we will mention other models in kinetic theory which may be amenable to our techniques.
\begin{enumerate}[label=\alph{*}).] \item \emph{Sharp $N$ dependence for hard spheres.} We believe that our techniques could be modified to prove versions of Theorem \ref{thrm: PW convergence}, and of Theorem \ref{thrm: Main Local Uniform Estimate} in the case $p=2$, which obtain the optimal rate $N^{-1/d}$ discussed above; however, this would likely come at the cost of poor dependence on time. Since a similar result (Proposition \ref{thrm: bad convergence theorem}) is already known, and since the spirit of this work is to optimise time dependence, we will not consider this further. \\ \item \emph{The Kac process on Maxwell Molecules.} In addition to the hard spheres case analysed here, the main collision kernel of physical interest is the case of \emph{Maxwell molecules}, with or without cutoff. Many of the estimates used in our argument for the hard spheres kernel have an analogous version for Maxwell molecules, including the stability estimates proven in \cite{M+M}. For this case, a result similar to Theorem \ref{thrm: PW convergence} is already known \cite[Theorem 2]{CF 2018}.\\
\item \emph{McKean-Vlasov Dynamics, and Inelastic Collisions.} Other kinetic systems which may be analysed in the framework of \cite{M+M} include cases of \emph{McKean-Vlasov} dynamics, and \emph{inelastic collisions, coupled to a heat bath}, which have been studied in the functional framework of \cite{M+M} by Mischler, Mouhot and Wennberg in a companion paper \cite{M+MCompanion}. In these cases, the analogous estimates for stability and differentiability, computed in \cite{M+MCompanion}, have potentially poor dependence on time. As a result, our methods would still apply, but with correspondingly poor time dependence. \medskip \\ For the case of McKean-Vlasov dynamics without confinement potential, this is a fundamental limitation; Malrieu \cite{Malrieu} showed that the propagation of chaos is \emph{not} uniform in time. Instead, he proposed to study a \emph{projected} particle system, which satisfies uniform propagation of chaos, and whose limiting flow has exponential convergence to equilibrium \cite[Theorem 6.2]{Malrieu}. This suggests that it may be possible to use the bootstrap method from the proof of Theorem \ref{thrm: Main Local Uniform Estimate} to obtain pathwise estimates with good long-time properties, analogous to Theorem \ref{thrm: Main Local Uniform Estimate}. \medskip \\ We remark that, in the case of McKean-Vlasov dynamics, the presence of Brownian noise may complicate the derivation of the interpolation decomposition (Formula \ref{form:newdecomposition}), which is the key identity required for our argument. \end{enumerate}
\subsection*{Acknowledgements} I am grateful to my supervisor, James Norris, for the suggestion of this project and for several useful remarks which allowed me to strengthen the results, and to Cl\'ement Mouhot, for a useful conversation concerning the interpretation of our results. I would also like to express my gratitude to the two anonymous reviewers, whose suggestions and comments over the course of two iterations led to several substantial strengthenings of the results. \section{Moment estimates}\label{sec:moment estimates}
In order to deal with the appearance of the moment-based weights $\Lambda_k$ in future calculations, we discuss the moment structure of the Kac process and the Boltzmann equation. That is, we seek bounds on $\Lambda_k(\mu_t)$ where $\mu_t$ is, correspondingly, either a Kac process, or a solution to the Boltzmann equation.\medskip \\ The results presented here are mostly classical, and the arguments are well-known for the Boltzmann equation. Central to the proof is an inequality due to Povzner \cite{Povzner}, from which Elmroth \cite{Elmroth} deduced global moment bounds for the (function-valued) Boltzmann equation in terms of the moments of the initial data. This conclusion was strengthened to moment \emph{production} by Desvillettes \cite{Desvilettes}, provided control of an initial moment $\Lambda_s(\mu_0)$ for some $s>2$. Wennberg \cite{Wennberg,Wennberg Mischler} demonstrated an optimal version of this result, only requiring finite initial energy $\langle |v|^2, \mu_0\rangle$. Bobylev \cite{Bobylev} proved propagation of exponential moments, which may also be applied here as a simplification. These results have been extended to measure-valued solutions of the Boltzmann equation by Lu and Mouhot \cite{L&M}, and the techniques have been applied to the Kac process by Mischler and Mouhot \cite{M+M} and Norris \cite{ACE}. We collect below the precise results which we will use.
\begin{proposition} [Moment Inequalities for the Kac Process and Boltzmann Equation] \label{thrm:momentinequalities} We have the following bounds on polynomial velocity moments: \\
\begin{enumerate}[label={(\roman*.)},ref={2.\roman*.}] \item \label{lemma:momentboundpt1} Let $(\mu^N_t)_{t\geq 0}$ be a Kac process on $N\geq 1$ particles, and let $q>2$, $p\ge 2$ with $q\ge p$. Then there exists a constant $C(p,q)<\infty$ such that, for all $t\ge 0$, \begin{equation} \label{eq: pointwise moment bound} \mathbb{E}\left[ \Lambda_q(\mu^N_t) \right] \leq C(1+t^{p-q})\Lambda_p(\mu^N_0)\end{equation} and, for another constant $C=C(q)$, \begin{equation} \label{eq: local uniform moment bound} \mathbb{E}\left(\sup_{0\leq t \leq t_\text{fin}} \Lambda_q(\mu^N_t)\right) \leq (1+C(q)t_\text{fin})\Lambda_q(\mu^N_0).\end{equation}
\item \label{lemma:momentboundpt2} Let $p, q$ be as above, and let $\mu_0\in \cup_{k>2}\mathcal{S}^k$. Then there exists a constant $C=C(p,q)$ such that the solution $\phi_t(\mu_0)$ to (\ref{BE}) satisfies \begin{equation}\label{eq: BE moment bound} \Lambda_q(\phi_t(\mu_0)) \le C(1+t^{p-q})\Lambda_p(\mu_0).\end{equation} \item There exist constants $C_1, C_2 < \infty$ such that, whenever $\mu_0 \in \cup_{k>2} \mathcal{S}^k$, we have the bound, for all $t\ge 0$, \begin{equation} \int_0^t \Lambda_3(\phi_s(\mu_0))ds \le C_1 t+C_2\langle\hspace{0.05cm} (1+|v|^2) \hspace{0.05cm}\log(1+|v|^2),\mu_0\rangle.\end{equation} As a consequence, if $c\ge0$, then there exist $w<\infty, k<\infty$ such that, for all $t\ge 0$, \begin{equation} \exp\left(c\int_0^t \Lambda_3(\phi_s(\mu_0))ds\right)\le e^{wt}\Lambda_k(\mu_0). \end{equation}
\end{enumerate} \end{proposition} The first item is exactly \cite[Proposition 3.1]{ACE}. For the second item, if $\phi_t(\mu_0)$ is locally $\mathcal{S}^k$ bounded for all $k$, then we can apply the same reasoning as the cited proposition to the Boltzmann equation. To remove this condition, we consider the Boltzmann equation started from $\mu_\delta=\phi_\delta(\mu_0)$: thanks to the qualitative moment creation property \cite{Desvilettes, Wennberg Mischler}, the Boltzmann flow started at $\mu_\delta$ is locally $\mathcal{S}^k$ bounded for all $k$, and so the claimed result holds with $\mu_\delta$ in place of $\mu_0$. The claimed result may then be obtained by carefully taking the limit $\delta\downarrow 0$. \medskip \\ The first conclusion of item iii. is proven in \cite[Equation 6.20]{M+M}, and the final point follows, using the interpolation, for all $\mu \in \mathcal{S}$,\begin{equation} \langle (1+|v|^2) \hspace{0.05cm}\log(1+|v|^2),\mu\rangle \le 8(1+\log \Lambda_5(\mu)). \end{equation} In our estimates for the various terms of the interpolation decomposition, we will frequently encounter the weightings $\Lambda_k(\mu^N_t)$ appearing in the integrand. We refer to points (i-ii.) of Proposition \ref{thrm:momentinequalities}, along with the following lemma, as \emph{growth control} of the weightings, which allows us to control these factors in suitable $L^p$ norms. \begin{lemma} \label{lemma:momentincreaseatcollision} Let $\left(\mu^N_t\right)_{t\geq 0}$ be a Kac process on $N\geq 1$ particles, and fix an exponent $k\geq 2$. 
Then for any time $t\geq 0$, and any measure $\mu^N$ which can be obtained from $\mu^N_t$ by a collision, \begin{equation} \Lambda_k(\mu^N) \leq 2^{\frac{k}{2}+1} \Lambda_k(\mu^N_t).\end{equation} \end{lemma} \begin{proof} This is immediate, by noting that if $v, v_\star$ are pre-collision velocities leading to post-collision velocities $v', v_\star'$, we have the bound \begin{equation} \begin{split} (1+|v'|^2)^\frac{k}{2} &\le ((1+|v|^2)+(1+|v_\star|^2))^\frac{k}{2} \\ & \le 2^{k/2}((1+|v|^2)^\frac{k}{2}+(1+|v_\star|^2)^\frac{k}{2}). \end{split}\end{equation} Using the same bound for $v_\star'$ leads to the claimed result. \end{proof}
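For the reader's convenience, we spell out the final step; this is a routine calculation, under the standard convention that the empirical measure assigns mass $\frac{1}{N}$ to each particle. If $\mu^N$ is obtained from $\mu^N_t$ by replacing the velocities $v, v_\star$ of two particles by $v', v_\star'$, then \begin{equation} \Lambda_k(\mu^N) = \Lambda_k(\mu^N_t)+\frac{1}{N}\left[(1+|v'|^2)^\frac{k}{2}+(1+|v_\star'|^2)^\frac{k}{2}-(1+|v|^2)^\frac{k}{2}-(1+|v_\star|^2)^\frac{k}{2}\right] \end{equation} and hence, by the elementary bound above applied to both $v'$ and $v_\star'$, \begin{equation} \Lambda_k(\mu^N) \le \Lambda_k(\mu^N_t)+\left(2^{\frac{k}{2}+1}-1\right)\frac{1}{N}\left[(1+|v|^2)^\frac{k}{2}+(1+|v_\star|^2)^\frac{k}{2}\right] \le 2^{\frac{k}{2}+1}\Lambda_k(\mu^N_t), \end{equation} since the two pre-collision terms contribute at most $\Lambda_k(\mu^N_t)$.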
A final property of the weighting estimates which will prove useful is the following correlation inequality: \begin{lemma} \label{lemma: correlation of moments} Let $k_1, k_2 \geq 2$, and let $\mu \in \mathcal{S}^{k_1+k_2}$. Then we have \begin{equation} \Lambda_{k_1}(\mu)\Lambda_{k_2}(\mu)\leq \Lambda_{k_1+k_2}(\mu).\end{equation} \end{lemma} \begin{proof} Since the maps $x\mapsto (1+|x|^2)^{k_i/2}$, for $i=1,2$, are both monotonically increasing on $[0, \infty)$, for any $v, v_\star$ we have the bound \begin{equation} \left\{(1+|v|^2)^{k_1/2}-(1+|v_\star|^2)^{k_1/2}\right\}\left\{(1+|v|^2)^{k_2/2}-(1+|v_\star|^2)^{k_2/2}\right\}\geq 0.\end{equation} Integrating both variables with respect to $\mu$ produces the result. \end{proof}
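As an illustration of how Lemma \ref{lemma: correlation of moments} enters the moment estimates, we sketch the logarithmic interpolation inequality stated after Proposition \ref{thrm:momentinequalities}; this is a heuristic sketch rather than a proof, under the normalisation $\langle 1, \mu\rangle=1$, $\langle |v|^2, \mu\rangle=1$ for $\mu \in \mathcal{S}$, so that $\Lambda_2(\mu)=2$. In this case, $d\nu = \frac{1}{2}(1+|v|^2)d\mu$ defines a probability measure, and Jensen's inequality for the concave function $\log$ gives \begin{equation} \langle (1+|v|^2)\log(1+|v|^2), \mu\rangle = 2\langle \log(1+|v|^2), \nu\rangle \le 2\log \langle 1+|v|^2, \nu\rangle = 2\log\left(\tfrac{1}{2}\Lambda_4(\mu)\right) \le 8(1+\log \Lambda_5(\mu)), \end{equation} since $\Lambda_4(\mu)\le \Lambda_5(\mu)$ and $\Lambda_5(\mu)\ge 1$. The exponential bound in Proposition \ref{thrm:momentinequalities} then follows by exponentiating, using $\Lambda_5(\mu_0)^m \le \Lambda_{5m}(\mu_0)$ for integer $m$, which is obtained by iterating Lemma \ref{lemma: correlation of moments}.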
\section{Regularity and Stability Estimates} \label{sec: Regularity and Stability Estimates}
In this section, we give precise statements of analytical results concerning the flow maps $(\phi_t)_{t\geq 0}$, and the drift operator $Q$, which will be used in our convergence theorems. We need a combination of \emph{regularity} for the drift map $Q$, which appears in the proof of Lemma \ref{thrm: local uniform martingale control}, and \emph{differentiability and stability} results for the flow maps $(\phi_t)_{t\geq 0}$.
\subsection{Stability Estimates}
The key component of our analysis of the Kac process is the \emph{stability} of the limiting Boltzmann equation - that is, that the limit flow suppresses errors, rather than allowing exponential amplification. We begin by defining appropriate linear structures.
\begin{definition}\label{def: weighted normed spaces} Consider the space $Y$ of signed measures, given by \begin{equation} Y=\left\{ \xi: \hspace{0.3cm}\|\xi\|_\mathrm{TV} <\infty;\hspace{0.3cm} \langle 1, \xi \rangle =0\right\}.\end{equation} We equip $Y$ with the total variation norm $\|\cdot\|_\mathrm{TV}$. For real $q\geq 0$, we define the subspace $Y_q$ of measures with finite $q^\text{th}$ moments: \begin{equation} Y_q =\left\{ \xi \in Y: \langle 1+|v|^q, |\xi|\rangle < \infty \right\}. \end{equation} We define the norm with $q$-weighting on $Y_q$ by \begin{equation} \|\xi\|_{\mathrm{TV}+q}=\langle 1+|v|^q, |\xi|\rangle.\end{equation} The notation $\|\cdot\|_{\mathrm{TV}+q}$ is chosen to emphasise that this is a total variation norm, with additional polynomial weighting of order $q$, while avoiding potential ambiguity with the $L^q$ norms of random variables. \end{definition} \begin{remark}\label{rmk: compactness} The total variation norms $\|\cdot\|_{\mathrm{TV}+q}$ appearing in the following analysis are much stronger than the Wasserstein distance appearing in Theorems \ref{thrm: PW convergence}, \ref{thrm: Main Local Uniform Estimate}, \ref{thm: low moment regime}. We can understand this as follows. Recalling the definitions of $\mathcal{A}, \mathcal{A}_0$ in (\ref{eq: defn of script A}, \ref{eq: defn of script A 0}), we note that the $\mathrm{TV}+2$ distance is given by a duality \begin{equation} \|\mu-\nu\|_{\mathrm{TV}+2} =\sup_{f\in \mathcal{A}_0} \hspace{0.1cm} |\langle f, \mu-\nu\rangle| \end{equation} and, if we write $\mathcal{A}|_r, \mathcal{A}_0|_r$ for the restriction of functions to $[-r,r]^d$, then the inclusion \begin{equation} \mathcal{A}|_r\subset \mathcal{A}_0|_r\end{equation} is compact in the norm of $\mathcal{A}_0|_r$, by the classical theorem of Arzel\'a-Ascoli. 
This is at the heart of a \emph{quantitative compactness} argument in Lemmas \ref{thrm: pointwise martingale control}, \ref{thrm: local uniform martingale control}, which allows us to take the supremum over $f\in\mathcal{A}$ inside the expectation. \end{remark}
We can now state the precise results as they appear in \cite[Lemma 6.6]{M+M}:
\begin{proposition}\label{thrm: stability for BE} Let $\eta \in (0,1)$. Then there are absolute constants $C\in (0, \infty)$ and $\lambda_0>0$ such that, for $k$ large enough (depending only on $\eta$), and all $\mu, \nu \in \mathcal{S}^k$, there is a unique solution $(\xi_t)_{t\geq 0} \subset Y_2$ to the linearised differential equation \begin{equation} \label{eq: definition of the difference term} \xi_0=\nu-\mu; \hspace{0.5cm} \partial_t \xi_t = 2Q(\phi_t(\mu), \xi_t).\end{equation} This solution satisfies the bounds \begin{equation} \label{eq: stability for BE 1} \|\phi_t(\nu)-\phi_t(\mu)\|_{\mathrm{TV}+2} \leq C e^{-\lambda_0 t/2} \Lambda_{k}(\mu, \nu)^\frac{1}{2}\|\mu-\nu\|_\mathrm{TV}^\eta; \end{equation} \begin{equation} \label{eq: stability for BE 1.5} \|\xi_t\|_{\mathrm{TV}+2} \leq C e^{-\lambda_0 t/2} \Lambda_{k}(\mu, \nu)^\frac{1}{2}\|\mu-\nu\|_\mathrm{TV}^\eta;\end{equation} \begin{equation}\label{eq: stability for BE 2}
\|\phi_t(\nu)-\phi_t(\mu) - \xi_t \|_{\mathrm{TV}+2} \leq C e^{-\lambda_0 t/2} \Lambda_{k}(\mu, \nu)^\frac{1}{2}\|\mu-\nu\|_\mathrm{TV}^{1+\eta}.\end{equation} This allows us to define a linear map $\mathcal{D}\phi_t(\mu)$ by \begin{equation} \mathcal{D}\phi_t(\mu)[\nu-\mu]:=\xi_t.\end{equation} This linear map will play the r\^ole of a functional derivative for the Boltzmann flow $\phi_t$ in the calculus developed by \cite{M+M}. \end{proposition}
To obtain estimates with the weighted metric $W$, we will use a version of Proposition \ref{thrm: stability for BE} with the difference $\phi_t(\mu)-\phi_t(\nu)$ measured in stronger norms $\|\cdot \|_{\mathrm{TV}+q}$. The following estimate may be obtained by a simple interpolation between Propositions \ref{thrm:momentinequalities}, \ref{thrm: stability for BE}. \begin{corollary} \label{cor: new stability for BE} Let $q\geq 2$, $\eta \in (0,1)$ and $\lambda<\lambda_0$. Then for all $k$ large enough, depending on $\eta, \lambda$ and $q$, there exists a constant $C$ such that \begin{equation} \forall \mu, \nu \in \mathcal{S}^k, \hspace{1cm}\|\phi_t(\mu)-\phi_t(\nu)\|_{\mathrm{TV}+q} \leq C e^{-\lambda t/2} \Lambda_{k}(\mu, \nu)^\frac{1}{2} \|\mu-\nu\|_\mathrm{TV}^{\eta}. \end{equation} \end{corollary} \iffalse \begin{proof} The case $q=2$ is immediate from Proposition \ref{thrm: stability for BE}; for the rest of the proof, assume that $q>2$. Choose $\eta' \in (0,1)$ and $\delta\in \left(0,\frac{1}{2}\right]$ such that \begin{equation} \eta < \eta'(1-\delta) < 1; \hspace{0.5cm} (1-\delta)\lambda_0 > \lambda.\end{equation} Choose $k_0$ large enough, depending on $\eta'$, such that Proposition \ref{thrm: stability for BE} holds with exponent $\eta'$. Let $k$ be given by \begin{equation} k=k_0+\frac{q}{\delta}. \end{equation} Fix $\mu, \nu \in \mathcal{S}^k$, and $t\geq 0$. For ease of notation, write $\theta$ for the total variation measure $\theta=|\phi_t(\mu)-\phi_t(\nu)|$. \medskip\\ Observe that $(1+|v|^q)\lesssim (1+|v|^2)^{1-\delta}(1+|v|^{q'})$, where $q'=q-2(1-\delta)>0$. 
Applying H\"{o}lder's inequality, we obtain \begin{equation} \begin{split} \langle 1+|v|^q, \theta\rangle &\lesssim \langle1+|v|^2, \theta\rangle^{1-\delta}\left\langle (1+|v|^{q'})^\frac{1}{\delta}, \theta\right\rangle^\delta \\ &\lesssim \left(e^{-\lambda_0 t/2} \Lambda_{k_0}(\mu, \nu)^\frac{1}{2}\|\mu-\nu\|_\mathrm{TV}^{\eta'}\right)^{1-\delta} \Lambda_{\frac{q'}{\delta}}(\phi_t(\mu), \phi_t(\nu))^\delta \end{split} \end{equation} Since $k\geq \frac{q'}{\delta}+1$, we can apply Proposition \ref{lemma:momentboundpt2} and the correlation property Lemma \ref{lemma: correlation of moments} to obtain \begin{equation} \begin{split} \langle 1+|v|^q, \theta \rangle & \lesssim e^{\lambda_0 (1-\delta)t/2} \|\mu-\nu\|_\mathrm{TV}^{\eta'(1-\delta)} \Lambda_{k_0}(\mu, \nu)^\frac{1}{2} \Lambda_{\frac{q'}{\delta}}(\mu, \nu)^\frac{1}{2} \\ & \lesssim e^{-\lambda_0(1-\delta) t/2} \|\mu-\nu\|_\mathrm{TV}^{\eta'(1-\delta)} \Lambda_{k}(\mu, \nu)^\frac{1}{2}. \end{split} \end{equation}By the choice of $\delta$, we have the bound desired. \end{proof} \fi We emphasise that the rapid decay is the key property that allows us to obtain good long-time behaviour for our estimates. The pointwise estimate Theorem \ref{thrm: PW convergence} and the initial estimate for pathwise local uniform convergence Lemma \ref{lemma: initial LU bound} would hold for estimates \begin{equation} \label{eq: weaker stability 1} \|\phi_t(\nu)-\phi_t(\mu)\|_{\mathrm{TV}+5} \leq F(t) \Lambda_{k}(\mu, \nu)^\frac{1}{2}\|\mu-\nu\|_\mathrm{TV}^\eta; \end{equation} \begin{equation}\label{eq: weaker stability 2}
\|\phi_t(\nu)-\phi_t(\mu) - \xi_t \|_{\mathrm{TV}+2} \leq G(t) \Lambda_{k}(\mu, \nu)^\frac{1}{2}\|\mu-\nu\|_\mathrm{TV}^{1+\eta}\end{equation} for functions $F,G$ such that \begin{equation} \label{eq: weaker stability 3} \left(\int_0^\infty F^2 dt\right)^{1/2}<\infty;\hspace{0.5cm} \int_0^\infty G dt<\infty. \end{equation} The full strength of exponential decay is used to `bootstrap' to the pathwise local uniform estimate Theorem \ref{thrm: Main Local Uniform Estimate}, which provides better behaviour in the time horizon $t_\text{fin}$, with only a logarithmic loss in the number of particles $N$. Provided that $F\rightarrow 0$ as $t\rightarrow \infty$, we could use the same `bootstrap', but with a potentially much larger loss in $N$.
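In particular, Corollary \ref{cor: new stability for BE} with $q=5$ furnishes an estimate of the form (\ref{eq: weaker stability 1}) with $F(t)=Ce^{-\lambda t/2}$, and for this choice the first condition of (\ref{eq: weaker stability 3}) holds with room to spare: \begin{equation} \left(\int_0^\infty F(t)^2 \hspace{0.1cm} dt\right)^{1/2}=\frac{C}{\sqrt{\lambda}}<\infty. \end{equation}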
\subsection{Regularity Estimates}
For the proof of the local uniform estimate Lemma \ref{thrm: local uniform martingale control}, it will be important to control the continuity of $Q$ \emph{after application of the flow maps} $\phi_t$; for brevity, we will write the composition as $Q_t=Q\circ \phi_t$. We can exploit the use of the stronger $\|\cdot\|_{\mathrm{TV}+2}$ norm in the stability estimates of Proposition \ref{thrm: stability for BE} to prove a strong notion of continuity for $Q_t$, including the dependence on $t$. \medskip \\ It is well known that, for $q\geq 1$ and $\mu, \nu \in \mathcal{S}^{q+1}$, we have the bilinear estimate \begin{equation} \|Q(\mu)-Q(\nu)\|_{\mathrm{TV}+q}\lesssim \Lambda_{q+1}(\mu, \nu)^\frac{1}{2}\|\mu-\nu\|_{\mathrm{TV}+(q+1)} \end{equation} and, by interpolating, this leads to \begin{equation} \label{eq: holder continuity of Q} \|Q(\mu)-Q(\nu)\|_{\mathrm{TV}+q}\lesssim \Lambda_{3(q+1)}(\mu, \nu)^\frac{1}{2}\|\mu-\nu\|_{\mathrm{TV}}^\frac{1}{2}. \end{equation} Combining this with the stability estimate in Corollary \ref{cor: new stability for BE}, we deduce the following. For $q\geq 1$, $\eta \in (0,1)$ and $\lambda<\lambda_0$, there exists $k$ such that, for $\mu, \nu \in \mathcal{S}^k$, we have the estimate \begin{equation} \label{eq: Lipschitz continuity of Q}\|Q_t(\mu)-Q_t(\nu)\|_{\mathrm{TV}+q} \lesssim e^{-\lambda t} \Lambda_{k}(\mu, \nu)^\frac{1}{2}\|\mu-\nu\|_\mathrm{TV}^\eta.\end{equation}
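For completeness, let us sketch the interpolation. By the Cauchy--Schwarz inequality for the total variation measure $|\mu-\nu|\le \mu+\nu$, \begin{equation} \|\mu-\nu\|_{\mathrm{TV}+(q+1)} \le \|\mu-\nu\|_\mathrm{TV}^\frac{1}{2}\hspace{0.1cm}\left\langle (1+|v|^{q+1})^2, \mu+\nu\right\rangle^\frac{1}{2} \lesssim \|\mu-\nu\|_\mathrm{TV}^\frac{1}{2}\hspace{0.1cm}\Lambda_{2(q+1)}(\mu, \nu)^\frac{1}{2} \end{equation} so that, inserting this bound into the bilinear estimate and using the correlation property (Lemma \ref{lemma: correlation of moments}) to combine $\Lambda_{q+1}(\mu, \nu)^\frac{1}{2}\Lambda_{2(q+1)}(\mu, \nu)^\frac{1}{2}\lesssim \Lambda_{3(q+1)}(\mu, \nu)^\frac{1}{2}$, we obtain (\ref{eq: holder continuity of Q}).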
\section{Proof of Theorem \ref{thrm: W-W continuity of phit}}\label{sec: continuity of BE} As a first application of the stability estimates, we will now prove Theorem \ref{thrm: W-W continuity of phit}, which establishes a continuity result for the Boltzmann flow $(\phi_t)$ with respect to our weighted Wasserstein metric $W$. For Theorems \ref{thrm: PW convergence}, \ref{thrm: Main Local Uniform Estimate}, we wish to approximate a given starting point $\mu_0$ by an empirical measure $\mu^N_0 \in \mathcal{S}_N$ on $N$ points; in this context, the total variation distance is too strong, as there is no discrete approximation to any continuous measure $\mu_0$. We therefore seek a continuity estimate for the Boltzmann flow $\phi_t$, measured in the Wasserstein distance $W$ defined in (\ref{eq: definition of W}), and which is uniform in time. \medskip \\ The proof combines a representation formula, and associated estimates, from \cite{ACE}, which establish the first claim; the second claim will then follow using a long-time estimate recalled in Proposition \ref{thrm: stability for BE}. We will first review the definition, and the claimed representation formula, for the Boltzmann flow. \begin{definition}[Linearised Kac Process]\label{def: LKP} Write $V=\mathbb{R}^d$ and $V^*$ for the signed space $V^*=V\times\{\pm 1\}=V^+\sqcup V^-$. We write $\pi: V^*\rightarrow V$ for the projection onto the first factor, and $\pi_\pm: V^\pm\rightarrow V$ for the obvious bijections. \\ Let $(\rho_t)_{t\ge 0}$ be a family of measures on $V=\mathbb{R}^d$ such that \begin{equation} \langle 1, \rho_t \rangle =1;\hspace{1cm} \langle |v|^2, \rho_t\rangle =1;\end{equation} \begin{equation} \label{eq: integrability for environment} \int_0^t \Lambda_3(\rho_s)ds <\infty \hspace{1cm} \text{for all }t<\infty. 
\end{equation} The \emph{Linearised Kac Process} \emph{in environment $(\rho_t)_{t\ge 0}$} is the branching process on $V^*$ where each particle of type $(v,1)$, at rate $2|v-v_\star|\rho(dv_\star) d\sigma$, dies, and is replaced by three particles, of types \begin{equation} (v'(v,v_\star,\sigma),1);\hspace{0.5cm}(v_\star'(v,v_\star, \sigma),1);\hspace{0.5cm}(v_\star,-1) \end{equation} where $v', v_\star'$ are the post-collisional velocities given by (\ref{eq: PCV}). The dynamics are identical for particles of type $(v,-1)$, with the signs exchanged. \medskip \\ We write $\Xi^*_t$ for the associated process of unnormalised empirical measures on $V^*$, and define a signed measure $\Xi_t$ on $V$ by including the sign at each particle: \begin{equation} \Xi_t=\Xi^+_t-\Xi^-_t ; \hspace{1cm} \Xi^\pm_t=\Xi^*_t\circ \pi_\pm^{-1}.\end{equation} We can also consider the same branching process, started from a time $s\ge 0$ instead. We write $E$ for the expectation over the branching process, which is not the full expectation in the case where $\rho$ is itself random. When we wish to emphasise the initial velocity $v$ and starting time $s$, we will write $E_{(s,v)}$ when the process is started from $\Xi^*_s=\delta_{(v,1)}$ at time $s$, and $E_v$ in the case $s=0$. \end{definition} Provided that the initial data $\Xi_0$ is finitely supported, one can show that the branching process is almost surely non-explosive, and that \begin{equation}\label{eq: no explosion}E_{v_0} \langle 1+|v|^2, |\Xi_t|\rangle \le (1+|v_0|^2)\exp\left[8\int_0^t \Lambda_3(\rho_s) ds\right]. \end{equation} \begin{remark} We can connect this branching process with a different proof of existence and uniqueness for the difference $\xi_t$ in Proposition \ref{thrm: stability for BE}. 
For existence, consider the linearised Kac process $(\Xi_t)_{t\ge 0}$ in environment $\rho_t=\phi_t(\mu)$, where particles are initialised at $t=0$ according to a Poisson random measure of intensity \begin{equation} \theta(dv)= \begin{cases} \xi_0^+(dv)=\nu(dv) & \text{on }V^+ \\ \xi_0^-(dv)=\mu(dv) & \text{on }V^-. \end{cases} \end{equation} Let $\xi_t = \mathbb{E}(\Xi_t)$, which may be formalised in the sense of a Bochner integral in the weighted space $(Y_2, \|\cdot\|_{\mathrm{TV}+2})$ defined in (\ref{def: weighted normed spaces}). Then the same proof of the representation formula \cite[Proposition 4.2]{ACE} shows that $\partial_t \xi_t = 2Q(\phi_t(\mu), \xi_t)$, and that this solution is unique. \end{remark} Recall from the introduction that $\mathcal{A}$ is the set of all functions $f$ on $\mathbb{R}^d$, such that $\widehat{f}(v)=(1+|v|^2)^{-1}f(v)$ satisfies \begin{equation} |\widehat{f}(v)|\le 1; \hspace{1cm} \frac{|f(v)-f(w)|}{|v-w|}\le 1 \hspace{1cm}\text{for all }v\neq w. \end{equation} From the bound (\ref{eq: no explosion}), we can now define, for functions of quadratic growth, \begin{equation} \label{eq: defn of f0t} f_{st}(v_0)=E_{(s,v_0)}\left[\langle f, \Xi_t\rangle \right].\end{equation} When we wish to emphasise the environment, we will write $f_{st}[\rho](v_0)$. We now recall the following estimates from \cite{ACE}:
\begin{proposition}[Continuity Estimates for $f_{st}$]\label{prop: continuity for branching process} Fix $t\ge 0$, and let $z_t$ be given by \begin{equation} z_t=3\exp\left[8\int_0^t \Lambda_3(\rho_u)du \right].\end{equation} Then, for $f\in\mathcal{A}$ and $s\le t$, we have $f_{st} \in z_t\hspace{0.1cm} \mathcal{A}$. This is, in our notation, a reformulation of \cite[Proposition 4.3]{ACE}. \end{proposition} The other result which we will use is the representation formula \cite[Proposition 4.2]{ACE}, which expresses the difference of two Boltzmann flows $\phi_t(\mu)-\phi_t(\nu)$ in terms of the functions $f_{0t}$. This may be obtained from the proof of \cite[Proposition 4.2]{ACE} without essential modification, as in the proof of \cite[Theorem 10.1]{ACE}. \begin{proposition}[Representation Formula]\label{prop: bad representation formula} Let $\mu, \nu \in \mathcal{S}^k$ for some $k>2$, and let $(\rho_t)_{t\ge 0}$ be given by \begin{equation} \rho_t=\frac{1}{2}(\phi_t(\mu)+\phi_t(\nu)) \end{equation} where $\phi_t(\mu)$ is the unique, locally $\mathcal{S}^k$-bounded solution to the Boltzmann equation, starting at $\mu$, and similarly for $\nu$. Then, for all $f\in \mathcal{A}$, we have \begin{equation} \label{eq: bad representation formula} \langle f, \phi_t(\mu)-\phi_t(\nu)\rangle =\left\langle f_{0t}[\rho], \mu-\nu\right\rangle.\end{equation} \end{proposition} Note that the moment production property in Proposition \ref{thrm:momentinequalities} guarantees that (\ref{eq: integrability for environment}) holds for this environment. This will allow us to find an estimate for the Boltzmann flow $\phi_t$ which behaves well in short time. 
We now give the proof of Theorem \ref{thrm: W-W continuity of phit}. \begin{proof}[Proof of Theorem \ref{thrm: W-W continuity of phit}] From the representation formula (\ref{eq: bad representation formula}) and the continuity estimate Proposition \ref{prop: continuity for branching process}, for any $f\in \mathcal{A}$, \begin{equation} \label{eq: short time bound on BE} \langle f, \phi_t(\mu)-\phi_t(\nu)\rangle =\langle f_{0t}[\rho],\mu-\nu\rangle \le z_t\hspace{0.1cm} W(\mu, \nu) \end{equation} where $\rho_t=(\phi_t(\mu)+\phi_t(\nu))/2$. It therefore suffices to bound \begin{equation} z_t:=3\exp\left(4\int_0^t \left[\Lambda_3(\phi_s(\mu))+\Lambda_3(\phi_s(\nu))\right] ds \right).\end{equation} Using the logarithmic moment production for the Boltzmann equation recalled in Proposition \ref{thrm:momentinequalities}, there exist constants $k,w$ such that \begin{equation} \begin{split} z_t & \lesssim e^{wt} \Lambda_{k/2}(\mu)\Lambda_{k/2}(\nu) \\ &\hspace{0.5cm} \lesssim e^{wt} \Lambda_{k/2}(\mu, \nu)^2 \lesssim e^{wt}\Lambda_k(\mu, \nu).\end{split} \end{equation} This proves the first claim. For the second claim, we first deal with the case where $k\ge 3$ is large enough that the above holds, and such that the stability estimate Proposition \ref{thrm: stability for BE} holds with H\"older exponent $\eta=\frac{1}{2}$. Fix $\mu, \nu \in \mathcal{S}^k_a$, and assume without loss of generality that $0<W(\mu, \nu)<1$. From the stability estimate (\ref{eq: stability for BE 1}) we have \begin{equation} \|\phi_t(\mu)-\phi_t(\nu)\|_{\mathrm{TV}+2} \lesssim a^\frac{1}{2} e^{-\lambda_0t/2} \end{equation} for some constant $\lambda_0>0$. 
It is immediate from the definitions that \begin{equation} W(\mu, \nu)\le \|\mu-\nu\|_{\mathrm{TV}+2} \end{equation} and so combining with the previous result, we have\begin{equation} W\left(\phi_t(\mu), \phi_t(\nu)\right)\lesssim a \min\left(e^{-\lambda_0 t/2}, W(\mu, \nu)e^{wt}\right).\end{equation} The right hand side is maximised when $e^{-\lambda_0 t/2}=W(\mu, \nu)e^{wt}$, which occurs when \begin{equation} t=-\frac{2}{\lambda_0+2w} \hspace{0.1cm} \log \hspace{0.05cm}W(\mu, \nu).\end{equation} Therefore, the maximum value of the right-hand side is \begin{equation} \label{eq: holder continuity} \begin{split} \sup_{t\ge 0} W\left(\phi_t(\mu), \phi_t(\nu)\right) & \lesssim a\exp\left(\frac{\lambda_0}{\lambda_0+2w} \log W(\mu, \nu)\right) \\ & =aW(\mu, \nu)^\zeta\end{split} \end{equation} with \begin{equation} \zeta(d)=\frac{\lambda_0}{\lambda_0+2w}\end{equation} which is the claimed H\"older continuity, for $k$ sufficiently large. \medskip \\ Finally, we deal with the second point for arbitrary $k>2$. This argument uses a localisation principle to control the moments on a very short initial interval $[0,u]$, and may be read as a warm-up to the more involved arguments in the proof of Theorem \ref{thm: low moment regime}. \medskip \\ Let $k_0$ be large enough such that the estimate (\ref{eq: holder continuity}) holds, and let $\zeta_0$ be the resulting exponent. Let $\beta=\frac{k-2}{2}$, let $\mu, \nu$ be as in the statement of the result, and let $u\in (0,1]$ be chosen later. Define \begin{equation} T=\inf\left\{t\ge 0: \Lambda_3(\rho_t)>\frac{\beta t^{\beta-1}+1}{2}\right\}\end{equation} where $\rho_t$ is as above. We now deal with the two cases $T>u, T\le u$ separately. \medskip \\ If $T>u$, then we have the estimate \begin{equation} \begin{split} z_u&:=3\exp\left(4\int_0^u \Lambda_3(\rho_s)ds\right) \\ & \le 3\exp\left(4\int_0^1 \frac{\beta s^{\beta-1}+1}{2} ds\right)\lesssim 1. 
\end{split} \end{equation} Using the representation formula in Proposition \ref{prop: bad representation formula} as in (\ref{eq: short time bound on BE}), we therefore obtain \begin{equation} \label{eq: v short time BE estimate} \sup_{t\le u} W(\phi_t(\mu),\phi_t(\nu)) \lesssim W(\mu, \nu).\end{equation} Using (\ref{eq: holder continuity}) on $\phi_u(\mu), \phi_u(\nu)$, and using the moment production property recalled in Proposition \ref{thrm:momentinequalities}, we have the estimate \begin{equation} \label{eq: restarted BF estimate}\sup_{t\ge u} W(\phi_t(\mu),\phi_t(\nu)) \lesssim u^{2-k_0}W(\mu, \nu)^{\zeta_0}. \end{equation} We next deal with the case $T\le u$. In this case, comparing the moment production property to the definition of $T$ shows that \begin{equation} T^{\beta-1}\lesssim \Lambda_3(\phi_T(\mu))+\Lambda_3(\phi_T(\nu))\lesssim a T^{k-3} ;\hspace{1cm} T\le u\end{equation} which rearranges to produce the bound $1\lesssim au^{k/2-1}$. In particular, in this case, we have \begin{equation} \label{eq: bad moment case} \sup_{t\ge 0} W(\phi_t(\mu),\phi_t(\nu))\le 4 \lesssim a u^{k/2-1}. \end{equation} Combining estimates (\ref{eq: v short time BE estimate}, \ref{eq: restarted BF estimate}, \ref{eq: bad moment case}), we see that in all cases, \begin{equation} \sup_{t\ge 0} W(\phi_t(\mu), \phi_t(\nu)) \lesssim u^{2-k_0}W(\mu,\nu)^{\zeta_0}+au^{k/2-1}.\end{equation} Now, if we choose $u=\min(1,W(\mu, \nu)^\delta) $ for sufficiently small $\delta>0$, we obtain \begin{equation} \sup_{t\ge 0} W(\phi_t(\mu), \phi_t(\nu)) \lesssim aW(\mu,\nu)^\zeta \end{equation} for a new exponent $\zeta=\zeta(d,k)>0$. \end{proof}
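\begin{remark} For the reader's convenience, we record one admissible (and not optimal) choice of exponents in the final step. When $W(\mu, \nu)\le 1$ and $u=W(\mu, \nu)^\delta$, the two terms above become \begin{equation} u^{2-k_0}W(\mu, \nu)^{\zeta_0}=W(\mu, \nu)^{\zeta_0-\delta(k_0-2)}; \hspace{1cm} au^{k/2-1}=aW(\mu, \nu)^{\delta(k/2-1)} \end{equation} so any $\delta\in \left(0, \frac{\zeta_0}{k_0-2}\right)$ yields the conclusion with \begin{equation} \zeta=\min\left(\zeta_0-\delta(k_0-2), \hspace{0.1cm}\delta\left(\frac{k}{2}-1\right)\right)>0 \end{equation} and balancing the two exponents gives $\delta=\frac{\zeta_0}{k_0+k/2-3}$ and $\zeta=\frac{\zeta_0(k-2)}{2k_0+k-6}$. \end{remark}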
\section{The Interpolation Decomposition for Kac's Process}\label{sec: interpolation decomposition}
We introduce a pair of random measures associated to the Markov process $(\mu^N_t)_{t\geq 0}$. The \emph{jump measure} $m^N$ is the un-normalised empirical measure on $(0,\infty) \times \mathcal{S}_N$, of all pairs $(t, \mu^N)$, such that the system collides at time $t$, with new measure $\mu^N$. Its \emph{compensator} $\overline{m}^N$ is the random measure on $(0, \infty)\times \mathcal{S}_N$ given by \begin{equation}\label{eq: definition of mbar} \overline{m}^N(dt,d\mu^N)=\mathcal{Q}_N(\mu^N_{t-}, d\mu^N)dt \end{equation} where $\mathcal{Q}_N(\cdot, \cdot)$ is the transition kernel of the Kac process, given by (\ref{eq: definition of script Q}). The goal of this section is to prove the following `interpolation decomposition' for the difference between Kac's process and the Boltzmann flow, which is the key identity required for the proofs of Theorems \ref{thrm: PW convergence}, \ref{thrm: Main Local Uniform Estimate}. This is based on an idea of Norris \cite{PDF}, which was inspired by \cite[Section 3.3]{M+M}.\begin{formula}\label{form:newdecomposition} Let $\mu^N_t$ be a Kac process on $N\geq 2$ particles, and suppose $f \in \mathcal{A}_0$ is a test function. To ease notation, we write \begin{equation} \label{eq: definition of Delta} \Delta(s,t,\mu^N)=\phi_{t-s}(\mu^N)-\phi_{t-s}(\mu^N_{s-}); \hspace{0.5cm} 0\le s \le t, \hspace{0.2cm}\mu^N \in \mathcal{S}_N; \end{equation} \begin{equation} \label{eq: definition of psi} \psi(u,\mu, \nu)= \phi_{u}(\nu)-\phi_{u}(\mu)-\mathcal{D}\phi_{u}(\mu)[\nu-\mu]; \hspace{0.3cm} u\ge 0,\hspace{0.3cm} \mu, \nu \in \bigcap_{k>2} \mathcal{S}^k\end{equation} where $\mathcal{D}\phi_t$ is the derivative of the Boltzmann flow $\phi_t$, defined in Proposition \ref{thrm: stability for BE}; this makes sense, provided that all moments of $\mu, \nu$ are finite. 
Then we can decompose \begin{equation} \langle f, \mu^N_t -\phi_t(\mu^N_0) \rangle =M^{N,f}_t + \int_0^t \langle f, \rho^N(t-s, \mu^N_s) \rangle ds \end{equation} where \begin{equation} \label{eq: definition of MNFT} M^{N,f}_t=\int_{(0,t]\times \mathcal{S}_N} \langle f, \Delta(s,t,\mu^N) \rangle (m^N-\overline{m}^N)(ds, d\mu^N)\end{equation} and where $\rho^N$ is given in terms of the transition kernel $\mathcal{Q}_N$ (\ref{eq: definition of script Q}) by \begin{equation} \label{eq: definition of rho} \langle f, \rho^N(u, \mu^N)\rangle = \int_{\mathcal{S}_N}\langle f, \psi(u,\mu^N, \nu) \rangle \mathcal{Q}_N(\mu^N, d\nu).\end{equation} \end{formula} \begin{remark} \begin{enumerate}[label=\roman{*}).] \item This is the key identity needed for Theorems \ref{thrm: PW convergence}, \ref{thrm: Main Local Uniform Estimate}; the remainder of the proofs consists of establishing suitable controls on each of the two terms.
\item This representation formula offers two major advantages over the equivalent representation formula in \cite{ACE}, which will be recalled in Proposition \ref{prop: very bad rep formula}. \begin{itemize} \item Firstly, all the quantities appearing in our formula are adapted to the natural filtration of $(\mu^N_t)_{t\ge 0}$, and so we can use martingale estimates directly; by contrast, \cite[Proposition 4.2]{ACE} contains anticipating terms. This allows us to prove convergence in $L^p$ spaces, rather than simply in probability.
\item Secondly, all terms appearing in our formula may be controlled by the stability estimates (\ref{eq: stability for BE 1}, \ref{eq: stability for BE 2}). This allows us to exploit the stability of the limit equation, at the level of \emph{individual realisations of} the empirical particle system $\mu^N_0$. \end{itemize} \end{enumerate} \end{remark} The main technicality in the proof is to derive a Chapman-Kolmogorov-style equation, which allows us to manipulate the functional derivatives $\mathcal{D}\phi_t$. This is the content of the following lemma.
\begin{lemma}[Exchange Lemma]\label{lemma:DAP} Let $\mu^N \in \mathcal{S}_N$ and $f\in \mathcal{A}$. Then for all times $t\geq 0$, we have the equalities \begin{equation}\label{eq:exchangeDandI} \begin{split} &\frac{d}{dt}\langle f, \phi_t(\mu^N)\rangle = \langle f,\mathcal{D}\phi_t(\mu^N)\left[Q(\mu^N)\right]\rangle \\& \hspace{1cm}= \int_{\mathbb{R}^d \times \mathbb{R}^d \times S^{d-1}} \hspace{0.05cm} \langle f,\mathcal{D}\phi_t(\mu^N)[\mu^{N, v, v_\star, \sigma}-\mu^N]\rangle \hspace{0.05cm} |v-v_\star|\hspace{0.05cm}N\hspace{0.05cm}d\sigma \mu^N(dv)\mu^N(dv_\star)\end{split} \end{equation} where $\mu^{N,v,v_\star,\sigma}$ is the post-collision measure given by (\ref{eq: change of measure at collision}), $\mathcal{Q}_N$ is the transition kernel of the Kac process (\ref{eq: definition of script Q}) and where $\mathcal{D}\phi_t$ is the functional derivative given by Proposition \ref{thrm: stability for BE}. \end{lemma}The first equality is familiar from semigroup theory, but is complicated by the non-linearity of the flow maps; we resolve this by using ideas of the infinite-dimensional differential calculus developed in \cite{M+M}. The second equality can be thought of as a continuity property for the linear map $\mathcal{D}\phi_t(\mu^N)[\cdot]$, and is justified below by the explicit construction of the derivative in Proposition \ref{thrm: stability for BE}.\medskip \\ Assuming this for the moment, we now prove the interpolation decomposition Formula \ref{form:newdecomposition}. \begin{proof}[Proof of Formula \ref{form:newdecomposition}] To begin with, we restrict to bounded, measurable $f$. Fix $t\geq 0$, and consider the process $\Gamma^{N,f,t}_s=\langle f, \phi_{t-s}(\mu^N_s)\rangle$, for $0\leq s \leq t$, which interpolates between $\Gamma^{N,f,t}_0=\langle f, \phi_t(\mu^N_0)\rangle$ and $\Gamma^{N,f,t}_t=\langle f, \mu^N_t\rangle$. Then $\Gamma^{N,f,t}$ is c\`{a}dl\`{a}g, and is differentiable on intervals where $\mu^N_s$ is constant. On such intervals, Lemma \ref{lemma:DAP} tells us that
\begin{equation}\begin{split} \frac{d}{ds} & \langle f, \phi_{t-s}(\mu^N_s)\rangle =-\left.\frac{d}{du}\right|_{u=t-s}\langle f, \phi_u(\mu^N_s)\rangle\\[1ex] & = -\int_{\mathbb{R}^d\times \mathbb{R}^d\times S^{d-1}} \langle f, \mathcal{D}\phi_{t-s}(\mu^N_s)[\mu^{N,v,v_\star, \sigma}-\mu^N_s] \rangle |v-v_\star|\hspace{0.05cm}N\hspace{0.05cm} \mu^N_s(dv)\mu^N_s(dv_\star)d\sigma \\[1ex] & = - \int_{\mathcal{S}_N} \langle f, \mathcal{D}\phi_{t-s}(\mu^N_s)[\mu^N-\mu^N_s] \rangle \mathcal{Q}_N(\mu^N_s, d\mu^N)\end{split}\end{equation}
where the final equality rewrites the integral in terms of the transition kernel $\mathcal{Q}_N$ of the Kac process, defined in (\ref{eq: definition of script Q}). Writing $\mathcal{I}_t$ for the (finite) set of jumps $\mathcal{I}_t=\{s\le t: \mu^N_s \neq \mu^N_{s-}\}$, the contribution to $\Gamma^{N,f,t}_t-\Gamma^{N,f,t}_0$ from drift between jumps is \begin{equation} \begin{split}&\int_{(0,t]\setminus \mathcal{I}_t} \hspace{0.1cm}\frac{d}{ds}\langle f, \phi_{t-s}(\mu^N_s)\rangle \hspace{0.1cm} ds \\&=-\int_{((0,t]\setminus \mathcal{I}_t)\times\mathcal{S}_N} \langle f, \mathcal{D}\phi_{t-s}(\mu^N_s)[\mu^N-\mu^N_s]\rangle\mathcal{Q}_N(\mu^N_s, d\mu^N) ds.\end{split} \end{equation} Using the definitions (\ref{eq: definition of Delta}, \ref{eq: definition of psi}) of $\psi$ and $\Delta$, the integrand can be expressed as \begin{equation} \langle f, \mathcal{D}\phi_{t-s}(\mu^N_s)[\mu^N-\mu^N_s]\rangle = \langle f,\Delta(s,t,\mu^N)-\psi(t-s, \mu^N_s, \mu^N)\rangle \end{equation} for any $s \not \in \mathcal{I}_t.$ Since the set $\mathcal{I}_t$ has Lebesgue measure $0$, the set $\mathcal{I}_t\times\mathcal{S}_N$ has measure $0$ with respect to $\mathcal{Q}_N(\mu^N_s,d\mu^N)ds$, and so the inclusion of this set does not change the integral. Using the definitions (\ref{eq: definition of mbar}, \ref{eq: definition of rho}) of $\overline{m}^N$ and $\rho^N$, we can rewrite the integral as \begin{equation} \label{eq: contribution from drift} \begin{split} &\int_{(0,t]\times \mathcal{S}_N} \langle f, \psi(t-s, \mu^N_s,\mu^N)-\Delta(s, t, \mu^N)\rangle \mathcal{Q}_N(\mu^N_s, d\mu^N) ds \\ =&\int_0^t \langle f, \rho^N(t-s, \mu^N_s)\rangle ds - \int_{(0,t]\times \mathcal{S}_N} \langle f, \Delta(s,t,\mu^N)\rangle \overline{m}^N(ds, d\mu^N). \end{split}\end{equation}
On the other hand, at the times when $\mu^N_s$ jumps, we have \begin{equation} \Gamma^{N,f,t}_s-\Gamma^{N,f,t}_{s-}=\langle f, \phi_{t-s}(\mu^N_s)-\phi_{t-s}(\mu^N_{s-})\rangle =\langle f, \Delta(s,t,\mu^N_s)\rangle. \end{equation}
Therefore, the contribution to $\Gamma^{N,f,t}_t-\Gamma^{N,f,t}_0$ from jumps is \begin{equation} \label{eq: contribution from jump} \begin{split} \sum_{s\in \mathcal{I}_t} \Gamma^{N,f,t}_s-\Gamma^{N,f,t}_{s-}= \int_{(0,t]\times \mathcal{S}_N}\langle f, \Delta(s,t,\mu^N)\rangle\hspace{0.1cm} m^N(ds, d\mu^N) \\ = M^{N,f}_t+\int_{(0,t]\times \mathcal{S}_N}\langle f, \Delta(s,t,\mu^N)\rangle\hspace{0.1cm} \overline{m}^N(ds, d\mu^N). \end{split} \end{equation} Combining the contributions (\ref{eq: contribution from drift}, \ref{eq: contribution from jump}), we see that \begin{equation} \begin{split} \langle f, \mu^N_t-\phi_t(\mu^N_0)\rangle &= \Gamma^{N,f,t}_t-\Gamma^{N,f,t}_0 \\[1ex]& = \int_{(0,t]\setminus \mathcal{I}_t} \hspace{0.1cm}\frac{d}{ds} \langle f, \phi_{t-s}(\mu^N_s)\rangle ds + \sum_{s\in \mathcal{I}_t} \Gamma^{N,f,t}_s-\Gamma^{N,f,t}_{s-} \\[1ex] & = M^{N,f}_t+\int_0^t\langle f ,\rho^N(t-s, \mu^N_s)\rangle ds \end{split} \end{equation} as desired. \end{proof}
\subsection{Proof of Lemma \ref{lemma:DAP}} In this subsection, we will prove the Chapman-Kolmogorov property Lemma \ref{lemma:DAP}, which is crucial to the interpolation decomposition. We prove the two claimed equalities separately.
\begin{lemma} Let $N\ge 2$ and let $\mu^N \in \mathcal{S}_N$. Then, for all $t>0$ and $f\in \mathcal{A}$, we have the differentiability \begin{equation} \label{eq: first equality of CK} \frac{d}{dt}\langle f, \phi_t(\mu^N)\rangle = \langle f, \mathcal{D}\phi_t(\mu^N)[Q(\mu^N)]\rangle. \end{equation} At $t=0$, this is a one-sided, right differentiability. \end{lemma} The following proof uses ideas of \cite{M+M}, notably the infinite-dimensional differential calculus, and builds on ideas of \cite[Lemma 2.11]{M+M}. \begin{proof} Throughout, fix $\mu^N\in \mathcal{S}_N$ and $f\in \mathcal{A}$. Recall, for clarity, the notation $Q_t(\mu)=Q(\phi_t(\mu))$. Using the boundedness of appropriate moments of $\mu^N\in\mathcal{S}_N$, together with the continuity estimate (\ref{eq: holder continuity of Q}), it is straightforward to see that the map $t\mapsto Q_t(\mu^N)$ is H\"older continuous in time, with respect to the weighted norm $\|\cdot\|_{\mathrm{TV}+2}$: for some constant $C_1=C_1(N)$, we have the estimate \begin{equation}\label{eq: time Holder continuity of Qt} \|Q_t(\mu^N)-Q_s(\mu^N)\|_{\mathrm{TV}+2} \le C_1 |t-s|^\frac{1}{2}. \end{equation} From the definition (\ref{BE}) of the Boltzmann dynamics, together with dominated convergence, we have that \begin{equation} \langle f, \phi_t(\mu^N)\rangle =\langle f, \mu^N\rangle +\int_0^t \langle f, Q_s(\mu^N)\rangle ds. \end{equation} Therefore, the map $t\mapsto \langle f, \phi_t(\mu^N)\rangle$ is continuously differentiable in time, with derivative \begin{equation} \frac{d}{dt} \langle f, \phi_t(\mu^N)\rangle=\langle f, Q_t(\mu^N)\rangle\end{equation} where, at $t=0$, this is a one-sided, right derivative. It therefore suffices to show that (\ref{eq: first equality of CK}) holds as a \emph{right} derivative. \medskip \\ Fix $t\ge 0$, and observe that, for $s>0$ small enough, $\nu^N_s=\mu^N+sQ(\mu^N)$ defines a measure $\nu^N_s\in \mathcal{S}$. 
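Let us briefly justify that $\nu^N_s \in \mathcal{S}$ for small $s$; the constant below depends on $\mu^N$, which is fixed throughout. Writing $Q(\mu^N)=Q^+(\mu^N)-Q^-(\mu^N)$ for the gain and loss parts of the collision operator, the loss part is absolutely continuous with respect to $\mu^N$, with density bounded by some constant $C_0=C_0(\mu^N)<\infty$, since $\mu^N$ is supported on finitely many velocities. Hence, for $0<s\le C_0^{-1}$, \begin{equation} \nu^N_s \ge \mu^N-sQ^-(\mu^N)\ge (1-sC_0)\hspace{0.05cm}\mu^N \ge 0 \end{equation} while the conservation laws $\langle 1, Q(\mu^N)\rangle = \langle |v|^2, Q(\mu^N)\rangle =0$ show that $\nu^N_s$ has unit mass and unit energy, so that $\nu^N_s \in \mathcal{S}$.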
From the semigroup property, it follows that $\phi_t(\phi_s(\mu^N))=\phi_{t+s}(\mu^N)$, and we can therefore expand \begin{equation} \begin{split} &\big\langle f, \phi_{t+s}(\mu^N)-\phi_t(\mu^N)-s\mathcal{D}\phi_t(\mu^N)[Q(\mu^N)]\big\rangle\\ &=\underbrace{\langle f, \phi_t(\phi_s(\mu^N))-\phi_t(\nu^N_s)\rangle}_{:=\mathcal{T}_1(s)} + \underbrace{\langle f,\phi_t(\nu^N_s)-\phi_t(\mu^N)-s\mathcal{D}\phi_t(\mu)[Q(\mu^N)]\rangle}_{:=\mathcal{T}_2(s)}. \end{split}\end{equation} We will now show that each of the two terms $\mathcal{T}_1, \mathcal{T}_2$ are $o(s)$, which implies the result.
\paragraph{Estimate on $\mathcal{T}_1(s)$} Let $\eta\in (\frac{2}{3},1)$, and choose $k$ large enough that the stability estimates (\ref{eq: stability for BE 1}, \ref{eq: stability for BE 2}) hold with exponent $\eta$. As $s\downarrow 0$, the probability measures $\nu^N_s =\mu^N+sQ(\mu^N)$ and $\phi_s(\mu^N)$ are bounded in $\mathcal{S}^k$. Therefore, from (\ref{eq: stability for BE 1}), there exists a constant $C_2=C_2(N)<\infty$ such that, for all $s>0$ small enough, \begin{equation} \label{eq: est of T1, 1} \|\phi_t(\phi_s(\mu^N))-\phi_t(\nu^N_s)\|_{\mathrm{TV}+2} \le C_2\|\phi_s(\mu^N)-\nu^N_s\|^\eta_{\mathrm{TV}+2}.\end{equation} The left-hand side is a bound for $\mathcal{T}_1(s)$. Using the estimate (\ref{eq: time Holder continuity of Qt}) above, we estimate the right-hand side, following \cite[Lemma 2.11]{M+M}: \begin{equation} \label{eq: est of T1, 2}\begin{split} \|\phi_s(\mu^N)-\nu^N_s\|_{\mathrm{TV}+2} &= \left\|\int_0^s (Q_u(\mu^N)-Q_0(\mu^N)) du\right\|_{\mathrm{TV}+2} \\ & \le \int_0^s \|Q_u(\mu^N)-Q_0(\mu^N)\|_{\mathrm{TV}+2} \hspace{0.1cm}du \\ & \le C_1(N)\int_0^s u^\frac{1}{2}du = \frac{2}{3}C_1(N) s^\frac{3}{2}.\end{split} \end{equation} Combining the estimates (\ref{eq: est of T1, 1}, \ref{eq: est of T1, 2}), we see that \begin{equation} \mathcal{T}_1(s) \le C_2\left(\frac{2}{3}C_1\right)^\eta s^\frac{3\eta}{2}.\end{equation} Since we chose $\eta>\frac{2}{3}$, this shows that $\mathcal{T}_1$ is $o(s)$ as $s\downarrow 0$.
\paragraph{Estimate on $\mathcal{T}_2$} Let $\eta$ and $k$ be as above, and recall that in (\ref{eq: stability for BE 2}), $\xi_t$ is, by definition, $\mathcal{D}\phi_t(\mu)[\nu-\mu]$. We now apply this estimate to $\mu^N$ and $\nu^N_s$, noting that $\nu^N_s=\mu^N+sQ(\mu^N)$ and $\phi_s(\mu^N)$ are bounded in $\mathcal{S}^k$ as $s\downarrow 0$, and that $\nu^N_s-\mu^N=sQ(\mu^N)$. The bound (\ref{eq: stability for BE 2}) now shows that, for some constants $C_3, C_4<\infty$, \begin{equation} \begin{split} \|\phi_t(\nu^N_s)-\phi_t(\mu^N)-s\mathcal{D}\phi_t(\mu^N)[Q(\mu^N)]\|_{\mathrm{TV}+2} &\le C_3\|\nu^N_s-\mu^N\|_\mathrm{TV}^{1+\eta} \\& = C_3\|sQ(\mu^N)\|_\mathrm{TV}^{1+\eta}\\ & \le C_4 s^{1+\eta}. \end{split}\end{equation} The left-hand side is a bound for $\mathcal{T}_2$, which implies that $\mathcal{T}_2$ is $o(s)$, as desired. Together with the previous estimate on $\mathcal{T}_1$, this concludes the proof.\end{proof}
We now turn to the proof of the second equality in (\ref{eq:exchangeDandI}), that is, \begin{equation}\label{eq:DAP}\begin{split}&\langle f,\mathcal{D}\phi_t(\mu^N)\left[Q(\mu^N)\right]\rangle \\& = \int_{\mathbb{R}^d \times \mathbb{R}^d \times S^{d-1}} \hspace{0.05cm} \langle f,\mathcal{D}\phi_t(\mu^N)[\mu^{N, v, v_\star, \sigma}-\mu^N]\rangle \hspace{0.05cm}N\hspace{0.05cm} |v-v_\star|d\sigma \mu^N(dv)\mu^N(dv_\star). \end{split} \end{equation} Using the definition (\ref{eq: change of measure at collision}), we see that the integral on the right-hand side is equivalent to that defining $Q(\mu^N)$ in (\ref{eq: defn of Q}). However, we \emph{cannot} simply exchange the integration with the linear map $\mathcal{D}\phi_t$, as the construction in Proposition \ref{thrm: stability for BE} does not guarantee that $\mathcal{D}\phi_t(\mu^N)$ is bounded as a linear map. We will instead prove (\ref{eq:DAP}) from the \emph{explicit} way in which $\mathcal{D}\phi_t(\mu^N)$ is constructed in Proposition \ref{thrm: stability for BE}, and show that this construction implies `enough' continuity. \medskip\\ This is closely related to, and may be derived from, condition (\textbf{A3}), convergence of the generators, in \cite{M+M}. We present here a more direct proof, to avoid introducing additional spaces and notation. The crucial observation of our argument is that `enough' small perturbations of a discrete measure $\mu^N\in\mathcal{S}_N$ will remain in $\mathcal{S}$; this is made precise in equation (\ref{eq: proof of dap delta is small}). The same idea is present in the corresponding argument \cite[Section 5.5]{M+M}, but not made explicit. \medskip \\ Before turning to the proof of (\ref{eq:DAP}), we will prove the following auxiliary lemma. In order to justify the exchange of various integrals, we wish to improve the moments of the derivative $\xi_t=\mathcal{D}\phi_t(\mu)[\nu-\mu]$ in Proposition \ref{thrm: stability for BE}. 
The following argument combines ideas of \cite[Proposition 4.2]{ACE} and \cite[Lemma 6.3]{M+M}. \begin{lemma}\label{lemma: moments of xi} Suppose $\mu, \nu \in \cap_{k\ge 2} \mathcal{S}^k$, and let $(\xi_t)_{t\ge 0}$ be the solution to the differential equation (\ref{eq: definition of the difference term}). Then, for all $k\ge 2$, there exists a constant $c=c(k)$ such that, for all $t\ge 0$, \begin{equation} \|\xi_t\|_{\mathrm{TV}+k}\le 2\Lambda_k(\mu,\nu) \exp\left(ct \Lambda_{k+1}(\mu)\right). \end{equation} Moreover, if $k'>2$ is large enough, then we have the continuity estimate, for all $0\le s \le t$, and for some absolute constants $C_1, C_2$, \begin{equation} \|\xi_t-\xi_s\|_{\mathrm{TV}+k}\le C_1\Lambda_{k+k'}(\mu, \nu)^\frac{1}{2}\exp\left(\frac{1}{2}C_2\Lambda_{2(k+1)}(\mu)t\right)(t-s)^\frac{1}{2}.\end{equation} \end{lemma} \begin{proof} Firstly, we observe that, by hypothesis, the map $t\mapsto \xi_t$ is continuous in the norm $\|\cdot\|_{\mathrm{TV}+2}$, and is therefore locally bounded. We have the estimate on total variation \begin{equation}\label{eq: bound TV of Q} \|Q(\phi_t(\mu), \xi_t)\|_\mathrm{TV} \le 4\int_{\mathbb{R}^d\times\mathbb{R}^d}|v-v_\star|\phi_t(\mu)(dv)|\xi_t|(dv_\star) \le 8 \|\xi_t\|_{\mathrm{TV}+2}\end{equation} where we have used the bound $|v-v_\star|\le (1+|v|^2)(1+|v_\star|^2)$. Similarly, we estimate \begin{equation} \label{eq: continuity of partial t xi t}\begin{split} &\|Q(\phi_t(\mu), \xi_t)-Q(\phi_s(\mu), \xi_s)\|_\mathrm{TV} \\[0.5ex] & \hspace{1cm}\le \|Q(\phi_t(\mu)-\phi_s(\mu), \xi_t)\|_\mathrm{TV} +\|Q(\phi_s(\mu), \xi_t-\xi_s)\|_\mathrm{TV} \\[0.5ex] & \hspace{1cm} \le 4(\|\xi_t\|_{\mathrm{TV}+2}\hspace{0.05cm}\|\phi_t(\mu)-\phi_s(\mu)\|_{\mathrm{TV}+2}+2\hspace{0.05cm}\|\xi_t-\xi_s\|_{\mathrm{TV}+2}). 
\end{split} \end{equation} Since $t\mapsto \phi_t(\mu)$ is continuous in $\|\cdot\|_{\mathrm{TV}+2}$, it follows that the map \begin{equation} t\mapsto \partial_t\xi_t=2Q(\phi_t(\mu), \xi_t) \end{equation} is continuous and locally bounded in $\|\cdot\|_{\mathrm{TV}}$. Therefore, for all $t\ge 0$, the measure $\pi_t=\int_0^t |\partial_s\xi_s| ds$ is a finite measure, and $\partial_s \xi_s$ is absolutely continuous with respect to $\pi_t$ for all $0\le s\le t$. Therefore, by a result of Norris \cite[Lemma 11.1]{ACE} on the time variation of signed measures, there exists a measurable map $f: [0,\infty)\times \mathbb{R}^d \rightarrow \{-1,0,1\}$ such that \begin{equation}\label{eq: definition of f} \xi_t=f_t|\xi_t|;\hspace{1cm} |\xi_t|=|\xi_0|+\int_0^t f_s\partial_s\xi_s ds.\end{equation} Writing $\check{f}_s(v)=(1+|v|^k)f_s(v)$, we have the bound \begin{equation} \begin{split} &\hspace{1cm}\label{eq: gain of integrability of xi} \langle 1+|v|^k, |\xi_t|-|\xi_0|\rangle\\ &=2\int_0^t ds \int_{\mathbb{R}^d\times\mathbb{R}^d\times S^{d-1}} (\check{f}_s(v')+\check{f}_s(v_\star')-\check{f}_s(v)-\check{f}_s(v_\star))|v-v_\star|\hspace{0.05cm}\xi_s(dv)\hspace{0.05cm}\phi_s(\mu)(dv_\star)d\sigma \\[1ex]& \le 2\int_0^t ds \int_{\mathbb{R}^d\times \mathbb{R}^d \times S^{d-1}} (2+|v'|^k+|v_\star'|^k+|v_\star|^k-|v|^k)|v-v_\star|\hspace{0.05cm}\phi_s(\mu)(dv_\star)\hspace{0.05cm}|\xi_s|(dv)d\sigma,\end{split} \end{equation} where we have used that $f_s^2=1$ holds $|\xi_s|$-almost everywhere. Now, there exists a constant $C_1=C_1(k)$ such that, for all $v, v_\star, \sigma$, we have the bound \begin{equation} |v'|^k+|v_\star'|^k+|v_\star|^k-|v|^k \le C_1(k)(|v|^{k-2}|v_\star|^2+|v_\star|^k).\end{equation} Therefore, for a different constant $C_2=C_2(k)$, \begin{equation} 2|v-v_\star|(2+|v'|^k+|v_\star'|^k+|v_\star|^k-|v|^k) \le C_2(k)(1+|v|^k)(1+|v_\star|^{k+1}).\end{equation} Using the moment bounds in Proposition \ref{thrm:momentinequalities}, we obtain for some $c=c(k)$, \begin{equation} \begin{split} & \langle 1+|v|^k, |\xi_t|\rangle \le \langle 1+|v|^k, 
|\xi_0|\rangle \\& \hspace{2cm}+ C_2\int_0^t \int_{\mathbb{R}^d\times\mathbb{R}^d}(1+|v|^k)(1+|v_\star|^{k+1})|\xi_s|(dv)\phi_s(\mu)(dv_\star)\hspace{0.05cm}ds \\[1ex] &\hspace{1cm} \le \langle1+|v|^k, |\xi_0|\rangle +c\Lambda_{k+1}(\mu)\int_0^t \langle 1+|v|^k, |\xi_s|\rangle ds.\end{split} \end{equation} Gr\"onwall's lemma now gives the claimed moment bound. For the continuity statement, if $k'$ is chosen large enough that (\ref{eq: stability for BE 1.5}) holds for some $\eta<1$, then (\ref{eq: bound TV of Q}) gives the bound \begin{equation}\label{eq: continuity in TV} \|Q(\phi_t(\mu), \xi_t)\|_\mathrm{TV} \le C_3\Lambda_{k'}(\mu, \nu) \end{equation} and therefore, for all $0\le s\le t$, \begin{equation} \label{eq: continuity in TV'} \|\xi_t-\xi_s\|_\mathrm{TV} \le C_3\Lambda_{k'}(\mu, \nu)(t-s).\end{equation} The continuity statement follows by combining (\ref{eq: continuity in TV'}) with the moment bound applied with exponent $2k$, via the interpolation \begin{equation} \|\xi_t-\xi_s\|_{\mathrm{TV}+k}\le \sqrt{2}\hspace{0.1cm}\|\xi_t-\xi_s\|^{1/2}_{\mathrm{TV}}\left(\|\xi_t\|_{\mathrm{TV}+2k}+\|\xi_s\|_{\mathrm{TV}+2k}\right)^{1/2} \end{equation} and using the correlation property (Lemma \ref{lemma: correlation of moments}) to absorb both moment terms. \end{proof} We can now prove the second claimed equality in Lemma \ref{lemma:DAP}. \begin{lemma} Let $\mu^N \in \mathcal{S}_N$, for $N\ge 2$. Then we have the equality \begin{equation}\label{eq:DAP2}\begin{split}&\hspace{1cm}\mathcal{D}\phi_t(\mu^N)\left[Q(\mu^N)\right] \\& = \int_{\mathbb{R}^d \times \mathbb{R}^d \times S^{d-1}} \hspace{0.05cm} \mathcal{D}\phi_t(\mu^N)[\mu^{N, v, v_\star, \sigma}-\mu^N] \hspace{0.05cm}N\hspace{0.05cm} |v-v_\star|d\sigma \mu^N(dv)\mu^N(dv_\star), \end{split} \end{equation} where the right-hand side is a Bochner integral in the space $(Y_2,\|\cdot\|_{\mathrm{TV}+2})$. In particular, the equality (\ref{eq:DAP}) holds. 
\end{lemma} \begin{proof} We exploit the fact that, for $\delta>0$ small enough, we have \begin{equation} \label{eq: proof of dap delta is small} \mu^N + \delta Q(\mu^N)\in \mathcal{S}; \hspace{0.5cm} \forall v, v_\star, \sigma, \hspace{0.2cm}\mu^N +\delta[\mu^{N,v,v_\star,\sigma}-\mu^N] \in \mathcal{S}. \end{equation} Indeed, for $\delta\le 1$, $\mu^N+\delta[\mu^{N,v,v_\star,\sigma}-\mu^N]=(1-\delta)\mu^N+\delta\hspace{0.05cm}\mu^{N,v,v_\star,\sigma}$ is a mixture of measures with the same mass, momentum and energy as $\mu^N$, and a similar mixture representation applies to $\mu^N+\delta Q(\mu^N)$ for $\delta$ small enough. We will assume that $\delta>0$ is chosen so that (\ref{eq: proof of dap delta is small}) holds. For $v, v_\star \in \text{Supp}(\mu^N)$ and $\sigma \in S^{d-1}$, we define $\xi^{N,v,v_\star,\sigma}_t$ by the differential equation \begin{equation} \xi^{N,v,v_\star,\sigma}_0 = \delta [\mu^{N, v, v_\star, \sigma}-\mu^N]; \hspace{1cm} \partial_t \xi^{N,v,v_\star,\sigma}_t = 2Q(\phi_t(\mu^N),\xi^{N,v,v_\star,\sigma}_t). \end{equation} From Proposition \ref{thrm: stability for BE}, the solution to this equation exists, and is unique. By the characterisation of the derivative $\mathcal{D}\phi_t(\mu^N)$, we also have \begin{equation} \xi^{N,v,v_\star,\sigma}_t = \delta \hspace{0.1cm} \mathcal{D}\phi_t(\mu^N)[\mu^{N,v,v_\star,\sigma}-\mu^N].\end{equation} From Lemma \ref{lemma: moments of xi}, we also have the bound $\| \xi^{N,v,v_\star,\sigma}_s\|_{\mathrm{TV}+4} \leq C$ for all $s\le t$, and for some constant $C=C(\mu^N,N,t)$ independent of $v, v_\star$ and $\sigma$. In this notation, we wish to establish the equality \begin{equation}\label{eq: desired equality for DAP} \mathcal{D}\phi_t(\mu^N)[Q(\mu^N)]\equalsquestion\delta^{-1}\hspace{0.05cm}N\int_{\mathbb{R}^d\times\mathbb{R}^d\times S^{d-1}} \xi^{N,v,v_\star,\sigma}_t |v-v_\star|\mu^N(dv)\mu^N(dv_\star) d\sigma.\end{equation} From the bound above, the right-hand side is well-defined as a Bochner integral in $(Y_2, \|\cdot\|_{\mathrm{TV}+2})$. 
\medskip \\ Firstly, arguing as in (\ref{eq: bound TV of Q}), for all $t \ge 0$, there is a constant $C=C(\mu^N, N, t)$ such that, for all $v, v_\star, \sigma$ and $s\le t$, we have \begin{equation}\begin{split} &\|Q(\phi_s(\mu^N),\xi^{N,v,v_\star,\sigma}_s)\|_{\mathrm{TV}+3} \le C.\end{split}\end{equation} We now define \begin{equation} \xi_t = \int_{\mathbb{R}^d\times \mathbb{R}^d \times S^{d-1}} \xi^{N,v,v_\star,\sigma}_t |v-v_\star| d\sigma \hspace{0.1cm} \mu^N(dv) \mu^N (dv_\star) \end{equation} where the right-hand side is a Bochner integral in $(Y_3, \|\cdot\|_{\mathrm{TV}+3})$. From the definition (\ref{eq: defn of Q}) of $Q$, we have \begin{equation} \xi_0 = \delta \hspace{0.1cm}N^{-1}\hspace{0.1cm} Q(\mu^N).\end{equation} Moreover, using Fubini's theorem, we can express \begin{equation} \label{eq: use fubini}\begin{split} &\xi_t-\xi_0 \\[1ex] &=\int_{\mathbb{R}^d\times \mathbb{R}^d \times S^{d-1}} \left\{ \int_0^t 2Q(\phi_s(\mu^N),\xi^{N,v,v_\star,\sigma}_s) ds\right\} |v-v_\star| d\sigma \hspace{0.1cm} \mu^N(dv) \mu^N (dv_\star) \\[1ex]& =\int_0^t \left\{ \int_{\mathbb{R}^d\times \mathbb{R}^d \times S^{d-1}} 2Q(\phi_s(\mu^N),\xi^{N,v,v_\star,\sigma}_s) |v-v_\star| d\sigma \hspace{0.1cm} \mu^N(dv) \mu^N (dv_\star) \right\} ds.\end{split}\end{equation} The same argument as in (\ref{eq: bound TV of Q}) shows that, for fixed $\mu\in \mathcal{S}^3$, the map \begin{equation} Q(\mu, \cdot): (Y_3, \|\cdot\|_{\mathrm{TV}+3})\rightarrow (Y_2, \|\cdot\|_{\mathrm{TV}+2});\hspace{1cm}\xi\mapsto Q(\mu, \xi) \end{equation} is a bounded linear map. It follows that, for all $s \ge 0$, \begin{equation} Q(\phi_s(\mu^N), \xi_s)=\int_{\mathbb{R}^d\times \mathbb{R}^d \times S^{d-1}} Q(\phi_s(\mu^N),\xi^{N,v,v_\star,\sigma}_s) |v-v_\star| d\sigma \hspace{0.1cm} \mu^N(dv) \mu^N (dv_\star) \end{equation} as an equality of Bochner integrals in $(Y_2, \|\cdot\|_{\mathrm{TV}+2})$. 
Therefore, (\ref{eq: use fubini}) shows that, for all $t\ge 0$,\begin{equation}\label{eq: integral eqn for xi t} \xi_t=\xi_0+\int_0^t 2Q(\phi_s(\mu^N), \xi_s)ds. \end{equation} From Lemma \ref{lemma: moments of xi}, there exists a constant $C=C(\mu^N,N,t)$ such that, for all $v, v_\star, \sigma$ and $0\le s \le t$, \begin{equation} \|\xi^{N,v,v_\star, \sigma}_t-\xi^{N,v,v_\star, \sigma}_s\|_{\mathrm{TV}+2} \le C(t-s)^\frac{1}{2}\end{equation} and therefore, for a different constant $C'$, \begin{equation} \|\xi_t-\xi_s\|_{\mathrm{TV}+2} \le C'(t-s)^\frac{1}{2}.\end{equation} By the same reasoning as (\ref{eq: continuity of partial t xi t}), we see that the map $t\mapsto 2Q(\phi_t(\mu^N),\xi_t)$ is continuous with respect to the norm $\|\cdot\|_{\mathrm{TV}+2}$, and so we may differentiate (\ref{eq: integral eqn for xi t}) to obtain $\partial_t \xi_t = 2Q(\phi_t(\mu^N),\xi_t)$. From Proposition \ref{thrm: stability for BE}, this uniquely characterises the derivative $\mathcal{D}\phi_t(\mu^N)[\delta \hspace{0.05cm}N^{-1}\hspace{0.05cm} Q(\mu^N)]$. Hence we have the claimed equality\begin{equation} \begin{split} &\mathcal{D}\phi_t(\mu^N)[Q(\mu^N)] =\delta^{-1} N \xi_t \\& =\delta^{-1} \int_{\mathbb{R}^d\times \mathbb{R}^d \times S^{d-1}} \xi^{N,v,v_\star,\sigma}_t |v-v_\star|\hspace{0.05cm}N\hspace{0.05cm} d\sigma \hspace{0.1cm} \mu^N(dv) \mu^N (dv_\star) \\& =\int_{\mathbb{R}^d\times \mathbb{R}^d \times S^{d-1}} \mathcal{D}\phi_t(\mu^N)[\mu^{N,v,v_\star,\sigma}-\mu^N] |v-v_\star|\hspace{0.05cm}N\hspace{0.05cm} d\sigma \hspace{0.1cm} \mu^N(dv) \mu^N (dv_\star). \end{split} \end{equation} \end{proof}
\section{Proof of Theorem \ref{thrm: PW convergence}} \label{sec: proof of pw}
The main difficulty in obtaining a pathwise statement is the martingale term $M^{N,f}_t$ in Formula \ref{form:newdecomposition}, which we defined above as \begin{equation} M^{N,f}_t = \int_{(0,t]\times \mathcal{S}_N} \bigg \langle f, \phi_{t-s}(\mu^N)-\phi_{t-s}(\mu^N_{s-})\bigg\rangle (m^N-\overline{m}^N)(ds, d\mu^N). \end{equation} Recall the definition of $\mathcal{A}$ as those functions $f: \mathbb{R}^d \rightarrow \mathbb{R}$ satisfying \begin{equation} \forall \hspace{0.1cm} v, v'\in \mathbb{R}^d, \hspace{0.5cm}|\hat{f}(v)|\leq 1; \hspace{1cm}|\hat{f}(v)-\hat{f}(v')|\leq |v-v'|.\end{equation} We will be interested in controlling an expression of the form $\sup_{f\in\mathcal{A}} |M^{N,f}_t|$, either pointwise in time, or (pathwise) locally uniformly in time. However, unlike in the finite-dimensional cases in \cite{D&N}, we cannot directly apply estimates from the elementary theory of martingales, as such estimates degrade in large dimensions. Instead, we will use the relative compactness discussed in Remark \ref{rmk: compactness} to argue that this is an \emph{effectively finite-dimensional} problem. More precisely, we show that it can be approximated by a discretised, finite-dimensional martingale problem, with the following trade-off: making the truncation error small requires taking a large, but finite, dimensional approximation. As in \cite{D&N,ACE}, the martingale term is `small' as a function of $N$, but increases with the dimension of the approximation. By optimising over the discretisation, we will be able to balance the two terms and obtain a useful estimate on the family of processes. This is the same approach as that used for an equivalent problem in \cite[Theorem 1.1]{ACE}. \medskip \\ Obtaining the best exponents of $N$ that we have been able to achieve relies on a `hierarchical decomposition'. This approach was inspired by an equivalent technique used in \cite[Proposition 7.1]{ACE}.
\begin{lemma} \label{thrm: pointwise martingale control} Let $\epsilon>0$, $a\ge 1$ and $0<\lambda<\lambda_0$. Let $k$ be large enough that Corollary \ref{cor: new stability for BE} holds with $q=4$, exponent $\lambda$ and H\"{o}lder exponent $1-\epsilon$. \\ Let $(\mu^N_t)_{t\geq 0}$ be a Kac process in dimension $d\geq 3$, with initial moment $\Lambda_{k}(\mu^N_0) \leq a$. Let $M^{N,f}_t$ be the processes given by (\ref{eq: definition of MNFT}). Then we have, uniformly in $t\geq 0$, \begin{equation} \left\|\hspace{0.1cm} \sup_{f\in \mathcal{A}} \left|M^{N,f}_t\right| \hspace{0.1cm} \right\|_{L^2(\mathbb{P})} \lesssim a^{1/2} \hspace{0.1cm} N^{\epsilon-1/d}.\end{equation} \end{lemma} Once we have obtained the control of the martingale term, the remaining proof of Theorem \ref{thrm: PW convergence} is straightforward. \begin{proof}[Proof of Theorem \ref{thrm: PW convergence}] Take $k=k(\epsilon)$ as in Lemma \ref{thrm: pointwise martingale control}, and such that Proposition \ref{thrm: stability for BE} holds with exponent $\max(1-\epsilon, \frac{1}{2})$. \medskip \\ We first note that it is sufficient to prove the case $\mu_0=\mu^N_0$. Given this case, we use the continuity established in Theorem \ref{thrm: W-W continuity of phit} to estimate the difference \begin{equation} W\left(\phi_t(\mu^N_0), \phi_t(\mu_0)\right) \lesssim a^{1/2} W(\mu^N_0, \mu_0)^\zeta\end{equation} for some $\zeta=\zeta(d,k)$, which implies the claimed result. \medskip \\ From now on, we assume that $\mu_0=\mu^N_0$. 
From the interpolation decomposition Formula \ref{form:newdecomposition}, we majorise \begin{equation} \label{eq: dominate pointwise bound} W\left(\mu^N_t, \phi_t\left(\mu^N_0\right)\right) \leq \sup_{f\in \mathcal{A}} \left|M^{N,f}_t\right| + \int_0^t \sup_{f\in \mathcal{A}} \hspace{0.1cm} \langle f, \rho^N(t-s, \mu^N_s)\rangle \hspace{0.1cm} ds \end{equation} where, as in (\ref{eq: definition of psi}, \ref{eq: definition of rho}), the integrand is given by \begin{equation} \langle f, \rho^N(t-s, \mu^N_s)\rangle = \int_{\mathcal{S}_N} \langle f, \psi(t-s,\mu^N_s, \nu)\rangle \mathcal{Q}_N(\mu^N, d\nu); \end{equation} \begin{equation} \psi(u,\mu, \nu)=\phi_u(\nu)-\phi_u(\mu)-\mathcal{D}\phi_u(\mu)[\nu-\mu] \end{equation} and $\mathcal{Q}_N$ is the transition kernel (\ref{eq: definition of script Q}) of the Kac process. \medskip \\ The first term of (\ref{eq: dominate pointwise bound}) is controlled in $L^2$ by Lemma \ref{thrm: pointwise martingale control}, and so it remains to bound the second term in $L^2$. Let $s\ge 0$, and let $\mu^N$ be a measure obtained from $\mu^N_s$ by a collision, as in (\ref{eq: change of measure at collision}). Then, using the estimate (\ref{eq: stability for BE 2}), we bound
\begin{equation} \begin{split} \|\psi(t-s, \mu^N_s, \mu^N)\|_{\mathrm{TV}+2} & = \|\phi_{t-s}(\mu^N)-\phi_{t-s}(\mu^N_s)-\mathcal{D}\phi_{t-s}(\mu^N_s)[\mu^N-\mu^N_s]\|_{\mathrm{TV}+2} \\[2ex] &\lesssim e^{-\lambda_0(t-s)/2} \|\mu^N-\mu^N_s\|^{2-\epsilon}_\mathrm{TV}\hspace{0.1cm}\Lambda_k(\mu^N, \mu^N_s)^\frac{1}{2}. \end{split}\end{equation} By Lemma \ref{lemma:momentincreaseatcollision}, we know that $\Lambda_k(\mu^N)\lesssim \Lambda_k(\mu^N_s)$. Moreover, from the form (\ref{eq: change of measure at collision}) of possible $\mu^N$, we know that \begin{equation} \|\mu^N-\mu^N_s\|_\mathrm{TV}\le \frac{4}{N}\hspace{0.5cm}\text{for }\mathcal{Q}_N(\mu^N_s, \cdot)\text{-almost all }\mu^N. \end{equation} Therefore, almost surely, for all $s$ and $\mathcal{Q}_N(\mu^N_s, \cdot)$-almost all $\mu^N$, we have the bound \begin{equation} \|\psi(t-s,\mu^N_s, \mu^N)\|_{\mathrm{TV}+2} \lesssim e^{-\lambda_0(t-s)/2} N^{\epsilon-2}\hspace{0.1cm}\Lambda_k(\mu^N_s)^\frac{1}{2} \end{equation} where the implied constants are independent of $s, \mu^N_s$. Integrating with respect to $\mathcal{Q}_N(\mu^N_s, d\mu^N)$, we obtain an upper bound for $\langle f, \rho^N(t-s,\mu^N_s)\rangle$:\begin{equation} \label{eq: dominate rho} \begin{split} \sup_{f\in \mathcal{A}}\hspace{0.05cm} \langle f, \rho^N(t-s, \mu^N_s)\rangle &\leq \int_{\mathcal{S}_N} \left\|\psi(t-s, \mu^N_s, \mu^N)\right\|_{\mathrm{TV}+2} \hspace{0.1cm} \mathcal{Q}_N(\mu^N_s, d\mu^N) \\ & \lesssim e^{-\lambda_0(t-s)/2} \hspace{0.1cm} N^{\epsilon-1} \hspace{0.1cm} \Lambda_k(\mu^N_s)^\frac{1}{2}.\end{split} \end{equation} We now take the $L^2$ norm of the second term in (\ref{eq: dominate pointwise bound}). 
Using Proposition \ref{lemma:momentboundpt1} to control the moments $\Lambda_k$ appearing in the integral, we obtain \begin{equation} \begin{split} \left\|\hspace{0.1cm} \int_0^t \sup_{f\in \mathcal{A}} \hspace{0.1cm} \langle f, \rho^N(t-s, \mu^N_s)\rangle \hspace{0.1cm} ds \hspace{0.1cm}\right\|_{L^2(\mathbb{P})} &\leq \mathlarger{\mathlarger{\int}}_0^t \hspace{0.1cm} \left\| \sup_{f\in \mathcal{A}} \hspace{0.1cm} \langle f, \rho^N(t-s, \mu^N_s)\rangle \right\|_{L^2(\mathbb{P})} ds \\ & \lesssim \int_0^t e^{-\lambda_0(t-s)/2}\hspace{0.1cm} N^{\epsilon-1} \hspace{0.1cm} \left\|\Lambda_{k}(\mu^N_s)^\frac{1}{2}\right\|_{L^2(\mathbb{P})} ds \\ & \lesssim N^{\epsilon-1} \hspace{0.1cm} a^{1/2}. \end{split} \end{equation} Noting that the exponent $\epsilon-1 < \epsilon-\frac{1}{d}$, we combine this with Lemma \ref{thrm: pointwise martingale control}, and keep the worse asymptotics. \end{proof} \begin{proof}[Proof of Lemma \ref{thrm: pointwise martingale control}] We begin by reviewing the following estimates for $1$-Lipschitz functions from \cite{ACE}. Following \cite{ACE}, we use angle brackets $\langle f \rangle_C $ to denote the average of a bounded function $f$ over a Borel set $C$ of finite, nonzero measure. \\ Let $f$ be $1$-Lipschitz, and consider $B=[0,2^{-j}]^d$. Then, for some numerical constant $c_d$, we have \begin{equation} \label{eq:scalebound} \forall v \in B, \hspace{0.1cm} |f(v)-\langle f \rangle_B|\le c_d 2^{-j};\hspace{1cm} |\langle f \rangle _B -\langle f \rangle _{2B} | \le c_d 2^{-j}. \end{equation} We note that both of these bounds are linear in the length scale $2^{-j}$ of the box. We deal with the case $N\ge 2^{2d}$. \medskip\\ The proof is based on the following `hierarchical' partition of $\mathbb{R}^d$, given in the proof of \cite[Proposition 7.1]{ACE}. \begin{itemize} \item For $j \in \mathbb{Z}$, we take $B_j=(-2^j, 2^j]^d$. \item Set $A_0 = B_0$ and, for $j\geq 1$, $A_j = B_j \setminus B_{j-1}$. 
\item For $j\geq 1$ and $l \ge 2$, there is a unique partition $\mathcal{P}_{j,l}$ of $A_j$ by $2^{ld}-2^{(l-1)d}$ translates of $B_{j-l}$. \item Similarly, write $\mathcal{P}_{0,l}$ for the unique partition of $A_0$ by $2^{dl}$ translates of $B_{-l}$. \item For $j\geq 0$ and $l \ge 3$, let $B\in \mathcal{P}_{j,l}$. We write $\pi(B)$ for the unique element of $\mathcal{P}_{j,l-1}$ such that $B\subset \pi(B)$.\end{itemize} We deal first with the case $d\geq 3$. Fix discretisation parameters $L, J \ge 1$. Given a test function $f\in \mathcal{A}$, we can decompose \begin{equation} f=\sum_{j=0}^J \hspace{0.2cm} \sum_{l=2}^L \hspace{0.1cm} \sum_{B \in \mathcal{P}_{j,l}} a_B(f)(1+|v|^2) 1_{B}+ \beta(f)\end{equation} where we define \begin{equation} a_B(f)=\begin{cases} \langle \hat{f} \rangle_ B & \text{if } B \in \mathcal{P}_{j,2}, \text{ for some } j \ge 0 \\ \langle \hat{f} \rangle _B - \langle \hat{f} \rangle _{\pi(B)} & \text{if } B \in \mathcal{P}_{j,l}, \text{ for some } j \ge 0, l\ge 3\end{cases} \end{equation} and the equation serves to define the remainder term $\beta(f)$. Write $h_B = 2^{2j}(1+|v|^2)1_B$, for $B\in \mathcal{P}_{j,l}$, and write $M^{N;B}_t = M^{N, h_B}_t$. We can now write \begin{equation}\begin{split} \label{eq: decomposition of MNFT} & M^{N,f}_t= \sum_{j=0}^J\sum_{l=2}^L \sum_{B \in \mathcal{P}_{j,l}} 2^{-2j}a_B(f) M^{N;B}_t +R^{N,f}_t; \end{split} \end{equation} \begin{equation} \label{eq: definition of RNFT} R^{N,f}_t= \int_{(0,t]\times \mathcal{S}_N} \langle \beta(f), \Delta(s,t,\mu^N)\rangle (m^N-\overline{m}^N)(ds, d\mu^N) \end{equation} and where $\Delta$, $m^N$ and $\overline{m}^N$ are defined in Section \ref{sec: interpolation decomposition}. This is the key decomposition in the proof. Roughly speaking: \begin{itemize} \item The martingales $M^{N;B}$ are controlled by a bound (\ref{eq: elltwo martingale control}) from the general theory of Markov chains, \emph{independently of $f$}. 
\item The coefficients $a_B$ depend on $f$, but are bounded, uniformly over $f\in \mathcal{A}$. \item On $B_J$, $\beta(f)$ will be small, uniformly in $f$, due to the Lipschitz bound on $f$ and the estimate (\ref{eq:scalebound}). This may be viewed as a \emph{relative compactness} argument, as discussed in Remark \ref{rmk: compactness}: given $\epsilon>0$, one could use this construction to produce a finite $\epsilon$-net for $\mathcal{A}|_{B_J}$ in the norm of $\mathcal{A}_0|_{B_J}$. \item $|\hat{\beta}(f)|\leq 1$ on $\mathbb{R}^d\setminus B_J$, and the contribution from this region will be controlled by the moment bounds.\end{itemize} To control the martingale term uniformly in $f$, observe that for $B \in \mathcal{P}_{j,l}$, the bound (\ref{eq:scalebound}) gives $2^{-2j}|a_B(f)|\lesssim 2^{-j-l}$, and $\#\mathcal{P}_{j,l}\le 2^{dl}$. Hence, independently of $f\in \mathcal{A}$, \begin{equation} \left(\sum_{j=0}^J \sum_{B \in \mathcal{P}_{j,l}} (a_B(f)2^{-2j})^2\right) \lesssim 2^{(d-2)l}.\end{equation} Now, by Cauchy-Schwarz, \begin{equation} \label{eq: use of CS} \sup_{f\in \mathcal{A}} \hspace{0.1cm} \left|\sum_{j=0}^J\sum_{l=2}^L \sum_{B \in \mathcal{P}_{j,l}} 2^{-2j}a_B(f) M^{N;B}_t\right| \lesssim \sum_{l=2}^L \left(\sum_{j=0}^J \sum_{B \in \mathcal{P}_{j,l}} \left\{M^{N;B}_t\right\}^2\right)^{1/2}2^{(d/2-1)l}. 
\end{equation} Let $(M^{N;B;t}_s)_{s\leq t}$ be the martingale \begin{equation} \label{eq: MNBTS} M^{N;B;t}_s = \int_{(0,s]\times\mathcal{S}_N} \langle h_B, \Delta(u,t,\mu^N)\rangle (m^N-\overline{m}^N)(du, d\mu^N).\end{equation} We can control the remaining martingale term pointwise in $L^2$ by applying the martingale bound (\ref{eq: elltwo martingale control}) at the terminal time $t$: \begin{equation} \begin{split} & \left\|M^{N;B}_t\right\|_{L^2(\mathbb{P})}^2 = \mathbb{E} \int_{(0,t]\times \mathcal{S}_N} \langle (1+|v|^2)2^{2j}1_{B}, \Delta(s,t,\mu^N)\rangle ^2 \overline{m}^N(ds, d\mu^N) \\& \lesssim \mathbb{E} \left[\int_{(0,t]\times \mathcal{S}_N} \langle (1+|v|^4)1_{B}, |\Delta(s,t,\mu^N)|\rangle ^2 \overline{m}^N(ds, d\mu^N)\right].\end{split} \end{equation} Summing over $B\in \mathcal{P}_{j,l}$ and $j=0,\dots,J$, we use Minkowski's inequality to move the sum inside the integral against $\Delta$, and note that $\sum_j \sum_{B\in\mathcal{P}_{j,l}} h_B \lesssim (1+|v|^4)$. This produces the bound \begin{equation}
\begin{split} &\sum_{j=0}^J \sum_{B\in \mathcal{P}_{j,l}} \left\|M^{N;B}_t\right\|_{L^2(\mathbb{P})}^2 \lesssim \mathbb{E} \left[\int_{(0,t]\times \mathcal{S}_N} \langle (1+|v|^4), |\Delta(s,t,\mu^N)|\rangle ^2 \overline{m}^N(ds, d\mu^N) \right] \\[1ex] & = \mathbb{E} \left[\int_{(0,t]\times \mathcal{S}_N} \|\phi_{t-s}(\mu^N)-\phi_{t-s}(\mu^N_{s-})\|^2_{\mathrm{TV}+4} \hspace{0.1cm} \overline{m}^N(ds, d\mu^N) \right]
\end{split}\end{equation} where the second line follows by the definition of $\Delta$ in (\ref{eq: definition of Delta}). Using the stability estimates in Corollary \ref{cor: new stability for BE} with $q=4$, we find \begin{equation} \begin{split} \sum_{j=0}^J\sum_{B\in\mathcal{P}_{j,l}} \left\|M^{N;B}_t\right\|_{L^2(\mathbb{P})}^2 \lesssim \mathbb{E} \left[\int_{(0,t]\times \mathcal{S}_N} e^{-\lambda(t-s)}\Lambda_k(\mu^N_s, \mu^N) N^{2(\epsilon-1)}\hspace{0.1cm}\overline{m}^N(ds, d\mu^N) \right]. \end{split} \end{equation} For $\overline{m}^N$-almost all $(s, \mu^N)$, we bound $\Lambda_k(\mu^N_s, \mu^N)\lesssim \Lambda_k(\mu^N_s)$ by Lemma \ref{lemma:momentincreaseatcollision}, and $\overline{m}^N(ds, \mathcal{S}_N)\le 2Nds$, to bound the right-hand side by \begin{equation} \begin{split} \sum_{j=0}^J\sum_{B\in\mathcal{P}_{j,l}} \left\|M^{N;B}_t\right\|_{L^2(\mathbb{P})}^2 &\lesssim \int_0^t e^{-\lambda(t-s)}N^{2\epsilon-1} \hspace{0.1cm} \mathbb{E}[\Lambda_k(\mu^N_s)]\hspace{0.1cm}ds \\ & \lesssim N^{2\epsilon-1}\hspace{0.1cm}a\end{split} \end{equation} where the second line follows using the moment estimates for the Kac process, established in Proposition \ref{thrm:momentinequalities}. Therefore, (\ref{eq: use of CS}) gives \begin{equation} \begin{split} \label{eq: pointwise bound on martingale term} \left\|\hspace{0.1cm} \sup_{f\in \mathcal{A}} \hspace{0.1cm} \left|\sum_{j=0}^J\sum_{l=2}^L \sum_{B \in \mathcal{P}_{j,l}} 2^{-2j}a_B(f) M^{N;B}_t\right|\hspace{0.1cm} \right\|_{L^2(\mathbb{P})} & \lesssim N^{\epsilon-1/2} a^{1/2} \sum_{l=2}^L 2^{(d/2-1)l} \\[1ex] & \lesssim N^{\epsilon-1/2} \hspace{0.2cm}2^{(d/2-1)L}\hspace{0.1cm}a^{1/2}. \end{split}
\end{equation} The remaining point is to control $\beta(f)$, uniformly in $f\in\mathcal{A}$, dealing with $B_J$ and $\mathbb{R}^d \setminus B_J$ separately. Fix $f\in \mathcal{A}$ and let $B \in \mathcal{P}_{j,L}$ with $j\le J$. The definition gives $\hat{\beta}(f)= \hat{f} - \langle \hat{f} \rangle _B$ on $B$, and so \begin{equation} \text{On } B, \hspace{0.5cm} |\beta(f)| = (1+|v|^2)|\hat{f}-\langle \hat{f} \rangle _B| \hspace{0.1cm}\lesssim \hspace{0.1cm} (1+|v|^2)2^{j-L}. \end{equation} Since $|v|\ge 2^{j-1}$ on $B$ for $j\ge 1$, and $B \in \mathcal{P}_{j,L}$ is arbitrary, we see that \begin{equation} \text{On } B_J, \hspace{0.5cm} |\beta(f)|\lesssim 2^{-L}(1+|v|^4).\end{equation} On the other hand, the uniform bound $\|\hat{f}\|_\infty \le 1$ implies that \begin{equation} \text{On }B_J^c, \hspace{0.5cm} |\beta(f)|\leq (1+|v|^2) \lesssim 2^{-2J}(1+|v|^4).\end{equation} Combining, we have the global bound for all $f\in \mathcal{A}$:\begin{equation} \forall v\in \mathbb{R}^d, \hspace{0.5cm} |\beta(f)| \lesssim (2^{-2J}+2^{-L})(1+|v|^4).\end{equation} Recalling the definition (\ref{eq: definition of Delta}) of $\Delta$, we use the stability estimate in Corollary \ref{cor: new stability for BE}, with $q=4$, and the moment increase bound Lemma \ref{lemma:momentincreaseatcollision}, as above, to see that, almost surely, for $m^N+\overline{m}^N$-almost all $(s, \mu^N)$, we have the bound \begin{equation} \label{eq: majorise integrand of error term} \begin{split} \sup_{f\in \mathcal{A}} \left| \langle \beta(f), |\Delta(s,t,\mu^N)|\rangle \right| & \lesssim (2^{-2J}+2^{-L})\hspace{0.1cm}\|\Delta(s,t,\mu^N)\|_{\mathrm{TV}+4} \\ & \lesssim (2^{-2J}+2^{-L})\hspace{0.1cm} e^{-\lambda(t-s)/2}\hspace{0.1cm} N^{\epsilon-1} \hspace{0.1cm} \Lambda_{k}(\mu^N_{s-})^\frac{1}{2} \\ &=:H_s\end{split}\end{equation} where we have introduced the shorthand $H_s$ for the final expression. 
We now use the trivial observation that \begin{equation} \label{eq: dominate integrand and integrator seperately} \sup_{f\in \mathcal{A}} \hspace{0.1cm}\left|R^{N,f}_t\right| \le \int_{(0,t]\times\mathcal{S}_N} \left\{\sup_{f\in \mathcal{A}}\hspace{0.1cm}\left\langle |\beta(f)|, |\Delta(s,t,\mu^N)|\right\rangle\right\}(m^N+\overline{m}^N)(ds, d\mu^N).\end{equation}We split the measure $m^N +\overline{m}^N= (m^N-\overline{m}^N)+2\overline{m}^N$ to obtain a uniform bound for the error terms $R^{N,f}_t$ defined in (\ref{eq: definition of RNFT}): \begin{equation} \label{eq: introduce t1 t2} \begin{split} \left\|\hspace{0.1cm}\sup_{f\in \mathcal{A}} \hspace{0.1cm}R^{N,f}_t\hspace{0.1cm}\right\|_{L^2(\mathbb{P})} & \lesssim \left\| \int_0^t H_s (m^N+\overline{m}^N)(ds,\mathcal{S}_N)\right\|_{L^2(\mathbb{P})} \\[1ex] & \lesssim (2^{-2J}+2^{-L})N^{\epsilon-1}\left[\mathcal{T}_1+\mathcal{T}_2\right]\end{split} \end{equation} where we have written \begin{equation} \mathcal{T}_1=\left\| \int_0^t e^{-\lambda(t-s)/2} \Lambda_{k}(\mu^N_{s-})^\frac{1}{2} \overline{m}^N(ds,\mathcal{S}_N)\right\|_{L^2(\mathbb{P})} \end{equation} \begin{equation} \mathcal{T}_2= \left\| \int_0^t e^{-\lambda(t-s)/2} \Lambda_{k}(\mu^N_{s-})^\frac{1}{2} (m^N-\overline{m}^N)(ds,\mathcal{S}_N)\right\|_{L^2(\mathbb{P})}. \end{equation} $\mathcal{T}_1$ is controlled by dominating $\overline{m}^N(ds, \mathcal{S}_N)\leq 2N ds$ to obtain \begin{equation} \begin{split} \label{eq:dominatembar} \mathcal{T}_1 \lesssim N \left\|\int_0^t e^{-\lambda(t-s)/2} \Lambda_{k}(\mu^N_{s})^\frac{1}{2} ds \right\|_{L^2(\mathbb{P})} & \lesssim N \int_0^t e^{-\lambda(t-s)/2} \|\Lambda_{k}(\mu^N_{s})^\frac{1}{2}\|_{L^2(\mathbb{P})} \hspace{0.1cm} ds \\[1ex] &\lesssim N a^{1/2}. 
\end{split}\end{equation} We control $\mathcal{T}_2$ by It\^{o}'s isometry for $m^N-\overline{m}^N$, which is reviewed in (\ref{eq: QV of M}): \begin{equation} \label{eq:itoisometrycontrol}\begin{split} \mathcal{T}_2^2 &= \mathbb{E} \left\{ \int_0^t e^{-\lambda(t-s)} \Lambda_k(\mu^N_{s-}) \overline{m}^N(ds,\mathcal{S}_N)\right\} \\& \lesssim N \int_0^t e^{-\lambda(t-s)} \mathbb{E}\left\{\Lambda_k(\mu^N_{s-}) \right\} ds \\ & \lesssim N \hspace{0.1cm}a.\end{split} \end{equation} Combining (\ref{eq: introduce t1 t2}, \ref{eq:dominatembar}, \ref{eq:itoisometrycontrol}), we obtain \begin{equation} \label{eq:control of error term pw} \begin{split} \left\|\hspace{0.1cm}\sup_{f\in \mathcal{A}} \hspace{0.1cm}R^{N,f}_t\hspace{0.1cm}\right\|_{L^2(\mathbb{P})} \lesssim (2^{-2J}+2^{-L})\hspace{0.1cm}N^{\epsilon}\hspace{0.1cm}a^{1/2}.\end{split} \end{equation} Finally, we combine (\ref{eq: decomposition of MNFT}, \ref{eq: pointwise bound on martingale term}, \ref{eq:control of error term pw}) to obtain \begin{equation} \left\|\hspace{0.1cm} \sup_{f \in \mathcal{A}} \left|M^{N,f}_t\right|\right\|_{L^2(\mathbb{P})} \hspace{0.2cm} \lesssim \hspace{0.2cm} N^{\epsilon} \hspace{0.1cm} a^{1/2} (N^{-1/2}\hspace{0.1cm}2^{(d/2-1)L}+2^{-L}+2^{-2J}). \end{equation} Taking $L=\lfloor \log_2(N)/d \rfloor$ and $J\uparrow \infty$ produces the claimed result. For $d=2$, we replace $2^{(d/2-1)L}$ by $L$ in (\ref{eq: pointwise bound on martingale term}), and optimise as before, absorbing the factors of $(\log N)$ to make the exponent of $N$ slightly larger. \end{proof}
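\medskip \noindent For the reader's convenience, we record the arithmetic behind the choice $L=\lfloor \log_2(N)/d \rfloor$ above: since $d\ge 3$, we have $(d/2-1)L \le (\tfrac{1}{2}-\tfrac{1}{d})\log_2 N$ and $L\ge \log_2(N)/d - 1$, so that \begin{equation} N^{-1/2}\hspace{0.1cm}2^{(d/2-1)L}\le N^{-1/2}\hspace{0.1cm}N^{\frac{1}{2}-\frac{1}{d}} = N^{-1/d}; \hspace{1cm} 2^{-L}\le 2\hspace{0.1cm}N^{-1/d}. \end{equation} Letting $J\uparrow\infty$ then bounds the bracket in the final display by a multiple of $N^{-1/d}$, which gives the estimate $\lesssim a^{1/2}\hspace{0.1cm}N^{\epsilon-1/d}$ claimed in Lemma \ref{thrm: pointwise martingale control}.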
\section{Proof of Theorem \ref{thrm: Main Local Uniform Estimate}} \label{sec: proof of LU}
We now adapt the ideas of Lemma \ref{thrm: pointwise martingale control} to a local uniform setting, now working in $L^p$, to prove the local uniform approximation result Theorem \ref{thrm: Main Local Uniform Estimate}. As in the proof above, most of the work is in controlling the martingale term $(M^{N,f}_t)_{f\in \mathcal{A}}$ defined in (\ref{eq: definition of MNFT}), uniformly in $f$; for a pathwise local uniform estimate, we wish to control an expression of the form \begin{equation}\label{eq: local unf mg exp} \left\| \hspace{0.1cm}\sup_{f\in \mathcal{A}} \hspace{0.1cm}\sup_{t\leq t_\text{fin}} \hspace{0.1cm} \left|M^{N,f}_t\right|\right\|_{L^p(\mathbb{P})}. \end{equation} Since we will frequently encounter suprema of processes on compact time intervals, we introduce some notation. For any stochastic process $M$, we write \begin{equation}\label{eq: use of star} M_{\star,t}=\sup_{s\leq t}|M_s|. \end{equation} Proving the sharpest asymptotics in the time horizon $t_\text{fin}$ requires working in $L^p$ instead of $L^2$, for large exponents $p$. This leads to a weaker exponent in $N$: we obtain only $N^{\epsilon-p'/(2d)}$ instead of $N^{\epsilon -1/d}$, where $p'\leq 2$ is the H\"{o}lder conjugate to $p$. However, by making $p$ large, we are able to obtain estimates which degrade slowly in the time horizon $t_\text{fin}$, with only a factor of $(1+t_\text{fin})^{1/p}$. The exponent for $t_\text{fin}$ can thus be made arbitrarily small, while the resulting exponent for $N$ is bounded away from $0$ as we make $p$ large. \\ \\ The key result required for the local uniform estimate is the following control of the expression (\ref{eq: local unf mg exp}), in analogy to Lemma \ref{thrm: pointwise martingale control}. \begin{lemma} \label{thrm: local uniform martingale control} Let $\epsilon>0$, $a\ge 1$ and $p\geq 2$, and let $1<p'\le 2$ be the H\"{o}lder conjugate to $p$. 
Let $k$ be large enough that Corollary \ref{cor: new stability for BE} holds for $q=5$, with H\"older exponent $1-\epsilon$, and with some $0<\lambda<\lambda_0$. \\ Let $(\mu^N_t)_{t\geq 0}$ be a Kac process on $N\geq 2$ particles, with initial moment $\Lambda_{kp}(\mu^N_0)\leq a^p$. Let $M^{N,f}_t$ be the processes given by (\ref{eq: definition of MNFT}), and $M^{N,f}_{\star, t}$ their local suprema, as in (\ref{eq: use of star}). Then, for any time horizon $t_\text{fin}\in [0,\infty)$, we have the control \begin{equation} \left\|\hspace{0.1cm} \sup_{f\in \mathcal{A}} \hspace{0.1cm} \hspace{0.1cm} M^{N,f}_{\star,t_\text{fin}}\hspace{0.1cm}\right\|_{L^p(\mathbb{P})} \lesssim a^{1/2} \hspace{0.1cm} N^{-\alpha}\hspace{0.1cm}(\log N)^{1/p'}\hspace{0.1cm}(1+t_\text{fin})^\frac{3p+1}{2p}\end{equation} where $\alpha = \frac{p'}{2d}-\epsilon$.\end{lemma} The proof of this lemma follows the same ideas as the proof of the equivalent result, Lemma \ref{thrm: pointwise martingale control}, for the pointwise bound. However, in this case, we must modify the argument to work in $L^p$ rather than $L^2$, and also to control all terms uniformly on the compact time interval $[0, t_\text{fin}]$. The proof is deferred until the end of this section.\medskip\\
Following the argument of the pointwise bound in Theorem \ref{thrm: PW convergence}, we can now produce an initial pathwise, local uniform estimate for the case $\mu_0=\mu^N_0$, with worse long-time behaviour. From this, we will `bootstrap' to the desired long-time behaviour in Theorem \ref{thrm: Main Local Uniform Estimate}. \begin{lemma} \label{lemma: initial LU bound} Let $\epsilon>0$, $a\ge 1$ and $p\geq 2$, with H\"older conjugate $p'\le 2$. Choose $k$ large enough that Proposition \ref{thrm: stability for BE} holds with exponent $1-\epsilon$, and that Corollary \ref{cor: new stability for BE} holds with exponent $1-\epsilon$ and $q=5$. Let $(\mu^N_t)_{t\geq 0}$ be a Kac process on $N\geq 2$ particles, with initial moment $\Lambda_{kp}(\mu^N_0)\leq a^p$. Then, for any time horizon $t_\text{fin}\ge 0$, we have the control \begin{equation} \left\| \hspace{0.1cm} \sup_{t\leq t_\text{fin}} \hspace{0.1cm}W\left(\mu^N_t, \phi_t\left(\mu^N_0\right)\right) \hspace{0.1cm} \right\|_{L^p(\mathbb{P})} \lesssim a^{1/2} \hspace{0.1cm}N^{\epsilon-\frac{p'}{2d}}\hspace{0.1cm} (\log N)^{1/p'}\hspace{0.1cm}(1+t_\text{fin})^\frac{3p+1}{2p}.\end{equation} \end{lemma} \begin{proof}[Proof of Lemma \ref{lemma: initial LU bound}] As in Theorem \ref{thrm: PW convergence}, it remains to control the supremum of the integral term in Formula \ref{form:newdecomposition}: \begin{equation} \sup_{t\leq t_\text{fin}} \int_0^t \sup_{f\in \mathcal{A}} \langle f, \rho^N(t-s, \mu^N_s)\rangle ds \end{equation} where $\rho^N$ is given by (\ref{eq: definition of rho}). 
Following the previous calculation (\ref{eq: dominate rho}), we majorise, for $s\leq t\leq t_\text{fin}$, \begin{equation} \label{eq: loc unf dominate rho}\sup_{f\in \mathcal{A}} \langle f, \rho^N(t-s, \mu^N_s)\rangle \lesssim N^{\epsilon-1} \hspace{0.1cm} \sup_{u\leq t_\text{fin}} \left\{ \Lambda_{k}(\mu^N_u)^\frac{1}{2}\right\} \end{equation} from which it follows that \begin{equation} \label{eq: loc unf dominate rho 2} \sup_{t\leq t_\text{fin}} \int_0^t \sup_{f\in \mathcal{A}} \langle f, \rho^N(t-s, \mu^N_s)\rangle ds \lesssim N^{\epsilon-1} \hspace{0.1cm} t_\text{fin} \hspace{0.1cm} \sup_{u\leq t_\text{fin}} \left\{ \Lambda_{k}(\mu^N_u)^\frac{1}{2}\right\}.\end{equation} From the local uniform moment bound established in Proposition \ref{lemma:momentboundpt1}, and the initial moment bound on $\mu^N_0$, \begin{equation} \label{eq: locunf moment bound} \begin{split} \left\|\hspace{0.1cm} \sup_{u\leq t_\text{fin}} \left\{ \Lambda_{k}(\mu^N_u)^\frac{1}{2}\right\} \right\|_{L^p(\mathbb{P})} &\leq \left\| \hspace{0.1cm} \sup_{u\leq t_\text{fin}} \left\{ \Lambda_{k}(\mu^N_u)^\frac{1}{2}\right\} \right\|_{L^{2p}(\mathbb{P})} \leq \mathbb{E}\left[\sup_{u\leq t_\text{fin}} \Lambda_{pk}(\mu^N_u)\right]^{1/2p} \\[1ex] & \lesssim a ^{1/2} \hspace{0.1cm} (1+t_\text{fin})^{1/2p}. \end{split} \end{equation} Combining the estimates (\ref{eq: loc unf dominate rho 2}, \ref{eq: locunf moment bound}), we see that \begin{equation} \left\|\sup_{t\leq t_\text{fin}} \int_0^t \sup_{f\in \mathcal{A}} \langle f, \rho^N(t-s, \mu^N_s)\rangle ds\right\|_{L^p(\mathbb{P})} \lesssim N^{\epsilon-1}\hspace{0.1cm}a^{1/2} \hspace{0.1cm}(1+t_\text{fin})^\frac{2p+1}{2p}.\end{equation} We combine this with Lemma \ref{thrm: local uniform martingale control} and keep the worse asymptotics.\end{proof}
We will now show how to `bootstrap' to a better dependence on the time horizon $t_\text{fin}$. Heuristically, the proof allows us to replace powers of $t_\text{fin}$ in the initial bound with the same power of $\log N$, and introduce an additional factor of $(1+t_\text{fin})^{1/p}$. As was remarked below Proposition \ref{thrm: stability for BE}, we could derive Theorem \ref{thrm: PW convergence} and Lemma \ref{lemma: initial LU bound} under the milder assumptions \begin{equation} \label{eq: weaker stability 4} \|\phi_t(\nu)-\phi_t(\mu)\|_{\mathrm{TV}+5} \leq F(t) \Lambda_{k}(\mu, \nu)^\frac{1}{2}\|\mu-\nu\|_\mathrm{TV}^\eta; \end{equation} \begin{equation}\label{eq: weaker stability 5}
\|\phi_t(\nu)-\phi_t(\mu) - \xi_t \|_{\mathrm{TV}+2} \leq G(t) \Lambda_{k}(\mu, \nu)^\frac{1}{2}\|\mu-\nu\|_\mathrm{TV}^{1+\eta}\end{equation} for functions $F,G$ such that \begin{equation} \label{eq: weaker stability 6} \left(\int_0^\infty F^2 dt\right)^{1/2}<\infty;\hspace{0.5cm} \int_0^\infty G dt<\infty. \end{equation} If we also assume that $F\rightarrow 0$ as $t\rightarrow \infty$, we can use an identical bootstrap argument, with $\log N$ replaced by a power of \begin{equation} \tau_N := \sup\{t: F(t) > N^{-\alpha}\}\end{equation} which produces a potentially larger loss. \emph{Hence, the full strength of exponential decay in Proposition \ref{thrm: stability for BE} is used to control the asymptotic loss due to the bootstrap}.
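To illustrate the size of this loss, the following sketch computes $\tau_N$ for two model decay profiles; the profiles are illustrative choices, not taken from the results above.

```latex
% Exponential decay gives a logarithmic loss:
F(t)=e^{-\lambda t/2}
\quad\Longrightarrow\quad
\tau_N=\sup\{t: e^{-\lambda t/2}>N^{-\alpha}\}=\frac{2\alpha}{\lambda}\log N.
% Polynomial decay gives a loss which is a fractional power of N:
F(t)=(1+t)^{-m}
\quad\Longrightarrow\quad
\tau_N=\sup\{t: (1+t)^{-m}>N^{-\alpha}\}=N^{\alpha/m}-1.
```

In the second case the bootstrap would cost a fractional power of $N$, rather than a power of $\log N$.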
\begin{proof}[Proof of Theorem \ref{thrm: Main Local Uniform Estimate}] As in the proof of Theorem \ref{thrm: PW convergence}, it is sufficient to prove the case $\mu^N_0=\mu_0$. Then, making $k$ larger if necessary, we may use Theorem \ref{thrm: W-W continuity of phit} to control $\sup_{t\ge 0}W(\phi_t(\mu^N_0), \phi_t(\mu_0))$, which proves the general result.\medskip \\ Let $0<\epsilon'<\epsilon$, and choose $k$ such that Lemma \ref{lemma: initial LU bound} holds for $\epsilon'$. Let $\alpha'>\alpha$ be the exponent of $N$ obtained with $\epsilon'$ in place of $\epsilon$. From the stability estimate Proposition \ref{thrm: stability for BE}, we have \begin{equation} \forall \mu, \nu \in \mathcal{S}^k_a, \hspace{0.2cm}\|\phi_t(\mu)-\phi_t(\nu)\|_{\mathrm{TV}+2} \lesssim \Lambda_{k}(\mu, \nu)^\frac{1}{2} e^{-\lambda_0 t/2}.\end{equation}Define $\tau = \tau_N = -2\lambda_0^{-1} \log(N^{-\alpha'})$ and consider $t_\text{fin}> \tau +1 $. Fix a positive integer $n$, and partition the interval $[0, t_\text{fin}]$ as $I_0\cup I_1 \cup...\cup I_n$: \begin{equation} I_0=[0,\tau]; \hspace{0.5cm}I_r = \left[\tau+(r-1)\frac{t_\text{fin}-\tau}{n},\tau+r\frac{t_\text{fin}-\tau}{n}\right]=:[s_r+\tau,t_r].\end{equation} Write also $H_r=[s_r, t_r] \supset I_r$. Since the norm $\|\cdot\|_{\mathrm{TV}+2}$ dominates the Wasserstein distance $W$, we have the bound\begin{equation} \label{eq: bootstrap bound}\sup_{t\in I_r} \hspace{0.1cm} W(\mu^N_t, \phi_t(\mu^N_0)) \lesssim \sup_{t\in H_r} \hspace{0.1cm} W(\mu^N_t, \phi_{t-s_r}(\mu^N_{s_r})) + e^{-\lambda_0 \tau/2} \Lambda_{k}(\mu^N_{s_r}, \phi_{s_r}(\mu^N_0))^\frac{1}{2}.\end{equation} We bound the two terms in (\ref{eq: bootstrap bound}) separately. Denote by $(\mathcal{F}^N_t)_{t\geq 0}$ the natural filtration of $(\mu^N_t)_{t\geq 0}$. 
We control the first term by Lemma \ref{lemma: initial LU bound}, applied to the restarted process $(\mu^N_t)_{t\geq s_r}$: \begin{equation}\begin{split} \left\|\hspace{0.1cm}\sup_{t\in H_r} \hspace{0.1cm} W(\mu^N_t, \phi_{t-s_r}(\mu^N_{s_r})) \right \|_{L^p(\mathbb{P})} ^p = \mathbb{E}\left\{\mathbb{E}\left(\left[\left.\sup_{s_r \leq t \leq t_r} \hspace{0.1cm} W(\mu^N_t, \phi_{t-s_r}(\mu^N_{s_r}))\right]^p \right| \mathcal{F}^N_{s_r} \right) \right\} \\ \lesssim \mathbb{E} \left\{\Lambda_{pk}(\mu^N_{s_r}) ^{1/2}\right\} \left(1+ \tau + \frac{t_\text{fin}-\tau}{n}\right)^\frac{3p+1}{2} N^{-p\alpha'}\hspace{0.1cm}(\log N)^\frac{p}{p'}.\end{split}\end{equation} We control the moment in the usual way, using Proposition \ref{thrm:momentinequalities}\ref{lemma:momentboundpt1}, to obtain \begin{equation} \label{eq: Bootstrap bound 2} \left\|\hspace{0.1cm}\sup_{t\in H_r} \hspace{0.1cm} W(\mu^N_t, \phi_{t-s_r}(\mu^N_{s_r})) \right \|_{L^p(\mathbb{P})} ^p \lesssim a^{p/2} \left(1+ \tau + \frac{t_\text{fin}-\tau}{n}\right)^\frac{3p+1}{2} N^{-p\alpha'}\hspace{0.1cm}(\log N)^\frac{p}{p'}.\end{equation} We now turn to the second term in (\ref{eq: bootstrap bound}). Using the definition of $\tau$ and the moment estimates (\ref{eq: pointwise moment bound}, \ref{eq: BE moment bound}) in Proposition \ref{thrm:momentinequalities}, \begin{equation} \label{eq: Bootstrap bound 3} \|e^{-\lambda_0 \tau/2} \Lambda_{k}(\mu^N_{s_r}, \phi_{s_r}(\mu^N_0))^\frac{1}{2} \|_{L^p(\mathbb{P})} \lesssim N^{-\alpha'} \hspace{0.1cm} a^{1/2}. 
\end{equation} Combining the estimates (\ref{eq: Bootstrap bound 2}, \ref{eq: Bootstrap bound 3}), and absorbing powers of $\tau$ into the powers of $(\log N)$, we obtain \begin{equation} \left\| \hspace{0.1cm} \sup_{t\in I_r} W(\mu^N_t, \phi_t(\mu^N_0)) \right\|_{L^p(\mathbb{P})} \lesssim a^{1/2} \hspace{0.1cm} \left(1+ \frac{t_\text{fin}-\tau}{n}\right)^\frac{3p+1}{2p}\left(N^{-\alpha'} (\log N)^{\frac{3p+1}{2p}+\frac{1}{p'}}\right).\end{equation} Observe that \begin{equation} \left\{ \sup_{\tau \leq t \leq t_\text{fin}} W\left(\mu^N_t, \phi_t(\mu^N_0)\right) \right\}^p \leq \mathlarger{\mathlarger{\sum}}_{r=1}^n \left\{ \sup_{t \in I_r} W\left(\mu^N_t, \phi_t(\mu^N_0)\right) \right\}^p. \end{equation} Taking expectations and $p^\text{th}$ root, we find that\begin{equation} \begin{split} &\left\|\hspace{0.1cm} \sup_{\tau \leq t \leq t_\text{fin}} W\left(\mu^N_t, \phi_t(\mu^N_0)\right) \right\|_{L^p(\mathbb{P})} \\& \hspace{2cm} \lesssim n^\frac{1}{p} \hspace{0.1cm} a^{1/2} \hspace{0.1cm} \left(1+ \frac{t_\text{fin}-\tau}{n}\right)^\frac{3p+1}{2p}\left(N^{-\alpha'} (\log N)^{\frac{3p+1}{2p}+\frac{1}{p'}}\right).\end{split} \end{equation} This is optimised at $n\sim (t_\text{fin}-\tau)$, where we obtain the estimate \begin{equation} \begin{split} \label{eq: tfin > tau} \left\|\hspace{0.1cm} \sup_{\tau \leq t \leq t_\text{fin}} W\left(\mu^N_t, \phi_t(\mu^N_0)\right) \right\|_{L^p(\mathbb{P})} &\lesssim a^{1/2} (t_\text{fin}-\tau)^\frac{1}{p}\hspace{0.1cm}\left(N^{-\alpha'} (\log N)^{\frac{3p+1}{2p}+\frac{1}{p'}}\right) \\ &\leq a^{1/2} \hspace{0.1cm} t_\text{fin}^\frac{1}{p}\hspace{0.1cm}\left(N^{-\alpha'} (\log N)^{\frac{3p+1}{2p}+\frac{1}{p'}}\right). 
\end{split} \end{equation} From Lemma \ref{lemma: initial LU bound} applied up to time $\tau=\tau_N$, we have \begin{equation} \begin{split} \label{eq:shorttimecontrolforiteration} \left\|\hspace{0.1cm} \sup_{0 \leq t \leq \tau_N} W\left(\mu^N_t, \phi_t(\mu^N_0)\right) \right\|_{L^p(\mathbb{P})} &\lesssim a^{1/2} \hspace{0.1cm} N^{-\alpha'} \left(1+\frac{2\alpha'}{\lambda_0} \log(N)\right)^{\frac{3p+1}{2p}}(\log N)^{\frac{1}{p'}} \\ &\lesssim a^{1/2} \hspace{0.1cm}\left(N^{-\alpha} (\log N)^{\frac{3p+1}{2p}+\frac{1}{p'}}\right) .\end{split}\end{equation} Combining (\ref{eq: tfin > tau}, \ref{eq:shorttimecontrolforiteration}), and absorbing the powers of $(\log N)$ into $N^{\epsilon-\epsilon'}$, we have \begin{equation} \begin{split} \left\|\hspace{0.1cm} \sup_{0 \leq t \leq t_\text{fin}} W\left(\mu^N_t, \phi_t(\mu^N_0)\right) \right\|_{L^p(\mathbb{P})} \lesssim a^{1/2} \hspace{0.1cm}(1+t_\text{fin})^\frac{1}{p} \hspace{0.1cm}N^{-\alpha}.\end{split} \end{equation} The case where $t_\text{fin}\leq \tau+ 1$ is essentially identical to (\ref{eq:shorttimecontrolforiteration}). \end{proof} \begin{remark} We note that this `bootstrap' argument would produce the same result with any \emph{polynomial} time dependence in Lemma \ref{lemma: initial LU bound}. As a result, the precise time dependence of Lemmas \ref{thrm: local uniform martingale control}, \ref{lemma: initial LU bound} is uninteresting, and we do not attempt to optimise it. We also remark that this method produces the same long-time behaviour even starting from an exponential estimate, at the cost of a fractional power of $N$. \end{remark}
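For reference, the absorption of logarithmic factors used repeatedly in this argument rests on the elementary fact that powers of $\log N$ are dominated by any positive power of $N$:

```latex
% For any C \ge 0 and \delta > 0, there exists a constant C_\delta with
(\log N)^C \le C_\delta\, N^{\delta} \quad \text{for all } N \ge 2,
% so an estimate with a strictly larger exponent absorbs the logarithms:
N^{-\alpha'}\,(\log N)^C \le C_\delta\, N^{-\alpha},
\qquad \delta = \alpha'-\alpha > 0.
```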
It remains to prove Lemma \ref{thrm: local uniform martingale control}. We draw attention to the fact that the $M^{N,f}$ are \emph{not} themselves martingales, despite the general construction (\ref{eq: M is for martingale}), since the integrand $\phi_{t-s}(\mu^N)-\phi_{t-s}(\mu^N_{s-})$ depends on the terminal time $t$. We address this by computing an associated family of martingales: \begin{lemma}\label{lemma:newmartingaleconstruction} Let $(M^{N,f}_t)_{t\geq 0}$ be the processes defined in Formula \ref{form:newdecomposition}. Recalling the notation $Q_t=Q\circ \phi_t$, define \begin{equation} \label{eq: defn of chi} \chi(s,t,\mu^N)=Q_{t-s}(\mu^N)-Q_{t-s}(\mu^N_{s-}).\end{equation} Suppose $f$ satisfies a growth condition $|f(v)|\leq (1+|v|^q)$, for some $q\geq 0$. Consider the martingales $Z^{N,f}_t$ given by \begin{equation}Z^{N,f}_t=\int_{(0,t]\times \mathcal{S}_N}\langle f,\mu^N-\mu^N_{s-}\rangle (m^N-\overline{m}^N)(ds, d\mu^N). \end{equation} Then we have the equality \begin{equation}\begin{split} &Z^{N,f}_t=M^{N,f}_t- C^{N,f}_t\\ &= M^{N,f}_t-\int_0^t ds \int_{(0,s]\times \mathcal{S}_N} \langle f, \chi(u,s,\mu^N) \rangle (m^N-\overline{m}^N)(du, d\mu^N).\end{split}\end{equation} \end{lemma} \begin{proof} Firstly, we note that the $Z^{N,f}_t$ are martingales by standard results for Markov chains (\ref{eq: M is for martingale}). Observe that the integrand in the definition of $C^{N,f}_t$ is bounded, since whenever $0\leq u \leq s$, and $\mu^N$ is obtained from $\mu^N_{u-}$ by collision, we use the estimate (\ref{eq: Lipschitz continuity of Q}) with $\eta=\frac{1}{2}$, to obtain for some $k$ \begin{equation} \begin{split} |\langle f, \chi(u,s,\mu^N)\rangle | & \leq \| Q_{s-u}(\mu^N)-Q_{s-u}(\mu^N_{u-})\|_{\mathrm{TV}+q} \\[1ex] & \lesssim\Lambda_{k}(\mu^N, \mu^N_{u-})^\frac{1}{2}N^{-\frac{1}{2}} \lesssim N^{\frac{k-2}{4}} <\infty. 
\end{split} \end{equation} Moreover, for initial data $\mu^N \in \mathcal{S}_N$, the Boltzmann flow $(\phi_s(\mu^N))_{s=0}^t$ has uniformly bounded $(q+1)^\text{th}$ moments and so, by approximation, the Boltzmann dynamics (\ref{BE}) extend to $f$. Now, we apply Fubini to the integral: \begin{equation} \begin{split} &C^{N,f}_t \\ &= \int_{(0,t]\times \mathcal{S}_N} \int_0^t ds\hspace{0.1cm} \langle f, Q_{s-u}(\mu^N)-Q_{s-u}(\mu^N_{u-})\rangle \hspace{0.1cm} 1[u \le s \le t] \hspace{0.1cm} (m^N-\overline{m}^N)(du, d\mu^N) \\& = \int_{(0,t]\times \mathcal{S}_N} \left\{ \int_u^t \left(\langle f, Q_{s-u}(\mu^N)\rangle -\langle f, Q_{s-u}(\mu^N_{u-})\rangle \right) ds\right\} (m^N-\overline{m}^N)(du, d\mu^N) \\ & =\int_{(0,t]\times\mathcal{S}_N} \left\{\langle f, \phi_{t-u}(\mu^N)-\phi_{t-u}(\mu^N_{u-})\rangle -\langle f, \mu^N-\mu^N_{u-}\rangle\right\}(m^N-\overline{m}^N)(du, d\mu^N) \\& \hspace{2cm} =: M^{N,f}_t - Z^{N,f}_t \end{split}\end{equation} where the third equality is precisely the (extended) Boltzmann dynamics (\ref{BE}) in the variable $s \in[u,t]$. \end{proof}
To prove Lemma \ref{thrm: local uniform martingale control}, we return to the decomposition (\ref{eq: decomposition of MNFT}) used in the proof of Lemma \ref{thrm: pointwise martingale control}. Our first step is to establish a control on \begin{equation} \mathbb{E} \left[ \sum_{j=0}^J \sum_{B\in \mathcal{P}_{j,l}} \left\{ M^{N;B}_{\star,t_\text{fin}}\right\}^p\right]\end{equation} where $\star$ denotes the local supremum (\ref{eq: use of star}). We will do so by breaking the supremum into two parts, each of which can be controlled by elementary martingale estimates. Let $(J^{N;B;t}_s)_{0\le s\le t}$ be the process \begin{equation} J^{N;B;t}_s = \int_{(0,s]\times\mathcal{S}_N} \langle h_B, Q_{t-u}(\mu^N)-Q_{t-u}(\mu^N_{u-})\rangle (m^N-\overline{m}^N)(du, d\mu^N)\end{equation} where, as in the proof of Theorem \ref{thrm: PW convergence}, \begin{equation}\label{eq: defn of hB} h_B=2^{2j}(1+|v|^2)1_B; \hspace{1cm} B \in \mathcal{P}_{j,l}. \end{equation} Each process $(J^{N;B;t}_s)_{0\le s\le t}$ is a martingale, by standard results for Markov chains (\ref{eq: M is for martingale}). Writing $Z^{N;B}=Z^{N,h_B}$, Lemma \ref{lemma:newmartingaleconstruction} gives \begin{equation} Z^{N;B}_t = M^{N;B}_t -\int_0^t J^{N;B;s}_s \hspace{0.1cm} ds. \end{equation} \begin{lemma} \label{lemma:break up LU} Let $p\geq 2$, and let $p'$ be the H\"{o}lder conjugate to $p$. In the notation above, we have the comparison \begin{equation} \mathbb{E} \left[ \sum_{j=0}^J \sum_{B\in \mathcal{P}_{j,l}} \left\{ \left|M^{N;B}_{\star,t_\text{fin}}\right|\right\}^p\right] \lesssim \mathbb{E} \left[ \sum_{j=0}^J \sum_{B\in \mathcal{P}_{j,l}} \left\{\hspace{0.1cm} \left|M^{N;B}_{t_\text{fin}}\right|^p+t_\text{fin}^{p/p'}\int_0^{t_\text{fin}} \left|J^{N;B;t}_t \right|^p dt\right\}\right]. 
\end{equation} \end{lemma} \begin{proof} For each $B$, we observe that \begin{equation} \begin{split} \sup_{t\le t_\text{fin}} \left|M^{N;B}_t - Z^{N;B}_t\right| \le \int_0^{t_\text{fin}}\left|J^{N;B;s}_s\right| ds\end{split} \end{equation} which implies the two bounds \begin{equation} \label{eq: comparing ZNB and MNB} M^{N;B}_{\star,t_\text{fin}} \le Z^{N;B}_{\star, t_\text{fin}} +\int_0^{t_\text{fin}} \left|J^{N;B;s}_s\right| ds; \hspace{1cm} \left|Z^{N;B}_{t_\text{fin}}\right| \le \left|M^{N;B}_{t_\text{fin}}\right|+\int_0^{t_\text{fin}} \left|J^{N;B;s}_s\right| ds.\end{equation} By Doob's $L^p$ inequality, we have \begin{equation} \label{eq: Doob LP} \begin{split} \left\|\hspace{0.1cm} Z^{N;B}_{\star,t_\text{fin}}\hspace{0.1cm}\right\|_{L^p(\mathbb{P})} &\leq p'\hspace{0.1cm}\left\|\hspace{0.1cm} Z^{N;B}_{t_\text{fin}}\hspace{0.1cm}\right\|_{L^p(\mathbb{P})}\end{split}. \end{equation} Combining (\ref{eq: comparing ZNB and MNB}, \ref{eq: Doob LP}), we obtain \begin{equation} \left\|\hspace{0.1cm} M^{N;B}_{\star,t_\text{fin}}\hspace{0.1cm}\right\|_{L^p(\mathbb{P})} \lesssim \left\|\hspace{0.1cm} M^{N;B}_{t_\text{fin}}\hspace{0.1cm}\right\|_{L^p(\mathbb{P})} + \left\|\hspace{0.1cm}\int_0^{t_\text{fin}}\left|J^{N;B;s}_s\right|ds \hspace{0.1cm}\right\|_{L^p(\mathbb{P})}.\end{equation} Using H\"{o}lder's inequality on the integral,\begin{equation} \begin{split} \mathbb{E} \left[\left\{\hspace{0.1cm} M^{N;B}_{\star,t_\text{fin}}\right\}^p \right] &\lesssim \mathbb{E}\left[ \hspace{0.1cm}\left|M^{N;B}_{t_\text{fin}}\right|^p \hspace{0.1cm}\right] + \mathbb{E}\left[\hspace{0.1cm}\left\{\int_0^{t_\text{fin}} \left|J^{N;B;s}_s\right| \hspace{0.1cm} ds \right\}^p\hspace{0.1cm}\right] \\ & \lesssim \mathbb{E}\left[ \hspace{0.1cm}\left|M^{N;B}_{t_\text{fin}}\right|^p \hspace{0.1cm}\right] +t_\text{fin}^{p/p'} \int_0^{t_\text{fin}} \mathbb{E}\left[\hspace{0.1cm}\left|J^{N;B;t}_t\right|^p \hspace{0.1cm}\right]\hspace{0.1cm} dt .\end{split} \end{equation} Summing over $B\in \mathcal{P}_{j,l}$ and 
$j=0,1,\dots ,J$, we obtain the desired comparison. \end{proof}
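For completeness, the H\"{o}lder step used in the final display of the proof above is the elementary bound, valid for any $g\in L^p[0,t_\text{fin}]$:

```latex
% Hölder's inequality with exponents (p', p), applied to 1 \cdot |g|:
\left(\int_0^{t_\text{fin}} |g(s)|\, ds\right)^p
\le \left(\int_0^{t_\text{fin}} 1^{p'}\, ds\right)^{p/p'}
\int_0^{t_\text{fin}} |g(s)|^p\, ds
= t_\text{fin}^{p/p'} \int_0^{t_\text{fin}} |g(s)|^p\, ds.
```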
\begin{proof}[Proof of Lemma \ref{thrm: local uniform martingale control}] We begin by controlling the integral term in Lemma \ref{lemma:break up LU}. The quadratic variation is given by \begin{equation} \begin{split} \left[J^{N;B;t}\right]_s &= \int_{(0,s]\times \mathcal{S}_N} \langle h_B, \chi(u,t,\mu^N)\rangle^2 m^N(du, d\mu^N) \\& \le \int_{(0,s]\times \mathcal{S}_N} \langle h_B, |\chi(u,t,\mu^N)|\rangle^2 m^N(du, d\mu^N) \end{split} \end{equation} where $h_B$ is as in (\ref{eq: defn of hB}) and $\chi$ is as in (\ref{eq: defn of chi}). Hence, using Burkholder's inequality (\ref{lemma:Burkholder}) we see that, for all $t\leq t_\text{fin}$, \begin{equation} \begin{split} &\mathbb{E} \left[\sum_{j=0}^J \sum_{B\in \mathcal{P}_{j,l}} \left\{\hspace{0.1cm} \left|J^{N;B;t}_t\right|\right\}^p\right] \\ & \hspace{2cm}\lesssim \mathbb{E} \left[\sum_{j=0}^J \sum_{B\in \mathcal{P}_{j,l}} \left\{\int_{(0,t]\times \mathcal{S}_N}\langle h_B, |\chi(u,t,\mu^N)|\rangle^2 m^N(du, d\mu^N)\right\}^{p/2}\right]. \end{split} \end{equation} Using Minkowski's inequality to move the double sum inside the parentheses, and recalling that $\sum_j \sum_{B\in \mathcal{P}_{j,l}} h_B \lesssim (1+|v|^4)$, we obtain the bound \begin{equation}\label{eq: LU mg bound on J}\begin{split} &\mathbb{E} \left[\sum_{j=0}^J \sum_{B\in \mathcal{P}_{j,l}} \left\{\hspace{0.1cm} \left|J^{N;B;t}_t\right|\right\}^p\right]\\ &\hspace{0.5cm}\lesssim \mathbb{E}\left[ \left\{ \int_{(0,t]\times \mathcal{S}_N} \langle 1+|v|^4, |\chi(u,t,\mu^N)|\rangle^2 m^N(du, d\mu^N) \right\}^{p/2} \right] \\ &\hspace{0.5cm}\lesssim \mathbb{E} \left[\left\{\int_{(0,t]\times \mathcal{S}_N} \|Q_{t-u}(\mu^N)-Q_{t-u}(\mu^N_{u-})\|_{\mathrm{TV}+4}^2 \hspace{0.1cm} m^N(du, d\mu^N)\right\}^{p/2}\right] \end{split} \end{equation} where the second inequality uses the definition of $\chi$ (\ref{eq: defn of chi}). 
\medskip \\ Using the continuity estimate for $Q$ established in (\ref{eq: Lipschitz continuity of Q}), and arguing as in the proof of Lemma \ref{thrm: pointwise martingale control}, we see that almost surely, for $m^N$-almost all $(u, \mu^N)$, we have \begin{equation} \|Q_{t-u}(\mu^N)-Q_{t-u}(\mu^N_{u-})\|_{\mathrm{TV}+4} \lesssim N^{\epsilon-1}\Lambda_k(\mu^N_{u-}).\end{equation} Therefore, using Cauchy-Schwarz, (\ref{eq: LU mg bound on J}) gives the bound \begin{equation} \label{eq: LU mg bound on J 2} \begin{split} & \mathbb{E} \left[\sum_{j=0}^J \sum_{B\in \mathcal{P}_{j,l}} \left\{\left|J^{N;B;t}_t\right|\right\}^p\right] \\& \hspace{1cm}\lesssim N^{p(\epsilon-1)} \hspace{0.1cm} \mathbb{E}\left[\sup_{t\leq t_\text{fin}} \Lambda_{kp}(\mu^N_t)\right]^{1/2} \left\|m^N\left((0,t_\text{fin}]\times \mathcal{S}_N\right)\right\|_{L^p(\mathbb{P})}^{p/2}.\end{split}\end{equation} The moment term is controlled by the initial moment bound and Proposition \ref{thrm:momentinequalities} : \begin{equation} \label{eq: LU mg moment bound} \mathbb{E}\left[\sup_{t\leq t_\text{fin}} \Lambda_{kp}(\mu^N_t)\right] \lesssim (1+t_\text{fin})\Lambda_{kp}(\mu^N_0)\leq (1+t_\text{fin})a^p.\end{equation} Since the rates of the Kac process are bounded by $2N$, we can stochastically dominate $m^N(dt \times \mathcal{S}_N)$ by a Poisson random measure $\mathfrak{m}^N(dt)$ of rate $2N$. By the additive property of Poisson processes, it follows that \begin{equation} \label{eq: Poisson} \|m^N((0,t_\text{fin}]\times \mathcal{S}_N) \|_{L^p(\mathbb{P})}\le \|\mathfrak{m}^N(0,t_\text{fin}]\|_{L^p(\mathbb{P})} \lesssim N(1+t_\text{fin}). 
\end{equation} Combining (\ref{eq: LU mg bound on J 2}, \ref{eq: LU mg moment bound}, \ref{eq: Poisson}), we have the control of the integrand: \begin{equation} \begin{split} \sup_{t\leq t_\text{fin}} \hspace{0.2cm} \mathbb{E} \left[\sum_{j=0}^J \sum_{B\in \mathcal{P}_{j,l}} \left\{ \left|J^{N;B;t}_t\right|\right\}^p\right] \lesssim N^{p(\epsilon-1/2)}\hspace{0.1cm} a^{p/2} (1+t_\text{fin})^{\frac{p+1}{2}}. \end{split} \end{equation} This gives the following control of the integral term in Lemma \ref{lemma:break up LU}: \begin{equation} \label{eq:control of integral term} \begin{split} t_\text{fin}^{p/p'} \hspace{0.2cm} \mathbb{E} \left[\sum_{j=0}^J \sum_{B\in \mathcal{P}_{j,l}} \mathlarger{\mathlarger{\int}}_0^{t_\text{fin}}\left\{\left|J^{N;B;t}_t\right|\right\}^p dt \right] \lesssim N^{p(\epsilon-1/2)}\hspace{0.1cm} a^{p/2} (1+t_\text{fin})^{\frac{p+3}{2}+\frac{p}{p'}}. \end{split} \end{equation} Since $p'$ is the H\"{o}lder conjugate to $p$, we have $\frac{p}{p'}=p-1$, so the exponent of $(1+t_\text{fin})$ is $\frac{p+3}{2}+p-1=\frac{3p+1}{2}$. \medskip \\ We now perform a similar analysis for the terms $M^{N;B}_{t_\text{fin}}$ in Lemma \ref{lemma:break up LU}. Let $(M^{N;B;t}_s)_{s\leq t}$ be the martingale defined in (\ref{eq: MNBTS}). The quadratic variation is \begin{equation} \begin{split} \left[M^{N;B;t}\right]_s&=\int_{(0,s]\times\mathcal{S}_N}\langle h_B, \phi_{t-u}(\mu^N)-\phi_{t-u}(\mu^N_{u-})\rangle^2 \hspace{0.1cm} m^N(du, d\mu^N) \\ & \leq \int_{(0,s]\times\mathcal{S}_N}\langle h_B, |\phi_{t-u}(\mu^N)-\phi_{t-u}(\mu^N_{u-})|\rangle^2 \hspace{0.1cm} m^N(du, d\mu^N). 
\end{split} \end{equation} Arguing using Burkholder and the stability estimate Corollary \ref{cor: new stability for BE}, an identical calculation to the above shows that \begin{equation} \label{eq: control of M at terminal time} \sum_{j=0}^J \sum_{B\in \mathcal{P}_{j,l}} \left\|M^{N;B}_{{t_\text{fin}}}\right\|_{L^p(\mathbb{P})}^p \lesssim N^{p(\epsilon-1/2)}\hspace{0.1cm}a^{p/2}\hspace{0.1cm}(1+t_\text{fin})^{\frac{p+1}{2}}. \end{equation} Hence, by Lemma \ref{lemma:break up LU}, we obtain \begin{equation} \mathbb{E} \left[ \sum_{j=0}^J \sum_{B\in \mathcal{P}_{j,l}} \left\{ \left|M^{N;B}_{\star,t_\text{fin}}\right|\right\}^p\right] \lesssim N^{p(\epsilon-1/2)}a^{p/2}(1+t_\text{fin})^\frac{3p+1}{2}.\end{equation} We control the coefficients $2^{-2j}a_B(f)$ as in the argument of Lemma \ref{thrm: pointwise martingale control}. Using H\"{o}lder's inequality in place of Cauchy-Schwarz, we obtain \begin{equation} \begin{split} \label{eq: control of mg term LU} &\left\|\hspace{0.1cm}\sup_{f\in \mathcal{A}} \hspace{0.1cm} \sup_{t\leq t_\text{fin}} \hspace{0.1cm} \left|\sum_{j=0}^J\sum_{l=2}^L \sum_{B \in \mathcal{P}_{j,l}} 2^{-2j}a_B(f)M^{N;B}_t \hspace{0.1cm} \right| \hspace{0.1cm}\right\|_{L^p(\mathbb{P})} \\& \hspace{2cm} \lesssim \sum_{l=2}^L \left[ \mathbb{E}\sum_{j=0}^J \sum_{B \in \mathcal{P}_{j,l}} \left\{M^{N;B}_{\star,t_\text{fin}}\right\}^p\right]^{1/p}2^{(d/p'-1)l}J^{1/p'} \\& \hspace{2cm} \lesssim \sum_{l=2}^L N^{\epsilon-\frac{1}{2}} \hspace{0.1cm} a^{1/2} \hspace{0.1cm} (1+t_\text{fin})^{\frac{3p+1}{2p}}\hspace{0.1cm} 2^{(d/p'-1)l}J^{1/p'} \\ & \hspace{2cm} \lesssim N^{\epsilon-\frac{1}{2}} \hspace{0.1cm} a^{1/2} \hspace{0.1cm} (1+t_\text{fin})^{\frac{3p+1}{2p}}\hspace{0.1cm}2^{(d/p'-1)L} \hspace{0.1cm} J^{1/p'}. \end{split} \end{equation} Following the argument of Lemma \ref{thrm: pointwise martingale control}, we wish to control the error terms $R^{N,f}_t$ given by (\ref{eq: definition of RNFT}), locally uniformly in time. 
As in (\ref{eq: majorise integrand of error term}), we majorise, for $m^N+\overline{m}^N$-almost all $(s, \mu^N)$, \begin{equation} \begin{split} \sup_{f\in \mathcal{A}} \left|\langle \beta(f), \phi_{t-s}(\mu^N)-\phi_{t-s}(\mu^N_{s-})\rangle \right| &\lesssim (2^{-2J}+2^{-L}) \hspace{0.1cm}N^{\epsilon-1} \hspace{0.1cm} \Lambda_{k}(\mu^N_{s-})^\frac{1}{2} \\& =:H'_s. \end{split} \end{equation} As in (\ref{eq: dominate integrand and integrator seperately}), we may bound \begin{equation} \label{eq: break up error term} \left\|\hspace{0.1cm} \sup_{f\in\mathcal{A}}\hspace{0.1cm} \sup_{t\le t_\text{fin}}\left|R^{N,f}_t\right|\hspace{0.1cm}\right\|_{L^p(\mathbb{P})} \le \left\|\hspace{0.1cm} \int_0^{t_\text{fin}}H'_s(m^N+\overline{m}^N)(ds, \mathcal{S}_N)\hspace{0.1cm}\right\|_{L^p(\mathbb{P})} \le \mathcal{T}_1+\mathcal{T}_2 \end{equation} where the two error terms are \begin{equation} \mathcal{T}_1= \left\|\int_0^{t_\text{fin}} H'_s \hspace{0.1cm}\overline{m}^N(ds, \mathcal{S}_N)\right\|_{L^p(\mathbb{P})}\end{equation} and \begin{equation} \mathcal{T}_2=\left\|\int_0^{t_\text{fin}} H'_s \hspace{0.1cm}m^N(ds, \mathcal{S}_N)\right\|_{L^p(\mathbb{P})}. \end{equation} We now deal with the two terms separately. For $\mathcal{T}_1$, we dominate $\overline{m}^N(ds, \mathcal{S}_N)\leq 2N ds$ to see that \begin{equation} \begin{split} \int_0^{t_\text{fin}} H'_s\hspace{0.1cm} \overline{m}^N(ds, \mathcal{S}_N) \lesssim (2^{-2J}+2^{-L})\hspace{0.1cm}N^{\epsilon}\hspace{0.1cm}t_\text{fin}\hspace{0.1cm}\left( \hspace{0.1cm} \sup_{s\leq t_\text{fin}} \Lambda_{k}(\mu^N_s)^\frac{1}{2}\right). 
\end{split} \end{equation} Using the monotonicity of $L^p$ norms and the moment control in the usual way, \begin{equation} \begin{split} \label{eq: control of mbar integral} \mathcal{T}_1 &\lesssim (2^{-2J}+2^{-L})\hspace{0.1cm}N^{\epsilon}\hspace{0.1cm}t_\text{fin}\hspace{0.1cm}\mathbb{E}\left[ \hspace{0.1cm} \sup_{s\leq t_\text{fin}} \Lambda_{pk}(\mu^N_s)\right]^\frac{1}{2p} \\[1ex] & \lesssim (2^{-2J}+2^{-L})\hspace{0.1cm}N^{\epsilon}\hspace{0.1cm}a^{1/2}\hspace{0.1cm}(1+t_\text{fin})^\frac{2p+1}{2p}. \end{split} \end{equation} For $\mathcal{T}_2$, we dominate $m^N(ds, \mathcal{S}_N)$ by a Poisson random measure $\mathfrak{m}^N(ds)$ of rate $2N$, as above. Controlling $\mathfrak{m}^N$ as in (\ref{eq: Poisson}), we obtain \begin{equation} \begin{split} \label{eq: control of m integral} \mathcal{T}_2 & \lesssim (2^{-2J}+2^{-L})N^{\epsilon-1} \left\|\int_0^{t_\text{fin}} \Lambda_{k}(\mu^N_{s-})^\frac{1}{2}\mathfrak{m}^N(ds) \right\|_{L^p(\mathbb{P})} \\[1ex]& \lesssim (2^{-2J}+2^{-L})N^{\epsilon-1} \left\|\left(\hspace{0.1cm}\sup_{s\leq t_\text{fin}} \Lambda_{k}(\mu^N_s)^\frac{1}{2}\right)\right\|_{L^{2p}(\mathbb{P})}\left\|\mathfrak{m}^N\left(\left(0, t_\text{fin}\right]\right)\right\|_{L^{2p}(\mathbb{P})} \\[1ex]& \lesssim (2^{-2J}+2^{-L})\hspace{0.1cm}N^{\epsilon}\hspace{0.1cm}a^{1/2}\hspace{0.1cm}(1+t_\text{fin})^\frac{2p+1}{2p}. \end{split} \end{equation} Combining the local uniform estimates (\ref{eq: control of mg term LU}, \ref{eq: break up error term}, \ref{eq: control of mbar integral}, \ref{eq: control of m integral}) of the terms in the decomposition (\ref{eq: decomposition of MNFT}), we find that \begin{multline*} \left\|\hspace{0.1cm} \sup_{f\in \mathcal{A}} \hspace{0.1cm}M^{N,f}_{\star,t_\text{fin}} \hspace{0.1cm} \right\|_{L^p(\mathbb{P})} \lesssim N^\epsilon a^{1/2} \hspace{0.1cm} (1+t_\text{fin})^\frac{3p+1}{2p}\hspace{0.1cm} \left(N^{-1/2} \hspace{0.1cm} 2^{(d/p'-1)L}J^{1/p'}+2^{-2J}+2^{-L}\right). 
\end{multline*} Taking $J=\lfloor\frac{p'}{4d}\log_2(N)\rfloor$ and $L=\lfloor \frac{p'}{2d}\log_2(N)\rfloor$ proves the result claimed. \end{proof}
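As a quick check of these choices of $J$ and $L$ (suppressing the floor functions), each of the three terms in the bracket balances to the same power of $N$:

```latex
2^{-2J} \asymp N^{-p'/2d}; \qquad 2^{-L} \asymp N^{-p'/2d}; \qquad
J^{1/p'} \lesssim (\log N)^{1/p'};
% and, for the remaining factor,
N^{-1/2}\, 2^{(d/p'-1)L}
\asymp N^{-\frac{1}{2}} \cdot N^{\left(\frac{d}{p'}-1\right)\frac{p'}{2d}}
= N^{-\frac{1}{2}+\frac{1}{2}-\frac{p'}{2d}}
= N^{-p'/2d},
```

so the bracket is of order $N^{-p'/2d}(\log N)^{1/p'}$, which gives the claimed exponent $\alpha=\frac{p'}{2d}-\epsilon$.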
\section{Proof of Theorem \ref{thm: low moment regime}} \label{sec: LMR}
We now turn to the proof of Theorem \ref{thm: low moment regime}, which establishes a convergence estimate in the presence of a $k^\text{th}$ moment bound, for any $k>2$. Our strategy will be to use the ideas of \cite{ACE}, which work well with few moments, to prove convergence on a small initial time interval $[0,u_N]$, for some $u_N$ to be chosen later. Then, thanks to the moment production property recalled in Proposition \ref{thrm:momentinequalities}, we may use Theorems \ref{thrm: PW convergence}, \ref{thrm: Main Local Uniform Estimate} to control the behaviour at times $t\ge u_N$. The argument is similar to the final argument in the proof of Theorem \ref{thrm: W-W continuity of phit} given in Section \ref{sec: continuity of BE}, which may be read as a warm-up to this proof. \medskip \\ Throughout, let $k, a, (\mu^N_t), \mu_0$ be as in the statement of the theorem. \medskip \\ We begin by recalling the representation formula established in \cite[Proposition 4.2]{ACE}, which is a noisy version of Proposition \ref{prop: bad representation formula}. \begin{proposition} \label{prop: very bad rep formula} Let $\mu_0 \in \mathcal{S}^k$ for some $k>2$, and let $(\mu^N_t)_{t\ge 0}$ be a Kac process on $N$ particles. Let $\rho_t=(\phi_t(\mu_0)+\mu^N_t)/2$, and for $f\in \mathcal{A}, 0\le s\le t$, let $f_{st}$ be the propagation described in Definition \ref{def: LKP} in this environment. Then, for all $t\ge 0$, we have the equality \begin{equation} \begin{split} & \langle f, \mu^N_t-\phi_t(\mu_0)\rangle =\langle f_{0t}, \mu^N_0-\mu_0\rangle \\&\hspace{2cm}+\int_{(0,t]\times \mathcal{S}_N} \langle f_{st},\mu^N-\mu^N_{s-}\rangle (m^N-\overline{m}^N)(ds,d\mu^N) \end{split} \end{equation} where $m^N, \overline{m}^N$ are as defined in Section \ref{sec: interpolation decomposition}.\end{proposition} The major difficulty in using this representation formula is the appearance of an exponentiated random moment in the quantity $z_t$ parametrising the continuity of $f_{st}$. 
We will use the following proposition, which controls the stochastic integrals on the right-hand side, modulo this difficulty. \begin{proposition} \label{prop: short time mg estimate} Let $\rho_t$ be a potentially random environment such that, for some $\beta>0$, \begin{equation} \label{eq: moment condition for environment} w=\left\|\hspace{0.1cm}\sup_{t\le 1} \left(\frac{ \Lambda_3(\rho_t)}{\beta t^{\beta-1}+1}\right)\hspace{0.1cm}\right\|_{L^\infty(\mathbb{P})}<\infty. \end{equation} For $f\in \mathcal{A}$ and $0\le s\le t\le 1$, let $f_{st}[\rho]$ denote the propagation in this environment, as described in Definition \ref{def: LKP}. \medskip \\ Let $k>2$ and $a\ge 1$, let $(\mu^N_t)_{t\ge 0}$ be a Kac process with initial moment $\Lambda_k(\mu^N_0)\le a$, and let $m^N, \overline{m}^N$ be as in Section \ref{sec: interpolation decomposition}. We write \begin{equation} \widetilde{M}^{N,f}_t[\rho]=\int_{(0,t]\times \mathcal{S}_N} \langle f_{st}[\rho], \mu^N-\mu^N_{s-}\rangle (m^N-\overline{m}^N)(ds, d\mu^N).\end{equation} In this notation, we have the bound \begin{equation} \left\|\hspace{0.1cm} \sup_{t\le 1}\hspace{0.1cm} \sup_{f\in \mathcal{A}} \hspace{0.1cm} \left|\widetilde{M}^{N,f}_t[\rho]\right|\right\|_{L^1(\mathbb{P})} \le CaN^{-\eta}\end{equation} for some $C=C(d,k,\beta)$ and $\eta=\eta(d,\beta)>0$. Here, we emphasise that $\|\cdot\|_{L^1(\mathbb{P})}$ refers to the $L^1$ norm with simultaneous expectation over $\mu^N_t$ and the environment $\rho$. \end{proposition} This largely follows from the proof of \cite[Theorem 1.1]{ACE}, and the argument follows a similar pattern to Lemmas \ref{thrm: pointwise martingale control}, \ref{thrm: local uniform martingale control}, using the continuity estimate recalled in Proposition \ref{prop: continuity for branching process} and a similar estimate for the dependence on the initial time $s$. 
The key difference is that the hypotheses on the environment $\rho$ guarantee an $L^\infty(\mathbb{P})$ control on the quantities \begin{equation} z_1=\exp\left(8\int_0^1 \Lambda_3(\rho_u)du\right); \hspace{0.8cm} y_\beta=z_1\hspace{0.1cm}\sup_{0\le s\le s'\le 1}\left[(s'-s)^{-\beta}\int_s^{s'}\Lambda_3(\rho_u)du\right]\end{equation} which describe the continuity of $f_{st}(v)$ in $v$ and $s$ respectively. By contrast, these are only controlled in probability in \cite[Theorem 1.1]{ACE}; correspondingly, we obtain an $L^1(\mathbb{P})$ estimate rather than an estimate in probability. With this estimate, we turn to the proof of Theorem \ref{thm: low moment regime}.
\begin{proof}[Proof of Theorem \ref{thm: low moment regime}] We first introduce a localisation argument, following the argument in Section \ref{sec: continuity of BE}, which allows us to guarantee that (\ref{eq: moment condition for environment}) holds for the environment $\rho= (\mu^N_t+\phi_t(\mu_0))/2$. Let $\beta=\frac{k-2}{2}$, and let $u_N \le 1$ be chosen later. Now, define $T_N$ to be the stopping time \begin{equation} T_N=\inf\left\{t\le u_N: \Lambda_3(\rho_t) > \frac{(\beta t^{\beta-1}+1)}{8\sqrt{2}}\right\}. \end{equation} We use the convention that $\inf \emptyset =\infty$, so that if $T_N>u_N$, then $T_N=\infty$. Let $\rho^T$ be the stopped environment $\rho^T_t=\rho_{t\land T_N}$, and write $f_{st}^T$ for the propagation in the stopped environment.\medskip \\ We observe first that on the event $T_N=\infty$, we have the equality $f_{st}^T=f_{st}$ for all $f\in \mathcal{A}$ and all $s\le t\le u_N$. Moreover, since $\Lambda_3(\rho_t)$ increases by a factor of at most $4\sqrt{2}$ at jumps by Lemma \ref{lemma:momentincreaseatcollision}, we have the bound, almost surely, for all $t\ge 0$, \begin{equation} \Lambda_3(\rho^T_t) \le \frac{(\beta t^{\beta-1}+1)}{2}.\end{equation} Therefore, the stopped environment $\rho^T$ satisfies the bound (\ref{eq: moment condition for environment}) with $w\le\frac{1}{2}$. Now, we write $\widetilde{M}^{N,f}_t=\widetilde{M}^{N,f}_t[\rho^T]$ as in the proposition above, and by the representation formula in Proposition \ref{prop: very bad rep formula}, we have the bound, for all $t\le u_N$,\begin{equation}\begin{split} &W\left(\mu^N_t,\phi_t(\mu_0)\right) 1[T_N=\infty] \le CW(\mu^N_0, \mu_0)+\sup_{f\in \mathcal{A}}\hspace{0.2cm}\widetilde{M}^{N,f}_t \end{split}\end{equation} for some absolute constant $C$. 
By Proposition \ref{prop: short time mg estimate}, we obtain the estimate \begin{equation} \label{eq: shorttime bound 1} \left\|\sup_{t\le u_N} W\left(\mu^N_t, \phi_t(\mu_0)\right)1[T_N=\infty]\right\|_1 \lesssim W(\mu^N_0,\mu_0)+aN^{-\eta}.\end{equation} Let $k_0=k_0(d)$ be large enough that Theorem \ref{thrm: PW convergence} holds with $\epsilon=\frac{1}{2d}$. By applying Theorem \ref{thrm: PW convergence}, restarted at time $u_N$, and the moment production property, we obtain \begin{equation} \label{eq: restarted estimate} \begin{split}\sup_{t\ge u_N} \left\|W(\mu^N_t,\phi_{t-u_N}(\mu^N_{u_N}))\right\|_2&\lesssim N^{\epsilon-1/d}\hspace{0.1cm}\mathbb{E}\left[\Lambda_{k_0}(\mu^N_{u_N})\right]^{1/2} \\ & \lesssim N^{\epsilon-1/d}u_N^{1-k_0/2}.\end{split} \end{equation} Using our continuity estimate Theorem \ref{thrm: W-W continuity of phit}, we have the bound for some $\zeta=\zeta(d)$\begin{equation} \begin{split} &\sup_{t\ge u_N} W(\phi_{t-u_N}(\mu^N_{u_N}),\phi_t(\mu_0)) \\&\hspace{1cm}\lesssim W(\mu^N_{u_N},\phi_{u_N}(\mu_0))^\zeta\Lambda_{k_0}(\mu^N_{u_N},\phi_{u_N}(\mu_0))\end{split}\end{equation} and, considering the cases $\{T_N\le u_N\}, \{T_N=\infty\}$ separately, we see that \begin{equation} \begin{split} &\sup_{t\ge u_N} W(\phi_{t-u_N}(\mu^N_{u_N}),\phi_t(\mu_0)) \\&\hspace{2.5cm}\lesssim W(\mu^N_{u_N},\phi_{u_N}(\mu_0))^\zeta\Lambda_{k_0}(\mu^N_{u_N},\phi_{u_N}(\mu_0))1[T_N=\infty]\\[1ex]&\hspace{3.5cm}+ 1[T_N\le u_N].\end{split}\end{equation}To ease notation, we will write $\mathcal{T}_1, \mathcal{T}_2$ for the two terms respectively. 
We estimate the expectation of $\mathcal{T}_1$ using H\"older's inequality: for some $k_1>k_0$, \begin{equation}\label{eq: holder estimate on BF}\begin{split} &\left\|\mathcal{T}_1\right\|_{L^1(\mathbb{P})} \lesssim\mathbb{E}\left(W(\mu^N_{u_N},\phi_{u_N}(\mu_0))1[T_N=\infty]\right)^\zeta\mathbb{E}\left(\Lambda_{k_1}(\mu^N_{u_N},\phi_{u_N}(\mu_0))\right) \\[1ex] & \hspace{3cm}\lesssim (N^{-\eta}+W(\mu^N_0,\mu_0))^\zeta\hspace{0.1cm} u_N^{1-k_1/2}.\end{split}\end{equation} where $\eta$ is as in (\ref{eq: shorttime bound 1}) with our choice of $\beta$. In order to deal with $\mathcal{T}_2$, we now estimate $\mathbb{P}(T_N\le u_N)$. Let $Z_N$ be given by \begin{equation} Z_N=\sum_{l:2^{-l}\le u_N} 2^{(\beta-1)l+1}\beta^{-1}\sup_{t\in [2^{-l},2^{1-l}]}\langle 1+|v|^3, \rho_t\rangle \end{equation} and observe that, for all $t\le u_N$, we have the bound \begin{equation} \langle 1+|v|^3, \rho_t\rangle \le \frac{(\beta t^{\beta-1}+1)Z_N}{2}. \end{equation} Therefore, \begin{equation} \mathbb{P}(T_N\le u_N)\le \mathbb{P}(Z_N>1/8) \le 8 \mathbb{E}[Z_N].\end{equation} Using the moment production property of the Kac process and Boltzmann equation in Proposition \ref{thrm:momentinequalities}, we compute \begin{equation} \mathbb{E}(Z_N)\le \sum_{l: 2^{-l}\le u_N}2^{(\beta-1)l+1}2^{-l(k-3)}\hspace{0.1cm} \beta^{-1}a \lesssim a u_N^\beta \end{equation} and so \begin{equation} \label{eq: estimate on restarted BF} \begin{split} &\left\|\sup_{t\ge u_N} W(\phi_{t-u_N}(\mu^N_{u_N}),\phi_t(\mu_0))\right\|_{L^1(\mathbb{P})} \\[1ex]& \hspace{1.5cm}\lesssim (N^{-\eta}+W(\mu^N_0,\mu_0))^\zeta\hspace{0.1cm} u_N^{1-k_1/2} +au_N^\beta. 
\end{split} \end{equation} We now return to (\ref{eq: shorttime bound 1}) and observe that \begin{equation} \label{eq: shorttime bound 2} \begin{split}& \left\|\sup_{t\le u_N} W(\mu^N_t, \phi_t(\mu_0))\right\|_1 \\& \hspace{2cm}\lesssim \left\|\sup_{t\le u_N} W\left(\mu^N_t, \phi_t(\mu_0)\right)1[T_N=\infty]\right\|_1 +\mathbb{P}(T_N\le u_N) \\[1ex] & \hspace{2cm} \lesssim W(\mu^N_0,\mu_0)+aN^{-\eta}+au_N^\beta. \end{split} \end{equation} Combining (\ref{eq: restarted estimate}, \ref{eq: estimate on restarted BF}, \ref{eq: shorttime bound 2}) and keeping the worst terms, we have shown that \begin{equation} \sup_{t\ge 0} \left\|W(\mu^N_t, \phi_t(\mu_0))\right\|_{L^1(\mathbb{P})}\lesssim (N^{-\eta}+W(\mu^N_0,\mu_0))^{\delta} u_N^{-\alpha}+ au_N^\beta\end{equation} for some $\eta, \delta, \alpha, \beta>0$, depending on $d, k$. If we choose \begin{equation} u_N=(N^{-\eta}+W(\mu^N_0,\mu_0))^{\delta/(\alpha+\beta)}\end{equation} then we finally obtain \begin{equation}\begin{split} \sup_{t\ge 0} \left\|W(\mu^N_t, \phi_t(\mu_0))\right\|_{L^1(\mathbb{P})}& \lesssim a(N^{-\eta}+W(\mu^N_0,\mu_0))^{\beta \delta/(\alpha+\beta)} \\ & \lesssim a\left(N^{-\eta \beta \delta/(\alpha+\beta)}+W(\mu^N_0,\mu_0)^{\beta \delta/(\alpha+\beta)}\right) \\ & \lesssim a\left(N^{-\epsilon}+W(\mu^N_0,\mu_0)^\epsilon\right)\end{split} \end{equation} as desired, for sufficiently small $\epsilon=\epsilon(d,k)>0$. The case for the local uniform estimate is similar, using Theorem \ref{thrm: Main Local Uniform Estimate} in place of Theorem \ref{thrm: PW convergence}. \end{proof}
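\begin{remark} For the reader's convenience, we record the elementary computation behind the choice of $u_N$ in the proof above. Writing $x=N^{-\eta}+W(\mu^N_0,\mu_0)$ and $u_N=x^{\delta/(\alpha+\beta)}$, the two error terms balance exactly: \begin{equation} x^{\delta}u_N^{-\alpha}=x^{\delta-\frac{\alpha\delta}{\alpha+\beta}}=x^{\frac{\beta\delta}{\alpha+\beta}}; \hspace{0.8cm} u_N^{\beta}=x^{\frac{\beta\delta}{\alpha+\beta}}. \end{equation} The final display then follows from the elementary inequality $(x_1+x_2)^\theta\le x_1^\theta+x_2^\theta$, valid for all $x_1, x_2\ge 0$ and $\theta\in(0,1]$. \end{remark}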
\section{Proof of Theorem \ref{thrm: No Uniform Estimate}}
The proof of Theorem \ref{thrm: No Uniform Estimate} is based on the following heuristic argument: \begin{heuristic} Fix $N$, and consider a Kac process $(\mu^N_t)$ on $N$ particles. As $t\rightarrow \infty$, its law relaxes to the equilibrium distribution $\pi_N$, which is known to be the uniform distribution $\sigma^N$ on $\mathcal{S}_N$. Since this measure assigns non-zero probability to regions $R_N$ at macroscopic distance from the fixed point $\gamma$, given by \begin{equation}
\gamma(dv)=\frac{e^{-\frac{d}{2}|v|^2}}{(2\pi d^{-1})^{d/2}}dv,
\end{equation} the process will almost surely hit $R_N$ on an unbounded set of times. Meanwhile, the Boltzmann flow $\phi_t(\mu_0)$ will converge to $\gamma$. Therefore, at some large time, the particle system $\mu^N_t$ will have macroscopic distance from the Boltzmann flow $\phi_t(\mu^N_0)$.\end{heuristic}
The regions $R_N$ which we construct in the proof are those where the energy is concentrated in only a few particles, which might na\"{i}vely be considered `highly ordered, and so low-entropy'. This appears to contradict the principle that entropy should increase; this \emph{apparent} paradox is explained in the discussion section at the beginning of the paper. \medskip\\
We recall that a \emph{labelled} Kac process is the Markov process of velocities $(v_1(t),\ldots,v_N(t))$ corresponding to the particle dynamics. The state space is the set $\mathbb{S}^{N}=\left\{(v_1, ..., v_N) \in (\mathbb{R}^d)^N: \sum_{i=1}^N v_i=0, \sum_{i=1}^N |v_i|^2 = N\right\}$, which we call the labelled Boltzmann Sphere. We denote by $\theta_N$ the map taking $(v_1,...,v_N)$ to its empirical measure in $\mathcal{S}_N$: \begin{equation} \theta_N: \mathbb{S}^N \rightarrow \mathcal{S}_N; \hspace{1cm} (v_1, ..., v_N) \mapsto \frac{1}{N} \sum_{i=1}^N \delta_{v_i}.\end{equation} Moreover, if $\mathcal{V}^N_t$ is a labelled Kac process, then the empirical measures $\mu^N_t:= \theta_N(\mathcal{V}^N_t)$ are a Kac process in the sense defined in the introduction.\medskip \\
Considered as a $((N-1)d-1)$-dimensional sphere, $\mathbb{S}^N$ has a uniform (Hausdorff) distribution $\gamma^N$. We define the `uniform distribution' $\sigma^N$ on $\mathcal{S}_N$ to be the pushforward of $\gamma^N$ by $\theta_N$: \begin{equation} \label{eq: defn of sigmaN} \sigma^N(A):=\gamma^N\left\{(v_1, \ldots, v_N)\in \mathbb{S}^N: \theta_N(v_1,...,v_N)\in A\right\}. \end{equation} We will use this definition to transfer the positivity of the measure $\gamma^N$ forward to $\sigma^N$. \medskip\\
As discussed in the literature review, the problem of relaxation to equilibrium for the Kac process is subtle, and has been extensively studied. For our purposes, the following $L^2$ convergence is sufficient: \begin{proposition}\label{prop: relaxation} Suppose that $(\mu^N_t)_{t\geq 0}$ is a hard-spheres Kac process, where the law $\mathcal{L}(\mu^N_0)$ of the initial data has a density $h^N_0\in L^2(\sigma^N)$ with respect to $\sigma^N$. Then, at all times $t\geq 0$, the law $\mathcal{L}(\mu^N_t)$ has a density $h^N_t\in L^2(\sigma^N)$ with respect to $\sigma^N$, and for some universal constant $\lambda_0>0$, we have \begin{equation} \left\|h^N_t-1\right\|_{L^2(\sigma^N)} \leq e^{-\lambda_0 t}\left\| h^N_0-1\right\|_{L^2(\sigma^N)}. \end{equation} \end{proposition} A version of this, for the labelled Kac process, appears as \cite[Theorem 6.8 and corollary]{M+M}; the result stated above follows by a pushforward argument. This is sufficient to prove the following weak ergodic theorem: \begin{lemma}\label{lemma: ergodic theorem} Let $(\mu^N_t)_{t\geq 0}$ be a hard-spheres Kac process on $N$ particles, started from $\mu^N_0\sim \sigma^N$. Let $R_N\subset \mathcal{S}_N$ be such that $p=\sigma^N(R_N)>0$. Then \begin{equation} \frac{1}{t}\int_0^t 1(\mu^N_s\in R_N)ds \rightarrow p\end{equation} in $L^2$ as $t\rightarrow \infty$. In particular, almost surely, $\mu^N_t$ visits $R_N$ on an unbounded set of times. \end{lemma} \begin{proof} Observe that \begin{equation} \mathbb{E}\left[\frac{1}{t}\int_0^t 1(\mu^N_s \in R_N) ds\right] = \frac{1}{t}\int_0^t \mathbb{P}(\mu^N_s\in R_N) ds = p\end{equation} so our claim reduces to bounding the variance. \\ \\ For times $t\geq 0$, let $A(t)$ be the event $A(t)=\{\mu^N_t\in R_N\}$; we will compute the covariance of $1_{A(s_1)}$ and $1_{A(s_2)}$, for $0 \leq s_1 \leq s_2$. 
Observe that \begin{equation} \mathbb{E}\left[1_{A(s_1)}(1_{A(s_2)}-p)\right]=p\left(\mathbb{P}\left(A(s_2)|A(s_1)\right)-p\right).\end{equation} Conditional on $A(s_1)$, the law of $\mu^N_{s_1}$ has density $h^N_{s_1}\propto 1_{R_N}$ with respect to $\sigma^N$. By Proposition \ref{prop: relaxation}, conditional on $A(s_1)$, $\mu^N_{s_2}$ has a density $h^N_{s_2}$, and we can bound \begin{equation}|\mathbb{P}(A(s_2)|A(s_1))-p|\leq \left\|h^N_{s_2}-1\right\|_{L^1(\sigma^N)}\leq \left\|h^N_{s_2}-1\right\|_{L^2(\sigma^N)} \leq C(R_N) e^{-\lambda_0(s_2-s_1)} \end{equation} for some constant $C(R_N)$ independent of time. Hence \begin{equation} \mathbb{E}\left[(1_{A(s_1)}-p)(1_{A(s_2)}-p)\right]=p(\mathbb{P}(A(s_2)|A(s_1))-p) \leq p C(R_N) e^{-\lambda_0(s_2-s_1)}.\end{equation} We can now integrate to bound the variance: \begin{equation} \begin{split} \text{Var}\left(\frac{1}{t}\int_0^t 1(\mu^N_s \in R_N) ds\right) & =\frac{2}{t^2}\int_0^t ds_1 \int_{s_1}^t ds_2 \hspace{0.2cm} \mathbb{E}\left[(1_{A(s_1)}-p)(1_{A(s_2)}-p)\right] \\[1ex] & \leq \frac{2pC}{t^2} \int_0^t ds_1\int_{s_1}^\infty ds_2\hspace{0.2cm} e^{-\lambda_0(s_2-s_1)} \\[1ex] & \leq \frac{2pC}{\lambda_0 t} \rightarrow 0. \end{split} \end{equation} \end{proof} An immediate corollary is that the long-run deviation must be bounded \emph{below} by the essential supremum of the deviation under the invariant measure: \begin{corollary} Let $(\mu^N_t)_{t\geq 0}$ be an $N$-particle Kac process in equilibrium. Then, almost surely, \begin{equation} \begin{split} \limsup_{t\rightarrow \infty}W(\mu^N_t, \gamma) \geq &\left\|W(\cdot, \gamma)\right\|_{L^\infty(\sigma^N)} \\ =& \esssup_{\sigma^N(d\mu)}\hspace{0.05cm} W(\mu, \gamma). \end{split} \end{equation} \end{corollary} \begin{proof} For ease of notation, write $W^*$ for the essential supremum appearing on the right-hand side. 
For any $\epsilon>0$, let $R_{N, \epsilon}=\{\mu\in \mathcal{S}_N: W(\mu, \gamma)>W^*-\epsilon\}$; it is immediate that $\sigma^N(R_{N, \epsilon})>0$. By the remark in Lemma \ref{lemma: ergodic theorem}, almost surely, $\mu^N_t$ visits $R_{N, \epsilon}$ on an unbounded set of times, and so \begin{equation} \limsup_{t\rightarrow \infty} W(\mu^N_t, \gamma) \geq W^*- \epsilon. \end{equation} The conclusion now follows on taking an intersection over some sequence $\epsilon_n \downarrow 0$. \end{proof} To prove Theorem \ref{thrm: No Uniform Estimate}, it now only remains to show a lower bound on the essential supremum. \begin{lemma} \label{lemma: construct bad regions} Let $f$ be given by \begin{equation} f(v)=(1+|v|^2)\min\left(\frac{|v|}{\sqrt{N/2}},1\right). \end{equation} Then $f \in \mathcal{A}$, and \begin{equation} \left\|\langle f, \mu-\gamma\rangle\right\|_{L^\infty(\sigma^N)} \geq 1-\frac{C}{\sqrt{N}} \end{equation} for some constant $C=C(d)$. In particular, this is a lower bound for the essential supremum $W^*$, and so for the long-run deviation.\end{lemma}
\begin{proof} It is easy to see that $f \in \mathcal{A}$. Moreover, the region \begin{equation} \widetilde{R}_{N}=\left\{(v_1,\ldots,v_N)\in \mathbb{S}^N: \langle f, \theta_N(v_1,\ldots,v_N) \rangle > 1 \right\}\end{equation} is an open subset of $\mathbb{S}^N$, containing $\left(\sqrt{\frac{N}{2}}e_1,-\sqrt{\frac{N}{2}}e_1,0,\ldots,0\right)$. By positivity of the uniform measure $\gamma^N$ on $\mathbb{S}^N$, it follows that $\gamma^N(\widetilde{R}_{N})>0$. The corresponding region in $\mathcal{S}_N$ satisfies \begin{equation} R_{N}=\{\mu^N \in \mathcal{S}_N: \langle f, \mu^N\rangle >1 \} \supset \theta_N(\widetilde{R}_{N}).\end{equation} By the definition (\ref{eq: defn of sigmaN}) of $\sigma^N$, we have \begin{equation} \sigma^N(R_{N}) \geq \gamma^N(\widetilde{R}_{N})>0. \end{equation} For all $\mu^N \in R_{N}$, since $f(v)\le \sqrt{2/N}\hspace{0.1cm}(1+|v|^2)|v|$, we have \begin{equation} W(\mu^N, \gamma) \geq \langle f, \mu^N-\gamma \rangle \geq 1-\sqrt{\frac{2}{N}}\hspace{0.1cm}\langle (1+|v|^2)|v|, \gamma\rangle. \end{equation} Since $R_{N}$ has positive measure, taking $C=\sqrt{2}\hspace{0.1cm}\langle (1+|v|^2)|v|, \gamma\rangle$, we can conclude that \begin{equation} W^*\geq 1-\frac{C}{\sqrt{N}}. \end{equation} \end{proof}
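\begin{remark} For concreteness, the value of $f$ at the configuration $\mathcal{V}=\left(\sqrt{\frac{N}{2}}e_1,-\sqrt{\frac{N}{2}}e_1,0,\ldots,0\right)$ used in the proof above may be computed explicitly: since $f(0)=0$ and $f\left(\pm\sqrt{\frac{N}{2}}e_1\right)=1+\frac{N}{2}$, we find \begin{equation} \langle f, \theta_N(\mathcal{V})\rangle = \frac{2}{N}\left(1+\frac{N}{2}\right)=1+\frac{2}{N}>1, \end{equation} so that $\mathcal{V}\in \widetilde{R}_N$, as claimed. \end{remark}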
\begin{proof}[Proof of Theorem \ref{thrm: No Uniform Estimate}] From the previous two lemmas, we know that for all $N\geq 2$, and for $\sigma^N$-almost all $\mu^N$, \begin{equation} \label{eq: ergodicistation} \mathbb{P}_{\mu^N}\left(\limsup_{t\rightarrow \infty} W(\mu^N_t, \gamma) \geq 1-\frac{C}{\sqrt{N}}\right)=1 \end{equation} where $\mathbb{P}_{\mu^N}$ denotes the law of a Kac process started at $\mu^N$. \\ \\ Let $N\geq 2, k> 2$ and $a> 1$. The region $R_{\star, N}$ of the labelled sphere such that $\Lambda_{k}(\theta_N(\mathcal{V}))<a$ is an open set; to conclude that it has positive $\gamma^N$-measure, it suffices to show that it is nonempty. \medskip\\ Let $r$ be a rotation by $\frac{2\pi}{N}$ in the plane corresponding to the first two axes $(e_1,e_2)$. Then the data \begin{equation} \mathcal{V}_\star=(e_1, re_1, ..., r^{N-1}e_1) \end{equation} belongs to $\mathbb{S}^N$, and has $\Lambda_{k}(\theta_N(\mathcal{V}_\star))=\frac{1}{N}\sum_{i=1}^N 1 = 1 < a$. Hence $\mathcal{V}_\star \in R_{\star, N}$, so $R_{\star, N}$ is open and nonempty, and $\gamma^N(R_{\star, N})>0$. The positivity transfers to the corresponding region of $\mathcal{S}_N$: \begin{equation} \sigma^N\left\{\mu^N\in \mathcal{S}_N: \Lambda_{k}(\mu^N)<a\right\}=\gamma^N(R_{\star, N})>0.\end{equation} Hence, for any $N\geq 2$, we can choose an initial datum $\mu^N_0=\mu^N$, with $\Lambda_{k}(\mu^N_0)<a$, such that (\ref{eq: ergodicistation}) holds. Observing that \begin{equation} W(\phi_t(\mu^N_0),\gamma) \leq \|\phi_t(\mu^N_0)-\phi_t(\gamma)\|_{\mathrm{TV}+2} \rightarrow 0\end{equation} it follows that, $\mathbb{P}_{\mu^N}$-almost surely, \begin{equation} \limsup_{t\rightarrow \infty} W(\mu^N_t, \gamma) = \limsup_{t\rightarrow \infty} W(\mu^N_t, \phi_t(\mu^N_0)) \geq 1-\frac{C}{\sqrt{N}}. \end{equation} \end{proof}
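\begin{remark} Let us verify that $\mathcal{V}_\star$ does indeed belong to the Boltzmann sphere $\mathbb{S}^N$. Identifying the plane spanned by $(e_1,e_2)$ with $\mathbb{C}$, the vectors $r^j e_1, j=0,\ldots,N-1$, correspond to the $N^{\text{th}}$ roots of unity, whence \begin{equation} \sum_{j=0}^{N-1} r^j e_1 = 0 \hspace{0.4cm}\text{for all } N\geq 2; \hspace{0.8cm} \sum_{j=0}^{N-1} |r^j e_1|^2 = N, \end{equation} where the second equality holds because rotations preserve the Euclidean norm. \end{remark}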
\begin{remark} \begin{enumerate}[label=\roman*).] \item The proof of Lemma \ref{lemma: ergodic theorem} leaves open the possibility that there is a non-empty `exceptional set' of initial data $\mu^N$ where (\ref{eq: ergodicistation}) does not hold. A stronger assertion would be positive Harris recurrence, as defined in \cite{Harris recurrence}, which allows a similar ergodic theorem for \emph{any} initial data $\mu^N$. This is not necessary for our purposes. \item In principle, one could use this to compute the typical time scales necessary for these deviations to occur, and sharper estimates may be obtained by using more detailed forms of relaxation, such as the entropic relaxation considered in \cite{Carlen 08}. This is not necessary for our arguments. \end{enumerate}\end{remark}
\section{Proof of Theorem \ref{corr: PW convergence as POC}} \label{sec: proof of POC}
Finally, we show that Theorems \ref{thrm: PW convergence}, \ref{thm: low moment regime} imply the claimed chaoticity estimates in Theorem \ref{corr: PW convergence as POC}. The following proof largely follows that of \cite[Theorem 3.1]{M+M}, using the estimates derived in this paper. As remarked in the introduction, the novelty is the use of the H\"older estimate (\ref{eq: good continuity estimate}) to control the term $\mathcal{T}_3$.
\medskip In the following proof, we will use estimates from Theorem \ref{thm: low moment regime}, which allow us to minimise the moment conditions required on the initial data. Better results can be obtained using Theorem \ref{thrm: PW convergence} at the cost of requiring a stronger moment estimate, although these still do not obtain optimal rates.
\begin{proof}[Proof of Theorem \ref{corr: PW convergence as POC}] Let $k>2$, and let $\epsilon=\epsilon(d,k)>0$ be the resulting exponent from Theorem \ref{thm: low moment regime}. Let $\mu^N_0 \in \mathcal{S}_N$ satisfy $\Lambda_k(\mu^N_0)\le a$. \medskip \\ Recall that we wish to estimate \begin{equation} \frac{\mathcal{W}_{1,l}\left(\Pi_l[\mathcal{P}^N_t(\mu^N_0, \cdot)], \phi_t(\mu^N_0)^{\otimes l}\right)}{l}\end{equation} uniformly in $t\ge 0$ and $l=1,...,N$, where $\mathcal{W}_{1,l}$ is the Wasserstein$_1$ distance on laws, given by (\ref{eq: definition of script W}). Let $\mathcal{V}^N_t$ be a labelled Kac process, and let $\mu^N_t$ be the associated process of empirical measures. Fixing a test function $f\in B_X^{\otimes l}$, we break up the difference as \begin{equation}\begin{split} \label{eq: decomposition for chaos} &\int_{(\mathbb{R}^d)^N} f(V)\left(\Pi_l[\mathcal{P}^N_t(\mu^N_0, \cdot)]-(\phi_t(\mu^N_0))^{\otimes l}\right)(dV) \\ & \hspace{1cm} =\mathbb{E}_{\mu^N_0}\left[\prod_{j=1}^l f_j(v_j(t))\right]-\prod_{j=1}^l\langle f_j, \phi_t(\mu^N_0)\rangle \\[1ex] & \hspace{1cm} = \mathcal{T}_1+\mathcal{T}_2\end{split} \end{equation} where $\mathbb{E}_{\mu^N_0}$ denotes expectation under the law $\mathcal{P}^N_t(\mu^N_0, \cdot)$, and where the two error terms are\begin{equation} \mathcal{T}_1:= \mathbb{E}_{\mu^N_0}\left[\prod_{j=1}^l f_j(v_j(t))-\prod_{j=1}^l \langle f_j, \mu^N_t\rangle\right]; \end{equation} \begin{equation} \mathcal{T}_2:=\mathbb{E}_{\mu^N_0}\left[\prod_{j=1}^l \langle f_j, \mu^N_t\rangle-\prod_{j=1}^l\langle f_j, \phi_t(\mu^N_0)\rangle \right].\end{equation} Now, $\mathcal{T}_1$ is a purely combinatorial term, based on the use of empirical measures, and $\mathcal{T}_2$ may be controlled using the pointwise estimates Theorems \ref{thrm: PW convergence}, \ref{thm: low moment regime}. We will indicate how these terms may be controlled for the simple case $l=2$, and use this to show the full, `infinite dimensional' chaos estimate claimed.
\paragraph{Step 1: Estimate on $\mathcal{T}_1$} Since the law $\mathcal{P}^N_t(\mu^N_0, \cdot)$ is symmetric, we may rewrite \begin{equation} \mathbb{E}_{\mu^N_0}\left[f_1(v_1(t))f_2(v_2(t))\right]=\mathbb{E}_{\mu^N_0}\left[\frac{1}{N(N-1)}\sum_{i\neq j} f_1(v_i(t))f_2(v_j(t))\right] \end{equation} where $N(N-1)$ counts the number of \emph{ordered} pairs of indices $(i,j)$. Similarly, the second term may be written \begin{equation} \mathbb{E}_{\mu^N_0} \left[\langle f_1, \mu^N_t\rangle\langle f_2, \mu^N_t\rangle\right]=\mathbb{E}_{\mu^N_0}\left[\left(\frac{1}{N}\sum_{i=1}^N f_1(v_i(t))\right)\left(\frac{1}{N}\sum_{j=1}^N f_2(v_j(t))\right)\right].\end{equation} Comparing the two terms, and using the bound $\|f_j\|_{L^\infty}\le \|f_j\|_X\le 1$ for $j=1,2$, we obtain the estimate \begin{equation} \begin{split} \left|\mathcal{T}_1\right| &\le \sum_{i\neq j} \left|\frac{1}{N(N-1)}-\frac{1}{N^2}\right|+\sum_{i=1}^N \frac{1}{N^2}. \end{split} \end{equation} Therefore, we have the bound $|\mathcal{T}_1| \le \frac{2}{N}$, uniformly in $f$ and $t$.
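The counting in the final bound is elementary: the first sum contains $N(N-1)$ equal terms, so that \begin{equation} \sum_{i\neq j}\left(\frac{1}{N(N-1)}-\frac{1}{N^2}\right)=N(N-1)\cdot\frac{1}{N^2(N-1)}=\frac{1}{N}; \hspace{0.8cm} \sum_{i=1}^N \frac{1}{N^2}=\frac{1}{N}. \end{equation}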
\paragraph{Step 2: Estimate on $\mathcal{T}_2$} For the case $l=2$, we break up the product as \begin{equation} \begin{split} &\prod_{j=1}^2 \langle f_j, \mu^N_t\rangle-\prod_{j=1}^2\langle f_j, \phi_t(\mu^N_0)\rangle \\&\hspace{1cm} = \langle f_1, \mu^N_t-\phi_t(\mu^N_0)\rangle \langle f_2, \mu^N_t\rangle + \langle f_1, \phi_t(\mu^N_0)\rangle \langle f_2, \mu^N_t-\phi_t(\mu^N_0)\rangle. \end{split}\end{equation} In each case, the difference term is dominated by a multiple of the Wasserstein distance $W(\mu^N_t, \phi_t(\mu^N_0))$, where $W$ is as in (\ref{eq: definition of W}), and the remaining term is absolutely bounded, by the boundedness of $f_j, j=1,2$. Therefore, we estimate \begin{equation}\label{eq: extensivity}\left|\prod_{j=1}^l \langle f_j, \mu^N_t\rangle-\prod_{j=1}^l\langle f_j, \phi_t(\mu^N_0)\rangle\right| \lesssim W(\mu^N_t, \phi_t(\mu^N_0)). \end{equation} Now, the right-hand side is precisely the term controlled by Theorems \ref{thrm: PW convergence}, \ref{thm: low moment regime}, in the special case $\mu_0=\mu^N_0$. By the choice of $\epsilon$ and $k$ above, we obtain the control \begin{equation} \mathcal{T}_2\lesssim \hspace{0.05cm}\Lambda_k(\mu^N_0)^\frac{1}{2}\hspace{0.05cm}N^{-\epsilon} \lesssim \hspace{0.05cm}a\hspace{0.05cm}N^{-\epsilon} \end{equation} for some explicit $\epsilon=\epsilon(d,k)>0$.\medskip \\ We also remark here that this implication, given Theorems \ref{thrm: PW convergence}, \ref{thm: low moment regime}, is immediate. However, attempting to reverse this implication, and deduce a theorem similar to Theorem \ref{thrm: PW convergence} from a control of $\mathcal{T}_2$, requires moving the supremum over test functions $f$ \emph{inside} the expectation. This corresponds to the most technical step in our proof (Lemmas \ref{thrm: pointwise martingale control}, \ref{thrm: local uniform martingale control}). 
Therefore, while it may be possible to deduce a version of Theorem \ref{thrm: PW convergence} from the control of $\mathcal{T}_2$ given by \cite{M+M}, this would scarcely be less technical than the proof given, and would not lead to a proof of Theorem \ref{thrm: Main Local Uniform Estimate}.
\paragraph{Step 3: Deduction of Infinite-Dimensional Chaos} Combining the two estimates for the case $l=2$ above, we deduce that there exists $\epsilon=\epsilon(d,k)>0$ such that\begin{equation} \sup_{t\ge 0} \hspace{0.05cm} \mathcal{W}_{1,2}\left(\Pi_2\left[\mathcal{P}^N_t(\mu^N_0, \cdot)\right], \phi_t(\mu^N_0)^{\otimes 2}\right) \lesssim a\hspace{0.05cm}N^{-\epsilon}.\end{equation} To deduce the full statement, we appeal to the following result from \cite{Hauray Mischler}, which may also be found in \cite[Theorem 2.1]{Mischler}. For any probability measure $\mu$ on $\mathbb{R}^d$, and any symmetric distribution $\mathcal{L}^N$ on $(\mathbb{R}^d)^N$, we may estimate \begin{equation} \max_{l\le N}\hspace{0.05cm}\frac{\mathcal{W}_{1,l}\left(\Pi_l[\mathcal{L}^N], \mu^{\otimes l}\right)}{l} \le C\left(\mathcal{W}_{1,2}\left(\Pi_2[\mathcal{L}^N], \mu^{\otimes 2}\right)^{\alpha_1}+N^{-\alpha_2}\right)\end{equation} for some explicit constants $C, \alpha_1, \alpha_2>0$ depending on the dimension $d$. The claimed result (\ref{eq: CPOC}) now follows. \medskip \\ We now turn to the two consequences claimed as a result. \paragraph{i). Chaotic Case} Let $\mu_0 \in \mathcal{S}$ have a $k^\mathrm{th}$ moment $\Lambda_k(\mu_0)\le a$, and let $\mathcal{V}^N_0=(v_1(0),...,v_N(0))$ be as described in the statement of the theorem, with associated empirical measure $\mu^N_0$. It is straightforward to show that this construction preserves moments up to a constant: that is, $\mathbb{E}(\Lambda_k(\mu^N_0))\lesssim a.$ \medskip \\ For a fixed test function $f\in B_X^{\otimes l}$, we return to the decomposition (\ref{eq: decomposition for chaos}). 
For this case, where $\mu^N_0\neq \mu_0$, we have a third error term: \begin{equation} \int_{(\mathbb{R}^d)^N} f(V)(\Pi_l[\mathcal{LV}^N_t]-(\phi_t(\mu_0))^{\otimes l})(dV)=\mathcal{T}_1+\mathcal{T}_2+\mathcal{T}_3.\end{equation} Here, $\mathcal{T}_1$ and $\mathcal{T}_2$ are as above, replacing $\mathbb{E}_{\mu^N_0}$ by the full expectation $\mathbb{E}$, and $\mathcal{T}_3$ is an additional error term, from approximating $\mu_0$ by $\mu^N_0$: \begin{equation} \mathcal{T}_3:=\mathbb{E}\left[\prod_{j=1}^l \langle f_j, \phi_t(\mu^N_0)\rangle-\prod_{j=1}^l\langle f_j, \phi_t(\mu_0)\rangle\right].\end{equation} As in the case above, we consider first the case $l=2$. The first two terms $\mathcal{T}_1, \mathcal{T}_2$ may be estimated as above, by conditioning on $(v_1(0),...,v_N(0))$, to conclude that \begin{equation} \mathcal{T}_1+\mathcal{T}_2 \lesssim aN^{-\epsilon}\end{equation} for some $\epsilon>0$, uniformly in $f\in B_X^{\otimes l}$ and $t\ge 0$. \medskip \\ Arguing as in (\ref{eq: extensivity}), we bound \begin{equation} \mathcal{T}_3 \lesssim\mathbb{E}W(\phi_t(\mu^N_0), \phi_t(\mu_0)).\end{equation} We estimate this term using the continuity estimate Theorem \ref{thrm: W-W continuity of phit}. Let $k'\in (2,k)$, and let $\zeta>0$ be the resulting exponent using Theorem \ref{thrm: W-W continuity of phit}; by making $\zeta$ smaller if necessary, we assume that \begin{equation} \frac{\zeta k}{k-k'} \le 1. 
\end{equation} From Theorem \ref{thrm: W-W continuity of phit}, we have the estimate \begin{equation} \sup_{t\ge 0} W(\phi_t(\mu^N_0),\phi_t(\mu_0))\lesssim\Lambda_{k'}(\mu^N_0,\mu_0)W(\mu^N_0,\mu_0)^\zeta\end{equation} and we use H\"older's inequality to obtain, uniformly in $t\ge 0$, \begin{equation} \begin{split}\mathbb{E}\left[W(\phi_t(\mu^N_0),\phi_t(\mu_0))\right] &\lesssim \mathbb{E}\left[\Lambda_k(\mu^N_0)\right]^{k'/k}\mathbb{E}\left[W(\mu^N_0,\mu_0)^\frac{\zeta k}{k-k'}\right]^\frac{k-k'}{k} \\[1ex] & \lesssim a^{k'/k} \hspace{0.1cm}\mathbb{E}\left[W(\mu^N_0,\mu_0)\right]^\zeta. \end{split} \end{equation} From \cite[Proposition 9.2]{ACE}, there is a constant $\beta=\beta(d,k)>0$ such that $ \mathbb{E} W(\mu^N_0,\mu_0)\lesssim N^{-\beta}$, so we obtain \begin{equation} \mathbb{E}\left[W(\phi_t(\mu^N_0),\phi_t(\mu_0))\right] \lesssim aN^{-\beta\zeta}.\end{equation} Combining, and since all of our estimates are uniform in $f$ and $t$, we have shown that \begin{equation} \mathcal{W}_{1,2}\left(\Pi_2[\mathcal{LV}^N_t], \phi_t(\mu_0)^{\otimes 2}\right) \lesssim aN^{-\alpha}\end{equation} for some $\alpha=\alpha(d,k)>0$. The improvement to infinite-dimensional chaos is exactly as above.
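We remark that the application of Jensen's inequality in the H\"older step above uses the choice of $\zeta$: writing $p=\frac{\zeta k}{k-k'}\le 1$, we have \begin{equation} \mathbb{E}\left[W(\mu^N_0,\mu_0)^{p}\right]^{\frac{k-k'}{k}}\le \mathbb{E}\left[W(\mu^N_0,\mu_0)\right]^{\frac{p(k-k')}{k}}=\mathbb{E}\left[W(\mu^N_0,\mu_0)\right]^{\zeta}. \end{equation}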
\paragraph{ii). General Case} The general case follows from the first case, by taking expectations over the initial data $\mu^N_0$. Indeed, for all $l\le N$, all $f\in B_X^{\otimes l}$ and $t\ge 0$, and for any initial data $(v_1(0),...v_N(0))$ with associated measure $\mu^N_0$, we have the bound \begin{equation} \frac{1}{l}\hspace{0.1cm}\mathbb{E}_{\mu^N_0}\left[f_1(v_1(t))...f_l(v_l(t))-\prod_{j=1}^l \langle f_j, \phi_t(\mu^N_0)\rangle\right] \lesssim \Lambda_k(\mu^N_0)N^{-\epsilon}.\end{equation} Taking expectation over the random initial data $(v_1(0),...,v_N(0))$ produces a full expectation on the left-hand side, and by definition of $\mathcal{L}^l_t$ in (\ref{eq: defn of ll}), \begin{equation}\mathbb{E}\left[\prod_{j=1}^l \langle f_j, \phi_t(\mu^N_0)\rangle\right]=\int_{(\mathbb{R}^d)^l} f(V)\hspace{0.1cm}\mathcal{L}^l_t(dV). \end{equation} Optimising over $f\in B_X^{\otimes l}$, $l\le N$ and $t\ge 0$ proves the claimed result. \end{proof}
\section{Introduction}
In mathematical statistics, it is common to assume that data satisfy an underlying model along with a set of assumptions on this model -- for example, that the sequence of vector-valued observations is i.i.d. and has multivariate normal distribution.
Since real-world data typically do not fit the model or satisfy the assumptions exactly (e.g., due to outliers and noise), reducing the number and strictness of the assumptions helps to reduce the gap between the ``mathematical'' world and the ``real'' world. The concept of robustness occupies one of the central roles in understanding this gap.
One of the viable ways to model noisy data and outliers is to assume that the observations are generated by a heavy-tailed distribution, and this is precisely the approach that we follow in this work.
Robust M-estimators introduced by P. Huber \cite{huber1964robust} constitute a powerful method in the toolbox for the analysis of heavy-tailed data.
Huber noted that ``it is an empirical fact that the best [outlier] rejection procedures do not quite reach the performance of the best robust procedures.''
His conclusion remains valid in today's age of high-dimensional data, which poses new challenging questions and demands novel methods.
The goal of this work is to introduce robust modifications for the class of operator-valued U-statistics, which naturally appear in the problems related to estimation of covariance matrices.
Statistical estimation in the presence of outliers and heavy-tailed data has recently attracted the attention of the research community, and the literature covers a wide range of settings.
A comprehensive review is beyond the scope of this section, so we mention only a few notable contributions.
Several popular approaches to robust covariance estimation and robust principal component analysis are discussed in \cite{hubert2008high,polyak2017principle,candes2011robust}, including the Minimum Covariance
Determinant (MCD) estimator and the Minimum Volume Ellipsoid (MVE) estimator.
Maronna's \cite{maronna1976} and Tyler's \cite{tyler1987distribution,zhang2016marvcenko} M-estimators are other well-known alternatives. Rigorous results for these estimators are available only for special families of distributions, such as elliptically symmetric.
Robust estimators based on Kendall's tau have been recently studied in \cite{wegkamp2016adaptive,han2017statistical}, again for the family of elliptically symmetric distributions and its generalizations.
The papers \cite{catoni2016pac,catoni2017dimension,giulini2016robust} discuss robust covariance estimation for heavy-tailed distributions and are all based on the ideas originating in work \cite{catoni2012challenging} that provided detailed non-asymptotic analysis of robust M-estimators of the univariate mean.
The present paper can be seen as a direct extension of these ideas to the case of matrix-valued U-statistics, and continues the line of work initiated in \cite{fan2016robust} and \cite{minsker2016sub}; the main advantage of the proposed techniques is that they result in estimators that can be computed efficiently, and cover scenarios beyond the covariance estimation problem.
Recent advances in this direction include the works \cite{fan2017farm} and \cite{wei2017estimation} that present new results on robust covariance estimation; see Remark \ref{remark:comparison} for more details.
Finally, let us mention the paper \cite{joly2016robust} that investigates robust analogues of U-statistics obtained via the median-of-means technique \cite{alon1996space,devroye2016sub,Nemirovski1983Problem-complex00,lerasle2011robust}.
We include a more detailed discussion and comparison with the methods of this work in Section \ref{section:main} below.
The rest of the paper is organized as follows.
Section \ref{sec:prelim} explains the main notation and background material. Section \ref{section:main} introduces the main results.
Implications for the covariance estimation problem and its variants are outlined in Section \ref{sec:covariance}.
Finally, the proofs of the main results are contained in Section \ref{section:proofs}.
\section{Preliminaries}
\label{sec:prelim}
In this section, we introduce the main notation and recall useful facts that we rely on in the subsequent exposition.
\subsection{Definitions and notation}
\label{sec:definitions}
Given $A\in \mathbb C^{d_1\times d_2}$, let $A^\ast\in \mathbb C^{d_2\times d_1}$ be the Hermitian adjoint of $A$.
The set of all $d\times d$ self-adjoint matrices will be denoted by $\mathbb H^d$.
For a self-adjoint matrix $A$, we will write $\lambda_{\mbox{\footnotesize{max}\,}}(A)$ and $\lambda_{\mbox{\footnotesize{min}\,}}(A)$ for the largest and smallest eigenvalues of $A$. The Hadamard (entry-wise) product of matrices $A_1,A_2\in \mathbb C^{d_1\times d_2}$ will be denoted by $A_1\odot A_2$.
Next, we will introduce the matrix norms used in the paper.
Everywhere below, $\|\cdot\|$ stands for the operator norm $\|A\|:=\sqrt{\lambda_{\mbox{\footnotesize{max}\,}}(A^\ast A)}$.
If $d_1=d_2=d$, we denote by $\mbox{tr\,} A$ the trace of $A$.
Next, for $A\in \mathbb C^{d_1\times d_2}$, the nuclear norm $\|\cdot\|_1$ is defined as
$\|A\|_1=\mbox{tr\,}(\sqrt{A^*A})$, where $\sqrt{A^*A}$ is a nonnegative definite matrix such that $(\sqrt{A^*A})^2=A^\ast A$.
The Frobenius (or Hilbert-Schmidt) norm is $\|A\|_{\mathrm{F}}=\sqrt{\mbox{tr\,}(A^\ast A)}$, and the associated inner product is
$\dotp{A_1}{A_2}=\mbox{tr\,}(A_1^\ast A_2)$.
Finally, define $\|A\|_{\max}:=\sup_{i,j}|A_{i,j}|$.
For a vector $Y\in \mathbb R^d$, $\left\| Y \right\|_2$ stands for the usual Euclidean norm of $Y$.
\noindent Given two self-adjoint matrices $A$ and $B$, we will write $A\succeq B \ (\text{or }A\succ B)$ iff $A-B$ is nonnegative (or positive) definite.
\noindent Given a random matrix $Y\in \mathbb C^{d_1\times d_2}$ with $\mathbb E\|Y\|<\infty$, the expectation $\mathbb EY$ denotes a $d_1\times d_2$ matrix such that $\left( \mathbb EY \right)_{i,j} = \mathbb EY_{i,j}$. For a sequence $Y_1,\ldots,Y_n$ of random matrices, $\mathbb E_j[\,\cdot \,]$ will stand for the conditional expectation $\mathbb E[\,\cdot\,|Y_1,\ldots,Y_{j}]$.
\noindent For $a,b\in \mathbb R$, set $a\vee b:=\max(a,b)$ and $a\wedge b:=\min(a,b)$. Finally, recall the definition of the function of a matrix-valued argument.
\begin{definition}
\label{matrix-function}
Given a real-valued function $f$ defined on an interval $\mathbb T\subseteq \mathbb R$ and a self-adjoint $A\in \mathbb H^d$ with the eigenvalue decomposition
$A=U\Lambda U^\ast$ such that $\lambda_j(A)\in \mathbb T,\ j=1,\ldots,d$, define $f(A)$ as
$f(A)=Uf(\Lambda) U^\ast$, where
\[
f(\Lambda)=f\left( \begin{pmatrix}
\lambda_1 & \, & \,\\
\, & \ddots & \, \\
\, & \, & \lambda_d
\end{pmatrix} \right)
=\begin{pmatrix}
f(\lambda_1) & \, & \,\\
\, & \ddots & \, \\
\, & \, & f(\lambda_d)
\end{pmatrix}.
\]
\end{definition}
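To make the spectral calculus of Definition \ref{matrix-function} concrete, it can be mirrored numerically. The sketch below (the name \texttt{matrix\_function} is ours, introduced purely for illustration) diagonalizes a self-adjoint matrix and applies $f$ to its eigenvalues.

```python
import numpy as np

def matrix_function(f, A):
    """Spectral calculus: A = U diag(lambda) U*  ->  f(A) = U diag(f(lambda)) U*."""
    lam, U = np.linalg.eigh(A)        # eigendecomposition of the self-adjoint A
    return (U * f(lam)) @ U.conj().T  # scale columns of U by f(lambda), recombine

# Sanity check: for f(x) = x^2 the spectral definition recovers the matrix square.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
assert np.allclose(matrix_function(np.square, A), A @ A)
```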
Finally, we introduce the Hermitian dilation, which allows one to reduce problems involving general rectangular matrices to the case of Hermitian matrices.
\begin{definition}
Given a rectangular matrix $A\in\mathbb C^{d_1\times d_2}$, the Hermitian dilation $\mathcal D: \mathbb C^{d_1\times d_2}\mapsto \mathbb C^{(d_1+d_2)\times (d_1+d_2)}$ is defined as
\begin{align}
\label{eq:dilation}
&
\mathcal D(A)=\begin{pmatrix}
0 & A \\
A^\ast & 0
\end{pmatrix}.
\end{align}
\end{definition}
Since
$\mathcal D(A)^2=\begin{pmatrix}
A A^\ast & 0 \\
0 & A^\ast A
\end{pmatrix},$
it is easy to see that $\| \mathcal D(A) \|=\|A\|$.
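Both the self-adjointness of $\mathcal D(A)$ and the identity $\| \mathcal D(A) \|=\|A\|$ are easy to verify numerically; the following sketch (function name ours, for illustration only) does so for a generic rectangular matrix, using the fact that the spectrum of the dilation consists of $\pm$ the singular values of $A$.

```python
import numpy as np

def hermitian_dilation(A):
    """Build the block matrix [[0, A], [A*, 0]] from eq. (eq:dilation)."""
    d1, d2 = A.shape
    return np.block([[np.zeros((d1, d1)), A],
                     [A.conj().T, np.zeros((d2, d2))]])

A = np.array([[1.0, 2.0, 0.5],
              [0.0, -1.0, 3.0]])
D = hermitian_dilation(A)
assert np.allclose(D, D.conj().T)                              # D(A) is self-adjoint
assert np.isclose(np.linalg.norm(D, 2), np.linalg.norm(A, 2))  # ||D(A)|| = ||A||
```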
\subsection{U-statistics}
\label{sec:setup}
Consider a sequence of i.i.d. random variables $X_1,\ldots,X_n$ ($n\geq2$) taking values in a measurable space $(\mathcal S,\mathcal{B})$, and let $P$ be the distribution of $X_1$.
Assume that $H: \mathcal S^m\rightarrow\mathbb H^d$ ($2\leq m\leq n$) is an $\mathcal{S}^m$-measurable permutation-symmetric kernel, meaning that $H(x_1,\ldots,x_m) = H(x_{\pi_1},\ldots,x_{\pi_m})$ for any $(x_1,\ldots,x_m)\in \mathcal S^m$ and any permutation $\pi$.
The U-statistic with kernel $H$ is defined as \cite{hoeffding1948class}
\begin{equation}
\label{u-stat}
U_n:=\frac{(n-m)!}{n!}\sum_{(i_1,\ldots,i_m)\in I_n^m}H(X_{i_1},\ldots,X_{i_m}),
\end{equation}
where $I_n^m:=\{(i_1,\ldots,i_m):~1\leq i_j\leq n,~i_j\neq i_k~\textrm{if}~j\neq k\}$; clearly, it is an unbiased estimator of $\mathbb EH(X_1,\ldots,X_m)$. Throughout this paper, we will impose a mild assumption stating that $\mathbb E\left\| H(X_1,\ldots,X_m)^2 \right\|<\infty$.
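For small $n$ and $m$, the U-statistic \eqref{u-stat} can be computed by brute force, which is a convenient way to experiment with matrix-valued kernels; the snippet below (illustrative only, names ours) averages a permutation-symmetric $\mathbb H^d$-valued kernel over all ordered $m$-tuples of distinct indices.

```python
import numpy as np
from itertools import permutations

def u_statistic(H, X, m):
    """Average of H over all ordered m-tuples of distinct indices."""
    tuples = list(permutations(range(len(X)), m))
    return sum(H(*(X[i] for i in t)) for t in tuples) / len(tuples)

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))                  # n = 6 observations in R^3
H = lambda x, y: np.outer(x - y, x - y) / 2  # permutation-symmetric kernel, m = 2
U = u_statistic(H, X, 2)
assert np.allclose(U, U.T)                   # the estimate is self-adjoint
```

Since the kernel is permutation-symmetric, averaging over ordered tuples gives the same result as averaging over the $\binom{n}{m}$ unordered subsets.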
One of the key questions in statistical applications is to understand the concentration of a given estimator around the unknown parameter of interest. The majority of existing results for U-statistics assume that the kernel $H$ is bounded \cite{arcones1993limit}, or that $\left\| H(X_1,\ldots,X_m)\right\|$ has sub-Gaussian tails \cite{gine2000exponential}.
However, in the case when only low-order moments of $\left\| H(X_1,\ldots,X_m)\right\|$ are finite, deviations of the random variable
\[
\left\| H(X_1,\ldots,X_m) - \mathbb EH(X_1,\ldots,X_m)\right\|
\]
do not satisfy exponential concentration inequalities.
At the same time, as we show in this paper, it is possible to construct ``robust modifications'' of $U_n$ for which sub-Gaussian type deviation results hold.
In the remainder of this section, we recall several useful facts about U-statistics.
The projection operator $\pi_{m,k}~(k\leq m)$ is defined as
\[
\pi_{m,k}H(\mathbf{x}_{i_1},\ldots,\mathbf{x}_{i_k})
:= (\delta_{\mathbf{x}_{i_1}}- P)\ldots(\delta_{\mathbf{x}_{i_k}} - P) P^{m-k}H,
\]
where
\[
Q^mH := \int\ldots\int H(\mathbf{y}_1,\ldots,\mathbf{y}_m)dQ(\mathbf{y}_1)\ldots dQ(\mathbf{y}_m)
\]
for any probability measure $Q$ on $(\mathcal S,\mathcal{B})$, and $\delta_{x}$ is the Dirac measure concentrated at $x\in \mathcal S$.
For example, $\pi_{m,1}H(x) = \mathbb E \left[ H(X_1,\ldots,X_m)| X_1=x\right] - \mathbb E H(X_1,\ldots,X_m)$.
\begin{definition}
An $\mathcal{S}^m$-measurable function $F: \mathcal S^m\rightarrow\mathbb H^d$ is $P$-degenerate of order $r$
($1\leq r<m$), if
\[
\mathbb EF(\mathbf{x}_1,\ldots,\mathbf{x}_r,X_{r+1},\ldots,X_m)=0,~\forall \mathbf{x}_1,\ldots,\mathbf{x}_r\in \mathcal S,
\]
and $\mathbb E F(\mathbf{x}_1,\ldots,\mathbf{x}_r, \mathbf{x}_{r+1},X_{r+2},\ldots,X_m)$ is not a constant function.
Otherwise, $F$ is non-degenerate.
\end{definition}
The following result is commonly referred to as Hoeffding's decomposition; see \cite{Decoupling} for details.
\begin{proposition}
\label{hoeffding}
The following equality holds almost surely:
\begin{equation*}
U_n=\sum_{k=0}^m{m \choose k}V_n(\pi_{m,k}H),
\end{equation*}
where
\[
V_n(\pi_{m,k}H)=\frac{(n-k)!}{n!}\sum_{(i_1,\ldots,i_k)\in I^k_n}\pi_{m,k}H(X_{i_1},\ldots,X_{i_k}).
\]
\end{proposition}
\noindent For instance, the first order term ($k=1$) in the decomposition is
\[
m V_n(\pi_{m,1}H)=\frac{m}{n}\sum_{j=1}^n\pi_{m,1}H(X_j).
\]
In this paper, we consider non-degenerate U-statistics, which commonly appear in applications such as covariance estimation, the main motivating example for this work. It is well known that
\[
\mathbb E \left(U_n- \mathbb E H(X_1,\ldots,X_m)\right)^2
={n \choose m}^{-1}\sum_{k=1}^m{m \choose k}{n-m \choose m-k}\Sigma_k^2,
\]
where $\Sigma_k^2 =\mathbb E\big(\pi_{m,k}H(X_1,X_2,\ldots,X_k)\big)^2$, $k=1,\ldots,m$.
As $n$ gets large, the first term in the sum above dominates the rest that are of smaller order, so that
\[
\left\|\mathbb E\left(U_n-\mathbb EH(X_1,\ldots,X_m)\right)^2\right\|
= \left\|{n \choose m}^{-1}m{n-m \choose m-1}\Sigma_1^2\right\| + o(n^{-1})
= \Big\|\frac{m^2}{n}\Sigma_1^2 \Big\| + o(n^{-1})
\]
as $n\to\infty$.
\begin{comment}
When direct computation of the asymptotic variance is difficult, the following upper bound is useful; it is well-known in the scalar case, and the generalization to $\mathbb H^d$-valued U-statistics is straightforward.
\begin{proposition}
\label{hajek-projection}
Let $\sigma^2=\Big\| \mathbb E \big(H(X_1,X_2,\ldots,X_m) - \mathbb E H(X_1,X_2,\ldots,X_m) \big)^2 \Big\|.$
The following inequality holds:
\[
\left\| \mathbb E \left(\pi_{m,1}H(X_i) \right)^2\right\|
\leq\frac{\sigma^2}{m}.
\]
\end{proposition}
\noindent The proof is given in Appendix \ref{section:appendix}.
\end{comment}
\section{Robust modifications of U-statistics}
\label{section:main}
The goal of this section is to introduce the robust versions of U-statistics, and state the main results about their performance.
\noindent Define
\begin{equation}
\label{eq:psi}
\psi(x) =
\begin{cases}
1/2, & x >1,\\
x - \mathrm{sign}(x)\cdot x^2/2, & |x|\leq 1, \\
-1/2, & x<-1
\end{cases}
\end{equation}
and its antiderivative
\begin{equation}
\label{eq:psi2}
\Psi(x) =
\begin{cases}
\frac{x^2}{2} - \frac{|x|^3}{6}, & |x| \leq 1,\\
\frac{1}{3} + \frac{1}{2}(|x|-1), & |x|>1.
\end{cases}
\end{equation}
The function $\Psi(x)$ is closely related to Huber's loss \cite{huber2011robust}; the concrete choice of $\Psi(x)$ is motivated by its properties, namely convexity and the fact that its derivative $\psi(x)$ is operator Lipschitz and bounded (see Lemma \ref{lemma:optimization} below).
\begin{figure}[t]
\centering
\subfloat[$\psi(x)$]{
\boxed{\includegraphics[width=0.5\textwidth]{figures/function1-1.pdf}}
\label{psi1}}
\subfloat[$\Psi(x)$]{
\boxed{ \includegraphics[width=0.5\textwidth]{figures/function2-2.pdf}}
\label{psi2}}
\caption{Graphs of the functions $\psi(x)$ and $\Psi(x)$.}
\end{figure}
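The two branches of \eqref{eq:psi} and \eqref{eq:psi2} are straightforward to implement, and the defining relation $\Psi'=\psi$ together with the bound $|\psi|\leq 1/2$ can be checked numerically; the code below is an illustrative sketch (names ours).

```python
import numpy as np

def psi(x):
    """psi(x) = x - sign(x) x^2/2 on [-1, 1], equal to +/- 1/2 outside."""
    x = np.asarray(x, dtype=float)
    inner = x - np.sign(x) * x**2 / 2
    return np.where(np.abs(x) <= 1, inner, np.sign(x) / 2)

def Psi(x):
    """Antiderivative of psi: quadratic near the origin, linear in the tails."""
    x = np.asarray(x, dtype=float)
    a = np.abs(x)
    return np.where(a <= 1, x**2 / 2 - a**3 / 6, 1 / 3 + (a - 1) / 2)

# Psi' = psi (the branches glue smoothly at |x| = 1), and psi is bounded by 1/2.
xs, h = np.linspace(-3, 3, 601), 1e-6
assert np.max(np.abs((Psi(xs + h) - Psi(xs - h)) / (2 * h) - psi(xs))) < 1e-4
assert np.all(np.abs(psi(xs)) <= 0.5)
```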
Let $U_n$ be an $\mathbb H^d$-valued U-statistic,
\[
U_n:=\frac{(n-m)!}{n!}\sum_{(i_1,\ldots,i_m)\in I_n^m} H(X_{i_1},\ldots,X_{i_m}).
\]
Since $U_n$ is the average of matrices of the form $H(X_{i_1},\ldots,X_{i_m}), \ (i_1,\ldots,i_m)\in I_n^m,$ it can be equivalently written as
\begin{align*}
U_n & = \argmin_{U\in \mathbb H^d}
\sum_{(i_1,\ldots,i_m)\in I_n^m} \left\| H(X_{i_1},\ldots,X_{i_m}) - U\right\|^2_{\mathrm{F}} \\
&
=\argmin_{U\in \mathbb H^d} \mbox{tr\,}\Bigg[ \sum_{(i_1,\ldots,i_m)\in I_n^m}
\left( H(X_{i_1},\ldots,X_{i_m}) - U \right)^2 \Bigg].
\end{align*}
A robust version of $U_n$ is then defined by replacing the quadratic loss with the (rescaled) loss $\Psi(x)$.
Namely, let $\theta>0$ be a scaling parameter, and define
\begin{align}
\label{eq:estimator1}
\widehat U_n^\star & =
\argmin_{U\in \mathbb H^d} \mbox{tr\,}\Bigg[ \sum_{(i_1,\ldots,i_m)\in I_n^m}
\Psi\Big( \theta\left( H(X_{i_1},\ldots,X_{i_m}) - U\right)\Big) \Bigg].
\end{align}
For brevity, we will set
\[
H_{i_1\ldots i_m}:= H(X_{i_1},\ldots,X_{i_m}) \text{ and } \mathbb EH:=\mathbb EH_{i_1\ldots i_m}
\]
in what follows.
Define
\begin{align}
\label{eq:F}
&
F_\theta(U):= \frac{1}{\theta^2}\frac{(n-m)!}{n!}\mbox{tr\,}\Bigg[ \sum_{(i_1,\ldots,i_m)\in I_n^m}
\Psi\Big( \theta\left( H_{i_1\ldots i_m} - U\right)\Big) \Bigg].
\end{align}
Clearly, $\widehat U_n^\star$ can be equivalently written as
\[
\widehat U_n^\star = \argmin_{U\in \mathbb H^d} F_\theta(U).
\]
The following result describes the basic properties of this optimization problem.
\begin{lemma}
\label{lemma:optimization}
The following statements hold:
\begin{enumerate}
\item Problem \eqref{eq:estimator1} is a convex optimization problem.
\item The gradient $\nabla F_\theta(U)$ can be represented as
\[
\nabla F_\theta(U) = -\frac{1}{\theta}\frac{(n-m)!}{n!}\sum_{(i_1,\ldots,i_m)\in I_n^m}
\psi\Big(\theta\left( H_{i_1\ldots i_m} - U\right) \Big).
\]
Moreover, $\nabla F_\theta(\cdot): \mathbb H^d\mapsto \mathbb H^d$ is Lipschitz continuous in Frobenius and operator norms with Lipschitz constant $1$.
\item Problem \eqref{eq:estimator1} is equivalent to
\begin{align}
\label{eq:estimator3}
\sum_{(i_1,\ldots,i_m)\in I^m_n}\psi\Big( \theta \left(H_{i_1\ldots i_m} - \widehat U_n^\star\right) \Big) = 0_{d\times d}.
\end{align}
\end{enumerate}
\end{lemma}
Proofs of these facts are given in Section \ref{proof:opt}. Next, we present our main result regarding the performance of the estimator $\widehat U_n^\star$.
Define the \textit{effective rank} \cite{vershynin2010introduction} of a nonnegative definite matrix $A\in \mathbb H^d$ as
\[
\mathrm{r}(A) = \frac{\mbox{tr\,} A}{\| A \|}.
\]
It is easy to see that $1\leq \mathrm{r}(A)\leq d$ for any nonzero nonnegative definite $A\in \mathbb H^d$.
We will be interested in the effective rank of the matrix $\mathbb E \left( H_{1\ldots m} - \mathbb EH \right)^2$, and will denote
\[
\mathrm{r}_H:= \mathrm{r}\left( \mathbb E \left( H_{1\ldots m} - \mathbb EH \right)^2\right).
\]
\begin{theorem}
\label{thm:new-performance}
Let $k=\lfloor n/m \rfloor$, and assume that $t>0$ is such that
\[
\mathrm{r}_H\frac{ t}{k}\leq \frac{1}{104}.
\]
Then for any $\sigma\geq \| \mathbb E \left( H_{1\ldots m} - \mathbb EH \right)^2 \|^{1/2}$ and
$\theta:=\theta_\sigma=\frac{1}{\sigma}\sqrt{\frac{2t}{k}}$,
\[
\left\| \widehat U_n^\star - \mathbb EH\right\| \leq 23\sigma\sqrt{\frac{t}{k}}
\]
with probability $\geq 1-(4d+1)e^{-t}$.
\end{theorem}
\noindent The proof is presented in Section \ref{proof:new-performance}.
\begin{remark}
\label{remark:rank}
Condition $\mathrm{r}_H\frac{ t}{k}\leq \frac{1}{104}$ in Theorem \ref{thm:new-performance} can be weakened to
\[
\frac{\mbox{tr\,}\left( \mathbb E \left( H_{1\ldots m} - \mathbb EH \right)^2 \right)}{\sigma^2} \frac{t}{k}\leq \frac{1}{104},
\]
where $\sigma^2\geq \| \mathbb E \left( H_{1\ldots m} - \mathbb EH \right)^2 \|$. This fact follows from a straightforward modification of the proof of Theorem \ref{thm:new-performance} and can be useful in applications.
\end{remark}
\begin{remark}
\label{remark:mom}
The paper \cite{joly2016robust} investigates robust analogues of univariate U-statistics based on the median-of-means (MOM) technique.
This approach can be extended to higher dimensions by replacing the univariate median with an appropriate multivariate generalization (e.g., the spatial median).
When applied to the covariance estimation problem, it yields estimates for the error measured in the Frobenius norm; however, it is not clear whether it can be used to obtain error bounds in the operator norm.
More specifically, to obtain such a bound via the MOM method, one would need to estimate
$
\mathbb E \left\| \frac{1}{n}\sum_{j=1}^n (Y_j - \mathbb EY)(Y_j - \mathbb EY)^T - \Sigma \right\|^2,
$
where $Y_1,\ldots,Y_n$ are i.i.d. copies of a random vector $Y\in \mathbb R^d$ such that $\mathbb E(Y-\mathbb EY)(Y-\mathbb EY)^T=\Sigma$ and $\mathbb E\|Y\|_2^4<\infty$.
We are not aware of any existing (non-trivial) upper bounds for the aforementioned expectation that require only four finite moments of $\|Y\|_2$.
On the other hand, it is straightforward to obtain the upper bound in the Frobenius norm as
$
\mathbb E \big\| \frac{1}{n}\sum_{j=1}^n (Y_j - \mathbb EY)(Y_j - \mathbb EY)^T - \Sigma \big\|_{\mathrm{F}}^2 =
\frac{1}{n}\left(\mathbb E\| Y - \mathbb EY\|_2^4 - \left\| \Sigma \right\|^2_{\mathrm{F}}\right).
$
\end{remark}
\subsection{Construction of the adaptive estimator}
\label{section:lepski}
The downside of the estimator $\widehat U_n^\star$ defined in \eqref{eq:estimator1} is that it is not completely data-dependent, as the choice of $\theta$ requires knowledge of an upper bound on
\[
\sigma_\ast^2 := \left\| \mathbb E \left( H_{1\ldots m} - \mathbb EH \right)^2\right\|.
\]
To alleviate this difficulty, we propose an adaptive construction based on a variant of Lepski's method \cite{lepskii1992asymptotically}.
Assume that $\sigma_{\mbox{\footnotesize{min}\,}}$ is a known (possibly crude) lower bound on $\sigma_\ast$.
Choose $\gamma>1$, let $\sigma_j := \sigma_{\mbox{\footnotesize{min}\,}} \gamma^j$, and
for each integer $j\geq 1$, set $t_j:=t+\log \left[j(j+1)\right]$ and
\[
\theta_j=\theta(j,t)=\sqrt{\frac{2 t_j}{k}}\frac{1}{\sigma_j},
\]
where $k=\lfloor n/m \rfloor$ as before.
Let
\[
\widehat U_{n,j}=\argmin_{U\in \mathbb H^d} F_{\theta_j}(U),
\]
where $F_{\theta}$ was defined in \eqref{eq:F}.
Finally, set
\[
\mathcal L:=\mathcal L(t) = \left\{ l\in \mathbb N: \ \mathrm{r}_H\frac{t_l}{k} \leq \frac{1}{104}\right\}
\]
and
\begin{align}
\label{eq:lepski}
j_\ast:=\min\left\{ j\in \mathcal L: \forall l\in \mathcal L, \ l>j ,\ \left\| \widehat U_{n,l} - \widehat U_{n,j} \right\|\leq 46\sigma_{l} \sqrt{\frac{ t_l}{k}} \right\}
\end{align}
and $\widetilde U_n^\star:=\widehat U_{n,j_\ast}$; if condition \eqref{eq:lepski} is not satisfied by any $j\in \mathcal L$, we set $j_\ast=+\infty$ and $\widetilde U_n^\star=0_{d\times d}$.
\noindent Let
\begin{align}
\label{eq:xi}
&
\Xi = \log\left[\left( \Big\lfloor \frac{\log \left(\sigma_\ast/\sigma_{\mbox{\footnotesize{min}\,}}\right)}{\log \gamma}\Big\rfloor+1\right)\left(\Big\lfloor \frac{\log \left(\sigma_\ast/\sigma_{\mbox{\footnotesize{min}\,}}\right)}{\log \gamma}\Big\rfloor+2 \right)\right].
\end{align}
\begin{theorem}
\label{th:lepski}
Assume that $t>0$ is such that
\[
\mathrm{r}_H\frac{(t+\Xi)}{k}\leq \frac{1}{104}.
\]
Then with probability $\geq 1 - (4d+1)e^{-t}$,
\[
\left\| \widetilde U_n^\star - \mathbb EH \right\| \leq
69\gamma\cdot\sigma_\ast \sqrt{\frac{t+\Xi}{k}}.
\]
\end{theorem}
\noindent In other words, the adaptive estimator can be obtained at the cost of an additional multiplicative factor $3\gamma$ in the error bound.
\begin{proof}
Let $\bar j=\min\left\{ j\geq 1: \ \sigma_j \geq \sigma_\ast\right\}$, and note that
$\bar j\leq \Big\lfloor \frac{\log \left(\sigma_\ast/\sigma_{\mbox{\footnotesize{min}\,}}\right)}{\log \gamma}\Big\rfloor+1$ and $\sigma_{\bar j}\leq \gamma \sigma_\ast$.
Note that the condition of Theorem \ref{th:lepski} guarantees that $\bar j \in \mathcal L$.
We will show that $j_\ast \leq \bar j$ with high probability.
Indeed,
\begin{align*}
\Pr\left( j_\ast > \right. & \left. \bar j\right)\leq \Pr\left( \bigcup_{l\in \mathcal L: l > \bar j} \left\{ \left\| \widehat U_{n,l} - \widehat U_{n,\bar j} \right\| > 46\sigma_{l} \sqrt{\frac{t_l}{k}} \right\} \right)\\
&
\leq \Pr\left( \left\| \widehat U_{n,\bar j} - \mathbb EH \right\| > 23\sigma_{\bar j} \sqrt{\frac{t_{\bar j}}{k}} \right) +
\sum_{l\in \mathcal L: l>\bar j}\Pr\left( \left\| \widehat U_{n,l} - \mathbb EH \right\| > 23\sigma_{l} \sqrt{\frac{t_l}{k}} \right) \\
&
\leq (4d+1) e^{-t}\frac{1}{\bar j(\bar j+1)} + (4d+1) e^{-t} \sum_{l>\bar j}\frac{1}{l(l+1)}\leq (4d+1) e^{-t},
\end{align*}
where we used Theorem \ref{thm:new-performance} to bound each of the probabilities in the sum.
The display above implies that the event
\[
\mathcal B = \bigcap_{l\in \mathcal L: l\geq \bar j}
\left\{ \left\| \widehat U_{n,l} - \mathbb EH \right\|\leq 23 \sigma_{l} \sqrt{\frac{t_l}{k}} \right\}
\]
of probability $\geq 1-(4d+1) e^{-t}$ is contained in $\mathcal E=\left\{ j_\ast\leq \bar j \right\}$.
Hence, on $\mathcal B$ we have
\begin{align*}
\left\| \widetilde U_n^\star - \mathbb EH \right\|&
\leq \| \widetilde U_n^\star - \widehat U_{n,\bar j} \| + \| \widehat U_{n,\bar j} - \mathbb EH \| \leq
46 \sigma_{\bar j} \sqrt{\frac{t_{\bar j}}{k}} + 23 \sigma_{\bar j} \sqrt{\frac{t_{\bar j}}{k}} \\
&\leq \gamma\cdot 69 \sigma_\ast \sqrt{\frac{t+\Xi}{k}},
\end{align*}
where $\Xi$ was defined in \eqref{eq:xi}.
\end{proof}
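Given the family of estimators $\{\widehat U_{n,j}\}$, the selection rule \eqref{eq:lepski} admits a direct implementation. The sketch below (names ours; indices start at $0$ for simplicity, and the candidate estimators are assumed to be precomputed) returns the first admissible index, with \texttt{None} playing the role of $j_\ast=+\infty$.

```python
import numpy as np

def lepski_select(estimates, sigmas, ts, k):
    """Smallest j such that every later estimate l stays within
    46 * sigma_l * sqrt(t_l / k) of estimate j; None if no such j exists."""
    for j in range(len(estimates)):
        if all(np.linalg.norm(estimates[l] - estimates[j], 2)
               <= 46 * sigmas[l] * np.sqrt(ts[l] / k)
               for l in range(j + 1, len(estimates))):
            return j
    return None

# Toy illustration: the first candidate is far from the rest, so index 1 is chosen.
ests = [10 * np.eye(2), np.eye(2), 1.01 * np.eye(2)]
assert lepski_select(ests, sigmas=[0.5, 1.0, 2.0], ts=[1.0] * 3, k=100) == 1
assert lepski_select([np.eye(2)] * 3, sigmas=[1.0] * 3, ts=[1.0] * 3, k=100) == 0
```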
\subsection{Extension to rectangular matrices}
\label{sec:rectangular}
In this section, we consider a more general setting where $H: \mathcal S^m\mapsto \mathbb C^{d_1\times d_2}$ is a $\mathbb C^{d_1\times d_2}$-valued permutation-symmetric kernel.
As before, our goal is to construct an estimator of $\mathbb EH$.
We reduce this general problem to the case of $\mathbb H^{d_1+d_2}$-valued functions via the Hermitian dilation defined in \eqref{eq:dilation}.
Let
\[
\mathcal D(H_{i_1\ldots i_m}) =
\begin{pmatrix}
0 & H(X_{i_1},\ldots,X_{i_m}) \\
\left[H(X_{i_1},\ldots,X_{i_m})\right]^\ast & 0
\end{pmatrix},
\]
and
\[
\bar U_n^\star =
\argmin_{U\in \mathbb H^{d_1+d_2}} \mbox{tr\,}\Bigg[ \sum_{(i_1,\ldots,i_m)\in I_n^m}
\Psi\Big( \theta\left( \mathcal D(H_{i_1\ldots i_m}) - U\right)\Big) \Bigg].
\]
Let $\hat U^\star_{11}\in \mathbb C^{d_1\times d_1}$, $\hat U^\star_{22}\in \mathbb C^{d_2\times d_2}$, $\hat U^\star_{12}\in \mathbb C^{d_1\times d_2}$ be such that $\bar U^\star_n$ can be written in the block form as
$\bar U_n^\star
=\begin{pmatrix}
\hat U^\star_{11} & \hat U^\star_{12} \\
( \hat U^\star_{12} )^\ast & \hat U^\star_{22}
\end{pmatrix}.$
Moreover, define
\[
\sigma^2_\star := \max\left( \big\| \mathbb E (H_{1\ldots m} - \mathbb EH) (H_{1\ldots m} - \mathbb EH)^\ast \big\|, \big\|
\mathbb E (H_{1\ldots m} - \mathbb EH)^\ast (H_{1\ldots m} - \mathbb EH) \big\| \right)
\]
and
\[
\tilde{\mathrm{r}}_H:= 2\cdot\frac{\mbox{tr\,}\left[ \mathbb E (H_{1\ldots m} - \mathbb EH) (H_{1\ldots m} - \mathbb EH)^\ast\right] }{\sigma^2_\star}.
\]
\begin{corollary}
\label{cor:rectangular}
Let $k=\lfloor n/m \rfloor$, and assume that $t>0$ is such that
\[
\tilde{\mathrm{r}}_H \frac{t}{k} \leq \frac{1}{104}.
\]
Then for any $\sigma \geq \sigma_\star$ and $\theta:=\theta_\sigma=\frac{1}{\sigma}\sqrt{\frac{2t}{k}}$,
\[
\left\| \hat U^\star_{12} - \mathbb EH \right\|
\leq 23\sigma \sqrt{\frac{t}{k}}
\]
with probability $\geq 1 - \left( 4(d_1+d_2)+1 \right)e^{-t}$.
\end{corollary}
The proof is outlined in Section \ref{proof:rectangular}.
\subsection{Computational considerations}
\label{sec:computational}
Since the estimator $\widehat U_n^\star$ is the solution of the convex optimization problem \eqref{eq:estimator1}, it can be approximated via gradient descent.
We consider the simplest gradient descent scheme with constant step size equal to $1$.
Note that the gradient $\nabla F_\theta$ is Lipschitz continuous with constant $L_F=1$ by Lemma \ref{lemma:optimization}, hence this step size is exactly equal to $\frac{1}{L_F}$.
Given a starting point $U_0\in \mathbb H^d$, the gradient descent iteration for the minimization of $F_\theta(U)$ is
\begin{align*}
U^{(0)}_n :&= U_0, \\
U^{(j)}_n :&= U^{(j-1)}_n - \nabla F_\theta\left( U^{(j-1)}_n \right) \\
&
=U^{(j-1)}_n+\frac{1}{\theta}\frac{(n-m)!}{n!}\sum_{(i_1,\ldots,i_m)\in I^m_n}\psi\Big(\theta\left( H_{i_1\ldots i_m} - U^{(j-1)}_n\right) \Big), \ j\geq 1.
\end{align*}
\begin{lemma}
\label{lemma:grad-descent}
The following inequalities hold for all $j\geq 1$:
\[
(a) \quad F_\theta \left(U_n^{(j)}\right) - F_\theta \left(\widehat U_n^\star\right) \leq \frac{\left\| U_0 - \widehat U_n^\star \right\|_{\mathrm{F}}^2}{2j};
\]
Moreover, under the assumptions of Theorem \ref{thm:new-performance},
\[
(b) \quad \Big\| U_n^{(j)} - \mathbb EH \Big\| \leq \left( \frac{3}{4} \right)^{j}\left\| U_0 - \mathbb EH \right\| + 23\sigma\sqrt{\frac{t}{k}}.
\]
\end{lemma}
The proof is given in Section \ref{proof:grad-descent}.
Note that part (b) implies that a small number of iterations suffices to obtain an estimator of $\mathbb EH$ that achieves a performance bound similar to that of $\widehat U_n^\star$.
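The iteration is simple to implement once $\psi$ is applied spectrally (Definition \ref{matrix-function}); below is an illustrative sketch (names ours). As a sanity check, for a two-point configuration symmetric about a matrix $M$, the first-order condition \eqref{eq:estimator3} holds exactly at $M$ because $\psi$ is odd, and the iteration indeed converges to $M$.

```python
import numpy as np

def psi_mat(A):
    """Apply the scalar function psi to a self-adjoint matrix via its spectrum."""
    lam, U = np.linalg.eigh(A)
    inner = lam - np.sign(lam) * lam**2 / 2
    f = np.where(np.abs(lam) <= 1, inner, np.sign(lam) / 2)
    return (U * f) @ U.T

def robust_u(Hs, theta, n_iter=200):
    """Unit-step gradient descent: U <- U + (1/theta) * avg psi(theta (H - U))."""
    U = np.zeros_like(Hs[0])
    for _ in range(n_iter):
        U = U + np.mean([psi_mat(theta * (H - U)) for H in Hs], axis=0) / theta
    return U

M, D = np.diag([1.0, 2.0]), np.diag([0.5, -0.3])
U_hat = robust_u([M + D, M - D], theta=0.5)   # configuration symmetric around M
assert np.allclose(U_hat, M)
```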
\section{Estimation of covariance matrices}
\label{sec:covariance}
In this section, we consider applications of the previously discussed results to covariance estimation problems.
Let $Y\in \mathbb R^d$ be a random vector with mean $\mathbb EY=\mu$, covariance matrix $\Sigma=\mathbb E\left[ (Y-\mu)(Y-\mu)^T \right]$, and such that $\mathbb E \|Y-\mu\|_2^4<\infty$.
Let $Y_1,\ldots, Y_{n}$ be i.i.d. copies of $Y$.
Our goal is to estimate $\Sigma$; note that when the observations are heavy-tailed, the mean estimation problem itself becomes non-trivial, so the simplifying assumption $\mu=0$ cannot be taken for granted.
$U$-statistics offer a convenient way to avoid explicit mean estimation.
Indeed, observe that $\Sigma = \frac{1}{2}\mathbb E\left[ (Y_1 - Y_2)(Y_1 - Y_2)^T\right]$, hence
the natural estimator of $\Sigma$ is the $U$-statistic
\begin{equation}\label{eq:sample-covariance}
\widetilde \Sigma_n = \frac{1}{n(n-1)} \sum_{i\ne j}\frac{ (Y_i - Y_j)(Y_i - Y_j)^T}{2}.
\end{equation}
It is easy to check that $\widetilde \Sigma_n$ coincides with the usual sample covariance estimator
\[
\widetilde \Sigma_n=\frac{1}{n-1}\sum_{j=1}^n (Y_j - \bar Y_n)(Y_j - \bar Y_n)^T.
\]
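This identity is easy to confirm numerically, which also provides a quick check of the normalization in \eqref{eq:sample-covariance}:

```python
import numpy as np

rng = np.random.default_rng(1)
Y = rng.normal(size=(8, 3))
n = len(Y)

# U-statistic form: average of (Y_i - Y_j)(Y_i - Y_j)^T / 2 over ordered pairs.
U = sum(np.outer(Y[i] - Y[j], Y[i] - Y[j]) / 2
        for i in range(n) for j in range(n) if i != j) / (n * (n - 1))

# Sample covariance with the 1/(n - 1) normalization.
Ybar = Y.mean(axis=0)
S = (Y - Ybar).T @ (Y - Ybar) / (n - 1)

assert np.allclose(U, S)  # the two estimators coincide
```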
The robust version is defined according to \eqref{eq:estimator1} as
\begin{align}
\label{eq:estimator4}
\widehat \Sigma_\star =
\argmin_{S\in \mathbb R^{d\times d}, S=S^T} \Bigg[ \mbox{tr\,} \sum_{i\ne j}
\Psi\left(\theta\left( \frac{ (Y_i - Y_j)(Y_i - Y_j)^T}{2} - S\right)\right) \Bigg],
\end{align}
which, by Lemma \ref{lemma:optimization}, is equivalent to
\begin{align*}
\sum_{i\ne j}\psi\left( \theta \left( \frac{(Y_i - Y_j)(Y_i - Y_j)^T}{2} - \widehat\Sigma_\star \right) \right) = 0_{d\times d}.
\end{align*}
\begin{remark}
\label{remark:comparison}
Assume that $\Sigma_n^{(0)} = 0_{d\times d}$, then the first iteration of the gradient descent for the problem \eqref{eq:estimator4} is
\begin{align*}
\Sigma_n^{(1)} & =
\frac{1}{\theta}\frac{1}{n(n-1)}\sum_{i\ne j}\psi\Big(\theta \, \frac{(Y_i - Y_j)(Y_i - Y_j)^T}{2} \Big).
\end{align*}
$\Sigma_n^{(1)}$ can itself be viewed as an estimator of the covariance matrix.
This estimator was proposed in \cite{minsker2016sub} (see Remark 7 in that paper), and its performance was later analyzed in \cite{fan2017farm} (see Theorem 3.2). These results support the claim that a small number of gradient descent steps for problem \eqref{eq:estimator1} suffices in applications.
\end{remark}
\noindent To assess the performance of $\widehat \Sigma_\star$, we will apply Theorem \ref{thm:new-performance}.
First, let us discuss the ``matrix variance'' $\sigma^2$ appearing in the statement.
Direct computation shows that for $H(Y_1,Y_2) = \frac{(Y_1 - Y_2)(Y_1 - Y_2)^T}{2}$,
\[
\mathbb E(H_{12} - \mathbb EH)^2 = \frac{1}{2}\left( \mathbb E\left( (Y-\mu)(Y-\mu)^T \right)^2 + \mbox{tr\,}(\Sigma)\Sigma\right).
\]
The following result (which is an extension of Lemma 2.3 in \cite{wei2017estimation}) connects $\big\| \mathbb E(H - \mathbb EH)^2 \big\|$ with $\mathrm{r}(\Sigma)$, the effective rank of the covariance matrix $\Sigma$.
\begin{lemma}
\label{lemma:variance}
\begin{enumerate}
\item[(a)] Assume that kurtosis of the linear forms $\langle Y,v \rangle$ is uniformly bounded by $K$, meaning that
$\sup\limits_{v:\|v\|_2=1}\frac{\mathbb E \langle Y - \mathbb EY, v \rangle^4}{\left[\mathbb E \langle Y - \mathbb EY, v \rangle^2\right]^2}\leq K$. Then
\begin{align*}
\big\| \mathbb E\left( (Y-\mu)(Y-\mu)^T \right)^2 \big\| &\leq K \,\mbox{tr\,}(\Sigma) \, \|\Sigma\|.
\end{align*}
\item[(b)] Assume that the kurtosis of the coordinates $Y^{(j)}:=\dotp{Y}{e_j}$ of $Y$ is uniformly bounded by $K'<\infty$, meaning that
$\max\limits_{j=1,\ldots,d}\frac{\mathbb E\left( Y^{(j)} - \mathbb E Y^{(j)} \right)^4}{\left[\mathbb E\left( Y^{(j)} - \mathbb E Y^{(j)} \right)^2\right]^2}\leq K'$. Then
\begin{align*}
&
\mbox{tr\,}\left[ \mathbb E\left( (Y-\mu)(Y-\mu)^T \right)^2 \right] \leq K' \left( \mbox{tr\,}(\Sigma) \right)^2.
\end{align*}
\item[(c)] The following inequality holds:
\[
\big\| \mathbb E\left( (Y-\mu)(Y-\mu)^T \right)^2 \big\| \geq \mbox{tr\,}(\Sigma) \left\| \Sigma \right\|.
\]
\end{enumerate}
\end{lemma}
\noindent Lemma \ref{lemma:variance} immediately implies that under the bounded kurtosis assumption,
\[
\big\| \mathbb E(H - \mathbb EH)^2\big\| \leq K \,\mathrm{r}(\Sigma) \, \|\Sigma\|^2.
\]
The following corollary of Theorem \ref{thm:new-performance} (together with Remark \ref{remark:rank}) is immediate:
\begin{corollary}
\label{corollary:cov}
Assume that the kurtosis of linear forms $\langle Y,v \rangle, \ v\in \mathbb R^d,$ is uniformly bounded by $K$. Moreover, let $t>0$ be such that
\[
\mathrm{r}(\Sigma)\frac{t}{\lfloor n/2\rfloor} \leq \frac{1}{104}.
\]
Then for any $\sigma\geq \sqrt{K \,\mathrm{r}(\Sigma)}\, \|\Sigma\|$ and
$\theta:=\theta_\sigma = \frac{1}{\sigma}\sqrt{\frac{2t}{\lfloor n/2\rfloor}}$,
\[
\Big\| \widehat \Sigma_\star - \Sigma \Big\|\leq 23\sigma \sqrt{\frac{t}{\lfloor n/2\rfloor}}
\]
with probability $\geq 1 - (4d+1)e^{-t}$.
\end{corollary}
\noindent An adaptive version $\widetilde \Sigma_\star$ of the estimator $\widehat \Sigma_\star$ can be constructed as in \eqref{eq:lepski}, and its performance bound follows similarly from Theorem \ref{th:lepski}.
\begin{remark}
It is known \cite{koltchinskii2017concentration} that the quantity $\sqrt{\mathrm{r}(\Sigma)} \|\Sigma\|$ controls the expected error of the sample covariance estimator in the Gaussian setting.
On the other hand, fluctuations of the error around its expected value in the Gaussian case \cite{koltchinskii2017concentration} are controlled by the ``weak variance'' $\sup_{v\in\mathbb R^d:\|v\|_2=1}\mathbb E^{1/2}\dotp{Z}{v}^4\leq \sqrt{K} \|\Sigma\|$, while in our bounds fluctuations are controlled by the larger quantity $\sigma^2$; this fact leaves room for improvement in our results.
\end{remark}
\subsection{Estimation in Frobenius norm}
\label{sec:frob}
Next, we show that thresholding the eigenvalues of the adaptive estimator $\widetilde \Sigma_\star$ (defined as in \eqref{eq:lepski} for some $\gamma>1$) yields an estimator that achieves optimal performance in the Frobenius norm.
Given $\tau>0$, define
\begin{align}
\label{eq:thresholding}
&
\widetilde \Sigma_\star^\tau = \sum_{j=1}^d \max\left(\lambda_j\left(\widetilde \Sigma_\star\right) -\tau/2, 0\right) v_j(\widetilde \Sigma_\star) v_j(\widetilde \Sigma_\star)^T,
\end{align}
where $\lambda_j(\widetilde \Sigma_\star)$ and $v_j(\widetilde \Sigma_\star)$ are the eigenvalues and the corresponding eigenvectors of $\widetilde \Sigma_\star$.
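The thresholding step \eqref{eq:thresholding} only modifies the spectrum; an illustrative implementation (names ours):

```python
import numpy as np

def soft_threshold_spectrum(S, tau):
    """Replace each eigenvalue lambda_j of the symmetric S by
    max(lambda_j - tau/2, 0), keeping the eigenvectors."""
    lam, V = np.linalg.eigh(S)
    return (V * np.maximum(lam - tau / 2, 0.0)) @ V.T

S = np.diag([3.0, 1.0, 0.2])
T = soft_threshold_spectrum(S, tau=1.0)
assert np.allclose(np.sort(np.linalg.eigvalsh(T)), [0.0, 0.5, 2.5])
```

In particular, eigenvalues below $\tau/2$ are zeroed out, so the resulting estimator is typically of low rank.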
\begin{corollary}
\label{cor:frob}
Assume that the kurtosis of linear forms $\langle Y,v \rangle, \ v\in \mathbb R^d,$ is uniformly bounded by $K$. Moreover, let $t>0$ be such that
\[
\mathrm{r}(\Sigma)\frac{t+\Xi}{\lfloor n/2\rfloor} \leq \frac{1}{104},
\]
where $\Xi$ was defined in \eqref{eq:xi} with $\sigma_\ast:=\sqrt{K\, \mathrm{r}(\Sigma)} \|\Sigma\|$. Then for any
\[
\tau \geq \gamma\cdot 138\sqrt{K}\, \|\Sigma\| \,\sqrt{\frac{\mathrm{r}(\Sigma)(t+\Xi)}{\lfloor n/2\rfloor}},
\]
\begin{align}
&
\label{eq:ex70}
\left\| \widetilde \Sigma_\star^\tau - \Sigma \right\|_{\mathrm{F}}^2\leq
\inf_{S\in \mathbb R^{d\times d},S=S^T} \left[ \left\| S - \Sigma \right\|_{\mathrm{F}}^2 + \frac{(1+\sqrt{2})^2}{8}\tau^2\mathrm{rank}(S) \right]
\end{align}
with probability $\geq 1-(4d+1)e^{-t}$.
\end{corollary}
\noindent The proof of this corollary is given in Section \ref{proof:frob}.
\subsection{Masked covariance estimation}
\label{sec:masked}
The masked covariance estimation framework is based on the assumption that some entries of the covariance matrix $\Sigma$ are ``more important'' than others.
This is quantified by a symmetric mask matrix $M\in \mathbb R^{d\times d}$, and the goal is to estimate the matrix
$M\odot \Sigma$ that ``downweights'' the entries of $\Sigma$ deemed less important, or incorporates prior information on $\Sigma$.
This problem formulation has been introduced in \cite{levina2012partial}, and later studied in a number of papers including \cite{chen2012masked} and \cite{kabanava2017masked}.
We will be interested in finding an estimator $\widehat \Sigma_\star^M$ such that
$\| \widehat \Sigma_\star^M - M\odot\Sigma\|$ is small with high probability, and specifically in how the estimation error depends on the mask matrix $M$.
Consider the following estimator:
\begin{align}
\label{eq:masked}
\widehat \Sigma_\star^M =
\argmin_{S\in \mathbb R^{d\times d}, S=S^T} \Bigg[ \mbox{tr\,} \sum_{i\ne j}
\Psi\left(\theta\left( \frac{ M \odot (Y_i - Y_j)(Y_i - Y_j)^T}{2} - S\right)\right) \Bigg],
\end{align}
which is the ``robust'' version of the estimator $M\odot \widetilde \Sigma_n$, where $\widetilde \Sigma_n$ is the sample covariance matrix defined in \eqref{eq:sample-covariance}.
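For intuition, the non-robust baseline $M\odot \widetilde \Sigma_n$ that \eqref{eq:masked} robustifies can be sketched as follows (a minimal illustration; the function name and the dense double loop are ours, not part of the estimator's definition):

```python
import numpy as np

def masked_pairwise_covariance(Y, M):
    # Hadamard mask M applied to the average of the pairwise-difference
    # rank-one terms (Y_i - Y_j)(Y_i - Y_j)^T / 2 over i != j.
    n, d = Y.shape
    S = np.zeros((d, d))
    for i in range(n):
        for j in range(n):
            if i != j:
                diff = Y[i] - Y[j]
                S += np.outer(diff, diff) / 2.0
    return M * (S / (n * (n - 1)))
```

With the all-ones mask this reduces to the usual unbiased sample covariance, since $\frac{1}{n(n-1)}\sum_{i\neq j}\frac{(Y_i-Y_j)(Y_i-Y_j)^T}{2}=\frac{1}{n-1}\sum_i (Y_i-\bar Y)(Y_i-\bar Y)^T$.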
\begin{comment}
This idea has been formalized previously in
\cite{chen2012masked} and \cite{levina2012partial} under Gaussian and subgaussian samples
by introducing a self-adjoint mask matrix $M\in \mathbb H^d$, which takes non-zero values on the entries that are known to be significant and 0 otherwise. Then, we define the regularized estimator as $M\odot\widehat{\Sigma}$, where $\odot$ denotes the entry-wise Schur product. The goal is to bound $\|M\odot\widehat{\Sigma}-\Sigma\|$, which can be decomposed as
A typical example is the estimation of a banded covariance matrix where the entries decay polynomially away from the diagonal, i.e.
\[
\Big|\Sigma_{ij}\Big|\leq |i-j+1|^{-\alpha},~~\forall i,j\in\{1,2,\ldots,d\},
\]
and $\alpha>1$ is a fixed constant. A simple way of estimating such matrix is to focus only on the entries that are close to the diagonal and ignore other entries. Specifically, we introduce a mask $M$ which has the form
\[
M_{ij} =
\begin{cases}
1,~&\text{if}, |i-j|\leq b,\\
0,~&\text{otherwise,}
\end{cases}
\]
where $b>0$ is a fixed positive integer. Then, we have the bias of the regularized estimator $M\odot\widehat{\Sigma}$:
\[
\|M\odot\Sigma-\Sigma\|\leq 2\sum_{i>b}(i+1)^{-\alpha}\leq\frac{2}{\alpha-1}(b+1)^{1-\alpha},
\]
which follows from the fact that the operator norm is bounded by the maximum $l_1$-norm among all columns. Similarly, we have $\|\Sigma\|\leq 1+\frac{2}{\alpha-1}$, which implies.
$\|M\odot\Sigma-\Sigma\|\leq(b+1)^{1-\alpha}\|\Sigma\|$. This value decays very fast as we increase $b$. In view of \eqref{var-bias-decompose}, it remains to bound $\|M\odot\widehat{\Sigma}-M\odot\Sigma\|$, which can be effectively controlled when the samples are Gaussian vectors (or more generally subgaussian). It is not known though what is a good estimator under more general heavy-tailed samples as we are considering here.
On the other hand, there are several recent works on robust covariance estimation and finite sample guarantee under the heavy-tailed distributions. The majority of the works, such as \cite{Fan-robust-estimation-2016} and \cite{minsker2016sub}, focus on the scenario where the sample vectors have zero or known mean and the covariance matrix does not possess any structure.
They obtain robust covariance estimators with similar finite sample guarantee as that of subgaussian vectors.
More recently, \cite{wei2017estimation} considers robust covariance estimation with unknown mean, where a plug-in estimator is proposed and analyzed, while extensions to such a structured setting is not known.
\end{comment}
Next, following \cite{chen2012masked} we introduce additional parameters that appear in the performance bounds for
$\widehat \Sigma_\star^M$.
Let
\[
\|M\|_{1\rightarrow 2}:=\max_{j=1,\ldots,d} \sqrt{\sum_{i=1}^d M_{ij}^2}
\]
be the maximum $\|\cdot\|_2$ norm of the columns of $M$. We also define
\[
\nu_4(Y):=\sup_{\|\mathbf{v}\|_2\leq1}\mathbb E^{1/4}\langle\mathbf{v},Y -\mathbb E Y\rangle^4
\]
and
\[
\mu_4(Y):= \max_{j=1,\ldots,d} \mathbb E^{1/4}\left( Y^{(j)} - \mathbb EY^{(j)}\right)^4.
\]
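These quantities are directly computable from $M$; for instance, $\|M\|_{1\rightarrow 2}$ is just the largest column norm (a small sketch, with an arbitrary function name):

```python
import numpy as np

def norm_one_to_two(M):
    # Largest Euclidean norm among the columns of M.
    return float(np.sqrt((M ** 2).sum(axis=0)).max())
```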
The following result describes the finite-sample performance guarantees for $\widehat \Sigma_\star^M$.
\begin{corollary}
\label{cor:masked}
Assume that the kurtosis of the coordinates $Y^{(j)}=\dotp{Y}{e_j}$ of $Y$ is uniformly bounded by $K'$.
Moreover, let $t>0$ be such that
\[
\sqrt{K'} \frac{\mbox{tr\,}(\Sigma)}{\nu_4^2(Y)} \frac{ t}{\lfloor n/2 \rfloor}\leq \frac{1}{104}.
\]
Then for any $\Delta\geq \sqrt{2} \|M\|_{1\rightarrow 2} \,\nu_4(Y) \, \mu_4(Y)$ and
$\theta=\frac{1}{\Delta}\sqrt{\frac{2t}{\lfloor n/2 \rfloor}}$,
\[
\Big\| \widehat \Sigma_\star^M - M\odot \Sigma \Big\|\leq 23\Delta\sqrt{\frac{t}{\lfloor n/2 \rfloor}}
\]
with probability $\geq 1-(4d+1)e^{-t}$.
\end{corollary}
\begin{proof}
Let $X$ and $X'$ be independent and identically distributed random variables.
Then it is easy to check that
\begin{align}
\label{eq:c30}
&
\mathbb E(X-X')^4\leq 8 \mathbb E(X-\mathbb EX)^4.
\end{align}
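For completeness, \eqref{eq:c30} follows from a direct expansion: with $a:=X-\mathbb EX$ and $a':=X'-\mathbb EX'$, which are i.i.d. and centered,

```latex
\begin{align*}
\mathbb E(X-X')^4 = \mathbb E(a-a')^4
&= \mathbb E a^4 + 6\,\mathbb E a^2 \,\mathbb E (a')^2 + \mathbb E (a')^4 \\
&= 2\,\mathbb E a^4 + 6\left(\mathbb E a^2\right)^2
\leq 8\,\mathbb E (X-\mathbb E X)^4,
\end{align*}
```

where the cross terms vanish by independence and centering, and the last step uses $\left(\mathbb E a^2\right)^2\leq \mathbb E a^4$ (Jensen's inequality).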
This implies that $\nu^2_4(Y_1 - Y_2)\leq 2\sqrt{2} \nu_4^2(Y)$ and
$\mu_4(Y_1-Y_2)\leq 2\sqrt{2} \mu_4(Y)$.
\noindent Next, Lemma 4.1 in \cite{chen2012masked} yields that
\begin{align}
\label{eq:c11}
&
\Bigg\| \mathbb E \left( \frac{(Y_1 - Y_2)(Y_1 - Y_2)^T}{2}\odot M\right)^2 \Bigg\| \leq
2 \|M\|^2_{1\rightarrow 2} \, \mu^2_4(Y) \, \nu_4^2(Y).
\end{align}
Next, we will find an upper bound for the trace of $\mathbb E \left( \frac{(Y_1 - Y_2)(Y_1 - Y_2)^T}{2}\odot M\right)^2$.
It is easy to see that (e.g., see equation (4.1) in \cite{chen2012masked})
\[
\mathbb E \left( \frac{(Y_1 - Y_2)(Y_1 - Y_2)^T}{2}\odot M\right)^2 =
\sum_{j=1}^d M^{(j)} \left(M^{(j)}\right)^T \odot \mathbb E \left(\frac{Y_1^{(j)} - Y_2^{(j)}}{\sqrt{2}}\right)^2 \frac{(Y_1 - Y_2)(Y_1 - Y_2)^T}{2},
\]
where $M^{(j)}$ denotes the $j$-th column of the matrix $M$.
It follows from \eqref{eq:c30}, H\"{o}lder's inequality and the bounded kurtosis assumption that
\begin{align*}
\mbox{tr\,}\left[ \mathbb E \left( \frac{(Y_1 - Y_2)(Y_1 - Y_2)^T}{2}\odot M\right)^2 \right] &=
\sum_{i,j=1}^d M_{i,j}^2 \mathbb E\left[\left(\frac{Y_1^{(i)} - Y_2^{(i)}}{\sqrt{2}}\right)^2\left(\frac{Y_1^{(j)} - Y_2^{(j)}}{\sqrt{2}}\right)^2 \right] \\
&\leq
2\sum_{i,j=1}^d M_{i,j}^2 \mathbb E^{1/2} \left(Y^{(i)} - \mathbb E Y^{(i)}\right)^4 \mathbb E^{1/2} \left(Y^{(j)} - \mathbb E Y^{(j)}\right)^4 \\
&\leq
2\sqrt{K'} \mu_4^2(Y) \|M\|^2_{1\rightarrow 2} \, \mbox{tr\,}(\Sigma).
\end{align*}
Next, we deduce that for $\Delta^2 \geq 2 \|M\|^2_{1\rightarrow 2} \, \mu^2_4(Y) \, \nu_4^2(Y)$,
\[
\frac{\mbox{tr\,}\left[ \mathbb E \left( \frac{(Y_1 - Y_2)(Y_1 - Y_2)^T}{2}\odot M\right)^2 \right]}{\Delta^2}
\leq \sqrt{K'} \frac{\mbox{tr\,}(\Sigma)}{\nu_4^2(Y)}.
\]
The result now follows from Theorem \ref{thm:new-performance} and Remark \ref{remark:rank}.
\end{proof}
\begin{remark}
Let
\[
K:=\sup\limits_{v:\|v\|_2=1}\frac{\mathbb E \langle Y - \mathbb EY, v \rangle^4}{\left[\mathbb E \langle Y - \mathbb EY, v \rangle^2\right]^2}.
\]
Since $\nu_4^2(Y)\leq \sqrt{K}\| \Sigma \|$ by Lemma \ref{lemma:variance} and
$\mu_4^2(Y)\leq \sqrt{K'} \left\|\Sigma \right\|_{\max}$, we can state a slightly modified version of Corollary \ref{cor:masked}.
Namely, let $t>0$ be such that
\[
\sqrt{\frac{K'}{K}} \mathrm{r}(\Sigma) \frac{ t}{\lfloor n/2 \rfloor}\leq \frac{1}{104}.
\]
Then for any $\Delta\geq \sqrt{2K} \|M\|_{1\rightarrow 2} \sqrt{\left\|\Sigma \right\|_{\max} \, \|\Sigma\|}$ and
$\theta=\frac{1}{\Delta}\sqrt{\frac{2t}{\lfloor n/2 \rfloor}}$,
\[
\Big\| \widehat \Sigma_\star^M - M\odot \Sigma \Big\|\leq 23\Delta\sqrt{\frac{t}{\lfloor n/2 \rfloor}}
\]
with probability $\geq 1-(4d+1)e^{-t}$.
In particular, if $ \|M\|^2_{1\rightarrow 2} \ll \mathrm{r}(\Sigma)\frac{\|\Sigma\|}{\left\|\Sigma \right\|_{\max}}$, then our bounds show that $M\odot \Sigma$ can be estimated at a faster rate than $\Sigma$ itself.
This conclusion is consistent with results in \cite{chen2012masked} for Gaussian random vectors (e.g., see Theorem 1.1 in that paper); however, we should note that our bounds were obtained under much weaker assumptions.
\begin{comment}
[Comparison with previous results]
The previous best known result for masked covariance estimation was obtained in \cite{chen2012masked}. In particular, they prove the estimator $M\odot\widehat{\Sigma}_0:=M\odot\frac1n\sum_{i=1}^nX_iX_i^T$ has the following performance bound
\begin{multline}\label{compare-bound}
\expect{\left\| M\odot\widehat{\Sigma}_0 - M\odot\Sigma \right\|^2}^{1/2}
\leq C\|\Sigma\| \left( \sqrt{\frac{\log d}{n}\frac{\|\Sigma\|_{\max}}{\|\Sigma\|}}\|M\|_{1\rightarrow2} + \frac{\log d\log (nd)}{n}\frac{\|\Sigma\|_{\max}}{\|\Sigma\|}\|M\|\right),
\end{multline}
where $\|\Sigma\|_{\max}:=\max_i |\Sigma_{ii}|$ and $\|M\|$ denotes the spectral norm of $M$. Thus, if $\|M\|_{1\rightarrow2}$ and $\|M\|$ are constants,\footnote{For example, in previous banded covariance matrix estimation, one can show that $\|M\|_{1\rightarrow2}\leq\sqrt{2b+1}$ and $\|M\|\leq 2b+1$.}
then, the error rate from \eqref{compare-bound} is $\mathcal{O}\left(\sqrt{\log d/n}+(\log d\log (nd))/n\right)$.
Corollary \ref{thm:masked} shows our robust estimator achieves the error rate $\mathcal{O}\left(\sqrt{\log d/n}\right)$ which improves upon the previous result by a $\log (nd)$ factor, while adapting to both unknown mean and heavy-tailed distributions.
Finally, it is also worth noting that the above shrinkage estimator via $\psi(\cdot)$ function, when applying to subgaussian sample vectors, can also improve the logarithm factor in the bound. While chopping down the logarithm seems to be an interesting by-product, we feel the value of the new estimator stems from the fact that it can cope with much more general distributions beyond subgaussian assumptions.
\end{comment}
\end{remark}
\section{Proofs of the main results}
\label{section:proofs}
In this section, we present the proofs that were omitted from the main exposition.
\subsection{Technical tools}
We recall several useful facts from probability theory and matrix analysis that our arguments rely on.
\begin{fact}
\label{fact:01}
Let $f:\mathbb R\mapsto \mathbb R$ be a convex function.
Then $A\mapsto \mbox{tr\,} f(A)$ is convex on the set of self-adjoint matrices.
In particular, for any self-adjoint matrices $A,B$,
\[
\mbox{tr\,} f\left( \frac{A+B}{2} \right)\leq \frac{1}{2}\mbox{tr\,} f(A) + \frac{1}{2}\mbox{tr\,} f(B).
\]
\end{fact}
\begin{proof}
This is a consequence of the Peierls inequality, see Theorem 2.9 in \cite{carlen2010trace} and the comments following it.
\end{proof}
\begin{fact}
\label{fact:05}
Let $F:\mathbb R\mapsto \mathbb R$ be a continuously differentiable function, and $S\in \mathbb C^{d\times d}$ be a self-adjoint matrix. Then the gradient of $G(S):=\mbox{tr\,} F(S)$ is
\[
\nabla G(S) = F'(S),
\]
where $F'$ is the derivative of $F$ and $F'(S): \mathbb C^{d\times d}\mapsto \mathbb C^{d\times d}$ is the matrix function in the sense of the definition \ref{matrix-function}.
\end{fact}
\begin{proof}
See Lemma A.1 in \cite{minsker2016sub}.
\end{proof}
\begin{fact}
\label{fact:06}
Function $\psi(x)$ defined in \eqref{eq:psi} satisfies
\begin{align}
\label{eq:psi-ineq}
&
-\log(1-x+x^2)\leq \psi(x) \leq \log(1+x+x^2)
\end{align}
for all $x\in \mathbb R$.
Moreover, as a function of $\mathbb H^d$-valued argument (see definition \ref{matrix-function}), $\psi(\cdot)$ is Lipschitz continuous in the Frobenius and operator norms with Lipschitz constant $1$, meaning that for all $A_1,A_2\in \mathbb H^d$,
\begin{align*}
&
\left\| \psi(A_1) - \psi(A_2) \right\|_{\mathrm{F}} \leq \left\| A_1 - A_2 \right\|_{\mathrm{F}},
\\
&
\left\| \psi(A_1) - \psi(A_2) \right\| \leq \left\| A_1 - A_2 \right\|.
\end{align*}
\end{fact}
\begin{proof}
To show \eqref{eq:psi-ineq}, it is enough to check that $x-x^2/2\geq -\log(1-x+x^2)$ for $x\in[0,1]$ and that
$x-x^2/2\leq \log(1+x+x^2), \ x\in[0,1]$. Other inequalities follow after the change of variable $y=-x$.
To check that $f(x):=x-x^2/2\geq -\log(1-x+x^2):=g(x)$ for $x\in[0,1]$, note that $f(0)=g(0)=0$ and that
$f'(x)=1-x\geq 1 - \frac{x(1+x)}{1-x+x^2}=g'(x)$ for $x\in[0,1]$. Inequality $x-x^2/2\leq \log(1+x+x^2), \ x\in[0,1]$ can be established similarly.
Note that the function $\psi:\mathbb R\mapsto \mathbb R$ is Lipschitz continuous with Lipschitz constant $1$ as a function of a real variable. Lemma 5.5 (Chapter 7) in \cite{bhatia2013matrix} immediately implies that it is also Lipschitz continuous in the Frobenius norm, still with Lipschitz constant $1$.
The Lipschitz property of $\psi$ in the operator norm follows from Corollary 1.1.2 in \cite{aleksandrov2016operator}, which states that
if $g\in C^1(\mathbb R)$ and $g'$ is positive definite, then the Lipschitz constant of $g$ (as a function on $\mathbb H^d$) is equal to $g'(0)$. It is easy to check that
\[
\psi'(x)=\begin{cases}
1- |x|,~&|x|\leq1,\\
0,~&\text{otherwise},
\end{cases}
\]
which is the Fourier transform of the positive integrable function $\mathrm{sinc}(y) = \left(\frac{\sin(\pi y)}{\pi y}\right)^2$, hence $\psi'$ is positive definite and the (operator) Lipschitz constant of $\psi$ is equal to $1$.
\end{proof}
\begin{fact}
\label{fact:02}
Let $T_1,\ldots,T_L$ be arbitrary $\mathbb H^d$-valued random variables, and $p_1,\ldots,p_L$ be non-negative weights such that $\sum_{j=1}^L p_j=1$.
Moreover, let $T=\sum_{j=1}^L p_j T_j$ be convex combination of $T_1,\ldots, T_L$. Then
\[
\Pr\left(\lambda_{\mbox{\footnotesize{max}\,}}(T)\geq t \right) \leq \max_{j=1,\ldots,L} \left[ \inf_{\theta>0} e^{-\theta t}\mathbb E \mbox{tr\,} e^{\theta T_j}\right].
\]
\end{fact}
\begin{proof}
This fact is a corollary of a classical argument due to Hoeffding (see Section 5 in \cite{hoeffding1963probability}).
Indeed, for any $\theta>0$,
\begin{align*}
\pr{\lambda_{\mbox{\footnotesize{max}\,}}\left(\sum_{j=1}^L p_j T_j \right)\geq t} &\leq
\pr{\exp\left(\theta \lambda_{\mbox{\footnotesize{max}\,}}\left( \sum_{j=1}^L p_j T_j \right)\right) \geq e^{\theta t}} \\
& \leq
e^{-\theta t} \mathbb E \mbox{tr\,} \exp\left(\theta\sum_{j=1}^L p_j T_j \right)
\leq e^{-\theta t}\sum_{j=1}^L p_j \mathbb E \mbox{tr\,}\exp\left( \theta T_j\right),
\end{align*}
where the last inequality follows from Fact \ref{fact:01}.
\end{proof}
\begin{fact}[Chernoff bound]
\label{fact:03}
Let $\xi_1,\ldots,\xi_n$ be a sequence of i.i.d. copies of $\xi$ such that $\pr{\xi=1}=1-\pr{\xi=0}=p\in (0,1)$, and define
$S_n:=\sum_{j=1}^n \xi_j$.
Then
\[
\pr{S_n/n \geq (1+\tau)p}\leq \inf_{\theta>0}\Big[ e^{-\theta np(1+\tau)}\mathbb E e^{\theta S_n} \Big]\leq
\begin{cases}
e^{-\frac{\tau^2 np}{2+\tau}}, & \tau>1, \\
e^{-\frac{\tau^2 np}{3}}, & 0<\tau\leq 1.
\end{cases}
\]
\end{fact}
\begin{proof}
See Proposition 2.4 in \cite{angluin1979fast}.
\end{proof}
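As a quick sanity check of Fact \ref{fact:03} (not part of the proof), one can compare the bound for $0<\tau\leq 1$ with the exact binomial tail at specific parameter values; the function names below are ours:

```python
import math

def chernoff_bound(n, p, tau):
    # Right-hand side of the Chernoff bound, valid for 0 < tau <= 1.
    return math.exp(-tau ** 2 * n * p / 3.0)

def exact_binomial_tail(n, p, k):
    # Exact P(S_n >= k) for S_n ~ Binomial(n, p).
    return sum(math.comb(n, j) * p ** j * (1.0 - p) ** (n - j)
               for j in range(k, n + 1))
```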
Let $\pi_n$ be the collection of all permutations of $\{1,\ldots,n\}$, whose elements we write as $(i_1,\ldots,i_n)$.
For integers $m\leq \lfloor n/2\rfloor$, let $k=\lfloor n/m \rfloor$.
Given a permutation $(i_1,\ldots,i_n)\in \pi_n$ and a U-statistic $U_n$ defined in \eqref{u-stat}, let
\begin{align}
\label{eq:w}
&
W_{i_1,\ldots,i_n}:=\frac{1}{k}\left( H\left(X_{i_1},\ldots,X_{i_m} \right) + H\left(X_{i_{m+1}},\ldots,X_{i_{2m}}\right) + \ldots +
H\left(X_{i_{(k-1)m+1}},\ldots,X_{i_{km}} \right) \right).
\end{align}
\begin{fact}
\label{fact:04}
The following equality holds:
\[
U_n = \frac{1}{n!}\sum_{(i_1,\ldots,i_n) \in \pi_n} W_{i_1,\ldots,i_n}.
\]
\end{fact}
\begin{proof}
See Section 5 in \cite{hoeffding1963probability}.
\end{proof}
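Fact \ref{fact:04} is easy to verify numerically for small $n$ and $m$; the sketch below (function names are ours) compares the U-statistic with the average of $W_{i_1,\ldots,i_n}$ over all permutations for a real-valued kernel:

```python
import itertools
import math
import numpy as np

def u_statistic(xs, m, H):
    # Average of H over all ordered m-tuples of distinct indices.
    vals = [H(*(xs[i] for i in idx))
            for idx in itertools.permutations(range(len(xs)), m)]
    return float(np.mean(vals))

def average_over_permutations(xs, m, H):
    # Average of W_{i_1,...,i_n} over all n! permutations, where each W
    # averages H over k = floor(n/m) disjoint blocks of the permuted sample.
    n = len(xs)
    k = n // m
    total = 0.0
    for perm in itertools.permutations(range(n)):
        blocks = [H(*(xs[perm[j * m + r]] for r in range(m)))
                  for j in range(k)]
        total += sum(blocks) / k
    return total / math.factorial(n)
```

By symmetry every ordered $m$-tuple appears equally often among the blocks, which is exactly why the two averages coincide.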
Let $Z_1,\ldots,Z_n$ be a sequence of independent copies of $Z\in \mathbb H^d$ such that $\left\| \mathbb EZ^2\right\|<\infty$.
\begin{fact}[Matrix Bernstein Inequality]
\label{fact:bernstein}
Assume that $\|Z -\mathbb EZ\|\leq M$ almost surely. Then for any $\sigma\geq \left\| \mathbb E(Z - \mathbb EZ)^2 \right\|$,
\[
\Bigg\| \frac{\sum_{j=1}^n Z_j}{n} - \mathbb EZ \Bigg\|\leq 2\sigma\sqrt{\frac{t}{n}}\bigvee \frac{4Mt}{3n}
\]
with probability $\geq 1 - 2de^{-t}$.
\end{fact}
\begin{proof}
See Theorem 1.4 in \cite{tropp2012user}.
\end{proof}
Assume that $\left\| H\left(X_{i_1},\ldots, X_{i_m}\right) \right\|\leq M$ almost surely. Together with Facts \ref{fact:04} and \ref{fact:02}, Bernstein's inequality can be used to show that
\begin{align}
\label{eq:p10}
&
\Big\| U_n - \mathbb EH \Big\|\leq 2 \big\| \mathbb E(H - \mathbb E H)^2 \big\|^{1/2}\sqrt{\frac{t}{k}} \bigvee \frac{4Mt}{3k}
\end{align}
with probability $\geq 1 - 2de^{-t}$. This corollary will be useful in the sequel.
\begin{fact}
\label{fact:legacy}
Let $\psi(\cdot)$ be defined by \eqref{eq:psi}. Then the following inequalities hold for all $\theta>0$:
\begin{align*}
&
\mathbb E \mbox{tr\,} \exp\left( \sum_{j=1}^n \left( \psi(\theta Z_j) -\theta\mathbb EZ\right) \right)\leq \mbox{tr\,} \exp\left( n\theta^2 \mathbb EZ^2 \right), \\
&
\mathbb E \mbox{tr\,} \exp\left( \sum_{j=1}^n \left( \theta\mathbb EZ - \psi(\theta Z_j)\right) \right)\leq \mbox{tr\,} \exp\left( n \theta^2 \mathbb EZ^2 \right).
\end{align*}
\end{fact}
\begin{proof}
These inequalities follow from \eqref{eq:psi-ineq} and Lemma 3.1 in \cite{minsker2016sub}.
Note that, in contrast to Fact \ref{fact:bernstein}, we did not assume above that $\|Z -\mathbb EZ\|$ is bounded.
\end{proof}
Finally, we will need the following statement related to the self-adjoint dilation \eqref{eq:dilation}.
\begin{fact}
\label{fact:dilation}
Let $S\in \mathbb C^{d_1\times d_1}, \ T\in \mathbb C^{d_2\times d_2}$ be self-adjoint matrices, and $A\in \mathbb C^{d_1\times d_2}$.
Then
\[
\left\| \begin{pmatrix}
S & A \\
A^\ast & T
\end{pmatrix}\right\|
\geq
\left\| \begin{pmatrix}
0 & A \\
A^\ast & 0
\end{pmatrix} \right\|.
\]
\end{fact}
\begin{proof}
See Lemma 2.1 in \cite{minsker2016sub}.
\end{proof}
\subsection{Proof of Lemma \ref{lemma:optimization}}
\label{proof:opt}
(1) Convexity follows from Fact \ref{fact:01} since the sum of convex functions is a convex function. \\
(2) The expression for the gradient follows from Fact \ref{fact:05}.
To show that $\nabla F_\theta(U)$ is Lipschitz continuous, note that
\begin{multline*}
\Big\| \frac{1}{\theta}\psi\left( \theta\left(H_{i_1,\ldots,i_m} - U_1 \right)\right) - \frac{1}{\theta}\psi\left( \theta\left(H_{i_1,\ldots,i_m} - U_2 \right)\right) \Big\|
\\
\leq \Big\| \frac{1}{\theta} \left( \theta\left(H_{i_1,\ldots,i_m} - U_1 \right) - \theta\left(H_{i_1,\ldots,i_m} - U_2 \right) \right) \Big\|
= \left\| U_1 - U_2 \right\|,
\end{multline*}
\begin{multline*}
\Big\| \frac{1}{\theta}\psi\left( \theta\left(H_{i_1,\ldots,i_m} - U_1 \right)\right) - \frac{1}{\theta}\psi\left( \theta\left(H_{i_1,\ldots,i_m} - U_2 \right)\right) \Big\|_{\mathrm{F}}
\\
\leq
\Big\| \frac{1}{\theta} \left( \theta\left(H_{i_1,\ldots,i_m} - U_1 \right) - \theta\left(H_{i_1,\ldots,i_m} - U_2 \right) \right) \Big\|_{\mathrm{F}}
= \left\| U_1 - U_2 \right\|_{\mathrm{F}}
\end{multline*}
by Fact \ref{fact:06}. Since the convex combination of Lipschitz continuous functions is still Lipschitz continuous, the claim follows. \\
(3) Since $\widehat U_n^\star$ is the solution of the problem \eqref{eq:estimator1}, the directional derivative
\[
dF_\theta(\widehat U_n^\star; B) := \lim_{t\to 0}\frac{F_\theta(\widehat U_n^\star + tB) - F_\theta(\widehat U_n^\star)}{t}
= \mbox{tr\,} \left( \nabla F_\theta(\widehat U_n^\star) \, B\right)
\]
is equal to 0 for any $B\in \mathbb H^d$. The result follows by taking, consecutively, $B_{i,j}=e_i e_j^T + e_j e_i^T, \ i\ne j$, and $B_{i,i} = e_i e_i^T, \ i=1,\ldots,d$, where $\left\{ e_1,\ldots,e_d\right\}$ is the standard Euclidean basis.
\qed
\subsection{Proof of Theorem \ref{thm:new-performance}}
\label{proof:new-performance}
The proof is based on the analysis of the gradient descent iteration for the problem \eqref{eq:estimator1}.
Let
\[
G(U) := \mbox{tr\,} F_\theta(U) = \mbox{tr\,} \left[ \frac{1}{\theta^2}\frac{(n-m)!}{n!}\sum_{(i_1,\ldots,i_m)\in I^m_n}\Psi\Big( \theta \left(H(X_{i_1},\ldots,X_{i_m}) - U \right) \Big) \right],
\]
and define
\begin{align*}
U^{(0)}_n :&= \mathbb EH = \mathbb E H(X_1,\ldots,X_m), \\
U^{(j)}_n :&= U^{(j-1)}_n - \nabla G\left( U^{(j-1)}_n \right) \\
&
=U^{(j-1)}_n+\frac{1}{\theta}\frac{(n-m)!}{n!}\sum_{(i_1,\ldots,i_m)\in I^m_n}\psi\Big(\theta\left( H_{i_1\ldots i_m} - U^{(j-1)}_n\right) \Big), \ j\geq 1,
\end{align*}
which is the gradient descent for \eqref{eq:estimator1} with the step size equal to $1$.
We will show that with high probability (and for an appropriate choice of $\theta$), $U^{(j)}_n$ does not escape a small neighborhood of $\mathbb E H(X_1,\ldots,X_m)$.
The claim of the theorem then easily follows from this fact.
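To make the iteration concrete, here is a minimal numerical sketch for the special case $m=1$ (so the kernel values are the observations themselves); $\psi$ is applied spectrally via the eigendecomposition, all names are illustrative, and, unlike the analysis, which starts the iteration at the unknown $\mathbb EH$, we start at the naive average:

```python
import numpy as np

def psi(x):
    # The scalar truncation function: x - sign(x) x^2 / 2 on [-1, 1],
    # and sign(x)/2 outside of that interval.
    return np.where(np.abs(x) <= 1.0,
                    x - np.sign(x) * x ** 2 / 2.0,
                    np.sign(x) * 0.5)

def psi_spectral(A):
    # Apply psi to a symmetric matrix as a matrix function.
    lam, V = np.linalg.eigh(A)
    return (V * psi(lam)) @ V.T

def robust_matrix_mean(Hs, theta, n_iter=20):
    # Gradient descent with unit step, specialized to m = 1:
    # U_j = U_{j-1} + (1/theta) * average of psi(theta (H_i - U_{j-1})).
    U = np.mean(Hs, axis=0)
    for _ in range(n_iter):
        step = np.mean([psi_spectral(theta * (H - U)) for H in Hs], axis=0)
        U = U + step / theta
    return U
```

For small $\theta$ the update is nearly linear and the iteration stays at the sample mean; robustness kicks in when $\theta\left(H_i - U\right)$ has eigenvalues exceeding $1$ in absolute value, which caps the influence of outlying terms.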
\noindent Given a permutation $(i_1,\ldots,i_n)\in \pi_n$ and $U\in\mathbb H^d$, let $k = \lfloor n/m \rfloor$ and
\begin{align*}
Y_{i_1\ldots i_m}(U;\theta)&:=\psi\left(\theta\left(H_{i_1\ldots i_m}-U\right)\right),
\\
W_{i_1\ldots i_n}(U;\theta)& := \frac{1}{k}\left( Y_{i_1\ldots i_m}(U;\theta) + Y_{i_{m+1}\ldots i_{2m}}(U;\theta) + \ldots + Y_{i_{(k-1)m+1}\ldots i_{km}}(U;\theta) \right).
\end{align*}
Fact \ref{fact:04} implies that
\begin{equation}
\label{inter-permutation}
\nabla G\left( U \right)=\frac{(n-m)!}{n!}\sum_{(i_1\ldots i_m)\in I^m_n} \frac{1}{\theta}\psi\Big(\theta\left( H_{i_1\ldots i_m} - U\right) \Big) = \frac{1}{n!}\sum_{(i_1\ldots i_n)\in\pi_n}\frac{1}{\theta} W_{i_1 \ldots i_n}(U;\theta),
\end{equation}
where $\pi_n$ ranges over all permutations of $(1,\ldots,n)$.
In the remainder of the proof, fix $\sigma^2\geq \left\| \mathbb E(H-\mathbb EH)^2 \right\|$ and set $\theta = \theta_\sigma := \frac{1}{\sigma}\sqrt{\frac{2t}{k}}$. Then for $j\geq 1$ we have
\begin{align}
\label{eq:b10}
\nonumber
\Big\| U_n^{(j)} - \mathbb EH \Big\| & =
\Bigg\| \frac{1}{\theta}\frac{(n-m)!}{n!}\sum_{(i_1,\ldots,i_m)\in I^m_n}\psi\Big(\theta\left( H_{i_1\ldots i_m} - U^{(j-1)}_n\right)\Big)
- \left( \mathbb EH - U^{(j-1)}_n \right) \Bigg\| \\
&
=\Bigg\| \frac{1}{\theta_\sigma n!}\sum_{(i_1\ldots i_n)\in\pi_n}W_{i_1 \ldots i_n}(U_n^{(j-1)};\theta_\sigma) - \left( \mathbb EH - U^{(j-1)}_n \right) \Bigg\| \\
& \nonumber
\leq
\Bigg\| \frac{1}{\theta_\sigma n!}\sum_{(i_1\ldots i_n)\in\pi_n}
\Big( W_{i_1 \ldots i_n}(U_n^{(j-1)};\theta_\sigma) - W_{i_1,\ldots,i_n}(\mathbb EH;\theta_\sigma) \Big) - \left( \mathbb EH - U^{(j-1)}_n \right) \Bigg\| \\
& \nonumber
+ \Bigg\| \frac{1}{\theta_\sigma}\frac{1}{n!}\sum_{\pi_n} W_{i_1,\ldots,i_n}(\mathbb E H;\theta_\sigma) \Bigg\|.
\end{align}
The following two lemmas provide the bounds that allow us to control the size of $\Big\| U_n^{(j)} - \mathbb EH \Big\|$.
For a given $\sigma^2\geq \left\| \mathbb E(H-\mathbb EH)^2 \right\|$ and $\theta_\sigma = \frac{1}{\sigma}\sqrt{\frac{2 t}{k}}$, consider the random variable
\[
L_n(\delta) = \sup_{\|U-\mathbb EH\|\leq\delta} \left\| \frac{1}{\theta_\sigma}\frac{1}{n!}\sum_{\pi_n}
\Big( W_{i_1,\ldots,i_n}(U;\theta_\sigma)
-W_{i_1,\ldots,i_n}(\mathbb EH;\theta_\sigma) \Big) - (\mathbb EH - U)
\right\|.
\]
\begin{lemma}
\label{lemma:sup-bound}
With probability $\geq 1 - (2d+1)e^{-t} $, for all $\delta\leq \frac{1}{2}\frac{1}{\theta_\sigma}$ simultaneously,
\[
L_n(\delta) \leq \left( \mathrm{r}_H\frac{26 t}{k}+ \frac{1}{2}\right)\delta
+ \frac{3(1+\sqrt{2})}{2} \sigma \sqrt{\frac{t}{k}}.
\]
\end{lemma}
\noindent The proof of this lemma is given in Section \ref{proof:lemma-sup-bound}.
\begin{lemma}
\label{lemma:EH}
With probability $\geq 1 - 2d e^{-t}$,
\[
\Bigg\| \frac{1}{\theta_\sigma}\frac{1}{n!}\sum_{\pi_n} W_{i_1,\ldots,i_n}(\mathbb E H;\theta_\sigma) \Bigg\| \leq
\frac{3}{\sqrt{2}}\sigma\sqrt{\frac{t}{k}}.
\]
\end{lemma}
\noindent The proof is given in Section \ref{proof:lemma-EH}.
Next, define the sequence
\begin{align*}
\delta_0 & = 0, \\
\delta_j & = \left( \mathrm{r}_H\frac{26 t}{k} + \frac{1}{2}\right)\delta_{j-1} + 5.75 \sigma\sqrt{\frac{t}{k}}.
\end{align*}
If $\mathrm{r}_H\frac{26 t}{k}\leq \frac{1}{4}$, then $t\leq \frac{k}{104}$, hence
$5.75 \sigma\sqrt{\frac{t}{k}} \leq \frac{1}{8} \frac{1}{\theta_\sigma}$ and
\[
\delta_j \leq \frac{3}{4}\delta_{j-1} + \frac{1}{8} \frac{1}{\theta_\sigma} \leq \frac{1}{2} \frac{1}{\theta_\sigma}
\]
for all $j\geq 0$.
Let $\mathcal E_0$ be the event of probability $\geq 1 - (4d+1)e^{-t}$ on which the inequalities of Lemmas \ref{lemma:sup-bound} and \ref{lemma:EH} hold.
It follows from \eqref{eq:b10}, Lemma \ref{lemma:sup-bound} and Lemma \ref{lemma:EH} that on the event $\mathcal E_0$, for all $j\geq 1$
\begin{align*}
\Big\| U_n^{(j)} - \mathbb EH \Big\| &\leq L_n\left( \left\| U_n^{(j-1)} - \mathbb EH \right\|\right)
+\Bigg\| \frac{1}{\theta_\sigma}\frac{1}{n!}\sum_{\pi_n} W_{i_1,\ldots,i_n}(\mathbb E H;\theta_\sigma) \Bigg\| \\
&\leq
\left( \mathrm{r}_H\frac{26 t}{k}+ \frac{1}{2}\right)\delta_{j-1} + \frac{3(1+2\sqrt{2})}{2} \sigma \sqrt{\frac{t}{k}}
\leq \delta_j
\end{align*}
given that $\mathrm{r}_H\frac{26 t}{k} \leq \frac{1}{4}$; we have also used the numerical bound $\frac{3(1+2\sqrt{2})}{2}\leq 5.75$.
\noindent Finally, it is easy to see that for all $j\geq 1$ and
$\gamma = \mathrm{r}_H\frac{26 t}{k}+\frac{1}{2}\leq \frac{3}{4}$,
\begin{align}
\label{eq:b30}
&
\delta_j = \delta_0 \gamma^j + \sum_{l=0}^{j-1} \gamma^l \cdot 5.75\sigma\sqrt{\frac{t}{k}}
\leq \sum_{l\geq 0} (3/4)^{l} \cdot 5.75\sigma\sqrt{\frac{t}{k}} \leq 23\sigma\sqrt{\frac{t}{k}}.
\end{align}
Since $U_n^{(j)}\to \widehat U_n^\star$ as $j\to\infty$, the result follows.
\qed
\subsection{Proof of Lemma \ref{lemma:sup-bound}}
\label{proof:lemma-sup-bound}
Recall that $\sigma^2\geq \left\| \mathbb E(H_{i_1,\ldots,i_m} -\mathbb EH)^2 \right\|$, $\theta_\sigma:=\frac{1}{\sigma}\sqrt{\frac{2 t}{k}}$, and
\[
\psi(\theta_\sigma x)=\begin{cases}
\theta_\sigma x - \mathrm{sign}(x)\frac{\theta_\sigma^2 x^2}{2}, & x\in[-1/\theta_\sigma,1/\theta_\sigma], \\
\mathrm{sign}(x)\cdot \frac{1}{2}, & |x|>1/\theta_\sigma.
\end{cases}
\]
The idea of the proof is to exploit the fact that $\psi(\theta_\sigma x)$ is ``almost linear'' whenever $x\in[-1/\theta_\sigma,1/\theta_\sigma]$, and its nonlinear part is active only for a small number of multi-indices $(i_1,\ldots,i_m)\in I_n^m$.
Let
\[
\chi_{i_1,\ldots,i_m} = I\left\{ \left\| H_{i_1,\ldots,i_m} - \mathbb EH \right\| \leq \frac{1}{2\theta_\sigma} \right\}.
\]
Note that by Chebyshev's inequality, and taking into account the fact that
\[
\| H_{i_1,\ldots,i_m} - \mathbb EH\|\leq \| H_{i_1,\ldots,i_m}-\mathbb EH \|_{\mathrm{F}},
\]
\begin{align}
\label{eq:a30}
\nonumber
\pr{\chi_{i_1,\ldots,i_m} = 0 }&\leq 4\theta_\sigma^2\mathbb E \left\| H_{i_1,\ldots,i_m} - \mathbb EH \right\|^2_\mathrm{F} \\
&
\leq \frac{8t}{k}\frac{\mbox{tr\,} \left( \mathbb E(H_{i_1,\ldots,i_m}-\mathbb EH)^2 \right)}{\left\| \mathbb E(H_{i_1,\ldots,i_m}-\mathbb EH)^2 \right\|}
= \mathrm{r}_H \frac{8t}{k}.
\end{align}
Define the event
\[
\mathcal E=\left\{ \sum_{(i_1,\ldots,i_m)\in I_n^m} \Big( 1 - \chi_{i_1,\ldots,i_m} \Big) \leq \mathrm{r}_H\frac{8t }{k} \frac{n!}{(n-m)!}\cdot \left(1 + \sqrt{\frac{3}{8 \mathrm{r}_H}}\right) \right\}.
\]
We will apply a version of Chernoff bound to the $\mathbb R$-valued U-statistic
$\frac{(n-m)!}{n!}\sum_{(i_1,\ldots,i_m)\in I_n^m} \left( 1 - \chi_{i_1,\ldots,i_m} \right)$.
A combination of Fact \ref{fact:04}, Fact \ref{fact:02} applied in the scalar case $d=1$, and Fact \ref{fact:03} implies that
\[
\Pr\left( \frac{(n-m)!}{n!}\sum_{(i_1,\ldots,i_m)\in I_n^m} \left( 1 - \chi_{i_1,\ldots,i_m} \right) \geq \mathrm{r}_H\frac{8t }{k} \cdot \left(1 + \tau\right) \right)
\leq e^{-\tau^2 8t \, \mathrm{r}_H/3 }
\]
for $0<\tau<1$. Hence, choosing $\tau = \sqrt{ \frac{3}{8 \mathrm{r}_H}}$ implies that $\pr{\mathcal E} \geq1- e^{-t}$.
By the triangle inequality, whenever $\chi_{i_1,\ldots,i_m} =1$ and
$\delta\leq \frac{1}{2}\frac{1}{\theta_\sigma}$, it holds that $\left\| H_{i_1,\ldots,i_m} - U \right\| \leq \frac{1}{\theta_\sigma}$ for any $U$ such that $\| U - \mathbb EH \|\leq \delta$,
and consequently
\[
\frac{1}{\theta_\sigma}\psi(\theta_\sigma (H_{i_1,\ldots,i_m} - U)) = (H_{i_1,\ldots,i_m} - U) -
\frac{\theta_\sigma}{2}\mathrm{sign} \left( H_{i_1,\ldots,i_m} - U \right) \left( H_{i_1,\ldots,i_m} - U \right)^2.
\]
Denoting
\[
S_{i_1,\ldots,i_m}(U) := \mathrm{sign} \left( H_{i_1,\ldots,i_m} - U \right) \left( H_{i_1,\ldots,i_m} - U \right)^2
\]
for brevity, we deduce that
\begin{multline*}
\frac{1}{\theta_\sigma}\frac{1}{n!}\sum_{\pi_n}\Big( W_{i_1,\ldots,i_n}(U;\theta_\sigma)
-W_{i_1,\ldots,i_n}(\expect{H};\theta_\sigma) \Big) - (\mathbb EH - U)
\\
= \frac{(n-m)!}{n!}\sum_{(i_1,\ldots,i_m)\in I_n^m} \left( \frac{\theta_\sigma}{2}S_{i_1,\ldots,i_m}(\mathbb EH) - \frac{\theta_\sigma}{2}S_{i_1,\ldots,i_m}(U) \right)\chi_{i_1,\ldots,i_m} \\
+\frac{1}{\theta_\sigma}\frac{(n-m)!}{n!} \sum_{(i_1,\ldots,i_m)\in I_n^m} \left( 1 - \chi_{i_1,\ldots,i_m} \right)
\Big( Y_{i_1,\ldots,i_m}(U;\theta_\sigma)
-Y_{i_1,\ldots,i_m}(\expect{H};\theta_\sigma) \Big) \\
-\frac{(n-m)!}{n!} \sum_{(i_1,\ldots,i_m)\in I_n^m} \left( 1 - \chi_{i_1,\ldots,i_m} \right) \left( \mathbb EH - U \right).
\end{multline*}
We will separately control the terms on the right hand side of the equality above.
First, note that on event $\mathcal E$,
\begin{align}
\label{eq:a40}
&
\left\| \frac{(n-m)!}{n!} \sum_{(i_1,\ldots,i_m)\in I_n^m} \left( 1 - \chi_{i_1,\ldots,i_m} \right) \left( \mathbb EH - U \right)\right\| \leq
\mathrm{r}_H\frac{8t }{k}\cdot \left(1 + \sqrt{\frac{3}{8 \mathrm{r}_H }}\right)\delta \leq \mathrm{r}_H \frac{13 t}{k}\delta
\end{align}
since $\| \mathbb EH - U\|\leq \delta$.
Next, recalling that $\psi(\cdot)$ is operator Lipschitz (by Fact \ref{fact:06}), we see that for any $(i_1,\ldots,i_m)\in I_n^m$
\[
\frac{1}{\theta_\sigma}\Big\| Y_{i_1,\ldots,i_m}(U;\theta_\sigma) -Y_{i_1,\ldots,i_m}(\mathbb E{H};\theta_\sigma) \Big\| \leq
\left\| \mathbb EH - U\right\| \leq \delta,
\]
hence on event $\mathcal E$,
\begin{multline}
\label{eq:a50}
\frac{1}{\theta_\sigma}\frac{(n-m)!}{n!} \left\| \sum_{(i_1,\ldots,i_m)\in I_n^m} \left( 1 - \chi_{i_1,\ldots,i_m} \right)
\Big( Y_{i_1,\ldots,i_m}(U;\theta_\sigma)
-Y_{i_1,\ldots,i_m}(\mathbb E{H};\theta_\sigma) \Big) \right\| \\
\leq
\mathrm{r}_H \frac{8t }{k}\cdot \left(1 + \sqrt{\frac{3}{8 \mathrm{r}_H}}\right)\delta \leq \mathrm{r}_H\frac{13 t }{k}\delta.
\end{multline}
Finally, it remains to control the term
\begin{align*}
&
\mathcal Q(\delta):=\sup_{\|U-\mathbb{E}H\|\leq\delta} \left\| \frac{(n-m)!}{n!}\sum_{(i_1,\ldots,i_m)\in I_n^m} \left( \frac{\theta_\sigma}{2}S_{i_1,\ldots,i_m}(\mathbb EH) - \frac{\theta_\sigma}{2}S_{i_1,\ldots,i_m}(U) \right)\chi_{i_1,\ldots,i_m} \right\|.
\end{align*}
\begin{lemma}
\label{lemma:supplement}
With probability $\geq 1 - 2de^{-t}$,
\[
\mathcal Q(\delta) \leq \frac{3(1+\sqrt{2})}{2} \sigma \sqrt{\frac{t}{k}} + \frac{\delta}{2}.
\]
\end{lemma}
\begin{proof}
Observe that for all $U\in \mathbb H^d$ and $(i_1,\ldots,i_m)\in I_n^m$,
\begin{align*}
&
- \left( H_{i_1,\ldots,i_m} - U \right)^2\preceq \mathrm{sign} \left( H_{i_1,\ldots,i_m} - U \right) \left( H_{i_1,\ldots,i_m} - U \right)^2
\preceq \left( H_{i_1,\ldots,i_m} - U \right)^2,
\end{align*}
hence
\begin{multline*}
\Bigg\|\frac{(n-m)!}{n!}\sum_{(i_1,\ldots,i_m)\in I_n^m} \left( \frac{\theta_\sigma}{2}S_{i_1,\ldots,i_m}(\mathbb EH) - \frac{\theta_\sigma}{2}S_{i_1,\ldots,i_m}(U) \right)\chi_{i_1,\ldots,i_m} \Bigg\| \\
\leq
\frac{(n-m)!}{n!} \Bigg\| \sum_{(i_1,\ldots,i_m)\in I_n^m} \frac{\theta_\sigma}{2} \left( H_{i_1,\ldots,i_m} - U \right)^2 \chi_{i_1,\ldots,i_m} \Bigg\| \\
+ \frac{(n-m)!}{n!} \Bigg\| \sum_{(i_1,\ldots,i_m)\in I_n^m} \frac{\theta_\sigma}{2} \left( H_{i_1,\ldots,i_m} - \mathbb EH \right)^2 \chi_{i_1,\ldots,i_m} \Bigg\| .
\end{multline*}
Moreover,
\begin{align*}
&
\left( H_{i_1,\ldots,i_m} - U \right)^2 \preceq 2\left( H_{i_1,\ldots,i_m} - \mathbb EH \right)^2 + 2\left( U - \mathbb EH\right)^2,
\end{align*}
implying that
\begin{multline*}
\frac{(n-m)!}{n!} \Bigg\| \sum_{(i_1,\ldots,i_m)\in I_n^m} \frac{\theta_\sigma}{2} \left( H_{i_1,\ldots,i_m} - U \right)^2 \chi_{i_1,\ldots,i_m} \Bigg\| \\
\leq
2\frac{(n-m)!}{n!} \Bigg\| \sum_{(i_1,\ldots,i_m)\in I_n^m} \frac{\theta_\sigma}{2} \left( H_{i_1,\ldots,i_m} - \mathbb EH \right)^2 \chi_{i_1,\ldots,i_m} \Bigg\| + \theta_\sigma\Big\| U - \mathbb EH \Big\|^2.
\end{multline*}
Hence, we have shown that
\begin{align}
\label{eq:a51}
&
\mathcal Q(\delta) \leq 3\frac{(n-m)!}{n!} \Bigg\| \sum_{(i_1,\ldots,i_m)\in I_n^m} \frac{\theta_\sigma}{2} \left( H_{i_1,\ldots,i_m} - \mathbb EH \right)^2 \chi_{i_1,\ldots,i_m} \Bigg\| + \theta_\sigma \delta^2.
\end{align}
Since $\delta\leq \frac{1}{2\theta_\sigma}$,
\begin{align}
\label{eq:a52}
&
\theta_\sigma \delta^2 \leq \frac{\delta}{2}.
\end{align}
Next, we will estimate the first term in \eqref{eq:a51} as follows:
\begin{multline*}
3\frac{(n-m)!}{n!} \Bigg\| \sum_{(i_1,\ldots,i_m)\in I_n^m} \frac{\theta_\sigma}{2} \left( H_{i_1,\ldots,i_m} - \mathbb EH \right)^2 \chi_{i_1,\ldots,i_m} \Bigg\| \\
\leq
3\frac{(n-m)!}{n!} \Bigg\| \sum_{(i_1,\ldots,i_m)\in I_n^m}
\frac{\theta_\sigma}{2} \bigg[ \left( H_{i_1,\ldots,i_m} - \mathbb EH \right)^2 \chi_{i_1,\ldots,i_m} -
\mathbb E \left[ \left( H_{i_1,\ldots,i_m} - \mathbb E H \right)^2
\chi_{i_1,\ldots,i_m} \right] \bigg] \Bigg\| \\
+\frac{3\theta_\sigma}{2} \Big\| \mathbb E\left[ \left( H_{i_1,\ldots,i_m} - \mathbb EH \right)^2 \chi_{i_1,\ldots,i_m}\right] \Big\|.
\end{multline*}
Clearly, $\Big\| \mathbb E \left[ \left( H_{i_1,\ldots,i_m} - \mathbb EH \right)^2\chi_{i_1,\ldots,i_m} \right] \Big\|\leq \sigma^2$, hence
\begin{align}
\label{eq:a53}
\frac{3\theta_\sigma}{2} \Big\| \mathbb E \left[ \left( H_{i_1,\ldots,i_m} - \mathbb EH \right)^2\chi_{i_1,\ldots,i_m} \right] \Big\| \leq \frac{3\sigma}{2}\sqrt{\frac{2t}{k}}.
\end{align}
The remaining part will be estimated using the Matrix Bernstein inequality (Fact \ref{fact:bernstein}).
\noindent To this end, note that by the definition of $\chi_{i_1,\ldots,i_m}$,
\[
\Big\| \left( H_{i_1,\ldots,i_m} - \mathbb EH \right)^2 \chi_{i_1,\ldots,i_m} - \mathbb E \left[ \left( H_{i_1,\ldots,i_m} - \mathbb EH \right)^2 \chi_{i_1,\ldots,i_m} \right]\Big\|
\leq
\left( \frac{1}{2\theta_\sigma}\right)^2
\]
almost surely.
Moreover,
\begin{multline*}
\Big\| \mathbb E\left( \left( H_{i_1,\ldots,i_m} - \mathbb EH \right)^2 \chi_{i_1,\ldots,i_m} - \mathbb E \left[ \left( H_{i_1,\ldots,i_m} - \mathbb EH \right)^2 \chi_{i_1,\ldots,i_m} \right] \right)^2\Big\| \\
\leq
\Big\| \mathbb E\left( \left( H_{i_1,\ldots,i_m} - \mathbb EH \right)^2 \chi_{i_1,\ldots,i_m} \right)^2\Big\|
\leq \left( \frac{1}{2\theta_\sigma}\right)^2 \, \big\| \mathbb E\left( H_{i_1,\ldots,i_m} - \mathbb EH \right)^2\big\|,
\end{multline*}
where we used the fact that
\begin{align*}
&
\left( \left( H_{i_1,\ldots,i_m} - \mathbb EH \right)^2 \chi_{i_1,\ldots,i_m} \right)^2 \preceq
\left( \frac{1}{2\theta_\sigma}\right)^2 \left( H_{i_1,\ldots,i_m} - \mathbb EH \right)^2.
\end{align*}
Applying the Matrix Bernstein inequality (Fact \ref{fact:bernstein}), we get that with probability $\geq 1 - 2d e^{-t}$
\begin{multline}
\label{eq:a60}
3\frac{(n-m)!}{n!} \Bigg\| \sum_{(i_1,\ldots,i_m)\in I_n^m}
\frac{\theta_\sigma}{2} \bigg[ \left( H_{i_1,\ldots,i_m} - \mathbb EH \right)^2 \chi_{i_1,\ldots,i_m} - \mathbb E \left[ \left( H_{i_1,\ldots,i_m} - \mathbb EH \right)^2
\chi_{i_1,\ldots,i_m} \right] \bigg] \Bigg\| \\
\leq
\frac{3\theta_\sigma}{2}\left[ \frac{2}{2\theta_\sigma} \left\| \mathbb E(H_{i_1,\ldots,i_m} - \mathbb EH)^2 \right\|^{1/2} \sqrt{\frac{t}{k}} \bigvee \frac{4}{3}\frac{t}{k}\frac{1}{(2\theta_\sigma)^2}\right]
\leq \frac{3}{2}\sigma \sqrt{\frac{t}{k}}.
\end{multline}
The bound of Lemma \ref{lemma:supplement} now follows from the combination of bounds \eqref{eq:a52}, \eqref{eq:a53}, \eqref{eq:a60} and \eqref{eq:a51}.
\qed
\noindent Combining the bound of Lemma \ref{lemma:supplement} with \eqref{eq:a40} and \eqref{eq:a50},
we get the desired result of Lemma \ref{lemma:sup-bound}.
\end{proof}
\subsection{Proof of Lemma \ref{lemma:EH}}
\label{proof:lemma-EH}
Fact \ref{fact:02} implies that for all $s>0$,
\begin{align}
\label{eq:e10}
\Pr\left( \lambda_{\mbox{\footnotesize{max}\,}}\left( \frac{1}{\theta_\sigma}\frac{1}{n!}\sum_{\pi_n} W_{i_1,\ldots,i_n}(\mathbb E H;\theta_\sigma) \right)\geq
s \right)
& \nonumber
\leq
\inf_{\theta>0} \left[ e^{-\theta s} \mathbb E\mbox{tr\,} e^{(\theta/\theta_\sigma)\,W_{1,\ldots,n}(\mathbb EH,\theta_\sigma)}\right] \\
& \leq e^{-\theta_\sigma s \, k} \,\mathbb E\mbox{tr\,} e^{k\,W_{1,\ldots,n}(\mathbb EH,\theta_\sigma)}.
\end{align}
Since
\[
W_{1,\ldots,n}(\mathbb EH,\theta_\sigma) =
\frac{1}{k}\left( \psi\left( \theta_\sigma( H_{1,\ldots,m} -\mathbb EH)\right) + \ldots + \psi\left( \theta_\sigma( H_{(k-1)m+1,\ldots,km} -\mathbb EH)\right) \right)
\]
is a sum of $k$ independent random matrices, we can apply the first inequality of Fact \ref{fact:legacy} to deduce that
\[
\mathbb E\mbox{tr\,} e^{k\,W_{1,\ldots,n}(\mathbb EH,\theta_\sigma)} \leq
\mbox{tr\,} \exp\left( k\theta_\sigma^2 \mathbb E(H - \mathbb EH)^2 \right) \leq d \exp\left( k\theta_\sigma^2 \sigma^2\right),
\]
where we used the fact that $\mbox{tr\,}(A)\leq d \|A\|$ for $\mathbb H^{d\times d} \ni A\succeq 0$.
Finally, setting $s=\frac{3}{\sqrt{2}}\sigma\sqrt{\frac{t}{k}}$, we obtain from \eqref{eq:e10} that
\[
\Pr\left( \lambda_{\mbox{\footnotesize{max}\,}}\left( \frac{1}{\theta_\sigma}\frac{1}{n!}\sum_{\pi_n} W_{i_1,\ldots,i_n}(\mathbb E H;\theta_\sigma) \right)\geq
s \right)
\leq d e^{-t}.
\]
Similarly, since $-\lambda_{\mbox{\footnotesize{min}\,}}(A)=\lambda_{\mbox{\footnotesize{max}\,}}(-A)$ for $A\in \mathbb H^{d\times d}$, it follows from the second inequality of Fact \ref{fact:legacy} that
\begin{align*}
\Pr\Bigg( \lambda_{\mbox{\footnotesize{min}\,}} & \left(\frac{1}{\theta_\sigma}\frac{1}{n!}\sum_{\pi_n} W_{i_1,\ldots,i_n}(\mathbb E H;\theta_\sigma) \right) \leq -s\Bigg) \\
& = \Pr\left( \lambda_{\mbox{\footnotesize{max}\,}}\left( -\frac{1}{\theta_\sigma}\frac{1}{n!}\sum_{\pi_n} W_{i_1,\ldots,i_n}(\mathbb E H;\theta_\sigma) \right) \geq s\right) \\
&
\leq e^{-\theta_\sigma s\,k}\,\mathbb E \mbox{tr\,} \exp\left(k W_{1,\ldots,n}(\mathbb EH,\theta_\sigma) \right)\\
&
\leq d e^{-\theta_\sigma s\,k}\, \exp\left( k\theta_\sigma^2 \sigma^2\right)
\leq d e^{-t}
\end{align*}
for $s=\frac{3}{\sqrt{2}}\sigma\sqrt{\frac{t}{k}}$, and the result follows.
\subsection{Proof of Lemma \ref{lemma:grad-descent}}
\label{proof:grad-descent}
Part (a) follows from a well-known result (see, e.g., \cite{bertsekas2009convex}) which states that, given a convex, differentiable function $G: \mathbb R^{D}\to \mathbb R$ whose gradient satisfies
$\Big\| \nabla G(U_1) - \nabla G(U_2) \Big\|_2 \leq L \| U_1 - U_2 \|_2$, the $j$-th iterate $U^{(j)}$ of the gradient descent algorithm run
with step size $\alpha\leq \frac{1}{L}$ satisfies
\[
G\left( U^{(j)} \right) - G(U_\ast) \leq \frac{\left\| U^{(0)} - U_\ast\right\|_2^2}{2\alpha j},
\]
where $U_\ast = \argmin \, G(U)$.
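The rate in part (a) can be sanity-checked numerically. The following sketch (with illustrative names, not code from the paper) runs plain gradient descent on a one-dimensional quadratic with an admissible step size $\alpha\leq 1/L$ and verifies the $O(1/j)$ bound at every iterate.

```python
# Sketch (not the paper's code): gradient descent on the smooth convex
# quadratic G(u) = (L/2) * u^2, whose gradient u -> L * u is L-Lipschitz.
# The classical bound states G(u_j) - G(u*) <= ||u_0 - u*||^2 / (2 * alpha * j)
# for any step size alpha <= 1/L; all names below are illustrative.

def grad_descent(grad, u0, alpha, steps):
    """Plain gradient descent; returns the list of iterates u_0, ..., u_steps."""
    u, iterates = u0, [u0]
    for _ in range(steps):
        u = u - alpha * grad(u)
        iterates.append(u)
    return iterates

L = 4.0
G = lambda u: 0.5 * L * u * u
grad = lambda u: L * u
alpha = 1.0 / (2.0 * L)          # any alpha <= 1/L is admissible
u0, u_star = 3.0, 0.0            # the minimizer of G is u* = 0

iterates = grad_descent(grad, u0, alpha, steps=50)
for j in range(1, len(iterates)):
    gap = G(iterates[j]) - G(u_star)
    bound = (u0 - u_star) ** 2 / (2.0 * alpha * j)
    assert gap <= bound + 1e-12  # the O(1/j) guarantee holds at every step
```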
\\
The proof of part (b) follows the lines of the proof of Theorem \ref{thm:new-performance}: more specifically, the claim follows from equation \eqref{eq:b30}.
\qed
\subsection{Proof of Corollary \ref{cor:rectangular}}
\label{proof:rectangular}
\begin{proof}
Note that
\[
\big\| \mathbb E\,\mathcal D(H_{i_1\ldots i_m})^2 \big\| = \max\left( \big\| \mathbb E H_{i_1\ldots i_m} \,H_{i_1\ldots i_m}^\ast \big\|, \big\| \mathbb E H_{i_1\ldots i_m}^\ast \, H_{i_1\ldots i_m} \big\| \right).
\]
We apply Theorem \ref{thm:new-performance} to the self-adjoint random matrices
\[
\mathcal D(H_{i_1\ldots i_m})\in \mathbb C^{(d_1+d_2)\times (d_1+d_2)}, \ (i_1,\ldots,i_m)\in I_n^m,
\]
and obtain that
\[
\big\| \bar U_n^\star - \mathcal D(\mathbb E H) \big\| \leq 15\sigma \sqrt{\frac{t}{k}}
\]
with probability $\geq 1 - \left( 2(d_1+d_2)+1 \right)e^{-t}$.
It remains to apply Fact \ref{fact:dilation}:
\begin{align*}
\left\| \bar U_n^\star - \mathcal D(\mathbb E H)\right\| & =
\left\| \begin{pmatrix}
\hat U_{11}^\star & \hat U_{12}^\star - \mathbb E H \\
(\hat U^\star_{12})^\ast - \mathbb E H^\ast & \hat U_{22}^\star
\end{pmatrix} \right\| \\
&\geq
\left\| \begin{pmatrix}
0 & \hat U_{12}^\star - \mathbb E H \\
(\hat U^\star_{12})^\ast - \mathbb E H^\ast & 0
\end{pmatrix} \right\| =
\left\| \hat U_{12}^\star - \mathbb E H \right\|,
\end{align*}
and the claim follows.
\subsection{Proof of Lemma \ref{lemma:variance}}
\label{proof:variance}
Recall that $\mu=\mathbb EY$.
\noindent (a) Observe that
\begin{align*}
\big\| \mathbb E\left( (Y-\mu)(Y-\mu)^T \right)^2 \big\| &=
\sup_{\|v\|_2=1} \mathbb E \left\langle v, Y-\mu \right\rangle^2 \left\| Y - \mu\right\|_2^2 \\
&
=\sup_{\|v\|_2=1}\left[ \sum_{j=1}^d \mathbb E\left[ \langle v, Y-\mu\rangle^2 (Y^{(j)} - \mu^{(j)})^2 \right] \right].
\end{align*}
Next, for $j=1,\ldots,d$,
\begin{align*}
\mathbb E\langle v, Y-\mu\rangle^2 (Y^{(j)} - \mu^{(j)})^2 & \leq
\mathbb E^{1/2} \langle v, Y-\mu\rangle^4 \, \mathbb E^{1/2}(Y^{(j)} - \mu^{(j)})^4 \\
&
\leq K \mathbb E \langle v, Y-\mu\rangle^2 \, \mathbb E (Y^{(j)} - \mu^{(j)})^2,
\end{align*}
hence
\begin{align*}
\big\| \mathbb E\left( (Y-\mu)(Y-\mu)^T \right)^2 \big\| \leq
K\sup_{\|v\|_2=1}\mathbb E \langle v, Y-\mu\rangle^2 \sum_{j=1}^d \mathbb E (Y^{(j)} - \mu^{(j)})^2,
\end{align*}
and the result follows.
\noindent (b)
Note that
\begin{align*}
\mbox{tr\,}\left[ \mathbb E\left( (Y-\mu)(Y-\mu)^T \right)^2 \right] & =
\sum_{j=1}^d \mathbb E (Y^{(j)} - \mu^{(j)})^2 \left\| Y - \mu\right\|_2^2 \\
&
=\sum_{j=1}^d \mathbb E (Y^{(j)} - \mu^{(j)})^4 + \sum_{i\ne j} \mathbb E\left[ (Y^{(i)} - \mu^{(i)})^2 (Y^{(j)} - \mu^{(j)})^2 \right] \\
&
\leq \sum_{j=1}^d \mathbb E (Y^{(j)} - \mu^{(j)})^4 + \sum_{i\ne j} \mathbb E^{1/2}(Y^{(i)} - \mu^{(i)})^4 \, \mathbb E^{1/2} (Y^{(j)} - \mu^{(j)})^4 \\
&
= \left( \sum_{j=1}^d \mathbb E^{1/2} (Y^{(j)} - \mu^{(j)})^4 \right)^2
\leq K'\left( \sum_{j=1}^d \mathbb E (Y^{(j)} - \mu^{(j)})^2\right)^2 \\
&
= K' \left( \mbox{tr\,}(\Sigma) \right)^2.
\end{align*}
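As a quick numerical sanity check of part (b) (a sketch for a toy distribution with Rademacher-type coordinates, for which the per-coordinate kurtosis bound holds with $K'=1$; the distribution is ours and does not appear in the paper):

```python
# Sketch: verifying part (b) by direct enumeration for a toy two-dimensional
# distribution with Rademacher marginals, where the fourth moment of each
# centered coordinate equals the squared second moment, so K' = 1 suffices.
# The distribution and all names are illustrative only.

outcomes = {(1, 1): 0.4, (-1, -1): 0.4, (1, -1): 0.1, (-1, 1): 0.1}
d = 2

def E(f):
    """Expectation of f(Y) under the toy distribution."""
    return sum(p * f(y) for y, p in outcomes.items())

mu = [E(lambda y: y[j]) for j in range(d)]                 # = (0, 0)

# Sigma = E[(Y - mu)(Y - mu)^T], and its trace
Sigma = [[E(lambda y: (y[i] - mu[i]) * (y[j] - mu[j])) for j in range(d)]
         for i in range(d)]
tr_Sigma = sum(Sigma[j][j] for j in range(d))

# tr E[((Y - mu)(Y - mu)^T)^2] = E[ ( sum_j (Y_j - mu_j)^2 )^2 ]
lhs = E(lambda y: sum((y[j] - mu[j]) ** 2 for j in range(d)) ** 2)
K_prime = 1.0
assert lhs <= K_prime * tr_Sigma ** 2 + 1e-9               # part (b) holds
```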
\noindent (c) The inequality follows from Corollary 5.1 in \cite{wei2017estimation}.
\qed
\subsection{Proof of Corollary \ref{cor:frob}}
\label{proof:frob}
It is easy to see (e.g., from the proof of Theorem 1 in \cite{lounici2014high}) that $\widetilde \Sigma_\star^\tau$ can be equivalently represented as
\begin{align}
&
\widetilde \Sigma_\star^\tau=\argmin_{S\in \mathbb R^{d\times d}, S=S^T}\left[ \left\| S - \widetilde \Sigma_\star \right\|^2_{\mathrm{F}} +\tau \left\| S \right\|_1\right].
\end{align}
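Since the Frobenius term and the $\ell_1$ penalty in this representation decouple over the entries of $S$, the minimizer is obtained by entrywise soft-thresholding at level $\tau/2$, matching the soft-thresholding estimator of \cite{lounici2014high}. The scalar version of this fact can be checked directly; the helper names below are ours:

```python
# Sketch: for a scalar entry a, the minimizer of (s - a)^2 + tau * |s| is the
# soft-thresholding map below (threshold tau / 2). We verify it against a
# brute-force grid search; helper names are illustrative, not the paper's.

def soft_threshold(a, tau):
    """Entrywise minimizer of (s - a)^2 + tau * |s|."""
    mag = max(abs(a) - tau / 2.0, 0.0)
    return mag if a >= 0 else -mag

def objective(s, a, tau):
    return (s - a) ** 2 + tau * abs(s)

tau = 0.8
for a in [-2.0, -0.3, 0.0, 0.25, 1.7]:
    s_star = soft_threshold(a, tau)
    grid = [i / 1000.0 for i in range(-3000, 3001)]
    s_grid = min(grid, key=lambda s: objective(s, a, tau))
    assert abs(s_star - s_grid) <= 1e-3   # closed form matches grid search
```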
The remaining proof is based on the following lemma:
\begin{lemma}
Inequality (\ref{eq:ex70}) holds on the event $\mathcal E=\left\{ \tau\geq 2\left\| \widetilde \Sigma_\star^\tau - \Sigma \right\| \right\}$.
\end{lemma}
\noindent To verify this statement, it is enough to repeat the steps of the proof of Theorem 1 in \cite{lounici2014high}, replacing each occurrence of the sample covariance $\hat S_{2n}$ by its robust counterpart $\widetilde \Sigma_\star^\tau$. \\
The result of Corollary \ref{cor:frob} then follows from the combination of Theorem \ref{th:lepski} and Lemma \ref{lemma:variance}, which imply that
\[
\Pr(\mathcal E)\geq 1-(4d+1)e^{-t}
\]
whenever $\tau \geq \gamma\cdot 138\sqrt{K}\, \|\Sigma\| \,\sqrt{\frac{\mathrm{r}(\Sigma)(t+\Xi)}{\lfloor n/2\rfloor}}$.
\end{proof}
\begin{comment}
\section{Introduction}
Given a graph $G$, the \emph{crossing number} problem asks for the minimum
number of edge crossings in any drawing of $G$, denoted by $\mathit{cr}(G)$.
This problem is NP-complete~\cite{garey1983crossing}, even when $G$ is restricted to
cubic graphs~\cite{DBLP:journals/jct/Hlineny06a} or graphs that become
planar after removing a single edge~\cite{DBLP:journals/algorithmica/CabelloM11}.
While the currently known integer linear programming approaches to the
problem~\cite{DBLP:conf/esa/ChimaniW16, DBLP:journals/disopt/BuchheimCEGJKMW08,DBLP:conf/esa/ChimaniMB08}
solve sparse instances within a reasonable time
frame~\cite{DBLP:journals/jea/ChimaniGM09}, dense instances require the use of
heuristics.
One such heuristic is the well-known \emph{planarization
method}~\cite{DBLP:journals/jss/BatiniTT84,DBLP:conf/gd/GutwengerM03}, which
constructs a \emph{planarization}, i.e., a planar representation of $G$
with crossings replaced by dummy vertices of degree $4$.
The heuristic first computes a spanning planar subgraph of $G$ and then
iteratively inserts the remaining edges. %
Several variants of the planarization method have been thoroughly evaluated,
including different edge insertion algorithms and postprocessing
strategies; see~\cite{DBLP:journals/jgaa/ChimaniG12} for the latest study.
In a recent paper~\cite{DBLP:journals/jgaa/ClancyHN19}, Clancy et al.\ present
an alternative heuristic---the \emph{star reinsertion method}---which differs
in two key aspects from the planarization method: it (i) starts with a full
planarization (instead of a planar subgraph) that is iteratively improved by
reinserting elements, and (ii) reinserts stars (vertices with
their incident edges) rather than individual edges.
These star insertions are performed using a straightforward but previously untested
algorithm from the literature~\cite{DBLP:conf/soda/ChimaniGMW09}.
Clancy et al.\ were faced with the problem that the implementations of the
aforementioned heuristics were written in different languages, leading to
incomparable running times.
In their evaluation, they thus focus on variants of the star reinsertion method;
their comparison with the planarization method only gives averages over (a quite
limited number of) full instance sets and relies on old data from previous
experiments.
Herein, we present a comprehensive experimental evaluation of a wide
array of crossing minimization heuristics based on edge and star insertion,
encompassing all known strong candidates.
This includes not only variants of the planarization and star reinsertion
methods but also \emph{combined} approaches.
In addition, we present and evaluate a new heuristic that builds up a
planarization from a planar subgraph using \emph{both} star and edge insertions.
All of these algorithms are implemented as part of the same framework, enabling
us to accurately compare their running times.
Furthermore, we suggest ways of simplifying the implementation of the
heuristics, increasing their speed in practice, and improving their
results---e.g., by properly handling crossings between adjacent edges and
multiple crossings between the same two edges.
\section{Preliminaries}\label{sec:preliminaries}
In the following, we consider a connected undirected graph $G$ (that is usually
simple, i.e., does not contain parallel edges or self-loops) with $n$ vertices
and $m$ edges, denoted by $V(G)$ and $E(G)$ respectively.
Let $\Delta$ be the maximum degree of any vertex in $V(G)$ and $N(v) \coloneqq
\{w \mid \edge{v}{w} \in E(G)\}$ the neighborhood of a vertex~$v$.
Then, $v$ together with a subset of its incident edges $F \subseteq \{\edge{v}{w}
\in E(G)\}$ is called a \emph{star}, denoted by~$(v,F)$.
Furthermore, a (combinatorial) \emph{embedding} of a planar graph $G$
corresponds to a cyclic ordering of the edges around each vertex in $V(G)$ such
that the resulting drawing can be realized without any edge crossings.
This induces a set of cycles that bound the \emph{faces} of the embedding.
Based on a combinatorial embedding of the \emph{primal graph}~$G$, we can define
the \emph{dual graph}~$\dual{G}$, whose vertices correspond to the faces of~$G$,
and vice versa.
For each primal edge $e \in E(G)$, there exists a dual edge $\dual{e} \in
E(\dual{G})$ between the dual vertices corresponding to the $e$-incident primal
faces.
Note that $\dual{G}$ may be a multi-graph with self-loops even if $G$ is~simple.
For the purpose of this paper, it is of particular concern how to insert an edge
$\edge{v_1}{v_2}$ into a planarization.
First, it is necessary to find a corresponding \emph{insertion path}, i.e.,
a sequence of faces $f_1,\dots,f_k$ such that $v_1$ is incident to $f_1$, $v_2$
incident to $f_k$, and $f_i$ adjacent to $f_{i+1}$ for $i \in \{1,\dots,k-1\}$.
An edge between $v_1$ and $v_2$ can then be inserted into a planarization by
subdividing a common edge for each face pair $(f_i,f_{i+1})$ and routing the
new edge as a sequence of edges from $v_1$ along the subdivision vertices to
$v_2$.
By extension, the \emph{insertion spider} of a star $(v, F)$ is a set of
insertion paths, one for each edge in $F$. These insertion paths necessarily
share a common face into which $v$ can be inserted.
\section{Algorithms}
\label{sec:algorithms}
\subsection{Solving Insertion Problems}
\label{subsec:insertion_problems}
Insertion problems, and their efficient solutions, form the cornerstone of all
known strong crossing minimization heuristics.
\begin{definition}[EIF, SIF]
Given a planar graph~$G$, an embedding~$\Pi$ of $G$, and an edge (or star)
not yet in~$G$, insert this edge (star) into $\Pi$ such that the number of
crossings in $\Pi$ is minimized.
We refer to these problems as the \emph{edge (star) insertion
problem with fixed embedding~EIF~(SIF,~resp.)}.
\end{definition}
Given a primal vertex $v$, let $\contr{v}$ be the vertex that is created by
contracting the dual vertices that correspond to $v$-incident faces.
Then, the EIF for any given edge~$\edge{v_1}{v_2}$ can be solved optimally in
$\mathcal{O}(n)$ time by computing the shortest path from $\contr{v_1}$ to
$\contr{v_2}$ in the dual graph~$\dual{G}$ via breadth-first
search~(BFS)~\cite{DBLP:journals/jss/BatiniTT84}.
By extension, the SIF for a star $(v,F)$ can be solved in $\mathcal{O}(|F| \cdot n)$
time as follows~\cite{DBLP:conf/soda/ChimaniGMW09}:
For each edge $(v,w) \in F$, solve the single-source shortest path problem in
$\dual{G}$ with $\contr{w}$ as the source (via BFS).
For each face $f$, the sum of the resulting distance values at $f$
then equals the number of crossings that would be created if $v$ were
inserted into $f$.
Hence, the face with the minimum distance sum is the optimal face to
insert $v$ into, and the computed shortest paths to this face collectively
form the insertion spider.
To avoid crossings between these shortest paths (due to them not being
necessarily unique), we can construct the insertion spider using a final
BFS starting at the optimal face.
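The distance-summing step above can be sketched on an abstract dual graph; the toy graph and all helper names are illustrative only and not taken from the OGDF implementation.

```python
from collections import deque

# Sketch of the SIF counting step: for every star edge, run a BFS on the dual
# graph from the edge's source faces (modeling the contracted vertex), then
# pick the face minimizing the sum of distances over all star edges.

def bfs_distances(adj, sources):
    """Multi-source BFS distances on an unweighted graph given as a dict."""
    dist = {f: None for f in adj}
    queue = deque()
    for s in sources:
        dist[s] = 0
        queue.append(s)
    while queue:
        f = queue.popleft()
        for g in adj[f]:
            if dist[g] is None:
                dist[g] = dist[f] + 1
                queue.append(g)
    return dist

def best_insertion_face(adj, source_faces_per_edge):
    """Return the face with minimal total distance, and all totals."""
    totals = {f: 0 for f in adj}
    for sources in source_faces_per_edge:
        dist = bfs_distances(adj, sources)
        for f in adj:
            totals[f] += dist[f]
    return min(totals, key=totals.get), totals

# Toy star-shaped dual graph: face 0 adjacent to faces 1, 2, 3.
adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
# A star with three edges, whose endpoints sit at the three outer faces.
face, totals = best_insertion_face(adj, [[1], [2], [3]])
assert face == 0 and totals[0] == 3   # inserting into face 0 costs 3 crossings
```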
\begin{definition}[EIV, MEIV, SIV]
Given a planar graph~$G$ and an edge (a set of $k$~edges, or a star) not yet
in $G$, find an embedding~$\Pi$ among all possible embeddings of~$G$ such
that optimally inserting the edge (set of $k$~edges, star) into this~$\Pi$
results in the minimum number of crossings.
We refer to these problems as the \emph{edge (multiple edge, star) insertion
problem with variable embedding~EIV~(MEIV,~SIV,~resp.)}.
\end{definition}
The EIV can be solved in $\mathcal{O}(n)$~time using an algorithm by Gutwenger et
al.~\cite{DBLP:journals/algorithmica/GutwengerMW05}, which finds a suitable
embedding (with the help of SPR-trees) and then executes the EIF-algorithm
described above.
Now consider the MEIV: Solving it for general $k$ is
NP-hard~\cite{DBLP:phd/dnb/Ziegler01}; however, there exists an $\mathcal{O}(kn +
k^2)$-time approximation algorithm with an additive guarantee of $\Delta k \log
k + \binom{k}{2}$~\cite{DBLP:journals/jco/ChimaniH17} that performs well in
practice~\cite{DBLP:journals/jgaa/ChimaniG12}.
Put briefly, the EIV-algorithm is run for each of the $k$~edges independently,
and a single final embedding is identified by combining the individual
(potentially conflicting) solutions via voting.
Then, the EIF-algorithm can be executed once for each edge.
Note that the SIV can be solved optimally in polynomial time by using
dynamic programming techniques~\cite{DBLP:conf/soda/ChimaniGMW09}.
However, for graphs that are not series-parallel, the resulting running times
are exorbitant and there is no known implementation of this algorithm.
In fact, our results herein suggest that in the context of crossing minimization
heuristics, the solution power of the SIV-algorithm is fortunately not
necessary in practice.
Each problem discussed above has a \emph{weighted} version which can be solved
in the same manner if each $c_e$-weighted edge $e$ is replaced by $c_e$ parallel
$1$-weighted edges beforehand.
In practice, it is worthwhile to compute the shortest paths during the
EIF- and SIF-algorithms on the weighted instance directly. However, this does not
allow for the same theoretical upper bounds on the running times,
since the weights may be arbitrarily large.
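The equivalence of the two options can be illustrated abstractly (a sketch under the simplifying assumption that, for shortest-path purposes, a weight-$c$ edge behaves like a chain of $c$ unit edges; graph data and names are ours): Dijkstra on the weighted instance returns the same distances as BFS on the expanded unit-weight instance.

```python
import heapq
from collections import deque

# Sketch: shortest paths with integer edge weights via Dijkstra agree with
# BFS on the graph in which each weight-c edge is expanded into a chain of c
# unit edges. The toy weighted graph below is illustrative only.

def dijkstra(adj, src):
    dist = {v: float("inf") for v in adj}
    dist[src] = 0
    heap = [(0, src)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist[v]:
            continue
        for w, c in adj[v]:
            if d + c < dist[w]:
                dist[w] = d + c
                heapq.heappush(heap, (d + c, w))
    return dist

def expand_and_bfs(adj, src):
    """Subdivide each weight-c edge into c unit edges, then run BFS."""
    unit, fresh, seen = {v: [] for v in adj}, max(adj) + 1, set()
    for v in adj:
        for w, c in adj[v]:
            if (w, v) in seen:            # expand each undirected edge once
                continue
            seen.add((v, w))
            chain = [v] + list(range(fresh, fresh + c - 1)) + [w]
            fresh += c - 1
            for a, b in zip(chain, chain[1:]):
                unit.setdefault(a, []).append(b)
                unit.setdefault(b, []).append(a)
    dist = {v: None for v in unit}
    dist[src] = 0
    queue = deque([src])
    while queue:
        v = queue.popleft()
        for w in unit[v]:
            if dist[w] is None:
                dist[w] = dist[v] + 1
                queue.append(w)
    return {v: dist[v] for v in adj}      # report only original vertices

adj = {0: [(1, 3), (2, 1)], 1: [(0, 3), (2, 1)], 2: [(0, 1), (1, 1)]}
assert dijkstra(adj, 0) == expand_and_bfs(adj, 0) == {0: 0, 1: 2, 2: 1}
```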
\subsection{Crossing Minimization Heuristics}
\label{subsec:heuristics}
We start with reviewing several crossing minimization heuristics that
iteratively build up a planarization, starting with a planar subgraph:
\myparagraph{The planarization method (plm)} is the longest studied and best-known
approach considered, achieving strong results in previous
evaluations~\cite{DBLP:journals/jss/BatiniTT84,DBLP:journals/jgaa/ChimaniG12,DBLP:conf/gd/GutwengerM03}.
First, we compute a spanning planar subgraph $G' = (V,E') \subseteq G$, usually
by employing a maximum planar subgraph heuristic and extending the result such
that it becomes (inclusion-wise) maximal.
Then, the remaining edges $F \coloneqq E \setminus E'$ are either inserted one
after another---by solving the respective EIF (\emph{fix}) or EIV
(\emph{var})---or simultaneously using the MEIV-approximation algorithm
(\emph{multi}).
Gutwenger and Mutzel~\cite{DBLP:conf/gd/GutwengerM03} describe a postprocessing
strategy for \emph{plm} based on edge insertion: Each edge is deleted from the
planarization and reinserted one after another~(\emph{all}).
To incrementally improve the planarization, \emph{all} can also be executed once
after each individual edge
insertion~(\emph{inc})~\cite{DBLP:journals/jgaa/ChimaniG12}.
In the following, we represent the use of these postprocessing strategies by
appending the respective shorthand to the algorithm's abbreviation, e.g.\
\emph{fix-all}.
When neither \emph{all} nor \emph{inc} is employed, we use the specifier
\emph{none} instead.
\myparagraph{The chordless cycle method (ccm)} realizes the idea of extending a
\emph{vertex-induced} planar subgraph to a full planarization via star
insertion~\cite{DBLP:conf/soda/ChimaniGMW09}. It corresponds to the
best-performing scheme for the star insertion algorithm as examined by Clancy et
al.~\cite{DBLP:journals/jgaa/ClancyHN19}:
Search for a chordless cycle in $G$, e.g., via breadth-first search. Let $G'$
denote the subgraph of $G$ that is already embedded and initialize it with this
chordless cycle.
Iteratively (until the whole graph is embedded) select a vertex $v \not\in
V(G')$ such that there exists at least one edge $\edge{v}{w}$ that connects $v$
with the already embedded subgraph $G'$; insert $v$ into $G'$ by solving the SIF
for the star~$(v,~\{\edge{v}{w} \in E \mid w \in V(G')\})$.
\myparagraph{The mixed insertion method (mim)} is a novel approach that we propose
as an alternative to the planarization schemes above. It proceeds in a fashion
that is similar to \emph{plm} but relies on star insertion instead of edge
insertion in as many cases as possible.
Accordingly, let $G'$ denote the subgraph of $G$ that is already embedded and
initialize it with a spanning planar subgraph $(V,E') \subseteq G$.
Then, (attempt to) insert the remaining edges $F \coloneqq E \setminus E'$ by
reinserting at least one endpoint of each edge $e \in F$ via star insertion.
Since removing and then reinserting a \emph{cut vertex} of the planar subgraph
$G'$ would temporarily disconnect it, the cut vertices of the planar subgraph
are computed (cf.\ \cite{DBLP:journals/cacm/HopcroftT73}) and each edge $e \in
F$ is processed as follows:
If both endpoints of $e$ are cut vertices of $G'$, insert the edge via edge
insertion (we choose to do so in a variable embedding setting as such edge
insertions happen rarely).
If only one endpoint of the edge is a cut vertex, reinsert the other one.
If neither endpoint of the edge is a cut vertex, the endpoint to be reinserted
can be chosen freely---globally, this corresponds to finding a vertex
cover on the graph induced by $F$ that has to include all vertices
neighboring a cut vertex in $G'$. Finding an optimal vertex cover is NP-hard
\cite{DBLP:conf/coco/Karp72}; therefore we compare several heuristics:
For each edge~$e$, choose one of the endpoints
randomly~(\emph{random}), choose the one with the higher or lower degree
in~$G$~(\emph{high}$_G$, \emph{low}$_G$), choose the one with the higher or
lower degree in the graph induced by all edges in $F$ not incident to a cut
vertex in~$G'$~(\emph{high}$_{F}$, \emph{low}$_{F}$), or choose both
endpoints~(\emph{both}).
Each of the chosen vertices is then deleted from the planar subgraph and
reinserted together with all of its edges in the original graph by solving the
corresponding~SIF.
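The case distinction above can be sketched as follows (an illustrative helper implementing the \emph{high}$_G$ rule for the free choice; all identifiers are ours, not OGDF's):

```python
# Sketch of the mim endpoint-selection step: given the leftover edges F and
# the cut vertices of the planar subgraph G', decide which endpoints are
# reinserted as stars and which edges fall back to plain edge insertion.
# Follows the case distinction in the text, with the high_G rule for the
# free choice; all names are illustrative.

def select_endpoints(F, cut_vertices, deg_G):
    stars, edge_insertions = set(), []
    for v, w in F:
        v_cut, w_cut = v in cut_vertices, w in cut_vertices
        if v_cut and w_cut:
            edge_insertions.append((v, w))   # both ends are cut vertices
        elif v_cut:
            stars.add(w)                     # reinsert the non-cut endpoint
        elif w_cut:
            stars.add(v)
        else:                                # free choice: higher degree in G
            stars.add(v if deg_G[v] >= deg_G[w] else w)
    return stars, edge_insertions

F = [(1, 2), (2, 3), (4, 5)]
cut_vertices = {1, 4}
deg_G = {1: 5, 2: 3, 3: 4, 4: 2, 5: 6}
stars, edges = select_endpoints(F, cut_vertices, deg_G)
assert stars == {2, 3, 5} and edges == []
```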
\myparagraph{}Herein, we evaluate the aforementioned heuristics not only on their own but
also in combination with the \emph{star reinsertion method} (\emph{srm}) by
Clancy et al.~\cite{DBLP:journals/jgaa/ClancyHN19}, a postprocessing strategy
based on star insertion.
It starts with an already existing planarization, which may be constructed using
any of the methods outlined above (or even more trivial ones, such as extracting
a planarization from a circular layout of the vertices, which, however, is known
to perform worse~\cite{DBLP:journals/jgaa/ClancyHN19}).
To represent that the result of an algorithm is improved via \emph{srm}, we
append \enquote{\emph{srm}} to its abbreviation, e.g.\ \emph{fix-none-srm}.
The given planarization is thereby processed as follows:
Iteratively choose a vertex $v$, delete $v$ from $G$, and reinsert it
again by solving the SIF for the star $(v,\{v\} \times N(v))$.
Continue the loop until there is no more vertex whose reinsertion improves
the solution (in which case the latter is said to be \emph{locally
optimal}).
Clancy et al.\ propose different methods for choosing $v$; here, we
consider the scheme they report to be the best compromise between
solution quality and running time:
In each iteration, try to reinsert every vertex once and continue with
the next iteration as soon as a vertex is found whose reinsertion
improves the number of crossings in the~planarization.
The original algorithm only updates a planarization once an actual improvement
is found and resets it to its original state otherwise. We propose to never
reset it.
This approach is permissible as the SIF is solved optimally and the number of
crossings hence never increases after the reinsertion of a star.
Not resetting the planarization has the potential to save time in practice as it
allows for a simpler implementation without any need to copy the dual graph.
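The control loop with the no-reset policy can be sketched abstractly: as long as the reinsertion oracle is optimal (and hence never increases the crossing number), updating unconditionally is safe. The oracle and cost below are toy stand-ins of our own, not the SIF implementation.

```python
# Sketch of the srm control loop with the proposed no-reset policy. 'reinsert'
# stands in for optimal star reinsertion: it optimally re-places a single
# coordinate of a toy state, so the cost never increases. All names are ours.

def reinsert(state, v):
    """Stand-in oracle: the per-coordinate optimum of the toy cost is 0."""
    new_state = list(state)
    new_state[v] = 0
    return new_state

def crossings(state):
    """Toy crossing count."""
    return sum(abs(x) for x in state)

def star_reinsertion(state):
    improved = True
    while improved:                  # stop once no reinsertion improves
        improved = False
        for v in range(len(state)):
            candidate = reinsert(state, v)
            assert crossings(candidate) <= crossings(state)  # never worse
            if crossings(candidate) < crossings(state):
                state = candidate    # update unconditionally, no reset needed
                improved = True
                break                # rescan after each improvement
    return state

assert star_reinsertion([3, 0, -2, 5]) == [0, 0, 0, 0]   # locally optimal
```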
\section{A Note on Non-simple Crossings}
\begin{figure}[p]
\begin{minipage}{\textwidth}
\centering
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics{figures/create_alpha_crossing}
\caption{Creation of an $\alpha$-crossing}
\end{subfigure}\hfill
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics{figures/create_beta_crossing}
\caption{Creation of a $\beta$-crossing}
\end{subfigure}
\end{minipage}
\caption{A non-simple crossing on the red dashed edge as the result of
incrementally solving the same kind of insertion problem.
When starting with the black planar subgraph, this may happen by solving the
SIV using the described algorithm for the colored vertices in the order of
their label numbers.
Alternatively, if all solid edges constitute the initial planar subgraph,
solving the EIV for the dashed edges in the order of their label numbers
can have the same result.
The examples apply both in the fixed and the variable embedding setting.
Dummy vertices for (non-simple) crossings are represented by small (black)
diamonds.
}
\label{fig:crossing_existence}
\end{figure}
\begin{figure}[p]
\begin{minipage}{\textwidth}
\centering
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics{figures/remove_alpha_crossing}
\caption{Removal of an $\alpha$-crossing}
\end{subfigure}\hfill
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics{figures/remove_beta_crossing}
\caption{Removal of a $\beta$-crossing}
\end{subfigure}
\end{minipage}
\caption{Non-simple crossings between the red and green edges. After their
removal (new edge paths drawn as dashed), the red edge is involved in a
new~non-simple crossing of the same type and the green edge in a new non-simple
crossing of the opposite type.
Thus, the removal procedure may have to be iterated.}
\label{fig:non_simple_crossings}
\end{figure}
It is well-known that any crossing-optimal drawing can be assumed to be
\emph{simple}: No edge self-intersects and each pair of edges intersects at most
once (either in a crossing or an endpoint).
In particular, a simple drawing may not contain crossings between adjacent edges
(\emph{$\alpha$-crossings}) or multiple crossings between the same two edges
(\emph{$\beta$-crossings}).
We may hence call any such undesired crossings \emph{non-simple}.
Surprisingly, earlier implementations of the planarization method did not
consider the emergence and removal of any non-simple
crossings~\cite{DBLP:journals/jgaa/ChimaniG12}, while the implementation of the
star reinsertion method by Clancy et al.\ only considers $\beta$- but not
$\alpha$-crossings~\cite{DBLP:journals/jgaa/ClancyHN19}.
However, we show in Figure~\ref{fig:crossing_existence} that incrementally
solving the same kind of insertion problem may result in a planarization
with $\alpha$- or $\beta$-crossings, even when starting with a planar subgraph.
Non-simple crossings can be removed by reassigning edges in the planarization to
different edges in the original graph and then deleting the respective dummy
vertices (see Figure~\ref{fig:non_simple_crossings}).
Doing so leads to better results overall,
\inappendix{as shown in Appendix~\ref{sec:eval_nonsimple_crossings}}%
{see~\cite[Appendix~C]{chimani2021starstruck}}.
\section{Experiments}
\label{sec:experiments}
\myparagraph{Setup:}
All algorithms are implemented in \texttt{C++} as part of the Open Graph Drawing
Framework (OGDF, \url{www.ogdf.net}, based on the release \enquote{2020.02
Catal\-pa})~\cite{DBLP:reference/crc/ChimaniGJKKM13}, and compiled with GCC 8.3.0.
Each computation is performed on a single physical processor of a
Xeon~Gold~6134~CPU~(3.2~GHz), with a memory limit of 4~GB but no time limit.
All instances and results are available for download at
\url{http://tcs.uos.de/research/cr}. %
\myparagraph{Instances:}
Table~\ref{tab:instances} lists the instance sets used for our
evaluation\,(see
\inappendix{Appendix~\ref{sec:instance_statistics}}%
{\cite[Appx.\,A]{chimani2021starstruck}}
for further statistical analysis).
To enable a proper comparison of the tested algorithms (and potentially
in the future, their competitors), we consider multiple well-known benchmark
sets as well as constructed, random, and real-world instances with
varying characteristics.
These are preprocessed by computing the \emph{non-planar
core}~(NPC)~\cite{DBLP:journals/dm/ChimaniG09} for each non-planar
biconnected component.
We consider only those instances that have at least $25$ vertices after the NPC
reduction unless the instance is part of the Complete, Complete-Bip., or
KnownCR instance sets.
Moreover, we precompute a planar subgraph and chordless cycle for each
instance such that different runs of \emph{plm}, \emph{mim} and \emph{ccm} can
be started with the same initialization.
The planar subgraph is computed by using Chalermsook and Schmid's diamond
algorithm~\cite{DBLP:conf/walcom/Chalermsook017} and extending the result to
a maximal planar subgraph.
On average, this computation took only 0.77\% of the time needed to execute the
fastest evaluated heuristic \emph{fix-none}---a comparatively negligible amount
of time that is not further taken into consideration during the evaluation.
\begin{table}[t]
\centering
\caption{Considered instance sets. \enquote{\#} denotes the number of graphs
and $|V(G)|$ the (range of the) numbers of nodes---both values refer to
the instance sets \emph{after} preprocessing.
Further, let $\delta$ denote the node degree, $\mathbin{\text{\scalebox{.84}{$\square$}}}$ the Cartesian
product of two graphs, $C_i$~the cycle with $i$ edges, $P_j$ the path
with $j$ edges, and $G_k$ the $21$ non-isomorphic connected graphs on
$5$ vertices indexed by $k$.
}
\label{tab:instances}
\begin{tabular}{lrr@{\ \;}p{7.39cm}}
Name & \# & $|V(G)|$ & Description \\
\hline\hline
\textbf{Rome} & 3668 & 25--58 & Well-known benchmark set~\cite{DBLP:journals/comgeo/WelzlBGLTTV97}, sparse\\
\hline
\textbf{North} & 106 &
25--64 &
Well-known benchmark set collected by S.\ North~\cite{DBLP:journals/ijcga/BattistaGLPTTVV00}\\
\hline
\textbf{Webcompute} & 75 & 25--112 &
Instances sent to our online tool~\cite{DBLP:conf/esa/ChimaniW16} for the exact computation of crossing
numbers, \url{crossings.uos.de}\\
\hline
\textbf{Expanders} & 240 & 30--100 & 20 random regular graphs~\cite{steger_wormald_1999}
(\emph{expander graphs} with high probability)
for each parameterization $(|V(G)|,\delta) \in \{30,50,100\} \times
\{4,6,10,20\}$\\
\hline
\textbf{Circuit-Based} & 45 & 26--3045 &
\multirow{4}{7.39cm}{\justifying Hypergraphs from real world electrical
networks, transformed into traditional graphs by replacing each
hyperedge $h$ by a new hypervertex connected to all vertices contained in $h$}\\
\emph{ISCAS-85}~\cite{brglez1985neutral} & 9 & 180--3045 & \\
\emph{ISCAS-89}~\cite{brglez1989notes} & 24 & 60--584 & \\
\emph{ITC-99}~\cite{DBLP:journals/dt/CornoRS00} & 12 & 26--980 & \\
\hline
\textbf{KnownCR} & 1946 & 9--250 & Benchmark set with $\mathit{cr}$ known through proofs~\cite{gutwenger10}:\\
$C \mathbin{\text{\scalebox{.84}{$\square$}}} C$ & 251 & 9--250 & $\to$ $C_i \mathbin{\text{\scalebox{.84}{$\square$}}} C_j$ with $3 \leq i \leq 7$, $j \geq i$ such that $i\cdot j \leq 250$\\
$G \mathbin{\text{\scalebox{.84}{$\square$}}} P$ & 893 & 15--245 & $\to$ Subset of $G_i \mathbin{\text{\scalebox{.84}{$\square$}}} P_j$ with $1 \leq i \leq 21$, $3 \leq j \leq 49$\\
$G \mathbin{\text{\scalebox{.84}{$\square$}}} C$ & 624 & 15--250 & $\to$ Subset of $G_i \mathbin{\text{\scalebox{.84}{$\square$}}} C_j$ with $1 \leq i \leq 21$, $3 \leq j \leq 50$\\
$P(\_,\_)$ & 178 & 10--250 & $\to$ Generalized Petersen graphs $P(2k + 1, 2)$ with $2 \leq k \leq 62$ and $P(m, 3)$ with $9 \leq m \leq 125$\\
\hline
\textbf{Complete} & 46 & 5--50 & Complete graphs $K_n$ for $5 \leq n \leq 50$\\
\hline
\textbf{Complete-Bip.} & 666 & 10--80 & Complete bipartite graphs\,$K_{n_1,n_2}$\,for\,$5 \leq n_1,n_2 \leq 40$\\
\end{tabular}
\end{table}
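The hyperedge-to-star transformation used for the circuit-based instances in Table~\ref{tab:instances} can be sketched in a few lines; this is an illustrative reconstruction (function and variable names are ours, not from the paper's code), assuming hypergraphs are given as lists of vertex sets.

```python
# Hedged sketch: replace each hyperedge h by a new "hypervertex" that is
# connected to every vertex contained in h, turning the hypergraph into a
# traditional graph (as done for the ISCAS/ITC circuit instances).

def hypergraph_to_graph(vertices, hyperedges):
    """Return (nodes, edges) of the graph obtained from the hypergraph."""
    nodes = list(vertices)
    edges = []
    for i, h in enumerate(hyperedges):
        hypervertex = ("h", i)          # fresh node representing hyperedge h
        nodes.append(hypervertex)
        for v in sorted(h):             # connect it to all members of h
            edges.append((hypervertex, v))
    return nodes, edges

# A hyperedge {1, 2, 3} becomes a star centered at its hypervertex:
nodes, edges = hypergraph_to_graph([1, 2, 3, 4], [{1, 2, 3}, {3, 4}])
```

Each hyperedge thus contributes one new vertex and one incident edge per member, which preserves the circuit's connectivity structure in ordinary graph form.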
The precomputed chordless cycle almost always consists of 3--6 vertices;
only 15~instances overall require 7--11 vertices.
The number of edges deleted to create the planar subgraph, on the other hand,
varies greatly with the size and density of the graph.
Of particular interest is the number of deleted edges that are incident to one
or two cut vertices of the planar subgraph:
During \emph{mim}, the former ones have a fixed endpoint that
must be reinserted via star insertion (disallowing a choice of the reinserted
endpoint) while the latter ones must be inserted via edge insertion.
Clearly, denser instances such as the complete (bipartite) ones and the
expanders require more edges to be deleted to form a planar subgraph.
At the same time, due to their high connectivity, these instances also have fewer
deleted edges that are incident to cut vertices in the planar subgraph.
In particular, the complete (bipartite) instances do not have a single such edge.
However, even on the sparser instances, \emph{mim} inserts almost all edges via
star insertion and one can usually choose the endpoint to be reinserted (see the
\emph{mim}-variants described in Subsection~\ref{subsec:heuristics}).
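The case distinction on deleted edges described above can be made concrete with a small sketch (our own illustrative code, not the paper's implementation), assuming the cut vertices of the planar subgraph are given as a set:

```python
# Hedged sketch: classify the edges deleted for the planar subgraph by how
# many of their endpoints are cut vertices. During mim, one cut-vertex
# endpoint forces which endpoint is reinserted as a star, while two
# cut-vertex endpoints force plain edge insertion.

def classify_deleted_edges(deleted_edges, cut_vertices):
    free_choice, fixed_endpoint, edge_insertion = [], [], []
    for u, v in deleted_edges:
        cuts = (u in cut_vertices) + (v in cut_vertices)
        if cuts == 0:
            free_choice.append((u, v))      # either endpoint may be reinserted
        elif cuts == 1:
            fixed_endpoint.append((u, v))   # star insertion, endpoint forced
        else:
            edge_insertion.append((u, v))   # must use edge insertion
    return free_choice, fixed_endpoint, edge_insertion
```

On the complete (bipartite) instances, the first list would contain every deleted edge, matching the observation that no deleted edge touches a cut vertex there.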
\subsection{Fast Heuristics: Mixed Insertion Method, Chordless Cycle Method and
Fixed Embedding Edge Insertion}
The \emph{mim}-variants, \emph{ccm}, and \emph{fix-none} (all without
\emph{srm}-postprocessing) are very fast but yield a comparably high number
of crossings.
Figure~\ref{fig:mim_comparison} displays some representative results on the
expanders, contrasting them with the \emph{BEST} solution found by 50
random permutations of any heuristic tested herein (cf.\
Subsection~\ref{subsec:permutations}).
Among the \emph{mim}-variants, there are only small differences in computation
speed and resulting number of crossings.
However, reinserting \emph{both} endpoints whenever a choice between two
endpoints can be made clearly provides the best results across all instances
while only taking an insignificant amount of additional time.
This variant leads to the highest number of reinserted stars and hence to
more chances for improving the number of crossings.
In contrast, \emph{high}$_{F}$ requires the fewest star insertions and is
thus the fastest variant (but provides results of mixed quality).
\begin{figure}[tbp]
\centering
\begin{subfigure}{\textwidth}
\centering
\includegraphics{plots/legend/mim}
\end{subfigure}\\[0.1cm]
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics{plots/mim_expanders}
\end{subfigure}\hfill%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics{plots/mim_expanders_time}
\end{subfigure}
\caption{Comparison of the \emph{mim}-variants, \emph{ccm} and \emph{fix-none}
on the expanders.}
\label{fig:mim_comparison}
\end{figure}
Compared with \emph{fix-none} and \emph{ccm}, \emph{mim} (from now on always
referring to the \emph{both}-variant) provides better results on almost all
instances.
The fastest of the algorithms, on the other hand, is \emph{fix-none}.
The last of the three, \emph{ccm}, should only be considered when examining
particularly dense instances:
On sparse instance sets such as Rome or KnownCR, %
it is slower and yields far worse results than \emph{fix-none} (which in turn
yields worse results than \emph{mim}), but the solution and speed disparity between
the algorithms becomes smaller on instances with a higher density---see, e.g.,
Figure~\ref{fig:mim_comparison}.
On complete (bipartite) instances, \emph{ccm} even surpasses \emph{mim} both in terms
of solution quality and speed.
\subsection{Planarization Method}
\label{subsec:planarization_method}
The different edge insertion algorithms and postprocessing strategies for the
planarization method make it possible to greatly improve the final planarizations
at the cost of additional running time.
A detailed experimental comparison of these \emph{plm}-variants was already
carried out in 2012~\cite{DBLP:journals/jgaa/ChimaniG12}.
We are able to replicate the results of that study and corroborate its
claims with findings on additional instances:
In terms of solution quality, \emph{none} provides much worse results than
\emph{all} and \emph{inc} across all instance sets.
However, postprocessing, and \emph{inc} in particular, has the drawback of very
high running times and large memory requirements.
Among the edge insertion algorithms, \emph{var} performs better (but is also
slower) than \emph{multi}, which in turn performs better than \emph{fix}.
Overall, \emph{fix-all} is the fastest \emph{plm}-variant that still benefits
from the quality improvements of postprocessing.
The best compromise between solution quality and speed is provided by the
\emph{multi}-variants while the best results are achieved by
\emph{var-inc} (cf.\
\inappendix{Appendix~\ref{sec:plm_appendix}}%
{\cite[Appx.~B]{chimani2021starstruck}}).
\subsection{Improvements via the Star Reinsertion Method}
We tested \emph{srm} as a postprocessing method for the eight most promising and
interesting algorithms that construct an initial planarization:
The three fast algorithms \emph{mim}, \emph{ccm}, and \emph{fix-none}, as
well as the more involved \emph{fix-all}, \emph{multi-all}, \emph{multi-inc},
\emph{var-all}, and \emph{var-inc}.
In the case of the latter five, a form of postprocessing is already used, and
the additional application of \emph{srm} only leads to a comparatively small
increase in running time.
In the case of the former three, the additional postprocessing via \emph{srm}
significantly increases the running times (\emph{fix-none-srm} becomes even
slower than \emph{fix-all-srm}), but the algorithms are still
surprisingly fast:
On sparse instances, the running times are comparable to \emph{multi-inc}
(without \emph{srm}); on dense instances, the algorithms are even faster than
\emph{fix-all}.
This is especially interesting as all \emph{srm}-enhanced algorithms typically
outperform even the best previously known heuristic variant \emph{var-inc} (see
Figures~\ref{fig:srm_comparison_knowncr} and \ref{fig:srm_comparison}).
In spite of its simplicity, star insertion in a fixed embedding setting is able
to greatly improve intermediate planarizations by inserting multiple edges at
once.
It provides better results and is faster than edge insertion in a variable
embedding setting even if the latter uses incremental postprocessing.
\begin{figure}[p]
\centering
\begin{subfigure}{\textwidth}
\centering
\includegraphics{plots/srm_KnownCR}
\end{subfigure}
\\[0.2cm]
\begin{subfigure}{\textwidth}
\centering
\includegraphics{plots/srm_KnownCR_time}
\end{subfigure}
\caption{Comparison of the \emph{srm}-variants on the KnownCR instances. The
legend of Figure~\ref{fig:srm_comparison} applies. Instance sizes
are rounded up to the nearest multiple of fifty. Note that the results of
\emph{ccm-srm} heavily depend on the structure of the instance; they also vary a
lot across other instance sets\,(with middling results on average).}
\label{fig:srm_comparison_knowncr}
\end{figure}
\begin{figure}[tb]
\centering
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=\textwidth]{plots/legend/srm}
\end{subfigure}\\[0.2cm]
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=\textwidth]{plots/srm_rome}
\end{subfigure}\hfill%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=\textwidth]{plots/srm_rome_time}
\end{subfigure}
\caption{Comparison of the \emph{srm}-variants on the Rome instances.
The grayed out plots represent the heuristic variants without
\emph{srm}-postprocessing. Instance sizes are rounded up to the nearest multiple
of five.}
\label{fig:srm_comparison}
\end{figure}
When observing the solution quality of the \emph{srm}-algorithms, the same
hierarchy as for the algorithms without \emph{srm} emerges:
\emph{fix-none-srm} performs worse than the other \emph{plm}-based
\emph{srm}-variants, with \emph{var-inc-srm} providing the best results overall.
However, \emph{var-inc-srm} is rarely worth the additional running time since
the three significantly faster \emph{mim-srm}, \emph{ccm-srm} and
\emph{fix-none-srm} perform similarly well or even surpass it on many instances such
as several circuit-based ones and the expanders.
For example, compared with \emph{mim-srm}, \emph{var-inc-srm}'s
solution quality difference to BEST is only 1.7\% smaller, but its median
running time is eight times as high (when averaged over all instances).
The running times of the faster algorithms seem to coincide with the quality
of the planarization delivered by the base algorithm:
While \emph{fix-none-srm} is generally faster than \emph{ccm-srm} on sparse
instances, %
the opposite is true on denser ones. %
On complete (bipartite) instances, \emph{ccm-srm} becomes even faster than
\emph{mim-srm}. %
However, \emph{mim-srm} is otherwise the fastest among these algorithms, and
thus we recommend its use.
\subsection{Improvements via Permutations}
\label{subsec:permutations}
We consider one last question: whether multiple runs of the same algorithm
with different random permutations of the inserted elements can
significantly improve the results.
For \emph{plm}, we permute the order in which the deleted edges are inserted,
and for \emph{mim}, \emph{ccm} and \emph{srm}, we permute the order of
(re)inserted stars.
Our experiments compare the effect of 50 random permutations with
respect to the Rome, North, Webcompute and KnownCR instance sets.
For the larger instances and more time-consuming algorithms,
this number of permutations is the limit of what we are able to compute.
We focus on the \emph{(relative) improvement} for each instance, i.e., the
lowest number of crossings divided by the average number of crossings across 50
permutations (%
\inappendix{cf.\ Appendix~\ref{sec:relative_improvement}}%
{cf.\ \cite[Appendix~D]{chimani2021starstruck}}).
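As a minimal sketch of this evaluation measure (the function name is ours), the per-instance relative improvement is simply the best result normalized by the mean over all permutation runs:

```python
# Hedged sketch: relative improvement of one instance, given the number of
# crossings produced by each of the (e.g. 50) permutation runs.

def relative_improvement(crossings_per_permutation):
    best = min(crossings_per_permutation)                  # lowest crossing count
    average = sum(crossings_per_permutation) / len(crossings_per_permutation)
    return best / average                                  # in (0, 1]; lower = bigger gain
```

A value close to 1 means the permutations barely help; smaller values indicate that the best of the 50 runs is substantially better than a typical run.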
\begin{figure}[p]
\centering
\captionsetup[subfigure]{justification=centering}
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics{plots/perms_rome}
\caption{Rome}
\end{subfigure}\hfill%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics{plots/perms_north}
\caption{North}
\end{subfigure}%
\caption{Comparison of relative improvements for 50 permutations over their
average on the Rome and North instances.
The legend of Figure~\ref{fig:srm_comparison} applies.}
\label{fig:relative_comparison}
\end{figure}
\begin{figure}[p]
\centering
\captionsetup[subfigure]{justification=centering}
\begin{subfigure}{\textwidth}
\centering
\includegraphics[width=.98\textwidth]{plots/legend/perms}
\end{subfigure}
\\[0.2cm]
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics{plots/perms_absolute_rome_sim_cmp}
\caption{Rome}
\end{subfigure}\hfill%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics{plots/perms_absolute_north_sim_cmp}
\caption{North}
\end{subfigure}%
\caption{Comparison of high-solution-quality heuristics (with a single or 50
permutations) on the Rome and North instances.}
\label{fig:highperforming_comparison}
\end{figure}
Overall, permutations can significantly improve the results of \emph{mim},
\emph{ccm}, and \emph{plm-none} at the cost of little additional time.
However, when more time is available, \emph{plm} with postprocessing is clearly
preferable.
Multiple permutations of \emph{all} and \emph{inc} can be of use if
one tries to marginally improve already good solutions.
Among the \emph{srm}-algorithms, the relative improvement via permutations is
consistently low with little variance; for a comparison with
the respective \emph{plm}-variants see Figure~\ref{fig:relative_comparison}.
The one outlier is \emph{ccm-srm}, which achieves the greatest relative
improvements for 50 permutations.
Note, however, that we initialize all permutations of \emph{ccm-srm} with a
fixed small chordless cycle instead of a fixed maximal planar subgraph.
This allows for greater variance in the solutions of \emph{ccm-srm} and makes it
difficult to compare the results to other \emph{srm}-algorithms.
The general trend of high-solution-quality algorithms, taking multiple
permutations into account, is shown in
Figure~\ref{fig:highperforming_comparison}:
A single permutation of \emph{mim-srm} or \emph{ccm-srm} will yield better
solutions than a \emph{plm}-variant with incremental postprocessing (but no
\emph{srm}).
Two layers of postprocessing, i.e., \emph{-all-srm} or \emph{-inc-srm}, improve
the results even more.
Solutions resulting from 50 permutations are in a tier of their own, with
\emph{srm}-heuristics achieving higher quality than those without.
Overall, 50 permutations of \emph{mim-srm} or \emph{ccm-srm} provide some of the
best results while taking far less time than other algorithms in
their category.
Consider, e.g., the Rome instances in a 50-permutations setting;
\emph{var-inc-srm} can reduce the average solution quality difference to BEST by
only 1.2\% more than \emph{mim-srm}, but its median running time is ten times as
high.
\FloatBarrier
\section{Conclusion}
\label{sec:conclusion}
Our in-depth experimental evaluation not only corroborates the
results of previous papers~\cite{DBLP:journals/jgaa/ChimaniG12,
DBLP:journals/jgaa/ClancyHN19} but also provides new insights into the
performance of star insertion in crossing minimization heuristics.
We presented the novel heuristic \emph{mim}, which proceeds
similarly to the planarization method but inserts most edges by reinserting one
of their endpoints as a star. %
Whenever neither endpoint is a cut vertex of the initial planar subgraph,
the endpoint can be chosen freely, and our experiments indicate
that reinserting \emph{both} endpoints one after another provides the best results.
In general, \emph{mim} performs better than the basic heuristics
from~\cite{DBLP:journals/jgaa/ChimaniG12, DBLP:journals/jgaa/ClancyHN19} that
have a similarly low running time (i.e., \emph{ccm} and \emph{fix-none}).
A central observation is that postprocessing via star insertion
(\emph{srm}) can greatly improve the planarizations resulting from fast
heuristics:
\emph{mim-srm}, \emph{ccm-srm}, and \emph{fix-none-srm} are all faster than the
previously best-performing heuristic \emph{var-inc} and provide better results.
By inserting multiple adjacent edges at once, star (re-)insertion changes the
planarization and its underlying graph decomposition in a way that is sufficient
to properly explore the search space and find good solutions.
Fixed embedding star insertion is thus preferable over the much
slower insertion of edges (or even stars) in a variable embedding setting.
We note that many heuristics---in particular those without edge-wise
post\-processing---are prone to create non-simple crossings (due to lack of
space see
\inappendix{Appendix~\ref{sec:eval_nonsimple_crossings}}%
{\cite[Appendix~C]{chimani2021starstruck}}).
Such crossings can be detected, and removing them is worthwhile both to
speed up the procedure and to improve the results.
Lastly, multiple permutations are beneficial for heuristics that already
employ postprocessing.
In particular, their application to \emph{mim-srm} and \emph{ccm-srm} provides
very high solution quality at moderate running times.
\section{Introduction}
Next-generation wireless systems are expected to provide not only ultra-high-speed communication rates, but also high-accuracy sensing services \cite{liu2020joint}. Conventionally, communication and sensing systems are allocated to orthogonal frequency bands and designed independently. Recently, the rapid development of multi-antenna technologies, especially massive multiple-input multiple-output (MIMO) and millimeter wave (mmWave) technologies, has granted communication systems the capability to also perform high-accuracy sensing tasks. As such, the notion of integrated sensing and communication (ISAC) \cite{liu2021survey, akan2020internet}, in which sensing and communication systems are co-designed to share the same frequency band and hardware, has been proposed as an enabling technology for beyond fifth-generation (5G) and sixth-generation (6G) wireless systems to further improve spectral efficiency and reduce hardware cost \cite{wong2017key}.
It is envisioned that ISAC can support various essential applications \cite{liu2021survey}, ranging from indoor localization and extended reality to unmanned aerial vehicle (UAV) sensing and communication.
Among various emerging network architectures, vehicle-to-everything (V2X) networks \cite{wymeersch20175g} play an important role in unlocking the potential of next-generation wireless communication, as they promise low-latency data transmission in high-mobility scenarios.
In particular, vehicle-to-infrastructure (V2I) communication \cite{kuutti2018survey} is an indispensable component of V2X networks. Thus, it is natural to deploy ISAC to support high-data-rate transmission as well as high-resolution localization in V2I networks.
Indeed, various initial experimental results have shown that ISAC-based V2I networks can achieve centimeter-order localization accuracy while maintaining a satisfactory communication rate \cite{wymeersch20175g}. Meanwhile, ISAC-based V2I networks not only conserve spectral resources, but also reduce hardware cost.
Therefore, research efforts towards the ISAC technology and the ISAC-based V2I systems have attracted tremendous attention from both academia and industry \cite{feng2020joint, xiao2020overview}.
Generally, according to whether or not the sensing targets can independently transmit sensing signals, ISAC can be divided into two types: (a) device-based ISAC and (b) device-free ISAC \cite{liu2021survey}.
One typical method of (a) is the integrated localization and communication (ILAC) \cite{xiao2020overview}.
For example, \cite{ghatak2018positioning} investigated an mmWave network for enabling both positioning and data-communication services.
In particular, by exploiting the native communication signal transmitted from the single-antenna users, the base station (BS) can provide the positioning service while satisfying heterogeneous quality of service requirements.
Besides, the authors in \cite{destino2017trade} adopted a pilot-based beam training scheme for positioning in a single-user mmWave communication scenario.
In the proposed scheme, both the BS and the user are equipped with multi-antenna arrays to periodically transmit and receive the sensing signals for joint beam alignment, thus further improving the ISAC performance.
Building on \cite{destino2017trade}, the authors of \cite{kumar2018trade} extended this line of work to the multi-user case, exploring a device-based multi-user single-input multiple-output (SIMO) uplink beam training scheme.
By quantifying the localization-communication tradeoff as a function of the beam training phase duration, the authors revealed that there exists an optimal beam training overhead that strikes a balance between the effective rate and the localization accuracy.
Note that the above device-based methods focus on low-mobility users and require the sensing targets to independently transmit sensing signals to facilitate positioning at the BS.
However, for high-mobility scenarios, such as V2I networks, such sensing signal transmission would introduce a relatively long time delay.
This often leads to outdated location information, jeopardizing the beam tracking performance.
To overcome these problems, dual-function radar-communication (DFRC) \cite{liu2020joint}, a typical example of device-free ISAC, has been proposed. It enables a single device to provide dual functionalities, radar sensing and communication, without the need to transmit any dedicated sensing signals, such as pilot signals.
As early research efforts, the authors in \cite{roberton2003integrated} and \cite{saddik2007ultra} employed a classical radar waveform, i.e., chirp signals, as the carrier for information transmission and sensing. For example, the developed systems can transmit binary information bits via choosing whether to adopt the down-chirp waveform or the up-chirp waveform. Yet, this scheme can only support a quite low-speed data rate.
Recently, the orthogonal frequency division multiplexing (OFDM) technique has been introduced and exploited as a promising solution for the deployment of efficient DFRC \cite{xu2021wideband}. In contrast to conventional radar waveforms, OFDM-based radar signals naturally decouple the Doppler and range estimators, which further improves the sensing performance \cite{sen2010adaptive, shi2017power}.
Besides the frequency domain, the rapid development of advanced MIMO techniques enables DFRC to exploit the spatial domain to facilitate the implementation of ISAC.
For instance, in the pioneering work of \cite{kumari2017ieee}, a MIMO radar was deployed for DFRC, where a pencil-like mainlobe is formed to facilitate target detection while information is conveyed by adaptively controlling the sidelobe powers. To further improve the overall performance of both radar and communication, \cite{liu2020jointtransmit} developed a joint transmit beamforming scheme to optimize the radar target detection accuracy under communication quality-of-service constraints.
On the other hand, to fully exploit the degrees-of-freedom brought by the spatial domain, the authors in \cite{huang2020majorcom} designed a carrier agile phased array radar-based DFRC scheme that adopts index modulation for information transmission.
Also, to provide higher data rates and sensing resolution for high-mobility V2I networks, mmWave-based DFRC systems have recently been widely investigated, where a road side unit (RSU) performs both beam tracking and motion parameter prediction to facilitate efficient ISAC.
Along this line of thought, the authors in \cite{liu2020radar} developed an extended Kalman filtering (EKF) scheme to accurately track and predict the kinematic parameters of vehicles in mmWave frequency bands, so as to further improve the beam alignment performance. However, the scheme proposed in \cite{liu2020radar} incurs an exceedingly large computational complexity, which is not suitable for practical implementation. To simplify the prediction process, \cite{yuan2020bayesian} exploited the message passing technique to predict the motion parameters within a Bayesian framework; the resulting scheme enjoys a relatively lower computational complexity than the EKF-based method, especially for large-scale V2I networks.
It should be highlighted that although the aforementioned methods provide intermediate solutions for the deployment of DFRC in mmWave-based V2I networks, they still adopt a cascaded two-stage scheme, i.e., predicting the future communication channels and then performing beam alignment, which is essentially a disjoint beamforming design for guaranteeing the sensing and communication performance simultaneously.
Besides, it has been shown that the achievable beamforming performance is limited by the channel prediction accuracy.
Furthermore, the two-stage mechanism \cite{liu2020radar, yuan2020bayesian} directly aligns the transmit beam with the desired vehicle, ignoring the existence of multiple access interference, which inevitably degrades the system sum-rate.
Thus, a practical predictive beamforming approach that can directly predict the joint beamforming matrix for the next time slot, without the need for explicit channel tracking/prediction, is desired for the deployment of efficient ISAC in V2I networks.
Recently, the rapidly developing deep learning (DL) technology has demonstrated its powerful data-driven capability in various wireless communication applications \cite{wang2017deep}, ranging from channel estimation \cite{lxm2020deepresidual} and signal detection \cite{liu2020deeptransfer} to resource allocation \cite{ye2019deep}. Note that the joint beamforming design in ISAC is generally a high-dimensional nonconvex problem \cite{liu2021survey}; thus, it is challenging to find an optimal solution. On the other hand, optimizing the beamformer for ISAC is essentially an objective maximization problem, which can be addressed in a data-driven manner to further improve the system performance.
In fact, the data-driven approach is a processing mechanism in which DL is adopted to extract effective features from data to further improve the performance of a given task \cite{wang2017deep, lecun2015deep}.
It has been shown that the data-driven DL approach has various advantages, such as being model-free and providing non-linear mappings \cite{lecun2015deep}, which facilitates the development of efficient algorithms for addressing sophisticated optimization problems.
Motivated by this, in this paper, we adopt a DL approach to design a predictive joint beamforming scheme for V2I networks which not only maximizes the average achievable system sum-rate, but also guarantees the estimation accuracy of motion parameters for vehicles.
In fact, by exploiting the excellent feature extraction capability of DL \cite{liu2019deep, lecun2015deep}, the proposed predictive scheme can implicitly learn the features from historical estimated channels and directly predict the beamforming matrix to be used for the next time slot, which significantly reduces the system complexity.
The main contributions of our work are summarized as follows:
\begin{enumerate}[(1)]
\item We develop a predictive communication protocol, which bypasses the explicit channel tracking process to reduce the signaling overhead. Accordingly, a general predictive beamforming problem for ISAC systems is formulated to maximize the communication sum-rate while guaranteeing the sensing performance, where the Cram\'{e}r-Rao lower bounds (CRLBs) are derived to characterize the estimation accuracy. Specifically, different from existing works \cite{liu2020radar, yuan2020bayesian} that simplistically ignore the interference from other vehicles, our formulated ISAC beamforming problem takes the multiple access interference into account to yield practical schemes for the implementation of ISAC systems.
\item To address the beamforming design problem, we propose a versatile unsupervised DL-based predictive beamforming framework, in which a penalty-based method is first exploited to transform the original constrained optimization problem into an unconstrained one, after which an unsupervised DL approach is developed to handle the design problem in a data-driven manner.
\item As a realization of the proposed framework, a historical channels-based convolutional long short-term memory (LSTM) network (HCL-Net) is designed for predictive beamforming in ISAC-based V2I networks. Specifically, in HCL-Net, the historical estimated channels are considered as the input while the convolution and LSTM modules are adopted successively to exploit the spatial and temporal features of communication channels to further improve the learning capability for predictive beamforming.
\item We have conducted extensive simulations to verify the efficiency of the proposed algorithm in terms of both communication and sensing performance.
In particular, the results demonstrate that the proposed predictive method can guarantee even harsh requirements on the sensing CRLBs, and its achievable sum-rate closely approaches the upper bound obtained by a genie-aided scheme, in which the downlink channels are assumed to be parallel such that the optimal beamforming \cite{rao2003performance} can be derived with perfect instantaneous channel state information (ICSI).
\end{enumerate}
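The penalty-based transformation in contribution (2) follows a generic pattern; as a hedged sketch (the symbols $R$, $g_i$, and $\lambda$ are illustrative and not necessarily the paper's notation), a sum-rate maximization under CRLB constraints can be folded into a single unconstrained objective:

```latex
% Generic penalty reformulation (illustrative notation):
% maximize sum-rate R(W) subject to CRLB constraints g_i(W) <= 0.
\max_{\mathbf{W}}\; R(\mathbf{W})
\quad \text{s.t.} \quad g_i(\mathbf{W}) \le 0 \;\; \forall i
\;\;\longrightarrow\;\;
\max_{\mathbf{W}}\; R(\mathbf{W})
  - \lambda \sum\nolimits_i \max\bigl(0,\, g_i(\mathbf{W})\bigr)
```

For a sufficiently large penalty weight $\lambda > 0$, constraint violations are heavily penalized, and the unconstrained objective can serve directly as the training loss of an unsupervised network, which is the mechanism the proposed framework exploits.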
The remainder of this work is organized as follows.
The system model of the considered ISAC-based V2I network is introduced in Section \Rmnum{2}.
Section \Rmnum{3} formulates a general optimization problem for beamforming design in ISAC.
To tackle this problem, a DL-based predictive beamforming scheme for ISAC is proposed in Section \Rmnum{4}.
Then, we conduct simulations to verify the effectiveness of the proposed scheme in Section \Rmnum{5}.
Finally, Section \Rmnum{6} concludes this paper.
\emph{Notations}:
Unless otherwise specified, we adopt the bold uppercase letter, bold lowercase letter, and the normal font to represent the matrix, vector, and scalar, respectively.
Superscripts $T$, $*$, and $H$ denote the transpose operation, conjugate operation, and conjugate transpose operation, respectively.
$\mathbb{C}$ and $\mathbb{R}$ represent the sets of complex numbers and real numbers, respectively.
${\mathcal{CN}}( \bm{\mu},\mathbf{\Sigma} )$ and ${\mathcal{N}}( \bm{\mu},\mathbf{\Sigma} )$ denote the circularly symmetric complex Gaussian (CSCG) distribution and the real-valued Gaussian distribution, respectively, where $\bm{\mu}$ and $\mathbf{\Sigma}$ are the mean vector and the covariance matrix, respectively.
${\mathcal{U}}( a,b )$ denotes the uniform distribution within the range of $[a,b]$.
$\mathbf{0}$ denotes a zero vector or matrix according to requirements.
${\mathbf{I}}_N$ and ${\mathbf{1}}_N$ are used to represent the $N$-by-$N$ identity matrix and the $N$-by-$1$ all-ones vector, respectively.
$|\cdot|$ is the absolute value of a complex-valued number, $\|\cdot\|$ is the Euclidean norm of a vector, and $\|\cdot\|_F$ denotes the Frobenius norm of a matrix.
$(\cdot)^{-1}$ denotes the matrix inverse.
$\mathrm{diag}(\mathbf{x})$ denotes the diagonal matrix whose main diagonal is given by vector $\mathbf{x}$.
$\mathbb{E}(\cdot)$ refers to the statistical expectation operation.
$\det(\cdot)$ denotes the determinant of a matrix.
$\frac{\partial f(x,y,\cdots)}{\partial x}$ denotes the partial derivative of a function $f(x,y,\cdots)$ with respect to variable $x$.
$\mathbf{A} \succeq \mathbf{B}$ denotes $\mathbf{A} - \mathbf{B}$ is positive semidefinite.
$\max(c,d)$ denotes the maximum between real-valued $c$ and $d$.
In addition, we adopt $\mathrm{Re}\{\cdot\}$ and $\mathrm{Im}\{\cdot\}$ to denote the real part and the imaginary part of a complex-valued matrix, respectively.
\vspace{-0.2cm}
\section{System Model}\vspace{-0.2cm}
As shown in Fig. \ref{Fig:RSU_scenario}, we consider an ISAC-assisted V2I network, where a roadside unit (RSU) serves $K$ single-antenna vehicles.
The RSU is a DFRC system which is equipped with an mmWave massive MIMO-type \cite{marzetta2016fundamentals} uniform linear array (ULA) consisting of $N_t$ transmit antennas and $N_r$ receive antennas. By exploiting full-duplex radio techniques \cite{barneto2021full} on transmit and receive antennas, the RSU can receive the potential signal echoes for sensing while maintaining uninterrupted downlink communications concurrently \cite{yuan2020bayesian}.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{RSU_scenario_v2.pdf}
\caption{The considered ISAC-assisted V2I system with an RSU serving multiple vehicles.}\label{Fig:RSU_scenario}
\end{figure}
\subsection{Sensing Model}
Denote by $s_{k,n}(t)$ the ISAC downlink signal to be transmitted to the $k$-th, $k \in \{1,2,\cdots,K\}$, vehicle at time instant $t$ within the $n$-th, $n \in \{1,2,\cdots,N\}$, time slot.
The ISAC signal vector for all $K$ vehicles is given by $\mathbf{s}_{n}(t)=[s_{1,n}(t),s_{2,n}(t),\cdots,s_{K,n}(t) ]^T \in \mathbb{C}^{K \times 1}$ with $\mathbb{E}\{|s_{k,n}(t)|^2\}=1$.
Thus, the transmitted signal via the $N_t$ antennas at the RSU can be expressed as
\begin{equation}\label{Wsn_t}
\tilde{\mathbf{s}}_n(t) = \mathbf{W}_n \mathbf{s}_{n}(t) \in \mathbb{C}^{N_t \times 1},
\end{equation}
where $\mathbf{W}_n = [\mathbf{w}_{1,n},\mathbf{w}_{2,n},\cdots,\mathbf{w}_{K,n}]\in \mathbb{C}^{N_t\times K}$ denotes the downlink beamforming matrix at time slot $n$ and $\mathbf{w}_{k,n} \in \mathbb{C}^{N_t\times 1}$ is the dedicated beamforming vector for the $k$-th vehicle. In this case, the reflected signal echoes received at the RSU can be formulated as \cite{liu2020radar, yuan2020bayesian}
\begin{equation}\label{r_nt}
{\mathbf{r}}_n(t) =G \sum_{k=1}^K \beta_{k,n} e^{j2\pi \mu_{k,n}t} \mathbf{b}(\theta_{k,n}) \mathbf{a}^{\rm H}(\theta_{k,n})\tilde{\mathbf{s}}_{n}(t-\nu_{k,n}) + \mathbf{z}(t).
\end{equation}
Here, $G=\sqrt{N_t N_r}$ is the total antenna array gain and $\nu_{k,n}$ and $\mu_{k,n}$ are the time-delay and the Doppler frequency with respect to the $k$-th vehicle, respectively.
$\theta_{k,n}$ denotes the angle between the $k$-th vehicle and the RSU at time slot $n$.
$\beta_{k,n} = \frac{\varrho}{2d_{k,n}}$ is the reflection coefficient, where $\varrho$ denotes the fading coefficient based on the radar cross-section and $d_{k,n}$ is the distance between the $k$-th vehicle and the RSU at time slot $n$.
$\mathbf{z}(t)\in \mathbb{C}^{N_r\times 1}$ denotes the CSCG noise vector at the RSU.
In practical mmWave communication systems, a line-of-sight (LoS) channel model\footnotemark\footnotetext{The existence of occlusion or blocking obstacles may hinder the execution of the sensing and communication tasks \cite{liu2020joint}.
In addition, the echoes from the non-LoS channels can mislead the locations of the desired targets \cite{liu2020joint, liu2021survey}.
For ease of investigation, the study of these factors will be left for future work and only the LoS channel model is considered in this paper.} \cite{niu2015survey} is usually adopted and the transmit and the receive steering vectors at the RSU can be expressed as
\begin{equation}\label{}
\mathbf{a}(\theta_{k,n})=\sqrt{\frac{1}{N_t}}[1,e^{-j\pi\cos\theta_{k,n}},\cdots,e^{-j\pi(N_t-1)\cos\theta_{k,n}}]^T
\end{equation}
and
\begin{equation}\label{}
\mathbf{b}(\theta_{k,n})=\sqrt{\frac{1}{N_r}}[1,e^{-j\pi\cos\theta_{k,n}},\cdots,e^{-j\pi(N_r-1)\cos\theta_{k,n}}]^T,
\end{equation}
respectively.
Since a massive MIMO system is adopted at the RSU, the steering vectors with different angles are asymptotically orthogonal \cite{marzetta2016fundamentals}, i.e., $\forall k \neq k^{'}$, we have $|\mathbf{b}^H(\theta_{k,n})\mathbf{b}(\theta_{k^{'},n})| \approx 0$.
Thus, the inter-beam interference between different vehicles in the uplink echoes is negligible
and the RSU can distinguish different vehicles in terms of angle-of-arrivals (AoAs) for independent processing.
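To make the array model concrete, the following Python/NumPy sketch (the array size and angles are illustrative assumptions, not values from the paper) builds the half-wavelength ULA steering vector and numerically illustrates the asymptotic-orthogonality property invoked above:

```python
import numpy as np

def steering_vector(theta, n_ant):
    """Half-wavelength ULA steering vector a(theta) / b(theta) for angle theta (rad)."""
    n = np.arange(n_ant)
    return np.exp(-1j * np.pi * n * np.cos(theta)) / np.sqrt(n_ant)

# For a massive array, steering vectors at distinct angles are nearly orthogonal:
# |b^H(theta) b(theta')| ~ 0 for theta != theta'.
b1 = steering_vector(np.deg2rad(60.0), 128)   # N_r = 128 (illustrative)
b2 = steering_vector(np.deg2rad(75.0), 128)
inner = np.abs(b1.conj() @ b2)                # small cross-correlation
self_inner = np.abs(b1.conj() @ b1)           # unit norm: equals 1
```

With $N_r = 128$, the cross-correlation is on the order of $10^{-2}$, consistent with treating the inter-beam interference in the echoes as negligible.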
In this case, the received echo at the RSU from the $k$-th vehicle at time slot $n$, denoted by ${{r}}_{k,n}(t)$, can be extracted from (\ref{r_nt}) via a spatial filtering process \cite{van2004optimum}, i.e., multiplying ${\mathbf{r}}_n(t)$ by $\mathbf{b}^H(\dot{\theta}_{k,n})$, which can be expressed as
\begin{equation}\label{r_kn}
\begin{aligned}
{{r}}_{k,n}(t) &= \mathbf{b}^H(\dot{\theta}_{k,n})\mathbf{r}_n(t) \\
&= G \beta_{k,n} e^{j2\pi \mu_{k,n}t}
\mathbf{a}^{\rm H}(\theta_{k,n})\tilde{\mathbf{s}}_n(t-\nu_{k,n}) + {z}_{k,n}(t).
\end{aligned}
\end{equation}
Here, $\mathbf{b}^H(\dot{\theta}_{k,n})$ is the receive beamforming vector adopted for spatial filtering, and $\dot{\theta}_{k,n}$ is an estimate of ${\theta}_{k,n}$ obtained via AoA estimation techniques \cite{gross2015smart, van2004optimum}. For simplicity, we assume $\dot{\theta}_{k,n} \approx {\theta}_{k,n}$, i.e., $\mathbf{b}^H(\dot{\theta}_{k,n})\mathbf{b}({\theta}_{k,n}) \approx 1$.
In addition, ${z}_{k,n}(t) = \mathbf{b}^H(\dot{\theta}_{k,n})\mathbf{z}(t) \in \mathbb{C}$ is a CSCG noise variable, i.e., ${z}_{k,n}(t) \sim \mathcal{CN}(0,\sigma_z^2)$ and $\sigma_z^2$ is the noise variance.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{RSU_model_v2.pdf}
\caption{The kinematic model of vehicles in the considered V2I system.}\label{Fig:RSU_model}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{DFRC_frame_structure_v2.pdf}
\caption{The developed communication protocol for ISAC in the considered V2I system.}\label{Fig:ISAC_frame_structure}
\end{figure*}
\subsection{Vehicle Mobility and Observation Model}
As shown in Fig. \ref{Fig:RSU_model}, all vehicles travel parallel to the road, i.e., the direction of each velocity vector hardly changes \cite{wymeersch20175g}. Thus, we can characterize the velocity model by the magnitude of the corresponding velocity vector, which is given by
\begin{equation}\label{}
v_{k,n} = v_{k,n-1} + \Delta v_{k,n-1}, \forall k,n,
\end{equation}
where $v_{k,n}$ denotes the average velocity magnitude of the $k$-th vehicle at time slot $n$ and $\Delta v_{k,n-1}$ is the velocity increment within time slot $n-1$. For ease of study, we assume $v_{k,n} \sim \mathcal{U}(v_{\min},v_{\max})$, $\forall k,n$, with $v_{\min}$ and $v_{\max}$ being the minimum and the maximum of velocity magnitude, respectively \cite{wymeersch20175g}.
Based on this and the illustration in Fig. \ref{Fig:RSU_model}, the kinematic equations \cite{liu2020radar} characterizing the variations of the angle and the distance can be expressed as
\begin{equation}
\left\{
\begin{aligned}
&\sin(\theta_{k,n} - \theta_{k,n-1})d_{k,n} = v_{k,n-1} \Delta T \sin \theta_{k,n-1}, \\
&d_{k,n}^2 = d_{k,n-1}^2 + (v_{k,n-1} \Delta T)^2 \\
&\hspace{1cm} - 2d_{k,n-1}v_{k,n-1} \Delta T \cos\theta_{k,n-1},
\end{aligned}
\right.
\end{equation}
where $\Delta T$ is the time duration of each time slot.
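Read as a one-step state update, the two kinematic equations give $d_{k,n}$ by the law of cosines and then $\theta_{k,n}$ by the law of sines. A minimal sketch under that reading (units and values are illustrative):

```python
import numpy as np

def kinematic_update(theta_prev, d_prev, v_prev, dT):
    """One-slot update of (angle, distance) from the kinematic equations.

    theta_prev: angle at slot n-1 (rad); d_prev: distance (m);
    v_prev: velocity magnitude (m/s); dT: slot duration (s).
    """
    step = v_prev * dT
    # Second kinematic equation (law of cosines) -> new distance.
    d_new = np.sqrt(d_prev**2 + step**2 - 2.0 * d_prev * step * np.cos(theta_prev))
    # First kinematic equation (law of sines) -> new angle.
    d_theta = np.arcsin(step * np.sin(theta_prev) / d_new)
    return theta_prev + d_theta, d_new
```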
In this case, the time-delay $\nu_{k,n}$ and the Doppler frequency $\mu_{k,n}$ can be estimated via the conventional matched-filtering method \cite{richards2014fundamentals}, i.e.,
\begin{equation}\label{matched-filtering}
\begin{aligned}
&\{\tilde{\nu}_{k,n},\tilde{\mu}_{k,n}\} \\
&= \arg \max_{\nu,\mu} \left|\int_0^{\Delta T_e}{{r}}_{k,n}(t)s_{k,n}^*(t- \nu)e^{-j2\pi\mu t } dt \right|^2,
\end{aligned}
\end{equation}
where $\Delta T_e \leq \Delta T$ is the length of the received echo and $\tilde{\nu}_{k,n}$ and $\tilde{\mu}_{k,n}$ are the estimated values.
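A discrete-time sketch of the matched-filtering estimator in (\ref{matched-filtering}); the waveform, sample rate, and search grid are illustrative assumptions, and the delay is treated as cyclic for simplicity:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1e6                                    # sample rate (illustrative)
N = 1024
t = np.arange(N) / fs
s = np.exp(1j * 2 * np.pi * rng.random(N))  # unit-modulus probing waveform

true_delay = 20                             # delay in samples (cyclic)
true_doppler = 5e3                          # Doppler shift in Hz
echo = np.roll(s, true_delay) * np.exp(1j * 2 * np.pi * true_doppler * t)

# Discretized matched filter: correlate the echo against delayed and
# Doppler-shifted copies of s, then pick the peak of the score surface.
delays = np.arange(40)
dopplers = np.arange(0.0, 10e3, 1e3)
score = np.empty((delays.size, dopplers.size))
for i, nu in enumerate(delays):
    ref = np.conj(np.roll(s, nu))
    for j, mu in enumerate(dopplers):
        score[i, j] = np.abs(np.sum(echo * ref * np.exp(-1j * 2 * np.pi * mu * t)))**2
i_hat, j_hat = np.unravel_index(np.argmax(score), score.shape)
est_delay, est_doppler = int(delays[i_hat]), float(dopplers[j_hat])
```

In this noise-free toy setting the peak of the score surface recovers the true delay-Doppler pair exactly.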
Based on the estimated values, we can then adopt an interference cancellation scheme \cite{bechter2017analytical} to remove the multi-user interference in (\ref{r_kn}).
For simplicity, we assume that the interference can be removed ideally. Thus, after interference cancellation, (\ref{r_kn}) becomes ${{\dot{r}}}_{k,n}(t) = G \beta_{k,n} e^{j2\pi \mu_{k,n}t}
\mathbf{a}^{\rm H}(\theta_{k,n})\mathbf{w}_{k,n} {s}_{k,n}(t-\nu_{k,n}) + {z}_{k,n}(t)$.
In this case, the received echo at the RSU can be rewritten as an observation model of $\theta_{k,n}$, i.e.,
\begin{equation}\label{r_kn_theta}
\begin{aligned}
\tilde{{r}}_{k,n} & \triangleq \int_0^{\Delta T_e}{{\dot{r}}}_{k,n}(t)s_{k,n}^*(t-\tilde{\nu}_{k,n})e^{-j2\pi\tilde{\mu}_{k,n} t } dt \\
& = G \beta_{k,n} \mathbf{a}^{\rm H}(\theta_{k,n})\mathbf{w}_{k,n} \int_0^{\Delta T_e}s_{k,n}(t-\nu_{k,n}) \\
& \hspace{0.4cm}\cdot s_{k,n}^*(t-\tilde{\nu}_{k,n})e^{-j2\pi (\tilde{\mu}_{k,n}t - \mu_{k,n} t)} dt \\
& \hspace{0.4cm} + \int_0^{\Delta T_e} z_{k,n}(t)s_{k,n}^*(t-\tilde{\nu}_{k,n})e^{-j2\pi\tilde{\mu}_{k,n} t} dt \\
& = G \beta_{k,n} \xi \mathbf{a}^{\rm H}(\theta_{k,n})\mathbf{w}_{k,n} + \tilde{{z}}_{k,n}.
\end{aligned}
\end{equation}
Here, $\xi \hspace{-0.1cm}=\hspace{-0.1cm} \int_0^{\Delta T_e}s_{k,n}(t-\nu_{k,n})s_{k,n}^*(t-\tilde{\nu}_{k,n})e^{-j2\pi (\tilde{\mu}_{k,n}t - \mu_{k,n} t)} dt$ is the matched-filtering gain.
$\tilde{{z}}_{k,n} \sim \mathcal{CN}(0,\sigma_{r_k}^2)$ is the noise term with $\sigma_{r_k}^2$ being the noise variance.
Accordingly, the distance $d_{k,n}$ and the radial velocity $\dot{v}_{k,n}$ obey the observation models
\begin{equation}\label{tau_kn_d}
\tilde{\nu}_{k,n} = {\nu}_{k,n} + \epsilon_{k,n} = \frac{2d_{k,n}}{c} + \epsilon_{k,n}
\end{equation}
and
\begin{equation}\label{mu_kn_v}
\tilde{\mu}_{k,n} = {\mu}_{k,n} + \varepsilon_{k,n} = \frac{2\dot{v}_{k,n} f_c}{c} + \varepsilon_{k,n},
\end{equation}
respectively. Here, $f_c$ is the carrier frequency, $c$ is the speed of signal propagation, $\dot{v}_{k,n}$ denotes the radial velocity. $\epsilon_{k,n} \sim \mathcal{N}(0,\sigma_{\nu_k}^2)$ \cite{tse2005fundamentals} with the noise variance $\sigma_{\nu_k}^2$ and $\varepsilon_{k,n} \sim \mathcal{N}(0,\sigma_{\mu_k}^2)$ with the noise variance $\sigma_{\mu_k}^2$ are the estimation errors of ${\nu}_{k,n}$ and ${\mu}_{k,n}$, respectively\footnotemark\footnotetext{Note that the noise terms in (\ref{tau_kn_d}) and (\ref{mu_kn_v}) are introduced by the matched-filtering process in (\ref{matched-filtering}), whose distributions mainly depend on the noise term in (\ref{matched-filtering}).
In fact, the noise term in (\ref{matched-filtering}) has only one random component, i.e., a CSCG vector $\mathbf{z}(t)$ as defined in (\ref{r_nt}). In this case, the noise term in (\ref{matched-filtering}) is a CSCG variable. Thus, the noise terms in (\ref{tau_kn_d}) and (\ref{mu_kn_v}) also follow the Gaussian distribution.}.
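Inverting the observation models (\ref{tau_kn_d}) and (\ref{mu_kn_v}) maps the matched-filter outputs back to a distance and a radial velocity; a minimal sketch (the carrier frequency is an assumed illustrative value):

```python
C = 3e8          # propagation speed c (m/s)
FC = 30e9        # carrier frequency f_c (Hz); illustrative mmWave value

def estimates_to_state(nu_tilde, mu_tilde):
    """Delay -> distance (nu = 2d/c) and Doppler -> radial velocity (mu = 2*v*fc/c)."""
    d_hat = C * nu_tilde / 2.0
    v_hat = C * mu_tilde / (2.0 * FC)
    return d_hat, v_hat
```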
Note that $\sigma_{\nu_k}^2$ and $\sigma_{\mu_k}^2$ generally depend on the signal-to-noise ratio ($\mathrm{SNR}$) at the RSU \cite{kay1993fundamentals}, i.e., the received SNR of (\ref{r_kn}), which is derived as
\begin{equation}\label{SNR}
\begin{aligned}
&\mathrm{SNR}_{k,n}\\
&= \frac{\mathbb{E}\left\{\left|G \beta_{k,n} e^{j2\pi \mu_{k,n}t}
\mathbf{a}^{\rm H}(\theta_{k,n}) \mathbf{w}_{k,n} {s}_{k,n}(t-\nu_{k,n}) \right|^2\right\}}{\mathbb{E}\left\{\left|\sum_{i \neq k}^K G \beta_{i,n} \mathbf{a}^{\rm H}(\theta_{k,n}) \mathbf{w}_{i,n}{s}_{i,n}(t-\nu_{k,n}) + {z}_{k,n}(t)\right|^2\right\}} \\
&= \frac{ G^2|\beta_{k,n}|^2|\mathbf{a}^H(\theta_{k,n})\mathbf{w}_{k,n}|^2}{\sum_{i \neq k}^K G^2|\beta_{i,n}|^2 \left|\mathbf{a}^{\rm H}(\theta_{k,n}) \mathbf{w}_{i,n}\right|^2 + \sigma_z^2}, \end{aligned}
\end{equation}
where the terms of interference from other users are considered as noise, $\mathbb{E}\{|s_{k,n}(t)|^2\}=1$, and $\mathbb{E}\{|z_{k,n}(t)|^2\}=\sigma_z^2$, as defined in Section II-A.
Based on this, we have \cite{liu2020radar}
\begin{equation}\label{sigma_tau}
\sigma_{\nu_k}^2 = \frac{\rho_{\nu}^2(\sum_{i \neq k}^K G^2|\beta_{i,n}|^2\left|\mathbf{a}^{\rm H}(\theta_{k,n}) \mathbf{w}_{i,n}\right|^2+\sigma_z^2)}{ G^2|\beta_{k,n}|^2|\mathbf{a}^H(\theta_{k,n})\mathbf{w}_{k,n}|^2}
\end{equation}
and
\begin{equation}
\sigma_{\mu_k}^2 = \frac{\rho_{\mu}^2(\sum_{i \neq k}^K G^2|\beta_{i,n}|^2 \left|\mathbf{a}^{\rm H}(\theta_{k,n}) \mathbf{w}_{i,n}\right|^2+\sigma_z^2)}{ G^2|\beta_{k,n}|^2|\mathbf{a}^H(\theta_{k,n})\mathbf{w}_{k,n}|^2},
\end{equation}
respectively, where $\rho_{\nu}$ and $\rho_{\mu}$ are given constants depending on the detailed system deployment and estimation algorithms.
In particular, the noise variances depend heavily on the downlink beamforming vector through the term $|\mathbf{a}^H(\theta_{k,n})\mathbf{w}_{k,n}|$; thus, we can improve the observation accuracy by adjusting $\mathbf{w}_{k,n}$.
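The received sensing SNR in (\ref{SNR}) and the induced estimation-error variances can be evaluated directly from the beamforming matrix; a sketch (the constants $\rho_\nu$ and $\rho_\mu$ are treated as given inputs, as in the text):

```python
import numpy as np

def sensing_snr_and_variances(a_k, W, k, beta, G, sigma_z2, rho_nu, rho_mu):
    """Sensing SNR at the RSU for vehicle k and the resulting delay/Doppler
    error variances, following eq. (SNR) and eq. (sigma_tau)."""
    gains = np.abs(a_k.conj() @ W)**2                        # |a^H(theta_k) w_i|^2
    sig = G**2 * np.abs(beta[k])**2 * gains[k]               # desired echo power
    total = np.sum(G**2 * np.abs(beta)**2 * gains)
    interf = total - sig                                     # sum over i != k
    snr = sig / (interf + sigma_z2)
    sigma_nu2 = rho_nu**2 * (interf + sigma_z2) / sig
    sigma_mu2 = rho_mu**2 * (interf + sigma_z2) / sig
    return snr, sigma_nu2, sigma_mu2
```

Note how both variances shrink as $|\mathbf{a}^H(\theta_{k,n})\mathbf{w}_{k,n}|^2$ grows, which is exactly the dependence exploited in the beamforming design.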
\subsection{Communication Model}
At time instant $t$ within time slot $n$, the $k$-th vehicle receives the downlink signal from the RSU, which can be expressed as
\begin{equation}\label{cmodel}
\vartheta_{k,n}(t) = \tilde{G} \sqrt{\alpha_{k,n}} e^{j2\pi \mu_{k,n}t}\mathbf{a}^{ H}(\theta_{k,n})\sum_{i=1}^K \mathbf{w}_{i,n}{s}_{i,n}(t) + \eta_{k,n}(t).
\end{equation}
Here, $\tilde{G} = \sqrt{N_t}$ is the antenna gain, $\alpha_{k,n} = \alpha_0(d_{k,n}/d_0)^{-\zeta}$ is the path loss coefficient, where $\alpha_0$ is the path loss at reference distance $d_0$, and $\zeta$ is the associated path loss exponent. In addition, $\eta_{k,n}(t) \sim \mathcal{CN}(0,\sigma_k^2)$ denotes the noise at the $k$-th vehicle with $\sigma_k^2$ being the noise variance.
For each vehicle receiver, the interference originates from the signals transmitted for other vehicles, which should be taken into account in beamforming design to further improve the sum-rate performance.
It should be highlighted that the effect of the multi-user interference has been considered during both the sensing and communication process since both (\ref{r_kn}) and (\ref{cmodel}) have the multi-user interference component, i.e., $\sum_{i\neq k}^K \mathbf{w}_{i,n}{s}_{i,n}(t)$.
In particular, the received signal-to-interference-plus-noise ratio (SINR) at the $k$-th vehicle within time slot $n$ can be expressed as
\begin{equation}\label{SINR}
\begin{aligned}
\mathrm{SINR}_{k,n}(\mathbf{h}_{k,n},\mathbf{w}_{k,n}) &= \frac{\left|\tilde{G} \sqrt{\alpha_{k,n}} \mathbf{a}^{H}(\theta_{k,n})\mathbf{w}_{k,n}\right|^2 }
{ \sum\limits_{k^{'} \neq k}^K \left|\tilde{G} \sqrt{\alpha_{k,n}} \mathbf{a}^{H}(\theta_{k,n}) \mathbf{w}_{k^{'},n}\right|^2 + \sigma_{k}^2} \\
&= \frac{ \left| \mathbf{h}_{k,n}^H\mathbf{w}_{k,n}\right|^2 }{\sum\limits_{k^{'} \neq k}^K \left| \mathbf{h}_{k,n}^H\mathbf{w}_{k^{'},n}\right|^2 + \sigma_{k}^2},
\end{aligned}
\end{equation}
where $\mathbf{h}_{k,n}^H = \tilde{G}\sqrt{\alpha_0(d_{k,n}/d_0)^{-\zeta}}\mathbf{a}^{H}(\theta_{k,n})$ is the equivalent channel vector between the $k$-th vehicle and the RSU at time slot $n$.
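With the equivalent channel vectors $\mathbf{h}_{k,n}$ in hand, the per-vehicle SINR in (\ref{SINR}) and the resulting sum-rate reduce to a few lines; a sketch in which the columns of $\mathbf{H}$ and $\mathbf{W}$ are the $\mathbf{h}_{k,n}$ and $\mathbf{w}_{k,n}$, respectively:

```python
import numpy as np

def downlink_sinr(H, W, sigma2):
    """SINR of each vehicle per eq. (SINR); P[k, i] = |h_k^H w_i|^2."""
    P = np.abs(H.conj().T @ W)**2
    sig = np.diag(P)                       # desired-signal powers
    interf = P.sum(axis=1) - sig           # multi-user interference
    return sig / (interf + sigma2)

def sum_rate(H, W, sigma2):
    """Instantaneous achievable sum-rate in bits/s/Hz."""
    return np.sum(np.log2(1.0 + downlink_sinr(H, W, sigma2)))
```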
\subsection{Proposed Protocol}
Note that if the beamforming matrix $\mathbf{W}_n$ were optimized in time slot $n-1$, the expected communication performance at time slot $n$ could be guaranteed in advance.
Inspired by this, we develop a hierarchical transmission protocol for ISAC in the considered V2I system, where the predictive beamforming matrix for the next time slot is designed in advance, thus bypassing real-time channel tracking or motion parameter prediction and further reducing the signaling overhead compared with existing works, e.g., \cite{kumari2017ieee, liu2020jointtransmit, huang2020majorcom, liu2020radar, yuan2020bayesian}.
As depicted in Fig. \ref{Fig:ISAC_frame_structure}, the data stream is divided into different time slots. In each time slot, the RSU needs to send ISAC signals for both the downlink communication and the sensing tasks.
Specifically, there are two stages for each time slot in the developed protocol, i.e., Stage I: ISAC signal transmission and echo reception; Stage II: ISAC signal processing.
Taking the $n$-th time slot as an example, in Stage I, the RSU transmits ISAC signals with the optimized beamforming matrix obtained from the last prediction and receives echoes. In Stage II, the RSU first estimates vehicles' motion parameters at the current time slot, i.e., time slot $n$, based on the received echoes from different vehicles and then optimizes the beamforming matrix for the next time slot, i.e., time slot $n+1$, according to the current and historical estimated channels. As such, the RSU can bypass the process of acquiring the ICSI and directly design the beamforming matrix to guarantee the ISAC performance. In this paper, we mainly focus on the predictive beamforming design\footnotemark\footnotetext{Note that as described in Fig. \ref{Fig:RSU_scenario}, Stage I can be handled via some full-duplex radio techniques, e.g., \cite{barneto2021full}.} in Stage II and the related problem formulation will be given in the next section.
In addition, a comparison of the complexity of the proposed framework with different existing beamforming protocols is shown in Fig. \ref{Fig:BF_framework_comparison}, where three beamforming protocols are considered: (a) beam training protocol \cite{zhang2019codebook}, (b) two-stage beam prediction protocol \cite{liu2020radar}, and (c) the proposed predictive beamforming protocol.
It can be seen that both the downlink pilots and the uplink feedback are indispensable for beam training in (a), which introduces huge complexity and signaling overhead.
Although (b) adopts a DFRC block that does not require beam training, it still needs a disjoint two-stage process of channel prediction followed by beamforming, which also incurs considerable computational overhead.
On top of (b), our proposed (c) develops a learning-based joint predictive mechanism that directly predicts the beamforming matrix from the historical channels, without performing explicit channel prediction or beam training.
Thus, our proposed predictive beamforming protocol has a lower complexity compared with the other related protocols.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{ISAC_beamforming_framework_comparison_use.pdf}
\caption{Frameworks of different beamforming protocols for ISAC.}\label{Fig:BF_framework_comparison}
\end{figure}
\section{Problem Formulation}
In this paper, we aim to maximize the average achievable sum-rate of the V2I downlink communication while guaranteeing the sensing performance for vehicles. In the following, we will first derive CRLBs of motion parameter estimation to quantify the sensing performance and then formulate the corresponding optimization problem.
\subsection{Cramer-Rao Lower Bound for Parameter Estimation}
In this section, we derive the Cramer-Rao lower bound (CRLB) to characterize the estimation accuracy. To start with, denote by $\mathbf{y}_{k,n} = [\tilde{{r}}_{k,n}, \tilde{\nu}_{k,n}, \tilde{\mu}_{k,n}]^T \in \mathbb{C}^{3 \times 1}$ and $ \mathbf{x}_{k,n} = [\theta_{k,n},d_{k,n},\dot{v}_{k,n}]^T$ the observation vector and the motion parameter vector, respectively. Then, we have
\begin{equation}\label{}
\mathbf{y}_{k,n} = \mathbf{g}(\mathbf{x}_{k,n}) + \mathbf{u}_{k,n},
\end{equation}
where $\mathbf{g}(\cdot)$ is defined according to (\ref{r_kn_theta})-(\ref{mu_kn_v}), $\mathbf{u}_{k,n} = [\tilde{{z}}_{k,n}, \epsilon_{k,n}, \varepsilon_{k,n}]^T$, and $\mathbf{y}_{k,n} \sim \mathcal{CN}(g(\mathbf{x}_{k,n}),\mathbf{\Sigma} )$ with $\mathbf{\Sigma} = \mathrm{diag} ([\sigma_{r_k}^2, \sigma_{\nu_k}^2, \sigma_{\mu_k}^2]) $ being the covariance matrix of $\mathbf{y}_{k,n}$\footnotemark\footnotetext{Note that the correlations among noise terms can provide additional observation information of the desired sensing targets, which can be exploited to further improve the sensing performance.
In this paper, we assume the noise terms are uncorrelated since this assumption represents an unfavourable scenario of ISAC systems \cite{liu2021survey, liu2020radar} and the adopted framework can serve as a solid foundation for possible further extensions.}. In this case, the conditional probability density function (PDF) of $\mathbf{y}_{k,n}$ given $\mathbf{x}_{k,n}$ can be expressed as
\begin{equation}\label{}
\begin{aligned}
&p(\mathbf{y}_{k,n}|\mathbf{x}_{k,n}) = \frac{1}{\pi^{3}\det(\mathbf{\Sigma})}\\
&\hspace{1.0cm}\exp\left(-(\mathbf{y}_{k,n} - \mathbf{g}(\mathbf{x}_{k,n}))^H\mathbf{\Sigma}^{-1}(\mathbf{y}_{k,n} - \mathbf{g}(\mathbf{x}_{k,n}))\right).
\end{aligned}
\end{equation}
According to the CRLB theorem \cite{kay1993fundamentals}, the Fisher information matrix (FIM) \cite[p. 529]{kay1993fundamentals} for $\mathbf{x}_{k,n}$ is given by (\ref{FIM_equation}), as shown at the top of the next page, where $\mathbf{F}(\mathbf{x}_{k,n})$ denotes the FIM of $\mathbf{x}_{k,n}$, $\mathbb{E}\left[(\mathbf{y}_{k,n} - \mathbf{g}(\mathbf{x}_{k,n}))(\mathbf{y}_{k,n} - \mathbf{g}(\mathbf{x}_{k,n}))^H\right] = \mathbf{\Sigma}$ and
\newcounter{mytempeqncnt1}
\begin{figure*}[!t]
\normalsize
\setcounter{mytempeqncnt1}{\value{equation}}
\setcounter{equation}{18}
{\begin{equation}\label{FIM_equation}
\begin{aligned}
\mathbf{F}(\mathbf{x}_{k,n}) & = \mathbb{E}\left[\frac{\partial \ln p(\mathbf{y}_{k,n}|\mathbf{x}_{k,n}) }{\partial \mathbf{x}_{k,n}^*} \left(\frac{\partial \ln p(\mathbf{y}_{k,n}|\mathbf{x}_{k,n}) }{\partial \mathbf{x}_{k,n}^*}\right)^H \right] \\
& = \mathbb{E}\left[\left(\frac{\partial \mathbf{g}(\mathbf{x}_{k,n}) }{\partial \mathbf{x}_{k,n}}\right)^H\mathbf{\Sigma}^{-1}(\mathbf{y}_{k,n} - \mathbf{g}(\mathbf{x}_{k,n}))(\mathbf{y}_{k,n} - \mathbf{g}(\mathbf{x}_{k,n}))^H \mathbf{\Sigma}^{-1}\frac{\partial \mathbf{g}(\mathbf{x}_{k,n})}{\partial \mathbf{x}_{k,n}} \right] \\
& = \left(\frac{\partial \mathbf{g}(\mathbf{x}_{k,n}) }{\partial \mathbf{x}_{k,n}}\right)^H\mathbf{\Sigma}^{-1}\mathbb{E}\left[(\mathbf{y}_{k,n} - \mathbf{g}(\mathbf{x}_{k,n}))(\mathbf{y}_{k,n} - \mathbf{g}(\mathbf{x}_{k,n}))^H\right]\mathbf{\Sigma}^{-1}\frac{\partial \mathbf{g}(\mathbf{x}_{k,n})}{\partial \mathbf{x}_{k,n}} \\
& = \left(\frac{\partial \mathbf{g}(\mathbf{x}_{k,n})}{\partial \mathbf{x}_{k,n}}\right)^H\mathbf{\Sigma}^{-1}\frac{\partial \mathbf{g}(\mathbf{x}_{k,n})}{\partial \mathbf{x}_{k,n}}.
\end{aligned}
\end{equation}}
\setcounter{equation}{\value{mytempeqncnt1}}
\hrulefill
\vspace*{4pt}
\end{figure*}
\setcounter{equation}{19}
\begin{equation}\label{P}
\frac{\partial \mathbf{g}(\mathbf{x}_{k,n})}{\partial \mathbf{x}_{k,n}} = \left[
\begin{matrix}
\frac{\partial \tilde{{r}}_{k,n}}{\partial \theta_{k,n}} & 0 & 0 \\
0 & \frac{2}{c} & 0 \\
0 & 0 & \frac{2f_c}{c} \\
\end{matrix}
\right] \in \mathbb{C}^{3 \times 3}.
\end{equation}
Based on this, the mean squared error (MSE) matrix of $\mathbf{x}_{k,n}$ is bounded from below as
\begin{equation}\label{}
\mathbb{E}\left[(\tilde{\mathbf{x}}_{k,n} - \mathbf{x}_{k,n})(\tilde{\mathbf{x}}_{k,n} - \mathbf{x}_{k,n})^H\right] \succeq \mathbf{F}^{-1}(\mathbf{x}_{k,n}).
\end{equation}
Accordingly, the MSE lower bounds of $\theta_{k,n}$ and $d_{k,n}$ are given by
\begin{equation}\label{}
\mathbb{E}\left[(\tilde{\theta}_{k,n} - \theta_{k,n})^2\right] \geq f_{11} \triangleq \mathrm{CRLB}(\theta_{k,n},\mathbf{w}_{k,n})
\end{equation}
and
\begin{equation}\label{}
\mathbb{E}\left[(\tilde{d}_{k,n} - d_{k,n})^2\right] \geq f_{22} \triangleq \mathrm{CRLB}(d_{k,n},\mathbf{w}_{k,n}) ,
\end{equation}
respectively. Here, $f_{ij}$ denotes the element in the $i$-th row and $j$-th column of $\mathbf{F}^{-1}(\mathbf{x}_{k,n})$ and we have
\begin{equation}\label{CRLB_theta}
\mathrm{CRLB}(\theta_{k,n},\mathbf{w}_{k,n}) = \left[\frac{1}{\sigma_{r_k}^2}\left(\frac{\partial \tilde{{r}}_{k,n}}{\partial \theta_{k,n}} \right)^H \frac{\partial \tilde{{r}}_{k,n}}{\partial \theta_{k,n}}\right]^{-1}
\end{equation}
with
\begin{equation}\label{partial_r}
\begin{aligned}
&\frac{\partial \tilde{{r}}_{k,n}}{\partial \theta_{k,n}} \\
& = -\sqrt{N_r}\beta_{k,n}\xi\hspace{-0.05cm}\sum_{n_t=2}^{N_t} w_{k,n}^{(n_t)} e^{j\pi(n_t-1)\cos\theta_{k,n}} j\pi(n_t \hspace{-0.05cm} - \hspace{-0.05cm} 1)\sin\theta_{k,n}
\end{aligned}
\end{equation}
and
\begin{equation}\label{CRLB_d}
\mathrm{CRLB}(d_{k,n},\mathbf{w}_{k,n}) =
\left[\frac{1}{\sigma_{\nu_k}^2}\left(\frac{2}{c}\right)^2\right]^{-1},
\end{equation}
where $w_{k,n}^{(n_t)}$ denotes the $n_t$-th, $n_t \in \{1,2,\cdots, N_t\}$, element of $\mathbf{w}_{k,n}$, and the terms $\sigma_{r_k}^2$ and $\sigma_{\nu_k}^2$ are defined in (\ref{r_kn_theta}) and (\ref{sigma_tau}), respectively.
In addition, $\mathrm{CRLB}(\theta_{k,n},\mathbf{w}_{k,n})$ and $\mathrm{CRLB}(d_{k,n},\mathbf{w}_{k,n})$ represent the CRLBs of estimating $\theta_{k,n}$ and $d_{k,n}$ given $\mathbf{w}_{k,n}$, respectively.
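A numerical sketch of (\ref{CRLB_theta})-(\ref{CRLB_d}); inputs such as $\beta_{k,n}$, the matched-filtering gain $\xi$, and the noise variances are treated as given:

```python
import numpy as np

def crlb_theta(w, theta, beta, xi, Nr, sigma_r2):
    """CRLB of the angle estimate, via eqs. (CRLB_theta) and (partial_r)."""
    Nt = w.size
    nt = np.arange(1, Nt)                 # corresponds to n_t = 2, ..., N_t
    dr_dtheta = -np.sqrt(Nr) * beta * xi * np.sum(
        w[1:] * np.exp(1j * np.pi * nt * np.cos(theta))
        * 1j * np.pi * nt * np.sin(theta))
    return sigma_r2 / np.abs(dr_dtheta)**2

def crlb_d(sigma_nu2, c=3e8):
    """CRLB of the distance estimate, eq. (CRLB_d): sigma_nu^2 * (c/2)^2."""
    return sigma_nu2 * (c / 2.0)**2
```

Both bounds scale linearly with the corresponding noise variance, which in turn depends on $\mathbf{w}_{k,n}$ as noted in Remark 1.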
\textbf{Remark 1}: Note that $\tilde{{r}}_{k,n}$ in (\ref{CRLB_theta}) and $\sigma_{\nu_k}^2$ in (\ref{CRLB_d}) are the functions of $\mathbf{w}_{k,n}$, as defined in (\ref{r_kn_theta}) and (\ref{sigma_tau}), respectively.
Through optimizing the beamforming matrix $\mathbf{W}_n$, one may consider aligning each transmit beam with the desired vehicle to improve the desired signal strength for accurate sensing, as commonly adopted in, e.g., \cite{liu2020radar, yuan2020bayesian}. However, this ignores the impact of multiple access interference on the downlink communication. Thus, a proper beamforming design that strikes a balance between sensing performance and communication performance is needed.
\subsection{Problem Formulation}
Based on the derived CRLBs of motion parameter estimation, we aim to maximize the average achievable sum-rate via optimizing the beamforming matrix at the RSU subject to the maximum transmit power constraint and the CRLB constraints of the sensing task.
Thus, the problem formulation at time slot $n$ can be expressed as
\begin{align}
(\mathrm{P}1): &\mathop{\max}\limits_{{\mathbf{W}}_n } ~ \mathbb{E}_{\mathbf{H}_{n}|\mathbf{\Omega}_{n}^{\tau}}
\left\{\sum_{k = 1}^K\mathrm{log}_2\left( 1 + \mathrm{SINR}_{k,n}(\mathbf{h}_{k,n},\mathbf{w}_{k,n})\right) \hspace{-0.05cm} \right\} \label{P1_OP} \\
&\mathrm{s.t.}~
\mathbb{E}_{\mathbf{c}_{n}|\mathbf{\Theta}_{n}^{\tau}} \left\{\frac{1}{K}\sum_{k=1}^K\mathrm{CRLB}(\theta_{k,n},\mathbf{w}_{k,n})\right\} \leq \gamma_{\theta}, \label{P1_C_theta} \\
&\hspace{0.6cm}
\mathbb{E}_{\mathbf{d}_{n}|\mathbf{D}_{n}^{\tau}} \left\{\frac{1}{K}\sum_{k=1}^K\mathrm{CRLB}(d_{k,n},\mathbf{w}_{k,n})\right\} \leq \gamma_d, \label{P1_C_d} \\
&\hspace{0.6cm} \|\mathbf{W}_n\|_F^2\leq P. \label{P1_power}
\end{align}
Here, $\mathbf{W}_n$ defined in (\ref{Wsn_t}) and $\mathbf{h}_{k,n}$ defined in (\ref{SINR}) are the beamforming matrix and the channel vector, respectively. $\mathbb{E}_{\mathbf{H}_{n}|\mathbf{\Omega}_{n}^{\tau}}\{\cdot\}$ in the objective function is the ergodic average with respect to (w.r.t.) $\mathbf{H}_n = [\mathbf{h}_{1,n},\mathbf{h}_{2,n},\cdots,\mathbf{h}_{K,n}]$, given the historical estimated channels $\mathbf{\Omega}_{n}^{\tau} \triangleq [\tilde{\mathbf{H}}_{n-1},\tilde{\mathbf{H}}_{n-2},\cdots,\tilde{\mathbf{H}}_{n-\tau}]$,
where $\tilde{\mathbf{H}}_{n} =[\tilde{\mathbf{h}}_{1,n},\tilde{\mathbf{h}}_{2,n},\cdots,\tilde{\mathbf{h}}_{K,n}]$ and $\tilde{\mathbf{h}}_{k,n} = \tilde{G}\sqrt{\alpha_0 \tilde{d}_{k,n}^{-\zeta}}\mathbf{a}^{ H}(\tilde{\theta}_{k,n})$ with $\tilde{\theta}_{k,n}$ and $\tilde{d}_{k,n}$ being the estimated angles and distances, respectively.
Note that the term $\mathbf{\Omega}_{n}^{\tau}$ represents the set of historical estimated channels, which is related to the number of transmit antennas, the path loss, and the estimated angles and distances.
Similarly, $\mathbb{E}_{\mathbf{c}_{n}|\mathbf{\Theta}_{n}^{\tau}}$ is the ergodic average w.r.t. $\mathbf{c}_n = [\theta_{1,n},\theta_{2,n},\cdots,\theta_{K,n}]^T$, given the historical estimated angles $\mathbf{\Theta}_{n}^{\tau} \triangleq [\tilde{\mathbf{c}}_{n-1},\tilde{\mathbf{c}}_{n-2},\cdots,\tilde{\mathbf{c}}_{n-\tau}]$
with $\tilde{\mathbf{c}}_{n} =[\tilde{\theta}_{1,n},\tilde{\theta}_{2,n},\cdots,\tilde{\theta}_{K,n}]^T$.
In addition, the expectation $\mathbb{E}_{\mathbf{d}_{n}|\mathbf{D}_{n}^{\tau}}$ is taken w.r.t. $\mathbf{d}_n = [d_{1,n},d_{2,n},\cdots,d_{K,n}]^T$, given the historical estimated distances $\mathbf{D}_{n}^{\tau} \triangleq [\tilde{\mathbf{d}}_{n-1},\tilde{\mathbf{d}}_{n-2},\cdots,\tilde{\mathbf{d}}_{n-\tau}]$
with $\tilde{\mathbf{d}}_{n} =[\tilde{d}_{1,n},\tilde{d}_{2,n},\cdots,\tilde{d}_{K,n}]^T$.
On the other hand, since only the historical channel information (i.e., from time slot $n-1$ to time slot $n-\tau$) can be exploited when designing the beamforming matrix for time slot $n$, the ergodic average is adopted to characterize the performance of communication and sensing tasks at time slot $n$ \cite{tse2005fundamentals}.
Moreover, $\gamma_{\theta}$ and $\gamma_d$ are the maximum tolerable CRLB thresholds to guarantee the sensing performance. $P$ in (\ref{P1_power}) is the power budget at the RSU for every time slot $n$.
According to Jensen's inequality \cite{tse2005fundamentals}, if one aims to improve the sum-rate, $\mathbf{W}_n$ should be designed such that all vehicles maintain the same received SINRs as defined in (\ref{SINR}).
On the other hand, if we aim to minimize the CRLBs, we should adapt $\mathbf{W}_n$ such that the received SNR values at the RSU from different vehicles in (\ref{SNR}) are as identical as possible.
Thus, there exists a tradeoff between the communication rate and the sensing accuracy, which will be verified via simulations in Section V.
Generally, problem ($\mathrm{P}1$) is challenging since it is intractable to derive closed-form expressions for (\ref{P1_OP}), (\ref{P1_C_theta}), and (\ref{P1_C_d}).
Besides, even if perfect CSI were available, the objective function and constraints would be non-convex w.r.t. $\mathbf{w}_{k,n}$.
As an alternative, we will adopt a data-driven approach to develop a learning-based predictive beamforming scheme to address problem ($\mathrm{P}1$) in the following.
\section{Deep Learning-based Predictive Beamforming for ISAC}
Note that the DL approach is generally designed to handle unconstrained optimization problems \cite{goodfellow2016deep}.
To facilitate its application, we exploit the penalty method, which transforms the constrained optimization problem (P1) into an equivalent unconstrained problem \cite{gill2019practical}.
In the following, we will first introduce the developed DL-based predictive beamforming framework and then design the historical channels-based convolutional LSTM network (HCL-Net) structure as a realization of the developed framework. Finally, we will propose the HCL-Net-based predictive beamforming algorithm.
\subsection{DL-based Predictive Beamforming Framework for ISAC}
First, we handle the non-convex constraints. According to \cite{gill2019practical}, the penalty method can be adopted to transform the constrained optimization problem (P1) into an equivalent unconstrained problem, which is given by
\begin{equation}\label{P1'}
\begin{aligned}
&(\mathrm{P}1^{'}): \mathop{\max}_{{\mathbf{W}}_n } ~ \mathbb{E}_{\mathbf{H}_{n}|\mathbf{\Omega}_{n}^{\tau}}
\left\{\sum_{k = 1}^K\mathrm{log}_2\left( 1 + \mathrm{SINR}_{k,n}(\mathbf{h}_{k,n},\mathbf{w}_{k,n}) \right) \hspace{-0.1cm} \right\} \\
& - \hspace{-0.1cm}\lambda_1\hspace{-0.1cm}\left[\max\left(0, \mathbb{E}_{\mathbf{c}_{n}|\mathbf{\Theta}_{n}^{\tau}} \left\{\frac{1}{K}\sum_{k=1}^K\mathrm{CRLB}(\theta_{k,n},\mathbf{w}_{k,n})\right\} - \gamma_{\theta} \hspace{-0.05cm}\right)\hspace{-0.05cm}\right]^2 \\
& - \hspace{-0.1cm}\lambda_2\hspace{-0.1cm}\left[\max\left(0,\mathbb{E}_{\mathbf{d}_{n}|\mathbf{D}_{n}^{\tau}} \left\{\frac{1}{K}\sum_{k=1}^K\mathrm{CRLB}(d_{k,n},\mathbf{w}_{k,n})\right\} - \gamma_d \hspace{-0.05cm}\right)\hspace{-0.05cm}\right]^2 \\
& - \hspace{-0.1cm}\lambda_3\hspace{-0.1cm}\left[\max\left(0, \|\mathbf{W}_n\|_F^2 - P \right)\right]^2,
\end{aligned}
\end{equation}where $\lambda_\iota \gg 0$, $\iota \in \{1,2,3\}$, denotes the penalty parameter that determines the penalty magnitude\footnotemark\footnotetext{According to the penalty convergence theorem in \cite{luenberger2021linear}, the penalty method can solve the original constrained problem with either fixed or adaptive penalty parameters. In this paper, we adopt a fixed penalty scheme for ease of implementation.}.
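To make the transformation concrete, the following minimal Python sketch evaluates the penalized objective of ($\mathrm{P}1^{'}$) for given values of the sum-rate, the average CRLBs, and the transmit power. The function name and the scalar inputs are illustrative stand-ins for the expectations above; the thresholds and penalty parameters follow the values used later in the simulations, while the power budget $P=1.0$ is an assumption.

```python
# Sketch of the penalty transformation in (P1'): the three constraints are
# folded into the objective as quadratic penalty terms [max(0, violation)]^2.
# sum_rate, avg_crlb_theta, avg_crlb_d, and power are hypothetical stand-ins
# for the corresponding expectations in the paper.

def penalized_objective(sum_rate, avg_crlb_theta, avg_crlb_d, power,
                        gamma_theta=0.01, gamma_d=0.01, P=1.0,
                        lam=(1e3, 1e3, 1e3)):
    """Return the unconstrained objective of (P1')."""
    relu = lambda x: max(0.0, x)          # max(0, .) is exactly the ReLU
    penalty = (lam[0] * relu(avg_crlb_theta - gamma_theta) ** 2
               + lam[1] * relu(avg_crlb_d - gamma_d) ** 2
               + lam[2] * relu(power - P) ** 2)
    return sum_rate - penalty

# A feasible point incurs no penalty:
print(penalized_objective(5.0, 0.005, 0.005, 0.9))   # -> 5.0
```

A constraint violation (e.g., an average angle CRLB above $\gamma_\theta$) strictly reduces the objective, which is what steers the training away from infeasible beamformers.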
Note that it is still generally intractable to derive a closed-form expression for the objective function of ($\mathrm{P}1^{'}$).
As a compromise, we adopt a data-driven approach to asymptotically approximate the statistical expectations involved in ($\mathrm{P}1^{'}$). Then, by exploiting the powerful feature extraction capability of deep neural networks (DNNs), we can obtain the solution of problem ($\mathrm{P}1^{'}$).
Based on this, the developed predictive beamforming framework is illustrated in Fig. \ref{Fig:PBF_framework}, which consists of two phases, i.e., (a) penalty method-based problem transformation and (b) DL-based problem solving.
In phase (a), we transform the original optimization problem to an unconstrained optimization problem, denoted by $\max \mathbb{E}\{f(\mathbf{W}_n)\}$.
In phase (b), we first adopt the Monte-Carlo method to approximate $\mathbb{E}\{f(\mathbf{W}_n)\}$ \cite{fishman2013monte}, i.e.,
\begin{equation}\label{}
\mathbb{E}\{f(\mathbf{W}_n)\}\approx\frac{1}{N_e}\sum_{i=1}^{N_e}f(\mathbf{W}_n^{(i)})=\frac{1}{N_e}\sum_{i=1}^{N_e}f(g_{\omega}(\mathbf{\Omega}_n^{\tau(i)})).
\end{equation}
Here, the approximation holds when the number of Monte-Carlo experiments $N_e$ is sufficiently large \cite{goodfellow2016deep}. Note that $g_{\omega}(\cdot)$ denotes the DNN-based mapping function from the available input $\mathbf{\Omega}_n^{\tau(i)}$ to the desired output $\mathbf{W}_n^{(i)}$, where $\omega$ denotes the network parameters of the DNN and $i \in \{1,2,\cdots, N_e\}$ is the index of the Monte-Carlo experiment.
Based on this, we can then set the cost function for DNN training, which is given by
\begin{equation}\label{}
J(\omega)=-\frac{1}{N_e}\sum_{i=1}^{N_e}f(g_{\omega}(\mathbf{\Omega}_n^{\tau(i)})).
\end{equation}
Finally, the optimized beamforming matrix can be obtained from the DNN training by updating $\omega$ to minimize the cost function, and it can be expressed as
\begin{equation}\label{}
\mathbf{W}_n^*=g_{\omega^*}(\mathbf{\Omega}_n^{\tau})
\end{equation}
with $\omega^*=\arg \min\limits_\omega J(\omega)$, where $\omega^*$ is the well-trained network parameter and $\mathbf{W}_n^*$ is the optimized beamforming matrix for an arbitrary $\mathbf{\Omega}_n^{\tau}$.
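The framework of phase (b) can be sketched end-to-end on a toy problem: a Monte-Carlo average replaces the expectation, and a (here, trivially linear) parametric mapping $g_\omega$ is trained by gradient descent to minimize $J(\omega)$. The utility $f$, the mapping $g$, and the data below are hypothetical placeholders, not the paper's actual quantities.

```python
# Toy illustration of the DL-based framework: approximate E{f(W)} by a
# Monte-Carlo average over N_e samples and train the parameter omega of a
# linear mapping g_omega to minimize J(omega) = -mean_i f(g_omega(x_i)).
import random

random.seed(0)
samples = [random.uniform(0.5, 1.5) for _ in range(200)]   # the Omega^(i)

f = lambda w: -(w - 1.0) ** 2          # concave utility, maximized at w = 1
g = lambda omega, x: omega * x         # DNN placeholder: a linear map

def J(omega):
    return -sum(f(g(omega, x)) for x in samples) / len(samples)

# plain gradient descent with a numerical gradient
omega, lr, eps = 3.0, 0.05, 1e-6
for _ in range(500):
    grad = (J(omega + eps) - J(omega - eps)) / (2 * eps)
    omega -= lr * grad

# the trained omega makes g_omega(x) concentrate near the maximizer of f
print(round(omega, 2))
```

In the actual framework, $g_\omega$ is the DNN, $J(\omega)$ is built from the penalized objective of ($\mathrm{P}1^{'}$), and the gradient is computed by back propagation instead of finite differences.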
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{PBF_framework_v2.pdf}
\caption{The developed predictive beamforming framework for ISAC.}\label{Fig:PBF_framework}
\end{figure}
\textbf{Remark 2}: Note that the penalty approach adopted in phase (a) of the developed framework can be applied to handle non-convex constraints in general optimization problems. Meanwhile, the DNN adopted in phase (b) of the developed framework can be realized by any kind of neural network architecture according to the requirements, such as the convolutional neural network (CNN) \cite{lecun2015deep}, the dense neural network \cite{goodfellow2016deep}, and the residual neural network \cite{liu2020deepresidual}.
Thus, the proposed framework is a universal framework for solving constrained optimization problems in a data-driven approach.
\begin{figure*}[t]
\centering
\includegraphics[width=0.88\linewidth]{HCCL_v2.pdf}
\caption{The developed HCL-Net architecture for the predictive beamforming in the considered V2I system.}\label{Fig:HCCL_structure}
\end{figure*}
\subsection{Proposed Historical Channels-based Convolutional LSTM Network (HCL-Net)}
As a realization of the developed framework, an HCL-Net is designed to optimize the predictive beamforming matrix, which consists of $K$ CNN modules, one concatenate layer, one LSTM module, and one fully-connected (FC) layer, as shown in Fig. \ref{Fig:HCCL_structure}.
To fully exploit the temporal dependencies\footnotemark\footnotetext{The temporal dependency here refers to the temporal correlations among historical channel vectors.} for predictive beamforming design, a classical convolutional LSTM structure \cite{goodfellow2016deep} is adopted in HCL-Net, i.e., a CNN module is first applied to extract the spatial features of the involved channel matrices to facilitate the subsequent LSTM. Then, an LSTM module is adopted to exploit the temporal dependencies of historical channels from the extracted features for predictive beamforming design.
Note that although other neural networks, such as the multilayer perceptron and the CNN, also have a powerful capability in feature extraction, their strength mainly lies in extracting spatial features while ignoring temporal ones, which may limit the system performance.
In contrast, our proposed neural network is equipped with both CNN units and memory units to exploit spatial as well as temporal features, further improving the predictive beamforming performance.
The network hyperparameters are presented in Table I and the details will be introduced in the following.
\begin{table*}[t]
\normalsize
\caption{Hyperparameters of the proposed HCL-Net}\label{Tab:Hyperparameters HCL-Net}
\centering
\small
\renewcommand{\arraystretch}{1.25}
\begin{tabular}{c c c}
\hline
\multicolumn{3}{l}{\textbf{Input}: $\tilde{\mathbf{\Omega}}_{n}^{\tau}$ with the size of $\tau \times K \times N_t \times 2$} \\
\hline
\hspace{0.1cm} \textbf{Layers/Modules} & \textbf{Parameters} & \hspace{0.3cm} \textbf{Values} \\
\hspace{0.1cm} CNN module - Convolutional layer& Size of filters & \hspace{0.3cm} $ 4 \times 3 \times 3 \times 2$ \\
\hspace{0.1cm} CNN module - Pooling layer & Size of filters & \hspace{0.3cm} $ 3 \times 3 $ \\
\hspace{0.1cm} CNN module - Flatten layer & Output shape & \hspace{0.3cm} $ 32 \times 1 $ \\
\hspace{0.1cm} Concatenate layer & Output shape & \hspace{0.3cm} $ 96 \times 1 $ \\
\hspace{0.1cm} LSTM module & Size of output & \hspace{0.3cm} $ 64 \times 1 $ \\
\hspace{0.1cm} FC layer & Activation function & \hspace{0.3cm} Linear \\
\hline
\multicolumn{3}{l}{\textbf{Output}: $[ \mathrm{Re}\{\mathbf{W}_n\}, \mathrm{Im}\{\mathbf{W}_n\} ]$} \\
\hline
\end{tabular}
\end{table*}
\subsubsection{\underline{Input Layer}}
To exploit the features of the real and imaginary parts of the input independently, we divide the complex-valued input into two real-valued parts, i.e.,
\begin{equation}\label{HCL_input}
\tilde{\mathbf{\Omega}}_n^{\tau}= \mathcal{M}([ \mathrm{Re}\{\mathbf{\Omega}_n^{\tau}\}, \mathrm{Im}\{\mathbf{\Omega}_n^{\tau}\} ]),
\end{equation}
where $\tilde{\mathbf{\Omega}}_n^{\tau}$ denotes the network input and $\mathcal{M}(\cdot)\!:\mathbb{R}^{N_t \times 2\tau K}\mapsto\mathbb{R}^{\tau \times K \times N_t \times 2}$ represents the mapping function.
\subsubsection{\underline{CNN Module}}
In HCL-Net, we adopt $K$ CNN modules to independently extract the spatial features from the $K$ channel inputs at each time slot.
Specifically, each CNN module shares an identical network structure, which consists of one input layer, one convolutional layer, one pooling layer, and one flatten layer.
For the convolutional layer, we adopt $4$ filters, each of size $3\times 3 \times 2$, to generate feature maps, and a rectified linear unit (ReLU) is applied after each convolution operation. Then, a maximum pooling layer with a $3 \times 3$ filter size is connected to reduce the feature size. Finally, a flatten layer is adopted to generate an appropriate shape for the subsequent processing.
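The forward pass of one CNN module can be sketched in plain numpy as below. The input size is illustrative rather than the exact dimension used in the paper, but the operations (valid $3\times 3\times 2$ convolution with $4$ filters, ReLU, $3\times 3$ max pooling, flatten) follow the description above.

```python
import numpy as np

# Minimal numpy sketch of one CNN module's forward pass:
# valid 2-D convolution (4 filters of size 3x3x2) -> ReLU -> 3x3 max pool
# -> flatten. The 11x11x2 input is an illustrative assumption.
rng = np.random.default_rng(0)

def conv_valid(x, w):
    """x: (H, W, C), w: (F, 3, 3, C) -> (H-2, W-2, F)."""
    H, W, C = x.shape
    F = w.shape[0]
    out = np.zeros((H - 2, W - 2, F))
    for i in range(H - 2):
        for j in range(W - 2):
            patch = x[i:i + 3, j:j + 3, :]
            out[i, j] = np.tensordot(w, patch, axes=([1, 2, 3], [0, 1, 2]))
    return out

def maxpool(x, k=3):
    """Non-overlapping k x k max pooling over the spatial dimensions."""
    H, W, F = x.shape
    out = np.zeros((H // k, W // k, F))
    for i in range(H // k):
        for j in range(W // k):
            out[i, j] = x[i*k:(i+1)*k, j*k:(j+1)*k, :].max(axis=(0, 1))
    return out

x = rng.standard_normal((11, 11, 2))        # toy 2-channel input
w = rng.standard_normal((4, 3, 3, 2))       # 4 filters of size 3x3x2
features = np.maximum(conv_valid(x, w), 0)  # convolution + ReLU
flat = maxpool(features).reshape(-1)        # pooling + flatten
print(flat.shape)                           # (36,)
```

Each of the $K$ CNN modules applies this pipeline independently to its own channel input before the concatenate layer merges the results.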
\subsubsection{\underline{Concatenate Layer}}
The concatenate layer is adopted to concatenate all the spatial channel features extracted by the different CNN modules. By doing this, the output of the concatenate layer contains the channel features of all $K$ vehicles at each time slot, which is suitable for the subsequent temporal channel feature extraction by the LSTM module.
\subsubsection{\underline{LSTM Module}}
In the LSTM module, one LSTM unit is recurrently applied to the inputs of the past $\tau$ time slots. Assuming $\tau$ time steps, the output of the LSTM at time step $l-1$, $l \in \{2,3,\cdots,\tau\}$, is fed into the LSTM at time step $l$. In particular, the output of the LSTM at time step $\tau$ is directly adopted as the output of the entire LSTM module since it exploits the temporal dependency of all the past $\tau$ steps of historical channels.
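The recurrence can be sketched with a single numpy LSTM cell applied over $\tau$ steps, keeping only the last hidden state as the module output; the dimensions follow the illustrative sizes in Table I, and the random inputs stand in for the concatenated CNN features. The per-step cost $4(\kappa_1\kappa_2 + \kappa_2^2 + \kappa_2)$ quoted in the complexity analysis corresponds to the four gate computations below.

```python
import numpy as np

# Sketch of the LSTM recurrence over tau time steps: one cell is applied
# recurrently and only the final hidden state is kept as the module output.
rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

tau, k1, k2 = 6, 96, 64                 # steps, input dim, output dim
# the four gates each act on the concatenated [h, x] -> 4*(k1*k2 + k2^2 + k2)
W = rng.standard_normal((4, k2, k1 + k2)) * 0.1
b = np.zeros((4, k2))

def lstm_step(x, h, c):
    z = np.concatenate([h, x])
    f, i, o = (sigmoid(W[g] @ z + b[g]) for g in range(3))   # forget/input/output gates
    g_ = np.tanh(W[3] @ z + b[3])                            # candidate cell state
    c = f * c + i * g_
    return np.tanh(c) * o, c

h = c = np.zeros(k2)
for t in range(tau):                    # recurrent application over tau slots
    x_t = rng.standard_normal(k1)       # stand-in for concatenated CNN features
    h, c = lstm_step(x_t, h, c)
print(h.shape)                          # (64,)
```

Only `h` after the last step is passed on to the FC layer, matching the description above.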
\subsubsection{\underline{FC Layer}}
To fully exploit the features extracted by the previous layers, an FC layer with a linear activation function is connected to the LSTM module to generate the desired output.
Finally, the HCL-Net can be expressed as
\begin{equation}\label{HCL_output}
h_{\varsigma}(\tilde{\mathbf{\Omega}}_n^{\tau}) = [ \mathrm{Re}\{\mathbf{W}_n\}, \mathrm{Im}\{\mathbf{W}_n\} ],
\end{equation}
where $h_{\varsigma}(\cdot)$ denotes the non-linear mapping of the HCL-Net with network parameters $\varsigma$. In this case, the optimized beamforming matrix can be written as
\begin{equation}\label{W_n_net}
\mathbf{W}_n = \mathcal{F}(h_{\varsigma}(\tilde{\mathbf{\Omega}}_n^{\tau})) \in \mathbb{C}^{N_t\times K},
\end{equation}
where $\mathcal{F}(\cdot):\mathbb{R}^{N_t\times 2K}\mapsto\mathbb{C}^{N_t\times K}$ is the mapping function to generate a complex-valued matrix.
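A minimal sketch of the mapping $\mathcal{F}(\cdot)$, assuming the network output stacks the real and imaginary parts column-wise as $[\mathrm{Re}\{\mathbf{W}_n\}, \mathrm{Im}\{\mathbf{W}_n\}]$; the concrete sizes are illustrative.

```python
import numpy as np

# Sketch of F(.): fold the real-valued network output of shape (N_t, 2K)
# back into the complex beamforming matrix of shape (N_t, K).
def to_complex(out, K):
    return out[:, :K] + 1j * out[:, K:]

N_t, K = 4, 3
out = np.arange(N_t * 2 * K, dtype=float).reshape(N_t, 2 * K)
W = to_complex(out, K)
print(W.shape, W.dtype)    # (4, 3) complex128
```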
The rationale of the proposed HCL-Net can be summarized into two points:
(a) A convolutional LSTM structure is adopted in HCL-Net owing to its powerful capability in extracting both spatial and temporal features from the input, which further improves the learning performance; (b) we adopt $K$ four-layer CNN modules and one LSTM module in the developed HCL-Net to balance the tradeoff between the learning performance and the neural network complexity.
\textbf{Remark 3}: It should be highlighted that, due to the powerful scalability of DNNs, the proposed HCL-Net architecture is scalable: it can be easily extended by altering the neural network input and output sizes according to the number of vehicles and the number of transmit/receive antennas at the RSU. That is, the proposed HCL-Net is suitable for different system deployments, which will be verified via the simulation results in Section \Rmnum{5}.
\subsection{HCL-Net-based Predictive Beamforming Algorithm}
In this subsection, we adopt the designed HCL-Net as the core DNN in Fig. \ref{Fig:PBF_framework} and propose an HCL-Net-based predictive beamforming algorithm, which consists of offline training and online optimization. The details of the algorithm are introduced in the following.
\subsubsection{\underline{Offline Training}}
Consider an unlabeled training set $\mathcal{X} = \left\{(\tilde{\mathbf{\Omega}}_n^{\tau(1)},\bar{\mathbf{H}}_{n}^ {(1)} ), (\tilde{\mathbf{\Omega}}_n^{\tau(2)},\bar{\mathbf{H}}_{n}^ {(2)} ), \cdots, \right. $ $\left. ( \tilde{\mathbf{\Omega}}_n^{\tau(N_e)},\bar{\mathbf{H}}_{n}^{ (N_e)} ) \right\}$, where $N_e$ is the number of training examples and $(\tilde{\mathbf{\Omega}}_n^{\tau(i)},\bar{\mathbf{H}}_{n}^ {(i)} )$ denotes the $i$-th, $i \in \{1,2,\cdots,N_e\}$, training example in $\mathcal{X}$.
According to the developed framework in Fig. \ref{Fig:PBF_framework}, we can then define the cost function as
\begin{equation}\label{cost_function}
\begin{aligned}
&J_{\mathrm{HCL-Net}}(\varsigma)\\
& = -\frac{1}{N_e}\sum_{i=1}^{N_e}
\sum_{k = 1}^K\mathrm{log}_2\left( 1 + \frac{\left| (\mathbf{h}_{k,n}^{(i)})^H\mathbf{w}_{k,n}^{(i)}(\varsigma)\right|^2 }{ {\sum_{k^{'} \neq k}^K \left| (\mathbf{h}_{k,n}^{(i)})^H\mathbf{w}_{k^{'},n}^{(i)}(\varsigma)\right|^2 + \sigma_{k}^2}}\right) \\
& + \lambda_1 \left[\max\left(0,\frac{1}{N_eK}\sum_{i=1}^{N_e} \sum_{k=1}^K\mathrm{CRLB}(\theta_{k,n}^{(i)},\mathbf{w}_{k,n}^{(i)}(\varsigma)) - \gamma_{\theta} \right)\right]^2 \\
& + \lambda_2\left[\max\left(0, \frac{1}{N_eK}\sum_{i=1}^{N_e} \sum_{k=1}^K\mathrm{CRLB}(d_{k,n}^{(i)},\mathbf{w}_{k,n}^{(i)}(\varsigma)) - \gamma_d \right)\right]^2 \\
& + \lambda_3 \frac{1}{N_e}\sum_{i=1}^{N_e} \left[\max\left(0, \|\mathbf{W}_{n}^{(i)}(\varsigma)\|_F^2 - P \right)\right]^2.
\end{aligned}
\end{equation}Here, the corresponding $\mathrm{CRLB}(\cdot,\cdot)$ were defined in (\ref{CRLB_theta}) and (\ref{CRLB_d}), respectively, and $\mathbf{w}_{k,n}^{(i)}(\varsigma)$ is the $k$-th column of $\mathbf{W}_{n}^{(i)}(\varsigma)=\mathcal{F}(h_{\varsigma}(\tilde{\mathbf{\Omega}}_n^{\tau(i)}))$, as defined in (\ref{W_n_net}). For the maximum operators, i.e., $\max(\cdot,\cdot)$ in (\ref{cost_function}), a rectified linear unit (ReLU) function \cite{goodfellow2016deep}, i.e., $f_{\mathrm{R}}(x) = \max(0,x)$ can be adopted to replace them to facilitate the network training.
Based on (\ref{cost_function}), we can then adopt the back propagation algorithm (BPA) to update the network parameters $\varsigma$ progressively to minimize the cost function value.
After the training process, a well-trained HCL-Net can be expressed as
\begin{equation}\label{well_trained_HCL}
h_{\varsigma^*}(\tilde{\mathbf{\Omega}}_n^{\tau}) = [ \mathrm{Re}\{\mathbf{W}_n^*\}, \mathrm{Im}\{\mathbf{W}_n^*\} ], \forall n,
\end{equation}
where $h_{\varsigma^*}(\cdot)$ denotes the well-trained HCL-Net with the well-trained network parameters $\varsigma^*$, and $\mathbf{W}_n^*$ denotes the optimized predictive beamforming matrix based on the HCL-Net.
\begin{table}[t]
\small
\centering
\begin{tabular}{l}
\toprule[1.8pt] \vspace{-0.3 cm}\\
\hspace{-0.1cm} \textbf{Algorithm 1} {HCL-Net-based Predictive Beamforming Algorithm} \vspace{0.2 cm} \\
\toprule[1.8pt] \vspace{-0.3 cm}\\
\textbf{Initialization:} $i_t = 0$, and $I_t = N_{\max}$ \\
\hspace{2.2cm}$\varsigma$ with random weights \\
\hspace{2.2cm}a training set $\mathcal{X}$ \\
\textbf{Unsupervised Offline Training:} \\
1:\hspace{0.75cm}\textbf{Input:} Training set $\mathcal{X}$\\
2:\hspace{1.1cm}\textbf{while} $i_t \leq I_t $ \textbf{do} \\
3:\hspace{1.6cm}Update $\varsigma$ by BPA to minimize $J_{\mathrm{HCL-Net}}(\varsigma)$ in (\ref{cost_function}) \\
\hspace{1.8cm} $i_t = i_t + 1$ \\
4:\hspace{1.1cm}\textbf{end while} \\
5:\hspace{0.75cm}\textbf{Output}: Well-trained ${h}_{\varsigma^*}( \cdot ) $ as defined in (\ref{well_trained_HCL})\\
\textbf{Online Beamforming Design:} \\
6:\hspace{0.75cm}\textbf{Input:} Test data $\tilde{\mathbf{\Omega}}_m^{\tau}= \mathcal{M}([ \mathrm{Re}\{\mathbf{\Omega}_m^{\tau}\}, \mathrm{Im}\{\mathbf{\Omega}_m^{\tau}\} ])$ \\
7:\hspace{1.1cm}\textbf{do} Predictive Beamforming using \\
\hspace{1.7cm} well-trained HCL-Net ${h}_{\varsigma^*}( \cdot ) $ \\
8:\hspace{0.75cm}\textbf{Output:} $\mathbf{W}_m^* = \mathcal{F}(h_{\varsigma^*}(\tilde{\mathbf{\Omega}}_m^{\tau}))$ \vspace{0.2cm}\\
\bottomrule[1.8pt]
\end{tabular}
\end{table}
\subsubsection{\underline{Online Optimization}}
Given a test example $\mathbf{\Omega}_m^{\tau}$, $m \neq n$, we can obtain the network input, i.e., $\tilde{\mathbf{\Omega}}_m^{\tau}= \mathcal{M}([ \mathrm{Re}\{\mathbf{\Omega}_m^{\tau}\}, \mathrm{Im}\{\mathbf{\Omega}_m^{\tau}\} ])$. Then, we send $\tilde{\mathbf{\Omega}}_m^{\tau}$ into the well-trained HCL-Net to obtain the optimized predictive beamforming matrix:
\begin{equation}\label{W*_n_net_test}
\mathbf{W}_m^* = \mathcal{F}(h_{\varsigma^*}(\tilde{\mathbf{\Omega}}_m^{\tau})).
\end{equation}
\subsubsection{ \underline{Algorithm Steps}}
Based on the above discussion, we then summarize the proposed algorithm steps in \textbf{Algorithm 1}, where $i_t$ is the iteration index and $I_t = N_{\max}$ is the maximum iteration number.
\subsection{Complexity Analysis}
The complexity of the proposed algorithm consists of two parts: offline training and online beamforming. Both processes require evaluating the proposed neural network, whose computation is dominated by the CNN modules and the LSTM module.
According to \cite{he2015convolutional}, the computational complexity of a CNN module at each time step is $\mathcal{O}\left( \sum_{l = 1}^{N_L}n_{l-1} s_l^2 n_l a_l b_l\right)$.
Here, $\mathcal{O}(\cdot)$ represents the order of the computational complexity, $N_L$ represents the number of convolutional layers, and $n_0$ is the input dimension.
In addition, the subscript $l$, $l \in \{1,2,\cdots, N_L\}$, denotes the index of the convolutional layer, and $n_l$ and $s_l$ are the number of channels and the spatial size of the filter in layer $l$, respectively. Moreover, $a_l$ and $b_l$ are the length and the width of the output feature map, respectively.
Note that we adopt $K$ CNN modules to independently handle the channel inputs from the $K$ vehicles at each time step and there are $\tau$ time steps in the proposed neural network.
In this case, the complexity of the calculation of CNN modules is $\mathcal{O}\left( \tau K\sum_{l = 1}^{N_L}n_{l-1} s_l^2 n_l a_l b_l\right)$.
Concerning the complexity of the LSTM module, one LSTM unit is recurrently adopted for all the $\tau$ time steps.
According to \cite{hochreiter1997long, tsironi2017analysis}, the complexity of an LSTM unit per time step is $\mathcal{O}\left( 4(\kappa_1\kappa_2 + \kappa_2^2 + \kappa_2 )\right)$, where $\kappa_1$ and $\kappa_2$ represent the input dimension and the output dimension of the LSTM unit, respectively.
Since there are $\tau$ time steps in the proposed neural network, the complexity of the calculation of the LSTM module is $\mathcal{O}\left( 4\tau(\kappa_1\kappa_2 + \kappa_2^2 + \kappa_2 )\right)$.
Based on the above discussion, the complexity of the offline training of the proposed algorithm is
\begin{equation}\label{}
\begin{aligned}
&C_\mathrm{offline} \\
&= \mathcal{O}\hspace{-0.05cm}\left(\hspace{-0.05cm} I_tN_e\hspace{-0.05cm}\left(\hspace{-0.05cm}\tau K\sum\limits_{l = 1}^{N_L}n_{l-1} s_l^2 n_l a_l b_l + 4\tau(\kappa_1\kappa_2 + \kappa_2^2 + \kappa_2 )\hspace{-0.05cm}\right)\hspace{-0.05cm}\right),
\end{aligned}
\end{equation}
where $I_t$ and $N_e$ are the maximum iteration number and the number of training examples, respectively, as defined in Section IV-C.
Moreover, the complexity of the online beamforming of the proposed algorithm is
\begin{equation}\label{}
C_\mathrm{online} = \mathcal{O}\left( \tau K\sum\limits_{l = 1}^{N_L}n_{l-1} s_l^2 n_l a_l b_l + 4\tau(\kappa_1\kappa_2 + \kappa_2^2 + \kappa_2 )\right).
\end{equation}
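The two complexity expressions can be evaluated numerically as below. The layer sizes are illustrative assumptions (a single convolutional layer with $n_0 = 2$, $s = 3$, $n = 4$, and an $8\times 8$ output map, plus a hypothetical $I_t = 10$), not the exact values of the implemented network.

```python
# Evaluating the dominant operation counts C_online and C_offline from the
# complexity expressions; concrete layer sizes here are assumptions.
def cnn_cost(layers, tau, K):
    """tau * K * sum_l n_{l-1} * s_l^2 * n_l * a_l * b_l  (CNN modules)."""
    return tau * K * sum(n_prev * s * s * n * a * b
                         for (n_prev, s, n, a, b) in layers)

def lstm_cost(tau, k1, k2):
    """4 * tau * (k1*k2 + k2^2 + k2)  (LSTM module)."""
    return 4 * tau * (k1 * k2 + k2 * k2 + k2)

tau, K = 6, 3
layers = [(2, 3, 4, 8, 8)]              # one conv layer: n_0=2, s=3, n=4, a=b=8
online = cnn_cost(layers, tau, K) + lstm_cost(tau, 96, 64)
offline = 10 * 2000 * online            # hypothetical I_t = 10, N_e = 2000
print(online)                           # 330240
```

As the expressions show, the offline cost is simply the online cost scaled by $I_t N_e$, so the online beamforming stage is lightweight by comparison.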
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{scenario_simulations_v2.pdf}
\caption{The considered V2I system for simulations.}\label{Fig:scenario_simulations}
\end{figure}
\section{Numerical Results}
In this section, we conduct simulations in an ISAC-assisted V2I network under mmWave setting and provide extensive numerical results to verify the effectiveness of the proposed method.
In the considered V2I network, one RSU equipped with $N_t$ transmit antennas and $N_r$ receive antennas serves $K$ single-antenna vehicles, as defined in the system model.
Unless otherwise specified, we set $N_t = N_r = 32$, $K = 3$, $\sigma_z^2 = \sigma_k^2 = -80~\mathrm{dBm}$.
Fig. \ref{Fig:scenario_simulations} adopts the two-dimensional (2D) coordinate system to illustrate the relative positions of the RSU and the $K$ vehicles, where the RSU is located at $[0~\mathrm{m}, 0~\mathrm{m}]$.
Without loss of generality, the initial positions of vehicles are set randomly, which are formulated as $[x_k,y_k] = [\tilde{x}_k + \Delta x,\tilde{y}_k + \Delta y]$. Here, $[x_k, y_k]$ denotes the coordinate of the $k$-th vehicle, $[\tilde{x}_k,\tilde{y}_k]$ is the mean initial position, and $\Delta x, \Delta y \sim \mathcal{N}(0,1)$ are random variables to characterize the uncertainty of initial positions.
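The randomized initialization above can be sketched as follows, using the mean initial positions listed in Table \ref{Tab:Simulation_settings}:

```python
import numpy as np

# Each vehicle's initial position is its mean initial position perturbed by
# zero-mean unit-variance Gaussian noise (Delta x, Delta y ~ N(0, 1)).
rng = np.random.default_rng(0)
mean_positions = np.array([[15.0, 20.0],    # vehicle 1
                           [25.0, 20.0],    # vehicle 2
                           [35.0, 20.0]])   # vehicle 3
positions = mean_positions + rng.standard_normal(mean_positions.shape)
print(positions.shape)   # (3, 2)
```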
Accordingly, we set $\Delta T = 0.02~\mathrm{s}$ and the average velocity of the $k$-th vehicle is set as $v_{k,n} \sim \mathcal{U}(8~\mathrm{m/s}, 8.25~\mathrm{m/s})$, i.e., the average velocity for vehicles is around $30~\mathrm{km/h}$.
For the observation model, we set $\xi= 10$ and $\rho_{\nu} = 2.0 \times 10^{-6}$.
For the optimization problem, we set $\tau = 6 $, $\gamma_{\theta} = 0.01~\mathrm{rad}^2$, $\gamma_d = 0.01~\mathrm{m}^2$, and $\lambda_1 = \lambda_2 = \lambda_3 = 10^3$.
In addition, the historical estimated motion parameters are set with a normalized mean squared error (NMSE) of $0.01$.
The other default settings for simulations are presented in Table \ref{Tab:Simulation_settings}.
\begin{table}[t]
\normalsize
\caption{Default Settings in Simulations}\label{Tab:Simulation_settings}
\centering
\small
\renewcommand{\arraystretch}{0.6}
\begin{tabular}{c c}
\hline
\\
\textbf{Parameters} & \textbf{Default Values} \vspace{0.2cm} \\
\hline
\\
Path loss exponent & $\zeta = 2.55$ \\
\\
Path loss \cite{niu2015survey} at $d_0 = 1~\mathrm{m}$ & $\alpha_0 = -70~\mathrm{dB}$ \\
\\
Fading coefficient \cite{yuan2020bayesian} & $\varrho = 10 + 10j$ \\
\\
Position of RSU & $[0~\mathrm{m},0~\mathrm{m}]$ \\
\\
Mean initial position of vehicle 1 & $[15~\mathrm{m},20~\mathrm{m}]$ \\
\\
Mean initial position of vehicle 2 & $[25~\mathrm{m},20~\mathrm{m}]$ \\
\\
Mean initial position of vehicle 3 & $[35~\mathrm{m},20~\mathrm{m}]$ \\
\\
Carrier frequency & $f_c = 30~\mathrm{GHz}$ \\
\\
Speed of signal propagation & $c = 3\times 10^{8}~\mathrm{m/s}$ \\
\\
Maximum CRLB threshold (angle) & $\gamma_{\theta} = 0.01~\mathrm{rad}^2$\\
\\
Maximum CRLB threshold (distance) & $\gamma_d = 0.01~\mathrm{m}^2$ \\
\\
\hline
\end{tabular}
\end{table}
To evaluate the beamforming performance of the proposed method, we adopt the following three methods as benchmarks for comparisons:
\begin{itemize}
\item Benchmark 1 (Upper bound): It is a genie-aided scheme in which downlink multiple access interference does not exist. Besides, constraints (\ref{P1_C_theta}) and (\ref{P1_C_d}) are ignored. Also, perfect ICSI is available such that the optimal beamforming can be obtained \cite{rao2003performance} to achieve the upper bound performance of problem (P1).
\item Benchmark 2 (Naive DL method): In this scheme, only the channels estimated at time slot $n-1$, i.e., $\tilde{\mathbf{H}}_{n-1}$, are available, and this method naively regards $\tilde{\mathbf{H}}_{n-1}$ as the channel at time slot $n$. In contrast to the proposed method, this method adopts a fully-connected neural network to extract features from $\tilde{\mathbf{H}}_{n-1}$ for the beamforming design at time slot $n$.
\item Benchmark 3 (Random beamforming method): A random beamforming scheme is adopted, where the beamforming matrix is set randomly without the consideration of constraints (\ref{P1_C_theta}) and (\ref{P1_C_d}).
\end{itemize}
For the proposed method, the adopted hyperparameters for simulations are based on Table \ref{Tab:Hyperparameters HCL-Net}, the numbers of training examples and test examples are both set as $2,000$\footnotemark\footnotetext{In the simulations, since the illustrative system setting is not complicated and the adopted HCL-Net is with few parameters, only a relatively small number of training examples are required \cite{goodfellow2016deep}. Indeed, our simulation results show that a training set with a total number of $2,000$ examples is sufficient to achieve an excellent performance.}, and the number of epochs \cite{goodfellow2016deep}, i.e., the number of times that the learning algorithm works through the entire training dataset, is set as $6$.
In addition, each point in the simulation results is obtained via averaging over $2,000$ Monte Carlo realizations.
In the following, we will conduct simulations in terms of communication performance, sensing performance, and the neural network training performance, respectively.
\begin{figure}[t]
\centering
\includegraphics[width=3in,height=2.6in]{paperuse_rate_power_v2.pdf}
\caption{The average achievable sum-rate with different transmit powers under $N_t = N_r = 32$.}\label{Fig:Rate_P}
\end{figure}
\subsection{Communication Performance}
To evaluate the communication performance, we investigate the achievable sum-rate of the V2I network by varying the power budget, the number of transceiver antennas, and the number of vehicles, respectively.
Fig. \ref{Fig:Rate_P} presents the average achievable sum-rate versus the maximum transmit power $P$ at the RSU. It is shown that the achievable sum-rates of all the considered algorithms increase with $P$. This is because a higher power budget improves the received signal strengths at vehicles.
In particular, the random beamforming method exhibits a very limited performance since the random beamformer fails to align with the desired channels between the RSU and the vehicles. On the other hand, although the naive DL method outperforms the random beamforming method, it only offers a small performance gain. The reason is that for the naive DL method, only outdated channel information is available for optimizing the beamforming matrix, which can hardly track the high-speed vehicles in V2I networks, leading to inefficient beamforming.
Different from these two methods, the proposed method presents a significant performance improvement and its achievable sum-rate closely approaches that of the upper bound exploiting perfect ICSI.
Specifically, the sum-rate of our proposed method enjoys the same slope as the upper bound method in exploiting the transmit power for performance improvement. In particular, a performance gain of $10~\mathrm{dB}$ is achieved at a data rate of $3~\mathrm{bits/s/Hz}$ compared with the naive DL method.
This is expected since our proposed method can intelligently predict the beamforming matrix by exploiting features of the historical estimated channels to further improve the sum-rate performance.
\begin{figure}[t]
\centering
\includegraphics[width=3in,height=2.6in]{rate_power_with_otherDNNs.pdf}
\caption{The average achievable sum-rate with different transmit powers under $N_t = N_r = 32$.}\label{Fig:Reviewer1_rate_power_comparison}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=3in,height=2.6in]{paperuse_rate_N_v2.pdf}
\caption{The average achievable sum-rate with different number of antennas under $P = 25~\mathrm{dBm}$ and $N_t = N_r = N_a$.}\label{Fig:Rate_N}
\end{figure}
To study the impact of different neural network structures on the sum-rate performance, we compare our proposed HCL-Net with the classical LSTM structure and the classical CNN structure.
As shown in Fig. \ref{Fig:Reviewer1_rate_power_comparison}, the proposed HCL-Net method achieves the best sum-rate performance and outperforms the other two neural networks significantly.
The reason is that both the LSTM and the CNN can only exploit either temporal or spatial features, while the proposed HCL-Net leverages the CNN and LSTM modules to exploit both the spatial and the temporal features of communication channels simultaneously to further improve the sum-rate performance. Therefore, although the proposed HCL-Net has a higher computational complexity than the LSTM and the CNN, it achieves an excellent sum-rate performance.
\begin{figure}[t]
\centering
\subfigure[]{
\includegraphics[width=3in,height=2.6in]{rate_speed_v2.pdf}
\label{Fig:rate_speed}}
\\
\subfigure[]{
\includegraphics[width=3in,height=2.6in]{rate_vehicles.pdf}
\label{Fig:rate_vehicles}}
\\
\caption{The effect of the vehicles' parameters on the ISAC system performance under $N_t = N_r = 32$ and $P = 30~\mathrm{dBm}$. (a) The average achievable sum-rate versus the average velocity of vehicles; (b) The average achievable sum-rate versus the number of vehicles.}
\end{figure}
On the other hand, the number of transmit/receive antennas at the RSU also has an important impact on the sum-rate performance.
For ease of study, we set $N_t = N_r = N_a$, where $N_a$ denotes the number of transmit/receive antennas at the RSU.
Then, we fix $P = 25~\mathrm{dBm}$ and present the simulation results of the average achievable sum-rate under different numbers of antennas in Fig. \ref{Fig:Rate_N}.
It can be seen that when the number of antennas equipped at the RSU increases, the random beamforming method only achieves a marginal sum-rate gain, while all the other methods achieve remarkable performance gains.
The reason is that the random beamforming policy can hardly exploit the degrees-of-freedom in the spatial domain for focusing the information carrying beams on the desired directions for interference management.
Similar to the results in Fig. \ref{Fig:Rate_P}, our proposed method significantly outperforms the naive DL method and the random beamforming method while approaching the performance of the upper bound method.
For example, when $N_t = N_r = 28$, the average achievable sum-rate of our proposed method is around $5~\mathrm{bits/s/Hz}$, which is $1.5$ times higher than that of the naive DL method.
In fact, our proposed HCL-Net method can exploit the DNN's powerful scalability, thus presenting satisfactory performance under different numbers of antennas.
\begin{figure}[t]
\centering
\subfigure[]{
\includegraphics[width=3in,height=2.6in]{rate_time_steps.pdf}
\label{Fig:rate_time_steps}}
\\
\subfigure[]{
\includegraphics[width=3in,height=2.6in]{rate_lambda.pdf}
\label{Fig:rate_lambda}}
\caption{The effect of the algorithm's parameters on the ISAC system performance under $N_t = N_r = 32$ and $P = 30~\mathrm{dBm}$. (a) The average achievable sum-rate versus the number of historical time steps; (b) The average achievable sum-rate versus the penalty parameters with $\lambda_1 = \lambda_2 = \lambda_3=\lambda$.}
\end{figure}
In the following, we will investigate the effect of the vehicles' parameters on the communication performance.
Fig. \ref{Fig:rate_speed} presents the curves of the achievable sum-rate versus the average velocity, where we vary the velocity from $10~\mathrm{km/h}$ to $60~\mathrm{km/h}$ to obtain the test results.
It can be observed that for the upper bound method and the random beamforming method, the achievable sum-rate remains almost the same under different velocities since they do not need to explore the temporal dependency of channels for beamforming design.
In addition, the sum-rate of the naive method decreases as the vehicles' average velocity increases. The reason is that the naive method only exploits the current channel for the beamforming design in the future time slot, while an increased velocity makes the future channel differ significantly from the current channel.
In contrast to the naive method, the performance of our proposed method still approaches that of the upper bound under different velocities since it can leverage deep learning technology to implicitly and accurately predict the channel of the next time slot for predictive beamforming design.
The curves of the achievable sum-rate under different number of vehicles are shown in Fig. \ref{Fig:rate_vehicles}, where we vary the size of the proposed HCL-Net according to the number of vehicles $K$ to test its performance.
The results show that the achievable sum-rate increases with the number of vehicles since the increased number of vehicles can introduce certain multi-user diversity gain.
In addition, the sum-rate performance of the proposed method can still approach that of the upper bound method while outperforming the other two methods significantly.
The reason is that the proposed algorithm can exploit the powerful scalability of DNN to efficiently exploit the multi-user diversity gain to facilitate the beamforming design.
Besides, we study the effect of the algorithm's parameters on the system performance.
The simulation results of the achievable sum-rate versus the number of historical time steps $\tau$ are presented in Fig. \ref{Fig:rate_time_steps}.
It can be observed that for the upper bound method, the naive DL method, and the random beamforming method, the sum-rate performance remains almost constant when $\tau$ changes.
This is as expected since these three methods do not exploit the temporal dependency for beamforming design.
In contrast, the performance of our proposed method increases with $\tau$, since the developed HCL-Net can fully exploit the temporal dependency embedded in the historical channels to further improve the beamforming design.
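To make the temporal-feature argument concrete, the following is a minimal single-channel convolutional-LSTM step in plain NumPy, fed with $\tau$ historical snapshots. The kernel sizes, gate layout, and variable names are illustrative assumptions only; the actual HCL-Net architecture is the one described earlier in the paper.

```python
import numpy as np

def conv1d_same(x, k):
    # 'Same'-padded 1-D convolution along the spatial (antenna) axis.
    pad = len(k) // 2
    return np.convolve(np.pad(x, pad, mode="edge"), k, mode="valid")

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def convlstm_step(x, h, c, p):
    # Standard LSTM gate equations, but every matrix product is replaced
    # by a convolution, so the antenna dimension is preserved.
    i = sigmoid(conv1d_same(x, p["ki"]) + conv1d_same(h, p["ri"]))
    f = sigmoid(conv1d_same(x, p["kf"]) + conv1d_same(h, p["rf"]))
    o = sigmoid(conv1d_same(x, p["ko"]) + conv1d_same(h, p["ro"]))
    g = np.tanh(conv1d_same(x, p["kg"]) + conv1d_same(h, p["rg"]))
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

rng = np.random.default_rng(1)
Nt, tau = 32, 5                       # antennas, historical time steps (illustrative)
p = {k: rng.standard_normal(3) * 0.1 for k in
     ("ki", "kf", "ko", "kg", "ri", "rf", "ro", "rg")}
h = c = np.zeros(Nt)
for t in range(tau):                  # feed tau historical channel snapshots
    x = rng.standard_normal(Nt)       # stand-in for one real-valued channel slice
    h, c = convlstm_step(x, h, c, p)
print(h.shape)                        # hidden state keeps the antenna dimension
```

The hidden state accumulates information across all $\tau$ snapshots, which is why performance in Fig. \ref{Fig:rate_time_steps} improves as more history is provided.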
On the other hand, to study the effect of the penalty parameters on the system performance, we let $\lambda_1 = \lambda_2 = \lambda_3=\lambda$ and vary the value of $\lambda$ to obtain the simulation results, as shown in Fig. \ref{Fig:rate_lambda}.
It can be observed that the performance of the upper bound method and the random beamforming method does not change with $\lambda$ since they do not adopt the penalty method.
In addition, we find that for the proposed method and the naive DL method, the achievable sum-rate increases with $\lambda$.
This is as expected: if $\lambda$ is small, large constraint violations are likely, which results in infeasible solutions and eventually leads to a low average achievable sum-rate.
Therefore, a larger $\lambda$ contributes to the improvement of the achievable sum-rate.
In particular, the achievable sum-rate converges to the optimal upper bound once $\lambda\geq 1000$ in all the considered cases, which verifies that the setting of $\lambda_1 = \lambda_2 = \lambda_3 = 1000$ in our simulations is reasonable.
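The role of $\lambda$ can be sketched on a toy problem. This is a generic quadratic-penalty illustration with an assumed one-dimensional objective and constraint, not the paper's exact formulation:

```python
import numpy as np

def penalized_loss(objective, constraints, x, lam):
    """Unconstrained surrogate: -objective + lam * sum of squared violations.

    constraints: list of functions g with the convention g(x) <= 0.
    """
    violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
    return -objective(x) + lam * violation

# Toy example: maximize -(x-2)^2 subject to x <= 1, i.e. g(x) = x - 1 <= 0.
objective = lambda x: -(x - 2.0) ** 2
g = lambda x: x - 1.0

xs = np.linspace(-2, 4, 6001)
for lam in (0.1, 10.0, 1000.0):
    x_star = xs[np.argmin([penalized_loss(objective, [g], x, lam) for x in xs])]
    # Larger lambda drives the minimizer toward the feasible boundary x = 1.
    print(f"lambda={lam:7.1f}  minimizer ~ {x_star:.3f}")
```

A small $\lambda$ leaves the minimizer deep in the infeasible region (near the unconstrained optimum $x=2$), mirroring how small penalty weights yield constraint-violating beamformers, while $\lambda = 1000$ pushes the solution essentially onto the feasible set.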
\begin{figure}[t]
\centering
\includegraphics[width=3in,height=2.6in]{paperuse_sqrt_CRLB_angle_power_v2.pdf}
\caption{The square root of CRLB achieved by the proposed method in terms of angle estimation with different transmit powers and numbers of antennas under $\gamma_{\theta} = 0.01~\mathrm{rad}^2$ and $\gamma_d = 0.01~\mathrm{m}^2$.}\label{Fig:Sqrt_CRLB_angle_P}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=3in,height=2.6in]{paperuse_sqrt_CRLB_range_power_v2.pdf}
\caption{The square root of CRLB achieved by the proposed method in terms of distance estimation with different transmit powers and numbers of antennas under $\gamma_{\theta} = 0.01~\mathrm{rad}^2$ and $\gamma_d = 0.01~\mathrm{m}^2$.}\label{Fig:Sqrt_CRLB_range_P}
\end{figure}
\subsection{Sensing Performance}
To evaluate the sensing performance of our proposed method, we investigate the achieved tracking performance of vehicles' motion parameters by using the predictive beamforming matrix.
For ease of study, we adopt the square root of the CRLB, i.e., the root-MSE (RMSE) lower bound of parameter estimation, as the metric to characterize the sensing performance in the following simulations.
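As a reminder of how this metric is obtained: the CRLB of each parameter is the corresponding diagonal entry of the inverse Fisher information matrix, and its square root lower-bounds the RMSE. The sketch below uses a toy $2\times 2$ Fisher matrix for (angle, distance) whose entries are illustrative assumptions, not values derived from the paper's signal model:

```python
import numpy as np

# Toy Fisher information matrix for (theta, d); entries are illustrative only.
fim = np.array([[4.0e4, 1.0e2],
                [1.0e2, 2.5e5]])

crlb = np.diag(np.linalg.inv(fim))   # per-parameter CRLB
rmse_bound = np.sqrt(crlb)           # square root of CRLB = RMSE lower bound
print(f"angle RMSE bound    >= {rmse_bound[0]:.3e} rad")
print(f"distance RMSE bound >= {rmse_bound[1]:.3e} m")
```

Larger Fisher-information entries (e.g., from a higher transmit power or more antennas) shrink the inverse and hence the bound, which is the mechanism behind the trends in Figs. \ref{Fig:Sqrt_CRLB_angle_P} and \ref{Fig:Sqrt_CRLB_range_P}.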
Denoting by $\mathrm{CRLB}^{\frac{1}{2}}$ the square root of the CRLB, Fig. \ref{Fig:Sqrt_CRLB_angle_P} illustrates the angle estimation results under different transmit powers and numbers of antennas, where the maximum tolerable CRLB thresholds in (P1) are set as $\gamma_{\theta} = 0.01~\mathrm{rad}^2$ and $\gamma_d = 0.01~\mathrm{m}^2$, as shown in Table \ref{Tab:Simulation_settings}.
We can observe that all the $\mathrm{CRLB}^{\frac{1}{2}}$ values are on the order of $10^{-3}~\mathrm{rad}$ to $10^{-2}~\mathrm{rad}$, well below the constrained thresholds. Thus, our proposed method can achieve a satisfactory tracking accuracy for V2I networks.
In addition, the $\mathrm{CRLB}^{\frac{1}{2}}$ values of all the curves decrease as $P$ increases.
For example, when $N_t = N_r = 48$ and the transmit power increases from $10~\mathrm{dBm}$ to $15~\mathrm{dBm}$, the $\mathrm{CRLB}^{\frac{1}{2}}$ dramatically decreases from $10^{-2}~\mathrm{rad}$ to $10^{-3}~\mathrm{rad}$.
The reason is that a larger transmit power improves the received SNR at the RSU when receiving the echoes; the impact of noise thus becomes relatively weak and a more accurate angle estimation can be achieved.
Moreover, it can be seen that increasing the number of antennas at the RSU decreases the achievable $\mathrm{CRLB}^{\frac{1}{2}}$. This is expected since a large-scale antenna array brings an improved antenna gain to the RSU system.
Based on these results, we observe that increasing the power budget or the number of antennas at the RSU enhances the received signal strengths at both the RSU and the vehicles, thus improving the sensing and communication performance simultaneously.
Correspondingly, the $\mathrm{CRLB}^{\frac{1}{2}}$ results in terms of distance estimation under different transmit powers and number of antennas are presented in Fig. \ref{Fig:Sqrt_CRLB_range_P}.
It can be observed that when $N_t = N_r = 48$, the proposed method can achieve a $\mathrm{CRLB}^{\frac{1}{2}}$ value of $10^{-4}~\mathrm{m}$, which is sufficient for accurately tracking high-speed vehicles.
Therefore, by exploiting the DL approach to solve the predictive beamforming problem for V2I networks, the proposed method not only achieves a satisfactory communication rate but also guarantees the sensing performance.
\begin{figure}[t]
\centering
\includegraphics[width=3in,height=2.6in]{paperuse_testing_rate_epoch_v2.pdf}
\caption{The testing average achievable sum-rate with different epochs under $N_t = N_r = 32$.}\label{Fig:Testing_rate_epoch}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=3in,height=2.3in]{paperuse_sqrt_CRLB_epoch_v2.pdf}
\caption{The square root of CRLB with different epochs under $N_t = N_r = 32$: The upper half is the angle estimation performance and the lower half is the distance estimation performance.}\label{Fig:Sqrt_CRLB_epoch}
\end{figure}
\subsection{Training Performance of HCL-Net for ISAC}
To investigate the training performance of the proposed HCL-Net for ISAC, we evaluate the communication and sensing performance under different numbers of training epochs in the following.
To demonstrate the effect of neural network training on the communication performance, Fig. \ref{Fig:Testing_rate_epoch} presents the testing sum-rate versus the number of training epochs under $N_t = N_r = 32$.
It can be observed that the testing sum-rate increases with the number of training epochs. Also, for different maximum transmit powers, the proposed HCL-Net can quickly converge to a stable sum-rate within only $6$ training epochs.
The reason is that the adopted convolutional LSTM structure requires a relatively small number of parameters; thus, the designed HCL-Net is efficient for fast network training.
Correspondingly, the testing $\mathrm{CRLB}^{\frac{1}{2}}$ curves of both the angle estimation and the distance estimation under different training epochs are presented in Fig. \ref{Fig:Sqrt_CRLB_epoch} to evaluate the convergence of the proposed method on the sensing task.
It is shown that the performance of both the angle and distance estimations can converge within $6$ epochs, which demonstrates the practicality of the proposed HCL-Net method.
\begin{figure}[t]
\centering
\includegraphics[width=3in,height=2.6in]{sensing_communication_tradeoff_v2.pdf}
\caption{Sensing-communication tradeoff under $N_t = N_r = 32$ and $\gamma_{\theta}=\gamma_d=\gamma$.}\label{Fig:tradeoff}
\end{figure}
\subsection{Sensing-Communication Tradeoff}
To study the interplay between sensing and communication of the proposed method, we plot the maximum tolerable CRLB threshold versus the achievable sum-rate in Fig. \ref{Fig:tradeoff}. We let $\gamma_{\theta}=\gamma_d=\gamma$ and vary the maximum tolerable CRLB threshold $\gamma$ to obtain the corresponding sum-rate.
It can be observed that the sum-rate increases with $\gamma$ and the sum-rate saturates when $\gamma \geq 9\times 10^{-3}$.
This unveils an interesting sensing-communication tradeoff: when the sensing constraints are loose, the beamforming design can further improve the achievable sum-rate; otherwise, the sum-rate performance drops.
These results are expected since the goals of the sensing and communication tasks are not completely aligned, which makes the beamforming matrix that maximizes the communication rate differ substantially from the one that satisfies the sensing constraints.
In particular, the sensing-communication tradeoff provides practical guidelines on how to balance the sensing and communication performance in ISAC systems.
\section{Conclusion}
This paper investigated ISAC in V2I networks and proposed an unsupervised DL-based predictive beamforming scheme to further improve the ISAC performance of a V2I system.
First, we formulated a general predictive beamforming problem for ISAC to maximize the communication rate while guaranteeing the sensing accuracy. In particular, we derived the CRLB-based constraints to restrict the tracking performance of vehicles within a required accuracy.
Then a versatile learning-based predictive beamforming framework was developed, which consists of a penalty method-based problem transformation and a DL-based algorithm for solving the problem at hand.
As a realization of the developed predictive framework, an HCL-Net was designed to facilitate the ISAC beamforming design; it is model-free and requires only the historical estimated channels as the network input. In particular, to fully exploit the spatial and temporal features of the historical estimated channels, a convolutional LSTM structure was adopted in the HCL-Net to further improve the beamforming performance.
Finally, numerical results demonstrated that the proposed HCL-Net can achieve satisfactory performance in terms of both communication and sensing tasks, and the achievable sum-rate of the proposed predictive method approaches the upper bound obtained by a genie-aided scheme exploiting the perfect ICSI.
\bibliographystyle{ieeetr}
\setlength{\baselineskip}{10pt}